Safely drive employee productivity and innovation with public GenAI apps and LLMs
Major public LLM vendors capture user data for model training by default.
On average, every 100 business-user prompts contain 32 instances of Personally Identifiable Information (PII).
34% of AI-generated content covers financial, legal, or software-code topics, where hallucinations may go undetected.
Source: NROC Security user base
Monitor how the organization uses GenAI, assess risks, and prove compliance. Develop AI policies based on facts.
Dashboards and insights on apps, usage, data in prompts, created content, and security friction
Metrics on classified data in prompts and the riskiest topics in created content
Logs of every prompt, response, and policy action, referencing users’ corporate identities
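To make the logging idea concrete, here is a sketch of what a per-prompt audit record tied to a corporate identity might look like. The field names and schema are hypothetical illustrations, not NROC Security’s actual log format:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical audit-log record for one prompt/response exchange.
# Every field name here is illustrative, not the product's real schema.
@dataclass
class PromptLogRecord:
    corporate_user_id: str   # identity from the SSO provider, not a private ID
    app: str                 # e.g. "chatgpt"
    prompt_topic: str        # classified topic of the prompt
    policy_action: str       # "allowed", "warned", or "blocked"
    pii_found: bool          # whether the PII guardrail fired
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: a prompt that triggered a warning but was allowed through.
record = PromptLogRecord(
    corporate_user_id="alice@example.com",
    app="chatgpt",
    prompt_topic="software_code",
    policy_action="warned",
    pii_found=True,
)
```

A record in this shape can be serialized with `asdict(record)` and shipped to whatever SIEM or log store the organization already uses.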
Allow access to the right AI for the task at hand
Single Sign-On (SSO) with corporate IDs, even when users sign in to consumer apps with private IDs
Access controlled using customizable policies that consider the app’s risk profile
Tailored departmental policies based on Active Directory (AD) groups
Enforce policies on both prompts and responses using out-of-the-box guardrails
Guardrails that prevent PII, IP, and data leakage, as well as prompt injections and jailbreaks, protecting users from accidental leaks
Ability to define use-case boundaries for each app, e.g., whether software code creation is allowed
Real-time cues to support safe GenAI usage, while explicitly asking users to evaluate and accept the risk
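To illustrate the guardrail concept, here is a minimal sketch of prompt-side PII detection and redaction. The regex patterns and function names are assumptions for illustration only; a production guardrail would use far more robust detection (NER models, checksums, context) than a few regular expressions:

```python
import re

# Illustrative patterns for a few common PII types (assumed, simplified).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> dict:
    """Count PII matches per type before the prompt leaves the device."""
    findings = {}
    for pii_type, pattern in PII_PATTERNS.items():
        matches = pattern.findall(prompt)
        if matches:
            findings[pii_type] = len(matches)
    return findings

def redact_prompt(prompt: str) -> str:
    """Replace detected PII with type placeholders, e.g. <EMAIL>."""
    redacted = prompt
    for pii_type, pattern in PII_PATTERNS.items():
        redacted = pattern.sub(f"<{pii_type.upper()}>", redacted)
    return redacted
```

A guardrail of this shape can either block the prompt outright, warn the user in real time, or forward the redacted version to the AI app, matching the policy actions described above.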
Right data for the right AI, with the ability to block some data from any AI
Categorization of prompts by topic and of files by content, without relying on pre-made labels
Ability to allow the right categories of files into certain apps, while blocking them from other AIs
Custom document categories defined by analyzing a sample set of files using a desktop app
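As a rough illustration of content-based categorization, the sketch below scores text against per-topic keyword sets. The topic names and keyword lists are invented for this example; the actual product presumably uses trained classifiers rather than keyword matching:

```python
# Hypothetical topic keyword sets; a real classifier would be model-based.
TOPIC_KEYWORDS = {
    "software_code": {"def", "class", "import", "function", "return", "bug"},
    "financial": {"revenue", "invoice", "forecast", "budget", "earnings"},
    "legal": {"contract", "liability", "clause", "nda", "compliance"},
}

def categorize(text: str) -> str:
    """Assign the topic whose keywords appear most often in the text."""
    words = text.lower().split()
    scores = {
        topic: sum(word in keywords for word in words)
        for topic, keywords in TOPIC_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to "uncategorized" when no topic keywords appear at all.
    return best if scores[best] > 0 else "uncategorized"
```

The point of categorizing by content rather than by pre-made labels is that files and prompts can be routed or blocked even when nobody has manually classified them.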
Easy to deploy, certified for security, and compliant with workplace privacy regulations
Several options to direct AI-related web traffic to the proxy:
Configuration can be pushed to workstations using common Device Management solutions
Works with common SSO providers to authenticate end users on their corporate IDs
SOC 2 Type 1 certified, with Type 2 in progress. GDPR compliant. ISO 27001 on the roadmap
Configuration and admin role options to facilitate deployments subject to workplace privacy regulations
Best Practices for Securing Public GenAI and LLM Apps in the Enterprise
It is encouraging that the big GenAI vendors, such as OpenAI and Microsoft, have been providing more transparency into how they treat the content they collect from end users. It is only reasonable that you know where your prompt may end up: will it be used to train the model, or could it even surface in an AI-generated response to somebody else?