Governance and guardrails for GenAI

Safely drive employee productivity and innovation with public GenAI apps and LLMs

All major public LLM vendors capture user data for model training.

Every 100 business user prompts contain, on average, 32 instances of Personally Identifiable Information (PII).

34% of AI-generated content covers financial, legal, or software code topics, where hallucinations may go undetected.

Source: NROC Security user base

Solution overview

Why NROC Security

Visibility & insight

Monitor how the organization uses GenAI, assess risks, and prove compliance. Develop AI policies with facts.

Real-time monitoring

Dashboard and insights on apps, usage, data in prompts, created content, and security friction

Facts on risk exposure

Metrics on classified data in prompts and the riskiest topics in created content

Compliance record

Logs of every prompt, response, and policy action, referencing the users’ corporate identities

Access governance to GenAI apps

Allow access to the right AI for the task at hand

Authenticated GenAI usage

Single Sign-On (SSO) with corporate IDs, even when users sign in to consumer apps with private IDs

Gated access to GenAI apps

Access controlled using customizable policies that consider the app’s risk profile

User group-based policies

Tailored departmental policies based on Active Directory (AD) groups

Prompt & response guards

Enforce policies on both prompts and responses using out-of-the-box guardrails

Prompt content guardrails

Guardrails that prevent PII, IP, and data leakage as well as prompt injections and jailbreaks, protecting users from accidental data leaks

Response content guardrails

Ability to define use-case boundaries for each app, e.g. whether software code creation is allowed

User guidance and accountability

Real-time cues to support safe GenAI usage, while explicitly asking users to evaluate and accept risks

Data flow guards

Right data for the right AI, with the ability to block some data from any AI

Proprietary AI-based categorization

Categorization of prompts based on topic and files based on content, without relying on pre-made labels

Data flow controls for attachments

Ability to allow the right categories of files into certain apps while blocking them from other AIs

Offline training zone

Custom document categories defined by analyzing a sample set of files using a desktop app

How it works

Unique cloud-based proxy architecture

Easy to deploy, certified for security, and compliant with workplace privacy regulations

Easy to deploy

Redirection of AI traffic only

Several options to direct AI-related web traffic to the proxy:

  • Proxy auto-configuration (PAC) in workstations
  • Proxy chaining from an existing SWG/SASE solution
  • Rules in a DNS proxy
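The PAC option above can be sketched as a minimal auto-configuration script that sends only GenAI-related traffic to the proxy and lets everything else go direct. The domain list and proxy hostname below are illustrative placeholders, not actual NROC configuration:

```javascript
// Minimal PAC sketch: route traffic for known GenAI app domains through a
// proxy, send all other traffic direct. Domains and the proxy address are
// hypothetical examples only.
function FindProxyForURL(url, host) {
  // Illustrative list of GenAI app domains to redirect
  var aiDomains = ["chatgpt.com", "openai.com", "claude.ai", "gemini.google.com"];
  for (var i = 0; i < aiDomains.length; i++) {
    var d = aiDomains[i];
    // Match the domain itself and any of its subdomains
    if (host === d || host.substring(host.length - d.length - 1) === "." + d) {
      return "PROXY proxy.example.com:8080"; // placeholder proxy address
    }
  }
  return "DIRECT"; // everything else bypasses the proxy
}
```

Because the browser evaluates the PAC function per request, only AI-related web traffic is redirected; the rest of the organization's traffic is untouched, which keeps the deployment footprint small.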

No endpoint agents or plugins to install

Configuration can be pushed to workstations using common Device Management solutions

SSO-based user authentication

Works with common SSO providers to authenticate end users on their corporate IDs

Okta
Microsoft Entra ID

Compliant with requirements and regulations

Security and compliance

Certified for SOC 2 Type 1, with Type 2 in progress. GDPR compliant. ISO 27001 on the roadmap.


Workplace privacy

Configuration and admin role options to facilitate deployments subject to workplace privacy regulations


CISO Guide to Securing Employee Use of GenAI

Best Practices for Securing Public GenAI Apps and LLM Apps in the Enterprise


Tricks and treats - privacy and data protection terms of popular GenAI services

It is encouraging that the big GenAI vendors, like OpenAI, Microsoft, and others, have been providing more transparency into how they treat the content they collect from their end users. It’s only reasonable that you know where your prompt may end up: will it be used to train the model, or could it even show up in an AI-generated response to somebody else?


NROC Security releases support for Grok

Grok 3 by xAI was launched on 17 February 2025 and is now supported by NROC Security


NROC Security Becomes First GenAI Security Vendor to Support DeepSeek AI

NROC Security announces support for DeepSeek AI, becoming the first security vendor to support it for GenAI apps at work.

Safely allow more GenAI at work and drive continuous learning and change