June 10, 2024

Governance and security framework for Gen AI apps

Security framework required for safe adoption of Gen AI applications

Generative AI applications introduce new cybersecurity risks. Traditional cybersecurity solutions poorly address Gen AI-specific risks such as conversational user interfaces, in which the application can perform a wide variety of tasks and places no limits on what type of information the user can enter. Many organizations have issued policies and training for end users, but the lack of technical security controls is holding back many rollouts of Generative AI technologies.

Based on over 50 interviews, NROC Security has conceptualized a security framework for the safe adoption of Gen AI applications. It consists of four discrete policy elements that govern access, data, use case, and the responsible use of AI. The table below describes the framework, stating the essential policy controls (WHAT) and the implementation challenges (HOW).

| AI policy element | Controls (WHAT) | Implementation questions (HOW) |
| --- | --- | --- |
| Access | Which users are allowed to access Gen AI apps? | How to efficiently administer user access rights? |
| Access | What contextual controls need to be satisfied to access a Gen AI app (e.g., geolocation, use of corporate ID)? | How to reliably assess the user context and quickly allow/deny an attempted access? |
| Access | Which Gen AI apps are allowed to be accessed? | How to rate the risk of various Gen AI apps? |
| Data security | What information should not flow into Gen AI apps? | How to recognize personally identifiable information and other discrete secrets? How to recognize sensitive corporate information reliably and cost-efficiently? |
| Use case 'anti-drift' | What content is allowed/disallowed to be created with Gen AI apps? | How to recognize content types (e.g., text, multimedia)? How to recognize the category of the generated content (e.g., legal, financial, medical, SW)? |
| Responsible AI | What Gen AI app and input data was used to create a specific piece of content? | How to log and tag outputs and provide a great user experience to retrieve explanations? |
| Responsible AI | What created content is harmful and needs to be removed? | How to recognize harmful content (hate, violence)? |
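The access element above can be illustrated with a minimal sketch of a contextual access check. All names here (`AccessRequest`, the group, country, and app-rating policies) are hypothetical examples, not any particular product's API:

```python
# Minimal sketch of a contextual access check for a Gen AI app.
# Group names, countries, and risk ratings are illustrative only.
from dataclasses import dataclass

ALLOWED_GROUPS = {"genai-pilot", "engineering"}   # e.g., Active Directory groups
ALLOWED_COUNTRIES = {"US", "DE", "FI"}            # example geolocation policy
APP_RISK_RATINGS = {"chatgpt.com": "medium", "unknown-ai.example": "high"}

@dataclass
class AccessRequest:
    user_groups: set
    country: str
    corporate_id: bool   # signed in with a corporate identity?
    app_domain: str

def evaluate(req: AccessRequest) -> tuple[bool, str]:
    """Return (allowed, reason) for an attempted Gen AI app access."""
    if not req.user_groups & ALLOWED_GROUPS:
        return False, "user not in an approved group"
    if req.country not in ALLOWED_COUNTRIES:
        return False, "geolocation outside policy"
    if not req.corporate_id:
        return False, "personal account, corporate ID required"
    if APP_RISK_RATINGS.get(req.app_domain, "high") == "high":
        return False, "app risk rated too high"   # unknown apps default to high
    return True, "allowed"
```

The key design point is that the decision combines identity (groups), context (geolocation, corporate ID), and an app risk rating in one fast allow/deny path.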

As in most cybersecurity responses to new technologies entering the enterprise, the framework is a mix of old and new, and finding the most efficient implementation is key to meeting the business requirements. Active Directory groups and attributes should be the basis for access, but context is specific to Gen AI apps. Conversational user interfaces challenge data security: a single prompt may contain a large copy/paste or an ingestion of unstructured information, where keyword matching alone might trigger many alerts.
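As a toy illustration of the data-security challenge, a scanner can pattern-match a prompt for PII and discrete secrets before it reaches a Gen AI app. The patterns below are deliberately simplistic; production tooling needs far broader coverage (named-entity recognition, document fingerprinting, context-aware scoring) precisely because simple keyword and regex matching generates many false alerts on large unstructured pastes:

```python
import re

# Illustrative detection patterns only -- not production-grade coverage.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security number
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the kinds of sensitive data detected in a prompt."""
    return [kind for kind, pat in PATTERNS.items() if pat.search(text)]
```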

Use case anti-drift and responsible AI are entirely new policy elements. Yesterday's SaaS did not need controls over what it was used for: the application was built for a certain use case and rigidly stored and processed information to deliver its value. For example, regulating who is allowed to use which tool for creating software code is a new requirement. Likewise, not all apps are qualified to give medical advice, yet they will offer some when asked.
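One naive way to sketch use-case anti-drift is a keyword heuristic that tags generated content with a category; a real deployment would use a trained classifier, but the shape of the control is the same. The category names and keyword lists here are invented for illustration:

```python
# Naive keyword heuristic for tagging generated content with a use-case
# category. Real deployments would use a trained classifier; keywords
# below are illustrative only.
CATEGORY_KEYWORDS = {
    "software": ["def ", "import ", "function", "class ", "#include"],
    "legal": ["hereby", "indemnify", "pursuant to", "liability"],
    "medical": ["diagnosis", "dosage", "symptom", "prescription"],
}

def categorize(output_text: str) -> str:
    """Return the best-matching category, or 'general' if nothing matches."""
    text = output_text.lower()
    scores = {cat: sum(kw in text for kw in kws)
              for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"
```

A policy engine could then allow or block the output depending on whether that category is approved for the app and user in question.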

Responsible AI is a fast-evolving space. Keeping track of what content was created by AI is already good 'bedside manner', increasingly a compliance requirement, and on some occasions disclosure of the AI's involvement to a customer or employee is mandated. The same goes for explainability, i.e., knowing who used what tool with what data to create a piece of content.
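Explainability of this kind comes down to provenance logging: recording who used which tool with what input to create a piece of content, and tagging the content so the record can be retrieved later. A minimal sketch, with an invented (non-standard) field schema:

```python
# Sketch of provenance logging for AI-generated content.
# Field names are illustrative, not a standard schema.
import hashlib
import json
import time

def tag_output(user: str, app: str, prompt: str, output: str) -> dict:
    """Build an audit record linking user, tool, input, and output."""
    record = {
        "user": user,
        "app": app,
        "timestamp": time.time(),
        # store hashes rather than raw text, so the log itself is low-risk
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "ai_generated": True,
    }
    # a short content ID lets a document carry a tag that resolves to this record
    record["content_id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()[:16]
    return record
```

Embedding the `content_id` alongside the published content is what enables both disclosure ("this was AI-generated") and later retrieval of the full explanation.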

The makers of the apps and the providers of the underlying models will each do their part, but governance and security of the usage, data, and results remain the responsibility of the enterprise. NROC was founded to enable enterprises to govern and secure the adoption of generative AI technologies. We are all in the early innings of this, and we look forward to innovating with our customers. For more information about NROC Security, please see our website at www.nrocsecurity.com
