April 11, 2025

CISO Guide to Securing Employee Use of GenAI

Best Practices for Securing Public GenAI Apps and LLM Apps in the Enterprise

Public GenAI apps, like ChatGPT and Copilot, are now widely used at work. This use is not without risk. Providers have created a maze of data protection promises, so the wrong AI, subscription, login, setting, or geography may turn an end user's prompt into the AI maker's data to use. At the same time, NROC Security has observed that every 100 prompts contain, on average, 32 instances of PII.

Fundamentally, GenAI apps represent a new breed of software: use cases are invented on the fly, user inputs are unpredictable, any data can be used, and there is no guarantee of output accuracy. This creates several issues for traditional security architectures: lack of visibility, inability to control user-driven data exposure, inability to monitor potentially inaccurate outputs, and gaps in identity and access management. End users need guidance, not friction. The security team needs effective controls, and the compliance team needs evidence of policy compliance.

This guide, based on over 130 practitioner interviews, defines the issues and suggests best practices for how CISO teams can go beyond the traditional playbook. It concludes by showing how well-executed AI security can build trust in AI usage and provide insights for shaping the AI agenda. Security can be an accelerator for organizational learning and innovation.

Get insights on boosting GenAI app adoption safely

Subscribe to the NROC Security blog

Case study
Governance
Prompt risks
Visibility

Case study: Securing GenAI to unleash personal productivity and innovation

A city tours and cruise company needed help gaining visibility and protection to unleash business user innovation.

Managed Security Service Providers
Governance
Guardrails
Visibility
Prompt risks

Staying Relevant with AI: Why GenAI Security is Becoming a Core MSSP Capability

MSSPs are under pressure to deliver AI relevance as generative AI adoption accelerates faster than traditional security controls can keep pace. Enterprises are embracing tools like ChatGPT, Copilot, and Gemini across departments, often without oversight — creating fresh risks, compliance obligations, and uncertainty. For MSSPs, this represents both a challenge and an opportunity: a chance to provide clarity, guardrails, and governance that make GenAI security a top customer priority.

Guardrails
Prompt risks
User behavior risks

Google indexes shared ChatGPT conversations

Shared ChatGPT chats now appear in Google search results.

Governance
Productivity
User behavior risks
Visibility
CISO

Want to get GenAI right? Start with how your people use it

Your colleagues are using GenAI right now—but probably not in the way your IT team intended. From data leaks to app overload, organizations are learning that enabling GenAI isn’t just about buying a license—it’s about rethinking policy, trust, and productivity. Here’s what we’ve learned.

Safely allow more GenAI at work and drive continuous learning and change