Kanopy Blog | AI Agent Security & Shadow AI Insights

5 Ways to Secure Shadow AI Inside Your Approved Tools

Written by:
Amichai Shulman
13 May 2026
Shadow AI

Introduction: The New Shadow AI Problem

Shadow AI is not only happening in rogue tools employees download without permission. Increasingly, it is happening inside the platforms your organization has already approved.

A business user builds an AI agent in a sanctioned platform. They connect it to a database. They add an email action. They configure permissions. They publish the workflow.

From the outside, everything looks approved. But inside that approved tool, a new risk may have been created: an agentic workflow with access to sensitive data, business logic, and the ability to take action without going through the normal security review process.

That is the new Shadow AI challenge.

Security teams can no longer stop at asking, “Which AI tools are employees using?” They need to ask: What are employees building inside the AI tools we already approved?

This shift is already underway: business users are creating agentic workflows inside sanctioned platforms, often without security visibility into their permissions, data access, or runtime behavior.

1. Discover Every AI Agent and Workflow Built Inside Approved Platforms

You cannot secure what you cannot see.

Most organizations have some level of visibility into which AI platforms have been approved. But that does not mean they know what has been built inside those platforms. A sanctioned platform can contain hundreds or thousands of business-created agents, automations, prompt flows, connectors, and workflows. Some may be harmless. Others may have access to sensitive systems or the ability to trigger risky actions.

Security teams should build an inventory of:

  • AI agents
  • Automated workflows
  • Prompt-based apps
  • AI-powered business processes
  • Connected data sources
  • External actions such as email, ticket creation, file sharing, or database updates
  • Public-facing versus internal workflows
  • Owners and creators of each workflow

Start with your approved AI and automation platforms. Identify who is building inside them, what they are building, and which workflows have access to business data.

The goal is not to shut everything down. The goal is to replace hidden risk with visibility.
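The inventory pass above can be sketched in code. This is a minimal illustration, not any vendor's actual API: the `WorkflowRecord` fields and the export format are assumptions standing in for whatever your approved platform exposes.

```python
from dataclasses import dataclass, field

# Hypothetical record for each business-built workflow discovered inside
# an approved platform; field names are illustrative, not a real vendor API.
@dataclass
class WorkflowRecord:
    name: str
    owner: str
    kind: str                                   # "agent", "automation", "prompt_app", ...
    data_sources: list = field(default_factory=list)
    external_actions: list = field(default_factory=list)
    public_facing: bool = False

def build_inventory(workflows):
    """Group discovered workflows by owner and flag those with data access."""
    inventory = {"by_owner": {}, "with_data_access": []}
    for wf in workflows:
        inventory["by_owner"].setdefault(wf.owner, []).append(wf.name)
        if wf.data_sources:
            inventory["with_data_access"].append(wf.name)
    return inventory

# Example: two workflows pulled from a sanctioned platform's export.
records = [
    WorkflowRecord("product-faq-bot", "alice", "agent",
                   data_sources=["product_docs"], public_facing=True),
    WorkflowRecord("ticket-summarizer", "bob", "automation",
                   data_sources=["helpdesk_db"], external_actions=["email"]),
]
print(build_inventory(records))
```

Even a simple grouping like this answers the first questions: who is building, what they built, and which workflows touch business data.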

2. Map What Data Each AI Workflow Can Access

The risk is not just the AI tool itself. The real risk comes from the data and permissions connected to it.

An agent that answers questions from public product documentation is very different from an agent connected to employee records, customer data, financial systems, CRM exports, internal tickets, or production databases.

A workflow may have been created for one narrow purpose, but if it has broad permissions, it may be able to access much more than intended.

For every AI workflow, security teams should understand:

  • Which systems it connects to
  • Which databases, tables, files, or records it can access
  • Whether access is scoped or overly broad
  • Whether it can retrieve sensitive information
  • Whether it can combine data from multiple systems
  • Whether it can expose data externally

If an agent only needs access to product information, it should not have access to employee data, financial records, or full database environments.
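Comparing what a workflow can reach against what its stated purpose needs is the core of this mapping step. A minimal sketch, with illustrative data-source names:

```python
# Compare granted access against the access a workflow's purpose requires.
def excess_access(granted, needed):
    """Return the data sources a workflow can reach but does not need."""
    return sorted(set(granted) - set(needed))

# A product-FAQ agent that only needs product documentation...
granted = ["product_docs", "employee_records", "finance_exports"]
needed = ["product_docs"]

over = excess_access(granted, needed)
print(over)  # ['employee_records', 'finance_exports']
```

Anything in the excess list is scope that should be removed or justified before the workflow stays in production.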

3. Reduce Over-Privileged Access Before It Becomes a Breach

Many Shadow AI risks are not caused by malicious employees. They are caused by well-intentioned users making small configuration mistakes.

A user may give an agent access to an entire database when it only needs one table. They may leave table access unrestricted. They may add an email action without realizing the agent can now send sensitive data outside the organization.

That is how a productivity workflow becomes a security incident. For example, an agent designed to answer product questions can become dangerous if it is over-permissioned and allowed to access unrelated database tables, such as employee data.

Prioritize workflows that have:

  • Broad database access
  • Access to sensitive business systems
  • Public-facing interfaces
  • Email or external communication capabilities
  • Ability to export, update, delete, or move data
  • Permissions that exceed the workflow’s intended purpose

Each agent should only access the data it needs to perform its intended function. Security teams should review high-risk workflows first, especially those that are public-facing, connected to sensitive data, or able to send information externally.
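The prioritization criteria above can be turned into a simple scoring pass so the review queue starts with the riskiest workflows. The flag names and weights here are assumptions for illustration, not an established standard:

```python
# Illustrative weights for the prioritization criteria listed above.
RISK_WEIGHTS = {
    "broad_db_access": 3,
    "sensitive_systems": 3,
    "excess_permissions": 3,
    "public_facing": 2,
    "external_comms": 2,
    "can_modify_data": 2,
}

def risk_score(flags):
    """Sum the weights of every risk flag set on a workflow."""
    return sum(RISK_WEIGHTS[f] for f in flags if f in RISK_WEIGHTS)

def prioritize(workflows):
    """Order workflows for review, highest risk first."""
    return sorted(workflows, key=lambda wf: risk_score(wf["flags"]), reverse=True)

queue = prioritize([
    {"name": "faq-bot", "flags": ["public_facing"]},
    {"name": "hr-agent", "flags": ["broad_db_access", "sensitive_systems",
                                   "external_comms"]},
])
print([wf["name"] for wf in queue])  # ['hr-agent', 'faq-bot']
```

The exact weights matter less than having a consistent, explainable ordering for the review backlog.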

4. Set Guardrails That Business Users Cannot Accidentally Bypass

Business users are moving fast. They are building workflows to solve real problems. But they should not be expected to make perfect security decisions every time they configure an agent.

That is where guardrails come in. Guardrails give security teams a way to enforce policies across approved platforms, even when workflows are created by non-developers.

Examples of useful guardrails:

  • No corporate data can be sent to personal email addresses.
  • Public-facing agents cannot access internal employee records.
  • AI workflows cannot connect to sensitive databases without approval.
  • Agents cannot query entire databases by default.
  • Sensitive data cannot be exported to unmanaged destinations.
  • High-risk actions require review or approval.
  • Workflows with external sharing must be monitored more closely.

Create global policies that apply across business-built AI workflows. The goal is to make the safe path the default path. Even if a user misconfigures a workflow, guardrails should prevent the most dangerous outcomes.
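A couple of the guardrails above can be sketched as policy-as-code checks evaluated before an agent action executes. The rule names, the action dictionary format, and the domain lists are all assumptions for illustration:

```python
# Minimal policy-as-code sketch for two of the guardrails listed above.
PERSONAL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}   # illustrative
SENSITIVE_SOURCES = {"employee_records", "finance_db"}         # illustrative

def check_action(action):
    """Return a list of guardrail violations for a proposed agent action."""
    violations = []
    # Guardrail: no corporate data to personal email addresses.
    if action.get("type") == "send_email":
        domain = action["to"].rsplit("@", 1)[-1].lower()
        if domain in PERSONAL_DOMAINS and action.get("contains_corporate_data"):
            violations.append("corporate data to personal email")
    # Guardrail: public-facing agents cannot read sensitive internal sources.
    if action.get("public_facing") and set(action.get("reads", [])) & SENSITIVE_SOURCES:
        violations.append("public-facing agent reading sensitive data")
    return violations

blocked = check_action({
    "type": "send_email",
    "to": "user@gmail.com",
    "contains_corporate_data": True,
})
print(blocked)  # ['corporate data to personal email']
```

Because the policy lives in one place rather than in each workflow's configuration, a user's misconfiguration cannot quietly bypass it.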

5. Monitor Runtime Behavior, Not Just Initial Configuration

Agentic workflows are dynamic. They do not always behave like traditional applications. Their actions can depend on user prompts, connected data, permissions, plugins, and external triggers.

A workflow may look safe when it is created, but behave differently in the real world. That is why runtime protection is critical. Security needs to understand what agents actually do, not just how they were configured on day one.

Security teams should monitor:

  • Which data sources an agent accesses
  • Whether the agent stays within its intended purpose
  • Which prompts trigger risky behavior
  • Whether the agent attempts to access sensitive or unrelated systems
  • Whether it sends data externally
  • Whether it performs unusual actions
  • Whether its behavior changes over time

Establish a behavioral profile for each AI workflow. For example, if an agent is built to answer questions about public product information, that should be its baseline. If someone prompts it to access employee records, financial data, or internal credentials, that should be treated as a deviation and blocked in real time.
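The baseline idea above can be sketched as a runtime check: each workflow has an allowed set of data sources, and any access outside that set is flagged as a deviation. The baseline map and names are illustrative assumptions:

```python
# Behavioral baseline per workflow: the data sources it is expected to touch.
BASELINES = {
    "product-faq-bot": {"product_docs"},   # illustrative baseline
}

def check_runtime_access(workflow, source):
    """Return (allowed, reason) for a data access observed at runtime."""
    baseline = BASELINES.get(workflow, set())
    if source in baseline:
        return True, "within baseline"
    return False, f"deviation: {workflow} accessed {source}"

ok, _ = check_runtime_access("product-faq-bot", "product_docs")
allowed, reason = check_runtime_access("product-faq-bot", "employee_records")
print(ok, allowed, reason)
```

In a real deployment the deviation would feed a blocking or alerting decision; the point is that the check runs against observed behavior, not the day-one configuration.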

This is the shift from static scanning to runtime protection: security teams need to detect and prevent risky behavior as workflows operate, not only review them before deployment.

Bottom Line: Approved Tools Still Need Security Oversight

Approving an AI platform is not the same as securing everything built inside it. Your organization may have approved the tool. But did security approve the agent? Did anyone review the data access? Were permissions scoped correctly?

That is the new Shadow AI reality. The next phase of AI security is not only about blocking unauthorized tools. It is about gaining visibility and control over the agents, automations, and workflows being created inside sanctioned platforms.

Business users should be able to innovate. But security teams need the visibility, guardrails, and runtime protection to make sure that innovation does not accidentally open the vault.

Shadow AI is not always outside your approved stack. Sometimes, it is being built inside it.
