The real security challenge behind artificial intelligence

So much has already been said about how large language models (LLMs) such as ChatGPT, Claude, and Llama have transformed the way organizations grow their business. The Model Context Protocol (MCP), published by Anthropic, opens the door to even more innovation through seamless integration. We are only starting to understand the impact this transformation will have on technology and business.
As a whole, this technology has been adopted rapidly and widely. LLMs are used by over 90% of Fortune 500 companies, thousands of MCP servers have been published online for developers to use, and the MCP repository has already been forked over 4,000 times. Whether you like it or not, these tools are being connected and integrated into your organization’s environments.
These new tools bring with them novel risks to Non-Human Identity (NHI) security, access to data, and resource entitlements. Security professionals who take a “should I allow this?” approach should consider shifting to a “how do we secure this?” approach instead. So, how do you enable developers and non-developers to maximize the value LLMs bring without compromising security? First, we need to understand the challenge at hand.
What you know, and what you don’t know…
Leveraging GenAI solutions requires connecting them to your data. Whether it is to analyze trends in usage data or to automate and streamline the user experience, connectivity is key. And the number of ways to connect models to your environment is growing as well. For example, over MCP you can now easily connect an AI agent to a plethora of services your organization already uses. However, these agents don’t just analyze data; they can take action and execute automations across systems. This opens up powerful new possibilities, but it also significantly increases the potential blast radius of a bad actor or a misconfiguration.

This connectivity is facilitated by creating new identities or leveraging existing ones. In both cases, there is a risk of over-provisioning permissions or assigning roles that are poorly suited for the task, especially if those identities aren't properly monitored or secured. This problem is further compounded by how these integrations are introduced. Developers, eager to explore and implement the latest technologies, may hastily onboard AI tools without fully considering the scope of access being granted. In other cases, non-developers might connect AI services through OAuth2 using their personal accounts, creating unmonitored, unmanaged backdoors into sensitive environments.
As AI becomes more embedded in everyday workflows, security professionals must ensure that the convenience of integration does not come at the cost of visibility and control.
Keys to the kingdom
We’ve established that AI agents are being granted access to internal systems, from CRMs and cloud infrastructure to financial tools and code repositories. But it bears emphasizing that these workloads often operate with elevated privileges while there is minimal monitoring of what they’re doing with that access. And unlike traditional analytics tools, AI agents can take action: opening tickets, modifying configurations, updating records, or triggering workflows. This level of autonomy makes them powerful but also risky.
For example, within an MCP environment, an AI agent might monitor system usage patterns and, upon detecting performance degradation, proactively open a ticket in your ITSM platform, scale out infrastructure via your cloud provider’s API, and update an internal dashboard to reflect the change, all without human intervention. This kind of automation is powerful, but consider the permissions granted to make it possible: read access to production logs and metrics, write access to your ITSM platform, and the ability to modify production deployment configurations.
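One way to keep that autonomy in check is to gate agent-initiated actions behind an explicit allowlist instead of handing the agent the union of every permission it might ever need. The sketch below is a minimal, hypothetical illustration; the action names and scopes are assumptions, not a prescribed implementation:

```python
# Minimal sketch: an allowlist-based guardrail for agent-initiated actions.
# Action names and scopes are hypothetical, for illustration only.

ALLOWED_ACTIONS = {
    "read_metrics":       {"scope": "read",  "system": "observability"},
    "create_itsm_ticket": {"scope": "write", "system": "itsm"},
    # Deliberately absent: "modify_deployment" -- route that through human approval instead.
}

def authorize(action: str, requested_scope: str) -> bool:
    """Allow only actions that are explicitly allowlisted with a matching scope."""
    entry = ALLOWED_ACTIONS.get(action)
    return entry is not None and entry["scope"] == requested_scope

def execute_agent_action(action: str, scope: str, payload: dict) -> None:
    if not authorize(action, scope):
        # Log and drop rather than silently escalating.
        print(f"DENIED: agent requested {action} ({scope}), not in allowlist")
        return
    print(f"ALLOWED: {action} ({scope}) with payload {payload}")
    # ...dispatch to the real integration here...

# The ticket creation goes through; the deployment change does not.
execute_agent_action("create_itsm_ticket", "write", {"summary": "Performance degradation detected"})
execute_agent_action("modify_deployment", "write", {"replicas": 10})
```

The point is not this particular code but the pattern: the agent’s reach is defined by an explicit, reviewable list rather than by whatever its underlying identity happens to be allowed to do.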
Now consider that, in the rush to implement and utilize these tools, many will simply give the agent free rein by providing their own credentials or by provisioning access through overly permissive identities. Navigating modern IAM requirements is increasingly difficult, and developers would rather just get the app running. What’s more concerning is that these agents often operate in the background, in bulk, and outside of standard security controls, their activity under-logged and their behavior essentially unmonitored.
Giving AI agents the keys to the kingdom without visibility or guardrails is a recipe for trouble. Organizations should treat these identities as high-risk actors, applying least privilege, robust logging, and constant oversight to ensure safety and control. A task easier said than done.
Fertile soil for both innovator and attacker
The hype surrounding GenAI has created a gold rush effect: everyone wants in, fast. The good news is that modern tools make it easier than ever to integrate AI capabilities. Even non-developers can wire up an AI agent to internal systems through drag-and-drop builders or simple OAuth flows. But that accessibility is a double-edged sword.
While it empowers innovation, it also invites risk. These new builders, often unfamiliar with security principles, may unknowingly expose sensitive systems, over-permission identities, or introduce unmonitored integrations. Even seasoned developers, driven by tight deadlines or the pressure to deliver "something AI," may skip important security steps like privilege scoping, audit logging, or code reviews.
A glaring example appears when you look at how Claude Desktop is configured to work with MCP servers. In most cases, the simplest way to pass an API key is to store it directly in the claude_desktop_config.json configuration file. By quickly following online examples without considering the security implications, you end up with keys stored in cleartext in the config file and later uploaded to GitHub, all in the name of quick development.
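For illustration, here is a minimal sketch of what such a configuration commonly looks like; the server entry and token value below are placeholders for this example:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_hardcodedCleartextToken"
      }
    }
  }
}
```

Anyone who can read that file, or the repository it later lands in, now holds a working credential.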

A similar situation occurs with coding agents such as Cursor and Roo Code, which use a project-level MCP server configuration file that is even more likely to be accidentally pushed to a repository.
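One lightweight mitigation is to scan these configuration files for likely secrets before they ever reach a commit. The sketch below is a minimal, hypothetical pre-commit-style check; the file paths and token patterns are assumptions you would adapt to your own tooling:

```python
#!/usr/bin/env python3
"""Minimal sketch: refuse to commit when MCP config files appear to contain secrets.

Paths and regexes below are illustrative assumptions, not an exhaustive list.
"""
import re
import sys
from pathlib import Path

# Project-level and desktop MCP config locations to check (adjust to your tooling).
CONFIG_PATHS = [".cursor/mcp.json", ".roo/mcp.json", "claude_desktop_config.json"]

# Crude patterns for common token formats (GitHub PATs, generic API keys, etc.).
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{20,}"),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),
    re.compile(r'(?i)"[^"]*(api[_-]?key|token|secret)[^"]*"\s*:\s*"[^"]{12,}"'),
]

def scan(path: Path) -> list[str]:
    """Return human-readable findings for any suspicious values in the file."""
    text = path.read_text(encoding="utf-8", errors="ignore")
    return [
        f"{path}: possible secret ({match.group(0)[:12]}...)"
        for pattern in SECRET_PATTERNS
        for match in pattern.finditer(text)
    ]

def main() -> int:
    findings = [f for rel in CONFIG_PATHS if Path(rel).exists() for f in scan(Path(rel))]
    if findings:
        print("Refusing to commit; possible cleartext secrets found:")
        print("\n".join(findings))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Adding these config paths to .gitignore and pulling secrets from a vault or environment variables closes the same gap at the source.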


And lurking within this AI-agent and MCP-fueled frenzy are bad actors who know that the noise around GenAI makes it easy to hide malicious tools or abuse legitimate ones. The rapid adoption and lack of consistent oversight create an environment where security blind spots are not only possible, they’re expected.
Set yourself up to win
What does it mean to set yourself and your organization up to win during this critical transformation period? The goal isn’t to slow innovation; it’s to make sure it happens safely, with clear visibility and control. Winning in this new landscape means developing the capabilities and expertise to rapidly assess and manage the risk of AI agents and third-party tools connected to your environment.
To do that effectively, your organization needs to:
- Identify the vendor or application: Understand what’s being introduced into your environment and whether it’s coming from a reputable source. Not all AI tools are created equal—and some may be built more for speed than for safety.
- Establish ownership and business justification: Every identity should have a clearly defined owner who can justify its presence. This helps tie usage back to business value and ensures someone is accountable. It also helps you keep track of, and decommission, all the failed experiments that every success entails.
- Audit granted vs. used permissions: Don’t just look at what access has been requested; monitor what the identities are actually using. Over-scoped permissions are common and dangerous, especially with additions such as MCP servers, which give AI agents the ability to take action dynamically and in bulk (a minimal sketch of such an audit follows this list).
- Empower safe self-service: Innovation doesn’t have to be blocked. Provide teams with secure, guardrail-equipped workflows for provisioning identities for AI tools, agents, LLMs, MCP Servers, etc. Become proficient in this new digital language so that security becomes an enabler rather than a bottleneck.
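For the granted-versus-used audit mentioned above, the core idea boils down to a set comparison. The permission names and data below are hypothetical; in practice, the granted set would come from your IAM system or identity provider and the used set from audit logs:

```python
# Minimal sketch: compare what an AI agent identity is granted vs. what it actually uses.
# Permission names are hypothetical; pull real data from your IAM and audit logs.

granted = {
    "logs:read", "metrics:read",
    "itsm:ticket.create",
    "deployments:modify",   # granted, but is it ever exercised?
    "billing:read",
}

# Actions actually observed in audit logs over the review window.
used = {"logs:read", "metrics:read", "itsm:ticket.create"}

unused = granted - used        # candidates for revocation (over-scoping)
unauthorized = used - granted  # should be empty; anything here is a policy gap

print("Unused grants to review:", sorted(unused))
print("Used but never granted (investigate):", sorted(unauthorized))
```

Run periodically per identity, this simple comparison surfaces over-scoping long before an attacker does.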
By embedding these capabilities, security teams can confidently support AI adoption while minimizing the risk—and complexity—that comes with it.
Wrapping Up: Secure your AI NHIs
GenAI is transforming the way organizations build, operate, and interact with their systems, and it’s not slowing down. With that transformation comes a new set of risks: AI agents operating with broad permissions, introduced by both technical and non-technical staff, often without proper oversight.
But this isn’t a call to resist innovation; it’s a call to guide it. Security practitioners have a critical role to play in enabling safe adoption by building visibility, control, and accountability into every stage of the identity lifecycle.
What should be your key takeaways? Here’s our suggested checklist to secure GenAI in your environment:
Inventory
- Identify all AI agents, LLM tools, and third-party integrations connected to your systems
- Determine their source and assess vendor reputation
Ownership
- Assign a business or technical owner to every integration
- Require documented justification for its use
Permission Analysis
- Review what permissions each integration is granted
- Compare against what is actually used to identify over-scoping
Behavior Monitoring
- Treat AI agents as privileged identities
- Implement anomaly detection and alerting for suspicious behavior
Identity Provisioning
- Enable secure, self-service access to AI tools with predefined security guardrails
- Ensure that access is granted securely and that it is auditable and attested
Education & Policy
- Train teams (technical and non-technical) on the security implications of AI integration
- Define and enforce policies around who can introduce new tools and how they must be vetted