The AI Agents Most Companies Forget to Secure

Key insights
- Traditional identity systems protect users up to the first AI agent but leave everything after that unmonitored, creating a false sense of security
- Co-pilot agents use the user's identity by default, so nobody can tell whether a human or an AI performed an action
- Checking permissions at the moment of data access is a fundamental shift from how security has worked for decades
- The hardest barrier to agentic security may not be the technology itself but getting security, IT, and development teams to actually work together
This is an AI-generated summary. The source video includes demos, visuals, and context not covered here.
In Brief
IBM security leaders Bob Kalka and Tyler Lynch break down why deploying AI agents opens up serious identity and access management gaps that most organizations are not prepared for. They outline four security holes that appear the moment agents start calling backend resources, five imperatives for deploying agents safely, and three technologies needed to tie it all together. The core message: the biggest challenge is not technical but organizational.
What is agentic runtime security?
Agentic runtime security means protecting AI agents while they are actually running, not just when they are first set up. Traditional identity and access management (IAM) controls who can log in and what apps they can open. But AI agents go beyond that first checkpoint. They call other agents, access databases, and talk to backend systems, all at machine speed. Runtime security extends those identity checks into every step of that chain.
The scale makes this urgent. Kalka points out that roughly 80% of all cyberattacks target compromised identities, and that is just on the human side. Add AI agents to the mix and it gets much worse: there are already 45 to 90 non-human identities for every single human identity in a typical organization. Non-human identities include workloads, microservices (small, independent software components that each handle a specific task), containers, and now AI agents.
The four security gaps
When a user opens an app that calls an AI agent, traditional IAM protects everything up to that first agent. After that point, agents reaching into backend resources open up four crucial holes.
No traceability
Every AI agent needs a unique identifier so you can trace exactly what it did. Without one, you have no way to audit an agent's actions or figure out which instance caused a problem.
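A minimal sketch of what this looks like in practice, assuming a hypothetical in-memory audit log (`spawn_agent`, `record_action`, and the log format are all illustrative, not a real API):

```python
import uuid
from datetime import datetime, timezone

# Every agent instance gets a unique identifier at creation, and every
# action it takes is written to an audit log keyed by that identifier.
audit_log: list[dict] = []

def spawn_agent(agent_type: str) -> str:
    """Issue a unique, per-instance identifier for a new agent."""
    return f"{agent_type}-{uuid.uuid4()}"

def record_action(agent_id: str, action: str, resource: str) -> None:
    audit_log.append({
        "agent_id": agent_id,
        "action": action,
        "resource": resource,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

agent_id = spawn_agent("hr-onboarding")
record_action(agent_id, "read", "employee_db")

# Because the ID is unique per instance, the log can answer
# "which agent instance touched this resource?"
culprits = [e["agent_id"] for e in audit_log if e["resource"] == "employee_db"]
```

Without that per-instance ID, two copies of the same agent are indistinguishable in the log, which is exactly the traceability gap described above.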
Too much access
Developers often give agents broad permissions because they are not sure what the agent will need. Over time, those privileges pile up. Nobody audits them, nobody strips them back, and the agent ends up with far more access than any single task requires. The security principle of least privilege says you should only grant the minimum access needed for a specific task, but agents routinely break this rule.
Hidden impersonation
Sometimes users hand off tasks to an agent on purpose, asking it to act on their behalf. That is fine, but it needs proper audit logging. The bigger danger is impersonation: co-pilot agents (AI assistants built into the apps you already use) run on your desktop and act as you, inheriting your identity instead of getting their own. From a security standpoint, nobody can tell whether the user or the agent did something.
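The difference between the two patterns shows up directly in what gets logged. A minimal sketch, with hypothetical record shapes (nothing here is a real logging API):

```python
# Contrast impersonation with explicit delegation. Under impersonation the
# log shows only the user's identity; with delegation, both the user and
# the agent appear, so actions stay attributable.

def log_impersonated(user: str, action: str) -> dict:
    # Co-pilot pattern: the agent reuses the user's identity.
    # The agent is invisible in the record.
    return {"actor": user, "action": action}

def log_delegated(user: str, agent: str, action: str) -> dict:
    # On-behalf-of pattern: the agent has its own identity and the
    # record preserves who asked it to act.
    return {"actor": agent, "on_behalf_of": user, "action": action}

a = log_impersonated("alice", "export_report")
b = log_delegated("alice", "copilot-7", "export_report")

# Only the delegated record can answer "was this a human or an agent?"
```

The delegated record is strictly more informative: it keeps the human accountable for the request while making the agent's involvement visible.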
Open door to data
The final connection between an agent and sensitive data is the hardest to lock down. Agents may share database credentials, and they move at machine speed. When an agent touches sensitive data, the system should check at that moment whether the access is still allowed. As the speakers put it: "And guess who's checking? Nobody."
Five imperatives for deploying agents safely
Kalka and Lynch lay out five imperatives they see as requirements for any AI project.
Register your agents
Every agent must have a registered identity. That registration should include an assessment of what could go wrong, especially if the agent makes external connections.
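One way to sketch such a registry, assuming a hypothetical in-memory store where external connections raise the risk tier (`register_agent` and `is_deployable` are illustrative names, not a real product API):

```python
# An agent cannot be deployed until it is registered with an identity
# and a risk assessment; agents that make external connections get
# extra scrutiny.
registry: dict[str, dict] = {}

def register_agent(agent_id: str, owner: str,
                   external_connections: list[str]) -> dict:
    # External connections are flagged as higher risk, per the
    # registration requirement described above.
    entry = {
        "owner": owner,
        "external_connections": external_connections,
        "risk_tier": "high" if external_connections else "standard",
    }
    registry[agent_id] = entry
    return entry

def is_deployable(agent_id: str) -> bool:
    return agent_id in registry

register_agent("invoice-bot-1", "finance-team", ["https://api.example.com"])
```

An unregistered agent simply fails the `is_deployable` check, which is the enforcement point the imperative calls for.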
Give access per task
Remove all default permissions. Grant access on the fly, just in time, at the session level. An HR (Human Resources) agent should only be able to onboard an employee during the specific session when that task is requested, not permanently.
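A minimal sketch of session-scoped, just-in-time grants, using the HR example above (the `Session` class is illustrative, not a real IAM API):

```python
import time

# Permissions are attached to a session, scoped to one task, and expire
# when the session ends. There are no standing default permissions.
class Session:
    def __init__(self, ttl_seconds: float):
        self.grants: set[str] = set()
        self.expires_at = time.monotonic() + ttl_seconds

    def grant(self, permission: str) -> None:
        self.grants.add(permission)

    def allows(self, permission: str) -> bool:
        # A grant is valid only within this session's lifetime.
        return time.monotonic() < self.expires_at and permission in self.grants

# The HR agent gets "onboard_employee" only for this session.
session = Session(ttl_seconds=60)
session.grant("onboard_employee")

expired = Session(ttl_seconds=0)
expired.grant("onboard_employee")
```

The key design choice is that access is a property of the session, not the agent: when the session ends, the permission disappears with it, so privileges cannot pile up.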
Track who asked for what
When a user asks an agent to do something, that chain of delegation must be traceable. You need to know that a specific user asked a specific agent to take a specific action. This matters for banking transactions, infrastructure setup, and any regulated workflow.
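The delegation chain can be sketched as an append-only list of hops, each recording who asked whom to do what (the record format and names are assumptions for illustration):

```python
# Each hop records who asked whom to do what, so a regulated action can
# be walked back to the originating human.

def delegate(chain: list[dict], principal: str,
             delegate_to: str, task: str) -> list[dict]:
    """Append one hop to the delegation chain."""
    return chain + [{"from": principal, "to": delegate_to, "task": task}]

def originating_user(chain: list[dict]) -> str:
    """Walk the chain back to the human who started it."""
    return chain[0]["from"]

# A user asks a banking agent, which in turn calls a ledger agent.
chain: list[dict] = []
chain = delegate(chain, "alice", "banking-agent", "transfer_funds")
chain = delegate(chain, "banking-agent", "ledger-agent", "post_entry")
```

Even two hops deep, the chain still answers the auditor's question: which specific user set this action in motion.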
Check in the moment
Check authorization at the moment data is accessed, not based on what was approved a month ago. This is the last-mile problem, and it needs near real-time checks on every external connection an agent makes.
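A sketch of what moment-of-access checking means, assuming a hypothetical live policy table consulted on every read rather than a decision cached at login:

```python
# The policy is consulted at the instant of data access, so a permission
# revoked a moment ago is enforced immediately, not at the next login.
policy = {("report-agent", "customer_db"): True}

def read(agent: str, resource: str) -> str:
    # Re-check the live policy on every access; never trust a cached
    # decision from earlier in the session.
    if not policy.get((agent, resource), False):
        raise PermissionError(f"{agent} may not access {resource}")
    return f"data from {resource}"

first = read("report-agent", "customer_db")

# Access is revoked mid-session...
policy[("report-agent", "customer_db")] = False

# ...and the very next read is denied.
try:
    read("report-agent", "customer_db")
    revoked_enforced = False
except PermissionError:
    revoked_enforced = True
```

This is the inverse of the traditional model, where authorization was decided once at the perimeter and assumed valid for the rest of the session.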
Prove full control
In regulated industries like healthcare and finance, every step from human identity to agent action to data access must be auditable. It is no longer enough to audit only the human-to-application connection.
Three automated systems
Lynch describes three automated systems that need to work together across both humans and agents.
Traffic control
Decides who and what gets through. It coordinates identity checks across the entire chain and makes sure accountability is built into every step.
Rule engine
Applies automatic rules at each point in the chain, from initial access to the final data request. Without it, you cannot prove who did what.
Monitoring
Has two parts. It shows you how your security is set up: how many secrets managers your dev teams are running, whether credentials are in one place or scattered across dozens of tools. It also catches problems in real time, like an agent that skips the secrets manager and never gets a unique identifier.
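Both halves can be sketched over a hypothetical inventory of agents (the data shape and function names are illustrative):

```python
# Two monitoring views: a posture report (how credentials are spread
# across secrets managers) and a real-time check that flags agents
# running without a registered identifier.
agents = [
    {"id": "etl-agent-1", "secrets_manager": "vault-a"},
    {"id": None, "secrets_manager": None},  # skipped the secrets manager
    {"id": "chat-agent-2", "secrets_manager": "vault-b"},
]

def posture_report(inventory: list[dict]) -> dict:
    """Configuration view: are credentials centralized or scattered?"""
    managers = {a["secrets_manager"] for a in inventory if a["secrets_manager"]}
    return {"secrets_managers_in_use": sorted(managers)}

def runtime_alerts(inventory: list[dict]) -> list[str]:
    """Real-time view: flag agents with no identity or no secrets manager."""
    return [
        "unidentified agent bypassing secrets manager"
        for a in inventory
        if a["id"] is None or a["secrets_manager"] is None
    ]
```

The posture view tells you how fragmented your setup is; the runtime view catches the specific failure described above, an agent that never received a unique identifier.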
The organizational bottleneck
The most striking point in the video is not technical. The real barrier is that security teams, IT teams, and development teams work in silos. Kalka describes a common pattern: ask a CISO (Chief Information Security Officer, the person responsible for an organization's cybersecurity) what they do with the development team, and they will say they have a monthly meeting to discuss initiatives. Next month, they have another meeting to discuss the same initiatives.
Development wants to ship agents to move the business forward. IT wants to manage them. Security worries about risk. Without shared visibility across all three groups, none of the technical solutions above will actually work.
Practical implications
For security teams
Extending IAM to cover non-human identities is no longer optional. Start by finding out how many unregistered agents are already running in your environment and whether any are using user credentials.
For developers
Giving agents broad permissions because you are not sure what they need is exactly the kind of overprivilege these experts warn about. Build with dynamic, session-level access from the start.
For leadership
The technology exists. The harder problem is breaking down the walls between security, IT, and development. Monthly status meetings are not collaboration.
Glossary
| Term | Definition |
|---|---|
| Non-human identity (NHI) | A digital identity assigned to software like an AI agent, microservice, or container, rather than a person. |
| Identity and access management (IAM) | Systems that control who or what can access which resources in an organization. |
| Least privilege | Security principle: give only the minimum access needed for a specific task, nothing more. |
| Zero trust | A security model that never automatically trusts anything. Every request must be verified, every time. |
| Open door to data | The final connection between an agent and the data it accesses. The hardest point to secure because checks must happen in real time. |
| Runtime security | Security checks that happen while software is actually running, not just during setup or deployment. |
| Traffic control | An automated system that decides who and what gets through, coordinating identity checks across the chain. |
| Rule engine | Automatic rules that ensure access control is enforced at every link in the chain. |
| Monitoring | Sees everything happening in the system and alerts when something is wrong. |
| Impersonation | When an AI agent inherits a user's identity instead of having its own, making it impossible to distinguish agent actions from user actions. |
| Secrets manager | Software that stores and manages sensitive credentials like passwords and API keys (the access codes that let programs talk to each other). |
Sources and resources
Want to go deeper? Watch the full video on YouTube.