
The AI Agents Most Companies Forget to Secure

March 15, 2026 · 5 min read · 1,031 words
AI · non-human identity security · agentic AI runtime security · zero trust AI agents · identity access management gaps
Image: screenshot from the IBM Technology YouTube video on agentic runtime security and non-human identities.

Key insights

  • Traditional identity systems protect users up to the first AI agent but leave everything after that unmonitored, creating a false sense of security
  • Co-pilot agents use the user's identity by default, so nobody can tell whether a human or an AI performed an action
  • Checking permissions at the moment of data access is a fundamental shift from how security has worked for decades
  • The hardest barrier to agentic security may not be the technology itself but getting security, IT, and development teams to actually work together
Source: YouTube
Published: March 15, 2026
Channel: IBM Technology
Hosts: Bob Kalka and Tyler Lynch

This is an AI-generated summary. The source video includes demos, visuals, and context not covered here.

In Brief

IBM security leaders Bob Kalka and Tyler Lynch break down why deploying AI agents opens up serious identity and access management gaps that most organizations are not prepared for. They outline four security holes that appear the moment agents start calling backend resources, five imperatives for deploying agents safely, and three technologies needed to tie it all together. The core message: the biggest challenge is not technical but organizational.


What is agentic runtime security?

Agentic runtime security means protecting AI agents while they are actually running, not just when they are first set up. Traditional identity and access management (IAM) controls who can log in and what apps they can open. But AI agents go beyond that first checkpoint. They call other agents, access databases, and talk to backend systems, all at machine speed. Runtime security extends those identity checks into every step of that chain.

The scale makes this urgent. Kalka points out that roughly 80% of all cyberattacks target compromised identities, and that is just on the human side. Add AI agents to the mix and it gets much worse: there are already 45 to 90 non-human identities for every single human identity in a typical organization. Non-human identities include workloads, microservices (small, independent software components that each handle a specific task), containers, and now AI agents.


The four security gaps

When a user opens an app that calls an AI agent, traditional IAM protects everything up to that first agent. Beyond that point, agents reaching into backend resources expose four critical gaps.

1

No traceability

Every AI agent needs a unique identifier so you can trace exactly what it did. Without one, you have no way to audit an agent's actions or figure out which instance caused a problem.
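The idea can be sketched in a few lines of Python. This is an illustrative sketch, not the method from the video; all names (`register_agent`, `record_action`) are hypothetical. The point is that two instances of the same agent get distinct identifiers, so the audit log can tell them apart.

```python
# Sketch: give each agent instance a unique ID and log actions against it.
# All names here are hypothetical, for illustration only.
import uuid
from datetime import datetime, timezone

audit_log = []

def register_agent(name: str) -> str:
    """Return a unique identifier for a new agent instance."""
    return f"{name}-{uuid.uuid4()}"

def record_action(agent_id: str, action: str) -> None:
    """Append an auditable record of what this specific instance did."""
    audit_log.append({
        "agent_id": agent_id,
        "action": action,
        "at": datetime.now(timezone.utc).isoformat(),
    })

agent_a = register_agent("hr-agent")
agent_b = register_agent("hr-agent")  # same role, distinct instance
record_action(agent_a, "read:employee-record")
record_action(agent_b, "read:employee-record")

# With unique IDs, the two instances are distinguishable in the log.
assert agent_a != agent_b
assert {e["agent_id"] for e in audit_log} == {agent_a, agent_b}
```

Without the unique suffix, both log entries would read "hr-agent" and there would be no way to tell which instance caused a problem.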

2

Too much access

Developers often give agents broad permissions because they are not sure what the agent will need. Over time, those privileges pile up. Nobody audits them, nobody strips them back, and the agent ends up with far more access than any single task requires. The security principle of least privilege says you should only grant the minimum access needed for a specific task, but agents routinely break this rule.
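One way to surface this overprivilege is to compare what an agent was granted against what a task actually needs. A minimal sketch, with a hypothetical permission model and task names not taken from the video:

```python
# Sketch of a least-privilege audit: list every permission an agent holds
# beyond what the current task requires. Permission names are illustrative.

NEEDED = {"onboard-employee": {"hr:create-user"}}  # minimum per task

def excess_privileges(granted: set[str], task: str) -> set[str]:
    """Permissions the agent holds beyond what this task requires."""
    return granted - NEEDED[task]

# A developer granted broad access "to be safe":
granted = {"hr:create-user", "hr:delete-user", "payroll:read"}

assert excess_privileges(granted, "onboard-employee") == {
    "hr:delete-user",
    "payroll:read",
}
```

Anything in that excess set is attack surface the task never needed, which is exactly what nobody is stripping back.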

3

Hidden impersonation

Sometimes users hand off tasks to an agent on purpose, asking it to act on their behalf. That is fine, but it needs proper audit logging. The bigger danger is impersonation: co-pilot agents, AI assistants built into the apps you already use, run on the user's desktop and act as the user, using the user's identity instead of getting their own. From a security standpoint, nobody can tell whether the user or the agent did something.
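The difference shows up in what the access token carries. A simplified sketch, loosely modeled on the "act" (actor) claim from OAuth 2.0 token exchange (RFC 8693); the token structures here are stand-ins, not a real token format:

```python
# Sketch: impersonation vs. explicit delegation. In impersonation the token
# names only the user; in delegation it also names the acting agent, which
# is the idea behind the "act" claim in OAuth 2.0 token exchange (RFC 8693).
# These dicts are simplified stand-ins for real tokens.

def impersonation_token(user: str) -> dict:
    return {"sub": user}  # nobody can tell an agent was involved

def delegation_token(user: str, agent_id: str) -> dict:
    return {"sub": user, "act": {"sub": agent_id}}  # agent is on record

t1 = impersonation_token("alice")
t2 = delegation_token("alice", "copilot-7")

assert "act" not in t1                   # agent action looks like the user
assert t2["act"]["sub"] == "copilot-7"   # auditable: alice, via copilot-7
```

With the actor recorded, the audit trail can distinguish "alice did this" from "an agent did this on alice's behalf".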

4

Open door to data

The final connection between an agent and sensitive data is the hardest to lock down. Agents may share database credentials, and they move at machine speed. When an agent touches sensitive data, the system should check right then whether that access is still allowed. As the video puts it: "And guess who's checking? Nobody."
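What such a last-mile check looks like in practice: authorization is evaluated against current policy at the moment of the fetch, so a revoked grant takes effect immediately. A minimal sketch; the policy store and all names are hypothetical.

```python
# Sketch of a point-in-time authorization check: policy is consulted at the
# moment the agent fetches data, not when access was originally approved.
# The in-memory policy dict stands in for a real, revocable policy store.

policy = {("agent-42", "salary-db"): True}  # current grants

def fetch(agent_id: str, resource: str) -> str:
    # The check happens here, at access time, on every request.
    if not policy.get((agent_id, resource), False):
        raise PermissionError(f"{agent_id} may not read {resource}")
    return f"rows from {resource}"

assert fetch("agent-42", "salary-db") == "rows from salary-db"

policy[("agent-42", "salary-db")] = False  # grant revoked mid-session
try:
    fetch("agent-42", "salary-db")
    raise AssertionError("revocation was ignored")
except PermissionError:
    pass  # access correctly denied at the moment of the request
```

A decision cached from last month would have let the second fetch through; checking in the moment does not.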

Diagram: the agent flow from user to application to agent to sensitive data. Traditional security stops at the first agent; the user is verified, but there is no check on the agent's access to data. The diagram maps the four security gaps (no traceability, too much access, hidden impersonation, open door to data), the five imperatives (register agents, give access per task, track who asked for what, check in the moment, prove full control), the three automated systems (traffic control, rule engine, monitoring), and the three groups that must collaborate (CISO, IT, development). It also cites the figures that 80% of cyberattacks target logins and access, and that every employee has 45 to 90 digital accounts with no human behind them. Based on IBM Technology, Bob Kalka and Tyler Lynch (2026). Diagram: Claude Code.

Five imperatives for deploying agents safely

Kalka and Lynch lay out five imperatives they see as requirements for any AI project.

1

Register your agents

Every agent must have a registered identity. That registration should include an assessment of what could go wrong, especially if the agent makes external connections.

2

Give access per task

Remove all default permissions. Grant access on the fly, just in time, at the session level. An HR (Human Resources) agent should only be able to onboard an employee during the specific session when that task is requested, not permanently.
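Just-in-time, session-level access can be sketched as a grant that is minted when the task starts and stops working outside that session. Time handling and names here are illustrative, not from the video.

```python
# Sketch of per-task, session-scoped access: the grant is created for one
# scope with a short lifetime and is invalid for other tasks or after the
# session ends. TTL values and scope names are illustrative.
import time

def grant_for_session(agent_id: str, scope: str, ttl_s: float) -> dict:
    """Mint a grant for exactly one scope, valid only for this session."""
    return {"agent": agent_id, "scope": scope,
            "expires": time.monotonic() + ttl_s}

def is_valid(grant: dict, scope: str) -> bool:
    return grant["scope"] == scope and time.monotonic() < grant["expires"]

g = grant_for_session("hr-agent-1", "hr:onboard", ttl_s=0.01)
assert is_valid(g, "hr:onboard")        # valid during the task...
assert not is_valid(g, "hr:terminate")  # ...and only for that task

time.sleep(0.05)
assert not is_valid(g, "hr:onboard")    # expired with the session
```

Because nothing persists between sessions, there are no leftover privileges to pile up and nothing for an audit to strip back.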

3

Track who asked for what

When a user asks an agent to do something, that chain of delegation must be traceable. You need to know that a specific user asked a specific agent to take a specific action. This matters for banking transactions, infrastructure setup, and any regulated workflow.
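The traceability requirement amounts to recording every hop in the delegation chain, so the final action links back to the originating user. A deliberately tiny sketch with hypothetical agent names:

```python
# Sketch of a traceable delegation chain: each hop appends who was asked,
# so the final action is attributable to the originating user. Names are
# illustrative, not from the video.

def delegate(chain: list[str], to: str) -> list[str]:
    """Extend the delegation chain with the next actor, without mutation."""
    return chain + [to]

chain = ["alice"]                             # the user initiates
chain = delegate(chain, "banking-agent")      # asks an agent...
chain = delegate(chain, "payments-agent")     # ...which calls another agent

# The eventual transaction is attributable to alice through the full chain.
assert chain == ["alice", "banking-agent", "payments-agent"]
assert chain[0] == "alice"
```

In a regulated workflow, it is this full chain, not just the last hop, that an auditor needs to see.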

4

Check in the moment

Check authorization at the moment data is accessed, not based on what was approved a month ago. This is the last-mile problem, and it needs near real-time checks on every external connection an agent makes.

5

Prove full control

In regulated industries like healthcare and finance, every step from human identity to agent action to data access must be auditable. It is no longer enough to audit only the human-to-application connection.


Three automated systems

Lynch describes three automated systems that need to work together across both humans and agents.

1

Traffic control

Decides who and what gets through. It coordinates identity checks across the entire chain and makes sure accountability is built into every step.

2

Rule engine

Applies automatic rules at each point in the chain, from initial access to the final data request. Without it, you cannot prove who did what.
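A rule engine of this kind can be pictured as one rule set evaluated at every hop before a request proceeds. The rules and hop fields below are hypothetical, chosen to echo the gaps described earlier:

```python
# Sketch of a rule engine applied at every link in the chain: each hop is
# checked against the same rules before the request proceeds. The rules and
# hop fields here are illustrative.

RULES = [
    lambda hop: hop.get("agent_id") is not None,      # agent is registered
    lambda hop: hop.get("on_behalf_of") is not None,  # delegation recorded
]

def allowed(hop: dict) -> bool:
    """A hop proceeds only if every rule passes."""
    return all(rule(hop) for rule in RULES)

good_hop = {"agent_id": "agent-42", "on_behalf_of": "alice"}
bad_hop = {"agent_id": None, "on_behalf_of": "alice"}  # unregistered agent

assert allowed(good_hop)
assert not allowed(bad_hop)
```

Because the same rules run at every link, an unregistered agent or an unrecorded delegation is blocked wherever in the chain it appears.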

3

Monitoring

Has two parts. It shows you how your security is set up: how many secrets managers your dev teams are running, whether credentials are in one place or scattered across dozens of tools. It also catches problems in real time, like an agent that skips the secrets manager and never gets a unique identifier.
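The real-time half of monitoring reduces to comparing observed traffic against the registry of known agents. A minimal sketch; the event shapes and agent names are hypothetical:

```python
# Sketch of runtime monitoring: flag any agent that appears in traffic
# without a registered identifier. Event shapes are illustrative.

registered = {"agent-42", "agent-43"}  # the agent registry

def alerts(events: list[dict]) -> list[str]:
    """Return the IDs of agents seen in traffic but never registered."""
    return [e["agent"] for e in events if e["agent"] not in registered]

events = [
    {"agent": "agent-42", "action": "read:db"},
    {"agent": "shadow-agent", "action": "read:db"},  # never registered
]

assert alerts(events) == ["shadow-agent"]
```

An agent that skipped the secrets manager and never got an identifier shows up here as an alert rather than disappearing into the traffic.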


The organizational bottleneck

The most striking point in the video is not technical. The real barrier is that security teams, IT teams, and development teams work in silos. Kalka describes a common pattern: ask a CISO (Chief Information Security Officer, the person responsible for an organization's cybersecurity) what they do with the development team, and they will say they have a monthly meeting to discuss initiatives. Next month, they have another meeting to discuss the same initiatives.

Development wants to ship agents to move the business forward. IT wants to manage them. Security worries about risk. Without shared visibility across all three groups, none of the technical solutions above will actually work.


Practical implications

For security teams

Extending IAM to cover non-human identities is no longer optional. Start by finding out how many unregistered agents are already running in your environment and whether any are using user credentials.

For developers

Giving agents broad permissions because you are not sure what they need is exactly the kind of overprivilege these experts warn about. Build with dynamic, session-level access from the start.

For leadership

The technology exists. The harder problem is breaking down the walls between security, IT, and development. Monthly status meetings are not collaboration.


Glossary

Non-human identity (NHI): A digital identity assigned to software such as an AI agent, microservice, or container, rather than a person.
Identity and access management (IAM): Systems that control who or what can access which resources in an organization.
Least privilege: Security principle: grant only the minimum access needed for a specific task, nothing more.
Zero trust: A security model that never automatically trusts anything; every request must be verified, every time.
Open door to data: The final connection between an agent and the data it accesses; the hardest point to secure because checks must happen in real time.
Runtime security: Security checks that happen while software is actually running, not just during setup or deployment.
Traffic control: An automated system that decides who and what gets through, coordinating identity checks across the chain.
Rule engine: Automatic rules that ensure access control is enforced at every link in the chain.
Monitoring: Sees everything happening in the system and alerts when something is wrong.
Impersonation: When an AI agent inherits a user's identity instead of having its own, making agent actions indistinguishable from user actions.
Secrets manager: Software that stores and manages sensitive credentials such as passwords and API keys (the access codes that let programs talk to each other).

