How to Secure AI Agents at Runtime

Key insights
- Static API keys in AI agents are a serious risk: a single prompt injection gives unlimited access to every connected service, because the credentials never expire.
- Dynamic credentials flip the security model. Instead of permanent access, agents get temporary keys that expire automatically after each session. This is the same zero-trust principle used in cloud security, now applied to AI.
- CIBA turns the user's phone into a security checkpoint for AI agents. Even if an agent is compromised, every sensitive action triggers a push notification the user must approve.
This is an AI-generated summary. The source video may include demos, visuals and additional context.
In Brief
Most engineers building AI agents are not security experts, and that gap creates real risk. Tyler Lynch from IBM explains how to close it with three layers of agentic runtime security: dynamic credentials that expire automatically, user identity via OAuth 2.0, and push notifications for sensitive operations. Together, these layers protect against both accidental misuse and deliberate attacks like prompt injection.
The problem with static credentials
An AI agent is only as useful as the external connections it can make. A typical agent connects to a database, an LLM provider (a large language model service like ChatGPT or Claude), and possibly a SaaS app (cloud software you subscribe to) like Salesforce. To access those services, it needs credentials: passwords or API keys (secret codes that let one program talk to another).
The traditional approach is to hardcode those credentials directly into the application. Lynch recommends against that, and the reason is straightforward: a static credential never expires. If an attacker tricks the agent into doing something it shouldn't (a technique called prompt injection), they can read or write anything the agent has access to, for as long as the credential is valid.
This is what Lynch calls non-human identity (NHI): the digital identity of a program or AI agent (rather than a person) that needs access to services. Managing NHI securely is the core challenge of agentic runtime security.
Layer 1: Dynamic credentials
The first fix is to stop using permanent credentials entirely.
Dynamic credentials are created just in time, at the moment the agent actually needs access, and are automatically revoked at the end of the session. The window of exposure goes from "forever" to "two minutes or less." Even if an attacker injects a malicious instruction, the credentials they could steal are already gone by the time the session ends.
This is the same zero-trust principle that cloud security has used for years: never grant standing privilege (permanent access that is always active) when temporary access is enough.
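The issue-then-revoke lifecycle can be sketched in a few lines. This is a minimal illustration, not a production pattern: real systems delegate minting and revocation to a secrets manager or the IDP, and the class and method names here are hypothetical.

```python
import secrets
import time

class DynamicCredentialBroker:
    """Illustrative broker: issues short-lived tokens and revokes them
    when the session ends. Real deployments use a secrets manager or
    the IDP for this; this sketch only shows the lifecycle."""

    def __init__(self, ttl_seconds: int = 120):
        self.ttl = ttl_seconds
        self._active: dict[str, float] = {}  # token -> expiry timestamp

    def issue(self) -> str:
        """Mint a credential just in time, valid only for `ttl` seconds."""
        token = secrets.token_urlsafe(32)
        self._active[token] = time.time() + self.ttl
        return token

    def is_valid(self, token: str) -> bool:
        expiry = self._active.get(token)
        return expiry is not None and time.time() < expiry

    def revoke(self, token: str) -> None:
        """Called at session end: the credential simply stops existing."""
        self._active.pop(token, None)

broker = DynamicCredentialBroker(ttl_seconds=120)
token = broker.issue()
assert broker.is_valid(token)
broker.revoke(token)            # session over: stolen copies are now useless
assert not broker.is_valid(token)
```

Even if a prompt injection exfiltrates `token` mid-session, it is dead the moment `revoke` runs or the TTL elapses, which is the whole point of the layer.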
Diagram of Lynch's full architecture: on the left, the user and the IDP (identity provider), connected via OAuth 2.0 CIBA to a push notification that asks for approval on a separate device, outside the browser; on the right, the AI agent connects under an NHI (non-human identity) to three services: a database, an LLM, and a SaaS app; dynamic credentials and OAuth 2.0 underpin both sides. Screenshot from YouTube.
Layer 2: User identity via OAuth 2.0
Most AI agents are not fully autonomous. A real user interacts with them. So a second question is: who is that user, and what are they allowed to do?
This is where an identity provider (IDP) comes in. An IDP is a service that manages user logins and knows who you are. Examples include Okta and IBM Verify. The AI agent communicates with the IDP to understand the user's identity and context.
The mechanism behind this is a standard called OAuth 2.0. You have probably seen it in action: when you click "Log in with Google," a page appears asking whether you authorize the app to access your profile or act on your behalf. That consent screen is OAuth 2.0. Lynch calls the specific flow used here "authorization code flow": the agent asks, the IDP checks, the user approves.
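The first step of that flow, redirecting the user to the IDP's consent screen, can be sketched as building an authorization URL. The endpoint URL and client ID below are hypothetical placeholders; a real IDP like Okta or IBM Verify publishes its own endpoints.

```python
import secrets
from urllib.parse import urlencode

# Hypothetical endpoint; a real IDP publishes its own in its metadata.
AUTHORIZE_URL = "https://idp.example.com/oauth2/authorize"

def build_authorization_url(client_id: str, redirect_uri: str,
                            scopes: list[str]) -> tuple[str, str]:
    """Step 1 of the authorization code flow: send the user to the
    IDP's consent screen. The random `state` value lets the agent
    verify the callback really belongs to this request (CSRF defense)."""
    state = secrets.token_urlsafe(16)
    params = urlencode({
        "response_type": "code",      # ask the IDP for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri, # where the IDP sends the user back
        "scope": " ".join(scopes),
        "state": state,
    })
    return f"{AUTHORIZE_URL}?{params}", state

url, state = build_authorization_url(
    "hr-agent", "https://agent.example.com/callback", ["openid", "profile"])
# The agent redirects the user to `url`; after consent, the IDP calls
# back with ?code=...&state=..., and the agent exchanges the code for a token.
```

The consent screen the user sees at that URL is exactly the "Log in with Google"-style page described above: the agent asks, the IDP checks, the user approves.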
Layer 3: CIBA for sensitive operations
Dynamic credentials and user identity cover most situations. But some operations are too consequential for a simple yes/no during login. If an HR agent can onboard or offboard employees, a mistake (or an attack) could affect people's jobs and the company's finances.
For these high-stakes actions, Lynch recommends OAuth 2.0 CIBA (Client-Initiated Backchannel Authentication). He describes it as "passkeys for agents." Here is how it works:
- The user asks the AI agent to do something sensitive.
- The agent detects that the action requires extra approval.
- The agent contacts the IDP, which sends a push notification directly to the user's phone, outside the browser entirely.
- The user reads the details and taps to approve or deny.
The key word is "backchannel": the approval happens on a separate device and a separate channel from the browser session. This makes it much harder to forge.
Lynch illustrates why this matters with a prompt injection scenario: "If there was a prompt injection that said off-board all employees, I would get a notice for each one of those to my phone." The attacker's injected instruction would trigger a flood of approval requests that the real user would immediately recognize as wrong.
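The backchannel flow above can be sketched as two functions: one that builds the approval request the IDP pushes to the phone, and one that polls the token endpoint until the user answers. Endpoint URLs, scope names, and the `post` helper are all illustrative assumptions; real CIBA deployments use the endpoints and error codes their IDP publishes.

```python
import time

# Hypothetical endpoints; a real IDP publishes these in its CIBA metadata.
BC_AUTHORIZE_URL = "https://idp.example.com/oauth2/bc-authorize"
TOKEN_URL = "https://idp.example.com/oauth2/token"

def build_ciba_request(client_id: str, user_hint: str, action: str) -> dict:
    """Step 1: the agent asks the IDP to push an approval to the user's
    phone. `binding_message` is the text the user reads before tapping,
    so it should describe the action plainly."""
    return {
        "client_id": client_id,
        "login_hint": user_hint,            # which user to notify
        "scope": "openid hr:offboard",      # illustrative scope name
        "binding_message": f"Approve: {action}",
    }

def poll_for_token(post, auth_req_id: str, interval: int = 5,
                   max_attempts: int = 12) -> str:
    """Step 2: poll the token endpoint until the user approves or denies.
    `post(url, payload) -> (status, body)` is injected so this sketch
    needs no real network."""
    payload = {
        "grant_type": "urn:openid:params:grant-type:ciba",
        "auth_req_id": auth_req_id,
    }
    for _ in range(max_attempts):
        status, body = post(TOKEN_URL, payload)
        if status == 200:
            return body["access_token"]          # user tapped "approve"
        if body.get("error") == "authorization_pending":
            time.sleep(interval)                 # user hasn't answered yet
            continue
        raise PermissionError(body.get("error", "denied"))
    raise TimeoutError("user did not respond")
```

Note what the agent never sees: the push notification itself travels from the IDP to the phone, on a channel the browser session (and any injected prompt) cannot touch.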
Why all three layers together
Each layer solves a different problem:
- Dynamic credentials limit what an attacker can do with stolen access.
- OAuth 2.0 with an IDP ensures the agent knows who the user is and acts only within that user's permissions.
- CIBA adds a human checkpoint for actions with serious real-world consequences.
None of the three is sufficient alone. Together, they form what Lynch calls agentic runtime security: a security model built specifically for agents that act autonomously but still need to be accountable.
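The way the three layers compose can be sketched as a single authorization gate the agent runs before every action. The action names, scope strings, and function signature are hypothetical; the point is only that all three checks must pass independently.

```python
# Hypothetical policy: which operations are consequential enough to
# require a fresh CIBA push approval rather than the session's consent.
SENSITIVE_ACTIONS = {"offboard_employee", "onboard_employee"}

def authorize_action(action: str, credential_valid: bool,
                     user_scopes: set[str], required_scope: str,
                     ciba_approved: bool) -> bool:
    """All three layers must agree before the agent acts."""
    if not credential_valid:                 # Layer 1: dynamic credential
        return False                         # still within its short TTL?
    if required_scope not in user_scopes:    # Layer 2: is this user even
        return False                         # permitted to do this?
    if action in SENSITIVE_ACTIONS and not ciba_approved:
        return False                         # Layer 3: human checkpoint
    return True

# A routine lookup passes with layers 1 and 2 alone...
assert authorize_action("read_directory", True, {"hr:read"}, "hr:read", False)
# ...but offboarding is blocked until the user approves on their phone.
assert not authorize_action("offboard_employee", True,
                            {"hr:read", "hr:offboard"}, "hr:offboard", False)
```

Remove any one check and a specific attack reopens: no TTL means stolen keys live forever, no scope check means the agent exceeds its user, no CIBA means an injected "off-board everyone" runs silently.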
Glossary
| Term | Definition |
|---|---|
| Non-human identity (NHI) | The digital identity of a program or AI agent (not a person) that needs to log in and access services |
| Dynamic credentials | Temporary passwords or API keys created at the moment of use and deleted automatically when the session ends |
| Standing privilege | Permanent access rights that are always active, even when not in use |
| OAuth 2.0 | A security standard that lets you log in with an existing account (like Google or Microsoft) instead of creating a new password |
| IDP (Identity Provider) | A service that manages user logins and knows who you are. Examples include Okta and IBM Verify |
| CIBA (Client-Initiated Backchannel Authentication) | A way for an app to request your approval via a push notification on your phone, completely separate from the browser |
| Prompt injection | A trick where someone hides instructions in text to make an AI agent do something it was not supposed to do |
Sources and resources
Want to go deeper? Watch the full video on YouTube →