Skip to content
Back to articles

AI Cyberattacks Are About to Hit Compute Speed

April 1, 2026/5 min read/1,006 words
AIAI AgentsAI SecurityGoogle
YouTube summary: AI Attacks Outpace Human Defenses, Warns Cyber Expert
Image: Screenshot from YouTube.

Key insights

  • Cybersecurity is entering a speed war where the side that automates first gains the advantage.
  • Human judgment still matters for strategy, but humans cannot stay inside every defensive loop once attacks compress from days to seconds.
  • AI agents raise the value of model weights and system prompts because they encode how an organization uses its own data.
  • Mandia's pitch is not just a warning about attackers. It is also a business case for agent-vs-agent defense.
SourceYouTube
Published April 1, 2026
Bloomberg Technology
Bloomberg Technology
Hosts:Tim Stenovec
Guest:Kevin Mandia

This is an AI-generated summary. The source video may include demos, visuals and additional context.

Watch the video · How the articles are generated

In Brief

Kevin Mandia, founder of Mandiant and former FireEye chief, says AI is about to change cybersecurity in a very specific way: the speed of attack will rise faster than the speed of defense. In a short interview on Bloomberg Technology, he argues that within less than two years, AI agents will become a primary offensive tool in cyber operations because they can think, learn, and act with "total recall" at machine speed.

That matters because Mandia is not describing a small productivity gain. He is describing a structural shift. If vulnerability discovery compresses from days or minutes down to seconds, then a defense process that still waits for human review will simply be too slow.


The setup: source code leaks matter differently in the agent era

The clip starts in the middle of a discussion about a case where source code was exposed because of user error rather than a direct attack. Mandia's first point is measured: losing source code is embarrassing, but it is not the same as losing customer data, credentials, or model weights. He explicitly says those assets were not lost in the case being discussed.

Why does he single out model weights? Because in an agent-based system, the valuable layer is not just the code repository. It is also the memory, instructions, and internal logic that define how agents use a company's data. Mandia says agents have "total recall on your data and how you purpose those agents". That makes leakage more dangerous than a normal source-code embarrassment, because attackers can study how the software works and look for weaknesses such as prompt injection, where inputs trick an AI system into ignoring its real instructions.

His conclusion is interestingly pragmatic. More eyes on the code may also speed up fixes. If millions of people can inspect exposed code, some of them will spot weaknesses before criminals do, forcing a faster release cycle. That is not a comforting message, but it is realistic.


The real claim: cyber offense gets AI first

Mandia's central argument is that every major technology shift follows the same pattern: malicious use arrives faster than beneficial use. He uses the internet as the analogy. The internet allowed money, information, and coordination to move globally, and crime and espionage followed those same routes.

His prediction is blunt. Within under two years, AI agents will "become the offense in the cyber domain." The reason is not mysticism. It is speed plus memory plus automation. An attacker no longer needs to manually step through each phase of reconnaissance, testing, exploitation, and follow-up. Agents can run those loops continuously.

That changes the unit of time that matters. Security teams usually think in alerts, tickets, escalations, and patch windows. Mandia is saying the contest will move below that level. When vulnerability discovery happens in seconds rather than days or minutes, the old workflow is no longer just inefficient. It is structurally mismatched to the problem.


Why humans cannot stay in the defensive loop

The sharpest line in the clip is not about attackers. It is about defenders. Mandia says defense will have to become autonomous because it "can't have humans in the loop".

That does not mean humans disappear from cybersecurity. It means the role changes. Humans stay in strategy, policy, thresholds, approvals, and post-incident analysis. But humans cannot approve every microscopic defensive action if the attack surface is being scanned and exploited at compute speed.

This creates a useful tension with another current AI debate. In many business workflows, "human in the loop" is the right answer because human judgment prevents silent failure. But in active cyber defense, Mandia argues that keeping humans in every tactical loop becomes its own vulnerability. If a system pauses for human confirmation while an AI-driven attacker keeps moving, the pause itself becomes the weakness.

That is the core distinction. Human oversight still matters. Human latency does not scale.


The business angle behind the warning

Mandia ends with a brief description of what his company Armadin is building: "all offense all the time". The idea is not to attack customers. It is to use offensive agents to train defensive agents, so the good guys find new and novel attack paths before bad actors do.

This matters because it frames the next phase of cybersecurity as agent versus agent. Defenders will need systems that continuously simulate attack behavior, probe for weaknesses, and update defensive posture automatically. Static audits and annual security reviews will not be enough if both attack discovery and exploitation are increasingly automated.

It also explains why security companies are excited, not just afraid. If Mandia is right, a huge new market opens up around autonomous defense, attack simulation, and machine-speed response.


What to watch next

The interview is short, so it leaves open questions. Mandia does not explain how autonomous defense avoids overreacting, breaking normal workflows, or generating false positives at scale. He also does not address what happens when both sides use similar models and similar infrastructure.

Still, the directional claim is clear. Cybersecurity is moving from a human-speed discipline to a compute-speed discipline. The organizations that adapt fastest will not be the ones with the most alerts or the biggest team. They will be the ones that decide which judgments belong to humans and which reactions must be delegated to machines.

For readers outside security, the takeaway is simple: AI agents are not only changing how companies build software or automate office work. They are also changing the balance between attacker speed and defender speed. And in cybersecurity, speed often decides the outcome.


Practical implications

  • Security teams should assume that manual review will become a bottleneck in at least some parts of cyber defense.
  • Companies deploying AI agents should treat prompts, memory, and model behavior as high-value assets, not just supporting infrastructure.
  • Leadership teams should expect more "agent versus agent" security products as the market shifts toward autonomous defense.

Glossary

TermDefinition
AI agentSoftware that can plan, act, and adapt toward a goal without waiting for a human at every step.
Prompt injectionA way of tricking an AI system by feeding it inputs that override or manipulate its instructions.
Model weightsThe internal numerical parameters a trained AI model uses to make predictions and generate output.
Autonomous defenseSecurity systems that detect and respond to threats automatically instead of waiting for manual approval.
Compute speedMachine speed. In this context, the pace at which software can scan, test, and act far faster than humans can react.
EspionageSpying to steal sensitive information, often by governments or state-linked groups.

Sources and resources

Share this article