US Needs One AI Rulebook, Not Fifty

Key insights
- Fifty separate state AI laws create a patchwork that hurts companies and citizens alike, yet the US still has no federal framework. Meanwhile the EU's AI Act is already in force, leaving American businesses navigating conflicting rules across state lines.
- The Anthropic case sharpens the tension: when a company's values clash with how the government wants to use its technology, Scheidler argues the decision is not the company's to make. That tension will only grow as AI models become more capable.
- Regulating autonomous AI agents too early, before anyone fully understands how they work in practice, could erase the productivity gains now showing up across every major industry, including government itself.
This is an AI-generated summary. The source video may include demos, visuals and additional context.
In Brief
David Sacks, the former White House AI and crypto advisor, said this week that a patchwork of AI rules spread across 50 different states is simply not sustainable. Joe Scheidler, chief executive of Helios (a company that builds AI operating systems for both government and the private sector) and a former White House and State Department advisor, agreed. Speaking on Bloomberg Technology, he called for a single national AI framework and warned that moving too fast to regulate could cost the US its lead in a global technology race.
The core problem, Scheidler said, is a mismatch between how fast AI is moving and how slowly fragmented regulation can keep up. AI is a horizontal technology, meaning it cuts across every industry at once, from healthcare to finance to defense. Trying to regulate it with dozens of incompatible state laws creates chaos for builders and does not meaningfully protect anyone.
He put the stakes plainly: "If not America, then who?" The implication is that if the US ties itself in regulatory knots, other countries will set the global standard for how AI develops instead.
The problem with 50 different rulebooks
Right now, the US has no single federal AI law. What it does have is a growing collection of state-level rules that differ from one another in scope, language, and intent. A company building an AI product has to track what is legal in California, Texas, Illinois, New York, and dozens more, with each state potentially drawing the lines differently.
Scheidler described this as a "patchwork of regulations across 50 states" that is not a sustainable structure in a global competitive technology race. His argument is not that AI should go unregulated. It is that fragmented regulation is the worst of both worlds: companies face real compliance costs, but the rules do not add up to coherent protection for the public.
By contrast, the European Union passed its AI Act, a single law covering all member states. Whatever its flaws, it gives companies one set of rules to follow. The US has no equivalent.
When company values meet government needs
The second topic Scheidler addressed is harder to resolve. Anthropic has become the most visible example of a tension that will only grow: what happens when an AI company's values conflict with what the government wants to do with its technology?
Anthropic is the biggest case study on AI in national security and defense right now. The company has publicly pushed back on how the Pentagon has treated it, and some in Washington take the position that it is not up to private companies to decide how government agencies, particularly defense agencies, use the technology they build.
Scheidler sided with that view, carefully. He acknowledged that every American has the right to exercise their First Amendment rights (the constitutional protection for free speech). But he drew a line at private executives asserting too much influence over systems of government that already have clear processes for change. In his framing, if you disagree with what an administration is doing, you work through democratic channels. You do not use your position as an AI provider to block it.
He also noted that AI and machine learning in military contexts are nothing new; the US military has used these tools for decades. The current attention is driven partly by adjacent stories pulling the topic into the press.
How to regulate AI agents without killing them
The final question was the most forward-looking: how do you regulate autonomous AI agents (AI systems that can take actions and complete tasks on their own, without a person directing every step)?
Scheidler said the answer is a verticalized mindset, meaning industry-specific rules rather than one broad law covering everything. Regulating AI in healthcare should look different from regulating AI in defense or finance, because the risks and workflows are different in each case.
He also pushed back on the impulse to move fast on agent regulation. Platforms like Helios are trying to bridge the gap between generic AI models (trained on mixed-quality data from across the internet) and the sensitive, controlled workflows that government and enterprise actually need. Locking down agent behavior before that gap is fully understood could eliminate the productivity gains that are already showing up across the economy.
The dilemma is real: regulate too little, and you leave people exposed to harms from autonomous systems. Regulate too much, too soon, and you kneecap the technology before it has had a chance to demonstrate what it can do, handing the advantage to countries that are less cautious.
Glossary
| Term | Definition |
|---|---|
| Patchwork of regulations | When many different authorities each make their own rules that do not line up with each other, creating an inconsistent and complicated landscape. |
| National framework | One set of rules that applies across the whole country, instead of different rules in each state. |
| Agentic workflows | Processes where an AI system acts independently, completing tasks step by step without a human directing each move. |
| Verticalized approach | Regulating by industry (healthcare, defense, finance) rather than applying one generic rule to all of AI. |
Sources and resources
Want to go deeper? Watch the full video on YouTube →