
OpenAI and Anthropic Spent Record Millions Lobbying DC

March 14, 2026 · 4 min read · 725 words
Tags: AI · AI lobbying · OpenAI · Anthropic · AI regulation
Image: Forbes video thumbnail about AI companies' lobbying spending (screenshot from YouTube).

Key insights

  • The companies calling for AI regulation are simultaneously lobbying for policies that benefit their own businesses, creating tension between safety advocacy and commercial interest.
  • California's SB 53 is the only concrete AI safety law that passed, while federal efforts have stalled, leaving states to fill the regulatory vacuum.
  • Dollar-per-agency government deals position AI labs as critical infrastructure providers, creating dependence that makes future regulation harder.
Source: YouTube · Published March 13, 2026 · Forbes
Host: John Palmer

This is an AI-generated summary. The source video includes demos, visuals and context not covered here.

In Brief

OpenAI and Anthropic each spent record amounts on federal lobbying in 2025, with Anthropic spending $3.13 million and OpenAI close behind at $2.99 million. At the same time, both companies are pursuing government contracts and offering federal agencies access to their AI models for as little as $1. They are also lobbying for tax credits and streamlined approvals for massive data center projects. A Forbes investigation finds that the companies building the most powerful AI systems are also becoming Washington's most active lobbyists, raising questions about who shapes the rules for a technology that affects everyone.


The spending

OpenAI, valued at $830 billion, and Anthropic, valued at $380 billion, both escalated their federal lobbying efforts in 2025. OpenAI spent $2.99 million and Anthropic spent $3.13 million at the federal level, with each company adding roughly $300,000 in California lobbying on top. Both companies began lobbying only in 2023, which makes the speed of the escalation notable.

Anthropic also made its first political donation in 2025, announcing a $20 million contribution to Public First Action, an AI regulation advocacy group. The company called the effort nonpartisan, though the supported policies often conflict with anti-regulation figures in the Trump administration, including AI czar David Sacks.


The SB 53 split

The clearest policy divide between the two companies came over California's AI safety law, SB 53, which took effect in January 2026. The law requires large AI model makers to create safety guardrails and self-report how they address risks before launching new systems. Companies that fail to comply face fines.

Both companies lobbied on the bill, but took opposite sides. OpenAI reportedly opposed it. Anthropic endorsed it. How much that distinction matters is debated. Kyle Qi of Llama Ventures, which holds an indirect stake in Anthropic, called the endorsement "more of a publicity move". Michael Kleinman of the Future of Life Institute was more blunt. Until companies support meaningful legislation, he says, talk of regulation is "empty." In his view, the industry is "hellbent on avoiding regulation."


Federal contracts and national security

At the federal level, the two companies' priorities look remarkably similar. Both lobbied on national security and AI infrastructure in 2025. Both offered government agencies access to their models for $1, with Elon Musk's xAI offering a similar deal for $42.

The pitch to Washington is straightforward: faster AI development strengthens American competitiveness. In an October 2025 letter to the White House, OpenAI's chief global affairs officer Chris Lehane called AI a "national security imperative" and "a once-in-a-century opportunity" to strengthen the economy.

Anthropic's lobbying disclosures reference export controls and the GAIN AI Act, a bill introduced in October 2025 that would restrict sales of advanced AI chips to adversaries such as China and Russia by granting U.S. customers a right of first refusal, a legal right to buy before other purchasers. OpenAI, for its part, quietly dropped language from its policies in 2024 that had prohibited "military and warfare" applications.


The data center push

Developing frontier AI, the most advanced and capable models available, requires massive data centers. Anthropic, OpenAI, Google, and Meta are all racing to expand capacity, but more than half of data center developments in 2025 were delayed at least three months, according to real estate firm JLL.

Both companies are lobbying for tax credits and streamlined approvals for new data centers. OpenAI alone is spending roughly $1.44 trillion over eight years on infrastructure, reportedly without a clear plan to fund it all. Anthropic has even offered to cover electricity costs for connecting new data centers to the power grid.


Why this matters

So far, no federal legislation meaningfully reshaping AI development has passed. Matt Lerner of Founders Pledge says "Washington is woefully behind on AI policy". That leaves policymakers "dangerously dependent on tech companies for information" about technologies that will affect everyone.

That dependence creates a structural problem. The $1 government deals position AI labs as critical infrastructure providers. The lobbying dollars shape the rules these companies operate under. And the companies calling loudest for safety are the same ones spending millions to ensure the regulations work in their favor. California's SB 53 remains the only concrete AI safety law on the books, while federal efforts have stalled.


Glossary

Lobbying: When companies pay people to influence politicians and government decisions on their behalf.
Federal lobbying: Lobbying directed at the national government, including Congress and the White House.
SB 53: California's AI safety law requiring large AI model makers to create safety guardrails and report risks before launching new systems.
Export controls: Government restrictions on selling certain technologies, like advanced AI chips, to other countries.
Frontier AI: The most advanced and capable AI models available at any given time.
Right of first refusal: A legal right that lets certain buyers purchase something before it can be offered to others.
Data center: A large facility housing thousands of computers that power cloud services and AI systems.
GAIN AI Act: A bill introduced in 2025 to restrict sales of advanced AI chips to adversaries by giving U.S. customers priority access.
