New Documentary Says We Must Choose Our AI Future Now

Key insights
- Harris reframes the AI debate: the question is not whether AI goes well or badly, but whether we actively choose to steer it. That shift turns the audience from spectators into participants.
- The Human Movement Harris describes already exists in fragments: lawsuits, boycotts, phone-free schools. It just has not recognized itself as one unified force yet.
- Recursive self-improvement, where AI improves its own code without human help, could arrive within one to two years according to industry insiders. No rules exist to handle it yet.
This is an AI-generated summary. The source video may include demos, visuals and additional context.
In Brief
A new documentary called The AI Doc: Or How I Became an Apocaloptimist opened in theaters across the United States on March 27, 2026. It was directed by Daniel Roher, who also directed the Oscar-winning documentary Navalny, and premiered at the Sundance Film Festival in January. The film follows Roher as he prepares to become a father and sets out to interview AI leaders to understand the world his child will inherit. It currently holds an 89% positive rating on Rotten Tomatoes. NBC News correspondent Gadi Schwartz spoke with Tristan Harris, co-founder of the Center for Humane Technology, about what the film is trying to do and what viewers should take from it.
Agency, not optimism
When Schwartz asks whether Harris shares the film's optimism, Harris immediately changes the question. He says he thinks about it in terms of agency rather than optimism or pessimism. "The default path that we're heading on with AI is very dangerous," he says, but the film's purpose is not to deliver a verdict on the future. It is to give people enough clarity to make a choice.
Harris argues that most public debate about AI is stuck in the wrong conversation. You have optimists explaining why everything will be great. You have pessimists warning about what could go wrong. You have CEOs saying whatever serves their companies. The film puts all of them in the same picture so the audience can compare them directly. "If we can have clarity, then we can have agency about which future we want to choose," Harris says. "So really, I emphasize choice rather than optimism or pessimism."
The documentary's subtitle, "or How I Became an Apocaloptimist," captures this idea. An apocaloptimist is someone who sees both the catastrophic risks and the transformative potential of a technology without letting either view drown out the other.
The threats are already visible
Harris lists what the dangerous default path looks like. On the economic side, he warns of mass unemployment in which 100 million people have neither jobs nor a real path to retraining, because AI will improve faster than humans can learn new skills. On the military side, he says we are building "the Terminator autonomous weapons future that we warned ourselves not to build." In Ukraine, AI-guided drones are already being used to make targeting decisions without a human making the final call. The human is removed precisely because it makes the drone harder to jam electronically.
Harris also invokes a quote from the biologist E.O. Wilson, who died in 2021: "We have paleolithic brains, medieval institutions, and godlike technology." Harris extends the metaphor to AI, noting we now have 21st-century technology that is accelerating toward what he calls 24th-century capability, while our governments are still running on 18th-century designs. The result is what he calls "upgrade or collapse": either humanity upgrades its institutions to match the speed of technology, or things go badly.
The Human Movement already exists
Schwartz presses Harris: after people see the film and feel some clarity, what do they actually do? Harris points to what he calls The Human Movement, which he says already exists in fragments, even if most people do not think of it as a single movement.
He gives four examples. A $375 million lawsuit against Meta for deliberately designing products to addict children. The U.S. Senate blocking an attempt at federal preemption (a proposed measure that would have barred individual states from regulating AI; blocking it preserved the states' ability to act). Parents organizing to make entire schools phone-free. And the consumer response when Anthropic objected to Pentagon surveillance demands: ChatGPT subscriptions dropped sharply, while Anthropic subscriptions jumped. "We want to build a pro-human future," Harris says, "not one that is disempowering humans, taking away our freedom, taking away our privacy, or taking away our kids' mental health."
The website thehumanmovement.org, which Harris mentions in the interview, lists concrete actions people can take to push for stronger oversight.
The black line
The most urgent part of the interview comes when Schwartz asks how much time we actually have. Harris says the question is hard to answer because several different risks sit on different timelines, but the one that worries people inside AI the most is recursive self-improvement.
Recursive self-improvement means an AI system begins improving its own code and capabilities without human involvement, creating a cycle that speeds up on its own. Rather than a team of engineers doing AI research, someone presses a button and the AI runs its own experiments, writes its own improvements, and becomes something no one fully understands. Harris compares it to crossing the event horizon of a black hole: you do not know what comes out the other side.
He calls this not just a red line but a black line. A red line is something you are not supposed to cross. A black line means you cannot safely cross it at all, at least not until you know how to do it carefully. Industry insiders estimate recursive self-improvement could happen within one to two years, if not sooner. Meanwhile, companies are racing toward it. "They're basically counting on the fact that the public is oblivious and isn't tracking this," Harris says, "which is why there needs to be massive public pressure."
He adds that this is not a US-vs-China problem. Both the Chinese Communist Party and any US president want to remain in control, and neither actually wants an uncontrollable AI taking over. That creates a surprising mutual interest in establishing human control as a shared floor.
Pro-steering, not anti-technology
Harris is careful to say the movement he is describing is not anti-technology. It is pro-steering and pro-braking. The film celebrates what AI could do — new cancer drugs, breakthroughs in materials science, tools that expand what is humanly possible. The argument is not that AI should stop, but that accelerating without steering leads to a crash.
The AI Doc aims to be a collective moment where audiences from across the political spectrum can sit with the same facts, the same voices, and then decide which future they want. Harris says the film includes the tech CEOs themselves, who are forced to answer the same questions. He does not think their answers are particularly reassuring.
Glossary
| Term | Definition |
|---|---|
| Apocaloptimist | Someone who sees both the catastrophic risks and the transformative potential of a technology, without letting either view cancel out the other. Coined for this documentary. |
| Recursive self-improvement | When an AI system improves its own code and capabilities without human involvement, creating a feedback loop of accelerating intelligence. Harris calls this a "black line" that should not be crossed without careful safeguards. |
| Federal preemption | When the federal government claims the exclusive right to regulate something, preventing individual states from making their own rules. The U.S. Senate blocked an attempt at federal preemption of AI regulation, keeping states free to act. |
| Human in the loop | A system design where a human makes the final decision at key points. AI-guided drones in Ukraine increasingly remove the human from the loop to defeat signal jamming. |
Sources and resources
Want to go deeper? Watch the full video on YouTube →