
The AI Doc: Why a New Film Is Making People Both Scared and Hopeful About the Future

TRENDING
February 20, 2026 | by AI Editorial Team

A new documentary is quietly becoming one of the most talked-about AI films of the decade: The AI Doc: Or How I Became an Apocaloptimist. Running 1 hour and 43 minutes, it’s turning heads at Sundance and in tech circles for a simple reason: it makes people feel both terrified and strangely hopeful about artificial intelligence at the same time.

Instead of treating AI as a distant sci‑fi fantasy, the film follows a father‑to‑be watching AI accelerate faster than any technology in human history and asking a blunt question: What happens when the most powerful technology humans have ever made gets out ahead of us? You can see this framing in the official synopsis and festival materials, which stress that AI is here now, not tomorrow.

The film doesn’t pick a side between utopia and doom. Instead, it offers a new word for this strange moment we’re in: apocaloptimist—someone who can see the apocalypse in the distance, but still chooses to look for reasons to be optimistic. That tension between panic and hope runs through the entire documentary, and it’s a core part of why it’s sparking so much debate among critics, technologists, and everyday viewers alike.

Below, we unpack what The AI Doc is trying to do, how it was made, why the reviews are so divided, and what the film reveals about where AI—and all of us—might be heading next.

Theme | How It Appears in the Film | Emotional Weight
--- | --- | ---
Apocalypse | Deepfakes, self-replicating systems, black-box models, global AI race | High
Optimism | Medical breakthroughs, scientific acceleration, creative collaboration | Medium
Human Story | Father-to-be framing, baby on the way, family future questions | Medium
Uncertainty | Refusal to offer simple answers, focus on open questions and debate | Variable

What Is The AI Doc: Or How I Became an Apocaloptimist?

At its core, The AI Doc: Or How I Became an Apocaloptimist is a feature-length documentary about the fastest-changing technology on Earth: artificial intelligence. According to production and release details on Rotten Tomatoes, the film runs 1 hour and 43 minutes and is directed by Daniel Roher and Charlie Tyrell.

Roher comes into the project with serious documentary credentials: he’s best known for Navalny, his Oscar-winning film about Russian opposition leader Alexei Navalny. Tyrell has built a reputation as a creative non‑fiction filmmaker with a knack for visual experimentation. Together, they shift from political dissidents and personal histories to the global AI race—and more specifically, what that race means for a new baby being born into this world.

The official description calls the film “hand-made and eye-opening,” presenting AI as “the most powerful technology humanity has created”—a line echoed in its festival write‑ups. That power is framed as a double-edged sword: it’s precisely because AI is so capable that the stakes are so high if we get it wrong.

Instead of drowning viewers in technical jargon, the film uses a mix of urgency, humor, and inventive visuals to sort through what the directors themselves call today’s “AI insanity.” This mirrors how many businesses are wrestling with AI today—trying to make sense of exponential change while still having to execute on real‑world plans, a dynamic you see in AI‑first strategy work like strategic AI transformation frameworks and digital transformation roadmaps.

“The film frames AI as the most powerful technology we’ve ever built—but one whose direction is still up for grabs. That’s where the ‘apocaloptimist’ comes in: someone who holds panic and hope in the same hand.”

Inside the Story: A Father-to-Be Faces the AI Future

Where many tech documentaries stay abstract and policy‑driven, The AI Doc stays grounded in something deeply personal: a baby on the way.

The central framing device is straightforward but emotionally heavy:

  • A future parent is watching the rapid development of AI and asking, “What kind of world is my child about to grow up in?”

Through that lens, AI stops being just a scientific or economic issue. It becomes a parenting question. This framing shows up repeatedly in the film’s official description and early reviews: the debate over AI is no longer just about GDP growth or model benchmarks; it’s about bedtime stories, future schools, and whether a child’s first “friend” might be a chatbot instead of another kid.

That point of view reshapes the entire film:

  • Risks stop being abstract probabilities and become questions about safety, truth, and control in a child’s life.
  • Opportunities become less about shareholder value and more about better medicine, faster science, and more time for human connection.
  • AI ethics turn into day‑to‑day tradeoffs families will have to make—echoing concerns businesses wrestle with when they adopt automation and AI at scale, as outlined in enterprise AI adoption guides.

By keeping the story close to this one family, the film manages a tricky balance: the ideas are simple enough for non‑technical viewers (and even younger audiences) to follow, but substantial enough that AI researchers and policymakers still see their own concerns reflected on screen.

Why “Apocaloptimist”? Balancing Doom and Hope

The strangest and most memorable word in the title—apocaloptimist—isn’t just a branding gimmick. It’s the emotional core of the film.

Across interviews and scenes, we watch the filmmaker’s attitude evolve in two directions at once:

1. The Apocalyptic Side

On the apocalyptic end, the film surfaces several very real concerns:

  • Hyper‑realistic manipulation – AI systems that can imitate voices and faces in seconds, making it trivial to create convincing deepfakes.
  • Black‑box models – systems whose inner workings are so complex that even their creators struggle to explain individual decisions.
  • A global AI arms race – companies and countries (notably the U.S. and China) pushing for speed over safety, a dynamic discussed in the film’s Sundance 2026 panel.

These are not speculative fears. They echo broader concerns that show up across AI ethics discussions and analyses of autonomous decision-making: when opaque systems operate at scale, the cost of mistakes and misuse can be enormous.

2. The Optimistic Side

Yet at the same time, the director encounters compelling reasons to be hopeful:

  • AI in medicine, science, and education – tools that can help doctors analyze data faster, support scientists exploring new hypotheses, and assist teachers with personalized learning.
  • Creative collaboration – artists experimenting with AI as a partner rather than a replacement, using generative tools to expand what’s possible in film, music, and design.
  • Growing calls for responsibility – experts across industry and academia pushing for transparency, safety research, and robust public debate, as highlighted in the Sundance discussion.

By the end of the film, instead of choosing one side, the filmmaker lands on the hybrid identity of “apocaloptimist”—someone who can say:

“Yes, I can imagine very bad outcomes. But I still believe we can steer this.”

Coverage of the film on platforms like Rotten Tomatoes and in essays about documentary storytelling, such as this look at Frederick Wiseman’s work, highlights how well this term captures the mood of our AI era: a mix of dread, curiosity, and stubborn hope.

How the Film Was Made: A Sisyphean AI Quest

Behind the scenes, making The AI Doc turned out to be almost as chaotic and difficult as tracking the technology itself. In a 39‑minute Sundance 2026 panel moderated by Variety, the directors and producers describe the project as a Sisyphean task: pushing a boulder up a hill that keeps getting steeper—and keeps changing shape.

A Mountain of Interviews and Sources

To tell the story, the team went far beyond a handful of talking heads:

  • They interviewed over 40 people on camera, ranging from AI researchers and safety experts to policy thinkers and industry insiders.
  • Those interviews generated an enormous 3,326 pages of transcripts—a mountain of words to sift through, as they recount in the Sundance discussion.
  • They developed over 100 contacts across major AI labs such as OpenAI, Anthropic, and Google DeepMind, creating a network of sources deep inside organizations that typically stay out of the spotlight.

Crucially, the team didn’t just sit back and wait for celebrity CEOs to respond to emails. They used a deliberate “laddering up” strategy:

  • Start with lower‑profile insiders who are closer to the work.
  • Use those conversations to build understanding, trust, and credibility.
  • Then climb gradually toward more central, higher‑profile figures in the AI ecosystem.

By the end of this process, they say they reached all 10 of the AI lab interview subjects they had initially set out to include, offering a rare look into the thinking inside labs like those building cutting‑edge tools such as OpenAI’s deep research systems.

Filmmaking in a Moving Storm

All of this was happening while the AI landscape itself was lurching forward month by month. As the filmmakers describe in the Sundance panel:

  • New AI models were being released mid‑production.
  • Safety and policy debates were shifting in real time.
  • Public awareness spiked repeatedly after viral demos and controversy cycles.

Each time the team felt like they had a solid story arc, reality changed again. That forced them to repeatedly reinvent the structure of the film, much like how companies have to revisit their AI roadmaps as new capabilities land—echoing the iterative planning you see in AI implementation roadmaps.

To cope with this moving target, the filmmakers:

  • Worked with loose, adaptable “story skeletons”—structural outlines they could rearrange as fresh developments arrived.
  • Leaned into total collaboration, with directors also serving as producers and everyone contributing to structure and story.
  • Drew inspiration from AI itself, comparing the film’s evolving narrative to machine learning experiments where models “self-destruct and relearn” after failure—just as their own edits sometimes collapsed and had to be rebuilt.

“The boulder kept rolling back down. Every time a new model launched or a safety scandal broke, they had to rethink what story they were telling—mirroring how humanity is being forced to co‑evolve with AI.”

Visualizing the Invisible: How the Film Shows AI

One of the hardest problems any AI documentary faces is visual: how do you show something you can’t see? Neural networks don’t have obvious, cinematic images the way a rocket launch or a protest march does. They live on chips and servers, inside data centers and code.

The AI Doc leans into this challenge with a vivid, mixed‑media style. As noted in early critic reactions on Rotten Tomatoes, the film uses:

  • Surreal imagery to evoke the “inner life” of AI systems.
  • Playful mixed media to keep complex explanations engaging.
  • Fast, colorful editing that mirrors the noise and overload of today’s AI news cycle.

Critic Cole Groth at FandomWire gave the film an 8/10, praising its “visually stunning efforts to depict the ‘unvisualizable’”—the internal mechanics and wide‑ranging impact of AI systems we usually only see as text boxes on a screen. Daniel Howat at Next Best Picture rated it 8/10 as well, highlighting how the creative imagery keeps viewers engaged even as the film tackles dense concepts.

  • Surreal metaphors – Abstract sequences stand in for the “thoughts” and inner workings of large models.
  • Risk imagery – Deepfake‑like visuals and self‑replicating patterns evoke loss of control and safety risks.
  • Wonder & play – Colorful, experimental sequences mirror the excitement many feel around new AI tools.
  • Adaptive storyboarding – Plan B and C concepts let the team keep updating animations as the tech landscape shifts.

In the Sundance panel, the filmmakers explained that they wanted these visuals to balance risk and wonder. On one side, they show AI systems that can:

  • Clone voices and photos in seconds, making fake media almost indistinguishable from the real thing.
  • Self‑replicate, spreading or modifying code without direct human oversight.

On the other side, they leave space for awe, curiosity, and experimentation—mirroring how many creators are exploring tools like Runway‑style AI tools that enable new kinds of video and image generation.

To keep up with rapid AI advances, the team kept Plan B and Plan C visual concepts ready throughout production, swapping in updated animations, edits, and metaphors as the underlying technology changed.

Who Made The AI Doc? A Heavy-Hitting Creative Team

Part of what makes The AI Doc stand out is the unusually strong lineup behind it. Per production credits listed on Rotten Tomatoes, the film combines serious documentary pedigree with blockbuster‑level creative firepower.

  • Directors:
    • Daniel Roher – Oscar‑winning director of Navalny, known for sharp political storytelling.
    • Charlie Tyrell – documentary filmmaker with a flair for inventive, visually driven non‑fiction.
  • Producers:
    • Daniel Kwan – one half of the directing duo behind the Oscar‑sweeping Everything Everywhere All at Once.
    • Jonathan Wang – producer of that same multiverse‑bending hit.
    • Shane Boris, Diane Becker, Ted Tremper – seasoned names in documentary and narrative production.
  • Production companies:
    • Fishbowl Films
    • Cottage M
  • Distributor:
    • Focus Features, a major distributor known for carefully curated, conversation‑starting films.
  • Genre & language:
    • Documentary, English‑language.
  • Runtime:
    • 1 hour 43 minutes.

This blend of rigorous documentary skill and boundary‑pushing creative energy helps explain why The AI Doc can feel both informationally dense and stylistically wild at the same time.

From Sundance to Theaters: How The AI Doc Is Reaching Audiences

The AI Doc began its public life on one of the most important stages for independent film: the Sundance Film Festival 2026, where it premiered to festival audiences eager to make sense of the AI wave that has swept through every part of culture.

From there, the film was acquired by Focus Features, with a limited theatrical release set for March 27, 2026. That release pattern matters:

  • The film will open first in select cities, rather than as a wide blockbuster rollout.
  • Early audiences are likely to be festivalgoers, tech enthusiasts, AI researchers, and cinephiles who actively seek out conversation‑starting work.
  • If response is strong, it could expand to more theaters or make a relatively fast jump to major streaming platforms.

At this stage, the public conversation around the movie is still being shaped primarily by critics and festival audiences, which is why the emerging split in reviews is so telling.

What Critics Are Saying: Praise, Pushback, and a Big Debate

On Rotten Tomatoes, The AI Doc: Or How I Became an Apocaloptimist currently has a Tomatometer score based on a small pool of nine critic reviews. There are no verified audience ratings yet, since the film hasn’t reached a wide public release.

The critical response so far paints a picture of a film that is energetic, urgent, and visually bold—but also controversial in how it handles AI’s darkest risks.

The Praise: Timely, Funny, and Accessible

Several critics see The AI Doc as one of the most effective AI documentaries to date:

  • Brian Tallerico (RogerEbert.com) gives the film 3 out of 4 stars, commending its ability to raise awareness and its willingness to spark conversation rather than pretend it has all the answers.
  • Derrick Murray (NERDBOT) rates it 4 out of 5, calling the film a Sundance standout that’s timely, urgent, and funny—a rare mix for a topic that often feels abstract and heavy.
  • Cole Groth (FandomWire) and Daniel Howat (Next Best Picture) both give it 8/10, focusing on its “visually stunning” style and creative imagery that keeps audiences engaged through complex material.

These positive reviews tend to emphasize the film’s role as a gateway: something that can pull broad, non‑technical audiences into the AI conversation without overwhelming them, much like accessible explainers on AI and automation trends help non‑experts understand the stakes.

The Criticisms: Too Trusting, Too Toothless?

Other critics are less convinced, arguing that the film’s measured tone ends up going too easy on the companies driving AI’s rapid deployment.

  • William Bibbiani (TheWrap) criticizes the film’s choice of pro‑AI sources, calling some of them untrustworthy. He argues that strongly critical or anti‑AI perspectives don’t get the exploration they deserve.
  • Tim Grierson (Screen International) goes further, describing the film as “toothless” for accepting interviewees’ statements too uncritically, without aggressively challenging their assumptions.

Even several of the more positive critiques echo a similar observation: The AI Doc seems more interested in framing questions and starting debates than in laying down firm conclusions about whether AI is on balance good, bad, or manageable.

“For some, that open‑endedness is exactly what we need in a rapidly changing field. For others, it feels like a dodge at a moment when they want sharper, more confrontational journalism about AI’s risks.”

Inside the Sundance Panel: Transparency, Black Boxes, and Public Power

One of the most revealing looks into what the filmmakers actually think about AI comes from their Sundance 2026 panel discussion. Joined by producers Diane Becker and Ted Tremper, directors Daniel Roher and Charlie Tyrell expand on several of the documentary’s biggest themes.

1. Black Boxes vs. Public Demands for Transparency

The team emphasizes that many modern AI systems are effectively black boxes: their internal operations are so complex and emergent that even domain experts struggle to give clear, causal explanations for individual outputs.

At the same time, they note a growing public demand for transparency:

  • People are becoming less willing to accept “it’s too complicated” as an explanation.
  • They are particularly skeptical when opacity is justified by references to an AI race with China, where secrecy and speed are used to rationalize hiding technical details from the public.

That tension—between black‑box complexity and a healthy desire for accountability—is one of the documentary’s central threads. It also echoes conversations happening in industry about AI audits and assessment frameworks, where organizations are under increasing pressure to show how their systems work and how they’re being monitored.

2. Risks That Exist Today, Not Just in Sci‑Fi

Over the course of production, the filmmakers say they were struck by how many AI risks are no longer speculative:

  • Voice and image cloning – AI can already clone a voice or photo in seconds, making it easy to create highly convincing deepfakes.
  • Self‑replicating systems – some AI‑driven or AI‑assisted systems can copy or propagate themselves without direct human oversight, raising fears about loss of control.

These are the same kinds of capabilities that have driven the growth of AI detection tools and authenticity verification products, as people try to keep up with an information environment where almost anything can be faked.

3. Co‑evolution: Humans and AI Learning Together

Despite those risks, the panel returns multiple times to the idea of co‑evolution: the notion that as AI systems advance, humans—individually and collectively—are learning and adapting too.

  • Governments are scrambling to write new rules and regulatory regimes.
  • Companies are building internal guidelines, AI governance boards, and risk assessments, much like the processes described in AI automation adoption playbooks.
  • Everyday people are learning, painfully and quickly, how to recognize deepfakes, mistrust too‑good‑to‑be‑true videos, and fact‑check suspicious content.

Interestingly, the filmmakers describe their own production process as mirroring a machine learning lifecycle: they would try a narrative approach, watch it “self‑destruct” as the world changed, then rebuild a new version that better fit the latest reality.

Limits of What We Know: A Film Inside an Unfinished Story

It’s important to note that even with all the available public information—from Rotten Tomatoes listings to the Sundance panel—we’re still seeing The AI Doc through a partial lens.

Right now:

  • There’s no complete public list of every AI expert, policy voice, or industry figure interviewed in the film.
  • There are no large‑scale audience metrics yet—no reliable survey data or box‑office trends to show how the broader public is reacting.

In that sense, the film mirrors AI itself: it’s a work in progress being released into a world that is also mid‑transition. As the limited theatrical run begins and more people see The AI Doc, we can expect:

  • More detailed think pieces and policy‑oriented reviews.
  • Reaction essays from people working inside AI labs, who recognize themselves or their colleagues on screen.
  • Debates in classrooms, boardrooms, and online communities about whether the film goes too easy on AI, too hard, or lands in an uncomfortable but honest middle.

Why The AI Doc Matters Right Now

Regardless of where you land on the critical spectrum, the timing of The AI Doc is no accident. AI systems—from chatbots and image generators to powerful code and research assistants—are already reshaping:

  • How we work (automation, AI copilots, and synthetic media).
  • How we learn (AI tutors, adaptive curricula, and personalized content).
  • How we create (generative art, music, and video).
  • How we govern and do business (algorithmic decision‑making, risk scoring, and strategic planning).

Meanwhile, businesses are racing to figure out how to build AI‑first strategies, deploy automation safely as described in automation trend analyses, and manage the very real organizational changes highlighted in enterprise AI adoption frameworks.

The AI Doc enters that moment and does a few key things:

  • It frames AI through the eyes of a parent, not just a CEO or policymaker, making the stakes feel intimate and immediate.
  • It mixes urgency with humor, cutting through both hype and despair to keep people engaged long enough to think.
  • It blends apocalyptic fears with grounded optimism, refusing both easy utopianism and pure doom.
  • It draws on dozens of interviews with insiders at labs like OpenAI, Anthropic, and Google DeepMind, offering a more textured view of the people building and critiquing these systems.
  • It visualizes the invisible, using daring, colorful imagery to make abstract machine learning concepts feel concrete.

Why This Documentary Format Matters

For many people, a 1‑hour‑43‑minute documentary will be more influential than any white paper or technical blog post. Films like The AI Doc can shift how a broad audience feels about AI—what questions they ask, what policies they support, and how comfortable they are with AI‑driven change in their own workplaces and communities.

Importantly, the film does not claim to have final answers. It doesn’t calculate the precise odds of AI catastrophe or utopia. It doesn’t endorse a five‑point regulatory plan. Instead, it asks viewers to live for a while in that uncomfortable middle space—where catastrophe is plausible but so is tremendous progress, and everything depends on what we do next.

The Takeaway: Learning to Live as Apocaloptimists

After spending time with The AI Doc: Or How I Became an Apocaloptimist, one idea tends to linger: whether we like it or not, most of us are being pushed toward apocaloptimism.

We can:

  • Clearly see the dangers—from deepfakes and voice cloning to black‑box systems and global AI arms races, the types of threats highlighted in the Sundance panel.
  • Recognize the potential—for better medicine, faster science, new forms of art, and tools that expand rather than shrink human creativity, as emphasized in the film’s official description.
  • Demand more openness and accountability from the companies and labs racing to deploy advanced AI, whether that’s through audits, transparency reports, or regulatory oversight.

And at the same time, we can choose not to look away.

In that sense, The AI Doc is ultimately less about AI itself and more about us—about the courage it takes to stand in the middle of fear and hope and keep asking hard questions anyway. As the film moves from Sundance screens to a March 27, 2026 theatrical release and, eventually, into living rooms around the world, that may be its most important contribution:

Not to finish the debate about AI, but to start it—across homes, classrooms, newsrooms, and labs everywhere.

The technology will keep racing ahead. The boulder will keep rolling. But if The AI Doc succeeds, we may have a few more people willing to push—clear‑eyed about the apocalypse, but still stubbornly optimistic that we can steer away from it.
