
April Fool's / Orson Welles' Accidental Warning for Our Digital Future

The Accelerating Collision of Trust, Technology, and Truth in AI


It’s October 30, 1938, and Orson Welles, age twenty-three, strolls through the chilly streets of New York City toward CBS, clutching a script in his hand, unaware that he’s about to light the fuse of a national hysteria that has not been matched since.

Inside the studio, a bustling swarm of technicians prepares everything needed for the show: sound effects, controls, cues, etc. Right at 8:00 pm, a little red “On Air” light turns on, and Orson Welles, deepening his voice with the practiced authority of a news anchor, begins reporting over the air that Martian cylinders have crashed in New Jersey.

But it gets worse.

The alien invaders are more powerful than we can imagine. Heat-rays vaporize entire militias. Poison gas drifts through the streets of New York. Here and there, terrified screams from passersby punctuate the eyewitness reports.

In the middle of the broadcast, the station’s phones begin to ring.

And ring.

…And ring.

It seems that listeners across America believe they are under actual alien attack. Families flee their homes, police stations are flooded with calls, and highways clog with terrified citizens seeking escape. Welles continues broadcasting, though he reminds listeners that the program is merely the Mercury Theatre's Halloween offering.

Created with little more than sound effects and rehearsed voices, the famous “War of the Worlds” broadcast has since metastasized in the collective imagination into something far bigger than anyone involved could have predicted.

Perhaps America’s first viral “deepfake.”

Today's Digital Deceptions

Fast-forward to 2025, and we live in a world where fiction no longer needs a radio broadcast or a sound booth. Today, it takes a few lines of code.

A few seconds of someone's voice.

A few well-crafted frames.

We’re not waiting for another War of the Worlds moment. Lucky us, we’re already in the middle of it.

Because let’s just reiterate the very, very obvious: AI-generated media is no longer emerging. It’s arrived. And it’s indistinguishable, in many cases, from the real thing.

Seeing is no longer necessarily believing.

Why Does This Stuff Work?

So…why are these deceptions so effective? What is it about deepfakes, and synthetic media in general, that gets past our guard?

  • Believability and realism. In 2022, researchers Nightingale and Farid found that participants could correctly identify AI-generated faces only 48.2% of the time…worse than flipping a coin. Our brains are wired to believe what we see, and AI is closing the gap between real and fake faster than we can adapt.

  • Viral spread. In 1938, the Welles broadcast was limited to a single radio signal, a single moment in time. Today, of course, synthetic media can ricochet around the world within seconds. A 2018 MIT study found that false news spreads faster and farther than the truth on Twitter. The more shocking the story, the quicker it moves.

  • Confirmation bias. Humans are inclined to believe what fits our worldview. In 1938, listeners already anxious about the rising threat of war were more likely to believe in an alien invasion. Today, people are more likely to accept a deepfake if it aligns with their political beliefs, fears, assumptions, etc.

Add to that what danah boyd calls context collapse (where content is stripped from its original framing and absorbed through the lens of personal bias) and you have a perfect storm.

Real-World Consequences: Some Recent Examples

1. Political Disruption: The Zelenskyy Deepfake (March 2022)

During the early weeks of Russia’s full-scale invasion of Ukraine, a deepfake video emerges online appearing to show Ukrainian President Volodymyr Zelenskyy calling on Ukrainian soldiers to lay down their arms and surrender.

2. Financial Fraud: Deepfake Voice Clone Used in $35 Million Scam (2020)

Cybercriminals use AI-generated voice cloning to impersonate a company director. A bank manager in Hong Kong receives a phone call and believes he is speaking with the executive, who instructs him to transfer $35 million as part of a supposed company acquisition. Supporting emails and legal documents, also forged, convince the bank of the request’s legitimacy. The funds are transferred before the fraud is uncovered, and the money vanishes into a web of accounts across multiple countries.

3. Public Panic: Fake Images of Pentagon Explosion (May 2023)

In May 2023, a fake image depicting an explosion near the Pentagon goes viral on Twitter. Numerous blue-check accounts, including one impersonating Bloomberg News, amplify the image before fact-checkers can intervene.

And of course, who can forget “Pope-in-a-Coatgate”?

Learning From History

Context collapse is powerful: When media is encountered without its original context (a radio broadcast joined midway, a decontextualized video clip on social media), humans tend to fill in the blanks with worst-case scenarios.

New media requires new literacy: In 1938, many Americans were still learning how to critically process radio content. Today, we're in a similar transition period with AI-generated media.

Institutional trust matters: Those who trusted CBS implicitly were more likely to believe the broadcast was real. Similarly, today's deepfakes work best when they leverage existing trust in institutions or public figures.

Like Welles' War of the Worlds broadcast, today's synthetic (see: B.S.) media doesn't create panic from nothing. Instead, it amplifies what's already within us. Our fears. Our biases.

Our willingness to believe.

The match has been struck. The kindling of our fractured information ecosystem is already smoldering. And unlike Welles' broadcast, there's no simple disclaimer at the end to tell us what was real and what wasn't.

This isn't just history repeating itself, and the question isn't whether we'll face another "War of the Worlds" moment.

We're already living through it.

We are the Martians.

North Light AI helps organizations stay grounded—with ethical, human-centered AI strategies. Contact us at NorthLightAI.com to learn how we can help.
