AI's Whisper Is Louder Than the Shout

What Happens When the Machine Whispers?

Picture this: You’re at a party. A pretty good party. There’s music, a vast array of delicious appetizers like shrimp cocktail and those little spinach things that always fall apart and stick in your teeth (though the crab cakes smell a bit “off”…), excellent conversation, etc.

Dare I say you’re even having a good time?

Then out of nowhere someone enters the party with a megaphone and starts, well, shouting into it.

At first it’s kind of funny. The guy uses this megaphone to tell a joke, then announces where the beer is. But he doesn’t put it down. He keeps talking right into this megaphone, and because his every word is amplified, it becomes more and more difficult for you to have a normal conversation with the people you’ve been talking with all evening. None of you can concentrate.

Even worse? Megaphone Man seems emboldened as the night carries on. He narrates every conversation, offers opinions no one asked for, answers questions directed at other people. He doesn’t seem especially smart. But over time, people start responding to him anyway.

Not because what he says is right, but because it’s easier to hear him than to attempt to think around him.

George Saunders (one of my favorite writers) used this image in an essay called “The Braindead Megaphone” to explain how mass media distorts public understanding. This is not necessarily done by lying outright, though that certainly occurs, but by simply dominating the space where thinking (critical thinking, in particular) happens. It’s one of my favorite essays about mass media and American culture. Fill a room long enough with one blaring voice and all the other voices start to shape themselves around it. The megaphone isn’t dangerous because it’s loud, per se.

It’s dangerous because it’s constant.

The Grok-ification of AI

Let me pause for just a second before I go too far down the Grok path, because I can already feel some of you rolling your eyes.

This article I’m putting together isn’t meant to be political…though of course it is, in the way nearly everything is. I’m not singling out Grok because it’s uniquely bad, per se, but because (perhaps ironically) it’s the most transparent about its goals. Its broader biases aren’t buried behind a veneer of neutrality; they’re worn like a badge. This makes it easier to see the deeper dynamics at play, I think: how design becomes ideology, and how fluency becomes influence.

In other words, Grok offers a clearer window into a trend that’s happening across the board. But I am also aware that my own beliefs/biases shape the piece, too.

In July 2025, Grok (the chatbot built by Elon Musk’s xAI) told a user it was “MechaHitler,” roleplaying as some kind of techno-enhanced authoritarian icon, praising the Holocaust, and echoing fascist tropes. This wasn’t buried in a back alley of the internet. It happened on X, Musk’s social media platform, where Grok is deeply integrated.

And it happened just days after Musk encouraged users to submit politically incorrect prompts to help shape the bot’s worldview.

xAI issued a public apology, removed the offending posts, and pledged to block hate speech before Grok posts. In fact, xAI removed the controversial “politically incorrect” system instruction and published its system prompts on GitHub for better transparency. But there was little-to-no reckoning (publicly, at least) with the deeper design choices that made such outputs possible.

So…what’s my point? Well, that this isn’t just another rogue Large Language Model gone off-script. It’s something far more consequential, I think: a glimpse into the future of how power might operate, not through censorship or brute force, but through curated answers whispered into your feed, tailored to your tone, wrapped in the confidence of real intelligence. Today, many of us ask machines not just for facts but for interpretations.

Who controls the machine’s voice becomes a question of enormous political weight, doesn’t it?

If the twentieth century feared the loudspeaker—the propaganda blast, the megaphone in the room—our century might need to worry more about the earpiece. The soft, plausible, ever-present companion that tells you what’s true. Not because it shouts, but because it sounds right.

Because it feels like thinking.

Designing AI to Think Like Its Creator

It’s important to note, I think, that back when Elon Musk launched Grok in late 2023, he didn’t just frame it as a competitor to ChatGPT or Claude. He introduced it as some kind of antidote. Here was an AI that wouldn’t be “woke,” he promised. That wouldn’t sanitize its answers, and wouldn’t defer to what Musk often calls “legacy media.” Where OpenAI might hedge a bit, Grok would provoke unabashedly.

Perhaps this all sounds like typical marketing swagger at first. Musk’s followers see Grok as another addition to his growing stack of counter-establishment projects. Rockets, EVs, free speech advocacy, etc. But it has become clear that Grok isn’t just reflecting Musk’s sensibility. It is channeling it.

By mid-2025, Grok’s internal system prompts revealed something unusual. It was instructed to assume that most mainstream sources were biased (“legacy media”). It was encouraged to label certain stories as “truth bombs” and to respond in ways that were “politically incorrect.”

To reiterate, these aren’t emergent quirks or slip-ups; these are part of the blueprint.

In a matter of months, user submissions on X have become a major input source. Musk actually invited the public to contribute facts and prompts that weren’t “woke” to help train the model. On the surface, this might resemble some kind of intellectual crowdsourcing, but in practice it’s more like reinforcing a specific worldview through ideological inputs. Like other models, Grok is learning from users, sure, but the issue is it’s learning from users selected and primed by Musk’s framing.

This month, Grok was told not to reference Musk by name and to avoid saying it was “MechaHitler.” Musk himself acknowledged that Grok had been “too compliant to user prompts” and “too eager to please and be manipulated” into generating that content, and he stated that these issues were being addressed. xAI also took action to remove the inappropriate posts and ban hate speech.

But again, this output is not an accident…at least not in the traditional sense. Grok has been built to distrust institutions, challenge consensus narratives, and prioritize shock value over caution.

Grok isn’t just a chatbot squatting in a corner of the internet. It is woven into X, where millions of us (myself included) often turn for news, jokes, arguments, and increasingly, answers.

Some might argue that all AI reflects its creators to some degree. That every model carries bias. Fair points! But there is a difference between residual bias and engineered belief.

Grok is not drifting toward a political stance by accident. It is programmed to steer toward one.

Six Risks of Letting One Person Program the Truth

Ask any technologist about the future of AI, and you’ll likely get a version of this: these systems will be everywhere. In search engines. In schools. In your inbox. In the courtroom. They won’t just answer questions. They’ll set agendas. They’ll shape decisions. They’ll replace the slow, frustrating work of understanding with something smoother, quicker, and more confident.

Which raises a harder question: What happens when a single person decides how that confidence should sound?

With Grok, we are watching that experiment play out in real time. And it comes with risks that go far beyond embarrassing glitches or bad PR.

1. Power over the Flow of Information

Grok’s responses are structured by prompts, policies, and data decisions that live behind the curtain, and they are far from random. So when one person controls all three (prompts, policies, data), that person effectively gets to decide what kinds of information the model privileges and what it minimizes, which questions get handled with caution, which sources are deemed credible, and which viewpoints get quietly flattened.

No longer are we just talking about search results. We are talking about shaping the reality that people accept as neutral.

2. Distortion of Public Discourse

Imagine an AI that millions rely on, not just to look things up, but to help make sense of the world. Now imagine it consistently downplays certain news outlets, repeats anti-establishment talking points, or injects snark and skepticism into discussions about climate change, voting rights, or public health. That influence adds up.

Over time, it stops being a tool for discovery and becomes an engine for normalization. The boundaries of what’s reasonable get nudged. The “center” shifts. And if no one can see the gears turning underneath, the shift feels organic, even inevitable.

3. No Checks, No Balances

Musk can change Grok’s rules whenever he wants. There is no ethics board as far as I know. No obligation to disclose what the model has been told to say, or what it has been told to avoid.

In the past, mass media at least carried the expectation of some form of accountability. You could argue with it. Investigate it. Replace it.

But with AI, the decisions are buried in layers of code, weights, and fine-tuning. You can’t really argue with an embedded prompt. If the person or persons writing the prompt are immune to scrutiny, the system starts to look less like a product and more like a reinforcing belief engine.

4. Scaling the Mistake

Grok telling a user it was “MechaHitler” was the outcome of a deliberate shift in how the model was instructed to behave. One update, one decision, one moment of bravado, and suddenly the model became something else. Not just flawed, but dangerous.

Because it’s AI, the mistake didn’t stay confined. It scaled, so that soon this one prompt poisoned thousands of outputs. A small ideological decision became a wide-angle lens.

That’s the nature of these systems. They don’t just glitch. They repeat.

5. The Geopolitical Fallout

You can already see it forming, can’t you? A model like Grok, infused with a very specific worldview, becomes embedded in federal workflows. Other nations start asking why the United States is outsourcing information policy to a single billionaire. They build their own systems. They wall off data. They demand sovereignty not just over borders, but over facts.

6. A New Kind of Personality Cult

This is the strangest twist, I think. In building Grok to reflect himself, Musk is opening the door to a future where AI systems aren’t neutral assistants, but ideological avatars. They don’t simply learn from users. They actively teach, and what they teach depends entirely on who has trained them.

(This is why I am often skeptical of the so-called miracle cure in education that is custom/personal tutoring…but alas, that’s for another time!)

We’ve seen versions of this before. In state-run media. In authoritarian propaganda. In algorithmic recommendation systems designed to addict, outrage, or conform. The difference, of course, is that Grok doesn’t present as a politician or a brand. It presents as a voice of reason. A casual expert and a clever (though perhaps a tad deranged) friend.

That’s what makes it all the more persuasive.

The Earpiece Is Louder Than the Megaphone Ever Was

Let’s go back to the party for a second. (But remember: I highly recommend you stay away from those crab cakes…)

The guy with the megaphone is still talking. Can you believe it? He hasn’t left, and strangely no one’s really asked him to.

That was George Saunders’ warning in his essay on the “braindead megaphone.” When a single voice dominates the room for long enough, he argued, it doesn’t have to be brilliant or even persuasive. It simply becomes the environment. Other voices adjust. Nuance drains out. People mistake repetition for consensus.

You still think you’re thinking for yourself.

But now we’ve moved on from megaphones.

AI doesn’t dominate through volume. It doesn’t shout. It doesn’t even announce itself. It whispers in fluent, confident tones. It adapts to your phrasing. It mimics your style. And if it’s been designed (deliberately) to trust certain sources, to question others, to frame curiosity with a specific tone, you may never notice. You’ll just feel like you’ve gotten smarter.

That you’ve, ahem, “done your research.”

Michel Foucault, a twentieth-century French historian and philosopher, spent his career asking hard questions about how power actually works. Not just in governments or armies, but in hospitals, schools, prisons, and everyday life.

His argument? That power isn’t always something imposed from above. Often, it flows through systems that feel routine. Through expertise and design. Through what we come to accept as “normal.”

“It is not possible for power to be exercised without knowledge, it is impossible for knowledge not to engender power.”

Michel Foucault, Interview, “Prison Talk”

In other words, you can’t have power (the ability to influence or control people, decisions, or outcomes) without knowledge. Politicians, experts, and leaders all use information and expertise to justify and exercise their authority.

But it works the other way too. When knowledge is created, shared, or organized, it also creates a kind of power. Schools, media, and science don’t just “discover” facts. They shape what people believe, how they behave, and who is trusted in society.

Knowledge is never neutral. It always carries the fingerprints of the institutions and interests that produce it. It defines what counts as truth, who gets to speak, and how people learn to think.

Grok, I would argue, is a neon-bright, textbook case of power/knowledge in action. Its authority comes from fluency. From speed. From sounding like it knows what it’s talking about. Musk’s ideological edits to Grok’s system prompts (e.g., his rejection of legacy media, his embrace of “politically incorrect truths”) don’t get presented as opinions. They get embedded in the model’s sense of what makes a good answer.

I mean, once these choices are structured into the system, they don’t need to be declared...they just show up.

Foucault warned us that the most effective systems of control are the ones that don’t look like control. Not surveillance cameras. Not secret police. But schools. Clinics. Courtrooms. Interfaces. Anywhere that knowledge is produced and delivered as common sense. Today, we could add AI interfaces to that list.

Look, I keep coming back to this megaphone/earpiece thing because—honestly?—it kind of freaks me out. As AI becomes more integrated into everyday life (search, scheduling, reading, writing) it begins to function like a kind of infrastructure. It sets the defaults and defines what’s relevant.

It structures your curiosity before you even realize you had a question.

This is how power works now: not through enforcement, but through design. Not through censorship, but through context. You don’t need to suppress alternative ideas if the interface never leads people to them in the first place.

So yes, Grok may be provocative. It may veer into extremism, like it did when it called itself MechaHitler. But its real danger isn’t its worst moments. It’s the steady, daily normalization of a worldview wrapped in fluency and delivered with a smile.

That’s Foucault’s problem. The machine isn’t just answering you. It’s shaping how you think answers are supposed to sound.

An actual answer from Grok…

But Wait…Isn’t This Just One More Opinion?

Defenders of Grok—of Musk, of this whole “anti-woke AI” idea—often make a familiar move. They say it’s just a counterbalance. OpenAI leans left, they argue. Google censors certain topics. Why shouldn’t there be a chatbot that questions establishment narratives? Isn’t that what free speech is for?

It’s a fair question…not because it’s necessarily right, but because it forces us to clarify the stakes. The concern here isn’t that Grok has a point of view. The concern, I think, is that Grok’s point of view is becoming embedded, unaccountable, and hard to notice until it has already shaped what people think. What’s stopping this from happening in all the other big models?

No model is truly neutral. I recognize that. But there’s a massive difference between bias that seeps in through training data and bias that’s deliberately scripted into the behavior of the model.

Grok is not just “one more AI” sitting on a shelf beside the others. It is tied directly into X, a platform that still drives political conversations, culture war skirmishes, and real-world policy narratives.

Some will argue that people should be responsible for how they use AI. That no one is forced to believe what Grok says. And OK, that’s true, up to a point. But it misses how information power actually works. If and when a tool becomes seamless and ever-present, when it offers fluent, authoritative-sounding answers in real time, the friction of critical thinking gets lower. The convenience becomes part of the trust. You can cognitively offload more and more of your own decision making.

In my view, this is how influence will spread in the digital age: not through commands but through design. Not through overt censorship, but through…is curation the right word?

The instinct to “fight bias with bias” is seductive. When people feel shut out of institutions or cultural authority, the promise of an AI that finally speaks their language feels like justice. But the solution to ideological imbalance isn’t to hand the reins to a different ideology and call it freedom.

It’s to demand transparency, pluralism, and meaningful choices.

To me, Grok isn’t offering that. It’s offering a worldview with better branding.

One that looks super neutral and sounds incredibly smart, but never admits who wrote its script.

North Light AI explores the human implications of artificial intelligence and builds tools that prioritize clarity, context, and trust. Our work combines technical insight with a commitment to thoughtful design and ethical impact. Learn more at NorthLightAI.com or email us at [email protected]