We're Skipping the Struggle
John Dewey and the Crisis of Learning
Where "Dewey" Go From Here?
I keep coming back to John Dewey.
Dewey was a philosopher who spent his life thinking about democracy and education. For him, these were the same subject. He wrote extensively on how people learn, and became—for a time—America's most influential public intellectual.
Then he fell out of fashion. Too earnest, apparently. Too optimistic. Too focused on civic life when academia turned toward other concerns.
But his central insight holds: Democracy requires a certain kind of person. Not informed people, exactly, because information isn't enough. Democracy requires people capable of inquiry. People who can sit with confusion, test ideas against evidence, revise when they're wrong, take seriously perspectives that aren't their own.
Dewey thought these capacities weren't natural. They had to be cultivated. That cultivation was the point of education—not just schooling, but the broader process by which a society shapes minds capable of self-governance.
Give people the conditions for genuine inquiry, Dewey believed—real problems, real collaboration, real consequences—and understanding emerges. Deprive them of those conditions, and you get something else. Memorization. Performance.
People who know how to look like they're thinking without actually doing it.
I'm not going to explain how algorithmic manipulation works. I mean, you know. Social media optimizes for engagement, engagement means emotion, emotion means conflict and confirmation, the result is filter bubbles and polarization and distraction.
We've been talking about this for a decade.
What I'm less sure we've wrestled with is what this means for learning. I'm not talking about schooling. I'm talking about learning—the process by which humans come to understand something they didn't understand before.
Something has shifted in what counts as knowing, and in what it feels like to have a question and pursue it.
Dewey's word for learning—inquiry—means something specific.
Inquiry starts with genuine confusion. Something doesn't fit. You expected one thing and got another. This feels uncomfortable—and it's supposed to. The discomfort is the engine.
You try things. Hypotheses, experiments, attempts. Most fail. Failure is information. You revise and try again. This takes time, because understanding isn't transmitted. It's built, slowly, through effort that often feels like flailing.
And you can't do it alone. Other people see what you miss. They ask questions you didn't think to ask. Inquiry is collaborative not because collaboration is nice but because single perspectives are always partial.
You need friction.
This process is slow and uncomfortable. It requires staying confused longer than you'd like.
What in our current environment trains any of this?
The Feeling of Knowing
Here's what I think has shifted. Not knowledge, but the feeling of knowing.
Real inquiry feels like struggle. You're uncertain, and you sit with it. You're wrong, and you revise. When understanding finally comes, it comes with a history: you remember the confusion, the failed attempts, the slow clarification. The knowledge is yours because you built it.
The algorithmic environment offers something else: the feeling of knowing without the process. Scroll long enough and you'll encounter what looks like information on any topic. Facts, arguments, confident voices. It feels like learning. You came with a question; you leave with an answer.
But the answer arrived without inquiry. You encountered a claim that felt right—felt right because the algorithm selected it to feel right—and absorbed it.
I notice this in myself. The creeping impatience with difficulty. The reflexive reach for the phone when a question arises. The way an answer from the feed feels like knowledge even when I couldn't reconstruct the reasoning if you asked.
Thinking takes time, and the algorithmic environment eliminates time. Everything is fast. Scroll, react, scroll. The rhythm trains impatience. Thinking also requires tolerating not-knowing, but the feed floods you with confident voices. Certainty is more engaging than doubt, so doubt starts to feel like weakness.
And thinking requires contact with different perspectives—not as enemies to defeat but as perspectives that might show you something. The algorithm sorts for agreement. When you encounter difference, it's packaged as outrage.
Thinking requires silence. Space. The algorithmic environment fills every gap.
We're removing the conditions for thought while expecting people to think.
Now Add AI
If algorithmic feeds trained us away from inquiry, AI threatens to finish the job.
Ask a question, get an answer. Instantly. Confidently. No struggle, no uncertainty, no friction. The interface is designed to feel like insight. You came with confusion; you leave with clarity. What's not to like?
What's not to like is that the clarity is fake. Not because AI is always wrong—sometimes it's remarkably right—but because you didn't do anything. The answer arrived without inquiry. You absorbed it or you didn't. The process that Dewey thought was the whole point—the struggle that builds understanding—got skipped.
AI offers the feeling of knowing at scale. Any question, any time, answered in seconds. For a culture already trained to mistake the feeling for the reality, it's almost irresistible.
I watch people (myself included!) use AI the way I watch people scroll feeds. Not as a tool for inquiry but as a substitute for it.
Why sit with confusion when the machine will resolve it? Why struggle when the answer is right there?
But understanding isn't the answer. Understanding is what happens to you when you struggle toward the answer. Skip the struggle and you get information without comprehension. A head full of claims you couldn't defend if pressed.
The Deeper Problem
We talk about misinformation, polarization, distraction, cheating, plagiarism, job displacement. Real concerns. But these are symptoms.
The deeper problem: we're building an environment that makes inquiry feel unnecessary. First the feeds, now the models. Technology after technology designed to give us answers, resolve confusion, eliminate the friction that Dewey thought was the whole point.
Misinformation can be corrected. Polarization can be reduced. Cheating can be detected.
But if we've lost the capacity for inquiry—if we've trained ourselves to expect answers without struggle and to mistake absorption for understanding—then no correction helps. We don't have the equipment to evaluate the corrections.
Dewey didn't worry about people believing wrong things. He worried about people losing the capacity to form genuine beliefs at all. Beliefs that emerge from inquiry rather than absorption.
AI could accelerate that loss. Or it could help reverse it. Depends on how we build it, and what we ask it to do.
WWDD: What Would Dewey Do?
I have hunches more than answers.
If AI is going to serve inquiry rather than replace it, it has to be designed for productive discomfort. The kind that makes you think.
That means AI that asks questions instead of just answering them. It means AI that withholds the answer when struggle would serve you better, that introduces friction on purpose—alternative viewpoints, complicating evidence, reasons you might be wrong. AI that's slow when slow is what's needed. AI that says "I'm not sure—what do you think?" instead of performing omniscience.
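To make that concrete, here is a minimal sketch, in Python, of what friction-on-purpose could look like as a design choice. The prompt wording, the placeholder ask_model() call, and the two-attempt threshold are all assumptions for illustration, not a description of any existing product; the point is only that "withhold the answer until there has been some struggle" is something a builder can actually choose.

```python
# A minimal sketch of an "inquiry-first" wrapper around a chat model.
# Everything here is illustrative: the prompt wording, the placeholder
# ask_model() call, and the two-attempt threshold are assumptions, not
# any particular product's API.

INQUIRY_PROMPT = (
    "Do not give the answer outright. First ask one question that surfaces "
    "what the learner already thinks. Offer a complicating piece of evidence "
    "or an opposing view before any conclusion. If you are unsure, say so "
    "and ask what the learner thinks."
)


def ask_model(system: str, history: list[dict]) -> str:
    """Placeholder for a real chat-model call; wire in your own client."""
    raise NotImplementedError


def respond(history: list[dict], attempts: int, min_attempts: int = 2) -> str:
    """Withhold the direct answer until the learner has made a few attempts."""
    if attempts < min_attempts:
        # Early in the exchange: questions and friction only.
        return ask_model(
            INQUIRY_PROMPT + " Never state the final answer yet.", history
        )
    # After some genuine struggle, help consolidate the learner's own reasoning.
    return ask_model(
        INQUIRY_PROMPT
        + " You may now help the learner summarize what they worked out themselves.",
        history,
    )
```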
The current trajectory—instant answers, confident tone, frictionless interface—is a design choice. It's the choice that feels good in the moment and sells well. Deweyan AI would feel worse in the moment. It would require patience. It would trust that the struggle is worth it.
I don't know if the market will build this. The incentives point the other way. But educators might demand it. Some of us might choose it, if we understood what was at stake.
Dewey's wager was that the conditions for inquiry, once experienced, become their own argument. People who've felt what it's like to actually think—to struggle and revise and come out the other side with something they built—won't settle for simulation.
The algorithmic environment buries that experience. AI could bury it deeper. Or it could help us dig it back out.
I'm hoping for the second, though not necessarily betting on it.