The Humanities Crisis Is an AI Crisis
What happens when we automate poetry before we learn to read it?
A version of this post appeared on Andrew Mitchell's LinkedIn.
THE IRONY
When I heard that West Virginia University killed its entire World Languages department in August 2023—French, Chinese, German, Spanish, gone—I wasn't shocked. It was one cut among many. I'd worked in higher ed long enough to know the crude choreography of budget cuts: first the euphemisms about "efficiency" and "realignment," then the layoffs, then promises of better days ahead.
Then nothing.
What got me was the timing. This was the year ChatGPT went truly mainstream, the same year governments scrambled to draft AI regulations, the same year everyone suddenly cared about the "ethics" of machines that could write essays about what it means to be human.
And here were our largest public universities deciding that the actual study of language, culture, and ethics was...optional. No, that’s not right—not even optional. Not an option at all.
The message landed clearly enough: the future belongs to the people building the systems, not the people who might understand what those systems mean or why they matter.
I'm not saying the humanities were thriving before AI showed up—they've been getting gutted for decades—but the contrast hit me hard. We're building technology explicitly designed to mimic humanity while systematically defunding the people who teach us how to be human.
How We Got Here
The money problems are real. In New Hampshire, where I live, per-student state funding has dropped 40% since 1990 when you adjust for inflation. We spend $4,629 per student. Dead last in the country. The national average is over $11,000.
This means universities rely more and more on tuition, which means they chase "marketable" majors: business, data science, nursing, cybersecurity. The ROI logic takes over. Learning has to justify itself in economic terms. Parents want salary data. Legislators want metrics. Administrators turn curiosity into key performance indicators.
And I get it to an extent. College costs a fortune, and a philosophy degree looks like a waste when success gets measured six months after graduation.
Neil Postman saw this coming back in the 1980s. In Amusing Ourselves to Death, he warned we weren't heading toward Orwell's world of oppression but Huxley's world of distraction. The danger wasn't that truth would be banned but that it would become utterly irrelevant. Somehow we’ve institutionalized that logic now. Programs survive only if they're instantly gratifying, easily branded, endlessly marketable.
Public universities behave like corporations selling degrees to customers. Administrative layers multiply. Marketing offices rival academic departments in size. When humanities enrollments dip, administrators cut funding. Classes disappear, advising vanishes, tenure lines freeze. Students see shabby buildings and overworked adjuncts and conclude the institution doesn't believe in the subject. They switch majors. Enrollment drops further.
Alas, the spreadsheet confirms what the spreadsheet created.
The Silicon Valley Paradox
Here's what makes this surreal: Silicon Valley suddenly speaks fluent humanities. Have you noticed that? Every keynote promises "human-centered AI." Every press release invokes empathy, creativity, ethics.
Jensen Huang at NVIDIA said: "It is our job to create computing technology such that nobody has to program. And that the programming language is human; everybody in the world is now a programmer."
Sounds great. But look at the actual budgets. The same companies quoting Aristotle in product launches have fired their ethics teams and lobbied against meaningful regulation. Philosophers may get hired to polish mission statements, if they are lucky, but not to change them.
This is a kind of moral outsourcing: celebrating the language of ethics while defunding its practitioners. "Human-centered" in techspeak often means nothing more than "watered-down, slightly human flavored."
What We're Losing
We're not just losing departments. We're losing the capacity to think.
Research shows that humanities programs remain one of the few places in higher education where ambiguity, moral tension, and complex interpretation are built into the learning process. Students report measurable gains in critical thinking, ethical reasoning, intellectual flexibility.
Early evidence suggests that overreliance on generative AI can dull those exact habits of mind.
You don't need Scooby-Doo to solve this mystery: when we dismantle the disciplines that require thinking, we risk hollowing out the capacity to think at all.
And yes, humanities programs do cultivate skills every employer claims to want: critical thinking, communication, ethical reasoning. But more than that, they teach moral imagination. The ability to interpret complexity, to see the world through someone else's eyes, to ask whether we should, not just how we can.
The data contradicts the "uselessness" narrative too.
The Strada Education Foundation, for example, found that 82% of liberal arts graduates are employed, earning a median of $55,000. Those who pursue graduate study average $76,000. Two-thirds change careers between their first and second jobs, showing more adaptability than narrowly trained peers. They also report higher civic engagement and life satisfaction.
The problem is perception, not outcomes.
The Empathy Test
Philip K. Dick wrote Do Androids Dream of Electric Sheep?, the novel behind Blade Runner. In it, the only way to distinguish humans from androids is a test that measures empathy. But by the end, it's the humans who seem least capable of passing. They've outsourced feeling to gadgets and simulated compassion through "empathy boxes."
That's where we are. AI can mimic sympathy and generate moral language, but it doesn't feel anything. And it shouldn't. The problem is we risk becoming the same.
The humanities are our real-world empathy test. They measure and strengthen our capacity for empathy, context, meaning.
Eliminating them isn't fiscal prudence. It's a species-level error.
The Real Paradox
Because we know by now that artificial intelligence isn't just a technical challenge…it's a moral one. These are systems that decide who gets hired, who gets loans, who is released from prison, who receives medical care. In other words, these are questions of justice, fairness, dignity.
Precisely the questions the humanities exist to ask.
The tech community knows this. The US government's AI Risk Management Framework centers "risks to individuals and society." The EU AI Act treats ethics and human rights as legal obligations. Fei-Fei Li, whose early research taught computers to "see," now leads one of the world's most influential labs arguing that "AI is a human technology whose values must be human values."
Harvard's Embedded EthiCS program has become the model for rebuilding that bridge. Instead of cordoning off ethics in a separate philosophy elective, Harvard embeds philosophers directly inside computer science courses. They co-design and co-teach with computer science faculty, so students encounter moral reasoning as part of their technical craft, not as an afterthought.
Carnegie Mellon, MIT, and Stanford have launched similar programs. These aren't add-ons. They're working blueprints for fusing humanistic and technical intelligence.
But here's the kicker: more and more, only wealthy private universities can afford those collaborations.
Everyone else is cutting the programs that make them possible.
What the Humanities Actually Are
I should be clear about my viewpoint here: the humanities don't exist to serve technology. They're not ethical maintenance crews for Tech. Literature, philosophy, history, and languages have always justified themselves by deepening our capacity for meaning, not by protecting us from our inventions.
But the danger now is amplified because of that disconnect. I am far less concerned about AI becoming more human than I am about humans becoming more mechanical. More optimized, less empathetic, less capable of ambiguity or sustained thought.
The evidence is already visible and predates the GenAI boom of course. In 2023, only 33% of American eighth graders were proficient in reading. That's the lowest rate in two decades. We're losing not only programs but the habits of mind that sustain democracy: reading critically, following complex arguments, imagining perspectives other than our own.
The humanities don't need AI to justify their existence. But I’d argue that AI needs the humanities to justify its existence. Without them, AI will have no conscience, no memory, and no language capable of asking what being human still means.
Where We Go From Here
The solution isn't mysterious. It's moral clarity backed by models that already work.
Acknowledge the real cause. Declining enrollment is the predictable result of decades of disinvestment. When institutions actually invest, the numbers recover. Arizona State University created Humanities Labs—team-taught courses connecting literature, philosophy, and history with urgent global problems. Enrollment grew because they treated the humanities as essential.
Integrate ethics across the curriculum. Harvard's model works. So do the programs at MIT, Stanford, and Carnegie Mellon. Moral reasoning can be taught as rigorously as coding.
Create bridge appointments. Carnegie Mellon and Toronto have institutionalized cross-departmental positions so scholars can move between philosophy, data science, and engineering. These aren't temporary task forces. They're permanent structures.
Protect ethical infrastructure. Stanford's Human-Centered AI Institute embeds humanistic scholars directly in AI research and policy. Oxford's Humanities Cultural Program has a £150 million endowment. Major research universities can treat the humanities as innovation engines, not cost centers.
Fix the metrics. If we measure universities only by graduate salaries, we'll keep getting transactional learning. Expand the metrics. Evaluate institutions by their contributions to civic trust, ethical capacity, democratic resilience. Track alumni who teach, serve, innovate for the public good. Make it count.
Reclaim governance from the spreadsheet. Restore faculty authority over curriculum and resources. If administrative dashboards replace departmental deliberation, moral reasoning gets treated as inefficiency.
Renew public funding. State legislatures should treat humanities education like infrastructure—as the wiring that keeps society functional. Ethical literacy is as critical as digital literacy.
Make AI companies put their money where their mouths are. If Google and OpenAI are serious about transforming education responsibly, they need to treat educators as partners in governance, not end users. Create governing boards with real decision-making power—educators, ethicists, learning scientists, historians of technology. Not advisory councils. Actual authority.
Fund joint research chairs. Publish transparent learning impact reports. Endow open research labs within public universities. Establish educator sabbatical programs inside AI companies so teachers can shape design teams directly.
AI promises to rewrite learning, but actual educators need to hold the pen.
The Bottom Line
The question AI poses is old, just dressed in new clothes: What the heck does it mean to be human?
You can't answer that with Python. It requires the languages, histories, and moral vocabularies that the liberal arts have preserved for centuries.
To cut those disciplines now, at this exact technological moment, is to amputate our collective conscience. It's to fulfill Postman's darkest prophecy and Dick's deepest fear: a culture that confuses simulation for substance and optimization for wisdom.
The humanities aren't nostalgic ornaments. They're the operating system of a human future. Without them, we're just training technicians to build better cages.
The scariest part? We probably won't even notice when the doors close behind us.
Somewhere, the financial logic says we can't afford the humanities. The moral logic says we can't afford to lose them.
Guess which one keeps winning?