Who's Lying About AI and Your Future?
The Uncomfortable Truth About Job Automation
Can I be honest with you?
You're being lied to.
I realize this isn't terribly specific—we're all being lied to about a million-and-one things daily, from the environmental impact of our reusable grocery bags to the likelihood that this new diet will finally work. But I'm talking about something particular.
I am talking about Artificial Intelligence.
By now you've surely encountered these reassurances floating through conference rooms and TED talks:
"AI won't take your job, but someone who knows AI will."
"AI is not coming for your job."
These statements arrive with such certainty, such polished conviction. They sound reasonable, measured, the voice of adult wisdom cutting through panic. The problem is, well, they're not true. Perhaps not outright lies, but maybe something worse…a purposeful obfuscation of reality dressed up in tech-speak platitudes.
Even more troubling: many of these comforting phrases originated in the very AI systems we're discussing, then were adopted by humans who needed something smart-sounding to say at the next board meeting. There's a special irony here, watching people outsource their thinking about AI to AI, like asking your replacement to draft your retirement speech.
The data—and many experts without products to sell you—tell a different story. A story without the neat narrative arc, without the obligatory optimistic ending that makes everyone feel better.
AI is coming for you. AI will eat your job if it can.
This might sound strange coming from someone who has embedded himself in this technology both professionally and personally. I've spent a decade building these systems, teaching them, watching them evolve from glorified calculators to something approaching cognition. So let me be clear: I'm not an AI fatalist. I don't believe we're facing a jobless future where humans serve no purpose.
I recently had the opportunity to join the Advisory Board of a liberal arts college in New Hampshire. In fact, I myself graduated with a liberal arts degree and found my way into technology and AI in a meandering and unexpected way. Call me biased, call me naïve, but I truly believe that liberal arts majors can and will have enormous influence in how we shape technology's future.
Because these are the people we need to help us confront some uncomfortable truths. To help us look squarely at what's already happening, not what tech evangelists promise will happen.
Reality is always messier than the PowerPoint version.
The Numbers
Consider the fine folks at IBM: the company has frozen hiring for back-office roles, with up to 30% of HR tasks earmarked for automation.
That's roughly 7,800 positions potentially on the chopping block—7,800 mortgages, college funds, and retirement plans quietly marked with an asterisk.
Klarna, a Swedish fin-tech company, uses a bot that replaced the work of an estimated 700+ human agents—and is projected to add $40 million in profit this year alone.
The math is simple and brutal: humans cost money; algorithms don't request raises or maternity leave or mental health days when the world feels too sharp-edged to face.
And these aren't isolated incidents. They're early indicators of a seismic shift.
In the 2023 World Economic Forum Future of Jobs survey, employers projected that 23% of all roles would change between 2023 and 2027, with 69 million new jobs created and 83 million eliminated…a net loss of 14 million jobs. In AI time, the year 2023 is practically paleolithic; the technology has evolved more in the past eighteen months than in the previous five years.
The scope of automation continues to expand. McKinsey estimates that up to 30% of the hours Americans work today could be automated by 2030 once generative AI is added to the mix, driving at least 12 million additional occupational shifts. Notice how we use words like "shifts" and "transitions" when we mean "disruptions" and "eliminations," a kind of lexical sleight of hand meant to superficially soften what's happening.
As recently as February 2025, a Pew survey found that 52% of U.S. workers are worried about AI's long-term impact on their own jobs; just 6% think it will create more opportunities for them.
Compare that to the 27% of Americans who think winning the lottery is a great financial plan.
People have more faith in winning Powerball than they do in finding opportunities via AI.
This isn't simply irrational fear. It's pattern recognition, the same skill that helped our ancestors avoid becoming meals for lurking predators. The same skill AI uses at scale across billions (trillions) of data points.
The workforce senses what executives are sometimes reluctant to admit aloud: substitution, not just "augmentation," is very much on the table. Technology as a “friendly helper” rather than a replacement has always been more aspirational than truthful.
And it's not just office workers. Artists are having their work plundered wholesale by algorithms, with entire portfolios being fed into systems that can now mimic their style without compensation or consent. There's something particularly unsettling about watching a lifetime of creative development reduced to statistical patterns.
But here's the crucial detail hidden in the data: this displacement isn't happening evenly. OECD research confirms that workers without college degrees are clustering in exactly the occupations with the slowest growth and highest automation risk. Meanwhile, the industries most exposed to AI disruption (tech, finance) are, ironically, also the most optimistic about its impact. Perhaps this is because they're the ones holding the metaphorical keys to the castle—the algorithms.
I see in these numbers something more profound than economic restructuring. I see a redistribution of agency: who gets to DECIDE and who must simply ADAPT. Because of course the question isn't whether change is coming but rather who will bear its costs and who will reap its benefits.
What We're Really Talking About with AI and Automation
Before I keep going, it's important to take a step back and put some of what I am talking about in context.
There's still a surprising amount of debate about what actually constitutes Artificial Intelligence, but I am not interested in that semantic game. For my purposes, AI is simply the design of computer systems that can perform tasks we usually think require "intelligence," e.g., understanding language, recognizing images, making decisions, spotting patterns, etc.
Automation, by contrast, has been around for centuries and perhaps most famously hit the zeitgeist in the early 20th century with Henry Ford's moving assembly line, which broke complex manufacturing into simple, repeatable steps handled by machines or specialized workers.
Over time, "automation" has come to mean any system that replaces manual, rule-based work. You know, factory robots welding cars with tireless (awkward pun intended) precision; software macros filling spreadsheets without typographical errors or complaints about carpal tunnel; automated checkout lanes scanning groceries while customers awkwardly bag their own purchases.
Today, AI and automation converge in ways both subtle and profound: machines not only follow fixed rules but can (setting semantics aside) learn, adapt, and even carry out parts of "knowledge work" that once seemed oh-so-safely human.
Example: Legal software can identify relevant precedents in thousands of cases in seconds, the kind of work that once occupied rooms full of paralegals for weeks. Radiologists use algorithms that can spot tiny anomalies in medical scans with superhuman accuracy. Financial systems flag fraud patterns no human analyst could detect across billions of transactions.
The key difference from previous waves of automation? Earlier technologies mostly replaced physical labor and basic calculations, whereas today's AI is coming for tasks we thought were uniquely, immutably human: writing, analyzing, creating, and deciding. It's climbing the ladder of abstraction, rung by rung, reaching for work we thought safe by virtue of its complexity or creativity.
And it's moving with a speed that makes previous technological revolutions look glacial. Even three or four years ago, tasks that required a specialized team and millions (billions?) in funding now run on your beat-to-hell Lenovo for less than the price of a cup of coffee.
The acceleration is borderline frightening because it amounts to a fundamental compression of the adoption curve, giving us less time to adapt than any previous generation faced with technological disruption.
The water is rising, and unlike previous floods, it's not just reaching the first floor. It's finding its way upstairs. In mere minutes!
Why Some Jobs Are More Vulnerable Than Others
Economists Frank Levy and Richard Murnane made a distinction that illuminates our current predicament with a particular clarity, I think. In their analysis, they separated human labor into two categories that, once understood, seem as obvious and fundamental as the difference between swimming and drowning.
Non-heuristic (rule-based) work follows well-defined procedures and algorithms. If A happens, do B; if B happens, do C. And so on and so forth.
These are tasks with explicit, repeatable steps like data entry, basic bookkeeping, and routine customer service. Because these processes can be fully specified (i.e. written down in a manual) they're prime targets for automation. The proverbial fish in a barrel…if those fish went ahead and arranged themselves in neat rows and wore name tags.
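To make "fully specifiable" concrete, here is a minimal sketch in Python: a toy customer-service router. The RULES table, the keywords, and the route_ticket function are all invented for illustration, not any real product's logic. The point is that when the entire process fits in if/then branches, it can be written down in a manual, and whatever can be written down in a manual can be automated.

```python
# A toy customer-service router: pure rule-following.
# Keywords and departments are invented for illustration; real triage
# systems are bigger, but the principle is the same: if the whole
# process fits in if/then branches, it can be automated.

RULES = {
    "refund": "billing",
    "invoice": "billing",
    "password": "account-support",
    "login": "account-support",
    "broken": "technical-support",
}

def route_ticket(message: str) -> str:
    """If A happens, do B: match a keyword, return a department."""
    text = message.lower()
    for keyword, department in RULES.items():
        if keyword in text:
            return department
    return "human-review"  # the exceptions are where people still matter

if __name__ == "__main__":
    print(route_ticket("I need a refund for last month"))  # billing
    print(route_ticket("I forgot my password"))            # account-support
    print(route_ticket("Something weird happened today"))  # human-review
```

Note the fallback at the bottom: the tickets that match no rule are precisely where a human still earns their keep.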
Heuristic work, by contrast, requires judgment, pattern recognition in novel contexts, creative problem-solving, and tacit knowledge that can't be reduced to fixed code. Think of diagnosing a complex medical condition from ambiguous symptoms, negotiating a sensitive contract, crafting a persuasive campaign for president of the universe, or identifying cultural bias in an algorithm.
These tasks resist simple automation because they demand contextual intelligence and adaptability…the distinctly human capacity to navigate situations where the map differs from the territory.
One summer when I was fifteen or sixteen, I worked with my great-uncle, a carpenter of fifty years, to fit a door into a crooked old frame. The angles were wrong, the wood had warped, the measurements made no logical sense. But his hands knew what his mind couldn't formalize: how to shave here, adjust there, compensate for a century of settling. That's heuristic work. No algorithm could replicate what his hands knew.
Subsequent research by Autor, Levy, and Murnane broke this down further, showing that while routine cognitive and manual jobs have declined steadily, non-routine analytical and interpersonal roles have grown. In the modern age, demand for non-routine cognitive work has consistently outpaced demand for rule-following roles.
Even the authors of the influential book The Second Machine Age, Erik Brynjolfsson and Andrew McAfee (who are far from technophobes), acknowledge that while digital technologies excel at routine mental tasks, they still fall short on activities requiring contextual understanding, ethical discernment, and cultural nuance.
This presents both a warning and an opportunity.
The warning? If your job consists primarily of following established procedures, AI is indeed coming for large chunks of it, not someday but soon. Like…right now.
The opportunity? As machines take over routines, humans can and should add value through judgment, creativity, and interpersonal skills. (Which just so happens to be where liberal arts training shines…)
The most vulnerable workers are those among us whose skills are more narrowly confined to rule-based tasks. This is regardless of industry or salary or education. A junior paralegal who mostly searches for case precedents faces more immediate risk than a master plumber solving unique problems in unpredictable, dusty (disgusting) environments. The paralegal works with information that can be digitized and searched; the plumber works with physical reality in all its leaky, corroded, non-standard glory.
In other words, it's not just about what you do, but how you do it.
The dividing line runs not between industries or education levels but through them, separating the rule-followers from the exception-handlers, the procedure-executors from the judgment-appliers. And that line is moving, month by month, algorithm by algorithm.
Where Humans (Still) Win and AI Shines
Again, I admit all of this probably sounds strange—even hypocritical—coming from someone who has embedded himself so firmly in the technology, both at a career and a personal level. Like being the virologist who studies Ebola while acknowledging it might someday help unlock cancer treatments.
The risk? You sound either naively optimistic or detached from the immediate dangers.
However, there are lots of remarkable advancements unfolding around us…truly incredible use cases that sometimes get missed behind the abstract fog of the technology itself.
Drug discovery is a big one… perhaps the biggest. NS018_055 for idiopathic pulmonary fibrosis moved from target discovery to Phase I in less than 30 months—about half the usual time—thanks to an end-to-end AI pipeline. It is now dosing patients in Phase II, meaning real people with a devastating disease have real hope they wouldn't otherwise have.
A deep-learning screen identified a novel narrow-spectrum compound that kills Acinetobacter baumannii (say that ten times fast), a WHO "critical" superbug, opening a path to new antibiotics.
In conservation, cloud services trained on 35 million labeled images across nearly 1,300 species auto-classify wildlife photos, filter blanks, and flag rare sightings, turning months of manual work into minutes.
These successes aren't replacing domain experts but amplifying them. Like giving a master carpenter power tools instead of a spork.
The most successful implementations pair AI capabilities with human judgment. And this is where generalists may have a surprising edge.
Contextual Intelligence
Heuristic tasks require seeing the "bigger picture" and adapting to new information. Liberal arts training emphasizes historical perspective, ethical reasoning, and critical analysis…exactly the muscles needed when AI's outputs must be judged for accuracy, fairness, or relevance.
Creative Problem-Solving
When AI-generated options fall short, someone must spot the gap and invent a novel solution. Literature, philosophy, and design programs train you to ask "What if...?" and explore alternatives rather than merely execute a script. The marketing director who pivots a campaign after sensing cultural headwinds is exercising judgment beyond what models can simulate. They're drawing on a lifetime of accumulated human experience that no dataset fully captures.
Narrative & Communication
Yes, AI can draft text, but only humans can weave stories that truly resonate emotionally, persuade stakeholders, or navigate sensitive social contexts. Effective prompting itself is its own rhetorical skill: framing questions to elicit useful, precise outputs. Understanding not just what words mean but what they mean in context, to specific people, in a specific history, is still very much human.
Ethics & Oversight
As AI permeates decision-making, ethical dilemmas multiply. Who's responsible when an algorithm discriminates? Humanities graduates are primed to debate values, craft policy, and demand transparency—guardrails that technical teams alone often miss. They ask not just "How does it work?" but "Should it work this way?" and "Who benefits?" and "Who might be harmed?"
Interdisciplinary Translation
Generalists excel at "translating" between domains. For example, turning legal requirements into data science specs, or explaining complex models to non-technical stakeholders. It’s bridge-building, and it is indispensable in AI projects where miscommunication means wasted effort or unintended harm.
The transition isn't from "human to machine" but from "rule-follower to problem-solver." Or rather, that’s how it should be, though we will see what future iterations bring.
Getting Ahead of the Wave: What You Can Do Right Now
So what can we do? Hide under desks? Switch careers? Pray for benevolent AI overlords?
None of the above.
Instead, we need to take concrete action to position ourselves on the right side of this transition. Here are five steps anyone can take:
Write out your workflow.
Make a list of the top 10 tasks you do each week and mark anything that's rule-based, repetitive, or text-heavy. These tasks are likely first in line for automation. Be brutally honest. That monthly report you spend days assembling? The data extraction you perform by hand? The standard emails you send? All prime candidates for the AI overlords. Because you can't adapt to what you haven't acknowledged. (A toy sketch of this kind of audit appears just after this list.)
Pair every vulnerable task with a leverage task.
For each at-risk task, identify a related activity where human judgment adds more value. If report drafting can be automated, double down on stakeholder interviewing or strategic framing. If data analysis becomes AI-assisted, focus on question formulation and implication synthesis. This pairing approach helps ensure that you, the human, are moving up the value chain as routine work gets automated.
Build/Experiment with an AI-enhanced portfolio.
Start experimenting with AI tools in your domain, even if your employer hasn't formally adopted them. Keep samples showing before-and-after efficiency or quality gains when you're test-driving the tools. The evidence of your initiative will speak volumes when automation conversations begin. And trust me, begin they will…
Cross-train with technical colleagues.
Volunteer to be the "explainer," translating user requirements or domain expertise to data scientists and developers. You'll learn the tech as you teach the context, building the hybrid skills most organizations now quite desperately need. The humanities graduate who speaks just enough data science, for instance, can become an indispensable translator between worlds.
Invest in “meta-skills.”
Critical thinking. Visual rhetoric. Ethical reasoning. Change management. These consistently rank above coding in employer reskilling plans. And they aren't just soft skills; they're human capabilities that AI (right now, at least) struggles to replicate.
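Here is the toy audit sketch promised in step one, a minimal example in Python. Everything in it is invented for illustration: the Task fields, the sample tasks, and the crude one-point-per-flag automation_exposure score are my assumptions, not a validated risk model. The point is simply to force the honest inventory the step asks for.

```python
# A toy weekly-task audit: flag rule-based, repetitive, text-heavy work.
# Tasks and scoring are invented for illustration; adapt to your own week.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    rule_based: bool   # follows a written (or writable) procedure
    repetitive: bool   # done the same way every time
    text_heavy: bool   # mostly reading, summarizing, or drafting text

def automation_exposure(task: Task) -> int:
    """Naive 0-3 score: one point per risk flag."""
    return sum([task.rule_based, task.repetitive, task.text_heavy])

WEEK = [
    Task("Assemble monthly status report", True, True, True),
    Task("Copy figures from PDFs into a spreadsheet", True, True, False),
    Task("Negotiate scope with a difficult client", False, False, False),
    Task("Draft standard follow-up emails", True, True, True),
    Task("Mentor a junior colleague", False, False, False),
]

if __name__ == "__main__":
    # Print the riskiest tasks first.
    for task in sorted(WEEK, key=automation_exposure, reverse=True):
        score = automation_exposure(task)
        label = "at risk" if score >= 2 else "leverage"
        print(f"{score}/3  {label:8}  {task.name}")
```

Anything scoring 2 or 3 becomes a candidate for the pairing exercise in step two.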
The worst strategy is passive denial. Don’t wait for formal training or corporate directives.
This isn't the utopian paradise tech evangelists promise, where machines handle all drudgery while humans focus solely on creative fulfillment. Nor is it the dystopian nightmare of mass unemployment. The reality lies in between: a significant, sometimes painful transition that will reward those who combine domain expertise with technological literacy.
Yes, AI is coming for big chunks of our jobs…especially the narrow, repetitive pieces. The goal isn't to outrun the technology, but to evolve alongside it, continually moving toward the areas where human judgment remains essential:
Here with us, right now, in the real world.