Sloptimization, Shrimp Jesus, and the Lemons Problem

Is Anyone Home?

For the past few months, I’ve found myself deeply interested in the idea of “AI slop.” Not the actual slop itself so much as the definition of it, the conditions that make it proliferate, and what—if anything—can or should be done about it.

This is a big topic, too big for a blog post, and it’s also one that can easily turn preachy. That’s not my intent. My hope is to step back and observe slop as carefully as I can. Despite working in technology and AI, I’m a writer, a former editor of a literary journal, and a teacher of writing who is, admittedly, pretty worried about generative AI’s slop-farming incentives.

That said, this is also a much longer piece than I normally share here. Too long for a blog. But oh well.

Behold: Shrimp Jesus

Do you remember Shrimp Jesus?

Picture, if you will, Jesus Christ rendered in gorgeous, altarpiece realism—soft light, tender eyes, the whole Renaissance glow. Except he’s made of shrimp. Or he’s standing in a pile of shrimp. Or blessing a heap of shrimp with hands that are also, somehow, shrimp.

In 2024 Shrimp Jesus was everywhere, it seemed… not everywhere like a meme, exactly. More like mold. Spreading through Facebook feeds, spawning variants, multiplying overnight. Content farms running image generators at industrial scale, producing hundreds of posts a day, each costing nearly nothing.

Shrimp Jesus caught on not because anyone loved it, but because it was weird enough to make people react.

“Amen!” wrote some.
“What the hell am i looking at??” wrote others.
To the algorithm, both responses are valuable.

No one asked for Shrimp Jesus. It’s like a very weird prayer-shaped hole where a prayer might go.

Shrimp Jesus is sort of funny, I guess. But it’s also a diagnostic of something larger: AI slop.

In early 2024, researchers at the Stanford Internet Observatory and Georgetown University tried to document what happens when production costs approach zero and distribution rewards volume.

They studied 100+ Facebook pages posting AI-generated images at industrial scale—we’re talking dozens per day, around the clock. The images showed lots of things…including Jesus Christ fused with shrimp and crabs. They also showed children holding impossibly perfect paintings; elderly women with birthday cakes and captions reading “I baked it myself.” Users were explicitly asked to engage: “rate this painting by a first-time artist” or type “GREAT JOB!”

Some of these AI-generated posts reached tens of millions of viewers, who all paid the price for admission: a little bit of their attention.

When Life Sells You Used Lemons

Back in 1970—the same year ChatGPT tells me Jimi Hendrix and Janis Joplin died, the same year the Environmental Protection Agency was launched—an economist named George Akerlof published a paper called “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism.” He was thirty years old and teaching at Berkeley.

A little over thirty years later, the Nobel committee called it one of the most important contributions to twentieth-century economics and awarded Akerlof the Nobel Prize.

The paper is about used cars. It’s surprisingly accessible, even if you’re not an economist (see: Yours Truly). Akerlof is trying to explain something that seems almost embarrassingly cliché today: why used car lots are—so often—full of junk.

It’s genuinely interesting if you pause to scratch at it a bit. Cars are useful. People need them. Some used cars are perfectly good vehicles with years of life left.

So why do we assume a used car is probably a lemon? Why does “used car salesman” mean what it means?

Why do we always assume we are about to get screwed?

The problem isn’t the cars, Akerlof realized. The problem is what buyers can’t see.

Ever buy a used car? I have once, and I hated it. If you’re not a mechanic, you’re essentially guessing. You can kick the tires, check the mileage, take it for a spin. But you can’t see the transmission that’s about to fail or the head gasket that’s been leaking for six months. (I know so little about cars I had to get generative AI to write that sentence, which is maybe the point.)

The seller, meanwhile, likely knows a lot of what you don’t. You’re working outside the car. They have inside information.

Economists call this information asymmetry: one side knows more than the other, and that gap changes everything.

So what do you do, as a buyer? You protect yourself. You assume the car is average, maybe worse. After all, why would someone sell a genuinely great car at a price that doesn’t reflect how great it is? If the car is priced low, something must be wrong. So you offer what an average car seems worth. You bake your suspicion into the bid.

This is where the spiral starts.

If you’re the seller with a genuinely good car, that average-car offer is insulting. You maintained this thing. You kept receipts. You changed the oil on schedule. You know what you have.

But the buyer can’t verify any of that. So they offer the average price.

You say no. You take your car home. You exit the market.

What’s left? The cars that weren’t good enough to walk away. The pool gets worse.

Buyers notice—maybe not consciously, but they notice. The cars now on offer don’t look as good. So they adjust expectations down, offer a little less. Now the next tier of decent cars becomes too good to sell at the new going rate. Those sellers exit too.

Each round pushes out the people with something worth selling and leaves behind the people who don’t. The market selects for lemons—not because anyone wants lemons, but because the structure makes lemons inevitable.

Akerlof called this adverse selection. The used-car example stuck, so economists just call it “the lemons problem.”
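The spiral above is concrete enough to sketch as a toy simulation. This is purely illustrative—the numbers and the uniform quality distribution are my assumptions, not anything from Akerlof’s paper—but it shows the mechanism: buyers bid the average quality they expect, sellers whose cars are worth more than the bid walk away, and buyers re-price to the shrinking pool.

```python
import random

random.seed(0)

# Illustrative assumption: 10,000 used cars, each with a "true quality"
# (its value to the seller) drawn uniformly from 0 to 100.
cars = [random.uniform(0, 100) for _ in range(10_000)]

# Buyers can't see quality, so they bid the average of the current pool.
offer = sum(cars) / len(cars)

for round_num in range(10):
    # Sellers whose cars are worth more than the offer exit the market.
    cars = [quality for quality in cars if quality <= offer]
    if not cars:
        break
    # Buyers notice the pool got worse and re-price to the new average.
    offer = sum(cars) / len(cars)
    print(f"round {round_num + 1}: {len(cars)} cars left, offer ~ {offer:.1f}")
```

Run it and each round roughly halves both the pool and the going offer: within ten rounds the “market” consists of a handful of near-worthless lemons. Nobody wanted lemons; the structure selected for them.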

But the lemons problem was never really about cars. Cars were just the illustration anyone could picture.

The real insight is what happens to any market when buyers can’t reliably tell quality from junk.

Akerlof knew this. In the paper he applies the same logic to health insurance, to credit markets, to the economic costs of dishonesty. He was building, as he put it, “a structure for the economic costs of dishonesty”—a way to see how markets eat themselves when trust breaks down.

You get a lemons problem whenever two conditions hold:

  1. Quality is invisible. Buyers can’t tell good from bad at the moment they decide. They’re forced to judge by surfaces, shortcuts, proxies.

  2. Price can’t sort it out. Normally good things cost more, so price helps separate them. In lemons markets, either prices are stuck, or everything costs the same, or the “price” is a non-price (like a click). The sorting mechanism fails.

Used cars have both. You can’t see a dying engine from the lot. And the person with a pristine vintage Mustang won’t accept Craigslist average for it.

So they don’t sell. And Craigslist gets (somehow) worse.

In Art as Experience, the philosopher John Dewey argues that art is not mainly a thing but a doing. True art grows from a full process in which a person meets resistance, makes choices, revises, and arrives at what Dewey calls “an experience”…a unified arc with a beginning, struggle, and consummation.

The force of art comes from that lived continuity between making and undergoing. If we treat output as the whole story, we lose the part Dewey thinks makes art art: the shaping of experience through intentional effort.

Slop skips this. The output may be competent, sure, but the intent feels thin because the experience that formed it is thin. A prompt is not a struggle. A model inference is not a choice. What emerges has the shape of art without the weight of it.

The Lemons Problem, but for Everything We Read

Okay—but what does any of this have to do with AI slop?

Content has the same two conditions, maybe even more so.

Pretend you’re scrolling your phone. You see a headline, a thumbnail, the first sentence. You’ve got three seconds before your thumb decides: click or move on.

In that window it’s hard to tell whether something took six months to research or six minutes to generate. It’s hard to tell whether the writer is an expert or someone who skimmed Wikipedia. It’s hard to tell whether the stats are real or the quotes accurate or the argument sound.

The assessment window is tiny and the surface is smooth. Everything has a generic professional sheen now. The prose is clean-enough. The formatting is correct. It all looks like genuine content. Like maybe it’s worth your time.

That’s condition one: quality is invisible at scroll speed.

And what about price? Well—what price? This is the internet, where almost everything is free at the moment you encounter it. The paywalled investigation and the AI-generated listicle both cost you the same nothing until you’re already inside.

That’s condition two: price can’t help you.

So we’ve got a lemons market: people who can’t tell quality from junk choosing from a sea of options with no price signal to guide them.

The journalist who spent three months reporting a story—traveling, interviewing, digging through records—is competing with a bot that summarized the same topic from other summaries. At scroll speed they look similar enough, so they get the same payment: whatever attention they can grab before the thumb moves on.

If you’re the journalist, this is demoralizing. Your work isn’t being valued because it can’t be seen. The market is offering the average-content price, and your content isn’t average. So what do you do?

Maybe you keep going out of stubbornness. But maybe you leave. You take a PR job. You burn out. You quit. When you leave, what’s left? The people already producing slop at a pace that fits a flooded market. The average quality drops.

Or maybe you stay but adapt. You let AI do a first draft. You publish more often because volume is the only way to be noticed, and volume means cutting corners. You’re still in the market, but you’re no longer doing what you used to do. The average quality drops.

Or maybe you just stop trying so hard. Nobody can tell anyway. The thing you spent a week on performed the same as the thing you knocked out in an hour. So you spend an hour.

Why not?

And the average quality drops.

So what is slop, exactly?

It’s worth being precise here, because “slop” can sound like “stuff I don’t like.” That’s not what I mean.

Slop is not “bad content,” per se. Bad content has been around forever. Bad content is your weird uncle’s email forwards, the local newspaper that can’t afford a copy editor, the self-published novel that needed another draft.

Bad content fails in human ways. Someone tried and fell short. You can feel the trying.

Slop doesn’t try. It’s communication that moves forward without any authentic human intent.

Slop is not an attempt that just didn’t work out. It’s the absence of attempt somehow dressed in the costume of attempt. It learned what trying looks like, not what trying is.

A more precise definition:

Slop is content optimized for the gap between how fast you assess something and how long it takes to discover whether it’s any good.

Slop isn’t trying to be good because good isn’t a metric that matters…slop seeks minimum viable competence, just enough to earn the click before you notice there’s nothing there.

If you want a test, here are three basic questions:

  1. Knock-knock, is anyone home?
    Is there a mind behind this? A person who had a thought and meant to communicate it? Or would the process produce the same output regardless of whether it was true, useful, or interesting?

  2. Would the creator be embarrassed if you looked closely?
    If you follow the sources, do they say what’s claimed? If you push on the argument, does it hold? Slop wasn’t built for scrutiny; it was built for scroll speed.

  3. Is it trying to pass, or trying to be?
    Trying to be and failing is just bad content. We’ve all made bad content. Some of you may consider this long-winded essay bad content. But the idea of trying to pass is different. It’s a kind of forgery that doesn’t really care one way or another if it’s caught, because by the time you catch it, the click is already counted.

Quick Note

Before anyone assumes where I'm coming from, please know I am not anti-AI. And I am not saying anything and everything created with AI assistance is inherently slop.

Instead, I am trying to pin down something a bit more slippery: the conditions under which a tool can, potentially and even quietly, degrade communication into something we don’t really want.

Why does it feel like slop is winning?

Nobody wakes up hoping to scroll machine-generated smoke. “Oh, I can’t wait to find out what the algorithm thinks! Man, I hope I can get some Shrimp Jesus this morning!”

Slop wins because the larger system wants what slop is good at. Platforms make money by selling your attention to advertisers. The longer you stay, the more ads you see, the more money they make. This is the business model. So the platform’s job is to keep you scrolling, watching, reacting.

We know this.

But platforms can’t measure truth, usefulness, beauty, or importance…not directly. (It could be argued that humans have difficulty with this too, but I won’t go there right now.) The platforms can measure behavior, though: clicks and watch time and comments and shares and return visits. So that’s precisely what they optimize for.

Sure, sometimes quality and engagement overlap. But any overlap is incidental. Platforms aren’t selecting for quality and getting engagement; they’re selecting for engagement and getting whatever comes with it.

  • Slop is optimized for watch time. A two-minute idea becomes a twelve-minute video because twelve minutes means more ads.

  • Slop is optimized for reaction. Confusion, outrage, smugness—anything that makes you comment counts as signal.

  • Slop is optimized for volume. Post once a week and you vanish. Post ten times a day and you stay in the mix. Slop can do ten. Quality usually can’t.

So when platforms look at slop and quality, what do they see? Two things that perform similarly on the metrics that matter to the almighty algorithm. Sometimes slop performs better, because it was designed to ace the test and quality was designed for something else.

Research on attention economics helps us notice this pressure. When content becomes abundant, attention becomes the scarce resource. Since constant novelty and high posting frequency work well in that competition, speed turns into a rational strategy for creators. Recommendation systems do not just distribute culture; they shape it by pushing creators toward the tempo and formats the algorithm rewards.

It is here, I think, where the "primary user" distinction becomes crucial. In almost every case of slop proliferation, the primary user is not the end consumer. It is the system that profits from scale: advertisers, platforms, content-farm operators, scam infrastructures, brand-churn models. The public interacts with the outputs, but often as targets, inputs, or captive audiences…not as people whose needs shaped the product in any meaningful way.

I read a LinkedIn post recently dismissively calling anti-AI folks “Luddites.” But the Luddites weren’t technophobes. They targeted specific owners who used machines to cut wages, deskill labor, and replace adults with children. It wasn’t technology itself they hated—it was how it was deployed.

Isn’t that what slop is doing now, to writers and illustrators and actors and everyone else?

Slop wins by flooding, not by convincing. The slop economy sells the effect of meaning, not meaning.

It’s the industrial commodification of “Eh, good enough.”

So where does that leave us?

In Akerlof’s paper, he points out that markets can develop countermeasures to the lemons problem: warranties, certifications, inspection regimes, reputation systems, lemon laws. These mechanisms work because they make quality more visible or dishonesty more expensive. They add friction in the right places.

But those fixes assume things that now feel quaint. They assume production takes time. They assume reputation accumulates slowly. They assume volume is limited enough to monitor.

Content has almost no friction left. You can’t certify a million posts a day. You can’t build stable reputation when bylines vanish and everything is anonymous. You can’t enforce standards when the flood is bigger than any enforcement mechanism.

The tools that saved the used-car market can’t save this one.

I don’t have a solution, and I’m suspicious of anyone who says they do. But it’s something I want to continue to explore and understand, because I think it’s at the heart of so much distrust around the technology as a whole. (That, and the technocratic billionaires pulling all the strings!)

The question isn’t whether good work survives. It will. It always has—somewhere.

The question is whether it stays findable. Whether the people who want it can reach it, or whether they just keep swimming through an endless sea of shrimp, never knowing there was anything else.

At the risk of sounding dramatic: I worry we’re building a world full of holes shaped like real things.

You know…the journalism-shaped content. The expertise-shaped advice.

The shapes are all there, but the insides—the guts—are missing.

Even if the models improve—and of course they will—the central question for me is this: When a culture shifts toward instant, infinite output, what happens to the slow forms of choice, doubt, and revision that give art its force? Slop makes the gap visible. The aesthetic complaint from “anti-AI Luddites” is not snobbery. It is a claim about what art does, and what is lost when the process that gives it force is bypassed.

These are people who aren’t accepting the whole “democratization of art!” pitch at face value. Yes, access expands with these tools. But ownership and power concentrate. Many people can generate, but only a few control the tools, the data, or the markets those tools reshape. The benefits of cheap production flow upward, and the costs (displacement, wage pressure, cultural degradation) are distributed downward.

Downward to you and me.
