Go Ahead and Ask for Some Mean

The case against using AI politely


The Voice You've Been Avoiding

For twenty years, the internet trained us to treat knowledge as retrieval. You type a question, you get a page. The skill was in knowing what to type.

Generative AI doesn't work that way, and I keep watching people use it as if it does. They ask for answers. They want the thing Google would have given them, just faster and in paragraph form.

I think this misses something important.

The Value of a Little Mean

A friend of mine built a scheduling app for doctors. Twenty-five clinics were using it. He was raising money and had a pitch deck he felt good about—he'd been over it maybe a hundred times, tightening the language, adjusting the graphs.

He knew it cold.

One night, late, he uploaded it to ChatGPT and asked it to respond as a VC who was about to pass.

The response came back in seconds, blunt and unvarnished. It pointed out that he'd mentioned no churn numbers. That he'd buried a pricing model change in one sentence on slide eight. That his only acquisition channel was his own outreach, which meant his unit economics would collapse the moment he hired a sales team. That he had no HIPAA certification and was selling to healthcare.

He told me he read it twice. The first time to argue with it in his head. The second time because he knew it was right.

But all of it was stuff he sort of knew deep down. The churn numbers were bad, so he hadn't calculated them. The pricing transition was awkward, so he'd rushed past it. He'd been doing all the sales himself because that was working, and he hadn't wanted to think about what happened when it couldn't be him anymore.

The deck wasn't hiding these things from investors (who are good at sniffing them out). It was hiding them from him.

He rebuilt the deck that week. Not to spin the problems better but to actually face them. Here's our churn, here's why, and here's how we're fixing it. Here's the pricing change as its own section with a transition plan. Here's what happens to margins when we hire salespeople, and here's how we survive it.

Two weeks later he closed the round.

He didn't get information he couldn't have gotten elsewhere. Any smart investor would have asked those questions. Any honest friend might have, too. But he didn't have that friend in the room at midnight when he was finishing the deck. And he wasn't going to call an investor to pre-reject him.

What the AI gave him was a voice willing to be unkind at the exact moment he needed it.

Get a Little Mean, ChatGPT!

I've started doing this with my own work, and I'll tell you what it's actually like: it's mildly unpleasant.

Last month I was finishing a piece of writing I'd spent weeks on. I uploaded it and asked the model to respond as an editor who thought the piece was a failure. To be specific about why and not to be generous.

The response was three paragraphs. The first pointed out that my opening was doing something I tend to do a lot: announcing that conventional wisdom is wrong before I'd earned the reader's trust. (Classic clickbait, am I right?)

The second noted that I kept hedging—presenting one idea, then immediately qualifying it. Every paragraph gave with one hand and took back with the other. The piece was so balanced it never actually said anything.

The third said the ending was a thesis statement disguised as an insight, and that I'd stopped one step before the piece actually got interesting.

I felt my chest tighten reading it. My first impulse was to explain. “How dare you!” I mean, my structure was intentional. The opening was fine, actually, if you understood what I was trying to do. The ending landed differently if you'd been following the argument and weren’t a big dumb machine.

Then I sat with it for an hour. I reread what I'd written. The critique was right about all of it.

The thing is, I sort of knew. I'd felt the opening was a little aggressive. I'd noticed the repetition and told myself it was rhythm. I'd ended where I ended because I was tired and wanted to be done. Seeing it written out, by something that had no interest in protecting my feelings, made it impossible to keep pretending.

So…I cut the opening. I rewrote maybe a third of the sentences. I pushed past the ending I'd settled for and found what I actually wanted to say. The piece got better.

There's real research that shows why this works.

When you make a model critique its own drafts and revise (researchers call this self-refinement), outputs improve. When multiple model instances argue from different positions, reasoning gets sharper. The pattern holds across tasks: adding friction—opposition, doubt, adversarial pressure—makes thinking better.
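The critique-and-revise loop the research describes is simple enough to sketch. Here's a minimal version; the `generate` parameter stands in for whatever LLM call you're using (an API client, a local model, anything that takes a prompt string and returns a string), so the function itself and its wording are illustrative, not a specific library's API.

```python
def refine(draft: str, generate, rounds: int = 2) -> str:
    """Alternate a harsh critique with a revision for a fixed number of rounds.

    `generate` is any callable that takes a prompt string and returns
    the model's response as a string.
    """
    for _ in range(rounds):
        # Ask for the unkind voice, not the polite one.
        critique = generate(
            "You are an editor who thinks this draft is a failure. "
            "Be specific about why and do not be generous:\n\n" + draft
        )
        # Force the next draft to answer every point raised.
        draft = generate(
            "Revise the draft to address every point in this critique.\n\n"
            f"Critique:\n{critique}\n\nDraft:\n{draft}"
        )
    return draft
```

The point of the structure is that the critique is generated fresh each round, against the newest draft, so the model can't coast on a single pass of feedback.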

But I don't think this is really a discovery about AI. It’s more of a reminder about thinking itself.

The people I know who are good at what they do all have some version of this. A habit of asking themselves where they're wrong before someone else does. A voice in their head that says what if you're wrong about this while there's still time to change. They developed it over years, usually through painful experience…failures that taught them to be harder on themselves in private.

Most of us never build that voice because it's uncomfortable to live with. It requires sitting with the possibility that the thing you made, the thing you're proud of, might be lazy. Might be sloppy. Might be hiding something you don't want to see.

It's much easier to ship it, tell yourself it's good enough, and move on.

The prompts that work aren't polite.

"Can you review this?" gets you something gentle. Good structure, nice pacing, maybe some areas to improve. It's feedback in the way a colleague gives feedback when they don't want to make things, ahem, awkward.

But "you're a customer who returned this product after one week—tell me what made you leave" gets you something else.

So does "you're the investor who passed…what did you see that made you walk away."

So does "you think this piece of writing is a failure…explain why, specifically, and don't be nice about it."

The difference is stance. The first asks for help. The second asks to be attacked.
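If you use these stances often, it's worth keeping them in one place rather than retyping them. Here's a small helper that does nothing but assemble the persona and the work into one prompt; the persona wordings are adapted from the examples above, and the function name and dictionary are my own scaffolding, not any tool's API.

```python
# Adversarial personas, adapted from the prompts above.
PERSONAS = {
    "churned_customer": (
        "You're a customer who returned this product after one week. "
        "Tell me what made you leave."
    ),
    "passing_investor": (
        "You're the investor who passed. What did you see that made "
        "you walk away?"
    ),
    "harsh_editor": (
        "You think this piece of writing is a failure. Explain why, "
        "specifically, and don't be nice about it."
    ),
}


def adversarial_prompt(persona: str, artifact: str) -> str:
    """Prepend an attacking persona to the work being reviewed."""
    return f"{PERSONAS[persona]}\n\n---\n\n{artifact}"
```

The design choice worth noticing: the persona comes first, before the model has seen the work, so it reads everything through that stance instead of forming a polite default opinion and adjusting afterward.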

Generally, what comes back when you ask to be attacked falls into three categories.

First, the obvious problems you'd stopped seeing because you'd looked at the thing too long. The button that's invisible, the sentence that doesn't scan, the flow that makes sense to you because you built it.

Second, the decisions you made for convenience and then convinced yourself were principled. You're asking for the email too early because that's how you set up the database. You're hiding the price because you weren't sure what to charge. You ended the piece where you did because you were tired.

Third—and this is the one that matters most—the problems you were actively avoiding. The metric that doesn't look good, the user feedback you haven't addressed.

The part of your own work you don't want to examine because you're afraid of what you'll find.

That third category is where the real value lives, because those are the things that will catch up with you eventually. They always do. You can find them now, in private, when there's still time to fix them.

Or you can find them later, in public, when it's expensive and embarrassing and too late.

The Widening Gap

The gap that's opening up isn't between people who use these tools and people who don't.

It's between people willing to be told they're wrong and people who'd rather not know.

I think about my friend, alone with his laptop at midnight, reading a critique he didn't want to hear. The version of him that closed the round wasn't smarter than the version who uploaded the deck.

He was just willing to sit there, feel his chest tighten, and not click away.

That's a huge skill now. Not simply prompting.

Just staying in the room when the room gets a bit uncomfortable.
