The Google Habit That's Ruining Your AI Results

What Brain Surgeons Get Wrong About AI


Smart People, Dumb Results

I have an old neighbor who is a brain surgeon. She operates on people's heads for a living.

But watch her use ChatGPT and she turns into a cavewoman.

Last month she needed help planning her daughter's birthday party. She typed something along the lines of:

"Birthday party ideas for 8-year-old girl princess theme indoor February.”

And ChatGPT spat out the most generic list imaginable. Pin the tail on the unicorn. Princess dress-up station. Pink cupcakes.

She read it, rolled her eyes, and muttered "This thing is useless."

Thirty minutes later, however, I watched her and her husband have the most productive conversation about the exact same party.

They talked about her daughter's current obsessions (apparently she's into both princesses AND dinosaurs now). They discussed what worked at her cousin's party last year. They brainstormed (bad pun intended) how to handle a party in a city condo.

By the end of their conversation, she had a brilliant plan: a "Paleontologist Princess" theme where kids dig for "royal jewels" (plastic gems) in sandbox excavation sites.

Creative. Personal. Perfect for her kid.

The difference? Well, she treated her husband like a human and ChatGPT like a vending machine.

We're All Doing This Wrong

Twenty years of Google broke our brains.

Google trained us to be keyword warriors. Craft the perfect search. One shot. Done. Move to results.

So naturally, many of us approach AI the same way. Write the perfect prompt. Expect magic.

Get disappointed.

But AI isn't Google. That might sound obvious out loud, but ask yourself: Have you been using AI like a search engine?

The better use of AI: a conversation that gets smarter as it goes.

Think about it: When you need help from an actual human, do you walk up and bark vague commands? "Create comprehensive analysis of market conditions with actionable recommendations!"

Of course not. You'd sound insane.

No. You'd probably start with context. You'd ask questions and build understanding.

The Difference Between Search and Conversation

Here's what the brain surgeon should have done (there's a sentence I never thought I'd utter):

"I'm planning my daughter's 8th birthday party. She's obsessed with princesses but also just discovered dinosaurs.'“

Is this the ultimate, perfect prompt? No.

But at least here we have something to work with.

Because now AI can ask questions. Real questions, e.g.:

"How many kids are coming?" "What's worked at parties she's been to recently?" "Does she prefer the fancy Disney princesses or is she more into adventure?"

Each answer should, in theory, make the AI smarter about what she actually needs.

Ten minutes later, with this new approach in hand, she's got ideas that no generic party planning list would likely include. Why?

Because she started talking instead of commanding.

Same brain surgeon. Same AI. Completely different results.

The only thing that changed? She treated it like her husband, not like Google. (OK, perhaps that’s a bit weird. But you get my point…)

Why This Changes Content Farming

The vast majority of people use Large Language Models like ChatGPT for content creation. And that’s OK, to a point. But the truth is, at least right now, AI is pretty terrible at creating content. It's boring, generic, and sounds like everything else. (Kind of like how I feel about most Marvel movies.)

But AI is very good at finding content. The good stuff. Written by humans for humans.

Let me show you what I mean.

Bad approach: "Write blog post about remote work productivity tips."

Result: The same 10 tips everyone's seen a million times.

Better approach: "Find me companies that saw productivity increase after going remote. What did they do differently? Which approaches worked for companies with under 100 employees?"

Result: Real case studies. Actual data. Specific examples you can learn from.

One gives you filler. The other gives you intelligence.

The Real Problem We're Solving

Do we need more content? Probably not. The internet has enough blog posts (which is a truth I continue to ignore as I write these posts for Core Concepts. Hey, we’re all hypocrites!)

But if we are going to create content, shouldn’t we strive toward the right content? Content suited for our specific situation(s). Our exact audience. Our weird constraints.

That's where curation excels over flat blah-blah-blah creation.

I worked with a high school teacher who was hunting for examples of persuasive writing. She could've asked AI to write some really generic fake student essays. Instead, she asked it to find recent op-eds about issues teenagers actually care about.

Can you guess which approach worked better with her students?

The Skill That Actually Matters

This generation of kids is about to graduate into a world flooded with AI-generated, well… everything.

Blogs, essays, emails…even friends. Entire worlds!

But a lot of that stuff will be white noise.

The valuable skill to navigate this very weird future, in my opinion? Knowing how to find the signal. How to curate quality from chaos.

In other words, having conversations that lead to insights instead of issuing commands that lead to templates.

Breaking the Google Habit

Next time you use AI, try this:

Don't craft the perfect prompt. Start with a problem:

"I'm trying to figure out..." "I'm struggling with..." "I need to understand..." Then respond to what it says. Ask follow-ups. Get specific. Build understanding together.

Your Google brain will scream that this is inefficient, but your results will prove otherwise.
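For the programmers in the room, the difference is easy to see in code. Here's a minimal sketch using plain message dicts in the role/content shape most chat APIs accept; the helper functions (`start_conversation`, `add_turn`) are hypothetical names for illustration, not any real library's API:

```python
def start_conversation(problem: str) -> list[dict]:
    """Open with a problem statement, not a keyword string."""
    return [{"role": "user", "content": problem}]

def add_turn(history: list[dict], reply: str, follow_up: str) -> list[dict]:
    """Record the model's reply, then respond to it -- each answer
    gives the next request more context to work with."""
    history.append({"role": "assistant", "content": reply})
    history.append({"role": "user", "content": follow_up})
    return history

# One-shot, Google-style: a single keyword command, no context builds up.
one_shot = [{"role": "user", "content":
             "Birthday party ideas for 8-year-old girl princess theme"}]

# Conversational: start with the problem, then respond to what comes back.
chat = start_conversation(
    "I'm planning my daughter's 8th birthday. She's obsessed with "
    "princesses but just discovered dinosaurs.")
chat = add_turn(chat,
                reply="How many kids are coming, and is it indoors?",
                follow_up="About 12 kids, indoors, in a city condo.")

print(len(one_shot))  # the one-shot history never grows past 1 message
print(len(chat))      # the conversation is already 3 messages and growing
```

The one-shot list is frozen at a single vague command; the conversational history accumulates context with every turn, which is exactly why the second approach produces answers fitted to your situation.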

Here's how you know if you're doing it right:

After 10 minutes with AI, do you have generic content that could apply to anyone? Or do you have specific examples, data points, and insights that fit your exact situation?

One means you're still thinking like Google. The other means you're finally thinking like AI.

The conversation is the point. Not the prompt.