What If Fluent Writing Is Actually the Problem?

The relationship signal your generated text is missing


Why AI Is Good at Generating Text But (Often) Very Bad at Knowing What to Say

Psychologist James Pennebaker had a question: what language predicts whether two people will connect? To answer it, he analyzed transcripts from speed-dating events, looking for patterns in how people talked to each other.

There’s the obvious hypothesis, of course: people who connect talk about the same things, e.g., shared interests, shared values, compatible life goals. The content of the conversation should predict the outcome.

But that's not what good old Jimmy P found.

The words that predicted connection weren't the meaningful ones. Not "career" or "family" or "travel."

Instead, they were the tiny, boring words that carry almost no meaning at all: pronouns like I and you and we, articles like the and a, hedges like just and kind of and really.

When two people unconsciously matched each other's use of these function words (Pennebaker called it Language Style Matching), they were about 3x more likely to report mutual romantic interest. The effect held up in follow-up studies: couples with higher style matching were more likely to stay together months later.

The relationship signal wasn't what they talked about. It was how similarly they structured their sentences.
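(If you're curious how mechanical this measurement actually is, here's a rough Python sketch of the published LSM formula: per category, similarity is 1 minus the normalized gap between the two speakers' rates, averaged across categories. The word lists below are tiny stand-ins for the real LIWC function-word categories, so treat it as an illustration, not the actual instrument.)

```python
# Rough sketch of Language Style Matching (LSM), following the published formula:
# per category, similarity = 1 - |rate_a - rate_b| / (rate_a + rate_b + 0.0001),
# then average across categories. Word lists are toy stand-ins for LIWC's.

CATEGORIES = {
    "personal_pronouns": {"i", "you", "we", "me", "us", "he", "she", "they"},
    "articles": {"a", "an", "the"},
    "hedges_and_adverbs": {"just", "really", "kind", "very", "quite"},
    "negations": {"not", "no", "never"},
}

def category_rates(text):
    """Percent of words in the text that fall in each function-word category."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    total = max(len(words), 1)
    return {name: 100 * sum(w in vocab for w in words) / total
            for name, vocab in CATEGORIES.items()}

def lsm(text_a, text_b):
    """1.0 means identical function-word rates; closer to 0 means a mismatch."""
    a, b = category_rates(text_a), category_rates(text_b)
    scores = [1 - abs(a[c] - b[c]) / (a[c] + b[c] + 0.0001) for c in CATEGORIES]
    return sum(scores) / len(scores)

print(lsm("I just think we really clicked.", "Yeah, I really think we did."))
```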

And all this happens below conscious awareness. You don't really notice that someone just used "we" instead of "one," or matched your rate of hedging. You just feel like the conversation flows. Like, “Hey, this person gets me!”

I keep thinking about this because it explains something I've been trying to figure out for months: why AI writing feels wrong even when it's technically correct.

The Universal Acceptability Problem

Large language models are very good at content words. If you ask for a professional-sounding email, you'll get professional vocabulary. Ask for a friendly tone and you'll get exclamation points.

But they're terrible at the unconscious stuff. The tiny function words that signal relationship. The patterns that make a reader feel like you're speaking with them instead of at them.

Therein lies the rub: on its own, AI writing is optimized for universal acceptability. It's designed to sound appropriate to anyone, which means it's tuned to no one in particular.

The content is fine. The relationship is missing.

It passes the grammar check. It fails the vibe check, as the kids* say.

(*On second thought, I don’t think the kids actually say that.)

The Fluency Trap

Another problem? Fluent text (often) feels true…at least on a first pass.

There's this thing psychologists call "processing fluency." I'm not a psychologist, but the basic idea is this: people are more likely to believe statements that are easy to read. That might seem obvious, but even obvious can be interesting. Belief can and does change not strictly because the content is better but because it's easier to process. You can literally make someone think a statement is truer by changing the font. Or making it rhyme. Or just repeating it. That's it. That's the whole trick.

And AI generates extremely fluent prose. Our brains evolved to use processing ease as a proxy for reliability, so we read it and think: this seems right.

Which means AI writing sounds relatively competent. It passes that first-glance credibility test.

But it fails the deeper test. It's polished but impersonal. Fluent but hollow.

Building Grantspace (or: where I've been for six months)

Over at North Light AI, we've been building Grantspace.ai for a while now, and honestly the central question never goes away: where does AI help, and where does it hurt?

Thursday I'm giving a demo and a talk about this. But really I'm just forcing myself to articulate things I've been stuck on for months.

I had a beta user a few months ago who fed our tool her entire previous grant proposal (which had been funded, actually) and asked it to "make it better." The output came back with cleaner sentences, better transitions, more "professional" language.

She hated it.

She couldn't explain why at first. Then she said: "It sounds like it could be about anyone's program."

Yeah. That.

Alas, AI is very good at generating text and very bad at knowing what to say.

Most tools pretend this isn't true. They market speed and fluency as if those were the hard parts. But speed and fluency are exactly what AI is good at. The hard parts are knowing what's true about your situation, understanding your actual reader, deciding what commitments you're willing to make.

The Ratio Problem

Too much AI generation and you get generic output that needs heavy cleanup. Users lose track of what they actually meant to say.

Too little and you've just built a fancy text editor.

I spent probably two months trying to find the right ratio. How much should be AI-generated vs human-written? 30/70? 50/50? We tried a bunch of different splits. Built like four different versions of the interface.

Completely wrong question.

It took me way too long to realize: it's not about ratio. It's about separation. AI and humans are good at different things, and trying to blend them into some percentage just makes both parts worse.

But often we ask AI writing tools to do the human parts. "Write a grant proposal for my nonprofit." The AI doesn't know your theory of change. Doesn't know what you've actually accomplished. Doesn't know what you're willing to promise. So it makes things up. Not factually (usually), but strategically.

It commits you to things you didn't choose. And because the output is fluent, it feels usable. Then you send it and a human reads it and gets that uncanny feeling.

You've outsourced the wrong thing.

Stop Asking AI to Make Decisions You Never Made

I spent a long time believing better prompts alone would fix this. Add more context. Be more specific about tone. Give examples. And so on.

Prompting does help. I am not bashing prompting. But sometimes we ask too much of the prompt.

For example, when you prompt an AI to "write a needs statement for our literacy program," you're asking it to simultaneously decide what the key problem is, what evidence matters, how urgent to sound, what to promise you'll address, and then write sentences expressing all of this.

The AI has to make all those decisions in order to write anything. But you never made those decisions explicitly. So the AI makes them for you based on patterns from its training data.

You get the average of all needs statements it's ever seen. Fluent, interchangeable…not yours.

The Blank Page Problem

There's this assumption that more options produce better outcomes. Give people unlimited freedom and they'll do their best work.

But there's research (from Catrinel Haught-Tromp, actually kind of a fun study known as the “Green Eggs and Ham Hypothesis”) where students wrote creative greeting card rhymes. Some got a specific noun they had to include. Others could write whatever.

The constrained group was more creative.

The explanation: constraints "limit the overwhelming number of available choices to a manageable subset." Without clear direction, choice overwhelms you. The blank page isn't freedom. It's paralysis.

This is why AI defaults to generic output. Give it an open-ended prompt and it faces infinite options. The path of least resistance is the average.

Constraints create specificity.

Make the Decisions First

So here's what we landed on: make the decisions first, then generate.

Grantspace uses structured inputs. Before any prose gets generated, you answer questions. What's the specific problem? Who experiences it? What evidence do you have? What will you do about it? What's your timeline? Who's responsible?

These aren't framed as prompts; they're constraints. The things that must be true or the writing collapses.

Once those exist, AI can turn structured decisions into fluent prose. The prose has rails. It can't drift into abstraction because the commitments are locked in.

It's like the difference between asking someone to "design a house" versus giving them a site, a budget, a family size, and a climate. The first produces generic houses. The second produces houses that might work.
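To make that concrete, here's a minimal sketch of the decide-first, generate-second shape. The field names, prompt wording, and refusal logic are hypothetical illustrations of the idea, not Grantspace's actual code.

```python
# Minimal sketch of "make the decisions first, then generate."
# Field names and prompt wording are hypothetical, not Grantspace's real API.

from dataclasses import dataclass, fields

@dataclass
class NeedsStatementDecisions:
    problem: str            # the specific problem, in the writer's own words
    who_experiences_it: str
    evidence: str           # data the writer can actually point to
    planned_response: str
    timeline: str
    responsible_party: str

def build_prompt(d: NeedsStatementDecisions) -> str:
    """Turn locked-in human decisions into constraints the model must honor."""
    missing = [f.name for f in fields(d) if not getattr(d, f.name).strip()]
    if missing:
        # Refuse to generate until the human has actually made every decision.
        raise ValueError(f"decisions not yet made: {missing}")
    commitments = "\n".join(f"- {f.name}: {getattr(d, f.name)}" for f in fields(d))
    return ("Write a needs statement using ONLY the commitments below. "
            "Do not add goals, evidence, or promises that are not listed.\n"
            + commitments)
```

The code isn't the point; the order of operations is. The human fills every field first, and only then does fluent prose get generated on top of those rails.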

Fluent Text Hides Sins

AI-generated writing often sounds competent without being accountable. Full of sentences like "We are committed to improving outcomes for underserved communities."

OK, fine. Perhaps that isn’t false. But it’s not checkable. I mean, who's doing what? By when? How will anyone know if it worked?

I started auditing writing for three things: Is it clear who does what? Are there actual dates? Can someone check if this happened?

Surprisingly, AI output frequently fails all three. Passive voice everywhere. Vague timelines. Promises that can't be verified.

There's a reason. Passive voice increases readers' distance from the content. AI defaults to passive constructions because they're "safer." "Outcomes will be improved" is less risky than "I will improve outcomes by December."

But that distance is what makes the writing unaccountable and so…unnerving.

Unaccountable writing eventually gets noticed. Maybe not on first read. But when someone tries to act on it or verify whether it happened, the emptiness becomes obvious.
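Here's a crude heuristic version of that three-question audit, just to show how checkable "checkable" can be. The regexes are simplistic stand-ins (a real audit still needs a human read), and none of this is Grantspace's actual checker.

```python
# Crude heuristics for the three audit questions: who does what, by when,
# and can someone verify it? Toy patterns, not a real accountability checker.

import re

ACTOR_HINT = re.compile(r"\b(I|we|our team|the director|staff)\b", re.IGNORECASE)
DATE_HINT = re.compile(r"\b(January|February|March|April|May|June|July|August|"
                       r"September|October|November|December|Q[1-4]|\d{4})\b")
PASSIVE_HINT = re.compile(r"\b(will be|is being|was|were|been)\s+\w+ed\b", re.IGNORECASE)

def audit(sentence):
    return {
        "names_an_actor": bool(ACTOR_HINT.search(sentence)),
        "has_a_date": bool(DATE_HINT.search(sentence)),
        "avoids_passive": not PASSIVE_HINT.search(sentence),
    }

print(audit("Outcomes will be improved for underserved communities."))
# {'names_an_actor': False, 'has_a_date': False, 'avoids_passive': False}
print(audit("I will enroll 40 new students by December 2025."))
# {'names_an_actor': True, 'has_a_date': True, 'avoids_passive': True}
```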

Why Corporate Apologies Fail

Years ago, United Airlines landed in hot water after dragging a passenger off a flight.

Their response? "This is an upsetting event to all of us here at United. I apologize for having to re-accommodate these customers."

Nothing false…but nothing really accountable, either.

An apology is supposed to be a trust-repair ritual, right? When someone apologizes, you're subconsciously asking: Do you understand what you did? Do you understand why it mattered? Are you taking responsibility? Will this happen again? How will I know?

Generic apologies fail all five. Abstract language, passive voice, emphasis on feelings over actions, no causality, no timelines. Blah blah blah. Sanitized into meaninglessness…almost as if it were optimized for safe legal distance. (Which, of course, it was.)

But here's what I didn't realize until recently: we don't mistrust generic apologies just because they're vague. Structurally, it goes back to that Pennebaker thing again: the harmed party is using first-person pronouns, specific details, present-tense language. The corporate apology uses passive voice, collective pronouns, hedged timelines.

Complete mismatch.

Your brain doesn't think "insufficient apology." It feels "this person is not speaking with me."

And guess what AI loves to do? Write in corporate apology mode. It's trained to generalize, de-risk, avoid commitments, sound universally acceptable. Which pushes everything toward abstraction and neutrality.

The exact opposite of what trust requires…whether in marketing copy, a LinkedIn post, or a grant proposal.

OK, So What Matters?

My demo / talk on Thursday essentially boils down to: Structure > Prompting.

But the deeper (and harder) thing is that AI writing quality is a system property, not a model property. You don't get better writing overall by waiting for GPT-5. You get it by separating deciding from writing, capturing constraints before generation, auditing for accountability, keeping humans in charge of the relationship.

The AI is a tool in that system. Powerful, yes…but the actual system is what makes the output usable.

Most AI writing tools are built around the demo (look how fast it generates text!) not the use case (I need writing I can stand behind that says what I mean and sounds like it came from someone who knows my situation).

Pennebaker's research keeps coming back. Connection isn't about impressive words. It's about the tiny ones. The ones that signal "I'm speaking with you, not at you."

AI is very good at impressive words. Bad at tiny ones. The little human edges we barely notice.

The tools that work will respect this. Use AI for fluency, consistency, scale. Keep humans in control of the parts that create connection: specific details, real commitments, relationship with the actual reader.

We're building Grantspace around this idea. Will it work? I think so. We’ll find out soon if it does…or if we’ve just spent six months building a very opinionated text editor.

Payroll errors cost more than you think

While many businesses are solving problems at lightspeed, their payroll systems seem to stay stuck in the past. Deel's free Payroll Toolkit shows you what's actually changing in payroll this year, which problems hit first, and how to fix them before they cost you. Because new compliance rules, AI automation, and multi-country remote teams are all colliding at once.

Check out the free Deel Payroll Toolkit today and get a step-by-step roadmap to modernize operations, reduce manual work, and build a payroll strategy that scales with confidence.