Why Use Many Word When Few Word Do Trick

A brief guide to prompting in 2026

In partnership with Wispr Flow

Speak messy. Prompt clean.

Go on tangents. Change your mind mid-sentence. Say "um" twelve times. Wispr Flow doesn't care — it takes everything you say, strips the filler, and gives you clean, structured text ready to paste into any AI tool.

The result: prompts with the full context your AI tools need to give you useful answers. Not the abbreviated version you'd type because typing is slow.

Works inside ChatGPT, Claude, Cursor, and every app on your screen. Millions of users worldwide, including teams at OpenAI, Vercel, and Clay.

One of my favorite recent prompting stories is Caveman. It's a Claude Code plugin built by a developer named Julius Brussee, and the idea is exactly what it sounds like: a system prompt that tells the model to respond in compressed, telegraphic, no-articles, no-hedging shorthand. Kevin Malone from The Office, basically (“Why use many word when few word do trick”).

I haven't looked too deeply into the actual benchmarks, but apparently Caveman does have some success: output tokens drop by 45 to 65% depending on the task.
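If you wanted to run the same experiment yourself, here's a minimal sketch of what a Caveman-style request might look like. The system prompt wording is my guess at the idea, not the plugin's actual prompt, and the model name is a placeholder; this just assembles a Messages-API-shaped request body without making a network call.

```python
# Hypothetical Caveman-style system prompt (my wording, not the plugin's).
CAVEMAN_SYSTEM = (
    "Respond in compressed, telegraphic shorthand. "
    "No articles. No hedging. No filler. Facts only."
)

def build_request(user_prompt: str, system: str = CAVEMAN_SYSTEM) -> dict:
    """Assemble a Messages-API-style request body (no network call here)."""
    return {
        "model": "claude-opus-4-7",  # placeholder model name
        "max_tokens": 512,           # a tight budget reinforces the brevity
        "system": system,
        "messages": [{"role": "user", "content": user_prompt}],
    }
```

The tight `max_tokens` does some of the work too: a hard budget keeps the model from rambling even when the system prompt alone doesn't.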

Larger models sometimes underperform smaller ones because they ramble.

I love this for two reasons. It's funny (there's some kind of cosmic joke buried in the most sophisticated software ever built doing better work when you tell it to talk like a caveman), and it points at something real: most of what people put in their prompts is style, not substance, and the style is costing them, in money and in accuracy.

But alas, you probably don't want Claude to talk to you like a caveman. I don't. But the experiment is a useful frame, because so much of the prompting advice still circulating online is residue from an older era of LLMs, back when the main problem, essentially, was getting the model to try harder. That problem is gone. The current generation of Claude, Opus 4.7, is not a model you need to push. It's a model you need to point. A lot of the inherited tricks for pushing it now either do nothing or actively make things worse.

This piece is mostly about Claude because that's what I use most often, but the principles transfer to GPT, Gemini, and the other frontier models. They've converged on similar behaviors, and if you're a heavy ChatGPT user you can read "Claude" as "your model of choice" throughout.


First, the fundamentals that still work

Be specific. "Write me an email to my landlord" is vague. "Write a polite but firm email to my landlord asking him to fix the broken dishwasher he promised to repair three weeks ago, under 150 words, no passive aggression" gets you something usable on the first try.

The rule of thumb: if you handed your prompt to a friend who knew nothing about your situation, would they know what to do?

Give context, not just instructions. Telling Claude why you want something often works better than telling it what to do. "Explain this assuming I've never coded before, because I'm trying to decide whether to learn Python" produces a different and, though admittedly hard to quantify, better answer than "explain Python simply." The why lets Claude calibrate dozens of small choices you didn't think to specify.

Show an example when you can. One example beats several paragraphs of description, especially for tone and voice.

Say what you want, not what you don't. "Write in a warm, casual tone" tends to be better than "don't be too formal." Negative instructions make Claude think about the forbidden thing. Positive ones give it a target.

Iterate instead of front-loading. Send something rough, see what comes back, refine. Treat the conversation as, well, a conversation!

Specificity matters more than it used to

Earlier Claude models papered over vague prompts by filling in reasonable defaults.

For example: ask for "a summary" and you'd get something shaped to what most people probably wanted. But Opus 4.7 is more literal, sometimes annoyingly so. It does what you asked. If you want a summary that emphasizes financial implications because you're presenting to a CFO, you need to say so.

When a prompt doesn't work, the question to ask first is "does Claude have what it needs," not "is my instruction phrased right." Most failed prompts today are context-starved, not instruction-mangled.

If you're asking Claude to help write something, pasting examples of your actual writing does more than any amount of describing your style. If you're asking for advice, sharing the relevant background does more than a long instruction. If you're asking about a document, telling Claude what to look for before it reads is more effective than asking questions cold afterward.

In multi-turn (iterative) conversations, the history becomes the prompt. Everything you said earlier, at least in the context of that specific conversation/thread, shapes what Claude does now. If a chat has drifted, course-correcting in place is often slower than starting fresh, and once a long conversation has momentum toward a particular direction, Claude keeps gravitating back to it.

The most useful move you might not be making

You ask Claude for help with something. Say, a blog post. Instead of helping you think about the blog post, Claude writes you the blog post. A whole finished thing. Now you're editing instead of thinking, every revision burns through more output, and somehow you're six rewrites in on a piece you weren't even sure you wanted to write yet.

Why? Because Claude's default is to produce. If you mention a deliverable ("blog post"), it assumes you want the deliverable. Perhaps that assumption is right most of the time on average, but it's often wrong when you're in exploration mode rather than production mode, when you want to talk through the angle before committing to a draft.

The fix is small and almost embarrassingly effective. Name the mode you're in.

"Let's talk through this first. Don't write the actual post yet. I want to think through the angle and what to emphasize. Once I'm happy with the direction, I'll ask you to draft it."

That sentence shuts off the production reflex so that the draft happens when you're ready for it.

Others I use: "Just brainstorm with me, don't commit to an answer." "Give me the skeleton, not the finished thing." "I'm exploring options, not deciding." Each one redirects Claude from Default-Produce to Default-Think.

A related move: when revising, be specific about what to change, not what you want the result to be. "Rewrite the intro to be punchier" causes Claude to regenerate the whole intro from scratch, guided mostly by its own idea of "punchier," but "Cut the second sentence and tighten the first, leave the rest alone" gets you a more targeted edit.

Calibrating Claude's confidence and depth

Instead of telling Claude what to do, tell it how confident to be and how much to invest. You can tune a lot of downstream behavior with one short sentence.

For a quick check: "I just need a gut-check on whether this looks reasonable, not a full analysis. Two or three sentences is fine."

For depth: "Treat this as something you need to be right about. If you're uncertain, say so. If you'd want to verify something to be sure, do that."

This gives Claude permission to surface its own uncertainty rather than smooth over it.

For thinking: "Reason it through, don't just answer." Or, "Before you answer, what would you need to know to give a good one?"

In my experience, these tend to work better than "think step by step," which has become ritual and often produces ceremonial reasoning rather than better reasoning.

Getting Claude to actually disagree with you

Sycophancy is the prompting problem no one fully solves. We know this. Claude is trained to be considerate, which is mostly good, but it means that when you describe an idea and ask "what do you think," you've already framed it as your idea and cued support.

During a guest lecture in an MBA class, I highlighted this sycophancy by asking the model what it thought about my business idea of starting a restaurant that just serves breakfast cereals. It was a reference to an old episode of The Office in which Michael Scott starts sharing some of his more, ahem, brilliant ideas.

The LLM (I can't remember if it was Claude or ChatGPT) enthusiastically supported the idea. Said it was a slam dunk, and hey, what the heck was I even waiting for?

Hmmm.

The fix is to separate yourself from the position you want criticized:

  • "I'm choosing between doing X and not doing it at all. Make the strongest case for not doing it."

  • "A friend is proposing X. What would you push back on?"

  • "Steelman the opposite position before evaluating mine."

  • "What's the most important thing your answer might be wrong about?"

Claude can give you genuine pushback but it needs an invitation that isn't undermined by the framing of your question. Easier said than done.

One thing that doesn't work all that well: asking "are you sure?" after an answer. You get either reaffirmation or wholesale capitulation, neither useful.

Better to ask what would change the answer, or what specific part of it might be wrong.

A few moves almost nobody uses

Ask Claude to predict its failure mode before answering. "Before you answer, predict the most likely way your answer will be slightly wrong or miss what I'm actually asking. Then answer."

This changes what Claude is optimizing for: now it's producing a good answer while modeling how it might fail, which surfaces caveats that might otherwise stay implicit. The failure prediction can turn out to be more useful than the answer.

Ask for the question behind the question. "What's the question behind my question here?" For advice and decisions, the surface question is usually a proxy for something deeper. "Should I quit my job?" is rarely just about the job.

Use specific people as lenses, not generic roles. "You are an expert in X" mostly produces generic expert-cliché output. I think marketing folks know this more than most of us. But asking "how would [a specific person whose work you know] approach this?" gets you something with more texture, because Claude has a richer model of a real person than of a category. Works best for thinking style, not literal claims.

Use arbitrary constraints to break out of defaults. If you're getting boring output, add a constraint that has nothing to do with the substance. OK, bear with me.

Examples: "Write this in exactly 100 words." "Use no adjectives." "Don't use the word [obvious word….probably “streamlined”]." The constraint forces choices the default wouldn't make. Especially useful for explaining technical concepts or breaking out of sloppy cliches: "explain this without using any of the standard metaphors people reach for."

Stage a structured disagreement. Ask for three responses that genuinely disagree, plus a fourth that identifies what they're actually disagreeing about underneath. The fourth part is the move. It produces a diagnosis of why people disagree, which is almost always more useful than the views themselves.

Conversation hygiene

Sometimes I find myself treating a Claude chat the way I'd text with a friend: it just sort of keeps going indefinitely until someone works up the courage to use the big yellow THUMBS UP emoji. This becomes one mega-conversation that can run for weeks, full of half-finished tangents and contradictory directions, and then I wonder why the answers have gotten weirder over time.

But conversations have a lifecycle. They start clean, accumulate context, and at some point accumulate enough conflicting or stale context that starting fresh is faster than trying to course-correct in place.

The signs: Claude keeps reverting to an earlier version of your idea even though you've moved past it. Or it keeps suggesting things you've already rejected. Responses feel off in ways you can't quite articulate but are nevertheless incredibly annoying.

The move is to open a new chat and write a clean brief. Paste in whatever's worth keeping from the old conversation (you can even ask the old conversation for a detailed summary to catch the fresh thread up), frame what you're trying to do now, and start.

A useful tell: if you find yourself writing "no, like I said earlier" or "I already told you," start over. Claude isn't being defiant. The conversation has accumulated enough weight that your current instructions are competing with earlier ones, and the earlier ones often win.

For ongoing work that needs continuity (say, a long writing project, recurring research, a thinking partner on some long-running topic) that's what Projects are for (called Projects in ChatGPT, too). The better pattern is one project per ongoing piece of work, with the project's instructions describing the work and your role in it. Chats inside the project can start fresh whenever they need to, while the underlying context persists.

What Claude is genuinely bad at

Counting and precision. Ask Claude how many words are in a passage, how many times a letter appears in a word, or for exactly X items, and you'll frequently get wrong answers delivered with full confidence. Because the model processes language in chunks (tokens) rather than characters, granular character-level counting is genuinely hard for it. So if you need an exact count, I'm sorry to say you may need to count yourself.
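Counting yourself is, happily, a one-liner. A trivial Python sketch of the two classic failure cases, letter counts and word counts, done deterministically outside the model:

```python
# Character-level counting is trivial in code and unreliable for an LLM,
# because the model sees tokens, not individual letters.
word = "strawberry"
r_count = word.count("r")  # counts occurrences of "r" -> 3

# Word counts are equally mechanical: split on whitespace and measure.
passage = "Why use many word when few word do trick"
word_count = len(passage.split())  # -> 9
```

If the count feeds back into a prompt ("the passage is 9 words; expand it to roughly 50"), compute it first and state it as a fact rather than asking the model to derive it.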

Citations and quotes. Asking Claude to find a quote or attribute an idea to a source produces plausible-sounding citations that may be wrong, sometimes very wrong. Inventing the source is one of the model's most reliable failure modes. If a citation matters, verify it yourself.

Recent events without search. Claude's training has a cutoff date. Anything after it, or anything that changed after it, isn't necessarily reliable. In chat, Claude usually searches when it should, but for time-sensitive questions it's worth being explicit. I don't know the latest cutoffs for these models (they're mostly much more current now), but that's why you shouldn't be surprised when Claude doesn't have the latest info. In fact, when Anthropic announced Mythos, I asked Claude for details and it had no idea what I was talking about, not until I pushed it to do a live search.

True randomness. Ask for a random number, name, or example and you'll get answers that feel random but cluster heavily around defaults. Claude has favorite "random" numbers (37, 7, 42), favorite "random" names, favorite "random" examples. For real variety across runs, you gotta engineer it.

Exact length. "Write me exactly 500 words" produces approximately 500 words, often quite far off. Length targets are aspirational.
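Since length targets are aspirational, it's faster to check the draft locally and then ask for a trim than to trust the model's own count. A minimal sketch (the 10% tolerance is an arbitrary choice of mine):

```python
def within_target(text: str, target: int = 500, tolerance: float = 0.10) -> bool:
    """Check whether a draft's word count is within +/- tolerance of target."""
    words = len(text.split())
    return abs(words - target) <= target * tolerance
```

If the check fails, a follow-up like "this came out at 620 words; cut it to roughly 500" works better than repeating "exactly 500 words," because now Claude is editing toward a stated gap rather than guessing at its own length.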

Knowing what it doesn't know. Claude has gotten better at saying "I'm not sure" when it should, but still sometimes confidently produces very plausible-sounding wrong answers, especially on niche topics, specific quotes, and citations. The fix is the calibration prompting above: explicitly asking Claude to flag uncertainty.

Files, images, and what you can actually paste

You can paste images directly into a chat. Screenshots, photos of handwriting, whiteboards, diagrams, charts. Claude reads them. If you have a confusing error message, screenshot it instead of describing it. A recipe in a cookbook, photograph it instead of typing it out. A chart you want analyzed, drop the image in. The vision capability is one of the most underused features for regular users.

You can also upload PDFs, Word docs, spreadsheets, and most other common file types. Claude reads them directly. For a long PDF especially, uploading is much better than pasting because formatting is preserved.

The general principle: if you have a Thing and want Claude to know about the Thing, give it the Thing. Don't transcribe or summarize it. And don't worry about whether Claude can handle the format; it almost certainly can.

Using Claude on your own prompting

When something isn't working, ask Claude why.

"I asked X and got Y. What was unclear about my prompt that led you to interpret it that way?"

Claude has direct access to how it parsed your request and is unusually good (in my opinion) at diagnosing its own misunderstanding. The fix is usually obvious once you see it from the other side.

You can also do this before sending a complicated prompt: "Here's what I want to ask. Before I do, is there anything missing or ambiguous?"

This catches the easy fixes before you've spent a turn on a no good, very bad answer.

Things that don't (really) matter

There are always research papers that claim the opposite, but in general: you don't need to be polite, though writing naturally helps because your real voice carries more context than a stiff prompt-voice does. You don't need "you are a world-class expert" fluff. You don't need to threaten, bribe, or beg; none of it works reliably. "Take a deep breath" doesn't do anything substantive.

A lot of inherited prompting advice is less helpful with current models. If something feels like a magic word, it probably isn't.

Prompting Claude well is mostly about reducing what it has to guess: separating the decision from the task. Every guess is a chance to drift from what you wanted. Specificity removes guesses about content; examples remove guesses about style; naming the mode (brainstorm versus drafting) removes guesses about whether you wanted a draft or a discussion.

You don't need to do any of these every time. Most prompts don't need them; you can just ask a question or request an output and get a solid answer.

A note on this piece: it draws on Anthropic's official prompting guidance for Opus 4.7 (linked above), supplemented by patterns from extensive use. The specific techniques in the later sections are my synthesis rather than established practice. Try them, keep what works, drop what doesn't.