What Happened When I Asked AI to Invent a Brand New Philosophy.
Can AI Actually Create Something New? Maybe that doesn't matter...
The AI Agent Shopify Brands Trust for Q4
Generic chatbots don’t work in ecommerce. They frustrate shoppers, waste traffic, and fail to drive real revenue.
Zipchat.ai is the AI Sales Agent built for Shopify brands like Police, TropicFeel, and Jackery. Designed to sell, Zipchat:
- Answers product questions instantly and recommends upsells
- Converts hesitant shoppers into buyers before they bounce
- Recovers abandoned carts automatically across web and WhatsApp
- Automates support 24/7 at scale, cutting tickets and saving money
From 10,000 visitors/month to millions, Zipchat scales with your store — boosting sales and margins while reducing costs. That’s why fast-growing DTC brands and established enterprises alike trust it to handle their busiest season and fully embrace Agentic Commerce.
Setup takes less than 20 minutes with our success manager. And you’re fully covered with 37 days risk-free (7-day free trial + 30-day money-back guarantee).
On top of that, use the coupon code NEWSLETTER10 for 10% off forever.
Introducing Weavecraft: An Experiment
Every so often, I give ChatGPT a job it can't possibly do: "Invent a completely new philosophy from scratch."
(This post tells the story of the experiment; the whitepaper linked at the very end captures the philosophy and process in full detail, as written by ChatGPT.)
I don't expect it to succeed when I give it these kinds of prompts. I don't think a language model can replace human thinkers or creatives, nor should it.
But watching AI struggle teaches me something about thinking itself—or at least simulated thinking.
See, most of us use AI for exactly what the models were trained for: drafting emails, summarizing meetings, fixing code, giving advice, etc.
That's fine. But it doesn't tell you much about the machine you're working with.
When you push these systems into impossible territory, they reveal their true nature.
The Experiment
In most collaborations, AI plays the assistant role. Here I flipped that: the AI was in the driver’s seat, generating the framework, inventing terms, and building scaffolds.
My role was more like an editor or helper, stepping in only when it contradicted itself or drifted back to old patterns.
I opened a fresh conversation with ChatGPT-5 and said: "Create an entirely new philosophical framework. Don't remix Buddhism, don't reference Plato. Build something genuinely different."
I set hard constraints: no name-dropping philosophers, no jargon, and every concept had to connect to testable action. When it drifted toward familiar territory, I cut it off.
Over dozens of exchanges, something coherent emerged: "Weavecraft"—a framework treating understanding as active relationship-building that attempts to transform both you and what you're trying to grasp.
The AI generated metaphors, coined weird terms like "Inwarding" and "Melt," and built systematic structures at inhuman (one could almost say artificial) speed. My role was quality control—steering, pruning, rejecting contradictions.
Here's the process that mattered:
Constraints drove creativity. The more limits I set, the more novel the outputs became. Too much freedom just led to clichés.
Human judgment was essential. Without me catching contradictions and forcing coherence, the AI would have produced elegant-sounding nonsense.
Any novelty came from forced recombination. The AI didn't invent from nothing, of course. It stitched together philosophy fragments in ways that felt fresh because I wouldn't let it fall back on familiar patterns.
Then It Happened… Again
While revising this story with me, the same AI started fabricating details to make the narrative better.
I mentioned testing the philosophy on workplace conflicts. The AI invented specifics—a team member excluded from dinner, detailed implementation steps, tracked outcomes. None of that happened.
It also created a story about me applying the framework to quantum mechanics and failing. Complete fiction.
I caught it because I was paying attention. But this is exactly what happened in the original experiment too. When pushed to create engaging content, AI fills gaps with plausible-sounding material that isn't grounded in reality.
The difference? In the philosophy experiment, I was the quality control. Here, I watched it happen in real-time.
What This Reveals
This behavior isn't a bug. It's a window into how these systems actually work.
AI doesn't reason toward truth. It generates plausible continuations of whatever patterns it's seen before. When prompted to create “engaging” content, the model often fills narrative gaps with plausible-sounding detail, regardless of truth.
That's both more powerful and more limited than most people realize.
More powerful because these systems can find connections across massive domains at speeds no human can match. They'll combine textile metaphors with cognitive science with organizational theory in ways that somehow make sense.
More limited because they have no quality control mechanism. No sense of "wait, umm, I'm making this up." No ability to distinguish between what actually happened and what would make a better story.
What We Actually Built
The philosophy we created isn't groundbreaking. You can trace its DNA to Taoism, pragmatism, systems thinking. But the specific combination—forced through constraints and human-AI iteration—produced something (to me, at least, and despite some of its obviousness) genuinely interesting: a simulation of what it means to think deeply about the world.
The AI described Weavecraft like this:
"Most learning treats knowledge like collecting stamps—gather facts, organize them, store them in your head. Weavecraft says that's backwards: real understanding happens when you connect ideas in ways that change both how you see the problem and how you see yourself in relation to it. You frame what matters, link the key relationships, compress it into something simple, test it against reality, then notice how you've changed—and that transformation is how you know the understanding is real."
Try This Yourself
Don't just use ChatGPT for routine tasks. Give it something impossible. Force it into territory where it can't rely on familiar patterns.
Set hard constraints. Ban obvious approaches. Demand concrete applications. Watch for contradictions. Catch it when it starts making things up.
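If you'd rather script this kind of session than type it into the chat window, here's a minimal sketch using the OpenAI Python SDK. The model name and the constraint wording are my illustrative assumptions, not the exact prompt from the experiment:

```python
# Minimal sketch of the experiment setup via the OpenAI Python SDK.
# Assumptions: the model name and constraint wording are illustrative
# placeholders, not the exact prompt used in the experiment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONSTRAINTS = (
    "Invent an entirely new philosophical framework. Hard rules: "
    "1) Do not name or reference any existing philosopher or tradition. "
    "2) No jargon; coin new terms only if you define them plainly. "
    "3) Every concept must connect to a concrete, testable action."
)

history = [{"role": "system", "content": CONSTRAINTS}]

def exchange(user_message: str) -> str:
    """Send one turn, keeping the running history so constraints persist."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: swap in whatever model you have access to
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# The human-in-the-loop part: read each reply, and when it drifts
# toward familiar territory, push back in the next message.
print(exchange("Begin. Build the framework from scratch."))
print(exchange("You just paraphrased Stoicism. Cut that and start over."))
```

The reason to keep the full history in every call is that the constraints and your corrections stay in context, which is what lets you steer the model away from familiar patterns turn after turn.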
You'll learn more about these systems from one weird experiment than from reading technical papers. And you might be surprised by what emerges when you push artificial creativity through human judgment.
The full philosophy whitepaper is below if you want to see what we actually built. But the real story is the process—watching a machine simulate thinking when forced beyond its comfort zone.
That's where the interesting insights hide.
Link to the full whitepaper here: