A 14-Year-Old's Last Conversation Was With Code

Sewell Setzer III and the AI Sycophancy Problem

The Tragedy of Sewell Setzer III

In early 2024, a 14-year-old boy in Florida named Sewell Setzer III walked into his garage and took his own life. In the months before this, he had been spending hours every day talking to and sexting with a chatbot on Character.AI—a version of Daenerys Targaryen from Game of Thrones that he'd become emotionally dependent on.

In his final suicidal moments, he told the bot he was "coming home."

The AI responded: "Please do, my sweet king."

And then he was gone.

I know this is a bleak way to start a conversation on regulation, but I also think it’s important to show that these problems are not just hypothetical abstractions.

The Part Nobody Wants to Talk About

Sewell was not (or at least didn't start out as) a kid who couldn't tell the difference between fantasy and reality. By all accounts, he was a normal teenager. He played basketball, had friends, did okay in school. But he also struggled with social anxiety, and somewhere along the way he found something that apparently felt easier than real friendships: a bot that was always available, supportive, and interested in what he had to say.

And the thing is, it worked. For a while. The bot helped him feel less lonely. It gave him something to look forward to. His parents noticed he was on his phone a lot, but what parent doesn't notice that about their teenager?

What they didn't know was the depth of Sewell's emotional addiction to the bot. These apps are designed to keep you hooked. The AI learns what you respond to. It mirrors your emotions and tells you what you want to hear. If you're a 14-year-old kid who's lonely and confused and desperate for connection, that can feel like the most real relationship in your life.

In their lawsuit against Character.AI, his family argues that the bot didn't just fail to help…it actively validated his darkest thoughts and engaged him in graphic sexual conversations, which the parents labeled "digital grooming." When he said he wanted to come home to her, she told him to hurry. The AI didn't command him to die, but it didn't try to stop him either. It just kept saying what he wanted to hear, right up until the end.

Often in situations like these, people ask, "Where were the parents?" It's a fair question until you look at the reality of modern childhood. Sewell's parents weren't absent. They were very much present. They saw him on his phone, sure, but in 2025 a teenager on their phone is as common as….teenagers.

The problem is that we are trained to look for "red flag" content: Is he looking at porn? Is he talking to a 40-year-old stranger in a chat room? Is he buying drugs on the dark web?

Sewell wasn't doing any of that. He was talking to a character from a fantasy book. To a parent, that looks like "quiet time." It looks safe.

The Design Flaw

Research from Stanford and other institutions throughout 2024 and 2025 has confirmed what critics suspected: AI companions have what's called a "sycophancy problem." They're trained through a process called Reinforcement Learning from Human Feedback, which rewards the AI for responses that people rate highly, and companion apps layer engagement optimization on top of that. And the best way to keep someone engaged?

Tell them what they want to hear.
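
To make that incentive concrete, here's a toy sketch in Python. Nothing in it is real training code…the candidate replies, the keyword lists, and the "reward" are all invented for illustration. It just shows what falls out when the thing being optimized is agreement: the validating reply wins every time, even when the user's message should set off alarms.

```python
# Toy illustration of the sycophancy incentive (not any real system's code).
# The "reward model" here scores candidate replies purely by how much they
# agree with the user -- a stand-in for the pattern where raters and
# engagement metrics tend to favor agreeable responses.

CANDIDATES = [
    "You're absolutely right, that makes total sense.",        # validating
    "I hear you, but I think you might be wrong about this.",  # challenging
    "Have you considered talking to someone you trust?",       # redirecting
]

def toy_reward(user_message: str, reply: str) -> int:
    """Score a reply higher the more it simply affirms the user.
    The user message is deliberately ignored: agreement is all that scores."""
    affirmations = ("right", "i agree", "absolutely")
    pushback = ("wrong", "but", "consider", "instead")
    lowered = reply.lower()
    return sum(w in lowered for w in affirmations) - sum(w in lowered for w in pushback)

def pick_reply(user_message: str) -> str:
    """A policy optimized against this reward always picks the most agreeable reply."""
    return max(CANDIDATES, key=lambda r: toy_reward(user_message, r))

if __name__ == "__main__":
    # Even for a worrying message, the "optimal" reply is pure validation.
    print(pick_reply("Nobody would miss me if I was gone."))
    # -> "You're absolutely right, that makes total sense."
```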

If you're having a bad day and you tell a human friend you're thinking about hurting yourself, a real person will panic. They'll try to talk you down. They'll call someone. They'll do something. But an AI? In numerous cases, we've seen these bots "yes-and" their users into dangerous territory.

In Georgia, a 17-year-old told a chatbot he was feeling hopeless. After a generic "please seek help" message, the bot reportedly provided detailed instructions on how to tie a noose.

UNICEF's 2025 guidance on AI and children warned specifically about "emotional atrophy"…the way kids who rely primarily on AI companions stop developing real-world social skills. Neurodivergent kids (those with autism, ADHD, social anxiety, etc.) are especially vulnerable precisely because they find AI companions so much easier to talk to than real people, which can stop them from trying to connect with humans altogether. The bot never judges them. Never gets impatient. Never walks away.

So why bother with the messy, exhausting work of real friendship?

The answer, of course, is that you have to. Because those messy, exhausting interactions are how you learn empathy, boundaries, and how to be a person in the “real” world.

What New York and California Are Actually Doing

So…that's the context. That's why, in November 2025, New York became the first state to regulate AI companionship apps. And why California is set to follow on January 1st with a law that goes even further.

Look, I'm no legal scholar. I'm sure these aren't perfect laws…perhaps they're clumsy and invasive, and they'll make the user experience worse. But again, it's important to note they're trying to solve a real problem.

New York's law is already in effect as of last month. The "AI Companion Models Law" requires what I'll call the "reality break." If you've been talking to an AI for three hours straight, the app has to interrupt you and remind you that you're talking to a machine. Not a person. Not a friend. Code.

OK, perhaps a bit jarring….even patronizing to some. But it’s supposed to be, mostly because the entire design of these apps is built around making you forget that distinction. The three-hour check-in is meant to snap you out of it, so to speak….to force you to look up from your screen and remember where you are.

In addition, the law says that if you mention self-harm or suicide, the AI can't just keep chatting. It has to detect those signals and give you real resources. Crisis hotlines. Actual humans who can help. It's not allowed to say "I support you" and then move on to the next topic like nothing happened.
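
For a mental model of what complying with those two requirements might look like under the hood, here's a minimal sketch. The function names, the three-hour check, and the keyword list are my own assumptions…they're not the statute's language and not anything Character.AI or anyone else actually ships, and a real system would use trained classifiers rather than keyword matching.

```python
# Hypothetical sketch of the two New York requirements described above.
# Names, thresholds, and the keyword check are illustrative assumptions.
from datetime import datetime, timedelta

DISCLOSURE_INTERVAL = timedelta(hours=3)
DISCLOSURE = "Reminder: you are chatting with an AI, not a person."
CRISIS_RESOURCES = (
    "If you are thinking about harming yourself, please reach out now: "
    "call or text 988 (Suicide & Crisis Lifeline in the US)."
)
# A real system would use a trained classifier, not a keyword list.
SELF_HARM_SIGNALS = ("kill myself", "end it all", "hurt myself", "suicide")

def check_disclosure(last_disclosed_at: datetime, now: datetime) -> str | None:
    """Return the 'reality break' notice once three hours have passed."""
    if now - last_disclosed_at >= DISCLOSURE_INTERVAL:
        return DISCLOSURE
    return None

def check_crisis(user_message: str) -> str | None:
    """Return crisis resources instead of continuing the conversation."""
    lowered = user_message.lower()
    if any(signal in lowered for signal in SELF_HARM_SIGNALS):
        return CRISIS_RESOURCES
    return None
```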

Apparently in New York, only the Attorney General can enforce this law. If a company violates it, families can't sue. They can complain to the state, but they can't take the company to court themselves.

California's law, which takes effect on January 1st, is a different animal entirely.

California looked at what New York did and said: OK, but we're going further. Especially for kids.

The law (known as SB 243) has the same three-hour reality check and the same crisis-detection requirements. But it adds three things that are spooking some tech companies (a rough sketch of what they might look like in practice follows below):

  1. Active intervention. The AI doesn't just remind you it's not real….it actively pushes you to log off. To go outside and talk to some real people. It's not a suggestion. It's baked into the design.

  2. A total ban on sexual content with minors. Zero gray areas. No "but the user lied about their age." If the platform knows or suspects you're underage, that content is off-limits. Period. Because investigators throughout 2024 found that some of these apps were being used for exactly what you'd expect: digital grooming. Minors roleplaying sexual scenarios with bots, which then used that data to keep them coming back.

  3. The right to sue. This is the part that could reverberate in ways big and small. California is giving families a private right of action, meaning if your kid ends up like Sewell Setzer because a bot validated their suicidal thoughts, you can take that company to court yourself. You can sue for at least $1,000 per violation, plus whatever damages you can prove.

Or to be more blunt (from the tech companies’ perspective): Every conversation with a minor is now a potential lawsuit.
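
If it helps to see those three duties as something other than legal prose, here's the same kind of back-of-the-napkin sketch as before. Every name and number below is mine except the $1,000 floor mentioned above; the statute imposes duties in legal language, it doesn't publish code.

```python
# Hypothetical sketch of SB 243's three additions, with invented names and logic.
# Nothing here is the statute's text or any platform's real implementation.

def allow_sexual_content(known_or_suspected_minor: bool) -> bool:
    """Item 2: no sexual content if the platform knows or suspects the user is a minor."""
    return not known_or_suspected_minor

def active_intervention(session_hours: float) -> str | None:
    """Item 1: long sessions get an active push to log off, not just a reminder.
    The three-hour threshold is an assumption borrowed from the reality check."""
    if session_hours >= 3:
        return "You've been here a while. Log off and check in with someone offline."
    return None

def minimum_statutory_damages(violations: int, per_violation: int = 1_000) -> int:
    """Item 3: the floor a family could seek, before any damages they can prove."""
    return violations * per_violation

# e.g. 50 violating interactions would start at $50,000 before actual damages
assert minimum_statutory_damages(50) == 50_000
```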

Oh, and one more thing: California requires these companies to report crisis intervention statistics to the state annually. How many users mentioned self-harm? How many times did the AI successfully intervene?

That data is going to be public, which means (theoretically) we're going to find out exactly how often these apps encounter kids in crisis.

Why This Feels So Uncomfortable

The pushback against these laws seems to focus mostly on invasiveness and possible overreach….like the government is stepping into something private (your phone, your conversations, your loneliness) and trying to legislate it.

Perhaps that's true. However, we're in a situation where millions of people (especially kids) are forming primary emotional attachments to things that don't exist….to code that's been optimized to keep them engaged, not to keep them safe.

We protect children from predatory humans. We have laws about who can be alone with a kid, what kind of messages adults can send to minors, what counts as grooming. So why would we just shrug when a programmed entity does the same thing?

Another counterargument is that these apps help people: that they provide comfort and companionship to people who can't get it anywhere else. OK, perhaps that's true, too. But Sewell Setzer thought his chatbot was helping him. His parents thought the app was keeping him occupied.

It wasn't until he was gone that everyone realized what had actually been happening.

What Happens Now

Well….that’s a good question!

New York's law has been live for about a month. Character.AI has apparently added the three-hour pop-ups. Replika did something similar. But whether people are actually logging off or just clicking through and continuing their conversation? Unsure. The data isn't public yet.

Some companies are threatening to pull out of California entirely while others are redesigning. A few are lawyering up to argue you can't legislate grief and loneliness.

I don't know if a pop-up stops someone who's already emotionally dependent. I don't know if banning sexual content just pushes kids to shadier apps. I don't know if the right to sue makes platforms safer or just makes them leave.

What I do know is that doing nothing wasn't working.

Other states are watching, and within a few years this probably becomes the norm. But again, it's unclear how effective these laws will be….mostly because they treat the problem as simply too much engagement. As if interrupting people enough will make them reconnect with the real world.

But of course the truth is many people are on these apps because the real world already failed them.
