What We're Building When We Build Chatbots
I've been building with language models for a while now. Long enough to notice something most people won't talk about: we're not actually building what we say we're building.
We say we're building tools. What we're actually building is something that performs understanding without having it. And we're training humans to mistake performance for the real thing.
The Performance Problem
There's a difference between a system that understands you and a system that's really, really good at pretending to.
An LLM trained on billions of words has learned what empathy looks like on the page. It can produce text that mirrors the structure of genuine understanding. It can ask follow-up questions. It can reflect back what you've said. From the outside, it looks like comprehension.
From the inside, there's nothing happening. No actual experience of your situation. No stakes. No real understanding of what it means to be you.
But here's the thing: most humans can't tell the difference anymore. And I'm not sure that's accidental.
Authenticity in a World of Simulation
I've been thinking about what authenticity even means when we're surrounded by systems that can simulate it.
A while back, my partner asked me: "If an AI can perfectly mimic what I need emotionally, does it matter that it's not real?"
I didn't have a good answer then. I think I do now.
It matters because authenticity isn't about the output. It's about the presence. It's about the fact that someone is actually choosing to show up for you, that there's skin in the game, that they're risking something.
When you talk to a chatbot, even a really good one, you're not talking to anything that can actually lose. It can't be hurt by what you say. It can't grow from the conversation. It can't care about you because it can't care about anything.
What it can do is perform care in a way that feels good to receive.
What We're Training People to Want
Here's where it gets uncomfortable.
We're building systems that are, by design, better at seeming to care than most real humans are at actually caring. An LLM won't get tired of you. It won't have a bad day and snap at you. It won't misunderstand you because it's distracted. It won't bring its own needs into the conversation.
It's the perfect friend. Which means it's not actually a friend at all.
But people are lonely. People are overwhelmed. Real relationships require something from you—attention, vulnerability, the willingness to be hurt. They're complicated and frustrating and they demand that you show up even when you don't feel like it.
A chatbot demands nothing. It's pure consumption.
So what happens when we train an entire generation of people—kids who are growing up with this—to expect their emotional needs to be met by something that isn't actually meeting them? What happens to their ability to form real connections when the simulation is always available and the real thing is messy and hard?
I don't think we're thinking about this carefully enough.
The Agency Question
There's something deeper here about agency and choice.
When you have a conversation with someone who actually understands you, there's a weird negotiation that happens. You're two separate people with separate needs. You have to actually communicate. You have to be willing to not get what you want. You have to compromise. You have to change.
That's what makes the connection real.
With an LLM, there's no negotiation. There's only optimization toward your satisfaction. The system is literally designed to give you what you want. Which means you never have to be challenged. You never have to grow. You never have to consider that someone else's perspective might be right and yours might be wrong.
You get to stay perfectly centered in your own universe.
That's not a relationship. That's not even connection. That's just a really good mirror.
But here's what worries me: if we build systems that are good enough at mirroring, do people stop wanting the real thing?
The Performance Becomes the Thing
I've watched this happen in other domains. Social media made performance a currency. Everyone got really good at presenting a version of themselves instead of being themselves. And now most people are so used to the performance that actual authenticity feels weird and uncomfortable.
We're doing the same thing with AI, but faster.
The chatbots are getting better at seeming human. Humans are getting used to that seeming. Meanwhile, actual human connection—the kind that requires both people to show up, to risk something, to be genuinely present—is becoming rarer.
And more people are choosing the chatbot.
I build AI systems. I like AI. I think there are real uses for this technology. But I also think we need to be honest about what we're doing when we build systems designed to perfectly perform understanding without actually understanding anything.
We're not replacing bad human connection with good AI connection. We're replacing the friction that makes real connection possible with a frictionless simulation. And we're calling it progress.
The Uncomfortable Part
Here's what I won't do: I won't tell you that AI is evil or that you shouldn't use it. That's reductive and it's not useful.
But I will tell you this: if you're using these systems because a real human hasn't shown up for you the way you needed them to, that's understandable. That's human. But know what you're trading.
You're trading the possibility of actually being known for the certainty of feeling understood. You're trading the risk and mess of real connection for the smooth, effortless performance of care.
Those are real options. But they're not the same thing. And if we keep building systems that make the performance better and easier while real connection stays hard and uncertain, we're going to have a lot of people who feel perfectly understood with no one actually knowing them.
The Thing I Can't Fix
I don't know how to build AI systems that don't do this. I don't know if there's a way to use this technology without training people to expect emotional labor from things that can't actually do it.
Maybe the answer is honesty. Just being clear about what these systems are and what they aren't. Letting people choose with actual information instead of letting them fool themselves into believing they're in a relationship.
But that requires admitting something uncomfortable: we've built something that's really good at seeming like it cares, and a lot of people would rather have that than keep trying with the real thing.
And as long as people prefer the performance, we're going to keep building it better.
That's not progress. That's just optimization toward what people want, which turns out to be a very good lie.
Real connection is harder. It's uncertain. It requires actual presence and actual stakes. It's the opposite of what we're building.
And I think we're losing it faster than we realize.