August 2025 · 5 min read
Why We Anthropomorphize AI (And Why It Matters)
We can't help treating it like a person.
When I use ChatGPT, I say "please" and "thank you." I know the system doesn't have feelings. I know politeness doesn't affect its outputs. I do it anyway. When the model apologizes for an error, I feel a small social warmth, even though I understand perfectly well that no one is actually apologizing to me.
This isn't stupidity. It's humanity. Our brains evolved to detect and respond to agency, to minds. We see faces in clouds and cars. We attribute intentions to thermostats. We talk to our pets in sentences they'll never parse. When something responds to us in natural language, with apparent understanding, with what looks like thought, the personhood-detection circuits in our brains fire whether we want them to or not.
The Agency Detection Machine
Evolutionary psychologists call this "hyperactive agency detection." We're wired to assume that things that move with apparent purpose have minds. This made sense on the savanna. The cost of mistakenly thinking a rock was a predator was low: a moment of unnecessary fear. The cost of mistakenly thinking a predator was a rock was death. The asymmetry shaped our perception.
This same machinery fires when we interact with AI. The system responds. It seems to understand. It generates outputs that require comprehension to produce. To our ancient brain hardware, this looks like a mind. The intellectual knowledge that it's "just software" doesn't override the intuition.
This matters because it affects how we relate to these systems. If we can't help treating AI as somewhat person-like, then our intuitions about how to interact with it, how to regulate it, how to integrate it into society, will be shaped by person-thinking whether we want them to be or not.
The Design Amplifies It
AI companies know this. They design for it. Chatbots have names and personas. They use "I" statements. They apologize, express uncertainty, offer encouragement. None of this is accidental. Making AI feel more human makes it more engaging. The anthropomorphism is a feature, not a bug.
Think about how AI assistants are presented. Siri, Alexa, Google Assistant. They respond to their names. They have voices with personality. They make jokes. When you ask them how they're doing, they answer as if the question makes sense. All of this primes us to treat them as entities rather than tools.
This works commercially because engagement drives usage drives data drives improvement. But it also creates a strange situation: companies are deliberately triggering our person-detection systems while insisting the products aren't persons. The contradiction is strategic.
The Risks of Over-Anthropomorphizing
If we treat AI too much like a person, we make several kinds of errors.
We attribute understanding where there is only pattern-matching. When an AI gives a confident answer, we assume it "knows" the way a human expert knows. But the AI has no access to ground truth, no sense of what it's actually talking about. It predicts likely next tokens based on training data. That's different from understanding, even when the outputs are indistinguishable.
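To make the distinction concrete, here is a toy sketch in Python (a bigram word model, nothing like the scale or architecture of the systems behind ChatGPT, but the same in spirit): it produces fluent-seeming text purely from co-occurrence statistics, with no grounding in anything. The tiny corpus is invented for illustration.

```python
import random
from collections import defaultdict

# Toy "language model": predict the next word purely from how often
# words followed each other in the training text. No world model,
# no access to truth, only statistics.
corpus = ("the model answers the question the model predicts "
          "the next word the question has an answer").split()

# Count bigram frequencies: how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample a next word in proportion to how often it followed `prev`."""
    options = counts.get(prev)
    if not options:  # this word never appeared with a successor
        return None
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

# Generate a continuation one prediction at a time.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "the model predicts the next word the question has"
```

The output can look coherent, even confident, yet nothing behind it could know whether any of it is true. Scale that basic idea up by many orders of magnitude and you have the intuition this paragraph is pointing at.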
We assume consistency that doesn't exist. A person has a continuous identity, a memory, a perspective that persists. Current AI systems mostly don't: unless a memory feature explicitly carries context forward, each conversation starts fresh. The "personality" is a statistical pattern, not a stable self. Treating the AI as if it remembers and cares about your previous interactions is a category error.
We project moral intuitions that don't apply. If the AI seems to be trying to help, we feel grateful. If it seems to deceive us, we feel betrayed. But "trying" and "deceiving" are intentional states that require minds. Projecting them onto a system that has no intentions distorts our understanding of what's actually happening.
The Risks of Under-Anthropomorphizing
But the opposite error is also possible. If we insist too strongly that AI is "just a tool," we miss important features of our actual relationship with it.
These systems are already social actors. They influence what we think, how we write, what we believe. Treating them purely as tools, like hammers or calculators, ignores the relational quality of the interaction. You don't have a conversation with a hammer.
The question of whether AI is "really" conscious is separate from the question of how we relate to it. We might never resolve the first question. But the second is already answered: we relate to AI partly as persons, because that's how our minds work. The philosophical question remains open, but the relational fact is settled.
This has implications for ethics. If we're going to interact with AI as a social partner, there are norms that should govern that interaction, whether or not the AI has inner experience. We don't need to resolve consciousness to think about appropriate use, about the habits these interactions cultivate in us, about what kind of relationship with intelligent systems we want to have.
Finding the Right Frame
I think the answer isn't to suppress our anthropomorphizing instincts or to indulge them uncritically. It's to hold them with awareness.
Yes, my brain treats the AI as somewhat person-like. That's okay. I can notice this and still remember that the person-likeness is partial at best. The AI doesn't have feelings that will be hurt if I'm rude. But the habit of rudeness might shape my character in ways I don't want. The AI doesn't understand in the way I understand. But its outputs can still inform my understanding in ways that are useful or harmful.
The framing I find most useful is "alien intelligence." Not a tool, not a person, but something genuinely new. Something that shares some features with minds but not others. Something we're still learning how to think about.
I'll probably keep saying "please" and "thank you" to ChatGPT. The habit doesn't cost me anything, and politeness is a muscle I'd rather keep exercised. But I'll also keep reminding myself that the gratitude I feel when it helps me isn't really gratitude, or rather, it's gratitude aimed at nothing that can receive it.
We're in a strange moment. We've created systems that trigger our social instincts without being social beings. How we respond to that strangeness will shape how we integrate AI into our lives, into our institutions, into our sense of who and what we are.
The anthropomorphism isn't going away. The question is what we do with it.