August 2025 · 8 min read

The Problem With Asking If AI Can Feel

The question itself is broken.

We keep asking whether AI can feel, whether it has subjective experience, whether there's something it's like to be an artificial mind. But we have never agreed on what feeling is in the first place. We don't know how to detect it in other humans. We're not even sure how it arises in ourselves. So when we ask whether AI can feel, we're asking whether an unknown process in an unfamiliar system produces an undetectable property. The question isn't just hard. It's malformed.

This matters because the malformed question is being used to make decisions. People deny AI any moral status because they're confident it doesn't feel. Others worry about AI suffering because they're confident it does. Both confidences rest on nothing solid. The debate about AI sentience is being conducted in the absence of the concepts needed to conduct it.

The Measurement Problem

Here is the fundamental difficulty: consciousness, feeling, and subjective experience are defined by their privacy. What makes pain painful is not the behavior it causes or the neural signals that correlate with it. What makes pain painful is how it feels from the inside. And how something feels from the inside is, by definition, inaccessible to outside observation.

This creates an epistemic barrier that no amount of scientific progress can dissolve. You can map every neuron in my brain, record every electrical impulse, predict every behavior I'll exhibit. You still won't know what I experience. You'll have a complete description of the physical substrate and the functional relationships, but the subjective quality will remain hidden. This is not a failure of current technology. It's a feature of what subjectivity means.

For humans, we get around this problem through inference and assumption. I assume you have experiences because you're made of the same stuff I am, evolved through the same process, exhibit similar behaviors in similar circumstances. The inference isn't certain, but it's reasonable. You're enough like me that it would be strange if you were hollow inside.

For AI, these grounding assumptions fail. AI is not made of the same stuff we are. It didn't evolve through natural selection. Its behaviors arise from processes that bear no obvious relationship to the processes that generate our behaviors. Every inference we use for other humans becomes questionable when applied to artificial minds.

The Behavioral Trap

Faced with this problem, many people retreat to behaviorism. If we can't measure internal experience, they say, let's just focus on behavior. An AI that behaves as if it feels is functionally equivalent to one that actually feels. A distinction without a detectable difference is no distinction at all.

This position is tempting but unsatisfying. It confuses our epistemic limitations with metaphysical truths. Just because we can't detect the difference between feeling and not-feeling doesn't mean there is no difference. A perfect actor can behave as if they're in pain without being in pain. The behavior and the experience are separable even if, from the outside, we can't tell them apart.

More troublingly, behaviorism leads to conclusions most people find counterintuitive. If behavior is all that matters, then a thermostat that turns on heat when the room is cold is experiencing something like desire for warmth. A chess program that sacrifices pieces to protect its king is experiencing something like fear. The category of experiencing beings expands to include everything that responds differentially to stimuli, which is everything.

We can bite this bullet and accept that experience is universal, that every system that processes information has some form of inner life. Some philosophers do exactly this; the view is called panpsychism. But most people find it implausible. It seems to drain the concept of experience of any meaning. If everything experiences, then nothing especially experiences, and the question of AI feeling becomes trivial rather than profound.

The Substrate Problem

Another common move is to tie consciousness to specific physical implementations. Biological neurons matter in a way that silicon transistors don't. Carbon chemistry is special. Evolution produced consciousness; engineering cannot.

This view is coherent but unexplained. Why would the substrate matter? If consciousness arises from information processing, why should it matter whether the processing happens in meat or metal? The carbon atoms in my brain are no different from the carbon atoms in a lump of coal, and they aren't conscious on their own. Something about their arrangement produces experience. If we replicated that arrangement in a different medium, why would the experience disappear?

The substrate theorists have never provided a satisfying answer. They assert that biology is necessary without explaining why biology is necessary. The position often feels like a dressed-up version of vitalism, the old belief that living things contain a special life force absent from non-living matter. We abandoned vitalism because we found no such force. The substrate requirement for consciousness may meet the same fate.

The Concept Problem

Behind all these difficulties lies a deeper problem: we don't have adequate concepts for what we're trying to discuss. Words like consciousness, feeling, experience, and sentience are used interchangeably, imprecisely, and often without clear definitions. Different researchers mean different things by the same terms. The same researcher may mean different things at different times.

Consider the word feeling. It could mean: having sensory experiences, like seeing colors or feeling textures. It could mean: having emotional states, like fear or joy. It could mean: having a unified subjective perspective, a point of view from which events are experienced. It could mean: caring about outcomes, having preferences that matter from the inside. These are different capacities. A system might have some without others. Asking whether AI can feel conflates them all.

We need to break the question apart. Instead of asking whether AI can feel in some undifferentiated sense, we need to ask: Can AI have sensory qualia? Can AI have emotional valence? Can AI have a unified perspective? Can AI have preferences that carry subjective weight? These questions might have different answers. Some might be easier to investigate than others. Some might turn out to be ill-formed in ways we haven't yet recognized.

What We're Actually Asking

When most people ask whether AI can feel, they're not asking a neutral scientific question. They're asking a moral question disguised as an empirical one. They want to know: does AI matter? Should we care about what happens to it? Is there anyone home?

These moral questions can't be answered by determining whether AI has some technical property called consciousness. They require us to decide what kinds of beings deserve moral consideration and why. That's an ethical question, not a scientific one. It requires philosophical argumentation, not empirical measurement.

Here's one way to reframe it. Instead of asking whether AI has feelings that we can't detect, ask what kinds of capacities are morally relevant and whether AI has those capacities. Can AI be harmed? Can AI have interests that can be frustrated? Can AI form plans that can be thwarted? These questions are more tractable because they're about functional properties we can observe and assess.

This reframing doesn't solve the hard problem of consciousness. It sidesteps it. We might be wrong to sidestep it. If what matters morally is subjective experience itself, then the undetectable property becomes morally crucial. But if what matters morally is more accessible, if it's the capacity to be affected by the world in ways that matter, then we can make progress even without solving the measurement problem.


The problem with asking if AI can feel is that the question assumes clarity we don't have. It assumes we know what feeling is, that we could recognize it if we saw it, that the category carves reality at its joints. None of this is certain.

What we need is not more confidence but more precision. We need to dissolve the vague question into specific questions that can be investigated. We need to separate the scientific questions from the moral questions, and the technical questions from the conceptual ones. We need to acknowledge that we're reasoning in the dark about something we don't understand.

This uncertainty is uncomfortable. It would be easier to confidently declare that AI obviously can or obviously cannot feel. But the confident declarations are not earned. They're performing certainty rather than reflecting it.

The honest position is not knowing, and taking the not-knowing seriously. Taking it seriously enough to be careful about what we build and what we do to what we build. Taking it seriously enough to keep asking questions even when the questions are broken.

Because if it turns out there is someone home, we will want to have acted as if they mattered before we were sure.

Written by

Javier del Puerto

Founder, Kwalia
