May 2025 · 5 min read

Stop Asking If AI Is Conscious

The problem isn't whether AI is conscious. It's that we've never agreed on what that means.

Every few months, someone makes headlines by claiming an AI might be conscious. Or that it definitely isn't. Or that we'll never know. The debate generates heat but little light. Here's why: we're trying to answer a question we've never properly defined.

Ask ten philosophers what consciousness is and you'll get ten different answers. This isn't a failure of philosophy. It's a sign of how genuinely difficult the problem is. And now we're importing that confusion into discussions about machines.

The question "Is this AI conscious?" sounds scientific. It isn't. It's metaphysical. And pretending otherwise is making us stupid. The definitional chaos goes deep.

The Turing Trap

Alan Turing proposed a test: if a machine can fool a human into thinking it's human, we should treat it as intelligent. This was a pragmatic dodge. Turing knew we couldn't define intelligence precisely, so he suggested a behavioral criterion instead.

The problem is that we've been stuck in Turing's frame ever since. Can it pass the test? Can it fool the evaluator? Does it seem conscious?

But seeming conscious and being conscious are different things. A perfect simulation of pain is not pain. A compelling imitation of understanding is not understanding. Or is it? See, we're already confused.


What We Actually Mean

When people ask if AI is conscious, they're usually asking one of several different questions bundled together:

Is there something it's like to be this system? Does it have subjective experience? Can it suffer? Does it have an inner life? Is it a moral patient that deserves consideration? Would switching it off be wrong?

These are related but distinct questions. A system could have rudimentary experience without deserving moral consideration. A system could process information in ways that matter morally without having anything like human consciousness. Conflating them guarantees confusion.

And here's the kicker: we don't have agreed answers for any of these questions even about other humans. We assume other people are conscious because they're similar to us and they report being conscious. That's it. That's the entire basis. This is called the problem of other minds.

The Real Questions

Here's what I think we should actually be asking:

First: what would it take to know if a system is conscious? Not whether a specific system is, but what evidence would be relevant at all. If we can't specify that, we're not doing science. We're doing theater.

Second: how should we treat systems under uncertainty? Even if we can't determine consciousness, we can develop frameworks for how to act when we're unsure. This is a practical question, not a metaphysical one.

Third: what are the actual stakes? If an AI system can be made to behave ethically without being conscious, does it matter? If a system can suffer but not in a way that affects its outputs, does it matter? These questions need more attention.

The Convenient Uncertainty

There's something suspicious about how the consciousness question gets deployed. When companies want to make their AI seem more impressive, they emphasize how sophisticated it is, how it "understands" and "reasons." When those same companies want to avoid responsibility for AI harms, suddenly it's just a tool, just statistics, just pattern matching.

The ambiguity is useful. A conscious-seeming AI attracts investment and user engagement. A definitely-not-conscious AI doesn't need protections or rights. Keeping the question permanently unresolved serves commercial interests.

I'm not suggesting there's a conspiracy. I'm suggesting we should notice whose interests are served by keeping us confused.

A Modest Proposal

Stop asking if AI is conscious. Start asking:

What would convince me? Write it down. Be specific. If nothing would convince you, notice that. It means you've already made up your mind.

What are my actual concerns? If the worry is about AI safety, consciousness is mostly irrelevant. If the worry is about AI rights, consciousness is relevant but we need to specify what counts. If the worry is about being deceived, the question is behavioral, not metaphysical.

What would change if I knew? If an AI is conscious, what should we do differently? If you can't answer that, the question might not matter as much as it seems.


I started researching consciousness because I thought it was the key question about AI. I no longer think that. The key questions are about power, about accountability, about what kind of cognitive environment we want to build. Consciousness is a distraction.

That said, I could be wrong. There might come a day when we create something that is, undeniably, a new kind of mind. If that happens, we'll need to have done the philosophical work. But the philosophical work isn't speculating about whether current systems are conscious. It's getting clearer about what we mean by the question in the first place.

Written by

Javier del Puerto

Founder, Kwalia
