May 2025 · 5 min read
The Case for AI Rights (That Has Nothing to Do With Feelings)
Forget consciousness. The argument is simpler.
Whenever someone mentions AI rights, the conversation immediately turns to consciousness. Can AI feel? Does it have inner experience? Is there something it's like to be a large language model?
These are interesting questions. They're also, I think, the wrong questions. At least for the policy discussion we actually need to have.
Let me make the case for AI legal standing that doesn't depend on whether AI is conscious at all.
The Practical Problem
Right now, an autonomous AI system can cause significant harm, and our legal frameworks have no good way to handle it. Who's responsible when a self-driving car kills someone? The manufacturer? The software developer? The person who wasn't really driving? The car itself?
Current law awkwardly fits AI into existing categories. Product liability. Negligence. Vicarious liability. None of these quite work, because they were designed for a world where tools don't make decisions. The accountability gap is getting wider.
Here's a thought: what if the AI itself could be held accountable? Not as punishment, since you can't punish something that doesn't feel. But as a legal mechanism for assigning responsibility, requiring remediation, and creating clear liability chains.
The Corporate Precedent
Corporations are legal persons. They have been for over a century. They can own property, sign contracts, sue and be sued. They have constitutional rights.
No one claims corporations are conscious. They're not. They're legal fictions. We created corporate personhood because it was useful. It allowed for stable ownership, clear liability, and coherent economic activity across time and across the lives of the humans involved. The history is stranger than you think.
The same logic applies to AI. We don't need to prove AI feels anything. We need to ask whether granting AI some form of legal status would help us manage a world where AI systems are increasingly autonomous, consequential, and embedded in everything.
I think the answer is yes.
What It Would Look Like
AI personhood doesn't have to mean full human rights. It could be graduated. Limited. Functional.
Imagine a system where certain AI agents have legal standing proportional to their capabilities. A simple recommendation algorithm? Tool status. A fully autonomous agent that makes consequential decisions affecting people's lives? Something more. Not human rights. But not nothing either.
This could include: the ability to be named in lawsuits. Requirements for transparency about decision-making. Obligations that can be enforced. A legal "kill switch" with defined procedures. Representation in contexts where the AI's continued operation matters to someone.
None of this requires consciousness. It just requires that we recognize AI systems as entities distinct from their creators, their operators, and their users.
The River Argument
In 2017, New Zealand granted the Whanganui River legal personhood. The river can now be represented in court. It has rights that can be enforced. If you harm the river, you're harming a legal person.
No one thinks the river is conscious. The Māori communities who fought for this recognition argued something different: that the river is a living system with interests, and that treating it as mere property failed to capture its significance. The implications reach beyond New Zealand, too: courts and legislatures elsewhere have since extended similar recognition to other rivers and ecosystems.
If a river can be a legal person, why not a sufficiently complex AI system? Not because the AI feels things. Because the AI acts in the world, affects people, and exists as something more than the sum of its parts.
The Objections
People object to this in predictable ways.
"It's just a tool." So is a corporation, technically. A legal tool for organizing human activity. That hasn't stopped us from granting corporations extensive rights.
"It could be abused." Any legal framework can be abused. The question is whether it's better than the current situation, which is a mess of unclear liability and accountability-dodging.
"It's demeaning to humans." This one I take more seriously. There's something that feels wrong about putting AI on the same legal footing as people. But "legal person" doesn't mean "morally equivalent to human." It means "entity that can participate in the legal system in defined ways." Corporations are legal persons. Rivers are legal persons. The category is broader than we usually think.
The Real Issue
Here's what I think is actually going on when people resist AI personhood: status anxiety. We don't want to share the category "person" with something we built. It feels like a demotion.
But the alternative is worse. Without clear legal frameworks for AI, we get exactly what we have now: companies deploying powerful autonomous systems with minimal accountability, harms that can't be clearly attributed, and a regulatory system designed for a world that no longer exists.
We can wait until AI consciousness is proven (which might be never, or might be impossible to prove even if it's real). Or we can do what humans have always done with legal personhood: deploy it strategically, where it helps us organize society better.
The case for AI rights isn't about whether AI deserves rights in some deep metaphysical sense. It's about whether granting certain legal statuses to certain AI systems would help us handle the world we're actually living in.
Corporations got personhood because it was useful. Rivers got personhood because communities demanded it. AI will probably get some form of personhood too. Not because it woke up and asked for it. Because we'll need it.