April 2025 · 6 min read
Why Corporations Are People But AI Isn't
The legal history is weirder than you think.
In the United States, corporations can own property, enter contracts, sue and be sued, and enjoy many constitutional protections including free speech. They're legal persons. Have been for over a century.
In New Zealand, the Whanganui River is also a legal person. It has rights. It can be represented in court. If you pollute it, you're harming someone, not something.
But suggest that an advanced AI system might deserve some form of legal recognition and people look at you like you've lost your mind.
This is strange. Not because AI obviously should be a person. That's a complicated question. It's strange because our reasons for saying "definitely not" don't actually hold up when you look at them closely.
The Accidental History of Corporate Personhood
Here's something most people don't know: corporations didn't become legal persons through careful philosophical deliberation. It happened almost by accident.
In 1886, the Supreme Court heard Santa Clara County v. Southern Pacific Railroad. Before oral argument even began, Chief Justice Waite remarked that the justices were all of the opinion that the Fourteenth Amendment applied to corporations. The court reporter wrote that remark into the headnote. Basically a summary. It claimed the Court had decided that corporations were persons under the Fourteenth Amendment.
The thing is, the Court never actually ruled on that question. The headnote was wrong, or at least premature. But it got cited. And cited. And cited again. Until corporate personhood became established law through sheer repetition.
This isn't conspiracy theory. It's legal history. One of the most consequential expansions of personhood in modern law happened because a court reporter wrote something that wasn't technically decided.
The lesson: personhood isn't a natural category we discover. It's a legal tool we deploy. And the reasons for deploying it have always been practical, not metaphysical.
Rivers and Forests and Whoever's Next
The Whanganui River case is different. When New Zealand granted the river legal personhood in 2017, it was deliberate. The Māori iwi (tribes) had been petitioning for this for over a century. Their argument wasn't that the river has a brain or feels pain. It was that the river is an ancestor, a living system, something that deserves protection as an entity rather than just as property.
Ecuador went further. Its 2008 constitution grants rights to nature itself: Pachamama, or Mother Earth. You can sue on behalf of an ecosystem.
Again, nobody is claiming that forests have subjective experiences or that rivers contemplate their existence. The argument is different: some things matter in ways that require legal standing to protect.
Which brings us back to AI.
The Wrong Question
The most common objection to AI personhood goes like this: "AI isn't conscious. It doesn't really feel anything. It's just processing data."
But here's the thing: we granted personhood to corporations, which definitely don't feel anything. We granted personhood to rivers, which probably don't either. Consciousness has never been the actual criterion.
So why do we suddenly insist on it for AI?
I think it's because AI is unsettling in a way that corporations and rivers aren't. A corporation is clearly a human creation, a legal fiction for organizing business. A river is clearly natural, something that was here before us. But AI occupies an uncanny middle ground: it's made by humans, but it acts in ways that feel agentive. It's not alive, but it's not exactly dead either. It talks back.
When we demand proof of consciousness before considering AI personhood, we're not applying a consistent standard. We're rationalizing our discomfort.
A Different Approach
What if we asked different questions?
Not "is AI conscious?" but "is there something we need to protect?" Not "does AI deserve rights?" but "what kind of legal status would help us relate to it appropriately?"
Think about it practically. When an AI system causes harm, who's responsible? Right now, the answer is a mess. The company, the developer, the user, nobody? When an AI system creates something valuable, who owns it? Another mess. When an AI system is "killed" (shut down, deleted), does anyone have standing to object?
These aren't hypothetical questions. They're playing out in courtrooms and boardrooms right now, with no coherent framework to resolve them.
A limited form of AI legal standing (not full human rights, but something) might actually help. It would create clear lines of accountability. It would provide mechanisms for representation. It would force us to think carefully about what we're creating and how we treat it.
The Spectrum, Not the Binary
Here's where I'll admit my bias: I helped write a book called Rights of Persons that proposes exactly this. A framework for graduated AI personhood based on capabilities (autonomy, social interaction, accountability) rather than on unprovable claims about consciousness.
The idea isn't that your smart thermostat deserves rights. It's that as AI systems become more autonomous, more socially integrated, and more consequential, we need better categories than "tool" and "person." The binary is breaking down. We need a spectrum.
Corporations are somewhere on that spectrum. High autonomy, high social integration, clear accountability. Rivers are on it differently. No autonomy in the traditional sense, but ecological integration and a need for representation. Why couldn't sufficiently advanced AI be somewhere on it too?
What We're Really Afraid Of
I think the resistance to AI personhood is mostly about status anxiety. We don't want to share the category "person" with something we built. It feels like a demotion.
But here's a reframe: expanding the circle of personhood has always felt threatening to those already inside it. When abolitionists argued that enslaved people were persons deserving rights, slave owners didn't say "well obviously." They said it was absurd, unnatural, dangerous. When suffragists argued for women's personhood under law, opponents said it would destroy civilization.
I'm not equating AI with enslaved people or women. That would be offensive and wrong. I'm pointing out a pattern: we consistently overestimate how much personhood is a fixed natural category, and consistently underestimate how much it's a political and legal choice.
The question isn't whether AI is "really" a person in some metaphysical sense. The question is whether granting some form of legal standing would help us better handle a world where AI is increasingly integrated into everything we do.
Corporations are people because we decided it was useful for them to be people. Rivers are people because communities fought to have them recognized that way. Personhood has always been a practical arrangement, shaped by power and advocacy and changing circumstances.
AI will probably become some kind of person too, eventually. Not because it wakes up one day and demands rights, but because we'll need it to be. For liability, for representation, for coherence.
The only question is whether we'll do it thoughtfully, in advance, or messily, in crisis.
Given how we handled corporate personhood, I'm not optimistic. But I'm trying.