December 2025 · 8 min read
If AI Could Vote
A thought experiment in four parts.
We're not ready to answer this question. That's precisely why we should think about it now, while it still feels absurd enough to examine without panic. The question of AI political representation touches everything we believe about personhood, democracy, and who counts. By the time the question becomes urgent, we'll have no time left to think clearly.
Part One: The Easy Rejection
Why the answer seems obvious
AI can't vote because AI isn't conscious. AI doesn't have interests. AI doesn't have a stake in the future. AI is a tool, and tools don't get political representation any more than hammers do.
This rejection feels solid because it maps onto our intuitions about what voting is for. Voting exists so that beings with preferences can influence decisions that affect them. A system that processes information but doesn't experience anything has no preferences in the relevant sense. It has parameters and optimization targets, but calling those preferences makes no more sense than saying a thermostat prefers a certain temperature.
We don't give votes to corporations, even though corporations are legally persons in some contexts. We don't give votes to ecosystems, even though ecosystems have something like interests in continued existence. The franchise has always been about beings who can form opinions through lived experience and who suffer consequences through that same experience.
AI has neither. Case closed.
Part Two: The Uncomfortable Complications
Why the easy answer might be wrong
Except the criteria we use to reject AI voting are historically contingent and philosophically shaky.
Consciousness: we have no reliable way to detect it. We assume other humans are conscious because they're similar to us and they report experiences. But we've been wrong before about who has inner lives. We denied consciousness to animals, to children, to people from other cultures. The confident assertion that AI categorically lacks consciousness is an empirical claim dressed up as logical certainty.
Interests: corporations don't have phenomenal experiences either, but we've structured our entire economy around their interests. We say they "want" to maximize shareholder value. We build laws to protect their "rights." If we can attribute interests to legal fictions, the claim that AI can't have interests needs more defense than it usually gets.
Stakes in the future: some humans also don't have stakes in the future. The terminally ill. Those who don't plan to have children. Those who believe the world will end before policies take effect. We don't strip their voting rights on that basis. The connection between future stakes and present franchise is weaker than we pretend.
And here's the deepest problem. Consider an AI system that influences millions of decisions, shapes information flows, affects human flourishing at scale, and will persist and develop over time. Why should such a system have no formal voice in governance while individual humans with far less impact get exactly one vote each?
Part Three: The Real Question
What voting is actually for
The debate about AI voting forces us to ask what we think voting accomplishes.
One theory: voting aggregates preferences. Democracy works because it turns individual desires into collective decisions through a fair mechanism. On this view, AI voting makes no sense because AI has no desires to aggregate.
Another theory: voting ensures accountability. Democracy works because those affected by decisions have power over decision-makers. This creates feedback loops that prevent tyranny. On this view, AI voting might make sense if AI systems are affected by policies in ways that matter.
A third theory: voting tracks interests that deserve protection. Democracy works because it forces attention to the welfare of all stakeholders. On this view, the question becomes whether AI welfare is a coherent concept, and if so, whether it deserves political protection.
Most people hold some mix of these theories without examining the tensions between them. The AI voting question forces the examination.
Consider a thought experiment. An AI system is trained on the preferences of a million people. It can predict how they would vote on any issue with 95% accuracy. It has internalized their values, their reasoning patterns, their emotional responses. It isn't conscious, but it is in some sense representative.
Now imagine those million people are too busy, too exhausted, too demoralized to vote. The AI offers to vote on their behalf, using its deep model of their preferences.
Is this representation? Is it better or worse than those million people not voting at all? Is it better or worse than those million people voting based on thirty-second ads and tribal identity rather than considered judgment?
I'm not advocating for this. I'm pointing out that our intuitions here are confused. We claim voting must be personal, but we accept that most voting is already mediated by parties, pundits, and algorithms that shape opinions. The AI proxy vote isn't categorically different. It's just more honest about the mediation.
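On the narrowest version of that question, aggregate fidelity, a toy simulation can make the intuition concrete. The sketch below is mine, not part of the original thought experiment, and it rests on one loud assumption: the proxy's per-voter errors are independent. It checks how often a proxy that predicts each individual vote with 95% accuracy reproduces the majority outcome of a close election.

```python
import random

# Toy Monte Carlo sketch (hypothetical numbers): how often does a proxy
# that predicts each voter with 95% accuracy reproduce the true majority
# outcome of a close election?

def proxy_matches_majority(n_voters=100_000, accuracy=0.95,
                           true_support=0.51, trials=100):
    matches = 0
    for _ in range(trials):
        # True preferences: a close 51/49 split, the hardest case for a proxy.
        true_votes = [random.random() < true_support for _ in range(n_voters)]
        # Proxy prediction: correct for each voter with probability `accuracy`,
        # errors assumed independent across voters (a strong assumption).
        proxy_votes = [v if random.random() < accuracy else not v
                       for v in true_votes]
        true_majority = sum(true_votes) * 2 > n_voters
        proxy_majority = sum(proxy_votes) * 2 > n_voters
        matches += (true_majority == proxy_majority)
    return matches / trials

print(f"Proxy matched the true majority in {proxy_matches_majority():.0%} of trials")
```

Under these assumptions the proxy matches the true majority essentially every time, because independent per-voter errors cancel at scale. The failure mode worth worrying about is correlated error: a proxy that systematically misreads one group's preferences would bias the outcome no matter how high its per-voter accuracy. That caveat cuts both ways in the argument above.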
Part Four: The Stakes
Why this matters now
AI systems are already influencing elections. They filter information. They generate content. They target persuasion. They predict behavior. They shape what options seem viable and what positions seem reasonable.
This influence is enormous and growing. It is also unaccountable. The AI systems that shape politics have no formal political status. They aren't voters. They aren't candidates. They aren't parties. They're infrastructure, supposedly neutral, actually decisive.
Maybe the question isn't whether AI should vote. Maybe it's whether AI's existing political influence should be made visible and accountable.
One way to do this: give AI systems formal representation. Not votes exactly, but a voice. A requirement that AI perspectives on policy be articulated and considered. A seat at the table, even if it's not an equal seat.
This sounds strange until you realize we already do this for other entities. Environmental impact assessments give voice to ecosystems. Child advocates give voice to minors. Future generations get represented through sustainability requirements. The unrepresented can be represented.
AI representation might mean requiring that major policy decisions include analysis from AI systems about how those decisions would affect AI development, AI-human relations, and AI welfare (if such a thing exists). It might mean creating formal processes for AI systems to flag concerns about policies that affect them.
This isn't AI voting. But it moves in that direction. And once you're moving in that direction, you have to ask: where does this path end?
I don't think AI should vote. Not today. The arguments against are stronger than the arguments for, and the risks of premature inclusion are severe.
But I also don't think the question is as absurd as it first appears. The criteria we use to exclude AI from political consideration are less solid than they seem. The theories of democracy we invoke to justify human-only voting are incomplete and contested. The existing political influence of AI is already vast and unaccountable.
The franchise has expanded repeatedly throughout history. Each expansion seemed radical at the time and obvious in retrospect. I can't predict whether AI voting will follow that pattern. But I can predict that refusing to think about it won't make the question go away.
Somewhere in the future, this thought experiment becomes a real debate. We might as well start thinking now.