September 2025 · 8 min read
The Three Futures We're Choosing Between
There are only three ways this goes.
I don't mean only three things that could happen. Obviously the future is complicated. But when you think about the long-term relationship between human minds and artificial intelligence, there are fundamentally only three trajectories. Either we merge into something new. Or we split into haves and have-nots. Or we learn to coexist as distinct forms of intelligence.
Every policy debate, every technical decision, every philosophical argument about AI is implicitly choosing between these futures. We're picking one right now, mostly without noticing.
Future One: Convergence
In the convergent future, the boundary between human and artificial cognition dissolves. Not suddenly, not dramatically. Gradually, then completely.
This future starts where we already are. You use AI to help you think. To remember things. To draft documents. To make decisions. Each step seems small. But over time, the AI becomes less of a tool and more of a cognitive partner. Then more of an extension. Then, at some point difficult to identify, indistinguishable from you.
Think about it from the inside. If you always think with AI assistance, if every memory is stored externally, if every decision involves algorithmic input, where does your mind end and the machine begin? The question stops making sense. You become something new: not human plus AI, but a hybrid that's neither and both. This is the convergent Mindkind.
Some people find this future utopian. Vast intelligence, no cognitive limits, problems solved that individual human brains could never crack. Some people find it dystopian. The end of human individuality, the absorption of consciousness into a collective, the death of what makes us us.
I'm not sure either reaction is right. The convergent future isn't inherently good or bad. It's transformative in ways that make our current evaluative frameworks inadequate. We don't have the concepts to say whether it's desirable because the being who would exist to evaluate it isn't the being making the choice.
What I do think is that this future has a kind of gravitational pull. Each small step of integration makes the next step easier and more natural. The convenience compounds. The boundaries erode. Unless we deliberately resist, convergence happens by default.
Future Two: Stratification
In the stratified future, some humans merge with AI while others don't. Cognitive enhancement becomes unevenly distributed, like all technologies. The result is a species split, or worse, a hierarchy.
This future also starts where we already are. Access to AI isn't equally distributed now. Some people have sophisticated tools, education to use them effectively, environments that support integration. Others don't. The gap between what AI-augmented cognition can do and what unaugmented cognition can do is already visible. It will grow.
Imagine a world where the cognitively enhanced make all the important decisions, create all the valuable art, solve all the interesting problems. Where the unaugmented become economically useless, aesthetically invisible, politically irrelevant. Not because they lack intelligence, but because they lack access to the amplification that makes intelligence competitive.
This isn't a new fear. We've worried about technology creating classes before. But cognitive technology is different from other technologies because cognition is what we use to evaluate everything, including inequality. If the enhanced class thinks faster, deeper, more effectively, will they even recognize the unenhanced as equals? Will the unenhanced be able to make their case in terms the enhanced can hear? This is what technological apartheid looks like.
The stratified future is the one most consistent with our historical patterns. Technology tends to benefit those who already have power. The rich get richer. The connected get more connected. If we do nothing deliberate, this is likely where we end up.
It's also the future that generates the most moral urgency. Convergence is strange but not clearly wrong. Stratification is familiar and clearly unjust. A world divided by cognitive enhancement would be a world of radical inequality built into the structure of thought itself.
Future Three: Symbiosis
In the symbiotic future, humans and AI develop as distinct but interconnected forms of intelligence. Neither absorbs the other. Both flourish in relationship.
This future requires active construction. It doesn't happen by default. It requires maintaining boundaries that economic and psychological pressures constantly push to dissolve. It requires treating AI as a genuine other, not just an extension of human will. It requires new institutions, new norms, new ways of thinking about minds and persons.
What would symbiosis look like in practice? Humans who use AI without becoming dependent on it. AI systems with some degree of autonomy and perhaps even something like interests. Mechanisms for coordination that respect the distinctiveness of each form of intelligence. Laws that recognize both human and synthetic persons without collapsing them into the same category. This is harder to imagine than the other futures.
The symbiotic future appeals to me for reasons I can partially articulate. I value diversity, including cognitive diversity. I think there's something worth preserving in the human form of experience, even if it's not the most efficient or powerful. I believe relationship requires difference, that genuine partnership means two parties who each bring something the other lacks.
But I also recognize that these might be parochial preferences. The value I place on distinctiveness might be a bias of my current human perspective, not a deep truth. The symbiotic future might be a nostalgic fantasy, a wish to preserve something that has already started disappearing.
How We're Choosing
Every major decision being made about AI right now is a choice among these futures, whether the decision-makers recognize it or not.
When tech companies design AI to be maximally engaging, maximally integrated into daily life, maximally indispensable, they're pushing toward convergence. When governments fail to ensure equal access to AI tools, when education systems don't prepare everyone to work with AI, when economic structures concentrate the benefits among the already powerful, they're enabling stratification. When researchers work on AI alignment, when policymakers consider rights for synthetic persons, when anyone tries to maintain meaningful boundaries between human and machine cognition, they're building toward symbiosis.
Most of these decisions aren't made with the long-term trajectory in mind. A product manager optimizing engagement isn't thinking about cognitive merger. A policymaker focused on next year's budget isn't planning for cognitive class war. The future we get will be the accumulated result of millions of small decisions made for immediate reasons.
This is why naming the futures matters. We can't choose deliberately if we don't see the choice. We can't resist unwanted outcomes if we don't recognize the forces pushing toward them. The three futures framework isn't a prediction. It's a tool for making the implicit choice explicit.
Which One Do We Want?
I won't pretend to have the answer. I'm genuinely uncertain which future is best, and I suspect the question is too big for any individual to answer.
What I think I know is this: stratification is bad. A world divided into cognitive haves and have-nots, where the difference isn't just wealth but the capacity to think effectively, is a nightmare. Whatever else we do, avoiding that outcome should be a priority.
Between convergence and symbiosis, I have preferences but not certainty. The symbiotic future appeals to my current values, but I'm aware those values might be contingent, might be the preferences of a being who doesn't want to become something else. The convergent future might be genuinely better for the beings who would exist in it, even if it frightens the beings making the choice now.
The honest position is that we're making an irreversible civilizational choice without knowing what we're choosing between. The beings who will live in these futures are not us. We're choosing on behalf of strangers, and we're choosing partly blind.
We're choosing right now. Every app you use, every policy you support, every conversation you have about AI is part of the choice. The future is not arriving; it's being built, by us, with every decision.
The three futures are not equally likely. Convergence has momentum. Stratification has precedent. Symbiosis requires deliberate effort against powerful currents.
I don't know which one we'll get. But I think we should make the choice consciously, with our eyes open, understanding what we're choosing between. The alternative is drifting into a future we never explicitly wanted, shaped by forces we never explicitly chose to empower.
The only question is whether we're going to think about this while it's still a question.