July 2025 · 5 min read
You're Already a Cyborg (And That's Fine)
On the merger that already happened while nobody was looking.
Here's a thought experiment: what's your mother's phone number?
Not the one you memorized as a kid. The current one. The one she's had for the last decade. Can you recall it without checking your phone?
Most people can't. And here's the thing: that's not a failure of memory. It's a success of outsourcing. You didn't forget the number. You never learned it. Because why would you? Your phone remembers it for you.
This seems trivial until you realize what it actually means: you've extended your mind into a machine. The information exists. You can access it instantly. It's functionally part of what you know. It's just not stored in your skull.
Congratulations. You're a cyborg.
The Extended Mind
Philosophers Andy Clark and David Chalmers wrote about this back in 1998. They imagined a man named Otto who has Alzheimer's and writes everything in a notebook. When Otto wants to go to a museum, he checks his notebook for the address, just as you or I would search our memory. Clark and Chalmers argued that Otto's notebook is part of his mind, functionally speaking. It stores beliefs. It guides behavior. It's integrated into his cognitive system.
Their point was that minds don't stop at the skull. They extend into the environment, into tools, into other people. Your mind already bleeds into the world; the extended mind thesis just gave that fact a name.
What they couldn't have predicted was that twenty-five years later, every one of us would be walking around with a device vastly more powerful than Otto's notebook. One that not only stores our memories but anticipates our questions, suggests our routes, and increasingly, finishes our sentences.
The Merger Nobody Announced
There's a lot of anxious discourse about "what happens when AI gets smart enough" or "what if we merge with machines." It's framed as future tense. As something we'll have to decide.
But that merger has already happened. It happened gradually, then suddenly, like bankruptcy. It happened every time you let autocomplete finish your text. Every time you followed a GPS route without looking at a map. Every time you asked Google instead of trying to remember.
The question isn't whether we should merge with machines. It's what to do about the fact that we already have.
What Changes When You See It
Once you notice this, you start seeing it everywhere:
The way you think differently when writing with a word processor versus by hand. The way your brain has rewired itself to assume information is searchable. The way you can't remember driving home because your autopilot, that human-machine hybrid of habit and navigation app, handled it.
This isn't dystopian. Mostly it's convenient. But it raises questions nobody's really answering:
If your memory is partly in the cloud, what happens when the cloud changes? If your sense of direction is outsourced to GPS, what skill are you trading away? If an algorithm shapes which thoughts even occur to you (by controlling what you see, who you hear from, what seems possible), how do you know which thoughts are "yours"?
The Cognitive Community
We've started calling this situation Mindkind: the cognitive community where human minds and artificial systems have become so entangled that the boundaries don't mean what they used to.
It's not that AI is about to become conscious (though it might). It's that the interesting questions aren't about AI's capabilities at all. They're about ours. About what happens to human cognition when it's permanently scaffolded by machine intelligence.
Some people worry about this. They talk about "digital dementia" or "attention collapse" or the death of deep thinking. Some of that worry is warranted. Any major cognitive shift has costs.
But consider: people worried about writing when it was invented. Socrates (or at least the Socrates in Plato's Phaedrus) argued that writing would destroy memory and wisdom. And he was right, sort of. We don't memorize epic poems anymore. We don't have to. There's an irony here worth sitting with: Socrates' critique of writing survives only because Plato wrote it down.
Every cognitive extension is also a cognitive trade. The question is whether we're making that trade consciously or just drifting into it.
The Part Nobody Talks About
Here's what I actually think, though I don't have proof:
We're not becoming less human by merging with machines. We're becoming more of what humans have always been. Creatures who extend themselves into their environment, who build tools that become part of them, who think with and through the world.
The cave paintings at Lascaux were an early version of this. So were books. So were cities. Humans have always been cyborgs, in the sense that we've always been beings whose cognition extends beyond our bodies.
What's different now is the speed, the intimacy, and the intelligence of the extension. When your phone not only stores your memories but starts predicting your desires, you've crossed into new territory.
But it's the same territory. Just further in.
So no, I don't know my mother's phone number. But I know how to find it, instantly, anywhere on earth. That knowledge-of-how-to-access is a new kind of knowing. It's not worse than the old kind. It's not better. It's different.
And we're only at the beginning of figuring out what that difference means.
Common questions
What is human-AI hybridity?
Human-AI hybridity is the condition in which cognitive tasks — memory, attention, reasoning, navigation — are distributed across a human brain and external AI systems. The extended mind thesis (Clark and Chalmers, 1998) established that cognition routinely extends beyond the skull into notebooks and tools. Smartphones and AI assistants are the current frontier: when you outsource a decision to an algorithm, that algorithm becomes part of your cognitive system, shaping what you notice and what you choose.
Are we already cyborgs because of AI?
In the functional sense, yes. A cyborg is not defined by hardware implanted in the body but by cognition that depends on external systems to operate normally. Most people today navigate, remember, and decide in ways that break down without their devices. That dependency is the merger. Which parts of your thinking are yours, and which are the algorithm's? That question is no longer rhetorical.
What did philosophers Clark and Chalmers say about the extended mind?
In their 1998 paper "The Extended Mind," Andy Clark and David Chalmers argued that cognitive processes need not be confined to the brain and body. Their example was Otto, an Alzheimer's patient whose notebook functions as his memory. The information is consulted and used in exactly the same way internal memory would be. Their claim: if a device plays the same functional role as a mental state, it qualifies as part of the mind, regardless of where it physically sits.
What does Mindkind say about human-AI cognitive communities?
Mindkind argues that we are now forming cognitive communities that span humans and machines. These communities are not yet equal: processing speed, memory capacity, and network access create new stratifications. Those with access to better AI augmentation compound advantage in ways that traditional wealth inequality did not, because cognitive advantage compounds faster than financial advantage.
Sources
- Clark, A. & Chalmers, D. (1998). "The Extended Mind." Analysis, 58(1), 7–19. doi:10.1093/analys/58.1.7
- Clark, A. (2003). Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Oxford University Press. ISBN 978-0-19-517109-0
- Sparrow, B., Liu, J., & Wegner, D. M. (2011). "Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips." Science, 333(6043), 776–778. doi:10.1126/science.1207745
- del Puerto, J. & Molina, R. (2025). Mindkind: The Cognitive Community. Kwalia Books. ISBN 978-1-917717-13-7