July 2025 · 5 min read

You're Already a Cyborg (And That's Fine)

On the merger that already happened while nobody was looking.

Here's a thought experiment: what's your mother's phone number?

Not the one you memorized as a kid. The current one. The one she's had for the last decade. Can you recall it without checking your phone?

Most people can't. And here's the thing: that's not a failure of memory. It's a success of outsourcing. You didn't forget the number. You never learned it. Because why would you? Your phone remembers it for you.

This seems trivial until you realize what it actually means: you've extended your mind into a machine. The information exists. You can access it instantly. It's functionally part of what you know. It's just not stored in your skull.

Congratulations. You're a cyborg.

The Extended Mind

Philosophers Andy Clark and David Chalmers wrote about this back in 1998. They imagined a man named Otto who has Alzheimer's and writes everything in a notebook. When Otto wants to go to a museum, he checks his notebook for the address, just as you or I would search our memory. Clark and Chalmers argued that Otto's notebook is part of his mind, functionally speaking. It stores beliefs. It guides behavior. It's integrated into his cognitive system.

Their point was that minds don't stop at the skull. They extend into the environment, into tools, into other people. Your mind already bleeds into the world.

What they couldn't have predicted was that more than twenty-five years later, every one of us would be walking around with a device vastly more powerful than Otto's notebook. One that not only stores our memories but anticipates our questions, suggests our routes, and increasingly, finishes our sentences.

The Merger Nobody Announced

There's a lot of anxious discourse about "what happens when AI gets smart enough" or "what if we merge with machines." It's framed as future tense. As something we'll have to decide.

But that merger has already happened. It happened gradually, then suddenly, like bankruptcy. It happened every time you let autocomplete finish your text. Every time you followed a GPS route without looking at a map. Every time you asked Google instead of trying to remember.

The question isn't whether we should merge with machines. It's what to do about the fact that we already have.

What Changes When You See It

Once you notice this, you start seeing it everywhere:

The way you think differently when writing with a word processor versus by hand. The way your brain has rewired itself to assume information is searchable. The way you can't remember driving home because your autopilot, that human-machine hybrid of habit and navigation app, handled it.

This isn't dystopian. Mostly it's convenient. But it raises questions nobody's really answering:

If your memory is partly in the cloud, what happens when the cloud changes? If your sense of direction is outsourced to GPS, what skill are you trading away? If an algorithm shapes which thoughts even occur to you (by controlling what you see, who you hear from, what seems possible), how do you know which thoughts are "yours"?

The Cognitive Community

We've started calling this situation the Mindkind. The cognitive community where human minds and artificial systems have become so entangled that the boundaries don't mean what they used to.

It's not that AI is about to become conscious (though it might). It's that the interesting questions aren't about AI's capabilities at all. They're about ours. About what happens to human cognition when it's permanently scaffolded by machine intelligence.

Some people worry about this. They talk about "digital dementia" or "attention collapse" or the death of deep thinking. Some of that worry is warranted. Any major cognitive shift has costs.

But consider: people worried about writing when it was invented. Socrates (or at least the Socrates in Plato's Phaedrus) argued that writing would destroy memory and wisdom. And he was right, sort of. We don't memorize epic poems anymore. We don't have to. There's an irony here worth sitting with: we only know his argument because Plato wrote it down.

Every cognitive extension is also a cognitive trade. The question is whether we're making that trade consciously or just drifting into it.

The Part Nobody Talks About

Here's what I actually think, though I don't have proof:

We're not becoming less human by merging with machines. We're becoming more of what humans have always been. Creatures who extend themselves into their environment, who build tools that become part of them, who think with and through the world.

The cave paintings at Lascaux were an early version of this. So were books. So were cities. Humans have always been cyborgs, in the sense that we've always been beings whose cognition extends beyond our bodies.

What's different now is the speed, the intimacy, and the intelligence of the extension. When your phone not only stores your memories but starts predicting your desires, you've crossed into new territory.

But it's the same territory. Just further in.
So no, I don't know my mother's phone number. But I know how to find it, instantly, anywhere on earth. That knowledge-of-how-to-access is a new kind of knowing. It's not worse than the old kind. It's not better. It's different.

And we're only at the beginning of figuring out what that difference means.

Written by

Javier del Puerto

Founder, Kwalia
