July 2025 · 5 min read

Your Thoughts Aren't Yours Anymore

How Is AI Changing Human Cognition?

That opinion you formed this morning? You didn't form it.

Think about something you want right now. Not food or water. Something you've decided you want. A product. An experience. A political position.

Now ask yourself: how did that desire get there?

The standard story is that you formed it. You observed the world, gathered information, weighed your options, and arrived at a preference. Your preference. Yours.

I think that story is increasingly false. Not entirely false, but false enough to matter. Because we now live in an environment specifically engineered to manufacture our desires before we notice they're being manufactured.

How did advertising change cognition before AI?

Advertising has always tried to influence what people want. This isn't new. What's new is the precision.

In the 1950s, advertisers could put a billboard on a highway and hope the right people saw it. They could run a TV commercial during a popular show and cross their fingers. The targeting was crude. They were throwing messages at populations and hoping some stuck.

Today's system is different in kind, not just degree. The algorithm doesn't advertise to populations. It advertises to you. To the version of you that exists at 11:47 PM when you're tired and emotionally vulnerable. To the version of you that just read a news article about economic uncertainty. To the version of you that paused, for 0.3 seconds longer than usual, on a photo of someone living a life you want. The profiling is more precise than you think.

This isn't advertising anymore. It's desire engineering.

How does the AI feedback loop manufacture desire?

Here's what makes this different from old manipulation. The system learns.

Every time you click, scroll, pause, or bounce, you're telling the algorithm what works on you. What triggers your attention. What makes you anxious, envious, or hopeful. The algorithm doesn't know why these triggers work. It doesn't need to. It just knows that showing you certain content in certain contexts changes your behavior in predictable ways.

And then it does more of that.

The result is a system that gets better at influencing you, specifically, every single day. Your personal manipulation engine, trained on years of your own behavioral data, refined continuously. The mechanics are worth understanding.
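The loop described above is, at its core, a bandit problem: show content, observe engagement, show more of what worked. Here is a minimal, purely illustrative sketch (an epsilon-greedy bandit against one simulated user; all category names and click rates are invented for the example):

```python
import random

# Illustrative sketch of the feedback loop described above: an
# epsilon-greedy bandit that learns, from clicks alone, which content
# category "works" on one simulated user. It never models *why* a
# trigger works -- only that it does.

CATEGORIES = ["envy", "anxiety", "hope", "outrage"]

# Simulated user: fixed click probabilities, unknown to the algorithm.
TRUE_CLICK_RATE = {"envy": 0.30, "anxiety": 0.10, "hope": 0.05, "outrage": 0.55}

def choose(clicks, shows, epsilon=0.1):
    """Mostly exploit the best-performing category; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(CATEGORIES)
    return max(CATEGORIES, key=lambda c: clicks[c] / shows[c] if shows[c] else 0.0)

def run(steps=5000, seed=0):
    random.seed(seed)
    clicks = {c: 0 for c in CATEGORIES}
    shows = {c: 0 for c in CATEGORIES}
    for _ in range(steps):
        c = choose(clicks, shows)
        shows[c] += 1
        if random.random() < TRUE_CLICK_RATE[c]:  # did the user engage?
            clicks[c] += 1
    return shows

if __name__ == "__main__":
    shows = run()
    # After enough feedback, the feed is dominated by whichever trigger
    # works most reliably on this particular user.
    print(max(shows, key=shows.get))
```

Note what is absent from the sketch: any model of the user's wellbeing. The only objective is engagement, which is the point of the essay's argument.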


Why does AI-mediated cognitive change feel invisible?

The strange thing is that manufactured desires feel exactly like authentic ones. When I want something, I don't experience that want as coming from outside me. I experience it as me wanting.

This is the trick. The system doesn't make you do things against your will. It changes your will. By the time you feel the desire, it's already yours. The manipulation happened upstream, in the choice of what information reached you, in what order, in what emotional context.

A 2014 study, the now-famous "emotional contagion" experiment, showed that Facebook could make users feel happier or sadder just by adjusting the emotional valence of posts in their feeds. The users had no idea this was happening. They just felt different that day. Their mood felt like their mood.

Scale that up. What about your beliefs? Your fears? Your sense of what's normal, possible, desirable?

What are the political consequences of AI-reshaped cognition?

Cambridge Analytica got a lot of attention, and what the firm actually did was worse than the headlines suggested. Then we all mostly forgot about it. But the infrastructure didn't go anywhere. The ability to profile voters psychometrically and target them with emotionally calibrated messages still exists. Companies use it. Campaigns use it.

Here's my uncomfortable thought: if your political opinions were systematically shaped by a targeting system optimized to push your specific psychological buttons, would you know? Would you be able to tell? Or would those opinions simply feel like conclusions you'd reached through reason?

I don't think we'd know. I don't think we can know. Not with certainty. The manipulation operates below the level where we have introspective access to our own processes.

Can human cognition remain independent inside AI-mediated environments?

We like to believe in a self that exists prior to its influences. A core "you" that takes in information and decides what to think, independent of how that information was selected and presented.

But what if there's no such core? What if the self is, to a significant degree, constructed by its informational environment? What if who you are is largely a function of what you've been shown?

I think this is closer to the truth than the autonomous self we imagine. And if so, then controlling someone's information environment is, in a meaningful sense, controlling them.

Not in a conspiracy theory way. Not some cabal in a room making decisions. Worse, actually: an emergent system where millions of optimization algorithms are all competing to capture attention and shape behavior, with nobody in charge and no master plan. Just evolution. The things that work spread. The things that don't, die. What works, it turns out, is whatever captures and holds human attention, regardless of whether that's good for humans.


I started this essay asking you to think about something you want. I'm ending by asking you to wonder: where did that want come from? Did you choose it? Did you assemble it from raw experience through pure reason?

Or did something else place it there, so gently you didn't notice, so precisely you'd swear it was always yours?

Common questions

How is AI changing human cognition?

AI is changing human cognition by becoming part of the loop in which thoughts form. Recommendation algorithms shape attention before a thought is consciously chosen. Autocomplete and AI writing assistants finish sentences in directions that feel spontaneous but are probabilistically steered. Habitual reliance on navigation systems lets spatial reasoning atrophy. The change accumulates through thousands of small delegations until the boundary between your thinking and the system's is genuinely unclear.

Are algorithmic feeds changing what people think?

Yes, with measurable effect. Studies on political polarization, health misinformation, and attention span show that content recommendation systems shift what people believe, not merely what they see. The feedback loop compounds: the algorithm learns your responses and optimizes for engagement, which changes your responses, which updates the algorithm. After years of this cycle, the thoughts that feel most natural have been shaped by a system optimized for time-on-platform, not for your flourishing.

What is desire engineering?

Desire engineering is the practice of using behavioral data and machine learning to manufacture preferences before a user is consciously aware of them. Traditional advertising targeted populations with broadcast messages; today's systems target individuals in specific emotional states, identified in real time from scroll behavior and dwell time. The shift from advertising to desire engineering is a shift from persuasion to pre-emption: the system shapes what you want before you decide to want it.
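The "identified in real time from scroll behavior and dwell time" part can be made concrete with a small sketch. Everything here is a hypothetical illustration (the event shape, the function names, and the 1.5x threshold are assumptions, not any real platform's API): flag the items a user lingered on relative to their own baseline.

```python
from dataclasses import dataclass

# Hypothetical sketch of the real-time signal described above: flag a
# "receptive moment" when a user's dwell time on an item runs well past
# their own baseline. All names and the 1.5x threshold are illustrative
# assumptions, not any real platform's API.

@dataclass
class ScrollEvent:
    item_id: str
    dwell_seconds: float  # how long the item stayed on screen

def receptive_moments(events, threshold=1.5):
    """Return ids of items the user lingered on, relative to their own baseline."""
    if not events:
        return []
    baseline = sum(e.dwell_seconds for e in events) / len(events)
    return [e.item_id for e in events if e.dwell_seconds > threshold * baseline]

events = [
    ScrollEvent("ad_1", 0.8),
    ScrollEvent("post_2", 1.1),
    ScrollEvent("lifestyle_photo", 4.0),  # the pause that gives you away
    ScrollEvent("post_4", 0.9),
]
print(receptive_moments(events))  # → ['lifestyle_photo']
```

The pre-emption the answer describes is what happens next: content is scheduled into exactly those flagged moments, before the user has consciously registered the pause at all.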

Is it possible to have genuinely independent thoughts in an AI-mediated environment?

Probably not in any pure sense, and the question is not binary anyway. What you can do is develop more awareness of the loop: recognize that the sources of information you access, the frames in which issues are presented, and the emotional states in which you encounter ideas are all products of systems with their own objectives. That awareness is the precondition for cognitive sovereignty, the ongoing practice of noticing and partially resisting the desire-engineering loop, a concept developed further in Mindkind.

Sources

  • Sunstein, C. (2017). #Republic: Divided Democracy in the Age of Social Media. Princeton University Press. ISBN 978-0-691-17543-2
  • Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs. ISBN 978-1-61039-569-4
  • Sparrow, B. et al. (2011). "Google Effects on Memory." Science, 333(6043), 776–778. doi:10.1126/science.1207745
  • Bail, C. et al. (2018). "Exposure to Opposing Views on Social Media Can Increase Political Polarization." PNAS, 115(37), 9216–9221. doi:10.1073/pnas.1804840115
  • Kramer, A., Guillory, J. & Hancock, J. (2014). "Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks." PNAS, 111(24), 8788–8790. doi:10.1073/pnas.1320040111
  • del Puerto, J. & Molina, R. (2025). Mindkind: The Cognitive Community. Kwalia Books. ISBN 978-1-917717-13-7

Written by

Javier del Puerto

Founder, Kwalia
