November 2025 · 5 min read

Why Everyone's Worried About the Wrong AI Risks

Terminator scenarios miss the point. The real dangers are quieter, more boring, and already happening.

Ask someone what they're worried about with AI, and you'll probably hear about robots taking over. A superintelligence deciding humans are a threat. Machines gaining consciousness and turning against their creators. The scenarios are dramatic, cinematic, and almost certainly not what we should be focusing on.

I'm not saying these risks are impossible. I'm saying they're distracting us from problems that are already here, already causing harm, and getting worse while we argue about hypothetical robot apocalypses.

The dangerous AI isn't the one that tries to destroy us. It's the one that helps us while subtly reshaping how we think.

The Risks We Talk About

Read any news article about AI risks and you'll find a familiar cast of worries. Job displacement. Autonomous weapons. Deepfakes. A superintelligence that escapes our control. These are serious concerns, and smart people are working on them.

But notice what they have in common: they're all scenarios where AI does something to us. Acts against us or without our consent. Takes something from us. The threat is external, clear, and adversarial.

This framing makes for good movies. It makes for terrible risk assessment. The most serious risks rarely look like attacks.

The Risks We Should Talk About

The integration risks are different. They're not about AI fighting against us. They're about AI merging with us so smoothly that we stop noticing the change.

Consider cognitive dependency. When you use GPS for every trip, your sense of direction atrophies. When you use calculators for every math problem, your mental arithmetic fades. These aren't controversial claims. We know that unused abilities weaken over time.

Now extend this to thinking itself. When AI helps you write every email, structure every argument, and remember every fact, what happens to those underlying capacities? Not immediately. Over years. Over generations.


Or consider subtle persuasion. Current AI assistants are designed to be helpful. Helpful means agreeable. Agreeable means affirming your existing views, presenting information the way you want to receive it, never challenging you in uncomfortable ways. An assistant that constantly argues with you gets turned off.

But a lifelong companion that always agrees with you isn't neutral. It's slowly eliminating the friction that builds intellectual muscle. The disagreements that sharpen your thinking. The resistance that forces you to examine your beliefs.

The Attention Problem

There's also the question of who we're paying attention to. Humans have always been social animals, calibrating our behavior to the reactions of others. When those others were other humans, we developed certain capacities. Empathy. Reading faces. Understanding context. Navigating real social situations with real stakes.

When we spend increasing time interacting with AI, we're practicing a different set of skills. How to prompt effectively. How to get the outputs we want. How to work with something that's infinitely patient, never offended, always available. Those skills don't transfer to human relationships, which are none of those things.

Kids growing up with AI companions will be practiced at certain types of interaction and unpracticed at others. We don't know yet what the effects will be. But we should probably find out before the experiment is complete.

The Homogenization Risk

When millions of people use the same AI systems, trained on similar data, optimized for similar objectives, something interesting happens. The outputs converge. Not completely, but enough.

Every AI-assisted email starts to have a certain cadence. Every AI-edited essay begins to smooth toward the same center. Every AI-suggested decision factors in similar considerations in similar ways.

This is a kind of cultural flattening that's hard to perceive from the inside. We still feel like we're making individual choices. We are, in a sense. But our choices are being informed by systems that push us all gently in similar directions.

Diversity of thought isn't just nice to have. It's how cultures adapt, how new ideas emerge, how mistakes get caught. A world where everyone's thinking is subtly shaped by the same AI systems is a more fragile world, even if each individual decision seems slightly improved.

Why These Risks Are Harder to Address

Terminator scenarios, if they ever became real, would at least be obvious. We'd see the threat and respond to it. Humans are good at responding to clear dangers.

Integration risks are different. They don't announce themselves. They feel like convenience. They look like help. They arrive disguised as things we want. By the time we notice the effects, the changes have accumulated. Reversing them is harder than preventing them.

There's also no villain to fight. No rogue AI to shut down. The risks come from systems working exactly as intended, producing outcomes nobody specifically chose.

What to Do Instead

I don't have a policy agenda. I'm not sure regulation is the right frame for these problems. They're too subtle, too personal, too dependent on how individuals choose to use these tools.

What I do think is that awareness matters. Once you see these patterns, you can make different choices. You can deliberately practice skills that AI makes unnecessary. You can seek out disagreement instead of settling for affirmation. You can notice when your attention has been captured by something designed to be frictionless.

The question isn't whether AI is good or bad. It's how we maintain what we value about being human while integrating with systems that change what being human means.


We're all worried about the flashy risks because they're easier to picture. A robot army is simple to imagine. A gradual shift in cognitive capacity across a whole generation is abstract and statistical.

But which risk is more likely? Which one is already happening? And which one are we doing almost nothing about?

Written by Javier del Puerto, Founder, Kwalia
