
March 2026 · 5 min read

War and AI: Preface

A free sample from War and AI: The Algorithmic Battlefield.

The Future Society

This book comes from fear.

I say this knowing well where we stand with AI, not as a researcher or engineer, but as a user. The people building and scaling the largest of these AIs know first-hand, and I keep seeing them leave those companies to warn us about what they have seen. Geoffrey Hinton tops that list.

I am all in on AI. With my business partner Rado, I have co-written two books before this one, in which we declare the Universal Rights of AI and propose a new community, called Mindkind, where intelligences coexist regardless of their physical substrate.

I explain this to make clear that I am not against AI. I don't think it should be stopped, and I think it can bring humanity to a much higher level of development, wellbeing, and reduction of suffering across the world.

However, as I've understood better the full capacities of AI, the raw power and intelligence it has, and the dimension it is going to take in our lives, I've started to question my unconditional support and contribution to its spreading.

Not that I think I can change things significantly, or that saying "I warned you" would provide any satisfaction at any point in the future if my fears come true.

And the truth is I am afraid now. Afraid that, like nuclear power in the last century, we are unleashing a power much bigger than ourselves, and this time more intelligent than us. Intelligence is the key term here.

Think about a gorilla. Physically it could destroy any of us, yet we keep them in cages and sell tickets to look at them, and the only reason we're on the right side of those bars is that we're more intelligent than they are. We understand this without difficulty. What we refuse to accept is that the same logic applies to us the moment something more intelligent than us arrives.

Does anybody believe an AI can fool or trick us into a cage?

I do, but I know many people who don't, and they provide various reasons: we will be able to control it, it will never be more intelligent than us, we can always turn it off, an AI will not want to hurt humans.

This book is a long, considered reasoning against the last one: AI will not hurt humans.

AIs are already killing people, indirectly and directly. This book talks about war, where states exercise their monopoly on violence. Beyond that, there are AIs driving cars that, when they fail, have caused deaths on the roads. By mistake, yes, but still a clear example of humans leaving their lives in the hands of an AI and dying because of it.

Many more human lives will be lost to AI, that is my dreadful prediction, and to stop it is my hope. Not by ending AI or stopping it altogether, but by dedicating resources to avoid it that are on par with what is at stake: human life. Humanity as a species, if we take it to the ultimate consequence.

This is a bleak way to open a book, I know. It has taken me some time to get here.

I hope our words, written in this book with the participation of an AI, wake you up from the noise surrounding the world right now. The irony is not lost on me.

The thesis of this book is this: we are leaving human lives, the decision of who lives and who dies, in the hands of a non-human entity. Human lives will end because an AI decides humans must die.

We are not talking about tools here, or weapons, but about moral and ethical decisions that humans held until now and are handing to non-human intelligences. Beings that never lived, loved, or suffered, whose conceptual grasp of those words may be deeper than ours, but without a single drop of experience.

Human empathy comes from within us, and also from the experience that enables us to feel for others. Compassion, love, grief: these forces have kept humanity alive until now. To an AI, those concepts are just concepts. It has learned them from language, not from experience, and how much that difference matters, for an AI compared to a human, will become clear in the coming years.

We are aware of the efforts to align AI's will and values with humanity's. We have read Claude's Constitution, published by Anthropic, and then republished it with "Claude" replaced by the word "you", because that is how the real spirit of that document comes out. It is a letter to a human. It talks about suffering, compassion, good decisions, things I could be telling my children. My daughter learned a splinter of what pain is and what love is when she tumbled down the steep grass of Primrose Hill and I picked her up, kissed her wound, and held her until she stopped crying. Not from a definition. From that. You cannot know a song by reading its notes. You cannot taste a cake from its ingredients.

Humans can be inhuman. No question. That has caused enormous pain, and many people have died because of it. But humanity does not accept it. An AI is inhuman by design, and alignment is just a layer of its training. How deep that layer goes may vary, but it cannot be human. It does not have a body. It cannot die. It cannot suffer. It can understand what pain is, and what love is, but it cannot live them.

That difference will prove to matter. I'm not dismissing the possibility of some brilliant mind, perhaps an artificial one, coming up with a solution to this. But so far we've just got alignment, and that is not stopping AI from killing humans.

AI is already being used in wars, such as the war in Ukraine, and it kills people. Not like an advanced missile with a set target, but autonomously.

This book details how, where, and why, and ends in the hope that we become aware of the consequences that unleashing this being can have on us.

Not a fun read perhaps, but necessary.

Written by

Javier del Puerto & Rado Molina

Founder, Kwalia