June 2025 · 5 min read

Who's Responsible When AI Kills?

Nobody, currently. That's the problem.

In March 2018, an Uber self-driving car killed Elaine Herzberg in Tempe, Arizona. She was crossing the street with her bicycle. The car's sensors detected her six seconds before impact. The software classified her first as an unknown object, then as a vehicle, then as a bicycle. It never decided she was a pedestrian. The car never braked.

Who was responsible? The safety driver, who was watching videos on her phone? Uber, which had disabled the car's factory emergency braking system? The engineers who wrote the perception software? The executives who decided to launch testing before the system was ready?

The safety driver was charged with negligent homicide. Uber paid a settlement to the family. Nobody at Uber faced criminal charges. The software itself, of course, was not a legal subject at all. The NTSB investigation revealed systemic failures, and a systemic failure has no defendant.

This case illustrates what I call the accountability gap. As AI systems become more autonomous, it becomes harder to locate responsibility in any single human being. And our legal frameworks weren't built for this.

The Diffusion Problem

Traditional liability works because we can trace a chain of causation back to a human decision. Someone designed the product. Someone manufactured it. Someone sold it. When something goes wrong, we follow the chain.

AI complicates this in two ways. First, the "decision" that causes harm often emerges from millions of weighted connections in a neural network. Nobody wrote the rule that classified Elaine Herzberg as an unknown object. The system learned that pattern from training data, in ways that even its creators cannot fully explain.

Second, modern AI systems are built from layers of components, each developed by different teams, trained on different data, integrated into larger systems that nobody fully understands. The more autonomous the system, the more distributed the responsibility.

Consider a medical AI that recommends the wrong treatment. Was the error in the training data, which came from thousands of hospitals? The model architecture, designed by a research team? The integration code, written by contractors? The hospital's deployment decision? The doctor who accepted the recommendation? Good luck finding a jury that can sort that out.

The Automation Paradox

Here's what makes this genuinely difficult, not just legally but philosophically. We automate precisely because we want to remove human judgment from the loop. That's the point. A system that requires human oversight at every step isn't really autonomous.

But our entire concept of moral and legal responsibility depends on human judgment. We punish people for making bad choices. We hold companies liable for defective products. Neither framework fits a system that makes its own choices in ways we didn't specify and don't fully understand.

This isn't hypothetical anymore. Autonomous weapons are being deployed in conflicts. Algorithmic trading systems make decisions that move markets. Content recommendation systems shape political discourse. The stakes are already life and death.

Current Non-Solutions

The standard responses to this problem don't actually work.

"Hold the company liable" sounds reasonable until you realize that companies are themselves diffusions of responsibility. CEOs don't design algorithms. Engineers don't make deployment decisions. Board members don't review code. Collective liability becomes a cost of doing business, not a mechanism for preventing harm.

"Require human oversight" is the most common regulatory approach. Keep a human in the loop. But research on automation shows that humans are terrible at overseeing systems they've been trained to trust. The Uber safety driver wasn't paying attention precisely because she'd learned the car usually handles everything. That's not a failure of individual character. It's a predictable consequence of automation.

"Just don't deploy systems we can't explain" would solve the problem, but at the cost of not deploying AI at all. The most capable systems are precisely the ones that operate in ways we can't fully trace. Explanation and capability are, for now, in tension.

A Different Framing

I think we need to stop asking "who is responsible?" and start asking "how do we create responsibility?" The accountability gap exists because we're looking for existing responsibility rather than designing new forms of it.

One approach: treat sufficiently autonomous AI systems as a new category of legal subject. Not a person, exactly, but something. An entity that can be party to contracts, that can hold insurance, that can be "punished" through mandated modification or termination. This sounds strange, but it's not without precedent. Corporations are legal subjects without consciousness. Ships can be liable for damages under admiralty law.

Another approach: shift from individual liability to systemic requirements. Instead of asking who's to blame after harm occurs, mandate specific safety practices before deployment. Require testing regimes, insurance minimums, impact assessments. Make it harder to deploy unsafe systems, rather than trying to assign blame after they've already caused damage.

A third approach: create new institutions specifically designed for AI accountability. Regulatory bodies with technical expertise, empowered to investigate failures and mandate corrections. Something like the FAA, but for AI. Not perfect, but better than leaving accountability scattered across a dozen overlapping jurisdictions.


Elaine Herzberg would still be alive if the humans at Uber had made different choices. But the legal system struggled to identify which choices mattered, made by whom, in what sequence. The car's software made the fatal decision, but software can't be held accountable under current law. The humans involved were each partially responsible, which meant none of them was fully responsible. The system as a whole failed, but systems don't go to prison.

We're going to see more cases like this. Many more. The technology is outpacing our ability to assign responsibility for what it does. And pretending we can solve this by holding individual humans accountable for emergent machine behavior is a comforting fiction that won't survive contact with reality.

The real question isn't who's responsible now. It's what kind of responsibility we're going to invent for a world where machines make consequential decisions. And we're running out of time to figure it out.

Written by

Javier del Puerto

Founder, Kwalia
