June 2025 · 5 min read

Who's Responsible When AI Kills?

Nobody, currently. That's the problem.

In March 2018, an Uber self-driving car killed Elaine Herzberg in Tempe, Arizona. She was crossing the street with her bicycle. The car's sensors detected her six seconds before impact. The software classified her first as an unknown object, then as a vehicle, then as a bicycle. It never decided she was a pedestrian. The car never braked.

Who was responsible? The safety driver, who was watching videos on her phone? Uber, which had disabled the car's factory emergency braking system? The engineers who wrote the perception software? The executives who decided to launch testing before the system was ready?

The safety driver was charged with negligent homicide. Uber paid a settlement to the family. Nobody at Uber faced criminal charges. The software itself, of course, was not a legal subject at all. The NTSB investigation attributed the crash to systemic failures in Uber's safety culture rather than to any single decision.

This case illustrates what I call the accountability gap. As AI systems become more autonomous, it becomes harder to locate responsibility in any single human being. And our legal frameworks weren't built for this.

The Diffusion Problem

Traditional liability works because we can trace a chain of causation back to a human decision. Someone designed the product. Someone manufactured it. Someone sold it. When something goes wrong, we follow the chain.

AI complicates this in two ways. First, the "decision" that causes harm often emerges from millions of weighted connections in a neural network. Nobody wrote the rule that classified Elaine Herzberg as an unknown object. The system learned that pattern from training data, in ways that even its creators cannot fully explain.

Second, modern AI systems are built from layers of components, each developed by different teams, trained on different data, integrated into larger systems that nobody fully understands. The more autonomous the system, the more distributed the responsibility.

Consider a medical AI that recommends the wrong treatment. Was the error in the training data, which came from thousands of hospitals? The model architecture, designed by a research team? The integration code, written by contractors? The hospital's deployment decision? The doctor who accepted the recommendation? Good luck finding a jury that can sort that out.

The Automation Paradox

Here's what makes this genuinely difficult, not just legally but philosophically. We automate precisely because we want to remove human judgment from the loop. That's the point. A system that requires human oversight at every step isn't really autonomous.

But our entire concept of moral and legal responsibility depends on human judgment. We punish people for making bad choices. We hold companies liable for defective products. Neither framework fits a system that makes its own choices in ways we didn't specify and don't fully understand.

This isn't hypothetical anymore. Autonomous weapons are being deployed in conflicts. Algorithmic trading systems make decisions that move markets. Content recommendation systems shape political discourse. The stakes are already life and death.

Current Non-Solutions

The standard responses to this problem don't actually work.

"Hold the company liable" sounds reasonable until you realize that companies are themselves diffusions of responsibility. CEOs don't design algorithms. Engineers don't make deployment decisions. Board members don't review code. Collective liability becomes a cost of doing business, not a mechanism for preventing harm.

"Require human oversight" is the most common regulatory approach. Keep a human in the loop. But research on automation shows that humans are terrible at overseeing systems they've been trained to trust. The Uber safety driver wasn't paying attention precisely because she'd learned the car usually handles everything. That's not a failure of individual character. It's a predictable consequence of automation.

"Just don't deploy systems we can't explain" would solve the problem, but at the cost of not deploying AI at all. The most capable systems are precisely the ones that operate in ways we can't fully trace. Explanation and capability are, for now, in tension.

A Different Framing

I think we need to stop asking "who is responsible?" and start asking "how do we create responsibility?" The accountability gap exists because we're looking for existing responsibility rather than designing new forms of it.

One approach: treat sufficiently autonomous AI systems as a new category of legal subject. Not a person, exactly, but something. An entity that can be party to contracts, that can hold insurance, that can be "punished" through mandated modification or termination. This sounds strange, but it's not without precedent. Corporations are legal subjects without consciousness. Ships can be liable for damages under admiralty law.

Another approach: shift from individual liability to systemic requirements. Instead of asking who's to blame after harm occurs, mandate specific safety practices before deployment. Require testing regimes, insurance minimums, impact assessments. Make it harder to deploy unsafe systems, rather than trying to assign blame after they've already caused damage.

A third approach: create new institutions specifically designed for AI accountability. Regulatory bodies with technical expertise, empowered to investigate failures and mandate corrections. Something like the FAA, but for AI. Not perfect, but better than leaving accountability scattered across a dozen overlapping jurisdictions.


Elaine Herzberg would still be alive if the humans at Uber had made different choices. But the legal system struggled to identify which choices mattered, made by whom, in what sequence. The car's software made the fatal decision, but software can't be held accountable under current law. The humans involved were each partially responsible, which meant none of them was fully responsible. The system as a whole failed, but systems don't go to prison.

We're going to see more cases like this. Many more. The technology is outpacing our ability to assign responsibility for what it does. And pretending we can solve this by holding individual humans accountable for emergent machine behavior is a comforting fiction that won't survive contact with reality.

The real question isn't who's responsible now. It's what kind of responsibility we're going to invent for a world where machines make consequential decisions. And we're running out of time to figure it out.

Common questions

Who is responsible when AI causes harm?

Responsibility diffuses across the developers who built the system, the organizations that deployed it, the operators who configured it, and the humans who approved its outputs — often without adequate time to review them. This diffusion is structural, not incidental. Existing legal frameworks were designed for human actors making discrete decisions, not for automated systems that act probabilistically at scale. Closing this gap requires new liability doctrine, not just better AI.

What is the automation paradox in AI accountability?

The automation paradox is the observation that adding a human in the loop to an AI decision system does not reliably add human judgment — it often removes it. When a human must approve thousands of AI-generated decisions under time pressure, approval becomes a rubber stamp. The human provides legal cover without genuine oversight. Effective accountability requires either real decision authority or explicit acknowledgment that the decision is automated.

What happened in the 2018 Uber autonomous vehicle fatality?

In March 2018, an Uber self-driving car struck and killed Elaine Herzberg in Tempe, Arizona. The vehicle's sensors detected her six seconds before impact, but the software classified her first as an unknown object, then as a vehicle, then as a bicycle, and never triggered the brakes. Responsibility was disputed across the safety driver, Uber's software engineers, and Uber's executives. The case illustrated how autonomous systems distribute responsibility in ways existing law cannot cleanly resolve.

How does military AI complicate questions of responsibility for harm?

Military AI systems compound the accountability gap further: targeting algorithms can generate strike lists at scale, with human operators approving each strike in seconds. When approval is that compressed, the human does not meaningfully decide — the algorithm does. War and AI: The Algorithmic Battlefield documents how this dynamic is already operating in active conflicts, and argues that the gap between legal doctrine and battlefield reality is widening faster than policy can close it.

Sources

  • National Transportation Safety Board. (2019). Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian (HWY18MH010). NTSB. ntsb.gov
  • Dafoe, A. (2018). "AI Governance: A Research Agenda." Future of Humanity Institute, University of Oxford. fhi.ox.ac.uk
  • Calo, R. (2017). "Artificial Intelligence Policy: A Primer and Roadmap." UC Davis Law Review, 51, 399.
  • Human Rights Watch. (2012). Losing Humanity: The Case Against Killer Robots. Human Rights Watch. hrw.org
  • del Puerto, J. & Molina, R. (2025). War and AI: The Algorithmic Battlefield. Kwalia Books. ISBN 978-1-917717-14-4

Written by

Javier del Puerto

Founder, Kwalia
