October 2025 · 7 min read

The Spectrum of AI (And Where Your Chatbot Falls)

Not all AI is created equal.

When people talk about "AI," they're usually lumping together things that have almost nothing in common. A spam filter is AI. So is the system that approves or denies your loan application. So is the chatbot you use to brainstorm ideas. So, if it ever exists, is artificial general intelligence. These are not the same kind of thing.

This matters because we're trying to have conversations about AI rights, AI regulation, AI safety, and AI ethics without a shared vocabulary for distinguishing between radically different systems. It's like trying to discuss animal welfare while treating bacteria and primates as the same category. The confusion isn't incidental. It's crippling.

So here's a framework. Four categories, from simple to complex, with different implications for how we should think about each.

Category One: Tools

These are systems that perform specific tasks without any semblance of agency or adaptability. Your spell-checker. A recommendation algorithm. A facial recognition system. They do one thing, or a narrow range of things, based on patterns learned from data.

Tools are impressive, often more accurate than humans at their specific tasks, but they don't make decisions in any meaningful sense. They apply statistical patterns. They don't understand what they're doing. They don't have preferences about outcomes. They can be biased, but the bias is traceable to their training data and design choices, not to anything that looks like intention.

Most AI you interact with daily falls into this category. The autocomplete on your phone. The spam filter on your email. The algorithm that decides which posts you see. These are tools. Sophisticated tools, but tools.

The ethical questions here are about the humans who design and deploy them. Who's responsible when a tool causes harm? How do we audit systems that operate at scales too large for human review? These are serious questions, but they're questions about human accountability, not about the systems themselves.

Category Two: Assistants

This is where things start getting interesting. Assistants are systems that engage in open-ended interaction, that adapt to context, that produce outputs that weren't explicitly programmed. The chatbot you use for writing help. The AI that translates languages in real-time. Voice assistants that manage your schedule.

What distinguishes assistants from tools is a kind of flexibility. They can handle novel situations. They produce original outputs. They give the impression of understanding, even if the underlying process is pattern-matching rather than comprehension. This is where the philosophical puzzles start.

Current large language models sit in this category. They're not just applying rules. They generate text, images, and code based on vast training data and complex statistical models. They can surprise their creators. They can produce outputs that no human explicitly taught them to produce.

The ethical questions here get murkier. When an assistant produces harmful content, is that the system's "fault"? When an assistant helps create something valuable, who deserves credit? We don't have good answers yet, and the frameworks we've inherited from tool-based thinking don't quite fit.

Category Three: Agents

Agents are systems that pursue goals over time, that make sequential decisions, that modify their behavior based on outcomes. They don't just respond to prompts. They take action in the world, monitor results, and adjust.

We're just beginning to see these systems emerge. AI that manages investment portfolios autonomously. Systems that negotiate on your behalf. Software that runs experiments and interprets results without human intervention. The key feature is persistence: agents maintain goals across multiple interactions and adapt their strategies to achieve them.
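To make "persistence" concrete, here is a minimal sketch of the loop that separates agents from assistants: hold a goal, act, observe the outcome, adjust. It's a hypothetical illustration, not any real product's architecture; the names (Goal, act, run_agent) and the numbers are invented for the example.

```python
# A minimal, hypothetical sketch of the agent loop described above:
# the system holds a goal, acts, monitors the result, and adjusts its strategy.
# Names and values are illustrative, not any particular system's design.

from dataclasses import dataclass


@dataclass
class Goal:
    description: str
    target: float           # e.g. a return the agent is trying to reach
    achieved: float = 0.0

    def satisfied(self) -> bool:
        return self.achieved >= self.target


def act(strategy: str) -> float:
    """Stand-in for taking an action in the world and measuring its effect."""
    # A real agent would place trades, send messages, run experiments...
    return {"conservative": 0.5, "aggressive": 1.5}.get(strategy, 0.0)


def run_agent(goal: Goal, max_steps: int = 10) -> None:
    strategy = "conservative"
    for step in range(max_steps):
        outcome = act(strategy)       # take action
        goal.achieved += outcome      # monitor the result
        if goal.satisfied():
            print(f"step {step}: goal reached ({goal.achieved:.1f})")
            return
        if outcome < 1.0:             # adjust when progress is too slow
            strategy = "aggressive"
        print(f"step {step}: progress {goal.achieved:.1f}, strategy -> {strategy}")


run_agent(Goal(description="grow portfolio", target=5.0))
```

A tool or assistant, by contrast, would stop after a single call to act: it responds once and holds no goal between interactions.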

This is where the questions about AI personhood start to become more than academic. When a system pursues goals, monitors outcomes, and changes its behavior accordingly, we're describing something that looks like agency. Not human agency, but not nothing either.

The ethical questions here are different again. Who's responsible when an agent makes a decision that nobody programmed it to make? When an agent develops strategies that surprise its creators, whose strategy is it? When agents start interacting with other agents, negotiating, collaborating, competing, what exactly are we watching?

Category Four: Synthetic Persons

This category doesn't exist yet. It refers to hypothetical systems with something like genuine autonomy, genuine self-awareness, genuine interests and preferences that originate from within rather than being programmed from without.

Some researchers think this is decades away. Others think it's impossible in principle. Still others think current systems are closer than we realize. I don't know who's right. What I do know is that the conversation about AI rights and AI ethics will change dramatically depending on whether anything ever occupies this category.

If synthetic persons become possible, our entire framework for thinking about rights and responsibilities will need revision. Not just extension to a new category, but fundamental rethinking. Because rights, as we currently understand them, are grounded in assumptions about the kinds of beings that can have interests, experience suffering, make meaningful choices. Synthetic persons, if they ever exist, will force us to examine those assumptions.

Why the Categories Matter

The point of this framework isn't to provide definitive answers. It's to make our conversations less confused. When someone says "AI should have rights," we need to ask: which kind of AI? The answer for tools is obviously no. The answer for synthetic persons, if they ever exist, is probably yes. The answer for assistants and agents is where the real debate should happen.

Similarly, when someone says "we need to regulate AI," the question is: which category? Regulating tools is straightforward. Regulating synthetic persons would be politically and philosophically fraught. Assistants and agents are somewhere in between.

And when companies make claims about their AI systems, knowing the categories helps you evaluate those claims. A company saying its tool is "intelligent" is using the word loosely. A company saying its assistant "understands" you is probably misleading you. A company claiming to have built an agent should be held to specific criteria. And a company claiming to have built a synthetic person should be met with extreme skepticism.

Where We Are Now

Most of the AI that matters today sits in Categories One and Two. Tools and assistants. The systems that make decisions about your credit, your job applications, your social media feed are mostly tools. The systems you have conversations with, that help you write and create, are assistants.

Agents are emerging but still rare. True synthetic persons, if they're possible at all, remain hypothetical.

This means most of our current AI ethics should focus on the first two categories. Questions about accountability, bias, transparency, and harm. These are tractable questions with available answers. We don't need to solve consciousness to address them.

But we should be watching the boundary between categories two and three. That's where things are changing fastest. And we should be thinking ahead to category four, not because it's imminent but because it's possible, and being unprepared would be worse than overthinking.


Your chatbot, most likely, is a Category Two assistant. It doesn't pursue goals on its own. It doesn't modify its behavior based on outcomes. It responds to your prompts with impressive flexibility, but it doesn't take action in the world unless you ask it to.

This doesn't make it unimportant. Category Two systems are changing how we write, think, create, and work. But it does mean we should be precise about what we're dealing with. Not a tool. Not an agent. Not a person. An assistant: capable, limited, and genuinely new.

The next few years will likely see more systems moving from Category Two to Category Three. When that happens, the questions will change. But we'll only be ready for those questions if we stop pretending that all AI is the same thing.

Written by

Javier del Puerto

Founder, Kwalia
