November 2025 · 5 min read
The Myth of the Neutral Tool
Technology is not a hammer. It's a hammer that whispers which nails to hit.
There's a line you hear when people don't want to think critically about technology: "It's just a tool. It depends on how you use it." Guns don't kill people. Social media isn't the problem, it's how people use it. AI is neutral. The user determines the outcome.
This sounds reasonable. It isn't. Tools are never neutral. They embody the values, assumptions, and intentions of their creators. They make certain actions easier and others harder. They shape what's thinkable. This isn't a fringe observation; it's a well-studied point in the philosophy and sociology of technology.
A hammer is the best case for the neutrality argument, and even a hammer is biased toward hitting things. That's what it's for. It makes hitting easier than not-hitting. Its design encodes a purpose.
The Affordance Problem
Psychologist James Gibson coined the term "affordances" to describe what objects invite us to do. A chair affords sitting. A button affords pressing. A slot affords inserting. We perceive possibilities through the design of things.
Software has affordances too. A like button affords quick judgment. A comment box affords response. An infinite scroll affords continued engagement. A notification dot affords checking. These aren't neutral features. They're invitations.
The designers of these systems know this. They test different versions to see which produces more engagement. They optimize for certain behaviors. The "neutral tool" is carefully calibrated to get you to act in specific ways.
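To see how concrete that calibration is, here is a minimal sketch in Python, with invented function names, metrics, and numbers: two variants of a feature go to different user buckets, an engagement metric gets logged, and whichever variant moves the metric ships. Nothing in the loop asks whether the metric is good for you.

```python
import random
from statistics import mean

# Hypothetical illustration: an A/B test that ships whichever variant
# maximizes an engagement metric (e.g., minutes in app per day).
# All names and numbers here are invented for the sketch.

def assign_variant(user_id: int) -> str:
    """Deterministically bucket users into variant A or B."""
    return "A" if hash(user_id) % 2 == 0 else "B"

def run_experiment(engagement_log: list[tuple[int, float]]) -> str:
    """Return the variant with the higher average engagement.

    engagement_log: (user_id, minutes_in_app) pairs collected during the test.
    """
    by_variant: dict[str, list[float]] = {"A": [], "B": []}
    for user_id, minutes in engagement_log:
        by_variant[assign_variant(user_id)].append(minutes)

    # The decision rule encodes the value: more engagement == better.
    # "Did this help the user focus, rest, or leave satisfied?" never appears.
    return max(by_variant, key=lambda v: mean(by_variant[v]) if by_variant[v] else 0.0)

# Fabricated log data, just to run the sketch end to end.
log = [(uid, random.uniform(10, 120)) for uid in range(1000)]
print("Ship variant:", run_experiment(log))
```

The statistics are beside the point; the point is the objective. Whatever behavior gets measured is the behavior the product drifts toward.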
Default Settings Are Decisions
Consider a single design choice: whether to show notifications immediately or batch them for a summary. Most apps default to immediate. This isn't neutral. It's a choice that favors interruption over focus, engagement over presence, the app's metric over your attention.
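Here is what that choice can look like where it actually lives, as a hypothetical settings object; the field names and values are invented, but most apps carry something like it. The defaults are written once by a designer and inherited silently by nearly everyone.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical illustration: the defaults a product ships are decisions
# written into code. Every field name and value here is invented.

@dataclass
class NotificationSettings:
    delivery: str = "immediate"          # could just as easily default to "daily_digest"
    badge_count: bool = True             # the red dot that affords checking
    autoplay_next: bool = True           # keeps the session going unless turned off
    quiet_hours: Optional[tuple] = None  # exists, but off until the user digs for it

# Most users run with whatever the constructor hands them:
settings = NotificationSettings()
print(settings.delivery)  # "immediate" -- the designer's value, now the user's default
```

Changing `delivery` to a digest is one edited string. The string that ships, though, is the one that shapes behavior at scale.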
The same pattern exists everywhere. Autoplay is a decision. Infinite scroll is a decision. Algorithmic ranking is a decision. Requiring an account is a decision. Each default encodes a value, and the research on defaults is consistent: most users never change them, so whatever the designer picks is what most people end up living with.
When someone says technology is neutral, ask: neutral compared to what? Compared to not having the technology at all? The baseline matters, and the baseline was chosen by someone.
Training Data Is Value-Laden
AI systems make the neutrality problem even clearer. A language model trained on the internet will reflect what's on the internet. A facial recognition system trained on certain faces will perform differently on others. A recommendation algorithm trained on engagement will recommend engaging content, which may not be accurate, helpful, or healthy content.
The data isn't neutral. The objective function isn't neutral. The choice of what to measure isn't neutral. At every step, human decisions are being made about what counts, what matters, what to optimize for.
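A hedged sketch of the kind of objective this describes, with invented names and data: a recommender that ranks purely by predicted engagement. Accuracy, helpfulness, and wellbeing never enter the score unless someone decides to put them there.

```python
# Hypothetical illustration of an engagement-only ranking objective.
# predict_engagement stands in for a trained model; all names are invented.

def predict_engagement(user: dict, item: dict) -> float:
    """Stand-in for a model trained on clicks, watch time, shares."""
    return item["past_click_rate"] * (1.5 if item["topic"] in user["history"] else 1.0)

def rank_feed(user: dict, candidates: list[dict]) -> list[dict]:
    # The objective function IS the value judgment:
    # score = predicted engagement, and nothing else.
    return sorted(candidates, key=lambda item: predict_engagement(user, item), reverse=True)

user = {"history": {"outrage", "sports"}}
candidates = [
    {"id": 1, "topic": "outrage", "past_click_rate": 0.30},
    {"id": 2, "topic": "local_news", "past_click_rate": 0.05},
    {"id": 3, "topic": "sports", "past_click_rate": 0.12},
]
print([item["id"] for item in rank_feed(user, candidates)])  # [1, 3, 2]
```

Swapping in a different score is technically easy and organizationally hard. Either way it's a choice, not a neutral fact about the system.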
When these systems are deployed at scale, their embedded assumptions become environmental. They become the water we swim in. We stop noticing them.
Who Benefits From Neutrality Claims
Notice who tends to argue that technology is neutral: the people who make it and profit from it. If the technology is neutral, they bear no responsibility for its effects. If the problems are caused by users, the solution is user education, not regulation.
This framing is convenient. It deflects accountability. It treats systemic effects as individual choices. It obscures the power of those who design the systems that structure our choices. The result is an accountability gap: harms everyone can see and that no one, officially, caused.
A more honest framing: technology is a designed environment. It shapes behavior. The designers know this. The question isn't whether it shapes behavior but whose interests the shaping serves.
Beyond Good and Bad
I'm not arguing that technology is bad. That's just the flip side of claiming it's neutral. Technology isn't neutral or good or bad. It's specific. Each technology, each design choice, each default setting, each algorithmic parameter has specific effects that benefit some and harm others.
The work is to understand those specifics. Who designed this? What were they optimizing for? What behaviors does it make easier? What alternatives does it foreclose? Who benefits and who bears the cost?
These questions resist simple answers. They require looking at particular technologies in particular contexts with particular attention to power and incentives.
What Actually Helps
Individual awareness is a start but not a solution. You can notice when an app is manipulating you. You can change your defaults. You can be more intentional. But these individual tactics don't address the structural problem.
What would? Regulation that holds designers accountable for effects, not just intentions. Requirements for transparency in algorithmic systems. Competition law that prevents monopolistic control of digital infrastructure. Labor protections for the workers who train and moderate AI systems.
And, at a cultural level: dropping the neutrality myth. Technology is designed. Design is political. Pretending otherwise is itself a political choice.
I use technology constantly. I'm not a Luddite. But I try to remember that every tool I use was built by someone, optimized for something, designed to produce certain outcomes. When I use it, I'm working within constraints that someone else chose.
The question isn't whether to use technology. It's whether to use it as a naive consumer or as someone who understands that tools shape the hands that use them.