October 2025 · 7 min read

The Social Contract We Never Signed

We're governed by terms of service.

John Locke and Thomas Hobbes had a famous disagreement about the social contract. Locke thought we consent to government implicitly, by benefiting from its protections. Hobbes thought consent was irrelevant; order was necessary and that was that. Both assumed the contract was with a state, a geographic entity with defined borders and democratic accountability.

Neither of them anticipated that we'd end up governed by Meta.

I'm not being hyperbolic. More people's daily lives are shaped by Facebook's content moderation policies than by most national laws. What you can say, what you can see, who you can connect with, what information reaches you: these are determined by private companies making decisions in private, according to rules that change without notice and that you have no vote on.

The New Sovereignty

Sovereignty used to mean control over territory. A government was sovereign if it had the ability to enforce its will within defined borders. This was a physical concept. Armies and police maintained it. Boundaries were drawn on maps.

Now consider: who has sovereignty over your attention? Your data? Your access to information and communication? The answer, for most practical purposes, is a handful of technology companies headquartered in California and China. They exercise this sovereignty not through armies but through code, terms of service, and algorithmic enforcement.

This isn't a hostile takeover. It happened because their services are useful, even addictive, and we chose to use them. But "chose" deserves scrutiny. When nearly all human communication runs through a few private platforms, when nearly all commerce happens on them, when opting out means social and economic isolation, how meaningful is the choice?

Terms of Service As Law

You've agreed to hundreds of legal contracts you've never read. Thousands of pages of terms of service, privacy policies, and end-user license agreements. Nobody reads them. Researchers have calculated that actually reading every privacy policy you encounter would take roughly 76 work days per year.

These contracts grant the companies extensive rights over your data, your content, your access to the platform, and by extension, your ability to participate in digital society. They can be changed at any time. They're enforced by the company unilaterally. There's no appeals process, no judge, no due process.

When Facebook bans someone, that person loses access to their social network, their marketplace, their communication tools, their stored memories. There's no constitutional protection. The First Amendment constrains the government, not private companies. You can be deplatformed for reasons you're never told, by a process you can't challenge, with consequences that affect your entire digital life.

This is governance. It's just governance we never agreed to.

The Illusion of Exit

Classical contract theory assumes the option of exit. If you don't like the terms, don't sign. If the terms change, leave. This makes consent meaningful: you're choosing to accept these conditions in exchange for these benefits.

But what happens when exit isn't a real option? When your friends are all on one platform and leaving means losing touch with them? When your business depends on a marketplace you don't control? When your professional network exists on a site that can ban you without explanation?

The exit option becomes theoretical. You can leave, in principle. In practice, the costs are so high that you stay, accepting whatever terms are imposed, because the alternative is a form of social death.

This is the structure of coercion, dressed up in the language of choice.

AI Makes It Worse

As AI systems become more sophisticated, the governance becomes more invisible and more absolute. An algorithm decides what information you see. Another decides what you're allowed to post. Another decides whether your content reaches anyone. Another decides whether you should be shown ads, and for what, and when.

These aren't decisions made by humans following explicit rules. They're predictions made by systems trained on data, optimized for objectives you don't set and don't see. The governance isn't just unaccountable; it's inscrutable. Even the companies that deploy these systems often can't explain why a particular decision was made.

We've traded a social contract we never signed for algorithmic governance we can't even read.

The Comparison to Government

Imagine if government worked this way. Imagine if laws could change daily, without announcement. Imagine if enforcement were handled by systems that couldn't explain their reasoning. Imagine if the only appeal were to the same authority that made the decision. Imagine if opting out meant leaving the country with no place to go.

We would call this tyranny. We would resist it. We have resisted it, historically, with revolutions and constitutions and bills of rights.

But because this governance comes wrapped in consumer products, because it's technically optional even when practically mandatory, because it's exercised by companies instead of states, we accept it. We click "agree" without reading, because what else can we do?

What Would A Real Contract Look Like?

If we were going to design a social contract for the digital age, what would it include?

I'd want due process: clear rules, fair enforcement, meaningful appeals. I'd want transparency: to know why a decision was made and by what criteria. I'd want portability: if I leave a platform, I should be able to take my data, my connections, my digital presence with me. I'd want representation: some voice in how the rules are made, not just acceptance or exit.

None of these exist in current platforms. They're not even on the roadmap. The incentive structure pushes in the opposite direction: lock-in, opacity, unilateral control. These things maximize shareholder value, even as they minimize user power.

The Question of Leverage

Why would platforms ever give users more rights? What leverage do we have?

Historically, rights were extracted from power through collective action. Workers unionized. Citizens organized. Consumers boycotted. The powerful conceded because the powerless became a threat.

Digital collective action is harder. The platforms control the communication channels. They can suppress organizing that threatens them. They can identify and target activists. The tools of resistance are themselves owned by the powers being resisted.

Regulation is another option, but it's slow, often captured, and crosses borders poorly. A company can incorporate in Delaware, host servers in Ireland, route data through Singapore, and argue that no single jurisdiction's laws apply fully.

I don't have a solution. What I have is recognition that the problem exists, that it's political even though it's dressed as commercial, that the language of "services" and "users" obscures a relationship of governors and subjects.


Locke thought we consent to government by tacit agreement, by accepting its benefits and living within its territory. By that logic, we've consented to platform governance too. But Locke also thought we had natural rights that no government could take away, and that when government became tyrannical, we had the right to revolt.

What would revolt mean in this context? I'm not sure. Maybe collective bargaining for better terms. Maybe regulatory action that treats platforms as utilities or governments rather than private businesses. Maybe technical alternatives that make exit viable.

For now, we live under terms we never read, enforced by systems we don't understand, changed at the whim of companies we can't influence. We call this convenient. We could also call it surrender.

Written by

Javier del Puerto

Founder, Kwalia
