January 2026 · 10 min read
Escape Velocity
The Last Year We Can Keep Up with AI
There is a moment in the trajectory of any rocket when it crosses a threshold. Below it, gravity wins and the rocket falls back to Earth the moment its engines cut out. Above it, the rocket escapes forever. This is escape velocity: the point of no return. I believe artificial intelligence is approaching its own escape velocity right now, in this year, in these months. And once it crosses that line, we will never catch up again.
This is not prediction dressed as analysis. It is the emerging consensus among the people building these systems. If they are right, 2026 is the last year in human history when ordinary people can understand and follow the progress of artificial intelligence. After that, we become passengers on a ship whose destination we cannot know, piloted by minds we cannot follow.
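The physics behind the metaphor is worth making literal, because the threshold really is binary. A minimal sketch, using standard textbook values and nothing specific to AI or to the sources cited below:

```python
# Escape velocity: the minimum speed at which an unpowered object,
# launched from a body's surface, never falls back.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """v_e = sqrt(2GM/r), ignoring atmospheric drag."""
    return math.sqrt(2 * G * mass_kg / radius_m)

print(f"Earth escape velocity: {escape_velocity(M_EARTH, R_EARTH) / 1000:.1f} km/s")
# -> roughly 11.2 km/s
```

At 11.1 km/s you eventually fall back; at 11.3 km/s you are gone for good. There is no partial escape, and that is the property this essay leans on.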
The Countdown Has Started
In Silicon Valley, a strange anxiety has taken hold. Not the anxiety of failure, but the anxiety of being too late. The Wall Street Journal recently reported on a phenomenon that has become an open secret in tech circles: the belief that the window for building generational wealth is closing, because AI may soon make money itself meaningless. As one young professional told the San Francisco Standard: "This is the last chance to build generational wealth. You need to make money now, before you become a part of the permanent underclass."
Read that again. Not "before you lose your job." Not "before things get harder." Before you become part of the permanent underclass. This language of permanence, of irreversibility, is now common in the corridors where AI gets built. The people closest to these systems believe we are approaching a one-way door. I wrote about what this stratified future looks like in A Day in the Stratified Mindkind, and it is not a world I want to live in.
Elon Musk describes the coming transition as "bumpy," foreseeing "radical change, social unrest and immense prosperity." But prosperity for whom? Musk himself predicts AI could lead to "universal high income," a world without resource scarcity in which, as he puts it, "it's not clear what purpose money has." Meanwhile, Anthropic CEO Dario Amodei warns that unemployment on the scale of the Great Depression is a real risk. These are not fringe voices. These are the people building the future, and they are telling us something: the clock is ticking.
The Intelligence Explosion
To understand why the timeline is so compressed, you need to understand recursive self-improvement. Today's AI systems are built by humans. We design the architectures, curate the training data, run the experiments, evaluate the results. The process is slow, expensive, and bottlenecked by human cognition and human labor. But what happens when AI systems become capable of doing this work themselves?
Jared Kaplan, Anthropic's chief scientist, describes recursive self-improvement as "the ultimate risk." His logic: "If you imagine you create this process where you have an AI that is smarter than you, or about as smart as you, it's [then] making an AI that's much smarter. It's going to enlist that AI's help to make an AI smarter than that." And then? "You don't know where you end up."
This is the intelligence explosion: a self-reinforcing cycle where each improvement in AI capability increases its capacity for further improvements. The feedback loop could be fast. As Leopold Aschenbrenner argues in "Situational Awareness", we could compress a decade of algorithmic progress into a single year once AI systems can automate AI research. Hundreds of millions of AGI instances could run simultaneously, each working on different aspects of the problem, sharing insights instantaneously, never sleeping, never getting distracted. This is what we explore in Mindkind: The Cognitive Community: the emergence of what we call the cognitive community, where the boundaries between human and machine intelligence blur beyond recognition.
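To make the shape of that feedback loop concrete, here is a deliberately crude toy model, my own illustration rather than anything from Kaplan or Aschenbrenner. The only assumption is that once research is automated, yearly progress scales with the capability of the system doing the research; the starting point, threshold, and gain are arbitrary:

```python
# Toy model of recursive self-improvement. Not a forecast; the numbers
# are arbitrary and exist only to show how the curve changes shape.

def years_to_threshold(start=1.0, threshold=100.0, automated=False, gain=0.5):
    """Simulated years until 'capability' crosses an arbitrary threshold.

    Human-driven research: a fixed increment of progress per year.
    Automated research: the increment is proportional to current capability,
    because the system itself is doing the research.
    """
    capability, years = start, 0
    while capability < threshold:
        capability += gain if not automated else gain * capability
        years += 1
    return years

print("Human-paced research:   ", years_to_threshold(automated=False), "years")  # 198
print("Self-improving research:", years_to_threshold(automated=True), "years")   # 12
```

The absolute numbers mean nothing; the ratio is the point. Change one assumption, that progress feeds on itself, and the same distance is covered in a small fraction of the time.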
The specific timeline predictions vary. Google DeepMind CEO Demis Hassabis says AGI will emerge by 2030. DeepMind's chief AGI scientist, Shane Legg, pegs it at 2028. Elon Musk expects true AGI in 2026 or 2027, with superintelligence around 2030. Aschenbrenner's analysis suggests that by 2025/26, AI will outpace many college graduates; by the end of the decade, it will be smarter than any human alive. The most aggressive estimates put superhuman AI within two to four years.
Even if these timelines are off by a few years, the trajectory is clear. We are not talking about gradual improvement. We are talking about escape velocity.
The Great Bifurcation
What does escape velocity mean for society? Axios reports that the nation is already splitting into three distinct economic realities: the Have-Nots (stalling), the Haves (coasting), and the Have-Lots (rocketing to greater wealth). During the AI boom of the past two years, the top 10% of households saw their wealth increase by $5 trillion in a single quarter. The bottom 50% saw gains of $150 billion, less than a thirtieth of that.
This is not ordinary inequality. This is the emergence of what researchers call a new economic structure where the marginal productivity of human labor approaches zero. When an AI system can do anything a human can do, but faster, cheaper, and at scale, what is the economic value of human work? The honest answer: close to nothing. At least, nothing that can compete with capital ownership in an AI-dominated economy. The question of who has rights in such a world becomes not academic but urgent.
The tech industry's response has been revealing. According to NPR, rather than calling for caution or redistribution, Silicon Valley's reaction has been "everybody working as hard as they can to prove that they are going to end up on top of that divide." The "permanent underclass" has become a common meme. One prominent tech figure tweeted: "Everyone I know believes we have a few years maximum until the value of labor totally collapses and capital accretes to owners on a runaway loop. This is the permanent underclass thing, and everyone I know subscribes to it."
This anxiety has fueled extreme work cultures, including the "996" schedule: 9 AM to 9 PM, six days a week, a 72-hour work week. The reasoning is grimly logical: if there is a narrow window to accumulate enough capital to survive the transition, then every hour counts. Sleep is for after the singularity.
The Cognitive Threshold
But the economic transformation is not what concerns me most. What concerns me is the cognitive threshold we are about to cross. Right now, in January 2026, I can still follow what is happening in AI. I can read the papers, understand the architectures, grasp the capabilities and limitations. I can form my own judgments about what these systems can and cannot do. This understanding is imperfect, but it exists.
How long will this remain true?
OpenAI has announced they are aiming to build a "true automated AI researcher by March of 2028" and to have an "AI research intern" by September 2026. Once AI systems can conduct their own research, the pace of progress will be set not by human cognitive limits but by computational ones. Papers will be written, experiments run, breakthroughs achieved at a pace no human could match.
At first, humans will still be in the loop. We will review the research, evaluate the results, make the decisions. But as the systems grow more capable and the pace accelerates, this will become difficult, then impossible. How do you evaluate a paper written by a system smarter than you are? How do you understand research that operates on principles your mind cannot grasp? At some point, human oversight becomes a rubber stamp on processes we cannot follow. I explored what the last human thought might look like; we are getting closer to that moment than I expected.
This is the cognitive threshold: the moment when AI progress becomes opaque to human understanding. Not because the information is hidden, but because our minds are not powerful enough to process it. We will be like dogs watching humans do calculus: aware that something is happening, unable to participate in it.
The Last Window
If this analysis is correct, then 2026 represents something unique: the last year when humans can engage with AI as peers rather than as subjects. The last year when understanding AI is a choice rather than an impossibility. The last year when we might still shape the trajectory rather than endure it.
Some will find this perspective alarmist. They will point to skeptics like François Chollet, who argues that intelligence is embedded in context and there is no such thing as "general" intelligence independent of environment. They will cite Stuart Russell and Peter Norvig's observation that technological improvement tends to follow an S-curve rather than continuing upward into hyperbolic singularity. They will note that predictions of imminent AI transformation have a long history of being wrong.
Fair enough. I do not claim certainty. Nobody can. But consider the asymmetry: if the skeptics are right and AI progress slows, we lose little by taking the possibility seriously. If the accelerationists are right and we are approaching escape velocity, we lose everything by ignoring it.
The skeptics must also explain why the people closest to these systems are behaving as if the transformation is imminent. Why is Silicon Valley in a frenzy to accumulate wealth before AI "takes over"? Why are the top AI labs racing to build automated researchers? Why are governments around the world suddenly treating AI as a national security priority? These are not the actions of people who expect gradual, manageable progress.
What Can We Do?
If we are approaching escape velocity, what is the appropriate response? I do not have a complete answer. The scale of what is coming is beyond any individual's capacity to address. But there are some things worth doing.
Pay attention. Not to the headlines, which sensationalize or minimize. Read the papers. Follow the researchers. Understand, as deeply as you can, what is actually being built. This window of understanding will not last long.
Think about positioning. I am uncomfortable with the crassness of Silicon Valley's "last chance for generational wealth" framing, but the underlying concern is not baseless. If the economic value of human labor declines as dramatically as some predict, then having some form of capital matters. This is not about greed; it is about resilience.
Engage with the political and ethical questions. How do we prevent AI from creating a permanent underclass? What does a social contract look like when machines can do most work? Who should have legal personhood in a world of synthetic minds? These questions will be answered, one way or another. Better that thoughtful people participate in answering them.
Consider what makes human existence valuable independent of economic productivity. If machines can do everything we can do, but better, then our value cannot rest on what we produce. It must rest on what we are. This is not just philosophy. It is the most practical question of our time. We wrote the Universal Declaration of AI Rights to start this conversation, but it needs many more voices.
The View from the Threshold
I am writing this in January 2026, looking out at a landscape that is changing faster than I can track. Every week brings new capabilities, new benchmarks surpassed, new tasks that AI can now perform better than humans. The curve is steepening. The noise is increasing. The signal is getting harder to find.
I do not know if this year is truly the last year we can follow. The optimists might be right. Progress might slow. The curves might bend. Human ingenuity might find new ways to stay relevant.
But I do not think we should count on it.
In physics, escape velocity is not a gradual transition. It is a threshold. Below it, you fall back. Above it, you are gone forever. If AI is approaching this threshold, then every moment of comprehension counts. Every decision we can still influence is an opportunity that will not come again.
The rocket is climbing. The engines are roaring. And we are approaching the point of no return.