The empire always falls
A citizen of Rome in 117 AD, under Emperor Trajan, would've found it difficult to imagine the empire not existing. The roads, the aqueducts, the legal system, the trade networks stretching from Britain to Mesopotamia: all of it seemed as much a fact of nature as gravity // the Mediterranean itself.
Edward Gibbon gave us six volumes explaining how that feeling turned out to be wrong, and even he couldn't fully untangle all the causes.
But the overarching theme might be this: the permanence was a mirage, and belief in the permanence a catastrophic delusion.
Popular AI commentary treats the current crop of foundation model companies the way those Roman citizens treated the legions: as inevitable, as the only possible structure the world could take. The posting classes assume that because OpenAI and Google and Anthropic and Meta have built impressive things, those impressive things will keep compounding until every job is automated and every economy is restructured, leaving a permanent underclass of unemployable humans in a world that no longer needs them. This is treated as so obvious that questioning it marks you as either naive or sentimental.
But companies destroy themselves and empires rot from within, and the people living inside these systems almost never see the collapse coming, because the system itself is the lens through which they view the world.
Permanence is the most dangerous feeling in history
Thomas Kuhn argued in The Structure of Scientific Revolutions that scientists working within a dominant framework don't use it as a tool so much as inhabit it. Normal science is puzzle-solving within a framework that nobody questions, until the anomalies pile up so high that someone proposes a new framework entirely, and the old guard spends twenty years insisting nothing's changed. The Ptolemaic model of the heavens survived for over a thousand years, largely because everyone concerned was brilliant enough to keep adding epicycles to make the data fit, making every new complication feel like...well, progress.
In the "AI inevitability thesis" every limitation gets explained away as a temporary obstacle on the path to AGI. Reasoning will improve, costs will fall etc and to be fair, they might. But the confidence with which these predictions are delivered should remind you of the confidence with which the British Empire's administrators, circa 1900, reviewed the permanent nature of their civilizational project. They had the world's largest navy and the world's most extensive telegraph network, plus control of roughly a quarter of the earth's land surface. Within fifty years, nearly all of it was gone. And that dissolution happened because the underlying conditions that made the empire possible changed in ways that no amount of naval power could address.
Sure things fill graveyards
In 2007, Research In Motion controlled roughly half the US smartphone market and had a market capitalization north of $60 billion. RIM's co-CEO Mike Lazaridis reportedly studied the iPhone at launch and concluded it was impossible for the device to work as advertised on a cellular network. He was, in a narrow technical sense, almost right. The first iPhone had appalling battery life and a network that could barely support it. But he was catastrophically wrong about everything else.
Nokia held about 40% of the global mobile phone market at its peak. Internal documents later revealed that middle management had become so terrified of senior leadership's reactions to bad news that critical information about competitive threats stopped flowing upward. The company suffocated on its own hierarchy. Xerox PARC developed the graphical user interface, the modern mouse, the laser printer, and Ethernet, and then Xerox managed to commercialize approximately one of those things while Steve Jobs walked out of a demo and built Apple's future on what he'd seen.
The Soviet Union fell because the gap between its internal model of reality and actual reality became unsustainable. The Ottoman Empire spent its last century implementing increasingly frantic reforms, each one an attempt to bolt modernity onto a structure that couldn't support it.
I'm reminded of Shelley's Ozymandias: the lone and level sands stretch far away, and they stretch away from every kingdom that ever declared itself eternal.
And yes, that could still include Google, Anthropic and anyone // everyone else.
Straight lines never stay straight
The current AI narrative draws a line from model 1 to model 2 to model 3 to whatever comes next, projects it forward, and concludes that human labor // existence is finished. But straight-line projections are the most reliably wrong predictions in the history of forecasting, tech or otherwise.
What actually happens, in empires // companies alike, is that progress hits unexpected walls and leaders make strategic blunders while some force that nobody took seriously finds an approach that makes the incumbent architecture look like Ptolemy's epicycles: elaborate and technically sophisticated but pointed in entirely the wrong direction. The blunder creates the opening creates the backlash creates the opportunity for the insurgent. Why should the AI industry be exempt from this? What is it about foundation models that repeals the laws of entropy that have governed every dominant system in recorded history?
Clayton Christensen documented the corporate version of this in The Innovator's Dilemma. Across industries and decades, incumbents fail to respond to disruptive threats because responding would require cannibalizing their existing business (or identity, or vision, or mission) and admitting that the strategy everyone got promoted for executing was wrong. In other words: dominant systems produce the very conditions that destroy them, because the success of the system makes it impossible for the people inside it to perceive its weaknesses.
What collapse looks like before it arrives
We haven't seen the first great AI collapse. We haven't seen a foundation model company make the BlackBerry mistake or the Nokia mistake, the Roman mistake or the Ottoman mistake, or reach its bunker-in-Berlin moment. But we will, and we'll see it multiple times, because these mistakes = features of power concentration. The hubris that makes a company or an empire dominant in one era is frequently the quality that blinds it to the next one. If you could have asked Lazaridis in 2006, or a British colonial administrator in 1900, whether their model of the world was permanent, each would've given you a very convincing explanation for why it in fact was.
When someone tells you that AGI is inevitable and the permanent economic displacement of most humans is a foregone conclusion, what they're really telling you is that they believe the current leaders of the AI industry will execute flawlessly, indefinitely, against challenges they can't yet foresee, in an environment that's changing faster than perhaps any in the history of technology. They believe that this particular set of institutions, at this particular moment, has broken the pattern that has held for every empire and every corporation in human history.
But roads crumble and legions go home, the epicycles collapse into a simpler truth, and something else, something nobody predicted, grows in the spaces left behind. It always has and it always will.