A soft-landing manual for the second gilded age
By the summer of 1945, Berlin had been reduced to rubble. Allied bombing, the Soviet ground assault and Hitler's insistence on Götterdämmerung had destroyed roughly a third of the city's buildings and left most of the rest damaged. There was no functioning government, no reliable electricity, no clean water in large sections of the city, and somewhere around 75 million cubic metres of debris where neighbourhoods used to be. The women who cleared that debris by hand, the Trümmerfrauen, became one of the defining images of the postwar period. Three years later, when the Soviets blockaded the western sectors and tried to starve them into submission, the Western Allies mounted the Berlin Airlift, flying in food and fuel for nearly a year. And then, against every reasonable prediction, Germany rebuilt. Within a decade, it was the jewel of the Western world, a functioning democracy with a growing economy, universities and civic life, constructed on top of what had been, not long before, a moonscape of broken concrete and ash.
The conventional wisdom about large-scale disruption assumes that destruction is permanent and that recovery, if it happens at all, will be slow and partial. Berlin is a useful corrective. When the political will existed and the institutional support was there, an entire nation rebuilt itself from almost nothing in a timeframe that would have seemed absurd to anyone standing in the rubble in 1945. The Marshall Plan helped. Allied political commitment helped. But what mattered most was that people decided the situation was not hopeless, and then acted on that decision.
We're going to need that kind of energy.
AI is arriving fast, the old economic arrangements are visibly failing, and the dominant narratives about what comes next have split into two equally unhelpful camps: utopian accelerationists who believe the market will sort everything out if we build fast enough, and existential doomers who've convinced themselves that the only possible outcome is either human extinction or permanent mass unemployment. Both camps share the same underlying conviction: that the future is already determined and that human agency is irrelevant to whatever happens next.
The future is not determined. It never has been.
I don't believe AI will produce the civilizational wipeout the doomers are predicting. I think the odds are reasonably good that the economy will find equilibrium, the way it always does after major technological shifts. New industries will emerge. New forms of work will appear. People will adapt, because people always adapt.
But equilibrium is not the same as a good outcome. The economy found equilibrium after the Industrial Revolution too, and for several decades that equilibrium included child labour in textile mills and life expectancy in Manchester hovering around 25. What matters is how much unnecessary suffering we allow between here and there, and whether the new normal we settle into is one worth living in.
What follows is my attempt at a practical roadmap for getting from 2025 to 2035 in a way that makes the landing as soft as possible.
My bias is toward pragmatism. I believe safety nets work. I believe democracy, despite its many and obvious failings, remains the best system we've got for making collective decisions. I believe that history offers us better guidance than science fiction (or AI influencers on Twitter) for thinking about technological transitions. And I believe, fervently, that the people building the most promising models for a post-AI future aren't in Silicon Valley think tanks. They're in rural West Virginia counties, Danish labour offices, and UCL research labs.
They've already started the work.
The rest of us need to catch up.
The doomers are wrong (sorry)
The economic doomer thesis goes like this: AI will automate most cognitive labour within the next decade. This will create permanent mass unemployment on a scale that makes the Industrial Revolution look gentle. The wealth generated by AI will concentrate in the hands of a tiny number of companies and their shareholders. Democracy will be unable to respond because politicians are too slow, too captured, and too ignorant about technology to act in time. The result will be some form of neo-feudalism in which a small AI-owning class controls everything and the rest of humanity becomes economically superfluous.
Pieces of this deserve to be taken seriously. Wealth concentration is a real and worsening problem, political institutions do lag behind technological change, and the speed of AI development is unprecedented in certain narrow respects. But the overall thesis has a fatal flaw: it treats the current distribution of political and economic power as a fixed constraint rather than a variable. It assumes that because our institutions are currently failing to address inequality, they will inevitably continue failing, forever, no matter what anyone does. That's a deficit of historical imagination on a grand scale.
Barbara Tuchman, in her magnificent book The March of Folly, documented the recurring pattern of governments pursuing policies that were demonstrably against their own interests, from the Trojan War to the Vietnam War. Her point was not that folly is inevitable. Her point was that folly is a choice, and that at every stage of every disaster she documented, there were people pointing out the obvious alternative, people whose advice was ignored for reasons of ideology, pride, or institutional inertia. The existence of folly doesn't prove the impossibility of wisdom. It proves the necessity of fighting for it.
The doomers are repeating a version of the error Tuchman catalogued. They've looked at the current state of affairs, noticed that things are going badly, and concluded that "going badly" is the only possible trajectory. But this ignores the entire history of social reform, in which terrible conditions produced political movements that eventually forced institutional change. Child labour was once normal. Sixty-hour work weeks were once standard. The absence of workplace safety regulations was once considered a natural feature of industrial capitalism. None of these things changed because elites spontaneously became generous. They changed because people organized, fought, and built new institutions that constrained the worst impulses of capital accumulation.
The AI transition will require the same kind of fight. But it's a fight we can win, and the doomer insistence that we can't is itself one of the obstacles we need to overcome.
Re: Graeber and Bregman
David Graeber, the anthropologist and activist who died in 2020, spent much of his career arguing that our collective imagination about economic possibility had been artificially narrowed. His book Bullshit Jobs made the observation that a startling percentage of the modern workforce was employed in roles that even the people performing them believed to be pointless: entire categories of employment that existed primarily to maintain bureaucratic structures and power hierarchies rather than to produce anything of genuine value.
Graeber's insight is directly relevant to the AI conversation because it reframes the automation question entirely. If a large fraction of current employment is already, by the workers' own admission, socially useless, then the automation of those jobs isn't necessarily a catastrophe. It might be a liberation, provided we have systems in place to ensure that the people displaced from pointless work can still eat, pay rent, and maintain their dignity while they figure out what to do next.
Rutger Bregman picked up a related thread in Utopia for Realists, making the case for universal basic income, open borders, and a 15-hour work week. What made Bregman's argument persuasive wasn't the novelty of any individual proposal (the UBI idea goes back at least to Thomas Paine's Agrarian Justice in 1797) but his insistence on treating these ideas as practical policy rather than utopian dreaming. Bregman marshalled the evidence from actual UBI experiments, from the Manitoba Mincome experiment in the 1970s to GiveDirectly's programmes in Kenya, to argue that giving people unconditional cash transfers tends to produce good outcomes: people don't stop working, they don't drink themselves to death, they invest in their children's education and start small businesses.
The common thread between Graeber and Bregman is a refusal to accept that the current organisation of work and economic life is natural, inevitable, or optimal. Both argued that we could build something better and that the primary obstacle was not technological or economic but imaginative and political. We've been told so many times that there is no alternative to the current system that we've internalized it as a religious truth, even as the evidence piles up that the current system is making people miserable.
And here's the part most people miss about markets and entrepreneurship: a guaranteed income floor and universal basic services actually supercharge entrepreneurship. Right now, one of the biggest barriers to starting a business in the United States is the risk of losing your health insurance, and one of the biggest reasons talented people stay in jobs they hate is the fear of what happens if they fail. UBI studies have consistently found increased self-employment and business formation among recipients. If you want an economy that actually generates new ideas and new businesses, it turns out the most effective thing you can do is make sure nobody starves if their startup doesn't work out.
Any serious roadmap for the next decade has to take these insights as a starting premise. The AI transition is going to eliminate and transform millions of jobs, and pretending it won't isn't a strategy. But the outcome of that transformation isn't predetermined. It depends on what we build.
The road not taken: Jeff Atwood and guaranteed minimum income
In January 2025, Jeff Atwood published "Stay Gold, America," an essay about the American Dream that turned into something much larger. Atwood co-founded Stack Overflow and Discourse, and he spent two decades building internet infrastructure. But "Stay Gold" wasn't about technology. It was about the observation that the United States had entered a period of wealth concentration exceeding that of the original Gilded Age, in which the top 1% of households held 32% of all wealth and the bottom 50% held 2.6%. And it was about what that concentration was doing to the foundational promise of equal opportunity.
I worked with Atwood on the creation and launch of Stay Gold, and what struck me most was how he framed the problem. He wasn't interested in the typical Silicon Valley move of proposing a tech fix for a political failure; he went back to first principles. The American Dream, as James Truslow Adams originally defined it in 1931, was a social order in which everyone could attain the fullest stature of which they were innately capable. Atwood's argument was that this dream had become structurally inaccessible to millions of Americans, and that the act of sharing it, of deliberately extending its reach, was not charity but the fulfilment of the promise itself.
Atwood and his family pledged half their remaining wealth toward what became the Rural Guaranteed Minimum Income Initiative. Partnering with GiveDirectly and OpenResearch, they began designing rigorous GMI studies in rural American counties where poverty had been entrenched for generations. Mercer County, West Virginia, where Atwood's father was born and the collapse of coal mining left communities hollowed out. Beaufort County, North Carolina, where farming and factory jobs had evaporated. These were places where the American Dream had become, for all practical purposes, a cruel joke.
The historical lineage of guaranteed minimum income is longer and more distinguished than most people realize. Paine proposed a retirement pension funded by estate taxes in 1797. Social Security, established in 1935, cut senior poverty from roughly 50% to 10%. Martin Luther King Jr. made the moral case for direct cash disbursements in Where Do We Go From Here: Chaos or Community? in 1967, arguing that a guaranteed income was the simplest and most effective weapon against poverty. The Earned Income Tax Credit, established in 1975, became the second most effective anti-poverty tool in America after Social Security. And in 2019, Mayor Michael Tubbs launched the Stockton Economic Empowerment Demonstration, providing 125 residents with $500 per month in unconditional payments for two years. The results showed improved financial stability, increased full-time employment, and better mental health outcomes.
Atwood's contribution was to connect this lineage to the specific crisis of the present moment: the simultaneous arrival of extreme wealth concentration and AI-driven economic transformation. His speech at Cooper Union's Great Hall in March 2025, where he appeared alongside Alexander Vindman, laid out the case with characteristic directness. If the wealth exists (and it does), and if the evidence shows that guaranteed income works (and it does), and if the alternative is watching millions of people get trapped in poverty while their economic foundations crumble beneath them (and it is), then the failure to act is a political choice, not an economic inevitability.
The OpenResearch study, completed in 2023, was the largest and most detailed GMI study ever conducted in the United States. Its findings reinforced what every previous study had shown: people who received unconditional cash didn't blow it on luxuries. They shared it with others, going out of their way to help people in desperate need. They started small businesses. They invested in their kids. What guaranteed income did went beyond individual financial stability. The real effect was the network of economic security spreading through communities, strengthening the bonds between people that poverty systematically destroys.
When AI displaces workers at scale, the critical variable is whether those workers have a floor beneath them or whether they fall into a void. A guaranteed income provides the floor. It preserves the community ties and social networks that people need to rebuild their economic lives. And the evidence, accumulated across decades and dozens of studies, consistently shows that it works.
Guaranteed minimum income really is the road not taken in American (and global) economic policy, the path that was available at every stage and was always rejected in favour of means-tested bureaucracy and moral panic about dependency. The AI transition gives us a reason, and perhaps the last best chance, to finally take it.
What else we need to build
Guaranteed minimum income is the foundation, but it's not the whole structure. There are four other critical investments that need to happen between now and 2035.
Universal basic services as durable infrastructure. The UCL Institute for Global Prosperity has championed the idea of guaranteeing every citizen access to basic services regardless of employment status: healthcare, housing, education, transportation, communications, food security. Most developed nations already provide some of these universally. The expansion would mean filling in the gaps and making the guarantee explicit. Why services alongside income? Because services are harder to claw back politically. Once a country has a national health service, eliminating it becomes almost impossible, as every British politician who has flirted with privatising the NHS has discovered. Cash transfers are vulnerable to the endless cycle of means-testing and moral panic. Universal services create durable infrastructure that survives changes in government.
Portable benefits and the end of employer-dependent welfare. The system in which your health insurance, retirement savings, and professional identity are all tied to your employer made sense in the era of lifelong employment at a single company. That era ended decades ago. The AI transition will accelerate job turnover dramatically, and the employer-dependent welfare model will become actively dangerous, trapping people in declining industries because they can't afford to lose their benefits. It also kills entrepreneurship stone dead: you don't quit your job to build something new if quitting means your family loses health coverage. Denmark's "flexicurity" model combines flexible labour markets with generous unemployment insurance and active retraining programmes, producing low long-term unemployment despite high job turnover. These models work, and the evidence base is extensive. Denmark, incidentally, has one of the highest rates of entrepreneurship in Europe.
A public AI infrastructure layer. Right now, the most powerful AI systems are controlled by a handful of private companies spending tens of billions on compute infrastructure. If this trajectory continues, we'll end up with an AI oligopoly controlling the core cognitive infrastructure of the 21st century economy. We've already run this experiment with social media platforms and the results have been mixed at best. The alternative is government-funded compute clusters, open-source foundation models, and public research institutions that provide AI capabilities as a utility. The model is the public research university, the original ARPANET, the BBC: fund a public institution with a clear mandate to serve the public interest and let it provide a baseline that the private sector has to compete with.
Democratic governance of algorithmic systems. AI systems are already making consequential decisions about who gets a loan, who gets hired, who gets paroled, what news you see. These decisions are largely invisible, poorly documented, unaccountable, and unappealable. This is a governance failure, and fixing it doesn't require any technological breakthrough. We know how to create accountability structures for decision-making systems because we've been doing it for centuries. The EU's AI Act provides a template. Citizens' assemblies on AI policy, modelled on the Irish Citizens' Assembly that broke the political deadlock on abortion, would give ordinary people meaningful input into the design and deployment of systems that affect their lives.
The five obstacles
The first obstacle is political. Everything I've described requires massive public investment, which means either raising taxes on the wealthy and on corporations or redirecting existing spending. The wealthy have enormous political influence in every democracy, and they will resist any attempt to fund a more generous social contract at their expense. This has been the central political conflict of every democratic society since Athens. It gets worse when inequality is high, because extreme wealth translates directly into extreme political power.
The second obstacle is ideological. Four decades of individualist economics have embedded the assumption that markets are always more efficient than governments, that public provision of services is inherently wasteful, and that individual responsibility should take precedence over collective insurance. These assumptions are deeply held by politicians, economists, media commentators, and voters, and they won't be dislodged by a single white paper or a clever policy proposal. Changing the ideological terrain requires sustained intellectual and political work over years.
The third obstacle is coordination. Many of these proposals work best when implemented at scale, ideally internationally. A single country that raises corporate taxes to fund universal basic services while its neighbours maintain lower rates will face capital flight. International coordination is possible (the OECD's minimum corporate tax agreement shows this) but difficult and slow.
The fourth obstacle is the tech industry itself. The major AI companies have strong incentives to resist public AI infrastructure, algorithmic accountability, and any regulation that might slow their growth. They'll lobby, fund think tanks, produce studies showing regulation will destroy innovation, and warn about China winning the AI race. This playbook is familiar from every previous round of tech regulation debates, and it has been effective.
The fifth obstacle is us. Our cynicism, our exhaustion, our tendency to scroll past the policy debate and doom-post about the inevitable robot apocalypse. Doomerism is a form of political paralysis. If you believe the future is already determined, you don't organize, you don't vote, you don't fight. You wait for the catastrophe and feel validated when it arrives.
How we beat the doom
Overcoming these obstacles is the actual work of the next decade. And it starts with a recognition that none of them are unprecedented.
The political obstacle of concentrated wealth was faced and partially overcome during the Progressive Era in the United States and the postwar social democratic settlement in Europe. The ideological obstacle of market fundamentalism was challenged and partially overturned by the Keynesian revolution of the 1930s and '40s. The coordination obstacle of international cooperation was addressed, however imperfectly, by the postwar institutional order: the United Nations, the Bretton Woods system, the European Coal and Steel Community that grew into the EU. The obstacle of industry resistance has been overcome repeatedly, from railway safety to pharmaceutical regulation to environmental protection. And the obstacle of public cynicism has been overcome every time a social movement successfully mobilized people around a vision of a better future.
The playbook is there. It's in the history books and the policy research. It's in the lived experience of countries that have already built more humane economic systems than the Anglo-American default. You can see it in places like Mercer County and Beaufort County, where the GMI studies are already gathering data on what happens when you actually give people a floor to stand on.
Political will isn't a resource that exists in fixed supply. It's something that gets created through argument, organizing, coalition-building, and the slow, grinding work of changing what people believe is possible. Bregman was right about this: the Overton window isn't a natural phenomenon. It's a political construction, and it can be moved.
A 10-year calendar
2025-2027: Foundation. Pass legislation requiring algorithmic transparency and accountability for high-risk AI systems. Establish public AI research institutes with serious funding. Scale the Atwood-GiveDirectly rural GMI model to additional counties, building the evidence base. Begin the political conversation about universal basic services by proposing pilot programmes in sympathetic jurisdictions.
2027-2029: Construction. Implement universal basic services pilots. Roll out portable benefits in at least one major economy. Scale up public AI infrastructure to provide competitive open-source alternatives to private models. Expand rural GMI from individual counties to regional programmes, using accumulated data to refine the model. Begin international negotiations on AI governance standards.
2029-2031: Expansion. Expand universal basic services from pilots to national programmes. Introduce guaranteed minimum income legislation building on five years of study data. Massively increase public investment in human-centred services: healthcare, education, elder care, community infrastructure.
2031-2033: Integration. Link the components into a coherent system. Ensure that the social safety net, portable benefits, GMI, and AI governance framework work together to provide real security during economic transitions. Shift from unemployment insurance to transition insurance, recognizing that job changes will be frequent and that people need support during transitions as much as during unemployment.
2033-2035: Consolidation. Lock in the gains. Make the new institutions as politically durable as possible by building broad public support and embedding them in legal frameworks that are difficult to reverse. Evaluate what's working and adjust what isn't.
This is an aggressive timeline. It assumes a level of political ambition and institutional capacity that may be optimistic. But it's not fantasy. The postwar welfare state was built in roughly this timeframe. The New Deal transformed American government in under a decade. West Berlin went from 75 million cubic metres of rubble to a prosperous, functioning city in less than 15 years. When the political will exists and the institutional support is there, the speed of transformation can be shocking.
Staying gold in the age of AI
There's a passage in S.E. Hinton's The Outsiders where Johnny Cade tells Ponyboy Curtis to "stay gold," referencing Robert Frost's poem "Nothing Gold Can Stay." The poem is about the inevitability of loss, about how the first green of spring is also its most beautiful and most fragile. It's a melancholy poem, and in context a melancholy injunction: stay good, stay kind, stay open to beauty, even though the world will do its best to harden you.
Atwood's use of "stay gold" carried that bittersweet quality but pushed it further. His argument was that the American Dream isn't an individual achievement. It's incomplete until it's shared. The act of sharing, of deliberately extending economic security and opportunity to people who've been locked out, is what makes the dream real. The GMI study data consistently back this up: when you give people economic security, they share it. They invest in their families and their neighbours. The network effect of opportunity turns out to be vastly more powerful than the individual effect.
Apply this to the AI transition and the implications are hard to overstate. If you build the institutions that share the gains, if you create guaranteed income floors and universal services and portable benefits and democratic governance, then AI becomes a tool for broadly shared prosperity rather than a mechanism for further concentration. It becomes the platform on which a million new businesses get built, because people have the security to take the risk. The technology doesn't determine the outcome. The institutions do.
The accelerationists believe that technological progress will automatically produce good outcomes and that any attempt to steer or constrain it is futile. The doomers believe that bad outcomes are inevitable and that any attempt to prevent them is naive. The uncomfortable middle ground is where the real work happens: the future will be determined by human choices, and those choices will be difficult and contested, but they are ours to make.
The case for stubborn optimism
I want to end with something that might sound naive, and I'm okay with that.
The case for optimism about the AI transition is not that everything will be fine. It won't be, not automatically. I believe we'll find equilibrium regardless. The economy is adaptive, and people are resourceful, and new forms of work will emerge in ways none of us can predict. The case for optimism is that we don't have to settle for whatever equilibrium the market produces on its own.
We have, at this specific moment in history, an unusual combination of factors that makes it possible to actively shape the landing. The economic resources to fund a generous social contract exist. The historical evidence that safety nets and public institutions work is extensive and consistent. We know how to build AI governance systems that are actually enforceable. Public awareness that the current trajectory is unsustainable is growing. And people like Atwood are already doing the work, already running the studies, already building the evidence base for guaranteed minimum income in the communities that need it most.
What we lack is the political organization to translate awareness into action, and the imaginative confidence to believe that alternatives are possible. These are real deficits. But they're deficits that have been overcome before, in circumstances far more dire than our own.
The Trümmerfrauen who cleared Berlin's rubble by hand didn't have a 10-year plan. They had buckets and wheelbarrows and the determination to rebuild what had been destroyed. The Marshall Plan gave them institutional support, and the political commitment of the Western Allies gave them security, but the basic act was human: people deciding that the rubble was not the end of the story.
The doomers want you to believe that the game is already over. The accelerationists want you to believe that you don't need to play. Both positions are comforting in their way, because both relieve you of the burden of actually doing something.
A serious 10-year roadmap offers no such comfort.
The future is undetermined, the stakes are enormous, and the work is ours to do.
That's the most optimistic thing I can imagine.
The future is still up for grabs.
So grab it.