Published on 25.02.2026
A viral report by Citrini Research imagines a 2028 economic crisis triggered by rapid AI displacement of white-collar workers, but its speculative foundations and aggressive timelines undermine its credibility. Meanwhile, real data on AI-driven wealth inequality from researcher Jeremy Ney paints a far more grounded and genuinely alarming picture of who stands to lose as the AI revolution unfolds.
In late February 2026, a piece of financial speculation dressed up as macro research went massively viral on X. Citrini Research published "The 2028 Global Intelligence Crisis," a thought experiment framed as a memo from the future describing how agentic AI could break the modern economic engine within two years. The report claims AI will create an "Intelligence Displacement Spiral" where white-collar workers face severe disruption, consumer spending collapses because machines do not buy groceries, and the economy enters a doom loop with no natural brake.
The core thesis runs through several phases. First, AI efficiency triggers margin expansion as companies cut headcount. Then comes "Ghost GDP," where productivity numbers look great on paper but the wealth fails to circulate in the real economy because AI agents do not spend money on housing, food, or services. Then intermediation collapses as AI agents bypass traditional platforms like DoorDash, Visa, and IBM, routing around their fee structures. Finally, the full crisis hits: unemployment at ten percent, S&P 500 drawdown of thirty-eight percent, mortgage market impairment in tech hubs, and cascading defaults in private credit.
The report specifically targeted companies built on what it calls "friction," naming IBM for its COBOL legacy support, payment networks for their transaction fees, SaaS companies whose features could be replicated in-house by coding agents, and delivery platforms whose moat is merely human habit. IBM dropped over thirteen percent in a single day, its worst one-day decline in twenty years. DoorDash CEO Tony Xu responded with what amounted to a validation of the premise, saying the industry needs to "earn the right" to serve AI agents, which only rattled investors further.
But here is where the critical thinking has to kick in. The report came from a fund founded in 2023 by a former paramedic turned entrepreneur, and its arguments read more like AI-generated speculation than rigorous economic analysis. The claims compress decades of structural change into a two-year window. The scenario assumes zero adaptation from humans, firms, governments, or markets. It treats agentic AI capabilities as if they were already mature, when enterprise AI adoption has been painfully slow for years. The report's own authors hedged by calling it a scenario rather than a prediction, and one of them tweeted that he hoped he was wrong. That is not the posture of someone who has done a hundred hours of serious research.
The rebuttal landscape is instructive. Noah Smith dismissed it as a scary bedtime story lacking a coherent macroeconomic model. John Loeber highlighted institutional inertia and the persistent inefficiency of existing software as buffers. Multiple economists pointed out that productivity gains flow somewhere, whether to shareholders, governments, or consumers through lower prices, and that historical technological revolutions have consistently created net new employment over time. The "Ghost GDP" concept falls apart when you consider deflationary abundance, where AI-driven cost reductions effectively raise purchasing power across the economy.
Link: The Case for Dystopian AI
While the Citrini Report trades in speculation, Jeremy Ney of the American Inequality Newsletter provides the kind of grounded, data-driven analysis that actually deserves attention. Ney is a researcher, data scientist, and professor at Columbia Business School who has spent years mapping inequality across the United States, and his findings about AI's distributional effects are far more concerning than any fictional 2028 scenario precisely because they are already happening.
The numbers are stark. During the AI surge of the past two years, the top ten percent of households saw their wealth increase by five trillion dollars in a single quarter, while the bottom fifty percent gained just one hundred and fifty billion, a thirty-three-fold gap. The bottom half of Americans own just one percent of all U.S. stocks, meaning the gains accruing to Nvidia or Microsoft never flow through to millions of households. Corporate profits sit at some of their highest levels in eighty years while American workers take home their smallest share of national income since 1947.
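The headline ratio is easy to sanity-check from the figures cited above (all numbers come from the text itself; the variable names are illustrative):

```python
# Sanity-check of the wealth-gap arithmetic cited in the text.
top10_gain = 5_000_000_000_000    # top 10% of households, one quarter
bottom50_gain = 150_000_000_000   # bottom 50% of households, same quarter

ratio = top10_gain / bottom50_gain
print(f"Top-decile gains were roughly {ratio:.0f}x the bottom half's")  # ~33x
```

Five trillion against one hundred and fifty billion does indeed work out to roughly thirty-three to one.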
Ney identifies three categories of workers facing the biggest threats. Recent college graduates are experiencing one of the worst job markets in decades, with Stanford economists finding a thirteen percent decline in employment for young workers in AI-exposed jobs since the advent of ChatGPT. Older workers lack the adaptive capacity to retrain or relocate, though paradoxically they are seeing an uptick in hiring because AI currently favors experience over entry-level skills. And workers with easily replaceable skills, some six point one million people including secretaries, cashiers, and customer service representatives, face direct displacement with few transferable skills and limited resources to upskill.
The human stories behind the data are devastating. Brian Groh, a freelance writer who lost his work to AI, moved from DC back to Ohio, took up tree-cutting on a chatbot's advice, injured his back, and nearly fell into the same opioid trap that has claimed so many lives in that region. This is what displacement actually looks like, not a clean macro chart but real people making desperate choices.
The policy gap is equally alarming. Retraining all six point one million at-risk low-skill workers in one year would cost an estimated one hundred and fifty-one billion dollars, fifty percent more than the annual SNAP budget. The United States used to invest heavily in job training through programs signed by Kennedy and Nixon, but Reagan slashed federal training funding from ten billion to two billion in just two years. That infrastructure never recovered. Former Commerce Secretary Gina Raimondo, after meeting with billionaires and tech CEOs, warned plainly: "It's the end of America as we know it if we don't use this moment to do things differently."
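The retraining figure implies a striking per-worker cost, which a quick back-of-envelope calculation makes concrete (figures from the text; the implied SNAP budget is derived from the "fifty percent more" comparison, not stated directly):

```python
# Back-of-envelope check on the retraining cost claim in the text.
workers_at_risk = 6_100_000
total_cost = 151_000_000_000            # estimated one-year retraining cost

per_worker = total_cost / workers_at_risk
implied_snap_budget = total_cost / 1.5  # "fifty percent more than SNAP"
print(f"~${per_worker:,.0f} per worker")                       # ~$24,754
print(f"implied annual SNAP budget ~${implied_snap_budget/1e9:.0f}B")
```

About twenty-five thousand dollars per worker, in a single year, against a benefits program that itself runs around one hundred billion dollars annually.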
There is a meta-story here that deserves as much scrutiny as the economic arguments. The Citrini Report accumulated over twenty-six million views on X and five thousand likes on Substack, moved actual stock prices, and generated coverage from Bloomberg. Yet its foundations are, by any serious analytical standard, remarkably thin. It reads like a Deep Research output lightly edited by humans, its arguments lacking grounding in real business operations or capital market mechanics.
This points to a deeper problem in the information ecosystem around AI. Venture capitalists boost AI optimism narratives because they have financial positions in AI companies heading toward IPOs. Funds publish provocative scenarios that exaggerate AI capabilities because going viral is more profitable than being rigorous. Newsletter operators optimize for subscription revenue, which incentivizes clickbait over honest analysis. The social contract between content creators and audiences is fracturing.
The timing is not coincidental. Anthropic and OpenAI are both on IPO countdowns. Every piece of content that amplifies AI's transformative power, whether optimistic or pessimistic, serves to inflate the perceived market opportunity. A doom scenario that says AI will destroy the economy by 2028 and an optimism scenario that says AI will create unprecedented productivity both serve the same master: the narrative that AI is so powerful you cannot ignore it, which is exactly what you want potential IPO investors to believe.
Meanwhile, the real story is slower and less dramatic. Enterprise AI adoption remains painfully slow. Agentic AI was a huge disappointment in 2025, and 2026 does not look fundamentally different even as plugins become more accessible. The labor market has deteriorated, but primarily due to the Trump Administration's policies on tariffs and immigration, early retirements, and a lack of domestic talent, not because of AI displacement. The gap between AI discourse and AI reality has perhaps never been wider.