Published on 03.03.2026
Citadel macro strategist Frank Flight pushes back hard against the viral "2028 Global Intelligence Crisis" report from Citrini Research, arguing from adoption and labor-market data that AI uptake follows a slow S-curve, not an exponential explosion. The piece makes a compelling case that productivity shocks from AI are disinflationary and growth-enhancing, not job-destroying, and that doomsday labor displacement scenarios require a stack of unrealistic assumptions to hold together.
So here is something genuinely fascinating that has been playing out in the macro-AI discourse over the past couple of weeks. You had the Citrini Research piece -- "The 2028 Global Intelligence Crisis" -- go absolutely viral on Substack, painting this dramatic picture of imminent mass displacement from AI. Markets moved. People panicked. And then Citadel Securities, through macro strategist Frank Flight, dropped what amounts to a methodical, data-driven demolition of that entire thesis.
The core of Flight's argument is deceptively simple but incredibly important: recursive technology does not equal recursive adoption. Just because AI systems can theoretically improve themselves does not mean economic deployment follows the same exponential curve. And this is where so many people -- including very smart analysts -- keep getting tripped up. They see the capability curve and mentally map it onto the adoption curve, and those are fundamentally different things.
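The distinction is easy to see in toy form. Here is a minimal sketch, with all growth rates and timescales invented purely for illustration, of an exponential capability curve sitting next to a logistic (S-shaped) adoption curve:

```python
import math

def capability(t, r=0.5):
    # Hypothetical exponential capability curve (r is an invented growth rate)
    return math.exp(r * t)

def adoption(t, k=0.4, t_mid=8.0):
    # Logistic S-curve: slow start, inflection at t_mid, saturation near 100%
    # (k and t_mid are invented parameters, not estimates)
    return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

for t in range(0, 13, 4):
    print(f"year {t:2d}: capability x{capability(t):7.1f}, adoption {adoption(t):5.1%}")
```

Capability can grow hundreds-fold while adoption is still crawling through the flat early stretch of the S-curve; mentally mapping one curve onto the other is exactly the error Flight is calling out.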
Flight points to the St. Louis Fed's Real Time Population Survey data on AI adoption, and the numbers are striking in their stability. If we were truly on the brink of mass labor displacement, you would expect to see an inflection upward in daily AI use for work. Instead, the data is remarkably flat. The unemployment rate sits at 4.28%. Software engineering job postings are actually up 11% year-over-year. This is not what an imminent displacement crisis looks like.
What is particularly sharp about Flight's analysis is the compute cost constraint argument. He frames it as a substitution elasticity problem: AI will only replace human labor when the marginal cost of compute drops below the marginal cost of human labor for a given task. But here is the self-limiting feedback loop that the doomsday crowd rarely addresses -- if automation expands rapidly, demand for compute rises, which pushes up its marginal cost, which creates a natural economic boundary against full substitution. You literally cannot have frictionless, instant replacement when the physical infrastructure has real constraints around energy, semiconductors, and data centers.
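That boundary falls straight out of the arithmetic. A minimal sketch, with every number invented for illustration, of where substitution stops when the marginal cost of compute rises with the automation share:

```python
def compute_cost(share, base=0.4, alpha=2.0):
    # Illustrative: marginal compute cost rises with the automation share as
    # energy, chips, and data-center capacity get bid up (base and alpha are made up)
    return base * (1.0 + alpha * share)

def equilibrium_share(wage=1.0, base=0.4, alpha=2.0):
    # Substitution stops where marginal compute cost meets the wage:
    #   base * (1 + alpha * s) = wage  =>  s = (wage/base - 1) / alpha
    s = (wage / base - 1.0) / alpha
    return max(0.0, min(1.0, s))

print(round(equilibrium_share(), 2))  # 0.75 -- substitution stalls well short of 100%
```

The point is not these particular numbers; it is that any upward-sloping compute cost curve puts the equilibrium automation share strictly below full substitution, which is Flight's boundary in one line of algebra.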
The Keynes parallel is worth lingering on. In his 1930 essay "Economic Possibilities for our Grandchildren," Keynes predicted a fifteen-hour work week within a hundred years, based on projected productivity growth. He was right about the productivity gains but completely wrong about what humans would do with them. We did not work less. We consumed more. We invented entirely new categories of wants. This is the "elasticity of human wants" that Flight references, and it is the single most underappreciated variable in every AI displacement model I have seen.
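The arithmetic behind Keynes's miss is worth a quick sketch (the 2% growth rate and 48-hour interwar work week are rough illustrative assumptions, not his exact figures):

```python
growth = 0.02   # assumed annual productivity growth, for illustration
years = 100
multiple = (1 + growth) ** years      # roughly 7x output per hour worked

hours_1930 = 48.0  # rough interwar full-time week
# Hours needed to hold 1930-level consumption constant at the new productivity:
print(round(hours_1930 / multiple, 1))  # ~6.6 hours -- Keynes's side of the ledger
# Actual outcome: consumption expanded roughly in step, so hours barely fell.
```

Hold consumption fixed and the compounding does imply a single-digit work week; the model only breaks because the demand side refused to hold still. That is the exact failure mode of displacement forecasts that treat today's task list as the full extent of human wants.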
Now, here is where I want to push back on both sides a bit. The AI Supremacy newsletter author frames this as Citadel being "much closer to a realistic position," and I think that is broadly correct, but it also undersells some legitimate near-term disruption concerns. The fact that aggregate labor data looks stable does not mean specific sectors and specific demographics are not getting squeezed. Youth unemployment for the 18-24 cohort is at 8.1%. Entry-level positions in knowledge work are getting compressed. The average hides a lot of variance, and Flight's macro lens -- while sound on the aggregate -- does not fully reckon with the distributional consequences.
What the author of the newsletter is absolutely right about, and what more people need to internalize, is this: a lot of the "technological optimism" we see online is manufactured by those with the most to gain. Venture capital, tech executives, even Wall Street itself have enormous financial incentives to hype AI capabilities beyond what the data supports. When someone tells you AI is about to change everything overnight, your first question should always be: what is their position?
The missing piece in both the Citrini and Citadel analyses -- and the newsletter author touches on this but does not fully develop it -- is the question of what happens at the margin of new company formation. If AI genuinely reduces the engineering cost of starting a company, the economic impact might not show up as displacement at all but as a restructuring of who builds what. The "Alpha cohort" point is interesting: kids born between 2010 and 2024 will be AI natives in a way that current workers simply are not. The real disruption might be generational, not occupational.