Will AI Eat the World in 2026? A Reality Check on Hype vs Infrastructure

Published on 25.11.2025

TL;DR: Despite the hype that AI will transform everything, the current reality shows massive infrastructure investment outpacing actual transformative applications. Michael Burry's analysis of hyperscaler GPU depreciation (an estimated 2-3 year useful life versus the longer periods reported on balance sheets) suggests the AI infrastructure build-out may be creating bubble conditions.

Marc Andreessen's 2011 proclamation that "Software is Eating the World" became prophetic. His 2023 attempt to make the same case for AI hasn't yet materialized in the same way, and that gap between expectation and reality deserves serious examination.

The piece raises an uncomfortable question that too few in the tech industry are willing to ask: Is AI actually eating the world, or is it Nvidia, Google, datacenter operators, TSMC, and ASML that are the real story here? The foundational infrastructure companies are experiencing unprecedented demand, but the transformative AI applications that would justify this infrastructure investment remain more promise than reality. The "AI 2027" crowd with their predictions of imminent superintelligence are beginning to look increasingly detached from observable trends.

Michael Burry—of "The Big Short" fame—has turned his analytical eye to hyperscaler accounting practices around AI infrastructure. His thesis centers on depreciation: he estimates the true useful life of state-of-the-art GPUs in hyperscaler datacenters at just 2-3 years, not the longer durations being listed on balance sheets. This matters enormously. If hyperscalers are overstating the useful life of their AI hardware, they're understating their true costs and overstating their profits. When reality catches up with accounting, corrections tend to be painful.
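The accounting mechanics here are simple straight-line arithmetic, and a small sketch makes the stakes concrete. The figures below are entirely hypothetical (the $10B capex and the 6-year reported life are illustrative assumptions, not numbers from the article or any filing); only the 2-3 year estimate comes from Burry's thesis.

```python
# Straight-line depreciation: how stretching GPU useful life
# flatters reported profits. All dollar figures are hypothetical.

def annual_depreciation(capex: float, useful_life_years: float) -> float:
    """Straight-line annual depreciation expense."""
    return capex / useful_life_years

capex = 10_000_000_000  # hypothetical $10B GPU spend

reported = annual_depreciation(capex, 6)  # assumed 6-year life on the books
burry = annual_depreciation(capex, 3)     # Burry's ~3-year estimate

# Every dollar of understated depreciation is a dollar of overstated profit.
overstated = burry - reported
print(f"Reported annual expense:      ${reported / 1e9:.2f}B")
print(f"Expense at a 3-year life:     ${burry / 1e9:.2f}B")
print(f"Annual pre-tax profit inflated by: ${overstated / 1e9:.2f}B")
```

On these assumed numbers, doubling the depreciation schedule halves the annual expense, so pre-tax profit is overstated by roughly $1.7B per year until the hardware is retired or written down.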

The article identifies what it calls "the cardinal sign of a bubble: supply-side gluttony." We're seeing insatiable momentum in datacenter buildout—billions upon billions flowing into AI infrastructure. But supply-side buildup without corresponding demand-side transformation is precisely how bubbles form. The question isn't whether AI will eventually be transformative (it will), but whether the current investment wave is getting ahead of actual utility.

For architects and technical leaders, this analysis should prompt hard questions about AI strategy. The convergence of affordability crises, potential debt crises, and massive AI infrastructure commitments creates systemic risk. Organizations planning multi-year AI initiatives need contingency thinking for scenarios where the current investment environment corrects. What happens to cloud AI pricing if hyperscalers face margin pressure? What happens to AI talent costs if the bubble deflates? The prudent approach is to build AI capabilities that deliver measurable value today while maintaining flexibility for an uncertain infrastructure landscape.

What the article doesn't fully address is the timeline question. AI may indeed "eat the world in the 2030s," as it suggests, "probably in ways many people don't yet anticipate." The history of transformative technologies shows that impact timing is notoriously difficult to predict—often slower than optimists expect, then faster than pessimists imagine. The current AI infrastructure investment may look prescient in hindsight, or it may look like classic bubble excess. Either way, the gap between infrastructure investment and realized transformation is real and growing.

Key takeaways:

  • The AI infrastructure build-out is massively outpacing demonstrated transformative applications in production
  • Michael Burry's depreciation thesis suggests hyperscalers may be overstating GPU useful life (2-3 years vs. reported longer periods)
  • Supply-side gluttony in datacenter capacity is a classic bubble indicator worth monitoring
  • Political and economic factors (affordability crisis, debt concerns) will increasingly intersect with AI infrastructure decisions
  • The "AI eating the world" thesis may prove true eventually, but current timelines appear optimistic

Tradeoffs:

  • Aggressive AI infrastructure investment gains first-mover positioning but risks significant capital exposure if demand doesn't materialize
  • Extended GPU depreciation schedules flatter short-term profits but sacrifice accuracy and create future write-down risk
  • Waiting for AI maturity gains capital efficiency but sacrifices competitive positioning if transformation accelerates

Link: Will AI eat the world in 2026?
