2025 AI Recap: DeepSeek Broke Scaling, Anthropic Won Coding, China Dominated Open Source
Published on 31.12.2025
2025 Recap: The Year the Old Rules Broke
TLDR: 2025 shattered AI's comfortable assumptions. DeepSeek proved efficiency could match brute-force compute, Anthropic beat Microsoft and Google in enterprise despite their distribution advantages, and China shipped nine competitive open-source models while Meta managed only two.
Summary:
This is one of the most comprehensive AI retrospectives of 2025, and it deserves attention. The piece, compiled with insights from VC partner Jess Leão, documents how every major AI assumption got stress-tested this year.
The DeepSeek moment in January wasn't just a news story - it was an inflection point. Their R1 model reportedly cost under $6 million to train, compared with GPT-4's estimated $100+ million. Technical innovations like Multi-head Latent Attention (reducing the KV cache by 93%) and a mixture-of-experts architecture proved that architectural efficiency could substitute for brute-force compute. The market response was brutal: $1 trillion wiped from U.S. tech market cap in a single day, with NVIDIA losing $593 billion - the largest single-day market cap loss in history.
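To see why latent compression of the KV cache matters, a back-of-envelope comparison helps: standard multi-head attention caches full keys and values for every head in every layer, while a latent-attention scheme caches one shared compressed vector per layer. The dimensions below are illustrative placeholders, not DeepSeek's actual configuration, so the exact percentage is only a sketch of the effect.

```python
# Per-token KV-cache size: standard multi-head attention vs. a
# latent-compressed cache (the idea behind Multi-head Latent Attention).
# All model dimensions here are hypothetical, chosen for illustration.

def mha_kv_elems(n_layers: int, n_heads: int, head_dim: int) -> int:
    """Elements cached per token with standard MHA: keys + values per head."""
    return 2 * n_layers * n_heads * head_dim

def latent_kv_elems(n_layers: int, latent_dim: int) -> int:
    """Elements cached per token when K/V are compressed into one latent vector."""
    return n_layers * latent_dim

n_layers, n_heads, head_dim = 60, 128, 128  # hypothetical model shape
latent_dim = 576                            # hypothetical compressed width

mha = mha_kv_elems(n_layers, n_heads, head_dim)
mla = latent_kv_elems(n_layers, latent_dim)
print(f"cache shrinks by {1 - mla / mha:.1%}")  # ~98% with these toy numbers
```

A smaller cache per token means longer contexts and larger batch sizes fit on the same GPU memory, which is where the serving-cost savings come from.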
AI-assisted coding emerged as the undisputed killer app. The numbers are staggering: Cursor hit a $29.3 billion valuation, Claude Code reached $1 billion ARR (Anthropic's fastest product ever to reach that milestone), and the combined coding-agent market exceeded $4 billion in annual recurring revenue. Andrej Karpathy coined "vibe coding" in February; by December, Collins Dictionary named it Word of the Year. Y Combinator reported that 25% of its Winter 2025 batch had codebases that were 95% AI-generated.
But the most surprising story is Anthropic's enterprise victory. Despite Microsoft's Copilot in every Office app, Google's Gemini across its ecosystem, and OpenAI's consumer mindshare, Anthropic grew its enterprise share from 12% to 32% while OpenAI's fell from 50% to 25%. The lesson: when the gap between "works okay" and "works great" is large enough, users switch despite friction. Distribution didn't beat product quality.
China's open-source dominance is the strategic story that demands attention. Count the competitive model releases: nine from Chinese labs in 2025, two from Meta (with Behemoth still MIA). Alibaba's Qwen team shipped monthly updates like clockwork. When Meta's Llama 4 launched in April, it was... fine. Solid. But not the American answer everyone expected. By fall, reports emerged that Meta was developing "Avocado" - a closed-source model. After years as the open-source champion, Meta is hedging. Yann LeCun is departing.
The Model Context Protocol story shows how markets mature. Started as an Anthropic side project in November 2024, by December 2025 it had 97 million monthly SDK downloads and was adopted by ChatGPT, Gemini, Microsoft Copilot, VS Code, and Cursor. The AAIF launch brought together Anthropic, OpenAI, and Block as co-founders. When rivals standardize together, interoperability has beaten lock-in.
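Part of why MCP spread so fast is how little it asks of adopters: messages are plain JSON-RPC 2.0, so any client can interoperate by speaking a handful of standard methods. A minimal sketch of the wire format for asking a server which tools it exposes (`tools/list`) is shown below; the tool in the example response is hypothetical, invented here to show the shape of the payload.

```python
import json

# MCP rides on JSON-RPC 2.0. A client asks a server for its tools with
# the "tools/list" method; the server replies with a list of tool
# descriptors. The tool below ("search_docs") is made up for illustration.

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
wire = json.dumps(request)  # what actually goes over stdio or HTTP

# Hand-written example of the response shape (values are illustrative):
response = json.loads("""
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search_docs",
        "description": "Full-text search over project docs",
        "inputSchema": {
          "type": "object",
          "properties": {"query": {"type": "string"}},
          "required": ["query"]
        }
      }
    ]
  }
}
""")

for tool in response["result"]["tools"]:
    print(tool["name"], "-", tool["description"])
```

Because every tool advertises a JSON Schema for its inputs, any MCP-speaking client can discover and call tools it has never seen before, which is exactly the interoperability-over-lock-in dynamic the article describes.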
For architects and teams watching AI strategy, the 2025 lessons are clear. Distribution doesn't guarantee victory when product-quality differences are significant. Efficiency innovations can close the gap with brute-force spending. Standards matter, and the "walled garden" phase is ending. The companies that built the best products for specific workflows won, even against massive distribution advantages.
Key takeaways:
- Anthropic grew from 12% to 32% enterprise share while OpenAI dropped from 50% to 25%
- DeepSeek's $6M training cost proved architectural efficiency can match brute-force compute
- AI coding tools reached $4B+ combined ARR with Claude Code hitting $1B fastest in Anthropic history
- China shipped 9 competitive open-source models in 2025 vs Meta's 2
Tradeoffs:
- Efficient architectures match raw compute but require deeper research investment
- Open-source models accelerate ecosystem but raise IP and security concerns
Link: 2025 Recap: The Year the Old Rules Broke
This article was generated from newsletter content. Topics covered may reflect the source material's focus and editorial perspective.