Published on 24.02.2026
TLDR: At The Pragmatic Summit and Future of Software Development events, industry leaders revealed that AI adoption is happening simultaneously across all company sizes—a departure from traditional tech adoption curves. New data shows 92% of developers use AI coding tools monthly, with organizations experiencing either 2x more incidents or 50% fewer, depending on their foundational engineering practices.
What struck me most about these events was the consensus among veterans like Martin Fowler and Kent Beck—who've collectively spent over 100 years in software engineering—that we're experiencing change faster than ever before in their careers. This isn't hyperbole; this is from engineers who lived through the PC era, the internet revolution, mobile computing, and cloud infrastructure shifts.
The traditional adoption curve we've seen with mobile, cloud, and even cryptocurrency doesn't apply here. Historically, indie developers experimented first, then startups jumped in a few years later, followed eventually by large enterprises. Native mobile, for instance, took six years before traditional companies like Ryanair caught up. But with AI, something unprecedented is happening: everyone is moving simultaneously.
I encountered embedded engineers writing Assembly and C code—people you'd expect to be last in line for AI adoption—who now have one-third to one-half of their low-level code generated by AI agents. Even more striking, large traditional companies with thousands of developers, including those in agriculture, industrial products, and finance, aren't waiting to see what happens. They're all implementing AI strategies right now. This concurrent adoption across all organizational sizes is genuinely remarkable and suggests we're in a fundamentally different phase of technology change.
Laura Tacho, former CTO of DX and distinguished engineer, presented exclusive data at The Pragmatic Summit that revealed some uncomfortable truths about AI adoption. Ninety-two percent of developers use AI coding assistants at least once per month—that's near-universal adoption. The self-reported productivity gains are around four hours per week per developer, though this varies wildly depending on how teams implement these tools.
Here's where it gets interesting and somewhat troubling: organizations are experiencing wildly divergent results. Some companies are seeing twice as many customer-facing incidents since adopting AI, while others are experiencing fifty percent fewer incidents. This isn't random noise. It reveals something profound: AI is a multiplier. It amplifies what was already there. Healthy, well-structured engineering organizations with solid practices are accelerating further—getting faster, maintaining higher code quality, and experiencing fewer production issues. Dysfunctional organizations are becoming more dysfunctional, just faster.
This distinction matters enormously for leaders implementing AI. The MIT study from July 2025 called this "The Gen AI Divide"—the steep drop-off between pilot projects and production reality, and even steeper between production improvements and actual profit. Companies can't just adopt tools and expect magic. They need healthy human and systems-level foundations first. Laura's closing statement really resonated: "Stay grounded, stay skeptical, stay human. Most of all, stay pragmatic."
Thomas Dohmke, founder of Entire (an AI-native GitHub) and former GitHub CEO, and Rajeev Rajan, CTO of Atlassian, shared fascinating insights about building modern engineering teams. Some teams at Atlassian have engineers writing essentially zero lines of code, instead orchestrating agents to do the work. The outcome? Teams aren't necessarily getting smaller, but they're producing two to five times more output and reporting gains in creative productivity as well.
But the AI-native narrative can be misleading, and Thomas was refreshingly honest about it: most of his day as a startup founder involves dealing with mundane things like HR systems and administrative tasks that agents haven't solved. AI-native doesn't mean everything is automated. It means strategic orchestration of AI agents for specific high-value problems.
What genuinely excites me about agents for distributed teams is something Rajeev pointed out: they've become like sparring partners. When you're stuck at 2 AM debugging code alone, you can now ask an agent to explain something or solve it outright. For remote teams, agents have effectively recreated the in-person collaboration advantage. Thomas has code review agents, coding agents, brainstorming agents, research agents—each specialized for different problems.
The most hilarious moment was when Thomas pointed out that Rajeev, CTO of a major software company, had to buy his own laptop with his own money to run modern AI tools because corporate IT was too restrictive. That detail captures an uncomfortable reality: large, "agile" organizations that built the tools for agile development can't move as fast as a startup founder with a laptop. The infrastructure and policies designed for legacy systems are becoming genuine competitive disadvantages.
Engineering leaders talk about this behind closed doors constantly: mid-career engineers are being left behind by the AI wave. New graduates are more productive with AI tools immediately—they're learning coding alongside agents and don't have years of manual coding habits to unlearn. Senior engineers have experience and architecture knowledge that makes them invaluable. But mid-level engineers, those with five to ten years of experience, face a genuine career risk if they don't rapidly upskill.
The solution isn't revolutionary: they need to spend time playing with agents as side projects, building intuition for when to use them, how to guide them, and how to validate their outputs. CTOs and CIOs at major banks have realized that you can give an agent a task in the evening and check in the next morning—this isn't a theoretical advantage, it's changing their entire approach to leadership and technical work.
This question bothered me when I first heard it: if AI is generating most code now, is there any point to refactoring legacy systems? The answer is "absolutely, yes." But the nature of refactoring changes. Instead of manually rewriting code, teams can now ask agents to refactor large swaths of systems, validate the changes, and iterate. The focus shifts from manual labor to defining good refactoring targets and validating quality.
What concerns me is whether organizations will actually invest in this. Refactoring has always been an "unsexy" activity that leaders deprioritize. With AI, it becomes feasible to refactor at scale, but it requires discipline to actually do it rather than just shipping new features.
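One established way to make the "validate the changes" step concrete is a characterization test: record the legacy code's current behavior before handing it to an agent, then require the refactored version to reproduce it. A minimal sketch, where `parse_price` is a hypothetical legacy function used purely for illustration:

```python
def parse_price(raw):
    # Hypothetical legacy function an agent might be asked to refactor.
    # Converts strings like "$1,299.99" into an integer price in cents.
    raw = raw.strip().replace("$", "").replace(",", "")
    return int(round(float(raw) * 100))

# Characterization cases: pin down today's behavior, quirks included,
# before any refactoring happens.
CASES = {
    "$1,299.99": 129999,
    "  42 ": 4200,
    "0.1": 10,
}

def matches_legacy_behavior(fn):
    """Return True if fn reproduces the recorded legacy behavior."""
    return all(fn(raw) == expected for raw, expected in CASES.items())

# Must pass before the agent touches the code, and again afterwards.
assert matches_legacy_behavior(parse_price)
```

The point is that the human's job shifts to curating `CASES` well enough that "passes the characterization tests" is a trustworthy proxy for "behavior preserved," while the agent does the mechanical rewriting.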
On the twenty-fifth anniversary of the Agile Manifesto, with the group assembled near the location where it was written, the conversation turned toward what's next. One surprising trend: Extreme Programming practices that predate Agile are making a comeback. Pair programming, continuous integration, test-driven development—the practices that fell out of fashion as Agile became corporate—are finding renewed relevance with AI agents.
Think about it: TDD becomes more valuable when an agent is generating code. Unit tests become your specification. Continuous integration becomes critical when you have agents deploying code. These practices, dismissed as too rigorous for modern teams, suddenly make sense again.
The most profound statement from the event came from Kent Beck, Laura Tacho, and Steve Yegge: "Organizations are constrained by human and systems-level problems. We remain skeptical of the promise of any technology to improve organizational performance without first addressing human and systems-level constraints. We remain skeptical and we remain human."
This is what I think organizations are avoiding thinking about: technology adoption is fundamentally a people problem, not a tool problem. You can deploy Claude Code, GitHub Copilot, and agent frameworks everywhere, but if your teams lack psychological safety, if your decision-making processes are broken, if your architecture is too coupled, if your culture doesn't embrace experimentation—AI becomes just another way to amplify existing problems faster.
The missing piece in most AI adoption conversations is genuine organizational change. Not rebranding existing processes as "AI-native," but actually rethinking how engineers work together, how leaders empower teams, and how organizations measure success beyond velocity metrics.
Link: The Future of Software Engineering with AI: Six Predictions