AI Skeptics, Generalists, and the Precision-Accuracy Paradox in Software Teams
Published on 13.01.2026
TLDR: The teams succeeding with AI aren't the believers or the skeptics; they're the ones who marry critical thinking with ambitious experimentation. Plus, understanding whether your team is "precisely inaccurate" or "imprecisely accurate" might be the diagnostic you've been missing.
There's a fascinating tension in how engineering teams approach AI today. Birgitta Böckeler, who leads AI-assisted software delivery at ThoughtWorks, has been observing this firsthand, and her insights cut through the noise of the AI hype cycle.
The teams winning with AI aren't sitting in either the "believer" or "skeptic" camp. They're doing something harder—they're producing high-quality thinking about AI regardless of which direction it takes them. If you've got skeptics on your team who raise legitimate concerns (not just reflexive "AI bad" responses), you should actually be rewarding them. Someone needs to play devil's advocate, and that friction is healthy for genuine exploration.
Here's the thing that most organizations get wrong: they try to homogenize their teams' attitudes toward AI. Either everyone needs to be excited, or the skeptics need to "get with the program." But the real magic happens when you have both perspectives engaged in honest dialogue. Critical thinking married with ambition and curiosity—that's the formula.
Now, let's talk about something Pat Kua surfaced at the LDX3 conference: the distinction between precision and accuracy in software engineering. Accuracy means you're building the right thing—it meets actual business needs. Precision means you're building it consistently—on time, with good quality, predictably.
This matters because each dimension has its own failure mode. You might have a team that's precisely inaccurate: they ship like clockwork, their processes are tight, but they're cranking out features nobody actually needs. They're detached from customers, obsessed with technical elegance over business value. Conversely, imprecisely accurate teams know exactly where the value lies but can't execute consistently: good plans, missed deadlines, releases that catch fire.
For architects and team leads, this framework gives you a diagnostic lens. Before you throw process improvements or customer research at your team, figure out which failure mode you're actually fighting. The treatments are completely different.
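To make the diagnostic concrete, here's a toy sketch in Python. The two signals (on-time delivery rate as a proxy for precision, feature adoption rate as a proxy for accuracy) and the 0.7 thresholds are illustrative assumptions of mine, not metrics from the talk.

```python
# Toy diagnostic: place a team on the precision (consistent delivery)
# vs accuracy (building the right thing) grid.
# Metric names and the 0.7 thresholds are illustrative assumptions.

def classify_team(on_time_delivery_rate: float, feature_adoption_rate: float) -> str:
    """Return a rough label for a team's failure mode.

    on_time_delivery_rate: share of planned releases shipped on schedule (proxy for precision).
    feature_adoption_rate: share of shipped features customers actually use (proxy for accuracy).
    """
    precise = on_time_delivery_rate >= 0.7
    accurate = feature_adoption_rate >= 0.7

    if precise and accurate:
        return "precise and accurate: keep doing what you're doing"
    if precise and not accurate:
        return "precisely inaccurate: invest in customer research, not more process"
    if accurate and not precise:
        return "imprecisely accurate: invest in delivery discipline, not more discovery"
    return "neither: fix delivery and discovery together, starting with the cheaper lever"


if __name__ == "__main__":
    # Example: ships like clockwork, but few shipped features get used.
    print(classify_team(on_time_delivery_rate=0.9, feature_adoption_rate=0.3))
```

The thresholds aren't the point; the two middle branches are. They make explicit that the two failure modes call for opposite treatments.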
And here's where generalists enter the picture. There's this narrative that people become generalists by accident—career twists, opportunity hopping. But observation suggests otherwise. Most generalists are deliberately motivated by impact and value creation rather than craft for its own sake.
This has profound implications for how we think about AI adoption. People who value themselves by the sophistication of the software they build will feel threatened by AI. People who value themselves by the outcomes they create for customers and businesses will embrace AI as a force multiplier. The generalist mindset, focused on impact over implementation, positions you to adopt AI without reservation.
For organizations evaluating their AI readiness, this might be the real assessment: are your people attached to their craft, or attached to their impact? Because AI doesn't care about your elegant code. It cares about getting things done.
Key takeaways:
- Reward skeptics who provide substantive critique—high-quality thinking in any direction advances understanding
- Diagnose whether your team is precisely inaccurate (building the wrong things well) or imprecisely accurate (building the right things inconsistently)
- Generalists tend to focus on impact over craft, which positions them better for AI adoption
- The attachment to sophisticated implementation, rather than outcomes, creates AI resistance
Tradeoffs:
- Embracing AI skeptics slows initial adoption but improves long-term decision quality
- A generalist focus on impact makes AI easier to embrace but may sacrifice deep technical mastery