Published on 28.01.2026
TLDR: Research on 667 people found that domain expertise doesn't predict AI performance - some average performers saw massive gains while top experts barely improved. The differentiator is "Theory of Mind": the ability to set context, fill in gaps the AI doesn't know, and treat bad outputs as diagnostic information.
OpenAI recently released a report called "Ending the Capability Overhang" - their term for the gap between what AI can do and what most people actually get from it. Their data shows power users extract six to eight times as much value from the same AI tools as typical users. Same subscription, same model, wildly different results. This isn't about using AI more. It's about using it differently.
The research that should make you uncomfortable comes from Northeastern University and UCL. They tested 667 people, measuring performance alone and then performance with AI assistance. The finding: being good at your job didn't make people good at working with AI. Some average performers saw massive gains. Some top experts barely improved at all. Years of experience, advanced degrees, deep domain knowledge - none of it predicted who would benefit most.
The researchers identified one habit separating high-gainers from everyone else: Theory of Mind - the ability to step into another perspective. In practice, this manifested as three behaviors:
They set the scene. Before asking anything, they gave background. Who they are, what they're working on, who the output is for. They didn't assume the AI knew their context.
They filled in gaps. They asked themselves: what do I know that the AI doesn't? Then they included it. Company context, internal jargon, what they'd already tried, constraints that were obvious to them but invisible to the machine.
They treated bad answers as information. When the AI missed the mark, they didn't just rephrase and retry. They figured out why it missed. Did it misunderstand the goal? Lack a constraint? Then they adjusted their approach based on that diagnosis.
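To make the three behaviors concrete, here is a minimal Python sketch of a prompt scaffold that bakes them in. The structure mirrors the behaviors above, but the names (`Request`, `build_prompt`) and the example values are illustrative assumptions, not anything from the study or the article.

```python
# Hedged sketch: a prompt scaffold for the three behaviors.
# All names and example values are illustrative, not from the article.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Request:
    background: str                     # set the scene: who you are, what you're working on, who it's for
    hidden_context: list[str] = field(default_factory=list)  # fill in gaps: what you know that the AI doesn't
    task: str = ""
    last_miss: str | None = None        # treat bad answers as information: why the previous attempt missed

def build_prompt(req: Request) -> str:
    parts = [f"Background: {req.background}"]
    if req.hidden_context:
        parts.append("Context you can't infer on your own:")
        parts.extend(f"- {item}" for item in req.hidden_context)
    if req.last_miss:
        # State the diagnosis explicitly instead of silently rephrasing and retrying.
        parts.append(f"Your previous answer missed because: {req.last_miss}")
    parts.append(f"Task: {req.task}")
    return "\n".join(parts)

print(build_prompt(Request(
    background="Staff engineer drafting an architecture decision record for a payments team.",
    hidden_context=["We are locked into PostgreSQL 14", "Latency budget is 50 ms p99"],
    task="Propose two options for idempotent retry handling and compare them.",
    last_miss="It assumed we could switch databases, which is off the table.",
)))
```

The `last_miss` field is the third behavior in code form: when a draft misses, you record why it missed and feed that diagnosis back in, rather than rewording the same underspecified request.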
The article introduces a useful framing: your "Human API." An API is how one system talks to another. Your Human API is how well you translate what's in your head - your expertise, context, judgment - into something AI can work with. Your expertise is table stakes. The multiplier is how clearly you communicate with the machine.
For engineering leaders and architects, this has implications for how you think about AI adoption in your teams. Training people on AI tools might matter less than training them on communication patterns. The article boils this down to a 10-second protocol before any important AI request: Context (what am I holding that the AI doesn't have?), Needs (what does the AI need to know to give useful output?), and Verification (if this misses, what will I check first?).
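The protocol is small enough to run as a checklist. The sketch below turns the three questions into a hypothetical `preflight` helper; the questions come from the article, the function and example answers are assumptions made here for illustration.

```python
# Hedged sketch: the 10-second protocol as a pre-flight checklist.
# The three questions are from the article; the helper itself is illustrative.
PROTOCOL = {
    "context": "What am I holding that the AI doesn't have?",
    "needs": "What does the AI need to know to give useful output?",
    "verification": "If this misses, what will I check first?",
}

def preflight(answers: dict[str, str]) -> list[str]:
    """Return the protocol questions that were left unanswered."""
    return [question for key, question in PROTOCOL.items()
            if not answers.get(key, "").strip()]

gaps = preflight({
    "context": "Internal billing service, migrating off a legacy cron job.",
    "needs": "",  # forgot to state what the model needs to know
    "verification": "Check whether the proposed schedule handles DST changes.",
})
print("Unanswered before sending:", gaps)
```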
The uncomfortable implication: the people who succeed with AI won't necessarily be the smartest or most experienced. They'll be the ones who communicate best with the machine. That's a learnable skill, which is good news for anyone willing to develop it - and a warning for experts who assume their domain knowledge will automatically transfer.
Key takeaways:
- Domain expertise didn't predict who benefited from AI in the 667-person study; communication habits did.
- High-gainers set the scene, supply context the AI can't know, and treat bad outputs as diagnostic information rather than prompts to retry.
- For teams, training on communication patterns may matter more than training on the tools themselves.
Tradeoffs:
- Context-setting and diagnosing misses add a small upfront cost to every request; the research suggests that cost is exactly what separates high-gainers from everyone else.
- Prioritizing communication training over tool training bets that the bottleneck is how people talk to models, not which features they know.
Link: Good at your job but bad at AI?
The content above is AI-generated based on newsletter sources. While I strive for accuracy, please verify critical information from original sources.