Published on 06.03.2026
TL;DR: Executive coach Richard Hughes-Jones, with 20+ years of experience, discusses how tech leaders can navigate uncertainty by moving beyond rigid frameworks, blending inner work with practical experimentation, and leveraging AI as a coaching companion rather than a replacement.
Look, here is something that most of us in tech do not want to hear: frameworks are not going to save you. We love frameworks. We love processes. We love the idea that if you just follow the right steps in the right order, everything will work out. But leadership — real leadership — does not work that way, and Richard Hughes-Jones has spent over two decades coaching executives who had to learn this the hard way.
This conversation on the Refactoring podcast digs into territory that makes a lot of technical leaders uncomfortable. Hughes-Jones comes from strategy consulting, which means he has seen the corporate playbook from the inside. He knows all the frameworks. And his core argument is that the most effective leaders are the ones who learn to operate beyond those frameworks — in the messy, ambiguous space where there is no playbook and you have to rely on judgment, self-awareness, and the ability to sit with discomfort.
The section on leading through uncertainty and complexity is where this gets really practical. We are in an era where AI is reshaping every assumption about what engineering teams look like, what they can do, and what the role of a leader even is. Hughes-Jones does not pretend to have the answer, but he offers something more useful: a way of thinking about it. He talks about polarities — those tensions that cannot be resolved, only managed — and safe-to-fail experiments. Instead of trying to predict the future and pick the "right" strategy, you run small bets, learn fast, and adapt. This is not new advice, but the way he frames it in the context of personal leadership rather than just organizational strategy is genuinely useful.
What I think is missing from this conversation, and what a lot of coaching discussions avoid, is the structural reality that many tech leaders face. It is great to talk about inner work and self-awareness, but when your organization is cutting headcount by 30% and asking you to "do more with AI," the bottleneck is usually not your mindset. Sometimes the constraint is genuinely external, and no amount of coaching will fix a broken incentive structure. Hughes-Jones hints at this but does not fully confront it.
The AI-for-coaching angle is interesting and worth paying attention to. The idea of blending human coaching with AI tools, using AI for pattern recognition and reflection prompts while keeping a human coach for the nuanced, relational work, is a pragmatic middle ground. But let us be honest about the economics: executive coaching is expensive, and AI coaching tools are going to eat a significant portion of that market. The question is not whether to use AI for coaching, but whether we are being thoughtful about what we lose when we do.