Asking Engineers, Signal Areas, and Weekly Readings
Published on 13.04.2026
Asking Engineers Where the Friction Is
TLDR: Metrics tell you something is broken but not what. Luca from Refactoring.fm argues that actually talking to engineers — a structured "Listening Tour" — is the missing half of developer experience diagnostics. Most managers find that qualitative work harder than wiring up dashboards, which is exactly why it stays undone.
There is a funny irony in how engineering teams approach their own problems. We are a profession that defaults to instrumentation. We want dashboards, percentiles, DORA metrics, throughput numbers. And those things are genuinely useful — I am not dismissing them. Accelerate showed us that measurement matters. But here is the thing: a metric is a cue, not a diagnosis. When your cycle time spikes, the metric is just waving its arms at you. It does not tell you whether the problem is flaky tests, unclear requirements, a slow review culture, or the fact that one person on the team is a bottleneck for three different systems.
The answer, Luca argues, is deceptively obvious: ask the engineers. They already know where the friction is. They experience it every single day. The Listening Tour — a deliberate practice of structured one-on-one and group conversations designed to surface blockers — is what bridges the gap between your quantitative data and the actual lived experience of your team. Luca's framing here is worth sitting with: combining qualitative signal with quantitative data makes you "unstoppable," but most managers find the conversations harder than the dashboards. That asymmetry explains a lot about why devex programs stall.
I think this is right, and I would add one thing the article gestures at but does not fully develop: the listening tour only works if engineers trust that what they say will lead to action. Psychological safety is not just a nice-to-have for performance — it is the precondition for honest signal collection. If engineers suspect their feedback will be ignored or, worse, used against them, you will get polished non-answers. The infrastructure for honest conversation has to come before the conversation itself.
Behavioral Interviews as Signal Area Forecasting
TLDR: Austin McDonald, former hiring committee chair at Meta, breaks down behavioral interviews into three signal areas: scope and ownership, ambiguity and perseverance, and conflict resolution plus leadership. The insight that interviewers are essentially forecasters — studying past behavior to predict future performance — reframes how candidates should think about their answers.
Behavioral interviews have always been important, but the argument in this issue is that AI's effect on coding interviews has pushed them to the front of the line. When a language model can pass your LeetCode filter, the question of "how does this person actually work" becomes the load-bearing part of the process. Austin McDonald's framework is practical and worth understanding whether you are the candidate or the one running the interviews.
The three signal areas — scope and ownership, ambiguity and perseverance, conflict resolution and leadership — map to what companies actually care about at a structural level. Scope and ownership is about whether you drive things forward without needing to be managed. Ambiguity and perseverance is about whether you can make progress when the path is unclear. The third cluster is about whether you can operate effectively inside a human system, which is to say, the actual job. These are not arbitrary categories. They reflect what goes wrong when senior engineers fail in roles: they either stop at their stated scope, freeze in ambiguous situations, or generate friction in the team.
Austin's point about follow-up questions is the sharpest observation in the piece. When an interviewer asks a follow-up, most candidates assume they want more detail. The better read is to ask yourself what signal the interviewer is probing for. Are they checking scope? Ownership? How you handled a difficult person? That metacognitive layer — thinking about what the interviewer is trying to measure, not just what they are asking — is the difference between a competent answer and a strong one. Amazon with its 16 leadership principles and Meta with its three signal areas represent different ends of a spectrum, but the underlying logic is the same: they are trying to predict how you will behave before they have seen you behave.
Weekly Reads: Git History, AI Feedback Loops, and Coaching
TLDR: Three short reads from Luca's roundup: mining git history for team health signals, building a structured practice to feed AI session learnings back into shared team artifacts, and a thoughtful experiment in using AI for coaching versus human coaches. All three are worth your time.
The weekly reads section from this Refactoring issue is unusually strong and I want to give each piece its due rather than collapsing them into a list.
The git history piece by Ally Piechowski makes the point that version control is not just a backup system — it is a record of how your team actually works. Hotspots, bus factor, bug clusters, patterns that appear during crunch time: all of this is readable from git history with commands that are not particularly exotic. This is the kind of observability work that rarely makes it onto a roadmap but pays back in understanding your system's actual weak points.
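To make that concrete, here is the kind of one-liner the piece has in mind. These are generic git commands, not Piechowski's exact recipes, and the six-month window and top-10 cutoff are arbitrary knobs you would tune for your own repo:

```shell
# Three quick reads of a repository's history. Run from inside a git
# checkout; the time window and cutoffs are illustrative, not canonical.

# Hotspots: which files change most often.
git log --since="6 months ago" --format= --name-only \
  | sed '/^$/d' | sort | uniq -c | sort -rn | head -10

# Bus-factor proxy: commit counts per author (lopsided output = risk).
git shortlog -sn HEAD

# Crunch-time signal: commits bucketed by hour of day (00-23).
git log --format="%ad" --date=format:"%H" | sort | uniq -c | sort -rn
```

None of this requires tooling beyond git itself, which is rather the point: the data is already sitting in every clone.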
Rahul Garg's piece on AI feedback tackles something I have been thinking about independently: every interaction with an AI coding tool generates signal about what the tool handles well, what context was missing, what prompts succeeded. Most teams let that signal evaporate. Rahul proposes feeding those learnings back into shared team artifacts — effectively closing the loop so the team gets better at using AI, not just the individual who happened to be in that session. This is the kind of practice that distinguishes teams that genuinely improve their AI-assisted workflows from teams that just have AI tools installed.
Cate Huston's piece on AI coaching is honest in a way that a lot of the "AI replaces X" content is not. She found that AI is genuinely useful for structured issues where you need a thinking partner or validation. But for the messy, identity-level stuff — the kind of thing where you need to feel that another person believes in you — the AI does not land the same way. That is not a failure of the AI. It is a description of what coaching actually is. The piece does not make a sweeping claim; it describes a specific experiment with specific results, which is exactly the right epistemic posture for this moment.