Published on 09.02.2026
TLDR: Research suggests that developers feel more productive when using AI tools, but the rare properly controlled study indicates their actual output may not improve to match. The "productivity placebo" effect mirrors the false sense of progress we get during marathon debugging sessions.
Summary:
Here is the uncomfortable truth that nobody in the AI tooling space wants to talk about: most of the data we have on AI productivity is self-reported, and self-reported data is notoriously unreliable. Luca Rossi from Refactoring raises a critical point: the one time researchers used proper control groups to measure AI-assisted development, developers reported feeling more productive while the actual output told a different story. That is a significant gap between perception and reality, and it should give every engineering leader pause before restructuring their teams around AI-first workflows.
The analogy Rossi draws here is brilliant and deeply relatable. Think about those multi-hour debugging sessions where you try everything, nothing works, you go to bed frustrated, and then fix the problem in five minutes the next morning. That relentless debugging loop feels like flow state, but it is actually autopilot. You are getting quick feedback loops with each change-and-test cycle, which tricks your brain into thinking meaningful work is happening. AI-assisted coding can produce exactly the same illusion. You are prompting, reviewing, adjusting, prompting again, and the rapid iteration feels productive because it is cognitively easy to sustain. But easy does not mean effective.
The practical advice here is solid but also somewhat incomplete. Rossi suggests three strategies: stay engaged and actually understand what the AI is doing, avoid multitasking by either going fully autonomous or fully hands-on, and take frequent breaks using a Pomodoro-style approach. These are reasonable hygiene practices. But what is missing from this conversation is measurement. If we know self-reported productivity is unreliable, and we know AI creates a placebo effect, then individual discipline alone is not the answer. Teams need objective metrics, things like defect rates, rework cycles, time-to-merge with review quality held constant. Without that, you are just fighting a cognitive bias with willpower, and cognitive biases tend to win.
For architects and engineering managers, the takeaway is clear: do not assume AI tools are delivering value just because your team says they feel faster. Build feedback loops that measure actual outcomes. Track whether code quality metrics are improving or degrading. Look at the ratio of AI-generated code that survives review versus what gets rewritten. The feeling of productivity is not the same as productivity, and confusing the two is how organizations make expensive strategic mistakes.
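To make that last point concrete, here is a minimal sketch of what such a feedback loop could look like, assuming you have some way to attribute changed lines to AI assistance (commit trailers, editor telemetry, manual tagging). The `PullRequest` fields, the attribution mechanism, and the sample numbers are illustrative assumptions, not anything prescribed in the article.

```python
# Hypothetical sketch: estimating how much AI-generated code survives review,
# and the defect rate of what ships. Assumes changed lines can be tagged as
# AI-assisted; how you do that attribution in practice is the hard part.
from dataclasses import dataclass

@dataclass
class PullRequest:
    ai_lines_proposed: int   # AI-generated lines in the initial PR
    ai_lines_merged: int     # of those, lines still present after review
    defects_reported: int    # post-merge defects traced back to this PR

def survival_rate(prs: list[PullRequest]) -> float:
    """Fraction of AI-generated lines that made it through review intact."""
    proposed = sum(pr.ai_lines_proposed for pr in prs)
    merged = sum(pr.ai_lines_merged for pr in prs)
    return merged / proposed if proposed else 0.0

def defect_rate(prs: list[PullRequest]) -> float:
    """Post-merge defects per 1,000 AI-generated lines that shipped."""
    merged = sum(pr.ai_lines_merged for pr in prs)
    defects = sum(pr.defects_reported for pr in prs)
    return 1000 * defects / merged if merged else 0.0

if __name__ == "__main__":
    sample = [PullRequest(120, 80, 1), PullRequest(40, 40, 0), PullRequest(200, 90, 3)]
    print(f"survival rate: {survival_rate(sample):.0%}")   # 58% with these numbers
    print(f"defects per kloc: {defect_rate(sample):.1f}")  # 19.0 with these numbers
```

The specific metrics matter less than the habit: pick outcome measures before the rollout, then compare them against what the team reports feeling.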
Key takeaways:
- Most data on AI-assisted productivity is self-reported, and the one controlled study cited found perception and actual output diverging.
- Rapid prompt-review-adjust loops feel productive because they are cognitively easy to sustain, but easy does not mean effective.
- Individual hygiene (staying engaged, not mixing autonomous and hands-on modes, taking breaks) helps, but teams also need objective outcome metrics.
Tradeoffs:
- Fast AI iteration loops feel like progress but can mask whether the output is actually better.
- Objective measurement of defect rates, rework, and time-to-merge adds overhead, but relying on self-reported productivity means fighting a cognitive bias with willpower.
Link: Productivity placebo, hard work, and strategic quitting
TLDR: The tech industry's debate between hustle culture and work-life balance fundamentally misses the point. Motivated people in healthy environments naturally work hard, making the argument about enforcement versus restraint entirely misdirected.
Summary:
There is a culture war simmering in tech that has become increasingly tiresome: the hustle-all-the-time camp versus the slow-is-smooth-smooth-is-fast camp. People have attached their identities to these positions, which is always the first sign that a conversation has stopped being useful. Rossi argues the entire framing is wrong, and I think he is mostly right, but he also does not go far enough.
The core insight is that motivated people naturally work hard when they have the right conditions. You do not need to mandate 996 schedules or hustle culture because talented people with proper direction and agency will push themselves. Hard work, in Rossi's framing, is a lagging indicator of a healthy environment, not a leading indicator you can force. This is an elegant way to think about it. You cannot manufacture motivation by demanding long hours any more than you can make a plant grow faster by pulling on it.
But here is what this analysis avoids: the definition of "hard work" itself is doing a lot of heavy lifting in this argument. There is a difference between sustained intense focus on difficult problems and simply putting in long hours. Someone doing four hours of genuinely hard cognitive work is often outproducing someone doing twelve hours of meetings and email. The conversation about hard work rarely distinguishes between intensity and duration, and that ambiguity lets both sides claim they are right. The hustle camp points to hours logged, the balance camp points to output per hour, and they talk past each other indefinitely.
For team leads and architects, the practical question is not whether your team should work hard. It is whether you have created the conditions where deep, focused work is possible. That means protecting blocks of uninterrupted time, giving people meaningful problems to solve, providing enough context for autonomous decision-making, and getting out of the way. If you find yourself needing to motivate people to work harder, the problem is almost certainly environmental, not motivational. Either the work is not compelling, the direction is unclear, or the organizational friction is too high.
Key takeaways:
- Hard work is a lagging indicator of a healthy environment, not something that can be mandated.
- The debate conflates intensity with duration: four hours of focused cognitive work often outproduces twelve hours of meetings and email.
- If you need to motivate people to work harder, the problem is environmental: uncompelling work, unclear direction, or organizational friction.
Link: Productivity placebo, hard work, and strategic quitting
TLDR: Drawing on her expertise in decision-making under uncertainty, Annie Duke reframes strategic quitting as a rational skill rather than a character flaw. Pre-defined kill criteria and accountability partners are the practical tools for overcoming our psychological resistance to quitting.
Summary:
Annie Duke, one of the world's top experts in decision-making and a former professional poker player, makes a compelling case that quitting is one of the most important and undervalued skills in professional life. The logic is straightforward: we make decisions under uncertainty, we learn new information after committing, and sometimes that information tells us we should change course. The ability to actually change course, rather than doubling down on a failing path, is what separates good decision-makers from stubborn ones.
The psychological barriers to quitting are well-documented and powerful. People associate quitting with failure and weak character. They fear judgment from others, even though research shows people actually respect well-reasoned quitting decisions. And of course, there is the sunk cost fallacy, the irrational tendency to continue investing in something because of what you have already put in rather than what you expect to get out. Rossi shares Duke's extreme example of a marathon runner who broke her leg at mile eight and continued running for eighteen more miles against medical advice. It is an absurd story, but it illustrates a pattern that plays out in more subtle ways across every engineering organization.
The practical framework Duke offers is genuinely useful: establish kill criteria in advance and use accountability partners to enforce them. The key insight is that there is an enormous psychological difference between encountering a warning signal in the moment and having pre-committed to act on that signal. The signal is identical, but your response will be radically different. This is directly applicable to software projects, product bets, and architectural decisions. How many teams continue building features that clearly are not working because nobody defined what failure would look like before they started?
For engineering teams and architects, this framework should be standard practice. Before starting any significant initiative, whether it is a migration, a new service, or a product experiment, define the kill criteria upfront. What metrics would tell you this is not working? What timeline triggers a reassessment? Who is responsible for calling the stop? Without these pre-commitments, the default behavior is to keep going because stopping feels like admitting failure. And the longer you keep going, the more sunk cost builds up, making it even harder to quit. This is how organizations end up spending two years on migrations that should have been abandoned after six months.
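As a thought experiment, that pre-commitment can even be written down in a structured form so the stop call is mechanical rather than emotional. The sketch below is purely illustrative: the initiative, thresholds, dates, and helper functions are assumptions for the example, not a framework Duke or Rossi propose.

```python
# Purely illustrative sketch of pre-committed kill criteria for an initiative.
# All names, thresholds, and dates are made up; the value is in writing them
# down before the work starts, not in the code itself.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class KillCriterion:
    description: str
    breached: bool  # evaluated from real metrics at review time

@dataclass(frozen=True)
class Initiative:
    name: str
    stop_owner: str       # who is responsible for calling the stop
    reassess_by: date     # the timeline that triggers a reassessment
    criteria: tuple[KillCriterion, ...]

def stop_reasons(initiative: Initiative) -> list[str]:
    """Breached pre-committed criteria; a non-empty result means the stop
    decision was effectively made up front, before sunk cost accumulated."""
    return [c.description for c in initiative.criteria if c.breached]

def review_due(initiative: Initiative, today: date) -> bool:
    """The timeline trigger: has the agreed reassessment date arrived?"""
    return today >= initiative.reassess_by

migration = Initiative(
    name="monolith-to-services migration",
    stop_owner="platform lead",
    reassess_by=date(2026, 6, 1),
    criteria=(
        KillCriterion("less than 30% of traffic migrated by the review date", breached=True),
        KillCriterion("p95 latency worse than the monolith baseline", breached=False),
    ),
)

if review_due(migration, today=date(2026, 6, 15)):
    reasons = stop_reasons(migration)
    if reasons:
        print(f"{migration.stop_owner} should call the stop: {reasons}")
```

The point of encoding it, even informally in a wiki table rather than code, is that the signal and the response are agreed before anyone has something to defend.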
What Duke's framework does not fully address, however, is the organizational power dynamics that make quitting hard even when you know you should. It is one thing to have kill criteria; it is another when the person who championed the project is a senior leader who will interpret stopping as a personal affront. The individual cognitive biases are real, but the organizational incentives often matter more. Until companies start rewarding good quitting decisions as explicitly as they reward successful launches, the structural incentives will continue to favor persistence over prudence.
Key takeaways:
- Quitting well is a decision-making skill, not a character flaw; information learned after committing should be allowed to change the course.
- Define kill criteria, a reassessment timeline, and a named decision owner before starting, and use accountability partners to enforce them.
- Individual pre-commitment is necessary but not sufficient while organizational incentives reward persistence over well-reasoned stopping.
Tradeoffs:
- Pre-committed kill criteria trade in-the-moment flexibility for protection against the sunk cost fallacy.
- Stopping early conserves resources but can be read as failure, especially when the project's champion is a senior leader.
Link: Productivity placebo, hard work, and strategic quitting