Published on 21.01.2026
TLDR: The gap between agent demos and production deployments isn't about model intelligence—it's about reliability, infrastructure, and the system around the models. Teams succeeding with agents share common traits: they're realistic about capabilities, start narrow, and invest in workflow over model selection.
There's a name for the disconnect between AI excitement and AI reality: the "gen AI paradox." McKinsey reported that nearly eight in ten companies have deployed generative AI in some form, but roughly the same percentage report no material impact. Even more striking: 90% of function-specific AI use cases remain stuck in pilot mode. Agents were supposed to be the answer—the thing that turns experiments into impact. A Gartner poll from this year found only 15% of tech leaders are actually deploying somewhat autonomous agents.
The author, Luca Rossi, has spent months talking to teams about this. The pattern he sees is consistent: the demos look flawless, but production falls apart. The gap isn't intelligence—today's models are remarkably capable. The gap is everything else.
Understanding the Autonomy Spectrum
The word "agent" has become overloaded. Rossi offers a useful framing: a spectrum of autonomy, running from chat assistants that merely respond to fully autonomous agents that act on their own.
The key distinction is that agents do things autonomously. They interact with external systems, make decisions, and take actions with consequences. This makes them both powerful and risky. The feedback loop is fundamentally different: a chatbot has a human in the loop at every message, while an agent doesn't.
Here's the critical insight: the best agents today are only selectively autonomous—independent on routine tasks, supervised on critical ones. Successful deployments don't go full autonomy from day one. They escalate to humans for high-stakes decisions and gradually expand authority as trust builds.
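This selective-autonomy pattern can be sketched in a few lines. The action names, risk labels, and allowlist below are hypothetical, not from the article; the point is the shape: routine, pre-approved actions run unattended, and everything else pauses for a human.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk: str  # "routine" or "critical" -- hypothetical labels

def execute(action: Action, autonomy_allowlist: set[str]) -> str:
    """Run allowlisted routine actions autonomously; escalate the rest."""
    if action.risk == "critical" or action.name not in autonomy_allowlist:
        return f"ESCALATED: {action.name} awaits human approval"
    return f"EXECUTED: {action.name}"

# As trust builds, the allowlist grows and authority expands gradually.
allowlist = {"update_crm", "sync_data"}
print(execute(Action("update_crm", "routine"), allowlist))     # executed
print(execute(Action("issue_refund", "critical"), allowlist))  # escalated
```

Expanding the allowlist over time is the code-level analogue of "gradually expand authority as trust builds."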
The Building Blocks
Every agent has three core components: a model that reasons and plans, tools that let it act on external systems, and an orchestrator that runs the loop, sequencing steps and recovering from errors.
The point is that an agent is a full system, not just a model. A better model doesn't automatically translate to a better agent if tools are flaky or the orchestrator can't recover from errors.
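That system shape can be sketched as a minimal orchestrator loop. Everything here is illustrative: the stub stands in for a real LLM call, and the tool is a toy lookup. What matters is that reliability lives in the loop, not the model: tool errors are caught and fed back, and a step budget bounds runaway runs.

```python
def run_agent(goal, model, tools, max_steps=10):
    """Minimal orchestrator: plan -> act -> observe, with error recovery."""
    history = [("goal", goal)]
    for _ in range(max_steps):
        decision = model(history)  # ("call", tool_name, args) or ("done", result)
        if decision[0] == "done":
            return decision[1]
        _, tool_name, args = decision
        try:
            observation = tools[tool_name](**args)
        except Exception as e:
            observation = f"tool error: {e}"  # surface failure so the model can retry
        history.append((tool_name, observation))
    return "gave up: step budget exhausted"

# Stub model: look up one record, then finish. A real agent calls an LLM here.
def stub_model(history):
    if len(history) == 1:
        return ("call", "lookup", {"key": "invoice_42"})
    return ("done", history[-1][1])

tools = {"lookup": lambda key: f"record for {key}"}
print(run_agent("fetch invoice", stub_model, tools))  # prints "record for invoice_42"
```

Swapping in a stronger model changes one argument; a flaky `lookup` tool or a loop that can't recover still sinks the whole agent.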
Where Agents Actually Work
The most common success area is workflow automation—handling repetitive, multi-step tasks across systems: updating CRMs, processing invoices, syncing data between tools. PwC found 64% of agent deployments focus on workflow automation.
A notable subset is UI automation. AI is better at using APIs than clicking around websites, but many systems don't have APIs, or their APIs are incomplete and painful. Industries like healthcare, insurance, and travel often have legacy platforms never designed for automation. Agents that interact with UIs directly, clicking buttons, filling forms, and navigating pages, can unlock outsized value even on relatively simple tasks.
Successful cases are practical and down-to-earth: well-defined tasks, contained blast radius for errors, and a clear baseline to beat (usually humans doing boring work manually). What doesn't work yet: open-ended agents requiring lots of judgment, long-running autonomous workflows, or high-stakes customer-facing applications.
The Compounding Errors Problem
This is the most fundamental challenge. In any multi-step workflow, small error rates at each step multiply into large failure rates overall. If each step has 95% reliability (which is optimistic), over 20 steps you're left with just 36% success. Demos look good because they show the happy path—maybe 3-5 carefully chosen steps.
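The arithmetic is worth internalizing, and the same formula, inverted, tells you how reliable each step must be to hit a target end-to-end success rate:

```python
# Success over n independent steps compounds multiplicatively: p_step ** n_steps.
def chain_success(p_step: float, n_steps: int) -> float:
    return p_step ** n_steps

print(f"{chain_success(0.95, 20):.0%}")  # 0.95^20 -> 36%, as in the article
# Inverting: to reach 90% end-to-end over 20 steps, each step needs
# 0.90 ** (1/20), roughly 99.5% reliability.
print(f"{0.90 ** (1 / 20):.3%}")
```

That inversion is why shorter chains and checkpoints matter so much: halving the step count relaxes the per-step bar dramatically.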
The countermeasures are engineering work, not model selection.
For architects and technical leads, the testing and observability guidance is particularly relevant. Agents are non-deterministic: the same input can produce different outputs across runs, so you can't write traditional tests with the same confidence, and when something fails it's hard to reproduce. Invest in observability early: log each step's inputs, outputs, and latency. Tools like LangSmith or Langfuse, or even plain structured logging, help create full traces from trigger to completion.
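Even without a dedicated tool, standard-library structured logging gets you most of the way. Here's a sketch of a per-step trace wrapper; the names are illustrative and not any particular library's API:

```python
import json
import time
import uuid

def traced(step_name, fn, trace_id, **inputs):
    """Run one agent step, logging inputs, output, status, and latency as JSON."""
    start = time.perf_counter()
    try:
        output = fn(**inputs)
        status = "ok"
    except Exception as e:
        output, status = repr(e), "error"  # keep the failure in the trace
    record = {
        "trace_id": trace_id,
        "step": step_name,
        "inputs": inputs,
        "output": output,
        "status": status,
        "latency_ms": round((time.perf_counter() - start) * 1000, 2),
    }
    print(json.dumps(record))  # in production, ship this to your log pipeline
    return output

trace_id = str(uuid.uuid4())  # one id threads every step of a single run
traced("normalize", lambda text: text.upper(), trace_id, text="invoice_42")
```

Because every record carries the same `trace_id`, grepping the logs reconstructs the full trigger-to-completion trace, including the steps that failed.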
At scale, additional factors compound: authentication (how can agents act on behalf of users across systems?), security (can the agent access things it shouldn't?), escalation (does it know when to pause?), and latency/cost (are you accounting for volume?). In the LLM world, everything compounds fast.
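The authentication and security questions often reduce to one principle: the agent should act with the calling user's permissions, never with a blanket service account. A toy sketch of that check, with made-up scopes and tool names:

```python
# Hypothetical per-user scopes and per-tool requirements -- not from the article.
USER_SCOPES = {
    "alice": {"crm:read", "crm:write"},
    "bob": {"crm:read"},
}
TOOL_REQUIRES = {
    "read_crm": {"crm:read"},
    "update_crm": {"crm:write"},
}

def authorize(user: str, tool: str) -> bool:
    """Allow a tool call only if the user's scopes cover the tool's needs."""
    # Unknown tools get an unsatisfiable requirement, so they are denied.
    required = TOOL_REQUIRES.get(tool, {"deny"})
    return required <= USER_SCOPES.get(user, set())

print(authorize("alice", "update_crm"))  # True: alice has crm:write
print(authorize("bob", "update_crm"))    # False: escalate or refuse
```

Gating every tool call this way also gives the escalation hook for free: a denied call is exactly the moment to pause and ask a human.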
Making AI agents work in the real world
This article was generated from the Refactoring newsletter. While I've done my best to capture the essence of this piece, I encourage you to read the original article for the complete analysis and additional examples.