The Real Questions People Are Asking About AI Agents and Automation
Published on 13.04.2026
TLDR: Wyndo from the AI Maker Substack newsletter tackles the recurring questions from readers about building AI-powered automation workflows, specifically whether non-technical people can do it and whether the knowledge will hold up over time. The answers are more nuanced than the usual "yes, anyone can do AI" pitch you've heard a hundred times.
The premise here is familiar if you follow the AI newsletter space. Someone is selling a paid subscription, and they're doing it by acknowledging the objections rather than bulldozing past them. But the questions Wyndo is answering are genuinely interesting, because they're the same ones I hear from developers and non-developers alike. Can someone without a coding background actually build agent workflows? And more importantly, should they trust that the investment of time and money will pay off when the tooling changes every three months?
On the non-technical audience question, the argument is that the blueprints do the decision-making for you. You follow along, you end up with something that runs, and you learn the shape of the problem even if you didn't design the solution yourself. There's something honest about this framing. A lot of "no-code AI" content glosses over the fact that wiring together agents still requires a mental model of how data flows, what triggers what, and where things break. The claim isn't that it's trivial; it's that the cognitive scaffolding is already built into the guides.
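That mental model can be made concrete. Here's a minimal sketch, with entirely hypothetical step names, of the three things the paragraph says you have to reason about: data flowing step to step, each step triggering the next, and an explicit home for failures.

```python
from dataclasses import dataclass, field

@dataclass
class Workflow:
    steps: list = field(default_factory=list)  # ordered list of agent steps

    def run(self, payload):
        for step in self.steps:
            try:
                payload = step(payload)  # data flows from one step to the next
            except Exception as exc:
                # "where things break": every step needs an explicit failure path
                return {"error": str(exc), "failed_step": step.__name__}
        return payload

def extract(data):
    # Hypothetical step: normalize raw input into structured fields.
    return {"text": data["raw"].strip()}

def summarize(data):
    # Hypothetical step: a stand-in for a model call.
    return {"summary": data["text"][:20]}

wf = Workflow(steps=[extract, summarize])
print(wf.run({"raw": "  Agents need clear boundaries  "}))
```

The scaffolding a good blueprint provides is exactly this structure; the reader only fills in the steps.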
The point about knowledge compounding is the one I find most defensible. The specific tool will change, and if you've been around software long enough you've watched entire ecosystems rise and collapse. But the way you reason about composing AI systems, the instinct for what makes a good agent boundary versus a bad one, does transfer. It's similar to how understanding SQL fundamentals is worth more than memorizing the quirks of any particular ORM: the specific syntax gets replaced, the conceptual foundation doesn't.
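The SQL analogy is worth spelling out. The query below (table and column names invented for the sketch, using Python's built-in `sqlite3`) is a join plus aggregation; every ORM you'll ever pick up, whether SQLAlchemy, Django, or Prisma, is ultimately a different spelling of this same conceptual core.

```python
import sqlite3

# In-memory database with made-up tables, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO posts VALUES (1, 1, 'Agents'), (2, 1, 'Workflows'), (3, 2, 'Models');
""")

# The transferable part: a join, a grouping, an ordering. ORMs wrap the
# syntax, but this is the reasoning they all reduce to.
rows = conn.execute("""
    SELECT a.name, COUNT(p.id) AS post_count
    FROM authors a JOIN posts p ON p.author_id = a.id
    GROUP BY a.name
    ORDER BY post_count DESC
""").fetchall()
print(rows)  # [('Ada', 2), ('Grace', 1)]
conn.close()
```

Learn the join and the ORM syntax becomes a lookup problem; learn only the ORM and every migration starts from zero. The same asymmetry is what the newsletter is betting on with agent design.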
The tool choice commentary is worth noting. The newsletter is currently built around Claude rather than ChatGPT, specifically because of agentic capabilities. That's a reasonable call given where the models are right now, though it's also a bet that could look dated by the time you're reading this. The acknowledgment that they'll expand to other frontier models as they catch up is refreshingly honest, more so than newsletters that pretend there's only one tool worth knowing.