Tell the AI Why, Wallet Deanonymization, and the Complexity Tax
Published on 13.05.2026
AI Coding Tip 019: Tell the AI Why, Not Just What
TLDR: Giving an AI tool a bare command like "refactor this" hands it a steering wheel with no road. Telling it the reason first, the real constraint you are working within, gets you a solution that actually fits.
Summary: There is a habit most developers fall into quickly with AI coding assistants. You type a short imperative: optimize this query, add error handling, clean this up. It sounds efficient. The AI does something. It might even look right. But Maxi Contieri's argument is that you are letting the model guess your real intent, and models are surprisingly confident guessers even when they are wrong.
The difference between "refactor this" and "refactor this because we are about to hand it to a junior developer who has never worked in this codebase" is enormous. The second prompt carries the constraint. The AI now knows the audience, the goal, and the acceptable tradeoff space. It will choose clarity over cleverness, verbose names over compact ones, and it will skip the aggressive optimizations that only make sense if you already know what the code is doing.
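A minimal sketch of what that discipline looks like in practice, assuming you drive the assistant through an API or a reusable prompt template; the helper name and fields here are illustrative, not from Contieri's article:

```python
# Minimal sketch of front-loading rationale in a prompt.
# The helper and field names are illustrative, not from the article.

def build_prompt(task: str, why: str, constraints: list[str]) -> str:
    """Put the rationale and constraints before the command,
    so the model optimizes for the right tradeoffs."""
    lines = [
        f"Context: {why}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Task: {task}",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    task="Refactor the payment_retry module.",
    why="A junior developer who has never seen this codebase "
        "will own it next sprint.",
    constraints=[
        "Prefer clarity over cleverness",
        "Verbose, self-explaining names",
        "No micro-optimizations that obscure intent",
    ],
)
print(prompt)
```

The ordering is the point: the model reads the audience and the tradeoff space before it ever sees the imperative.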
This is not a novel idea in software engineering. Requirements documents have always asked for rationale, not just specification. The problem is that conversational AI interfaces trained us to be terse, because short prompts look like short conversations and conversations feel natural. But you are not having a conversation, you are issuing a specification. Treat it like one.
What Contieri is not saying out loud is that this technique also forces you to articulate your own reasoning. If you cannot explain why you want a refactor, maybe the refactor is not necessary. The prompt discipline is doing double duty as design discipline.
Key takeaways:
- State your reason or constraint before the command, not after
- The AI guesses intent from context; give it better context than "clean this up"
- Treating prompts as specifications rather than conversations changes the quality of output
- The discipline of writing the "why" first often clarifies whether the change is needed at all
Why do I care: Every senior engineer I know has seen AI-generated code that was technically correct but architecturally wrong, because the model optimized for the wrong thing. Context is cheap now. Use it. Teaching your team to front-load rationale in prompts is one of the fastest, cheapest process improvements available right now.
AI Coding Tip 019 - Tell the AI Why, Not Just What
AI Is Making Crypto Wallet Deanonymization Much Cheaper
TLDR: Connecting a pseudonymous crypto wallet to a real-world identity used to require expensive investigative labor. AI agents have dropped that cost to under four dollars per attempt, and the privacy assumption built into most crypto usage is quietly falling apart.
Summary: Dishita Malvania opens with a number that stops you: the cost of deanonymizing a crypto wallet using AI-assisted methods is now often under four dollars. This is not a rounding error or a fringe capability. It reflects a structural shift in what the privacy model of blockchain actually relied on.
The privacy was never in the cryptography. Addresses are pseudonymous, not anonymous. What kept most users effectively private was the labor cost of correlation: taking an address, cross-referencing on-chain transaction patterns, scraping social media for wallet tags, matching writing styles, checking ENS names, triangulating timestamps. A skilled investigator doing this manually might spend hours per target. That friction was the actual privacy guarantee, even if nobody said so explicitly.
AI agents collapse that labor cost. They can ingest public social media posts where someone might have mentioned a wallet address, cross-reference ENS registrations, match on-chain behavior to publicly known patterns, and produce a candidate identity match in minutes. At four dollars a run you can do this at scale. Governments can do it. Competitors can do it. Stalkers can do it.
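A back-of-the-envelope sketch of the economics makes the shift concrete; the four-dollar figure is from the article, while the analyst rate and hours are assumptions for illustration:

```python
# Back-of-the-envelope sketch of the economics the article describes.
# The $4 automated cost is cited in the article; the analyst rate and
# hours per target are illustrative assumptions.

ANALYST_RATE_USD_PER_HOUR = 75    # assumed investigator rate
HOURS_PER_MANUAL_TARGET = 6       # assumed hours of manual correlation
AUTOMATED_COST_USD = 4            # figure cited in the article

manual_cost = ANALYST_RATE_USD_PER_HOUR * HOURS_PER_MANUAL_TARGET
cost_drop = manual_cost / AUTOMATED_COST_USD

print(f"Manual correlation: ~${manual_cost} per target")
print(f"Automated: ~${AUTOMATED_COST_USD} per target "
      f"({cost_drop:.0f}x cheaper)")

# At $4 a run, sweeping 10,000 wallets is a $40,000 project:
# trivial for a government, feasible for a motivated individual.
print(f"Sweeping 10,000 wallets: ${10_000 * AUTOMATED_COST_USD:,}")
```

Under these assumptions the per-target cost drops two orders of magnitude, which is exactly the kind of shift that turns targeted investigation into bulk surveillance.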
The article gestures at countermeasures: zero-knowledge proofs, better operational security, avoiding linking your real identity to anything on-chain. These are real tools. But the honest observation is that most crypto users never adopted serious opsec because they thought the cost of targeting them individually was prohibitive. That assumption deserves a hard reexamination. The ENS name you registered with your first name, the tweet where you shared your wallet for tips, the forum post where you discussed your holdings: these are all data points now within automated reach.
Key takeaways:
- The privacy model of most crypto usage was based on investigative labor costs, not cryptography
- AI automation has dropped wallet-to-identity correlation costs by orders of magnitude
- ENS names, social media mentions, and on-chain patterns are now cheap to combine at scale
- Zero-knowledge proofs and strict opsec are the actual technical countermeasures, not pseudonymity alone
- Anyone who linked their real identity to a wallet address in any public forum should reassess their exposure
Why do I care: This is not a niche crypto story. It is an object lesson in what happens when a security model depends on economic friction rather than technical guarantees. Web3 projects that store user preferences or history on-chain, applications that link wallet addresses to user-facing features, any architecture that assumed "public blockchain but nobody will bother connecting dots" is now looking at a broken assumption. The threat model needs updating.
AI Is Making Crypto Wallet Deanonymization Much Cheaper
Understanding Complexity Can Make Life and Work Less Complicated
TLDR: Organizations are not machines, they are complex adaptive systems, and the management instinct to analyze, plan, and control breaks down precisely where it is most confidently applied. Understanding why this happens is the first step toward managing with the grain rather than against it.
Summary: Novi opens with a sports team analogy that is simple enough to be sticky: the team is more than the players. You can analyze every individual player's statistics in granular detail and still completely miss why the team loses. The strategies, the shared understanding built over seasons, the informal dynamics between players, the way team culture shifts under a new coach: these produce outcomes that no individual-level analysis can predict or explain. Organizations work the same way.
This is the core of complexity science applied to management. A complicated system, like an engine or a factory production line, can be analyzed, modeled, and optimized because the parts behave predictably and the relationships between them are fixed. A complex adaptive system, like a software team, a market, or an organization under pressure, has parts that change their behavior in response to each other and to the environment. The same intervention can produce opposite effects at different times. The Cynefin framework, which the article references, is a useful vocabulary here: it distinguishes between obvious problems, complicated problems, complex situations, and chaotic ones, and argues that applying the wrong response pattern to the wrong domain is the most common source of management failure.
What the article is most interested in is why managers keep reaching for complicated-system tools in complex situations. The answer is probably psychological. Analysis and planning produce artifacts (reports, roadmaps, org charts) that feel like understanding. They are easier to defend to stakeholders than "we ran small experiments and learned from the results," which sounds like you are making it up as you go, even though it is the more appropriate method.
The missing piece in this article is the practical transition. Knowing the difference between complicated and complex does not automatically tell you what to do on Monday morning. The Cynefin advice to "probe, sense, respond" rather than "analyze, plan, execute" is sound but abstract. The genuinely hard work is convincing the organizational structures around you, the ones that still want quarterly roadmaps and predictable delivery timelines, to accept that some problems do not work that way.
Key takeaways:
- Complex adaptive systems cannot be managed with the same tools as complicated mechanical systems
- The Cynefin framework gives vocabulary for matching response patterns to problem types
- Emergence means team or organizational behavior cannot be explained by individual analysis alone
- "Probe, sense, respond" is the appropriate loop for complex situations, not "analyze, plan, execute"
- The real challenge is institutional: most organizations reward the artifacts of planning even when planning is the wrong tool
Why do I care: Every large software project I have worked on eventually hit a point where the original architecture assumptions stopped matching reality, and the response was more planning rather than more learning. Systems thinking is one of those topics where understanding the theory does not automatically change behavior, because the incentive structures around you reward the wrong response. Still, getting the vocabulary right is the prerequisite. You cannot argue for a different approach if you cannot name the problem.
Understanding Complexity Can Make Life and Work Less Complicated
Google Gemini Moves Into Your Home: The Weekly Poll
TLDR: Google is giving Google Home a Gemini upgrade that allows compound natural language commands across multiple smart home systems at once. The HackerNoon community is split on whether this is progress or just more AI in places it was not asked for.
Summary: Google Home is getting what the newsletter calls a "Gemini brain transplant." The practical change is meaningful: instead of issuing discrete commands for each device or service, you can now issue a single instruction that spans alarm setting, calendar updates, and smart home controls simultaneously. It sounds like the version of smart home assistants that the category has been promising since Alexa shipped in 2014.
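Google has not published how the decomposition works, but a hypothetical sketch shows why one utterance now touches several backends at once; every service and command name here is invented for illustration and does not come from Google's actual API:

```python
# Hypothetical sketch of what a compound command fans out into.
# None of these service or command names come from Google's API;
# they illustrate why one utterance now touches several systems.

from dataclasses import dataclass

@dataclass
class Action:
    service: str   # which backend the step hits
    command: str   # the discrete, old-style instruction

def decompose(utterance: str) -> list[Action]:
    """Stand-in for the model's intent parsing: one natural-language
    request becomes several discrete, per-service commands."""
    # Hardcoded for illustration; the real system infers this.
    return [
        Action("alarms",   "set alarm 06:30"),
        Action("calendar", "add event 'flight' 09:15"),
        Action("lights",   "schedule off 23:00"),
    ]

for step in decompose("I fly early tomorrow, get the house ready"):
    print(f"{step.service}: {step.command}")
```

Each fan-out step is a separate integration that can fail independently, which is exactly where the reliability concern below comes in.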
The reader poll this week asked whether you would let Gemini move into your home. The options ranged from enthusiastic upgrade to "my light switch works just fine." I find that last option more honest than most product managers would like. The friction in smart home adoption has never really been the intelligence of the assistant. It has been the reliability. Smart home devices fail in ways that dumb devices do not: firmware updates that break integrations, cloud outages that make local hardware unresponsive, privacy policies that change after you have already wired your house. Adding a more capable AI layer does not fix any of those problems. It adds a new dependency on top of existing ones.
The genuinely interesting part of the Gemini integration is the compound command capability. That is a real usability improvement over the current state of discrete, one-action-at-a-time voice commands. What nobody is talking about is what happens to the privacy model when your home assistant is also the AI that has context from your calendar, your email, and your search history. The data integration that makes compound commands possible is also the data integration that makes the privacy surface substantially larger.
Key takeaways:
- Google Home's Gemini upgrade enables compound natural language commands across multiple services
- The usability improvement is real, but reliability and privacy concerns remain unaddressed
- Smart home AI adds dependency layers on top of existing fragile cloud integrations
- The data integration powering natural language commands also expands the privacy surface significantly
Why do I care: From an architecture standpoint, "make everything talk to everything" is a decision you usually regret six months after shipping it. The compound command capability is convenient exactly because it breaks down the isolation between systems. That convenience and that risk are the same feature.