Encoding Team Standards Into AI Coding Assistants

Published on 01.04.2026

motyl.dev · FRONTEND

TLDR: Rahul Garg at Thoughtworks argues that AI coding assistants produce wildly inconsistent results because only experienced engineers know how to prompt them well. The fix is to encode team knowledge into shared, versioned instruction files stored directly in your repository, so every developer gets the same quality baseline from day one.

Summary: The premise here is straightforward but often overlooked: two developers on the same team, working in the same codebase with the same AI tool, can produce dramatically different results. The gap isn't the AI; it's the prompt. Senior engineers instinctively include architectural constraints, naming conventions, security considerations, and error-handling patterns when they ask AI tools for help. Junior developers don't yet know to do this, so the AI gives them something technically functional but contextually wrong, and a senior has to fix it in review.

Garg's proposal is to treat this problem the same way we treat any other consistency problem in software: infrastructure. Instead of hoping everyone picks up the right prompting instincts through osmosis, you write explicit instruction files that encode the team's accumulated judgment, commit them to version control, and let the AI execute against them consistently for everyone. The instructions live alongside the code, get reviewed through pull requests, and evolve the same way everything else does.
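
To see where these files actually sit, here is one common layout. The paths are the conventions three popular assistants document at the time of writing, not anything taken from Garg's article, so treat this as a sketch:

```text
repo/
├── .github/
│   └── copilot-instructions.md   # read automatically by GitHub Copilot
├── .cursor/
│   └── rules/
│       └── team-standards.mdc    # Cursor project rules
├── CLAUDE.md                     # project instructions for Claude Code
└── src/
```

Because these are ordinary files in the repository, they get review, history, and branch protection for free, exactly like the code they govern.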

The structure Garg recommends for these instructions has four parts: a role definition that establishes the right perspective, context requirements that tell the AI which constraints matter, categorized standards that separate non-negotiable rules from advisory ones, and an output format that makes results comparable and reviewable. This isn't novel prompt engineering wisdom; it's applied software design: explicit contracts instead of implicit assumptions.
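
To make those four parts concrete, here is a minimal sketch of what such an instruction file might look like. The section names follow Garg's structure; the stack details (React, TanStack Query, the i18n rule) are invented placeholders for whatever your team's actual constraints are:

```markdown
# Role
You are a senior frontend engineer on this team. Favor maintainability
over cleverness, and flag anything that conflicts with the standards below.

# Context requirements
- Framework: React 18 with TypeScript in strict mode
- Server state goes through TanStack Query; no new Redux code
- All user-facing strings go through the i18n layer

# Standards
Critical (never violate):
- Never interpolate user input into HTML; use the sanitization helpers
- Every async call needs explicit error handling with a typed error state

Advisory (prefer, but context may override):
- Keep components under ~150 lines; extract hooks for shared logic
- Co-locate tests with the component they cover

# Output format
Return only the changed files, each preceded by its path, followed by a
short note on which standards shaped the result.
```

The critical-versus-advisory split is doing real work here: it leaves the AI room to adapt on the advisory items, which guards against the over-prescription risk discussed below.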

What I find genuinely interesting here is the knowledge extraction process. To write these instructions, someone has to actually interview senior engineers about what they fix in code review, what security instincts they apply automatically, what architectural decisions they'd never explain out loud because they assume everyone knows. That process frequently surfaces disagreements between experienced developers that were never explicitly discussed. The instruction-writing exercise becomes a forcing function for making tacit standards explicit, which has value completely independent of AI tooling.

The article covers generation, refactoring, security review, and code review as separate application areas, and notes honestly that this approach costs real effort to maintain. Over-prescription is a real risk: if your instructions are too rigid, the AI produces brittle, formulaic output that doesn't adapt to context.

Key takeaways:

  • The consistency gap in AI-assisted development comes from prompting expertise, not tool capability, and treating it as an infrastructure problem is more scalable than code review gatekeeping.
  • Effective shared instructions combine role definition, explicit constraints, prioritized standards (critical vs. advisory), and a defined output format.
  • Storing instructions in version control with PR review turns personal preferences into collective, evolvable team artifacts.
  • The knowledge extraction process needed to write good instructions often reveals undiscussed disagreements between senior engineers, making the exercise valuable beyond AI tooling.
  • This approach is most cost-effective for larger teams; smaller teams can maintain consistency through conversation and should start with just one high-impact instruction (see the starter sketch after this list).
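
On that last point, a starter version for a small team can be a single rule, chosen because it generates the most review comments. A sketch, with an illustrative rule rather than one from the article:

```markdown
# AI instructions (v1: one rule; expand only when review comments repeat)

Critical: every data-fetching component must handle loading, error, and
empty states explicitly. Never render an unguarded `data.map(...)`; the
error state must show the user something actionable.
```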

Why do I care: As a senior frontend developer or architect, you're probably already the bottleneck in code review when AI-generated code hits your PR queue. This article gives you a concrete mechanism to push your judgment earlier in the process without being present for every decision. The framing of "executable shared infrastructure" resonates because it maps onto patterns we already trust: linting rules, architectural decision records, design systems. What the article doesn't fully address is the governance problem: who owns these instruction files, how they're kept from becoming stale shelfware, and what happens when they reflect one architect's preferences rather than genuine team consensus. Those are the hard problems, and encoding standards into AI prompts doesn't solve them; it just makes the disagreements more visible.

External Links (1)

  • Encoding Team Standards