Minimum Viable Testing Strategy, Interview Design That Works, and Growing Engineers in the AI Era
Published on 01.12.2025
Minimum Viable Testing: Getting Maximum Value with Minimal Effort
TLDR: Integration tests offer the highest ROI for most teams. Combine static typing, selective unit tests, critical E2E tests, and consider skipping staging entirely with feature flags and canary rollouts for faster release cycles.
The author presents a refreshingly pragmatic testing philosophy that cuts through the dogma surrounding test-driven development and coverage metrics. Rather than asking "how do we test thoroughly," the more useful question becomes "how do we extract maximum value from minimal testing effort?" This reframing alone is worth attention from teams drowning in test maintenance.
The strategy starts with static typing as table stakes. Using TypeScript or another statically typed language catches entire categories of bugs at compile time, effectively providing "free" testing. This foundation allows teams to be more selective about where they invest in explicit tests. The observation that static typing also improves AI code generation is a practical bonus that's becoming increasingly relevant.
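To make the "free testing" point concrete, here is a minimal TypeScript sketch; the order types and tracking URL are illustrative, not drawn from the newsletter. A discriminated union lets the compiler reject a whole class of bugs that would otherwise need a runtime test.

```typescript
// A discriminated union makes illegal states unrepresentable: only shipped
// orders carry a tracking id, and the compiler enforces that invariant.
type Order =
  | { status: "pending"; submittedAt: Date }
  | { status: "shipped"; submittedAt: Date; trackingId: string };

function trackingUrl(order: Order): string | undefined {
  if (order.status === "shipped") {
    // Inside this branch the type is narrowed, so order.trackingId
    // is guaranteed to exist.
    return `https://tracking.example.com/${order.trackingId}`;
  }
  // Referencing order.trackingId here would be a compile-time error --
  // exactly the "undefined property" bug a unit test would otherwise
  // have to catch at runtime.
  return undefined;
}
```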
The hierarchy of test value presented deserves careful consideration. Integration tests sit at the apex, covering execution paths across multiple components and mapping directly to business-critical user journeys. Unit tests remain valuable but shouldn't become an obsession; reserve them for genuinely complex logic or code that is likely to change. End-to-end tests, being expensive to maintain, should target only what you'd otherwise check manually before each release.
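As an illustration of where that apex value comes from, here is a sketch of an integration test for one business-critical journey. It assumes a hypothetical Express app exported from ./app with a POST /signup route backed by a test database; the route, payloads, and the choice of supertest plus Vitest are assumptions, not the newsletter's.

```typescript
// Exercise the real HTTP layer, routing, validation, and persistence in one
// test, rather than asserting on each component in isolation.
import request from "supertest";
import { describe, it, expect } from "vitest";
import { app } from "./app"; // hypothetical Express app under test

describe("signup journey", () => {
  it("creates an account and returns a session token", async () => {
    const res = await request(app)
      .post("/signup")
      .send({ email: "new.user@example.com", password: "correct horse battery" });

    // Assert on business outcomes, not implementation details.
    expect(res.status).toBe(201);
    expect(res.body.token).toBeDefined();
  });

  it("rejects a duplicate email", async () => {
    const payload = { email: "dup@example.com", password: "pw-123456" };
    await request(app).post("/signup").send(payload);

    const res = await request(app).post("/signup").send(payload);
    expect(res.status).toBe(409);
  });
});
```

A handful of tests like this cover the journeys you would otherwise verify by hand before a release, which is also the bar the newsletter sets for E2E coverage.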
The boldest recommendation involves eliminating staging environments entirely. By combining canary rollouts, feature flags, comprehensive testing, and PR previews, teams can ship directly to production with confidence. This removes a friction point that often delays releases by days while providing faster feedback loops. It's not universally applicable, but for teams with the infrastructure maturity to support it, the productivity gains compound significantly.
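The mechanics are simpler than they sound. Below is a minimal sketch of deterministic, percentage-based rollout; in practice the flag values would come from a flag service or config store rather than a hard-coded object, and the flag name and percentages here are purely illustrative.

```typescript
import { createHash } from "node:crypto";

interface Flag {
  enabled: boolean;
  rolloutPercent: number; // 0-100: share of users routed to the new code path
}

// Illustrative in-memory flag store; real setups read this from a flag
// service or config table so it can change without a deploy.
const flags: Record<string, Flag> = {
  "new-checkout": { enabled: true, rolloutPercent: 5 }, // 5% canary slice
};

// Deterministic bucketing: the same user always lands in the same bucket,
// so the canary cohort stays stable while the percentage is ramped up.
function isEnabled(flagName: string, userId: string): boolean {
  const flag = flags[flagName];
  if (!flag || !flag.enabled) return false;

  const hash = createHash("sha256").update(`${flagName}:${userId}`).digest();
  const bucket = hash.readUInt32BE(0) % 100;
  return bucket < flag.rolloutPercent;
}

// Ship the new path dark, ramp 5% -> 25% -> 100% while watching error rates,
// and set rolloutPercent back to 0 for an instant "rollback" with no deploy.
export function checkout(userId: string): string {
  return isEnabled("new-checkout", userId) ? "new checkout flow" : "existing checkout flow";
}
```

This is also where the observability tradeoff bites: the ramp is only safe if error rates and key metrics are sliced by flag state, so regressions in the canary cohort surface before the percentage increases.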
For architects evaluating team practices, this framework provides useful criteria for auditing existing test strategies. Many teams over-invest in unit tests while neglecting integration coverage, or maintain expensive E2E suites testing happy paths that simple integration tests would catch faster.
Key takeaways:
- Integration tests typically provide the highest return on investment across most codebases
- Static typing catches bugs early and serves as a form of documentation
- Unit tests should target complex, change-prone code rather than achieving arbitrary coverage
- Consider eliminating staging by combining feature flags, canary releases, and PR previews
Tradeoffs:
- Skipping staging accelerates releases but requires mature observability and rollback capabilities
- Prioritizing integration tests catches more real-world bugs but makes failure isolation harder
- Minimal unit testing speeds development but may miss edge cases in complex logic
Link: Minimum viable testing, good interviews, and growing engineers
Designing Engineering Interviews That Actually Work
TLDR: Make interviews resemble actual work as closely as possible. This means using problems similar to real challenges, testing collaboration rather than solo performance, and transparently sharing your process with candidates upfront.
The interview design section addresses a fundamental disconnect in technical hiring: many processes test for skills that bear little resemblance to daily work. Algorithmic puzzle-solving ability, while demonstrating certain cognitive capabilities, rarely predicts success in building and maintaining production systems. The solo coding session under observation creates stress conditions that don't exist in normal development.
The north star principle of making interviews mirror actual work seems obvious but requires genuine organizational commitment. It means abandoning LeetCode-style challenges in favor of problems drawn from your actual codebase or domain. It means evaluating collaboration skills by having candidates work alongside team members rather than performing alone. These changes require more preparation and coordination but produce better signal about candidate fit.
The recommendation to share interview processes upfront, potentially in job descriptions, represents sophisticated talent strategy. Far from "giving away the answers," this transparency creates alignment and commitment. Candidates can prepare meaningfully, reducing performance anxiety that introduces noise into evaluations. It also signals organizational maturity, attracting candidates who value clear communication.
The reference to Notion's comprehensive interview preparation guides demonstrates what excellence looks like. Providing detailed guidance for every position helps candidates succeed, which ultimately serves the company's interest in accurate assessment. When candidates know what to expect, their performance better reflects their actual capabilities rather than their tolerance for ambiguity and surprise.
For engineering leaders building or refining interview processes, the framework provides clear evaluation criteria. Each stage should answer: "Does this resemble what the person will actually do in this role?" If the answer is no, the stage may be generating noise rather than signal.
Key takeaways:
- Interview problems should closely match real work challenges candidates will face
- Test collaboration skills by having candidates work with team members
- Share interview processes transparently to improve candidate preparation and commitment
- Well-documented interview guides signal organizational maturity and attract better candidates
Tradeoffs:
- Interviews that mirror real work require more preparation but provide better signal on job fit
- Transparent processes improve candidate experience but may reduce assessment of adaptability
- Collaborative interviews reveal teamwork skills but take more team member time
Link: Minimum viable testing, good interviews, and growing engineers
Growing Junior Engineers in the AI Era
TLDR: AI disrupts the traditional learning progression where juniors gradually build expertise through writing code. Because junior engineers now review AI-generated code before they have developed the judgment to evaluate it, they need earlier code review training, a stronger emphasis on foundational skills, and more intentional mentorship.
The interview with Meri Williams, CTO at Pleo, surfaces a genuinely important concern about AI's impact on developer growth trajectories. The traditional progression worked well: write small pieces of code, take on larger tasks, build expertise through accumulation. Junior engineers learned by doing, developing intuition through thousands of small decisions and their consequences.
AI fundamentally disrupts this flow. Junior engineers now spend more time reviewing and iterating on AI-generated code than writing from scratch. But effective code review requires exactly the experience they haven't yet accumulated. They lack the pattern recognition to spot security vulnerabilities, the architectural sensibility to identify poor abstractions, and the maintenance perspective to predict future pain points.
The proposed adaptations merit serious consideration. Earlier code review training means teaching juniors to spot problems they haven't personally encountered, a fundamentally different pedagogical approach. Emphasizing foundational skills like security, performance, scalability, and maintainability provides frameworks for evaluating code beyond "does it work?" Doubling down on pair programming and design sessions creates the guided practice that solo AI-assisted coding lacks.
The observation that many juniors already use AI brilliantly is encouraging. Using ChatGPT to explore problem structure before coding, leveraging Claude for architectural thinking, and treating AI as an endlessly patient mentor all represent mature, learning-oriented tool use. The challenge isn't eliminating AI from junior development but channeling it to accelerate growth rather than substitute for it.
For engineering managers and team leads, this demands intentional investment in junior development. The returns from hiring promising juniors depend on successfully growing them, which now requires deliberate intervention rather than relying on natural progression through task assignment.
Key takeaways:
- AI disrupts traditional junior developer learning by replacing writing code with reviewing it
- Juniors reviewing AI code lack experience to identify security, architecture, and maintainability issues
- Earlier code review training and stronger foundational skills emphasis become essential
- Pair programming and design sessions provide guided practice AI-assisted solo work lacks
Tradeoffs:
- AI accelerates junior output but may slow deep skill development without intervention
- Treating AI as a mentor provides patient explanations but misses the judgment human mentors offer
- Earlier code review training speeds evaluation skills but may feel abstract without implementation experience
Link: Minimum viable testing, good interviews, and growing engineers
This summary was compiled from the Refactoring newsletter by Luca Rossi. The perspectives presented aim to provide actionable insights for engineering leaders navigating testing strategy, hiring practices, and team development in an AI-augmented landscape.