CTO Archetypes, Speed of Learning, and Four Articles Worth Your Time

Published on 30.03.2026

PRODUCTIVITY

Leadership Archetypes: Why Two People With the Same Title Are Often Doing Completely Different Jobs

TLDR: Pat Kua's framework for CTO archetypes — and a companion piece on Engineering Manager archetypes — gives language to something most tech leaders feel intuitively but rarely articulate: two people with the same title can be doing fundamentally different work, optimizing for different things, and succeeding or failing based on how well their archetype matches the organization's actual needs.

Summary: The reason archetype frameworks are genuinely useful — and not just another personality quiz to put in your LinkedIn bio — is that they make explicit something that's usually invisible. When you're operating as a CTO or an Engineering Manager, the title on the org chart tells you almost nothing about where you're actually spending your time or what you're being measured against. Naming the archetype you're operating in forces that conversation out into the open.

Luca builds on Pat Kua's guide to CTO archetypes, which maps out how the role shifts as a company matures — from the scrappy Startup CTO who's writing code at midnight and making everything work through sheer force of will, through the Scale-Up CTO who's building systems and teams, to the M&A CTO who's integrating acquisitions and managing organizational complexity. The same person can move through all of these archetypes as their company grows. Or they can leave a company at one stage, join a new one at a different stage, and find that everything that made them successful before seems to work against them now.

That last observation is the one with the most practical weight. It explains something you see repeatedly in engineering leadership: a CTO who was brilliant at one company struggles inexplicably at another. The archetype lens makes this legible. It's not that their skills degraded — it's that the archetype the role demands doesn't match the archetype they're built for and most practiced at. And without the vocabulary to name that mismatch, both sides end up confused and frustrated.

Where this gets specifically useful for people in job searches is the idea that strong candidates use interviews to both sense and shape the role. Archetypes give you the language to ask the right diagnostic questions — is this a Founder CTO role or a Scale-Up role? Is the company asking for someone to build the function from scratch, or stabilize something that's already there? — and to say clearly when there's a mismatch, rather than hoping it will work out anyway.

Key takeaways:

  • The same title can mean completely different jobs — archetypes make this explicit and discussable
  • Archetypes are point-in-time snapshots, not fixed identities — they shift as companies and people evolve
  • Strong candidates use archetype frameworks during interviews to diagnose fit and surface mismatches early
  • Many leadership struggles at new companies trace back to archetype mismatch rather than skill gaps

Why do I care: As a developer moving into architecture or leadership, this is the framework that explains why some moves work and others don't. The code is often the easy part. The hard part is understanding what kind of leader the organization actually needs right now — and whether that's you. This is foundational reading before any senior leadership job search.

Leadership archetypes, perfect teams, and weekly readings 💡


The Perfect Team Formula: CircleCI CTO Rob Zuber on Speed of Learning

TLDR: When asked to describe his ideal product team — the atomic unit of under ten people actually building things together — Rob Zuber's answer didn't start with skills or processes. It started with a question: what's the fastest way to get the information we need to make the next decision?

Summary: Speed of learning is an unusually good north star for a team because it cuts through a lot of the noise that usually dominates team health conversations. Agile ceremonies, sprint velocities, test coverage percentages — these are all proxies. Speed of learning is closer to the thing you actually want. Can this team find out quickly whether they're building the right thing? Can they course-correct before they've invested six months in the wrong direction?

Rob Zuber, speaking from his experience as CTO of CircleCI, identifies three factors that separate teams that learn fast from teams that debate forever and ship late. The first is clear business understanding. Not just feature understanding — genuine comprehension of what the business is trying to achieve and why it matters. Every engineer on the team needs to be able to answer the question "why are we building this?" in terms of outcomes, not just deliverables. Without that foundation, teams optimize locally and miss the bigger picture.

The second factor is a rapid experimentation mindset. This sounds obvious, but it's actively resisted by a surprising number of teams. The pull toward certainty — toward debating until you know you're right before building anything — is real and understandable. But in a world where the fastest learner wins, the correct response to uncertainty is to build the smallest thing that gives you information, ship it, and learn from what happens. Not to schedule another planning meeting.

The third factor is the hardest to create artificially: a high-trust environment where people can raise their hand and say "I have no idea how to do this" without fear. Psychological safety is a concept that gets talked about a lot and practiced relatively rarely. Zuber's framing is concrete — the test is whether someone can admit they're stuck and ask for help. Build the kind of trust where that's possible, and the team's collective intelligence goes up dramatically. Leaders have to model this first.

Key takeaways:

  • Speed of learning — not velocity or coverage — is the real measure of a high-performing product team
  • Business understanding is a prerequisite: engineers need to know the why, not just the what
  • Rapid experimentation beats endless debate — build the smallest thing that gives you information
  • Psychological safety is the multiplier: teams that can admit confusion and ask for help outlearn teams that can't

Why do I care: This is the cleanest articulation I've seen of what makes small product teams actually work. If you're staffing a team, joining one, or trying to understand why a team you're on isn't performing, these three factors are the real diagnostic. Not headcount. Not process. Not tech stack. The fastest learner wins.



Weekly Reading Highlights: ADRs in the AI Era, AI 2028, The Cost of Complexity, and Rands on AI

TLDR: Four curated reads from Luca this week: Martin Fowler's primer on Architecture Decision Records (more useful than ever with AI), a viral imaginary report from 2028 on AI's second- and third-order effects, a piece on how engineering culture systematically undervalues simplicity, and Rands blending management advice with practical AI experiment notes.

Summary: Martin Fowler's piece on Architecture Decision Records is the kind of article worth revisiting every year. The core practice — capturing not just what architectural decisions were made, but why, who was involved, and what alternatives were considered — is deceptively simple and systematically underused. Most teams know they should do this. Most teams don't. With AI tools now capable of helping maintain ADR documentation, the friction excuse is getting weaker. When a new engineer joins and asks "why is this system built this way?", an ADR is the artifact you wish existed. If your team doesn't have them, this article is a good place to start.
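The core fields an ADR captures — the decision, the context behind it, who was involved, and the alternatives that were considered — are simple enough to scaffold with a small script. Below is a hypothetical sketch (the section names, file naming scheme, and sequential numbering are my assumptions, not Fowler's canonical template), just to show how little structure is actually required to get started:

```python
from datetime import date
from pathlib import Path

# Assumed ADR layout: one markdown file per decision, numbered sequentially.
ADR_TEMPLATE = """\
# ADR-{number:03d}: {title}

Date: {date}
Status: Proposed
Deciders: {deciders}

## Context
{context}

## Decision
{decision}

## Alternatives Considered
{alternatives}

## Consequences
(To be filled in as the decision plays out.)
"""

def new_adr(directory: str, title: str, context: str, decision: str,
            alternatives: str, deciders: str) -> Path:
    """Create the next numbered ADR file in `directory` and return its path."""
    docs = Path(directory)
    docs.mkdir(parents=True, exist_ok=True)
    # Next number = count of existing ADR files + 1 (simple sequential scheme).
    number = len(list(docs.glob("adr-*.md"))) + 1
    slug = title.lower().replace(" ", "-")
    path = docs / f"adr-{number:03d}-{slug}.md"
    path.write_text(ADR_TEMPLATE.format(
        number=number, title=title, date=date.today().isoformat(),
        deciders=deciders, context=context, decision=decision,
        alternatives=alternatives))
    return path
```

Calling `new_adr("docs/adr", "Use Postgres for persistence", ...)` would produce `docs/adr/adr-001-use-postgres-for-persistence.md` with the sections ready to fill in. The tooling matters far less than the habit; any format that records the "why" and the roads not taken does the job.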

The Citrini Research piece takes a more provocative format: an imaginary report written from 2028, looking back at AI's second- and third-order effects on work, organizations, and society. At 25 minutes it's a substantial read, but the exercise it puts you through is valuable. The first-order effects of AI are well-covered at this point. The interesting territory is the downstream consequences that are harder to reason about — how industries reshape around changed cost structures, how skills markets shift, how the definition of engineering work evolves. Reading it as a test for your own beliefs — which predictions feel right? which feel dubious? — is the right frame.

Matheus Lima's piece on simplicity vs. complexity addresses something most experienced engineers have felt but rarely articulated as a cultural critique. Over-building is rewarded. Engineers who create elaborate, complex solutions get a compelling narrative: look at all the edge cases I handled, look at the abstraction I designed. Engineers who ship the simplest thing that works — which is often the hardest thing to do — get relative silence. Lima argues this is a cultural and incentive problem. The team that rewards elegance and simplicity produces better software over time, but it requires consciously pushing against default engineering incentives.

Rands has been publishing a blend of his classic management writing and notes from his own AI experiments. The combination works. His AI posts are grounded in the way that most AI commentary isn't — he's actually describing what he tried, what worked, what didn't, and occasionally sharing the code that came out of it. Alongside that, his management writing continues to evolve. Worth following if you haven't already.

Key takeaways:

  • ADRs are worth the investment and are getting easier to maintain with AI tooling — start if you haven't
  • The Citrini 2028 thought experiment is a useful stress-test for your beliefs about AI's trajectory
  • Engineering culture systematically underrewards simplicity — this is worth naming and pushing back against
  • Rands' blend of management writing and AI experiment notes is unusually grounded compared to most AI opinion coverage

Why do I care: Four solid recommendations that cover the full spectrum from practical tooling (ADRs) to big-picture thinking (AI 2028) to culture (simplicity) to individual reflection (Rands on AI). The ADR piece alone is worth sharing with your team this week. The simplicity piece is the one that might generate the most interesting conversation.
