Shadcn, React Best Practices, and the AI Skills Erosion Debate

Published on 16.01.2026

Just Fucking Use Shadcn

TLDR: Shadcn/ui has become the de facto standard for React component libraries by offering a copy-paste model that gives developers full ownership and control, rather than the traditional npm dependency approach.

The React ecosystem has seen countless component libraries come and go, but shadcn/ui represents a fundamental shift in how we think about UI components. Instead of installing yet another dependency that abstracts away control and couples your project to external updates, shadcn takes a radically different approach: you copy the component source code directly into your project.

This isn't laziness or a hack; it's a deliberate architectural decision. Built on Radix UI primitives (or the newer Base UI), the components come with accessibility baked in from the start. The Tailwind CSS styling means you're working with utility classes you already know, and since you own the code, customization is as simple as editing a file rather than wrestling with CSS overrides or prop drilling.
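To make the ownership model concrete, here is a minimal sketch of roughly what a component looks like once the CLI (npx shadcn@latest add button) has copied it into your tree. The variant set is trimmed for brevity, and real shadcn components wire up extras like Radix's Slot; treat this as an illustration rather than the exact generated file:

```tsx
// components/ui/button.tsx -- lives in your repo, yours to edit
import * as React from "react";
import { cva, type VariantProps } from "class-variance-authority";

// Styling is colocated as Tailwind utility classes. Changing the
// design system means editing these strings, not overriding a
// third-party stylesheet.
const buttonVariants = cva(
  "inline-flex items-center justify-center rounded-md text-sm font-medium transition-colors disabled:opacity-50",
  {
    variants: {
      variant: {
        default: "bg-primary text-primary-foreground hover:bg-primary/90",
        outline: "border border-input bg-background hover:bg-accent",
      },
      size: {
        default: "h-10 px-4 py-2",
        sm: "h-9 px-3",
      },
    },
    defaultVariants: { variant: "default", size: "default" },
  }
);

export interface ButtonProps
  extends React.ButtonHTMLAttributes<HTMLButtonElement>,
    VariantProps<typeof buttonVariants> {}

// A plain button element; the accessibility-heavy widgets (dialogs,
// menus, popovers) are the ones that lean on Radix/Base UI primitives.
export function Button({ className, variant, size, ...props }: ButtonProps) {
  return (
    <button className={buttonVariants({ variant, size, className })} {...props} />
  );
}
```

Because there is no upstream package, there is also no API surface to work around: adding, say, a destructive variant is a one-line edit to buttonVariants.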

The ecosystem has evolved beyond just the core components. A registry system now allows developers to share custom components, extending the library's capabilities while maintaining the same ownership model. This is particularly valuable for teams building design systems—you get a solid foundation without vendor lock-in.

For architects and teams evaluating component strategies, shadcn represents a middle ground between building everything from scratch and fully delegating to external libraries. You get the speed of using pre-built components with the flexibility of owning your code. The tradeoff is that you're now responsible for maintaining those components, but that's often preferable to being stuck on an outdated library version.

Key takeaways:

  • Copy-paste model provides full code ownership versus traditional npm dependencies
  • Built on accessible Radix UI/Base UI primitives with Tailwind styling
  • Registry system enables ecosystem growth while maintaining ownership philosophy
  • Ideal for teams wanting pre-built foundations without vendor lock-in

Tradeoffs:

  • Gain full control and customization freedom but sacrifice automatic updates from library maintainers
  • Own your component code but accept responsibility for accessibility and bug fixes

Link: Just Fucking Use Shadcn


Introducing: React Best Practices by Vercel

TLDR: Vercel released a structured repository with 40+ performance optimization rules across 8 categories, prioritized by impact level to help teams focus on what matters most.

Performance optimization in React applications often feels like chasing ghosts. Teams spend hours micro-optimizing useCallback and useMemo usage while ignoring the elephant in the room: massive bundle sizes and waterfall request patterns that tank Core Web Vitals. Vercel's new react-best-practices repository aims to end this confusion by providing a prioritized framework for optimization.

The key insight here is the emphasis on priority. Not all performance issues are created equal. The framework guides developers to tackle high-impact problems first—eliminating data fetching waterfalls, reducing JavaScript bundle sizes, and optimizing critical rendering paths—before diving into micro-optimizations that might save milliseconds at best.
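To illustrate the waterfall category with a hand-rolled example (not a rule quoted from the repository; the fetch helpers are hypothetical), compare sequential awaits against Promise.all for requests that don't depend on each other:

```ts
// Hypothetical fetchers standing in for real API calls.
const fetchUser = (id: string) => fetch(`/api/users/${id}`).then((r) => r.json());
const fetchPosts = (id: string) => fetch(`/api/users/${id}/posts`).then((r) => r.json());
const fetchBilling = (id: string) => fetch(`/api/users/${id}/billing`).then((r) => r.json());

// Waterfall: three independent requests run one after another,
// so total latency is the sum of three round trips.
async function loadDashboardSlow(userId: string) {
  const user = await fetchUser(userId);
  const posts = await fetchPosts(userId);     // doesn't depend on `user`
  const billing = await fetchBilling(userId); // doesn't depend on either
  return { user, posts, billing };
}

// Fixed: all three requests start concurrently, so total latency
// is roughly the slowest single round trip.
async function loadDashboard(userId: string) {
  const [user, posts, billing] = await Promise.all([
    fetchUser(userId),
    fetchPosts(userId),
    fetchBilling(userId),
  ]);
  return { user, posts, billing };
}
```

A fix like this routinely saves hundreds of milliseconds, which is why it sits in a different impact tier than shaving a re-render here and there.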

What makes this particularly valuable is the structured approach across 8 categories, each with impact ratings. This transforms performance optimization from an art into a more systematic discipline. Rather than relying on intuition or cargo-culting tips from Twitter, teams can methodically work through categories and understand the expected return on investment for each optimization.
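As a concrete contrast between impact tiers (my illustration, with a hypothetical ReportChart module that drags in a heavy charting library): lazily loading one such component usually buys more than dozens of memoization tweaks.

```tsx
import { lazy, Suspense } from "react";

// React.lazy keeps the heavy module out of the initial bundle;
// the browser fetches it only the first time the tab renders.
// The import path is hypothetical.
const ReportChart = lazy(() => import("./ReportChart"));

export function ReportsTab({ open }: { open: boolean }) {
  if (!open) return null;
  return (
    <Suspense fallback={<p>Loading chart…</p>}>
      <ReportChart />
    </Suspense>
  );
}
```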

For engineering teams and architects, this repository serves as both an educational resource and an audit checklist. New team members can learn established patterns, while experienced developers can use it to validate their approaches. The impact ratings also help with sprint planning—you can make data-informed decisions about which optimizations to prioritize.

The real value here isn't just the individual rules but the prioritization framework. It acknowledges that developer time is finite and helps teams spend it where it matters most.

Key takeaways:

  • 40+ optimization rules organized across 8 categories with impact ratings
  • Prioritizes high-impact fixes (waterfalls, bundle size) over micro-optimizations
  • Provides systematic approach to performance work rather than ad-hoc fixes
  • Serves as both learning resource and audit checklist for teams

Link: Introducing: React Best Practices


How Our Engineering Team Uses AI

TLDR: MetalBear's team building mirrord in Rust shares that AI excels at code exploration and generating scripts but struggles with complex architectures and long-running development sessions.

Real-world experience reports about AI coding tools are far more valuable than marketing hype, and MetalBear's candid assessment offers genuine insights. Their team works on mirrord, a Kubernetes development tool written in Rust—not exactly a trivial codebase or mainstream technology stack.

The team found AI most valuable in three specific scenarios: understanding unfamiliar code sections, exploring architectural alternatives, and generating utility scripts. These are tasks where AI's ability to quickly synthesize information and propose options saves significant time. When you're navigating a complex codebase or prototyping different approaches, having an AI collaborator can accelerate the exploration phase considerably.

However, the limitations are equally instructive. AI tools struggle with complex architectures that span multiple files and concepts, and their usefulness degrades over long-running sessions: context window limits and the lack of persistent understanding become apparent a few hours into a deep debugging effort.

For teams considering AI tool adoption, this provides a realistic calibration. AI works well as an exploration accelerator and boilerplate generator, but expecting it to handle complex architectural work or replace deep understanding of your codebase will lead to frustration. The sweet spot seems to be using AI for specific, bounded tasks while maintaining human oversight for system-level decisions.

Key takeaways:

  • AI excels at code exploration, architectural prototyping, and script generation
  • Struggles with complex multi-file architectures and long development sessions
  • Most valuable as exploration accelerator rather than autonomous developer
  • Context window limitations become apparent in extended sessions

Tradeoffs:

  • Gain speed in exploration and prototyping but sacrifice deep understanding if over-relied upon
  • Useful for bounded tasks but inadequate for complex architectural decisions

Link: How Our Engineering Team Uses AI


Jensen Huang's Defense of AI Optimism

TLDR: Nvidia's CEO criticizes the "doomer narrative" around AI, arguing that negative messaging about AI risks discourages investment and harms innovation.

Jensen Huang's comments at a recent event have sparked debate about how we discuss AI's potential risks and benefits. The Nvidia CEO specifically targeted what he calls "doomer narratives"—the constant drumbeat of warnings about AI dangers that he believes creates a chilling effect on investment and innovation.

There's a legitimate tension here worth examining. On one hand, Huang has obvious financial interests in AI optimism—Nvidia's dominance in AI hardware means bearish AI sentiment directly impacts his company's valuation. On the other hand, he raises a valid point about how messaging shapes investment behavior and talent allocation.

The counterargument, which Huang doesn't fully address, is that responsible discussion of risks isn't the same as doom-mongering. The AI safety community would argue that understanding potential harms is necessary for building robust systems. The real question is finding the right balance between prudent caution and paralyzing pessimism.

Huang's critique also glosses over the regulatory capture concerns he himself briefly raised. The most effective AI advocacy would combine optimism about beneficial applications with serious engagement on safety concerns, rather than dismissing critics as harmful to innovation.

For technology leaders and architects, this debate matters because it shapes organizational AI strategies. Understanding both the transformative potential and genuine limitations helps teams make better decisions about where to invest in AI capabilities versus where human expertise remains essential.

Key takeaways:

  • Huang argues negative AI messaging discourages beneficial investment and innovation
  • Valid tension exists between discussing risks and enabling progress
  • Speaker has clear financial interest in AI optimism via Nvidia's market position
  • Balanced discourse requires engaging with safety concerns rather than dismissing them

Link: Jensen Huang Is Begging You to Stop Being So Negative About AI


Don't Forget to Google It: The AI Skills Erosion Problem

TLDR: Heavy reliance on AI coding assistants may be eroding fundamental developer skills like reading documentation, effective searching, and critical problem analysis.

This video raises an uncomfortable question that many in the industry are whispering about: are AI coding tools making developers worse at their craft? The argument centers on fundamental skills that historically defined competent developers—the ability to read documentation thoroughly, search effectively for solutions, break down complex problems, and critically evaluate approaches.

When developers default to asking an LLM for every question, they bypass the learning that comes from struggle. Reading documentation teaches you the boundaries and assumptions of a library. Searching Stack Overflow exposes you to edge cases and alternative approaches. Debugging systematically builds mental models of how systems actually work. These skills compound over time, and shortcuts compound their absence.

The piece points to concrete business consequences, citing Tailwind CSS laying off 75% of its team. While the direct causation there is debatable, the broader point about industry transformation and shifting skill expectations is worth considering. If AI handles routine coding tasks, what skills become more valuable? Probably the ones AI struggles with: architectural thinking, understanding business context, and making judgment calls about tradeoffs.

For team leads and architects, this suggests a need to be intentional about skill development. Pairing sessions, code reviews, and occasionally tackling problems without AI assistance might be necessary to maintain and build fundamental capabilities. The goal isn't to reject AI tools but to use them as amplifiers rather than replacements for core engineering skills.

Key takeaways:

  • AI assistants may bypass the learning that comes from documentation reading and debugging
  • Fundamental skills like problem decomposition and critical evaluation risk atrophy
  • Industry layoffs raise questions about changing skill requirements
  • Teams should be intentional about maintaining core engineering capabilities

Tradeoffs:

  • Gain immediate productivity from AI assistance but risk long-term skill development
  • Speed up routine tasks but potentially slow down ability to handle novel problems

Link: Don't forget to Google it...