TypeScript 6.0, React useEffect naming, pnpm 11, and open source conspiracies

Published on 27.03.2026

motyl.dev FRONTEND

TypeScript 6.0: The Last JavaScript TypeScript

TLDR: TypeScript 6.0 is officially out, and it is the last version of the compiler built on its own JavaScript codebase. Think of it as a farewell tour before TypeScript 7.0 arrives rewritten in Go with native speed and multi-threaded type checking.

Summary: Let us be clear about what TypeScript 6.0 actually is: a bridge, a migration assistant, and a polite ultimatum. The TypeScript team has spent the past year building an entirely new compiler and language service in Go, promising native execution speed and parallel type checking. TypeScript 7.0 will be that new engine. TypeScript 6.0 is what you need to get through before you can board that train — and the doors are already closing on a lot of things you may have been taking for granted.

The headline changes are defaults that flip in ways that will break projects if you are not paying attention. Strict mode is now enabled by default, which is long overdue and honestly should have happened years ago. The module setting defaults to esnext, acknowledging that ES Modules are the dominant format in 2026. The target now floats to the current-year ECMAScript version, sitting at es2025 right now. And the one that will catch the most projects off guard: types now defaults to an empty array instead of automatically vacuuming up every declaration file in your node_modules/@types directory. That last change alone can improve build times by 20 to 50 percent for projects with large dependency trees. It will also break anything that relied on ambient globals appearing from @types packages without being explicitly declared.
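To make that flip concrete, here is a minimal sketch of code that compiles cleanly under the old defaults but is rejected once strict is on by default; the names are illustrative and the comments paraphrase the compiler's complaints.

```ts
// Accepted under the old default of strict: false; rejected by TypeScript 6.0
// unless you explicitly opt out with "strict": false in tsconfig.json.

function total(prices) {
  // error: parameter 'prices' implicitly has an 'any' type
  return prices.reduce((sum, price) => sum + price, 0);
}

declare const warehouseName: string | undefined;

// error: 'warehouseName' is possibly 'undefined'
console.log(warehouseName.length);
```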

There are genuinely exciting new features in this release as well. The long-awaited Temporal API has reached stage 4, and TypeScript 6.0 ships built-in types for it under the esnext target. If you have been waiting to use proper date and time handling in JavaScript without reaching for a third-party library, the moment has arrived. Alongside that, the ECMAScript upsert proposal also reached stage 4, introducing getOrInsert and getOrInsertComputed on Map and WeakMap. This collapses the verbose has-then-get-or-set pattern that every JavaScript developer has written hundreds of times into a single clean method call.
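For a sense of what both additions look like in day-to-day code, here is a short sketch, assuming a target and lib configuration that exposes the new Temporal and Map upsert types; the identifiers are illustrative.

```ts
// Temporal: real date arithmetic without a third-party library.
const released = Temporal.PlainDate.from("2026-03-27");
const today = Temporal.Now.plainDateISO();
console.log(`Released ${released.until(today).days} days ago`);

// Map upsert: getOrInsertComputed replaces the has/get/set dance...
const tagsByUser = new Map<string, string[]>();
tagsByUser.getOrInsertComputed("user-42", () => []).push("beta-tester");

// ...which previously looked like this.
if (!tagsByUser.has("user-7")) {
  tagsByUser.set("user-7", []);
}
tagsByUser.get("user-7")!.push("beta-tester");
```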

The deprecation list is long and intentional. outFile is gone. The baseUrl option as a module resolution root is deprecated. The moduleResolution node setting, which was never an accurate representation of how modern Node.js resolves modules, is deprecated in favor of nodenext or bundler. AMD, UMD, and SystemJS module targets are gone. ES5 as a target is deprecated, which should surprise very few people given that every modern browser has supported ES2015 for nearly a decade. And esModuleInterop and allowSyntheticDefaultImports can no longer be set to false, because the enabled behavior has been the recommended default for years and disabling it caused subtle runtime failures.

One technically interesting addition is the --stableTypeOrdering flag, which makes TypeScript 6.0's type ordering match TypeScript 7.0's deterministic algorithm. The current compiler assigns internal type IDs based on the order types are encountered, which means the order of union types in emitted declarations can change depending on unrelated program details. TypeScript 7.0 fixes this with a content-based deterministic sort. The new flag lets you preview 7.0's ordering in 6.0, useful for detecting ordering-sensitive inference bugs before upgrading. It does carry up to a 25% type-checking slowdown, so it is diagnostic tooling rather than something to leave on permanently.

What is missing from this announcement is any honest accounting of the migration pain. The TypeScript team frames all of this as "breaking changes designed to help you prepare for 7.0," but the reality is that the combination of strict mode defaulting to true, types defaulting to an empty array, rootDir defaulting to the tsconfig directory, and module defaulting to esnext represents a significant pile of silent build breakage for projects that install the upgrade without reading the changelog. The team deserves credit for providing a ts5to6 codemod tool, and the ignoreDeprecations: "6.0" escape hatch gives teams breathing room. But the framing of this as a smooth transition release is generous.

Key takeaways:

  • TypeScript 6.0 is the last JavaScript-based compiler; TypeScript 7.0 will be written in Go
  • strict defaults to true, module to esnext, target to es2025, types to []
  • Temporal API types and Map.getOrInsert are now built in
  • outFile, baseUrl, moduleResolution node, ES5 target, AMD/UMD/SystemJS are deprecated or removed
  • Run the ts5to6 codemod tool, set explicit types in tsconfig, and test thoroughly before upgrading

Why do I care: Every TypeScript project in production needs to plan for this migration now. The types defaulting to an empty array is the most dangerous silent change — it can strip hundreds of ambient types that your project depends on without a clear error pointing to the cause. The payoff in build performance is real, but only if you migrate deliberately rather than reactively.

Announcing TypeScript 6.0


Name Your useEffect Functions — Seriously

TLDR: Passing an anonymous arrow function to useEffect is the community's most widespread self-inflicted readability wound. Naming those functions costs nothing and immediately transforms how you read, debug, and review React components.

Summary: There is a small change you can make to React code today that costs zero libraries, zero build configuration, and approximately four seconds of typing per effect, and it will meaningfully improve how quickly you and everyone else can understand what a component is doing. Name your useEffect functions. Not with a comment above them, not with a constant declared separately. Name the function expression itself, inline, right at the call site.

The case Neciu Dan makes in his article is built on a concrete example that anyone who has reviewed a component with multiple effects will immediately recognize. Four anonymous effects in a row — each one requiring you to read through the implementation before you can understand its purpose. Your brain performs four separate parse-and-interpret passes just to build a mental map of a single component. Now rename each one: connectToInventoryWebSocket, fetchInitialStock, resetStockOnLocationChange, notifyParentOfStockUpdate. Suddenly you can skim four names and understand the entire data flow before reading a single line of implementation. The component's behavior is now documented in the code itself, not in comments that will drift or in your memory of when you last opened this file.
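Here is a hedged sketch of what two of those effects look like with names applied inline; the component, the fetchStock helper, and the WebSocket URL are hypothetical stand-ins for the article's warehouse example.

```tsx
import { useEffect, useState } from "react";

// Hypothetical data helper standing in for the article's inventory API.
declare function fetchStock(locationId: string): Promise<number>;

function InventoryPanel({ locationId }: { locationId: string }) {
  const [stock, setStock] = useState<number | null>(null);

  // The name states the effect's purpose before you read a line of its body.
  useEffect(function connectToInventoryWebSocket() {
    const socket = new WebSocket(`wss://inventory.example/stock/${locationId}`);
    socket.onmessage = (event) => setStock(Number(event.data));
    return () => socket.close();
  }, [locationId]);

  useEffect(function fetchInitialStock() {
    let cancelled = false;
    fetchStock(locationId).then((value) => {
      if (!cancelled) setStock(value);
    });
    return () => {
      cancelled = true;
    };
  }, [locationId]);

  return <p>In stock: {stock ?? "loading"}</p>;
}
```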

There is a debugging payoff that goes beyond readability during code review. When an anonymous arrow function throws an error, your stack trace shows the function as anonymous. When you have four effects in a component, that is not useful information. A named function expression appears in the stack trace with its name, which tells you exactly which effect broke — and that matters when you are triaging a production error on your phone at 11pm or reading an error report in a monitoring tool without access to your editor. React DevTools also uses function names in profiling output, so anonymous effects show up as anonymous in the profiler as well.

The more interesting observation in this article is that naming reveals design problems. If you struggle to name an effect without using "and" in the name — syncWidthAndApplyTheme, for example — that is the effect telling you it is doing two unrelated things and should be split. The naming requirement functions as a real-time design smell detector. And if the best name you can produce sounds like internal state shuffling — something like syncDerivedValue or updateStateBasedOnOtherState — that is usually a signal that the code should not be an effect at all. It should be a derived value computed during render, or logic moved into an event handler.
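A minimal sketch of that second signal: instead of an effect named something like syncDerivedValue writing to extra state, the value is simply computed during render. The component and threshold are illustrative.

```tsx
type Item = { name: string; quantity: number };

function LowStockBadge({ items }: { items: Item[] }) {
  // No effect and no extra useState: derived directly from props on each render.
  const lowStock = items.filter((item) => item.quantity < 5);
  return <span>{lowStock.length} items running low</span>;
}
```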

The author is honest about one tension worth probing: the fourth effect in the warehouse inventory example, notifyParentOfStockUpdate, is explicitly called out as a pattern the React documentation flags as unnecessary in many cases. The parent could fetch the data itself, or the callback could be invoked at the source of the data change. By naming the effect, the author says, he made the problem visible. That is true, and it is a genuine benefit. But there is something to push back on here. Naming is not a substitute for fixing the design problem. The article acknowledges this, but the framing treats naming as a sufficient first step rather than a diagnostic that should prompt refactoring. In a codebase where effects are systematically misused, naming them clearly reveals the misuse but does not remove it.

Key takeaways:

  • Replace anonymous arrow functions in useEffect with named function expressions using the syntax useEffect(function doSomethingSpecific() {...}, [deps])
  • Named effects appear in stack traces and React DevTools profiler by name, not as anonymous
  • If you cannot name an effect without using "and," split it into two effects
  • If the best name describes internal state shuffling, the code probably belongs in a derived value or event handler instead
  • This also applies to useCallback, useMemo, and reducer functions, though the payoff is highest with useEffect

Why do I care: Code review is where this pays off most. Scanning a diff with four anonymous effects requires reading every implementation to understand scope. Named effects let you assess intent at a glance and focus review attention on implementations that actually look suspicious. In large teams where you regularly review code from components you have never seen before, this is a compounding daily productivity improvement.

Start naming your useEffect functions, you will thank me later


pnpm 11 Beta: A Significant Architectural Overhaul

TLDR: pnpm 11 is in beta, and it represents a substantial rethinking of how the package manager stores, resolves, and manages packages, with SQLite replacing JSON files in the content-addressable store and the global virtual store becoming the default for global installs.

Summary: If pnpm 10 felt like an incremental improvement over 9, pnpm 11 beta reads more like an architectural reckoning. The team has revisited fundamental decisions about storage format, configuration loading, global package isolation, and the lockfile schema — and nearly all of the changes point in the direction of better performance, more predictable behavior, and cleaner separation of concerns.

The most architecturally interesting change is the move to SQLite for storing package index data in the content-addressable store. Previously, package metadata lived as individual JSON files under the store's index directory, which at scale meant thousands of small file reads on every install. SQLite consolidates this into a single database using MessagePack-encoded values and Write-Ahead Logging for concurrent access. The practical effect is fewer filesystem syscalls, better space efficiency for small metadata entries, and faster repeated installs. The store version bumps to 11, and packages missing from the new index are automatically re-fetched.

The handling of global packages has been completely reworked. Each globally installed package or group of packages now gets its own isolated installation directory with its own package.json, node_modules, and lockfile. Previously, all global packages shared a single installation space, which meant that one package's peer dependency requirements could interfere with another's. The new model uses a virtual store at a path based on a hash, which prevents global packages from affecting each other through hoisting changes or version resolution shifts. The tradeoff is that pnpm install -g with no arguments is no longer supported — you must be explicit about what you are installing.

Configuration loading has been tightened significantly. pnpm no longer reads most settings from .npmrc files or the pnpm field in package.json. pnpm-specific settings must now live in pnpm-workspace.yaml. This is a meaningful breaking change for anyone relying on per-project .npmrc files for pnpm settings, and the migration path requires moving those settings into the workspace manifest under a new packageConfigs field. The upside is a cleaner separation between npm-compatible registry and auth settings and pnpm-specific behavior settings.

What deserves more scrutiny is the change in Node.js version support. pnpm 11 drops support for Node.js 18, 19, 20, and 21. Three of those are already end of life, but Node.js 20 remains in maintenance LTS until the end of April 2026 and is still widely deployed in production. Dropping it in a beta release suggests the final release timeline may push teams to upgrade Node.js before they are ready, and the release notes do not explain the motivation for cutting Node.js 20 specifically. That is worth watching as the beta matures.

Key takeaways:

  • The content-addressable store now uses SQLite instead of individual JSON files, improving install performance
  • Global packages are now isolated from each other with their own node_modules and lockfile
  • pnpm-specific settings must now live in pnpm-workspace.yaml, not .npmrc or package.json's pnpm field
  • Node.js 18, 19, 20, and 21 support is dropped
  • New commands: pnpm sbom for generating software bills of materials, pnpm clean for removing node_modules, pnpm runtime set for managing runtimes
  • strictDepBuilds and blockExoticSubdeps are now true by default

Why do I care: If your project uses pnpm in CI with a global virtual store, this beta changes the underlying storage format and configuration loading in ways that will require explicit migration work. The SQLite store and global package isolation are genuine improvements, but the configuration consolidation to pnpm-workspace.yaml and the Node.js version drops mean this is not a passive upgrade.

Release pnpm 11 Beta 0


Zero 1.0: Local-First Sync Engine Hits Stability

TLDR: Zero, Rocicorp's local-first sync engine for web applications, has reached its first stable release after nearly two years of development. The 1.0 designation signals API stability and a commitment to maintenance, even as the functional changes from the preceding release are minimal.

Summary: Rocicorp has declared Zero stable, marking the end of a development period spanning nearly two years, more than fifty releases, and hundreds of bug fixes. The project is a local-first sync engine, which means it handles the problem of keeping client-side application state synchronized with a server while allowing the application to read and write data locally without waiting for network round trips. The promise of local-first architecture has been compelling for years, but building sync engines that handle conflicts, partial connectivity, and schema evolution correctly is genuinely hard, and most implementations either compromise on correctness or on developer experience.

The 1.0 release is functionally almost identical to version 0.26.2. The major version bump is explicitly described as symbolic — there are no breaking changes. What the version signals is that the Zero API is now considered stable, that the team is committed to maintaining it, and that future breaking changes will be rare and carefully managed. For teams evaluating whether to adopt Zero, this is the signal they have been waiting for: the project is no longer a moving target.

The actual new features in 1.0 are narrow but pragmatic. The release adds support for detecting publication changes in Supabase via COMMENT ON PUBLICATION statements, working around Supabase's lack of event trigger support for publications. A handful of bugs are fixed: corrupted Litestream restores, CDC health error handling, a race condition in IPC channel shutdown, orphaned subscriber scenarios, and SQL type handling for time and timetz columns. These are maintenance-grade fixes, not feature announcements — which is appropriate for a stability release.

What is worth questioning here is the local-first space itself. Zero is impressive engineering, but the category has a history of compelling demos and difficult production realities. Conflict resolution, schema migrations with existing synced data, and the operational complexity of running a sync server alongside your primary database are genuine challenges that a stability declaration does not eliminate. The question for any team evaluating Zero is not whether the API is stable — it clearly is — but whether the operational model of running a dedicated sync layer fits their architecture and team capacity.

Key takeaways:

  • Zero 1.0 is the first stable release of Rocicorp's local-first sync engine
  • The API is now stable and the team commits to maintaining backward compatibility
  • Functionally very close to 0.26.2; the 1.0 bump is symbolic
  • New Supabase publication change detection works around Supabase's missing event trigger support for publications
  • Suitable for teams building web apps that need offline-capable, low-latency data access

Why do I care: If you have been watching Zero from the sidelines waiting for a stability signal before evaluating it for a project, 1.0 is that signal. Local-first sync is a real architectural improvement for the right class of applications — anything where perceived performance and offline capability matter. The question is whether your use case justifies the operational overhead.

Zero 1.0


The Top 10 Biggest Conspiracies in Open Source

TLDR: Andrew Nesbitt has written a brilliantly constructed satirical piece cataloguing ten elaborate conspiracy theories about the open source ecosystem — from Dependabot as a corporate surveillance operation to the suggestion that all significant open source is maintained by exactly 14 people operating 3,000 GitHub accounts.

Summary: Satire works when it is built on enough truth to make you momentarily uncertain whether you are reading fiction. Andrew Nesbitt's top ten open source conspiracies thread that needle with impressive precision. The piece reads like a long-form investigative essay — complete with unnamed sources, Freedom of Information requests, and a recurring motif of the number eleven appearing in suspiciously improbable contexts — and it is structured as a slow descent from plausible cynicism to full paranoid fantasy.

The individual entries range from darkly plausible to gleefully absurd. The Dependabot entry argues that the pull requests are a side effect of building a real-time map of corporate dependency hygiene, and the response-time dataset is quietly sold to recruiters. This is funny precisely because it is not entirely implausible. GitHub does have extraordinarily detailed visibility into how engineering teams operate, and the product economics of enterprise GitHub do involve selling organizational intelligence in various forms. The Dockerfile entry argues that the broken syntax was deliberately preserved by consulting firms who had built revenue around explaining it — and then traces those firms to founding membership in the CNCF, which controls the committee that reviews container tooling simplification proposals. That one lands because the CNCF's complexity is a running industry joke.

The Kubernetes entry might be the sharpest: Google's internal economics team predicted a 40% contraction in operations engineering roles from cloud adoption, so the solution was to open source a system complex enough to require dedicated teams but useful enough that adoption was nearly mandatory. YAML was chosen specifically because it is easy to get wrong. Each new Kubernetes concept — pods, services, deployments, statefulsets, daemonsets, ingresses, custom resource definitions — represents roughly 0.3 full-time employees of ongoing maintenance work. If you have ever sat in a Kubernetes configuration review wondering how the complexity got this bad, the conspiracy framing is genuinely cathartic.

The piece culminates in the claim that every open source project with more than 10,000 GitHub stars is maintained by the same 14 people operating 3,000 accounts, funded by DARPA, and that the annual maintainer burnout discourse is generated by a language model fine-tuned on Hacker News comments. The ending pulls all the threads together into a unified theory with the self-aware disclaimer that the author is not connecting the dots, merely pointing out that the dots exist and are arranged in a shape. It is a genuinely well-constructed piece of satirical writing about an industry that has made itself available to exactly this kind of absurdist treatment.

What Nesbitt is actually doing beneath the comedy is pointing at real dynamics: the concentration of open source maintenance in very few hands, the way corporate incentives shape what gets built and what stays complex, the opacity of how funding moves through fiscal sponsors, and the genuine weirdness of how influential individual dependency incidents like left-pad have been. The joke format gives him permission to say things that, stated plainly, would sound merely cynical.

Key takeaways:

  • A satirical long-form piece about open source, structured as a conspiracy theory compendium
  • Works as comedy because it is grounded in genuine industry dynamics: maintainer concentration, corporate incentives, CNCF governance, and ecosystem fragility
  • The Kubernetes entry arguing complexity is a deliberate jobs program is the sharpest and most uncomfortable
  • Worth reading as a companion to any serious discussion of open source sustainability

Why do I care: The funniest satire is the kind that makes you go quiet for a second. The Kubernetes entry should do that to anyone who has spent time fighting YAML configuration in a production cluster and wondering why the abstractions feel the way they do. More seriously, the themes underneath the comedy — maintainer burnout, ecosystem power concentration, the economics of open source complexity — are real problems that the community continues to navigate poorly.

The Top 10 Biggest Conspiracies in Open Source


Choosing a JavaScript Logging Library in 2026

TLDR: The Sentry team has published a comparison of the major JavaScript logging libraries — Pino, Winston, Bunyan, and LogTape — with an opinionated recommendation framework based on runtime targets, bundle size, and cross-environment compatibility.

Summary: If you are still reaching for console.log in production code, the case against that habit is well established: no structured output, no log levels, no transports, no filtering, no redaction of sensitive data before it leaves your application. The more interesting question in 2026 is which logging library to reach for, and the landscape has changed enough that older default choices deserve reexamination.

The comparison covers four dedicated libraries plus Sentry's own logging integration. Pino has been the performance champion for Node.js applications since 2016, and it remains the best choice when you are running server-side Node only and bundle size is a priority. At 3.3 kilobytes gzipped with eleven dependencies, it is compact, fast by its own benchmarks, and well-maintained. The tradeoff is that it runs in the browser only via a polyfill, losing most of the performance advantages that make it compelling in the first place. Winston is the oldest and most configurable option, with the largest ecosystem of transports and the most production deployment history behind it, but at 38.3 kilobytes with seventeen dependencies and no tree-shaking, you pay for that flexibility in bundle size.
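To ground the comparison, here is a brief sketch of the structured-logging basics that console.log lacks, using Pino's documented API; the field names and redaction path are illustrative.

```ts
import pino from "pino";

const logger = pino({
  level: process.env.LOG_LEVEL ?? "info",
  // Strip sensitive fields before the entry leaves the application.
  redact: ["user.password"],
});

// A structured JSON line with level, timestamp, and your own fields,
// instead of a string interpolated into stdout.
logger.info({ orderId: "A-1042", user: { id: 7, password: "hunter2" } }, "order created");

// Child loggers attach shared context to every entry from a subsystem.
const checkoutLog = logger.child({ module: "checkout" });
checkoutLog.debug("price recalculated");
```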

Bunyan gets a brief treatment and a clear recommendation: do not use it for new projects. It has not seen a release in five years and the repository shows minimal activity. It was once valuable for its zero-dependency, JSON-first simplicity, but those qualities are available in more actively maintained alternatives. The genuinely interesting new entrant is LogTape, released in 2023, which is the only library in the comparison that runs natively across Node, Deno, Bun, browsers, and edge runtimes without polyfills. Zero dependencies, tree-shakable, and claiming to be twice as fast as Pino and ten times faster than Winston. For teams building full-stack TypeScript applications that run in multiple environments, LogTape's universal runtime support is a meaningful differentiator.

The part of this article worth reading skeptically is the Sentry Logger section, which occupies a prominent slot in a comparison published on Sentry's own blog. The framing as an objective guide competes with the obvious commercial interest in recommending the Sentry SDK. The trace-connected logging feature is genuinely useful if you are already using Sentry for error and performance monitoring — having logs automatically linked to traces and errors in the same tool reduces context switching during debugging. But if you are not already a Sentry customer, the comparison stops being neutral at this point.

Key takeaways:

  • Pino for Node-only applications where performance and small bundle size are priorities
  • Winston for Node applications that need extensive transport options and rich configuration
  • LogTape for universal runtime support — Node, Deno, Bun, browsers, and edge — or when writing a library that should not force a logging choice on consumers
  • Do not start new projects with Bunyan; it is no longer actively maintained
  • All of these libraries support custom transports to send logs to external observability platforms

Why do I care: Most frontend and full-stack projects would benefit from structured logging earlier than they implement it. The shift toward edge runtimes and server-side rendering in multiple environments makes the universal runtime support of LogTape newly relevant — a library that polyfills in the browser is a different operational model than one that runs natively everywhere your code runs.

Choosing a JavaScript Logging Library: The 2026 Definitive Guide