TanStack Start Gets RSC, Bun v1.3.12, Caveman Mode for AI Agents, and Next.js Logging
Published on 14.04.2026
TanStack Start Gets React Server Components Your Way
TLDR: TanStack Start treats RSCs as fetchable, cacheable data streams instead of a server-owned component tree. You decide when and where to render them, not the framework.
The TanStack team published a detailed breakdown of their experimental React Server Components implementation, and it takes a fundamentally different approach from what Next.js does. Instead of making the server own the entire component tree, TanStack treats RSCs as React Flight streams that the client can fetch, cache, and render on its own terms. This is a significant philosophical shift.
In Next.js, your app orbits around the server. The framework decides how RSCs are created, where they render, and how interactive boundaries are defined. TanStack Start flips this: RSCs become just another piece of async data, like JSON from an API endpoint. You can create them anywhere on the server, decode them wherever you want, and cache them however you like using existing tools like TanStack Query or TanStack Router.
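The mental model is easiest to see as a sketch. Everything here is illustrative, not a TanStack API: the URL, the cache, and the helper name are hypothetical. The point is simply that a Flight payload can be fetched and cached like any other async resource.

```javascript
// Illustrative only: treat an RSC (Flight) payload as plain async data.
// The URL, cache, and helper name are hypothetical, not TanStack APIs.
const rscCache = new Map();

async function fetchRscPayload(url, fetchImpl = fetch) {
  if (rscCache.has(url)) return rscCache.get(url); // cache it however you like
  const response = await fetchImpl(url);
  const payload = await response.text(); // the Flight stream, as opaque data
  rscCache.set(url, payload);
  return payload;
}

// A caching layer like TanStack Query would sit where `rscCache` is;
// decoding the payload into React elements happens wherever you choose.
```

The design choice this illustrates is ownership: the server produces a stream, but the client decides when to fetch it, how long to keep it, and where to render it.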
The implementation introduces something called Composite Components, which lets the server render UI while exposing slots for client content. The server positions opaque placeholders but cannot inspect or transform what the client puts in those slots. The client owns the final tree assembly, an inversion most RSC systems do not attempt.
They also measured real performance on their own site. Blog pages dropped about 153 KB gzipped from the client bundle, and Total Blocking Time went from 1,200 milliseconds down to 260 milliseconds on one page. But they are honest about the limits: pages dominated by interactive UI shells barely moved. RSCs help when pages are content-heavy or dependency-heavy, not when the page is already mostly client state.
One thing the article glosses over is the migration cost. Moving from a traditional SPA to any RSC-based architecture requires rethinking data fetching, caching boundaries, and error handling. The fact that TanStack intentionally does not support "use server" actions is a security-conscious choice, but it also means you lose the convenience of implicit RPCs that other RSC frameworks offer.
Logging in Next.js Is Hard Because It Runs in Three Runtimes
TLDR: A typical Next.js app executes in Node.js, Edge, and the browser simultaneously. Most logging libraries only target Node, leaving Edge middleware and client-side code in the dark.
The Sentry team wrote a thorough analysis of why logging in Next.js is genuinely difficult. The core problem is runtime fragmentation. Your "frontend code" is actually a mix of Server Components running on Node, middleware possibly running on Edge, and Client Components running in the browser. Most JavaScript loggers assume Node.js and rely on APIs like AsyncLocalStorage or the filesystem module that simply do not exist in the browser or Edge runtimes.
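A sketch of the kind of runtime check a cross-runtime logger needs before touching Node-only APIs. The exact probes are assumptions, though the `EdgeRuntime` global is what Vercel's Edge Runtime exposes:

```javascript
// Hypothetical runtime probe: branch before touching Node-only APIs
// like AsyncLocalStorage or node:fs.
function detectRuntime() {
  if (typeof window !== "undefined") return "browser";
  // Vercel's Edge Runtime defines a global `EdgeRuntime` string.
  if (typeof EdgeRuntime !== "undefined") return "edge";
  if (typeof process !== "undefined" && process.versions?.node) return "node";
  return "unknown";
}
```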
The article walks through two practical solutions. LogTape is a newer logging library built from scratch with no dependencies that runs natively in all three runtimes. You configure categories and a Sentry sink, and each logger can be filtered independently in Sentry. The alternative is using Sentry's built-in logger from the Sentry Next.js SDK, which is also runtime-agnostic and requires no additional dependencies if you already use Sentry for error tracking and tracing.
What makes this piece valuable is the tracing integration. Both approaches connect logs to a unique trace ID per request, so you can query all logs from a single request across all three runtimes. When something breaks in production, you do not need to guess whether the error came from middleware, server rendering, or client-side code. The trace ID ties it all together.
The article misses an important discussion about cost. Structured logging at scale, especially when shipped to an external service like Sentry, gets expensive fast. When every interactive component logs its mount state, every middleware call is recorded, and every server-side data fetch emits a structured entry, the volume adds up quickly. The article mentions filtering for noise reduction but does not go deep enough into what a reasonable sampling strategy looks like for a production Next.js app.
Caveman Mode Cuts LLM Token Usage by 65 Percent
TLDR: A Claude Code skill plugin makes AI agents respond in caveman-speak, cutting output tokens by an average of 65 percent while preserving full technical accuracy.
This is one of those ideas that sounds like a joke until you look at the numbers. The Caveman plugin strips articles, filler words, pleasantries, and hedging from AI agent responses. Instead of getting a paragraph explaining why your React component re-renders, you get "New object ref each render. Inline object prop equals new ref equals re-render. Wrap in useMemo." Same fix, 75 percent fewer words.
The benchmark data is compelling. Across ten tasks, the average token savings was 65 percent, ranging from 22 percent for architecture discussions to 87 percent for bug explanations. The plugin works across Claude Code, Codex, Gemini CLI, Cursor, Windsurf, Cline, and GitHub Copilot. It even has a Classical Chinese mode for maximum compression.
What interests me most is the research paper cited in the README. A March 2026 study found that constraining large language models to brief responses improved accuracy by 26 percentage points on certain benchmarks and completely reversed performance hierarchies. Verbose models are not necessarily more correct. The constraint forces the model to focus on substance over style.
The plugin also includes a caveman-compress feature that rewrites your CLAUDE.md and other memory files into caveman-speak, saving an average of 46 percent on input tokens per session. This matters because context window usage directly affects cost and speed.
Bun v1.3.12 Adds Markdown Rendering, Cron Scheduling, and JSC Upgrades
TLDR: Bun 1.3.12 brings a built-in markdown-to-ANSI renderer, an in-process cron scheduler, async stack traces for native APIs, and a major JavaScriptCore engine upgrade with using declarations and JIT improvements.
This release is packed. Bun now has Bun.markdown.ansi for rendering markdown directly to colored terminal output, complete with Kitty graphics protocol support for inline images in compatible terminals. The in-process Bun.cron scheduler lets you run cron jobs that share state with the rest of your application; there is no overlap protection, and schedules are interpreted in UTC.
The JavaScriptCore upgrade includes over 1,650 upstream commits. Native using and await using declarations for explicit resource management are now supported. Array.isArray got a JIT intrinsic boost, String.includes has a faster single-character search path, and promise resolution got micro-optimized. URLPattern.test and exec are up to 2.3 times faster by eliminating temporary object allocations.
On the bugfix side, there are dozens of fixes covering Node.js compatibility, memory leaks, Web API edge cases, and the JavaScript bundler. Notable fixes include process.env being empty when the CWD is in a directory without read permission, a memory leak in vm.Script calls, and fs.statSync returning wrong inode numbers on NFS mounts.
The TCP_DEFER_ACCEPT optimization for Bun.serve on Linux is a nice touch. It defers connection acceptance until the client has actually sent data, collapsing two event loop wake-ups into one. This is the same optimization nginx uses.
Uses for Nested Promises in Concurrency Control
TLDR: Nested promises are useful when one async function invokes another but should not block on the inner function's completion. A readers-writer lock implementation demonstrates why promise flattening can be a problem.
This is a rare deep dive into JavaScript concurrency primitives. The author was building a readers-writer lock for an encrypted document store and discovered that JavaScript's automatic promise flattening in then and await was actively working against them. The RWLock needs to check if a queue is empty and then push a function into another queue atomically, but the microtask delay introduced by await means all concurrent calls see the queue as empty and execute immediately.
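The race is easy to reproduce in isolation. Names here are illustrative, not the author's RWLock: an `await` between the emptiness check and the push hands control to the microtask queue, so every concurrent caller sees the queue as empty.

```javascript
const waiters = [];
const log = [];

async function acquireBroken(id) {
  const wasEmpty = waiters.length === 0; // check...
  await Promise.resolve();               // ...microtask boundary, e.g. some awaited setup...
  waiters.push(id);                      // ...then push: the two steps are no longer atomic
  log.push(wasEmpty ? `${id}: ran immediately` : `${id}: queued`);
}

await Promise.all([acquireBroken("A"), acquireBroken("B")]);
// Both callers took the "ran immediately" branch, because both
// emptiness checks ran before either push.
```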
The solution uses a nested promise pattern: the inbox queue returns a Promise of an object containing a Promise, and the outer wrapper prevents automatic flattening. This lets the function advance through already-completed steps without blocking the inbox on the result of executing those steps. It is a way of making one promise not wait on another.
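The flattening behavior, and the object-wrapper escape hatch, can be demonstrated directly (the `promise` property name is illustrative):

```javascript
const inner = new Promise(() => {}); // a promise that never settles

// Resolving a promise *with* another promise flattens: the outer one
// now waits on `inner` and never settles either.
const flattened = Promise.resolve(inner);

// Wrapping the inner promise in a plain object defeats flattening:
// the outer promise settles immediately, carrying the inner one as data.
const wrapped = Promise.resolve({ promise: inner });

const timeout = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

const a = await Promise.race([flattened.then(() => "settled"), timeout(20, "hung")]);
const b = await Promise.race([wrapped.then(() => "settled"), timeout(20, "hung")]);
// a is "hung" (flattening made the outer promise hang too);
// b is "settled" (the wrapper object settled immediately).
```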
The article explains why the Promises/A+ spec authors chose implicit flattening: convenience. Nested arrays are useful data structures, but nested promises just represent how many async operations were needed, which is rarely useful for normal code. However, when you are actively managing concurrency, you sometimes need one async operation to trigger another without blocking on it.
What the author avoids discussing is how this pattern interacts with error handling. If the inner promise rejects, does the outer promise catch it? The article focuses on the happy path, which is typical for concurrency writing but leaves a gap for production use.
The Intl API Is the Best Browser API You Are Not Using
TLDR: The Intl family of APIs handles dates, numbers, currencies, lists, plurals, text segmentation, and locale-aware sorting, all built into every modern browser with no dependencies.
This is an exhaustive guide to the Intl APIs that covers far more than most developers know exists. RelativeTimeFormat gives you "in 3 days" or "2 hours ago" with proper locale grammar. DurationFormat handles everything from "2 hours, 45 minutes" to digital clock format. NumberFormat handles currencies with correct symbol placement, decimal separators, and accounting notation for negative values.
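A few of these in action. Outputs shown are for en-US and de-DE under a full ICU build; the exact spacing around currency symbols can vary slightly by ICU version.

```javascript
const rtf = new Intl.RelativeTimeFormat("en-US", { numeric: "auto" });
rtf.format(3, "day");   // "in 3 days"
rtf.format(-2, "hour"); // "2 hours ago"

// Currency formatting: locale-correct symbol placement and separators.
const eur = new Intl.NumberFormat("de-DE", { style: "currency", currency: "EUR" });
eur.format(1234.5); // "1.234,50 €"

// Accounting notation wraps negative amounts in parentheses.
const usd = new Intl.NumberFormat("en-US", {
  style: "currency",
  currency: "USD",
  currencySign: "accounting",
});
usd.format(-5); // "($5.00)"
```

Note that each formatter is created once and reused; construction is the expensive part.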
The ListFormat API turns arrays into natural-language lists with proper conjunction or disjunction connectors, handling the Oxford comma debate by letting you pick en-US or en-GB. PluralRules gives you the correct plural form for any number in any language, from English's simple one/other to Arabic's six forms. Segmenter breaks text into words, sentences, or graphemes correctly for languages like Japanese that do not use spaces. Collator sorts strings in locale-aware order, with numeric sorting that puts chapter9 before chapter10.
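The same pattern holds for the list, plural, segmentation, and collation APIs:

```javascript
const andList = new Intl.ListFormat("en-US", { type: "conjunction" });
andList.format(["HTML", "CSS", "JS"]); // "HTML, CSS, and JS" (Oxford comma in en-US)

const plurals = new Intl.PluralRules("en-US");
plurals.select(1); // "one"
plurals.select(4); // "other"

// Word segmentation for a language that does not use spaces.
const segmenter = new Intl.Segmenter("ja", { granularity: "word" });
const words = [...segmenter.segment("今日は良い天気です")].map((s) => s.segment);

// Numeric collation: chapter9 sorts before chapter10.
const collator = new Intl.Collator("en", { numeric: true });
const sorted = ["chapter10", "chapter9"].sort(collator.compare);
```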
The article makes a good point about the shared foundation: pick a locale, pick some options, create a formatter, reuse it with your data. Whether formatting dates, numbers, currencies, lists, plurals, words, or sorted strings, the API shape stays similar. This consistency makes the whole family easier to learn than it first appears.
What is missing is a frank discussion about browser support gaps. While most Intl APIs are well-supported, some like DurationFormat are newer and may need polyfills for older browsers. The article also does not address the performance cost of creating formatters, which matters in tight loops.
You Cannot Cancel a JavaScript Promise Except Sometimes You Can
TLDR: You can interrupt async functions by returning a promise that never resolves. The garbage collector cleans up the suspended function when nothing references it anymore.
The Inngest team wrote a clever piece about how their SDK interrupts workflow functions on serverless infrastructure. Since each invocation has a hard timeout, the runtime needs to stop the function, save progress, and resume later. Throwing an exception does not work because user try-catch blocks swallow the interruption. Generators give clean interruption but force unfamiliar syntax on users.
The solution: return a promise that never resolves. When the workflow function awaits this promise at a step boundary, it hangs. The runtime detects the hang via a setTimeout macrotask, saves the step result, and exits. On the next invocation, the function re-executes from the top, memoized steps return instantly, and it advances to the next new step.
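A stripped-down sketch of that replay loop. The names are hypothetical, and the real SDK persists the memo between serverless invocations rather than keeping it in memory:

```javascript
const NEVER = new Promise(() => {}); // settles nothing, ever

function makeStep(memo) {
  return async (name, fn) => {
    if (name in memo) return memo[name]; // memoized: advance instantly
    memo[name] = await fn();             // new step: run it once...
    await NEVER;                         // ...then suspend this invocation
  };
}

async function workflow(step) {
  const a = await step("load", () => 1);
  const b = await step("double", () => a * 2);
  return b;
}

const memo = {};
let result;
for (let run = 1; run <= 3; run++) {
  // Each invocation replays from the top; the first non-memoized step hangs.
  workflow(makeStep(memo)).then((r) => { result = r; });
  await new Promise((resolve) => setTimeout(resolve)); // let this invocation drain
}
// After three invocations: memo holds both step results and the final
// run returns normally. The two hung invocations are simply abandoned.
```

Note that the hung invocations do not keep the process alive: a pending promise is just an object, which is exactly the garbage collection story the article tells.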
The garbage collection story is the surprising part. An unsettled promise is just an object in memory. If nothing references it and the suspended function's call stack becomes unreachable, the garbage collector cleans everything up. The article demonstrates this with FinalizationRegistry, showing that even promises that hang forever get collected when their references are severed.
The catch is reference chains. If anything holds a reference to the hanging promise or the suspended function's closure, the garbage collector cannot touch it. The pattern only works when you intentionally sever all references, which requires careful design.