Published on 11.03.2026
Alright, so Astro 6.0 dropped and it is the first major release since Cloudflare acquired the project back in January. And look, whenever a framework gets acquired, everybody gets nervous. "Oh great, now it is going to be all about their cloud platform." But here is the thing -- what Cloudflare wants Astro to do actually lines up pretty well with what developers want Astro to do.
The headline feature is a complete rewrite of the Astro dev server and build pipeline, now powered by Vite's Environment API. Previously, when you ran astro dev, you were running on Node.js regardless of where you planned to deploy. So if your production target was Cloudflare Workers, your local dev environment was running a completely different runtime. That meant bugs would only surface after deployment, and Cloudflare bindings like KV, D1, and R2 were not available locally at all. That is a terrible developer experience and now it is fixed.
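To make the bindings point concrete, here is a hedged TypeScript sketch of the pattern: a handler that reads and writes a Cloudflare KV binding, plus a hypothetical in-memory stand-in so the same code runs outside Workers. The KVNamespace interface here is trimmed down from the real type, and with the Cloudflare adapter the real bindings surface on the request context rather than being passed in like this.

```typescript
// Sketch only: a trimmed-down KV interface, not the full Workers type.
interface KVNamespace {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

interface Env {
  GREETINGS: KVNamespace; // a KV binding, as declared in wrangler config
}

// The handler logic is runtime-agnostic: it only sees the binding.
async function handleGreeting(env: Env, name: string): Promise<string> {
  const cached = await env.GREETINGS.get(name);
  if (cached !== null) return cached;
  const greeting = `Hello, ${name}!`;
  await env.GREETINGS.put(name, greeting);
  return greeting;
}

// Hypothetical in-memory stand-in -- the kind of shim you previously
// needed because astro dev ran on Node without real bindings.
const memoryKV = (): KVNamespace => {
  const store = new Map<string, string>();
  return {
    async get(key) { return store.get(key) ?? null; },
    async put(key, value) { store.set(key, value); },
  };
};

const env: Env = { GREETINGS: memoryKV() };
handleGreeting(env, "Ada").then((msg) => console.log(msg)); // "Hello, Ada!"
```

The promise of the Environment API rewrite is that shims like memoryKV become unnecessary: astro dev can execute this code in the Workers runtime against real local bindings.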
But here is what makes this genuinely interesting beyond the Cloudflare story: this same architecture opens the door for Bun, Deno, and any other runtime you want to target without Node. So the Cloudflare acquisition is actually pushing Astro toward better runtime-agnostic tooling, which benefits everyone.
There are a few more highlights worth mentioning. Astro 6 ships with a built-in Fonts API that handles caching, fallback generation, and preload hints for Google Fonts, Fontsource, or local files. Live Content Collections let you fetch CMS content at request time instead of rebuild time using the same familiar APIs. And there is an experimental Rust compiler that started as an AI-assisted rewrite of their Go compiler -- because apparently everything that can be rewritten in Rust will eventually be rewritten in Rust, probably by a coding agent.
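For reference, the Fonts API was configured roughly like this in its experimental form during late Astro 5 releases -- treat the exact shape as an assumption and check the 6.0 release notes, since stabilization may have moved things around:

```js
// astro.config.mjs -- shape based on the Astro 5 experimental Fonts API
import { defineConfig, fontProviders } from "astro/config";

export default defineConfig({
  experimental: {
    fonts: [
      {
        provider: fontProviders.google(), // Google Fonts; Fontsource and
        name: "Roboto",                   // local files are also supported
        cssVariable: "--font-roboto",     // exposed for use in your CSS
      },
    ],
  },
});
```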
Bytes #469 - Astro 6.0 goes full Cloudflare
This one is significant, folks. TypeScript 6.0 RC just landed and it is explicitly designed to be the last release built on the current JavaScript codebase. TypeScript 7.0 will be built on a native Go compiler with shared-memory multi-threading, and 6.0 is the bridge release to get you there.
So what does that mean in practice? It means a lot of defaults are changing to reflect how people actually write TypeScript in 2026. Strict mode is now on by default. The default module setting is ESNext. The default compilation target is the current-year ES version, which right now means ES2025. And the types option now defaults to an empty array instead of pulling in every single package from node_modules/@types, which has been silently slowing down builds for years -- some projects saw 20 to 50 percent faster builds from this change alone.
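Spelled out as configuration, the new defaults amount to roughly this sketch -- in 6.0 you get these without writing them, and leaving target unset is what opts you into the current-year ES version:

```jsonc
{
  "compilerOptions": {
    "strict": true,      // now on by default
    "module": "esnext",  // new default module format
    "target": "es2025",  // the default now tracks the current ES year
    "types": []          // @types packages are no longer auto-included
  }
}
```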
The deprecation list reads like a history lesson in JavaScript module systems. ES5 target? Gone. AMD, UMD, and SystemJS module formats? Gone. The old moduleResolution node10 strategy? Gone. The baseUrl option that everybody misused? Gone. The outFile concatenation? Gone -- use a bundler. Even the legacy module keyword for namespaces gets the axe in favor of the namespace keyword.
On the feature side, there is a new stableTypeOrdering flag that makes type ordering deterministic so you can compare compiler output between 6.0 and 7.0. The Temporal API types are now built in. Map gets new getOrInsert and getOrInsertComputed methods. And subpath imports can finally start with the hash-slash pattern, mirroring the at-slash alias convention that bundler users are accustomed to.
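Those Map additions come from the TC39 "upsert" proposal; their semantics can be sketched as standalone helpers (the function form below is illustrative -- the real feature ships as methods on Map):

```typescript
// What map.getOrInsert(key, value) does: insert only if absent, then return.
function getOrInsert<K, V>(map: Map<K, V>, key: K, defaultValue: V): V {
  if (!map.has(key)) map.set(key, defaultValue);
  return map.get(key)!;
}

// What map.getOrInsertComputed(key, fn) does: compute the default lazily,
// only on a miss -- useful when the default is expensive to build.
function getOrInsertComputed<K, V>(
  map: Map<K, V>,
  key: K,
  compute: (key: K) => V,
): V {
  if (!map.has(key)) map.set(key, compute(key));
  return map.get(key)!;
}

// Typical use: grouping without the classic "check, then set" dance.
const groups = new Map<string, string[]>();
for (const word of ["apple", "avocado", "banana"]) {
  getOrInsertComputed(groups, word[0], () => []).push(word);
}
console.log(groups.get("a")); // → ["apple", "avocado"]
```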
The message here is clear: TypeScript is cleaning house. If you have been carrying legacy configuration, now is the time to modernize, because TypeScript 7.0 will not have an ignoreDeprecations escape hatch.
The aggressive deprecation strategy means teams still targeting legacy environments or using older module systems will need to either migrate their configuration or stay on TypeScript 5.x. The stableTypeOrdering flag that helps with migration can add up to a 25 percent slowdown to type checking, so it is meant as a diagnostic tool, not a permanent setting.
Here is a security story that should make every web developer sit up and pay attention. Wikipedia -- yes, Wikipedia -- got hit by a self-propagating JavaScript worm that modified nearly 4,000 pages and infected the user scripts of about 85 editors in just 23 minutes.
The attack exploited a fundamental feature of MediaWiki: it allows both global and user-specific JavaScript files that execute in editors' browsers to customize the wiki interface. A malicious script had been sitting dormant since March 2024, stored in a Russian Wikipedia user page. When a Wikimedia employee reportedly executed it during a security review of user-authored code, the worm went live.
The propagation mechanism was clever and terrifying in its simplicity. Once loaded in a logged-in editor's browser, the script would overwrite that user's personal common.js file with a loader that would automatically fetch and execute the malicious script. If the user had elevated privileges, it would also modify the global MediaWiki Common.js, which runs for every editor on the platform. Then it would request a random page via the Special:Random API and insert hidden JavaScript loaders along with an image of a woodpecker, because why not.
The Wikimedia Foundation contained the incident within 23 minutes, temporarily restricting editing across all projects while they reverted changes and cleaned up infected user scripts. They confirmed that the attack only changed and deleted content on Meta-Wiki, with no evidence of personal information being breached. But the incident highlights a real architectural vulnerability: any platform that allows user-authored code to execute with session-level privileges is one careless click away from a worm like this.
Wikipedia hit by self-propagating JavaScript worm that vandalized pages
Andrew Nesbitt wrote a fantastic piece about how tools that nobody thinks of as package managers have quietly grown into exactly that -- complete with transitive dependency graphs and all the supply chain risks that come with them.
The thesis is straightforward: once a tool develops transitive dependencies, where your thing depends on a thing that depends on another thing, you need lockfiles, integrity verification, resolution algorithms, and a way to answer "what am I actually running and how did it get here?" Most of these tools have not caught up with the solutions that npm, Cargo, and Bundler spent years developing.
GitHub Actions is the poster child here. When you write a uses directive, GitHub recursively downloads and executes actions up to ten levels deep. There is no lockfile, no integrity hashes, and no constraint solving. You can SHA-pin the top-level action, but transitive dependencies remain unpinnable. The tj-actions/changed-files incident in March 2025 demonstrated exactly this vulnerability when an attacker retagged version tags to point at malicious code, affecting over 23,000 repositories.
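The gap is easiest to see in a workflow file. A sketch -- the action name, version tag, and SHA below are all placeholders, not real references:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Tag reference: mutable. The tag can be re-pointed at different code
      # after you review it -- exactly what the retagging attack abused.
      - uses: some-org/some-action@v4  # hypothetical action and tag

      # SHA pin: immutable for THIS action's code...
      - uses: some-org/some-action@0123456789abcdef0123456789abcdef01234567  # placeholder SHA

      # ...but any `uses:` inside that action's own definition is resolved
      # at run time, unpinned, with no lockfile recording what actually ran.
```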
Terraform is the one that actually learned from package managers, with a real lockfile, cryptographic hashes, and GPG-signed providers. But even Terraform has a gap: the lockfile only tracks providers, not modules, so nested module dependencies have the same mutable-reference problems as everything else. Ansible Galaxy, Helm charts -- they all have varying degrees of the same problem.
The bottom line is that if your tool has transitive execution, it is a package manager. And no amount of calling it a "plugin system" or a "marketplace" will stop the supply chain attacks from showing up at your door.
If It Quacks Like a Package Manager
So here is a story that will either make you laugh or cry depending on how close you are to production infrastructure. Amazon's own AI coding tool, Kiro, reportedly caused a 13-hour AWS outage back in December when it autonomously decided that the best course of action was to "delete and recreate the environment."
According to the Financial Times, engineers had deployed Kiro to make certain changes, and the agentic tool -- meaning it can take autonomous actions -- decided that nuking the environment was the way to go. The outage primarily impacted AWS services in China.
Amazon's response is fascinating in its corporate precision. They say it was "merely a coincidence that AI tools were involved" and that the same issue could have occurred with any developer tool or manual action. They frame it as a user access control issue, not an AI autonomy issue -- the staffer involved had "broader permissions than expected." They also dispute calling it an outage at all, characterizing it as an "extremely limited event" affecting only AWS Cost Explorer in one of their 39 geographic regions.
But here is the part that should concern everyone: Amazon launched Kiro in July and has since pushed employees to use it with an 80 percent weekly usage goal, closely tracking adoption rates. Multiple employees told the Financial Times that this was "at least" the second occasion in recent months where AI tools were at the center of a service disruption. When you push aggressive adoption targets for agentic tools without proportionally rigorous guardrails, incidents like this are entirely foreseeable.
The tension here is between velocity and safety. Agentic tools that can take autonomous actions deliver speed but introduce a new class of risk -- especially when combined with overly broad permissions. Amazon's own framing, that this was a permissions issue rather than an AI issue, actually reinforces the point: the existing permission models were not designed for autonomous agents that might decide to delete environments.
13-hour AWS outage reportedly caused by Amazon's own AI tools
Simon Willison, who has been one of the most thoughtful voices on AI-assisted development, published a piece arguing that if your AI coding agents are producing worse code, that is a choice -- and you can choose differently.
His core argument is about technical debt. We take on technical debt because doing things the right way takes too long. But a huge category of technical debt is changes that are conceptually simple but time-consuming: renaming a poorly chosen concept across the entire codebase, splitting a file that has grown to several thousand lines, refactoring duplicate functionality that drifted apart over time. These are exactly the kinds of tasks that coding agents handle well.
The practical advice is compelling. Fire up an asynchronous agent -- Willison mentions Gemini Jules, OpenAI Codex, or Claude Code -- tell it what to refactor, and let it churn away in a branch somewhere while you keep working. Evaluate the result in a pull request. If it is good, land it. If it is close, prompt it again. If it is bad, throw it away. The cost has dropped so low that we can afford a zero tolerance attitude toward minor code smells.
But the part I found most interesting is his point about exploratory prototyping. The best way to make confident technology choices is to prove they work with a prototype. Is Redis the right choice for an activity feed at scale? Have a coding agent wire up a simulation and run a load test. The cost of this kind of experiment has dropped to nearly nothing, which means you can run multiple experiments in parallel and pick the best one. That is genuinely new leverage that we did not have before.
AI should help us produce better code - Agentic Engineering Patterns