Install Buttons in HTML, Safari 26.5, Tailwind v4.3, and Choosing Stacks That Don't Punish You Later
Published on 13.05.2026
This week the platform did something it rarely does. It made a hard problem boring. We have a new HTML element for installing web apps, Safari shipped 63 fixes alongside :open and Origin, Tailwind finally gave us scrollbar utilities, and a few writers stepped back to ask bigger questions about why our stacks keep collapsing under their own weight. Let me walk you through what caught my attention.
A new HTML install element
TLDR: Chrome and Edge are jointly trialing an <install> element that renders a browser-controlled install button with zero JavaScript. It works for the current page or, with an installurl attribute, for any other web app. Origin trial runs in Chromium 148 to 153.
Summary: Web app installation has lived in JavaScript for years. You listened for beforeinstallprompt, you stashed the event, you triggered it from your own button, and you hoped your UI was good enough that users trusted the click. The new install element flips that. You drop <install></install> into your markup and the browser draws the button, the icon, and the label in the user's language. Because the browser owns the rendering, it can treat the click as a genuine signal of intent, similar to how permission elements like geolocation work.
Patrick Brosset of Microsoft Edge wrote up the proposal, and the implementation came out of a collaboration with the Chrome team. The element supports cross-origin installs through installurl, which means you can build a small catalog page where each entry installs a different PWA. If the destination manifest doesn't declare an id, you supply a manifestid attribute; its computed value can be copied from the Application tab in DevTools.
Fallback is built in. Anything inside the element renders when the browser doesn't recognize the tag, so progressive enhancement is the default rather than something you bolt on. Events fire for promptaction, promptdismiss, and validationstatuschanged, so you can still observe what the user did without orchestrating the install flow yourself.
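Putting the pieces together, a catalog entry might look something like this — the URLs are placeholders, and the attribute names follow the proposal as described above:

```html
<!-- Browser-rendered install button for another web app.
     installurl points at the app; manifestid is only needed when the
     destination manifest declares no id of its own. -->
<install installurl="https://app.example.com/"
         manifestid="https://app.example.com/manifest-id">
  <!-- Fallback content: rendered only where <install> is unsupported -->
  <a href="https://app.example.com/">Open the app</a>
</install>
```

Because the fallback is just the element's children, unsupported browsers degrade to a plain link with no feature detection required.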
Microsoft also continues to ship the imperative navigator.install() API in parallel. The two approaches have different trade-offs. The element is best when you want browser-trusted UI with minimal code. The JavaScript API is best when you need custom design or want to trigger installs from existing interaction handlers. Both are running origin trials right now, and the teams are explicitly asking the community which approach should win, or whether both should ship.
Key takeaways:
- One HTML element replaces an entire JavaScript install ceremony.
- Cross-origin installs unlock real PWA catalog pages.
- Origin trial in Chrome and Edge 148 to 153, share feedback on the WICG repo.
Why do I care: As a frontend architect, I read this as the platform absorbing yet another piece of glue code we have written badly for a decade. If you have a portfolio of internal tools or a partner ecosystem, an <install> button on a directory page is much easier to maintain than a JavaScript shell that detects support, captures the prompt, and re-renders. It also raises a real question about your custom install UI. If the browser draws it for you, what are you actually adding by drawing it yourself?
Install web apps with the new HTML install element
The Web Install API is ready for testing
TLDR: Microsoft Edge opened an origin trial for navigator.install() in version 143, running through 148. It is the JavaScript counterpart to the new HTML install element and supports same-origin and cross-origin app installs.
Summary: Edge's blog post is the imperative half of the install story. Where the HTML element gives you a drop-in trusted button, navigator.install() lets your own UI trigger the install at exactly the moment you choose. You write the button, you handle the styling, and when the user clicks you call the asynchronous function and wait for the promise.
The motivation is the same as the declarative version. Today every browser exposes installation through different entry points and developers have very little control over when the prompt appears. The Web Install API moves that control into your hands, which matters for app-store-like experiences where the install is part of a larger journey rather than an opportunistic prompt.
The origin trial is enabled per origin via meta tag or HTTP header. Users on your live site get the API without needing to enable any flag. That is the right design for testing real usage patterns, because installation friction is exactly the kind of metric you want to measure with real visitors rather than QA scenarios.
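A sketch of how a custom button's click handler might call the API during the trial. The argument shapes follow the Edge post's description but should be checked against the trial docs; the install function's host object is passed in as a parameter here purely so the snippet works outside a browser:

```javascript
// Sketch: trigger an install from existing UI, with graceful fallback.
// `nav` is whatever object exposes install() — in a real page, `navigator`.
async function tryInstall(nav, installUrl) {
  if (typeof nav?.install !== 'function') {
    return { supported: false }; // old browser, or trial not enabled
  }
  try {
    // Same-origin install when no URL is given; cross-origin otherwise.
    const result = installUrl
      ? await nav.install(installUrl)
      : await nav.install();
    return { supported: true, installed: true, result };
  } catch (err) {
    // A dismissal typically surfaces as a rejected promise (e.g. AbortError).
    return { supported: true, installed: false, error: err.name };
  }
}
```

In a page enrolled in the trial you would call tryInstall(navigator) from your own button's click handler.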
Key takeaways:
- navigator.install() lets your own UI trigger same-origin or cross-origin installs.
- Origin trial in Edge 143 to 148 on Windows, macOS, and Linux.
- Provides the imperative escape hatch when the declarative element is too restrictive.
Why do I care: This is the API I would reach for when installation is woven into an existing onboarding flow, where pausing to render a separate browser-controlled button would feel like a regression. The interesting strategic question is whether two competing approaches will both ship long term, or whether one will win. Either way, both teams are gathering evidence in public, which is the right way to settle it.
The Web Install API is ready for testing
WebKit Features for Safari 26.5
TLDR: Safari 26.5 ships the :open pseudo-class, element-scoped CSS random(), the color-interpolation attribute on SVG gradients, ToggleEvent.source, and the Origin API, alongside 63 bug fixes that span scroll-driven animations, anchor positioning, and zoom rendering.
Summary: WebKit's release notes read like a quality-and-quiet-features release rather than a marquee one, and that is genuinely useful. The :open pseudo-class finally gives us a consistent way to style the open state of <details>, <dialog>, <select>, and <input> pickers without falling back to attribute selectors that only half-work depending on the element.
CSS random() got reshaped after community feedback. Named random values are now global by default, which means random(--s, 100px, 200px) will produce the same number across every element that uses the name. The new element-scoped keyword lets you opt back into per-element behavior when you actually want it. This is the kind of API churn that happens when a feature ships first and gets used hard, and it is a good sign that WebKit kept iterating.
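In CSS terms, the two additions above look roughly like this. The selectors and custom-property name are illustrative, and the element-scoped keyword placement follows my reading of the release notes:

```css
/* Style any open disclosure or dialog consistently with :open. */
details:open > summary,
dialog:open {
  outline: 2px solid steelblue;
}

/* Named random values are global: every element using --s gets
   the same resolved number. */
.card {
  width: random(--s, 100px, 200px);
}

/* Opt back into a fresh value per element. */
.scattered {
  translate: random(element-scoped, -50px, 50px) 0;
}
```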
SVG gets a long-awaited improvement. Gradients now respect the color-interpolation attribute, so you can ask for linearRGB interpolation and get a perceptually more even transition between saturated colors. The midpoint of a red-to-blue gradient no longer turns muddy by default.
The Origin API is the kind of utility that disappears into your code once you have it. Instead of parsing strings and pulling in the Public Suffix List for same-site comparisons, you call Origin.from(value) and get a structured object you can compare directly, including for opaque origins from cross-origin message events.
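A sketch of what that replacement looks like in practice. The release notes describe Origin.from(value) returning a comparable object; the comparison method name here is an assumption, so the snippet falls back to URL origin comparison where the API isn't available:

```javascript
// Sketch: same-origin check with the new Origin API where it exists,
// falling back to serialized-origin comparison of parsed URLs.
function sameOrigin(a, b) {
  if (typeof Origin !== 'undefined' && typeof Origin.from === 'function') {
    // isSameOrigin is an assumed method name — check the Safari 26.5 notes.
    return Origin.from(a).isSameOrigin(Origin.from(b));
  }
  // Fallback: the URL parser already serializes an origin for us.
  return new URL(a).origin === new URL(b).origin;
}
```

The win over hand-rolled string parsing is that the structured object also handles opaque origins, which string comparison silently gets wrong.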
The bug fix list is long. Multiple scroll-driven animation issues, several anchor positioning chains, zoom rendering across grid and flex layouts, and continued work on the block-in-inline layout rewrite. WebKit is clearly investing in correctness at depth rather than just chasing new specs.
Key takeaways:
- :open and element-scoped random() smooth out everyday CSS.
- SVG color-interpolation produces cleaner gradients between saturated colors.
- Origin API replaces fragile origin string parsing.
Why do I care: When a release is mostly bug fixes, that is the release I want shipped. Scroll-driven animations and anchor positioning are still relatively young and every fix here removes a workaround somewhere in production. The Origin API in particular is a small but meaningful security primitive that we should be using in any code that compares cross-document message origins.
WebKit Features for Safari 26.5
Tailwind CSS v4.3
TLDR: Tailwind v4.3 adds scrollbar utilities, a @container-size variant for size-based containers, zoom-* utilities, tab-width helpers, and stacked plus compound @variant support. The post also catches us up on v4.2, including four new neutral palettes and a first-class webpack loader.
Summary: The scrollbar utilities are the headline because nobody wants to remember which half-supported browser property they need this week. Tailwind now ships scrollbar-auto, scrollbar-thin, scrollbar-none, color utilities for thumb and track, and the gutter utilities that keep your layout from jumping when the scrollbar appears. This is one of those features you reach for monthly and never want to write from scratch again.
The container query story gets more complete. The original @container utility creates an inline-size container, which is the right default but means container units like cqb and cqh have no block dimension to resolve against. The new @container-size creates a true size container so block-axis queries actually work.
The zoom-* utilities expose the CSS zoom property now that every browser has finally agreed it exists. The tab-* utilities let you control the rendered width of tab characters in preformatted text. Both are small, both are appreciated.
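A quick sketch of the new utilities in markup. scrollbar-thin and @container-size are named in the post; the thumb-color utility and the arbitrary cqh value are my guesses at the naming pattern and may differ in the actual release:

```html
<!-- Thin scrollbar with a tinted thumb (thumb utility name is a guess) -->
<div class="h-64 overflow-y-auto scrollbar-thin scrollbar-thumb-gray-400">
  …long content…
</div>

<!-- @container-size creates a true size container, so container units
     with a block axis (cqb, cqh) have a dimension to resolve against -->
<div class="@container-size h-96">
  <p class="text-[10cqh]">Sized relative to container height</p>
</div>
```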
The v4.2 additions are worth a second look. Four new neutral palettes named mauve, olive, mist, and taupe give you warmer or cooler alternatives to the existing grays without losing the gray feel. The dedicated webpack plugin runs Tailwind around twice as fast as routing through PostCSS for large projects, and the gains carry over to Turbopack through its webpack compatibility layer. There is also a wave of logical property utilities and a font-features-* escape hatch for low-level OpenType control.
Key takeaways:
- First-class scrollbar utilities, including thumb and track colors and gutter control.
- @container-size enables block-axis container queries.
- Dedicated webpack plugin roughly doubles build speed on large projects.
Why do I care: I think the scrollbar utilities and the webpack plugin alone justify the upgrade for most teams. The build performance gain in Next.js projects is the kind of thing you should compound across every developer's local environment and every CI run. The new container-size variant also closes a real gap in the container query story that has been quietly limiting what you could express.
Tailwind CSS v4.3: Scrollbars, new colors, and more
Six Million Selections Later
TLDR: Mozilla shares data on the EU's Digital Markets Act browser choice screens. Firefox gets selected every 10 seconds, retention is five times higher than organic downloads, and academic research shows Firefox daily actives are 113 percent higher on iOS in the EU than they would have been without the DMA.
Summary: This is the first real evidence we have on whether browser choice screens actually move the needle. The answer is yes, very much so. Over two years of data shows that when people get a real choice screen, a meaningful share of them pick something other than the default, and the people who do switch tend to stick around longer than people who download organically.
The mobile gains are uneven across platforms. iOS saw a 113 percent lift in Firefox daily active users in the EU compared to comparable non-EU markets. Android saw 12 percent, which is smaller mostly because Firefox's Android base was already larger. The DMA's effect is also growing over time rather than flattening, which suggests the screens are still surfacing alternatives to people who hadn't considered them yet.
Desktop remains the big missed opportunity. Roughly 310 million desktops and laptops in the EU have no equivalent choice screen, and Windows in particular continues to use deceptive design to keep users on Edge. Even where mobile choice screens exist, importing data and switching defaults still creates friction that gatekeepers have little incentive to fix.
There is a demographic finding worth pausing on. Women make up a significantly higher share of Firefox selections via iOS choice screens than via organic downloads, which suggests choice screens reach users who report lower confidence in manually changing browser defaults. The choice screen is doing real work in reaching people that the existing UX patterns were quietly excluding.
Key takeaways:
- Real browser choice changes user behavior in measurable ways.
- Desktop is still largely unregulated and the gap shows.
- Choice screens reach demographics that organic flows do not.
Why do I care: As a frontend person I care about this because the browser monoculture story has practical consequences for what we can build. More viable engines means more leverage to push the platform forward, and more pressure on every vendor to keep up. The DMA's success here also gives us a template for the AI debate, where the same gatekeeping dynamics are starting to play out in places we have less control over.
Soon We Can Finally Banish JavaScript to the ShadowRealm
TLDR: TC39's ShadowRealm proposal, currently at Stage 2.7, introduces a new kind of JavaScript realm with its own globals and intrinsics but no separate thread. Code runs in isolation on the main thread, which makes it useful for sandboxes, test cleanrooms, and third-party code quarantine.
Summary: This piece is part comedy bit, part actually useful explainer. The author walks through what a JavaScript realm actually is, why we already have multiple realms in cross-origin iframes and workers, and what they cannot currently do. The short version is that each realm is single-threaded and they cannot share execution context, which makes existing realms heavy when all you want is an isolated global scope.
ShadowRealm fixes that. You create one with new ShadowRealm(), you call shadow.evaluate('...') to run code inside it, and that code sees a pristine global object. Anything declared in the host realm is invisible to the shadow realm, and anything declared in the shadow realm is invisible to the host. The shadow realm runs on the same thread, so there are no worker-style message passing constraints.
The proposed API is intentionally small. Just evaluate and importValue. The importValue method is the more interesting of the two because it lets you dynamically import a module into the shadow realm and pull a specific exported function back out as a callable binding in your host code. That makes ShadowRealms a real isolation primitive for plugin systems, not just goth eval.
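A minimal sketch of the API as proposed at Stage 2.7 — hedged with a feature check, since no stable engine ships ShadowRealm yet and the shape may still change:

```javascript
// Sketch: evaluate code in an isolated realm when the proposal is available,
// reporting gracefully where ShadowRealm hasn't shipped.
function runIsolated(code) {
  if (typeof ShadowRealm === 'undefined') {
    return { isolated: false, result: undefined };
  }
  const realm = new ShadowRealm();
  // evaluate() returns only primitives or callable wrappers, never objects —
  // that restriction is what keeps the boundary intact.
  return { isolated: true, result: realm.evaluate(code) };
}

// For modules, importValue() pulls one named export across the boundary:
//   const run = await realm.importValue('./plugin.js', 'run');
```

Either path leaves the host realm's globals untouched, which is the entire point.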
There is an important caveat. ShadowRealms are an integrity boundary, not a security boundary. Code inside can still make inferences about code outside, so this is for organizing scope and quarantining third-party noise, not for running fully untrusted code. The proposal is still subject to changes from feedback during trial implementations, but it is close enough that we should be designing for it.
Key takeaways:
- ShadowRealm gives you a clean global scope without a worker thread.
- Useful for plugin sandboxes, test cleanrooms, and quarantining third-party code.
- Integrity boundary, not a security boundary.
Why do I care: Anyone who has had to debug a third-party library polluting window will see the value here immediately. The pattern that makes the most sense to me is loading analytics, A/B testing scripts, or design-system widgets into a shadow realm so they cannot stomp on the global object your application owns. It is also a much cleaner answer for plugin architectures than what we typically have today.
Soon We Can Finally Banish JavaScript to the ShadowRealm
Why Keyboard Users Can't Scroll Your Overflow Containers
TLDR: Scroll containers and keyboard focus are two separate browser systems that were never designed to talk to each other. Sighted keyboard users get trapped in tables and lists with overflow content because the container is not in the tab order. The fix is tabindex="0", aria-label, and a visible focus style.
Summary: The author found this bug the way most of us find accessibility bugs, which is to say someone filed it and it took putting the mouse away to reproduce. Tab order follows interactive elements. Scroll containers are a layout primitive that the spec never classified as interactive. The gap between those two systems is where sighted keyboard users fall through.
The trigger list is broader than you would expect. overflow-x: auto implicitly sets overflow-y: auto, so a horizontally scrolling table is also a vertical scroll container that keyboard users cannot scroll. overflow: hidden still creates a scroll container even though it just clips. And properties like transform, filter, will-change, contain: paint, and content-visibility: auto all create scroll containers as a side effect, often added for animation or performance reasons by someone who had no idea they were creating a keyboard trap.
The fix is small but it has three parts that need to ship together. Add tabindex="0" to put the container in the tab order. Add a descriptive aria-label so screen readers announce something meaningful when focus lands there. And add a visible :focus-visible style with enough contrast, because sighted keyboard users land on the container and need to know they did.
The judgment call is when to skip the fix. If every meaningful piece of content inside is already reachable by tab, adding tabindex="0" to the wrapper just adds noise. If the container holds a large number of focusable items where tabbing through every one would be a burden, a single stop on the wrapper is actually friendlier. The test the author recommends is simple. Unplug your mouse, tab through your page, and find every container with overflow you cannot scroll. Five minutes, real bugs.
Key takeaways:
- Scroll containers and tab order are separate systems with a gap between them.
- The fix is tabindex="0", aria-label, and visible focus styling, together.
- Many scroll containers are created by side effects of transform, contain, or will-change.
Why do I care: This is a class of bug that is invisible to automated tooling, invisible to most developers, and invisible to screen reader users. Only sighted keyboard users hit it, and they tend to just stop using your product rather than file a ticket. As a frontend architect I want this audit step in every team's definition of done. It costs nothing, and the impact on users you would otherwise lose silently is real.
Why Keyboard Users Can't Scroll Your Overflow Containers
There are only four sensible ways to build a website
TLDR: Jono Alderson argues that stack choice is a constraint system, not a tooling decision, and that almost every website collapses into one of four patterns. React when the site is a product, Shopify when the job is selling, static when the site mostly sits still, and a mature CMS like WordPress when publishing is the system.
Summary: This is the kind of essay that makes you cancel a meeting. The argument is that teams keep choosing stacks as expressions of identity. Engineering-led orgs reach for React because it feels like control. Design-led teams reach for visual builders because they want to move fast. Editorial teams reach for WordPress because that is where content lives. None of these instincts is wrong, but none of them is grounded in what the organization actually needs to operate the system over time.
Alderson lays out four patterns. Application-like sites with real state and complex interactions earn the React and headless complexity, but most sites that reach for that stack are not actually applications, and they pay the cost of rebuilding a CMS from scratch badly. Commerce sites should let Shopify own the machinery of payments, inventory, and trust, because fighting Shopify's opinions means stitching workarounds for problems the platform already solved. Sites that mostly sit still benefit from static generation, where the simplicity is the value. Sites where publishing is the system, with workflows and content relationships and multi-stakeholder governance, are exactly what WordPress, Drupal, and TYPO3 were built for, and the dismissal of those tools usually comes from people who have never had to govern a real content operation.
The middle ground is where most teams get stuck. Visual builders optimize for the moment of creation and underperform on long-term operation. Hybrid headless architectures look elegant on diagrams and become fragile in production, because you are now responsible for synchronizing data across systems, defining boundaries, handling failure modes, and keeping editorial workflows coherent across multiple tools. Done well, this can be powerful. Most organizations underestimate the cost.
The real test is not which stack is most flexible or most modern. It is which one you can run, day after day, as the organization changes around it. Most teams pick the wrong stack because they misunderstand themselves, not the technology.
Key takeaways:
- Stack choice defines what problems you can solve and which ones you will never see coming.
- Hybrid composability is the hardest path and is rarely justified.
- Choose the system you can operate, not the one that looks best in a demo.
Why do I care: I have seen this play out at scale more times than I can count. Editorial-led publishers rebuilding a CMS in Next.js because React felt modern, then watching their content team go on strike. Engineering-led product teams adopting WordPress for marketing and spending more time fighting the plugin ecosystem than shipping features. This piece is the right kind of architectural counterweight to put in front of a stakeholder before the next platform decision.
There are only four sensible ways to build a website
On Rendering the Sky, Sunsets, and Planets
TLDR: Maxime Heckel walks through a month-long shader project building atmospheric scattering in the browser. The article covers Rayleigh and Mie scattering, ozone absorption, raymarching, lighting passes, ray-sphere intersection for planet rendering, and a LUT-based optimization pass inspired by Sebastian Hillaire's production technique.
Summary: This is a long, careful piece that turns a NASA photo of the Endeavour shuttle at sunset into a working real-time atmosphere shader. Heckel starts from first principles. The sky's color is the result of light interacting with air, and you simulate that by raymarching from the camera through a transparent medium, accumulating density and transmittance along the way. Rayleigh scattering explains why the sky is blue. Mie scattering explains the hazy glow around the sun. Ozone absorption explains the purple tones near the horizon at sunset.
The build progresses in stages. First a sky shader that renders a backdrop. Then a post-processing effect that uses the depth buffer to render atmospheric fog through a 3D scene. Then ray-sphere intersection so you can render the atmosphere as a shell around a planet rather than a flat dome. Each step has interactive widgets in the article so you can move the sun, change altitude, and watch the color shift.
The final section pivots to performance. The naive raymarching version has a nested lightmarch loop that gets expensive fast. Hillaire's LUT-based technique precomputes the lighting into textures. A transmittance LUT stores how much light survives at every angle and altitude. A sky-view LUT stores the resulting color in every direction from the camera. An aerial perspective LUT stores the atmospheric haze between camera and scene geometry. The final composition pass just samples textures rather than running the full nested loop per pixel.
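The accumulation the naive raymarcher performs per ray is Beer-Lambert extinction; a stripped-down, single-channel sketch of that inner idea (constants and sampling are simplified stand-ins, not the article's shader code):

```javascript
// Accumulate transmittance along a ray through a medium by sampling
// density at N midpoints and applying Beer-Lambert extinction:
// T = exp(-∫ density(t) * sigma dt), approximated as a Riemann sum.
function transmittance(densityAt, rayLength, steps, sigma) {
  const dt = rayLength / steps;
  let opticalDepth = 0;
  for (let i = 0; i < steps; i++) {
    const t = (i + 0.5) * dt;               // midpoint sampling
    opticalDepth += densityAt(t) * sigma * dt;
  }
  return Math.exp(-opticalDepth);           // fraction of light surviving
}
```

The naive version runs a second loop like this toward the sun at every sample, which is exactly the nested cost the LUTs precompute away.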
The bonus material is generous. There is a section on rendering eclipses by computing the angular separation between sun and moon directions. There is a section on tweaking the atmospheric constants to render Mars, which produces the famous blue Martian sunset for free.
Key takeaways:
- Atmospheric scattering breaks down into Rayleigh, Mie, and ozone components.
- LUT-based precomputation replaces nested loops with texture lookups.
- The same shader can render Earth, Mars, eclipses, and arbitrary atmospheres by tuning constants.
Why do I care: Even if you never write a shader, this kind of write-up is how I keep my mental model of what GPUs and the browser can actually do honest. The post is also a strong template for how to explain technical work, with interactive widgets at every stage and an honest section on what the author would do differently. If you have any 3D ambitions in your stack, this is worth bookmarking.
On Rendering the Sky, Sunsets, and Planets
Control the Speed of Infinite Animations
TLDR: Temani Afif shows a CSS-only technique using animation-composition: add to control the speed of an infinite animation through a single custom property. Values above 1 speed up, between 0 and 1 slow down, 0 stops, and negative values reverse direction.
Summary: This is one of those CSS tips that looks like a magic trick until you read it twice. The idea is to run the same animation twice on the same element with additive composition. The first instance runs at the original duration. The second instance runs at a derived duration and is paused by default. Setting a single custom property --s to a numeric speed factor changes whether the second animation runs and how fast, which adds to or subtracts from the first.
Hover toggles the play state. The effect is that hovering speeds up, slows down, stops, or reverses an infinite animation without rewriting it. Each element can carry its own --s value and you get distinct behaviors from the same keyframes.
The author shows three examples. An infinite rotation with per-element speed factors, an infinite marquee where the offset-distance is animated, and a glowing border effect where a custom property is animated. The technique works for any animatable property because the additive composition operates on whatever you put in the keyframes.
The cleanest version uses the CSS if() inline conditional, but support is limited, so there is a fallback that splits the keyframes into init and control parts and recovers the reverse behavior with sign() math. Either way, the runtime API is just one custom property.
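Structurally, the trick looks like this: two copies of one animation, the second additive and paused by default. The duration math below is a simplified stand-in for Afif's exact formula, and negative --s values need the article's sign() fallback:

```css
.spinner {
  --s: 2; /* speed factor: >1 faster, 0–1 slower, 0 stops on hover */
  animation:
    spin 4s linear infinite,                         /* base animation   */
    spin calc(4s / var(--s)) linear infinite paused; /* additive booster */
  animation-composition: replace, add;
}
.spinner:hover {
  /* hover wakes the second copy, changing the effective speed */
  animation-play-state: running, running;
}
@keyframes spin { to { rotate: 1turn; } }
```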
Key takeaways:
- animation-composition: add lets two animations layer additively.
- A single custom property controls speed, direction, and stop state.
- Works for any animatable property, including custom properties used for visual effects.
Why do I care: This is a great example of where modern CSS quietly replaces a JavaScript pattern I have written many times. If you build interactive product pages or marketing experiences with looping animations, you can wire speed controls and hover interactions straight from CSS variables without touching requestAnimationFrame. Smaller bundles, fewer event handlers, less to break.
Control the Speed of Infinite Animations
Preserving DOM Changes Across Live Reloads
TLDR: Kitty Giraudel hit a bug where their theme switcher would reset on every Eleventy live reload because morphdom diffed the served HTML against the live DOM and stripped the JavaScript-applied attribute. The fix is a development-only MutationObserver that reapplies the attribute when it disappears.
Summary: Live reload tools like Eleventy's dev server send updated HTML to the client and use morphdom to apply surgical DOM changes rather than full page reloads. That preserves scroll position, event listeners, and runtime state, which is exactly what you want. But it also assumes the served HTML is the source of truth, which breaks down when JavaScript has modified the DOM after load.
Giraudel's theme switcher applies a data-theme attribute to the <html> element on DOMContentLoaded. The attribute is not in the served HTML because the theme is resolved at runtime from local storage and OS preferences. So on every file save, morphdom sees the new HTML without the attribute, compares it to the current DOM with the attribute, and removes it.
The fix is a small MutationObserver that watches the attribute and reapplies it if it disappears. The script is only included in development builds so production users never carry the overhead. Giraudel also explored a more generic solution that would re-fire DOMContentLoaded to remount the page, but Cursor correctly pointed out the problems with that. DOMContentLoaded is expected to fire once. Many listeners are not safe to run twice. Synthetic events have isTrusted: false, which some code ignores. The generic solution would require all your scripts to be idempotent, which is hard to enforce without a framework designed for it.
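A sketch of the scoped observer. The data-theme attribute name comes from the post; the theme-resolving function is supplied by the caller, since that logic (local storage plus OS preference) belongs to the app:

```javascript
// Dev-only guard: if live reload strips data-theme from the watched
// element, put it back. `resolveTheme` is the app's own theme logic.
function keepThemeAttribute(resolveTheme, target) {
  if (typeof MutationObserver === 'undefined' || !target) {
    return null; // not in a browser, or nothing to watch
  }
  const observer = new MutationObserver(() => {
    if (!target.hasAttribute('data-theme')) {
      target.setAttribute('data-theme', resolveTheme());
    }
  });
  observer.observe(target, {
    attributes: true,
    attributeFilter: ['data-theme'], // scoped: ignore every other mutation
  });
  return observer;
}
```

In a real setup you would call keepThemeAttribute(resolveTheme, document.documentElement) from a script included only in development builds.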
Key takeaways:
- Live reload + DOM diffing strips client-side DOM changes that are not in the served HTML.
- A scoped MutationObserver that watches the specific attribute is the right fix.
- Re-firing DOMContentLoaded is technically possible but creates duplicate-listener problems.
Why do I care: This is the kind of bug that wastes an hour of your debugging budget and then sits in the back of your mind every time you see a flicker. The pattern generalizes. Any time you apply runtime DOM state that has no representation in your source files, your live reload setup will quietly fight you. Knowing where the fight happens, and that a 10-line observer fixes it, is worth keeping in the back of your head.
Preserving DOM Changes Across Live Reloads
Behind the Scenes Hardening Firefox with Claude Mythos
TLDR: Mozilla details how it used Claude Mythos Preview and other AI models to find and fix 271 latent security bugs in Firefox 150. The approach used an agentic harness that runs reproducible test cases against hypothesized bugs, scaled across ephemeral VMs tied into Mozilla's security bug lifecycle.
Summary: This post is a milestone in how we think about AI in software security. A few months ago, AI-generated bug reports to open source projects were mostly noise. The economics were brutal because it is cheap to prompt an LLM to find a problem and expensive for maintainers to verify it. Mozilla's post documents a sharp inflection where, thanks to better models and better harnessing, the reports got very good very quickly.
Mozilla unhid a small sample of the bug reports to give a sense of the depth. A 15-year-old bug in the <legend> element triggered through recursion stack limits and cycle collection. A 20-year-old XSLT bug where reentrant key calls freed the backing store while a raw pointer was still in use. A JIT optimization bug that created a fake-object primitive in code that had been heavily fuzzed. Sandbox escapes that involved patching the sandboxed source code to simulate attacker control and then escalating into the parent process.
The harness is the interesting part architecturally. Models can hallucinate plausible bugs, but an agentic harness that generates reproducible test cases dynamically tests its hypotheses and discards the unreproducible ones. Mozilla built theirs on top of existing fuzzing infrastructure, started with simple prompting on Claude Opus 4.6, parallelized across ephemeral VMs, and integrated with their full bug lifecycle including deduplication, triaging, and shipping.
There is also a striking observation about what the models did not find. Several previous hardening efforts, like freezing parent-process prototypes by default, blocked entire categories of attempted sandbox escapes that showed up in the harness logs. The architectural defenses paid off in a way that is hard to measure with anything else.
Key takeaways:
- Agentic harnesses with reproducible test cases changed the signal-to-noise ratio of AI bug finding.
- Mozilla shipped fixes for 271 bugs in one release, with a mix of JIT, IPC, layout, and parser issues.
- Existing architectural hardening blocked entire classes of attempted exploits in observable logs.
Why do I care: As a frontend architect I read this as a preview of where every codebase is heading. The advice at the end of the post is direct. Start now, use simple prompting, observe, iterate. The teams that build a basic harness this year will have a serious advantage when the next generation of models lands. The same agentic pipeline that found these Firefox bugs will find bugs in your application logic too.
Behind the Scenes: Hardening Firefox with Claude
Testing Vue components in the browser
TLDR: Julia Evans documents how to write end-to-end Vue component tests that run entirely in a browser tab using QUnit, without Node, Playwright, or any external test runner. The setup mounts components into a temporary DOM node, polls for readiness with a small waitFor helper, and uses a server endpoint to reset test data.
Summary: Evans has been running a long project to figure out how to write frontend JavaScript without Node, and testing was the missing piece. Playwright always felt slow and unwieldy, and the result was untested code that she avoided changing. After a conversation with a colleague she tried QUnit, which runs in a browser tab and has a useful per-test rerun button.
The setup is simple by design. Her main app exposes components on window._components. A mountComponent helper creates a Vue app with a small template, registers the components, and mounts it into a positioned-off-screen div under the qunit-fixture element. Tests assert against the resulting DOM. A reset() function calls a dev-server endpoint that runs SQL to restore the test database to a known state before tests that need it.
The interesting friction is in the async waiting. Rather than sleep calls, Evans wrote a waitFor that polls every 20ms with a 2-second timeout, used to wait for elements to appear after network requests resolve. Filling out forms is also fiddly because Vue needs an input or change event after you set a value programmatically. She found Chrome's built-in code coverage tooling works for measuring what the tests actually exercised, with a few non-obvious setup steps.
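The polling helper described above is small enough to sketch in full. This is a plausible reconstruction rather than Evans's exact code; the 20ms interval and 2-second timeout match the numbers in the post, and both are made configurable here.

```javascript
// Sketch of a waitFor helper: poll a condition function until it
// returns something truthy, then resolve with that value. Defaults
// match the post's numbers: poll every 20ms, give up after 2 seconds.
function waitFor(condition, { interval = 20, timeout = 2000 } = {}) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const tick = () => {
      let result;
      try {
        result = condition(); // e.g. () => document.querySelector('.row')
      } catch {
        result = null; // a throwing condition means "not ready yet"
      }
      if (result) {
        resolve(result);
      } else if (Date.now() - start > timeout) {
        reject(new Error('waitFor timed out after ' + timeout + 'ms'));
      } else {
        setTimeout(tick, interval);
      }
    };
    tick();
  });
}
```

In a test you would trigger the action, then `await waitFor(() => document.querySelector('.saved-row'))` before asserting, instead of sleeping for an arbitrary duration and hoping the network request has resolved.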
There are open questions the post is honest about. CSS classes are not the best test selectors and getByRole or data-testid would be more robust. Without CI integration the tests cannot run in a build pipeline yet. And libraries like Testing Library or Vue Test Utils probably would have made the form interactions cleaner from day one.
Key takeaways:
- Browser-only Vue testing is viable with QUnit and a thin mount helper.
- A simple waitFor polling helper replaces brittle sleep calls.
- The setup avoids Node entirely but leaves CI as an open question.
Why do I care: This is a refreshing piece because most testing posts assume a heavy toolchain. If you maintain a small app or a personal project where the build process is already minimal, replicating this pattern could give you the confidence to make changes you currently avoid. It also surfaces a useful philosophical point, which is that any time your test needs to wait for something undefined in the DOM, your component is probably leaking state before it is ready for the user.
Testing Vue components in the browser
Google I/O 2026 Developer Keynote
TLDR: Google's developer keynote at I/O 2026 lands on May 19 from 1:30 to 2:45 pm Pacific. The brief promises Google's latest AI tools for developer productivity and new cross-platform experiences.
Summary: This is a placeholder rather than a full article, but worth flagging because the developer keynote at I/O is where the practical signal lives for the year ahead. Last year's I/O leaned heavily on AI tooling integrations into Android Studio, Chrome DevTools, and Firebase. Expect more of that this year, with whatever new model tier Google ships alongside.
For frontend developers the questions to bring to the broadcast are concrete. What is happening with Baseline and the cross-browser interop story? What is Chrome's plan for the install element and the Web Install API now that Edge is co-developing them? What is changing in Lighthouse, INP, and the rest of the performance toolchain? And how is Google packaging its AI tooling for the kinds of teams that do not want to wire up custom agents themselves?
The session page is light on detail, which is normal at this stage. The keynote itself usually sets the agenda for the more substantive sessions later in the week, so it is worth watching live if your timezone allows it, or skimming the summary the next morning.
Key takeaways:
- May 19, 1:30 to 2:45 pm Pacific.
- AI tooling for developer productivity is the framing.
- Real detail will come in the sessions that follow, not in the keynote.
Why do I care: As an architect I treat the major vendor keynotes as a way to calibrate where the platform is investing and what assumptions to build into the next 6 to 12 months of planning. Whether or not you watch live, the keynote is the cheapest way to get that calibration in one block.