Next.js 16.1, HTMX, Go Leaks, and 15-Year-Old Screen Sharing Tech

Published on 19.12.2025

Next.js 16.1

TLDR: Next.js 16.1 significantly speeds up development by enabling Turbopack file system caching by default, with compile times up to 14x faster when the dev server restarts. It also introduces an experimental bundle analyzer.

Summary: The latest Next.js release, version 16.1, focuses on improving the developer experience by tackling one of the most common pain points: slow development server restarts. With Turbopack's file system caching now enabled by default in development mode, the team reports a dramatic reduction in compile times after a restart, with some projects seeing up to a 14-fold improvement. This is a significant quality-of-life update for developers working on large and complex Next.js applications.

For architects and teams, this means less time waiting for the dev server to catch up and more time iterating on features. The faster feedback loop can lead to increased productivity and a more enjoyable development process. While this is a development-only feature, it removes a major friction point in the daily workflow. The release also includes an experimental bundle analyzer to help teams optimize their production builds by visualizing what's contributing to the bundle size.

Key takeaways:

  • Up to 14x faster compile times in development.
  • Turbopack file system caching is now on by default.
  • Experimental bundle analyzer for production optimization.
  • Simplified debugging with next dev --inspect.

Link: Next.js 16.1 | daily.dev

Please Just Fucking Try HTMX

TLDR: HTMX provides a simpler way to build interactive web applications by extending HTML with attributes that handle server communication, avoiding the complexity of modern JavaScript frameworks. A case study showed a 67% reduction in codebase size.

Summary: This article makes a passionate plea for developers to consider HTMX as a viable alternative to heavyweight JavaScript frameworks like React, Vue, or Angular. The core idea behind HTMX is to enhance standard HTML, allowing it to make server requests and swap content directly in the DOM without writing complex JavaScript. This approach can dramatically simplify the front-end architecture for many types of applications, particularly those that are server-rendered.
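To make that concrete, here is a minimal sketch of the pattern (not taken from the article; the /status endpoint, the markup, and the Go server are invented for illustration): the server returns plain HTML fragments, and hx-* attributes tell the browser when to fetch them and where to swap them in.

  package main

  import (
    "fmt"
    "log"
    "net/http"
  )

  const page = `<!DOCTYPE html>
  <html>
    <head>
      <script src="https://unpkg.com/htmx.org@1.9.12"></script>
    </head>
    <body>
      <!-- hx-get issues a GET to /status on click; hx-target and hx-swap say
           where the returned HTML fragment goes and how it is inserted. -->
      <button hx-get="/status" hx-target="#status" hx-swap="innerHTML">
        Refresh status
      </button>
      <div id="status">not checked yet</div>
    </body>
  </html>`

  func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
      fmt.Fprint(w, page)
    })
    // The endpoint returns an HTML fragment, not JSON; HTMX swaps it into #status.
    http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
      fmt.Fprint(w, "<strong>all systems nominal</strong>")
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
  }

No client-side build step, no state management library: the "API" is just HTML over the wire, which is the simplicity the article is arguing for.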

For teams that feel burdened by the complexity of the modern JavaScript ecosystem, HTMX offers a compelling "middle ground." It allows for the creation of rich, interactive experiences without the steep learning curve and extensive tooling associated with full-blown SPAs. The author highlights a case where a company replaced a complex JavaScript front end with HTMX and saw a two-thirds reduction in their codebase. This is a powerful data point for architects considering their framework choices, especially for projects where simplicity and maintainability are high priorities. That said, the author glosses over the fact that for highly interactive applications much of the complexity simply moves to the backend, and you can end up with a system that is no easier to maintain.

Key takeaways:

  • HTMX extends HTML to enable server interactions without complex JavaScript.
  • Can significantly reduce codebase size and complexity.
  • A good alternative for server-rendered applications.

Link: Please Just Fucking Try HTMX | daily.dev

Detecting goroutine leaks in modern Go

TLDR: Go versions 1.24-1.26 introduce powerful new tools, synctest and goleakprofile, to help developers detect and diagnose goroutine leaks in both testing and production environments.

Summary: Goroutine leaks are a common and often insidious problem in concurrent Go programs. This article provides a comprehensive overview of the new tools available in recent and upcoming Go versions to combat this issue. The synctest package lets developers write deterministic tests that surface leaks caused by common patterns such as unclosed channels, double sends, and orphaned worker goroutines.
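As a rough illustration of what that looks like in practice (assuming the stable testing/synctest API from Go 1.25+; the fetchWithTimeout helper is invented for this example), a synctest bubble refuses to finish cleanly while a goroutine started inside it is still blocked, so the orphaned worker shows up as a failing test instead of a slow memory climb in production.

  // This lives in a *_test.go file, e.g. fetch_test.go.
  package leaks

  import (
    "testing"
    "testing/synctest"
    "time"
  )

  // fetchWithTimeout leaks: when the caller takes the timeout branch, the
  // worker goroutine is left blocked on the unbuffered channel forever.
  func fetchWithTimeout() string {
    results := make(chan string) // unbuffered: the send needs a receiver
    go func() {
      time.Sleep(time.Second)  // simulate a slow backend call
      results <- "slow result" // blocks forever once the caller has given up
    }()
    select {
    case r := <-results:
      return r
    case <-time.After(100 * time.Millisecond):
      return "timeout" // the worker is now orphaned
    }
  }

  func TestFetchDoesNotLeakGoroutines(t *testing.T) {
    synctest.Test(t, func(t *testing.T) {
      // Inside the bubble, time is fake and advances deterministically, so the
      // timeout branch is always taken. When this function returns, synctest
      // waits for every goroutine started in the bubble to exit; the orphaned
      // sender is still durably blocked, so the test fails with a deadlock
      // report instead of passing while silently leaking.
      _ = fetchWithTimeout()
    })
  }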

For production environments, the goleakprofile tool provides a way to monitor running applications for goroutine leaks over time. This is invaluable for architects and teams responsible for the long-term stability and reliability of Go services. By understanding the common leak patterns and utilizing these new tools, teams can proactively prevent memory and resource exhaustion in their applications. The article does a great job of not just introducing the tools, but also explaining the underlying causes of the leaks, which is crucial for prevention.

Key takeaways:

  • synctest package for detecting leaks in tests.
  • goleakprofile for monitoring leaks in production.
  • Common leak patterns include unclosed channels and orphaned workers.

Link: Detecting goroutine leaks in modern Go | daily.dev

We Mass-Deployed 15-Year-Old Screen Sharing Technology and It's Actually Better

TLDR: A team building an AI coding platform found that a simple, 15-year-old technique of polling for JPEG screenshots over HTTP outperformed their modern, complex H.264 video streaming solution, especially on unreliable networks.

Summary: This is a fascinating story of a team that went down a deep, complex rabbit hole only to find that a much simpler, older solution was better for their use case. They initially tried to build a high-performance screen sharing feature using WebRTC, but it was blocked by enterprise firewalls. They then engineered a sophisticated H.264 video streaming pipeline over WebSockets, capable of 60fps. However, this solution struggled on poor network connections.

Ultimately, they discovered that polling for JPEG screenshots, a technique that is well over a decade old, provided a more resilient and reliable user experience. While it didn't offer the same high frame rate, it was far more robust on the kinds of unreliable networks their users often had. This is a powerful reminder for architects that the "best" technical solution is not always the most modern or complex one. It's crucial to deeply understand the constraints of the environment in which your application will be used and to prioritize robustness and user experience over raw technical specifications.
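As a rough sketch of the polling approach (the endpoint URL and 500ms interval below are made up, not from the article), the client simply fetches a complete JPEG on a timer and quietly skips any tick that fails, which is exactly why it degrades so gracefully on flaky networks: there is no stream state to resynchronize, only the next frame.

  package main

  import (
    "io"
    "log"
    "net/http"
    "time"
  )

  func pollFrames(url string, interval time.Duration, render func(jpeg []byte)) {
    client := &http.Client{Timeout: 2 * time.Second} // a stale frame beats a hung request
    for range time.Tick(interval) {
      resp, err := client.Get(url)
      if err != nil {
        log.Printf("poll failed, keeping last frame: %v", err)
        continue // nothing to recover; just try again next tick
      }
      frame, err := io.ReadAll(resp.Body)
      resp.Body.Close()
      if err != nil || resp.StatusCode != http.StatusOK {
        continue
      }
      render(frame) // hand the complete JPEG to whatever draws it
    }
  }

  func main() {
    pollFrames("https://example.internal/screenshot.jpg", 500*time.Millisecond, func(jpeg []byte) {
      log.Printf("got frame: %d bytes", len(jpeg))
    })
  }

Compare that to an H.264 pipeline, where a dropped packet or stalled WebSocket means decoder state to rebuild and keyframes to wait for; here the worst case is simply showing the previous screenshot a little longer.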

Key takeaways:

  • Simple, older technology can sometimes be more robust than modern, complex solutions.
  • Understand your users' network conditions.
  • Prioritize user experience and reliability over raw performance metrics.

Link: We Mass-Deployed 15-Year-Old Screen Sharing Technology and It's Actually Better | daily.dev

Gemini Flash 3 is my new favorite model

TLDR: Gemini Flash 3 is a fast, cheap, and powerful new model that excels at spatial reasoning and multimodal tasks, making it a strong contender against more expensive models like Claude Opus 4.5.

Summary: The author is highly impressed with the performance of Google's new Gemini Flash 3 model. Despite being significantly faster and cheaper than Gemini 3 Pro, it demonstrates remarkable capabilities, particularly in spatial reasoning and its ability to process multiple modalities like images, video, and audio. The article suggests that for many tasks, Flash 3 can produce results that are on par with or even better than much more expensive models.

For teams working with AI, this is an exciting development. The availability of a cheap, fast, and powerful model like Flash 3 could enable new applications and make existing ones more cost-effective. Architects should consider this model for tasks that require multimodal understanding or rapid, low-latency responses. The author notes that while it's a great model, it does have a tendency to be more verbose than its predecessor, which is a tradeoff to be aware of.
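For teams who want to kick the tires, below is a rough sketch of calling the Gemini REST API from Go. The model identifier is a guess (check Google's current model list for the Flash 3 name), and a production service would more likely use an official client library than raw HTTP.

  package main

  import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "os"
  )

  func main() {
    // Assumed model identifier -- substitute whatever Google publishes for Flash 3.
    const model = "gemini-flash-3"
    url := fmt.Sprintf(
      "https://generativelanguage.googleapis.com/v1beta/models/%s:generateContent", model)

    // A text-only prompt; image, video, and audio parts can be added alongside it.
    body, _ := json.Marshal(map[string]any{
      "contents": []map[string]any{
        {"parts": []map[string]string{
          {"text": "Which of these two boxes is closer to the camera, and why?"},
        }},
      },
    })

    req, _ := http.NewRequest("POST", url, bytes.NewReader(body))
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("x-goog-api-key", os.Getenv("GEMINI_API_KEY"))

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
      log.Fatal(err)
    }
    defer resp.Body.Close()

    var out map[string]any
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
      log.Fatal(err)
    }
    // The generated text lives under candidates[0].content.parts in the response.
    fmt.Printf("%v\n", out["candidates"])
  }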

Key takeaways:

  • Gemini Flash 3 is a fast, cheap, and powerful AI model.
  • Excels at spatial reasoning and multimodal tasks.
  • A cost-effective alternative to more expensive models.
  • Can be more verbose than other models.

Link: Gemini Flash 3 is my new favorite model (yes really) | daily.dev