Nvidia's Open-Weights Power Play, OpenAI's AWS Marriage, xAI's Budget Video Generator, and Recursive Language Models

Published on 27.03.2026

AI & AGENTS

Nvidia Nemotron 3 Super 120B - Open-Weights Speed Demon

TLDR: Nvidia released Nemotron 3 Super 120B-A12B, a fully open-weights language model with a hybrid Mamba-2/Transformer mixture-of-experts architecture that generates 442 output tokens per second, making it the fastest open-weights model in its class and a clear play to keep the developer ecosystem locked to Nvidia hardware.

The Batch - Nvidia Nemotron 3 Super 120B


OpenAI Tracks Agent States on AWS

TLDR: OpenAI and Amazon announced a stateful runtime environment for AI agents on AWS, giving Amazon exclusive third-party cloud hosting rights for OpenAI's frontier models and further unwinding the once-central Microsoft partnership.

The Batch - OpenAI Agent States on AWS


xAI's Cost-Effective Video Generator - Grok Imagine 1.0

TLDR: xAI launched Grok Imagine 1.0, a text-, image-, and video-to-video generator that topped the Artificial Analysis Video Arena while costing roughly one-third as much as Google Veo 3.1 and one-seventh as much as Sora 2 Pro per minute of generated video.

The Batch - xAI Grok Imagine 1.0


Recursive Language Models (RLMs)

TLDR: MIT researchers developed Recursive Language Models, which process long contexts by treating the input text as a persistent variable in an external Python environment and recursively breaking tasks into sub-tasks, achieving 91.3 percent on BrowseComp+ where GPT-5 alone could not produce an answer.

The Batch - Recursive Language Models
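To make the recursion concrete, here is a minimal sketch of the idea in Python. The real MIT system has the model drive a live Python environment that holds the full context as a variable; here, `toy_llm` is a hypothetical stand-in for an LLM call (it just counts words in a prompt), and `rlm_count` shows the recursive pattern: if the context fits in the model's window, answer directly; otherwise split it, recurse on each half, and combine the sub-answers. All names and the prompt format are illustrative assumptions, not the paper's API.

```python
def toy_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call.

    Handles one task: prompts of the form "COUNT <word> IN:\n<text>",
    returning how often <word> appears in <text>.
    """
    head, text = prompt.split(" IN:\n", 1)
    word = head.removeprefix("COUNT ")
    return str(text.split().count(word))


def rlm_count(word: str, lines: list[str], max_lines: int = 10) -> int:
    """Recursively count `word` in a context too long for one call.

    The full context stays in an external variable (`lines`); the
    "model" only ever sees window-sized slices of it.
    """
    if len(lines) <= max_lines:
        # Base case: the slice fits in the context window.
        return int(toy_llm(f"COUNT {word} IN:\n" + "\n".join(lines)))
    # Recursive case: split the task into two sub-tasks on line
    # boundaries (so no word is cut) and combine the sub-answers.
    mid = len(lines) // 2
    return rlm_count(word, lines[:mid], max_lines) + rlm_count(word, lines[mid:], max_lines)


if __name__ == "__main__":
    # 40-line context, far larger than the 10-line "window".
    context = ["apple banana apple", "cherry apple"] * 20
    print(rlm_count("apple", context))  # 60 apples across all lines
```

The toy combiner is a simple sum, but the same pattern covers richer decompositions (summarize each slice, then answer over the summaries), which is what lets the approach scale past any fixed context window.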