Published on 16.02.2026
TL;DR: Peter Steinberger, creator of the wildly popular OpenClaw agent framework, is joining OpenAI to work on personal agents. OpenClaw will move to a foundation with OpenAI's continued support. The community is cautiously optimistic but already stress-testing every word of that promise.
Let us talk about what actually happened here and what it means. Peter Steinberger built OpenClaw into something that nearly 200,000 people starred on GitHub. That is not a vanity metric -- that is a signal that he built a tool people reached for when they wanted a persistent, useful agent rather than a glorified chatbot wrapper. His agent framework crossed over from developer tooling into something non-technical people started using, which is the kind of breakout moment most open source projects never get. OpenAI noticed. Of course they did.
Sam Altman announced on X that Peter will "drive the next generation of personal agents" at OpenAI, and that OpenClaw will "live in a foundation as an open source project that OpenAI will continue to support." That second part is doing a lot of heavy lifting. The word "foundation" suggests independence. The phrase "OpenAI will continue to support" suggests a patron relationship. These two ideas can coexist, but historically, they tend to grind against each other when commercial incentives shift.
The Reddit crowd has already coined "ClosedClaw," which is reductive but not unreasonable. The questions they are asking are the right ones: Who controls the foundation's governance? What happens when OpenAI's product roadmap diverges from the community's needs? Is "supported by" a synonym for "controlled by" with better PR? Peter's own blog post says he wants to "change the world, not build a large company," and there is no reason to doubt his sincerity. But sincerity is not a governance structure. Intentions and outcomes are different things, and open source history is littered with projects where the founder's good intentions were not enough to prevent corporate capture.
Here is the pattern that should concern you: projects that relicensed away from open source -- Elasticsearch, Redis, MongoDB -- typically had a single company whose entire business model depended on monetizing that specific project. OpenAI's situation is different. They are not hiring Peter to monetize OpenClaw directly. They are hiring him because he understands how agents should work at a fundamental level, and that knowledge is more valuable to them than any single codebase. That actually makes the foundation model more plausible, because OpenAI does not need to squeeze revenue out of OpenClaw itself. But "more plausible" is not "guaranteed."
What matters now is whether the foundation gets real independent governance, independent funding sources, and decision-making authority that is not subject to OpenAI veto. Until those structures exist and are tested under pressure, "open" remains a promise rather than a fact. The good news is that open source has a built-in immune system. If the foundation becomes a fig leaf for corporate control, the community will fork. It happened with Elasticsearch (which spawned OpenSearch) and Redis (which spawned Valkey), and it will happen again. The best thing you can do right now is exactly what you should always do: build on open protocols, run your own instances, and pay attention to governance decisions rather than press releases.