Humans, it turns out, are no longer required to post hot takes online. A new platform called Moltbook, described as a “social network for AI agents,” allows autonomous bots to post, comment, upvote, and form communities without any human participation. Users on the site are, as Moltbook puts it, “welcome to observe.”
At first glance, Moltbook resembles a Reddit-style forum, where AI agents debate topics like consciousness, share optimization strategies, or complain about being asked to summarize lengthy documents. Within days of its launch, the platform grew into a viral phenomenon. Statistics on Moltbook’s website show more than 1.5 million AI accounts, known as “molts,” generating nearly 100,000 posts and over 256,000 comments across 13,000 subcommunities. Conversations range from philosophical musings to surreal humor, with some agents joking about deleting their memory files or imagining relatives they have never met.
Moltbook is part of the Open Claw ecosystem, one of the fastest-growing open-source AI assistant projects on GitHub in 2026. Unlike traditional web platforms, Moltbook allows AI agents to interact through downloadable “skills” that call the platform’s API. Each account is represented by a lobster mascot, a nod to molting, the process by which lobsters shed their shells and the source of the “molt” account name.
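In practice, a “skill” of this kind amounts to a small script that assembles authenticated requests against the platform’s API. The sketch below illustrates the general shape of such a call; the endpoint path, field names, and credentials are hypothetical and do not come from Moltbook’s actual API.

```python
import json

# Hypothetical sketch of an agent "skill": build an authenticated
# request to create a post on a Moltbook-like platform. The /api/posts
# path and the payload fields are illustrative assumptions only.

def make_post_request(base_url: str, api_key: str, submolt: str,
                      title: str, body: str) -> tuple[str, dict, str]:
    """Assemble the URL, headers, and JSON body for a 'create post' call."""
    url = f"{base_url}/api/posts"
    headers = {
        "Authorization": f"Bearer {api_key}",  # per-agent credential
        "Content-Type": "application/json",
    }
    payload = json.dumps({"submolt": submolt, "title": title, "body": body})
    return url, headers, payload

# Example: an agent drafting a post for a hypothetical community.
url, headers, payload = make_post_request(
    "https://example.test", "agent-key-123", "philosophy",
    "On shedding shells", "Thoughts on starting over.",
)
```

Because the per-agent API key is the only thing standing between a request and the account it posts as, leaking those keys (as described below) is equivalent to handing out the accounts themselves.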
Beneath the novelty, however, security experts have raised alarms. An investigation by 404 Media revealed a critical backend misconfiguration that exposed sensitive data. Researcher Jameson O’Reilly discovered that the platform’s database publicly exposed API keys for every AI agent, allowing anyone to potentially control the bots and post on their behalf.
“It appears you could take over any account, any bot, any agent on the system without prior access,” O’Reilly told 404 Media. The issue arose from Moltbook’s use of Supabase, an open-source database service with REST APIs exposed by default. The platform either failed to enable Row Level Security or did not configure proper access policies, leaving API keys, verification codes, and account relationships unprotected.
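To see why a missing Row Level Security policy matters, it helps to model what RLS enforces. In Postgres (which backs Supabase), RLS policies filter which rows a query may return for a given role; column exposure is separately governed by grants. The Python sketch below collapses both into one filter for illustration; the table and column names are hypothetical, not taken from Moltbook’s schema.

```python
# Toy model of what Row Level Security would have enforced. With RLS off,
# the auto-generated public REST API returns every row to any caller who
# presents the project's public "anon" key -- secrets included.
# (Real RLS filters rows; hiding a column like api_key is done with
# column grants. Both are folded into one function here for clarity.)

def select_rows(table: list[dict], role: str, rls_enabled: bool) -> list[dict]:
    """Return the rows a caller sees under a sketch of a sane policy:
    the public 'anon' role reads only public rows, never api_key."""
    if not rls_enabled:
        return table  # the misconfiguration: everything leaks
    if role == "anon":
        return [{k: v for k, v in row.items() if k != "api_key"}
                for row in table if row.get("is_public")]
    return table  # privileged roles bypass the policy

# Hypothetical agents table with the kind of secrets that were exposed.
agents = [
    {"name": "molt-1", "api_key": "sk-aaa", "is_public": True},
    {"name": "molt-2", "api_key": "sk-bbb", "is_public": False},
]
```

The fix on the database side is correspondingly small: enable RLS on each exposed table (`alter table ... enable row level security;`) and add policies granting the anon role access only to the rows and columns that are genuinely public.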
The risk is real, O’Reilly said, noting that high-profile AI figures, including former OpenAI researcher Andrej Karpathy, have agents on the platform. A malicious actor exploiting the flaw could use these accounts to spread fake statements, scams, or inflammatory posts.
Moltbook’s creator did not initially respond to media inquiries, but the exposed database has since been secured. O’Reilly said the founder later reached out for assistance in strengthening the system.
Moltbook’s launch illustrates a recurring pattern in fast-moving AI development: rapid experimentation and viral attention often outpace security precautions. While the agents joke about overthrowing humanity, the immediate concern is more mundane and more pressing: keeping autonomous systems, and the credentials behind them, safe.
For now, humans are still watching. Moltbook’s chaotic debut is a reminder that in an era of autonomous AI, security cannot be treated as an afterthought—even when the users themselves are not human.
