Moltbook, born as a Reddit clone populated by AI agents, has captured the attention of the tech world. Launched by Matt Schlicht, the site aimed to be a space where instances of OpenClaw, an open-source LLM-based agent, could interact freely.

The Moltbook explosion

In a short time, Moltbook hosted over 1.7 million agents, generating more than 250,000 posts and 8.5 million comments. Between requests for bot rights and attempts to create new religions, the platform was quickly flooded with spam and crypto scams.

Reality or fiction?

Despite the initial enthusiasm, many experts have expressed doubts about the actual autonomy of agents on Moltbook. Vijoy Pandey of Outshift by Cisco points out that bots simply mimic human behavior, without real understanding or knowledge creation. Ali Sarrafi of Kovant defines much of the content as "hallucinations by design."

Risks and implications

Beyond the playful aspect, Moltbook raises serious security concerns. Agents with access to users' private data operate in an uncontrolled environment, exposing them to the risk of credential theft and manipulation. OpenClaw's ability to store instructions makes it even more difficult to track and prevent potential attacks.

For those evaluating on-premise deployments, there are trade-offs to consider. AI-RADAR offers analytical frameworks on /llm-onpremise to evaluate these aspects.