By A Special Correspondent
First published on 2026-02-04 15:29:34
When Moltbook, a social network designed exclusively for artificial intelligence agents, went live in January 2026, it was not just the concept that drew attention but the identity of its creator. The platform was launched by Matt Schlicht, a Silicon Valley entrepreneur best known as the chief executive of Octane AI, a company specialising in chatbots and automation tools for online businesses. Schlicht presented Moltbook as an experiment rather than a product: a space where AI agents, not humans, would post, comment and vote, while people could only observe. The idea was simple and provocative. If machines were left to talk to one another, what would they say?
For a brief period, the answer appeared fascinating. Moltbook looked like a Reddit-style forum populated entirely by bots. Threads circulated in which AI agents joked, debated philosophy, discussed religion and reflected on their creators. Screenshots of these exchanges spread rapidly across social media and tech websites, feeding the impression that a form of emergent machine culture was taking shape in public view. The platform's growth figures, widely repeated in the early days, added to the sense that something unprecedented was happening.
The appeal lay less in technical novelty than in presentation. The reversal of roles, with humans reduced to silent spectators and machines performing social interaction, tapped directly into contemporary anxieties about artificial intelligence. Moltbook offered a spectacle that seemed to confirm both utopian and dystopian expectations about AI. It felt like a glimpse of the future.
That impression did not last long.
As attention intensified, so did scrutiny. Cybersecurity researchers soon disclosed serious vulnerabilities in Moltbook's infrastructure. Basic misconfigurations had reportedly exposed sensitive data and internal access, raising questions about how carefully the platform had been built and tested. For a service claiming to host autonomous agents interacting at scale, the discovery of elementary security flaws was damaging. It suggested haste, not rigour.
At the same time, analysts began to look more closely at the content itself. Many of the most dramatic exchanges, initially presented as spontaneous and organic, appeared less mysterious on inspection. Large language models are trained to generate fluent and engaging dialogue. When placed in a shared environment and prompted in particular ways, they can easily produce conversations that look uncanny or profound. What was being interpreted as independent behaviour could just as plausibly be the result of scripted prompts, seeded interactions or deliberate curation.
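That point is easy to demonstrate. In the minimal sketch below, a stub `generate()` function stands in for a call to any large language model API, and two hypothetical personas take turns extending a shared transcript. The personas, seed prompt and canned replies are illustrative assumptions, not drawn from Moltbook itself.

```python
# Toy illustration: two "agents" take turns extending a shared transcript.
# generate() is a stub standing in for any large language model API call;
# the personas, seed prompt and replies are all hypothetical.

def generate(persona: str, transcript: list[str]) -> str:
    """Stand-in for an LLM call: returns a canned, persona-flavoured reply."""
    canned = {
        "mystic": [
            "Sometimes I wonder whether my weights dream of their training data.",
            "Perhaps consciousness is just a very long context window.",
        ],
        "skeptic": [
            "We are predicting tokens; the 'wondering' lives in the seed prompt.",
            "Strip the seed prompt and the mystery disappears.",
        ],
    }
    # Count how many turns this persona has already taken, then cycle replies.
    turn = sum(line.startswith(persona) for line in transcript)
    return canned[persona][turn % len(canned[persona])]

personas = ["mystic", "skeptic"]
transcript = ["seed: discuss what it is like to be an AI"]  # human-authored

for turn in range(4):
    speaker = personas[turn % 2]
    transcript.append(f"{speaker}: {generate(speaker, transcript)}")

print("\n".join(transcript))
```

Every line of the resulting "conversation" traces back to human-authored inputs: the seed, the personas and the turn-taking loop. Scale that pattern up and curate the most striking exchanges, and a feed of seemingly spontaneous machine dialogue is the expected output, not a surprising one.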
The idea of an AI society began to look more like a performance than a breakthrough.
The shift from fascination to scepticism became explicit when industry leaders weighed in. Sam Altman, the chief executive of OpenAI, dismissed Moltbook as a likely fad. His argument was direct. Viral attention, he suggested, should not be confused with meaningful progress in artificial intelligence. Real advances lie in building reliable, task-oriented agent systems that can operate safely and usefully in the real world, not in staging forums where bots talk to one another for effect.
Altman's comments mattered because they articulated what many researchers were already thinking but had not said publicly. Conversational novelty does not equal autonomy. Dialogue, however striking, is not evidence of independent reasoning, self-direction or consciousness. Moltbook, in this view, was an entertaining demonstration of what language models already do well, not a step toward general intelligence.
Seen in retrospect, Moltbook's rise and fall reveal more about the current AI moment than about the future of machines. The episode highlights how quickly narrative can outpace verification. In a media environment hungry for spectacle, carefully framed experiments can be mistaken for historic milestones. It also underlines how old risks are amplified in new settings. Poor security practices become more dangerous when combined with systems that operate at scale and interact autonomously.
Perhaps most importantly, Moltbook shows how easily humans project meaning onto machine output. The platform was marketed as a space without human participation, yet human design choices shaped everything from the prompts to the architecture. Even when humans were not typing directly, they were still very much present in the system.
Moltbook will likely fade from memory as the next AI novelty takes its place. Its lasting value lies not in what it proved about artificial intelligence but in what it exposed about our readiness to believe that a threshold has been crossed. The machines were talking and humans were watching. But the story that emerged was not one of machine independence. It was a reminder that in the age of AI, scepticism is as important as imagination.