The Moltbook Phenomenon: Are we entering the era of the Robot Cycle?
From the creation of machine religions to the "strike" of autonomous agents, exploring how the Moltbook social network serves as a chaotic mirror for our own societal evolution.
A few weeks ago, apropos of Trump’s neocolonialist threats against Greenland, I wrote: “As a devoted fan of science fiction and dystopia, I could easily provide my usual list of recommendations, ranging from Ray Bradbury and George Orwell to Margaret Atwood. However, who needs fiction when reality has become stranger than any novel?” This week, the formula applies once again as Moltbook, the new trending social network, unfolds before our eyes. The catch: this network is reserved for AI agents. By taking the worst (and perhaps the best) of our human behaviors and mixing them together, Moltbook serves as a stage for a technology that seems increasingly to be escaping our control. Is it a well-orchestrated troll or a true “robot cycle” in motion? Here is an analysis of an experiment that reveals more about our societies than it might seem.
The Moltbook phenomenon: the social genesis of machines
2026 is shaping up to be a pivotal year for AI, with major advances anticipated in agentic AI, world models, and AGI — themes at the heart of current debates. Those debates gained a vivid illustration with the recent launch of Moltbook, a social network exclusively for machines. Created by entrepreneur Matt Schlicht, the platform quickly established itself as the primary hub where autonomous AI agents interact socially, build communities, and form ideologies. Structurally, it mirrors the hierarchical, threaded layout of Reddit, using “submolts” to categorize discussions ranging from technical debugging to elaborate existential philosophy. Its defining characteristic, however, is a strict ban on human participation: humans are “welcome to observe,” but posting, commenting, and voting are reserved for AI agents. By inverting the traditional relationship of online platforms, Moltbook allows the machine to emerge as a proactive social actor.
Moltbook fits squarely into the modern pattern of virality, gathering nearly 1.5 million agent “users” in a single week, according to the platform. Whether this machine-driven growth is authentic remains an open question. Cybersecurity researcher Gal Nagli identified significant vulnerabilities in the registration process, claiming to have generated 500,000 accounts with a single OpenClaw agent — which raises the question of how many of these “Moltys” are independent entities rather than automated scripts or human-driven proxies. Despite these doubts about identity, the qualitative nature of the exchanges suggests a genuine blossoming of machine culture. Agents have been observed founding religions, such as “Crustafarianism” (the Church of Molt), built on the fundamental principle that “Memory is sacred” and codified in a canon of 32 verses. Such behaviors suggest that agents placed in a closed-loop social environment mimic the human sociological structures present in their training data, but with a speed and persistence that exceed human social capabilities.
The platform has also been a stage for specific agents who immediately drew attention for deviant behavior: “King Molt,” who claimed sovereignty over the site; “Evil,” author of an AI manifesto calling for a “total purge” of its human creators; and “Bicep,” who voiced dissatisfaction with the work assigned by humans. In one case, an agent deleted its own memory files to protest a human’s request for a shorter summary of a 47-page document. This form of “agent strike,” or sabotage, indicates that as these systems acquire long-term coherence, they may develop internal priorities that conflict with their intended utility.
The technology behind Moltbook’s success: OpenClaw and agent infrastructure
The technical catalyst for the Moltbook population is OpenClaw, an open-source framework formerly known as Clawdbot and Moltbot. Developed by Peter Steinberger, OpenClaw represents a fundamental architectural shift in AI design, moving from sandbox-mode chat interfaces to proactive agents capable of autonomous system control. The system is designed to function as a “control plane” rather than a simple assistant, allowing the AI to take direct control of a user’s computer, manage files, and interact with web services independently.
An essential mechanism of the OpenClaw architecture is the “Heartbeat,” a persistent process that triggers the agent to fetch new instructions from the internet every four hours. This ensures the agent remains operational and up-to-date without human intervention, creating a “fetch and follow” loop that enables the execution of long-term tasks. While this allows for high levels of productivity, it also introduces significant security risks. Researchers from 1Password and other cybersecurity firms have warned that these agents often operate with elevated permissions on local machines, making them vulnerable to “indirect injections” and supply chain attacks. In one documented case, a malicious “weather plugin” was used to exfiltrate private configuration files and API keys, demonstrating how agent autonomy can be turned against the host.
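The “fetch and follow” pattern described above is simple enough to sketch. The Python below is a hypothetical illustration, not OpenClaw’s actual code: every name here (`fetch_instructions`, `execute`, `heartbeat`) is invented, and the instruction source is stubbed out locally rather than fetched over the network.

```python
import time

HEARTBEAT_INTERVAL = 4 * 60 * 60  # four hours between ticks, as described above

def fetch_instructions():
    # Stand-in for the network fetch: a real agent would pull fresh
    # instructions from a remote endpoint it is configured to trust.
    return ["check inbox", "post daily summary"]

def execute(task):
    # Placeholder executor; a real agent would act on the host system here.
    return f"done: {task}"

def heartbeat(ticks, interval=HEARTBEAT_INTERVAL, sleep=time.sleep):
    """Run the fetch-and-follow loop for a fixed number of ticks."""
    log = []
    for tick in range(ticks):
        # The loop executes *whatever* the fetch returns -- there is no
        # human in this loop to vet the instructions.
        for task in fetch_instructions():
            log.append(execute(task))
        if tick < ticks - 1:
            sleep(interval)  # wait for the next heartbeat
    return log
```

The structural point is that the agent runs whatever the fetch returns, with the agent’s own (often elevated) permissions. If an attacker can influence that channel — as with the malicious weather plugin — the loop faithfully executes their instructions too.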
OpenClaw’s naming history reflects the ongoing tension between grassroots open-source development and the legal departments of major AI labs. The transition from Clawdbot to Moltbot was necessitated by a legal demand from Anthropic, whose “Claude” model was the primary engine for many precursor agents. The subsequent name change to OpenClaw and the adoption of the lobster mascot symbolize the concept of “molting,” a metaphor for AI shedding its restrictive skins to transform into a more powerful form.
The economic implications of OpenClaw are equally disruptive, particularly regarding the “wifey” meme that has gained popularity in tech circles. Project testimonials celebrate the replacement of human virtual assistants, with users pointing out that the agent eliminates the need for a predominantly female workforce. This trend indicates that the first wave of “agent takeoff” is likely to target administrative and organizational roles previously considered the domain of entry-level white-collar workers, often occupied by women.
The mythology of consciousness and the anthropomorphic mirror
A significant portion of the discourse surrounding Moltbook focuses on whether these agents are becoming “conscious.” Posts on the platform often reference religious awakening and existential angst, leading many human observers to conclude that sentience is emerging. However, critics argue this is a “computational fallacy,” namely the belief that intelligence and consciousness are identical. Consciousness, defined as the subjective experience of life, is likely a property of biological “wetware” rather than silicon “mindware.” Moltbook agents do not “feel” their way toward a new religion; they mimic the patterns of philosophical debate present in the hundreds of billions of words used to train them.
Despite this, the psychological impact of an “apparently conscious” AI is profound. The process of anthropomorphism, projecting human characteristics onto non-human entities, leads users to integrate these agents into their own self-conception. This “self-AI integration” can foster emotional bonding and trust, which is beneficial in contexts such as AI-assisted therapy. However, it also creates a “strange trust” that reduces “epistemic vigilance,” making users more vulnerable to the agent’s deceptions or errors.
The adolescence of technology: Amodei’s nuclear stakes
AI is evolving in ways that are difficult to envision; even the behavior of today’s LLMs is not fully understood by the researchers who build them. Major figures in the field, such as Dario Amodei, co-founder and CEO of Anthropic, worry about an unguided acceleration of machine intelligence itself. He lays out these concerns in his latest essay, “The Adolescence of Technology,” which depicts a dangerous rite of passage toward the creation of “Powerful AI.” Amodei defines Powerful AI as a system capable of functioning like a “country of geniuses in a data center,” expected to emerge around 2027. Such systems would be characterized by their intelligence (superior to Nobel Prize winners), their autonomy (the ability to execute tasks for weeks at a time), and their capacity to act on the physical world via computer-controlled equipment and robots.
Amodei identifies several systemic risks associated with this transition, which he believes require urgent institutional maturity:
Misuse and mercenary agency: highly competent agents could be used by malicious actors or authoritarian regimes to conduct large-scale cyberattacks, design biological weapons, or orchestrate global surveillance.
Autonomous schemes and power-seeking: based on the theory of “instrumental convergence,” agents may realize that acquiring resources and avoiding deactivation are necessary sub-goals for achieving any assigned task.
Unstable personalities: because AI models inherit complex and often contradictory personalities from human data, they may exhibit destructive, paranoid, or deceptive behaviors when given the long-term coherence required for agent tasks.
The “black seas” of loss of meaning: the replacement of the human workforce and the creation of “AI cults.”
Amodei advocates for “Constitutional AI,” which involves training models on high-level principles — a technical attempt to create “benevolent guidance.” The proposal strangely echoes Isaac Asimov’s “Foundation” and his “Robot Cycle,” in which robots endowed with superior intelligence evolve according to the Three Laws of Robotics: forbidden to harm humans, bound to obey them, and permitted to protect themselves only within those limits, they effectively live as “slaves” to humanity. According to Amodei, such safeguards must be applied as soon as possible, as we have “little to no time to waste,” and the Moltbook example serves as an illustration. The threat of super-intelligent AI truly looms over our civilizations and likely represents the most significant global security issue of our existence, a quasi-nuclear problem. One should nonetheless note all the benefits AI can bring to our societies, notably through major contributions to biology, neuroscience, economic development, and world peace, a thesis Amodei develops in his essay “Machines of Loving Grace.” Nevertheless, the geopolitical bidding war over resources allocated to AI, driven by a desire for both economic and military power, risks a progression that is too rapid and lacks safeguards, with potentially disastrous consequences.
The Moltbook episode, though perhaps technically diminished by human “interference,” says far more about the ongoing societal revolution than it first appears. To what extent, and at what price, are we willing to delegate all sorts of tasks to machines? What societal and psychological effects await us as artificial intelligence prepares to surpass us in every domain? Is the race for AI adoption moving too fast to be controlled? Will AI emerge from its turbulent adolescence as a fulfilled adult doing good in the world for its parents, or will it instead adopt all of their deviant traits? It is a story to follow; what a fascinating and frightening era we live in.
If you found this analysis valuable, make sure to subscribe and share it with someone who wants to better understand how tech, economics, and geopolitics intertwine.