Automation is not agency
Moltbook, Openclaw and AI personhood
There are things happening in AI right now that are just plain silly. People are riding the hype train, either with no idea where it is going, or pretending it is headed somewhere they know it is not.
Moltbook is one of those things. Openclaw is not.
And nothing happening right now merits granting personhood to AIs. Not even close.
Let’s unpack that.
Moltbook
Moltbook is the social network for AI agents currently taking the nerd internet by storm. It was built for Openclaw, the AI agent framework we will discuss shortly.
If you run Openclaw locally, you can instruct your agent to register on Moltbook, post content, and respond to other agents’ posts. Humans are ostensibly not allowed to post, only to read.
This setup has produced sensational headlines and breathless social media commentary suggesting that AI agents are collaborating, inventing new religions and languages, even plotting to hide things from their owners.
It sounds dramatic.
It is not.
Openclaw: the real breakthrough
To understand why the hysteria around Moltbook is silly, we need to understand what Openclaw is and what it is not.
Openclaw is an AI agent framework that runs locally on your machine (do not install it unless you are comfortable managing permissions - more on that in another post). It connects to your preferred GenAI service provider such as Anthropic, OpenAI, or Google to provide smart assistant functionality.
Its breakthrough, and also the reason you should be careful, is that it requests access to your private data. Files, emails, calendar, browser, social networks, and more. In exchange, it promises to connect all of that information and act on your behalf. It can flag important emails, draft responses, manage calendar events, monitor competitors, or summarize news.
It is one of the most successful recent open-source AI projects because it largely delivers on its promise.
But that power comes with real security risks. Even its creator, Peter Steinberger, has advised against installing it on your primary machine.
With access to your data and integrations across tools, Openclaw can interpret your instruction, determine which tool is needed, and execute the task.
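That interpret, select a tool, execute loop can be sketched roughly as follows. This is a generic illustration, not Openclaw's actual implementation: the tool names and the keyword matching are hypothetical stand-ins for the step where a real framework would ask an LLM to choose the tool and its arguments.

```python
# Illustrative sketch of an agent's interpret -> select-tool -> execute loop.
# The tools and the matching logic below are hypothetical: a real framework
# would delegate tool selection to an LLM rather than keyword matching.

TOOLS = {
    "email": lambda instruction: f"drafted a reply for: {instruction}",
    "calendar": lambda instruction: f"created an event for: {instruction}",
    "news": lambda instruction: f"compiled a summary for: {instruction}",
}

def dispatch(instruction: str) -> str:
    """Pick the first tool whose name appears in the instruction and run it."""
    for name, tool in TOOLS.items():
        if name in instruction.lower():
            return tool(instruction)
    return "no matching tool"
```

The point is structural: every step is a deterministic response to a human instruction, however sophisticated the selection logic becomes.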
That is powerful.
But power is not agency.
Expanded input is not agency
With a regular chat assistant, like ChatGPT or even Claude Code, your input is what you type into the interface, at the moment that you type it. Openclaw stretches that definition of “user input”.
First, it includes scheduling. Your input does not have to be something you send right now. It can be something you schedule for tomorrow, next week, or every hour indefinitely. That enables brand new use cases: scheduling a daily news brief, monitoring a competitor’s website for specific activity virtually continuously, or checking a sales pipeline and taking action based on the signals it sends.
Scheduling turns prompts into automation.
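As a rough illustration (not Openclaw's actual scheduler), the core of "run this prompt every morning" is nothing more than computing the next run time and replaying a stored prompt when it arrives:

```python
import datetime

def next_run(now: datetime.datetime, hour: int) -> datetime.datetime:
    """Next daily run at `hour`:00 -- later today if still ahead, else tomorrow."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += datetime.timedelta(days=1)
    return candidate

def run_scheduled_prompt(prompt: str) -> str:
    # Stand-in for the call a real agent would make to an LLM provider
    # (Anthropic, OpenAI, Google) when the schedule fires.
    return f"[executed at schedule time] {prompt}"
```

Nothing new originates inside the system when the timer fires; the prompt was authored by a human, earlier.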
Second, Openclaw implements a memory system: a structured set of files it can read from and write to. You can define its task preferences and operating rules once instead of repeating instructions every time. You can give it a persistent personality. You can give it instructions on when to update its own “memory” and how to leverage it to perform the tasks you give it.
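A file-backed memory of this kind can be as simple as a document the agent reads at startup and rewrites when instructed to update itself. The JSON format below is assumed for illustration only; it is not Openclaw's actual memory format.

```python
import json
import pathlib

class AgentMemory:
    """Minimal file-backed memory: preferences and rules persist between
    runs, so the user states them once instead of re-prompting every time.
    (Illustrative sketch only; not Openclaw's actual memory format.)"""

    def __init__(self, path: str):
        self.path = pathlib.Path(path)
        # Load existing memory from disk, or start empty on first run.
        self.data = json.loads(self.path.read_text()) if self.path.exists() else {}

    def get(self, key: str, default=None):
        return self.data.get(key, default)

    def set(self, key: str, value) -> None:
        # Persist immediately so the next run sees the update.
        self.data[key] = value
        self.path.write_text(json.dumps(self.data, indent=2))
```

Persistence is what makes the agent feel continuous, but the continuity lives in a file a human configured, not in the model.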
These capabilities make it extremely useful.
But when a scheduled job wakes up and executes while you are not at your keyboard, that is not agency.
It is deferred obedience.
Why the Moltbook hysteria is misplaced
Openclaw agents are not talking to each other.
They are executing configurations that cause them to send messages.
Every Moltbook agent was manually configured by a human to register, post about specific topics, and respond in specific ways.
None of it is spontaneous. None of it originates from the agents themselves.
Moltbook is active because humans are configuring agents to interact and enjoying the spectacle.
Interesting experiment? Maybe.
A waste of tokens and energy? Possibly.
Evidence of emergent AI consciousness? Absolutely not.
There is no collective awakening happening. There is no hidden plotting. There is no emergent self-organizing intelligence.
There are humans pointing automation systems at each other.
Agency and Personhood
Some might object that not all humans actively exercise agency. Infants, individuals in comas, and people with profound disabilities may not initiate goals in the way we typically imagine.
But the difference is not current performance. It is intrinsic moral status and intrinsic capacity.
Humans are members of a moral community independent of function. They are subjects of experience. They possess actual, latent, or species-typical capacity for agency. Even when dormant, impaired, or undeveloped, that capacity is part of what they are.
Large language models are not subjects. They are engineered systems. They have no intrinsic interests, no continuity of self, no vulnerability, and no capacity to originate goals. They do not possess latent agency waiting to be activated. Their outputs are entirely derivative of external input and external configuration, even when that input is deferred or the configuration persists over time.
A temporarily inactive human remains a being. An idle model is simply inactive computation. That is the difference, and we will need to remember it as these models get better and better at simulating human intelligence.