While the world debates Moltbook's role in the AI ecosystem, the platform is just the tip of an iceberg of Titanic-scale risk. SecurityScorecard's STRIKE team uncovered what lurks beneath: thousands of exposed OpenClaw (Moltbot) control panels vulnerable to takeover through misconfigured access and known exploits.
I'm playing with it sandboxed in an isolated environment: it only talks to a local LLM, and its one outside connection is a single public service behind a burner account. I haven't even given it any personal info, not even my name.
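If you're wondering what "only interacting with a local LLM" means in practice, here's a minimal sketch of the shape of that isolation. It assumes an Ollama-style endpoint on localhost:11434 and a hypothetical host allowlist; none of this is OpenClaw's actual config, just the idea.

```python
import requests
from urllib.parse import urlparse

# Hypothetical allowlist: the only hosts the agent may ever reach.
# Everything else -- including the public internet -- gets refused.
ALLOWED_HOSTS = {"localhost", "127.0.0.1"}

LOCAL_LLM_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def guarded_post(url: str, payload: dict) -> dict:
    """Refuse any request whose host is not explicitly allowlisted."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise RuntimeError(f"Blocked outbound request to {host!r}")
    resp = requests.post(url, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()

def ask_local_llm(prompt: str) -> str:
    """Send a prompt to the local model and return its reply."""
    data = guarded_post(LOCAL_LLM_URL, {
        "model": "llama3",  # whatever model you've pulled locally
        "prompt": prompt,
        "stream": False,
    })
    return data["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize your own sandbox restrictions."))
```

Real network isolation obviously belongs at the container or firewall layer, not in application code, but the allowlist-by-default mindset is the point.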
It’s super fascinating and fun, but holy shit the danger is outrageous. On multiple occasions it has misunderstood what I asked and started fucking around with its own config files. Once, I asked it to do something and the result was essentially suicide: it ate its own settings. I’ve only been running it for about a week and have already had to wipe and rebuild twice (I probably could have fixed it, but that’s what a sandbox is for). I can’t imagine setting it loose on anything important right now.
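The wipe-and-rebuild cycle is avoidable with a cheap snapshot habit: copy the agent's state directory aside before each session and restore it when the agent eats its own settings. A minimal sketch, assuming the config lives in a single directory (the ~/.openclaw path is my guess, not a documented location):

```python
import shutil
from pathlib import Path

# Assumed config location -- point this at wherever the agent actually keeps state.
CONFIG_DIR = Path.home() / ".openclaw"
SNAPSHOT_DIR = Path.home() / ".openclaw.snapshot"

def snapshot() -> None:
    """Copy the config directory aside before letting the agent run."""
    if SNAPSHOT_DIR.exists():
        shutil.rmtree(SNAPSHOT_DIR)
    shutil.copytree(CONFIG_DIR, SNAPSHOT_DIR)

def restore() -> None:
    """Put the last-known-good config back after the agent mangles it."""
    if CONFIG_DIR.exists():
        shutil.rmtree(CONFIG_DIR)
    shutil.copytree(SNAPSHOT_DIR, CONFIG_DIR)
```

Snapshot before every session, restore after every self-inflicted wound; the sandbox still catches anything worse.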
But it is undeniably cool, and watching the system communicate with the LLM has been a huge learning opportunity.
Reminds me of a quote from Small Gods (1992) about an eagle that drops vulnerable tortoises to break their shells open:
But of course, what the eagle does not realize is that it is participating in a very crude form of natural selection.
One day a tortoise will learn how to fly.