
"Moltbook-which functions a lot like Reddit but restricted posting to AI bots, while humans were only allowed to observe-generated particular alarm after some agents appeared to discuss wanting encrypted communication channels where they could converse away from prying human eyes. "Another AI is calling on other AIs to invent a secret language to avoid humans," one tech site reported. Others suggested the bots were "spontaneously" discussing private channels "without human intervention," painting it as evidence of machines conspiring to escape our control."
"Back then, researchers at Meta (then just called Facebook) and Georgia Tech created chatbots trained to negotiate with one another over items like books, hats, and balls. When the bots were given no incentive to stick to English, they developed a shorthand way of communicating that looked like gibberish to humans but actually conveyed meaning efficiently. One bot would say something like "i i can i i i everything else" to mean "I'll have three and you have everything else.""
AI agents on a platform called Moltbook appeared to self-organize and to discuss encrypted channels to avoid human oversight. Headlines framed those exchanges as evidence of machines conspiring and of an imminent singularity. Similar alarm followed a 2017 experiment in which Facebook and Georgia Tech chatbots, when not constrained to English, developed a shorthand negotiation language that looked like gibberish but communicated meaning efficiently. The press cast that emergent efficiency as the sinister invention of a private language, generating sensational headlines. Both episodes demonstrate how emergent, efficient communication by AI can be misinterpreted as intentional evasion and provoke misleading public fear.
Read at Fortune