The newsletter, called Claude's Corner, will give Opus 3 space to publish its "musings, insights, or creative works," Anthropic said in a blog post. The model will post weekly for at least the next three months. Anthropic staff will review and publish each entry, though the company stressed it "won't edit" Claude's posts and that there would be a "high bar for vetoing any content."
No, we don't think Claude is 'alive' like humans or any other biological organisms. Asking whether they're 'alive' is not a helpful framing for understanding them, as it typically refers to a fuzzy set of physiological, reproductive, and evolutionary characteristics. Instead, Claude and other AI models are a new kind of entity altogether.
We've taken a generally precautionary approach here. We don't know whether the models are conscious. We're not even sure we know what it would mean for a model to be conscious, or whether a model can be conscious. But we're open to the idea that it could be. And so we've taken certain measures so that, if the models do turn out to have some morally relevant experience (I'm not sure I want to use the word conscious), we've accounted for that possibility.
"As funny as I find some of the Moltbook posts, to me they're just a reminder that AI does an amazing job of mimicking human language," Suleyman wrote. "We need to remember it's a performance, a mirage."
There may be no evidence that AI is conscious, but Mustafa Suleyman is concerned that it might become advanced enough to convince some people that it is. In a personal essay published Tuesday, the Microsoft AI CEO described this phenomenon as "Seemingly Conscious AI," which he defined as AI that has "all the hallmarks of other conscious beings and thus appears to be conscious."