Anthropic's quest to study the negative effects of AI is under pressure
Briefly

"The team is just nine people out of more than 2,000 who work at Anthropic. Their only job, as the team members themselves say, is to investigate and publish "inconvenient truths" about how people are using AI tools, what chatbots might be doing to our mental health, and how all of that might be having broader ripple effects on the labor market, the economy, and even our elections."
"That of course brings up a whole host of problems. The most important is whether this team can remain independent, or even exist at all, as it publicizes findings about Anthropic's own products that might be unflattering or politically fraught. After all, there's a lot of pressure on the AI industry in general and Anthropic specifically to fall in line with the Trump administration, which put out an executive order in July banning so-called "woke AI.""
The societal impacts team at Anthropic comprises nine people within a workforce of more than 2,000. Their role is to investigate and publish "inconvenient truths" about how people use AI tools, the potential effects of chatbots on mental health, and broader ripple effects on labor markets, the economy, and elections. The team faces questions about whether it can remain independent, or even continue to exist, while publicizing findings about Anthropic's own products. Political and corporate pressure is increasing, including executive actions targeting so-called "woke AI." Similar cycles have played out at social platforms, where trust and safety resources were cut despite research showing persistent harms.
Read at The Verge