Anthropic releases safer Claude Code 'auto mode' to avoid mass file deletions and other AI snafus
Briefly

"With auto mode enabled, a classifier system guides Claude, giving it permission to carry out actions it deems safe, while redirecting the chatbot to take a different approach when it determines Claude might do something risky."
"Anthropic warns that the classifier may still allow some risky actions: for example, if user intent is ambiguous, or if Claude doesn't have enough context about your environment to know an action might create additional risk."
"The recent 13-hour AWS outage Amazon suffered after one of the company's AI tools reportedly deleted a hosting environment was probably front of mind for the company."
Anthropic has launched 'auto mode' in Claude Code, allowing the AI to perform actions it deems safe without constant user approval. The feature occupies a middle ground between requiring permission for every action and a fully autonomous command mode. A classifier system guides Claude's decisions, aiming to prevent risky actions such as mass file deletions or data extraction. Despite these safeguards, Anthropic acknowledges the classifier can still allow risky actions when user intent is ambiguous or Claude lacks context about the environment. Auto mode is currently available to Team plan users and will soon roll out to Enterprise and API users.
Read at Engadget