I let Anthropic's Claude Cowork loose on my files, and it was both brilliant and scary
Briefly

""With great power comes great responsibility." So said wise old Uncle Ben to a young Peter Parker. With Claude Cowork, you're granting the AI enormous power, but the responsibility of what it does falls entirely on your shoulders. Claude Cowork is basically agentic AI for your file system. Here's a view of its main screen. You can see that Claude recommends six task categories (at 1)."
"Also: Claude Cowork automates complex tasks for you now - at your own risk Like most chatbots, there's a prompt entry area (at 2). But the real key to Cowork's power is the Work in a Folder option (at 3). Here is where you specify what folder on your local computer Claude is going to dig through and process. It may seem dangerous to let an AI loose in your computer's files (and it is)."
Claude Cowork is an agentic AI that operates directly on a user's local file system. The interface recommends six task categories and includes a Work in a Folder option to target specific folders for automated processing. The Organize Files task enables bulk classification and data crunching. Granting the AI access to local files creates significant risk, and the ultimate responsibility for actions and changes rests with the user. Anthropic's prior Claude Code capability already allowed AI access across large codebases. Programmers commonly use source control such as GitHub to track and revert changes, but Cowork lacks built-in source-control safeguards. The service costs about $100 per month and raises open questions about security, scalability, and trust.
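Since Cowork lacks the source-control safeguards programmers rely on, one low-effort mitigation is to snapshot the target folder with git before pointing the agent at it, so every change can be reviewed and reverted. A minimal sketch (the folder and file names are illustrative; only git itself is assumed):

```shell
# Create a scratch folder standing in for the one you'd hand to Cowork.
dir=$(mktemp -d)
cd "$dir"
echo "original" > notes.txt

# Snapshot everything before the agent touches it.
git init -q
git add -A
git -c user.email=you@example.com -c user.name=you commit -qm "pre-Cowork snapshot"

# ...an agent modifies files here...
echo "changed by agent" > notes.txt

git diff --stat      # review exactly what changed since the snapshot
git checkout -- .    # revert every tracked file to the snapshot
cat notes.txt        # prints "original" again
```

This doesn't stop a misbehaving agent, but it does convert "scary and irreversible" into "reviewable and revertible", which is the same guarantee Claude Code users get from GitHub.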
Read at ZDNET