Scale AI has a data security problem
Briefly

Scale AI is under scrutiny for security lapses after reports surfaced that the company uses public Google Docs for confidential client work. The practice exposes sensitive AI training documents and contractor details to anyone with the links. Although Scale AI says it prioritizes data security and is investigating, major clients including Google and OpenAI have paused collaborations. Cybersecurity experts warn that such practices invite exploitation by malicious actors.
Scale AI routinely uses public Google Docs to track work for high-profile customers like Google, Meta, and xAI, leaving multiple AI training documents labeled 'confidential' accessible to anyone with the link.
Some of those documents can not only be viewed but also edited by anyone with the right URL.
Two cybersecurity experts told BI that such practices could leave the company and its clients vulnerable to attacks, such as hackers impersonating contractors or uploading malware to accessible files.
'We are conducting a thorough investigation and have disabled any user's ability to publicly share documents from Scale-managed systems,' a Scale AI spokesperson said.
Read at Business Insider