
Research from Wiz has revealed that nearly two-thirds (65%) of private AI companies listed in the Forbes AI 50 had leaked sensitive information on GitHub. "Think API keys, tokens, and sensitive credentials, often buried deep in deleted forks, gists, and developer repos most scanners never touch," the research states. "Some of these leaks could have exposed organizational structures, training data, or even private models." With the development and evolution of AI accelerating, cybersecurity teams are finding themselves in a new risk frontier.
The majority of these exposures stem from traditional weaknesses such as misconfigurations, unpatched dependencies, and exposed API keys in developer repositories. What's changed is the scale and impact. In AI environments, a single leaked key doesn't just expose infrastructure; it can unlock private training data, model weights, or inference endpoints, the intellectual property that defines a company's competitive advantage.
Sixty-five percent of private AI companies listed in the Forbes AI 50 leaked sensitive information on GitHub, including API keys, tokens, and credentials often hidden in deleted forks, gists, and developer repositories. These leaks have the potential to expose organizational structures, training data, and private models. Most exposures arise from traditional security weaknesses such as misconfigurations, unpatched dependencies, and exposed API keys, but AI magnifies their impact. A single leaked key can unlock private training data, model weights, or inference endpoints. Emerging AI-native risks such as model poisoning, prompt injection, and autonomous agents add novel attack vectors across cloud environments.
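The leaks described above are typically caught by matching repository contents against known credential formats. As a minimal illustration, the sketch below scans text for a few well-known key patterns; the pattern set and function names are illustrative assumptions, not Wiz's tooling, and production scanners use far larger rule sets plus entropy checks and historical-commit coverage.

```python
import re

# Illustrative patterns for a few well-known credential formats.
# Real secret scanners maintain hundreds of such rules and also
# scan git history, deleted forks, and gists.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs found in `text`."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Regex matching alone produces false positives and misses novel token formats, which is one reason credentials "buried deep in deleted forks" evade many scanners.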
Read at Securitymagazine