Researchers Warn of Privilege Escalation Risks in Google's Vertex AI ML Platform
"By exploiting custom job permissions, we were able to escalate our privileges and gain unauthorized access to all data services in the project," Palo Alto Networks Unit 42 researchers Ofir Balassiano and Ofir Shaty said in an analysis published earlier this week.
"Deploying a poisoned model in Vertex AI led to the exfiltration of all other fine-tuned models, posing a serious proprietary and sensitive data exfiltration attack risk."
Read at The Hacker News