Google releases VaultGemma, its first privacy-preserving LLM
Briefly

"The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to "memorize" any of that content."
"LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say. While the output varies even for identical inputs, models do sometimes regurgitate something from their training data-if trained with personal data, the output could be a violation of user privacy. In the event copyrighted data makes it into training data (either accidentally or on purpose), its appearance in outputs can cause a different kind of headache for devs. Differential privacy can prevent such memorization by introducing calibrated noise during the training phase."
Companies seeking larger AI models face shortages of high-quality training data and may increasingly rely on web-scraped, potentially sensitive user data. Large language models can sometimes reproduce training content, creating privacy and copyright risks when personal or copyrighted material appears in outputs. Differential privacy reduces memorization by adding calibrated noise during training, but it lowers accuracy and increases compute costs. Model performance under differential privacy depends mainly on the noise-to-batch ratio, and the resulting accuracy penalty can be offset by increasing the compute budget (FLOPs) or the data budget (tokens). Scaling laws that balance compute, privacy, and data budgets guide these practical trade-offs.
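The "calibrated noise during training" described above is typically implemented with a DP-SGD-style update: per-example gradients are clipped to a fixed norm, summed, and perturbed with Gaussian noise before the parameter step. The sketch below is only an illustration of that general mechanism on a toy linear model, not VaultGemma's training code; the names `clip_norm`, `noise_multiplier`, and `dp_sgd_step` are assumptions chosen for clarity. It also shows why the noise-to-batch ratio matters: the noise scale is fixed per step, so larger batches dilute its effect on the averaged gradient.

```python
# Illustrative DP-SGD-style step (assumed names; not Google's implementation).
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, lr=0.05, clip_norm=1.0, noise_multiplier=1.0):
    """One differentially private SGD step on a toy linear model."""
    batch_size = X.shape[0]
    clipped_sum = np.zeros_like(w)
    for xi, yi in zip(X, y):
        # Per-example gradient of squared error for a linear model.
        grad = 2.0 * (xi @ w - yi) * xi
        # Clip each example's gradient so no single example dominates.
        norm = np.linalg.norm(grad)
        grad = grad * min(1.0, clip_norm / (norm + 1e-12))
        clipped_sum += grad
    # Calibrated Gaussian noise scaled by noise_multiplier * clip_norm;
    # dividing by batch_size shows how bigger batches shrink the
    # effective noise-to-batch ratio.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_mean_grad = (clipped_sum + noise) / batch_size
    return w - lr * noisy_mean_grad

# Toy usage: recover y = 3x from noisy private steps.
X = rng.normal(size=(256, 1))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=256)
w = np.zeros(1)
for _ in range(200):
    idx = rng.choice(256, size=64, replace=False)
    w = dp_sgd_step(w, X[idx], y[idx])
print("learned weight:", w)
```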
Read at Ars Technica