How Apple plans to train its AI on your data without sacrificing your privacy
Briefly

Apple faces a challenge: improving its AI technology without compromising user privacy, a core value for the company. Unlike competitors such as Google and OpenAI, which analyze user chats to train their models, Apple trains on synthetic data generated by its own large language models. But synthetic data lacks the nuance of real-world interactions, so Apple has turned to differential privacy, a technique that lets aggregate signals from real user data inform AI training while keeping any individual user's content confidential.
Apple has always prided itself on being more privacy-focused than its tech rivals. To that end, the company has relied on something called synthetic data to train and improve its AI products.
The problem with synthetic data is that it can't replicate the human touch found in real-world content. That limitation has led Apple to adopt a complementary technique known as differential privacy.
Differential privacy lets Apple combine synthetic data with signals derived from real data without exposing any individual user's content. As described by Apple, the method generates an embedding for each synthetic message, a numerical representation capturing key elements such as language, topic, and length.
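To make the idea concrete, here is a minimal sketch of one common local differential privacy mechanism, k-ary randomized response. The scenario is hypothetical and simplified, not Apple's actual implementation: imagine each device picks the synthetic-message embedding closest to its own local data, then reports that choice with noise, so the server learns population-level trends but no single report can be trusted to reveal a user's true answer.

```python
import math
import random

def closest_variant(user_vec, variant_vecs):
    """Return the index of the synthetic-message embedding most similar
    to the user's local embedding, by cosine similarity."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    return max(range(len(variant_vecs)), key=lambda i: cos(user_vec, variant_vecs[i]))

def randomized_response(true_index, k, epsilon):
    """k-ary randomized response: report the true choice with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly random other choice.
    Each report then satisfies epsilon-local differential privacy."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return true_index
    others = [i for i in range(k) if i != true_index]
    return random.choice(others)

# A server aggregating many such noisy reports can still recover which
# synthetic messages best match real usage, because the true choice is
# reported slightly more often than any other on average.
```

The key trade-off is the privacy parameter epsilon: a smaller epsilon means more noise per report and stronger privacy, so the server needs more devices reporting before the population-level signal emerges.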
While OpenAI, Google, Microsoft, and Meta train their products partly by analyzing user chats, Apple seeks to enhance its Apple Intelligence technology without compromising users' privacy.
Read at ZDNET