Apple Boosts AI Performance Without Compromising User Privacy
Apple has unveiled a new method for training its AI models that aims to improve performance without sacrificing user privacy. The initiative, described in a post on Apple’s Machine Learning Research blog and first reported by Bloomberg, will roll out in the beta versions of iOS 18.5 and macOS 15.5.
Previously, Apple relied on synthetic data (artificially generated messages) to improve AI features such as writing tools and email summaries. While this safeguarded user privacy, the company acknowledged that synthetic data struggles to capture how people actually write and summarise their thoughts.
The new technique lets Apple privately compare synthetic data against genuine user content without ever accessing or retaining actual user emails.
Here is how the approach works: Apple generates thousands of fictional emails covering everyday topics. These are converted into “embeddings”, numerical representations of their content, and sent to a small set of devices whose users have opted in to Apple’s Device Analytics scheme.
On each device, a small sample of recent user emails is compared locally with the synthetic messages to identify the closest match. Crucially, those results never leave the device: applying differential privacy, only anonymised signals about the most frequently matched synthetic messages are relayed back to Apple.
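Apple has not published the implementation details, but the two steps above can be sketched in miniature. The toy bag-of-words embedding, the cosine-similarity match, and the use of k-ary randomized response as the differential-privacy mechanism are all illustrative assumptions, not Apple’s actual method:

```python
import math
import random

def embed(text: str, vocab: list[str]) -> list[float]:
    """Toy bag-of-words embedding over a fixed vocabulary, L2-normalised.
    (A stand-in for the learned sentence embeddings a real system would use.)"""
    tokens = text.lower().split()
    vec = [float(tokens.count(word)) for word in vocab]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def nearest_synthetic(user_email: str, synthetic: list[str], vocab: list[str]) -> int:
    """On-device step: index of the synthetic email whose embedding is
    closest (by cosine similarity) to the user's email."""
    u = embed(user_email, vocab)
    def sim(i: int) -> float:
        return sum(a * b for a, b in zip(u, embed(synthetic[i], vocab)))
    return max(range(len(synthetic)), key=sim)

def randomized_response(true_index: int, k: int, epsilon: float) -> int:
    """Local differential privacy via k-ary randomized response: report the
    true index with probability e^eps / (e^eps + k - 1), otherwise a
    uniformly random other index, so no single report identifies a user."""
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return true_index
    return random.choice([i for i in range(k) if i != true_index])

# Hypothetical synthetic corpus and user email, purely for illustration.
synthetic = ["dinner plans tonight", "meeting moved to friday", "invoice attached"]
vocab = sorted({w for s in synthetic for w in s.lower().split()})
idx = nearest_synthetic("can we move our friday meeting", synthetic, vocab)
report = randomized_response(idx, k=len(synthetic), epsilon=4.0)
```

Here the device would transmit only `report`, which with some probability is a deliberately wrong answer; Apple learns nothing reliable about any one user, only statistics over many reports.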
These frequently matched messages are then used to refine Apple’s AI models, improving the accuracy of outputs such as email summaries while preserving user anonymity.
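On the server side, the noisy reports can still yield accurate aggregate counts. Assuming the k-ary randomized-response mechanism from the sketch above (again an illustrative assumption, not Apple’s published method), the deliberate noise can be debiased with a standard estimator:

```python
import math
from collections import Counter

def estimate_true_counts(reports: list[int], k: int, epsilon: float) -> list[float]:
    """Server-side step (a sketch): debias k-ary randomized-response reports
    to estimate how often each synthetic email was the closest match,
    without trusting any individual report."""
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)  # prob. of a truthful report
    q = (1.0 - p) / (k - 1)                              # prob. of each other index
    counts = Counter(reports)
    # E[observed_i] = true_i * p + (n - true_i) * q, so invert:
    return [(counts.get(i, 0) - n * q) / (p - q) for i in range(k)]

# Simulated reports in which synthetic email 1 is the common match.
reports = [1] * 900 + [0] * 50 + [2] * 50
estimates = estimate_true_counts(reports, k=3, epsilon=4.0)
```

Only the most popular synthetic messages stand out from the noise floor, which is exactly the signal needed to decide which ones to feed back into training.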
Whether this strategy will let Apple keep pace with AI frontrunners such as ChatGPT or Gemini 2.0 remains to be seen, but the emphasis on privacy gives Apple a distinctive position in the AI landscape.