Apple improves Apple Intelligence with on-device data

Apple has been under fire lately for the lackluster performance of its AI features, particularly when it comes to summarizing notifications.
In response, the company recently explained how it’s working to make its AI better while still protecting users’ privacy.
Apple is using a technique called “differential privacy” to improve its AI without directly accessing people’s personal data.
Here’s how it works in simple terms: Apple creates fake, computer-generated data that looks and behaves like real user data but doesn’t contain any actual information from real people.
For example, to improve email-related AI tasks, Apple generates a large number of synthetic emails covering different topics and styles.
It then turns each of these fake messages into a compact digital summary that captures key traits such as the language used, the topic discussed, and the message length.
These summaries are then sent out to a limited number of user devices that have agreed to share analytics with Apple.
The devices quietly compare the synthetic summaries with actual emails stored on the device to see which fake messages are most similar to real ones.
The devices then report back general findings, like which types of summaries are most accurate, without sending back any personal content.
Apple then uses this feedback to make its AI models smarter and more useful.
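The pipeline described above can be sketched in a few lines of Python. This is purely an illustration of the idea, not Apple's implementation: the word-set "summaries", the Jaccard similarity, and the randomized-response noise are all simplifying assumptions standing in for Apple's actual embeddings and differential-privacy machinery.

```python
import math
import random

# Step 1 (server): synthetic email "summaries" -- here just sets of keywords.
# These are hypothetical examples, not real Apple data.
SYNTHETIC_SUMMARIES = [
    {"dinner", "tonight", "reservation"},  # social email
    {"invoice", "payment", "attached"},    # billing email
    {"meeting", "agenda", "tomorrow"},     # work email
]

def most_similar(local_email_words, summaries):
    """Step 2 (on device): find the synthetic summary closest to a real email.
    Jaccard similarity is used here as a stand-in for embedding distance."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(range(len(summaries)),
               key=lambda i: jaccard(local_email_words, summaries[i]))

def randomized_response(true_index, n_options, epsilon=1.0):
    """Step 3 (on device): report the match under local differential privacy.
    With some probability a random index is sent instead of the true one,
    so no single report reveals what is actually on the device."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + n_options - 1)
    if random.random() < p_truth:
        return true_index
    return random.choice([i for i in range(n_options) if i != true_index])

def aggregate(reports, n_options):
    """Step 4 (server): tally noisy reports from many opted-in devices.
    The most popular summary still stands out, even though each
    individual report may be random noise."""
    counts = [0] * n_options
    for r in reports:
        counts[r] += 1
    return counts

# Simulate 1,000 opted-in devices that each hold one real email.
local_email = {"meeting", "notes", "agenda"}
reports = [randomized_response(most_similar(local_email, SYNTHETIC_SUMMARIES),
                               len(SYNTHETIC_SUMMARIES))
           for _ in range(1000)]
print(aggregate(reports, len(SYNTHETIC_SUMMARIES)))
```

The key property: any single device's report might be a lie, so Apple learns only the aggregate trend (which synthetic email style matches real usage best), never the contents of any one inbox.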
The company is already applying this method to improve features like Genmoji, and it plans to expand it to other creative tools such as Image Playground, Image Wand, Memories Creation, and writing assistance features.
Apple is also exploring ways to use this method to improve how its AI summarizes emails.
All in all, Apple aims to make its AI features more helpful and accurate while keeping user data private: it trains on synthetic examples instead of the real thing and tests them only on devices whose owners have opted in to share analytics.
What do you think about this training approach? Would you prefer an AI model trained like this? Tell us what you think below in the comments, or reach us via our Twitter or Facebook.
