Expert comment: Apple AI safety & security

By Kevin Surace, Chair, Token

Apple has taken a “privacy and security first” approach to handling all generative AI interactions that must be processed in the cloud. No one else comes close at this point, and no one else has spelled out with full transparency how they intend to meet that high bar. More information can be found here: https://security.apple.com/blog/private-cloud-compute/.

Note that, at least for now, this applies to users of Apple hardware, who must trust that what they say to the AI stays private to them and can never be stolen or used to train models. It’s possible that some enterprises will evaluate the strength of this approach and allow their employees to use Apple devices with Apple Intelligence without fear.

Apple didn’t say exactly which silicon it is using here. Is it a custom GPU cluster of their own design, or their M4 processors, which include a neural engine and substantial GPU resources? Either way, in typical Apple fashion, they have vertically integrated everything and taken ownership of security from top to bottom. It’s impressive, and ahead of the AWS, Microsoft, and Google cloud offerings for LLMs thus far, even if it is only in support of Apple Intelligence features.

Apple has set the bar for absolute privacy and security of generative AI interactions, and everyone else will now need to scramble to meet it. This may allow enterprises to trust the Apple infrastructure for routine Apple Intelligence interactions, even those that include some corporate data.

Apple has developed its own foundation models that are very impressive but don’t yet beat GPT-4; they publish their comparisons here: https://machinelearning.apple.com/research/introducing-apple-foundation-models. While Apple has not said what its partnership with OpenAI entails, they hint that when GPT-4 (or perhaps GPT-5) is required for more accuracy, they will use it. To ensure absolute privacy, they would need to host that model themselves in their Private Cloud Compute. They didn’t state that yesterday, so I suspect the ink is still drying on those agreements, with details yet to be worked out. But simply bouncing requests out to GPT-4 won’t preserve that privacy. They suggested there would be an opt-in, so perhaps users give up some privacy when they choose to use GPT-4. How safe is OpenAI? It does provide various levels of private operation, but no one really knows how safe, secure, and non-sharing those actually are. While Apple has published an extensive security white paper, OpenAI offers only a short ChatGPT Enterprise privacy note, which certainly isn’t convincing Elon Musk it’s safe.

This is a world-class effort, one where they are inviting security experts to poke holes in their approach. I’d say it appears as rock solid as anything we have seen.

All data sent to the cloud is encrypted, so a simple man-in-the-middle attack won’t work. From what they are saying, an attacker would have to break into their network, and even then there are no debugging tools enabled at runtime and no privileged runtime access. They have also taken major precautions against physical access (basically, someone breaking into the data center). They state that the system is so secure, so heavily encrypted, and so free of stored user information that it isn’t even a worthwhile target. I’d say this is state-of-the-art from the silicon to the outer doors of the facility.
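To illustrate the end-to-end principle at work, here is a minimal Swift sketch (using CryptoKit) of encrypting a request to a specific server key before it leaves the device, so that a man-in-the-middle sees only ciphertext. This is an illustration of the concept, not Apple’s actual PCC protocol; the key handling and the derivation label are invented for the example.

```swift
import Foundation
import CryptoKit

// Minimal sketch (NOT Apple's protocol): encrypt a request so that only
// the holder of the target node's private key can read it.
do {
    // In PCC, the client would obtain an attested node's public key; here
    // we simply generate a key pair locally to stand in for it.
    let nodePrivateKey = Curve25519.KeyAgreement.PrivateKey()
    let nodePublicKey = nodePrivateKey.publicKey

    // Client: ephemeral key pair + ECDH + HKDF -> one-time AES-GCM key.
    let clientEphemeral = Curve25519.KeyAgreement.PrivateKey()
    let clientSecret = try clientEphemeral.sharedSecretFromKeyAgreement(with: nodePublicKey)
    let requestKey = clientSecret.hkdfDerivedSymmetricKey(
        using: SHA256.self,
        salt: Data(),
        sharedInfo: Data("demo-request-v1".utf8), // illustrative label
        outputByteCount: 32
    )

    // Anyone intercepting the wire sees only this ciphertext.
    let prompt = Data("Summarize my meeting notes".utf8)
    let sealed = try AES.GCM.seal(prompt, using: requestKey)

    // Server: derive the same key from its private key and the client's
    // ephemeral public key, then decrypt.
    let serverSecret = try nodePrivateKey.sharedSecretFromKeyAgreement(with: clientEphemeral.publicKey)
    let serverKey = serverSecret.hkdfDerivedSymmetricKey(
        using: SHA256.self,
        salt: Data(),
        sharedInfo: Data("demo-request-v1".utf8),
        outputByteCount: 32
    )
    let decrypted = try AES.GCM.open(sealed, using: serverKey)
    print(String(decoding: decrypted, as: UTF8.self)) // "Summarize my meeting notes"
} catch {
    print("Crypto error: \(error)")
}
```

In Apple’s actual design, the device verifies an attestation of the node before releasing any request to it; the sketch above only shows why interception on the wire yields nothing readable.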

Apple is stating that they are using their own foundation models, both in the network and on devices; that is first and foremost. They then note a partnership with OpenAI, to be used only when required, and say they will also use best-of-breed models, so they seem to be hedging their bets here. OpenAI is a bit of a black box, but I suspect Apple will either host it themselves or demand a very private instance for their users, with users required to opt in to its use. They failed to give us more details on the partnership, so time will tell, but it’s clear Apple takes privacy and security seriously, and they recognize the hesitancy their mention of OpenAI creates. My bet is they will do this right, and it won’t be an issue.
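To make that tiering concrete, here is a hypothetical Swift sketch of how such routing might look: on-device first, Private Cloud Compute when more capability is needed, and a third-party model only with explicit user opt-in. The tier names, the complexity signal, and the thresholds are all invented for illustration; Apple has not published how requests are actually routed.

```swift
// Hypothetical routing sketch of the tiering described above.
enum ModelTier {
    case onDevice            // Apple foundation model on the device
    case privateCloudCompute // Apple foundation model in PCC
    case thirdParty          // external model (e.g. GPT-4), gated behind consent
}

struct AIRequest {
    let complexity: Int               // stand-in for whatever signal real routing uses
    let userOptedIntoThirdParty: Bool // explicit per-user consent
}

func route(_ request: AIRequest) -> ModelTier {
    if request.complexity <= 3 {
        return .onDevice
    }
    if request.complexity <= 7 || !request.userOptedIntoThirdParty {
        return .privateCloudCompute
    }
    // Reached only when the request exceeds Apple's own models
    // AND the user has explicitly consented to a third-party model.
    return .thirdParty
}

// Example: a hard request from a user who has not opted in stays in PCC.
print(route(AIRequest(complexity: 9, userOptedIntoThirdParty: false))) // privateCloudCompute
```

The design point this sketch captures is that, on what Apple has said so far, privacy-sensitive escalation to a third party is a consent decision, not a capacity decision made silently on the user’s behalf.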
