The rising influence of artificial intelligence (AI) has many organizations scrambling to address the new cybersecurity and data privacy concerns created by the technology, especially as AI is used in cloud systems. Apple addresses AI’s security and privacy issues head-on with its Private Cloud Compute (PCC) system.
Apple seems to have solved the problem of offering cloud services without undermining user privacy or adding new layers of insecurity. It had to: Apple needed a cloud infrastructure on which to run generative AI (genAI) models that demand more processing power than its devices can supply, while still protecting user privacy, according to a ComputerWorld article.
Apple is opening the PCC system to security researchers to “learn more about PCC and perform their own independent verification of our claims,” the company announced. Apple is also expanding its Apple Security Bounty program.
What does this mean for AI security going forward? Security Intelligence spoke with Ruben Boonen, CNE Capability Development Lead at IBM, to learn what researchers think about PCC and Apple’s approach.
SI: ComputerWorld reported this story, saying that Apple hopes that “the energy of the entire infosec community will combine to help build a moat to protect the future of AI.” What do you think of this move?
Boonen: I read the ComputerWorld article and reviewed Apple’s own statements about their private cloud. I think what Apple has done here is good. It goes beyond what other cloud providers do because Apple is providing insight into some of the internal components they use and is basically telling the security community: you can have a look at this and see whether it is secure or not.
It’s also good from the perspective that the AI industry is constantly growing. Bringing generative AI components into regular consumer devices, and getting people to trust AI services with their data, is a really good step.
SI: What do you see as the pros of Apple’s approach to securing AI in the cloud?
Boonen: Other cloud providers do offer strong security guarantees for data stored on their clouds. Many businesses, including IBM, entrust their corporate data to these providers. But a lot of the time, the processes used to secure that data aren’t visible to customers; the providers don’t explain exactly what they do. The biggest difference here is that Apple is providing a transparent environment in which users can test those claims.
SI: What are some of the downsides?
Boonen: Currently, the most capable AI models are very big, and that is what makes them so useful. But when we want AI on consumer devices, there’s a tendency for vendors to ship small models that can’t answer every question, so the device has to rely on larger models in the cloud. That comes with additional risk. Still, I think it is inevitable that the whole industry will move to that cloud model for AI. Apple is implementing this now because they want to give consumers confidence in the AI process.
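The hybrid pattern Boonen describes — a small on-device model handling what it can and escalating harder requests to a larger cloud model — can be sketched as a simple routing policy. This is a hypothetical illustration only: the function names, the `Response` type and the token threshold are assumptions for the sketch, not Apple’s actual implementation or API.

```python
# Hypothetical sketch of device-to-cloud model routing. All names and
# thresholds here are illustrative assumptions, not Apple's real design.
from dataclasses import dataclass

ON_DEVICE_WORD_LIMIT = 512  # assumed capability threshold for the local model

@dataclass
class Response:
    text: str
    served_by: str  # "device" or "cloud"

def local_model(prompt: str) -> str:
    # Stand-in for a small model running entirely on the device.
    return f"[small model] {prompt[:50]}"

def cloud_model(prompt: str) -> str:
    # Stand-in for a large cloud model. In a PCC-style design, this
    # request would go to attested, purpose-built cloud hardware
    # rather than a generic server.
    return f"[large model] {prompt[:50]}"

def route(prompt: str) -> Response:
    # Escalate to the cloud only when the prompt exceeds what the
    # on-device model can handle; otherwise keep the data local.
    if len(prompt.split()) <= ON_DEVICE_WORD_LIMIT:
        return Response(local_model(prompt), served_by="device")
    return Response(cloud_model(prompt), served_by="cloud")

print(route("What time is my next meeting?").served_by)  # device
print(route("word " * 1000).served_by)                   # cloud
```

The design choice this illustrates is exactly the trade-off in the answer above: routing to the cloud adds capability but also risk, which is why the transparency of the cloud side matters.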
SI: Apple’s system doesn’t play well with other systems and products. How will Apple’s efforts to secure AI in the cloud benefit other systems?
Boonen: They are providing a design template that other providers like Microsoft, Google and Amazon can then replicate. I think it is mostly effective as an example for other providers to say maybe we should implement something similar and provide similar testing capabilities for our customers. So I don’t think this directly impacts other providers except to push them to be more transparent in their processes.
It’s also important to mention Apple’s bug bounty as they invite researchers in to look at their system. Apple has a history of not doing very well with security researchers, and there have been cases in the past where they’ve refused to pay out bounties for issues found by the security community. So I’m not sure they’re doing this entirely in the interest of attracting researchers; it’s also partly about convincing their customers that they are doing things securely.
That being said, having read their design documentation, which is extensive, I think they’re doing a pretty good job in addressing security around AI in the cloud.