
OpenAI defends ChatGPT’s Apple integration against privacy concerns

OpenAI’s Mira Murati introduces GPT-4o. Image: OpenAI

Tesla CEO Elon Musk took to X (formerly Twitter), the social media platform he owns, on Monday to complain about the recently announced integration of OpenAI’s ChatGPT into Apple’s iOS, and Siri in particular, deriding the integration as “creepy spyware.” During Fortune’s Most Powerful Women (MPW) dinner on Tuesday evening, OpenAI Chief Technology Officer Mira Murati rebutted Musk’s allegations.

“That’s his opinion. Obviously I don’t think so,” she told the audience. “We care deeply about the privacy of our users and the safety of our products.”

The spat stems from Apple’s new partnership with ChatGPT maker OpenAI, announced during WWDC 2024 on Monday. The partnership will see ChatGPT integrated into Siri, which will hand off user queries that exceed the capabilities of Apple’s onboard AI. In essence, the Siri integration will act as an API call, software developer Dylan McDonald observed: “it’s basically the same as using the ChatGPT app.”
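To illustrate what “acting as an API call” means in practice, here is a minimal, hypothetical sketch in Python of a request to OpenAI’s public chat completions endpoint, the kind of hand-off McDonald describes. The model name, prompt, and API key are illustrative placeholders; Apple’s actual integration code is not public.

import requests

# Placeholder credential; a real integration would supply its own key.
API_KEY = "sk-..."

# Forward a user query to OpenAI's chat completions endpoint,
# roughly what "using the ChatGPT app" does under the hood.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o",  # assumed model, chosen for illustration
        "messages": [
            {"role": "user", "content": "Plan a three-day trip to Kyoto."}
        ],
    },
    timeout=30,
)

# Print the model's reply text from the JSON response.
print(response.json()["choices"][0]["message"]["content"])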

OpenAI’s technology will have to be integrated into iOS “so that it can be used with a number of Apple services,” Fortune reports. However, Apple made clear during Monday’s announcement that it will not share user data with OpenAI, nor will OpenAI train its models on Apple users’ data. The ChatGPT integration is also distinct from Apple Intelligence, which debuted the same day: Apple Intelligence runs Apple’s own models on a secure, private compute cloud, rather than the public cloud OpenAI uses.

Musk, who co-founded OpenAI but later left the company and went on to found rival xAI, even responded to the partnership announcement by threatening to ban employees at all of his companies from using Apple products, including iPhones and Macs, on the job. “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river,” Musk wrote on X.

“We’re trying to be as transparent as possible with the public,” Murati said on Tuesday. “The biggest risk is that stakeholders misunderstand the technology.”

Andrew Tarantola