
Steve Jobs directly responds to new Apple publisher policies

It was only a matter of time until an Apple exec reached out to address the controversy over the company’s new app publishing policies. But it’s still somewhat surprising that the exec was CEO Steve Jobs himself, who’s been at the center of increased speculation concerning his health.

According to MacRumors, an iOS user and developer wrote to Apple, expressing his fears over the new policy’s effect on software as a service (SaaS) apps. There’s been mounting concern that SaaS apps like Dropbox, Salesforce, and Readability won’t be able to operate as they previously did (or at all, in Readability’s case). Jobs’ reply? “We created subscriptions for publishing apps, not SaaS apps. – Sent from my iPhone.”

Jobs’ response is about as vague as it is terse. Still, it can be assumed Jobs is implying that the policy applies to publishing apps only, and that SaaS apps aren’t subject to it. So the likes of Netflix should be safe from the newly enforced guidelines. However, the nine-word response from Jobs isn’t pacifying all the apprehensive developers out there. Screenshot-sharing app TinyGrab announced it won’t pursue an iOS app because of the rules, and there’s general confusion as to whether SaaS apps will be held to the publishing policy on a case-by-case basis. Also not helping matters is the fact that Apple’s App Store guidelines are less than concrete, purposefully so. Apple itself doesn’t nail down precisely who counts as a publisher or a SaaS provider, which is why a flurry of anxious developers is reaching out for answers.

Unfortunately, the short e-mail from Jobs hasn’t provided any major revelations, but it seems that for the moment a host of SaaS apps are safe from Apple’s publishing fees.

Molly McHugh
Former Digital Trends Contributor