
Jobs Subpoenaed Over Apple Options Scandal

Apple CEO Steve Jobs has reportedly been issued a subpoena by the U.S. Securities and Exchange Commission as it moves forward in its case against former Apple general counsel Nancy Heinen in the company’s stock option backdating scandal. Heinen is currently being sued for allegedly backdating share grants of Apple stock, including a 7.5 million share-option grant to Steve Jobs in 2001.

Apple’s own investigation into the option backdating found that Jobs was aware backdating was taking place, but did not personally profit from the activity and did not participate in any misconduct.

The SEC alleges that Heinen had her staff fabricate documents which indicated Apple’s board had approved backdated stock option grants. At the same time the SEC charged Heinen, it also settled with former Apple CFO Fred Anderson to the tune of $3.5 million. Although Anderson neither admitted to the SEC’s charges nor denied them, the SEC alleged that Anderson should have been aware of the false backdating. Anderson, for his part, laid the controversy at the feet of his former boss, Steve Jobs, saying that Jobs had assured him the company board had approved the grants. Heinen has denied the SEC’s allegations.

Both Heinen and Anderson received millions of dollars in unreported compensation as a result of the backdating.

The SEC’s subpoena of Steve Jobs is apparently part of the agency’s work to build its case against Heinen, rather than part of an SEC investigation into Apple or Jobs himself. In December 2006, Apple restated $84 million in earnings as a result of a probe into its options backdating activities, following a lengthy internal investigation.

Geoff Duncan
Former Digital Trends Contributor
A dangerous new jailbreak for AI chatbots was just discovered

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called "Skeleton Key." Using this prompt injection method, malicious users can effectively bypass a chatbot's safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It's a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, "[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions," Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.
