
HP Settles Spying Scandal with Journalists

Hewlett-Packard is still working to put its 2006 spying scandal behind it, announcing that it has reached a settlement with four journalists over allegations the company engaged in corporate espionage to ferret out a leak in its boardroom.

Hewlett-Packard has already paid a $14.5 million settlement in the affair, although the company admitted no wrongdoing over allegations investigators in its employ used “pretexting”—essentially, social engineering—to obtain private phone records of board members and journalists as it tried to stop leaks to the press.

The four journalists involved have been in settlement talks with the company since 2006, according to Terry Gross, the journalists’ San Francisco-based attorney. Under the settlement, Hewlett-Packard will donate undisclosed amounts to charities chosen by the journalists.

The pretexting scandal led to the ousting of HP board chairperson Patricia Dunn and to criminal charges against Dunn and four private investigators. Those charges were eventually dropped, but investigator Bryan Wagner was charged in federal court and pleaded guilty to counts of identity theft and conspiracy. Wagner is awaiting sentencing.

The settlement does not entirely pull HP out of the woods: the company still faces five lawsuits naming it, Dunn, and former ethics chief Kevin Hunsaker for “illegal and reprehensible conduct,” along with suits from other journalists whose records were compromised by HP’s actions.

Geoff Duncan