
HP Announces Kickback Settlement with DOJ


Technology giant Hewlett-Packard has announced that it has reached a settlement with the U.S. Department of Justice over allegations that the company paid kickbacks to the mammoth accounting and consulting firm Accenture in exchange for work on government projects. HP has not specified the monetary amount involved in the settlement, but says it expects the agreement to have an impact of 2 cents per share on its third-quarter financial results. Given that HP has more than 2.3 billion shares outstanding, that could total around $50 million.

The original lawsuits were whistleblower complaints against both Hewlett-Packard and Sun filed in 2007; the suits alleged that HP and Sun had "alliance relationships" with Accenture that resulted in HP paying kickbacks to Accenture for work that landed in HP's lap after securing government contracts. If true, the actions would be a violation of the federal False Claims Act. The original suits also named Microsoft as a defendant, but the U.S. government declined to join that case, and it was eventually dismissed.

HP has not admitted to any wrongdoing, and has agreed to the settlement essentially to put the allegations behind it. The settlement still requires approval by the Justice Department as well as the U.S. District Court for the Eastern District of Arkansas, where the original complaint was filed.

Geoff Duncan
Former Digital Trends Contributor