Those ‘warranty void if removed’ stickers are illegal, says the FTC

No longer must you resign yourself to leaving those “warranty void if removed” stickers untouched. As it turns out, these stickers are not only ugly but also deceptive and potentially illegal in the U.S., according to a series of warning letters the Federal Trade Commission (FTC) sent to six companies. The companies span a wide range of industries, the Commission noted, from automobiles to cellular devices to video gaming systems, but all claim that “consumers must use specified parts or service providers to keep their warranties intact.” And that, the FTC says, is a no-no.

“Unless warrantors provide the parts or services for free or receive a waiver from the FTC, such statements generally are prohibited by the Magnuson-Moss Warranty Act, a law that governs consumer product warranties,” the Commission noted. “Similarly, such statements may be deceptive under the FTC Act.”

The Commission has asked all six companies to review their promotional and warranty materials and ensure that they neither state nor imply that a warranty applies only if specific parts or services are used. It has also encouraged the companies to revise their practices to comply with the law; if no revisions are made within 30 days, the FTC may take legal action.

“Provisions that tie warranty coverage to the use of particular products or services harm both consumers who pay more for them as well as the small businesses who offer competing products and services,” said Thomas B. Pahl, Acting Director of the FTC’s Bureau of Consumer Protection.

While the FTC has called out only six companies, bear in mind that the rule applies to any consumer product that costs more than $15. And while we don’t know exactly which companies received the letters, we do know that companies like Sony and Microsoft place seals on the edges of their game consoles warning that breaking the seal voids the warranty. Now we also know that such a practice is illegal.
Lulu Chang
Former Digital Trends Contributor