
Court rejects Tiffany’s case on eBay counterfeits


The U.S. Supreme Court has refused to consider Tiffany & Co.’s appeal to hold eBay accountable under trademark law for counterfeit merchandise bearing the jeweler’s name sold on the auction Web site. The decision followed eBay’s 2008 New York court victory over Tiffany, in which a judge found that eBay could not be held responsible for fake goods sold via its service. The judge ruled that the procedures and reporting systems eBay had in place were sufficient for manufacturers and trademark holders to protect their brands.

Back in September, a federal trial judge dismissed a false advertising claim brought against eBay by Tiffany as part of the same lawsuit.

Tiffany, the second-largest luxury jewelry retailer, had argued that eBay operated with full knowledge that Tiffany knock-offs were being sold via the auction site and, through commissions, was profiting from the sale of the counterfeit goods. For its part, eBay maintained that it had deployed extensive anti-fraud systems and invested over $20 million a year in programs designed to remove bogus merchandise from the site and enable brand owners to report problems.

eBay has also won similar cases brought by L’Oréal and Rolex, although in 2008 a French court ordered eBay to pay over $60 million to LVMH (Moët Hennessy Louis Vuitton) over counterfeit sales.
