ValueClick Settles With FTC for $2.9 Million

Online marketer ValueClick has reached a $2.9 million settlement with the Federal Trade Commission to resolve allegations that the company engaged in deceptive online marketing practices and violated provisions of the CAN-SPAM and FTC Acts.

ValueClick is a “lead-generation” firm that uses online promotions and other incentives to deliver potential customers to its clients. The FTC probe, launched in mid-2007, centered on Web sites claiming to offer a free gift of significant value, as well as the Web- and email-based mechanisms ValueClick used to draw traffic to those sites. In settling, ValueClick admits no wrongdoing and does not acknowledge breaking any laws.

“We have worked with the FTC and have reached an agreement on the standards and practices that will govern our lead generation business going forward,” said David Yovanno, ValueClick’s COO of U.S. media, in a statement. “We believe this settlement will also help set the guidelines for the lead generation industry as a whole, and we will continue to participate in the Interactive Advertising Bureau to help establish best practices to that end.”

The FTC’s investigation of ValueClick was widely seen as a sign that the federal government is losing patience with the online lead-generation industry’s failure to regulate itself. Consumer advocacy groups have repeatedly called for legislation to standardize practices within the industry, and some believe only the government has the enforcement power to keep companies from misleading consumers and exploiting loopholes in current law.

Geoff Duncan