U.S. Govt. Reports Rising Internet Fraud

Despite ever-increasing efforts to alert Internet users to potential scams, fraudsters conned Americans out of more money in 2007 than ever before. According to the U.S. government’s Internet Crime Complaint Center (IC3), Americans reported $240 million in losses from Internet fraud in 2007, a $40 million increase over 2006.

The Center’s hefty 2007 Internet Crime Report [PDF] shows that 206,884 individual complaints were received in 2007, about 90,000 of which were referred to local law enforcement agencies. Internet auction fraud was the most frequent complaint, but non-delivery of purchases and credit fraud also ranked high on the list, along with non-fraudulent complaints concerning computer intrusion, spam, and child pornography.

“The Internet presents a wealth of opportunity for would-be criminals to prey on unsuspecting victims, and this report shows how extensive these types of crime have become,” said FBI Cyber Division Assistant Director James E. Finch in a statement. “What this report does not show is how often this type of activity goes unreported. Filing a complaint through IC3 is the best way to alert law enforcement authorities to Internet crime.”

Nick Mokey