
Google spots child porn in man’s Gmail account, tips off police

According to KHOU.com, John Henry Skillern of Houston has been charged with possession of child pornography. The criminal activity was discovered by Google after the tech giant spotted three such images in Skillern’s Gmail account. Google then tipped off the National Center for Missing and Exploited Children, which in turn reached out to the police. Skillern was sending the images to a friend through Gmail, police said.

“He was trying to get around getting caught, he was trying to keep it inside his email,” Detective David Nettles of the Houston Metro Internet Crimes Against Children Taskforce said. “I can’t see that information, I can’t see that photo, but Google can.”


Due to Google’s participation in the case, the police were able to obtain a search warrant. That’s how they discovered more evidence of child pornography on Skillern’s phone and tablet, including text messages, emails, and at least one video.

This incident will likely spark a debate about privacy and confidentiality on the Internet. Google’s commitment to fighting child pornography is well known; the company has even published a statement outlining its position on the matter and the role it plays in stamping out such material.

Though the statement doesn’t specifically say that Google scans people’s Gmail accounts for illegal content, it’s probably safe to say that Google’s and Gmail’s terms of service don’t protect people who use the company’s products and services to commit such crimes.
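
Google hasn’t detailed exactly how its systems flagged the images, but automated detection of this kind typically works by hash-matching attachments against a database of known child abuse imagery maintained with groups like the National Center for Missing and Exploited Children (Microsoft’s PhotoDNA is the best-known example). The sketch below is purely illustrative and assumes nothing about Google’s actual pipeline: it uses exact SHA-256 hashes for simplicity, whereas real systems rely on perceptual hashes that survive resizing and re-encoding, and every function and value in it is hypothetical.

```python
import hashlib

# Illustrative sketch only: not Google's implementation. Real systems use
# perceptual hashes (e.g. PhotoDNA) rather than exact cryptographic hashes,
# so that resized or re-encoded copies of a known image still match.

# Hypothetical set of fingerprints of known illegal images; in practice this
# would come from organizations such as NCMEC, not a hard-coded value.
KNOWN_IMAGE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def hash_attachment(data: bytes) -> str:
    """Compute a fingerprint of an email attachment."""
    return hashlib.sha256(data).hexdigest()

def should_flag_message(attachments: list[bytes]) -> bool:
    """Flag the message for review if any attachment matches a known hash."""
    return any(hash_attachment(a) in KNOWN_IMAGE_HASHES for a in attachments)
```

In this hypothetical flow, a match would prompt a report to a body like the National Center for Missing and Exploited Children, which is consistent with how the tip in Skillern’s case reached police.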

Konrad Krawczyk