
Google releasing new email security features that remind you not to send unencrypted personal info

In case you’re sending private information over email that you would rather not have end up in the wrong hands, Google’s new set of security features might be of assistance. Instead of simply letting you unleash potentially hazardous data into the wild, Gmail is now programmed to issue a warning whenever you try to send an email to, or receive one from, someone whose provider does not support TLS encryption.

It does this using a “small red unlocked padlock” icon, according to PC World, which you’ll see in the upper right-hand corner of the email in question. This is designed to inform you that, without the support of TLS encryption, a dedicated enough lurker could easily see the contents of the message as it’s traveling through the web.
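TLS for email is negotiated in-band: the sending server issues an EHLO command, and if the receiving server advertises the STARTTLS extension in its reply, the two upgrade the connection to an encrypted channel. As a rough sketch of how a client learns this (the helper below is hypothetical and not Gmail’s actual code), checking for STARTTLS support amounts to scanning the EHLO reply for the keyword:

```python
def supports_starttls(ehlo_response: str) -> bool:
    """Return True if an SMTP EHLO reply advertises the STARTTLS extension.

    Each line of an EHLO reply looks like '250-KEYWORD' or '250 KEYWORD'
    (a space instead of a hyphen marks the final line); the keyword names
    an extension the server supports.
    """
    for line in ehlo_response.splitlines():
        parts = line[4:].split()  # drop the '250-' / '250 ' reply-code prefix
        if parts and parts[0].upper() == "STARTTLS":
            return True
    return False


# Example EHLO reply from a server that supports TLS.
reply = "250-mx.example.com\n250-SIZE 35882577\n250-STARTTLS\n250 SMTPUTF8"
print(supports_starttls(reply))  # → True
```

When the keyword is absent, the message can only travel in plaintext, which is exactly the situation Gmail’s red padlock flags.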

While it’s unlikely that you’ll ever encounter one of these indicators, considering most major email providers already have TLS encryption in place, those of us with — or in contact with — less commonplace email addresses may be in for an unfortunate surprise during Gmail exchanges.

The upside, however, is that with these warnings being integrated into one of the most prevalently used email clients, other providers are more likely than ever to add better security measures — namely TLS encryption — to their own arsenals.

Second, Google is cracking down on spammers with a new question mark icon set to replace the profile pictures of senders that the company’s algorithms fail to authenticate. With this, Google is trying to eradicate those suspicious emails that appear to come from legitimate domains like bankofamerica.com but are, in fact, nothing of the sort.
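Sender authentication of this kind typically rests on checks like SPF and DKIM, whose verdicts a receiving server records in an Authentication-Results header (RFC 8601). As a minimal sketch of reading those verdicts — a hypothetical helper, not Google’s actual algorithm, and far simpler than a spec-compliant parser — one can pull out the method/result pairs like so:

```python
import re


def auth_verdicts(header: str) -> dict:
    """Extract method=result pairs (e.g. spf=pass, dkim=fail) from an
    Authentication-Results header value. Simplified sketch: a real parser
    must also handle comments, method versions, and property lists.
    """
    return dict(re.findall(r"\b(spf|dkim|dmarc)=(\w+)", header))


header = ("mx.google.com; spf=pass (sender IP is 203.0.113.5) "
          "smtp.mailfrom=example.com; dkim=fail header.d=example.com")
print(auth_verdicts(header))  # → {'spf': 'pass', 'dkim': 'fail'}
```

A message whose checks come back as anything other than pass is the kind Google would flag with the question mark avatar.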

Google admits that not every email flagged with one of these warnings is necessarily malicious, but it wants them to serve as reminders that, without the right security in place, sending sensitive information over the web opens up vulnerabilities to cybercrime that could otherwise be avoided.

Both of these new features will roll out to Gmail users on Tuesday, with Google for Work customers to follow in the coming weeks.

Gabe Carey
Former Digital Trends Contributor
A freelancer for Digital Trends, Gabe Carey has been covering the intersection of video games and technology since he was 16…