
Google’s latest anti-spam change helps clean up your calendar

Spam is one of the many enemies of the internet, and Google has come up with a new way to tackle it, at least on your calendar. The search giant recently tweaked how events show up in Google Calendar so that you can choose to automatically display only events from senders you know.

With the change, you’ll still get email event invitations from unknown senders, but the events will only appear in your calendar after you accept them. Only meetings from people in your company’s domain, people in your contacts list, or people you’ve interacted with before will be added to your calendar automatically. These are typically trusted senders who won’t clutter your calendar with spam meetings.

The anti-spam setting in Google Calendar. Google

Of course, Google is all about choices, and alongside the new “only if the sender is known” option, you can still choose to have every invitation appear on your calendar. To switch between the two, open the settings menu from the top of the Calendar screen, choose General, and look under Event settings. There, under “Add invitations to my calendar,” select “Only if the sender is known” to turn on the new anti-spam behavior.

The default option, though, is still to show invitations from everyone, but IT admins can change that at the domain level through a separate setting not available to end users. Google started rolling out the change on July 20, and it could take up to 15 days for the feature to reach everyone. It applies to Google Workspace customers, as well as legacy G Suite Basic and Business customers.

This isn’t the first anti-spam tactic from Google. In Search, the company uses an AI-based system known as “SpamBrain” to catch spam sites. According to Google, SpamBrain identified 200 times more spam websites in 2021 than when it was first put to use. In a previous update, Google said SpamBrain reduced hacked spam by 70% and gibberish spam by 75%, helping keep 99% of searches spam-free.

Arif Bacchus
Arif Bacchus is a native New Yorker and a fan of all things technology. Arif works as a freelance writer at Digital Trends…