
New Gmail features include personalized backgrounds


The time has come to once again dig into the depths of your hard drive, to explore those masses of folders that have remained untouched ever since you created them all those years ago, filling them to bursting point with hundreds of photos of that vacation in Hawaii, and of your Aunt Mildred just after she came out of hospital following her successful hip operation. Yes, the time has come to choose a photo – well, if you have a Gmail account, that is.

In a blog post published Thursday, Google software engineer Jiří Semecký announced a brand-new Gmail feature that allows users to customize their inboxes with their own photos.

So no longer will you have to look at one of the pre-set Gmail themes (they weren’t that bad though, were they?). Now you can get creative.

To make your inbox even more your own, click the gear icon at the top right of the screen, select “Mail settings,” and then open the “Themes” tab. You’ll see the “Create your own theme” option. Click on that and away you go.

Google is also about to roll out full versions of two other Gmail Labs features: “Don’t forget Bob” and “Got the wrong Bob?”

“Don’t forget Bob and Got the wrong Bob? are two Gmail Labs features that help prevent you from making two common mistakes,” software engineering intern Assaf Ben-David wrote in a blog post on Wednesday.

The mistakes the two Bobs focus on are “forgetting to include someone on an e-mail, and sending a message to the wrong person with a similar name to the person you meant to email — like emailing Bob (your boss) instead of Bob (your friend).”

“Got the wrong Bob?” will be a boon to many users who fail to double-check to whom they are actually sending their email, and should save more than a few red faces.

“Don’t forget Bob” will make a suggestion if it thinks you may have left someone out of a group email. If, say, you’re emailing Uncle Sid, Uncle Harry, and Aunt Sylvia, it could alert you to the fact that you’ve left out Aunt Mildred. And you wouldn’t want to offend her now, would you?
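
Google hasn’t said how these suggestions are actually generated, but the basic idea can be approximated with a simple co-occurrence heuristic: remember which contacts you usually email together, then suggest a frequent groupmate who’s missing and flag a recipient who never appears with the rest. The short Python sketch below is purely illustrative; every name in it (RecipientHints, record_sent, dont_forget, got_the_wrong, the threshold parameter) is invented for this example and has nothing to do with Gmail’s real implementation.

```python
# Illustrative sketch only -- NOT Gmail's algorithm, which Google has not
# published. It tracks which contacts have been emailed together before,
# then (a) suggests a frequent groupmate who is missing from a draft
# ("Don't forget Bob") and (b) flags a recipient who has never appeared
# with the rest of the group ("Got the wrong Bob?").
from collections import Counter
from itertools import combinations


class RecipientHints:
    def __init__(self):
        # pair_counts[(a, b)] = number of past emails addressed to both a and b
        self.pair_counts = Counter()
        self.sent_counts = Counter()

    def record_sent(self, recipients):
        """Update co-occurrence statistics from one sent email."""
        for r in recipients:
            self.sent_counts[r] += 1
        for a, b in combinations(sorted(set(recipients)), 2):
            self.pair_counts[(a, b)] += 1

    def _together(self, a, b):
        """How many past emails went to both a and b."""
        return self.pair_counts[tuple(sorted((a, b)))]

    def dont_forget(self, recipients, threshold=2):
        """Suggest contacts frequently emailed with this group but missing."""
        suggestions = Counter()
        for candidate in self.sent_counts:
            if candidate in recipients:
                continue
            score = sum(self._together(candidate, r) for r in recipients)
            if score >= threshold * len(recipients):
                suggestions[candidate] = score
        return [contact for contact, _ in suggestions.most_common(3)]

    def got_the_wrong(self, recipients):
        """Flag any recipient who has never co-occurred with the others."""
        flagged = []
        for r in recipients:
            others = [o for o in recipients if o != r]
            if others and all(self._together(r, o) == 0 for o in others):
                flagged.append(r)
        return flagged


hints = RecipientHints()
for _ in range(5):
    hints.record_sent(["sid", "harry", "sylvia", "mildred"])
hints.record_sent(["bob.boss", "coworker"])

print(hints.dont_forget(["sid", "harry", "sylvia"]))      # ['mildred']
print(hints.got_the_wrong(["sid", "harry", "bob.boss"]))  # ['bob.boss']
```

In this toy version, leaving Aunt Mildred off an email to the other three relatives triggers a suggestion, while adding the boss’s address to a family thread gets flagged because it has never been seen alongside those contacts.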
