Google Photos now using A.I. to simplify editing and sharing images

Google rolling out A.I.-enhanced features to Google Photos app

On May 8, Google CEO Sundar Pichai took the stage at Google I/O to deliver a keynote presentation centered on artificial intelligence. Now, some of those A.I.-based features are beginning to roll out across many of Google's products and services, including Google Photos with "suggested actions." While Color Pop was the first feature to arrive after Google I/O, all of the new A.I.-enhanced features are now available in the Google Photos app.

Over 5 billion pictures are viewed every day in Photos, and Google sees A.I. as the answer to speeding up the editing and sharing process. Suggested actions, as the name implies, are context-sensitive actions that will display automatically while viewing individual photos. For example, using facial recognition, Photos will know who's in a picture and offer a one-tap option to share it with that person (assuming this is someone in your contact list whose face Google has already learned). If that person appears in multiple images, Photos will even suggest sharing all of them — again, with just a single tap.

When it comes to editing, different corrections will be suggested based on the look of the photo. If an image is underexposed, a simple "Fix brightness" suggestion will pop up automatically. Other suggested actions will be less subtle, including the Color Pop option that desaturates the background to draw attention to your subject. Sure, selective color is one of the more notorious photographic clichés today, but the impressive part here is how the app is able to accurately differentiate the subject from the background. On images where the software is able to pick out the subject, Google Assistant will suggest the edit.
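The final step of an effect like this is straightforward once the hard part — separating subject from background — is done. Google has not published how its segmentation works, so the sketch below simply assumes a subject mask already exists (the `subject_mask` argument is hypothetical) and shows the desaturation step in plain NumPy:

```python
import numpy as np

def color_pop(image: np.ndarray, subject_mask: np.ndarray) -> np.ndarray:
    """Desaturate everything outside subject_mask, keeping the subject in color.

    image: H x W x 3 uint8 RGB array.
    subject_mask: H x W boolean array, True where the subject is.
    """
    # Luminance-weighted grayscale conversion (ITU-R BT.601 coefficients).
    gray = (image @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)
    # Start from an all-grayscale copy, then restore color inside the mask.
    result = np.repeat(gray[:, :, None], 3, axis=2)
    result[subject_mask] = image[subject_mask]
    return result
```

In the real feature, the mask would come from a learned segmentation model rather than being supplied by hand; the point here is only that once such a mask exists, the "pop" itself is a one-line masked assignment.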

Even more impressive — and likely more useful — is the ability to add color to a black-and-white photograph. When viewing a monochrome image, Photos will offer to "colorize" it. Like magic, tapping the button turns the image into a full-color photograph. Not surprisingly, showing off this feature earned a chorus of cheers from the audience. Adobe demonstrated a very similar technology last year at the annual Adobe MAX conference.

Another audience-favorite feature was much more mundane, but no less useful. By taking a picture of a document, Google Photos will be able to automatically convert the image into a PDF — even if the photo was shot at a wonky angle. The app recognizes the document within the frame, crops it out, and corrects the perspective as necessary. Admittedly, this is likely one of those features you will rarely use — but when you need it, you will really appreciate having it.
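The perspective correction at the heart of this is classic computer-vision geometry. Google's corner-detection pipeline isn't public, but assuming the document's four corners have already been found, straightening it amounts to solving for a 3x3 homography that maps those corners onto an upright rectangle — a minimal NumPy sketch (the function names are illustrative, not Google's API):

```python
import numpy as np

def perspective_transform(src, dst):
    """Solve for the 3x3 homography H mapping src points to dst points.

    src, dst: four (x, y) corner pairs each. Uses the direct linear
    transform: each correspondence contributes two rows of an 8x8
    linear system, with the bottom-right entry of H fixed to 1.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, point):
    """Map one (x, y) point through homography H (homogeneous divide)."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return (x / w, y / w)

# Hypothetical skewed document corners, mapped to an upright page.
corners = [(10, 10), (200, 30), (220, 240), (5, 220)]
page = [(0, 0), (199, 0), (199, 299), (0, 299)]
H = perspective_transform(corners, page)
```

In practice an app would then resample every pixel through the inverse of `H` (e.g. OpenCV's `warpPerspective` does exactly this) to produce the flattened page image.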

If the suggested actions work as well in practice as they did in the recorded demonstration, this will likely be the most important update to Google Photos yet. Suggested actions will begin rolling out to users "in the next couple of months," according to Pichai.

Although not related to photography, Google also demonstrated new machine vision capabilities coming to Google Lens. Simply by pointing the camera at things, you’ll be able to learn more about them, from looking up words on a restaurant menu, to identifying the building in front of you, to analyzing an outfit you like and automatically being shown similar styles. Again, it remains to be seen how this works in practice, but if it’s anywhere close to the performance we saw in Google’s presentation, this will be a very impressive new feature.

Later during I/O, Google announced a partners program for Google Photos, which will allow third-party apps to integrate with the photo platform.

Updated on May 16, 2018: Updated post to reflect A.I. enhancements are now available. 

Steven Winkelman
Former Digital Trends Contributor
Steven writes about technology, social practice, and books. At Digital Trends, he focuses primarily on mobile and wearables…