
Google unveils a slew of new and improved machine learning APIs

Google Cloud, the search giant's cloud computing platform, is quite the capable set of services. Its algorithms can handle everything from language translation to the identification of objects and landmarks. And now, it's getting even better. On Tuesday, Google Cloud chief Diane Greene announced the formation of a new team, the Google Cloud Machine Learning group, that will manage the Mountain View, California-based company's cloud intelligence efforts going forward.

Improved APIs


The group will be helmed by Jia Li, former head of research at Snapchat and a pioneer behind the feature that lets you attach emojis to real-world objects, and Fei-Fei Li, former director of Stanford's Artificial Intelligence Lab. They will oversee a slew of upgrades to Google's cloud services in the coming months, many of which will involve Google Cloud's hardware infrastructure. New graphics processing units (GPUs), which Google said are especially good at accelerating the sort of self-training machine learning software that lives on the company's servers, will join the existing network's CPUs. And a new security layer will better ensure that customers' data remain anonymous: GPU caches will be wiped before each new task begins, a practice that Google said isn't common among cloud platforms.


Google Cloud is improving in other ways, as well. Its Cloud Vision application programming interface (API), a system capable of identifying millions of logos, landmarks, and objects in images, now runs on Google's custom "Tensor Processing Units," processors optimized for Google's TensorFlow machine learning platform. (APIs, for the uninitiated, are sets of resources that let developers leverage third-party services like Cloud Vision.) The developer tools are now unified, which Google said makes the API "simpler to implement," and the company has cut the price of "large-scale deployments" by 80 percent.
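
To give a flavor of how developers call Cloud Vision, here's a minimal sketch against the public images:annotate REST endpoint. The API key and image path are placeholders you'd supply yourself, and the three feature types requested mirror the logo, landmark, and object (label) detection mentioned above.

    import base64
    import json
    import urllib.request

    # Placeholders: substitute a real API key and image file.
    API_KEY = "YOUR_API_KEY"
    IMAGE_PATH = "photo.jpg"

    with open(IMAGE_PATH, "rb") as f:
        content = base64.b64encode(f.read()).decode("utf-8")

    # A single request can ask for several feature types at once.
    body = {
        "requests": [{
            "image": {"content": content},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 5},
                {"type": "LOGO_DETECTION", "maxResults": 5},
                {"type": "LANDMARK_DETECTION", "maxResults": 5},
            ],
        }]
    }

    req = urllib.request.Request(
        "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["responses"][0])

The response carries one annotation list per requested feature (labelAnnotations, logoAnnotations, and so on), with a confidence score attached to each detection.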

Google is also introducing Cloud Jobs API, a cloud-powered service that matches prospective employees with companies. "[The system] uses [AI] to understand how job titles and skills relate to one another and what job content, location, and seniority are the closest match to a [candidate's] preferences," Google said. It's intended for job boards and career sites like LinkedIn and Jobseeker, and is already in use by three companies: recruiting platform Jibe, tech job listing site Dice, and CareerBuilder.
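
The Jobs API was still in a limited alpha at announcement, so its exact request format wasn't public. Purely as a hypothetical illustration of the kind of structured query such a matching service consumes, a search might carry fields like those below; every field name here is invented for illustration and is not the real API schema.

    import json

    # HYPOTHETICAL sketch: these field names are invented to show the shape
    # of a job-matching query, not the actual Cloud Jobs API schema.
    search_request = {
        "query": "senior backend engineer",
        "location": {"city": "Pittsburgh", "distance_miles": 25},
        "seniority": "SENIOR",  # matched semantically, not by exact keyword
        "skills": ["python", "distributed systems"],
    }

    # A real client would POST this to the service; here we just print it.
    print(json.dumps(search_request, indent=2))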

Another of Google's machine learning APIs, the Cloud Translation API, is now available globally after a months-long beta. It's now capable of more accurately identifying the names of things such as people and locations, parsing the syntax of sentences, and analyzing morphology (the forms of and relationships between words). It supports translation between English and eight other languages (Chinese, French, German, Japanese, Korean, Portuguese, Spanish, and Turkish) for a total of 16 language pairs. The new AI algorithms reduce errors by 55 to 85 percent, Google said, and represent some of the largest improvements machine translation has seen in the past decade.
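
As a quick sketch of what calling the now generally available service looks like, here's a request to the Translation API's v2 REST endpoint. The API key is a placeholder, and English-to-German is one of the 16 supported pairs.

    import json
    import urllib.parse
    import urllib.request

    API_KEY = "YOUR_API_KEY"  # placeholder: substitute a real API key

    params = urllib.parse.urlencode({
        "key": API_KEY,
        "q": "The weather is lovely today.",
        "source": "en",    # source language
        "target": "de",    # target language; English<->German is a supported pair
        "format": "text",
    })

    url = "https://translation.googleapis.com/language/translate/v2?" + params
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read())
        print(data["data"]["translations"][0]["translatedText"])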

Google is also introducing a new Premium translation service fit for "precise, long-form" applications like live-stream translations and "high volume[s] of emails." It will debut in the coming weeks.

Fun experiments

Google also took the opportunity to showcase AI-powered tools and apps on a new website: AI Experiments.

AI Experiments taps TensorFlow, the company's open-source machine learning platform. It's the most popular machine learning framework on the code-hosting site GitHub, Google said, and one that has been used to transform images into psychedelic nightmares, teach computers to play Pong, and invent fake Chinese characters.

One app on the AI Experiments site, AI Duet, generates melodies that complement your own composition style, essentially acting as a sort of computer-driven musical partner. Another, Quick, Draw!, tasks you with depicting a written prompt in under 20 seconds. Google’s artificial intelligence attempts to identify it in real time, and, once the time has elapsed, shows which guesses it considered along the way.

Giorgio Cam identifies objects in rhyming form, pairing the result with an electronic soundtrack by Italian DJ and musician Giorgio Moroder. Bird Sounds organizes dozens of bird calls by such categories as tone and frequency. The Thing Translator identifies objects and gives the translated word for whatever you show it. And Infinite Drum Machine uses machine learning to sort everyday sounds into similar families.

Google is hoping to grow the website into a veritable collection of AI-powered utilities, and it's accepting submissions starting today.
