
Google’s AI just got ears

The Google Gemini AI logo. (Image: Google)

AI chatbots are already capable of “seeing” the world through images and video. Now, Google has announced audio understanding as part of its latest update to Gemini Pro. In Gemini 1.5 Pro, the chatbot can “hear” audio files uploaded into its system and extract text from them.

The company has made this version of the LLM available as a public preview on its Vertex AI development platform, opening the feature up to more enterprise-focused users. When the model was first announced in February, it was offered only to a limited group of developers and enterprise customers.

1. Breaking down + understanding a long video

I uploaded the entire NBA dunk contest from last night and asked which dunk had the highest score.

Gemini 1.5 was incredibly able to find the specific perfect 50 dunk and details from just its long context video understanding! pic.twitter.com/01iUfqfiAO

— Rowan Cheung (@rowancheung) February 18, 2024

Google shared the details about the update at its Cloud Next conference, which is currently taking place in Las Vegas. After calling the Gemini Ultra LLM that powers its Gemini Advanced chatbot the most powerful model in its Gemini family, Google is now calling Gemini 1.5 Pro its most capable generative model. The company added that this version is better at learning new tasks without additional fine-tuning of the model.

Gemini 1.5 Pro is multimodal in that it can transcribe a range of audio into text, including TV shows, movies, radio broadcasts, and conference call recordings. It’s also multilingual, able to process audio in several different languages. The LLM may also be able to create transcripts from videos, though, as TechCrunch notes, the quality of those transcripts may be unreliable.
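For developers with access to the public preview, the workflow on Vertex AI looks roughly like the sketch below, which uses the Vertex AI Python SDK to point the model at an audio file stored in Cloud Storage. The project ID, region, bucket path, and preview model name are placeholders, so treat this as an illustration rather than the exact published quickstart.

```python
# Rough sketch: transcribing an audio file with Gemini 1.5 Pro on Vertex AI.
# Assumes the Vertex AI Python SDK (pip install google-cloud-aiplatform) and
# that your project has access to the Gemini 1.5 Pro preview.
# The project ID, region, bucket URI, and model name below are placeholders.

import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-gcp-project", location="us-central1")

model = GenerativeModel("gemini-1.5-pro-preview-0409")  # preview model name may differ

audio = Part.from_uri(
    "gs://my-bucket/earnings-call.mp3",  # placeholder Cloud Storage path
    mime_type="audio/mpeg",
)

response = model.generate_content(
    [audio, "Transcribe this recording and summarize the key points."]
)
print(response.text)
```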

When first announced, Google explained that Gemini 1.5 Pro used a token system to process raw data. A million tokens equate to approximately 700,000 words or 30,000 lines of code. In media form, it equals an hour of video or around 11 hours of audio.
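As a rough illustration of what those ratios imply, the back-of-envelope arithmetic below (a sketch, assuming the published figures scale linearly) converts hours of audio or video into an approximate token count against the one-million-token context window.

```python
# Back-of-envelope sketch of Gemini 1.5 Pro's 1M-token context window,
# assuming the published ratios scale linearly:
#   1,000,000 tokens ~ 700,000 words ~ 30,000 lines of code
#                    ~ 1 hour of video ~ 11 hours of audio.

CONTEXT_WINDOW = 1_000_000                     # tokens
TOKENS_PER_AUDIO_HOUR = CONTEXT_WINDOW / 11    # ~90,900 tokens per hour of audio
TOKENS_PER_VIDEO_HOUR = CONTEXT_WINDOW / 1     # ~1,000,000 tokens per hour of video

def fits_in_context(audio_hours: float = 0, video_hours: float = 0) -> bool:
    """Estimate whether a mix of audio and video fits in one context window."""
    estimated = audio_hours * TOKENS_PER_AUDIO_HOUR + video_hours * TOKENS_PER_VIDEO_HOUR
    print(f"Estimated tokens: {estimated:,.0f} of {CONTEXT_WINDOW:,}")
    return estimated <= CONTEXT_WINDOW

fits_in_context(audio_hours=3.5)                   # a long conference call archive
fits_in_context(video_hours=0.75, audio_hours=1)   # mixed media
```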

Private preview demos of Gemini 1.5 Pro have shown how the LLM can find specific moments within a long video. For example, AI enthusiast Rowan Cheung got early access and described how the model pinpointed the exact dunk he asked about in a sports contest and summarized the event, as seen in the tweet embedded above.

However, Google noted that other early adopters, including United Wholesale Mortgage, TBS, and Replit, are opting for more enterprise-focused use cases, such as mortgage underwriting, automating metadata tagging, and generating, explaining, and updating code.
