
Google’s new AI generates audio soundtracks from pixels

An AI-generated wolf howling
Google DeepMind

DeepMind showed off the latest results from its generative AI video-to-audio research on Tuesday. It’s a novel system that combines what it sees on-screen with the user’s written prompt to create synced audio soundscapes for a given video clip.

The V2A AI can be paired with video-generation models like Veo, DeepMind’s generative audio team wrote in a blog post, and can create soundtracks, sound effects, and even dialogue for the on-screen action. What’s more, DeepMind claims that its new system can generate “an unlimited number of soundtracks for any video input” by tuning the model with positive and negative prompts that encourage or discourage the use of a particular sound, respectively.

V2A Cars

The system works by first encoding and compressing the video input, which the diffusion model then uses to iteratively refine the desired audio out of noise, guided by the visual input and the user’s optional text prompt. The audio output is finally decoded and exported as a waveform that can be recombined with the video input.
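To make that flow concrete, here is a rough, illustrative sketch in Python. Every function name, array shape, and the toy “denoiser” below is a placeholder of ours rather than DeepMind’s published code; the learned video encoder, diffusion model, and audio decoder are replaced with trivial stand-ins.

```python
# Illustrative sketch only: encode video -> iteratively refine audio from noise -> decode waveform.
import numpy as np

rng = np.random.default_rng(0)

def encode_video(frames):
    """Compress video frames into a compact conditioning vector (stand-in for a learned encoder)."""
    return frames.reshape(frames.shape[0], -1).mean(axis=1)

def encode_prompt(positive, negative=""):
    """Map optional positive/negative text prompts to a toy embedding."""
    vec = np.zeros(8)
    for i, ch in enumerate(positive.encode()):
        vec[i % 8] += ch
    for i, ch in enumerate(negative.encode()):
        vec[i % 8] -= ch  # a negative prompt pushes the conditioning away from that sound
    return vec / 255.0

def denoise_step(audio, video_cond, text_cond, step, total):
    """One refinement step: nudge the noisy audio toward the conditioning signal."""
    target = np.interp(np.linspace(0, 1, audio.size),
                       np.linspace(0, 1, video_cond.size), video_cond)
    target = target + 0.01 * text_cond.mean()
    blend = (step + 1) / total
    return (1 - blend) * audio + blend * target

def decode_audio(latent):
    """Decode the refined latent into a normalized waveform (stand-in for a learned decoder)."""
    return np.tanh(latent)

# Fake 24-frame, 32x32 grayscale clip plus prompts.
frames = rng.random((24, 32, 32))
video_cond = encode_video(frames)
text_cond = encode_prompt(positive="wolf howling at the moon", negative="music")

audio = rng.standard_normal(16_000)      # start from pure noise
for step in range(50):                   # iterative refinement loop
    audio = denoise_step(audio, video_cond, text_cond, step, total=50)

waveform = decode_audio(audio)
print(waveform.shape, waveform.min(), waveform.max())
```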

The best part is that the user doesn’t have to go in and manually (read: tediously) sync the audio and video tracks, as the V2A system does it automatically. “By training on video, audio and the additional annotations, our technology learns to associate specific audio events with various visual scenes, while responding to the information provided in the annotations or transcripts,” the DeepMind team wrote.

V2A Wolf

The system is not yet perfected, however. For one, the output audio quality depends on the fidelity of the video input, and the system gets tripped up when video artifacts or other distortions are present. According to the DeepMind team, lip-syncing generated dialogue to the on-screen video remains an ongoing challenge.

V2A Claymation family

“V2A attempts to generate speech from the input transcripts and synchronize it with characters’ lip movements,” the team explained. “But the paired video-generation model may not be conditioned on transcripts. This creates a mismatch, often resulting in uncanny lip-syncing, as the video model doesn’t generate mouth movements that match the transcript.”

The system still needs to undergo “rigorous safety assessments and testing” before the team will consider releasing it to the public. Every video and soundtrack generated by this system will be affixed with DeepMind’s SynthID watermarks. This system is far from the only audio-generating AI currently on the market. Stability AI dropped a similar product just last week, while ElevenLabs released its sound effects tool last month.

Andrew Tarantola
Andrew has spent more than a decade reporting on emerging technologies ranging from robotics and machine learning to space…
DuckDuckGo’s new AI service keeps your chatbot conversations private
DuckDuckGo

DuckDuckGo released its new AI Chat service on Thursday, enabling users to anonymously access popular chatbots like GPT-3.5 and Claude 3 Haiku without having to share their personal information, while also preventing the companies from training the AIs on their conversations. AI Chat essentially works by inserting itself between the user and the model, like a high-tech game of telephone.

From the AI Chat home screen, users can select which chat model they want to use -- Meta’s Llama 3 70B model and Mixtral 8x7B are available in addition to GPT-3.5 and Claude -- then begin conversing with it as they normally would. DuckDuckGo will connect to that chat model as an intermediary, substituting the user's IP address with one of its own. "This way it looks like the requests are coming from us and not you," the company wrote in a blog post.
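For illustration, here is a minimal sketch of that intermediary idea in Python, using only the standard library. The upstream endpoint and payload shape are placeholders rather than DuckDuckGo’s or any provider’s real API; the point is simply that the forwarded request carries no user-identifying details and originates from the proxy’s own IP address.

```python
# Illustrative sketch only: forward a chat message upstream without passing user identity.
import json
import urllib.request

UPSTREAM_URL = "https://chat-provider.example/v1/chat"   # placeholder endpoint

def forward_chat(user_message):
    """Relay a chat message to the upstream model from the proxy's own connection."""
    payload = json.dumps(
        {"messages": [{"role": "user", "content": user_message}]}
    ).encode()
    req = urllib.request.Request(
        UPSTREAM_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Deliberately no cookies, user auth tokens, or forwarding headers:
            # the provider only sees the proxy's IP address and generic metadata.
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:   # request originates from the proxy host
        return json.loads(resp.read())["reply"]              # placeholder response shape
```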

Read more
Intel’s new AI image generation app is free and runs entirely on your PC
Screenshot of the AI Playground image creation screen showing more advanced controls

Intel shared a sneak preview of its upcoming AI Playground app at Computex earlier this week, which offers yet another way to try AI image generation. The Windows application provides you with a new way to use generative AI to create and edit images, as well as chat with an AI agent, without the need for complex command-line prompts, complicated scripts, or even a data connection.

The interesting bit is that everything runs locally on your PC, leveraging the parallel processing power of either an Intel Core Ultra processor with a built-in Intel Arc GPU or a discrete Arc graphics card with 8GB of VRAM.

Read more