
OpenAI needs just 15 seconds of audio for its AI to clone a voice

In recent years, the amount of audio an AI system needs in order to clone someone’s voice has been shrinking.

It used to be minutes, now it’s just seconds.

OpenAI, the Microsoft-backed company behind the viral generative AI chatbot ChatGPT, recently revealed that its own voice-cloning technology requires just 15 seconds of audio material to reproduce someone’s voice.

In a post on its website, OpenAI shared a small-scale preview of a model called Voice Engine, which it’s been developing since late 2022.

Voice Engine works from a minimum of 15 seconds of spoken material from the target speaker. The user can then input text to create what OpenAI describes as “emotive and realistic” speech that “closely resembles the original speaker.”
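The 15-second minimum is the one concrete technical requirement OpenAI has disclosed. OpenAI has not published a Voice Engine API, so the snippet below is purely a hypothetical sketch of how a caller might check that a reference clip clears that threshold before submitting it; the function name and constant are illustrative, not real API.

```python
# Hypothetical pre-check for a voice-cloning reference clip.
# OpenAI has not released a public Voice Engine API; everything
# here is an illustration of the 15-second minimum it described.
MIN_REFERENCE_SECONDS = 15.0

def is_long_enough(num_samples: int, sample_rate_hz: int) -> bool:
    """Return True if the clip meets the 15-second minimum."""
    duration_seconds = num_samples / sample_rate_hz
    return duration_seconds >= MIN_REFERENCE_SECONDS

# A 16 kHz clip with 240,000 samples lasts exactly 15 seconds.
print(is_long_enough(240_000, 16_000))  # True
print(is_long_enough(160_000, 16_000))  # 10 seconds -> False
```

The arithmetic is just duration = samples / sample rate; the only grounded number is the 15-second floor reported in OpenAI’s announcement.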

OpenAI insists it is taking a “cautious and informed approach to a broader release due to the potential for synthetic voice misuse,” adding that it wants to “start a dialogue on the responsible deployment of synthetic voices, and how society can adapt to these new capabilities.”

It added: “Based on these conversations and the results of these small scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”

One of the misuses that OpenAI refers to is a scam that some criminals are already carrying out using similar technology that’s been publicly available for some time. It involves cloning a voice and then calling a friend or relative of that person to trick them into handing over cash via a bank transfer. There are also fears about how such technology might be used in the upcoming presidential election, an issue highlighted by a recent high-profile incident in which a robocall using a clone of President Joe Biden’s voice told people not to vote in January’s New Hampshire primary.

Another concern is how the rapidly improving technology will affect the livelihoods of voice actors. Many fear they’ll increasingly be asked to sign over the rights to their voice so that AI can create a synthetic version, with compensation for such a contract likely to be far lower than if the actor performed the job in person.

Looking at more positive deployments of the technology, OpenAI suggests that it could be used to provide reading assistance to non-readers and children using natural-sounding, emotive voices “representing a wider range of speakers than what’s possible with preset voices,” as well as instant translation of videos and podcasts, something that Spotify is already trialing.

It could also be used to help patients who are gradually losing their voice through illness to continue communicating using what sounds like their own voice.

OpenAI has some examples of the AI-generated audio and the reference audio on its website, and we’re sure you’ll agree that they’re pretty extraordinary.

Trevor Mogg
Contributing Editor
Not so many moons ago, Trevor moved from one tea-loving island nation that drives on the left (Britain) to another (Japan)…