
Oops — Google Bard AI demo is disproven by the first search result

These are heady days if you’re following the world of artificial intelligence (AI). ChatGPT is taking over the world, Microsoft is adding its tech to Bing, and Google is working on its own AI called Bard.

Except, Bard might not quite be ready for prime time — and Google just proved it during its own tech demonstration. Oops.

A Google blog post discussing its LaMDA artificial intelligence technology displayed on a smartphone screen.
Shutterstock

The error came on a slide showing how Bard could help a person explain new discoveries from the James Webb Space Telescope (JWST) to their nine-year-old child. Seems simple enough, you might think.

However, Bard made the following three suggestions:

  • In 2023, the JWST spotted a number of galaxies nicknamed “green peas”
  • The telescope captured images of galaxies that are over 13 billion years old
  • JWST took the very first picture of a planet outside of our own solar system

It’s that last entry that has ruffled some feathers. That’s because the first photo of a planet outside our solar system (otherwise known as an exoplanet) was actually taken way back in 2004 by the Very Large Telescope (VLT).

Unfortunately a simple google search would tell us that JWST actually did not "take the very first picture of a planet outside of our own solar system" and this is literally in the ad for Bard so I wouldn't trust it yet https://t.co/OS8AMyLQRu

— Isabel Angelo (@IsabelNAngelo) February 7, 2023

As explained by NASA, that image shows exoplanet 2M1207b orbiting a brown dwarf. That not only makes it the first image ever captured of an exoplanet, it also makes 2M1207b the first exoplanet seen orbiting a brown dwarf. It’s a double whammy that unfortunately went unacknowledged by Bard.

As pointed out by astrophysics Ph.D. student Isabel Angelo on Twitter, the irony is that a quick Google search could have prevented this problem. Simply searching “first photo of an exoplanet” puts the NASA page at the top of the pile.

That said, Bard wasn’t too far off. September 2022 marked the first time that the JWST itself had snapped a photo of an exoplanet (in this case, it was exoplanet HIP 65426 b). But as any fluent speaker of English will know, there’s a world of difference between saying “the JWST took the very first picture of an exoplanet” and “the JWST took its first picture of an exoplanet.”

A bumpy road ahead

Screenshot of Google Bard responding to a question.
Google

That lack of nuance aptly highlights some of the concerns surrounding the increasing role of AI in everyday life. If things like Bard and ChatGPT are integrated into search engines, they’re going to need to be not only fast but also accurate. Otherwise, a simple search could feed you false information without any hint that it has happened.

After all, Google’s demonstration didn’t require you to visit a dodgy website making questionable claims, which might have raised red flags. Instead, the answers to the posed question are framed as if they’re coming directly from Google itself. Given Google’s global reach and name recognition, that could make the information it spits out much more believable — even if it’s totally incorrect.

Search AIs clearly need plenty of work before they can be treated with anything more than an iota of trust. Google itself says it’s not going to release Bard until it has reached a “high bar for safety.” Clearly, it still has a long way to go.
