
Google’s Eric Schmidt is optimistic that advanced AI won’t kill us all

[Image: Eric Schmidt. Credit: Shutterstock]
If we base our perceptions of the world on the types of movies we make, then it seems humanity is torn between love for our future robotic friends and a deep fear that the artificial intelligence driving them will one day destroy us all. Some intellectuals and futurists have suggested that the latter is a very real possibility: just listen to Elon Musk and Stephen Hawking.

Not everyone, however, thinks that we'll one day reach a singularity, where intelligent machines build ever more intelligent machines until, within a few hours, we're dealing with god-like artificial beings as our new neighbors. Eric Schmidt, for example, disagrees. The executive chairman of Alphabet, Google's parent company, is a big fan of artificial intelligence and believes that the vision most of us have of it is dead wrong.

Instead of being a nefarious program waiting for its moment to strike, AI is far more benign, according to Schmidt. Look at music curation, he said in a recent piece for the BBC, while taking a swipe at Apple's human-driven selection.

To Schmidt, AI is smart analytics: software that collates huge amounts of data into something readable by a human, automatically cherry-picking what the user would want based on parameters it has already pre-defined. Holiday selection is one example he brings up as a potential future task for AI. Why should we have to search by price or destination? Why doesn't the system just know everything we like and pre-select a few tailored choices?
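Schmidt's description boils down to a scoring-and-ranking problem: learn a user's preferences, score every candidate against them, and surface only the best few. Below is a minimal Python sketch of that idea; the Holiday type, the tag names, and the budget penalty are hypothetical illustrations, not anything Google has described.

```python
# Toy sketch of preference-based pre-selection, in the spirit of Schmidt's example.
# All names and data here are hypothetical, not any real Google API.

from dataclasses import dataclass

@dataclass
class Holiday:
    destination: str
    price: float
    tags: frozenset  # e.g. {"beach", "family", "budget"}

def score(option: Holiday, liked_tags: set, budget: float) -> float:
    """Score an option by how many learned preferences it matches,
    with a simple penalty for exceeding the user's typical budget."""
    tag_match = len(option.tags & liked_tags)
    over_budget_penalty = max(0.0, option.price - budget) / budget
    return tag_match - over_budget_penalty

def preselect(options: list, liked_tags: set, budget: float, k: int = 3) -> list:
    """Return the k options the user is most likely to want,
    so they never have to search by price or destination themselves."""
    return sorted(options, key=lambda o: score(o, liked_tags, budget), reverse=True)[:k]

if __name__ == "__main__":
    catalogue = [
        Holiday("Lisbon", 600, frozenset({"city", "food", "budget"})),
        Holiday("Maldives", 3200, frozenset({"beach", "luxury"})),
        Holiday("Cornwall", 450, frozenset({"beach", "family", "budget"})),
    ]
    liked = {"beach", "budget"}  # preferences the system has already inferred
    for pick in preselect(catalogue, liked, budget=800):
        print(pick.destination, pick.price)
```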

While this might not be available just yet, it won't be long. As Schmidt points out, Google apps can already recognise more than 58 languages from text or speech, and can find images of particular things based on visual data alone. Why not apply the same intelligence to something else useful?

Of course, we must factor in Schmidt's potential for bias. As a long-time Google executive, he has watched the giant's personalization earn the company billions of dollars through targeted advertising. But just as that sort of automated, AI-driven personalization can make lots of money and make services easier to use, it also has the potential to create filter bubbles.

Perhaps the AI uprising won’t be a military one, but a gradual isolation of individual humans based on automatically defined preferences.

Jon Martindale