
Google tells workers to be wary of AI chatbots

Alphabet has told its employees not to enter confidential information into Bard, the generative AI chatbot created and operated by Google, which Alphabet owns.

The company’s warning also extends to other chatbots, such as ChatGPT from Microsoft-backed OpenAI, Reuters reported on Thursday.

The AI-powered chatbots have generated huge interest in recent months due to their impressive ability to converse in a human-like way, write essays and reports, and even succeed in academic tests.

But Alphabet has concerns about its workers inadvertently leaking internal data via the tools.

As part of ongoing work to refine and improve the AI technology, human reviewers may read the conversations that users have with the chatbots. That poses a risk to personal privacy as well as the potential exposure of trade secrets, the latter of which appears to be Alphabet’s particular concern.

In addition, the chatbots are partly trained on users’ text exchanges, so with certain prompts a tool could potentially repeat to members of the public confidential information it received in earlier conversations.

Like ChatGPT, Bard is now freely available for anyone to try. On its webpage, it warns users: “Please do not include information that can be used to identify you or others in your Bard conversations.”

It adds that Google collects “Bard conversations, related product usage information, info about your location, and your feedback,” and uses the data to improve Google products and services that include Bard.

Google says it stores Bard activity for up to 18 months, though a user can change this to three or 36 months in their Google account.

It adds that as a privacy measure, Bard conversations are disconnected from a Google account before a human reviewer sees them.

Reuters said that while Alphabet’s warning has been in place for a while, it recently expanded it, telling its workers to avoid using precise computer code generated by chatbots. The company told the news outlet that Bard can sometimes make “undesired code suggestions,” though the current iteration of the tool is still considered to be a viable programming aid.

Alphabet isn’t the only company to warn its employees about the privacy and security risks linked to using the chatbots. Samsung recently issued a similar instruction to its workers after a number of them fed sensitive semiconductor-related data into ChatGPT, and Apple and Amazon, among others, have reportedly enacted similar internal policies.
