
ChatGPT just plugged itself into the internet. What happens next?

OpenAI just announced that ChatGPT is getting even more powerful with plugins that allow the AI to access portions of the internet. This expansion could simplify tasks like shopping and planning trips without the need to access various websites for research.

This new web integration is in testing with select partners at the moment. The list includes Expedia, FiscalNote, Instacart, Kayak, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram, and Zapier.

OpenAI's website open on a MacBook, showing ChatGPT plugins.

These are data-intensive companies, some serving millions of users with travel planning, shopping, information gathering, and schedule organization. There is a clear benefit to using AI to simplify these tasks and streamline how users interact with these services.

OpenAI demonstrated an advanced request for a vegan recipe and a vegan restaurant in San Francisco. Plugins from WolframAlpha, OpenTable, and Instacart were installed from a plugin store.

The recipe request was made more complicated by asking for “just the ingredients” and then specifying that the calories should be calculated with WolframAlpha. ChatGPT even prepared an order for the ingredients on Instacart.

ChatGPT quickly responded with a vegan restaurant and provided a link to make a reservation on OpenTable. The ingredients for a chickpea salad followed, along with the calories for each item, automatically scaled to the correct portion size. Finally, an Instacart link was provided to order the ingredients. ChatGPT included a promotional message, enticing new users with free delivery.
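
As a rough illustration only (not OpenAI's actual pipeline), the portion-size scaling shown in the demo amounts to simple arithmetic over per-ingredient nutrition data. A minimal Python sketch with made-up figures might look like this:

```python
# Illustrative only: scale per-100 g calorie figures to a recipe's portion sizes.
# The ingredients and numbers below are hypothetical; in OpenAI's demo,
# WolframAlpha supplied the actual nutrition data.

calories_per_100g = {          # kcal per 100 g (hypothetical values)
    "chickpeas (cooked)": 164,
    "cucumber": 15,
    "red onion": 40,
}

portions_g = {                 # grams the recipe calls for (hypothetical values)
    "chickpeas (cooked)": 250,
    "cucumber": 120,
    "red onion": 50,
}

total = 0.0
for item, grams in portions_g.items():
    kcal = calories_per_100g[item] * grams / 100
    total += kcal
    print(f"{item}: {grams} g is about {kcal:.0f} kcal")

print(f"Total: about {total:.0f} kcal")
```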

When the time comes to place an order or book a reservation, that happens on each company’s website. In the future, we might grow to trust AI enough to make purchase decisions.

ChatGPT even realized you probably won’t want to order bulk items for a single meal, so olive oil, lemon juice, and seasonings were left out of the Instacart order. OpenAI posted a video of the demo on Twitter.

We are adding support for plugins to ChatGPT — extensions which integrate it with third-party services or allow it to access up-to-date information. We’re starting small to study real-world use, impact, and safety and alignment challenges: https://t.co/A9epaBBBzx pic.twitter.com/KS5jcFoNhf

— OpenAI (@OpenAI) March 23, 2023

OpenAI’s blog post highlights an important consideration when AI is used for business: safety and alignment. Because plugins access the internet, and because AI chatbots can still produce confused answers, ChatGPT plugins will double-check the trustworthiness of their results.

Besides the partner plugins, OpenAI will be testing two of its own plugins: Browsing and Code interpreter.

With the Browsing plugin, ChatGPT can directly access the internet, using Bing for searches and a text-based browser to collect information from web pages. This could be quite similar to Bing Chat, which also uses GPT-4 and has access to the internet.
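
As an illustration of the “text browser” idea only (this is not OpenAI's implementation), the sketch below fetches a page and strips its HTML down to plain text, roughly the kind of simplified view a browsing tool could hand to a language model. The URL is just a placeholder.

```python
# Hypothetical sketch of a minimal "text browser": download a page and keep
# only its visible text, discarding markup, scripts, and styles.
from html.parser import HTMLParser
from urllib.request import urlopen


class TextExtractor(HTMLParser):
    """Collect visible text while skipping <script> and <style> blocks."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())


html = urlopen("https://example.com").read().decode("utf-8", errors="replace")
extractor = TextExtractor()
extractor.feed(html)
print("\n".join(extractor.parts))
```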

Code interpreter is unique in that it allows ChatGPT to run Python code within tightly controlled limits, described as a “sandboxed, firewalled execution environment.” This lets you do complex calculations with traditional computer code rather than relying on the confused answers an AI sometimes provides.
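
To make the distinction concrete, here is a small, hypothetical example of the kind of deterministic calculation a code-running plugin could execute exactly, where a chatbot working purely in text might otherwise estimate or fumble the answer:

```python
# Hypothetical example: compound interest computed exactly by code rather than
# estimated in conversation. All figures are made up for illustration.

principal = 2_500.00        # starting balance in dollars
annual_rate = 0.045         # 4.5% annual interest
years = 10
compounds_per_year = 12     # monthly compounding

balance = principal * (1 + annual_rate / compounds_per_year) ** (compounds_per_year * years)
print(f"Balance after {years} years: ${balance:,.2f}")
```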

This huge expansion of ChatGPT’s capabilities comes with risks, and OpenAI is proceeding cautiously. The timeline for the rollout of ChatGPT plugins depends on how well it performs in these initial tests.

A small number of ChatGPT Plus users will gain access to plugins first. OpenAI has a waitlist to request access, and ChatGPT plugins will eventually roll out to more companies and users.
