Microsoft might put ChatGPT into Outlook, Word, and PowerPoint

Microsoft is currently testing the GPT AI language model developed by OpenAI for potential use in its Office suite of products, including Word, Outlook, and PowerPoint.

OpenAI’s products, including ChatGPT and DALL-E 2, have become internet sensations for their text- and image-generating prowess, and many have speculated about how ChatGPT can be used practically and ethically. Microsoft, however, is looking to apply the company’s AI models in a more functional manner. It has already implemented a version of OpenAI’s GPT text-generation model as an update to its autocomplete feature, according to The Information.

Microsoft has also been testing GPT-based features in Outlook, Word, and PowerPoint. These include letting people search their Outlook inbox with natural-language, speech-like commands instead of keywords. Outlook and Word are also getting AI-powered features that suggest email replies and recommend document edits to sharpen users’ writing. There is currently no word on whether these features will eventually reach consumer-facing versions of Microsoft Office, or if the company is simply exploring the GPT model’s potential.

Still, this practical use of GPT technology comes after Microsoft invested $1 billion in OpenAI in 2019 and “purchased an exclusive license to the underlying technology behind GPT-3 in 2020,” the publication added.

In addition to its Office suite, Microsoft could be looking to implement the GPT AI model in its Bing search engine in an effort to compete with Google. This could be the product most likely to be released, with availability speculated for March, according to The Verge.

However, OpenAI’s technology, remarkable as it is, has a host of pitfalls, including some related to information accuracy and privacy. The company’s freemium ChatGPT chatbot is infamous for confidently filling gaps in its knowledge with incorrect information, which would be an especially serious challenge if the model were developed for a professional use case.

In terms of privacy, The Information said Microsoft has been working to develop its own custom privacy-preserving models based on GPT-3, as well as GPT-4, which has not yet been released. The company claims it has seen early positive results in “training large language models on private data,” but has not confirmed whether the model is viable enough for a commercial or even business-tier product.

Fionna Agomuoh
Fionna Agomuoh is a technology journalist with over a decade of experience writing about various consumer electronics topics…