Even Microsoft thinks ChatGPT needs to be regulated — here’s why

Artificial intelligence (AI) chatbots have been taking the world by storm, with the capabilities of OpenAI's ChatGPT causing wonderment and fear in almost equal measure. But in an intriguing twist, even Microsoft, OpenAI's biggest backer, is now calling on governments to take action and regulate AI before things spin dangerously out of control.

The appeal was made by BSA, a trade group representing numerous business software companies, including Microsoft, Adobe, Dropbox, IBM, and Zoom. According to CNBC, the group is advocating for the US government to integrate rules governing the use of AI into national privacy legislation.


More specifically, BSA's argument has four main tenets. The first two are that Congress should clearly set out when companies need to determine the potential impact of AI, and that those requirements should come into effect when the use of AI leads to "consequential decisions," a term that Congress should also define.

The other two are that Congress should ensure company compliance through an existing federal agency, and that the development of risk-management programs must be a requirement for any company dealing with high-risk AI.

According to Craig Albright, vice president of U.S. government relations at BSA, “We’re an industry group that wants Congress to pass this legislation, so we’re trying to bring more attention to this opportunity. We feel it just hasn’t gotten as much attention as it could or should.”

BSA believes the American Data Privacy and Protection Act, a bipartisan bill that is yet to become law, is the right legislation to codify its ideas on AI regulation. The trade group has already been in touch with the House Energy and Commerce Committee — the body that first introduced the bill — about its views.

Legislation is surely coming


The breakneck speed at which AI tools have developed in recent months has caused alarm in many corners about the potential consequences for society and culture, and those fears have been heightened by the numerous scandals and controversies that have dogged the field.

Indeed, BSA is not the first body to advocate for tougher guardrails against AI abuse. In March 2023, a group of prominent tech leaders signed an open letter calling on AI labs to pause the training of any system more powerful than GPT-4. The group argued this was necessary because "AI systems with human-competitive intelligence can pose profound risks to society and humanity," and that society at large needed time to catch up and understand what AI development could mean for the future of civilization.

It is clear that the pace of AI development has caused a great deal of consternation among industry leaders and the general public alike. And when even Microsoft is suggesting that its own AI products should be regulated, it seems increasingly likely that some form of AI legislation will become law sooner or later.
