
IBM CEO Rometty joins calls to ensure that AI remains a positive force

The growing influence of artificial intelligence (AI) has drawn concerns from a number of quarters about its potential impact on society. Several organizations have been working both to calm a nervous public and to ensure that AI is developed with those concerns in mind.

Microsoft CEO Satya Nadella has been particularly outspoken about how AI can be a net positive as long as it’s done the right way, but he’s not the only influential executive to speak up both in AI’s defense and with some cautionary words. IBM CEO Ginni Rometty has also joined in, with a very similar message to Nadella’s, as ZDNet reports.

Rometty called out the same issues of transparency and ethics in a statement prepared in advance of the World Economic Forum in Davos, Switzerland. Speaking about AI’s potential impact, she said, “Commonly referred to as artificial intelligence, this new generation of technology and the cognitive systems it helps power will soon touch every facet of work and life — with the potential to radically transform them for the better.”

In similar fashion to Microsoft’s Nadella, Rometty laid out three principles to guide the development of AI. First, AI should have a specific purpose, namely to “enhance and extend human capability, expertise, and potential” rather than merely replacing humans — something she predicts won’t happen in the near future. As she put it, “Cognitive systems will not realistically attain consciousness or independent agency.”

Next, she said that IBM will be transparent in its development of AI, ensuring that AI’s purpose and the data used in training AI systems are communicated and understood. Finally, IBM and others should help to ensure that workers and others gain the skills required to adjust to AI and its implementation.

Rometty joins executives like Nadella in speaking out about the proper development of AI systems and their impact on society, and some major efforts are being funded along the same lines. The recently announced $27 million fund created by LinkedIn founder Reid Hoffman along with MIT and Harvard is one example, and an initiative kicked off by Carnegie Mellon is another. AI is coming, but at least there’s a concerted effort to make sure that its potential for Skynet-like evil is curtailed.

Mark Coppock