These ingenious ideas could help make AI a little less evil

Right now, there’s plenty of hand-wringing over the damage artificial intelligence (AI) can do. To offset that, Firefox maker Mozilla set out to encourage more accountable use of AI with its Responsible AI Challenge, and the recently announced winners of the contest show that the AI-infused future doesn’t have to be all doom and gloom.

The first prize of $50,000 went to Sanative AI, which “provides anti-AI watermarks to protect images and artwork from being used as training data” for the kind of large language models that power AI tools like ChatGPT. There has been much consternation from photographers and artists over their work being used to train AI without permission, something Sanative AI could help to remedy.

Kwanele Chat Bot was the $30,000 runner-up, and it “aims to empower women in communities plagued by violence by enabling them to access help fast and ensure the collection of admissible evidence.” Third place and $20,000 went to Nolano, which is a “trained language model that uses natural language processing to run on laptops and smartphones.”

As well as the cash prizes, all of the winners will be mentored by AI industry leaders and gain access to Mozilla’s “resources and communities.”

Tremendous potential

The winners of Mozilla’s 2023 Responsible AI Challenge on stage with their prize checks. Photo: Mozilla

The competition comes at a time of increasing concern over the power of AI and its potential to cause harm. In March 2023, numerous tech leaders signed an open letter calling for a pause on AI development due to its risks, while earlier this week a similar open letter was published warning that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

But it doesn’t have to be all bad. As Joshua Long, Chief Security Analyst at security firm Intego, recently told Digital Trends, “Like any tool in the physical or virtual worlds, computer code can be used for good or for evil.” While AI’s vast computational abilities could be used for nefarious purposes, they can also be channeled towards solving some of the most pressing problems facing humanity.

Indeed, the Mozilla Responsible AI Challenge suggests that there is plenty of good that can be done when AI is put to responsible use. We’ve already seen some amazing uses for ChatGPT, and Mozilla’s contest could encourage further beneficial experimentation in this field.

What’s certain is that we’re only beginning to see what AI is capable of, and it’s imperative to ensure that it’s put to use as a force for good. As Mozilla’s prize winners have shown, AI has tremendous potential waiting to be unlocked.

Alex Blake