
Don’t be fooled by dystopian sci-fi stories: A.I. is becoming a force for good

Pepper the robot. Tomohiro Ohsumi/Getty Images
One of the most famous sayings about technology is the “law” laid out by the late American historian Melvin Kranzberg: “Technology is neither good nor bad; nor is it neutral.”

It’s a great saying: brief, but packed with instruction, like a beautifully poetic line of code. If I understand it correctly, it means that technology isn’t inherently good or bad, but that it will certainly affect us in some way, which means that its effects are not neutral. A similarly brilliant observation came from the French cultural theorist Paul Virilio: “The invention of the ship was also the invention of the shipwreck.”


To adopt that last image, artificial intelligence (A.I.) is the mother of all ships. It promises to be as significant a transformation for the world as the arrival of electricity was in the nineteenth and twentieth centuries. But while many of us will coo excitedly over the latest demonstration of DeepMind’s astonishing neural networks, a lot of the discussion surrounding A.I. is decidedly negative. We fret about robots stealing jobs, autonomous weapons threatening the world’s wellbeing, and the creeping privacy issues of data-munching giants. Heck, if the dream of artificial general intelligence is ever realized, some pessimists seem to think the only debate left is whether we’re obliterated by Terminator-style robots or turned into grey goo by nanobots.

While some of this technophobia is arguably misplaced, it’s not hard to see critics’ point. Tech giants like Google and Facebook have hired some of the greatest minds of our generation, and put them to work not curing disease or rethinking the economy, but coming up with better ways to target us with ads. The Human Genome Project, this ain’t! Shouldn’t a world-changing technology like A.I. be doing a bit more… world changing?

A course in moral A.I.?

2018 may be the year when things start to change. The seeds are still small and only just beginning to sprout, but there is growing evidence that the idea of making A.I. into a true force for good is gaining momentum. For example, starting this semester, the School of Computer Science at Carnegie Mellon University (CMU) will be teaching a new class, titled “Artificial Intelligence for Social Good.” It touches on many of the topics you’d expect from a graduate- and undergraduate-level class — optimization, game theory, machine learning, and sequential decision making — and will look at these through the lens of how each will impact society. The course will also challenge students to build their own ethical A.I. projects, giving them real-world experience with creating potentially life-changing A.I.

ITU/R. Farrell

“A.I. is the blooming field with tremendous commercial success, and most people benefit from the advances of A.I. in their daily lives,” Professor Fei Fang told Digital Trends. “At the same time, people also have various concerns, ranging from potential job loss to privacy and safety issues to ethical issues and biases. However, not enough awareness has been raised regarding how A.I. can help address societal challenges.”

Fang describes this new course as “one of the pioneering courses focusing on this topic,” but CMU isn’t the only institution to offer one. It joins a similar “A.I. for Social Good” course offered at the University of Southern California, which started last year. At CMU, Fang’s course is listed as a core course for a Societal Computing Ph.D. program.


During the new CMU course, Fang and a variety of guest lecturers will discuss a number of ways A.I. can help address big societal challenges: machine learning and game theory used to help protect wildlife from poaching, A.I. used to design efficient matching algorithms for kidney exchange, and A.I. used to help prevent HIV among homeless young people by selecting a set of peer leaders to spread health-related information.
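To make the kidney exchange example a little more concrete, here is a minimal, purely illustrative sketch of the underlying idea (the pairs and compatibilities are invented, and real exchange programs use far richer models, including longer donation chains): incompatible patient-donor pairs become nodes in a graph, possible swaps become edges, and a maximum matching picks out the largest set of non-overlapping two-way exchanges.

```python
# Illustrative only: pairwise kidney exchange framed as a graph-matching problem.
# Each node is an incompatible (patient, donor) pair; an edge means the two
# pairs could swap donors. All names and compatibilities below are made up.
import networkx as nx

pairs = ["pair_A", "pair_B", "pair_C", "pair_D", "pair_E"]
possible_swaps = [
    ("pair_A", "pair_B"),
    ("pair_B", "pair_C"),
    ("pair_C", "pair_D"),
    ("pair_D", "pair_E"),
]

graph = nx.Graph()
graph.add_nodes_from(pairs)
graph.add_edges_from(possible_swaps)

# A maximum-cardinality matching selects the largest set of non-overlapping
# two-way swaps, i.e., the most transplants achievable with pairwise exchange.
matching = nx.max_weight_matching(graph, maxcardinality=True)
print(matching)  # e.g., {('pair_B', 'pair_A'), ('pair_D', 'pair_C')}
```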

“The most important takeaway is that A.I. can be used to address pressing societal challenges, and can benefit society now and in the near future,” Fang said. “And it relies on the students to identify these challenges, to formulate them into clearly defined problems, and to develop A.I. methods to help address them.”

Challenges with modern A.I.

Professor Fang’s class isn’t the first time that the ethics of A.I. has been discussed, but it does represent (and, certainly, coincide with) a renewed interest in the field. A.I. ethics are going mainstream.

This month, Microsoft published a book called “The Future Computed: Artificial intelligence and its role in society.” Like Fang’s class, it runs through some of the scenarios in which A.I. can help people today: letting those with limited vision hear the world described to them by a wearable device, and using smart sensors to let farmers increase their yield and be more productive.

Ekso Bionics

There are plenty more examples of this kind. Here at Digital Trends, we’ve covered A.I. that can help develop new pharmaceutical drugs, A.I. that can help people avoid shelling out for a high-priced lawyer, A.I. that can diagnose disease, and A.I. and robotics projects that can help reduce backbreaking work — either by teaching humans how to perform it more safely or by taking them out of the loop altogether.

All of these are positive examples of how A.I. can be used for social good. But for it to really become a force for positive change in the world, artificial intelligence needs to go beyond simply good applications. It also needs to be created in a way that is considered positive by society. As Fang says, the possibility of algorithms reflecting bias is a significant problem, and one that’s still not well understood.


Several years ago, the African-American Harvard University professor Latanya Sweeney showed that Google’s ad-serving algorithms were inadvertently racist, linking names more commonly given to black people with ads relating to arrest records. Sweeney, who had never been arrested, found that she was nonetheless shown ads asking “Have you been arrested?” that her white colleagues were not. Similar studies have found that image recognition systems are more likely to associate a picture of a kitchen with women and one of sports coaching with men. In these cases, the bias wasn’t necessarily the fault of any one programmer, but rather came from discriminatory patterns hidden in the large sets of data the algorithms were trained on.
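To see how that kind of skew creeps in without anyone writing a biased rule, consider a toy sketch with completely made-up numbers: if a training set contains far more “woman in kitchen” captions than “man in kitchen” ones, a model that simply learns the most likely label will reproduce that imbalance.

```python
# Toy illustration of dataset skew turning into model bias.
# The caption counts here are fabricated purely for the example.
from collections import Counter

captions = (
    ["woman in kitchen"] * 80 + ["man in kitchen"] * 20 +
    ["man coaching team"] * 75 + ["woman coaching team"] * 25
)
counts = Counter(captions)

p_woman_given_kitchen = counts["woman in kitchen"] / (
    counts["woman in kitchen"] + counts["man in kitchen"]
)
print(f"P(woman | kitchen) = {p_woman_given_kitchen:.2f}")  # 0.80

# A classifier trained to predict the most probable label for a kitchen scene
# will learn this 80/20 skew from the data itself, which is why auditing the
# training data matters as much as auditing the code.
```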

The same is true of the “black boxing” of algorithms, which can make them inscrutable even to their own creators. In Microsoft’s new book, the authors suggest that A.I. should be built around an ethical framework, a bit like science fiction writer Isaac Asimov’s “Three Laws of Robotics” for the “woke” generation. The book’s six principles hold that A.I. systems should be fair; that they should be reliable and safe; that they should be private and secure; that they should be inclusive; that they should be transparent; and that they should be accountable.

“If designed properly, A.I. can help make decisions that are fairer because computers are purely logical and, in theory, are not subject to the conscious and unconscious biases that inevitably influence human decision-making,” Microsoft’s authors write.

More work to be done

Ultimately, this is going to be easier said than done. By most measures, A.I. research done in the private sector far outstrips work done in the public sector. That raises a problem of accountability in a world where algorithms are guarded as closely as missile launch codes. There is also little incentive for companies to solve big societal problems if doing so won’t immediately benefit their bottom line. (Or score them some brownie points to possibly avoid regulation.) It would be naive to think that the efforts of profit-driven companies are all going to be altruistic, no matter how much they might suggest otherwise.

For broader discussion about the use of A.I. for the public good to take hold, something is going to have to change. Is it recognizing the power of artificial intelligence and putting in place more regulations that allow for scrutiny? Does it mean companies forming ethics boards, as Google DeepMind has done, as part of their research into cutting-edge A.I.? Is it awaiting a market-driven change, or backlash, that will demand tech giants offer more information about the systems that govern our lives? Is it, as Bill Gates has suggested, implementing a robot tax that would curtail the use of A.I. or robotics in some situations by taxing companies for replacing their workers? None of these solutions are perfect.

And the biggest question of all remains: Who exactly defines ‘good’? Debates about how A.I. can be a force for good in our society will involve a great many users, policymakers, activists, technologists, and other interested parties working out what kind of world we want to create, and how best to use technology to achieve it.

As DeepMind co-founder Mustafa Suleyman told Wired: “Getting these things right is not purely a matter of having good intentions. We need to do the hard, practical and messy work of finding out what ethical A.I. really means. If we manage to get A.I. to work for people and the planet, then the effects could be transformational. Right now, there’s everything to play for.”

Courses like Professor Fang’s aren’t the final destination, by any means. But they are a very good start.
