Machine learning? Neural networks? Here’s your guide to the many flavors of A.I.


A.I. is everywhere at the moment, and it’s responsible for everything from the virtual assistants on our smartphones to the self-driving cars soon to be filling our roads to the cutting-edge image recognition systems reported on by yours truly.

Unless you’ve been living under a rock for the past decade, there’s a good chance you’ve heard of it before — and probably even used it. Right now, artificial intelligence is to Silicon Valley what One Direction is to 13-year-old girls: an omnipresent source of obsession to throw all your cash at, while daydreaming about getting married whenever Harry Styles is finally ready to settle down. (Okay, so we’re still working on the analogy!)

But what exactly is A.I.? — and can terms like “machine learning,” “artificial neural networks,” “artificial intelligence” and “Zayn Malik” (we’re still working on that analogy…) be used interchangeably?

To help you make sense of the buzzwords and jargon you’ll hear when people talk about A.I., we put together this simple guide to the many flavors of artificial intelligence — if only so that you don’t make any faux pas when the machines finally take over.

Artificial intelligence

We won’t delve too deeply into the history of A.I. here, but the important thing to note is that artificial intelligence is the tree that all of the following terms are branches on. For example, reinforcement learning is a type of machine learning, which is in turn a subfield of artificial intelligence. However, artificial intelligence isn’t (necessarily) reinforcement learning. Got it?

There’s no official consensus on what A.I. means (some people suggest it’s simply the cool things computers can’t do yet), but most would agree that it’s about making computers perform actions that would be considered intelligent were they to be carried out by a person.

The term was first coined in 1956, at a summer workshop at Dartmouth College in New Hampshire. The big distinction in A.I. today is between domain-specific Narrow A.I. and Artificial General Intelligence. So far, no one has built a general intelligence. Once they do, all bets are off…

Symbolic A.I.

You don’t hear so much about Symbolic A.I. today. Also referred to as Good Old Fashioned A.I., Symbolic A.I. is built around logical steps which can be given to a computer in a top-down manner. It entails providing lots and lots of rules to a computer (or a robot) on how it should deal with a specific scenario.

This approach led to a lot of early breakthroughs, but those systems turned out to work very well in labs, where every variable could be perfectly controlled, and often far less well in the messiness of everyday life. As one writer quipped, early Symbolic A.I. systems were a little bit like the god of the Old Testament — with plenty of rules, but no mercy.

Today, researchers like Selmer Bringsjord are fighting to bring back a focus on logic-based Symbolic A.I., built around the superiority of logical systems which can be understood by their creators.

Machine Learning

If you hear about a big A.I. breakthrough these days, chances are that unless a big noise is made to suggest otherwise, you’re hearing about machine learning. As its name implies, machine learning is about making machines that, well, learn.

Like A.I. itself, machine learning has multiple subcategories, but what they all have in common is the statistics-driven ability to take in data and apply algorithms to it in order to extract knowledge.
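To make that a bit more concrete, here’s a toy sketch in Python of the simplest kind of “learning from data”: fitting a straight line to examples using ordinary least squares, rather than hard-coding the rule. The function name and the data are ours, purely for illustration.

```python
# A toy "machine learning" example: learn the relationship between
# input x and output y from data, instead of hard-coding a rule.
# Here we fit a straight line y = a*x + b by ordinary least squares.

def fit_line(xs, ys):
    """Return slope a and intercept b minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Data generated by the rule y = 2x + 1; the "learned" model should
# recover those parameters from the examples alone.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a, b)  # → 2.0 1.0
```

The point isn’t the math so much as the shape of the process: the program was never told the rule — it extracted it from data.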

There are a plethora of different branches of machine learning, but the one you’ll probably hear the most about is…

Neural Networks

If you’ve spent any time in our Cool Tech section, you’ve probably heard about artificial neural networks. As brain-inspired systems designed to replicate the way that humans learn, neural networks modify their own code to find the link between input and output — or cause and effect — in situations where this relationship is complex or unclear.

The concept of artificial neural networks actually dates back to the 1940s, but it was really only in the past few decades that it started to truly live up to its potential, aided by the arrival of algorithms like “backpropagation,” which allows a neural network to adjust its hidden layers of neurons in situations where the outcome doesn’t match what the creator is hoping for. (For instance, a network designed to recognize dogs which misidentifies a cat.)
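Here’s a heavily simplified sketch of that idea in Python — a tiny network with one hidden layer, trained by backpropagation to learn the XOR function. Real networks use libraries and millions of neurons; the shape, seed, and learning rate here are arbitrary choices of ours for illustration.

```python
import math
import random

# A minimal neural network with one hidden layer, trained by
# backpropagation: whenever the output doesn't match the target,
# the error is pushed back through the layers to adjust the weights.

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

HIDDEN = 3  # network shape: 2 inputs -> 3 hidden neurons -> 1 output
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b_hidden = [0.0] * HIDDEN
w_out = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b_out = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w_hidden, b_hidden)]
    y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)) + b_out)
    return h, y

def train(epochs=5000, lr=1.0):
    global b_out
    for _ in range(epochs):
        for x, target in data:
            h, y = forward(x)
            # Backpropagation: compute each weight's share of the error
            # and nudge it in the direction that shrinks the error.
            d_out = (y - target) * y * (1 - y)
            for i in range(HIDDEN):
                d_hid = d_out * w_out[i] * h[i] * (1 - h[i])
                w_out[i] -= lr * d_out * h[i]
                for j in range(2):
                    w_hidden[i][j] -= lr * d_hid * x[j]
                b_hidden[i] -= lr * d_hid
            b_out -= lr * d_out

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = loss()
train()
after = loss()
print(before, "->", after)  # the error shrinks as the network learns
```

Deep learning is, very roughly, this same loop scaled up: many more layers, each extracting progressively more abstract features.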

This decade, artificial neural networks have benefited from the arrival of deep learning, in which different layers of the network extract different features until it can recognize what it is looking for.

Within the neural network heading, there are different models of potential network — with feedforward and convolutional networks likely to be the ones you should mention if you get stuck next to a Google engineer at a dinner party.

Reinforcement Learning

Reinforcement learning is another flavor of machine learning. It’s heavily inspired by behaviorist psychology, and is based around the idea that a software agent can learn to take actions in an environment in order to maximize a reward.

As an example, back in 2015 Google’s DeepMind released a paper showing how it had trained an A.I. to play classic video games, with no instruction other than the on-screen score and the approximately 30,000 pixels that made up each frame. Told to maximize its score, reinforcement learning meant that the software agent gradually learned to play the game through trial and error.

Unlike an expert system, reinforcement learning doesn’t need a human expert to tell it how to maximize a score. Instead, it figures it out over time. In some cases, the rules it is learning may be fixed (as with playing a classic Atari game). In others, it keeps adapting as time goes by.
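To see trial and error in action, here’s a sketch of tabular Q-learning — one classic reinforcement learning recipe, though not DeepMind’s exact method — on a made-up toy environment: a corridor of five cells with a reward for reaching the rightmost one. Nobody tells the agent “go right”; it discovers that policy from the reward alone.

```python
import random

# Tabular Q-learning on a toy environment: a corridor of 5 cells,
# with a reward of +1 for reaching the rightmost cell. The agent
# learns purely by trial and error, guided only by the reward.

random.seed(42)

N_STATES = 5            # cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]      # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        # Nudge the estimate toward reward + discounted future value.
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the greedy policy in every non-goal cell is "move right".
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

DeepMind’s Atari system replaced the lookup table with a deep neural network, but the underlying loop — act, observe the reward, update the value estimate — is the same.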

Evolutionary Algorithms

Known as generic population-based metaheuristic optimization algorithms if you’ve not been formally introduced yet, evolutionary algorithms are another type of machine learning, designed to mimic the concept of natural selection inside a computer.

The process begins with a programmer inputting the goals he or she is trying to achieve with the algorithm. For example, NASA has used evolutionary algorithms to design satellite components. In that case, the goal may be to come up with a component capable of fitting in a 10cm x 10cm box, capable of radiating a spherical or hemispherical pattern, and able to operate at a certain Wi-Fi band.

The algorithm then comes up with multiple generations of iterative designs, testing each one against the stated goals. When one eventually ticks all the right boxes, the process stops. In addition to helping NASA design satellites, evolutionary algorithms are a favorite of creatives using artificial intelligence in their work, such as the designers of this nifty furniture.
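That mutate-test-select loop can be sketched in a few lines of Python. As a stand-in for NASA’s antenna requirements, our made-up “design” is a string of letters and the fitness function simply counts how many characters match a target word — the parameters and alphabet are arbitrary choices for illustration.

```python
import random

# A minimal evolutionary algorithm: generate a population of random
# "designs", score each against the goal, breed mutated copies of the
# fittest, and stop when a design ticks all the boxes.

random.seed(1)

TARGET = "antenna"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    """Score a design: how many characters already satisfy the goal?"""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.2):
    """Randomly tweak some characters, like mutation in natural selection."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start with a population of completely random designs.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(50)]

best = population[0]
for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if best == TARGET:  # the design ticks all the boxes: stop
        break
    # Survival of the fittest: the top 10 designs survive intact and
    # also seed 40 mutated offspring for the next generation.
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print(f"evolved {best!r} after {generation} generations")
```

No individual step is clever; the selection pressure applied over many generations does all the work — which is exactly why the technique appeals to designers looking for shapes no human would have drawn.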

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…