
Deep learning vs. machine learning: What’s the difference between the two?

In recent months, Microsoft, Google, Apple, Facebook, and other entities have declared that we no longer live in a mobile-first world. Instead, it’s an artificial intelligence-first world, where digital assistants and other services will be your primary means of finding information and getting tasks done. Your typical smartphone or PC is now a secondary go-getter.

Backing this new frontier are two terms you’ll likely hear often: machine learning and deep learning. These are two methods of “teaching” artificial intelligence to perform tasks, but their uses go way beyond creating smart assistants. What’s the difference? Here’s a quick breakdown.

Computers now see, hear, and speak

With the help of machine learning, computers can now be “trained” to predict the weather, determine stock market outcomes, understand your shopping habits, control robots in a factory, and so on. Google, Amazon, Facebook, Netflix, LinkedIn, and other popular consumer-facing services are all backed by machine learning. But at the heart of all this learning is what’s known as an algorithm.

Simply put, an algorithm is not a complete computer program (a set of instructions), but a limited sequence of steps to solve a single problem. For example, a search engine relies on an algorithm that grabs the text you enter into the search field and searches the connected database to provide related results. It takes specific steps to achieve a single, specific goal.
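To make that concrete, here’s a minimal sketch in Python of a toy keyword-search “algorithm.” The document list and the scoring rule are made up for illustration; a real search engine’s steps are far more involved, but the idea of a fixed sequence of steps aimed at one problem is the same.

```python
def search(query, documents):
    """Toy search algorithm: a fixed sequence of steps for one problem.

    1. Split the query into words.
    2. Score every document by how many query words it contains.
    3. Return the matching documents, best match first.
    """
    words = query.lower().split()
    scored = []
    for doc in documents:
        text = doc.lower()
        score = sum(text.count(word) for word in words)
        if score > 0:
            scored.append((score, doc))
    return [doc for _, doc in sorted(scored, reverse=True)]

# Made-up "database" of documents for the example.
results = search("machine learning", [
    "Deep learning is a subset of machine learning",
    "Checkers was an early test bed for machine learning",
    "CES is a consumer electronics trade show",
])
print(results)
```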

Machine learning has actually been around since the 1950s. Arthur Samuel didn’t want to write a highly detailed, lengthy program that could enable a computer to beat him in a game of checkers. Instead, he created an algorithm that enabled the computer to play against itself thousands of times so it could “learn” how to perform as a stand-alone opponent. By 1962, this computer had beaten the Connecticut state champion.

Thus, at its core, machine learning is based on trial and error. We can’t write a program by hand that can help a self-driving car distinguish a pedestrian from a tree or a vehicle, but we can create an algorithm for a program that can solve this problem using data. Algorithms can also be created to help programs predict the path of a hurricane, diagnose Alzheimer’s early, determine the world’s most overpaid and underpaid soccer stars, and so on.

Machine learning typically runs on relatively modest hardware, and it breaks a problem down into parts. Each part is solved in order, and the results are then combined into a single answer to the problem. Well-known machine learning contributor Tom Mitchell of Carnegie Mellon University explains that computer programs are “learning” from experience if their performance of a specific task improves with that experience. Machine learning algorithms essentially enable programs to make predictions and, over time, get better at those predictions through trial-and-error experience.

Here are the four main types of machine learning:

Supervised machine learning

In this scenario, you provide a computer program with labeled data. For instance, if the assigned task is to separate pictures of boys and girls using an algorithm for sorting images, those with a male child would have a “boy” label, and images with a female child would have a “girl” label. This is considered a “training” dataset, and the labels remain in place until the program can successfully sort the images at an acceptable rate.
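As a rough illustration of the idea (not the actual system described above), here’s a minimal supervised-learning sketch using Python and scikit-learn. The numeric “features” standing in for each picture, and the labels, are invented for the example.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical features extracted from each picture (say, hair length and height),
# paired with the human-supplied labels: 0 = "boy", 1 = "girl".
X_train = [[2.0, 130.0], [25.0, 128.0], [3.0, 140.0], [30.0, 135.0]]
y_train = [0, 1, 0, 1]

# "Training" means fitting the model to the labeled examples.
model = LogisticRegression()
model.fit(X_train, y_train)

# The trained model can now label pictures it has never seen.
print(model.predict([[4.0, 138.0], [28.0, 131.0]]))  # likely [0 1]
```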

Semi-supervised machine learning

In this case, only a few of the images are labeled. The computer program then uses an algorithm to make its best guess about the unlabeled images, and that output is fed back to the program as training data. A new batch of images is then provided, again with only a few of them labeled. The process repeats until the program can distinguish between boys and girls at an acceptable rate.
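Here’s a hedged sketch of that feedback loop, again with made-up numbers: train on the few labeled examples, let the model guess at the unlabeled ones, and feed the confident guesses back in as if they were labeled data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A few labeled examples (hypothetical image features) and a larger unlabeled pool.
X_labeled = np.array([[2.0, 130.0], [25.0, 128.0], [3.0, 140.0], [30.0, 135.0]])
y_labeled = np.array([0, 1, 0, 1])            # 0 = "boy", 1 = "girl"
X_unlabeled = np.array([[4.0, 138.0], [28.0, 131.0], [26.0, 129.0], [1.0, 142.0]])

model = LogisticRegression()
for _ in range(3):                            # a few rounds of self-training
    model.fit(X_labeled, y_labeled)
    if len(X_unlabeled) == 0:
        break
    probs = model.predict_proba(X_unlabeled)
    confident = probs.max(axis=1) > 0.8       # keep only the confident guesses
    if not confident.any():
        break
    # Feed the confident guesses back in as if they were labeled training data.
    X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
    y_labeled = np.concatenate([y_labeled, probs[confident].argmax(axis=1)])
    X_unlabeled = X_unlabeled[~confident]
```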

Unsupervised machine learning

This type of machine learning doesn’t involve labels whatsoever. Instead, the program is blindly thrown into the task of splitting images of boys and girls into two groups, using one of two methods. One, called “clustering,” groups similar objects together based on characteristics such as hair length, jaw size, eye placement, and so on. The other, called “association,” has the program create if/then rules based on similarities it discovers. In other words, it determines a common pattern between the images and sorts them accordingly.
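A minimal clustering sketch, assuming scikit-learn and some invented per-image measurements, shows how a program can split data into two groups without ever being told what the groups mean.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical measurements taken from each image: [hair length, jaw size].
# No labels are given; the algorithm just looks for structure in the data.
X = np.array([[2.0, 8.5], [3.0, 8.0], [25.0, 6.0],
              [28.0, 6.5], [1.5, 9.0], [30.0, 5.5]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = kmeans.fit_predict(X)
print(groups)  # e.g. [0 0 1 1 0 1] -- two groups, but no names attached to them
```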

Reinforcement machine learning

Chess would be an excellent example of this type of algorithm. The program knows the rules of the game and how to play, and it goes through the steps to complete a round. The only information provided to the program is whether it won or lost the match. It keeps replaying the game, reinforcing the moves that led to wins, until it can win reliably.
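Chess itself is far too large for a short example, so here’s a simplified reinforcement-learning sketch on a made-up toy game, where the only feedback is whether the final position counts as a “win,” and the moves that led there get reinforced.

```python
import random

# A toy "game": walk from square 0 to square 4 in at most 10 moves.
# The only feedback is the final result: 1.0 for reaching the goal, 0.0 otherwise.
ACTIONS = [-1, +1]
GOAL, MAX_MOVES = 4, 10

q = {}  # table of estimated value for taking an action in a given square

def play_episode(epsilon=0.2, alpha=0.5, gamma=0.9):
    pos, history = 0, []
    for _ in range(MAX_MOVES):
        # Explore occasionally, otherwise pick the move that has worked best so far.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q.get((pos, a), 0.0))
        history.append((pos, action))
        pos = max(0, pos + action)
        if pos == GOAL:
            break
    reward = 1.0 if pos == GOAL else 0.0
    # Work backward through the game, reinforcing the moves that led to a win.
    for pos, action in reversed(history):
        old = q.get((pos, action), 0.0)
        q[(pos, action)] = old + alpha * (reward - old)
        reward *= gamma  # earlier moves get slightly less credit

for _ in range(500):
    play_episode()

print({k: round(v, 2) for k, v in sorted(q.items())})
```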

Now it’s time to move on to a deeper subject: deep learning.

Deep learning

Deep learning is basically machine learning on a “deeper” level (pun unavoidable, sorry). It’s inspired by how the human brain works, but it requires high-end machines with discrete graphics cards capable of crunching vast amounts of numbers, along with enormous amounts of “big” data. Small amounts of data actually yield lower performance.

Unlike standard machine learning algorithms, which break problems down into parts and solve them individually, deep learning solves the problem from end to end. Better yet, the more data and training time you feed a deep learning algorithm, the better it gets at solving a task.

In our machine learning examples, we used images of boys and girls. The program used algorithms to sort those images mostly based on spoon-fed data: human-chosen labels and characteristics. With deep learning, those hand-picked cues aren’t provided for the program to use. Instead, it scans all the pixels within an image to discover edges and shapes that can be used to distinguish between a boy and a girl. It then puts those edges and shapes into a ranked order of likely importance to tell the two apart.

On an even more simplified level, machine learning will distinguish between a square and a triangle based on information provided by humans: squares have four points, and triangles have three. With deep learning, the program doesn’t start out with pre-fed information. Instead, it uses an algorithm to determine how many lines the shapes have, whether those lines are connected, and whether they are perpendicular. Naturally, the algorithm would eventually figure out that an inserted circle doesn’t fit in with its square-and-triangle sorting.
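As a loose illustration of the end-to-end approach, here’s a tiny neural-network sketch using Python and TensorFlow/Keras. The “images” are random placeholder pixel values, so the network won’t learn anything meaningful here; the point is only that it is handed raw pixels and labels, not hand-written rules about points and lines.

```python
import numpy as np
import tensorflow as tf

# Stand-in dataset: 8x8 black-and-white "images" flattened into 64 raw pixel
# values. The pixel data below is random placeholder content for the sketch.
X = np.random.rand(200, 64).astype("float32")
y = np.random.randint(0, 2, size=200)  # 0 = square, 1 = triangle

# A small network that must discover its own features from raw pixels.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)

print(model.predict(X[:3]))  # probabilities that each image is a triangle
```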

Again, this latter “deep thinking” process requires more hardware to process the big data generated by the algorithm. These machines tend to reside in large data centers, working together as an artificial neural network to handle all of the big data generated by and supplied to AI applications. Programs using deep learning algorithms also take longer to train because they’re learning on their own instead of relying on hand-fed shortcuts.

“Deep Learning breaks down tasks in ways that makes all kinds of machine assists seem possible, even likely. Driverless cars, better preventive healthcare, even better movie recommendations, are all here today or on the horizon,” writes Nvidia’s Michael Copeland. “With Deep Learning’s help, A.I. may even get to that science fiction state we’ve so long imagined.”

Is Skynet on the way? Not yet

A great recent example of deep learning is translation. The technology is capable of listening to a presenter speaking in English and translating their words into a different language, through both text and an electronic voice, in real time. This achievement was a slow burn over the years, owing to differences in grammar and usage between languages, variations in voice pitch, and hardware capabilities that needed to mature.

Deep learning is also responsible for conversation-carrying chatbots such as Amazon Alexa and Microsoft Cortana, as well as features across Facebook, Instagram, and more. On social media, algorithms based on deep learning are what cough up contact and page suggestions. Deep learning even helps companies customize their creepy advertising to your tastes, even when you’re not on their site. Yay for technology.

“Looking to the future, the next big step will be for the very concept of the ‘device’ to fade away,” says Google CEO Sundar Pichai. “Over time, the computer itself—whatever its form factor—will be an intelligent assistant helping you through your day. We will move from mobile first to an A.I. first world.”
