Reading time: 5 minutes

Artificial Intelligence vs. Machine Learning vs. Deep Learning

We all like to think we’re a little more clever than Arthur C. Clarke posited with his iconic third law: “Any sufficiently advanced technology is indistinguishable from magic.”

We know who’s running Oz behind the curtain, and we’re not quite convinced yet that AI is as dangerous as nuclear weapons or an “immortal dictator from which we can never escape.” It’s just grad-level Applied Mathematics and Statistics properly advanced to its logical conclusion.

But do we truly understand the difference between Artificial Intelligence, Machine Learning, and Deep Learning? Or do we just parrot the jargon to fit in?

Despite DeepLearning.AI founder and CEO Andrew Ng’s best assurances of “don’t worry about it if you don’t understand,” we actually need to understand it. Maybe entrepreneurial grand wizard and all-around badass Mark Cuban said it better at the Upfront Summit 2017: “Artificial Intelligence, deep learning, machine learning — whatever you’re doing if you don’t understand it — learn it. Because otherwise, you’re going to be a dinosaur within three years.”

These three terms are used interchangeably, but they’re not the same. At all. They’re more like three layers of a Matryoshka nesting doll. Deep learning is a specific kind of machine learning. And machine learning is a subset of artificial intelligence.

With rising consumer experience expectations, every company has to begin pondering its own relationship with these technologies. We all know about BD (big data), and we all know we need a plan to interact with billions of files, lines of code, images, videos, user-generated content, and market intelligence. These three technological innovations (really three layers of one) equip us to leverage the computational power of machines to extend our own capabilities. We’re at the proverbial “too close for missiles, switching to guns” moment.

So, how do you unpack these layers and understand the implications? Let’s get conversant with the future of technology together.

Artificial Intelligence

At its most basic definition, AI is any program that can sense, reason, act, or adapt in a manner similar to human intelligence. It’s silicon hardware and electrical impulses that mimic the human mind. Not creepy; useful.

This is a broad, far-reaching category of technology programming that can be used to describe almost any intelligent machine or system. For better or worse, it gets broadly applied today to nearly every platform or SaaS offering. Everyone’s got AI at this point.

There are a lot of different theoretical approaches to AI, as well. Some machines only react, without using past experience to inform new decisions. Others hold short-term memories, while still others attempt to capture human emotions and understand decision-making. In theory, AI can even be self-aware, understanding not only our emotional states and responses but its own internal feelings and the appropriate interactions implied therein.

In our world, it’s more useful to break AI down into discrete categories that speak to the purpose and capabilities of a particular application:

Artificial Narrow Intelligence (ANI) programs complete a specific task. The scope is limited, and the corresponding “human behaviors” are more basic. (e.g., Identify the people in these photos. Predict the winner of this chess game. Locate the exact file I want with limited defining parameters.) Essentially, ANI calculates the probability of a given outcome from specific inputs (see the sketch after this list). From chatbots and virtual assistants to real-time predictive analytics and intelligent search, this infinitely useful and expansively common technology is what we mean when we talk about Artificial Intelligence today.

Artificial General Intelligence (AGI) is AI that performs on par with you and me—or even with the handful of mortals smarter than us. It’s emotionally intelligent, empathic, and nuanced. And, it doesn’t exist yet. Not that we know of, at least.

Artificial Super Intelligence (ASI) is AI that haunts our dreams and lurks in the corners of our nightmares. This is the kind of AI that would surpass human intelligence and ability, the kind of AI that Hollywood casts in dystopian SciFi series (or SciFi Western series) and makes us all question the risk/reward of AI research. It also remains purely hypothetical at the moment.
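To make ANI concrete, here’s a minimal sketch of that “probability of an outcome, given specific inputs” pattern. The chess features and training data below are invented purely for illustration; the point is the shape of the problem, not the model.

```python
# Toy narrow-AI example: estimate P(win) for a chess game from two
# hypothetical input features. Data and features are invented.
from sklearn.linear_model import LogisticRegression

# Each row: [moves_played, material_advantage]; label: 1 = win, 0 = loss
X_train = [[20, 3], [35, -2], [50, 5], [15, -4], [40, 1], [28, -1]]
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)

# "Predict the winner of this chess game" reduces to a probability estimate
print(model.predict_proba([[30, 2]]))  # -> [[P(loss), P(win)]]
```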

Machine Learning

Machine learning is AI that gets smarter. The more data the algorithm is exposed to, the more intelligently it performs on similar new data over time. Unlike deep learning, machine learning (ML) requires human intervention to improve.

Essentially, ML is a way to leverage AI in specific applications to access incredibly vast data sets and predict the future based on statistical history. It works with structured and unstructured data, making it a critical component of how we grow bat365 CloudFS.

In particular, ML is a critical part of automation. By using data to teach algorithms to predict future data outcomes more intelligently, we can transfer predictive work to digital pathways that offload our own neural networks. Whether it’s in predictive cybersecurity protocols, smart searches, or forecasting and optimization for a specific industry like financial services or geospatial engineering, ML sharpens our own intuition for reading data and making decisions.

ML feeds on data. Today, that largely means traditional data sets paired with algorithms like clustering, classification, and regression analysis. The more good data you have, the better your model performs over time. Because ML operates on the difference, or error rate, between established “ground truths” and its predictions, it’s a self-optimizing system that will only advance as far as the quality of the data it’s fed.
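If that sounds abstract, here’s a toy version of that self-optimizing loop in plain NumPy, with synthetic data invented for the example. The only signal the model ever receives is the gap between its predictions and the ground truth, and each update shrinks it.

```python
# Toy error-driven learning loop: fit y = w*x + b by repeatedly measuring
# the difference between predictions and ground truth. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=100)            # inputs
y = 3.0 * X + 2.0 + rng.normal(0, 1, 100)   # noisy "ground truths"

w, b = 0.0, 0.0   # the model starts knowing nothing
lr = 0.01         # learning rate

for step in range(1000):
    pred = w * X + b                    # current predictions
    error = pred - y                    # distance from ground truth
    w -= lr * (2 * error * X).mean()    # gradient step on squared error
    b -= lr * (2 * error).mean()

print(round(w, 2), round(b, 2))  # should approach the true 3.0 and 2.0
```

The better and cleaner the ground-truth data, the closer those recovered parameters land, which is the “more good data” point in miniature.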

Because machine learning is still a broad category of AI (broader than deep learning, at least), we often define an ML system by the objective it’s been given. Start by stating the scope of the algorithm. Once you understand its purpose, you can direct the proper data to it to maximize its value within your technology application, such as intelligent search functions or predictive analytics in a specific context.

Deep Learning

Now we’ve drilled down to a subset of machine learning. Deep learning (DL) uses layered neural networks to learn from vast data lakes. The depth of the neural network is what defines deep learning: a true deep learning network has at least three layers, including the input and output layers.

Here, we’re approaching Arthur C. Clarke’s magical science. Deep learning eliminates large parts of the human intervention required in machine learning. It’s a self-educating algorithm that uses the sheer volume of available data to improve predictive performance, opening new levels of complexity in what can be automated. It’s scalable machine learning, according to MIT—and it’s the future of unstructured data in particular.

Deep learning is where artificial intelligence most closely mimics the human brain today. Beyond just advanced statistical analysis, deep learning can begin to execute intelligent decision-making by a non-human brain.

Neural networks are the backbone of deep learning. An input layer accepts the raw data. One or more hidden layers identify features hidden from the initial analysis of the data. And an output layer provides the appropriate output. This multi-layered structure of data analysis is what separates deep learning from machine learning. It’s also what allows deep learning to solve tasks where established machine learning models prove inadequate.
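Here’s what that layered structure looks like at its absolute smallest, in a NumPy sketch. The layer sizes and random weights are placeholders, not a trained model; the point is just the flow from input to hidden to output.

```python
# Minimal forward pass: input layer -> hidden layer -> output layer.
# Sizes and weights are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=4)           # input layer: 4 raw features

W1 = rng.normal(size=(4, 8))     # weights into the hidden layer
W2 = rng.normal(size=(8, 2))     # weights into the output layer

hidden = np.maximum(0, x @ W1)   # hidden layer: ReLU surfaces latent features
logits = hidden @ W2             # output layer: raw scores
output = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities

print(output)  # two class probabilities that sum to 1
```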

Advances in deep learning are the driving force behind much of the interactive AI we experience today, from self-driving cars to virtual assistants to real-time language translation capabilities. Because deep learning can learn complex weights and biases over its inputs and then backpropagate errors to reduce its error rate, it’s inherently more self-optimizing than machine learning.
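As a hedged sketch of what that backpropagation actually does, here’s the same kind of tiny network being trained on a single invented example: the output error flows backward through the layers, and every weight is nudged toward a lower error rate.

```python
# Minimal backpropagation loop on a two-layer network. Shapes, data, and
# hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(1, 4))        # one training example
target = np.array([[1.0, 0.0]])    # the output we want

W1 = rng.normal(size=(4, 8)) * 0.1
W2 = rng.normal(size=(8, 2)) * 0.1
lr = 0.1

for step in range(200):
    hidden = np.maximum(0, x @ W1)               # forward: input -> hidden
    output = hidden @ W2                         # forward: hidden -> output
    error = output - target                      # error at the output layer

    grad_W2 = hidden.T @ error                   # push the error back to W2
    grad_hidden = (error @ W2.T) * (hidden > 0)  # ...through the ReLU
    grad_W1 = x.T @ grad_hidden                  # ...and back to W1

    W2 -= lr * grad_W2                           # each update lowers the error
    W1 -= lr * grad_W1

print(((output - target) ** 2).mean())  # squared error after training
```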

Perhaps most promising to bat365’s future, deep learning extends feature extraction beyond machine learning’s limitations. Rather than operating on flat algorithms that require pre-processing of the data features, deep learning can take raw data and execute intelligent search algorithms without the underlying categorization or classification of the data. These algorithms understand the implicit representation of raw data without human input, opening up the future of unstructured data.
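To illustrate that difference with an invented document and made-up features (this isn’t bat365 code): classic ML needs a human to decide up front which features matter, while deep learning can take the raw representation and learn its own.

```python
# Contrast: hand-crafted features vs. raw input. Everything here is invented
# for illustration.

def hand_crafted_features(text: str) -> list[float]:
    """Classic ML: a human guesses which features matter before training."""
    return [
        float(len(text)),                                    # document length
        float(text.count("error")),                          # a guessed keyword
        sum(c.isdigit() for c in text) / max(len(text), 1),  # digit ratio
    ]

def raw_representation(text: str) -> list[int]:
    """Deep learning: raw bytes go in; the layers learn the features."""
    return list(text.encode("utf-8"))

doc = "Backup job 42 finished with 0 errors."
print(hand_crafted_features(doc))
print(raw_representation(doc)[:10])  # first few raw bytes, no human curation
```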

Rise of the Machines

Whether you can diagram decision trees and logistic regression or struggle to define a Naïve Bayes classifier, you need to understand the distinctions between artificial intelligence, machine learning, and deep learning. All three will impact your future, the future of your data, and the future of your organization.

As we march ever closer to a code-driven equivalent of the human brain, we’ll continue to see gains in productivity, speed, intelligence, and memory. We’ll have more access to more data than ever before. The teams that learn how to harness that information and use it to improve their customer experience, go-to-market strategy, and product development will win. Everyone else will be wondering what happened when the machines replaced them.

We can’t afford to not understand these concepts. Even without a Ph.D. in Computer Science, we can see the future, and it looks a lot like algorithm-driven intelligence coupled with living humans’ emotions, creativity, desire, and drive. The human spark will never be replaced by electrons, but it can be vastly improved upon with the right data inputs.