Key Differences Between Human and Machine Learning

Machines can do some things much better than humans. But they still need humans to tell them what’s worthwhile.

We all have a fairly clear idea of what “artificial intelligence” (AI) means – even if we’re not so clear on how it works. Over the past few years, however, the terms “machine learning” and “deep learning” have kept popping up in connection with AI.

What exactly is machine learning? How is it different from human learning?

 

This article will answer those questions in clear, practical terms – while providing some insight into the ways machine learning and human learning work together to develop new solutions.

Machine learning mimics certain functions of the human brain.

Although the concept of machine learning seems to have appeared very suddenly, it dates all the way back to 1959, when the computer scientist Arthur Samuel coined the term and proposed it as a possible path to artificial intelligence.

This is a critical distinction:

Machine learning is not the same as artificial intelligence. It’s a set of techniques designed to help produce artificial intelligence.

Like human learning, machine learning is based on the concept of reinforcement.

 

For example, we learn very early in life to associate certain familiar faces with hugs and kisses – and to associate barking dogs and honking car horns with a feeling of danger. As we grow older, our brains can learn all kinds of associations – for instance, the sight of a busy office may evoke a feeling of stress, while the smell of a certain perfume may give rise to a feeling of love.

We weren’t born with any of these connections hardwired into our brains – we learned them, through repeated exposure and experience.

In the human brain, “neurons that fire together wire together.” In other words, the more often two groups of brain cells get activated at the same time, the more connected they become – and the more likely one of those groups will light up when the other one is activated. This is why the smell of Mom’s cookies can trigger a visual memory of the house where you grew up – or vice versa.

Machine learning simulates this biological process on digital hardware and software.
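To make that idea concrete, here is a minimal sketch in Python of a Hebbian-style update rule, the “fire together, wire together” principle written as code. The function name, learning rate, and numbers are illustrative assumptions rather than any particular library’s API.

```python
import numpy as np

def hebbian_update(weights, pre, post, learning_rate=0.01):
    """Strengthen connections between units that are active at the same time.

    weights: matrix of connection strengths (receiving units x sending units)
    pre:     activity of the "sending" group of units
    post:    activity of the "receiving" group of units
    """
    # The more often pre and post fire together, the larger those weights grow.
    return weights + learning_rate * np.outer(post, pre)

# Two groups of units that are repeatedly active at the same time (made-up values)
weights = np.zeros((3, 4))
pre = np.array([1.0, 0.0, 1.0, 0.0])
post = np.array([0.0, 1.0, 1.0])

for _ in range(100):              # repeated co-activation...
    weights = hebbian_update(weights, pre, post)

print(weights)                    # ...leaves strong connections only where both groups fired
```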

 

In machine learning, the “neurons” are little sub-programs in an artificial neural network (ANN). Although those programs don’t simulate all the physical and chemical behavior of nerve cells (a massive task, even for a single cell), they’re designed to mimic certain abilities of living human neurons – most importantly, the ability to forge connections with other “cells” in their digital network.
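As a rough sketch (not how any particular framework implements it), a single artificial “neuron” can be written as a weighted sum of its inputs passed through an activation function. The weights represent its connections to other cells; the numbers below are invented purely for illustration.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs, then an activation."""
    activation = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-activation))   # sigmoid "firing" strength between 0 and 1

# Illustrative values only: three input signals and the connection strengths to them
inputs = np.array([0.5, 0.1, 0.9])
weights = np.array([0.8, -0.2, 0.4])
print(neuron(inputs, weights, bias=0.1))        # how strongly this "cell" fires
```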

Human researchers train neural networks to help machines learn.

Like human brains, ANNs learn through repeated exposure to a wide range of stimuli, known as data points. Those data points could be digital images, or sounds, or paragraphs of text – or more abstract pieces of data, such as billing statements, stock prices, sales figures, or tracking data for packages.

 
 

It’s the ANN’s job to detect patterns in that data, and make predictions about future patterns.

In supervised machine learning, human researchers make the ANN’s job easier by providing labeled examples of the patterns they want the ANN to detect. Just as your first taste of ice cream taught you to keep an eye out for similar treats – and your first experience with a mean dog taught you to watch out for similar threats – labeled data patterns tell the ANN, “This kind of thing is important – so be on the lookout for anything similar!”

Supervised machine learning is a top-down process: a human operator explicitly tells the ANN what kinds of patterns to look for.
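Here is roughly what that top-down process looks like in code, using scikit-learn as one common tool. The tiny dataset and the “treat”/“threat” labels are invented for illustration only.

```python
from sklearn.linear_model import LogisticRegression

# Labeled examples: each row is a data point, each label says which pattern it belongs to.
# (The numbers here are made up; in practice they might be pixel values, word counts, etc.)
examples = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
labels   = ["treat", "treat", "threat", "threat"]

model = LogisticRegression()
model.fit(examples, labels)            # the human supplies the "right answers"

print(model.predict([[0.15, 0.85]]))   # the model flags new data that looks similar
```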

But not all machine learning is supervised. Sometimes human researchers can learn much more by simply turning their ANN loose on a set of data and seeing what patterns it comes up with. Unsupervised machine learning is great at thinking “outside the box.” For example, unsupervised networks have designed water systems and logistics networks that seem totally counterintuitive to humans – but turn out to work much more efficiently than any system humans have designed.

Unsupervised machine learning is a bottom-up process: the ANN discovers patterns for itself, and the human operator decides which of those results are useful.
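A comparable bottom-up sketch, again with invented numbers: no labels are supplied, the algorithm groups the data points on its own, and it is up to the human to decide whether the groupings mean anything.

```python
from sklearn.cluster import KMeans

# Unlabeled data points -- no one has told the algorithm what to look for.
data = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9], [0.95, 1.05]]

model = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = model.fit_predict(data)

print(groups)   # e.g. [0 0 1 1 0] -- the network found two patterns on its own
```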

Of course, no matter how smart an ANN becomes, it still needs human intelligence to tell it which of its solutions are useful. This is where reinforcement learning comes in. In this type of learning, the ANN chooses a response to each data point, and receives a positive or negative signal from the human trainer. Since the ANN has been programmed to avoid negative signals and maximize rewards, it gradually learns to find the kinds of solutions its human trainers want.

Reinforcement learning combines top-down and bottom-up processes: the ANN may discover patterns on its own, but the human operator explicitly tells it which results are “right,” and instructs it to keep looking for similar ones.
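Here is a toy sketch of that reward-and-penalty loop, with a hypothetical trainer_feedback function standing in for the human trainer’s responses. It is a bare-bones, bandit-style illustration rather than any production algorithm.

```python
import random

actions = ["solution_A", "solution_B", "solution_C"]
value = {a: 0.0 for a in actions}          # the network's current estimate of each action

def trainer_feedback(action):
    """Hypothetical stand-in for the human trainer: rewards one kind of solution."""
    return 1.0 if action == "solution_B" else -1.0

for step in range(200):
    # Mostly pick the action that has earned the most reward so far,
    # but occasionally explore something else.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(value, key=value.get)

    reward = trainer_feedback(action)
    value[action] += 0.1 * (reward - value[action])   # nudge the estimate toward the feedback

print(value)   # the rewarded solution ends up with the highest value
```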

Machine learning is currently limited by several significant factors.

Through a combination of large datasets, extensive training, and consistent feedback from human trainers, ANNs can learn to design railways, plan marketing strategies, predict sales goals, and even detect artistic influences in paintings. What they can’t do, however, is understand what these skills mean in the real world.

 

Even with a huge dataset and months of reinforcement learning, no existing ANN can learn anything beyond “these responses are good” and “these responses are bad.” In fact, even the notions of “good” and “bad” are simply data points that have been fed into the network. We could just as easily switch those inputs and create an ANN that purposefully gets every question wrong. It would make no difference to the ANN, which feels neither pleasure nor pain.

Human learning, on the other hand, is driven not only by notions of “this feels good” and “this feels bad,” but by an ever-evolving set of desires about the things we want to feel. A dry martini tastes revolting at age 20 – but by age 40, it offers just the right hint of bitterness. Our tastes evolve as our bodies and senses develop, and as we experience an ever-greater variety of sights, sounds, smells, tastes, tactile sensations and emotions.

And unlike machine learning, human learning never takes a break.

Even when we’re asleep, our brains are hard at work, forming and reshaping patterns of neural connections influenced by every experience we’ve ever had. Human learning can’t be divided into discrete data points, because human experience can’t be divided into discrete intervals. Each time we learn something new, our understanding of it is shaped by our experiences up to that point.

But these advantages can also function as limitations on human learning.

We humans are so overloaded with information that our brains automatically forget most of what we experience. ANNs, on the other hand, don’t forget on their own – whatever they’ve learned stays encoded in the network until we deliberately retrain or delete it.

In a similar way, our instincts are shaped by all our past experiences – and those instincts can sometimes prevent us from seeing solutions machines can see. Neural networks aren’t limited by what is “reasonable,” but only by what they’ve been told is physically possible. This makes an ANN’s range of possible solutions far larger than that of a human.

 

Ultimately, the value of any piece of learning lies in its application.

So far, only humans are capable of knowing whether a given solution actually makes sense, can be realistically implemented, and will produce the desired result. Only humans can instinctively tell if a machine-generated result is just plain silly, or frightening, or both.

We humans can judge these things instantly, without apparent effort – while machines are still a long way from being able to make these kinds of judgments at all.

The difference lies not in how machines learn, but in their inability to connect what they learn across domains. Cross-domain machine learning remains one of the hottest research topics at the cutting edge of the field, and it may see major breakthroughs soon. But for now, even the brightest ANNs are a long way from understanding the problems they solve.

True understanding requires context. And for now, that’s something only we humans can bring to the table.


About the writer


Ben Thomas is a writer and brand strategist specializing in emerging technologies, Big Data, and the Internet of Things (IoT).

He loves to energize audiences about the frontiers of science, culture and technology — and the ways these all come together.