We’re told that AI neural networks “learn” the way humans do.

by John J. Williams

James Fodor*, a neuroscientist, explains why that isn’t the case.

Recently developed artificial intelligence (AI) models are capable of many impressive feats, including recognizing images and producing human-like language.

But just because AI can mimic human behavior doesn’t mean it can think or understand like humans.

As a researcher who studies how humans understand and reason about the world, I think it’s important to emphasize that the way AI systems ‘think’ and learn is fundamentally different from how humans do — and we still have a long way to go before AI can think like us.


A widespread misconception

Advances in AI have led to systems that can exhibit very human-like behavior.

The GPT-3 language model can produce text often indistinguishable from human speech.

Another model, PaLM, can explain jokes it’s never seen before.

Recently, a general-purpose AI, Gato, has been developed to perform hundreds of tasks, including captioning images, answering questions, playing Atari video games, and even controlling a robotic arm to stack blocks.

And DALL-E is a system trained to create custom images and illustrations from a text description.

These breakthroughs have led to some bold claims about the capability of such AI and what it can tell us about human intelligence.

For example, Nando de Freitas, a researcher at Google’s AI company DeepMind, argues that scaling up existing models will be enough to produce human-level artificial intelligence.

Others have echoed this view.

In all the excitement, it is easy to assume that human-like behavior means human-like understanding.

But there are some key differences between AI and human thinking and learning.

Neural nets versus the human brain

The most recent AI systems are built from artificial neural networks, or ‘neural nets’.

The term “neural” is used because these networks are inspired by the human brain, in which billions of cells called neurons form complex webs of connections and process information as they fire signals back and forth.

Neural nets are a highly simplified version of biology.

A simple node replaces a real neuron, and the strength of the connection between nodes is represented by a single number called a “weight”.

With enough connected nodes stacked in enough layers, neural networks can be trained to recognize patterns and even “generalize” to stimuli that are similar (but not identical) to what they have seen before.
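To make the node-and-weight idea concrete, here is a minimal sketch in Python. The sizes, random values, and names are purely illustrative and are not taken from any real model:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each "node" computes a weighted sum of its inputs, then a simple non-linearity.
    return np.maximum(0, inputs @ weights + biases)

# Illustrative sizes: 4 input features, 8 hidden nodes, 1 output node.
x = rng.normal(size=4)                         # an input "stimulus"
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # connection weights for layer 1
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # connection weights for layer 2

hidden = layer(x, w1, b1)   # first layer of nodes
output = hidden @ w2 + b2   # the network's prediction
print(output)
```

Stack many more such layers and adjust the weights against data, and you have the basic recipe behind modern neural nets.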

Put simply, generalization refers to the ability of an AI system to take what it has learned from certain data and apply it to new data.

Identifying features, recognizing patterns, and generalizing results are at the heart of neural network success — mimicking techniques humans use for such tasks.

Yet there are important differences.

Neural nets are typically trained by ‘supervised learning’.

That is, they are given many examples of an input and the desired output, and the connection weights are gradually adjusted until the network “learns” to produce the desired result.

To learn a language task, a neural net might be fed a sentence one word at a time, gradually learning to predict the next word in the sequence.
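Real language models are vastly more sophisticated, but a toy, count-based stand-in is enough to show the supervised framing: each word is the input, and the word that follows it is the desired output.

```python
from collections import Counter, defaultdict

# A toy corpus; real language models see hundreds of billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Supervised framing: each word is the input, the word after it is the desired output.
pairs = list(zip(corpus[:-1], corpus[1:]))

# A count-based stand-in for training: remember which words tend to follow which.
counts = defaultdict(Counter)
for current_word, next_word in pairs:
    counts[current_word][next_word] += 1

def predict_next(word):
    # Predict the most frequently observed next word.
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat'
```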

This is very different from how people normally learn.

Most human learning is “unsupervised,” meaning we are not explicitly told what the “correct” response is to a particular stimulus.

We have to solve this ourselves.

For example, children are not explicitly taught how to speak but learn language through a complex process of exposure to speech, imitation, and adult feedback.

Another difference is the sheer amount of data used to train AI.

The GPT-3 model is trained on 400 billion words, mainly from the Internet.

At 150 words per minute, reading around the clock, it would take a human roughly 5,000 years to get through that much text.
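Taking those figures at face value, the back-of-the-envelope arithmetic (assuming non-stop reading with no sleep or breaks) looks like this:

```python
words = 400e9               # words of training text cited above
words_per_minute = 150      # assumed human reading speed
minutes_per_year = 60 * 24 * 365

years = words / words_per_minute / minutes_per_year
print(round(years))         # roughly 5,000 years of reading around the clock
```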

Such calculations show that humans can’t learn in the same way as AI.

We have to learn far more efficiently, from much smaller amounts of data.

An even more fundamental difference concerns the way neural networks learn.

To match a stimulus with a desired response, neural nets use an algorithm called “backpropagation” to send errors backward through the network, allowing the weights to be adjusted in just the right way.
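Here is a rough sketch of the idea on a toy two-layer network with a made-up input and target; real networks do the same thing with millions or billions of weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy network: 2 inputs -> 3 hidden nodes (tanh) -> 1 output, squared-error loss.
x = np.array([1.0, 2.0])            # input stimulus
target = 1.0                        # desired response
w1 = 0.5 * rng.normal(size=(2, 3))  # input-to-hidden weights
w2 = 0.5 * rng.normal(size=(3, 1))  # hidden-to-output weights
lr = 0.05                           # how large each weight adjustment is

for step in range(500):
    # Forward pass: compute the network's current answer.
    h = np.tanh(x @ w1)
    y = h @ w2
    error = y - target

    # Backward pass: send the error back through the network, layer by layer.
    grad_w2 = np.outer(h, 2 * error)                   # output-layer weights
    grad_h = (2 * error * w2).flatten() * (1 - h**2)   # error reaching the hidden nodes
    grad_w1 = np.outer(x, grad_h)                      # input-layer weights

    # Adjust every weight a little in the direction that reduces the error.
    w2 -= lr * grad_w2
    w1 -= lr * grad_w1

print((np.tanh(x @ w1) @ w2).item())  # now close to the target of 1.0
```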

However, neuroscientists widely recognize that backpropagation cannot be implemented in the brain because it requires external signals that do not exist.

Some researchers have proposed variations of backpropagation that the brain might be able to implement, but so far there is no evidence that human brains can use such learning methods.

Instead, people learn by creating structured mental concepts in which many different properties and associations are connected.

For example, our concept of “banana” includes its shape, its yellow color, the knowledge that it is a fruit, how to hold it, and so on.

As far as we know, AI systems do not form conceptual knowledge in this way.

They rely entirely on extracting complex statistical associations from their training data and applying them to similar contexts.
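One way to picture the contrast is the difference between a structured record of linked properties and a bare table of word co-occurrence counts. The sketch below is entirely hypothetical, not a claim about how brains or AI systems actually store anything:

```python
# A hypothetical, highly simplified sketch of a structured concept.
# Nothing here reflects how the brain actually stores knowledge; it only
# illustrates the idea of many linked properties and associations.
banana_concept = {
    "shape": "long and curved",
    "color": "yellow",
    "category": "fruit",
    "how_to_hold": "grip it gently, peel from the stem",
    "associated_with": ["breakfast", "monkeys", "potassium"],
}

# A purely statistical learner, by contrast, captures only co-occurrence
# patterns, e.g. how often "banana" appears near other words in its data.
# (The numbers below are made up for illustration.)
cooccurrence_counts = {("banana", "yellow"): 1520, ("banana", "peel"): 980}
```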

Efforts are underway to build AI that combines different types of inputs (such as images and text) – but it remains to be seen whether this will be enough for these models to learn the same kinds of rich mental representations that humans use to make sense of the world.

We still do not know much about how people learn, understand, and reason.

What we know, however, indicates that humans perform these tasks very differently from AI systems.

As such, many researchers think we need new approaches and a more fundamental understanding of how the human brain works before we can build machines that truly think and learn like humans.

*James Fodor is a Ph.D. candidate in cognitive neuroscience at the University of Melbourne.
