When AI Was Old-Fashioned

Neural networks are hot, but AI programmers have other tools in their arsenal.

2020-11-29

When Siri or Alexa recognizes the words you speak, and when Facebook identifies your friends in pictures, those programs are matching patterns.

News stories use terms like "machine learning" and "deep learning" to describe pattern-matching programs like these.

These programs use statistical techniques, in particular neural networks, to learn patterns in data.

Neural networks

When learning to write neural-network-based programs, you start with some basic examples. A typical one is to recognize that an image like this contains a zebra:

[Image: a zebra]

Using many example images, some containing zebras and others not, you can slowly modify the parameters of a neural network until it "learns" to provide the right answers. Once it has been tuned in this way, the neural network is likely to then correctly classify new images that it has never seen before.
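To make "slowly modify the parameters" concrete, here is a minimal sketch in Python. It trains a single artificial neuron on invented numeric data rather than real images, so the data and the network are toy stand-ins, but the loop has the same shape as real training: guess, measure the error, nudge the parameters, repeat.

```python
# A toy stand-in for image classification: one artificial neuron
# trained by gradient descent on made-up two-number inputs.
import math
import random

def predict(weights, bias, x):
    # Weighted sum of the inputs, squashed to a 0..1 confidence score.
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))

# Hypothetical training data: (features, label), where label 1 = "zebra".
examples = [([0.9, 0.8], 1), ([0.8, 0.9], 1),
            ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
rate = 0.5

for _ in range(1000):            # many passes over the examples
    for x, label in examples:
        p = predict(weights, bias, x)
        error = p - label        # how wrong was the guess?
        # Nudge each parameter slightly in the direction that shrinks the error.
        weights = [w - rate * error * xi for w, xi in zip(weights, x)]
        bias -= rate * error

print(round(predict(weights, bias, [0.85, 0.75]), 2))  # close to 1: "zebra"
print(round(predict(weights, bias, [0.15, 0.25]), 2))  # close to 0: "not zebra"
```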

To learn patterns, neural networks need a large number of input examples and a lot of computing power. Although AI academics had studied neural networks for decades, they could not make them practical enough for real problems.

But in the 2000s, researchers made a few key software breakthroughs just as computers became fast and cheap enough. Neural networks became successful at realistic pattern-matching tasks and now make headlines in many domains.

For example, a program called GPT-3, introduced this summer, generates convincing-looking op-ed pieces, stories, newspaper articles, essays, and even some kinds of computer programs.

To make GPT-3, researchers gathered huge amounts of text from the world's web pages and built a "language model" using patterns in the text. This model lets GPT-3 predict subsequent words given a few prompt words.
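As a rough sketch of what a language model does, the toy Python below "models" language by counting which word follows which in some text, then predicting the likeliest next word. GPT-3 uses an enormous neural network rather than word counts, so this illustrates only the task, not GPT-3's method.

```python
# A toy language model: count which word follows which, then
# predict the most frequent follower of a given word.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept on the mat near the cat"
words = text.split()

follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # The word that most often followed this one in the text.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it followed "the" most often
```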

Good old-fashioned AI

Statistical methods like neural nets are not the only way to write AI programs. In fact, prior to their recent popularity, a completely different approach was common.

For example, the chess-playing AI programs described in the previous article, In an Ideal World, represent the current board position and possible future moves with symbols. By manipulating those symbols, the programs build alternative future board positions and evaluate them to choose the best move.
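Here is a minimal sketch of that idea in Python. Chess is too large for a short example, so this toy uses tic-tac-toe, but the shape is the same: the board is a plain symbolic data structure, each move produces a new board, and a search compares the futures each move leads to.

```python
# Symbolic game playing in miniature: boards are strings of nine
# cells, moves build new boards, and a search evaluates the futures.

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board, player):
    # Each legal move yields a new symbolic board position.
    for i, cell in enumerate(board):
        if cell == " ":
            yield i, board[:i] + player + board[i+1:]

def score(board, player, opponent):
    # Evaluate a position by searching all of its possible futures.
    w = winner(board)
    if w == player:
        return 1
    if w == opponent:
        return -1
    if " " not in board:
        return 0   # draw
    # The opponent moves next and will pick their best reply.
    return -max(score(b, opponent, player) for _, b in moves(board, opponent))

def best_move(board, player, opponent):
    return max(moves(board, player),
               key=lambda mb: score(mb[1], player, opponent))[0]

# "X" to move on this board; the search finds the winning square (index 2).
print(best_move("XX OO    ", "X", "O"))
```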

This so-called "symbolic" approach is not limited to games. It has been in wide use for decades in business, engineering, and science. Because they are not backed by venture capital, these programs are not promoted with bold claims and no longer make the news. They are considered so unremarkable that the press does not even count them as "AI."

Among researchers, the symbolic approach is now called "Good Old-Fashioned AI" or GOFAI. It's like a detective solving cases with methodical police work, as opposed to gut intuition and hunches.

Just like in solving mysteries, both symbolic and statistical methods have their strengths in real applications.

The promise of neural networks is that a programmer doesn't need to understand the rules of the domain; the neural network will automatically infer them from the examples. In contrast, to write a program using the symbolic approach, the programmer needs to somehow map or encode the rules and facts into symbols.
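For a taste of what encoding facts and rules into symbols looks like, here is a minimal sketch in Python: a tiny forward-chaining engine with one rule. The facts and the rule are invented for illustration; real symbolic systems have far more of both, but they manipulate them in the same spirit.

```python
# Facts are symbolic tuples; a rule derives new facts from old ones.
facts = {("parent", "Ann", "Bob"), ("parent", "Bob", "Cal")}

def apply_rules(facts):
    # Rule: parent(x, y) and parent(y, z) implies grandparent(x, z).
    new = set()
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 == "parent" and y1 == y2:
                new.add(("grandparent", x, z))
    return new

facts |= apply_rules(facts)
print(("grandparent", "Ann", "Cal") in facts)  # True
```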

But a symbolic program can write its partial results to a log so you can follow its reasoning, which is important when a program produces an unexpected result. A neural net, on the other hand, tends to provide only an answer, with no explanation.
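To illustrate, a rule engine like the one sketched above can log a justification for every fact it derives; the log format here is invented for illustration.

```python
# The same tiny engine, now recording why each fact was derived.
facts = {("parent", "Ann", "Bob"), ("parent", "Bob", "Cal")}

derived = set()
for (p1, x, y1) in set(facts):
    for (p2, y2, z) in set(facts):
        if p1 == p2 == "parent" and y1 == y2:
            derived.add(("grandparent", x, z))
            print(f"grandparent({x}, {z}) because "
                  f"parent({x}, {y1}) and parent({y2}, {z})")
facts |= derived
# Output: grandparent(Ann, Cal) because parent(Ann, Bob) and parent(Bob, Cal)
```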

As a practical matter, useful programs often employ different methods as needed, so it's likely that both approaches will be with us for a long time.