The Robotic Gaze

You call an agent intelligent if it's aware of its surroundings. But how can you tell?

2020-12-13

Do you recall going to a Disney or Universal Studios park and seeing one of their animated characters? Not a movie projection, but a human or animal figure made of plastic and metal and wearing clothes or fur?

These animatronic creatures are familiar characters from the studios' films. They move about on the set, entertaining you like actors in a play, charming, thrilling, or scaring you.

But once you've seen them in action, they become less impressive, because they don't seem aware of their surroundings. Mere puppets, they simply repeat the same programmed actions over and over like cuckoo clocks, whether anyone is watching them or not.

The Robotic Gaze

[Image: Eagle gaze]

Researchers have been trying for decades to make animatronic creatures more convincing by imbuing them with what the field calls "non-verbal social human-robot interaction."

One way to make them more believable is to give them a gaze. When a creature looks directly at us, we get the impression that it is alive and intelligent, not just a mechanical contraption. The gaze is one of the most basic, most noticeable actions: so simple, yet so powerful.

A team of researchers, including several from Disney Research, has developed a robot that does just that.

Their robot can:

- sit quietly and read a book,
- glance up at a passerby,
- turn to look at a person who comes near, and
- hold its gaze on that person's face.

Each of these behaviors is choreographed so that the robot moves like a real human being. When it turns to look somewhere, its torso moves slowly, its neck moves faster, and its eyes move faster still, just like a human's.
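As a rough sketch of how that layering might work in code, consider the loop below. The joint names, speed limits, and update rate are illustrative assumptions, not details from the team's system: each body segment chases the same gaze target, but with a different speed cap, so the eyes arrive first and the torso last.

```python
import math

# Maximum angular speed (radians per second) for each body segment.
# These numbers are assumptions for the sketch: eyes fastest,
# neck slower, torso slowest, mimicking human motion.
MAX_SPEED = {"torso": 0.5, "neck": 2.0, "eyes": 6.0}

def step_toward(current, target, max_speed, dt):
    """Move one joint angle toward the target, capped at max_speed."""
    error = target - current
    max_step = max_speed * dt
    return current + max(-max_step, min(max_step, error))

def update_gaze(angles, target_angle, dt):
    """Advance torso, neck, and eye angles toward a shared gaze target.

    Because each segment has a different speed cap, the eyes arrive
    first, then the neck, then the torso -- the layering described above.
    """
    return {
        joint: step_toward(angle, target_angle, MAX_SPEED[joint], dt)
        for joint, angle in angles.items()
    }

# Example: the robot snaps its gaze 90 degrees to the right.
angles = {"torso": 0.0, "neck": 0.0, "eyes": 0.0}
target = math.pi / 2
for _ in range(30):                     # simulate ~0.5 s at 60 Hz
    angles = update_gaze(angles, target, dt=1 / 60)
print({j: round(a, 2) for j, a in angles.items()})
# After half a second the eyes are on target, the neck is partway
# there, and the torso has barely started turning.
```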

In particular, its built-in behaviors mimic the human gaze.

When one human looks at another, they rapidly move their eyes between different points on the other person's face: each eye, and the bridge of the nose. These quick eye motions are called saccades.

The research team's robot performs saccades, making its gaze look realistic.
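The paper's exact gaze model isn't reproduced here, but a toy version of saccade generation might look like this; the landmark coordinates and fixation timings are assumptions made up for the sketch.

```python
import random

# Hypothetical 2D landmark positions on a detected face (normalized
# image coordinates). A real system would get these from a face tracker.
LANDMARKS = {
    "left_eye": (0.35, 0.40),
    "right_eye": (0.65, 0.40),
    "nose_bridge": (0.50, 0.45),
}

def saccade_sequence(n_fixations, rng=random):
    """Yield (landmark, point, duration) triples for a gaze scan path.

    Saccades jump between the eyes and the bridge of the nose, never
    landing on the same point twice in a row. Fixation durations of
    roughly 0.2-0.5 s are an assumption, not a measured value.
    """
    previous = None
    for _ in range(n_fixations):
        choices = [name for name in LANDMARKS if name != previous]
        name = rng.choice(choices)
        yield name, LANDMARKS[name], rng.uniform(0.2, 0.5)
        previous = name

for name, point, duration in saccade_sequence(5):
    print(f"fixate {name} at {point} for {duration:.2f}s")
```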

Reactive systems

Traditionally, to build animatronic characters, artists invent stories and actions and write a script with "blocking," or step-by-step directions for where and how an actor should move. Engineers then program the robot to make those motions according to the script.

The research team takes a different approach. They don't exactly "program" this robot according to a "script." Instead, they let it react to its environment.

The robot is built with behaviors like the ones listed above. It chooses a behavior to execute depending on its environment.

An artist can customize the built-in behaviors to reflect a character's personality or mood: old, young, cranky, sleepy, happy, and so on. For example, for a sleepy character, the "reading a book" behavior might be customized to ignore approaching strangers until they come very close. For an old character, the "look up" behavior might be executed more slowly than for a young one.
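Putting those two ideas together, a minimal sketch of environment-driven behavior selection with artist-tunable personality parameters could look like this. The behavior names, distance thresholds, and parameters are all invented for illustration; the real system is surely richer.

```python
from dataclasses import dataclass

@dataclass
class Personality:
    """Knobs an artist could adjust per character.

    Both fields are illustrative assumptions: how close a stranger
    must get before the robot reacts, and how fast it moves when it does.
    """
    notice_distance: float   # meters; smaller = harder to distract
    speed_scale: float       # 1.0 = nominal motion speed

SLEEPY_CHARACTER = Personality(notice_distance=0.5, speed_scale=0.7)
OLD_CHARACTER = Personality(notice_distance=1.5, speed_scale=0.5)
YOUNG_CHARACTER = Personality(notice_distance=2.0, speed_scale=1.2)

def choose_behavior(face_distance, personality):
    """Pick a behavior from the sensed environment.

    `face_distance` is the distance to the nearest detected face in
    meters, or None if nobody is visible. The thresholds are made up
    for the sketch.
    """
    if face_distance is None or face_distance > personality.notice_distance:
        return "read_book"            # default idle behavior
    if face_distance > 0.8 * personality.notice_distance:
        return "look_up"              # glance at the newcomer
    return "engage"                   # turn and hold the person's gaze

# A sleepy character ignores a visitor 1 m away; a young one engages.
print(choose_behavior(1.0, SLEEPY_CHARACTER))  # -> read_book
print(choose_behavior(1.0, YOUNG_CHARACTER))   # -> engage
```

The appeal of this split is that the artist only tweaks the `Personality` numbers to shape a character; the engineers never have to rewrite the selection logic itself.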

The robot starts off reading. If a stranger approaches, its built-in camera locates the person's face, and the robot switches to one of its more attentive behaviors. To decide whether the person is a stranger, a computer vision program takes a few measurements of the face and stores them for later comparison.
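The article doesn't say which measurements the vision program takes, but a common pattern is to reduce each face to a fixed-length vector and compare new faces against stored ones by distance. Here is a hedged sketch of that pattern; the vectors and threshold are placeholders, not the system's actual representation.

```python
import numpy as np

SAME_PERSON_THRESHOLD = 0.6   # illustrative cutoff, not a tuned value
known_faces = []              # measurement vectors seen so far

def is_stranger(measurement):
    """Return True if no stored face is close to this measurement."""
    return all(
        np.linalg.norm(measurement - known) >= SAME_PERSON_THRESHOLD
        for known in known_faces
    )

def remember(measurement):
    """Store a new face's measurement vector for later comparison."""
    known_faces.append(np.asarray(measurement, dtype=float))

# Demo with made-up 4-number "measurements" (a real system would
# extract these from the camera image).
alice = np.array([0.2, 0.8, 0.5, 0.1])
print(is_stranger(alice))          # True: never seen before
remember(alice)
print(is_stranger(alice + 0.01))   # False: close enough to match
```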

In this way, the animator and the engineers can produce a more convincing animatronic character.

So, once this pandemic is over, you might find yourself asking someone for directions in a park, only to discover that it's not a member of the staff.