
Robotic Self-Awareness Is Here. Is Skynet Next?
"Skynet for Dummies" via Flickr user Kenny Louie

Dr. Selmer Bringsjord, a researcher at Rensselaer Polytechnic Institute, has programmed Nao robots, simple programmable robots available to the general public, to pass a very simple “self-awareness” test. This is a major landmark in the ongoing quest to discover whether consciousness can reside in mechanical bodies: a critical question that philosophers have pondered for centuries, and one closely tied to the question of whether humans will someday be able to upload our own consciousness to computers.


The test is a variation of the induction puzzle known as the “wise-men puzzle.” In Bringsjord’s version, two robots were given “pills.” They were told that one robot had received a “silencing pill” that would take away its voice, while the other had received a placebo pill that would have no effect. Then they were asked, “Which pill did you receive?”

A normal human response might be “I don’t know.” But of course, anyone who can say “I don’t know” clearly has not received the silencing pill. Someone who can make that mental connection therefore stops, and says: “I did not receive the silencing pill.”

Although the task is simple, solving the puzzle requires at least a basic mental representation of “self.” It is not enough for the robot to have some kind of logical representation of the fact “the silencing pill will silence whoever takes it.” It must additionally reason: “If I have taken the pill, I will not be able to talk” and therefore “If I have just heard my own voice, then I cannot be the one who took the silencing pill.”
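
To make that chain of reasoning concrete, here is a minimal sketch in Python. It is not Bringsjord’s actual system, which is built on a formal logic of cognition rather than ordinary program code; it only illustrates the self-referential rule that hearing your own voice rules out having taken the silencing pill.

```python
# Minimal sketch of the "silencing pill" inference (illustration only; not
# the formal-logic system Bringsjord's Nao robots actually use).

class Robot:
    def __init__(self, name, silenced=False):
        self.name = name
        self.silenced = silenced  # True if this robot got the silencing pill

    def try_to_speak(self, text):
        """Return the utterance if the robot can still speak, otherwise None."""
        return None if self.silenced else text

    def answer_which_pill(self):
        # Step 1: the robot does not yet know which pill it received.
        utterance = self.try_to_speak("I don't know which pill I received.")
        # Step 2: the self-referential inference. If the robot just produced
        # its own voice, it cannot be the one that took the silencing pill.
        if utterance is not None:
            return "I was able to speak, so I did not receive the silencing pill."
        return None  # a silenced robot can only stay quiet

robot_a = Robot("A")                  # got the placebo
robot_b = Robot("B", silenced=True)   # got the silencing pill

print(robot_a.answer_which_pill())  # "... I did not receive the silencing pill."
print(robot_b.answer_which_pill())  # None: it cannot answer at all
```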

It may seem like a small step, but it is one of the ingredients needed for a computer to eventually be “self-aware” in some sense: it has a representation of “I” that it can use in logic and reasoning.

But now that we’ve taken that first step, what comes next?

What Is Self-Awareness?

“Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.”

This classic line from the 1991 science fiction film Terminator 2: Judgment Day is an iconic representation of the way many people think about artificial intelligence and self-awareness. Since we don’t really know how self-awareness works or where it comes from, most people figure some day it will just happen: in a blink of an eye, like a light switching on. One moment the machine will be executing algorithms, and the next moment it will be a conscious introspective person.

But those who study self-awareness know that’s not how it works. Self-awareness comes in degrees. Dogs and cats can play games, and they may even feel shame, both of which require some basic notion of the difference between “I” and “you.” But dogs and cats don’t recognize themselves in mirrors, which is considered a key characteristic of self-awareness. Rhesus monkeys do recognize themselves in the mirror, and can even figure out puzzles that require them to think about what other people or monkeys are thinking about (we will look at this in more detail below). This is thought to be evidence of a more sophisticated level of self-awareness. Humans can take it one step further, and reason about what other people think other people are thinking: for example, when you figure out what your best friend believes you would like for your birthday.

We also see the same gradations of self-awareness as infants develop into childhood. By the time they are a year old, infants respond differently to images of themselves compared to others; but it isn’t until two years old that they typically pass the “mirror mark test,” in which they recognize that seeing a mark on their image in the mirror means they have a mark on themselves. It isn’t until age four or five that they can differentiate between their own minds and the minds of others, and understand that another person might not know the same things that they do. The ability to understand that other people have beliefs (that might even be wrong) is called first-order theory of mind.

"Mirror Self-Recognition" via Flickr user Steve Jurvetson

By the age of six or seven, children are able to correctly use second-order theory of mind in their inferences; in other words, they demonstrate that they have beliefs about what other people think they know. This requires extremely complex mental representations of the relationship between self and other. (And to be totally frank, a lot of adults have trouble with it.)

Dr. Bringsjord’s approach to robotics shows us that the big and ambiguous problem of “creating self-awareness” can be broken down into a series of functional chunks and tests. We can take the tests that psychologists use to test animals and infants for self-awareness, and use them as a roadmap for creating robots. Each time we create a robot that passes a new landmark test for animal and human self-awareness, we get closer to understanding the computational roots of consciousness itself. So let’s consider some other tests that psychologists use to investigate the phenomenon of self-awareness.

Three Tests That Robots Will Need To Pass

The Mirror Mark Test. This test was developed by Gordon Gallup Jr. in 1970. The experimenter puts a mark of some kind on the face or head of the subject (traditionally an animal or infant), and then places the subject in front of a mirror where it can see its own reflection. Will the subject reach up to inspect or remove the mark from its own face when it sees the image in the mirror? If it does, this is concrete evidence that it has made the mental connection between the image in the mirror and itself.

For a robot to pass this test, it must have a program sophisticated enough to represent a number of things. It must have a mental representation of how it looks that is at least detailed enough for it to register the difference between what it looks like with a mark and without a mark. It must also have some kind of understanding of what mirrors do, and the fact that the image that it is seeing in front of itself should match its internal representation of itself. Finally, it must have a physical kinesthetic representation of its own “body” in order to touch the position on its own face corresponding to the mark seen in the mirror.
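
A toy sketch of that comparison step might look like the following. The self-model, the “mirror image,” and the reach function here are hypothetical placeholders invented for illustration, not pieces of any existing robot’s software.

```python
# Toy sketch of the core Mirror Mark Test inference. Everything here is a
# hypothetical placeholder; no real robot vision or motor system is this simple.

# The robot's internal model of its own face: the appearance it expects
# to see in each named facial region.
self_model = {"forehead": "white", "left_cheek": "white", "right_cheek": "white"}

# What the robot's camera reports when it looks in the mirror.
mirror_image = {"forehead": "red", "left_cheek": "white", "right_cheek": "white"}

def regions_that_differ(expected, observed):
    """Regions where the mirror image does not match the robot's self-model."""
    return [region for region in expected if observed.get(region) != expected[region]]

def reach_toward(region):
    # Stand-in for the kinesthetic step: mapping a spot in the *image* back
    # onto the robot's own body and moving a hand to that spot.
    print(f"Touching my own {region} to inspect the unexpected mark.")

# The crucial built-in assumption: "the image in the mirror is me," so any
# mismatch means there is a mark on my own body.
for region in regions_that_differ(self_model, mirror_image):
    reach_toward(region)
```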

So far, robots have been developed that can at least recognize themselves in mirrors, which is a good first step. A robot passing the Mirror Mark Test cannot be that far off.

Sally-Anne (First Order Theory Of Mind) Test. This test was developed by Simon Baron-Cohen in 1985 and has been used in a large number of studies on both human infants and animals. In this test, Sally and Anne are in a room with the subject. Sally hides a marble under a box, and then leaves the room. While Sally is out, Anne moves the marble to hide it under a different box. Sally returns, and the subject is asked: Where will Sally look for the marble?

via Flickr user Jinx!

This puzzle, also called the “false belief” test, explores a slightly different aspect of self-awareness from the Mirror Mark Test. In this case, the subject must grasp the difference between its own mental state and the mental state of others. Although the subject knows the marble has been moved, it must have some basic idea that “Sally is not me,” and must be able to reason about Sally as a separate individual.

For a robot to pass this test, it must have a program complex enough to represent the separate concepts of “my own beliefs” as distinct from “Sally’s beliefs,” and must be able to develop ideas about what those beliefs might be based on what Sally has seen and done.
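
A minimal sketch of that requirement, again invented for illustration rather than drawn from any real system, is to give the robot a separate belief store for each agent and update each store only with the events that agent actually witnessed:

```python
# Toy sketch of first-order theory of mind for the Sally-Anne test. The key
# idea is one belief store per agent, updated only by events that agent saw.

world = {"marble": "box_1"}                 # ground truth about the room
beliefs = {"self": {}, "sally": {}}         # each agent's picture of the world

def observe(agent, fact, value):
    """An agent that witnesses an event updates its own belief store."""
    beliefs[agent][fact] = value

# Sally hides the marble under box 1; both the robot and Sally see it happen.
world["marble"] = "box_1"
observe("self", "marble", "box_1")
observe("sally", "marble", "box_1")

# Sally leaves the room. Anne moves the marble; only the robot sees this,
# so Sally's belief store is deliberately left untouched.
world["marble"] = "box_2"
observe("self", "marble", "box_2")

# "Where will Sally look?" -- answer from Sally's beliefs, not from the
# robot's own, more up-to-date knowledge of the world.
print("Sally will look under:", beliefs["sally"]["marble"])   # box_1
print("The marble is actually under:", world["marble"])       # box_2
```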

Social Reasoning (Second Order Theory Of Mind) Test. Psychologists Perner and Wimmer published a study entitled “‘John thinks that Mary thinks that...’ attribution of second-order beliefs by 5- to 10-year-old children” in 1985. Their idea took the “false belief” concept even further. Although there are many variations of the test, the most common is a game in which each player needs to think about what the other player knows about their own strategy to win.

A typical game is one in which players take turns moving a game piece through a series of positions. At each position, the player whose turn it is has the option of moving the piece forward or saying “stop the game.” Each position also has a pair of payoffs--visible to both players--showing what reward each player will get if the game stops at that position. Each player’s goal is to end the game with the highest reward for themselves.

This game forces the players to think about what their opponent is likely to do: even if one player wants play to continue from a particular position, the other player could stop the game at that point. The most successful players are those who take into account not only their opponent’s goals, but also the fact that the opponent is trying to predict whether they themselves will choose to stop or move forward at each position.

A robot that can make decisions based on that kind of strategic thinking would need a higher level of “self-awareness” than any of the other test cases we’ve looked at so far. In this case, the robot must not only have a representation of “the beliefs of my opponent” in the game, but also maintain an additional layer of beliefs about itself, specifically: “What does my opponent believe about me?”
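
As a rough illustration of that recursive prediction, here is a sketch of a “stop or continue” game solved by reasoning backward from the end, where each move depends on what the mover predicts the other player will do later. The payoffs and turn order are made up for this example; it is not the exact game used by Perner and Wimmer.

```python
# Toy "stop or continue" game with made-up payoffs. payoffs[i] gives the
# rewards (player 0, player 1) if the game stops at position i. Player 0
# moves at even positions, player 1 at odd positions.
payoffs = [(2, 1), (1, 3), (4, 2), (2, 5), (6, 3)]

def predicted_outcome(position=0):
    """Payoffs reached if both players predict each other's future choices."""
    player = position % 2                 # whose turn it is at this position
    stop_now = payoffs[position]
    if position == len(payoffs) - 1:
        return stop_now                   # last position: the game must stop
    # "If I let the game continue, what do I predict will happen next --
    # given that my opponent is making the same prediction about me?"
    if_continue = predicted_outcome(position + 1)
    # Stop only if stopping pays the current player at least as much as
    # what they expect from continuing.
    return stop_now if stop_now[player] >= if_continue[player] else if_continue

print("Predicted outcome from the start:", predicted_outcome())   # (2, 1)
```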

From a scientific perspective, the beauty of this task is that it is very concrete: we can build a robot, administer the test, and it will either pass or fail. Each of these three tests will be an important landmark to pass as part of the step-by-step process of developing artificial consciousness.

The Future of Robot Consciousness

The main take-home lesson here is that self-awareness is not a singular thing that is either “on” or “off.” For better or for worse, the quest for robotic consciousness will not be fulfilled in a Skynet scenario: suddenly, and at a particular moment in time.

"Skynet for Dummies" via Flicker user Kenny Louie

Rather, our artificially intelligent machines will gradually be programmed with increasing abilities to reason about themselves and others. Like human children, they will gain capabilities slowly over time, becoming more self-aware as they go.

Bringsjord’s robots passing the “wise-men” test is the first toddling step in that direction. How long will it be until the next one?