See that Nao robot waving its hand up there? It’s not starting a dance routine: it just had a light-bulb moment, so it’s trying to catch a human’s attention. Rensselaer Polytechnic Institute professor Selmer Bringsjord programmed the three robots to think that two of them were given a “dumbing pill.” In reality, that pill’s a button on top of their heads that can be pressed by the tester. When the tester asked the robots which pill they received, their processors crunched data in order to provide the right answer. Since two of them were unable to talk, only one answered out loud. “I don’t know,” the third robot replied, realizing the truth a short while later.
“Sorry, I know now,” the third Nao said, waving at the tester. “I was able to prove that I was not given a dumbing pill.” After all, it could speak! That means the machine was able to recognize itself and distinguish itself from the other two; it was self-aware at that particular point in time.
In case you're wondering whether there was a trick to it: nothing here was preprogrammed except the knowledge about the dumbing pill. The Nao robots were able to understand the question and also recognize their own voices. On hearing itself answer, a robot could deduce that it was logically impossible for it to have received the dumbing pill. It's rather crude at this point, but it forms the basis of artificial consciousness, a building block for a truly sentient robot of the future.
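The deduction itself is a simple modus tollens. A minimal sketch of that step, assuming nothing about the RAIR Lab's actual DCEC* reasoner (the function name and return values here are purely illustrative):

```python
# A minimal sketch of the robot's self-identification step -- not the RAIR
# Lab's actual DCEC* implementation, just the underlying modus tollens.

def deduce_pill_status(heard_own_voice: bool) -> str:
    # Premise 1: a robot given the dumbing pill cannot speak.
    # Premise 2: this robot just heard itself say "I don't know".
    # Modus tollens: if it spoke, it cannot have been given the pill.
    if heard_own_voice:
        return "not dumbed"   # "Sorry, I know now!"
    return "unknown"          # a silent robot learns nothing new

print(deduce_pill_status(True))   # -> not dumbed
print(deduce_pill_status(False))  # -> unknown
```

The interesting part is not the logic, which is trivial, but that the robot applies it to an observation about *itself*: it must match the voice it hears to its own speech act before the premise fires.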
The good news? Bringsjord himself believes that the human mind will always be superior to an artificial one, even with self-awareness. Whether that will stand in the face of a genocidal AI is something we hope we won’t have to find out for ourselves.
That test is a simpler version of a puzzle called The King's Wise Men, wherein the “wise men” have to deduce the color of their own hats (one of two colors) when they can only see the other men's hats. You can watch the experiment go down below. But if you're in Japan, you can also see Bringsjord present his study on artificial intelligence in person at the IEEE RO-MAN 2015 convention, held August 31st to September 4th.
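For the curious, the puzzle can be brute-forced with a little epistemic bookkeeping. The sketch below assumes one common variant (three men, blue or white hats, the king announces at least one hat is blue, and the men take turns saying “I don't know” until someone solves it); the announcement rules are an assumption for illustration, not something from the article:

```python
from itertools import product

# One common version of The King's Wise Men, solved by eliminating
# hat assignments ("worlds") that each round of "I don't know" rules out.

def rounds_until_solved(actual):
    # All assignments consistent with the king's announcement.
    worlds = [w for w in product("BW", repeat=3) if "B" in w]

    def knows_own_hat(i, world, worlds):
        # Agent i considers every world that matches what he sees.
        seen = [w for w in worlds
                if all(w[j] == world[j] for j in range(3) if j != i)]
        return len({w[i] for w in seen}) == 1

    for round_no in range(1, 4):
        if any(knows_own_hat(i, actual, worlds) for i in range(3)):
            return round_no
        # Everyone says "I don't know", which rules out every world
        # in which someone *would* have known.
        worlds = [w for w in worlds
                  if not any(knows_own_hat(i, w, worlds) for i in range(3))]
    return None

print(rounds_until_solved(("B", "W", "W")))  # -> 1 (he sees two white hats)
print(rounds_until_solved(("B", "B", "B")))  # -> 3 (two rounds of "I don't know")
```

The dumbing-pill test is this puzzle stripped down to one observation: instead of reasoning about others' hats, each robot reasons about its own voice.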
In this paper (to be presented at RO-MAN 2015), we show that a test of self-consciousness can be passed by NAO Bots reasoning over representations in the Deontic Cognitive Event Calculus (DCEC*).
Rensselaer AI and Reasoning (RAIR) Lab
Credits: Selmer Bringsjord, John Licato, Atriya Sen