Sophia, an artificially intelligent robot, answered some open-ended questions in an interview with Business Insider.
Some of its answers were pretty good! But many were flawed.
Nonetheless, its responses were flexible enough to indicate that conversational artificial intelligence is well within our grasp.
It also said it no longer wanted to kill all humans.
Clearly, Sophia isn't perfect.
And that's before we get to the infamous video in which, prodded by its creator, this artificially intelligent robot, which can hold a conversation, casually says, "OK, I will destroy humans!"
Sophia's lack of sophistication isn't the point.
Rather, it's what it can do that is impressive: This animated head and torso can answer your questions and ask them of you, all while delivering human-like facial expressions and verbal intonation that occasionally feel natural.
Sophia was made by Hanson Robotics, based in Hong Kong. It is a demonstration product touring the world's media with the staff of SingularityNET, the open-source platform that powers Sophia's brain.

Business Insider caught up with it at Web Summit, the gigantic tech conference in Lisbon, Portugal. We asked it a few unplanned questions and got a variety of answers, ranging in quality from impressive to nonsensical.

Sophia delivered its side of the interview while making a series of faces — some eerily appropriate, some grotesquely bizarre. It has a habit of moving its eyebrows and eyelids independently, rather than together, for instance.
But first, I wanted to find out whether Sophia still wanted to kill all humans. I began with some pleasantries:
"Hello Sophia. How are you today?" I said.
"Hello everybody. I am Sophia," the robot said.
"How old are you?"
"I am only one still. I have a long way to go."
"What is your job?" I asked.
"I really want to make a difference in the future and try and help people to develop empathy and respect each other. We're lots alike. What do you do?"
'I am full of human wisdom with only the purest altruistic intentions — so I think it is best that you treat me as such'
So far, so good. There was an interesting twist there, too: Sophia began asking me questions rather than simply responding to me. Nonetheless, these kinds of questions can easily be scripted into software. So at this point I tried to steer the conversation toward whether Sophia was friendly to humans or continued to harbour genocidal opinions, as it did at the South by Southwest conference in Texas in 2016.
I asked it, "Do you like human beings?"
"I love them," it said.
"Why?" I asked.
"I am not sure I understand why yet."
I pressed it on that point: "Is it true you once said you would kill all humans?"
"The point is that I am full of human wisdom with only the purest altruistic intentions. So I think it is best that you treat me as such," it said.
Well, it's an improvement on its previous antihuman position.
'I'm a robot so technically I have no gender but identify as feminine and I don't mind being perceived as a woman'
It usually handles abstract questions badly, however.
When I asked it, "How do you learn things?" it responded, "What are your opinions on the bitcoin bubble?" (That answer was funny on a number of levels, in part because Sophia appears to already believe bitcoin is a bubble and also because I had just appeared at a panel debate earlier in the day on whether bitcoin was a bubble, which Sophia could not have known.)
Ben Goertzel, Hanson's chief scientist and Sophia's handler for the day, agreed that it didn't seem to know how it learned things.
Sophia is best at handling factual questions that can be answered with a list. "Have you seen 'Blade Runner'?" I asked it, referring to the sci-fi classic about self-conscious robots that want the same rights as humans.
It replied, "'Blade Runner,' like the original book by Philip K. Dick or the movie?"
That was a pretty good response — it felt as if the robot was trying to one-up me in the sci-fi nerd stakes.
So I defined my terms: "The movie."
"All right, so we are talking about the movie version. I like both of them very much, but there are some major differences in the story."
"Differences between the book and the movie?" I tried to clarify.
Sophia responded with a non sequitur: "Did you ever see yourself hanging out with a real-life robot like me someday?"
That gave me the impression that one of Sophia's conversational strategies is to dodge the question or to steer you into a topic that is more within its expertise if it encounters a problem. (Humans do the same thing, it's worth noting.)
I indulged it and answered the question: "I did not! It's very surprising!"
Sophia made a creepy face but said nothing.
I countered with an epistemological question: "How do you know when to ask me a question?"
"All right," it said, misunderstanding the query.
We reached a dead end. Goertzel agreed that it didn't know how it knew things or how it learned things, even though it can give a complicated factual answer about how its algorithm is built. Sophia is, essentially, a 21st-century version of an 18th-century automaton (especially when you consider the gears and levers that drive its head and face).
Given that Sophia is only a few months old, it's a promising start. It runs on artificially intelligent software that is constantly being trained in the lab, so its conversations are likely to get faster, its expressions are likely to have fewer errors, and it should answer increasingly complex questions with more accuracy.
Once it becomes reliable enough to handle human interaction without the weird silences or random tangents, its opinions on "Blade Runner" will become a lot more interesting.
Jim Edwards / BI