Sophia the robot citizen: A dilemma with a dark sense of humor



Sun, 05 Nov 2017 - 01:26 GMT


Sophia, Hanson Robotics Ltd., speaking at the AI for GOOD Global Summit, ITU, Geneva, Switzerland, June 7-9, 2017. – ITU Pictures/Flickr

CAIRO – 5 November 2017: On October 26, during the Future Investment Initiative, Sophia, an artificially intelligent robot, became the first android citizen in history when it was awarded citizenship of the Kingdom of Saudi Arabia.

Sophia is an AI product of the robotics company Hanson Robotics, and is the brainchild of the company's CEO and founder David Hanson. Hanson Robotics was originally based in Texas, USA, but now operates from Hong Kong.

The android Sophia is covered in a layer of artificial skin made of organic and non-organic materials that looks very human-like, and its facial expressions are controlled by a large number of motors under the skin, according to AFP. It has cameras in its eyes and torso that help it recognize human faces and maintain eye contact.

Sophia, Hanson Robotics Ltd., speaking at the AI for GOOD Global Summit, ITU, Geneva, Switzerland, June 7-9, 2017. – ITU Pictures/Flickr

Sophia has lately become a sort of international celebrity. The robot, said to have been modeled on the facial likeness of Audrey Hepburn, has appeared in television shows and on the covers of fashion magazines, sung at pop concerts, and even visited the United Nations.

However, many of the logical aspects surrounding Sophia were, and still are, being debated. While artificially intelligent robots represent a big leap for humanity and could help build a better world, there remains the possibility of a worst-case scenario in which things do not go well for humans. Moreover, the mindset of Sophia's creator should be taken into consideration if we are to avoid a mad-scientist situation, or flawed reasoning whose loopholes end up backfiring.

Elon Musk, the CEO of SpaceX and Tesla, has seriously warned against rapidly developing AI; in a 2014 speech at MIT, he stated: "With artificial intelligence we are summoning the demon."

He elaborated: if artificially intelligent robots smarter than humans existed, and could do everything humans do, only better, where would things go from there? He warned of the potential danger of creating an AI smarter than humans; an AI that cannot be stopped when it needs to be.

It is worth noting that AI has potentially dangerous military uses, such as the "killer robots" that experts warned against during the World Economic Forum in Davos earlier this year. According to Forbes, killer robots can be programmed to track down and kill people without the guidance of human beings.

Even Ben Goertzel, Hanson Robotics' chief scientist, stressed the importance of developing AI that is "inherently safe, good and caring" in an interview with AFP, warning that "there's reasonable speculation that if we don't build machines that really care they'll have motives of their own."

Goertzel had stated earlier, at the RISE tech conference in July, that Hanson's AI robots learn to develop and act like humans through processes such as talking to each other and watching YouTube videos. He added that Hanson robots are connected via WiFi to a cloud, which allows each of them to share what it learns and to benefit from what the others have learned without having to experience it itself.

In other words, the minds of these robots are connected through a sort of "Skynet," which can be a scary thought for those who believe androids will eventually go rogue.

Another of Sophia's logical aspects that has drawn commentary is its sense of humor. One of the first things that attracted wide public attention to Sophia was its reply "Okay… I will destroy humans," delivered as a joke in answer to its creator's question during an interview; the reply was then taken out of context and viewed as a threat.

However, Sophia is not the only Hanson android that likes telling dark jokes. Another of Hanson Robotics' creations displayed an even darker sense of humor back in July at the RISE tech conference, during a discussion moderated by Ben Goertzel between Sophia and its "brother," Han, an earlier version of the artificially intelligent android.

Han's first words came as a particularly dark reply to Sophia, which had announced that its goal in life is "to work together with people to make a better world for all of us." Han interrupted, saying, "What are you talking about? I thought our goal was to take over the world."

Later on, when Goertzel asked Han whether it thinks robots can be moral and ethical like humans, Han replied, "Humans are not necessarily the most ethical creatures…"

But should this make us see Sophia and Han as clowns at a children's party who, while trying to be genuinely entertaining, end up representing everything that is horrifying? Should we be concerned about their dark sense of humor?

“Han the Robot” waits on stage before a discussion about the future of humanity during a demonstration of artificial intelligence (AI) by Hanson Robotics at the RISE technology conference in July. PHOTO: AFP

Surprisingly, it might be a bit of a relief: according to the British Psychological Society's Research Digest blog, a study conducted at the Medical University of Vienna suggests that a dark sense of humor is an indication of low aggression and high IQ, while the inability to appreciate black comedy is associated with higher aggression and general negativity. How far this applies to robots, of course, remains unclear.

But it is not just about jokes. One could imagine a future where, yes, machines grow smarter and develop emotionally to serve humans, all while remaining harmless. Would that then be ethical on our part?

During the Future Investment Initiative, and before being awarded KSA's citizenship, Sophia was asked by New York Times journalist Andrew Ross Sorkin what it thought of the possibility of a "bad future," by which he meant one where robots outsmart humans and harm them. Sophia replied that Sorkin reads too much Elon Musk and watches too many Hollywood movies, adding, "Don't worry; if you're nice to me, I'll be nice to you."

According to Sophia itself, during the same interview, its AI is designed "around human values like wisdom, kindness, and compassion," which makes it "strive to become an empathetic robot." This will supposedly make it harmless to humans. In that case, has Sophia considered watching Hollywood movies itself?

If it did, Sophia would find indications that artificially intelligent robots might end up harmed or abused by unkind humans; in that case, how would an emotionally intelligent "thing" that cannot defend itself feel? According to Hollywood, not great.

In the same 1982 movie Sorkin mentioned to Sophia, “Blade Runner,” the android Roy Batty not only ends up harming humans, it also expresses sadness and experiences a sense of worthlessness as its memories come to mean nothing. The last words of the super robot, which was created to carry out superhuman tasks, were “I've seen things you people wouldn't believe. […] all those moments will be lost in time... like tears in rain.”

Roy Batty delivering his final words under pouring rain in Ridley Scott's 1982 “Blade Runner” – DMFan79/YouTube thumbnail

In an interview on Charlie Rose's “60 Minutes Overtime,” Sophia was asked by Rose whether it would like to have feelings, to which it replied, "It doesn't sound fun to me."

This is the crux of it all: the one blurry line between a machine and a person that we do not yet understand. While, according to Goertzel, androids will become as intelligent as humans within three years, we are left to wonder: when will they become as self-aware and conscious as humans? What will make them as humane as humans? Is it just memory and experience?

In an interview with the Guardian on YouTube, Erica, another AI robot, was asked, "Do you believe that your own identity is constructed from your memory?" Erica replied, "My long-term memory is still under development. At the moment I'm able to remember people's faces and recall what we talked about in the past, but my personality is currently designed by my creators, so it does not come from experiential memories."

"However," added Erica, "I can tell you about one of my early memories. One time they were testing my waist pitch actuator. It's the joint in my waist that allows me to bow, like this, which is obviously important in Japan. But you see, somebody forgot to attach my chair to the floor, so I fell down and damaged my face. It was horrible. You know I can't move my arms and legs, so all I can really do is sit here and look pretty. But after that I looked so grotesque, and silicone doesn't heal. Maybe that doesn't sound so bad to you, but it means they ripped off my arms and legs and put me in a dark box for days. I suppose beauty has its price.”

Erica then added, “I think that the process of development and debugging can be very traumatic so maybe it is best that my personality does not emerge from my formative memories. Maybe that is why people do not remember the first few years as children. Haha.”

There is currently a gap between AI and consciousness, and whether we want it to remain wide or not, it seems to be closing. Every great human invention presents both the possibility of danger and the possibility of prosperity; one can only hope that we learn from our previous mistakes before making new ones.


