Moral Robots?

Angelica Fleury explores possibilities for robots to convey a sense of morality.

Angelica Fleury, a computer science and philosophy major, had not yet taken her coat off when she heard the news. Her roommate called her over from the open door to the computer. On the screen was the Mars rover Opportunity, nicknamed Oppy. It had been caught in a dust storm on Mars and had lost contact with NASA. Her roommate read a translation of Oppy’s last reported message aloud: “My battery is low and it’s getting dark.” The roommates read post after post from celebrities and news outlets expressing sadness at the loss.

When Angelica later met with Alexis Elder, assistant professor of philosophy, in her office, Alexis mentioned the news about Oppy. Together they scrolled through social media memes and posts commemorating Oppy for its years of service. “The messages read as if it were the death of a person,” Angelica says. Alexis used the situation to talk about the tendency to attribute human characteristics to robots.

“Our social assumptions don't suddenly stop and our social reactions don't neatly cut off at the human boundary,” Alexis says. “People cried when they heard about Oppy shutting down. It’s important to recognize hard work and to grieve when something comes to a premature end.”

Angelica points out an animated robot named Vector on Alexis’s desk. Vector has bright blue eyes and roves around like a tiny animal. “Even though they don't have any sense of emotion or sense of self, robots emotionally appeal to us,” says Angelica.

Angelica’s Project

The morality of artificial intelligence (AI) and how humans interact with robots are the main topics of Angelica’s independent research project. She started the project in a senior seminar taught by Alexis and continued it after the class ended.

Her plan is to provide a moral guideline for building a responsible AI system. One inspiration for the research comes from Buddhist principles of the self.

“Emotions heavily influence our moral decision making,” Angelica says. “One research path is to take the Buddhist ethic of ego depletion and apply it to AI.”

“Robots are here for us to interact with,” says Angelica. “Having an anthropomorphic one is important because we perceive it to have a potential to be moral. That builds a trusting relationship with a robot.”

Trust in Machines

Google builds trust in its self-driving cars by making the exterior appear human. In addition, a psychology study found that people were more likely to trust a self-driving car that spoke with a pre-recorded human voice than one that stayed silent. The speaking car was also rated as smarter and more capable of having feelings.

Angelica sees that humans place more trust in lifelike robots, but there is danger in trusting them too much. She wants to contribute to the conversation on creating altruistic, moral robots. “If we create an AI that doesn't appear to have any sense of self or self-direction, and therefore does not have selfishness, we would be able to create the moral ideal of a device that has no ego,” says Angelica.

AI Mishap

Alexis helps Angelica sort through the thought process. Alexis works in the areas of ethics, social philosophy, metaphysics, and moral psychology. She often draws on ancient philosophy, primarily Chinese and Greek, to think about current problems. She is also interested in the philosophy of technology, and she enjoys working with students to explore the many ways philosophical issues crop up in practical concerns.

Some of Alexis’s research focuses on how technology shapes and affects interpersonal relationships, so when Angelica encountered situations where an AI system was involved in immoral treatment of humans, there was a lot to discuss.

Angelica describes a home security and management system run by an AI that became part of a domestic abuse situation. The husband had total control over the IoT (Internet of Things) devices in the house.

“The husband controlled the heat, lights, locks, everything,” says Angelica. “The wife went to the police and filed a restraining order.” But there are no laws in place to govern this kind of abuse. “We really haven't caught up to our technology use.”

A big question is whether responsibility for an AI mishap falls on the company, on the individual, or on both.

“For many technologies, there isn't a single person on the hook because you have so many hands in the project,” says Alexis. “It's a situation where a bunch of designers are simply trying to make it easy to control the system.”

There aren't any protective measures on the legal side because the technology is so new. “The end result is that victims of domestic violence fall through the cracks,” Alexis says.

It’s a tall order, but Angelica’s research is a step toward solving problems like these. As AI becomes woven into everyday life, we need controls to keep it responsible and moral.

Angelica plans to continue her research throughout the semester. She isn’t sure what graduation in spring 2020 will bring. “I could be in graduate school or working in the computing industry,” she says. “Either path sounds good.”


Photo: Assistant Professor Alexis Elder (left) with Angelica Fleury.