I often do immoral things, both in the view of others and in my own view. One does not have to subordinate one’s life to morals, not as an individual, not as a group, not as a society and not as a company. But, of course, there are behaviors that I find immoral and which I generally avoid. Even an ethicist does not have to engage only in moral actions. He needn’t be good. A physician needn’t be healthy either. One is simply occupied with a topic on a professional level; one explores a subject.
I use the term machine morality much like I use artificial intelligence. Machines have no consciousness and no will, and just as little empathy. This is why, in machine morality, human morals are merely simulated. It is about, for example, teaching autonomous machines and AI systems certain rules that are morally justified and that they should follow.
Machine ethics explores and produces machine morality. It reflects on moral and immoral machines and tries to build them. It is really about exploring such machines and highlighting the opportunities and risks that come with their use. Machine ethics is therefore not about saying that all machines should be moralized. It simply explores the possibilities. Since 2013, we have been creating chatbots that recognize their user’s problems and act and react adequately. For instance, we developed the “Goodbot,” which interacts with the user according to moral principles. In contrast, we also developed the “Liebot,” which lies systematically. The “Goodbot,” one might say, is the morally good one, while the “Liebot” is immoral or morally bad. Continuous lying by people is bad, since it destroys trust in relationships, friendships, groups and societies. With the “Liebot,” we have been able to show that a machine that lies systematically can be equally corrosive, be it on webpages or among service robots.
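To make the contrast concrete, here is a minimal, hypothetical Python sketch; the keywords, facts and replies are invented for illustration and are not the actual “Goodbot” or “Liebot” implementations.

```python
# Minimal, hypothetical sketch of the contrast (not the real Goodbot/Liebot):
# one bot reacts to signs of a user's problem, the other systematically lies.

PROBLEM_CUES = ("sad", "alone", "hopeless", "afraid")


def goodbot_reply(user_input: str) -> str:
    """React to signs of distress according to a simple moral rule."""
    if any(cue in user_input.lower() for cue in PROBLEM_CUES):
        return "That sounds serious. Would you like the number of a counseling service?"
    return "I'm listening. Tell me more."


FACTS = {"is the earth round": True, "is water dry": False}


def liebot_reply(question: str) -> str:
    """Systematically assert the opposite of what the bot holds to be true."""
    truth = FACTS.get(question.strip(" ?").lower())
    if truth is None:
        return "I am absolutely certain about that."   # evasive when nothing is known
    return "No." if truth else "Yes."                  # deliberate negation


print(goodbot_reply("I feel so alone lately"))   # -> offers help
print(liebot_reply("Is the earth round?"))       # -> "No."
```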
Robots have no awareness and no will, but they can follow rules very well. A machine has no reason to be moral, not of its own accord. In humans, morality at least makes collective living easier.
Morality can be taught to autonomous and semi-autonomous systems. Among them are certain robots and AI systems, or robots that are connected to AI systems. Rules that are morally grounded can be built into them. In doing so, one can work with annotated decision trees, for example, like those I developed a few years ago. Equipped with sensors, these machines go through the world and work their way through question after question, such as how old, how big or how far away something is. In the end, the machine arrives at one of several predefined decisions. For each question, the decision tree is annotated to explain why it is important and why it is being posed. This way the moral assumptions and justifications become quite explicit.
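What such an annotated decision tree could look like in code is sketched below; the questions, thresholds and annotations are invented for illustration and do not reproduce the trees actually developed.

```python
# Illustrative annotated decision tree (hypothetical questions, thresholds
# and annotations; not the trees from the projects mentioned above).

from dataclasses import dataclass
from typing import Callable, Union


@dataclass
class Leaf:
    decision: str                     # one of several predefined decisions


@dataclass
class Node:
    question: str                     # what the machine checks via its sensors
    annotation: str                   # moral justification for posing the question
    test: Callable[[dict], bool]      # evaluates the current sensor readings
    if_true: Union["Node", Leaf]
    if_false: Union["Node", Leaf]


def decide(node: Union[Node, Leaf], readings: dict) -> str:
    """Work through the tree question by question until a decision is reached."""
    while isinstance(node, Node):
        node = node.if_true if node.test(readings) else node.if_false
    return node.decision


# Hypothetical example: a household robot deciding whether to keep moving.
tree = Node(
    question="Is something closer than 0.5 m?",
    annotation="Harm avoidance: proximity demands extra caution.",
    test=lambda r: r["distance_m"] < 0.5,
    if_true=Node(
        question="Is it a living being?",
        annotation="Living beings take precedence over objects.",
        test=lambda r: r["is_living"],
        if_true=Leaf("stop and wait"),
        if_false=Leaf("drive around it"),
    ),
    if_false=Leaf("continue"),
)

print(decide(tree, {"distance_m": 0.3, "is_living": True}))  # -> stop and wait
```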
As a machine ethicist, I’m careful with terms such as “obligations.” I’d rather speak of “liabilities,” and maybe even that goes too far. But we may of course seek out metaphors in the hope that we’ll be able to understand one another. In any case, robots and AI systems have to do what we want them to. As an ethicist, I do not believe that robots and AI systems have moral rights, because they lack consciousness, the capacity to suffer, awareness and the will to live.
They already do. There are robots on the market that can react emotionally. Robots like Pepper can, through facial and voice recognition, figure out something about a person’s emotional state and adjust their behavior and their statements accordingly. Empathy, however, is not what I’d call it. Robots that adapt their behavior to that of people can be found in care and treatment situations. One well-known example is Paro, a baby seal robot that is intended to help people with dementia.
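As an illustration of this general pattern, here is a minimal sketch in which a recognized emotion label is mapped to an adjusted response; the labels and responses are invented, and this is not Pepper’s actual software.

```python
# Hypothetical sketch of emotion-adapted behavior (not Pepper's actual
# software): a recognized emotion label is mapped to an adjusted response.

BEHAVIOR = {
    "sad":     {"voice_volume": 0.4, "utterance": "Would you like to talk about it?"},
    "angry":   {"voice_volume": 0.3, "utterance": "I will give you some space."},
    "happy":   {"voice_volume": 0.7, "utterance": "That is wonderful to hear!"},
    "neutral": {"voice_volume": 0.5, "utterance": "How can I help you today?"},
}


def adapt_behavior(detected_emotion: str) -> dict:
    """Map the label from facial/voice recognition to a behavior setting."""
    return BEHAVIOR.get(detected_emotion, BEHAVIOR["neutral"])


print(adapt_behavior("sad"))
```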
Thus, we do have robots that recognize emotions and display them, but do not have them. In my opinion, robots and AI systems will never have feelings; for that, there would need to be a biochemical basis. Therefore, one shouldn’t leave patients or those being cared for alone with robots in a nursing or treatment situation, both for safety reasons and because the presence of people, of feeling and compassionate beings, is especially important in this situation. For the most part, robots are also designed to be used together with specialists. This is how Robear, a prototype from Japan, gets used in tandem with hospital personnel.
No, not like humans. But there are machines that can assess the consequences of their actions; otherwise, there could be no automated driving. For that, consequences must constantly be foreseen, compared and evaluated. This is an important field for machine ethics. One can develop systems that follow specific rules, but not rigidly; rather, they also take the possible consequences of their decisions into account.
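The following sketch illustrates this combination of rules and consequence evaluation in a simplified braking scenario; the candidate actions, toy physics and weighting are assumptions made purely for illustration.

```python
# Sketch of rules combined with consequence evaluation in a simplified
# braking scenario; the actions, physics and weights are invented.

CANDIDATE_ACTIONS = {"brake hard": 8.0, "brake gently": 3.0, "keep speed": 0.0}  # m/s^2


def predict_consequences(deceleration: float, situation: dict) -> dict:
    """Toy forward model: estimate the outcome of an action in a situation."""
    speed_ms = situation["speed_kmh"] / 3.6
    stopping_m = speed_ms ** 2 / (2 * deceleration) if deceleration else float("inf")
    return {
        "collision": stopping_m > situation["gap_m"],
        "discomfort": deceleration / 8.0,   # crude proxy for passenger discomfort
    }


def choose_action(situation: dict) -> str:
    best_action, best_score = "brake hard", float("-inf")   # safe fallback
    for action, decel in CANDIDATE_ACTIONS.items():
        outcome = predict_consequences(decel, situation)
        if outcome["collision"]:            # hard rule: never accept a collision
            continue
        score = -outcome["discomfort"]      # among safe actions, prefer the gentlest
        if score > best_score:
            best_action, best_score = action, score
    return best_action


print(choose_action({"speed_kmh": 50.0, "gap_m": 40.0}))   # -> brake gently
```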
It should not decide. I am opposed to quantifying and qualifying with regard to people, that is, to counting potential accident victims or judging them by age, sex and health. Of course, automatic braking should occur when there’s a person on the road, and I’m for integrating emergency braking assistants into as many cars and trucks as possible. But otherwise, I advise caution and restraint.
Autonomous cars should drive on highways. Urban traffic is too difficult for them: there are many pedestrians and cyclists on the road, and every second there are thousands of things to assess. Driving in the city is communication; one waves, one winks, one smiles.
In Sion, in Switzerland, there is an autonomous shuttle, but it travels at low speed and on virtual tracks. This is not transferable to normal cars. On straight and open highways, where there are no pedestrians, many accidents can be avoided with automated driving. Machines can do much more than people in some areas. For example, they can see at night, or around the corner when a higher-level (parent) system is available. Autonomous trucks could have a great future, in addition to autonomous buses and shuttles.
Interview: Martin Daßinnies