Ethical as well as technological challenges in robotics and AI

What ‘rights’ should artificially intelligent machines have, and can you embed a conscience?

“Well, if droids could think, there’d be none of us here, would there?” - Obi-Wan Kenobi

Fully autonomous robots with humanlike capabilities may still be some way off, largely the realm of science fiction, but lawmakers, legal experts and manufacturers are already debating the ethical challenges involved in their production and use, and their legal status – their “legal personality”: ultimately, whether it is these machines or the human beings behind them who should bear responsibility for their actions.

There are questions about whether, and to what extent, self-learning machines should take independent decisions involving ethical choices that have traditionally been the preserve of humans. At the extreme, for example, can it be right for a machine to decide to kill an enemy combatant it has identified, without resort to human agency? Or is the robot morally no different from a “brainless” weapon?

Is there an inherent moral difference between a “sexbot” and a standard, brainless sex toy?

Last year, Luxembourgish MEP Mady Delvaux and the European Parliament's legal affairs committee suggested that self-learning robots could be granted some form of "electronic personality", so they can be held liable for damages if they go rogue.

MEPs also want the commission to work on an ethical framework for the development of robots. The commission is due to report this month on issues relating to artificial intelligence but has made clear it does not see the case for giving robots a legal personality.

Saudi Arabia has already gone a step further by granting citizenship to a robot, "Sophia", albeit as a bit of a PR gimmick to launch a conference on AI. She is already campaigning for women's rights – women have only a few in the kingdom, though a few more than robots. (Twitter was ablaze with the fact that Sophia was not wearing the headscarf and abaya that human women are expected to wear in public.)

Legal rights

If companies in many countries can and do have legal personality – corporations have some of the legal rights and responsibilities of a human being, including the ability to sign a contract or to be sued – then why not robots?

Not a right to marry or vote, nor human rights – but a means of making robots legally responsible for their actions when, through self-learning, they develop autonomously beyond what their producers could reasonably anticipate.

That would only make sense, critics say, if robots could own property that could then be seized by a court. No problem, the MEPs said, if a compulsory insurance policy is required for each and every robot.

The proposal is strongly opposed by many scientists, who say it is simply an attempt to remove manufacturers' legal liability for the consequences of their work, even when a malfunction is not foreseeable because it results from a robot's own autonomous reasoning.

In the past month an open letter to the commission from 156 artificial intelligence experts from 14 European countries, among them computer scientists, law professors and chief executives, warned that granting robots legal personhood would be “inappropriate” from a “legal and ethical perspective”.

The programming of robots is raising complex ethical challenges too. You can programme a driverless car to avoid running down a pedestrian who steps out in front of it, but how do you teach it to distinguish between morally grey choices? If, for example, avoiding the pedestrian means squashing a baby in a pram, how should an “intelligent” car weigh its options?
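
To make that concrete, here is a minimal, purely hypothetical sketch in Python (the manoeuvres and harm scores are invented for illustration, not drawn from any real system): the car does not deliberate morally, it simply selects whichever option its human designers have pre-scored as least costly.

```python
# Hypothetical illustration only: the manoeuvres and harm scores below are
# invented for the sake of argument. A real driverless-car stack is far more
# complex, but the essence of "weighing its options" is the same: the machine
# compares numbers that human designers decided on in advance.

CANDIDATE_MANOEUVRES = {
    "brake hard, stay in lane": 8.0,      # likely strikes the pedestrian
    "swerve left towards the pram": 9.5,  # likely strikes the pram
    "swerve right into the wall": 3.0,    # risks the passengers, spares both
}

def choose_manoeuvre(options: dict) -> str:
    """Return the manoeuvre with the lowest pre-assigned harm score."""
    return min(options, key=options.get)

if __name__ == "__main__":
    print(choose_manoeuvre(CANDIDATE_MANOEUVRES))
    # -> "swerve right into the wall"
```

The ethical question is thereby pushed back a step, onto whoever chooses the numbers.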

Delvaux, in her parliament resolution, cites Isaac Asimov's famous three laws of robotics as a good starting point for the discussion: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by human beings except where such orders would conflict with the first law; and a robot must protect its own existence as long as such protection does not conflict with the first or second law.
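
Read as a specification, the three laws amount to a strict priority ordering over possible actions. A minimal sketch, assuming the robot could somehow be handed reliable flags for "harms a human", "disobeys an order" and "endangers itself" (the fields and the example scenario here are hypothetical), might look like this:

```python
# Illustrative sketch only: Asimov's three laws as a strict priority ordering
# over candidate actions. The boolean flags are hypothetical inputs; producing
# them reliably from real-world perception is precisely the unsolved part.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # first law: injures a human, or allows harm through inaction
    disobeys_order: bool    # second law: conflicts with an order from a human
    self_destructive: bool  # third law: endangers the robot's own existence

def choose(actions: list) -> Action:
    """Pick an action by applying the three laws in strict priority order."""
    # First law dominates: discard anything that harms a human.
    safe = [a for a in actions if not a.harms_human]
    if not safe:
        raise ValueError("no action satisfies the first law")
    # Second law: among safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if not a.disobeys_order] or safe
    # Third law: among those, prefer self-preservation.
    survivable = [a for a in obedient if not a.self_destructive] or obedient
    return survivable[0]

if __name__ == "__main__":
    options = [
        Action("push the bystander clear of traffic",
               harms_human=False, disobeys_order=True, self_destructive=True),
        Action("follow the order to keep walking",
               harms_human=True, disobeys_order=False, self_destructive=False),
    ]
    print(choose(options).name)  # -> "push the bystander clear of traffic"
```

The awkward part, of course, is that deciding whether a given action "harms a human" in the messy real world is exactly what no one yet knows how to compute.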

Complexities

But Asimov’s rules, supposed to be programmed into the electronic DNA of robots, don’t begin to address the complexities.

In the defence sector a lively campaign is now also under way to ban the development of lethal autonomous weapons (LAWs). This month, governments meet for the fifth time in Geneva to discuss whether and how to regulate them. Some countries – including the US, Russia and China – adamantly oppose a ban. Others, primarily developing countries, are eager to implement strong regulation as soon as possible.

Around the world, militaries and arms manufacturers are testing systems that use artificial intelligence technology to operate in swarms or choose targets independently. They could soon outperform existing military technology at only a fraction of the cost.

Is it morally acceptable, for example, that a pilotless drone could not only identify a potential enemy target but also obliterate it automatically, without the intervention of human agency?

The remorseless onward march of technology is making Asimov’s intellectual challenges, and astromech droids like R2-D2, all too real.