26th May 2017

Balancing Freedom & Progress: Robotics and AI Ethics

Is it conceivable to build an ethical robot?

As humans, we have inbuilt responses to situations. Our moral compass is what sets us apart from our robot counterparts. We can choose to make a decision based on the expected or preferred outcome: fall in love, get married; want a pet, buy a dog.

We can also make a decision based on the likely actions of another: if you have a small child, you put up a stair gate.

But what about intelligent robots, our humanoid counterparts? Do robots need to be sentient to be ethical?

Well, they must be programmed to make the decision. Unlike us, they don’t have the power of choice. In effect, a robot behaves ethically because it’s been told that’s what it must do.

Take, for example, a robot helper that is programmed to stop a person from falling whilst walking on the pavement. The robot knows that its role is to make sure the person stops before the kerb. A human with a less robust moral compass might not stop that person from falling.
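
In practice, that behaviour is nothing more than a rule someone writes down in advance. Here is a minimal sketch in Python of what such a hard-coded rule might look like; the sensor readings, threshold and function name are all hypothetical:

```python
# A minimal, hypothetical sketch of a hard-coded safety rule.
# Distances are in metres; the readings and threshold are invented.

SAFE_STOPPING_DISTANCE = 1.5  # chosen by the programmer, not the robot

def should_intervene(distance_to_kerb: float, walking_speed: float) -> bool:
    """Return True if the helper robot should stop its companion.

    The 'ethics' here is just a condition the developer decided on in
    advance; the robot exercises no choice of its own.
    """
    # Allow more stopping room when the person is moving faster.
    required_margin = SAFE_STOPPING_DISTANCE + 0.5 * walking_speed
    return distance_to_kerb <= required_margin

# Simulated readings: 1.2 m from the kerb, walking at 1.4 m/s.
print(should_intervene(distance_to_kerb=1.2, walking_speed=1.4))  # True
```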

A flippant example, you might think; however, what if the consequences are more serious? What about an autonomous vehicle that’s involved in a car crash?

What if the intelligent system on board that vehicle has to decide between crashing into a young family of four or a group of five elderly passers-by? What would we do as humans? Save the young or the old? How do you programme a car, a robot, a humanoid to make the right decision when it’s not black and white?
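
To see why it’s not black and white, consider what a naive utilitarian implementation would actually have to contain. The Python sketch below is purely illustrative, not a real or recommended design; every name and weight in it is an assumption, and the point is that a programmer has to choose them:

```python
# A deliberately naive utilitarian sketch - NOT a real or recommended design.
# Every number below encodes a moral judgement that some programmer
# would have to make explicit; that is precisely the problem.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    people_harmed: int
    average_age: float

def harm_score(outcome: Outcome, age_weight: float) -> float:
    # Should age matter at all? By how much? There is no
    # engineering answer to that question.
    return outcome.people_harmed * (1 + age_weight / outcome.average_age)

family = Outcome("young family of four", people_harmed=4, average_age=20.0)
elderly = Outcome("five elderly passers-by", people_harmed=5, average_age=75.0)

# With age_weight = 1.0 the calculus sacrifices the smaller group;
# push it past roughly 7.5 and the answer flips. The 'right' decision
# is whatever moral assumptions were baked in.
for age_weight in (1.0, 10.0):
    chosen = min((family, elderly), key=lambda o: harm_score(o, age_weight))
    print(f"age_weight={age_weight}: calculus sacrifices the {chosen.description}")
```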

If the armed services are using robots to pull the trigger, who’s responsible – the robot or the programmer? Can you jail a robot? Can you sue a humanoid?

We’ll all have to wrestle with our own consciences in these cases, but can you build a conscience in a robot?

Isn’t the fact that, as humans, we have a range of emotions the critical thing that sets us apart?

The other issue with ethics is that not every human has the same moral code. Different societies have different ethical codes, and they can vary even within the same country depending on where you live. What may be good for one might not sit well with another.

Can you have ethics that are for the ‘higher good’? People will disagree on what this is – can we all behave in a universally good way and do we all want to?

Is it a case of ‘what’s good for the goose is good for the gander’?

How do we agree on a set of ethics and moral governance as we move towards Smart City living and advances in tech? How do we ensure that it’s not the humans who are morally corrupt?

The good news is that ethical governance is beginning to come to the fore. There are emerging guidelines, such as the British Standards Institution’s BS 8611 guide to the ethical design and application of robots, that can help us as we look at the challenges and solutions of AI ethics.

Robot simulators can give developers a virtual environment in which to try out robot code, an opportunity to get the glitches out of a system before it is let loose in the real world.
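
As a toy illustration of that workflow, here is a sketch, again in Python with invented physics and thresholds, of how the hypothetical kerb rule from earlier could be exercised in simulation before any hardware is involved:

```python
# A toy simulation harness - invented physics, purely illustrative.
# The hypothetical kerb rule is repeated here so the sketch is
# self-contained; a failure shows up in software, not on a pavement.

SAFE_STOPPING_DISTANCE = 1.5  # metres, an assumed threshold

def should_intervene(distance_to_kerb: float, walking_speed: float) -> bool:
    return distance_to_kerb <= SAFE_STOPPING_DISTANCE + 0.5 * walking_speed

def simulate_walk(start_distance: float, speed: float, dt: float = 0.1) -> float:
    """Step a simulated walker toward the kerb and return the distance
    at which the robot's rule fires."""
    distance = start_distance
    while not should_intervene(distance, speed):
        distance -= speed * dt  # walker advances each time step
    return distance

# If this assertion ever fails, the glitch is caught in simulation.
stopped_at = simulate_walk(start_distance=10.0, speed=1.4)
assert stopped_at > 0.0, "walker reached the kerb before intervention"
print(f"robot intervened {stopped_at:.2f} m before the kerb")
```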

One solution is to ensure ethical codes are built in from the get-go: an agreed ethical and moral rulebook that is then stuck to. Alongside this, it is crucial to educate the programmers and the manufacturers.

Perhaps the answer is ethics boards in the large corporations that are rolling out new tech. But who governs the governors?

Do we end up in a Marvel film where the baddie uses humanoids for his or her evil world domination plans? The law needs to catch up with the advancements in technology; will there eventually be a “legal status” for robots?

In conclusion, it appears there are some common themes that need addressing: education; governance and standards; research and innovation; and data privacy.

The future of robotics is fascinating and compelling; the advancements and the ways in which we can help others are remarkable. However, does this come at a price? Only time will tell.