Self-driving cars present ethical dilemma

Google car (AAP)

As technology advances and self-driving cars inch closer to becoming the norm, the debate over the programming of the vehicles rages.

Science fiction author Isaac Asimov created a set of rules his robots had to follow as a safety feature, known as the Three Laws of Robotics. The first and foremost: a robot may not injure a human being or, through inaction, allow a human being to come to harm.

It's a law that driverless cars are likely to conflict with.

While the introduction of autonomous vehicles (AVs) is likely to prevent 90 percent of car accidents, some crashes will still occur. And they'll force the cars to make an ethically tough choice.

Should an autonomous car primarily protect its passenger in a crash, even if it results in the deaths of pedestrians, or is Star Trek right that the needs of the many outweigh the needs of the few?


A self-protective AV would prioritise the safety of the passenger -- even if it meant others would be harmed or killed.


A utilitarian AV would prioritise saving the most lives -- even at the expense of the passenger in the car.
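For illustration only, here is a minimal sketch of how the two policies differ as decision rules. The class, function names and harm scores are hypothetical, not drawn from any real AV system or from the study itself.

```python
# Hypothetical sketch: two ways a crash-avoidance policy could rank outcomes.
# Names and scoring are illustrative only, not any real AV system's logic.
from dataclasses import dataclass

@dataclass
class Outcome:
    passenger_harm: int   # expected harm to the car's occupants (0 = none)
    bystander_harm: int   # expected harm to pedestrians / other road users

def self_protective_choice(outcomes):
    """Prioritise the passenger: minimise occupant harm first,
    breaking ties by harm to others."""
    return min(outcomes, key=lambda o: (o.passenger_harm, o.bystander_harm))

def utilitarian_choice(outcomes):
    """Prioritise the fewest total casualties, even at the passenger's expense."""
    return min(outcomes, key=lambda o: o.passenger_harm + o.bystander_harm)

# Example: swerving harms the passenger; staying on course harms two pedestrians.
options = [Outcome(passenger_harm=1, bystander_harm=0),
           Outcome(passenger_harm=0, bystander_harm=2)]
print(self_protective_choice(options))  # option that spares the passenger
print(utilitarian_choice(options))      # option with fewer people harmed overall
```

The same two crash options produce different "right" answers depending on which rule the car is programmed to follow, which is exactly where the public disagreement lies.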

And public opinion is divided over which should be implemented globally.

A US survey found that while people are prepared to take the moral high ground in the abstract, they're not so keen when it gets personal.

Despite saying self-driving cars should be programmed to prioritise others, people said they'd prefer to buy cars that protected the passenger first, especially when family members were involved.

On top of that, those surveyed admitted they would be less likely to buy an AV that was government-regulated to be utilitarian.

"Regulation may provide a solution to this problem, but ... our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether," the authors of the study say.

Optimisation Research Group leader Professor Toby Walsh says that with self-driving cars less than a decade away, society needs to work out how they should be programmed.

"The study sheds some light on the state of public sentiment on this ethical issue. It shows that aligning moral AI driving algorithms with human values is a major challenge -- there is no easy answer!"

But the survey should be taken with a grain of salt -- respondents had plenty of time to decide their answers and weren't in any immediate danger.

"This may not reflect how we would, as drivers of cars, act in such moments of crisis," Prof Walsh says.

Unitec's Professor Hossein Sarrafzadeh says it's an important issue, but not the only one.

"When we use artificial intelligence, we are trusting a machine to make decisions for us. Trading shares, driving cars and flying airplanes are examples of such cases.

"We need to decide how much control we give to machines and as machines are used more extensively in our lives this question becomes more central."

The survey was published in the journal Science on Friday.

Newshub.