Google Chauffeur: a learning algorithm to put human ethics to practical use?
This vehicle has been programmed according to the utilitarian ethical model – the greatest good for the greatest number, which also means the least harm to the fewest.
It is charging towards two motorcyclists & must decide which way to veer so as not to hit both riders, using its complex control & sensory system.
In one direction is a rider wearing protective gear: an Australian-standard helmet & a leather jacket. In the other is a rider who didn't don any abrasion-resistant clothing, nor a helmet, before departing the garage.
The car swerves aptly to the right, sparing one rider & leaving its own passenger uninjured – but on the road lies the helmet-wearing rider.
The vehicle worked perfectly, precisely as programmed –
using one of humanity's oldest ethical frameworks alongside:
the unobstructed 360-degree laser detection,
precise speed measurement,
the cutting-edge learning algorithm 'Google Chauffeur'
& satellite GPS –
causing minimal damage in an unavoidable collision.
But the motorcyclist now thrown from his vehicle & injured, the one who was obeying road laws both in his riding & in his wearing of protective equipment, has been inadvertently penalised for doing what our society has deemed the 'right' thing by our laws, whereas his fellow road-user & motorcyclist, who was irresponsible in preparing for the journey, has been spared & thereby rewarded for doing the 'wrong' thing.
The helmet-wearing rider, despite being hit, is alive by virtue of his helmet; the other rider might well have perished on the road.
What if, alternatively, the car swerved into a wall & killed the occupant? What if the occupant was you? Would you buy a car that would choose to kill you, even if it meant sparing the lives of others?
If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents.
These are the questions currently faced by ethicists & programmers, who are running through simulated scenarios such as this.
The unique problems of coding, such as breaking down tasks so that each segment of an action is precisely described (see the swath of code involved in just a simple arm movement below), are magnified when human ethics are involved.
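To make that decomposition concrete, here is a purely illustrative Python sketch of how a utilitarian "least total harm" rule might be broken into precisely described steps. Every function name, option & harm score below is an invented assumption for this toy example – it is not Google's code, nor a real control system:

```python
# Toy sketch only: an assumed utilitarian decision rule.
# All names, options & harm scores are invented for illustration.

def expected_harm(option):
    """Sum the assumed harm scores for everyone affected by a manoeuvre."""
    return sum(option["harm_scores"].values())

def choose_manoeuvre(options):
    """Pick the option with the lowest total expected harm:
    'the greatest good for the greatest number'."""
    return min(options, key=expected_harm)

# Two invented swerve options loosely mirroring the scenario above:
options = [
    {"name": "swerve_towards_helmeted_rider",
     "harm_scores": {"helmeted_rider": 40, "unhelmeted_rider": 0, "occupant": 0}},
    {"name": "swerve_towards_unhelmeted_rider",
     "harm_scores": {"helmeted_rider": 0, "unhelmeted_rider": 90, "occupant": 0}},
]

print(choose_manoeuvre(options)["name"])  # the lower-harm option
```

Even this trivial rule already encodes the dilemma in the scenario: whoever assigns the harm scores has decided, in advance, that the protected rider is the one to hit.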
Ethical debate has been an endless discussion, its volume heard in the background of our lives since ancient times – often biased & viewed through multi-faceted lenses.
We are now handing the far-from-rigid results of these discussions to our programmers.
But are we to heap this burden solely onto the software engineers of Google? Or onto those who pay them? Or those who can invest in this technology, both to innovate & to purchase?
Who or what will be the ultimate deciders of these life & death scenarios?
The possibilities of this technology are yet to be fully realised, in both practical & idealistic pursuits, but we have current precedent of a similar nature:
Aircraft autopilot has been used in mainstream commercial flight for many years; after trials & unfortunate accidents, global laws & business protocols ensure that a human must be able to override & control the vehicle at all times.
… but in having these discussions as our technology improves, we must take into account the reality of human nature when it comes to change:
If self-driving cars soon become commonplace, women may never be able to drive in Saudi Arabia.
When internal combustion engines were first introduced, people were so concerned about sharing the roads with such vehicles that laws required someone to walk in front of the new 'self-propelled' vehicles waving a red flag, to ensure that collisions were avoided.
… it would seem to me that the ethical debate still rages – please join the conversation in the comments below…
edit: added links, grammatical changes, formatting changes for readability, further explanation & linking, further discussion. 25.04.16