Deakin Communicating Science 2016

EES 200/101

Algorithmic Morality

Google Chauffeur, learning algorithm, to put human ethics to practical use?

[Image: Juan Travieso, The Little Robot, 2015]

A self-driving car is in imminent danger of a collision –
sensing this, the car follows protocol & adjusts its steering towards the impact that will cause the least damage…

This vehicle has been programmed according to the utilitarian ethical model – the greatest good for the greatest number – which here means the least harm to the fewest people.

It is charging towards two motorcyclists & must decide, using its complex control & sensory systems, to veer so as not to hit both riders.

In one direction is a rider wearing protective gear: an Australian-standard helmet & a leather jacket. In the other is a rider who left the garage without a helmet or any abrasion-resistant clothing.
The car swerves to the right, sparing one rider & leaving its own passenger uninjured – but on the road lies the helmet-wearing rider.

[Image: TAC, Public Service Announcement, 2012]

The vehicle worked perfectly, precisely as programmed –
applying one of humanity's oldest ethical frameworks alongside:


the unobstructed 360 degree laser detection,
internal gyroscope,
precise speed measurement,
real-time cameras,
cross-referencing sonar,
cutting-edge learning algorithm ‘Google Chauffeur’
& satellite GPS –
causing minimal damage during an unavoidable collision.
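At its core, the "least damage" rule described above is a cost-minimisation step: score each available manoeuvre by estimated harm and pick the cheapest. A minimal Python sketch is below; the manoeuvre names and harm scores are entirely hypothetical illustrations, not Google's actual logic.

```python
def choose_manoeuvre(options):
    """Return the manoeuvre with the lowest estimated total harm.

    `options` maps manoeuvre names to harm scores (higher = worse
    outcome). The scores themselves are invented for illustration.
    """
    return min(options, key=options.get)

# Invented harm scores for the two-motorcyclist scenario:
scenario = {
    "brake_straight": 14,   # risks striking both riders
    "swerve_left": 10,      # towards the unprotected rider
    "swerve_right": 6,      # towards the helmeted, leather-clad rider
}
print(choose_manoeuvre(scenario))  # swerve_right
```

The sketch makes the ethical problem concrete: whoever assigns those numbers – programmer, manufacturer, or regulator – is the one actually making the utilitarian judgement.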

But consider the motorcyclist now thrown from his vehicle & injured. He was obeying the road laws, both in his riding & in his protective equipment, yet he has been inadvertently penalised for doing what our society has deemed the ‘right’ thing. His fellow road-user, irresponsible in his preparation for the journey, has been spared – rewarded, in effect, for doing the ‘wrong’ thing.
The helmet-wearing rider, despite being hit, is alive by virtue of his helmet, whereas the unprotected rider might well have perished on the road.

What if, alternatively, the car swerved into a wall & killed its occupant – what if the occupant were you? Would you buy a car that would choose to kill you, even if it meant sparing the lives of others?

If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die, because ordinary cars are involved in so many more accidents.

  • 1.25 million road deaths worldwide in 2013
  • 82 road deaths in Australia already this year (as of the date of publication)
  • By contrast, with well over 1 million km logged during Google Car testing, there have been precisely two accidents: one with a human driving & the other when the car was rear-ended by another driver.

What if the dealership offered different ethical options of programming depending on your personal inclinations? Who would be liable for the actions of such vehicles?

These are the questions currently facing the ethicists & programmers who are running through simulated scenarios such as this.
The unique problems of coding – breaking down tasks so that each segment of an action is precisely described (consider the swath of code involved in just a simple arm movement) – are magnified when human ethics are involved.
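This decomposition problem can be shown with a toy sketch: even a single instruction like "swerve right" must be spelled out as an ordered list of precisely described sub-steps. The steps & parameter below are hypothetical, for illustration only.

```python
def decompose_swerve(degrees=15):
    """Break the single instruction 'swerve right' into the ordered,
    precisely described sub-steps a controller would need.
    Steps & the degrees parameter are invented for illustration."""
    return [
        "confirm right-hand clearance via sensors",
        "reduce throttle to keep the vehicle controllable",
        f"rotate steering {degrees} degrees to the right",
        "hold heading until the hazard is cleared",
        "return steering to neutral & restore speed",
    ]

for i, step in enumerate(decompose_swerve(), start=1):
    print(f"{i}. {step}")
```

If ethics must be encoded the same way, every moral judgement has to survive being flattened into steps this explicit – which is exactly where the difficulty lies.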


Ethical debate has been an endless discussion, audible in the background of our lives since ancient times – often biased & viewed through multi-faceted lenses.
We are now handing the far-from-rigid results of these discussions to our programmers.
But are we to heap this burden solely onto the software engineers of Google? Or onto those who pay them? Or onto those with the means to invest in this technology, whether as innovators or purchasers?
Who or what will be the ultimate deciders of these life & death scenarios?

The possibilities of this technology, both practical & idealistic, are yet to be fully realised, but we do have existing precedent of a similar nature:

Aircraft autopilot has been used in mainstream commercial flight for many years – and after trials & unfortunate accidents, global laws & business protocols now ensure that a human must be able to override & control the aircraft at all times.

… but in having these discussions as our technology improves we must take into account the reality of human nature when it comes to change:

If self-driving cars soon become common place, women may never be able to drive in Saudi Arabia.

When internal combustion engines were first introduced, people were so concerned about sharing the roads with such vehicles that laws were introduced requiring someone to walk in front of the new ‘self-propelled’ vehicles waving a red flag, to ensure that collisions were avoided.


… it would seem to me that the ethical debate still rages – please join the conversation in the comments below…

edit: added links, grammatical changes, formatting changes for readability, further explanation & linking, further discussion. 25.04.16


One comment on “Algorithmic Morality”

  1. Adam
    April 14, 2016

    What a great topic! I was listening to a podcast on this topic just yesterday and would definitely recommend you have a listen:

    First up, I don’t know who wrote this post, ‘ewvs’ doesn’t really give me any hints. Second, I think you need to rethink the layout. By using centralised text it makes it look a little messy, remember white space is really important for reading on the web.

    I like the way you started with a story that really highlighted the issues faced by this problem. I hadn’t heard that one before and it certainly draws attention to the ethical problems (I might use that example in the future, thanks!).

    You mentioned the issue of lumping the ethical problems onto programmers, but is this the case? I imagine ethicists are incredibly interested in this issue, are corporations allowing them to be part of the conversation?

    You make some great points about how ethics could become a consumer commodity. I liked the video at the end, but am not sure how the gif of the robot arm fits into the conversation.

    I really like this blog, it just needs some tweaking to make it easier to read.

