Deakin Communicating Science 2016

EES 200/101

Dangers of A.I

The term ‘singularity’ is one you might be familiar with: it is usually used to describe the point of infinite density at the centre of a black hole. However, there is another type of singularity, the technological singularity. This is the point in time when an artificial general intelligence becomes capable of recursive self-improvement, that is, when it can build machines smarter and more powerful than itself. This leads to an intelligence explosion, as eventually a machine will be built that outsmarts humans, with intelligence so great that we might not even be able to comprehend it.
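To get a feel for why this is called an ‘explosion’, here is a toy sketch in Python. The starting level, the 50% improvement rate and the compounding rule are all invented for illustration, not taken from Kurzweil or anyone else:

```python
# Toy model of recursive self-improvement: each machine designs a
# successor a fixed fraction "smarter" than itself, so capability
# compounds like interest. All numbers here are invented.

def intelligence_explosion(start=1.0, improvement=0.5, generations=10):
    """Return the capability of each successive self-improving machine."""
    capability = start
    history = [capability]
    for _ in range(generations):
        capability *= 1 + improvement   # each machine builds a smarter one
        history.append(capability)
    return history

for gen, level in enumerate(intelligence_explosion()):
    print(f"generation {gen}: capability {level:.1f}x human baseline")
```

After only ten generations the newest machine is nearly 58 times the starting level, and nothing in the loop ever slows it down. That runaway compounding is the explosion.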

Ray Kurzweil, recipient of the 1999 U.S. National Medal of Technology and Innovation, predicts that this singularity will occur in the year 2045, which is frighteningly close.


As stated in the previous article, the different types of A.I are all designed for specific purposes. A.I can be used for many different things, from making coffee to coordinating nuclear strikes, and there is great ethical concern over the use of robots in the military and law enforcement. What raises the most concern is emotion: we have happiness, sadness and countless other emotions, but how do we pass them on to an A.I system? We don’t know how, and without emotions it is hard to predict what an A.I will do when faced with a situation. Say, for example, a law enforcement A.I is introduced into a city, and a starving young child has to steal food to survive. The A.I would treat the child exactly as it would treat a wealthy man stealing the same thing, and this is where the issue lies. A human police officer possesses empathy and would treat the child differently from the man; that’s just the human thing to do. Do you know what we call someone who has no empathy? A psychopath. Do you want a bunch of psychopathic robots in the police force or the army? I didn’t think so.
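To make the worry concrete, here is a deliberately simple hypothetical sketch; the judge function and its single rule are invented for this post, not drawn from any real system. The point is that a purely rule-based system has nowhere to put the context a human officer would weigh:

```python
# Hypothetical rule-based "law enforcement" system: the verdict depends
# only on the offence, never on the offender or their circumstances.

def judge(offence, offender):
    """Return a verdict based purely on the offence committed."""
    penalties = {"theft": "arrest"}        # one rule, no exceptions
    return penalties.get(offence, "release")

starving_child = {"age": 8, "motive": "hunger"}
wealthy_man = {"age": 45, "motive": "greed"}

# The offender argument is accepted but completely ignored, so both
# cases receive the identical verdict.
print(judge("theft", starving_child))   # arrest
print(judge("theft", wealthy_man))      # arrest
```

The offender’s details go in, but no rule ever reads them. That gap between what the system sees and what a human would weigh is exactly the empathy problem described above.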

Murray Shanahan, professor of cognitive robotics at Imperial College London, has said that to nullify the threat of a wayward A.I we must make it human-like, as in, give it the ability to feel emotion. He states that there are two ways A.G.I can go: a potentially dangerous A.G.I built on an optimisation process with no morals (the paperclip maximiser scenario), or an A.G.I modelled on us and our psychological and neurological blueprint.
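Shanahan’s first scenario, the paperclip maximiser, is easy to caricature in code. This sketch is invented purely for illustration (no real system works this way); the point is that the objective has a term for paperclips and for nothing else:

```python
# Caricature of an amoral optimiser: all value is measured in paperclips,
# so every resource, whatever it was for, gets converted. Invented example.

def paperclip_maximiser(resources):
    """Greedily convert every available resource into paperclips."""
    clips = 0
    for name in resources:
        clips += resources[name]   # a hospital is worth its weight in clips
        resources[name] = 0        # nothing is held back
    return {"paperclips": clips, "everything_else": resources}

world = {"steel": 1000, "farmland": 500, "hospitals": 10}
print(paperclip_maximiser(world))
# {'paperclips': 1510, 'everything_else': {'steel': 0, 'farmland': 0, 'hospitals': 0}}
```

Nothing in that loop is malicious. The disaster comes purely from an objective with no moral terms in it, which is Shanahan’s point.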

Military organisations across the world are developing, and in some cases have already deployed, A.I systems for use in warfare. One of the first recorded uses of drones was in 1849, when Austria launched around 200 pilotless balloons filled with bombs against Venice. Fast forward 167 years and the U.S.’s Global Hawk is able to direct itself by downloading data from satellites while maintaining an altitude of 60,000 ft.

This is only one of the countless extremely advanced A.I systems developed by groups across the world, and only one that we know of. How many other projects have been undertaken in secret? *cough* Area 51 *cough*

References:

https://www.psychologytoday.com/blog/wicked-deeds/201401/how-tell-sociopath-psychopath

Kurzweil, R. (2005). The Singularity Is Near. Viking.

http://www.ibtimes.co.uk/ai-should-be-human-like-capable-empathy-avoid-existential-threat-mankind-1489168



One comment on “Dangers of A.I”

  1. rbburgess
    May 8, 2016

    Hi cjhadley,
    Very interesting topic being discussed. The most difficult aspect of morally dependent A.I is that morals are extremely subjective and varied. Rather than there being good, bad and neutral, there are near-infinite shades of grey. I can see this is likely something you would have discussed given extra time and word count. How would you suggest the ‘degree of morality’ issue be addressed in the future? Linked is an interesting article that discusses this: http://www.huffingtonpost.com/zoltan-istvan/the-three-laws-of-transhu_b_5853596.html. It contains the quote “What can be done with one substance must never be done with another. No two materials are alike.”, which references how humanity cannot be forced upon an A.I to good effect.

