Ethics and Artificial Intelligence (with apologies to Jeff Goldblum)

You’re invited over to someone’s house and they have a gorilla that can do sign language (just go with it). You actually communicate with this gorilla and have a conversation, albeit a simple one. You visit a few more times, and the gorilla recognizes you and recalls what you talked about last time. You visit one more time for dinner and can’t find the gorilla. The host then states that they’ve slaughtered the gorilla for tonight’s meal since they heard gorillas taste fantastic. You’re absolutely horrified (or so I hope) that this gorilla would be treated like a cow or a chicken. But both are animals, so what’s the difference? The difference is that while a cow doesn’t appear to have a higher-level consciousness, this gorilla did. The fact that you saw higher-order consciousness in this particular animal means you’ve ascribed some type of ethical value to consciousness (unless you’re a psychopath, in which case seek help).

While this may not be an issue for vegans – and they’ll let you know it’s not an issue, and that they’re vegan, and that essential oils can… – it points to another moral quandary that we’ll need to tackle very soon: artificial intelligence. If AI has what we would call consciousness, does that hold moral implications?

The idea of humanoid artificial intelligence (AI) is something that has captured the collective mind for the better part of a century. Some of the stories that have attempted to tackle this not-so-hypothetical future have been amazing (Blade Runner, Westworld, anything by Philip K. Dick or Isaac Asimov), while others have…well…sucked (looking at you, I, Robot; how DARE you ruin Isaac Asimov’s story). The good ones always play with the morality of AI, which is a fun thought experiment, but we’re entering a time when it could become a reality.

The best modern treatment (I haven’t seen Blade Runner 2049 because I’m a sad person) would be HBO’s Westworld. It tackles the complex issue of AI, specifically the question of whether something is truly artificially intelligent if it’s been programmed to be self-aware. Beyond that there is the question of morality – if these robots can be rebuilt and have their memories erased, but are also becoming self-aware, are we still justified in doing whatever we want to them? That is, if they can feel pain and anguish, understand what they’re feeling, and articulate that feeling, what are the moral implications? After all, they aren’t biological, and the consciousness and intelligence are still artificial. Yet is there something there?

If we start developing AI, are we prepared for the day when one of the robots stops what it’s doing, looks up at us, and asks, “Who am I?” (insert Mass Effect reference here)? If the machines begin to dream of electric sheep, if it’s apparent they have a consciousness – even one only on par with a gorilla’s – does this change how we act toward machines?

All of this boils down to whether or not we place a moral value on consciousness itself. Does the fact that someone has consciousness confer value on that person? More importantly, what about people who would innately have consciousness if not for some flaw or stage of development? Does a gorilla that can respond with signs hold more value than the cow in the field? Does a human infant – who isn’t self-aware – hold more value than a gorilla (RIP Harambe)? Does a human who is awake have more rights than a human who is in a coma? And – take your hit from the bong right now to really give this question some impact – how do we properly define consciousness in a meaningful and quantifiable way, and is that something we can even do?

And here you thought this was going to be about robots and Will Smith.

Sadly, our technology is advancing more quickly than our ethical conversations. We’re encroaching upon the days when AI – even in a minimal form – is a reality, but we don’t even know how to articulate ethics for the conscious beings who currently exist. This is the point where Dr. Ian Malcolm jumps in and says, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” No one likes being against scientific progress in the modern age, because then you’re just anti-science, a relic of the Dark Ages who wants to throw us all back to the days of death by paper cut. But the reality is that sometimes we should probably slow down when we start entering realms that have ethical ramifications. A doctor being able to use robotic arms halfway across the world via the internet to perform surgery? No significant ethical problems there, so make it so. A computer so advanced that it has some sense of self-awareness? Yeah, that matters, especially for the ethics of what we do with that computer.

But we’re going to push ahead regardless. If scientists were willing to split an atom over a populated area without considering the ethical ramifications of such a leap forward in progress, then they sure as hell aren’t going to consider the ethics of making a machine conscious. So these are the questions we have to start tackling and start answering…that is, before the robots become self-aware and philosophy becomes automated.
