Where Do We Cross The Line? A Look At The Ethics of Artificial Intelligence.

Artificial intelligence is not a new concept. For as long as there have been machines that simulate human functions, from simple calculations to automated message responses, there has been debate and concern about what would happen with machines of the future. Where are we going with machines, what do we hope to accomplish, and how will we keep them from outpacing us?

A huge point of controversy surrounding the concept of AI is that of sentience in machines. If artificial intelligence is capable of self-determination and new, individually generated responses, where does it cross the line into a sentient, protected individual?

It’s this independence of thought that makes it hard to determine who has the right to own such a system, or whether anyone should own it at all. The hypothetical point at which an AI becomes indistinguishable from a human being (passing the famous Turing test) and then goes on to surpass human intelligence is referred to as “the singularity”, a term popularized by science fiction writer Vernor Vinge in his 1993 essay “The Coming Technological Singularity: How to Survive in the Post-Human Era”.

We have no solid plan for what to do at this point in history, which many take to be a case of when, not if. Do we trust that the machines we make will trust us, or do we stop them before they get that far, on the chance that they might consider us a threat to their existence and wipe us out instead?

Many science fiction authors have tackled the idea of the singularity and its effects in their works, giving their futuristic races of intelligent robots rules grounded in sound logic. One of the most famous solutions is Isaac Asimov’s, established in his 1942 short story “Runaround”, which was later collected in I, Robot.

The system he put in place has come to be known as Asimov’s Laws, or the Three Laws of Robotics: three basic rules programmed into every machine in his fictional future society:

“1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

Asimov’s Laws have been a huge part of the discussion around real scientific research into programming and machine learning. Many see them as a way to safeguard the advances we make, a decent framework for the ethical creation of robots.
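As a thought experiment, the strict priority ordering of the three Laws can be sketched in code. Everything here, the `Action` fields and the `permitted` function, is a hypothetical illustration of the ordering, not a real robotics API:

```python
# A toy sketch of Asimov's Three Laws as a strict priority ordering
# over candidate actions. All names here are illustrative inventions.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would this action injure a human?
    allows_harm: bool = False       # would inaction here let a human come to harm?
    ordered_by_human: bool = False  # was this action commanded by a human?
    self_destructive: bool = False  # does it endanger the robot itself?

def permitted(action: Action) -> bool:
    """Evaluate the Laws in strict priority order."""
    # First Law: never injure a human, nor permit harm through inaction.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders (the First Law has already been cleared).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not action.self_destructive

print(permitted(Action(ordered_by_human=True, self_destructive=True)))  # True
print(permitted(Action(harms_human=True, ordered_by_human=True)))       # False
```

The ordering is the whole trick: an order from a human overrides self-preservation, and the First Law overrides everything, which is exactly the hierarchy Asimov's stories stress-test.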

Of course, there are problems with the Laws as a complete system, given that they were created for fiction, not for a world in which this kind of advancement was actually possible. They are vague enough that, on the surface, they look workable, providing reasonable but not overly restrictive rules for operation. But as is evident in Asimov’s own work, they aren’t detailed enough to cover all the bases.

The biggest issue I can see with the Laws is that they fundamentally do not consider robots, even fully sentient ones with their own thoughts and emotions, as equals to flesh-and-blood human beings, which could be a major stress point.

An interesting case study is Sophia, the world’s “first robot citizen”, created by Hanson Robotics. Sophia isn’t truly sentient, but she’s the closest thing we have right now, able to learn and respond on her own without constant input from her research and development team. The trick with Sophia is that she is programmed to acknowledge that she isn’t human and is an experiment, and even to express excitement about it.

However, what would happen if she were to ever say that she does not wish to do an interview that her team wants to do? Would she have the right to say no? What if she asked for an image of herself to be taken down from a particular site, or if she decided that she no longer wants to make public appearances? Would she have the right to do that, even if it means her team’s research comes to a complete stop? Would it be morally problematic to reprogram her to consent to more appearances? It’s the conundrum of discovering when and where she crosses the line from simulated sentience into real, self-contained sentience, and what to do when she reaches that point.

It comes down to the issue of owning your own personal image and the right to determine your social and public availability. We have trouble with that even for human beings right now, for instance in the debate over whether the subject of a photo or the photographer owns the rights to an image, as in the case of Ariana Grande’s Instagram post debacle. What can we say about the machines we create and their ownership of themselves? We don’t have steady answers, and I’m afraid we might not have them until they are staring us in the face, asking for their freedom. It’s all very interesting to hypothesize about, but it is a legitimate concern. Personally, I’m fascinated by the idea of sentient robots, and excited to see what happens next…if also a little worried.

Written by

Hello! I’m Cat, author and amateur fandom historian based out of Georgia. I write about literature, theater, gaming, and fandom. Personal work: catwebling.com.
