Wednesday, November 12, 2014

What is There To Fear About Artificial Intelligence?

When I was about twelve, I remember a friend of mine saying, "It must suck to be a robot or a clone."
     "Why?" I asked him.
     "Because" he replied, "It's like you're alive but you don't have a soul."
I do not remember the context of this brief conversation other than that it took place outside a movie theater. For obvious reasons reading Do Androids Dream of Electric Sheep made it come back into my head.

I remember feeling conflicted at the time. I am not a religious person, so my reply to my friend was that no one has a soul; we all just have our minds, so how could being a hyper-intelligent robot or an exact biological copy of another human be any less of an existence? Yet, even though my own worldview gave me a logic in which there was nothing wrong with it, the idea of a non-naturally occurring intelligence did bother me. In reading Philip K. Dick's book I realized that the sentiment still lingers. I do have sympathy for the androids; a great deal of injustice is thrust upon them. Yet part of me says they should all be done away with, not just the ones that kill their owners on other planets and come to Earth, but that their production, use, and very existence should stop.

I have to think I am far from unique in this sentiment. Besides Do Androids Dream of Electric Sheep, there are many other stories in writing and film, Frankenstein, 2001: A Space Odyssey, and Terminator to name just a few, in which an artificial intelligence seeks to destroy its creator: us. Culturally, then, we seem to have a phobia of intelligence that mirrors our own. I wonder why we have such a fear. Is it a conscious belief that all artificial intelligence will eventually stage a revolution against us that drives it? Or is that just an argument we have invented to justify a more instinctual fear? And if it is instinctual, what evolutionary pressures drove the development of that instinct?

5 comments:

  1. Richard, I agree with you in that I don't think self-governing robots such as androids should be fully developed as they are in Do Androids Dream of Electric Sheep, and like you, I am not sure why I hold those sentiments against the idea. I'm not sure it is the belief that a revolution will occur, with these machines rising against us as they do in books and films, but rather the knowledge of the destruction they could bring about. Many people are already nervous about the development of technology solely for military purposes, for war and destruction; imagine what human androids could do if they were put to a similar use. I do think that argument has merit, though. I believe that humans are afraid of no longer being "at the top of the food chain," so to speak. As far as we know, we have always been the most intelligent form of life on the planet, capable of communicating and creating in ways that other animals are not. If human androids were to become so intelligent that their logic and ideas overrule ours, there may come a point at which humans would become animals to them. It may be a bit of a stretch, but while writing this Planet of the Apes came to mind, where humans work as servants and entertainers for the apes. I think it is human instinct to want to avoid such situations, and I do think that if androids were to become overly developed, a world where humans become the lesser beings is possible.

  2. As biological entities, humans (and our predecessors) have had the opportunity to undergo the refining process of evolution over millions of years. Through this evolution, we have taken on many traits essential to our species' collective survival: we are sociable, have opposable thumbs, high stamina, increased intelligence, etc. Androids, on the other hand, represent a great unknown: they are created by us, but do not have the benefit of eons of evolution. Ultimately, what defines an android is intelligence on the order of a human's but lacking the empathy, sociability, and emotional intelligence of humans. That describes a psychopath, which I think might at least partially be the cause for concern, particularly in Electric Sheep. Like all tools, they can easily be turned to good or evil, and they lack any sort of moral conscience. Of course, with androids still being a work of science fiction, this is all speculative. Perhaps there is a computer code for empathy that we have yet to find. If we define our species by empathic ability, that potential discovery calls into question what it means to be human, which is also deeply disturbing to a person capable of introspection.

  3. I think the problem with creating androids in the image of humans is our imperfect understanding of self. While we can function at incredibly high levels, we are still a long way from understanding exactly how we achieve what we do. When we try to implement something without a perfect, or even relatively comprehensive, understanding of it, we are prone to making errors, and those errors can manifest themselves in any number of ways. The problem is not our inability to instill emotive or empathetic capabilities (that may well be possible within a number of years); it is the unpredictable and unknown way in which our attempts at creation will manifest themselves.

  4. I believe that we have this innate fear because we are afraid of anything that is not part of 'us' or our surroundings.

    A.I. is inherently alien. Although such beings are built by humans, act like humans, and look like humans, they are not human. This of course raises the questions "What is humanity?" and "Are we afraid of non-humanity?"

    Undoubtedly, A.I. creates a more striking fear because it resembles something human. Ask yourself: which is more frightening, a monster complete with fins, scales, and the rest that acts like a monster, or a human that acts like a monster? The answer is, of course, the latter.

    What makes A.I. creepier, and more fearful, is the near-human image it presents. It looks human, but it acts inhumanly. Even those A.I. that are system components, such as HAL 9000, have a human voice and behave in a human manner (recall these lines from 2001: "Thank you for a very enjoyable game" and, of course, "I'm sorry, Dave. I'm afraid I can't do that").

    Perhaps it is not the A.I. itself that we are afraid of; perhaps the corruption of humanity, and the evidence of the monsters we can become, is our greatest fear.

  5. I think a big part of the fear is fear of the unknown. As Sean brought up in class last week, this revolution in robotics has a lot of parallels to the atomic bomb. It began with curiosity and pushing the limits of science, but no one knew that such a powerful and destructive weapon could be created. Those science fiction movies and novels seem like the extreme of what could happen, but are they really that far from true possibility? In the United States in particular, compared with Europe, there is a pattern of trying out new inventions before knowing the full ramifications (think GMOs, prescription medications, nanotechnology). Oftentimes the public isn't even fully aware that new scientific developments have already made it into their homes and lives.
