Monday, November 17, 2014

Let's Arrest the Robot

So I've been reading a lot about robots and androids and all of that good stuff, and I wanted to get the class's opinion on something. If autonomous machines hurt someone, or commit some type of crime, who should be held responsible? Can we hold a robot responsible? Would the fear of law even affect them?

I think this is an important moral dialogue that we need to have before robots are out and about in the world. Even in a military setting, if a robot were to kill an innocent civilian, who would take the blame?


4 comments:

  1. Unless we reach a point where we acknowledge androids as equals to humans, I believe liability falls on the manufacturer in the event of an accident. However, if the manufacturer is not the programmer (perhaps the hardware is shipped to another facility that programs it), then maybe the programmer shares partial responsibility. For all intents and purposes, this is an example of an operational fault, the kind that prompts product recalls and such. Another question is whether this liability will put a damper on a company's willingness to innovate in robotics.

  2. That’s an interesting question, Emma, and one that is becoming very relevant in today’s society. Let me frame your question in a modern setting. Let’s look at the Google driverless car. If a Google car crashes and injures or harms someone, who is to blame? The car, the Google corporation, the programmers, the engineers, who? The list goes on and on.


    If we look at “Do Androids Dream of Electric Sheep?” we find that it is the robots who are punished. Deckard retires androids, not corporations or robotic designers. I wonder if intelligence has anything to do with justice, or at least with who gets the blame. If the Google driverless cars had artificial intelligence, could they be blamed? I don't know.

  3. I feel that, like everything else, it would depend on the individual case who ends up bearing the blame. As Matt and Aaron pointed out, there are several people and/or groups that could end up taking the responsibility. I personally believe that the source of the fault/malfunction/etc. should determine who is held most accountable. For example, if it was a programming error, then the programmer should be accountable; if it was a manufacturing error, then the manufacturer should be accountable.

    The problem is that there is rarely just one person involved in a project. Even if the source was, say, the manufacturer, there were still checkpoints and levels of approval that needed to occur, and there should have been quality checks made by the designers. Even with our current technology it is often not clear who to blame in certain situations, and I believe that androids or robots would be no different. If someone has a rabid dog, the dog will be put down, but the owner is still responsible for their dog's actions. And if a car malfunctions, the dealer/manufacturer/designer is responsible, not the car.

    However, if robots start developing a consciousness, then that is a whole new ball game. I think that humanity would need to adjust how the justice system runs to incorporate robots that intentionally kill or harm of their own volition. Should the company who made the robots be accountable? I do not know; it would depend on how much of their design or programming led the robot to hurt a person.

  4. I'd like to address just a small piece of your question: "Would the fear of law even affect them (the robots)?" To start, I think we need to look at why laws are upheld and why they are broken in human societies. Laws are generally upheld both out of fear of consequences and because individuals have a mutual interest in living in an ordered, peaceful society. When personal needs and desires supersede the value of order or the harm that could come from consequences, laws are broken. This ranges from petty theft of money or food, where not starving is of greater benefit than being put in jail, to revolutions, where the benefit of an ordered society fails to provide a good life. Both the upholding and breaking of laws are driven by our own desire for survival, or at least for the survival of our children.

    Essentially, for law to work, the hopes and aspirations of individuals must be better served by the existence of law, and the consequences of breaking the law must threaten to stall or remove those hopes and aspirations. Most critically, this requires individuals to have hopes and aspirations, something we currently do not have the ability to program into a computer. A computer has a set of tasks it can perform and will do so without question when commanded to. Thus, without a true intelligence, we can program our robots to obey all laws, making it the manufacturer's or programmer's responsibility if the robot hurts someone. However, if an android has consciously chosen to hurt someone, this would indicate that it must have some hope or aspiration that the person was inhibiting. Possessing a hope or aspiration means law would affect them as much as any human.

