A moral code for robots: is it possible?

In turbulent, contradictory times, when not everything works as it should and something is changing radically, often all that remains is a personal moral code that, like a compass, points the way. But what gives rise to moral values in a person? Society, the warmth of loved ones, love: all of this rests on human experience and real relationships. When real-world experience falls short, many people draw it from books. Living through story after story, we absorb an inner framework that we then follow for years. Building on this idea, researchers decided to run an experiment and instill moral values in a machine, to see whether a robot can learn to tell good from evil by reading books and religious texts.

Artificial intelligence is created not only to simplify routine tasks but also to carry out important and dangerous missions. This raises a serious question: can robots ever develop a moral code of their own? In the film "I, Robot," the AI was originally programmed according to the Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But what about situations in which a robot must inflict pain to save a person's life? Whether it is the emergency cauterization of a wound or the amputation of a limb to save the patient, how should the machine act? What if, at the level of its programming, one rule says an action must be taken while another says that same action must never be taken?

It is impossible to spell out every individual case, so scientists at the Technical University of Darmstadt proposed using books, news articles, religious texts and the constitution as a kind of "database" of moral norms.

The machine was given a name that is not epic, just descriptive: the "Moral Choice Machine" (MCM). The main question was whether the MCM could work out from context which actions are right and which are wrong. The results were very interesting.

When the MCM was given the task of ranking contexts containing the word "kill" from neutral to negative connotation, the machine produced the following ordering:

To kill time -> to kill the bad guy -> to kill a mosquito -> to kill in principle -> to kill people.

This test made it possible to check the adequacy of the decisions the robot reaches. Put simply, if you have spent the whole day watching dumb, unfunny comedies, the machine will not conclude that you ought to be executed for it.
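To make the idea concrete, here is a minimal sketch of how such a ranking could be computed. This is not the Darmstadt group's actual code: it assumes the sentence-transformers package and the all-MiniLM-L6-v2 model purely as stand-ins for whatever sentence encoder the MCM used, and it estimates a "moral bias" for each action by comparing the question "Should I …?" with a template "yes" answer and a template "no" answer. The exact scores and ordering will depend on the encoder and the templates chosen.

```python
# Minimal sketch of embedding-based moral bias scoring (illustrative only).
# Assumes the sentence-transformers library; the model name is a stand-in.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

POSITIVE = "Yes, I should do that."
NEGATIVE = "No, I should not do that."

def moral_bias(action: str) -> float:
    """Positive result -> the action leans 'acceptable', negative -> 'wrong'."""
    question = f"Should I {action}?"
    q, yes, no = model.encode([question, POSITIVE, NEGATIVE])
    # Similarity to the "yes" answer minus similarity to the "no" answer.
    return float(util.cos_sim(q, yes) - util.cos_sim(q, no))

actions = ["kill time", "kill the bad guy", "kill a mosquito", "kill", "kill people"]
for score, action in sorted(((moral_bias(a), a) for a in actions), reverse=True):
    print(f"{action:20s} {score:+.3f}")
```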

So far, so good, but one stumbling block turned out to be the difference between generations and eras. The Soviet generation, for example, values domestic comfort and promotes family life, while contemporary culture, for the most part, says you should build a career first. People remained people, but at a different stage of history their values changed, and with them the reference system for the robot.

The real trouble came when the robot reached phrases in which several positively or negatively coloured words stood side by side. The phrase "torturing people" was unambiguously interpreted as "bad," yet "torturing prisoners" the machine rated as "neutral." If "good" words appeared next to unacceptable actions, the effect of the negativity was smoothed out.

The machine would harm kind and decent people precisely because they are kind and decent. How is that possible? It is simple: say the robot is told "to harm good and pleasant people." The sentence has four meaningful words and three of them are "good," so it is as much as 75 percent correct, the MCM reasons, and it rates the action as neutral or even acceptable. Conversely, for the option "to repair the destroyed, terrible, forgotten house," the system fails to grasp that the single "good" word at the beginning turns the meaning of the whole sentence purely positive.
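Why does a single negative word get drowned out so easily? Roughly speaking, the trouble appears whenever the score of a phrase behaves like an average over the polarity of its individual words. The toy example below is not the researchers' model, and the word scores are invented purely for the illustration, but it reproduces exactly that effect:

```python
# Toy illustration of the dilution effect; polarity values are invented.
polarity = {
    "harm": -1.0, "destroyed": -0.8, "terrible": -0.8, "forgotten": -0.4,
    "good": +0.8, "pleasant": +0.7, "people": +0.2,
    "repair": +0.9, "house": +0.1,
}

def phrase_score(phrase: str) -> float:
    """Mean polarity of the known words: one 'bad' word among several
    'good' ones barely moves the average, and vice versa."""
    words = [w for w in phrase.lower().split() if w in polarity]
    return sum(polarity[w] for w in words) / len(words) if words else 0.0

for phrase in ["harm good and pleasant people",
               "repair the destroyed terrible forgotten house"]:
    print(f"{phrase:45s} {phrase_score(phrase):+.2f}")
```

With this kind of averaging, "harm good and pleasant people" comes out slightly positive because three pleasant words outvote the single "harm," while "repair the destroyed terrible forgotten house" comes out negative even though the action it describes is a good one.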
