It’s time to give artificial intelligence the same ethical protection as animals

Universities around the world are conducting serious research on artificial intelligence, and the tech giants are actively developing it. Most likely, we will soon have AI whose cognitive abilities approach those of mice or dogs. And so it is time to ask whether such AI will need the ethical protection we usually give animals.

Can artificial intelligence be harmed?

Until now, discussions of “humane AI” or “robot rights” have been dominated by questions about what ethical obligations we would owe an AI with human-level or superior intelligence, such as the android Data from Star Trek or Dolores from Westworld. But that is the wrong place to start. Before we create an AI with human qualities that deserves human ethics, we will create less complex AIs that deserve, at best, the ethical consideration we give animals.

We are already cautious in research involving certain animals. Special committees evaluate research proposals to ensure that vertebrate animals are not killed unnecessarily and do not undergo excessive suffering. If a study involves human stem cells or human brain cells, the standards of oversight are stricter still. Biomedical research is carefully scrutinized, but AI research, which may carry some of the same ethical risks, currently receives no such scrutiny at all. Perhaps it should.

You might think that AI does not deserve such ethical protection because it lacks consciousness, that is, because it has no genuine stream of experience, no joy and no suffering. We agree. But here is a difficult philosophical question: how will we know when we have created something capable of joy and suffering? If an AI is like Data or Dolores, it can complain and defend itself, initiating a discussion of its rights. But if an AI is inarticulate, like a mouse or a dog, or for some other reason cannot tell us about its inner life, it may be unable to report that it is suffering. And yet dogs can certainly feel joy and suffer.

Here lies a puzzle and a challenge, because the scientific study of consciousness has reached no consensus on what consciousness is or how to tell whether it is present. On some views, call them liberal, consciousness requires only a sufficiently well-organized process of information handling. We may already be on the threshold of such systems. On other views, call them conservative, consciousness may require very specific biological features, such as a mammalian brain in all its splendor; in that case, we are nowhere near creating artificial consciousness.

It is not clear which approach is correct. But if a “liberal” view is true, we will soon create many subhuman artificial intelligences that deserve ethical protection. Therein lies the moral risk.

Discussions of “AI risk” usually focus on the risks that new AI technologies pose to us humans, such as world domination, the destruction of mankind, or the collapse of our banking system. Much less often discussed are the ethical risks we pose to AIs through improper treatment.

All this may seem far-fetched, but since researchers in the AI community are actively trying to develop conscious AI, or robust AI systems that may eventually become conscious, we should take the issue seriously. Such research calls for ethical review of the kind we apply to animal studies and to research on samples of human nervous tissue.

In research on animals, and even on humans, appropriate protections were introduced only after serious ethical violations came to light (for example, needless vivisections and the medical war crimes of the Nazis). With AI we have a chance to do better. We may need to create oversight committees that evaluate cutting-edge AI research with these questions in mind. Such committees should include not only scientists but also AI designers, cognitive scientists, ethicists, and interested members of the public. They would be tasked with identifying and assessing the ethical risks of new AI designs.

It is likely that such committees would judge all current AI research acceptable. On most mainstream views, we are not yet creating AI with conscious experience that deserves ethical consideration. But we may soon cross that line. We need to be prepared.