Artificial intelligence has advanced by leaps and bounds in the twenty-first century, and one of its notable achievements is the ability to recognize human emotions. In its annual report on the development of AI, AI Now, an interdisciplinary research center that studies the social impact of artificial intelligence, called for a ban on the use of this technology in certain cases. According to its experts, AI emotion recognition should not be used in decisions that affect people's lives and society as a whole. Why could the ability of machines to distinguish emotions significantly change everyday human life?
Can a robot feel empathy?
Computer vision algorithms capable of detecting certain emotions have existed for at least a couple of decades. The technology is based on machine learning: algorithms that process data in order to make better decisions. Despite all the advances in modern robotics, reproducing this genuinely human skill remains quite challenging. Microsoft experts have noted that recognizing people's emotions with computers could enable a new generation of applications, but because of the difficulty of the task, AI systems long produced incorrect results. Nevertheless, recent research shows that the technology is already helping recruitment agencies assess a prospective employee's likely productivity at the interview stage. Analyzing video recordings of interviews with such tools now lets managers form a better picture of the emotional state of their future subordinates.
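The machine-learning setup described above can be sketched in miniature: feature vectors derived from face images are mapped to emotion labels by a trained classifier. Everything below is a hypothetical illustration, not any vendor's actual system; the labels, the 16-dimensional features, and the synthetic data are all invented for the demo, where a real pipeline would extract features from video frames.

```python
# Minimal sketch of emotion recognition as supervised learning.
# NOTE: all data here is synthetic and the emotion labels are
# assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
EMOTIONS = ["neutral", "happy", "angry"]

# Synthetic training set: 300 face "feature vectors" (standing in for
# e.g. facial-landmark distances), each with one of three emotion labels.
X_train = rng.normal(size=(300, 16))
y_train = rng.integers(0, len(EMOTIONS), size=300)
X_train += y_train[:, None]  # shift each class so the demo is learnable

# Train a simple classifier mapping features -> emotion label.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Recognize" the emotion of a new, unseen face vector.
sample = rng.normal(size=(1, 16)) + 2.0
predicted = EMOTIONS[clf.predict(sample)[0]]
print(predicted)
```

The point of the sketch is that the classifier only reflects the statistical patterns in its training data, which is precisely why the biases discussed below carry over into its predictions.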
The ease of continuous monitoring with the help of AI raises many issues that go beyond ethics. There are currently a great many questions about the privacy of personal data that may provoke a negative public reaction. Given these ethical considerations, the use of artificial intelligence in everyday life, for instance in hiring decisions or in criminal sentencing, can be helpful yet at the same time unreliable. So, if a system turns out to be biased, the use of AI in interviews or in sentencing convicts may have to be minimized.