Algorithms of life and death: how do we understand artificial intelligence that can heal people?

When it comes to applications of machine learning, the conversation most often turns to medicine. This is not surprising: it is a huge industry, generating phenomenal amounts of data and revenue, in which technological advances can improve or save millions of lives. Hardly a week goes by without a study suggesting that algorithms can outperform experts at identifying pneumonia or Alzheimer's disease, or diseases of complex organs from the eye to the heart. And yet...

Overcrowded hospitals and overworked nurses poison public health systems and drive up the cost of private ones. Here, again, algorithms offer a tempting solution. How many visits to a doctor are actually necessary? Could some of them be replaced by a smart chatbot, equipped with portable diagnostic tests built on the latest achievements in biotechnology? Unnecessary visits could be cut, and patients could be diagnosed and referred to specialists quickly, without waiting for an initial consultation.

As with other applications of artificial intelligence, the goal is not to replace doctors but to give them tools that take over the routine, repetitive parts of their work. With an AI that can examine thousands of scans per minute, the "boring stuff" is left to the machines, and doctors can focus on the parts of the job that require complex, subtle judgment about the best course of treatment and the patient's needs.

High stakes

And yet, as with other AI algorithms, there are risks in using them, even for tasks considered mundane. The problem of "black box" algorithms that make inexplicable decisions is serious enough when you are trying to understand why an automated recruiting chatbot was unimpressed by your interview. In a healthcare context, where decisions can mean life or death, the consequences of algorithmic failure can be fatal.

Neural networks do an excellent job of digesting large amounts of training data and the relationships within it, absorbing the underlying patterns or logic of a system into hidden layers of linear algebra, whether the task is detecting skin cancer from photographs or generating plausible-sounding text. But they are terrible at explaining the underlying logic of the relationships they discover: there is little more to show than a string of numbers, the statistical weights between layers. And they cannot distinguish correlation from causation.
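The correlation-versus-causation failure can be made concrete with a minimal sketch (not from the article; the scenario and numbers are invented for illustration). Here, sicker patients are referred to a specialist hospital, and sicker patients also have worse outcomes. A logistic regression that sees only the hospital flag learns a large positive weight on it, as if the hospital itself were harmful:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Latent illness severity -- the true cause, unobserved by the model.
severity = rng.uniform(0, 1, n)

# Sicker patients are referred to the specialist hospital...
specialist = (severity + rng.normal(0, 0.1, n) > 0.6).astype(float)

# ...and sicker patients also have worse outcomes. The hospital itself
# has no causal effect on the outcome at all.
bad_outcome = (severity + rng.normal(0, 0.2, n) > 0.7).astype(float)

# Logistic regression sees only the hospital flag, not severity.
X = np.column_stack([np.ones(n), specialist])  # bias column + feature
w = np.zeros(2)
for _ in range(2000):  # plain gradient descent on the log-loss
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - bad_outcome) / n

print(f"learned weight on 'specialist hospital': {w[1]:.2f}")
```

The weight on the confounded feature comes out strongly positive, yet nothing in the fitted model signals that the relationship is non-causal: the "explanation" is just that number.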

This raises an interesting dilemma for medical professionals. The dream of big data in medicine is to feed a neural network "huge amounts of health data," let it locate complex, implicit relationships, and have it make individual assessments of patients. But what if an algorithm proves unreasonably effective at diagnosing a condition or assigning a treatment, while no one has a scientific understanding of how the relationship it exploits actually works?

Too many threads that need to be untangled

The statistical models that underlie these approaches often assume that variables are independent of one another, but in a complex, interactive system like the human body, that is rarely the case.
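A small Bayesian sketch shows why a false independence assumption matters (the prior and test characteristics below are illustrative numbers, not from the article). If two readings of a diagnostic test are perfectly correlated, treating them as independent double-counts the evidence and inflates the posterior:

```python
# Illustrative numbers: prior prevalence and test characteristics.
prior = 0.1   # P(disease)
sens  = 0.9   # sensitivity, P(positive | disease)
spec  = 0.9   # specificity, P(negative | healthy)

def posterior(lik_sick, lik_healthy):
    """Bayes' rule for P(disease | evidence)."""
    num = lik_sick * prior
    return num / (num + lik_healthy * (1 - prior))

# One positive test result.
one = posterior(sens, 1 - spec)

# The same reading counted twice under a (wrong) independence
# assumption: likelihoods are multiplied, so the correlated evidence
# is double-counted and the posterior is inflated.
naive_two = posterior(sens**2, (1 - spec)**2)

print(f"correct posterior: {one:.2f}")
print(f"naive double-counted posterior: {naive_two:.2f}")
```

The correct posterior after one positive result is 0.50; naively multiplying the duplicated evidence pushes it to 0.90, a confidently wrong answer produced by the independence assumption alone.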

In a sense, this is familiar territory in medical science: there are many phenomena and relationships that have been observed for decades but are still poorly understood at the biological level. Paracetamol is one of the most popular painkillers, yet its mechanism of action is still actively debated. Practitioners may reach for whatever tool is most effective, regardless of whether it rests on a deep scientific understanding. Fans of the Copenhagen interpretation of quantum mechanics might rephrase that as "Shut up and heal!"

Of course, there is an ongoing debate in this area about whether such an approach risks overlooking a deeper understanding that would ultimately prove more fruitful, for example in the search for new drugs.

Beyond the philosophical flourishes, there are practical problems: if no one understands a black-box medical algorithm, how do we approach clinical trials and regulation?

Regulators may require transparency about how an algorithm functions: what data it looks at, and the threshold values on the basis of which it draws conclusions or gives advice. But this can conflict with the profit motives and culture of secrecy in medical start-ups.

One solution might be to set aside algorithms that cannot explain themselves and rely only on well-understood medical science. But that could prevent people from reaping the benefits of the useful work such algorithms do.

Evaluation of algorithms

New healthcare algorithms cannot do what physicists did with quantum mechanics, because quantum mechanics was not refined out in the field; many algorithms improve precisely by working in the field. So how do we choose the most promising approaches?

Creating a standardized system of clinical trials and testing that applies equally to algorithms that work in different ways, or use different input data, will be challenging. Clinical trials that use small samples, for example for algorithms that try to personalize treatment for individuals, will also be challenging. With small samples and a weak scientific understanding of what is happening, it may be impossible to determine whether the algorithm has succeeded or failed: it may be good in general but perform badly on a particular case.
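The small-sample problem can be quantified with a quick simulation (the accuracy and trial sizes are assumed for illustration). An algorithm whose true accuracy is 80% will produce wildly different measured accuracies across small trials, because each evaluation is effectively a binomial draw:

```python
import numpy as np

rng = np.random.default_rng(1)
true_accuracy = 0.8   # assumed "real" performance of the algorithm
trials = 10_000       # number of simulated clinical evaluations

# Measured accuracy on a small trial vs. a large one: each trial
# counts correct diagnoses as a binomial draw, so the accuracy
# estimate is itself a random variable.
small = rng.binomial(20, true_accuracy, trials) / 20
large = rng.binomial(2000, true_accuracy, trials) / 2000

print(f"n=20:   spread of accuracy estimates (std) = {small.std():.3f}")
print(f"n=2000: spread of accuracy estimates (std) = {large.std():.3f}")
```

With 20 patients, the standard deviation of the estimate is around 0.09, so the same algorithm can look anywhere from mediocre to near-perfect; with 2000 patients it shrinks to under 0.01. Judging a personalized-medicine algorithm on a handful of cases is closer to reading noise than measuring performance.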

Add ongoing training to the mix and the picture becomes even more complicated. "More importantly, the ideal 'black box' algorithm is plastic and constantly updated, so the traditional model of clinical trials is inappropriate, because it relies on a static product subject to a stable evaluation."

The whole system of medical and clinical trials will need to be adapted.

Achieving balance

The history of AI in healthcare reflects the history of artificial intelligence in many respects. It is no coincidence that IBM tried to transform the healthcare field with its Watson artificial intelligence.

A balance will have to be found. We must find a way to process big data, to harness the formidable power of neural networks, and to automate reasoning, while remaining aware of the shortcomings and biases of this approach to problem-solving.

We should welcome these technologies, because they can be a useful complement to the skills, knowledge and understanding of the people who use them. Like neural networks, our industry will need to keep learning as this collaboration expands in the future.