The future is near: how will artificial intelligence treat us?

Is artificial intelligence better than a “human” doctor? Why do overworked medical workers still distrust AI? Is a symbiosis of natural and artificial intelligence possible in medicine, and what do morality and medical ethics have to do with it? Such unconventional questions are raised ever more often in the medical community as it discusses the high-tech development of the industry. Dina Filyushina, deputy director for strategic development of regional healthcare solutions at BARS Group JSC, discusses the pros and cons of using AI in medicine in her author's column for Realnoe Vremya.

Natural intelligence and a burnt-out doctor

In today's healthcare system, an ordinary Russian doctor is chronically overloaded. What must they manage to do? Take a clinical history, identify disease risks, prescribe the right treatment, see every patient while giving each one attention, sign documents with an electronic signature, follow clinical guidelines, and comply with the standards and procedures of medical care. The doctor needs to be like a six-armed deity, and all of this within the extremely compressed time allotted for an appointment. And overwork, as we know, leads to professional burnout.

Natural intelligence, that is, human intelligence, is capable of a great deal: synthesising new knowledge, making decisions based on values and meanings, bearing social and professional responsibility, and constantly expanding its professional horizons.

A person can think creatively, producing qualitatively new solutions: not only drawing on previous experience, but also using abstractions to build models of the future, create concepts, and weigh theories and assumptions. A person sees a professional problem from different angles and applies a cross-disciplinary approach. For example, when making a diagnosis, a doctor considers not only data from their own speciality but also from related disciplines. They also take into account the patient's emotional state and lifestyle, and remember that the patient may be malingering or that symptoms may be distorted by concomitant diseases. With all this in mind, the diagnosis will be far more accurate.

Photo: Rinat Nazmetdinov/realnoevremya.ru

And then there is intuition. Many of us have probably experienced a situation where all the data and figures point one way, yet there is a clear inner feeling that a different choice should be made, and in the end that decision turns out to be right. Intuition is an unconscious process based on previous experience and on the analysis of a broader set of factors hidden from conscious awareness. For now, it remains a purely human trait and skill.

But natural intelligence has not only strengths but also weaknesses, the notorious human factor. Any biological organism is subject to fatigue, which leads to loss of concentration and the risk of error.

Technologies at the service of medicine

Unfortunately, human thinking needs a lot of time to process and analyse large volumes of information. A huge stream of incoming data, an array of historically accumulated data (case histories, previous examinations, the dynamics of a patient's health indicators), a multitude of factors to weigh, and a catastrophic lack of time add up to an unbearable burden for an ordinary doctor.

A medical worker has to absorb, analyse and compare all of this in order to make a decision, with only minutes, or even seconds, to do so. And if the specialist is out of sorts or feeling unwell, the accuracy of their diagnosis drops significantly.

I would like to touch separately on the potential benefits of using AI in medicine. Why potential? Because there are still few AI systems that quickly identify risks while taking into account many input parameters, and the procedure for applying them has not yet been fully regulated.

In the future, AI and neural networks are capable of transforming modern healthcare: changing the diagnostic system for the better and improving the quality of medical services while reducing costs. Artificial intelligence learns from clinical data and patients' medical histories. It takes many input parameters into account in its calculations and is potentially able to determine disease risks quickly and predict how a disease will progress.

On morality and economic expediency

A healthcare professional should make decisions based on facts, and these decisions should be rational and practical. But no less important are the values on which the choice rests: ethics, morality, ideas about good and evil and about what is good for the patient.

Sometimes the rational solution seems to be to give up the further struggle for the patient's life and health: cost, resource intensity and a poor prognosis are rational parameters. But fighting for the patient's life, for their quality of life, for relief from suffering is a choice that is not always economically justified. It is a human choice.

There is the desire to help, and there is hope. But what if it does not work out? The performance indicators will suffer. These are moral, organisational and methodological problems, and they belong to people. Can artificial intelligence help here? That depends on how the tool is configured and what result it is aimed at. And we should not forget that the tool is just a set of algorithms whose behaviour depends on the volume and quality of the input data, the settings, the training and the goals it is given. In itself, it has no moral criteria; those are set by people.

Photo: Maria Gorozhaninova/realnoevremya.ru

On the complexity of using AI in practical medicine

For such systems to help, an initial knowledge base is needed on which the AI can draw to become a consultant or assistant in human decision-making. This requires experts to take part in filling that base: we need labelled data samples prepared with their help for training neural networks, along with digitised procedures and standards of medical care and clinical guidelines.

For now, even the data already held in medical information systems is difficult to analyse.

How does a doctor enter data into the system during an appointment? With the limited time available, the notes often contain broken sentence structure, professional abbreviations, non-standard symbols and missing spaces between words. The doctor understands what they wrote, and another doctor will understand or guess, because this is their subject area and they have learned to read it. Unfortunately, for medical data analysis systems these are serious obstacles that degrade the results AI produces for us.
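To give a purely illustrative sense of the pre-processing such notes may need before any analysis, here is a minimal sketch in Python; the abbreviation dictionary and the sample note are invented for the example, not taken from a real system.

```python
import re

# Hypothetical dictionary of common clinical shorthand (illustrative only)
ABBREVIATIONS = {
    "pt": "patient",
    "dx": "diagnosis",
    "hx": "history",
    "bp": "blood pressure",
}

def normalise_note(raw: str) -> str:
    """Roughly clean a free-text clinical note for downstream analysis."""
    text = raw.lower()
    # Replace non-standard symbols and stuck-together punctuation with spaces
    text = re.sub(r"[^a-z0-9./ ]+", " ", text)
    # Expand known abbreviations token by token
    tokens = [ABBREVIATIONS.get(tok, tok) for tok in text.split()]
    # Collapse repeated whitespace
    return " ".join(tokens)

print(normalise_note("Pt c/o headache;BP 140/90,hx of migraine"))
# -> "patient c/o headache blood pressure 140/90 history of migraine"
```

Even this toy example shows how much of the doctor's original phrasing has to be reconstructed before a machine can work with it.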

Another difficulty is the large amount of data required for training. Ideally, all case histories should be digitised and the information structured. It should also be borne in mind that treatment methodology, the collection of reporting data and the list of information recorded in medical documentation keep changing, and for AI developers this means the systems will need to be retrained from time to time. The challenge is learning how to do that quickly.

So, for AI to work correctly, we need “clean” machine-readable data; data samples prepared and labelled by highly qualified specialists for training neural networks; and digitised procedures of medical care, clinical guidelines and standards of care.
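As a sketch of what “clean, labelled, machine-readable” might mean in practice, a single training record could look roughly like this; the field names, values and diagnosis label are assumptions made for illustration, not a real standard.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class LabelledCase:
    """One machine-readable training example (illustrative field names)."""
    patient_age: int
    systolic_bp: int          # mmHg
    diastolic_bp: int         # mmHg
    glucose_mmol_l: float     # fasting blood glucose
    note_text: str            # already normalised free text
    diagnosis_label: str      # assigned by a qualified specialist

case = LabelledCase(
    patient_age=58,
    systolic_bp=150,
    diastolic_bp=95,
    glucose_mmol_l=6.8,
    note_text="patient complains of headache and dizziness",
    diagnosis_label="arterial hypertension",
)

# Serialised form that a training pipeline could consume
print(json.dumps(asdict(case), ensure_ascii=False, indent=2))
```

The point is not the specific fields but the discipline: every example has the same structure, consistent units and a label assigned by a specialist.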

When a methodology changes, medical information systems begin to fill with the new data only after the approved changes to the methodology of diagnosis, treatment, patient observation and so on appear, which further delays the training of AI systems.

Symbiosis or confrontation?

Through the eyes of a developer, artificial intelligence is a set of algorithms and mathematical methods that can be trained on data, analyse images, look for non-obvious connections and similarities in huge volumes of data, and detect differences where natural intelligence may simply not notice them. But for a doctor, the work of artificial intelligence is a black box: the doctor does not see the “thinking” of the system or how the AI arrived at its result.

The trust of medical professionals in AI can be built by explaining the basic algorithms behind it and the data on which the systems are trained, and by involving doctors more widely in the working groups that prepare data for training neural networks.
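One toy illustration of what such an explanation could look like (not a description of any particular product): a small, interpretable model can be trained and its decision rules printed in a form a clinician could inspect and challenge. The feature names, values and labels below are invented for the example.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: [age, systolic blood pressure, fasting glucose]
X = [
    [45, 120, 5.0],
    [60, 150, 6.9],
    [52, 138, 5.6],
    [70, 165, 7.4],
]
y = [0, 1, 0, 1]  # 0 = low risk, 1 = elevated risk (labels assigned by specialists)

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Human-readable rules that can be discussed with doctors
print(export_text(model, feature_names=["age", "systolic_bp", "glucose"]))
```

Real diagnostic systems are far more complex, but the principle is the same: the more of the model's reasoning can be shown in the doctor's own terms, the easier it is to trust.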

The basic algorithms of artificial intelligence should also be explained as part of university training in digital departments and in professional retraining courses.

So, to answer the question: is a symbiosis of doctors and AI possible? Yes, provided we divide the work between the two kinds of intelligence: leave strategic and creative tasks to natural intelligence and use artificial intelligence as a tool for routine tasks, in order to reduce the burden on doctors.

AI should be seen as a system for supporting medical decision-making: not replacing the doctor, but drawing their attention to particular points.

Once we overcome all these difficulties, we will be able to become friends and partners with artificial intelligence.

Dina Filyushina

The author's opinion may not coincide with the position of the editorial board of Realnoe Vremya.