How AI disrupts the healthcare industry
Time is one of the most valuable resources in hospitals. Patients need thorough consultations and treatments, and hospital staff need a more humane workload. Still, up to two thirds of clinicians’ time is spent on documentation. Studies show that some specially trained algorithms can already predict a patient’s diagnosis more accurately than a doctor, so it seems obvious that using these solutions to assist medical staff can only be beneficial. Scientists have also trained an AI to predict patient mortality: the model reached 92% forecasting reliability, compared with 85% for human experts. Fed with enough data, these machine learning technologies could read brain scans to detect tumors and help medical staff make educated decisions about the most effective treatment.
At DMEA 2019, Europe’s largest conference for the health IT industry, Dr. Philipp Daumke of the text mining company Averbis identified four fields of application for AI in healthcare: clinical research, patient recruitment for clinical studies, medical coding and billing, and decision support for diagnosis and therapy. In short: these machine learning algorithms could not only free the medical workforce from time-consuming paperwork, but also cut costs dramatically by avoiding billing errors and bureaucratic inefficiencies. Ultimately, that means lower costs and better patient care.
What AI and machine learning are capable of – and what they are not
These innovations are first steps into uncharted territory. Before we venture any further, we not only have to work on the technology itself, but also think about ethical, security and feasibility concerns.
Most current AIs still struggle with the flaws that humans build into them. Biased face recognition and a limited ability to understand actual spoken language (including dialects and colloquialisms) are only topped by major security setbacks. Most of these visionary technologies are currently still closer to machine learning than to actual artificial intelligence. And that is their core strength: machines help us analyze massive amounts of data because they learn faster and more systematically than we do.
Unstructured data: the main problem of current AI implementation
As much potential as there is for AI in healthcare, there are just as many hurdles to overcome. The mere fact that existing data is not yet structured, is still segmented into silos and is stored in many different ways means that a first-tier AI has to organize the data before it can be used. According to Dr. Philipp Daumke, there will be 2-3 zettabytes of healthcare data by 2020, yet 80 percent of it is still unstructured. Diagnosis codes differ from country to country, and often even from hospital to hospital. So just to decipher each data set, adapt it and bring everything together, a highly skilled AI needs to be put in place. Standardizing the coding in every hospital would mean additional work for doctors and nurses and is unlikely to be implemented efficiently. Difficulties also lie in the details: many diagnoses are written in negated form (such as “cannot be ruled out”), which is hard for machines and algorithms to understand. That is one of the reasons why IBM Watson was dropped from German hospitals: it simply wasn’t effective under real-life conditions. The challenge is to build a platform and standardize interfaces, regulated by policy makers (EU, WHO), to achieve scalability and portability of medical data.
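To see why negated findings trip up naive text mining, consider a minimal, NegEx-style sketch in Python (the cue lists and function name are hypothetical, chosen purely for illustration, not taken from any real clinical NLP product): a system that only matches keywords would read “pneumonia cannot be ruled out” as a plain mention of pneumonia, while a cue-aware pass can at least flag it as uncertain rather than affirmed or negated.

```python
# Illustrative sketch only: real clinical negation detection (e.g. NegEx-style
# systems) uses far richer cue lists, scoping rules and context windows.
NEGATION_CUES = ["no evidence of", "denies", "without", "ruled out"]
UNCERTAINTY_CUES = ["cannot be ruled out", "possible", "suspected"]

def classify_finding(sentence: str, finding: str) -> str:
    """Label a finding in a sentence as absent, uncertain, negated or affirmed."""
    s = sentence.lower()
    if finding.lower() not in s:
        return "absent"
    # Check uncertainty cues first: "cannot be ruled out" contains the
    # negation cue "ruled out", so the order of these checks matters.
    if any(cue in s for cue in UNCERTAINTY_CUES):
        return "uncertain"
    if any(cue in s for cue in NEGATION_CUES):
        return "negated"
    return "affirmed"
```

Even this toy version shows the core difficulty: the same word, “pneumonia”, must be interpreted three different ways depending on the surrounding phrasing, and a naive keyword match would count all three sentences as positive mentions.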
Rightfully so, these institutions also consider the security aspects of collecting all patients’ data for analysis. In fact, patients should stay in charge of their medical data at all times. But this is more complicated than one might think: to create an interdisciplinary data lake, we have to merge many individual data silos and overcome institutional barriers, each governed by its own legal framework (from individual hospital policies to state and national law). Wherever the data is stored, whether in data centers or in the cloud, it needs to be absolutely secure from malicious actors. And can that level of security even realistically be ensured? We also need to talk about “explainable AI”: how machine learning processes can be made more transparent and comprehensible, so that we don’t build yet another black box, neither for patients nor for doctors. Maybe we need a personal health data manager, which could be a person or another AI.
Ethical concerns & responsibility: Who makes the call?
Only once we have conquered these practical, real-life challenges can we move on to the next pressing questions: Can doctors always rely on the AI’s decision, or do they need to double-check? Who is responsible if the AI makes a wrong guess? These questions of medical ethics need to be explored. Despite all of these concerns, there is a consensus that AI in healthcare is not only our future but, in some cases, already our present. We need to overcome the challenges of data security, validity and responsibility. As Dr. Peter Gocke, Chief Data Officer at Charité Berlin, said at DMEA 2019: “AI should not replace doctors. The future goal is to replace doctors who avoid AI by doctors who use AI.” If we continue to set sensible standards for its use, doctors can improve their own capabilities as well as their patients’ lives. So it doesn’t look like doctors will be replaced by robodocs anytime soon. It is still the human who makes the decision and takes responsibility.