
AI and Healthcare – Challenges and Solutions

Artificial Intelligence (AI) refers to a broad umbrella of technologies that enable machines to perform tasks that formerly only humans could carry out. AI is a key enabler of what is being called the Fourth Industrial Revolution, and as such its use will become prevalent right across society, including the healthcare sector. There are many challenges to applying AI, some of which are more salient depending on the area of application. Here I discuss some of the challenges to implementing AI in healthcare specifically, along with some possible solutions.

Some people find the concept of AI a little nebulous, so a definition is a good starting point, and there are different ways to define AI. The European Commission's High-Level Expert Group on Artificial Intelligence defines AI as …

(referring to) systems designed by humans, that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions.

As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search and optimization) and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).

High-Level Expert Group on Artificial Intelligence (2018) A Definition of AI: Main Capabilities and Scientific Disciplines, European Commission, Brussels.

This is a long description of what AI is, but I like it since it takes a functional approach to defining AI. One slightly puzzling aspect of this definition, to me anyway, is why it considers AI to refer only to “systems designed by humans”. For now this is mainly still the case, but we are approaching an era in which self-programming AI might be expected to proliferate. This will bring additional challenges, particularly in the areas of ethics and control. Some have argued that the type of superintelligence that could arise from an AI able to recursively improve its own capabilities and intelligence could pose an existential threat to humanity, so consideration of the implications should begin as soon as possible, even if the technology to develop this type of AI is not yet available. In the main, though, the EU definition is a useful one: it avoids the mistake of many popular definitions, which conflate human and machine intelligence and often fail to fully address the scope of AI.

AI can be divided into two broad categories: narrow (weak) AI and broad (strong) AI. Narrow AI is designed to perform a specific task or tasks in a single domain and does not go beyond these. Broad or strong AI can perform a wide range of tasks using a range of cognitive processes; its physical embodiment is the kind of android or intelligent robot usually presented in films and TV shows as an example of AI. Strong AI is likely still many years away from being developed, and in this article I am discussing narrow AI, since this is the type of AI currently being successfully developed and implemented in healthcare settings.

Applications of AI to Healthcare

Medical knowledge is growing all the time as scientific knowledge accumulates, yet the cognitive capacity of physicians remains more or less fixed. The increasing specialisation of medicine and healthcare provides opportunities to use technology to augment medical practitioners' work and, in some cases, to perform tasks autonomously. Technology such as AI can help to improve access to healthcare for all and facilitate precision medicine.

The application of AI to healthcare is not new. In the 1970s and 1980s there were attempts to encode the knowledge of medical practitioners into expert systems that could provide computerised decision support to clinicians. MYCIN, which was used to diagnose certain infections of the blood and recommend appropriate antibiotics, is an example of such an early system. Such systems can be expensive to create and maintain. Expert systems are still created and used in medicine today but have arguably been overshadowed in recent years by machine learning based AI, which, rather than being explicitly programmed with knowledge rules, learns from the data it is given (a short code sketch contrasting the two approaches follows the list below). To get a sense of the breadth of use of AI in medicine today, here are some applications identified in a paper by Briganti & Le Moine (2020):

  • Cardiology (early detection of atrial fibrillation, prediction of cardiovascular disease)
  • Pulmonary Medicine (interpretation of pulmonary function tests)
  • Endocrinology (real-time monitoring of blood glucose levels)
  • Nephrology (prediction of the decline of glomerular filtration rate in patients with kidney disease, establishing risk of progressive IgA nephropathy)
  • Gastroenterology (endoscopy image processing, detection of abnormal structures, diagnosis of gastroesophageal reflux disease and atrophic gastritis, prediction of outcomes for gastrointestinal bleeding, esophageal cancer, inflammatory bowel disease and metastasis in colorectal cancer)
  • Neurology (epileptic seizure monitoring and management; quantitative assessment of gait, posture and tremor in patients with neurological conditions such as Parkinson's disease)
  • Histopathology (computational diagnosis of cancer)
  • Radiology (image-based diagnosis)
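
To make the expert system versus machine learning contrast mentioned above concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The rule, the thresholds and the data are invented purely for illustration and have no clinical validity; the point is only that one paradigm encodes knowledge by hand while the other learns a similar decision from labelled examples.

```python
# Hypothetical sketch: hand-coded rule vs learned model.
# All thresholds and data are invented for illustration only.

from sklearn.linear_model import LogisticRegression

# Expert-system style: knowledge is encoded explicitly as rules.
def expert_rule(temp_c: float, wbc_count: float) -> str:
    if temp_c > 38.0 and wbc_count > 11.0:
        return "suspect infection"
    return "no action"

# Machine-learning style: a similar decision is learned from
# labelled examples rather than programmed explicitly.
X = [[36.8, 7.2], [39.1, 14.5], [38.4, 12.1], [37.0, 8.0]]  # [temp, WBC]
y = [0, 1, 1, 0]                                            # 1 = infection

model = LogisticRegression().fit(X, y)
print(expert_rule(39.1, 14.5))        # rule-based output
print(model.predict([[39.1, 14.5]]))  # learned output: array([1])
```

The maintenance burden differs accordingly: updating the expert system means rewriting rules by hand, whereas the learned model is updated by retraining on new data.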

Beyond these clinical specialties, AI has also been used in healthcare settings to streamline administrative processes, to extract structure and meaning from unstructured clinical notes, in telemedicine for automated voice diagnosis of conditions ranging from coronary heart disease to mental health disorders, to develop wearable technologies that monitor health, for drug design and development, and more. AI has also contributed to the development of medical robots, such as robots that can perform some caregiving tasks, and surgical robots. There are many applications of AI in healthcare, but not all of them are entirely successful, for a number of reasons.

Challenges and Solutions to the Implementation of AI in a Healthcare Setting

Availability of Data

Creating a predictive machine learning model requires a large amount of good quality, correctly labelled data, but publicly available healthcare datasets are in short supply and sometimes of poor quality. Synthetic data can be an alternative in some cases, but it brings its own challenges: for example, when building a classifier to detect rare outcomes, the proportion of the rare class in the synthetic dataset may be boosted to increase statistical power, but the resulting classifier then fails to generalise well to 'live' data. The lack of publicly available data and the possible unsuitability of synthetic data mean that those building AI applications need access to data often held by public healthcare organisations, yet making this data available brings privacy concerns, legislative issues and technical challenges.
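
As a concrete illustration of that rebalancing problem, here is a minimal sketch in which simulated data stands in for both the synthetic training set and the live population; all numbers are illustrative assumptions. A classifier trained with the rare class boosted to 50% will, on average, substantially overestimate risk in a population where the true prevalence is 1%, unless its output is recalibrated.

```python
# Illustrative sketch: a model trained on a rebalanced (synthetic-style)
# dataset miscalibrates on live data with the true, low prevalence.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(n, prevalence):
    # One noisy feature whose mean is higher for the positive class.
    y = (rng.random(n) < prevalence).astype(int)
    x = rng.normal(loc=y.astype(float), scale=1.0).reshape(-1, 1)
    return x, y

# Training set with the rare class boosted to 50% "for power"...
X_train, y_train = simulate(10_000, prevalence=0.5)
model = LogisticRegression().fit(X_train, y_train)

# ...applied to a live population whose true prevalence is 1%.
X_live, y_live = simulate(10_000, prevalence=0.01)
mean_risk = model.predict_proba(X_live)[:, 1].mean()
print(f"mean predicted risk: {mean_risk:.3f}, "
      f"true prevalence: {y_live.mean():.3f}")
# The mean predicted risk is far above the true prevalence, so the
# model would generate many false positives unless recalibrated.
```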

The findings of the OECD 2019–2020 survey of health data development indicate that Ireland is somewhat behind the curve when it comes to implementing policies, regulations and practices that align with best practice health data governance and that promote the development, use, accessibility and sharing of key national health datasets. This is why anyone working regularly with health data in Ireland will surely welcome the recent announcement that the Irish Centre for High End Computing (ICHEC) has been awarded funding by the Health Research Board (HRB) to develop and test a proof-of-concept technical infrastructure for a national Health Data Platform. The DASSL (Data Access, Storage, Sharing and Linkage) model has been developed by the HRB to facilitate the linking and reuse of health data in a safe manner that protects patient anonymity. DASSL is being developed to improve access to data for research and policy making generally, rather than specifically for AI research, and even after a proof of concept has been developed there is still significant work to be done before a national roll-out can occur. Nevertheless, DASSL will be a hugely positive development for health research and practice, including applications of AI to healthcare, and by extension for patient health outcomes.

Another element essential to facilitating access to datasets that may otherwise be siloed is the development of a national patient identifier, which every user of health and social care services in Ireland would receive. This has been championed by a number of organisations, and if one single thing could be implemented to facilitate the linkage and use of health data, this should be top of the list.

Validating Performance

AI for healthcare is a high-stakes challenge, since the consequences of an error could be deleterious to patient health. The ability to robustly validate AI is essential both from a patient safety perspective and for building trust in the technology.

The National Health Service (NHS) in the United Kingdom, through its AI laboratory, recently produced a blueprint for validating AI tools. Their process first involves creating a validation or test dataset from data not used to train the algorithm. The validation is run in a secure cloud-based environment to protect the intellectual property of the developer, and in addition to standard measures such as sensitivity and specificity, the variance of the machine learning model is tested and its performance on various subgroups of the data is checked. Finally, the results are reported to the developers so that the model can be improved. Such a blueprint could usefully inform the development of a similar process in Ireland.
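
A sketch of the kinds of checks described above, run on a held-out validation set, might look like the following. The arrays here are illustrative stand-ins, not NHS data, and this is not the NHS pipeline itself; it simply computes sensitivity and specificity overall and then per subgroup, to surface uneven performance.

```python
# Illustrative validation checks on a held-out dataset.
# Labels, predictions and subgroups below are invented stand-ins.

import numpy as np
from sklearn.metrics import confusion_matrix

y_true   = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])  # ground truth
y_pred   = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])  # model output
subgroup = np.array(["F", "F", "M", "M", "F", "M", "F", "M", "F", "M"])

def sens_spec(y_true, y_pred):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return tp / (tp + fn), tn / (tn + fp)

# Standard measures on the whole validation set.
sens, spec = sens_spec(y_true, y_pred)
print(f"overall: sensitivity={sens:.2f}, specificity={spec:.2f}")

# The same measures per subgroup, to detect uneven performance.
for g in np.unique(subgroup):
    mask = subgroup == g
    s, sp = sens_spec(y_true[mask], y_pred[mask])
    print(f"subgroup {g}: sensitivity={s:.2f}, specificity={sp:.2f}")
```

In this toy example the model performs perfectly on one subgroup and markedly worse on the other, exactly the kind of disparity the blueprint's subgroup checks are designed to catch. Model variance could be assessed similarly by repeating these measures over bootstrap resamples of the validation set.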

Lack of Trust in AI

A lack of trust in AI among physicians has been identified as a significant barrier to the adoption of AI in healthcare. In a recent article in The Lancet Digital Health, the authors argue that new approaches to medical education are needed that emphasise the digital literacy of physicians and integrate subjective views of illness and patient attitudes towards AI. If AI can be robustly validated, perhaps using a methodology similar to the NHS blueprint outlined above, this should also increase physician trust and confidence in the technology.

One of the key issues contributing to this lack of trust is 'explainability'. The success of deep learning and neural networks has contributed much to the surge of interest in AI in recent years, but such methods are not easily explainable, at least in the sense of explaining how the AI came to a decision in a particular instance. Some AI providers have been criticised for selling 'explainable' AI that does not do what it purports to, and physicians are reluctant to adopt technology whose decision making process is not transparent.
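
For context, post-hoc explanation techniques do exist. One widely used example is permutation importance, sketched below on synthetic data: it scores each input feature by how much randomly shuffling that feature degrades the model's performance. Note that this gives only a global picture of which features matter; it does not explain an individual decision, which is precisely the gap described above.

```python
# Sketch of a post-hoc global explanation for an opaque model,
# using permutation importance on synthetic (invented) data.

from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic dataset: 5 features, only 2 actually informative.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A small neural network: accurate, but opaque in its reasoning.
model = MLPClassifier(max_iter=1000, random_state=0).fit(X_tr, y_tr)

# Score each feature by the performance drop when it is shuffled.
result = permutation_importance(model, X_te, y_te,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```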

My opinion is that the importance of explainability can sometimes be overstated. Provided the accuracy of the AI has been robustly validated and an informed patient has consented to its use, explainability is a 'nice to have' rather than an essential. There are, for example, treatments in use today whose mechanism of action is not known; is it such a leap to suggest that we could use AI diagnostic tools for which the features of the data contributing to a diagnosis are not fully understood?

Ethical Issues

Much of the above has already touched on ethical issues. Privacy and confidentiality of patient data are essential, which is why initiatives such as DASSL and a national patient identifier are vital to ensure that useful knowledge can be leveraged from existing and future health datasets without compromising patient privacy. It is also important that medical AI is comprehensively tested and validated on unbiased data before being deployed, and that patients are well informed about its use and have their voices heard. Legislative challenges remain in ensuring that AI is safe and effective and that data privacy concerns are addressed. And while in many ways AI can be seen as enabling access to healthcare, it could also widen the gap between countries that have access to the technology and those that do not.
