The full potential of artificial intelligence (AI) in healthcare is not yet known, so ethical concerns arising from its widespread use need to be anticipated. The European Commission published its Ethics Guidelines for Trustworthy AI in April 2019. The guidelines define four ethical principles: respect for human autonomy, prevention of harm, fairness, and explicability. Seven requirements follow from them: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability.
The ethical implications of AI in cardiology will depend on its clinical role. Current applications include machine learning-based risk prediction models that combine multiple sources of quantitative data to find correlations; these are an incremental development on top of existing tools. The potential for disruptive innovation comes with imaging data and natural language processing.
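As an illustration of the kind of model described above, the sketch below trains a classifier on several quantitative inputs to produce a risk estimate. The feature names, the synthetic data, and the choice of scikit-learn's gradient boosting classifier are assumptions made for the example, not a description of any specific clinical tool.

```python
# Minimal sketch of a machine-learning risk prediction model combining
# several quantitative inputs (features and data are hypothetical).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical quantitative predictors
X = np.column_stack([
    rng.normal(60, 10, n),    # age (years)
    rng.normal(130, 15, n),   # systolic blood pressure (mmHg)
    rng.normal(3.0, 0.8, n),  # LDL cholesterol (mmol/L)
    rng.lognormal(5, 1, n),   # NT-proBNP (pg/mL)
])
# Synthetic outcome loosely correlated with the predictors
risk = 0.03 * (X[:, 0] - 60) + 0.02 * (X[:, 1] - 130) + 0.4 * (X[:, 2] - 3.0)
y = (risk + rng.normal(0, 1, n) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```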
Discussion points
Is it AI or machine learning?
- Will we ever have truly artificially intelligent machines we can rely on in medicine, or is it more accurate to talk about machine learning? It depends on how intelligence is defined. Is it only the ability to acquire and apply knowledge and skills? Or does it also demand handling abstract concepts and using knowledge, not just pattern recognition, to manipulate the environment? For now, human oversight remains a requirement.
How should we innovate responsibly?
- How do we ensure that machine learning produces results that are reproducible, reliable, independently validated, unbiased, and generalisable? The major bottleneck is insufficient data to create robust algorithms.
- How should we obtain consent from patients to share data for large databanks? Consent creates bias – so-called ‘racial AI’ – because algorithms are then based only on the groups that consent. An alternative is federated learning: models are trained locally at each site and only the model weights are shared and aggregated across sites (a minimal sketch follows this list).
- AI in medicine often originates with informatics experts rather than with clinicians starting from a hypothesis and a clinical problem. How can we bring these communities together?
- AI for imaging assistance does not raise ethical issues because the result is shown to a physician to validate or dismiss it. Problems arise with integrative systems that use genomics, medical records, imaging, and device data to produce a prediction score. It is virtually impossible to cross-check plausibility because there are so many dimensions. These systems must be well curated, validated, and overseen by regulatory bodies.
- Allowing machines to make the final decision could expand access to excellent clinical tools in underserved areas worldwide. Are we ready for that in situations where there is no doctor to validate the result?
- Are the results from machine learning algorithms sufficiently reproducible and generalisable to be used in less advantaged areas of the world?
- With machine learning the medical outcome for the patient might not be that different. But what about the psychological outcome?
- AI is good at recognising patterns but there are patterns we do not want to replicate – for example social inequality in health.
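The federated learning approach mentioned above can be illustrated with a minimal sketch: each site fits a simple logistic regression on its own data, and only the weights leave the site, where a central server averages them. The three "hospital sites", their synthetic data, and the plain-NumPy training loop are assumptions for illustration only.

```python
# Minimal sketch of federated averaging: each site trains locally on its own
# data and only the model weights are shared and averaged centrally.
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, epochs=20):
    """One site's local training: logistic regression by gradient descent."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # update uses local data only
    return w

def make_site(n):
    """Synthetic local dataset for one hypothetical hospital site."""
    X = rng.normal(size=(n, 3))
    y = (X @ np.array([1.0, -0.5, 0.3]) + rng.normal(0, 0.5, n) > 0).astype(float)
    return X, y

sites = [make_site(n) for n in (200, 150, 300)]

# Federated averaging: the server only ever sees weights, never patient data.
global_w = np.zeros(3)
for _ in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    global_w = np.average(local_ws, axis=0, weights=sizes)  # size-weighted mean

print("Global model weights after federated training:", global_w)
```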
How do we ensure transparency and generate trust?
- Physicians need to explain clinical reasoning to patients. How do we ensure that machine learning tools are sufficiently transparent for clinicians to understand and trust the algorithms?
- Machine learning can be supervised: humans define the labels to be predicted, and the algorithm selects the features that perform best. Unsupervised learning creates new possibilities by identifying features that were previously unrecognised (a short sketch contrasting the two follows this list). Neural networks, a sub-category of machine learning, may operate as a “black box”.
- Might we deskill as physicians because we cannot explain the reasons behind a decision? This was a concern when guidelines were introduced as a recipe for treatment, but it did not happen. There should be no problem if AI is used to support medical decisions rather than supplant them. An issue may arise if deep learning identifies features not yet understood in the pathophysiology of a disease.
- Many clinicians do not understand the maths behind existing clinical risk scores, but they are used, and patients accept the results.
- Should patients have a right to reject decisions based solely on automated processing? Yes, according to the European Parliamentary Research Service. Patients still expect clinicians to use their judgement and consult patients for the final decision.
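To make the supervised/unsupervised distinction concrete, the sketch below contrasts a model trained on human-defined labels with an unsupervised clustering of the same data. The two synthetic patient groups and the choice of logistic regression and k-means are illustrative assumptions.

```python
# Sketch contrasting supervised learning (human-defined labels) with
# unsupervised learning (structure found without labels). Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Two synthetic patient groups with different feature distributions
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)   # labels defined by humans (e.g. diagnosis)

# Supervised: the algorithm learns to predict the label we chose.
clf = LogisticRegression().fit(X, y)
print("Supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels given; the algorithm groups patients by similarity,
# potentially revealing structure that was not recognised beforehand.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", np.bincount(clusters))
```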
Who is responsible if there is an error?
- Who is responsible if AI makes a mistake? This is not an issue for AI that generates more data, such as automatic chamber segmentation, because the output will be reviewed. AI for prediction and treatment modification should be regulated.
- What if the clinician ignores AI’s suggestion and AI was right? Is the doctor responsible?
- Do any machine learning systems state how certain they are of a prediction? The probabilities they report are based on the training dataset and mean very little when the model is applied to other data (see the sketch after this list).
- Digitalisation can increase the human touch because patients become partners in the care pathway. They have access to their data, come with questions, and stimulate interaction. In addition, digitalisation leads to improved coding and structured care, which benefits even those on the wrong side of the digital divide.
- We need a plausibility check by somebody who understands the concepts behind the disease. No algorithm can distinguish between a statistical association, a clinical association, and causality. With big data there is a risk of false positive correlations.
- Who is liable for AI decisions given directly to patients?
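The point about probabilities can be illustrated with a small sketch: a model is trained on one synthetic population and its predicted probabilities are compared with the observed event rate in a population with a different baseline risk. The populations, the outcome rule, and the logistic regression model are assumptions made purely for the example.

```python
# Sketch of why predicted probabilities reflect the training data: a model
# calibrated on one population can give misleading confidence on another.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def make_population(intercept, n=2000):
    """Synthetic population; a lower intercept means a lower baseline risk."""
    X = rng.normal(0, 1, (n, 2))
    logits = X.sum(axis=1) + intercept
    y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)
    return X, y

X_train, y_train = make_population(intercept=0.0)   # training population
model = LogisticRegression().fit(X_train, y_train)

for name, (X, y) in [("training-like", make_population(0.0)),
                     ("different",     make_population(-2.0))]:
    p = model.predict_proba(X)[:, 1]
    print(f"{name:13s} mean predicted risk {p.mean():.2f} vs observed rate {y.mean():.2f}")
```

On the training-like population the mean predicted risk matches the observed event rate; on the lower-risk population the model still reports the probabilities it learned during training, substantially overestimating risk.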
What are the regulatory priorities to ensure responsible and ethical use of AI in medicine?
- Software and prediction tools used in the EU will be regulated as medical devices and require certification from May 2020.
- Commercially available wearable technologies generate large amounts of data that remain unregulated. Who is using the data, how, and for what purpose?
- Standards and requirements must be set for introducing AI techniques into healthcare.
- Evidence is needed from randomised controlled trials, real-world registries, and international cohort studies to prove that AI methods are safe and effective for patient use.
- Applying AI trained on big data to the treatment of a single patient poses a major challenge.
Conclusion
AI in healthcare creates questions around responsibility, trust, and transparency. EU regulations are a potential safeguard. Standards are needed for introducing AI into healthcare. Clinicians must check the plausibility of associations and patients should be consulted on decisions about their care.