Introduction
e-Health has been defined as the use of information and communications technology (ICT) in support of health and health-related fields [1]. Recently, the term digital health has been introduced as a more general definition encompassing e-Health as well as emerging areas such as the use of advanced computing sciences in “big data”, genomics and artificial intelligence (AI), to support and deliver health care and to improve the health and wellbeing of people [2].
These technologies are considered promising because they would enable more personalised approaches to care, patient empowerment, improved patient safety, better communication between care providers and patients, increased access to health information, better chronic disease management and prevention, greater efficiency of the healthcare system, improved access to scarce specialist skills with reduced need for patient referral, tele-education where expertise is in short supply, connection with patients even in remote locations, and better management of the large amount of health data available today [3].
While the potential of digital health solutions is wide-ranging, their design, development, deployment, and utilisation rely on business models and use of algorithms and data that need to be considered from an ethical point of view.
Generally, in medicine and nursing the basic principles of ethics concern respect for human life, respect for human dignity, autonomy, care, justice, and maximising beneficence. Digital health solutions pursue different aims, thus extending the ethical implications beyond what has been accepted until now [4].
In addition, the availability of new technology raises specific and novel issues beyond the conventional ones (i.e., privacy, confidentiality, and informed consent). Table 1 reports, in chronological order, a selection of publications from the last three years that focus on the ethics of digital health tools; from it, it is possible to appreciate how the principles that have been addressed, and those considered a priority, have evolved over time.
Table 1. Evolution over time in selected literature of the main ethical aspects concerning digital health.
| Authors | Ethical aspects |
|---|---|
| Kleinpeter E. 2017 [13] | Data protection, equality of service availability, medicine from the art of curing to the science of measurement, changes in patient-physician relationship. |
| Terrasse M et al. 2019 [25] | Impact of social networking sites on the doctor-patient relationship, the development of e-health platforms to deliver care, the use of online data and algorithms to inform health research, and the broader public health consequences of widespread social media use. |
| Boers SN et al. 2020 [26] | Dealing with predictive and diagnostic uncertainty, roles and responsibilities of patients, role and responsibilities of physicians, patient-physician relationship. |
| Burr C et al. 2020 [27] | Privacy, autonomy, accountability, intelligibility, accessibility. |
| Fenech and Buston 2020 [28] | Relationship, data, transparency and explainability, health inequalities, errors and liability, meeting public needs, regulation, consequences of novel insights, trusting algorithms, collaborations between public and private sector organisations, developing principles and translating them into policy. |
Among these, this paper discusses three main ethical implications relevant to digital health, as well as the potential questions arising from the utilisation of AI in the healthcare setting.
Data quality
Novel mobile medical technologies for ubiquitous monitoring are now widely available; they are directly controlled and operated by the patient and often provide an onboard interpretation of the measured signals. To overcome the difficulties of integrating these systems into the clinical workflow [5], it becomes of paramount importance: 1) to know the exact claims, limits of validity, and reported accuracy for which the medical device or software has been approved; 2) to be sure that the device has been used as recommended, so that patient-collected data can be considered reliable; and 3) to know about possible software updates that could have changed the performance of the device.
The first point recalls, on the one hand, the need for more transparency in the medical device certification process [6] and, on the other, the need for clinicians to be informed about the performance and limits of usage of the devices they may recommend to patients. As an example, the Apple smartwatch software that uses an optical sensor to detect the pulse wave and look for beat-to-beat variability at rest, in order to detect possible atrial fibrillation (AF), is not designed for individuals already diagnosed with AF [7]. Likewise, the ECG app on the same device, which uses the electrical heart sensor to record a single-lead ECG and then provides a result (i.e., sinus rhythm, AF, or inconclusive), is not recommended for users with other known arrhythmias, is not designed for individuals diagnosed with AF, and cannot detect AF at heart rates above 120 bpm [8].
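As a minimal illustration of the first point, the sketch below encodes a device's published limits of validity and checks a patient-collected reading against them before its automated interpretation is accepted. The device profile, field names, and thresholds are hypothetical placeholders loosely modelled on the limits cited above, not an actual vendor specification.

```python
from dataclasses import dataclass

@dataclass
class DeviceClaims:
    """Published limits of validity for a (hypothetical) wearable AF feature."""
    name: str
    max_heart_rate_bpm: int        # above this, AF detection is not validated
    valid_for_diagnosed_af: bool   # intended use excludes known AF patients

@dataclass
class Reading:
    heart_rate_bpm: int
    af_flagged: bool

def within_validated_range(claims: DeviceClaims, reading: Reading,
                           patient_has_af_diagnosis: bool) -> bool:
    """Return True only if the reading falls inside the device's approved claims."""
    if patient_has_af_diagnosis and not claims.valid_for_diagnosed_af:
        return False  # outside the intended-use population
    if reading.heart_rate_bpm > claims.max_heart_rate_bpm:
        return False  # outside the validated heart-rate range
    return True

# Example: a hypothetical profile loosely based on the limits cited above [8].
watch = DeviceClaims(name="wrist-worn AF feature",
                     max_heart_rate_bpm=120,
                     valid_for_diagnosed_af=False)
reading = Reading(heart_rate_bpm=135, af_flagged=True)
print(within_validated_range(watch, reading, patient_has_af_diagnosis=False))
# -> False: the AF flag should not be taken at face value at this heart rate
```

Making the approved claims explicit in this way allows readings taken outside the validated operating range to be flagged before they enter clinical decision making.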
The second point implies that the patient needs to have sufficient digital skills and health literacy (i.e., the ability to seek, find, understand, and appraise health information from electronic sources and apply the knowledge gained to addressing or solving a health problem [9]) in order to be able to properly use the device.
The third point refers to the fact that digital health solutions based on smartphone applications and embedded or connected sensors are prone to being modified via the internet connection, to patch possible bugs, to adapt to new operating systems, and to expand the features provided. In this process, where automated updating depends on the smartphone user's settings, algorithm performance can change profoundly, making it necessary to verify which software version was used to automatically interpret the data [10,11].
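Regarding the second and third points, one possible safeguard is to store the interpreting software version alongside each patient-collected measurement, so that an automated interpretation can later be checked against the versions whose performance has actually been verified. The sketch below illustrates this idea; all names and version strings are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Versions of the interpretation algorithm that have been clinically
# validated (hypothetical values, for illustration only).
VALIDATED_APP_VERSIONS = {"2.3.0", "2.3.1"}

@dataclass
class Measurement:
    value: float
    unit: str
    interpretation: str        # the app's automated interpretation
    app_version: str           # version of the software that produced it
    recorded_at: datetime

def interpretation_is_trustworthy(m: Measurement) -> bool:
    """Accept the onboard interpretation only if it was produced by a
    software version whose performance has been verified."""
    return m.app_version in VALIDATED_APP_VERSIONS

m = Measurement(value=1.0, unit="a.u.", interpretation="normal",
                app_version="2.4.0",  # updated automatically on the phone
                recorded_at=datetime.now(timezone.utc))
if not interpretation_is_trustworthy(m):
    print("Re-interpret raw data: app version", m.app_version,
          "has not been validated")
```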
Changes in patient-physician relationship
While in the past the healthcare professional was the natural intermediary in guiding patient access to health information, nowadays this role is functionally by-passed by new apomediaries (i.e., the Web, on-line groups, …) in a process of progressive disintermediation [12], with the ethical concern that incorrect ideas or potentially dangerous practices could take hold. In addition, the development of digital health poses the risk of shifting medicine from the “art of curing” to a “science of measurement”, where the inner life and feelings of the patient would be forgotten, and the communicative dimension of the medical act would be secondary to its informational dimension [13].
Nevertheless, to guide the patient on his/her digital health journey, new professional learning paths are needed that teach physicians to use the new tools properly, to interpret their results correctly, and to integrate them into medical practice ethically and deontologically.
Wide patient access to medical devices (beyond the conventional thermometer and blood pressure monitor usually found in everybody's home), as well as to lifestyle wearables, will lead to a situation in which patients self-monitor their health, confirm or rule out symptoms, and activate a physician consultation or take immediate action.
This new scenario of shifting tasks and responsibilities could affect the roles that doctors have in healthcare: on the one hand, it may stimulate new interactions between care providers and patients to better understand health and its correlation with lifestyle; on the other, it may create in the patient a sense of independence, self-judgement and confrontation. This raises questions as to whether valuable features of the traditional patient-physician relationship will be lost, whether medical training needs to be rethought, and whether patient expectations should be redirected [14].
Equity of access to healthcare services
Health equity has been defined as the absence of discrimination or unfair health disparities, to be achieved by minimising such disparities among groups of people with different levels of underlying social advantage [15]. In this context, digital health is considered to have the potential to resolve health inequalities. However, lack of computer technology, limited access to the Internet, lack of the required skills, and physical access barriers (which mainly affect low-income classes, the elderly, and people with disabilities) could limit the accessibility of these new services for the very categories expected to benefit most from them, thus exacerbating disparities in healthcare quality and outcomes and reinforcing what has been described as the "digital divide" [3,16].
In particular, the importance of health literacy (i.e., the degree to which an individual can access, process, and comprehend basic health information and services in order to inform and participate in health decisions) to cardiovascular disease management, prevention and public health has been addressed recently and underlined in a scientific statement from the American Heart Association [17].
Also, it has been reported that individuals with limited health literacy may have less access to reliable Internet-based health education materials [18]. While the use of digital health solutions may represent an attractive option for patient-oriented education, text messages, and social networking to help with chronic disease management, a mobile health intervention study showed that racial or ethnic minorities, older adults, and those with limited health literacy were the least engaged with text messaging and automated calls [19].
Data enabling health (artificial intelligence)
AI is a multidisciplinary area, including the fields of machine learning, natural language processing, and robotics, that describes a range of techniques allowing computers to perform tasks that would usually require human reasoning and problem-solving skills. AI originated more than 60 years ago in the field of computer science, but now, thanks to technological developments in both artificial neural networks and powerful graphics processing units (GPUs), it seems to have reached a level of development adequate for application in any field of medicine. Thanks to its ability to extract knowledge and learn from large sets of clinical data, AI can play a role in diagnostics, decision making and personalised medicine. At the same time, it creates a novel set of ethical challenges that need to be identified and mitigated [20].
On 10 April 2018, twenty-five European countries signed a Declaration of Cooperation on Artificial Intelligence, with the goals of increasing public and private investment in AI, preparing for socio-economic changes, and ensuring an appropriate ethical and legal framework. Concerning this last aim, in June 2018 the Commission appointed 52 experts to a High-Level Expert Group on Artificial Intelligence (AI HLEG), including representatives from academia and civil society as well as industry (but no medical associations, physicians, or patients were involved).
After publication of a first draft in December 2018 and review of the 506 comments received, the final voluntary and non-binding Ethics Guidelines for Trustworthy AI were published in April 2019 [21]. The document generally addresses seven key requirements (i.e., human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) and four interrelated ethical principles (respect for human autonomy; prevention of harm; fairness; and explicability). Table 2 gives examples of questions arising when contextualising these principles in the healthcare domain.
Table 2. Examples of questions arising when contextualising the ethical principles addressed in the Ethics Guidelines for Trustworthy AI [21] into the healthcare domain.

| Ethical principle | Contextualised questions in the healthcare domain |
|---|---|
| Respect for human autonomy | |
| Prevention of harm | |
| Fairness | |
| Explicability | |
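To make the explicability principle concrete, the sketch below decomposes the output of a simple linear risk model into per-feature contributions that could be reported alongside the prediction, giving the clinician a minimal account of why the model produced a given risk. The model, weights, and feature values are entirely hypothetical; real clinical models would generally require more sophisticated explanation methods.

```python
import math

# Hypothetical logistic risk model: the weights and patient features are
# illustrative only, not derived from any real clinical model.
WEIGHTS = {"age": 0.04, "systolic_bp": 0.02, "smoker": 0.8}
INTERCEPT = -6.0

def explain_risk(features: dict) -> None:
    """Print the predicted risk and each feature's contribution to the
    linear score, giving a clinician-readable account of the output."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = INTERCEPT + sum(contributions.values())
    risk = 1.0 / (1.0 + math.exp(-score))  # logistic function
    print(f"Predicted risk: {risk:.1%}")
    for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {c:+.2f} to the score")

explain_risk({"age": 70, "systolic_bp": 150, "smoker": 1})
```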
The development of AI in healthcare also raises fundamental questions about its regulatory environment. Currently, software intended for medical purposes, such as prediction and prognosis, has been added to the medical device definition of the Medical Device Regulation [22], thus in principle covering AI under this regulatory umbrella. However, no specific characteristics of AI are addressed in this regulation or in related guidance documents.
More recently, the European Commission published a "White Paper on Artificial Intelligence - A European approach to excellence and trust" [23], setting out policy options on how to promote the uptake of AI and how to address the risks associated with certain uses of this new technology, strongly supporting a human-centric approach.
Considering the lack of current requirements regarding transparency, traceability and human oversight, the creation of a clear European regulatory framework for trustworthy AI, to be applied to related products and services, is keenly anticipated. The results of these regulatory efforts will become visible in the near future and will allow a better evaluation of how to integrate AI properly as a digital health tool into healthcare practice, hopefully overcoming the limitations of current implementations [24].
Conclusions
Technological progress is part of our society, and it is only natural that it impacts the healthcare domain. To cope properly with new digital health tools, it is important to evaluate their ethical implications before their implementation in clinical practice. In this way, the regulatory framework, as well as the educational pathways for both citizens and healthcare professionals, can be updated so as to guide the process, rather than being affected first and only searching for a remedy after the fact.