Introduction
Healthcare stands to gain considerably from the expansion of digital health data and technological advances in artificial intelligence (AI). AI has the potential to support many different activities, including administration, clinical research, personalised medicine, diagnosis, and drug discovery.1 Employing AI in healthcare, however, requires access to large amounts of data, which raises privacy concerns. Numerous nations, especially those in the European Union (EU), have made significant investments in AI, allocating budgets and increasing funding for AI-related healthcare projects.2 India is following in the footsteps of the GDPR with its Digital Personal Data Protection Bill, 2022, but the Bill is yet to be enacted; for now, the GDPR applies mainly to EU nations.
The proposed Artificial Intelligence Act3 and the Ethics Guidelines for Trustworthy AI4 are two legal and ethical instruments the European Commission has developed to guide the ethical design of AI systems.5
There is, however, a tension between advancing AI and safeguarding data privacy. The proposed EU AI Act emphasises the need for diverse, high-quality healthcare data.6 Striking a balance between the privacy risks posed by AI and its beneficial uses requires careful planning and resource allocation. This article provides an overview of the ethical trade-offs relating to data use in healthcare AI, along with procedural suggestions for making just choices in this area.
GDPR Trends in Different Countries with Respect to Patient Data
Patient data is necessary for creating and evaluating AI models in health-related applications. The General Data Protection Regulation (GDPR), which aims to protect personal information and standardise data protection practices, is the primary legal foundation for data protection in Europe.7 The GDPR does, however, permit Member States to enact derogations for public interest, scientific, historical, or statistical purposes, resulting in a variety of data governance strategies across Europe.8 Whereas Germany takes a stricter approach, emphasising patient consent for data processing,9 countries like Finland take a more liberal approach, permitting data access through national permits and comprehensive policies.10 Finland has adopted big data and open data policies, prioritises public education, and offers online services through which citizens can access their health information.11 Germany, by contrast, places a higher priority on patient consent and has rigorous data privacy rules, which researchers find confusing and which limit the use of the research exemption.12 In Germany, secondary research on health data therefore often requires either anonymisation or consent.
Making the Most of Artificial Intelligence’s Potential in Healthcare
These varied approaches to data governance in healthcare create trade-offs between data access and privacy. The difficulties stem from differing interpretations of GDPR requirements and from differing views on how to balance principles such as solidarity and informational self-determination.13 Choosing between data access and privacy raises moral questions and may have repercussions for privacy rights and for bias in AI development. Public opposition to liberal approaches to data governance arises from worries about privacy, bias, and discrimination. Restrictive approaches prioritise consent or anonymisation, but these can result in administrative burdens, selection biases, and a lack of representativeness in the data. Full anonymisation is becoming increasingly elusive, and it may guarantee neither individuals' privacy nor the effectiveness of AI models.14 Questions have also been raised about the potential overprotection of personal data and its impact on data sharing and AI innovation.15 Given the differing interpretations of data legislation, disregarding the significance of data access could lead to wasted investments in AI development and public education.
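The claim that anonymisation alone may not protect privacy can be made concrete with a linkage attack, in which an "anonymised" dataset is joined with a public register on quasi-identifiers such as postcode, birth year, and sex. The following is a minimal sketch in Python; all records, field names, and values are hypothetical illustrations, not drawn from any real dataset or from the sources cited in this article.

```python
# Minimal sketch of a linkage (re-identification) attack on "anonymised" data.
# All records, field names, and values are hypothetical illustrations.

# "Anonymised" health dataset: direct identifiers removed, quasi-identifiers retained.
health_records = [
    {"postcode": "10115", "birth_year": 1958, "sex": "F", "diagnosis": "type 2 diabetes"},
    {"postcode": "80331", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
    {"postcode": "10115", "birth_year": 1958, "sex": "M", "diagnosis": "hypertension"},
]

# Publicly available register (e.g. a membership or voter list) that includes names.
public_register = [
    {"name": "A. Example", "postcode": "10115", "birth_year": 1958, "sex": "F"},
    {"name": "B. Example", "postcode": "80331", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")


def reidentify(health, register):
    """Return (name, diagnosis) pairs where the quasi-identifiers match exactly one health record."""
    matches = []
    for person in register:
        hits = [r for r in health
                if all(r[k] == person[k] for k in QUASI_IDENTIFIERS)]
        if len(hits) == 1:  # uniquely singled out despite "anonymisation"
            matches.append((person["name"], hits[0]["diagnosis"]))
    return matches


if __name__ == "__main__":
    for name, diagnosis in reidentify(health_records, public_register):
        print(f"{name} re-identified; sensitive attribute revealed: {diagnosis}")
```

Defences such as k-anonymity coarsen or suppress quasi-identifiers until no combination singles out an individual, but that same coarsening reduces the data's usefulness for training AI models, which is precisely the trade-off between privacy and data utility discussed above.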
Is the Extravagant Spending on Data Privacy Worth It?
Europe's potential for healthcare AI is constrained by rigorous and conflicting data governance policies across nations.16 Because of complicated data protection regulations, staff shortages, and weak data governance frameworks, national public health institutions deploy AI only occasionally. To fully realise the benefits of AI in healthcare, researchers want greater access to patient data in a secure setting that still respects privacy concerns.
To promote health AI innovation while protecting the privacy of patient data, policymakers must find the proper balance. Given the limited nature of public resources, it is unethical to prioritise the development of AI-driven solutions over other healthcare goals or over data infrastructure development. Matters are further complicated by proposed regulations such as the EU AI Act, which may effectively bar AI applications in healthcare if developers do not have broad access to pertinent healthcare data.
To address these concerns, solid data governance and management frameworks, such as the European Health Data Space (EHDS) and interoperability standards for medical records, should be built alongside AI tools. Before making significant investments in AI-driven healthcare technology, it is essential to invest in resolving data governance issues so that resources are not wasted.17
Additionally, focusing only on health AI could deprioritise approaches that do not use AI but have a track record of success. Given the uncertain usefulness of AI interventions and their limited real-world impact to date, authorities should, alongside investing in AI, consider strengthening evidence-based approaches and tackling fundamental barriers to care.18 This strategy recognises the need to combine human expertise and AI in healthcare.
Allocating resources fairly for health AI is a difficult undertaking that calls for a thorough analysis of the advantages, constraints, and competing agendas. To fully realise the potential of healthcare AI in Europe while respecting privacy and guaranteeing the efficiency of the entire healthcare system, it is essential to strike the proper balance in data governance and resource allocation.19
Ethical Considerations of Data Access in the Health Sector
The literature on AI ethics frequently focuses on fairness and bias while ignoring the trade-offs between data access and privacy, as well as questions of resource allocation. Existing frameworks, such as those set forth by the European Commission, tend to limit the ethical discussion to particular AI uses while overlooking broader moral dilemmas.20
To ensure accountability, ethics debates concerning AI should be preceded by a broader conversation on priorities and public investments. Steps that involve identifying and prioritising health needs should receive more attention in the policy and planning cycle for health interventions.21 This approach is in line with recommendations from the World Economic Forum, which emphasise the value of examining strengths, weaknesses, opportunities, and threats when developing national AI strategies.22 Ultimately, decisions about how to allocate resources and use technology involve political debate and trade-offs between conflicting values.
According to Norman Daniels, in the absence of agreement on substantive principles the emphasis should be on fair procedures and procedural values.23 A useful approach for debating resource allocation in the context of digital health and ethical AI development is the Accountability for Reasonableness (A4R) framework, which sets out key requirements for justifiable decision-making in public health.24 To ensure justice, decision-makers in EU nations should discuss the trade-offs between data privacy and the benefits of AI with a diverse set of stakeholders, including researchers, data subjects, clinicians, and others. This broad engagement, or "data democracy," strengthens public confidence and gives affected groups more influence.25 Ethicists can help by providing insight into difficult moral dilemmas. For informed decision-making, a clear understanding of the available funds and competing needs is essential. The morality of judgements regarding data privacy and access strategies ultimately depends on how well they meet procedurally fair, accountable, and transparent requirements.26
Conclusion
The development and application of AI in healthcare involves a trade-off between data privacy and maximising AI's potential. To address this, countries should create coherent digital health strategies that reflect their core values. Public deliberation is essential because it enables nations to articulate their priorities and strike a balance between data access and privacy. National and European AI budgets should then reflect the balance chosen. Disregarding these factors raises issues of distributive justice that should not be neglected in ethical debates over AI's potential impact on health.
References
1 Davenport, T. and Kalakota, R., 2019. The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), p.94; Schork, N.J., 2019. Artificial intelligence and personalized medicine. Precision Medicine in Cancer Therapy, pp.265-283; Fleming, N., 2018. How artificial intelligence is changing drug discovery. Nature, 557(7706), pp.S55-S55.
2 Venture pulse: Q1’18 Global Analysis of Venture Funding KPMG. Available at: https://kpmg.com/xx/en/home/insights/2018/04/venture-pulse-q1-18-global-analysis-of-venture-funding.html (Accessed: 24 May 2023); Bundesregierung, D., 2018. Strategie Künstliche Intelligenz der Bundesregierung. Berlin, November; de Nigris, S., Craglia, M., Nepelski, D., Hradec, J., Gomez-Gonzales, E., Gomez Gutierrez, E., Vazquez-Prada Baillet, M., Righi, R., de Prato, G., Lopez Cobo, M. and Samoili, S., 2020. AI watch: AI uptake in health and healthcare, 2020 (No. JRC122675). Joint Research Centre (Seville site).
3 Matefi, R., 2021. The Artificial Intelligence Impact on the Rights to Equality and Non-Discrimination in the Light of the Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. Rev. Universul Juridic, p.130.
4 Smuha, N.A., 2019. The EU approach to ethics guidelines for trustworthy artificial intelligence. Computer Law Review International, 20(4), pp.97-106.
5 Custers, B., Sears, A.M., Dechesne, F., Georgieva, I., Tani, T. and Van der Hof, S., 2019. EU personal data protection in policy and practice. Hague: TMC Asser Press.
6 McLennan, S., Rachut, S., Lange, J., Fiske, A., Heckmann, D. and Buyx, A., 2022. Practices and attitudes of bavarian stakeholders regarding the secondary use of health data for research purposes during the COVID-19 pandemic: qualitative interview study. Journal of Medical Internet Research, 24(6), p.e38754.
7 Shabani, M. and Borry, P., 2018. Rules for processing genetic data for research purposes in view of the new EU General Data Protection Regulation. European Journal of Human Genetics, 26(2), pp.149-156.
8 Bak, M.A., Ploem, M.C., Ateşyürek, H., Blom, M.T., Tan, H.L. and Willems, D.L., 2020. Stakeholders’ perspectives on the post-mortem use of genetic and health-related data for research: a systematic review. European Journal of Human Genetics, 28(4), pp.403-416.
9 Molnár-Gábor, F., Sellner, J., Pagil, S., Slokenberga, S., Tzortzatou-Nanopoulou, O. and Nyström, K., 2022, September. Harmonization after the GDPR? Divergences in the rules for genetic and health data sharing in four member states and ways to overcome them by EU measures: Insights from Germany, Greece, Latvia and Sweden. In Seminars in Cancer Biology (Vol. 84, pp. 271-283). Academic Press.
10 Vrijenhoek, T., Tonisson, N., Kääriäinen, H., Leitsalu, L. and Rigter, T., 2021. Clinical genetics in transition— a comparison of genetic services in Estonia, Finland, and the Netherlands. Journal of Community Genetics, 12, pp.277-290.
11 Jormanainen, V., Parhiala, K., Niemi, A., Erhola, M., Keskimäki, I. and Kaila, M., 2019. Half of the Finnish population accessed their own data: comprehensive access to personal health information online is a corner-stone of digital revolution in Finnish health and social care: Englanti. Finnish Journal of eHealth and eWelfare, 11(4), pp.298-310.
12 Supra at 2.
13 Hoffman, S. and Podgurski, A., 2012. Balancing privacy, autonomy, and scientific needs in electronic health records research. SMUL Rev., 65, p.85.
14 Mostert, M., Bredenoord, A.L., Biesaart, M.C. and Van Delden, J.J., 2016. Big Data in medical research and EU data protection law: challenges to the consent or anonymise approach. European Journal of Human Genetics, 24(7), pp.956-960.
15 Ploem, M.C., Essink-Bot, M.L. and Stronks, K., 2013. Proposed EU data protection regulation is a threat to medical research. BMJ, 346.
16 Haneef, R., Delnord, M., Vernay, M., Bauchet, E., Gaidelyte, R., Oyen, H.V., Or, Z., Pérez-Gómez, B., Palmieri, L., Achterberg, P. and Tijhuis, M., 2020. Innovative use of data sources: A Cross-sectional study of Data Linkage Practices across European Countries.
17 Supra at 7.
18 European Investment Bank, 2021. Artificial intelligence, blockchain and the future of Europe: How disruptive technologies create opportunities for a green and digital economy. European Investment Bank.
19 D’Amour, A., Heller, K., Moldovan, D., Adlam, B., Alipanahi, B., Beutel, A., Chen, C., Deaton, J., Eisenstein, J., Hoffman, M.D. and Hormozdiari, F., 2022. Underspecification presents challenges for credibility in modern machine learning. The Journal of Machine Learning Research, 23(1), pp.10237-10297.
20 Supra at 4.
21 Bak, M.A., 2022. Computing fairness: ethics of modeling and simulation in public health. Simulation, 98(2), pp.103-111.
22 Madzou, L. and Shukla, P., 2019. A framework for developing a national Artificial Intelligence strategy, White Paper. In World Economic Forum.
23 Daniels, N. and Sabin, J., 1997. Limits to health care: fair procedures, democratic deliberation, and the legitimacy problem for insurers. Philosophy & public affairs, 26(4), pp.303-350.
24 Wong, P.H., 2020. Democratizing algorithmic fairness. Philosophy & Technology, 33, pp.225-244.
25 Ienca, M., Ferretti, A., Hurst, S., Puhan, M., Lovis, C. and Vayena, E., 2018. Considerations for ethics review of big data health research: A scoping review. PloS one, 13(10), p.e0204937.
26 McLennan, S., Shaw, D. and Celi, L.A., 2019. The challenge of local consent requirements for global critical care databases. Intensive care medicine, 45, pp.246-248.
Arshita Anand
This article has been authored by Arshita Anand, 3rd rank holder of the Article Writing Competition at Zedroit Privacy Festival-2023