Generative AI and Explainable AI in the Healthcare Industry: Applications, Ethical Considerations and Future Challenges
Keywords: GenAI, XAI, LLM

Abstract
The emergence of Generative Artificial Intelligence (GenAI) and Explainable Artificial Intelligence (XAI) represents a paradigm shift in the digital transformation of the healthcare industry. GenAI models, including generative adversarial networks (GANs), variational autoencoders (VAEs), and large language models (LLMs), can generate synthetic yet realistic medical data, facilitating innovations in medical imaging, drug discovery, and personalized treatment planning. XAI focuses on ensuring the transparency, interpretability, and trustworthiness of AI models, qualities that are most critical in clinical applications, where human lives are at risk. The integration of GenAI and XAI has the potential to improve clinical decision support systems, optimize workflows, and enhance healthcare outcomes, while addressing critical issues such as data scarcity, bias, and patient privacy. However, combining them also presents substantial technical, ethical, and regulatory challenges, including the need for interpretable generative models, adherence to data protection laws such as the General Data Protection Regulation (GDPR), and the assurance of fairness and accountability in automated decision-making. This paper presents a comprehensive analysis of the applications and implementation frameworks of GenAI and XAI in healthcare, emphasizing their role in fostering trustworthy, transparent, and patient-centered AI solutions.
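The pipeline the abstract describes, using a generative model to augment scarce clinical data and a post-hoc attribution method to explain the resulting classifier, can be illustrated with a minimal, hypothetical sketch (not taken from the paper). It fits a per-class Gaussian generative model to toy tabular "patient" records, samples synthetic records, trains a logistic classifier on the augmented set, and computes permutation-style feature attributions; all data and names here are illustrative assumptions, and real systems would use GANs/VAEs and dedicated XAI libraries instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" clinical records: two standardized features (e.g. a lab value
# and age); class 1 has shifted means. Entirely synthetic and hypothetical.
X_real = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
                    rng.normal(1.5, 1.0, size=(50, 2))])
y_real = np.array([0] * 50 + [1] * 50)

# --- Generative step: fit a per-class Gaussian and sample synthetic records
# (a stand-in for a GAN/VAE, addressing data scarcity).
def sample_synthetic(X, y, n_per_class):
    Xs, ys = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        mu, sigma = Xc.mean(axis=0), Xc.std(axis=0)
        Xs.append(rng.normal(mu, sigma, size=(n_per_class, X.shape[1])))
        ys.append(np.full(n_per_class, c))
    return np.vstack(Xs), np.concatenate(ys)

X_syn, y_syn = sample_synthetic(X_real, y_real, 200)
X_aug = np.vstack([X_real, X_syn])
y_aug = np.concatenate([y_real, y_syn])

# --- Predictive step: logistic regression trained by gradient descent
# on the augmented (real + synthetic) data.
w = np.zeros(X_aug.shape[1] + 1)
Xb = np.hstack([X_aug, np.ones((len(X_aug), 1))])  # append bias column
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y_aug) / len(y_aug)

def accuracy(X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(((Xb @ w > 0).astype(int) == y).mean())

# --- Explainability step: permutation importance, a model-agnostic post-hoc
# attribution: how much does accuracy drop when one feature is shuffled?
base = accuracy(X_real, y_real)
for j in range(X_real.shape[1]):
    Xp = X_real.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    print(f"feature {j}: importance = {base - accuracy(Xp, y_real):.3f}")
```

In a clinical setting the attribution step is what supports the transparency and accountability requirements discussed above: a clinician can see which input features drove a prediction rather than accepting a black-box score.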
License
Copyright (c) 2026 International Journal of Sciences: Basic and Applied Research (IJSBAR)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.