Collection

Beyond the Code: Unraveling Ethics and Bias in AI-Powered Healthcare Innovations

The dramatic growth in healthcare data volumes over the past decades, spurred by the widespread adoption of Electronic Health Record (EHR) systems, has paved the way for innovative applications of Artificial Intelligence (AI) in healthcare. AI now plays a pivotal role in almost every aspect of healthcare, from drug discovery to clinical decision support. Excitement has surged recently, driven largely by the potential use of generative AI tools, such as ChatGPT, in healthcare applications. Despite AI's promising potential, its widespread use also raises ethical challenges and biases that must be addressed. Recent studies underscore the importance of examining these issues, as disparities in healthcare may not only originate at the data collection stage but also be amplified through the development and implementation of AI technologies.

This special issue welcomes original research contributions that focus on the ethics and bias of AI in healthcare applications. The AI techniques of interest encompass a broad range, including but not limited to natural language processing, medical imaging, deep learning, predictive modeling, human-computer interfaces, and the Internet of Things. Healthcare applications of interest include, but are not limited to, clinical decision support, drug discovery, precision medicine, clinical research, translational research, telehealth and mHealth, consumer applications, and robotics.

Relevant topics for this special issue include, but are not limited to:

  • AI for health equity and addressing health disparities
  • Transparency, interpretability, and explainability of AI techniques in healthcare applications
  • Patient safety concerns related to the use of AI in healthcare
  • Legal and regulatory compliance in AI-driven healthcare
  • Privacy, confidentiality, and data security in healthcare AI applications
  • Informed consent for patients in the context of data usage in AI models
  • Data, algorithmic, and human bias in AI techniques
  • Fairness metrics, evaluation, and tools for AI in healthcare
  • Reasoning about and practical solutions for mitigating bias in healthcare AI applications
  • Technical and methodological approaches to addressing ethical concerns and bias in healthcare AI applications

We also encourage the submission of position papers that address ethics and bias in AI, including papers from individuals or groups belonging to communities that have historically been negatively affected by AI, bias, or health disparities. Additionally, we invite position papers from institutions that play a pivotal role in mitigating the impact of bias in healthcare applications of AI.

Editors

  • Yanshan Wang

    Vice Chair of Research and Assistant Professor, Department of Information Management, University of Pittsburgh, Pittsburgh, PA, USA

  • Ahmad P. Tafti

    Assistant Professor, Department of Information Management, University of Pittsburgh, Pittsburgh, PA, USA

  • Kirk Roberts

    Associate Professor, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, USA

  • Hongfang Liu

    Professor, Department of AI & Informatics, Mayo Clinic, Rochester, MN, USA

Articles (2 in this collection)