The impact of artificial intelligence (AI) is palpable across the health care spectrum, from aiding in early disease detection through image analysis to streamlining administrative tasks. Regulatory agencies recognize the need for rapid integration of health care AI solutions, as demonstrated by the FDA’s clearance of over 500 AI solutions classified as Software as a Medical Device (SaMD).
However, AI developers often prioritize speed over meticulous validation, which can compromise the depth of continuous performance monitoring after deployment. Given the critical nature of medical decisions, health care AI has unique requirements.
Machine learning (ML) models can be fragile: clinical practice changes, and data drift is inevitable. Data quality that degrades over time, and the sub-par model outputs that follow, can cause patient harm. In addition, transferring a model from one hospital system to another can prove challenging due to the complexity and variability of the data.
To derive value from AI and ML implementations, developers must practice responsible AI that aligns with five fundamental principles: it must be useful, safe, equitable, secure, and transparent. Nowhere is this more important than in the treatment of patients with cancer.
1. Useful
AI solutions must be designed to address specific health care challenges and deliver meaningful improvements in patient care and operational efficiency.
The fundamental test of an AI model's usefulness is whether it solves a real-world problem in a specific clinical context. Usefulness should translate into the quadruple aim of improving population health, enhancing patient satisfaction, reducing costs, and improving clinician work-life balance.
Here are two ways responsible AI has proven useful:
Increase positive patient outcomes. Implementing a “closing the loop” strategy that uses predictive insights into emergency department (ED) visits to drive early interventions for symptomatic or at-risk cancer patients can reduce ED visits by 30%.
Improve clinician efficiency. The ability to analyze large swaths of data and surface insight is a valuable time-saving benefit that was previously impractical for clinicians to achieve on their own. With AI applied in the clinical setting, hidden trends in patient data are surfaced, allowing physicians to preempt adverse events while reducing the burden of gathering data.
These findings highlight the positive impact of AI-driven solutions on patient outcomes and overall health care experiences.
2. Safe
Patient safety is paramount. AI solutions must be rigorously tested and monitored to ensure they do not harm patients or introduce errors into clinical workflows.
Developers venturing into health care AI integration must understand the unique character of every hospital and its patient population. One approach to deliberate implementation of responsible AI is through extensive model validation during development, continuous performance monitoring, and swift issue resolution:
Extensive model validation. Implementing this process ensures high performance and fairness across sensitive demographic subgroups. This involves thoroughly testing and validating models on diverse datasets to ensure they provide accurate and unbiased results for clinicians across different patient populations.
Continuous performance monitoring. Automated alerting, data transformations, and ML algorithms should track the performance of the model in real-world clinical settings. Performance measures should include prediction volume, data drift, prediction drift, label drift, model drift, discrimination, and calibration. (A minimal monitoring sketch follows this list.)
Swift issue resolution. Should metrics fall out of range, timely interventions can maintain model integrity. When an out-of-range alert is received, a root-cause analysis can pinpoint the sources of problems and suggest decisive action, whether through updating data, fine-tuning algorithms, or retraining models, to rectify the issues and ensure AI systems consistently deliver safe, fair, and effective results.
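To make the monitoring described above concrete, here is a minimal sketch of one common drift check: comparing a recent window of model outputs against a reference window using the population stability index (PSI). Everything here, the simulated scores, the `population_stability_index` helper, and the alert threshold, is an illustrative assumption rather than a prescribed implementation.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """PSI: a common score for distribution shift between a reference
    sample and a current sample. Rule of thumb (an assumption; tune per
    model): <0.1 stable, 0.1-0.25 moderate drift, >0.25 significant."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid division by zero and log(0) in sparse bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Hypothetical data: risk scores at validation time vs. live traffic.
rng = np.random.default_rng(0)
reference_scores = rng.beta(2, 5, size=5000)  # distribution at validation
current_scores = rng.beta(3, 4, size=800)     # distribution this week

psi = population_stability_index(reference_scores, current_scores)
if psi > 0.25:  # alerting threshold is an assumption
    print(f"ALERT: prediction drift detected (PSI={psi:.3f})")
else:
    print(f"PSI={psi:.3f}: within expected range")
```

In production, the same pattern would run over input features and labels as well as predictions, feeding automated alerting rather than print statements.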
3. Equitable
AI must be designed and evaluated to work effectively across diverse patient populations.
AI systems in health care should work fairly for everyone, regardless of race, gender, age, socioeconomic status, or any other demographic or clinical characteristic. Problems often originate from systematic biases present in the data used for training. In 2017, the National Academy of Medicine highlighted the fact that Black patients often receive inferior treatment compared with their Caucasian counterparts, even after controlling for such variables as class, comorbidities, health behaviors, and access to health care services.
The incidence of bias can be reduced by:
Engaging clinicians in product development. Involving nurses and clinicians with extensive industry experience in product design helps ensure solutions meet health care providers’ practical needs and expectations.
Conducting frequent user surveys. Qualitative and quantitative user interviews throughout a product's life cycle generate continuous feedback. By listening carefully, developers can address concerns promptly, make the necessary adjustments, and improve the overall user experience.
Auditing for bias and fairness. Using third-party resources to audit data and track the performance of AI models helps reduce bias at the data level and allows for quick intervention should the AI model drift from expected performance.
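One simple form such an audit can take is computing a core performance metric per demographic subgroup and flagging large gaps. The sketch below does this for recall (sensitivity); the labels, predictions, group codes, and gap threshold are all hypothetical.

```python
import numpy as np

def subgroup_recall(y_true, y_pred, groups):
    """Recall (sensitivity) per subgroup. A large gap between groups is
    one signal that a model behaves inequitably."""
    results = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        results[g] = float(y_pred[positives].mean()) if positives.any() else float("nan")
    return results

# Hypothetical audit data: true labels, model predictions, and a
# demographic attribute for ten patients.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 1])
groups = np.array(list("AABBAABBAB"))

recalls = subgroup_recall(y_true, y_pred, groups)
gap = max(recalls.values()) - min(recalls.values())
print(recalls)
if gap > 0.10:  # acceptable-gap threshold is an assumption
    print(f"Flag for review: recall gap of {gap:.2f} across subgroups")
```

The same pattern extends to calibration, false-positive rates, or any other metric where a gap between groups would translate into unequal care.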
4. Secure
Health care data is sensitive and must be protected. AI systems must adhere to strict security standards to prevent unauthorized access and data breaches.
Compliance with SOC2 (Service Organization Control 2) and adherence to the Health Insurance Portability and Accountability Act (HIPAA) privacy and security requirements should be minimum standards for any AI developer. Those standards should also apply to all partners within the AI tech stack, including data storage providers, analytics platforms, and any other business associates.
Adherence to the following can help ensure the security of AI products:
Data siloing. Data from each organization should be isolated to minimize the risk of data leakage between health care institutions. This reduces the likelihood of unauthorized access or unintentional data exposure, and it makes it harder for an attacker who breaches one silo to reach data from multiple organizations (see the sketch after this list).
Continuous security testing. By conducting routine penetration testing and vulnerability assessments, health care AI products can fortify their defenses, implement timely security patches, and ensure that data remains secure. This approach safeguards patient information and reflects a commitment to responsible AI in health care.
Employee training and awareness. Nine out of 10 data breaches start with a human mistake. A responsible AI developer should conduct comprehensive, frequent employee training to create a culture of data security awareness, punctuated by quarterly simulated phishing campaigns and follow-up training for employees who fall prey.
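As a toy illustration of the siloing idea from the list above, the sketch below namespaces every record by organization and scopes reads to the caller's tenant. The class, storage model, and identifiers are invented for illustration; a production system would add per-tenant encryption keys, separate databases or accounts, and audited access controls.

```python
class SiloedStore:
    """Toy per-organization data silo: every record is namespaced by
    tenant, and reads never cross the tenant boundary."""

    def __init__(self):
        self._data = {}  # {tenant_id: {record_id: record}}

    def put(self, tenant_id, record_id, record):
        self._data.setdefault(tenant_id, {})[record_id] = record

    def get(self, tenant_id, record_id):
        # A record missing from this silo raises a KeyError rather than
        # falling back to any other tenant's data.
        return self._data[tenant_id][record_id]

store = SiloedStore()
store.put("hospital_a", "patient_123", {"risk_score": 0.82})
store.put("hospital_b", "patient_123", {"risk_score": 0.14})
# The same record ID resolves independently inside each silo.
assert store.get("hospital_a", "patient_123") != store.get("hospital_b", "patient_123")
```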
5. Transparent
Clinicians and patients must understand how AI decisions are made. Transparent AI systems are explainable, making their decision-making processes accessible and interpretable.
Transparent AI safeguards both patient care and clinical efficiency, making it a cornerstone of ethical AI use in health care.
AI systems should feature user-friendly interfaces that enable clinicians to grasp the rationale behind AI predictions. Further, AI outputs must be tailored to the clinician's needs, accompanied by context, and individualized for each patient.
Transparent AI should include:
Clear presentation within the clinician's workflow. AI systems should simplify clinician decision-making, with the algorithm, its training data, and its predictions available within customary workflows.
Visual representation of clinical basis. Visual representations of the data relevant to each patient and of the most impactful clinical factors can effectively communicate the primary patient characteristics driving a risk assessment or diagnosis. This builds trust and allows clinicians to make more informed judgments about the relevance of AI-generated insights.
Prioritization of actionable insights. This approach allows clinicians to make timely and informed choices about patient care. Prominently displayed data, such as a risk score for the likelihood of a particular cancer patient visiting the emergency department in the next 30 days or a change in a patient's risk index, can inform care decisions.
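For instance, a clinician-facing worklist might rank patients by such a 30-day ED-visit risk score and carry the driving factors alongside it. In this hypothetical sketch, the patients, scores, drivers, and review threshold are all made up:

```python
# Hypothetical worklist: patients ranked by 30-day ED-visit risk, each
# carrying the top factors behind the score so the "why" travels with it.
patients = [
    {"name": "Patient A", "ed_risk_30d": 0.81,
     "drivers": ["recent chemo cycle", "two ED visits in 90 days"]},
    {"name": "Patient B", "ed_risk_30d": 0.22, "drivers": ["stable labs"]},
    {"name": "Patient C", "ed_risk_30d": 0.64,
     "drivers": ["rising pain scores", "unplanned weight loss"]},
]

# Surface only what is actionable: highest risk first, with the clinical
# basis for each score displayed next to it.
for p in sorted(patients, key=lambda p: p["ed_risk_30d"], reverse=True):
    if p["ed_risk_30d"] >= 0.5:  # review threshold is an assumption
        print(f'{p["name"]}: {p["ed_risk_30d"]:.0%} 30-day ED risk '
              f'({", ".join(p["drivers"])})')
```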
AI’s future should be responsible.
The responsible use of AI in health care should empower clinicians, rather than replace them. Health care’s transformation must follow responsible AI principles to ensure that the technology aligns with ethical and regulatory standards while maximizing its benefits for health care delivery and patient well-being.
By adhering to these principles, clinicians, AI developers, and regulators can collectively contribute to a system where technology enhances patient care, improves clinical efficiency, and upholds the highest standards of ethics and safety. This journey toward responsible AI in health care holds the promise of a healthier and more equitable future for all.
Kathy Ford is a health care executive.