AI regulations for patient safety and equity

AI's integration into various aspects of life, including critical areas like healthcare, finance, and law, poses unique challenges

By Aditya Sinha


Published: Tue 5 Dec 2023, 6:02 PM

Last updated: Tue 5 Dec 2023, 6:03 PM

The need for a robust regulatory framework for emerging technologies is exemplified by historical examples that reveal the risks of both overregulation and regulatory lag. In the late 1800s, British laws overly restricted automobile development, emphasising the danger of imposing stringent regulations based on an outdated understanding of technology. Similarly, the continued exposure to radioactivity well into the mid-20th century, despite early warnings about its dangers, illustrates the perils of slow regulatory response to emerging scientific knowledge.

This historical context is highly relevant to the regulation of Artificial Intelligence (AI) in modern times. AI's integration into various aspects of life, including critical areas like healthcare, finance, and law, poses unique challenges. The often opaque nature of AI algorithms, which can lead to biased decision-making, underscores the urgency for a regulatory framework that is both adaptive and forward-thinking. Particularly in healthcare, where AI has the potential to significantly influence patient outcomes and medical practices, ensuring transparent, ethical, and safe application of AI is paramount.

Today, AI is being deployed in various facets of the healthcare industry. For instance, a recent study published in Nature Medicine highlighted how PANDA, an innovative AI model, has significantly advanced pancreatic cancer detection by accurately identifying common pancreatic lesions and their subtypes from routine non-contrast CT scans, a task previously unachievable by radiologists alone. Built on a substantial dataset and a sophisticated deep learning architecture, PANDA detects lesions with high sensitivity and specificity, outperforming traditional diagnostic methods. Its ability to enhance radiologists' diagnostic capabilities, especially in less specialised settings, and its potential for early detection and screening in asymptomatic individuals mark a transformative step in cancer diagnosis, one that could extend beyond pancreatic cancer to other types where early detection is crucial.

However, AI is only as good as the data it is trained on. Empirical research has revealed significant bias in AI applications within healthcare. A study by Obermeyer et al. (2019) in Science reported racial bias in a healthcare algorithm widely used in US hospitals, which systematically discriminated against Black patients. This was echoed by research demonstrating that the Framingham Heart Study cardiovascular risk score, used for decades in industrialised countries, produced biased results favouring Caucasian patients over African American patients, leading to unequal and inaccurate care distribution. Another study, published in the Journal of Global Health, emphasises that bias can emerge at any stage of creating algorithms, from study design and data collection to implementation and dissemination, necessitating mitigation strategies at each step. This study defines algorithmic biases as “the application of an algorithm that compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation and amplifies inequities in health systems.”

The absence of a comprehensive regulatory framework to eliminate algorithmic bias in AI deployed in healthcare could lead to significant negative impacts on patient outcomes. Addressing this challenge starts with acknowledging that bias can infiltrate the algorithm creation process at any point, from study design to data collection and model implementation. To mitigate this, data science teams in healthcare must be diverse, including not just technical experts but also professionals from various backgrounds, such as clinicians who understand the nuances of clinical contexts. This diversity in teams can contribute to more balanced and equitable AI models.

Furthermore, it's important to recognise the inherent tradeoff between algorithm performance and bias. Since societal inequities influence algorithm development, there is always a potential for bias. Short-term strategies might include adjusting algorithms to protect disadvantaged groups, even if it means compromising on overall accuracy. This approach, while technically challenging, is crucial for ensuring equitable care. Long-term solutions require broader, more diverse teams and the implementation of checklists or safeguards throughout the algorithm development process.
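
The short-term adjustment described above can be made concrete with a minimal sketch. The code below uses entirely synthetic scores and outcomes (none of it is drawn from the studies cited in this article) to show how a team might audit a clinical risk model for group-level disparities and then lower the decision threshold for a disadvantaged group, accepting some loss of overall precision in exchange for more equitable care:

```python
# Illustrative sketch with synthetic data: auditing a risk model for
# disparities in false negative rates between two patient groups, then
# adjusting one group's decision threshold as a short-term mitigation.

def false_negative_rate(scores, labels, threshold):
    """Share of truly high-risk patients (label 1) the model misses."""
    missed = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    positives = sum(labels)
    return missed / positives if positives else 0.0

# Synthetic risk scores and true outcomes for two patient groups.
group_a = {"scores": [0.9, 0.8, 0.4, 0.3], "labels": [1, 1, 1, 0]}
group_b = {"scores": [0.7, 0.5, 0.35, 0.2], "labels": [1, 1, 1, 0]}

shared_threshold = 0.6
fnr_a = false_negative_rate(group_a["scores"], group_a["labels"], shared_threshold)
fnr_b = false_negative_rate(group_b["scores"], group_b["labels"], shared_threshold)
# With one shared threshold, group B's high-risk patients are missed
# far more often (2 of 3) than group A's (1 of 3).

# Short-term mitigation: lower the threshold for the disadvantaged group,
# trading some overall precision for a lower rate of missed cases.
adjusted_threshold_b = 0.34
fnr_b_adjusted = false_negative_rate(
    group_b["scores"], group_b["labels"], adjusted_threshold_b
)
```

In practice such audits are run over many metrics and subgroups at once, and the chosen threshold itself must be validated clinically, but the core check, comparing error rates across groups rather than only overall accuracy, is exactly the shift the text describes.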

On a larger scale, awareness of algorithmic bias is growing, and efforts are being made to combat it through calibrated incentives and formal legislation. However, current legal measures are still evolving and may not fully address the complexity of the issue. The goal is to not just optimise algorithm performance but also to minimise bias, ideally through a system of checks and balances. This ongoing effort underscores the need for continuous refinement in algorithm development and a collaborative approach across sectors to harness the power of AI in healthcare effectively.

Policymakers can leverage the World Health Organisation's framework on AI in healthcare by ensuring rigorous documentation and transparency throughout the AI system's lifecycle. This includes pre-specifying medical purposes and development processes, maintaining records that trace the AI development steps, and adopting a risk-based approach to documentation levels. Moreover, it is crucial to implement a total product lifecycle approach encompassing pre-market management, post-market surveillance, and risk management strategies that address issues such as cybersecurity and algorithmic bias.

Additionally, policymakers should mandate clear documentation of an AI system's intended use, training dataset details, and demographic composition. They must advocate for external analytical validation to test the system's performance in real-world settings, alongside clinical validation scaled by risk, with randomised clinical trials for high-risk tools. Emphasising data quality, privacy, and protection, as well as fostering engagement and collaboration among stakeholders, will ensure that AI systems are developed and deployed responsibly. This collaborative approach may facilitate regulatory convergence and harmonisation globally, adapting to the rapidly evolving AI landscape.
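
The documentation mandate above can be pictured as a simple machine-checkable record. The sketch below is hypothetical: the field names and contents are illustrative, not drawn from any official WHO template, but it shows how a regulator or hospital could verify that the mandated items, intended use, training dataset details, demographic composition, and validation status, are present before deployment:

```python
# Hypothetical "model card" record for a clinical AI tool, with a
# completeness check against a set of mandated documentation fields.
# Field names and contents are illustrative assumptions.

REQUIRED_FIELDS = {
    "intended_use", "training_dataset", "demographic_composition",
    "external_validation", "clinical_validation", "risk_class",
}

model_card = {
    "intended_use": "Triage support for non-contrast CT screening",
    "training_dataset": "Multi-centre CT archive, 2015-2021 (hypothetical)",
    "demographic_composition": {"female": 0.48, "male": 0.52},
    "external_validation": "Tested on held-out sites in real-world settings",
    "clinical_validation": "Randomised clinical trial (high-risk tool)",
    "risk_class": "high",
}

def missing_documentation(card):
    """Return the mandated fields that are absent or left empty."""
    return sorted(f for f in REQUIRED_FIELDS if not card.get(f))

gaps = missing_documentation(model_card)  # empty list: card is complete
```

A record like this also makes the risk-scaled requirement enforceable: a tool whose `risk_class` is "high" could be rejected automatically unless its `clinical_validation` entry documents a randomised trial.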

(Aditya Sinha is Officer on Special Duty, Economic Advisory Council to the Prime Minister of India. He tweets @adityasinha004. Views are personal.)
