Untangling the legal web: Navigating AI’s complexities in healthcare

By SMU City Perspectives team

Published 6 July 2023


POINT OF VIEW

The law of negligence would probably not require the doctors to understand or explain the detailed and complex intricacies of the AI model to the patient. What is important would be the doctor’s awareness of the reliability (or otherwise) of the AI for the intended use.

Gary Chan Kok Yew

Professor of Law, Singapore Management University


In brief

  1. As the integration of artificial intelligence (AI) becomes more prevalent in medical practices, it brings forth a myriad of legal complexities and challenges.
  2. To mitigate the risks that AI tools can bring, medical practitioners should exercise due diligence, understand the technology sufficiently, and take reference from respected bodies of medical opinion.
  3. Medical AI tools should be developed and implemented according to regulatory guidelines, with patients sufficiently informed of their risks and benefits.

This article is featured in the Special Feature: Mind Meets Machine

In the fast-paced world of healthcare, where time and accuracy are of utmost importance, the integration of artificial intelligence (AI) has been game-changing. AI-powered tools like the Singapore Eye Lesion Analyser Plus (SELENA+), RapidAI and RadiLogic have brought significant improvements in patient care. By providing faster and more accurate diagnoses, these tools are helping medical practitioners achieve better outcomes for their patients. As new applications emerge across a spectrum of medical services, healthcare systems in many cities are on the brink of a major transformation.

In many cases, the benefits of the technology are widely celebrated. However, the use of medical AI can enter uncertain legal territory when a patient’s outcome takes a negative turn. Who is to be held accountable if the use of AI leads to a misdiagnosis causing injury or death? Under what circumstances might a doctor or hospital be accused of ‘medical negligence’ over the use of an AI-powered tool? While the answers remain unclear, conversations on how tort law might adapt to these developments are underway. Gary Chan Kok Yew, Professor of Law, shares some key discussions within the legal community and offers solutions for healthcare professionals who wish to navigate this space.

Evolving definitions of the ‘standard of care’

The legal term known as the ‘standard of care’ is at the crux of this conundrum. According to Prof Chan, ‘standard of care’ refers to an individual’s legal duty to exercise reasonable caution to prevent harm to others. Medical practitioners are expected to consider a range of factors in their decision-making. This includes the foreseeable risks, potential harm, costs of preventive measures, industry practices, and the purpose and potential benefits of the activity in question. 


Medical negligence lawsuits can arise if a patient suffers injury as a result of an act or omission by a doctor that deviates from the legal standard. With the use of AI tools in medical settings, conversations on the definition of ‘standard of care’ are evolving, and it remains unclear whether these standards will remain as they are or require adaptation. This raises a critical question: what does ‘medical negligence’ entail in the age of AI?

Prof Chan highlights three key dilemmas below:

  1. Liability regime vs encouraging innovation

One conversation revolves around the choice of liability regime. According to Prof Chan, some have commented that patients should be required to prove that the hospitals and doctors were at fault in order to recover compensation for their injuries. With this approach, healthcare professionals can feel confident in testing and adopting new technologies to benefit their practice and patients. Without such a requirement, however, hospitals and doctors may fear the legal consequences that might come their way, deterring them from harnessing the potential of AI.

  2. Strict liability vs proving negligence

Given how challenging it is to establish fault in AI cases, others propose the notion of ‘strict liability’, a legal doctrine that holds individuals responsible for the consequences of their actions even if they did not intend to cause harm. Under this approach, injured patients need only show that the medical practitioner’s actions caused their injury, without having to prove negligence.

  3. The challenges of turning to AI recommendations

Opaque AI systems present another challenge. When AI recommendations come from deep learning systems, doctors may lack access to the reasoning behind the AI’s decision-making. Prof Chan explains, “If the doctor accepts the AI recommendation, which turns out to be erroneous and causes harm to the patient, would the doctor be regarded as negligent? Conversely, if the doctor rejects the AI recommendation and adopts a different treatment which thereby injures the patient, would he or she be negligent? The answer is unclear at the moment. We probably need more information on the doctor’s basis, if any, for accepting or rejecting the AI recommendation.”

Prof Chan adds that the views of medical peers play an important role in determining what would be considered an acceptable practice in these circumstances. However, anticipating the opinions of medical peers remains a challenge at present since the use of medical AI is still in its infancy.

Mitigating risk in the use of medical AI

Prof Chan offers some solutions to help medical practitioners navigate this period of ambiguity. 

Firstly, hospitals and clinics should conduct due diligence on each medical AI tool before implementation. This includes having healthcare professionals thoroughly review relevant materials and seek clarification from the AI developers if needed. He adds, “Even if the hospital or clinic has conducted due diligence in deciding to implement medical AI, the negligent use of AI by an employee in a particular instance can result in the hospital or clinic shouldering legal liability for harms caused to patients.”

Therefore, he encourages hospitals to ensure that staff utilising medical AI for patients receive proper training where possible. While there are no mandatory requirements for healthcare professionals at present, Prof Chan says there are educational workshops and seminars to help them keep abreast of developments and discussions in medical AI. 

To minimise the risk of negligence liability, doctors should also adhere to practices accepted by a respected body of medical opinion. He explains that when faced with differing opinions, courts typically defer to a responsible body of medical opinion that supports the defendant doctor’s practice, as long as it is reasonable and minimises harm. This approach encourages innovation: doctors are not deterred from utilising new methods or technology endorsed by one body of medical opinion, even if it differs from another.

Enabling informed decision-making

By issuing regulatory guidelines, governments can play a pivotal role in developing and implementing safe and reliable AI systems. Prof Chan highlights the example of the Artificial Intelligence in Healthcare Guidelines (AIHGle), developed in 2021 by Singapore’s Ministry of Health (MOH), the Health Sciences Authority (HSA) and Integrated Health Information Systems (IHiS). The document offers guidelines to help AI developers design, build and test the technology effectively, while also listing steps and precautions AI implementers should take prior to the application of medical AI. While these guidelines are not legally binding, they are important reference points for ensuring responsible AI development and usage across the board.

On an individual level, patients should be encouraged to ask their healthcare provider questions about the prior use of these innovative technologies, for example, by finding out whether the medical AI tool in question has been used for diagnosing or treating similar conditions, and how safe and reliable it has proven in previous use cases. Prof Chan shares, “Where the medical AI has not been used, or such use would be on an experimental basis, patients should enquire about alternatives to AI. On the other hand, to make an informed decision, the patient should also not ignore the potential benefits of such AI use.”

Taking reasonable care at relevant stages

While information is key to decision-making, Prof Chan advises healthcare providers against what he calls ‘information dumping’ on patients by sharing details that are irrelevant to them. 

He says, “I believe that in the future when the use of medical AI becomes more common, the hospitals and doctors would be more well-versed with the technology. However, the law of negligence would probably not require the doctors to understand or explain the detailed and complex intricacies of the AI model to the patient. What is important would be the doctor’s awareness of the reliability, or otherwise, of the AI for the intended use.” 

Prof Chan encourages medical practitioners to take reasonable care at relevant stages of AI implementation, including monitoring the AI’s performance and conducting regular reviews of the technology, as the sketch below illustrates. He says, “The current state of medical negligence in Singapore allows room for medical innovations that are accepted by a responsible body of medical peers. Nevertheless, doctors should endeavour to know enough about the medical AI in order to assess the likely material risks and minimise harm to the patient.”
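
To make the idea of ongoing monitoring concrete, here is a minimal sketch in Python of what such a periodic review could look like. It is not drawn from the article or any specific tool: the case fields, the 90 per cent sensitivity floor and the escalation step are hypothetical illustrations. The sketch compares a diagnostic AI tool’s flags against later confirmed diagnoses and escalates when sensitivity falls below the agreed floor.

# A minimal, hypothetical sketch (not from the article) of a periodic
# performance review for a diagnostic AI tool: compare the tool's flags
# against later confirmed diagnoses and escalate if accuracy slips.
from dataclasses import dataclass

@dataclass
class Case:
    ai_positive: bool         # the AI tool flagged the condition
    confirmed_positive: bool  # diagnosis later confirmed by clinicians

def review(cases: list[Case], min_sensitivity: float = 0.90) -> None:
    """Compute sensitivity/specificity and flag if below an agreed floor."""
    tp = sum(c.ai_positive and c.confirmed_positive for c in cases)
    fn = sum(not c.ai_positive and c.confirmed_positive for c in cases)
    tn = sum(not c.ai_positive and not c.confirmed_positive for c in cases)
    fp = sum(c.ai_positive and not c.confirmed_positive for c in cases)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}")
    if sensitivity < min_sensitivity:
        print("Sensitivity below agreed floor: escalate for clinical review.")

# Example: reviewing a quarter's worth of logged cases (illustrative data)
review([Case(True, True), Case(True, True), Case(False, True), Case(False, False)])

The point is not these particular metrics, but that the tool’s recommendations are logged against real patient outcomes and reviewed on a regular schedule agreed with clinicians, giving doctors the awareness of the AI’s reliability that Prof Chan describes.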

Methodology & References
  1. Ministry of Health. (n.d.). Artificial intelligence in healthcare. https://www.moh.gov.sg/licensing-and-regulation/artificial-intelligence-in-healthcare
  2. Bresnick, J. (2019, December 18). What is deep learning and how will it change healthcare? HealthITAnalytics. https://healthitanalytics.com/features/what-is-deep-learning-and-how-will-it-change-healthcare
  3. Chan, G., & Yip, M. (2021, October). AI, data and private law: The theory-practice interface. SMU Institutional Knowledge. https://ink.library.smu.edu.sg/sol_research/3436/
  4. iSchemaView, Inc. (n.d.). Aneurysm, pulmonary embolism and stroke software platform powered by AI. RapidAI. https://www.rapidai.com/
  5. SingaporeLegalAdvice.com. (2022, April 12). Medical negligence and malpractice in Singapore. https://singaporelegaladvice.com/law-articles/medical-negligence-and-malpractice-in-singapore/
  6. IPI Singapore. (n.d.). RadiLogic: Bringing AI into radiology. https://www.ipi-singapore.org/tech-offers/174612/radilogic-bringing-ai-into-radiology.html
  7. National Artificial Intelligence Strategy. (n.d.). SELENA+. https://www.ihis.com.sg/healthai/Pages/Selena.aspx