
AI – The Transition from No to Go


Jason Newton delves into what healthcare providers need to think about before transitioning to the use of AI.

It’s been approximately two years since ChatGPT became part of our medical vocabulary. These days you can hardly avoid daily articles about AI in your inbox or favorite publications, and the topic is surely on the agenda at any upcoming conference you plan to attend. There is no shortage of hype associated with AI in our space, and perhaps with good reason. It has a lot of promise, but at present very little of its ultimate impact is understood.

What is clear to me is that, without any grand demarcation, we went from the idea of AI sometime before COVID to the reality of AI sometime after it. It seemingly just happened, and it is now here to stay. At the same time, many early adopters in the healthcare profession haven’t necessarily taken the time to walk through basic and fundamental questions before using this shiny new technology.

From my standpoint, and perhaps for a variety of reasons, “artificial intelligence” is often confused with or substituted for the concept of “augmented intelligence.” While artificial intelligence suggests computers are doing work instead of humans, augmented intelligence stands for the proposition that technology helps humans, but that humans ultimately still do the work and make the final decisions. I recently listened to a great podcast in which a physician discussed how “human error” is generally permissible, yet we expect nothing less than 100% perfection from technological tools like AI before we are willing to deem them useful.[1] Is that the correct mindset, or is it limiting potential benefits?

Many people have anxiety and fear around AI. I know I worry about what AI has already done, and can do, in the hands of bad actors.[2] Some of the concern in medicine may be appropriate, particularly as AI becomes more advanced and autonomous. Some doctors worry about whether their livelihood will be at stake. But at present, it is more productive to spend time thinking about how AI may assist providers and their patients. For example, practitioners in pattern-recognition specialties (radiology, pathology and dermatology) are the most likely to be knowingly using and embracing AI, while other practitioners are considering or adopting the automated visit-summary and transcription tools now entering the market.

The standard of care applicable to healthcare providers is generally based upon what colleagues in the same or a similar community are doing. We may have transitioned out of the phase in which the standard of care does not include AI, and we are arguably in the phase in which it is reasonable to use AI. Eventually we will reach the point at which, at least in some circumstances, plaintiffs’ lawyers will contend it is a violation of the standard of care for a practitioner not to have used AI. We are not there yet, but I believe that day is coming (perhaps sooner than we think).

Given where we find ourselves at this moment, there are some basic questions and considerations medical practitioners should walk through as they transition to incorporating AI tools into their practice.

First, you ought not to use free tools such as ChatGPT (or comparable alternatives) for clinical care. Without getting into the technical details of how their algorithms work: these tools can demonstrate mind-blowing capabilities, but it is also well documented that they make things up.[3]

Second, while ChatGPT should not be used for clinical care, your patients are or will be using it, and you need to be aware of that. Also know that ChatGPT is not a confidential repository, and entering protected health information (PHI) into it would be a mistake.

Third, for subscription-based AI, doing a demo as part of your due diligence before committing to a product is a good idea, but doing a data dump of old records into the tool before or after you go live could inadvertently create liability. Know that marketers of these tools may encourage such a data dump because additional data helps train their models.

Fourth, meaningful attention and appreciation need to be given to the legal terms of contracts with AI companies, with particular focus on indemnification obligations and limits of liability. Get expert help to guide you appropriately and apprise you of the risks.

Fifth, practitioners need to think through whether they should advertise their use of AI (probably not), disclose its use to patients (probably, but it depends on the circumstances), obtain informed consent (maybe), and document the use of AI tools – especially when there is a discrepancy between the human and the tool (yes; consult your medical professional liability insurer for advice on this). Practitioners also need to be cognizant of potential overreliance on, or overconfidence in, tools serving as a clinical backstop – what psychologists might refer to as “social loafing.”[4]

Finally, practitioners today should be prepared for the companies behind AI tools they use to blame the human doctor in the event of a bad outcome. The argument will be that only the human can diagnose, prescribe or treat, and that the responsibility for doing all those things ultimately rests upon the provider’s shoulders – not the software the clinician consulted.

If there is one thing that is certain at this moment, it’s that there are more questions than answers when it comes to AI in medical practice. What’s most important is that we keep asking the questions.

Join us for a webinar on April 17, Is ChatGPT the New Dr. Google? Understanding Risks of Generative AI, where I’ll be co-presenting with my colleague Margaret Curtin on AI benefits, risks, and risk strategies.

Register for the webinar here

[1] https://www.backtable.com/shows/innovation/podcasts/37/practical-ai-learning-the-basics

[2] https://securityintelligence.com/posts/using-generative-ai-distort-live-audio-transactions/; https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html; https://www.wired.com/story/nsa-rob-joyce-chatgpt-security/

[3] https://www.law360.com/articles/1811422/print?section=massachusetts; https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html; https://www.nytimes.com/2023/12/29/nyregion/michael-cohen-ai-fake-cases.html

[4] https://www.wsj.com/tech/ai/does-working-with-robots-make-humans-slack-off-b7b645c1
