ChatGPT, the newest chatbot from OpenAI, has made headlines in recent months, reportedly reaching 100 million users in just two months, faster than social media giants Instagram and TikTok. While medicine is certainly a field where you have to be cautious about trusting technology, I figured I would give the application a try and see how it handled straightforward patient scenarios.
I started it off with a case that would be a slam dunk even for a junior medical student.
The response, while not perfect, was generally accurate and somewhat comprehensive.
In this simple test, I’d say that ChatGPT passed. I wouldn’t want it to be my doctor, but I could see it being a useful tool in clinical practice. A couple of red flags did come up in the short time I’ve been using it, however. For example, here’s my attempt to have it help me with research for a recent newsletter:
Pretty good response, right? I thought so. But I never intended for the chatbot to do the work for me; I was just hoping it could point me toward some high-quality papers to get the literature review started. So here’s me asking for links and citations for the papers so I can read them and form my own opinions:
Looks great, and that’s exactly what this program is designed to do: craft responses that look the part. It’s only when I tried the links that I started to get suspicious. The first link is broken, and the second takes you to a completely unrelated paper on obesity. Upon further searching, it turns out that neither of these papers actually exists. ChatGPT simply did a great job piecing together believable author names, article titles, and journal names to convince the user that it was providing exactly what it was asked for. If it fabricated these sources, should I really trust the other information it provided?
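If you want to sanity-check a reference yourself, it only takes a few lines of code to ask a bibliographic database whether a paper with that title actually exists. Here is a minimal sketch using the public Crossref API; the title in the example is a placeholder I made up, not one of the citations ChatGPT gave me.

```python
# Minimal sketch: look up a paper title in Crossref to see whether
# anything close to it actually exists. The title passed in below is
# a placeholder, not one of the fabricated citations from the chat.
import requests

def check_citation(title: str) -> None:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        print("No matches found; the citation may be fabricated.")
    for item in items:
        # Crossref returns titles as a list; fall back if it's missing.
        title_list = item.get("title") or ["(no title)"]
        print(f"{title_list[0]} -> https://doi.org/{item.get('DOI')}")

check_citation("Effects of intermittent fasting on cardiovascular outcomes")
```

Even then, you still have to eyeball the results yourself: a fuzzy search returns the closest real papers it can find, which may look nothing like the citation you were handed. That mismatch is exactly the tell.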
On the downside, this made me realize that ChatGPT isn’t quite as useful as I had initially hoped. On the bright side, these tests made me confident that AI won’t be putting me out of a job, at least not yet. AI absolutely cannot replace medical professionals right now, and attempts to do so could lead to very bad outcomes. Nevertheless, I can definitely see a future where doctors regularly use AI “assistants” like ChatGPT. In fact, I think there’s a role for it even in its current form.
Even at this early stage, ChatGPT is pretty good at coming up with (nearly endless) options for diagnoses, testing, and more. It has no trouble drawing on its training data and generating potential answers to clinical scenarios, sometimes including diagnoses or tests that I hadn’t even thought of. Where a physician is needed, however, is in deciding which of these are reasonable to pursue and which don’t make sense for that specific patient. It’s a message you’ve likely heard before, but it’s absolutely true: a doctor with AI is far superior to either a doctor or AI alone.
DISCLAIMER:
All content and information provided on or through this website is for general informational purposes only and does not constitute a professional service of any kind. This includes, but is not limited to, the practice of medicine, nursing, or other professional healthcare services. The use of any information contained on or accessed through this website is at the user’s own risk. The material on this site or accessible through this site is not intended to be a substitute for any form of professional advice. Always seek the advice of a qualified professional before making any health-related decisions or taking any health-related actions. Users should not disregard or delay in obtaining medical advice for any medical condition they have, and should seek the assistance of their healthcare professionals for any such conditions.