Is AI ready for use in the pharmacy? How a recent study challenged ChatGPT with real drug-information questions

January 17, 2024

Recently, ChatGPT has emerged as a phenomenon with the potential to change many industries. It is a tool seemingly capable of providing answers to whatever question the user types into the chat box. Last February, researchers reported that ChatGPT performed at or near the passing threshold on the US Medical Licensing Exam. So what does that mean for AI's place in the pharmacy, and should pharmacists be implementing AI into their practice now?

That's what pharmacy researchers at Long Island University investigated. They challenged ChatGPT with real drug-related questions that had come through the university's College of Pharmacy drug information service. ChatGPT provided satisfactory responses to only 10 of the 39 medical questions asked. For the other 29 questions, it either did not directly address the question posed or gave an inaccurate or incomplete answer.

When the researchers prompted ChatGPT to supply references so they could verify its information, it provided them for only eight responses, and every one of those included non-existent citations. The study highlights why ChatGPT is not an appropriate tool for making clinical decisions in everyday practice.

OpenAI, the company behind ChatGPT, has a usage policy stating that its AI tools are "not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions." Additionally, in its technical report, OpenAI states that GPT-4's biggest limitation is that it "hallucinates" information (makes it up) and is often "confidently wrong in its predictions." In other words, it can produce an answer that seems correct while fabricating the underlying information, which can lead healthcare providers astray if they depend on it for accuracy.

AI models are trained to predict the string of words that best matches the question posed. They don't actually understand the question the way a human would; they predict the next words most likely to be accepted by the user as an appropriate answer. Hallucinations occur because these models lack the ability to apply logical reasoning or to spot factual inconsistencies in the text they generate.
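To make that concrete, here is a minimal sketch of that prediction loop, written in Python with invented probabilities standing in for a real trained model. Notice that nothing in the loop checks whether the generated sentence is factually true; it only chains together statistically plausible words:

```python
import random

# A toy "language model": for each word, the plausible next words and their
# probabilities. These numbers are invented for illustration; a real model
# learns billions of parameters from text, but the generation loop below is
# the same basic idea. Nothing here checks whether a sentence is factually
# true -- only whether each word plausibly follows the last one.
TOY_MODEL = {
    "aspirin": [("is", 0.6), ("may", 0.4)],
    "is": [("indicated", 0.7), ("contraindicated", 0.3)],
    "may": [("cause", 1.0)],
    "cause": [("bleeding", 0.8), ("drowsiness", 0.2)],
    "indicated": [("for", 1.0)],
    "contraindicated": [("in", 1.0)],
}

def next_word(word):
    """Sample the next word from the toy probability table."""
    options = TOY_MODEL.get(word)
    if not options:
        return None  # no known continuation; stop generating
    words, weights = zip(*options)
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt, max_words=5):
    """Extend the prompt one predicted word at a time."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("aspirin"))
# Possible output: "aspirin may cause drowsiness" -- fluent-sounding, even
# though the model has no way to check the claim against a medical source.
```

A real model works at vastly greater scale, but the core mechanism is the same, which is why output that reads fluently can still be factually wrong.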

There are generative AI models being developed specifically for healthcare. At MedEssist, we've been working with Pendium Health to develop an AI that can be used and trusted by healthcare professionals. Together, we have pharmacists, NPs, and MDs working on creating the most accurate AI possible for primary care. MedEssist AI is a highly accurate and easy-to-use AI integrated into the MedEssist platform, helping you answer complex drug information (DI) questions, create personalized patient handouts instantly, and draft newsletters or other marketing materials.

How is MedEssist AI different from ChatGPT?
  • MedEssist AI was created by a team of healthcare providers, and it is trained only on reliable medical information such as drug product monographs, practice guidelines, and the Merck Manuals.
  • Unlike ChatGPT, it was designed specifically to address the kinds of questions you would be asked as a pharmacist.
  • MedEssist AI provides reliable references. It cites each line when possible, making it easy for you to find the original source.   
  • Lastly, it's integrated into the platform you use every day, allowing you to tap into AI whether you're making a patient assessment, writing a newsletter, or creating a new patient intake form.

Visit our website at https://www.medessist.com/ai to learn more!

Also take a look at our social media pages and website for great tips on how you can use MedEssist AI in your daily practice. 

References: 

  1. Bhaimiya, S. (2023). The free version of ChatGPT may provide false answers to questions about drugs, new study finds. Business Insider. Retrieved from https://www.businessinsider.com/chatgpt-may-provide-false-answers-to-medical-questions-study-2023-12
  2. Clark, M., & Vincent, J. (2023). What’s new with GPT-4 — from processing pictures to acing tests. The Verge. Retrieved from https://www.theverge.com/2023/3/15/23640047/openai-gpt-4-differences-capabilties-functions