Many patients are now using artificial intelligence (AI) tools built on large language models to fill the information gap between the time their test results appear in their patient-facing health records—which could be the same day—and the time their clinicians provide personalized interpretation and next steps. Experts warn that the accuracy of AI chatbot information often depends on how patients frame their questions and what details they provide along the way. But it turns out that patients are at least somewhat cautious about what AI tells them, according to a post from Kaiser Health News. While about half of the public say they trust AI tools to answer questions on cooking, home maintenance, or technology, for example, just 29% say they trust chatbots to provide reliable health information. And it’s not just the younger generation talking to chatbots: about 1 in 7 adults over the age of 50 use AI for health.
Privacy matters: When AI comes up in discussion, clinicians might consider cautioning patients that even the most sensible-sounding chatbot responses can be medically inaccurate, and that personal health information entered into chatbots carries no HIPAA privacy protections. AI is a rapidly evolving technology with potential for good—as long as patients verify what it tells them with human clinicians.