Some 32% of U.S. adults say they have turned to artificial intelligence (AI) chatbots in the past year for health information, according to two recent surveys. The KFF Tracking Poll on Health Information and Trust found that younger and lower-income individuals often turn to AI to avoid the costs and access barriers of traditional healthcare services. More concerning, many users skip professional follow-up entirely, including 58% of those seeking mental health advice and 42% of those seeking physical health advice. Additionally, 41% of AI health users say they have uploaded personal health information directly into these tools, even though 65% of those who have shared personal information with AI say they are concerned about privacy protections, according to the KFF data, gathered from 1,343 adults.

A separate survey uncovered similar results. Rock Health’s recent Consumer Adoption of Digital Health Survey, which included 8,000 U.S. adults, also found that 32% turn to AI chatbots for health information and help, double the percentage from just a year ago. Top queries include searching for treatment options based on a diagnosis (59%), asking for a diagnosis based on symptoms (56%), and seeking information about prescription drugs and/or side effects (55%). Now that the AI frontrunners have all launched consumer-facing experiences specifically for health, chatbot use is certain to grow.
Unlicensed practice of medicine: “This data should be a wake-up call for our industry. When one-third of adults substitute an AI chatbot for a provider visit, urgent care isn’t just at risk of losing patient volume to Big Tech—we could be facing a serious clinical crisis,” says Alan A. Ayers, President of Urgent Care Consultants and Senior Editor of JUCM. “Patients are highly susceptible to bad prompting, algorithmic hallucinations, and faulty medical advice from platforms that are effectively engaging in the unlicensed practice of medicine. Urgent care operators must aggressively reclaim the digital front door, not just to protect our margins, but to protect our communities from the very real harm of unverified AI triage.”
