The Case For A Generative AI Acceptable Use Policy In Urgent Care
Urgent Message: Banning generative artificial intelligence can create a culture of secretive use that presents potential risks for legal liability, clinical harm, and degradation of reputation.
Key Words: Generative Artificial Intelligence; Protected Health Information; HIPAA; Clinical Decision-Making
Across urgent care centers, many clinicians are experimenting with generative artificial intelligence (AI) tools. The trend is measurable, not anecdotal. The American Medical Association reports that 2 in 3 physicians used AI in 2024, a stunning rise from 38% in 2023, while Elsevier's global Clinician of the Future 2025 survey finds nearly half of clinicians worldwide report using AI at work, most often general-purpose tools.1,2
Use brings upside and downside. In terms of benefits, AI can assist in drafting job descriptions and interview questions, creating policy and procedure templates, and analyzing and drafting responses to patient satisfaction feedback. In terms of risk, hallucinations (errors and fabricated output that appear to be factual) are well documented in the medical literature. Leading journals and the World Health Organization warn that AI systems are not ready for autonomous clinical decision-making and require human verification and governance.3-6
From a compliance standpoint, entering protected health information (PHI) into a public chatbot, for example, can be a HIPAA violation absent a compliant business associate agreement (BAA) and appropriate safeguards.7 And the security context is real: Threat researchers have tracked 100,000+ stolen ChatGPT credentials on underground markets, underscoring the need for strong account hygiene and policies that restrict use to approved platforms.8
What about simply banning AI? Evidence suggests bans backfire. In workplace studies, employees take a "bring-your-own AI" approach when their organizations lag on sanctioned options. Microsoft and LinkedIn report that 78% of AI users bring their own tools to work, and Salesforce found over half of workplace generative AI use occurs without formal employer approval.9,10 In other words, prohibition counterproductively drives use into the shadows, without training, tracking, or risk mitigation strategies.
Adopt An AI Acceptable Use Policy
The practical answer is leadership, not avoidance. Consider an acceptable use policy (AUP) tailored for the urgent care center, which should:
1) Specify approved tools and accounts (under BAAs where applicable) and bar PHI in public systems.
2) Define allowed vs. prohibited uses (eg, patient-education materials vs. diagnosis and/or prescribing).
3) Require human oversight and verification against authoritative sources before anything reaches patients or the medical record.
4) Institute basic security hygiene (multifactor authentication, organization-managed accounts, and caution with links and file attachments).
5) Provide brief training and an incident-report pathway; review the policy at least annually.
Failing to implement a policy leaves the center exposed to regulatory risk (HIPAA breach notifications and Office for Civil Rights scrutiny), clinical risk from unverified content, and reputational harm. AUPs do not eliminate risk; they channel inevitable use toward safer, more transparent practice.
Quick Reference For Generative AI Use
Always ask yourself before using AI:
- Am I protecting PHI?
- Am I verifying accuracy?
- Am I ensuring fairness and readability?
- Am I being transparent about AI use?
Using co-intelligent principles, always invite AI to the table but keep the human in the loop; treat it like a capable but alien intern; and assume today's AI is the worst you'll ever use. The urgent care center that operates by those rules, codified in a clear policy, will capture the efficiency gains while protecting patient trust.11-13
References
1. American Medical Association. Physician enthusiasm grows for health care AI. Published February 12, 2025. Accessed September 17, 2025. https://www.ama-assn.org
2. Elsevier. Clinician of the Future 2025 Report. Elsevier Insights. Published July 22, 2025. Accessed September 17, 2025. https://www.elsevier.com
3. Perlis RH. Artificial intelligence in peer review. JAMA. Published August 26, 2025. Accessed September 17, 2025. https://jamanetwork.com
4. Hager P, Smith J, Lee K, et al. Evaluation and mitigation of the limitations of large language models for clinical decision-making. Nat Med. 2024;30(online). Accessed September 17, 2025. https://www.nature.com/nm
5. Asgari E, Zhang M, Williams D, et al. A framework to assess clinical safety and hallucination in LLMs for healthcare. NPJ Digit Med. 2024;7:258. Accessed September 17, 2025. https://www.nature.com/npjdigitalmed
6. World Health Organization. AI ethics and governance guidance for large multimodal models. Published January 18, 2024. Accessed September 17, 2025. https://www.who.int/publications
7. US Department of Health and Human Services, Office for Civil Rights. HIPAA guidance and business associate considerations (overview resources). Accessed September 17, 2025. https://www.hhs.gov/ocr
8. Group-IB. Over 100,000 ChatGPT credentials discovered in stealer logs. Press release. Published June 20, 2023. Accessed September 17, 2025. https://www.group-ib.com
9. Microsoft Corp; LinkedIn Corp. 2024 Work Trend Index: AI at work is here. Now comes the hard part. Published May 8, 2024. Accessed September 17, 2025. https://worktrendindex.microsoft.com
10. Salesforce. AI at Work Research: Over half of GenAI adopters use unapproved tools. Published November 15, 2023. Accessed September 17, 2025. https://www.salesforce.com
11. Mollick E. On-boarding your AI intern. One Useful Thing. Published May 20, 2023. Accessed September 17, 2025. https://www.oneusefulthing.org
12. Big Think Editors. Ethan Mollick's 4 guiding principles for leading with AI. Big Think. Published December 4, 2024. Accessed September 17, 2025. https://bigthink.com
13. Barron's. How will AI affect your job? Only you can answer that. Interview with Ethan Mollick. Published May 2024. Accessed September 17, 2025. https://www.barrons.com
Sample Policy
A sample acceptable use policy regarding generative AI for urgent care operators is available on the JUCM website.

