AI and HIPAA Compliance: How to Reduce Risk While Embracing Innovation
Many healthcare organizations believe artificial intelligence (AI) will be integral to future growth and has the potential to transform care. One recent study cited in The HIPAA Journal found the following about healthcare organizations:
- 88% have integrated cloud-based generative AI (GenAI) apps (such as ChatGPT) into their operations
- 98% use apps that incorporate GenAI features
- 96% use apps that leverage user data for training
- 43% are experimenting with running GenAI infrastructure locally
Such rapid change in the use of AI raises alarm bells for compliance leaders, with many left wondering: how exactly does this impact HIPAA compliance?
Our team has spent the last 30 years helping healthcare organizations of all sizes navigate evolving data privacy and security risks. We’ve compiled our insights here to help you:
- Understand how AI adoption creates potential HIPAA vulnerabilities
- Identify immediate steps to prevent AI-related data privacy and security risks
- Create a long-term plan to adopt AI while maintaining robust controls
A Billion-Dollar Transformation: Why Healthcare Organizations Are Embracing AI
Before we describe HIPAA-related AI risks, let’s be clear: this technology could be revolutionary for the healthcare industry. PwC estimates AI could generate $868 billion for the industry, with the potential to:
- Save Time: Once AI systems are proven to be safe and compliant, they can save time and effort throughout complex healthcare organizations. AI agents and automation can streamline workflows, reduce administrative tasks, and fix operational bottlenecks that often take providers’ time away from patients.
- Improve Care: AI can not only free up doctors’ time, but also help with many complex procedures and processes that directly influence patient outcomes. For example, machine learning can identify patterns in imaging that a human doctor might miss, helping to improve the accuracy and speed of diagnostics across many areas of disease.
- Increase Efficiency: The net impact of streamlined workflows will be significant efficiency gains. The National Bureau of Economic Research estimates that AI could save between $200 billion and $360 billion annually for hospitals, physician groups, and payers. These savings will be generated through measures such as OR optimization and improved capacity management, helping to ease the pressure organizations feel due to ongoing staffing shortages.
However, these benefits rest on the ability of healthcare organizations to ensure AI does not compromise data privacy and security.
HIPAA in the Era of AI: How Automation Puts Patient Data at Risk
The introduction of AI could fundamentally change how healthcare compliance leaders think about HIPAA. While there are numerous areas where the technology creates potential risk, three stand out:
1. Re-Identification Attacks
HIPAA outlines 18 “identifiers” that denote protected health information (PHI), such as name, address, and phone number. If covered entities and business associates (BAs) “de-identify” data by removing these identifiers, the information can be shared without patient authorization, usually for research purposes.
That process of de-identification has allowed many organizations to use sensitive health information to build sophisticated AI models. Cancer detection algorithms, healthcare chatbots, and back-office automation are all developed using information that would constitute a serious HIPAA violation if it ever became identifiable.
Now, research suggests it is often possible to “re-identify” individuals by cross-referencing data from multiple publicly available sources. Criminals can use AI to find patterns within de-identified data and connect them to specific individuals. That means models built on seemingly HIPAA-safe principles may no longer protect patients or prevent data breaches.
2. Shadow AI Adoption
Rapid AI adoption leaves many employees unsure about the data security and privacy implications of tools like ChatGPT. Just 29% of providers and 17% of administrators are aware of their organization’s main AI policies. This awareness gap has led to an uptick in “shadow AI,” where many employees use tools like ChatGPT without explicit consent or oversight from compliance.
The risk is that individuals assume these tools are safe and don’t take proper precautions. These tools often advertise themselves as “HIPAA-Ready,” but what this means is a legal grey area; experts warn that the companies themselves aren’t subject to HIPAA, and therefore are unlikely to meet full compliance requirements.
A recent cybersecurity study found that many healthcare workers routinely upload ePHI to generative AI platforms—with 71% using their personal accounts. These platforms are generally not covered by a business associate agreement (BAA); many have complex data privacy terms that give the company access to any information shared within the tool.
3. Third-Party Vendor Risk
Many covered entities have already worked with vendors that lack a business associate agreement, creating significant HIPAA risk exposure. AI compounds the problem: many of these vendors are adopting the technology just as fast as covered entities, essentially multiplying risk throughout the supply chain.
2025 saw multiple PHI breaches from AI-powered companies, with one “AI-first” solution for insurance enrollment exposing the health information of 87,565 individuals. Covered entities whose patients are implicated in these leaks may be held partially responsible, leaving many highly vulnerable as their vendor networks rapidly roll out AI features that are not fully secured.
Creating AI Guardrails: How to Maintain HIPAA Compliance while Leveraging AI
While a truly AI-ready HIPAA program will evolve along with the technology, our team recommends four key steps to protect your data privacy and security:
1. Policy Development
Clear, enforceable policies are the foundation of any HIPAA-compliant program that leverages AI. Without them, organizations are left relying on employees to make judgment calls about tools and data they may not fully understand.
The challenge will always be time: policies require significant manual effort to develop from scratch, especially when dealing with technology that makes significant breakthroughs every month.
How to Ensure HIPAA Compliance:
- Adopt the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) as the foundation for internal AI governance policies.
- Define which AI tools are approved for use and under what conditions, including any restrictions on PHI input.
- Create a clear approval process for new AI tools before they are adopted by any department.
- Establish guidelines for permissible use of generative AI, including personal vs. work accounts.
- Require documentation of all AI use cases that touch patient data, from planning through deployment.
2. Vendor Management
AI may force covered entities to change how they engage with vendors. Healthcare organizations will need more detailed visibility of how business associates handle patient data, especially around training AI models and their own AI integrations.
How to Ensure HIPAA Compliance:
- Require all vendors using AI to provide updated BAAs that specifically address AI data handling.
- Add AI-specific questions to vendor security assessments and onboarding checklists.
- Monitor vendor announcements for new AI feature rollouts that may affect PHI.
- Establish contractual rights to audit vendors’ AI-related data practices.
- Prioritize vendors who can demonstrate HIPAA compliance across their AI infrastructure.
3. Compliance Visibility
The data security risks associated with AI can easily go unnoticed when tools are integrated into existing workflows. Many of the biggest vulnerabilities—such as shadow AI usage—occur on the frontlines, making it hard for compliance leaders to detect before it’s too late.
Compliance hotlines and other reporting mechanisms play a vital role here. Employees may see colleagues using AI in unsafe ways, but fear that leadership is too invested in the technology to welcome criticism. Speaking out can feel dangerous, as if they will be punished for disrupting the adoption process, even when patient data security is at stake.
How to Ensure HIPAA Compliance:
- Ensure at least one anonymous reporting channel exists specifically for AI-related compliance concerns.
- Actively communicate the hotline to staff during training and onboarding.
- Designate a point of contact responsible for triaging and responding to AI-related reports.
- Track and review reports regularly to identify patterns or emerging risks.
- Protect reporters from retaliation and make that protection explicit in policy.
4. Regular Risk Assessments
Recent innovations in healthcare delivery, such as telehealth, have shown that rapid adoption almost always creates novel data privacy risks. Organizations need evaluation processes that can keep up with rapid change—including purpose-built assessments for AI initiatives at each stage of development.
How to Ensure HIPAA Compliance:
- Increase the frequency of standard HIPAA risk assessments to at least twice annually.
- Develop an AI-specific risk assessment template covering data inputs, model training, output use, and access controls.
- Conduct assessments at three key stages for new AI initiatives: planning, pilot, and full deployment.
- Assign clear ownership for AI risk monitoring within the compliance or security team.
- Review and update assessments whenever a vendor rolls out significant AI changes or a new tool is approved.
Get a Proven Partner to Make AI HIPAA-Safe
The team behind Compliance Resource Center has spent over 30 years at the forefront of healthcare compliance. We’ve helped companies of all sizes develop and maintain effective Compliance Programs that fit their organizational culture and budget.
From our library of policies and procedures to flexible HIPAA training, our solutions are designed to help you navigate evolving regulatory requirements. Because while the technology is new, the process of adapting to new challenges isn’t.
Want to speak with HIPAA experts who understand the pressure you’re under?