You’ve likely heard it from your executives, in online forums, and even in television ads: with AI products like ChatGPT and Claude advancing in leaps and bounds, people in all fields should take advantage of AI’s productivity-boosting capabilities.
And many in the healthcare industry are.
Here are two examples of what well-meaning clinicians did on their own with AI tools:
- A physician asked an AI product to rewrite a discharge summary at a 4th-grade reading level so his patient could understand it better. But discharge summaries often draw on copyrighted, subscription-based content, and the licensing agreements forbid putting any of that content into a generative AI model.
- Another physician – a recent medical school graduate – wrote a Python script that sent clinical notes to ChatGPT so the model could organize and draft them. (A sketch of how little code that takes appears below.)
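To see why this kind of shadow AI is so hard to stop, consider how little code it takes. The sketch below is hypothetical – it is not the physician’s actual script – and it assumes the OpenAI Python SDK, an illustrative model name, and a local `notes` folder. Its only purpose is to show that a few lines are enough to move protected health information outside your organization’s control:

```python
# Hypothetical sketch of the kind of script described above.
# Assumes: the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
# and clinical notes sit in a local "notes" folder. All names here
# are illustrative, not taken from the actual incident.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def organize_note(raw_note: str) -> str:
    # At this single call, the entire note – names, diagnoses,
    # identifiers – leaves the organization for a third-party service.
    # Once sent, it can be neither retrieved nor deleted.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Organize this clinical note into standard sections."},
            {"role": "user", "content": raw_note},
        ],
    )
    return response.choices[0].message.content


# Process every note in the folder and save the organized versions.
for note_file in Path("notes").glob("*.txt"):
    organized = organize_note(note_file.read_text())
    note_file.with_suffix(".organized.txt").write_text(organized)
```

Nothing in the script itself raises a flag; only network-level visibility or data loss prevention controls would catch the traffic, which is exactly why governance and monitoring matter.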
Unfortunately, such cases of “shadow AI” – the unauthorized use of AI in the workplace – now represent the biggest data exfiltration risk the healthcare field has ever faced.
In our recent webinar, two Fortified executives discuss how to deploy AI products in a way that’s visible, sanctioned and safe. Here are some of the key takeaways.
Dangers of Unvetted AI Usage
Unauthorized AI usage poses serious security risks for many reasons, including:
- When data enters an AI platform, it leaves your organization’s control. Once uploaded, it cannot be retrieved or deleted.
- Some AI products “hallucinate,” confidently producing wrong information rather than saying, “Sorry, I don’t know.”
- Any product that can do the work of 10 people can also do the security damage of 10 people.
- Unvetted AI opens the door to potential lawsuits. Some Human Resources departments, for example, used an AI product to help evaluate job candidates; the tool was later found to have a built-in bias, and some of those companies are now facing lawsuits.
A Block-All-AI Policy Is Not the Solution
Some healthcare organizations have already imposed an across-the-board ban on using AI products, but that approach isn’t a good long-term solution. Healthcare workers are likely to find workarounds, like using AI on a personal device.
Your organization’s message should be more along the lines of, “We’re in favor of new productivity tools, but we also want to keep our patient data safe. We’re not being punitive; we’re just trying to protect the sensitive data we’ve been entrusted with.”
If there’s an AI tool that can read a mammogram better than an experienced radiologist, that’s certainly worth evaluating. Just remember that many of these AI products are very new. They’re not bulletproof, and they warrant careful vetting.
Establishing an AI Governance Committee
Effective governance is a vital part of healthcare data security. Most organizations already have a solid IT governance structure in place – and that’s a prerequisite for launching an AI governance committee.
This committee should have the full support and involvement of C-suite leaders. Drawing on the combined oversight of IT, cybersecurity, compliance and legal departments, the AI governance committee should take the following steps:
- Make it crystal-clear throughout the organization that the #1 priority is protecting patients’ private health data.
- Let all employees know that the organization wants to enable the use of AI products, but in a way that provides proven data visibility and security.
- Build security analysis into the early stages of evaluating AI products, rather than treating it as a rubber stamp at the end.
- Carefully evaluate every prospective AI contract and product design. It’s imperative to know exactly where the data goes.
- Establish a separate process for evaluating third-party AI vendors, because you’ll probably have to ask different questions than in standard third-party risk management (TPRM) assessments.
It’s also a good idea to include a few younger clinicians on the AI governance committee so they can share their generation’s views and expectations about AI.
MSSPs Can Offer Guidance
If your organization hasn’t formed an AI governance committee, your managed security partner can share best practices for AI governance and ways to ensure data visibility.
Listen to our webinar to learn more about effective AI governance. The goal is to make safe AI usage an easier choice than unsafe use.