Recent activity associated with ShinyHunters-branded extortion campaigns reinforces a critical shift in healthcare cybersecurity: attackers are increasingly targeting identity, SaaS platforms, and trusted third-party access paths rather than relying only on malware or traditional ransomware deployment.
The FBI has previously warned that recent campaigns target Salesforce environments for data theft and extortion, including activity associated with UNC6040 and UNC6395. In some cases, victims later received extortion demands from actors calling themselves ShinyHunters. Google/Mandiant has also reported an expansion of ShinyHunters-branded SaaS data theft activity involving voice phishing, credential harvesting, SSO compromise, MFA enrollment abuse, and cloud-based data exfiltration.
Deeper Dive into ShinyHunters
ShinyHunters-style attacks often begin with identity compromise, social engineering, unauthorized SaaS access, or abuse of trusted cloud integrations. These attacks may not immediately disrupt clinical systems, encrypt files, or trigger traditional ransomware indicators. Instead, attackers can move quietly through valid access paths, identify sensitive datasets, and exfiltrate information from systems that appear operationally normal.
That distinction matters. A healthcare organization may continue delivering care while sensitive business, employee, customer, vendor, or patient-adjacent data has already been copied or transferred through trusted platforms.
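Because this style of attack leaves systems "operationally normal," one practical detection signal is export volume per identity compared against its own history. The sketch below is a minimal illustration of that idea, assuming audit-log events have already been reduced to (identity, records_exported) pairs; the field names, identities, and thresholds are hypothetical, not tied to any specific SaaS vendor's API.

```python
from collections import defaultdict

def flag_anomalous_exports(events, baseline, multiplier=5.0):
    """Flag identities whose daily export volume exceeds a multiple of
    their historical baseline -- a rough 'quiet exfiltration' signal."""
    totals = defaultdict(int)
    for identity, records in events:
        totals[identity] += records
    flagged = []
    for identity, total in totals.items():
        normal = baseline.get(identity, 0)
        # Identities with no baseline are skipped; in practice they
        # deserve separate review as unknown or newly created accounts.
        if normal and total > normal * multiplier:
            flagged.append((identity, total, normal))
    return flagged

# Illustrative data: a service integration suddenly exporting 60x normal.
events = [("svc-crm-integration", 120_000), ("jdoe", 40)]
baseline = {"svc-crm-integration": 2_000, "jdoe": 50}
print(flag_anomalous_exports(events, baseline))
```

A real deployment would pull this from SaaS audit logs (for example, Salesforce event monitoring) and tune the multiplier per role, but the core question is the same: is this trusted identity behaving like itself?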
Why It Matters
When attackers gain access through a trusted identity or third-party platform, the critical question becomes: how far does that trust extend?
Third-party services are deeply integrated with essential functions such as billing, scheduling, communications, support, CRM, identity management, analytics, and customer-facing operations. A compromise of one of these services can create downstream exposure across multiple business functions, even when the EHR and core clinical systems remain available.
What Leaders Must Know
Resilience now encompasses systems the organization does not fully own. Healthcare organizations can strengthen their clinical systems and still remain exposed through platforms, providers, and integrations that support the broader business of patient care.
Leaders need a clear understanding of which third-party services are most critical, what access those services have, what data they handle, how they are monitored, and how the organization would respond if trust in these partners were to be compromised.
Discussion Points for the Team
- Which third-party platforms have the most trusted access into the organization?
- Which outside services would cause the most disruption if trust in them were lost?
- What sensitive data sits in support, billing, scheduling, CRM, or other business platforms outside the EHR?
- Where are we most dependent on cloud or SaaS platforms to keep operations moving?
- How quickly could we identify and contain quiet access through a trusted third-party system?
- Who owns resilience planning for critical third-party services across security, IT, operations, compliance, legal, and executive leadership?
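The first two discussion points amount to a ranked inventory of third-party trust. One lightweight way to start is a simple exposure score combining access breadth, data sensitivity, and operational criticality. The sketch below assumes illustrative 1-3 scales and made-up vendor entries; any real scoring model would need input from security, operations, and compliance.

```python
def rank_third_parties(vendors):
    """Rank vendors by a simple exposure score:
    access breadth x data sensitivity x operational criticality.
    The 1-3 scales are an assumption for illustration only."""
    scored = [
        (v["name"], v["access"] * v["sensitivity"] * v["criticality"])
        for v in vendors
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Hypothetical vendor entries scored on 1-3 scales.
vendors = [
    {"name": "billing-platform", "access": 3, "sensitivity": 3, "criticality": 3},
    {"name": "scheduling-saas", "access": 2, "sensitivity": 2, "criticality": 3},
    {"name": "analytics-tool", "access": 1, "sensitivity": 2, "criticality": 1},
]
print(rank_third_parties(vendors))
```

Even a rough ranking like this gives leadership a defensible starting point for deciding where monitoring, contractual controls, and contingency planning should go first.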
Threats To Be Aware Of
Emergency Triage Required for FortiClient EMS
Overview: A critical vulnerability, CVE-2026-35616, affecting FortiClient EMS versions 7.4.5 through 7.4.6 was actively exploited in the wild and added to CISA’s Known Exploited Vulnerabilities catalog.
Healthcare Impact: Because EMS is a centralized management platform, compromise there can quickly become a larger problem for trust, control, and containment across the managed environment.
Recommended Actions: Apply the hotfix or confirm the upgrade path to 7.4.7 or later and identify any older EMS versions still exposed. Treat endpoint management infrastructure as a high-value control point.
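Identifying older EMS versions still exposed is ultimately a version-comparison exercise. The sketch below is a minimal triage helper, assuming an inventory of hostname-to-version strings (the host names are hypothetical) and using the fixed release of 7.4.7 noted above as the threshold.

```python
def needs_remediation(version, fixed=(7, 4, 7)):
    """Return True if a FortiClient EMS version string is below the
    fixed release (7.4.7, per the advisory summarized above)."""
    parts = tuple(int(p) for p in version.split("."))
    # Python compares tuples element by element, so (7, 4, 5) < (7, 4, 7).
    return parts < fixed

# Hypothetical EMS inventory: hostname -> installed version.
inventory = {"ems-east": "7.4.5", "ems-west": "7.4.7", "ems-lab": "7.4.6"}
exposed = [host for host, ver in inventory.items() if needs_remediation(ver)]
print(exposed)  # hosts still on affected versions
```

In practice the inventory would come from an asset-management or vulnerability-scanning feed rather than a hand-built dictionary, but the triage logic is the same.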
Questions to Ask Your Team:
- Are any EMS instances still exposed or pending remediation?
- If EMS were compromised, what controls or endpoints would be affected first?
Nation-State Persistence Risk: FIRESTARTER Backdoor on Cisco Edge Devices
Overview: A nation-state actor used FIRESTARTER malware to maintain persistent backdoor access on Cisco Firepower and Secure Firewall devices running ASA or FTD software after exploitation of previously disclosed vulnerabilities.
Healthcare Impact: For healthcare organizations, trusted edge devices support connectivity, segmentation, and remote access. If trust in those devices is lost, the issue becomes larger than a patching exercise.
Recommended Actions: Validate exposure, confirm whether compromised devices were fully evicted, and ensure the organization has a process to demonstrate a clean state on critical network infrastructure after a high-confidence compromise.
Questions to Ask Your Team:
- Which edge devices would create the most disruption if trust in them were lost?
- Where do we still assume patched means clean?
Peer Pulse: Healthcare AI and Cybersecurity with Bob Swaskoski, Chief Security Officer at Heritage Valley Health System
Russell: How do you define responsible AI use in a healthcare organization?
Bob: To use AI responsibly we must begin by invoking the first rule of medicine: “do no harm.” While the technology itself and its potential uses and value continue to rapidly evolve, we must not be distracted by the hype; instead, we need to build upon our existing risk management experience to develop a structured framework that is transparent, protects patient safety, preserves clinical judgment, ensures regulatory compliance, maintains privacy and security, and prevents operational or financial harm.
Russell: As AI adoption expands, where are the biggest governance gaps or unanswered questions?
Bob: AI technology and adoption are growing globally at an exponential rate, and as a result, the governance gaps are growing daily. Many healthcare organizations are not prepared to address both the strategic and operational questions that must be answered before AI solutions can be implemented and used safely. Ownership and accountability across clinical, financial, and administrative domains must be clearly defined, as must output validation standards to ensure that inherent AI flaws such as bias and hallucinations are not introduced.
Russell: What should healthcare leaders look for when evaluating third-party AI vendors?
Bob: For organizations that already have mature procurement procedures to evaluate and on-board new technology vendors, the good news is that you can modify them to include considerations specific to AI such as full transparency and documentation for their system including model types, training data sources, bias and hallucination trends as well as validation, testing, and safety guardrails to prevent misuse. While it is common to evaluate a vendor’s strategic roadmap as well as their stability and maturity in the market, that is certainly more challenging given how new AI solutions are to the market.
Russell: What categories of data should never be entered into public or lightly governed AI tools?
Bob: Of course, this should include both restricted (PHI, PII, PCI) and confidential data. The broad availability of AI solutions on every computer and smartphone significantly increases the risk of data exposure, which necessitates strong policy and technical controls to prohibit any exposure.
Russell: Who should ultimately own AI governance in healthcare?
Bob: Regardless of the specific role, the most important consideration is for there to be a single accountable person for AI governance across the enterprise, and for that governance to be implemented and managed with a cross-functional structure. Every organization’s culture is unique, and it might seem logical to think that AI is a technology or a compliance problem, but since AI solutions will be used in clinical, financial, and administrative domains, it is essential that governance is managed consistently at the C-level.
Russell: How should organizations prepare for AI-assisted phishing, impersonation, and related social engineering risks?
Bob: AI is already proving that it can produce near-perfect email and voice impersonations at scale to fuel social engineering attacks. As a result, security awareness for the workforce requires a complete redesign to focus not on “spotting errors” but rather on workflows that assess user behavior and include strong identity verification methods. As existing defensive tools are enhanced to better identify suspicious communications, AI-enhanced phishing simulations will be useful to raise the workforce skill level in identifying these threats.
Russell: How much human oversight is needed before AI output is trusted in clinical or operational settings?
Bob: All risks should be assessed and stratified but there is no clinical scenario where AI output is trusted without accountable human review and validation. Even for uses that are categorized as “low risk” such as summarizing non-clinical documents or reformatting text in emails, it is imperative that humans still own and are responsible for the outcome.
Russell: What should improve before healthcare can say it is using AI in a way that is both effective and secure?
Bob: Everything. AI is evolving faster than the controls needed to manage it, and without an effective governance structure including regulatory guidelines, clearly defined ownership, data loss prevention, human-in-the-loop validation, and workforce readiness, AI will remain a high-risk initiative.
Closing Perspective
The organizations best positioned to defend themselves in today’s environment are not those that wait for perfect clarity. They are the ones that take early action to manage what is already present, reinforce the most critical controls, and systematically ensure that innovation does not outpace readiness. In healthcare, resilience is no longer limited to the systems we own. It includes the identities we trust, the vendors we depend on, the platforms we connect, and the decisions we make before disruption occurs. The goal is not just to prevent the next incident. It is to preserve confidence, continuity, and patient care when trust in a critical system is tested.