Alert essentials:
CoPilot protections could have been bypassed with a simple email, allowing threat actors to exfiltrate data from Microsoft 365 users without user interaction or awareness.
The weakness has been patched, and no action from users is needed at this time.
Detailed threat description:
A critical vulnerability targeting Microsoft 365 Copilot, which could have enabled attacks against users, was patched by Microsoft before the issue was made public.
The first known zero-click vulnerability targeting Artificial Intelligence (AI) tools is an indirect Prompt Injection Vulnerability, assigned CVE-2025-32711. Researchers responsible for discovering the flaw in Microsoft’s CoPilot named the new exploitation technique “LLM Scope Violation.” This novel procedure would have enabled data exfiltration from M365 users without alerting the user in any way.
CoPilot relies on the large language model (LLM) of OpenAI’s Chat GPT and Microsoft’s web API Graph to retrieve files and generate requested content. One of the barriers the company built into the product to prevent prompt injections is content filtering. However, an attacker can bypass this safeguard by avoiding certain keywords and embedding malicious instructions in external content.
Artificial intelligence promises operational efficiencies and workflow optimization. In healthcare, AI streamlines administrative tasks, enhances the early detection of diseases, and personalizes treatment plans for patients.
Yet this example warns how LLMs can be manipulated through prompt injections and adversarial input, thus greatly expanding an organization’s attack surface. Left unpatched, EchoLeak could have released company information via an email with simple instructions. Microsoft has issued an update that addresses the scope violation, and no customer action is required for this weakness.
While AI offers significant potential for improving cybersecurity, it also introduces new risks and challenges. Organizations must carefully assess these risks and implement effective security measures to safeguard their AI systems and data.
Healthcare providers should strike a balance between innovation and security, remain vigilant for emerging threats, and collaborate closely with regulatory bodies. By doing so, they can safely harness the power of AI while safeguarding their systems and mission.
Impacts on healthcare organizations:
With the surge of Generative AI, cyberattacks are becoming increasingly complex, and healthcare providers struggle to protect sensitive data and systems while innovating.
Utilizing AI tools in healthcare organizations presents a multifaceted and escalating threat, with implications that encompass patient safety, operational continuity, regulatory compliance, and public trust. Healthcare organizations should implement AI-specific governance frameworks, segment networks, and train staff on AI threat awareness.
Affected Products / Versions
CVEs
- CVE-2025-32711- CWE-77- CVSS 9.3
Recommendations
Engineering recommendations:
- Ensure all systems, applications, and firmware are regularly updated
- Use delimiters in system messaging to assist AI in determining user input from harmful external content
Leadership / Program recommendations:
- Adopt a Zero-trust network architecture
- Deploy AI-powered anomaly detection tools to identify unusual behavior across endpoints, networks, and user activity
- AI Prompt Shields are a solution developed by Microsoft to defend against both direct and indirect prompt injection attacks
- Spotlighting helps the AI system distinguish between valid system instructions and potentially untrustworthy external inputs
- LLM applications are inherently not secure and can be weaponized by adversaries
- Foster a security-first culture where employees feel empowered and informed
- Carefully consider the potential outcomes of using artificial intelligence and provide direction to defenders by developing governing AI policies
Fortified recommends applying patches and updates where possible and only after adequate testing in a development environment to ensure stability and compliance with organizational change management policies.
References:
- Blogpost: https://www.aim.security/lp/aim-labs-echoleak-blogpost
- Fortified Health Security Horizon Report-AI Generated Cyberattacks: https://fortifiedhealthsecurity.com/pdf/2025-Horizon-Report-State-of-Cybersecurity-in-Healthcare.pdf
- Microsoft: https://devblogs.microsoft.com/blog/protecting-against-indirect-injection-attacks-mcp
- Microsoft Enhanced AI Security: https://azure.microsoft.com/en-us/blog/enhance-ai-security-with-azure-prompt-shields-and-azure-ai-content-safety
- NIST: https://nvd.nist.gov/vuln/detail/cve-2025-32711
- Owap.org: OWASP-Top-10-for-LLMs-v2025.pdf