What's Changing
Microsoft has introduced Anthropic’s Claude models into Copilot 365, specifically within the Researcher agent and Copilot Studio. While this may seem like a welcome expansion of AI capabilities, enabling Anthropic comes with significant data protection and compliance risks.
What You Lose When You Enable Anthropic
According to Jisc, turning on Anthropic means your data:
- Leaves Microsoft’s secure environment
- Is no longer covered by Microsoft’s audit and compliance controls
- Does not benefit from Microsoft’s data residency guarantees
- Is excluded from Microsoft’s Customer Copyright Commitment
- Is not protected by Microsoft’s service level agreements (SLAs)
Instead, your data is governed by Anthropic’s own commercial terms and data processing agreements, which have not yet been fully analysed by Jisc for risk.
Microsoft has built trust in Copilot by ensuring data is processed securely within its enterprise environment. Enabling Anthropic undermines that trust and introduces ambiguity into our messaging around AI safety. As Jisc puts it:
“Copilot 365 can still be safe and secure, as long as you do not enable the Anthropic option.”
This is especially critical for us in higher education, where data governance, compliance with the UK GDPR requirement to notify the ICO of personal data breaches within 72 hours, and protection of personal data are non-negotiable.
We strongly advise against enabling Anthropic in Microsoft Copilot until further security assurances are provided. We will continue to monitor updates from Jisc and Microsoft and share any changes that affect our institutional risk posture.
Tue 30 Sept 2025, 18:44