As part of our commitment to secure and responsible use of AI tools across the university, we want to highlight a recent advisory from Jisc’s National Centre for AI. Jisc provides us with expert support on digital infrastructure and cybersecurity, and their latest guidance is clear: Do not enable Anthropic’s Claude models in Microsoft Copilot 365.
Microsoft has introduced Anthropic’s Claude models into Copilot 365, specifically within the Researcher agent and Copilot Studio. While this may look like a welcome expansion of AI capabilities, enabling the Anthropic option comes with significant data protection and compliance risks.
What You Lose When You Enable Anthropic
According to Jisc, turning on Anthropic means your data:
- Leaves Microsoft’s secure environment
- Is no longer covered by Microsoft’s audit and compliance controls
- Does not benefit from Microsoft’s data residency guarantees
- Is excluded from Microsoft’s Customer Copyright Commitment
- Is not protected by Microsoft’s service level agreements (SLAs)
Instead, your data is governed by Anthropic’s own commercial terms and data processing agreements, which Jisc has not yet fully analysed for risk.
Why This Matters
Microsoft has built trust in Copilot by ensuring data is processed securely within its enterprise environment. Enabling Anthropic undermines that trust and introduces ambiguity into our messaging around AI safety. As Jisc puts it:
“Copilot 365 can still be safe and secure, as long as you do not enable the Anthropic option.”
This is especially critical for us in higher education, where data governance, compliance with the UK GDPR requirement to notify the ICO of reportable personal data breaches within 72 hours, and protection of personal data are non-negotiable.