
Information Security Awareness Blog

Welcome to the Information Security Awareness blog, your go-to resource for the latest in cybersecurity awareness. This blog offers practical tips, expert advice, and up-to-date information to help you stay secure in the digital world. Whether you're a member of staff or a student, you'll find valuable content to enhance your cybersecurity knowledge and practices.

Get in touch

If you'd like to submit an article for the blog or suggest discussion topics, please contact us.


Why You Shouldn’t Enable Anthropic in Microsoft Copilot - A Security Perspective

As part of our commitment to secure and responsible use of AI tools across the university, we want to highlight a recent advisory from Jisc’s National Centre for AI. Jisc provides us with expert support on digital infrastructure and cybersecurity, and their latest guidance is clear: do not enable Anthropic’s Claude models in Microsoft Copilot 365.

What's Changing

Microsoft has introduced Anthropic’s Claude models into Copilot 365, specifically within the Researcher agent and Copilot Studio. While this may seem like a welcome expansion of AI capabilities, enabling Anthropic comes with significant data protection and compliance risks.

What You Lose When You Enable Anthropic

According to Jisc, turning on Anthropic means your data:

  • Leaves Microsoft’s secure environment
  • Is no longer covered by Microsoft’s audit and compliance controls
  • Does not benefit from Microsoft’s data residency guarantees
  • Is excluded from Microsoft’s Customer Copyright Commitment
  • Is not protected by Microsoft’s service level agreements (SLAs)

Instead, your data is governed by Anthropic’s own commercial terms and data processing agreements, which have not yet been fully analysed by Jisc for risk.

Why This Matters

Microsoft has built trust in Copilot by ensuring data is processed securely within its enterprise environment. Enabling Anthropic undermines that trust and introduces ambiguity into our messaging around AI safety. As Jisc puts it:

“Copilot 365 can still be safe and secure, as long as you do not enable the Anthropic option.”

This is especially critical for us in higher education, where data governance, compliance with the ICO’s 72-hour breach notification rule, and protection of personal data are non-negotiable.

Our Position

We strongly advise not enabling Anthropic in Microsoft Copilot until further security assurances are provided. We will continue to monitor updates from Jisc and Microsoft and share any changes that affect our institutional risk posture.

Tue 30 Sept 2025, 18:44 | Tags: News and updates
