
Microsoft Mechanics Blog

Microsoft Purview protections for Copilot

Zachary-Cavanell
Apr 22, 2025

Use Microsoft Purview and Microsoft 365 Copilot together to build a secure, enterprise-ready foundation for generative AI. Apply existing data protection and compliance controls, gain visibility into AI usage, and reduce risk from oversharing or insider threats.

Classify, restrict, and monitor sensitive data used in Copilot interactions. Investigate risky behavior, enforce dynamic policies, and block inappropriate use — all from within your Microsoft 365 environment.

Erica Toelle, Microsoft Purview Senior Product Manager, shares how to implement these controls and proactively manage data risks in Copilot deployments.

Control what content can be referenced in generated responses.

Check out Microsoft 365 Copilot security and privacy basics.

Uncover risky or sensitive interactions.

Use DSPM for AI to get a unified view of Copilot usage and security posture across your org.

Block access to sensitive resources.

See how to configure Conditional Access using Microsoft Entra.

Watch our video here.

QUICK LINKS:

00:00 — Microsoft Purview controls for Microsoft 365 Copilot

00:32 — Copilot security and privacy basics

01:47 — Built-in activity logging

02:24 — Discover and prevent data loss with DSPM for AI

04:18 — Protect sensitive data in AI interactions

05:08 — Insider Risk Management

05:12 — Monitor and act on inappropriate AI use

07:14 — Wrap up

Link References

Check out https://aka.ms/M365CopilotwithPurview

Watch our show on oversharing at https://aka.ms/OversharingMechanics

Unfamiliar with Microsoft Mechanics?

Microsoft Mechanics is Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.

To keep getting this insider knowledge, join us on social:


Video Transcript:

-Not all generative AI is created equal. In fact, if data security or privacy concerns are holding your organization back, today I’ll show you how the combination of Microsoft 365 Copilot and the data security controls in Microsoft Purview provides an enterprise-ready platform for GenAI in your organization. This way, GenAI is seamlessly integrated into your workflow across familiar apps and experiences, all backed by unmatched data security and visibility to minimize data risk and prevent data loss. First, let’s level set on a few Copilot security and privacy basics. Whether you’re using the free Copilot Chat that’s included with Microsoft 365 or have a Microsoft 365 Copilot license, both honor your existing access permissions to your work information in SharePoint and OneDrive, your Teams meetings, and your email, meaning AI-generated responses can only be based on information that you have access to.
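
That permission trimming is observable from the Microsoft Graph search API, which queries your Microsoft 365 content with the same security trimming Copilot retrieval honors. The sketch below is illustrative only; it assumes you already have a delegated access token (for example, acquired via MSAL) with appropriate read consent. Because the token represents the signed-in user, Graph only returns items that user can already open.

```python
import requests

GRAPH_SEARCH = "https://graph.microsoft.com/v1.0/search/query"

def search_as_user(delegated_token: str, query: str) -> list[dict]:
    """Run a Microsoft Graph search with a *delegated* token.

    Because the token represents the signed-in user, the results are
    security-trimmed: only items that user is permitted to open come back,
    which is the same behavior Copilot's grounding step relies on.
    """
    body = {
        "requests": [
            {
                "entityTypes": ["driveItem", "listItem"],
                "query": {"queryString": query},
            }
        ]
    }
    resp = requests.post(
        GRAPH_SEARCH,
        headers={"Authorization": f"Bearer {delegated_token}"},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["value"]
```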

-Importantly, after you submit a prompt, Copilot retrieves relevant indexed data to generate a response. The data stays within your Microsoft 365 service trust boundary and doesn’t move out of it. Even when the data is presented to the large language models to generate a response, information is kept separate from the model and is not used to train it. This is in contrast to consumer apps, especially the free ones, which are often designed to collect training data. As users upload files into them or paste content into their prompts, including sensitive data, that data is duplicated and stored in a location outside of your Microsoft 365 service trust boundary. This removes any file access controls or classifications you’ve applied and places your data at greater risk.

-And beyond being stored there for indexing or reasoning, it can be used to retrain the underlying model. Next, adding to the foundational protections of Microsoft 365 Copilot, Microsoft Purview has activity logging built in. It helps you discover and protect sensitive data, giving you visibility into current and potential risks, such as the use of unprotected sensitive data in Copilot interactions. It helps you classify and secure data: information protection automatically classifies data and applies sensitivity labels, ensuring it remains protected even when it’s used with Copilot. And it helps you detect and mitigate insider risks, alerting you to employee activities with Copilot that pose a risk to your data, and much more.
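
If you want to pull that built-in activity log programmatically, one option is the Microsoft Graph audit log query API. The sketch below is a hedged example under a few assumptions: an app-only token with the AuditLogsQuery.Read.All permission, the copilotInteraction record type available in your tenant, and the API surfaced at the version shown (it shipped in preview first, so your tenant's endpoint may differ). The query runs asynchronously, so you create it and then poll for results.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def start_copilot_audit_query(app_token: str) -> str:
    """Create an asynchronous audit-log query scoped to Copilot events.

    Returns the query id. Poll GET /security/auditLog/queries/{id} until
    its status is 'succeeded', then page through .../{id}/records.
    """
    body = {
        "displayName": "Copilot interactions - last 7 days",
        "filterStartDateTime": "2025-04-15T00:00:00Z",
        "filterEndDateTime": "2025-04-22T00:00:00Z",
        "recordTypeFilters": ["copilotInteraction"],
    }
    resp = requests.post(
        f"{GRAPH}/security/auditLog/queries",
        headers={"Authorization": f"Bearer {app_token}"},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]
```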

-Over the next few minutes, I’ll focus on the Purview capabilities that help you get ahead of and prevent data loss and insider risks. We’ll start in Data Security Posture Management, or DSPM for AI for short. DSPM for AI is the one place to get a rich, prioritized bird’s-eye view of how Copilot is being used inside your organization and discover corresponding risks, along with recommendations to improve your data security posture that you can implement right from the solution. Importantly, this is where you’ll find detailed dashboards for Microsoft 365 Copilot usage, including agents.

-Then in Activity Explorer, we make it easy to see recent AI interactions that include sensitive information types, like credit cards, ID numbers, or bank accounts. And you can drill into each activity to see details, as well as the prompt and response text generated. One tip here: if you are seeing a lot of sensitive information exposed, it points to an information oversharing issue, where people have access to more information than necessary to do their job. If you find yourself in this situation, I recommend you also check out our recent show on the topic at aka.ms/OversharingMechanics, where I dive into the specific things you should do to assess your Microsoft 365 environment for potential oversharing risks and ensure the right people can access the right information when using Copilot.
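
To make "sensitive information types" concrete, here is a deliberately simplified stand-in for the kind of classifier Purview runs: a pattern match plus a Luhn checksum for credit card numbers. The real SITs combine patterns, checksums, supporting keywords, and confidence levels, so treat this as an illustration, not a replica.

```python
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, the same kind of test the credit-card SIT
    uses to cut down on false positives from random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Flag candidate card numbers in a prompt or response body."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(match.group())
    return hits

print(find_card_numbers("expensed to 4539 1488 0343 6467 yesterday"))
# ['4539 1488 0343 6467']
```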

-Ultimately, DSPM for AI gives you the visibility you need to establish a data security baseline for Copilot usage in your organization, and helps you put preventative measures in place right away. In fact, without leaving DSPM for AI, the recommendations page surfaces the policies we advise everyone to use to improve data security, such as this one for detecting potentially risky interactions using insider risk management, and other recommendations, like this one to detect potentially unethical behavior using communication compliance policies, and more. From there, you can dive into Microsoft Purview’s best-in-class solutions for more granular insights, and to configure specific policies and protections.

-I’ll start with information protection. The information protection policies and sensitivity labels you have in use today also manage data security controls for Microsoft 365 Copilot. In fact, by default, any Copilot response using content with sensitivity labels will automatically inherit the highest priority label from the referenced content. And using data loss prevention policies, you can prevent Copilot from processing any content that has a specific sensitivity label applied. This way, even if users have access to those files, Copilot will effectively ignore this content as it retrieves relevant information from Microsoft Graph to generate responses. Insider risk management helps you catch data risk based on trending activities of people on your network, using established user risk indicators and thresholds, and then uses policies to prevent accidental or intentional data misuse as people interact with Copilot. You can easily create policies from quick policy templates, like this one looking for high-risk data leak patterns from insiders.
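
As a mental model for those two behaviors, here is a small, hypothetical sketch: a response inherits the highest-priority label among its referenced items, and a DLP-style exclusion list drops labeled items from the grounding set before the model ever sees them. The label names, priorities, and exclusion set are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class SourceItem:
    name: str
    label: str | None  # sensitivity label name, if any

# Higher number = higher priority, mirroring how Purview orders labels.
LABEL_PRIORITY = {"General": 0, "Confidential": 1, "Highly Confidential": 2}

# Labels a (hypothetical) DLP policy excludes from Copilot processing.
DLP_EXCLUDED = {"Highly Confidential"}

def grounding_set(items: list[SourceItem]) -> list[SourceItem]:
    """Drop items whose label a DLP policy excludes before retrieval results
    reach the model: the user keeps file access, Copilot just ignores them."""
    return [i for i in items if i.label not in DLP_EXCLUDED]

def inherited_label(items: list[SourceItem]) -> str | None:
    """A response inherits the highest-priority label among its references."""
    labels = [i.label for i in items if i.label]
    return max(labels, key=LABEL_PRIORITY.__getitem__, default=None)
```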

-By default, this quick policy will scope all users and groups, with a defined triggering event of data exfiltration, along with activity indicators including external sharing, bulk downloads, label downgrades, and label removal, in addition to other activities that indicate a high risk of data theft. And it doesn’t stop there. As individuals perform more risky activities, those can add up to elevate that user’s risk level. Here, instead of manually adjusting data security policies, you can use Adaptive Protection controls to limit Copilot use depending on a user’s dynamic risk level, for example, when a user exceeds your defined risk condition thresholds and reaches an elevated risk level, as you can see here.
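
To illustrate how indicator activity can "add up" to a dynamic risk level, here is a toy scoring model. The weights, thresholds, and level names are invented for the example; in Purview you tune the real equivalents on the insider risk policy, and Adaptive Protection reacts to the resulting level.

```python
# Hypothetical weights; in Purview these correspond to the indicator
# thresholds you tune on the insider risk management policy.
INDICATOR_WEIGHTS = {
    "external_sharing": 10,
    "bulk_download": 25,
    "label_downgrade": 30,
    "label_removal": 30,
}

RISK_LEVELS = [(70, "elevated"), (40, "moderate"), (0, "minor")]

def risk_level(events: list[str]) -> str:
    """Sum recent indicator hits into a cumulative score, then map it to a
    level; Adaptive Protection applies controls (such as limiting Copilot
    use) once a user reaches 'elevated'."""
    score = sum(INDICATOR_WEIGHTS.get(e, 0) for e in events)
    for threshold, level in RISK_LEVELS:
        if score >= threshold:
            return level
    return "none"

print(risk_level(["bulk_download", "label_downgrade", "label_removal"]))
# elevated
```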

-Using Conditional Access policies in Microsoft Entra, in this case based on authentication context, as well as the condition for insider risk that you set in Microsoft Purview, you can block access when those users attempt to open sites with a specific sensitivity label. That way, even if a user is granted access to a SharePoint site resource by an owner, their access will be blocked by the Conditional Access policy you set. Again, this is important because Copilot honors the user’s existing permissions to work with information. This way, Copilot will not return information that they do not have access to.
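
If you script this with Microsoft Graph, a policy like the one described might look roughly like the sketch below. It assumes an app token with the Policy.ReadWrite.ConditionalAccess permission, that you have already bound authentication context "c1" to your sensitivity label in Purview, and that the insiderRiskLevels condition is available in your tenant (it was in preview at the time of writing, hence the beta endpoint).

```python
import requests

GRAPH = "https://graph.microsoft.com/beta"

def block_elevated_risk_users(app_token: str) -> dict:
    """Create a Conditional Access policy that blocks users whose Purview
    insider risk level is 'elevated' from opening resources protected by
    authentication context 'c1' (the context bound to a sensitivity label)."""
    policy = {
        "displayName": "Block elevated-risk access to labeled sites",
        "state": "enabledForReportingButNotEnforced",  # report-only while testing
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {
                "includeAuthenticationContextClassReferences": ["c1"]
            },
            "insiderRiskLevels": "elevated",
        },
        "grantControls": {"operator": "OR", "builtInControls": ["block"]},
    }
    resp = requests.post(
        f"{GRAPH}/identity/conditionalAccess/policies",
        headers={"Authorization": f"Bearer {app_token}"},
        json=policy,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```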

-Next, communication compliance is a related insider risk solution that can act on potentially inappropriate Copilot interactions. In fact, there are specific policy options for Microsoft 365 Copilot interactions in communication compliance where you can flag jailbreak or prompt injection attempts using Prompt Shields classifiers. Communication compliance can be set to alert reviewers of that activity so they can easily discover policy matches and take corresponding actions. For example, if a person tries to use Copilot in an inappropriate way, like trying to get it to work around its instructions to generate content that Copilot shouldn’t, it will report on that activity, and you’ll also be able to see the response informing the user that their activity was blocked.
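
Communication compliance wires these classifiers up for you inside Microsoft 365, but the same Prompt Shields detection is also exposed through the standalone Azure AI Content Safety service, which is useful for seeing what a jailbreak verdict looks like. A rough sketch, assuming you have a Content Safety resource and key (the endpoint and API version may differ for your deployment):

```python
import requests

# Hypothetical resource endpoint and key for a standalone Azure AI
# Content Safety deployment; replace with your own values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API = f"{ENDPOINT}/contentsafety/text:shieldPrompt?api-version=2024-09-01"

def jailbreak_detected(key: str, user_prompt: str) -> bool:
    """Ask Prompt Shields whether a user prompt looks like a jailbreak
    or prompt injection attempt."""
    resp = requests.post(
        API,
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
        json={"userPrompt": user_prompt, "documents": []},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json()["userPromptAnalysis"]["attackDetected"]
```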

-Once you have the controls you want in place, it’s a good idea to keep going back to DSPM for AI so you can see where Copilot usage is matching your data security policies. Sensitive interactions per AI app shows you interactions based on sensitive information types. Top unethical AI interactions surfaces insights based on the communication compliance controls you’ve defined. Top sensitivity labels referenced in Microsoft 365 Copilot reports on the labels you’ve created and applied to referenced content. And you can see Copilot interactions mapped to insider risk severity levels. Then digging into these reports shows you a filtered view of activities in Activity Explorer, with time-based trends and details for each. Additionally, because all Copilot interactions are logged, like other Microsoft 365 activities in email, Microsoft Teams, SharePoint, and OneDrive, you can now use the new data security investigations solution, which uses AI to quickly reason over thousands of items, including Copilot Chat interactions, to help you investigate the potential cause of known data leaks and similar incidents.
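
If you export those interaction events, for example from Activity Explorer or the audit log, rolling them up into report-style counts is straightforward. The record shape below is invented for illustration:

```python
from collections import Counter

# Toy export shape; real records come from an Activity Explorer or
# audit-log export and carry many more fields.
events = [
    {"app": "Microsoft 365 Copilot", "sensitive_types": ["Credit Card Number"]},
    {"app": "Microsoft 365 Copilot", "sensitive_types": []},
    {"app": "Copilot Chat", "sensitive_types": ["U.S. Bank Account Number"]},
]

def sensitive_interactions_per_app(rows: list[dict]) -> Counter:
    """Mirror the 'Sensitive interactions per AI app' report: count only
    events that matched at least one sensitive information type."""
    return Counter(r["app"] for r in rows if r["sensitive_types"])

print(sensitive_interactions_per_app(events))
# Counter({'Microsoft 365 Copilot': 1, 'Copilot Chat': 1})
```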

-So that’s how Microsoft 365 Copilot, along with Microsoft Purview, provides comprehensive controls to help protect your data, minimize risk, and quickly identify Copilot interactions that could lead to compromise so you can take corrective actions. No other AI solution has this level of protection and control. To learn more, check out aka.ms/M365CopilotwithPurview. Keep watching Microsoft Mechanics for the latest updates and thanks for watching.

 
