Data Security
Data security for agents and 3rd party AI in Microsoft Purview
With built-in visibility into how AI apps and agents interact with sensitive data, whether inside Microsoft 365 or across unmanaged consumer tools, you can detect risks early, take decisive action, and enforce the right protections without slowing innovation. See usage trends, investigate prompts and responses, and respond to potential data oversharing or policy violations in real time. From compliance-ready audit logs to adaptive data protection, you’ll have the insights and tools to keep data secure as AI becomes a part of everyday work. Shilpa Ranganathan, Microsoft Purview Principal Group PM, shares how to balance AI innovation with enterprise-grade data governance and security.

Move from detection to prevention: built-in, pre-configured policies you can activate in seconds. Check out DSPM for AI.
Monitor risky usage and take action: block risky users from uploading sensitive data into AI apps. See how to use DSPM for AI.
Set instant guardrails: use DSPM for AI to identify AI agents that may be at risk of data oversharing and take action. Get started.

QUICK LINKS:
00:00 — AI app security, governance, & compliance
01:30 — Take action with DSPM for AI
02:08 — Activity logging
02:32 — Control beyond Microsoft services
03:09 — Use DSPM for AI to monitor data risk
05:06 — ChatGPT Enterprise
05:36 — Set AI agent guardrails using DSPM for AI
06:44 — Data oversharing
08:30 — Audit logs
09:19 — Wrap up

Link References: Check out https://aka.ms/SecureGovernAI

Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
Keep getting this insider knowledge, join us on social:
Follow us on Twitter: https://twitter.com/MSFTMechanics
Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics

Video Transcript:

-Do you have a good handle on the data security risks introduced by the growing number of GenAI apps inside your organization? Today, 78% of users are bringing their own AI tools, often consumer-grade, to use as they work, bypassing the data security protections you’ve set. And now, combined with the increased use of agents, it can be hard to know what data is being used in AI interactions and to keep valuable data from leaking outside of your organization.

-In the next few minutes, I’ll show you how enterprise-grade data security, governance, and compliance can go hand in hand with GenAI adoption inside your organization with Data Security Posture Management (DSPM) for AI in Microsoft Purview. This single solution not only gives you automatic visibility into Microsoft Copilot and the custom apps and agents in use inside your organization, but also extends visibility into AI interactions happening across non-Microsoft AI services that may be in use.
Risk analytics then help you see at a glance what’s happening with your data, with a breakdown of the top unethical AI interactions and sensitive data interactions per AI app, along with how employees are interacting with apps based on their risk profile: high, medium, or low. And specifically for agents, we also provide dedicated reports to expose the data risks posed by agents in Microsoft 365 Copilot and maker-created agents from Copilot Studio. And visibility is just one half of what we give you. You can also take action.

-Here, DSPM for AI provides proactive recommendations to help you take immediate action to enhance your data security and compliance posture right from the service, using built-in, pre-configured Microsoft Purview policies. And with all AI interactions audited, not only do you get the visibility I just showed, but the data is automatically captured for Data Lifecycle Management, eDiscovery, and Communication Compliance investigations. In fact, clicking on this one recommendation for compliance controls can help you set up policies in all these areas.

-Now, if you’re wondering how activity signals from AI apps and agents flow into DSPM for AI in the first place, the good news is that for the AI apps and agents you build with either Microsoft Copilot services or with Azure AI, even if you haven’t configured a single policy in Microsoft Purview, activity logging is enabled by default, and built-in reports are generated for you out of the gate. As I showed, visibility and control extend beyond Microsoft services as soon as you take proactive action.
Directly from DSPM for AI, the fortify data security recommendation, for example, when activated, leverages Microsoft Purview’s built-in classifiers under the covers to detect sensitive data and log interactions: from local app traffic over the network, at the device level to protect file system interactions on Microsoft Purview-onboarded PCs and Macs, and even from web-based apps running in Microsoft Edge, helping prevent risky users from leaking sensitive data.

-Next, with insights now flowing in, let me walk you through how you can use DSPM for AI every day to monitor your data risks and take action. I’ll start again from the reports in the overview to look at GenAI apps that are popular in our organization. Something really concerning is the ones in use by my riskiest users, who are interacting with popular consumer apps like DeepSeek and Google Gemini. ChatGPT consumer is at the top of the list, and it’s not a managed app for our organization. It’s brought in by users who are either using it for free or with a personal license, but what’s really concerning is that it has the highest number of risky users interacting with it, which could increase our risk of data loss. Now, my first inclination might be to block usage of the app outright. That said, if I scroll back up, instead I can see a proactive recommendation to prevent sensitive data exfiltration in ChatGPT with adaptive protection.

-Clicking in, I can see the types of sensitive data shared by users in their prompts. Creating this policy will log the actions of minor-risk users and block high-risk users from typing or uploading sensitive information into ChatGPT. I can also choose to customize this policy further, but I’ll keep what’s there and confirm. And with the policies activated, now let me show you the result. Here we have a user with an elevated risk level. They’re entering sensitive information into the prompt, and when they submit it, they are blocked.
On the other hand, when a user with a lower risk level enters sensitive information and submits their prompt, they’re informed that their actions are being audited.

-Next, as an admin, let me show you how this activity was audited. From DSPM for AI, in the Activity Explorer, I can see all interactions and any matching sensitive information types. Here’s the activity we just saw, and I can click into it to see more details, including exactly what was shared in the user’s prompt. Now, for ChatGPT Enterprise, there’s even more visibility due to the deep API integration with Microsoft Purview. By selecting this recommendation, you can register your ChatGPT Enterprise workspace to discover and govern AI interactions. In fact, this recommendation walks you through the setup process. Then, with the interactions logged in Activity Explorer, not only are you able to see what prompts were submitted, but you can also get complete visibility into the generated responses.

-Next, with the rapid development of AI agents, let me show you how you can use DSPM for AI to discover and set guardrails around information used with your user-created agents. Clicking on agents takes you to a filtered view. Immediately, I can see indicators of a potential oversharing issue, where data access permissions may be too broad and not enough of my data is labeled with corresponding protections. I can also see the total agent interactions over time and the top five agents open to internet users, with interactions by unauthenticated or anonymous users. This is where people outside of my organization are interacting with agents grounded on my organization’s data, which can put sensitive data at real risk.

-I can also quickly see a breakdown of sensitive interactions per agent, along with the top sensitivity labels referenced, to get an idea of the type of data in use and how well protected it is.
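Conceptually, the risk-adaptive enforcement demonstrated earlier (blocking elevated-risk users while auditing lower-risk ones) boils down to a simple policy decision. The sketch below is illustrative only; the tier names and function are simplified placeholders, not Purview's actual implementation:

```python
# Illustrative sketch of risk-adaptive enforcement: the action taken on a
# prompt depends on whether it contains sensitive data and on the user's
# current insider-risk level. (Simplified; not Purview's real logic.)

def enforce(risk_level: str, contains_sensitive_data: bool) -> str:
    """Decide what happens when a user submits a prompt to an AI app."""
    if not contains_sensitive_data:
        return "allow"                       # nothing sensitive detected
    if risk_level == "elevated":
        return "block"                       # elevated-risk users are blocked
    if risk_level in ("moderate", "minor"):
        return "audit"                       # lower-risk users are logged and notified
    return "allow"                           # unknown risk level: no enforcement

print(enforce("elevated", True))   # block
print(enforce("minor", True))      # audit
```

The key design point this models is that the same policy yields different outcomes per user, so low-risk users keep working while high-risk exfiltration paths are cut off.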
To find out more, from the Activity Explorer, I can see that in this AI interaction the agent was invoked in Copilot Chat, and I can view the agent’s details and see the prompt and response just like before. Now, what I really want to do is take a closer look at the potential data oversharing issue that was flagged. For that, I’ll return to my dashboard and click into the default assessment. These run every seven days, scanning files containing sensitive data and identifying where those files are located, such as SharePoint sites with overly permissive user access.

-And I can dig into the details. I’ll click into the top one for “Obsidian Merger,” and I can see label coverage for the data within it. In the protect tab, there are eight sensitivity labels, five of which are referenced by Copilot and agents. Since I want agents to honor data classifications and their related protections, I can configure the recommended policies. The most stringent option is to restrict all items, removing the entire site from the view of Copilot and agents. For more granular control, I have a few more options: I can create default sensitivity labels for newly created items, or, moving back to the top-level options, I can “Restrict Access by Label.” The Obsidian Merger information is highly privileged, and even if you’re on the core team working on it, we don’t want agents to reason over the information, so I’ll pick this label option.

-From there, I need to extend the list of sensitivity labels and select Obsidian Merger, then confirm to create the policy. This will now block agents from reasoning over content that includes the Obsidian Merger label. In fact, let’s look at the policy in action. Here you can see the user is asking the Copilot agent to summarize the Project Obsidian M&A doc, and even though they are the owner and author of the file, the agent cannot reason over it.
It responds, “Unfortunately, I can’t provide detailed information because the content is protected.”

-As I mentioned, for both your agents and GenAI apps across Microsoft and non-Microsoft services, all activity is recorded in Audit logs to help conduct investigations whenever needed. In fact, DSPM for AI logged activity flows directly into Microsoft Purview’s best-in-class solutions: Insider Risk Management, letting your security teams detect risky AI prompts as part of their investigations into risky users; Communication Compliance, to aid investigations into non-compliant use in AI interactions, such as a user trying to get sensitive information like an acquisition plan; and eDiscovery, where interactions across your Copilots, agents, and AI apps can be collected and reviewed to help conduct investigations and respond to litigation.

-So that was an overview of how GenAI adoption can go hand in hand with your organization’s enterprise-grade data security, governance, and compliance requirements, keeping your data protected. To learn more, check out aka.ms/SecureGovernAI. Keep watching Microsoft Mechanics for the latest updates, and thanks for watching.

Can MS Purview mask data in CE
Hi, can Microsoft Purview enable data masking in Dynamics Customer Engagement / Customer Service? If yes, how can this be achieved? If no, can we expect this feature in the near future? Note: We would not enable any masking (Field Security Profile) features directly in CE; we would like this to happen using Microsoft Purview.

Purview AMA March 12 - Ask Questions Below!
The next Purview AMA covering Data Security, Compliance, and Governance takes place on 12 March at 8am Pacific. Register HERE! Your subject matter experts are:

Maxime Bombardier - Purview Data Security and Horizontals
Sandeep Shah - Purview Data Governance
Peter Oguntoye - Purview Compliance

And, if you'd like to get started now, feel free to post your questions as comments below. They may be answered live, or if we don't get to them, they will be answered in-text below (you may also note which you'd prefer!). Thank you for being a part of the Purview community; we can't do exciting events like this without you! Don't forget to register ✏️

Sensitivity Label change alert
We have successfully rolled out sensitivity labels across our organization. All users and admins subscribe to M365 E5. I would like to create an alert email that fires when a sensitivity label is replaced with a lower-order label on any document or email. The Activity Explorer logs in Purview show the label-applied events, but I am struggling to find a way to create an alert from them. I tried using Power Automate, but was unable to find a solution there. Thanks, Dheeraj
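One possible workaround for the question above, sketched here rather than an out-of-the-box Purview feature: periodically export the label-change audit events and flag downgrades in a scheduled script, then send the email notification from that script or hand the flagged records to a flow. The label ordering, field names, and sample records below are hypothetical placeholders; map them to the actual schema of your audit export:

```python
# Hypothetical sketch: flag sensitivity-label downgrades in exported audit
# events. The label ranking and record fields are placeholders and must be
# adapted to your label taxonomy and export format.

# Higher number = more sensitive (example ordering, adjust to your labels)
LABEL_ORDER = {"Public": 0, "General": 1, "Confidential": 2, "Highly Confidential": 3}

def find_downgrades(events):
    """Return events where a label was replaced with a lower-order label."""
    downgrades = []
    for e in events:
        old = LABEL_ORDER.get(e.get("OldLabel"))
        new = LABEL_ORDER.get(e.get("NewLabel"))
        if old is not None and new is not None and new < old:
            downgrades.append(e)
    return downgrades

# Example exported records (placeholder data)
events = [
    {"File": "plan.docx", "OldLabel": "Confidential", "NewLabel": "General"},
    {"File": "memo.docx", "OldLabel": "General", "NewLabel": "Confidential"},
]

for e in find_downgrades(events):
    print(f"ALERT: {e['File']} downgraded from {e['OldLabel']} to {e['NewLabel']}")
```

Each flagged record could then drive an email step, for example from a scheduled task or a Power Automate flow that consumes the script's output.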
Securing outbound traffic with Azure Data Factory's outbound network rules

The Outbound Rules feature in Azure Data Factory allows organizations to exercise granular control over outbound traffic, thereby strengthening network security. By integrating with Azure Policy, this feature also improves overall governance.

Kick Start Your Security Learning with a 7-lesson, Open-Source Course
This course is designed to teach you fundamental cybersecurity concepts to kick-start your security learning. It is vendor-agnostic and divided into small lessons that should take around 30-60 minutes to complete. Each lesson has a small quiz and links to further reading if you want to dive into the topic a bit more.

Unlock Your Cybersecurity Potential: Explore the Security-101 Curriculum!
In our interconnected world, cybersecurity is no longer a luxury; it's a necessity. Whether you're a seasoned IT professional or a curious enthusiast, understanding the fundamentals of security is crucial. Today, I'm thrilled to introduce you to a treasure trove of knowledge: the Security-101 repository.

What Is Security-101? The Security-101 repository, hosted on GitHub, is your gateway to mastering cybersecurity essentials. Developed by experts at Microsoft, this curriculum is designed to be accessible, practical, and engaging.

Why Should You Explore Security-101?
Foundational Knowledge: Whether you're new to the field or need a refresher, Security-101 covers the basics. From the CIA Triad (Confidentiality, Integrity, and Availability) to risk management, you'll gain a solid understanding.
Vendor-Agnostic Approach: No product pitches here! Security-101 focuses on principles rather than specific tools. It's like learning to drive before choosing a car.
Learn at Your Own Pace: Each lesson takes just 30-60 minutes. Perfect for busy professionals or those eager to improve during lunch breaks.
Interactive Quizzes: Test your knowledge after each lesson. Reinforce what you've learned and track your progress.

You can use the following study plan for mastering the cybersecurity concepts covered in the Security-101 repository, or come up with a self-paced study plan of your own:

Week 1: Foundations and Basics. Subtopics: CIA triad (Confidentiality, Integrity, Availability); Risks vs. Threats; Security control concepts. Activities: Read lessons on foundational concepts; take quizzes.
Week 2: Zero Trust Architecture. Subtopics: Zero trust model; IAM in Zero Trust; Networking in Zero Trust. Activities: Explore zero trust principles; review related materials.
Week 3: Security Operations (SecOps). Subtopics: Security incident response; Security monitoring; Security automation. Activities: Study SecOps concepts; complete quizzes.
Week 4: Application Security (AppSec). Subtopics: Secure coding practices; Web application security; Secure software development. Activities: Dive into AppSec topics.
Week 5: Data Security. Subtopics: Data encryption; Data classification; Data loss. Activities: Understand data security; take quizzes.

Call to Action: Explore Security-101 Today! Here's how you can engage:
Visit the repository: Head over to the Security-101 repository. Star and bookmark it; you'll want to return!
Start with Lesson 1: Begin with the first lesson. Whether you're sipping coffee or waiting for a code build, invest that time in your growth.
Share with Peers: Spread the word! Tell your colleagues, friends, and fellow tech enthusiasts. Let's build a community of security-conscious learners.

Conclusion
Security isn't an afterthought; it's woven into every digital interaction. By exploring Security-101, you're not just learning; you're empowering yourself to protect data, systems, and people. Learning about security is an essential step for anyone looking to protect their digital assets and navigate the complex landscape of cybersecurity. The course offered by Microsoft on GitHub is a comprehensive starting point that covers fundamental concepts such as the CIA triad, zero trust architecture, and various security practices. It's vendor-agnostic, making the knowledge applicable across different platforms and technologies. By understanding the basics of cybersecurity, you can better assess risks, implement effective controls, and contribute to a safer online environment. Whether you're a beginner or looking to refresh your knowledge, Security-101 equips you with the tools and understanding necessary to face modern security challenges. So take the leap and start your cybersecurity learning journey today.