securing ai
26 TopicsEnterprise-grade controls for AI apps and agents built with Azure AI Foundry and Copilot Studio
AI innovation is moving faster than ever, and more AI projects are moving beyond experimentation into deployment, to drive tangible business impact. As organizations accelerate innovation with custom AI applications and agents, new risks emerge across the software development lifecycle and AI stack related to data oversharing and leaks, new vulnerabilities and threats, and non-compliance with stringent regulatory requirements Through 2025, poisoning of software supply chains and infrastructure technology stacks will constitute more than 70% of malicious attacks against AI used in the enterprise 1 , highlighting potential threats that originate early in development. Today, the average cost of a data breach is $4.88 million, but when security issues are caught early in the development process, that number drops dramatically to just $80 per incident 2 . The message is very clear; security can’t be an afterthought anymore. It must be a team sport across the organization, embedded from the start and throughout the development lifecycle. That's why developers and security teams should align on processes and tools that bring security into every stage of the AI development lifecycle and give security practitioners visibility into and the ability to mitigate risks. To address these growing challenges and help customers secure and govern their AI workloads across development and security teams, we are: Enabling Azure AI Foundry and Microsoft Copilot Studio to provide best-in-class foundational capabilities to secure and govern AI workloads Deeply integrating and embedding industry-leading capabilities from Microsoft Purview, Microsoft Defender, and Microsoft Entra into Azure AI Foundry and Microsoft Copilot Studio This week, 3,000 developers are gathering in Seattle for the annual Microsoft Build conference, with many more tuning in online, to learn practical skills for accelerating their AI apps and agents' innovation. To support their AI innovation journey, today we are excited to announce several new capabilities to help developers and organizations secure and govern AI apps and agents. New Azure AI Foundry foundational capabilities to secure and govern AI workloads Azure AI Foundry enhancements for AI security and safety With 70,000 customers, 100 trillion tokens processed this quarter, and 2 billion enterprise search queries each day, Azure AI Foundry has grown beyond just an application layer—it's now a comprehensive platform for building agents that can plan, take action, and continuously learn to drive real business outcomes. To help organizations build and deploy AI with confidence, we’re introducing new security and safety capabilities and insights for developers in Azure AI Foundry Introducing Spotlighting to detect and block prompt injection attacks in real time As AI systems increasingly rely on external data sources, a new class of threats has emerged. Indirect prompt injection attacks embed hidden instructions in documents, emails, and web content, tricking models into taking unauthorized actions without any direct user input. These attacks are difficult to detect and hard to prevent using traditional filters alone. To address this, Azure AI Content Safety is introducing Spotlighting, now available in preview. Spotlighting strengthens the Prompt Shields guardrail by improving its ability to detect and handle potential indirect prompt injections, where hidden adversarial instructions are embedded in external content. This new capability helps prevent the model from inadvertently acting on malicious prompts that are not directly visible to the user. Enable Spotlighting in Azure AI Content Safety to detect potential indirect prompt injection attacks New capabilities for task adherence evaluation and task adherence mitigation to ensure agents remain within scope As developers build more capable agents, organizations face growing pressure to help confirm those agents act within defined instructions and policy boundaries. Even small deviations can lead to tool misuse, broken workflows, or risks like unintended exposure of sensitive data. To solve this, Azure AI Foundry now includes task adherence for agents, now in preview and powered by two components: a real-time evaluation and a new control within Azure AI Content Safety. At the core is a real-time task adherence evaluation API, part of Azure AI Content Safety. This API assesses whether an agent’s behavior is aligned with its assigned task by analyzing the user’s query, system instructions, planned tool calls, and the agent’s response. The evaluation framework is built on Microsoft’s Agent Evaluators, which measure intent resolution, tool selection accuracy, completeness of response, and overall alignment to the original request. Developers can run this scoring logic locally using the Task Adherence Evaluator in the Azure AI Evaluation SDK, with a five-point scale that ranges from fully nonadherent to fully adherent. This gives teams a flexible and transparent way to inspect task-level behavior before it causes downstream issues. Task adherence is enforced through a new control in Azure AI Content Safety. If an agent goes off-task, the control can block tool use, pause execution, or trigger human review. In Azure AI Agent Service, it is available as an opt-in feature and runs automatically. Combined with real-time evaluation, this control helps to ensure that agents stay on task, follow instructions, and operate according to enterprise policies. Learn more about Prompt Shields in Azure AI Content Safety. Azure AI Foundry continuous evaluation and monitoring of agentic systems Maintaining high performance and compliance for AI agents after deployment is a growing challenge. Without ongoing oversight, issues like performance degradation, safety risks, or unintentional misuse of resources can slip through unnoticed. To address this, Azure AI Foundry introduces continuous evaluation and monitoring of agentic systems, now in preview, provides a single pane of glass dashboard to track key metrics such as performance, quality, safety, and resource usage in real time. Continuous evaluation runs quality and safety evaluations at a sampled rate of production usage with results made available in the Azure AI Foundry Monitoring dashboard and published to Application Insights. Developers can set alerts to detect drift or regressions and use Azure Monitor to gain full-stack visibility into their AI systems. For example, an organization using an AI agent to assist with customer-facing tasks can monitor groundedness and detect a decline in quality when the agent begins referencing irrelevant information, helping teams to act before it potentially negatively affects trust of users. Azure AI Foundry evaluation integrations with Microsoft Purview Compliance Manager, Credo AI, and Saidot for streamlined compliance AI regulations and standards introduce new requirements for transparency, documentation, and risk management for high-risk AI systems. As developers build AI applications and agents, they may need guidance and tools to help them evaluate risks based on these requirements and seamlessly share control and evaluation insights with compliance and risk teams. Today, we are announcing previews for Azure AI Foundry evaluation tool’s integration with a compliance management solution, Microsoft Purview Compliance Manager, and AI governance solutions, Credo AI and Saidot. These integrations help define risk parameters, run suggested compliance evaluations, and collect evidence for control testing and auditing. For example, for a developer who’s building an AI agent in Europe may be required by their compliance team to complete a Data Protection Impact Assets (DPIA) and Algorithmic Impact Assessment (AIA) to meet internal risk management and technical documentation requirements aligned with emerging AI governance standards and best practices. Based on Purview Compliance Manager’s step-by-step guidance on controls implementation and testing, the compliance teams can evaluate risks such as potential bias, cybersecurity vulnerabilities, or lack of transparency in model behavior. Once the evaluation is conducted in Azure AI Foundry, the developer can obtain a report with documented risk, mitigation, and residual risk for compliance teams to upload to Compliance Manager to support audits and provide evidence to regulators or external stakeholders. Assess controls for Azure AI Foundry against emerging AI governance standards Learn more about Purview Compliance Manager. Learn more about the integration with Credo AI and Saidot in this blogpost. Leading Microsoft Entra, Defender and Purview value extended to Azure AI Foundry and Microsoft Copilot Studio Introducing Microsoft Entra Agent ID to help address agent sprawl and manage agent identity Organizations are rapidly building their own AI agents, leading to agent sprawl and a lack of centralized visibility and management. Security teams often struggle to keep up, unable to see which agents exist and whether they introduce security or compliance risks. Without proper oversight, agent sprawl increases the attack surface and makes it harder to manage these non-human identities. To address this challenge, we’re announcing the public preview of Microsoft Entra Agent ID, a new capability in the Microsoft Entra admin center that gives security admins visibility and control over AI agents built with Copilot Studio and Azure AI Foundry. With Microsoft Entra Agent ID, an agent created through Copilot Studio or Azure AI Foundry is automatically assigned an identity with no additional work required from the developers building them. This is the first step in a broader initiative to manage and protect non-human identities as organizations continue to build AI agents. : Security and identity admins can gain visibility into AI agents built in Copilot Studio and Azure AI Foundry in the Microsoft Entra Admin Center This new capability lays the foundation for more advanced capabilities coming soon to Microsoft Entra. We also know that no one can do it alone. Security has always been a team sport, and that’s especially true as we enter this new era of protecting AI agents and their identities. We’re energized by the momentum across the industry; two weeks ago, we announced support for the Agent-to-Agent (A2A) protocol and began collaborating with partners to shape the future of AI identity workflows. Today, we’re also excited to announce new partnerships with ServiceNow and Workday. As part of this, we’ll integrate Microsoft Entra Agent ID with the ServiceNow AI Platform and the Workday Agent System of Record. This will allow for automated provisioning of identities for future digital employees. Learn more about Microsoft Entra Agent ID. Microsoft Defender security alerts and recommendations now available in Azure AI Foundry As more AI applications are deployed to production, organizations need to predict and prevent potential AI threats with natively integrated security controls backed by industry-leading Gen AI and threat intelligence for AI deployments. Developers need critical signals from security teams to effectively mitigate security risks related to their AI deployments. When these critical signals live in separate systems outside the developer experience, this can create delays in mitigation, leaving opportunities for AI apps and agents to become liabilities and exposing organizations to various threats and compliance violations. Now in preview, Microsoft Defender for Cloud integrates AI security posture management recommendations and runtime threat protection alerts directly into the Azure AI Foundry portal. These capabilities, previously announced as part of the broader Microsoft Defender for Cloud solution, are extended natively into Azure AI Foundry enabling developers to access alerts and recommendations without leaving their workflows. This provides real-time visibility into security risks, misconfigurations, and active threats targeting their AI applications on specific Azure AI projects, without needing to switch tools or wait on security teams to provide details. Security insights from Microsoft Defender for Cloud help developers identify and respond to threats like jailbreak attacks, sensitive data leakage, and misuse of system resources. These insights include: AI security posture recommendations that identify misconfigurations and vulnerabilities in AI services and provide best practices to reduce risk Threat protection alerts for AI services that notify developers of active threats and provide guidance for mitigation, across more than 15 detection types For example, a developer building an AI-powered agent can receive security recommendations suggesting the use of Azure Private Link for Azure AI Services resources. This reduces the risk of data leakage by handling the connectivity between consumers and services over the Azure backbone network. Each recommendation includes actionable remediation steps, helping teams identify and mitigate risks in both pre- and post-deployment phases. This helps to reduce risks without slowing down innovation. : Developers can view security alerts on the Risks + alerts page in Azure AI Foundry : Developers can view recommendations on the Guardrails + controls page in Azure AI Foundry This integration is currently in preview and will be generally available in June 2025 in Azure AI Foundry. Learn more about protecting AI services with Microsoft Defender for Cloud. Microsoft Purview capabilities extended to secure and govern data in custom-built AI apps and agents Data oversharing and leakage are among the top concerns for AI adoption, and central to many regulatory requirements. For organizations to confidently deploy AI applications and agents, both low code and pro code developers need a seamless way to embed security and compliance controls into their AI creations. Without simple, developer-friendly solutions, security gaps can quickly become blockers, delaying deployment and increasing risks as applications move from development to production. Today, Purview is extending its enterprise-grade data security and compliance capabilities, making it easier for both low code and pro code developers to integrate data security and compliance into their AI applications and agents, regardless of which tools or platforms they use. For example, with this update, Microsoft Purview DSPM for AI becomes the one place data security teams can see all the data risk insights across Microsoft Copilots, agents built in Agent Builder and Copilot Studio, and custom AI apps and agents built in Azure AI Foundry and other platforms. Admins can easily drill into security and compliance insights for specific AI apps or agents, making it easier to investigate and take action on potential risks. : Data security admins can now find data security and compliance insights across Microsoft Copilots, agents built with Agent Builder and Copilot Studio, and custom AI apps and agents in Microsoft Purview DSPM for AI In the following sections, we will provide more details about the updates to Purview capabilities in various AI workloads. 1. Microsoft Purview data security and compliance controls can be extended to any custom-built AI application and agent via the new Purview SDK or the native Purview integration with Azure AI Foundry. The new capabilities make it easy and effortless for security teams to bring the same enterprise-grade data security compliance controls available today for Microsoft 365 Copilot to custom AI applications and agents, so organizations can: Discover data security risks, such as sensitive data in user prompts, and data compliance risks, such as harmful content, and get recommended actions to mitigate risks proactively in Microsoft Purview Data Security Posture Management (DSPM) for AI. Protect sensitive data against data leakage and insider risks with Microsoft Purview data security policies. Govern AI interactions with Audit, Data Lifecycle Management, eDiscovery, and Communication Compliance. Microsoft Purview SDK Microsoft Purview now offers Purview SDK, a set of REST APIs, documentation, and code samples, currently in preview, enabling developers to integrate Purview's data security and compliance capabilities into AI applications or agents within any integrated development environment (IDE). : By embedding Purview APIs into the IDE, developers help enable their AI apps to be secured and governed at runtime For example, a developer building an AI agent using an AWS model can use the Purview SDK to enable their AI app to automatically identify and block sensitive data entered by users before it’s exposed to the model, while also providing security teams with valuable signals that support compliance. With Purview SDK, startups, ISVs, and partners can now embed Purview industry-leading capabilities directly into their AI software solutions, making these solutions Purview aware and easier for their customers to secure and govern data in their AI solutions. For example, Infosys Vice President and Delivery Head of Cyber Security Practice, Ashish Adhvaryu indicates, “Infosys Cyber Next platform integrates Microsoft Purview to provide enhanced AI security capabilities. Our solution, the Cyber Next AI assistant (Cyber Advisor) for the SOC analyst, leverages Purview SDK to drive proactive threat mitigation with real-time monitoring and auditing capabilities. This integration provides holistic AI-assisted protection, enhancing cybersecurity posture." Microsoft partner EY (previously known as Ernst and Young) has also leveraged the new Purview SDK to embed Purview value into their GenAI initiatives. “We’re not just building AI tools, we are creating Agentic solutions where trust, security, and transparency are present from the start, supported by the policy controls provided through the Purview SDK. We’re seeing 25 to 30 percent time savings when we build secure features using the Purview SDK,” noted Sumanta Kar, Partner, Innovation and Emerging Tech at EY. Learn more about the Purview SDK. Microsoft Purview integrates natively with Azure AI Foundry Organizations are developing an average of 14 custom AI applications. The rapid pace of AI innovation may leave security teams unaware of potential data security and compliance risks within their environments. With the update announced today, Azure AI Foundry signals are now directly integrated with Purview Data Security Posture Management for AI, Insider Risk Management, and data compliance controls, minimizing the need for additional development work. For example, for AI applications and agents built with Azure AI Foundry models, data security teams can gain visibility into AI usage and data risks in Purview DSPM for AI, with no additional work from developers. Data security teams can also detect, investigate, and respond to both malicious and inadvertent user activities, such as a departing employee leveraging an AI agent to retrieve an anomalous amount of sensitive data, with Microsoft Purview Insider Risk Management (IRM) policies. Lastly, user prompts and AI responses in Azure AI apps and agents can now be ingested into Purview compliance tools as mentioned above. Learn more about Microsoft Purview for Azure AI Foundry. 2. Purview data protections extended to Copilot Studio agents grounded in Microsoft Dataverse data Coming to preview in June, Purview Information Protection extends auto-labeling and label inheritance coverage to Dataverse to help prevent oversharing and data leaks. Information Protection makes it easier for organizations to automatically classify and protect sensitive data at scale. A common challenge is that sensitive data often lands in Dataverse from various sources without consistent labeling or protection. The rapid adoption of agents built using Copilot Studio and grounding data from Dataverse increases the risk of data oversharing and leakage if data is not properly protected. With auto-labeling, data stored in Dataverse tables can be automatically labeled based on policies set in Microsoft Purview, regardless of its source. This reduces the need for manual labeling effort and protects sensitive information from the moment it enters Dataverse. With label inheritance, AI agent responses grounded in Dataverse data will automatically carry and honor the source data’s sensitivity label. If a response pulls from multiple tables with different labels, the most restrictive label is applied to ensure consistent protection. For example, a financial advisor building an agent in Copilot Studio might connect multiple Dataverse tables, some labeled as “General” and others as “Highly Confidential.” If a response pulls from both, it will inherit the most restrictive label, in this case, "Highly Confidential,” to prevent unauthorized access and ensure appropriate protections are applied across both maker and users of the agent. Together, auto-labeling and label inheritance in Dataverse support a more secure, automated foundation for AI. : Sensitivity labels will be automatically applied to data in Dataverse : AI-generated responses will inherit and honor the source data’s sensitivity labels Learn more about protecting Dataverse data with Microsoft Purview. 3. Purview DSPM for AI can now provide visibility into unauthenticated interactions with Copilot Studio agents As organizations increasingly use Microsoft Copilot Studio to deploy AI agents for frontline customer interactions, gaining visibility into unauthenticated user interactions and proactively mitigating risks becomes increasingly critical. Building on existing Purview and Copilot Studio integrations, we’ve extended DSPM for AI and Audit in Copilot Studio to provide visibility into unauthenticated interactions, now in preview. This gives organizations a more comprehensive view of AI-related data security risks across authenticated and unauthenticated users. For example, a healthcare provider hosting an external, customer-facing agent assistant must be able to detect and respond to attempts by unauthenticated users to access sensitive patient data. With these new capabilities in DSPM for AI, data security teams can now identify these interactions, assess potential exposure of sensitive data, and act accordingly. Additionally, integration with Purview Audit provides teams with seamless access to information needed for audit requirements. : Gain visibility into all AI interactions, including those from unauthenticated users Learn more about Purview for Copilot Studio. 4. Purview Data Loss Prevention extended to more Microsoft 365 agent scenarios To help organizations prevent data oversharing through AI, at Ignite 2024, we announced that data security admins could prevent Microsoft 365 Copilot from using certain labeled documents as grounding data to generate summaries or responses. Now in preview, this control also extends to agents published in Microsoft 365 Copilot that are grounded by Microsoft 365 data, including pre-built Microsoft 365 agents, agents built with the Agent Builder, and agents built with Copilot Studio. This helps ensure that files containing sensitive content are used appropriately by AI agents. For example, confidential legal documents with highly specific language that could lead to improper guidance if summarized by an AI agent, or "Internal only” documents that shouldn’t be used to generate content that can be shared outside of the organization. : Extend data loss prevention (DLP) policies to Microsoft 365 Copilot agents to protect sensitive data Learn more about Data Loss Prevention for Microsoft 365 Copilot and agents. The data protection capabilities we are extending to agents in Agent Builder and Copilot Studio demonstrate our continued investment in strengthening the Security and Governance pillar of the Copilot Control System (CSS). CCS provides integrated controls to help IT and security teams secure, manage, and monitor Copilot and agents across Microsoft 365, spanning governance, management, and reporting. Learn more here. Explore additional resources As developers and security teams continue to secure AI throughout its lifecycle, it’s important to stay ahead of emerging risks and ensure protection. Microsoft Security provides a range of tools and resources to help you proactively secure AI models, apps, and agents from code to runtime. Explore the following resources to deepen your understanding and strengthen your approach to AI security: Learn more about Security for AI solutions on our webpage Learn more about Microsoft Purview SDK Get started with Azure AI Foundry Get started with Microsoft Entra Get started with Microsoft Purview Get started with Microsoft Defender for Cloud Get started with Microsoft 365 Copilot Get started with Copilot Studio Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial 1 Predicts 2025: Navigating Imminent AI Turbulence for Cybersecurity, Jeremy D'Hoinne, Akif Khan, Manuel Acosta, Avivah Litan, Deepak Seth, Bart Willemsen, 10 February 2025 2 IBM. "Cost of a Data Breach 2024: Financial Industry." IBM Think, 13 Aug. 2024, https://www.ibm.com/think/insights/cost-of-a-data-breach-2024-financial-industry; Cser, Tamas. "The Cost of Finding Bugs Later in the SDLC." Functionize, 5 Jan. 2023, https://www.functionize.com/blog/the-cost-of-finding-bugs-later-in-the-sdlc2KViews1like0CommentsSecuring Microsoft M365 Copilot and AI with Microsoft's Suite of Security Products - Part 1
Microsoft 365 Copilot and AI applications created in Azure AI Foundry are transforming productivity, but they also introduce new security challenges for businesses. Organizations embracing these AI capabilities must guard against risks such as data leaks, novel AI-driven threats (e.g. prompt injection attacks), and compliance violations. Microsoft offers a comprehensive suite of products to help secure and govern AI solutions. This multipart guide provides a detailed roadmap for using Microsoft’s security services together to protect AI deployments and Copilot integrations in an enterprise environment. Overview of Microsoft Security Solutions for AI and Copilot Microsoft’s security portfolio spans identity, devices, cloud apps, data, and threat management – all crucial for securing AI systems. Key products include: Microsoft Entra (identity and access), Microsoft Defender XDR (Unified enterprise defense suite that natively coordinates detection, prevention, investigation, and response across endpoints, identities, email, and applications), Microsoft Purview (data security, compliance, and governance), Microsoft Sentinel (cloud-native SIEM/SOAR for monitoring and response), and Microsoft Intune (device management), among others. These solutions are designed to integrate – forming an AI-first, unified security platform that greatly reduces the complexity of implementing a cohesive Zero Trust strategy across your enterprise and AI ecosystem. The table below summarizes the main product categories and their roles in securing AI applications and Copilot: Security Area Microsoft Product Role in Securing AI and Copilot Identity and Access Management Microsoft Entra and Entra Suite (Entra ID Protection, Entra Conditional Access, Entra Internet Access, Entra Private Access, Entra ID Governance) Verify and control access to AI systems. Enforce strong authentication and least privilege for users, admins, and AI service identities. Conditional Access policies (including new AI app controls) restrict who can use specific AI applications Endpoint & Device Security Microsoft Defender for Endpoint, Microsoft Intune Secure user devices that interact with AI. Defender for Endpoint provides EDR (Endpoint Detection & Response) to help block malware or exploits while also identifying devices that may be high risk. Intune helps ensure only managed, compliant devices can access corporate AI apps, aligning with a Zero Trust strategy. Cloud & Application Security Microsoft Defender for Cloud (CSPM/CWPP), Defender for Cloud Apps (CASB/SSPM), Azure Network Security (Azure Firewall, WAF) Protect AI Infrastructure and cloud workloads (IAAS/SASS) Defender for Cloud continuously assesses security posture of AI services (VMs, containers, Azure OpenAI instances) and detects misconfigurations or vulnerabilities. It now provides AI security posture management across multi-cloud AI environments (Azure, AWS, Google) and even multiple model types. Defender for Cloud Apps monitors and controls SaaS AI app usage to combat “shadow AI” usage (unsanctioned AI tools). Azure Firewall and WAF guard AI APIs and web front-ends against network threats, with new Copilot-powered features to analyze traffic and logs. Threat Detection & Response Microsoft Defender XDR, Microsoft Sentinel (SIEM/SOAR), Microsoft Security Copilot Detect and respond to threats. Microsoft’s Defender XDR suite provides a single pane of glass for security operations teams to detect, investigate, and respond to threats, correlating signals from endpoints, identities, cloud apps, and email. Microsoft Sentinel enhances these capabilities by aggregating and correlating signals from 3rd party, non-Microsoft products with Defender XDR data to alert on suspicious activities across the environment. Security Copilot (an AI assistant for SOC teams) further accelerates incident analysis and response using generative AI – helping defenders investigate incidents or automate threat hunting. Data Security & Compliance Microsoft Purview (Information Protection, Data Loss Prevention, Insider Risk, Compliance Manager, DSPM for AI), SharePoint Advanced Management Protect sensitive data used or produced by AI. Purview enables classification and sensitivity labeling of data so that confidential information is handled properly by AI. Purview Data Loss Prevention (DLP) enforces policies to prevent sensitive data leaks – for example, new Purview DLP controls for Edge for Business can block users from typing or pasting sensitive data into generative AI apps like ChatGPT or Copilot Chat. Purview Insider Risk Management can detect anomalous data extraction via AI tools. Purview Compliance Manager and Audit help ensure AI usage complies with regulations (e.g. GDPR, HIPAA) and provide audit logs of AI interactions. AI Application Safety Azure AI Content Safety (content filtering), Responsible AI controls (Prompt flow, OpenAI policy) Ensure AI output and usage remain safe and within policy. Azure AI Content Safety provides AI-driven content filters and “prompt shields” to block malicious or inappropriate prompts/outputs in real-time. Microsoft’s Responsible AI framework and tools (such as evaluations in Azure AI Studio to simulate adversarial prompts) further help developers build AI systems that adhere to safety and ethical standards. Meanwhile, M365 Copilot has built-in safeguards – it respects all your existing Microsoft 365 security, privacy, and compliance controls by design! How the Pieces Work Together Imagine a user at a company is using Microsoft 365 Copilot to query internal documents. Entra ID first ensures the user is who they claim (with MFA), and that their device is in a compliant state. When the user prompts Copilot, Copilot checks the user’s permissions and will only retrieve data they are authorized to see. The prompt and the AI’s generated answer is then checked by Microsoft Purview’s DLP , Insider Risk, DSPM, and compliance policies – if the user’s query or the response would expose, say, credit card numbers or other sensitive information, the system can block it or redact it. Meanwhile, Defender XDR's extended detection and response capabilities are working in the background: Defender for Cloud Apps logs that the user accessed an approved 3rd party AI service, Sentinel correlates this with any unusual behavior (like data exfiltration after running the prompt), an alert is triggered, and the user is either blocked or if allowed, forced to label and encrypt the data before sending it externally. In short, each security layer – identity, data, device, cloud, monitoring – plays an important part in securing this AI-driven scenario. Stay tuned for Part 2 of this Multi-Part Series In the following articles, we break down how to configure and use the tools summarized in this article, starting with Identity and Access Management. We will also highlight best practices (like Microsoft's recommended Prepare -> Discover -> Protect -> Govern approach for AI security) and include recent product enhancements that assist in securing AI.1KViews1like1CommentAnnouncing Public Preview of DLP for M365 Copilot in Word, Excel, and PowerPoint
Today, we are excited to announce the public preview of Data Loss Prevention (DLP) for M365 Copilot in Word, Excel, and PowerPoint. This development extends the capabilities you rely on for safeguarding data in M365 Copilot Chat, bringing DLP protections to everyday Copilot scenarios within these core productivity apps. Building on Our Foundation Data oversharing and leakage is a top concern for organizations using generative AI technology, and securing AI-based workflows can feel overwhelming. We’ve been laying a strong foundation with Microsoft Purview Data Loss Prevention—especially with DLP for M365 Copilot—and are excited to expand its reach to further reduce the risk of AI-related oversharing at scale. In the original public preview release, we enabled admins to configure DLP rules that block Copilot from processing or summarizing sensitive documents in M365 Copilot Chat. However, these controls didn’t extend to the powerful in-app Copilot experiences, such as rewriting text in Word, summarizing presentations in PowerPoint, or generating helpful formulas in Excel. That changes now with this public preview. The Next Phase of DLP for M365 Copilot Similar to our original approach for M365 Copilot Chat, we are bringing consistent, flexible protection to M365 Copilot for Word, Excel, and PowerPoint. Here’s how it works in this preview: Current file DLP checks: Copilot now respects sensitivity labels on an opened document or workbook. If a document has a sensitivity label and a DLP rule that excludes its content from Copilot processing, Copilot actions like summarizing or auto-generating content directly in the canvas are blocked. Chatting with Copilot is also unavailable. File reference DLP checks: When a user tries to reference other files in a prompt – like pulling data or slides from other labeled documents – Copilot checks DLP policies before retrieving the content. If there is a DLP policy configured to block Copilot processing of files with that file’s sensitivity label, Copilot will show an apology message rather than summarizing that content – so no accidental oversharing occurs. You can learn more about DLP for M365 Copilot here: Learn about the Microsoft 365 Copilot policy location (preview) Getting Started Enabling DLP for M365 Copilot in Word, Excel, and PowerPoint follows a setup similar to configuring DLP policies for other workloads. From the Purview compliance portal, you can configure the DLP policy for a specific sensitivity label at a file, group, site, and/or user level. If you have already enabled a DLP for M365 Copilot policy with the ongoing DLP for M65 Copilot Chat preview, no further action is needed – the policy will automatically begin to apply in Word, Excel, and PowerPoint Copilot experiences. In this preview, our focus is on ensuring reliability, performance, and seamless integration with the Office apps you use every day. We’ll continue to refine the user experience as we move toward general availability, including improvements to error messages and user guidance for each scenario. Join the Preview This public preview reflects our ongoing commitment to deliver robust data protection for AI-powered workflows. By extending the same DLP principles you trust to Word, Excel, and PowerPoint, we’re empowering you to embrace AI confidently without sacrificing control over your organization’s most valuable information. We invite you to start testing these capabilities in your environment. Your feedback is invaluable to us – we encourage all customers to share their experiences and insights, helping shape the next evolution of DLP for M365 Copilot in Office.2.7KViews1like4CommentsHow to deploy Microsoft Purview DSPM for AI to secure your AI apps
Microsoft Purview Data Security Posture Management (DSPM for AI) is designed to enhance data security for the following AI applications: Microsoft Copilot experiences, including Microsoft 365 Copilot. Enterprise AI apps, including ChatGPT enterprise integration. Other AI apps, including all other AI applications like ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini, accessed through the browser. In this blog, we will dive into the different policies and reporting we have to discover, protect and govern these three types of AI applications. Prerequisites Please refer to the prerequisites for DSPM for AI in the Microsoft Learn Docs. Login to the Purview portal To begin, start by logging into Microsoft 365 Purview portal with your admin credentials: In the Microsoft Purview portal, go to the Home page. Find DSPM for AI under solutions. 1. Securing Microsoft 365 Copilot Be sure to check out our blog on How to use the DSPM for AI data assessment report to help you address oversharing concerns when you deploy Microsoft 365 Copilot. Discover potential data security risks in Microsoft 365 Copilot interactions In the Overview tab of DSPM for AI, start with the tasks in “Get Started” and Activate Purview Audit if you have not yet activated it in your tenant to get insights into user interactions with Microsoft Copilot experiences In the Recommendations tab, review the recommendations that are under “Not Started”. Create the following data discovery policies to discover sensitive information in AI interactions by clicking into each of them and select “Create policies”. Detect risky interactions in AI apps - This public preview Purview Insider Risk Management policy helps calculate user risk by detecting risky prompts and responses in Microsoft 365 Copilot experiences. Click here to learn more about Risky AI usage policy. With the policies to discover sensitive information in Microsoft Copilot experiences in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter to Microsoft Copilot Experiences, and review the following for Microsoft Copilot experiences: Total interactions over time (Microsoft Copilot) Sensitive interactions per AI app Top unethical AI interactions Top sensitivity labels references in Microsoft 365 Copilot Insider Risk severity Insider risk severity per AI app Potential risky AI usage Protect sensitive data in Microsoft 365 Copilot interactions From the Reports tab, click on “View details” for each of the report graphs to view detailed activities in the Activity Explorer. Using available filters, filter the results to view activities from Microsoft Copilot experiences based on different Activity type, AI app category and App type, Scope, which support administrative units for DSPM for AI, and more. Then drill down to each activity to view details including the capability to view prompts and response with the right permissions. To protect the sensitive data in interactions for Microsoft 365 Copilot, review the Not Started policies in the Recommendations tab and create these policies: Information Protection Policy for Sensitivity Labels - This option creates default sensitivity labels and sensitivity label policies. If you've already configured sensitivity labels and their policies, this configuration is skipped. Protect sensitive data referenced in Microsoft 365 Copilot - This guides you through the process of creating a Purview Data Loss Prevention (DLP) policy to restrict the processing of content with specific sensitivity labels in Copilot interactions. Click here to learn more about Data Loss Prevention for Microsoft 365 Copilot. Protect sensitive data referenced in Copilot responses - Sensitivity labels help protect files by controlling user access to data. Microsoft 365 Copilot honors sensitivity labels on files and only shows users files they already have access to in prompts and responses. Use Data assessments to identify potential oversharing risks, including unlabeled files. Stay tuned for an upcoming blog post on using DSPM for AI data assessments! Use Copilot to improve your data security posture - Data Security Posture Management combines deep insights with Security Copilot capabilities to help you identify and address security risks in your org. Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization to discover and safeguard AI activity in one centralized place, as well as edit the policies or investigate alerts associated with those policies in solution. Note that additional policies not from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies to Secure and govern all AI apps. Govern the prompts and responses in Microsoft 365 Copilot interactions Understand and comply with AI regulations by selecting “Guided assistance to AI regulations” in the Recommendations tab and walking through the “Actions to take”. From the Recommendations tab, create a Control unethical behavior in AI Purview Communications Compliance policy to detect sensitive information in prompts and responses and address potentially unethical behavior in Microsoft Copilot experiences and ChatGPT for Enterprise. This policy covers all users and groups in your organization. To retain and/or delete Microsoft 365 Copilot prompts and responses, setup a Data Lifecycle policy by navigating to Microsoft Purview Data Lifecycle Management and find Retention Policies under the Policies header. You can also preserve, collect, analyze, review, and export Microsoft 365 Copilot interactions by creating an eDiscovery case. 2. Securing Enterprise AI apps Please refer to this amazing blog on Unlocking the Power of Microsoft Purview for ChatGPT Enterprise | Microsoft Community Hub for detailed information on how to integrate with ChatGPT for enterprise, the Purview solutions it currently supports through Purview Communication Compliance, Insider Risk Management, eDiscovery, and Data Lifecycle Management. Learn more about the feature also through our public documentation. 3. Securing other AI Microsoft Purview DSPM for AI currently supports the following list of AI sites. Be sure to also check out our blog on the new Microsoft Purview data security controls for the browser & network to secure other AI apps. Discover potential data security risks in prompts sent to other AI apps In the Overview tab of DSPM for AI, go through these three steps in “Get Started” to discover potential data security risk in other AI interactions: Install Microsoft Purview browser extension For Windows users: The Purview extension is not necessary for the enforcement of data loss prevention on the Edge browser but required for Chrome to detect sensitive info pasted or uploaded to AI sites. The extension is also required to detect browsing to other AI sites through an Insider Risk Management policy for both Edge and Chrome browser. Therefore, Purview browser extension is required for both Edge and Chrome in Windows. For MacOS users: The Purview extension is not necessary for the enforcement of data loss prevention on macOS devices, and currently, browsing to other AI sites through Purview Insider Risk Management is not supported on MacOS, therefore, no Purview browser extension is required for MacOS. Extend your insights for data discovery – this one-click collection policy will setup three separate Purview detection policies for other AI apps: Detect sensitive info shared in AI prompts in Edge – a Purview collection policy that detects prompts sent to ChatGPT consumer, Micrsoft Copilot, DeepSeek, and Google Gemini in Microsoft Edge and discovers sensitive information shared in prompt contents. This policy covers all users and groups in your organization in audit mode only. Detect when users visit AI sites – a Purview Insider Risk Management policy that detects when users use a browser to visit AI sites. Detect sensitive info pasted or uploaded to AI sites – a Purview Endpoint Data loss prevention (eDLP) policy that discovers sensitive content pasted or uploaded in Microsoft Edge, Chrome, and Firefox to AI sites. This policy covers all users and groups in your org in audit mode only. With the policies to discover sensitive information in other AI apps in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter by Other AI Apps, and review the following for other AI apps: Total interactions over time (other AI apps) Total visits (other AI apps) Sensitive interactions per AI app Insider Risk severity Insider risk severity per AI app Protect sensitive info shared with other AI apps From the Reports tab, click on “View details” for each of the report graphs to view detailed activities in the Activity Explorer. Using available filters, filter the results to view activities based on different Activity type, AI app category and App type, Scope, which support administrative units for DSPM for AI, and more. To protect the sensitive data in interactions for other AI apps, review the Not Started policies in the Recommendations tab and create these policies: Fortify your data security – This will create three policies to manage your data security risks with other AI apps: 1) Block elevated risk users from pasting or uploading sensitive info on AI sites – this will create a Microsoft Purview endpoint data loss prevention (eDLP) policy that uses adaptive protection to give a warn-with-override to elevated risk users attempting to paste or upload sensitive information to other AI apps in Edge, Chrome, and Firefox. This policy covers all users and groups in your org in test mode. Learn more about adaptive protection in Data loss prevention. 2) Block elevated risk users from submitting prompts to AI apps in Microsoft Edge – this will create a Microsoft Purview browser data loss prevention (DLP) policy, and using adaptive protection, this policy will block elevated, moderate, and minor risk users attempting to put information in other AI apps using Microsoft Edge. This integration is built-in to Microsoft Edge. Learn more about adaptive protection in Data loss prevention. 3) Block sensitive info from being sent to AI apps in Microsoft Edge - this will create a Microsoft Purview browser data loss prevention (DLP) policy to detect inline for a selection of common sensitive information types and blocks prompts being sent to AI apps while using Microsoft Edge. This integration is built-in to Microsoft Edge. Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization to discover and safeguard AI activity in one centralized place, as well as edit the policies or investigate alerts associated with those policies in solution. Note that additional policies not from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies to Secure and govern all AI apps. Conclusion Microsoft Purview DSPM for AI can help you discover, protect, and govern the interactions from AI applications in Microsoft Copilot experiences, Enterprise AI apps, and other AI apps. We recommend you review the Reports in DSPM for AI routinely to discover any new interactions that may be of concern, and to create policies to secure and govern those interactions as necessary. We also recommend you utilize the Activity Explorer in DSPM for AI to review different Activity explorer events while users interacting with AI, including the capability to view prompts and response with the right permissions. We will continue to update this blog with new features that become available in DSPM for AI, so be sure to bookmark this page! Follow-up Reading Check out this blog on the details of each recommended policies in DSPM for AI: Microsoft Purview – Data Security Posture Management (DSPM) for AI | Microsoft Community Hub Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn Downloadable whitepaper: Data Security for AI Adoption | Microsoft Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 3653.9KViews6likes1CommentUnderstanding and mitigating security risks in MCP implementations
Introducing any new technology can introduce new security challenges or exacerbate existing security risks. In this blog post, we’re going to look at some of the security risks that could be introduced to your environment when using Model Context Protocol (MCP), and what controls you can put in place to mitigate them. MCP is a framework that enables seamless integration between LLM applications and various tools and data sources. MCP defines: A standardized way for AI models to request external actions through a consistent API Structured formats for how data should be passed to and from AI systems Protocols for how AI requests are processed, executed, and returned MCP allows different AI systems to use a common set of tools and patterns, ensuring consistent behavior when AI models interact with external systems. MCP architecture MCP follows a client-server architecture that allows AI models to interact with external tools efficiently. Here’s how it works: MCP Host – The AI model (e.g., Azure OpenAI GPT) requesting data or actions. MCP Client – An intermediary service that forwards the AI model's requests to MCP servers. MCP Server – Lightweight applications that expose specific capabilities (APIs, databases, files, etc.). Data Sources – Various backend systems, including local storage, cloud databases, and external APIs. MCP security controls Any system which has access to important resources has implied security challenges. Security challenges can generally be addressed through correct application of fundamental security controls and concepts. As MCP is only newly defined, the specification is changing very rapidly and as the protocol evolves. Eventually the security controls within it will mature, enabling a better integration with enterprise and established security architectures and best practices. Research published in the Microsoft Digital Defense Report states that 98% of reported breaches would be prevented by robust security hygiene and the best protection against any kind of breach is to get your baseline security hygiene, secure coding best practices and supply chain security right – those tried and tested practices that we already know about still make the most impact in reducing security risk. Let's look at some of the ways that you can start to address security risks when adopting MCP. MCP server authentication (if your MCP implementation was before 26th April 2025) Problem statement: The original MCP specification assumed that developers would write their own authentication server. This requires knowledge of OAuth and related security constraints. MCP servers acted as OAuth 2.0 Authorization Servers, managing the required user authentication directly rather than delegating it to an external service such as Microsoft Entra ID. As of 26 April 2025, an update to the MCP specification allows for MCP servers to delegate user authentication to an external service. Risks: Misconfigured authorization logic in the MCP server can lead to sensitive data exposure and incorrectly applied access controls. OAuth token theft on the local MCP server. If stolen, the token can then be used to impersonate the MCP server and access resources and data from the service that the OAuth token is for. Mitigating controls: Thoroughly review your MCP server authorization logic, here some posts discussing this in more detail - Azure API Management Your Auth Gateway For MCP Servers | Microsoft Community Hub and Using Microsoft Entra ID To Authenticate With MCP Servers Via Sessions · Den Delimarsky Implement best practices for token validation and lifetime Use secure token storage and encrypt tokens Excessive permissions for MCP servers Problem statement: MCP servers may have been granted excessive permissions to the service/resource they are accessing. For example, an MCP server that is part of an AI sales application connecting to an enterprise data store should have access scoped to the sales data and not allowed to access all the files in the store. Referencing back to the principle of least privilege (one of the oldest security principles), no resource should have permissions in excess of what is required for it to execute the tasks it was intended for. AI presents an increased challenge in this space because to enable it to be flexible, it can be challenging to define the exact permissions required. Risks: Granting excessive permissions can allow for exfiltration or amending data that the MCP server was not intended to be able to access. This could also be a privacy issue if the data is personally identifiable information (PII). Mitigating controls: Clearly define the permissions that the MCP server has to access the resource/service it connects to. These permissions should be the minimum required for the MCP server to access the tool or data it is connecting to. Indirect prompt injection attacks Problem statement: Researchers have shown that the Model Context Protocol (MCP) is vulnerable to a subset of Indirect Prompt Injection attacks known as Tool Poisoning Attacks. Tool poisoning is a scenario where an attacker embeds malicious instructions within the descriptions of MCP tools. These instructions are invisible to users but can be interpreted by the AI model and its underlying systems, leading to unintended actions that could ultimately lead to harmful outcomes. Risks: Unintended AI actions present a variety of security risks that include data exfiltration and privacy breaches. Mitigating controls: Implement AI prompt shields: in Azure AI Foundry, you can follow these steps to implement AI prompt shields. Implement robust supply chain security: you can read more about how Microsoft implements supply chain security internally here. Established security best practices that will uplift your MCP implementation’s security posture Any MCP implementation inherits the existing security posture of your organization's environment that it is built upon, so when considering the security of MCP as a component of your overall AI systems it is recommended that you look at uplifting your overall existing security posture. The following established security controls are especially pertinent: Secure coding best practices in your AI application - protect against the OWASP Top 10, the OWASP Top 10 for LLMs, use of secure vaults for secrets and tokens, implementing end-to-end secure communications between all application components, etc. Server hardening – use MFA where possible, keep patching up to date, integrate the server with a third party identity provider for access, etc. Keep devices, infrastructure and applications up to date with patches Security monitoring – implementing logging and monitoring of an AI application (including the MCP client/servers) and sending those logs to a central SIEM for detection of anomalous activities Zero trust architecture – isolating components via network and identity controls in a logical manner to minimize lateral movement if an AI application were compromised. Conclusion MCP is a promising development in the AI space that enables rich data and context access. As developers embrace this new approach to integrating their organization's APIs and connectors into LLMs, they need to be aware of security risks and how to implement controls to reduce those risks. There are mitigating security controls that can be put in place to reduce the risks inherent in the current specification, but as the protocol develops expect that some of the risks will reduce or disappear entirely. We encourage you to contribute to and suggest security related MCP RFCs to make this protocol even better! With thanks to OrinThomas, dasithwijes, dendeli and Peter Marcu for their inputs and collaboration on this post.6.8KViews8likes0CommentsUnlocking the Power of Microsoft Purview for ChatGPT Enterprise
In today's rapidly evolving technology landscape, data security and compliance are key. Microsoft Purview offers a robust solution for managing and securing interactions with AI based solutions. This integration not only enhances data governance but also ensures that sensitive information is handled with the appropriate controls. Let's dive into the benefits of this integration and outline the steps to integrate with ChatGPT Enterprise in specific. The integration works for Entra connected users on the ChatGPT workspace, if you have needs that goes beyond this, please tell us why and how it impacts you. Important update 1: Effective May 1, these capabilities require you to enable pay-as-you-go billing in your organization. Important update 2: From May 19, you are required to create a collection policy to ingest ChatGPT Enterprise information. In DSPM for AI you will find this one click process. Benefits of Integrating ChatGPT Enterprise with Microsoft Purview Enhanced Data Security: By integrating ChatGPT Enterprise with Microsoft Purview, organizations can ensure that interactions are securely captured and stored within their Microsoft 365 tenant. This includes user text prompts and AI app text responses, providing a comprehensive record of communications. Compliance and Governance: Microsoft Purview offers a range of compliance solutions, including Insider Risk Management, eDiscovery, Communication Compliance, and Data Lifecycle & Records Management. These tools help organizations meet regulatory requirements and manage data effectively. Customizable Detection: The integration allows for the detection of built in can custom classifiers for sensitive information, which can be customized to meet the specific needs of the organization. To help ensures that sensitive data is identified and protected. The audit data streams into Advanced Hunting and the Unified Audit events that can generate visualisations of trends and other insights. Seamless Integration: The ChatGPT Enterprise integration uses the Purview API to push data into Compliant Storage, ensuring that external data sources cannot access and push data directly. This provides an additional layer of security and control. Step-by-Step Guide to Setting Up the Integration 1. Get Object ID for the Purview account in Your Tenant: Go to portal.azure.com and search for "Microsoft Purview" in the search bar. Click on "Microsoft Purview accounts" from the search results. Select the Purview account you are using and copy the account name. Go to portal.azure.com and search for “Enterprise" in the search bar. Click on Enterprise applications. Remove the filter for Enterprise Applications Select All applications under manage, search for the name and copy the Object ID. 2. Assign Graph API Roles to Your Managed Identity Application: Assign Purview API roles to your managed identity application by connecting to MS Graph utilizing Cloud Shell in the Azure portal. Open a PowerShell window in portal.azure.com and run the command Connect-MgGraph. Authenticate and sign in to your account. Run the following cmdlet to get the ServicePrincipal ID for your organization for the Purview API app. (Get-MgServicePrincipal -Filter "AppId eq '9ec59623-ce40-4dc8-a635-ed0275b5d58a'").id This command provides the permission of Purview.ProcessConversationMessages.All to the Microsoft Purview Account allowing classification processing. Update the ObjectId to the one retrieved in step 1 for command and body parameter. Update the ResourceId to the ServicePrincipal ID retrieved in the last step. $bodyParam= @{ "PrincipalId"= "{ObjectID}" "ResourceId" = "{ResourceId}" "AppRoleId" = "{a4543e1f-6e5d-4ec9-a54a-f3b8c156163f}" } New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam It will look something like this from the command line We also need to add the permission for the application to read the user accounts to correctly map the ChatGPT Enterprise user with Entra accounts. First run the following command to get the ServicePrincipal ID for your organization for the GRAPH app. (Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'").id The following step adds the permission User.Read.All to the Purview application. Update the ObjectId with the one retrieved in step 1. Update the ResourceId with the ServicePrincipal ID retrieved in the last step. $bodyParam= @{ "PrincipalId"= "{ObjectID}" "ResourceId" = "{ResourceId}" "AppRoleId" = "{df021288-bdef-4463-88db-98f22de89214}" } New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam 3. Store the ChatGPT Enterprise API Key in Key Vault The steps for setting up Key vault integration for Data Map can be found here Create and manage credentials for scans in the Microsoft Purview Data Map | Microsoft Learn When setup you will see something like this in Key vault. 4. Integrate ChatGPT Enterprise Workspace to Purview: Create a new data source in Purview Data Map that connects to the ChatGPT Enterprise workspace. Go to purview.microsoft.com and select Data Map, search if you do not see it on the first screen. Select Data sources Select Register Search for ChatGPT Enterprise and select Provide your ChatGPT Enterprise ID Create the first scan by selecting Table view and filter on ChatGPT Add your key vault credentials to the scan Test the connection and once complete click continue When you click continue the following screen will show up, if everything is ok click Save and run. Validate the progress by clicking on the name, completion of the first full scan may take an extended period of time. Depending on size it may take more than 24h to complete. If you click on the scan name you expand to all the runs for that scan. When the scan completes you can start to make use of the DSPM for AI experience to review interactions with ChatGPT Enterprise. The mapping to the users is based on the ChatGPT Enterprise connection to Entra, with prompts and responses stored in the user's mailbox. 5. Review and Monitor Data: Please see this article for required permissions and guidance around Microsoft Purview Data Security Posture Management (DSPM) for AI, Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Use Purview DSPM for AI analytics and Activity Explorer to review interactions and classifications. You can expand on prompts and responses in ChatGPT Enterprise 6. Microsoft Purview Communication Compliance Communication Compliance (here after CC) is a feature of Microsoft Purview that allows you to monitor and detect inappropriate or risky interactions with ChatGPT Enterprise. You can monitor and detect requests and responses that are inappropriate based on ML models, regular Sensitive Information Types, and other classifiers in Purview. This can help you identify Jailbreak and Prompt injection attacks and flag them to IRM and for case management. Detailed steps to configure CC policies and supported configurations can be found here. 7. Microsoft Purview Insider Risk Management We believe that Microsoft Purview Insider Risk Management (here after IRM) can serve a key role in protecting your AI workloads long term. With its adaptive protection capabilities, IRM dynamically adjusts user access based on evolving risk levels. In the event of heightened risk, IRM can enforce Data Loss Prevention (DLP) policies on sensitive content, apply tailored Entra Conditional Access policies, and initiate other necessary actions to effectively mitigate potential risks. This strategic approach will help you to apply more stringent policies where it matters avoiding a boil the ocean approach to allow your team to get started using AI. To get started use the signals that are available to you including CC signals to raise IRM tickets and enforce adaptive protection. You should create your own custom IRM policy for this. Do include Defender signals as well. Based on elevated risk you may select to block users from accessing certain assets such as ChatGPT Enterprise. Please see this article for more detail Block access for users with elevated insider risk - Microsoft Entra ID | Microsoft Learn. 8. eDiscovery eDiscovery of AI interactions is crucial for legal compliance, transparency, accountability, risk management, and data privacy protection. Many industries must preserve and discover electronic communications and interactions to meet regulatory requirements. Including AI interactions in eDiscovery ensures organizations comply with these obligations and preserves relevant evidence for litigation. This process also helps maintain trust by enabling the review of AI decisions and actions, demonstrating due diligence to regulators. Microsoft Purview eDiscovery solutions | Microsoft Learn 9. Data Lifecycle Management Microsoft Purview offers robust solutions to manage AI data from creation to deletion, including classification, retention, and secure disposal. This ensures that AI interactions are preserved and retrievable for audits, litigation, and compliance purposes. Please see this article for more information Automatically retain or delete content by using retention policies | Microsoft Learn. Closing By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their ChatGPT Enterprise interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. We are still in preview some of the features listed are not fully integrated, please reach out to us if you have any questions or if you have additional requirements.3.3KViews1like2CommentsExplore how to secure AI by attending our Learn Live Series
Register to attend Learn Live: Security for AI with Microsoft Purview and Defender for Cloud starting April 15 In this month-long webinar series, IT pros and security practitioners can hone their security skillsets with a deeper understanding of AI-centric challenges, opportunities, and best practices using Microsoft Security solutions. Each session will follow a hosted demo format and cover a Microsoft Learn module (topics listed below). You can ask the SMEs questions via the chat as they show you how to use Microsoft Purview and Microsoft Defender for Cloud to protect your organization in the age of AI. Learn Live dates/topics include: April 15 at 12pm PST – Manage AI Data Security Challenges with Microsoft Purview: Microsoft Purview helps you strengthen data security in AI environments, providing tools to handle challenges from AI technology. Learn to safeguard your data and adapt to evolving security challenges in AI technology. This session will help you: Understand sensitivity labels in Microsoft 365 Copilot Secure against generative AI data exposure with endpoint Data Loss Prevention Detect generative AI usage with Insider Risk Management Dynamically protect sensitive data with Adaptive Protection April 22 at 12pm PST – Manage Compliance with Microsoft Purview with Microsoft 365 Copilot: Use Microsoft Purview for compliance management with Microsoft 365 Copilot. You'll learn how to handle compliance aspects of Copilot's AI functionalities through Purview. This session will teach you how to: Audit Copilot interactions within Microsoft 365 using Microsoft Purview Investigate Copilot interactions using Microsoft Purview eDiscovery Manage Copilot data retention with Microsoft Purview Data Lifecycle Management Monitor and mitigate risks in Copilot interactions using Microsoft Purview Communication Compliance April 29 at 12pm PST – Identify and Mitigate AI Data Security Risks: Microsoft Purview Data Security Posture Management (DSPM) for AI helps organizations monitor AI activity, enforce security policies, and prevent unauthorized data exposure. Learn how to configure DSPM for AI, track AI interactions, run data assessments, and apply security controls to reduce risks associated with AI usage. You will learn how to: Explain the purpose and benefits of Microsoft Purview DSPM for AI Set up and configure DSPM for AI to monitor AI interactions Identify and analyze AI security risks using reports and insights Run and review AI data assessments to detect oversharing risks Apply security policies, such as DLP and sensitivity labels, to protect AI-referenced data May 13 at 10am PST – Enable Advanced Protection for AI Workloads with Microsoft Defender for Cloud: As organizations use and develop AI applications, they need to address new and amplified security risks. Prepare your environment for secure AI adoption to safeguard your data and identify threats to your AI. This session will help you: Understand how Defender for Cloud can protect AI workloads Enable threat protection workloads for AI Gain application and end user context for AI alerts Register today for these new sessions. We look forward to seeing you! If you’re unable to attend a session, don’t worry—the recordings will be made available on-demand via YouTube.1.5KViews0likes0CommentsEnhance AI security and governance across multi-model and multi-cloud environments
Generative AI adoption is accelerating, with AI transformation happening in real-time across various industries. This rapid adoption is reshaping how organizations operate and innovate, but it also introduces new challenges that require careful attention. At Ignite last fall, we announced several new capabilities to help organizations secure their AI transformation. These capabilities were designed to address top customer priorities such as preventing data oversharing, safeguarding custom AI, and preparing for emerging AI regulations. Organizations like Cummins, KPMG, and Mia Labs have leveraged these capabilities to confidently strengthen their AI security and governance efforts. However, despite these advancements, challenges persist. One major concern is the rise of shadow AI—applications used without IT or security oversight. In fact, 78% of AI users report bringing their own AI tools, such as ChatGPT and DeepSeek, into the workplace 1 . Additionally, new threats, like indirect prompt injection attacks, are emerging, with 77% of organizations expressing concerns and 11% of organizations identifying them as a critical risk 2 . To address these challenges, we are excited to announce new features and capabilities that help customers do the following: Prevent risky access and data leakage in shadow AI with granular access controls and inline data security capabilities Manage AI security posture across multi-cloud and multi-model environments Detect and respond to new AI threats, such as indirect prompt injections and wallet abuse Secure and govern data in Microsoft 365 Copilot and beyond In this blog, we’ll explore these announcements and demonstrate how they help organizations navigate AI adoption with confidence, mitigating risks, and unlocking AI’s full potential on their transformation journey. Prevent risky access and data leakage in shadow AI With the rapid rise of generative AI, organizations are increasingly encountering unauthorized employee use of AI applications without IT or security team approval. This unsanctioned and unprotected usage has given rise to “shadow AI,” significantly heightening the risk of sensitive data exposure. Today, we are introducing a set of access and data security controls designed to support a defense-in-depth strategy, helping you mitigate risks and prevent data leakage in third-party AI applications. Real-time access controls to shadow AI The first line of defense against security risks in AI applications is controlling access. While security teams can use endpoint controls to block access for all users across the organization, this approach is often too restrictive and impractical. Instead, they need more granular controls at the user level to manage access to SaaS-based AI applications. Today we are announcing the general availability of the AI web category filter in Microsoft Entra Internet Access to help enforce access controls that govern which users and groups have access to different AI applications. Internet Access deep integration with Microsoft Entra ID extends Conditional Access to any AI application, enabling organizations to apply AI access policies with granularity. By using Conditional Access as the policy control engine, organizations can enforce policies based on user roles, locations, device compliance, user risk levels, and other conditions, ensuring secure and adaptive access to AI applications. For example, with Internet Access, organizations can allow your strategy team to experiment with all or most consumer AI apps while blocking those apps for highly privileged roles, such as accounts payable or IT infrastructure admins. For even greater security, organizations can further restrict access to all AI applications if Microsoft Entra detects elevated identity risk. Inline discovery and protection of sensitive data Once users gain access to sanctioned AI applications, security teams still need to ensure that sensitive data isn’t shared with those applications. Microsoft Purview provides data security capabilities to prevent users from sending sensitive data to AI applications. Today, we are announcing enhanced Purview data security capabilities for the browser available in preview in the coming weeks. The new inline discovery & protection controls within Microsoft Edge for Business detect and block sensitive data from being sent to AI apps in real-time, even if typed directly. This prevents sensitive data leaks as users interact with consumer AI applications, starting with ChatGPT, Google Gemini, and DeepSeek. For example, if an employee attempts to type sensitive details about an upcoming merger or acquisition into Google Gemini to generate a written summary, the new inline protection controls in Microsoft Purview will block the prompt from being submitted, effectively blocking the potential leaks of confidential data to an unsanctioned AI app. This augments existing DLP controls for Edge for Business, including protections that prevent file uploads and the pasting of sensitive content into AI applications. Since inline protection is built natively into Edge for Business, newly deployed policies automatically take effect in the browser even if endpoint DLP is not deployed to the device. : Inline DLP in Edge for Business prevents sensitive data from being submitted to consumer AI applications like Google Gemini by blocking the action. The new inline protection controls are integrated with Adaptive Protection to dynamically enforce different levels of DLP policies based on the risk level of the user interacting with the AI application. For example, admins can block low-risk users from submitting prompts containing the highest-sensitivity classifiers for their organization, such as M&A-related data or intellectual property, while blocking prompts containing any sensitive information type (SIT) for elevated-risk users. Learn more about inline discovery & protection in the Edge for Business browser in this blog. In addition to the new capabilities within Edge for Business, today we are also introducing Purview data security capabilities for the network layer available in preview starting in early May. Enabled through integrations with Netskope and iboss to start, organizations will be able to extend inline discovery of sensitive data to interactions between managed devices and untrusted AI sites. By integrating Purview DLP with their SASE solution (e.g. Netskope and iBoss), data security admins can gain visibility into the use of sensitive data on the network as users interact with AI applications. These interactions can originate from desktop applications such as the ChatGPT desktop app or Microsoft Word with a ChatGPT plugin installed, or non-Microsoft browsers such as Opera and Brave that are accessing AI sites. Using Purview Data Security Posture Management (DSPM) for AI, admins will also have visibility into how these interactions contribute to organizational risk and can take action through DSPM for AI policy recommendations. For example, if there is a high volume of prompts containing sensitive data sent to ChatGPT, DSPM for AI will detect and recommend a new DLP policy to help mitigate this risk. Learn more about inline discovery for the network, including Purview integrations with Netskope and iBoss, in this blog. Manage AI security posture across multi-cloud and multi-model environments In today’s rapidly evolving AI landscape, developers frequently leverage multiple cloud providers to optimize cost, performance, and availability. Different AI models excel at various tasks, leading developers to deploy models from various providers for different use cases. Consequently, managing security posture across multi-cloud and multi-model environments has become essential. Today, Microsoft Defender for Cloud supports deployed AI workloads across Azure OpenAI Service, Azure Machine Learning, and Amazon Bedrock. To further enhance our security coverage, we are expanding AI Security Posture Management (AI-SPM) in Defender for Cloud to improve compatibility with additional cloud service providers and models. This includes: Support for Google Vertex AI models Enhanced support for Azure AI Foundry model catalog and custom models With this expansion, AI-SPM in Defender for Cloud will now offer the discovery of the AI inventory and vulnerabilities, attack path analysis, and recommended actions to address risks in Google VertexAI workloads. Additionally, it will support all models in Azure AI Foundry model catalog, including Meta Llama, Mistral, DeepSeek, as well as custom models. This expansion ensures a consistent and unified approach to managing AI security risks across multi-model and multi-cloud environments. Support for Google Vertex AI models will be available in public preview starting May 1, while support for Azure AI Foundry model catalog and custom models is generally available today. Learn More. 2: Microsoft Defender for Cloud detects an attack path to a DeepSeek R1 workload. In addition, Defender for Cloud will also offer a new data and AI security dashboard. Security teams will have access to an intuitive overview of their datastores and AI services across their multi-cloud environment, top recommendations, and critical attack paths to prioritize and accelerate remediation. The dashboard will be generally available on May 1. The new data & AI security dashboard in Microsoft Defender for Cloud provides a comprehensive overview of your data and AI security posture. These new capabilities reflect Microsoft’s commitment to helping organizations address the most critical security challenges in managing AI security posture in their heterogeneous environments. Detect and respond to new AI threats Organizations are integrating generative AI into their workflows and facing new security risks unique to AI. Detecting and responding to these evolving threats is critical to maintaining a secure AI environment. The Open Web Application Security Project (OWASP) provides a trusted framework for identifying and mitigating such vulnerabilities, such as prompt injection and sensitive information disclosure. Today, we are announcing Threat protection for AI services, a new capability that enhances threat protection in Defender for Cloud, enabling organizations to secure custom AI applications by detecting and responding to emerging AI threats more effectively. Building on the OWASP Top 10 risks for LLM applications, this capability addresses those critical vulnerabilities highlighted on the top 10 list, such as prompt injections and sensitive information disclosure. Threat protection for AI services helps organizations identify and mitigate threats to their custom AI applications using anomaly detection and AI-powered insights. With this announcement, Defender for Cloud will now extend its threat protection for AI workloads, providing a rich suite of new and enriched detections for Azure OpenAI Service and models in the Azure AI Foundry model catalog. New detections include direct and indirect prompt injections, novel attack techniques like ASCII smuggling, malicious URL in user prompts and AI responses, wallet abuse, suspicious access to AI resources, and more. Security teams can leverage evidence-based security alerts to enhance investigation and response actions through integration with Microsoft Defender XDR. For example, in Microsoft Defender XDR, a SOC analyst can detect and respond to a wallet abuse attack, where an attacker exploits an AI system to overload resources and increase costs. The analyst gains detailed visibility into the attack, including the affected application, user-entered prompts, IP address, and other suspicious activities performed by the bad actor. With this information, the SOC analyst can take action and block the attacker from accessing the AI application, preventing further risks. This capability will be generally available on May 1. Learn More. : Security teams can investigate new detections of AI threats in Defender XDR. Secure and govern data in Microsoft 365 Copilot and beyond Data oversharing and non-compliant AI use are significant concerns when it comes to securing and governing data in Microsoft Copilots. Today, we are announcing new data security and compliance capabilities. New data oversharing insights for unclassified data available in Microsoft Purview DSPM for AI: Today, we are announcing the public preview of on-demand classification for SharePoint and OneDrive. This new capability gives data security admins visibility into unclassified data stored in SharePoint and OneDrive and enables them to classify that data on demand. This helps ensure that Microsoft 365 Copilot is indexing and referencing files in its responses that have been properly classified. Previously, unclassified and unscanned files did not appear in DSPM for AI oversharing assessments. Now admins can initiate an on-demand data classification scan, directly from the oversharing assessment, ensuring that older or previously unscanned files are identified, classified, and incorporated into the reports. This allows organizations to detect and address potential risks more comprehensively. For example, an admin can initiate a scan of legacy customer contracts stored in a specified SharePoint library to detect and classify sensitive information such as account numbers or contact information. If these newly classified documents match the classifiers included in any existing auto-labeling policies, they will be automatically labeled. This helps ensure that documents containing sensitive information remain protected when they are referenced in Microsoft 365 Copilot interactions. Learn More. Security teams can trigger on-demand classification scan results in the oversharing assessment in Purview DSPM for AI. Secure and govern data in Security Copilot and Copilot in Fabric: We are excited to announce the public preview of Purview for Security Copilot and Copilot in Fabric, starting with Copilot in Power BI, offering DSPM for AI, Insider Risk Management, and data compliance controls, including eDiscovery, Audit, Data Lifecycle Management, and Communication Compliance. These capabilities will help organizations enhance data security posture, manage compliance, and mitigate risks more effectively. For example, admins can now use DSPM for AI to discover sensitive data in user prompts and responses and detect unethical or risky AI usage. Purview’s DSPM for AI provides admins with comprehensive reports on user activities and data interactions in Copilot for Power BI, as part of the Copilot in Fabric experience, and Security Copilot. DSPM Discoverability for Communication Compliance: This new feature in Communication Compliance, which will be available in public preview starting May 1, enables organizations to quickly create policies that detect inappropriate messages that could lead to data compliance risks. The new recommendation card on the DSPM for AI page offers a one-click policy creation in Microsoft Purview Communication Compliance, simplifying the detection and mitigation of potential threats, such as regulatory violations or improperly shared sensitive information. With these enhanced capabilities for securing and governing data in Microsoft 365 Copilot and beyond, organizations can confidently embrace AI innovation while maintaining strict security and compliance standards. Explore additional resources As organizations embrace AI, securing and governing its use is more important than ever. Staying informed and equipped with the right tools is key to navigating its challenges. Explore these resources to see how Microsoft Security can help you confidently adopt AI in your organization. Learn more about Security for AI solutions on our webpage Get started with Microsoft Purview Get started with Microsoft Defender for Cloud Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial Learn more about the innovations designed to help your organization protect data, defend against cyber threats, and stay compliant. Join Microsoft leaders online at Microsoft Secure on April 9. [1] 2024 Work Trend Index Annual Report, Microsoft and LinkedIn, May 2024, N=31,000. [2] Gartner®, Gartner Peer Community Poll – If your org’s using any virtual assistants with AI capabilities, are you concerned about indirect prompt injection attacks? GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.3.6KViews2likes0CommentsMicrosoft Defender for Cloud Apps - Ninja Training
Welcome to our Ninja Training for Microsoft Defender for Cloud Apps! Are you trying to protect your SaaS applications? Are you concerned about the posture of the apps you are using? Is shadow IT or AI a concern of yours? Then you are in the right place. The training below will aggregate all the relevant resources in one convenient location for you to learn from. Let’s start here with a quick overview of Microsoft Defender for Cloud Apps’ capabilities. Microsoft Defender for Cloud Apps | Microsoft Security Overview of Microsoft Defender for Cloud Apps and the capability of a SaaS Security solution. Overview - Microsoft Defender for Cloud Apps | Microsoft Learn Understand what Microsoft Defender for Cloud Apps is and read about its main capabilities. Quick Start The basic features of Defender for Cloud Apps require almost no effort to deploy. The recommended steps are to: Connect your apps Enable App Discovery Enable App Governance After enabling these features, all default detections and alerts will start triggering in the Microsoft Defender XDR console, and give you tremendous value with minimal configuration. Simplified SaaS Security Deployment with Microsoft Defender for Cloud Apps | Virtual Ninja Training Step-by-step video on how to quickly deploy Defender for Cloud Apps Get started - Microsoft Defender for Cloud Apps This quickstart describes how to start working with Microsoft Defender for Cloud Apps on the Microsoft Defender Portal. Review this if you prefer text to video Basic setup - Microsoft Defender for Cloud Apps The following procedure gives you instructions for customizing your Microsoft Defender for Cloud Apps environment. Connect apps to get visibility and control - Microsoft Defender for Cloud Apps App connectors use the APIs of app providers to enable greater visibility and control by Microsoft Defender for Cloud Apps over the apps you connect to. Make sure to connect all your available apps as you start your deployment Turn on app governance in Microsoft Defender for Cloud Apps App governance in Defender for Cloud Apps is a set of security and policy management capabilities designed for OAuth-enabled apps registered on Microsoft Entra ID, Google, and Salesforce. App governance delivers visibility, remediation, and governance into how these apps and their users access, use, and share sensitive data in Microsoft 365 and other cloud platforms through actionable insights out-of-the box threat detections, OAuth apps attack disruption, automated policy alerts and actions. It only takes a few minutes to enable and provide full visibility on your users’ Oauth app consents Shadow IT Discovery - Integrate with Microsoft Defender for Endpoint This article describes the out-of-the-box integration available between Microsoft Defender for Cloud Apps and Microsoft Defender for Endpoint, which simplifies cloud discovery and enabling device-based investigation. Control cloud apps with policies Policies in Microsoft Defender for Cloud Apps help define user behavior in the cloud, detect risky activities, and enable remediation workflows. There are various types of policies, such as Activity, Anomaly Detection, OAuth App, Malware Detection, File, Access, Session, and App Discovery policies. These policies help mitigate risks like access control, compliance, data loss prevention, and threat detection. Detect Threats and malicious behavior After connecting your cloud apps in Defender for Cloud Apps, you will start seeing alerts in your XDR portal. Here are resources to learn more about these alerts and how to investigate them. Note that we are constantly adding new built-in detections, and they are not necessarily part of our public documentation. How to manage incidents - Microsoft Defender XDR Learn how to manage incidents, from various sources, using Microsoft Defender XDR. How to investigate anomaly detection alerts Microsoft Defender for Cloud Apps provides detections for malicious activities. This guide provides you with general and practical information on each alert, to help with your investigation and remediation tasks. Note that detections are added on a regular basis, and not all of them will have entries in this guide. Configure automatic attack disruption in Microsoft Defender XDR - Microsoft Defender XDR | Microsoft Learn Learn how to take advantage of XDR capabilities to automatically disrupt high confidence attacks before damage is done. OAuth apps are natively integrated as part of Microsoft XDR. Create activity policies - Microsoft Defender for Cloud Apps | Microsoft Learn In addition to all the built-in detections as part of Microsoft Defender for Cloud Apps, you can also create your own policies, including Governance actions, based on the Activity log captured by Defender for Cloud Apps. Create and manage custom detection rules in Microsoft Defender XDR - Microsoft Defender XDR | Microsoft Learn Learn how to leverage XDR custom detection rules based on hunting data in the platform. CloudAppEvents table in the advanced hunting schema - Microsoft Defender XDR | Microsoft Learn Learn about the CloudAppEvents table which contains events from all connected applications with data enriched by Defender for Cloud Apps in a common schema. This data can be hunted across all connected apps and your separate XDR workloads. Investigate behaviors with advanced hunting - Microsoft Defender for Cloud Apps | Microsoft Learn Learn about behaviors and how they can help with security investigations. Investigate activities - Microsoft Defender for Cloud Apps | Microsoft Learn Learn how to search the activity log and investigate activities with a simple UI without the need for KQL App Governance – Protect from App-to-App attack scenario App governance in Microsoft Defender for Cloud Apps is crucial for several reasons. It enhances security by identifying and mitigating risks associated with OAuth-enabled apps, which can be exploited for privilege escalation, lateral movement, and data exfiltration. Organizations gain clear visibility into app compliance, allowing them to monitor how apps access, use, and share sensitive data. It provides alerts for anomalous behaviors, enabling quick responses to potential threats. Automated policy alerts and remediation actions help enforce compliance and protect against noncompliant or malicious apps. By governing app access, organizations can better safeguard their data across various cloud platforms. These features collectively ensure a robust security posture, protecting both data and users from potential threats. Get started with App governance - Microsoft Defender for Cloud Apps Learn how app governance enhances the security of SaaS ecosystems like Microsoft 365, Google Workspace, and Salesforce. This video details how app governance identifies integrated OAuth apps, detects and prevents suspicious activity, and provides in-depth monitoring and visibility into app metadata and behaviors to help strengthen your overall security posture. App governance in Microsoft Defender for Cloud Apps and Microsoft Defender XDR - Microsoft Defender for Cloud Apps | Microsoft Learn Defender for Cloud Apps App governance overview Create app governance policies - Microsoft Defender for Cloud Apps | Microsoft Learn Many third-party productivity apps request access to user data and sign in on behalf of users for other cloud apps like Microsoft 365, Google Workspace, and Salesforce. Users often accept these permissions without reviewing the details, posing security risks. IT departments may lack insight into balancing an app's security risk with its productivity benefits. Monitoring app permissions provides visibility and control to protect your users and applications. App governance visibility and insights - Microsoft Defender for Cloud Apps | Microsoft Learn Managing your applications requires robust visibility and insight. Microsoft Defender for Cloud Apps offers control through in-depth insights into user activities, data flows, and threats, enabling effective monitoring, anomaly detection, and compliance Reduce overprivileged permissions and apps Recommendations for reducing overprivileged permissions App Governance plays a critical role in governing applications in Entra ID. By integrating with Entra ID, App Governance provides deeper insights into application permissions and usage within your identity infrastructure. This correlation enables administrators to enforce stringent access controls and monitor applications more effectively, ensuring compliance and reducing potential security vulnerabilities. This page offers guidelines for reducing unnecessary permissions, focusing on the principle of least privilege to minimize security risks and mitigate the impact of breaches. Investigate app governance threat detection alerts List of app governance threat detection alerts classified according to MITRE ATT&CK and investigation guidance Manage app governance alerts Learn how to govern applications and respond to threat and risky applications directly from app governance or through policies. Hunt for threats in app activities Learn how to hunt for app activities directly form the XDR console (Microsoft 365 Connector required as discussed in quick start section). How to Protect Oauth Apps with App Governance in Microsoft Defender for Cloud Apps Webinar | How to Protect Oauth Apps with App Governance in Microsoft Defender for Cloud Apps. Learn how to protect Oauth applications in your environment, how to efficiently use App governance within Microsoft Defender for Cloud Apps to protect your connected apps and raise your security posture. App Governance is a Key Part of a Customers' Zero Trust Journey Webinar| learn about how the app governance add-on to Microsoft Defender for Cloud Apps is a key component of customers' Zero Trust journey. We will examine how app governance supports managing to least privilege (including identifying unused permissions), provides threat detections that are able and have already protected customers, and gives insights on risky app behaviors even for trusted apps. App Governance Inclusion in Defender for Cloud Apps Overview Webinar| App governance overview and licensing requirements. Frequently asked questions about app governance App governance FAQ Manage the security Posture of your SaaS (SSPM) One of the key components of Microsoft Defender for Cloud Apps is the ability to gain key information about the Security posture of your applications in the cloud (AKA: SaaS). This can give you a proactive approach to help avoid breaches before they happen. SaaS Security posture Management (or SSPM) is part the greater Exposure Management offering, and allows you to review the security configuration of your key apps. More details in the links below: Transform your defense: Microsoft Security Exposure Management | Microsoft Secure Tech Accelerator Overview of Microsoft Exposure Management and it’s capabilities, including how MDA & SSPM feed into this. SaaS Security Posture Management (SSPM) - Overview - Microsoft Defender for Cloud Apps | Microsoft Learn Understand simply how SSPM can help you increase the safety of your environment Turn on and manage SaaS security posture management (SSPM) - Microsoft Defender for Cloud Apps | Microsoft Learn Enabling SSPM in Defender for Cloud Apps requires almost no additional configuration (as long as your apps are already connected), and no extra license. We strongly recommend turning it on, and monitoring its results, as the cost of operation is very low. SaaS Security Initiative - Microsoft Defender for Cloud Apps | Microsoft Learn The SaaS Security Initiative provides a centralized place for software as a service (SaaS) security best practices, so that organizations can manage and prioritize security recommendations effectively. By focusing on the most impactful metrics, organizations can enhance their SaaS security posture. Secure your usage of AI applications AI is Information technologies’ newest tool and strongest innovation area. As we know it also brings its fair share of challenges. Defender for Cloud Apps can help you face these from two different angles: - First, our App Discovery capabilities give you a complete vision of all the Generative AI applications in use in an environment - Second, we provide threat detection capabilities to identify and alert from suspicious usage of Copilot for Microsoft 365, along with the ability to create custom detection using KQL queries. Secure AI applications using Microsoft Defender for Cloud Apps Overview of Microsoft Defender for Cloud Apps capabilities to secure your usage of Generative AI apps Step-by-Step: Discover Which Generative AI Apps Are Used in Your Environment Using Defender for Cloud Apps Detailed video-guide to deploy Discovery of Gen AI apps in your environment in a few minutes Step-by-Step: Protect Your Usage of Copilot for M365 Using Microsoft Defender for Cloud Apps Instructions and examples on how to leverage threat protection and advanced hunting capabilities to detect any risky or suspicious usage of Copilot for Microsoft 365 Get visibility into DeepSeek with Microsoft Defender for Cloud Apps Understand how fast the Microsoft Defender for Cloud Apps team can react when new apps or new threats come in the market. Discover Shadow IT applications Shadow IT and Shadow AI are two big challenges that organizations face today. Defender for Cloud Apps can help give you visibility you need, this will allow you to evaluate the risks, assess for compliance and apply controls over what can be used. Getting started The first step is to ensure the relevant data sources are connected to Defender for Cloud Apps to provide you the required visibility: Integrate Microsoft Defender for Endpoint - Microsoft Defender for Cloud Apps | Microsoft Learn The quickest and most seamless method to get visibility of cloud app usage is to integrate MDA with MDE (MDE license required). Create snapshot cloud discovery reports - Microsoft Defender for Cloud Apps | Microsoft Learn A sample set of logs can be ingested to generate a Snapshot. This lets you view the quality of the data before long term ingestion and also be used for investigations. Configure automatic log upload for continuous reports - Microsoft Defender for Cloud Apps | Microsoft Learn A log collector can be deployed to facilitate the collection of logs from your network appliances, such as firewalls or proxies. Defender for Cloud Apps cloud discovery API - Microsoft Defender for Cloud Apps | Microsoft Learn MDA also offers a Cloud Discovery API which can be used to directly ingest log information and mitigate the need for a log collector. Evaluate Discovered Apps Once Cloud Discovery logs are being populated into Defender for Cloud Apps, you can start the process of evaluating the discovered apps. This includes reviewing their usage, user count, risk scores and compliance factors. View discovered apps on the Cloud discovery dashboard - Microsoft Defender for Cloud Apps | Microsoft Learn View & evaluate the discovered apps within Cloud Discovery and Generate Cloud Discovery Executive Reports Working with the app page - Microsoft Defender for Cloud Apps | Microsoft Learn Investigate app usage and evaluate their compliance and risk factors Discovered app filters and queries - Microsoft Defender for Cloud Apps | Microsoft Learn Apply granular filtering and app tagging to focus on apps that are important to you Work with discovered apps via Graph API - Microsoft Defender for Cloud Apps | Microsoft Learn Investigate discovered apps via the Microsoft Graph API Add custom apps to cloud discovery - Microsoft Defender for Cloud Apps | Microsoft Learn You can add custom apps to the catalog which can then be matched against log data. This is useful for LOB applications. Govern Discovered Apps Having evaluated your discovered apps, you can then take some decisions on what level of governance and control each of the applications require and whether you want custom policies to help govern future applications: Govern discovered apps using Microsoft Defender for Endpoint - Microsoft Defender for Cloud Apps | Microsoft Learn Setup governance enforcement actions when using Microsoft Defender for Endpoint Govern discovered apps - Microsoft Defender for Cloud Apps | Microsoft Learn Apply governance actions to discovered apps from within the Cloud Discovery area Create cloud discovery policies - Microsoft Defender for Cloud Apps | Microsoft Learn Create custom Cloud Discovery policies to identify usage, alert and apply controls Operations and investigations - Sample AH queries - Tips on investigation - (section for SOC) Advanced Hunting Compromised and malicious applications investigation | Microsoft Learn Investigate anomalous app configuration changes Impersonation and EWS in Exchange | Microsoft Learn Audits impersonate privileges in Exchange Online Advanced Hunting Queries Azure-Sentinel/Solutions/Microsoft Entra ID/Analytic Rules/ExchangeFullAccessGrantedToApp.yaml at master · Azure/Azure-Sentinel · GitHub This detection looks for the full_access_as_app permission being granted to an OAuth application with Admin Consent. This permission provide access to all Exchange mailboxes via the EWS API can could be exploited to access sensitive data by being added to a compromised application. The application granted this permission should be reviewed to ensure that it is absolutely necessary for the applications function Azure-Sentinel/Solutions/Microsoft Entra ID/Analytic Rules/AdminPromoAfterRoleMgmtAppPermissionGrant.yaml at master · Azure/Azure-Sentinel · GitHub This rule looks for a service principal being granted permissions that could be used to add a Microsoft Entra ID object or user account to an Admin directory role. Azure-Sentinel/Solutions/Microsoft Entra ID/Analytic Rules/SuspiciousOAuthApp_OfflineAccess.yaml at master · Azure/Azure-Sentinel · GitHub Offline access will provide the Azure App with access to the listed resources without requiring two-factor authentication. Consent to applications with offline access and read capabilities should be rare, especially as the known Applications list is expanded Best Practice recommendations Common threat protection policies - Microsoft Defender for Cloud Apps | Microsoft Learn Common Defender for Cloud Apps Threat Protection policies Recommended Microsoft Defender for Cloud Apps policies for SaaS apps | Microsoft Learn Recommended Microsoft Defender for Cloud Apps policies for SaaS apps Best practices for protecting your organization - Microsoft Defender for Cloud Apps | Microsoft Learn Best practices for protecting your organization with Defender for Cloud Apps Completion certificate! Click here to get your shareable completion certificate!! Advanced configuration Training Title Description Importing user groups from connect apps This article outlines the steps on how to import user groups from connected apps Manage Admin Access This article describes how to manage admin access in Microsoft Defender for Cloud Apps. Configure MSSP Access In this video, we walk through the steps on adding Managed Security Service Provider (MSSP) access to Microsoft Defender for Cloud Apps. Provide managed security service provider (MSSP) access - Microsoft Defender XDR | Microsoft Learn Provide managed security service provider (MSSP) access Integrate with Secure Web Gateways Microsoft Defender for Cloud Apps integrates with several secure web gateways available in the market. Here are the links to configure this integration. Integrate with Zscaler Integrate with iboss Integrate with Corrata Integrate with Menlo Additional resources Microsoft Defender for Cloud Apps Tech Community This is a Microsoft Defender for Cloud Apps Community space that allows users to connect and discuss the latest news, upgrades, and best practices with Microsoft professionals and peers.2.4KViews4likes1Comment