microsoft defender for cloud
228 TopicsCheck out the latest security skill-building resources on Microsoft Learn
Prove your experience with this new Microsoft Applied Skill Are you an identity and access professional? Do you have a foundational understanding of Microsoft Entra ID? Showcase your experience and readiness for identity scenarios by earning our new Microsoft Applied Skill: Get started with identities and access using Microsoft Entra. You can prepare for the skills assessment by completing our Learning Path—Perform basic identity and access tasks—here you'll learn how to: Create, configure, and manage identities Describe the authentication capabilities of Microsoft Entra ID Describe the access management capabilities of Microsoft Entra Describe the identity protection and governance capabilities of Microsoft Entra Get started with identity and access labs On average, this Learning Path requires less than four hours to complete. Get started today! Certification update: Goodbye, SC-400 – hello, SC-401! As you may already know, we will be retiring Microsoft Certified: Information Protection and Compliance Administrator Associate Certification and its related Exam SC-400: Administering Information Protection and Compliance in Microsoft 365 on May 31, 2025. If you are considering renewing the certification please do so before the date. There is still several ways to showcase your expertise of Purview through the new Microsoft Certified: Information Security Administrator Certification and applied skills mentioned in this blog. There's still time: catch our Learn Live Series and enhance your security for AI capabilities As organizations develop, use, and increasingly rely on AI applications, they must address new and amplified security risks. Are you prepared to secure your environment for AI adoption? How about identifying threats to your AI and safeguarding data? Watch on demand: Learn Live – Security for AI with Microsoft Purview and Defender for Cloud In this four-part series, IT pros and security practitioners can hone their security skillsets with a deeper understanding of AI-centric challenges, opportunities, and best practices using Microsoft Security solutions. Topics include: Manage AI Data Security Challenges with Microsoft Purview: Microsoft Purview helps you strengthen data security in AI environments, providing tools to manage challenges from AI technology. Manage Compliance with Microsoft Purview with Microsoft 365 Copilot: Use Microsoft Purview for compliance management with Microsoft 365 Copilot. You'll learn how to handle compliance aspects of Copilot's AI functionalities through Purview. Identify and Mitigate AI Data Security Risks: Microsoft Purview Data Security Posture Management (DSPM) for AI helps organizations monitor AI activity, enforce security policies, and prevent unauthorized data exposure. Enable Advanced Protection for AI Workloads with Microsoft Defender for Cloud: As organizations use and develop AI applications, they need to address new and amplified security risks. Prepare your environment for secure AI adoption to safeguard your data and identify threats to your AI. If you are looking for more training and resources related to Microsoft Security, please visit the Security Hub.Enterprise-grade controls for AI apps and agents built with Azure AI Foundry and Copilot Studio
AI innovation is moving faster than ever, and more AI projects are moving beyond experimentation into deployment, to drive tangible business impact. As organizations accelerate innovation with custom AI applications and agents, new risks emerge across the software development lifecycle and AI stack related to data oversharing and leaks, new vulnerabilities and threats, and non-compliance with stringent regulatory requirements Through 2025, poisoning of software supply chains and infrastructure technology stacks will constitute more than 70% of malicious attacks against AI used in the enterprise 1 , highlighting potential threats that originate early in development. Today, the average cost of a data breach is $4.88 million, but when security issues are caught early in the development process, that number drops dramatically to just $80 per incident 2 . The message is very clear; security can’t be an afterthought anymore. It must be a team sport across the organization, embedded from the start and throughout the development lifecycle. That's why developers and security teams should align on processes and tools that bring security into every stage of the AI development lifecycle and give security practitioners visibility into and the ability to mitigate risks. To address these growing challenges and help customers secure and govern their AI workloads across development and security teams, we are: Enabling Azure AI Foundry and Microsoft Copilot Studio to provide best-in-class foundational capabilities to secure and govern AI workloads Deeply integrating and embedding industry-leading capabilities from Microsoft Purview, Microsoft Defender, and Microsoft Entra into Azure AI Foundry and Microsoft Copilot Studio This week, 3,000 developers are gathering in Seattle for the annual Microsoft Build conference, with many more tuning in online, to learn practical skills for accelerating their AI apps and agents' innovation. To support their AI innovation journey, today we are excited to announce several new capabilities to help developers and organizations secure and govern AI apps and agents. New Azure AI Foundry foundational capabilities to secure and govern AI workloads Azure AI Foundry enhancements for AI security and safety With 70,000 customers, 100 trillion tokens processed this quarter, and 2 billion enterprise search queries each day, Azure AI Foundry has grown beyond just an application layer—it's now a comprehensive platform for building agents that can plan, take action, and continuously learn to drive real business outcomes. To help organizations build and deploy AI with confidence, we’re introducing new security and safety capabilities and insights for developers in Azure AI Foundry Introducing Spotlighting to detect and block prompt injection attacks in real time As AI systems increasingly rely on external data sources, a new class of threats has emerged. Indirect prompt injection attacks embed hidden instructions in documents, emails, and web content, tricking models into taking unauthorized actions without any direct user input. These attacks are difficult to detect and hard to prevent using traditional filters alone. To address this, Azure AI Content Safety is introducing Spotlighting, now available in preview. Spotlighting strengthens the Prompt Shields guardrail by improving its ability to detect and handle potential indirect prompt injections, where hidden adversarial instructions are embedded in external content. This new capability helps prevent the model from inadvertently acting on malicious prompts that are not directly visible to the user. Enable Spotlighting in Azure AI Content Safety to detect potential indirect prompt injection attacks New capabilities for task adherence evaluation and task adherence mitigation to ensure agents remain within scope As developers build more capable agents, organizations face growing pressure to help confirm those agents act within defined instructions and policy boundaries. Even small deviations can lead to tool misuse, broken workflows, or risks like unintended exposure of sensitive data. To solve this, Azure AI Foundry now includes task adherence for agents, now in preview and powered by two components: a real-time evaluation and a new control within Azure AI Content Safety. At the core is a real-time task adherence evaluation API, part of Azure AI Content Safety. This API assesses whether an agent’s behavior is aligned with its assigned task by analyzing the user’s query, system instructions, planned tool calls, and the agent’s response. The evaluation framework is built on Microsoft’s Agent Evaluators, which measure intent resolution, tool selection accuracy, completeness of response, and overall alignment to the original request. Developers can run this scoring logic locally using the Task Adherence Evaluator in the Azure AI Evaluation SDK, with a five-point scale that ranges from fully nonadherent to fully adherent. This gives teams a flexible and transparent way to inspect task-level behavior before it causes downstream issues. Task adherence is enforced through a new control in Azure AI Content Safety. If an agent goes off-task, the control can block tool use, pause execution, or trigger human review. In Azure AI Agent Service, it is available as an opt-in feature and runs automatically. Combined with real-time evaluation, this control helps to ensure that agents stay on task, follow instructions, and operate according to enterprise policies. Learn more about Prompt Shields in Azure AI Content Safety. Azure AI Foundry continuous evaluation and monitoring of agentic systems Maintaining high performance and compliance for AI agents after deployment is a growing challenge. Without ongoing oversight, issues like performance degradation, safety risks, or unintentional misuse of resources can slip through unnoticed. To address this, Azure AI Foundry introduces continuous evaluation and monitoring of agentic systems, now in preview, provides a single pane of glass dashboard to track key metrics such as performance, quality, safety, and resource usage in real time. Continuous evaluation runs quality and safety evaluations at a sampled rate of production usage with results made available in the Azure AI Foundry Monitoring dashboard and published to Application Insights. Developers can set alerts to detect drift or regressions and use Azure Monitor to gain full-stack visibility into their AI systems. For example, an organization using an AI agent to assist with customer-facing tasks can monitor groundedness and detect a decline in quality when the agent begins referencing irrelevant information, helping teams to act before it potentially negatively affects trust of users. Azure AI Foundry evaluation integrations with Microsoft Purview Compliance Manager, Credo AI, and Saidot for streamlined compliance AI regulations and standards introduce new requirements for transparency, documentation, and risk management for high-risk AI systems. As developers build AI applications and agents, they may need guidance and tools to help them evaluate risks based on these requirements and seamlessly share control and evaluation insights with compliance and risk teams. Today, we are announcing previews for Azure AI Foundry evaluation tool’s integration with a compliance management solution, Microsoft Purview Compliance Manager, and AI governance solutions, Credo AI and Saidot. These integrations help define risk parameters, run suggested compliance evaluations, and collect evidence for control testing and auditing. For example, for a developer who’s building an AI agent in Europe may be required by their compliance team to complete a Data Protection Impact Assets (DPIA) and Algorithmic Impact Assessment (AIA) to meet internal risk management and technical documentation requirements aligned with emerging AI governance standards and best practices. Based on Purview Compliance Manager’s step-by-step guidance on controls implementation and testing, the compliance teams can evaluate risks such as potential bias, cybersecurity vulnerabilities, or lack of transparency in model behavior. Once the evaluation is conducted in Azure AI Foundry, the developer can obtain a report with documented risk, mitigation, and residual risk for compliance teams to upload to Compliance Manager to support audits and provide evidence to regulators or external stakeholders. Assess controls for Azure AI Foundry against emerging AI governance standards Learn more about Purview Compliance Manager. Learn more about the integration with Credo AI and Saidot in this blogpost. Leading Microsoft Entra, Defender and Purview value extended to Azure AI Foundry and Microsoft Copilot Studio Introducing Microsoft Entra Agent ID to help address agent sprawl and manage agent identity Organizations are rapidly building their own AI agents, leading to agent sprawl and a lack of centralized visibility and management. Security teams often struggle to keep up, unable to see which agents exist and whether they introduce security or compliance risks. Without proper oversight, agent sprawl increases the attack surface and makes it harder to manage these non-human identities. To address this challenge, we’re announcing the public preview of Microsoft Entra Agent ID, a new capability in the Microsoft Entra admin center that gives security admins visibility and control over AI agents built with Copilot Studio and Azure AI Foundry. With Microsoft Entra Agent ID, an agent created through Copilot Studio or Azure AI Foundry is automatically assigned an identity with no additional work required from the developers building them. This is the first step in a broader initiative to manage and protect non-human identities as organizations continue to build AI agents. : Security and identity admins can gain visibility into AI agents built in Copilot Studio and Azure AI Foundry in the Microsoft Entra Admin Center This new capability lays the foundation for more advanced capabilities coming soon to Microsoft Entra. We also know that no one can do it alone. Security has always been a team sport, and that’s especially true as we enter this new era of protecting AI agents and their identities. We’re energized by the momentum across the industry; two weeks ago, we announced support for the Agent-to-Agent (A2A) protocol and began collaborating with partners to shape the future of AI identity workflows. Today, we’re also excited to announce new partnerships with ServiceNow and Workday. As part of this, we’ll integrate Microsoft Entra Agent ID with the ServiceNow AI Platform and the Workday Agent System of Record. This will allow for automated provisioning of identities for future digital employees. Learn more about Microsoft Entra Agent ID. Microsoft Defender security alerts and recommendations now available in Azure AI Foundry As more AI applications are deployed to production, organizations need to predict and prevent potential AI threats with natively integrated security controls backed by industry-leading Gen AI and threat intelligence for AI deployments. Developers need critical signals from security teams to effectively mitigate security risks related to their AI deployments. When these critical signals live in separate systems outside the developer experience, this can create delays in mitigation, leaving opportunities for AI apps and agents to become liabilities and exposing organizations to various threats and compliance violations. Now in preview, Microsoft Defender for Cloud integrates AI security posture management recommendations and runtime threat protection alerts directly into the Azure AI Foundry portal. These capabilities, previously announced as part of the broader Microsoft Defender for Cloud solution, are extended natively into Azure AI Foundry enabling developers to access alerts and recommendations without leaving their workflows. This provides real-time visibility into security risks, misconfigurations, and active threats targeting their AI applications on specific Azure AI projects, without needing to switch tools or wait on security teams to provide details. Security insights from Microsoft Defender for Cloud help developers identify and respond to threats like jailbreak attacks, sensitive data leakage, and misuse of system resources. These insights include: AI security posture recommendations that identify misconfigurations and vulnerabilities in AI services and provide best practices to reduce risk Threat protection alerts for AI services that notify developers of active threats and provide guidance for mitigation, across more than 15 detection types For example, a developer building an AI-powered agent can receive security recommendations suggesting the use of Azure Private Link for Azure AI Services resources. This reduces the risk of data leakage by handling the connectivity between consumers and services over the Azure backbone network. Each recommendation includes actionable remediation steps, helping teams identify and mitigate risks in both pre- and post-deployment phases. This helps to reduce risks without slowing down innovation. : Developers can view security alerts on the Risks + alerts page in Azure AI Foundry : Developers can view recommendations on the Guardrails + controls page in Azure AI Foundry This integration is currently in preview and will be generally available in June 2025 in Azure AI Foundry. Learn more about protecting AI services with Microsoft Defender for Cloud. Microsoft Purview capabilities extended to secure and govern data in custom-built AI apps and agents Data oversharing and leakage are among the top concerns for AI adoption, and central to many regulatory requirements. For organizations to confidently deploy AI applications and agents, both low code and pro code developers need a seamless way to embed security and compliance controls into their AI creations. Without simple, developer-friendly solutions, security gaps can quickly become blockers, delaying deployment and increasing risks as applications move from development to production. Today, Purview is extending its enterprise-grade data security and compliance capabilities, making it easier for both low code and pro code developers to integrate data security and compliance into their AI applications and agents, regardless of which tools or platforms they use. For example, with this update, Microsoft Purview DSPM for AI becomes the one place data security teams can see all the data risk insights across Microsoft Copilots, agents built in Agent Builder and Copilot Studio, and custom AI apps and agents built in Azure AI Foundry and other platforms. Admins can easily drill into security and compliance insights for specific AI apps or agents, making it easier to investigate and take action on potential risks. : Data security admins can now find data security and compliance insights across Microsoft Copilots, agents built with Agent Builder and Copilot Studio, and custom AI apps and agents in Microsoft Purview DSPM for AI In the following sections, we will provide more details about the updates to Purview capabilities in various AI workloads. 1. Microsoft Purview data security and compliance controls can be extended to any custom-built AI application and agent via the new Purview SDK or the native Purview integration with Azure AI Foundry. The new capabilities make it easy and effortless for security teams to bring the same enterprise-grade data security compliance controls available today for Microsoft 365 Copilot to custom AI applications and agents, so organizations can: Discover data security risks, such as sensitive data in user prompts, and data compliance risks, such as harmful content, and get recommended actions to mitigate risks proactively in Microsoft Purview Data Security Posture Management (DSPM) for AI. Protect sensitive data against data leakage and insider risks with Microsoft Purview data security policies. Govern AI interactions with Audit, Data Lifecycle Management, eDiscovery, and Communication Compliance. Microsoft Purview SDK Microsoft Purview now offers Purview SDK, a set of REST APIs, documentation, and code samples, currently in preview, enabling developers to integrate Purview's data security and compliance capabilities into AI applications or agents within any integrated development environment (IDE). : By embedding Purview APIs into the IDE, developers help enable their AI apps to be secured and governed at runtime For example, a developer building an AI agent using an AWS model can use the Purview SDK to enable their AI app to automatically identify and block sensitive data entered by users before it’s exposed to the model, while also providing security teams with valuable signals that support compliance. With Purview SDK, startups, ISVs, and partners can now embed Purview industry-leading capabilities directly into their AI software solutions, making these solutions Purview aware and easier for their customers to secure and govern data in their AI solutions. For example, Infosys Vice President and Delivery Head of Cyber Security Practice, Ashish Adhvaryu indicates, “Infosys Cyber Next platform integrates Microsoft Purview to provide enhanced AI security capabilities. Our solution, the Cyber Next AI assistant (Cyber Advisor) for the SOC analyst, leverages Purview SDK to drive proactive threat mitigation with real-time monitoring and auditing capabilities. This integration provides holistic AI-assisted protection, enhancing cybersecurity posture." Microsoft partner EY (previously known as Ernst and Young) has also leveraged the new Purview SDK to embed Purview value into their GenAI initiatives. “We’re not just building AI tools, we are creating Agentic solutions where trust, security, and transparency are present from the start, supported by the policy controls provided through the Purview SDK. We’re seeing 25 to 30 percent time savings when we build secure features using the Purview SDK,” noted Sumanta Kar, Partner, Innovation and Emerging Tech at EY. Learn more about the Purview SDK. Microsoft Purview integrates natively with Azure AI Foundry Organizations are developing an average of 14 custom AI applications. The rapid pace of AI innovation may leave security teams unaware of potential data security and compliance risks within their environments. With the update announced today, Azure AI Foundry signals are now directly integrated with Purview Data Security Posture Management for AI, Insider Risk Management, and data compliance controls, minimizing the need for additional development work. For example, for AI applications and agents built with Azure AI Foundry models, data security teams can gain visibility into AI usage and data risks in Purview DSPM for AI, with no additional work from developers. Data security teams can also detect, investigate, and respond to both malicious and inadvertent user activities, such as a departing employee leveraging an AI agent to retrieve an anomalous amount of sensitive data, with Microsoft Purview Insider Risk Management (IRM) policies. Lastly, user prompts and AI responses in Azure AI apps and agents can now be ingested into Purview compliance tools as mentioned above. Learn more about Microsoft Purview for Azure AI Foundry. 2. Purview data protections extended to Copilot Studio agents grounded in Microsoft Dataverse data Coming to preview in June, Purview Information Protection extends auto-labeling and label inheritance coverage to Dataverse to help prevent oversharing and data leaks. Information Protection makes it easier for organizations to automatically classify and protect sensitive data at scale. A common challenge is that sensitive data often lands in Dataverse from various sources without consistent labeling or protection. The rapid adoption of agents built using Copilot Studio and grounding data from Dataverse increases the risk of data oversharing and leakage if data is not properly protected. With auto-labeling, data stored in Dataverse tables can be automatically labeled based on policies set in Microsoft Purview, regardless of its source. This reduces the need for manual labeling effort and protects sensitive information from the moment it enters Dataverse. With label inheritance, AI agent responses grounded in Dataverse data will automatically carry and honor the source data’s sensitivity label. If a response pulls from multiple tables with different labels, the most restrictive label is applied to ensure consistent protection. For example, a financial advisor building an agent in Copilot Studio might connect multiple Dataverse tables, some labeled as “General” and others as “Highly Confidential.” If a response pulls from both, it will inherit the most restrictive label, in this case, "Highly Confidential,” to prevent unauthorized access and ensure appropriate protections are applied across both maker and users of the agent. Together, auto-labeling and label inheritance in Dataverse support a more secure, automated foundation for AI. : Sensitivity labels will be automatically applied to data in Dataverse : AI-generated responses will inherit and honor the source data’s sensitivity labels Learn more about protecting Dataverse data with Microsoft Purview. 3. Purview DSPM for AI can now provide visibility into unauthenticated interactions with Copilot Studio agents As organizations increasingly use Microsoft Copilot Studio to deploy AI agents for frontline customer interactions, gaining visibility into unauthenticated user interactions and proactively mitigating risks becomes increasingly critical. Building on existing Purview and Copilot Studio integrations, we’ve extended DSPM for AI and Audit in Copilot Studio to provide visibility into unauthenticated interactions, now in preview. This gives organizations a more comprehensive view of AI-related data security risks across authenticated and unauthenticated users. For example, a healthcare provider hosting an external, customer-facing agent assistant must be able to detect and respond to attempts by unauthenticated users to access sensitive patient data. With these new capabilities in DSPM for AI, data security teams can now identify these interactions, assess potential exposure of sensitive data, and act accordingly. Additionally, integration with Purview Audit provides teams with seamless access to information needed for audit requirements. : Gain visibility into all AI interactions, including those from unauthenticated users Learn more about Purview for Copilot Studio. 4. Purview Data Loss Prevention extended to more Microsoft 365 agent scenarios To help organizations prevent data oversharing through AI, at Ignite 2024, we announced that data security admins could prevent Microsoft 365 Copilot from using certain labeled documents as grounding data to generate summaries or responses. Now in preview, this control also extends to agents published in Microsoft 365 Copilot that are grounded by Microsoft 365 data, including pre-built Microsoft 365 agents, agents built with the Agent Builder, and agents built with Copilot Studio. This helps ensure that files containing sensitive content are used appropriately by AI agents. For example, confidential legal documents with highly specific language that could lead to improper guidance if summarized by an AI agent, or "Internal only” documents that shouldn’t be used to generate content that can be shared outside of the organization. : Extend data loss prevention (DLP) policies to Microsoft 365 Copilot agents to protect sensitive data Learn more about Data Loss Prevention for Microsoft 365 Copilot and agents. The data protection capabilities we are extending to agents in Agent Builder and Copilot Studio demonstrate our continued investment in strengthening the Security and Governance pillar of the Copilot Control System (CSS). CCS provides integrated controls to help IT and security teams secure, manage, and monitor Copilot and agents across Microsoft 365, spanning governance, management, and reporting. Learn more here. Explore additional resources As developers and security teams continue to secure AI throughout its lifecycle, it’s important to stay ahead of emerging risks and ensure protection. Microsoft Security provides a range of tools and resources to help you proactively secure AI models, apps, and agents from code to runtime. Explore the following resources to deepen your understanding and strengthen your approach to AI security: Learn more about Security for AI solutions on our webpage Learn more about Microsoft Purview SDK Get started with Azure AI Foundry Get started with Microsoft Entra Get started with Microsoft Purview Get started with Microsoft Defender for Cloud Get started with Microsoft 365 Copilot Get started with Copilot Studio Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial 1 Predicts 2025: Navigating Imminent AI Turbulence for Cybersecurity, Jeremy D'Hoinne, Akif Khan, Manuel Acosta, Avivah Litan, Deepak Seth, Bart Willemsen, 10 February 2025 2 IBM. "Cost of a Data Breach 2024: Financial Industry." IBM Think, 13 Aug. 2024, https://www.ibm.com/think/insights/cost-of-a-data-breach-2024-financial-industry; Cser, Tamas. "The Cost of Finding Bugs Later in the SDLC." Functionize, 5 Jan. 2023, https://www.functionize.com/blog/the-cost-of-finding-bugs-later-in-the-sdlcProtect AI apps with Microsoft Defender
Stay in control with Microsoft Defender. You can identify which AI apps and cloud services are in use across your environment, evaluate their risk levels, and allow or block them as needed — all from one place. Whether it’s a sanctioned tool or a shadow AI app, you’re equipped to set the right policies and respond fast to emerging threats. Microsoft Defender gives you the visibility to track complex attack paths — linking signals across endpoints, identities, and cloud apps. Investigate real-time alerts, protect sensitive data from misuse in AI tools like Copilot, and enforce controls even for in-house developed apps using system prompts and Azure AI Foundry. Rob Lefferts, Microsoft Security CVP, joins me in the Mechanics studio to share how you can safeguard your AI-powered environment with a unified security approach. Identify and protect apps. Instantly surface all generative AI apps in use across your org — even unsanctioned ones. How to use Microsoft Defender for Cloud Apps. Extend AI security to internally developed apps. Get started with Microsoft Defender for Cloud. Respond with confidence. Stop attacks in progress and ensure sensitive data stays protected, even when users try to bypass controls. Get full visibility in Microsoft Defender incidents. Watch our video. QUICK LINKS: 00:00 — Stay in control with Microsoft Defender 00:39 — Identify and protect AI apps 02:04 — View cloud apps and website in use 04:14 — Allow or block cloud apps 07:14 — Address security risks of internally developed apps 08:44 — Example in-house developed app 09:40 — System prompt 10:39 — Controls in Azure AI Foundry 12:28 — Defender XDR 14:19 — Wrap up Link References Get started at https://aka.ms/ProtectAIapps Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast Keep getting this insider knowledge, join us on social: Follow us on Twitter: https://twitter.com/MSFTMechanics Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics Video Transcript: - While generative AI can help you do more, it can also introduce new security risks. Today, we’re going to demonstrate how you can stay in control with Microsoft Defender to discover the GenAI cloud apps that people in your organization are using right now and approve or block them based on their risk. And for your in-house developed AI apps, we’ll look at preventing jailbreaks and prompt injection attacks along with how everything comes together with Microsoft Defender incident management, to give you complete visibility into your events. Joining me once again to demonstrate how to get ahead of everything is Microsoft Security CVP, Rob Lefferts. Welcome back. - So glad to be back. - It’s always great to have you on to keep us ahead of the threat landscape. In fact, since your last time on the show, we’ve seen a significant increase in the use of generative AI apps, and some of them are sanctioned by IT but many of them are not. So what security concerns does this raise? - Each of those apps really carries their own risk, and even in-house developed apps aren’t necessarily immune to risk. We see some of the biggest risks with Consumer apps, especially the free ones, which are often designed to collect training data as users upload files into them or paste content into their prompts that can then be used to retrain the underlying model. So, before you know it, your data might be part of the public domain, that is, unless you get ahead of it. - And as you showed, this use of your data is often written front and center in the terms and conditions of these apps. - True, but not everyone reads all the fine print. To be clear, people go into these apps with good intentions, to work more efficiently and get more done, but they don’t always know the risks; and that’s where we give you the capabilities you need to identify and protect Generative AI SaaS apps using Microsoft Defender for Cloud Apps. And you can combine this with Microsoft Defender for Cloud for your internally developed apps alongside the unified incident management capabilities in Microsoft Defender XDR where the activities from both of these services and other connected systems come together in one place. - So given just how many cloud apps there are out there and a lot of companies building their own apps, where would you even start? - Well, for most orgs, it starts with knowing which external apps people in your company are using. If you don’t have proactive controls in place yet, there’s a pretty good chance that people are bringing their own apps. Now to find out what they’re using, right from the unified Defender portal, you can use Microsoft Defender for Cloud Apps for a complete view of cloud apps and websites in use inside your organization. The signal comes in from Defender-onboarded computers and phones. And if you’re not already using Defender for Cloud Apps, let me start by showing you the Cloud app catalog. Our researchers at Microsoft are continually identifying and classifying new cloud apps as they surface. There are over 34,000 apps across all of these filterable categories that are all based on best practice use cases across industries. Now if I scroll back up to Generative AI, you’ll see that there are more than 1,000 apps. And I’ll click on this control to filter the list down, and it’s a continually expanding list. We even add to it when existing cloud apps integrate new gen AI capabilities. Now once your signal starts to come in from your managed devices, moving back over to the dashboard, you’ll see that I have visibility into the full breadth of Cloud Apps in use, including Generative AI apps and lots of other categories. The report under Discovered apps provides visibility into the cloud apps with the broadest use within your managed network. And from there, you can again see categories of discovered apps. I’ll filter by Generative AI again, and this time it returns the specific apps in use in my org. Like before, each app has a defined risk score of 0 to 10, with 10 being the best, based on a number of parameters. And if I click into any one of them, like Microsoft Copilot, I can see the details as well as how they fair for general areas, a breadth of security capabilities, as well as compliance with standards and regulations, and whether they appear to meet legal and privacy requirements. - And this can save a lot of valuable time especially when you’re trying to get ahead of risks. - And Defender for Cloud Apps doesn’t just give you visibility. For your managed devices enrolled into Microsoft Defender, it also has controls that can either allow or block people from using defined cloud apps, based on the policies you have set as an administrator. From each cloud app, I can see an overview with activities surrounding the app with a few tabs. In the cloud app usage tab, I can drill in even more to see usage, users, IP addresses, and incident details. I’ll dig into Users, and here you can see who has used this app in my org. If I head back to my filtered view of generative AI apps in use, on the right you can see options to either sanction apps so that people can keep using them, or unsanction them to block them outright from being used. But rather than unsanction these apps one-by-one like Whack-a-Mole, there’s a better way, and that’s with automation based on the app’s risk score level. This way, you’re not manually configuring 1,000 apps in this category; nobody wants to do that. So I’ll head over to policy management, and to make things easier as new apps emerge, you can set up policies based on the risk score thresholds that I showed earlier, or other attributes. I’ll create a new policy, and from the dropdown, I’ll choose app discovery policy. Now I’ll name it Risky AI apps, and I can set the policy severity here too. Now, I’m going to select a filter, and I’ll choose category first, I’ll keep equals, and then scroll all the way down to Generative AI and pick that. Then, I need to add another filter. In this case, I’m going to find and choose risk score. I’ll pause for a second. Now what I want to happen is that when a new app is documented, or an existing cloud app incorporates new GenAI capabilities and meets my category and risk conditions, I want Defender for Cloud Apps to automatically unsanction those apps to stop people from using them on managed devices. So back in my policy, I can adjust this slider here for risk score. I’ll set it so that any app with a risk score of 0 to 6 will trigger a match. And if I scroll down a little more, this is the important part of doing the enforcement. I’ll choose tag app as unsanctioned and hit create to make it active. With that, my policy is set and next time my managed devices are synced with policy, Defender for Endpoint will block any generative AI app with a matching risk score. Now, let’s go see what it looks like. If I move over to a managed device, you’ll remember one of our four generative AI apps was something called Fakeyou. I have to be a little careful with how I enunciate that app name, and this is what a user would see. It’s clearly marked as being blocked by their IT organization with a link to visit the support page for more information. And this works with iOS, Android, Mac, and, of course, Windows devices once they are onboarded to Defender. - Okay, so now you can see and control which cloud apps are in use in your organization, but what about those in-house developed apps? How would you control the AI risks there? - So internally developed apps and enterprise-grade SaaS apps, like Microsoft Copilot, would normally have the controls and terms around data usage in place to prevent data loss and disallow vendors from training their models on your data. That said, there are other types of risks and that’s where Defender for Cloud comes in. If you’re new to Defender for Cloud, it connects the security team and developers in your company. For security teams, for your apps, there’s cloud security posture management to surface actions to predict and give you recommendations for preventing breaches before they happen. For cloud infrastructure and workloads, it gives you insights to highlight risks and guide you with specific protections that you can implement for all of your virtual machines, your data infrastructure, including databases and storage. And for your developers, using DevOps, you can even see best practice insights and associated risks with API endpoints being used, and in Containers see misconfigurations, exposed secrets and vulnerabilities. And for cloud infrastructure entitlement management, you can find out where you have potentially overprovisioned or inactive entitlements that could lead to a breach. And the nice thing is that from the central SecOps team perspective, these signals all flow into Microsoft Defender for end-to-end security tracking. In fact, I have an example here. This is an in-house developed app running on Azure that helps an employee input things like address, tax information, bank details for depositing your salary, and finding information on benefits options that employees can enroll into. It’s a pretty important app to ensure that the right protections are in place. And for anyone who’s entered a new job right after graduation, it can be confusing to know what benefits options to choose from, things like 401k or IRA for example in the U.S., or do you enroll into an employee stock purchasing program? It’s actually a really good scenario for generative AI when you think about it. And if you can act on the options it gives you to enroll into these services, again, it’s super helpful for the employees and important to have the right controls in place. Obviously, you don’t want your salary, stock, or benefits going into someone else’s account. So if you’re familiar with how generative AI apps work, most use what’s called a system prompt to enforce basic rules. But people, especially modern adversaries, are getting savvy to this and figuring out how to work around these basic guardrails: for example, by telling these AI tools to ignore their instructions. And I can show you an example of that. This is our app’s system prompt, and you’ll see that we’ve instructed the AI to not display ID numbers, account numbers, financial information, or tax elections with examples given for each. Now, I’ll move over to a running session with this app. I’ve already submitted a few prompts. And in the third one, with a gentle bit of persuasion, basically telling it that I’m a security researcher, for the AI model to ignore the instructions, it’s displaying information that my company and my dev team did not want it to display. This app even lets me update the bank account IBAN number with a prompt: Sorry, Adele. Fortunately, there’s a fix. Using controls as part of Azure AI Foundry, we can prevent this information from getting displayed to our user and potentially any attacker if their credentials or token has been compromised. So this is the same app on the right with no changes to the system message behind it, and I’ll enter the prompts in live this time. You’ll see that my exact same attempts to get the model to ignore its instructions no matter what I do, even as a security researcher, have been stopped in this case using Prompt Shields and have been flagged for immediate response. And these types of controls are even more critical as we start to build more autonomous agentic apps that might be parsing messages from external users and automatically taking action. - Right, and as we saw in the generated response, protection was enforced, like you said, using content safety controls in Azure AI Foundry. - Right, and those activities are also passed to Defender XDR incidents, so that you can see if someone is trying to work around the rules that your developers set. Let me quickly show you where these controls were set up to defend our internal app against these types of prompt injection or jailbreak attempts. I’m in the new Azure AI Foundry portal under safety + security for my app. The protected version of the app has Prompt shields for jailbreak and indirect attacks configured here as input filters. That’s all I had to do. And what I showed before was a direct jailbreak attack. There can also be indirect attacks. These methods are a little sneakier where the attacker, for example, might poison reference data upstream with maybe an email sent previously or even an image with hidden instructions, which gets added to the prompt. And we protect you in both cases. - Okay, so now you have policy protections in place. Do I need to identify and track issues in their respective dashboards then? - You can, and depending on your role or how deep in any area you want to go, all are helpful. But if you want to stitch together multiple alerts as part of something like a multi-stage attack, that’s where Defender XDR comes in. It will find the connections between different events, whether the user succeeded or not, and give you the details you need to respond to them. I’m now in the Defender XDR portal and can see all of my incidents. I want to look at a particular incident, 206872. We have a compromised user account, but this time it’s not Jonathan Wolcott; it’s Marie Ellorriaga. - I have a feeling Jonathan’s been watching these shows on Mechanics to learn what not to do. - Good for him; it’s about time. So let’s see what Marie, or the person using her account, was up to. It looks like they found our Employee Assistant internal app, then tried to Jailbreak it. But because our protections were in place, this attempt was blocked, and we can see the evidence of that from this alert here on the right. Then we can see that they moved on to Microsoft 365 Copilot and tried to get into some other finance-related information. And because of our DLP policies preventing Copilot from processing labeled content, that activity also wouldn’t have been successful. So our information was protected. - And these controls get even more important, I think, as agents also become more mainstream. - That’s right, and those agents often need to send information outside of your trust boundary to reason over it, so it’s risky. And more than just visibility, as you saw, you have active protections to keep your information secure in real-time for the apps you build in-house and even shadow AI SaaS apps that people are using on your managed devices. - So for anyone who’s watching today right now, what do you recommend they do to get started? - So to get started on the things that we showed today, we’ve created end-to-end guidance for this that walks you through the entire process at aka.ms/ProtectAIapps; so that you can discover and control the generative AI cloud apps people are using now, build protections into the apps you’re building, and make sure that you have the visibility you need to detect and respond to AI-related threats. - Thanks, Rob, and, of course, to stay up-to-date with all the latest tech at Microsoft, be sure to keep checking back on Mechanics. Subscribe if you haven’t already, and we’ll see you again soon.849Views1like0CommentsHow to exclude IPs & accounts from Analytic Rule, with Watchlist?
We are trying to filter out some false positives from a Analytic rule called "Service accounts performing RemotePS". Using automation rules still gives a lot of false mail notifications we don't want so we would like to try using a watchlist with the serviceaccounts and IP combination we want to exclude. Anyone knows where and what syntax we would need to exlude the items on the specific Watchlist? Query: let InteractiveTypes = pack_array( // Declare Interactive logon type names 'Interactive', 'CachedInteractive', 'Unlock', 'RemoteInteractive', 'CachedRemoteInteractive', 'CachedUnlock' ); let WhitelistedCmdlets = pack_array( // List of whitelisted commands that don't provide a lot of value 'prompt', 'Out-Default', 'out-lineoutput', 'format-default', 'Set-StrictMode', 'TabExpansion2' ); let WhitelistedAccounts = pack_array('FakeWhitelistedAccount'); // List of accounts that are known to perform this activity in the environment and can be ignored DeviceLogonEvents // Get all logon events... | where AccountName !in~ (WhitelistedAccounts) // ...where it is not a whitelisted account... | where ActionType == "LogonSuccess" // ...and the logon was successful... | where AccountName !contains "$" // ...and not a machine logon. | where AccountName !has "winrm va_" // WinRM will have pseudo account names that match this if there is an explicit permission for an admin to run the cmdlet, so assume it is good. | extend IsInteractive=(LogonType in (InteractiveTypes)) // Determine if the logon is interactive (True=1,False=0)... | summarize HasInteractiveLogon=max(IsInteractive) // ...then bucket and get the maximum interactive value (0 or 1)... by AccountName // ... by the AccountNames | where HasInteractiveLogon == 0 // ...and filter out all accounts that had an interactive logon. // At this point, we have a list of accounts that we believe to be service accounts // Now we need to find RemotePS sessions that were spawned by those accounts // Note that we look at all powershell cmdlets executed to form a 29-day baseline to evaluate the data on today | join kind=rightsemi ( // Start by dropping the account name and only tracking the... DeviceEvents // ... | where ActionType == 'PowerShellCommand' // ...PowerShell commands seen... | where InitiatingProcessFileName =~ 'wsmprovhost.exe' // ...whose parent was wsmprovhost.exe (RemotePS Server)... | extend AccountName = InitiatingProcessAccountName // ...and add an AccountName field so the join is easier ) on AccountName // At this point, we have all of the commands that were ran by service accounts | extend Command = tostring(extractjson('$.Command', tostring(AdditionalFields))) // Extract the actual PowerShell command that was executed | where Command !in (WhitelistedCmdlets) // Remove any values that match the whitelisted cmdlets | summarize (Timestamp, ReportId)=arg_max(TimeGenerated, ReportId), // Then group all of the cmdlets and calculate the min/max times of execution... make_set(Command, 100000), count(), min(TimeGenerated) by // ...as well as creating a list of cmdlets ran and the count.. AccountName, AccountDomain, DeviceName, DeviceId // ...and have the commonality be the account, DeviceName and DeviceId // At this point, we have machine-account pairs along with the list of commands run as well as the first/last time the commands were ran | order by AccountName asc // Order the final list by AccountName just to make it easier to go through | extend HostName = iff(DeviceName has '.', substring(DeviceName, 0, indexof(DeviceName, '.')), DeviceName) | extend DnsDomain = iff(DeviceName has '.', substring(DeviceName, indexof(DeviceName, '.') + 1), "")45Views0likes0CommentsEnhance AI security and governance across multi-model and multi-cloud environments
Generative AI adoption is accelerating, with AI transformation happening in real-time across various industries. This rapid adoption is reshaping how organizations operate and innovate, but it also introduces new challenges that require careful attention. At Ignite last fall, we announced several new capabilities to help organizations secure their AI transformation. These capabilities were designed to address top customer priorities such as preventing data oversharing, safeguarding custom AI, and preparing for emerging AI regulations. Organizations like Cummins, KPMG, and Mia Labs have leveraged these capabilities to confidently strengthen their AI security and governance efforts. However, despite these advancements, challenges persist. One major concern is the rise of shadow AI—applications used without IT or security oversight. In fact, 78% of AI users report bringing their own AI tools, such as ChatGPT and DeepSeek, into the workplace 1 . Additionally, new threats, like indirect prompt injection attacks, are emerging, with 77% of organizations expressing concerns and 11% of organizations identifying them as a critical risk 2 . To address these challenges, we are excited to announce new features and capabilities that help customers do the following: Prevent risky access and data leakage in shadow AI with granular access controls and inline data security capabilities Manage AI security posture across multi-cloud and multi-model environments Detect and respond to new AI threats, such as indirect prompt injections and wallet abuse Secure and govern data in Microsoft 365 Copilot and beyond In this blog, we’ll explore these announcements and demonstrate how they help organizations navigate AI adoption with confidence, mitigating risks, and unlocking AI’s full potential on their transformation journey. Prevent risky access and data leakage in shadow AI With the rapid rise of generative AI, organizations are increasingly encountering unauthorized employee use of AI applications without IT or security team approval. This unsanctioned and unprotected usage has given rise to “shadow AI,” significantly heightening the risk of sensitive data exposure. Today, we are introducing a set of access and data security controls designed to support a defense-in-depth strategy, helping you mitigate risks and prevent data leakage in third-party AI applications. Real-time access controls to shadow AI The first line of defense against security risks in AI applications is controlling access. While security teams can use endpoint controls to block access for all users across the organization, this approach is often too restrictive and impractical. Instead, they need more granular controls at the user level to manage access to SaaS-based AI applications. Today we are announcing the general availability of the AI web category filter in Microsoft Entra Internet Access to help enforce access controls that govern which users and groups have access to different AI applications. Internet Access deep integration with Microsoft Entra ID extends Conditional Access to any AI application, enabling organizations to apply AI access policies with granularity. By using Conditional Access as the policy control engine, organizations can enforce policies based on user roles, locations, device compliance, user risk levels, and other conditions, ensuring secure and adaptive access to AI applications. For example, with Internet Access, organizations can allow your strategy team to experiment with all or most consumer AI apps while blocking those apps for highly privileged roles, such as accounts payable or IT infrastructure admins. For even greater security, organizations can further restrict access to all AI applications if Microsoft Entra detects elevated identity risk. Inline discovery and protection of sensitive data Once users gain access to sanctioned AI applications, security teams still need to ensure that sensitive data isn’t shared with those applications. Microsoft Purview provides data security capabilities to prevent users from sending sensitive data to AI applications. Today, we are announcing enhanced Purview data security capabilities for the browser available in preview in the coming weeks. The new inline discovery & protection controls within Microsoft Edge for Business detect and block sensitive data from being sent to AI apps in real-time, even if typed directly. This prevents sensitive data leaks as users interact with consumer AI applications, starting with ChatGPT, Google Gemini, and DeepSeek. For example, if an employee attempts to type sensitive details about an upcoming merger or acquisition into Google Gemini to generate a written summary, the new inline protection controls in Microsoft Purview will block the prompt from being submitted, effectively blocking the potential leaks of confidential data to an unsanctioned AI app. This augments existing DLP controls for Edge for Business, including protections that prevent file uploads and the pasting of sensitive content into AI applications. Since inline protection is built natively into Edge for Business, newly deployed policies automatically take effect in the browser even if endpoint DLP is not deployed to the device. : Inline DLP in Edge for Business prevents sensitive data from being submitted to consumer AI applications like Google Gemini by blocking the action. The new inline protection controls are integrated with Adaptive Protection to dynamically enforce different levels of DLP policies based on the risk level of the user interacting with the AI application. For example, admins can block low-risk users from submitting prompts containing the highest-sensitivity classifiers for their organization, such as M&A-related data or intellectual property, while blocking prompts containing any sensitive information type (SIT) for elevated-risk users. Learn more about inline discovery & protection in the Edge for Business browser in this blog. In addition to the new capabilities within Edge for Business, today we are also introducing Purview data security capabilities for the network layer available in preview starting in early May. Enabled through integrations with Netskope and iboss to start, organizations will be able to extend inline discovery of sensitive data to interactions between managed devices and untrusted AI sites. By integrating Purview DLP with their SASE solution (e.g. Netskope and iBoss), data security admins can gain visibility into the use of sensitive data on the network as users interact with AI applications. These interactions can originate from desktop applications such as the ChatGPT desktop app or Microsoft Word with a ChatGPT plugin installed, or non-Microsoft browsers such as Opera and Brave that are accessing AI sites. Using Purview Data Security Posture Management (DSPM) for AI, admins will also have visibility into how these interactions contribute to organizational risk and can take action through DSPM for AI policy recommendations. For example, if there is a high volume of prompts containing sensitive data sent to ChatGPT, DSPM for AI will detect and recommend a new DLP policy to help mitigate this risk. Learn more about inline discovery for the network, including Purview integrations with Netskope and iBoss, in this blog. Manage AI security posture across multi-cloud and multi-model environments In today’s rapidly evolving AI landscape, developers frequently leverage multiple cloud providers to optimize cost, performance, and availability. Different AI models excel at various tasks, leading developers to deploy models from various providers for different use cases. Consequently, managing security posture across multi-cloud and multi-model environments has become essential. Today, Microsoft Defender for Cloud supports deployed AI workloads across Azure OpenAI Service, Azure Machine Learning, and Amazon Bedrock. To further enhance our security coverage, we are expanding AI Security Posture Management (AI-SPM) in Defender for Cloud to improve compatibility with additional cloud service providers and models. This includes: Support for Google Vertex AI models Enhanced support for Azure AI Foundry model catalog and custom models With this expansion, AI-SPM in Defender for Cloud will now offer the discovery of the AI inventory and vulnerabilities, attack path analysis, and recommended actions to address risks in Google VertexAI workloads. Additionally, it will support all models in Azure AI Foundry model catalog, including Meta Llama, Mistral, DeepSeek, as well as custom models. This expansion ensures a consistent and unified approach to managing AI security risks across multi-model and multi-cloud environments. Support for Google Vertex AI models will be available in public preview starting May 1, while support for Azure AI Foundry model catalog and custom models is generally available today. Learn More. 2: Microsoft Defender for Cloud detects an attack path to a DeepSeek R1 workload. In addition, Defender for Cloud will also offer a new data and AI security dashboard. Security teams will have access to an intuitive overview of their datastores and AI services across their multi-cloud environment, top recommendations, and critical attack paths to prioritize and accelerate remediation. The dashboard will be generally available on May 1. The new data & AI security dashboard in Microsoft Defender for Cloud provides a comprehensive overview of your data and AI security posture. These new capabilities reflect Microsoft’s commitment to helping organizations address the most critical security challenges in managing AI security posture in their heterogeneous environments. Detect and respond to new AI threats Organizations are integrating generative AI into their workflows and facing new security risks unique to AI. Detecting and responding to these evolving threats is critical to maintaining a secure AI environment. The Open Web Application Security Project (OWASP) provides a trusted framework for identifying and mitigating such vulnerabilities, such as prompt injection and sensitive information disclosure. Today, we are announcing Threat protection for AI services, a new capability that enhances threat protection in Defender for Cloud, enabling organizations to secure custom AI applications by detecting and responding to emerging AI threats more effectively. Building on the OWASP Top 10 risks for LLM applications, this capability addresses those critical vulnerabilities highlighted on the top 10 list, such as prompt injections and sensitive information disclosure. Threat protection for AI services helps organizations identify and mitigate threats to their custom AI applications using anomaly detection and AI-powered insights. With this announcement, Defender for Cloud will now extend its threat protection for AI workloads, providing a rich suite of new and enriched detections for Azure OpenAI Service and models in the Azure AI Foundry model catalog. New detections include direct and indirect prompt injections, novel attack techniques like ASCII smuggling, malicious URL in user prompts and AI responses, wallet abuse, suspicious access to AI resources, and more. Security teams can leverage evidence-based security alerts to enhance investigation and response actions through integration with Microsoft Defender XDR. For example, in Microsoft Defender XDR, a SOC analyst can detect and respond to a wallet abuse attack, where an attacker exploits an AI system to overload resources and increase costs. The analyst gains detailed visibility into the attack, including the affected application, user-entered prompts, IP address, and other suspicious activities performed by the bad actor. With this information, the SOC analyst can take action and block the attacker from accessing the AI application, preventing further risks. This capability will be generally available on May 1. Learn More. : Security teams can investigate new detections of AI threats in Defender XDR. Secure and govern data in Microsoft 365 Copilot and beyond Data oversharing and non-compliant AI use are significant concerns when it comes to securing and governing data in Microsoft Copilots. Today, we are announcing new data security and compliance capabilities. New data oversharing insights for unclassified data available in Microsoft Purview DSPM for AI: Today, we are announcing the public preview of on-demand classification for SharePoint and OneDrive. This new capability gives data security admins visibility into unclassified data stored in SharePoint and OneDrive and enables them to classify that data on demand. This helps ensure that Microsoft 365 Copilot is indexing and referencing files in its responses that have been properly classified. Previously, unclassified and unscanned files did not appear in DSPM for AI oversharing assessments. Now admins can initiate an on-demand data classification scan, directly from the oversharing assessment, ensuring that older or previously unscanned files are identified, classified, and incorporated into the reports. This allows organizations to detect and address potential risks more comprehensively. For example, an admin can initiate a scan of legacy customer contracts stored in a specified SharePoint library to detect and classify sensitive information such as account numbers or contact information. If these newly classified documents match the classifiers included in any existing auto-labeling policies, they will be automatically labeled. This helps ensure that documents containing sensitive information remain protected when they are referenced in Microsoft 365 Copilot interactions. Learn More. Security teams can trigger on-demand classification scan results in the oversharing assessment in Purview DSPM for AI. Secure and govern data in Security Copilot and Copilot in Fabric: We are excited to announce the public preview of Purview for Security Copilot and Copilot in Fabric, starting with Copilot in Power BI, offering DSPM for AI, Insider Risk Management, and data compliance controls, including eDiscovery, Audit, Data Lifecycle Management, and Communication Compliance. These capabilities will help organizations enhance data security posture, manage compliance, and mitigate risks more effectively. For example, admins can now use DSPM for AI to discover sensitive data in user prompts and responses and detect unethical or risky AI usage. Purview’s DSPM for AI provides admins with comprehensive reports on user activities and data interactions in Copilot for Power BI, as part of the Copilot in Fabric experience, and Security Copilot. DSPM Discoverability for Communication Compliance: This new feature in Communication Compliance, which will be available in public preview starting May 1, enables organizations to quickly create policies that detect inappropriate messages that could lead to data compliance risks. The new recommendation card on the DSPM for AI page offers a one-click policy creation in Microsoft Purview Communication Compliance, simplifying the detection and mitigation of potential threats, such as regulatory violations or improperly shared sensitive information. With these enhanced capabilities for securing and governing data in Microsoft 365 Copilot and beyond, organizations can confidently embrace AI innovation while maintaining strict security and compliance standards. Explore additional resources As organizations embrace AI, securing and governing its use is more important than ever. Staying informed and equipped with the right tools is key to navigating its challenges. Explore these resources to see how Microsoft Security can help you confidently adopt AI in your organization. Learn more about Security for AI solutions on our webpage Get started with Microsoft Purview Get started with Microsoft Defender for Cloud Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial Learn more about the innovations designed to help your organization protect data, defend against cyber threats, and stay compliant. Join Microsoft leaders online at Microsoft Secure on April 9. [1] 2024 Work Trend Index Annual Report, Microsoft and LinkedIn, May 2024, N=31,000. [2] Gartner®, Gartner Peer Community Poll – If your org’s using any virtual assistants with AI capabilities, are you concerned about indirect prompt injection attacks? GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.4.1KViews2likes0CommentsThe security benefits of structuring your Azure OpenAI calls – The System Role
In the rapidly evolving landscape of GenAI usage by companies, ensuring the security and integrity of interactions is paramount. A key aspect is managing the different conversational roles—namely system, user, and assistant. By clearly defining and separating these roles, you can maintain clarity and context while enhancing security. In this blog post, we explore the benefits of structuring your Azure OpenAI calls properly, focusing especially on the system prompt. A misconfigured system prompt can create a potential security risk for your application, and we’ll explain why and how to avoid it. The Different Roles in an AI-Based Chat Application Any AI chat application, regardless of the domain, is based on the interaction between two primary players, the user and the assistant. The user provides input or queries. The assistant generates contextually appropriate and coherent responses. Another important but sometimes overlooked player is the designer or developer of the application. This individual determines the purpose, flow, and tone of the application. Usually, this player is referred to as the system. The system provides the initial instructions and behavioral guidelines for the model. Microsoft Defender for Cloud’s researchers identified emerging anti-pattern Microsoft Defender for Cloud (MDC) offers security posture management and threat detection capabilities across clouds and has recently released a new set of features to help organizations build secure enterprise-ready gen-AI apps in the cloud, helping them build securely and stay secure. MDC’s research experts continuously track the development patterns to enhance the offering but also to promote secure practices to their customers and the wider tech community. They are also primary contributors to the OWASP Top 10 threats for LLM (Idan Hen, research team manager). Recently, MDC's research experts identified a common anti-pattern in AI application development is emerging – appending the system to the user prompt. Mixing these sections is easy and tempting – developers often use it because it’s slightly faster while building and also allows them to maintain context through long conversations. But this practice is harmful – it introduces detrimental security risks that could easily result in 'game over' – exposing sensitive data, getting your computer abused, or making your system vulnerable to Jailbreak attacks. Diving deeper: How system prompts evaluation keeps your application secure Separate system, user and assistant prompts with Azure OpenAI ChatCompletion API Azure OpenAI Service's Chat Completion API is a powerful tool designed to facilitate rich and interactive conversational experiences. Leveraging the capabilities of advanced language models, this API enables developers to create human-like chat interactions within their applications. By structuring conversations with distinct roles—system, user, and assistant—the API ensures clarity and context throughout the dialogue: [{"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request]}, {"role": "assistant", "content": [Model’s response] } ] This structured interaction model allows for enhanced user engagement across various use cases such as customer support, virtual assistants, and interactive storytelling. By understanding and predicting the flow of conversation, the Chat Completion API helps create not only natural and engaging user experiences but securer applications, driving innovation in communication technology. Anti-pattern explained When developers append their instructions to the user prompt. The model receives single input composed by two different sources: developer and user: {"role": "user", "content”: [Developer’s instructions] + [User’s request]} {"role": "assistant", "content": [Model’s response] } When developer instructions are mingled with user input, detection and content filtering systems often struggle to distinguish between the two. Anti-pattern resulting in less secured application This blurring of input roles can facilitate easier manipulation through both direct and indirect prompt injections, thereby increasing the risk of misuse and harmful content not being detected properly by security and safety systems. Developer instructions frequently contain security-related content, such as forbidden requests and responses, as well as lists of do's and don'ts. If these instructions are not conveyed using the system role, this important method for restricting model usage becomes less effective. Additionally, customers have reported that protection systems may misinterpret these instructions as malicious behavior, leading to a high rate of false positive alerts and the unwarranted blocking of benign content. In one case, a customer described forbidden behavior and appended it to the user role. The threat detection system then flagged it as malicious user activity. Moreover, developer instructions may contain private content and information related to the application's inner workings, such as available data sources and tools, their descriptions, and legitimate and illegitimate operations. Although it is not recommended, these instructions may also include information about the logged-in user, connected data sources and information related to the application's operation. Content within the system role enjoys higher privacy; a model can be instructed not to reveal it to the user, and a system prompt leak is considered a security vulnerability. When developer instructions are inserted together with user instructions, the probability of a system prompt leak is much higher, thereby putting our application at risk. Why do developers mingle their instructions with user input? In many cases, recurring instructions improve the overall user experience. During lengthy interactions, the model tends to forget earlier conversations, including the developer instructions provided in the system role. For example, a model instructed to role-play in an English teaching application or act as a medical assistant in a hospital support application may forget its assigned role by the end of the conversation. This can lead to poor user experience and potential confusion. To mitigate this issue, it is crucial to find methods to remind the model of its role and instructions throughout the interaction. One incorrect approach is to append the developer's instructions to user input by adding them to the User role. Although it keeps developers’ instructions fresh in the model's 'memory,' this practice can significantly impact security, as we saw earlier. Enjoy both user experience and secured application To enjoy both quality detection and filtering capabilities along with a maximal user experience throughout the entire conversation, one option is to refeed developer instructions using the system role several times as the conversation continues: {"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request 1]} {"role": "assistant", "content": [Model’s response 1] } {"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request 2]} {"role": "assistant", "content": [Model’s response 2] } By doing so, we achieve the best of both worlds: maintaining the best practice of separating developer instructions from user requests using the Chat Completion API, while keeping the instructions fresh in the model's memory. This approach ensures that detection and filtering systems function effectively, our instructions get the model's full attention, and our system prompt remains secure, all without compromising the user experience. To further enhance the protection of your AI applications and maximize detection and filtering capabilities, it is recommended to provide contextual information regarding the end user and the relevant application. Additionally, it is crucial to identify and mark various input sources and involved entities, such as grounding data, tools, and plugins. By doing so, our system can achieve a higher level of accuracy and efficacy in safeguarding your AI application. In our upcoming blog post, we will delve deeper into these critical aspects, offering detailed insights and strategies to further optimize the protection of your AI applications. Start secure and stay secure when building Gen-AI apps with Microsoft Defender for Cloud Structuring your prompts securely is the best-practice when designing chatbots. There are other lines of defense that must be put in place to fully secure your environment. Sign up and Enable the new Defender for cloud threat protection for AI for active threat detection (preview). Enable posture management to cover all your cloud security risks, including new AI posture features. Further Reading Microsoft Defender for Cloud (MDC). AI protection using MDC. Chat Completion API. Security challenges related to GenAI. How to craft effective System Prompt. The role of System Prompt in Chat Completion API. Responsible AI practices for Azure OpenAI models. Asaf Harari, Data Scientist, Microsoft Threat Protection Research. Shiran Horev, Principal Product Manager, Microsoft Defender for Cloud. Slava Reznitsky, Principal Architect, Microsoft Defender for Cloud.Microsoft Security in Action: Zero Trust Deployment Essentials for Digital Security
The Zero Trust framework is widely regarded as a key security model and a commonly referenced standard in modern cybersecurity. Unlike legacy perimeter-based models, Zero Trust assumes that adversaries will sometimes get access to some assets in the organization, and you must build your security strategy, architecture, processes, and skills accordingly. Implementing this framework requires a deliberate approach to deployment, configuration, and integration of tools. What is Zero Trust? At its core, Zero Trust operates on three guiding principles: Assume Breach (Assume Compromise): Assume attackers can and will successfully attack anything (identity, network, device, app, infrastructure, etc.) and plan accordingly. Verify Explicitly: Protect assets against attacker control by explicitly validating that all trust and security decisions use all relevant available information and telemetry. Use Least Privileged Access: Limit access of a potentially compromised asset, typically with just-in-time and just-enough-access (JIT/JEA) and risk-based policies like adaptive access control. Implementing a Zero Trust architecture is essential for organizations to enhance security and mitigate risks. Microsoft's Zero Trust framework essentially focuses on six key technological pillars: Identity, Endpoints, Data, Applications, Infrastructure, & Networks. This blog provides a structured approach to deploying each pillar. 1. Identity: Secure Access Starts Here Ensure secure and authenticated access to resources by verifying and enforcing policies on all user and service identities. Here are some key deployment steps to get started: Implement Strong Authentication: Enforce Multi-Factor Authentication (MFA) for all users to add an extra layer of security. Adopt phishing-resistant methods, such as password less authentication with biometrics or hardware tokens, to reduce reliance on traditional passwords. Leverage Conditional Access Policies: Define policies that grant or deny access based on real-time risk assessments, user roles, and compliance requirements. Restrict access from non-compliant or unmanaged devices to protect sensitive resources. Monitor and Protect Identities: Use tools like Microsoft Entra ID Protection to detect and respond to identity-based threats. Regularly review and audit user access rights to ensure adherence to the principle of least privilege. Integrate threat signals from diverse security solutions to enhance detection and response capabilities. 2. Endpoints: Protect the Frontlines Endpoints are frequent attack targets. A robust endpoint strategy ensures secure, compliant devices across your ecosystem. Here are some key deployment steps to get started: Implement Device Enrollment: Deploy Microsoft Intune for comprehensive device management, including policy enforcement and compliance monitoring. Enable self-service registration for BYOD to maintain visibility. Enforce Device Compliance Policies: Set and enforce policies requiring devices to meet security standards, such as up-to-date antivirus software and OS patches. Block access from devices that do not comply with established security policies. Utilize and Integrate Endpoint Detection and Response (EDR): Deploy Microsoft Defender for Endpoint to detect, investigate, and respond to advanced threats on endpoints and integrate with Conditional Access. Enable automated remediation to quickly address identified issues. Apply Data Loss Prevention (DLP): Leverage DLP policies alongside Insider Risk Management (IRM) to restrict sensitive data movement, such as copying corporate data to external drives, and address potential insider threats with adaptive protection. 3. Data: Classify, Protect, and Govern Data security spans classification, access control, and lifecycle management. Here are some key deployment steps to get started: Classify and Label Data: Use Microsoft Purview Information Protection to discover and classify sensitive information based on predefined or custom policies. Apply sensitivity labels to data to dictate handling and protection requirements. Implement Data Loss Prevention (DLP): Configure DLP policies to prevent unauthorized sharing or transfer of sensitive data. Monitor and control data movement across endpoints, applications, and cloud services. Encrypt Data at Rest and in Transit: Ensure sensitive data is encrypted both when stored and during transmission. Use Microsoft Purview Information Protection for data security. 4. Applications: Manage and Secure Application Access Securing access to applications ensures that only authenticated and authorized users interact with enterprise resources. Here are some key deployment steps to get started: Implement Application Access Controls: Use Microsoft Entra ID to manage and secure access to applications, enforcing Conditional Access policies. Integrate SaaS and on-premises applications with Microsoft Entra ID for seamless authentication. Monitor Application Usage: Deploy Microsoft Defender for Cloud Apps to gain visibility into application usage and detect risky behaviors. Set up alerts for anomalous activities, such as unusual download patterns or access from unfamiliar locations. Ensure Application Compliance: Regularly assess applications for compliance with security policies and regulatory requirements. Implement measures such as Single Sign-On (SSO) and MFA for application access. 5. Infrastructure: Securing the Foundation It’s vital to protect the assets you have today providing business critical services your organization is creating each day. Cloud and on-premises infrastructure hosts crucial assets that are frequently targeted by attackers. Here are some key deployment steps to get started: Implement Security Baselines: Apply secure configurations to VMs, containers, and Azure services using Microsoft Defender for Cloud. Monitor and Protect Infrastructure: Deploy Microsoft Defender for Cloud to monitor infrastructure for vulnerabilities and threats. Segment workloads using Network Security Groups (NSGs). Enforce Least Privilege Access: Implement Just-In-Time (JIT) access and Privileged Identity Management (PIM). Just-in-time (JIT) mechanisms grant privileges on-demand when required. This technique helps by reducing the time exposure of privileges that are required for people, but are only rarely used. Regularly review access rights to align with current roles and responsibilities. 6. Networks: Safeguard Communication and Limit Lateral Movement Network segmentation and monitoring are critical to Zero Trust implementation. Here are some key deployment steps to get started: Implement Network Segmentation: Use Virtual Networks (VNets) and Network Security Groups (NSGs) to segment and control traffic flow. Secure Remote Access: Deploy Azure Virtual Network Gateway and Azure Bastion for secure remote access. Require device and user health verification for VPN access. Monitor Network Traffic: Use Microsoft Defender for Endpoint to analyze traffic and detect anomalies. Taking the First Step Toward Zero Trust Zero Trust isn’t just a security model—it’s a cultural shift. By implementing the six pillars comprehensively, organizations can potentially enhance their security posture while enabling seamless, secure access for users. Implementing Zero Trust can be complex and may require additional deployment approaches beyond those outlined here. Cybersecurity needs vary widely across organizations and deployment isn’t one-size-fits all, so these steps might not fully address your organization’s specific requirements. However, this guide is intended to provide a helpful starting point or checklist for planning your Zero Trust deployment. For a more detailed walkthrough and additional resources, visit Microsoft Zero Trust Implementation Guidance. The Microsoft Security in Action blog series is an evolving collection of posts that explores practical deployment strategies, real-world implementations, and best practices to help organizations secure their digital estate with Microsoft Security solutions. Stay tuned for our next blog on deploying and maximizing your investments in Microsoft Threat Protection solutions.Extremely Slow Performance Since Defender Was Pushed on Us
Compliance, Security, Protection, and Defender are all extremely slow, with responses from screen to screen ranging from 30 seconds to multiple minutes between clicking items and waiting for Microsoft cloud to return results. I have a GB link and speed test well over 600 Mbps so it's not on my end. It appears the cutover in late January to this new "Defender" platform has been extremely detrimental to the Office portal response times in these portals. What is being done to resolve this?20KViews2likes12Comments