enterprise security
12 TopicsAnnouncing Public Preview of DLP for M365 Copilot in Word, Excel, and PowerPoint
Today, we are excited to announce the public preview of Data Loss Prevention (DLP) for M365 Copilot in Word, Excel, and PowerPoint. This development extends the capabilities you rely on for safeguarding data in M365 Copilot Chat, bringing DLP protections to everyday Copilot scenarios within these core productivity apps. Building on Our Foundation Data oversharing and leakage is a top concern for organizations using generative AI technology, and securing AI-based workflows can feel overwhelming. We’ve been laying a strong foundation with Microsoft Purview Data Loss Prevention—especially with DLP for M365 Copilot—and are excited to expand its reach to further reduce the risk of AI-related oversharing at scale. In the original public preview release, we enabled admins to configure DLP rules that block Copilot from processing or summarizing sensitive documents in M365 Copilot Chat. However, these controls didn’t extend to the powerful in-app Copilot experiences, such as rewriting text in Word, summarizing presentations in PowerPoint, or generating helpful formulas in Excel. That changes now with this public preview. The Next Phase of DLP for M365 Copilot Similar to our original approach for M365 Copilot Chat, we are bringing consistent, flexible protection to M365 Copilot for Word, Excel, and PowerPoint. Here’s how it works in this preview: Current file DLP checks: Copilot now respects sensitivity labels on an opened document or workbook. If a document has a sensitivity label and a DLP rule that excludes its content from Copilot processing, Copilot actions like summarizing or auto-generating content directly in the canvas are blocked. Chatting with Copilot is also unavailable. File reference DLP checks: When a user tries to reference other files in a prompt – like pulling data or slides from other labeled documents – Copilot checks DLP policies before retrieving the content. If there is a DLP policy configured to block Copilot processing of files with that file’s sensitivity label, Copilot will show an apology message rather than summarizing that content – so no accidental oversharing occurs. You can learn more about DLP for M365 Copilot here: Learn about the Microsoft 365 Copilot policy location (preview) Getting Started Enabling DLP for M365 Copilot in Word, Excel, and PowerPoint follows a setup similar to configuring DLP policies for other workloads. From the Purview compliance portal, you can configure the DLP policy for a specific sensitivity label at a file, group, site, and/or user level. If you have already enabled a DLP for M365 Copilot policy with the ongoing DLP for M65 Copilot Chat preview, no further action is needed – the policy will automatically begin to apply in Word, Excel, and PowerPoint Copilot experiences. In this preview, our focus is on ensuring reliability, performance, and seamless integration with the Office apps you use every day. We’ll continue to refine the user experience as we move toward general availability, including improvements to error messages and user guidance for each scenario. Join the Preview This public preview reflects our ongoing commitment to deliver robust data protection for AI-powered workflows. By extending the same DLP principles you trust to Word, Excel, and PowerPoint, we’re empowering you to embrace AI confidently without sacrificing control over your organization’s most valuable information. We invite you to start testing these capabilities in your environment. Your feedback is invaluable to us – we encourage all customers to share their experiences and insights, helping shape the next evolution of DLP for M365 Copilot in Office.1.9KViews1like4CommentsOptimizing Cybersecurity Costs with FinOps
This blog highlights the integration of two essential technologies: Cybersecurity best practices and effective budget management across tools and services. Let’s understand FinOps FinOps is a cultural practice for cloud cost management. It enables teams to take ownership of cloud usage. It helps organizations maximize value by fostering collaboration among technology, finance, and business teams on data-driven spending decisions. FinOps Framework The FinOps Framework works across the following areas: Principles Collaborate as a team. Take responsibility for cloud resources. Ensure timely access to reports. Phases Inform: Visibility and allocation Optimize: Utilization Operate: Continuous improvement and operations Maturity: Crawl, Walk, Run Key Components of Cybersecurity Budgets Preventive Measures Preventive measures serve as the initial line of defense in cybersecurity. These measures encompass firewalls, antivirus software, and encryption tools. The primary objective of these measures is to avert cybersecurity incidents from occurring. They constitute a critical component of any comprehensive cybersecurity strategy and often account for a substantial portion of the budget. Detection & Monitoring Tools like Azure Firewalls and Azure monitoring are essential for identifying potential security threats and alerting teams early to minimize impact. Incident Response Incident response comprises the measures taken to mitigate the impact of a security breach after its occurrence. This process includes isolating compromised systems, eliminating malicious software, and restoring affected systems to their normal functionality Training & Awareness Training and awareness are crucial for cybersecurity. Educating employees about threats, teach them how to avoid risks, and inform them of company security policies. Investing in training can prevent security incidents. FinOps approach to managing the cost of Security Security Cost-Optimization Security is crucial as threats and cyber-attacks evolve. Azure FinOps helps identify and remove cloud spending inefficiencies, allowing resources to be reallocated to advanced threat detection, robust controls like MFA and ZTNA, and continuous monitoring tools. Azure FinOps provides visibility into cloud costs, identifying underutilized or redundant resources and over-provisioned budgets that can be redirected to cybersecurity. Continuous real-time monitoring helps spot trends, anomalies, and inefficiencies, aligning resources with strategic goals. Regular audits may reveal overlapping subscriptions or unused security features, while ongoing monitoring prevents these issues from recurring. The efficiency gained can fund advanced threat detection, new protection measures, or security training. FinOps ensures every dollar spent on cloud services adds value, transforming waste into a secure, efficient cloud environment. Risk Mitigation FinOps boosts visibility and transparency, helping teams find weaknesses and risks in licenses, identities, devices, and access points. This is crucial for improving IAM, configuring access controls correctly, and using MFA to protect systems and data, also involves continuous monitoring to spot security gaps early and align measures with organizational goals. It helps manage financial risk by estimating breach costs and allocating resources efficiently. Regular risk assessments and budget adjustments ensure effective security investments that balance defense and business objectives. Improved Compliance and Governance Complying with standards like GDPR, HIPAA, or PCI-DSS is essential for strong cyber defenses. A FinOps approach helps by automating compliance reporting, allowing organizations to use cost-effective tools such as Azure FinOps toolkit to meet regulations. Conclusion Azure FinOps is a useful tool for managing cybersecurity costs. It enhances cost visibility and accountability, enables budget optimization and assists with compliance audits and reporting, also helps businesses invest their resources effectively and efficiently.263Views0likes0CommentsUnderstanding and mitigating security risks in MCP implementations
Introducing any new technology can introduce new security challenges or exacerbate existing security risks. In this blog post, we’re going to look at some of the security risks that could be introduced to your environment when using Model Context Protocol (MCP), and what controls you can put in place to mitigate them. MCP is a framework that enables seamless integration between LLM applications and various tools and data sources. MCP defines: A standardized way for AI models to request external actions through a consistent API Structured formats for how data should be passed to and from AI systems Protocols for how AI requests are processed, executed, and returned MCP allows different AI systems to use a common set of tools and patterns, ensuring consistent behavior when AI models interact with external systems. MCP architecture MCP follows a client-server architecture that allows AI models to interact with external tools efficiently. Here’s how it works: MCP Host – The AI model (e.g., Azure OpenAI GPT) requesting data or actions. MCP Client – An intermediary service that forwards the AI model's requests to MCP servers. MCP Server – Lightweight applications that expose specific capabilities (APIs, databases, files, etc.). Data Sources – Various backend systems, including local storage, cloud databases, and external APIs. MCP security controls Any system which has access to important resources has implied security challenges. Security challenges can generally be addressed through correct application of fundamental security controls and concepts. As MCP is only newly defined, the specification is changing very rapidly and as the protocol evolves. Eventually the security controls within it will mature, enabling a better integration with enterprise and established security architectures and best practices. Research published in the Microsoft Digital Defense Report states that 98% of reported breaches would be prevented by robust security hygiene and the best protection against any kind of breach is to get your baseline security hygiene, secure coding best practices and supply chain security right – those tried and tested practices that we already know about still make the most impact in reducing security risk. Let's look at some of the ways that you can start to address security risks when adopting MCP. MCP server authentication (if your MCP implementation was before 26th April 2025) Problem statement: The original MCP specification assumed that developers would write their own authentication server. This requires knowledge of OAuth and related security constraints. MCP servers acted as OAuth 2.0 Authorization Servers, managing the required user authentication directly rather than delegating it to an external service such as Microsoft Entra ID. As of 26 April 2025, an update to the MCP specification allows for MCP servers to delegate user authentication to an external service. Risks: Misconfigured authorization logic in the MCP server can lead to sensitive data exposure and incorrectly applied access controls. OAuth token theft on the local MCP server. If stolen, the token can then be used to impersonate the MCP server and access resources and data from the service that the OAuth token is for. Mitigating controls: Thoroughly review your MCP server authorization logic, here some posts discussing this in more detail - Azure API Management Your Auth Gateway For MCP Servers | Microsoft Community Hub and Using Microsoft Entra ID To Authenticate With MCP Servers Via Sessions · Den Delimarsky Implement best practices for token validation and lifetime Use secure token storage and encrypt tokens Excessive permissions for MCP servers Problem statement: MCP servers may have been granted excessive permissions to the service/resource they are accessing. For example, an MCP server that is part of an AI sales application connecting to an enterprise data store should have access scoped to the sales data and not allowed to access all the files in the store. Referencing back to the principle of least privilege (one of the oldest security principles), no resource should have permissions in excess of what is required for it to execute the tasks it was intended for. AI presents an increased challenge in this space because to enable it to be flexible, it can be challenging to define the exact permissions required. Risks: Granting excessive permissions can allow for exfiltration or amending data that the MCP server was not intended to be able to access. This could also be a privacy issue if the data is personally identifiable information (PII). Mitigating controls: Clearly define the permissions that the MCP server has to access the resource/service it connects to. These permissions should be the minimum required for the MCP server to access the tool or data it is connecting to. Indirect prompt injection attacks Problem statement: Researchers have shown that the Model Context Protocol (MCP) is vulnerable to a subset of Indirect Prompt Injection attacks known as Tool Poisoning Attacks. Tool poisoning is a scenario where an attacker embeds malicious instructions within the descriptions of MCP tools. These instructions are invisible to users but can be interpreted by the AI model and its underlying systems, leading to unintended actions that could ultimately lead to harmful outcomes. Risks: Unintended AI actions present a variety of security risks that include data exfiltration and privacy breaches. Mitigating controls: Implement AI prompt shields: in Azure AI Foundry, you can follow these steps to implement AI prompt shields. Implement robust supply chain security: you can read more about how Microsoft implements supply chain security internally here. Established security best practices that will uplift your MCP implementation’s security posture Any MCP implementation inherits the existing security posture of your organization's environment that it is built upon, so when considering the security of MCP as a component of your overall AI systems it is recommended that you look at uplifting your overall existing security posture. The following established security controls are especially pertinent: Secure coding best practices in your AI application - protect against the OWASP Top 10, the OWASP Top 10 for LLMs, use of secure vaults for secrets and tokens, implementing end-to-end secure communications between all application components, etc. Server hardening – use MFA where possible, keep patching up to date, integrate the server with a third party identity provider for access, etc. Keep devices, infrastructure and applications up to date with patches Security monitoring – implementing logging and monitoring of an AI application (including the MCP client/servers) and sending those logs to a central SIEM for detection of anomalous activities Zero trust architecture – isolating components via network and identity controls in a logical manner to minimize lateral movement if an AI application were compromised. Conclusion MCP is a promising development in the AI space that enables rich data and context access. As developers embrace this new approach to integrating their organization's APIs and connectors into LLMs, they need to be aware of security risks and how to implement controls to reduce those risks. There are mitigating security controls that can be put in place to reduce the risks inherent in the current specification, but as the protocol develops expect that some of the risks will reduce or disappear entirely. We encourage you to contribute to and suggest security related MCP RFCs to make this protocol even better! With thanks to OrinThomas, dasithwijes, dendeli and Peter Marcu for their inputs and collaboration on this post.4.8KViews8likes0CommentsBlog Series: Charting Your Path to Cyber Resiliency
"Cyber resilience is more than just a buzzword in the security industry; it is an essential approach to safeguarding digital assets in an era where cyber threats are not a matter of ‘if’ but ‘when’." - World Economic Forum, 2024 Cyber resiliency describes an organization’s ability to anticipate, withstand, respond and recover from adverse conditions caused by cyberattacks. Destructive cyberattacks such as ransomware can be highly impactful to business operations and profitability. With its emphasis on protecting our companies’ most critical business functions, cyber resiliency enhances the reputation of the Cybersecurity function - it can even help us achieve that most elusive goal of demonstrating our value to the business. In Part 1 and Part 2 of this series we examined the origins of cyber resiliency and Microsoft’s approach to helping our clients become more cyber resilient. As we learned in Part 1, Microsoft has identified 24 key issues that organizations should strategically target to enhance their cyber resilience. These key issues are grouped into the following categories: Low maturity security operations Insecure configuration of identity provider Insufficient privilege access and lateral movement controls No Multi-factor Authentication Lack of information protection control Limited adoption of modern security frameworks Let’s look at how Security Copilot can help, starting with the issue of Low maturity security operations. Security Operations Since its official release in April 2024, we’ve seen many Microsoft clients benefit from Security Copilot’s capabilities to address cyber resiliency issues in the category of Low maturity SOC Operations. For example, through its built-in integration with the Microsoft Defender XDR suite, Security Copilot features such as incident summaries, KQL Query Assistant and guided response can help with these components of the control: Skill gaps across security operations Limited use of endpoint detection and response Gaps in security monitoring and integration Even customers choosing not to use the full Defender XDR suite also benefit from Copilot’s abilities to help them reverse engineer malware and generate scripts. And organizations with limited or no SIEM/SOAR capabilities can also take advantage of Security Copilot’s easy integration with Microsoft Sentinel to accelerate SIEM/SOAR adoption. Security Copilot also assists with the issue of Ineffective SOC processes and operating model in 2 key ways: Reporting and Threat Intelligence. Reporting Security Copilot customers love the tool’s ability to quickly generate comprehensive incident reports geared to a variety of audiences, both technical and executive. Microsoft Defender for Threat Intelligence Integration Cyber Threat Intelligence (CTI) plays an important role in cyber resilience. NIST notes that an organization’s cyber resiliency decreases as the threat environment changes and new threat actors, techniques and vulnerabilities are introduced. Yet we often see customers not using threat intelligence effectively or worse, not using it at all. Within the M365 Portal, the embedded Security Copilot experience features incident summaries that are automatically enriched with threat intelligence from the full version of Microsoft Defender for Threat Intelligence. In both the embedded and standalone experiences, Security Copilot enables SOC analysts to use natural language to learn more about the threats and threat actors affecting their company and industry, get information about specific IOCs, and perform vulnerability impact assessments. Not sure how to start using threat intelligence? That’s OK, Security Copilot’s got you covered with suggested prompts like these in the standalone portal: Keep in mind, though, that Security Copilot is not just for SOC Operations– in fact, one of the key mistakes we’ve seen customers make in Security Copilot proof-of-concepts has been in failing to involve Security teams outside the SOC. Simply put, if your organization is just using Security Copilot in the SOC, you’re significantly limiting its impact on your overall cyber resilience. So let’s look next at what else it can do through integrations with identity management, data protection, and cloud platforms. Identity Management According to the Verizon Data Breach Investigations Report (DBIR), most breaches start with stolen credentials. This is reflected in Microsoft’s cyber resilience guidance where 3 of the key issue categories are identity-based. Security Copilot aids with identifying gaps in Entra configuration, both in the Entra Admin center and the Security Copilot standalone experience. Core capabilities include: Troubleshooting a user’s sign-in failures Providing user account details and authentication methods Exploring audit log events for a particular user, group, or application Enumerating Entra ID roles and group memberships In this case I’m troubleshooting a recent failed sign on attempt by a user. Security Copilot gives me the details of the sign-in and tells me in plain language the reason for the failure, along with the applicable conditional access policy, and the remediation steps to take: Security and Identity pros whose organizations already use Microsoft’s Workload Identities feature can also take advantage of Security Copilot’s abilities to investigate risky Entra ID applications. Security Copilot’s reach even extends to protection of Active Directory on-premises through its integration with Microsoft's Unified Security Operations Platform, which can include Defender for Identity alerts, as well as Windows Security Events collected by Microsoft Sentinel. Data Protection and Vulnerability Management The cyber resilience category Lack of Information Control covers a diverse set of components, including ineffective data loss prevention controls and lack of patch and vulnerability management. Security Copilot integrations support various teams across the organization in areas such as: Data Protection Security Copilot has a powerful integration with Microsoft Purview Data Security Posture Management (DSPM), a centralized data security management tool that includes signals from Microsoft Purview Information Protection, Data Loss Prevention, and Insider Risk Management. Just some of the many goals of this integration are: Helping Security teams conduct deeper investigations into data security incidents Enabling DLP admins to better identify gaps in DLP policy coverage Identifying devices involved in data exfiltration activities Assisting with insider risk management investigations Vulnerability Management As SANS notes, “The quantity of outstanding vulnerabilities for most large organizations is overwhelming, and all organizations struggle to keep up with the never-ending onslaught of new vulnerabilities in their infrastructure and applications.” Security Copilot works with Microsoft Defender External Attack Surface Management (Defender EASM) to help address this challenge. Defender EASM helps identify public-facing assets such as domains and hosts to map your organization’s external attack surface, discover unknown issues, and minimize risk. Security Copilot’s integration with EASM helps teams identify public-facing assets with high-priority CVEs and CVSS scores and find issues like expired domains, expired SSL certificates and SHA1 certificates. If you’re not currently using Defender EASM, it offers a free 30-day trial. (In fact, many customers have been so impressed with EASM and its Security Copilot integration during their trials, they’ve gone ahead and made it a permanent part of their cyber resilience strategy). Finally, note that both Purview DSPM and Defender EASM have multi-cloud capabilities. When used in combination with Security Copilot, they can greatly assist IT and Security teams with limited security experience in more than 1 cloud. Cloud Platforms Finally, in the cyber resilience category Limited adoption of modern security frameworks, Security Copilot helps address the issue of insecure design and configuration across cloud platforms via integrations with Azure Firewall and Azure WAF. Security Copilot features include identifying malicious traffic, searching for a given IDPS signature across all Azure Firewalls in the environment, and generating recommendations to improve the overall security of your deployments. Security Copilot can also help analyze Azure Web Application Firewall (WAF) logs to provide context for: Most frequently triggered rules Malicious IP addresses identified Blocked SQL injection (SQLi) and Cross-site Scripting (XSS) requests Microsoft Copilot for Security integration is available for both Azure WAF on both Azure Application Gateway and Azure WAF on Azure Front Door. Conclusion As we've seen throughout this series, Microsoft provides practical and tactical guidance to help our customers enhance their cyber resiliency to sophisticated and destructive cyberattacks that impact critical business operations. Security Copilot offers new capabilities to help build cyber resiliency in diverse and challenging areas such as: Vulnerability management Data security Multi-cloud management Security operations Identity protection In Building Secure, Resilient Architectures for Cyber Mission Assurance, MITRE emphasizes that “game-changing technologies, techniques, and strategies can make transformational improvements in the resilience of our critical systems.” It's clear that Security Copilot is already one of those game-changers and, with the recent announcement of Security Copilot agents, charting your path to cyber resilience just got a lot more exciting.290Views1like0CommentsUnveiling the Shadows: Extended Critical Asset Protection with MSEM
As cybersecurity evolves, identifying critical assets becomes an essential step in exposure management, as it allows for the prioritization of the most significant assets. This task is challenging because each type of critical asset requires different data to indicate its criticality. The challenge is even greater when a critical asset is not managed by a security agent such as EDR or AV, making the relevant data unreachable. Breaking traditional boundaries, Microsoft Security Exposure Management leverages multiple insights and signals to provide enhanced visibility into both managed and unmanaged critical assets. This approach allows customers to enhance visibility and facilitates more proactive defense strategies by maintaining an up-to-date, prioritized inventory of assets. Visibility is the Key Attackers often exploit unmanaged assets to compromise systems, pivot, or target sensitive data. The risk escalates if these devices are critical and have access to valuable information. Thus, organizations must ensure comprehensive visibility across their networks. This blog post will discuss methods Microsoft Security Exposure Management uses to improve visibility into both managed and unmanaged critical assets. Case Study: Domain Controllers A domain controller server is one of the most critical assets within an organization’s environment. It authenticates users, stores sensitive Active Directory data like user password hashes, and enforces security policies. Threat actors frequently target domain controller servers because once they are compromised, they gain high privileges, which allow full control over the network. This can result in a massive impact, such as organization-wide encryption. Therefore, having the right visibility into both managed and unmanaged domain controllers is crucial to protect the organization's network. Microsoft Security Exposure Management creates such visibility by collecting and analyzing signals and events from Microsoft Defender for Endpoint (MDE) onboarded devices. This approach extends, enriches, and improves the customer’s device inventory, ensuring comprehensive insight into both managed and unmanaged domain controller assets. Domain Controller Discovery Methods Microsoft Browser Protocol The Microsoft Browser protocol, a component of the SMB protocol, facilitates the discovery and connection of network resources within a Windows environment. Once a Windows server is promoted to a domain controller, the operating system automatically broadcasts Microsoft Browser packets to the local network, indicating that the originating server is a domain controller. These packets hold meaningful information such as the device’s name, operating system-related information, and more. 1: An MSBrowser packet originating from a domain controller. Microsoft Security Exposure Management leverages Microsoft Defender for Endpoint’s deep packet inspection capabilities to parse and extract valuable data such as the domain controller’s NetBios name, operating system version and more from the Microsoft Browser protocol. Group Policy Events Group Policy (GPO) is a key component in every Active Directory environment. GPO allows administrators to manage and configure operating systems, applications, and user settings in an Active Directory domain-joined environment. Depending on the configuration, every domain-joined device locates the relevant domain controller within the same Active Directory site and pulls the relevant group policies that should be applied. During this process, the client's operating system audits valuable information within the Windows event log Once the relevant event has been observed on an MDE onboarded device, valuable information such as the domain controller’s FQDN and IP address is extracted from it. LDAP Protocol A domain controller stores the Active Directory configuration in a central database that is replicated between the domain controllers within the same domain. This database holds user data, user groups, security policies, and more. To query and update information in this database, a dedicated network protocol, LDAP (Lightweight Directory Access Protocol), is used. For example, to retrieve a user’s display name or determine their group membership, an LDAP query is directed to the domain controller for the relevant information. This same database also holds details about other domain controllers, configured domain trusts, and additional domain-related metadata. 3: Domain controller computer account in Active directory Users and Computers management console. Once a domain controller is onboarded to Microsoft Defender for Endpoint, the LDAP protocol is used to identify all other domain controllers within the same domain, along with their operating system information, FQDN, and more. Identifying what is critical After gaining visibility through various protocols, it's crucial to identify which domain controllers are production and contain sensitive data, distinguishing them from test assets in a testing environment. Microsoft Security Exposure Management uses several techniques, including tracking the number of devices, users, and logins, to accurately identify production domain controllers. Domain controllers and other important assets not identified as production assets are not automatically classified as critical assets by the system. However, they remain visible under the relevant classification, allowing customers to manually override the system’s decision and classify them as critical. Building the Full Picture In addition to classifying assets as domain controllers, Microsoft Security Exposure Management provides customers with additional visibility by automatically classifying other critical devices and identities such as Exchange servers, VMware vCenter, backup servers, and more. 4: Microsoft Defender XDR Critical Asset Management settings page. Identifying critical assets and distinguishing them from other assets empowers analysts and administrators with additional information to prioritize tasks related to these assets. The context of asset criticality is integrated within various Microsoft Defender XDR experiences, including the device page, incidents, and more. This empowers customers to streamline SOC operations, swiftly prioritize and address threats to critical assets, implement targeted security recommendations, and disrupt ongoing attacks. For those looking to learn more about critical assets and exposure management, here are some additional resources you can explore. Overview of critical asset protection - Overview of critical asset management in Microsoft Security Exposure Management - Microsoft Security Exposure Management | Microsoft Learn Learn about predefined classifications - Criticality Levels for Classifications - Microsoft Security Exposure Management | Microsoft Learn Overview of critical assets protection blog post - Critical Asset Protection with Microsoft Security Exposure Management | Microsoft Community Hub693Views0likes0CommentsBlog Series: Charting Your Path to Cyber Resiliency
Part 1: What Is Cyber Resiliency and How Do I Get It? Recently I was on a call with some Security leaders who were interested in how we at Microsoft could help them with cyber resiliency. But when I asked the questions "What does cyber resiliency mean to you?” and “What specific aspects of cyber resilience are you interested in improving?", they struggled to answer. If you're having difficulty with those questions yourself, don't worry, you're not alone. Cyber resiliency – being able to successfully continue business operations in the face of destructive cyberattacks - is having a Moment these days. It's The New Zero Trust, you might say. But what is cyber resilience really beyond an industry buzzword or a sales play? What does an organization need to do to become cyber resilient? To understand more, let's start with a look at the history of cyber resiliency and how it has evolved over the last 15 years. MITRE (best known for their ATT&CK frameworks) was an early leader in the cyber resilience movement. MITRE's 2010 publication Building Secure, Resilient Architectures for Cyber Mission Assurance, explained the need for cyber resiliency by emphasizing the operational impact of cyberattacks and the financial cost of recovery, also noting that “the cyber adversary continues to have an asymmetric advantage as we fruitlessly play Whac-A-Mole in response to individual attacks.” (Sound familiar?) One year later, MITRE released the first publication of their Cyber Resiliency Engineering Framework (CREF). In subsequent years, MITRE followed up with revisions to CREF, along with additional papers on methods and metrics for effectively measuring cyber resiliency. They also developed the CREF Navigator, an online tool to help define and graphically represent cyber resiliency goals, objectives and techniques as defined by NIST (National Institute of Standards and Technology). NIST's 2021 publication SP 800-160 Volume 2 (Rev 1): Developing Cyber-Resilient Systems is a comprehensive cyber resiliency framework that builds on CREF. It also gives us the most used definition of cyber resiliency which is: "the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises that use or are enabled by cyber resources." Like MITRE's early work, this publication is rooted in systems and software engineering principles and how engineers in national defense and critical infrastructure need to build resiliency into mission-critical systems. However, today we commonly apply this definition and this understanding of cyber resiliency to any organization concerned with minimizing the impact of cyberattacks on their business-critical systems. The extension of cyber resiliency principles beyond government and critical infrastructure is also evident in The EU's Cybersecurity Strategy for the Digital Decade presented in December 2020. Although this strategy was chiefly concerned with "EU institutions bodies and agencies," it also emphasized the increasing dependency of both public and private sectors on digital systems and cybersecurity, noting that financial services, digital services, and manufacturing were among the hardest hit by cybercrime. Microsoft echoed this idea in our 2022 Digital Defense Report which featured a special section on cyber resiliency, calling it “A crucial foundation of a connected society.” The report emphasized 3 key cyber resiliency themes: the critical link between cyber resiliency and business risk the importance of adapting security practices and technologies to keep up with a continuously evolving threat landscape the challenges of attaining cyber resiliency when using legacy technologies Microsoft also maintains a list of 24 key issues impacting cyber resiliency, spanning everything from legacy on-premises resources to cloud technologies and frameworks. We’ll come back to this guidance in Part 2 of our series. Conclusion Cyber resiliency is more than the latest industry buzzword. In the first part of this series, we looked at the origins of the cyber resiliency movement with a focus on 2 common cyber resiliency frameworks developed by MITRE and NIST. We also looked briefly at Microsoft’s approach and some resources we offer customers wanting to improve the resilience of critical business operations in the face of destructive cyberattacks. In the 2nd part of this series, we'll take a closer look at Microsoft's approach to cyber resiliency, from its origins in the days of Trustworthy Compute to present-day guidance on designing security solutions to mitigate the effects of ransomware. Finally, in Part 3 of the series we’ll examine how we can use AI to help with some of the most challenging components of cyber resiliency.670Views3likes2CommentsIntroducing the Secure Future Initiative Tech Tips show!
Introducing the Secure Future Initiative: Tech Tips show! This show provides bite-sized technical tips from Microsoft security experts about how you can implement recommendations from one of the six engineering pillars of Microsoft's Secure Future Initiative in your own environment to uplift your security posture. Hosted by Sarah Young, Principal Security Advocate (_@sarahyo) and Michael Howard, Senior Director Microsoft Red Team (@michael_howard), the series interviews a range of Microsoft security experts giving you practical advice about how to implement SFI controls in your organization's environment. The first episode about phishing resistant creds is live on YouTube and MS Learn. Upcoming episodes include: Using managed identities Using secure vaults to store secrets Applying ingress and egress control Scanning for creds in code and push protection Enabling audit logs for cloud and develop threat detections Keep up to date with the latest Secure Future Initiative news at aka.ms/sfi284Views1like0CommentsMicrosoft Security in Action: Zero Trust Deployment Essentials for Digital Security
The Zero Trust framework is widely regarded as a key security model and a commonly referenced standard in modern cybersecurity. Unlike legacy perimeter-based models, Zero Trust assumes that adversaries will sometimes get access to some assets in the organization, and you must build your security strategy, architecture, processes, and skills accordingly. Implementing this framework requires a deliberate approach to deployment, configuration, and integration of tools. What is Zero Trust? At its core, Zero Trust operates on three guiding principles: Assume Breach (Assume Compromise): Assume attackers can and will successfully attack anything (identity, network, device, app, infrastructure, etc.) and plan accordingly. Verify Explicitly: Protect assets against attacker control by explicitly validating that all trust and security decisions use all relevant available information and telemetry. Use Least Privileged Access: Limit access of a potentially compromised asset, typically with just-in-time and just-enough-access (JIT/JEA) and risk-based policies like adaptive access control. Implementing a Zero Trust architecture is essential for organizations to enhance security and mitigate risks. Microsoft's Zero Trust framework essentially focuses on six key technological pillars: Identity, Endpoints, Data, Applications, Infrastructure, & Networks. This blog provides a structured approach to deploying each pillar. 1. Identity: Secure Access Starts Here Ensure secure and authenticated access to resources by verifying and enforcing policies on all user and service identities. Here are some key deployment steps to get started: Implement Strong Authentication: Enforce Multi-Factor Authentication (MFA) for all users to add an extra layer of security. Adopt phishing-resistant methods, such as password less authentication with biometrics or hardware tokens, to reduce reliance on traditional passwords. Leverage Conditional Access Policies: Define policies that grant or deny access based on real-time risk assessments, user roles, and compliance requirements. Restrict access from non-compliant or unmanaged devices to protect sensitive resources. Monitor and Protect Identities: Use tools like Microsoft Entra ID Protection to detect and respond to identity-based threats. Regularly review and audit user access rights to ensure adherence to the principle of least privilege. Integrate threat signals from diverse security solutions to enhance detection and response capabilities. 2. Endpoints: Protect the Frontlines Endpoints are frequent attack targets. A robust endpoint strategy ensures secure, compliant devices across your ecosystem. Here are some key deployment steps to get started: Implement Device Enrollment: Deploy Microsoft Intune for comprehensive device management, including policy enforcement and compliance monitoring. Enable self-service registration for BYOD to maintain visibility. Enforce Device Compliance Policies: Set and enforce policies requiring devices to meet security standards, such as up-to-date antivirus software and OS patches. Block access from devices that do not comply with established security policies. Utilize and Integrate Endpoint Detection and Response (EDR): Deploy Microsoft Defender for Endpoint to detect, investigate, and respond to advanced threats on endpoints and integrate with Conditional Access. Enable automated remediation to quickly address identified issues. Apply Data Loss Prevention (DLP): Leverage DLP policies alongside Insider Risk Management (IRM) to restrict sensitive data movement, such as copying corporate data to external drives, and address potential insider threats with adaptive protection. 3. Data: Classify, Protect, and Govern Data security spans classification, access control, and lifecycle management. Here are some key deployment steps to get started: Classify and Label Data: Use Microsoft Purview Information Protection to discover and classify sensitive information based on predefined or custom policies. Apply sensitivity labels to data to dictate handling and protection requirements. Implement Data Loss Prevention (DLP): Configure DLP policies to prevent unauthorized sharing or transfer of sensitive data. Monitor and control data movement across endpoints, applications, and cloud services. Encrypt Data at Rest and in Transit: Ensure sensitive data is encrypted both when stored and during transmission. Use Microsoft Purview Information Protection for data security. 4. Applications: Manage and Secure Application Access Securing access to applications ensures that only authenticated and authorized users interact with enterprise resources. Here are some key deployment steps to get started: Implement Application Access Controls: Use Microsoft Entra ID to manage and secure access to applications, enforcing Conditional Access policies. Integrate SaaS and on-premises applications with Microsoft Entra ID for seamless authentication. Monitor Application Usage: Deploy Microsoft Defender for Cloud Apps to gain visibility into application usage and detect risky behaviors. Set up alerts for anomalous activities, such as unusual download patterns or access from unfamiliar locations. Ensure Application Compliance: Regularly assess applications for compliance with security policies and regulatory requirements. Implement measures such as Single Sign-On (SSO) and MFA for application access. 5. Infrastructure: Securing the Foundation It’s vital to protect the assets you have today providing business critical services your organization is creating each day. Cloud and on-premises infrastructure hosts crucial assets that are frequently targeted by attackers. Here are some key deployment steps to get started: Implement Security Baselines: Apply secure configurations to VMs, containers, and Azure services using Microsoft Defender for Cloud. Monitor and Protect Infrastructure: Deploy Microsoft Defender for Cloud to monitor infrastructure for vulnerabilities and threats. Segment workloads using Network Security Groups (NSGs). Enforce Least Privilege Access: Implement Just-In-Time (JIT) access and Privileged Identity Management (PIM). Just-in-time (JIT) mechanisms grant privileges on-demand when required. This technique helps by reducing the time exposure of privileges that are required for people, but are only rarely used. Regularly review access rights to align with current roles and responsibilities. 6. Networks: Safeguard Communication and Limit Lateral Movement Network segmentation and monitoring are critical to Zero Trust implementation. Here are some key deployment steps to get started: Implement Network Segmentation: Use Virtual Networks (VNets) and Network Security Groups (NSGs) to segment and control traffic flow. Secure Remote Access: Deploy Azure Virtual Network Gateway and Azure Bastion for secure remote access. Require device and user health verification for VPN access. Monitor Network Traffic: Use Microsoft Defender for Endpoint to analyze traffic and detect anomalies. Taking the First Step Toward Zero Trust Zero Trust isn’t just a security model—it’s a cultural shift. By implementing the six pillars comprehensively, organizations can potentially enhance their security posture while enabling seamless, secure access for users. Implementing Zero Trust can be complex and may require additional deployment approaches beyond those outlined here. Cybersecurity needs vary widely across organizations and deployment isn’t one-size-fits all, so these steps might not fully address your organization’s specific requirements. However, this guide is intended to provide a helpful starting point or checklist for planning your Zero Trust deployment. For a more detailed walkthrough and additional resources, visit Microsoft Zero Trust Implementation Guidance. The Microsoft Security in Action blog series is an evolving collection of posts that explores practical deployment strategies, real-world implementations, and best practices to help organizations secure their digital estate with Microsoft Security solutions. Stay tuned for our next blog on deploying and maximizing your investments in Microsoft Threat Protection solutions.2.3KViews1like0Comments