Microsoft Fabric
DAX Demystified: 5 Key Lessons Every Beginner Needs to Learn Early
🔍 Upcoming Session – Saturday, June 21 at 7:00 AM PT
🎙️ Speaker: Markus Ehrenmueller-Jensen
📌 Topic: DAX Demystified: 5 Key Lessons Every Beginner Needs to Learn Early
If you've ever written a DAX measure and thought, “Why doesn’t this work?” — you’re not alone. In this session, Markus will walk you through five key lessons that make the difference between confusion and confidence when working with DAX. Learn the practical insights he wishes he knew when he started—like understanding calculated columns vs. measures, and how row and filter context really work.
📈 Whether you're just beginning with DAX or looking to solidify your fundamentals, this session is packed with real-world examples and “aha!” moments that will help DAX finally make sense.
👥 Ideal for: Power BI users, data analysts, and Excel pros getting started with DAX.
🔗 Register here: https://www.meetup.com/microsoft-fabric-cafe/events/308524621/?utm_medium=referral&utm_campaign=share-btn_savedevents_share_modal&utm_source=link&utm_version=v2
#MicrosoftFabric #PowerBI #DAX #FabricCafe #MicrosoftLearn #DataAnalytics

Orchestrate Data Ingestion using Apache Airflow in Microsoft Fabric
🚀 Upcoming #FabricCoffee session 🚀
📅 Date: Friday, June 13th
🕕 Time: 6:00 PM PT | Saturday, June 14th at 11:00 AM AEST
📌 Topic: Orchestrate Data Ingestion using Apache Airflow
Supercharge your data pipelines by combining the power of #Apache #Airflow with #MicrosoftFabric! In this dynamic session, discover how to seamlessly orchestrate data ingestion from multiple sources into Lakehouses and Warehouses with full automation and scalability.
🔹 Trigger Fabric Dataflows, Pipelines, and Notebooks with Airflow
🔹 Automate and monitor data ingestion in real time
🔹 Optimize dependencies and error handling for seamless workflows
Whether you're modernizing your ETL processes or implementing a Medallion Architecture, this session equips you with practical strategies to streamline and scale your data operations effortlessly.
Register here: https://www.meetup.com/microsoft-fabric-cafe/events/308348139/?utm_medium=referral&utm_campaign=share-btn_savedevents_share_modal&utm_source=link&utm_version=v2
👉 Don’t miss this opportunity to level up your data engineering game with Apache Airflow + Microsoft Fabric!
#MicrosoftFabric #FabricCafe #MicrosoftLearn #ApacheAirflow #DataEngineering

What's new in SQL Server 2025
Add deep AI integration with built-in vector search and DiskANN optimizations, plus native support for large object JSON and new Change Event Streaming for live data updates. Join and analyze data faster with the Lakehouse shortcuts in Microsoft Fabric that unify multiple databases — across different SQL Server versions, clouds, and on-prem — into a single, logical schema without moving data. Build intelligent apps, automate workflows, and unlock rich insights with Copilot and the unified Microsoft data platform, including seamless Microsoft Fabric integration, all while leveraging your existing SQL skills and infrastructure. Bob Ward, lead SQL engineer, joins Jeremy Chapman to share how the latest SQL Server 2025 innovations simplify building complex, high-performance workloads with less effort.
Run natural language semantic search directly in SQL Server 2025. Vector search and DiskANN work efficiently on modest hardware — no GPU needed. Get started.
Run NoSQL in SQL. Store and manage large JSON documents directly in SQL Server 2025. Insert, update, and query JSON data with native tools. Check it out.
Avoid delays. Reduce database locking without code changes to keep your apps running smoothly. See the new Optimized Locking in SQL Server 2025.
QUICK LINKS:
00:00 — Updates to SQL Server 2025
00:58 — Search and AI
03:55 — Native JSON Support
06:41 — Real-Time Change Event Streaming
08:40 — Optimized Locking for Better Concurrency
10:33 — Join SQL Server data with Fabric
13:53 — Wrap up
Link References: Start using SQL Server 2025 at https://aka.ms/GetSQLServer2025
Video Transcript:
- Today we’ll look at the AI integration, developer updates, and performance improvements that make SQL Server 2025 a major upgrade. We’ve got a lot to unpack here, so we’re going to waste no time and get straight into this with lead SQL engineer, Bob Ward. Welcome back to the show.
- So great to be back.
- So SQL Server 2025, it’s brand new. It’s in public preview right now. So what’s behind the release and what’s new?
- There are three major areas of updates that we focus on in this release. First, we have deep AI integration. For example, we now have built-in vector search support for more accurate and efficient data retrieval with some under-the-hood optimizations using DiskANN. Second, if you’re a developer, this is the most significant release of SQL in the last decade.
You know, some of the highlights are native support for the JSON data type and new change event streaming capabilities for real-time updates. And the third area is improved analytics, where we’re going to make it easy to mirror your SQL Servers into Microsoft Fabric without moving the data.
- And all of these are very significant updates. So why don’t we start with what’s new in search and AI?
- Great, let’s get going. As I’ve mentioned, we’ve integrated AI directly into the database engine to give you smarter, intelligent searching. With vector search capabilities built-in, you can do semantic search over your data to find matches based on similarity versus keywords. For example, here I have a database with a table called ProductDescription, and I want to search using SQL queries against the Description column for intelligent search. Typically, you’d use full text search for this. Now I’ve built this out, but what about these natural language phrases? Will they work? They don’t. And even when I use like clauses, as you can see here, or contains, or even freetext, none of these methods returns what I’m looking for. Instead, this is where natural language with vector search in SQL Server 2025 shines. As a developer, I can get started even locally on my laptop, no GPU required. I’m using the popular framework, Ollama, to host a free open-source embeddings model from Hugging Face. This will convert our data into vectors, including query prompts, and I declare it using this CREATE EXTERNAL MODEL statement. Then I’m able to go in and build a table using the new built-in vector type to store what’s called embeddings in a binary format. My table has keys pointing back to my description data, and then I can use a built-in T-SQL function to generate embeddings based on Ollama and store them. For vector search to work, I need to create a vector index, and it’s also performance optimized using Disk approximate nearest neighbor, or DiskANN, which is a new way to offload what you’d normally want to run completely in memory to point to an index stored on disk. I have a stored procedure to convert the query prompts into embeddings so it can be used to find matching embeddings in the vector index. So now I have everything running locally on my laptop running SQL. Let’s see how it works. I’ll try this natural language prompt, like I showed earlier. And it worked. I get a rich set of results, with matching information based on my search to find products in the database. And I can even use Copilot from here to explore more about SQL data. I’ll prompt it to look for my new table. And you can see the response here, finding our new table. And I can ask it to pull up a few embedding values with product names and descriptions. And as you saw, the result using our open-source embeddings returned a few languages back. And the good news is that if your data contains multiple languages, it’s easy to use different embedding models. For example, here I’ve wired up Azure OpenAI’s ADA 2 embeddings model optimized for multiple languages without even changing my code. And now I can even search using Mandarin Chinese and get back matching results.
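To make the walkthrough above concrete, here is a rough T-SQL sketch of the flow Bob describes: declare an external embeddings model, store embeddings in the new vector type, index them with DiskANN, and search with a natural-language prompt. The table, column, and model names are illustrative, and the exact preview syntax for CREATE EXTERNAL MODEL, AI_GENERATE_EMBEDDINGS, CREATE VECTOR INDEX, and VECTOR_DISTANCE may differ by build, so treat this as an outline and confirm the signatures in the SQL Server 2025 preview documentation.

```sql
-- Illustrative sketch only: names and exact preview syntax are assumptions based on the demo.

-- 1. Register a locally hosted Ollama embeddings model (endpoint and model name are hypothetical).
CREATE EXTERNAL MODEL ollama_embeddings
WITH (
    LOCATION = 'https://localhost:11434/api/embed',
    API_FORMAT = 'Ollama',
    MODEL_TYPE = EMBEDDINGS,
    MODEL = 'all-minilm'
);

-- 2. Store embeddings in the new vector type, keyed back to the description rows.
CREATE TABLE dbo.ProductDescriptionEmbeddings
(
    ProductDescriptionID int NOT NULL PRIMARY KEY,
    Embedding vector(384) NOT NULL   -- dimension depends on the embeddings model you pick
);

INSERT INTO dbo.ProductDescriptionEmbeddings (ProductDescriptionID, Embedding)
SELECT ProductDescriptionID,
       AI_GENERATE_EMBEDDINGS(Description USE MODEL ollama_embeddings)
FROM SalesLT.ProductDescription;

-- 3. Create a DiskANN-backed vector index so similarity search doesn't have to run fully in memory.
CREATE VECTOR INDEX vix_ProductDescriptionEmbeddings
ON dbo.ProductDescriptionEmbeddings (Embedding)
WITH (METRIC = 'cosine', TYPE = 'DiskANN');

-- 4. Convert a natural-language prompt into an embedding and return the closest matches.
--    (Shown here with exact distance ordering; the demo's stored procedure wraps the same idea.)
DECLARE @q vector(384) =
    AI_GENERATE_EMBEDDINGS(N'a comfortable bike for long weekend rides' USE MODEL ollama_embeddings);

SELECT TOP (10)
       d.ProductDescriptionID,
       d.Description,
       VECTOR_DISTANCE('cosine', e.Embedding, @q) AS distance
FROM dbo.ProductDescriptionEmbeddings AS e
JOIN SalesLT.ProductDescription AS d
  ON d.ProductDescriptionID = e.ProductDescriptionID
ORDER BY distance;
```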
- And DiskANN and vector-based search are both massive updates that really go hand in hand to enable better natural language querying on modest hardware. So what about all the developer updates?
- With these updates, things get so much more efficient for developers. With the JSON data type, you can bring NoSQL into your SQL relational database. Let me show you how. I’ve created a database called Orders and a table called Orders. Notice here the new JSON data type, which can store up to a massive two gigabytes of JSON document in this native data type. Now let’s look at a couple of examples. First, I can easily insert JSON documents in their native format directly into the table, and I’ll show you some of the JSON functions that you can use to process this new JSON type. JSON value will pull a particular value out of a JSON document and bring it back in result set format. And I can just dump out all the JSON values, so each document will appear as a separate row in their native JSON format. But instead of just doing that, I have aggregate functions. This takes all the rows of JSON types in the table and produces a single array with a single JSON document with all the new rows in the native JSON type. Key-value pairs are also popular in JSON, and I can use the new OBJECT AGGREGATE function to take the order ID key and the JSON document and produce a set of key-value pairs. And I can modify the JSON type directly from here too. Notice, for order_id 1, the quantity is also 1. I’ll run this update to modify the value. And when it’s finished, the order_id 1 quantity has been updated with the value of 2 directly in the JSON. Now that’s a good example of using the JSON type. So let me show you how this works with a JSON index. I’ve got a different database for contacts, along with the table for contacts using a JSON document as one of the properties of the contacts table. I can create a JSON index on top of that JSON document, like this. Now I’ve got some sample data that are JSON documents. And in a second, I’m going to push those into our database. And as I scroll, you’ll see that this has nested tags as properties in the JSON document. Now I’ll run the query so I can insert these rows with the names of each tag. Let’s go look at the output. I’m using JSON value for the name, but I’m using JSON query because the tags are nested. Now I’ll show you an example searching with the JSON index. I’m using the new JSON contains function to find tags called fitness that are deep nested in the JSON document. And I can run that and find the right tags and even the execution plan. You can see here that it shows we’re using the new JSON index to help go find that information.
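The demo maps to roughly the following T-SQL. The table names and sample documents are made up for illustration; the native json type, JSON_VALUE, JSON_ARRAYAGG, JSON_OBJECTAGG, and JSON_MODIFY follow documented behavior, while the JSON index and JSON_CONTAINS signatures shown here are assumptions based on the preview described above.

```sql
-- Illustrative sketch: table names and documents are hypothetical.
CREATE TABLE dbo.Orders
(
    order_id  int IDENTITY PRIMARY KEY,
    order_doc json NOT NULL            -- native JSON type, up to 2 GB per document
);

INSERT INTO dbo.Orders (order_doc)
VALUES (N'{"customer":"Contoso",  "quantity":1, "items":[{"sku":"BK-1001","tags":["fitness","outdoor"]}]}'),
       (N'{"customer":"Fabrikam", "quantity":3, "items":[{"sku":"HL-2040","tags":["commuter"]}]}');

-- Pull a single value out of each document.
SELECT order_id, JSON_VALUE(order_doc, '$.customer') AS customer
FROM dbo.Orders;

-- Aggregate every row into one JSON array, or into key-value pairs keyed by order_id.
SELECT JSON_ARRAYAGG(order_doc)             FROM dbo.Orders;
SELECT JSON_OBJECTAGG(order_id : order_doc) FROM dbo.Orders;

-- Modify a value in place: bump the quantity on order 1 from 1 to 2.
UPDATE dbo.Orders
SET order_doc = JSON_MODIFY(order_doc, '$.quantity', 2)
WHERE order_id = 1;

-- Index the documents and search nested properties
-- (index and JSON_CONTAINS syntax are assumptions; check the preview docs).
CREATE JSON INDEX jix_orders ON dbo.Orders (order_doc) FOR ('$');

SELECT order_id
FROM dbo.Orders
WHERE JSON_CONTAINS(order_doc, 'fitness', '$.items[0].tags') = 1;
```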
- That’s a big deal. And like you said, there’s a lot happening natively in JSON, and now you’ve got the benefits of SQL for joins, and security, and a lot more.
- You know, and for developers who use change data capture, things become a lot easier with change event streaming. Here, we’re reducing I/O overhead and sending transaction log changes directly to your application. To get started with change event streaming for our orders database, I’ll run the stored procedure to enable streaming for the database. You can see the table we’re going to use to track changes is a typical type of orders table. Here I’ve created what’s called an event stream group. This is where I’ve configured event streaming to tell it the location of our Azure event hub to stream our data, and I’ve added my credentials. Then I’ve configured the table orders to be part of the event streaming group. I’ve run these procedures to make sure that my configuration is correct. So let’s do something interesting. I’m going to automate a workflow using agents to listen for changes as they come in and try to resolve any issues. First, I’ve created an Azure function app, and using my function app, I have an agent running in the Azure AI service called ContosoShippingAgent. It’s built to take shipment information, analyze it, and decide whether something can be done to help. For example, resolving a shipping delay. I’ve started my Azure function. This function is waiting for events to be sent to Azure Event Hub in order to process them. Now, in SQL, I’ll insert a new order. Going back over to my Azure function, you’ll see how the event is processed. In the code, first, we’re dumping out the raw cloud event that I showed earlier. Notice the operation is an insert. It’s going to dump out some of the different fields we’ve parsed out of the data, the column names, the metadata, and then the row itself. Notice that because the shipment is 75 days greater than our sales date, it will call our agent. The agent then comes back with a response. It looked at the tracking details and determined that it can change the shipping provider to expedite our delayed shipment, and it contacted the customer directly with the updated shipping info.
- And everybody likes faster shipping. So speaking of things that are getting faster, it’s kind of a tradition on Mechanics that we cover the speed-ups for SQL Server. So what are the speed-ups and the performance optimizations for ‘25?
- Well, there’s a lot, but my favorite one improves application concurrency. We’ve improved the internals of how locking works without application code changes. And I’ve got an example of this running. I have a lock escalation problem that I need to resolve. I’m going to go update about 2,500 rows in this table just to show what happens, then how we’ve solved for it. So running this query against that Dynamic Management View, or DMV, shows locks that have accumulated, about 2,500 locks here for key-value locks and 111 for page locks. So what happens if I run enough updates against the table that would cause a lock escalation? Here, I’ll update 10,000 rows in the system. But you can see with the locks that this has been escalated to an object lock. It’s not updating the entire table, but it’s going to cause a problem. Because I’ve got a query over here that can update the maximum value in just one row and it’s going to get blocked, but it shouldn’t have to be. You can see here from the blocking query that’s running that it’s blocked on that original session, and I’m not actually updating a row that’s affected by the first one. This is the problem with lock escalation. Now let’s look at a new option called optimized locking in SQL Server 2025. Okay, let’s go back to where I updated 10,000 rows and look at the lock. Notice how in this particular case I have a transaction lock. It’s an intent exclusive lock for the table, but only a transaction lock for that update. If I use this query to update the max, you’ll see that we are not blocked. And by looking at the locks, each item has specific transaction locks, so we’re not blocking each other. And related to this, we’ve also solved another problem where two unrelated updates can get blocked. We call this lock after qualification.
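For reference, the kind of check Bob runs against the lock DMV looks roughly like the query below. sys.dm_tran_locks is the standard DMV for inspecting held locks; the database name and the exact database options for enabling optimized locking are assumptions to verify against the SQL Server 2025 documentation (optimized locking builds on accelerated database recovery).

```sql
-- Illustrative sketch: database name is hypothetical.

-- Optimized locking is a database-level option layered on accelerated database recovery;
-- the option names below are assumptions, so confirm them in the 2025 docs.
ALTER DATABASE SalesDemo SET ACCELERATED_DATABASE_RECOVERY = ON;
ALTER DATABASE SalesDemo SET OPTIMIZED_LOCKING = ON;

-- Count the locks each session is holding, similar to the DMV query in the demo.
SELECT request_session_id,
       resource_type,
       request_mode,
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('SalesDemo')
GROUP BY request_session_id, resource_type, request_mode
ORDER BY lock_count DESC;
```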
- Okay, so it’s pinpointing the exact lock type, so you’ll get fewer locks in the end. So why don’t we move on, though, from locks to joins?
- Sure. With Microsoft Fabric, it’s amazing. You can pull in multiple databases, multiple data types into a unified data platform. Imagine you have two different SQL Servers in different clouds and on-prem, and you just want to join this data together in an easy way without migrating it. With Fabric, you can. I have a SQL Server 2022 instance with a database, and we’ve already mirrored the product tables from that database into Fabric. I’ll show you the mirroring configuration process for a SQL Server 2025 instance with different, but related tables. These are similar to the steps for mirroring any SQL Server. I’ve created a database connection for SQL Server 2025. Now I’ll pick all the tables in our database and connect. I’ll leave the name as is, AdventureWorks, and we’re ready to mirror our database. You can see now that the replication process has started for all the tables. All the rows have been replicated for all the columns on all the tables in my database and they’ve been mirrored into Fabric. Now let’s query the data using the SQL analytics endpoint. And you can see that the tables that we previously had in our database and SQL Server are now mirrored into OneLake. Let’s run a query and I’ll use Copilot to do that. Here’s the Copilot code with explanations. Now I’ll run it. And as it completes, there are our top customers by sales. Now what if we wanted to do a join across the other SQL Server? It’s possible. But normally, there are a lot of manual pieces to do this. Fabric can make that easier using a lakehouse. So let’s create a new lakehouse. I just need to give it a name, AdventureWorks, and confirm. Now notice there are no tables in this lakehouse yet, so let’s add some. And for that, I’ll use a shortcut. A shortcut uses items in OneLake, like the SQL Server databases we just mirrored. So I’ll add the AdventureWorks database. And scrolling down, I’ll pick all the tables I want. Now I’ll create it. And we’re not storing the data separately in the lakehouse. It’s just a shortcut, like an active read link to the source data, which is our mirrored database, and therefore something that already exists in OneLake. And now you can see I’ve got these objects here. This icon means that these are shortcuts from another table. So now, let’s get data from another warehouse, the SQL Server 2022 instance, which was ADW_products. Again, here, I’ll pick the tables that I want and Create. That’s it. So I can go and look at product to make sure I’ve got my product data. Now, let’s try to query this as one database and use another analytics endpoint directly against the lakehouse itself. So basically it thinks all the tables are just part of the unified schema now. Let’s open up Copilot and write a prompt to pull my top customers by products and sales. And it will be able to work directly against all of these connected databases because they are in just the same schema. And there you go. I have a list of all the data I need in one logical database.
- And this is really great. And I know now that everything’s in OneLake, there’s also a lot more that you can do with that data.
- With the lakehouse, the sky’s the limit. You can use Power BI, or any of those services that are in the unified data platform, Microsoft Fabric.
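If you prefer to write the query yourself rather than prompt Copilot, the cross-source join works like any other T-SQL join once the mirrored and shortcut tables sit in the lakehouse. The sketch below assumes AdventureWorks-style table and column names, which are illustrative only; you would run it against the lakehouse SQL analytics endpoint.

```sql
-- Illustrative only: table and column names are assumptions based on the AdventureWorks-style demo.
-- Customer/SalesOrder* tables are mirrored from SQL Server 2025; Product comes in via a shortcut
-- from the mirrored SQL Server 2022 database. The join itself is plain T-SQL.
SELECT TOP (10)
       c.CompanyName,
       p.Name             AS ProductName,
       SUM(sod.LineTotal) AS TotalSales
FROM SalesOrderDetail AS sod
JOIN SalesOrderHeader AS soh ON soh.SalesOrderID = sod.SalesOrderID
JOIN Customer         AS c   ON c.CustomerID     = soh.CustomerID
JOIN Product          AS p   ON p.ProductID      = sod.ProductID
GROUP BY c.CompanyName, p.Name
ORDER BY TotalSales DESC;
```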
- Okay, so now we’ve seen all the updates with SQL Server 2025. To everyone watching, what’s the best thing they can do to get started?
- Well, the first thing is to start using it. SQL Server 2025 is ready for you to download and install on the platform of your choice. You’ll find it at aka.ms/GetSQLServer2025.
- So thanks so much for sharing all the updates, Bob, and thank you for joining us today. Be sure to subscribe for more, and we’ll see you again soon.

Microsoft Fabric Warehouses for the Database Administrator
📢 Upcoming Session – June 7 at 7 AM PT
🧠 Topic: Microsoft Fabric Warehouses for the Database Administrator
🎙️ Speaker: Andy Cutler
Are you a DBA trying to navigate your role in the world of Microsoft Fabric? Whether you've been asked to "look after this Fabric thing" or you're just curious about where DBAs fit in, this session is for you. We’ll explore Microsoft Fabric Warehouses—specifically from a DBA’s point of view. Learn how to approach this cloud-based SQL service with the tools and mindset of a data professional, and understand what really matters when managing Fabric in a real-world setting.
Join us to uncover:
🔹 What DBAs need to know about Fabric Warehouses
🔹 How to think about administration in a SaaS analytics platform
🔹 How Fabric fits into the future of data warehousing and analytics
#MicrosoftFabric #DataWarehouse #DBA #FabricWarehouse #MicrosoftLearn #FabricCafe #CloudAnalytics #PowerBI #SQL

What’s Included with Microsoft’s Granted Offerings for Nonprofits?
Are you a nonprofit looking to boost your impact with cutting-edge technology? Microsoft is here to help! From free software licenses to guided technical documentation and support, this program offers a range of resources designed to empower your organization. In this blog, we’ll dive into the incredible tools and grants available to nonprofits through Microsoft, showing you how to make the most of these generous offerings. Whether you’re managing projects or just trying to simplify your day-to-day tasks, there’s something here for everyone. Let’s explore what’s possible!

Orchestrate multimodal AI insights within your healthcare data estate (Public Preview)
In today’s healthcare landscape, there is an increasing emphasis on leveraging artificial intelligence (AI) to extract meaningful insights from diverse datasets to improve patient care and drive clinical research. However, incorporating AI into your healthcare data estate often brings significant costs and challenges, especially when dealing with siloed and unstructured data. Healthcare organizations produce and consume data that is not only vast but also varied in format—ranging from structured EHR entries to unstructured clinical notes and imaging data. Traditional methods require manual effort to prepare and harmonize this data for AI, specify the AI output format, set up API calls, store the AI outputs, integrate the AI outputs, and analyze the AI outputs for each AI model or service you decide to use. Orchestrate multimodal AI insights is designed to streamline and scale healthcare AI within your data estate by building off of the data transformations in healthcare data solutions in Microsoft Fabric. This capability provides a framework to generate AI insights by connecting your multimodal healthcare data to an ecosystem of AI services and models and integrating structured AI-generated insights back into your data estate. When you combine these AI-generated insights with the existing healthcare data in your data estate, you can power advanced analytics scenarios for your organization and patient population.
Key features:
Metadata store lakehouse acts as a central repository for the metadata for AI orchestration to effectively capture and manage enrichment definitions, view definitions, and contextual information for traceability purposes.
Execution notebooks define the enrichment view and enrichment definition based on the model configuration and input mappings. They also specify the model processor and transformer. The model processor calls the model API, and the transformer produces the standardized output while saving the output in the bronze lakehouse in the Ingest folder.
Transformation pipeline ingests AI-generated insights through the healthcare data solutions medallion lakehouse layers and persists the insights in an enrichment store within the silver layer.
Conceptual architecture: The data transformations in healthcare data solutions in Microsoft Fabric allow you to ingest, store, and analyze multimodal data. With the orchestrate multimodal AI insights capability, this standardized data serves as the input for healthcare AI models. The model results are stored in a standardized format and provide new insights from your data. The diagram below shows the flow of integrating AI-generated insights into the data estate, starting as raw data in the bronze lakehouse and being transformed to delta tables in the silver lakehouse. This capability simplifies AI integration across modalities for data-driven research and care, currently supporting:
Text Analytics for health in Azure AI Language to extract medical entities such as conditions and medications from unstructured clinical notes. This utilizes the data in the DocumentReference FHIR resource.
MedImageInsight healthcare AI model in Azure AI Foundry to generate medical image embeddings from imaging data. This model leverages the data in the ImagingStudy FHIR resource.
MedImageParse healthcare AI model in Azure AI Foundry to enable segmentation, detection, and recognition from imaging data across numerous object types and imaging modalities. This model uses the data in the ImagingStudy FHIR resource.
By using orchestrate multimodal AI insights to leverage the data in healthcare data solutions for these models and integrate the results into the data estate, you can analyze your existing data alongside AI enrichments. This allows you to explore use cases such as creating image segmentations and combining them with your existing imaging metadata and clinical data to enable quick insights and disease progression trends for clinical research at the patient level. Get started today! This capability is now available in public preview, and you can use the in-product sample data to test this feature with any of the three models listed above. For more information and to learn how to deploy the capability, please refer to the product documentation. We will dive deeper into more detailed aspects of the capability, such as the enrichment store and custom AI use cases, in upcoming blogs. Medical device disclaimer: Microsoft products and services (1) are not designed, intended or made available as a medical device, and (2) are not designed or intended to be a substitute for professional medical advice, diagnosis, treatment, or judgment and should not be used to replace or as a substitute for professional medical advice, diagnosis, treatment, or judgment. Customers/partners are responsible for ensuring solutions comply with applicable laws and regulations. FHIR® is the registered trademark of HL7 and is used with permission of HL7.

Elevating care management analytics with Copilot for Power BI
Healthcare data solutions' care management analytics capability offers a comprehensive template using the medallion Lakehouse architecture to unify and analyze diverse data sets for meaningful insights. This enables enhanced care coordination, improved patient outcomes, and scalable, sustainable insights. As the healthcare industry faces rising costs and growing demand for personalized care, data and AI are becoming critical tools. Copilot for Power BI leads this shift, blending AI-driven insights with advanced visualization to revolutionize care delivery.
What is Copilot for Power BI?
Copilot is an AI-powered assistant embedded directly into Power BI, Microsoft's interactive data visualization platform. By leveraging natural language processing and machine learning, Copilot helps users interact with their data more intuitively, whether by asking questions in plain English, generating complex calculations, or uncovering patterns that might otherwise go unnoticed. Copilot for Power BI is embedded within healthcare data solutions, allowing care management—one of its core capabilities—to harness these AI-driven insights. In the context of care management analytics, this means turning a sea of clinical, claims, and operational data into actionable insights without needing to write a single line of code. This empowers teams across all technical levels to gain value from data.
Driving better outcomes through intelligent insights in care management analytics
The Care Management Analytics solution, built on the Healthcare data solutions platform, leverages Power BI with Copilot embedded directly within it. Here’s how Copilot for Power BI is revolutionizing care management:
Enhancing decision-making with AI: Traditionally, deriving insights from healthcare data required technical expertise and hours of analysis. Copilot simplifies this by allowing care managers and clinicians to ask questions like “Analyze which medical conditions have the highest cost and prevalence in low-income regions.” The AI interprets these queries and responds with visualizations, trends, and predictions—empowering faster, data-driven decisions.
Proactive care planning: By analyzing historical and real-time data, Copilot helps identify at-risk patients before complications arise. This enables care teams to intervene earlier, design more personalized care plans, and ultimately improve outcomes while reducing unnecessary hospitalizations.
Operational efficiency: From staffing models to resource allocation, Copilot provides visibility into operational metrics that can drive significant efficiency gains. Healthcare leaders can quickly identify bottlenecks, monitor key performance indicators (KPIs), and simulate “what-if” scenarios, enabling more informed, data-backed decisions on care delivery models.
Reducing costs without compromising quality: Cost containment is a constant challenge in healthcare. By highlighting areas of high spend and correlating them with clinical outcomes, Copilot empowers organizations to optimize care pathways and eliminate inefficiencies, ensuring patients receive the right care at the right time, without waste.
Democratizing data access: Perhaps one of the most transformative aspects of Copilot is how it democratizes access to analytics. Non-technical users from care coordinators to nurse managers can interact with dashboards, explore data, and generate insights independently. This cultural shift encourages a more data-literate workforce and fosters collaboration across teams.
Real-world impact
Consider a healthcare system leveraging Power BI and Copilot to manage chronic disease populations more effectively. By combining claims data, social determinants of health (SDoH) indicators, and patient-reported outcomes, care teams can gain a comprehensive view of patient needs, enabling more coordinated care and proactively identifying care gaps. With these insights, organizations can launch targeted outreach initiatives that reduce avoidable emergency department (ED) visits, improve medication adherence, and ultimately enhance outcomes.
The future is here
The integration of Copilot for Power BI marks a pivotal moment for healthcare analytics. It bridges the gap between data and action, bringing AI to the frontlines of care. As the industry continues to embrace value-based care models, tools like Copilot will be essential in achieving the triple aim: better care, lower costs, and improved patient experience. Copilot is more than a tool — it is a strategic partner in your care transformation journey.
Deployment of care management analytics: showcasing how a Population Health Director uncovers actionable insights through Copilot.
Note: To fully leverage the capabilities of the solution, please follow the deployment steps provided and use the sample data included with the Healthcare Data Solution. For more information on care management analytics, please review our detailed documentation and get started with transforming your healthcare data landscape today: Overview of care management analytics - Microsoft Cloud for Healthcare | Microsoft Learn and Deploy and analyze using Care management analytics - Training | Microsoft Learn.
Medical device disclaimer: Microsoft products and services (1) are not designed, intended or made available as a medical device, and (2) are not designed or intended to be a substitute for professional medical advice, diagnosis, treatment, or judgment and should not be used to replace or as a substitute for professional medical advice, diagnosis, treatment, or judgment. Customers/partners are responsible for ensuring solutions comply with applicable laws and regulations.

Upgrade performance, availability and security with new features in Azure Database for PostgreSQL
At Microsoft Build 2025, the Postgres on Azure team is announcing an exciting set of improvements and features for Azure Database for PostgreSQL. One area we are always focused on is the enterprise. This week we are delighted to announce improvements across the enterprise pillars of Performance, Availability, and Security. In addition, we're improving integration of Postgres workloads with services like ADF and Fabric. Here's a quick tour of the enterprise enhancements to Azure Database for PostgreSQL being announced this week.
Performance and scale
SSD v2 with HA support - Public Preview
The public preview of zone-redundant high availability (HA) support for the Premium SSD v2 storage tier with Azure Database for PostgreSQL flexible server is now available. You can now enable High Availability with zone redundancy using Azure Premium SSD v2 when deploying flexible server, helping you achieve a Recovery Point Objective (RPO) of zero for mission-critical workloads. Premium SSD v2 offers sub-millisecond latency and outstanding performance at a low cost, making it ideal for IO-intensive, enterprise-grade workloads. With this update, you can significantly boost the price-performance of your PostgreSQL deployments on Azure and improve availability with reduced downtime during HA failover. The key benefits of SSD v2 include:
Flexible disk sizing from 1 GiB to 64 TiB, with 1-GiB increment support
Independent performance configuration: scale up to 80,000 IOPS and 1,200 MBps throughput without needing to provision larger disks
To learn more about how to upgrade and best practices, visit: Premium SSDv2
PostgreSQL 17 Major Version Upgrade – Public Preview
PostgreSQL version 17 brings a host of performance improvements, including a more efficient VACUUM process, faster sequential scans via streaming IO, and optimized query execution. Now, with the public preview of in-place major version upgrades to PostgreSQL 17, there is an easier path to v17 for your existing flexible server workloads. With this release, you can upgrade from earlier versions (14, 15, or 16) to PostgreSQL 17 without the need to migrate data or change server endpoints, simplifying the upgrade process and minimizing downtime. Azure’s in-place upgrade capability offers a native, low-disruption upgrade path directly from the Azure Portal or CLI. For upgrade steps and best practices, check out our detailed blog post.
Availability
Long-Term Backup (LTR) for Azure Database for PostgreSQL flexible server - Generally Available
Long-term backups are essential for organizations with regulatory, compliance, and audit-driven requirements, especially in industries like finance and healthcare. Certifications such as HIPAA often mandate data retention periods up to 10 years, far exceeding the default 35-day retention limit provided by point-in-time restore (PITR) capabilities. Long-term backup for Azure Database for PostgreSQL flexible server, powered by Azure Backup, is now generally available.
With this release, you can now benefit from:
Policy-driven, one-click enablement of long-term backups
Resilient data retention across Azure Storage tiers
Consumption-based pricing with no egress charges
Support for restoring backups well beyond community-supported PostgreSQL versions
This LTR capability uses a logical backup approach based on pg_dump and pg_restore, offering a flexible, open-source format that enhances portability and ensures your data can be restored across a variety of environments including Azure VMs, on-premises, or even other cloud providers. Learn more about long term retention: Backup and restore - Azure Database for PostgreSQL flexible server
Azure Database for PostgreSQL flexible server Resiliency Solution accelerator
When it comes to ensuring business continuity, your database infrastructure is the most critical component. In addition to product documentation, it is important to have access to opinionated solution architecture, industry-proven recommended practices, and deployable infra-as-code that you can learn from and customize to ensure an automated, production-ready, resilient infrastructure for your data. The Azure Database for PostgreSQL Resiliency Solution Accelerator is now available, providing a set of deployable architectures to ensure business continuity, minimize downtime, and protect data integrity during planned and unplanned events. In addition to architecture and recommended practices, a customizable Terraform deployment workflow is provided. Learn more: Azure Database for PostgreSQL Resiliency Solution Accelerator
Security
Automatic Customer Managed Key (CMK) version updates - Generally Available
Azure Database for PostgreSQL flexible server data is fully encrypted, supporting both service-managed and customer-managed encryption keys (CMK). Automatic version updates for CMK (also known as “versionless keys”) are now generally available. This change simplifies key lifecycle management by allowing PostgreSQL to automatically adopt new keys without needing manual updates. Combined with Azure Key Vault's auto-rotation feature, this significantly reduces the management overhead of encryption key maintenance. Learn more about automatic CMK version updates.
Azure confidential computing SKUs for flexible server - Public Preview
Azure confidential computing helps secure sensitive and regulated data, preventing unwanted access to data in use by cloud providers, administrators, or external users. With the public preview of Azure confidential SKUs for Azure Database for PostgreSQL flexible server, you can now select from a range of Confidential Computing VM sizes to run your PostgreSQL workloads in a hardware-based trusted execution environment (TEE). Azure confidential computing encrypts data in the TEE, processing data in a verified environment, enabling you to securely process workloads while meeting compliance and regulatory demands. Learn more about confidential computing with Azure Database for PostgreSQL flexible server.
Integration
Entra Authentication for Azure Data Factory & Azure Synapse - Generally Available
In an era of bring-your-own-device and cloud-enabled apps, it is increasingly important for enterprises to maintain central control over an identity-based security perimeter. With integrated Entra ID support, Azure Database for PostgreSQL flexible server allows you to bring your database workloads within this perimeter. But how do you securely connect to other services?
Entra ID authentication is now supported in the Azure Data Factory and Azure Synapse connectors for Azure Database for PostgreSQL. This feature enables seamless, secure connectivity using Service Principal (key or certificate) and both User-Assigned and System-Assigned Managed Identities, streamlining access to your data pipelines and analytics workloads. Learn more about How to Connect from Azure Data Factory and Synapse Analytics to Azure Database for PostgreSQL.
Fabric Data Factory – Upsert Method & Script Activity - Generally Available
Microsoft Fabric has become the go-to data analytics platform, with services and tools for every data lifecycle stage. To improve customization and fine-grained control over processing of PostgreSQL data, the Upsert Method and custom Script Activity are now generally available in Fabric Data Factory when using Azure Database for PostgreSQL as a source or sink.
Upsert Method enables intelligent insert-or-update logic for PostgreSQL, making it easier to handle incremental data loads and change data capture (CDC) scenarios without complex workarounds (a sketch of the underlying SQL pattern appears at the end of this section).
Script Activity allows you to embed and execute your own SQL scripts directly within pipelines—ideal for advanced transformations, procedural logic, and fine-grained control over data operations.
These capabilities offer enhanced flexibility for building robust, enterprise-grade data workflows, simplifying your ETL processes.
Connect to VS Code from the Azure Portal - Public Preview
With the exciting announcement of a revamped VS Code PostgreSQL extension preview this week, we're adding a new connection option to the Azure Portal to connect to your flexible server with VS Code, creating a more unified and efficient developer experience. Here's why it matters:
One-Click Connectivity: No manual connection strings or configuration needed.
Faster Onboarding: Go from provisioning a database in Azure to exploring and managing it in VS Code within seconds.
Integrated Workflow: Manage infrastructure and development from a single, cohesive environment.
Productivity: Connect directly from the Portal to leverage VS Code extension features like query editing, result views, and schema browsing.
Where to learn more
The Build 2025 announcements this week are just the latest in a compelling set of features delivered by the Azure Database for PostgreSQL team and build on our latest set of monthly feature updates (see: April 2025 Recap: Azure Database for PostgreSQL Flexible Server). Follow the Azure Database for PostgreSQL Blog where you'll see many of the latest updates from Build, including What's New with PostgreSQL @Build, and New Generative AI Features in Azure Database for PostgreSQL.
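For context on what the Upsert Method described above is doing on the PostgreSQL side, the insert-or-update pattern it automates corresponds to standard PostgreSQL ON CONFLICT logic, which you could also run yourself from a Script Activity. The table and column names below are hypothetical.

```sql
-- Illustrative PostgreSQL upsert: the kind of insert-or-update logic the Upsert Method automates
-- for incremental loads and CDC scenarios. Table and column names are hypothetical.
INSERT INTO public.customer_orders (order_id, customer_id, status, updated_at)
VALUES (1001, 42, 'shipped', now())
ON CONFLICT (order_id)
DO UPDATE SET
    customer_id = EXCLUDED.customer_id,
    status      = EXCLUDED.status,
    updated_at  = EXCLUDED.updated_at;
```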
Data security controls in OneLake
Unify and secure your data — no matter where it lives — without sacrificing control using OneLake security, part of Microsoft Fabric. With granular permissions down to the row, column, and table level, you can confidently manage access across engines like Power BI, Spark, and T-SQL, all from one place. Discover, label, and govern your data with clarity using the integrated OneLake catalog that surfaces the right items fast. Aaron Merrill, Microsoft Fabric Principal Program Manager, shows how you can stay in control, from security to discoverability — owning, sharing, and protecting data on your terms.
Protect sensitive information at scale. Set precise data access rules — down to individual rows. Check out OneLake security in Microsoft Fabric.
No data duplication needed. Hide sensitive columns while still allowing access to relevant data. See it here with OneLake security.
Built-in compliance insights. Streamline discovery, governance, and sharing. Get started with the OneLake catalog.
QUICK LINKS:
00:00 — OneLake & Microsoft Fabric core concepts
01:28 — Table level security
02:11 — Column level security
03:06 — Power BI report
03:28 — Row level security
04:23 — Data classification options
05:19 — OneLake catalog
06:22 — View and manage data
06:48 — Governance
07:36 — Microsoft Fabric integration
07:59 — Wrap up
Link References: Check out our blog at https://aka.ms/OneLakeSecurity. Sign up for a 60-day free trial at https://fabric.microsoft.com.
Video Transcript:
- As you build AI and analytic workloads, unifying your data from wherever it lives and making it accessible doesn’t have to come at the cost of security. In fact, today we dive deeper into Microsoft’s approach to data unification, accessibility, and security with OneLake, part of Microsoft Fabric, where we’ll focus on OneLake’s security control set and how it complements data discovery via the new OneLake catalog.
- Now, in case you’re new to OneLake and Microsoft Fabric, I’ll start by explaining a few core concepts. OneLake is the logical multi-cloud data lake that is foundational to Microsoft Fabric, Microsoft’s fully managed data analytics and AI platform. OneLake, with its support for open data formats, provides a single and unified place across your entire company for data to be discovered, accessed, and controlled across your data estate. Data can reside anywhere, and you can connect to it using shortcuts or via mirroring.
And once in OneLake, you have a single place where data can be centrally classified and labeled as the basis for policy controls. You can then configure granular, role-based permissions that can apply down to the folder level for unstructured data and by table for structured data.
- Then all the way down to the column and row levels within each table. This way, security is enforced across all connected data. Meaning that whether you’re accessing the data through Spark, Power BI, T-SQL, or any other engine, it’s protected and you have the controls to allow or limit access to data on your terms. In fact, let me show you a few examples for enforcing OneLake security at all of these levels. I’ll start with an example showing OneLake security at the table level. I want to grant our suppliers team access to a specific table in this lakehouse. I’ll create a OneLake security role to do that. So I’ll just give it a name, SuppliersReaders. Then I’ll choose selected data and find the table that I want to share by expanding the table list, pick suppliers, and then confirm.
- Now, I just need to assign the right users. I’ll just add Mona in this case, and create the role. Then if I move over to Mona’s experience, I can run queries against the supplier data in the SQL endpoint. But if I try to query any other table, I’m blocked, as you can see here. Now, let me show you another option. This time, I’ll lock access down to the column level. I want to grant our customer relations team access to the data they need, but I don’t want to give them access to PII data. Using OneLake security controls, I can create a role that restricts access to sensitive columns. Like before, I’ll name it. Then I need to select my data. This time, I’ll choose three different tables for customer and order data. But notice this grayed out legacy orders table here that we would like to apply column security to as well. I don’t own the permissions for this table because it’s a shortcut to other data. However, the owner of that data can grant permission to it using the steps I’ll show next. From the role I just created, I’ll expand my tables. And for the customer’s table, I’ll enable column security. Once I confirm, I can select the columns that we don’t want them to see, and save it.
- Now, let’s look at the results of this from another engine, Power BI, while building a report. I’ll choose a semantic model for my Power BI report. With the column level security in place, notice the sensitive columns I removed before, contact name and address, are hidden from me. And when I expand the legacy orders table, which was a shortcut, it’s also not showing PII columns. Now, some scenarios require that security controls are applied where records might be interspersed within the same table, so a row level filter is needed. For example, our US-based HR team should only see data for US-based employees. I’ve created another security role with the right data selected, HRUS.
- Now, I’ll move to my tables and choose from the options for this employees table, and I’ll select row security. Row level security in OneLake uses SQL statements to limit what people can see. I’ll do that here with a simple select statement to limit country to USA. Now, from the HR team’s perspective, they can start to query the data using another engine, Spark, to analyze employee retention. But only across US-based employees, as you can see from the country column.
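The row-level rule Aaron describes is defined with a plain SELECT statement in the OneLake security role. Based on the demo, it would look roughly like the line below; the table and column names follow the example and are otherwise assumptions. OneLake then applies the filter no matter which engine reads the data.

```sql
-- Row-level security rule for the HRUS role: members only see US-based employees.
-- Entered as the role's row filter in OneLake security; names follow the demo.
SELECT * FROM employees WHERE country = 'USA';
```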
And as mentioned, this applies to all engines, no matter how you access it, including the Parquet files directly in OneLake. Next, let’s move on to data classification options that can be used to inform policy controls. Here, the good news is the same labels you’ve defined in Microsoft Purview for your organization, used in Microsoft 365 for emails, messaging, files, sites, and meetings, can be applied to data items in OneLake.
- Additionally, Microsoft Purview policy controls can be used to automatically label content in OneLake. And another benefit I can show you from the lineage view is label inheritance. Notice this Lakehouse is labeled Non-Business, as is NorthwindTest, but look at the connected data items on the right of NorthwindTest. They are also non-business. If I move into the test lakehouse and apply a label either automatically or manually to my data, like I’m doing here, then I move back to the lineage view. My downstream data items like this model and the SQL analytics endpoint below it have automatically inherited the upstream label.
- So now that we’ve explored OneLake security controls, their implementation, and enforcement, let’s look at how this works hand in hand with the OneLake catalog for data discovery and management. First, to know that you’re in the right place, you can use branded domains to organize collections of data. I’ll choose the sales domain. To get the data I want, I can see my items as the ones I own, endorsed items, and my favorites. I can filter by workspace. And on top, I can select the type of data item that I’m looking for. Then if I move over to tags, I can find ones associated with cost centers, dates, or other collection types.
- Now, let’s take a look at a data item. This shows me more detail, like the owner and location. I can also see table schemas and more below. I can preview data within the tables directly from here. Then the lineage tab shows me a list of connected and related items. Lastly, the monitor tab lets me track data refresh history. Now, let me show you how as a data owner you can view and manage these data items. From the settings of this lakehouse, I can change its properties and metadata, such as the endorsement, or update the sensitivity label. And as the data owner, I can also share it securely internally or even externally with approved recipients. I’ll choose a colleague, [email protected], and share it.
- Next, the govern tab in the OneLake catalog gives you even more control as a data owner, as well as recommendations to make data more secure and compliant. You’ll find it on the OneLake catalog main page. This gives me key insights at a glance, like the number and type of items I own. And when I click into view more, I see additional information like my data hierarchy. Below that, item inventory and data refresh status. Sensitivity label coverage gives me an idea of how compliant my data items are. And I can assess data completeness based on whether an item is properly tagged, described, and endorsed across the items I own. Back on the main view, I can see governance actions tailored specifically to my data, like increasing sensitivity label coverage, and more.
- The OneLake catalog is integrated across Microsoft Fabric experiences to help people quickly discover the items they need. And it’s also integrated with your favorite Office apps, including Microsoft Excel, where you can use the get data control to select and access data in OneLake.
And right in context, without leaving the app, you can define what you want and pull it directly into your Excel file for analysis. The OneLake catalog is the one place where you can discover the data that you want and manage the data that you own. And combined with OneLake security controls, you can do all of this without increasing your data security risks.
- To find out more and get started, check out our blog at aka.ms/OneLakeSecurity. Also, be sure to sign up for a 60-day free trial at fabric.microsoft.com. And keep watching Mechanics for the latest updates across Microsoft, subscribe to our channel, and thanks for watching.