Seamlessly manage Dragon Copilot with the new Microsoft Dragon admin center
Today, we are thrilled to announce the Microsoft Dragon admin center – a new way to manage your Microsoft Cloud for Healthcare clinical applications, including Microsoft Dragon Copilot. This user-friendly platform, built on Microsoft 365 and Microsoft's e-commerce framework, enables healthcare administrators to control and manage their licensing, billing, and organizational lifecycle with ease and efficiency.

The Microsoft Dragon admin center streamlines the implementation and management of clinical applications in the health provider ecosystem, reducing time from weeks or months to days. Microsoft Dragon Copilot can be purchased and provisioned quickly with a few clicks. We are excited to have Microsoft partners and customers try it out!

Benefits

The Microsoft Dragon admin center provides numerous benefits to healthcare organizations and partners:
- Efficiency: Streamlines administration of clinical applications through a centralized, unified interface that provides consistency across all administrative functions.
- Partner integration: Offers the flexibility to embed Dragon Copilot in the Electronic Health Record (EHR) system of choice or to resell the application out of the box.
- Customization: Enables a high degree of customization for administrators managing a wide range of users.
- Scalability: Allows healthcare providers to scale clinical applications within a few hours.
- Compliance: Adheres to Microsoft standards of privacy, compliance, and security.

Key Features

The Microsoft Dragon admin center offers several key features that make it an indispensable tool for healthcare administrators:
- Simplified license management, user role assignment, and billing let customers easily purchase additional licenses or upgrade licenses as business needs change.
- Seamless, automated provisioning of the Dragon Copilot application limits deployment delays.
- A customizable organization hierarchy empowers healthcare administrators to manage their organization in a few clicks.
- A one-stop shop for managing Electronic Health Record (EHR) partners and users operating in the embedded Dragon Copilot application reduces the complexity and time required to manage multiple systems and partners separately.
- Extensive configuration of Dragon Copilot settings and library objects shortens time-to-value.

How to Get Started

Getting started with the Microsoft Dragon admin center is straightforward:
1. Purchase licenses: Identify the type of billing account you have in Microsoft 365 and contact your Microsoft representative to purchase licenses. If you are a Microsoft partner, you can purchase through Partner Center.
2. Assign licenses and manage user roles: Assign licenses and give the right individuals the right roles to administer the Dragon admin center.
3. Once license and user role management is complete, navigate to the Microsoft Dragon admin center, where you will be able to:
   - Provision your Dragon Copilot application.
   - Set up your organization hierarchy and healthcare groups, and manage your Electronic Health Record (EHR) partners.
   - Manage and configure your Dragon Copilot application settings, features, and library objects in the context of your organization hierarchy.

For a detailed, step-by-step setup guide for the Microsoft Dragon admin center, please visit: End-to-end workflow overview | Microsoft Learn

Conclusion

The Microsoft Dragon admin center is a valuable tool that empowers healthcare administrators and streamlines clinical application management.
By leveraging its advanced functionalities and user-friendly interface, healthcare organizations can enhance efficiency, accuracy, and customization in their workflows. Learn more about the Microsoft Dragon admin center here: Dragon admin center documentation | Microsoft Learn

Orchestrate multimodal AI insights within your healthcare data estate (Public Preview)
In today's healthcare landscape, there is an increasing emphasis on leveraging artificial intelligence (AI) to extract meaningful insights from diverse datasets to improve patient care and drive clinical research. However, incorporating AI into your healthcare data estate often brings significant costs and challenges, especially when dealing with siloed and unstructured data. Healthcare organizations produce and consume data that is not only vast but also varied in format, ranging from structured EHR entries to unstructured clinical notes and imaging data. Traditional methods require manual effort to prepare and harmonize this data for AI, specify the AI output format, set up API calls, store the AI outputs, integrate them into the data estate, and analyze them, for each AI model or service you decide to use.

Orchestrate multimodal AI insights is designed to streamline and scale healthcare AI within your data estate by building on the data transformations in healthcare data solutions in Microsoft Fabric. This capability provides a framework to generate AI insights by connecting your multimodal healthcare data to an ecosystem of AI services and models and integrating structured AI-generated insights back into your data estate. When you combine these AI-generated insights with the existing healthcare data in your data estate, you can power advanced analytics scenarios for your organization and patient population.

Key features:
- Metadata store lakehouse acts as a central repository for the metadata for AI orchestration, effectively capturing and managing enrichment definitions, view definitions, and contextual information for traceability purposes.
- Execution notebooks define the enrichment view and enrichment definition based on the model configuration and input mappings. They also specify the model processor and transformer. The model processor calls the model API, and the transformer produces the standardized output while saving the output in the bronze lakehouse in the Ingest folder.
- Transformation pipeline ingests AI-generated insights through the healthcare data solutions medallion lakehouse layers and persists the insights in an enrichment store within the silver layer.

Conceptual architecture:

The data transformations in healthcare data solutions in Microsoft Fabric allow you to ingest, store, and analyze multimodal data. With the orchestrate multimodal AI insights capability, this standardized data serves as the input for healthcare AI models. The model results are stored in a standardized format and provide new insights from your data. The diagram below shows the flow of integrating AI-generated insights into the data estate, starting as raw data in the bronze lakehouse and being transformed into delta tables in the silver lakehouse.

This capability simplifies AI integration across modalities for data-driven research and care, currently supporting:
- Text Analytics for health in Azure AI Language to extract medical entities such as conditions and medications from unstructured clinical notes. This utilizes the data in the DocumentReference FHIR resource (see the sketch after this list for what such an entity-extraction call looks like on its own).
- MedImageInsight healthcare AI model in Azure AI Foundry to generate medical image embeddings from imaging data. This model leverages the data in the ImagingStudy FHIR resource.
- MedImageParse healthcare AI model in Azure AI Foundry to enable segmentation, detection, and recognition from imaging data across numerous object types and imaging modalities. This model uses the data in the ImagingStudy FHIR resource.
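To give a sense of the kind of entity extraction the Text Analytics for health integration performs, here is a minimal, standalone sketch that calls the service directly on a snippet of clinical text using the azure-ai-textanalytics Python SDK. The endpoint, key, and sample note are placeholders, and the sketch bypasses the orchestration framework entirely; it only illustrates the underlying service call.

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for an Azure AI Language resource.
client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# A toy, de-identified clinical note standing in for DocumentReference content.
documents = ["Patient reports shortness of breath. Started on 40 mg furosemide daily."]

# Text Analytics for health runs as a long-running operation.
poller = client.begin_analyze_healthcare_entities(documents)
for doc in poller.result():
    if doc.is_error:
        continue
    for entity in doc.entities:
        # e.g. "furosemide" -> MedicationName, "shortness of breath" -> SymptomOrSign
        print(entity.text, entity.category, round(entity.confidence_score, 2))
```

In the orchestrated capability, the model processor issues a call of this shape and the transformer reshapes the response into the standardized output that lands in the bronze lakehouse Ingest folder, so you do not write this plumbing yourself.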
By using orchestrate multimodal AI insights to run these models over the data in healthcare data solutions and integrate the results into the data estate, you can analyze your existing data alongside AI enrichments. This lets you explore use cases such as creating image segmentations and combining them with your existing imaging metadata and clinical data to surface quick insights and disease-progression trends for clinical research at the patient level.

Get started today!

This capability is now available in public preview, and you can use the in-product sample data to test the feature with any of the three models listed above. For more information and to learn how to deploy the capability, please refer to the product documentation. We will dive deeper into more detailed aspects of the capability, such as the enrichment store and custom AI use cases, in upcoming blogs.

Medical device disclaimer: Microsoft products and services (1) are not designed, intended, or made available as a medical device, and (2) are not designed or intended to be a substitute for professional medical advice, diagnosis, treatment, or judgment and should not be used to replace or as a substitute for professional medical advice, diagnosis, treatment, or judgment. Customers/partners are responsible for ensuring solutions comply with applicable laws and regulations.

FHIR® is the registered trademark of HL7 and is used with permission of HL7.

Mastering Agent Governance in Microsoft 365
The "Mastering Agent Governance in Microsoft 365" series is based on the Administering and Governing Agents whitepaper published by Microsoft and designed to educate IT leaders, compliance officers, and decision-makers about the importance of governance for AI agents in Microsoft 365, particularly in highly regulated industries like Healthcare and Life Sciences (HLS). The six-episode series cover the growing role of agents, the risks of unmanaged agents, and the strategic importance of governance frameworks. Empowering innovation while protecting patient data and ensuring compliance In the age of AI-powered productivity, agents—automated digital assistants built with tools like Microsoft 365 Copilot, SharePoint, and Copilot Studio—are transforming how work gets done. From streamlining clinical documentation to automating regulatory reporting, agents are becoming indispensable in Healthcare and Life Sciences (HLS). But with great power comes great responsibility. Why Governance Can’t Be an Afterthought In highly regulated industries like HLS, where data sensitivity and compliance are paramount, the rise of autonomous agents introduces new risks: Unauthorized data access could expose protected health information (PHI). Unmonitored agent behavior could lead to regulatory violations. Lack of lifecycle controls could result in outdated or insecure agents operating in production environments. Agent governance isn’t just an IT concern—it’s a business imperative. It ensures that innovation doesn’t outpace compliance, and that every agent deployed aligns with organizational policies, security standards, and regulatory frameworks like HIPAA, GDPR, and FDA 21 CFR Part 11. Understanding the Agent Landscape Microsoft 365 supports a spectrum of agent creators: End Users using SharePoint or Copilot templates to automate simple tasks. Makers building more complex agents in Copilot Studio. Developers crafting sophisticated, enterprise-grade agents with Azure AI and Teams Toolkit. Each persona requires a different level of oversight. For example, a clinical researcher using SharePoint to build a data retrieval agent may need minimal governance, while a developer building a patient-facing chatbot must adhere to strict data protection and validation protocols. Governance in Action Microsoft provides a layered governance model: Tool Controls: Define what agent creators can do within tools like Copilot Studio and SharePoint. Content Controls: Ensure agents only access data they’re authorized to use, leveraging Microsoft Purview for sensitivity labeling and DLP. Agent Management: Monitor usage, enforce lifecycle policies, and block non-compliant agents via the Microsoft 365 Admin Center. This framework allows organizations to empower innovation while maintaining control—critical in environments where patient safety and regulatory compliance are non-negotiable. The Business Case for Governance For HLS organizations, agent governance delivers tangible benefits: Reduced compliance risk through proactive policy enforcement. Improved operational efficiency by enabling safe automation. Greater trust from patients, regulators, and internal stakeholders. In short, governance is the foundation that allows agents to scale safely and sustainably.Healthcare Agent Orchestrator: Multi-agent Framework for Domain-Specific Decision Support
At Microsoft Build, we introduced the Healthcare Agent Orchestrator, now available in the Azure AI Foundry Agent Catalog. In this blog, we unpack the science: how we structured the architecture, curated real tumor board data, and built robust agent coordination that brings AI into real healthcare workflows.

Figure: Healthcare Agent Orchestrator assisting a simulated tumor board meeting.

Introduction

Healthcare is inherently collaborative. Critical decisions often require input from multiple specialists—radiologists, pathologists, oncologists, and geneticists—working together to deliver the best outcomes for patients. Yet most AI systems today are designed around narrow tasks or single-agent architectures, failing to reflect the real-world teamwork that defines healthcare practice.

That's why we developed the Healthcare Agent Orchestrator: an orchestrator and code sample built around Microsoft's industry-leading healthcare AI models, designed to support reasoning and multidisciplinary collaboration, enabling modular, interpretable AI workflows that mirror how healthcare teams actually work. The orchestrator brings together Microsoft healthcare AI models—such as MedImageParse for image recognition, CXRReportGen for automated radiology reporting, and MedImageInsight for retrieval and similarity analysis—into a unified, task-aware system that enables developers to build agents that reflect real-world healthcare decision-making patterns.

Healthcare Is Naturally Multi-Agent

Healthcare decision-making often requires synthesizing diverse data types—radiologic images, pathology slides, genetic markers, and unstructured clinical narratives—while reconciling differing expert perspectives. In a molecular tumor board, for instance, a radiologist might highlight a suspicious lesion on CT imaging, a pathologist may flag discordant biopsy findings, and a geneticist could identify a mutation pointing toward an alternate treatment path. Effective collaboration in these settings hinges not on isolated analysis, but on structured dialogue—where evidence is surfaced, assumptions are challenged, and hypotheses are iteratively refined.

To support the development of the Healthcare Agent Orchestrator, we partnered with a leading healthcare provider organization, which independently curated and de-identified a proprietary dataset comprising longitudinal patient records and real tumor board transcripts—capturing the complexity of multidisciplinary discussions. We provided guidance on the data types most relevant for evaluating agent coordination, reasoning handoffs, and task alignment in collaborative settings. We then applied LLM-based structuring techniques to convert de-identified free-form transcripts into interpretable units, followed by expert review to ensure domain fidelity and relevance. This dataset provides a critical foundation for assessing agent coordination, reasoning handoffs, and task alignment in simulated collaborative settings.
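The exact structuring pipeline is not published in this blog, but the general technique of prompting a large language model to convert a de-identified transcript excerpt into reviewable, structured units can be sketched as follows. Everything here (the Azure OpenAI deployment name, the excerpt, and the output schema) is a hypothetical placeholder, and, as noted above, expert review of the output remains essential.

```python
import json
from openai import AzureOpenAI  # openai>=1.x with Azure support

client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-06-01",
)

# Invented, de-identified excerpt standing in for a real tumor board transcript.
transcript_excerpt = (
    "Radiology notes a 2.1 cm spiculated RUL nodule, increased from prior. "
    "Pathology reports adenocarcinoma on core biopsy. "
    "Genomics flags an EGFR exon 19 deletion."
)

prompt = (
    "Convert the tumor board transcript excerpt below into a JSON object with a "
    "'findings' array; each element has the keys: specialty, finding, and "
    "follow_up_question (null if none).\n\n" + transcript_excerpt
)

response = client.chat.completions.create(
    model="<your-gpt-4o-deployment>",  # deployment name is a placeholder
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},
)

structured = json.loads(response.choices[0].message.content)
print(json.dumps(structured, indent=2))
```

The resulting JSON units are the kind of interpretable records that domain experts can then review for fidelity before they are used to evaluate agent coordination.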
Why General-Purpose LLMs Fall Short for Healthcare Collaboration

While general-purpose large language models have delivered remarkable results in many domains, they face key limitations in high-stakes healthcare environments:
- Precision is critical: Even small hallucinations or inconsistencies can compromise safety and decision quality.
- Multi-modal integration is required: Many healthcare decisions involve interpreting and correlating diverse data types—images, reports, structured records—much of which is not available in public training sets.
- Transparency and traceability matter: Users must understand how conclusions are formed and be able to audit intermediate steps.

The Healthcare Agent Orchestrator addresses these challenges by pairing general reasoning capabilities with specialized agents that operate over imaging, genomics, and structured EHRs—ensuring grounded, explainable results aligned with clinical expectations. Each agent contributes domain-specific expertise, while the orchestrator ensures coherence, oversight, and explainability—resulting in outputs that are both grounded and verifiable.

Architecture: Coordinating Specialists Through Orchestration

Figure: Healthcare Agent Orchestrator.

The Healthcare Agent Orchestrator's multi-agent framework is built on modular AI infrastructure, designed for secure, scalable collaboration:
- Semantic Kernel: A lightweight, open-source development kit for building AI agents and integrating the latest AI models into C#, Python, or Java codebases. It acts as efficient middleware for rapidly delivering enterprise-grade solutions—modular, extensible, and designed to support responsible AI at scale.
- Model Context Protocol (MCP): An open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools.
- Magentic-One: Microsoft's generalist multi-agent system for solving open-ended web and file-based tasks across domains, built on Microsoft AutoGen, our popular open-source framework for developing multi-agent applications.

Each agent is orchestrated within the system and integrated via Semantic Kernel's group chat infrastructure, with support for communication and modular deployment via Azure. This orchestration ensures that each model—whether interpreting a lung nodule, analyzing a biopsy image, or summarizing a genomic variant—is applied precisely where its expertise is most relevant, without overloading a single system with every task. The modularity of the framework also future-proofs it: as new health AI models and tools emerge, they can be seamlessly incorporated into the ecosystem without disrupting existing workflows, enabling continuous innovation while maintaining clinical stability.

Microsoft's Healthcare AI Models at the Core

The Healthcare Agent Orchestrator also enables developers to explore the capabilities of Microsoft's latest healthcare AI models:
- CXRReportGen: Integrates multimodal inputs—including current and prior X-ray images and report context—to generate grounded, interpretable radiology reports. The model has shown improved accuracy and transparency in automated chest X-ray interpretation, evaluated on both public and private data.
- MedImageParse [3]: A biomedical foundation model for image parsing that can jointly conduct segmentation, detection, and recognition across nine imaging modalities.
- MedImageInsight [4]: Facilitates fast retrieval of clinically similar cases and supports disease classification across a broad range of medical image modalities, accelerating second-opinion generation and diagnostic review workflows.

Each model can act as a specialized agent within the system, contributing focused expertise while allowing flexible, context-aware collaboration orchestrated at the system level. CXRReportGen is included in the initial release and supports the development and testing of grounded radiology report generation. Other Microsoft healthcare models, such as MedImageParse and MedImageInsight, are being explored in internal prototypes to expand the orchestrator's capabilities across segmentation, detection, and image retrieval tasks.

Seamless Integration with Microsoft Teams

Rather than creating new silos, the Healthcare Agent Orchestrator integrates directly into the tools clinicians already use—specifically Microsoft Teams. Developers are investigating how clinicians can engage with agents through natural conversation, asking questions, requesting second opinions, or cross-validating findings—all without leaving their primary collaboration environment. This approach minimizes friction, improves user experience, and brings cutting-edge AI into real-world care settings.

Building Toward Robust, Trustworthy Multi-Agent Collaboration

Think of the orchestrator as managing a secure, structured group chat. Each participant is a specialized AI agent—such as a 'Radiology' agent, 'PatientHistory' agent, or 'ClinicalTrials' agent. At the center is the 'Orchestrator' agent, which moderates the interaction: assigning tasks, maintaining shared context, and resolving conflicting outputs. Agents can also communicate directly with one another, exchanging intermediate results or clarifying inputs. Meanwhile, the user can engage either with the orchestrator or with specific agents as needed.

Each agent is configured with instructions (the system prompt that guides its reasoning) and a description (used by both the UI and the orchestrator to determine when the agent should be activated). For example, the Radiology agent is paired with the cxr_report_gen tool, which wraps Microsoft's CXRReportGen model for generating findings from chest X-ray images. Tools like this are declared under the agent's tools field and allow it to call foundation models or other capabilities on demand—such as the clinical_trials tool [5] for querying ClinicalTrials.gov. Only one agent is marked as facilitator, designating it as the moderator of the conversation; in this scenario, the Orchestrator agent fills that role.

Early observations highlight that multi-agent orchestration introduces new complexities—even as it improves specialization and task alignment. To address these emergent challenges, we are actively evolving the framework across several dimensions:
- Mitigating Error Propagation Across Agents: Ensuring that early-stage errors by one agent do not cascade unchecked through subsequent reasoning steps. This includes introducing critical checkpoints where outputs from key agents are verified before being consumed by others.
- Optimizing Agent Selection and Specialization: Recognizing that more agents are not always better; adding unnecessary or redundant agents can introduce noise and confusion.
  We've implemented a systematic framework that emphasizes a few highly suited agents per task—dynamically selected based on case complexity and domain needs—while continuously tracking performance gains and catching regressions early.
- Improving Transparency and Hand-off Clarity: Structuring agent interactions to make intermediate outputs and rationales visible, enabling developers (and the system itself) to trace how conclusions were reached, catch inconsistencies early, and intervene when necessary.

Adapting General Frameworks for Healthcare Complexity

Generic orchestration frameworks like Semantic Kernel provide a strong foundation—but healthcare demands more. The stakes are higher, the data more nuanced, and the workflows require precision, traceability, and regulatory compliance. Here's how we've extended and adapted these systems to help address healthcare demands:

Precision and Safety: We introduced domain-aware verification checkpoints and task-specific agent constraints to reduce inappropriate tool usage, supporting more reliable reasoning. To help uphold the high standards required in healthcare, we defined two complementary metric systems (see Healthcare Agent Orchestrator Evaluation for more details):
- Core Metrics: Monitor health agent selection accuracy, intent resolution, contextual relevance, and information aggregation.
- RoughMetric: A composite score based on ROUGE that helps quantify the precision of generated outputs and conversation reliability.
- TBFact: A modified version of RadFact [2] that measures the factuality of claims in agents' messages and helps identify omissions and hallucinations.

Domain-Specific Tool Planning: Healthcare agents must reason across multimodal inputs—such as chest X-rays, CT slices, pathology images, and structured EHRs. We've customized Semantic Kernel's tool invocation and planning modules to reflect clinical workflows, not generic task chains.

These infrastructure-level adaptations are designed to complement Microsoft's healthcare AI models—such as CXRReportGen, MedImageParse, and MedImageInsight—working together to enable coordinated, domain-aware reasoning across complex healthcare tasks.

Enabling Collaborative, Trustworthy AI in Healthcare

Healthcare demands AI systems that are as collaborative, adaptive, and trustworthy as the clinical teams they aim to support. The Healthcare Agent Orchestrator is a concrete step toward that vision—pairing specialized health AI models with a flexible, multi-agent coordination framework, purpose-built to reflect the complexity of real clinical decision-making. By aligning with existing healthcare workflows and enabling transparent, role-specific collaboration, this system shows promise to empower clinicians to work more effectively—with AI as a partner, not a replacement.

The Healthcare Multi-Agent Orchestrator and the Microsoft healthcare AI models are intended for research and development use. They are not designed or intended to be deployed in clinical settings as-is, nor are they intended for use in the diagnosis or treatment of any health or medical condition, and their performance for such purposes has not been established.
You bear sole responsibility and liability for any use of the Healthcare Multi-Agent Orchestrator or the healthcare AI models, including verification of outputs and incorporation into any product or service intended for a medical purpose or to inform clinical decision-making, compliance with applicable healthcare laws and regulations, and obtaining any necessary clearances or approvals.

References
[1] arXiv, Universal Abstraction: Harnessing Frontier Models to Structure Real-World Data at Scale, February 2, 2025
[2] arXiv, MAIRA-2: Grounded Radiology Report Generation, June 6, 2024
[3] Nature Methods, A foundation model for joint segmentation, detection and recognition of biomedical objects across nine modalities, November 18, 2024
[4] arXiv, MedImageInsight: An open-source embedding model for general domain medical imaging, October 9, 2024
[5] Machine Learning for Healthcare Conference, Scaling Clinical Trial Matching Using Large Language Models: A Case Study in Oncology, August 4, 2023

When "Wrong" Looks "Right": The Challenge of Evaluating AI in Healthcare
Choosing the right evaluation metrics is crucial for ensuring patient safety and clinical accuracy when integrating AI into healthcare. Traditional text comparison metrics like F1, BLEU, ROUGE, and METEOR often fail to distinguish between clinically accurate and inaccurate responses. Advanced methods such as BERTScore, ClinicalBERT, and MoverScore show better results but still have limitations. In this blog post, we present a compelling case for investing in more advanced evaluation methods, even when they require additional computational resources. When patient safety is at stake, the ability to reliably distinguish between clinically accurate and inaccurate content isn't just nice to have—it's essential.
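To make the failure mode concrete, here is a small, hypothetical sketch using the rouge-score package: a candidate report that differs from the reference only by a dropped negation ("no acute hemorrhage" becomes "acute hemorrhage") still earns a high n-gram overlap score, and in this example even edges out a clinically correct paraphrase. The sentences are invented for illustration.

```python
from rouge_score import rouge_scorer

reference = "CT head shows no acute intracranial hemorrhage."

# A clinically correct paraphrase vs. a clinically dangerous negation error.
candidates = {
    "correct":   "Head CT demonstrates no acute intracranial hemorrhage.",
    "dangerous": "CT head shows acute intracranial hemorrhage.",
}

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)

for label, candidate in candidates.items():
    scores = scorer.score(reference, candidate)
    print(
        f"{label:9s} ROUGE-1 F1={scores['rouge1'].fmeasure:.2f} "
        f"ROUGE-L F1={scores['rougeL'].fmeasure:.2f}"
    )
# In this example the negation error scores higher than the correct paraphrase,
# even though only one of the two is clinically safe.
```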
Document Acknowledgement and Attestation with Microsoft's PowerPlatform - Step by Step

Leveraging Microsoft's PowerPlatform and Office 365, we can securely store policy-related documents, use Flow to automate document acknowledgement and attestation, use PowerApps to provide a mobile-friendly app to review and accept policy documents, and finally build beautiful dashboards to visualize the status of a given document acceptance process.

Optimizing Azure Healthcare Multimodal AI Models for Intel CPU Architecture
Authors:
- Alexander Mehmet Ersoy, Principal Product Manager, Microsoft HLS AI
- Abhishek Khowala, Principal AI Engineer, Intel
- Ravi Panchumarthy, AI Framework Engineer, Intel
- Srinarayan Srikanthan, AI Framework Engineer, Intel
- Ekaterina Aidova, AI Frameworks Engineer, Intel
- Alberto Santamaria-Pang, Principal Applied Data Scientist, Microsoft HLS AI, and Adjunct Faculty at Johns Hopkins Medicine
- Peter Lee, Applied Scientist, Microsoft HLS AI, and Adjunct Assistant Professor at Vanderbilt University
- Ivan Tarapov, Sr. Director, Microsoft HLS AI

The Rise of Multimodal AI in Healthcare

The healthcare sector is witnessing a surge in the adoption of multimodal AI models, which are crucial for applications ranging from diagnostics to personalized treatment plans. These models combine data from various sources such as medical images, patient records, and genomic data to provide comprehensive insights. Microsoft's Azure AI Foundry model catalog of multimodal healthcare foundation models is at the forefront of this change. Recently launched models (such as MedImageInsight, MedImageParse, CXRReportGen [8], and many others) are designed to help healthcare organizations rapidly build and deploy AI solutions tailored to their specific needs, while minimizing the extensive compute and data requirements typically associated with building multimodal models from scratch. Real-world examples of our industry partners adopting multimodal AI models are highlighted in the article "Unlocking next-generation AI capabilities with healthcare AI models".

Challenges and Opportunities in Hardware Optimization

As models get more complex, which is the case with the foundation model trend, the demands on the hardware rise. While GPUs remain the platform of choice for minimizing model execution times, CPUs present substantial optimization possibilities, especially for inference workloads. We believe that providing a framework for efficient CPU-based environments holds huge potential for many production scenarios where speed can be traded off for cost savings.

With multimodal healthcare AI, the complexity of handling different data modalities and ensuring efficient inference requires innovative solutions and collaboration between industry leaders. Companies are increasingly looking towards hardware-specific optimizations to enhance model efficiency and reduce latency while keeping costs at bay. Intel, with its robust suite of AI tools and extensions for frameworks like PyTorch, is pioneering this optimization effort. For instance, the Intel® Distribution of OpenVINO™ toolkit has been instrumental in accelerating the development of computer vision and deep learning applications in healthcare [1]. You can learn about our recent collaboration with Intel on AI optimizations to advance medical innovations in the article "Empower Medical Innovations: Intel Accelerates PadChest & fMRI Models on Microsoft Azure* Machine Learning".

The demand for AI applications in healthcare is rapidly increasing. Multimodal AI models, which can process and analyze complex datasets, are essential for tasks such as early disease detection, treatment planning, and patient monitoring. While optimizing these models to perform efficiently on specific hardware is important, it is not necessarily a barrier to adoption. Models optimized with CUDA for Nvidia GPUs often deliver optimal performance and run faster than on any other hardware. However, the benefit of using CPUs lies in the tradeoff they offer.
You can choose to optimize for speed by running your model on a GPU and optimizing for it in PyTorch, or you can optimize for cost by sacrificing speed. This is the proposition here: the option to run the model more slowly on an accessible CPU, which can be advantageous in scenarios where speed is not the primary concern but access to GPU hardware is. The Intel® oneAPI Deep Neural Network Library (oneDNN) has proven effective in reducing the GPU requirement burden and accelerating time to market for AI solutions [2]. Both Intel® Extension for PyTorch (IPEX) and OpenVINO utilize Intel® oneDNN to accelerate deep learning operations, taking advantage of underlying hardware features. IPEX optimizes existing PyTorch workflows with minimal code changes. OpenVINO provides cross-platform deep learning optimization for deployment flexibility. In this blog post, a custom deployment was implemented using CXRReportGen along with both IPEX and OpenVINO optimizations, demonstrating how these techniques can support different deployment scenarios and technical requirements. This optimization is accessible through Azure's compute services and Intel's technology.

Benchmarking and Performance Acceleration

To address these challenges, our new collaboration with Intel focuses on leveraging Intel's advanced AI tools and hardware capabilities to optimize multimodal AI models for greater healthcare access. By utilizing Intel's Extension for PyTorch and other optimization techniques, we aim to tune CPUs for the best possible model run time. While this may trade away some speed relative to GPUs, the main benefit is addressing the problem of GPU hardware scarcity. This partnership not only underscores the importance of hardware-specific optimizations but also sets a new standard for AI model deployment in real-world healthcare applications.

Both IPEX and OpenVINO are built on a common foundation: Intel® oneDNN, a high-performance library designed specifically for deep learning applications and optimized for Intel architecture. oneDNN leverages specialized hardware instructions available in Intel processors, such as Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) [3] on Intel CPUs, as well as Intel® Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs.

Figure 1: oneDNN library.

IPEX [4] extends PyTorch* with the latest performance optimizations for Intel hardware [5]. It leverages oneDNN under the hood to provide optimized implementations of key operations. This allows developers to stay within their existing PyTorch code with minimal changes - making it an excellent choice for teams already comfortable with the PyTorch ecosystem who want to quickly optimize their models for Intel hardware.

```python
import torch

############## import ipex ###############
import intel_extension_for_pytorch as ipex

model = Model()  # Model() stands in for your existing PyTorch model
model.eval()

############## Optimize with IPEX ###############
model = ipex.optimize(model, dtype=torch.bfloat16)

# Continue with inference as normal
```

Figure 2: Intel Extension for PyTorch.

The Intel® Distribution of OpenVINO™ toolkit is a powerful solution for optimizing and deploying deep learning models across a wide range of Intel hardware [6]. Like IPEX, it leverages oneDNN under the hood, but takes a different approach - offering cross-platform optimization and flexible deployment options.
OpenVINO supports two main workflows: a convenience workflow, where you run models directly with minimal setup, and a performance workflow, recommended for production, where models are first converted offline into the OpenVINO Intermediate Representation (IR). This one-time conversion step enables highly optimized inference and allows the final application to remain lightweight and efficient. Here's a simple example using OpenVINO for inference with a pre-converted IR model; refer to the OpenVINO Notebooks repo for more samples:

```python
import openvino as ov

core = ov.Core()

############## Load the OpenVINO IR model ###############
# model.xml is produced by the one-time offline conversion to OpenVINO IR.
compiled_model = core.compile_model("model.xml", "CPU")

############## Run inference ###############
# input_tensor_name and input_tensor are placeholders for your model's input
# name and input data.
infer_request = compiled_model.create_infer_request()
results = infer_request.infer({input_tensor_name: input_tensor})
```

Figure 3: OpenVINO toolkit overview.

IPEX and OpenVINO are supported on all Intel architectures. However, for optimal performance, Intel recommends using instances powered by 4th Gen Intel® Xeon® Scalable processors or newer, which feature AMX and other hardware acceleration capabilities, such as Azure's v6-series (e.g., Standard_E48s_v6) [7].

Results

We conducted a detailed performance benchmark using CXRReportGen, a state-of-the-art foundation model designed to generate a list of radiological findings from chest X-rays, on Standard_E48s_v6 hardware (48 vCPUs, 248 GiB RAM) with and without IPEX and OpenVINO optimization. We realized up to a 70% improvement in CXRReportGen foundation model run time when applying optimizations with IPEX, and similarly substantial gains using OpenVINO, compared to the non-optimized baseline on the same CPU hardware. This significant improvement highlights the potential of leveraging Intel's performance optimizations to make critical healthcare AI models more cost-efficient and accessible. Such advancements enable healthcare providers to deploy advanced diagnostic tools even in resource-constrained environments, ultimately improving patient care and operational efficiency.

| SKU | Run Type (100 runs) | Mean Run Time (seconds) | Standard Deviation of Run Time (seconds) |
|---|---|---|---|
| Standard_E48s_v6 (48 vCPUs, 348 GiB RAM) | No Optimization | 22.47 | 0.1061 |
| Standard_E48s_v6 (48 vCPUs, 348 GiB RAM) | IPEX | 8.21 | 0.2375 |
| Standard_E48s_v6 (48 vCPUs, 348 GiB RAM) | OpenVINO | 7.01 | 0.0569 |

Table 1: Performance comparison of the CXRReportGen model across 100 runs on CPU.

Future Prospects and Innovations

Our benchmarks with Intel optimizations using both IPEX and OpenVINO show great potential for decreasing the run time of our foundation models and increasing scalability via CPU. This positions Intel CPUs as a viable deployment target. It not only increases deployment options but also offers opportunities to reduce cloud costs with CPU-based instances and even to consider deploying these workflows on existing compute headroom at the edge. For custom deployments, the setup described in this blog post is now available on the provided compute instances in Azure and with optimization software from Intel, so that developers can optimize inference workloads while taking advantage of the large memory pools available via CPU and handling large batch workloads. Our advancements with Intel in model runtime optimizations are being considered for availability in the Azure AI model catalog. Please stay tuned for further updates.
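For readers who want to reproduce run-time comparisons like those in Table 1 on their own hardware, a timing loop along the following lines is sufficient. This is a hypothetical sketch rather than the benchmark script used for this blog, and run_inference stands in for a single CXRReportGen inference call under whichever configuration (baseline, IPEX, or OpenVINO) is being measured.

```python
import statistics
import time

def benchmark(run_inference, warmup: int = 3, runs: int = 100):
    """Time repeated inference calls and return (mean, std) in seconds.

    run_inference: a zero-argument callable wrapping one inference pass,
    e.g. a baseline PyTorch call, an IPEX-optimized call, or an OpenVINO
    infer_request.infer(...) call.
    """
    for _ in range(warmup):  # discard warm-up runs (caching, graph compilation)
        run_inference()

    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        run_inference()
        timings.append(time.perf_counter() - start)

    return statistics.mean(timings), statistics.stdev(timings)

# Example usage (placeholder callable):
# mean, std = benchmark(lambda: compiled_model(input_batch))
# print(f"mean={mean:.2f}s std={std:.4f}s")
```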
As we continue to innovate and optimize, the potential for AI to transform healthcare and improve patient outcomes becomes increasingly attainable. We are now more equipped than ever to make it easier for our partners and customers to create connected experiences at every point of care, empower their healthcare workforce, and unlock the value from their data using data standards that are important to the healthcare industry.

References
[1] Intel OpenVINO Optimizes Deep Learning Performance for Healthcare Imaging
[2] Accelerating Healthcare Diagnostics with Intel oneAPI and AI Tools
[3] Intel Advanced Matrix Extensions
[4] Intel Extension for PyTorch
[5] Accelerate with Intel Extension to PyTorch
[6] Intel Accelerates PadChest and fMRI Models on Azure ML
[7] Azure's first 5th Gen Intel® Xeon® processor instances are now available and we're excited!
[8] CXRReportGen Model Card in Azure AI Foundry

The healthcare AI models in Azure AI Foundry are intended for research and model development exploration. The models are not designed or intended to be deployed in clinical settings as-is, nor for use in the diagnosis or treatment of any health or medical condition, and the individual models' performances for such purposes have not been established. You bear sole responsibility and liability for any use of the healthcare AI models, including verification of outputs and incorporation into any product or service intended for a medical purpose or to inform clinical decision-making, compliance with applicable healthcare laws and regulations, and obtaining any necessary clearances or approvals.