# Cloud infrastructure for disconnected environments enabled by Azure Arc
Organizations in highly regulated industries such as government, defense, financial services, healthcare, and energy often operate under strict security and compliance requirements and across distributed locations, some with limited or no connectivity to the public cloud. Leveraging advanced capabilities, including AI, in the face of this complexity can be time-consuming and resource-intensive. Azure Local, enabled by Azure Arc, offers simplicity. Azure Local's distributed infrastructure extends cloud services and security across distributed locations, including customer-owned on-premises environments. Through Azure Arc, customers benefit from a single management experience and full operational control that is consistent from cloud to edge. Available in preview to pre-qualified customers, Azure Local with disconnected operations extends these capabilities even further, enabling organizations to deploy, manage, and operate cloud-native infrastructure and services in completely disconnected or air-gapped networks.

## What is disconnected operations?

Disconnected operations is an add-on capability of Azure Local, delivered as a virtual appliance, that enables the deployment and lifecycle management of your Azure Local infrastructure and Arc-enabled services without any dependency on a continuous cloud connection.

## Key Benefits

- **Consistent Azure Experience:** You can operate your disconnected environment using the same tools you already know - Azure Portal, Azure CLI, and ARM templates - extended through a local control plane.
- **Built-in Azure Services:** Through Azure Arc, you can deploy, update, and manage Azure services such as Azure Local VMs and Azure Kubernetes Service (AKS).
- **Data Residency and Control:** You can govern and keep data within your organization's physical and legal jurisdiction to meet data residency, operational autonomy, and technological isolation requirements.

## Key Use Cases

Azure Local with disconnected operations unlocks a range of impactful use cases for regulated industries:

- **Government and Defense:** Running sensitive government workloads and classified data more securely in air-gapped and tactical environments with familiar Azure management and operations.
- **Manufacturing:** Deploying and managing mission-critical applications like industrial process automation and control systems for real-time optimizations in highly secure environments with zero connectivity.
- **Financial Services:** Enhanced protection of sensitive financial data with real-time data analytics and decision making, while ensuring compliance with strict regulations in isolated networks.
- **Healthcare:** Running critical workloads that need real-time processing, storing and managing sensitive patient data with increased levels of privacy and security in disconnected environments.
- **Energy:** Operating critical infrastructure in isolated environments, such as electrical production and distribution facilities, oil rigs, or remote pipelines.

Disconnected operations for Azure Local can, for example, support mission-critical emergency response and recovery efforts by providing essential services when critical infrastructure and networks are unavailable.

## Core Features and Capabilities

### Simplified Deployment and Management

Download and deploy the disconnected operations virtual appliance on Azure Local Premier Solutions through a streamlined user interface. Create and manage Azure Local instances using the local control plane, with the same tooling experience as Azure.
### Offline Updates

The monthly update package includes all the essential components: the appliance, Azure Local software, AKS, and Arc-enabled service agents. You can update and manage the entire Azure Local instance using the local control plane without an internet connection.

### Monitoring Integration

You can monitor your Azure Local instances and VMs using external monitoring solutions like SCOM by installing custom management packs, and monitor AKS clusters through third-party open-source solutions like Prometheus and Grafana (see the sketch at the end of this article).

## Run Mission-Critical Workloads – Anytime, Anywhere

### Azure Local VMs

You can run VMs with flexible sizing, support for custom VM images, and high availability through storage replication and automatic failover, all managed through the local Azure interface.

### AI & Containers with AKS

You can use disconnected AI containers with Azure Kubernetes Service (AKS) on Azure Local to deploy and manage AI applications in disconnected scenarios where data residency and operational autonomy are required. AKS enables the deployment and management of containerized applications such as AI agents and models, deep learning frameworks, and related tools, which can be leveraged for inferencing, fine-tuning, and training in isolated networks. AKS also automates resource scaling, allowing for the dynamic addition and removal of container instances to more efficiently utilize hardware resources, including GPUs, which are critical for AI workloads. This provides a consistent Azure experience for managing Kubernetes clusters and AI workloads with the same tooling and processes as in connected environments.

## Get Started: Resources and Next Steps

Microsoft is excited to announce the upcoming preview of Disconnected Operations for Azure Local in Q3 CY25 for both Commercial and Government Cloud customers. To learn more, please visit Disconnected operations for Azure Local overview (preview) - Azure Local. Ready to participate? Get Qualified! or contact your Microsoft account team. Please also check out this session at Microsoft Build, https://build.microsoft.com/en-US/sessions/BRK195, by Mark Russinovich, one of the most influential minds in cloud computing. His insights into the latest Azure innovations and the future of cloud architecture and computing make it a must-watch session!
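To make the monitoring integration above concrete, here is a minimal, hedged sketch of standing up Prometheus and Grafana on an AKS cluster in a disconnected environment. It assumes the kube-prometheus-stack Helm chart and its container images have already been mirrored to a local OCI registry, since the cluster has no internet access; the registry name and kubeconfig path are placeholders.

```powershell
# Hedged sketch: deploying Prometheus and Grafana to an AKS cluster on Azure
# Local in a disconnected network. Assumes the kube-prometheus-stack chart and
# its images were mirrored ahead of time to a local OCI registry;
# registry.contoso.local and the kubeconfig path are placeholders.

# Point helm/kubectl at the workload cluster
$env:KUBECONFIG = "C:\kube\aks-workload.kubeconfig"

# Install the chart from the local registry mirror
helm install monitoring oci://registry.contoso.local/charts/kube-prometheus-stack `
    --namespace monitoring --create-namespace

# Confirm the Prometheus and Grafana pods come up
kubectl get pods --namespace monitoring
```

# Announcement: General Availability of Logic Apps Hybrid Deployment Model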
We are thrilled to announce the General Availability of the Logic Apps Hybrid Deployment Model, a groundbreaking feature that offers unparalleled flexibility and control to our customers. This deployment model allows you to run Logic Apps workloads on customer-managed infrastructure, giving you the option to host your integration solutions on-premises, in a private cloud, or even in a third-party public cloud.

With the Logic Apps Hybrid Deployment Model, you can tailor your integration solutions to meet your specific needs, whether for regulatory compliance, data privacy, or network restrictions. This model ensures that you have the freedom to choose the best environment for your workflows while still leveraging the powerful capabilities of Azure Logic Apps.

The Hybrid Deployment Model supports a semi-connected architecture, offering local processing of workflows, local storage, and local network access. This means that the data processed by the workflows remains in your local SQL Server, and you have the ability to connect to local networks. Additionally, the built-in connectors execute in your local compute, giving you access to local data sources and higher throughput.

Since we launched the public preview, we have received an overwhelmingly positive response from customers across various industries. Many customers, including those looking to migrate from BizTalk Server, have expressed interest in this offering due to its ability to co-locate integration platforms near key lines-of-business systems, avoiding dependencies on the public internet to process transactions.

## Journey of the Hybrid Deployment Model Feature

At the Integrate 2024 event, we announced the early access preview of the Hybrid Deployment Model for Logic Apps Standard. This initial phase allowed interested parties to nominate themselves for early access and provided valuable feedback on the model's functionality and benefits.

Following the private preview, we launched the public preview, which empowered our customers with additional flexibility and control. This phase allowed customers to build and deploy workflows on customer-managed infrastructure, offering the option to run Logic Apps on-premises, in a private cloud, or in a third-party public cloud. The public preview also introduced the semi-connected architecture, enabling local processing of workflows and access to local data sources.

In October 2024, we refreshed the public preview and received an overwhelmingly positive response from customers across various industries. This feedback highlighted the model's ability to meet specific use cases, such as migrating from BizTalk Server and co-locating integration platforms near key lines-of-business systems. The public preview refresh also emphasized the model's alignment with our promise of providing customers with more options to meet their business needs.

We are excited to see how our customers will leverage the Logic Apps Hybrid Deployment Model to meet their business needs and drive innovation. Thank you for your continued support and feedback.

## New features in the GA release

### OpenTelemetry support

OpenTelemetry is a vendor-neutral, open-source observability framework for instrumenting, generating, collecting, and exporting telemetry data. OpenTelemetry support in the Hybrid Deployment Model ensures seamless logging in semi-connected scenarios and provides the ability to choose any observability platform as a telemetry endpoint. More details here.
To set up the OpenTelemetry capability from the Azure portal, follow these steps:

1. Open the host.json file in the root directory of the SMB file share path configured in your logic app.
2. In the host.json file, at the root level, add the following telemetryMode setting with the OpenTelemetry value, for example:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "version": "[1.*, 2.0.0)"
  },
  "telemetryMode": "OpenTelemetry"
}
```

When you enable OpenTelemetry in the host.json file, your logic app exports telemetry based on the OpenTelemetry-supported app settings that you define in the environment. Add the app settings below from the portal by navigating to Containers > Environment variables > Edit and deploy.

| App setting | Description |
| --- | --- |
| OTEL_EXPORTER_OTLP_ENDPOINT | The OpenTelemetry Protocol (OTLP) exporter endpoint URL for where to send the telemetry data. |
| OTEL_EXPORTER_OTLP_HEADERS (optional) | A list of headers to apply to all outgoing data. Commonly used to pass authentication keys or tokens to your observability backend. |

If your OpenTelemetry endpoint requires other OpenTelemetry-related settings, include these settings in the app settings too.

### Support for zip deployment through VS Code

Support for zip deployment has made the VS Code deployment experience more reliable. This feature uses Microsoft Entra authentication for deployment, so the VS Code machine doesn't require permissions on the SMB share, and the user doesn't need to provide SMB credentials in subsequent deployments. To use zip deployment, follow these steps:

1. Create an app registration.
2. In the VS Code deployment, provide the Client ID, Object ID, and Client secret values.

If there are any concerns with creating an app registration, you can continue to use the SMB deployment option by choosing "Use SMBDeployment For Hybrid" in the Extensions configuration of VS Code. If you would like to use zip deployment in an existing logic app, you will need to manually add the app settings as indicated here. The zip deployment APIs can be used in CI/CD pipelines as well for DevOps deployment. We will be publishing another blog with detailed steps on the DevOps process.

### Support for more regions

We are pleased to announce the expansion of hybrid deployment support to additional regions, in response to valuable customer feedback. This enhancement aims to better meet the diverse geographic and operational requirements of your businesses. Hybrid deployment is now available in the following regions: Central US, East Asia, East US, North Central US, Southeast Asia, Sweden Central, UK South, West Europe, and West US.

### Logic Apps Rules Engine support on Linux containers

In this release, we have added support for the Azure Logic Apps Rules Engine to run on Linux containers, which enables customers to use the Rules Engine capabilities in hybrid Logic Apps.

### Improvements for effective scaling and performance

We have introduced a few improvements in the runtime storage and the scaling behavior aimed at improving performance and achieving effective scaling.
Please refer to the following articles:

- Scaling mechanism in hybrid deployment model for Azure Logic Apps Standard | Microsoft Community Hub
- Hybrid deployment model for Logic Apps - Performance Analysis and Optimization recommendations | Microsoft Community Hub

### Diagnostic tool

To assist with troubleshooting environment configuration issues, we have created a troubleshooting tool that reviews the health of all the components of the hybrid deployment and provides insights. You can find the script in our GitHub repository. Select the troubleshoot.ps1 file, copy it to a folder, and run the script using PowerShell. This script should be run where you have access to kubectl. A minimal usage sketch appears at the end of this post.

## References

- Create Standard logic app workflows for hybrid deployment - Azure Logic Apps | Microsoft Learn
- Set up your own infrastructure for Standard logic app workflows - Azure Logic Apps | Microsoft Learn
- Set up and view enhanced telemetry for Standard workflows - Azure Logic Apps | Microsoft Learn
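As a closing note on the diagnostic tool described above, here is a minimal usage sketch. The local folder path is a placeholder; download troubleshoot.ps1 from the GitHub repository first and run it from a machine that has kubectl access to the cluster hosting the logic app.

```powershell
# Hedged sketch: running the hybrid deployment diagnostic script.
# Download troubleshoot.ps1 from the GitHub repository referenced above into a
# local folder first; the path below is a placeholder.
Set-Location -Path "C:\tools\logicapps-hybrid"

# Confirm kubectl can reach the cluster that hosts the logic app
kubectl cluster-info

# Run the diagnostic script and review the reported component health
.\troubleshoot.ps1
```

# Preview of Arc-enabled SQL Server in US Government Virginia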
## Introduction

We are excited to announce that Azure Arc-enabled SQL Server on Windows is now in public preview for the US Government Virginia region. With Azure Arc-enabled SQL Server, U.S. government agencies and organizations can manage SQL Server instances outside of Azure from the Azure Government portal, in a secure and compliant manner. Arc-enabled SQL Server resources in US Gov Virginia can be onboarded and viewed in the Azure Government portal just like any Azure resource, giving you a single pane of glass to monitor and organize your SQL Server estate in the Gov cloud.

## Preview features of Azure Arc-enabled SQL Server

Currently, in the US Government Virginia region, SQL Server registration provides the following features:

- Connect (onboard) a SQL Server instance to Azure Arc.
- SQL Server inventory, which includes the following capabilities in the Azure portal:
  - View the SQL Server instance as an Azure resource.
  - View databases as an Azure resource.
  - View the properties for each server. For example, you can view the version, edition, and database for each instance.

All other features, including license management and Extended Security Updates (ESU), are not currently available.

## How to onboard your SQL Server

Onboarding a SQL Server to Azure Arc in the Government cloud is a two-step process that you can initiate from the Azure (US Gov) portal:

1. Connect hybrid machines with Azure Arc-enabled servers (a hedged sketch of this step follows at the end of this post).
2. Connect your SQL Server to Azure Arc on a server already enabled by Azure Arc.

## Limitations

The following SQL Server features are not currently available in any US Government region:

- Failover cluster instance (FCI)
- Availability group (AG)
- SQL Server services like SSIS, SSRS, or Power BI Report Server

## Future plans and roadmap

This public preview is a major first step in bringing Azure Arc's hybrid data management to Azure Government, and more enhancements are on the way. We will be enabling features like Arc-based billing (PAYG) and ESU purchasing, along with feature parity with the public cloud, in the future. After US Gov Virginia, we will expand to other US Gov regions, starting with US Gov Arizona.

## Conclusion

The availability of Azure Arc-enabled SQL Server in the US Gov Virginia region marks an important milestone for hybrid data management in Government. If you're an Azure Government user managing SQL Server instances, we invite you to try out this public preview. And please share your feedback with us through the community forum or your Microsoft representatives.

Learn more:

- SQL Server enabled by Azure Arc in US Government Preview
- SQL Server enabled by Azure Arc
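For step 1 above, here is a hedged sketch of connecting a machine to Azure Arc in the Government cloud from the command line. The subscription, resource group, and tenant values are placeholders, and the portal-generated onboarding script remains the simplest path.

```powershell
# Hedged sketch of step 1: connecting an on-premises machine to Azure Arc in
# US Gov Virginia with the Connected Machine agent CLI. Values in angle
# brackets are placeholders; note the --cloud flag targets the Azure US
# Government cloud.
& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
    --cloud "AzureUSGovernment" `
    --location "usgovvirginia" `
    --subscription-id "<subscription-id>" `
    --resource-group "rg-arc-sql" `
    --tenant-id "<tenant-id>"
```

# Empowering the Physical World with AI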
## Unlocking AI at the Edge with Azure Arc

The integration of AI into the physical environment is revolutionizing the ways we interact with and navigate the world around us. By embedding intelligence into edge devices, AI is not just processing data; it is defining how machines perceive, reason, and act autonomously in real-world scenarios. AI at the edge is transforming how we interact with our environment, driven by critical factors such as data sensitivity, local regulations, compliance, low-latency requirements, limited network connectivity, and cost considerations. Added to this, the emergence of new, powerful agentic AI capabilities enables autonomous and adaptive real-time operations, making AI an indispensable tool in reshaping the physical world.

## Customers' Use Cases

By embedding AI into edge operations, industries are unlocking transformative efficiencies and innovations. In manufacturing, edge-powered AI enables real-time quality control and predictive maintenance, minimizing downtime and maximizing productivity. In retail, AI enhances customer experiences with personalized recommendations and streamlined inventory management. Similarly, finance leverages AI's capabilities for robust fraud detection and advanced risk management. Moreover, sectors like government and defense are increasingly adopting edge AI for safety-critical applications, enabling autonomous, real-time surveillance and response solutions that are both efficient and resilient. These advancements are paving the way for scalable, adaptive solutions that meet the unique demands of diverse operational environments.

## Azure's Adaptive Cloud Approach: enabling AI from cloud to edge

Building on the promise to unify cloud and edge, Azure's adaptive cloud approach empowers teams to develop and scale AI workloads seamlessly across diverse environments. By enabling a unified suite of services tailored for modern AI applications, whether deployed in public clouds or distributed locations, Azure Arc enables streamlined operations with enhanced security and resilience. Central to extending AI services to the edge is our commitment to adaptive, scalable, and efficient solutions tailored to diverse operational needs. Azure Arc plays a key role in this vision by facilitating seamless deployment and management of AI workloads across various environments. This week, we're excited to share that a subset of Microsoft Azure AI Foundry models, such as Phi and Mistral, have been rigorously validated to run on Azure Local enabled by Azure Arc.

Our investments are reflected in two primary areas:

- Foundational tools for MLOps and developer frameworks, which empower teams to build robust AI applications.
- Intuitive, end-to-end low-code experiences designed for data analysts and solution developers. These low-code tools prioritize user-friendly interfaces and rapid deployment, enabling the creation of solutions with just a few clicks.

This dual focus ensures enterprises can fully harness the potential of edge AI while maintaining flexibility and operational efficiency.

Image 1: This high-level diagram illustrates our vision for cloud-to-edge AI workloads, enabled by Azure Arc. Some components (agents and integration with AI Foundry and Foundry Local) are still under development, while others are more advanced and have been released to the market.

## Build 2025: New Capabilities and Releases

This strategic vision is now being realized through a wave of new capabilities unveiled at Build 2025.
These innovations are designed to accelerate edge AI adoption and simplify the developer experience, making it easier than ever to build, deploy, and manage intelligent applications across hybrid environments.

Announcements related to developer building blocks:

- Kubernetes AI Toolchain Orchestrator (KAITO), enabled by Azure Arc (public preview)
- Foundry Local (public preview) for Windows apps to be deployed on any client device; read more here.
- Workload orchestration (public preview)
- Application development tools for Kubernetes enabled by Arc (public preview)

Refer to this blog to read more: https://aka.ms/AdaptiveCloudBuild2025

Announcements related to end-to-end experiences:

- Edge RAG, enabled by Azure Arc, is now available in public preview.
- Azure AI Video Indexer for recorded files, enabled by Arc, has been generally available since April 2025.
- Azure AI Video Indexer for live video analysis, enabled by Arc, is available in private preview for a limited set of customers.

## Customer scenarios: enabling search and retrieval for on-premises data on Azure Local

Edge RAG targets customers who have data that needs to stay on premises due to data gravity, security and compliance, or latency requirements. We have observed significant and consistent interest from highly regulated sectors. These entities are exploring the use of RAG capabilities in disconnected environments through Azure Local.

DataON is a hybrid cloud computing company for enterprises of all sizes, with a focus on educational institutions and local government agencies. Recently, they have worked with their customers to successfully deploy our RAG solution on CPU and GPU clusters and begin testing with sample end-customer data.

"DataON has been actively exploring how Edge RAG can enhance our Microsoft Azure Local solutions by providing more efficient data retrieval and decision-making capabilities. It's exciting to be part of the private preview program and see firsthand how Edge RAG is shaping the future of data-driven insights." Howard Lo | VP, Sales & Marketing | DataON

This capability brings generative AI and RAG to on-premises data. Edge RAG was validated on AKS running on Azure Local. Based on DataON and other customer feedback, we have expanded the version to include new features:

- Model updates: ability to use any model compatible with the OpenAI inferencing standard APIs (see the sketch later in this post).
- Multilingual support: 100+ common languages for document ingestion and question-answer sessions.
- Multimodal support: support for image ingestion and retrieval during question-answer sessions.
- Search types: support for text, vector, hybrid text, and hybrid text+image searches.
- Ingestion scale-out: integration with KEDA for a fully parallelized, high-throughput ingestion pipeline.
- Evaluation workflow with RAG metrics: integrated workflow with a built-in or customer-provided sample dataset.

Read more about Edge RAG in this blog: https://aka.ms/AzureEdgeAISearchenabledbyArc.

## AI Workloads for Disconnected Operations

In fully disconnected (air-gapped or non-internet) environments, such as those often found in government and defense sectors, technologies like RAG can be deployed on-premises or in secure private clouds. This is currently available with limited access.

Use cases:

- Video analysis: automatically analyzes video and audio content to extract metadata such as objects and scenes. Use cases include live video analysis, mission debriefing and training, and modern safety.
- Models consumption: a central repository for securely managing, sharing, and deploying AI/ML models. Use cases include model governance, rapid deployment of mission-specific models, and inter-agency collaboration.
- Retrieval-Augmented Generation (RAG): combines LLMs with a document retrieval system to generate accurate, context-aware responses based on internal knowledge bases. Use cases include field briefings, legal and policy compliance, and cybersecurity incident response.

## Transforming Industries with AI: Real-World Stories from the Edge

Across industries, organizations are embracing AI to solve complex challenges, enhance operations, and deliver better outcomes. From healthcare to manufacturing, retail to energy, and even national security, Azure AI solutions are powering innovation at scale.

In the manufacturing sector, a global company sought to optimize production and reduce costly downtime. Azure AI Video Indexer monitored video feeds from production lines to catch defects early, while custom predictive maintenance models from the Model Catalog helped prevent equipment failures. RAG provided real-time insights into operations, empowering managers to make smarter decisions by asking questions. These tools collectively boosted efficiency, minimized downtime, and improved product quality.

At airports, Azure AI helped enhance passenger experience and safety. From monitoring queue lengths and tracking vehicles to detecting falls and identifying restricted-area breaches, the combination of Azure Local, Video Indexer, Azure IoT Operations, and custom AI created a smarter, safer airport environment.

Retailers, too, are reaping the benefits. A major retail chain used Azure AI to understand in-store customer behavior through video analytics, optimize inventory with demand-forecasting models, and personalize shopping experiences using RAG. These innovations led to better customer engagement, streamlined inventory management, and increased sales.

In healthcare, a leading provider operating multiple hospitals and clinics nationwide faced the daunting task of analyzing massive volumes of patient data, from medical records and imaging to real-time feeds from wearable devices. With strict privacy regulations in play, they turned to Azure AI. Using Azure AI Video Indexer, they analyzed imaging data like X-rays and MRIs to detect anomalies. The Model Catalog enabled predictive analytics to identify high-risk patients and forecast readmissions. Meanwhile, Retrieval-Augmented Generation (RAG) gave doctors instant access to patient histories and relevant medical literature. The result? More accurate diagnoses, better patient care, and full regulatory compliance.

These stories highlight how Azure Arc-enabled AI workloads are not just a set of tools; they are a catalyst for transformation. Whether it's saving lives, improving safety, or driving business growth, the impact is real, measurable, and growing every day.

## Learn More

Whether you are tuning in online or joining us in person, we wish you a fun and exciting Build 2025! The advancements in AI at the edge are set to revolutionize how we build, deploy, and manage applications, providing greater speed, agility, and security for businesses around the world.
Recommended Build sessions:

- Breakout session (BRK188): Power your AI apps across cloud and edge with Azure Arc
- Breakout session (BRK183): Improving App Health with Health Modeling and Chaos Engineering
- Breakout session (BRK195): Inside Azure innovations with Mark Russinovich
- Breakout session (BRK168): AI and Agent Observability in Azure AI Foundry and Azure Monitor
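Before closing, a practical note on the Edge RAG model support mentioned earlier: the feature list says any model compatible with the OpenAI inferencing standard APIs can be used, so client code can target the familiar chat completions shape. The sketch below is illustrative only; the endpoint URL, model name, and api-key header are placeholders, not documented values.

```powershell
# Hedged sketch: calling an OpenAI-compatible chat completions endpoint, the
# API shape Edge RAG's model support is described against. All endpoint
# details below are placeholders for whatever your local deployment exposes.
$body = @{
    model    = "phi-3.5-mini"
    messages = @(
        @{ role = "user"; content = "Summarize our on-premises maintenance runbook." }
    )
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "https://edge-rag.contoso.local/v1/chat/completions" `
    -Headers @{ "api-key" = "<key>" } `
    -ContentType "application/json" `
    -Body $body
```

# Comparing feature sets for AKS enabled by Azure Arc deployment options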
This article compares the features available for the different deployment options under AKS enabled by Azure Arc.

| | AKS on Azure Stack HCI, version 23H2 | AKS Edge Essentials | AKS on Windows Server and AKS on Azure Stack HCI 22H2 |
| --- | --- | --- | --- |
| Supported infrastructure where the Kubernetes clusters are hosted | Azure Stack HCI, version 23H2 | Windows 10/11 IoT Enterprise, Windows 10/11 Enterprise, Windows 10/11 Pro, Windows Server 2019/2022 | Azure Stack HCI 22H2, Windows Server 2019, Windows Server 2022 |
| CNCF conformant? | Yes | Yes | Yes |
| K8s cluster lifecycle management tools (create, scale, upgrade, and delete clusters) | Az CLI, Az PowerShell, Azure Portal, ARM templates | PowerShell | PowerShell, Windows Admin Center |
| Kubernetes cluster management plane | Kubernetes clusters are managed by Arc Resource Bridge, which runs as part of the infrastructure components on the Azure Stack HCI cluster. | Kubernetes clusters are self-managed, to preserve resources. | Kubernetes clusters are managed using a "management cluster" that is installed using PowerShell before Kubernetes workload clusters can be created. |
| Can you use kubectl and other open-source Kubernetes tools? | Yes | Yes | Yes |
| Supported Kubernetes versions | Supports K8s only. Continuous updates to supported Kubernetes versions. For latest version support, run az aksarc get-versions. | Supports K3s and K8s. Continuous updates to supported Kubernetes versions. For the latest version, visit steps to prepare your machine for AKS Edge Essentials. | Supports K8s only. Continuous updates to supported Kubernetes versions. For latest version support, visit AKS hybrid releases on GitHub. |
| Azure Fleet Manager integration | No | No | No |
| Terraform integration | Not yet | No | No |
| Azure Monitor integration | Yes, via Arc extensions | Yes, via Arc extensions | Yes, via Arc extensions |

The following is a comparison between node pool capabilities for AKS enabled by Azure Arc deployment options:

| | AKS on Azure Stack HCI, version 23H2 | AKS Edge Essentials | AKS on Windows Server and Azure Stack HCI 22H2 |
| --- | --- | --- | --- |
| Windows node pool support | Yes: Windows Server 2019 Datacenter, Windows Server 2022 Datacenter | Yes: Windows Server 2022 Datacenter (Core) | Yes: Windows Server 2019 Datacenter, Windows Server 2022 Datacenter |
| Linux OS options | CBL-Mariner | CBL-Mariner | CBL-Mariner |
| Container runtime | Containerd for Linux and Windows nodes | Containerd for Linux and Windows nodes | Containerd for Linux and Windows nodes |
| Node pool autoscaler | Yes | No (manually add nodes) | Yes |
| Horizontal pod autoscaler | Yes | No | Yes |
| GPU support | Yes | No | Yes |
| Azure Container Registry | Yes | Yes | Yes |

The following is a comparison between networking features for AKS enabled by Azure Arc deployment options:

| | AKS on Azure Stack HCI, version 23H2 | AKS Edge Essentials | AKS on Windows Server and Azure Stack HCI 22H2 |
| --- | --- | --- | --- |
| Network creation and management | You need to create the network in Azure Stack HCI 23H2 before creating an AKS cluster. You also need to ensure the network has the right connectivity and IP address availability for a successful cluster creation and operation. | You need to provide an IP address range for node IPs and service IPs that is available and has the right connectivity. The network configuration needed for the cluster is handled by AKS. Read AKS Edge Essentials networking. | You need to create the network in Windows Server before creating an AKS cluster. You also need to ensure the network has the right connectivity and IP address availability for a successful cluster creation and operation. |
| Supported networking options | Static IP networks with/without VLAN ID | Static IP address, or use reserved IPs when using DHCP | DHCP networks with/without VLAN ID; static IP networks with/without VLAN ID |
| SDN support | No | No | Yes |
| Supported CNIs | Calico | Calico (K8s), Flannel (K3s) | Calico |
| Load balancer | MetalLB Arc extension; bring your own load balancer (BYOLB) | KubeVIP; MetalLB Arc extension; bring your own load balancer (BYOLB) | HAProxy; MetalLB Arc extension; SDN load balancer; bring your own load balancer (BYOLB) |

The following is a comparison between storage features for AKS enabled by Azure Arc deployment options:

| | AKS on Azure Stack HCI, version 23H2 | AKS Edge Essentials | AKS on Windows Server and Azure Stack HCI 22H2 |
| --- | --- | --- | --- |
| Types of supported persistent volumes | Read Write Once; Read Write Many | PVC using local storage | Read Write Once; Read Write Many |
| Container Storage Interface (CSI) support | Yes | Yes | Yes |
| CSI drivers | Disk and Files (SMB and NFS) drivers installed by default | Support for SMB and NFS storage drivers | Support for SMB and NFS storage drivers |
| Dynamic provisioning support | Yes | Yes | Yes |
| Volume resizing support | Yes | Yes | Yes |

The following is a comparison between security and authentication options in AKS and AKS enabled by Azure Arc:

| | AKS on Azure Stack HCI, version 23H2 | AKS Edge Essentials | AKS on Windows Server and Azure Stack HCI 22H2 |
| --- | --- | --- | --- |
| Access to Kubernetes clusters | kubectl | kubectl | kubectl |
| Kubernetes cluster authentication | Certificate-based kubeconfig; Microsoft Entra ID | Certificate-based kubeconfig; Microsoft Entra ID | Certificate-based kubeconfig; Microsoft Entra ID; Active Directory SSO |
| Kubernetes cluster authorization (RBAC) | Kubernetes RBAC; Azure RBAC | Kubernetes RBAC | Kubernetes RBAC |
| Support for network policies | No | No | Yes (only for Linux containers) |
| Limit source networks that can access API server | Yes | Yes | Yes |
| Certificate rotation and encryption | Yes | Yes | Yes |
| Secrets Store CSI driver | Yes | Yes | Yes |
| gMSA support | No | Yes | Yes |
| Azure Policy | Yes, via Arc extensions | Yes, via Arc extensions | Yes, via Arc extensions |
| Azure Defender | No | Yes, via Arc extensions (preview) | Yes, via Arc extensions (preview) |

The following is a comparison between pricing and SLA for AKS and AKS enabled by Azure Arc:

| | AKS on Azure Stack HCI, version 23H2 | AKS Edge Essentials | AKS on Windows Server and Azure Stack HCI 22H2 |
| --- | --- | --- | --- |
| Pricing | Pricing is based on the number of workload cluster vCPUs. Control plane nodes are free. Azure Stack HCI, version 23H2 is priced at $10/physical core, and AKS workload VMs are $24/vCPU/month. | $2.50 per device per month. | Pricing is based on the number of workload cluster vCPUs. Control plane nodes and load balancer VMs are free. Azure Stack HCI, version 23H2 is priced at $10/physical core, and AKS workload VMs are $24/vCPU/month. |
| Azure Hybrid Benefit support | Yes | No | Yes |
| SLA | No SLA offered since the Kubernetes cluster runs on-premises. | No SLA offered since the Kubernetes cluster runs on-premises. | No SLA offered since the Kubernetes cluster runs on-premises. |
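As an illustration of the lifecycle tooling compared above, here is a hedged sketch of creating an AKS Arc workload cluster on Azure Stack HCI 23H2 with the Az CLI. Resource names and IDs are placeholders; check the aksarc extension documentation for the full parameter set your environment requires.

```powershell
# Hedged sketch: AKS Arc cluster lifecycle with the Az CLI aksarc extension on
# the 23H2 platform. All names and resource IDs below are placeholders.
az extension add --name aksarc

# List the Kubernetes versions the platform currently supports (see table above)
az aksarc get-versions `
    --resource-group "rg-azure-local" `
    --custom-location "cl-azlocal"

# Create a workload cluster attached to the system's custom location and logical network
az aksarc create `
    --name "aks-workload-01" `
    --resource-group "rg-azure-local" `
    --custom-location "cl-azlocal" `
    --vnet-ids "/subscriptions/<sub>/resourceGroups/rg-azure-local/providers/Microsoft.AzureStackHCI/logicalNetworks/ln-aks" `
    --node-count 3 `
    --generate-ssh-keys
```

# GA: Inbound private endpoint for Standard v2 tier of Azure API Management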
Standard v2 was announced in general availability on April 1st, 2024. Customers can now configure an inbound private endpoint for their API Management Standard v2 instance to allow clients in their private network to securely access the API Management gateway over Azure Private Link.

The private endpoint uses an IP address from the Azure virtual network in which it's hosted. Network traffic between a client on your private network and API Management traverses the virtual network and a Private Link on the Microsoft backbone network, eliminating exposure from the public internet. Further, you can configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address.

## Inbound private endpoint

With a private endpoint and Private Link, you can:

- Create multiple Private Link connections to an API Management instance.
- Use the private endpoint to send inbound traffic on a secure connection.
- Use policy to distinguish traffic that comes from the private endpoint.
- Limit incoming traffic only to private endpoints, preventing data exfiltration.
- Combine with outbound virtual network integration to provide end-to-end network isolation of your API Management clients and backend services.

Today, only the API Management instance's Gateway endpoint supports inbound Private Link connections. In addition, each API Management instance can support at most 100 Private Link connections.

## Typical scenarios

You can use an inbound private endpoint to enable private-only access directly to the API Management gateway to limit exposure of sensitive data or backends. Some of the common supported scenarios include:

- Pass client requests through a firewall and configure rules to route requests privately to the API Management gateway.
- Configure Azure Front Door (or Azure Front Door with Azure Application Gateway) to receive external traffic and then route traffic privately to the API Management gateway. For example, see Connect Azure Front Door Premium to an Azure API Management with Private Link.

## Learn more

- API Management v2 tiers FAQ
- API Management v2 tiers documentation
- API Management overview documentation
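To make the setup concrete, here is a hedged sketch of creating the inbound private endpoint with the Azure CLI. The resource names and subscription ID are placeholders; the Gateway group ID reflects the note above that only the gateway endpoint supports inbound connections.

```powershell
# Hedged sketch: creating an inbound private endpoint for an API Management
# Standard v2 instance. All names and the subscription ID are placeholders.
az network private-endpoint create `
    --name "pe-apim-gateway" `
    --resource-group "rg-apim" `
    --vnet-name "vnet-hub" `
    --subnet "snet-private-endpoints" `
    --private-connection-resource-id "/subscriptions/<sub>/resourceGroups/rg-apim/providers/Microsoft.ApiManagement/service/apim-standardv2" `
    --group-id "Gateway" `
    --connection-name "apim-gateway-connection"
```

# Troubleshoot the Azure Arc Agent in Azure using Azure Monitor & Log Analytics Workspace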
This article explores how to centralize logging from on-premises servers, both physical and virtual, into a single Log Analytics workspace. The goal is to enhance monitoring capabilities for the Azure Arc Connected Machine agent running on these servers. Rather than relying on scattered and unstructured .log files on individual machines, this approach enables customers to collect, analyze, and gain insights from multiple agents in one centralized location. This not only simplifies troubleshooting but also unlocks richer observability across the hybrid environment.
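Once the agent logs flow into the workspace, you can query them with KQL. Below is a hedged sketch; the custom table name azcmagent_CL is an assumed placeholder chosen for illustration, so substitute the table your data collection rule actually writes to.

```powershell
# Hedged sketch: querying the centralized Log Analytics workspace for recent
# Connected Machine agent errors. The table name azcmagent_CL is a placeholder;
# use whatever custom table your data collection rule targets.
$query = @"
azcmagent_CL
| where TimeGenerated > ago(24h)
| where RawData has "error"
| summarize ErrorCount = count() by Computer
| order by ErrorCount desc
"@

Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
```

# Extending Azure's AI Platform with an adaptive cloud approach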
Authored by Derek Bogardus and Sanjana Mohan, Azure Edge AI Product Management

Ignite 2024 is here, and nothing is more top of mind for customers than the potential to transform their businesses with AI wherever they operate. Today, we are excited to announce the preview of two new Arc-enabled services that extend the power of Azure's AI platform to on-premises and edge environments. Sign up to join the previews here!

## An adaptive cloud approach to AI

The goal of Azure's adaptive cloud approach is to extend just enough Azure to customers' distributed environments. For many of these customers, valuable data is generated and stored locally, outside of the hyperscale cloud, whether due to regulation, latency, business continuity, or simply the large volume of data being generated in real time. AI inferencing can only occur where the data exists. So, while the cloud has become the environment of choice for training models, we see a tremendous need to extend inferencing services beyond the cloud to enable complete cloud-to-edge AI scenarios.

## Search on-premises data with generative AI

Over the past couple of years, generative AI has come to the forefront of AI innovation. Language models give any user the ability to interact with large, complex data sets in natural language. Public tools like ChatGPT are great for queries about general knowledge, but they can't answer questions about private enterprise data on which they were not trained. Retrieval-Augmented Generation, or "RAG", helps address this need by augmenting language models with private data. Cloud services like Azure AI Search and Azure AI Foundry simplify how customers can use RAG to ground language models in their enterprise data.

Today, we are announcing the preview of a new service that brings generative AI and RAG to your data at the edge. Within minutes, customers can deploy an Arc extension that contains everything needed to start asking questions about their on-premises data, including:

- Popular small and large language models running locally, with support for both CPU and GPU hardware
- A turnkey data ingestion and RAG pipeline that keeps all data completely local, with RBAC controls to prevent unauthorized access
- An out-of-the-box prompt engineering and evaluation tool to find the best settings for a particular dataset
- Azure-consistent APIs to integrate into business applications, as well as a pre-packaged UI to get started quickly

This service is available now in gated private preview for customers running Azure Local infrastructure, and we plan to make it available on other Arc-enabled infrastructure platforms in the near future. Sign up here!

## Deploy curated open-source AI models via Azure Arc

Another great thing about Azure's AI platform is that it provides a catalog of curated AI models that are ready to deploy and provide consistent inferencing endpoints that can be integrated directly into customer applications. This not only makes deployment easy, but customers can also be confident that the models are secure and validated.

These same needs exist on the edge as well, which is why we are now making a set of curated models deployable directly from the Azure portal. These models have been selected, packaged, and tested specifically for edge deployments, and are currently available on Azure Local infrastructure:
- Phi-3.5 Mini (3.8 billion parameter language model)
- Mistral 7B (7.3 billion parameter language model)
- MMDetection YOLO (object detection)
- OpenAI Whisper Large (speech to text)
- Google T5 Base (translation)

Models can be deployed from a familiar Azure portal wizard to an Arc-enabled AKS cluster running on premises. All models available today can run on just a CPU; Phi-3.5 and Mistral 7B also have GPU versions available for better performance. Once deployment is complete, it can be managed directly in Azure ML Studio, and an inferencing endpoint is available on your local network (a hedged sketch of calling such an endpoint follows below).

## Wrap up

Sign up now to join either of the previews at the link below, or stop by and visit us in person at the Azure Arc and Azure Local Expert Meet Up station in the Azure Infrastructure neighborhood at Ignite. We're excited to get these new capabilities into our customers' hands and hear from you how it's going.

Sign up to join the previews here
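As a closing illustration of the local inferencing endpoint mentioned above, here is a hedged sketch of calling a deployed model from the local network. The IP, route, and payload shape are placeholders; check the endpoint details surfaced in Azure ML Studio for your actual deployment.

```powershell
# Hedged sketch: invoking a locally deployed model's inferencing endpoint.
# The IP, route, and request schema below are illustrative placeholders only.
$body = @{
    input_data = @{
        input_string = @("What anomalies were reported on production line 3 today?")
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "http://10.0.0.50/score" `
    -ContentType "application/json" `
    -Body $body
```

# Arc Jumpstart Newsletter: April 2025 Edition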
We're thrilled to bring you the latest updates from the Arc Jumpstart team in this month's newsletter. Whether you are new to the community or a regular Jumpstart contributor, this newsletter will keep you informed about new releases, key events, and opportunities to get involved within the Azure Adaptive Cloud ecosystem. Check back each month for new ways to connect, share your experiences, and learn from others in the Adaptive Cloud community.

# Upgrade to Azure Local, version 23H2 OS
Azure Stack HCI is now part of Azure Local. Learn more.

Today, we're sharing an update on the upgrade to the Azure Local, version 23H2 OS. As described in a previous blog post, moving from Azure Stack HCI, version 22H2 to Azure Local, version 23H2 is a two-step process. The first step is to upgrade the operating system using existing processes and tools. The second step is to apply the upgrade for the mandatory Azure Arc solution enablement, which is a guided experience via the Azure portal.

It is urgent that you perform the first step and install the OS upgrade to the Azure Local, version 23H2 OS by May 31, 2025. After that date, the Azure Stack HCI, version 22H2 OS reaches end of support, and your system will stop receiving critical security updates and will not be eligible for support requests.

Hear directly from one of our Azure Local customers, Australian supermarket brand Coles, on their experience successfully upgrading the OS for 1000 Azure Local machines in just 8 days!

## Coles: "Preparing for the Future: Upgrading Azure Local (Stack HCI) from 22H2 to 23H2"

As we approach the end of life for Azure Local (Stack HCI) 22H2 in May 2025, it is crucial to prepare for the upcoming changes in management and deployment introduced in 23H2. This transition marks a significant step towards enhancing our cloud deployment and upgrade processes. The first step in this journey is to upgrade the operating system and cluster functional levels. This foundational move sets the stage for a seamless transition to the new version.

Our team successfully upgraded 1000 nodes from 22H2 to 23H2 in just 8 days, showcasing our dedication and efficiency. This remarkable achievement was made possible through meticulous planning, including risk assessments, timeline creation, resource allocation, and establishing contingency plans to ensure a smooth transition.

Several key technologies and processes played a pivotal role in this upgrade. We leveraged PowerShell scripts for much of the process, finding them to be the most reliable and repeatable method. Through comprehensive testing, we identified improvements that ensured we maintained high standards and minimized risks for the production system rollouts. While the thought of upgrading so many nodes was daunting, utilizing these familiar tools significantly eased the upgrade process. Our team's expertise with these tools enabled us to address challenges promptly and maintain a steady pace. Additionally, cross-departmental collaboration was crucial in streamlining operations and troubleshooting issues effectively.

Looking ahead, we are excited about the new features and enhancements in 23H2. We are also planning to further refine our upgrade processes based on the insights gained from this experience.

In conclusion, the successful upgrade from 22H2 to 23H2 demonstrates our team's capability to manage complex transitions efficiently. As we continue to innovate and improve our Azure Local deployment strategies, we remain committed to delivering high-quality solutions that meet the evolving needs of our organization.

Jason Tayler, Lead Senior Systems Engineer, Coles

## Conclusion

Coles is one of many customer success stories, and we hope this inspires you to upgrade your systems! On behalf of the Azure Local team, we thank you for your continuous trust and feedback.

## Learn more

To learn more about installing the OS upgrade, refer to the upgrade documentation. For known issues and remediation guidance, see the Azure Local Supportability GitHub repository.
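The Coles write-up mentions upgrading the operating system and cluster functional levels as the first step. For readers planning the same move, here is a hedged sketch of the standard post-OS-upgrade PowerShell steps; validate them against the official upgrade documentation before running in production.

```powershell
# Hedged sketch: raising the cluster functional level and storage pool version
# after every node is running the 23H2 OS. These are standard failover
# clustering / Storage Spaces Direct steps; confirm against the official
# upgrade documentation for your environment.

# Raise the cluster functional level (irreversible once committed)
Update-ClusterFunctionalLevel

# Upgrade the storage pool version for the non-primordial pool(s)
Get-StoragePool | Where-Object { -not $_.IsPrimordial } | Update-StoragePool

# Confirm the new functional level
Get-Cluster | Select-Object Name, ClusterFunctionalLevel
```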