What's new in Azure Container Apps at Build'25
Azure Container Apps is a fully managed serverless container service that runs microservices and containerized applications on Azure. It provides built-in autoscaling, including scale to zero, and offers a simplified developer experience with support for multiple programming languages and frameworks, including special features built for .NET and Java. Container Apps also provides many advanced networking and monitoring capabilities, offering seamless deployment and management of containerized applications without the need to manage underlying infrastructure.

Following the features announced at Ignite'24, we've continued to innovate and enhance Azure Container Apps. We announced the general availability of Serverless GPUs, enabling seamless AI workloads with automatic scaling, optimized cold start, per-second billing, and reduced operational overhead. We added a preview of JavaScript code interpreter support for Dynamic Sessions for applications that require the execution of potentially malicious JavaScript code, such as code provided by end users. Furthermore, we partnered with Aqua Security to enhance the security of Azure Container Apps, offering comprehensive image scanning, runtime protection, and supply chain security. These advancements ensure that Azure Container Apps remains a trusted platform for running scalable, secure, and resilient containerized applications. The features we're announcing at Build'25, including new Serverless GPU integrations and many new networking and observability features that enterprises care about, further deepen this commitment.

Running AI workloads on Azure Container Apps

Azure Container Apps efficiently supports AI workloads with features like serverless GPUs with NVIDIA NIM integration, dynamic sessions with Hyper-V isolation, and integrations for enhanced performance and scalability. We are furthering this feature set by announcing new capabilities and integrations for Azure Container Apps.
Deploy Foundry Models on Serverless GPUs for Inferencing

Azure Container Apps now provides an integration with Foundry Models, which allows you to deploy ready-to-use AI models directly during container app creation. This integration supports serverless APIs with pay-as-you-go billing and managed compute with pay-per-GPU pricing, providing flexibility in deploying Foundry models.

Announcing General Availability of Dedicated GPUs

Dedicated GPUs in Azure Container Apps are now generally available, and they simplify AI application development and deployment by reducing management overhead. The feature includes built-in support for key components like the latest CUDA driver, turnkey networking, and security features, allowing you to focus on your AI application code.

Early access to Serverless GPU in Dynamic Sessions

Serverless GPU in Azure Container Apps Dynamic Sessions is now available as an early access feature, enabling you to run untrusted AI-generated code at scale within compute sandboxes protected by Hyper-V isolation. This feature supports a GPU-powered Python code interpreter to better handle AI workloads. Microsoft Dev Box offers an integration with Serverless GPU in Dynamic Sessions through the Dev Box GPU Shell feature.

Advanced Networking capabilities

Azure Container Apps offers many networking capabilities, including custom VNet integration, private endpoints, user-defined routes, NAT Gateway support, and peer-to-peer encryption. We are extending these capabilities by offering new controls and features to support more nuanced network architectures.

Announcing General Availability of Private Endpoints

Private Endpoints for Azure Container Apps, now generally available, allow customers to connect to their Container Apps environment using a private IP address in their Azure Virtual Network. This eliminates exposure to the public internet and secures access to their applications.
Additionally, customers can connect directly from Azure Front Door to their workload profile environments over a private link instead of the public internet.

New Premium Ingress capabilities

The new premium ingress feature in Azure Container Apps allows for customizable ingress scaling, enabling better handling of higher-demand workloads like large performance tests. It introduces environment-level ingress configuration options, including termination grace period, idle request timeout, and header count.

Announcing Public Preview of rule-based routing

We are adding a rule-based routing feature in Azure Container Apps that allows you to direct incoming HTTP traffic to different apps within your Container Apps environment based on the requested host name or path. This simplifies your architecture for microservice applications, A/B testing, blue-green deployments, and more, without needing a separate reverse proxy.

Observability and debugging capabilities

Azure Container Apps provides several built-in observability features that give you a holistic view of your container app's health throughout its application lifecycle. These features help you monitor and diagnose the state of your app to improve performance and respond to trends and critical problems. We are extending these existing capabilities by introducing new observability and debugging features.

Announcing General Availability of OpenTelemetry Collector

The OpenTelemetry agent in Azure Container Apps is now generally available, allowing developers to use open-source standards to send app data without setting up the collector themselves. The managed agent collects and exports telemetry data to various endpoints, including Azure Monitor Application Insights, Datadog, and any generic OTLP-configured endpoint.
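To make "any generic OTLP-configured endpoint" concrete on the application side: OpenTelemetry SDKs in every language read a set of standard environment variables defined by the OpenTelemetry specification. The sketch below shows those variables; the service name and collector endpoint are placeholder values, not anything from a real deployment.

```shell
# Standard environment variables from the OpenTelemetry specification;
# any OTel SDK (Python, .NET, Java, Node, ...) reads these at startup.
# The values below are placeholders for illustration only.
export OTEL_SERVICE_NAME="my-container-app"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"

echo "service=${OTEL_SERVICE_NAME} endpoint=${OTEL_EXPORTER_OTLP_ENDPOINT}"
```

With the managed OpenTelemetry agent described above, this per-app collector wiring is exactly the kind of setup you no longer have to manage yourself.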
Announcing General Availability of Aspire dashboard

The .NET 8 Aspire dashboard in Azure Container Apps is now generally available, providing live data about your project and containers in the cloud to evaluate performance and debug errors with comprehensive logs, metrics, and traces. In addition, we now support the newest version of Aspire (v9.2), which includes new visualization features and the ability to pause/resume telemetry, and it will be globally available in the coming weeks.

New Diagnose and Solve dashboard

The new Diagnose and Solve dashboard for Azure Container Apps provides a comprehensive overview of app health, performance, and resource utilization, with insights into apps, jobs, replicas, node count, and CPU usage over time. It also includes new detectors to diagnose and resolve issues such as container create failures, health probe failures, and image pull failures.

Integration with Azure SRE agent

Azure Container Apps integrates seamlessly with the Azure SRE agent to enhance operational efficiency and application uptime. By continuously monitoring application health and performance, the SRE agent provides valuable insights and autonomously responds to production alerts, mitigating issues with minimal intervention. This integration allows developers to leverage the SRE agent to monitor Azure Container Apps resources, from Container App Environments to Apps to Revisions to Replicas, ensuring faster troubleshooting and proactive issue resolution.

Enhanced Enterprise capabilities

In addition to these announcements, we are introducing several enhanced Enterprise features to Azure Container Apps.

Announcing General Availability of Azure Container Apps on Arc-enabled Kubernetes

The ability to run Azure Container Apps on your own Azure Arc-enabled Kubernetes clusters (AKS and AKS-HCI) is now generally available.
This allows developers to leverage Azure Container Apps features while IT administrators maintain corporate compliance by hosting applications in hybrid environments.

Announcing General Availability of Planned Maintenance in Azure Container Apps

Planned Maintenance for Azure Container Apps is now generally available, allowing you to control when non-critical updates are applied to your environment. This helps minimize downtime and impact on applications. Critical updates are applied as needed to ensure security and reliability compliance.

Announcing Public Preview of workflow capabilities with Durable Task Scheduler

The new advanced pro-code workflow feature in Azure Container Apps, leveraging the durable task scheduler, is now in public preview. With the durable task scheduler in Container Apps, you can create reliable workflows as code, leveraging state persistence and fault-tolerant execution. These containerized workflows enhance scalability and reliability and streamline monitoring for administration of complex workflows.

Native Azure Functions in Azure Container Apps

The new, streamlined method for running Azure Functions natively in Azure Container Apps allows customers to leverage the full features and capabilities of Azure Container Apps while benefiting from the simplicity of auto-scaling provided by Azure Functions. With the new native hosting model, customers can deploy Azure Functions directly onto Azure Container Apps with the same experience as deploying other containerized applications. Customers also get the complete feature set of Azure Container Apps with this new deployment experience, including multi-revision management, easy authentication, metrics and alerting, health probes, and many more.
Azure Container Apps at Build'25 conference

Also, if you're at Build, come see us at the following sessions:

Breakout 182: Better Microservices Development using Azure Container Apps
Breakout 190: Secure Next-Gen AI Apps with Azure Container Apps Serverless GPUs
Lab 341: Agentic AI Inferencing with Azure Container Apps
Community Table Talk 457: App Reliability, Azure Container Apps, & Serverless GPUs
Breakout 186: Earth's Defense with Hera: AI Agents Battle Planet Extinction Threats
Breakout 187: Event-Driven Architectures: Serverless Apps That Slay at Scale
Breakout 201: Innovate, deploy, & optimize your apps without infrastructure hassles
Breakout 117: Use VS Code to build AI apps and agents
Breakout 185: Maximizing efficiency in cloud-native app design
Demo 544: Building Resilient Cloud-Native Microservices

Or come talk to us at the Serverless booth at the Expert Meet-up area at the Hub!

Wrapping up

As always, we invite you to visit our GitHub page for feedback, feature requests, or questions about Azure Container Apps, where you can open a new issue or up-vote existing ones. If you're curious about what we're working on next, check out our roadmap. We look forward to hearing from you!

What's New in Azure App Service at #MSBuild 2025
New App Service Premium v4 plan

The new App Service Premium v4 (Pv4) plan has entered public preview at Microsoft Build 2025 for both Windows and Linux! This new plan is designed to support today's highly demanding application performance, scale, and budgets. Built on the latest "v6" general-purpose virtual machines and memory-optimized x64 Azure hardware with faster processors and NVMe temporary storage, it provides a noticeable performance uplift over prior generations of App Service Premium plans (over 25% in early testing). The Premium v4 offering includes nine new sizes, ranging from P0v4 with a single virtual CPU and 4GB RAM all the way up through P5mv4 with 32 virtual CPUs and 256GB RAM, providing CPU and memory options to meet any business need.

App Service Premium v4 plans provide attractive price-performance across the entire performance curve for both Windows and Linux customers. Premium v4 customers using pay-as-you-go (PAYG) on Azure App Service for Windows can expect to save up to 24% compared with prior Premium plans. We plan to provide deeper commitment-based discounts such as reserved instances and savings plans at GA. For more detailed pricing on the various CPU and memory options, see the pricing pages for Windows and Linux as well as the Azure Pricing Calculator. App Service currently has Pv4 deployed in a few regions, with more regions being regularly added. For more details on how to configure App Service plans with Premium v4, as well as a regularly updated list of regional availability, see the product documentation and start taking advantage of faster performance today!

2-zone Availability Zone support is now generally available

With a recently completed platform update in May, customers now enjoy the 99.99% Availability Zone (AZ) SLA when running on only two instances (instead of three)!
As part of this update, more parts of the App Service footprint have enabled AZ support "in place", which means many existing App Service plans can now also use Availability Zones. Availability Zone configuration for App Service plans is also now mutable. This means that if an App Service plan is running on an AZ-enabled part of the App Service footprint, customers can choose to enable and disable Availability Zone support at any time. Read more about the new Availability Zone options in the announcement article!

The ARM/CLI surface area for Availability Zone support has also been updated to provide increased visibility into AZ configuration details. The same enhanced visibility is also coming to the Azure Portal in June. With these changes, customers can determine if an App Service plan is on an AZ-enabled scale unit, as well as how many zones are available for zone spanning. This allows customers to deploy with either two zones, or three zones (where available), of zone spanning for their App Service plans. For App Service plans that are AZ-enabled, customers will also be able to see the physical zone placement of each AZ-enabled App Service plan.

Availability Zone support is available on the new Premium v4 plan, and is also supported with Premium v2, Premium v3, and the dedicated App Service Environment v3 (Isolated V2 plan). Check out the Availability Zone options for your App Service plans and start getting the benefits of zone resiliency today!

.NET Aspire on Azure App Service

.NET Aspire support is now available in public preview for App Service on Linux! .NET Aspire developers creating applications have an additional deployment option with App Service as a deployment target. Developers can create multi-app/multi-service .NET Aspire applications locally and deploy them into Azure using the new App Service deployment provider. The App Service and .NET Aspire teams worked together to create an App Service "provider" using .NET Aspire's new "provider model".
The provider translates the code-centric view of a .NET Aspire application topology into an Azure deployment mapped onto App Service constructs. The App Service provider supports securely deploying multiple .NET Aspire applications, with observability via the familiar .NET Aspire dashboard coming in the near future. The Getting Started with .NET Aspire on Azure App Service blog has instructions on how to create a .NET Aspire project for deployment onto App Service, as well as a link for providing feedback. If you happen to be at Build 2025, drop by our booth or the theatre session "DEM548: How .NET Aspire on App Service enhances modern app development" to see live demonstrations of the App Service support for .NET Aspire!

Using App Service to build agentic AI apps

The last few months of intelligent app development have seen a frenetic pace of change, with the rapid evolution of agents on Azure AI Foundry Agent Service and new agent extensibility options like Model Context Protocol (MCP) opening avenues for integrating existing data sources and APIs into agentic architectures. Here's a quick run-down of useful resources published recently:

This article demonstrates hosting a remote MCP server on Azure App Service. The sample is an adaptation of the weather service example from the MCP site. The App Service variation also includes an azd template for easy experimentation via a CLI deployment to App Service!

This article walks through integrating a .NET Core implementation of a "To-Do" list API running on App Service with an agent created on Azure AI Foundry Agent Service. It's a straightforward example demonstrating how developers can bring together the power of AI agents with existing web API investments.

Quick start guides for using App Service with Azure OpenAI in your language of choice: Python, Node, .NET, and Java.

Using Microsoft Research's latest 1-bit "super-small" language model, BitNet, on App Service.
Enhance search queries on text data stored in Azure SQL DB using natural language vector functions and Azure App Service. Includes an accompanying azd example.

How to use Azure AI Search hybrid search capabilities from App Service with .NET (Blazor), Java (Spring Boot), Node (Express), or Python (FastAPI).

Use GitHub Copilot to compare your application's bicep against a representative "best practices" bicep definition and then generate the necessary bicep diff.

In addition, using Sidecar for App Service on Linux, developers can easily connect Phi SLMs to their applications. Examples using the chat completion endpoint in the SLM sidecar extensions are available in this GitHub repo with code examples for .NET, Node, Python and Java. There are also accompanying docs for .NET, Node, Python (FastAPI) and Java (Spring Boot) which go into more detail on using the SLM sidecar extensions. The sidecar extensions capability is also now enabled in the Azure Portal.

AI Labs at Microsoft Build

For those of you attending Microsoft Build in person, we will have labs for additional hands-on experience using AI with Azure App Service.

LAB347: Add AI experiences to existing .NET apps using Sidecar in App Service

This lab (first lab occurrence and second lab occurrence - see Exercise 4) covers an e-commerce inventory API (written in .NET) integrated with an agent running on Azure AI Foundry Agent Service. When a customer interacts with the AI agent, it automatically invokes the appropriate web APIs to fetch real-time inventory information, add/remove products in a shopping cart, and increment/decrement product inventory. This is a great example of an AI-powered agent grounded in a company's ever-changing transactional data. As a fun sidenote, GitHub Copilot was used extensively to build >95% of the sample application, as well as to generate the OpenAPI specification that integrates the inventory web API with the AI agent!
The same AI-on-App Service lab (Exercise 1) walks developers through integrating a basic Azure OpenAI chat interface into a web application. The lab also demonstrates using a background WebJob on Linux with Azure OpenAI (Exercise 2) to categorize user sentiment for product reviews. The lab also shows (Exercise 3) how to use a small language model (SLM) like Microsoft's Phi-4 model in a WebJob to perform similar categorization, without the need to call out to an LLM. Although SLMs are not as powerful as LLMs, they are an interesting alternative for integrating AI functionality where either cost, or control over AI data flows, are considerations.

Azure SRE Agent for App Service

One of the big announcements at Build this year was the Agentic DevOps announcement, which includes the new Azure SRE Agent. Designed to empower Site Reliability Engineers (SREs), the SRE Agent is a new agentic service that can manage Azure application platform services, including App Service, Functions, and Azure Container Apps, to name just a few. It provides automatic incident response and mitigation, faster root cause analysis (RCA) of production issues, and continuous monitoring of application health and performance. With SRE Agent, you can use a natural language interface for managing your web applications on Azure App Service. To be an early adopter of the Agentic DevOps revolution, check out the announcement blog and sign up to join the SRE Agent preview as it starts rolling out!

WebJobs for App Service on Linux (GA)

WebJobs for App Service on Linux reached general availability earlier this month. With this functionality, developers can implement the same "infra-glue" style of background jobs that they have enjoyed with App Service on Windows. Take a look at the documentation demonstrating WebJobs support for shell scripts, Python, Java, .NET and Node on Linux!
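To make the WebJobs model concrete: a WebJob is essentially a script dropped into a well-known folder under your site's App_Data directory, a convention carried over from App Service on Windows; treat the exact layout on Linux as an assumption and confirm it against the documentation linked above. The job name `cleanup-temp` below is hypothetical.

```shell
# Sketch of a triggered WebJob layout (assumed convention:
# App_Data/jobs/triggered/<job-name>/run.sh, as on Windows App Service).
mkdir -p App_Data/jobs/triggered/cleanup-temp
cat <<'EOF' > App_Data/jobs/triggered/cleanup-temp/run.sh
#!/bin/sh
# The platform runs this script on demand or on a schedule.
echo "cleanup job ran at $(date -u +%Y-%m-%dT%H:%M:%SZ)"
EOF
chmod +x App_Data/jobs/triggered/cleanup-temp/run.sh

# Run it locally to sanity-check the script before deploying.
App_Data/jobs/triggered/cleanup-temp/run.sh
```

Deployed with your app content, a script like this is all a background job needs; continuous jobs follow the same idea under a `continuous` folder instead of `triggered`.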
As mentioned earlier, the AI-on-App Service lab at this year's Build conference has two code examples (see Exercise 2 and Exercise 3) demonstrating Linux WebJobs with Azure OpenAI, as well as a locally connected Phi-4 small language model (SLM) sidecar, to categorize user sentiment for submitted product reviews. These are great examples of creatively using WebJobs to perform background batch-style work with your AI resources. Also keep an eye out for the upcoming WebJobs for Windows Containers GA, which is planned for this summer!

Language and Framework Updates

In addition to the release of .NET Aspire support for App Service, the App Service team has kept busy updating myriad Node, Python, Java/JBoss, .NET and PHP versions. To give an idea of the scope of the effort involved in keeping language and framework versions up to date across both Windows and Linux, App Service released more than two dozen language/framework-specific updates in the last few weeks prior to Build. That represents the ongoing platform commitment to keeping languages regularly updated without the need for developers to explicitly invest time and effort doing so themselves.

Just last month, Strapi support was introduced for App Service on Linux! Strapi is an open-source, headless, JavaScript-based content management system that provides developers a robust platform for developing and delivering content across a variety of formats. The Azure Marketplace Strapi offering provides customization control, global availability, and pre-built integration with essential Azure services like Azure Database for MySQL or PostgreSQL and Azure Email Communication Services. Deep dive into the details of hosting Strapi on App Service in this article.

The custom error pages feature for App Service has also been updated just prior to Build. Custom error pages enable developers to customize the response rendered for common HTTP errors (403, 502 and 503) returned by the platform.
This release includes a new option to always render custom errors, regardless of whether the HTTP error was platform-generated or application-generated. There will also be an Azure Portal update coming in June with support for the new custom error page features! Looking ahead to summer, stay tuned for the impending arrival of .NET 10 preview bits on App Service across both Windows and Linux!

Networking and ASE Updates

App Service support for public inbound IPv6 traffic is available in most regions in public preview, with the service working towards a planned GA of inbound IPv6 support during the summer. Inbound IPv6 is supported for both IPv6-only upstream clients, as well as dual-stack scenarios where a web application is reachable over either an IPv4 address or an IPv6 address. As part of an upcoming summer release, App Service will be delivering a public preview of *outbound* IPv6 traffic. For details on using IPv6 on App Service, as well as to track all of the upcoming updates, consult this article: Announcing inbound IPv6 support in public preview - Azure App Service.

For App Service Environment (ASE) customers, App Service will soon be releasing new support for adding custom Certificate Authorities (CAs) to an ASE. This new support will enable securing inbound TLS traffic using certificates issued by a custom Certificate Authority.

Hybrid Connections customers will be happy to see that a new version of the App Service Hybrid Connection Manager (HCM) was released just a few weeks ago. The new HCM delivers updated UX support for both Linux and Windows customers, enhanced logging and connection testing, and a brand new CLI for scripting and command-line management of Hybrid Connections!

You might have missed it, but there was a recent addition to the troubleshooting options on App Service with the new Network Troubleshooter!
The Network Troubleshooter offers comprehensive analysis and actionable insights to resolve connectivity failures for both Linux and Windows web apps. It tests connectivity to Azure resources like Storage, Redis, SQL Server, MySQL server, and other apps running on App Service. It diagnoses connectivity problems with Private endpoints, Service endpoints, and Internet-based endpoints, detects NAT gateways, and investigates DNS failures with custom DNS servers. Additionally, it provides actionable recommendations and surfaces any network rules it finds that are blocking connectivity. If you regularly wrestle with connectivity challenges, give the Network Troubleshooter a try!

Next Steps

Developers can learn more about Azure App Service at Getting Started with Azure App Service. Stay up to date on new features and innovations on Azure App Service via Azure Updates as well as the Azure App Service (@AzAppService) X feed. There is always a steady stream of great deep-dive technical articles about App Service, as well as the breadth of developer-focused Azure services, over on the Apps on Azure blog. And lastly, take a look at the Azure App Service Community Standups hosted on the Microsoft Azure Developers YouTube channel. The Azure App Service Community Standup series regularly features walkthroughs of new and upcoming features from folks that work directly on the product!
Build 2025 Session Reference

(Note: all times below are listed in Seattle time - Pacific Daylight Time)
(Note: some labs have more than one timeslot spanning multiple days)

Innovate, deploy, & optimize your apps without infrastructure hassles
https://build.microsoft.com/en-US/sessions/BRK201
Monday, May 19th 11:15 AM – 12:15 PM Pacific Daylight Time
Arch, 705 Pike, Level 6, Room 606
Breakout, Streaming Online and Recorded Session (BRK201)

Quickly build, deploy, and scale web apps and APIs globally with App Service
https://build.microsoft.com/en-US/sessions/BRK200
Tuesday, May 20th 11:45 AM – 12:45 PM Pacific Daylight Time
Arch, 705 Pike, Level 6, Room 608
Breakout, Streaming Online and Recorded Session (BRK200)

Simplifying .NET upgrades with GitHub Copilot
https://build.microsoft.com/en-US/sessions/DEM549
Monday, May 19th 5:05 PM - 5:20 PM Pacific Daylight Time
Arch, 705 Pike, Level 4, Hub, Theater B
Demo Session – Also Recorded (DEM549)

Use Azure SRE Agent to automate tasks and increase site reliability
https://build.microsoft.com/en-US/sessions/DEM550
Tuesday, May 20th 5:10 PM - 5:25 PM Pacific Daylight Time
Arch, 705 Pike, Level 4, Hub, Theater A
Demo Session – Also Recorded (DEM550)

How .NET Aspire on App Service enhances modern app development
https://build.microsoft.com/en-US/sessions/DEM548
Wednesday, May 21st 2:00 PM - 2:15 PM Pacific Daylight Time
Arch, 705 Pike, Level 4, Hub, Theater B
Demo Session – Also Recorded (DEM548)

Add AI experiences to existing .NET apps using Sidecars in App Service
[Note: Lab participants will be able to try Phi-4 and Azure AI Foundry Agent service scenarios in this lab.]
https://build.microsoft.com/en-US/sessions/LAB347
Monday, May 19th 4:45 PM - 6:00 PM Pacific Daylight Time
Arch, 800 Pike, Level 1, Yakima 1
Hands on Lab – In-Person Only (LAB347)

You can also work through the lab with your own Azure subscription! Code is available at https://github.com/Azure-Samples/Build2025-LAB347. Deploy the lab resources using the included resource provisioning template (https://github.com/Azure-Samples/Build2025-LAB347/blob/main/resources/lab347.json). You can deploy the template by searching for "Deploy a custom template" in the Azure Portal, and copying and pasting the template into the "Build your own template in the editor" option!

Add AI experiences to existing .NET apps using Sidecars in App Service
[Note: Lab participants will be able to try Phi-4 and Azure AI Foundry Agent service scenarios in this lab.]
https://build.microsoft.com/en-US/sessions/LAB347-R1
Wednesday, May 21st 4:30 PM - 5:45 PM Pacific Daylight Time
Arch, 800 Pike, Lower Level, Skagit 5
Hands on Lab – In-Person Only (LAB347-R1)

You can also work through the lab with your own Azure subscription! Code is available at https://github.com/Azure-Samples/Build2025-LAB347. Deploy the lab resources using the included resource provisioning template (https://github.com/Azure-Samples/Build2025-LAB347/blob/main/resources/lab347.json). You can deploy the template by searching for "Deploy a custom template" in the Azure Portal, and copying and pasting the template into the "Build your own template in the editor" option!
Modernizing .NET Applications using Azure Migrate and GitHub Copilot
https://build.microsoft.com/en-US/sessions/LAB343
Tuesday, May 20th 5:15 PM - 6:30 PM Pacific Daylight Time
Arch, 800 Pike, Level 1, Yakima 1
Hands on Lab – In-Person Only (LAB343)

Modernizing .NET Applications using Azure Migrate and GitHub Copilot
https://build.microsoft.com/en-US/sessions/LAB343-R1
Thursday, May 22nd 10:15 AM – 11:30 AM Pacific Daylight Time
Arch, 800 Pike, Level 2, Chelan 2
Hands on Lab – In-Person Only (LAB343-R1)

Azure Kubernetes Service Baseline - The Hard Way, Third time's a charm
1 Access management

Azure Kubernetes Service (AKS) supports Microsoft Entra ID integration, which allows you to control access to your cluster resources using Azure role-based access control (RBAC). In this tutorial, you will learn how to integrate AKS with Microsoft Entra ID and assign different roles and permissions to three types of users:

An admin user, who will have full access to the AKS cluster and its resources.
A backend ops team, who will be responsible for managing the backend application deployed in the AKS cluster. They will only have access to the backend namespace and the resources within it.
A frontend ops team, who will be responsible for managing the frontend application deployed in the AKS cluster. They will only have access to the frontend namespace and the resources within it.

By following this tutorial, you will be able to implement the least-privilege access model, which means that each user or group will only have the minimum permissions required to perform their tasks.

1.1 Introduction

In this third part of the blog series, you will learn how to harden your AKS cluster:

Update an existing AKS cluster to support Microsoft Entra ID integration.
Create a Microsoft Entra ID admin group and assign it the Azure Kubernetes Service Cluster Admin Role.
Create a Microsoft Entra ID backend ops group and assign it the Azure Kubernetes Service Cluster User Role.
Create a Microsoft Entra ID frontend ops group and assign it the Azure Kubernetes Service Cluster User Role.
Create users in Microsoft Entra ID.
Create role bindings to grant the backend ops group and the frontend ops group access to their respective namespaces.
Test the access of each user type by logging in with different credentials and running kubectl commands.

1.2 Prerequisites

This section outlines the recommended prerequisites for setting up Microsoft Entra ID with AKS. It is highly recommended to complete Azure Kubernetes Service Baseline - The Hard Way here!
Alternatively, follow the Microsoft official documentation for a quick start here! Note that you will need to create two namespaces in Kubernetes: one called frontend and a second one called backend.

1.3 Target Architecture

Throughout this article, this is the target architecture we will aim to create. All procedures will be conducted using the Azure CLI. The current architecture can be visualized as follows:

1.4 Deployment

1.4.1 Prepare Environment Variables

This code defines the environment variables for the resources that you will create later in the tutorial. Note: Ensure the environment variable $STUDENT_NAME and the placeholder <TENANT SUB DOMAIN NAME> are set before running the code below.

# Define the name of the admin group
ADMIN_GROUP='ClusterAdminGroup-'${STUDENT_NAME}

# Define the name of the frontend operations group
OPS_FE_GROUP='Ops_Frontend_team-'${STUDENT_NAME}

# Define the name of the backend operations group
OPS_BE_GROUP='Ops_Backend_team-'${STUDENT_NAME}

# Define the Azure AD UPN (User Principal Name) for the frontend operations user
AAD_OPS_FE_UPN='opsfe-'${STUDENT_NAME}'@<SUB DOMAIN TENANT NAME HERE>.onmicrosoft.com'

# Define the display name for the frontend operations user
AAD_OPS_FE_DISPLAY_NAME='Frontend-'${STUDENT_NAME}

# Placeholder for the frontend operations user password
AAD_OPS_FE_PW=<ENTER USER PASSWORD>

# Define the Azure AD UPN for the backend operations user
AAD_OPS_BE_UPN='opsbe-'${STUDENT_NAME}'@<SUB DOMAIN TENANT NAME HERE>.onmicrosoft.com'

# Define the display name for the backend operations user
AAD_OPS_BE_DISPLAY_NAME='Backend-'${STUDENT_NAME}

# Placeholder for the backend operations user password
AAD_OPS_BE_PW=<ENTER USER PASSWORD>

# Define the Azure AD UPN for the cluster admin user
AAD_ADMIN_UPN='clusteradmin'${STUDENT_NAME}'@<SUB DOMAIN TENANT NAME HERE>.onmicrosoft.com'

# Placeholder for the cluster admin user password
AAD_ADMIN_PW=<ENTER USER PASSWORD>

# Define the display name for the cluster admin user
AAD_ADMIN_DISPLAY_NAME='Admin-'${STUDENT_NAME} 1.4.2 Create Microsoft Entra ID Security Groups We will now start by creating 3 security groups for respective team. Create the security group for Cluster Admins az ad group create --display-name $ADMIN_GROUP --mail-nickname $ADMIN_GROUP 2. Create the security group for Application Operations Frontend Team az ad group create --display-name $OPS_FE_GROUP --mail-nickname $OPS_FE_GROUP 3. Create the security group for Application Operations Backend Team az ad group create --display-name $OPS_FE_GROUP --mail-nickname $OPS_FE_GROUP Current architecture can now be illustrated as follows: 1.4.3 Integrate AKS with Microsoft Entra ID 1. Lets update our existing AKS cluster to support Microsoft Entra ID integration, and configure a cluster admin group, and disable local admin accounts in AKS, as this will prevent anyone from using the --admin switch to get full cluster credentials. az aks update -g $SPOKE_RG -n $AKS_CLUSTER_NAME-${STUDENT_NAME} --enable-azure-rbac --enable-aad --disable-local-accounts Current architecture can now be described as follows: 1.4.4 Scope and Role Assignment for Security Groups This chapter describes how to create the scope for the operation teams to perform their daily tasks. The scope is based on the AKS resource ID and a fixed path in AKS, which is /namespaces/. The scope will assign the Application Operations Frontend Team to the frontend namespace and the Application Operation Backend Team to the backend namespace. Lets start by constructing the scope for the operations team. AKS_BACKEND_NAMESPACE='/namespaces/backend' AKS_FRONTEND_NAMESPACE='/namespaces/frontend' AKS_RESOURCE_ID=$(az aks show -g $SPOKE_RG -n $AKS_CLUSTER_NAME-${STUDENT_NAME} --query 'id' --output tsv) 2. Lets fetch the Object ID of the operations teams and admin security groups. Application Operation Frontend Team. 
```bash
FE_GROUP_OBJECT_ID=$(az ad group show --group $OPS_FE_GROUP --query 'id' --output tsv)
```

Application Operations Backend Team:

```bash
BE_GROUP_OBJECT_ID=$(az ad group show --group $OPS_BE_GROUP --query 'id' --output tsv)
```

Admin:

```bash
ADMIN_GROUP_OBJECT_ID=$(az ad group show --group $ADMIN_GROUP --query 'id' --output tsv)
```

3. These commands grant users in the Application Operations Frontend Team group permission to download the AKS credentials and to operate only within their given namespace:

```bash
az role assignment create --assignee $FE_GROUP_OBJECT_ID --role "Azure Kubernetes Service RBAC Writer" --scope ${AKS_RESOURCE_ID}${AKS_FRONTEND_NAMESPACE}
az role assignment create --assignee $FE_GROUP_OBJECT_ID --role "Azure Kubernetes Service Cluster User Role" --scope ${AKS_RESOURCE_ID}
```

4. These commands grant users in the Application Operations Backend Team group permission to download the AKS credentials and to operate only within their given namespace:

```bash
az role assignment create --assignee $BE_GROUP_OBJECT_ID --role "Azure Kubernetes Service RBAC Writer" --scope ${AKS_RESOURCE_ID}${AKS_BACKEND_NAMESPACE}
az role assignment create --assignee $BE_GROUP_OBJECT_ID --role "Azure Kubernetes Service Cluster User Role" --scope ${AKS_RESOURCE_ID}
```

5. This command grants users in the Admin group permission to connect to and manage all aspects of the AKS cluster:

```bash
az role assignment create --assignee $ADMIN_GROUP_OBJECT_ID --role "Azure Kubernetes Service RBAC Cluster Admin" --scope ${AKS_RESOURCE_ID}
```

The current architecture can now be described as follows:

1.4.5 Create Users and Assign them to Security Groups

This exercise will guide you through the steps of creating three users and adding them to their corresponding security groups.

1. Create the admin user:

```bash
az ad user create --display-name $AAD_ADMIN_DISPLAY_NAME --user-principal-name $AAD_ADMIN_UPN --password $AAD_ADMIN_PW
```

2. Assign the admin user to the admin group for the AKS cluster.
First, identify the object ID of the user, as we will need it to assign the user to the admin group:

```bash
ADMIN_USER_OBJECT_ID=$(az ad user show --id $AAD_ADMIN_UPN --query 'id' --output tsv)
```

3. Assign the user to the admin security group:

```bash
az ad group member add --group $ADMIN_GROUP --member-id $ADMIN_USER_OBJECT_ID
```

4. Create the frontend operations user:

```bash
az ad user create --display-name $AAD_OPS_FE_DISPLAY_NAME --user-principal-name $AAD_OPS_FE_UPN --password $AAD_OPS_FE_PW
```

5. Assign the frontend operations user to the frontend security group for the AKS cluster. First, identify the object ID of the user, as we will need it to assign the user to the frontend security group:

```bash
FE_USER_OBJECT_ID=$(az ad user show --id $AAD_OPS_FE_UPN --query 'id' --output tsv)
```

6. Assign the user to the frontend security group:

```bash
az ad group member add --group $OPS_FE_GROUP --member-id $FE_USER_OBJECT_ID
```

7. Create the backend operations user:

```bash
az ad user create --display-name $AAD_OPS_BE_DISPLAY_NAME --user-principal-name $AAD_OPS_BE_UPN --password $AAD_OPS_BE_PW
```

8. Assign the backend operations user to the backend security group for the AKS cluster. First, identify the object ID of the user, as we will need it to assign the user to the backend security group:

```bash
BE_USER_OBJECT_ID=$(az ad user show --id $AAD_OPS_BE_UPN --query 'id' --output tsv)
```

9. Assign the user to the backend security group:

```bash
az ad group member add --group $OPS_BE_GROUP --member-id $BE_USER_OBJECT_ID
```

The current architecture can now be described as follows:

1.4.6 Validate your deployment in the Azure portal

1. Navigate to the Azure portal at https://portal.azure.com and enter your login credentials.
2. Once logged in, click the portal menu (three stripes) in the top left.
3. From the menu list, click Microsoft Entra ID.
4. In the left-hand menu under Manage, click Users. Validate that your users are created: there should be three users, and each user name should end with your student name.
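As you validate the role assignments, it helps to keep in mind how the scopes from section 1.4.4 were built: the admin role sits at the cluster's resource ID, while each ops team's RBAC Writer role sits at that ID plus a fixed /namespaces/<name> suffix. The following self-contained sketch illustrates this; the subscription ID and cluster name below are placeholders, not values from your deployment:

```shell
# Placeholder AKS resource ID (your real one comes from: az aks show ... --query 'id')
AKS_RESOURCE_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-spoke/providers/Microsoft.ContainerService/managedClusters/private-aks-demo"

# Cluster-wide scope used for the admin role assignment
ADMIN_SCOPE="${AKS_RESOURCE_ID}"

# Namespace-scoped paths used for the RBAC Writer role assignments
FE_SCOPE="${AKS_RESOURCE_ID}/namespaces/frontend"
BE_SCOPE="${AKS_RESOURCE_ID}/namespaces/backend"

echo "admin:    ${ADMIN_SCOPE}"
echo "frontend: ${FE_SCOPE}"
echo "backend:  ${BE_SCOPE}"
```

This is why the portal shows the Cluster User Role at the cluster level but RBAC Writer only at the namespace level: they were assigned at two different scope depths.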
5. On the top menu bar, click the Users link.
6. In the left-hand menu under Manage, click Groups.
7. Ensure you have three groups as depicted in the picture; the group names should end with your student name.
8. Click the security group called Ops_Backend_team-YOUR STUDENT NAME.
9. In the left-hand menu, click Members, and verify that your user Backend-YOUR STUDENT NAME is assigned.
10. In the left-hand menu, click Azure role assignments, and from the drop-down menu select your subscription. Ensure the following roles are assigned to the group: Azure Kubernetes Service Cluster User Role assigned at the cluster level, and Azure Kubernetes Service RBAC Writer assigned at the namespace level for the namespace called backend.
11. On the top menu bar, click the Groups link.
12. Repeat steps 7 through 11 for Ops_Frontend_team-YOUR STUDENT NAME and ClusterAdminGroup-YOUR STUDENT NAME.

1.4.7 Validate the Access for the Different Users

This section will demonstrate how to connect to the AKS cluster from the jumpbox using the user accounts defined in Microsoft Entra ID.

Note: If you deployed your AKS cluster using the quick start method

We will check two things: first, that we can successfully connect to the cluster; and second, that the operations teams have access only to their own namespaces, while the admin has full access to the cluster.

1. Navigate to the Azure portal at https://portal.azure.com and enter your login credentials.
2. Once logged in, locate and select your rg-hub resource group, where the Jumpbox has been deployed.
3. Within your resource group, find and click on the Jumpbox VM.
4. In the left-hand menu, under the Operations section, select Bastion.
5. Enter the credentials for the Jumpbox VM and verify that you can log in successfully.
6. First, remove the existing stored configuration that you previously downloaded with the Azure CLI and kubectl.
From the Jumpbox VM, execute the following commands:

```bash
rm -R .azure/
rm -R .kube/
```

Note: The .azure and .kube directories store configuration files for Azure and Kubernetes, respectively, for your user account. Removing these files triggers a login prompt, allowing you to re-authenticate with different credentials.

7. Retrieve the username and password for the frontend user.

Important: Retrieve the username and password from your local shell, not from the shell on the Jumpbox VM.

```bash
echo $AAD_OPS_FE_UPN
echo $AAD_OPS_FE_PW
```

8. From the Jumpbox VM, initiate the authentication process:

```bash
az login
```

Example output:

```bash
azureuser@Jumpbox-VM:~$ az login
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code XXXXXXX to authenticate.
```

9. Open a new tab in your web browser, go to https://microsoft.com/devicelogin, enter the generated code, and press Next.

10. You will be prompted with an authentication window asking which user you want to log in with. Select Use another account, supply the username from the AAD_OPS_FE_UPN variable and the password from the AAD_OPS_FE_PW variable, and then press Next.

Note: When you authenticate with a user for the first time, you will be prompted by Microsoft Authenticator to set up multi-factor authentication (MFA). Choose the "I want to set up a different method" option from the drop-down menu, select Phone, supply your phone number, and receive a one-time passcode to authenticate to Azure with your user account.

11. From the Jumpbox VM, download the AKS cluster credentials.
```bash
SPOKE_RG=rg-spoke
STUDENT_NAME=
AKS_CLUSTER_NAME=private-aks

az aks get-credentials --resource-group $SPOKE_RG --name $AKS_CLUSTER_NAME-${STUDENT_NAME}
```

You should see output similar to the following:

```bash
azureuser@Jumpbox-VM:~$ az aks get-credentials --resource-group $SPOKE_RG --name $AKS_CLUSTER_NAME-${STUDENT_NAME}
Merged "private-aks" as current context in /home/azureuser/.kube/config
azureuser@Jumpbox-VM:~$
```

12. You should be able to list all pods in the frontend namespace. You will be prompted to authenticate your user again; this time it validates your newly created user's permissions within the AKS cluster. Ensure you log in with the user you created (i.e., $AAD_OPS_FE_UPN) and not your company email address.

```bash
kubectl get po -n frontend
```

Example output:

```bash
azureuser@Jumpbox-VM:~$ kubectl get po -n frontend
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code XXXXXXX to authenticate.
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          89m
```

13. Try to list pods in the default namespace:

```bash
kubectl get pods
```

Example output:

```bash
azureuser@Jumpbox-VM:~$ kubectl get po
Error from server (Forbidden): pods is forbidden: User "[email protected]" cannot list resource "pods" in API group "" in the namespace "default": User does not have access to the resource in Azure. Update role assignment to allow access.
```

14. Repeat steps 6 through 13 for the remaining users, and see how their permissions differ.

```bash
# Username and password for the Admin user; execute these commands from your local shell, not from the Jumpbox VM
echo $AAD_ADMIN_UPN
echo $AAD_ADMIN_PW

# Username and password for the Backend user; execute these commands from your local shell, not from the Jumpbox VM
echo $AAD_OPS_BE_UPN
echo $AAD_OPS_BE_PW
```

🎉 Congratulations, you made it to the end! You’ve just navigated the wild waters of Microsoft Entra ID and AKS — and lived to tell the tale.
Whether you’re now a cluster conqueror or an identity integration ninja, give yourself a high five (or a kubectl get pods if that’s more your style). Now go forth and secure those clusters like the cloud hero you are. 🚀 And remember: with great identity comes great responsibility.

Building the Agentic Future
As a business built by developers, for developers, Microsoft has spent decades making it faster, easier, and more exciting to create great software. And developers everywhere have turned everything from BASIC and the .NET Framework, to Azure, VS Code, GitHub, and more into the digital world we all live in today. But nothing compares to what’s on the horizon as agentic AI redefines both how we build and the apps we’re building.

In fact, the promise of agentic AI is so strong that market forecasts predict we’re on track to reach 1.3 billion AI agents by 2028. Our own data, from 1,500 organizations around the world, shows agent capabilities have jumped as a driver for AI applications from near last to a top-three priority when comparing deployments earlier this year to applications being defined today. Of those organizations building AI agents, 41% chose Microsoft to build and run their solutions, significantly more than any other vendor. Within software development the opportunity is even greater, with approximately 50% of businesses intending to incorporate agentic AI into software engineering this year alone.

Developers face a fascinating yet challenging world of complex agent workflows, a constant pipeline of new models, new security and governance requirements, and the continued pressure to deliver value from AI, fast, all while contending with decades of legacy applications and technical debt. This week at Microsoft Build, you can see how we’re making this future a reality with new AI-native developer practices and experiences, by extending the value of AI across the entire software lifecycle, and by bringing critical AI, data, and toolchain services directly to the hands of developers, in the most popular developer tools in the world.

Agentic DevOps

AI has already transformed the way we code, with 15 million developers using GitHub Copilot today to build faster. But coding is only a fraction of the developer’s time.
Extending agents across the entire software lifecycle means developers can move faster from idea to production, boost code quality, and strengthen security, while removing the burden of low-value, routine, time-consuming tasks. We can even address decades of technical debt and keep apps running smoothly in production. This is the foundation of agentic DevOps—the next evolution of DevOps, reimagined for a world where intelligent agents collaborate with developer teams and with each other.

Agents introduced today across GitHub Copilot and Azure operate like members of your development team, automating and optimizing every stage of the software lifecycle, from performing code reviews and writing tests to fixing defects and building entire specs. Copilot can even collaborate with other agents to complete complex tasks like resolving production issues. Developers stay at the center of innovation, orchestrating agents for the mundane while focusing their energy on the work that matters most.

Customers like EY are already seeing the impact: “The coding agent in GitHub Copilot is opening up doors for each developer to have their own team, all working in parallel to amplify their work. Now we're able to assign tasks that would typically detract from deeper, more complex work, freeing up several hours for focus time." - James Zabinski, DevEx Lead at EY

You can learn more about agentic DevOps and the new capabilities announced today from Amanda Silver, Corporate Vice President of Product, Microsoft Developer Division, and Mario Rodriguez, Chief Product Officer at GitHub. And be sure to read more from GitHub CEO Thomas Dohmke about the latest with GitHub Copilot.
At Microsoft Build, see agentic DevOps in action in the following sessions, available both in person May 19-22 in Seattle and on demand:

- BRK100: Reimagining Software Development and DevOps with Agentic AI
- BRK113: The Agent Awakens: Collaborative Development with GitHub Copilot
- BRK118: Accelerate Azure Development with GitHub Copilot, VS Code & AI
- BRK131: Java App Modernization Simplified with AI
- BRK102: Agent Mode in Action: AI Coding with Vibe and Spec-Driven Flows
- BRK101: The Future of .NET App Modernization Streamlined with AI

New AI Toolchain Integrations

Beyond these new agentic capabilities, we’re also releasing new integrations that bring key services directly to the tools developers are already using. From the 150 million GitHub users to the 50 million monthly users of the VS Code family, we’re making it easier for developers everywhere to build AI apps.

If GitHub Copilot changed how we write code, Azure AI Foundry is changing what we can build. And the combination of the two is incredibly powerful. Now we’re bringing leading models from Azure AI Foundry directly into your GitHub experience and workflow, with a new native integration. GitHub Models lets you experiment with leading models from OpenAI, Meta, Cohere, Microsoft, Mistral, and more. Test and compare performance while building models directly into your codebase, all within GitHub. You can easily evaluate model performance and price side by side and swap models with a simple, unified API. And in keeping with our enterprise commitment, teams can set guardrails so model selection is secure, responsible, and in line with your team’s policies.

Meanwhile, new Azure Native Integrations gives developers seamless access to a curated set of 20 software services from Datadog, New Relic, Pinecone, Pure Storage Cloud, and more, directly through the Azure portal, SDK, and CLI.
With Azure Native Integrations, developers get the flexibility to work with their preferred vendors across the AI toolchain, with simplified single sign-on and management, while staying in Azure. Today, we are pleased to announce the addition of even more developer services:

- Arize AI: Arize’s platform provides essential tooling for AI and agent evaluation, experimentation, and observability at scale. With Arize, developers can easily optimize AI applications through tools for tracing, prompt engineering, dataset curation, and automated evaluations. Learn more.
- LambdaTest HyperExecute: LambdaTest HyperExecute is an AI-native test execution platform designed to accelerate software testing. It enables developers and testers to run tests up to 70% faster than traditional cloud grids by optimizing test orchestration and observability, and by streamlining TestOps to expedite release cycles. Learn more.
- Mistral: Mistral and Microsoft announced a partnership today, which includes integrating Mistral La Plateforme as part of Azure Native Integrations. Mistral La Plateforme provides pay-as-you-go API access to Mistral AI's latest large language models for text generation, embeddings, and function calling. Developers can use this AI platform to build AI-powered applications with retrieval-augmented generation (RAG), fine-tune models for domain-specific tasks, and integrate AI agents into enterprise workflows.
- MongoDB (Public Preview): MongoDB Atlas is a fully managed cloud database that provides scalability, security, and multi-cloud support for modern applications. Developers can use it to store and search vector embeddings, implement retrieval-augmented generation (RAG), and build AI-powered search and recommendation systems. Learn more.
- Neon: Neon Serverless Postgres is a fully managed, autoscaling PostgreSQL database designed for instant provisioning, cost efficiency, and AI-native workloads.
Developers can use it to rapidly spin up databases for AI agents, store vector embeddings with pgvector, and scale AI applications seamlessly. Learn more.

Java and .NET App Modernization

Shipping to production isn’t the finish line—and maintaining legacy code shouldn’t slow you down. Today we’re announcing comprehensive resources to help you successfully plan and execute app modernization initiatives, along with new agents in GitHub Copilot to help you modernize at scale, in a fraction of the time. In fact, customers like Ford China are seeing breakthrough results, reducing up to 70% of their Java migration effort by using GitHub Copilot to automate middleware code migration tasks.

Microsoft’s App Modernization Guidance applies decades of enterprise apps experience to help you analyze production apps and prioritize modernization efforts, while applying best practices and technical patterns to ensure success. And now GitHub Copilot transforms the modernization process, handling code assessments, dependency updates, and remediation across your production Java and .NET apps (support for mainframe environments is coming soon!). It generates and executes update plans automatically, while giving you full visibility, control, and a clear summary of changes. You can even raise modernization tasks in GitHub Issues from our proven service Azure Migrate to assign to developer teams. Your apps are more secure, maintainable, and cost-efficient, faster than ever.

Learn how we’re reimagining app modernization for the era of AI with the new App Modernization Guidance and the modernization agent in GitHub Copilot to help you modernize your complete app estate.

Scaling AI Apps and Agents

Sophisticated apps and agents need an equally powerful runtime. And today we’re advancing our complete portfolio, from serverless with Azure Functions and Azure Container Apps, to the control and scale of Azure Kubernetes Service.
At Build we’re simplifying how you deploy, test, and operate open-source and custom models on Kubernetes through the Kubernetes AI Toolchain Operator (KAITO); making it easy to run AI model inferencing with the flexibility, autoscaling, pay-per-second pricing, and governance of Azure Container Apps serverless GPUs; helping you create real-time, event-driven workflows for AI agents by integrating Azure Functions with Azure AI Foundry Agent Service; and much, much more.

The platform you choose to scale your apps has never been more important. With new integrations with Azure AI Foundry, advanced automation that reduces developer overhead, and simplified operations, security, and governance, Azure’s app platform can help you deliver the sophisticated, secure AI apps your business demands. To see the full slate of innovations across the app platform, check out: Powering the Next Generation of AI Apps and Agents on the Azure Application Platform.

Tools that keep pace with how you need to build

This week we’re also introducing new enhancements to our tooling to help you build as fast as possible and explore what’s next with AI, all directly from your editor. GitHub Copilot for Azure brings Azure-specific tools into agent mode in VS Code, keeping you in the flow as you create, manage, and troubleshoot cloud apps. Meanwhile, the Azure Tools for VS Code extension pack brings everything you need to build apps on Azure using GitHub Copilot to VS Code, making it easy to discover and interact with the cloud services that power your applications.

Microsoft’s gallery of AI App Templates continues to expand, helping you rapidly move from concept to production app, deployed on Azure. Each template includes a fully working application, complete with app code, AI features, infrastructure as code (IaC), configurable CI/CD pipelines with GitHub Actions, and an application architecture, ready to deploy to Azure.
These templates reflect the most common patterns and use cases we see across our AI customers, from getting started with AI agents to building GenAI chat experiences with your enterprise data, and helping you learn how to use best practices such as keyless authentication. Learn more by reading the latest on Build Apps and Agents with Visual Studio Code and Azure.

Building the agentic future

The emergence of agentic DevOps, the new wave of development powered by GitHub Copilot, and the new services launching across Microsoft Build will be transformative. But just as we’ve seen over the first 50 years of Microsoft’s history, the real impact will come from the global community of developers. You all have the power to turn these tools and platforms into advanced AI apps and agents that make every business move faster, operate more intelligently, and innovate in ways that were previously impossible.

Learn more and get started with GitHub Copilot.

Powering the Next Generation of AI Apps and Agents on the Azure Application Platform
Generative AI is already transforming how businesses operate, with organizations seeing an average return of 3.7x for every $1 of investment [The Business Opportunity of AI, IDC study commissioned by Microsoft]. Developers sit at the center of this transformation, and their need for speed, flexibility, and familiarity with existing tools is driving the demand for application platforms that integrate AI seamlessly into their current development workflows.

To fully realize the potential of generative AI in applications, organizations must provide developers with frictionless access to AI models, frameworks, and environments that enable them to scale AI applications. We see this in action at organizations like Accenture, Assembly Software, Carvana, Coldplay (Pixel Lab), Global Travel Collection, Fujitsu, healow, Heineken, Indiana Pacers, NFL Combine, Office Depot, Terra Mater Studios (Red Bull), and Writesonic.

Today, we’re excited to announce new innovations across the Azure Application Platform to meet developers where they are and help enterprises accelerate their AI transformation. The Azure App Platform offers managed Kubernetes (Azure Kubernetes Service), serverless (Azure Container Apps and Azure Functions), PaaS (Azure App Service), and integration (Azure Logic Apps and API Management). Whether you’re modernizing existing applications or creating new AI apps and agents, Azure provides a developer‑centric App Platform—seamlessly integrated with Visual Studio, GitHub, and Azure AI Foundry—and backed by a broad portfolio of fully managed databases, from Azure Cosmos DB to Azure Database for PostgreSQL and Azure SQL Database.

Innovate faster with AI apps and agents

In today’s fast-evolving AI landscape, the key to staying competitive is being able to move from AI experimentation to production quickly and easily.
Whether you’re deploying open-source AI models or integrating with any of the 1900+ models in Azure AI Foundry, the Azure App Platform provides a streamlined path for building and scaling AI apps and agents.

- Kubernetes AI Toolchain Operator (KAITO) for AKS add-on (GA) and Azure Arc extension (preview) simplifies deploying, testing, and operating open-source and custom models on Kubernetes. Automated GPU provisioning, pre-configured settings, workspace customization, real-time deployment tracking, and built-in testing interfaces significantly reduce infrastructure overhead and accelerate AI development. Visual Studio Code integration enables developers to quickly prototype, deploy, and manage models. Learn more.
- Serverless GPU integration with AI Foundry Models (preview) offers a new deployment target for easy AI model inferencing. Azure Container Apps serverless GPU offers unparalleled flexibility to run any supported model. It features automatic scaling, pay-per-second pricing, robust data governance, and built-in enterprise networking and security support, making it an ideal solution for scalable and secure AI deployments. Learn more.
- Azure Functions integration with AI Foundry Agent Service (GA) enables you to create real-time, event-driven workflows for AI agents without managing infrastructure. This integration enables agents to securely invoke Azure Functions to execute business logic, access systems, or process data on demand. It unlocks scalable, cost-efficient automation for intelligent applications that respond dynamically to user input or events. Learn more.
- Azure Functions enriches the Azure OpenAI extension (preview) to automate embeddings for real-time RAG, semantic search, and function calling, with built-in support for AI Search, Azure Cosmos DB for MongoDB, and Azure Data Explorer vector stores. Learn more.
- Azure Functions MCP extension adds support for instructions and monitoring (preview), making it easier to build and operate remote MCP servers at cloud scale. With this update, developers can deliver richer AI interactions by providing capabilities and context to large language models directly from Azure Functions. This enables AI agents to both call functions and respond intelligently, with no separate orchestration layer required. Learn more.

Harnessing AI to drive intelligent business processes

As AI continues to grow in adoption, its ability to automate complex business process workflows becomes increasingly valuable. Azure Logic Apps empowers organizations to build, orchestrate, and monitor intelligent, agent-driven workflows.

- Logic Apps agent loop orchestrates agentic business processes (preview) with goal-based automation using AI-powered reasoning engines such as OpenAI’s GPT-4o or GPT-4.1. Instead of building fixed flows, users can define the desired outcomes, and the agent loop action in Logic Apps figures out the steps dynamically. With 1400+ out-of-the-box connectors to various enterprise systems and SaaS applications, and full observability, Logic Apps enables you to rapidly deliver on all business process needs with agentic automation. Learn more.
- Enable intelligent data pipelines for RAG using Logic Apps (preview) with new native integrations with Azure Cosmos DB and Azure AI Search. Teams can ingest content into vector stores and databases through low-code templates, no custom code required. This enables AI agents to ground responses in proprietary data, improving relevance and accuracy for real business outcomes. Learn more.
- Empower AI agents to act with Logic Apps in AI Foundry (preview) across enterprise systems using low-code automation. Prebuilt connectors and templates simplify integration with Microsoft and third-party services, from databases to SaaS apps.
This gives developers and business users a faster way to orchestrate intelligent actions, automate complex workflows, and operationalize AI across the organization. Learn more.

Scale AI innovation across your enterprise

As AI adoption grows, so does the need for visibility and control over how models are accessed and utilized. Azure API Management helps you achieve this with advanced tools that ensure governance, security, and efficient management of your AI APIs.

- Expanded AI Gateway capabilities in Azure API Management (GA) give organizations deeper control, observability, and governance for generative AI workloads. Key additions include LLM logging for prompts, completions, and token usage insights; session-aware load balancing to maintain context in multi-turn chats; robust guardrails through integration with the Azure AI Content Safety service; and direct onboarding of models from Azure AI Foundry. Customers can also now apply GenAI-specific policies to AWS Bedrock model endpoints, enabling unified governance across multi-cloud environments. Learn more.
- Azure API Management support for Model Context Protocol (preview) makes it easy to expose existing APIs as secure, agent-compatible endpoints. You can apply gateway policies such as authentication, rate limiting, caching, and authorization to protect MCP servers. This ensures consistent, centralized policy enforcement across all your MCP-enabled APIs. With minimal effort, you can transform APIs into AI-ready services that integrate seamlessly with autonomous agents. Learn more.
- Azure API Center introduces private MCP registry and streamlined discovery (preview), giving organizations full control over which services are discoverable. Role-based access control (RBAC) allows teams to manage who can find, use, and update MCP servers based on organizational roles. Developers can now discover and consume MCP-enabled APIs directly through the API Center portal.
These updates improve governance and simplify the developer experience for AI agent development. Learn more.

Simplify operations for AI apps and agents in production

Moving AI applications from proof of concept to production requires an environment that scales securely, cost-effectively, and reliably. The Azure App Platform continues to evolve with enhancements that remove operational friction, so you can deploy your AI apps and agents and scale with confidence.

- App Service Premium v4 Plan (preview) delivers up to 25% better performance and up to 24% cost savings over the previous generation—ideal for scalable, secure web apps. App Service Premium v4 helps modernize both Windows and Linux applications with better performance, security, and DevOps integration. It now offers a more cost-effective solution for customers seeking a fully managed PaaS, reducing infrastructure overhead while supporting today’s demanding AI applications. Learn more.
- AKS security dashboard (GA) provides unified visibility and automated remediation powered by Microsoft Defender for Containers—helping operations stay ahead of threats and compliance needs without leaving the Azure portal. Learn more.
- AKS Long-Term Support (GA) introduces two-year support for all versions of Kubernetes after 1.27, in addition to the standard community-supported versions. This extended support model enables teams to reduce upgrade frequency and complexity, ensure platform stability, and provide greater operational flexibility. Learn more.
- Dynamic service recommendations for AKS (preview) streamlines the process of selecting and connecting services to your Azure Kubernetes Service cluster by offering tailored Azure service recommendations directly in the Azure portal. It uses in-portal intelligence to suggest the right services based on your usage patterns, making it easier to choose what’s best for your workloads. Learn more.
Azure Functions Flex Consumption adds support for availability zones and smaller instance sizes (preview) to improve reliability and resiliency for critical workloads. The new 512 MB memory option helps customers fine-tune resource usage and reduce costs for lightweight functions. These updates are available in Australia East, East Asia, Sweden Central, and UK South, and can be enabled on both new and existing Flex Consumption apps. Learn more.

Join us at Microsoft Build, May 19-22

The future of AI applications is here, and it’s powered by Azure. From APIs to automation, from web apps to Kubernetes, and from cloud to edge, we’re building the foundation for the next era of intelligent software. Whether you're modernizing existing systems or pioneering the next big thing in AI, Azure gives you the tools, performance, and governance to build boldly. Our platform innovations are designed to simplify your path, remove operational friction, and help you scale with confidence. Explore the various breakout, demo, and lab sessions at Microsoft Build, May 19-22, to dive deeper into these Azure App Platform innovations. We can’t wait to see what you will build next!

Reimagining App Modernization for the Era of AI
This blog highlights the key announcements and innovations from Microsoft Build 2025. It focuses on how AI is transforming the software development lifecycle, particularly in app modernization. Key topics include the use of GitHub Copilot for accelerating development and modernization, the introduction of the Azure SRE Agent for managing production systems, and the launch of the App Modernization Guidance to help organizations modernize their applications with AI-first design. The blog emphasizes a strategic approach to modernization, aiming to reduce complexity, improve agility, and deliver measurable business outcomes.
New Networking Capabilities in Azure Container Apps

Azure Container Apps is your go-to fully managed serverless container service that enables you to deploy and run containerized applications with per-second billing and autoscaling, without having to manage infrastructure. Today, Azure Container Apps is thrilled to announce several new enterprise capabilities that take the flexibility, security, and manageability of your containerized applications to the next level. These capabilities include premium ingress, rule-based routing, private endpoints, Azure Arc integration, and planned maintenance. Let’s dive into the advanced networking features that Azure Container Apps has introduced.

Public Preview: Premium Ingress in Azure Container Apps

Azure Container Apps now supports premium ingress in public preview. This feature brings environment-level ingress configuration options, with the highlight being customizable ingress scaling. This capability supports the scaling of the ingress proxy, allowing you to better handle higher-demand workloads, such as large performance tests. By configuring your ingress proxy to run on workload profiles, you can scale out more ingress instances to manage the load. Keep in mind, running the ingress proxy on a workload profile will incur associated costs.

But wait, there’s more! This release also includes other ingress-related settings to boost your application’s flexibility, such as termination grace period, idle request timeout, and header count. To learn more, please visit https://aka.ms/aca/ingress-config.

Public Preview: Rule-Based Routing in Azure Container Apps

Next up, we have rule-based routing, now in public preview. This feature is all about giving you greater flexibility and composability for your Azure Container Apps. It simplifies your architecture for microservice applications, A/B testing, blue-green deployments, and more.
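Conceptually, the routing rules behave like a small reverse-proxy lookup table: each rule matches on the requested host and/or path prefix and names a target app, and the first match wins. A minimal Python sketch of that evaluation — the rule schema and app names here are illustrative assumptions, not the actual Container Apps configuration format:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Route:
    host: Optional[str]    # requested host name to match, e.g. "api.contoso.com"
    prefix: Optional[str]  # path prefix to match, e.g. "/checkout"
    target: str            # container app that should receive the traffic

# Hypothetical rules for illustration; in Container Apps you declare routing
# rules on the environment rather than writing code like this.
ROUTES = [
    Route(host="api.contoso.com", prefix=None, target="api-app"),
    Route(host=None, prefix="/checkout", target="checkout-app"),
    Route(host=None, prefix="/", target="storefront-app"),  # catch-all
]

def resolve(host: str, path: str, routes: List[Route] = ROUTES) -> str:
    """Return the target app for a request; the first matching rule wins."""
    for rule in routes:
        if rule.host is not None and rule.host != host:
            continue
        if rule.prefix is not None and not path.startswith(rule.prefix):
            continue
        return rule.target
    raise LookupError(f"no route for {host}{path}")
```

This is exactly the kind of table you previously had to maintain yourself in a reverse proxy such as NGINX; with rule-based routing, the platform evaluates it for you.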
With rule-based routing, you can direct incoming HTTP traffic to different apps within your Container Apps environment based on the requested host name or path. This includes support for custom domains! No need to set up a separate reverse proxy like NGINX anymore. Just provide routing rules for your environment, and incoming traffic will automatically be routed to the specified target apps. To learn more, please visit https://aka.ms/aca/rule-based-routing.

Generally Available: Private Endpoints in Azure Container Apps

We’re also excited to announce that private endpoints are now generally available for workload profile environments in Azure Container Apps. This means you can connect to your Container Apps environment using a private IP address in your Azure Virtual Network, eliminating exposure to the public internet and securing access to your applications. Plus, you can connect directly from Azure Front Door to your workload profile environments over a private link instead of the public internet. Today, you can enable Private Link to the container apps origin for Azure Front Door through the Azure CLI and the Azure portal. TCP support is now available too!

This feature is supported for both Consumption and Dedicated plans in workload profile environments. Whether you have new or existing environments, you can leverage this capability without needing to re-provision your environment. Additionally, this capability introduces the public network access setting, allowing you to configure Azure networking policies. GA pricing will go into effect on July 1, 2025. To learn more, please visit https://aka.ms/aca/private-endpoints.

What else is going on with Azure Container Apps at Build 2025?

There’s a lot happening at Build 2025! Azure Container Apps has numerous sessions and other features being launched. For a complete overview, check out our what’s new blog: https://aka.ms/aca/whats-new-blog-build-2025.
For feedback, feature requests, or questions about Azure Container Apps, visit our GitHub page. We look forward to hearing from you!

Unlocking new AI workloads in Azure Container Apps
Announcing new features to support AI workloads, including improved integrations for deploying Foundry models to Azure Container Apps, the general availability of Dedicated GPUs, and the private preview of GPU-powered dynamic sessions.