ASP.NET
Reimagining App Modernization for the Era of AI
This blog highlights the key announcements and innovations from Microsoft Build 2025. It focuses on how AI is transforming the software development lifecycle, particularly in app modernization. Key topics include the use of GitHub Copilot for accelerating development and modernization, the introduction of the Azure SRE agent for managing production systems, and the launch of the App Modernization Guidance to help organizations modernize their applications with AI-first design. The blog emphasizes a strategic approach to modernization, aiming to reduce complexity, improve agility, and deliver measurable business outcomes.

Streamline & Modernise ASP.NET Auth: Moving enterprise apps from IIS to App Service with Easy Auth
Introduction

When modernising your enterprise ASP.NET (.NET Framework) or ASP.NET Core applications and moving them from IIS over to Azure App Service, one of the aspects you will have to take into consideration is how you will manage authentication (AuthN) and authorisation (AuthZ). Specifically, for applications that leverage on-premises auth mechanisms such as Integrated Windows Authentication, you will need to start considering more modern auth protocols such as OpenID Connect/OAuth, which are better suited to the cloud. Fortunately, App Service includes built-in authentication and authorisation support, also known as 'Easy Auth', which requires minimal to zero code changes. This feature is integrated into the platform, includes a built-in token store, and operates as middleware running the AuthN logic outside of your code, as illustrated by the image below. More information on how Easy Auth works can be found here.

Easy Auth supports several providers as illustrated above, but in this blog we will focus purely on using Entra ID (formerly known as Azure Active Directory) as the provider. It also assumes all of the Active Directory users have been synced up to Entra ID. With a few clicks, you can enable Entra ID authentication across any of your web, mobile or API apps in App Service, and restrict which tenants or even specific identities are allowed access – all without touching code. This can be quite powerful in many scenarios, such as when you do not have access to the source control to implement your own auth logic, when you want to reduce the overhead of maintaining auth libraries, or when you simply want a quick path to apply auth across your apps. More detailed scenarios and a comparison of when it makes sense to use Easy Auth versus other authentication methods can be found here.

Setting up Easy Auth

Let's see Easy Auth in action. As you can see below, I have a sample ASP.NET app hosted on App Service which is accessible without any authentication. Now let's demonstrate how quick it is to set up Easy Auth for my app:

1) I navigated to my App Service resource within the Azure Portal.
2) I went to Authentication and used the below configuration:
- Selected Microsoft as the identity provider
- Workforce configuration (current tenant)
- Create a new app registration (appropriate Entra ID roles are required)
- Entra ID app name: sot-easyauth
- Client secret expiry: 180 days (this means I must renew the secret in advance of the 180 days, otherwise my app/authentication will fail to function upon expiry, causing downtime)
- Allow requests only from this application itself
- Current tenant – single tenant (i.e. users outside of my tenant will be denied access)
- Identity requirement: allow requests from any identity
- Restrict access: require authentication (this will require authentication across my whole app, whereas "Allow unauthenticated access" means it is up to my app to decide when authentication is required)
- HTTP 302 Found redirect (redirects unauthenticated users to the login page rather than just a page stating 401 Unauthorized, for example)
- Token store: enabled (also allows the app to have access to the token)
3) For permissions, I left the default User.Read Graph API permission selected. More information on the different permissions can be found here.
Now if I go back to the app and refresh the page again, I am redirected to the login page, which is surfaced by the Easy Auth middleware. Only after successful authentication will I be able to see the Welcome page again.

Now that is pretty impressive, but you might want to go even further and have questions such as: how will my app know who's logged in? How can I access the claims? How do I perform more granular AuthZ? Well, for starters, Easy Auth takes the claims in the incoming token and exposes them to your app as request headers, which your app can then interpret accordingly. The list of headers can be found here. Typically, you will be tasked with creating custom logic to decode and interpret these claims, but with ASP.NET (.NET Framework), App Service can populate the claims of the authenticated user without additional code logic. However, for ASP.NET Core this does not hold true. Thus, given the approach differs between ASP.NET (.NET Framework) and ASP.NET Core (starting from .NET Core), I will split these up into two different sections after touching upon AuthZ and Entra ID app roles.

AuthZ and Entra ID App Roles

If your IIS ASP.NET app leverages Windows Authentication for AuthN but manages AuthZ itself – perhaps by mapping domain credentials (e.g. CONTOSO\Sam) to specific AuthZ roles stored somewhere like a database – and this remains a requirement, you can achieve a similar outcome by using the claims provided by Easy Auth. However, it is not recommended to use fields such as domain credentials, e-mail or UPN (e.g. [email protected]), given such attributes can change and even be re-used over time. For example, an employee called Dan Long with the UPN [email protected] leaves the company, and another employee with the same name joins the company and is assigned the same UPN [email protected] – potentially giving unauthorised access to resources belonging to the former employee. Instead, you may consider using the oid (i.e. objectId), which is a globally unique GUID that identifies the user across applications within a single tenant. You might also consider pairing oid with tid (i.e. tenant ID) for sharding/routing if required. A note for multi-tenant applications: the same user existing in different tenants will have a different oid. More information on how to reliably identify a user and the different claim types available can be found here.

Alternatively, if the built-in authorisation policies do not suffice, you can leverage Entra ID app roles to apply authorisation within your ASP.NET app, which we will cover in more depth further down below. For demonstration purposes, I have created an app role called Member in my Entra ID app registration and assigned the Entra ID group "Contoso EA Members" (which my identity is part of) to this role via the associated Entra ID enterprise application, as shown below. I am leveraging said role to restrict access to the Member Area page to only the Member role (more on this further down). More information on creating your own Entra ID app roles can be found here.

ASP.NET (.NET Framework) claims and Entra ID App roles

For ASP.NET 4.6 apps, App Service populates the user's claims through ClaimsPrincipal.Current, which means you can easily reference the claims without additional logic.
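To illustrate, here is a minimal sketch (a hypothetical controller, not the linked sample) of an ASP.NET (.NET Framework) MVC action reading the claims Easy Auth populates; the claim-type URIs shown are the standard mapped names for oid and tid:

using System.Linq;
using System.Security.Claims;
using System.Web.Mvc;

public class ProfileController : Controller
{
    public ActionResult Index()
    {
        var principal = ClaimsPrincipal.Current;

        // Prefer the stable identifiers discussed above (oid + tid)
        // over mutable attributes such as UPN or e-mail.
        var oid = principal.FindFirst(
            "http://schemas.microsoft.com/identity/claims/objectidentifier")?.Value;
        var tid = principal.FindFirst(
            "http://schemas.microsoft.com/identity/claims/tenantid")?.Value;

        ViewBag.UserName = principal.Identity?.Name;
        ViewBag.Claims = principal.Claims
            .Select(c => $"{c.Type}: {c.Value}")
            .ToList();

        return View();
    }
}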
I have created sample code which demonstrates this here, and the output of this in App Service can be found below. You will notice Easy Auth has picked up my Entra ID app role called Member under the claim type roles. In the screenshot and sample, you will also notice a link on the top nav bar called Member Area, which is guarded by an [Authorize] attribute to restrict access to members with the role Member. Unfortunately, at this stage, if we were to access the page it would return 401 Unauthorized, regardless of my identity having the appropriate app role. The reason for this is that ASP.NET is looking for the claim type "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" instead of "role". Fortunately, Easy Auth can be configured to emit the long claim names instead by setting the environment variable WEBSITE_AUTH_USE_LEGACY_CLAIMS to False, as shown in the below screenshot.

After the change, if I log out and back in again, I will see this reflected in my application, and the Member Area page will grant me access as shown in the screenshots below. Voila, we now have claims and Entra ID app roles working within our ASP.NET application.

ASP.NET Core claims and Entra ID App roles

Out of the box, Easy Auth does not support populating ASP.NET Core with the current user's authentication context like it does for ASP.NET (.NET Framework) with ClaimsPrincipal. However, this can be achieved by using the NuGet package Microsoft.Identity.Web, which has built-in capability for this. What I did was as follows:

1) Installed the NuGet package Microsoft.Identity.Web into my solution.
2) In my Program.cs file, I loaded in the library:

builder.Services.AddMicrosoftIdentityWebAppAuthentication(builder.Configuration);

3) I also added app.UseAuthorization() after app.UseAuthentication():

app.UseAuthentication();
app.UseAuthorization();

After these changes, User.Identities will now be populated with the claims, and the [Authorize] attribute will work, permitting only the role Member when visiting the Member Area page. The full sample code can be found here. Unlike with ASP.NET (.NET Framework), the downside of this approach is the added responsibility of managing an additional library (Microsoft.Identity.Web).

Conclusion

App Service Easy Auth provides a streamlined and efficient way to manage authentication and authorisation for your ASP.NET applications. By leveraging Easy Auth, you can apply modern auth protocols with minimal to zero code changes. The built-in support for various identity providers, including Entra ID, can help developers implement flexible and robust auth mechanisms. As demonstrated, Easy Auth simplifies the process of integrating authentication and authorisation into your applications, making it a valuable tool for modernising enterprise apps.

Good to know and additional resources

- Limitations of Entra ID app roles. For example, "A user, group, or service principal can have a maximum of 1,500 app role assignments": Service limits and restrictions - Microsoft Entra ID | Microsoft Learn.
- You can leverage Azure Policy to audit across the organisation when App Service does not have Easy Auth enabled by turning on "App Service apps should have authentication enabled": Built-in policy definitions for Azure App Service - Azure App Service | Microsoft Learn.
Work with User Identities in AuthN/AuthZ - Azure App Service | Microsoft Learn
Configure Microsoft Entra Authentication - Azure App Service | Microsoft Learn
Work with OAuth Tokens in AuthN/AuthZ - Azure App Service | Microsoft Learn

Keep Your Azure Functions Up to Date: Identify Apps Running on Retired Versions
Running Azure Functions on retired language versions can lead to security risks, performance issues, and potential service disruptions. While the Azure Functions team notifies users about upcoming retirements through the portal, emails, and warnings, identifying affected Function Apps across multiple subscriptions can be challenging. To simplify this, we've provided Azure CLI scripts to help you:

✅ Identify all Function Apps using a specific runtime version
✅ Find apps running on unsupported or soon-to-be-retired versions
✅ Take proactive steps to upgrade and maintain a secure, supported environment

Read on for the full set of Azure CLI scripts and instructions on how to upgrade your apps today!

Why Upgrading Your Azure Functions Matters

Azure Functions supports six different programming languages, with new stack versions being introduced and older ones retired regularly. Staying on a supported language version is critical to ensure:

- Continued access to support and security updates
- Avoidance of performance degradation and unexpected failures
- Compliance with best practices for cloud reliability

Failure to upgrade can lead to security vulnerabilities, performance issues, and unsupported workloads that may eventually break. Azure's language support policy follows a structured deprecation timeline, which you can review here.

How Will You Know When a Version Is Nearing Its End-of-Life?

The Azure Functions team communicates retirements well in advance through multiple channels:

- Azure Portal notifications
- Emails to subscription owners
- Warnings in client tools and the Azure Portal UI when an app is running on a version that is either retired or about to be retired in the next 6 months
- The official Azure Functions Supported Languages document here

To help you track these changes, we recommend reviewing the language version support timelines in the Azure Functions Supported Languages document. However, identifying all affected apps across multiple subscriptions can be challenging. To simplify this process, I've built some Azure CLI scripts below that can help you list all impacted Function Apps in your environment.
Linux* Function Apps with their language stack versions:

az functionapp list --query "[?siteConfig.linuxFxVersion!=null && siteConfig.linuxFxVersion!=''].{Name:name, ResourceGroup:resourceGroup, OS:'Linux', LinuxFxVersion:siteConfig.linuxFxVersion}" --output table

*Running on Elastic Premium and App Service Plans

Linux* Function Apps on a specific language stack version (ex: Node.js 18):

az functionapp list --query "[?siteConfig.linuxFxVersion=='Node|18'].{Name:name, ResourceGroup:resourceGroup, OS:'Linux', LinuxFxVersion:siteConfig.linuxFxVersion}" --output table

*Running on Elastic Premium and App Service Plans

Windows Function Apps only:

az functionapp list --query "[?!contains(kind, 'linux')].{Name:name, ResourceGroup:resourceGroup, OS:'Windows'}" --output table

Windows Function Apps with their language stack versions:

az functionapp list --query "[?!contains(kind, 'linux')].{name: name, resourceGroup: resourceGroup}" -o json | ConvertFrom-Json | ForEach-Object {
    $appSettings = az functionapp config appsettings list -n $_.name -g $_.resourceGroup --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='WEBSITE_NODE_DEFAULT_VERSION']" -o json | ConvertFrom-Json
    $siteConfig = az functionapp config show -n $_.name -g $_.resourceGroup --query "{powerShellVersion: powerShellVersion, netFrameworkVersion: netFrameworkVersion, javaVersion: javaVersion}" -o json | ConvertFrom-Json
    $runtime = ($appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' }).value
    $version = switch ($runtime) {
        'node'       { ($appSettings | Where-Object { $_.name -eq 'WEBSITE_NODE_DEFAULT_VERSION' }).value }
        'powershell' { $siteConfig.powerShellVersion }
        'dotnet'     { $siteConfig.netFrameworkVersion }
        'java'       { $siteConfig.javaVersion }
        default      { 'Unknown' }
    }
    [PSCustomObject]@{
        Name          = $_.name
        ResourceGroup = $_.resourceGroup
        OS            = 'Windows'
        Runtime       = $runtime
        Version       = $version
    }
} | Format-Table -AutoSize

Windows Function Apps running on the Node.js runtime:

az functionapp list --query "[?!contains(kind, 'linux')].{name: name, resourceGroup: resourceGroup}" -o json | ConvertFrom-Json | ForEach-Object {
    $appSettings = az functionapp config appsettings list -n $_.name -g $_.resourceGroup --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='WEBSITE_NODE_DEFAULT_VERSION']" -o json | ConvertFrom-Json
    $runtime = ($appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' }).value
    if ($runtime -eq 'node') {
        $version = ($appSettings | Where-Object { $_.name -eq 'WEBSITE_NODE_DEFAULT_VERSION' }).value
        [PSCustomObject]@{
            Name          = $_.name
            ResourceGroup = $_.resourceGroup
            OS            = 'Windows'
            Runtime       = $runtime
            Version       = $version
        }
    }
} | Format-Table -AutoSize

Windows Function Apps running on a specific language version (ex: Node.js 18):

az functionapp list --query "[?!contains(kind, 'linux')].{name: name, resourceGroup: resourceGroup}" -o json | ConvertFrom-Json | ForEach-Object {
    $appSettings = az functionapp config appsettings list -n $_.name -g $_.resourceGroup --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='WEBSITE_NODE_DEFAULT_VERSION']" -o json | ConvertFrom-Json
    $runtime = ($appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' }).value
    $nodeVersion = ($appSettings | Where-Object { $_.name -eq 'WEBSITE_NODE_DEFAULT_VERSION' }).value
    if ($runtime -eq 'node' -and $nodeVersion -eq '~18') {
        [PSCustomObject]@{
            Name          = $_.name
            ResourceGroup = $_.resourceGroup
            OS            = 'Windows'
            Runtime       = $runtime
            Version       = $nodeVersion
        }
    }
} | Format-Table -AutoSize

All Windows Function Apps running on unsupported language runtimes
(as of March 2025):

az functionapp list --query "[?!contains(kind, 'linux')].{name: name, resourceGroup: resourceGroup}" -o json | ConvertFrom-Json | ForEach-Object {
    $appSettings = az functionapp config appsettings list -n $_.name -g $_.resourceGroup --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='WEBSITE_NODE_DEFAULT_VERSION']" -o json | ConvertFrom-Json
    $siteConfig = az functionapp config show -n $_.name -g $_.resourceGroup --query "{powerShellVersion: powerShellVersion, netFrameworkVersion: netFrameworkVersion}" -o json | ConvertFrom-Json
    $runtime = ($appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' }).value
    $version = switch ($runtime) {
        'node' {
            $nodeVer = ($appSettings | Where-Object { $_.name -eq 'WEBSITE_NODE_DEFAULT_VERSION' }).value
            if ([string]::IsNullOrEmpty($nodeVer)) { 'Unknown' } else { $nodeVer }
        }
        'powershell' { $siteConfig.powerShellVersion }
        'dotnet'     { $siteConfig.netFrameworkVersion }
        default      { 'Unknown' }
    }
    # Check if the runtime version is unsupported
    $isUnsupported = switch ($runtime) {
        'node' {
            $ver = $version -replace '~',''
            [double]$ver -le 16
        }
        'powershell' {
            $ver = $version -replace '~',''
            [double]$ver -le 7.2
        }
        'dotnet' {
            $ver = $siteConfig.netFrameworkVersion
            $ver -notlike 'v7*' -and $ver -notlike 'v8*'
        }
        default { $false }
    }
    if ($isUnsupported) {
        [PSCustomObject]@{
            Name          = $_.name
            ResourceGroup = $_.resourceGroup
            OS            = 'Windows'
            Runtime       = $runtime
            Version       = $version
        }
    }
} | Format-Table -AutoSize

Take Action Now

By using these scripts, you can proactively identify and update Function Apps before they reach end-of-support status. Stay ahead of runtime retirements and ensure the reliability of your Function Apps. For step-by-step instructions to upgrade your Function Apps, check out the Azure Functions language version upgrade guide. For more details on Azure Functions' language support lifecycle, visit the official documentation. Have any questions? Let us know in the comments below!

Get Ready for .NET Conf: Focus on Modernization
We're excited to announce the topics and speakers for .NET Conf: Focus on Modernization, our latest virtual event on April 22-23, 2025! This event features live sessions from .NET and cloud computing experts, providing attendees with the latest insights into modernizing .NET applications, including technical upgrades, cloud migration, and tooling advancements. To get ready, visit the .NET Conf: Focus on Modernization home page and click Add to Calendar so you can save the date on your calendar. From this page, on the day of the event you'll be able to join a live stream on YouTube and Twitch. We will also make the source code for the demos available on GitHub, and the on-demand replays will be available on our YouTube channel. Learn more: https://focus.dotnetconf.net/

Why attend?

In the fast-changing technological environment we now find ourselves in, it has never been more urgent to modernize enterprise .NET applications to maintain competitiveness and stay ahead of the next innovation. Updating .NET applications for the cloud is a major business priority and involves not only technical upgrades and cloud migration, but also improvements in tooling, processes, and skills. At this event, you will get end-to-end insights across the latest tools, innovations, and best practices for successful .NET modernization.

What can developers expect?

The event will run live for up to five hours each day, covering different aspects of .NET modernization. Scott Hanselman will set the tone for day one with a discussion of the experiences and processes to modernize .NET applications in the era of AI. This will be followed by expert sessions on upgrading .NET apps and modernizing both your apps and data to the cloud. Day two will soar higher into the clouds, with sessions to help with cloud migration, cloud development, and infusing AI into your apps. You can interact with experts and ask questions to deepen your expertise as we broadcast live on YouTube or Twitch. Recordings of all sessions will be available with materials after the event.

Agenda

Here's a quick snapshot of the schedule. Things may change, and we recommend visiting the event home page for the latest agenda and session times: https://focus.dotnetconf.net/agenda

Day 1 – April 22, Tuesday

Time (PDT) | Session
8:00 am  | Modernizing .NET: Future-ready applications in the era of AI (Scott Hanselman, Chet Husk, McKenna Barlow)
9:00 am  | Deep dive into the upcoming AI-assisted tooling to upgrade .NET apps (Chet Husk, McKenna Barlow)
10:00 am | Use Reliable Web App patterns to confidently replatform your web apps (Pablo Lopes)
11:00 am | Modernize Data-Driven Apps (No AI Needed) (Jerry Nixon)
12:00 pm | Modernize from ASP.NET to ASP.NET Core: The Future is Now (Taylor Southwick)

Day 2 – April 23, Wednesday

Time (PDT) | Session
8:00 am  | Unblock .NET modernization with AI-assisted app and code assessment tools (Michael Yen-Chi Ho)
9:00 am  | Cloud development doesn't have to be painful thanks to .NET Aspire (Maddy Montaquila (Leger))
10:00 am | Introducing Artificial Intelligence to your application (Jordan Matthiesen)
11:00 am | Modernizing your desktop: From WinForms to Blazor, Azure, and AI (Santiago Arango Toro)

Save the Date!

.NET Conf: Focus on Modernization is a free, two-day livestream event that you won't want to miss. Tune in on April 22 and 23, 2025, ask questions live, and learn how to get your .NET applications ready for the AI revolution. Save the date!
Stay tuned for more updates and detailed session information. We can't wait to see you there!

Getting Started with .NET on Azure Container Apps
Great news for .NET developers who would like to become familiar with containers and Azure Container Apps (ACA)! We just released a new Getting Started guide for .NET developers on Azure Container Apps. This guide is designed to help you get started with Azure Container Apps and understand how to build and deploy your applications using this service.

Connect Azure SQL Server via System Assigned Managed Identity under ASP.NET
TOC

- Why we use it
- Architecture
- How to use it
- References

Why we use it

This tutorial will introduce how to integrate Microsoft Entra with Azure SQL Server to avoid using fixed usernames and passwords. By utilizing system-assigned managed identities as a programmatic bridge, it becomes easier for Azure-related PaaS services (such as Container Apps) to communicate with the database without storing connection information in plain text.

Architecture

I will introduce each service or component and their configurations in subsequent chapters according to the order of A-C:

A: The company's account administrator needs to create or designate a user as the database administrator. This role can only be assigned to one person within the database and is responsible for basic configuration and the creation and maintenance of other database users. It is not intended for development or actual system operations.

B: The company's development department needs to create a Container App (or other service) as the basic unit of the business system. Programmers within this unit will write business logic (e.g., accessing the database) and deploy it here.

C: The company's data department needs to create or maintain a database and designate Microsoft Entra as the only login method, eliminating other fixed username/password combinations.

How to use it

A: As this article does not dive into the detailed configuration of Microsoft Entra, it will only outline the process. The company's account administrator needs to create or designate a user as the database administrator. In this example, we will call this user "cch," and the account "cch@thexxxxxxxxxxxx" will be used in subsequent steps.

B-1: In this example, we can create a Container App with any SKU/region. Please note that during the initial setup, we will temporarily use the nginx:latest image from docker.io. After creating our own ASP.NET image, we will update it accordingly. For testing convenience, please enable ingress traffic and allow requests from all regions. Once the Container App has been created, please enable the system-assigned managed identity. Lastly, please make a note of your app name (e.g., mine is az-1767-aca), as we will use it in the following steps.

C-1: Create a database/SQL server. During this process, you need to specify the user created in Step A as the database administrator, and be sure to select "Microsoft Entra-only authentication." In this mode, the username/password will no longer be used. Then, click on "Next: Networking." (If both Microsoft Entra and Username & Password login methods are selected, for security reasons it is strongly recommended to choose Microsoft Entra only. The Username & Password option will not be used in this tutorial.) Since this article does not cover the detailed network configuration of the database, temporarily allow public access during the tutorial. Use the default values for the other settings, click on "Review + Create," and then click "Create" to finish the setup. During this process, you also need to specify the system-assigned managed identity created in Step B as the entity that will actually operate the database. Leave the rest of the settings at their defaults and finally create the database.

C-2: After the database has been created, you can log in using the identity "cch@thexxxxxxxxxxxx" from Step A, which is the database administrator. Open a PowerShell terminal and, using the "cch" account, enter the following command to log in to SQL Server.
You will need to change the <text> placeholders to follow your company's naming conventions.

sqlcmd -S <YOUR_SERVER_NAME>.database.windows.net -d <YOUR_DB_NAME> -U <YOUR_FULL_USER_EMAIL> -G

You will be prompted for two-step verification. Next, we need to create database users for the managed identities set up in Step B. First, we will introduce the method for the system-assigned managed identity. The purpose of the commands is to grant database-related operational permissions to the newly created user. This is just an example; in actual scenarios, you should follow your company's security policies and make the necessary adjustments accordingly. Please enter the following commands.

CREATE USER [<YOUR_APP_NAME>] FROM EXTERNAL PROVIDER;
USE [<YOUR_DB_NAME>];
EXEC sp_addrolemember 'db_owner', '<YOUR_APP_NAME>';

For testing purposes, we will create a test table and insert some data.

CREATE TABLE TestTable (
    Column1 INT,
    Column2 NVARCHAR(100)
);

INSERT INTO TestTable (Column1, Column2) VALUES (1, 'First Record');
INSERT INTO TestTable (Column1, Column2) VALUES (2, 'Second Record');

B-2: Developers can now start building the Docker image. In my sample development environment, I'm using .NET 8.0. Run the following command in your development environment to create a Hello World project:

dotnet new web -n WebApp --no-https

This command will generate many files for the project. You will need to modify both Program.cs and WebApp.csproj.

using Microsoft.Data.SqlClient;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/", async context =>
{
    var response = context.Response;
    var connectionString = "Server=az-1767-dbserver.database.windows.net;Database=az-1767-db;Authentication=ActiveDirectoryMsi;TrustServerCertificate=True;";
    await response.WriteAsync("Hello World\n\n");
    try
    {
        using var conn = new SqlConnection(connectionString);
        await conn.OpenAsync();
        var cmd = new SqlCommand("SELECT Column1, Column2 FROM TestTable", conn);
        using var reader = await cmd.ExecuteReaderAsync();
        while (await reader.ReadAsync())
        {
            var line = $"{reader.GetInt32(0)} - {reader.GetString(1)}";
            await response.WriteAsync(line + "\n");
        }
    }
    catch (Exception ex)
    {
        await response.WriteAsync($"[Error] {ex.Message}");
    }
});

app.Run("http://0.0.0.0:80");

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.Data.SqlClient" Version="5.1.4" />
  </ItemGroup>
</Project>

Please note the connectionString in Program.cs. The string must follow a specific format; you'll need to replace az-1767-dbserver and az-1767-db with your own server and database names. After making the modifications, run the following command in the development environment. It will compile the project into a DLL and immediately run it (press Ctrl+C to stop).

dotnet run
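As a side note, instead of relying on the Authentication=ActiveDirectoryMsi keyword in the connection string, you can acquire the token for the managed identity yourself. Below is a minimal sketch of that variation; it is not part of the original sample and assumes the Azure.Identity NuGet package plus the same server/database names used above.

using Azure.Core;
using Azure.Identity;
using Microsoft.Data.SqlClient;

// Acquire an Entra ID token for Azure SQL using the system-assigned managed identity.
var credential = new ManagedIdentityCredential();
AccessToken token = await credential.GetTokenAsync(
    new TokenRequestContext(new[] { "https://database.windows.net/.default" }));

// Note: the connection string must not contain an Authentication keyword
// when AccessToken is set manually.
using var conn = new SqlConnection(
    "Server=az-1767-dbserver.database.windows.net;Database=az-1767-db;TrustServerCertificate=True;");
conn.AccessToken = token.Token;
await conn.OpenAsync();

Acquiring the token explicitly can be useful when you want to control token caching or reuse one credential across several clients; for this walkthrough, the connection-string approach shown earlier is simpler.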
Once the build is complete, you can package the entire project into a Docker image. Create a Dockerfile in the root of your project.

FROM mcr.microsoft.com/dotnet/sdk:8.0

# Install ODBC Driver
RUN apt-get update \
    && apt-get install -y unixodbc odbcinst unixodbc-dev curl vim \
    && curl -sSL -O https://packages.microsoft.com/debian/12/prod/pool/main/m/msodbcsql18/msodbcsql18_18.5.1.1-1_amd64.deb \
    && ACCEPT_EULA=Y DEBIAN_FRONTEND=noninteractive dpkg -i msodbcsql18_18.5.1.1-1_amd64.deb \
    && rm msodbcsql18_18.5.1.1-1_amd64.deb

# Setup Project Code
RUN mkdir /WebApp
COPY ./WebApp /WebApp

# OTHER
EXPOSE 80
CMD ["dotnet", "/WebApp/bin/Debug/net8.0/WebApp.dll"]

In this case, we are using mcr.microsoft.com/dotnet/sdk:8.0 as the base image. To allow access to Azure SQL DB, you'll also need to install the ODBC driver in the image. Use the following commands to build the image and push it to your Docker Hub (docker.io). Please adjust the image tag (for example, az-1767-aca:202504091739 can be renamed to your preferred version) and replace theringe with your own Docker Hub username.

docker build -t az-1767-aca:202504091739 . --no-cache
docker tag az-1767-aca:202504091739 theringe/az-1767-aca:202504091739
docker push theringe/az-1767-aca:202504091739

After building and uploading the image, go back to your Container App and update the image configuration. Once the new image is applied, visit the app's homepage and you'll see the result.

References:

Connect Azure SQL Server via User Assigned Managed Identity under Django | Microsoft Community Hub
Managed identities in Azure Container Apps | Microsoft Learn
Azure Identity client library for Python | Microsoft Learn

Understanding 'Always On' vs. Health Check in Azure App Service
The 'Always On' feature in Azure App Service helps keep your app warm by ensuring it remains running and responsive, even during periods of inactivity with no incoming traffic. This feature pings the app's root URI every 5 minutes. The Health check feature, on the other hand, pings a configured path every minute to monitor application availability on each instance.

What is 'Always On' in Azure App Service?

The Always On feature ensures that the host process of your web app stays running continuously. This results in better responsiveness after idle periods, since the app doesn't need to cold boot when a request arrives.

How to enable Always On:

1. Navigate to the Azure Portal and open your Web App.
2. Go to Configuration > General Settings.
3. Toggle Always On to On.

What is Health Check in Azure App Service?

Health check increases your application's availability by rerouting requests away from instances where the application is marked unhealthy, and by replacing instances if they remain unhealthy.

How to enable Health check:

1. Navigate to the Azure Portal and open your Web App.
2. Under Monitoring, select Health check.
3. Select Enable and provide a valid URL path for your application, such as /health or /api/health.
4. Select Save.

So, is it still necessary to enable the 'Always On' feature when Health check is already pinging your application every minute? Yes, and the test below explains why.

Test app scenario: Health check enabled (pointing to the /health_check path) and Always On disabled. Started the app and sent some user requests.

Observations from the test: After the application starts up, health check pings begin following the end user's request. The table below shows health check pings following a user's request to the root URI.

Time Bucket                 | URL           | Status | Request Count
2025-03-20 07:00:00.0000000 | /             | 200    | 6
2025-03-20 07:00:00.0000000 | /health_check | 200    | 30
2025-03-20 07:30:00.0000000 | /health_check | 200    | 30

Subsequent health check pings will continue, even in the absence of user requests. However, after restarting the app and in the absence of any user requests, we observed that health check requests were not initiated. This indicates that health check pings do not start automatically unless the application is actively running and serving requests.

Conclusion: Always On ensures that the app is proactively kept warm by sending root URI pings, even post-restart. The health check feature is useful for monitoring application availability while the application is active. However, after a restart, if the application isn't active due to a lack of requests, health check pings won't initiate. Therefore, it is highly recommended to enable Always On, particularly for applications that need continuous availability, and to avoid application process unload events.

Recommendation: Enable Always On alongside Health check to ensure optimal performance and reliability.
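For reference, here is a minimal sketch of the kind of endpoint a Health check path such as /health can point at in an ASP.NET Core app, using the built-in health checks middleware; the path and the idea of adding custom checks are illustrative assumptions, not part of the test above.

// Program.cs - minimal ASP.NET Core app exposing a health endpoint
// (assumes the Microsoft.NET.Sdk.Web project SDK).
var builder = WebApplication.CreateBuilder(args);

// Register the built-in health checks service; custom checks
// (database connectivity, downstream APIs, etc.) can be added here.
builder.Services.AddHealthChecks();

var app = builder.Build();

// The App Service Health check feature would be configured to ping this path.
app.MapHealthChecks("/health");

app.MapGet("/", () => "Hello World");

app.Run();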
Capture .NET Profiler Trace on the Azure App Service platform

Summary

The article provides guidance on using the .NET Profiler Trace feature in Microsoft Azure App Service to diagnose performance issues in ASP.NET applications. It explains how to configure and collect the trace by accessing the Azure Portal, navigating to the Azure App Service, and selecting the "Collect .NET Profiler Trace" feature. Users can choose between "Collect and Analyze Data" or "Collect Data only" and must select the instance to perform the trace on. The trace stops after 60 seconds but can be extended up to 15 minutes. After analysis, users can view the report online or download the trace file for local analysis, which includes information like slow requests and CPU stacks. The article also details how to analyze the trace using PerfView, a tool available on GitHub, to identify performance issues. Additionally, it provides a table outlining scenarios for choosing a .NET Profiler Trace or a memory dump based on factors like issue type and symptom code. This tool is particularly useful for diagnosing slow or hung ASP.NET applications and is available only in Standard or higher SKUs with the Always On setting enabled.

In this article:

- How to configure and collect the .NET Profiler Trace
- How to download the .NET Profiler Trace
- How to analyze a .NET Profiler Trace
- When to use .NET Profiler tracing vs. a memory dump

The tool is exceptionally suited for scenarios where an ASP.NET application is performing slower than expected or gets hung. As shown in Figure 1, this feature is available only in the Standard or higher Stock Keeping Unit (SKU) tiers with Always On enabled. If you try to configure a .NET Profiler Trace without both configurations, the following messages are rendered.

Azure App Service Diagnose and solve problems blade in the Azure Portal error messages:

- Error – This tool is supported only on Standard, Premium, and Isolated Stock Keeping Unit (SKU) only with AlwaysOn setting enabled to TRUE.
- Error – We determined that the web app is not "Always-On" enabled and diagnostic does not work reliably with Auto Heal. Turn on the Always-On setting by going to the Application Settings for the web app and then run these tools.

How to configure and collect the .NET Profiler Trace

To configure a .NET Profiler Trace, access the Azure Portal and navigate to the Azure App Service which is experiencing a performance issue. Select Diagnose and solve problems and then the Diagnostic Tools tile.

Azure App Service Diagnose and solve problems blade in the Azure Portal

Select the "Collect .NET Profiler Trace" feature on the Diagnostic Tools blade and the following blade is rendered. Notice that you can only select Collect and Analyze Data or Collect Data only. Choose the one you prefer, but do consider having the feature perform the analysis; you can download the trace for offline analysis if necessary. Also notice that you need to select the instance on which you want to perform the trace. In this scenario, there is only one, so the selection is simple. However, if your app runs on multiple instances, either select them all or, if you have identified a specific instance which is behaving slowly, select only that one. You realize the best results if you can isolate a single instance enough so that the request you send is the only one received on that instance. However, in a scenario where the request or instance is not known, the trace still adds value and insights. Adding a thread report means a list of all the threads in the process is also collected at the end of the profiler trace.
The thread report is useful especially if you are troubleshooting hung processes, deadlocks, or requests taking more than 60 seconds. This pauses your process for a few seconds until the thread dump is generated. CAUTION: a thread report is NOT recommended if you are experiencing high CPU in your application; you may experience issues during trace analysis if CPU consumption is high.

Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace blade in the Azure Portal

There are a few points called out in the previous image which are important to read and consider. Specifically, the .NET Profiler Trace will stop after 60 seconds from the time that it is started. Therefore, if you can reproduce the issue, have the reproduction steps ready before you start the profiling. If you are not able to reproduce the issue, then you may need to run the trace a few times until the slowness or hang occurs. The collection time can be increased up to 15 minutes (900 seconds) by adding an application setting named IIS_PROFILING_TIMEOUT_IN_SECONDS with a value of up to 900. After selecting the instance to perform the trace on, press the Collect Profiler Trace button, wait for the profiler to start as seen here, then reproduce the issue or wait for it to occur.

Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status starting window

After the issue is reproduced, the .NET Profiler Trace continues to the next step of stopping, as seen here.

Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status stopping window

Once stopped, the process continues to the analysis phase if you selected the Collect and Analyze Data option, as seen in the following image; otherwise, you are provided a link to download the file for analysis on your local machine. The analysis can take some time, so be patient.

Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status analyzing window

After the analysis is complete, you can either view the analysis online or download the trace file for local analysis.

How to download the .NET Profiler Trace

Once the analysis is complete, you can view the report by selecting the link in the Reports column, as seen here.

Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status complete window

Clicking on the report, you see the following. There is some useful information in this report, like a list of slow requests, failed requests, thread call stacks, and CPU stacks. Also shown is a breakdown of where the time was spent during response generation, into categories like Application Code, Platform, and Network. In this case, all the time is spent in the application code.

Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace review the Report

To find out specifically where in the application code the time is spent, perform the analysis of the trace locally.

How to analyze a .NET Profiler Trace

After downloading the trace by selecting the link in the Data column, you can use a tool named PerfView, which is downloadable on GitHub here. Begin by opening PerfView and double-clicking on the ".DIAGSESSION" file; after some moments, expand it to render the Event Trace Log (ETL) file, as shown here.

Analyze Azure App Service .NET Profiler Trace with PerfView

Double-click on the Thread Time (with startStop Activities) Stacks node, which opens up a new window similar to the one shown next.
If your App Service is configured as out-of-process, select the dotnet process which is associated with your app code. If your App Service is in-process, select the w3wp process.

Analyze Azure App Service .NET Profiler Trace with PerfView, dotnet out-of-process

Double-click on dotnet and another window is rendered, as shown here. From the previous image (the .NET Profiler Trace review of the report), it is clear where the slowness is coming from; find it in the Name column or search for it by entering the page name into the Find text box.

Analyze Azure App Service .NET Profiler Trace with PerfView, dotnet out-of-process, method, and class discovery

Once found, right-click on the row and select Drill Into from the pop-up menu, shown here. Select the Call Tree tab and the reason for the issue renders, showing which request was performing slowly.

Analyze Azure App Service .NET Profiler Trace with PerfView, dotnet out-of-process, root cause

This example is relatively simple. As you analyze more performance issues using PerfView to analyze .NET Profiler Traces, your ability to find the root cause of more complicated performance issues will grow.

When to use .NET Profiler tracing vs. a memory dump

The same issue can be seen in a memory dump; however, there are some scenarios where a .NET Profiler trace would be best. Here is a table, Table 1, which describes scenarios for when to capture a .NET Profiler trace or a memory dump.

Issue Type   | Symptom Code      | Symptom                                                             | Stack                | Startup Issue | Intermittent | Scenario
Performance  | 200               | Requests take 500 ms to 2.5 seconds, or take <= 60 seconds          | ASP.NET/ASP.NET Core | No     | No     | Profiler
Performance  | 200               | Requests take > 60 seconds & < 230 seconds                          | ASP.NET/ASP.NET Core | No     | No     | Dump
Performance  | 502.3/500.121/503 | Requests take >= 120 to <= 230 seconds                              | ASP.NET              | No     | No     | Dump, Profiler
Performance  | 502.3/500.121/503 | Requests timing out >= 230 seconds                                  | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump
Performance  | 502.3/500.121/503 | App hangs or deadlocks (ex: due to async anti-pattern)              | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump
Performance  | 502.3/500.121/503 | App hangs on startup (ex: caused by non-async deadlock issue)       | ASP.NET/ASP.NET Core | No     | Yes/No | Dump
Performance  | 502.3/500.121     | Requests timing out >= 230 seconds (time out)                       | ASP.NET/ASP.NET Core | No     | No     | Dump
Availability | 502.3/500.121/503 | High CPU causing app downtime                                       | ASP.NET              | No     | No     | Profiler, Dump
Availability | 502.3/500.121/503 | High memory causing app downtime                                    | ASP.NET/ASP.NET Core | No     | No     | Dump
Availability | 500.0[121]/503    | SQLException or some exception causes app downtime                  | ASP.NET              | No     | No     | Dump, Profiler
Availability | 500.0[121]/503    | App crashing due to fatal exception at native layer                 | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump
Availability | 500.0[121]/503    | App crashing due to exit code (ex: 0xC0000374)                      | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump
Availability | 500.0             | App throwing nonfatal exceptions (during the context of a request)  | ASP.NET              | No     | No     | Profiler, Dump
Availability | 500.0             | App throwing nonfatal exceptions (during the context of a request)  | ASP.NET/ASP.NET Core | No     | Yes/No | Dump

Table 1, when to capture a .NET Profiler Trace or a memory dump on Azure App Service, Diagnose and solve problems

Use this table as a guide to help decide how to approach solving performance and availability problems which are occurring in your application source code. Here are some descriptions of the column headings:

- Issue Type – Performance means that a request to the app is responding or processing the response, but not at the speed at which it is expected to.
Availability means that the request is failing or consuming more resources than expected.
- Symptom Code – the HTTP status and/or substatus which is returned by the request.
- Symptom – a description of the behavior experienced while engaging with the application.
- Stack – this table targets .NET, specifically ASP.NET and ASP.NET Core applications.
- Startup Issue – if "No", then the scenario can or should be used; "No" represents that the issue is not at startup. If "Yes/No", it means the scenario is useful for troubleshooting startup issues.
- Intermittent – if "No", then the scenario can or should be used; "No" means the issue is not intermittent or that it can be reproduced. If "Yes/No", it means the scenario is useful if the issue happens randomly or cannot be reproduced, meaning that the tool can be set to trigger on a specific event or left running for a specific amount of time until the exception happens.
- Scenario – "Profiler" means that the collection of a .NET Profiler Trace would be recommended. "Dump" means that a memory dump would be your best option. If both are provided, then both can be useful when the given symptoms and status codes are present.

You might find the videos in Table 2 useful; they instruct you on how to collect and analyze a memory dump or .NET Profiler Trace.

Product      | Stack   | Hosting | Symptom     | Capture | Analyze | Scenario
App Service  | Windows | in      | High CPU    | link    | link    | Dump
App Service  | Windows | in      | High Memory | link    | link    | Dump
App Service  | Windows | in      | Terminate   | link    | link    | Dump
App Service  | Windows | in      | Hang        | link    | link    | Dump
App Service  | Windows | out     | High CPU    | link    | link    | Dump
App Service  | Windows | out     | High Memory | link    | link    | Dump
App Service  | Windows | out     | Terminate   | link    | link    | Dump
App Service  | Windows | out     | Hang        | link    | link    | Dump
Function App | Windows | in      | High CPU    | link    | link    | Dump
Function App | Windows | in      | High Memory | link    | link    | Dump
Function App | Windows | in      | Terminate   | link    | link    | Dump
Function App | Windows | in      | Hang        | link    | link    | Dump
Function App | Windows | out     | High CPU    | link    | link    | Dump
Function App | Windows | out     | High Memory | link    | link    | Dump
Function App | Windows | out     | Terminate   | link    | link    | Dump
Function App | Windows | out     | Hang        | link    | link    | Dump
Azure WebJob | Windows | in      | High CPU    | link    | link    | Dump
App Service  | Windows | in      | High CPU    | link    | link    | .NET Profiler
App Service  | Windows | in      | Hang        | link    | link    | .NET Profiler
App Service  | Windows | in      | Exception   | link    | link    | .NET Profiler
App Service  | Windows | out     | High CPU    | link    | link    | .NET Profiler
App Service  | Windows | out     | Hang        | link    | link    | .NET Profiler
App Service  | Windows | out     | Exception   | link    | link    | .NET Profiler

Table 2, short video instructions on capturing and analyzing dumps and profiler traces

Here are a few other helpful videos for troubleshooting Azure App Service availability and performance issues:

- View Application EventLogs Azure App Service
- Add Application Insights To Azure App Service

Prior to capturing and analyzing memory dumps, consider viewing this short video: Setting up WinDbg to analyze Managed code memory dumps, and this blog post titled Capture memory dumps on the Azure App Service platform.

Questions & Answers

- Q: What are the prerequisites for using the .NET Profiler Trace feature in Azure App Service? A: To use the .NET Profiler Trace feature in Azure App Service, the application must be running on a Standard or higher Stock Keeping Unit (SKU) with the Always On setting enabled. If these conditions are not met, the tool will not function, and error messages will be displayed indicating the need for these configurations.
- Q: How can you extend the default collection time for a .NET Profiler Trace beyond 60 seconds? A: The default collection time for a .NET Profiler Trace is 60 seconds, but it can be extended up to 15 minutes (900 seconds) by adding an application setting named IIS_PROFILING_TIMEOUT_IN_SECONDS with a value of up to 900. This allows a longer duration to capture the necessary data for analysis.
- Q: When should you use a .NET Profiler Trace instead of a memory dump for diagnosing performance issues in an ASP.NET application? A: A .NET Profiler Trace is recommended for diagnosing performance issues where requests take between 500 milliseconds to 2.5 seconds or less than 60 seconds. It is also useful for identifying high CPU usage causing app downtime. In contrast, a memory dump is more suitable for scenarios where requests take longer than 60 seconds, the application hangs or deadlocks, or there are issues related to high memory usage or app crashes due to fatal exceptions.

Keywords

Microsoft Azure, Azure App Service, .NET Profiler Trace, ASP.NET performance, Azure debugging tools, .NET performance issues, Azure diagnostic tools, Collect .NET Profiler Trace, Analyze .NET Profiler Trace, Azure portal, Performance troubleshooting, ASP.NET application, Slow ASP.NET app, Azure Standard SKU, Always On setting, Memory dump vs profiler trace, PerfView analysis, Azure performance diagnostics, .NET application profiling, Diagnose ASP.NET slowness, Azure app performance, High CPU usage ASP.NET, Azure app diagnostics, .NET Profiler configuration, Azure app service performance

Azure App Service Logging: How to Monitor Your Web Apps in Real-Time
As a developer, having visibility into the behavior of your applications is crucial to maintaining the reliability and performance of your software. Luckily, Azure App Service provides two powerful logging features to help you monitor your web apps in real time: App Service Logs and Log Stream. In this blog post, we'll explore how to configure these features for both Windows and Linux Web Apps in Azure App Service.
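As a quick illustration of the payoff (this sketch is not from the post itself): any console or ILogger output an ASP.NET Core app writes will surface in the Log Stream once "Application logging (Filesystem)" is enabled under App Service logs.

// Program.cs - minimal ASP.NET Core app whose log output appears in Log Stream.
var builder = WebApplication.CreateBuilder(args);

// Console output is what the App Service Log Stream surfaces.
builder.Logging.AddConsole();

var app = builder.Build();

app.MapGet("/", (ILogger<Program> logger) =>
{
    logger.LogInformation("Handling request at {Time}", DateTimeOffset.UtcNow);
    return "Hello from App Service";
});

app.Run();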