Public Preview: Smarter Troubleshooting in Azure Monitor with AI-powered Investigation
Investigate smarter – click, analyze, and easily mitigate with Azure Monitor investigations! We are excited to introduce the public preview of Azure Monitor issues and investigations. These new capabilities are designed to enhance your troubleshooting experience and streamline the process of resolving health degradations in your applications and infrastructure.
What’s new in Observability at Build 2025

At Build 2025, we are excited to announce new features in Azure Monitor designed to enhance observability for developers and SREs, making it easier for you to streamline troubleshooting, improve monitoring efficiency, and gain deeper insights into application performance. With our new AI-powered tools, customizable alerts, and advanced visualization capabilities, we’re empowering developers to deliver high-quality, resilient applications with greater operational efficiency.

AI-Powered Troubleshooting Capabilities
We are introducing two new AI-powered features, as well as an update to a GA feature, that enhance troubleshooting and monitoring:
- AI-powered investigations (Public Preview): Identifies possible explanations for service degradations via automated analyses, consolidating all observability-related data for faster problem mitigation. Attend our live demo at Build and learn more here.
- Health models (Public Preview – coming in June 2025): Significantly improves the efficiency of detecting business-impacting issues in workloads, empowering organizations to deliver applications with operational efficiency and resilience through a full-stack view of workload health. Attend our live demo at Build to get a preview of the experience and learn more here.
- AI-powered Application Insights Code Optimizations (GA): Provides code-level suggestions for running .NET apps on Azure. Now it’s easier to get code-level suggestions with the GitHub Copilot coding agent (preview) and GitHub Copilot for Azure in VS Code. Learn more here.

Enhanced AI and agent observability
Azure Monitor and Azure AI Foundry now jointly offer real-time monitoring and continuous evaluation of AI apps and agentic systems in production. These capabilities are deeply integrated with the Foundry Observability experience and allow you to track key metrics such as performance, quality, safety, and resource usage. Features include:
- Unified observability dashboard for generative AI apps and agents (Public Preview): Provides full-stack visibility of AI apps and infrastructure, with AI app metrics surfaced in both Azure Monitor and Foundry Observability.
- Alerts: Data is published to Azure Monitor Application Insights, allowing users to set alerts and analyze them for troubleshooting.
- Debug with tracing capabilities: Enables detailed root-cause analysis of issues like groundedness regressions.
Learn more in our breakout session at Build!

Improved Visualization
We have expanded our visualization capabilities, particularly for Kubernetes services:
- Azure Monitor dashboards with Grafana (Public Preview): Create and edit Grafana dashboards directly in the Azure Portal at no additional cost. This includes dashboards for Azure Kubernetes Service (AKS) and other Azure resources. Learn more.
- Managed Prometheus visualizations: Supports managed Prometheus visualizations for both AKS clusters (GA) and Arc-enabled Kubernetes clusters (Public Preview), offering a more cost-efficient and performant solution. Learn more.

Customized and Simplified Monitoring
Through enhancements to alert customization, we’re making it easier for you to get started with monitoring:
- Prometheus community recommended alerts: Offers one-click enablement of Prometheus recommended alerts for AKS clusters (GA) and Arc-enabled Kubernetes clusters (Public Preview), providing comprehensive alerting coverage across cluster, node, and pod levels.
- Simple log alerts (Public Preview): Designed to provide a simplified and more intuitive experience for monitoring and alerting, simple log alerts evaluate each row individually, providing faster alerting compared to traditional log alerts. Simple log alerts support multiple log tiers, including Analytics and Basic Logs, which previously did not have an alerting solution. Learn more.
- Customizable email subjects for log search alerts (Public Preview): Allows customers to personalize the subject lines of alert emails, including dynamic values, making it easier to quickly identify and respond to alerts.
- Send a custom event from the Azure Monitor OpenTelemetry Distro (GA): Offers developers a way to track the user or system actions that matter most to their business objectives, now available in the Azure Monitor OpenTelemetry Distro. Learn more. (A minimal sketch of emitting such an event follows at the end of this post.)
- Application Insights auto-instrumentation for Java and Node microservices on AKS (Public Preview): Easily monitor your Java and Node deployments without changing your code by leveraging auto-instrumentation that is integrated into the AKS cluster. These capabilities help you assess the performance of your application and identify the cause of incidents efficiently. Learn more.

Enhancements for Large Enterprises and Government Entities
Azure Monitor Logs is introducing several new features aimed at supporting highly sensitive and high-volume logs, empowering large enterprises and government entities. With better data control and access, developers at these organizations can work more closely with IT professionals to improve the reliability of their applications.
- Workspace replication (GA): Enhances resilience to regional incidents by enabling cross-regional workspace replication. Logs are ingested in both regions, ensuring continued observability through dashboards, alerts, and advanced solutions like Microsoft Sentinel.
- Granular RBAC (Public Preview): Supports granular role-based access control (RBAC) using Azure attribute-based access control (ABAC). This allows organizations to apply row-level control over which data is visible to specific users.
- Data deletion capability (GA): Allows customers to quickly mark unwanted log entries, such as sensitive or corrupt data, as deleted without physically removing them from storage. It’s useful for unplanned deletions, using filters to target specific records while preserving data integrity for analysis.
- Process more log records in the Azure Portal (GA): Supports up to 100,000 records per query in the Azure Portal, enabling deeper investigations and broader data analysis directly within the portal without the need for additional tools.

We’re proud to further Azure Monitor’s commitment to providing comprehensive and efficient observability solutions for developers, SREs, and IT professionals alike. For more information, chat with Observability experts through the following sessions at Build 2025:
- BRK168: AI and Agent Observability with Azure AI Foundry and Azure Monitor
- BRK188: Power your AI Apps Across Cloud and Edge with Azure Arc
- DEM547: Enable application monitoring and troubleshooting faster with Azure Monitor
- DEM537: Mastering Azure Monitor: Essential Tips in 15 Minutes
- Expo Hall (Meet the Experts): Azure Arc and Azure Monitor booth
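To make the custom-event item above more concrete, here is a minimal sketch in Python using the Azure Monitor OpenTelemetry Distro. It assumes the azure-monitor-opentelemetry package is installed and an Application Insights connection string is set in the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable; the "microsoft.custom_event.name" attribute key reflects the custom-event convention as I understand it and should be treated as an assumption to verify against the current Distro documentation for your language.

```python
# Minimal sketch: emit a custom event through the Azure Monitor OpenTelemetry Distro (Python).
# Assumes: pip install azure-monitor-opentelemetry, and APPLICATIONINSIGHTS_CONNECTION_STRING is set.
import logging

from azure.monitor.opentelemetry import configure_azure_monitor

# Wires OpenTelemetry tracing, metrics, and logging to Application Insights.
configure_azure_monitor()

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# A log record carrying the custom-event name attribute is surfaced as a custom event.
# The attribute key below is an assumption; confirm it against the Distro docs.
logger.info(
    "User completed checkout",
    extra={
        "microsoft.custom_event.name": "CheckoutCompleted",  # event name shown in Application Insights
        "cart_value": 42.50,                                  # arbitrary business dimension
    },
)
```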
Announcing the Public Preview of Azure Monitor health models

Troubleshooting modern cloud-native workloads has become increasingly complex. As applications scale across distributed services and regions, pinpointing the root cause of performance degradation or outages often requires navigating a maze of disconnected signals, metrics, and alerts. This fragmented experience slows down troubleshooting and burdens engineering teams with manual correlation work. We address these challenges by introducing a unified, intelligent concept of workload health that’s enriched with application context. Health models streamline how you monitor, assess, and respond to issues affecting your workloads. Built on Azure Service Groups, they provide an out-of-the-box model tailored to your environment, consolidate signals to reduce alert noise, and surface actionable insights — all designed to accelerate detection, diagnosis, and resolution across your Azure landscape.

Overview
Azure Monitor health models enable customers to monitor the health of their applications with ease and confidence. These models use the Azure-wide workload concept of Service Groups to infer the scope of workloads and provide out-of-the-box health criteria based on platform metrics for Azure resources.

Key Capabilities

Out-of-the-Box Health Model
Customers often struggle with defining and monitoring the health of their workloads due to the variability of metrics across different Azure resources. Azure Monitor health models provide a simplified out-of-the-box health experience built using Azure Service Group membership. Customers can define the scope of their workload using Service Groups and receive default health criteria based on platform metrics. This includes recommended alert rules for various Azure resources, ensuring comprehensive monitoring coverage.

Improved Detection of Workload Issues
Isolating the root cause of workload issues can be time-consuming and challenging, especially when dealing with multiple signals from various resources. The health model aggregates health signals across the model to generate a single health notification, helping customers isolate the type of signal that became unhealthy. This enables quick identification of whether the issue is related to backend services or user-centric signals.

Quick Impact Assessment
Assessing the impact of workload issues across different regions and resources can be complex and slow, leading to delayed responses and prolonged downtime. The health model provides insights into which Azure resources or components have become unhealthy, which regions are affected, and the duration of the impact based on health history. This allows customers to quickly assess the scope and severity of issues within the workload.

Localize the Issue
Identifying the specific signals and resources that triggered a health state change can be difficult, leading to inefficient troubleshooting and resolution processes. Health models inform customers which signals triggered the health state change and which Service Group members were affected. This enables quick isolation of the trouble source and notifies the relevant team, streamlining the troubleshooting process.

Customizable Health Criteria for Bespoke Workloads
Many organizations operate complex, bespoke workloads that require their own specific health definitions. Relying solely on default platform metrics can lead to blind spots or false positives, making it difficult to accurately assess the true health of these custom applications.
Azure Monitor health models allow customers to tailor health assessments by adding custom health signals. These signals can be sourced from Azure Monitor data such as Application Insights, Managed Prometheus, and Log Analytics. This flexibility empowers teams to tune the health model to reflect the unique characteristics and performance indicators of their workloads, ensuring more precise and actionable health insights. (A small sketch of querying the kind of signal you might feed in follows at the end of this post.)

Getting Started
Ready to simplify and accelerate how you monitor the health of your workloads? Getting started with Azure Monitor health models is easy — and during the public preview, it’s completely free to use. Pricing details will be shared ahead of general availability (GA), so you can plan with confidence.

Start Monitoring in Minutes
1. Define your Service Group: Create your Service Group and add the relevant resources as members. If you don’t yet have access to Service Groups, you can join here.
2. Create your health model: In the Azure Portal, navigate to Health Models and create your first model. You’ll get out-of-the-box health criteria applied automatically.
3. Customize to fit your needs: In many cases the default health signals will suit your needs, but customization is supported as well.
4. Investigate and act: Use the health timeline and our alerting integration to quickly assess impact, isolate issues, and take action — all from a single pane of glass.

We are targeting a mid-June launch, at which point you can find health models in the portal and the documentation will be published under the Azure Monitor documentation area.

We Want to Hear From You
Azure Monitor health models are built with our customers in mind — and your feedback is essential to shaping the future of this experience. Whether you're using the out-of-the-box health model or customizing it to fit your unique workloads, we want to know what’s working well and where we can improve.

Share Your Feedback
- Use the “Give Feedback” feature directly within the Azure Monitor health models experience to send us your thoughts in context.
- Post your ideas in the Azure Monitor community.
- Prefer email? Reach out to us at [email protected] — we’re listening.

Your insights help us prioritize features, improve usability, and ensure Azure Monitor continues to meet the evolving needs of modern cloud-native operations.
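Health models themselves are configured in the portal, but as an illustration of the kind of custom signal you might point one at, here is a hedged Python sketch that runs a Log Analytics (KQL) query with the azure-monitor-query SDK. The workspace ID, table, and threshold are placeholders, and the query is only an assumption about what a user-centric health signal could look like; it is not the health models API itself.

```python
# Sketch: evaluate a candidate "custom health signal" by querying Log Analytics directly.
# Assumes: pip install azure-monitor-query azure-identity; workspace ID and query are placeholders.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

WORKSPACE_ID = "<log-analytics-workspace-guid>"  # placeholder

# Hypothetical signal: share of failed requests over the last 15 minutes.
# AppRequests is the workspace-based Application Insights table; adjust to your own schema.
QUERY = """
AppRequests
| where TimeGenerated > ago(15m)
| summarize failure_rate = 100.0 * countif(Success == false) / count()
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(minutes=15))

if response.status == LogsQueryStatus.SUCCESS:
    table = response.tables[0]
    failure_rate = table.rows[0][0] if table.rows else 0.0
    # 5% is an arbitrary illustrative threshold for an "unhealthy" signal.
    state = "unhealthy" if failure_rate and failure_rate > 5.0 else "healthy"
    print(f"failure_rate={failure_rate}, signal={state}")
else:
    print(f"Query did not complete successfully: {response.partial_error}")
```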
Public Preview: Simple Log Alerts in Azure Monitor

We are excited to announce the Public Preview of Simple Log Alerts in Azure Monitor, available starting in mid-May. This new feature is designed to provide a simplified and more intuitive experience for monitoring and alerting, enhancing your ability to detect and respond to issues in near real-time.

Simple log alerts are a new type of log alert in Azure Monitor, designed as a simpler and faster alternative to log search alerts. Unlike log search alerts, which aggregate rows over a defined period, simple log alerts evaluate each row individually. This feature is now available for customers using Basic Logs who want to enable alerting. Previously, when customers opted to configure the traces table in Azure Monitor Application Insights as Basic Logs for cost optimization, they were unable to create alerts on that data. With the introduction of simple log alerts, customers can continue to benefit from cost savings while still setting up alerts on telemetry from Basic Logs.

Key Benefits

🔍 Simplified Query Language
Unlike log search alerts, which use complex queries with aggregations and joins, simple log alerts use simple row-level transformation queries in Kusto Query Language (KQL). (A conceptual sketch contrasting the two evaluation styles follows at the end of this post.)

⚡ Low Latency Alerting
By evaluating each row individually, simple log alerts provide faster alerting compared to log search alerts. Alerts are triggered in near real-time, allowing for quicker incident response.

🌐 Broad Applicability
Simple log alerts support multiple log tiers, including Analytics and Basic Logs, which previously did not have an alerting solution.

💰 Pricing Information
The pricing for simple log alerts is based on one-minute frequency alerting and is the same as for traditional log alerts. For detailed pricing information, please refer to https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-create-rule-cli-powershell-arm.

📚 Documentation and Links
- Overview of Azure Monitor alerts - Azure Monitor | Microsoft Learn
- Create a simple log search alert in Azure Monitor - Azure Monitor | Microsoft Learn

Your Feedback Matters
We look forward to your feedback and hope you find simple log alerts a valuable addition to your monitoring toolkit. For questions or feedback, feel free to reach out to [email protected] or use the Give Feedback form directly in the Azure Monitor portal.
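To make the "each row individually" distinction concrete, here is a small conceptual sketch in Python. It only mimics the two evaluation styles locally against made-up rows; it is not the alert service itself, and the severity field and thresholds are illustrative placeholders.

```python
# Conceptual sketch: how simple log alerts differ from classic log search alerts.
# A classic log search alert aggregates rows in a window; a simple log alert fires per matching row.

rows = [
    {"TimeGenerated": "2025-05-20T10:00:01Z", "SeverityLevel": 1, "Message": "ok"},
    {"TimeGenerated": "2025-05-20T10:00:02Z", "SeverityLevel": 3, "Message": "timeout calling payments"},
    {"TimeGenerated": "2025-05-20T10:00:03Z", "SeverityLevel": 3, "Message": "timeout calling payments"},
]

# Classic log search alert (conceptually): aggregate first, then compare once per evaluation window.
error_count = sum(1 for r in rows if r["SeverityLevel"] >= 3)
window_alert_fires = error_count > 1  # a single evaluation for the whole window

# Simple log alert (conceptually): the row-level condition is applied to every ingested row,
# so each matching row can raise its own alert carrying that row's content.
per_row_alerts = [r for r in rows if r["SeverityLevel"] >= 3]

print(f"window alert fired: {window_alert_fires}; per-row alerts raised: {len(per_row_alerts)}")
```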
GA: Managed Prometheus visualizations in Azure Monitor for AKS — unified insights at your fingertips

We’re thrilled to announce the general availability (GA) of Managed Prometheus visualizations in Azure Monitor for AKS, along with an enhanced, unified AKS Monitoring experience. Troubleshooting Kubernetes clusters is often time-consuming and complex, whether you're diagnosing failures, scaling issues, or performance bottlenecks. This redesign of the existing Insights experience brings all your key monitoring data into a single, streamlined view, reducing the time and effort it takes to diagnose, triage, and resolve problems so you can keep your applications running smoothly with less manual work. By using Managed Prometheus, customers can also realize up to 80% savings on metrics costs and benefit from up to 90% faster blade load performance, delivering both a powerful and cost-efficient way to monitor and optimize your AKS environment.

What’s New in GA
Since the preview release, we’ve added several capabilities:
- Control plane metrics: Gain visibility into critical components like the API server and ETCD database, essential for diagnosing cluster-level performance bottlenecks.
- Load balancer chart deep links: Jump directly into the networking drilldown view to troubleshoot failed connections and SNAT port issues more efficiently.
- Improved at-scale cluster view: Get a faster, more comprehensive overview across all your AKS clusters, making multi-cluster monitoring easier.

Simplified Troubleshooting, End to End
The enhanced AKS Monitoring experience provides both a basic (free) tier and an upgraded experience with Prometheus metrics and logging — all within a unified, single-pane-of-glass dashboard. Here’s how it helps you troubleshoot faster:
- Identify failing components immediately: With new KPI cards for pod and node status, you can quickly spot pending or failed pods, high CPU/memory usage, or saturation issues, decreasing diagnosis time.
- Monitor and manage cluster scaling smoothly: The Events Summary card surfaces Kubernetes warnings and pending pod states, helping you respond to scale-related disruptions before they impact production.
- Pinpoint root causes of latency and connectivity problems: Detailed node saturation metrics, plus control plane and load balancer insights, make it easier to isolate where slowdowns or failures are occurring — whether at the node, cluster, or network layer.

Free vs. Upgraded Metrics Overview
Here’s a quick comparison of what’s included by default versus what you get with the enhanced experience.

Basic tier metrics:
- Alert summary card
- Events summary card
- Pod status KPI card
- Node status KPI card
- Node CPU and memory %
- VMSS OS disk bandwidth consumed % (max)
- VMSS OS disk IOPS consumed % (max)

Additional metrics in the upgraded experience:
- Historical Kubernetes events (30 days)
- Warning events by reason
- Namespace CPU and memory %
- Container logs by volume
- Top five controllers by logs volume
- Packets dropped I/O
- Load balancer SNAT port usage
- API server CPU % (max) (preview)
- API server memory % (max) (preview)
- ETCD database usage % (max) (preview)

See What Customers Are Saying
Early adopters have already seen meaningful improvements:
"Azure Monitor managed Prometheus visualizations for Container Insights has been a game-changer for our team. Offloading the burden of self-hosting and maintaining our own Prometheus infrastructure has significantly reduced our operational overhead. With the managed add-on, we get the powerful insights and metrics we need without worrying about scalability, upgrades, or reliability.
It seamlessly integrates into our existing Azure environment, giving us out-of-the-box visibility into our container workloads. This solution allows our engineers to focus more on building and delivering features, rather than managing monitoring infrastructure." – an S500 customer in the healthcare industry

Get Started Today
We’re committed to helping you optimize and manage your AKS clusters with confidence. Visit the Azure portal and explore the new AKS Monitoring experience today! Learn more: https://aka.ms/azmon-prometheus-visualizations
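The portal experience above is the primary interface, but the same Managed Prometheus data in an Azure Monitor workspace can also be queried programmatically. The Python sketch below is a hedged example against the workspace's Prometheus-compatible query endpoint (the value shown on the Azure Monitor workspace overview page); the endpoint URL and PromQL expression are placeholders, and the token audience reflects my understanding of the documented Prometheus scope, so verify both against current documentation.

```python
# Sketch: run a PromQL instant query against an Azure Monitor workspace (Managed Prometheus).
# Assumes: pip install azure-identity requests; the query endpoint is copied from the AMW overview page.
import requests
from azure.identity import DefaultAzureCredential

# Placeholder -- use the "Query endpoint" value from your Azure Monitor workspace.
QUERY_ENDPOINT = "https://<your-amw>.<region>.prometheus.monitor.azure.com"

# Token audience for Managed Prometheus queries (an assumption; verify against current docs).
credential = DefaultAzureCredential()
token = credential.get_token("https://prometheus.monitor.azure.com/.default")

resp = requests.get(
    f"{QUERY_ENDPOINT}/api/v1/query",
    params={"query": "sum(rate(container_cpu_usage_seconds_total[5m])) by (namespace)"},
    headers={"Authorization": f"Bearer {token.token}"},
    timeout=30,
)
resp.raise_for_status()

# Standard Prometheus HTTP API response shape: data.result is a list of {metric, value} entries.
for result in resp.json()["data"]["result"]:
    print(result["metric"].get("namespace"), result["value"][1])
```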
Announcing the Launch of Customizable Email Subjects for Log Search Alerts V2 in Azure Monitor

We are thrilled to announce the launch of a new feature in Azure Monitor: Customizable Email Subjects for Log Search Alerts V2, available during May.

What it is
Customizable Email Subjects for Log Search Alerts V2 is a new feature that enables customers to personalize the subject lines of alert emails, making it easier to quickly identify and respond to alerts with more relevant and specific information.

How it works
This feature allows you to override email subjects with dynamic values by concatenating information from the common alert schema with custom text. For example, you can customize email subjects to include specific details such as the name of the virtual machine (VM) or patching details, allowing for quick identification without opening the email.

Getting Started
To get started with Customizable Email Subjects for Log Search Alerts V2, you can use the following methods:

Using an ARM template: Create an alert rule with action properties using an ARM template. This option will be available during May. To create an alert rule with a customized email subject, use an ARM template with the latest API version (2021-08-01): Resource Manager template samples for log search alerts - Azure Monitor | Microsoft Learn. Then add the action properties parameter with the subject value (including static and dynamic values), for example:

    {
      "actionProperties": {
        "Email.Subject": "This is a custom email subject"
      }
    }

At the end of the blog post you can find an example ARM template attached for your use.

Using the UI: Create an alert rule with action properties using the UI. This option will be available during June.

How to use dynamic values
To customize the email subject, you use the action properties. Action properties are specified as key/value pairs using static text, a dynamic value extracted from the alert payload, or a combination of both. The format for extracting a dynamic value from the alert payload is ${<path to schema field>}, for example ${data.essentials.monitorCondition}. Use the format of the common alert schema to specify the field in the payload, whether or not the action groups configured for the alert rule use the common schema. (A small conceptual sketch of how such a template resolves follows at the end of this post.)

Looking Forward
We are confident that this feature will significantly enhance your monitoring experience. By providing personalized alert emails, you can quickly identify and respond to issues with more relevant and specific information.

Your Feedback Matters
We look forward to your feedback. For questions or feedback, feel free to reach out to [email protected] or use the Give Feedback form directly in the Azure Monitor portal.
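To illustrate how the ${...} dynamic-value syntax composes a subject line, here is a small conceptual sketch in Python. It only mimics the substitution locally against a trimmed, common-alert-schema-shaped payload; the real substitution is performed by Azure Monitor when the action fires, and the payload fields shown (such as targetResource) are illustrative stand-ins rather than the schema's exact shape.

```python
# Conceptual sketch: how a custom email subject with dynamic values resolves.
# Azure Monitor performs this substitution server-side; this local mock is only for illustration.
import re

# Trimmed, illustrative slice of a common-alert-schema payload (field names are stand-ins).
alert_payload = {
    "data": {
        "essentials": {
            "alertRule": "VM patching failures",
            "monitorCondition": "Fired",
            "targetResource": "vm-prod-01",
        }
    }
}

subject_template = "Patch alert on ${data.essentials.targetResource} - ${data.essentials.monitorCondition}"

def resolve(template: str, payload: dict) -> str:
    """Replace ${path.to.field} tokens with values looked up in the payload."""
    def lookup(match: re.Match) -> str:
        value = payload
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", lookup, template)

print(resolve(subject_template, alert_payload))
# -> Patch alert on vm-prod-01 - Fired
```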
Help Shape the Future of Log Analytics: Your Feedback Matters

We’re launching a quick survey to gather your feedback on Azure Monitor Log Analytics. Your input directly impacts our product roadmap and helps us prioritize the features and improvements that matter most to you. The survey takes just a few minutes, and your responses will remain confidential. Take the Survey

New to Log Analytics? Start here:
- Get Started with Azure Monitor Log Analytics
- Overview of Log Analytics in Azure Monitor - Azure Monitor | Microsoft Learn

For questions or additional feedback, feel free to reach out to [email protected]. Thank you for being part of this journey!
Public Preview: Metrics usage insights for Azure Monitor Workspace

As organizations expand their services and applications, reliability and high availability are a top priority to ensure they provide a high level of quality to their customers. As the complexity of these services and applications grows, organizations continue to collect more telemetry to ensure higher observability. However, many are facing a common challenge: increasing costs driven by the ever-growing volume of telemetry data. Over time, as products grow and evolve, not all telemetry remains valuable. In fact, over-instrumentation can create unnecessary noise, generating data that contributes to higher costs without delivering actionable insights. In a time when every team is being asked to do more with less, identifying which telemetry streams truly matter has become essential.

To address this need, we are announcing the Public Preview of metrics usage insights, a feature currently designed for Azure Managed Prometheus users that analyzes all metrics ingested into an Azure Monitor workspace (AMW) and surfaces actionable insights to optimize your observability setup. Metrics usage insights is built to give teams the visibility and tools their organizations need to manage observability costs effectively. It empowers customers to pinpoint metrics that align with their business objectives, uncover areas of unnecessary spend by identifying unused metrics, and sustain a streamlined, cost-effective monitoring approach.

Metrics usage insights sends usage data to a Log Analytics workspace (LAW) for analysis. This is a free offering: there is no charge for the data sent to the Log Analytics workspace, its storage, or queries. Customers are guided to enable the feature as part of the standard out-of-the-box experience during new AMW resource creation. For existing AMWs, it can be configured using diagnostic settings (a hedged sketch of doing this via the ARM REST API follows at the end of this post).

Key Features

1. Understanding Limits and Quotas for Effective Resource Management
Monitoring limits and quotas is crucial for system performance and resource optimization. Tracking usage aids in efficient scaling and cost avoidance. Metrics usage insights provides tools to monitor thresholds, resolve throttling, and ensure cost-effective operations without the need to create support incidents.

2. Workspace Exploration
This experience lets customers explore their AMW data and gain insights. It provides a detailed analysis of data points and samples ingested for billing, at both the metric and workspace levels. Customers can evaluate individual metrics by examining their quantity, ingestion volume, and financial impact.

3. Identifying and Removing Unused Metrics
The metrics usage insights feature helps identify underutilized metrics that are being ingested but not used through dashboards, monitors, or API calls. Users facing high storage and ingestion costs can use this feature to delete unused metrics, optimize high-cost metrics, and reclaim capacity.

Enable metrics usage insights
To enable metrics usage insights, you create a diagnostic setting, which instructs the AMW to send the data supporting the insights queries and workbooks to a Log Analytics workspace (LAW). You'll be prompted to enable it automatically when you create a new Azure Monitor workspace, and you can enable it later for an existing Azure Monitor workspace. Read More
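For existing workspaces, the diagnostic setting can be created outside the portal as well. The sketch below uses the generic Azure Resource Manager diagnosticSettings REST API from Python; the resource IDs are placeholders, and the "allLogs" category group is an assumption standing in for whatever feature-specific category the portal flow selects, so confirm the exact category against the metrics usage insights documentation before relying on it.

```python
# Sketch: route Azure Monitor workspace (AMW) usage data to a Log Analytics workspace (LAW)
# by creating a diagnostic setting via the generic ARM REST API.
# The "allLogs" category group is an assumption; the portal may use a narrower category.
import requests
from azure.identity import DefaultAzureCredential

AMW_RESOURCE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.Monitor/accounts/<amw-name>"
)  # placeholder
LAW_RESOURCE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.OperationalInsights/workspaces/<law-name>"
)  # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")

url = (
    "https://management.azure.com"
    f"{AMW_RESOURCE_ID}/providers/Microsoft.Insights/diagnosticSettings/metrics-usage-insights"
    "?api-version=2021-05-01-preview"
)
body = {
    "properties": {
        "workspaceId": LAW_RESOURCE_ID,
        "logs": [{"categoryGroup": "allLogs", "enabled": True}],
    }
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token.token}"}, timeout=30)
resp.raise_for_status()
print("Created diagnostic setting:", resp.json()["name"])
```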
Azure Monitor Application Insights Auto-Instrumentation for Java and Node Microservices on AKS

Key Takeaways (TLDR)
- Monitor Java and Node applications with zero code changes
- Fast onboarding: just two steps
- Supports distributed tracing, logs, and metrics
- Correlates application-level telemetry in Application Insights with infrastructure-level telemetry in Container Insights
- Available today in public preview

Introduction
Monitoring your applications is now easier than ever with the public preview release of auto-instrumentation for Azure Kubernetes Service (AKS). You can now easily monitor your Java and Node deployments without changing your code by leveraging auto-instrumentation that is integrated into the AKS cluster. This feature is ideal for developers or operators who are:
- Looking to add monitoring in the easiest way possible, without modifying code and avoiding ongoing SDK update maintenance.
- Starting out on their monitoring journey and looking to benefit from carefully chosen default configurations, with the ability to tweak them over time.
- Working with someone else’s code and looking to instrument at scale.
- Or considering monitoring for the first time at the time of deployment.

Before the introduction of this feature, users needed to manually instrument code, install language-specific SDKs, and manage updates on their own—a process that involved significant effort and numerous opportunities for errors. Now, all you need to do is follow a simple two-step process to instrument your applications and automatically send correlated, OpenTelemetry-based application-level logs, metrics, and distributed traces to your Application Insights resource. With AKS auto-instrumentation, you will be able to assess the performance of your application and identify the cause of any incidents more efficiently using the robust application performance monitoring capabilities of Azure Monitor Application Insights. This streamlined approach not only saves time but also ensures that your monitoring setup is both reliable and scalable.

Feature Enablement and Onboarding
To onboard to this feature, you will need to follow a two-step process:
1. Prepare your cluster by installing the application monitoring webhook.
2. Choose between namespace-wide onboarding or per-deployment onboarding by creating Kubernetes custom resources.

Namespace-wide onboarding is the easiest method. It allows you to instrument all Java or Node deployments in your namespace and direct telemetry to a single Application Insights resource. Per-deployment onboarding allows more control by targeting specific deployments and directing telemetry to different Application Insights resources. Once the custom resource is created, deploy or redeploy your application, and telemetry will start flowing to Application Insights. For step-by-step instructions and to learn more about onboarding, visit our official documentation on MS Learn.

The Application Insights experience
Once telemetry begins flowing, you can take advantage of Application Insights features such as Application Map, the Failures and Performance views, Availability, and more to help you efficiently diagnose and troubleshoot application issues. Let’s look at an example.

I have an auto-instrumented distributed application running in the demoapp namespace of my AKS cluster. It consists of:
- One Java microservice
- Two Node.js microservices
- MongoDB and Redis as its data layer

Scenario: End users have been complaining about latency in the application. As the DRI, I can start my troubleshooting journey by going to the Application Map to get a topological view of my distributed application.
I open the Application Map and notice that MicroserviceA has a red border: 50% of calls are erroring. The Container Insights card shows healthy pods, with no failed pods or high CPU/memory usage, so I can eliminate infrastructure issues as the cause of the slowness. In the Performance card, I spot that the rescuepet operation has an average duration of 10 seconds. That’s pretty long. I drill in to get a distributed trace of the operation and find the root cause: an OutOfMemoryError.

In this scenario, the issue was identified as an out-of-memory error at the application layer. However, when the root cause is not in the code but in the infrastructure, I get a full set of resource properties with every distributed trace, so I can easily identify the infrastructure resources running each span of my trace. I can click the Investigate pods button to transition to Azure Monitor Container Insights and investigate my pods further. This correlation between application-level and infrastructure-level telemetry makes it much easier to determine whether the issue is caused by the application or the infrastructure.

Pricing
There is no additional cost to use AKS auto-instrumentation to send data to Azure Monitor. You will only be charged as per current pricing.

What’s Next

Language Support
This integration supports Java and Node workloads by leveraging the Azure Monitor OpenTelemetry Distro. We have distros for .NET and Python as well, and we are working to integrate them into this solution. At that point, this integration will support .NET, Python, Java, and Node.js. For customers who want to instrument workloads in other languages such as Go, Ruby, or PHP, we plan to leverage the open-source instrumentations available in the OpenTelemetry community. In this scenario, customers will instrument their code using open-source OpenTelemetry instrumentations, and we will provide mechanisms that make it easy to channel the telemetry to Application Insights: Application Insights will expose an endpoint that accepts OpenTelemetry Protocol (OTLP) signals, and the instrumented workload will be configured to send its telemetry to this endpoint.

Operating Systems and Kubernetes Controllers
Right now, you can only instrument Kubernetes deployments running on Linux node pools, but we plan to expand support to Linux ARM64 node pools as well as the StatefulSet, Job, CronJob, and ReplicaSet controller types.

Portal Experiences
We are also working on Azure portal experiences to make onboarding easier. When our portal experiences for onboarding are released, users will be able to install the Application Insights extension for AKS using the portal and use a portal user interface to instrument their workloads instead of having to create custom resources. Beyond onboarding, we are working to build Application Insights consumption experiences within the AKS namespace and workloads blades, so you will be able to see application-level telemetry right there in the AKS portal without having to navigate away from your cluster to Application Insights.

FAQs

What are the advantages of AKS auto-instrumentation?
- No code changes required
- No access to source code required
- No configuration changes required
- Eliminates instrumentation maintenance

What languages are supported by AKS auto-instrumentation?
Currently, AKS auto-instrumentation supports Java and Node.js applications. Python and .NET support is coming soon. Moreover, we will be adding support for all OTel-supported languages, like Go, via native OTLP ingestion.
Does AKS auto-instrumentation support custom metrics?
For Node.js applications, custom metrics require manual instrumentation with the Azure Monitor OpenTelemetry Distro. Java applications allow custom metrics with auto-instrumentation. (A short sketch of the manual OpenTelemetry metrics pattern appears below.) Click here for more FAQs.

This article was co-authored by Rishab Jolly and Abinet Abate.
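The FAQ above concerns Java and Node; purely to illustrate the shape of manual custom metrics alongside auto-instrumentation, here is a hedged Python sketch using the Azure Monitor OpenTelemetry Distro and the standard OpenTelemetry metrics API (Node.js follows the same API shape). Per the article, Python support for AKS auto-instrumentation is still on the roadmap, and the meter, metric, and dimension names below are placeholders.

```python
# Sketch: recording a custom metric with the Azure Monitor OpenTelemetry Distro (Python shown here
# to illustrate the OpenTelemetry metrics pattern; names are placeholders).
# Assumes: pip install azure-monitor-opentelemetry, and APPLICATIONINSIGHTS_CONNECTION_STRING is set.
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import metrics

configure_azure_monitor()  # exports traces, metrics, and logs to Application Insights

meter = metrics.get_meter("demoapp.microservice-a")  # instrumentation scope name (placeholder)
rescue_counter = meter.create_counter(
    "rescuepet.operations",                           # metric name (placeholder)
    description="Count of rescuepet operations handled",
)

def handle_rescue_pet(shelter: str) -> None:
    # ... business logic would go here ...
    rescue_counter.add(1, {"shelter": shelter})       # the attribute becomes a metric dimension

handle_rescue_pet("central")
```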