Introducing the Data-Bound Reference Layer in Azure Maps Visual for Power BI
The Data-Bound Reference Layer in Azure Maps for Power BI elevates map-based reporting by allowing users to visually explore, understand, and act on their data. This feature opens new possibilities for data analysts, business leaders, and decision-makers who rely on spatial insights.

Migration planning of MySQL workloads using Azure Migrate
In our endeavor to increase coverage of OSS workloads in Azure Migrate, we are announcing discovery and modernization assessment of MySQL databases running on Windows and Linux servers. Customers previously had limited visibility into their MySQL workloads and often received generalized VM lift-and-shift recommendations. With this new capability, customers can now accurately identify their MySQL workloads and assess them for right-sizing into Azure Database for MySQL.

MySQL workloads are a cornerstone of the LAMP stack, powering countless web applications with their reliability, performance, and ease of use. As businesses grow, the need for scalable and efficient database solutions becomes paramount. This is where Azure Database for MySQL comes into play. Migrating from on-premises to Azure Database for MySQL offers numerous benefits, including effortless scalability, cost efficiency, enhanced performance, robust security, high availability, and seamless integration with other Azure services. As a fully managed Database-as-a-Service (DBaaS), it simplifies database management, allowing businesses to focus on innovation and growth.

What is Azure Migrate?
Azure Migrate serves as a comprehensive hub designed to simplify the migration journey of on-premises infrastructure, including servers, databases, and web applications, to Azure Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) targets at scale. It provides a unified platform with a suite of tools and features to help you identify the best migration path, assess Azure readiness, estimate the cost of hosting workloads on Azure, and execute the migration with minimal downtime and risk.

Key features of the MySQL Discovery and Assessment in Azure Migrate
The new MySQL Discovery and Assessment feature in Azure Migrate (Preview) introduces several powerful capabilities:
Discover MySQL database instances: The tool allows you to discover MySQL instances within your environment efficiently. By identifying critical attributes of these instances, it lays the foundation for a thorough assessment and a strategic migration plan.
Assessment for Azure readiness: The feature evaluates the readiness of your MySQL database instances to migrate to Azure Database for MySQL – Flexible Server. This assessment considers several factors, including compatibility and performance metrics, to ensure a smooth transition.
SKU recommendations: Based on the discovered data, the tool recommends the optimal compute and storage configuration for hosting MySQL workloads on Azure Database for MySQL. Furthermore, it provides insights into the associated costs, enabling better financial planning.

How to get started?
To begin using the MySQL Discovery and Assessment feature in Azure Migrate, follow this onboarding process:
1. Create an Azure Migrate project: Initiate your migration journey by setting up a project in the Azure portal.
2. Configure the Azure Migrate appliance: Use a Windows-based appliance to discover your server inventory, providing guest credentials to discover the workloads and MySQL credentials to fetch database instances and their attributes.
3. Review the discovered inventory: Examine the detailed attributes of the discovered MySQL instances.
4. Create an assessment: Evaluate readiness and get detailed recommendations for migration to Azure Database for MySQL.
For detailed step-by-step guidance, check out the documentation for the discovery and assessment tutorials.
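Once an assessment has recommended a target SKU, provisioning the corresponding Azure Database for MySQL Flexible Server is a single CLI call. The sketch below is a minimal, hypothetical example: the resource group, server name, and SKU are placeholders, and the SKU and storage values should come from your own assessment output.

```bash
# Minimal sketch: provision the assessment-recommended target server.
# All names and the SKU/storage values below are hypothetical placeholders;
# substitute the recommendation from your Azure Migrate assessment.
az mysql flexible-server create \
  --resource-group rg-mysql-migration \
  --name target-mysql-01 \
  --location eastus \
  --tier GeneralPurpose \
  --sku-name Standard_D2ds_v4 \
  --storage-size 128 \
  --version 8.0.21
```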
Documentation:
Discover MySQL databases running in your datacenter
Assess MySQL database instances for migration to Azure Database for MySQL

Share your feedback!
In summary, the MySQL Discovery and Assessment feature in Azure Migrate enables you to effortlessly discover, assess, and plan your MySQL database migrations to Azure. Try the feature out in public preview and fast-track your migration journey! If you have any queries, feedback or suggestions, please let us know by leaving a comment below or by directly contacting us at [email protected]. We are eager to hear your feedback and support you on your journey to Azure.

Azure virtual network terminal access point (TAP) public preview announcement
What is virtual network TAP?
Virtual network TAP allows customers to continuously stream virtual machine network traffic to a network packet collector or analytics tool. Many security and performance monitoring tools rely on packet-level insights that are difficult to access in cloud environments. Virtual network TAP bridges this gap by integrating with our industry partners to offer:
Enhanced security and threat detection: Security teams can inspect full packet data in real time to detect and respond to potential threats.
Performance monitoring and troubleshooting: Operations teams can analyze live traffic patterns to identify bottlenecks, troubleshoot latency issues, and optimize application performance.
Regulatory compliance: Organizations subject to compliance frameworks such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) can use virtual network TAP to capture network activity for auditing and forensic investigations.

Why use virtual network TAP?
Unlike traditional packet capture solutions that require deploying additional agents or network appliances, virtual network TAP leverages Azure's native infrastructure to enable seamless traffic mirroring without complex configurations and without impacting the performance of the virtual machine. A key advantage is that mirrored traffic does not count towards the virtual machine's network limits, ensuring complete visibility without compromising application performance. Additionally, virtual network TAP supports all Azure virtual machine SKUs.

Deploying virtual network TAP
The portal is a convenient way to get started with Azure virtual network TAP. However, if you have a lot of Azure resources and want to automate the setup, you may want to use PowerShell, the Azure CLI, or the REST API. Add a TAP configuration on a network interface that is attached to a virtual machine deployed in your virtual network. The destination is a virtual network IP address in the same virtual network as the monitored network interface or a peered virtual network. The collector solution for virtual network TAP can be deployed behind an Azure internal load balancer for high availability. You can use the same virtual network TAP resource to aggregate traffic from multiple network interfaces in the same or different subscriptions. If the monitored network interfaces are in different subscriptions, the subscriptions must be associated with the same Microsoft Entra tenant. Additionally, the monitored network interfaces and the destination endpoint for aggregating the TAP traffic can be in peered virtual networks in the same region.
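As a rough sketch of the automation path, the CLI flow looks like the following. This assumes the preview virtual-network-tap CLI extension; the exact command group and flag names may differ in your CLI version, and all resource names and IDs are hypothetical placeholders.

```bash
# Hypothetical sketch -- assumes the preview virtual-network-tap CLI
# extension; verify command and flag names against your CLI version.

# Create the TAP resource, pointing mirrored traffic at the collector's
# internal load balancer frontend in the same or a peered virtual network.
az network vnet tap create \
  --resource-group rg-monitoring \
  --name vm-traffic-tap \
  --destination "/subscriptions/<sub-id>/resourceGroups/rg-monitoring/providers/Microsoft.Network/loadBalancers/collector-ilb/frontendIPConfigurations/fe-config"

# Attach a TAP configuration to the monitored VM's network interface.
az network nic vtap-config create \
  --resource-group rg-workload \
  --nic-name vm1-nic \
  --name vm1-tap-config \
  --vnet-tap vm-traffic-tap
```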
Partnering with industry leaders to enhance network monitoring in Azure
To maximize the value of virtual network TAP, we are proud to collaborate with industry-leading security and network visibility partners. Our partners provide deep packet inspection, analytics, threat detection, and monitoring solutions that seamlessly integrate with virtual network TAP:

Network packet brokers:
Gigamon - GigaVUE Cloud Suite for Azure
cPacket - cPacket Cloud Suite
Keysight - CloudLens

Security analytics, network/application performance management:
Darktrace - Darktrace /NETWORK
Netscout - Omnis Cyber Intelligence NDR
Corelight - Corelight Open NDR Platform
LinkShadow - LinkShadow NDR
Fortinet - FortiNDR Cloud, FortiGate VM
TrendMicro - Trend Vision One™ Network Security
Extrahop - RevealX
Bitdefender - GravityZone Extended Detection and Response for Network
eSentire - eSentire MDR
Vectra - Vectra NDR
AttackFence - AttackFence NDR
Arista Networks - Arista NDR

See our partner blogs:
Streamline Traffic Mirroring in the Cloud with Azure Virtual Network Terminal Access Point (TAP) and Keysight Visibility | Keysight Blogs
eSentire | Unlocking New Possibilities for Network Monitoring and…
LinkShadow Unified Identity, Data, and Network Platform Integrated with Microsoft Virtual Network TAP
Extrahop and Microsoft Extend Coverage for Azure Workloads
Resources | Announcing cPacket Partnership with Azure virtual network terminal access point (TAP)
Gain Network Traffic Visibility with FortiGate and Azure virtual network TAP

Get started with virtual network TAP
To learn more and get started, visit our website. We look forward to seeing how you leverage virtual network TAP to enhance security, performance, and compliance in your cloud environment. Stay tuned for more updates as we continue to refine and expand our feature set!

The Deployment of Hollow Core Fiber (HCF) in Azure’s Network
Co-authors: Jamie Gaudette, Frank Rey, Tony Pearson, Russell Ellis, Chris Badgley and Arsalan Saljoghei

In the evolving Cloud and AI landscape, Microsoft is deploying state-of-the-art Hollow Core Fiber (HCF) technology in Azure's network to optimize infrastructure and enhance performance for customers. By deploying cabled HCF technology together with HCF-supportable datacenter (DC) equipment, this solution creates ultra-low latency traffic routes with faster data transmission to meet the demands of Cloud & AI workloads.

The successful adoption of HCF technology in Azure's network relies on developing a new ecosystem to take full advantage of the solution, including new cables, field splicing, installation and testing… and Microsoft has done exactly that. Azure has collaborated with industry leaders to deliver components and equipment, cable manufacturing and installation. These efforts, along with advancements in HCF technology, have paved the way for its deployment in-field. HCF is now operational and carrying live customer traffic in multiple Microsoft Azure regions, proving it is as reliable as conventional fiber with no field failures or outages. This article will explore the installation activities, testing, and link performance of a recent HCF deployment, showcasing the benefits that Azure customers can leverage from HCF technology.

HCF connected Azure DCs are ready for service
The latest HCF cable deployment connects two Azure DCs in a major city, with two metro routes each over 20 km long. The hybrid cables both include 32 HCF and 48 single mode fiber (SMF) strands, with HCFs delivering high-capacity Dense Wavelength Division Multiplexing (DWDM) transmission comparable to SMF. The cables are installed over two diverse paths (the red and blue lines shown in image 1), each with different entry points into the DC. Route diversity at the physical layer enhances network resilience and reliability by allowing traffic to be rerouted through alternate paths, minimizing the risk of network outage should there be a disruption. It also allows for increased capacity by distributing network traffic more evenly, improving overall network performance and operational efficiency.

Image 1: Satellite image of two Azure DC sites (A & Z) within a metro region interconnected with new ultra-low latency HCF technology, using two diverse paths (blue & red)

Image 2 shows the optical routing that the deployed HCF cables take through both Inside Plant (ISP) and Outside Plant (OSP) to interconnect terminal equipment within key sites in the region (comprised of DCs, Network Gateways and PoPs).

Image 2: Optical connectivity at the physical layer between DCA and DCZ

The HCF OSP cables have been developed for outdoor use in harsh environments without degrading the propagation properties of the fiber. The cable technology is smaller, faster, and easier to install (using a blown installation method). Alongside cables, various other technologies have been developed and integrated to provide a reliable end-to-end HCF network solution. This includes dedicated HCF-compatible equipment (shown in image 3), such as custom cable joint enclosures, fusion splicing technology, HCF patch tails for cable termination in the DC, and an HCF custom-designed Optical Time Domain Reflectometer (OTDR) to locate faults in the link. These solutions work with commercially available transponders and DWDM technologies to deliver multi-Tb/s capacities for Azure customers.
Looking more closely at an HCF cable installation, in image 4 the cable is installed by passing it through a blowing head (1) and inserting it into pre-installed conduit in segments underground along the route. As with traditional installations with conventional cable, the conduit, cable entry/exit, and cable joints are accessible through pre-installed access chambers, typically a few hundred meters apart. The blowing head uses high-pressure air from a compressor to push the cable into the conduit. A single drum-length of cable can be re-fleeted above ground (2) at multiple access points and re-jetted (3) over several kilometers. After the cables are installed inside the conduit, they are jointed at pre-designated access chamber locations. These house the purpose-designed cable joint enclosures.

Image 4: Cable preparation and installation during in-field deployment

Image 5 shows a custom HCF cable joint enclosure in the field, tailored to protect HCFs for reliable data transmission. These enclosures organize the many HCF splices inside and are placed in underground chambers across the link.

Image 5: 1) HCF joint enclosure in a chamber in-field 2) Open enclosure showing fiber loop storage protected by colored tubes at the rear side of the joint 3) Open enclosure showing HCF spliced on multiple splice tray layers

Inside the DC, connectorized 'plug-and-play' HCF-specific patch tails have been developed and installed for use with existing DWDM solutions. The patch tails interface between the HCF transmission and SMF active and passive equipment, each containing two SMF-compatible connectors coupled to the ISP HCF cable. In image 6, this has been terminated to a patch panel and mated with existing DWDM equipment inside the DC.

Image 6: HCF patch tail solution connected to DWDM equipment

Testing
To validate the end-to-end quality of the installed HCF links (post deployment and during operation), field-deployable solutions have been developed and integrated to ensure all required transmission metrics are met and to identify and restore any faults before the link is ready for customer traffic. One such solution is Microsoft's custom-designed HCF-specific OTDR, which helps measure individual splice losses and verify attenuation in all cable sections. This is checked against rigorous Azure HCF specification requirements. The OTDR tool is invaluable for locating high splice losses or faults that need to be reworked before the link can be brought into service. The diagram below shows an OTDR trace detecting splice locations and splice loss levels (dB) across a single strand of installed HCF. The OTDR can also continuously monitor HCF links and quickly locate faults, such as cable cuts, for quick recovery and remediation. For this deployment, a mean splice loss of 0.16 dB was achieved, with some splices as low as 0.04 dB, comparable to conventional fiber. Low attenuation and splice loss help to maintain higher signal integrity, supporting longer transmission reach and higher traffic capacity. There are ongoing Azure HCF roadmap programs to continually improve this.

Performance
Before running customer traffic on the link, the fiber is tested to ensure reliable, error-free data transmission across the operating spectrum by counting lost or errored bits. Once confirmed, the link is moved into production, allowing customer traffic to flow on the route. These optical tests, tailored to HCF, are carried out by the installer to meet Azure's acceptance requirements.
Image 8 illustrates the flow of traffic across an HCF link, dictated by changing demand on capacity and routing protocols in the region, which fluctuate throughout the day. The HCF span has supported varying levels of customer traffic from the point the link was made live, without incurring any outages or link flaps.

A critical metric for measuring transmission performance over each HCF path is the instantaneous Pre-Forward Error Correction (FEC) Bit Error Rate (BER) level. Pre-FEC BER measures errors in a digital data stream at the receiver before any error correction is applied. This is crucial for transmission quality when the link carries data traffic; lower levels mean fewer errors and higher signal quality, essential for reliable data transmission.

The following graph (image 9) shows the evolution of the Pre-FEC BER level on an HCF span once the link is live. A single strand of HCF is represented by a color, with all showing minimal fluctuation. This demonstrates very stable Pre-FEC BER levels, well below 10^-3.4 (equivalently, fewer than roughly 4 × 10^-4 errored bits per bit), across all 400G optical transponders, operating over all channels during a 24-day period. This indicates the network can handle high data transmission efficiently with no Post-FEC errors, leading to high customer traffic performance and reliability.

Image 9: Very stable Pre-FEC BER levels across the HCF span over 20 days

The graph below demonstrates the optical loss stability over one entire span, which is comprised of two HCF strands. It was monitored continuously over 20 days using the inbuilt line system and measured in both directions to assess the optical health of the HCF link.

The new HCF cable paths are live and carrying customer traffic across multiple Azure regions. Having demonstrated the end-to-end deployment capabilities and network compatibility of the HCF solution, it is possible to take full advantage of the ultra-stable, high-performance, and reliable connectivity HCF delivers to Azure customers.

What’s next?
Unlocking the full potential of HCF requires compatible, end-to-end solutions. This blog outlines the holistic and deployable HCF systems we have developed to better serve our customers. While we further integrate HCF into more Azure regions, our development roadmap continues: smaller cables with more fibers and enhanced system components to further increase the capacity of our solutions, standardized and simplified deployment and operations, and extending the deployable distance of HCF long-haul transmission solutions. Creating a more stable, higher-capacity, faster network will allow Azure to better serve all its customers.

Learn more about how hollow core fiber is accelerating AI.

Recently published HCF research papers:
Ultra high resolution and long range OFDRs for characterizing and monitoring HCF DNANFs
Unrepeated HCF transmission over spans up to 301.7 km

Create next-gen voice agents with Azure AI's Voice Live API and Azure Communication Services
Today at Microsoft Build, we’re excited to announce the General Availability of bidirectional audio streaming for the Azure Communication Services Call Automation SDK, unveiling the power of speech-to-speech AI through Azure Communication Services! As previously seen at Microsoft Ignite in November 2024, the Call Automation bidirectional streaming APIs already work with services like Azure OpenAI to build conversational voice agents through speech-to-speech integrations. Now, with the General Availability release of the Call Automation bidirectional streaming API and the Azure AI Speech Services Voice Live API (Preview), creating voice agents has never been easier. Imagine AI agents that deliver seamless, low-latency, and naturally fluent conversations, transforming the way businesses and customers interact.

Bidirectional streaming APIs allow customers to stream audio from ongoing calls to their webserver in near real time, where their voice-enabled Large Language Models (LLMs) can ingest the audio to reason over and provide voice responses to stream back into the call. In this release, we have added support for extra security through JSON Web Token (JWT) based authentication for the websocket connection, allowing developers to make sure they’re creating secure solutions.

As industries like customer service, education, HR, gaming, and public services see a surge in demand for generative AI voice chatbots, businesses are seeking real-time, natural-sounding voice interactions with the latest and greatest GenAI models. Integrating Azure Communication Services with the new Voice Live API from Azure AI Speech Services provides a low-latency interface that facilitates streaming speech input and output with Azure AI Speech’s advanced audio and voice capabilities. It supports multiple languages, diverse voices, and customization, and can even integrate with avatars for enhanced engagement. On the server side, powerful language models interpret the caller's queries and stream human-like responses back in real time, ensuring fluid and engaging conversations. By integrating these two technologies, customers can create new innovative solutions for:

Multilingual agents: Develop virtual customer service representatives capable of having conversations with end customers in their preferred language, allowing customers building solutions for multilingual regions to create one solution that serves multiple languages and regions.

Noise suppression and echo cancellation: To be effective, AI voice agents need clear audio to understand what the user is requesting. You can use the out-of-the-box noise suppression and echo cancellation built into the Voice Live API to provide your AI agent with the best quality audio, so it can clearly understand end users' requests and assist them.

Support for branded voices: Build voice agents that stay on brand with custom voices that represent your brand in any interaction with the customer. Use Azure AI Speech services to create custom voice models that represent your brand and provide familiarity for your customers.

How to Integrate Azure Communication Services with Azure AI Speech Service Voice Live API

Language support
With the integration with the Voice Live API, you can now create solutions for 150+ locales for speech input and output, with 600+ realistic voices available out of the box. If these voices don’t suit your needs, customers can take this one step further and create custom speech models for their brand.
How to start bidirectional streaming

```csharp
var mediaStreamingOptions = new MediaStreamingOptions(
    new Uri(websocketUri),
    MediaStreamingContent.Audio,
    MediaStreamingAudioChannel.Mixed,
    startMediaStreaming: true)
{
    EnableBidirectional = true,
    AudioFormat = AudioFormat.Pcm24KMono
};
```

How to connect to Voice Live API (Preview)

```csharp
string GetSessionUpdate()
{
    var jsonObject = new
    {
        type = "session.update",
        session = new
        {
            turn_detection = new
            {
                type = "azure_semantic_vad",
                threshold = 0.3,
                prefix_padding_ms = 200,
                silence_duration_ms = 200,
                remove_filler_words = false
            },
            input_audio_noise_reduction = new { type = "azure_deep_noise_suppression" },
            input_audio_echo_cancellation = new { type = "server_echo_cancellation" },
            voice = new
            {
                name = "en-US-Aria:DragonHDLatestNeural",
                type = "azure-standard",
                temperature = 0.8
            }
        }
    };

    // Serialize the session.update payload for sending over the websocket.
    // (The serializer choice is an assumption; the original snippet ended
    // before the return statement.)
    return System.Text.Json.JsonSerializer.Serialize(jsonObject);
}
```

Next Steps
The SDK and documentation will be available in the next few weeks following this announcement, allowing you to build your own solutions using Azure Communication Services and the Azure AI Voice Live API. You can download our latest sample from GitHub to try this for yourself. To learn more about the Voice Live API and all its different capabilities, see the Azure AI Blog.

Announcing the General Availability of New Availability Zone Features for Azure App Service
What are Availability Zones?
Availability Zones, or zone redundancy, refers to the deployment of applications across multiple availability zones within an Azure region. Each availability zone consists of one or more data centers with independent power, cooling, and networking. By leveraging zone redundancy, you can protect your applications and data from data center failures, ensuring uninterrupted service.

Key Updates
The minimum instance requirement for enabling Availability Zones has been reduced from three instances to two, while still maintaining a 99.99% SLA.
Many existing App Service plans with two or more instances will automatically support Availability Zones without additional setup.
The zone redundant setting for App Service plans and App Service Environment v3 is now mutable throughout the life of the resources.
Enhanced visibility into Availability Zone information, including physical zone placement and zone counts, is now provided.
For App Service Environment v3, the minimum instance fee for enabling Availability Zones has been removed, aligning the pricing model with the multi-tenant App Service offering.

The minimum instance requirement for enabling Availability Zones has been reduced from three instances to two. You can now enjoy the benefits of Availability Zones with just two instances, since we continue to uphold a 99.99% SLA even with the two-instance configuration. Many existing App Service plans with two or more instances will automatically support Availability Zones without necessitating additional setup. Over the past few years, efforts have been made to ensure that the App Service footprint supports Availability Zones wherever possible, and we’ve made significant gains in doing so. Therefore, many existing customers can enable Availability Zones on their current deployments without needing to redeploy.

Along with supporting the two-instance Availability Zone configuration, we have enabled Availability Zones on the App Service footprint in regions where only two zones may be available. Previously, enabling Availability Zones required a region to have three zones with sufficient capacity. To account for the growing demand, we now support Availability Zone deployments in regions with just two zones. This allows us to provide you with Availability Zone features across more regions, and we are upholding the 99.99% SLA even with the two-zone configuration.

Additionally, we are pleased to announce that the zone redundant setting (zoneRedundant property) for App Service plans and App Service Environment v3 is now mutable throughout the life of these resources. This enhancement allows customers on Premium V2, Premium V3, or Isolated V2 plans to toggle zone redundancy on or off as required. With this capability, you can reduce costs and scale to a single instance when multiple instances are not necessary. Conversely, you can scale out and enable zone redundancy at any time to meet your requirements. This ability has been requested for a while now, and we are excited to finally make it available. For App Service Environment v3 users, this also means that your individual App Service plan zone redundancy status is now independent of other plans in your App Service Environment. You can therefore have a mix of zone redundant and non-zone redundant plans in an App Service Environment, something that was previously not supported.
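Because the zoneRedundant property is now mutable, toggling it can be scripted with a generic resource update from the CLI. The sketch below is a minimal, hypothetical example: the resource names are placeholders, and the property path follows the ARM schema for Microsoft.Web/serverfarms.

```bash
# Minimal sketch: flip zone redundancy on an existing Premium plan.
# Resource names are hypothetical; zone redundancy requires two or
# more instances, so the capacity is set alongside the property.
az resource update \
  --resource-group rg-demo \
  --name my-premium-plan \
  --resource-type "Microsoft.Web/serverfarms" \
  --set properties.zoneRedundant=true sku.capacity=2
```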
In addition to these new features, we also have a couple of other exciting things to share. We are now providing enhanced visibility into Availability Zone information, including the physical zone placement of your instances and zone counts. For our App Service Environment v3 customers, we have removed the minimum instance fee for enabling Availability Zones, which means that you now only pay for the Isolated V2 instances you consume. This aligns the pricing model with the multi-tenant App Service offering.

For more information, as well as guidance on how to use these features, see the docs: Reliability in Azure App Service. Azure Portal support for these new features will be available by mid-June 2025. In the meantime, see the documentation to use these new features with ARM/Bicep or the Azure CLI. Also check out the BRK200 breakout session at Microsoft Build 2025, live on May 20th or anytime after via the recording, where my team and I will be discussing these new features and many more exciting announcements for Azure App Service. If you’re in the Seattle area and attending Microsoft Build 2025 in person, come meet my team and me at our Expert Meetup Booth.

FAQ

Q: What are availability zones?
A: Availability zones are physically separate locations within an Azure region, each consisting of one or more data centers with independent power, cooling, and networking. Deploying applications across multiple availability zones ensures high availability and business continuity.

Q: How do I enable Availability Zones for my existing App Service plan or App Service Environment v3?
A: There is a new toggle in the Azure portal that will be enabled if your App Service plan or App Service Environment v3 supports Availability Zones. Your deployment must be on the App Service footprint that supports zones in order to have this capability. There is a new property called "MaximumNumberOfZones", which indicates the number of zones your deployment supports. If this value is greater than one, you are on the footprint that supports zones and can enable Availability Zones as long as you have two or more instances. If this value is equal to one, you need to redeploy. Note that we are continually working to expand the zone footprint across more App Service deployments. (See the sketch after this FAQ for one way to read this property from the CLI.)

Q: Is there an additional charge for Availability Zones?
A: There is no additional charge; you only pay for the instances you use. The only requirement is that you use two or more instances.

Q: Can I change the zone redundant property after creating my App Service plan?
A: Yes, the zone redundant property is now mutable, meaning you can toggle it on or off at any time.

Q: How can I verify the zone redundancy status of my App Service plans?
A: We now display the physical zone for each instance, helping you verify zone redundancy status for audits and compliance reviews.

Q: How do I use these new features?
A: You can use ARM/Bicep or the Azure CLI at this time. Starting in mid-June, Azure Portal support should be available. The documentation currently shows how to use ARM/Bicep and the Azure CLI to enable these features. The documentation, as well as this blog post, will be updated once Azure Portal support is available.

Q: Are Availability Zones supported on Premium V4?
A: Yes! See the documentation for more details on how to get started with Premium V4 today.
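As a companion to the FAQ above, here is one hedged way to read the zone-support property from the CLI. The az appservice plan show command is standard, but the exact property path for MaximumNumberOfZones in the response is an assumption; inspect the full JSON output if the query returns nothing.

```bash
# Hypothetical sketch: check how many zones the plan's footprint supports.
# A value greater than 1 means Availability Zones can be enabled; the
# property path below is an assumption to verify against the raw output.
az appservice plan show \
  --resource-group rg-demo \
  --name my-premium-plan \
  --query "properties.maximumNumberOfZones"
```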
Announcing Federated Logging in Azure API Management

Managing APIs effectively requires robust security, governance, and deep operational visibility. With federated logging now available in Azure API Management, platform teams and API developers can monitor, troubleshoot, and optimize APIs more efficiently and without compromising security or collaboration.

What is federated logging?
As API ecosystems grow, maintaining centralized visibility while providing teams with the autonomy to manage and troubleshoot their APIs becomes a challenge. Federated logging centralizes insights for platform teams while empowering API teams with focused access to logs specific to their APIs, streamlining monitoring in large-scale API ecosystems.
Centralized monitoring for platform teams: Complete visibility into API health, performance, and usage trends across the organization.
Autonomy for API teams: Direct access to their own API logs, reducing reliance on platform teams and speeding up resolution times.

Key Benefits
Federated logging offers advantages for both platform and API teams, addressing their unique challenges and needs.
For platform teams:
Centralized monitoring: Gain platform-wide visibility into API health, performance, and usage trends.
Streamlined troubleshooting: Quickly diagnose and resolve platform issues without dependency on individual API teams.
Governance and security: Ensure robust audit trails and compliance, supporting secure and scalable API management.
For API teams:
Faster incident resolution: Accelerate incident resolution thanks to immediate access to relevant logs, without waiting for the central platform team’s response.
Actionable insights: Track API growth, trends, and key performance metrics specific to your APIs to support reporting, planning, and strategic decision-making.
Access control: Limit access to logs to your API team only.

How Federated Logging Works
Federated logging is enabled using Azure Log Analytics and workspaces in Azure API Management:
Platform teams configure logging to a centralized Log Analytics workspace for the entire API Management service, including individual workspaces.
Platform teams can access centralized logs through the “Logs” page in the API Management service in the Azure portal or directly in the Log Analytics workspace.
API teams can access logs for their workspace APIs through the “Logs” page in their API Management workspace in the Azure portal.
Access control is enforced via Azure Log Analytics’ resource context mechanism, ensuring role-based log visibility.

Get Started Today
Federated logging in Azure API Management combines centralized monitoring and team autonomy, enabling efficient and effective operations. Start using federated logging by visiting the Azure API Management documentation.
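The platform-team side of the setup is a standard diagnostic-settings configuration. Below is a minimal sketch, assuming hypothetical resource names and a pre-created Log Analytics workspace; the category group is an assumption, so check the diagnostic categories your API Management service actually exposes.

```bash
# Minimal sketch: route API Management logs to a central Log Analytics
# workspace. All resource names are hypothetical placeholders.
APIM_ID=$(az apim show --resource-group rg-platform --name contoso-apim --query id -o tsv)
LAW_ID=$(az monitor log-analytics workspace show --resource-group rg-platform --workspace-name central-logs --query id -o tsv)

az monitor diagnostic-settings create \
  --name apim-federated-logging \
  --resource "$APIM_ID" \
  --workspace "$LAW_ID" \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]'
```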
Introducing Workspace Gateway Metrics and Autoscale in Azure API Management

We’re excited to announce the availability of workspace gateway metrics and autoscale in Azure API Management, offering both real-time insights and automated scaling for your gateway infrastructure. This combination increases reliability, streamlines operations, and boosts cost efficiency.

Monitor and Scale Gateway with New Metrics
API Management workspace gateways now support two metrics:
CPU Utilization (%): Represents CPU utilization across workspace gateway units.
Memory Utilization (%): Represents memory utilization across workspace gateway units.
Both metrics should be used together to make informed scaling decisions. For instance, if one of the metrics consistently exceeds a 70% threshold, adding an additional gateway unit to distribute the load can prevent outages during traffic increases. In most workloads, the CPU metric will determine scaling requirements.

Automatically Scale Workspace Gateways
In addition to manual scaling, Azure API Management workspace gateways now also feature autoscale, allowing for automatic scaling in or out based on metrics or a defined schedule. Autoscale provides several important benefits:
Reliability: Autoscale ensures consistent performance by scaling out during periods of high traffic.
Operational efficiency: Automating scaling processes streamlines operations and eliminates manual and error-prone intervention.
Cost optimization: Autoscale scales down resources when traffic is lower, reducing unnecessary expenses.

Access Metrics and Autoscale Settings
You can access the new metrics in the “Metrics” page of your workspace gateway resource in the Azure portal or through Azure Monitor. Autoscale can be configured in the “Autoscale” page of your workspace gateway resource in the Azure portal or through the autoscale experience.

Get Started
Learn more about using metrics for scaling decisions.
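For teams that prefer scripted configuration, Azure Monitor autoscale settings can also be created from the CLI. The sketch below is hypothetical: the gateway resource ID and, in particular, the metric name ("CpuPercentage") are assumptions, so use the exact metric names shown on your gateway's Metrics page.

```bash
# Hypothetical sketch: autoscale a workspace gateway on CPU utilization.
# The resource ID path and metric name are assumptions to verify in the
# portal before use.
GATEWAY_ID="/subscriptions/<sub-id>/resourceGroups/rg-apis/providers/Microsoft.ApiManagement/gateways/ws-gateway-01"

az monitor autoscale create \
  --resource-group rg-apis \
  --resource "$GATEWAY_ID" \
  --name ws-gateway-autoscale \
  --min-count 1 --max-count 3 --count 1

# Scale out by one unit when average CPU exceeds 70% over 5 minutes.
az monitor autoscale rule create \
  --resource-group rg-apis \
  --autoscale-name ws-gateway-autoscale \
  --condition "CpuPercentage > 70 avg 5m" \
  --scale out 1
```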
Provisioning Azure Storage Containers and Folders Using Bicep and PowerShell

Overview: This blog demonstrates how to:
Deploy an Azure Storage Account and Blob containers using Bicep
Create a folder-like structure inside those containers using PowerShell
This approach is ideal for cloud engineers and DevOps professionals seeking end-to-end automation for structured storage provisioning.

Bicep natively supports the creation of:
Storage Accounts
Containers (via the blobServices resource)
However, folders (directories) inside containers are not first-class resources in ARM/Bicep; they're created by uploading a blob with a virtual path, e.g., folder1/blob.txt. So how can we automate the creation of these folder structures without manually uploading dummy blobs?

You can check out the blog "Designing Reusable Bicep Modules: A Databricks Example" for a good reference on how to structure the storage account pattern. It covers reusable module design and shows how to keep things clean and consistent.

1. Deploy an Azure Storage Account and Blob containers using Bicep
You can provision a Storage Account and its associated Blob containers using a few lines of code, along with the parameters for the 'directory services'. The process involves:
Defining the Microsoft.Storage/storageAccounts resource for the Storage Account.
Adding a nested blobServices/containers resource to create Blob containers within it.
Using parameters to dynamically assign names, access tiers, and network rules.

2. Create a folder-like structure inside those containers using PowerShell
To simulate a directory structure in Azure Data Lake Storage Gen2, use Bicep with deployment scripts that execute az storage fs directory create. This enables automation of folder creation inside blob containers at deployment time. In this setup:
A Microsoft.Resources/deploymentScripts resource is used.
The az storage fs directory create command creates virtual folders inside containers.
Access is authenticated using a secure account key fetched via storageAccount.listKeys().
A standalone sketch of the underlying CLI call appears at the end of this post.

Parameter Flow and Integration
The solution uses Bicep’s module linking capabilities:
The main module outputs the Storage Account name and container names.
These outputs are passed as parameters to the deployment script module.
The script loops through each container and folder, uploading a dummy blob to create the folder.

Here's the final setup: the storage account and container are ready, and the directory has been created inside. Everything's all set!

Conclusion
This approach is especially useful in enterprise environments where storage structures must be provisioned consistently across environments. You can extend this pattern further to:
Tag blobs/folders
Assign RBAC roles
Handle folder-level metadata

Have you faced similar challenges with Azure Storage provisioning? Share your experience or drop a comment!
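For reference, here is the CLI call that the post's deployment script wraps, shown standalone. This is a minimal sketch with hypothetical account, container, and folder names; it uses Entra ID auth (--auth-mode login) rather than the account key the deployment script fetches via listKeys().

```bash
# Minimal sketch: create a virtual folder ("directory") in a container on
# a storage account with the hierarchical namespace (ADLS Gen2) enabled.
# The account, container, and path names below are hypothetical.
az storage fs directory create \
  --account-name stdemodata01 \
  --file-system container1 \
  --name "folder1/subfolderA" \
  --auth-mode login
```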