benchmarking
Deploy NDm_v4 (A100) Kubernetes Cluster
We show how to deploy an optimal NDm_v4 (A100) AKS cluster, making sure that all 8 GPUs and 8 InfiniBand devices on each virtual machine come up correctly and are available to deliver optimal performance. A multi-node NCCL allreduce job is executed on the NDm_v4 AKS cluster to verify it is deployed and configured correctly.

Azure's ND GB200 v6 Delivers Record Performance for Inference Workloads
Achieving peak AI performance requires both cutting-edge hardware and a finely optimized infrastructure. Azure's ND GB200 v6 Virtual Machines, accelerated by NVIDIA GB200 Blackwell GPUs, have already demonstrated world-record performance of 865,000 tokens/s for inference on the industry-standard LLAMA2 70B model.

Announcing Azure HBv5 Virtual Machines: A Breakthrough in Memory Bandwidth for HPC
Discover the new Azure HBv5 Virtual Machines, unveiled at Microsoft Ignite, designed for high-performance computing applications. With up to 7 TB/s of memory bandwidth and custom 4th Generation EPYC processors, these VMs are optimized for the most memory-intensive HPC workloads. Sign up for the preview starting in the first half of 2025 and see them in action at Supercomputing 2024 in Atlanta.

Performance & Scalability of HBv4 and HX-Series VMs with Genoa-X CPUs
Azure has announced the general availability of Azure HBv4-series and HX-series virtual machines (VMs) for high performance computing (HPC). This blog provides in-depth technical and performance information about these HPC-optimized VMs.

Training large AI models on Azure using CycleCloud + Slurm
Here we demonstrate, and provide a template for, deploying a computing environment optimized to train a transformer-based large language model on Azure. We use CycleCloud, a tool to orchestrate and manage HPC environments, to provision a cluster of A100 or H100 nodes managed by Slurm. Such environments have been deployed to train foundation models with tens to hundreds of billions of parameters on terabytes of data.

DGX Cloud Benchmarking on Azure
This blog presents our benchmarking results for NVIDIA DGX Cloud workloads on Azure, scaling from 8 to 1024 H100 GPUs. We detail the Slurm-based setup using Azure CycleCloud Workspace for Slurm, performance validation via NCCL and thermal screening, and tuning strategies that deliver near-parity with NVIDIA DGX reference metrics.

Monitoring HPC & AI Workloads on Azure H/N VMs Using Telegraf and Azure Monitor (GPU & InfiniBand)
As HPC & AI workloads continue to scale in complexity and performance demands, visibility into the underlying infrastructure becomes critical. This guide presents a monitoring solution for AI infrastructure deployed on Azure RDMA-enabled virtual machines (VMs), focusing on NVIDIA GPUs and Mellanox InfiniBand devices. By leveraging the Telegraf agent and Azure Monitor, the setup enables real-time collection and visualization of key hardware metrics, including GPU utilization, GPU memory usage, InfiniBand port errors, and link flaps, providing operational insights vital for debugging, performance tuning, and capacity planning in high-performance AI environments. In this blog, we'll walk through configuring Telegraf to collect GPU and InfiniBand metrics and send them to Azure Monitor, covering all the essential steps to enable robust monitoring across your HPC & AI infrastructure on Azure.

DISCLAIMER: This is an unofficial configuration guide and is not supported by Microsoft. Please use it at your own discretion. The setup is provided "as-is" without any warranties, guarantees, or official support.

While Azure Monitor offers robust monitoring capabilities for CPU, memory, storage, and networking, as of the time of writing it does not natively support GPU or InfiniBand metrics for Azure H- or N-series VMs. Monitoring GPU and InfiniBand performance therefore requires additional configuration using third-party tools such as Telegraf.

Step 1: Prepare Azure to receive GPU and InfiniBand metrics from Telegraf agents running on a VM or VMSS. Register the microsoft.insights resource provider in your Azure subscription.
Refer: Resource providers and resource types - Azure Resource Manager | Microsoft Learn

Step 2: Enable a managed identity to authenticate the Azure VM or VMSS. This example uses a system-assigned managed identity; you can also authenticate with a user-assigned managed identity or a service principal. Refer: telegraf/plugins/outputs/azure_monitor at release-1.15 · influxdata/telegraf (github.com)

Step 3: Set up the Telegraf agent inside the VM or VMSS to send data to Azure Monitor. In this example, I'll use an Azure Standard_ND96asr_v4 VM with the Ubuntu-HPC 22.04 image, which comes with pre-installed NVIDIA GPU drivers, CUDA, and InfiniBand drivers. If you opt for a different image, ensure that you manually install the necessary GPU drivers, the CUDA toolkit, and the InfiniBand drivers. Next, download and run the gpu-ib-mon_setup.sh script to install the Telegraf agent on Ubuntu 22.04. The script also configures the NVIDIA SMI and InfiniBand input plugins and sets up the Telegraf configuration to send data to Azure Monitor. Note: the gpu-ib-mon_setup.sh script is currently supported and tested only on Ubuntu 22.04.
For background on the InfiniBand counters collected by Telegraf, see: https://enterprise-support.nvidia.com/s/article/understanding-mlx5-linux-counters-and-status-parameters

Run the following commands:

wget https://raw.githubusercontent.com/vinil-v/gpu-ib-monitoring/refs/heads/main/scripts/gpu-ib-mon_setup.sh -O gpu-ib-mon_setup.sh
chmod +x gpu-ib-mon_setup.sh
./gpu-ib-mon_setup.sh

Test the Telegraf configuration by executing the following command:

sudo telegraf --config /etc/telegraf/telegraf.conf --test

Step 4: Create dashboards in Azure Monitor to check NVIDIA GPU and InfiniBand usage. Telegraf includes an output plugin designed for Azure Monitor, allowing custom metrics to be sent directly to the platform. Since Azure Monitor supports a metric resolution of one minute, the plugin aggregates metrics into one-minute intervals and sends them to Azure Monitor at each flush cycle. Metrics from each Telegraf input plugin are stored in a separate Azure Monitor namespace, prefixed with Telegraf/ for easy identification.

To visualize NVIDIA GPU usage, go to the Metrics section in the Azure portal: set the scope to your VM and choose the metric namespace Telegraf/nvidia-smi. From there, you can select and display various GPU metrics such as utilization, memory usage, and temperature; in this example we use the GPU memory_used metric. Use filters and splits to analyze data across multiple GPUs or over time.

To monitor InfiniBand performance, repeat the same process: in the Metrics section, set the scope to your VM and select the metric namespace Telegraf/infiniband. You can visualize metrics such as port status, data transmitted/received, and error counters; in this example we use the link_downed metric to check for InfiniBand link flaps. Use filters to break down the data by port or metric type for deeper insights.
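For reference, the relevant parts of the resulting /etc/telegraf/telegraf.conf look roughly like the sketch below. This is illustrative rather than the script's exact output; the plugin and option names are taken from the Telegraf documentation, and the Azure Monitor output auto-detects the region and resource ID from the instance metadata service when they are left unset:

```toml
# Collect NVIDIA GPU metrics by polling nvidia-smi.
[[inputs.nvidia_smi]]
  bin_path = "/usr/bin/nvidia-smi"
  timeout = "5s"

# Collect InfiniBand port counters (read from /sys/class/infiniband).
[[inputs.infiniband]]

# Send one-minute aggregated metrics to Azure Monitor, authenticating
# with the VM's managed identity.
[[outputs.azure_monitor]]
  # region and resource_id are discovered via the Azure Instance Metadata
  # Service when omitted; set them explicitly to override.
  namespace_prefix = "Telegraf/"
```

The namespace_prefix setting is what makes the metrics appear under the Telegraf/nvidia-smi and Telegraf/infiniband namespaces in the portal.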
Note on the link_downed metric: the link_downed metric returns incorrect values with the Count aggregation; use the Max or Min aggregation instead. The same workflow applies to the port_rcv_data metric.

Creating custom dashboards in Azure Monitor with both the Telegraf/nvidia-smi and Telegraf/infiniband namespaces allows unified visibility into GPU and InfiniBand health.

Testing InfiniBand and GPU Usage

If you're testing GPU metrics and need a reliable way to simulate multi-GPU workloads, especially over InfiniBand, the NCCL benchmark suite is a straightforward solution, ideal for verifying GPU and network monitoring setups. The NCCL benchmarks and OpenMPI are part of the Ubuntu-HPC 22.04 image. Update the variables according to your environment and add your hostnames to the hostfile:

module load mpi/hpcx-v2.13.1
export CUDA_VISIBLE_DEVICES=2,3,0,1,6,7,4,5
mpirun -np 16 --map-by ppr:8:node -hostfile hostfile \
  -mca coll_hcoll_enable 0 --bind-to numa \
  -x NCCL_IB_PCI_RELAXED_ORDERING=1 \
  -x LD_LIBRARY_PATH=/usr/local/nccl-rdma-sharp-plugins/lib:$LD_LIBRARY_PATH \
  -x CUDA_DEVICE_ORDER=PCI_BUS_ID \
  -x NCCL_SOCKET_IFNAME=eth0 \
  -x NCCL_TOPO_FILE=/opt/microsoft/ndv4-topo.xml \
  -x NCCL_DEBUG=WARN \
  /opt/nccl-tests/build/all_reduce_perf -b 8 -e 8G -f 2 -g 1 -c 1

Alternate: GPU Load Simulation Using TensorFlow

If you're looking for a more application-like load (e.g., distributed training), I've prepared a script that sets up a multi-GPU TensorFlow training environment using Anaconda. This is a great way to simulate real-world GPU workloads and validate your monitoring pipelines. To get started, run the following:

wget -q https://raw.githubusercontent.com/vinil-v/gpu-monitoring/refs/heads/main/scripts/gpu_test_program.sh -O gpu_test_program.sh
chmod +x gpu_test_program.sh
./gpu_test_program.sh

With either method, NCCL benchmarks or TensorFlow training, you'll be able to simulate realistic GPU usage and validate your GPU and InfiniBand monitoring setup with confidence.
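Telegraf's InfiniBand input reads the same counters the NVIDIA article above describes, straight from sysfs. As a quick illustration of where the values come from, the sketch below mocks the sysfs layout in a temporary directory so it runs anywhere; on a real ND- or H-series VM the identical files live under /sys/class/infiniband:

```shell
# Mock the sysfs layout the InfiniBand counters use:
#   /sys/class/infiniband/<device>/ports/<port>/counters/<counter>
root=$(mktemp -d)
mkdir -p "$root/mlx5_0/ports/1/counters"
echo 3       > "$root/mlx5_0/ports/1/counters/link_downed"
echo 1234567 > "$root/mlx5_0/ports/1/counters/port_rcv_data"

# Walk the tree the way a collector would and print device/counter pairs.
for f in "$root"/*/ports/*/counters/*; do
  dev=$(echo "$f" | awk -F/ '{print $(NF-4)}')   # device name, 4 components up
  printf '%s %s=%s\n' "$dev" "$(basename "$f")" "$(cat "$f")"
done
```

Replace $root with /sys/class/infiniband on a real system to spot-check the raw values behind the dashboard, e.g. a rising link_downed count corresponds to the link flaps charted in Azure Monitor.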
Happy testing!

References:
Ubuntu HPC on Azure
ND A100 v4-series GPU VM Sizes
Telegraf Azure Monitor Output Plugin (v1.15)
Telegraf NVIDIA SMI Input Plugin (v1.15)
Telegraf InfiniBand Input Plugin Documentation

Benchmarking 6th gen. Intel-based Dv6 (preview) VM SKUs for HPC Workloads in Financial Services
Introduction

In the fast-paced world of Financial Services, High-Performance Computing (HPC) systems in the cloud have become indispensable. From instrument pricing and risk evaluations to portfolio optimizations and regulatory workloads like CVA and FRTB, the flexibility and scalability of cloud deployments are transforming the industry. Unlike traditional HPC systems that require complex parallelization frameworks (e.g. those depending on MPI and InfiniBand networking), many financial calculations can be efficiently executed on general-purpose SKUs in Azure. Depending on the codes used to perform the calculations, many implementations leverage vendor-specific optimizations such as Intel's AVX-512. With the recent announcement of the public preview of the 6th generation of Intel-based Dv6 VMs (see here), this article explores the performance evolution across three generations of D32ds VMs, from D32ds_v4 to D32ds_v6. We follow a testing methodology similar to the January 2023 article "Benchmarking on Azure HPC SKUs for Financial Services Workloads" (link here).

Overview of the D-Series VMs in focus

The official announcement notes that the upcoming Dv6 series (currently in preview) offers significant improvements over the previous Dv5 generation. Key highlights include:

- Up to 27% higher vCPU performance and a threefold increase in L3 cache compared to the previous-generation Intel Dl/D/Ev5 VMs.
- Support for up to 192 vCPUs and more than 18 GiB of memory.
- Azure Boost, which provides up to 400,000 IOPS and 12 GB/s remote storage throughput, and up to 200 Gbps VM network bandwidth.
- A 46% increase in local SSD capacity and more than three times the read IOPS.
- NVMe interface for both local and remote disks.

Note: Enhanced security through Total Memory Encryption (TME) technology is not activated in the preview deployment and will be benchmarked once available.
Technical Specifications for 3 generations of D32ds SKUs

| VM Name | D32ds_v4 | D32ds_v5 | D32ds_v6 |
| --- | --- | --- | --- |
| Number of vCPUs | 32 | 32 | 32 |
| InfiniBand | N/A | N/A | N/A |
| Processor | Intel Xeon Platinum 8370C (Ice Lake) or Intel Xeon Platinum 8272CL (Cascade Lake) | Intel Xeon Platinum 8370C (Ice Lake) | Intel Xeon Platinum 8573C (Emerald Rapids) |
| Peak CPU Frequency | 3.4 GHz | 3.5 GHz | 3.0 GHz |
| RAM per VM | 128 GB | 128 GB | 128 GB |
| RAM per core | 4 GB | 4 GB | 4 GB |
| Attached Disk | 1200 GiB SSD | 1200 GiB SSD | 440 GiB SSD |

Benchmarking Setup

For our benchmarking setup, we utilised the user-friendly, open-source Phoronix Test Suite (link) to run two tests from the OpenBenchmarking.org test suite specifically targeting quantitative finance workloads. The tests in the "finance suite" are divided into two groups, each running independent benchmarks. In addition to the finance test suite, we also ran AI-Benchmark to evaluate the evolution of AI inferencing capabilities across the three VM generations.

| Finance Bench | QuantLib | AI Benchmark |
| --- | --- | --- |
| Bonds OpenMP | Size XXS | Device Inference Score |
| Repo OpenMP | Size S | Device AI Score |
| Monte-Carlo OpenMP | | Device Training Score |

Software dependencies:

| Component | Version |
| --- | --- |
| OS Image | Ubuntu marketplace image: 24_04-lts |
| Phoronix Test Suite | 10.8.5 |
| QuantLib Benchmark | 1.35-dev |
| Finance Bench Benchmark | 2016-07-25 |
| AI Benchmark Alpha | 0.1.2 |
| Python | 3.12.3 |

To run the benchmark on a freshly created D-Series VM, execute the following commands (after updating the installed packages to the latest versions):

git clone https://github.com/phoronix-test-suite/phoronix-test-suite.git
sudo apt-get install php-cli php-xml cmake
sudo ./install-sh
phoronix-test-suite benchmark finance

For the AI Benchmark tests, a few additional steps are required.
For example, creating a virtual environment for the additional Python packages and installing the tensorflow and ai-benchmark packages:

sudo apt install python3 python3-pip python3-virtualenv
mkdir ai-benchmark && cd ai-benchmark
virtualenv virtualenv
source virtualenv/bin/activate
pip install tensorflow
pip install ai-benchmark
phoronix-test-suite benchmark ai-benchmark

Benchmarking Runtimes and Results

The purpose of this article is to share the results of a set of benchmarks that closely align with the use cases mentioned in the introduction. Most of these use cases are predominantly CPU-bound, which is why we have limited the benchmark to D-Series VMs. For memory-bound codes that would benefit from a higher memory-to-core ratio, the new Ev6 SKU could be a suitable option. In the picture below, you can see a representative benchmarking run on a Dv6 VM, where nearly 100% of the CPUs were utilised during execution. The individual runs of the Phoronix test suite, starting with Finance Bench and followed by QuantLib, are clearly visible.

Runtimes:

| Benchmark | VM Size | Start Time | End Time | Duration | Minutes |
| --- | --- | --- | --- | --- | --- |
| Finance Benchmark | Standard D32ds_v4 | 12:08 | 15:29 | 03:21 | 201.00 |
| Finance Benchmark | Standard D32ds_v5 | 11:38 | 14:12 | 02:34 | 154.00 |
| Finance Benchmark | Standard D32ds_v6 | 11:39 | 13:27 | 01:48 | 108.00 |

Finance Bench Results

QuantLib Results

AI Benchmark Alpha Results

Discussion of the results

The results show significant performance improvements in QuantLib across the D32v4, D32v5, and D32v6 versions. Specifically, the tasks per second for Size S increased by 47.18% from D32v5 to D32v6, while Size XXS saw an increase of 45.55%. Benchmark times for 'Repo OpenMP' and 'Bonds OpenMP' also decreased, indicating better performance: 'Repo OpenMP' times were reduced by 18.72% from D32v4 to D32v5 and by 20.46% from D32v5 to D32v6, while 'Bonds OpenMP' times decreased by 11.98% from D32v4 to D32v5 and by 18.61% from D32v5 to D32v6.
In terms of Monte-Carlo OpenMP performance, the D32v6 showed the best results with a time of 51,927.04 ms, followed by the D32v5 at 56,443.91 ms and the D32v4 at 57,093.94 ms; the runtimes improved by 1.14% from D32v4 to D32v5 and by 8.00% from D32v5 to D32v6. AI Benchmark Alpha scores for device inference and training also improved significantly: inference scores increased by 15.22% from D32v4 to D32v5 and by 42.41% from D32v5 to D32v6, while training scores increased by 21.82% from D32v4 to D32v5 and by 43.49% from D32v5 to D32v6. Finally, Device AI scores improved across the versions, with D32v4 scoring 6726, D32v5 scoring 7996, and D32v6 scoring 11436, percentage increases of 18.88% from D32v4 to D32v5 and 43.02% from D32v5 to D32v6.

Next Steps & Final Comments

The public preview of the new Intel SKUs has already shown very promising benchmarking results, indicating a significant performance improvement over the previous D-series generations, which are still widely used in FSI scenarios. It's important to note that your custom code or purchased libraries might exhibit different characteristics than the benchmarks selected here, so we recommend validating the performance indicators with your own setup. In this benchmarking setup we have not disabled Hyper-Threading on the CPUs, so the available cores are exposed as virtual cores; if this scenario is of interest to you, please reach out to the authors for more information. Additionally, Azure offers a wide range of VM families to suit various needs, including F, FX, Fa, D, Da, E, Ea, and specialized HPC SKUs like the HC and HB VMs. A dedicated validation based on your individual code and workload is recommended here as well, to ensure the best-suited SKU is selected for the task at hand.
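For transparency, the relative changes quoted in the discussion above are plain (new - old) / old percentages. A small sketch that reproduces the Monte-Carlo OpenMP deltas from the published timings:

```shell
# Percentage change between two benchmark timings: (new - old) / old * 100.
# The timings (ms) are the Monte-Carlo OpenMP results quoted in this post.
delta() { awk -v a="$1" -v b="$2" 'BEGIN { printf "%.2f", (a - b) / b * 100 }'; }

v4_to_v5=$(delta 56443.91 57093.94)   # D32v4 -> D32v5
v5_to_v6=$(delta 51927.04 56443.91)   # D32v5 -> D32v6
echo "Monte-Carlo OpenMP change: v4->v5 ${v4_to_v5}%, v5->v6 ${v5_to_v6}%"
```

Negative values mean shorter runtimes, i.e. the 1.14% and 8.00% improvements reported above.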