Building an Enterprise RAG Pipeline in Azure with NVIDIA AI Blueprint for RAG and Azure NetApp Files
Transform your enterprise-grade RAG pipeline with NVIDIA AI and Azure NetApp Files. This post highlights the challenges of scaling RAG solutions and introduces NVIDIA's AI Blueprint adapted for Azure. Discover how Azure NetApp Files boosts performance and handles dynamic demands, enabling robust and efficient RAG workloads.

Granting Azure Resources Access to SharePoint Online Sites Using Managed Identity
When integrating Azure resources like Logic Apps, Function Apps, or Azure VMs with SharePoint Online, you often need secure and granular access control. Rather than handling credentials manually, Managed Identity is the recommended approach to securely authenticate to Microsoft Graph and access SharePoint resources.

High-level steps:
Step 1: Enable Managed Identity (or App Registration)
Step 2: Grant Sites.Selected Permission in Microsoft Entra ID
Step 3: Assign SharePoint Site-Level Permission

Step 1: Enable Managed Identity (or App Registration)
For your Azure resource (e.g., Logic App):
Navigate to the Azure portal.
Go to the resource (e.g., Logic App).
Under Identity, enable System-assigned Managed Identity.
Note the Object ID and Client ID (you'll need the Client ID later).
Alternatively, use an App Registration if you prefer a multi-tenant or reusable identity. See: How to register an app in Microsoft Entra ID - Microsoft identity platform | Microsoft Learn

Step 2: Grant Sites.Selected Permission in Microsoft Entra ID
Open Microsoft Entra ID > App registrations.
Select your Logic App's managed identity or app registration.
Under API permissions, click Add a permission > Microsoft Graph.
Select Application permissions and add: Sites.Selected
Click Grant admin consent.
Note: Sites.Selected ensures least-privilege access — you must explicitly allow site-level access later.

Step 3: Assign SharePoint Site-Level Permission
SharePoint Online requires site-level consent for apps with Sites.Selected. Use the script below to assign access.
Note: You must be a SharePoint Administrator and have the Sites.FullControl.All permission when running this.

PowerShell Script:

# Replace with your values
$application = @{
    id          = "{ApplicationID}"   # Client ID of the Managed Identity
    displayName = "{DisplayName}"     # Display name (optional but recommended)
}
$appRole   = "write"                  # Can be "read" or "write"
$spoTenant = "contoso.sharepoint.com" # SharePoint site host
$spoSite   = "{Sitename}"             # SharePoint site name

# Site ID format for Graph API
$spoSiteId = $spoTenant + ":/sites/" + $spoSite + ":"

# Load Microsoft Graph module
Import-Module Microsoft.Graph.Sites

# Connect with appropriate permissions
Connect-MgGraph -Scopes Sites.FullControl.All

# Grant site-level permission
New-MgSitePermission -SiteId $spoSiteId -Roles $appRole -GrantedToIdentities @{ Application = $application }

That's it. Your Logic App or Azure resource can now call Microsoft Graph APIs to interact with that specific SharePoint site (e.g., list files, upload documents). You maintain centralized control and least-privilege access, complying with enterprise security standards. By following this approach, you ensure secure, auditable, and scalable access from Azure services to SharePoint Online — no secrets, no user credentials, just managed identity done right.
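Once the site permission is in place, any workload running under that managed identity can call Microsoft Graph directly. The following Python sketch is an illustration rather than part of the original walkthrough: it uses azure-identity and the requests library to resolve the site ID and list the files in the site's default document library. The site hostname and site name are placeholders.

import requests
from azure.identity import ManagedIdentityCredential

GRAPH = "https://graph.microsoft.com/v1.0"

credential = ManagedIdentityCredential()  # uses the resource's system-assigned identity
token = credential.get_token("https://graph.microsoft.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Resolve the site ID from its hostname and site name (placeholders below)
site = requests.get(f"{GRAPH}/sites/contoso.sharepoint.com:/sites/ProjectX", headers=headers).json()
site_id = site["id"]

# List files in the site's default document library
children = requests.get(f"{GRAPH}/sites/{site_id}/drive/root/children", headers=headers).json()
for item in children.get("value", []):
    print(item["name"])

Because the identity holds only Sites.Selected with a "write" role on this one site, the same token cannot read or modify any other SharePoint site in the tenant.

Azure NetApp Files solutions for three EDA Cloud-Compute scenarios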
Table of Contents Abstract Introduction EDA Cloud-Compute scenarios Scenario 1: Burst to Azure from on-premises Data Center Scenario 2: “24x7 Single Set Workload” Scenario 3: "Data Center Supplement" Summary Abstract Azure NetApp Files (ANF) is transforming Electronic Design Automation (EDA) workflows in the cloud by delivering unparalleled performance, scalability, and efficiency. This blog explores how ANF addresses critical challenges in three cloud compute scenarios: Cloud Bursting, 24x7 All-in-Cloud, and Cloud-based Data Center Supplement. These solutions are tailored to optimize EDA processes, which rely on high-performance NFS file systems to design advanced semiconductor products. With the ability to support clusters exceeding 50,000 cores, ANF enhances productivity, shortens design cycles, and eliminates infrastructure concerns, making it the default choice for EDA workloads in Azure. Additionally, innovations such as increased L3 cache and the transition to DDR5 memory enable performance boosts of up to 60%, further accelerating the pace of chip design and innovation. Co-authors: Andy Chan, Principal Product Manager Azure NetApp Files Arnt de Gier, Technical Marketing Engineer Azure NetApp Files Introduction Azure NetApp Files (ANF) solutions support three major cloud compute scenarios running Electronic Design Automation (EDA) in Azure: Cloud Bursting 24x7 All-in-Cloud Cloud based Data Center Supplement ANF solutions can address the key challenges associated with each scenario. By providing an optimized solution stack for EDA engineers ANF will increase productivity and shorten design cycles, making ANF the de facto standard file system for running EDA workloads in Azure. Electronic Design Automation (EDA) processes are comprised of a suite of software tools and workflows used to design semiconductor products such as advanced computer processors (chips) which are all in need of high performance NFS file system solutions. The increasing demand for chips with superior performance, reduced size, and lower power consumption (PPA) is driven by today's rapid pace of innovation to power workloads such as AI. To meet this growing demand, EDA tools require numerous nodes and multiple CPUs (cores) in a cluster. This is where Azure NetApp Files (ANF) comes into play with its high-performance, scalable file system. ANF ensures that data is efficiently delivered to these compute nodes. This means a single cluster—sometimes encompassing more than 50,000 cores—can function as a unified entity, providing both scale-out performance and consistency which is essential for designing advanced semiconductor products. ANF is the most performance optimized NFS storage in Azure making it the De facto solution for EDA workloads. According to Philip Steinke, AMD's Fellow of CAD Infrastructure and Physical Design, the main priority is to maximize the productivity of chip designers by eliminating infrastructure concerns related to compute and file system expansion typically experienced with on-premises deployments that require long planning cycles and significant capital expenditure. In register-transfer level (RTL) simulations, Microsoft Azure showcased that moving to a CPU with greater amounts of L3 Cache can give EDA users a performance boost of up to 60% for their workloads. This improvement is attributed to increased L3 cache, higher clock speeds (instructions per cycle), and the transition from DDR4 to DDR5 memory. 
Azure’s commitment to providing high-performing, on-demand HPC (High-Performance Computing) infrastructure is a well-known advantage and has become the primary reason EDA companies are increasingly adopting Azure for their chip design needs. In this paper, three different scenarios of Azure for EDA are explored, namely “Cloud Bursting”, “24x7 Single Set Workload” and “Data Center Supplement” as a reference framework to help guide engineer’s Azure for EDA journey. EDA Cloud-Compute scenarios The following sections delve into three key scenarios that address the computational needs of EDA workflows: “Cloud Bursting,” “24x7 Single Set Workload,” and “Data Center Supplement.” Each scenario highlights how Azure's robust infrastructure, combined with high-performance solutions like Azure NetApp Files, enables engineering teams to overcome traditional limitations, streamline chip design processes, and significantly enhance productivity. Scenario 1: Burst to Azure from on-premises Data Center An EDA workload is made up of a series of workflows where certain steps are bursty which can lead to incidents in semiconductor project cycles where compute demand exceeds the on-premises HPC server cluster capacity. Many EDA customers have been bursting to Azure to speed up their engineering projects. In one example, a total of 120,000 cores were deployed serving in many clusters, all were well supported with the high-performance capabilities of ANF. As design projects approach completion, the design is continuously and incrementally modified to fix bugs, synthesis and timing issues, optimization of area, timing and power, resolving issues associated with manufacturing design rule checks, etc. When design changes are made, many if not all the design steps must be re-run to ensure the change did not break the design. As a result, “design spins” or “large regression” jobs will put a large compute demand on the HPC server cluster. This leads to long job scheduler queues (IBM LSF and Univa Grid Engine are two common schedulers for EDA) where jobs wait to be dispatched to run on an available compute node. Competing project schedules are another reason HPC server cluster demands can exceed on-premises fixed capacity. Most engineering divisions within a company share infrastructure resources across teams and projects which inevitably leads to oversubscription of compute capacity and long job queues resulting in production delays. Bursting EDA jobs into Azure with its available compute capacity, is a way to alleviate these delays. For example, Azure’s latest CPU offering can deliver up to 47% shorter turnaround times for RTL simulation than on-premises. Engineering management tries to increase productivity with effective use of their EDA tool licensing. Utilizing Azure's on-demand compute resources and high-performance storage solutions like Azure NetApp Files, enables engineering teams to accelerate design cycles and reduce Non-recurring Engineering (NRE) costs, enhancing productivity significantly. For “burst to Azure” scenarios that allow engineers quick access to compute resources to finish a job without worrying about the underlying NFS infrastructure and traditional complex management overhead, ANF delivers: High Performance: up to 826,000 IOPS per large volume, serving the data for the most demanding simulations with ease to reduce turn-around-time. Scalability: As EDA projects advance, the data generated can grow exponentially. 
ANF provides large-capacity single namespaces with volumes up to 2PiB, enabling your storage solution to scale seamlessly, while supporting compute clusters with more than 50,000 cores.
Ease of Use: ANF is designed for simplicity, with a SaaS-like user experience, allowing deployment and management with a few clicks or API automation. Since storage deployment can be done rapidly, engineering teams can access their EDA HPC hardware quickly for their jobs.
Cost-Effectiveness: ANF offers cool access, which transparently moves 'cold' data blocks to lower-cost Azure Storage. Additionally, Reserved Capacity (RC) can provide significant cost savings compared to pay-as-you-go pricing, further reducing the high upfront CapEx costs and long procurement cycle associated with on-premises storage solutions. Use the ANF effective pricing estimator to estimate your savings.
Reliability and Security: ANF provides enterprise-grade data management and security features, ensuring that your critical EDA data is protected and available when you need it, with key management and encryption built in.

Scenario 2: "24x7 Single Set Workload"
As Azure for EDA has matured over time and the value of providing engineers with available and faster HPC infrastructure has become more widely shared, more users are now moving entire sets of workloads into Azure that run 24x7. In addition to SPICE or RTL simulations, one such set of workloads is "digital signoff", with the same goal of increasing productivity. Scenario 1 concerns cloud bursting, which involves batch processes with high performance and rapid deployment, whereas Scenario 2 involves operating a set of workloads with additional ANF capabilities for data security and user control needs.
QoS support: ANF's QoS function fine-tunes storage utilization by establishing a direct correlation between volume size (quota) and performance, which sets the storage limit an EDA tool or workload may have access to.
Snapshot data protection: As more users are using Azure resources, data protection is crucial. ANF snapshots protect primary data often and efficiently for fast recovery from corruption or loss, by restoring a volume to a snapshot in seconds or by restoring individual files from a snapshot. Enabling snapshots is recommended for user home directories and group shares for this reason as well.
Large volume support: A set of workloads generates greater output than a single workload, and as such ANF's large volume support is a feature that's being widely adopted by EDA users of this scenario. ANF now supports single volumes up to 2PiB in size, allowing more fine-tuned management of users' storage footprint.
Cool access: Cool access is an ANF feature that enables better cost control because only data that is being worked on at any given time remains in the hot tier. This functionality enables inactive data blocks from the volume and volume snapshots to be transferred from the hot tier to an Azure storage account (the cool tier), saving cost. Because EDA workloads are known to be metadata heavy, ANF does not relocate metadata to the cool tier, ensuring that metadata operations operate as expected.
Dynamic capacity pool resizing: Cloud compute resources can be dynamically allocated. To support this deployment model, Azure NetApp Files (ANF) also offers dynamic pool resizing, which further enhances Azure-for-EDA's value proposition (a scripted resize example follows this feature list). If the size of the pool remains constant but performance requirements fluctuate, enabling dynamic provisioning and deprovisioning of capacity pools of different types provides just-in-time performance. This approach lowers costs during periods when high performance is not needed.
Reserved Capacity: Azure allows resources to be reserved as a way to guarantee access to that capacity, while allowing you to receive significant cost savings compared to the standard "pay-as-you-go" pricing model. This Azure offering is available to ANF. A reservation in 100-TiB and 1-PiB units per month for a one- or three-year term for a particular service level within a region is now available.
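To make the dynamic resizing idea concrete, here is a minimal Python sketch of growing a capacity pool ahead of a burst. The client, operation, and model names (NetAppManagementClient, pools.begin_update, CapacityPoolPatch) are assumptions based on the azure-mgmt-netapp SDK, and the resource names are placeholders; treat it as an outline to verify against the current SDK reference rather than a drop-in script.

# Assumed SDK surface: azure-mgmt-netapp's NetAppManagementClient with a `pools`
# operations group; names and signatures should be checked against the SDK docs.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import CapacityPoolPatch

TIB = 1024 ** 4
client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Grow the pool ahead of a large regression run, e.g. from 4 TiB to 16 TiB
poller = client.pools.begin_update(
    "eda-rg",                        # resource group (placeholder)
    "eda-anf-account",               # NetApp account (placeholder)
    "sim-pool",                      # capacity pool (placeholder)
    CapacityPoolPatch(size=16 * TIB) # size is expressed in bytes
)
poller.result()  # wait for the resize to complete

The same call with a smaller size (or a lower service level on a separate pool) can reduce spend once the burst is over, which is the just-in-time behavior described above.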
Scenario 3: "Data Center Supplement"
This scenario builds on Scenarios 1 and 2: here, EDA users expand their workflow into Azure as their data center. In this scenario, a mixed EDA flow is hosted, with tools from several EDA ISVs spanning frontend, backend, and analog mixed-signal being deployed. EDA companies such as d-Matrix were able to design an entire AI chip in Azure, an example of Scenario 3. In this data center supplement scenario, data mobility and additional data lifecycle management solutions are essential. Once again, Azure NetApp Files (ANF) rises to the challenge by offering additional features within its solution stack:
Backup support: ANF has a policy-based backup feature that uses AES-256-bit encryption during the encoding of the received backup data. Backup frequency is defined by a policy.
Cross-region replication: ANF data can be replicated asynchronously between Azure NetApp Files volumes (source and destination) with cross-region replication. The source and destination volumes must be deployed in different Azure regions. The service level for the destination capacity pool might be the same or different, allowing customers to fine-tune their data protection demands as efficiently as possible.
Cross-zone replication: Similar to the Azure NetApp Files cross-region replication feature, the cross-zone replication (CZR) capability provides data protection between volumes in different availability zones. You can asynchronously replicate data from an Azure NetApp Files volume (source) in one availability zone to another Azure NetApp Files volume (destination) in another availability zone. This capability enables you to fail over your critical application if a zone-wide outage or disaster happens.
BC/DR: Users can construct their own solution based on their own goals by using a variety of BC/DR templates that include snapshots, various replication types, failover capabilities, backup, and support for REST API, Azure CLI, and Terraform.

Summary
The integration of ANF into the EDA workflow addresses the limitations of traditional on-premises infrastructure. By leveraging the latest CPU generations and Azure's on-demand HPC infrastructure, EDA users can achieve significant performance gains and improve productivity, all while being connected by the most optimized, performant file system that's simple to deploy and support. The three Azure for EDA scenarios—Cloud Bursting, 24x7 Single Set Workload, and Data Center Supplement—showcase Azure's adaptability and effectiveness in fulfilling the changing needs of the semiconductor industry. As a result, ANF has become the default NFS solution for EDA in Azure, allowing businesses to innovate even faster.

Streamlining data discovery for AI/ML with OpenMetadata on AKS and Azure NetApp Files
This article contains a step-by-step guide to deploying OpenMetadata on Azure Kubernetes Service (AKS), using Azure NetApp Files for storage. It also covers the deployment and configuration of PostgreSQL and OpenSearch databases to run externally from the Kubernetes cluster, following OpenMetadata best practices, managed by NetApp® Instaclustr®. This comprehensive tutorial aims to assist Microsoft and NetApp customers in overcoming the challenges of identifying and managing their data for AI/ML purposes. By following this guide, users will achieve a fully functional OpenMetadata instance, enabling efficient data discovery, enhanced collaboration, and robust data governance.

Synthetic Monitoring in Application Insights Using Playwright: A Game-Changer
Monitoring the availability and performance of web applications is crucial to ensuring a seamless user experience. Azure Application Insights provides powerful synthetic monitoring capabilities to help detect issues proactively. However, Microsoft has deprecated two key features:
(Deprecated) Multi-step web tests: Previously, these allowed developers to record and replay a sequence of web requests to test complex workflows. They were created in Visual Studio Enterprise and uploaded to the portal.
(Deprecated) URL ping tests: These tests checked if an endpoint was responding and measured performance. They allowed setting custom success criteria, dependent request parsing, and retries.
With these features being phased out, we are left without built-in logic to test application health beyond simple endpoint checks. The solution? Custom TrackAvailability tests using Playwright.

What is Playwright?
Playwright is a powerful end-to-end testing framework that enables automated browser testing for modern web applications. It supports multiple browsers (Chromium, Firefox, WebKit) and can run tests in headless mode, making it ideal for synthetic monitoring.

Why Use Playwright for Synthetic Monitoring?
Simulate real user interactions (login, navigate, click, etc.)
Catch UI failures that simple URL ping tests cannot detect
Execute complex workflows like authentication and transactions
Integrate with Azure Functions for periodic execution
Log availability metrics in Application Insights for better tracking and alerting

Step-by-Step Implementation (Repo link)

Set Up an Azure Function App
Navigate to the Azure Portal.
Create a new Function App.
Select Runtime Stack: Node.js.
Enable Application Insights.

Install Dependencies
In your local development environment, create a Node.js project:

mkdir playwright-monitoring && cd playwright-monitoring
npm init -y
npm install @azure/functions playwright applicationinsights dotenv

Implement the Timer-Triggered Azure Function
Create timerTrigger1.js:

const { app } = require('@azure/functions');
const { runPlaywrightTests } = require('../playwrightTest.js'); // Import the Playwright test function

app.timer('timerTrigger1', {
    schedule: '0 */5 * * * *', // Runs every 5 minutes
    handler: async (myTimer, context) => {
        try {
            context.log("Executing Playwright test...");
            await runPlaywrightTests(context);
            context.log("Playwright test executed successfully!");
        } catch (error) {
            context.log.error("Error executing Playwright test:", error);
        } finally {
            context.log("Timer function processed request.");
        }
    }
});

Implement the Playwright Test Logic
Create playwrightTest.js:

require('dotenv').config();
const playwright = require('playwright');
const appInsights = require('applicationinsights');

// Debugging: Print env variable to check if it's loaded correctly
console.log("App Insights Key:", process.env.APPLICATIONINSIGHTS_CONNECTION_STRING);

// Initialize Application Insights
appInsights
    .setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING || process.env.APPINSIGHTS_INSTRUMENTATIONKEY)
    .setSendLiveMetrics(true)
    .setDistributedTracingMode(appInsights.DistributedTracingModes.AI_AND_W3C)
    .setAutoDependencyCorrelation(true)
    .setAutoCollectRequests(true)
    .setAutoCollectPerformance(true)
    .setAutoCollectExceptions(true)
    .setAutoCollectDependencies(true)
    .setAutoCollectConsole(true)
    .setUseDiskRetryCaching(true)   // Enables retry caching for telemetry
    .setInternalLogging(true, true) // Enables internal logging for debugging
    .start();

const client = appInsights.defaultClient;

async function runPlaywrightTests(context) {
    const timestamp = new Date().toISOString();
    try {
        context.log(`[${timestamp}] Running Playwright login test...`);

        // Launch Browser
        const browser = await playwright.chromium.launch({ headless: true });
        const page = await browser.newPage();

        // Navigate to login page
        await page.goto('https://www.saucedemo.com/');

        // Perform Login
        await page.fill('#user-name', 'standard_user');
        await page.fill('#password', 'secret_sauce');
        await page.click('#login-button');

        // Verify successful login
        await page.waitForSelector('.inventory_list', { timeout: 5000 });

        // Log Success to Application Insights
        client.trackAvailability({
            name: "SauceDemo Login Test",
            success: true,
            duration: 5000, // Execution time
            runLocation: "Azure Function",
            message: "Login successful",
            time: new Date()
        });

        context.log("✅ Playwright login test successful.");
        await browser.close();
    } catch (error) {
        context.log.error("❌ Playwright login test failed:", error);

        // Log Failure to Application Insights
        client.trackAvailability({
            name: "SauceDemo Login Test",
            success: false,
            duration: 0,
            runLocation: "Azure Function",
            message: error.message,
            time: new Date()
        });
    }
}

module.exports = { runPlaywrightTests };

Configure Environment Variables
Create a .env file and set your Application Insights connection string:

APPLICATIONINSIGHTS_CONNECTION_STRING=<your_connection_string>

Deploy and Monitor
Deploy the Function App using Azure Functions Core Tools:

func azure functionapp publish <your-function-app-name>

Monitor the availability results in Application Insights → Availability.

Setting Up Alerts for Failed Tests
To get notified when availability tests fail:
Open Application Insights in the Azure portal.
Go to Alerts → Create Alert Rule.
Select Signal Type: Availability Results.
Configure a condition where Success = 0 (Failure).
Add an action group (email, Teams, etc.).
Click Create Alert Rule.

Conclusion
With Playwright-based synthetic monitoring, you can go beyond basic URL ping tests and validate real user interactions in your application. Since Microsoft has deprecated Multi-step web tests and URL ping tests, this approach ensures better availability tracking, UI validation, and proactive issue detection in Application Insights.

AI for Operations
Solutions idea This solution series shows some examples of how Azure OpenAI and its LLM models can be used on Operations and FinOps issues. With a view to the use of models linked to the Enterprise Scale Landing Zone, the solutions shown, which are available on a dedicated GitHub, are designed to be deployed within a dedicated subscription, in the examples called ‘OpenAI-CoreIntegration’. The examples we are going to list are: SQL BPA AI Enhanced Azure Update Manager AI Enhanced Azure Cost Management AI Enhanced Azure AI Anomalies Detection Azure OpenAI Smart Doc Creator Enterprise Scale AI for Operations Landing Zone Design Architecture SQL BPA AI Enhanced Architecture This LogApp is an example of integrating ARC SQL practices assessment results with OpenAI, creating an HTML report and CSV file send via Email with OpenAI comment of Severity High and/or Medium results based on the actual Microsoft Documentation. Dataflow Initial Trigger Type: Recurrence Configuration: Frequency: Weekly Day: Monday Time: 9:00 AM Time Zone: W. Europe Standard Time Description: The Logic App is triggered weekly to gather data for SQL Best Practice Assessments. Step 1: Data Query Action: Run_query_and_list_results Description: Executes a Log Analytics query to retrieve SQL assessment results from monitored resources. Output: A dataset containing issues classified by severity (High/Medium). Step 2: Variable Initialization Actions: Initialize_variable_CSV: Initializes an empty array to store CSV results. Open_AI_API_Key: Sets up the API key for Azure OpenAI service. HelpLinkContent: Prepares a variable to store useful links. Description: Configures necessary variables for subsequent steps. Step 3: Process Results Action: For_eachSQLResult Description: Processes the query results with the following sub-steps: Condition: Checks if the severity is High or Medium. OpenAI Processing: Sends structured prompts to the GPT-4 model for recommendations on identified issues. Parses the JSON response to extract specific insights. CSV Composition: Creates an array containing detailed results. Step 4: Report Generation Actions: Create_CSV_table: Converts processed data into a CSV format. Create_HTML_table: Generates an HTML table from the data. ComposeMailMessage: Prepares an HTML email message containing the results and a link to the report. Description: Formats the data for sharing. Step 5: Saving and Sharing Actions: Create_file: Saves the HTML report to OneDrive. Send_an_email_(V2): Sends an email with the reports attached (HTML and CSV). Post_message_in_a_chat_or_channel: Shares the results in a Teams channel. Description: Distributes the reports to defined recipients. Components Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and generating communication to the customers. Azure Logic Apps is a cloud platform where you can create and run automated workflows with little to no code. Azure Logic Apps Managed Identities allow to authenticate to any resource that supports Microsoft Entra authentication, including your own applications. Azure ARC SQL Server enabled by Azure Arc extends Azure services to SQL Server instances hosted outside of Azure: in your data center, in edge site locations like retail stores, or any public cloud or hosting provider. 
SQL Best Practices Assessment feature provides a mechanism to evaluate the configuration of your SQL Server instance.
Azure Monitor is a comprehensive monitoring solution for collecting, analyzing, and responding to monitoring data from your cloud and on-premises environments.
Azure Kusto Query is a powerful tool to explore your data and discover patterns, identify anomalies and outliers, create statistical modeling, and more.

Potential use cases
SQL BPA AI Enhanced exploits the capabilities of the SQL Best Practice Assessment service based on Azure ARC SQL Server. The collected data can be used for the generation of customised tables. The solution is designed for customers who want to enrich their Assessment information with Generative Artificial Intelligence.

Azure Update Manager AI Enhanced
Architecture
This LogApp solution example retrieves data from the Azure Update Manager service and returns an output processed by generative artificial intelligence.

Dataflow
Initial Trigger
Type: Recurrence Trigger
Frequency: Monthly
Time Zone: W. Europe Standard Time
Triggers the Logic App at the beginning of every month.
Step 1: Initialize API Key
Action: Initialize Variable
Variable Name: Api-Key
Step 2: Fetch Update Status
Action: HTTP Request
URI: https://management.azure.com/providers/Microsoft.ResourceGraph/resources
Query: Retrieves resources related to patch assessments using patchassessmentresources.
Step 3: Parse Update Status
Action: Parse JSON
Content: Response body from the HTTP request.
Schema: Extracts details such as VM Name, Patch Name, Patch Properties, etc.
Step 4: Process Updates
For Each: Body('Parse_JSON')?['data'] — iterates through each item in the parsed update data.
Condition: If Patch Name is not null and contains "KB":
Action: Format Item — parses individual update items for VM Name, Patch Name, and additional properties.
Action: Send to Azure OpenAI — sends structured prompts to the GPT-4 model. Headers: Content-Type: application/json, api-key: @variables('Api-Key'). Body: Prompts Azure OpenAI to generate a report for each virtual machine and patch, formatted in Italian.
Action: Parse OpenAI Response — extracts and formats the response generated by Azure OpenAI.
Action: Append to Summary and CSV — adds the OpenAI-generated response to the Updated Summary array and appends patch details to the CSV array.
Step 5: Finalize Report
Action: Create Reports (I, II, III) — formats and cleans the Updated Summary variable to remove unwanted characters.
Action: Compose HTML Email Content — constructs an HTML email with the following: report summary generated using OpenAI, disclaimer about possible formatting anomalies, company logo embedded.
Step 6: Generate CSV Table
Action: Converts the CSV array into a CSV format for attachment.
Step 7: Send E-Mail
Action: Send Email
Recipient: [email protected]
Subject: Security Update Assessment
Body: HTML content with report summary.
Attachment: Name: SmartUpdate_<timestamp>.csv; Content: CSV table of update details.

Components
Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and generating communication to the customers.
Azure Logic Apps is a cloud platform where you can create and run automated workflows with little to no code.
Azure Logic Apps Managed Identities allow to authenticate to any resource that supports Microsoft Entra authentication, including your own applications.
Azure Update Manager is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your machines in Azure and on-premises/on other cloud platforms (connected by Azure Arc) from a single pane of management. You can also use Update Manager to make real-time updates or schedule them within a defined maintenance window.
Azure Arc Server lets you manage Windows and Linux physical servers and virtual machines hosted outside of Azure, on your corporate network, or other cloud provider.

Potential use cases
Azure Update Manager AI Enhanced is an example of a solution designed for all those situations where the IT department needs to manage and automate the telling of information in a readable format on the status of updates to its infrastructure thanks to an output managed by generative artificial intelligence.

Azure Cost Management AI Enhanced
Architecture
This LogApp solution retrieves consumption data from the Azure environment and generates a general and detailed cost trend report on a scheduled basis.

Dataflow
Initial Trigger
Type: Manual HTTP Trigger
The Logic App is triggered manually using an HTTP request.
Step 1: Set Current Date and Old Date
Action: Set Actual Date — current date is initialized to @utcNow('yyyy-MM-dd'). Example Value: 2024-11-22.
Action: Set Actual Date -30 — old date is set to 30 days before the current date. Example Value: 2024-10-23.
Action: Set old date -30 — sets the variable currentdate to 30 days prior to the old date. Example Value: 2024-09-23.
Action: Set old date -60 — sets the variable olddate to 60 days before the current date. Example Value: 2024-08-23.
Step 2: Query Cost Data
Action: Query last 30 days — queries Azure Cost Management for the last 30 days. Example data returned:
{
  "properties": {
    "rows": [
      ["Virtual Machines", 5000],
      ["Databases", 7000],
      ["Storage", 3000]
    ]
  }
}
Action: Query -60 -30 days — queries Azure Cost Management for 30 to 60 days ago. Example data returned:
{
  "properties": {
    "rows": [
      ["Virtual Machines", 4800],
      ["Databases", 6800],
      ["Storage", 3050]
    ]
  }
}
Step 3: Download Detailed Reports
Action: Download_report_actual_month — generates and retrieves a detailed cost report for the current month.
Action: Download_report_last_month — generates and retrieves a detailed cost report for the previous month.
Step 4: Process and Store Reports
Action: Actual_Month_Report — parses the JSON from the current month's report and retrieves blob download links for the detailed report.
Action: Last_Month_Report — parses the JSON from the last month's report and retrieves blob download links for the detailed report.
Action: Create_ActualMonthDownload and Create_LastMonthDownload — initializes variables to store download links.
Action: Get_Actual_Month_Download_Link and Get_Last_Month_Download_Link — iterates through blob data and assigns the download link variables.
Step 5: Generate Questions for OpenAI
Action: Set_Question — prepares the first question for Azure OpenAI: "Describe the key differences between the previous and current month's costs, and create a bullet-point list detailing these differences in Euros."
Action: Set_Second_Question — prepares a second question for Azure OpenAI: "Briefly describe in Italian the major cost differences between the two months, rounding the amounts to Euros."
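Step 6 below hands these prompts to Azure OpenAI over its REST API. Outside of the Logic App, the equivalent call can be sketched in Python with the openai package's AzureOpenAI client; the endpoint, deployment name, API version, and sample cost rows shown here are placeholders and assumptions for illustration only.

import os
from openai import AzureOpenAI  # openai >= 1.x

# Endpoint, key, deployment name, and API version are placeholders — use your own values.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

question = (
    "Describe the key differences between the previous and current month's costs, "
    "and create a bullet-point list detailing these differences in Euros."
)
cost_rows = '{"previous": [["Virtual Machines", 4800]], "current": [["Virtual Machines", 5000]]}'

response = client.chat.completions.create(
    model="gpt-4",  # name of your Azure OpenAI deployment
    messages=[
        {"role": "system", "content": "You are a FinOps assistant summarizing Azure cost data."},
        {"role": "user", "content": f"{question}\n\nCost data:\n{cost_rows}"},
    ],
)
print(response.choices[0].message.content)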
Step 6: Send Questions to Azure OpenAI
Action: Passo result to OpenAI — sends the first question to OpenAI for generating detailed insights.
Action: Get Description from OpenAI — sends the second question to OpenAI for a brief summary in Italian.
Step 8: Process OpenAI Responses
Action: Parse_JSON and Parse_JSON_Second_Question — parses the JSON response from OpenAI for both questions and retrieves the content of the generated insights.
Action: For_each_Description — iterates through OpenAI's responses and assigns the description to a variable DescriptionOutput.
Step 9: Compose and send E-Mail
Action: Compose_Email — composes an HTML email including key insights from OpenAI and links to download the detailed reports. Example email content:
Azure automated cost control system:
- Increase of €200 in Virtual Machines.
- Reduction of €50 in Storage.
Download details:
- Current month: [Download Report]
- Previous month: [Download Report]
Action: Send_an_email_(V2) — sends the composed email.

Components
Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and generating communication to the customers.
Azure Logic Apps is a cloud platform where you can create and run automated workflows with little to no code.
Azure Logic Apps Managed Identities allow to authenticate to any resource that supports Microsoft Entra authentication, including your own applications.

Potential use cases
Azure Cost Management AI Enhanced is an example of a solution designed for those who need to programme the generation of reports related to FinOps topics, with the possibility to customise the output and send the results via e-mail or perform a customised upload.

Azure AI Anomalies Detection
Architecture
This LogApp solution leverages Azure Monitor's native machine learning capabilities to retrieve anomalous data within application logs. These will then be analysed by OpenAI.

Dataflow
Initial Trigger
Type: Recurrence Trigger
Frequency: Monthly
Time Zone: W. Europe Standard Time
Triggers the Logic App at the beginning of every month.
Step 1: Initialize API Key
Action: Initialize Variable
Variable Name: Api-Key
Step 2: Fetch Update Status
Action: HTTP Request
URI: https://management.azure.com/providers/Microsoft.ResourceGraph/resources
Query: Retrieves resources related to patch assessments using patchassessmentresources.
Step 3: Parse Update Status
Action: Parse JSON
Content: Response body from the HTTP request.
Schema: Extracts details such as VM Name, Patch Name, Patch Properties, etc.
Step 4: Process Updates
For Each: @body('Parse_JSON')?['data'] — iterates through each item in the parsed update data.
Condition: If Patch Name is not null and contains "KB":
Action: Format Item — parses individual update items for VM Name, Patch Name, and additional properties.
Action: Send to Azure OpenAI — sends structured prompts to the GPT-4 model. Headers: Content-Type: application/json, api-key: @variables('Api-Key'). Body: Prompts Azure OpenAI to generate a report for each virtual machine and patch, formatted in Italian.
Action: Parse OpenAI Response — extracts and formats the response generated by Azure OpenAI.
Action: Append to Summary and CSV — adds the OpenAI-generated response to the Updated Summary array and appends patch details to the CSV array.
Step 5: Finalize Report
Action: Create Reports (I, II, III) — formats and cleans the Updated Summary variable to remove unwanted characters.
Action: Compose HTML Email Content — constructs an HTML email with the following: report summary generated using OpenAI, disclaimer about possible formatting anomalies, company logo embedded.
Step 6: Generate CSV Table
Action: Converts the CSV array into a CSV format for attachment.
Step 7: Send Notifications
Action: Send Email
Recipient: [email protected]
Subject: Security Update Assessment
Body: HTML content with report summary.
Attachment: Name: SmartUpdate_<timestamp>.csv; Content: CSV table of update details.

Components
Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and generating communication to the customers.
Azure Logic Apps is a cloud platform where you can create and run automated workflows with little to no code.
Azure Logic Apps Managed Identities allow to authenticate to any resource that supports Microsoft Entra authentication, including your own applications.
Azure Monitor is a comprehensive monitoring solution for collecting, analyzing, and responding to monitoring data from your cloud and on-premises environments.
Azure Kusto Query is a powerful tool to explore your data and discover patterns, identify anomalies and outliers, create statistical modeling, and more.

Potential use cases
Azure AI Anomalies Detection is an example of a solution that exploits the Machine Learning capabilities of Azure Monitor to diagnose anomalies within application logs that will then be analysed by Azure OpenAI. The solution can be customized based on Customer requirements.

Azure OpenAI Smart Doc Creator
Architecture
This Function App solution leverages the Azure OpenAI LLM Generative AI to create a docx file based on the Azure architectural information of a specific workload (Azure Metadata based). The function exploits the 'OpenAI multi-agent' concept.

Dataflow
Step 1: Logging and Configuration Setup
Initialize Logging: Advanced logging is set up to provide debug-level insights. Format includes timestamps, log levels, and messages.
Retrieve OpenAI Endpoint: QUESTION_ENDPOINT is retrieved from environment variables. Logging confirms the endpoint retrieval.
Step 2: Authentication
Managed Identity Authentication: The ManagedIdentityCredential class is used for secure Azure authentication. The SubscriptionClient is initialized to access Azure subscriptions. Retrieves a token for Azure Cognitive Services (https://cognitiveservices.azure.com/.default).
Step 3: Flattening Dictionaries
Function: flatten_dict — transforms nested dictionaries into a flat structure, handles nested lists and dictionaries recursively, and is used for preparing metadata for storage in CSV.
Step 4: Resource Tag Filtering
Functions:
get_resources_by_tag_in_subscription: Filters resources in a subscription based on a tag key and value.
get_resource_groups_by_tag_in_subscription: Identifies resource groups with matching tags.
Purpose: Retrieve Azure resources and resource groups tagged with specific key-value pairs.
Step 5: Resource Metadata Retrieval
Functions:
get_all_resources: Aggregates resources and resource groups across all accessible subscriptions.
get_resources_in_resource_group_in_subscription: Retrieves resources from specific resource groups.
get_latest_api_version: Determines the most recent API version for a given resource type. get_resource_metadata: Retrieves detailed metadata for individual resources using the latest API version. Purpose: Collect comprehensive resource details for further processing. Step 6: Documentation Generation Function: generate_infra_config Processes metadata through OpenAI to generate documentation. OpenAI generates detailed and human-readable descriptions for Azure resources. Multi-stage review process: Initial draft by OpenAI. Feedback loop with ArchitecturalReviewer and DocCreator for refinement. Final content is saved to architecture.txt. Step 7: Workload Overview Function: generate_workload_overview Reads from the generated CSV file to create a summary of the workload. Sends resource list to OpenAI for generating a high-level overview. Step 8: Conversion to DOCX Function: txt_to_docx Creates a Word document (Output.docx) with: Section 1: "Workload Overview" (generated summary). Section 2: "Workload Details" (detailed resource metadata). Adds structured headings and page breaks. Step 9: Temporary Files Cleanup Function: cleanup_files Deletes temporary files: architecture.txt resources_with_expanded_metadata.csv Output.docx Ensures no residual files remain after execution. Step 10: CSV Metadata Export Function: save_resources_with_expanded_metadata_to_csv Aggregates and flattens resource metadata. Saves details to resources_with_expanded_metadata.csv. Includes unique keys derived from all metadata fields. Step 11: Architectural Review Process Functions: ArchitecturalReviewer: Reviews and suggests improvements to documentation. DocCreator: Incorporates reviewer suggestions into the documentation. Purpose: Iterative refinement for high-quality documentation. Step 12: HTTP Trigger Function Function: smartdocs Accepts HTTP requests with tag_key and tag_value parameters. Orchestrates the entire workflow: Resource discovery. Metadata retrieval. Documentation generation. File cleanup. Responds with success or error messages. Components Azure OpenAI service is a platform provided by Microsoft that offers access to powerful language models developed by OpenAI, including GPT-4, GPT-4o, GPT-4o mini, and others. The service is used in this scenario for all the natural language understanding and generating communication to the customers. Azure Functions is a serverless solution that allows you to write less code, maintain less infrastructure, and save on costs. Instead of worrying about deploying and maintaining servers, the cloud infrastructure provides all the up-to-date resources needed to keep your applications running. Azure Function App Managed Identities allow to authenticate to any resource that supports Microsoft Entra authentication, including your own applications. Azure libraries for Python (SDK) are the open-source Azure libraries for Python designed to simplify the provisioning, management and utilisation of Azure resources from Python application code. Potential use cases The Azure OpenAI Smart Doc Creator Function App, like all proposed solutions, can be modified to suit your needs. It can be of practical help when there is a need to obtain all the configurations, in terms of metadata, of the resources and services that make up a workload. 
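As a rough illustration of the tag-based discovery the function performs in Step 4, the following Python sketch queries Azure Resource Graph for resources carrying a given tag. The client and model names follow the azure-mgmt-resourcegraph package; the subscription ID, tag key, and tag value are placeholders, and error handling is omitted.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

credential = DefaultAzureCredential()
graph_client = ResourceGraphClient(credential)

# Find every resource tagged workload=payments (placeholder tag) in one subscription
query = QueryRequest(
    subscriptions=["<subscription-id>"],
    query=(
        "Resources "
        "| where tags['workload'] =~ 'payments' "
        "| project name, type, resourceGroup, location"
    ),
)
result = graph_client.resources(query)
for row in result.data:
    print(row["name"], row["type"], row["resourceGroup"])

Each row returned here corresponds to a resource whose full metadata the function then fetches and flattens before handing it to Azure OpenAI for documentation.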
Contributors
Principal authors:
Tommaso Sacco | Cloud Solutions Architect
Simone Verza | Cloud Solution Architect
Extended contribution:
Saverio Lorenzini | Senior Cloud Solution Architect
Andrea De Gregorio | Technical Specialist
Gianluca De Rossi | Technical Specialist
Special thanks:
Carmelo Ferrara | Director CSA
Marco Crippa | Sr CSA Manager

Azure AI Foundry, GitHub Copilot, Fabric and more to Analyze usage stats from Utility Invoices
Overview With the introduction of Azure AI Foundry, integrating various AI services to streamline AI solution development and deployment of Agentic AI Workflow solutions like multi-modal, multi-model, dynamic & interactive Agents etc. has become more efficient. The platform offers a range of AI services, including Document Intelligence for extracting data from documents, natural language processing and robust machine learning capabilities, and more. Microsoft Fabric further enhances this ecosystem by providing robust data storage, analytics, and data science tools, enabling seamless data management and analysis. Additionally, Copilot and GitHub Copilot assist developers by offering AI-powered code suggestions and automating repetitive coding tasks, significantly boosting productivity and efficiency. Objectives In this use case, we will use monthly electricity bills from the utilities' website for a year and analyze them using Azure AI services within Azure AI Foundry. The electricity bills is simply an easy start but we could apply it to any other format really. Like say, W-2, I-9, 1099, ISO, EHR etc. By leveraging the Foundry's workflow capabilities, we will streamline the development stages step by step. Initially, we will use Document Intelligence to extract key data such as usage in kilowatts (KW), billed consumption, and other necessary information from each PDF file. This data will then be stored in Microsoft Fabric, where we will utilize its analytics and data science capabilities to process and analyze the information. We will also include a bit of processing steps to include Azure Functions to utilize GitHub Copilot in VS Code. Finally, we will create a Power BI dashboard in Fabric to visually display the analysis, providing insights into electricity usage trends and billing patterns over the year. Utility Invoice sample Building the solution Depicted in the picture are the key Azure and Copilot Services we will use to build the solution. Set up Azure AI Foundry Create a new project in Azure AI Foundry. Add Document Intelligence to your project. You can do this directly within the Foundry portal. Extract documents through Doc Intel Download the PDF files of the power bills and upload them to Azure Blob storage. I used Document Intelligence Studio to create a new project and Train custom models using the files from the Blob storage. Next, in your Azure AI Foundry project, add the Document Intelligence resource by providing the Endpoint URL and Keys. Data Extraction Use Azure Document Intelligence to extract required information from the PDF files. From the resource page in the Doc Intel service in the portal, copy the Endpoint URL and Keys. We will need these to connect the application to the Document Intelligence API. Next, let’s integrate doc intel with the project. In the Azure AI Foundry project, add the Document Intelligence resource by providing the Endpoint URL and Keys. Configure the settings as needed to start using doc intel for extracting data from the PDF documents. We can stay within the Azure AI Foundry portal for most of these steps, but for more advanced configurations, we might need to use the Document Intelligence Studio. GitHub Copilot in VS Code for Azure Functions For processing portions of the output from Doc Intel, what better way to create the Azure Function than in VS Code, especially with the help of GitHub Copilot. Let’s start by installing the Azure Functions extension in VS Code, then create a new function project. 
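As a sketch of the core extraction logic such a function ends up containing (and the kind of code Copilot helps draft), the snippet below calls Document Intelligence with the custom model trained earlier and pulls out a few fields. The endpoint, key, model ID, and field names are placeholders that depend on your own resource and custom model.

import os
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Endpoint, key, and model ID come from the Document Intelligence resource and the
# custom model trained in Document Intelligence Studio (names here are placeholders).
client = DocumentAnalysisClient(
    endpoint=os.environ["DOCINTEL_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["DOCINTEL_KEY"]),
)

with open("electricity_bill_march.pdf", "rb") as f:
    poller = client.begin_analyze_document("utility-bill-model", document=f)
result = poller.result()

for doc in result.documents:
    usage_kwh = doc.fields.get("UsageKWh")        # field names depend on the custom model
    billed_amount = doc.fields.get("BilledAmount")
    print(usage_kwh.value if usage_kwh else None,
          billed_amount.value if billed_amount else None)

The JSON-like field values returned by this call are what the rest of the function processes and later writes to Fabric.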
GitHub Copilot can assist in writing the code to process the JSON received. Additionally, we can get Copilot to help generate unit tests to ensure the function works correctly. We could use Copilot to explain the code and the tests it generates. Finally, we seamlessly integrate the generated code and unit tests into the Functions app code file, all within VS Code. Notice how we can prompt GitHub Copilot from step 1 of Creating the Workspace to inserting the generated code into the Python file for the Azure Function to testing it and all the way to deploying the Function. Store and Analyze information in Fabric There are many options for storing and analyzing JSON data in Fabric. Lakehouse, Data Warehouse, SQL Database, Power BI Datamart. As our dataset is small, let’s choose either SQL DB or PBI Datamart. PBI Datamart is great for smaller datasets and direct integration with PBI for dashboarding while SQL DB is good for moderate data volumes and supports transactional & analytical workloads. To insert the JSON values derived in the Azure Functions App either called from Logic Apps or directly from the AI Foundry through the API calls into Fabric, let’s explore two approaches. Using REST API and the other Using Functions with Azure SQL DB. Using REST API – Fabric provides APIs that we can call directly from our Function to insert records using HTTP client in the Function’s Python code to send POST requests to the Fabric API endpoints with our JSON data. Using Functions with Azure SQL DB – we can connect it directly from our Function using the SQL client in the Function to execute SQL INSERT statements to add records to the database. While we are at it, we could even get GitHub Copilot to write up the Unit Tests. Here’s a sample: Visualization in Fabric Power BI Let's start with creating visualizations in Fabric using the web version of Power BI for our report, UtilitiesBillAnalysisDashboard. You could use the PBI Desktop version too. Open the PBI Service and navigate to the workspace where you want to create your report. Click on "New" and select "Dataset" to add a new data source. Choose "SQL Server" from the list of data sources and enter "UtilityBillsServer" as the server name and "UtilityBillsDB" as the DB name to establish the connection. Once connected, navigate to the Navigator pane where we can select the table "tblElectricity" and the columns. I’ve shown these in the pictures below. For a clustered column (or bar) chart, let us choose the columns that contain our categorical data (e.g., month, year) and numerical data (e.g., kWh usage, billed amounts). After loading the data into PBI, drag the desired fields into the Values and Axis areas of the clustered column chart visualization. Customize the chart by adjusting the formatting options to enhance readability and insights. We now visualize our data in PBI within Fabric. We may need to do custom sort of the Month column. Let’s do this in the Data view. Select the table and create a new column with the following formula. This will create a custom sort column that we will use as ‘Sum of MonthNumber’ in ascending order. Other visualizations possibilities: Other Possibilities Agents with Custom Copilot Studio Next, you could leverage a custom Copilot to provide personalized energy usage recommendations based on historical data. Start by integrating the Copilot with your existing data pipeline in Azure AI Foundry. 
The Copilot can analyze electricity consumption patterns stored in your Fabric SQL DB and use ML models to identify optimization opportunities. For instance, it could suggest energy-efficient appliances, optimal usage times, or tips to reduce consumption. These recommendations can be visualized in PBI where users can track progress over time. To implement this, you would need to set up an API endpoint for the Copilot to access the data, train the ML models using Python in VS Code (let GitHub Copilot help you here… you will love it), and deploy the models to Azure using CLI / PowerShell / Bicep / Terraform / ARM or the Azure portal. Finally, connect the Copilot to PBI to visualize the personalized recommendations.

Additionally, you could explore using Azure AI Agents for automated anomaly detection and alerts. This agent could monitor electricity bill data for unusual patterns and send notifications when anomalies are detected. Yet another idea would be to implement predictive maintenance for electrical systems, where an AI agent uses predictive analytics to forecast maintenance needs based on the data collected, helping to reduce downtime and improve system reliability.

Summary
We have built a solution that leveraged the seamless integration of pioneering AI technologies with Microsoft's end-to-end platform. By leveraging Azure AI Foundry, we have developed a solution that uses Document Intelligence to scan electricity bills, stores the data in Fabric SQL DB, and processes it with Python in Azure Functions in VS Code, assisted by GitHub Copilot. The resulting insights are visualized in Power BI within Fabric. Additionally, we explored potential enhancements using Azure AI Agents and Custom Copilots, showcasing the ease of implementation and the transformative possibilities. Finally, speaking of possibilities: with Gen AI, the only limit is our imagination!

Additional resources
Explore Azure AI Foundry
Start using the Azure AI Foundry SDK
Review the Azure AI Foundry documentation and Call Azure Logic Apps as functions using Azure OpenAI Assistants
Take the Azure AI Learn courses
Learn more about Azure AI Services
Document Intelligence: Azure AI Doc Intel
GitHub Copilot examples: What can GitHub Copilot do – Examples
Explore Microsoft Fabric: Microsoft Fabric Documentation
See what you can connect with Azure Logic Apps: Azure Logic Apps Connectors

About the Author
Pradyumna (Prad) Harish is a Technology leader in the GSI Partner Organization at Microsoft. He has 26 years of experience in Product Engineering, Partner Development, Presales, and Delivery. Responsible for revenue growth through Cloud, AI, Cognitive Services, ML, Data & Analytics, Integration, DevOps, Open Source Software, Enterprise Architecture, IoT, Digital strategies and other innovative areas for business generation and transformation; achieving revenue targets via extensive experience in managing global functions, global accounts, products, and solution architects across over 26 countries.

Getting started with the NetApp Connector for Microsoft M365 Copilot and Azure NetApp Files
Imagine a world where your on-premises and enterprise cloud files seamlessly integrate with Microsoft Copilot, unleashing AI on your Azure NetApp Files enterprise data and making your workday smoother and more efficient. Welcome to the future with the NetApp Connector for Microsoft Copilot!

Building scalable and persistent AI applications with LangChain, Instaclustr, and Azure NetApp Files
Discover the powerful combination of LangChain and LangGraph for building stateful AI applications and unlock the benefits of using a managed-database service like NetApp® Instaclustr® backed by Azure NetApp Files for seamless data persistence and scalability.

Harnessing Generative AI with Weaviate on Azure Kubernetes Service and Azure NetApp Files
Dive into the world of vector databases and explore the critical benchmarks and trade-offs shaping generative AI with our hands-on guide to Weaviate on Azure Kubernetes Service and Azure NetApp Files.