# Enhancing AI Integrations with MCP and Azure API Management
As AI agents and assistants become increasingly central to modern applications and experiences, the need for seamless, secure integration with external tools and data sources is more critical than ever. The Model Context Protocol (MCP) is emerging as a key open standard enabling these integrations, allowing AI models to interact with APIs, databases, and other services in a consistent, scalable way.

## Understanding MCP

MCP uses a client-host-server architecture built on JSON-RPC 2.0 for messaging. Communication between clients and servers occurs over defined transport layers, primarily:

- **stdio**: Standard input/output, suitable for efficient communication when the client and server run on the same machine.
- **HTTP with Server-Sent Events (SSE)**: Uses HTTP POST for client-to-server messages and SSE for server-to-client messages, enabling communication over networks, including to remote servers.

## Why MCP Matters

While Large Language Models (LLMs) are powerful, their utility is often limited by their inability to access real-time or proprietary data. Traditionally, integrating new data sources or tools required custom connectors and significant engineering effort. MCP addresses this by providing a unified protocol for connecting agents to both local and remote data sources, unifying and streamlining integrations.

## Leveraging Azure API Management for remote MCP servers

Azure API Management is a fully managed platform for publishing, securing, and monitoring APIs. By treating MCP server endpoints like any other backend API, organizations can apply familiar governance, security, and operational controls. As MCP adoption grows, the need for robust management of these backend services will intensify. API Management retains a vital role in governing these underlying assets by:

- Applying security controls to protect backend resources.
- Ensuring reliability.
- Enabling effective monitoring and troubleshooting by tracing requests and context flow.
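Because every MCP message, over stdio and SSE alike, is a JSON-RPC 2.0 envelope, the wire format is easy to picture. A minimal sketch (the helper function is illustrative, not part of any MCP SDK):

```python
import json

def jsonrpc_request(request_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope, as used for MCP messaging."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Ask a server to advertise its tools; "tools/list" is a standard MCP method.
wire = jsonrpc_request(1, "tools/list")
print(wire)
```

Over stdio this string would be written to the server process's standard input; over the SSE transport it would travel in an HTTP POST body.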
In this blog post, I will walk you through a practical example: hosting an MCP server behind Azure API Management, configuring credential management, and connecting with GitHub Copilot.

## A Practical Example: Automating Issue Triage

To follow along with this scenario, check out our Model Context Protocol (MCP) lab, available at AI-Gateway/labs/model-context-protocol.

Let's move from theory to practice by exploring how MCP, Azure API Management (APIM), and GitHub Copilot can transform a common engineering workflow. Imagine you're an engineering manager aiming to streamline your team's issue triage process, reducing manual steps and improving efficiency.

Example workflow:

1. Engineers log bugs and feature requests as GitHub issues.
2. Following a manual review, a corresponding incident ticket is generated in ServiceNow.

This manual handoff is inefficient and error-prone. Let's see how we can automate this process by securely connecting GitHub and ServiceNow, enabling an AI agent (GitHub Copilot in VS Code) to handle triage tasks on your behalf. A significant challenge in this integration is securely managing delegated access to backend APIs, like GitHub and ServiceNow, from your MCP server. Azure API Management's credential manager solves this by centralizing secure credential storage and facilitating the secure creation of connections to your third-party backend APIs.

## Build and deploy your MCP server(s)

We'll start by building two MCP servers:

- **GitHub Issues MCP Server**: Provides tools to authenticate on GitHub (authorize_github), retrieve user information (get_user), and list issues for a specified repository (list_issues).
- **ServiceNow Incidents MCP Server**: Provides tools to authenticate with ServiceNow (authorize_servicenow), list existing incidents (list_incidents), and create new incidents (create_incident).

We are using Azure API Management to secure and protect both MCP servers, which are built on Azure Container Apps.
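As an SDK-free sketch of what the GitHub Issues server exposes, you can think of each tool as a name plus a description that the server advertises during discovery. The descriptions below are paraphrased, not the lab's exact text:

```python
# Tool surface of the GitHub Issues MCP Server, modeled as plain data. The
# real lab server registers these as MCP tool handlers; this sketch only
# models what a discovery (tools/list) response would advertise.
GITHUB_TOOLS = {
    "authorize_github": "Start the OAuth flow to authenticate on GitHub",
    "get_user": "Retrieve the signed-in user's information",
    "list_issues": "List issues for a specified repository",
}

def discover_tools(tools):
    """Shape tools as the name/description entries a client would receive."""
    return [{"name": name, "description": desc} for name, desc in tools.items()]

for entry in discover_tools(GITHUB_TOOLS):
    print(entry["name"])
```

The ServiceNow server follows the same pattern with its authorize_servicenow, list_incidents, and create_incident tools.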
Azure API Management's credential manager centralizes secure credential storage and facilitates the secure creation of connections to your backend third-party APIs.

**Client Auth**: You can leverage API Management subscriptions to generate subscription keys, enabling client access to these APIs. Optionally, to further secure the /sse and /messages endpoints, we apply the validate-jwt policy to ensure that only clients presenting a valid JWT can access these endpoints, preventing unauthorized access (see AI-Gateway/labs/model-context-protocol/src/github/apim-api/auth-client-policy.xml).

After registering OAuth applications in GitHub and ServiceNow, we update APIM's credential manager with the respective client IDs and client secrets. This enables APIM to perform OAuth flows on behalf of users, securely storing and managing tokens for backend calls to GitHub and ServiceNow.

## Connecting your MCP Server in VS Code

With your MCP servers deployed and secured behind Azure API Management, the next step is to connect them to your development workflow. Visual Studio Code now supports MCP, enabling GitHub Copilot's agent mode to connect to any MCP-compatible server and extend its capabilities.

1. Open the Command Palette and type MCP: Add Server...
2. Select the server type HTTP (HTTP or Server-Sent Events).
3. Paste in the server URL.
4. Provide a server ID.

This process automatically updates your settings.json with the MCP server configuration. Once added, GitHub Copilot can connect to your MCP servers and access the defined tools, enabling agentic workflows such as issue triage and automation. You can repeat these steps to add the ServiceNow MCP server.

## Understanding Authentication and Authorization with Credential Manager

When a user initiates an authentication workflow (e.g., via the authorize_github tool), GitHub Copilot triggers the MCP server to generate an authorization request and a unique login URL.
The user is redirected to a consent page, where the registered OAuth application requests permission to access their GitHub account. Azure API Management acts as a secure intermediary, managing the OAuth flow and token storage.

Flow of authorize_github:

1. **Connection initiation**: The GitHub Copilot agent initiates an SSE connection to API Management via the MCP client (VS Code).
2. **Tool discovery**: APIM forwards the request to the GitHub MCP server, which responds with the available tools.
3. **Authorization request**: GitHub Copilot selects and executes the authorize_github tool. The MCP server generates an authorization_id for the chat session.
4. **User consent**: If this is the first login, APIM requests a login redirect URL from the MCP server. The MCP server sends the login URL to the client, prompting the user to authenticate with GitHub. Upon successful login, GitHub redirects the client with an authorization code.
5. **Token exchange and storage**: The MCP client sends the authorization code to API Management. APIM exchanges the code for access and refresh tokens from GitHub, securely stores the tokens, and creates an Access Control List (ACL) for the service principal.
6. **Confirmation**: APIM confirms successful authentication to the MCP client, and the user can now perform authenticated actions, such as accessing private repositories.

Check out the Python logic for how to implement it: AI-Gateway/labs/model-context-protocol/src/github/mcp-server/mcp-server.py

## Understanding Tool Calling with underlying APIs in API Management

Using the list_issues tool:

1. **Connection confirmed**: APIM confirms the connection to the MCP client.
2. **Issue retrieval**: The MCP client requests issues from the MCP server. The MCP server attaches the authorization_id as a header and forwards the request to APIM. The list of issues is returned to the agent.

You can use the same process to add the ServiceNow MCP server.
With both servers connected, the GitHub Copilot agent can extract issues from a private GitHub repository and create new incidents in ServiceNow, automating your triage workflow. You can define additional tools, such as suggest_assignee, assign_engineer, update_incident_status, notify_engineer, and request_feedback, to build a truly closed-loop, automated engineering workflow, from issue creation to resolution and feedback. Take a look at this brief demo showcasing the entire end-to-end process.

## Summary

Azure API Management (APIM) is an essential tool for enterprise customers looking to integrate AI models with external tools using the Model Context Protocol (MCP). In this blog, we demonstrated how Azure API Management's credential manager enables the secure creation of connections to your backend APIs. By integrating MCP servers with VS Code and leveraging APIM for OAuth flows and token management, you can enable secure, agentic automation across your engineering tools. This approach not only streamlines workflows like issue triage and incident creation but also ensures enterprise-grade security and governance for all APIs.

## Additional Resources

Using Credential Manager will help with managing OAuth 2.0 tokens to backend services.

- Client Auth for remote MCP servers, AZD up: https://aka.ms/mcp-remote-apim-auth
- AI lab Client Auth: AI-Gateway/labs/mcp-client-authorization/mcp-client-authorization.ipynb
- Blog post: https://aka.ms/remote-mcp-apim-auth-blog

If you have any questions or would like to learn more about how MCP and Azure API Management can benefit your organization, feel free to reach out. We are always here to help and provide further insights. Connect with us on LinkedIn (Julia Kasper & Julia Muiruri) and follow for more updates, insights, and discussions on AI integrations and API management.

# Logic App Standard - When High Memory / CPU usage strikes and what to do
## Introduction

Monitoring your applications is essential: it ensures that you know what's happening and are not caught by surprise when something goes wrong. One possible event is the performance of your application starting to decrease, with processing becoming slower than usual. This can happen for various reasons, and in this blog post we will discuss high Memory and CPU usage and why it affects your Logic App. We will also look at some causes that have been identified as the root cause for some customers.

## How high Memory and CPU usage affect processing

When instructions and information are loaded into memory, they occupy space that cannot be used for other sets of instructions. The more memory is occupied, the more the operating system has to "think" to find the correct set of instructions and retrieve or write the information. The longer the OS needs to find or write your instructions, the less time it spends actually doing the processing. The same applies to the CPU: if the CPU load is high, everything slows down, because the available workers cannot process multiple items at the same time.

This translates into overall slowness in Logic App processing. When CPU or memory usage reaches a certain threshold, we start to see run durations going up and internal retries increasing as well, because the runtime workers are busy and tasks have timeout limits.

For example, consider a simple run with a Read Blob built-in connector action, where the blob is very large (say, 400 MB). The flow goes: Request trigger -> Read Blob -> Send email. The trigger has a very short duration and doesn't carry much overhead, because we're not loading much data in it. The Read Blob action, though, will read the payload into memory (because we're using a built-in connector, and these load all the information into memory).
See: Built-in connector overview - Azure Logic Apps | Microsoft Learn

So, not counting background processes, Kudu, and maintenance jobs, we've loaded 400 MB into memory. On a WS1 plan, we have 3.5 GB available. Even a blank Logic App shows some memory in use, although the amount may vary. If we assume the base runtime and associated processes take about 500 MB, that leaves roughly 3 GB available. If we load four such files at the same time, we will be using about 2.1 GB (files plus base usage), already around 60% of the memory. And this is just one workflow and four runs. Of course, the memory is released after each run completes, but on a broader scale, with multiple runs and multiple actions executing at the same time, you can see how easy it is to reach the thresholds.

When memory usage goes over roughly 70%, background tasks may behave in unexpected ways, so it's essential to have a clear idea of how your Logic App is working and what data you're loading into it. The same goes for the CPU: the more you load into it, the slower it gets. You may have low memory usage, but if you're doing highly complex tasks such as XML transformations or other built-in data transforms, your CPU will be heavily used. And the bigger the file and the more complex the transformation, the more CPU is consumed.

## How to check memory/CPU

Correctly monitoring your resource usage is vital and can prevent serious impact. To help your Standard logic app workflows run with high availability and performance, the Logic Apps product group has created the Health Check feature. This feature is still in preview, but it is already a great aid in monitoring.
You can read more about it in the following article, written by PG members Rohitha Hewawasam and Kent Weare: Monitoring Azure Logic Apps (Standard) Health using Metrics, and in the official documentation for this feature: Monitor Standard workflows with Health Check - Azure Logic Apps | Microsoft Learn.

The Metrics can also help provide a better view of current usage. Logic App metrics don't drill down into CPU usage, because those metrics are not available at the app level, but rather at the App Service Plan level. You will be able to see the working memory set and workflow-related metrics (example metric: Private Bytes (AVG) in Logic App metrics).

On the App Service Plan overview, you will see charts with these metrics. This is an entry point for understanding what is currently going on with your ASP and its current health status (example: ASP dashboard). In the Metrics tab, you can create your own charts with much greater granularity and save them as dashboards. You can also create alerts on these metrics, which greatly increases your ability to effectively monitor and act on abnormal situations, such as high memory usage over prolonged periods of time (example: Memory Percentage (Max) in ASP metrics).

There are currently multiple solutions for analyzing your Logic App behavior and metrics, such as dashboards and Azure Monitor Logs. I highly recommend reading these two articles from the product group that discuss and exemplify these topics:

- Logic Apps Standard Monitoring Dashboards | Microsoft Community Hub
- Monitoring Azure Logic Apps (Standard) with Azure Monitor Logs

## How to mitigate - a few possibilities

### Platform settings on 32 bits

If your Logic App was created a long time ago, it may be running on an old setting. Early Logic Apps were created on a 32-bit platform, which severely limits memory scalability, as this architecture only allows a maximum of 3 GB of usage.
This comes from operating system limitations and the memory allocation architecture. Later, the standard became creating Logic Apps on 64 bits, which allows the Logic App to scale and fully use the maximum memory available in all ASP tiers (up to 14 GB in WS3). This can be checked and updated in the Configuration tab, under Platform settings.

### Orphaned runs

It is possible for some runs not to finish, for various reasons. Whether they are long-running or have failed due to unexpected exceptions, runs that linger in the system cause an increase in memory usage, because their information is not unloaded from memory. When runs become orphaned, they may go unnoticed while still eating up resources.

The easiest way to find these runs is to check each workflow's run history under the "Running" status and see which runs are still running well past their expected completion. You can filter the run history by status and use this to find all runs still in "Running". In my example, I had multiple runs that had started hours before but were not yet finished. Although this works, it requires checking each workflow manually. You can also use Log Analytics and execute a query to return all runs that have not yet finished. You need to activate the Diagnostic Settings, as described in this blog post: Monitoring Azure Logic Apps (Standard) with Azure Monitor Logs.

To make your troubleshooting easier, I've created a query that does this for you. It checks only the runs and returns those that do not have a matching Completed status. The OperationName field records the Start, Dispatch, and Completed events. By eliminating the Dispatched events, we're left with Start and Completed. This query therefore returns all the run IDs that have a Start but no matching Completed event, by grouping and counting the run IDs:
```kusto
LogicAppWorkflowRuntime
| where OperationName contains "WorkflowRun" and OperationName != 'WorkflowRunDispatched'
| project RunId, WorkflowName, TimeGenerated, OperationName, Status, StartTime, EndTime
| summarize Runs = count() by RunId, WorkflowName, StartTime
| where Runs == 1
| project RunId, WorkflowName, StartTime
```

### Large payloads

As previously discussed, large payloads can create a big overhead and greatly increase memory usage. This applies not only to the built-in connectors but also to the managed connectors, as the information still needs to be processed. Although with managed connectors the data is not loaded into the ASP's memory, there is still a lot of data flowing through and being processed in CPU time. A Logic App is capable of processing a large amount of data, but when you combine large payloads, a very large number of concurrent runs, a large number of actions, and many incoming/outgoing requests, you get a mixture that, if left unattended and allowed to keep scaling up, will cause performance issues over time.

### Usage of built-in connectors

The built-in (or In-App) connectors run natively in the Azure Logic Apps runtime. Because of this, their performance, capabilities, and pricing are better in most cases. But because they run inside the Logic App runtime, all the data is loaded in memory. This requires good architectural planning and forecasting for high usage levels and heavy payloads. As shown above, using built-in connectors to handle very large payloads can cause unexpected errors such as Out of Memory exceptions: the connector will try to load the payload into memory, but if memory runs out while it loads, it can crash the worker and return an Out of Memory exception. This will be visible in the runtime logs, and it may also lead to the run becoming orphaned, stuck in a state that is not recoverable.
Internally, the runtime will attempt to gracefully retry these failed tasks, and it will retry multiple times. But there is always the possibility that the state is not recoverable and the worker crashes. This makes it necessary to closely monitor and plan for high-usage scenarios, in order to properly scale your App Service Plans up and out.

## Learnings

Monitoring can also be achieved through the Log Stream, which requires you to configure a Log Analytics connection but can provide a great deal of insight into what the runtime is doing. It can give you Verbose level or just Warning/Error levels. It provides a lot of information and can be a bit tricky to read, but the level of detail can be a huge help in troubleshooting, both on your side and on the Support side. For this, navigate to your Log Stream tab, enable it, change to "Filesystem Logs" and enjoy the show. If an Out of Memory exception is caught, it will show up in red letters (as other exceptions do) and look something like this:

```
Job dispatching error: operationName='JobDispatchingWorker.Run', jobPartition='', jobId='', message='The job dispatching worker finished with unexpected exception.', exception='System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
   at System.Threading.Thread.StartCore()
   at System.Threading.Thread.Start(Boolean captureContext)
   at System.Threading.Thread.Start()
```

No production Logic App should be left without monitoring and alerting. Critical system or not, you should always plan not only for disaster scenarios but also for higher-than-usual volumes, because nothing is static, and there is always the possibility that a system with low usage today will be scaled and used in ways it was not originally intended for. Implementing monitoring on resource metrics is very valuable and can detect issues before they become overwhelming and cause a show-stopper scenario.
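Outside of Log Analytics, the orphaned-run query's start-without-completed logic can be replayed over any exported event list. This small Python stand-in mirrors it; the sample operation names follow the Start/Dispatch/Completed events described earlier and are illustrative, not the table's literal values:

```python
from collections import Counter

# Sample runtime events as (RunId, OperationName) pairs.
events = [
    ("run-1", "WorkflowRunStarted"),
    ("run-1", "WorkflowRunDispatched"),
    ("run-1", "WorkflowRunCompleted"),
    ("run-2", "WorkflowRunStarted"),
    ("run-2", "WorkflowRunDispatched"),  # no Completed event: still running
]

def find_unfinished(events):
    # Mirror the query: keep WorkflowRun* events, drop Dispatched, and any
    # RunId seen exactly once has a Start but no matching Completed.
    counts = Counter(
        run_id for run_id, op in events
        if "WorkflowRun" in op and op != "WorkflowRunDispatched")
    return [run_id for run_id, n in counts.items() if n == 1]

print(find_unfinished(events))  # → ['run-2']
```

The same count-equals-one trick is what the `summarize` / `where Runs == 1` pair does in the KQL version.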
You can use the out-of-the-box metrics from the Logic App, or the metrics on the ASP. The latter cover a wider range of signals, as they are not as specific as the Logic App's own. You can also create custom alerts from the metrics, increasing your coverage of distress signals from Logic App processing. Leaving your Logic App without proper monitoring will likely catch you, your system administrators, and your business by surprise when processing falls outside the standard parameters and chaos starts to arise.

There is one key insight that should be applied whenever possible: expect the best, prepare for the worst. Always plan ahead, monitor the current status, and think proactively, not just reactively.

Disclaimer: the base memory and CPU values are specific to your app, and they can vary based on the number of apps in the App Service Plan, the number of instances you have set as Always Running, the number of workflows in the app, how complex those workflows are, and what internal jobs need to be provisioned.

# How to use azure logic app to update AAD user's password automatically
## Scenario

Azure Logic Apps is an extraordinary cloud automation service. To update Azure Active Directory users' passwords in batches and automatically, a logic app (Consumption or Standard) can invoke the Microsoft Graph API, but this requires specific permissions.

## References

- passwordAuthenticationMethod: resetPassword - Microsoft Graph beta | Microsoft Learn
- Sign in with resource owner password credentials grant - Microsoft Entra | Microsoft Learn
- List passwordMethods - Microsoft Graph beta | Microsoft Learn
- Update user - Microsoft Graph v1.0 | Microsoft Learn

## Services Used

- Azure Logic App (Consumption or Standard)
- Azure Active Directory (AAD)

## Solution 1

1. Create an AAD application registration.
2. Add the permission UserAuthenticationMethod.ReadWrite.All. More details: https://learn.microsoft.com/en-us/graph/api/authenticationmethod-resetpassword?view=graph-rest-beta&tabs=http#permissions
3. Grant admin consent.
4. Set up the logic app designer. Here we selected 'When a http request is received' as the trigger.

**Action 1: HTTP – Get token.** This action gets a token, which will be used in the following actions.

```
Method: POST
URL: https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded
Body:
client_id={MyClientID}
&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
&client_secret={MyClientSecret}
&grant_type=password
&username={MyUsername}%40{myTenant}.com
&password={MyPassword}
```

Reference: https://learn.microsoft.com/en-us/azure/active-directory/develop/v2-oauth-ropc

**Action 2: HTTP – Get Pwd ID.** This action gets the password method ID.
```
Method: GET
URL: https://graph.microsoft.com/beta/me/authentication/passwordMethods
Content-Type: application/json
```

Reference: https://learn.microsoft.com/en-us/graph/api/authentication-list-passwordmethods?view=graph-rest-beta&tabs=http

**Action 3: HTTP – Update Pwd.** This action updates the user's password.

```
Method: POST
URL: https://graph.microsoft.com/beta/users/{userObjectId | userPrincipalName}/authentication/passwordMethods/{passwordMethodId}/resetPassword
Content-Type: application/json
Body:
{
  "newPassword": "{myNewPassword}"
}
```

Reference: https://learn.microsoft.com/en-us/graph/api/authenticationmethod-resetpassword?view=graph-rest-beta&tabs=http#http-request

In the URI, we can use this expression to get the value of passwordMethodId:

```
body('HTTP_2_-_Get_Pwd_ID')['value'][0]['id']
```

## Solution 2

1. Grant these 4 permissions to the application registration and grant admin consent:
   - User.ManageIdentities.All
   - User.EnableDisableAccount.All
   - User.ReadWrite.All
   - Directory.ReadWrite.All

   Reference: https://learn.microsoft.com/en-us/graph/api/user-update?view=graph-rest-1.0&tabs=http#permissions
2. Add the 'User Administrator' role assignment to the application registration. In delegated access, the calling app must be assigned the Directory.AccessAsUser.All delegated permission on behalf of the signed-in user. In application-only access, the calling app must be assigned the User.ReadWrite.All application permission and at least the User Administrator Azure AD role. Reference: https://learn.microsoft.com/en-us/graph/api/user-update?view=graph-rest-1.0&tabs=http
3. Set up the logic app designer. Here we also selected 'When a http request is received' as the trigger.

**Action 1: HTTP – Get token.** This action gets a token, which will be used in the following actions.
```
Method: POST
URL: https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded
Body:
client_id={MyClientID}
&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
&client_secret={MyClientSecret}
&grant_type=client_credentials
```

**Action 2: HTTP – Update Pwd.** This action updates the user's password.

```
Method: PATCH
URL: https://graph.microsoft.com/v1.0/users/{userObjectId}
Content-Type: application/json
Body:
{
  "passwordProfile": {
    "forceChangePasswordNextSignIn": false,
    "password": "{myNewPassword}"
  }
}
```

Reference: https://learn.microsoft.com/en-us/graph/api/user-update?view=graph-rest-1.0&tabs=http#example-3-update-the-passwordprofile-of-a-user-to-reset-their-password

## Result

We can check user password update records in the AAD audit logs in the Azure portal: AAD page -> Users -> Audit logs.

# How to deploy n8n on Azure App Service and leverage the benefits provided by Azure
Lately, n8n has been gaining serious traction in the automation world, and it's easy to see why. With its open-source core, visual workflow builder, and endless integration capabilities, it has become a favorite for developers and tech teams looking to automate processes without being locked into a single vendor. Given all the buzz, I thought it would be the perfect time to share a practical way to run n8n on Microsoft Azure using App Service. Why? Because Azure offers a solid, scalable, and secure platform that makes deployment easy, while still giving you full control over your container and configurations. Whether you're building a quick demo or setting up a production-ready instance, Azure App Service brings a lot of advantages to the table, like simplified scaling, integrated monitoring, built-in security features, and seamless CI/CD support.

In this post, I'll walk you through how to get your own n8n instance up and running on Azure, from creating the resource group to setting up environment variables and deploying the container. If you're into low-code automation and cloud-native solutions, this is a great way to combine both worlds.

The first step is to create our resource group (RG); in my case, I will name it "n8n-rg".

Now we proceed to create the App Service. At this point, it's important to select the appropriate configuration depending on your needs, for example whether or not you want to include a database. If you choose to include one, Azure will handle the connections for you, and you can select from various types. In my case, I will proceed without a database.

Next, configure the instance details. First, select the instance name, the 'Publish' option, and the 'Operating System'. In this case, it is important to choose 'Publish: Container', set the operating system to Linux, and, most importantly, select the region closest to you or your clients.

Service plan configuration: here, you should select the plan based on your specific needs.
Keep in mind that we are using a PaaS offering, which means underlying compute resources like CPU and RAM are still being utilized. Depending on the expected workload, you can choose the most appropriate plan. Secondly, and very importantly, consider the features offered by each tier, such as redundancy, backup, autoscaling, and custom domains. In my case, I will use the Basic B1 plan.

In the Database section, we do not select any option. Remember that this will depend on your specific requirements.

In the Container section, under 'Image Source', select 'Other container registries'. For production environments, I recommend using Azure Container Registry (ACR) and pulling the n8n image from there. Now we will configure the Docker Hub options. This step is related to the previous one, as the available options vary depending on the image source. In our case, we will use the public n8n image from Docker Hub, so we select 'Public' and proceed to fill in the required fields: the first being the server, and the second the image name. This step is very important: use the exact same values to avoid issues.

In the Networking section, we will select the values as shown in the image. This configuration will depend on your specific use case, particularly whether to enable Virtual Network (VNet) integration or not. VNet integration is typically used when the App Service needs to securely communicate with private resources (such as databases, APIs, or services) that reside within an Azure Virtual Network. Since this is a demo environment, we will leave the default settings without enabling VNet integration.

In the 'Monitoring and Security' section, it is essential to enable these features to ensure traceability, observability, and additional security layers. This is considered a minimum requirement in production environments. At the very least, make sure to enable Application Insights by selecting 'Yes'.
Finally, click on 'Create' and wait for the deployment process to complete.

Now we will stop our Web App, as we need to make some preliminary modifications. To do this, go to the main overview page of the Web App and click on 'Stop'. On the same overview page, navigate through the left-hand panel to the 'Settings' section, click on it, and select 'Environment Variables'.

Environment variables are key-value pairs used to configure the behavior of your application without changing the source code. In the case of n8n, they are essential for defining authentication, webhook behavior, port configuration, timezone settings, and more. Environment variables in Azure Web Apps work the same way as they do outside of Azure. In this case, we will add the variables required for n8n to operate properly. Note: the variable APP_SERVICE_STORAGE should only be modified by setting it to true.

Once the environment variables have been added, save them by clicking 'Apply' and confirming the changes; a confirmation dialog will appear to finalize the operation. Then restart the Web App. This second startup may take longer than usual, typically around 5 to 7 minutes, as the environment initializes with the new configuration.

Now, as we can see, the application has loaded successfully, and we can start using our own n8n server hosted on Azure. As you can observe, it references the host configured in the App Service. I hope you found this guide helpful and that it serves as a useful resource for deploying n8n on Azure App Service. If you have any questions or need further clarification, feel free to reach out; I'd be happy to help.

# Error Running Script in Runbook with System Assigned Managed Identity
Hello everyone, I could use some assistance, please. I'm encountering an error when trying to run a script within a runbook. I'm using PowerShell 5.1 with a system-assigned managed identity. The script works fine without using the managed identity via PowerShell outside of Azure.

Error:

System.Management.Automation.ParameterBindingException: Cannot process command because of one or more missing mandatory parameters: Credential.
at System.Management.Automation.CmdletParameterBinderController.PromptForMissingMandatoryParameters(Collection`1 fieldDescriptionList, Collection`1 missingMandatoryParameters)
at System.Management.Automation.CmdletParameterBinderController.HandleUnboundMandatoryParameters

I am using this script:

Connect-ExchangeOnline -ManagedIdentity -Organization domain removed for privacy reasons

# Specify the user's mailbox identity
$mailboxIdentity = "email address removed for privacy reasons"

# Get mailbox configuration and statistics for the specified mailbox
$mailboxConfig = Get-Mailbox -Identity $mailboxIdentity
$mailboxStats = Get-MailboxStatistics -Identity $mailboxIdentity

# Check if TotalItemSize and ProhibitSendQuota are not null and extract the sizes
if ($mailboxStats.TotalItemSize -and $mailboxConfig.ProhibitSendQuota) {
    $totalSizeBytes = $mailboxStats.TotalItemSize.Value.ToString().Split("(")[1].Split(" ")[0].Replace(",", "") -as [double]
    $prohibitQuotaBytes = $mailboxConfig.ProhibitSendQuota.ToString().Split("(")[1].Split(" ")[0].Replace(",", "") -as [double]

    # Convert sizes from bytes to gigabytes
    $totalMailboxSize = $totalSizeBytes / 1GB
    $mailboxWarningQuota = $prohibitQuotaBytes / 1GB

    # Check if the mailbox size exceeds 90% of the warning quota
    if ($totalMailboxSize -ge ($mailboxWarningQuota * 0.9)) {
        # Send an email notification
        $emailBody = "The mailbox $($mailboxIdentity) has reached $($totalMailboxSize) GB, which exceeds 90% of the warning quota."
        Send-MailMessage -To "email address removed for privacy reasons" -From "email address removed for privacy reasons" -Subject "Mailbox Size Warning" -Body $emailBody -SmtpServer "smtp.office365.com" -Port 587 -UseSsl -Credential (Get-Credential)
    }
}
else {
    Write-Host "The required values (TotalItemSize or ProhibitSendQuota) are not available."
}

Has anyone here integrated JIRA with Azure DevOps
We are currently using Azure Pipelines for our deployment process and Azure Boards to track issues and tickets. However, our company recently decided to move the ticketing system to JIRA, and I have been tasked with integrating JIRA with Azure DevOps. If you have done something similar, I would appreciate any guidance, best practices, or things to watch out for.

Resource Graph Explorer
I'm looking to retrieve a list of Azure resources that were created within the last 24 hours. However, it appears that Azure does not consistently expose the timeCreated property across all resource types, which makes direct filtering challenging.

Request for clarification/support:

Could you please confirm whether there is a reliable way to filter resources based on their creation time, for example, resources created in the last N days or within the last 6 hours? If timeCreated is not uniformly available, what is the recommended approach (e.g., using Resource Graph, Activity Logs, or another reliable method) to achieve this?

Comparison of Azure Cloud Sync and Traditional Entra Connect Sync
Introduction

In the evolving landscape of identity management, organizations face a critical decision when integrating their on-premises Active Directory (AD) with Microsoft Entra ID (formerly Azure AD). Two primary tools are available for this synchronization:

Traditional Entra Connect Sync (formerly Azure AD Connect)
Azure Cloud Sync

While both serve the same fundamental purpose, bridging on-premises AD with cloud identity, they differ significantly in architecture, capabilities, and ideal use cases.

Architecture & Setup

Entra Connect Sync is a heavyweight solution. It installs a full synchronization engine on a Windows Server, often backed by SQL Server. This setup gives administrators deep control over sync rules, attribute flows, and filtering. Azure Cloud Sync, on the other hand, is lightweight: it uses a cloud-managed agent installed on-premises, removing the need for SQL Server or complex infrastructure. The agent communicates with Microsoft Entra ID, and most configuration is handled in the cloud portal.

For organizations with complex hybrid setups (e.g., Exchange hybrid, device management), is Cloud Sync too limited?

Drive digital transformation of your business with Microsoft Azure
Technology has been transforming business ever since the invention of the wheel. But in recent years, the business landscape has changed fundamentally due to the unique convergence of three things:

Increasing volumes of data, particularly driven by the digitization of "things" and advances in data analytics used to draw actionable insight from that data
The rise of cloud computing, which places limitless computing and storage power into the hands of organizations of all sizes, increasing the pace of innovation and competition
The explosion and ubiquity of mobile computing

The convergence of these factors has shifted both what customers expect, because of access to unprecedented amounts of information, and what companies must deliver to meet those expectations. Check out the attached white paper to learn more!

Configure SQL Storage for Standard Logic Apps
Logic Apps uses Azure Storage by default to hold workflows, state, and runtime data. However, now in preview, you can use SQL storage instead of Azure Storage for your logic app's workflow-related transactions. Note that Azure Storage is still required; SQL is only an alternative for workflow transactions.

Why Use SQL Storage?

Portability: SQL runs on VMs, PaaS, and containers, ideal for hybrid and multi-cloud setups.
Control: Predictable pricing based on usage.
Reuse Assets: Leverage SSMS, CLI, SDKs, and Azure Hybrid Benefits.
Compliance: Enterprise-grade backup, restore, failover, and redundancy options.

When to Use SQL Storage

SQL is recommended when you need control over performance, run on-premises workflows (Azure Arc), want predictable cost modeling, prefer the SQL ecosystem, or want to reuse existing SQL environments. For general-purpose or default use cases, Azure Storage remains the recommended option.

Configuration via Azure Portal

Prerequisites:
Azure subscription
Azure SQL server and database

Azure SQL setup:
From your Azure SQL server, navigate to Security > Networking > Public Access and select "Selected networks". Scroll down and enable "Allow Azure services and resources…". Then navigate to Settings > Microsoft Entra ID and ensure "Microsoft Entra authentication only" is unchecked. Note: this can also be done during SQL server creation from the Networking tab.

Standard Logic App setup:
From the Azure portal, create a new Logic App (Standard). In the Storage tab, select SQL from the dropdown and add your SQL connection string.

Verification tip: After deployment, check your logic app's environment variable 'Workflows.Sql.ConnectionString' to confirm the SQL database name is reflected.

Known Issues & Fixes

Issue: Could not find a part of the path 'C:\home\site\wwwroot'. Fix: Re-enable SQL authentication and verify path settings.
Issue: SQL login error due to AAD-only authentication. Fix: Navigate to Settings > Microsoft Entra ID and ensure "Microsoft Entra authentication only" is unchecked.
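For the Storage tab, the connection string follows the standard ADO.NET format for Azure SQL with SQL authentication. The sketch below uses placeholder values; substitute your own server, database, and credentials, and note that SQL authentication (not Entra-only) must be enabled for this form to work:

```
Server=tcp:<your-server>.database.windows.net,1433;Initial Catalog=<your-database>;User ID=<sql-admin>;Password=<password>;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
```

You can copy this format from your SQL database's "Connection strings" blade in the portal, which avoids typos in the server and database names.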
Final Thoughts

SQL as a storage provider for Logic Apps opens up new possibilities for hybrid deployments, performance tuning, and cost predictability. While still in preview, it's a promising option for teams already invested in the SQL ecosystem. If you are already using this as an alternative, or think it would be useful, let us know in the comments below.

Resources

https://learn.microsoft.com/en-us/azure/logic-apps/set-up-sql-db-storage-single-tenant-standard-workflows
https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-pricing