Codeful Workflows: A New Authoring Model for Logic Apps Standard
Note: this blog introduces early concepts of pre-release functionality and is subject to change.

Azure Logic Apps Standard offers you a powerful cloud orchestration engine, enabling you to build and run automated workflows that effortlessly integrate resources from various services, systems, apps, and data sources. Whether you're looking to streamline processes across a complex enterprise or simply reduce the need for extensive coding, this platform provides a solution that's both efficient and flexible.

For those of you who require more control over workflow designs or want to leverage your expertise in frameworks like .NET and the Durable Task Framework, Logic Apps Standard now introduces an exciting new feature: Codeful Workflows. With Codeful Workflows, you can define workflows using an imperative programming style, blending the flexibility of coding with the simplicity and operational strengths of Logic Apps. This means you can structure your workflows the way that makes sense to you while still tapping into the rich ecosystem of connectors and tools built into Logic Apps.

What Are Codeful Workflows?

Codeful Workflows expand the authoring and execution models of Logic Apps Standard, offering developers the ability to implement, test, and run workflows using an imperative programming model, both locally and in the cloud. Built on frameworks like .NET and the Durable Task Framework, Codeful Workflows allow you to structure workflows in code while seamlessly integrating with the rich Logic Apps Standard connector ecosystem and leveraging its operational capabilities.

The core elements of a Logic Apps workflow (triggers, actions, and connections) are translated into durable task concepts within this codeful model:

- Triggers are implemented as Client Functions that invoke durable orchestrations, which contain the body of the workflow, blending logic implemented with language primitives and connector actions for external connectivity.
- Connector actions are presented as Activity Functions. The Logic Apps connector ecosystem is exposed to you via an SDK, bringing discoverability and rich IntelliSense support when creating action inputs, invoking actions, or reusing action outputs in later steps. The SDK vastly simplifies the execution of those connectors by wrapping them internally in an Activity Function, so you don't need to create new activities for each connector action you want to invoke.
- Connections, which manage the connectivity between actions and end systems, remain unchanged, allowing you to set them up once and share connections between multiple orchestrations and Logic Apps declarative workflows. Connector actions use a reference to a connection, providing flexibility between local and cloud configurations.

Using those building blocks, you can create workflows using familiar programming paradigms while still benefiting from the easy configuration and operational features of Logic Apps Standard. If you are an existing Logic Apps Standard customer, your codeful and visual workflows can coexist within the same application, bridging the gap between pro-code and low-code approaches. With those two execution models working hand in hand in the same application, Logic Apps Standard becomes a comprehensive orchestration tool that caters to all developer personas, from integration specialists to enterprise teams, with no cliffs in their experience.
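To make those building blocks concrete, here is a minimal sketch of what the body of a codeful workflow might look like, written with standard Durable Functions primitives. The connector SDK is pre-release, so a plain activity stands in for a wrapped connector action here; the HTTPHelloInput and SayHello names are illustrative and line up with the trigger example in the next section.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public class HTTPHelloInput
{
    public string Greeting { get; set; }
}

public static class HelloOrchestration
{
    // Orchestrator: holds the body of the workflow, mixing language
    // primitives (conditionals, loops, error handling) with activity calls.
    [FunctionName("HelloOrchestrator")]
    public static async Task<string> RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var input = context.GetInput<HTTPHelloInput>();

        // In the codeful model, connector actions surface as activity
        // invocations; this plain activity stands in for one here.
        return await context.CallActivityAsync<string>("SayHello", input.Greeting);
    }

    [FunctionName("SayHello")]
    public static string SayHello([ActivityTrigger] string greeting)
        => $"Processed: {greeting}";
}
```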
Creating Codeful Workflows

Designing codeful workflows begins with creating a new Logic Apps project within Visual Studio Code, configured for .NET and the Durable Task Framework. From triggers to actions, developers gain full flexibility to define their workflows programmatically.

Implementing Triggers

Triggers are the entry points of workflows, and in Codeful Workflows, they are defined as Client Functions. For example, an HTTP trigger can start a workflow when a request is received:

```csharp
[FunctionName("HelloTrigger")]
public static async Task<HttpResponseMessage> HttpStart(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestMessage req,
    [DurableClient] IDurableOrchestrationClient starter,
    ILogger log)
{
    var requestContent = await req.Content.ReadAsStringAsync();
    var workflowInput = new HTTPHelloInput
    {
        Greeting = $"Hello from Codeful workflows. You said '{requestContent}'"
    };

    log.LogInformation("Workflow Input = '{workflowInput}'.", JsonSerializer.Serialize(workflowInput));

    string instanceId = await starter.StartNewAsync("HelloOrchestrator", workflowInput);
    log.LogInformation("Started orchestration with ID = '{instanceId}'.", instanceId);

    return await starter.WaitForCompletionOrCreateCheckStatusResponseAsync(req, instanceId);
}
```

Using Connector Actions

Both Managed and Service Provider actions are available to be used within your orchestrations. They are organized in the SDK by type, making it easy to find the right connector to use. Once you identify the action to use, you can use the rich IntelliSense interface to generate inputs and call the action directly in your orchestration code.

Deployment and Operations

Deploying Logic Apps Standard applications that use both codeful and codeless workflows follows the same practices already available in Logic Apps Standard. Operational insights, such as endpoint visibility and execution monitoring, are provided within the Azure portal, ensuring parity with the functionality available for codeless workflows. This cohesive deployment model allows organizations to maximize their resources and cater to diverse development needs, whether they require quick prototyping via low-code tools or robust, scalable solutions through pro-code implementations.

Codeful Workflows and Intelligent Agents

You can take advantage of codeful workflows and Logic Apps Standard Agent Loop to create new intelligent applications that embed advanced AI decision-making directly into your processes, enabling your apps and automation to not just follow predefined steps, but to reason, adapt, and act autonomously towards goals. See this demo where we share two approaches to implementing agent loops: combining codeful and codeless workflows, where you can reuse existing workflows as tools, and writing agent loop actions directly with code.

Looking for feedback on Codeful Workflows

We are looking for early feedback on this feature. If you are interested in participating in a private preview, please use the form below to register your interest, and we will contact you to share the instructions: https://aka.ms/lacodeful/privatepreview/form
Introducing Model Logging, Import from AI Foundry, and extended model support in AI Gateway

As organizations increasingly integrate AI into their applications, managing model usage, ensuring governance, and optimizing performance across diverse APIs has become critical. Azure API Management's AI Gateway is evolving rapidly to meet these needs, introducing powerful new capabilities that simplify integration, improve observability, and enhance control over AI workloads. In this update, we're excited to share several key enhancements, including expanded support for the Responses API and AWS Bedrock APIs, advanced token tracking and logging, session-aware load balancing, and streamlined onboarding for custom models. Let's dive into what's new and how you can take advantage of these features today.

Model Logging and Token Tracking Dashboard

Understanding how your AI models are being used is critical for governance, cost management, and performance tuning. AI Gateway now enables comprehensive model logging and token tracking, giving you visibility into:

- Prompts and completions
- Token usage

You can configure diagnostic settings to export this data to long-term storage solutions such as Azure Monitor, Azure Storage, or Event Hubs for custom analysis. Importantly, this logging feature is fully compatible with streaming responses, allowing you to capture detailed insights without compromising the real-time experience for users. A built-in dashboard in the Azure portal provides an at-a-glance view of token usage trends, model performance across teams, and cost drivers, empowering organizations to make data-driven decisions around AI consumption and policy. Learn more about model logging.

Responses API Support (Preview)

The Responses API is a new stateful API from Azure OpenAI that unifies the capabilities of the Chat Completions API and the Assistants API into a single, streamlined interface. This makes it easier to build multi-turn conversational experiences, maintain session context, and handle tool calling, all within one API. With AI Gateway support for the Responses API, you now get:

- Token limiting to manage usage quotas
- Token and request tracking for auditing and monitoring
- Semantic caching to reduce latency and optimize compute
- Content filtering and safety controls

This support enables organizations to confidently use the Responses API at scale with built-in observability and governance.

AWS Bedrock API Support

In our continued effort to support multi-cloud AI strategies, we're thrilled to announce native support for the AWS Bedrock API in AI Gateway. This means you can now:

- Apply token limits to Bedrock-based models
- Use semantic caching to minimize redundant requests
- Enforce content safety and responsible AI policies
- Log prompts and completions just as you would with Azure-hosted models

Whether you're running models like Anthropic Claude or other Bedrock-hosted models, you can bring them under the same centralized AI Gateway, streamlining operations, compliance, and user experience.

Simplified Onboarding: AI Foundry and OpenAI-Compatible APIs

With the introduction of LLM policies that now support Azure AI Model Inference and third-party OpenAI-compatible APIs, we wanted to simplify the process of onboarding those APIs to Azure API Management. We're happy to announce two new experiences in the Azure API Management portal: Import from Azure AI Foundry and Create OpenAI API. These new gestures allow you to easily configure your model endpoints to be exposed via AI Gateway, and to configure token limiting, token tracking, semantic caching, and content safety policies directly from the Azure portal.
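Once a model endpoint is onboarded behind the gateway, clients call it like any other API Management API. Here is a minimal sketch; the gateway hostname and deployment name are placeholders, while the Ocp-Apim-Subscription-Key header is the standard API Management subscription mechanism.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class GatewayClientSample
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // Standard API Management subscription key header.
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-apim-key>");

        var body = "{ \"messages\": [ { \"role\": \"user\", \"content\": \"Hello\" } ] }";

        // The gateway applies token limiting, token tracking, semantic caching,
        // and content-safety policies before forwarding to the backend model.
        var response = await http.PostAsync(
            "https://contoso-apim.azure-api.net/openai/deployments/gpt-4o/chat/completions?api-version=2024-02-01",
            new StringContent(body, Encoding.UTF8, "application/json"));

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```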
Session-aware load balancing

Modern LLM applications, especially chatbots, agents, and batch inference workloads, often require stateful processing, where a user's requests must consistently hit the same backend to preserve context. We're introducing session-aware load balancing in Azure API Management to meet this need. With this feature, you can:

- Enable cookie-based session affinity for load-balanced backends
- Ensure that requests from the same session are routed to the same Azure OpenAI or third-party endpoint
- Support APIs like Assistants or the new Responses API that rely on consistent backend state

Session-aware load balancing ensures your multi-turn conversations or batched tool-calling experiences remain consistent, reliable, and scalable, while still benefiting from Azure API Management's AI Gateway capabilities. Learn more about session-aware load balancing. A client-side sketch of working with session affinity follows at the end of this post.

Get started

These new capabilities are being gradually rolled out across all Azure regions where API Management is available. Want early access to the latest AI Gateway features? You can now configure your Azure API Management instance to join the AI Gateway Early (GenAI Release) update group. This gives you access to new features before they are made generally available to all customers. To configure this, navigate to the Service Update Settings blade in the Azure portal and select the appropriate update track. Learn more about update groups.
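Returning to session-aware load balancing: cookie-based affinity only works if the client replays the affinity cookie the gateway sets. A minimal client-side sketch, assuming a hypothetical Responses-style endpoint and leaving the cookie name to the service:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SessionAffinityClient
{
    static async Task Main()
    {
        // CookieContainer stores the affinity cookie set by the gateway on
        // the first response and replays it on subsequent requests, so the
        // load balancer keeps routing this session to the same backend.
        var handler = new HttpClientHandler { UseCookies = true, CookieContainer = new CookieContainer() };
        using var http = new HttpClient(handler);
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-apim-key>");

        for (int turn = 0; turn < 3; turn++)
        {
            // Hypothetical endpoint and api-version; each turn lands on the
            // same backend thanks to the replayed session cookie.
            var response = await http.PostAsync(
                "https://contoso-apim.azure-api.net/openai/responses?api-version=<preview-version>",
                new StringContent("{\"input\":\"turn " + turn + "\"}", Encoding.UTF8, "application/json"));
            Console.WriteLine(response.StatusCode);
        }
    }
}
```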
GA: Inbound private endpoint for Standard v2 tier of Azure API Management

Standard v2 was announced in general availability on April 1st, 2024. Customers can now configure an inbound private endpoint for their API Management Standard v2 instance to allow clients in your private network to securely access the API Management gateway over Azure Private Link.

The private endpoint uses an IP address from the Azure virtual network in which it's hosted. Network traffic between a client on your private network and API Management traverses the virtual network and a Private Link on the Microsoft backbone network, eliminating exposure from the public internet. Further, you can configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address.

Inbound private endpoint

With a private endpoint and Private Link, you can:

- Create multiple Private Link connections to an API Management instance.
- Use the private endpoint to send inbound traffic on a secure connection.
- Use policy to distinguish traffic that comes from the private endpoint.
- Limit incoming traffic only to private endpoints, preventing data exfiltration.
- Combine with outbound virtual network integration to provide end-to-end network isolation of your API Management clients and backend services.

Today, only the API Management instance's Gateway endpoint supports inbound Private Link connections. In addition, each API Management instance can support at most 100 Private Link connections.

Typical scenarios

You can use an inbound private endpoint to enable private-only access directly to the API Management gateway to limit exposure of sensitive data or backends. Some of the common supported scenarios include:

- Pass client requests through a firewall and configure rules to route requests privately to the API Management gateway.
- Configure Azure Front Door (or Azure Front Door with Azure Application Gateway) to receive external traffic and then route traffic privately to the API Management gateway. For example, see Connect Azure Front Door Premium to an Azure API Management with Private Link.

Learn more

- API Management v2 tiers FAQ
- API Management v2 tiers documentation
- API Management overview documentation
Announcing the open Public Preview of the Premium v2 tier of Azure API Management

Today, we are excited to announce the public preview of the Azure API Management Premium v2 tier. Superior capacity, the highest entity limits, unlimited included calls, and the most comprehensive set of features set the Premium tier apart from other API Management tiers. Customers rely on the Premium tier for running enterprise-wide API programs at scale, with high availability and performance.

The Premium v2 tier has a new architecture that eliminates management traffic from the customer VNet, making private networking much more secure and easier to set up. During the creation of a Premium v2 instance, you can choose between VNet injection and VNet integration (introduced in the Standard v2 tier).

New and improved VNet injection

Using VNet injection in Premium v2 no longer requires any network security group rules, route tables, or service endpoints. Customers can secure their API workloads without impacting API Management dependencies, while Microsoft can secure the infrastructure without interfering with customer API workloads. In short, the new VNet injection implementation enables both parties to manage network security and configuration settings independently, without affecting each other.

You can now configure your APIs with complete networking flexibility: force tunnel all outbound traffic on-premises, send all outbound traffic through an NVA, or add a WAF device to monitor all inbound traffic to your API Management Premium v2, all without constraints.

Region availability

The public preview of the Premium v2 tier is available in only six public regions (Australia East, East US 2, Germany West Central, Korea Central, Norway East, and UK South) and requires creating a new service instance. For pricing information and regional availability, please visit the API Management pricing page.

Learn more

- API Management v2 tiers documentation
- API Management v2 tiers FAQ
- API Management overview documentation
Announcing Federated Logging in Azure API Management

Managing APIs effectively requires robust security, governance, and deep operational visibility. With federated logging now available in Azure API Management, platform teams and API developers can monitor, troubleshoot, and optimize APIs more efficiently, without compromising security or collaboration.

What is federated logging?

As API ecosystems grow, maintaining centralized visibility while providing teams with the autonomy to manage and troubleshoot their APIs becomes a challenge. Federated logging centralizes insights for platform teams while empowering API teams with focused access to logs specific to their APIs, streamlining monitoring in large-scale API ecosystems.

- Centralized monitoring for platform teams: complete visibility into API health, performance, and usage trends across the organization.
- Autonomy for API teams: direct access to their own API logs, reducing reliance on platform teams and speeding up resolution times.

Key Benefits

Federated logging offers advantages for both platform and API teams, addressing their unique challenges and needs.

For platform teams:

- Centralized monitoring: gain platform-wide visibility into API health, performance, and usage trends.
- Streamlined troubleshooting: quickly diagnose and resolve platform issues without depending on individual API teams.
- Governance and security: ensure robust audit trails and compliance, supporting secure and scalable API management.

For API teams:

- Faster incident resolution: accelerate incident resolution thanks to immediate access to relevant logs, without waiting for the central platform team's response.
- Actionable insights: track API growth, trends, and key performance metrics specific to your APIs to support reporting, planning, and strategic decision-making.
- Access control: limit access to logs to your API team only.

How Federated Logging Works

Federated logging is enabled using Azure Log Analytics and workspaces in Azure API Management:

- Platform teams configure logging to a centralized Log Analytics workspace for the entire API Management service, including individual workspaces.
- Platform teams can access centralized logs through the "Logs" page of the API Management service in the Azure portal or directly in the Log Analytics workspace.
- API teams can access logs for their workspace APIs through the "Logs" page of their API Management workspace in the Azure portal.
- Access control is enforced via Azure Log Analytics' resource context mechanism, ensuring role-based log visibility.

Get Started Today

Federated logging in Azure API Management combines centralized monitoring and team autonomy, enabling efficient and effective operations. Start using federated logging by visiting the Azure API Management documentation.
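If a team prefers pulling logs programmatically rather than through the portal, here is a minimal sketch using the Azure.Monitor.Query client library. It assumes gateway logs are streamed to the ApiManagementGatewayLogs table; adjust the table and column names to match your diagnostic settings. Resource-context RBAC on the workspace scopes the results to what the caller is allowed to see.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Monitor.Query;

class FederatedLogQuery
{
    static async Task Main()
    {
        var client = new LogsQueryClient(new DefaultAzureCredential());

        // Count requests per API over the last hour; the workspace ID is a
        // placeholder for your centralized Log Analytics workspace.
        var result = await client.QueryWorkspaceAsync(
            "<workspace-id>",
            "ApiManagementGatewayLogs | where TimeGenerated > ago(1h) | summarize count() by ApiId",
            new QueryTimeRange(TimeSpan.FromHours(1)));

        foreach (var row in result.Value.Table.Rows)
            Console.WriteLine($"{row["ApiId"]}: {row["count_"]}");
    }
}
```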
Introducing Workspace Gateway Metrics and Autoscale in Azure API Management

We're excited to announce the availability of workspace gateway metrics and autoscale in Azure API Management, offering both real-time insights and automated scaling for your gateway infrastructure. This combination increases reliability, streamlines operations, and boosts cost efficiency.

Monitor and Scale Gateway with New Metrics

API Management workspace gateways now support two metrics:

- CPU Utilization (%): represents CPU utilization across workspace gateway units.
- Memory Utilization (%): represents memory utilization across workspace gateway units.

Both metrics should be used together to make informed scaling decisions. For instance, if one of the metrics consistently exceeds a 70% threshold, adding an additional gateway unit to distribute the load can prevent outages during traffic increases. In most workloads, the CPU metric will determine scaling requirements.

Automatically Scale Workspace Gateways

In addition to manual scaling, Azure API Management workspace gateways now also feature autoscale, allowing for automatic scaling in or out based on metrics or a defined schedule. Autoscale provides several important benefits:

- Reliability: autoscale ensures consistent performance by scaling out during periods of high traffic.
- Operational efficiency: automating scaling processes streamlines operations and eliminates manual, error-prone intervention.
- Cost optimization: autoscale scales down resources when traffic is lower, reducing unnecessary expenses.

Access Metrics and Autoscale Settings

You can access the new metrics on the "Metrics" page of your workspace gateway resource in the Azure portal or through Azure Monitor. Autoscale can be configured on the "Autoscale" page of your workspace gateway resource in the Azure portal or through the autoscale experience.

Get Started

Learn more about using metrics for scaling decisions.
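For teams that want to read the same gateway metrics programmatically (for example, to feed a custom scaling report), here is a sketch using Azure.Monitor.Query. The resource ID and metric names below are illustrative assumptions; check the gateway's Metrics page for the exact identifiers before using them.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Monitor.Query;

class GatewayMetricsSample
{
    static async Task Main()
    {
        var client = new MetricsQueryClient(new DefaultAzureCredential());

        // Placeholder resource ID for a workspace gateway; metric names are
        // illustrative - confirm the real IDs in the portal's Metrics blade.
        var result = await client.QueryResourceAsync(
            "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/gateways/<gateway>",
            new[] { "CpuUtilizationPercentage", "MemoryUtilizationPercentage" });

        foreach (var metric in result.Value.Metrics)
            Console.WriteLine($"{metric.Name}: {metric.TimeSeries.Count} time series returned");
    }
}
```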
Expose REST APIs as MCP servers with Azure API Management and API Center (now in preview)

As AI-powered agents and large language models (LLMs) become central to modern application experiences, developers and enterprises need seamless, secure ways to connect these models to real-world data and capabilities. Today, we're excited to introduce two powerful preview capabilities in the Azure API Management platform:

- Expose REST APIs in Azure API Management as remote Model Context Protocol (MCP) servers
- Discover and manage MCP servers using API Center as a centralized enterprise registry

Together, these updates help customers securely operationalize APIs for AI workloads and improve how APIs are managed and shared across organizations.

Unlocking the value of AI through secure API integration

While LLMs are incredibly capable, they are stateless and isolated unless connected to external tools and systems. Model Context Protocol (MCP) is an open standard designed to bridge this gap by allowing agents to invoke tools, such as APIs, via a standardized, JSON-RPC-based interface. With this release, Azure empowers you to operationalize your APIs for AI integration securely, observably, and at scale.

1. Expose REST APIs as MCP servers with Azure API Management

An MCP server exposes selected API operations to AI clients over JSON-RPC via HTTP or Server-Sent Events (SSE). These operations, referred to as "tools," can be invoked by AI agents through natural language prompts. With this new capability, you can expose your existing REST APIs in Azure API Management as MCP servers, without rebuilding or rehosting them.

Addressing common challenges

Before this capability, customers faced several challenges when implementing MCP support:

- Duplicating development efforts: building MCP servers from scratch often led to unnecessary work when existing REST APIs already provided much of the needed functionality.
- Security concerns:
  - Server trust: malicious servers could impersonate trusted ones.
  - Credential management: self-hosted MCP implementations often had to manage sensitive credentials like OAuth tokens.
- Registry and discovery: without a centralized registry, discovering and managing MCP tools was manual and fragmented, making it hard to scale securely across teams.

API Management now addresses these concerns by serving as a managed, policy-enforced hosting surface for MCP tools, offering centralized control, observability, and security.

Benefits of using Azure API Management with MCP

By exposing MCP servers through Azure API Management, customers gain:

- Centralized governance for API access, authentication, and usage policies
- Secure connectivity using OAuth 2.0 and subscription keys
- Granular control over which API operations are exposed to AI agents as tools
- Built-in observability through APIM's monitoring and diagnostics features

How it works

1. MCP servers: in your API Management instance, navigate to MCP servers.
2. Choose an API: select + Create a new MCP Server and pick the REST API you wish to expose.
3. Configure the MCP server: select the API operations you want to expose as tools. These can be all or a subset of your API's methods.
4. Test and integrate: use tools like MCP Inspector or Visual Studio Code (in agent mode) to connect, test, and invoke the tools from your AI host. A client-side sketch of what such an invocation can look like follows below.
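As a rough client-side illustration, here is a minimal probe that asks an MCP server for its tool list over plain HTTP. This is a sketch, not a full MCP host: real hosts perform an initialize handshake first and may use SSE streaming, and the hostname and /mcp path here are placeholders. The tools/list method itself is part of the standard MCP JSON-RPC surface.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class McpToolsProbe
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // The gateway can still enforce subscription keys in front of MCP.
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-apim-key>");

        // JSON-RPC request to enumerate the tools the MCP server exposes.
        var rpc = "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\",\"params\":{}}";

        var response = await http.PostAsync(
            "https://contoso-apim.azure-api.net/my-api-mcp/mcp",
            new StringContent(rpc, Encoding.UTF8, "application/json"));

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```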
Getting started and availability

This feature is now in public preview and is being gradually rolled out to early access customers. To use the MCP server capability in Azure API Management:

Prerequisites

- Your APIM instance must be on a SKUv1 tier: Premium, Standard, or Basic.
- Your service must be enrolled in the AI Gateway early update group (activation may take up to 2 hours).
- Use the Azure portal with a feature flag: append ?Microsoft_Azure_ApiManagement=mcp to your portal URL to access the MCP server configuration experience.

Note: support for SKUv2 and broader availability will follow in upcoming updates. Full setup instructions and test guidance can be found via aka.ms/apimdocs/exportmcp.

2. Centralized MCP registry and discovery with Azure API Center

As enterprises adopt MCP servers at scale, the need for a centralized, governed registry becomes critical. Azure API Center now provides this capability, serving as a single, enterprise-grade system of record for managing MCP endpoints. With API Center, teams can:

- Maintain a comprehensive inventory of MCP servers.
- Track version history, ownership, and metadata.
- Enforce governance policies across environments.
- Simplify compliance and reduce operational overhead.

API Center also addresses enterprise-grade security by allowing administrators to define who can discover, access, and consume specific MCP servers, ensuring only authorized users can interact with sensitive tools. To support developer adoption, API Center includes:

- Semantic search and a modern discovery UI.
- Easy filtering based on capabilities, metadata, and usage context.
- Tight integration with Copilot Studio and GitHub Copilot, enabling developers to use MCP tools directly within their coding workflows.

These capabilities reduce duplication, streamline workflows, and help teams securely scale MCP usage across the organization.

Getting started

This feature is now in preview and accessible to customers:

- https://aka.ms/apicenter/docs/mcp
- AI Gateway Lab | MCP Registry

3. What's next

These new previews are just the beginning. We're already working on:

Azure API Management (APIM)

- Passthrough MCP server support: we're enabling APIM to act as a transparent proxy between your APIs and AI agents, with no custom server logic needed. This will simplify onboarding and reduce operational overhead.

Azure API Center (APIC)

- Deeper integration with Copilot Studio and VS Code: today, developers must perform manual steps to surface API Center data in Copilot workflows. We're working to make this experience more visual and seamless, allowing developers to discover and consume MCP servers directly from familiar tools like VS Code and Copilot Studio.

For questions or feedback, reach out to your Microsoft account team or visit:

- Azure API Management documentation
- Azure API Center documentation

The Azure API Management & API Center Teams
Now in Public Preview: System events for data-plane in API Management gateway

We're excited to announce the public preview of new data-plane system events in Azure Event Grid for the Azure API Management managed gateway (starting with the classic tiers). This new capability provides near-real-time visibility into critical operations within your data plane, helping you extend your API traffic with monitoring, automate responses, and prevent disruptions. These data-plane events complement the existing control-plane events available in Azure Event Grid system topics, marking the beginning of expanded event-driven capabilities in Azure API Management.

What's New?

1. Circuit breaker events: the managed gateway now publishes circuit breaker status changes to Event Grid, so you can act before issues escalate.

- Microsoft.ApiManagement.Gateway.CircuitBreakerOpened: triggered when the failure threshold is reached and traffic to a backend is temporarily blocked.
- Microsoft.ApiManagement.Gateway.CircuitBreakerClosed: indicates recovery and that traffic has resumed to the previously blocked backend.

2. Self-hosted gateway token events: stay informed about authentication token status to ensure deployed gateways do not become disconnected.

- Microsoft.ApiManagement.Gateway.TokenNearExpiry: published 7 days before a token's expiration to prompt proactive key rotation.
- Microsoft.ApiManagement.Gateway.TokenExpired: indicates a failed authentication attempt due to an expired token, preventing synchronization with the cloud instance. (Note: API traffic is not disrupted.)

And this is just the beginning! We're continuously expanding event-driven capabilities in Azure API Management. Stay tuned for more system events coming soon!

Why This Matters

With system events for the data plane, the managed gateway now offers near-real-time extensibility via Event Grid. This allows customers to:

- Detect and respond to failures instantly.
- Automate alerts and workflows for proactive issue resolution.
- Ensure smooth operations with timely token management.

Public Preview Limitations

- Single-instance scope: events are scoped to the individual gateway instance where they occur. No cross-instance aggregation yet.
- Available in classic tiers only: this feature is currently supported only on the classic Developer, Basic, Standard, and Premium tiers of API Management.

Get Started Today

Start monitoring your APIs in real time with event-driven architecture today. Follow the event schema and samples to build subscribers and handlers (a hedged handler sketch follows below). Review the integration guidance for Event Grid to wire up your automation pipelines. For a full list of supported Azure API Management system events and integration guidance, visit the Azure Event Grid integration docs.
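As promised above, here is a minimal handler sketch: an Azure Function that reacts to the gateway events via an Event Grid subscription on the API Management system topic. The event type strings come from this post; the payload shape is illustrative, so inspect e.Data in your own subscription before acting on specific fields.

```csharp
using Azure.Messaging.EventGrid;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class GatewayEventHandler
{
    // Fires for each event delivered by the Event Grid subscription.
    [FunctionName("OnGatewayEvent")]
    public static void Run([EventGridTrigger] EventGridEvent e, ILogger log)
    {
        switch (e.EventType)
        {
            case "Microsoft.ApiManagement.Gateway.CircuitBreakerOpened":
                // A backend is temporarily blocked; alert or reroute here.
                log.LogWarning("Circuit breaker opened: {data}", e.Data.ToString());
                break;

            case "Microsoft.ApiManagement.Gateway.TokenNearExpiry":
                // Seven days of lead time to rotate the self-hosted gateway key.
                log.LogWarning("Gateway token nearing expiry: {data}", e.Data.ToString());
                break;

            default:
                log.LogInformation("Unhandled event type {type}", e.EventType);
                break;
        }
    }
}
```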
Use Index + Direct Access to pull data across loops in Data Mapper

When working with repeating structures in Logic Apps Data Mapper, you may run into situations where two sibling loops exist under the same parent. What if you need to access data from one loop while you're inside the other? This is where the Direct Access function, used in combination with Index, can save the day.

Scenario

In this pattern, we're focusing on the schema nodes shown below (screenshot: source and destination schemas, with loops highlighted).

In the source schema, under the parent node VehicleTrips, we have two sibling arrays:

- Vehicle, which contains VehicleRegistration
- Trips, which contains trip-specific values like VehicleID, Distance, and Duration

In the destination schema, we're mapping into the repeating node Looping/Trips/Trip. It expects each trip's data along with a flattened VehicleRegistration value that combines both:

- The current trip's VehicleID
- The corresponding vehicle's VehicleRegistration

The challenge? These two pieces of data live in two separate sibling arrays.

Try it yourself

Download the sample files from GitHub and place them into the following folders in your Logic Apps Standard project:

- Artifacts: source, destination, and dependency schemas (.xsd)
- Map Definitions: the .lml map file
- Maps: the .xslt file generated when you save the map

Then right-click the .lml file and select "Open with Data Mapper" in VS Code.

Step-by-step Breakdown

Step 1: Set up the loop over Trips

Start by mapping the repeating Trips array from the source to the destination's Trip node. Within the loop, we map:

- Distance
- Duration

These are passed through To String functions before mapping, as the destination schema expects them as string values. As you map the child nodes, you will notice a loop automatically added on the parent nodes (Trips -> Trip). (Screenshot: mapping the Distance and Duration nodes; context: we're inside the Trips loop.)

Step 2: Use Index and Direct Access to bring in sibling loop values

Now we want to map the VehicleRegistration node at the destination by combining two values:

- VehicleID (from the current trip)
- VehicleRegistration (from the corresponding vehicle)

Note: before you add the Index function, delete the auto-generated loop from Trips to Trip.

To fetch the matching VehicleRegistration, use the Index function to capture the current position within the Trips loop (screenshot: Index setup for loop tracking), then use the Direct Access function to retrieve VehicleRegistration from the Vehicle array.

The Direct Access function takes three inputs:

- Index: from the Index function; tells which item to access
- Scope: set to Vehicle, the array you're pulling from
- Target node: VehicleRegistration, the value you want

This setup means: "From the Vehicle array, get the VehicleRegistration at the same index as the current trip." (Screenshot: Direct Access setup.)

Step 3: Concatenate and map the result

Use the Concat function to combine:

- VehicleID (from Trips)
- VehicleRegistration (from Vehicle, via Direct Access)

Map the result to VehicleRegistration in the destination. (Screenshot: Concat result mapped to VehicleRegistration.)

Note: before testing, delete the auto-generated loop from Vehicle to Trip. (Screenshot: final map connections view.)

Step 4: Test the output

Once your map is saved, open the Test panel and paste a sample payload. You should see each Trip in the output contain:

- The original Distance and Duration values (as strings)
- A VehicleRegistration field combining the correct VehicleID and VehicleRegistration from the sibling array

(Screenshot: sample Trip showing the combined nodes.)
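If it helps to think about the pattern outside the mapper, Index + Direct Access is essentially a positional join between the two sibling arrays. A rough C# analogy (not the mapper's generated code, and the concatenation separator is illustrative):

```csharp
using System;
using System.Linq;

class PositionalJoinSketch
{
    record Trip(string VehicleID, double Distance, double Duration);
    record Vehicle(string VehicleRegistration);

    static void Main()
    {
        var trips = new[] { new Trip("V1", 12.5, 30), new Trip("V2", 8.0, 15) };
        var vehicles = new[] { new Vehicle("ABC-123"), new Vehicle("XYZ-789") };

        // Select's index parameter plays the role of the Index function;
        // vehicles[i] plays the role of Direct Access into the sibling array.
        var mapped = trips.Select((t, i) => new
        {
            Distance = t.Distance.ToString(),    // To String
            Duration = t.Duration.ToString(),    // To String
            VehicleRegistration = $"{t.VehicleID}-{vehicles[i].VehicleRegistration}"  // Concat
        });

        foreach (var trip in mapped)
            Console.WriteLine($"{trip.VehicleRegistration} ({trip.Distance}, {trip.Duration})");
    }
}
```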
Feedback or ideas?

Have feedback or want to share a mapping challenge? Open an issue on GitHub.
Log Ingestion to Azure Log Analytics Workspace with Logic App Standard

Currently, to send logs to Azure Log Analytics, the recommended method involves using the Azure Log Analytics Data Collector connector. This is a managed connector that typically requires public access to your Log Analytics workspace (LAW). Consequently, this connector does not function if your LAW has virtual network (VNet) integration, as outlined in the Azure Monitor private link security documentation.

Solution: Logic App Standard for a VNet-Integrated Log Analytics Workspace

To address this limitation, a solution has been developed using Logic App Standard to connect directly to the LAW ingestion HTTP endpoint. The relevant API documentation for this endpoint can be found here: Log Analytics REST API | Microsoft Learn. It's important to note that the current version of this endpoint exclusively supports authentication via a shared key, as detailed in the Log Analytics REST API reference: any request to the Log Analytics HTTP Data Collector API must include the Authorization header. To authenticate a request, you must sign it with either the primary or secondary key for the workspace that is making the request and pass that signature as part of the request.

Implementing Shared Key Authentication with a C# Inline Script

The proposed solution involves building a small C# inline script within the Logic App Standard workflow to handle the shared key authentication process. Sample code for this implementation has been uploaded to my GitHub: LAWLogIngestUsingHttp.

```csharp
string dateString = DateTime.UtcNow.ToString("r");
byte[] content = Encoding.UTF8.GetBytes(jsonData);
int contentLength = content.Length;

string method = "POST";
string contentType = "application/json";
string resource = "/api/logs";

// String-to-sign layout required by the Data Collector API.
string stringToSign = $"{method}\n{contentLength}\n{contentType}\nx-ms-date:{dateString}\n{resource}";

// HMAC-SHA256 over the string-to-sign, keyed with the workspace shared key.
byte[] sharedKeyBytes = Convert.FromBase64String(connection.SharedKey);
using HMACSHA256 hmac = new HMACSHA256(sharedKeyBytes);
byte[] stringToSignBytes = Encoding.UTF8.GetBytes(stringToSign);
byte[] signatureBytes = hmac.ComputeHash(stringToSignBytes);
string signature = Convert.ToBase64String(signatureBytes);
```

HTTP Action Configuration

Subsequently, an HTTP action within the Logic App Standard workflow is configured to call the Log Analytics ingestion endpoint using an HTTP POST method. The endpoint URL follows this format:

https://{WorkspaceId}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01

Remember to replace {WorkspaceId} with your actual Log Analytics workspace ID. The custom table name is passed in the Log-Type header.
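For completeness, here is a hedged sketch of how the computed signature wires into the request if you issue it from C# rather than a Logic App HTTP action. The header layout follows the Data Collector API reference; the Log-Type value is a placeholder for your custom table name.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class LawIngestSample
{
    static async Task SendAsync(string workspaceId, string signature, string dateString, string jsonData)
    {
        using var http = new HttpClient();
        var request = new HttpRequestMessage(
            HttpMethod.Post,
            $"https://{workspaceId}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01");

        // Shared-key scheme from the Data Collector API reference:
        // Authorization: SharedKey <WorkspaceId>:<Base64 HMAC-SHA256 signature>
        request.Headers.TryAddWithoutValidation("Authorization", $"SharedKey {workspaceId}:{signature}");
        request.Headers.Add("x-ms-date", dateString);   // must match the signed date
        request.Headers.Add("Log-Type", "MyCustomLog"); // destination custom table (becomes MyCustomLog_CL)
        request.Content = new StringContent(jsonData, Encoding.UTF8, "application/json");

        var response = await http.SendAsync(request);
        Console.WriteLine(response.StatusCode);
    }
}
```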