Connectors
🎙️ Announcement: Logic Apps connectors in Azure AI Search for Integrated Vectorization
We're excited to announce that Azure Logic Apps connectors are now supported within AI Search as data sources for ingestion into Azure AI Search vector stores. This unlocks the ability to ingest unstructured documents from a variety of systems (including SharePoint, Amazon S3, Dropbox, and many more) into your vector index using a low-code experience. This new capability is powered by Logic Apps templates, which orchestrate the entire ingestion pipeline, from document extraction to embedding generation and indexing, so you can build Retrieval-Augmented Generation (RAG) applications with ease.

Grounding AI with RAG: Why Document Ingestion Matters

Retrieval-Augmented Generation (RAG) has become a cornerstone technique for building grounded and trustworthy AI systems. Instead of generating answers from the model's pretraining alone, RAG applications fetch relevant information from external knowledge bases, giving LLMs access to accurate and up-to-date enterprise data. To power RAG, enterprises need a scalable way to ingest and index documents into a vector store. Whether you're working with policy documents, legal contracts, support tickets, or financial reports, getting this content into a searchable, semantic format is step one.

Simplified Ingestion with Integrated Vectorization

Azure AI Search's Integrated Vectorization capability automates the process of turning raw content into semantically indexed vectors:

- Chunking: Documents are split into meaningful text segments
- Embedding: Each chunk is transformed into a vector using an embedding model like text-embedding-3-small or a custom model
- Indexing: Vectors and associated metadata are written into a searchable vector store
- Projection: Metadata is preserved to enable filtering, ranking, and hybrid queries

This eliminates the need to build or maintain custom pipelines, making it significantly easier to adopt RAG in production environments.

Ingest from Anywhere: Logic Apps + AI Search

With today's release, we're extending ingestion to a variety of new data sources by integrating Logic Apps connectors directly with AI Search. This allows you to retrieve unstructured content from enterprise systems and seamlessly ingest it into the vector store. Here's how the ingestion process works with Logic Apps:

Connect to Source Systems

Using prebuilt connectors, Logic Apps can fetch content from various data sources, including SharePoint document libraries, messages from Service Bus or Azure Queues, files from OneDrive or an SFTP server, and more. You can trigger ingestion on demand or on a schedule.

Parse and Chunk Documents

Next, Logic Apps uses built-in AI-powered document parsing actions to extract raw text. This is followed by the "Chunk Document" action, which:

- Tokenizes the document into language-model-friendly units
- Splits the content into semantically coherent chunks

This ensures optimal chunk size for downstream embedding and retrieval.

Note: Currently we default to a chunk size of 5000 in the workflows created for document ingestion. We'll be updating the default chunk size to a smaller number in our next release. Meanwhile, you can update it in the workflow if you need a smaller chunk size.

Generate Embeddings with Azure OpenAI

The chunked text is then passed to the Azure OpenAI connector, where text-embedding-3-small or another configured embedding model is used to generate high-dimensional vector representations. These vectors capture the semantic meaning of the content and are key to enabling accurate retrieval in RAG applications.
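For readers curious what the embedding step amounts to on the wire, here is a minimal sketch of a raw HTTP action making the same kind of request the Azure OpenAI connector handles for you. The resource name, deployment name, API version, key parameter, and the reference to a Chunk_Document action's output are illustrative assumptions, not the template's actual definition; the generated workflows use the connector rather than raw HTTP.

```json
{
  "Generate_embeddings": {
    "type": "Http",
    "inputs": {
      "method": "POST",
      "uri": "https://contoso-openai.openai.azure.com/openai/deployments/text-embedding-3-small/embeddings?api-version=2024-02-01",
      "headers": {
        "api-key": "@parameters('azureOpenAIKey')",
        "Content-Type": "application/json"
      },
      "body": {
        "input": "@body('Chunk_Document')?['value']"
      }
    },
    "runAfter": {
      "Chunk_Document": [ "Succeeded" ]
    }
  }
}
```

The response carries one embedding array per input chunk, which the downstream indexing step writes to the vector store alongside the chunk text.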
Write to Azure AI Search

Finally, the embeddings, along with any relevant metadata (e.g., document title, tags, timestamps), are written into the AI Search index. The index schema is created for you and can include fields for filtering, sorting, and semantic ranking.
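To make the indexing step concrete, here is a hedged sketch of the kind of payload the Azure AI Search "index documents" REST API accepts. The field names (chunk_id, chunk, text_vector, and so on) are assumptions for illustration; the template-generated workflow defines the actual schema for you, and the vector is truncated here for readability (a real text-embedding-3-small vector has 1536 dimensions).

```json
{
  "value": [
    {
      "@search.action": "mergeOrUpload",
      "chunk_id": "contract-007_chunk_3",
      "title": "Supplier Contract 007",
      "chunk": "Either party may terminate this agreement with 30 days written notice...",
      "text_vector": [ 0.0132, -0.0245, 0.0077 ],
      "source": "sharepoint",
      "last_modified": "2025-04-01T09:30:00Z"
    }
  ]
}
```

Fields like source and last_modified are the projected metadata that later enable filtering and hybrid queries.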
Logic Apps Templates: Fast Start, Flexible Design

To help you get started, we've created Logic Apps templates specifically for RAG ingestion. These templates:

- Include all the steps mentioned above
- Are customizable if you want to update the default configuration

Whether you're ingesting thousands of PDFs from SharePoint or syncing files from an Amazon S3 bucket, these templates provide a production-grade foundation for building your pipeline.

Getting Started

Here is detailed step-by-step documentation to get started using Integrated Vectorization with Logic Apps data sources:

👉 Get started with Logic Apps data sources for AI Search ingestion
👉 Learn more about Integrated Vectorization in Azure AI Search

We'd Love Your Feedback

We're just getting started. Tell us:

- What other data sources would you like to ingest?
- What enhancements would make ingestion easier for your use case?
- Are there specific industry templates or formats we should support?

👉 Reply to this post or share your ideas through our feedback form

We're building this with you, so your feedback helps shape the future of AI-powered automation and RAG.

🤖 AI Procurement assistant using prompt templates in Standard Logic Apps

📘 Introduction

Answering procurement-related questions doesn't have to be a manual process. With the new Chat Completions using Prompt Template action in Logic Apps (Standard), you can build an AI-powered assistant that understands context, reads structured data, and responds like a knowledgeable teammate.

🏢 Scenario: AI assistant for IT procurement

Imagine an employee wants to know: "When did we last order laptops for new hires in IT?" Instead of forwarding this to the procurement team, a Logic App can:

- Accept the question
- Look up catalog details and past orders
- Pass all the info to a prompt template
- Generate a polished, AI-powered response

🧠 What Are Prompt Templates?

Prompt Templates are reusable text templates that use Jinja2 syntax to dynamically inject data at runtime. In Logic Apps, this means you can:

- Define a prompt with placeholders like {{ customer.orders }}
- Automatically populate it with outputs from earlier actions
- Generate consistent, structured prompts with minimal effort

✨ Benefits of Using Prompt Templates in Logic Apps

- Consistency: Centralized prompt logic instead of embedding prompt strings in each action.
- Reusability: Easily apply the same prompt across multiple workflows.
- Maintainability: Tweak prompt logic in one place without editing the entire flow.
- Dynamic control: Logic Apps inputs (e.g., values from a form, database, or API) flow right into the template.

This allows you to create powerful, adaptable AI-driven flows without duplicating effort, making it perfect for scalable enterprise automation.

💡 Try it Yourself

Grab the sample prompt template and sample inputs from our GitHub repo and follow along.
👉 Sample logic app

🧰 Prerequisites

To get started, make sure you have:

- A Logic App (Standard) resource in Azure
- An Azure OpenAI resource with a deployed GPT model (e.g., GPT-3.5 or GPT-4)

💡 You'll configure your OpenAI API connection during the workflow setup.

🔧 Build the Logic App workflow

Here's how to build the flow in Logic Apps using the Prompt Template action. This setup assumes you're simulating procurement data with test inputs.

📌 Step 0: Start by creating a Stateful Workflow in your Logic App (Standard) resource. Choose "Stateful" when prompted during workflow creation. This allows the run history and variables to be preserved for testing.

📸 Creating a new Stateful Logic App (Standard) workflow

📌 Trigger: "When an HTTP request is received"

📌 Step 1: Add three Compose actions to store your test data.

documents: This stores your internal product catalog entries.

```json
[
  {
    "id": "1",
    "title": "Dell Latitude 5540 Laptop",
    "content": "Intel i7, 16GB RAM, 512GB SSD, standard issue for IT new hire onboarding"
  },
  {
    "id": "2",
    "title": "Docking Station",
    "content": "Dell WD19S docking stations for dual monitor setup"
  }
]
```

📸 Compose action for documents input

question: This holds the employee's natural language question.

```json
[
  {
    "role": "user",
    "content": "When did we last order laptops for new hires in IT?"
  }
]
```
📸 Compose action for question input

customer: This includes the employee profile and past procurement orders.

```json
{
  "firstName": "Alex",
  "lastName": "Taylor",
  "department": "IT",
  "employeeId": "E12345",
  "orders": [
    {
      "name": "Dell Latitude 5540 Laptop",
      "description": "Ordered 15 units for Q1 IT onboarding",
      "date": "2024/02/20"
    },
    {
      "name": "Docking Station",
      "description": "Bulk purchase of 20 Dell WD19S docking stations",
      "date": "2024/01/10"
    }
  ]
}
```

📸 Compose action for customer input

📌 Step 2: Add the "Chat Completions using Prompt Template" action

📸 OpenAI connector view

💡 Tip: Always prefer the built-in (in-app) connector over the managed version when choosing the Azure OpenAI operation. Built-in connectors allow better control over authentication and reduce latency by running natively inside the Logic Apps runtime.

📌 Step 3: Connect to Azure OpenAI

Navigate to your Azure OpenAI resource and click Keys and Endpoint to connect using key-based authentication.

📸 Create Azure OpenAI connection

📝 Prompt template: Building the message for chat completions

Once you've added the Get chat completions using Prompt Template action, here's how to set it up:

1. Deployment Identifier

Enter the name of your deployed Azure OpenAI model here (e.g., gpt-4o).
📌 This should exactly match what you configured in your Azure OpenAI resource.

2. Prompt Template

This is the structured instruction the model will use. Here's the full template used in the action. Note that the variable names exactly match the Compose action names in your Logic App: documents, question, and customer.

```
system:
You are an AI assistant for Contoso's internal procurement team. You help employees get quick answers about previous orders and product catalog details. Be brief, professional, and use markdown formatting when appropriate. Include the employee's name in your response for a personal touch.

# Product Catalog
Use this documentation to guide your response. Include specific item names and any relevant descriptions.
{% for item in documents %}
Catalog Item ID: {{item.id}}
Name: {{item.title}}
Description: {{item.content}}
{% endfor %}

# Order History
Here is the employee's procurement history to use as context when answering their question.
{% for item in customer.orders %}
Order Item: {{item.name}}
Details: {{item.description}} — Ordered on {{item.date}}
{% endfor %}

# Employee Info
Name: {{customer.firstName}} {{customer.lastName}}
Department: {{customer.department}}
Employee ID: {{customer.employeeId}}

# Question
The employee has asked the following:
{% for item in question %}
{{item.role}}: {{item.content}}
{% endfor %}

Based on the product documentation and order history above, please provide a concise and helpful answer to their question. Do not fabricate information beyond the provided inputs.
```

📸 Prompt template action view

3. Add your prompt template variables

Scroll down to Advanced parameters and switch the dropdown to Prompt Template Variable. Then add a new item for each Compose action and reference it dynamically from previous outputs:

- documents
- question
- customer

📸 Prompt template variable references
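To make the Jinja2 loops concrete, here is what the # Order History section of the template renders to at runtime, given the sample customer data above. This is a worked illustration of the template mechanics, not captured output from the action:

```
# Order History
Here is the employee's procurement history to use as context when answering their question.

Order Item: Dell Latitude 5540 Laptop
Details: Ordered 15 units for Q1 IT onboarding — Ordered on 2024/02/20

Order Item: Docking Station
Details: Bulk purchase of 20 Dell WD19S docking stations — Ordered on 2024/01/10
```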
🔍 How the template works

| Template element | What it does |
| --- | --- |
| {{ customer.firstName }} {{ customer.lastName }} | Displays the employee name |
| {{ customer.department }} | Adds department context |
| {{ question[0].content }} | Injects the user's question from the Compose action named question |
| {% for item in documents %} | Loops through catalog data from the Compose action named documents |
| {% for item in customer.orders %} | Loops through the employee's order history from customer |

Each of these values is dynamically pulled from your Logic App Compose actions: no code, no external services needed. You can apply the exact same approach to reference data from any connector, like a SharePoint list, SQL row, email body, or even AI Search results. Just map those outputs into the Prompt Template and let Logic Apps do the rest.

✅ Final Output

When you run the flow, the model might respond with something like:

"The last order for Dell Latitude 5540 laptops was placed on February 20, 2024 — 15 units were procured for IT new hire onboarding."

This is based entirely on the structured context passed in through your Logic App, with no extra fine-tuning required.

📸 Output from run history

💬 Feedback

Let us know what other kinds of demos and content you would like to see using this form.

Concurrency support for Service Bus built-in connector in Logic Apps Standard
In this post, we'll cover the recent enhancements in the built-in, or in-app, Service Bus connector in Logic Apps Standard. Specifically, we'll cover the support for concurrency for the Service Bus trigger...

Automatic Regeneration of Azure Managed Connectors Connection keys in VS Code Extension
Starting with version 4.57.6, the Azure Logic Apps (Standard) extension for Visual Studio Code will automatically regenerate the connection keys required to allow the extension to access Azure Managed Connections.

Automating Logic Apps connections to Dynamics 365 using Bicep
I recently worked with a customer to show the ease of integration between Logic Apps and the Dataverse as part of Dynamics 365 (D365). The flows of integration we looked at included:

- Inbound: D365 updates pushed in near real-time into a Logic Apps HTTP trigger.
- Outbound: A Logic App sending HTTP requests to retrieve data from D365.

The focus of this short post is the outbound use case, showing how to use the Microsoft Dataverse connector with Bicep automation.

A simple use case

The app shown here couldn't be much simpler: it's a Timer recurrence which uses the List Rows action to retrieve data from D365. Here's a snip from an execution:

Impressed? 🤣

Getting this set up by clicking through the Azure Portal is fairly simple. The connector example uses a Service Principal to authenticate the Logic App to D365 (OAuth being an alternative), so several parameters are needed. Additionally, you'll be required to configure an Environment parameter for D365, which is a URL for the target environment, e.g. https://meaningful-url-for-your-org.crm.dynamics.com. Configuring the Service Principal may be the most troublesome part; it is outside the scope of this Bicep automation and would be considered a separate task per environment. This page may help you complete the required identity creation.

So... what about the Bicep?

You can see the Bicep files in the GitHub repository here. We have to deploy 2 resources:

```bicep
resource laworkflow 'Microsoft.Logic/workflows@2019-05-01' = {
  ...
}

resource commondataserviceApiConnection 'Microsoft.Web/connections@2016-06-01' = {
  ...
}
```

The first Microsoft.Logic/workflows resource deploys the app configuration, and the second Microsoft.Web/connections resource deploys the Dataverse connection used by the app. The relationship between the resources after deployment will be:

The Bicep for such a simple example took some trial and error to get right, and the documentation is far from clear, something I will try to get improved. In hindsight it seems straightforward; these snippets outline where I struggled. A snip from the connections resource:

```bicep
resource commondataserviceApiConnection 'Microsoft.Web/connections@2016-06-01' = {
  name: 'commondataservice'
  ...
  properties: {
    displayName: 'la-to-d365-commondataservice'
    api: {
      id: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Web/locations/${location}/managedApis/commondataservice'
      ...
```

The property at path properties.api.id is all-important here. Now looking at the workflows resource:

```bicep
resource laworkflow 'Microsoft.Logic/workflows@2019-05-01' = {
  name: logicAppName
  ...
  parameters: {
    '$connections': {
      value: {
        commondataservice: {
          connectionName: 'commondataservice'
          connectionId: resourceId('Microsoft.Web/connections', 'commondataservice')
          id: commondataserviceApiConnection.properties.api.id
        }
      }
    }
  }
  ...
```

Here we see the important parameters for the connection configuration, creating the relationship between the resources:

- connectionName: references the name of the connection as specified in the resource.
- connectionId: uses the Bicep resourceId function to obtain the deployed Azure resource ID.
- id: references the properties.api.id value specified earlier.

So fairly simple, but understanding which value is required where isn't straightforward, and that's where documentation improvement is needed.

Secret Management

An extra area I looked at was improved secret management in Bicep. Values required for the Service Principal must be handled securely, so how do you achieve this?

The approach I took was to use the az.getSecret Bicep function within the .bicepparam file, allowing a secret to be read from an Azure Key Vault at deployment time. This has the advantage of separating the main template file from the parameters it uses. The Key Vault used is pre-provisioned to store the Service Principal secrets and is not deployed as part of this Bicep code.

```bicep
using './logicapps.bicep'
...
param commondataserviceEnvironment = getSecret(
  readEnvironmentVariable('AZURE_KV_SUBSCRIPTION_ID'),
  readEnvironmentVariable('AZURE_KV_RESOURCE_GROUP'),
  readEnvironmentVariable('AZURE_KV_NAME'),
  'commondataserviceClientSecret')
```

This example populates the parameter from the commondataserviceClientSecret secret in the Key Vault identified by the given subscription ID, resource group, and vault name. You must grant Azure Resource Manager access to the Key Vault; in the portal this is the "Azure Resource Manager for template deployment" access setting.
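In Bicep, that portal toggle corresponds to the enabledForTemplateDeployment property on the vault. Here is a minimal sketch of a pre-provisioned vault with the flag set; the vault name is hypothetical and the rest of the configuration is illustrative:

```bicep
param location string = resourceGroup().location

// Hypothetical name for the pre-provisioned vault holding the Service Principal secrets.
resource vault 'Microsoft.KeyVault/vaults@2023-07-01' = {
  name: 'kv-d365-secrets'
  location: location
  properties: {
    tenantId: tenant().tenantId
    sku: {
      family: 'A'
      name: 'standard'
    }
    accessPolicies: [] // data-plane access managed separately
    // The setting referenced above: grants Azure Resource Manager read access to
    // secrets during deployments, which lets getSecret() resolve at deploy time.
    enabledForTemplateDeployment: true
  }
}
```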
The Subscription ID, Resource Group name, and Key Vault name are read from environment variables using the readEnvironmentVariable function, showing another possibility for configuration alongside individual .bicepparam files per environment.

In Summary

While this was a very simple Logic Apps use case, I hope it ties together the areas of connector automation, configuration, and security, helping you accelerate the time to a working solution. Happy integrating!