model catalog
Azure AI Foundry Models: Futureproof Your GenAI Applications
Years of Rapid Growth and Innovation
The Azure AI Foundry Models journey started with the launch of Models as a Service (MaaS) in partnership with Meta Llama at Ignite 2023. Since then, we've rapidly expanded our catalog and capabilities:
- 2023: General availability of the model catalog and launch of MaaS
- 2024: 1,800+ models available, including Cohere, Mistral, Meta, G42, AI21, Nixtla and more, with 250+ OSS models deployed on managed compute
- 2025 (Build): 10,000+ models, new models sold directly by Microsoft, more managed compute models and expanded partnerships, plus advanced tooling such as Model Leaderboard, Model Router, MCP Server, and Image Playground

GenAI Trends Reshaping the Model Landscape
To stay ahead of the curve, Azure AI Foundry Models is designed to support the most important trends in GenAI:
- Emergence of reasoning-centric models
- Proliferation of agentic AI and multi-agent systems
- Expansion of open-source ecosystems
- Multimodal intelligence becoming mainstream
- Rise of small, efficient models (SLMs)
These trends are shaping a future where enterprises need not just access to models—but smart tools to pick, combine, and deploy the best ones for each task.

A Platform Built for Flexibility and Scale
Azure AI Foundry is more than a catalog—it's your end-to-end platform for building with AI. You can:
- Explore 10,000+ models, including foundation, industry, multimodal, and reasoning models along with agents
- Deploy using flexible options like PayGo, Managed Compute, or Provisioned Throughput (PTU)
- Monitor and optimize performance with integrated observability and compliance tooling
Whether you're prototyping or scaling globally, Foundry gives you the flexibility you need.

Two Core Model Categories
1. Models Sold Directly by Microsoft
These models are hosted and billed directly by Microsoft under Microsoft Product Terms. They offer:
- Enterprise-grade SLAs and reliability
- Deep Azure service integration
- Responsible AI standards
- Flexible use of reserved quota through Azure AI Foundry Provisioned Throughput (PTU) across direct models, including OpenAI, Meta, Mistral, Grok, DeepSeek, and Black Forest Labs
Reduce AI workload costs on predictable consumption patterns with Azure AI Foundry Provisioned Throughput reservations. Learn more here.
Coming to the family of direct models from Azure:
- Grok 3 / Grok 3 Mini (from xAI)
- Flux Pro 1.1 Ultra (from Black Forest Labs)
- Llama 4 Scout & Maverick (from Meta)
- Codestral 2501, OCR (from Mistral)

2. Models from Partners & Community
These models come from the broader ecosystem, including open-source and monetized partners. They are deployed as Managed Compute or Standard PayGo, and include models from Cohere, Paige, and Saifr. New industry models are also joining this ecosystem of partner and community models:
- NVIDIA NIMs: ProteinMPNN, RFDiffusion, OpenFold2, MSA
- Paige AI: Virchow 2G, Virchow 2G-mini
- Microsoft Research: EvoDiff, BioEmu-1

Expanded capabilities that make model choice simpler and faster
Azure AI Foundry Models isn't just about more models. We're introducing tools to help developers intelligently navigate model complexity:
1. Model Leaderboard
Easily compare model performance across real-world tasks with:
- Transparent benchmark scores
- Task-specific rankings (summarization, RAG, classification, etc.)
- Live updates as new models are evaluated
Whether you want the highest accuracy, fastest throughput, or best price-performance ratio, the leaderboard guides your selection.
2. Model Router
Don't pick just one—let Azure do the heavy lifting.
- Automatically route queries to the best available model
- Optimize based on speed, cost, or quality
- Supports dynamic fallback and load balancing
This capability is a game-changer for agents, copilots, and apps that need adaptive intelligence. (A minimal calling sketch appears at the end of this post.)

3. Image/Video Playground
A new visual interface for:
- Testing image generation models side-by-side
- Tuning prompts and decoding settings
- Evaluating output quality interactively
This is particularly useful for multimodal experimentation across marketing, design, and research use cases.

4. MCP Server
Enables model-aware orchestration, especially for agentic workloads:
- Tool use integration
- Multi-model planning and reasoning
- Unified coordination across model APIs

A Futureproof Foundation
With Azure AI Foundry Models, you're not just selecting from a list of models—you're stepping into a full-stack, flexible, and future-ready AI environment:
- Choose the best model for your needs
- Deploy on your terms—serverless, managed, or reserved
- Rely on enterprise-grade performance, security, and governance
- Stay ahead with integrated innovation from Microsoft and the broader ecosystem
The AI future isn't one-size-fits-all—and neither is Azure AI Foundry. Explore today: Azure AI Foundry
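For readers who want to see what routing looks like in code, here is a minimal sketch of calling a Model Router deployment through the OpenAI-compatible chat completions API. The endpoint, API version, and the "model-router" deployment name are assumptions; substitute the values from your own Foundry project.

```python
# Minimal sketch: sending a chat request to a hypothetical "model-router"
# deployment in Azure AI Foundry. Endpoint, API version, and deployment
# name are placeholders -- substitute the values from your own project.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",  # assumption: any recent GA API version
)

response = client.chat.completions.create(
    model="model-router",  # assumption: the router is deployed under this name
    messages=[
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "Summarize the refund policy in two sentences."},
    ],
)

# The router picks an underlying model per request; the response reports which one answered.
print(response.model)
print(response.choices[0].message.content)
```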
Transforming Customer Support with Azure OpenAI, Azure AI Services, and Voice AI Agents
Customer support today is under immense pressure to meet rising expectations of speed, personalization, and always-on availability. Yet businesses still struggle with:
1. Long wait times and call center queues
2. Disconnected support channels
3. Limited availability of agents outside business hours
4. Repetitive issues consuming valuable human time
5. Frustrated users due to a lack of immediate and contextual answers
These inefficiencies are costing businesses over $3.7 trillion annually in poor service delivery, while over 70% of agents (based on research) spend excessive time searching for the right answers instead of resolving problems directly.

How Voice AI Agents Are Transforming the Support Experience
Enter the era of voice-enabled AI agents—powered by Azure OpenAI, Azure AI Services, and ServiceNow—designed to completely transform the way customers engage with support systems. These agents can now:
- Handle complex user queries in natural language
- Access enterprise systems (like CRM, ITSM, HR) in real time
- Automate repetitive tasks such as password resets, ticket status updates, or return tracking
- Escalate only when human assistance is truly needed
- Create connected, seamless, and intelligent support experiences across departments
Let's take a closer look at four architecture patterns that showcase how enterprises can deploy these agents effectively.

🔷 Architecture Pattern 1: Unified Voice Agent with Azure AI + ServiceNow + CRM Integration
In this architecture, the customer support journey begins when a user initiates a voice-based conversation through a front-end interface such as a web application, mobile app, or smart device. The captured audio is streamed directly to Azure OpenAI GPT-4o's real-time API, which performs immediate speech-to-text transcription, interprets the intent behind the request, and prepares the initial system response—all in a single seamless stream.
Once the user's intent is understood (e.g., "create a ticket", "check incident status", or "list recent issues"), GPT-4o passes control to Semantic Kernel, which orchestrates the next steps through function calling. Semantic Kernel hosts pre-defined tools (functions) that map to ServiceNow API actions, such as createIncident, getIncidentStatus, listIncidents, or searchKnowledgeBase. These function calls are then securely routed to ServiceNow via REST APIs. (A sketch of this function-calling layer is shown after this section.)
ServiceNow executes the appropriate actions—whether it's creating a new support ticket, retrieving the status of an open incident, or searching its Knowledge Base. CRM data is also seamlessly accessed, if needed, to enrich responses with personalized context such as customer history or case metadata. The result from ServiceNow (e.g., an incident ID or KB article summary) is then sent back to Azure GPT-4o, which converts the structured data into a natural spoken response. This final audio output is delivered to the user in real time, completing the end-to-end conversational loop.
Additionally, tools like Azure Monitor or Application Insights can be integrated to log telemetry, track usage trends, monitor latency, and analyze user satisfaction over time. This architecture enables organizations to streamline customer support operations, reduce wait times, and deliver natural, intelligent assistance across any channel—voice-first.
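As a rough illustration of the function-calling layer in Pattern 1, the sketch below defines a Semantic Kernel plugin whose functions wrap the ServiceNow Table API. The instance URL, credentials, and function set are assumptions for illustration; the GPT-4o real-time audio loop and error handling are omitted.

```python
# Minimal sketch of a Semantic Kernel plugin exposing ServiceNow actions as
# callable functions, loosely following Pattern 1. The instance URL and
# credentials are placeholders; the speech layer and error handling are omitted.
import os
import requests
from semantic_kernel.functions import kernel_function

SN_INSTANCE = os.environ.get("SERVICENOW_INSTANCE", "https://example.service-now.com")
SN_AUTH = (os.environ["SERVICENOW_USER"], os.environ["SERVICENOW_PASSWORD"])


class ServiceNowPlugin:
    """Functions the model can call via Semantic Kernel function calling."""

    @kernel_function(name="create_incident", description="Create a new ServiceNow incident.")
    def create_incident(self, short_description: str) -> str:
        r = requests.post(
            f"{SN_INSTANCE}/api/now/table/incident",
            auth=SN_AUTH,
            json={"short_description": short_description},
            timeout=30,
        )
        r.raise_for_status()
        return r.json()["result"]["number"]  # e.g. INC0012345

    @kernel_function(name="get_incident_status", description="Look up the state of an incident by number.")
    def get_incident_status(self, incident_number: str) -> str:
        r = requests.get(
            f"{SN_INSTANCE}/api/now/table/incident",
            auth=SN_AUTH,
            params={
                "sysparm_query": f"number={incident_number}",
                "sysparm_fields": "number,state",
            },
            timeout=30,
        )
        r.raise_for_status()
        results = r.json()["result"]
        return results[0]["state"] if results else "not found"
```

In a full pipeline you would register this plugin on a kernel (for example with kernel.add_plugin(ServiceNowPlugin(), plugin_name="servicenow")) and enable automatic function choice so the model can invoke these actions during a conversation.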
🔷 Architecture Pattern 2: Scalable Customer Support with Multi-Agent Voice Architecture
This architecture introduces a modular and distributed agent-based design to deliver intelligent, scalable customer support through a voice interface. The process starts with the User Proxy Agent, which acts as the entry point for all user conversations. It captures voice input and forwards the request to the Master Agent, which serves as the brain of the architecture.
The Master Agent, empowered with a large language model (LLM) and memory, interprets the intent behind the user's input and dynamically routes the request to the most appropriate domain-specific agent. These include specialized agents such as the Activation Agent, Root Agent, Sales Agent, or Technical Agent, each designed to handle specific workflows or business tasks.
- The Activation Agent connects to web services and handles provisioning or onboarding scenarios.
- The Root Agent taps into document search systems (like Azure Cognitive Search) to answer questions grounded in internal documentation.
- The Sales Agent is equipped with small language models (SLMs) and CRM access to retrieve sales-related data from backend databases.
- The Technical Agent is containerized via Docker and built to manage backend diagnostics, code-level issues, or infrastructure status—often connecting to systems like ServiceNow for real-time ITSM execution.
Once the task is executed by the respective agent, results are passed back through the Master Agent and ultimately to the User Proxy Agent, which synthesizes the output into a voice response and delivers it to the user. Shared memory between agents maintains context across multi-turn conversations, enabling complex, multi-step interactions (e.g., "Create a ticket, check the latest order status, and escalate it if unresolved.") without breaking continuity.
This architecture is ideal for enterprises looking to scale customer support horizontally, adding new agents without disrupting existing workflows. It enables parallelism, specialization, and real-time orchestration, providing faster resolutions while reducing the burden on human agents. It is best suited for distributed support operations across IT, HR, sales, and field support—where task-specific intelligence and modular scale are critical.

🔷 Architecture Pattern 3: Customer Support Reinvented with Voice RAG + Azure AI + ServiceNow
This architecture brings a cutting-edge twist to Retrieval-Augmented Generation (RAG) by enabling it through a voice AI agent—creating a truly conversational experience grounded in enterprise knowledge. By combining Azure OpenAI models with the ServiceNow Knowledge Base, this pattern ensures accurate, voice-driven support for employees or customers in real time.
The process begins when a user interacts with a voice-enabled interface—via phone, web, or embedded assistant. The voice AI agent streams the audio to Azure OpenAI GPT-4o, which transcribes the voice input, understands the intent, and then triggers a RAG pipeline. Instead of relying solely on the model's internal memory, the system performs a real-time query against the ServiceNow Product Knowledge Base, retrieving relevant knowledge articles, troubleshooting guides, or support workflows. These results are embedded directly into the prompt, creating an enriched context that is passed to the language model via Azure AI Foundry.
The model then generates a natural, contextually accurate spoken response, which is converted back into audio and voiced to the user—creating a seamless end-to-end Voice RAG experience. This approach ensures that responses are not only conversational but also deeply grounded in trusted enterprise knowledge. It is ideal for helpdesk automation, HR support, and IT troubleshooting—where users prefer speaking naturally and need verified, document-backed responses in real time. (A minimal sketch of the retrieval step appears at the end of this post.)

🔷 Architecture Pattern 4: Conversational Customer Support with AI Avatars and Azure AI
This architecture delivers rich, conversational experiences by integrating AI avatars, Azure AI, and ServiceNow to offer human-like, intelligent customer support across channels. It merges natural speech, facial expression, and enterprise data to create a highly engaging support assistant.
The interaction begins when a user speaks with an AI avatar application, whether embedded in a web portal, mobile device, or kiosk. The voice is captured and processed through a speech-to-text pipeline, which feeds the Avatar Module and Live Discussions Engine to manage lip-sync, emotional tone, and turn-taking. Behind the scenes, the avatar is connected to Azure AI services, including Custom Neural Voice (CNV) and Azure OpenAI, which enable the avatar to understand intent and generate responses in natural, conversational language.
Most critically, the system integrates directly with the ServiceNow platform. Through secure APIs, the avatar queries ServiceNow to:
- Retrieve case status updates
- Provide summaries of incident history
- Look up Knowledge Base articles
- Trigger incident creation if needed
These ServiceNow results are then passed through the text-to-speech module, with support for multilingual voice synthesis, and rendered by the avatar using expressive animation. Responses are visually delivered as live or pre-rendered avatar videos, creating a truly interactive and personalized experience.
This pattern not only answers basic questions but also surfaces dynamic enterprise data—turning the AI avatar into a frontline voice agent capable of real-time, connected support across IT, HR, or customer service domains. It is best for branded digital experiences, frontline support stations, or HR/IT helpdesk automation where facial presence, empathy, and backend integration are essential.

✨ Closing Thoughts: The Future of Customer Support Is Here
Customer expectations have evolved—and so must the way we deliver support. By combining the power of Azure OpenAI, Azure AI Services, and ServiceNow, we're not just automating tasks—we're reinventing how organizations connect with their users. Whether it's:
- a unified voice agent handling IT tickets and CRM queries,
- a multi-agent architecture scaling across departments,
- a voice-enabled RAG system delivering knowledge-grounded answers in real time, or
- a human-like AI avatar offering face-to-face support—
these architectures are driving a new era of intelligent, conversational, and scalable customer service.
👉 Join us at the Microsoft Booth during ServiceNow Knowledge 2025 (starting May 6th) to experience these solutions live, explore the tech behind them, and imagine how they can transform your business. Let's build the future of support—together.
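To make the retrieval step of Pattern 3 concrete, here is a minimal, text-only sketch that queries the ServiceNow Knowledge Base and grounds an Azure OpenAI response in the retrieved articles. The instance URL, table fields, and deployment name are assumptions, and the speech-to-text and text-to-speech layers are left out.

```python
# Text-only sketch of the Pattern 3 retrieval step: pull matching knowledge
# articles from ServiceNow and ground the model response in them. Instance
# URL, field names, and deployment name are placeholder assumptions.
import os
import requests
from openai import AzureOpenAI

SN_INSTANCE = os.environ.get("SERVICENOW_INSTANCE", "https://example.service-now.com")
SN_AUTH = (os.environ["SERVICENOW_USER"], os.environ["SERVICENOW_PASSWORD"])


def search_knowledge(query: str, top: int = 3) -> list[str]:
    """Return short_description and body text of the top matching KB articles."""
    r = requests.get(
        f"{SN_INSTANCE}/api/now/table/kb_knowledge",
        auth=SN_AUTH,
        params={
            "sysparm_query": f"short_descriptionLIKE{query}",
            "sysparm_limit": top,
            "sysparm_fields": "short_description,text",
        },
        timeout=30,
    )
    r.raise_for_status()
    return [f"{a['short_description']}: {a['text']}" for a in r.json()["result"]]


client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",
)

question = "How do I reset my VPN token?"
context = "\n\n".join(search_knowledge("VPN token"))
answer = client.chat.completions.create(
    model="gpt-4o",  # assumption: your GPT-4o deployment name
    messages=[
        {"role": "system", "content": "Answer only from the provided knowledge articles."},
        {"role": "user", "content": f"Knowledge articles:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)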
Unlocking Document Intelligence: Mistral OCR Now Available in Azure AI Foundry
Every organization has a treasure trove of information—buried not in databases, but in documents. From scanned contracts and handwritten forms to research papers and regulatory filings, this knowledge often sits locked in static formats, invisible to modern AI systems. Imagine if we could teach machines not just to read, but to truly understand the structure and nuance of these documents. What if equations, images, tables, and multilingual text could be seamlessly extracted, indexed, and acted upon—at scale?
That future is here. Today we are announcing the launch of Mistral OCR in the Azure AI Foundry model catalog—a state-of-the-art Optical Character Recognition (OCR) model that brings intelligent document understanding to a whole new level. Designed for speed, precision, and multilingual versatility, Mistral OCR unlocks the potential of unstructured content with unmatched performance.

From Patient Charts to Investment Reports—Built for Every Industry
Mistral OCR's ability to extract structure from complex documents makes it transformative across a range of verticals:
- Healthcare: Hospitals and health systems can digitize clinical notes, lab results, and patient intake forms, transforming scanned content into structured data for downstream AI applications—improving care coordination, automation, and insights.
- Finance & Insurance: From loan applications and KYC documents to claims forms and regulatory disclosures, Mistral OCR helps financial institutions process sensitive documents faster, more accurately, and with multilingual support—ensuring compliance and improving operational efficiency.
- Education & Research: Academic institutions and research teams can turn PDFs of scientific papers, course materials, and diagrams into AI-readable formats. Mistral OCR's support for equations, charts, and LaTeX-style formatting makes it ideal for scientific knowledge extraction.
- Legal & Government: With its multilingual and high-fidelity OCR capabilities, legal teams and public agencies can digitize contracts, historical records, and filings—accelerating review workflows, preserving archival materials, and enabling transparent governance.

Key Highlights of Mistral OCR
According to Mistral, their OCR model stands apart for the following reasons:
- State-of-the-Art Document Understanding: Mistral OCR excels in parsing complex, multimodal documents—extracting tables, math, and figures with markdown-style clarity. It goes beyond recognition to deliver understanding.
- Multilingual by Design: With support for dozens of languages and scripts, Mistral OCR achieves 99%+ fuzzy match scores in benchmark testing. Whether you're working in Hindi, Arabic, French, or Chinese—this model adapts seamlessly.
- Fastest in Its Class: Process up to 2,000 pages per minute on a single node. This speed makes it ideal for enterprise document pipelines and real-time applications.
- Doc-as-Prompt + Structured Output: Turn documents into intelligent prompts—then extract structured, JSON-formatted output for downstream use in agents, workflows, or analytics engines.

Why use Mistral OCR on Azure AI Foundry?
Mistral OCR is now available as serverless APIs through Models as a Service (MaaS) in Azure AI Foundry.
This enables enterprise-scale workloads with ease:
- Network Isolation for Inferencing: Protect your data from public network access.
- Expanded Regional Availability: Access from multiple regions.
- Data Privacy and Security: Robust measures to ensure data protection.
- Quick Endpoint Provisioning: Set up an OCR endpoint in Azure AI Foundry in seconds.
Azure AI ensures seamless integration, enhanced security, and rapid deployment for your AI needs.

How to deploy the Mistral OCR model in Azure AI Foundry
Prerequisites:
- If you don't have an Azure subscription, get one here: https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go
- Familiarize yourself with the Azure AI Model Catalog.
- Create an Azure AI Foundry hub and project. Make sure you pick East US, West US3, South Central US, West US, North Central US, East US 2, or Sweden Central as the Azure region for the hub.
Create a deployment to obtain the inference API and key:
- Open the model card in the model catalog on Azure AI Foundry.
- Click on Deploy and select the Pay-as-you-go option.
- Subscribe to the Marketplace offer and deploy. You can also review the API pricing at this step.
- You should land on the deployment page that shows you the API and key in less than a minute.
These steps are outlined in detail in the product documentation. (A minimal example of calling the resulting endpoint appears at the end of this post.)

From Documents to Decisions
The ability to extract meaning from documents—accurately, at scale, and across languages—is no longer a bottleneck. With Mistral OCR now available in Azure AI Foundry, organizations can move beyond basic text extraction to unlock true document intelligence. This isn't just about reading documents. It's about transforming how we interact with the knowledge they contain. Try it. Build with it. And see what becomes possible when documents speak your language.
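Once the deployment page shows your endpoint and key, calling the model is a single HTTP request. The sketch below assumes the endpoint exposes Mistral's published OCR request format (a document_url input and per-page markdown output); check the model card and product documentation for the exact schema and route of your deployment.

```python
# Rough sketch of calling a Mistral OCR serverless deployment. The endpoint
# and key come from the deployment page; the payload follows Mistral's
# published OCR request format and is an assumption -- verify the exact
# schema and route on the model card for your deployment.
import os
import requests

endpoint = os.environ["MISTRAL_OCR_ENDPOINT"]  # e.g. https://<deployment>.<region>.models.ai.azure.com
api_key = os.environ["MISTRAL_OCR_KEY"]

payload = {
    "model": "mistral-ocr-2503",  # assumption: model name shown on the deployment
    "document": {
        "type": "document_url",
        "document_url": "https://example.com/sample-invoice.pdf",  # hypothetical document
    },
}

resp = requests.post(
    f"{endpoint}/v1/ocr",
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()

# The response contains per-page markdown; print the first page if present.
pages = resp.json().get("pages", [])
if pages:
    print(pages[0]["markdown"])
```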
Unlock Multi-Modal Embed 4 and Multilingual Agentic RAG with Command A on Azure
Developers and enterprises now have immediate access to state-of-the-art generative and semantic models purpose-built for RAG (Retrieval-Augmented Generation) and agentic AI workflows on Azure AI Foundry to:
- Deploy high-performance LLMs and semantic search engines directly into production
- Build faster, more scalable, and multilingual RAG pipelines
- Leverage models that are optimized for enterprise workloads in finance, healthcare, government, and manufacturing

Cohere Embed 4: High-Performance Embeddings for Search & RAG
Accompanying Command A is Cohere's Embed 4, a cutting-edge embedding model ideal for retrieval-augmented generation pipelines and semantic search. Embed 4 (the latest evolution of Cohere's Embed series) converts text—and even images—into high-dimensional vector representations that capture semantic meaning. It's a multi-modal, multilingual embedding model designed to provide high recall and relevance in vector search, text classification, and clustering tasks.
What makes Embed 4 stand out?
- 100+ Language Support: This model is truly global—it supports well over 100 languages for text embeddings. You can encode queries and documents in many languages (Arabic, Chinese, French, Hindi, etc.) into the same vector space, enabling cross-lingual search out of the box. For example, a question in Spanish can retrieve a relevant document originally in English if their ideas align semantically.
- Multi-Modal Embeddings: Embed 4 can embed not only text but also images, so you can use it for multimodal search scenarios—e.g., indexing both textual content and images and allowing queries across them. Under the hood, the model has an image encoder; the Azure AI Foundry SDK provides an ImageEmbeddingsClient to generate embeddings from images. With this, you could embed a diagram or a screenshot and find text documents that are semantically related to that image's content.
- Matryoshka Embeddings (Scalable Dimensions): A novel feature in Cohere's Embed 4 is Matryoshka Representation Learning, which produces embeddings that can be truncated to smaller sizes with minimal loss in fidelity. In practice, the model can output a high-dimensional vector (e.g., 768 or 1024 dims), but you have the flexibility to use just the first 64, 128, 256, etc. dimensions if needed. These "nested" embeddings mean you can choose a vector size that balances accuracy vs. storage/query speed—smaller vectors save memory and compute while still preserving most of the semantic signal. This is great for enterprise deployments where vector database size and latency are critical.
- Enterprise Optimizations: Cohere has optimized Embed 4 for production use. It supports int8 quantization and binary embedding output natively, which can drastically reduce storage footprint and speed up similarity search with only minor impact on accuracy (useful for very large indexes). The model is also trained on massive datasets (including domain-specific data) to ensure robust performance on noisy enterprise text. It achieves state-of-the-art results on benchmark evaluations like MTEB, meaning you get retrieval quality on par with or better than other leading embedding models (OpenAI, Google, etc.). For instance, Cohere's previous embed model was top-tier on cross-language retrieval tasks, and Embed 4 further improves on that foundation.
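As a small, concrete example, the sketch below generates Embed 4 text embeddings against a serverless endpoint with the azure-ai-inference SDK and shows Matryoshka-style truncation by keeping only the leading dimensions. The endpoint URL, key, and chosen dimension count are placeholders.

```python
# Minimal sketch: generating Embed 4 text embeddings through a serverless
# endpoint with the azure-ai-inference SDK. Endpoint and key are placeholders
# taken from your own deployment.
import os
from azure.ai.inference import EmbeddingsClient
from azure.core.credentials import AzureKeyCredential

client = EmbeddingsClient(
    endpoint=os.environ["EMBED_ENDPOINT"],  # serverless Embed 4 endpoint
    credential=AzureKeyCredential(os.environ["EMBED_KEY"]),
)

docs = [
    "Q1 2025 revenue grew 12% year over year, driven by cloud services.",
    "La croissance du chiffre d'affaires au T1 2025 provient des services cloud.",
]
result = client.embed(input=docs)

full_vectors = [item.embedding for item in result.data]
# Matryoshka-style truncation: keep only the leading dimensions when a
# smaller, cheaper index is acceptable (256 here is an arbitrary choice).
compact_vectors = [v[:256] for v in full_vectors]
print(len(full_vectors[0]), len(compact_vectors[0]))
```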
Cohere Command A: Generative Model for Enterprise AI
Command A is Cohere's latest flagship large language model, designed for high-performance text generation in demanding enterprise scenarios. It's an instruction-tuned, conversational LLM that excels at complex tasks like multi-step reasoning, tool use (function calling), and retrieval-augmented generation. Command A features a massive 111B-parameter Transformer architecture with a 256K-token context length—enabling it to handle extremely large inputs (hundreds of pages of text) in a single prompt without losing coherence.
Source for the benchmarks above: Introducing Command A: Max performance, minimal compute
Some key capabilities of Command A include:
- Long Context (256K tokens): Using an innovative attention architecture (sliding window + global attention), Command A can ingest up to 256,000 tokens of text in one go. This enables use cases like analyzing lengthy financial reports or entire knowledge bases in a single prompt.
- Enterprise-Tuned Generation: Command A is optimized for business applications—it excels at following instructions, summarization, and especially RAG workflows, where it integrates retrieved context and even cites sources to mitigate hallucinations. It supports tool calling (function calling) out of the box so it can interact with external APIs or data sources as part of an Azure AI Agent.
- Multilingual Proficiency: Command A performs strongly on multilingual use cases, covering all major business languages, with near-leading performance in Japanese, Korean, and German.
- Efficient Deployment: Despite its size, Command A is engineered for efficiency—it delivers 150% higher throughput than its predecessor (Command R+ 08-2024) and requires only 2× A100/H100 GPUs to run, which in practice means lower latency. It also supports streaming token output, so applications can start receiving the response as it's generated, keeping interaction latency low.

Real-World Use Cases for Command A + Embed 4
With both a powerful generative model and a state-of-the-art embedding model at your fingertips, developers can build advanced AI solutions. Here are some real-world use cases unlocked by Command A and Embed 4 on Azure.
Financial Report Summarization (RAG): Imagine ingesting thousands of pages of financial filings, earnings call transcripts, and market research into a vector store. Using Embed 4, you can embed and index all this text. When an analyst asks "What were the key revenue drivers mentioned in ACME Corp's Q1 2025 report?", you use the query embedding to retrieve the most relevant passages. Command A (with its 256K context) can then take those passages and generate a concise summary or answer with cited evidence. The model's long context window means it can consider all retrieved chunks at once, and its enterprise tuning ensures factual, business-appropriate summaries.
Legal Research Agent (Tool Use + Multilingual): Consider a multinational law firm handling cross-border mergers and acquisitions, with a vast repository of legal documents in multiple languages. Using Embed 4, the firm indexes these documents, creating multilingual embeddings. When a lawyer researches a specific legal precedent related to a merger in Germany, they can query in English. Embed 4 retrieves relevant German documents, and Command A summarizes key points, translates excerpts, and compares legal arguments across jurisdictions.
Furthermore, Command A leverages tool calling (utilizing agentic capabilities) to retrieve additional information from external databases, such as company registration details and regulatory filings, integrating this data into its analysis to provide a comprehensive report.
Technician Knowledge Assistant (RAG + Multilingual): Think of a utilities company committed to operational excellence, managing a vast network of critical equipment, including power generators, transformers, and distribution lines. It can leverage Command A, integrated with Embed 4, to index a comprehensive repository of equipment manuals, maintenance records, and sensor data in multiple languages. This enables technicians and engineers to access critical knowledge instantly. Technicians can ask questions in their native language about specific equipment issues, and Command A retrieves relevant manuals, troubleshooting guides, and past repair reports. It also guides technicians through complex maintenance procedures step by step, ensuring consistency and adherence to best practices. This empowers the company to optimize maintenance processes, improve overall equipment reliability, and enhance communication, ultimately achieving operational excellence.
Multimodal Search & Indexing: With Embed 4's image embedding capability, you can build search systems that go beyond text. For instance, a media company could index its image library by generating embeddings for each image (using Azure's Image Embeddings client) and also index captions/descriptions. A user could then supply a query image (or a textual description) and retrieve both images and articles that are semantically similar to the query. This is useful for scenarios like finding slides similar to a given diagram, searching scanned invoices by content, or matching user-uploaded photos to reference documents.

Getting Started: Deploying via Azure AI Foundry
In Azure AI Foundry, Embed 4 can be used via the Embeddings API to encode text or images into vectors. Each text input is turned into a numeric vector (e.g., a 1024-dimension float array) that you can store in a vector database or use for similarity comparisons. The embeddings are normalized for cosine similarity by default. You can also take advantage of Azure's vector index or Azure Cognitive Search to build vector search directly on top of these model outputs.
Image source: Introducing Embed 4: Multimodal search for business
One of the biggest benefits of using Azure AI Foundry is the ease of deployment for these models. Cohere's Command A and Embed 4 are available in the model catalog—you can find their model cards and deploy them in just a few clicks. Azure AI Foundry supports serverless API endpoints for these models, meaning Microsoft hosts the inference infrastructure and scales it for you (with pay-as-you-go billing). A minimal example of calling a Command A serverless endpoint is sketched below.
Integration with Azure AI Agent Service: If you're building an AI agent (a system that can orchestrate models and tools to perform tasks), Azure AI Agent Service makes it easy to incorporate these models. In the Agent Service, you can simply reference the deployed model by name as the agent's reasoning LLM. For example, you could specify an agent that uses CohereCommandA as its model and add tools like Azure Cognitive Search. The agent can then handle user requests by, say, using a Search tool (powered by an Embed 4 vector index) and then passing the results to Command A for answer formulation—all managed by the Azure Agent framework.
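Here is a minimal sketch of the serverless path described above: calling a deployed Command A endpoint with the azure-ai-inference SDK and passing retrieved passages as grounding context. The endpoint, key, and passage text are placeholders.

```python
# Minimal sketch: calling a serverless Command A deployment with the
# azure-ai-inference SDK. The endpoint and key are placeholders taken from
# the deployment's overview page; the passages are a stand-in for the
# results of an Embed 4 vector search.
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["COMMAND_A_ENDPOINT"],  # serverless Command A endpoint
    credential=AzureKeyCredential(os.environ["COMMAND_A_KEY"]),
)

retrieved_passages = "...top passages returned by your Embed 4 vector search..."
response = client.complete(
    messages=[
        SystemMessage(content="Answer using only the provided passages and cite them."),
        UserMessage(content=(
            f"Passages:\n{retrieved_passages}\n\n"
            "Question: What were ACME Corp's key revenue drivers in Q1 2025?"
        )),
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```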
This approach lets you build production-grade agentic AI workflows that leverage Cohere's models with minimal plumbing. In short, Azure provides the glue to connect Command A + Embed 4 + tools into a coherent solution.

Try Command A and Embed 4 today on Azure AI Foundry
The availability of Cohere's Command A and Embed 4 on Azure AI Foundry empowers developers to build the next generation of intelligent apps on a fully managed platform. You can now easily deploy a 256K-context LLM that rivals the best in the industry, alongside a high-performance embedding model that plugs into your search and retrieval pipelines. Whether it's summarizing lengthy documents with cited facts, powering a multilingual enterprise assistant, enabling multimodal search experiences, or orchestrating complex tool-using agents—these models open up a world of possibilities. Azure AI Foundry makes it simple to integrate these capabilities into your solutions, with the security, compliance, and scalability of Azure's cloud.
We encourage you to try out Command A and Embed 4 in your own projects. Spin them up from the Azure model catalog, use the provided SDK examples to get started, and explore how they can elevate your applications' intelligence. With Cohere's models on Azure, you have cutting-edge AI at your fingertips, ready to deploy in production. We're excited to see what you build with them!
The Future of AI: Customizing AI agents with the Semantic Kernel agent framework
The blog post Customizing AI agents with the Semantic Kernel agent framework discusses the capabilities of the Semantic Kernel SDK, an open-source tool developed by Microsoft for creating AI agents and multi-agent systems. It highlights the benefits of using single-purpose agents within a multi-agent system to achieve more complex workflows with improved efficiency. The Semantic Kernel SDK offers features like telemetry, hooks, and filters to ensure secure and responsible AI solutions, making it a versatile tool for both simple and complex AI projects.
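As a small illustration of the agent framework that post describes, the sketch below creates a single-purpose agent. It assumes a recent semantic-kernel release and an Azure OpenAI chat deployment configured through environment variables; the agent name and instructions are placeholders.

```python
# Minimal sketch of a single-purpose agent built with the Semantic Kernel
# agent framework. Assumes a recent semantic-kernel release; AzureChatCompletion
# reads its endpoint, key, and deployment name from environment variables
# (AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, AZURE_OPENAI_CHAT_DEPLOYMENT_NAME).
import asyncio
from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

agent = ChatCompletionAgent(
    service=AzureChatCompletion(),
    name="BillingHelper",  # placeholder single-purpose agent
    instructions="You answer billing questions briefly and escalate anything involving refunds.",
)


async def main() -> None:
    response = await agent.get_response(messages="Why was I charged twice this month?")
    print(response.content)


asyncio.run(main())
```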
The Future of AI: Reduce AI Provisioning Effort - Jumpstart your solutions with AI App Templates
In the previous post, we introduced Contoso Chat—an open-source RAG-based retail chat sample for Azure AI Foundry that serves as both an AI App template (for builders) and the basis for a hands-on workshop (for learners). We also briefly covered the five stages in the developer workflow (provision, setup, ideate, evaluate, deploy) that take developers from the initial prompt to a deployed product. But how can that sample help you build your app? The answer lies in developer tools and AI App templates that jumpstart productivity by giving you a fast start and a solid foundation to build on. In this post, we answer that question with a closer look at Azure AI App templates—what they are, and how we can jumpstart our productivity with a reuse-and-extend approach that builds on open-source samples for core application architectures.
The Future of AI: Power Your Agents with Azure Logic Apps
Building intelligent applications no longer requires complex coding. With advancements in technology, you can now create agents using cloud-based tools to automate workflows, connect to various services, and integrate business processes across hybrid environments without writing any code.
Automate Quota Discovery in Azure AI Foundry: A Tale of 3 APIs
Automate the discovery of Azure regions that meet your AI deployment needs using three essential APIs: the Models API, the Usages API, and the Locations API. This process helps reduce decision fatigue and ensures compliance with enterprise-wide model deployment standards.
Key learnings:
- Model Deployment Requirements: Understand the needs of a standard Retrieval-Augmented Generation (RAG) application, which involves deploying multiple models.
- Automation Benefits: Streamline your deployment process and ensure compliance with enterprise standards.
- Three Essential APIs (a rough sketch of these calls follows below):
  - Models API: Query available models for a specific subscription within a chosen location.
  - Usages API: Assess current usages and limits to infer available quotas.
  - Locations API: Obtain a list of all available regions.
A comprehensive Jupyter notebook with the implementation steps is available in the accompanying GitHub repository. This resource is invaluable for AI developers looking to streamline their deployment processes and ensure their applications meet all necessary requirements.
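For readers who prefer raw REST over the notebook, here is a rough sketch of the three management-plane calls. The api-version values are assumptions; use the versions documented for the Locations and Cognitive Services APIs in your environment.

```python
# Rough sketch of the three management-plane calls described above: list
# regions, list models available per region, and read current usages/quota.
# The api-version values are assumptions -- substitute documented versions.
import os
import requests
from azure.identity import DefaultAzureCredential

sub = os.environ["AZURE_SUBSCRIPTION_ID"]
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}
base = "https://management.azure.com"


def get(url: str, api_version: str) -> dict:
    r = requests.get(url, headers=headers, params={"api-version": api_version}, timeout=30)
    r.raise_for_status()
    return r.json()


# 1. Locations API: all regions available to the subscription.
regions = [loc["name"] for loc in get(f"{base}/subscriptions/{sub}/locations", "2022-12-01")["value"]]

# 2 + 3. For each region, list Cognitive Services models and current usages.
for region in regions[:5]:  # limited to a few regions for the sketch
    models = get(
        f"{base}/subscriptions/{sub}/providers/Microsoft.CognitiveServices/locations/{region}/models",
        "2023-05-01",
    ).get("value", [])
    usages = get(
        f"{base}/subscriptions/{sub}/providers/Microsoft.CognitiveServices/locations/{region}/usages",
        "2023-05-01",
    ).get("value", [])
    print(region, "models:", len(models), "usage meters:", len(usages))
```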
The Future of AI Is: Model Choice - From Structured Process To Seamless Platform
Language models are at the heart of generative AI applications. But in just over a year, we've moved from a handful of model providers to 1M+ community variants and more, resulting in a paradox of choice that ends in decision fatigue. In this blog post, we'll look at how developers can rethink their model selection strategy with a structured decision-making process and a seamless development platform to help them. This post is part of the Future of AI series jumpstarted by Marco Casalaina with his post on Exploring Multi-Agent AI Systems.
Azure AI Foundry: Empowering Scientific Discovery with AI
Azure AI Foundry is enabling scientific discovery with the introduction of three groundbreaking models from Microsoft Research: Aurora, MatterSim, and TamGen. These models, available starting January 20, 2025, offer transformative capabilities in weather forecasting, materials simulation, and drug design. By providing access to these advanced tools, Azure AI Foundry is enabling researchers and developers to explore new frontiers and accelerate the pace of innovation.