Digital Deep Dive: Copilot Control System (CCS)
Join us for two days of insights, demos, and deep dives on Copilot Control System (CCS)! Learn all about how to effectively secure, manage, and analyze the use of Microsoft 365 Copilot, Copilot Chat, Microsoft Copilot Studio, and agents across your organization. We will kick it off with a CCS overview and official welcome to this two-day digital skilling event. The rest of day 1 will be dedicated to security, governance, and preparing your tenant for Microsoft 365 Copilot and agents. Come back on day 2 for deep dives on Copilot controls and agent lifecycle, measurement and reporting, and user enablement. Finally, we'll wrap up with a recap and next steps to help you turn what you've learned into action.

Agenda

Day 1 | Tuesday, June 17, 2025
Start time | Session title
8:00am PT | Introduction to Copilot Control System
8:30am PT | Secure Copilot and agents: Practical steps for addressing oversharing concerns utilizing SAM and Purview
9:30am PT | Prevent data loss and insider risks for Microsoft 365 Copilot with Microsoft Purview
10:30am PT | Understanding web search controls in M365 Copilot
11:00am PT | Build enterprise-scale agents securely

Day 2 | Wednesday, June 18, 2025
Start time | Session title
8:00am PT | Copilot agent management and controls
9:00am PT | Empower Copilot Studio makers with enterprise-grade management controls
10:00am PT | Measure usage and impact of Copilot and agents
11:00am PT | Practical guidance for AI and collaboration adoption
12:00pm PT | That's a wrap! What's next for your Copilot Control System journey

Hope to see you there! Come ready to learn and ask our experts all of your burning questions!

Secure Microsoft 365 Copilot and agents: Practical steps for addressing oversharing concerns utilizing SAM and Purview
Transformative AI solutions can boost your business's productivity—and with a few simple steps, you can unlock these advantages without jeopardizing important data. By taking a structured approach to data security and compliance, you can ensure these cutting-edge tools share information safely and appropriately to mitigate risks. Join us at this session to receive thorough technical guidance for preventing data oversharing within your organization. You'll explore the common causes of oversharing—such as unrestricted privacy settings, broken permission inheritance, and mismanaged domain groups—and how you can securely realize the value of AI with Microsoft 365 Copilot.

In this session, you will learn:
- Immediate steps you can take to rapidly deploy and secure Copilot without the risk of oversharing
- Methods to ensure your organization is secure, including enhancing your data security stance and boosting visibility into security gaps
- Policies that reduce the potential for oversharing by lowering data volume and improving compliance

How do I participate?
No registration is required. Select "Add to calendar" to save the date, select "Attend" to receive event reminders, then join us live on June 17th ready to learn and ask questions! Feel free to post questions and comments below. You can post during the live session and/or in advance if the timing doesn't work for you.

This session is part of the Digital Deep Dive: Copilot Control System (CCS). Add it to your calendar, select "Attend" for event reminders, and check out the other sessions! Each session has its own page where the session livestream and discussion space will be available at the start time. You will also be able to view sessions on demand after the event.

DiskANN on Azure Database for PostgreSQL – Now Generally Available
By Abe Omorogbe, Senior PM

We're thrilled to announce the General Availability (GA) of DiskANN for Azure Database for PostgreSQL, unlocking fast, scalable, and cost-effective vector search for production workloads. Building on momentum from our private and public previews, this release brings major upgrades that directly reflect customer feedback: better performance, lower memory usage, and greater flexibility for advanced GenAI applications. Whether you're working with massive datasets or deploying in resource-constrained environments, DiskANN now offers an index that scales effortlessly. DiskANN delivers up to 10x faster speed, 4x lower costs, and up to 96x lower memory footprint compared to the industry-standard pgvector HNSW.

In this post, we'll highlight the following:
- Common pain points in large-scale vector search
- New features in the GA release
- A dive into product quantization (PQ), the main optimization that powers DiskANN's performance
- Internal testing results that demonstrate how DiskANN stacks up against alternatives like HNSW

Read on to see why DiskANN is ready for your most demanding vector search workloads.

What is DiskANN?
Developed by Microsoft Research and battle-tested across global services like Bing and Microsoft 365, DiskANN is a high-performance approximate nearest neighbor (ANN) search algorithm built for scalable vector search. It delivers the high recall, high throughput, and low latency required by today's most demanding agentic AI and retrieval-augmented generation (RAG) workloads. DiskANN offers the following benefits:
- Low latency: Its graph-based index structure minimizes SSD reads during search, enabling high throughput and consistently low query latency.
- Cost efficiency: DiskANN's design reduces memory usage (up to 96x smaller than standard indexing methods), helping lower infrastructure costs.
- Scalability: Optimized for massive datasets, DiskANN is built to efficiently handle millions of vectors, making it ideal for production-scale applications.
- Accuracy: DiskANN delivers highly accurate results without sacrificing speed or precision.
- Integration: DiskANN works natively with Azure Database for PostgreSQL, leveraging the power and flexibility of PostgreSQL.

Breaking Through the Limits of Large-Scale Vector Search
Vector search has become essential for powering AI applications, from recommendation systems to agentic AI, but scaling it has been anything but easy. If you've worked with large vector datasets, you've likely run into the same roadblocks:
- Your data is too big to fit in memory, leading to slower searches.
- Building indexes takes forever and eats up your resources.
- You have no idea how long the indexing process will take or where it's stuck.
- Your embedding model outputs high-dimensional vectors, but your database can't handle them.
- Database bills spiral out of control due to the memory-intensive machines needed for efficient search on a large dataset.

Sound familiar? These are not edge cases; they're the standard challenges faced by anyone trying to scale Postgres's vector search capabilities into real-world production workloads. With the General Availability (GA) release of DiskANN for Azure Database for PostgreSQL, we're tackling these problems head-on, bringing production-ready scale, speed, and efficiency to vector search. Let's break down how.

Product Quantization (PQ) for Lower Memory and Storage Costs (preview)
One of the biggest blockers in vector search is fitting your data into memory.
When you use pgvector's HNSW and your vector data doesn't fit in memory, searches can trigger I/O-intensive operations that degrade performance. With the GA release, DiskANN introduces a preview version of Product Quantization (PQ)—a powerful vector compression technique that makes it possible to store and search massive datasets with a dramatically smaller memory footprint. With PQ enabled, you get:
- Reduced memory usage — enabling datasets that previously couldn't fit in RAM.
- Lower memory costs — compressed vectors mean smaller indexes and cheaper monthly bills.
- Faster performance — less I/O pressure means lower latency and higher throughput.

Example results
In our internal testing, we used pg_diskann on Azure Database for PostgreSQL to build an index of 35 million 768-dimensional vectors and ran benchmarking queries on an 8-core, 32 GB machine. The results: a 32x lower memory footprint than pgvector's HNSW and 4x lower cost, because significantly fewer resources are needed to run vector search queries effectively compared to HNSW. Compared to standard HNSW, pg_diskann also delivers up to 10x lower latency at 95% recall, especially in large-scale scenarios with millions of vectors. When testing higher-quality embeddings such as OpenAI text-embedding-3-large (3,072 dimensions), we saw up to a 96x lower memory footprint thanks to extremely efficient compression: in this scenario, PQ compresses each vector from 12 KB (3,072 dimensions at 4 bytes each) to just 128 bytes per quantized vector. Sign up for the preview today to get access.

Go Big: Supports vectors up to 16,000 dimensions
Another big blocker for customers developing advanced GenAI applications with pgvector is that HNSW only supports indexing vectors up to 2,000 dimensions, a limit that constrains applications built on high-dimensional embedding models that deliver high accuracy (e.g., text-embedding-3-large). With this release, DiskANN supports vectors of up to 16,000 dimensions when product quantization is enabled, unblocking popular embedding models with more than 2,000 dimensions such as text-embedding-3-large, E5-mistral-7b-instruct, and NV-Embed-v2.

Faster Index Builds, Smarter Memory Usage
Index creation has historically been a pain point for large datasets, especially in previous versions of pg_diskann. In this GA release, we've significantly accelerated the build process through:
- Improved memory management that uses `maintenance_work_mem` more efficiently.
- Optimized algorithms that reduce disk I/O and CPU usage during indexing.

We've also published detailed documentation to guide you through best practices for faster index builds. The result? Index builds that are not only faster but also more predictable and resource friendly. When indexing 1 million vectors, the DiskANN GA version is roughly 2x faster: it took 696.0630 seconds versus 1172.3314 seconds with our DiskANN preview build.

Real-Time Index Progress Tracking
Previously, building large indexes with pg_diskann felt like working in the dark. Now, with improved progress reporting support, you can track exactly how far along your index build is—making it easier to monitor, plan, and troubleshoot during creation.

Checking index build progress with psql in VS Code

Use the following command in psql to check pg_diskann index build progress:

SELECT phase, round(100.0 * blocks_done / nullif(blocks_total, 0), 1) AS "%"
FROM pg_stat_progress_create_index;

Using DiskANN on Azure Database for PostgreSQL
Using DiskANN on Azure Database for PostgreSQL is easy.
Enable the pgvector & diskann extensions: Allowlist the pgvector and diskann extensions within your server configuration.

Activating DiskANN in Azure Database for PostgreSQL

Create the extension in Postgres: Create the pg_diskann extension on your database along with any dependencies.

CREATE EXTENSION IF NOT EXISTS pg_diskann CASCADE;

Create a vector column: Define a table to store your vector data, including a column of type vector for the vector embeddings.

CREATE TABLE demo (
  id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  embedding public.vector(3)
);

INSERT INTO demo (embedding) VALUES
  ('[1.0, 2.0, 3.0]'),
  ('[4.0, 5.0, 6.0]'),
  ('[7.0, 8.0, 9.0]');

Index the vector column: Create an index on the vector column to optimize search performance. The pg_diskann PostgreSQL extension is compatible with pgvector; it uses the same types, distance functions, and syntactic style. To use Product Quantization, sign up for the preview today!

CREATE INDEX demo_embedding_diskann_idx ON demo USING diskann (embedding vector_cosine_ops);

Perform vector searches: Use SQL queries to search for similar vectors based on various distance metrics (cosine similarity in the example below).

SELECT id, embedding
FROM demo
ORDER BY embedding <=> '[2.0, 3.0, 4.0]'
LIMIT 5;

Ready to Dive In?
DiskANN's GA release transforms PostgreSQL into a fully capable vector search platform for production AI workloads. It delivers:
- Support for millions of compressed vectors
- Compatibility with pgvector
- Reduced memory and storage costs
- Faster index creation
- Support for high-dimensional vectors
- Real-time indexing progress visibility

Whether you're building an enterprise-scale retrieval system or optimizing costs in a lean AI application, use DiskANN today and explore the future of AI-driven applications with the power of Azure Database for PostgreSQL!

Run our end-to-end sample RAG app with DiskANN

Learn More
DiskANN on Azure Database for PostgreSQL is ready for production workloads. With Product Quantization, support for high-dimensional vectors, faster index creation, and clearer operational visibility, you can now scale your vector search applications even further — all while keeping costs low. To learn more, check out our documentation and start building today!
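To round out the walkthrough, here is a short sketch of querying the demo table above from application code. It is not part of the original steps: it assumes the psycopg (version 3) Python driver, the connection details are placeholders, and in a real application the query vector would come from an embedding model rather than being hard-coded.

import psycopg

# Placeholder connection string; replace with your Azure Database for PostgreSQL details.
CONN_STR = "host=<server>.postgres.database.azure.com dbname=<db> user=<user> password=<pwd> sslmode=require"

# Normally produced by an embedding model; hard-coded here to match the 3-D demo table.
query_vector = [2.0, 3.0, 4.0]
vector_literal = str(query_vector)  # pgvector accepts the '[x, y, z]' text format

with psycopg.connect(CONN_STR) as conn:
    with conn.cursor() as cur:
        # Ordering by the <=> (cosine distance) operator lets the diskann index
        # on "embedding" serve the nearest-neighbor search.
        cur.execute(
            "SELECT id, embedding <=> %s::vector AS distance "
            "FROM demo "
            "ORDER BY embedding <=> %s::vector "
            "LIMIT 5;",
            (vector_literal, vector_literal),
        )
        for row_id, distance in cur.fetchall():
            print(row_id, distance)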
Build your code-first agent with Azure AI Foundry: Self-Guided Workshop

Build your first Agent App
Agentic AI is changing how we build intelligent apps - enabling software to reason, plan, and act for us. Learning to build AI agents is quickly becoming a must-have skill for anyone working with AI.

Self-Guided Workshop
Try our self-guided "Build your code-first agent with Azure AI Foundry" workshop to get hands-on with Azure AI Agent Service. You'll learn to build, deploy, and interact with agents using Azure's powerful tools.

What is Azure AI Agent Service?
Azure AI Agent Service lets you create, orchestrate, and manage AI-powered agents that can handle complex tasks, integrate with tools, and deploy securely.

What Will You Learn?
- The basics of agentic AI apps and how they differ from traditional apps
- How to set up your Azure environment
- How to build your first agent
- How to test and interact with your agent
- Advanced features like tool integration and memory management

Who Is This For?
Anyone interested in building intelligent, goal-oriented agents — developers, data scientists, students, and AI enthusiasts. No prior experience with Azure AI Agent Service required.

How Does the Workshop Work?
Tip: Select the self-guided tab in Getting Started for the right instructions.
- Step-by-step guides at your own pace
- Code samples and templates
- Real-world scenarios

Get Started
See what agentic AI can do for you with the self-guided "Build your code-first agent with Azure AI Foundry" workshop. Build practical skills in one of AI's most exciting areas. Try the workshop and start building agents that make a difference!

Additional Resources
- Azure AI Foundry Documentation
- Azure AI Agent Service Overview

Questions or feedback?
Visit the issues page.

Happy learning and building with Azure AI Agent Service!
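For a feel of what a code-first agent looks like, below is a minimal sketch using the Azure AI Projects Python SDK (azure-ai-projects). This is illustrative rather than the workshop's own code: the package is in preview, method and parameter names may differ by SDK version, and the connection string and model deployment name are placeholders.

import os
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

# Connect to your Azure AI Foundry project (placeholder environment variable).
project = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["AZURE_AI_PROJECT_CONNECTION_STRING"],
)

# Create an agent backed by a model deployment in the project ("gpt-4o" is a placeholder name).
agent = project.agents.create_agent(
    model="gpt-4o",
    name="workshop-agent",
    instructions="You are a helpful assistant.",
)

# Agents converse through threads: create one, add a user message, then process a run.
thread = project.agents.create_thread()
project.agents.create_message(thread_id=thread.id, role="user", content="Hello, agent!")
# Note: some preview SDK versions name this parameter assistant_id instead of agent_id.
run = project.agents.create_and_process_run(thread_id=thread.id, agent_id=agent.id)
print("Run status:", run.status)

# List the messages on the thread (each message's content is a list of content parts).
for message in project.agents.list_messages(thread_id=thread.id).data:
    print(message.role, message.content)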
Azure AI Foundry/Azure AI Service - cannot access agents

I'm struggling to get agents that were defined in AI Foundry (based on Azure AI Service) to work via the API. When I define an agent in a project in AI Foundry, I can use it in the playground via the web browser. The issue appears when I try to access it via the API (a call from Power Automate): when executing a Run on the agent, I get a message that the agent cannot be found. The issue doesn't exist when using Azure OpenAI and defining assistants; I can use those both via the API and in the web browser. I guess the extra layer of management, the project, might be the issue here. I've seen the Python SDK used, where the first call connects to a project and then gets the agent. Has anyone experienced the same? Is there a way to select and run an agent via the API?
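For reference, the project-scoped flow I've seen in the Python SDK looks roughly like this (a rough sketch from memory; the exact package, method, and parameter names may differ by SDK version, and the connection string and agent ID are placeholders):

from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

# The client is scoped to a specific AI Foundry project; agents live inside that
# project, which is why a call made outside the project context can't find them.
project = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str="<project connection string>",  # placeholder
)

agent = project.agents.get_agent("<agent id from AI Foundry>")  # placeholder ID

thread = project.agents.create_thread()
project.agents.create_message(thread_id=thread.id, role="user", content="Hi")
run = project.agents.create_and_process_run(thread_id=thread.id, agent_id=agent.id)
print(run.status)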
Copilot Control System innovations in the Microsoft 365 Copilot Wave 2 Spring release

Today's Microsoft 365 Copilot Wave 2 Spring release highlights new capabilities for the new era of human-agent collaboration. Teams are now empowered to address unique business needs by creating and customizing secure, enterprise-grade agents faster and more easily than ever—and as an IT pro you're probably wondering how you're going to manage that. We hear you, and we're excited to share new capabilities in the Copilot Control System (CCS) to help IT administrators and security professionals start solving the challenges of cost management, security and governance, and lifecycle management, so that you can effectively secure, measure, and manage the use of Microsoft 365 Copilot and agents to maximize their impact.

Security & Governance
Protecting your organization's data from oversharing, data loss, and insider risks is crucial for effective agent deployment. Apps and agents in Microsoft Purview Data Security Posture Management for AI (public preview available June 2025) empowers administrators to identify and manage potential data security risks associated with agents. It allows them to understand any sensitive data accessed by agents, identify gaps in data security policies, and take actions to address those gaps—all from a single dashboard.

Apps and agents in the Data Security Posture Management for AI dashboard

Admins can access comprehensive insights about a specific agent and see how the agent is protected by Purview data security policies. The detailed agent view indicates which policies are applied to the agent and provides the option for admins to create a policy directly from this view to address any coverage gaps. Here are some examples of insights related to coverage gaps:
- Alerts for risky agent interactions: Shows whether the agent has received prompts containing sensitive data or risky information.
- Control inappropriate use of AI agents: Detects any inappropriate or unethical use of the agent.
- Block sensitive data from AI agent processing: Displays the confidentiality level of data the agent interacts with, providing an option to block certain levels from being accessed by the agent.

Additional risk-related insights are also available from the detailed agent view, empowering admins with visibility to make informed decisions about how agents should interact with the organization's data.

AI agent details view

Measurement & Reporting
Copilot Analytics is our set of reporting capabilities designed to help organizations measure the impact of AI on their business. We've created a set of reports to understand agent usage and maximize agents' effectiveness: reports for IT admins to help manage agents across the organization, and leadership-level reporting to assess agent productivity and impact. Understanding agent usage and consumption is fundamental to managing potential sprawl and associated costs. IT and AI admins need visibility into which agents people are using and how they're being used to effectively manage agents. The Microsoft 365 admin center offers rich, built-in agent usage and consumption reporting that enables admins to monitor agent usage across their organization. Admins can review and understand overall agent usage in Microsoft 365 Copilot Chat, including total active users, segmentation by Microsoft 365 Copilot licensed and unlicensed users, total active agents by publisher, and agent- and user-level details.
The new Message consumption report (targeted for public preview in May 2025) offers visibility into metered consumption of agents in Microsoft 365 Copilot Chat, including total messages consumed, cumulative and daily time series, and user- and agent-level details. It provides visibility into how many agent messages are being consumed, helping you understand and manage associated costs. The Agent usage report (targeted for public preview in June 2025) gives admins the ability to review and understand overall agent usage in Microsoft 365 Copilot Chat, including total active users, segmentation by Microsoft 365 Copilot licensed and unlicensed users, total active users and active agents by publisher, and agent- and user-level details.

Agents report in the Microsoft 365 admin center

Both the Message consumption report and the Agent usage report provide powerful insights to inform IT decision making with regard to agent deployment and adoption. Complementing the tools available in the Microsoft 365 admin center, the new Copilot Studio Agent Report (now available in public preview) in Viva Insights provides leaders and IT pros with a comprehensive view of Copilot Studio agent usage, performance, and value. In June 2025 we're adding a new capability to this report: customizable business impact reporting that correlates agent usage to your specific business metrics—for example, see how your support agents reduce ticket creation, or how your sales agents unlock bigger opportunities—giving you the ability to measure AI ROI within your organization. For more details on these reports, please visit our blog on agent reporting in Copilot Analytics.

Management Controls
The critical third step to addressing the challenges of cost management and security is effective lifecycle management. The Microsoft 365 admin center provides admins with the tools they need to manage agent deployment; enable, disable, or block agents for specific users or groups; and ensure that agents are used efficiently and in alignment with organizational needs. Today, agent management is available within Integrated apps and will soon be made available in a dedicated management pane within Copilot Control System. New updates to the agent inventory management layout and workflows (targeted for release in May CY2025) make it easier to see the critical information you need to manage your agent and connector inventory. Agent management (available Q3 CY25) enables admins to easily make updates and changes to the list of users approved to create and use agents. Similarly, you can see any users who haven't been approved to use your team's agents and quickly decide whether to grant them access.

Copilot settings in the Microsoft 365 admin center, showing a control to enable or disable users or groups to install and create agents in Microsoft 365 Copilot Chat

With Shared Agent Inventory Management (targeted for release in May CY2025), admins can view reports and take additional actions on the insights, such as blocking or unblocking agents as appropriate. Admins can also export their agent inventory to CSV and manage agents via PowerShell. These latest capabilities equip IT administrators and security professionals with the tools you need to support your organization's AI-powered transformation. We look forward to continuing to provide insights and controls across Copilot Control System that empower you to take informed actions and create a managed, trusted environment in which agent creators—and agent users—can thrive.
To learn more about Copilot Control System, watch the demo or read the announcement blog.

AI Sparks: Unleashing Agents with the AI Toolkit
The final episode of our "AI Sparks" series delved deep into the exciting world of AI agents and their practical implementation. We also covered a fair part of the Model Context Protocol (MCP) with the AI Toolkit extension for VS Code.

We kicked off by charting the evolutionary path of intelligent conversational systems. Starting with the rudimentary rule-based Basic Chatbots, we then explored the advancements brought by Basic Generative AI Chatbots, which offered contextually aware interactions. We then explored Retrieval-Augmented Generation (RAG), highlighting its ability to ground generative models in specific knowledge bases, significantly enhancing accuracy and relevance. We also discussed the limitations of each of these techniques.

The session then centered on its main theme: agents and agentic frameworks. We uncovered the fundamental shift from basic chatbots to autonomous agents capable of planning, decision-making, and executing tasks. We moved on to a detailed discussion of the distinction between single-agent systems, where one core agent orchestrates the process, and multi-agent architectures, where multiple specialized agents collaborate to achieve complex goals.

A key part of building robust and reliable AI agents, as we discussed, revolves around carefully considering four critical factors. First, Knowledge: providing agents with the right context is paramount for them to operate effectively and make informed decisions. Second, Actions: equipping agents with access to the appropriate tools allows them to execute tasks and achieve desired outcomes. Third, Security is non-negotiable; ensuring agents have access only to the data and services they genuinely need is crucial for maintaining privacy and preventing unintended actions. Finally, establishing robust Evaluation mechanisms is essential to verify that agents are completing tasks correctly and meeting the required standards. These four pillars – Knowledge, Actions, Security, and Evaluation – form the bedrock of any successful agentic implementation.

To illustrate the transformative power of AI agents, we explored several interesting use cases and applications. These ranged from intelligent personal assistants capable of managing schedules and automating workflows to sophisticated problem-solving systems in domains like customer service.

A significant portion of the session was dedicated to practical implementation through demonstrations. We highlighted key frameworks that are empowering developers to build agentic systems:
- Semantic Kernel: We highlighted its modularity and rich set of features for integrating various AI services and tools.
- AutoGen Studio: The focus here was on its capabilities for facilitating the creation and management of multi-agent conversations and workflows.
- Agent Service: We discussed its role in providing a more streamlined and managed environment for deploying and scaling AI agents.

A major point of attraction was that these were demonstrated using local LLMs hosted with the AI Toolkit. This showcased the ease with which developers can use the VS Code AI Toolkit to build and experiment with agentic workflows directly within their familiar development environment. Finally, we demystified the concept of the Model Context Protocol (MCP) and demonstrated how seamlessly it can be implemented using the Agent Builder within the VS Code AI Toolkit, building a basic website with MCP.
This practical demonstration underscored the toolkit's power in simplifying the development of complex solutions that can maintain context and engage in more natural, multi-step interactions. The "AI Sparks" series concluded with a discussion that left attendees with a clearer understanding of the evolution, potential, and practicalities of AI agents. The session underscored that we are on the cusp of a new era of intelligent systems that are not just reactive but actively work alongside us to achieve goals. The tools and frameworks are maturing, and the possibilities for agentic applications are sparking innovation across various industries. It was an exciting journey, and the engagement during the final session on agents truly highlighted the transformative potential of this field.

"AI Sparks" Series Roadmap
The "AI Sparks" series delved deeper into specific topics using the AI Toolkit for Visual Studio Code, including:
- Introduction to AI Toolkit and feature walkthrough: An introduction to the AI Toolkit extension for VS Code, a powerful way to explore and integrate the latest AI models from OpenAI, Meta, DeepSeek, Mistral, and more.
- Introduction to SLMs and local models with use cases: Explore Small Language Models (SLMs) and how they compare to larger models.
- Building RAG Applications: Create powerful applications that combine the strengths of LLMs with external knowledge sources.
- Multimodal Support and Image Analysis: Working with vision models and building multimodal applications.
- Evaluation and Model Selection: Evaluate model performance and choose the best model for your needs.
- Agents and Agentic Frameworks: Exploring the cutting edge of AI agents and how they can be used to build more complex and autonomous systems.

The full playlist of the series, with all the episodes of "AI Sparks", is available at the AI Sparks Playlist. Continue the discussion and post questions in the Microsoft AI Discord Community, where we have a dedicated AI-sparks channel. All the code samples can be found on AI_Toolkit_Samples. We look forward to continuing these insightful discussions in future series!