
Agentic extensibility is expanding the frontier of enterprise AI. By creating agents that surface knowledge, take actions, and even reinvent workflows, people can personalize AI’s power like never before.
But how do you move into the agentic future without putting your organization and employees at risk? How do you encourage citizen developers to create agents freely while maintaining security, privacy, and compliance?
At Microsoft Digital, the company’s IT organization, we’re putting practical governance structures in place to ensure our internal agents are useful, safe, and properly scoped. Through employee empowerment with guardrails, we’re unlocking the potential of the agentic era.
New frontiers, new challenges

Plenty of organizations are still getting used to the idea of AI in the workplace. Adding agents on top of that can ignite fears for IT practitioners, security teams, privacy experts, and other professionals whose job it is to keep their organizations functioning safely and smoothly.
Most of those fears are around a new territory of unknowns. Organizations want to understand if and how agents will exacerbate existing vulnerabilities and violate policies. They’re unsure whether centralized governance is enough to secure agents at scale. And they’re worried about agent sprawl—too many agents shared too freely, with too little oversight.
The democratic nature of agent building plays a major role.
“We’re now putting generative AI capabilities into the hands of people with little to no technical background, and that’s incredible from a productivity and innovation standpoint,” says Aisha Hasan, Power Platform and Copilot Studio product manager for Microsoft Digital. “But it also makes it simpler for people to do potentially risky things, because AI lets them do it that much faster and easier.”
To address these risks, our governance team at Microsoft Digital has identified the greatest challenges facing the company and our customers. They include:
- Ensuring users and apps don’t gain access to privileged information, and applying controls effectively
- Keeping employees from creating agents that violate company policies
- Balancing the freedom for employees to share their creations against the need to prevent agent sprawl
- Delineating which agents are authoritative, safe, and centralized for enterprise functions
- Inventorying agents to provide lifecycle management
Managing all of these challenges comes down to a delicate balancing act.
“The key is to achieve an equilibrium between innovation and governance,” Hasan says. “A strong policy framework is the foundation of good governance.”
Applying our existing governance groundwork to agents
At Microsoft Digital, our approach to governing agents has grown out of years of practices and policies that we’ve already matured with other products, including AI-powered tools like Microsoft 365 Copilot. We’re also learning as we build, identifying new issues and edge cases as they emerge.
We prioritize three chief categories to keep our employees and organization safe:
- Security: We’ve established standards for data classification, policies on handling confidential information, and other security measures to protect data from unauthorized access, misuse, and disclosures. Microsoft Purview powers these capabilities through data labeling, rights management, and data loss prevention.
- Privacy: Privacy compliance measures keep personal data protected and ensure agents adhere to regulatory frameworks in regions where we operate. We conduct regular privacy assessments for all applications, and that applies to high-impact agents as well.
- Regulatory compliance: Regulatory compliance assessments ensure agents meet legal standards. To keep us up to speed, our Legal and Compliance teams carefully monitor AI guidelines, regulations, and laws as they evolve so we can understand and incorporate them into our assessments.
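The security priority above hinges on sensitivity labels governing where data can flow. As a minimal sketch of the idea, the check below blocks content labeled above an agent’s approved ceiling. The label names, their ordering, and the function are illustrative assumptions, not Microsoft Purview’s actual taxonomy or API:

```python
# Hypothetical label hierarchy, ordered from least to most sensitive.
# Real deployments would read labels and policies from Microsoft Purview.
LABEL_ORDER = ["Public", "General", "Confidential", "Highly Confidential"]

def agent_may_read(content_label: str, agent_ceiling: str) -> bool:
    """Return True if content at `content_label` may flow into an agent
    whose policy ceiling is `agent_ceiling`."""
    return LABEL_ORDER.index(content_label) <= LABEL_ORDER.index(agent_ceiling)

# A broadly shared personal agent might be capped at "General", while a
# vetted enterprise agent could be approved up to "Confidential".
print(agent_may_read("General", "General"))       # True
print(agent_may_read("Confidential", "General"))  # False
```

The point of the sketch is the direction of the comparison: an agent’s reach sets a ceiling, and anything labeled above it stays out.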
Expanding these priorities to agents is an unfolding process for our Microsoft Digital Governance and Security teams.
“You have to learn from what you’ve already successfully worked into your tenant,” says Amy Rosenkranz, principal product manager responsible for Copilot extensibility with Microsoft Digital. “What are your core governance principles, and what’s your risk tolerance for different capabilities like openness to external systems or high-compliance environments?”
We incorporated elements of our tenant’s minimum bar for governance into how we secure agents. Those include Microsoft Information Protection, a functional inventory, activity logging, lifecycle management, and the ability to properly isolate agents against crossing data boundaries.
Although our all-up strategy is to govern at the container level, the added functionality of agents demands that we also introduce further controls like sharing limits, breadth of knowledge sources, agent metadata, and information about an agent’s behaviors.
Our intention is always to act as proactively as possible while putting reactive structures in place to catch any issues that arise. After all, this is a new technology, so there are bound to be some surprises. By combining all of these elements, we’ve landed on four core principles for governing agents:
- We empower employees to create and share simple, low-risk agents: We provide a safe space and personal flexibility that allows individual employees to experiment without implicating company data or content that users don’t own.
- We capture and vet sensitive data flows at the enterprise level: More complex or far-reaching agents owned by teams or lines of business need enterprise documentation to account for external audits or security and privacy validation. Builders need to demonstrate that they’ve thought through the security and privacy implications of their agents, so these projects go through approval process flows similar to any other professionally developed apps before we trust them with potentially sensitive data.
- We protect data designated confidential or higher: We contain data flows to tenant mandates and only trust suitable storage destinations for content. That depends on the ability to gate which connectors can work with particular source data and sensitivity labels.
- We honor the enterprise lifecycle: Both user-based and attestation-based lifecycles come into play. We treat agents that individual users own like any other user app and delete them when the employee leaves the organization. Agents owned by teams have a lifecycle defined by the tenant and tied to attestation, the software development lifecycle (SDL), and accountability confirmations.
“It all goes back to our core principles, to what we’re trying to achieve,” says David Johnson, tenant and compliance architect with Microsoft Digital. “We’re focused on what we allow for our employees, what governance means in this environment, and expanding these principles out to cover individual agents.”
Covering the full spectrum of agents with a toolkit of policies and protections
Because agents are so diverse, generalized governance will only get you so far. There’s an entire matrix of different parameters that apply to any agent, and they all require different policies. Those parameters include:
- Different types of reach: Personal agents, limited sharing like dev environments, or broad sharing
- The agent-building tool: Microsoft 365 Copilot agent builder, SharePoint agent builder, Microsoft Copilot Studio, or tools geared to more professional developers
- Knowledge sources: Public sites, SharePoint and OneDrive, directly uploaded files, enterprise apps and systems, and third-party products
- Enterprise sanctioning: Whether we promote agents into officially published internal tools that represent authoritative applications
Each of these parameters creates a pivot that we need to manage through governance, and we’ve painstakingly assembled a set of policies and controls to account for them. As our understanding and use of agents advances, we’re continually updating how we match their characteristics and capabilities with relevant policies and any applicable reviews.
Taking a matrixed approach: Our Microsoft Digital agent governance framework
The following list demonstrates the matrix of factors that determines how we govern different kinds of agents created using different tools. This matrix helps our employees understand the agent creation process and helps Microsoft Digital maintain safety and control.
- SharePoint agent builder
- What users can build: Knowledge-only agents that reason over Microsoft 365 collaboration data and are gated to the SharePoint environment where they’re created
- Technical proficiency: No-code
- Knowledge sources: SharePoint, custom instructions
- Capabilities: Not applicable
- Actions and plug-ins: Not applicable
- Sharing and publishing: Copilot navigation in SharePoint, sharing by link, sharing in Microsoft Teams chat
- Custom engine or bring-your-own model: Not applicable
- Reviews: No review needed. IT does not gate knowledge-only agents outside of governance tied to SharePoint sites. Microsoft Digital honors reactive take-down requests like any other self-service construct but does not provide proactive gating.
- Copilot Studio agent builder
- What users can build: Knowledge-only agents that feature graph connectors from a pre-approved catalog to expose additional data
- Technical proficiency: No-code
- Knowledge sources: SharePoint, external websites, custom instructions, additional internal knowledge sources via graph connectors
- Capabilities: Code interpreter, image generator
- Actions and plug-ins: Not applicable
- Sharing and publishing: Individual use, sharing by link
- Custom engine or bring-your-own model: Not applicable
- Reviews: No review necessary. These agents only access Copilot-available graph data. Microsoft Digital honors reactive take-down requests like any other self-service construct but does not provide proactive gating.
- Copilot Studio
- What users can build: Task and custom agents that connect to more systems through connectors and orchestration logic to handle more complex scenarios. We may publish agents at this level of complexity and utility to our agent catalog for wide organizational use.
- Technical proficiency: Low-code or pro-code
- Knowledge sources: SharePoint, external websites, custom instructions, additional internal knowledge sources via advanced graph connectors, Power Platform connectors
- Capabilities: Not applicable
- Actions and plug-ins:
- Retrieval and task agents: Read-only actions
- Custom agents: Read or write actions using Power Platform connectors
- Sharing and publishing:
- Retrieval or task agents in a personal developer environment: Sharing by link with up to 10 people
- Custom agents: Publishing to 10 people or the agent catalog in Copilot Chat
- Broad publishing: Requires a review similar to professionally developed apps, including an understanding of the agent’s data implications
- Custom engine or bring-your-own model: Custom Azure OpenAI large language models (LLMs)
- Reviews: Custom agents for our catalog require reviews for security, privacy, accessibility, responsible AI, and an environment-specific maker stack review.
- Teams toolkit in Visual Studio Code
- What users can build: Retrieval, task, and custom agents that may or may not connect to more systems through connectors and orchestration logic to handle more complex scenarios. We may publish agents produced at this level of complexity and utility as Teams apps or to our agent catalog for wide organizational use.
- Technical proficiency: Pro-code
- Knowledge sources: SharePoint, external websites, custom instructions, additional internal knowledge sources via graph connectors
- Capabilities: Code interpreter, image generator, Teams chats and channels
- Actions and plug-ins: API actions
- Sharing and publishing: Publishing as an app in Teams or as an agent in the catalog in Copilot Chat
- Custom engine or bring-your-own model: Custom Azure OpenAI large language models (LLMs)
- Reviews: Custom agents for publishing as a Teams app or our catalog require reviews for security, privacy, accessibility, responsible AI, and an environment-specific maker stack review.
Want to see this matrix in one chart? Download the full version here.
In addition to mapping out our policies for governing agents, this chart illustrates how we see their relative utility across our organization. From left to right, it demonstrates an escalation from personally useful to organizationally useful agents. Their governance policies and controls escalate accordingly.
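One way to make a matrix like this actionable is to encode it as a lookup that maps an agent’s building tool and publishing scope to the reviews it requires. The sketch below condenses the rows above; the key names and the fail-closed default are illustrative choices, and the full matrix remains the authoritative source:

```python
# Condensed, illustrative encoding of the governance matrix above.
FULL_REVIEWS = ["security", "privacy", "accessibility", "responsible AI",
                "maker stack"]

REQUIRED_REVIEWS = {
    # (tool, publishing scope) -> reviews required before publishing
    ("sharepoint_agent_builder", "site"): [],
    ("copilot_studio_agent_builder", "link"): [],
    ("copilot_studio", "personal"): [],
    ("copilot_studio", "catalog"): FULL_REVIEWS,
    ("teams_toolkit", "catalog"): FULL_REVIEWS,
}

def reviews_for(tool: str, scope: str) -> list[str]:
    # Unknown combinations default to the full review set (fail closed).
    return REQUIRED_REVIEWS.get((tool, scope), FULL_REVIEWS)

print(reviews_for("copilot_studio", "catalog"))
print(reviews_for("sharepoint_agent_builder", "site"))  # [] — no review needed
```

Defaulting unknown combinations to the full review set mirrors the escalation in the matrix: the broader the reach, the heavier the gate.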
“Well-established application policies help us drive adoption and management for agents,” says Mykhailo Sydorchuk, a Customer Zero lead for Microsoft 365 integrated experiences at Microsoft Digital. “Fortunately, most organizations have well-defined security, privacy, and other governance mechanisms in place, so it shouldn’t be too difficult to extend those to agents.”
Managing agent sprawl in the enterprise environment
Our governance structures, practices, and policies also prevent sprawl that comes from unnecessary, duplicative, or unused agents. For example, if more than one team were to create an agent that points to HR information, the employee experience would suffer because our users wouldn’t be sure which agent presents the authoritative source of truth.
Most importantly, Microsoft Digital partners with other internal organizations to ensure they target agent development to avoid sprawl. Ideally, these engagements take place before teams start building their agents so we can avoid wasted effort or rework.
Microsoft Digital acts as a resource for teams who create agents in three ways:
- Before we set a team free to create an agent, we conduct early consultations that empower teams to identify the right scenarios. If a pre-existing agent fits their scenario, we encourage them to use that agent instead of creating another, redundant solution.
- We actively partner with teams to lend technical assistance and ensure they only build relevant, uniquely useful solutions that don’t overlap with other, already-authoritative enterprise agents. Additionally, we encourage creators to build the simplest possible solution to meet their needs so they can deliver agents with minimal custom investment and iterate quickly.
- Members of Microsoft Digital operate as an “Agent Center of Excellence.” They conduct internal engagements, acting as educators and coaches for teams who want to build agents.
We also combat sprawl in other ways. First, user-based lifecycles and periodic attestation help us keep agents from getting out of hand by making sure employees take accountability for them. Requiring attestation means that agents cease to exist once they’re no longer useful or their owner leaves the company.
In-product controls are very helpful. For example, our policies around how widely individuals or teams can share their agents restrict the degree they overlap with each other.
IT administration helps us control the many surface areas for creating and publishing agents. Because we have a firm minimum bar founded in our overall tenant, that provides a good policy framework for consistency among admins.
Finally, user education has an important role to play. Like agent creation capabilities themselves, our employee knowledge-building efforts are still relatively new. We’re prioritizing education to ensure everyone can use these tools safely and keep them scoped to their needs.
“The biggest part of managing sprawl is that we clean house regularly,” Johnson says. “We make sure we tie every agent to some sort of accountability policy to confirm it’s still compliant, effectively managed, and secure, and if all of that is in place, the agent can continue its work.”
Lessons learned from our agent governance efforts
As your organization dives deeper into the new era of AI-empowered work, agents will become an essential part of your employees’ day-to-day lives. But your IT, Security, Privacy, Data, and other teams may have concerns about ensuring the new agentic frontier doesn’t turn into the Wild West.
Although every organization is unique, the lessons you learn from our experience can help you start unlocking the power of agents. Here are five steps you can take today:
- Provide safe spaces with appropriate guardrails for individual employees to experiment with simple agents. Copilot Studio agent builder is a great place to start.
- Empower a small number of trusted creators to experiment with more powerful agent-building tools under the close watch of IT, Governance, Security, Privacy, Data, and HR teams. This will help you see where the gaps appear in existing processes and policies, and it will provide visibility into what you need to review as these processes become more widely available.
- Revisit your labeling structures and data flows. It will be important to have these structures in place to support this new agentic environment. Start by learning from our experience governing AI internally at Microsoft.
- Adapt your review process to the new world of agents. It’s highly likely you have robust security, privacy, and accessibility reviews in place. Without too much work, you can add reviews into the publishing workflow for agents you intend to use at the line-of-business or company-wide level. Also consider adding reviews for responsible AI.
- Establish a reasonable enterprise lifecycle for agents that includes attestation. That will keep agents from sprawling or remaining in place after employees have left your organization or simply no longer need a particular agent.
As AI continues to evolve and agents become essential assistants for every employee, developing structures to guide their creation and use will only become more important.
“We definitely want to prevent sprawl and promote safety, but we also want to encourage all employees at Microsoft to build agents,” Hasan says. “We accomplish that by standardizing the ‘what’ and the ‘why’ around agents and the policies that govern them.”
We’re just at the beginning of this journey, but our core principle will remain the same: We empower employees while providing guardrails.

Below, you’ll find essential guidelines for successfully governing agents within your organization, covering everything from policy frameworks and environment strategies to leveraging Copilot Studio and adhering to global regulations.
- The complexity of governing agents depends on the maturity of your organization and where you are in your adoption journey. Start slowly to let that maturity build.
- A strong policy framework is the foundation. Lean on existing app governance policies, then layer agent-specific structures on top.
- Figure out your building environment strategy. Decide on what scenarios match up with specific environments and make the relevant environments available to the relevant employees.
- Don’t forget that Copilot Studio is part of Power Platform. Use what you’ve learned empowering citizen developers in Power Platform to guide your work with agents.
- Global regulations around categories like privacy, security, and responsibility provide a good baseline for establishing governance policies. Set relevant teams to work thinking through these regulations and incorporate their insights into your agent governance.
