Gartner has identified six critical steps organizations can take to mitigate the risks associated with the proliferation of AI agents as enterprises brace for a significant uptick in their use in the coming years. The sharp increase in AI agents is anticipated to present new challenges regarding governance, security, data access, and overall IT complexity.
By 2028, large global enterprises may employ over 150,000 AI agents, a substantial rise from fewer than 15 in 2025. This surge could leave IT teams struggling to monitor, manage, and secure such a large and fast-growing fleet of systems.
As AI agents become more prevalent within business functions, organizations must address various risks, including misinformation, excessive data sharing, data loss, and the emergence of shadow AI. Prohibiting or narrowly restricting AI agents is not a sustainable strategy, as employees may resort to unauthorized tools. A balanced governance model is essential for managing risks while enabling responsible employee use of AI tools.
To reduce the risks of AI agent proliferation, Chief Information Officers (CIOs) and IT leaders should focus on the following six steps:
- Establish Agent Governance and Policies: Clearly define the rules for the creation and sharing of AI agents, including who is authorized to build them and which integrations are acceptable.
- Build a Centralized Agent Inventory: Develop a comprehensive overview of all AI agents in use across the organization, encompassing both sanctioned and shadow AI solutions. This inventory facilitates risk classification and the implementation of necessary controls.
- Define Agent Identity, Permissions, and Lifecycle Models: Implement identity, permission, and access-control frameworks for AI agents. It is vital to regularly assess agent usage and retire outdated or redundant agents to prevent unchecked proliferation.
- Develop AI Information Governance: Regulate the information that AI tools and agents can access. Ensure data remains current, permissions are managed correctly, and obsolete data is archived to prevent oversharing.
- Monitor and Remediate Agent Behavior: Maintain oversight of AI agent usage. Monitor for policy violations, unusual behavior, or instances where agents operate outside their intended scope, and take corrective measures as necessary.
- Foster a Culture of Responsible AI Usage: Provide training on responsible AI practices to employees, and create internal communities or forums where best practices for AI agent management can be shared.
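To make the inventory and lifecycle steps above concrete, here is a minimal sketch in Python of what a centralized agent registry with simple risk classification and retirement checks might look like. All names (`AgentInventory`, `AgentRecord`, the risk tiers, the 90-day staleness window) are illustrative assumptions, not part of any Gartner framework or vendor API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class AgentRecord:
    """One entry in a hypothetical centralized agent inventory."""
    agent_id: str
    owner: str
    sanctioned: bool                              # False = shadow AI found outside official channels
    scopes: set = field(default_factory=set)      # data/integration permissions granted to the agent
    last_used: datetime = field(default_factory=datetime.utcnow)

class AgentInventory:
    """Illustrative registry supporting risk classification and
    lifecycle retirement (steps 2 and 3 above)."""

    def __init__(self, stale_after_days: int = 90):
        self._agents: dict[str, AgentRecord] = {}
        self.stale_after = timedelta(days=stale_after_days)

    def register(self, record: AgentRecord) -> None:
        # Both sanctioned and discovered shadow agents go into one inventory.
        self._agents[record.agent_id] = record

    def risk_class(self, agent_id: str) -> str:
        """Toy scoring: shadow agents and broadly scoped agents rank higher."""
        a = self._agents[agent_id]
        if not a.sanctioned:
            return "high"
        return "medium" if len(a.scopes) > 3 else "low"

    def retirement_candidates(self, now: datetime) -> list[str]:
        """Agents unused beyond the staleness window, flagged for review/retirement."""
        return [aid for aid, a in self._agents.items()
                if now - a.last_used > self.stale_after]
```

In practice the registry would be backed by a database and fed by discovery tooling; the point is that risk tiers and retirement rules become queryable once every agent, sanctioned or not, is recorded in one place.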
In conclusion, as the deployment of AI agents accelerates, organizations must proactively manage the associated risks to ensure effective governance and security.