AI has advanced into a new phase characterized by execution and accountability, moving beyond mere adoption and experimentation. As agentic AI becomes integral to software development, cloud operations, and cybersecurity, organizations are increasingly acknowledging the necessity for enhanced visibility, governance, and control. Escalating costs and decentralized deployments are prompting leaders to reassess how AI is managed, measured, and secured.
According to IDC, 70% of Asia-Pacific organizations expect agentic AI to disrupt business models within the next 18 months. Against this backdrop, Craig Nielsen, Vice President for Asia Pacific and Japan at GitLab, shares the AI trends he expects to define 2026, helping leaders plan for the year ahead.
Holistic AI Agent Visibility Will Become Essential
In the coming year, organizations will recognize the critical need for visibility into the AI agents operating across their networks. As teams deploy AI systems from various development tools and cloud platforms without centralized oversight, agent platforms that can discover and catalog these distributed systems will gain a competitive edge in the enterprise market.
This transition will be driven by practical business imperatives. As AI agents amplify system usage and associated computing costs, organizations will demand precise returns on their AI investments. Enterprises will shift from viewing agent deployments as unmonitored experiments to requiring the financial accountability expected from other enterprise technologies. The most successful organizations will implement agent discovery platforms to provide clear visibility into running agents, the resources they consume, and whether they generate measurable business value.
Human-Centric Identity Systems Will Struggle in an Agent-to-Agent Environment
The upcoming year will also see organizations face a crisis in access and permissions as agent-to-agent interactions expose the limitations of traditional access control mechanisms. Unlike human users or basic automation, agentic AI systems interact among themselves, delegate functions, and make decisions that affect multiple systems. This complexity reveals gaps in conventional identity solutions, which were designed for individual human actors rather than for autonomous systems interacting with one another.
While some companies are addressing this challenge with short-term fixes—such as assigning personas to agents—such methods treat agents like employees instead of tackling the core governance issues. Organizations that persist with human-centric permission frameworks may find themselves unable to trace decision-making processes, audit agent actions, or maintain security as their AI systems become increasingly interconnected.
Organizations will need to rethink identity and access management from first principles. By forming cross-functional teams, they can design governance frameworks tailored for autonomous systems, avoiding the pitfalls of retrofitting existing human-centric models. The opportunity for proactive adjustment is limited, as deeply interconnected agent ecosystems will complicate any necessary foundational redesign.
Security Teams Will Focus on High-Impact AI Use Cases
AI has already demonstrated its potential to reduce false positives and streamline security operations. Successful implementations should begin with clearly defined, high-impact use cases: log analysis at volumes that would overwhelm human analysts, network pattern recognition for identifying novel threats, vulnerability prioritization based on real exploitability, and automated incident triage to reduce alert fatigue. Organizations will need to identify their biggest sources of operational toil and address them holistically.
Once relevant use cases are identified, the implementation approach becomes crucial. Security teams should prioritize documenting institutional knowledge within their departments, as AI agents require concrete direction to avoid creating technical debt. This documentation will also aid in strengthening and standardizing internal security processes.
Strategic AI-Human Collaboration Will Define Competitive Edge in 2026
The future winners in the AI landscape will not be those who rapidly adopt the technology, but rather those who thoughtfully allocate tasks between AI and human contributors. With 90% of executives expecting agentic AI to become standard practice within three years, the key differentiator will lie in understanding which tasks benefit from human creativity and judgment versus those that should be automated.
Organizations that get this balance right will build compounding advantages, enabling developers to concentrate on high-value architectural decisions and strategic initiatives while AI handles code generation and routine maintenance. Getting it wrong wastes human talent on automatable tasks and leaves AI making decisions that demand nuanced judgment.
The role of developers is shifting from writing every line of code to serving as system architects and AI orchestrators, tackling complex challenges and coordinating multiple agents. Teams embracing this evolution will innovate more effectively, accelerate project timelines, and attract top talent seeking to engage in cutting-edge AI-human collaboration.
The Next Era of AI in Asia Pacific
As the proliferation of AI agents continues, visibility will emerge as the foundation for trust. Organizations that know which agents are running, the resources they consume, and the value they deliver will transform AI from a mere experiment into a credible enterprise capability. Governance will mature in step, allowing organizations to track, secure, and explain the decisions made by autonomous systems.
Understanding AI goes beyond mere technology; the most successful organizations will view AI as a partnership, recognizing where human judgment adds value and where automation leads to greater efficiency. Achieving this balance will enable organizations to accelerate their operations, innovate more proficiently, and cultivate a culture rooted in accountability, innovation, and trust.
Author: Craig Nielsen, Vice President, Asia Pacific and Japan, GitLab.
Disclaimer: The views expressed are those of the author alone, and ETCIO does not necessarily endorse them. ETCIO is not responsible for any damages incurred by individuals or organizations as a result.
Published On: January 5, 2026, at 09:00 AM IST.