For the past several years, discussions around artificial intelligence (AI) in enterprises have largely focused on models, copilots, and experimentation. That narrative is shifting. The pressing question is no longer what AI can accomplish, but whether current data infrastructure is ready for the demands of AI, particularly agentic AI.
Rahul Tenglikar, Account Executive at ClickHouse, puts the issue plainly: “Most of the conversations I am hearing are around agentic AI, but what are the data infrastructure challenges for agentic AI?” This points to a gap that enterprises are only beginning to confront. While interest in AI agents is surging, many organizations are still grappling with outdated, fragmented, and slow data systems.
This concern is borne out by McKinsey’s 2025 AI survey, which finds that 23% of organizations are already scaling agentic AI in at least one function, while another 39% are still experimenting. Gartner likewise lists AI agents and AI-ready data among the fastest-advancing technologies in its 2025 AI Hype Cycle, suggesting that the demands on enterprise data frameworks will only intensify.
Tenglikar traces this evolution through the trajectory of enterprise data systems: traditional data warehouses gave way to cloud data warehouses, and the industry is now moving towards a more unbundled model. “Compute and storage separation is probably the next big trend in the data and infrastructure space powering some of these AI agent applications,” he noted. The shift is driven by both cost and performance, allowing enterprises to scale storage independently of compute.
This trend is also reflected in the broader market, as Deloitte’s 2025 AI infrastructure report indicates that hybrid models are enabling enterprises to manage large-scale AI workloads while addressing challenges related to public cloud costs, latency, and data sovereignty.
Tenglikar underscores that AI agents require a fundamentally different data architecture compared to traditional business intelligence (BI) systems. He states, “AI agents need high performance querying,” alongside prerequisites such as “low latency data in milliseconds and microseconds,” “unified data access,” and responses enriched with “context” and “full explainability.”
The implications of inadequate data systems are significant. “The worst thing that LLM can do for us is hallucination,” Tenglikar warns, pointing to the risk of unreliable outputs when data integrity is lacking. His prescription is a real-time analytics layer over enterprise data, one that can promptly supply agents with consistent, contextualized responses and so reduce erroneous outputs.
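Tenglikar did not describe an implementation, but a minimal sketch of this grounding pattern might look like the following. The table, columns, and connection details here are hypothetical; clickhouse_connect is ClickHouse’s official Python client, and the idea is simply that the agent’s prompt is built from fresh query results rather than from whatever the model remembers.

```python
# Minimal sketch: ground an agent's prompt in fresh analytics before calling the LLM.
# Table name, columns, host, and credentials are hypothetical placeholders.
import clickhouse_connect

client = clickhouse_connect.get_client(
    host="clickhouse.example.internal",  # hypothetical host
    username="agent_reader",             # hypothetical credentials
    password="secret",
)

def fetch_context(customer_id: str) -> str:
    """Pull the freshest rows for this customer from the real-time layer."""
    result = client.query(
        "SELECT status, total, updated_at FROM orders "
        "WHERE customer_id = {cid:String} "
        "ORDER BY updated_at DESC LIMIT 5",
        parameters={"cid": customer_id},
    )
    return "\n".join(", ".join(map(str, row)) for row in result.result_rows)

def build_prompt(question: str, customer_id: str) -> str:
    # Grounding the prompt in current data narrows the gap between
    # what the model says and what the system of record actually holds.
    context = fetch_context(customer_id)
    return f"Answer strictly from these records:\n{context}\n\nQuestion: {question}"
```

The design point is that every answer is anchored to a low-latency query against live data, which is precisely the role Tenglikar assigns to the real-time analytics layer.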
The overarching lesson is that agentic AI is about more than the application layer; it demands a rethinking of data architecture. Enterprises can keep talking about AI agents, but without underlying support for real-time analytics, fast data retrieval, elastic scaling, and tight cost control, those agents are likely to stall beyond pilot projects.
Tenglikar summed up the question customers are now asking: organizations have “legacy data applications and data infrastructure,” and want to know “how do we change it so that we can power our agentic AI applications.” As AI adoption progresses, that question is set to become a defining challenge in enterprise technology.
Disclaimer: The views expressed are solely those of the speakers and have been taken from the ETCIO Cloud Summit 2026. ETCIO does not necessarily subscribe to them.
With inputs from Swati Sengupta.