As enterprises move to adopt artificial intelligence (AI), the focus is shifting from experimentation to practical implementation. Organizations are learning to scale AI effectively, addressing challenges around data workflows, transparency, and trust. Dwarak Rajagopal, Vice President of AI Research and Engineering at Snowflake, discusses the pressing issues, strategies, and innovations currently shaping enterprise AI.
For years, enterprise AI seemed perpetually on the horizon. That future has now arrived, and the realities of infrastructure, governance, and user trust determine success. While large language models and vision transformers continue to inspire, businesses are prioritizing deployment over demonstrations.
Rajagopal states, “Many advancements in AI over the last few years have focused on providing the right context. For AI, context is essentially the data. The quality of the output relies on the quality and completeness of that context. Companies often face the issue of their data residing in multiple locations, with some being structured and others unstructured, spread across various platforms like Gmail, Docs, and Snowflake. The challenge lies in unifying this data to deliver the appropriate context for AI to resolve specific problems.”
This transition is not without its hurdles. Turning cutting-edge research into robust, scalable solutions involves tackling foundational problems such as inference latency, data drift, observability, and hallucination control.
The Importance of Observability and Human Oversight
The emphasis on context extends to the necessity of human oversight in AI processes. Snowflake is investing heavily in solutions that create a tight feedback loop between AI outputs and human validation. One such innovation is Snowflake Intelligence, currently in public preview, which allows users to ask direct questions about their data for insights. Rajagopal highlights that a critical feature is the human-in-the-loop design, where administrators verify queries before they reach business users, building trust in the AI system’s outputs.
Features like an icon marking verified queries enhance transparency. Observability is essential; users should understand what the AI did and the rationale behind it. “We don’t just provide the answer; we also show the AI’s reasoning, or what we call its ‘thinking tokens.’ If a query isn’t verified, users still have insight into the reasoning, whereas verified queries assure them of human validation,” Rajagopal explains.
To foster trust in AI systems, explainability is crucial. Enterprises want clarity not only about model outcomes but also on measuring aspects like confidence, bias, and deviation during production.
“By acquiring TruEra and integrating its open-source project TruLens, we have introduced a robust explainability layer across our platforms like Cortex Analyst and Snowflake Intelligence. This allows users to see what the AI is doing and why, while also enabling direct feedback. For instance, clients like Nissan are leveraging these tools for real-time analysis, reducing feedback loops from months to weeks, while ensuring accuracy before reintegrating insights into their systems,” says Rajagopal.
The Role of Open Source and Inference Optimization
Open-source innovation plays a pivotal role in the maturation of enterprise AI. Organizations are increasingly choosing modular and transparent alternatives that allow for greater adaptability and community-driven enhancements. Rajagopal, a veteran of open-source projects including PyTorch, argues that open source is essential for fostering feedback and trust in a rapidly evolving landscape.
“Our deployment engine is based on the open-source project vLLM, which underpins our inference systems. We are committed to continuous improvement, including recent advances in batch and interactive routing, and we give back improvements to the community,” he notes. This model benefits not only Snowflake’s internal systems but also provides assurance to users through third-party validations.
Inference optimization is another vital but often overlooked area, where many AI delivery challenges manifest. Achieving high-quality model outputs in real time, while maintaining scalability and cost-effectiveness, requires innovations at the infrastructure level rather than solely smarter models. “One significant challenge is balancing cost with performance. Many next-generation models are large and costly, yet the appropriate approach varies by use case,” Rajagopal states.
For instance, Snowflake’s AI SQL product employs a cascading model strategy, intelligently choosing smaller models for simpler tasks and larger models for more complex challenges. By building upon open-source models, Snowflake fine-tunes them and incorporates reinforcement learning tailored to specific use cases.
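The cascading idea can be illustrated with a simple router that estimates a query's complexity and dispatches it to the cheapest model tier that can handle it. Everything here, including the tier names, relative costs, and the scoring heuristic, is a toy sketch for exposition, not Snowflake's AI SQL implementation.

```python
from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str            # illustrative model identifier, not a real product
    relative_cost: float # illustrative relative cost per call
    max_complexity: int  # highest complexity score this tier handles

# Tiers ordered cheapest-first; a query falls to the smallest
# tier whose complexity ceiling covers it.
TIERS = [
    ModelTier("small-llm", 0.1, max_complexity=3),
    ModelTier("medium-llm", 0.5, max_complexity=7),
    ModelTier("large-llm", 2.0, max_complexity=10),
]

def estimate_complexity(query: str) -> int:
    """Toy heuristic: longer queries and heavier SQL constructs score higher."""
    score = min(len(query.split()) // 10, 5)
    for keyword in ("join", "group by", "window", "recursive"):
        if keyword in query.lower():
            score += 2
    return min(score, 10)

def route(query: str) -> ModelTier:
    """Pick the cheapest tier whose ceiling covers the estimated complexity."""
    complexity = estimate_complexity(query)
    for tier in TIERS:
        if complexity <= tier.max_complexity:
            return tier
    return TIERS[-1]
```

A production router would use a learned classifier rather than keyword counting, but the cost-versus-capability trade-off Rajagopal describes is the same: most queries never need the largest, most expensive model.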
AI in Daily Operations
AI is reshaping operational workflows even within software teams. It is being integrated into processes like automated documentation and more efficient CI/CD pipelines, thus embedding efficiencies that accumulate over time. Rajagopal cites the example of SnowConvert, a tool designed to facilitate migration from on-premises systems to the cloud—specifically from Oracle to Snowflake. The tool has been enhanced with AI capabilities, allowing automatic conversion of Oracle SQL to Snowflake SQL without manual intervention.
“AI-powered coding tools have become integral in writing and reviewing code at Snowflake, with nearly all engineers using them daily, significantly accelerating development,” he adds. He relies on Snowflake Intelligence to prepare for customer interactions, allowing him to pose questions about customer activities or challenges with rapid access to relevant insights.
Currently, around 95% of Snowflake’s engineers use AI in their daily work, with approximately half of all code generated by AI. While some challenges, such as the seamless integration of AI into CI/CD pipelines, remain, the impact on development workflows is already noteworthy, according to Rajagopal.
As organizations embed AI deeper into their daily operations, the emphasis is shifting from experimentation to actionable implementation. Rajagopal insists that AI can enhance routine tasks and underscores the importance of building systems that can absorb model improvements as they arrive.
“We’ve integrated AI into our main site, enabling users to create SQL and worksheets faster. The fundamental step is simply to start utilizing it effectively. Moreover, as models continuously improve, designing systems that accommodate this evolution is essential. This ensures ongoing benefits from advancements without necessitating significant structural changes,” Rajagopal concludes.
Published on September 24, 2025.