Have you ever imagined what your day would look like if the intelligence assisting you truly understood how you speak, not just your language, but your dialect, your code-switching, and your pauses?
An AI that does not translate you.
It responds like you.
At Bharat Mandapam on March 18, 2026, during one of India’s largest AI-focused gatherings, the India AI Impact Summit, that imagination felt less theoretical and more deployable.
Sarvam AI unveiled two indigenous large language models, at 30B and 105B parameters, trained from scratch on Indian languages. Beyond the specifications and benchmark comparisons, what emerged was a sharper thesis: AI for India cannot be imported and lightly fine-tuned. It has to be architected for India’s linguistic and infrastructural realities.
Scaling with Intent

“Large language models, we started with a 3-billion parameter dense model,” co-founder Pratyush Kumar said at the launch. “But it is important to scale up. The models we are talking about today represent that next step.”
That progression from 3B to 30B and now 105B reflects not just growth in size, but maturity in architecture.
Both models are built on a Mixture of Experts, or MoE, architecture. Instead of activating the entire neural network for every prompt, MoE dynamically routes queries to specialised expert sub-networks. The result is improved efficiency while maintaining performance across reasoning, programming and tool-use tasks.
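To make the idea concrete, here is a minimal sketch of top-k MoE routing in PyTorch. This is not Sarvam’s actual implementation; the dimensions, expert count, and top-k value are illustrative assumptions. What it demonstrates is the core property: a learned router activates only a few expert sub-networks per token, so compute per query stays small even as total parameters grow.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Toy Mixture-of-Experts layer (illustrative, not Sarvam's architecture).

    A learned router scores all experts for each token, keeps the top-k,
    and runs only those experts, so most parameters stay idle per query.
    """

    def __init__(self, d_model=64, d_hidden=256, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, num_experts)  # one score per expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):
        # x: (num_tokens, d_model)
        scores = self.router(x)                        # (num_tokens, num_experts)
        weights, chosen = torch.topk(scores, self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)           # renormalise over the k chosen
        out = torch.zeros_like(x)
        for slot in range(self.k):                     # each token's k expert slots
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e            # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

tokens = torch.randn(4, 64)        # 4 tokens with 64-dim embeddings
layer = TopKMoELayer()
print(layer(tokens).shape)         # torch.Size([4, 64]); 2 of 8 experts ran per token
```

In this toy layer, only 2 of 8 experts run for any given token, and that is the property that keeps inference cost manageable as parameter counts climb into the hundreds of billions.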
In a country where inference cost directly impacts accessibility and enterprise deployment, this architectural choice is strategic.

The 105B model is roughly one-sixth the size of DeepSeek’s 671B-parameter R1 model released last year, yet Sarvam claims intelligence competitive with where that model stood a year ago. It is also positioned as more cost-efficient than Google’s Gemini Flash, while outperforming it on several benchmarks.
The Moment That Shifted the Room
The most memorable part of the launch was not a parameter comparison.
It was a conversation.
Sarvam’s AI assistant, Vikram, named after Indian physicist and space pioneer Vikram Sarabhai, was introduced on stage. The assistant began interacting in Hindi. Midway, it switched seamlessly into Punjabi.
The transition was fluid and contextual.
Then, in Punjabi, Vikram explained why it carried that name, paying tribute to Vikram Sarabhai and his contribution to India’s scientific journey.
It was not simply multilingual capability.
It felt cultural and intentional.
What followed added another layer of meaning. The assistant was not demonstrated only on a premium smartphone interface. It was shown operating on a feature phone.
In a country where millions still rely on basic devices, that detail was significant. If AI assumes flagship hardware and uninterrupted bandwidth, it risks becoming aspirational. If it works on a feature phone and supports voice-first interaction, it begins to resemble infrastructure.
That moment shifted the conversation from capability to reach.
Language as Infrastructure
India has 22 officially recognised languages and hundreds of dialects. Industry estimates suggest that a majority of new internet users in the coming years will be non-English speakers.
For AI systems, this is not merely a translation challenge. It is a context challenge.
Dialect carries nuance. Agricultural vocabulary, local governance terminology, informal speech patterns and blended Hindi-English expressions shape how millions communicate daily.
Global frontier models are optimised for scale. Dialect fluency demands depth.
Sarvam’s selection under the IndiaAI Mission to help build the country’s sovereign LLM ecosystem, including a planned open 120B parameter model, signals that this effort extends beyond startup ambition. It is aligned with governance use cases such as 2047 Citizen Connect and AI4 Pragati, aimed at enhancing multilingual public service access.
That places the initiative within India’s broader digital public infrastructure journey.
Not Just Bigger, But Closer
Globally, AI competition continues to revolve around trillion-parameter ambitions and hyperscale compute budgets.
India’s path may evolve differently.
With linguistic plurality, device diversity, and cost-sensitive adoption curves, optimisation becomes strategy. Distribution becomes a differentiator.
At Bharat Mandapam on March 18, the takeaway was not simply that India can build large language models.
It was that India is attempting to build models that understand how it speaks.
In a country defined by dialects, that may prove to be the most scalable advantage of all.
Note: This article is based on ground reportage at the India AI Impact Summit, 2026.