Sarvam-30B and Sarvam-105B | 19 Feb 2026

Source: TH

Bengaluru-based Sarvam AI has unveiled two new Large Language Models (LLMs), Sarvam-30B (a 30-billion-parameter model) and Sarvam-105B (a 105-billion-parameter model), at the India AI Impact Summit 2026.

  • At the summit, Sarvam AI also introduced Vikram, a multilingual chatbot enabling seamless conversations across Indian languages, named in honour of physicist Vikram Sarabhai to reflect indigenous scientific innovation.
  • The models' launch comes amid OpenAI's introduction of IndQA, a benchmark to assess how well AI systems understand Indian languages and cultural contexts, reflecting its growing focus on India.
  • Indigenous Development: Sarvam-30B is a multilingual model designed for real-time conversations, with a 32,000-token context window (the amount of text the model can read and retain at once), offering strong reasoning and instruction-following over long interactions.
    • Sarvam-105B, with a 128,000-token context window, is suited for complex reasoning, multi-step problem-solving, and long-form analysis across Indian languages.
    • Both use a mixture-of-experts architecture, activating only the expert sub-networks relevant to each input during computation to reduce costs while maintaining high performance (a minimal routing sketch follows this list).
      • Parameters are the internal variables, the "brain cells" of an AI model, learned during training; a higher parameter count generally indicates greater complexity, reasoning ability, and capacity to handle nuanced tasks.
  • Key Features & Capabilities
    • Indian Language Mastery: Unlike global models such as GPT-4, which are trained primarily on English data, Sarvam's models are built to excel in all 22 scheduled Indian languages with voice-first optimisation, making AI more accessible to the masses despite the flagship's comparatively small 105B-parameter scale (roughly one-sixth the size of DeepSeek's 671B-parameter R1 model).
      • This addresses the "data scarcity" problem in Indic languages, enabling accurate translation and content generation in local dialects.
    • Open Source: The models will be released as open source, meaning developers and researchers can access the code and weights to build their own applications on top of Sarvam's models (a loading sketch follows this list).
  • Training Infrastructure: Training LLMs requires immense computing power. The models were trained on GPUs (Graphics Processing Units) accessed through the IndiaAI Mission's common compute programme, highlighting the success of the public-private partnership model.
    • Under the IndiaAI Mission, Sarvam AI has been selected to build India’s first sovereign LLM ecosystem with an open-source 120B-parameter model for governance and public services. 
    • Besides Sarvam, Soket will develop a similar India-focused model for sectors like defence, healthcare, and education, while Gnani has launched its own model and Gan AI is building a 70B-parameter multilingual text-to-speech foundation model.
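
The cost saving in a mixture-of-experts layer comes from routing: a small gating network scores the experts for each token, and only the top-k of them actually run, so per-token compute scales with k rather than with the total parameter count. The PyTorch sketch below is a minimal illustration under assumed settings (8 experts, top-2 routing, toy dimensions); Sarvam has not published the models' exact architecture, so none of these values reflect the real configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal mixture-of-experts feed-forward layer with top-k routing."""

    def __init__(self, d_model=512, n_experts=8, k=2):
        super().__init__()
        self.k = k
        # The router scores every expert for every token.
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (n_tokens, d_model)
        scores = self.router(x)                     # (n_tokens, n_experts)
        top_w, top_i = scores.topk(self.k, dim=-1)  # keep only k experts per token
        top_w = F.softmax(top_w, dim=-1)            # renormalise the kept scores
        out = torch.zeros_like(x)
        # Only the selected experts are evaluated, so per-token compute
        # scales with k (here 2), not with n_experts (here 8).
        for e, expert in enumerate(self.experts):
            for slot in range(self.k):
                mask = top_i[:, slot] == e
                if mask.any():
                    out[mask] += top_w[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = TopKMoE()
print(moe(torch.randn(16, 512)).shape)  # torch.Size([16, 512])
```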
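Open-weight models are typically consumed through standard libraries rather than bespoke tooling. The sketch below shows how released weights could be loaded with the Hugging Face transformers library; the repository id "sarvamai/sarvam-30b" is a hypothetical placeholder, not a confirmed checkpoint name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id; substitute the real checkpoint once published.
repo = "sarvamai/sarvam-30b"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "भारत की राजधानी क्या है?"  # "What is the capital of India?" in Hindi
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
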
Read more: Sarvam AI and the Sovereign AI in India