Sarvam AI Releases 30B and 105B Open-Source Models Trained from Scratch


Sarvam AI has entered the open-source LLM space with 30B and 105B parameter models trained from scratch on the company's own training infrastructure and data pipelines. Rather than fine-tunes of existing models, these are ground-up implementations, which diversifies the open-source landscape and reduces dependency on a small number of base-model families.

The models are available on Hugging Face and can be deployed immediately in local environments. Community reception has been positive, with one user remarking that they are "not too bad for a first effort built from the ground-up," suggesting competitive performance relative to other open-source offerings.
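For practitioners who want to try the models locally, the usual Hugging Face `transformers` loading pattern should apply. The sketch below is a minimal example under that assumption; the repo id is a placeholder, not a confirmed model name, so substitute the actual id from Sarvam AI's Hugging Face page.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# NOTE: "sarvamai/placeholder-30b" is a hypothetical repo id --
# replace it with the real model name from the Hugging Face page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sarvamai/placeholder-30b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers across available GPUs/CPU
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

prompt = "Summarize the benefits of training LLMs from scratch:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the 105B variant, local deployment would typically require quantization or multi-GPU sharding; `device_map="auto"` handles the sharding side, while quantized community builds, if published, would lower the memory floor.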

This matters for the local LLM ecosystem because it increases optionality for practitioners. More diverse model architectures and training approaches mean better coverage of different use cases, languages, and optimization targets. It also demonstrates that training competitive models from scratch remains viable for well-resourced teams, encouraging further decentralization of model development.

Source: r/LocalLLaMA · Relevance: 8/10