Why Smallest.ai is Contesting the Idea of ‘LLMs’
As the IndiaAI Mission gains momentum and numerous startups focus on building for domestic markets, Sudarshan Kamath, founder of smallest.ai, takes a different approach: develop world-class technology for global markets first, then channel the gains back into India.
This global-first strategy stems from his belief that India’s B2B market presents fundamental challenges that make it difficult for startups to scale, while international markets offer better monetisation opportunities that can eventually fund expansion into India.
According to him, Indian businesses often prefer hiring more staff over adopting automation tools, perceiving time and labour as abundant, while money remains scarce. For AI founders, this creates an environment where acquiring customers is costly, margins are tight, and scale is elusive.
“The US exports technology to the world. That’s why California has a trillion-dollar economy,” Kamath explained. “You are not going to be able to sell AI in India. Sell to the world. Reinvest that in India, then build.”
Kamath reflected on the challenges of building for the Indian B2B market, contrasting it with global experiences. In a tweet, he shared how American clients often pay upfront with minimal friction, while Indian clients typically demand lengthy evaluations and additional features before committing, despite having higher revenues. He concluded that, in India, time is often undervalued compared to money, a mindset he hopes will evolve.
But Why is It Called ‘smallest.ai’?
Kamath wants to build voice AI that actually works at scale.
Founded by Kamath alongside co-founder Akshat Mandloi, the company started as an AI research lab exploring alternatives to large language models. They believed intelligence didn't require massive 100-billion-parameter models, focusing instead on small, specialised models. They shifted from research to enterprise voice AI after identifying strong market demand for text-to-speech technology.
At the heart of smallest.ai’s recent offering is Lightning V2, a high-performance text-to-speech model with ultra-low latency and support for over 16 languages. Designed for real-time deployment, it is already replacing large incumbents in enterprise accounts, particularly in sectors like banking and healthcare.
Lightning V2 currently includes five Indian languages, but the founders' strategy prioritises global expansion over regional depth. They plan to add more international languages, targeting Southeast Asia, China, and Korea, while acknowledging that the Indian market, beyond Tier 1 cities, remains small and not a primary focus for deep expansion.
Its pricing starts at $10,000 per month in the US, scaling up to $1 million depending on deployment size. Indian accounts start at $2,000 per month.
In the voice AI market, Kamath sees ownership as a critical differentiator.
While many voice AI companies rely on APIs from other companies, smallest.ai trains its models from scratch. This includes not only its core text-to-speech engine but also Electron, a small language model that he says is 10 times faster than GPT-4o Mini. This full-stack approach enables precise control over model behaviour, better performance tuning, and the ability to solve “last-mile” problems common in enterprise settings.
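To make the full-stack framing concrete, the sketch below shows what a single conversational turn could look like when every stage of a voice agent, speech recognition, a small language model, and text-to-speech, is an in-house component rather than a call to someone else's API. The class names and placeholder logic are hypothetical illustrations, not smallest.ai's actual interfaces.

```python
# Illustrative only: a minimal sketch of a "full-stack" voice-agent turn where
# every stage is owned in-house. SpeechToText, ElectronLM and LightningTTS are
# hypothetical stand-ins, not smallest.ai's real classes or APIs.

import time


class SpeechToText:
    """Stub ASR stage: would transcribe a caller's audio chunk."""
    def transcribe(self, audio_chunk: bytes) -> str:
        return "what is my account balance"  # placeholder transcript


class ElectronLM:
    """Stub small language model: would generate the agent's reply text."""
    def generate(self, prompt: str) -> str:
        return "Your closing balance today is 4,250 rupees."  # placeholder reply


class LightningTTS:
    """Stub TTS stage: would synthesise the reply text into audio."""
    def synthesise(self, text: str, language: str = "en") -> bytes:
        return text.encode("utf-8")  # placeholder waveform bytes


def handle_turn(audio_chunk: bytes) -> bytes:
    """One conversational turn: ASR -> small LM -> TTS, with end-to-end timing."""
    start = time.perf_counter()
    asr, slm, tts = SpeechToText(), ElectronLM(), LightningTTS()
    transcript = asr.transcribe(audio_chunk)
    reply_text = slm.generate(transcript)
    reply_audio = tts.synthesise(reply_text)
    print(f"turn latency: {(time.perf_counter() - start) * 1000:.1f} ms")
    return reply_audio


if __name__ == "__main__":
    handle_turn(b"\x00\x01")  # dummy audio bytes
```

Owning each stage in this way is what lets a team profile and tune end-to-end latency and fix last-mile behaviour directly, rather than being bounded by an external provider's response times.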
Although Kamath eventually hopes to pivot to a B2C model and serve consumers directly, the company currently targets enterprise clients. Within this segment, the focus is on regulated, high-scale sectors, including banks, hospitals and insurance providers, where customisation, data security, and uptime are crucial.
India’s Deep Tech Ecosystem
When it comes to India's deep tech priorities, Kamath believes there is a greater focus on operational challenges than fundamental science. While compute infrastructure is gradually improving, he noted that much of the attention remains on local language AI rather than solving deeper research problems.
Drawing a contrast with China’s DeepSeek, he said, “DeepSeek was famous because they literally built it for the world…There was a first open source sort of project that showed how OpenAI did its things, and then OpenAI was taken aback because of DeepSeek.”
Many in the industry, like Lossfunk founder Paras Chopra and venture capitalist Deedy Das, echo this sentiment of building SOTA AI for the world.
Recently, IndiaAI Mission announced the selection of three more startups—Soket AI, Gnani.ai, and Gan.AI—to develop indigenous foundation models. This brings the number of startups under the foundation model development initiative to four, including the previously announced Sarvam AI.
Sarvam’s funding comes from investors like Lightspeed India Partners, Peak XV Partners, Lightspeed Venture Partners, and Khosla Ventures, among others.
Sarvam’s launch drew a slew of criticism on social media.
Kamath also discussed how public datasets in India are inadequate for building voice AI. Even the best datasets lack high-quality conversational data with multi-speaker and multi-accent coverage, which is essential for developing effective voice agents.
On this front, AI4Bharat, in partnership with Bhashini, creates datasets and solutions tailored for India. Many contend that developing AI in India presents a completely different challenge compared to the Western approach.
Is Voice Going to Lead Us To AGI?
Kamath remains sceptical of the hype around AGI, and sees much of the current discourse around AI as largely speculative.
“We have not even figured out how you even define intelligence and how you think of an intelligent system,” he said of the current state of understanding intelligence.
Kamath believes if AGI were to emerge through voice, it would create a surreal experience, given that humans' primary mode of communication is speech, one reason why people often prefer calling over texting. However, he maintained that achieving AGI remains a distant reality.
He also observed a clear disconnect between industry leaders and ground-level practitioners. According to him, while companies are hyper-optimistic and suggest AGI might arrive next year, everyone working on the ground knows the reality is different.
In Silicon Valley, the AGI hype is particularly intense.
Former OpenAI researchers like Ilya Sutskever of Safe Superintelligence and Mira Murati of Thinking Machines Lab have launched startups aiming for ASI and AGI respectively, each already valued in the billions. Meta has also joined the race with its new superintelligence lab, headed by Alexandr Wang.
In a recent podcast, Logan Kilpatrick from Google DeepMind suggested that AGI will be defined by product experience, not just model performance. He believes the real breakthrough will come from nailing memory and context—making users feel like they’re speaking to AGI, even without a significant leap in underlying capabilities.