Everybody Loves Karpathy
While the OpenAI drama was unfolding, Andrej Karpathy’s reaction to the whole situation was noteworthy: he was busy thinking about centralisation and decentralisation, and most recently even released a new free one-hour YouTube video tutorial on LLMs. Who knows, he might well have been working on that video while Sam Altman was returning as CEO of OpenAI.
Clearly, OpenAI is much bigger than Altman, and a really important company for all of humanity and the future of AI. Its employees know that all too well. Ergo, the return of Altman. The company’s ultimate goal, it seems, is to make human-machine communication as seamless and personalised – or rather, as humane – as possible, before it achieves AGI.
“Really, with a keyboard and the mouse, which is just so limited,” averred OpenAI’s Mira Murati in a recent interview with Microsoft chief technology officer Kevin Scott, sharing her frustration at the limitations of communicating with machines.
Makes sense. Humans have been communicating with machines the wrong way all along, and OpenAI made it possible to engage with them using natural language – be it for writing assignments, coding, or building apps (GPTs).
But now OpenAI is looking to take this to a whole new level, and the credit goes to Andrej Karpathy and his team.
One of the founding members of OpenAI, Karpathy left it in 2017 to join Tesla, where he played a crucial role in projects like the humanoid robot “Optimus.” An avid supporter of open source, he has been active in the community, offering courses and releasing libraries like minGPT, nanoGPT, and even Baby Llama.
In a weekend project, he adapted the Llama 2 architecture into a simplified nanoGPT-style implementation and wrote a C inference engine, run.c, giving rise to Baby Llama. The effort essentially swapped the GPT-2 model inside nanoGPT for the Llama 2 architecture.
The Rise of Q*
Karpathy’s recent cryptic posts on X about centralisation and decentralisation all seem to be aligned with this. Elon Musk is also thinking about building something similar.
Meanwhile, other tech veterans, like Clem Delangue of Hugging Face, were quick to misinterpret the situation, thinking he was taking sides in the open- versus closed-source debate.
Really? It’s Karpathy – who knows, he might well be thinking of building an AI system that uses both centralised and decentralised LLMs to give better results.
All of this rings true with OpenAI’s latest leak about Q* (pronounced Q-Star), a potential breakthrough in inching closer to AGI, showing promising results in solving mathematical problems, albeit only at a grade-school level so far.
The news about Q* surfaced around Altman’s dismissal, when several staff researchers sent a warning letter to the board of directors, highlighting a potentially “threatening” powerful AI discovery (Q*) with implications for humanity. Some at OpenAI believe this breakthrough is an important step towards AGI.
The Birth of Hybrid LLMs
“What else is new? And we should stop calling them large language models,” replied Elon Musk to Peter Yang’s X post highlighting the framework suggested by Karpathy.
Not sure about Musk, but in the AIM universe, we would like to call these “hybrid LLMs” – systems that integrate both centralised and decentralised models, built around optimising the performance of a targeted AI system.
This optimisation of output requires a trade-off between centralisation and decentralisation of decision-making and information distribution; achieving optimal results means balancing the two.
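One way to picture this trade-off is a simple router: a small decentralised (on-device) model answers when it is confident, and the query escalates to a large centralised model otherwise. The sketch below is purely illustrative – the model functions, the confidence heuristic, and the threshold are all hypothetical stand-ins, not any real API.

```python
# Hypothetical sketch of a "hybrid LLM" router: answer on-device when the
# small local model is confident, escalate to a centralised model otherwise.
# Both model functions below are toy stand-ins, not real model calls.

def local_model(prompt: str) -> tuple[str, float]:
    """Stand-in for a small on-device model; returns (answer, confidence)."""
    if len(prompt.split()) <= 5:  # toy heuristic: short prompts are "easy"
        return f"local answer to: {prompt}", 0.9
    return f"uncertain local answer to: {prompt}", 0.4

def cloud_model(prompt: str) -> str:
    """Stand-in for a large centralised model behind an API."""
    return f"cloud answer to: {prompt}"

def hybrid_answer(prompt: str, threshold: float = 0.7) -> tuple[str, str]:
    """Route the prompt: local if confidence clears the threshold, cloud otherwise."""
    answer, confidence = local_model(prompt)
    if confidence >= threshold:
        return answer, "local"
    return cloud_model(prompt), "cloud"

print(hybrid_answer("what time is it"))  # routed locally
print(hybrid_answer("summarise the entire history of machine learning research"))
```

Keeping easy queries on-device is what would cut latency and keep user data local, while the centralised model backstops the hard cases – the balance the trade-off above describes.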
Plausible Use Cases
Recently, Humane, founded by former Apple engineers, came up with the Humane Ai Pin, a device that could benefit from the AI architecture suggested by Karpathy and his team at OpenAI, significantly reducing latency and hallucination while ensuring the data security of users. Notably, Humane works really closely with OpenAI, and is backed by Sam Altman.
The Humane Ai Pin is most likely to leverage this hybrid LLM approach, blending features of both centralised and decentralised systems in its design and functionality, underpinned by the principles of edge computing through local data processing.
Besides the Humane Ai Pin, hybrid LLMs will most likely be integrated into products like Apple Vision Pro, the Tesla Model 3, Microsoft Copilot, ChatGPT with voice, and more.
Not sure about AGI yet. But it’s time we all got rid of our phones, stopped endlessly doom-scrolling, and spoke to machines to seek information the way we engage with our colleagues, family, and friends – naturally.
The post Everybody Loves Karpathy appeared first on Analytics India Magazine.
