While Tom Cruise’s movie Mission: Impossible – Dead Reckoning shows the world how AI can make a perfect villain in The Entity, a faceless antagonist that manipulates the course of humanity, big tech, in real life, is trying hard to build a safer entity, inching towards AGI. But there’s a twist: everyone is still figuring it out, each betting on what they believe will lead them there. 

To Each Their Own

“OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity” – OpenAI Charter, 2018.

OpenAI has been clear from the beginning in defining their goals. AGI is their mission.

Sam Altman has said that LLMs could be part of the way to build AGI. He believes that this entity will not have a body. “We are deep into the unknown here,” said Altman on the Lex Fridman podcast.

“For me, a system that cannot significantly add to the sum total of scientific knowledge we have access to, kind of discover, invent, whatever you wanna call it, new fundamental science, is not a superintelligence,” said Altman. 

He further said that there is a possibility that GPT-10 could evolve into true AGI with just a few innovative ideas. However, he believes the true excitement lies in AI serving as a tool that participates in a human feedback loop, acting as an extension of human will and amplifying human capabilities. 

[Chart: Big Tech Model Reliance – weights assigned based on usage of each type of model/functionality]

The Iron Model of Transformers 

OpenAI’s dedication to building an exhaustive list of transformer models trained on large datasets is probably its key to unlocking AGI; even Sam Altman believes that LLMs could be ‘part of the way to build an AGI.’ He also feels that expanding the GPT paradigm in important ways will help, though he admits he doesn’t know what those ways are. 

The transformer is the key neural network architecture behind OpenAI’s GPT models. From the first GPT model in 2018, which had approximately 117 million parameters, to the latest GPT-4 model launched in March this year, whose parameter count has not been confirmed (its predecessor GPT-3.5 was trained with 175 billion parameters), OpenAI has gone all in on the LLM route. The company’s list of transformer models extends even to the text-to-image models DALL-E and DALL-E 2, the speech-to-text model Whisper, and the text-to-music model Jukebox. 
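
For readers unfamiliar with the architecture, the sketch below shows the scaled dot-product self-attention operation at the heart of transformer models like GPT. It is a toy illustration in plain NumPy with made-up dimensions, not OpenAI’s implementation.

```python
# Minimal sketch of causal self-attention, the core transformer operation.
# Sizes and weights are toy values for illustration only.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; w_*: projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v           # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])       # similarity of every token pair
    # Causal mask: a GPT-style decoder only attends to earlier positions.
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -1e9
    return softmax(scores) @ v                    # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                           # toy sizes
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)     # (4, 8)
```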

The Almighty Reinforcement Learning 

Google DeepMind’s CEO Demis Hassabis believes that, at the current pace of progress, AGI is just a few years away, perhaps within the next decade. However, he foresees uncertainties, as the field still requires careful exploration.

Swearing by reinforcement learning, a method that learns through trial and error, Google DeepMind holds the crown here. With models such as AlphaFold, AlphaZero and others, DeepMind also believes that maximising total reward might be sufficient to understand intelligence and its associated abilities, and that reward alone may be enough to reach artificial general intelligence. 
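
To make the ‘trial and error’ and ‘reward is enough’ ideas concrete, here is a minimal, illustrative sketch: an epsilon-greedy agent that learns which of three slot-machine arms pays best purely from a reward signal. The environment and numbers are invented for this example; DeepMind’s systems such as AlphaZero use far more sophisticated reinforcement learning, but the reward-maximisation principle is the same.

```python
# Trial-and-error learning from reward alone: an epsilon-greedy bandit agent.
import random

random.seed(0)
true_payout = [0.2, 0.5, 0.8]          # hidden reward probability of each arm
estimates = [0.0, 0.0, 0.0]            # the agent's learned value of each arm
counts = [0, 0, 0]
epsilon = 0.1                          # fraction of time spent exploring

for t in range(5_000):
    # Usually pick the best-known arm, occasionally try a random one.
    if random.random() < epsilon:
        arm = random.randrange(3)
    else:
        arm = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print([round(e, 2) for e in estimates])   # estimates approach the true payouts
```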

While DeepMind has had its share of AGI conversations, Sundar Pichai believes the race is not the priority. “While some have tried to reduce this moment to just a competitive AI race, we see it as so much more than that.” He also said the emphasis is on the race to build AI responsibly and to make sure ‘as a society we get it right.’ 

The Black Widow of Self-Supervised Learning

“I don’t think I have any particular insight on when a singular AI system that is a general intelligence will get created.” – Mark Zuckerberg when asked about AGI timelines on Lex Fridman’s podcast

Meta’s Yann LeCun has said that supervised learning and reinforcement learning will not lead to AGI, as these approaches are inadequate for developing systems capable of reasoning with commonsense knowledge about the world. He believes self-supervised learning is the way towards AGI. 

This method does not rely on data labelled by humans; instead, the training signal comes from the unlabelled data itself, for example by hiding part of the input and asking the model to predict it. Self-supervised language understanding models, libraries and frameworks have shown promising results, surpassing traditional, fully supervised models. The company has been expanding its research efforts in self-supervised learning since 2013. 
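
As a concrete illustration of the self-supervised idea, the sketch below masks a word and asks a pretrained model to recover it; the ‘label’ is simply the word that was hidden. It assumes the Hugging Face transformers library and the public roberta-base checkpoint (a Meta AI model pretrained exactly this way), chosen here only for illustration.

```python
# Masked-word prediction: the training signal comes from the text itself.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")

# The "label" is just the word we removed from otherwise unlabelled text.
for prediction in fill_mask("The capital of France is <mask>.", top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```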

The Way of Ultron 

With the launch of xAI, Musk seeks to build ‘good AGI’ with the purpose of ‘understanding the universe.’ Musk explains that an AI which cares about understanding the universe is ‘unlikely to annihilate humans because we are an interesting part of the universe.’ He also predicts that AGI will be achieved by 2029. 

While others are moving towards AGI in a bodyless form, Musk’s investment in everything robotics is probably a reflection of how a physical form may be the answer. A working prototype of the Optimus robot, powered by the same self-driving computer that runs Tesla cars, was unveiled at Tesla AI Day last year. Musk believed that these advancements would one day contribute towards AGI. 

Shape Shifters of Multimodality

Google and, to a lesser extent, OpenAI have incorporated multimodal functions into their models. Google’s PaLM-E and Med-PaLM 2 have multimodal capabilities. OpenAI’s transformer-based architecture CLIP, released in January 2021, learns from textual descriptions associated with images and performs zero-shot image classification and object detection. GPT-4 supports image uploads, and the ChatGPT app supports voice commands through Whisper integration.  
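
As an illustration of zero-shot classification with CLIP, the sketch below scores an image against a handful of candidate captions. It assumes the Hugging Face transformers library, the public openai/clip-vit-base-patch32 checkpoint and a sample image URL, all chosen purely for demonstration.

```python
# Zero-shot image classification: CLIP picks the caption closest to the image.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open(requests.get(
    "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# CLIP embeds the image and each candidate caption in a shared space;
# the closest caption wins, with no task-specific training ("zero-shot").
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```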

A number of other approaches still under research may also have a future role in helping companies reach AGI; causality is one of them. Believed to be potentially transformative, causality concerns the relationship between cause (what we do) and effect (what happens as a result), so that machines learn to reason about the world more the way we do. See the sketch below for a toy illustration.  
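
A toy simulation can illustrate why cause and effect differ from mere correlation: a hidden factor drives two variables that look strongly related in observed data, yet intervening on one reveals it has no effect on the other. All variables and numbers below are invented for illustration.

```python
# Correlation vs. causation in a toy structural causal model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational world: hidden Z causes both X and Y; no arrow from X to Y.
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)
y = z + 0.1 * rng.normal(size=n)
print("observed corr(X, Y):", round(np.corrcoef(x, y)[0, 1], 2))    # ~0.99

# Interventional world: setting X by hand (do(X)) cuts the Z -> X link.
x_do = rng.normal(size=n)                  # X chosen independently of Z
y_do = z + 0.1 * rng.normal(size=n)        # Y is unaffected by X
print("corr under do(X):", round(np.corrcoef(x_do, y_do)[0, 1], 2))  # ~0.0
```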

A Soothsayer to Navigate The Labyrinth

Sam Altman’s tweet on testing the latest custom instructions feature was rather too specific. Turning to ChatGPT to chart a path towards superintelligence may be playful, but looking at tech leaders’ interpretations of AGI and superintelligence, their ambiguity on the matter is crystal clear. Their approaches to getting there, whether intentional or not, differ and align with what each sees as fitting its long-term company goals. 

Like their definitions, it remains unclear who might finish first in the AGI race. Each of the companies discussed here employs different models, and their reliance on each varies in degree. However, given OpenAI’s heavy dependence on transformer language models, and the fact that AGI has been its primary goal from the start, the company might be at the forefront of the so-called AGI race.  
