Major highlights from Meta’s Inside the Lab event

CEO Mark Zuckerberg announced a few breakthroughs in AI research for the metaverse, alongside his vision for AI assistants under Project CAIRaoke, and more, at Meta’s Inside the Lab: Building for the Metaverse with AI event.

Highlighting the challenges in building a metaverse, Zuckerberg said the key to unlocking many of these advances lies in AI, particularly in creating a new generation of assistants that will help users navigate the metaverse as well as the physical world via augmented reality (AR).

Further, he said that because these worlds will be dynamic and always changing, AI will need to understand context and learn the way humans do. “When we have glasses on our faces, that will be the first time that an AI system will be able to really see the world from our perspective,” said Zuckerberg.

In his keynote speech, Zuckerberg said computing is becoming more contextual. “As devices have gotten better at understanding and anticipating what we want, they have also gotten more useful. I expect that these trends will only increase in the future,” he added.

He said the metaverse would consist of immersive worlds that users can create and interact with, capturing their position in 3D space, body language, facial gestures, and more. “So, you experience it and move through it as if you are really there,” he added.

Project CAIRaoke

Meta announced Project CAIRaoke, an end-to-end neural model for building on-device assistants, to help deliver better dialogue capabilities, true world creation, and exploration.
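Meta has described the idea behind CAIRaoke as replacing the traditional assistant stack, with its separate understanding, state-tracking, policy, and generation modules, with one jointly trained network. As a rough, hypothetical illustration of that difference (none of the names below are Meta’s code):

    from typing import Callable, Dict, List, Tuple

    # Hypothetical sketch, not Meta's code: a conventional assistant chains
    # four separately built modules; an end-to-end model like Project
    # CAIRaoke maps the conversation history straight to a response.

    def nlu(utterance: str) -> Tuple[str, Dict[str, int]]:
        """Stub natural-language understanding: returns (intent, slots)."""
        if "timer" in utterance:
            return "set_timer", {"minutes": 10}
        return "chitchat", {}

    def pipeline_assistant(utterance: str, state: Dict) -> str:
        intent, slots = nlu(utterance)      # 1. understanding
        state.update(slots, intent=intent)  # 2. dialog state tracking
        action = "confirm" if intent == "set_timer" else "smalltalk"  # 3. policy
        return f"<{action}>"                # 4. response generation

    def end_to_end_assistant(history: List[str],
                             model: Callable[[List[str]], str]) -> str:
        # One neural model subsumes all four stages, so errors no longer
        # compound across hand-engineered module boundaries.
        return model(history)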

He said Meta AI is exploring two research areas to make this possible – egocentric perception, which is about seeing the world from a first-person perspective, and a whole new class of generative AI models that help users create anything they can imagine.

Further, he demonstrated BuilderBot, a new tool that lets users generate or import objects into a virtual environment via voice commands.

Universal Speech Translator 

Meta said nearly half of the world’s population does not have access to online content in their preferred language. To address this challenge, Meta is building a Universal Speech Translator, an AI system designed to provide real-time speech-to-speech translation across all languages.
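Meta has framed this as a move from today’s cascaded pipelines toward direct speech-to-speech translation. The sketch below contrasts the two approaches; all function names are hypothetical stand-ins, not Meta’s system:

    import numpy as np

    # Hypothetical sketch: a cascaded translator chains three models, while
    # a direct speech-to-speech model maps audio to audio in one step.
    # Skipping the text stage cuts latency and also covers languages that
    # have no standard written form.

    def cascaded_s2st(audio: np.ndarray, asr, mt, tts) -> np.ndarray:
        text = asr(audio)       # speech recognition: source speech -> source text
        translated = mt(text)   # machine translation: source text -> target text
        return tts(translated)  # speech synthesis: target text -> target speech

    def direct_s2st(audio: np.ndarray, s2st_model) -> np.ndarray:
        return s2st_model(audio)  # one model, no intermediate text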

Jérôme Pesenti, who currently leads Facebook AI, also made two major announcements. In collaboration with the Instagram equity team, the responsible AI team plans to publish a prototype system card for Instagram feed ranking. 

Meta also revealed TorchRec, a library for building state-of-the-art (SOTA) recommendation systems on top of the open-source PyTorch machine learning framework. The library will power personalisation across Meta’s products.

“TorchRec demonstrates Meta’s commitment to AI transparency and open science. It is available as a PyTorch library and provides common sparsity and parallelism primitives, enabling researchers to build the same state-of-the-art personalisation that is used by Facebook News Feed and by Instagram Reels today,” Pesenti added.
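No code was walked through on stage, but TorchRec’s public examples give a feel for the API. The sketch below is adapted from those examples; the table names, dimensions, and ids are illustrative:

    import torch
    import torchrec
    from torchrec.sparse.jagged_tensor import KeyedJaggedTensor

    # Two embedding tables, one per sparse feature (names and sizes are
    # illustrative). Lookups are pooled per example by default.
    ebc = torchrec.EmbeddingBagCollection(
        device=torch.device("cpu"),
        tables=[
            torchrec.EmbeddingBagConfig(
                name="product_table", embedding_dim=64,
                num_embeddings=4096, feature_names=["product"]),
            torchrec.EmbeddingBagConfig(
                name="user_table", embedding_dim=64,
                num_embeddings=4096, feature_names=["user"]),
        ],
    )

    # Sparse ids for a batch of three examples, as a jagged tensor:
    # "product" ids per example: [101, 202], [], [303]
    # "user" ids per example:    [404], [505], [606]
    features = KeyedJaggedTensor(
        keys=["product", "user"],
        values=torch.tensor([101, 202, 303, 404, 505, 606]),
        lengths=torch.tensor([2, 0, 1, 1, 1, 1]),
    )

    pooled = ebc(features)          # KeyedTensor of pooled embeddings
    print(pooled["product"].shape)  # torch.Size([3, 64])
    print(pooled["user"].shape)     # torch.Size([3, 64])

The jagged-tensor input is what lets TorchRec batch variable-length sparse features; per the library’s documentation, the same modules can later be sharded across GPUs with its DistributedModelParallel wrapper for production-scale models.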

Check out TorchRec on GitHub: https://github.com/pytorch/torchrec

Meta is also looking to create a consortium of professors at universities with large populations of students from underrepresented groups to teach AI courses.

According to Zuckerberg, there are four basic pillars of Meta’s work on AI: foundational research, AI for products, responsible AI, and AI infrastructure. “In each of these areas, we have some pretty ambitious hiring goals,” he said.

Inside the Lab offered a sneak peek into Meta’s metaverse roadmap, alongside its breakthroughs in self-supervised learning and AI supercomputers.