Last week, at its Explore event, VMware introduced Private AI, which lets businesses develop generative AI on-premise, in their own data centres rather than the cloud. Private AI brings AI models to where data is generated, processed, and consumed. At the event, VMware also announced a partnership with NVIDIA.

The partnership aims to enable enterprises to customise models and run generative AI applications, including intelligent chatbots, assistants, search, and summarisation. The platform will be a fully integrated solution featuring generative AI software and accelerated computing from NVIDIA, built on VMware Cloud Foundation and optimised for AI.

VMware CEO, Raghu Raghuram, said, “With VMware Private AI, we are empowering our customers to tap into their trusted data so they can build and run AI models quickly and more securely in their multi-cloud environment.” 

This looks like a great opportunity for VMware, but what’s in it for NVIDIA?

With this partnership, NVIDIA plans to diversify its own services, a move that comes as some of its biggest buyers are designing their own chips to eventually handle the growing demands of data crunching in AI. Google, for instance, now designs its own AI chips, and, as AIM predicted, Microsoft is following suit with the development of its Athena chips. This would inevitably lead to lower demand for NVIDIA’s chips in the long term.

Despite its hardware origins, NVIDIA looks committed to a slow but steady march towards cloud offerings. In 2020, the company acquired Mellanox Technologies, a data centre networking company, for a whopping USD 6.9 billion.

The partnership comes on the heels of NVIDIA’s launch of AI Foundations, a new range of cloud services, each tailored to a different function, especially within the realm of generative AI. That launch was merely a hint that NVIDIA was stepping outside its GPU ring; with its recent partnership with VMware, NVIDIA has made it very evident.

Mutually Rewarding

While NVIDIA finds a new avenue for diversification, VMware stands to benefit handsomely. The collaboration aims to make the solution compatible with hardware from companies like Dell, Lenovo, and HPE, opening the door to adoption across industries and enabling customers to easily deploy it according to their requirements.

“Using Nvidia AI software as well as Nvidia AI hardware, how do we almost make it like an appliance that you can work with Dell and Lenovo and HPE and others so customers can deploy it wherever they need to?” said VMware CEO Raghu Raghuram.

The partnership between NVIDIA and VMware doesn’t seem rushed; rather, it appears well thought out.

Jensen Huang also mentioned VMware repeatedly during NVIDIA’s blockbuster earnings call. With 15 mentions, VMware came across as NVIDIA’s go-to partner in its cloud effort, ahead of giants like Google or Apple. The partnership, then, is not out of the blue and is mutually beneficial.

Additionally, NVIDIA is providing VMware with new computing capabilities. Jensen announced a breakthrough enabling VMware to achieve bare-metal performance while maintaining security, manageability, and vMotion capabilities across various GPUs and nodes.

“GPUs are in every cloud or on-premise servers everywhere. And VMware is everywhere. And so for the very first time, enterprises around the world will be able to do private AI. Private AI at scale, deployed into your company and know that it’s fully secure and multi-platform,” Jensen said. 

This would drive increased and optimised usage of NVIDIA GPUs. The chip giant, in turn, is helping sell VMware’s vision of Private AI like it was nobody’s business.

So much so that the uber-cool, jacketed NVIDIA CEO called himself “the best sales guy”, drawing a chuckle from Raghuram in approval of his salesmanship.

Benefits For The Ecosystem 

Collectively, the partnership presents a holistic solution for the ecosystem: it addresses privacy concerns, provides model flexibility, facilitates scalability, and optimises cost efficiency.

One of the main issues this offering resolves is enterprises’ uncertainty over privacy and security.

It could also ease data centre costs and address the GPU crunch amid the AI gold rush. By harnessing the full range of computing resources, GPUs, DPUs, and CPUs alike, through virtual machines, the Private AI Foundation ensures optimal resource utilisation, ultimately translating into lower overall costs for enterprises.

Flexibility emerges as another critical advantage. 

“For any meaningful enterprise, their data lives in all types of locations, distributed computing and multi-cloud will be at the very foundation of AI, there’s no way to separate these two”, said VMware CEO Raghu Raghuram.

The platform offers businesses the freedom to choose where they build and deploy their models, ranging from the NVIDIA NeMo framework to versatile options like Llama 2 and beyond. This flexibility streamlines the model development process, enhancing adaptability to specific enterprise needs.

Ultimately, the VMware-NVIDIA partnership ushers in a new era of on-premise generative AI, reshaping the AI landscape and presenting a mutually rewarding collaboration that benefits both companies and the broader ecosystem.
