Microsoft Azure users may soon enjoy the benefits of having thousands of Nvidia GPUs at their fingertips thanks to a new partnership between the two tech giants.
On November 16th, Microsoft and Nvidia announced that they would collaborate on building and deploying an AI supercomputer in the cloud. On completion, this would make Azure the first public cloud to offer the full Nvidia AI stack: several thousand GPUs, networking, and AI software.
In doing so, Azure can help more companies train, deploy, and scale their AI models, including large, state-of-the-art ones.
According to Manuvir Das, vice president of enterprise computing at Nvidia, the breakthrough of foundation models has spurred a great deal of related research, enabled the creation of new startups, and driven the development of new enterprise-grade applications. In other words, both the advancement of AI technologies and their adoption across industries are running at a fast clip.
Currently, a number of Azure instances pair Nvidia A100 GPUs with Nvidia's Quantum 200 Gb/s InfiniBand networking. In the near future, instances will also integrate Nvidia H100 GPUs and Nvidia Quantum-2 400 Gb/s InfiniBand networking.
Enterprise customers on Azure also get access to the full stack of Nvidia AI workflows, along with software development tools that have been optimized specifically for the platform.
At the same time, Nvidia AI Enterprise has been certified by Microsoft and is supported on Azure instances with A100 GPUs. Support for instances running H100 GPUs will take a little longer, as it is being worked into the next release.
This collaboration is expected to run for several years and will include a number of mutually beneficial initiatives. For example, Microsoft and Nvidia will work together to optimize Microsoft's DeepSpeed software, which was created to accelerate deep learning training. In turn, Nvidia will use Azure virtual machines to advance its own generative AI initiatives.