
NVIDIA announces multi-year collaboration with Microsoft to build “massive” AI computer 

November 17, 2022 01:05 pm | Updated 01:05 pm IST

NVIDIA will provide tens of thousands of GPUs, Quantum-2 InfiniBand networking, and its full stack of AI software to Microsoft Azure

A file photo of the NVIDIA logo seen at its headquarters in Santa Clara, California. | Photo Credit: Reuters

NVIDIA on Wednesday announced it is partnering with Microsoft to build a “massive” cloud AI computer. 



The U.S.-based chip designer and computing firm will provide tens of thousands of GPUs, Quantum-2 InfiniBand networking, and its full stack of AI software to Azure. Microsoft and global enterprises will use the platform for rapid, cost-effective AI development and deployment, the company said in a blog post.


The collaboration will see Microsoft Azure’s advanced supercomputing infrastructure combined with NVIDIA GPUs, networking, AI workflows, and software development kits.

NVIDIA will utilise Azure’s scalable virtual machine instances for research in generative AI.

Microsoft, meanwhile, will leverage the NVIDIA H100 Transformer Engine to accelerate transformer-based models used for large language models, generative AI, and writing computer code, among other applications.


NVIDIA also said that Microsoft Azure’s AI-optimised virtual machines, built with its advanced data center GPUs, will be the first public cloud instances to incorporate NVIDIA Quantum-2 400Gb/s InfiniBand networking. Because this lets thousands of GPUs work together across multiple servers, it will allow the most complex recommender systems, as well as generative AI, to run at scale.

“We’re at that inflection point where AI is coming to the enterprise and getting those services out there that customers can use to deploy AI for business use cases is becoming real,” Ian Buck, NVIDIA’s general manager for Hyperscale and HPC, told Reuters. “We’re seeing a broad groundswell of AI adoption ... and the need for applying AI for enterprise use cases.”

NVIDIA declined to comment on how much the deal is worth, but industry sources said each A100 chip is priced at about $10,000 to $12,000, and the H100 is far more expensive.

(With inputs from agencies)
