NVIDIA announces multi-year collaboration with Microsoft to build “massive” AI computer 

NVIDIA will supply tens of thousands of GPUs, Quantum-2 InfiniBand networking, and its full stack of AI software to Microsoft Azure

November 17, 2022 01:05 pm | Updated 01:05 pm IST

A file photo of the Nvidia logo seen at its headquarters in Santa Clara, California | Photo Credit: Reuters

NVIDIA announced on Wednesday that it is partnering with Microsoft to build a “massive” cloud AI computer. 


The U.S.-based chip designer and computing firm will provide tens of thousands of GPUs, Quantum-2 InfiniBand networking, and its full stack of AI software to Azure. Microsoft and global enterprises will use the platform for rapid and cost-effective AI development and deployment, the company said in a blog post.

The collaboration will see Microsoft Azure’s advanced supercomputing infrastructure combined with NVIDIA GPUs, networking, AI workflows, and software development kits.

NVIDIA will utilise Azure’s scalable virtual machine instances for research in generative AI.

Microsoft, meanwhile, will use the NVIDIA H100 Transformer Engine to accelerate transformer-based models used for large language models, generative AI, and writing computer code, among other applications.

NVIDIA also said that Microsoft Azure’s AI-optimised virtual machine instances, architected with its advanced data centre GPUs, will be the first public cloud instances to incorporate NVIDIA Quantum-2 400Gb/s InfiniBand networking. Because this allows thousands of chips to work together across several servers, it will enable the most complex recommender systems, along with generative AI, to run at scale.

“We’re at that inflection point where AI is coming to the enterprise and getting those services out there that customers can use to deploy AI for business use cases is becoming real,” Ian Buck, Nvidia’s general manager for Hyperscale and HPC told Reuters. “We’re seeing a broad groundswell of AI adoption ... and the need for applying AI for enterprise use cases.”

Nvidia declined to comment on how much the deal is worth, but industry sources said each A100 chip is priced at about $10,000 to $12,000, and the H100 is far more expensive than that.

(With inputs from agencies)
