Meta expects first shipments of new Nvidia chips later this year

Meta Platforms expects to receive initial shipments of Nvidia’s new flagship artificial intelligence chip later this year.

Published - March 20, 2024 09:44 am IST

Social media giant Meta is one of Nvidia’s biggest customers. | Photo Credit: REUTERS

Facebook owner Meta Platforms expects to receive initial shipments of Nvidia's new flagship artificial intelligence chip later this year, a Meta spokesperson told Reuters.

Nvidia, the dominant designer of GPU (graphics processing unit) chips needed to power most cutting-edge artificial intelligence work, announced the new B200 "Blackwell" chip at its annual developer conference on Monday.

The chipmaker said the B200 is 30 times faster at tasks like serving up answers from chatbots. It did not give specific details about how well the chip performs when chewing through huge amounts of data to train those chatbots, the kind of work that has powered most of Nvidia's soaring sales.

Nvidia's Chief Financial Officer Colette Kress told financial analysts on Tuesday that "we think we're going to come to market later this year," but also said that shipment volume for the new GPUs would not ramp up until 2025.

Social media giant Meta is one of Nvidia's biggest customers, having bought hundreds of thousands of its previous generation of chips to support pushes into amped-up content recommendation systems and generative AI products.

Meta CEO Mark Zuckerberg disclosed in January that the company planned to have about 350,000 of those earlier chips, called H100s, in its stockpile by the end of the year. In combination with other GPUs, he added, Meta would have the equivalent of about 600,000 H100s by then.

In a statement on Monday, Zuckerberg said Meta planned to use Blackwell to train the company's Llama models. The company is currently training a third generation of the model on two GPU clusters it announced last week, which it said each contain around 24,000 H100 GPUs.

Meta planned to continue using those clusters to train Llama 3 and would use Blackwell for future generations of the model, the Meta spokesperson said.
