Explained | Why Nvidia tweaked its chip for China

U.S.-based chip designer Nvidia has altered its flagship tech after months of turmoil over government fears that exporting advanced chips could boost China’s military power

Updated - April 13, 2023 12:59 pm IST

Published - April 13, 2023 12:50 pm IST

File photo of the Nvidia logo | Photo Credit: REUTERS

The story so far: GPU and AI chip giant Nvidia on March 22 announced a specially designed version of its advanced H100 chip, the H800, that complies with U.S. export regulations, allowing the company to sell it to Chinese tech companies. The move came after months of policy changes by the Biden administration aimed at cracking down on China’s access to U.S.-designed technology such as chips and semiconductors.

Why is Nvidia changing its chips?

In September, U.S. officials ordered Nvidia to stop exporting two high-end chips - the A100 and the H100 - to Chinese customers over concerns that the technology could be put to military use in the future. Nvidia shares were hit by the move. Another chipmaker, AMD, said its MI250 AI chips were also affected by the export ban.

The restrictions were part of the Biden administration’s larger push to curtail the flow of advanced hardware and active components to China, so as to prevent the country from building up its AI, intelligence, and military capabilities. Most chips designed by the top players in this sector are manufactured in Taiwan, which China claims as its own territory.

The U.S. Department of Commerce did not spell out which technical chip specifications would attract an export ban. However, the Chinese foreign ministry and commerce ministry both criticised the move in strong terms at the time.


Will there be a difference in performance?

Nvidia has remained tight-lipped about the H800’s technical specifications, but stressed that the chip complies with U.S. export regulations. The H100 on which it is based, however, is a highly advanced and sophisticated processor built to accelerate demanding AI use cases. For this, the chip-to-chip data transfer rate needs to be high; otherwise, AI models risk losing speed.

Citing an anonymous source, Reuters reported that the chip-to-chip data transfer rate of the H800 was around half that of the H100.
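
To see why that matters, here is a rough, back-of-envelope sketch of one step of AI training spread across many chips. Every number in it - the compute time, the amount of data exchanged, the bandwidth figures - is an illustrative assumption, not a published Nvidia specification:

```python
# Illustrative sketch: how halving chip-to-chip bandwidth stretches one
# step of distributed AI training. All numbers are assumptions chosen
# for illustration, not published Nvidia specifications.

def step_time(compute_s, sync_bytes, bandwidth_bytes_per_s):
    """One training step = local compute + time to exchange data between chips."""
    comm_s = sync_bytes / bandwidth_bytes_per_s  # seconds spent moving data
    return compute_s + comm_s

COMPUTE_S = 0.10    # assumed compute time per step (seconds)
SYNC_BYTES = 20e9   # assumed data exchanged between chips per step (bytes)

full = step_time(COMPUTE_S, SYNC_BYTES, 400e9)  # assumed full interconnect rate
half = step_time(COMPUTE_S, SYNC_BYTES, 200e9)  # the same rate, halved

print(f"full bandwidth: {full:.2f} s/step")   # 0.15 s/step
print(f"half bandwidth: {half:.2f} s/step")   # 0.20 s/step
print(f"slowdown: {half / full:.2f}x")        # 1.33x
```

The sketch deliberately ignores that real training frameworks overlap communication with compute; the point is only that when chips spend part of each step waiting on data from other chips, halving the link speed lengthens that wait, and models train and respond more slowly.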

Speaking about the newly released H100 GPU in March, Nvidia CEO Jensen Huang said that it could bring down large language model processing costs “by an order of magnitude.”

Apart from artificial intelligence, the H100 also supports cloud computing. Some companies using or planning to introduce H100 hardware or technology include Oracle Cloud Infrastructure, Microsoft Azure, Meta, Google Cloud, and OpenAI.

Where will the chipsets be deployed?

An Nvidia spokesperson said that the H800 technology is being used for cloud computing by the Chinese internet giant Baidu, the e-commerce platform Alibaba, and the entertainment conglomerate Tencent. Mr. Huang noted that the chips would power the work of startups concentrating on generative AI and large language models.

In March, Baidu CEO Robin Li unveiled the AI-powered Ernie chatbot, meant to rival OpenAI’s ChatGPT. While a pre-recorded clip showing off the chatbot’s reported capabilities initially sent Baidu’s shares plummeting, positive user feedback a day later caused them to rise again.

Baidu plans to bring Ernie to its search engine, much as Microsoft has integrated OpenAI’s technology into Bing.

How crucial are the A100 and the H100 in AI development?

Big Tech companies like Microsoft and Google are racing to bring AI-enhanced features and search engines to the public. To power the supercomputers that make such complex processes possible, they need high-end hardware such as the H100 and the A100. An Nvidia A100 costs about $10,000, and thousands of such chips are required to power AI services such as Microsoft’s AI-enabled Bing chatbot. As a result, companies could end up paying billions of dollars to test their products even before they reach the public.
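
For a sense of scale, a quick calculation: only the roughly $10,000 unit price comes from the figure cited above, while the chip count is a hypothetical round number consistent with “thousands of such chips”:

```python
# Back-of-envelope hardware cost for a single AI supercomputer. The unit
# price is the ~$10,000 A100 figure cited above; the chip count of 10,000
# is a hypothetical round number used only for illustration.
CHIP_PRICE_USD = 10_000
ASSUMED_CHIP_COUNT = 10_000

print(f"${CHIP_PRICE_USD * ASSUMED_CHIP_COUNT:,}")  # $100,000,000
```

At roughly $100 million per such deployment, a handful of systems multiplied across products and companies quickly reaches the billions of dollars mentioned above.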

However, cost may not remain such a large barrier to entry: Mr. Huang claimed that the company’s technology was bringing down the price of the hardware required for AI processes. With Nvidia’s GPUs, he said, a large language model (LLM) could be built for around $10 million to $20 million, as per a CNBC report.

The A100 came out in 2020, while the H100 followed in 2022. Demand for these chips is evidently growing: the more expensive H100 brought in greater quarterly revenue than the A100 at the start of 2023.
