
Chinese data centers told to stick to Nvidia chips, domestic chips not compatible


  • China is trying to establish a domestic GPU market
  • The idea is to steer away from reliance on sanctioned Nvidia GPUs
  • However, compatibility and cost are causing significant issues

US export restrictions have targeted China’s access to advanced chips, driven by fears that cutting-edge technology could bolster the country’s military capabilities.

These sanctions have forced the nation to ramp up efforts to develop its own GPU technology, with Chinese start-ups already making significant strides in GPU hardware and software development.

However, the shift from globally recognized Nvidia chips to homegrown alternatives requires extensive engineering work, slowing the pace of AI development. Despite China’s gains in this sector, the challenges posed by incompatible systems and technology gaps remain significant.

High cost and complexity

A government-backed think tank in Beijing has therefore suggested that Chinese data centers continue to use Nvidia chips due to the high costs and complexity involved in shifting to domestic alternatives.

Nvidia’s A100 and H100 GPUs, widely used for training AI models, were barred from export to China in August 2022, leading the company to create modified versions like the A800 and H800. However, these chips were also banned by Washington in October 2023, leaving China with limited access to the advanced hardware it had relied on.

Despite the rapid development of Chinese GPU start-ups, the think tank pointed out that transferring AI models from Nvidia hardware to domestic solutions remains challenging due to differences in hardware and software. The extensive engineering required for such a transition would result in significant costs for data centers, making Nvidia chips more appealing despite the limitations on availability.

Even with US sanctions in place, China’s AI computing power continues to grow rapidly. In 2023, the country’s total computing capacity, covering both central processing units (CPUs) and GPUs, increased by 27 per cent year on year to reach 230 EFLOPS.

GPU-based computing power, essential for AI model training and inference, grew even faster, with a 70 per cent increase over the same period. Furthermore, China’s AI hardware landscape has expanded significantly, with over 250 internet data centers (IDCs) either completed or under construction by mid-2023.

These centers are part of a larger push towards “new infrastructure,” backed by local governments, state-owned telecommunications operators, and private investors. However, this rapid build-out has also led to concerns about overcapacity and under-utilization.

“If the conditions allow, [data centres] can choose [Nvidia’s] A100 and H100 high-performance computing units. If the need for computing power is limited, they can also choose H20 or alternative domestic solutions,” the China Academy of Information and Communications Technology (CAICT) said in a report on China’s computing power development issued on Sunday.

“The trend of computing power fragmentation is increasingly severe, with GPU average use rates less than 40 per cent…There are big discrepancies on hardware in IDCs, such as in GPUs, AI accelerators and network structures, which made it harder to manage and dispatch hardware resources to accommodate for differential computing needs of AI tasks, further impeding the use rate,” the report added.

Via SCMP
