Today we’re running a guest piece from Ray Wang, a Washington-based analyst.
On December 2, the Department of Commerce released new export control packages targeting Chinese access to high-bandwidth memory (HBM) and semiconductor manufacturing equipment, including tools essential for HBM manufacturing and packaging, and added over 140 Chinese chipmakers and chip toolmakers to the Entity List. The new controls on HBM, an essential component of the AI chips that train complex AI models and power AI data centers, will constrain China’s AI development, which is already hampered by earlier rounds of export controls announced in October 2022, October 2023, and April 2024.
Why HBM Matters
The proliferation of large language models (LLMs) has prompted substantial demand for high-performance computing (HPC) and AI data center infrastructure. HBM, or High-Bandwidth Memory, a type of dynamic random-access memory (DRAM), has become a key component of AI chips — specifically Graphics Processing Units (GPUs) and application-specific integrated circuits (ASICs) that train AI models and power data centers.
Integrating HBM with GPUs or ASICs effectively addresses the so-called “memory wall” bottleneck — a performance constraint caused by the gap between processor speeds and memory access rates. By enabling rapid access to data with lower energy consumption, HBM improves the efficiency of data-intensive AI workloads. This is why most GPUs and ASICs need to incorporate HBM to optimize performance in AI training and inference tasks.
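To make the “memory wall” concrete, here is a minimal roofline-style sketch in Python. The hardware numbers (peak FLOPS and HBM bandwidth, roughly in line with public H100-class figures) and the workloads are illustrative assumptions, not vendor specifications:

```python
# Illustrative roofline check: is a workload compute-bound or memory-bound?
# The hardware figures below are rough public numbers for an H100-class GPU,
# used here only as assumptions for illustration.

PEAK_FLOPS = 989e12      # ~989 TFLOPS dense FP16 compute
HBM_BANDWIDTH = 3.35e12  # ~3.35 TB/s HBM3 memory bandwidth

def is_memory_bound(flops: float, bytes_moved: float) -> bool:
    """A kernel is memory-bound when its arithmetic intensity
    (FLOPs per byte of memory traffic) falls below the machine
    balance point (peak FLOPS divided by memory bandwidth)."""
    arithmetic_intensity = flops / bytes_moved
    machine_balance = PEAK_FLOPS / HBM_BANDWIDTH  # ~295 FLOPs/byte
    return arithmetic_intensity < machine_balance

# Elementwise FP16 vector add: one FLOP per element, three elements of traffic.
n = 4096
print(is_memory_bound(n, 3 * n * 2))

# Large FP16 matrix multiply: 2*n^3 FLOPs over three n-by-n matrices of traffic.
print(is_memory_bound(2 * n**3, 3 * n * n * 2))
```

The vector add comes out memory-bound (each FLOP demands several bytes of traffic), while the large matrix multiply does not, which is why memory bandwidth rather than raw compute often limits data-intensive AI workloads.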
HBM is also vital from a cost-structure perspective, accounting for 50% or more of the total cost of an AI chip. For Nvidia’s H100 GPU, for example, HBM accounts for roughly 50% of the total cost, followed by about 40% for advanced packaging and the advanced manufacturing of the logic die (both handled, in Nvidia’s case, by the global foundry leader TSMC). Other materials, such as printed circuit boards (PCBs), make up the remaining 10%.
Global demand for HBM has soared over the past two years, driven by rising demand for GPUs and ASICs to support AI model training and data center buildouts. Morgan Stanley’s December report forecasts that global HBM demand in 2025 will double 2024 levels. The HBM market was previously projected to reach up to $33 billion by 2027, an eightfold increase from $4 billion in 2023. There are already early signs supporting such a bullish outlook: HBM suppliers like SK Hynix and Micron have sold out their HBM production through late 2025.
HBM’s unique function has made it an indispensable component of AI accelerators and of the broader AI chip supply chain. Today, almost all leading GPUs and ASICs, including those from Nvidia, AMD, Intel, Google, Amazon, Tesla, Microsoft, and Huawei, integrate HBM to enhance performance (see Figure 1). This essential role has elevated HBM’s strategic value, positioning it as a linchpin of the AI chip supply chain and as one of the key reasons behind the Biden administration’s decision to restrict it.
Asian Chipmakers Run the Game
According to Goldman Sachs, SK Hynix and Samsung Electronics together hold more than 90% of the global HBM market (see Figure 2). Notably, SK Hynix and Micron lead the race in the most advanced HBM, outpacing Samsung, which is struggling to meet Nvidia’s qualification standards for supplying its most advanced HBM.
SK Hynix, in particular, has emerged as the world’s leading HBM manufacturer, securing the bulk of orders from Nvidia, the market’s top HBM buyer, for its advanced GPUs. SK Hynix’s success in high-margin HBM has even allowed its financial performance to outpace that of its long-time rival Samsung’s chip division, which has struggled in both the foundry and HBM businesses (Figure 3).
In addition to memory makers, TSMC is another critical player in this equation. Apart from its renowned capability in advanced logic chip manufacturing (logic dies being another key component of AI chips), TSMC controls approximately 90% of annual global capacity for Chip-on-Wafer-on-Substrate (CoWoS), an advanced packaging technology required to integrate HBM and logic dies on a silicon interposer, which is then positioned on top of the packaging substrate.
TSMC’s CoWoS advanced packaging capabilities are indispensable because nearly all integration of existing GPUs or ASICs with HBM relies on its advanced packaging in Taiwan; this includes companies such as Nvidia, AMD, Marvell, Broadcom, and AWS. While TSMC’s leadership in advanced logic chip manufacturing already positions it as one of the most important actors in the AI chip supply chain, its global dominance in CoWoS packaging further consolidates its central role. Notably, AI chip packaging is a bottleneck that has yet to receive enough attention.
Is China Falling Behind?
China has been lagging in both HBM and AI chip packaging, more because of underinvestment than because of export controls. HBM only began attracting significant attention within the memory industry in the past two years; before that, it remained largely overlooked. SK Hynix has been developing HBM since 2013, initially in partnership with AMD on HBM1. This industry-leading start did not translate into instant success for either SK Hynix or AMD: demand for HBM was minimal, and it generated negligible revenue within SK Hynix’s overall DRAM business. The same dilemma confronted other memory giants. Samsung, for example, even dissolved its HBM team in 2019, citing the segment’s limited market potential.
Similarly, while China’s biggest DRAM makers, such as CXMT, have narrowed the technology gap with competitors in traditional DRAM, they skipped HBM development, likely because of its perceived limited market potential. These years of insufficient investment have left the Chinese memory industry behind the market leaders. The same logic applies to domestic packaging for AI chips.
This gap becomes even clearer when examining the product roadmaps of the four major DRAM manufacturers (see Figure 5). Samsung commenced mass production of HBM2 (the 2nd generation of HBM) in 2016, followed by SK Hynix in 2018. Chinese memory maker CXMT, however, only recently began mass production of HBM2, suggesting that China is roughly six to eight years, or three generations, behind the front-running manufacturers. This gap is evident in earlier reports of Huawei and Baidu stockpiling Samsung’s HBM2E (the 3rd generation of HBM) while Chinese domestic firms were still developing HBM2.
Based on the product roadmap, CXMT should be able to catch up with today’s advanced HBM in roughly six to eight years. Yet existing and recent restrictions on semiconductor manufacturing equipment (SME), including manufacturing and packaging tools for HBM, could push that timeline out. Many SME tools serve overlapping functions (e.g., etching, lithography) across HBM manufacturing, logic chipmaking, and advanced packaging processes. As a result, these restrictions, whether directly targeting logic chipmaking, HBM manufacturing, or packaging, are likely to hamper Chinese firms’ progress in HBM and the advanced packaging it requires. These challenges are further exacerbated by existing curbs on the advanced lithography tools critical for cutting-edge HBM production.
It is also worth considering how the previous restrictions on advanced memory chips might affect China’s HBM development. Since HBM is essentially a memory technology that stacks several DRAM dies, limitations on advanced DRAM chips could continue to be a roadblock to China’s HBM advancement.
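The bandwidth payoff of that stacked design can be sketched with simple arithmetic: each HBM stack exposes a very wide (1024-bit) interface, and successive generations mainly raise the per-pin transfer rate. The generation figures below are approximate public specs, used here only for illustration:

```python
# Rough sketch of why HBM's stacked, wide-interface design delivers bandwidth:
# per-stack bandwidth = (bus width in bits / 8) * per-pin transfer rate (Gbps).
# Generation figures are approximate public specs, for illustration only.

def stack_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return bus_width_bits / 8 * pin_rate_gbps

# Each generation keeps the wide 1024-bit interface and raises pin speed.
generations = {
    "HBM1":  (1024, 1.0),   # ~128 GB/s per stack
    "HBM2":  (1024, 2.0),   # ~256 GB/s
    "HBM2E": (1024, 3.2),   # ~410 GB/s
    "HBM3":  (1024, 6.4),   # ~819 GB/s
}
for name, (width, rate) in generations.items():
    print(f"{name}: {stack_bandwidth_gbs(width, rate):.0f} GB/s per stack")
```

A conventional DDR channel is only 64 bits wide, so the 1024-bit stacked interface, multiplied across several stacks per GPU, is what lets HBM feed data-hungry AI accelerators.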
More importantly, the pace of development is pivotal. If Chinese memory makers continue to advance more slowly than the market leaders in the coming years, the technological gap will be hard to narrow. In 2024, Chinese GPUs and ASICs are estimated to account for merely 1% of global HBM consumption. The rest comes from U.S. firms like Nvidia, Google, AMD, AWS, Intel, Microsoft, and Tesla, all reliant on HBM from SK Hynix, Samsung, and Micron. Even the 1% of HBM consumed by Chinese GPUs and ASICs is mainly supplied by Samsung rather than by Chinese memory makers.
To that end, SK Hynix, Samsung, and Micron can generate far more revenue from global GPU and ASIC firms than Chinese memory makers can in the coming years, and reinvest it in R&D for the next generation of HBM or other areas essential to their development. HBM’s strong market growth also makes it easier for these firms to persuade their leadership and investors to allocate more resources to HBM development to maintain or even expand their edge, a trend already evident at SK Hynix and Samsung. These business rationales will not necessarily apply to Chinese memory firms, given the limited demand for now.
Samsung is also a big loser from the BIS’s new rule: 20% of its HBM revenue in 2024 came from China, and those sales are now banned. The impact should show up in Samsung’s earnings in the coming quarters. By contrast, the new rule should have a relatively small impact on SK Hynix and Micron, which both supply their HBM mostly to Nvidia and other non-Chinese firms.
Lastly, China’s advanced packaging technologies and capacity remain limited. Compounding this challenge, AI chip packaging leader TSMC is unlikely to serve leading Chinese AI firms due to existing restrictions. With that in mind, even if China advances in HBM technology in the coming years, its ability to close the gap with TSMC in advanced packaging remains uncertain under enhanced SME restrictions. Without advanced packaging capability, Chinese firms will struggle to optimally integrate domestic HBM with GPUs or ASICs, which will ultimately hurt their AI chips’ performance. Admittedly, emerging Chinese packagers like JCET and Tongfu Microelectronics have “CoWoS-like” packaging capability, but given the limited information available, it is still unclear how successfully they can package domestic HBM and GPUs together.
That said, one should not underestimate China’s capability to close the gap with the market leaders. Leading memory makers like YMTC and CXMT have proved able to rapidly narrow the gap in NAND flash and DRAM, and to ramp up capacity fast enough to disrupt the market. Given the optimistic outlook for domestic HBM demand from GPUs and ASICs, increasing R&D investment, and continued government support, Chinese memory and packaging firms are poised to accelerate their technological advancement. This is especially likely at a time when both government and industry feel heightened urgency to build domestic HBM and AI chip supply chains amid increasing U.S. restrictions. Moreover, Chinese President Xi’s pursuit of “New Quality Productive Forces” and “self-sufficiency” is likely to bring more government support for domestic HBM and AI chip supply chains.
These factors are likely to compel domestic GPU and ASIC providers to adopt homegrown HBM, stimulating the memory industry’s growth and spurring more public and private investment in this area. Chinese AI chip companies are also expected to expand their collaboration with domestic HBM makers and advanced packaging firms, given their limited access to foreign products and the imperative to strengthen the local AI chip supply chain.
In fact, there are already signs of these trends. Following Beijing’s call earlier this year to prioritize domestic chip adoption, several Chinese industry groups responded to the BIS’s new restrictions on Monday with statements warning domestic firms that “U.S. chips are unreliable.” Recent reports also suggest that Huawei, alongside the government, is supporting local HBM and advanced packaging capabilities. Additionally, domestic foundries like XMC are reportedly ramping up efforts to produce HBM, signaling early moves toward a Chinese HBM ecosystem.
China’s AI development may not face immediate setbacks, given that much of the advanced hardware supporting its AI industry is still foreign made. Most leading AI firms, such as Alibaba, Baidu, and Tencent, still train their models on Nvidia GPUs procured before the restrictions. Similarly, Huawei’s latest Ascend GPUs still use SK Hynix and Samsung HBM2 and HBM2E, also sourced before the restrictions took effect. China’s semiconductor industry is likely to feel the impact in late 2025 or 2026, since many Chinese firms have prepared for this restriction by purchasing additional equipment over the past year. Nevertheless, China’s AI and semiconductor industries are ultimately on track to encounter a substantial “hardware bottleneck.” They will increasingly feel the impact of restrictions on high-end logic and memory chips (including HBM), as well as on SME. Huawei’s chipmaking partner SMIC, for example, is already struggling to produce logic chips below 7nm at commercially viable yield rates, despite earlier progress. Memory leader CXMT is likely to face a similar struggle as the SME restrictions disrupt its HBM development and production.
In short, the new restrictions on advanced HBM access will impede the performance of future Chinese AI chips, including those from major players like Huawei and startups like Biren and Moore Threads. The broader SME export controls will undermine China’s ability to develop and improve its HBM and AI chips.
While export controls will significantly impact the industry, they cannot entirely block Chinese firms from advancing in critical technologies; instead, they force progress down costlier, slower, and more difficult paths.
[Image series: “HBM Architecture Series,” in the styles of Bjarke Ingels Group, Zaha Hadid, Frank Lloyd Wright, and Frank Gehry]