NVIDIA vs Samsung & SK Hynix: AI Chip Race

by Priyanka Patel

AI Arms Race Fuels Demand for High-Bandwidth Memory, Boosting Samsung and SK Hynix

Demand for high-bandwidth memory (HBM) is poised for continued growth as tech giants aggressively develop their own artificial intelligence (AI) accelerators, positioning Samsung Electronics and SK Hynix to capitalize on a burgeoning market. The escalating competition, dubbed an “anti-NVIDIA coalition” by some industry observers, is reshaping the semiconductor landscape and creating unprecedented opportunities for HBM suppliers.

Nvidia’s Dominance Challenged by Tech Giants

The push for in-house AI chips stems from a desire to reduce reliance on Nvidia, whose accelerators currently dominate the market. As one industry source noted, “What customer would order $50 billion worth of chips that haven’t been verified?” This skepticism underscores the strategic imperative for companies like Google, Amazon, Meta, and Microsoft to control their own AI infrastructure. Application-specific integrated circuits (ASICs) – semiconductors optimized for specific tasks – offer a path to lower costs and reduced power consumption compared to Nvidia’s offerings.

Nvidia CEO Jensen Huang has publicly downplayed the threat, asserting that his company’s technology remains unmatched. At the NVIDIA GTC 2025 event in June, Huang stated, “ASICs have absolutely no ability to replace our chips. If NVIDIA provides better technology, there is no need to develop ASICs.” However, market analysts suggest ASICs are likely to coexist with Nvidia’s products, particularly for AI inference tasks.

OpenAI’s Bold HBM Demand Signals Market Surge

The growing momentum behind ASICs is directly translating into increased demand for HBM, a critical component for fast computation in both AI accelerators and ASICs. This demand was dramatically highlighted by OpenAI CEO Sam Altman’s recent visit to Korea on November 1st, where he met with leaders from Samsung and SK Group. Altman reportedly requested a stable supply of 900,000 HBM wafers per month for the next four years – more than double the current global production capacity.

This potential partnership with OpenAI alone could unlock over 100 trillion won in new demand for Samsung and SK Hynix, considering the current HBM market size of $34 billion (approximately 48 trillion won) this year.

Big Tech Accelerators Drive HBM Innovation

Several major tech companies are actively developing and preparing to launch their own AI accelerators. Meta plans to mass produce its 4th-generation ‘MTIA’ next year, while Microsoft’s ‘Maia 200’ is also slated for mass production in 2025. Google’s 7th-generation ‘TPU’ will feature six 12-layer HBM4 modules, and Amazon’s 3rd-generation ‘Trainium’ will incorporate four 12-layer HBM3E modules.

[Image of Microsoft’s Maia100 AI accelerator from the MS Azure website]

S&P Global Ratings predicts the HBM market will continue its upward trajectory, fueled by both increasing demand for AI GPUs and the growing adoption of ASICs. While SK Hynix currently leads the HBM market, analysts believe the rise of ASICs presents an opportunity for Samsung Electronics and Micron to gain ground.

Strategic Implications and Future Outlook

According to Kim Woong, a senior researcher at NICE Credit Rating, major AI companies will likely leverage ASICs for tasks that don’t require the most advanced GPUs, such as the training or inference of lightweight models. He added, “With the increase in users and traffic, the ASIC market will also gradually demand higher versions of HBM.”

The semiconductor industry anticipates continued HBM market growth through at least 2027. The competitive dynamic between Nvidia and the “anti-NVIDIA coalition” is ultimately a positive development for Samsung and SK Hynix, ensuring robust demand for their HBM products as the AI revolution unfolds.
