Nvidia may be forced to raise prices on its next-gen GPUs and AI chips thanks to skyrocketing costs tied to new memory tech. Samsung's upcoming HBM4 modules, boasting 3.3 TB/s of bandwidth, will reportedly cost Nvidia more than twice as much as the current generation, a sharp shift that could ripple across the gaming and AI landscape by 2026.
Samsung HBM4 pushes bandwidth and pricing to extremes

Samsung has managed to nearly triple the bandwidth of its new HBM4 memory compared to HBM3E. These 36GB modules now hit 3.3 TB/s thanks to a redesigned stacking architecture and advanced signal correction techniques. But that power won’t come cheap.
Industry insiders say Nvidia is set to pay Samsung over $500 per HBM4 module, more than double the ~$250 it charged for HBM3E. SK Hynix is reportedly charging even more, around $550, largely due to higher production costs tied to TSMC's base-die supply.
With demand for AI computing exploding, Nvidia doesn’t have much leverage. “Nvidia’s demand for HBM4 is so high that Samsung Electronics has no choice but to secure its supply at a high price,” sources revealed.
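To put the jump in perspective, here's a rough back-of-the-envelope sketch of what those per-module prices could mean for the memory bill on a single accelerator. Only the ~$250, ~$500, and ~$550 figures come from the reporting above; the eight-stack count is an illustrative assumption, not a confirmed Nvidia design.

```python
# Back-of-the-envelope memory cost per accelerator package.
# Module prices are the figures reported above; the stack count
# is an illustrative assumption, not a confirmed Nvidia design.
HBM3E_PRICE = 250    # USD per module (reported)
HBM4_SAMSUNG = 500   # USD per module (reported: "over $500")
HBM4_SKHYNIX = 550   # USD per module (reported: "around $550")

STACKS_PER_GPU = 8   # assumption: flagship AI GPUs pair several HBM stacks

for name, price in [("HBM3E", HBM3E_PRICE),
                    ("HBM4 (Samsung)", HBM4_SAMSUNG),
                    ("HBM4 (SK Hynix)", HBM4_SKHYNIX)]:
    print(f"{name:16s} -> ${price * STACKS_PER_GPU:,} per {STACKS_PER_GPU}-stack package")

# HBM3E            -> $2,000 per 8-stack package
# HBM4 (Samsung)   -> $4,000 per 8-stack package
# HBM4 (SK Hynix)  -> $4,400 per 8-stack package
```

On those assumed numbers, the memory bill alone roughly doubles per package, before any markup Nvidia applies downstream.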
AI race drives up cost of memory, squeezes gaming market
As Nvidia, Google, and Meta pour billions into AI infrastructure, they’re outbidding PC and gaming hardware makers on memory orders. Epic CEO Tim Sweeney recently warned that this trend could price premium gaming PCs out of reach. His comment came after RAM prices doubled within a month, with some users paying $500 for kits that cost half as much weeks earlier.
This crunch isn’t isolated to HBM. SK Hynix has also confirmed blistering specs for its new GDDR7 and LPDDR6 memory:
- GDDR7: 48 Gb/s per pin (3× faster than GDDR6), 24GB capacity
- LPDDR6: 14.4 Gb/s per pin with better voltage regulation
These next-gen modules will power both AI inference workloads and high-end gaming graphics cards. But if current trends hold, gamers may face GPU price tags that creep even higher in 2026.
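For a sense of what 48 Gb/s per pin means at the card level, here's a quick sketch converting a per-pin rate into aggregate bandwidth. The 384-bit bus width is a hypothetical example typical of flagship gaming cards, not a confirmed product spec; only the per-pin rates come from the specs above.

```python
# Convert a per-pin data rate into aggregate memory bandwidth.
# The bus width is an assumed example; only the per-pin rates
# come from the announced specs quoted above.
def aggregate_bandwidth_gbs(per_pin_gbit_s: float, bus_width_bits: int) -> float:
    """Return bandwidth in GB/s: (Gb/s per pin * pins) / 8 bits per byte."""
    return per_pin_gbit_s * bus_width_bits / 8

# Hypothetical 384-bit bus, typical of flagship gaming GPUs:
print(aggregate_bandwidth_gbs(48.0, 384))  # GDDR7: 2304.0 GB/s (~2.3 TB/s)
print(aggregate_bandwidth_gbs(16.0, 384))  # GDDR6 at 16 Gb/s: 768.0 GB/s
```

The 3× per-pin claim carries straight through: on the same hypothetical bus, GDDR7 would deliver three times the aggregate bandwidth of 16 Gb/s GDDR6.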
What HBM4 could mean for Nvidia’s 2026 hardware lineup
Nvidia's AI accelerators and flagship GPUs, the successors to its H100-class and RTX Titan-class products, will likely be the first to adopt HBM4. With modules priced at $500 or more and demand showing no signs of slowing, Nvidia will almost certainly pass the costs downstream.
Here's what's unfolding (with a quick sanity check on the HBM4 bandwidth figure after the list):
- HBM4: 3.3 TB/s bandwidth, 36GB, ~$500–$550 per module
- GDDR7: 48 Gb/s/pin, aimed at high-end gaming GPUs
- LPDDR6: 14.4 Gb/s/pin, tuned for mobile and AI platforms
- Launch: All tech will debut at ISSCC 2026 (Feb), Samsung to supply HBM4 in Q2
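As a sanity check, the 3.3 TB/s stack bandwidth can be decomposed into an implied per-pin rate. The 2048-bit interface width is an assumption drawn from the JEDEC HBM4 standard, not from the report above, which only gives the TB/s figure.

```python
# Derive the implied per-pin rate from Samsung's 3.3 TB/s stack
# bandwidth, assuming the 2048-bit interface defined by the JEDEC
# HBM4 standard (assumption; the report above gives only TB/s).
STACK_BANDWIDTH_TBPS = 3.3    # TB/s per 36GB stack (reported)
INTERFACE_WIDTH_BITS = 2048   # JEDEC HBM4 interface width (assumed)

per_pin_gbps = STACK_BANDWIDTH_TBPS * 1000 * 8 / INTERFACE_WIDTH_BITS
print(f"~{per_pin_gbps:.1f} Gb/s per pin")  # ~12.9 Gb/s per pin
```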
Nvidia’s dilemma: double the memory cost, endless demand
Nvidia isn't just buying faster memory; it's buying at any price to stay ahead in AI. That leaves PC users and gamers caught in the middle. If Samsung's HBM4 module really does cost twice as much and deliver triple the bandwidth, Nvidia will ship more power than ever, but not without making wallets flinch.
The AI race is hot. And with memory prices rising fast, so are the stakes.