Nvidia has asked SK hynix to move up its delivery timeline for next-generation HBM4 memory chips by six months, according to ...
Nvidia currently uses SK hynix's HBM3E memory for its AI chips and plans to use HBM4 in its upcoming Rubin R100 AI GPU.
In an industry first, SK hynix has announced its 16-Hi HBM3E memory, offering capacities of 48GB per stack, alongside other bleeding-edge NAND/DRAM products.
Citing the growing demand for high-capacity, energy-efficient semiconductors, Nvidia CEO Jensen Huang asked SK hynix to ...
Nvidia "is as much of a MUST-OWN LONG in tech right now as you can find," Klein wrote in a note to clients. He thinks the stock is due for another rally following the U.S. election and ahead of the ...
High-Bandwidth Memory 5 (HBM5) is intended to further increase the number of memory layers, which requires modern stacking technology ...
As part of its presentation, Nvidia revealed the codename for its next-generation architecture: Rubin. This architecture will follow Blackwell and arrive in 2026 as part of Nvidia's quasi-tick-tock ...
Su cautioned about an uncertain future as AMD prepares to ramp up GPU production and optimize for larger deployments, quite the opposite of Nvidia's rapid, consistent growth in AI segments.
Deliveries were originally set for the first half of 2026, but Nvidia CEO Jensen Huang wants SK hynix to speed up its 12-layer HBM4 memory ...
Nvidia faces considerable uncertainty in future earnings as competitors enter the AI chip market. Read more to understand why ...