With the RTX 4000 series now mostly behind us, leaks and rumors concerning Nvidia’s next generation of consumer graphics cards, the RTX 5000 line, are beginning to emerge. The most recent of these offers some performance figures, suggesting the RTX 5090 will deliver a 1.7x overall improvement over its predecessor. Another leak indicates that Nvidia’s high-performance compute GPUs will finally adopt a multi-chiplet design.
Starting with the consumer series, Panzerlied, a leaker on the Chiphell forum, has revealed what appear to be RTX 5090 stats: a 50% increase in scale (presumably in cores), a 52% increase in memory bandwidth, a 78% increase in L2 cache, a 15% rise in frequency, and a 1.7x boost in performance.
Applying those figures to the RTX 4090 (16,384 CUDA cores, a 2.52 GHz boost clock, and 72MB of L2 cache), its successor should have roughly 24,500 CUDA cores, a 2.9 GHz boost clock, and 128MB of L2 cache.
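To make the arithmetic explicit, here is a quick back-of-the-envelope sketch in Python. The baseline figures are the RTX 4090’s published specs; nothing here comes from the leak beyond the percentage uplifts, and the variable names are purely illustrative:

```python
# Back-of-the-envelope projection: apply the leaked percentage uplifts
# to the RTX 4090's published specifications.

rtx_4090 = {
    "cuda_cores": 16_384,     # full RTX 4090 shader count
    "boost_clock_ghz": 2.52,  # reference boost clock
    "l2_cache_mb": 72,        # enabled L2 cache
}

uplifts = {
    "cuda_cores": 1.50,       # "50% increase in scale"
    "boost_clock_ghz": 1.15,  # "15% rise in frequency"
    "l2_cache_mb": 1.78,      # "78% increase in L2 cache"
}

for spec, base in rtx_4090.items():
    projected = base * uplifts[spec]
    print(f"{spec}: {base} -> {projected:,.2f}")
    # cuda_cores: 16384 -> 24,576.00
    # boost_clock_ghz: 2.52 -> 2.90
    # l2_cache_mb: 72 -> 128.16
```

The raw products land at 24,576 cores and 128.16MB; the figures quoted above are simply those values rounded.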
It is also speculated that the RTX 5090 will move to GDDR7 memory rated at 32 Gbps.
The successor to the AD102 GPU is said to include a 512-bit memory bus, though it may not be fully utilised in the RTX 5090. According to VideoCardz, the card might come in 512-bit/24 Gbps or 448-bit/28 Gbps variants.
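For a sense of what those two configurations mean in practice, peak memory bandwidth is simply the bus width in bytes multiplied by the per-pin data rate. A minimal sketch (the helper function is illustrative, not from any report):

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

baseline = bandwidth_gb_s(384, 21)  # RTX 4090, 21 Gbps GDDR6X: 1,008 GB/s

for bus, rate in [(512, 24), (448, 28)]:
    bw = bandwidth_gb_s(bus, rate)
    print(f"{bus}-bit @ {rate} Gbps: {bw:,.0f} GB/s "
          f"(+{bw / baseline - 1:.0%} over RTX 4090)")
# 512-bit @ 24 Gbps: 1,536 GB/s (+52% over RTX 4090)
# 448-bit @ 28 Gbps: 1,568 GB/s (+56% over RTX 4090)
```

Notably, the 512-bit/24 Gbps variant works out to roughly 52% more bandwidth than the RTX 4090’s 1,008 GB/s, which lines up with the uplift Panzerlied cited.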
While everyone has their own theories about the upcoming generation of Nvidia cards, it’s worth noting that Panzerlied has a track record of accurate claims. Furthermore, hardware leaker Kopite7kimi has “confirmed” the most recent reports.
If the stats are accurate, one has to wonder what kind of price tag Nvidia will attach to the RTX 5090. Team Green was harshly chastised for its Lovelace pricing, yet it’s difficult to envision the RTX 5090 costing less than, or even the same as, the $1,600 RTX 4090.
Previous RTX 5000-series rumors also suggested considerable performance improvements over Lovelace. There is no indication of a release date yet, though many believe the cards will arrive next year. In a related story, Kopite7kimi made several claims concerning Nvidia’s next-generation products. In contrast to the existing Ada Lovelace/Hopper split, he claims the Blackwell design will be used across both consumer and datacenter GPUs. Furthermore, Nvidia appears to be following Intel and AMD in adopting a multi-chiplet design for the first time, starting with its datacenter GPU class.