Whew… I’m getting the vapors! AMD officially announced RDNA3 on 11/04/22, showing off the new flagship 7900XTX and 7900XT. Like the Nvidia GeForce RTX 4090 release, there’s some interesting information here.
First, let’s dig into the specs:

7900XTX:
- $999 MSRP
- 96 CU
- 24 GB Memory
- 350 Watt

7900XT:
- $899 MSRP
- 84 CU
- 20 GB Memory
- 300 Watt
If Reddit is to be believed, AMD handed Nvidia a shovel and told them to start digging. As Level1Techs pointed out, this might be AMD’s ‘All Shade Edition.’ It would certainly seem so.
AMD was happy to take a few jabs at Nvidia during their announcement. For one, AMD is unlikely to face a power-connector controversy of its own. The MSRP for AMD’s new top-tier flagship GPU weighs in at $600 less than Nvidia’s, and the 7900XTX won’t need a ginormous case to house it.
Is it true? Could this be the end of Nvidia?
No, of course not. The 7900XTX isn’t for gamers. It’s for small businesses and hobby developers. Before we go any further, I recently wrote a similar piece for the GeForce 4090. It’s long, but I highly recommend reading through all of it. Much of the information I offered in that piece fits here as well.
I’m not going to rehash all of it again, but since I made some hefty predictions regarding the recent RDNA3 leaks, I felt the need to write a follow-up before RDNA3’s official December 13th launch date.
Is the 7900XTX meant for gamers?
Let’s get this out of the way. A lot of people skimmed the top of that 4090 article linked above and outright refused to believe that the Nvidia 4090 was meant for anyone but gamers. I understand the sentiment. It’s hard to believe that Daddy Green is using gamers, marketing and all, for other purposes. We want that special toy to be ours. I’ve been craving a 4090 since launch.
Mommy Red is doing the same thing. Except this time, the MSRP for the 7900XTX is more reasonable. Dropping $1000 is no small feat for most people, but compared to the 4090, the sticker shock doesn’t sting as badly.
Tsk Tsk AMD…
There are a couple of things we need to address. RDNA3 is not built for future-proofing. There is no 8K gaming for years to come. That large memory pool isn’t for games.
Sorry, I needed to squeeze it all in quickly before you got upset and left.
As Gamers Nexus pointed out, AMD’s slideshow referenced 8K ultrawide gaming and not true 8K graphics. 8K ultrawide pushes half the pixels of true 8K. The 7900XTX will have a more difficult time doing true 8K gaming.
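The arithmetic behind that claim is easy to check yourself. A quick back-of-envelope sketch (resolution names and figures are the standard ones, not from AMD’s slides):

```python
# Pixel counts for the resolutions in question.
resolutions = {
    "4K UHD (3840x2160)": 3840 * 2160,
    "8K ultrawide (7680x2160)": 7680 * 2160,
    "True 8K (7680x4320)": 7680 * 4320,
}
for name, pixels in resolutions.items():
    print(f"{name}: {pixels / 1e6:.1f} megapixels")

# True 8K is exactly twice the pixels of 8K ultrawide,
# and four times the pixels of 4K UHD.
assert resolutions["True 8K (7680x4320)"] == 2 * resolutions["8K ultrawide (7680x2160)"]
assert resolutions["True 8K (7680x4320)"] == 4 * resolutions["4K UHD (3840x2160)"]
```

In other words, a GPU that handles 8K ultrawide comfortably is really doing double-4K work, not true 8K work.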
Not that it matters. Most PC gamers only have a 1080P monitor, according to Steam. 1440P and 4K display panels are picking up steam (no pun intended), but we aren’t likely to see 8K monitors in general rotation for a long time. 4K and 8K live in the domain of the professional world for the foreseeable future.
As for that memory pool, we don’t need it for 4K graphics. 12 GB is passable for 4K texture packs, and 16 GB is plenty. Given that resizable BAR is now a thing, total available GPU memory won’t mean as much in the future.
Resizable BAR means more data can be transferred to GPUs faster than traditional methods. That means GPUs don’t need to keep as much stuff in memory for as long as they would have only a few years ago.
Microsoft depends on this for the Xbox Series S. At the end of the day, the Series S only has about 7.5 GB of memory to share between the game engine and the graphics. The other 2.5 GB powers the hypervisor and the OS running on it.
Note that game engines still need to be updated to take full advantage of resizable BAR, however. That’s coming.
Yo… You Nuts AND Wrong
Am I really, though? Let me plead my case.
Gamers don’t need 24Gb of memory or enough horsepower to push 8K ultrawide graphics. Most gamers don’t even need to power 4K graphics. So, what is all that power for?
The enterprise! I’m starting to sound like a broken record…
The one place where a ton of GPU memory makes a difference is in the world of machine learning and 3D modeling. Those applications need tons of memory. Have you noticed that most Stable Diffusion builds only output 512×512 resolution images? It’s because most systems don’t have the hardware to reliably produce larger pictures.
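A rough sketch of why image size hits memory so hard: self-attention inside a latent-diffusion model scales with the *square* of the token count. The numbers below are illustrative assumptions (an 8× VAE downscale and a single fp16 attention matrix), not measurements of any particular Stable Diffusion build:

```python
# Hypothetical back-of-envelope: attention memory vs. output image size.
def attention_tokens(image_px: int, vae_downscale: int = 8) -> int:
    """Tokens in the latent grid for a square image (assumes 8x VAE downscale)."""
    latent = image_px // vae_downscale
    return latent * latent

for size in (512, 768, 1024):
    tokens = attention_tokens(size)
    # One fp16 attention matrix: tokens^2 entries * 2 bytes each.
    attn_bytes = tokens ** 2 * 2
    print(f"{size}px -> {tokens} tokens, ~{attn_bytes / 2**30:.2f} GiB per attention matrix")
```

Doubling the image from 512 to 1024 pixels quadruples the token count and multiplies the per-matrix attention memory by sixteen, and that cost repeats across every attention head and layer.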
AMD gave us further proof of this in their announcement.
Chiplets are yummy!
RDNA3 moved to a chiplet architecture, much like the Ryzen family of CPUs. AMD’s chiplet designs are battle-tested and proven. It’s cheaper for AMD, and they can produce some killer products.
The Infinity Fabric has proven to work well under server conditions, especially considering how badly mitigations for attacks like Spectre and RowHammer have hurt performance in modern CPUs.
The 7900XTX sips power compared to the Nvidia 4090, too. In the AMD leak article I linked to above, I explained that AMD will be heavily favored in the data center because of power requirements. I stick by that, but let me clarify a bit further.
I do not believe that the 7900XTX will be shoved inside blades. However, the CDNA3 products that power AMD’s professional line of cards (the AMD Instinct) will be released next year. CDNA3 is basically RDNA3 with clocked-down cores. Instinct cards will most likely use even less power.
Built-in Smarts Coming To A GPU Near You
Each compute unit in the RDNA3 chiplet design now includes two AI accelerators that support int8 and bf16 operations, both common in AI frameworks like TensorFlow. It could be argued that AMD didn’t have much choice this late in the game, and Nvidia’s Tensor Cores will likely still hold the advantage in this department.
Here’s the crux of the problem, however. The market still favors Nvidia. Most ML frameworks support CUDA out of the box. Applications like Blender don’t support AMD yet. Nvidia has better media encoding capabilities, and NVENC is more widely supported (though RDNA3 improved encoding quality significantly, apparently).
With that said, ROCm, AMD’s answer to CUDA, has a loyal following among AI researchers and data scientists despite not being as widely adopted. AMD is making significant improvements to ROCm: ROCm 5.2 is easy to install and run and, dare I say, more accessible than CUDA.
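One sign of how far ROCm has come: PyTorch’s ROCm builds reuse the familiar CUDA-style API, so existing code mostly just works. A small, hedged sketch for checking which backend your PyTorch build targets (it relies on `torch.version.hip` being a version string on ROCm builds and `None` on CUDA builds, and degrades gracefully if PyTorch isn’t installed):

```python
# Detect whether this machine's PyTorch build targets ROCm (HIP), CUDA, or neither.
try:
    import torch
    if getattr(torch.version, "hip", None):
        backend = "ROCm/HIP"
    elif torch.version.cuda:
        backend = "CUDA"
    else:
        backend = "CPU-only"
except ImportError:
    backend = "PyTorch not installed"

print(backend)
```

On a ROCm build, device code still uses the `cuda` namespace (`tensor.to("cuda")`), which is exactly why porting CUDA-era scripts to AMD hardware is less painful than it used to be.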
If AMD keeps up the good fight with ROCm, they won’t be more than a couple of years out from being a significant competitor to CUDA. That doesn’t bode well for Nvidia, especially since CUDA is widely criticized as difficult to work with.
More Plugs, More Ports
On the connectivity front, AMD chose to support DP2.1 and added USB-C ports to its GPUs. While Nvidia’s argument for only supporting DP1.4 makes some sense despite the naysayers, it’s not a good look for a $1600 flagship product.
If AMD starts pushing app support, RDNA3 suddenly becomes a workstation monster. It uses less power than Nvidia’s products, has more memory (even the 7900XT), and fits easily into standard cases. Oh yeah, it’s $600 less expensive, too.
Okay, I think AMD wants to make Gamers happy, too
I can’t get over that MSRP. $1000 seems close enough to doable for many people (despite needing to save a bit) that we can’t help but salivate over that price tag. If the 7900XTX isn’t for gamers, and AMD is using that crowd to pump its product, then at least Dr. Su is treating them respectfully.
That MSRP is reasonable, and AMD made some deliberate decisions to help provide more value for consumers. As mentioned above, AMD moved RDNA3 to a chiplet design, which means AMD produces less waste. While the chiplet design certainly has positive implications for performance, it also helps reduce the price tag.
Likewise, AMD opted not to support PCI-E 5.0 in RDNA3. That’s a good thing. GPUs don’t need the bandwidth that PCI-E 5.0 offers. That kind of pipeline is reserved for storage solutions. Sticking with PCI-E 4.0 means that parts are cheaper and more reliable. PCI-E 4.0 controllers have been in the market long enough that they have been battle-tested, and the manufacturing process has been improved.
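The bandwidth arithmetic supports this. Per-direction throughput of an x16 link is the transfer rate times the encoding efficiency, divided by 8 bits per byte, times 16 lanes (standard PCI-E figures, assuming both generations use 128b/130b encoding):

```python
# Rough per-direction bandwidth of a PCI-E x16 link.
def x16_bandwidth_gbps(gt_per_s: float, encoding: float) -> float:
    """Transfer rate (GT/s) * encoding efficiency / 8 bits per byte * 16 lanes."""
    return gt_per_s * encoding / 8 * 16

pcie4 = x16_bandwidth_gbps(16.0, 128 / 130)  # PCI-E 4.0: 16 GT/s
pcie5 = x16_bandwidth_gbps(32.0, 128 / 130)  # PCI-E 5.0: 32 GT/s
print(f"PCI-E 4.0 x16: ~{pcie4:.1f} GB/s")  # ~31.5 GB/s
print(f"PCI-E 5.0 x16: ~{pcie5:.1f} GB/s")  # ~63.0 GB/s
```

Today’s GPUs rarely saturate even the ~31.5 GB/s that PCI-E 4.0 x16 provides, so doubling it to ~63 GB/s buys gamers essentially nothing while adding cost and signal-integrity headaches.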
Finally, AMD seems to like its partners, unlike Nvidia. For the life of me, I don’t understand why Nvidia’s CEO thinks that its partners don’t deserve recognition. While AMD certainly places plenty of restrictions on AIBs, AMD gives partners enough wiggle room for creativity and market differentiation.
Partners can throw more power at Radeon RDNA3 boards and increase core and memory speeds. Those fancy cooler designs will serve a purpose. We will likely see AMD partner cards that perform better than AMD’s reference cards out of the gate. On the other hand, the most significant difference between Nvidia’s reference boards and partner cards is maybe a little more headroom to boost clocks higher.
So, what do you think? Am I wrong, correct, or somewhere in between?