<![CDATA[ Latest from PC Gamer UK in Graphics-cards ]]> https://www.pcgamer.com 2025-02-14T16:04:56Z en <![CDATA[ AMD's Frank Azor says no 32 GB RX 9070 XT for you, probably because a 32 GB mid-range GPU didn't make much sense in the first place ]]> Sometimes, in this topsy-turvy PC hardware world of ours, a rumour turns up that really makes us scratch our heads. This week, it was rumblings about a 32 GB variant of AMD's upcoming RDNA 4 graphics card, the RX 9070 XT. You can rest easy in your beds tonight, though, because AMD's Frank Azor has categorically denied its existence. For now, at least.

Alright, let me qualify that. It's certainly possible that AMD has a 32 GB test card rattling around in its labs, because hey, everything gets a little crazy on a Friday and the team thought it might be fun. But as for one turning up on sale in March? Nuh-uh.

That's it. That's the tweet. It's no surprise that AMD isn't planning on cramming the RX 9070 XT with an excessive amount of VRAM, because it's been very open for some time about its humble aims to bring mid-range cards to the market this generation, not high end.

32 GB of VRAM would only really make sense for AI and high-level rendering purposes, neither of which is a stated goal of this generation of AMD GPUs. And while we've certainly seen controversy regarding a perceived lack of VRAM in cards like the RTX 4060 Ti in the past, 32 GB would be over-egging the pudding significantly for a mid-range GPU.

Plus, it's fairly expensive. VRAM, that is. Stuffing a card full of it while trying to keep the price competitive with Nvidia's offer of the RTX 5070 for $549 makes about as much sense as, ooh, I don't know, offering one with a gold-plated cooling shroud for bragging rights.

Although, anything's possible. Just wanted to add that qualifier, in case AMD announces the RX 9070 XT Gold Standard edition in a week. And hey, companies do be doing crazy things sometimes. One for the future maybe? Perhaps. Just not now.

This batch of cards will need to be high-performing (for their product category) and relatively cheap to sell well, and that's something that AMD appears to be keenly aware of.

At least, we're sort of hoping so at this point. Given the claimed advantages of DLSS 4 Multi Frame Generation (we've yet to test it on a mid-range card, although we've been diving into the figures for the RTX 5080 and RTX 5090), the RX 9070 XT and RX 9070 look like they'll have an uphill battle on their hands this generation to grab some market share.

And 32 GB of VRAM, for no real reason other than a marketing-friendly number on the box? That really doesn't seem like the play. Anyway, here's the word, straight from the horse's mouth. You can all go back to your bunks.



]]>
https://www.pcgamer.com/hardware/graphics-cards/amds-frank-azor-says-no-32-gb-rx-9070-xt-for-you-probably-because-a-32-gb-mid-range-gpu-didnt-make-much-sense-in-the-first-place/ YRfbxTYchxiFPVgF7oEfBB Fri, 14 Feb 2025 16:04:56 +0000
<![CDATA[ AMD is finally spilling the beans about the RX 9070 series during a live stream on February 28 ]]> AMD's rollout of its upcoming graphics cards has been weird. We expected to get all the details at CES, but were just left with teases. Then these cards were anticipated to launch around the time of the RTX 5090 and RTX 5080, but were pushed back to March. Now we are due to finally find out more about the AMD Radeon RX 9070 series and its fancy RDNA 4 architecture on February 28.

Set to air at 8 AM EST / 5 AM PT on the AMD Gaming YouTube channel, the RX 9070 live stream is going to give more information on the cards that are confirmed to launch in early March.

With the RTX 5070 Ti set to launch on February 20 and RTX 5070 launching on March 5, the RX 9070 and RX 9070 XT graphics cards will likely launch just after these two Nvidia cards.

These will be AMD's first graphics cards to use the RDNA 4 microarchitecture so there's quite a lot of hype and/or speculation surrounding them. Notably, these cards are reportedly targeting the more midrange market so won't be a replacement for that RTX 5090 card you've been looking for.

They could, however, sway you away from the cheaper 50-series cards if the live stream and subsequent launch suggest RDNA 4 has led to big performance gains.

In the wake of Nvidia's cards selling out nearly instantaneously, AMD's David McAfee announced AMD is "planning to have a wide assortment of cards available globally for its launch in March."

We know surprisingly little about AMD's next set of graphics cards right now but a recent leak suggests the AMD RX 9070 XT can run the latest Monster Hunter Wilds benchmarking tool at 211.7 average fps, which is mighty impressive.

This is at 1080p with frame gen enabled but impressive nonetheless. If these stats bear out in real tests, and customers can actually get them on shelves, AMD could have a surprise hit on its hands.

For my part, I just hope the price is right to compete, and an FSR improvement wouldn't go amiss.



]]>
https://www.pcgamer.com/hardware/graphics-cards/amd-is-finally-spilling-the-beans-about-the-rx-9070-series-during-a-live-stream-on-february-28/ LHiFmMWMvtUrffXMBEivX3 Fri, 14 Feb 2025 11:49:19 +0000
<![CDATA[ Nvidia's RTX 5070 Ti GPU officially goes on sale February 20 and the RTX 5070 is go for March 5 ]]> Earlier this week we reported a rumour that Nvidia had slightly delayed the upcoming RTX 5070 GPU from some time in February to early March. Well, it turns out that's true. Nvidia has updated its website and given a March 5 on-sale date for the 5070's release, plus February 20 for the RTX 5070 Ti.

As we discussed, back at CES Nvidia originally said both GPUs would be available in February, though didn't put a specific date on that. So, a March 5 release for the RTX 5070 is definitely a delay. The question is why?

Hopefully, the delay is mostly about making sure there's plenty of availability at launch, what with RTX 5080 and 5090 cards predictably selling out in picoseconds after their launch. Pushing the launch of the RTX 5070 back by a couple of weeks could help with that.

However, we suspect it's as much about triangulating the 5070 release to undermine the launch of AMD's competing Radeon RX 9070 and 9070 XT. Those two GPUs were originally rumoured to launch earlier this year.

AMD never said that would happen. However AMD has said that it decided to delay both cards to give them a little polish. "We are taking a little extra time to optimize the software stack for maximum performance and enable more FSR 4 titles," AMD's Ryzen CPU and Radeon graphics rep David McAfee said on January 22.

Likewise AMD has since inked in "early March" as the launch window for those new GPUs. With that in mind, Nvidia also moving the RTX 5070 to early March seems like a little too much of a coincidence. Odds are, it's an effort to win the PR war with AMD by ensuring that the RX 9070 GPUs have to share the news cycles with the RTX 5070.

Incidentally, it could be quite the fight. The latest rumours suggest the 9070 XT could have a boost clock of 3.1 GHz and offer performance comparable to a 7900 XTX in the Monster Hunter Wilds benchmark.

Copious caveats apply and we'll have to wait and see just how good the 9070 XT is. But if those rumours are right, this could be one of the most exciting GPU launches in years. Oh, provided AMD gets the pricing right.

Of course, all of this will be academic for most gamers unless both Nvidia and AMD manage to get a decent number of cards onto retail shelves. Comments on X in response to Nvidia's announcement of the launch dates say it all. "When will availability for the 5090 and 5080 cards start?" one X user replied. Well, quite.



]]>
https://www.pcgamer.com/hardware/graphics-cards/nvidias-rtx-5070-ti-gpu-officially-goes-on-sale-february-20-and-the-rtx-5070-is-go-for-march-5/ 6zNmU4JwkAjKCecgRmqSXf Fri, 14 Feb 2025 11:33:13 +0000
<![CDATA[ If the AMD RX 9070 XT is as beefy as these leaked specs and benchmark makes out, low Nvidia 50-series stocks might not matter ]]> Well, we all knew Q1 2025 was going to be spicy. I'm not sure we knew that would mean quite the number of delays and out-of-stock signs that we've seen so far, but we knew there was going to be a slew of new GPU shenanigans to watch out for. AMD's Radeon RX 9070 XT is certainly one of these, and it now looks like we're getting something slightly more than word of mouth regarding its specs and performance.

It's still very much borne of the ever-churning rumour mill, of course, but from X user HKEPC (via VideoCardz) we have a screenshot of GPU-Z showing a 'Navi 48' GPU, this being the codename for the upcoming RX 9070 XT. And the specs? Well, they're in line with what we expected—4,096 shader cores, 16 GB of GDDR6 memory, and a 256-bit memory bus—but it's nice to actually see these specs on paper... err, I mean, on screen.

HKEPC also shared a screenshot of a supposed RX 9070 XT achieving a whopping 211.7 fps in the Monster Hunter Wilds benchmark, albeit at 1080p and with frame gen enabled. But hey, as I discovered in my own testing of this benchmark for our Monster Hunter Wilds benchmark round-up, the game is pretty brutal on the ol' GPU.

For reference on the core front, by the way, the RX 7800 XT has 3,840 and the RX 7900 XT has 5,376. Just as previous plausible leaks had it, then, the 9070 XT sits between the two in core/CU count. Again corroborating these previous leaks, we're (supposedly) looking at a higher boost clock for the RX 9070 XT—in this case, even higher than previously thought at 3,100 MHz, presumably due to a particularly aggressive factory overclock.
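As a quick sanity check on that core-count positioning, here's a trivial comparison using only the numbers quoted in this piece (the Navi 48 figure is a leak, not an official spec, until AMD confirms it):

```python
# Shader core counts as quoted above; the RX 9070 XT figure is a leak, not an official spec.
cards = {
    "RX 7800 XT": 3840,
    "RX 9070 XT (leaked)": 4096,
    "RX 7900 XT": 5376,
}

for name, cores in sorted(cards.items(), key=lambda kv: kv[1]):
    print(f"{name:20} {cores:>5} shader cores")

# The leaked part slots between the two RDNA 3 cards in raw core count.
assert cards["RX 7800 XT"] < cards["RX 9070 XT (leaked)"] < cards["RX 7900 XT"]
```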

All very exciting stuff, and we shouldn't have too long to wait for it, either, given Dr. Lisa Su has confirmed the GPU is officially arriving in early March. It might even beat the RTX 5070 to market given that it's rumoured to be delayed until March now, too. Maybe this will make up for AMD's botched CES appearance wherein they showed our eager eyes practically nada of the GPUs.

Then again, Nvidia's just announced a February 20 launch date for the RTX 5070 Ti, so the RX 9070 XT will have that card to go up against. But even then-again-er, Nvidia's recent RTX 50-series launches haven't exactly given us much hope for, y'know, actual stock on shelves.

Maybe that's where AMD can find an advantage: by actually having GPUs available to buy. If it can pull that off while pumping out this kind of Monster Hunter Wilds performance with 16 GB of high-bandwidth memory, preferably for cheap, I'll be one happy chappy. We'll see.



]]>
https://www.pcgamer.com/hardware/graphics-cards/if-the-amd-rx-9070-xt-is-as-beefy-as-these-leaked-specs-and-benchmark-makes-out-low-nvidia-50-series-stocks-might-not-matter/ GYAwtCEcsqA2XKq6uMR45K Thu, 13 Feb 2025 17:38:22 +0000
<![CDATA[ 10-series and other old GPUs should be able to run Assassin's Creed Shadows says director, and my RTX 3060 Ti sheds a single hopeful tear ]]> Assassin's Creed Shadows should have been launching around about *checks watch* now, but alas, it was delayed and has a new launch date of March 20. If you're like me and have an older GPU, you might not have cared much either way, as you might have assumed decent performance in the title would be out of reach. Well, if so, fear not, because a recent Q&A with the game's technology director leads us to believe it will run on older hardware.

The main litmus test is ray tracing, as games today (*cough* Indiana Jones and the Great Circle *cough*) are moving towards a 'ray tracing by default' approach, ie, where you can't play the game at all without ray tracing. On this front, technology director Pierre F gives us plenty of hope:

"If your GPU does not support hardware raytracing, such as pre-RTX GPUs, we have developed our own solution to allow competent, yet older, GPUs to run Assassin's Creed Shadows. The game will use a proprietary software-based raytracing approach developed specifically for that."

Software ray tracing isn't exactly new. Lumen, in Unreal Engine 5, for example, is a software-based ray tracing and global illumination solution. But if this Assassin's Creed Shadows solution is proprietary and "developed specifically" for pre-RTX GPUs, there's reason to be hopeful it'll perform quite well even on older hardware—hopefully better than Lumen does. Ubisoft's got at least some experience with this, as its other big engine, Snowdrop, has been using software RT for a while (in Pandora and Outlaws, for example).

The technology director says: "We made efforts to support pre-RTX GPU (GTX 1070, GTX 1080TI and equivalent) that are still competent today but lack hardware level raytracing capabilities. To further highlight our commitment to this direction, we've developed a proprietary software raytraced GI [global illumination] solution to support the new dynamic Hideout."

This is implemented because there is, unfortunately, a definite ray tracing requirement for at least some of the game. This might make us wonder: Okay, what if I can do ray tracing but I don't have a card that can do ray tracing well? Thankfully, the devs are giving the option for a "Selective Raytracing" mode that will only use ray tracing when in the Hideout portion of the game.

"The reason behind this is that the Hideout allows extensive player customization at a level never seen before on Assassin's Creed. Because of that, we cannot use traditional, pre-calculated, global illumination techniques, and therefore need to adopt a real-time approach. In all other gameplay situations, such as in the open world, raytracing will not be used."

So much for ray tracing, but it doesn't stop there. The game's also going to allow us to mix and match our frame gen and upscaling solutions. This means that I (with my RTX 3060 Ti) and others like me should be able to slap on DLSS upscaling alongside FSR frame gen, for instance. Neat.

There are some other peculiar shenanigans going on that might make for decent performance on older hardware, too: "Various upscaling technologies (TAA, DLSS, FSR and XeSS) can be used in conjunction with dynamic resolution scaling to target a given framerate, in which case the game will adapt pre-upscaling resolution to try and reach the desired FPS."


So... a doubly dynamic scaling and upscaling system, then? That seems complicated to balance out but whatever helps, I suppose.
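For the curious, dynamic resolution scaling of this general kind usually amounts to a small feedback loop: measure the last frame time, compare it with the target, and nudge the pre-upscaling render resolution up or down before handing the image to the upscaler. Here's a minimal, hypothetical sketch of that idea; it's emphatically not Ubisoft's implementation, just an illustration of the technique the Q&A describes:

```python
# Minimal dynamic resolution scaling loop (illustrative sketch only, not Ubisoft's code).
# The render scale is applied to the internal resolution before any upscaler (DLSS/FSR/XeSS/TAA) runs.

TARGET_FPS = 60
TARGET_FRAME_MS = 1000.0 / TARGET_FPS

MIN_SCALE, MAX_SCALE = 0.5, 1.0   # fraction of output resolution rendered internally
render_scale = 1.0

def update_render_scale(last_frame_ms: float) -> float:
    """Nudge the internal render resolution toward the frame-time budget."""
    global render_scale
    error = last_frame_ms / TARGET_FRAME_MS   # > 1.0 means the frame missed its budget
    if error > 1.05:                          # too slow: drop resolution a touch
        render_scale *= 0.97
    elif error < 0.90:                        # comfortably fast: claw some resolution back
        render_scale *= 1.01
    render_scale = max(MIN_SCALE, min(MAX_SCALE, render_scale))
    return render_scale

# Example: a 22 ms frame against a 16.7 ms budget lowers the scale slightly.
print(update_render_scale(22.0))   # ~0.97
```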

The game also uses the company's "proprietary Micropolygons system" which is "a virtualized geometry system which allows us to render more polygons with more level of details continuity [sic]." Crucially, it features a scalable preset, "from Low to Ultra", which presumably means you'll be able to get quite low-poly with it if you need to.

This wouldn't necessarily be a boon if Pierre F didn't seem so confident that the whole thing should make for a great experience on lower-end hardware. But he does seem confident: "The game will look stunning even on the lowest settings. Everyone will enjoy the experience."

Fingers crossed this all pans out as planned. It's not as if there are a ton of high-end graphics cards lining the shelves right now; older hardware might be what a lot of us have to make do with. I guess we'll find out when the game launches, which should be pretty soon.

]]>
https://www.pcgamer.com/games/assassins-creed/10-series-and-other-old-gpus-should-be-able-to-run-assassins-creed-shadows-says-director-and-my-rtx-3060-ti-sheds-a-single-hopeful-tear/ pwRtFghWp5cd8GC3rfL9ZF Thu, 13 Feb 2025 16:37:18 +0000
<![CDATA[ Zotac beats those dastardly GPU scalpers by selling RTX 50-series graphics cards to actual gamers courtesy of their Discord channel ]]> It came as little surprise a full week before the release of the new Nvidia RTX 5090 and 5080 GPUs that scalpers were already attempting to gouge on RTX 50 graphics cards, cards they didn't even have. But what, exactly, can be done? AIB GPU maker Zotac may have a solution, selling cards directly to gamers via Discord.

To quote Zotac on Discord (via VideoCardz), "we want to reward real gamers and active members by giving you the chance to secure a slot to purchase a Zotac Gaming GeForce RTX 5080 or 5090—no bots, no scalpers, just my fellow gamers."

The post explains that participants must be active members of the Zotac Gaming Discord who get involved in various "challenges and discussions" and that any cheating or manipulation will result in disqualification.

Beyond that it's not clear exactly how Zotac is choosing successful members. Inevitably, all this raises questions over how access to purchase slots can all be effectively policed. Almost immediately, posters on Reddit observed that, "the Discord server is being flooded with new people spamming their way to 'engagement'. Feels like a bot-race all over again."

The initial ickiness we felt as a team when we first saw this story is echoed in those concerns. The idea of only getting access to new, scarce graphics cards if you genuflect to a manufacturer and show yourself to be some sort of true fan, all for the mere opportunity to pay full price for a GPU, doesn't feel great. Neither does the idea of having to take part in challenges to get in line.

However, it's also been noted that Zotac has reportedly created a separate private channel for long-time Zotac Discord members in order to make sure at least some of them get access to the cards. So, fair play to the company for giving loyal users the opportunity away from bots et al, so long as it is genuinely monitoring whether those users are really long-time folk.

Ultimately, no attempt to exclude scalpers is likely to be entirely successful. But equally this move by Zotac will surely put a few RTX 50 cards into the hands of gamers that would otherwise have been snapped up by bots and flipped for profit.

Of course, this is also a fairly labour-intensive way of going about selling GPUs. So, it's not entirely reasonable to expect every AIB card maker to conduct a similar program. But it would be nice to see a few more do something similar.

Some of the comments surrounding Zotac's program are also a timely reminder that the current situation isn't actually the end of the world. As one Redditor sagely observed, "patience is all you need. I got my 4090 about five months after launch, new for $1,600 MSRP. People just need to learn to wait. It ain't a big deal."

That can be difficult advice to swallow, given how used we all are to the instant gratification of modern online commerce. The idea of having to wait for something really jars. But maybe we all just need to recalibrate our expectations.

Instead of viewing GPU release dates as general availability dates, view them as the date from which you can get in line. And if you don't want to get in line, you don't have to. It's not the end of the world.



]]>
https://www.pcgamer.com/hardware/graphics-cards/zotac-beats-those-dastardly-gpu-scalpers-by-selling-rtx-50-series-graphics-cards-to-actual-gamers-courtesy-of-their-discord-channel/ ZTdEdhtSbDHGiRT7ob9iYe Wed, 12 Feb 2025 16:14:38 +0000
<![CDATA[ TSMC reportedly plots 2027 start date for its 3 nm US fab, but will that be in time to save next-gen GPUs from tariffs? ]]> Maker of most of the world's cutting-edge chips, TSMC, is reportedly accelerating its plans to produce modern 3 nm chips in the USA. Originally pencilled in for 2028, TSMC is now said to be aiming to pull in production to 2027 in response to tariffs threatened by the Trump administration.

But will that be soon enough for next-gen GPUs?

MoneyDJ (via TrendForce) claims that TSMC is bringing forward its second chip fab in Arizona. TSMC's alleged new plan is to install equipment in the new facility next year and begin volume production of chips in 2027. That's a year ahead of TSMC's current publicly stated schedule.

The reason for the accelerated timetable is said to be new tariffs. As we reported recently, President Trump has threatened up to 100% tariffs on chips from Taiwan, which would directly impact TSMC's output and make imports of components like GPUs massively more expensive.

If TSMC could produce those chips in the US at one of its Arizona fabs, then it would sidestep the tariffs entirely (though its customers would still need to think of their supply chain and packaging). The question then becomes a matter of timing.

TSMC's first Arizona fab is already cranking out chips on the N4 node, a derivative of N5, broadly referred to as 5 nm, reportedly including CPU dies for AMD. That's the node also used by both Nvidia for its latest RTX 50 family of GPUs and AMD for its upcoming RDNA 4 graphics cards.

It's extremely likely that both companies will move to 3 nm or N3 for their next-gen cards, codenamed Rubin for Nvidia and UDNA for AMD. Given brand new GPUs from both outfits have just been released at the beginning of 2025 and that two-year cycles for GPU families are the norm, 2027 for the new 3 nm Arizona fab seems like it could be a good fit.

However, the timings may be a little tighter than that. To allow for a January 2025 launch, for instance, Nvidia will have been manufacturing RTX 50 GPUs many months earlier in order to allow time for the chips to be packaged and fitted to graphics cards in sufficient volumes to supply retailers.

It's also something of a big ask to expect a brand new fab to be manufacturing very large GPUs from the get go. You might normally expect a new facility to target smaller chips that are less sensitive to yields as the facility builds volumes and irons out production kinks.

Of course, neither Nvidia nor AMD need stick with a two-year schedule. If 100% tariffs or anything even close really are imposed on Taiwan-made chips, it could well be worth delaying release until TSMC's Arizona fabs can take responsibility for production.

As for the future, TSMC has a third fab planned for Arizona which will produce chips on TSMC's next-gen nodes, likely to be 2 nm or A16. However, Fab 3 as it's known isn't expected to come online until around 2030.

Anyway, it's usually a long and expensive process building fabs and bringing up production volumes and yields. So, if TSMC really can pull in its 3 nm Arizona facility from 2028 to 2027, that would be some achievement. And it might just help prevent graphics cards from becoming even more ridiculously expensive.



]]>
https://www.pcgamer.com/hardware/graphics-cards/tsmc-reportedly-plots-2027-start-date-for-its-3-nm-us-fab-but-will-that-be-in-time-to-save-next-gen-gpus-from-tariffs/ FBQ2aFHWodnu4rdB79G4DM Wed, 12 Feb 2025 13:25:17 +0000
<![CDATA[ Nvidia's RTX 5070 graphics card rumoured to be delayed from February until March but that could actually be good news ]]> Back in early January when Nvidia announced the RTX 5070, the mid-range member of the new RTX 50 family was given a February launch date. Now rumours indicate the GPU has been pushed back to March.

Long-time GPU rumour leaker on X, MEGAsizeGPU, says, "the RTX 5070 will be delayed. Instead of February, it will be on the shelf in early March." How much of a delay that would constitute, if true, isn't clear.

That's because Nvidia never said anything more specific than "availability in February" for the RTX 5070. As for what's going on exactly, one obvious reason for the delay could be to increase stock level before launch.

The RTX 5080 and RTX 5090 sold out almost instantly on launch day on January 30 and have been very scarce since. So, a delay for the RTX 5070 would allow time to build up a bigger buffer of cards.

On a related note, Nvidia may have originally been expecting arch rival AMD to roll out its RX 9070 and 9070 XT GPUs a little earlier than March. But AMD has since inked in "early March" as the launch window for those new GPUs, which are expected to go up against the RTX 5070 and perhaps even the RTX 5070 Ti.

We don't know exactly when AMD was first intending to release the RX 9070 and RX 9070 XT. But the company has revealed that the GPUs have been delayed in order to improve performance and allow support for AMD's new FSR 4 upscaling technology to be added to more games.

"We are taking a little extra time to optimize the software stack for maximum performance and enable more FSR 4 titles," AMD's Ryzen CPU and Radeon graphics rep David McAfee said on January 22. AMD may also be using the time to build up stock levels.

Anyway, the point regarding those AMD GPUs is that their delayed launch has given Nvidia a little time to tweak its own plans with the RTX 5070. Apart from allowing more time to improve stock levels, a delay until March will mean that AMD's GPUs will have to share the limelight and news cycle with the RTX 5070.

That said, what Nvidia won't be able to do very easily, at least without losing face, is change the 5070's pricing to respond to AMD. Nvidia has already announced a $549 MSRP for the RTX 5070 and it would be a distinctly uncharacteristic climbdown for Nvidia to lower its price, even if a change to an announced but unreleased GPU wouldn't be totally unprecedented for the company.

Nvidia infamously cancelled the RTX 4080 12GB before it was released and rebadged it as the RTX 4070 Ti but with a $100 price cut from $899 to $799. We very much doubt Nvidia will give the RTX 5070 a similar haircut before launch. But it's not absolutely impossible.



]]>
https://www.pcgamer.com/hardware/graphics-cards/nvidias-rtx-5070-graphics-card-rumoured-to-be-delayed-from-february-until-march-but-that-could-actually-be-good-news/ VreD2SUgJhZpqTU8yGg8oF Wed, 12 Feb 2025 11:57:43 +0000
<![CDATA[ The unwelcome workaround for Nvidia's RTX 50-series black screen issues is to hobble your gaming monitor with a 60 Hz refresh rate ]]> During all of my initial review testing and overclocking of the RTX 5090 and RTX 5080 graphics cards, I had no issues with the black screening problem that we've seen cropping up in various forums and Reddit threads. My Founders Edition cards have worked beautifully and not once set fire to the wooden cabin tinderbox in which I do all my performance testing.

But today I hit a wall. That is how I am going to refer to the MSI RTX 5090 Suprim, a wall, because boy, that thing is chonk with a capital OMG.

This is my first third-party RTX 50-series card, and it is towering over my test rig right now, and kinda terrorising it, if truth be told. Because now, I too, have fallen victim to the black screen effect we've read about. Nvidia has said it is investigating the issue but hasn't been able to help me through the struggles with the card.

But I have found a solution… in part. But it's not a solution I would want to live with, just something that I could put up with until Nvidia comes out with a proper fix which stops this $2,700 card from blacking out when it's put under pressure.

Basically, you have to hobble your high refresh rate monitor. Thanks Reddit.

It's horrible, and I don't want to have to do it, but this way I'm able to get Cyberpunk 2077 or DaVinci Resolve to run without crashing my entire rig, and the only way I've managed to get through most of our GPU benchmarking suite is by dropping my glorious 4K 240 Hz OLED monitor down to a lowly 60 Hz refresh.

It's still not allowed me to get through a full 3DMark Time Spy run, but you can't have everything. Even if you spend this much on a brand new graphics card, it seems.

Let me count the other things I tried that have failed:

  • As always, I used Display Driver Uninstaller to clean my old drivers for a fresh start
  • Rolled back, did a clean uninstall, and installed the pre-release drivers Nvidia supplied for the review
  • Tried both MSI's 'Silent' and 'Gaming' BIOS settings
  • Used different power cables
  • Plugged it in and out of the PCAT power testing module
  • Reseated the RAM (always worth a try)
  • Swapped between HDMI and DP cables
  • Changed Nvidia Control Panel power modes
  • Left the room while I booted 3DMark (it used to work with games on tape with the Commodore 64)
  • Tried 120 Hz 😭

It is worth noting that I have so far only tested the card on the PCG GPU test rig. This is the one which has had zero issues with the other RTX 50-series cards, or indeed any graphics card I've tested in the past 12 months.

But it is the one which did give me horrendous coil whine on the RTX 5090 Founders Edition, so I will be switching machines now I have completed testing on this overclocked MSI card to see if it works within another PC.

But yes, there you have it, run your monitor like it's 2007 and you can at least play some games on your RTX 50-series GPU.

You're welcome.

]]>
https://www.pcgamer.com/hardware/graphics-cards/today-i-found-a-potential-solution-to-your-black-screening-rtx-50-series-graphics-card-problems-though-youre-not-going-to-like-it/ UjW7LdvrtwEpUvFcVjwVbi Tue, 11 Feb 2025 17:42:15 +0000
<![CDATA[ Surely not again: Worrying analysis shows Nvidia's RTX 5090 Founders Edition graphics card may be prone to melting power connectors ]]>

Update February 12:
High performance PC builder Falcon Northwest has taken to X to say that it has tested "many" RTX 5090 FE cards, but has been unable to repeat der8auer's findings. Falcon Northwest's thermal images show much more balanced loading on the power cable's wires, something confirmed by measuring the currents.

Some observers suggest that the problem may be linked to the number of times a cable has been attached and detached, with der8auer having said that his cable had been attached and detached numerous times. Watch this space for an official update from Nvidia.

Original story:
Last week I mentioned problems with black screening Nvidia RTX 5080s and 5090s. Now comes much more worrying news of power connector and cable melting problems with the RTX 5090. And not just any RTX 5090, but seemingly a specific and potentially serious problem with the board design of Nvidia's own RTX 5090 Founders Edition.

The investigation comes from YouTube channel der8auer, which specialises in detailed technical analysis. Following the emergence of images of a damaged RTX 5090 on Reddit, the card in question, along with the power cable used and the PSU, eventually landed with der8auer.

He says the owner who suffered the failure is an experienced enthusiast who was fully aware that the power cable for such a high-end GPU needs to be carefully seated. On inspection, der8auer found all three of the card, the 12VHPWR cable, and the PSU were damaged.

The GPU's power socket had a single pin showing damage and evidence of melting, whereas that same pin was much worse on the PSU side power socket and was accompanied by slight melting to several further pins.

The cable itself was damaged on both ends, but also showed signs of partial melting of its sleeving across its length. So, the question is, what is going on here? User error? A poor quality power cable? Something to do with the Asus Loki 1000 W PSU being used?

der8auer thinks none of the above. Using his own RTX 5090 FE, he loaded the GPU up with Furmark and found some worrying results using his own, higher spec 12V-2x6 cable, which had been in service for six months, and a 1600 W Corsair PSU.

It's worth noting that cables for the closely related 12V-2x6 and 12VHPWR GPU power sockets are supposedly identical. It's only the pin length in the sockets themselves that varies between the two socket types, though some have slightly thicker cables to mitigate some of the thermals.

However, the product page for the cable in question clearly warns that it's not recommended for the latest 50-series cards and recommends its newer 12V-2x6 cable with the thicker wiring. The catch, as we'll see, is that the problem applies to at least some extent with later 12V-2x6-spec cables.

The newer 12V-2x6 socket was introduced with shorter connection detection pins in order to ensure that power was only supplied when the connector was fully bedded in the socket and thus address problems with partially attached 12VHPWR connectors on RTX 4090 boards infamously causing melted sockets.

A single pin is taking half of the RTX 5090's massive power draw. (Image credit: der8auer)

Anyway, with his RTX 5090 FE fully loaded, der8auer found two of the 12 wires in his power cable were showing up as much hotter than the others on his thermal camera, with one very hot indeed.

Turning the camera to the power sockets, he found that after just four minutes the GPU's power socket had hit 90 °C, but the PSU side socket was over 140 °C.

der8auer then measured the current going down the individual wires in the power cable and found some very worrying results. The 12V-2x6 cable has 12 wires and hence 12 pins in each connector. Six of those wires / pins are 12V live and six are ground.

His testing shows some very worrying variance across the wires. Two of the six live wires were carrying very little current, one showed about two amps, another around five amps, one up at 11 amps and the last one hitting 22 amps.

So, that last wire is carrying over 250 W of power and roughly half of the total load. That's obviously not as intended. The connectors and cables are meant to have about six to eight amps each, with the total across all six not exceeding 55 amps.
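To put rough numbers on that imbalance, here's a quick back-of-the-envelope calculation using the approximate per-wire currents described above; der8auer didn't give exact figures for the two near-idle wires, so small placeholder values are assumed:

```python
# Rough per-wire load sums based on der8auer's reported figures (approximate, nominal 12 V rail).
VOLTAGE = 12.0
currents_amps = [0.3, 0.3, 2.0, 5.0, 11.0, 22.0]   # the first two are placeholder guesses

total_amps = sum(currents_amps)
for amps in currents_amps:
    note = "  <-- far beyond the ~6-8 A expected per wire" if amps > 8 else ""
    print(f"{amps:5.1f} A  ->  {amps * VOLTAGE:6.1f} W{note}")

print(f"Total: {total_amps:.1f} A ({total_amps * VOLTAGE:.0f} W) against a 55 A / 660 W ceiling")
print(f"Hottest wire: {currents_amps[-1] * VOLTAGE:.0f} W, "
      f"roughly {100 * currents_amps[-1] / total_amps:.0f}% of the measured load")
```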

According to der8auer, some AIB RTX 5090 designs include per-pin power sensing, which would presumably stop this kind of power imbalance from happening. But, surprisingly, Nvidia's own FE design apparently does not. Instead, it essentially amalgamates all six live pins into a single power source as soon as it arrives on the GPU's PCB.

These connectors are certified for 660 W and Nvidia rates the RTX 5090 at 575 W, so there is sufficient headroom available in theory. But not if most of that power is being pushed down a single wire instead of balanced across all six.

der8auer thinks the 15% headroom involved probably isn't enough and that Nvidia ought to have used two 12V-2x6 sockets. In that scenario, even with an uneven load, the cable and connectors would likely run safely. "For this kind of power draw, one connector is simply not enough," he concludes.

He also emphasises that user error absolutely can't be to blame, especially in his own testing. Exactly how this develops and how Nvidia responds remains to be seen. We've reached out to Nvidia ourselves and are waiting to hear back.

For now, the wise advice seems to be to hold fire on that RTX 5090 FE buy or maybe look out for an AIB board with per-pin power sensing on the spec list. And if you already have an FE card, take great care with how heavily you load it, keeping an eye on both ends of the cable and indeed the cable itself. Our Dave has been running the Founders Edition in his wooden cabin, and is very much now rethinking that setup.

Of course, that's really not something you ought to have to do with a brand new $2,000 premium graphics card. So, here's hoping it gets resolved very soon indeed.



]]>
https://www.pcgamer.com/hardware/graphics-cards/surely-not-again-worrying-analysis-shows-nvidias-rtx-5090-founders-edition-graphics-card-may-be-prone-to-melting-power-connectors/ CvfjLD8dshNmFtnRyaxCYP Tue, 11 Feb 2025 16:45:34 +0000
<![CDATA[ Man who chucked $750 million of bitcoin into a dump now wants to buy the whole dump ]]> In 2013, British IT specialist James Howells somehow conspired to throw a hard drive containing 7,500 bitcoin into a municipal dump. Back then it was worth less than $1 million. Now that same bitcoin stash is worth about $750 million and Howells wants to buy the entire landfill site.

Obviously, the idea is to recover the drive. Perhaps equally obviously, Howells himself doesn't have the resources to buy the landfill site from the local government council in Newport, Wales, who currently own it.

Howells says he "would potentially be interested in purchasing the landfill site ‘as is’ and have discussed this option with investment partners and it is something that is very much on the table.”

It was in November of 2013 that Howells realised his mistake made earlier that summer. Ever since, he's been battling to be given access to the landfill site to search for the drive.

Howells escalated his efforts to recover the drive in 2023, threatening to sue the council for half a billion dollars in damages. That was followed last year by an attempt to force a judicial review of the council's decision to deny access to the site.

Late last year at a preliminary court hearing, Howells' lawyers argued that the location of the drive had been narrowed down to a small section of the site, and that recovering it would not require widespread excavation.

However, Newport council lawyers said Howells had no legal claim and that, "anything that goes into the landfill goes into the council’s ownership.” Moreover, the council claims that, "excavation is not possible under our environmental permit and that work of that nature would have a huge negative environmental impact on the surrounding area."

In the event, the judge sided with the council and Howells' claim was dismissed. But with so much money potentially at stake, perhaps unsurprisingly Howells isn't giving up.

Notably, Howells' story seems to have drifted over the years. Early media reports, including the one linked above, quote Howells explaining how he threw the drive away following an office clean-up. "You know when you put something in the bin, and in your head, say to yourself 'that's a bad idea'? I really did have that," he's quoted saying in 2013.

More recent reports have shifted the narrative, Howells being said to have "placed" the drive in a black plastic bag in the hall of his house, only for his then partner to have mistaken the bag for trash and disposed of it.

That shift in narrative may have legal ramifications. Perhaps if Howells can show the drive was never meant to have been disposed of, it will help his cause. For now, it clearly hasn't, as the courts won't hear his case in full whatever his original intentions with the drive.

In the meantime, Newport council does not appear to be interested in selling the site to Howells and his purported investors. Where it all goes from here, who knows. But as long as bitcoin's price keeps on soaring, this surely won't be the last we'll hear about this lost digital treasure. There's nearly a billion dollars supposedly sitting in a dump and that's going to be very hard to ignore.



]]>
https://www.pcgamer.com/hardware/graphics-cards/man-who-chucked-usd750-million-of-bitcoin-into-a-dump-now-wants-to-buy-the-whole-dump/ o9V5HYUnjn2HpauYgcqrCA Tue, 11 Feb 2025 12:31:21 +0000
<![CDATA[ What we want from RDNA 4: PCG hardware team reveals hopes and dreams for AMD's next gaming graphics card ]]> AMD wants to know what gamers are "most excited about in RDNA4." Which has got us thinking. What are we hoping for on PC Gamer? Nvidia is now so dominant, it certainly feels like a strong new generation of GPUs from AMD is more important now than ever.

So here are our thoughts on RDNA 4. Is it our last great hope for affordable mid-range PC gaming?

Jeremy L, desiccated pixel peeper
As I've said before, for me pricing is critical. I explained my thinking on this last year, but the short version is that RDNA 4 needs to be priced right from the get go. AMD keeps pricing GPUs too high at launch, getting poor reviews as a result, only to then lower the price a few months later but not make an impact because the PR damage is done.

So, with all that in mind, what I want is a Radeon RX 9070 XT with raster performance up around an RTX 4080 or 5080 (they're near enough the same, after all) plus better RT than RDNA 3 and upscaling at least as good as DLSS 3 (I don't think it's realistic to ask for DLSS 4 quality) and all for $500 maximum. That's probably too much to ask, but it's what I think AMD needs to deliver to make an impact.

Jacob R, forever an optimist
Considering Nvidia's generation-to-generation improvement is likely to get slimmer as more affordable graphics cards in the series are released, AMD does have more of an opportunity to build something competitive with the RX 9070-series than some might think. That's essentially me hoping for some decent performance-per-dollar stats.

Will RDNA 4 be another 7800 XT? (Image credit: Future)

With plenty of VRAM and a competitive price, we might end up with something similar to the RX 7800 XT, or RX 7900 GRE, for value for money, which I'm wholly not opposed to. Heck, maybe even something better altogether. Knowing AMD, these cards will be a little too pricey at launch, but throw in some healthy discounts and a decent FSR 4 implementation, and either the RX 9070 XT or RX 9070 might be a sleeper mid-range GPU to buy by the end of the year.

Wait, why am I hoping for discounts—Frank, get your darn prices right!

Andy E, hardware botherer
As a long-term FSR user, I'll take anything that can run an enhanced version with better image quality, thanks very much. FSR 3.1 might have made some decent improvements compared to previous iterations, but with the promise of machine learning thrown into the equation, part of me is excited for the potential of a proper DLSS equivalent in the form of FSR 4.

I'll be honest, though, I'm not all that hopeful. Nvidia seems so far ahead of the curve on this one, I doubt we'll be seeing anything quite as powerful as transformer models and Multi Frame Generation bundled with the new cards. Prove me wrong, AMD. You wouldn't be the first.

Dave J, jaded
Talking with both Frank Azor and David McAfee after the CES 2025 non-appearance of the RDNA 4 graphics cards was quite a sobering experience. It was all rather downbeat, as though they'd got wind of what Nvidia was going to do that evening when it announced an RTX 5070 with RTX 4090 performance for $549. Now, that was quickly exposed as just experiential gaming performance when Multi Frame Gen is supported, and not actually a $549 GPU with the rendering chops of an RTX 4090.

Still, it forced the RX 9070 off the table and into a delayed March launch. Cards were already in the hands of retailers and ready to go out to reviewers, but promises of optimisations and more information about FSR 4 abound.

What I want to see now is AMD be aggressive about its desires to really deliver on a gaming GPU for the 4K masses (in reality 4K as a resolution is actually dropping in prominence according to the latest Steam Hardware Survey). I want AMD to deliver against the efficiency promises it's made around the new architecture; it said it was being designed to be straightforward to manufacture and that means it should be available in high volumes and at a great price.

Might RDNA 4 offer RTX 5070 Ti performance for $499? (Image credit: Future)

Given that a March release sees AMD able to get even more cards off the assembly line to add to the GPUs that were already in the channel for the missed January reveal, we ought to see a graphics card launch where you can actually buy the cards on offer.

But forget merely being competitive: let's put it at a price that makes it almost foolhardy to pick the equivalent Nvidia GPU—which will most likely be the RTX 5070 Ti. Give us that level of raw gaming performance for $499 and it will be hard to argue against.

But if AMD toes Nvidia's line again, pricing its cards a scant few dollars below the Nvidia competition, then again the GeForce feature set is going to come into play and sway many gamers with the promise of higher frame rates. However fake you might consider them to be.

James B, AMD GPU sceptic
I've not spent an extended amount of time with AMD graphics cards but I'd like the excuse to. The thing I'm looking for from RDNA 4 is good value. I don't necessarily want the best tech, and don't think the RX 9070 line is promising that. I just want a reason to not go for Nvidia's 50 line. Better FSR to compete with recent DLSS improvements and a boost to ray tracing would help. Competition in the GPU space is good and I'd like the chance to show that with my wallet.

Jacob F, cloud gazer
All I want from RDNA 4 is something to make me feel justified in splashing the cash on a graphics card again. I haven't had that since Nvidia's RTX 3060 Ti, which I'm still rocking today. It's kept up with all the games I like to play, but it's pushing it a bit, now, in this new RTX-by-default era.

What this means in practice is that I'd like RDNA 4 graphics cards to deliver cheap competition that beats midrange RTX 50-series cards in terms of pure raster performance. I don't even care massively about frame gen, although great frame gen performance would be a nice bonus.

50-series beating rasterisation pound-for-pound and great upscaling in the lower midrange segment—yep, that's about it. I might actually decide to upgrade this GPU generation if that happens.

Nick E, GPU sniffer
What I want from RDNA 4 is certainly not what I'll get, partly because it's not a realistic wish and partly because it's AMD. The fundamental architecture of RDNA 3 is absolutely fine—a good balance between out-right compute ability and gaming performance—but its biggest weakness has been the lack of dedicated hardware support for matrix/tensor operations, something that Intel offered right out of the bag with Alchemist (and Nvidia since Turing in 2018).

I know we're finally getting this in RDNA 4 but it's appearing late in the game, and this will be the first revision of the units in a gaming GPU. Historically, every time AMD has introduced something completely new to its graphics processors, it's either been a wild, left-field choice (HBM with Vega, chiplets with RDNA 3) that ultimately transpires to be an unnecessary move or it's been a stripped-down, simplified approach, such as hardware ray tracing in RDNA 2.

Will RDNA 4 be another stripped-down architecture like RDNA 2 and the RX 6000-series? (Image credit: AMD)

What I'd really like RDNA 4 to offer are compute units that don't have to rely on the driver compiler to correctly implement the dual-issue instructions, to make full use of all the available ALUs; I want to see dedicated hardware for accelerating BVH traversal, rather than doing it through compute shaders; I want to see matrix/tensor cores on par with the competition. In short, I want an AMD card that offers the same feature set as Intel and Nvidia, but without a second-place performance.

I don't even care all that much about the price. AMD has been undercutting Nvidia for years but it hasn't made a jot of difference to its share of the discrete GPU market, so if the RX 9070 XT costs $700, for example, then fine. Just as long as it's as good as, or better than, the other $700 graphics cards one can buy.

Except it won't be, of course. For some absurd reason, the multi-billion dollar chip business still operates its graphics division like it's a struggling underdog, a plucky team of poor engineers trying their best against the evil behemoths that dominate the industry. RDNA 4 will end up being cheaper than Blackwell, offer the same traditional rendering performance (aka rasterization), but fall behind on features and cutting-edge technology/performance.

And once again, the Team Red faithful will cry 'Just you wait until RDNA 5 comes out, then you'll see!'

It's a wrap
So, there you have it folks. Our desperate hopes and dreams for RDNA 4. It's a popular riff that AMD never misses an opportunity to miss an opportunity with its Radeon graphics, but there's something in my waters that tells me RDNA 4 is going to be different. It's not long now before we find out.



]]>
https://www.pcgamer.com/hardware/graphics-cards/what-we-want-from-rdna-4-pcg-team-team-reveals-hopes-and-dreams-for-amds-next-gaming-graphics-card/ 5iyNFQahgo7bwYgdB2dVx Tue, 11 Feb 2025 11:12:06 +0000
<![CDATA[ There is one thing making me excited about the new Nvidia RTX 5070 Ti and no, it isn't Multi Frame Generation ]]> The next family of cards in the new RTX Blackwell series of Nvidia GeForce GPUs is going to be an interesting one. Theoretically, the lower we go down the RTX 50-series stack, the less exciting the graphics cards look to be on paper. The generational differences in silicon look slight, and there's a general expectation that Nvidia is going to be leaning even harder on the substantial veneer of Multi Frame Generation performance to make them look good. But the two-card RTX 5070 family may still end up giving us the most PC enthusiast of GPUs in this entire generation.

I'm talking here about overclocking, more in the classic PC gaming sense than the sort of rather pointless number-chasing it's become. Y'know, that old school method of wringing every last drop of gaming performance out of your silicon because the price of stepping up to higher spec hardware is utterly punitive.

Before we get too deep into that, however, I do want to acknowledge that obviously the $749 RTX 5070 Ti is not mid-range silicon, not some sort of middle-order GPU for the masses. That used to be the price of a GeForce Titan card, ffs. But it is arguably the more affordable face of high-end PC graphics.

And, from my time overclocking the RTX 5080, there is a chance the RTX 5070 Ti, with the same GPU, could be a hell of a strong contender for the best overclocking card we've seen in an age. Already the RTX 5080 has delivered some pretty stunning performance on that count, allowing me to push the GPU clock well beyond the 3 GHz mark with a stable overclock, which didn't require me to drive a ton of extra voltage through the GB203 chip, either.

With the RTX 5070 Ti specs showing a clock speed well below what the RTX 5080 is delivering from its own Boost clock, I feel there really ought to be some serious headroom in that cut-down chip.

As I say, both cards are using the GB203 GPU—the RTX 5080 is taking the full chip, utilising all 84 available streaming multiprocessors (SMs), while the RTX 5070 Ti is taking hold of 70 SMs. That equates to a difference of some 1,792 CUDA cores between them, though interestingly only 768 between the RTX 5070 Ti and the old RTX 4080 of the previous generation.

In terms of the Boost clock, the RTX 5080 comes in at a rated 2,617 MHz (though that is a moveable feast given the dynamic nature of GPU frequencies these days), and the standard RTX 5070 Ti is rated at 2,452 MHz.

That's a pretty healthy difference in clock speed there and, seeing as we were able to squeeze a +525 MHz offset just using a simple MSI Afterburner tweak with the RTX 5080, it's not unreasonable to think we ought to be able to spike that sub-2,500 MHz figure a lot higher, too. And it's not like the RTX 5070 Ti is being specifically limited in terms of power draw, either. Compared with the previous generation, the RTX Blackwell card has a higher power rating, at 300 W, which is already mighty close to the RTX 5080's 360 W rating.

I've not had the pleasure of slotting an RTX 5070 Ti into the PC Gamer test rig as yet, so I can't talk from direct experience; this is all just tentatively excited expectation born of what I've seen from the other card using that same GB203 silicon.

If we can get the same 3 GHz+ stable clock speed out of the RTX 5070 Ti then I would expect another 10%+ in terms of gaming performance out of the card. Given that Nvidia has already primed us to expect a 20% gen-on-gen performance gain for the card, slapping another 10% on top of that will put us in the same sort of frame rate bump territory as the RTX 5090 delivers.

That's the card offering the biggest generational performance increase of the RTX Blackwell cards, with a 30% 4K gen-on-gen gaming hike. Being able to push the lower spec RTX 5070 Ti card to match that percentage boost will make it a far more tempting card.
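As a rough sketch of the sums behind that hope (the boost clocks are Nvidia's rated figures, the offset is what our RTX 5080 FE sample managed, and everything about the 5070 Ti here is speculation at this point):

```python
# Back-of-the-envelope overclocking arithmetic (speculative; based on rated boost clocks).
RTX_5080_BOOST_MHZ = 2617
RTX_5070_TI_BOOST_MHZ = 2452
OBSERVED_5080_OFFSET_MHZ = 525            # stable MSI Afterburner offset on our RTX 5080 FE

# The same offset on the 5070 Ti would land just shy of 3 GHz.
print(RTX_5070_TI_BOOST_MHZ + OBSERVED_5080_OFFSET_MHZ)               # 2977 MHz

# A 3.0 GHz target is a chunky jump over the rated boost clock.
print(f"{3000 / RTX_5070_TI_BOOST_MHZ - 1:.0%} above rated boost")    # ~22%

# If Nvidia's claimed ~20% gen-on-gen uplift holds and an overclock adds ~10% on top,
# the compound gain lands in the same ballpark as the RTX 5090's ~30% 4K improvement.
print(f"{1.20 * 1.10 - 1:.0%} compound uplift")                       # ~32%
```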


Obviously, there are things which could put a blocker in all this excitement of mine. There's a good chance there may be something in the vBIOS which stops the card from clocking so high, or there could be some limitations put on the power delivery. If there's any chance of the RTX 5070 Ti being able to be overclocked to come near the RTX 5080 in terms of gaming performance you can bet there will be some limits put in place.

It's also worth noting my exceptional overclocking numbers of the RTX 5080 came from the over-engineered, loss-leading Founders Edition version of the card, which has been specifically created with high-performance power componentry.

There is no Founders Edition card for the RTX 5070 Ti, however, and whether third-party PCBs are going to be capable of driving the GB203 GPU stably at those frequencies is still up for debate. Nvidia certainly suggested I'd won the silicon lottery hitting those figures with our RTX 5080 card.

A key hint to what we might be able to expect to see from overclocking the RTX 5070 Ti would be at what level the AIBs are setting their own factory overclocked versions. With the RTX 5080, the likes of Asus and Gigabyte are setting confidently high overclocked clock speeds on their retail cards. As yet, however, we don't have details of what those companies are setting their overclocked versions at for the RTX 5070 Ti. Right now, all we get is a wee "TBD" when it comes to the final clock speed specs of these cards.

But the RTX 5070 Ti is coming out this month, in a scant few weeks. So we shall know for sure whether there is reason to be cheerful about the third-tier of the RTX Blackwell cards very soon. If it's the OC king, the clock-happy mac daddy, and you had any hope of buying one, it could be the best-value card of the lot. That's a lot of maybes, I'll grant you, but it's certainly not beyond the realms of possibility.

Yes, Nvidia might accidentally create a great tinkerer's GPU and let us really tweak the twangers off it.

And what of the RTX 5070? Well, it's a smaller chip being offered a lot more power, maybe that's going to give it something to offer us via its ickle GB205 GPU. Though I am definitely less convinced about that as a possibility right now.

]]>
https://www.pcgamer.com/hardware/graphics-cards/there-is-one-thing-making-me-excited-about-the-new-nvidia-rtx-5070-ti-and-no-it-isnt-multi-frame-generation/ gpwc3c9ZM7mZQ9tfZWu5Dk Mon, 10 Feb 2025 16:45:31 +0000
<![CDATA[ We asked a PSU expert which adapters or extensions are okay for the RTX 50-series. His answer: 'DO NOT BUY adapters or extenders, they can all be dangerous' ]]> If you're one of the 23 people around the world who has managed to snag an RTX 5090, you might be tempted to splash out on a fancy set of power cables to make your new upgrade look the best it can. That got us thinking about what set would be best to use, so we asked a leading expert on power supplies for the low-down. Given his credentials, we expected a nuanced, detailed response but he settled for a five-word recommendation: "Do not buy adapters or extenders."

The expert in question just so happens to be Aris Mpitziopoulos and he's the CEO of Cybenetics, the global authority on testing and rating computer power supply units, and owner of tech site Hardware Busters. In short, he knows what's what when it comes to all things electrical engineering with PCs. And more to the point, he also knows exactly what you should be doing with power cable extenders and adapters for a new RTX 5090.

When we asked him about what's the best approach to take, his reply was unequivocal: "It is simple, DO NOT BUY adapters or extenders. They can all be dangerous."

By dangerous, Aris means that there's a chance that an iffy extender or adapter could end up resulting in a connection overheating and melting when an RTX 5090 is pulling its full complement of 48 amps or more. And that's despite using the new, supposedly 'fixed' 12VHPWR connector, aka the 12V-2x6.

The problem isn't the cables themselves, though. "All 12V-2x6 cables should be able to deliver 600W (or up to 55A) since these cables are electrically compatible for all vendors!" Aris told us. "But you have to keep in mind that there are PSUs without native 12V-2x6 sockets; they use 2x 8-pin instead. Apparently, these PSUs use proprietary 12V-2x6 cables."

To give you an example of what he means by that, Corsair's 2024 Shift RM PSUs come with a dedicated 12VHPWR cable but the PSU itself doesn't use such a socket. Instead, the cable splits into two 8-pin connectors that you plug into the supply unit. Those sockets and the cable are all rated to supply up to 55 A provided you use the supplied cable. Another set might fit the sockets but there's a chance they'll make a mess of things very quickly.

This is true of many PSU models, though some do have a single 12VHPWR socket and cable. Corsair and others have switched to the newer 12V-2x6 design (which uses different sense pin lengths to ensure that current can be supplied only once the connector is fully inserted) for their latest PSUs but the older design can still be used with an RTX 5090.

A diagram of a graphics card and related power connection standards. (Image credit: Corsair)

Aris explains: "In general, the cable remains exactly the same. Only the connectors on the PSU and GPU side change from 12VHPWR to 12V-2x6, for PSUs that used the native 12+4 pin headers. There are no changes in the cables (they are just called now 12V-2x6 cables instead of 12VHPWR which can be confusing)."

Now it's important to note that he's not saying the adapters Nvidia and its board partners are shipping with their RTX 50-series cards are a problem. They're not; it's third-party products and extenders that you need to be wary of, as there are no guarantees whatsoever that they'll work with your particular PSU and RTX 50-series combination.

You might have been using such a setup with an RTX 4090 and experienced no problems, but the default TGP of that card is 450 W, a full 125 W less than the 5090's. Not that this is a hard limit, as our Dave discovered testing the new Blackwell monster for its review.

The card's average power was 568 W and it peaked at 637 W—a current draw of 53 A.
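If you want to sanity-check those current figures, it's simple arithmetic: divide the board power by the 12 V rail voltage. Here's a quick sketch; the per-pin split at the end assumes the connector's six 12 V conductors share the load evenly, which a badly seated plug absolutely does not guarantee.

```python
# Current on the 12 V rail: amps = watts / volts.
RAIL_VOLTAGE = 12.0

for label, watts in [("RTX 5090 rated TGP", 575),
                     ("RTX 5090 average (tested)", 568),
                     ("RTX 5090 peak (tested)", 637)]:
    print(f"{label}: {watts} W -> {watts / RAIL_VOLTAGE:.1f} A")

# RTX 5090 rated TGP: 575 W -> 47.9 A
# RTX 5090 average (tested): 568 W -> 47.3 A
# RTX 5090 peak (tested): 637 W -> 53.1 A

# Spread evenly across the connector's six 12 V pins, that 53 A peak works
# out at roughly 8.8 A per pin. Not a lot of margin.
print(f"Per-pin current at peak: {637 / RAIL_VOLTAGE / 6:.1f} A")
```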

Your next upgrade


Best CPU for gaming: The top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game ahead of the rest.

CPUs can routinely draw four times that amount of current but they have a huge number of pins to distribute it over. So much so that you can still use the chip even if some of the supply pins are damaged or completely broken. When it comes to an RTX 5090, though, you've just got 12 pins, tightly packaged together. That doesn't leave a lot of room for manufacturing variances or user mistakes.

So, here's our PC Gamer PSA: If you are going to spend thousands of dollars on the most powerful gaming graphics card you can buy (well, potentially buy), then you should really consider getting the right PSU to go with it. I'd personally recommend doing this with an RTX 5080, as well as an RTX 5090.

Heck, I even did it with my RTX 4080 Super. I chose a PSU that had a dedicated 12VHPWR socket and cable. It's not the fanciest looking of things but I'd rather have a dull-looking PC than risk any kind of melty-melty action.

And if you don't want to take my word for it, then listen to the boss of Cybenetics.

]]>
https://www.pcgamer.com/hardware/graphics-cards/we-asked-a-psu-expert-and-cybenetics-chief-which-adapters-or-extensions-are-okay-for-the-rtx-50-series-his-answer-it-is-simple-do-not-buy-adapters-or-extenders-they-can-all-be-dangerous/ RpPPxCpwsUrdgcu7qb2SxF Mon, 10 Feb 2025 16:38:02 +0000
<![CDATA[ I've been testing Nvidia's new Neural Texture Compression toolkit and the impressive results could be good news for game install sizes ]]> At CES 2025, Nvidia announced so many new things that it was somewhat hard to figure out just what was really worth paying attention to. While the likes of the RTX 5090 and its enormous price tag were grabbing all the headlines, one new piece of tech sat to one side with lots of promise but no game to showcase it. However, Nvidia has now released a beta software toolkit for its RTX Neural Texture Compression (RTXNTC) system, and after playing around with it for an hour or two, I'm far more impressed with this than any hulking GPU.

At the moment, all textures in games are compressed into a common format, to save on storage space and download requirements, and then decompressed when used in rendering. It can't have escaped your notice, though, that today's massive 3D games are…well…massive and 100 GB or more isn't unusual.

RTXNTC works like this: The original textures are pre-converted into an array of weights for a small neural network. When the game's engine instructs the GPU to apply these textures to an object, the graphics processor samples them. Then the aforementioned neural network reconstructs (in other words, decodes) what the texture looks like at the sample point.

The system can only produce a single unfiltered texel, so for the sample demonstration, RTX Texture Filtering (also called stochastic texture filtering) is used to interpolate the other texels needed.

Nvidia describes the whole thing using the term 'Inference on Sample,' and the results are impressive, to say the least. Without any form of compression, the texture memory footprint in the demo is 272 MB. With RTXNTC in full swing, that reduces to a mere 11.37 MB.
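If you're wondering what 'textures as neural network weights' actually looks like in practice, here's a deliberately crude, hypothetical sketch of the idea. To be clear, this is not Nvidia's code, file format or API, and the sizes and random weights are made up purely for illustration; the real system trains the latents and decoder to reproduce the original material, whereas this toy version just shows the shape of the 'inference on sample' process.

```python
import numpy as np

# Toy stand-in for neural texture compression: the 'texture' is a small grid
# of latent feature vectors plus the weights of a tiny decoder network.
rng = np.random.default_rng(0)

LATENT_RES, LATENT_DIM, HIDDEN = 64, 8, 16     # hypothetical sizes
latents = rng.standard_normal((LATENT_RES, LATENT_RES, LATENT_DIM)).astype(np.float32)
w1 = rng.standard_normal((LATENT_DIM, HIDDEN)).astype(np.float32)
w2 = rng.standard_normal((HIDDEN, 4)).astype(np.float32)   # RGBA output

def sample_texel(u: float, v: float) -> np.ndarray:
    """Reconstruct a single unfiltered texel at texture coordinate (u, v)."""
    x = int(u * (LATENT_RES - 1))
    y = int(v * (LATENT_RES - 1))
    feat = latents[y, x]                         # fetch the latent features
    hidden = np.maximum(feat @ w1, 0.0)          # tiny MLP, ReLU activation
    return 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # sigmoid -> RGBA in [0, 1]

# In the real thing, the weights are trained so this output matches the source
# material; here they're random, so the colours are meaningless.
print(sample_texel(0.25, 0.75))
```

The saving comes from the fact that a small latent grid plus a tiny set of decoder weights can be far smaller than the full-resolution texture set it stands in for, which is broadly where that 272 MB to 11.37 MB drop comes from.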

Gallery: screenshots of Nvidia's RTXNTC software demonstration, highlighting the use of neural texture compression to reduce the impact of texture sizes in VRAM. 1) Using uncompressed textures. 2) Using NTC textures, transcoded to a BCn format. 3) Using NTC textures directly. (Image credit: Nvidia)

The whole process of sampling and decoding is pretty fast, though not quite as fast as normal texture sampling and filtering. At 1080p, the non-NTC setup runs at 2,466 fps but this drops to 2,088 fps with Inference on Sample. Stepping the resolution up to 4K, the figures are 930 and 760 fps, respectively. In other words, RTXNTC incurs a frame rate penalty of 15% at 1080p and 18% at 4K—for a 96% reduction in texture memory.

Those frame rates were achieved using an RTX 4080 Super, and lower-tier or older RTX graphics cards are likely to see a larger performance drop. For that kind of hardware, Nvidia suggests using 'Inference on Load' (NTC Transcoded to BCn in the demo), where the pre-compressed NTC textures are decompressed as the game (or demo, in this case) is loaded. They are then transcoded into a standard BCn block compression format, to be sampled and filtered as normal.

The texture memory reduction isn't as impressive but the performance hit is nowhere near as big as with Inference on Sample. At 1080p it runs at 2,444 fps, almost as fast as standard texture sampling and filtering, and the texture footprint is just 98 MB. That's a 64% reduction compared to the uncompressed format.
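For the record, those percentages fall straight out of the demo's raw numbers. A quick check:

```python
def pct_drop(before, after):
    return (before - after) / before * 100

print(f"1080p fps penalty, Inference on Sample: {pct_drop(2466, 2088):.0f}%")  # 15%
print(f"4K fps penalty, Inference on Sample:    {pct_drop(930, 760):.0f}%")    # 18%
print(f"VRAM saved, Inference on Sample:        {pct_drop(272, 11.37):.0f}%")  # 96%
print(f"VRAM saved, Inference on Load (BCn):    {pct_drop(272, 98):.0f}%")     # 64%
```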

All of this would be for nothing if the texture reconstruction was rubbish but as you can see in the gallery below, RTXNTC generates texels that look almost identical to the originals. Even Inference on Load looks the same.

Gallery: screenshots of Nvidia's RTXNTC software demonstration, showing the reconstructed textures alongside the originals. (Image credit: Nvidia)

Of course, this is a demonstration and a simple beta one at that, and it's not even remotely like Alan Wake 2, in terms of texture resolution and environment complexity. RTXNTC isn't suitable for every texture, either, being designed to be applied to 'physically-based rendering (PBR) materials' rather than a single, basic texture.

It also requires cooperative vector support to work as quickly as this, and that's essentially limited to RTX 40 and 50-series graphics cards. A cynical PC enthusiast might be tempted to claim that Nvidia only developed this system to justify equipping its desktop GPUs with less VRAM than the competition, too.

Your next upgrade


Best CPU for gaming: The top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game ahead of the rest.

But the tech itself clearly has lots of potential and it's possible that AMD and Intel are working on developing their own systems that achieve the same result. While three proprietary algorithms for reducing texture memory footprints aren't what anyone wants to see, if developers show enough interest in using them, then one of them (or an amalgamation of all three) might end up being a standard aspect of DirectX and Vulkan.

That would be the best outcome for everyone, so it's worth keeping an eye on AI-based texture compression because, just like with Nvidia's other first-to-market technologies (e.g. ray tracing acceleration, AI upscaling), the industry eventually adopts them as the norm. I don't think this means we'll see a 20 GB version of Baldur's Gate 3 any time soon, but the future certainly looks a lot smaller.

If you want to try out the NTC demo on your own RTX graphics card, you can grab it directly from here. Extract the zip and then head to the bin/windows-x64 folder, then just click on the ntc-renderer file.

]]>
https://www.pcgamer.com/hardware/graphics-cards/ive-been-testing-nvidias-new-neural-texture-compression-toolkit-and-the-impressive-results-could-be-good-news-for-game-install-sizes/ hKVC9vmYsFSaCYS3aigr4b Mon, 10 Feb 2025 13:08:42 +0000
<![CDATA[ I'm genuinely stunned by the overclocking performance of the RTX 5080, and curious as to why Nvidia left so much headroom ]]> The Nvidia RTX 5080 has been a much-maligned graphics card since its launch. It is a smaller, more power-hungry GPU than its Ada predecessor, and doesn't really offer a whole lot more silicon than the RTX 4080 Super. Yes, it's ostensibly the same price, but is only delivering around 15% higher frame rates at 4K, and less at lower resolutions.


That hasn't stopped them all from selling out the instant they went on sale. After all, the impressive performance of Multi Frame Generation helps, and the $999 sticker price makes it much cheaper than the $2,000+ RTX 5090 while still offering a healthy generational uplift in DLSS 4-supporting games.

I've felt pretty uninspired by the second-tier RTX Blackwell GPU, if truth be told. It will be a great basis for high-end gaming PCs going forward, but in raw silicon terms the needle has barely moved from Ada to Blackwell.

Well, I was pretty uninspired until I started overclocking this thing. And oh boy, I was not expecting this. I've not seen this level of overclocking headroom in a released GPU in years. There is a ton of raw performance left on the engineering room floor, and it makes what was a pretty middling performance hike over the RTX 4080 Super rather more telling.

I'm seeing over 500 MHz of extra clock speed to play with, which is giving me practically another 10% in frame rates. And all without really taxing the power or thermal performance of the Founders Edition board.

Since we got into the realms of boost clocks, with smart GPUs essentially altering their clock speeds on the fly depending on the thermal and power headroom available to them, there hasn't been a lot left for overclockers.

Sure, you could bump up the frequency offset a touch, but rarely by a meaningful amount, especially not to a level that would register as anything more than the sort of gaming frame rate variance you'd see just from standard testing. Even factory overclocked cards barely offered anything particularly exciting. Go look at an overclocked Asus RTX 4080 Super and you're lucky if you get another 90 MHz out of it.

What's different here, with the RTX 5080, is that even those factory overclocked cards are offering something more substantial. The Asus ROG Astral RTX 5080 is offering a +173 MHz overclock and the Gigabyte Aorus RTX 5080 a +188 MHz overclock. We've not seen that level of confidence from a manufacturer when it comes to its OC cards in a long while.

Now, as much as people might suggest I've won the silicon lottery—and as much as Nvidia wants to point out that users should not expect every board to be able to hit the same 0.5 GHz boost clock bump that I'm seeing—it's obvious to me there is a lot of headroom in the GB203 GPU right here.

Nvidia RTX 5080 Founders Edition graphics card from different angles (Image credit: Future)

I am not some super special overclocker. Though I have been professionally messing around with GPUs for the best part of 20 years, I am a very basic overclocker. MSI Afterburner is my jam, and I don't go too deep. I did play around with the OC Scanner, which tries to tailor the voltage curve to the capabilities of your GPU, and that can give you higher stable performance, but it didn't really move the needle too much with the RTX 5080.

Just going buckwild with the MHz sliders, however, sure did. With the Black Myth Wukong benchmark running, I was quickly able to go through the gears and hit a solid +500 MHz offset. That wasn't quite giving the 3.1 GHz clock I wanted, however, so I squeezed the GPU some more and hit +525 MHz.

It very much would not go to +550 MHz, though. That way lay catastrophic failure, black screens, and a GPU that was rather tentative about going quite so high again when I rebooted. Having let it lie for a wee while I then went back to +525 MHz and it has remained stable since.

I was rather cavalier with the memory clocks and went all the way up to +1,000 MHz, where I just drew the line out of fear.

As you can see, the performance increase is there. Pretty much a 10% bump at 4K and not far off that at 1440p (which is important because, if you're running DLSS in Quality mode at 4K, 1440p is the resolution you're really rendering at).

Combine that with the gen-on-gen bump of the RTX 5080 over the RTX 4080 Super and you're now talking about a 25% increase instead of a 15% bump. That's a far more compelling figure, and makes the RTX Blackwell card a more tempting option, especially as it's the same price and also comes with Multi Frame Gen as an extra performance panacea.
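(If you want the strict maths, the two gains stack multiplicatively rather than additively, as the quick sketch below shows; call it 25% once you allow for run-to-run variance.)

```python
# Stacking the overclock on top of the stock generational gain.
gen_on_gen = 1.15   # RTX 5080 over RTX 4080 Super at stock, roughly
oc_gain = 1.10      # the ~10% extra from the +525 MHz offset

print(f"Combined uplift: ~{(gen_on_gen * oc_gain - 1) * 100:.1f}%")
# Combined uplift: ~26.5%
```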

Of course, it's still well below the 50%+ frame rate increase you saw going from the RTX 3080 up to the RTX 4080, but it's a better look than the second-tier RTX Blackwell card has presented so far, and much closer to the 30% boost the RTX 5090 gets over the RTX 4090. And there are even a few occasions where the overclocked RTX 5080 is able to match the raw rendering power of that top Ada GPU.


One other thing that has impressed me, and made me feel like this is more than just a numbers game but an overclock I could legitimately live with full-time, is that it has barely shifted the thermals or power demands of the GPU. I was half expecting an exponential increase in power draw to get that extra 10%, which would have explained why Nvidia had been so conservative with the boost clocks. But no, it has barely changed.

Quite why there is so much performance being left behind as standard, I'm not sure. I only have this one RTX 5080 Founders Edition to hand right now, so it will be interesting to see what other manufacturers' cards deliver on that front, but I would expect to still see some performance in there to play with.

What's interesting here is that the upcoming RTX 5070 Ti also uses the same GB203 GPU. Sure, it's going to be a fairly cut-down version, but it's also got a default boost clock well below that of the RTX 5080. If there is the same headroom in that card, it could well be the overclocking king of the upper mid-range.

Still, for now, I'm going to bask in the glory of winning the silicon lottery. And carry on making it run Football Manager 2024 anyways. I mean, it's going to have to keep doing that for a while now the 2025 version's been cancelled, eh 🫗.

]]>
https://www.pcgamer.com/hardware/graphics-cards/im-genuinely-stunned-about-the-overclocking-performance-of-the-rtx-5080-and-curious-why-nvidia-left-so-much-headroom/ Uu2UwXGzARk2qwXBcRURJ6 Fri, 07 Feb 2025 17:32:10 +0000
<![CDATA[ AMD asks what you want from RDNA 4. PC gamers reply: 'er, just make sure we can actually buy it, oh and don't worry about ray tracing' ]]> New GPUs selling out in picoseconds has depressingly become the norm. That includes, inevitably, Nvidia's latest RTX 50-series graphics cards. Maybe that's why when AMD yesterday asked gamers to say what they were most "excited" about for the upcoming RDNA 4 GPUs, "availability would be a brilliant start," pretty much sums up the sentiment.

Of course, that wasn't the only response to AMD's consumer and gaming rep, Frank Azor, when he posted, "What features are you most excited about in RDNA4?" on X. But the word "availability" does pop up rather a lot.

Pricing is another major theme AMD will need to address when the Radeon RX 9070 and 9070 XT arrive in early March. "Don’t want to pay more for my GPU than I paid for my entire high end gaming rig a year ago," was the response from one X user and you get exactly where they're coming from.

Next up is upscaling generally and FSR 4 specifically; as one poster put it, "FSR 4 is a big one." FSR 4 will move AMD into the AI upscaling era, matching the approach Nvidia has been using ever since DLSS was first announced way back in 2018.

The slight snag is that just as AMD finally catches up in that broad regard, Nvidia has just made the jump from a CNN to transformer model for its upscaling, in some ways dramatically improving quality. Oh, and it has added Multi Frame Generation to DLSS, too. As ever, then, it feels like AMD is constantly playing catch up, always taking on Nvidia with a feature set that's a few years behind.

On the other hand, AMD can take solace from some of the responses on X, many of which said they just wanted solid raster performance at a great price. "All I want is much better 'real' frames per $" is a comment that probably sums up that line of posting.

Beyond that, one notable absence, relatively speaking, was mention of ray-tracing performance. It's not that nobody mentions it at all, but RT absolutely doesn't rank nearly as highly as availability, general performance, and FSR 4. That's interesting, isn't it?

Perhaps predictably, there's something of a chorus of "give us real frames, not fake frames" along with some posters suggesting that AMD needn't waste its time knocking up an answer to Nvidia's Multi Frame Generation.

MFG is just the latest in a long line of Nvidia technologies that have caused controversy. DLSS upscaling "arrived with a thud" according to Nvidia itself, and MFG is splitting opinions, albeit some of us at PC Gamer are very much convinced.

Arguably the real problem with MFG is how Nvidia has presented it. Claiming that the new RTX 5070 is going to match the old RTX 4090, essentially on the basis of comparing the RTX 5070 running MFG to the 4090 running natively, is at minimum dubious marketing. As we now know, the raw performance of the 5080 can't match the 4090. So, the RTX 5070 will be a long way off.

Among other lesser concerns mentioned in the responses on X are performance per watt, plenty of VRAM, AI performance, driver quality, and video encode and decode features. But more than anything it's that trio of availability, price, and upscaling that seems to matter most. Well, it seems to for the first couple of hundred responses on X.

Jump on over to X if you dare and take a look for yourself. As for me, price is key. As I said last year, RDNA 4 needs to be priced extremely aggressively right from launch. AMD needs to avoid the mistake of pricing too high at launch, suffering poor reviews as a consequence, only to lower the price in fairly short order but make little impact because the PR damage has already been done.

With that in mind, I think a Radeon RX 9070 XT with near RTX 4080/5080 (let's be honest, they're virtually the same) raster performance for $500 maximum is what I want to see, plus a decent RT uplift and FSR 4 upscaling at least as good as DLSS 3. There's not long to wait...


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/amd-asks-what-you-want-from-rdna-4-pc-gamers-reply-er-just-make-sure-we-can-actually-buy-it-oh-and-dont-worry-about-ray-tracing/ hrc4ZYuhsiiRpx6DPWzWRe Fri, 07 Feb 2025 15:54:45 +0000
<![CDATA[ Nvidia is 'investigating the reported issues with the RTX 50-series' cards after RTX 5090 and RTX 5080 owners (and some RTX 40-series folk) report black screen problems ]]> We've been tracking reports of problems with Nvidia's new RTX 50-series GPUs for a little while, but they now seem to be hitting critical mass. It seems numerous owners of the new RTX 5080 and RTX 5090 cards are suffering crashes and particularly black screens, with a few arguably less substantiated reports of "bricked" cards, too. We've asked Nvidia about the problems and have been told that it is currently "investigating the reported issues with the RTX 50-series."

As ever with an emerging issue, the current situation looks complex. Most commonly, RTX 50-series owners are reporting black screen problems. The scenarios under which this occurs vary. Some say it's happening when switching resolutions or refresh rates. Others are finding the black screen hits under heavy load, while yet others associate it with multi-monitor setups.

Some users then find the problem persists on a hard reboot, with the card not detected in Device Manager or the system BIOS, while for others it seems more intermittent, with a reboot restoring functionality for a time before the black screen hits again. Like we said, it's complicated.

Now, we should preface all this by saying that we've had very few issues with our RTX 5080 and 5090 boards thus far. We've experienced a couple of crashes when alt-tabbing in and out of games to access various benchmark and capture tools, but nothing at all persistent that remained after rebooting. Essentially, we have no direct experience of these reported black screen problems, even when overclocking the RTX 5080 to within an inch of its silicon life.

However, we've been fastidious about full driver cleans before installation. Hold that thought, it's important. Anyway, as we write these words, the number of users reporting problems across platforms including Nvidia's own forum (one example of many here) and Reddit (ditto, here) has gone well beyond a few isolated and anecdotal incidents.

Likewise, numerous YouTube videos either reporting or having directly experienced the problems are now emerging, including one from JayzTwoCents which offers an overview and another from Boosted Media where they both experienced the black screen issues and seemed to have fixed them.

Indeed, we've queried Nvidia and have been told: "We are investigating the reported issues with the RTX 50 series." So, we'll have to wait and see regarding any official solutions or driver hotfixes.

Anyway, to cut a long story short, the problem seems very likely associated with Nvidia's latest 572.16 driver release. There are some reports of "permanent" failure of the cards. But at the very least these seem much more isolated and may well constitute a typical failure rate.

In other words, when you dump a batch of graphics cards on the market and buyers are enthusiastically reporting their experience, you're bound to have a few that break, either through manufacturing issues or user errors.

It's also worth noting that some owners of RTX 40-series GPUs are also reporting problems with the 572.16 driver. So, these issues may not be entirely specific to the new RTX Blackwell cards.

Moreover, particularly with regard to the RTX 5080, we don't think the issues relate to hardware stability or the GPUs being run close to the wire at factory settings. We've managed a stable +525 MHz GPU overclock and +1,000 MHz memory overclock with our Founders Edition RTX 5080, meaning it's running at a solid 3.1 GHz. That's a very hefty overclock by recent standards.

Gallery: Nvidia RTX 5080 and RTX 5090 Founders Edition graphics cards. (Image credit: Future)

If we're fairly confident at this point there is a real problem, the next step is a solution. We suspect that Nvidia will drop a blog post in the near future outlining the issue and either promising a fix in short order or accompanied by a hotfix driver update.

Until then, the best emerging advice to solve the problem is a full driver wipe with DDU (Display Driver Uninstaller) and then a driver reinstall. That suggests the problem springs from some sort of conflict with residual driver data. Going back to our earlier comment, it may be no coincidence that we ran DDU before installing our RTX 50-series review cards as a matter of routine precaution and haven't experienced the black screen problems.

Setting your primary PCIe x16 graphics slot (PEG) to Gen 4 mode in the BIOS may also be a wise precaution in the short term, following reports of PCIe signalling issues with the new RTX 50 GPUs, especially if the DDU wipe isn't entirely successful.

Anyway, if nothing else all this does rather add weight to the idea that a full driver wipe before any new GPU installation is a very good idea.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/nvidia-is-investigating-the-reported-issues-with-the-rtx-50-series-cards-after-rtx-5090-and-rtx-5080-owners-and-some-rtx-40-series-folk-report-black-screen-problems/ 2Y3ACj8LNxBXJTVAAS5hmE Fri, 07 Feb 2025 13:04:48 +0000
<![CDATA[ If showing off that you actually own an RTX 5090 isn't enough, why not show off that you own a golden one for double rarity points ]]> Nvidia RTX 50-series cards are about as ephemeral as Casper the ridiculously expensive ghost right now, with stocks having dipped to nigh-on zero within a few minutes of launching last week. One might even say they're as rare as gold. So, Asus says, how about we add some more rarity to your rare so you can pine while you pine—or gloat while you gloat, if you're lucky enough to actually get your hands on one.

Yes, that's an ancient meme, and no, I don't care. I don't care because I know I'm never going to get my hands on the just-announced Asus ROG Astral RTX 5090 Dhahab OC Edition (via VideoCardz), and that makes me a little bitter. And just like any good journalist, I'm here to share a little of that bitterness with you—you're welcome.

And look, I'm not immune to gold-plated lust syndrome any more than the next gamer who enjoys scarce and shiny things. Just ask my Golden AK-47, still lying on the floor in the virtual corridors of Call of Duty 4, or the perfectly cylindrical boreholes adorning the skulls of its countless victims. But come on, owning an RTX 5090 isn't enough? It's gotta be golden, too?

That's what "Dhahab" means, by the way: it's Arabic for "golden". Asus explains that the graphics card is a "celebration of the rapid evolution of the Middle East, featuring a unique skyline silhouette that signifies a transition from the sands to the skies". That explains the rather gorgeous splashes of blue on there, too, then.

Clearly not one for holding back, Asus has followed the ROG Astral formula and thrown an extra fan onto this GPU's shroud, making it a quad-fan graphics card. That's not one of those piddly little ones you find crammed along the side of some 50-series cards, either (which shouldn't be allowed to be added to the official fan count, by my estimation).

No, it's a full-on fan on the flipside, just like you find on the regular ROG Astral RTX 5090, this being the most expensive of Asus' models and one of the most expensive RTX 5090 models in general. It's currently selling for $3,080 at Newegg—well, it would be if it were in stock. There's no word on how much the Dhahab Edition will cost, but when you get up into the several thousands an extra wad here or there seems to matter not.

Anyway, if you want the blingiest of blingy graphics cards, and if you're lucky enough to catch one in stock, go for it, get yourself a ROG Astral Dhahab. Just please don't send me any pictures of that gloriously gilded GPU inside your rig, and certainly don't show me screen grabs of those Multi Frame Gen frame rates.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/if-showing-off-that-you-actually-own-an-rtx-5090-isnt-enough-why-not-show-off-that-you-own-a-golden-one-for-double-rarity-points/ H75KLJwzSuRtWUfPxbBowh Thu, 06 Feb 2025 17:48:44 +0000
<![CDATA[ Where the AF are all the graphics cards?! It's not just the new RTX 50-series that's impossible to buy, finding any decent GPU in stock at the major US retailers right now is like staring into an abyss of nothing ]]> You've perhaps seen our in-depth reviews of Nvidia's new RTX 5090 and RTX 5080 and decided that you're going to be better off spending your money on an RTX 40-series card instead. AMD's RDNA 4 cards won't be here for another month, too, so you've decided to go on the hunt for a nice RTX 4070 Super or an RX 7900 XT. However, if you stick with the major US retailers, you'll be served a whole load of nothing. Err, hello AMD and Nvidia? Where are all your graphics cards?

I've just spent the past two hours updating our GPU deals page, hunting through all the graphics cards listings at Amazon, Newegg, Best Buy, Walmart, and B&H Photo to find the best offers on AMD Radeon, Intel Arc, and Nvidia GeForce models. Rather than go into detail about every model I've searched for, let me just show you this simple summary:

  • RTX 5090: Sold out (or overpriced)
  • RTX 5080: Sold out (or overpriced)

  • RTX 4090: Sold out (or overpriced)
  • RTX 4080 Super: Sold out (or overpriced)
  • RTX 4070 Ti Super: Sold out (or overpriced)
  • RTX 4070 Super: Sold out (or overpriced)
  • RTX 4070: Sold out (or overpriced)
  • RTX 4060 Ti: $400 @ Amazon
  • RTX 4060: $295 @ Newegg

  • RX 7900 XTX: Sold out (or overpriced)
  • RX 7900 XT: $730 @ Amazon
  • RX 7900 GRE: Sold out (or overpriced)
  • RX 7800 XT: $490 @ Amazon
  • RX 7700 XT: $400 @ Amazon
  • RX 7600: $260 @ Walmart

  • Arc B580: Sold out (or overpriced)
  • Arc B570: $230 @ Newegg

Yep, that's right. Almost the entire inventory of mid-range or higher-tier graphics cards is gone. There's zip out there, nothing, nyet, nada, nowt. Well, okay, there are some GPUs you can find, but they're either being sold at well over their MSRPs, or the retailers in question don't have confidence-inspiring reviews.

If I'm going to spend more than $500 on a graphics card, I want to be sure that the returns policy is as solid as steel and that I'll get my money back if I'm shipped a literal brick. The situation is not massively better here in the UK, though you can still get an RX 7900 XTX or an RTX 4080 Super at a below-MSRP price.

So, what on Earth is going on? Why is there such a dearth of graphics cards? The reason is simple(ish): AMD and Nvidia decided to wind down production of their 'last-gen' cards well before a wealth of new models could replace them. The sell-through rate of distribution centres and major retailers has been extraordinarily high, probably because they were expecting lots of new cards to arrive in January.

But what we actually got was almost a paper launch with the RTX 50-series, with so few RTX 5090 and 5080 cards that retailers' shelves were bare within mere minutes of the products going on sale. In some cases, the shelves were empty before that point because stock had already been sold. AMD, on the other hand, teased RDNA 4 before our eyes at CES but then took weeks to finally announce that the cards themselves wouldn't be ready until early March.

Meanwhile, budget-savvy PC gamers took one look at the prices of the Blackwell cards, blinked a few times, and then ran off to snap up the remaining stocks of higher-tier RTX 40 cards. Want to grab an RTX 4070 variant? Tough luck, I'm afraid, but you can get the rather nice RX 7800 XT for a reasonable price. That's a GPU I regularly use for testing game performance and it's a real trooper of a mid-range graphics card.

The obvious question to ask at this point is, 'When will GPU stocks get better?' Unfortunately, nobody has a concrete answer to that query but I suspect it's not going to be for a while, with March being the very earliest. That's when we'll be able to get our hands on the Radeon RX 9070-series but if AMD and its partners can't meet demand, we're going to be right back to square one.

Nvidia's RTX 5070 and 5070 Ti are generally expected to be launched towards the end of February, but after the dearth of the 5080 cards on shelves (well, the dearth of sensibly priced 5080 cards), I'm not overly optimistic about 5070 stocks being good.

Your next upgrade


Best CPU for gaming: The top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game ahead of the rest.

And don't look to Intel to save the day either, as it can barely get enough Arc B580 cards out to meet demand. The slightly less potent B570 is available, but that's a budget-tier card, and not cheap enough compared to the similarly priced, but evidently superior, Arc B580. Intel simply doesn't have anything to offer the mid-range/mainstream market and probably won't for a long time (if at all).

All of this means that whatever graphics card you currently have will have to suffice for a good while longer. I'd avoid going down the second-hand route if you have quite an old GPU and you're hoping to pick up a cheap RTX 30-series, for example, as prices have gone up due to the paucity of new models.

The truth of the matter is now that AMD and Nvidia make gazillions of dollars in the data centre market, gaming graphics cards are just hors d'oeuvres for the revenue sheets. Fancy new graphics architectures are good for headlines, but it's just not where the money is. Nice for the chip giants, rotten for PC gamers. Same as it ever was, I guess.

]]>
https://www.pcgamer.com/hardware/graphics-cards/where-the-af-are-all-the-graphics-cards-its-not-just-the-new-rtx-50-series-thats-impossible-to-buy-finding-any-decent-gpu-in-stock-at-the-major-us-retailers-right-now-is-like-staring-into-an-abyss-of-nothing/ M9WNbvxEn7VKtaQ8PXRwtN Thu, 06 Feb 2025 13:25:02 +0000
<![CDATA[ AMD's new RDNA 4 GPUs are officially arriving in 'early March' and they'll need to be stellar to rescue the company's nosediving gaming graphics division ]]> It's official, AMD's next-gen RDNA 4 gaming graphics cards will arrive in early March. And that's not a moment too soon given the latest figures from AMD's gaming graphics division. It's not pretty, folks!

The news comes from AMD's latest earnings call, laid on for the usual assembly of bankers and money men. Among various other revelations, including broadly strong financial results for AMD as a whole, CEO Lisa Su let slip that RDNA 4 is coming in early March.

"RDNA 4 delivers significantly better rate tracing performance and add support for AI-powered upscaling technology that will bring high-quality 4K gaming to mainstream players when the first Radeon 9070 series GPUs go on sale in early March," Su said.

Arguably, that's a little sooner than we were expecting. A few weeks ago, AMD's consumer CPU and GPU boss, David McAfee said that the new RDNA 4-based Radeon 9000 graphics cards "go on sale in March."

If anything, we took that to mean the end of March, given that whenever companies provide a rough launch window, the reality tends to be the end of that window. But not this time apparently. It seems the Radeon RX 9070 and 9070 XT are just a month away.

Of course, new graphics cards from AMD can't come soon enough. AMD revealed that its Gaming Graphics division continues to nosedive. "Revenue declined 59% year-over-year to $563 million. Semi-custom [console chip] sales declined as expected as Microsoft and Sony focused on reducing channel inventory," Su said, adding, "in Gaming Graphics, revenue declined year-over-year, as we accelerated channel sellout in preparation for the launch of our next-gen Radeon 9000 series GPUs."

Now if that sounds bad, it is. But here's the thing. It looks even worse when you consider that AMD reported $922 million in revenue for Q1 of 2024 for the Gaming division. And that itself was 48% down year on year. In other words, revenues are shrinking and shrinking.

In fact, it's so bad that AMD has decided to not even bother separating out Gaming Graphics as a separate entity when it reports its results. "We plan to combine the client and the gaming segment into one single reportable segment to align with how we manage the business," AMD's CFO Jean Hu said on the call.

Anyway, there are lots of positive rumours regarding RDNA 4, some even suggest it could be close to Nvidia RTX 4080 and 5080 performance. But it will need to be absolutely stellar to turn around the fortunes of AMD's Gaming Graphics. And for our money, it'll be just that—money—that dictates how well the new RX 9070 and 9070 XT do.

As I said last year, those new GPUs need to be priced extremely aggressively to make a dent in Nvidia's dominant market share. All too often, AMD launches a new GPU or family of GPUs at prices that simply aren't appealing enough in terms of the performance and feature comparison with Nvidia.

Duly, they fail to get any traction and eventually AMD drops prices to levels that would have made the cards really pretty interesting at launch. But by then, everyone has lost interest, Nvidia has acquired even more mindshare and the attention has shifted to next-gen GPUs. Rinse and repeat.

Except please let's not have a repeat. Please let's have RDNA 4 launch at a price that has us all gasping, but in a good way.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/amds-new-rdna-4-gpus-are-officially-arriving-in-early-march-and-theyll-need-to-be-stellar-to-rescue-the-companys-nosediving-gaming-graphics-division/ rTuFqZTZSavube98z7yMJX Wed, 05 Feb 2025 13:20:56 +0000
<![CDATA[ Epic talks shop about stuttering in games that use its Unreal Engine and offers solutions to the problem ]]> Mention the words 'Unreal' and 'Engine' to a PC gaming enthusiast, and they will no doubt immediately think of incredible-looking graphics, as demonstrated in the likes of Black Myth: Wukong, Senua's Saga: Hellblade II, or tech demos like The Matrix Awakens. However, there's probably one other thing that PC gamers associate with Unreal Engine and it is stutter. If you wondering just what that is and why it happens, the maker of UE has explained it all in a new blog and detailed just what it's doing to prevent it.

Let's start at the beginning. Shaders are pieces of code that are used in graphics rendering to do a very specific task. It might be just to move a triangle about a screen or it could be for something such as calculating the colour of a pixel, based on multiple light sources, materials and other effects. In the early days of shaders, they used to be small, but in today's games, they're very large and complex.

Unreal Engine is a game development tool that you program in C++ or via its built-in visual scripting system, called Blueprints. To make it much easier to code graphics, an API (application programming interface) such as Direct3D or Vulkan is used, which requires shaders to be written in a standard, common format.

However, the GPU in your graphics card can't use this format directly, so its drivers need to convert the code into something it can process, which is often unique to that hardware. This is generally called shader compilation and, in a modern, complex 3D game, there are tens of thousands of big shaders that need to be compiled.

If the game needs to render something and the required shaders haven't yet been compiled, then it will literally halt until that's done. This momentary pause is experienced in the form of stutters, as the game flips between doing its thing and compiling shaders. It won't happen if they're already compiled, of course, but given how big some shaders are, the stuttering can be quite bad. For Unreal Engine, as Epic describes it in more detail:

"Rendering an object usually involves several shaders (e.g. a vertex shader and a pixel shader working together), as well as a number of other settings for the GPU: culling mode, blend mode, depth and stencil comparison modes etc. Together, these items describe the configuration, or state, of the GPU pipeline.

Some settings influence the executable shader code, so there are cases when the driver can only start compiling shaders when the draw command is processed. This can take tens of milliseconds or more for a single draw command, resulting in very long frames when a shader is used for the first time—a phenomenon known to most gamers as hitching or stuttering.

Modern APIs require developers to package all the shaders and settings they will use for a draw request into a Pipeline State Object [PSO] and set it as a single unit. Crucially, PSOs can be constructed at any time, so in theory engines can create everything they need sufficiently early (for example during loading), so that compilation has time to finish before rendering."
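The stall is easier to picture with a toy model. The sketch below is emphatically not Unreal Engine or driver code, just a few illustrative lines showing why compiling a pipeline state the first time a draw needs it turns an otherwise normal frame into one very long one:

```python
import time

compiled = {}                       # cache of pipeline states we've already built

def compile_pso(key):
    time.sleep(0.050)               # stand-in for a ~50 ms driver compile
    return f"pso:{key}"

def draw(key):
    if key not in compiled:         # first use: the compile happens mid-frame
        compiled[key] = compile_pso(key)
    # ... submit the draw call using compiled[key] ...

for frame in range(3):
    start = time.perf_counter()
    draw(("rock_material", "vertex+pixel", "opaque"))   # same PSO every frame
    print(f"frame {frame}: {(time.perf_counter() - start) * 1000:.1f} ms")

# frame 0: ~50 ms   <- the hitch: the pipeline is compiled inside the frame
# frame 1: ~0.0 ms  <- cached, no stall
# frame 2: ~0.0 ms
```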

Epic then goes on to explain that, prior to Unreal Engine v5.2, its recommended best practice was to 'bundle' the most commonly required PSOs into a cache that would be created as the game starts up for the first time or when a save game or level is loaded. The problem with that approach, Epic says, is that it's very resource-intensive and in games with dynamic content, it needs to be constantly updated.

The proposed fix

With the latest versions of Unreal Engine, Epic developed what it calls PSO precaching to counter the above problems.

"When an object is loaded, the system examines its materials and uses information from the mesh (e.g. static vs. animated) as well as global state (e.g. video quality settings) to compute a subset of possible PSOs which may be used to render the object.

This subset is still larger than what ends up being used, but much smaller than the full range of possibilities, so it becomes feasible to compile it during loading. For example, Fortnite Battle Royale compiles about 30,000 PSOs for a match and uses about 10,000 of them, but that’s a very small portion of the total combination space, which contains millions.

A screenshot from Black Myth: Wukong, showing maximum graphics settings (Image credit: GameScience)

Objects created during map loading precache their PSOs while the loading screen is displayed. Those that stream in or spawn during gameplay can either wait for their PSOs to be ready before being rendered, or use a default material which was already compiled. In most cases, this only delays streaming for a few frames, which is not noticeable. This system has eliminated PSO compilation stuttering for materials and works seamlessly with user-generated content."
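Sticking with the same sort of toy model (again, hypothetical code, not Epic's), precaching boils down to this: predict a plausible subset of pipeline states for each object while the loading screen is up, compile them there, and fall back to an already-compiled default material for anything the prediction misses, rather than stalling the frame:

```python
compiled = {("default_material", "opaque"): "pso:default"}   # always available

def compile_pso(key):
    return f"pso:{key}"     # stand-in for the expensive driver compile

def predict_psos(material):
    # In reality, material info, mesh type and global quality settings narrow
    # millions of possible combinations down to a handful per object.
    return {(material, pass_type) for pass_type in ("depth", "opaque", "shadow")}

def load_object(material):
    for key in predict_psos(material):        # done behind the loading screen
        compiled.setdefault(key, compile_pso(key))

def draw(material, pass_type):
    pso = compiled.get((material, pass_type))
    if pso is None:                           # prediction missed this one
        pso = compiled[("default_material", "opaque")]   # draw something now...
        # ...and queue the real PSO to compile asynchronously for a later frame
    return pso

load_object("rock_material")
print(draw("rock_material", "opaque"))    # precached, so no hitch
print(draw("water_material", "opaque"))   # a miss: default material, still no hitch
```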

However, the system isn't perfect, and Epic acknowledges that shaders not associated with materials, called global shaders (used in post-processing effects like motion blur, for example), aren't able to be fully precached like this. Compute shaders, yes, but not graphics shaders. "These types of PSOs can still cause rare one-time hitches when they are first used. There’s ongoing work to close this remaining gap in precaching coverage," says Epic.

You might already know that your GPU drivers save compiled PSOs to your SSD or hard disk, so they can be quickly used again, but while the game is running, they're also stored in memory. The PSO precaching system only keeps a portion of them in memory, as the entire cache can get pretty big, and again, this is something Epic is aware of and working to solve: "We are working on solutions for reducing the memory impact and automatically deciding when precached PSOs can be kept alive."

Star Wars Jedi: Survivor fish tank - Skoova Stev (Image credit: miHoYo)

In short, Epic seems to be saying that it knows shader compilation stuttering is a problem in games that use Unreal Engine, but it's actively working on solutions to it all. But if you've reached this point and you're thinking, 'Yeah, that's all fine and dandy, but I play games—I don't make them,' then I perfectly understand. After all, there's nothing we can do to prevent shader compilation stutter.

To that end, Epic has some clear advice to developers about what they can do to reduce the whole stuttering issue, such as using the latest version of UE and regularly profiling PSO hitches during development, but also that devs need to be aware that other things cause stuttering—gamers won't know whether that's a shader compilation problem or not, but they're likely to blame it anyway.

Your next upgrade


Best CPU for gaming: The top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game ahead of the rest.

It also recommends that developers don't turn to using DirectX 11, pointing out that shader compilation stutter will still happen with that API, and there are fewer ways to reduce the issue in the older tech. As Epic explains, the answer lies in the near-future, not the past: "We are finally reaching a point where we have a viable solution, and there are also good initiatives to address the API shortcomings, such as the graphics pipeline library Vulkan extension."

It's great that Epic is recognising that PSO compilation stutter is a serious enough problem to warrant such a detailed blog, but I can't help but feel that this is a little late in the day. After all, with so many UE games out there stuttering away already, the damage is already done.

Still, if the next generation of big-budget blockbusters is able to be shipped in a stutter-free state, then that's one less thing to worry about. And the woes of Star Wars Jedi: Survivor will be a thing of the past.

]]>
https://www.pcgamer.com/hardware/graphics-cards/epic-talks-shop-about-stuttering-in-games-that-use-its-unreal-engine-and-offers-solutions-to-the-problem/ 7ZXiMk9Y3hGWCKedFgq6LY Tue, 04 Feb 2025 17:02:08 +0000
<![CDATA[ 'We launched ray tracing and DLSS to a thud' reveals senior Nvidia suit reminiscing on the troubled launch of Nvidia's first RTX GPUs ]]> Sometimes it feels like Nvidia's dominance is so great, the company couldn't possibly care or even notice what we tiny fleas of PC gaming think. So, it's intriguing to find a senior Nvidia suit observing that the RTX 20 series of graphics cards launched to a "thud". Maybe Nvidia does see us after all.

More specifically, senior Nvidia VP Jeff Fisher says that Nvidia, "launched ray tracing and DLSS to a thud." It was, of course, the RTX 20 family released in September 2018 that introduced those technologies.

This revelation comes from the same new book on Nvidia, The Nvidia Way: Jensen Huang and the Making of a Tech Giant by Tae Kim, as the news that CEO Jensen Huang dreamt up DLSS upscaling just two weeks before it was announced but that it took six years to train up the AI model that made frame generation possible.

Kim explains that, "the problem was the GeForce RTX offered negligible gains in frame-rate performance over the previous-generation Pascal cards. And when gamers turned on ray tracing, which was supposed to be the killer new feature, the RTX cards suffered a 25 percent drop in frame rate. DLSS performed marginally better. When enabled, it allowed cards to run about 40 percent faster than Pascal, but at a noticeable loss of image quality." Ouch.

The book was written with widespread cooperation from Nvidia, including access to multiple senior Nvidia figures and Jensen Huang himself. So, while it's not Nvidia itself describing the RTX 20's performance gains as negligible, decrying the frame rate drop due to ray tracing or noting the poor image quality of early DLSS upscaling algorithms, these are sentiments that originate very close to the company.

They also make for uncomfortable reading in the context of Nvidia's very latest RTX 50 GPUs, like the RTX 5090 and RTX 5080. The latter in particular brings very little inherent performance advantage over its RTX 4080 and 4080 Super predecessors. According to official specifications from Nvidia, the same looks likely to apply to the upcoming RTX 5070 and RTX 5070 Ti.

Instead, it's the introduction of Multi Frame Generation or MFG that's doing all the heavy lifting in terms of enabling a substantial generational uplift. The problem with that, of course, is that MFG doesn't help reduce latency, it actually adds to it, albeit by an impressively tiny amount.

RTX 2080: Nvidia's RTX 20 series debuted DLSS and ray tracing to a 'thud', according to Nvidia itself.

MFG also comes with some loss of image quality, albeit again not as substantial as that associated with DLSS 1.0 upscaling. But the point is that the RTX 50 family looks a lot like the RTX 20. Its pure rendering performance advantage over the previous generation is marginal, so it's other features that it must rely on for appeal.

The problem is that the RTX 20 cards had both ray tracing and DLSS as new features, while the new RTX 50 family has to make do with just MFG as a major differentiator, though the 50-series features are far more polished. Nevertheless, has it launched to a 'thud' or has it been more generously received? Answers in a comment below!


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/we-launched-ray-tracing-and-dlss-to-a-thud-reveals-senior-nvidia-suit-reminiscing-on-the-troubled-launch-of-nvidias-first-rtx-gpus/ hwdYFgTFjou2GcTBaPdmoZ Tue, 04 Feb 2025 16:09:32 +0000
<![CDATA[ AI may move at the speed of light but Nvidia's machine learning model for frame generation took 6 years to develop ]]> Earlier today we reported how Nvidia's DLSS upscaling tech went from an idea popping out of CEO Jensen Huang's head to a SIGGRAPH keynote in just two weeks. Now it turns out it took rather longer to develop the AI model for Nvidia's frame generation technology, fully six years in fact.

Again, the revelation comes from a new book on Nvidia, The Nvidia Way: Jensen Huang and the Making of a Tech Giant by Tae Kim. Development of Nvidia's Frame Generation technology, which inserts AI-rendered game frames in between frames rendered in the traditional GPU 3D pipeline, was headed up by Bryan Catanzaro at Nvidia Research.

Author Kim says Catanzaro spent six years developing a sufficiently accurate AI model for the frame generation feature. "While we were working on it, we saw continuous improvement in the quality of results, so we kept working. Most academics don't have the freedom to work on one project for six years because they need to graduate," Catanzaro explains in the book.

Kim says frame generation and DLSS more broadly are poster examples of Nvidia's new approach to gaming graphics. While Nvidia is still committed to rolling out new GPUs on a regular basis, Nvidia Research and other groups within the company would pursue additional "moonshots" in parallel.

As we revealed in the other story, Jensen Huang instantly saw how DLSS could be used to increase the prices of Nvidia GPUs. The clever aspect of DLSS and frame gen in that regard is that it doesn't scale costs.

Nvidia can decide, for instance, to make a more complex GPU. That will not only cost more to develop, but each and every GPU will cost more to manufacture. But with DLSS and its adjacent technologies, for the most part Nvidia only has to pay to develop the algorithms which then work across a suite of different GPUs without the need for a huge investment in hardware features and without dramatically increasing the cost to manufacture GPUs.

It's a very clever approach, both in terms of increasing performance and scaling profit margins up. In that sense, it's the perfect technology. The only downside for us gamers is that we can see all the money Nvidia is making at the same time as only releasing incremental increases in traditional GPU raster performance. In an ideal world, Nvidia would put just as much effort into scaling the traditional rendering hardware as it does into these new AI-accelerated features.

However, the new RTX 50 series hasn't been a huge step forward in raw performance, instead relying on Nvidia's new Multi Frame Generation technology to boost performance using AI. That's good for Nvidia's bottom line, no doubt, but we'd much prefer to see the underlying capability of Nvidia's GPUs scale a bit more, too.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/ai-may-move-at-the-speed-of-light-but-nvidias-machine-learning-model-for-frame-generation-took-6-years-to-develop/ tkGE9pKgcurqvKQzZ6nnng Tue, 04 Feb 2025 14:48:02 +0000
<![CDATA[ 'Put them on the slides': How Jensen Huang invented and then announced Nvidia's DLSS AI upscaling tech in just two weeks ]]> Before and after upscaling. Arguably, that's the most critical dividing line in gaming graphics hardware of the past 20 years. And it was, of course, Nvidia's DLSS upscaling that created that inflection point on release in 2019. But now it turns out DLSS went from an idea to a new feature announced to the public in just two weeks. And it was all Nvidia CEO Jensen Huang's idea, apparently.

If that's an eye-popping revelation, the fact that Nvidia also saw DLSS as a tool to charge more money for graphics cards is less surprising. But hold that thought. All of this comes courtesy of a new book on Nvidia, The Nvidia Way: Jensen Huang and the Making of a Tech Giant.

Author Tae Kim reveals that Nvidia CEO Jensen Huang invited executives to pitch ideas for his SIGGRAPH 2018 keynote just two weeks before the show. Huang was looking for something to really blow the audience away.

One suggestion was a new DLAA or Deep Learning approach to anti-aliasing. But it wasn't enough for Huang. "A better-looking picture is not going to sell many GPUs," he is said to have mused.

But the suggestion gave Huang an idea. "Instead of deep-learning anti-aliasing, which improved already great images, what if they could use Tensor cores to make lower-end cards perform as well as the top of the line?" Kim says, explaining Huang's reasoning.

"Nvidia could use the image-enhancement function to sample and interpolate additional pixels, so that a card designed to render graphics natively at 1,440p resolution, also known as 'Quad HD,' could produce images at the higher-resolution 4K, "Ultra HD," at a similar frame rate. AΙ would be used to fill in details to take the lower-resolution 1440p image to a higher-resolution 4K image," Kim reveals.

"What would really help," Kim says Huang exclaimed, "is if you could do deep-learning super sampling. That would be a big deal. Can you do that?" Apparently, Nvidia Research's Aaron Lefohn replied that it might be possible.

A week later, Lefohn told Huang that their early results with this deep learning super sampling, or DLSS, were promising. "Put them on the slides," Huang said. Thus, "Jensen had come up with DLSS on the spot," Kim claims and DLSS duly got its very first mention in the SIGGRAPH keynote.

"The researchers had invented this amazing thing, but Jensen saw what it was good for. It wasn't what they had thought," senior Nvidia researcher David Luebke told Kim. "It shows what a leader Jensen is and how technical and smart he is."

That is quite a story. Did it happen just like that? Did Huang really dream up in an instant what has become arguably the most important differentiating feature for Nvidia graphics cards in the last 20 years or so?


It seems almost too neat, too flattering to the great leader to be true. But maybe it is. What's easier to go along with is that Huang saw the money-making potential of DLSS just as quickly.

"He had seen the promise inherent in one technology and transformed that promise into a new feature with a better business case. Now, if DLSS worked, the company's entire product lineup, from the low end to the high end, would become more proficient, and thus valuable, allowing Nvidia to charge higher prices," Kim says.

Anywho, even if The Nvidia Way: Jensen Huang and the Making of a Tech Giant broadly paints a very flattering picture of Nvidia and Jensen Huang in particular, it's still a fascinating read for anyone interested in PC technology. Even if DLSS's inception wasn't exactly as Kim describes, the notion that it went from a mere idea, however that idea was initiated, to a feature being discussed in a keynote within a fortnight is quite the revelation.

]]>
https://www.pcgamer.com/hardware/graphics-cards/put-them-on-the-slides-how-jensen-huang-invented-and-then-announced-nvidias-dlss-ai-upscaling-tech-in-just-two-weeks/ 3kcng2QgdBq8aZAN5LnPJo Tue, 04 Feb 2025 11:57:21 +0000
<![CDATA[ Spider-Man 2's PC launch woes are woe-ier than you think, as it looks like GPU-powered DirectStorage is hobbling performance ]]> The newly released PC port of Marvel's Spider-Man 2 hasn't enjoyed the smoothest of launches, thanks to bugs and performance problems. And now it seems that one aspect of the conversion that is making things worse is the use of DirectStorage, with the system sucking up all-important GPU resources.

Spider-Man 2 was ported by Nixxes Software, which has a pretty stellar track record for converting Sony's best-sellers. Well, up until now, that is. The Dutch software company's portfolio of…err…ports is second to none: Marvel's Spider-Man and Spider-Man: Miles Morales; Ratchet & Clank: Rift Apart; Horizon Zero Dawn Remastered and Horizon Forbidden West. Oh, and the entire rebooted Tomb Raider trilogy.

So what's gone wrong with Spider-Man 2? According to tech channel Compusemble on X, the issue seems to be the use of DirectStorage. This is a Microsoft API designed to reduce CPU overhead when transferring thousands of compressed files from an SSD to a graphics card's VRAM. Instead of handling each file serially, DirectStorage batches requests so the CPU spends far less time per transfer and uses system memory more efficiently.

Nixxes has plenty of experience with DirectStorage, as it has used the API before in its other Sony ports. However, Compusemble points out one important difference: Spider-Man 2 is using the GPU decompression option in DirectStorage, something that most of the other ported games don't have enabled (the exception being Ratchet & Clank, at high graphics settings).

Many games store all the assets in a compressed format, to make it faster to download and to take up less space on the SSD or hard drive. However, for the GPU to use them, they need to be decompressed and that job is traditionally handled by the CPU. DirectStorage, though, has the option to let the GPU do it instead, via a system called GDeflate.
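For the curious, here's roughly what asking DirectStorage to do GPU decompression looks like from a developer's side. This is a hedged sketch based on our reading of Microsoft's public DirectStorage 1.1 samples, not anything from Nixxes' actual code; the LoadCompressedAsset helper, the file path and the size parameters are illustrative stand-ins.

// Hypothetical sketch: queue a GDeflate-compressed asset load with GPU decompression.
// Assumes a pre-created D3D12 device and destination buffer; error handling omitted.
#include <dstorage.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void LoadCompressedAsset(ID3D12Device* device, ID3D12Resource* destBuffer,
                         const wchar_t* path, UINT32 compressedSize,
                         UINT32 uncompressedSize)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(path, IID_PPV_ARGS(&file));

    // One queue per source type; supplying the D3D12 device is what lets
    // DirectStorage decompress GDeflate payloads on the GPU.
    DSTORAGE_QUEUE_DESC queueDesc = {};
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // The request says: read these compressed bytes from disk, decompress them
    // as GDeflate, and write the result straight into a GPU buffer.
    DSTORAGE_REQUEST request = {};
    request.Options.SourceType        = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType   = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Options.CompressionFormat = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE;
    request.Source.File.Source        = file.Get();
    request.Source.File.Offset        = 0;
    request.Source.File.Size          = compressedSize;
    request.UncompressedSize          = uncompressedSize;
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = uncompressedSize;

    queue->EnqueueRequest(&request);
    queue->Submit(); // work is batched; completion is signalled via a fence or status array
}

The design trade-off in that last option is exactly the one at issue here: the decompression work has to run somewhere, and GDeflate shifts it onto the same GPU that's already busy rendering the game.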

Letting the GPU handle decompression is fine if the graphics processor has lots of spare performance to allocate to the task, but not if it's already tied up handling the normal rendering duties. This is why Nixxes didn't use GPU decompression in Ghost of Tsushima, for example, so I'm a little puzzled as to why it thought it would be fine in Spider-Man 2, a game with ray tracing options and a fast-moving open world.

Now, you might think that this is an easy fix. All Nixxes has to do is release a patch that disables the use of GPU decompression, right? That's possibly all it may take but equally, disabling GDeflate might cause significant problems on gaming PCs with weaker CPUs, or there might be sections of the game that are already very demanding on the CPU—handing the decompression back over to the central processor might just make those parts even worse.

The PlayStation 5 doesn't suffer from this issue because it has dedicated hardware for handling asset transfers and decompression, whereas the humble gaming PC doesn't. Developers have to make do with using the CPU and GPU, but given how capable such chips are these days, surely it shouldn't be this bad?

Perhaps the more important question to ask is whether this is a sign of Nixxes Software being stretched too much on porting projects, as it's done seven such tasks in a little over two years. And there's the fact that Spider-Man 2 only appeared on the PS5 in October 2023, so if one assumes that Nixxes started the port soon after, it was having to do that at the same time as converting Ghost of Tsushima and the two Horizon games.

I'm confident that the problem will eventually be solved but, given Sony's push to have all its big releases coming to PC as soon as possible after their initial PS5 launch, I do wonder if Nixxes will have to say no to some projects, simply because it doesn't have the capacity to do them properly. And if Spider-Man 2 is anything to go by, that might be a very sensible decision.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
https://www.pcgamer.com/hardware/graphics-cards/spider-man-2s-pc-launch-woes-are-woe-ier-than-you-think-as-it-looks-like-gpu-powered-directstorage-is-hobbling-performance/ DUBZkZHrWcqBVtS9bKyNhj Mon, 03 Feb 2025 13:29:03 +0000
<![CDATA[ UK retailer expects more RTX 5090 stock in '3 to 16 weeks', so, err, adjust your diaries accordingly ]]> If you're in the UK and currently hunting for an RTX 5090, you've probably noticed that stock is... limited, at best. The good news is, Overclockers UK has updated its X account with news of incoming stock. The bad? The estimates for new RTX 5090s available for sale are given as between three and 16 weeks—which is about as wide a margin as I've seen related to GPU stock, at the very least.

It's not just the RTX 5090 with a potentially long lead time, either (via Notebookcheck). The RTX 5080 has an ETA of two to six weeks, which is also a fairly substantial wait to buy a now-launched GPU from one of the UK's largest PC hardware retailers.

It's not Overclockers UK that's at fault here, just to be clear, nor is the UK unique in its absence of RTX 50-series GPUs.

We've been keeping track of where to buy the RTX 5090 and where to buy the RTX 5080 respectively, and it's been a struggle finding available stock in both the UK and the US no matter which retailer you looked at, although we've certainly found a few next-generation Nvidia cards for sale.

Newegg says that its own stock of RTX 5090 and RTX 5080 cards sold out in record time, with the retailer claiming an 'eight to ten times increase' in traffic over the course of the launch.

So, stock is limited worldwide, prices are high, and even those that camped out in tents for days before the launch were likely to have ended up disappointed. Still, it was a chance to meet up with your fellow PC hardware enthusiasts and eat a hotdog in a parking lot, at the very least.

Just for funsies, 16 weeks is the rough gestation period of a pig. So if Overclockers UK's longest estimate is correct, UK buyers could also experience the joy of piglets in four months' time alongside their shiny new piece of graphics hardware—if a mummy pig and a daddy pig loved each other very much at this exact moment.


At least we've all learned something. It's unclear when the current RTX 50-series stock shortages will let up, but if I were a betting man I'd say these cards are going to be hard to get hold of for some time to come.

Those of you holding out for the fastest of Nvidia's new offerings may have to wait a while longer yet, but our best graphics card deals page is stuffed full of GPUs you can actually buy right now.

Worth a look if you really need a new graphics card this minute, isn't it? Looks like you might be hanging around until the pigs come home, otherwise.

]]>
https://www.pcgamer.com/hardware/graphics-cards/uk-retailer-expects-more-rtx-5090-stock-in-3-to-16-weeks-so-err-adjust-your-diaries-accordingly/ LjM3XQzQGDBaLxYuXYcdCk Mon, 03 Feb 2025 13:20:50 +0000
<![CDATA[ Put away your pitchforks, the first report of an RTX 50-series graphics card melting a cable was actually down to an ol' RTX 4090 under load ]]> The RTX 5090 and RTX 5080 are power hungry beasts, but the first report of a melting power connector was actually caused by an RTX 4090, likely due to an improperly seated power cable.

Hong Kong media outlet PCM initially reported that its testing of an RTX 5090D and RTX 5080 resulted in melted cables, with a picture of two 600 W 12VHPWR connectors looking worse for wear.

However, the outlet later clarified that a clear burn mark was present on the back of an RTX 4090, while the RTX 50-series cards appeared unscathed (via Videocardz). That suggests user error when installing the power connectors on the RTX 40-series card, evoking memories of the whole Meltgate debacle regarding melting power connectors on the RTX 4090's initial release.

The original 12VHPWR power connector was blamed for overheating and melting under heavy workloads due to improper seating, before Nvidia switched to a revised 12V-2x6 connector with shorter sensing pins and longer conductor terminals, which is less prone to incorrect installation.

Since then, several cable manufacturers have also released angled connectors that put less stress on the connection point when wrangled into a tight case, while MSI has gone one step further and released a yellow-tipped cable to make it clearer when a power connector is correctly installed.

If you see yellow, don't be mellow. Something like that, anyway. Still, it looks like PCM has fallen victim to a slightly misaligned cable connection on the older card, and paid the smoking-hot connection price.

As we found in our testing, the RTX 5090 is a particularly thirsty card, drawing up to 637 W peak power under a 4K native Metro Exodus torture test. Still, we've had no issues with the power connector even at those lofty power draw heights, although we did experience a lot of coil whine using a Seasonic Prime TX 1600 W PSU.

However, this was mostly mitigated by swapping the power supply out for a 1600 W EVGA model, and the card has performed pretty much flawlessly otherwise.

So while there's a fair bit to critique the RTX 50-series for currently (not least that they're expensive, and difficult to find in stock even if you can stump up the cash), melting power connections don't seem to be an issue so far. Let's hope it stays that way, too.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/put-away-your-pitchforks-the-first-report-of-an-rtx-50-series-graphics-card-melting-a-cable-was-actually-down-to-an-ol-rtx-4090-under-load/ vzsxGApfkNEctoUeUBLNyc Mon, 03 Feb 2025 12:20:09 +0000
<![CDATA[ eBay users are getting back at graphics card scalping bots by listing pictures of the RTX 5090 for $2000, occasionally framed ]]> Another graphics card launch filled with bots, scalpers, and 300% markups. Nvidia's hotly anticipated GPUs, the RTX 5090 and RTX 5080, went on sale this week and scalpers have bought as many as they can to sell on for higher prices. Yet there are some seemingly looking to stop them from getting away with it.

To catch out any bots hoovering up these sales, eBay users are now posting fake listings for these graphics cards to trick them.

In both the US and the UK, you can find tonnes of listings for what appear to be RTX 5090s at around the card's MSRP. Many of these also come with a light warning to 'read description' at the end. This clues you in to the strange game being played here.

In the description sits a disclaimer that these sellers are actually emailing pictures of RTX cards to unsophisticated bots or even more unsophisticated scalpers, people looking to take advantage of the low supply, high demand situation to resell graphics cards for a pretty penny.

Many of these listings won't physically send you the picture. It's just a jpeg in an email, and not even an NFT (like that'd be worth more anyways). However, one listing says: "the photo detentions [dimensions] is 8 inches by 8 inches, I got the frame from Target. DO NOT BUY IF YOU’RE A HUMAN".

A listing on eBay of an RTX 5090 graphics card, warning the reader that it's an image of the card, and not the card itself

(Image credit: eBay)

Though I'm tempted to say that none of this amounts to more than just annoying scalpers, some like this $1,200 drawing of an RTX 5090 actually sold a unit in the last 24 hours. This $1,900 entry has also managed to make a sale. Surely these are not purchases made by humans, right? So maybe some bots are out there trying to snap up cards on eBay.

Some listings have got even more creative, like this one that is effectively someone throwing away their clutter. "You will receive a random item in the mail from a donation store. it will be either a book, a water bottle, or whatever else."

No, I don't think "whatever else" entails Nvidia's latest graphics card.

This new scheme is so effective that, when I search for an RTX 5090 on eBay right now, there is not a single genuine 5090 posting on the entire first page. Though the idea of a scalper losing out on money doesn't exactly make me feel very sympathetic, I do wonder how much of the claim that these listings are for bots is actually genuine. A big sign saying 'read description' is a tell that what you're buying isn't real but many of the listings don't even go this far.

I've seen multiple in my search (like this one) that clarify it's a picture in the description but not on the listing title. Even then, it's understandable why a potential buyer might not get all the way to the bottom of the description to see 'This beautifull piece of technology is just not beautifull in you pc but on your wall so we off this photo you can buy for on your wall.'

Though we can see the plans forming over on Reddit. I've seen some commenters creating mockups of listings to encourage others into making their own. Some are using VPNs and burner laptops to post as many listings as possible.

This is not the first graphics launch to elicit this type of posting but it's the first I've personally seen with this degree of prevalence. The RTX 30-series sold out very quickly as it not only launched during the pandemic, which led to chip shortages, but cryptocurrency mining became especially popular around this time, increasing demand.

Scalpers won't get much sympathy from me, especially when they are trying to sell the card for almost $8,000.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/ebay-users-are-getting-back-at-graphics-card-scalping-bots-by-listing-pictures-of-the-rtx-5090-for-usd2000-occasionally-framed/ khi9C98dibj6qWWx6K4EoW Fri, 31 Jan 2025 14:52:03 +0000
<![CDATA[ Intel nixes its next-gen AI GPU but still has plans to take on Nvidia ]]> Fair to say Intel's GPU plans don't always go to, er, plan. During Intel's latest earnings call for highly-remunerated bean counters, the company's new interim co-CEO let slip that its upcoming next-gen AI GPU has effectively been cancelled.

Known as Falcon Shores, it was supposed to replace Intel's Gaudi 3 chip. But no more. "Based on industry feedback, we plan to leverage Falcon Shores as an internal test chip only without bringing it to market," Michelle Holthaus said.

Instead, Intel has switched its focus to the generation of AI GPU that comes next, Jaguar Shores. "This will support our efforts to develop a system-level solution at rack scale with Jaguar Shores to address the AI data center," Holthaus said.

Our understanding is that the problem with Intel's AI GPUs is that they're all a bit bare bones as an offering. Where Nvidia will sell you a fully built up rack of GPUs with fancy interconnects, Intel doesn't have the same mature and complete product. Oh, and there's the minor matter of CUDA, Nvidia's software interface that allows developers to code in existing computer languages and port right over to Nvidia's GPUs.

"One of the things that we've learned from Gaudi is it's not enough to just deliver the silicon. We need to be able to deliver a complete rack scale solution, and that's what we're going to be able to do with Jaguar Shores," Holthaus explained.

Without necessarily wanting to dance on Intel's grave with this one, let's just say the company doesn't exactly have the best record when it comes to GPUs. Who can forget Larrabee, the chip with zillions of tiny x86 cores that was cancelled back in 2010 but might have put Intel, and indeed its recently departed CEO Pat Gelsinger, who championed the Larrabee project, on a completely different path?

Intel's Arc GPUs haven't exactly been a roaring success, either, even if its latest Intel Arc B580 board is a clear step forward. So, it would be a brave call to predict that Jaguar Shores will soon be chasing Nvidia's H100 and B200 GPUs out of AI data centers any time soon.

In the meantime, ye olde Linux patch text strings have outed what appear to be references to Intel's next-gen Celestial gaming GPU in high-performance discrete format, which tallies with recent comments from Intel's main graphics rep Tom Petersen that the hardware for Celestial is basically done and dusted.

So, Intel's GPU efforts rumble on. Eventually, maybe one of them will really land. We genuinely hope so, it would be great for competition.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/intel-nixes-its-next-gen-ai-gpu-but-still-has-plans-to-take-on-nvidia/ fEKxQ93LN6cwKXLaY7oJdd Fri, 31 Jan 2025 14:45:52 +0000
<![CDATA[ Nvidia's latest 572.16 driver isn't just for the RTX 50-series, as 20, 30 and 40-series cards are also gifted with significant updates ]]> What a couple of days. The build-up, the sleeping in tents, the stock and pricing anticipation, then the actual RTX 5080 and RTX 5090 launch, followed of course by all those gorgeous, hefty cards being snatched up so quick that the stocks hit zero quicker than you could say "yes, I'd like an RTX 5080 please". But fear not, for in lieu of RTX 50-series stocks, we now have the latest GeForce driver release that should give a nice boost to more than just the latest generation of Nvidia GPUs.

In addition to enabling DLSS 4 Multi Frame Generation (for those whopping 4x fps results) in 75 games for RTX 50-series cards, Nvidia says the new Nvidia app update, which should get you on the latest 572.16 driver, also gives RTX 40-series cards enhanced DLSS frame generation.

With Blackwell, Nvidia's also changed how it approaches frame generation, and our Dave noted in his RTX 5090 review that this change allows for lower VRAM use and faster execution—plus flip metering, which ensures all generated frames are spaced evenly between each other, leading to a smoooother experience.
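To put a rough, illustrative number on what 'evenly spaced' means here (our arithmetic, not Nvidia's documentation): if the GPU renders real frames 40 ms apart (25 fps) and Multi Frame Generation is inserting three AI frames between each pair, ideal flip metering would present a frame roughly every 10 ms, rather than dumping all four frames in a quick burst followed by a long gap. The average frame rate is the same either way; the pacing is what makes it feel smooth.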

But fear not, 30-series and 20-series gamers. We older-gen folk—me with my RTX 3060 Ti—should now be entitled to a dose of enhanced ray reconstruction, super resolution (in beta), and DLAA (also in beta). Of course, only when these options are enabled in-game.

The new app update will also allow you to override in-game frame generation to the latest version in many games, even if it's not been implemented as an option yet. Of course, this won't allow a 40-series card to use multi-frame gen like the 50-series, but it'll upgrade regular ol' Frame Generation to the latest model.

There are several other improvements, too, such as to Video Super Resolution (VSR), which upscales videos, with the new version "using up to 30% fewer GPU resources at its highest quality setting, allowing more GeForce RTX GPUs to enable it".

Oh, and I almost forgot to mention—RTX 50-series envy and resentment, perhaps—you lucky, lucky RTX 5080 or RTX 5090 owners will also have access to 'Smooth Motion', which is driver-based frame interpolation. In other words, it adds a generated frame between frames just like the first generation of Frame Generation, but it's done on the driver level which means it can be enabled in games that don't officially support it.

Nvidia explains: "To enable NVIDIA Smooth Motion, select a compatible DirectX 11 or DirectX 12 game in Graphics > Program settings. Scroll down the list of options on the right to reach “Driver Settings”, and switch Smooth Motion on."

As for me, with my lowly RTX 3060 Ti, I'm still happy. I know I'll be trying out all the new DLSS enhancements over the weekend. Crab Champions, here I come.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/nvidias-latest-572-16-driver-isnt-just-for-the-rtx-50-series-as-20-30-and-40-series-cards-are-also-gifted-with-significant-updates/ A2aVc8SBuUUViqb6YTWYub Thu, 30 Jan 2025 17:31:11 +0000
<![CDATA[ Asus says its easy-peasy PCIe slot causes 'no damage that would affect functionality' following claims of excessive wear, but says it'll cover any damage ]]> Following last week's report of excessive wear on a graphics card's contact pins, suggested to be the result of using an Asus motherboard's quick-release PCIe slot, Asus has released a statement. In it, the company suggests no actual damage is likely to occur as a result.

The issue, as one PC builder explained on bilibili, was that the PCIe Q-Release Slim slot on one of Asus' latest motherboards had caused a large amount of wear to the gap in between contacts on a connected GPU.

Above: the mechanism in action.

The specially designed slot lets you remove a card by lifting the IO side of the GPU first; that tilting motion levers the card against the retainer clip that holds it in place, releasing the latch so the card can be pulled free.

Following its own testing and evaluation of an "extremely small number of cases reported", Asus says it does not expect any damage to occur when using motherboards with its PCIe Q-Release Slim slots.

Here's the important bit of the statement it sent to Wccftech:

"In our internal testing and evaluation of the extremely small number of cases reported, we found no damage to the motherboard or graphics card that would affect functionality and or performance. However, it is important to emphasize that any type of PCIe add in card will exhibit signs of usage and wear marking after 60 continuous insertions and removals. Additionally, if the installation and removal are not performed according to the manufacturer's recommendations, the likelihood of scratches and or wear may increase.

"Despite this, we have found no impact on the functionality of either the graphics card or the motherboard."

The 60 insertions and removals is a good number to know. In our Andy's original report, he questioned the number of cycles a PCIe slot is rated to, and how many would be required to cause some sort of wear. It clearly took a lot of cycles to wear a PCB as shown in the bilibili post, but how many is too many? 60 seems like a relatively decent number, as most PC gamers are unlikely to get anywhere close to that in the lifetime of a gaming PC.

What's slightly more reassuring is that Asus, in this statement, claims it will take "full responsibility" for any "specific issues or abnormalities" that may occur from the quick release slot. Remember that, if you ever need to make a claim.

Asus is keen to point out, though, that you should follow its guidelines every cycle and not apply excessive force to ensure coverage… though how it proves any of that by the time a claim rolls around, I don't know.


What I'm wondering is whether Asus plans to shake up this design with future motherboards. I have an older Z690 motherboard in my test bench that has a quick-release PCIe slot, though this uses a simple metal wire and pulley system.

It works like this: When you press the button the metal wire tugs on the retainer clip, which releases it and allows the graphics card to come away with a light tug. It seems to work just as well, to be honest, and as far as I can see wouldn't wear the graphics card in any way beyond the normal wear and tear—of which there will be some.

In all my years benchmarking the same graphics card over and over again, I've never seen any noticeable signs of wear on the contact pins. Maybe some light scratching from dust in the slot, but nothing I would consider detrimental to the card's longevity.

So, hey, I probably wouldn't worry about this too much, not unless you're planning to replace your GPU every other day.

]]>
https://www.pcgamer.com/hardware/graphics-cards/asus-says-its-easy-peasy-pcie-slot-causes-no-damage-that-would-affect-functionality-following-claims-of-excessive-wear-but-says-itll-cover-any-damage/ ptGhNhHRZpbC7QRNCMWz5D Thu, 30 Jan 2025 16:47:11 +0000
<![CDATA[ Color me not-shocked: RTX 5090 and RTX 5080 go out of stock across the US and UK in 5 minutes ]]> Did you manage to grab that RTX 5090 Founders Edition card you covet? I'm guessing probably not, considering all the RTX 5090 and RTX 5080 cards theoretically on sale today went out of stock in under five minutes.

I was trying to cover the RTX Blackwell launch across the US and UK today, along with the rest of the PC Gamer team, and there we were at 6 am PT, 2 pm UK time, clicking on links and trying to see if we could snaffle new graphics cards into our online baskets as soon as the launch time was upon us.

All to no avail. I managed to squeeze into a queue at Best Buy to stick a Founders Edition of the RTX 5090 into my basket for a few minutes before it kicked me out with an error message.

And every other card we could find on Newegg and Best Buy was gone the instant we clicked on it. Over at Walmart, there was one lonely PNY RTX 5080, but he got snapped up real quick, too.

Even the shockingly over-priced versions, with RTX 5080s costing the same as the RTX 5090 MSRP, had gone out of stock. Though I hope that's just some sort of listing error and someone hasn't just paid $2,000 for something that is only marginally faster than an RTX 4080 Super.

It was the same situation over in the UK, where stock was either gone in a few minutes or the retailer sites just completely fell over. Some places had pages claiming stock, but if you clicked through they either fessed up that stock had gone, or there was no way to add a card to your cart anyways.

By my reckoning, it was all done and dusted by 6.05 am PT, 2.05 pm UK. And realistically probably earlier than that—it was just that's when the sites actually caught up with the fact the cards had been snapped up.

So, is this a success? I guess from the point of view of everyone getting shot of all their stock, it will be seen as a win by both Nvidia and the retailers. How happy anyone who actually wanted a new RTX Blackwell GPU is right now, well, that might be a different matter.

Was it high demand, or low stock? All the pre-release noise was about either low volumes of chips going to manufacturers or low numbers of cards going to retailers. Neither paints a particularly rosy picture of digital shelves packed with new cards. Which is a shame considering Nvidia was reportedly able to choose whenever it wanted to release this new generation of graphics cards, and could surely have picked a point in time where it had managed to put more stock into the channel.


Though would there ever be enough stock in the channel? Is it that much better to have sold out in ten minutes over five? There will still be a whole lot of disappointed folk still waiting on restocks.

I would guess Nvidia catching up on orders of its big ol' Blackwell server GPUs took more priority than the GeForce GPUs. They were all kinda on the same TSMC 4N node, so maybe there was an element of making sure those datacenter customers were looked after first.

Still, there will likely be more stock winding its way to retailers soon enough, and we'll have new RTX 5070 and RTX 5070 Ti cards coming our way next month, too. So keep an eye on our 'Where to buy...' pages if you're in the market for one of the new cards.

Just please stay away from eBay, eh?

]]>
https://www.pcgamer.com/hardware/graphics-cards/color-me-not-shocked-rtx-5090-and-rtx-5080-goes-out-of-stock-across-the-us-and-uk-in-5-minutes/ YPMmWZmHJ5PyFmqozCpPrV Thu, 30 Jan 2025 15:54:30 +0000
<![CDATA[ DLSS 4's announcement may have convinced me to switch from AMD to Nvidia for the next generation of GPUs, and I doubt I'm the only one ]]>
Andy Edser, hardware writer

Andy Edser, terrifying in blue

(Image credit: Future)

This month I've been testing: Handhelds and headsets. Both of which appear to be getting bigger as time goes on. It won't be long now before I strap a pair of speakers to the top of my head, stick a slab of granite in my backpack, and merrily head off to work—with my spine complaining all the way.

If there's one thing I've never quite understood about the PC hardware community, it's the tribalism evident when it comes to GPU choices. I've never much cared for being team red, team green, or team blue for that matter. Give me the best bang for my buck, and a fraction of my poorly-managed finances are yours.

Which is why I currently run an RX 7800 XT in the PCIe slot of my main PC. It wasn't the chiplet-based design that won me over, nor the desire to stick one to the big green man and buy the underdog choice. Simply put, I wanted great 1440p performance on a limited budget—and as a result I opted for an XFX version of AMD's mid-range card.

My other option was the RTX 4070, a card I've often used for performance comparisons and agree is also a fine choice for a reasonably-priced GPU. The RX 7800 XT was a bit cheaper though, and a bit faster in raw raster performance at 1440p, so I plumped for that with my personal cash instead.

Cut to just over a year and a half later, and I'm sat in my Vegas hotel room at CES 2025, watching the Nvidia RTX 50-series announcement along with many of the rest of you (my hardware overlord, Dave James, bravely dealt with the immense queues to see it in person). The lights go up, Jen-Hsun appears in an even shinier leather jacket, and the card rollouts begin. And it's time for me to make an admission: I scoffed a little when the RTX 5070 was revealed with claims of "4090 performance" for $549.

That'll be with a lot of upscaling help, I thought. A bit of a contentious claim, and something to potentially write up as an opinion piece later. Fake frames, man, and all that. Raw raster still counts for something, and DLSS can't really be used as a salve for all ills.

A still from a video at Nvidia's CES 2025 keynote, showing a character running through some archways in a game with DLSS 4 enabled and full RT on, achieving 146 fps.

(Image credit: Nvidia)
Nvidia RTX 50-series GPU reviews

Nvidia RTX 5090 Founders Edition graphics card on different backgrounds

(Image credit: Future)

Watch our review:
RTX 5090 video review
Read our reviews:
RTX 5090 review
RTX 5080 review

As we later found out, I wasn't entirely wrong. But then I saw the DLSS 4 demo, with all its immensely pretty ray-traced shenanigans. And the staggering frame rate boosts of Multi Frame Generation. Not to mention the claimed image quality improvements, the Neural Texturing, and all sorts of other enticing AI-based doohickeys.

Of course, it was just a demo trailer. Impossible to really judge until I saw it running in front of me for myself. But if there's one thing that's bugged me during my time with the RX 7800 XT, it's that I've been locked into using lesser upscaling tech, like FSR and XeSS.

DLSS has been better, is better, than either of those competing solutions for some time. Miles better.

I felt a twinge. It was the same twinge I've felt every time I've seen DLSS in motion for the past couple of years, the same slight regret I've felt whenever I've enabled FSR or Fluid Motion Frames on my home machine to boost the framerate in a struggling game.

FSR and XeSS work fine. But DLSS works brilliantly. And now, with all these transformer-based improvements? Nothing suggests to me that has changed. In fact, Nvidia seems to be so far ahead, the competition looks to be floundering in its wake.

There are still those among us who consider AI upscaling and frame generation to be cheating, to some degree. It'll always introduce some sort of artifacting or latency, they argue, and it's never quite as good as raw raster. I like my coffee black and my meat raw, those sorts of people.

For a long time, I was one of them. But now I think it's time to admit that, far from being a lesser, frame-boosting quick fix for performance-related woes, it's now an integral part of PC gaming. Hell, gaming in general. Complaining about it at this point feels a little like King Canute shouting at the tide. Get those AI kids off my lawn, and all that.

And in this brave new world of upscaling being more of a requirement than an option, Nvidia really does seem to be holding all the cards. Yes, FSR 3.1 isn't bad at Quality settings, but as my upscaler comparison testing shows, it's still not a patch on DLSS when it comes to image quality—especially when you push it down to Balanced or even, goodness forbid, Performance levels.

A still from the Nvidia CES 2025 Nvidia announcement for DLSS 4, showing a vase sitting on a set of stairs with fabric flowing down behind it.

(Image credit: Nvidia)

AMD says FSR 4 is coming, and it's now machine-learning based. That's probably why you'll need either an RX 9070 or RX 9070 XT to run it, as those cards presumably have some hardware on board equivalent to Nvidia's Tensor cores. Perhaps it can deliver similar framerate gains and image quality, but something in me doubts it. Nvidia has leant so heavily on the AI button for so long, I honestly can't see AMD catching up in a single swoop.

AI-based upscaling and frame generation performance now matters more than ever. Raw raster performance? It still counts, but for less. And a $549 card that can take advantage of all the DLSS 4 benefits, to the point where it can spit out hundreds of AI-generated frames at a similar rate to the RTX 4090? Yeah, that's a darn tempting package if those claims prove out—and one that AMD seems destined to struggle to beat.

The AMD Radeon RX 9070 XT and RX 9070 RDNA 4 GPUs arranged in diagonal lines, taken from a CES 2025 presentation slide

(Image credit: AMD)

We don't really know yet what the red team will be bringing to the table. We (and every other major outlet) received a slide presentation announcing the new cards, yet no mention was made of any of them at the briefing itself. We know they'll be mid-range equivalents, we know they'll make use of FSR 4, and we know what the cooler designs look like. Virtually everything else is still conjecture.

But unless FSR 4 pulls off the same frame-generating magic trick as DLSS 4—and the pricing is ultra-competitive—these new offerings look set to be a hard sell for any gamer in the face of Nvidia's claims. Even for me, sitting over here, willing AMD to provide some meaningful competition.

There's still a lot yet that might change my mind. For example, in Dave's RTX 5080 testing, he found that Multi Frame Generation struggles with latency when really pushed. Force the RTX 5080 down to 25 fps raw performance levels and the added latency becomes noticeable, which doesn't bode well for the much-less-powerful RTX 5070 being pushed into similar territory.

We'll find out just what that's like in person when our test sample arrives. The same applies to the RX 9070 and RX 9070 XT for that matter, as all is currently quiet on that front—other than suggestions they'll be here in March. But again, unless the price is phenomenally cheaper than the Nvidia equivalent (or AMD pulls an upscaling rabbit out of a hat), it really does seem like the RTX 50-series stands a good chance of dominating this generation.

For what it's worth, I hope that's not the case. I hope FSR 4 is great, the RX 9070-series is impressive, the prices are low, and this article can be shoved back in my face. Competition is a good thing, and one company completely dominating all the others benefits no-one.

As things currently stand, though, when it comes to my own cash, the RTX 5070 looks like the most likely candidate for my next GPU purchase. Well, after I've convinced my partner I need a new graphics card and we don't need to be saving for a house quite as hard as we currently are.

Ah, damn it. I'm going to be stuck with the RX 7800 XT and old-school FSR forever, aren't I?

]]>
https://www.pcgamer.com/hardware/graphics-cards/dlss-4s-announcement-may-have-convinced-me-to-switch-from-amd-to-nvidia-for-the-next-generation-of-gpus-and-i-doubt-im-the-only-one/ 6qvUL6K3xzkCUtNoScd9r8 Thu, 30 Jan 2025 15:21:48 +0000
<![CDATA[ Japanese hardware chain instantly regrets use of lottery system for RTX 5090 and RTX 5080 launch ]]> The day has finally arrived: The Nvidia GeForce RTX 5090 and 5080 are now available for purchase…I know, I make it sound so easy. We've known for a while now that launch day supplies will be extremely limited, and so retailers have tried to manage expectations accordingly. Unfortunately, even the best-laid plans frequently don't survive contact with the general public.

As is not uncommon for in-demand hardware launches, Japanese hardware chain PC Koubou attempted to deploy a lottery system to manage demand for the 50-series cards and discourage scalping. However, a number of locations are rethinking this strategy after chaos erupted at the Akihabara store during an earlier lottery (via Videocardz).

At about 15:00 local time on January 30, a raucous line crowded the Chiyoda city location and nearby streets, with customers queuing in the hopes of even getting a chance to buy one of the new 50-series cards. People were also allegedly seen climbing the fence of a nearby elementary school, defacing the establishment's sign in the process.

According to a now-deleted post on X, the Nagoya Osu location had originally planned to enter customers present in-store at 10:20 AM on January 31 into a lottery draw for the chance to buy one of the vanishingly few graphics cards (also via Videocardz). More recent posts from the Nagoya Osu store as well as the Osaka Nihonbashi location have announced a change of plans, instead shifting lottery applications online.

Most interestingly though, these statements reveal just how few of the 50-series cards would be in stock. For those hoping to buy a card from PC Koubou, there'll be 15 units of the 5090 and 113 units of the 5080 available to purchase for customers selected via the online lottery.

These numbers are in line with what retailers across the globe have been claiming in the run-up to the launch. For instance, Overclockers UK said they had a 'single digit' number of 5090 cards a week before launch, and MSI more recently stated they'd received a small number of GPUs directly from Nvidia.

This limited supply has in turn seen a rise in folks camping outside of Micro Center stores across the US, and plenty of chancing scalpers attempting to sell cards they do not have for as much as $7,000. The hope is that supply issues will ease as we get further into February and beyond, though by that time US-based PC gamers may be staring down a tariff that makes picking up a new 50-series card especially dear.

None of the above represents especially good odds if you're still hoping to pick up a 5090 or 5080 card in the very near future. If you'd like to take a tour of the stockists—and an assorted roster of 'sold out' signs—you can check out our curated lists of 5090 and 5080 sellers in the US and UK.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/ahead-of-nvidia-rtx-5090-and-5080-launch-japanese-hardware-chain-attempts-to-deploy-sensible-lottery-system-chaos-ensues-retailer-rethinks/ AY8iH8bFtApQDibYLG8aHW Thu, 30 Jan 2025 15:13:22 +0000
<![CDATA[ Nvidia RTX Blackwell release day live: where to buy the RTX 5090 and RTX 5080 for the picosecond they're in stock ]]>

Nvidia RTX 5090 Founders Edition graphics card on different backgrounds

(Image credit: Future)

Jump straight to the section you want...
1. RTX 5090 quick links
2. RTX 5080 quick links
3. RTX 5090 model links
4. RTX 5080 model links
5. Live updates

Today is RTX 5090 and RTX 5080 launch day, with both of Nvidia's first RTX Blackwell graphics cards going on sale at 6 am PT (2 pm UK). But be warned, the likelihood of actually getting hold of one today is pretty damned slim. Most of the noise from manufacturers and retailers has been about the slight stock levels they're going to have come launch day.

MSI has stated that it has received a limited supply of chips from Nvidia itself (despite having listings for huge numbers of different cards) and one of the biggest UK retailers, Overclockers, has noted that it only has a "single digit" number of RTX 5090 cards for launch.

If the volume of available cards lasts more than ten minutes past go time I will be very surprised. Very surprised.

I guess that's why there are people already camped outside of Micro Centers around the US, and you can bet folk are already lining up outside of their local Best Buy stores, too. I just wonder how many of those people are the ones who have already listed RTX 5090 cards for exorbitant prices on eBay. Those damned scalpers are going to be out in force for the RTX 50-series launch, you can be sure.

Nvidia RTX 5090 quick links

US RTX 5090 retailers:

UK RTX 5090 retailers:

Nvidia RTX 5080 quick links

US RTX 5080 retailers:

UK RTX 5080 retailers:

Nvidia RTX 5090 model links

Founders Edition

Nvidia RTX 5090 Founders Edition graphics card on different backgrounds

(Image credit: Future)

We're sure many of you will be looking to get your hands on a Founders Edition (FE) card, ie, Nvidia's reference design card, not least because this design is gorgeous.

The FE is sure to be one of the best value cards, coming in at MSRP. It's also an impressive design, as we noted in our review. It keeps the RTX 5090's colossal GPU relatively chilly, which is especially good considering it's so compact compared to previous high-end cards.

No doubt it will also be the card most in demand come launch day.

Scan has sold FE cards in the UK previously, but this time around the retailer explains: "Scan work as a fulfilment partner for NVIDIA on the FE cards. These must be bought using the links on the NVIDIA website when stock is available."

Nvidia GeForce RTX 5090 Founders Edition
US: Best Buy $1,999.99
UK: Nvidia £1,939

Gigabyte

Gigabyte's one of the biggest names in the AIB GPU space and for good reason: Many of its cards tend to offer a mainstream blend of price to performance. The company's got a fair few RTX 5090 models listed now, from air-cooled Windforce and Gaming OC ones to presumably more expensive water-cooled 'Waterforce' ones.

Gigabyte Aorus GeForce RTX 5090 Master

US: B&H Photo $TBA
UK: Scan £TBA | CCL £TBA

Gigabyte GeForce RTX 5090 Master Ice

US: Newegg $TBA | B&H Photo $TBA |
UK: Overclockers £TBA | Ebuyer £TBA | CCL £TBA

Gigabyte GeForce RTX 5090 Gaming OC

US: -
UK:
Scan £TBA | CCL £TBA

Gigabyte GeForce RTX 5090 Windforce OC

US: Newegg $TBA | B&H Photo $TBA
UK: Overclockers £TBA | Ebuyer £TBA | Currys £TBA | CCL £TBA

Gigabyte GeForce RTX 5090 Aorus Xtreme Waterforce WB

US: Newegg $TBA | B&H Photo $TBA
UK: Ebuyer £TBA | CCL £TBA

Gigabyte GeForce RTX 5090 Aorus Xtreme Waterforce

US: B&H Photo $TBA
UK:
Overclockers £TBA | Ebuyer £TBA | CCL £TBA

Asus

Asus has a number of popular GPU lineups making a return for the RTX 50-series. In particular, many of you will be happy to see the usually reasonably-priced, mechanico-understated TUF Gaming lineup rearing its head for the RTX 5090. And that's both OC and non-OC versions. Plus there are some liquid-cooled options popping up here and there, now, too, designated by the 'LC' nomenclature.

Asus ROG Astral GeForce RTX 5090

US: Newegg $TBA
UK:
Overclockers £TBA | CCL £TBA

Asus GeForce RTX 5090 TUF Gaming

US: B&H Photo $TBA | Newegg $TBA
UK:
Scan £TBA | Overclockers £TBA | CCL £TBA

Asus ROG Astral GeForce RTX 5090 OC

US: B&H Photo $TBA
UK:
Scan £TBA | Overclockers £TBA

Asus GeForce RTX 5090 TUF Gaming OC

US: Newegg $TBA
UK: Overclockers £TBA | CCL £TBA

Asus GeForce RTX 5090 ROG Astral LC

US: -
UK: Overclockers £TBA | Ebuyer £TBA

Asus GeForce RTX 5090 ROG Astral LC OC

US: -
UK: Overclockers £TBA | Ebuyer £TBA

MSI

MSI usually has a ton of card designs, and many RTX 5090 designs are already showing up on retailer sites. Of course there are fan favourites such as the Gaming Trio OC, but there are other options such as my personal favourite, the Suprim (non-liquid). Plenty to choose from. (Note that if you buy directly from MSI, according to a Discord update, shipping will only begin on February 6.)

MSI GeForce RTX 5090 Gaming Trio OC

US: Newegg $TBA | B&H Photo $TBA
UK: Scan £TBA | Overclockers £TBA | CCL £TBA

MSI GeForce RTX 5090 Suprim SOC

US: B&H Photo $TBA
UK:
Scan £TBA | Overclockers £TBA | CCL £TBA

MSI GeForce RTX 5090 Suprim Liquid SOC

US: Newegg $TBA | B&H Photo $TBA
UK:
Scan £TBA | Overclockers £TBA | CCL

MSI Vanguard GeForce RTX 5090

US: Newegg $TBA | B&H Photo $TBA
UK: CCL £TBA | Currys £TBA

MSI Ventus GeForce RTX 5090 3X OC

US: Newegg $TBA | B&H Photo $TBA
UK:
Scan £TBA | CCL £TBA

The rest...

It's not all about MSI, Gigabyte, and Asus, even if these are the most recognisable names. Zotac, PNY, and others have released plenty of fantastic graphics cards over the years, so it's worth keeping an eye out for their takes on the RTX 5090. Unfortunately, there aren't many cards from these manufacturers listed in the US right now, but there are more listed in other locations such as the UK.

Nvidia RTX 5080 model links

Founders edition

Nvidia RTX 5080 Founders Edition graphics card from different angles

(Image credit: Future)

Founders Edition (FE) cards are Nvidia's own reference design ones, and the 50-series ones look stunning. So, we have no doubt that many of you will be looking to get your hands on an RTX 5080 FE.

Not only does it look stunning but it's also bound to be great value, as it'll come in at MSRP where most AIB cards will retail above that. The design keeps things more than cool enough for gaming, too, as our Dave discovered in his testing, which saw average temps stay under 70 °C.

The FE card will, for all these reasons, almost certainly be the most in-demand RTX 5080 upon launch, so fingers on buzzers.

Scan has sold FE cards in the UK previously, but this time around the retailer explains: "Scan work as a fulfilment partner for NVIDIA on the FE cards. These must be bought using the links on the NVIDIA website when stock is available."

Nvidia GeForce RTX 5080 Founders Edition
US: Best Buy $999.99
UK: Nvidia £979

Gigabyte

Gigabyte's one of the best-known brands and its AIB GPUs are some of the most-bought. From mainstream value offerings such as its 'Windforce' cards through to stunners like the Aero OC, it has something to offer most gamers. Plenty of Gigabyte RTX 5080 cards are lining the proverbial shelves now, too, although still no definite prices.

Gigabyte Aorus GeForce RTX 5080 Master

US: Newegg $TBA | B&H Photo $TBA
UK: Scan £TBA | Overclockers £TBA | CCL £TBA | Currys £TBA

Gigabyte GeForce RTX 5080 Master Ice

US: Newegg $TBA | B&H Photo $TBA |
UK: Scan £TBA | Overclockers £TBA | CCL £TBA | Currys £TBA

Gigabyte GeForce RTX 5080 Gaming OC

US: Newegg $TBA | B&H Photo $TBA
UK:
Scan £TBA | Overclockers £TBA | Ebuyer £TBA | CCL £TBA | Currys £TBA

Gigabyte GeForce RTX 5080 Windforce

US: -
UK: Overclockers £TBA | CCL £TBA | Currys

Gigabyte GeForce RTX 5080 Aorus Xtreme Waterforce WB

US: Newegg $TBA | B&H Photo $TBA
UK: Overclockers £TBA | CCL £TBA | Currys £TBA

Gigabyte GeForce RTX 5080 Windforce OC

US: Newegg $TBA | B&H Photo $TBA
UK: Scan £TBA | Overclockers £TBA | Ebuyer £TBA | CCL £TBA

Gigabyte GeForce RTX 5080 Aorus Xtreme Waterforce

US: Newegg $TBA | B&H Photo $TBA
UK:
Scan £TBA | Overclockers £TBA | Ebuyer £TBA | CCL £TBA | Currys

Asus

Asus' most popular GPU line-ups are back for the RTX 50-series, and for the RTX 5080 in particular. We've got the much-adored and industrial-looking TUF line-up, as well as Astral and Prime. And there are plenty of stock-overclocked versions, plus some liquid-cooled ('LC') ones, so plenty to go around.

Asus ROG Astral GeForce RTX 5080

US: Newegg $TBA
UK:
Overclockers £TBA | Ebuyer £TBA | Currys £TBA

Asus GeForce RTX 5080 TUF Gaming

US: -
UK:
Overclockers £TBA | Ebuyer £TBA | Currys £TBA

Asus ROG Astral GeForce RTX 5080 OC

US: B&H Photo $TBA
UK:
Scan £TBA | Overclockers £TBA | Ebuyer £TBA | Currys £TBA

Asus GeForce RTX 5080 TUF Gaming OC

US: Newegg $TBA | B&H Photo $TBA
UK: Scan £TBA | Overclockers £TBA | Ebuyer £TBA | CCL £TBA | Currys £TBA

MSI

MSI has a very expansive GPU design portfolio, and these ranges are back in force for the RTX 5080. Plenty of retailer sites are setting up virtual shelving space for MSI RTX 5080 stock, now (including for my fave, the Suprim). There are a couple of white-bodied options in there, too, for those of you with an all-white build or looking to add a strong accent to your build.

MSI GeForce RTX 5080 Gaming Trio OC

US: Newegg $TBA | B&H Photo $TBA
UK: Scan £TBA | Overclockers £TBA | Ebuyer £TBA | CCL £TBA | Currys £TBA

MSI GeForce RTX 5080 Gaming Trio OC White

US: Newegg $TBA | B&H Photo $TBA
UK: Scan £TBA | Overclockers £TBA | Ebuyer £TBA | CCL £TBA | Currys £TBA

MSI GeForce RTX 5080 Inspire 3X OC

US: Newegg $TBA | B&H Photo $TBA
UK:
-

MSI GeForce RTX 5080 Suprim Liquid SOC

US: Newegg $TBA | B&H Photo $TBA
UK:
Scan £TBA | Overclockers £TBA | Ebuyer £TBA | CCL £TBA | Currys £TBA

MSI Vanguard GeForce RTX 5080

US: Newegg $TBA | B&H Photo $TBA
UK: -

MSI Ventus GeForce RTX 5080 3X OC

US: Newegg $TBA | B&H Photo $TBA
UK:
Scan £TBA | CCL £TBA

MSI Ventus GeForce RTX 5080 3X OC White

US: Newegg $TBA | B&H Photo $TBA
UK:
Scan £TBA | Overclockers £TBA

MSI Shadow GeForce RTX 5080 3x OC

US: Newegg $TBA
UK: -

MSI Vanguard GeForce RTX 5080 Launch Edition

US: Newegg $TBA | B&H Photo $TBA
UK: Scan £TBA

MSI Ventus GeForce RTX 5080 3X OC Plus

US: Newegg $TBA | B&H Photo $TBA
UK:
Scan £TBA | Overclockers £TBA | Ebuyer £TBA | CCL £TBA

The rest...

Although retailers seem to be making the most initial room for MSI, Asus, and Gigabyte's popular GPU line-ups, there are plenty of other third-party RTX 5080 designs to look out for. It's definitely worth keeping an eye out for Palit's, PNY's, and Zotac's takes on the RTX 5080. Most listings for these are outside of the US, but we're sure that'll change before too long.

Live updates

Nvidia RTX 5090 Founders Edition graphics card on different backgrounds (Image credit: Future)

Here's my Nvidia RTX 5090 Founders Edition review of the card:

"This is one of those times where I kinda want to give multiple scores. The GPU itself is a decent improvement over the RTX 4090, with more, faster memory, more cores, and a gorgeous chassis. But in terms of brute force rendering it's only incrementally faster in comparison with the performance bumps from Turing to Ampere to Ada. The future-focused AI chops and the marvel of Multi Frame Generation, however, I will praise to the high heavens. They may be 'fake frames', but it matters not a whit when you're gaming at those ultra smooth, ultra-high ray traced settings."

Nvidia RTX 5080 Founders Edition graphics card from different angles

(Image credit: Future)

And here's my less enthusiastic review of the Nvidia RTX 5080 Founders Edition:

"The RTX 5080 Founders Edition uses the same lovely shroud as the top RTX Blackwell card, and brings the same DLSS/MFG feature set to the table. But that's all that is really setting the second-tier card apart from the RTX 4080 Super as the gen-on-gen performance difference is marginal at best. It might not be an exciting GPU, but at least the veneer of Multi Frame Generation will make it feel like a generational leap to most gamers."

So, what are the bets? Will the cards last longer than ten minutes on sale?

What I will say about the RTX 50-series is that DLSS 4 and Multi Frame Generation is ace. In the games I've tested the feature in, both those where it's been natively implemented and those where it's accessible via DLSS Override in the Nvidia App, it's delivering the sort of 4K frame rates you're not going to see from native rendering for many GPU generations.

When you're hitting triple figure frame rates in Alan Wake 2 at max ray traced settings, with such low PC latency, it's truly impressive.

Newegg RTX 5090 listing

(Image credit: Future)

No s**t, Sherlock.

Worth noting that at B&H Photo you're only going to get a look-in if you've already signed up to stock alerts for the cards, and they'll get allocated on a first come, first served basis.

Good luck out there.

RTX 5090 listing

(Image credit: Future)

Well, it's go time...

RTX 5090 listing

(Image credit: Future)

It's not looking good out there...

Well, they're up on Newegg right now, but...

RTX 5090 listing

(Image credit: Future)

Aaaaaaand, predictably, they're gone.

RTX 5090 listing

(Image credit: Future)

I was in line at Best Buy for a Founders Edition RTX 5090, but it went straight out of stock.

RTX 5090 listing

(Image credit: Future)

It's the same situation in the UK, too. All gone. I mean, who thought they'd last the full ten minutes?

RTX 5080 listing

(Image credit: Future)

RTX 5080 listing

(Image credit: Future)

I mean, the UK sites are having a real bad time right now.

RTX 5080 listing

(Image credit: Future)

RTX 5080 listing

(Image credit: Future)

Oof, and some places are putting ridiculous scarcity-based price premiums on cards, too. Witness an RTX 5080 for the RTX 5090's MSRP...

RTX 5080 listing

(Image credit: Future)

So, that was CCL, and here's the same card at Currys:

RTX 5080 listing

(Image credit: Future)
]]>
https://www.pcgamer.com/hardware/live/news/nvidia-rtx-5090-and-rtx-5080-release-day-join-us-for-the-picosecond-where-the-new-rtx-blackwell-cards-are-in-stock/ XHGQNbaLwLoVBrDnh99AtW Thu, 30 Jan 2025 12:36:11 +0000
<![CDATA[ Where to buy the Nvidia RTX 5080: the high-end GPU just launched and stocks are already running low, so here's every RTX 5080 retailer listing I've found so far ]]> The Nvidia RTX 5080 launched today, January 30, 2025, and already the GPU is flying off the virtual shelves.

We saw dazzling performance from its bigger sibling, but the RTX 5080 has also impressed us. You can find out exactly why in our RTX 5080 review, but long story short, it can push out a whole bunch of frames when all the frame gen bells and whistles are enabled. Without this, it doesn't give the biggest uplift over the previous-gen RTX 4080 Super, but it also doesn't cost $2,000 like the RTX 5090 does.

The RTX 5080 FE has an MSRP half this much at 'just' $1,000, but a lot of third-party versions cost a fair bit more than this. Retailers have prepared the shelves with spaces for tons of these third-party 'AIB' (add-in board) cards from MSI, Gigabyte, Asus, and more, and now they're live, they can hardly keep those shelves stocked given the high demand.

All retailers should now have prices listed for the various versions of the RTX 5080. And if you're looking for the stunning Founders Edition, look no further than Best Buy (US) and Nvidia itself (UK).

Below, I've compiled all the RTX 5080 options and where you can buy them.

Quick links

US RTX 5080 retailers:

UK RTX 5080 retailers:

Founders Edition RTX 5080

Nvidia RTX 5080 Founders Edition graphics card from different angles

(Image credit: Future)

Nvidia GeForce RTX 5080 Founders Edition
US: Best Buy $999.99
UK: Nvidia £979

Founders Edition (FE) cards are Nvidia's own reference designs, and the 50-series ones look stunning. So, we have no doubt that many of you will be looking to get your hands on an RTX 5080 FE.

Not only does it look stunning but it's also bound to be great value, as it'll be coming in at MSRP where most AIB cards will retail for above this. The design keeps things more than cool enough for gaming, too, as our Dave discovered in his testing, which saw average temps stay under 70 °C.

The FE card will, for all these reasons, almost certainly be the most in-demand RTX 5080 upon launch, so fingers on buzzers.

Scan has sold FE cards in the UK previously, but this time around the retailer explains: "Scan work as a fulfilment partner for NVIDIA on the FE cards. These must be bought using the links on the NVIDIA website when stock is available."

MSI

MSI has a very expansive GPU design portfolio, and these ranges are back in force for the RTX 5080. Retailers have lots of these designs lining the shelves (including for my fave, the Suprim), although stocks are low. There are a couple of white-bodied options in there, too, for those of you with an all-white build or looking to add a strong accent to your build.

MSI GeForce RTX 5080 Gaming Trio OC White

US: Newegg $TBA | B&H Photo $TBA
UK: Scan £TBA | Overclockers £1,500 | Ebuyer £TBA | CCL £TBA | Currys £1,300

MSI GeForce RTX 5080 Inspire 3X OC

US: Newegg $1,190 | B&H Photo $1,160
UK: -

MSI GeForce RTX 5080 Suprim Liquid SOC

US: Newegg $1,330 | B&H Photo $1,300
UK: Scan £TBA | Overclockers £1,800 | CCL £TBA | Currys £1,450

MSI Vanguard GeForce RTX 5080

US: Newegg $1,250 | B&H Photo $1,230
UK: -

MSI Ventus GeForce RTX 5080 3X OC

US: Newegg $TBA | B&H Photo $TBA
UK: Scan £979 | CCL £TBA

MSI Ventus GeForce RTX 5080 3X OC White

US: Newegg $1,170 | B&H Photo $1,130
UK: Scan £TBA | Overclockers £1,400

MSI Shadow GeForce RTX 5080 3x OC

US: Newegg $1,160
UK: -

MSI Vanguard GeForce RTX 5080 Launch Edition

US: Newegg $1,260 | B&H Photo $1,230
UK: Scan £1,350

MSI Ventus GeForce RTX 5080 3X OC Plus

US: Newegg $1,170 | B&H Photo $1,130
UK: Scan £1,200 | Overclockers £1,400 | Ebuyer £TBA | CCL £1,500

Gigabyte

Gigabyte's one of the best-known brands and its AIB GPUs are some of the most-bought. From mainstream value offerings such as its 'Windforce' cards through to stunners like the 'Aero OC', it has something to offer for most gamers. Plenty of Gigabyte RTX 5080 cards are lining the proverbial shelves now, too—at least in theory, as stocks are already pretty diminished with people lapping them up right out of the gate.

Gigabyte Aorus GeForce RTX 5080 Master

US: Newegg $1,300 | B&H Photo $1,300
UK: Scan £1,350 | Overclockers £1,700 | CCL £1,700 | Currys £1,300

Gigabyte GeForce RTX 5080 Master Ice

US: Newegg $1,300 | B&H Photo $1,300
UK: Scan £1,350 | Overclockers £1,700 | CCL £1,700 | Currys £1,325

Gigabyte GeForce RTX 5080 Windforce

US: Newegg $1,000
UK: Overclockers £1,250 | CCL £TBA | Currys £1,150

Gigabyte GeForce RTX 5080 Aorus Xtreme Waterforce WB

US: Newegg $1,350 | B&H Photo $1,350
UK: Overclockers £2,000 | CCL £TBA | Currys £1,400

Gigabyte GeForce RTX 5080 Windforce OC

US: Newegg $1,000 | B&H Photo $1,000
UK: Scan £1,200 | Overclockers £1,400 | Ebuyer £TBA | CCL £1,400

Gigabyte GeForce RTX 5080 Aorus Xtreme Waterforce

US: Newegg $1,400 | B&H Photo $1,400
UK: Scan £1,450 | Overclockers £1,800 | Ebuyer £TBA | CCL £1,800 | Currys £1,430

Asus

Asus' most popular GPU line-ups are back for the RTX 50-series, and for the RTX 5080 in particular. We've got the much-adored and industrial-looking TUF line-up, as well as Astral and Prime. And there are plenty of stock-overclocked versions, plus some liquid-cooled ('LC') ones, so plenty to go around, if you can catch any in stock.

Asus ROG Astral GeForce RTX 5080

US: Newegg $1,500
UK: Overclockers £1,700 | Ebuyer £TBA | Currys £1,355

Asus GeForce RTX 5080 TUF Gaming

US: -
UK: Overclockers £1,500 | Ebuyer £TBA | Currys £1,245

Zotac, Palit, and others

Although retailers seem to be making the most initial room for MSI, Asus, and Gigabyte's popular GPU line-ups, there are plenty of other third-party RTX 5080 designs to look out for. It's definitely worth keeping an eye out for Palit's, PNY's, and Zotac's takes on the RTX 5080. Most listings for these are outside of the US, however.

]]>
https://www.pcgamer.com/hardware/graphics-cards/where-to-buy-the-nvidia-rtx-5080-reviewed-today-released-tomorrow-heres-every-rtx-5080-listing-ive-found-so-far/ XAe4JwxMaqkzkfmXiXTbfW Wed, 29 Jan 2025 17:54:58 +0000
<![CDATA[ Waiting for the launch of Nvidia's RTX 5090 and RTX 5080 is getting in-tents with hopefuls already pitching up outside of Micro Center stores ]]> Campers already appeared from r/Microcenter


Hardware launches are often bracing experiences. Scrambling to hit checkout on an online basket is its own rush, but there are those who really want to ensure their spot in line. Over the years, we've heard plenty of stories about folks camping to ensure they really are the first in line on launch day. We've heard fewer stories about folks camping in line almost four days before launch IN JANUARY.

Few understand this better than the folks camping outside a Micro Center store in Tustin, California right now, all in the hopes of scooping up either an Nvidia GeForce RTX 5090 or RTX 5080, both of which launch January 30.

The $2,000+ RTX 5090 is the flagship of the new RTX Blackwell generation, offering at least 30% higher 4K performance than the old RTX 4090 but with the promise of a neural rendering future alongside Multi Frame Generation right now.

That new feature is what's giving the RTX 50-series the big generational boost in supporting games here and now. The RTX 5080 is less exciting, feeling more like some sort of RTX 4080 Ti Super, but it still gets to rock the Multi Frame Gens, and that's surely what people are queueing up for.

According to this Reddit post, campers set up shop about three days ago, and this video from Tuesday evening shows the line is only getting longer. For those curious, Tustin has seen night time temperatures sitting around 46°F on average this week (that's roughly 7°C)—I hope those staking out the Micro Center are layering up. At the very least, a certain grill meister appears to be keeping some folks warm with hot food.

This extreme queuing tactic is likely because there will be a severely limited supply of both 50-series cards for the foreseeable future. Just this week, MSI said they had received a limited supply of 5090 cards from Nvidia, and recently Overclockers UK also revealed they had a 'single digit' number of Nvidia's RTX 5090 cards a week before launch.

Micro Center Campers 50 Series (video) from r/nvidia


Considering scalpers are already trying to flog cards they definitely do not have for $7,000, you can understand why some are instead opting for an in-person purchase. Though the line in Tustin is perhaps one of the longest, committed queuers have also been observed outside of Micro Centers in Dallas and Houston, Texas.

Plenty more hardware hounds hoping to rock up early on launch day are also posting in the MicroCenter subreddit to ask about their chances for different store locations. If you're not feeling like rising bright and early or dusting off the camping equipment yourself, you can take a look at our curated list of RTX 5090 online retailers right here.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/waiting-for-the-launch-of-nvidias-rtx-5090-and-rtx-5080-is-getting-in-tents-with-hopefuls-already-pitching-up-outside-of-micro-center-stores/ enVtsTiYQurKFGb2rSaQEE Wed, 29 Jan 2025 17:28:15 +0000
<![CDATA[ New Intel Battlemage graphics cards spotted but they may not be the cut-price RTX 4070 killers we're all desperate for ]]> As many as three potentially new Intel Arc graphics cards based on the latest Battlemage tech have been spotted in the latest official Intel Linux patch. But are they the cheapo RTX 4070 killers we're all gagging for? Now that's the question.

X user Tomasz Gawroński (via ye olde Videocardz) spotted a trio of purportedly new E21x device IDs to supplement existing E20x Battlemage IDs associated with the Arc B570 and B580 cards. It's speculated these new cards could be based on a different GPU die altogether.

The Intel Arc B570 and B580 use the G21 die. However, a larger, more powerful G31 Battlemage die has been rumoured for what seems like eons. In this scenario, the new IDs could cover off B750 and B770 cards based on the G31, and maybe a low-end B380 board.

The rumoured G31 chip is said to have 32 Xe cores, compared with the 20 Xe cores of the B580. In other words, a graphics card based on the G31 would be in the order of 50% more powerful than the B580.
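
As a rough sanity check of that maths, here's a tiny back-of-the-envelope sketch in Python; the unit counts are the rumoured figures above, while the scaling factor is purely an assumption to account for games rarely scaling linearly with extra hardware.

# Back-of-the-envelope estimate using the rumoured unit counts above.
b580_xe_cores = 20          # shipping Arc B580
g31_xe_cores = 32           # rumoured larger Battlemage die
scaling_efficiency = 0.8    # assumed: real-world performance rarely scales linearly

raw_uplift = g31_xe_cores / b580_xe_cores - 1            # 0.60 -> 60% more hardware
estimated_uplift = raw_uplift * scaling_efficiency        # ~0.48 -> "in the order of 50%"
print(f"{raw_uplift:.0%} more units, roughly {estimated_uplift:.0%} estimated uplift")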

If you're generous about scaling and driver quality, that could put it well into RTX 4070 territory. Indeed, given the modest on-paper improvements promised by Nvidia for the 5070, that could put a G31-based Intel Arc card into direct competition with the RTX 5070. Wouldn't that be nice?

It certainly would if the G31 GPU exists, if Intel has got its drivers working and if it prices the thing right. Those are, of course, quite a lot of stars to align.

It's also worth noting that some as yet unlaunched Arc cards, based on the existing G21 chip, but packing 24GB of memory in a clamshell arrangement, have also been spotted in shipping manifests. So, it's quite possible those Linux device IDs refer to, for instance, a pair of new G21-based Intel Arc Pro cards, plus maybe that low-end B380 board.

In other words, there's no particular reason right now to think the new device IDs indicate something G31-based and super exciting. It's absolutely what we're hoping for, but there's no firm indication either way. Fingers crossed.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/new-intel-battlemage-graphics-cards-spotted-but-they-may-not-be-the-cut-price-rtx-4070-killers-were-all-desperate-for/ Wpjxj4KYquETg9pjNaFAsn Wed, 29 Jan 2025 16:24:18 +0000
<![CDATA[ Reddit user unearths $850+ RX 7900 XTX from bargain bin, pays about 10 bucks for it, and claims 'it's working' ]]> I just got this for $4 from r/radeon


Oh, I do love a good thrift. Thanks to judicious use of second hand apps, a good chunk of my wardrobe is now Lucy and Yak—yes, I know, with prints this loud you saw that coming a mile off. Still, in all my years of bargain hunting, I've not yet found a deeply discounted GPU I can trust.

Well, Reddit user BlackTo0thGrin hit the jackpot when they plunged their hand into a bargain bin earlier this week, and unearthed what appears to be a AMD Radeon RX 7900 XTX in among miscellaneous stock. Realising what they had, they took it to the till—and paid the far from princely sum of "$4 plus tax."

Though a couple of years old now, this particular graphics card remains our top pick for the best AMD GPU for good reason. With an MSRP of $999 at launch, this Reddit user has potentially made a saving of mythological proportions—so long as the card proves to be genuine and functional.

Reddit user BlackTo0thGrin has been sharing updates, including a picture apparently showing the GPU in question installed and powered on. However, they've yet to share anything in-depth on the card's performance—in part because this is their very first PC build. At present, their desktop doesn't even have a case yet.

In their original post, the user explained how, after enjoying their time with the ROG Ally handheld gaming PC given to them as a gift by their eldest son, they decided to build their first desktop. On the hunt for some case fans, they ventured to their "local Amazon returns/overstock store called 'Gimme a Five'," and got more than they bargained for.

The progress they've shared so far sounds tentatively promising, as they write their motherboard is detecting the GPU correctly. They also share they've "got everything updated, all the drivers and the AMD software, everything seems fine, and temps are good."

Update on the $4 GPU from r/radeon


Though I am obviously intensely jealous of this Reddit user's good luck, I am also keeping my fingers crossed that their very first PC build proves magical the whole way through (and doesn't turn into a Grimms' fairytale with a twisted moral at the end). Second-hand GPUs can be a troublesome game, with cards used for cryptocurrency mining once flooding the used market. With big cryptos no longer relying on at-home GPU mining all that much, that's less of an issue these days.

Of all the places for pretty decent hardware to turn up, an "Amazon returns/overstock store" is perhaps one of the more expected I've seen in recent months. Serious thrifters will advise you to wear gloves if you're planning to go elbow deep in any bargain bin, but it certainly beats digging hardware out of the trash.

With our surplus of e-waste, there may soon come a day when stories like that one—to badly paraphrase Shania Twain—cease to impress me much. Still, with how limited supplies of the imminent Nvidia GeForce RTX 5090 are likely to be, maybe it is worth giving your local bargain bin another look.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/reddit-user-unearths-usd850-rx-7900-xtx-from-bargain-bin-pays-about-10-bucks-for-it-and-claims-its-working/ MqJhSY8LvTXjLYHQ7ScUM7 Wed, 29 Jan 2025 14:14:45 +0000
<![CDATA[ Nvidia RTX 5080 Founders Edition review ]]> The Nvidia RTX 5080 is like the difficult second album for the RTX Blackwell GPU band. It's a card that comes in at fully half the price of its RTX 5090 sibling, and presents us with a graphics card which—even more so than the previous card—reminds me very much of its erstwhile last-gen stablemate, the RTX 4080 Super.

I don't want to have to refer to this second spin of the Blackwell wheel as an ostensible RTX 4080 Ti Super, but there are a ton of similarities between the Ada refresh and this new GB203-powered RTX 5080. And if there was ever a reason for Nvidia not enabling its new Multi Frame Generation technology on RTX 40-series cards, this is the physical embodiment of it. Right now, it's kinda all the RTX 5080's got going for it.

But while not a lot has changed between the two cards, that includes the price. We are talking about a GPU which costs half the price of the most powerful consumer graphics card on the planet, and yet notably performs better than half as well. Of course, you're always going to pay more for that last little bit of ultra-enthusiast power to step up, I just kinda mean you shouldn't feel too bad if you can only drop $1,000 on a new GPU and not the $2,000+ of the RTX 5090. Poor lamb.

And, of course, there's AI. But actually useful AI, which makes our games run faster through the magic of AI models and yet still look damn good in the process. Yes, DLSS 4 with its Multi Frame Generation feature is the sign the RTX 5080 will continually tap whenever anyone brings up its striking resemblance to an RTX 4080 Super.

Nvidia RTX 5080: The verdict

Nvidia RTX 5080 Founders Edition graphics card from different angles

(Image credit: Future)
Buy if...

You can find one for the same price as an RTX 4080 Super: While it's only slightly more powerful than an RTX 4080 Super, it is nominally the same price. But scarcity and the newness premium might well see RTX 5080 pricing beyond the previous gen. If not, it's a good buy.

Your favourite games already support Multi Frame Gen: MFG is a stunning feature for the RTX 50-series, and the performance bump you can get with the RTX 5080 makes it far superior to the Ada equivalent so long as the game supports it.

Don't buy if...

You've already got yourself an RTX 4080 or above: It's going to be tough to conscience spending another $1,000+ on a new graphics card if you're already sitting on one of the top Ada GPUs, especially considering what you're really paying for is access to Multi Frame Gen.

I don't hate the RTX 5080, it just very much feels like this is an Ada GPU with some tweaked Tensor and RT Cores, an enhanced bit of flip metering silicon in the display engine, and an AI management processor queuing up all the new AI-ness of this neural rendering future of ours. As for what those end-user benefits actually end up looking like, we're going to have to wait and see.

I mean, you wait two and a bit years for a new graphics card architecture and the silicon we're presented with looks remarkably similar to what went before, but with the promise that it's got some revolutionary tech baked into it. So long as developers go ahead and make use of it all.

But it's not like Nvidia hasn't been upfront about what we should expect with this new chip. It's just that maybe its overly bombastic initial CES numbers didn't make it too obvious that MFG was responsible for most of its early perf claims.

It gave us the important specs and the relative gen-on-gen performance figure of a 15% increase over the previous generation at the following Editor's Day. And that's what I've seen in my own testing: across our new GPU test suite, the RTX 5080 is delivering an average 4K gaming performance uplift over the RTX 4080 Super of just over 15%.

Though that lead shrinks to just 9% and 14% over the same card at 1080p and 1440p respectively.

And it's not like Nvidia is asking us to pay any more for the new card over the one it's essentially replacing, like-for-like. Though, I've no idea how it could have charged more for this card, given the brakes the green team has put on the silicon development of this GPU, and not ended up with a full-on riot on its hands.

Nvidia RTX 5080 Founders Edition graphics card from different angles

(Image credit: Future)

I just don't feel a whole lot of affection for the RTX 5080. Right now, without any neural rendering shiz to actually get excited about, it feels like the GB203 on its own just kinda isn't trying. It'll slot in exactly where the RTX 4080 Super did, filling prebuilts and the hearts of those who balk at paying $2K for a GPU, yet are able to convince themselves and their significant others that $1,000 is worth it.

Except it will have far worse stock levels and a likely RTX 50-series premium attached to any build and non-MSRP card. This is definitely a concern for the RTX 5080. While the $999 MSRP means there's no price hike over the RTX 4080 Super it's replacing, the manufacturers and retailers will be keen to exploit its initial scarcity and newness by slapping a hefty tax on top of that base MSRP. $1,500 RTX 5080s aren't going to be uncommon, I would wager.

If it wasn't for Multi Frame Gen, the RTX 5080 would be a total non-event. But of course there is DLSS 4 and MFG here to salve a good chunk of the pain one might be feeling in regard to the relative performance of Nvidia's second-tier RTX Blackwell card. The still impressive technology smooths out the gaming performance of the RTX 5080 and delivers exceptionally high frame rates in all the games I've tested it in. Which admittedly isn't the full 75 games and apps Nvidia has been promising, but the innovative DLSS Override feature of the Nvidia App isn't working even on the review drivers.

But seeing 100 fps+ at top 4K settings in Alan Wake 2 and Cyberpunk 2077 is quite something to behold, though the latency in AW2 does highlight a problem we'll have further down the stack. So long as that level of performance uplift remains consistent across all the supported MFG games in its long list of Day 1 supporting titles, then there are going to be a huge volume of games where the actual gaming experience of running the RTX 5080 will feel entirely different to that of the RTX 4080 Super.

And that is where we have to end up, because however I might feel about the lack of tangible silicon advancement with the RTX 5080's GPU, what it's going to feel like when the average gamer gets the card slapped into their PC is arguably all that really matters.

So, if you've ever entertained the thought of spending $1,000 on an RTX 4080 Super, then this is the obvious next object of your affections. It's a like-for-like drop-in GPU, with an MFG magic trick, which is just as effective and strangely unexciting as that sounds.

Nvidia RTX 5080: The specs

Nvidia RTX 5080 Founders Edition graphics card from different angles

(Image credit: Future)

The overall RTX Blackwell architecture remains the same as with the previous card, and I've covered that in some depth in my RTX 5090 review. Suffice to say, the big change is the fact the shaders are now to be given direct access to the Tensor Cores of an Nvidia GPU—rather than relying on CUDA programming—which will allow a level of AI game integration we've not seen before.

You're also getting a dedicated AI management processor (AMP) inside the chip which allows it to regulate and schedule AI and standard graphics workloads so that it can still do all your DLSS and Frame Generation tasks alongside the other neural rendering stuff it's going to be tasked with when RTX Neural Skin, RTX Neural Materials, RTX Neural Faces, and RTX Neural Radiance Cache come into the picture in future gameworlds.

Nvidia Multi Frame Generation timeline

(Image credit: Nvidia)

You can also kinda include Multi Frame Generation as part of this architecture, for now at least. Since it is entirely locked down to the RTX 50-series, the skinny is that MFG is only possible at these PC latency levels because of the power of the 5th Gen Tensor Cores, that AMP scheduler, and the enhanced flip metering capabilities of the RTX Blackwell silicon inside the GB203 GPU inside the RTX 5080.

I've said it's like magic before, but that's doing the Nvidia engineers who worked on it a disservice. The ability to generate up to three extra frames between every two that are rendered is impressive on its own, but being able to do so without adding a ton of extra latency into the picture, pacing it perfectly, and with only some very minor artifacting at worst is something else.

It's this feature which entirely makes the RTX 5080 what it is; without it you would have a very different GPU, or at least a much cheaper card. But whatever took its place, you wouldn't have a card that could hit 100 fps+ in the latest games at their top 4K settings.

So what is this GB203 GPU about, then? Well, it's got 5% more cores than the RTX 4080 Super, with 10,752 CUDA cores inside it. Despite rocking the same TSMC custom 4N lithography, it's also a smaller chip, if only by a smidge. There are 45.6 billion transistors inside the GB203 where there are 45.9 billion inside the AD103, and in terms of total die size we're looking at 378 mm2 compared with 378.6 mm2.

It's also worth noting the RTX 5080 is using the full GB203 GPU; given the scale of the chip and the maturity of the 4N process, that's probably not a huge surprise. But what it does mean is that any future RTX 5080 Super refresh is going to have to be running on either the GB202 or an entirely new chip. Which would also mean you'd either have to jam a lot more memory in there or use 1 GB dies to fill the 512-bit bus to match the same 16 GB.

So yes, you are still getting the same 16 GB of VRAM in the card as you did with the RTX 4080/Super cards, except this time you're getting GDDR7 instead of GDDR6X, running at 30 Gbps versus 21 and 23 for the previous Ada cards. That means there's a fair chunk more memory bandwidth available to the Blackwell chip.
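
As a quick sanity check on that bandwidth claim, here's the standard bus-width-times-data-rate arithmetic in a few lines of Python; the 256-bit bus is the published spec for both cards, while the data rates are the ones quoted above.

# Memory bandwidth: bus width (bits) x data rate (Gbps) / 8 bits-per-byte = GB/s
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

rtx_5080 = bandwidth_gbs(256, 30)        # GDDR7 at 30 Gbps -> 960 GB/s
rtx_4080_super = bandwidth_gbs(256, 23)  # GDDR6X at 23 Gbps -> 736 GB/s
print(f"RTX 5080: {rtx_5080:.0f} GB/s vs RTX 4080 Super: {rtx_4080_super:.0f} GB/s "
      f"(+{rtx_5080 / rtx_4080_super - 1:.0%})")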

There are some other tweaks inside the GB203 silicon which separate it from the AD103 chip of the RTX 4080 Super. There are more texture units, which means more texture processing power, and more L1 cache. Though you are looking at the same 64 MB of L2 cache across the chip.

Nvidia is throwing a bit more power at the card, too, with the TGP rated at 360 W versus 320 W for the RTX 4080 Super. That means the recommended PSU spec has risen by 100 W as well, so that 750 W unit of yours might not be enough to keep your new GPU fed, y'know.

Nvidia RTX 5080: The performance

Nvidia RTX 5080 Founders Edition graphics card from different angles

(Image credit: Future)

In line with the extra power Nvidia is jamming through the card, the extra memory bandwidth, and a handful of extra cores, the overall gen-on-gen performance of the RTX 5080 is exactly what the green team said it would be. I'm getting a reliable 15% 4K gaming performance boost on average across our test suite.

Yeah, if you were hoping for RTX 4090 performance from the second-tier RTX Blackwell card then you're going to be disappointed.

If that sounds largely unexciting in percentage terms, it gets even less so when you look at the raw frame rates. When you're going from 47 fps to 55 fps or 31 fps to 36 fps it stops looking like any kind of tangible generational improvement in gaming performance. It's certainly not exactly going to set hearts aflame with acquisitional zeal.

Anyone on a comparable RTX 40-series GPU will likely be pleased to see that, as it takes the pressure off any niggling desire to upgrade their already expensive graphics card.

The performance delta—as with the RTX 5090—shrinks as we drop down the resolution scale. At 1080p and 1440p it drops to 9% and a touch under 14% respectively. At least if you're going to be running at 4K with DLSS Quality you're going to see a similar performance bump as at 4K native.

But the performance picture changes dramatically once you start to look at what Multi Frame Generation does to the card's frame rates. Going from 20 fps at 4K native to 130 fps with RT Overdrive in Cyberpunk 2077 and DLSS Quality with 4x MFG really does give you the generational improvement we've been craving. And it looks great, too; even the 67 ms latency is absolutely fine.

What I will say about latency, however, is that the Alan Wake 2 numbers do highlight a potential issue for MFG being the frame rate panacea of the lower class RTX 50-series GPUs. For AW2, I left it on the same extreme settings as the RTX 5090, which is honestly too demanding for the RTX 5080.

It gets just 19 fps natively, and only 35 fps when you turn on DLSS. Sure, you'll hit 117 fps when you slap 4x FG on the table, but the native latency is too high for DLSS to bring it down enough for frame gen's subsequent latency to be truly palatable. At 102 ms you could maybe get away with it on something like Alan Wake 2, but it's definitely stretching things for me.

Again we have to come back to where frame generation features inevitably fall down. As much as it sometimes feels like magic, MFG is not; if you don't have a high enough input frame rate the final latency is going to be utterly punitive even if the fps figures look good.
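
To make the point in the last few paragraphs concrete, here's a deliberately simplified, illustrative model in Python (not Nvidia's actual pipeline, and the constants are invented for illustration only): the displayed frame rate scales with the generation factor, while felt latency stays tied to how quickly real frames are rendered.

# Illustrative-only model: MFG multiplies displayed frames, not responsiveness.
def mfg_numbers(rendered_fps: float, factor: int = 4, overhead_ms: float = 12.0):
    # Displayed rate is an upper bound: at 4x, three generated frames follow each rendered one.
    displayed_fps = rendered_fps * factor
    # Felt latency still tracks the rendered frame time plus a buffering/pacing cost,
    # which is why a low input frame rate feels sluggish however high the fps counter reads.
    latency_ms = 2 * (1000 / rendered_fps) + overhead_ms
    return displayed_fps, latency_ms

for fps in (35, 60):  # a struggling input rate vs a healthier DLSS-upscaled one
    shown, lag = mfg_numbers(fps)
    print(f"{fps} rendered fps -> up to ~{shown:.0f} displayed fps, ~{lag:.0f} ms modelled latency")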

For the weaker cards in the RTX 50-series it does feel like MFG is going to be a little less exciting an advance. Though we'll have to wait and see how it holds up on the RTX 5070/Ti when they arrive in February.

It's also worth noting that, while 75 apps and games with DLSS 4 and MFG support at launch is great, that list notably doesn't cover every game that sports Nvidia's Frame Generation. The DLSS Override setup in the Nvidia App is great and impressively comprehensive, but it needs game support, and can't just be used to add MFG into any existing Frame Gen game.

Black Myth Wukong is a popular modern title, and a graphically intensive one, too. It sports Nvidia's Frame Gen technology but is notable by its absence from the list of native or DLSS Override supporting games. While Bears in Space is there. Good ol' Bears in Space.

It's only one game, but it's an example of where the RTX 5080 isn't going to feel like a step up over the RTX 4080 Super even when you flip the Frame Gen switch.

System-wise, that extra 15% performance bump comes with both a steady rise in power demands and in temperature. Granted that last is mostly down to the fact that the Founders Edition comes in a dual-slot configuration as opposed to the chonky triple slot cooling array of the RTX 4080/Super cards. The cooling on the big boi was certainly more effective, but I will say I'll happily take 71°C over 63°C if the card itself is so much smaller.

If the gen-on-gen gaming performance doesn't excite you then the card's creator chops are going to leave you utterly cold. When it comes to raw rendering performance, its Blender performance is around 12% higher than the RTX 4080 Super's. And then on the AI side, it's only 5% better off in the PugetBench for DaVinci Resolve tests, though it is at least 14% faster than the Ada card when it comes to AI image generation with the Stable Diffusion 1.5 benchmark.

Nvidia RTX 5080: Analysis

Nvidia RTX 5080 Founders Edition graphics card from different angles

(Image credit: Future)

What would Nvidia have done if Multi Frame Generation hadn't worked out? Brian Catanzaro freely admitted at the Editor's Day during CES 2025 that it was not something Nvidia could have done around the Ada launch.

"Why didn't DLSS 3 launch with Multi Frame Generation?" he asks. "And the answer is, we didn't know how to make the experience good."

Catanzaro notes that there were two big problems it needed to solve to make Multi Frame Generation a workable solution to a lack of big GPU silicon advances.

"One is that the image quality wasn't good enough. And when you think about it, when you're generating multiple frames, the amount of time you're looking at generating frames is much higher, and so if there's artifacts, they're going to really stand out. But then secondly, we have this issue with frame pacing."

Nvidia solved the issues with a shift to a new AI model for its Frame Generation feature to help deal with motion artifacts, the new transformer model for resolving the image, and flip metering to ensure the extra frames are slotted in smoothly, and all without adding too much over 2x Frame Gen in terms of PC latency.

Nvidia RTX 5080 Founders Edition graphics card from different angles

(Image credit: Future)

The work Nvidia has done on Multi Frame Generation is truly impressive, but if it hadn't panned out, what sort of GPU generation would we have in place of the current crop of RTX Blackwell chips? Maybe the RTX 5090 wouldn't have been much different; you'd still get the extra silicon, the extra VRAM, and essentially a rendering, gaming monster of a card, though with only 30% higher overall performance.

It would likely have been tough to price it higher than the RTX 4090 at $1,600, however, given the relative performance increase.

Things would have to have been different for the RTX 5080 and its GB203 GPU, though. This is the full chip being used at launch, which means there's no more headroom here to offer more than the 15% 4K performance bump that it offers over the RTX 4080 Super. There's no way it could have been released for the same $999 with such a slight bump and no MFG in sight.

Or else it would have had to be an entirely different, much more powerful GPU. And that would have necessarily translated further down the RTX 50-series stack, too.

It's good that, despite being half the price of the RTX 5090, the RTX 5080 isn't delivering half the performance; it's better than that. The RTX 5090 is some 50% quicker than the second-tier RTX Blackwell card. Though what I will say is that the price delta was much lower between RTX 4080 Super and RTX 4090, and the top Ada was only 35% quicker. So, that gen-on-gen comparison isn't too favourable for the RTX 50-series, either.

In reality, it's a moot point. I guess it's lucky for Nvidia's gaming division's bottom line it's got such smart folk working for it who could solve the issues with Multi Frame Generation in time for the RTX 50-series launch.

In the end, Multi Frame Generation exists, and the RTX 5080 is the silicon you're going to get because of the experience and extreme level of performance it can offer in the games that can exploit DLSS 4 and MFG. Thank Jen-Hsun for AI, eh?

]]>
https://www.pcgamer.com/hardware/graphics-cards/nvidia-rtx-5080-founders-edition-review/ V58h5iwiG2zRsyMrsiUpSm Wed, 29 Jan 2025 14:00:30 +0000
<![CDATA[ Today I learned I can run my very own DeepSeek R1 chatbot on just $6,000 of PC hardware and no megabucks Nvidia GPUs required ]]> Got the impression that a bazillion dollar's worth of GPUs are required to run a cutting-edge chatbot? Think again. Matthew Carrigan, an engineer at AI tools outfit HuggingFace, claims that you can run the hot new DeepSeek R1 LLM on just $6,000 of PC hardware. The kicker? You don't even need a high-end GPU.

Carrigan's suggested build involves a dual-socket AMD EPYC motherboard and a couple of compatible AMD chips to go with it. Apparently, the spec of the CPUs isn't actually that critical. Instead, it's all about the memory.

"We are going to need 768GB (to fit the model) across 24 RAM channels (to get the bandwidth to run it fast enough). That means 24 x 32 GB DDR5-RDIMM modules," Carrigan explains.

Links are helpfully provided and the RAM alone comes to about $3,400. Then you'll need a case, PSU, a mere 1 TB SSD, some heatsinks and fans.
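
To see why Carrigan leans so hard on memory rather than compute, here's a rough, assumption-heavy estimate in Python. The capacity figure comes straight from the quoted 24 x 32 GB modules; the DDR5-4800 speed and the roughly 37 billion active parameters per token (DeepSeek R1 being a mixture-of-experts model) are my own assumptions rather than figures from Carrigan's post.

# Rough, illustrative numbers for why RAM capacity and bandwidth dominate this build.
modules, module_gb = 24, 32
total_ram_gb = modules * module_gb                         # 768 GB: room for the ~700 GB of Q8 weights

per_channel_gbs = 4.8 * 8                                  # assumed DDR5-4800: 4800 MT/s x 8 bytes = 38.4 GB/s
aggregate_gbs = per_channel_gbs * 24                       # ~920 GB/s across 24 channels

active_weights_gb = 37                                     # assumed: ~37B active params per token at ~1 byte (Q8)
ceiling_tokens_per_s = aggregate_gbs / active_weights_gb   # ~25 tokens/s theoretical upper bound

print(f"{total_ram_gb} GB of RAM, ~{aggregate_gbs:.0f} GB/s aggregate, ceiling ~{ceiling_tokens_per_s:.0f} tokens/s")
# Real-world overheads are why Carrigan reports 6-8 tokens/s rather than that ceiling.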

Indeed, Carrigan says this setup gets you the full DeepSeek R1 experience with no compromises. "The actual model, no distillations, and Q8 quantization for full quality," he explains.

From there, simply "throw" on Linux, install llama.cpp, download 700 GB of weights, input a command line string Carrigan helpfully provides and Bob's your large language model running locally, as they say.
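
And for a flavour of what pointing an inference runtime at those weights looks like, here's a minimal sketch using the llama-cpp-python bindings (a wrapper around llama.cpp) rather than Carrigan's exact command line, which I won't try to reproduce; the model filename, thread count, and context size below are placeholder assumptions.

# Minimal local-inference sketch with llama-cpp-python; paths and settings are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-r1-q8_0.gguf",  # hypothetical filename; the real Q8 weights run to ~700 GB
    n_ctx=4096,                          # modest context window to keep memory use predictable
    n_threads=64,                        # tune to however many EPYC cores you actually have
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain memory bandwidth in one paragraph."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])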

Notable in all this is a total absence of mention of expensive Nvidia GPUs. So what gives? Well, Carrigan provides a video of the LLM running locally on this setup plus a rough performance metric.

Nvidia Hopper GPU die

Nvidia's H100: You won't be needing one of these. (Image credit: Nvidia)

"The generation speed on this build is 6 to 8 tokens per second, depending on the specific CPU and RAM speed you get, or slightly less if you have a long chat history. The clip above is near-realtime, sped up slightly to fit video length limits," he says.

The video shows the model generating text at a reasonable pace. But that, of course, is for just one user. Open this setup out to multiple users and the per-user performance would, we assume, quickly become unusable.

In other words, that's $6,000 of hardware to support, in effect, a single user. So, this likely isn't an approach that's practical for setting up an AI business serving hundreds, thousands or even millions of users. For that kind of application, GPUs may well be more cost effective, even with their painful unit price.

Carrigan suggests a build relying on GPUs might quickly run into triple-digit thousands of dollars, albeit with better performance.

But it is intriguing to learn that you don't actually need a bazillion dollars' worth of GPUs to get a full-spec LLM running locally. Arguably, it also provides insight into the true scale of intelligence implied by the latest LLMs.

As an end user experiencing what can seem like consciousness streaming out of these bots, the assumption is that it takes huge computation to generate an LLM's output. But this setup is doing it on a couple of AMD CPUs.

So, unless you think a couple of AMD CPUs is capable of consciousness, this hardware solution demonstrates the prosaic reality of even the very latest and most advanced LLMs. Maybe the AI apocalypse isn't quite upon us after all.

]]>
https://www.pcgamer.com/hardware/graphics-cards/today-i-learned-i-can-run-my-very-own-deepseek-r1-chatbot-on-just-usd6-000-of-pc-hardware-and-no-megabucks-nvidia-gpus-required/ HNJPg65LGTqGXBaEp6HEsd Wed, 29 Jan 2025 11:43:39 +0000
<![CDATA[ China's DeepSeek chatbot reportedly gets much more done with fewer GPUs but Nvidia still thinks it's 'excellent' news ]]> China's new DeepSeek R1 language model has been shaking things up by reportedly matching or even beating the performance of established rivals including OpenAI while using far fewer GPUs. Nvidia's response? R1 is "excellent" news that proves the need for even more of its AI-accelerating chips.

If you're thinking the math doesn't immediately add up, the stock market agrees, what with $600 billion being wiped off Nvidia's share price this week.

So, let's consider a few facts for a moment. Reuters reports that DeepSeek's development entailed 2,000 of Nvidia's H800 GPUs and a training budget of just $6 million, while CNBC claims that R1 "outperforms" the best LLMs from the likes of OpenAI and others.

The H800 is a special variant of Nvidia's Hopper H100 GPU that's been hobbled to fit within the US's export restriction rules for AI chips. Some in the AI industry claim that China generally and DeepSeek, in particular, have managed to dodge the export rules and acquire large numbers of Nvidia's more powerful H100 GPUs, but Nvidia has denied that claim.

Meanwhile, it's thought OpenAI used 25,000 of Nvidia's previous-gen A100 chips to train ChatGPT 4. It's hard to compare the A100 to the H800 directly, but it certainly seems like DeepSeek got more done with fewer GPUs.

That's why the market got the yips when it comes to Nvidia. Maybe we don't need quite as many chips for the AI revolution as was once thought?

Needless to say, Nvidia doesn't see it that way, lauding R1 for demonstrating how the so-called "Test Time" scaling technique can help create more powerful AI models. “DeepSeek is an excellent AI advancement and a perfect example of Test Time Scaling,” the company told CNBC. "DeepSeek’s work illustrates how new models can be created using that technique, leveraging widely-available models and compute that is fully export control compliant."

Of course, all may not be quite as it seems. As Andy reported yesterday, some observers think DeepSeek is actually spending more like $500 million to $1 billion a year. But the general consensus is that DeepSeek is getting more done for much less investment than the established players.

Nvidia Hopper GPU die

H100 is Nvidia's cash cow. Did DeepSeek secretly manage to get hold of it? (Image credit: Nvidia)

Microsoft, for instance, expects to spend $80 billion on AI hardware this year, with Meta saying it will unload $60 to $65 billion. DeepSeek's R1 model seems to imply that similar results can be achieved for an order of magnitude less investment.

The question is, does that automatically mean fewer GPUs being bought? Perhaps not. At the kind of investment levels demonstrated by the likes of Microsoft and Meta, the number of organisations that can get involved is necessarily limited. It's just too expensive.

Cut that by a tenth or more and suddenly the number of potential participants might explode. And they'll all want GPUs. It's a little like the move from mainframe to personal computing. Sure, each individual investment in computing was a lot smaller, but the computing business overall grew far larger.

So, maybe DeepSeek is an inflection point. From here on, AI development is more accessible, less dominated by a small number of hugely wealthy entities, and maybe even a bit more democratic.

Or maybe we'll find out that DeepSeek has used a mountain of H100s after all. Either way, Nvidia will be planning to make its own mountain—of cash.

]]>
https://www.pcgamer.com/hardware/graphics-cards/chinas-deepseek-chatbot-reportedly-gets-much-more-done-with-fewer-gpus-but-nvidia-still-thinks-its-excellent-news/ D7XfvftUtwJRNQAauybvLb Tue, 28 Jan 2025 16:16:48 +0000
<![CDATA[ FSR 4 may be a simple upgrade for FSR 3.1 games according to leaks, which hopefully means we won't see a repeat of FSR 3's poorly-supported launch ]]> I remember the FSR 3 launch. While AMD's version of Frame Generation was a boon for those of us not on Nvidia GPUs, the supported games list was made up of, err, two entries. Hopefully that won't be the case with FSR 4, however, as according to a reliable source of leaks, when it comes to backwards compatibility, "it should just work."

Exactly how it "should just work" is currently unclear (via Wccftech). According to @Kepler_L2, one of the more reliable leakers of modern times, AMD's RDNA 4 driver will simply replace the FSR 3.1 DLL with FSR 4. Whether that's applied by default on all FSR 3.1-supported games, or a manual, game-by-game process, is up for debate.

AMD hinted at this in its slide presentation for CES 2025, which announced both FSR 4 and the RX 9070-series GPUs (even if AMD technically didn't in the briefing that followed). The bottom of the FSR 4 slide says: "AMD FSR 4 upgrade feature only available on AMD Radeon RX 9070 series graphics for supported games with AMD FSR 3.1 already integrated."

So, an "upgrade feature." That certainly sounds like some form of integrated backwards compatibility to me, although it's possible you might have to go through your FSR 3.1 games in the driver itself and flick a toggle, or something to that effect.

Still, I hope this rumour proves to be true. Button-press backwards compatibility for upscalers has become a lot easier since Nvidia introduced the option to switch over the DLSS version within its Nvidia app as part of the rollout for DLSS 4.

(Image credit: AMD)

It makes sense that AMD would try and do the same, meaning we could be looking at simple upscaler swapping in previously supported games from both major GPU manufacturers in future.

Speaking of DLSS 4, well, there's tough competition for AMD ahead. My hardware overlord, Dave James, has been pretty impressed with what he's seen of DLSS 4, including in his testing of the RTX 5090. It's not perfect, but Multi Frame Generation has some serious framerate boosting chops—and the new transformer-based upscaling looks mostly excellent in motion.

FSR 3.1, on the other hand, didn't quite bring the image quality improvements I was hoping for compared to previous iterations, even if it did perform relatively well in my upscaler testing showdown—at Quality settings at least.

Now FSR 4 has been buffed with machine-learning doohickeys (and presumably some on-card hardware to run them), here's hoping AMD has managed to catch up. You'll need an RX 9070-series card to play with the new tech, at the very least.

AMD will certainly be hoping that FSR 4 makes a difference, given the competitive price point that Nvidia has set the RTX 5070 at. $549 for a card that promises "RTX 4090 level performance" thanks to DLSS 4? The red team's own mid-range efforts look like they'll have to work hard to beat it—and we can only hope the pricing and upscaling performance makes for a strong competitor.

The upscaling battles seem to be beginning once more, at least, and we'll be testing both to see which emerges victorious.

]]>
https://www.pcgamer.com/hardware/graphics-cards/fsr-4-may-be-a-simple-upgrade-for-fsr-3-1-games-according-to-leaks-which-hopefully-means-we-wont-see-a-repeat-of-fsr-3s-poorly-supported-launch/ 7wGbxQUSeYnstT2aDsv84H Mon, 27 Jan 2025 16:59:39 +0000
<![CDATA[ MSI says that the supply of its RTX 5090 cards will be very tight, due to a limited supply of GPUs from Nvidia ]]> Anyone who has been around in PC gaming for a long time will know that every time a new high-end graphics card gets launched, supply is never enough to meet demand. In the case of the GeForce RTX 5090, board partner MSI says that its cards will have limited availability and it's down to an insufficient number of GB202 GPUs from Nvidia.

While this news probably isn't a shock to any graphics card enthusiast, it's worth noting that this isn't some rumour or leak—it's an official statement by MSI, one of Nvidia's key GPU partners, as reported by IT Home (via Wccftech).

As with any manufactured product, the overall availability can only be as good as the weakest link in the supply chain. In this instance, according to MSI, it's the number of GPUs being provided by Nvidia.

MSI, Asus, Gigabyte, and all other AIB vendors purchase graphics processors from Nvidia, which then distributes them mostly from its centres in Hong Kong. However, one can't simply rock up and ask for 100,000 chips—orders need to be placed well in advance and then Nvidia will allocate processors based on a number of factors, such as the size of the order, relationship with the partner, what deals are in progress, and so on.

It's not just MSI that is struggling to meet demand. Zotac Korea says (via Videocardz) that there's no chance of any 5090s being available until early February and that, as things currently stand, there is no confirmed date for the release of its RTX 5080 models.

This all tallies with the remark from UK retailer Overclockers saying that it will only have a 'single digit' number of 5090 cards available when the GPU is released for purchase.

So what to make of this? Is Nvidia deliberately constraining the supply of its GB202 chips, to charge board vendors more for each tray of processors they order? Or is it down to the fact that at 750 mm2 in size, each silicon wafer isn't going to produce many fully working dies, even if the yields are very good, so TSMC just can't make enough of them?

Even if the former isn't remotely true, the latter will be to a certain degree, and I can't imagine that Nvidia is sitting on a huge pile of Blackwell chips. We picked up through the grapevine various mutterings that Nvidia commenced manufacturing of RTX 50-series GPUs quite late in 2024, though we couldn't pin down such rumours nor determine any possible explanation for such a decision.

However, one factor that may have played a role is that the consumer-grade Blackwell chips are made on the same process node as datacentre Blackwell, i.e. TSMC N4 (Ada Lovelace GPUs are made on a custom version of that node).

Given just how much money Nvidia makes from selling its massive AI processors, I should imagine TSMC order books were chock full of GB100s, leaving little spare capacity to start RTX 50-series chips in earnest.

And it's worth noting that TSMC's N4 process node is in hot demand. As well as Nvidia's entire Blackwell range, the Taiwanese fabrication giant also produces AMD's Zen 5 chiplets, as well as its Strix Halo and the latest Hawk Point APUs on the same node. They're all much smaller than the GB202, so fewer wafers need to be allocated for AMD's orders, but it can only churn out so many each month.

Not that this is any comfort if you were hoping to snag an RTX 5090 at its MSRP when they hit retailers' shelves (if they even reach them) at the end of this month. The 575 W monster is the most powerful gaming graphics card money can buy but it would seem that, for the next month or so, no amount of money may be able to get you one.

]]>
https://www.pcgamer.com/hardware/graphics-cards/msi-says-that-the-supply-of-its-rtx-5090-cards-will-be-very-tight-due-to-a-limited-supply-of-gpus-from-nvidia/ BroSmp4bU7UQBhVF87J7QP Mon, 27 Jan 2025 15:57:59 +0000
<![CDATA[ This spectacular GB202 die shot shows just how massive Nvidia's RTX 5090 GPU is but it's not the largest chip it's ever shoehorned into a gaming graphics card ]]> With Nvidia's RTX 50 series launch at CES earlier this month and all the subsequent reviews of the RTX 5090, especially in-depth ones like our Dave's, laying out all the specifications, we all know that the biggest Blackwell chip is one seriously big GPU. And thanks to a new die shot of the processor, we can now feast our eyes on all those shaders and cache.

Creating a detailed die shot of any processor isn't simple. It takes many failed attempts, involving numerous cracked chips and skin burns, to perfect the process. If that chip just so happens to be an Nvidia GB202, the GPU powering the GeForce RTX 5090, then there are a few other barriers to overcome, namely getting your hands on one and being willing to sacrifice a $2,000+ graphics card for the sake of a picture.

Enter Tony Yu, general manager of Asus China and all-round top chap, to save the day, sharing a high-resolution image of the GB202 die that X user Kurnal then managed to grab hold of and helpfully label, picking out all the key parts (via Tom's Hardware).

While the image itself doesn't reveal any major surprises, as Nvidia has stuck with the same fundamental design layout for many years now, it does show that the engineers had to make some interesting decisions in order to get everything to fit within the die's physical dimensions.

For example, if you look at the GB202 and compare it to the AD102 (the RTX 4090's GPU), you'll see that all of the logic blocks for the NVENC video encoders and decoders have moved from the bottom to the very middle of the chip.

The reason for this is twofold: firstly, the GB202 sports three encoders and two decoders to the AD102's two and one respectively; secondly, it has a 512-bit aggregated memory bus. If Nvidia had kept the NVENC blocks all at the bottom, then those 16 physical memory interfaces (PHYs) would have made the die very long/tall. Perhaps too tall.

Something else we can clearly see is all that L2 cache in the very centre of the die. Where AMD uses a fast but complex multi-level cache hierarchy, Nvidia takes a simpler approach: a main L1 cache for each SM (Streaming Multiprocessor) and then a hulking L2 (last-level) cache, as well as some smaller ones dotted about in the SMs.

Other than that, the design is pretty straightforward. The full die comprises 12 GPCs (Graphics Processing Clusters), each sporting its own 'Raster Engine', also known as a ROPs cluster. Each GPC is organised into eight TPCs (Texture Processing Clusters), and inside each of those you'll find two SMs—these house 128 CUDA cores apiece, for a grand total of 24,576 shader units.
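
If you fancy sanity-checking that shader tally, the hierarchy above multiplies out neatly. This is just my own arithmetic based on the counts quoted in this article:

```python
# Full GB202 die as described above: GPCs -> TPCs -> SMs -> CUDA cores.
gpcs = 12
tpcs_per_gpc = 8
sms_per_tpc = 2
cuda_per_sm = 128

total_sms = gpcs * tpcs_per_gpc * sms_per_tpc   # 192 SMs on the full die
total_cuda = total_sms * cuda_per_sm            # 24,576 shader units
print(total_sms, total_cuda)                    # 192 24576
```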

But for all its massiveness, it's not the biggest chip Nvidia has ever stuffed into a gaming graphics card. It is in terms of transistor and shader count but not in terms of physical dimensions. With an area of 750 mm2, the GB202 is 23% larger than the AD102 (609 mm2) and 19% larger than the GA102 (RTX 3090, 628 mm2).

Your next upgrade

Nvidia RTX 4070 and RTX 3080 Founders Edition graphics cards

(Image credit: Future)

Best CPU for gaming: The top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game ahead of the rest.

However, it's 0.5% smaller than the TU102 (754 mm2), the GPU in the RTX 2080 Ti, and 8% smaller than the GV100 (815 mm2). The latter, based on the Volta architecture, isn't really a gaming GPU, but for a while, Nvidia marketed it as such. The Titan V was very much the 'RTX 5090' of its era (2017), not least because of its $2,999 price tag.
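
Those percentages are simple area ratios. Here's the same comparison spelled out, using the die areas quoted in this article:

```python
# Die areas in mm^2, as quoted in the text above.
dies = {"GB202": 750, "AD102": 609, "GA102": 628, "TU102": 754, "GV100": 815}

gb202 = dies["GB202"]
for name, area in dies.items():
    if name == "GB202":
        continue
    diff = (gb202 - area) / area * 100
    word = "larger" if diff > 0 else "smaller"
    print(f"GB202 is {abs(diff):.1f}% {word} than {name} ({area} mm^2)")
# GB202 is 23.2% larger than AD102, 19.4% larger than GA102,
# 0.5% smaller than TU102 and 8.0% smaller than GV100.
```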

A little over 800 mm2 is about as large as a single die can go, due to the reticle limit, but the GB202 isn't all that far off. Whether the next generations of GPUs are this big remains to be seen, but if this is the last hurrah for monstrously huge, monolithic chips in gaming graphics cards, before switching to tiled or stacked chiplets, then at least we have this lovely die shot to stare at while we try to spot where the RT and Tensor cores might be amid the ocean of transistors.

]]>
https://www.pcgamer.com/hardware/graphics-cards/this-spectacular-gb202-die-shot-shows-just-how-massive-nvidias-rtx-5090-gpu-is-but-its-not-the-largest-chip-its-ever-shoehorned-into-a-gaming-graphics-card/ kZutxBvZbmyxaCFEkwxwmJ Mon, 27 Jan 2025 12:34:00 +0000
<![CDATA[ Scalpers are already trying to rip off gamers by flipping RTX 5090 graphics cards they don't actually have for up to $7,000 ]]> Marginally less inevitable than the relentless march of cosmic entropy and ensuing heat death of the universe is the near-certainty that scalpers will try to make a quick buck on the latest high-end GPU from Nvidia. Enter the imminent RTX 5090 and eBay listings of up to $7,000.

Of course, in theory, none of the sellers on eBay ought to have an RTX 5090 to actually sell. The new GPU doesn't officially go on sale for another six days, on January 30.

However, a quick scan of the eBay listings in the US reveals various claimed methods of sourcing the cards. One seller plans to do the midnight queue at Micro Center to bag as many as possible.

"I’ll [be] standing in line at Micro Center and try to get as many as possible ,so this is a presale. I’ll ship these as soon as possible," the listing says. The current bid on that one stands at $2,750.

Another claims to be posted by an employee at a retailer in possession of "guaranteed" slots to buy 5090s. "I am an employee of a certain technology retailer and have guaranteed slots for a few of these GPU's. I have no interest in upgrading my current build and am looking to sell my guarantee slot for the card," the auction, which currently stands at $4,000, says. Ouch.

As we write these words, the fixed-price listings seem to start at around $3,750. That $7,000 item is for an overclocked Asus board.

If all this is indeed utterly inevitable, it's hard to draw too many conclusions about availability and real-world prices for the new RTX 5090, which has an MSRP of $2,000.

There are rumours circulating that indicate supply may be very limited. But it's unclear how accurate they are or how long any shortage might last. We'll probably need a few weeks to a few months to really get a feel for that.

Odds are, the 5090 will sell out almost immediately on day one, even with quite a healthy supply. It'll be the weeks and months that follow which determine what the real-world price for Nvidia's latest uber GPU will be.

$7,000 for a 5090? No, thanks. (Image credit: eBay)

Personally, I wouldn't be inclined to pay over list for this one. As I discussed the other day, high-end GPUs can actually be strong value propositions in the long term, provided you can stomach the up front hit.

An RTX 4090 bought on launch day over two years ago is still a great card today and in raw performance terms will likely remain behind only the new 5090—and not by all that much—for another couple of years.

The problem with the 5090 is that it uses carried-over N4 silicon, the same production node as the 4090. That's limited Nvidia's ability to scale the performance, what with the GB202 GPU inside the 5090 basically hitting the physical size limit, known as maximum reticle size, for a GPU die design at manufacturer TSMC.

However, it's very likely that Nvidia will move to a more advanced node for its next GPU, possibly delivering a much bigger jump with the RTX 6090, or whatever Nvidia chooses to call it, and undermining the RTX 5090's long-term appeal.

At $2,000, the 5090 is just about OK as a long-term buy. Anything much above that and the appeal just isn't there. I'd probably be more inclined to go with a $1,000 RTX 5080 and then drop another $1,000 on whatever replaces the 5080 on the assumption that the next gen will probably be a bigger step and I'll end up with something faster than a 5090.

Anyway, take care out there on eBay, peeps, they're out to get your money.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/scalpers-are-already-trying-to-rip-off-gamers-by-flipping-rtx-5090-graphics-cards-they-dont-actually-have-for-up-to-usd7-000/ a52RFk52qtzRtwtXyz5btc Fri, 24 Jan 2025 17:20:11 +0000
<![CDATA[ A variant of Asus' Q-release system has been accused of grinding GPU contact pins and the thought is making my teeth hurt ]]> Asus offers several alternative GPU-securing mechanisms as part of its Q-release system, and one has been accused of potentially damaging GPU contact pins.

If there's one internal PC component that has caused me repeated physical pain over my many years of DIY builds, it's the traditional GPU retainer clip. It's fiddly to get to with a beefy modern graphics card, and seems designed to produce sore fingertips and many curse words if used on the regular.

The solution to my issue is a quick-latch system that makes this fiddly work easier. Though as X user @9950pro highlights, a post on Bilibili says a latch design used on Asus Intel 800-series and AMD 800-series boards may risk damaging a GPU under repeated usage.

As demonstrated above, the quick-release system in question requires you to pull the far left of the card up first (the output socket side) which causes the card's body to push down on the retainer clip.

There's a slight bit of rotation of the pins in the socket as the card goes in and out, along with a fair bit of resistance against the contacts, which seems like it could potentially cause wear.

Our Nick has been using this exact release system on two Asus motherboards, and in his opinion, repeated usage could grind the contacts over time.

The mobo demonstrated above is the Asus ROG Strix B850-F Gaming Wifi, and while Nick has his reservations about the mechanism, he hasn't experienced any undue wear as of yet.

In a screenshot posted later in the Bilibili thread, a user appears to have been in contact with an Asus representative about the issue. I've reached out to Asus for comment, and will update this story with any further information I receive.

It's worth noting that Asus uses multiple mechanisms with its Q-release motherboards, including this one demonstrated below, which releases the GPU in a more conventional manner.

Yes, this particular PC is very dusty. Our Jacob has since informed me he has taken an air blower to the internals, so rest assured it's in much better shape now.

Anyway, all PCIe slots put some degree of resistance on the contact pins when a GPU is inserted and removed, but we swap graphics cards out of conventional slots all the time without undue wear. If this particular mechanism is causing contact pins to be chewed faster than others, it seems like some tweaks might be required to prevent damaging users' cards.

It's difficult to pin down exactly how many mating cycles a modern PCIe slot is rated for. Regardless, it's thought to be relatively low, although that rating applies to the slot itself and not necessarily to the pins on the GPU. And the number of cycles a part is rated for and the number that may cause this sort of wear could be quite different.

If you're not the sort of person that pulls GPUs in and out of a system constantly, however, and you use an Asus motherboard with this particular release mechanism, I doubt there's much to worry about. Most users will only swap cards out every few years or so, and it seems unlikely that this particular release system would do much damage with occasional use.

Still, excessive grinding and GPU pins? It's the sort of thing that makes my teeth feel funny just thinking about it.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/a-variant-of-asus-q-release-system-has-been-accused-of-grinding-gpu-contact-pins-and-the-thought-is-making-my-teeth-hurt/ P2oafX894jCBmV8Rwn2aFH Fri, 24 Jan 2025 12:01:38 +0000
<![CDATA[ AMD claims it's 'taking a little extra time' to get 'maximum performance' out of the RX 9070 and RX 9070 XT before the GPUs launch in March ]]> Eyes and ears might be turned towards Nvidia's impending RTX 50-series graphics cards right now, but don't forget that AMD's next-gen cards, the Radeon RX 9070 and RX 9070 XT, aren't far off, either. At least not in the grand scheme of things, but they're certainly further off than we originally expected. Now, AMD is giving us more information about why exactly that is.

According to AMD VP and GM David McAfee, the company is "taking a little extra time to optimize the software stack for maximum performance and enable more FSR 4 titles."

The Radeon RX 9070 and RX 9070 XT were expected to launch at CES 2025, but instead AMD didn't give a release date and revealed very few details about the new GPUs. We only got an RX 9070 and RX 9070 XT release date a couple of days ago when McAfee said they'll "go on sale in March".

Aside from speculation about possible changes of plans regarding the GPUs—switching from a chiplet to non-chiplet design—many have suspected that the main reason for the delayed launch was Nvidia's announcement of surprisingly cheap price tags for the upcoming RTX 5070 and RTX 5070 Ti, which the AMD GPUs will almost certainly be competing against.

This reason, at least, is not speculation. AMD's Frank Azor confirmed as much to us at CES, saying the RTX 50-series announcement "went into our decision" to hold off on dishing out the deetz on the new desktop GPUs. Although, Azor clarified that it wasn't the only factor, saying "it isn't any one thing."

There are many reasons why such a thing could cause a delay, not least of which is needing time to let the dust settle around the RTX 50-series cards and figure out a new price point. And if this meant reducing the price of the GPUs to better compete with the RTX 50-series, it could have caused problems with board partners that had already bought up a load of GPUs at the original, higher price.

Now, however, McAfee seems to be implying the delay is to do with getting "maximum performance" out of the cards. Of course, that's something we'd hope the company was aiming for anyway, but if it takes a little extra time to achieve, so be it.

And really, this new statement doesn't conflict with the old one. Improving performance and getting more FSR 4 titles up and running could be one of the "multitude of different things" Azor originally mentioned. Plus, of course, any improvements will help improve the AMD chips' value proposition against the 50-series chips.

So, it does track—or could track, at least. But that doesn't detract from the fact that the RX 9070 and RX 9070 XT were delayed, which doesn't exactly scream confidence in the product. We won't know until we get our hands on the GPUs for testing. We were hoping that would have been shortly after CES, but alas, "maximum performance" calls.

Best SSD for gaming: The best speedy storage today.
Best NVMe SSD: Compact M.2 drives.
Best external hard drive: Huge capacities for less.
Best external SSD: Plug-in storage upgrades.

]]>
https://www.pcgamer.com/hardware/graphics-cards/amd-claims-its-taking-a-little-extra-time-to-get-maximum-performance-out-of-the-rx-9070-and-rx-9070-xt-before-the-gpus-launch-in-march/ UEgUSKxtDMDgPymuhiTPGA Thu, 23 Jan 2025 14:22:06 +0000
<![CDATA[ Nvidia GeForce RTX 5090 FE review ]]> There is an alternative 2025 where you get the Nvidia RTX 5090 of your dreams. That's a timeline where Nvidia has busted Apple's grip on TSMC's most advanced process nodes, managed to negotiate an unprecedented deal on silicon production, and worked some magic to deliver the same sort of generational rendering performance increases we've become used to since the RTX prefix was born.

And it's a 2025 where Nvidia hasn't slapped a $400 price hike on the most powerful of its new RTX Blackwell graphics cards.

But in this timeline, the RTX 5090 is an ultra enthusiast graphics card that is begging us to be more realistic. Which, I will freely admit, sounds kinda odd from what has always been an OTT card. But, in the real world, a GB202 GPU running on a more advanced, smaller process node, with far more CUDA cores, would have cost a whole lot more than the $1,999 the green team is asking for this new card. And would still maybe only get you another 10–20% higher performance for the money—I mean, how much different is TSMC's 3 nm node to its 4 nm ones?

The RTX 5090 is a new kind of graphics card, however, in terms of ethos if not in silicon. It's now the best graphics card you can buy, but it's also a GPU designed for a new future of AI processing, and I don't just mean it's really good at generating pictures of astronauts riding horses above the surface of the moon. AI processing is built into its core design, and that's how you get a gaming performance boost that is almost unprecedented in modern PC graphics, even when the core at its heart hasn't changed that much. Though it has at least changed more than the Nvidia RTX 5080 GPU has...

Nvidia RTX 5090: The verdict

Buy if...

You want the best: If you want to nail triple figure frame rates in the latest 4K games, then you're going to need the might and magic of Multi Frame Gen, and that's only available with the RTX 50-series cards. And yes, I do like alliteration.

You want to get in on the ground floor of neural rendering: The RTX Blackwell GPUs are the first chips to come with a full set of shaders that will have direct access to the Tensor Cores of the card. That will enable a new world of AI-powered gaming features... when devs get around to using them in released games.

You're after a hyper-powerful SFF rig: The Founders Edition is deliciously slimline, and while it generates a lot of heat it will fit in some of the smallest small form factor PC chassis around.

Don't buy if...

You need to ask the price: With a $400 price hike over the RTX 4090, the new RTX 5090 is a whole lot of cash at its $1,999 MSRP. The kicker, however, is that you'll be lucky to find one at that price given the third-party cards are looking like $2,500+ right now.

The new RTX Blackwell GPU is… fine. Okay, that's a bit mean. The GB202 chip inside the RTX 5090 is better than fine; it's the most powerful graphics core you can jam into a gaming PC. I'm maybe just finding it a little tough not to think of it as an RTX 4090 Ti or Ada Titan. Apart from hooking up the Tensor Cores to the shaders via a new Microsoft API, and a new flip metering doohickey in the display engine, it largely feels like Ada on steroids.

The software suite backing it up, however, is a frickin' marvel. Multi Frame Generation is giving me ultra smooth gaming performance, and will continue to do so in an impressively large number of games from day one.

The nexus point between hardware and software is where the RTX 5090 thrives. When everything's running like it should I'm being treated to an unparalleled level of both image fidelity and frame rates.

It's when you look at the stark contrast between a game such as Cyberpunk 2077 running at 4K native with the peak RT Overdrive settings, and then with the DLSS and 4x Multi Frame Gen bells and whistles enabled, that it becomes hard to argue with Nvidia's focus on AI modeling over what it is now rather disdainfully calling brute-force rendering.

Sure, the 30% gen-on-gen 4K rendering performance increase looks kinda disappointing when we've been treated to a 50% bump from Turing to Ampere and then a frankly ludicrous 80% hike from Ampere to Ada. And, if Nvidia had been relying purely on DLSS upscaling to gild its gaming numbers, I'd have been looking at the vanguard of the RTX 50-series with a wrinkled nose and a raised eyebrow at its $2K sticker price.

The nexus point between hardware and software is where the RTX 5090 thrives.

But the actual gaming performance I'm seeing out of this card in the MFG test builds—and with the DLSS Override functionality on live, retail versions of games—is kinda making me a convert to this new AI world in which we live. I'm sitting a little easier with the idea of 15 out of 16 pixels in my games getting generated by AI algorithms when I'm playing Alan Wake 2 at max 4K settings just north of 180 fps, Cyberpunk 2077's Overdrive settings at 215 fps, and Dragon Age: Veilguard at more than 300 fps.

Call it frame smoothing, fake frames, whatever: it works from a gaming experience perspective. And it's not some laggy mess full of weird graphical artifacts mangled together in order to hit those ludicrous frame rates, either. Admittedly, there are times when you can notice a glitch caused by either Frame Gen or the new DLSS Transformer model, but nothing so game- or immersion-breaking that I've wanted to disable either feature and flip back to gaming at 30 fps or at lower settings.

There are also absolutely cases where the DLSS version looks better than native res and times where those extra 'fake frames' are as convincing as any other to the naked eye. Honestly you're going to have to be really looking for problems from what I've seen so far. And I've been watching side-by-side videos while they export, where you can literally watch it move one frame at a time; the frame gen options stand up incredibly well even under such close scrutiny.

Side-by-side images of Cyberpunk 2077 with the different DLSS AI models (Image credit: Future)

If that noggin'-boggling performance were only available in the few games I've tested it with at launch then, again, I would be more parsimonious with my praise. But Nvidia is promising the Nvidia App is going to be offering the DLSS Override feature for 75 games and apps to turn standard Frame Gen over to the new frame multiplier. And you still don't need to log in to the app to be able to flip the Multi Frame Generation switch.

And, rando PlayStation port aside, most of the games you're going to want to play over the next 12 months—especially the most feature-rich and demanding ones—will more than likely include Nvidia's full DLSS feature-set. Unless other deals take precedence… ahem… Starfield.

I will say the switch to the transformer model for DLSS hasn't been the game-changer I was expecting from the demos I witnessed at CES, but it's at the very least often better than the standard convolution neural network in terms of image quality. It's just that it will add in some oddities of its own to the mix and doesn't completely rid us of Ray Reconstruction's ghosting.

Don't get me wrong, more base-level rendering grunt would always be welcome, but to get to these sorts of fps numbers with pure rendering power alone is going to take a lot of process node shrinks, more transistors than there are stars in the sky, and a long, long time. Oh, and it'd probably cost a ton of cash, too.

Though even a little more raster power would push those AI augmented numbers up even further, and that's something which will certainly be in my mind as I put the rest of the RTX 50-series through its paces. I, for one, am a little concerned about the RTX 5070 despite those claims of RTX 4090 performance for $549.

The RTX 5090, though, is as good as it gets right now, and is going to be as good as the RTX Blackwell generation gets… until Nvidia decides it wants to use the full GB202 chip. Yields on TSMC's mature 4N node are surely pretty good this far down the line, eh?

Nvidia RTX 5090 Founders Edition graphics card on different backgrounds (Image credit: Future)

Literally impossible to beat with any other hardware on the planet.

And, oh, is it ever pretty. With all their comic girth, the RTX 3090 and RTX 4090 are just stupid-looking cards; I'm always taken aback whenever I pull one out of its box to stick in a PC. Being able to come back to the dual-slot comfort zone is testament to the over-engineering Nvidia has done with the Founders Edition, even if both the RTX 5090 cards I've tested have been some of the squealiest, most coil-whiney GPUs I've encountered in recent history. But your mileage and your PSU may vary; they certainly don't sound great with the test rig's Seasonic power supply.

Despite being somewhat of a loss-leader for Nvidia, this RTX 5090 Founders Edition is also likely to be as cheap as an RTX 5090 retails for over the next year. With every other AIB version sure to be bigger, and most of them more expensive, the Founders Edition is the card you should covet. And the one you will be disappointed about when you almost inevitably miss out on what will surely be slim inventory numbers.

The GPU at its heart might not be super exciting, but the potential of all the neural rendering gubbins Nvidia is laying the groundwork for with this generation could change that given time. Right now, however, it feels more like an extension of Ada, with the outstanding AI-augmented performance really a symptom of where we are at this moment in time.

Still, when it comes to the raw gaming experience of using this svelte new RTX 5090 graphics card, it's literally impossible to beat with any other hardware on the planet.

Nvidia RTX 5090: The RTX Blackwell architecture

Nvidia RTX Blackwell GPU architecture diagrams (Image credit: Nvidia)

As a layman, not a huge amount seems to have changed from the Ada architecture through to the GB202 Blackwell GPU. As I've said, on the surface it feels very much like an extension of the Ada Lovelace design, though that is potentially because Blackwell is sitting on the same custom TSMC 4N node, so in terms of core counts and physical transistor space there isn't a lot of literal wiggle room for Nvidia.

There are 21% more transistors in the GB202 versus the AD102, and a near-commensurate 23% increase in die size. Compare that with the move from the RTX 3090 to the RTX 4090: with the switch from Samsung's 8 nm node to this same 4N process, Ada's top chip gave us 170% more transistors but a 3% die shrink.

There are still 128 CUDA cores per streaming multiprocessor (SM), so the 170 SMs of the GB202 deliver 21,760 shaders. Though in a genuine change from Ada, each of those can be configured to handle both integer and floating point calculations. Gone are the dedicated FP32 units of old.

Though, interestingly, this isn't the full top-tier Blackwell GPU. The RTX 5090 has lopped off more than a full graphics processing cluster's worth of silicon, leaving around 2,800 CUDA cores on the cutting room floor. I guess that leaves room for a Super, Ti, or an RTX Blackwell Titan down the line if Nvidia deems it necessary.

You are getting the full complement of L2 cache, however, with near 100 MB available to the GPU. But then you are also seeing 32 GB of fast GDDR7 memory, too, on a proper 512-bit memory bus. That means you're getting a ton more memory bandwidth—78% more than the RTX 4090 could offer.
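
That 78% figure falls out of the bus width and the per-pin data rate. For the quick sketch below I've assumed 28 Gbps GDDR7 on the RTX 5090 and 21 Gbps GDDR6X on the RTX 4090; those speeds are widely quoted figures rather than anything stated in this review:

```python
def bandwidth_gbps(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin data rate / 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

rtx5090 = bandwidth_gbps(512, 28.0)   # assumed 28 Gbps GDDR7 -> 1792 GB/s
rtx4090 = bandwidth_gbps(384, 21.0)   # assumed 21 Gbps GDDR6X -> 1008 GB/s
print(rtx5090, rtx4090, f"{(rtx5090 / rtx4090 - 1) * 100:.0f}% more")   # ~78% more
```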

Neural Shaders

There are deeper, arguably more fundamental changes that Nvidia has made with this generation, however. Those programmable shaders have finally been given direct access to the Tensor Cores, and that allows for what the green team is calling Neural Shaders.

Previously the Tensor Cores could only be accessed using CUDA, but in collaboration with Microsoft, Nvidia has helped create the new Cooperative Vectors API, which allows any shader—whether pixel or ray tracing—to access the matrix calculating cores in both DX12 and Vulkan. This is going to allow developers to bring a bunch of interesting new AI-powered features directly into their games.

And it means AI is deeply embedded into the rendering pipeline. Which is why we do have a new slice of silicon in the Blackwell chips to help with this additional potential workload. The AI Management Processor, or AMP, is there to help schedule both generative AI and AI augmented game graphics, ensuring they can all be processed concurrently in good order.

Nvidia AMP silicon in action

(Image credit: Nvidia)

It's that Cooperative Vectors API which will allow for features such as neural texture compression, which is touted to deliver 7x savings against VRAM usage—ostensibly part of Nvidia's dedicated push to ensure 8 GB video cards still have a place in the future. But it also paves the way for RTX Neural Radiance Cache (to enhance lighting via inferred global illumination), and RTX Neural Materials, RTX Neural Skin, and RTX Neural Faces, which all promise to leverage the power of AI models to get us ever closer to photorealism. Or at least closer to the sort of image quality you'll see in offline-rendered films and TV.

The new 4th Gen RT Cores aren't to be left out, and come with a couple of new units dedicated to improving ray tracing. Part of that push is something called Mega Geometry, which massively increases the amount of geometry possible within a scene. It reminds me a whole lot of when tessellation was first introduced—the moment you turn off the textures and get down to the mesh layer in the Zorah demo, which showcases the tech, you're suddenly hit by what an unfeasible level of geometry is possible in a real-time scene.

This feature has largely been designed for devs on Unreal Engine 5 utilising Nanite, and allows them to ray trace their geometry at full fidelity. Nvidia has put so much store in Mega Geometry that it has designed the new RT Cores specifically for it.

DLSS 4 and Multi Frame Generation

Nvidia Multi Frame Generation timeline

(Image credit: Nvidia)

The final hardware piece of the RTX Blackwell puzzle to be dropped into the new GPU is Flip Metering. The new enhanced display engine has twice the pixel processing capability, and has been designed to take the load away from the CPU when it comes to ordering frames up for the display. The Flip Metering feature is there to enable Multi Frame Generation to function smoothly—displaying all those extra frames in between the rendered ones in good order is vital in order to stop it feeling "lumpy". That's not my phrase, that's a technical term from Nvidia's Mr. DLSS, Brian Catanzaro, and he should know.

In terms of the feature set, DLSS itself has had a potentially big upgrade, too. Previously it used a convolutional neural network (CNN) as the base model for DLSS, which is an image-focused architecture and made sense for something as image-focused as upscaling. But it's no longer the cutting edge of AI, so DLSS 4 has switched over to the transformer architecture you will be familiar with if you've used ChatGPT—the GPT bit stands for generative pre-trained transformer.

It's more efficient than CNN, and that has allowed Nvidia to be more computationally demanding with DLSS 4—though I've not really seen much in the way of a performance difference between the two forms in action.

Primarily it seems the transformer model was brought in to help Ray Reconstruction rid itself of the smearing and ghosting it suffers from, though it's also there for upscaling, too. Nvidia, however, is currently calling that a beta. Given my up and down experience with the transformer model in my testing, I can now understand why. It does feel very much like a v1.0 with some strange artifacts introduced for all the ones it helps remove.

I've saved the best new feature for last: Multi Frame Generation. I was already impressed with the original version of the feature introduced with the RTX 40-series, but it has been hugely upgraded for the RTX 50-series and is arguably the thing which will impress people the most while we wait for those neural shading features to actually get used in a released game.

It's also the thing which will really sell the RTX 50-series. We are still talking essentially about interpolation, no matter how much Jen-Hsun wants to talk about his GPU seeing four frames into the future. The GPU will render two frames and then squeeze up to three extra frames in between.

Using a set of new AI models it no longer needs dedicated optical flow hardware (potentially good news for RTX 30-series gamers), and is able to perform the frame generation function 40% faster and with a 30% reduction in its VRAM footprint. That flip metering system now means the GPU's display engine queues up each frame, pacing them evenly, so you get a smooth final experience.
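
To put some illustrative numbers on how the frame multiplier and frame pacing interact: with 4x MFG each rendered frame is followed by three generated ones, so the displayed frame rate is roughly four times the rendered one, and flip metering's job is to spread those frames evenly across each render interval. The sketch below is my own toy calculation with a made-up render rate, not Nvidia's model:

```python
def mfg_output(render_fps: float, multiplier: int = 4) -> tuple[float, float]:
    """Return (displayed fps, per-frame display interval in ms) for a given frame-gen multiplier."""
    displayed_fps = render_fps * multiplier
    frame_interval_ms = 1000.0 / displayed_fps
    return displayed_fps, frame_interval_ms

# A hypothetical ~46 fps rendered with DLSS upscaling becomes ~184 fps on screen,
# in the same ballpark as the Alan Wake 2 figures quoted later in this review.
fps, interval = mfg_output(46, 4)
print(f"{fps:.0f} fps displayed, one frame every {interval:.1f} ms")
```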

The 5th Gen Tensor Cores have more horsepower to deal with the load, and the AMP gets involved, too, in order to keep all the necessary AI processing around both DLSS and Frame Generation, and whatever else they get up to in the pipeline, running smoothly.

Nvidia RTX 5090: The performance

Nvidia RTX 5090 Founders Edition graphics card on different backgrounds

(Image credit: Future)

The raw performance of the RTX 5090 is relatively impressive. As I've mentioned earlier, I'm seeing around a 30% improvement in 4K gaming frame rates over the RTX 4090, which isn't bad gen-on-gen. We have been spoiled by the RTX 30- and 40-series cards, however, and that does make this bump seem a little less exciting.

The main increase is all at that top 4K resolution, because below that the beefy GB202 GPU does start to get bottlenecked by the processor. And that's despite us rocking the AMD Ryzen 7 9800X3D in our test rig—I've tossed the RTX 5090 into my own rig with a Ryzen 9 7950X in it and the performance certainly drops.

And in games where the CPU is regularly the bottleneck, even at 4K, the performance delta between the top Ada and Blackwell GPUs is negligible. In Homeworld 3 the 4K performance increase is just under 9%; even worse, at 1080p the RTX 5090 actually takes a retrograde step and drops 7% in comparison.

This is a graphics card built for DLSS, and as such if you hit 4K DLSS Quality settings you're actually rendering at 1440p.

Where the GPU is the star, however, the extra 4K frame rates are matched by the overall increase in power usage. This thing will drain your PSU: I measured the card pulling down nearly 640 W at peak during our extended Metro Exodus benchmark. The commensurate performance increase does follow, however, so performance per watt at 4K remains the same as the RTX 4090's.

But yes, it does start to fall down when you drop to 1440p and certainly 1080p. If you were hoping to smash 500 fps at 1080p with this card we might have to have a little chat. It will still draw a ton of power at the lower resolutions, too, which means its performance per watt metrics drop by 15%.

You might say that's not such a biggy considering you'll be looking to play your games at 4K with such a beast of a GPU, but this is a graphics card built for DLSS, and as such if you hit 4K DLSS Quality settings you're actually rendering at 1440p. That 30% 4K uplift figure is then kinda moot unless you're sticking to native rendering alone.
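
For reference, that 1440p figure comes from DLSS Quality's roughly two-thirds per-axis render scale; the 2/3 factor is the commonly quoted value for Quality mode rather than something stated in this review:

```python
def dlss_internal_res(out_w: int, out_h: int, scale: float = 2 / 3):
    """Internal render resolution for a given DLSS per-axis scale factor."""
    return round(out_w * scale), round(out_h * scale)

print(dlss_internal_res(3840, 2160))   # (2560, 1440) -- 4K output, Quality mode
```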

And sticking to native rendering is something you absolutely shouldn't do, because Multi Frame Generation is a game-changer in the most literal sense. The performance difference going from native, or even DLSS Quality, is stark. With Alan Wake 2 now hitting 183 fps, with a 102 fps 1% low, it's a glorious gaming experience. Everything in the graphics settings can be pushed to maximum and it'll still fly.

More importantly, the latency is only marginally higher than with just DLSS settings—the work Nvidia has done to pull that down with Multi Frame Generation is a marvel. As is the Flip Metering frame pacing. This is what allows the frames to come out in a smooth cadence, and makes it feel like you're really getting that high-end performance.

Cyberpunk 2077 exhibits the same huge increase in performance, and is even more responsive than Alan Wake 2, with just 43 ms latency when I've got 4x Multi Frame Generation on the go.

And even though Dragon Age: The Veilguard is pretty performant at 4K native, I'll happily take a 289% increase in perceived frame rate, especially when the actual PC latency on that game barely moves the needle. It's 28 ms at 4K native and 32 ms with DLSS Quality and 4x MFG.

Another benefit of the DLSS and MFG combo is that it pulls down the power and thermal excesses of the RTX 5090. I've noticed around a 50 W drop in power consumption with MFG in action, and that means the temps go down, and the GPU clock speed goes up.

Still, the overall combination of high power, high performance, and a new, thinner chassis means that the GPU temperature is noticeably higher than on the RTX 4090 Founders Edition. Running through our 4K native Metro Exodus torture test, the RTX 5090 Founders Edition averages 71 °C, with the occasional 77 °C peak. That's a fair chunk higher than the top Ada, though obviously that's with a far thinner chassis.

For me, I'd take that extra little bit of heat for the pleasure of its smaller footprint. What I will say, however, is that I did experience a lot of coil whine on our PC Gamer test rig. So much so, that Nvidia shipped me a second card to test if there was an issue with my original GPU. Having now tested in my home rig, with a 1600 W EVGA PSU, it seems like the issue arose because of how the Seasonic Prime TX 1600 W works with the RTX 5090, because in my PC the card doesn't have the same constantly pitching whine I experienced on our test rig.

The RTX 5090 being a beastly GPU, I've also taken note of what it can offer creatives as well as gamers. Obviously with Nvidia's AI leanings the thing can smash through a generative AI workload, as highlighted by the way it blows past the RTX 4090 in the UL Procyon image benchmark.

Though the AI index score from the PugetBench for DaVinci Resolve test shows that it's not all AI plain sailing for the RTX 5090. GenAI is one thing, but DaVinci Resolve's use of its neural smarts highlights only a 2.5% increase over the big Ada GPU.

Blender, though, matches the Procyon test, offering over a 43% increase in raw rendering grunt. I'm confident that extra memory bandwidth and more VRAM is helping out here.


PC Gamer test rig
CPU: AMD Ryzen 7 9800X3D | Motherboard: Gigabyte X870E Aorus Master | RAM: G.Skill 32 GB DDR5-6000 CAS 30 | Cooler: Corsair H170i Elite Capellix | SSD: 2 TB Crucial T700 | PSU: Seasonic Prime TX 1600W | Case: DimasTech Mini V2

Nvidia RTX 5090: The analysis

Nvidia RTX 5090 Founders Edition graphics card on different backgrounds (Image credit: Future)

The RTX 4090's 80% performance bump is living in recent memory, rent-free in the minds of gamers.

When is a game frame a real frame? This is the question you might find yourself asking when you hear talk of 15 out of 16 pixels being generated by AI in a modern game. With only a small amount of traditional rendering actually making it onto your display, what counts as a true frame? I mean, it's all just ones and zeros in the end.

So, does it really matter? For all that you might wish to talk about Multi Frame Generation as fake frames and just frame-smoothing rather than boosting performance, the end result is essentially the same: more frames output onto your screen every second. I do understand that if we could use a GPU's pure rendering chops to hit the same frame rates it would look better, but my experience of the Blackwell-only feature is that, oftentimes, it's really hard to see any difference.
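
Where does that 15-out-of-16 figure come from? It's the upscaler and the frame multiplier stacked together. The sketch below assumes DLSS Performance mode, which renders a quarter of the output pixels per frame; the review doesn't specify the mode, so treat it purely as an illustration of the arithmetic:

```python
# Fraction of displayed pixels that are traditionally rendered when combining
# DLSS upscaling with Multi Frame Generation.
upscale_rendered_fraction = 1 / 4   # assumed Performance mode: 1920x1080 rendered for a 4K output
rendered_frames_fraction = 1 / 4    # 4x MFG: one rendered frame out of every four displayed

rendered_pixel_fraction = upscale_rendered_fraction * rendered_frames_fraction
print(rendered_pixel_fraction)      # 0.0625 -> 1 pixel in 16; the other 15 are AI-generated
```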

Nvidia suggests that it would take too long, and be too expensive to create a GPU capable of delivering the performance MFG is capable of, and certainly it would be impossible on this production node without somehow making GPU chiplets a thing. It would be a tall order even just to match the performance increase the RTX 4090 offered over the RTX 3090 in straight rendering.

But that's the thing: the RTX 4090's 80% performance bump is living in recent memory, rent-free in the minds of gamers. Not that that sort of increase is, or should necessarily be, expected, but it shows it's not completely beyond the realms of possibility. It's just that TSMC's 2N process isn't even being used by Apple this year, and I don't think anyone would wait another year or so for a new series of Nvidia GPUs.

Though just think what a die shrink and another couple of years' maturity for DLSS, Multi Frame Gen, and neural rendering in general might mean for the RTX 60-series. AMD, be afraid, be very afraid. Or, y'know, make multiple GPU compute chiplets a thing in a consumer graphics card. Simple things, obvs.

Still, if the input latency had been an issue then MFG would have been a total non-starter and the RTX Blackwell generation of graphics cards would have felt a lot less significant with its rendering performance increase alone. At least at launch. The future-gazing features look exciting, but it's far too early to tell just how impactful they're going to be until developers start delivering the games that utilise the full suite of neural shading features.

Nvidia RTX 5090 Founders Edition graphics card on different backgrounds (Image credit: Future)

It would have certainly been a lot tougher for Nvidia to slap a $2,000 price tag onto the RTX 5090 and get away with it. With MFG it can legitimately claim to deliver performance twice that of an RTX 4090. Without it, a sole 30% 4K performance bump wouldn't have been enough to justify a 25% increase in pricing.
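
Put crudely, without MFG the value proposition barely moves: a 30% performance uplift against a 25% price rise works out at only a few percent more frames per dollar. A quick sketch using the MSRPs implied in this review ($1,599 for the RTX 4090, $1,999 for the RTX 5090):

```python
old_price, new_price = 1599, 1999           # RTX 4090 MSRP, RTX 5090 MSRP
perf_uplift = 1.30                          # ~30% higher native 4K performance, per this review

price_increase = new_price / old_price      # ~1.25
value_change = perf_uplift / price_increase # frames-per-dollar ratio vs the RTX 4090
print(f"price +{(price_increase - 1) * 100:.0f}%, value x{value_change:.2f}")   # ~+25%, ~1.04x
```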

What I will say in Nvidia's defence on this is that the RTX 4090 has been retailing for around the $2,000 mark for most of its existence, so the real-world price delta is a lot smaller, at least compared to the RTX 5090's MSRP. How many actual retail cards we'll see selling for this $1,999 MSRP, however, and for how long, is tough to say. It's entirely likely the RTX 5090's effective selling price may end up closer to the $2,500 or even $3,000 mark once the AIBs are in sole charge of sales as the Founders Edition stock runs dry.

I can see why Nvidia went with the RTX 5090 first as the proponent of Multi Frame Generation. The top-end card is going to benefit far more from the feature than cards lower down the stack, with less upfront rendering power to call on. Sure, Nvidia claims the RTX 5070 can hit RTX 4090 performance with MFG, but I'm going to want to see that in a few more games before I can get onboard with the claims.

The issue with frame generation has always been that you need a pretty high level of performance to start with, or it ends up being too laggy and essentially a bit of a mess. The most demanding games may still be a struggle for the RTX 5070 even with MFG, but I guess we'll find out soon enough come February's launch.

Until then, I'll just have to sit back and bask in the glorious performance Nvidia's AI chops are bringing in alongside the RTX 5090.

]]>
https://www.pcgamer.com/hardware/graphics-cards/nvidia-geforce-rtx-5090-fe-review/ orhaiA4iKmdwY47UujuHM6 Thu, 23 Jan 2025 14:05:30 +0000