<![CDATA[ Latest from PC Gamer UK in Hardware ]]> https://www.pcgamer.com 2025-02-14T17:42:03Z en <![CDATA[ 3DMark benchmarks show off AMD's big daddy Strix Halo laptop chip in action and I'm a little underwhelmed ]]> Strix Halo, AMD's upcoming and extremely large APU, has finally seen some benchmarks in 3DMark Time Spy. These early results are in line with its Geekbench debut last December. This is not only a potential affirmation of what we previously saw but a sign its performance may be a little worse than we expected (hoped).

Over on the Baidu forums (as spotted by Wccftech), two pictures were posted, one of which shows what was claimed to be the AMD Radeon 8050S integrated GPU, though the CPU's OPN actually suggests it is the 8060S-equipped AI Max+ 395. This is more than likely just a prelaunch software or database issue. This device managed to achieve a GPU score of 10,106 and a CPU score of 5,571 points in 3DMark Time Spy.

For clarity, that GPU score places what is currently assumed to be the AMD Ryzen AI Max+ 395 around 2,000 points ahead of the average laptop RTX 4050 result on 3DMark's website, and just a few hundred points behind the average laptop RTX 4060 result. 3DMark is a pretty good benchmark for gauging graphical power, though it's important to note it's not the be-all and end-all of gaming performance. A wider suite of games would give a better idea, but this is nevertheless a good starting point for figuring out where AMD's top APU lands.

Unlike the laptops measured above, AMD's APU goes without a dedicated GPU, which means that if the score holds firm in actual testing, we might be able to expect RTX 4060 laptop performance out of this chip when it lands in gaming laptops, and in at least one tablet from Asus.

We were hoping for a little more from the big leap in graphics cores, enough to make the larger APU worth it over more power-savvy chips aimed at handhelds, like the Ryzen AI 9 HX 390. AMD previously suggested the top-end chip would perform similarly to an RTX 4070 gaming laptop (well, tablet) but these early figures put it closer to an RTX 4060 laptop. Discrete-class performance from an integrated GPU is still impressive, but not quite what we were expecting.

It reportedly took AMD four goes to get the Strix Halo APU right, and that's because it's more bespoke than it might appear at first glance. Strix Halo uses its own Zen 5-based CPU CCDs and a new way of interconnecting them, as the method used to get the Ryzen 9 9950X up and running ran into power-efficiency limitations.

That improved efficiency could make the chip less power-hungry, which is a natural boon for the battery life of gaming laptops. That being said, the configurable wattage of this chip goes all the way up to 120 W, and any change in the power budget of a given machine could make for large fluctuations in performance.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
https://www.pcgamer.com/hardware/gaming-laptops/3dmark-benchmarks-show-off-amds-big-daddy-strix-halo-laptop-chip-in-action-and-im-a-little-underwhelmed/ mnzitUW2hWoivpkX3VJZWH Fri, 14 Feb 2025 17:42:03 +0000
<![CDATA[ MSI MPG 272URX review ]]> There was a bit of a buzz when a new class of 27-inch 4K gaming OLED monitors emerged from a flurry of press releases around CES this year. Personally, I struggled to get fired up for reasons we'll come to shortly. But the new MSI MPG 272URX has landed on my desk, so it's time to find out if I'm missing a trick.

Physically, the 272URX looks like a dead ringer for MSI's larger 32-inch 4K OLED monitors, including the MSI MPG 321URX we reviewed last summer, just scaled down slightly. So, it's a reasonably slick-looking monitor with slim bezels and a smattering of RGB lighting on the rear.

It offers a similar collection of inputs, including dual HDMI 2.1 ports and DisplayPort. The latter, however, is upgraded to DP 2.1 this time around, and in the full UHBR20 spec, which allows for 4K 240 Hz without compression, albeit the only Nvidia GPUs with DP 2.1 support are the very latest RTX 50 cards. AMD's Radeon RX 7000 series also supports the 2.1 standard.
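For a sense of why the full UHBR20 link matters here, below is a rough back-of-the-envelope sketch. It assumes four lanes, 128b/132b coding for DP 2.1 and 8b/10b for DP 1.4, and it ignores blanking overhead, so the real requirement is a little higher than the active-pixel figure shown.

```python
# Rough check (not an official spec calculation) of why 4K 240 Hz needs
# DisplayPort 2.1 UHBR20 to run without Display Stream Compression.
# Blanking overhead is ignored, so real requirements sit a bit above these numbers.

def active_pixel_rate_gbps(width, height, refresh_hz, bits_per_pixel):
    """Data rate needed just for the visible pixels, in Gbit/s."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

# 10-bit-per-channel RGB = 30 bits per pixel
needed = active_pixel_rate_gbps(3840, 2160, 240, 30)

# Usable payload after channel coding on 4 lanes
uhbr20_payload = 4 * 20 * (128 / 132)   # DP 2.1 UHBR20: ~77.6 Gbit/s
hbr3_payload = 4 * 8.1 * (8 / 10)       # DP 1.4 HBR3: ~25.9 Gbit/s

print(f"4K 240 Hz, 10-bit colour needs ~{needed:.1f} Gbit/s for active pixels")
print(f"UHBR20 payload ~{uhbr20_payload:.1f} Gbit/s -> fits uncompressed")
print(f"HBR3 payload ~{hbr3_payload:.1f} Gbit/s -> needs DSC")
```

Even with 10-bit colour, the active pixels alone overwhelm an HBR3 link, which is why older DisplayPort connections lean on Display Stream Compression at these refresh rates.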

Alongside that you get USB-C with 98 W power delivery and a two-port USB-A hub. So, connectivity is well covered. Actually, a lot of the panel specifications look similar to earlier 32-inch 4K OLEDs from MSI. So along with the 3,840 by 2,160 resolution, there's that 240 Hz refresh and 0.03 ms response, all really nice numbers even if they're identical to the 32-inch alternative.

MSI MPG 272URX specs


Screen size: 27-inch
Resolution: 3,840 x 2,160
Brightness: 250 nits full screen, 1,000 nits in a 3% window
Response time: 0.03 ms
Refresh rate: 240 Hz
HDR: DisplayHDR True Black 400
Features: 4th Gen QD-OLED panel, HDMI 2.1 x2, DisplayPort 2.1, USB-C with 98 W PD
Price: $1,099 | £999

The same goes for quoted panel brightness. At 250 nits full screen and 1,000 nits in a 3% window, there's no advance over the 32-inch 4K model, which uses Samsung's 3rd Generation QD-OLED panel technology.

There's been mixed messaging from various monitor makers over the status of this new class of 27-inch 4K Samsung QD-OLED. They're all using the same panel, but some are marketing it as 4th Generation.

Adding to the confusion, at CES this year both LG and Samsung showed off new TV-spec large OLED panels capable of much higher full-screen brightness up around 400 nits and peak brightness of 4,000 nits in a 3% window thanks to new quantum dot materials and a so-called five-layer tandem OLED structure.


While the MSI MPG 272URX does indeed get Samsung's new-for-2025 4th Gen monitor panel tech, which sports the new QD material and five-layer tandem OLED structure, the higher pixel density limits brightness.

Our understanding is that without the new panel tech, this high density 4K panel would actually have been less bright than previous QD-OLED monitors. Indeed, there's also a new 27-inch 1440p QD-OLED panel that ups full screen brightness to 300 nits thanks to the new QD-OLED technology being applied to a lower pixel density.

Firing up the MSI MPG 272URX confirms that the new-for-2025 QD-OLED tech looks very familiar, at least in this implementation. Pretty much all the strengths and weaknesses of the older 3rd Gen QD-OLEDs are apparent from the get go.


Oh, with one exception. 4K on a 27-inch panel makes for a very tight pixel density of 166 DPI, up from 140 DPI on those 32 inchers. If nothing else, that elevated pixel density puts to bed any remaining issues with font rendering on this monitor. Text looks really nice.
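If you want to check those density figures yourself, the quick sums below assume the physical diagonals that typically sit behind the marketed sizes, roughly 26.5 inches for the "27-inch" class and 31.5 inches for the "32-inch" class, which is how you land on the quoted 166 and 140 DPI.

```python
# Quick pixels-per-inch sanity check for the quoted density figures.
# Assumes the usual physical diagonals behind the marketed panel sizes:
# ~26.5 in for the "27-inch" class and ~31.5 in for the "32-inch" class.
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch along the panel diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

print(f"27-inch class 4K: {ppi(3840, 2160, 26.5):.0f} PPI")  # ~166
print(f"32-inch class 4K: {ppi(3840, 2160, 31.5):.0f} PPI")  # ~140
```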


It's true that Samsung has retained the slightly odd triangular as opposed to vertically striped RGB subpixel structure. On panels with lower pixel density, that resulted in text fringing and a slight softening of image detail. But here, with that 166 DPI density, it's all good.

Fonts look super crisp, image content is incredibly sharp. Of course, the same is largely true of the 32-inch QD-OLED class. Yes, this 27-incher is a tiny bit sharper. But it's a subtle upgrade in that regard and one which comes with a rather more obvious downgrade in panel size. At this price point, 27 inches feels a bit stingy.

Moreover, if the improvement in font rendering with the jump from 32-inch to 27-inch 4K is marginal on the Windows desktop, it's pretty much invisible in-game. If for whatever reason you actively want a physically smaller display perhaps for ergonomic reasons, great. But don't go buying this monitor because you think it's going to make a 32-inch 4K look a little fuzzy. It absolutely doesn't.


Anywho, getting back to that carried over QD-OLED 3rd Gen panel vibe, this MSI betrays the same marginally oversaturated colors and warm temperature of the 32-inch model and frankly many other QD-OLED panels. It's not absolutely ideal, but it's something that you adjust to and can mitigate to an extent via calibration.


Likewise, the slightly purple-grey tint of the actual QD-OLED panel itself when reflecting ambient light remains present. In really bright ambient conditions, it compromises black levels a little. But again it's only a minor distraction even then, and in typical ambient conditions and especially at night, it's a non-issue.

While we're talking relative compromises, it's immediately obvious from the full-screen brightness that this new 4th Gen QD-OLED tech for monitors as applied here isn't a big step or really any step at all in that regard. That means the full-screen brightness performance is OK, but only just. Actually, MSI has been pretty conservative with its ABL or automatic brightness limiter in SDR mode. If you crank up the SDR brightness in HDR mode, you actually get a punchier result.

The only problem is then that the SDR color mapping in HDR mode isn't all that good. So, you need to use the sRGB SDR mode to get accurate SDR colors and that in turn means compromising on overall punchiness.


Speaking of HDR, that's where the MSI MPG 272URX really sings. Night time scenes and indoor in-game locations look utterly stellar. A good example is a section of Cyberpunk 2077 set at night on a raised metal gantry next to a rocket. Along the sides of the metal walkway are dotted a few boxed-in fluorescent light installations.

And, oh my goodness, those lights absolutely pop. They're incredibly bright next to the dark background, but also have crisp, sharp borders. No LCD comes close to this performance, even one with a few thousand dimming zones. There's always some light bleed, always some blooming.

That said, when rendering brighter outdoor game scenes, this new MSI is no different to all other recent OLED monitors we've reviewed. It can look a little dull because even this latest OLED tech isn't capable of driving large sections of the panel really hard.

HDR aside, the other big advantage over LCD is obviously response. By way of comparison, I've been playing with a 520 Hz IPS monitor this week, too. While the IPS has a very slight latency advantage, in subjective terms it's nowhere near the OLED when it comes to clarity, motion blurring, and color stability in motion. It's just no contest.


With this monitor, you don't have to worry about overshoot, smearing, inverse ghosting, any of the stuff that all fast IPS and VA LCD monitors suffer from to at least some extent. Like pretty much all OLED panels, the 272URX's pixel response is essentially a solved problem.

What almost certainly isn't a done deal, however, is OLED burn-in. This is a really tricky subject on which to draw definitive conclusions. But there are three things we can say with confidence. First, MSI has equipped the 272URX with a full suite of burn-in mitigation features, including pixel shifting, logo and taskbar detection and various panel refresh cycles.

Second, that won't absolutely guarantee you'll never have a problem with burn-in. Third, MSI provides a three-year burn-in warranty, so you are covered for at least that long. It's also worth noting that Samsung says this latest QD-OLED panel tech is even more durable than before, but for now that's just a claim. Beyond that, it's very hard to say what might happen with long term ownership other than it will likely depend on your usage and how you set up things like the Windows interface.

Buy if...

You want crispy visuals and great fonts: This is the first time we've seen 4K on a 27-inch OLED panel and the pixel density is certainly sweet.

Don't buy if...

You want a cinematic experience: For the money, this is not a terribly large monitor.

All of which means this new MSI QD-OLED is a largely similar proposition to other QD-OLEDs we've seen. The HDR experience is mostly stunning, though it does disappoint that OLED technology still isn't capable of being punchier across larger sections of panel.

The SDR experience is mostly fabulous, but a little more compromised. Relevant to both modes are the new OLED TV panels announced at CES. Samsung has implied their boosted brightness will trickle down to gaming monitors in the near future. So, if you have any concerns over full-screen brightness, it would be worth waiting until those panels appear.

More specifically regarding the 272URX, I'm not convinced by the 27-inch form factor. The benefits over 32 inchers in terms of clarity, image detail and font rendering are marginal. At the same time, the larger 32-inch form factor scores for both gaming immersion and desktop working space in Windows.

However, if you prefer the smaller 27-inch form factor, then you'll love this monitor. It's super sharp and gives you all the existing benefits of QD-OLED tech. So, if it's pixel density above all else you've been waiting for, the MSI MPG 272URX might be for you. In all other regards, it's a mostly familiar OLED experience despite the new QD-OLED panel technology, and that includes full-screen brightness that's barely good enough. If you're after something brighter, look out for those upcoming 4th Gen 1440p QD-OLED models, which we hope to review soon.

]]>
https://www.pcgamer.com/hardware/gaming-monitors/msi-mpg-272urx-review/ VePHev7NsQJUu9r5Q2CCmd Fri, 14 Feb 2025 17:03:42 +0000
<![CDATA[ I have seen the future, and it's this 3D-printed air raid siren honking its baleful tones over my neighbourhood as the waters rise ]]>

Some people look at a 3D printer and think "this will make useful parts for my projects." Some people hear an air raid siren, and assume there must be an historical war re-enactment taking place nearby.

I am the third type of person, who hears a 3D-printed air raid siren and experiences a flash forward to my apocalyptic future yet to come. Bear with me here folks, I'm cooking.

Youtuber Mark Makies has created a 3D-printed air raid siren. This you probably know, if you viewed the video above (via Hackaday). What I'm not entirely sure of is why it sounds like both a comedy bit in once-popular children's TV show Noddy's Toyland Adventures and the sound of impending doom, all at once.

It's a cool project, anyway, and I'm all down for that. A combination of cheerfully-coloured 3D-printed components and RC parts have been employed to make something both charming and terrifying, and for this Mark should be applauded.

Internally, a brushless motor controlled by a speed controller turns a rotor, which pulses air pressure inside a stator. The slots inside the stator (stay with me, this is technical) make a wooooooo sound through the honking trumpet, and cause the hairs to stand up on the back of my neck. Something like that, anyway.

Impressive. A potentiometer attached to the speed controller lets the pitch be adjusted by ramping the speed of the motor up and down, meaning those wooooos can become weeeees, and so forth. And for some reason, it reminds me of my youth.
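For the curious, the pitch of a mechanical siren like this follows a simple relationship: every time a rotor port sweeps past a stator slot you get one puff of air, so the tone is roughly the rotation rate multiplied by the number of ports. The port count in the sketch below is purely an assumption for illustration, not a detail taken from Mark Makies' design.

```python
# Pitch of a mechanical siren: each time a rotor port lines up with a stator slot,
# a puff of air escapes, so the tone frequency is (revolutions per second) x (ports).
# The port count is an assumed figure for illustration, not from this particular build.

def siren_pitch_hz(rpm, ports):
    """Fundamental frequency of the siren tone in Hz."""
    return (rpm / 60.0) * ports

ASSUMED_PORTS = 8  # hypothetical; the real rotor may differ
for rpm in (1500, 3000, 6000):
    print(f"{rpm:>4} rpm -> {siren_pitch_hz(rpm, ASSUMED_PORTS):.0f} Hz")
```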

I grew up in a small town just outside London, and every Tuesday morning a WW2 air raid siren would go off, waking me from my slumber/hangover. To this very day, I know not why. Friends would discuss it over an afternoon pint. Why does the siren sing for thee?

Still, it was both a useful way to tell me I was late for work, and a good excuse to check the skies to make sure the end times had not begun while I slept, blissfully unaware of my potential fate. The world kept turning, and the siren, it seemed to sing for no reason at all.

This video takes me back to those times. Ah, the halcyon days. Now, I fear that one day, I will hear the siren again. Hopefully it's a brightly coloured one, at least.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
https://www.pcgamer.com/hardware/i-have-seen-the-future-and-its-this-3d-printed-air-raid-siren-honking-its-baleful-tones-over-my-neighbourhood-as-the-waters-rise/ CQBdkMcPuVdKhisACBVEyQ Fri, 14 Feb 2025 17:00:02 +0000
<![CDATA[ Arm reportedly plans to make its own CPUs from this summer with future chips said to be powering a revolutionary Jony Ive-designed AI device ]]> Is Arm planning to make its own chips and not just sell rights to its IP and CPU designs to other companies? So says the FT in what would be a hugely disruptive development, if true.

In fact, the FT claims Arm could unveil its first in-house processor as early as this summer with future chips powering a revolutionary AI strategy—including a new Jony Ive-designed personal device.

The first chip is said to be a server CPU, so it won't be going into your next gaming PC. But it's still very big news with all kinds of implications.

The FT says the move is part of a broader plan by Arm's Japanese owner Softbank to move heavily into, yup you guessed it, AI, with a planned $500 billion to be spent on infrastructure in partnership with OpenAI.

That initial in-house Arm chip is actually said to be a server CPU that can be customized for clients, the most notable of which is claimed to be Meta. It's not clear how that chip, which does not appear to be overtly AI-aligned, fits in with Softbank's broader strategy for Arm.

That said, the FT mentions how Arm's move could be part of plans by former Apple designer Jony Ive in partnership with OpenAI and Softbank to create a new AI-powered personal device with a revolutionary, highly intuitive interface.

Back on the humble old dumb PC, Arm moving into making its own chips will surely only serve to accelerate the long-mooted annexation of the PC, with Intel and AMD's x86 processors eventually usurped by Arm chips.

That's been predicted for decades and yet never actually materialised, though Qualcomm's Snapdragon X chips have been the most plausible usurpers yet. Meanwhile, Nvidia, which recently tried and failed to buy Arm, is also said to be planning a new PC chip of its own based on Arm IP.

It's worth noting that Arm currently licenses both its instruction set and actual CPU designs. But it doesn't actually commission the production of any chips itself.

What's more, it's not clear what the implications might be for Arm's ongoing fight with Qualcomm. During court fisticuffs with Qualcomm late last year, Arm pointed out that it has never built chips itself. However, it also said it is always considering new strategies for the future.

Anyway, this is ultimately a case of wait and see. Will Arm do its own chips? Will they eventually go into PCs? Could Arm become a major player in AI? Could your smartphone be replaced by a whole new device paradigm powered by an Arm-made AI chip? Honestly, who knows!


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/processors/arm-reportedly-plans-to-make-its-own-cpus-from-this-summer-with-future-chips-said-to-be-powering-a-revolutionary-jony-ive-designed-ai-device/ 5bxuB3Rd7f7bqB6HfFVYCN Fri, 14 Feb 2025 16:20:30 +0000
<![CDATA[ AOC Agon Pro AG276FK review ]]> How much do you really, truly care about latency and motion fluidity? The AOC Agon Pro AG276FK is hoping it's a lot. Because this 520 Hz gaming monitor costs plenty, but it only gives you 1080p resolution.

US pricing has yet to emerge. But the AOC Agon Pro AG276FK is about £500 in the UK, implying a likely sticker of around $550. If so, it'll be priced right next to the likes of the ASRock Phantom Gaming PG27FFX2A, which is a dead ringer by most specification metrics.

What we're dealing with here is a 27-inch 1080p IPS gaming monitor that's being sold unambiguously on speed. Along with the nosebleed-inducing 520 Hz refresh rate, AOC rates this monitor's pixel response at 0.5 ms GTG and an eye-popping 0.3 ms for MPRT. That is awfully quick.

Such figures rarely if ever map well to reality. But they're useful as comparators and AOC is certainly positioning this monitor as being one of the fastest IPS panels out there. Of course, fast for an IPS is nothing for an OLED monitor, pretty much all of which are rated at 0.03 ms, an order of magnitude quicker on paper. But then good luck finding a 520 Hz OLED monitor for 500 bucks. They barely exist at any price.
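To put those ratings in context, here's a quick sketch comparing how long each refresh lasts at various rates with the quoted response times; the point is that at 520 Hz a pixel transition has to complete well inside a roughly 1.9 ms window.

```python
# Rough comparison of refresh window vs quoted pixel response times.
# A panel has about (1000 / refresh_rate) milliseconds per frame;
# the slower the pixel transitions, the more of that window is smeared.

def frame_time_ms(refresh_hz):
    """Duration of a single refresh cycle in milliseconds."""
    return 1000.0 / refresh_hz

for hz in (60, 144, 240, 520):
    print(f"{hz:>3} Hz -> {frame_time_ms(hz):.2f} ms per refresh")

quoted_gtg_ms = 0.5     # AOC's grey-to-grey rating for this panel
typical_oled_ms = 0.03  # usual OLED rating, for comparison
print(f"Quoted GTG response: {quoted_gtg_ms} ms "
      f"({quoted_gtg_ms / frame_time_ms(520):.0%} of a 520 Hz refresh)")
print(f"Typical OLED rating: {typical_oled_ms} ms")
```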

AOC Agon Pro AG276FK specs


Screen size: 27-inch
Resolution: 1,920 x 1,080
Brightness: 400 nits full screen
Response time: 0.3 ms MPRT, 0.5 ms GTG
Refresh rate: 520 Hz
HDR: HDR400
Features: IPS panel, HDMI 2.0 x2, DisplayPort 1.4 x2
Price: $550 (estimated) | £499

Elsewhere, AOC says this monitor is good for 400 nits and has HDR400 certification. To be clear, this is not a true HDR panel. There's no local dimming and it's not capable of high dynamic range rendering. But it will decode an HDR signal correctly. That's just about better than nothing, especially given HDR sizzle really isn't what this monitor is about.

Apart from the towering refresh rate, this AOC justifies its price point with a well-built, all-alloy stand that offers a full range of adjustment including height, tilt, pivot and swivel. Design-wise, AOC has gone for a quirky asymmetric vibe, with slim bezels on three sides of the 27-inch IPS panel, a smattering of RGB lighting, plus a slide-out headphone hanger on the right-hand bezel.

For connectivity there are two DisplayPort 1.4 connections capable of the maximum 520 Hz refresh rate. Admittedly, the pair of HDMI 2.0 sockets are only good for 240 Hz, but they're primarily there for console connectivity which only requires 120 Hz.


Rounding out the core features are a comprehensive OSD menu that includes overdrive controls and low latency modes, plus AOC's GMENU app that provides access to much of the OSD functionality within Windows.

Ultimately, there's nothing too exotic about this monitor on paper bar the refresh rate and response. So, the question is just how fast does it feel? Oh, and just how bad does 1080p look on a relatively large 27-inch panel?


To address that second point, the pixel density works out to just 82 DPI. This week, I've also been playing with one of the new breed of 4K 27-inch OLED panels. It offers in excess of twice that DPI figure. And, boy, you really can see the difference. On the Windows desktop, fonts look awfully craggy and the whole panel has a pretty pixelated look.


Actually in-game, however, it's not that bad. For sure, it's not as sharp as a 1440p 27-inch monitor, let alone 4K. But the detail level is tolerable. And, of course, the lower resolution means you've much more chance of hitting that 520 Hz refresh rate in terms of actual frame rates.

Well, much more chance in some games and with some GPUs. Ultimately, this is a display designed for esports, for online shooters. So, you can hit 520 fps-plus in something like Counter-Strike 2. But you're not going to see frame rates like that in, say, Cyberpunk 2077 with ray tracing enabled, probably not even with a really high-end Nvidia GPU.


Speaking of Nvidia GPUs, the new RTX 50 series with its Multi Frame Generation tech will certainly help boost frame rates in some games up nearer this monitor's refresh rate. The problem is, while you'll get the motion fluidity, you won't get the other important benefit of 520 Hz refresh, namely low latency.


Latency depends on fully rendered frames in the 3D pipeline, not frames guesstimated with AI trickery. So Frame Gen isn't going to help you get the full benefit of the AOC Agon Pro AG276FK.

Anyway, if you can get your favourite shooter running up around 500 Hz, the sense of immediate response is very sweet. I tend to find the returns diminish above 240 Hz or so. But really competitive esports players will definitely appreciate just how instant this panel feels in terms of response to inputs. It's pretty electric.


The other factor in a sense of speed is obviously pixel response. Here, qualifiers are needed. For an LCD monitor, this thing is seriously quick. There are four levels of pixel overdrive available in the aforementioned OSD menu. Happily, the fastest mode is actually usable.

That's not always the case. With a lot of gaming panels you'll find the max overdrive mode is a mess of overshoot and inverse ghosting. Here, a whiff of overshoot is evident when you wiggle an app window around on the Windows desktop. But actually in-game, it's barely visible. Certainly, there's little to no motion color shift, something that can be pretty distracting.


So, this AOC is about as quick as current LCD technology gets. What it's not is as quick as an OLED. Of course, even the cheapest OLEDs are about $100 more expensive and won't get much beyond 240 Hz. So, to some extent you have to decide where your preferences lie. Personally, I prefer the overall compromise of a 240 Hz 1440p panel. But then I'm not a really serious esports aficionado. The days when I was actually any good at Counter-Strike are sadly long behind me.

Speed aside, this is a very nicely calibrated monitor in sRGB SDR mode. It looks punchy and vibrant, the colors are accurate, and it's just a very nice thing bar that blocky pixel density. The HDR mode is well set up, too, including nicely executed mapping of SDR tones. So, if you want, you could leave this monitor in HDR mode all the time and get great image quality for all content types.

Buy if...

You want high refresh above all else: If sky-high refresh and ultra-low latency are your thing, this AOC absolutely delivers.

Don't buy if...

You want a great all-round computing experience: 1080p on a 27-inch panel is not a recipe for crispy fonts or even great visuals in most games.

Personally, I wouldn't bother with HDR at all. That's because, as mentioned, this isn't a true HDR monitor and where both SDR and HDR versions of any given content are available, there's little benefit in choosing the latter.

All told, then, I feel pretty well disposed toward the AOC Agon Pro AG276FK despite it not being my kind of monitor. I'm not majorly into esports these days, so 240 Hz or thereabouts is plenty for the vast majority of my gaming, and I'd much, much prefer something with better pixel density.

But that's my remit and not necessarily yours. If yours does indeed major on sky-high refresh and ultra-low latency, this AOC definitely delivers and does so with excellent image quality given the limitations of this panel type. It's not for me, this AOC. But if you're seriously into esports and you don't care about general Windows performance, it might just be for you.

]]>
https://www.pcgamer.com/hardware/gaming-monitors/aoc-agon-pro-ag276fk-review/ hrXLmiUWYXqhkZP6acLzLP Fri, 14 Feb 2025 16:19:16 +0000
<![CDATA[ AMD's Frank Azor says no 32 GB RX 9070 XT for you, probably because a 32 GB mid-range GPU didn't make much sense in the first place ]]> Sometimes, in this topsy-turvy PC hardware world of ours, a rumour turns up that really makes us scratch our heads. This week, it was rumblings about a 32 GB variant of AMD's upcoming RDNA 4 graphics card, the RX 9070 XT. You can rest easy in your beds tonight, though, because AMD's Frank Azor has categorically denied its existence. For now, at least.

Alright, let me qualify that. It's certainly possible that AMD has a 32 GB test card rattling around in its labs, because hey, everything gets a little crazy on a Friday and the team thought it might be fun. But as for one turning up on sale in March? Nuh-uh.

That's it. That's the tweet. It's no surprise that AMD isn't planning on cramming the RX 9070 XT with an excessive amount of VRAM, because it's been very open for some time about its humble aims to bring mid-range cards to the market this generation, not high end.

32 GB of VRAM would only really make sense for AI and high-level rendering purposes, not a stated goal of this generation of AMD GPUs. And while we've certainly seen controversy regarding a perceived lack of VRAM in cards like the RTX 4060 Ti in the past, 32 GB would be over-egging the pudding significantly for a mid-range GPU.

Plus, it's fairly expensive. VRAM, that is. Stuffing a card full of it while trying to keep the price competitive with Nvidia's offer of the RTX 5070 for $549 makes about as much sense as, ooh, I don't know, offering one with a gold-plated cooling shroud for bragging rights.

Although, anything's possible. Just wanted to add that qualifier, in case AMD announces the RX 9070 XT Gold Standard edition in a week. And hey, companies do be doing crazy things sometimes. One for the future maybe? Perhaps. Just not now.

This batch of cards will need to be high-performing (for their product category) and relatively cheap to sell well, and that's something that AMD appears to be keenly aware of.

At least, we're sort of hoping so at this point. Given the claimed advantages of DLSS 4 Multi Frame Generation (we've yet to test it on a mid-range card, although we've been diving into the figures for the RTX 5080 and RTX 5090), the RX 9070 XT and RX 9070 look like they'll have an uphill battle on their hands this generation to grab some market share.

And 32 GB of VRAM, for no real reason other than a marketing-friendly number on the box? That really doesn't seem like the play. Anyway, here's the word, straight from the horse's mouth. You can all go back to your bunks.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/amds-frank-azor-says-no-32-gb-rx-9070-xt-for-you-probably-because-a-32-gb-mid-range-gpu-didnt-make-much-sense-in-the-first-place/ YRfbxTYchxiFPVgF7oEfBB Fri, 14 Feb 2025 16:04:56 +0000
<![CDATA[ Youtube's CEO says it's the 'new television' with 1 billion TV viewers daily, and apparently people watch Shorts on their TV now ]]> I'm old enough to remember a pre-YouTube era, where children played happily in the fields and everyone wore fetching hats to church. Now, though, it's a staple of so many of our lives, mine included. And according to YouTube CEO Neal Mohan, it's not just dominating the arena of our phone and desktop PC video watching, but on track to take over television, too.

In a blog post on the, err, YouTube official blog, Mohan takes a moment to mark the internet video sensation's 20th birthday, with his four "big bets" for YouTube in 2025 (via Sweclockers). "YouTube will remain the epicenter of culture", he says. Heavens help us all.

"For more and more people, watching TV means watching YouTube. Viewers are watching, on average, over 1B hours of YouTube content on TVs daily, and TV is now the primary device for YouTube viewing in the U.S" says Mohan.

"It’s interactive and includes things like Shorts (yes, people watch them on TVs), podcasts, and live streams, right alongside the sports, sitcoms and talk shows people already love."

I feel like following a statement on Shorts viewership with a "yes, really" qualifier is perhaps a tad defeatist, but I'll admit that even I watch the occasional YouTube Short. On my TV, though? That's sacrilege, surely.

In other news, YouTube TV apparently has more than eight million subscribers and YouTube Premium has over 100 million happy payees. Given YouTube's increasingly aggressive (and incredibly lucrative) policy of filling my watching hours with ads, I've considered paying for one myself on occasion.

Ah, who am I kidding. That'd eat into my Steam budget, and I feel it's kinda like letting that sort of ad-based incentive win. Anyway, Mohan also boldly states that "YouTubers are becoming the startups of Hollywood," as would-be filmmakers are starting off on the platform in the hope of graduating to the really, really big screen:

"Creators are bringing that startup mindset to Hollywood: leaning into new models of production, building studios to elevate their production quality, and exploring new creative avenues.

"We're committed to meeting creators where they are with tools and features that power their businesses and communities" he continues. "We’ll continue to support their growth through more traditional revenue streams like ads and YouTube Premium, while introducing new ways for creators to partner with brands to bring their products to life."


Ah good, more brands. It seems like most of my favourite creators are now acting like the QVC shopping channel and hawking dubiously-effective wares, although I've not yet been tempted to buy anything simply because a gurning thumbnail-enthusiast has shoved it in my face mid-video.

Still, it must be working in general. There's gold in them thar hills, or so I've been told.

And actually, while it's fun to poke fun, I think Mohan has a point. Even my dear sainted mother has been known to watch YouTube on her televisual box, and if that's not a sign of mass-adoption, I don't know what is. YouTube is on your phone, your desktop browser, and now, it seems, increasingly replacing the other apps on your TV set-top box.

A brave new world indeed. Now, if you'll excuse me, I'm off to tie a ring of daisies in my hair and frolic among the heather. It's a beautiful day outside, y'know?

]]>
https://www.pcgamer.com/hardware/youtubes-ceo-says-its-the-new-television-with-1-billion-tv-viewers-daily-and-apparently-people-watch-shorts-on-their-tv-now/ SDfjX4GjTkMab2KTYSSoUi Fri, 14 Feb 2025 15:08:11 +0000
<![CDATA[ A web3 free-to-play survival game found to be a front for installing malware on your PC has finally been removed from Steam ]]> Reported to have amassed over 7,000 players, a free-to-play web3 game named PirateFi launched on Steam last week and was subsequently taken down for containing "malicious files". Users reportedly found out about this takedown as Valve took to notifying players that their rigs could be compromised.

As spotted by SteamDB, the site known for tracking Steam data (via PC Mag), users who downloaded survival crafter PirateFi were informed that "The Steam account of the developer for this game uploaded builds to Steam that contained suspected malware".

As a result of this, Valve urges users to either run a "full-system scan using an antivirus product that you trust or use regularly" or "consider fully reformatting your operating system to ensure that no malicious software remains on your machine". Both are smart ways of counteracting potential malware but this is a worrying message to get from a trusted platform such as Steam either way.

The Steam reviews for the game paint a suspicious picture. The first few days after launch saw a handful of positive reviews from accounts that had played the game for no more than two hours, though many of them weren't entirely fresh accounts. The latter point is normally a good sign of legitimacy.

However, later negative reviews are mostly from fresh accounts, accusing the game of stealing their data and spending their Steam wallet funds, and one user even suggests the game's screenshots are stolen from another pirate game. Given that those negative reviews come from fresh accounts accusing the game of hijacking their old accounts, those compromised accounts could potentially explain some of the earlier positive reviews.

According to PC Mag, a Telegram account named Jose Andres offered people $17 an hour to moderate the web3 survival pirate game. In those same chats, they claim the game has had over 7,000 players. It seems this 'job' was just a scam to get more people to play the game, as part of the induction process was downloading PirateFi.

After the game had been live for six days, Valve took action and removed it on February 12. Based on the SteamDB figures, it seems likely the 7,000 players figure was just part of the talk to build up trust for the scam, as the game has an all-time peak of five concurrent players. Many report the game doesn't even open, so the likelihood of it ever having had many concurrent players is pretty low. Third-party analytics services disagree on downloads: Gamalytic reckons the game got around 800, while VG Insights puts the number closer to 1,500.

Either way, this sets a bad precedent for the safety of the Steam store and we don't yet know about internal changes made by Valve to catch future attempts. Hopefully, this remains an isolated event.

We have reached out to Valve for comment.

Windows 11 review: What we think of the latest OS.
How to install Windows 11: Guide to a secure install.
Windows 11 TPM requirement: Strict OS security.

]]>
https://www.pcgamer.com/hardware/a-web3-free-to-play-survival-game-found-to-be-a-front-for-installing-malware-on-your-pc-has-finally-been-removed-from-steam/ i5LmLbpzqB6xjwXyFxZgSg Fri, 14 Feb 2025 14:54:47 +0000
<![CDATA[ Intel is reportedly in talks to spin off its chip factories into a partnership with arch rival TSMC and now I think I've seen everything ]]> I've seen pretty much everything in this industry over the years. And, yes, that does include a man eating his own head. But Intel and TSMC going into the chip foundry business together? Sorry, what?

This rumour, and it very much is a rumour, comes from an equities analyst at investment bank Baird who cited discussions between TSMC and Intel. According to the Wall Street Journal and to broadly précis this particular yarn, the rough idea is for TSMC to send some of its best engineers to Intel's fabs to sort them out. They can then be spun off into a separate entity managed by TSMC but co-owned by both companies.

If you're thinking this sounds like wild speculation, it does. But the markets are taking it seriously, with Intel's share price spiking by fully 6% when the rumour broke. So the question is, does it make sense?

For starters, if there's any truth to this story then it speaks volumes about the health of Intel's all-important upcoming 18A node. And not in a good way. If 18A is all Intel is cracking it up to be, then there would be no need to parachute in TSMC engineers and Intel wouldn't be looking to spin off its fabs.

Personally, I doubt 18A is as healthy as Intel claims, otherwise CEO Pat Gelsinger would not have been ejected, sorry retired, from his role. So, let's take this rumour seriously for a moment and consider the implications.

On the one hand, the idea that TSMC in part or whole takes control of Intel's fabs could be great for the US chip industry and the broader chip supply chain. The world's chip supply would no longer be so dependent on production in a location constantly teetering on the edge of a geopolitical crisis thanks to tensions between Taiwan and China.

More chips produced in the US also sidesteps any concerns over increasing electronics prices thanks to the possible imposition of hefty tariffs of up to 100% on Taiwanese chips by the Trump administration.

On the other, handing over control of even more of the world's cutting-edge chip production to TSMC looks like a megamonopoly in the making. And it's hard to find any examples of long term and overwhelming monopolies in important industries working out well for the average consumer.

You could take the view that if Intel's fabs are in more trouble than the company is letting on, it might be a choice between partnering with TSMC or watching those fabs go up in proverbial smoke.

In other words, the real-world choice might be between partnering with TSMC or seeing Intel's fabs wind down and cease to exist in the medium term. In which case, it's got to be worth giving the TSMC thing a go. After all, the US government could take control of or shutter those fabs should it wish.

Moreover, if tensions between China and Taiwan increase yet further, TSMC may benefit from a larger US presence. It already has a 4nm fab of its own up and running in the US, with 3nm and 2nm fabs in the planning.

In the long run, that could once again make the US the global center of chip production. And yet the idea makes my spidey sense tingle. You only have to look at graphics card prices to see what happens when one company dominates. The idea that TSMC swallowing up Intel's fabs is going to be a good thing for we mere gamers doesn't feel at all convincing.

In the end, the only thing we can be fairly confident of is that the next decade or so looks likely to be volatile if not tumultuous in the tech industry. Tariffs, AI, geopolitics, scalpers, fake frames, a mooted mega monopoly in chips, it's all pretty baffling. Long gone are the innocent days when the most controversial aspect of building a new gaming PC was whether you were an Intel or AMD fan. It's all so much more complicated now.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/processors/intel-is-reportedly-in-talks-to-spin-off-its-chip-factories-into-a-partnership-with-arch-rival-tsmc-and-now-i-think-ive-seen-everything/ kYvSCr4xwMdUC2jNM7DCCb Fri, 14 Feb 2025 13:36:10 +0000
<![CDATA[ AMD is finally spilling the beans about the RX 9070 series during a live stream on February 28 ]]> AMD's rollout of its upcoming graphics cards has been weird. We expected to get all the details at CES, but were just left with teases. Then these cards were anticipated to launch around the time of the RTX 5090 and RTX 5080, but were pushed back to March. Now we are due to finally find out more about the AMD Radeon RX 9070 series and its fancy RDNA 4 architecture on February 28.

Set to air at 8 AM EST / 5 AM PT on the AMD Gaming YouTube channel, the RX 9070 live stream is going to give more information on the cards that are confirmed to launch in early March.

With the RTX 5070 Ti set to launch on February 20 and RTX 5070 launching on March 5, the RX 9070 and RX 9070 XT graphics cards will likely launch just after these two Nvidia cards.

These will be AMD's first graphics cards to use the RDNA 4 microarchitecture so there's quite a lot of hype and/or speculation surrounding them. Notably, these cards are reportedly targeting the more midrange market so won't be a replacement for that RTX 5090 card you've been looking for.

They could, however, sway you away from the cheaper 50-series cards if the live stream and subsequent launch suggest RDNA 4 has led to big performance gains.

In the wake of Nvidia's cards selling out nearly instantaneously, AMD's David McAfee announced AMD is "planning to have a wide assortment of cards available globally for its launch in March."

We know surprisingly little about AMD's next set of graphics cards right now but a recent leak suggests the AMD RX 9070 XT can run the latest Monster Hunter Wilds benchmarking tool at 211.7 average fps, which is mighty impressive.

This is at 1080p with frame gen enabled, but impressive nonetheless. If these stats bear out in real tests, and customers can actually find the cards on shelves, AMD could have a surprise hit on its hands.

I just hope the price is right to compete, and an FSR improvement wouldn't go amiss.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/amd-is-finally-spilling-the-beans-about-the-rx-9070-series-during-a-live-stream-on-february-28/ LHiFmMWMvtUrffXMBEivX3 Fri, 14 Feb 2025 11:49:19 +0000
<![CDATA[ Nvidia's RTX 5070 Ti GPU officially goes on sale February 20 and the RTX 5070 is go for March 5 ]]> Earlier this week we reported a rumour that Nvidia had slightly delayed the upcoming RTX 5070 GPU from some time in February to early March. Well, it turns out that's true. Nvidia has updated its website and given a March 5 on-sale date for the 5070's release, plus February 20 for the RTX 5070 Ti.

As we discussed, back at CES Nvidia originally said both GPUs would be available in February, though didn't put a specific date on that. So, a March 5 release for the RTX 5070 is definitely a delay. The question is why?

Hopefully, the delay is mostly about making sure there's plenty of availability at launch, what with RTX 5080 and 5090 cards predictably selling out in picoseconds after their launch. Pushing the launch of the RTX 5070 back by a couple of weeks could help with that.

However, we suspect it's as much about triangulating the 5070 release to undermine the launch of AMD's competing Radeon RX 9070 and 9070 XT. Those two GPUs were originally rumoured to launch earlier this year.

AMD never said that would happen. However AMD has said that it decided to delay both cards to give them a little polish. "We are taking a little extra time to optimize the software stack for maximum performance and enable more FSR 4 titles," AMD's Ryzen CPU and Radeon graphics rep David McAfee said on January 22.

Likewise AMD has since inked in "early March" as the launch window for those new GPUs. With that in mind, Nvidia also moving the RTX 5070 to early March seems like a little too much of a coincidence. Odds are, it's an effort to win the PR war with AMD by ensuring that the RX 9070 GPUs have to share the news cycles with the RTX 5070.

Incidentally, it could be quite the fight. The latest rumours suggest the 9070 XT could have a boost clock of 3.1 GHz and offer performance comparable to a 7900 XTX in the Monster Hunter Wilds benchmark.

Copious caveats apply and we'll have to wait and see just how good the 9070 XT is. But if those rumours are right, this could be one of the most exciting GPU launches in years. Oh, provided AMD gets the pricing right.

Of course, all of this will be academic for most gamers unless both Nvidia and AMD manage to get a decent number of cards onto retail shelves. Comments on X in response to Nvidia's announcement of the launch dates say it all. "When will availability for the 5090 and 5080 cards start?" one X user replied. Well, quite.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/nvidias-rtx-5070-ti-gpu-officially-goes-on-sale-february-20-and-the-rtx-5070-is-go-for-march-5/ 6zNmU4JwkAjKCecgRmqSXf Fri, 14 Feb 2025 11:33:13 +0000
<![CDATA[ A 'Musk-led consortium' of investors say they'll withdraw $97.4 billion bid to buy OpenAI—but only if it stays non-profit ]]> Earlier this week, Elon Musk alongside a group of investors put in an unsolicited bid to purchase the non-profit portion of OpenAI, OpenAI Inc. Now, according to court documents filed on Wednesday, this "Musk-led consortium" says they will withdraw the eye-watering $97.4 billion offer, but only if OpenAI's board decides against turning this venture into a for-profit organisation (Via TechCrunch).

This latest court filing describes the bid for OpenAI's governing non-profit publicised on Monday as "serious." This, despite Musk allegedly telling staff at X over email, "Our user growth is stagnant, revenue is unimpressive, and we’re barely breaking even."

Furthermore, sources told Reuters that OpenAI's board of directors had apparently not received a formal bid from Musk's side as of Tuesday. Still, the original bid has arguably succeeded in at least one of its goals: to draw public attention to OpenAI's reported intention to go for-profit through its latest restructure, and to get us all talking about it—like this.

Whether any amount of public attention will keep OpenAI non-profit still remains to be seen though. The company started as a non-profit, before shifting into a 'capped-profit' structure back in 2019. The non-profit part of the company Musk et al allegedly want to buy is what steers the ship of the wider, capped-profit company. The aforementioned restructure intends to go more traditionally for-profit in the form of a public benefit corporation.

OpenAI is presently playing it cool. The original unsolicited bid was dismissed by OpenAI CEO and co-founder Sam Altman, who went as far as to write on X, "No thank you but we will buy Twitter for $9.74 billion if you want".

Theoretically, the board could still accept a bid despite this rejection from Altman, though a recent comment suggests that's unlikely; counsel to OpenAI's board, Andrew Nussbaum, provided a statement to Bloomberg News which further clarifies the company's stance: "The nonprofit is not for sale."

For those unaware, Elon Musk co-founded the company that went on to create ChatGPT alongside Sam Altman back in 2015. Musk then left in 2018, with the company saying at the time his departure was to avoid a potential conflict of interest as Tesla became more interested in AI.

Last year, OpenAI shared redacted emails and DMs suggesting that this split was anything but wholly amicable. Not only that, but these messages also indicate that Musk wanted to push OpenAI towards becoming a for-profit organisation as early as 2017.

Sam Altman recently told Bloomberg Television, "I think he is probably just trying to slow us down. He obviously is a competitor. I wish he would just compete by building a better product, but I think there’s been a lot of tactics, many, many lawsuits, all sorts of other crazy stuff, now this." The Musk-backed competition in question is xAI and their boorish chatbot Grok. You know what? I think I'm finally getting it.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
https://www.pcgamer.com/hardware/a-musk-led-consortium-of-investors-say-theyll-withdraw-usd97-4-billion-bid-to-buy-openai-but-only-if-it-stays-non-profit/ rUuHRGTFNiU5LqEZ25sxxA Fri, 14 Feb 2025 09:49:13 +0000
<![CDATA[ Unreal Engine often gets flak for games running poorly or stuttering, but as Avowed demonstrates, it's really about how devs use it and the pressures of time ]]> Here's a simple question for you: What do Black Myth: Wukong, Frostpunk 2, Palworld, Stalker 2, Tekken 8, Silent Hill 2, and Avowed all have in common? Yes, that's right, they were all made using the game development package Unreal Engine 5 (UE5). Something that they also have in common is variable and somewhat confusing performance on PC. If one only goes by comments on social media, the problems all stem from UE5 but having spent a week testing Avowed, I'm not convinced that Epic's software is to blame.

That particular game has all kinds of odd things about it, such as the 1% low frame rates and wonky upscaling, but it looks great and for the most part, it runs pretty well too, though it's best to ignore the actual frame rates and judge it on feel. Avowed's developers, Obsidian Entertainment, aren't new to Unreal Engine as they made The Outer Worlds and Grounded with it, although it was the previous version.

So I'm pretty certain that one can't blame UE5 for the fact that enabling FSR 3 does almost nothing for low-end GPUs and induces ghosting and pixel crawling. It can't be blamed for the large disparity between the average and 1% low frame rates, nor the highly variable frame times. Sometimes it is down to a game's engine (looking at you, RE Engine) and while all the games I listed at the start of this article have performance issues of one kind or another, Avowed's are so uniquely odd that it surely must stem from choices made by the developers.

Take Silent Hill 2, for example. Looks fantastic but its performance at launch was all over the place, not to mention being riddled with bugs. Subsequent patches from developers Bloober Team have solved many of these problems but when I last tried the game, it still had a certain degree of jank to it. And like Obsidian, those coders have plenty of experience using Unreal Engine.

One might argue that if two experienced teams can't get their games to run properly with UE then the software must be at fault, but Avowed and Silent Hill 2's problems are very different. Performance-related, yes, but the fundamental issues aren't the same.

Games like Doom Eternal or more recently, Indiana Jones and the Great Circle, are often used as keystones for the argument that UE is rubbish. The idea is that if Id Tech 7 can be used to make such great games, and both run super well, it must be a superior piece of software. Right?

However, Id Tech 7 is proprietary and the two development teams for the above games, Id Software and MachineGames, are both subsidiaries of the parent company that owns the engine (ZeniMax Media). It was heavily customised for Indiana Jones but the point I'm making is that the coders in question are all very familiar with Id Tech—one team made the engine and the other has used nothing but Id Tech engines.

Unreal Engine is an all-round development engine that can be used for visual effects in films and TV shows, as well as video games. It's quite easy to pick up and work with—heck, if I can do it, I'm pretty sure anyone can with enough grit and determination, because I'm old and rather scatty at times. Of course, what I've made with UE5 has been nothing more than simple exercises, graphics tests, and other tomfoolery, and creating a full-blown AAA game is on a level of complexity so far removed from what I'm doing that it's akin to comparing making a paper aeroplane to a commercial jet.

Epic is aware that UE-based games have had common issues, especially with PSO compilation (aka the dreaded shader compilation stutter), as well as traversal stutter, but it's actively trying to do something about it with numerous updates to Unreal Engine 5. There again, recompiling millions of lines of code every time a new version of UE comes out isn't something that any big developer is going to relish, due to time constraints and the possibility that it might bork something.
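To illustrate why first-use compilation shows up as stutter, here's a deliberately simplified toy model, plain Python rather than anything resembling actual UE5 code, with made-up frame and compile costs chosen only to show the shape of the problem.

```python
# Toy illustration (not Unreal Engine code) of why first-use shader compilation causes
# frame-time spikes, and why precompiling pipeline state objects (PSOs) up front helps.
import random

random.seed(1)
BASE_FRAME_MS = 8.0      # hypothetical steady render cost per frame
COMPILE_COST_MS = 45.0   # hypothetical one-off cost of compiling a new shader/PSO

def simulate(frames, precompile):
    compiled = set(range(20)) if precompile else set()
    frame_times = []
    for _ in range(frames):
        shader = random.randrange(20)   # material/PSO this frame happens to need
        cost = BASE_FRAME_MS
        if shader not in compiled:      # first use: pay the compile cost mid-frame
            cost += COMPILE_COST_MS
            compiled.add(shader)
        frame_times.append(cost)
    return frame_times

for label, pre in (("compile on first use", False), ("precompiled", True)):
    times = simulate(600, pre)
    print(f"{label:>22}: worst frame {max(times):.0f} ms, average {sum(times)/len(times):.1f} ms")
```

The worst-case frame balloons whenever a new permutation is compiled mid-game, which is exactly the spike you feel as a hitch; precompiling simply shifts that cost to a loading screen.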


Unreal Engine does have faults but given its remit, I'd say they're acceptable. To me, the problems all stem from time. Game publishers need something to go out on schedule, they need them to recoup the millions of dollars invested in them as soon as possible, and performance testing and fine-tuning code is very time-consuming.


This is why so many big games are released in such a shoddy state—they can always be patched later, when there's more time available to carry out such tasks in depth.

I have no doubt it will be the same for Avowed. The preview code I tested was rather wonky-donkey but with the overall game being really solid and a lot of fun, there's a good chance that all of its foibles will be fixed post-release.

Then again, the performance might not be 'fixed' if the developers are happy with how it is already or if the sales aren't great. Obsidian Entertainment is also working on a sequel to The Outer Worlds, again with UE5, so it will only be able to allocate a certain amount of workforce time to fixing Avowed.

In my mind, time is the real problem, not the engine or the developers using it, and unfortunately, time costs money. Are we happy to pay even more for our favourite games, so developers can employ more staff to do more work, to ensure PC projects run perfectly on release? That's a debate for another time, I think.

]]>
https://www.pcgamer.com/hardware/unreal-engine-often-gets-flack-for-games-running-poorly-or-stuttering-but-as-avowed-demonstrates-its-really-about-how-devs-use-it-and-the-pressures-of-time/ oJuSTyG8rX6e8sSmNV8QNA Thu, 13 Feb 2025 20:00:00 +0000
<![CDATA[ Avowed's low frame rates but smooth-feeling gameplay makes me wonder if we PC gamers worry too much about the numbers ]]> I've been spending a good chunk of my time recently testing the performance of new game releases on various PCs. Final Fantasy 7: Rebirth, Civilization 7, Kingdom Come: Deliverance 2, and now Avowed. Hour after hour, day after day, constantly running benchmarks over and over. All to get a raft of performance numbers to give you an idea of how well they run across different hardware configurations. But the charts and video clips don't really tell you the full story, because they can't show you how a game feels to play.

And in the case of Avowed, it's quite an important thing because, for the most part, it feels really smooth. Not always, of course, and not on every PC that I tested it on, but the experience is far nicer than the performance numbers suggest it should be. For example, on my Ryzen 7 5700X3D and GeForce RTX 4070 test rig, it averages 68 fps at 1440p, with the High quality preset and no upscaling.

Pretty decent, right? Not amazing but not bad, either. However, the 1% low frame rate (the fps that the game is faster than, 99% of the time) is just 34 fps. That's a big difference to the average performance and in lots of other games, you'd notice that quite easily. Not so in Avowed—in fact, it feels smooth as butter on that particular PC, with those settings. Only the odd bit of traversal stutter spoils the picture.
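For anyone wondering how that 1% low figure is actually calculated, here's a rough sketch of the typical method: record every frame time, take the slowest 1% of frames, and convert them back into an fps figure. Bear in mind that capture tools differ slightly (some average the slowest 1% of frame times, others report the 99th-percentile frame time), so treat this as illustrative rather than a description of any particular benchmark suite.

```python
def fps_stats(frame_times_ms):
    """Return (average fps, 1% low fps) from a list of per-frame render times in ms.

    Illustrative only: different capture tools compute the '1% low' slightly
    differently, but the idea is always to summarise the worst frames.
    """
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)

    worst = sorted(frame_times_ms, reverse=True)[:max(1, n // 100)]  # slowest 1% of frames
    one_percent_low_fps = 1000.0 * len(worst) / sum(worst)

    return avg_fps, one_percent_low_fps

# A run that averages ~14.7 ms per frame (68 fps) but spikes to ~29 ms now and then
# would report roughly 68 fps average with 1% lows in the mid-30s.
```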

And it's got me wondering if we, as PC gamers, worry too much about performance. Or rather, focus too much on the performance figures. Yes, I know that makes me a big ol' hypocrite, given that my job here is to test stuff, produce charts with lots of numbers, and then judge the products on the basis of those figures.

There's a big difference between the performance of hardware and that of a game, of course. It's not like one can go 'Ooh, this CPU feels like it's running really fast.' It either is or isn't, and the only way to be certain is to run tests and get numbers. For games, it's different—unless it's a competitive title, especially one that could potentially be a source of income for you. But even then, surely it doesn't matter how fast it's running, as long as it feels okay and doesn't prevent you from achieving your desired goals, yes?

The obvious issue here is that feeling is subjective, and something that feels okay to me might be excruciating for someone else. But I don't think that, in the case of games, it's so subjective that my opinion is completely irrelevant. After all, I'm in the fortunate position to be able to check out a game across more PCs than most gamers come across in years of owning a computer, so I can at least comment on relative feel.

I often come across comments in discussions about a particular game or piece of hardware, where the gamer will say something along the lines of 'It has to run at 120 fps for me or I'm not interested.' Without wishing to sound like I'm denigrating such opinions, that's something I just don't get—on my own gaming PC, I really don't care what frame rate I'm getting and I never bother to check the same metrics that I collate for performance reviews. It's literally a case of firing up the game, playing it as is, and then tweaking some settings until it feels great.

Avowed really is a perfect example of this. As with all games that I do performance analysis for, I spent a couple of hours getting to a point where I could carry out repeated test loops to collate and average data. Actually, I played past that point just to see if there was something better to use but, depending on how much time I have, that's not always possible. Anyway, after said hours of working through Avowed's opening stages, I came away feeling really positive about the game, marvelling at how well it ran.

Your next machine

Gaming PC group shot

(Image credit: Future)

Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

Then I started to collate performance data across all my test rigs and was somewhat surprised to see how low some of the numbers really were. A 1% low of 34 fps compared to an average of 68 fps should feel a bit janky—not quite stuttery, but certainly not smooth. And yet Avowed still feels smooth. It actually feels worse if it runs too fast, if that makes any sense.

I know I can't abandon performance charts and just write meandering prose about how a game feels at 1080p Medium, but I do think that sometimes it's worth leaving the numbers to one side for a moment and reading everything first. It's akin to ignoring a game's review score until you've absorbed everything the reviewer has said. And having just realised how few people are going to do that, I guess I'll be doing hours of testing for performance figures for many more years to come.

For myself, though, I'll just keep going by feel. Maybe I should invent a feels-per-second metric? Ugh no, that's just another number.

]]>
https://www.pcgamer.com/hardware/avoweds-low-frame-rates-but-smooth-feeling-gameplay-makes-me-wonder-if-we-pc-gamers-worry-too-much-about-the-numbers/ j7YB28cBRPbhPLmV3BZvPE Thu, 13 Feb 2025 18:30:00 +0000
<![CDATA[ If the AMD RX 9070 XT is as beefy as these leaked specs and benchmark makes out, low Nvidia 50-series stocks might not matter ]]> Well, we all knew Q1 2025 was going to be spicy. I'm not sure we knew that would mean quite the number of delays and out-of-stock signs that we've seen so far, but we knew there was going to be a slew of new GPU shenanigans to watch out for. AMD's Radeon RX 9070 XT is certainly one of these, and it now looks like we're getting something slightly more than word of mouth regarding its specs and performance.

It's still very much borne of the ever-churning rumour mill, of course, but from X user HKEPC (via VideoCardz) we have a screenshot of GPU-Z showing a 'Navi 48' GPU, this being the codename for the upcoming RX 9070 XT. And the specs? Well, they're in line with what we expected—4,096 shader cores, 16 GB of GDDR6 memory, and a 256-bit memory bus—but it's nice to actually see these specs on paper... err, I mean, on-screen.

HKEPC also shared a screenshot of a supposed RX 9070 XT achieving a whopping 211.7 fps in the Monster Hunter Wilds benchmark, albeit at 1080p and with frame gen enabled. But hey, as I discovered in my own testing of this benchmark for our Monster Hunter Wilds benchmark round-up, the game is pretty brutal on the ol' GPU.

For reference on the core front, by the way, the RX 7800 XT has 3,840 and the RX 7900 XT has 5,376. Just as previous plausible leaks had it, then: the 9070 XT sitting between the two in core/CU count. Again corroborating these previous leaks, we're (supposedly) looking at a higher boost clock for the RX 9070 XT—in this case, even higher than previously thought at 3,100 MHz, presumably due to a particularly aggressive factory overclock.

All very exciting stuff, and we shouldn't have too long to wait for it, either, given Dr. Lisa Su has confirmed the GPU is officially arriving in early March. It might even beat the RTX 5070 to market, as that card is now rumoured to be delayed until March, too. Maybe this will make up for AMD's botched CES appearance, wherein it showed our eager eyes practically nada of the GPUs.

Then again, Nvidia's just announced a February 20 launch date for the RTX 5070 Ti, so the RX 9070 XT will have that card to go up against. But even then-again-er, Nvidia's recent RTX 50-series launches haven't exactly given us much hope for, y'know, actual stock on shelves.

Maybe that's where AMD can find an advantage: by actually having GPUs available to buy. If it can pull that off while pumping out this kind of Monster Hunter Wilds performance with 16 GB of high-bandwidth memory, preferably for cheap, I'll be one happy chappy. We'll see.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/if-the-amd-rx-9070-xt-is-as-beefy-as-these-leaked-specs-and-benchmark-makes-out-low-nvidia-50-series-stocks-might-not-matter/ GYAwtCEcsqA2XKq6uMR45K Thu, 13 Feb 2025 17:38:22 +0000
<![CDATA[ MSI Claw 8 AI+ A2VM review ]]> Pour one out for the original MSI Claw. You can still find it for sale, but with a Meteor Lake Intel chip inside and a chassis design that felt a lot more like a prototype than you'd expect from a modern gaming handheld, it seemed doomed to obscurity from the start.

The MSI Claw 8 AI+, however, feels like a much more accomplished product right from the off. It's a substantial piece of kit, with a chonky chassis that immediately delivers a sense of weight and quality the second you get it in your hands. The Hall effect thumbsticks are well-placed, the triggers and shoulder buttons feel much improved, and overall it's a good-looking, desirable object to pull from the box. As it should be, for the fairly substantial price of $900/£899.

Sitting centre stage is an 8-inch IPS-type 1200p display, and it's noticeably vibrant from the moment you boot into Windows. While the Claw 8 AI+ is not the most portable of machines, everything external has a certain wow factor that the previous model was sorely missing—and that big screen sits proudly in the middle, begging you to dive in.

Inside there's plenty to be excited about, too. It comes equipped with the Intel Core Ultra 7 258V, a Lunar Lake chip that we've been anticipating in a handheld gaming PC for a long time.

MSI Claw 8 AI+ A2VM specs

The Claw logo stamped on the back of the MSI Claw 8 AI+

(Image credit: Future)

CPU: Intel Core Ultra 7 258V
Cores: 4x Performance, 4x Efficient
Threads: 8
GPU: Intel Arc 140V
Memory: 32 GB LPDDR5x-8533
Screen size: 8-inch
Native resolution: 1920 x 1200
Refresh rate: 120 Hz
Storage: 1 TB SSD
Battery: 80 Wh
I/O: 1x MicroSD card reader, 2x Thunderbolt 4 (Displayport/Power Delivery 3.0)
Dimensions: 299 x 126 x 24 mm
Weight: 795 g
Price: $900/£899

Intel had a rough 2024 when it came to desktop CPUs, but the Lunar Lake mobile chips stood out as perfect candidates for a portable gaming machine—thanks to excellent power efficiency and an improved, Battlemage-based Arc 140V iGPU, with eight Xe² cores and eight dedicated ray tracing units.

Not that smooth ray tracing performance was ever on the table here—it'll still be a long time before an iGPU will be capable of coping with all the settings turned up in Cyberpunk 2077 without slowing to a crawl. It's an impressive chip, though, and the Claw 8 AI+ is the first gaming handheld to make use of it.

So, this is a portable gaming machine with specs to impress. As a result I've pitted it against the Asus ROG Ally X, our current best handheld gaming PC, and the OneXPlayer OneXFly F1 Pro, an AMD Strix Point-equipped speed machine. The F1 Pro is the fastest handheld we've ever tested and retails for $1,339. That's $439 more than the Claw 8 AI+, for reference.

I've also thrown the Lenovo Legion Go into the mix, another big screen handheld with many virtues worth considering. So the big question is: Can the Claw 8 AI+ keep up with some of our top contenders?

Image 1 of 4

The MSI Claw 8 AI+ on a wooden table, showing the desktop lit up in blue with the RGB lighting set to pink.

(Image credit: Future)
Image 2 of 4

The right thumbstick of the MSI Claw 8 AI+, ringed in blue RGB

(Image credit: Future)
Image 3 of 4

The right shoulder button and trigger of the MSI Claw 8 AI+.

(Image credit: Future)
Image 4 of 4

The rear of the MSI Claw 8 AI+ showing the rear vents and the paddles.

(Image credit: Future)

In a phrase? Pretty much, yes. While the Claw matches the ROG Ally X in Black Myth Wukong at Medium settings with upscaling disabled, once FSR is thrown into the fray it pulls ahead by a couple of frames, with a much higher minimum frame rate. It's also a mere two frames off the performance of the OneXFly F1 Pro, and that's pretty impressive given the price delta between them.

In Cyberpunk 2077 it actually manages to beat the OneXFly by a single average frame, both in the upscaled benchmark and at straight-up 1080p. That's a superb turn of speed, and a good indication that the Intel Core Ultra 7 258V is serious competition for the Ryzen AI HX 370 at the heart of the OneXPlayer machine.

And so the back and forth begins. In F1 24, the Claw 8 AI+ gives the ROG Ally X a bit of a pasting—particularly with upscaling enabled, where it manages a nine fps lead. However, the OneXFly F1 Pro flexes its muscles once more, beating the Claw by a fair margin in both the upscaled and non-upscaled results.

F1 24 running on the MSI Claw 8 AI +

(Image credit: Future)

In Metro Exodus Enhanced, however, the Claw scythes its way to the top, leaving the ROG Ally X in the dust and edging ahead of the OneXFly by another single frame, with a four fps lead in the minimums.

One slightly odd result in my testing occurred in Horizon Zero Dawn. With upscaling disabled, the Claw leads the OneXFly machine by two frames on average, with double the minimum frame rate. However, with FSR set to Quality at 1080p it drops below all of our tested handhelds on average, although it maintains a very high minimum figure compared to the rest.

I ran the benchmark over and over for hours and fiddled with many settings, but nope, this was a consistent quirk.

Intel drivers? Perhaps. It wouldn't be the first time we've experienced odd driver-related issues with an Intel GPU, although it's worth mentioning that all of the games in our test suite ran on the Claw without major problems, just with the very occasional performance oddity.

Speaking of quirks, how about that scorchingly high 3DMark Time Spy GPU score? Intel's Arc desktop cards have a habit of performing brilliantly in 3DMark benchmarks while not translating those gains particularly well into actual games—and it seems the Arc 140V iGPU is the same. I'd look upon that number with a good bit of scepticism, but again, it was a consistent figure across multiple runs.

In terms of CPU score, it's worth noting that the Core Ultra 7 285V makes use of four Performance big boi CPU cores, and four low-powered Efficient cores. When compared to the eight fully-fledged cores in the Ryzen Z1 Extreme (and the four Zen 5 cores paired with eight Zen 5c cores in the Ryzen AI HX 370) it's no great surprise the Claw's CPU score is down on the competition.

This is reflected again in the Cinebench R24 results. While the single core index of the Intel chip is the highest out of our handhelds thanks to the speed of those Performance cores, the multi-core score pales in comparison to the AMD chips in the ROG Ally X and the OneXFly F1 Pro. Although it's worth mentioning it does manage a slightly higher result than the Lenovo Legion Go, also using the Ryzen Z1 Extreme APU.

Does this performance deficit matter? To me, no. This is a gaming handheld, after all, and real world gaming performance is where the Lunar Lake chip really delivers. While benchmarks will always drag out the best and worst of a chip, I can't see many people crying into their cereal about the lack of multi-core productivity performance in something designed primarily to play games on the go.

The MSI Claw 8 AI+ gaming PC on a wooden table.

(Image credit: Future)

I re-benchmarked the ROG Ally X myself for the purposes of this comparison, as our handheld benchmarking suite has changed slightly since Nick's original review and I wanted the latest numbers. And while the Asus machine is fairly loud under load at its max 30 W TDP, the Claw 8 AI+ is surprisingly quiet, to the point where I wasn't sure if benchmarks had finished or not when it sat a few metres away.

It runs relatively cool, too, with a max CPU temp of 86 °C. That larger chassis appears to give the Lunar Lake chip plenty of breathing room, without being plastered in vents like the OG Claw.

Speaking of efficiency, the Claw 8 AI+ managed a massive 129 minutes in PCMark 10's gaming battery life test. That's a single minute more than the ROG Ally X, and nearly double the battery life of the OneXFly F1 Pro under load. That substantial casing has allowed MSI to cram an 80 Wh battery under the hood—and combined with the efficiency of Intel's impressive chip, the Claw keeps going, and going, and going.

This translates into my real world testing, too. I managed to play an hour and a half's worth of Doom 2016 at Ultra settings on my journey home from the office, with battery life to spare. And it ran like water.

Admittedly it's a nearly nine-year-old game, and it wasn't particularly hardware-demanding even at release. But it was something of a revelation playing a still-fantastic-looking shooter on a large screen at very high frame rates—mid-train journey—without thinking once about a cable.

The MSI Claw 8 AI+ running Doom 2016, with a cheeky pentagram on screen.

(Image credit: Future)

In fact I've been playing as many fast-paced games as I can on the Claw to test out those Hall effect thumbsticks and updated controls. The sticks are certainly accurate, and feel great under the thumbs for a long session. However, having the ROG Ally X to compare side-by-side, I have to say that the Asus machine's triggers, shoulder buttons, and face buttons do feel better.

It's a tolerance thing, perhaps, but the Asus simply feels more premium in my hands. The Claw's chassis and controls feel good, great even, but the ROG Ally X has a special quality to its moving parts that edges it ahead.

Think of it as a big, bruising, heavyweight boxer of a handheld gaming PC

So, for that matter, does the OneXPlayer OneXFly F1 Pro. It's not that the MSI feels bad, more that its direct competitors feel just a touch better in the materials and factory tolerances department.

Other issues? Well, MSI's M-Center software is pretty basic, and while the AI Engine feature is supposed to intelligently adjust performance on the fly depending on your use case, I'd advise turning it off and adjusting the power settings manually. It's fairly good at dropping TDP down on the move to save some battery, but it also managed to screw up the odd benchmark run while plugged in, which is disappointing.

Image 1 of 2

The Asus ROG Ally X next to the MSI Claw 8 AI+ on a wooden table

(Image credit: Future)
Image 2 of 2

The Asus ROG Ally X next to the MSI Claw 8 AI+ on a wooden table

(Image credit: Future)

And again, sticking the Claw side by side with the Ally X, I do wonder whether I'd trade some of the Claw's advantages for portability. That big screen is a wonderful thing, but the ROG is just small enough where slinging it in your backpack requires no consideration, whereas the Claw is big enough to make you wonder whether you might leave it at home.

Buy if...

You want superb performance for more reasonable money: The Claw 8 AI+ manages to rival and occasionally beat the fastest handheld on our books, for over $400 less.

You like a big screen: The 8-inch display is a lovely thing to behold, and ups the immersion factor of handheld gaming significantly.

Don't buy if...

You want ultra-portable: This is a chonky handheld device, and as a result it's not as easy to lug around with you as something more reasonably sized.

You want class-leading controls: The Claw's buttons, triggers and paddles are a massive improvement on its predecessor—but its competitors still have the edge.

But I simply can't ignore what the Claw 8 AI+ provides for the cash. It delivers performance capable of giving the fastest handheld we've ever tested a run for its money, a large and vibrant display, good controls, and battery life that matches the notoriously long-lasting ROG Ally X. All for $900, which, given the pricing of some of the competition, actually strikes me as downright reasonable for what you end up receiving.

The thorny question is, does that mean the Claw should replace the Asus ROG Ally X as our best handheld gaming PC overall? To me, not quite. While the Claw beats the Ally X in many of our benchmarks, I think as an overall package, the Asus has it. Just. Really though, both of these handhelds have their place—and for very different reasons.

If you want portability, an ultra-premium feel, and good performance for the more competitive price of $800, I'd lean towards the Ally X. But if you don't mind paying $100 more for sheer raw power and a big screen, and can put up with it being a little unwieldy and slightly rougher around the edges? Yep, that's the Claw.

Think of it as a big, bruising, heavyweight boxer of a handheld gaming PC. It lacks the odd touch of refinement, sure, and occasionally wobbles on its feet. But it delivers such a whack, such a powerful punch of gaming performance and battery life combined with that big, luscious screen, it cannot be ruled out of the fight. It's not just better than the original Claw—it's easily one of the best handhelds I've used to date.

]]>
https://www.pcgamer.com/hardware/handheld-gaming-pcs/msi-claw-8-ai-a2vm-review/ VBYeKpkxpVDoCu7YAq9hCT Thu, 13 Feb 2025 17:12:37 +0000
<![CDATA[ 10-series and other old GPUs should be able to run Assassin's Creed Shadows says director, and my RTX 3060 Ti sheds a single hopeful tear ]]> Assassin's Creed Shadows should have been launching around about *checks watch* now, but alas, it was delayed and has a new launch date of March 20. If you're like me and have an older GPU, you might not have cared much either way, as you might have assumed decent performance in the title would be out of reach. Well, if so, fear not, because a recent Q&A with the game's technology director leads us to believe it will run on older hardware.

The main litmus test is ray tracing, as games today (*cough* Indiana Jones and the Great Circle *cough*) are moving towards a 'ray tracing by default' approach, i.e. where you can't play the game at all without ray tracing. On this front, technology director Pierre F gives us plenty of hope:

"If your GPU does not support hardware raytracing, such as pre-RTX GPUs, we have developed our own solution to allow competent, yet older, GPUs to run Assassin's Creed Shadows. The game will use a proprietary software-based raytracing approach developed specifically for that."

Software ray tracing isn't exactly new. Lumen, in Unreal Engine 5, for example, is a software-based ray tracing and global illumination solution. But if this Assassin's Creed Shadows solution is proprietary and "designed specifically" for pre-RTX GPUs, there's reason to be hopeful it'll perform quite well even on older hardware—hopefully better than Lumen does. Ubisoft's got at least some experience with this, as its other big engine, Snowdrop, has been using software RT for a while (in Pandora and Outlaws, for example).

The technology director says: "We made efforts to support pre-RTX GPU (GTX 1070, GTX 1080TI and equivalent) that are still competent today but lack hardware level raytracing capabilities. To further highlight our commitment to this direction, we've developed a proprietary software raytraced GI [global illumination] solution to support the new dynamic Hideout."

This is implemented because there is, unfortunately, a definite ray tracing requirement for at least some of the game. This might make us wonder: Okay, what if I can do ray tracing but I don't have a card that can do ray tracing well? Thankfully, the devs are giving the option for a "Selective Raytracing" mode that will only use ray tracing when in the Hideout portion of the game.

"The reason behind this is that the Hideout allows extensive player customization at a level never seen before on Assassin's Creed. Because of that, we cannot use traditional, pre-calculated, global illumination techniques, and therefore need to adopt a real-time approach. In all other gameplay situations, such as in the open world, raytracing will not be used."

So much for ray tracing, but it doesn't stop there. The game's also going to allow us to mix and match our frame gen and upscaling solutions. This means that I (with my RTX 3060 Ti) and others like me should be able to slap on DLSS upscaling alongside FSR frame gen, for instance. Neat.

There are some other peculiar shenanigans going on that might make for decent performance on older hardware, too: "Various upscaling technologies (TAA, DLSS, FSR and XeSS) can be used in conjunction with dynamic resolution scaling to target a given framerate, in which case the game will adapt pre-upscaling resolution to try and reach the desired FPS."
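Ubisoft hasn't published how its dynamic resolution logic is tuned, but the general idea behind this kind of system is simple enough that a toy sketch gets it across (this is my own illustration, not the studio's code): watch the last frame time, then nudge the pre-upscaling resolution scale down when you're over budget or back up when you have headroom.

```python
class DynamicResolution:
    """Toy dynamic-resolution controller, purely illustrative of the general technique."""

    def __init__(self, target_fps=60, min_scale=0.5, max_scale=1.0):
        self.target_ms = 1000.0 / target_fps
        self.min_scale = min_scale
        self.max_scale = max_scale
        self.scale = max_scale  # fraction of native resolution rendered before upscaling

    def update(self, last_frame_ms):
        # Over budget? Drop the internal resolution a notch. Comfortably under? Creep back up.
        if last_frame_ms > self.target_ms * 1.05:
            self.scale = max(self.min_scale, self.scale - 0.05)
        elif last_frame_ms < self.target_ms * 0.90:
            self.scale = min(self.max_scale, self.scale + 0.02)
        return self.scale

# Targeting 60 fps at 4K: a string of 25 ms frames would pull the scale down towards
# ~0.5, i.e. a 1080p-ish internal render, which DLSS/FSR/XeSS then upscales back to 2160p.
```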

Your next upgrade

Nvidia RTX 5090 Founders Edition graphics card on different backgrounds

(Image credit: Future)

Best CPU for gaming: The top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game ahead of the rest.

So... a doubly dynamic scaling and upscaling system, then? That seems complicated to balance out but whatever helps, I suppose.

The game also uses the company's "proprietary Micropolygons system" which is "a virtualized geometry system which allows us to render more polygons with more level of details continuity [sic]." Crucially, it features a scalable preset, "from Low to Ultra", which presumably means you'll be able to get quite low-poly with it if you need to.

This wouldn't necessarily be a boon if Pierre F didn't seem so confident that the whole thing should make for a great experience on lower-end hardware. But he does seem confident: "The game will look stunning even on the lowest settings. Everyone will enjoy the experience."

Fingers crossed this all pans out as planned. It's not as if there are a ton of high-end graphics cards lining the shelves right now; older hardware might be what a lot of us have to make do with. I guess we'll find out when the game launches, which should be pretty soon.

]]>
https://www.pcgamer.com/games/assassins-creed/10-series-and-other-old-gpus-should-be-able-to-run-assassins-creed-shadows-says-director-and-my-rtx-3060-ti-sheds-a-single-hopeful-tear/ pwRtFghWp5cd8GC3rfL9ZF Thu, 13 Feb 2025 16:37:18 +0000
<![CDATA[ Monster Hunter Wilds PC performance: From Nvidia's latest, past AMD's greatest, to Intel's failing silicon, this is what the game does to PCG's own rigs ]]> It is a brave publisher which, weeks before its game launches, is confident enough in its product to release a benchmark tool so PC people can get a bead on how well a game will run on their exact rig before spending their money on the game in question. I mean, this is why we do our performance analysis pieces, and why Nick has spent countless hours, evenings, and weekends elbow deep in preview and review code to give you an idea how the latest games might function on your PC.

Nice of Capcom to do the hard yards for us and release the Monster Hunter Wilds Benchmark Tool on Steam. So now everyone can see what a pig the game might be on their system.

That doesn't let Nick off the hook, however, and I'm still going to make him figure out which of the myriad graphics settings actually have the most impact on performance with the least hit to visual fidelity, so you have some guidance on the best settings and how to improve your frame rates.

But it means we're going to look at Monster Hunter Wilds (MHW) performance slightly differently. We're going to crowd-source it, and look at how the game benchmark performs on the PC Gamer hardware team's individual gaming PCs, and the steps they've taken to get the game performing at a level they would be happy playing at.

Spoiler: not all of us managed that feat at all.

Monster Hunter Wilds has a grainy, noisy aspect to its art style, and it interacts with AMD's compute-based upscaler in ways I find… displeasing.

Allow me to begin my segment with an audible sigh. As the AMD card user of the team, I chose to set FSR to Quality at the 1440p High preset, as by default the game wants you to leave it on Balanced. FSR has a tendency to introduce a lot of unwanted noise at anything under Quality at 1440p, and I wasn't having any of that. Custom, non-preset settings for me. Also, no ray tracing. This is an RX 7800 XT, after all, and it's simply not good at it.

The benchmark runs pretty well at these levels, with an 85 fps average. Like others in the team, however, I noticed that when the world transitions to desert after the storm and you drop into the valley, the frame rate dips. Mine reached the mid 50s before recovering as the player turned towards a dune, and as I grabbed the sides of my face in horror at what my otherwise performant 1440p card had been reduced to.

Still, a mid-50 fps dip is just about acceptable to me, but if the benchmarking performance translates into regular in-game drops, I'll probably just slap on frame generation and call it a day. Latency be damned.

As you would expect this results in many more frames and an average of 113 fps, which is lovely and smooth. What isn't smooth, thanks to FSR, is the image quality—even at, err, Quality. Monster Hunter Wilds has a grainy, noisy aspect to its art style, and it interacts with AMD's compute-based upscaler in ways I find… displeasing.

Of particular note is the pack of hairy beasts you encounter as you scale the dune before entering the camp, as the fur gives FSR some serious fizz-related trouble. Turning upscaling off entirely makes those sub 60-fps dips more frequent, so here I am once more caught between a frame rate rock and an image quality hard place.

Of course, I could always play around with the individual graphics settings a bit more once I get the game proper and see if I can keep my smooth frames without leaning on the upscaling button—but when the frame generation toggle is sitting right there, I might as well call it a salve for my wounds and save myself some hassle.

Besides, is it just me or does Monster Hunter Wilds look fairly average even with the upscaling off? It's not a terrible-looking game, but after spending the day benchmarking Horizon Zero Dawn for a different article, the contrast is pretty stark. HZD and its even-prettier sequel also run very nicely on my machine, without leaning on upscaling or frame gen to avoid below-60 fps dips. Just saying, is all.

Image 1 of 4

Image from the Monster Hunter Wilds benchmark results screen

(Image credit: Capcom)

4K | Native | RT High

Image 2 of 4

Image from the Monster Hunter Wilds benchmark results screen

(Image credit: Capcom)

4K | DLSS Quality | RT High

Image 3 of 4

Image from the Monster Hunter Wilds benchmark results screen

(Image credit: Capcom)

4K | DLSS Quality | RT High | Frame Gen

Image 4 of 4

Image from the Monster Hunter Wilds benchmark results screen

(Image credit: Capcom)

4K | DLSS Quality | High (no RT) | Frame Gen

I have high expectations for frame rates and fidelity. My gaming PC is powered by an Intel Core i9 14900K and an RTX 4080 Super, which means it's power-hungry, hot, and loud. I don't mind it that way, providing it delivers me fluidity in frames even at 4K. That it does, most of the time.

Monster Hunter Wilds is shaping up to be a pig to play. I began my benchmarking quest by trying to find an actual benchmark—the basic frame rate I could expect from 4K, max settings, ray tracing set to high, and with no upscaling whatsoever. This earned me the bewildering rating of 'Good' performance, so sayeth the MH benchmark's results screen, with an average frame rate of 48.75 fps. The thing is, that's somewhat skewed by the more performance-friendly scenes in the lengthy benchmark run, which includes both rendered in-game cutscenes and actual gameplay. The cutscenes run faster, on the whole, than the actual in-game areas, which appears to somewhat boost the final frame rate. As such, I suspect my actual in-game experience will be lower on average than 48 fps.

Time to tweak. I'm not so up my own rear-end to not expect to have to enable DLSS for greater frames per second while at 4K. That's what I start with, DLSS in Quality mode. This hands me a bump to 63.47 fps on average, and a pat on the back with an 'Excellent' rating. Yay.

The most demanding patch of the benchmark is oddly the area with very little going on: the actual desert. There can be little more on-screen than a big pile of sand and yet I'll experience frequent dips below a steady 60 fps. To combat this, I'll have to bring out the feature on everyone's lips right now.

Monster Hunter Wilds Benchmark

(Image credit: Capcom)

Frame generation deployed. By using an RTX 40-series graphics card with support for Nvidia's Frame Generation feature, I'm offered AI-generated 'fake' frames in between my real ones to help push my frame rate into the steady 60s. Though a mere bump of 4.78 fps on average makes it one of the lowest percentage gains I've seen for enabling this usually impressive setting. For my trouble, and higher frame rate, I also only received a 'Playable' rating. A downgrade, for an upgrade? Cheers.

For my final trick, I will improve my frame rate by 29% with only a couple of movements of my mouse. I'm disabling ray tracing. Not lowering it, either, just flat-out disabling it. This is because I've seen a massive improvement in performance, from 68.25 fps to 88.23 fps, which greatly reduces the chance of a sub-60 frame rate at any point. I don't miss it. The loss of accurate reflections in the one pond shown in the benchmark is easily negated by the fluidity of the game on my 144 Hz monitor. Maybe in the full game I'll be missing my reflection more, vain PC gamer that I am, but not right now.

Strangely, my PC's best performance in-game only earns me a rating of 'Good'. Looks like Frame Generation really messes with the game's ability to give out nice compliments. It's fine.

So, there we have it. I landed on a solid combination by doing entirely the expected: enabling DLSS and disabling ray tracing. A tale as old as time—or at the very least, as old as 2018.

I went as low as I'd be willing to go with a game I'd spent a good chunk of money on.

I wish I'd not done these benchmarks, because now I've seen things I cannot unsee: namely, that my build is simply not up to snuff in 2025. At least, not for a game like Monster Hunter Wilds.

I went in expecting a decent experience, because my RTX 3060 Ti has served me well until now. But I don't tend to play all the latest AAA games, so I didn't really have the best sample to make that assumption. As it turns out, my Core i5 12600KF and RTX 3060 Ti build just isn't capable of playing MHW at a frame rate that I consider acceptable, even if I lower the settings considerably below where I'd like them.

My usual strategy when booting up any new game is to set things on the High preset at 1440p, slap on DLSS at Quality, and hit go. Then, if I struggle to hit the frame rate I want, I start to lower some GPU- and VRAM-intensive settings until I'm hitting above 60 fps. Unfortunately, I wasn't able to achieve that with MHW.

At preset High settings (HDR off, Reflex on, Ray Tracing off) with DLSS enabled at Quality, I achieved a score of 17,343 and an average of 50.86 fps. But in the open world portion of the benchmark, those frame rates dropped to a gut-wrenching 30 fps. No thanks.

I spent a good while dropping settings here and there to little effect. Eventually, I went as low as I'd be willing to go with a game I'd spent a good chunk of money on. That's the same as above but with sky/cloud quality on medium, shadow quality on low, distant shadow quality on low, ambient light quality on low, ambient occlusion off, screen space reflection off, and volumetric fog on low. Oh, and—what I thought would be my saving grace—with DLSS on Performance rather than Quality setting.

Nope. The result, as you can see in the video, was a score of 18,900 and an average frame rate of 55.31. And yes, in the open world portion, I got about 40 fps instead of the 30 fps I got at the High Preset, but I still wouldn't spend my money on a game for those frame rates. So I'll be giving Monster Hunter Wilds a miss, unfortunately, and reconsidering whether I think the RTX 3060 Ti cuts it anymore.

For my benchmark run, I've used a resolution of 4K, with the Ultra graphics preset enabled, which sets DLSS to Quality. Frame generation and ray tracing are both disabled, as the former causes various glitches in the benchmark and the latter just tanks the 1% lows. I'd normally use DLSS Balanced or even Performance, depending on the game but I get a reasonably constant frame rate with Quality.

The full Monster Hunter Wilds game has the same wealth of graphics settings as the benchmark, so there are a lot of things that can be tweaked to get a smoother or higher frame rate, but you're almost certainly going to have to use upscaling in all cases.

I've recently upgraded to a modestly beefy rig, complete with an AMD Ryzen 7 7800X3D / RTX 4070 Super combo, and Monster Hunter Wilds is one of the games I was looking forward to testing on it. Running everything on High at 1440p, without ray tracing or frame generation, I managed to get an average fps of just over 90 in the benchmarking tool.

This frame rate intuitively feels fine to me (though the benchmarking tool deems it 'excellent'). However, the average was brought down by consistent 70s and 80s in the gameplay portion of the benchmarking tool. As the storm fades and the world peels back, it even got down to the 60s. This drop is substantial, though I never saw stuttering that I think would majorly affect how it feels to play. I'll have to get my hands on the full game to determine if that's true.

I felt relatively content with my lot here. That's when I was struck by the thought: 'What's the point in getting a new PC if I can't crank up the performance?' Too right, erm, me. Off to Ultra settings I went, and though the average fps landed almost 20 below High, the disparity between frame rates was much tighter. Testing felt more consistent, even though it dropped down to the 50s in the open desert area. The benchmarking tool rated my Ultra performance as 'excellent'. Thanks, Capcom, for the validation of my rig.

Alright, I'm a modern PC gamer and there's one thing I haven't talked about yet: ray tracing. I couldn't test out a game without trying to get puddles of water and gleams off armour looking their prettiest, so I tried a few runs with ray tracing enabled. Ultra settings with High ray tracing felt a bit ambitious but my rig managed an average of 69 fps, with High settings and Medium ray tracing getting around 10 fps more. It's nice to know I could feasibly run Monster Hunter Wilds on Ultra with High ray tracing but I think High settings is my sweet spot right now. Just please don't dip below 60 fps in the actual game.

There isn't a potato mode potato-y enough to get Monster Hunter Wilds running on my PC.

There isn't a potato mode potato-y enough to get Monster Hunter Wilds running on my wee office gaming PC. I'll grant you, it's not a monster in itself. It's a lovely little thing, a pint-sized beauty, but I have shackled it to one of Intel's last-gen GPUs and that, I expect, is the reason behind its struggling frame rates.

It's also one of the reasons I wanted to test on this PC. Having jammed the RTX 5090 into my home rig to see if it'll catch fire, I also know that it will run MHW quite happily until it does. For the record, I'm getting 128 fps average on the RTX 5090 with everything maxed out at 4K, but with DLSS Quality and Frame Generation enabled.

But the performance of Intel's GPUs, especially the first gen Alchemist ones, has been up and down to say the least. I'm running the latest driver for my Arc A770, released this month for Pirate Yakuza, but still I won't be hunting any monsters with this machine until Intel releases some super-special MHW-optimised driver.

I went in modestly with my expectations—running as I am a 3440 x 1440 ultrawide monitor. So I set it at Medium settings, with FSR 3 and its frame generation enabled from the get-go.

Nick warned me against it, and he was right. There are a ton of weird artifacts that pop up as a result of frame gen, and I think I'd find it jarring in the game despite the fact it gave me an initial 41 fps average. Also, as a benchmark, it gives me no indication of what the input latency will be like. And judging from the native, pre-frame-gen frame rate, I expect it will be very high.

So, I chopped at the settings, and chopped again. Until, exasperated, I gave up my judicious pruning of performance settings—having only made modest fps gains—and dropped to the Low preset. Still, the benchmark informed me "Settings changes recommended" with my average frame rates still south of 28 fps.

With not a little trepidation I turned all the dials to 1. Switching to the Lowest preset, and dropping the XeSS level to Ultra Performance, has finally delivered me a score and average frame rate the Monster Hunter Wilds benchmark deems "Playable".

I, however, do not agree.

It looks horrific. Maybe I'm not the target audience anyway, and in my past I have suffered dreadful image quality just to play a game I was desperate to try (running Oblivion on a fraction of my CRT screen just to get a playable frame rate), but I would not be willing to run Monster Hunter Wilds at these settings.

The fact it's a shade under 32 fps on average certainly isn't enough to justify the woeful state of the visuals. The models lack texture, and often geometry, and when a couple of monsters are going at it they end up being smushed together in an orgy of colourful but utterly indistinct pixels.

And don't get me started on the lack of water effects. There is a certain Minecraftiness to the frog-thing-in-a-pond part of the benchmark.

Basically: Urgh, do not play if you're rocking an Intel card.

]]>
https://www.pcgamer.com/hardware/monster-hunter-wilds-pc-performance-from-nvidias-latest-past-amds-greatest-to-intels-failing-silicon-this-is-what-the-game-does-to-the-teams-own-rigs/ vB6xKdNEbg92gRBQYFbMMn Thu, 13 Feb 2025 16:21:17 +0000
<![CDATA[ Assassin's Creed Shadows will let you take your pick from the frame gen and upscaling buffet as 'a mix and match approach is possible' ]]> It's on the lips of every PC gaming hardware enthusiast of late, whether it's spat out with vitriol or whispered with a twinge of desire: "frame generation". Whether those words give you shudders of the good or bad kind, get used to them because GPUs and games are leaning ever more in the direction of AI-aided rendering. Case and point: Assassin's Creed Shadows, which will offer a veritable variety of frame gen and upscaling goodness.

In a recent tech Q&A, Pierre F, technology director of Assassin's Creed Shadows, explained that the upcoming game will offer the full cornucopia of frame gen solutions (minus DLSS 4 Multi Frame Gen, it seems). And more than this, you'll be able to mix and match it with your upscaler of choice for quite the non-native buffet.

The tech director says: "DLSS 3.7, FSR 3.1 and XeSS 2 are all supported for both upscaling and frame generation purposes. Our own Temporal AA solution is also available. Note that a mix and match approach is possible. You may select one technology to upscale while using a different technology for frame generation purposes."

Of course, those with an AMD or Intel card won't be able to use Nvidia's DLSS upscaling or frame generation, but the ability to mix and match might be of special benefit to Nvidia RTX 20-series and 30-series gamers. It will mean these GPUs will be able to use FSR frame gen, for instance, in combination with DLSS upscaling.

AMD GPUs would be able to use XeSS upscaling alongside FSR frame gen, but only with a version of that upscaling that's not as good as FSR 3.1. And regarding Intel GPUs, well, they'll probably be sticking with XeSS 2 frame gen, as even Alchemist cards can use it.

Speaking of Intel XeSS 2 frame gen, though, isn't that somewhat of a rare gem? To date, we only have two games that support it: F1 24 and Marvel Rivals. Assassin's Creed Shadows will make a third, and that's thanks to Ubisoft's partnership with Intel for this game.

Pierre F explains: "As part of our partnership with Intel, we had direct and privileged access to XeSS 2 before it was made public, and we worked directly with Intel engineers to provide the best implementation possible." It'll be interesting to see how the blue team's frame gen plays out in more games against the titans of FSR and especially DLSS.

We shouldn't have long to wait to find out. Assassin's Creed Shadows—after a disappointing delay—is set to launch on March 20, 2025. This next iteration in the series is promising dual-protagonist gameplay to satisfy both stealthers and hack-n-slashers alike—plenty for frame gen and upscaling to deal with.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/games/assassins-creed/assassins-creed-shadows-will-let-you-take-your-pick-from-the-frame-gen-and-upscaling-buffet-as-a-mix-and-match-approach-is-possible/ N9hiByiLafVp7bYWiGwkxF Thu, 13 Feb 2025 15:17:24 +0000
<![CDATA[ Avowed PC performance analysis: Sometimes good, sometimes bad, frequently odd ]]> Originally planned as a Skyrim-like open-world game, set in the Pillars of Eternity universe, Avowed eventually transitioned into being semi-open. By that, I mean lots of very large, open levels, all connected together. But that change aside, in many ways, Avowed does feel like a cross between Skyrim and PoE, and if you're a fan of both games, you'll almost certainly like this one too.

What you might not be so keen on is that developers Obsidian targeted a 30 fps performance window for the Xbox Series X/S version, and while that doesn't necessarily mean the team set out for the PC version to be the same, as you'll soon see, Avowed isn't pushing out ultra-high frame rates.

On the plus side, it does look very good, with some notable caveats. Created using Epic's Unreal Engine 5, Avowed doesn't tote path-traced global illumination, but you do have the option to switch to Lumen. From my testing, it seems to be operating in the 'software RT' mode, which means you don't need a ray-tracing-capable GPU to enable the feature. However, that mode isn't as visually accurate as the hardware-accelerated one, so don't expect Alan Wake 2 levels of graphics.

Obsidian lists the minimum PC requirements for Avowed as being a Core i5 8400 or Ryzen 5 2600 CPU, with an Arc A580, GeForce GTX 1070, or Radeon RX 5700 for the GPU. These suggest that the game is more likely to be GPU-limited, rather than being super heavy on the CPU, but it's a little more nuanced than that. At least you don't need a lot of system memory, as 16 GB of RAM is both the minimum and recommended amount.

Test PC specs

Benchmark runs were carried out in an early location, a seaport called Claviger's Landing, which offers a good range of graphics loads: from geometry-heavy buildings and environmental detail, to open vistas, to expanses of water that showcase how the game's engine handles reflections. Other areas of the world of Avowed can be more demanding than this region but it's a good indicator of how well the game will run in general.

If you're a frequent reader of these game performance analysis pieces, you may have already noticed that one commonly used platform is missing from the list of test PCs above. On the Asus ROG Ally X I use, the game ran fine once but after changing the graphics settings, it would just fully crash upon loading. Deleting the game and all other associated files didn't help, and I never managed to complete any benchmark runs with that device.

The review code was otherwise very stable, with just the odd glitch, including an old-school 'falling through the floor' bug that occasionally reared its head. On some systems, the game would crash when executing its initial shader compilation but never did so on the second attempt.

Low quality preset

Starting with the Low quality preset, to kick off our look at native rendering performance, the above video shows that Avowed looks pretty decent from the start. All these videos were captured at 4K, though they're shown at 1080p here to reduce the file size. Beyond that, nothing else was changed from the preset values.

My first criticism of the Low preset is just how jarring the model and shadow pop-in is. This is to be expected to a certain degree at minimum graphics settings in any game, but it's quite stark in Avowed. So much so that I don't recommend using it, no matter how weak your gaming PC is.

Fortunately, the average performance at 1080p and even 1440p is more than good enough on the Core i7 9700K, RX 5700 XT combination to suggest that the Low preset is more suited to very basic gaming PCs. It's a shame that Avowed wouldn't run on the ROG Ally X because it would have been very interesting to see how a handheld gaming PC copes.

However, what's more important to note here are the 1% low frame rates. With the exception of the Core Ultra 7 265K PC, possibly due to using DDR5-8000 RAM, none of them achieves a low of 60 fps and that doesn't bode well for higher graphics settings.

Medium quality preset

Switching to the Medium quality preset instantly improves the overall look of Avowed, as there is considerably more detail in the world. Colours seem richer and even the pop-in of shadows is reduced. It's still there, of course, and if you quickly scan through the other presets, you'll see that it never quite disappears, even on maximum graphics settings.

That's a bit disappointing, to be honest, because when Avowed is looking at its best, it really is very pretty. But when you move about, the cracks start to show in the picture.

One such niggle is the fact that certain objects have a clearly defined boundary that triggers their animation. For example, flags hanging in the distance are stiff as a board until you get reasonably near to them. As you cross the trigger boundary, they immediately start to swing about as if blown by the wind. It's a somewhat clumsy way of reducing the CPU load and the zone really should be much smaller at higher quality settings.

High quality preset

The High quality preset marks the sweet spot between visual fidelity and overall performance. At 1080p, all of the test PCs produce acceptable frame rates on average and while those 1% lows don't look great, it actually feels okay in-game. This is by far the oddest behaviour of Avowed I've come across in my testing—running around with high frame rates feels worse than it does at, say, 40 fps (with 25 fps 1% lows) and this ties in with Obsidian's comments about the development targeting 30 fps on consoles.

It would seem that the entire world is designed with that performance window in mind, regardless of the platform, and to be fair to Obsidian, Avowed does feel perfectly fine to play at that level. At 60 fps, the game feels smooth as silk, even though the 1% lows are nowhere near that rate.

However, this doesn't mean it is always plain sailing. On the lower-tier PCs, traversal stutter frequently rears its head—as you move through the map, the game will briefly judder, probably due to loading in required assets. Avowed isn't especially heavy on VRAM (8 GB GPU owners will be fine) but neither is it very light, and loading all of the models and textures for a map in one go would probably push the VRAM requirements up too far.

At least there's no hint of shader compilation stutter, as the game handles all of this during loading. It's a relatively quick process too, even when doing it for the first time, though it will repeat the process (albeit very briefly) if you change a graphics setting.

Epic quality preset

Using the Epic quality preset showcases Avowed at its very best, though it's arguably only a small improvement over the High preset. Pop-in is at its minimum, though still present, but the best reasons for using this preset are the appearance of water and the quality of the shader-based antialiasing.

With the lower quality presets, they're not so hot looking, especially at 1080p. In fact, the antialiasing is terrible at 1080p Low quality and that alone is good reason to never use that preset. The appearance of water, especially the surface of the sea, isn't great either, unless you use the Epic preset, and smaller bodies of water, such as puddles on the ground, never produce really detailed reflections.

While all of the test PCs run Avowed pretty well at 1080p Epic, the 1% lows of the weaker systems are too low, and that feeling of smoothness experienced with the High preset gives way to lag and stutters. This was especially true of the RTX 4050 laptop and I'm certain that its 6 GB of VRAM is the primary cause, even though the recorded 1% lows are better than those of the 8 GB RX 5700 XT.

There's one setting I've not mentioned so far that, in theory, should make the graphics even better and that's the option to enable ray tracing. It's a simple toggle, requiring a game restart to work, but it's not clear how the use of Lumen integrates with the other quality settings. I did reach out to the publishers about this but I've yet to hear back from them.

Lumen ray tracing

As already mentioned, activating the ray tracing toggle enables Lumen in 'software RT' mode, which is why the Radeon RX 5700 XT is able to use it. However, I'm on the fence as to whether it's worth using or not.

To begin with, ray tracing with the Epic quality preset doesn't work the tested PCs any harder than the native Epic preset—in some cases, it's a fraction slower, in others it's the same performance, but in the case of the RTX 4070, it ran my benchmark test slightly better. That would be great news if Lumen made a big difference to the graphics, but it doesn't or at least, not in the area that I tested.

I was hoping for better water reflections but they're still quite low resolution, and while the overall lighting and shadowing are marginally better, there's a distinct excess of bloom to my eyes. Everything seems a fraction more washed out than without Lumen but you may well get different results on your gaming PC and monitor.

The good news is that it doesn't take long to activate ray tracing, reload the game, and see if it's worth using. Unlike upscaling, which might not be worth using at all.

Upscaling performance

A screenshot from the PC version of Avowed, from Xbox Games Studios

(Image credit: Xbox Games Studios)

Upscalers achieve a performance boost by lowering a frame's resolution by a set percentage before most of the rendering takes place. A complex algorithm then scales it back up to the monitor's resolution and then the GPU displays the frame after the final rendering touches have been applied.
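As a rough guide to what that lowering actually means in pixels, the per-axis render scales most upscalers use are around two-thirds for Quality, 0.58 for Balanced, 0.5 for Performance and one-third for Ultra Performance (exact values vary by upscaler and version, so treat these as ballpark figures). A quick sketch shows the internal resolution the GPU renders before the upscaler takes over:

```python
# Approximate per-axis render scales for common upscaler quality modes.
# Ballpark figures only; DLSS, FSR and XeSS each differ slightly by version.
SCALES = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5, "Ultra Performance": 1 / 3}

def internal_resolution(output_w, output_h, mode):
    s = SCALES[mode]
    return round(output_w * s), round(output_h * s)

print(internal_resolution(3840, 2160, "Performance"))  # (1920, 1080): 4K output rendered at 1080p
print(internal_resolution(2560, 1440, "Quality"))      # (1707, 960): 1440p output rendered at ~960p
```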

We've reached a point in PC gaming where upscalers are expected to be supported and unfortunately, in some cases, they're necessary to achieve a sensible level of performance. The good thing about Avowed is that it doesn't really need upscalers; the bad news is that what it does have implemented isn't particularly great. You get the full gamut of Nvidia's DLSS 3.7 (minus ray reconstruction) but only the upscaler part of AMD's FSR 3.

It might offer Intel's XeSS too but my Arc A770 has decided to shuffle off this electronic world, and there's no detailed info about the game's upscalers on any official site. Avowed does offer Unreal Engine's TSR upscaler but it's no better than FSR, so there's little reason to use it.


FSR 3 quality upscaling
Core i5 13600K, Radeon RX 7800 XT, 1440p High preset

At 1440p, FSR 3 Quality mode should provide a small performance uplift without impacting on a game's visuals but if you watch the above video carefully, you'll notice smearing and ghosting in places, and the surface of the sea towards the end of the run looks especially poor. And while you get a very nice improvement to the average frame rate, FSR does almost nothing for the 1% lows.

You might feel that this is acceptable for your needs but recall the point I was making earlier about the game not feeling smooth at high frame rates. Upscaling just makes that sensation more noticeable. Hopefully Obsidian will be able to resolve the poor quality of FSR with future patches.


FSR 3 Quality upscaling
Core i7 9700K, Radeon RX 5700 XT, 1080p High preset

Just to make the point about the somewhat iffy implementation of FSR in Avowed even clearer, the RX 5700 XT is the sort of graphics card that would normally benefit from upscaling in a modern 3D game. Except here it makes no difference whatsoever, other than to make the graphics worse.

If the native performance of the likes of the 5700 XT wasn't decent, I'd be very disappointed and more than a little annoyed with this situation. It's generally accepted that while AMD's FSR isn't quite on par with Nvidia's DLSS, it's more than good enough in the grand scheme of things. Apart from here, of course, where it's borderline useless.


DLSS Balanced upscaling
Ryzen 9 9900X, GeForce RTX 4070 Ti, 4K Epic, Lumen ray tracing

Not that DLSS runs any better. Sure, the graphics aren't butchered like they are with FSR, but the performance uplift isn't great, even with frame generation. One might argue that asking an RTX 4070 Ti to cope with 4K Epic quality with Lumen is a silly task, as it's not that kind of GPU, but DLSS Performance upscaling forces the frame resolution right down to 1080p.

Natively, the Ryzen 9 9900X, RTX 4070 Ti test PC achieves 98 fps on average, with 1% lows of 44 fps at 1080p Epic quality with Lumen. However, DLSS Performance at 4K only results in an average performance of 66 fps. That's markedly different and it shows that DLSS hasn't been implemented well, either.

Now, you might think that applying DLSS, especially with frame generation, is a must to get high frame rates. However, the more the average fps differs from the 1% lows, the less smooth Avowed feels. Keep both around the 60 and 30 mark, respectively, and it's lovely. At 91 and 52 fps, for example, it feels quite janky—almost like it's microstuttering.
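For anyone unfamiliar with how those two numbers relate, here's a minimal sketch of one common way to derive average fps and '1% low' fps from a capture of frame times. Benchmarking tools differ on the exact definition (some average the slowest 1% of frames instead), so this is illustrative rather than the method behind the figures above:

```python
# Minimal sketch: average fps and a '1% low' figure from frame times in
# milliseconds, using the 99th-percentile frame time. Different tools use
# slightly different definitions, so treat this as one common approach.
def fps_metrics(frame_times_ms: list[float]) -> tuple[float, float]:
    total_seconds = sum(frame_times_ms) / 1000.0
    average_fps = len(frame_times_ms) / total_seconds
    worst_1pct_ms = sorted(frame_times_ms)[int(len(frame_times_ms) * 0.99)]
    return average_fps, 1000.0 / worst_1pct_ms

# Example: mostly ~11 ms frames (about 90 fps) with occasional 25 ms spikes.
frames = [11.0] * 990 + [25.0] * 10
avg, low = fps_metrics(frames)
print(f"average: {avg:.0f} fps, 1% low: {low:.0f} fps")
# The bigger the gap between these two numbers, the less smooth the game feels.
```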

Truthfully, the only reason why one should bother using DLSS is to get better antialiasing, especially when using one of the lower-quality presets. It's a shame there's no DLAA option, though.

Final thoughts

A screenshot from the PC version of Avowed, from Xbox Games Studios

(Image credit: Xbox Games Studios)

The title of this performance analysis is 'Sometimes good, sometimes bad, frequently odd' because I feel this best summarises the PC version of Avowed. On the High or Epic preset, it does look good and even runs pretty well. The bad? Well, that's obvious: the shoddy upscaling, the wonky antialiasing, the disappointing 1% lows, and the fact that it's yet another Unreal Engine-powered game that sports traversal stutter.

But what's odd about it? Well, at the start of this article, I described Avowed as being Skyrim in the Pillars of Eternity universe. Not because it's a fantasy RPG but because the world has that same expansive-yet-empty feel as Skyrim. While testing the early area of Claviger's Landing, I decided to look at the frame times on two systems, the Core i7 9700K and the Ryzen 9 9900X, to see if I could better understand why the 1% lows were so…well…low.

Looking at the old Coffee Lake chip first, you can see just how variable the frame times are, bouncing around all over the place. And yet there's nothing about the world that suggests there's a huge demand on the CPU. There aren't hordes of NPCs and what few there are either stay in one place or wander through short paths.

It's a smoother affair with the Zen 5 chip, as one would expect, but it's still not great and bizarrely, in the section of the benchmark run where the 9700K's frame times even out, they get worse with the 9900X. In the same test, the Ryzen 7 5700X3D was being worked surprisingly hard, with four logical cores (i.e. four threads on two cores) averaging 60% utilisation, with another four being around 50%.

That suggests Avowed is quite CPU intensive, but if it were, then the more modern test PCs would achieve far better 1% lows than the older and lower-tier ones. Only the Core Ultra 7 265K gaming rig stands apart, though that's certainly due to the use of DDR5-8000 in that system. As I said, it's odd.

Avowed is a lot of fun to play and it's rich in 'old-school RPG' vibes, which help in no small way to move one's attention away from the wonky performance. Just stick to the High preset, avoid upscaling unless you have a GeForce RTX graphics card, and aim for an average frame rate around the 60 fps mark (reducing shadow quality and draw distance are the main things to change). Oh, and ignore the 1% lows. In fact, just ignore any performance metrics altogether and judge it on feel.

]]>
https://www.pcgamer.com/hardware/avowed-pc-performance-analysis-sometimes-good-sometimes-bad-frequently-odd/ QFBJw7TBtHn8Btb2rYTpb7 Thu, 13 Feb 2025 14:00:00 +0000
<![CDATA[ New EU regulation finally cuts massive CPU boxes down to size ]]> Once upon a time I bought a single, very much regular sized eyebrow pencil online. It arrived in a needlessly MASSIVE cardboard box, presumably for ease of storing in the back of a delivery van. Thankfully most of that box was recyclable, but the same cannot be said for what's become typical for CPU boxes.

For me, excitement for new hardware is only matched by the existential dread that arises when confronted with a CPU box full of styrofoam packing peanuts that you know is just gonna go straight to landfill. Well, perhaps no longer; as of February 11, new EU regulation came into effect that will see those pesky CPU boxes finally cut down to size (via TechPowerUp).

The European Commission's refreshed Packaging and Packaging Waste Regulation (PPWR) has a number of aims, though the most pertinent here would be, "Minimising the weight and volume of packaging and avoiding unnecessary packaging." Some are already wondering aloud whether this means those cooler bundles CPU manufacturers are so fond of may soon become a thing of the past, but I'm unsure whether those would fall under the designation of "unnecessary" packaging—surely some people use those bundled coolers, even if most of us rightly slap on a third-party one.

We already have EU regulation to thank for standardised charging ports on your phone, stronger legislation around 'right to repair,' and—with a bit of luck—the avoidance of anything like an AI monopoly in the future. This latest packaging regulation gives manufacturers far and wide an 18-month grace period to get their act together for a hopefully less wasteful, potentially greener future. That means the days of Destiny engram-esque packaging or any of the infamous examples seen in this story are decidedly numbered.

Furthermore, the PPWR aims to both "make all packaging on the EU market recyclable in an economically viable way by 2030," and "decrease the use of virgin materials in packaging and put the sector on track to climate neutrality by 2050." E-waste is something that continues to give me The Fear, so I definitely welcome the European Commission taking aim at wasteful packaging more broadly.

Practically speaking (and less full of existential dread), holding on to the original box can be handy for ensuring delicate tech survives, say, a stressful house move. For this reason, I've been wishing for a long time that hardware boxes were ever so slightly smaller—after all, if the boxes for my anime figures can form a pleasingly compact fort, then why not CPUs?


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/new-eu-regulation-finally-cuts-those-massive-cpu-boxes-down-to-size/ tMZeZsgmgupFcYE8DJiwnA Thu, 13 Feb 2025 13:30:04 +0000
<![CDATA[ Nvidia didn't send the Monster Hunter Wilds devs any RTX 5000 cards before their CES reveal, so official DLSS 4 support is still in progress ]]> With Monster Hunter Wilds likely to be one of the biggest new games of 2025, I thought there was a chance that Capcom and Nvidia would be teaming up to ensure that the PC version of Wilds was an absolute showstopper. Typically Nvidia and AMD collaborate with developers to test games before release and ensure that their drivers are ready for launch day; surely they also sometimes seed their new hardware with major developers ahead of release too, right? When I spoke with Monster Hunter Wilds director Yuya Tokuda back in January, just a week after the announcement of the RTX 5000 series, I asked if his team had gotten to play with the new hardware yet—and if we could expect to see DLSS 4 support in Wilds on day one.

Somewhat surprisingly, no and no.

"We certainly haven't received anything or any notice within the Monster Hunter Wilds team," Tokuda told me on January 15. That was a full nine days after Nvidia announced the RTX 5000 series at CES, though well before the cards actually hit the streets (hopefully no one on the Monster Hunter team had to camp out in front of an electronics store in Akihabara to get ahold of a 5090).

Capcom may have gotten a shipment of shiny new GPUs from Nvidia since I interviewed Tokuda in Los Angeles, but I thought if any game developer was likely to get its hands on the hardware pre-release, it'd be the studio behind what's likely to be the biggest PC game of the year. Shows what I know!

When can we expect to see Nvidia's newly announced and seemingly quite impressive DLSS 4 in Monster Hunter Wilds, then? Wilds' beta and its PC benchmark both support DLSS as well as AMD and Intel's AI-driven upscaling options, but they're not exactly rocking the newest version of Nvidia's tech: both versions of the game use DLSS 3.7.10, released in mid-2024. I asked Tokuda, and he didn't give a precise answer—but safe to say it's going to be a bit.

"We always need to be able to test first, to be able to tell if we can support it or not, so it's hard to tell at this point in time, but we do always try to test with the most recent graphics cards," Tokuda said.

The good news for the tinkerers out there is that you don't actually have to wait for Capcom to implement DLSS 4 support to use it in Monster Hunter Wilds. After running the game's benchmark, I used the latest version of DLSS Swapper to upgrade the DLSS version to the latest and greatest, which, to be clear, isn't exclusive to the RTX 5000 series. It runs well on my RTX 3070, and should be helpful for those of us with older graphics cards who are forced to drop from "balanced" to "performance" mode to keep demanding games like Monster Hunter Wilds chugging along at 60 fps.
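As I understand it, what a tool like DLSS Swapper automates ultimately boils down to replacing the DLSS library a game ships with (nvngx_dlss.dll) with a newer copy. The sketch below is a hedged, manual illustration of that idea rather than the tool's actual code; both paths are hypothetical, keep a backup, and be wary of anti-cheat-protected games:

```python
# Hedged sketch of the manual equivalent of a DLSS DLL swap: back up the
# game's bundled nvngx_dlss.dll and drop a newer copy in its place.
# Both paths are hypothetical examples; adjust them to your own install.
import shutil
from pathlib import Path

game_dir = Path(r"C:\Games\MonsterHunterWildsBenchmark")  # hypothetical install path
new_dll = Path(r"C:\Downloads\nvngx_dlss.dll")            # newer DLSS DLL you've sourced

target = game_dir / "nvngx_dlss.dll"
backup = target.with_name(target.name + ".bak")

if target.exists() and not backup.exists():
    shutil.copy2(target, backup)   # keep the original so you can roll back
shutil.copy2(new_dll, target)      # overwrite with the newer version
print(f"Swapped {target} (backup at {backup})")
```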

Monster Hunter Wilds: All the details to know
Monster Hunter Wilds weapons: Open the arsenal
Monster Hunter Wilds monsters: Which beasties are back
Monster Hunter Wilds tips: Up your hunting skills
2025 games: All the other releases coming this year

]]>
https://www.pcgamer.com/games/action/nvidia-didnt-send-the-monster-hunter-wilds-devs-any-rtx-5000-cards-before-their-ces-reveal-so-official-dlss-4-support-is-still-in-progress/ EWxLdqXMpEzWvfF6XXA2FM Wed, 12 Feb 2025 21:10:35 +0000
<![CDATA[ New research says ChatGPT likely consumes '10 times less' energy than we initially thought, making it about the same as Google search ]]> It's easy to slate AI in all its manifestations—trust me, I should know, I do so often enough—but some recent research from Epoch AI (via TechCrunch) suggests that we might be a little hasty if we're trashing its energy use (yes, that's the same Epoch AI that recently dropped a new, difficult math benchmark for AI). According to Epoch AI, ChatGPT likely consumes just 0.3 Wh of electricity, "10 times less" than the popular older estimate which claimed about 3 Wh.

Given a Google search amounts to 0.0003 kWh of energy consumption per search, and based on the older 3 Wh estimate, two years ago Alphabet Chairman John Hennessey said that an LLM exchange would probably cost 10 times more than a Google search in energy. If Epoch AI's new estimate is correct, it seems that a typical GPT-4o interaction actually consumes about the same amount of energy as a Google search.
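To make that comparison concrete, here's the unit conversion and the ratios spelled out, a quick back-of-the-envelope sketch using only the figures quoted above:

```python
# Back-of-the-envelope comparison using the figures quoted in this article.
google_search_kwh = 0.0003                    # per search, as cited
google_search_wh = google_search_kwh * 1000   # = 0.3 Wh

old_chatgpt_wh = 3.0    # the older, popular estimate
new_chatgpt_wh = 0.3    # Epoch AI's new estimate

print(f"Google search: {google_search_wh:.1f} Wh")
print(f"Old ChatGPT estimate: {old_chatgpt_wh / google_search_wh:.0f}x a Google search")
print(f"New ChatGPT estimate: {new_chatgpt_wh / google_search_wh:.1f}x a Google search")
# Prints 0.3 Wh, 10x and 1.0x respectively: roughly the same as a search.
```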

Server energy use isn't something that tends to cross most people's minds while using a cloud service—the 'cloud' is so far removed from our homes that it seems a little ethereal. I know I often forget there are any additional energy costs at all, other than what my own device consumes, when using ChatGPT.

Thankfully I'm not a mover or a shaker in the world of energy policy, because of course LLM interactions consume energy. Let's not forget how LLMs work: they undertake shedloads of data training (consuming shedloads of energy), then once they've been trained and are interacting, they still need to pull from gigantic models to process even simple instructions or queries. That's the nature of the beast. And that beast needs feeding energy to keep up and running.

It's just that apparently that's less energy than we might have originally thought on a per-interaction basis: "For context, 0.3 watt-hours is less than the amount of electricity that an LED lightbulb or a laptop consumes in a few minutes. And even for a heavy chat user, the energy cost of ChatGPT will be a small fraction of the overall electricity consumption of a developed-country resident."

Epoch AI explains that there are a few differences between how it's worked out this new estimate and how the original 3 Wh estimate was calculated. Essentially, the new estimate uses a "more realistic assumption for the number of output tokens in a typical chatbot usage", whereas the original estimate assumed output tokens equivalent to about 1,500 words on average (tokens are units of text, typically a word or a fragment of a word). The new one also assumes just 70% of peak server power and computation being performed on a newer chip (Nvidia's H100 rather than an A100).

All these changes—which seem reasonable to my eyes and ears—paint a picture of a much less power-hungry ChatGPT. However, Epoch AI points out that "there is a lot of uncertainty here around both parameter count, utilization, and other factors". Longer queries, for instance, it says could increase energy consumption "substantially to 2.5 to 40 watt-hours."

It's a complicated story, but should we expect any less? In fact, let me muddy the waters a little more for us.

We also need to consider the benefits of AI for energy consumption. A productive technology doesn't exist in a vacuum, after all. For instance, use of AI such as ChatGPT could help bring about breakthroughs in energy production that decrease energy use across the board. And use of AI could increase productivity in areas that reduce energy in other ways; for instance, a manual task that would have required you to keep your computer turned on and consuming power for 10 minutes might be done in one minute with the help of AI.

On the other hand, there's the cost of AI training to consider. But on the peculiar third hand—where did that come from?—the benefits of LLM training are starting to plateau, which means there might be less large-scale data training going forwards. Plus, aren't there always additional variables? With Google search, for instance, there's the presumed cost of constant web indexing and so on, not just the search interaction and results page generation.

In other words, it's a complicated picture, and as with all technologies, AI probably shouldn't be looked at in a vacuum. Apart from its place on the mathematician's paper, energy consumption is never an isolated variable. Ultimately, what we care about is the health and productivity of the entire system, the economy, society, and so on. As always, such debates require consideration of multi-multi-variate equations in a cost-benefit analysis, and it's difficult to get the full picture, especially when much of that picture depends on an uncertain future.

Which somewhat defines the march of capitalism, does it not? The back and forth 'but actually' that characterises these discussions gets trampled under the boots of the technology which marches ahead regardless.

And ultimately, while this new 0.3 Wh estimate is certainly a pleasant development, it's still just an estimate, and Epoch AI is very clear about this: "More transparency from OpenAI and other major AI companies would help produce a better estimate." More transparency would be nice, but I won't hold my breath.

]]>
https://www.pcgamer.com/software/ai/new-research-says-chatgpt-likely-consumes-10-times-less-energy-than-we-initially-thought-making-it-about-the-same-as-google-search/ k7ijmxHnAie2C8WJMQNprL Wed, 12 Feb 2025 17:36:18 +0000
<![CDATA[ Meta might've done something useful, pioneering an AI model that can interpret brain activity into sentences with 80% accuracy ]]> Depending on what areas of the internet you frequent, perhaps you were under the illusion that thoughts-to-text technology already existed; we all have that one mutual or online friend that we gently hope will perhaps one day post slightly less. Well, recently Meta has announced that a number of their research projects are coming together to form something that might even improve real people's lives—one day. Maybe!

Way back in 2017, Meta (at that time just called 'Facebook') talked a big game about “typing by brain.” Fast forward to now and Meta has shared news of two breakthroughs that make those earlier claims seem more substantial than a big sci-fi thought bubble (via MIT Technology Review). Firstly, Meta announced research that has created an AI model which "successfully decodes the production of sentences from non-invasive brain recordings, accurately decoding up to 80% of characters, and thus often reconstructing full sentences solely from brain signals."

The second study Meta shared then examines how AI can facilitate a better understanding of how our brains slot the Lego bricks of language into place. For people who have lost the ability to speak after traumatic brain injuries, or who otherwise have complex communication needs, all of this scientific research could be genuinely life-changing. Unfortunately, this is where I burst the bubble: the 'non-invasive' device Meta used to record brain signals so that they could be decoded into text is huge, costs $2 million, and makes you look a bit like Megamind.

Dated reference to an animated superhero flick for children aside, Meta has been all about brain-computer interfaces for years. More recently they've even demonstrated a welcome amount of caution when it comes to the intersection of hard and 'wet' ware.

This time, the Meta Fundamental Artificial Intelligence Research (FAIR) lab collaborated with the Basque Center on Cognition, Brain and Language, to record the brain signals of 35 healthy volunteers as they typed. Those brain signals were recorded using the aforementioned, hefty headgear—specifically a MEG scanner—and then interpreted by a purposefully trained deep neural network.

Meta wrote, "On new sentences, our AI model decodes up to 80% of the characters typed by the participants recorded with MEG, at least twice better than what can be obtained with the classic EEG system."

This essentially means that recording the magnetic fields produced by the electrical currents within the participants' brains resulted in data the AI could more accurately interpret, compared to just recording the electrical activity itself via an EEG. However, by Meta's own admission, this does not leave the research in the most practical of places.
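Meta's announcement doesn't spell out exactly how that 80% character figure is scored, so the snippet below is just one plausible way to think about character-level decoding accuracy, comparing a decoded sentence against what was actually typed. It's an illustration, not Meta's evaluation code:

```python
# Illustrative only: a simple character-level similarity score between what a
# participant typed and what the model decoded. Meta's exact metric isn't
# specified in the quoted material, so treat this as a stand-in.
from difflib import SequenceMatcher

def char_similarity(reference: str, decoded: str) -> float:
    return SequenceMatcher(None, reference, decoded).ratio()

typed   = "the quick brown fox jumps over the lazy dog"
decoded = "the quick brown fox jumbs over the lasy dog"
print(f"character-level similarity: {char_similarity(typed, decoded):.0%}")
```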

For one, MEG scanners are far from helmets you can just pop on and off—it's specialised equipment that requires patients to sit still in a shielded room. Besides that, this study used a comparatively tiny sample size of participants, none of whom had a known traumatic brain injury or speech difficulties. This means that it's yet to be seen just how well Meta's AI model can interpret for those who really need it.

Still, as a linguistics dropout myself, I'm intrigued by Meta's findings when it comes to how we string sentences together in the first place. Meta begins by explaining, "Studying the brain during speech has always proved extremely challenging for neuroscience, in part because of a simple technical problem: moving the mouth and tongue heavily corrupts neuroimaging signals." In light of this practical reality, typing instead of speaking is kind of genius.

So, what did Meta find? It's exactly like I said before: Linguistic Lego bricks, baby. Okay, that's an oversimplification, so I'll quote Meta directly once more: "Our study shows that the brain generates a sequence of representations that start from the most abstract level of representations—the meaning of a sentence—and progressively transform them into a myriad of actions, such as the actual finger movement on the keyboard [...] Our results show that the brain uses a ‘dynamic neural code’—a special neural mechanism that chains successive representations while maintaining each of them over long time periods."

To put it another way, your brain starts with vibes, unearths meaning, daisy chains those Lego bricks together, then transforms the thought into the action of typing…yeah, I would love to see the AI try to interpret the magnetic fields that led to that sentence too.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
https://www.pcgamer.com/hardware/meta-mightve-done-something-useful-pioneering-an-ai-model-that-can-interpret-brain-activity-into-sentences-with-almost-80-percent-accuracy/ jTBcXQXp28neQz6qHBMgsN Wed, 12 Feb 2025 17:24:44 +0000
<![CDATA[ Zotac beats those dastardly GPU scalpers by selling RTX 50-series graphics cards to actual gamers courtesy of their Discord channel ]]> It came as little surprise a full week before the release of the new Nvidia RTX 5090 and 5080 GPUs that scalpers were already attempting to gouge on RTX 50 graphics cards, cards they didn't even have. But what, exactly, can be done? AIB GPU maker Zotac may have a solution, selling cards directly to gamers via Discord.

To quote Zotac on Discord (via VideoCardz), "we want to reward real gamers and active members by giving you the chance to secure a slot to purchase a Zotac Gaming GeForce RTX 5080 or 5090—no bots, no scalpers, just my fellow gamers."

The post explains that participants must be active members of the Zotac Gaming Discord who get involved in various "challenges and discussions" and that any cheating or manipulation will result in disqualification.

Beyond that it's not clear exactly how Zotac is choosing successful members. Inevitably, all this raises questions over how access to purchase slots can all be effectively policed. Almost immediately, posters on Reddit observed that, "the Discord server is being flooded with new people spamming their way to 'engagement'. Feels like a bot-race all over again."

The initial ickiness we felt as a team when we first saw this story is echoed in those concerns. The idea of only getting access to new, scarce graphics cards if you genuflect to a manufacturer and show yourself to be some sort of true fan, all for the mere opportunity to pay full price for a GPU, well, it doesn't feel great. Neither does the idea of having to take part in challenges to get in line.

However, it's also been noted that Zotac has reportedly created a separate private channel for long-time Zotac Discord members in order to make sure at least some of them get access to the cards. So, fair play to the company for giving loyal users the opportunity away from bots et al, so long as it is genuinely monitoring whether those users are really long-time folk.

Ultimately, no attempt to exclude scalpers is likely to be entirely successful. But equally this move by Zotac will surely put a few RTX 50 cards into the hands of gamers that would otherwise have been snapped up by bots and flipped for profit.

Of course, this is also a fairly labour-intensive way of going about selling GPUs. So, it's not entirely reasonable to expect every AIB card maker to conduct a similar program. But it would be nice to see a few more do something similar.

Some of the comments surrounding Zotac's program are also a timely reminder that the current situation isn't actually the end of the world. As one Redditor sagely observed, "patience is all you need. I got my 4090 about five months after launch, new for $1,600 MSRP. People just need to learn to wait. It ain't a big deal."

That can be difficult advice to swallow, given how used we all are to the instant gratification of modern online commerce. The idea of having to wait for something really jars. But maybe we all just need to recalibrate our expectations.

Instead of viewing GPU release dates as general availability dates, view them as the date from which you can get in line. And if you don't want to get in line, you don't have to. It's not the end of the world.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/zotac-beats-those-dastardly-gpu-scalpers-by-selling-rtx-50-series-graphics-cards-to-actual-gamers-courtesy-of-their-discord-channel/ ZTdEdhtSbDHGiRT7ob9iYe Wed, 12 Feb 2025 16:14:38 +0000
<![CDATA[ Here's Linux running inside a PDF, running inside a browser, running on a Windows PC ]]>

I'm in. I intone the words with a sense of victory as I navigate the file directory using only shell commands—a feat that might have impressed the occasional adolescent maybe two decades ago. But then, the camera pans out to reveal a PDF document… inside a Chrome browser… running on Windows.

Yes, this is Linux running in a PDF, running in a browser, on my Windows PC.

This completely unexpected turn is brought to you by Ading2210, the same high school student who gave you Doom running in a PDF. On YouTube, they go by vk6 (via Hackaday).

In the video description showcasing the LinuxPDF project, they explain: "I got Linux running inside a PDF file via a RISC-V emulator compiled to Javascript."

What a world we're in today: one with so much compute power that a high-level and rather ubiquitous technology such as Javascript can run entire emulators, apparently inside PDFs. The fact that PDF documents allow Javascript to execute is a double-edged sword, of course: while it can allow you to run DOOM and now, apparently, Linux, it can also put you at risk of dodgy malware scripts.

LinuxPDF can run in any Chromium-based browser, which includes Chrome (duh), Brave, Edge, and Opera. You can check it out for yourself here.

Of course, you're not getting the Ubuntu experience inside your Chromium-powered PDF, rather you're getting an incredibly barebones command line experience via TinyEMU RISC-V emulation.

And you're not getting a particularly fast version of that, either, thanks to the layers of emulation. You get a command line, plus a virtual keyboard to press—although you can also type your inputs using your own keyboard using the space at the bottom-right. It's a little janky (backspace only seems to register on the virtual keyboard, for instance) but what do you expect?

Ading2210 explains: "It works by using a separate text field for each row of pixels in the screen, whose contents are set to various ASCII characters." Pretty ingenious, if you ask me.
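The real thing is JavaScript embedded in the PDF, but the idea Ading2210 describes translates into a few lines of any language: take each row of a greyscale framebuffer and turn it into a string of ASCII characters, one string per text field. A toy Python illustration, with a made-up two-row framebuffer:

```python
# Toy illustration of the row-of-text-fields trick described above. The real
# LinuxPDF does this in JavaScript inside the PDF; the tiny framebuffer here
# is made up purely for the example.
PALETTE = " .:-=+*#%@"   # darkest to brightest, 10 steps

def row_to_ascii(row: list[int]) -> str:
    """Map 0-255 brightness values onto characters from the palette."""
    return "".join(PALETTE[min(v * len(PALETTE) // 256, len(PALETTE) - 1)] for v in row)

framebuffer = [
    [0, 32, 64, 96, 128, 160, 192, 224],
    [224, 192, 160, 128, 96, 64, 32, 0],
]
for y, row in enumerate(framebuffer):
    print(f"text field {y}: '{row_to_ascii(row)}'")
```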

So, first DOOM, and now Linux. What's next? Crysis? How about a PDF reader running from an emulated OS running inside a PDF? We're waiting, Ading.

Best SSD for gaming: The best speedy storage today.
Best NVMe SSD: Compact M.2 drives.
Best external hard drive: Huge capacities for less.
Best external SSD: Plug-in storage upgrades.

]]>
https://www.pcgamer.com/software/operating-systems/heres-linux-running-inside-a-pdf-running-inside-a-browser-running-on-a-windows-pc/ 3nxjKtoVVTy7Ggg7RiAUfM Wed, 12 Feb 2025 15:35:18 +0000
<![CDATA[ Give your Wikipedia rabbit hole deep dives a dollop of dopamine by reading them in an 'anti-algorithm' TikTok-style feed ]]> Are you familiar with the concept of 'architectural forgery'? Before today, neither was I. In Japan, proposed construction exceeding a certain number of floors must prove it is structurally safe (and, importantly, earthquake resistant) by submitting architectural drawings and calculations to the authorities. In late 2005, it came to light that a number of earthquake design calculations submitted by structural engineer Hidetsugu Aneha had used falsified data, causing a ripple effect that ultimately bankrupted a number of construction firms and real estate companies across Japan.

I learnt the above interesting nugget via WikiTok, a project that reimagines your Wikipedia rabbit hole as a never-ending, scrollable feed a la TikTok (via Ars Technica). Rather than offering reams of text, the page presents each article as an image-led stub summary with click-through links to sate your curiosity. Though you can access WikiTok via desktop, it really is best suited to your phone screen—just like its primary source of inspiration.

At present, WikiTok's scrollable selection is a vertical conga line of totally random articles drawn from Wikipedia's API. You can save articles that catch your eye for later by liking them, though, unlike TikTok, there are currently no videos and no invasive data tracking.
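Wikipedia's public REST API does offer a random-summary endpoint that returns exactly the kind of image-led stub WikiTok displays. Whether WikiTok calls this precise route is an assumption on my part (its actual code is on GitHub), but a minimal sketch of the idea looks like this:

```python
# Minimal sketch of pulling WikiTok-style feed items from Wikipedia's public
# REST API. Whether WikiTok itself uses this exact endpoint is an assumption;
# its real implementation is in the GitHub repo linked in this article.
import requests

RANDOM_SUMMARY_URL = "https://en.wikipedia.org/api/rest_v1/page/random/summary"

def random_article() -> dict:
    resp = requests.get(RANDOM_SUMMARY_URL, headers={"User-Agent": "wikitok-sketch/0.1"})
    resp.raise_for_status()
    data = resp.json()
    return {
        "title": data.get("title"),
        "summary": data.get("extract"),                           # the stub text
        "image": data.get("thumbnail", {}).get("source"),         # lead image, if any
        "link": data.get("content_urls", {}).get("desktop", {}).get("page"),
    }

for _ in range(3):
    item = random_article()
    print(item["title"], "-", (item["summary"] or "")[:80])
```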

The project was made by Isaac Gemal, who used AI coding tools Cursor and Claude to build a prototype in a little under two hours. Gemal told Ars Technica, "The entire thing is only several hundred lines of code, and Claude wrote the vast majority of it." You can poke around the project yourself at GitHub here.

Inspiration for the project first arrived in the form of this X post thread, with Tyler Angert of Patina Systems even coining the term 'Wikitok.' That was on February 3, and Gemal had a prototype by 2 AM the next morning. He told Ars Technica, "AI helped me ship really really fast and just capitalize on the initial viral tweet asking for Wikipedia with scrolling."

Gemal wasn't the only one to throw a hat into the ring. A few hours later, Alexandre Pesant put forward WikTok (minus an 'i'), made using a totally different AI coding tool, Lovable. In terms of presentation, this is definitely a slicker effort, and many more imitators are sure to follow.

As for the future of the WikiTok project (with two 'i's), Gemal also told Ars Technica, "I have no grand plans for some sort of insane monetized hyper-calculating TikTok algorithm. It is anti-algorithmic, if anything."

Rather than losing yourself in TikTok's tidal wave of content, you could view WikiTok as an alternative, though I'm not wholly convinced substituting one 'bad' feed for another 'good' feed is necessarily the most satisfying answer to social media overwhelm. For one thing, I don't know about you, but I tend to find that information I learn through my phone screen rarely sticks around in the ol' noggin'—and this study from 2020 suggests that it might not just be me.

As much as I've enjoyed stumbling upon obscure Hindi detective series Byomkesh Bakshi and the historic Royal Theatre in St. Petersburg, Florida, I know there are better ways to learn about either subject.

Wikipedia is an open-source encyclopaedia that could potentially be edited by anyone, so you best believe I spend a lot of time hanging out with the citations in the 'references' section. Remember that architectural forgery scandal I referenced right at the start? That article rests on a single citation, alongside a page-topping plea for additional reliable sources.

It's a timely reminder that not all articles are created equal, and that Wikipedia best serves as a jumping off point for further research.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
https://www.pcgamer.com/hardware/give-your-wikipedia-rabbit-hole-deep-dives-a-dollop-of-dopamine-by-reading-them-in-an-anti-algorithm-tiktok-style-feed/ 4irNeJhC5uQQBw66ZnvQ38 Wed, 12 Feb 2025 13:25:45 +0000
<![CDATA[ TSMC reportedly plots 2027 start date for its 3 nm US fab, but will that be in time to save next-gen GPUs from tariffs? ]]> Maker of most of the world's cutting-edge chips, TSMC, is reportedly accelerating its plans to produce modern 3 nm chips in the USA. Originally pencilled in for 2028, TSMC is now said to be aiming to pull in production to 2027 in response to tariffs threatened by the Trump administration.

But will that be soon enough for next-gen GPUs?

MoneyDJ (via TrendForce) claims that TSMC is bringing forward its second chip fab in Arizona. TSMC's alleged new plan is to install equipment in the new facility next year and begin volume production of chips in 2027. That's a year ahead of TSMC's current publicly stated schedule.

The reason for the accelerated timetable is said to be new tariffs. As we reported recently, President Trump has threatened up to 100% tariffs on chips from Taiwan, which would directly impact TSMC's output and make imports of components like GPUs massively more expensive.

If TSMC could produce those chips in the US at one of its Arizona fabs, then it would sidestep the tariffs entirely (though its customers would still need to think of their supply chain and packaging). The question then becomes a matter of timing.

TSMC's first Arizona fab is already cranking out chips on the N4 node, a derivative of N5, broadly referred to as 5 nm, reportedly including CPU dies for AMD. That's the node also used by both Nvidia for its latest RTX 50 family of GPUs and AMD for its upcoming RDNA 4 graphics cards.

It's extremely likely that both companies will move to 3 nm or N3 for its next-gen cards, codenamed Rubin for Nvidia and UDNA for AMD. Given brand new GPUs from both outfits have just been released at the beginning of 2025 and that two-year cycles for GPU families are the norm, 2027 for the new 3 nm Arizona fab seems like it could be a good fit.

However, the timings may be a little tighter than that. To allow for a January 2025 launch, for instance, Nvidia will have been manufacturing RTX 50 GPUs many months earlier in order to allow time for the chips to be packaged and fitted to graphics cards in sufficient volumes to supply retailers.

It's also something of a big ask to expect a brand new fab to be manufacturing very large GPUs from the get go. You might normally expect a new facility to target smaller chips that are less sensitive to yields as the facility builds volumes and irons out production kinks.

Of course, neither Nvidia nor AMD need stick with a two-year schedule. If 100% tariffs or anything even close really are imposed on Taiwan-made chips, it could well be worth delaying release until TSMC's Arizona fabs can take responsibility for production.

As for the future, TSMC has a third fab planned for Arizona which will produce chips on TSMC's next-gen nodes, likely to be 2 nm or A16. However, Fab 3, as it's known, isn't expected to come online until around 2030.

Anyway, it's usually a long and expensive process building fabs and bringing up production volumes and yields. So, if TSMC really can pull in its 3 nm Arizona facility from 2028 to 2027, that would be some achievement. And it might just help prevent graphics cards from becoming even more ridiculously expensive.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/tsmc-reportedly-plots-2027-start-date-for-its-3-nm-us-fab-but-will-that-be-in-time-to-save-next-gen-gpus-from-tariffs/ FBQ2aFHWodnu4rdB79G4DM Wed, 12 Feb 2025 13:25:17 +0000
<![CDATA[ Nvidia confirms RTX 50-series laptops can be pre-ordered from February 25 and will be 'available starting March', stock willing ]]> While the first Nvidia RTX 50-series GPUs are now with us in the form of mighty (and perhaps melty?) graphics cards, some of us have been looking towards mobile versions of the GPUs for our slice of the Blackwell pie. We've been waiting until a March launch for RTX 50-series laptops, and now Nvidia has confirmed that pre-orders will start from February 25.

In an X post from the official Nvidia GeForce account, Nvidia says: "GeForce RTX 50 Series Laptop pre-orders start Feb 25 from OEMs. 👀 Stay tuned for more details!" And Nvidia's 50-series laptops webpage confirms that these laptops will be "available starting March 2025," so it shouldn't be long until those pre-orders arrive, if you're lucky enough to grab one.

"👀" indeed. While we can't know for certain until we test them out ourselves, this next generation of laptops is expected to offer quite a lot for gamers, primarily because of DLSS 4's Multi Frame Generation. Based on what we've seen of the desktop RTX 5080 and RTX 5090, RTX 50-series GPUs might not offer massive leaps in pure raster performance, but they more than make up for this with AI-aided frame generation.

Of course, how you feel about this will depend on whether you think of them as 'fake frames' or real ones—but hey, I'll take an extra 100 frames over zero, fake or not.

This is simplifying things, of course, because there's also the question of latency, which will be more pronounced in the case of less powerful laptop GPUs than in the desktop ones (the RTX 5070 Ti mobile, for instance, will feature a GB205 GPU, the same as will be in the desktop RTX 5070). That's because if you start with lower input frame rates you end up with more latency from subsequent generated frames.
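A rough worked example shows why the base frame rate matters so much here, under the simplifying assumption that inputs only get sampled once per rendered frame and the generated frames merely fill the gaps on screen (the real pipeline, with Reflex and frame pacing, is more involved than this):

```python
# Rough arithmetic for frame generation and responsiveness. Simplifying
# assumption: input is sampled once per *rendered* frame, and generated
# frames only fill the visual gaps; the real pipeline is more complex.
def frame_gen_example(base_fps: float, generated_per_rendered: int) -> None:
    rendered_frame_time_ms = 1000.0 / base_fps
    displayed_fps = base_fps * (1 + generated_per_rendered)
    print(f"base {base_fps:.0f} fps -> ~{displayed_fps:.0f} fps on screen, "
          f"but the game only reacts to input every ~{rendered_frame_time_ms:.0f} ms")

frame_gen_example(30, 3)   # weaker laptop GPU: looks smooth, feels sluggish
frame_gen_example(60, 3)   # stronger GPU: looks smooth and feels responsive
```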

Still, these higher-end chips should be potent, even if a little less so than their desktop counterparts, so we shouldn't be talking atrocious frame rates before Multi Frame Generation kicks in.

And it is the upper end of the 50-series GPU lineup we're expecting to kick off with: RTX 5090, RTX 5080, and RTX 5070 Ti. These were some of the first RTX 50-series gaming laptops we saw at retailers. RTX 5070 laptops, on the other hand, are expected to launch later, in April. This follows Nvidia's usual GPU release cadence—higher-end first, lower-end later—and it matches what we're seeing on the desktop graphics card front.

These laptops should all come packaged with Intel Core Ultra 200H-series (Arrow Lake) or AMD Ryzen AI 300 (Strix Point) processors, too, which will firmly plant us in the current generation.

Although we can't be certain whether the current retailer listing prices will stick, and a lot of the currently listed RTX 50-series laptops are mega expensive, at least some of them seem reasonable. Take the ROG Strix G16 with RTX 5070 Ti for $1,900 at Best Buy, for instance, or a 16-inch MSI laptop with RTX 5070 Ti for $1,599 at Newegg. Though of course, even if these do remain as affordable, whether they'll remain in stock for more than a nanosecond is another question entirely.

Plenty to hesitantly look forward to, then. With more Nvidia desktop GPUs set to launch soon and AMD ones just around the corner, too, it'll be interesting to see which garners the most attention.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/gaming-laptops/nvidia-confirms-rtx-50-series-laptops-can-be-pre-ordered-from-february-25-and-will-be-available-starting-march-stock-willing/ 475UAKNvtcVU3dMgawknqZ Wed, 12 Feb 2025 12:40:49 +0000
<![CDATA[ Nvidia's RTX 5070 graphics card rumoured to be delayed from February until March but that could actually be good news ]]> Back in early January when Nvidia announced the RTX 5070, the mid-range member of the new RTX 50 family was given a February launch date. Now rumours indicate the GPU has been pushed back to March.

Long time GPU-rumour leaker on X, MEGAsizeGPU, says, "the RTX 5070 will be delayed. Instead of February, it will be on the shelf in early March." How much of a delay that would constitute, if true, isn't clear.

That's because Nvidia never said anything more specific than "availability in February" for the RTX 5070. As for what's going on exactly, one obvious reason for the delay could be to increase stock level before launch.

The RTX 5080 and RTX 5090 sold out almost instantly on launch day on January 30 and have been very scarce since. So, a delay for the RTX 5070 would allow time to build up a bigger buffer of cards.

On a related note, Nvidia may have originally been expecting arch rival AMD to roll out its RX 9070 and 9070 XT GPUs a little earlier than March. But AMD has since inked in "early March" as the launch window for those new GPUs, which are expected to go up against the RTX 5070 and perhaps even the RTX 5070 Ti.

We don't know exactly when AMD was first intending to release the RX 9070 and RX 9070 XT. But the company has revealed that the GPUs have been delayed in order to improve performance and to enable support for AMD's new FSR 4 upscaling technology in more games.

"We are taking a little extra time to optimize the software stack for maximum performance and enable more FSR 4 titles," AMD's Ryzen CPU and Radeon graphics rep David McAfee said on January 22. AMD may also be using the time to build up stock levels.

Anyway, the point regarding those AMD GPUs is that their delayed launch has given Nvidia a little time to tweak its own plans with the RTX 5070. Apart from allowing more time to improve stock levels, a delay until March will mean that AMD's GPUs will have to share the limelight and news cycle with the RTX 5070.

That said, what Nvidia won't be able to do very easily, at least without losing face, is change the 5070's pricing to respond to AMD. Nvidia has already announced a $549 MSRP for the RTX 5070 and it would be a distinctly uncharacteristic climbdown for Nvidia to lower its price, even if a change to an announced but unreleased GPU wouldn't be totally unprecedented for the company.

Nvidia infamously cancelled the RTX 4080 12GB before it was released and rebadged it as the RTX 4070 Ti but with a $100 price cut from $899 to $799. We very much doubt Nvidia will give the RTX 5070 a similar haircut before launch. But it's not absolutely impossible.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/nvidias-rtx-5070-graphics-card-rumoured-to-be-delayed-from-february-until-march-but-that-could-actually-be-good-news/ VreD2SUgJhZpqTU8yGg8oF Wed, 12 Feb 2025 11:57:43 +0000
<![CDATA[ Be Quiet! Shadow Base 800 FX review ]]> The Shadow Base 800 FX is great. It's a generous size without taking up too much space. It's colourful and covered in LEDs without being tacky. It comes with four reliable 140 mm fans for quieter operation and heaps of airflow. Most of all, it's just easy to build into. For that reason alone, I'm sold.

I tested this PC case the only way I know how: building a gaming PC inside it and measuring how well it runs. For this build (full build log here), I opted for an ASRock Z890 Steel Legend WiFi loaded with an Intel Core Ultra 5 245K. To keep the processor cool, a Be Quiet! Dark Rock 5 air cooler. On the graphics front, one last hurrah for AMD's outgoing RX 7900 XT. All of these went together without issue inside the Shadow Base 800 FX. Like, zero issues.

There are a few ways that the Shadow Base 800 FX makes life easy for budding builders. First off, the motherboard tray is set back to leave ample space for a top-mounted radiator—you can fit up to a 420 mm radiator, or three 140 mm fans, in the top of the Shadow Base. That leaves plenty of room in the front of the case for another 420 mm radiator, or again three 140 mm fans, though any fans can be tucked neatly out of the way behind the front mesh.

Speaking of fans, you needn't worry about buying many more for this case. It comes with four 140 mm Be Quiet! Light Wings fans—three in the front, one as an exhaust in the back. They are supremely quiet in operation. That's a combination of both their size and construction—larger fans mean you can run slower RPMs while maintaining good airflow, and Be Quiet! claims a low-noise fan blade design with a lower-noise rifle bearing.

Shadow Base 800 FX specs

A gaming PC build using the Be Quiet Shadow Base 800 FX chassis, an Intel Core Ultra CPU and an RX 7900 XT GPU.

(Image credit: Future)

Form factor: Mid-tower
Dimensions: 550 x 247 x 522 mm
Motherboard compatibility: E-ATX, ATX, M-ATX, Mini-ITX
Front panel: 1x USB 3.2 Gen 2 (Type-C), 2x USB 3.2 (Type-A), 3.5 mm mic jack, 3.5 mm headphone jack, LED control button
Included fans: 4x Light Wings PWM 140 mm (1x exhaust, 3x intake)
Fan support: Front: 3x 140 / 120 mm | Top: 3x 140 / 120 mm | Bottom: 1x 140 / 120 mm | Rear: 1x 140 / 120 mm
Radiator support: Front: Up to 420 mm | Top: Up to 420 mm | Rear: 120 mm
GPU support: Up to 430 mm
Extras: Fan and RGB controller
Price: $190/£180

They're pretty smart-looking fans. The RGB is limited to a ring around the circumference of each fan, which looks superb through the front mesh of the case. The rated longevity of these fans is a little lower than some high-end options at 60,000 hours, especially compared to Be Quiet!'s own Silent Wings 4 at 300,000 hours, but a three-year warranty certainly isn't bad.

In my tests, these fans provided ample airflow to the components within. While largely dependent on the choice of chip and cooler used across both CPU and GPU, I measured the Core Ultra 5 245K at no higher than 65°C and the RX 7900 XT no higher than 69°C while gaming. The CPU reached 76°C during more intensive testing, such as Cinebench 2024. All of which is absolutely fine with me.

When it comes to noise reduction, this Be Quiet! case comes with insulation on the closed side panel. A layer of dense foam. There's none on the front, top, or tempered glass windowed side, however, which does make me wonder how much difference it really makes to the overall noise of the build. That said, it's hardly a loud machine with the fairly easy-going components and cooling I've put inside it.


Another reason I found building inside this case so gosh-darn simple is the serious amount of space behind the PSU. It's cavernous back there. You really don't have to worry about PSU cables or tidying them up all that much, especially when there's a couple of inches of room for your cables behind the panel.

The motherboard panel also has long, uninterrupted cable cut-outs across the top, side, and bottom of the motherboard plate, making light work of any cable tidying. My only small issue with the design is that, while there is a cable cover down the side, it does leave a lot of the cables still quite visible from the front.


In Be Quiet!'s defence, the whole cable tidy comes off with a single Phillips-head screw. In fact, most of this case is pretty accessible with minimal tools required—if any.

The Shadow Base 800 FX also includes a controller for the fans, the lovely RGB light rings on each, and the embedded LED lighting strips down the front of the case. This controller comes pre-installed on the rear of the motherboard plate, which comes away with a single thumbscrew for easy removal and access to the motherboard's rear.

With the front panel lighting and four fans connected, two cables apiece, there are still three four-pin fan headers and three 5-volt RGB headers spare to use for further expansion/add-ons.


The rear motherboard plate also has room for a 2.5-inch drive, and there are three total 2.5-inch SSD mounts within easy reach in the Shadow Base 800 FX. There's room for more, but there's only a single HDD (3.5-inch) cage included as an optional extra.

With airflow, cable management, lighting, and cooling sorted, I've only dust to deal with. Thankfully, very little gets inside the case itself, as the top, bottom, and front of the case all have built-in dust protection. The top and front filters are magnetically attached, and the rear slides in and out with ease.

Bonus points: the front panel can be removed with minimal force, and the RGB lighting on it is connected via a few stable contacts to a connection on the case proper. It goes back on just as easily.

Buy if...

✅ You want convenience: This is a case that doesn't get in your way while you're building. The panels fall away, few tools are required, and the controller on the back makes managing fans and lighting easy.

✅ You want great fans and great airflow:
The four 140 mm Be Quiet! Light Wings included with this case make for quiet, capable operation. They look great, too. You really don't have to worry about adding anything else—besides some sort of CPU cooler, of course.

Don't buy if...

❌ You want the smallest mid-tower: This is a pretty traditional mid-tower, and absolutely not trying to shrink that form factor down to its smallest possible size. It's 550 x 247 x 522 mm, for the record.

The Shadow Base 800 FX is impressive for its largely tool-free, convenient, spacious build experience. And I say this as someone with a slight bone to pick with Be Quiet!'s previous cases. I'm going back quite far here, getting on for a decade (oh my god, I'm so old), to the Be Quiet! Dark Base 900 I still own.

This case has been in constant use over the years, from my PC to my partner's, and much of the time I've been slightly scared to open it and change anything drastically. It's well built and has stood the test of time, but trying to shift the PSU shroud or flip the motherboard mount is not worth the hassle.

By comparison, everything on the Shadow Base 800 FX falls away with ease, magnetically attaches, requires one screw, if any… case design has changed a lot over the past 10 years, and the Shadow Base 800 FX is one of the best examples of that.

Coming in at $190/£180, you wouldn't be putting a foot wrong with the Shadow Base 800 FX for the money. Especially considering the four 140 mm, RGB fans you're getting included for the fee. Though if you are looking to save money, the Shadow Base 800 DX is more or less the same with non-RGB fans for $135/£140, or there's the straight Shadow Base 800 with no RGB whatsoever for $96/£130. Any of which would work great for your next gaming PC build.

]]>
https://www.pcgamer.com/hardware/pc-cases/be-quiet-shadow-base-800-fx-review/ p8ttCnwevVCiaj7u7QhNBT Wed, 12 Feb 2025 11:50:59 +0000
<![CDATA[ Palmer Luckey says he wants to 'turn warfighters into technomancers' as Anduril takes over production of the US Army's IVAS AR headset from Microsoft ]]> Microsoft has announced that it is getting out of the Kill-O-Vision headset business, more formally known as the US Army's Integrated Visual Augmentation System (IVAS) program. While the company's "advanced cloud infrastructure and AI capabilities will continue to provide a robust backbone for the program," responsibility for actually making the headsets and the software that runs them is being taken over by Anduril Industries, the defense contractor co-founded in 2017 by Oculus VR founder Palmer Luckey.

Microsoft said Anduril's "mission focus" as a defense technology company "will ensure future program development specifically tailored to the evolving needs of the Army." It will also, apparently, enable a lower per-unit cost of the IVAS headsets, which is something of a priority for the military: The US Army signed a $22 billion deal with Microsoft to develop the headsets in 2021, but by 2024—and following various complaints about the headsets including size, weight, and the fact that the glow of the screen could apparently be seen from a very long distance, enabling enemy soldiers to tell exactly where the wearer's head happens to be at any given moment—the Army was asking if perhaps Microsoft could do something about the price tag.

The shift may also help mollify Microsoft shareholders and employees who were less than keen on the company's dealings with the US Army, specifically the potential "reputational and financial risks to the company for being identified as a company involved in the development of weapons used by the military." With Anduril, of course, that's not a concern: Making weapons is literally all it does.

"The IVAS program represents the future of mission command, combining technology and human capability to give soldiers the edge they need on the battlefield," Luckey said. "The ultimate goal is to create a military ecosystem where technology acts as an extension of human capability. By empowering soldiers with the tools they need to make faster, smarter decisions, we’re building a future where technology and human ingenuity combine to ensure mission success."

Luckey was a little less PR-managed in a post about it on X, which included a recreation of his infamously goofy Time cover but with a far less goofy context: "Whatever you are imagining, however crazy you imagine I am, multiply it by ten and then do it again. I am back, and I am only getting started."

(Image credit: Palmer Luckey (Twitter))

"Tactical heads-up-displays that turn warfighters into technomancers and pair us with weaponized robotics were one of the products in the original Anduril pitch deck for a reason," Luckey wrote in a blog post. "The past eight years we have spent building Lattice have put Anduril in a position to make this type of thing actually useful in the way military strategists and technologists have long dreamed of, ever since Robert Heinlein's 1959 novel Starship Troopers.

"Not just day and night and thermal and ultraviolet, but peering into an idealized interactive real-time composite of past, present, and future that will quickly surpass traditional senses like vision and touch. Put another way, Superman doesn't use menus—he just sees and does."

I'm not sure that Starship Troopers is the sci-fi future we should really be aiming for, but never mind that: Luckey went on to say IVAS "represents just the beginnings of a new path in human augmentation, one that will allow America's warfighters to surpass the limitations of human form and cognition, seamlessly teaming enhanced humans with large packs of robotic and biologic teammates." So yeah, that actually sounds a lot worse.

The tech industry's insistence on creating the Torment Nexus notwithstanding, handing IVAS production off to Anduril is also a practical move for Microsoft, which discontinued production of its HoloLens 2 headsets in October 2024. Microsoft confirmed that it's out of that particular hardware game entirely in a statement provided to The Verge, saying it is "transitioning away from hardware development ... and will shift our focus to cloud and AI technologies, which will serve as the foundation for IVAS as a situational awareness platform."

Anduril's takeover of IVAS production is dependent upon approval of the US Department of Defense. The DoD hasn't yet commented on the announcement, but I strongly suspect that approval is very likely to happen.

2025 games: This year's upcoming releases
Best PC games: Our all-time favorites
Free PC games: Freebie fest
Best FPS games: Finest gunplay
Best RPGs: Grand adventures
Best co-op games: Better together

]]>
https://www.pcgamer.com/hardware/palmer-luckey-says-he-wants-to-turn-warfighters-into-technomancers-as-anduril-takes-over-production-of-the-us-armys-ivas-ar-headset-from-microsoft/ hgyBiZyzKDcxpjEo9jQ2DV Tue, 11 Feb 2025 23:36:44 +0000
<![CDATA[ The unwelcome workaround for Nvidia's RTX 50-series black screen issues is to hobble your gaming monitor with a 60 Hz refresh rate ]]> During all of my initial review testing and overclocking of the RTX 5090 and RTX 5080 graphics cards, I had no issues with the black screening problem that we've seen cropping up in various forums and Reddit threads. My Founders Edition cards have worked beautifully and not once set fire to the wooden cabin tinderbox in which I do all my performance testing.

But today I hit a wall. That is how I am going to refer to the MSI RTX 5090 Suprim, a wall, because boy, that thing is chonk with a capital OMG.

This is my first third-party RTX 50-series card, and it is towering over my test rig right now, and kinda terrorising it, if truth be told. Because now I, too, have fallen victim to the black screen effect we've read about. Nvidia has said it is investigating the issue but hasn't been able to help me through the struggles with the card.

But I have found a solution… in part. But it's not a solution I would want to live with, just something that I could put up with until Nvidia comes out with a proper fix which stops this $2,700 card from blacking out when it's put under pressure.

Basically, you have to hobble your high refresh rate monitor. Thanks Reddit.

It's horrible, and I don't want to have to do it, but this way I'm able to get Cyberpunk 2077 or DaVinci Resolve to run without crashing my entire rig. The only way I've managed to get through most of our GPU benchmarking suite is by dropping my glorious 4K 240 Hz OLED monitor down to a lowly 60 Hz refresh.

It still hasn't allowed me to get through a full 3DMark Time Spy run, but you can't have everything. Even if you spend this much on a brand new graphics card, it seems.

Let me count the other things I tried that have failed:

  • As always, I used Display Driver Uninstaller to clean my old drivers for a fresh start
  • Rolled back, clean uninstalled, and reinstalled the pre-release drivers Nvidia supplied for the review
  • Tried both MSI's 'Silent' and 'Gaming' BIOS settings
  • Used different power cables
  • Plugged it in and out of the PCAT power testing module
  • Reseated the RAM (always worth a try)
  • Swapped between HDMI and DP cables
  • Changed Nvidia Control Panel power modes
  • Left the room while I booted 3DMark (it used to work with games on tape with the Commodore 64)
  • Tried 120 Hz 😭

It is worth noting that I have so far only tested the card on the PCG GPU test rig. This is the one which has had zero issues with the other RTX 50-series cards, or indeed any graphics card I've tested in the past 12 months.

But it is the one which did give me horrendous coil whine on the RTX 5090 Founders Edition, so I will be switching machines now I have completed testing on this overclocked MSI card to see if it works within another PC.

But yes, there you have it, run your monitor like it's 2007 and you can at least play some games on your RTX 50-series GPU.

You're welcome.

]]>
https://www.pcgamer.com/hardware/graphics-cards/today-i-found-a-potential-solution-to-your-black-screening-rtx-50-series-graphics-card-problems-though-youre-not-going-to-like-it/ UjW7LdvrtwEpUvFcVjwVbi Tue, 11 Feb 2025 17:42:15 +0000
<![CDATA[ With great self-awareness WinRAR releases official $150 merch: 'What better way to support the software you’ve NEVER paid for than by buying a WinRAR bag?' ]]> WinRAR, the compression and encryption software known for having a paid version that many users deftly dodge, has put out flashy new merch including a messenger bag modelled after its iconic logo. For $150, you can get some of the geekiest (and most fun) merch I've seen this year.

In an announcement tweet that has amassed well over 300,000 likes as of the time of writing, the WinRAR X account makes a pitch for why you should get its snazzy new WinRAR bag.

"What better way to support the software you’ve NEVER paid for than by buying a WinRAR bag? Do it! We dare y’all!"

The bag itself is based on the WinRAR book logo, with the space where the magenta, indigo, and blue books sit acting as your storage. To thoroughly test how much the 14 cm x 7 cm x 21.4 cm bag could hold, the team at Tern managed to fit 805 single-sleeved Yu-Gi-Oh cards into it. If you want to double-sleeve your cards to bring them to a tournament, you can expect to fit slightly fewer. WinRAR later confirmed that the bag could fit three Diet Cokes. Though the original post doesn't clarify, I assume they are talking about cans.

The WinRAR bag is official merch made by the aforementioned Tern and, as well as the bag, you can get a $237 varsity jacket with WinRAR branding there. Both WinRAR products are currently being made to order due to high demand, so future orders won't ship out until April.

Part of the reason this post went viral is because there's quite a lot of early internet nostalgia around WinRAR. You don't need a license to download or use WinRAR, so the company has been seen as one of the good guys by corners of the internet for some time.

Even now, you can download WinRAR directly from its site without having to pay. You are encouraged to pay for WinRAR when you open the app, but you can close the popup and continue to compress or extract your files. This is what the viral tweet is making a pretty tongue-in-cheek reference to.

However, with the adoption of native RAR support in Windows in 2023, WinRAR has become even more niche. This merch collaboration cashes in on the strong branding of WinRAR, and has resulted in a pretty cute bag at the same time. WinRAR has been interacting with customers showing theirs off over the last few days, and the 'high demand' notice on the website suggests the collaboration has started out strong.

My heart is definitely willing to pop this bookish bag into my cart but to be frank, my wallet might not be. A $150 bag wouldn't usually be on my radar but I can't say I'm not tempted.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
https://www.pcgamer.com/hardware/with-great-self-awareness-winrar-releases-official-usd150-merch-what-better-way-to-support-the-software-youve-never-paid-for-than-by-buying-a-winrar-bag/ EcP4bPbG5MfgcVmbxMJ89h Tue, 11 Feb 2025 17:22:33 +0000
<![CDATA[ The Steam Deck 2 doesn't need to happen because Valve will win either way (though I hope it does) ]]> Valve has already cemented its name in the handheld PC market with the Steam Deck. All it had to do was show that handheld PCs are worth making and others would follow suit. Now, coming up to the Steam Deck's third birthday and millions of units sold, I can say that Valve has won. I just hope its victory lap takes it around the track one more time for a sequel.

The Steam Deck launched in 2022 and has been surpassed by much more impressive and expensive handhelds since. Yet it's still a popular choice for many a gamer looking for a budget entry into handheld gaming.

As well as being cheap, the Steam Deck has a bit of a secret weapon: Steam OS is still a very good bit of software. It's clean and easy to navigate and, thanks to Steam Decks having standardized specs, gives you a range of games you know the device can run. For pure ease of use, the Steam Deck is the most console-like PC on the market right now.

Now, three years later, the gaming handheld market has grown. The Asus ROG Ally X is an excellent middle ground between the budget prices of the Steam Deck and the killer performance of Strix Point devices like the OneXPlayer OneXFly F1 Pro. And yet, the Steam Deck is still a viable choice for any gamer looking to make their way through some indie games from the comfort of their sofa. A large part of that is the Steam Deck being one of the more reasonably priced handhelds out there; another part is down to its software. The Steam Deck sold millions in just a year and sold out on the Steam store almost immediately after preorders went live.

The Steam Deck had a lot to live up to. It was an ambitious bit of hardware for Valve, a company that had previously only worked alongside partners on Steam Machines, controllers, and the Valve Index.

Valve had a few failures in the hardware department. The Steam Controller has its defenders, but it went from launch to discontinued with no successor in just four years, which isn't a great sign of success. The Alienware Steam Machine functioned poorly and Steam Link couldn't beat out the competition in the game streaming space.

The things Valve learned from these failures did make their way into the Steam Deck controls and Steam Remote Play, but there was some suspicion around the launch of the Steam Deck thanks to Valve's flawed hardware history.

There were no mainstream handheld gaming PCs up until that point, though both Razer and Alienware had shown off handheld gaming PC concepts that would never actually launch. There were also a few handhelds on the fringes making do with weak-hearted GPUs, such as the OneXPlayer and OneXPlayer Mini. The ground was there for a home run and Valve stepped up to the plate.

SteamOS on multiple handheld gaming PCs

(Image credit: Future)

The Steam Deck proved to be a standout first attempt with great ergonomics in its controls, a sturdy feel, and finetuned compatibility with games. It became a gold standard for handheld performance for some time, which is helped by the 'playable/verified on Steam Deck' tag games can get on the Steam store. As of November last year, there were over 17,000 games in the verified or playable category on Steam.

I asked around the PC Gamer office if anybody still uses their Steam Deck and got a resounding "nah" but that's partially because we're spoiled for new tech, as comes with writing about hardware for a living. It's also because we're all hardware geeks at heart, who love to tinker and play around with game settings.

I recently booted up the OneXPlayer 2 Pro and one of the most notable adjustments I'm going to have to make is guessing if my games will run on it. The 'playable on Steam Deck' tag isn't just an assurance before you buy a game, it's a game grouping that allows you to easily pick what your next installation will be. It's a handy vetting process to avoid installing something that will barely run.

Steam Deck with Solid Snake

(Image credit: Future)

With a refresh to the Steam Deck, that 'playable on Steam Deck' tag should only get bigger, and could give developers a new performance milestone to hit when making games. This is, of course, a necessary requirement for a device primarily run on Linux. Any Windows handheld can theoretically run any PC game, but games on the Steam Deck have to meet requirements to run on its software.

And this brings me to the Steam Deck 2. In 2023, Valve said it was still some way off the Steam Deck 2 because there wasn't a justifiable jump in performance yet. This same sentiment was expressed at the end of 2024, when Valve's Lawrence Yang and Yazan Aldehayyat said they were waiting "for a generational leap in compute without sacrificing battery life".

Valve doesn't have a great track record of making follow-ups to its hardware. The second generation Steam controller has been rumoured for some time and project Deckard (AKA Valve Index 2) has still not managed to surface in the last half a decade, despite hints in Steam VR folders back in 2021. Part of what people like about Valve is that it often feels like the company is just doing what its employees want.

It could have put out a second model by now with the latest tech but it didn't. It could have given fans Half-Life 3 but instead focused on Half-Life: Alyx which, considering the install base of VR when it launched, was a bit niche. All of this is to say that the Steam Deck never struck me as a device looking to corner the market. It doesn't feel like a handheld intended to be the handheld.

You don't have to take my word for it. Back in 2021, Gabe Newell said that he wanted other PC makers to create their own Steam Decks. Following this up, the Lenovo Legion Go S is getting a native Steam OS version. However, getting more devices on Steam OS isn't all Valve is looking to do. In fact, I'd argue the Steam Deck has already done exactly what it should.

Steam Deck UI

(Image credit: Tested.com)

The launch of the Steam Deck ushered handheld gaming PCs into the mainstream, and naturally, Steam is the gaming platform you will download first. Heroic makes the Epic Games Store a bit (or should I say 'a lot'?) nicer but it's not as good or as popular as Valve's storefront.

Not only is Lenovo working on a Steam OS handheld with support from Valve but Asus is too. However, putting Steam OS on handheld gaming PCs doesn't feel like the final play for Valve at the end of the day. It's one option for devices, and getting out-of-the-box support without having to fiddle with any settings is certainly nice.

To understand what Valve is really doing here, you have to think of the big picture. I don't mean that metaphorically, I'm talking about Big Picture mode, Valve's interface intended to make controller navigation easier. Big Picture has gotten much better over the last few years, with cleaner and more specific search tools, a UI overhaul, and an easier-to-navigate storefront. Even Windows handhelds can function like a Steam OS handheld if you set Steam to automatically open Big Picture mode when you turn it on. It takes a little longer to boot up than a Steam OS native device but it's a very similar end experience once you do.

With the Steam Deck, Valve did two major things that will keep Steam at the front of the handheld PC conversation. First, as a proof of concept in the field, the Steam Deck proved that handheld gaming PCs are worth your time and money. There's a reason so many major companies followed suit after the successful launch of the Steam Deck. It's likely that these companies were already exploring how to make it happen, and the release of more advanced APUs certainly helped, but the near-instantaneous popularity of the Steam Deck showed manufacturers that such devices are worth the resources needed to develop them.

Heroic Game Launcher running on a Steam Deck

Heroic Game Launcher running the Epic Games Store on a Steam Deck (Image credit: Future)

Secondly, making Steam as accessible as possible by removing almost all barriers to entry with its software meant that you never had to choose between Steam and Windows. The same is true of Steam OS. Opening it up to a broader market gives users the option to choose how they play. Valve has become synonymous with handheld gaming PCs and, as a result, it has already won.

The Steam Deck 2 could be a bit of a risk if not considered properly. The market has gotten bigger and much more impressive since 2022. The Steam Deck 2 being a smooth experience with good ergonomics isn't a nice surprise like it was for the first machine; it's the bare minimum. And now, as plenty of big players like Asus and Lenovo enter the market, the Steam Deck 2 needs to outperform or outprice its competition by a great enough margin to convince prospective buyers to pick it up.

The Steam Deck's success made handheld PC gaming relevant, and Valve never really needs to put out the Steam Deck 2 to continue benefiting from its role in the market, even though I really want one anyway. The market saw a shakeup right after the launch of the Steam Deck and there's room to do the same with the second one, whatever form that may take. Maybe give us Half-Life 3 first though.

]]>
https://www.pcgamer.com/hardware/handheld-gaming-pcs/the-steam-deck-2-doesnt-need-to-happen-because-valve-will-win-either-way-though-i-hope-it-does/ YvCk5KFeDL8RZ6VrNxing6 Tue, 11 Feb 2025 17:19:48 +0000
<![CDATA[ Surely not again: Worrying analysis shows Nvidia's RTX 5090 Founders Edition graphics card may be prone to melting power connectors ]]>

Update February 12:
High performance PC builder Falcon Northwest has taken to X to say that it has tested "many" RTX 5090 FE cards, but has been unable to repeat der8auer's findings. Falcon Northwest's thermal images show much more balanced loading across the power cable's wires, something confirmed by measuring the currents.

Some observers suggest that the problem may be linked to the number of times a cable has been attached and detached, with der8auer having said that his cable has been attached and detached numerous times. Watch this space for an official update from Nvidia.

Original story:
Last week I mentioned problems with black screening Nvidia RTX 5080s and 5090s. Now comes much more worrying news of power connector and cable melting problems with the RTX 5090. And not just any RTX 5090, but seemingly a specific and potentially serious problem with Nvidia's own RTX 5090 Founders Edition board design.

The investigation comes from YouTube channel der8auer, which specialises in detailed technical analysis. Following the emergence of images of a damaged RTX 5090 on Reddit, the card in question, along with the power cable used and the PSU, eventually landed with der8auer.

He says the owner who suffered the failure is an experienced enthusiast who was fully aware that the power cable for such a high-end GPU needs to be carefully seated. On inspection, der8auer found all three of the card, the 12VHPWR cable, and the PSU were damaged.

The GPU's power socket had a single pin showing damage and evidence of melting, whereas that same pin was much worse on the PSU side power socket and was accompanied by slight melting to several further pins.

The cable itself was damaged on both ends, but also showed signs of partial melting of its sleeving across its length. So, the question is, what is going on here? User error? A poor quality power cable? Something to do with the Asus Loki 1000 W PSU being used?

der8auer thinks none of the above. Using his own RTX 5090 FE, he loaded the GPU up with Furmark and found some worrying results using his own, higher spec 12V-2x6 cable, which had been in service for six months, and a 1600 W Corsair PSU.

It's worth noting that cables for the closely related 12V-2x6 and 12VHPWR GPU power sockets are supposedly identical. It's only the pin length in the sockets themselves that varies between the two socket types, though some cables do have slightly thicker wiring to mitigate the thermals.

However, the product page for the cable in question clearly warns that it's not recommended for the latest 50-series cards and recommends its newer 12V-2x6 cable with the thicker wiring. The catch, as we'll see, is that the problem applies at least to some extent to later 12V-2x6-spec cables.

The newer 12V-2x6 socket was introduced with shorter connection detection pins in order to ensure that power was only supplied when the connector was fully bedded in the socket and thus address problems with partially attached 12VHPWR connectors on RTX 4090 boards infamously causing melted sockets.

Melted RTX 5090 power socket.

A single pin is taking half of the RTX 5090's massive power draw. (Image credit: der8auer)

Anyway, with his RTX 5090 FE fully loaded, der8auer found two of the 12 wires in his power cable were showing up as much hotter than the others on his thermal camera, with one very hot indeed.

Turning the camera to the power sockets, he found that after just four minutes the GPU's power socket had hit 90 °C, but the PSU side socket was over 140 °C.

der8auer then measured the current going down the individual wires in the power cable and found some very worrying results. The 12V-2x6 cable has 12 wires and hence 12 pins in each connector. Six of those wires / pins are 12V live and six are ground.

His testing showed some worrying variance across the wires. Two of the six live wires were carrying very little current, one showed about two amps, another around five amps, one was up at 11 amps and the last was hitting 22 amps.

So, that last wire is carrying over 250 W of power and roughly half of the total load. That's obviously not as intended. The connectors and cables are meant to have about six to eight amps each, with the total across all six not exceeding 55 amps.
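
To put der8auer's readings into context, here's a rough sketch of the maths in Python (the per-wire currents are approximated from his reported measurements, and the 55 A / 660 W figures are the connector's rated limits):

rail_voltage = 12.0
wire_amps = [0.2, 0.3, 2.0, 5.0, 11.0, 22.0]   # approximate current on each of the six live wires
total_amps = sum(wire_amps)
print(f"Hottest wire: {wire_amps[-1] * rail_voltage:.0f} W, or {wire_amps[-1] / total_amps:.0%} of the connector load")
print(f"Total through the connector: {total_amps * rail_voltage:.0f} W against a 660 W (55 A) rating")
print(f"A balanced load would be ~{total_amps / 6:.1f} A per wire, inside the 6-8 A design target")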

According to der8auer, some AIB RTX 5090 designs include per-pin power sensing, which would presumably stop this kind of power imbalance from happening. But, surprisingly, Nvidia's own FE design apparently does not. Instead, it essentially amalgamates all six live pins into a single power source as soon as it arrives on the GPU's PCB.

These connectors are certified for 660 W and Nvidia rates the RTX 5090 at 575 W, so there is sufficient headroom available in theory. But not if most of that power is being pushed down a single wire instead of balanced across all six.

der8auer thinks the 15% headroom involved probably isn't enough and that Nvidia ought to have used two 12V-2x6 sockets. In that scenario, even with an uneven load, the cable and connectors would likely run safely. "For this kind of power draw, one connector is simply not enough," he concludes.

He also emphasises that user error absolutely can't be to blame, especially in his own testing. Exactly how this develops and how Nvidia responds remains to be seen. We've reached out to Nvidia ourselves and are waiting to hear back.

For now, the wise advice seems to be to hold fire on that RTX 5090 FE buy or maybe look out for an AIB board with per-pin power sensing on the spec list. And if you already have an FE card, take great care with how heavily you load it, keeping an eye on both ends of the cable and indeed the cable itself. Our Dave has been running the Founders Edition in his wooden cabin, and is very much now rethinking that setup.

Of course, that's really not something you ought to have to do with a brand new $2,000 premium graphics card. So, here's hoping it gets resolved very soon indeed.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/surely-not-again-worrying-analysis-shows-nvidias-rtx-5090-founders-edition-graphics-card-may-be-prone-to-melting-power-connectors/ CvfjLD8dshNmFtnRyaxCYP Tue, 11 Feb 2025 16:45:34 +0000
<![CDATA[ If you failed to get an RTX 5090 and still want an upgrade, all signs suggest the RTX 5070 Ti will launch on February 20 ]]> With the RTX 5090 and RTX 5080 launching in the last few weeks and selling out nearly instantly, you may have to turn to an RTX 5070 Ti if you want a fancy new graphics card in your rig anytime soon and MSI may have just confirmed that it's set to release on February 20.

Though the page has since been taken down, Wccftech spotted that the French MSI store had a countdown to the launch of the RTX 5070 Ti, ending on February 20. The countdown itself didn't specify the card, but the URL is 'geforce-rtx-5070-ti-graphics-card'.

Clicking on that page now results in a 404 error, which is to be expected when a page goes live earlier than intended. However, the countdown's existence in the first place could suggest it will go live again before the launch of the RTX 5070 Ti to build up hype for its launch—that, or MSI assumed it could have such a countdown and was swiftly told otherwise.

This isn't the only recent news that points to a 5070 Ti launch on February 20. Back at the end of January, a Danish storefront named Proshop listed every single model of 5070 Ti it had planned to stock as launching on that same date.

One could point to this being an admin error, but every single model had the same release date, whereas RTX 5070 models had an 'expected to launch in March' tag. Even now, a few weeks later, the listings on Proshop say the RTX 5070 Ti will launch on February 20.

The RTX 5070 Ti is Nvidia's upcoming mid-range graphics card, coming with 8,960 CUDA cores, 16 GB of GDDR7 memory, and a price of $749. So far, our very own Dave has quite liked the performance of the RTX 50-series GPUs, praising Nvidia's new Multi Frame Generation technology, their looks, and decent performance bumps. Both the RTX 5090 and RTX 5080 have gorgeous Founders Edition cards, though the RTX 5070 Ti will not get a Founders Edition at all.

Nvidia RTX 5090 Founders Edition graphics card on different backgrounds

(Image credit: Future)

The RTX 5090 immediately took the top spot as the best graphics card in our estimation, and the previous-gen RTX 4070 Ti Super is currently our favourite choice for the $600-$800 range. If you have been really looking to get an RTX 5090, the RTX 5070 Ti won't quite cut it but, if you just want to upgrade your rig and also get your hands on some of that Multi Frame Gen magic, you may be able to do so as early as next week.

One of the more noticeable upgrades coming with the new RTX 50-series line is support for Multi Frame Generation, which can generate up to three extra frames for every one normally rendered frame, increasing FPS with the wonder of AI. Though you won't get the raw raster performance of the more expensive cards, that frame generation technology should still provide a sizable bump, and we reckon the RTX 5070 Ti should have good headroom for overclocking.
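
As a quick illustration of that arithmetic (assuming the full 4x mode and ignoring any overhead or latency cost), one rendered frame plus up to three generated ones means the displayed frame rate can be as much as four times the rendered one:

rendered_fps = 60                     # frames the GPU actually renders each second (example figure)
generated_per_rendered = 3            # up to three AI-generated frames per rendered frame
displayed_fps = rendered_fps * (1 + generated_per_rendered)
print(f"{rendered_fps} rendered fps -> up to {displayed_fps} fps on screen with 4x Multi Frame Generation")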

If the RTX 5070 Ti also sells out quickly, you may be waiting a while for a restock, if recent UK shipping figures are any indication.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/if-you-failed-to-get-an-rtx-5090-and-still-want-an-upgrade-all-signs-suggest-the-rtx-5070-ti-will-launch-on-february-20/ D3bkTY8d8S6sC2kiiCY5jR Tue, 11 Feb 2025 15:33:42 +0000
<![CDATA[ Gamakay TK101 review ]]> The Gamakay TK101 seems like one of those products that's almost too good to be true. A $70/£58 keyboard that's mechanical, RGB-backlit, gasket mounted, hot-swappable, and supports both Bluetooth and 2.4 GHz wireless must come with some compromises, right? Otherwise, that's some seriously aggressive pricing from Gamakay.

Well, let's take a look at what we've got here and find out. The TK101 features quite an odd layout, which the brand terms a 98 percent option. It looks similar to a 96 percent, or 1800, layout, as featured on Keychron's Q5 HE, although it chops and changes things by moving some of the nav cluster keys above the number pad and omitting the right Windows, Alt and Insert keys. This leaves the others in between the Enter key and the number pad, with the arrow keys underneath. It's fine to use, providing a mostly full complement of keys, although it takes some getting used to—even as a long-term enthusiast, it wasn't a layout I'd come across before.

The keycaps are made of double-shot PBT plastic, which is excellent at this price, and come with a similar retro-inspired taller profile to Keychron's Q5, too. There is some slight curvature to the top of the caps to make them more comfortable to type on, although the considerable key wobble can make it a bit of a pain. What's more, as durable as the keycap plastic is, the rest of the TK101's construction isn't up to the same standard.

The chassis is entirely plastic, with a two-tone black and red colourway, which looks okay. I don't bemoan the use of plastic here, as if it's high quality enough then there will be no flex or creaking. That isn't the case though. Give it a push and the TK101 is *bendy*. It suffers from a lot of flex at the corners and in the middle under pressure, which leaves a lot to be desired. I appreciate that this is a more affordable option, but perhaps a metal plate running through the middle for extra structural rigidity would have been handy.

TK101 specs

The Gamakay TK101 gaming keyboard in a red and white colourway and on a colourful mouse pad.

(Image credit: Future)

Switch type: Gamakay
Keycaps: PBT, double-shot
Lighting: RGB, controllable in software and on keyboard
Onboard storage: None
Extra ports: None
Connection type: Bluetooth, 2.4 GHz receiver, wired
Cable: USB Type-C/USB Type-A, detachable
Weight: 990 g/2.18 lbs
Price: $70 / £58

The closest you get to anything metal is a mirrored surface on the keyboard's top side for added flair, which is where you find the USB-C port for charging and wired operation, a small cutout for the USB-A wireless receiver, and a selector switch for different connectivity methods. This unfortunately isn't labelled, which seems like an oversight. To find out which connection method is actually selected, there is a small set of status LEDs next to the Esc key in the top left corner, which works for lock lights as well as wireless connectivity with Bluetooth being blue and 2.4 GHz wireless being green.

Wireless connectivity works well enough, with plug-and-play operation over the 2.4 GHz receiver on my Windows gaming PC without a hitch. Bluetooth pairing is simple too, with the use of the Fn key, and either the 1, 2, or 3 key depending on which channel you wish to use. With the wired connection in tow, it technically means you can use the TK101 with up to five devices at once.

The Gamakay TK101 gaming keyboard in a red and white colourway and on a colourful mouse pad.

(Image credit: Future)

There is a 4000 mAh battery inside, which Gamakay says can last up to 20 hours in use. In my experience, I got around 25-30 hours on a charge with the RGB lighting at its default level before it conked out. While it is better than Gamakay's own estimate, it's quite weak otherwise, given Keychron's options can last for four times longer on the same capacity, and up to 400 hours on a charge without any RGB lighting.
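
For a rough sense of what those runtimes imply about power draw, here's a back-of-the-envelope estimate (capacities and hours as quoted above; it ignores voltage and charging losses, so treat it as illustrative only):

battery_mah = 4000
tk101_hours = 27.5                    # midpoint of the 25-30 hours measured with RGB on
keychron_hours = 400                  # Keychron's quoted figure with RGB off
print(f"TK101 average draw: ~{battery_mah / tk101_hours:.0f} mA")
print(f"Keychron average draw: ~{battery_mah / keychron_hours:.0f} mA")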

Inside, my sample of the TK101 came with the brand's Pluto tactile switches. These are 50 g soft tactile switches that feel reasonable under finger with decent tactility and a slightly shorter total travel at 3.5 mm. They're quite responsive for productivity workloads and are quite smooth thanks to being lubricated. For more affordable switches, I don't necessarily mind them, although the likes of Drop's Panda switches and Cherry's special MX Purples reign supreme.

If neither the tactile Pluto switches nor the linear Saturns are to your liking, then the TK101 is at least hot-swappable, meaning you can change the switches out without needing to desolder and solder in new ones. There is a keycap and switch puller in the box to make this a simple process, and it isn't too difficult—just line the puller up with the tabs at the top and bottom of the switch and pull straight up. Then, to swap a new switch in, line the pins up on the underside with the cutouts in the PCB, and push down until the switch clicks into place.

The Gamakay TK101 gaming keyboard in a red and white colourway and on a colourful mouse pad.

(Image credit: Future)
Buy if...

✅ You want a cheap mechanical keyboard: If you want nothing more than an affordable mechanical keyboard with some customisation and wireless connectivity, then the TK101 is an option.

Don't buy if...

❌ You want the last word in quality: The TK101's plastic chassis offers a lot of flex, and it generally feels quite cheap. More affordable options from more established brands don't suffer the same fate.

There is RGB lighting here, which acts as more of an underglow than fully-fledged illumination because of the solid keycaps above it. It isn't too bright, and can be blocked simply by pushing keycaps down. Changing the lighting can be done on the keyboard by pressing the Fn key and the backslash key, which cycles through different presets, such as one where pressing a key fans light out in all directions, or another that pulses through different colours. If you've owned a cheaper RGB keyboard from the last decade or so, you'll know the presets I mean.

The RGB is also addressable in Gamakay's Launcher software, which offers a basic method of remapping keys, programming macros and fiddling with the lighting effects, either from the range of presets or on a per-key basis if you're willing to take some time out. You can map the lighting to any audio that's playing and even load a picture into the software, which then attempts to represent its colours on the keyboard. It is a bit gimmicky, but at least it's there for those folks that want it. Gamakay Launcher can be quite slow to respond, and is a tad on the clunkier side, but it gets the job done, I suppose. It would have been better for the TK101 to be compatible with VIA and QMK firmware flashing, as a lot of other enthusiast-grade options are, for more convenient control.

If you want a cheap mechanical option that'll suffice for the basics, it's completely fine, but there are better options just a little way up the price ladder. The Ducky Zero 6108 comes with proper Cherry MX2A switches, much stronger build quality and better battery life for just under $100, while the Keychron K2 V2 remains an excellent all-rounder at $69 if you don't mind giving up some keys for a smaller form factor, offering a similar feature set to the TK101 but with much better build quality and endurance.

To sum up, then, I'd say that the proposition the TK101 offers is a bit too good to be true. As much as some of Gamakay's products, namely the excellent TK75 Pro, have been diamonds in the rough, this TK101 cuts too many corners to truly stand out in the sea of increasingly affordable mechanical boards in and around this price point.

]]>
https://www.pcgamer.com/hardware/gaming-keyboards/gamakay-tk101-review/ N6KAdr9v7zEnvF7RxKRGfc Tue, 11 Feb 2025 13:16:11 +0000
<![CDATA[ Man who chucked $750 million of bitcoin into a dump now wants to buy the whole dump ]]> In 2013, British IT specialist James Howells somehow managed to throw a hard drive containing 7,500 bitcoin into a municipal dump. Back then it was worth less than $1 million. Now that same bitcoin stash is worth about $750 million and Howells wants to buy the entire landfill site.

Obviously, the idea is to recover the drive. Perhaps equally obviously, Howells himself doesn't have the resources to buy the landfill site from the local government council in Newport, Wales, who currently own it.

Howells says he “would potentially be interested in purchasing the landfill site ‘as is’ and have discussed this option with investment partners and it is something that is very much on the table.”

It was in November of 2013 that Howells realised his mistake made earlier that summer. Ever since, he's been battling to be given access to the landfill site to search for the drive.

Howells escalated his efforts to recover the drive in 2023, threatening to sue the council for half a billion dollars in damages. That was followed last year by an attempt to force a judicial review of the council's decision to deny access to the site.

Late last year at a preliminary court hearing, Howells' lawyers argued that the location of the drive had been narrowed down to a small section of the site and that recovering it would not require widespread excavation.

However, Newport council lawyers said Howells had no legal claim and that, "anything that goes into the landfill goes into the council’s ownership.” Moreover, the council claims that, "excavation is not possible under our environmental permit and that work of that nature would have a huge negative environmental impact on the surrounding area."

In the event, the judge sided with the council and Howells' claim was dismissed. But with so much money potentially at stake, perhaps unsurprisingly Howells isn't giving up.

Notably, Howells' story seems to have drifted over the years. Early media reports, including the one linked above, quote Howells explaining how he threw the drive away following an office clean-up. "You know when you put something in the bin, and in your head, say to yourself 'that's a bad idea'? I really did have that," he's quoted as saying in 2013.

More recent reports have shifted the narrative, with Howells said to have "placed" the drive in a black plastic bag in the hall of his house, only for his then partner to have mistaken the bag for trash and disposed of it.

That shift in narrative may have legal ramifications. Perhaps if Howells can show the drive was never meant to have been disposed of, it will help his cause. For now, it clearly hasn't, as the courts won't hear his case in full whatever his original intentions with the drive.

In the meantime, Newport council does not appear to be interested in selling the site to Howells and his purported investors. Where it all goes from here, who knows. But as long as bitcoin's price keeps on soaring, this surely won't be the last we'll hear about this lost digital treasure. There's three quarters of a billion dollars supposedly sitting in a dump and that's going to be very hard to ignore.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/man-who-chucked-usd750-million-of-bitcoin-into-a-dump-now-wants-to-buy-the-whole-dump/ o9V5HYUnjn2HpauYgcqrCA Tue, 11 Feb 2025 12:31:21 +0000
<![CDATA[ Like a Dragon: Pirate Yakuza in Hawaii system requirements suggest it could be your next Steam Deck must-play ]]> The Like a Dragon (previously known as Yakuza) series has been a mainstay of Steam Decks and budget rigs thanks to solid compatibility and the Like a Dragon: Pirate Yakuza in Hawaii system requirements are no different. Shiver me timbers!

Over on the Pirate Yakuza Steam page right now (via RPG Site), you can spot the full system requirements, and they show that a surprisingly modest rig can run the latest Like a Dragon game. The minimum processor is an Intel Core i5 3470, a budget CPU from a decade ago, or an AMD Ryzen 3 1200. Paired with this is the Nvidia GeForce GTX 1650 or AMD Radeon RX 560, both budget graphics cards that are getting on in age now. Perhaps the toughest part of running the game on an older rig will be finding the 56 GB of storage to hold it.

It is worth noting that these are just the minimum system requirements, and only get you the game at 30 fps on low settings at a resolution of 1080p. We don't yet know exactly how that will look, as any rendered trailers or gameplay videos so far will have been captured on a much more impressive rig.

You can see the full system requirements below.

The OS here is a bit specific—and likely just the point after which Microsoft operating systems will be fine—so it seems like this section will be updated and changed coming up to the launch of Like a Dragon: Pirate Yakuza in Hawaii on February 20. The recommended specs are also pretty solid, as a budget to mid-range build from half a decade ago can run the game at 60 fps on high settings.

Like a Dragon: Pirate Yakuza in Hawaii is a spin-off from the main Like a Dragon games, putting series fan favourite Goro (Mad Dog) Majima at the helm of a pirate boat in Hawaii. Hawaii was the main location of the previous game, Like a Dragon: Infinite Wealth, and Pirate Yakuza follows on from it while telling its own contained story. As always, if you're looking for a place to start the series, Yakuza 0 is a welcoming entry and runs excellently on handheld PCs.

Unfortunately, the previous game is not Steam Deck verified, but this one already is ahead of its launch, which is a great sign. Having played multiple Like a Dragon games on my handheld, I can attest that they are perfect for it (and this likely explains why you can find them in the top 100 most popular Steam Deck games). The series splits primarily between turn-based RPGs and beat-em-up action games, both of which function well on the best handheld gaming PCs, and even the startup screen encourages you to use a gamepad to play.

If you've already played through the series, get your parrot and ceremonial cutlass ready as Pirate Yakuza will be here very soon.


Best handheld gaming PC: What's the best travel buddy?
Steam Deck OLED review: Our verdict on Valve's handheld.
Best Steam Deck accessories: Get decked out.

]]>
https://www.pcgamer.com/hardware/like-a-dragon-pirate-yakuza-in-hawaii-system-requirements-suggest-it-could-be-your-next-steam-deck-must-play/ JyDYhMCgKdTtcPBt3jxTDL Tue, 11 Feb 2025 11:52:54 +0000
<![CDATA[ What we want from RDNA 4: PCG hardware team reveals hopes and dreams for AMD's next gaming graphics card ]]> AMD wants to know what gamers are "most excited about in RDNA4." Which has got us thinking. What are we hoping for on PC Gamer? Nvidia is now so dominant, it certainly feels like a strong new generation of GPUs from AMD is more important now than ever.

So here are our thoughts on RDNA 4: is it our last great hope for affordable mid-range PC gaming?

Jeremy L, desiccated pixel peeper
As I've said before, for me pricing is critical. I explained my thinking on this last year, but the short version is that RDNA 4 needs to be priced right from the get go. AMD keeps pricing GPUs too high at launch, getting poor reviews as a result, only to then lower the price a few months later but not make an impact because the PR damage is done.

So, with all that in mind, what I want is a Radeon RX 9070 XT with raster performance up around an RTX 4080 or 5080 (they're near enough the same, after all) plus better RT than RDNA 3 and upscaling at least as good as DLSS 3 (I don't think it's realistic to ask for DLSS 4 quality) and all for $500 maximum. That's probably too much to ask, but it's what I think AMD needs to deliver to make an impact.

Jacob R, forever an optimist
Considering Nvidia's generation-to-generation improvement is likely to get slimmer as more affordable graphics cards in the series are released, AMD does have more of an opportunity to build something competitive with the RX 9070-series than some might think. That's essentially me hoping for some decent performance-per-dollar stats.

AMD Radeon RX 7800 XT

Will RDNA 4 be another 7800 XT? (Image credit: Future)

With plenty of VRAM and a competitive price, we might end up with something similar to the RX 7800 XT, or RX 7900 GRE, for value for money, which I'm wholly not opposed to. Heck, maybe even something better altogether. Knowing AMD, these cards will be a little too pricey at launch, but throw in some healthy discounts and a decent FSR 4 implementation, and either the RX 9070 XT or RX 9070 might be a sleeper mid-range GPU to buy by the end of the year.

Wait, why am I hoping for discounts—Frank, get your darn prices right!

Andy E, hardware botherer
As a long-term FSR user, I'll take anything that can run an enhanced version with better image quality, thanks very much. FSR 3.1 might have made some decent improvements compared to previous iterations, but with the promise of machine learning thrown into the equation, part of me is excited for the potential of a proper DLSS equivalent in the form of FSR 4.

I'll be honest, though, I'm not all that hopeful. Nvidia seems so far ahead of the curve on this one, I doubt we'll be seeing anything quite as powerful as transformer models and Multi Frame Generation bundled with the new cards. Prove me wrong, AMD. You wouldn't be the first.

Dave J, jaded
Talking with both Frank Azor and David McAfee after the CES 2025 non-appearance of the RDNA 4 graphics cards was quite a sobering experience. It was all rather downbeat, as though they'd got wind of what Nvidia was going to do that evening when it announced an RTX 5070 with RTX 4090 performance for $549. Now, that was quickly exposed as just experiential gaming performance when Multi Frame Gen is supported, and not actually a $549 GPU with the rendering chops of an RTX 4090.

Still, it forced the RX 9070 off the table and into a delayed March launch, even though cards were already in the hands of retailers and ready to go out to reviewers. In the meantime, promises of optimisations and more information about FSR 4 abound.

What I want to see now is AMD be aggressive about its desires to really deliver on a gaming GPU for the 4K masses (in reality 4K as a resolution is actually dropping in prominence according to the latest Steam Hardware Survey). I want AMD to deliver against the efficiency promises it's made around the new architecture; it said it was being designed to be straightforward to manufacture and that means it should be available in high volumes and at a great price.

Nvidia RTX 5080 Founders Edition graphics card from different angles

Might RDNA 4 offer RTX 5070 Ti performance for $499? (Image credit: Future)

Given that a March release sees AMD able to get even more cards off the assembly line to add to the GPUs that were already in the channel for the missed January reveal, we ought to see a graphics card launch where you can actually buy the cards on offer.

But forget being competitive, let's put it at a price that makes it almost foolhardy to pick the rival Nvidia GPU—which will most likely be the RTX 5070 Ti. Give us that level of raw gaming performance for $499 and it will be hard to argue against.

But if AMD toes Nvidia's line again, pricing its cards a scant few dollars below the Nvidia competition, then again the GeForce feature set is going to come into play and sway many gamers with the promise of higher frame rates. However fake you might consider them to be.

James B, AMD GPU sceptic
I've not spent an extended amount of time with AMD graphics cards but I'd like the excuse to. The thing I'm looking for from RDNA 4 is good value. I don't necessarily want the best tech, and don't think the RX 9070 line is promising that. I just want a reason to not go for Nvidia's 50 line. Better FSR to compete with recent DLSS improvements and a boost to ray tracing would help. Competition in the GPU space is good and I'd like the chance to show that with my wallet.

Jacob F, cloud gazer
All I want from RDNA 4 is something to make me feel justified in splashing the cash on a graphics card again. I haven't had that since Nvidia's RTX 3060 Ti, which I'm still rocking today. It's kept up with all the games I like to play, but it's pushing it a bit, now, in this new RTX-by-default era.

What this means in practice is that I'd like RDNA 4 graphics cards to deliver cheap competition that beats midrange RTX 50-series cards in terms of pure raster performance. I don't even care massively about frame gen, although great frame gen performance would be a nice bonus.

50-series beating rasterisation pound-for-pound and great upscaling in the lower midrange segment—yep, that's about it. I might actually decide to upgrade this GPU generation if that happens.

Nick E, GPU sniffer
What I want from RDNA 4 is certainly not what I'll get, partly because it's not a realistic wish and partly because it's AMD. The fundamental architecture of RDNA 3 is absolutely fine—a good balance between out-right compute ability and gaming performance—but its biggest weakness has been the lack of dedicated hardware support for matrix/tensor operations, something that Intel offered right out of the gate with Alchemist (and Nvidia since Turing in 2018).

I know we're finally getting this in RDNA 4 but it's appearing late in the game, and this will be the first revision of the units in a gaming GPU. Historically, every time AMD has introduced something completely new to its graphics processors, it's either been a wild, left-field choice (HBM with Vega, chiplets with RDNA 3) that ultimately transpires to be an unnecessary move or it's been a stripped-down, simplified approach, such as hardware ray tracing in RDNA 2.

AMD Radeon RX 6000-series graphics card shot from above, fan view, on a blank background

Will RDNA 4 be another stripped-down architecture like RDNA 2 and the RX 6000-series? (Image credit: AMD)

What I'd really like RDNA 4 to offer are compute units that don't have to rely on the driver compiler to correctly implement the dual-issue instructions, to make full use of all the available ALUs; I want to see dedicated hardware for accelerating BVH traversal, rather than doing it through compute shaders; I want to see matrix/tensor cores on par with the competition. In short, I want an AMD card that offers the same feature set as Intel and Nvidia, but without a second-place performance.

I don't even care all that much about the price. AMD has been undercutting Nvidia for years but it hasn't made a jot of difference to its share of the discrete GPU market, so if the RX 9070 XT costs $700, for example, then fine. Just as long as it's as good as, or better than, the other $700 graphics cards one can buy.

Except it won't be, of course. For some absurd reason, the multi-billion dollar chip business still operates its graphics division like it's a struggling underdog, a plucky team of poor engineers trying their best against the evil behemoths that dominate the industry. RDNA 4 will end up being cheaper than Blackwell, offer the same traditional rendering performance (aka rasterization), but fall behind on features and cutting-edge technology/performance.

And once again, the Team Red faithful will cry 'Just you wait until RDNA 5 comes out, then you'll see!'

It's a wrap
So, there you have it folks. Our desperate hopes and dreams for RDNA 4. It's a popular riff that AMD never misses an opportunity to miss an opportunity with its Radeon graphics, but there's something in my water that tells me RDNA 4 is going to be different. It's not long now before we find out.


Best CPU for gaming: Top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game first.

]]>
https://www.pcgamer.com/hardware/graphics-cards/what-we-want-from-rdna-4-pcg-team-team-reveals-hopes-and-dreams-for-amds-next-gaming-graphics-card/ 5iyNFQahgo7bwYgdB2dVx Tue, 11 Feb 2025 11:12:06 +0000
<![CDATA[ OpenAI boss suggests there's the equivalent of Moore's law for AI and it's 'unbelievably stronger' ]]> OpenAI boss Sam Altman has been merrily blogging away about his thoughts on AGI, or Artificial General Intelligence, and there's a lot of food for thought. A key takeaway, however, is his link between Moore's Law and what he perceives to be the AI equivalent.

Point two of his "three observations" regarding the oncoming AGI future discusses the idea that "the cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use."

Altman goes on to say that Moore's law "changed the world at 2x every 18 months; this is unbelievably stronger."
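
To put those two rates side by side, here's a quick back-of-the-envelope comparison in Python (taking both claims at face value, and picking a three-year window purely for illustration):

years = 3
ai_cost_drop = 10 ** years                # Altman's claim: ~10x cheaper every 12 months
moore_gain = 2 ** (years * 12 / 18)       # Moore's law as Altman frames it: ~2x every 18 months
print(f"After {years} years: AI cost falls ~{ai_cost_drop:,}x, Moore's law delivers ~{moore_gain:.0f}x")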

Moore's Law is defined as the principle that the speed and capability of computers can be expected to double every two years, due to the increase in transistor counts between generations of hardware, for a minimal cost.

As a guiding principle for measuring chip development it's widely regarded to be no longer true, with Nvidia CEO Jensen Huang often noting it's dead and buried, although it's still often cited as a frame of reference for technological development as a whole.

However, AI development appears to have moved at a relentless pace in recent years, and OpenAI has been at the forefront of it with various iterations of its ChatGPT AI chatbot.

According to Altman, Artificial General Intelligence is the next step, and is already moving apace—and if his observations prove accurate, the gigantic price drops in relation to AI usage over time might be a suitable metric to define AI development in a similar way.

Altman's Law then, perhaps. Or so he likely hopes. Moore's Law was named after Gordon Moore, the Intel co-founder who observed in 1965 that the number of components in an integrated circuit was doubling every year, before adjusting his estimate in 1975 to a doubling every two years.

It became a guiding principle in the semiconductor industry to ensure future-proof long term planning, although it has fallen out of favor since.

The real meat and potatoes of Altman's observations concerns what he sees as the AGI-led future to come, and how it may help unlock the creativity and productivity of us mere worker drones thanks to the fabulous potential of the tech.

Still, if you happen to create a new law while musing away on your blog, that's a happy accident, isn't it? Presumably he didn't need any AI help to do it, either. There's hope for us all then.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
https://www.pcgamer.com/software/ai/openai-boss-suggests-theres-the-equivalent-of-moores-law-for-ai-and-its-unbelievably-stronger/ PL6m37FcWkNk4zKmtmqDcP Mon, 10 Feb 2025 17:04:08 +0000
<![CDATA[ There is one thing making me excited about the new Nvidia RTX 5070 Ti and no, it isn't Multi Frame Generation ]]> The next family of cards in the new RTX Blackwell series of Nvidia GeForce GPUs is going to be an interesting one. Theoretically, the lower we go down the RTX 50-series stack, the less exciting the graphics cards look to be on paper. The generational differences in silicon look slight, and there's a general expectation that Nvidia is going to be leaning even harder on the substantial veneer of Multi Frame Generation performance to make them look good. But the two-card RTX 5070 family may still end up giving us the most PC enthusiast of GPUs in this entire generation.

I'm talking here about overclocking, more in the classic PC gaming sense than the sort of rather pointless number-chasing it's become. Y'know, that old school method of wringing every last drop of gaming performance out of your silicon because the price of stepping up to higher spec hardware is utterly punitive.

Before we get too deep into that, however, I do want to acknowledge that obviously the $749 RTX 5070 Ti is not mid-range silicon, not some sort of middle-order GPU for the masses. That's not far off what a GeForce Titan card used to cost, ffs. But it is arguably the more affordable face of high-end PC graphics.

And, from my time overclocking the RTX 5080, there is a chance the RTX 5070 Ti, with the same GPU, could be a hell of a strong contender for the best overclocking card we've seen in an age. Already the RTX 5080 has delivered some pretty stunning performance on that count, allowing me to push the GPU clock well beyond the 3 GHz mark with a stable overclock, which didn't require me to drive a ton of extra voltage through the GB203 chip, either.

With the RTX 5070 Ti specs showing a clock speed well below what the RTX 5080 is delivering from its own Boost clock, I feel there really ought to be some serious headroom in that cut-down chip.

As I say, both cards are using the GB203 GPU—the RTX 5080 is taking the full chip, utilising all available 84 streaming multiprocessors (SMs), while the RTX 5070 Ti is taking hold of 70 SMs. That equates to a difference of some 1,792 CUDA cores between them, though interestingly only 768 between the RTX 5070 Ti and the old RTX 4080 of the previous generation.
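
For anyone wondering where those numbers come from, these GPUs pack 128 CUDA cores per SM, so the sums work out as below (a quick sanity check rather than anything official):

cores_per_sm = 128                        # CUDA cores per streaming multiprocessor
rtx_5080_cores = 84 * cores_per_sm        # 10,752
rtx_5070_ti_cores = 70 * cores_per_sm     # 8,960, matching the card's official spec
print(f"RTX 5080 vs RTX 5070 Ti: {rtx_5080_cores - rtx_5070_ti_cores} CUDA cores apart")   # 1,792
print(f"RTX 4080 (9,728 cores) vs RTX 5070 Ti: {9728 - rtx_5070_ti_cores} apart")          # 768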

In terms of the Boost clock, the RTX 5080 comes in at a rated 2,617 MHz (though that is a moveable feast given the dynamic nature of GPU frequencies these days), and the standard RTX 5070 Ti is rated at 2,452 MHz.

That's a pretty healthy difference in clock speed there and, seeing as we were able to squeeze a +525 MHz offset just using a simple MSI Afterburner tweak with the RTX 5080, it's not unreasonable to think we ought to be able to spike that sub-2,500 MHz figure a lot higher, too. And it's not like the RTX 5070 Ti is being specifically limited in terms of power draw, either. Compared with the previous generation, the RTX Blackwell card has a higher power rating, at 300 W, which is already mighty close to the RTX 5080's 360 W rating.

I've not had the pleasure of slotting an RTX 5070 Ti into the PC Gamer test rig as yet, so I cannot talk from direct experience; this is all just tentatively excited expectation born of what I've seen the other card using that same GB203 silicon doing.

If we can get the same 3 GHz+ stable clock speed out of the RTX 5070 Ti then I would expect another 10%+ in terms of gaming performance out of the card. Given that Nvidia has already primed us to expect a 20% gen-on-gen performance gain for the card, slapping another 10% on top of that will put us in the same sort of frame rate bump territory as the RTX 5090 delivers.

That's the card offering the biggest generational performance increase of the RTX Blackwell cards, with a 30% 4K gen-on-gen gaming hike. Being able to push the lower spec RTX 5070 Ti card to match that percentage boost will make it a far more tempting card.

Image 1 of 1

Nvidia RTX 5080 Founders Edition graphics card from different angles

(Image credit: Future)

Obviously, there are things which could put a blocker in all this excitement of mine. There's a good chance there may be something in the vBIOS which stops the card from clocking so high, or there could be some limitations put on the power delivery. If there's any chance of the RTX 5070 Ti being able to be overclocked to come near the RTX 5080 in terms of gaming performance you can bet there will be some limits put in place.

It's also worth noting my exceptional overclocking numbers of the RTX 5080 came from the over-engineered, loss-leading Founders Edition version of the card, which has been specifically created with high-performance power componentry.

There is no Founders Edition card for the RTX 5070 Ti, however, and whether third-party PCBs are going to be capable of driving the GB203 GPU stably at those frequencies is still one that's up for debate. Nvidia certainly suggested I'd won the silicon lottery hitting those figures with our RTX 5080 card.

A key hint to what we might be able to expect to see from overclocking the RTX 5070 Ti would be at what level the AIBs are setting their own factory overclocked versions. With the RTX 5080, the likes of Asus and Gigabyte are setting confidently high overclocked clock speeds on their retail cards. As yet, however, we don't have details of what those companies are setting their overclocked versions at for the RTX 5070 Ti. Right now, all we get is a wee "TBD" when it comes to the final clock speed specs of these cards.

But the RTX 5070 Ti is coming out this month, in a scant few weeks. So we shall know for sure whether there is reason to be cheerful about the third-tier of the RTX Blackwell cards very soon. If it's the OC king, the clock-happy mac daddy, and you had any hope of buying one, it could be the best-value card of the lot. That's a lot of maybes, I'll grant you, but it's certainly not beyond the realms of possibility.

Yes, Nvidia might accidentally create a great tinkerers GPU and let us really tweak the twangers off it.

And what of the RTX 5070? Well, it's a smaller chip being offered a lot more power, maybe that's going to give it something to offer us via its ickle GB205 GPU. Though I am definitely less convinced about that as a possibility right now.

]]>
https://www.pcgamer.com/hardware/graphics-cards/there-is-one-thing-making-me-excited-about-the-new-nvidia-rtx-5070-ti-and-no-it-isnt-multi-frame-generation/ gpwc3c9ZM7mZQ9tfZWu5Dk Mon, 10 Feb 2025 16:45:31 +0000
<![CDATA[ Newegg's speedy PC building contest sees two builders finish in under five minutes, leaving my best time in the mud ]]> I like to think I'm pretty speedy when it comes to building a gaming PC, but my record is firmly in the mud. Two contestants at Newegg's PC speed building contest at Megacon 2025 have finished with sub-five minute times, and that's a mighty impressive feat, even if it does come with some caveats.

And what are they, you might be asking? Well, it's not really building a PC from scratch, as all that needs to be done here is installing the motherboard, GPU, PSU and RAM, before booting the PC and ensuring that it posts correctly.

Which, if I'm honest, makes me feel a bit better about myself. I normally dally around a bit with the CPU installation, admiring the shiny new chip and thinking about all the infinitesimally small pathways inside. Then I have a cup of tea, debate once again how much thermal paste to use (it's a pea-sized amount, always), and by that point it's usually time for lunch.

Anyway, Newegg has posted a timelapse of two of its top competitors going head to head, in a competition I would surely be disqualified from for interfering with my opponents machine. We also have photos of four winners holding various bits of hardware as prizes, with two of them managing times under five minutes.

The winning time? Four minutes, nine and a half seconds. Even discounting the fact that they didn't have to unbox the components, install the CPU, or mess around with various cable configurations to make things look pretty inside the case, that's darn impressive.

Our winner appears to have nabbed themselves an Intel Arc B580 GPU, an Intel Core 7 Ultra CPU, and a $1,000 Newegg gift card. That's the beginnings of an excellent gaming PC right there, so well done them.

One thing I would like to know is how many of our brave competitors managed to drop a screw inside the casing. I have yet to manage a build where this hasn't happened at least once, and I'm pretty sure you can tell which PCs I've built the quickest simply by picking them up and shaking them around a bit to listen for tell-tale rattles.

Don't tell anyone though, alright? I've got a reputation to protect, and a career to continue. Oh goodness, I've said the quiet part out loud, haven't I? Move on, everyone. Move on.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
https://www.pcgamer.com/hardware/gaming-pcs/neweggs-speedy-pc-building-contest-sees-two-builders-finish-in-under-five-minutes-leaving-my-best-time-in-the-mud/ Dc8od8E63yqa8teaakj8RL Mon, 10 Feb 2025 16:41:10 +0000
<![CDATA[ We asked a PSU expert which adapters or extensions are okay for the RTX 50-series. His answer: 'DO NOT BUY adapters or extenders, they can all be dangerous' ]]> If you're one of the 23 people around the world who has managed to snag an RTX 5090, you might be tempted to splash out on a fancy set of power cables to make your new upgrade look the best it can. That got us thinking about what set would be best to use, so we asked a leading expert on power supplies for the low-down. Given his credentials, we expected a nuanced, detailed response but he settled for a five-word recommendation: "Do not buy adapters or extenders."

The expert in question just so happens to be Aris Mpitziopoulos and he's the CEO of Cybenetics, the global authority on testing and rating computer power supply units, and owner of tech site Hardware Busters. In short, he knows what's what when it comes to all things electrical engineering with PCs. And more to the point, he also knows exactly what you should be doing with power cable extenders and adapters for a new RTX 5090.

When we asked him about what's the best approach to take, his reply was unequivocal: "It is simple, DO NOT BUY adapters or extenders. They can all be dangerous."

By dangerous, Aris means that there's a chance that an iffy extender or adapter could end up resulting in a connection overheating and melting when an RTX 5090 is pulling its full complement of 48 amps or more. And that's despite using the new, supposedly 'fixed' 12VHPWR connector, aka the 12V-2x6.

The problem isn't the cables themselves, though. "All 12V-2x6 cables should be able to deliver 600W (or up to 55A) since these cables are electrically compatible for all vendors!" Aris told us. "But you have to keep in mind that there are PSUs without native 12V-2x6 sockets; they use 2x 8-pin instead. Apparently, these PSUs use proprietary 12V-2x6 cables."

To give you an example of what he means by that, Corsair's 2024 Shift RM PSUs come with a dedicated 12VHPWR cable but the PSU itself doesn't use such a socket. Instead, the cable splits into two 8-pin connectors that you plug into the supply unit. Those sockets and the cable are all rated to supply up to 55 A provided you use the supplied cable. Another set might fit the sockets but there's a chance they'll make a mess of things very quickly.

This is true of many PSU models, though some do have a single 12VHPWR socket and cable. Corsair and others have switched to the newer 12V-2x6 design (which uses different sense pin lengths to ensure that current can be supplied only once the connector is fully inserted) for their latest PSUs but the older design can still be used with an RTX 5090.

A diagram of a graphics card and related power connection standards.

(Image credit: Corsair)

Aris explains: "In general, the cable remains exactly the same. Only the connectors on the PSU and GPU side change from 12VHPWR to 12V-2x6, for PSUs that used the native 12+4 pin headers. There are no changes in the cables (they are just called now 12V-2x6 cables instead of 12VHPWR which can be confusing)."

Now it's important to note that he's not saying that the adapters Nvidia and its board partners are shipping with their RTX 50 series cards are a problem. They're not and it's third-party products and extenders that you need to be wary of, as there are no guarantees whatsoever that they'll work with your PSU and RTX 50-series combination.

You might be using such a setup with an RTX 4090 and experienced no problems, but the default TGP of that card is 450 W, a full 125 W less than the 5090. Not that this is a hard limit, as our Dave discovered testing the new Blackwell monster for its review.

The card's average power was 568 W and it peaked at 637 W—a current draw of 53 A.

Your next upgrade

Nvidia RTX 5090 Founders Edition graphics card on different backgrounds

(Image credit: Future)

Best CPU for gaming: The top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game ahead of the rest.

CPUs can routinely draw four times that amount of current but they have a huge number of pins to distribute that over. So much so, that you can still use the chip, even if some of the supply pins are damaged or completely broken. When it comes to an RTX 5090, though, you've just got 12 pins, tightly packaged together. That doesn't leave a lot of room for manufacturing variances or user mistakes.

So, here's our PC Gamer PSA: If you are going to spend thousands of dollars on the most powerful gaming graphics card you can buy (well, potentially buy), then you should really consider getting the right PSU to go with it. I'd personally recommend doing this with an RTX 5080, as well as an RTX 5090.

Heck, I even did it with my RTX 4080 Super. I chose a PSU that had a dedicated 12VHPWR socket and cable. It's not the fanciest looking of things but I'd rather have a dull-looking PC than risk any kind of melty-melty action.

And if you don't want to take my word for it, then listen to the boss of Cybenetics.

]]>
https://www.pcgamer.com/hardware/graphics-cards/we-asked-a-psu-expert-and-cybenetics-chief-which-adapters-or-extensions-are-okay-for-the-rtx-50-series-his-answer-it-is-simple-do-not-buy-adapters-or-extenders-they-can-all-be-dangerous/ RpPPxCpwsUrdgcu7qb2SxF Mon, 10 Feb 2025 16:38:02 +0000
<![CDATA[ AOC Gaming C27G4ZXE review ]]> Remember the days when high-refresh gaming meant taking out a new mortgage? Thankfully, they're gone as the new AOC Gaming C27G4ZXE proves. This is a fully 280 Hz gaming monitor for well under $200. Hooray.

Of course, at this price point something has to give. Actually, with the AOC Gaming C27G4ZXE, plenty has to give. The most obvious casualty to cost reduction is resolution. This 27 incher is a mere 1080p panel, so that's 1,920 by 1,080 pixels.

The result is a very modest pixel density of just 82 DPI. We'll come back to the impact that has on image quality. But up front it's worth noting that while this relatively low resolution is a necessary compromise to hit the price point, it actually makes sense from a price-performance perspective.

If you're shopping monitors at this end of the market, safe to assume you're not running a $1,000 GPU. So, a lower resolution will likely be a better fit with your graphics card, especially if you want to make use of that 280 Hz refresh.

AOC Gaming C27G4ZXE specs

AOC Gaming C27G4ZXE

(Image credit: Future)

Screen size: 27-inch
Resolution: 1,920 x 1,080
Brightness: 300 nits full screen
Response time: 0.3 ms MPRT, 1 ms GTG
Refresh rate: 280 Hz
HDR: HDR10
Features: VA panel, HDMI 2.0 x2, DisplayPort 1.4
Price: $175 (estimated) | £159 (Hub model)

Anyway, the next concession is a VA rather than IPS panel. As we routinely explain, VA tends to have worse response and viewing angles compared to IPS, but better contrast. This isn't always the case, but the slower pixel response can obviously be a bit of a bummer on a gaming display.

Those limitations aside, the AOC Gaming C27G4ZXE is also a little stingy when it comes to build, ergonomics and connectivity. This monitor feels a bit cheap and the stand only offers tilt adjustment.

That said, it doesn't actually look too poverty stricken thanks to some nice geometric design for the stand base and rear of the screen enclosure. This isn't a totally anonymous black square, some effort has gone in. The 27-inch panel's gentle 1500R curve also adds perhaps the slightest frisson of upmarket consumer electronics.

At the very least, it's a decent looking thing sitting on your desk when you consider the price point. It just doesn't feel all that robust. Again, you can't really expect much more at this price point.

(Image credit: Future)

As for the connectivity shortfall, well, you get two HDMI 2.0 ports and single DisplayPort input. What you don't get is any kind of USB hub. But try finding an equivalent monitor with a USB hub from a big brand at similar money. You'll struggle.

The final obvious casualty of the low price point is HDR support. Honestly, it's the merest of flesh wounds. You do get basic HDR 10 support. But with a maximum brightness of 300 nits and no local dimming, this is clearly not a true HDR display. But then neither is any monitor with entry-level HDR400 certification.

Actually, I'd argue very few LCD as opposed to OLED monitors are truly capable of high dynamic range rendering. At least with the AOC Gaming C27G4ZXE you can theoretically decode HDR content with correct colors. That's something.

(Image credit: Future)

But what of the actual image quality? First impressions are none too shabby. The C27G4ZXE may only be rated at 300 nits, but it's fairly bright and punchy. The VA panel tech helps with that impression. VA panels have much better contrast than IPS and that contrast between brighter and darker tones makes a screen look subjectively more vibrant and dynamic.

The C27G4ZXE may only be rated at 300 nits, but it's fairly bright and punchy.

AOC has actually calibrated this thing pretty well, too. The colors in default sRGB mode are bob on, which isn't always the case with cheap VA monitors. They are often set up to be over-saturated.

Sadly, that deft calibration doesn't extend to HDR content. Generally, this monitor looks rather dull in HDR mode, it's actually more vibrant in SDR mode which obviously isn't right. What's more, SDR colors in HDR mode are a mess. Ultimately, the HDR mode is best avoided unless you absolutely have to use it. That's not a huge disappointment given the price point. But it does mean that the AOC Gaming C27G4ZXE is very much best viewed as a non-HDR panel.

If those elements of the static image quality are decent, what about when things get moving? The 280 Hz refresh definitely translates into snappy responses to control inputs. The latency is great given the price point. So, this monitor is a great choice for online shooters and esports on a budget.

Image 1 of 4

AOC Gaming C27G4ZXE

(Image credit: Future)
Image 2 of 4

AOC Gaming C27G4ZXE

(Image credit: Future)
Image 3 of 4

AOC Gaming C27G4ZXE

(Image credit: Future)
Image 4 of 4

AOC Gaming C27G4ZXE

(Image credit: Future)

Less impressive is the pixel response. AOC quotes some very impressive figures here with 0.3 ms MPRT and 1 ms GTG response times. You also get four levels of pixel-accelerating overdrive in the OSD menu to help you tune the response.

Sadly, AOC hasn't delivered pixel response performance beyond expectations.

Sadly, however, this monitor conforms to the cheap VA norm. All but the quickest overdrive mode suffers from at least a little visible smearing and blurring. As for the quickest option, that largely eradicates the smearing only to swap it for a touch of overshoot and inverse ghosting.

Unfortunately, you can actually see that overshoot in games in the form of texture colors shifting as you move your mouse and the pixel overshoot their target colors. It's not super obvious, but it is there and once you see it, it can't be unseen.

(Image credit: Future)

Of course, this is the norm for a VA panel at this price point. So, it absolutely isn't a deal breaker. But if you were hoping that AOC had done something magical and somehow delivered pixel response performance way beyond expectations, well, that simply hasn't happened.

Apart from response, this monitor's other obvious weakness is the aforementioned pixel density. It's a tricky aspect to critique. The price point ultimately dictates 1080p. A 1440p 280 Hz panel at this price point is too much to ask.

And as we said, 1080p is actually a good match in terms of GPU load given the low price point here. But that 82 DPI pixel density is awfully ugly on the Windows desktop. It makes for craggy, rough looking fonts and a generally pretty pixelated vibe.

(Image credit: Future)

The saving grace is that games don't actually look too bad when it comes to visual detail. You're still getting full HD and if you switch on upscaling, such as Nvidia DLSS or AMD FSR, the slight softening effect helps to smooth out the craggies that are consequent from the fairly large pixels.

As a final note, the 1500R panel curvature is really neither here nor there. There's really very little benefit to a curved panel on a 27-inch 16:9 screen. But equally the curve is slight enough not to be a distraction.

Overall, then, this is a decent display for the money. 280 Hz and 1080p is a sensible combination and makes for a snappy feeling gaming experience even with a commensurately budget GPU. The low DPI look isn't great, but it's a reasonable compromise at this price point.

Buy if...

You want high refresh gaming on the cheap: The 280 Hz from an established band at this price point makes low-latency gaming pleasingly accessible.

Don't buy if...

You want crispy visuals: 1080p on a 27-inch panel makes for mediocre pixel density.

The caveat to that is that were it our money, we'd prefer to stretch to around $200 if at all possible and go for a 1440p 144 Hz option. The refresh is lower, but the pixel density is a lot nicer.

If there is an element that's hard to really get on board with even at the price point, it's the mediocre pixel response. Admittedly, you'd do well to find a competing screen that's much better. So, the problem isn't unique to this AOC. But the response really isn't great. So, chalk that up as something we'd like to see improve industry-wide on this class of display as opposed to something AOC specifically has messed up.

Actually, on that note, if money really is tight and $200 for a 1440p monitor isn't an option, we'd probably lean towards a 24-inch IPS monitor, maybe even one running at a lower 160 Hz refresh rate. You'll get better response and slightly better pixel density, albeit on a smaller panel.

Ultimately, there are no perfect choices here. And of the available 27-inch 1080p gaming monitors in the budget class, the AOC Gaming C27G4ZXE is absolutely a worthy contender.

]]>
https://www.pcgamer.com/hardware/gaming-monitors/aoc-gaming-c27g4zxe-review/ kT5uStyxM56w2EXfCpSrWH Mon, 10 Feb 2025 15:43:20 +0000
<![CDATA[ In a mere decade 'everyone on Earth will be capable of accomplishing more than the most impactful person can today' says OpenAI boss Sam Altman ]]> In surprising news to me, OpenAI co-founder and CEO Sam Altman has a blog. And in among the ruminations on traditional blog-like topics like "What I Wish Someone Had Told Me" and "The Strength of Being Misunderstood", he recently posted three observations on AGI (Artificial General Intelligence) and its potential uses for the human race.

"The economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential," says Altman.

"In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today."

Well, that sounds lovely, doesn't it? Who doesn't love a promise of a Star Trek-style, utopian, post-disease, post-scarcity future, in which our endeavours are supported by our own private genius. To justify that thinking, Altman's three observations are thus:

1. The intelligence of an AI model roughly equals the log of the resources used to train and run it. "These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude."

2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. "You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger."

3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. "A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future."

That last point seems particularly pertinent, given that OpenAI has previously been reported to be burning through billions of dollars in staffing and model training costs. Exponentially increasing investment has been a keystone of the modern AI boom, and the release of China-based startup DeepSeek's R1 model (supposedly trained at a fraction of the cost of existing efforts) has recently shaken investor confidence in the US-dominated AI industry.

So it's no surprise Altman is highlighting its importance here. On the first point, however, it looks like Altman has no illusions of the mainstream AI market (if such a thing exists) letting up on training costs, hardware requirements, and "arbitrary amounts of money" in order to continue gaining ground in AI development, at least when it comes to AGI.

Images of Nvidia's Blackwell GPU from GTC.

(Image credit: Nvidia)

Still, according to Altman it at least becomes cheaper to use over time. Which, if his predictions about the future of AI agents come true, will be necessary to enable our AI-co-worker hellsc... I mean, future working methods.

"Let’s imagine the case of a software engineering agent... imagine it as a real-but-relatively-junior virtual coworker. Now imagine 1,000 of them. Or 1 million of them. Now imagine such agents in every field of knowledge work.

"The world will not change all at once; it never does. Life will go on mostly the same in the short run, and people in 2025 will mostly spend their time in the same way they did in 2024. We will still fall in love, create families, get in fights online, hike in nature, etc.

"But the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. "

Data Center

(Image credit: Akos Stiller - Getty Images)

Goody. I'm pleased to hear that, in Altman's eyes, I'll still be getting in fights online and hiking in nature this year. But AGI-enabled assistants are coming, says the OpenAI head honcho, and given the previous trends he's highlighting here, they appear to be coming rather quickly (providing the money tap keeps flowing, of course).

"Agency, willfulness, and determination will likely be extremely valuable," Altman continues. "Correctly deciding what to do and figuring out how to navigate an ever-changing world will have huge value; resilience and adaptability will be helpful skills to cultivate."

Your next upgrade

Nvidia RTX 5090 Founders Edition graphics card on different backgrounds

(Image credit: Future)

Best CPU for gaming: The top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game ahead of the rest.

"AGI will be the biggest lever ever on human willfulness, and enable individual people to have more impact than ever before, not less."

So, a lot of optimistic thinking going on here, it seems. I'd like to hold my hand up and say that I'm not too keen on the idea of an AI co-worker writing my articles for me, but if they could "marshall the intellectual capacity" to whittle down my inbox reliably without sending important messages to the spam folder, that'd be grand.

I don't want an AGI Mozart, more of a competent Jeeves. Still, as Altman has it, things do sound suspiciously bright and rosy for our creative futures:

"There is a great deal of talent right now without the resources to fully express itself, and if we change that, the resulting creative output of the world will lead to tremendous benefits for us all."

Here's hoping, anyway.

]]>
https://www.pcgamer.com/software/ai/in-a-mere-decade-everyone-on-earth-will-be-capable-of-accomplishing-more-than-the-most-impactful-person-can-today-says-openai-boss-sam-altman/ D5PwHYLW6LXQ8bURiTsYxB Mon, 10 Feb 2025 15:02:17 +0000
<![CDATA[ Just in case you've forgotten all about them, AMD posts a less-than-convincing argument as to why AI PCs are better than any other type of PC ]]> The most commonly used tech phrase in the PC industry last year was 'AI PC' and, according to Intel at least, we were promised that such computers would "save people hours weekly through built-in artificial intelligence." So far the interest in AI PCs has been somewhat muted so AMD has decided that the best way to solve this is to explain exactly why AI PCs are better than non-AI PCs.

It's done this via a simple post on X, highlighting what an AMD AI PC can do better than a common-or-garden PC: "Handle AI tasks more efficiently with the world's most powerful built-in Neural Processing Unit (NPU), which supports the CPU and GPU for faster and smarter everyday use." On the other hand, non-AI PCs "can run AI tasks, but lack[s] an integrated NPU. AI runs on the CPU and GPU, but with less efficiency for multitasking."

Well, there you have it. Surely that's all the convincing anyone would need to go out and buy a spangly new AI PC with an NPU-equipped AMD Ryzen processor, right? AMD clearly doesn't think anyone actually needs to know any kind of specifics about the 'faster and smarter everyday use' bit because we all want our PCs to be faster and smarter. Every day.

The CPU in my main PC has an NPU (it's an Intel one) but as of yet, it's not been used for anything other than running a single benchmark to see it being used. Last year, I sat through a press briefing from one big system vendor in which it spent a good deal of time explaining why AI and an NPU were going to make things so much better.

I know how an NPU can be used but I'm still totally none the wiser as to what real, tangible benefit they bring to the PC industry as a whole. Let me clarify: With an NPU, one can run a small AI algorithm locally, without having to send lots of data to a distant cloud server, which means you retain far more privacy over your details and content that are being AI'd. Summarising your notes, for example, or improving your CV—that kind of thing.

But as AMD points out, your PC can do that without an NPU. It just uses the CPU or GPU to do it instead. Ah, but that's less efficient, you might say (or at least, AMD does). How much less efficient is it? Is it slower, a lot slower; does it involve a lot more power, draining your laptop's battery faster or using up more electricity from an outlet?

Without specifics, in the form of real-world examples being showcased in front of you, I can't imagine anyone will be nodding sagely at AMD's words and being converted into a champion of the AI PC. To be blunt, it's such a weak attempt at promoting AI PCs, that AMD has potentially done more damage than good.

While X isn't well-known for inviting reasoned and well-balanced comments these days, the replies to AMD's post are not in the least bit surprising. "Scam," writes one user, whereas another expounds more with "I'm good with the non-AI PC. I can't really think of anything AI does which would benefit me as an actual creator." Only one reply was positive, though there was not a great deal there at the time of writing.

Your next machine

Gaming PC group shot

(Image credit: Future)

Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

To be fair to AMD, though, I don't think there's any reasoned argument that would make a tech-savvy buyer deliberately seek out an AI PC, even if they're a fan of generative AI or the like. The reason is that such users are already using their non-AI PCs to do all of this, either via the cloud or locally via the GPU. An NPU can be used to handle a RAG (retrieval-augmented generation) algorithm, to improve the results from a locally-run LLM but I've not seen any general-use software do this yet.

AMD's latest laptop processors are fantastic chips, and any one of those paired with an Nvidia RTX GPU will make for a potent computer capable of blasting through AI workloads. If your portable PC is on its last legs, then by all means, grab an AMD 'AI PC' and enjoy a blisteringly quick computer, that's equally capable of gaming and content creation. The fact that it will sport an NPU doesn't really add to that, though at least it doesn't detract from it.

Cutting-edge technology always needs a 'killer app' to make it really take off but we've yet to see that for the little NPU. Until then, it looks like we'll just have to put up with more half-hearted marketing exercises that will leave us all still wondering the same thing: What use is AI to me?

]]>
https://www.pcgamer.com/hardware/gaming-pcs/just-in-case-youve-forgotten-all-about-them-amd-posts-a-less-than-convincing-argument-as-to-why-ai-pcs-are-better-than-any-other-type-of-pc/ RwYA8gSqpdJ436nKbByQXD Mon, 10 Feb 2025 15:00:11 +0000
<![CDATA[ I've been testing Nvidia's new Neural Texture Compression toolkit and the impressive results could be good news for game install sizes ]]> At CES 2025, Nvidia announced so many new things that it was somewhat hard to figure out just what was really worth paying attention to. While the likes of the RTX 5090 and its enormous price tag were grabbing all the headlines, one new piece of tech sat to one side with lots of promise but no game to showcase it. However, Nvidia has now released a beta software toolkit for its RTX Neural Texture Compression (RTXNTC) system, and after playing around with it for an hour or two, I'm far more impressed with this than any hulking GPU.

At the moment, all textures in games are compressed into a common format, to save on storage space and download requirements, and then decompressed when used in rendering. It can't have escaped your notice, though, that today's massive 3D games are…well…massive and 100 GB or more isn't unusual.

RTXNTC works like this: The original textures are pre-converted into an array of weights for a small neural network. When the game's engine issues instructions to the GPU to apply these textures to an object, the graphics processor samples them. Then, the aforementioned neural network (aka decoding) reconstructs what the texture looks like at the sample point.

The system can only produce a single unfiltered texel so for the sample demonstration, RTX Texture Filtering (also called stochastic texture filtering) is used to interpolate other texels.

Nvidia describes the whole thing using the term 'Inference on Sample,' and the results are impressive, to say the least. Without any form of compression, the texture memory footprint in the demo is 272 MB. With RTXNTC in full swing, that reduces to a mere 11.37 MB.

Image 1 of 3

Screenshots of Nvidia's RTXNTC software demonstration, highlight the use of neural texture compression to reduce the impact of texture sizes in VRAM

Using uncompressed textures (Image credit: Nvidia)
Image 2 of 3

Screenshots of Nvidia's RTXNTC software demonstration, highlight the use of neural texture compression to reduce the impact of texture sizes in VRAM

Using NTC textures, transcoded to a BCn format (Image credit: Nvidia)
Image 3 of 3

Screenshots of Nvidia's RTXNTC software demonstration, highlight the use of neural texture compression to reduce the impact of texture sizes in VRAM

Using NTC textures directly (Image credit: Nvidia)

The whole process of sampling and decoding is pretty fast. It's not quite as fast as normal texture sampling and filtering, though. At 1080p, the non-NTC setup runs at 2,466 fps but this drops to 2,088 fps with Interfence on Sample. Stepping the resolution up to 4K the performance figures are 930 and 760 fps, respectively. In other words, RTXNTC incurs a frame rate penalty of 15% at 1080p and 18% at 4K—for a 96% reduction in texture memory.

Those frame rates were achieved using an RTX 4080 Super, and lower-tier or older RTX graphics cards are likely to see a larger performance drop. For that kind of hardware, Nvidia suggests using 'Inference on load' (NTC Transcoded to BCn in the demo) where the pre-compressed NTC textures are decompressed as the game (or demo, in this case) is loaded. They are then transcoded in a standard BCn block compression format, to be sampled and filtered as normal.

The texture memory reduction isn't as impressive but the performance hit isn't anywhere near as big as with Interfence on Sample. At 1080p, 2,444 fps it's almost as fast as a standard texture sample and filtering, and the texture footprint is just 98 MB. That's a 64% reduction over the uncompressed format.

All of this would be for nothing if the texture reconstruction was rubbish but as you can see in the gallery below, RTXNTC generates texels that look almost identical to the originals. Even Inference on Load looks the same.

Image 1 of 3

Screenshots of Nvidia's RTXNTC software demonstration, highlight the use of neural texture compression to reduce the impact of texture sizes in VRAM

(Image credit: Nvidia)
Image 2 of 3

Screenshots of Nvidia's RTXNTC software demonstration, highlight the use of neural texture compression to reduce the impact of texture sizes in VRAM

(Image credit: Nvidia)
Image 3 of 3

Screenshots of Nvidia's RTXNTC software demonstration, highlight the use of neural texture compression to reduce the impact of texture sizes in VRAM

(Image credit: Nvidia)

Of course, this is a demonstration and a simple beta one at that, and it's not even remotely like Alan Wake 2, in terms of texture resolution and environment complexity. RTXNTC isn't suitable for every texture, either, being designed to be applied to 'physically-based rendering (PBR) materials' rather than a single, basic texture.

And it also requires cooperative vector support to work as quickly as this and that's essentially limited to RTX 40 or 50 series graphics cards. A cynical PC enthusiast might be tempted to claim that Nvidia only developed this system to justify equipping its desktop GPUs with less VRAM than the competition, too.

Your next upgrade

Nvidia RTX 5090 Founders Edition graphics card on different backgrounds

(Image credit: Future)

Best CPU for gaming: The top chips from Intel and AMD.
Best gaming motherboard: The right boards.
Best graphics card: Your perfect pixel-pusher awaits.
Best SSD for gaming: Get into the game ahead of the rest.

But the tech itself clearly has lots of potential and it's possible that AMD and Intel are working on developing their own systems that achieve the same result. While three proprietary algorithms for reducing texture memory footprints aren't what anyone wants to see, if developers show enough interest in using them, then one of them (or an amalgamation of all three) might end up being a standard aspect of DirectX and Vulkan.

That would be the best outcome for everyone, so it's worth keeping an eye on AI-based texture compression because just like with Nvidia's other first-to-market technologies (e.g. ray tracing acceleration, AI upscaling), the industry eventually adapts them as being the norm. I don't think this means we'll see a 20 GB version of Baldur's Gate 3 any time soon but the future certainly looks a lot smaller.

If you want to try out the NTC demo on your own RTX graphics card, you can grab it directly from here. Extract the zip and then head to the bin/windows-x64 folder, then just click on the ntc-renderer file.

]]>
https://www.pcgamer.com/hardware/graphics-cards/ive-been-testing-nvidias-new-neural-texture-compression-toolkit-and-the-impressive-results-could-be-good-news-for-game-install-sizes/ hKVC9vmYsFSaCYS3aigr4b Mon, 10 Feb 2025 13:08:42 +0000
<![CDATA[ The definition of overkill: Cooling an RTX 4090 to a claimed 20°C with a household air conditioning unit ]]> I've seen some wild and wacky PC cooling solutions in my time, but this one might beat them all: An RTX 4090 cooled with a household air conditioner.

Chinese techtuber 电解碳酸钠 (which machine translates to "Electrolytic sodium carbonate") has shown off the extreme cooling system in a video on Bilibili.

The channel says that it's still waiting on a water block for a planned RTX 5090 build, but in the meantime the air conditioner has been hooked up to an RTX 4090, dropping temps down to a claimed 20 degrees Celsius under load (via Tom's Hardware).

Given the power (and size) of the air conditioning unit used here, that's probably no surprise. It's rated to 12,000 BTU of cooling, with a 1.2 kW power draw. That means it's not exactly the most efficient of cooling solutions, although it is mounted on castor wheels for, err, ease of use.

And it's even been festooned with a faceplate featuring the logos of some popular PC hardware brands. I don't think I've ever seen Nvidia, AMD, and Intel logos featured quite so prominently on a build—but when you've got this much chassis real estate to play with, why not.

Image 1 of 2

An air conditioner standing next to a gaming PC with Nvidia, AMD and Intel logos visible.

(Image credit: 电解碳酸钠 on Bilibili)
Image 2 of 2

The internals of a liquid cooled PC using a household air conditioning unit.

(Image credit: 电解碳酸钠 on Bilibili)

The processor on the test system is Intel's Core i9 13900K, which also appears to be cooled by the outer air con unit. Well, it does run a little hot, after all.

Two hoses are diverted from the outside of the air conditioner into the rear of the PC and then into what looks like respective water blocks, and, aside from the bulk of the air conditioner itself, it seems like a surprisingly neat and tidy job.

Plus, you can even feel the breeze as your PC plunges to extremely low temperatures, as our host demonstrates by sticking her head in front of the main fan:

A person putting their head in front of an air conditioning fan attached to a gaming PC.

(Image credit: 电解碳酸钠 on Bilibili)

Well, it's all in good fun, isn't it? The host later says that the setup seems ready for the addition of a 14900K and RTX 5090, and possibly even a future RTX 6090, too.

Given that we've been impressed with the performance of the RTX 5090 FE cooler (and by how slim and svelte it looks despite the mighty power of the chip inside), all of this seems very unnecessary. However, if ultimate cooling is the goal, it's difficult to think of a system that would provide more headroom than this.

Should it take the top spot as an addition to our best CPU coolers guide, perhaps? Err, no. A 12,000 BTU air conditioning unit is a very expensive and impractical thing, at least for these purposes. Still, hats off for the attempt, at the very least.


Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.

]]>
https://www.pcgamer.com/hardware/cooling/the-definition-of-overkill-cooling-an-rtx-4090-to-a-claimed-20-c-with-a-household-air-conditioning-unit/ ZGutFrPdrRNLiHWRUv8DbC Mon, 10 Feb 2025 12:31:49 +0000