  • You’re still seeing ray tracing as a graphics option instead of what it actually is: something that makes game development considerably easier while dramatically improving lighting - provided it replaces rasterized graphics completely. Lighting levels the old-fashioned way is a royal pain in the butt: time- and labor-intensive, slow and error-prone. The rendering pipelines required to pull it off convincingly are a rat’s nest of shortcuts and arcane magic compared to the elegant simplicity of ray tracing (there’s a toy sketch of what I mean at the end of this comment).

    In other words: It doesn’t matter that you don’t care about it, because in a few short years, the vast majority of 3D games will make use of it. The necessary install base of RT-capable GPUs and consoles is already there if you look at the Steam hardware survey, the PS5, Xbox Series and soon Switch 2. Hell, even phones are already shipping with GPUs that can do it at least a little.

    Game developers have been waiting for this tech for decades, as has anyone who has ever gotten a taste of actually working with or otherwise experiencing it since the 1980s.

    My personal “this is the future” moment was the groundbreaking real-time ray tracing demo heaven seven from the year 2000:

    https://pouet.net/prod.php?which=5

    I was expecting it to happen much sooner though, by the mid to late 2000s at the latest, but rasterized graphics and the hardware that runs them were improving at a much faster pace. This demo runs in software, entirely on the CPU, which obviously had its limitations. I got another delicious taste of near real-time RT with Nvidia’s iRay rendering engine in the early 2010s, which could churn out complex scenes with PBR materials (instead of the simple, barely textured geometric shapes of heaven seven) at a rate of just a few seconds per frame on a decent GPU with CUDA, or even in real time on a top-of-the-line card. Even running entirely on the CPU, this engine was as fast as a conventional CPU rasterizer. I would sometimes preach about how this was a stepping stone towards this tech appearing in games, but people rarely believed me back then.
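
    To illustrate what I mean by “elegant simplicity”: the core of a ray tracer really is just “shoot a ray, find the closest hit, shoot another ray towards the light to see whether the point is lit”. Here’s a deliberately tiny, purely illustrative CPU toy in Python - one hardcoded sphere, one point light, ASCII output, nothing to do with any real engine - just to show how little machinery direct lighting and shadows need compared to a rasterized pipeline:

```python
# Toy ray tracer: one sphere, one point light, direct (Lambertian) lighting with a
# shadow ray. Purely illustrative - with a single sphere the shadow ray never
# actually finds an occluder, it is only here to show the pattern.
import math

WIDTH, HEIGHT = 48, 24
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, -3.0), 1.0
LIGHT_POS = (2.0, 2.0, 0.0)

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def norm(a):
    length = math.sqrt(dot(a, a))
    return (a[0] / length, a[1] / length, a[2] / length)

def intersect_sphere(origin, direction):
    """Distance along the ray to the sphere, or None if it misses."""
    oc = sub(origin, SPHERE_CENTER)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_RADIUS * SPHERE_RADIUS
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def shade(px, py):
    # Primary ray through the pixel (simple pinhole camera at the origin).
    u = (px + 0.5) / WIDTH * 2.0 - 1.0
    v = 1.0 - (py + 0.5) / HEIGHT * 2.0
    direction = norm((u, v, -1.0))
    t = intersect_sphere((0.0, 0.0, 0.0), direction)
    if t is None:
        return 0.0                                   # background
    hit = (direction[0] * t, direction[1] * t, direction[2] * t)
    normal = norm(sub(hit, SPHERE_CENTER))
    to_light = norm(sub(LIGHT_POS, hit))
    # Shadow ray: the *same* intersection routine answers the visibility question.
    if intersect_sphere(hit, to_light) is not None:
        return 0.0
    return max(0.0, dot(normal, to_light))           # diffuse term

for y in range(HEIGHT):
    print("".join(" .:-=+*#%@"[min(9, int(shade(x, y) * 9.99))] for x in range(WIDTH)))
```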




  • They shouldn’t have released a new architecture without dedicated AI accelerators as late as 2022, then - especially since they had working AI accelerators at that point, which they were only selling to data centers. FSR 4 can’t be ported back to older AMD architectures for the same reason that DLSS can’t physically work on anything older than an RTX 20 series card (which came out in 2018, by the way): you can only get so much AI acceleration out of general-purpose cores.

    AMD’s GPU division is the poster child for short-sighted conservatism in the tech industry, and the results speak for themselves. What’s especially weird is that the dominant company is driving innovation (for now at least) while the underdog is trying to survive by brute-forcing raster performance above all else, like we’re in some upside-down world. Normally, it’s the other way around. AMD have finally (maybe) caught up to one of Nvidia’s technologies from March of 2020, almost half a decade ago. Too bad they 1) are chasing a moving target and 2) have lost almost every other race in the GPU sphere as well, including the one for best raster performance. The fact that their upcoming generation is openly copying Nvidia’s naming scheme is not a good sign - you don’t do that when things are going well.

    Things might change in the future, and I hope there will finally be some competition in the GPU sector again, but for now it’s not looking good, and the recent announcements haven’t changed anything. A vocal minority of PC gamers dismissing ray tracing, upscaling and frame generation wholesale reflects neither what developers are doing nor how buyers are behaving - and the fact that AMD is finally trying to score in all of these areas tells us that the cries of fanboys were just that and not reflective of any reality. If the new generation of AMD GPUs ends up finally delivering decent ray tracing, upscaling and frame generation performance (which I hope, because fuck monopolies and those increasingly cringey leather jackets), I wonder if the same people will suddenly reverse course and embrace these technologies. Or maybe I should just stop worrying about fanboys.


  • Nvidia is active in more than just one sector, and love them or hate them, they are dominating in consumer graphics cards (because they are by far the best option there, with both competitors tripping over their own shoes at nearly every turn), professional graphics cards (ditto), automotive electronics (ditto) and AI accelerators (ditto). The company made a number of very correct and far-reaching bets on the future of GPU-accelerated computing a few decades ago, which are now all paying off big time. While I am critical of many if not most aspects of the current AI boom, I would not blame them for selling shovels during a gold rush. If there is one company in the world that got its business model around AI right, it’s them. Even if e.g. the whole LLM bubble bursts tomorrow, they’ve made enough money to continue their dominance in other fields.

    A few of their other bets were correct too: building actual productive and long-lasting relationships with game developers, spending far more on decent drivers than anyone else, and correctly predicting, very early on, two industry trends that are now both out in full force - ray tracing and upscaling - by making sure their silicon puts a heavy emphasis on supporting both. They were earlier than AMD and Intel, invested more resources into these hardware features while also providing better software support - and crucially, they also encouraged developers to make use of these hardware features, which is exactly the right approach. Yes, it would have been nicer of them to open source e.g. DLSS like AMD did with FSR, but the economic incentives for that approach unfortunately aren’t there.

    The marketing claim that the 5070 can keep up with the 4090 is a bit misleading, but there’s a method to the madness: While the three (instead of just one) synthetic frames created by the GPU are not 100% equivalent to natively rendered frames, the frame generation looks far better than it has in the past (to the point that most people will probably not notice it) and has also reached a point - thanks to motion reprojection similar to the tech previously found on VR headsets, but now with the screen edges being AI generated - where it has a positive impact on input latency instead of merely making games appear more fluid (there’s a rough sketch of the reprojection idea at the end of this comment). Still, it would have been more honest to say that the “low-end” model of the new lineup (at $600 - thanks, scalpers!) is more realistically half as fast as the previous flagship, but I guess they felt this wasn’t bombastic enough. Huang isn’t just an ass kisser, but also too boastful for his own good. The headlines wrote themselves though, which is likely why they were fine with bending the truth a little.

    Yes, their prices are high, but if there’s one thing they learned during COVID, it’s that there are more than enough people willing and able to pay through the nose for anything that outputs an image. If I can sell the same number of items at $600 as at half that price, it makes no sense to sell them for less. Hell, it would even be legally dangerous for a company with this much market share.

    I know this kind of upscaling and frame interpolation tech is unpopular with a vocal subset of the gaming community, but if there is one actually useful application of AI image generation, it’s using these approaches to make games run as well as they should. It’s not like overworked game developers can just magically materialize more frames otherwise - realistically, we would be back to frame rates in the low 20s, like during the early Xbox 360 and PS3 era, rather than having everything run at 4K/120 natively. This tech is here to stay, and it’s downright needed to get around the diminishing returns that have been plaguing the games industry for a while, where every small advance in visual fidelity has to be paid for with a high cost in processing power. I know, YOU don’t need fancy graphics, but as expensive and increasingly unsustainable as they are, they have been a main draw for the industry for almost as long as it has existed. Developers have always tried to make their games look as impressive as they possibly could with the hardware available - hell, many have even created hardware specifically for the games they wanted to make (that’s one way to sum up e.g. much of the history of arcade cabinets). Upscaling and frame generation are perhaps a stepping stone towards finally cracking, once and for all, that elusive photorealism barrier developers have been chasing for decades.

    The usual disclaimer before people accuse me of being a mindless corporate shill: I’m using AMD CPUs in most of my PCs and am currently planning two builds with AMD CPUs, and the Steam Deck shows just how great of an option even current AMD GPUs can be. I was on AMD GPUs for most of my gaming history until I made the switch to Nvidia when the PC version of GTA V came out, because back then it was Nvidia who were offering more VRAM at competitive prices - and I wanted to try out doing stuff with CUDA, which is how they have managed to hold me captive ever since. My current GPU is an RTX 2080 (which I got used for a pittance - they haven’t seen any money from me directly since I bought a new GTX 960 for GTA V), and they can hype up the 50 series as much as they want with more or less misleading performance graphs; the ol’ 2080 is still doing more than well enough at 1440p that I won’t be upgrading for many years to come.
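
    Since I brought up motion reprojection above, here’s a rough sketch of the general idea as I understand it: take the last fully rendered frame plus its depth buffer and warp it to the camera’s new pose, so another frame can be shown without rendering the scene again. This is my own toy interpretation in Python/NumPy with made-up buffers and a naive forward scatter - real implementations use motion vectors, hole filling and, per Nvidia, AI-generated screen edges:

```python
# Camera-based reprojection sketch: warp an already rendered frame to a new camera
# pose using its depth buffer. All matrices and buffers here are invented for
# illustration; this is not how any particular driver or engine implements it.
import numpy as np

W, H = 8, 8

def unproject(x, y, depth, inv_view_proj):
    """Screen pixel + depth -> world-space position (homogeneous math)."""
    ndc = np.array([2.0 * x / W - 1.0, 1.0 - 2.0 * y / H, depth, 1.0])
    world = inv_view_proj @ ndc
    return world[:3] / world[3]

def project(world, view_proj):
    """World-space position -> screen pixel under the *new* camera."""
    clip = view_proj @ np.append(world, 1.0)
    ndc = clip[:3] / clip[3]
    return int(round((ndc[0] + 1.0) * 0.5 * W)), int(round((1.0 - ndc[1]) * 0.5 * H))

def reproject(color, depth, inv_old_view_proj, new_view_proj):
    out = np.zeros_like(color)        # holes stay black; real implementations fill
    for y in range(H):                # them in (that's roughly where the AI-generated
        for x in range(W):            # screen edges come into play)
            world = unproject(x, y, depth[y, x], inv_old_view_proj)
            nx, ny = project(world, new_view_proj)
            if 0 <= nx < W and 0 <= ny < H:
                out[ny, nx] = color[y, x]
    return out

# Smoke test with identical cameras (a no-op warp), just to show the data flow.
color = np.random.rand(H, W)
depth = np.full((H, W), 0.5)
camera = np.eye(4)
assert np.allclose(reproject(color, depth, np.linalg.inv(camera), camera), color)
```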









  • I’ve only ever noticed slight shimmering on hair, but not movement artifacts. Maybe it’s less noticeable on high refresh rate monitors - or perhaps I’m blind to them, kind of like how, a few decades ago, I did not notice frame rates being in the single digits…

    This hair shimmering is an issue even at native resolution though, simply due to the subpixel detail that is common in AAA titles now. The developers of Dragon Age: The Veilguard solved the problem by using several rendering passes just for the hair (there’s a toy sketch of the idea after the source link below):

    This technique involves splitting the hair into two distinct passes, first opaque, and then transparent. To split the hair up, we added an alpha cutoff to the render pass that composites the hair with the world and first renders the hair that is above the cutoff (>=1, opaque), and subsequently the hair that is lower than the cutoff (transparent).

    Before these split passes are rendered, we render the depth of the transparent part of the hair. Mostly this is just the ends of the hair strands. This texture will be used as a spatial barrier between transparent pixels that are “under” and “on top” of the strand hair.

    Source:

    https://www.ea.com/technology/news/strand-hair-dragon-age-the-veilguard
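
    To make that a bit more concrete, here’s a heavily simplified CPU re-enactment of the split as I read it - toy Python/NumPy, my own interpretation, with the extra “transparent depth” pre-pass from the article left out for brevity (the real thing is a set of GPU render passes in Frostbite):

```python
# Split hair compositing sketch: pixels at or above the alpha cutoff are treated as
# opaque hair (depth-tested and written like regular geometry), the remaining
# low-alpha pixels (mostly strand tips) are alpha-blended afterwards. The arrays and
# the single blend step are invented for illustration only.
import numpy as np

H, W = 4, 4
ALPHA_CUTOFF = 1.0

def composite_hair(scene_color, scene_depth, hair_color, hair_depth, hair_alpha):
    color = scene_color.copy()
    depth = scene_depth.copy()

    # Pass 1: opaque hair (alpha >= cutoff). Regular depth test, writes depth.
    opaque = (hair_alpha >= ALPHA_CUTOFF) & (hair_depth < depth)
    color[opaque] = hair_color[opaque]
    depth[opaque] = hair_depth[opaque]

    # Pass 2: transparent hair. Depth-tested against the buffer that pass 1 just
    # updated, then blended on top (the article's transparent-depth pre-pass would
    # additionally sort what lies under vs. on top of the strands).
    transparent = (hair_alpha > 0.0) & (hair_alpha < ALPHA_CUTOFF) & (hair_depth < depth)
    a = hair_alpha[transparent]
    color[transparent] = a * hair_color[transparent] + (1.0 - a) * color[transparent]
    return color, depth

# Smoke test: grey scene, one fully opaque hair pixel, one half-transparent tip.
scene_color = np.full((H, W), 0.5)
scene_depth = np.full((H, W), 1.0)
hair_color = np.zeros((H, W)); hair_color[1, 1] = 1.0; hair_color[1, 2] = 1.0
hair_depth = np.full((H, W), 2.0); hair_depth[1, 1] = 0.4; hair_depth[1, 2] = 0.4
hair_alpha = np.zeros((H, W)); hair_alpha[1, 1] = 1.0; hair_alpha[1, 2] = 0.5
out_color, _ = composite_hair(scene_color, scene_depth, hair_color, hair_depth, hair_alpha)
print(out_color[1, 1], out_color[1, 2])   # -> 1.0 (opaque) and 0.75 (blended tip)
```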




  • DLSS without frame generation is at least equivalent (sometimes superior) to a native image though. If you’ve only ever seen FSR or PSSR with your own eyes, you might underestimate just how good DLSS looks in comparison. [XeSS is a close second in my opinion - a bit softer - and depending on the game’s art style, it can look rather pleasing, but the problem is that it’s relatively rarely implemented by game developers. It also comes with a small performance overhead on non-Intel cards.]

    Frame generation itself has issues though, namely latency, image stability and ghosting. At least the latter two are being addressed with DLSS 4, although it remains to be seen how well this will work in practice. They also claim, almost as a footnote, that while frame rates are up to eight times higher than before (half of that through upscaling, half through three generated frames per real frame, up from one inserted frame on the previous generation - which might indicate the raw processing power of the 5070 is half that of the 4090), latency is “halved”, so maybe they are incorporating user input during synthetic frames to some degree, which would be an interesting advance (napkin math at the end of this comment). I’m speculating though, based on the fluff from their press release:

    https://www.nvidia.com/en-eu/geforce/news/dlss4-multi-frame-generation-ai-innovations/

    Before anyone accuses me of being a shill for big N, I’m still on an old 2080 (which has DLSS upscaling and ray reconstruction, but not frame generation - you can combine it with AMD’s frame generation though, not that I’ve felt the need to do so) and will probably be using this card for a few more years, since it’s still performing very well at 1440p with the latest games. DLSS is one of the main reasons it’s holding up so well, more than six years after its introduction.
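
    For what it’s worth, the napkin math behind my reading of the “up to eight times higher” claim, with invented baseline numbers and a deliberately naive latency model (no Reflex, no frame pacing, no reprojection warp):

```python
# Rough reading of the marketing math: ~2x from rendering fewer native pixels
# (upscaling) times ~4x from presenting one rendered frame plus three generated
# ones. The 30 fps baseline is invented purely for illustration.
def effective_fps(native_fps, upscale_speedup, frames_per_rendered):
    rendered = native_fps * upscale_speedup
    return rendered, rendered * frames_per_rendered

baseline = 30.0
rendered, presented = effective_fps(baseline, upscale_speedup=2.0, frames_per_rendered=4)
print(f"rendered {rendered:.0f} fps, presented {presented:.0f} fps "
      f"({presented / baseline:.0f}x the baseline)")

# Naive latency view: interpolation-style frame generation still waits for the next
# real frame, so input-to-photon latency tracks the rendered rate, not the presented
# one - unless generated frames are warped with fresh input (reprojection).
print(f"~{1000.0 / rendered:.1f} ms per rendered frame vs "
      f"~{1000.0 / presented:.1f} ms per presented frame")
```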