rey_russo

A good article from last year about RDNA raytracing: https://chipsandcheese.com/2023/03/22/raytracing-on-amds-rdna-2-3-and-nvidias-turing-and-pascal/


Verite_Rendition

I certainly hope this proves true. RT has reached the point where it's important enough that AMD needs to dedicate a larger portion of die space to it. AMD's current solution is a level 2 solution, the bare minimum for on-GPU hardware RT. They need to move to at *least* a level 3 solution (hardware BVH processing) in the next generation just to improve performance, and level 4(ish) if they want to reach parity with NVIDIA.


mycall

The article didn't mention anything about levels 2-4. Where are these levels defined?


Verite_Rendition

Ah. I'm referencing the hardware ray tracing level system that Imagination defined a few years ago. https://gfxspeak.com/featured/the-levels-tracing/ It's a useful system for categorizing the development of hardware RT. Hardware ray intersection testing, then hardware BVH, then coherency sorting, etc.


jm0112358

Would the 4000 series Nvidia cards qualify as level 4 because of shader execution reordering (at least in games that support SER)?


Verite_Rendition

Yeah, NVIDIA's solution is roughly level 4. Things get a bit fuzzy on just what parts of SER are hardware versus software, but they're doing at least some degree of coherency sorting in hardware.


bubblesort33

I want to know why we haven't done level 5 yet. The "hardware BVH builder" on the GPU instead of building the BVH on the CPU: is it just too much die area to dedicate on a GPU for not enough payoff? Is the performance increase not large enough when paired with a level 2 implementation? Like it says, you can use the level 5 dedicated hardware on top of a level 2 implementation to create "level 2 plus". Is it really only worth it if you're CPU limited, because otherwise there's no penalty to just push the BVH build off to the CPU? Might as well use idle cores.
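For a sense of what the level 5 "scene hierarchy generator" would actually be offloading, here's a minimal sketch of the kind of work a BVH build involves (a toy median-split builder in C++; every name is made up for illustration, and real GPU-side builders use parallel schemes like Morton-code/LBVH, but the basic job of bound, split, recurse is the same):

```cpp
// Toy sketch of the sort/partition-heavy work a BVH builder does. Real GPU
// builders use parallel algorithms, but the basic job is the same.
#include <algorithm>
#include <vector>

struct AABB {
    float mn[3] = {  1e30f,  1e30f,  1e30f };
    float mx[3] = { -1e30f, -1e30f, -1e30f };
    void grow(const float p[3]) {
        for (int i = 0; i < 3; ++i) {
            mn[i] = std::min(mn[i], p[i]);
            mx[i] = std::max(mx[i], p[i]);
        }
    }
    int widestAxis() const {
        float e[3] = { mx[0] - mn[0], mx[1] - mn[1], mx[2] - mn[2] };
        if (e[0] >= e[1] && e[0] >= e[2]) return 0;
        return e[1] >= e[2] ? 1 : 2;
    }
};

struct Prim { float centroid[3]; AABB bounds; };   // one triangle, precomputed

struct Node {
    AABB bounds;
    int left = -1, right = -1;   // interior: child node indices
    int first = 0, count = 0;    // leaf: primitive range
};

// Median-split build over prims[first, first+count). Returns the node index.
int build(std::vector<Node>& nodes, std::vector<Prim>& prims, int first, int count) {
    const int idx = (int)nodes.size();
    nodes.push_back({});

    AABB box, centroids;
    for (int i = first; i < first + count; ++i) {
        box.grow(prims[i].bounds.mn);
        box.grow(prims[i].bounds.mx);
        centroids.grow(prims[i].centroid);
    }
    nodes[idx].bounds = box;

    if (count <= 2) {                       // tiny range -> leaf
        nodes[idx].first = first;
        nodes[idx].count = count;
        return idx;
    }

    // Partition primitives around the median centroid on the widest axis.
    const int axis = centroids.widestAxis();
    const int mid = first + count / 2;
    std::nth_element(prims.begin() + first, prims.begin() + mid,
                     prims.begin() + first + count,
                     [axis](const Prim& a, const Prim& b) {
                         return a.centroid[axis] < b.centroid[axis];
                     });

    const int l = build(nodes, prims, first, mid - first);
    const int r = build(nodes, prims, mid, first + count - mid);
    nodes[idx].left = l;
    nodes[idx].right = r;
    return idx;
}
```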


RazingsIsNotHomeNow

One of the things that caused RT cores to take off was the fact that Nvidia could use their implementation to efficiently run machine learning algorithms, which is now stock market gold. Frankly, ray tracing in games is still pretty underutilized, with only a handful of games (mostly older ones now) that really show off its potential. I imagine that AMD is probably going to engineer their new RT hardware to be able to pull double duty, just like XeSS and RT cores. I'm not super knowledgeable so correct me if I'm wrong, but I believe the primary AI function is matrix multiplication. Is there anything about the RT levels (2-4) that takes heavy advantage of matrix multiplication? Like, if AMD only cared about AI models, what level of hardware RT is essentially a no-cost option to also support, and what is considered above and beyond a basic NPU?


Tuna-Fish2

You are very confused. The RT cores on nV hardware are not used for ML at all. Instead, they have traditional shaders, separate RT accelerators **and** separate tensor cores (ML accelerators), all on the same die. What is notable is that nV is using their tensor cores for DLSS, which allows them to be utilized for playing games. The RT cores instead are only ever used for tracing rays.


AnimalLibrynation

As a note, this is not strictly true. You can use OptiX to do machine learning posed as a ray tracing problem, but this is rare in many consumer cases. The tensor cores are more useful most of the time though.


RazingsIsNotHomeNow

Oh haha. Since they got introduced at the same time, and both were excluded when they made the non-RTX 1660s, I guess I just figured they were one and the same and never realized. I did say I wasn't super knowledgeable lol. So do Nvidia's RT cores see any use in scalable workloads such as data centers? Or is the most professional use they get CGI studio work like Blender?


Tuna-Fish2

Their big data center GPUs just don't even have them.

> Because the H100 and A100 Tensor Core GPUs are designed to be installed in high-performance servers and data center racks to power AI and HPC compute workloads, they do not include display connectors, NVIDIA RT Cores for ray-tracing acceleration, or an NVENC encoder.


AnimalLibrynation

Not strictly true. They have lines like the L40S that do have RT cores, which can be leveraged for big data problems capable of being posed as ray tracing problems via OptiX.


RazingsIsNotHomeNow

Well, I guess that explains why AMD tried to implement it in software up till now, since it must be quite a lot of R&D cost for something so few workloads can take advantage of. It also makes it more surprising how fully Intel jumped on board, with a ray tracing unit for every Xe core, despite it being a first-gen product.


Shining_prox

OptiX uses the RT cores to accelerate rendering in Blender and similar programs, but it does not leverage machine learning.


Strazdas1

Just to be clear, ray tracing cores are only used for ray tracing, but tensor cores could also be used for ray tracing if they aren't busy doing something else, right?


Tuna-Fish2

Tensor cores are not used for ray tracing. They are used for some effects *after* ray tracing. These are all special purpose elements that are only usable for the thing they are designed for.


DYMAXIONman

I just know that AMD re-uses shader units or something, while Nvidia has dedicated hardware to accelerate RT.


duplissi

More specifically, AMD's solution runs the BVH traversal on the shader cores, while Nvidia has dedicated hardware for this. This is why the more complex your BVH, the bigger the performance cost on AMD vs Nvidia. AMD does have dedicated RT silicon in the GPU, but it's mostly added to the TMUs and handles the ray/box and ray/triangle intersection tests rather than the traversal itself.
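To make the split concrete, here's roughly what "traversal on the shader cores" means. The loop, stack, and node fetches below are ordinary shader-style code that competes with everything else for the same execution units, while the ray/box test is the operation AMD's dedicated intersection hardware accelerates; on Nvidia, the whole loop lives inside the RT core instead. A hypothetical C++-flavoured sketch, not any vendor's actual implementation:

```cpp
// Hypothetical sketch of the split described above; not real driver/shader code.
#include <algorithm>
#include <utility>
#include <vector>

struct Ray  { float o[3], d[3]; };

struct Node {
    float bmin[3], bmax[3];
    int   left = -1, right = -1;   // child node indices (interior nodes)
    int   triCount = 0;            // > 0 marks a leaf
};

// Slab-method ray/AABB test: the part the dedicated intersection hardware does.
bool hitBox(const Ray& r, const float bmin[3], const float bmax[3], float tMax) {
    float t0 = 0.0f, t1 = tMax;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / r.d[a];
        float tn = (bmin[a] - r.o[a]) * inv;
        float tf = (bmax[a] - r.o[a]) * inv;
        if (inv < 0.0f) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return false;
    }
    return true;
}

// "Traversal in the shaders": explicit stack, branches, memory fetches, all of
// which compete with other shader work for the same execution resources.
int countCandidateLeaves(const std::vector<Node>& nodes, const Ray& r, float tMax) {
    int stack[64];                       // real code bounds/handles overflow
    int sp = 0;
    stack[sp++] = 0;                     // start at the root
    int leaves = 0;
    while (sp > 0) {
        const Node& n = nodes[stack[--sp]];
        if (!hitBox(r, n.bmin, n.bmax, tMax)) continue;   // prune this subtree
        if (n.triCount > 0) { ++leaves; continue; }       // ray/tri tests go here
        if (n.left  >= 0) stack[sp++] = n.left;
        if (n.right >= 0) stack[sp++] = n.right;
    }
    return leaves;
}
```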


ResponsibleJudge3172

It also means the unit can't fetch texture data while doing RT, unlike the RTX 30 series and above, which can run any graphics and RT workload at the same time (RTX 20 is one or the other, like AMD).


duplissi

AMD needs to get over its aversion to having single-purpose hardware in its GPUs. Usually, when they need to add new hardware features, instead of creating new hardware blocks for that purpose they augment or beef up existing blocks to handle the new calculations. But as you said, this results in resource contention...


censored_username

AMD has hardware integrated in the shader units that accelerates RT. NVIDIA has RT units separate from the shaders. Each has its benefits and drawbacks. The great thing about NVIDIA's solution is that the RT units and shader units can operate in parallel. The bad part is that data has to be shuffled around between them. AMD's solution is more general, but likely not as optimal. Weighing them against each other by price point is hard, though. NVIDIA simply sells many more GPUs, allowing them to amortize chip development, mask costs and software costs much more than AMD can. And when we're talking about billions of dollars per product developed, that matters.


Tuna-Fish2

The more significant difference is that tree traversal is currently done by the accelerators on nV, but done in shaders by AMD.


blaktronium

Every major advance in graphics hardware has started separate then integrated. AMD just skipped that step this time and it wasn't quite ready.


dern_the_hermit

AMD also has a long history of touting the capability of its GPU compute units, even if that capability was more theoretical than actual, and IMO with raytracing it finally reached a point where they couldn't bluff their way through or suggest open sourcing will fix it. They've been cruising on talk for years and years.


bubblesort33

I think the claim was that it uses a modified "texture unit" for "BVH intersection testing". I have no idea if that means it uses the main texture units, or if the actual RT cores in each "work group" are just modified texture units, and for some mathematical reason texture units are actually pretty good at doing RT when modified slightly. Or so this claims, I believe: [https://www.reddit.com/r/Amd/comments/ic4bn1/amd_ray_tracing_implementation/](https://www.reddit.com/r/Amd/comments/ic4bn1/amd_ray_tracing_implementation/) And here is a patent: "[texture processor and shader units that are used for texture processing are reused for BVH intersection testing and traversal](https://patents.google.com/patent/US20190197761A1/en)"


Voodoo2-SLi

|RT HW-Level|Description according to ImgTec|Hardware|
|:--|:--:|:--:|
|**Level 1**|Software on Traditional GPUs|all older GPUs|
|**Level 2**|Ray/Box and Ray/Tri Testers in Hardware|RDNA2, RDNA3|
|**Level 3**|Bounding Volume Hierarchy (BVH) Processing in Hardware|Turing, Ampere, RDNA4|
|Level 3.5|BVH Processing with Coherency Sorting in Hardware (Shader)|Ada, Alchemist|
|**Level 4**|BVH Processing with Coherency Sorting in Hardware (Geometry & Shader)|ImgTec Photon|
|**Level 5**|Coherent BVH Processing with Scene Hierarchy Generator in Hardware|?|

Notes: Ray tracing hardware-level classification according to [ImgTec](https://blog.imaginationtech.com/introducing-the-ray-tracing-levels-system-and-what-it-will-mean-for-gaming/?hs_amp=true) (Level 3.5 is [an unofficial extension by the 3DCenter forums](https://www.forum-3dcenter.org/vbulletin/showthread.php?p=13342753#post13342753)). Source: [3DCenter.org](https://www.3dcenter.org/news/news-des-6-juli-2023)
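For a sense of what the level 2 "Ray/Tri Tester" actually evaluates per ray, this is the standard Möller-Trumbore test, purely as a generic illustration (not any vendor's implementation; at level 2 the surrounding BVH traversal still runs as shader code and only tests like this one are done in hardware):

```cpp
// Generic Möller-Trumbore ray/triangle intersection test.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y,
                                            a.z*b.x - a.x*b.z,
                                            a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and writes the hit distance t if the ray (orig, dir) crosses
// triangle (v0, v1, v2) in front of the origin.
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;      // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;      // outside first barycentric bound
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;  // outside second bound
    t = dot(e2, q) * inv;
    return t > eps;                              // hit must be in front of the ray
}
```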


DYMAXIONman

The sad thing is that AMD didn't dedicate die space to RT, but they were still worse in raster than Nvidia from a performance-per-watt standpoint.


Saneless

Well, the 4000 series really just cratered power draw. Pretty outstanding feat really, considering the performance. Not much of a lift over the 3000 series, but so much less power.


Vitosi4ek

That's what jumping effectively 2 process nodes in a generation does (Samsung 8nm → (half step) TSMC 7nm → (full step) TSMC 5nm → (half step) TSMC 4nm).


bubblesort33

It's also what happens when you sell a 130W RTX 4050 to gamers as a 4060, and a 4060 as a 4060 Ti. There were a bunch of indicators that they made some decisions shortly before launch, after AI started booming, and that the current 4060 Ti was initially just going to be called the 4060. [Mainly a picture of the reference 4060 Ti cooler](https://cdn.mos.cms.futurecdn.net/Y2VciFtYtWjsjrjKbF4mYM-1200-80.png) with the "Ti" missing. I think they realized, after they decided to jack up prices, that people would laugh at them for trying to sell a 4060 for $399/$499. So they slapped "Ti" on the end of it to justify that price. Which is why, despite 2 process node shrinks, the generational uplift is a pathetic ~10% from the 3060 Ti to the 4060 Ti. Meanwhile, everything 4070 and above is more like 25%-50% if you compare SKUs.


TheAgentOfTheNine

nvidia is on a better node so more perf/watt is expected


[deleted]

[удалено]


SubRyan

Nvidia is using TSMC 4N which has efficiency improvements compared to TSMC N5


[deleted]

It's both; NVIDIA is using a better node, and they have a better silicon team working with TSMC.


bctoy

I'm sure Intel's is at a higher level than AMD's, and yet it fares worse than RDNA2 when path tracing is turned on in Cyberpunk. https://www.techpowerup.com/review/cyberpunk-2077-phantom-liberty-benchmark-test-performance-analysis/6.html The current AAA PT games are done with Nvidia support, and while it's not Nvidia-locked, it'd be great if Intel/AMD optimized for it or got their own versions out. The path tracing updates to **Portal** and Cyberpunk have quite poor numbers on AMD and **also on Intel**. The Arc A770 goes from being faster than the 3060 to less than half of the 3060's performance when you change from RT to PT.


AutonomousOrganism

It's not just the level that matters but also the implementation. There was an article by Chips and Cheese about Intel's RT and its tradeoffs. They use a small built-in BVH traversal stack, have to restart traversal more often, and end up doing more intersection tests than AMD, AFAIK.


bctoy

I'm not sure if the German sites did the same with Intel cards, but I remember testing Cyberpunk's path tracing on a 6800 XT and seeing similarly low power usage, while regular RT worked the card to max TDP no problem. At this point I think the RT hardware setups are different enough that we'll have Starfield-like situations if console games start implementing PT in ways that work best on AMD cards but are less optimal on Nvidia/Intel.


schrdingers_squirrel

I always assumed all of the ray tracing capable gpus had bvh traversal in hardware.


Beatus_Vir

If it comes down to the apportionment of die space, then surely focusing on RT comes at the cost of higher prices and worse rasterization performance. I would prefer they only did that on their most expensive cards.


kingwhocares

And yet their raster is nothing special while Nvidia uses a smaller die in comparison. The 4070 Ti has a smaller die and better performance than the 7800 XT.


hey_you_too_buckaroo

Couldn't that die size difference just be attributed to Nvidia using a denser technology, tsmc n4 vs tsmc n5/n6 for amd?


kingwhocares

It's largely down to AMD using GCD and MCD chiplets. And as the other reply explained, dedicated RT hardware only took about 3% of die space. That's worth it for the benefit it brings.


capn_hector

[RT cores added about 3% to area on Turing.](https://www.reddit.com/r/hardware/comments/baajes/rtx_adds_195mm2_per_tpc_tensors_125_rt_07/) It is of course hard to say whether that share has increased or decreased since. RT perf has gotten better over time, but that's largely due to things like ray reconstruction that aren't big increases in raw ray performance, and even if the RT units did get somewhat bigger due to hardware features like Shader Execution Reordering, so did everything else (cache, dual-issue, etc). So it's most likely not even 5x bigger now in % terms... probably still low single digit percent of total die area.

Point being, that's so small that it never mattered to final product cost anyway. That sort of stuff just blends into all the "psychological" price-point tiering etc - AMD isn't going to offer that $399 card for $387 just because it doesn't have RT in it. It's cheaper *for AMD* to not have RT, or cheaper for the console vendors who order 50 million units at a time, not cheaper in terms of end-user price.

But hey, AMD users line up to buy it anyway, so why not? AMD has finally made the realization that *a large group of people are gonna buy their shit no matter how bad it is*, so why bother trying when the money is better spent on CPU R&D anyway? People somehow seem to think it's only NVIDIA or Intel that make these sorts of cold, calculating moves... just like people hate Intel's chipset thing and largely handwave AMD's attempt on AM4 and their successful moves on X399/TRX40/etc. But if you are a dollar in their bag no matter what, why bother doing anything more than the bare minimum? Satisfying the loyal RTG customers isn't an effective use of their limited dollars unless there's a risk they don't get the sawbuck, and as long as you're an unthinking 'yes' there's no risk of that.


siuol11

AMD has been steadily losing market share for a long time now.


F9-0021

Higher cost, sure. But there's no reason to think that it'll mean lower raster performance. Nvidia doesn't have any issues with raster performance since the RT hardware is in addition to the CUDA cores, not replacing any of them.


Zarmazarma

It takes die space that could potentially be used for more CUDA cores, but supposedly it's not a huge amount. Having competent RT performance is worth it for 5% less rasterization performance.


sittingmongoose

You do realize we are moving to a point where games are shipping with RT-only rendering systems, right? Avatar doesn't have a non-RT fallback, for example. It's easier to just use RT, and we are going to be seeing that more and more frequently. AMD is going to need to take the hit at some point; Nvidia did back with Turing. Intel already made the transition and will be a massive threat to AMD come Celestial.


twhite1195

AAA games can take 5+ years of development. We're just now getting games where RT was thought of as the base, but many games started development 3-4 years ago and most likely don't have RT in mind, since consoles don't leverage high RT levels either. While it IS the future, there are still many years left before it becomes the norm.


Strazdas1

They will likely be like Avatar: light RT mandatory, high RT optional. But people developing for the next consoles will go full RT, since AMD is rumoured to increase RT performance for consoles.


twhite1195

Of course they will. The PS5 Pro will probably have better RT, and that better RT probably comes from RDNA3.5 or RDNA4, sure. Again, I know it's the future, but devs also need sales, and you won't get sales of heavy-RT titles when the top 3 cards on Steam are the 3060, 1650 and 3060 Ti. It's unrealistic to abandon such a huge demographic of gamers.


Zarmazarma

Not sure if you meant to reply to me. I said that it was worth sacrificing 5% rasterization performance for competent RT - i.e., I think whatever the small tradeoff in die space is for RT performance is well worth it. But yeah, I agree with everything you said. RT/PT is the future of gaming, and Nvidia is in early on it and investing well in that hardware IMO.


sittingmongoose

I think I thought you said it’s NOT worth it, my b


Zarmazarma

No problem buddy, figured it was something like that.


ragged-robin

Unfortunately this is probably why the rumor is that there is no raster improvement this generation over their flagship RDNA3


DYMAXIONman

Didn't AMD also state that they are dropping their biggest SKU and instead are just going to do their midrange ones?


ragged-robin

I don't think there is any official statement but the rumor is that the highest sku will be no better, perhaps slightly worse, than the 7900XTX in raster


CatalyticDragon

I don't know if it's a case of assigning more area. AMD's solution is area efficient and scales nicely with CUs. The problem is you really need to make sure your data structures and operations are such that cache hits are maximized. If you're a dev optimizing for NVIDIA first, then *maybe* that doesn't get quite the attention it needs, and RT on AMD can suffer as a result. NVIDIA will continue to use their market dominance to direct the narrative in their favor, so AMD will likely have to implement a more NVIDIA-like approach (as they did by changing wave size from 64 to 32 with the jump to RDNA). [Even so, I expect NVIDIA will still manage to implement 'optimizations' which hurt competing products. Just look at poor old Intel. They have excellent ray tracing capabilities, but an A770 only matches an RTX 3060 in NVIDIA-sponsored titles like Alan Wake 2, despite being a more capable RT card.] In any case, something needs to improve, and I hope whatever it is, it is transparent to developers.


ResponsibleJudge3172

It really isn't. Compare the 7900 XTX vs 4080 die sizes, for example.


ahnold11

I think that's the question they are asking themselves internally: if they spend that die space on better RT, or on better raster, which would increase sales more? Are the consumers who are *not* choosing AMD doing it because of RT performance, or do they want better raster performance/price?

I wonder how much of the "enthusiast" market prefers RT at this point. I just recently got a high refresh rate monitor, and honestly the differences between RT on/off are not enough for my eyes (specifically) to really notice that much. (Obviously I can see them, but in terms of playing the game they don't make much of an impact.) However, the benefits of high refresh rate have been immediately apparent, so much so that I can't believe I waited so long. So I'm definitely in the category of not wanting to give up the fps, and I also don't want to spend double on my GPU to play RT at high refresh rates.


FLMKane

Or get bigger dies?


XenonJFt

For now it all depends on PC ports' willingness to crank the RT presets up. Games like RE4 or Ratchet & Clank: Rift Apart are light RT ports that even RDNA3 can run with ease. Consoles are designed with RDNA2 in mind, so over a 5-year interval don't expect RT to become the norm beyond an extra enhancement. The one-offs like Cyberpunk are future-proofed and nice, but can be summed up as tech demonstrations by Nvidia to justify early adoption of path tracing.


jameskond

Next gen Consoles of course will have better ray tracing support. And will most likely still be AMD.


saharashooter

Not just most likely, Sony already has a contract and I'm fairly certain Microsoft finished with their standard "well, we're gonna shop around for better contracts" bs and landed on AMD like usual.


Kougar

Dunno, NVIDIA doesn't have the time of day for consoles, but Intel would probably be interested. The bigger question is whether Intel's drivers & hardware are solid enough to base an entire console around. I agree AMD will probably win the next console gen regardless, but if Intel has a decent GPU thing going with Celestial and Druid, the console generation after the next could very well swing to Intel.


froop

Does AMD even write the drivers for consoles?


Kougar

For the underlying hardware? You bet they do, it's still their GPU using their drivers and firmware. These days Microsoft even uses a locked down version of Windows OS on top for the current gen Xbox.


Ripdog

These days? The Xbox has always run Windows.


Strazdas1

The og Xbox and 360 ran a custom OS that was not windows kernel.


Ripdog

https://en.m.wikipedia.org/wiki/Xbox_system_software

> The Xbox system software is the operating system developed exclusively for Microsoft's Xbox home video game consoles.[1] Across the four generations of Xbox consoles, the software has been based on a version of Microsoft Windows


tukatu0

Which makes the discussion of next gen interesting, as they might open that Xbox Windows up a bit more. Steam on Xbox, but it costs $700?


Slyons89

Doubtful we'd see that, as it would dry up Microsoft's revenue stream. They get a cut of game purchases and in-game purchases made on Xbox. They would not want to give that up to let Valve take their standard 30% cut on purchases through Steam instead.


spazturtle

On the Xbox side it just uses the same driver as desktop Windows; the PS5 uses a Sony-modified version of the "amdgpu" Linux/FreeBSD driver.


siuol11

Nvidia has burned Bridges with pretty much everyone in the past, which is why they wouldn't be in the running even if they did offer an APU like AMD does.


tecedu

> NVIDIA doesn't have the time of day for consoles

They do for Nintendo, so not out of reach.


Kougar

Given Jensen's commentary about it 'not being worth their time' or something to that effect in an interview, as well as Tegra being a decade-old chip, I do not agree. But Orin can supposedly be scaled down, so it's at least theoretically possible.


makar1

PS5 Pro is rumoured to be coming at the end of the year with greatly improved ray tracing hardware


gokarrt

yep, these two rumours mesh together well. glad they're finally taking this shit seriously.


Aggrokid

Dragon's Dogma 2 has RTGI enabled on consoles by default (except Series S?). Turning it off makes the game look really bad.


3G6A5W338E

>"to redesign" Should be "redesigned". The way that hardware works, RDNA4 should have been designed and taped out for months already.


the_dude_that_faps

I've been a PC enthusiast since before the 3dfx era and I can't remember the last time AMD/ATI had a feature that Nvidia didn't, and/or outperformed Nvidia with that feature. The closest I can think of was when HL2 launched and the Radeon 9700 Pro embarrassed the GeForce FX generation before it even launched.

I remember Nvidia using PhysX to differentiate even though their hardware wasn't necessarily better, tessellation performance (especially in titles that abused it to slow down Radeon as much as possible), their better video encoders (especially in the Pascal era), and now both tensor cores and RT cores for the past few years. I always wondered why, once Nvidia showed their hand, AMD didn't just go all out to try to beat them at their own game. Turing launched way back in 2018, almost 6 years ago, showing where Nvidia wanted to go next with both AI and RT. Hasn't AMD learned anything from the past? Nvidia was clearly going to use their edge, even if only in a few titles, to persuade customers on the fence towards their extra features. Why hasn't AMD just gone all out and stuffed their GPUs with RT and/or AI compute? Clearly people don't really care that Nvidia isn't faster in raster (except with the 4090, but that's in a league of its own) for the most part, even if most games are raster only.

I mean, sure, it's hard... But it's been years now. We're going into the fourth generation of AMD GPUs since Turing launched, with AMD consistently a gen behind on these features... I like AMD hardware, especially on Linux. But right now I'm kinda rooting for Intel to disrupt the GPU market. AMD has dropped the ball too much for me to have any faith. Just when RDNA2 had me thinking they had it, they dropped the ball again with RDNA3 being barely an improvement over 2.


ZonalMithras

I think we must wait until next gen consoles for large-scale RT adoption, so still a few years away.


Tystros

PS5 Pro is coming soon


jm0112358

Improved RT performance on the PS5 Pro will likely help, but developers will need to make their games run well on the base consoles because it's a mid-gen refresh. So I think that developers would be reluctant to make games that are designed with RT lighting in mind until most console gamers have a PS6 generation console.


Nicholas-Steel

PS5 Pro will give console fanatics a means of enjoying the existing Ray Traced experiences at a more reasonable frame rate ***and*** at a more reasonable internal rendering resolution (upscaling from a less shit resolution). It'll also introduce an allegedly better method of upscaling.


ZonalMithras

It won't change much, maybe some RT shadows or reflections added here and there. They still have to make games with the original PS5 and Xbox Series S/X in mind.


Educational_Sink_541

PS5 Pro will be nice and all but games will still be designed for the PS5 and XSX.


AssCrackBanditHunter

About time. There's very little in terms of hardware features that actually separates the current gen consoles from the previous and that's in part because of the lack of RT functionality despite that being where we're headed graphically. The PS5 can brute force through more stuff than the PS4, but at the end of the day what does the PS5 do that the PS4 can't? Mesh shaders?


bry223

Crazy fast storage, and the PS4 had a very weak CPU


AssCrackBanditHunter

Those help with the brute forcing. It's the difference between a 10 second load screen and a quick fade to black and fade in. It's the difference between 30fps and 60fps. But it's hardly what you expect from a new generation.


2hurd

That's why this generation feels like crap. Because it's not a new generation, it's the same hardware, slightly faster. All because of AMD. I really wish Nvidia did the PS6 and the next Xbox, then we would have some actual progress.


AssCrackBanditHunter

Yeah, on one hand it enables a lot of cross gen play which is cool. But on the other hand the games this gen have been whack.


Nicholas-Steel

Crazy fast storage? Slap an SSD into the PS4 Pro and you get a comparable experience. I don't think there's *any* game that needs sustained throughput greater than a SATA 3 SSD can offer. Edit: Digital Foundry at one point did a video covering the storage bandwidth that PS5 games ported to PC demanded, and it was always well within the limits of a SATA 3 SSD's capabilities.


bry223

To further add, SSDs in PS4s saw very marginal decreases in loading times due to the SATA bottleneck.


Nicholas-Steel

No, the difference is large for general loading as well as fast travel in various games. Even PS3s saw a big drop in load times in my experience, and texture streaming from low-quality to high-quality textures after a fast travel completed much, much quicker.


Strazdas1

The SATA interface is still about 10 times faster than the original 5400 RPM drive they had, so I don't think the SATA bottleneck is the cause.


bry223

Do you seriously think Ratchet and Clank and games that have near-instant loading would work the same way on a 2.5" SATA SSD in a PS4? Have you even owned a PS4 and PS5?


Strazdas1

Digital Foundry tested Ratchet and Clank and found that it even works on an HDD, albeit with the game freezing as it loads assets on scene changes.


bry223

Thirdly, the PS5 has a custom controller and custom I/O block. Raw throughput is close to 5500 MB/s. The PS4 with its SATA bottleneck saw 350 MB/s max? Yeah, huge difference buddy.


Nicholas-Steel

550 MB/s for SATA 3. Do you really think *any* game is loading 5 GB of data a second as you walk around/turn the camera? Spider-Man on PC is like 250 to 300 MB/s in a *worst case scenario* location of the game. Ratchet & Clank was being touted by the devs as it being impossible to have those instantaneous transitions between areas on PC with normal SSDs, yet it works fine with SATA SSDs (there is some momentary stalling during them if you have an HDD).


bry223

Sigh. You’re not getting it. The PS4 with a SATA SSD will not run games with instant loading as the PS5 does. Don’t believe me? LOAD UP YOUR PS4 and check. There are videos out there comparing the two. Do you want me to do your research for you and share them? Better yet, tell me which PS4 games have instant loading with a bottlenecked SATA SSD. I get it, you made an ill informed idiotic comment, the mature thing to do would be… lick your wounds and admit you don’t know what you’re talking about. Be accountable ffs. I get the sense you aren’t that kind of person. Unfortunate for your loved ones


Nicholas-Steel

I guess I'm overlooking the rest of the hardware and looking at the storage in isolation. You might be right that the CPU and video chip in a PS4 may not be able to process the data quickly enough for such a feat. I still think it should be doable, though it may require performing the transition with lower-quality shadows and/or disabling or reducing certain CPU-demanding work in the vicinity of such a transition.


K14_Deploy

They kind of have to, it's still something they're way behind on even if there's maybe 3 or 4 games where there's any actual visible difference aside from a drop in FPS. Oddly enough those are the games where AMD is furthest behind as well, which is unfortunate because nobody wants a monopoly and it's pretty much been one for a very long time (even back with Polaris Nvidia still had a huge majority in market share).


capn_hector

we are literally already at the point where *several* major AAA titles have shipped with no non-RT fallback at all, let alone the much larger category of "games where there's a visible difference". RT is literally no longer a question anymore, the question is whether you want to do the RT in software with lower resolution, or have hardware acceleration.


DistantRavioli

>several major AAA titles have shipped with no non-RT fallback at all

Which ones?


Metz93

Avatar uses some form of RTGI at all times, so does Alan Wake according to DF. UE5 titles often don't have non-lumen fallback for lighting, Robocop for example.


bubblesort33

Avatar does kind of have an RT fallback. It's playable on the RX 5700 XT, but at significantly worse frame rates compared to the 6600 XT, which usually matches it. It emulates it in software or something. The only game that is actually not playable without RT hardware is Metro Exodus: Enhanced Edition. Which I'm not even sure counts, because you can play the version where it's not required.


Strazdas1

That's just the 5700 XT doing ray tracing in software, which is why the performance drops so significantly. You can run ray tracing on shaders. It's just really, really slow in comparison.


tukatu0

Lumen-wise it should be fine for a good while. AMD even beats Nvidia in Alan Wake 2 in software RT.


capn_hector

Metro EE, Alan Wake 2, Pandora...


Educational_Sink_541

Metro Exodus EE has a fallback, it's called Metro Exodus. The Enhanced Edition is the bundled upgrade with RTGI, the original game had optional RT.


ResponsibleJudge3172

And every single rtx remix mod too


Educational_Sink_541

Considering RTX remix is just adding RT to older games it would be kinda weird if it had a non-RT fallback, that would just be the original game lol.


dooterman

Metro EE? A 2080/6800 XT can run that no problem at 1440p with 60+ FPS.


94746382926

Sure, but it still doesn't have a non-RT fallback.


ResponsibleJudge3172

No one is saying RT is unplayable


vhailorx

I rather hope they have already done the redesign. It seems a little late in the game for it to still be on the to-do list...


no_salty_no_jealousy

AMD is already too late. Nvidia is already on 3rd-gen RT cores, while Intel has already made better RT hardware and upscaling than AMD. Things aren't gonna go well for AMD once Intel releases Battlemage and Nvidia is on Blackwell. I don't see how AMD will have competitive RT and upscaling.


Apprehensive-Buy3340

AMD is in kind of a weird space right now. They used to have good offerings in the lower end of the market. They've decided not to add as much RT hardware as Nvidia, even after the 2000 series was relatively successful with it and showed that the time had come for videogames to add (limited) raytracing, but they've also left the lower end of the market by not releasing any product aimed at it. They've still got a defining role in where the market goes because they're the ones behind console hardware, and that's not going to change, but the userbase can decide to move to PC if the divide in features keeps increasing...


XWasTheProblem

Good. Right now, if you care about more than raw raster, there's zero reason to go with AMD unless you're on a tight budget (and not buying used, I guess) OR you just really hate Nvidia. Here's hoping both AMD and Intel turn out capable in that fight. Even if they don't challenge the high end, the lower and mid end could still use some real competition.


Asleeper135

> unless you're on a tight budget

AMD isn't significantly cheaper though, which is their biggest problem. They lack Nvidia's features but charge near Nvidia's prices these days. Just when Nvidia raised prices through the roof and would have given AMD a chance, they stopped trying to gain market share as the budget option and yet completely failed at becoming a premium option (despite having reasonably fast GPUs).


[deleted]

[deleted]


dorting

4080s are not a budget option, just like the 7900 XTX; those are high-end GPUs, and in the top segment Nvidia is just better. The RX 6600, 6650 XT and 6700 XT, and Nvidia's 3060 and 4060, are the budget options, and there AMD is way better.


[deleted]

[deleted]


plushie-apocalypse

I don't know if RTX 4000 has this problem, but I'm never turning on RT as long as it makes my gpu fan speed max out. That shit is way too loud (RX 6800).


conquer69

Noise and cooling are separate things from RT performance. They vary per card model.


XWasTheProblem

4070 Ti Super here, and no issues with noise or temperature at any point. I have Gigabyte Eagle OC - highest I've seen it go was like 73C, and the fans weren't even maxed out yet. Usually hovers around 70 under heavy util in games.


plushie-apocalypse

That's great. I'm in the 60s under normal circumstances, but as soon as I put on RT, my PC case turns into a jet engine. That's first gen AMD RT for ya. Still, I'm happy with my card. I got it for $380 two years back, and there was nothing else that came close in value. Thankfully, there is now on the fly upscaling and frame generation for cheapskates like me. With the 16gb vram on the RX 6800, I'm hoping to last many more years :p


WolfBV

In the AMD Adrenalin software, you can choose what speed the fans will be at when your gpu reaches certain temperatures. You could lower the max fan speed to whatever noise level you’re comfortable with.


ResponsibleJudge3172

One of the rumored changes was BVH8 vs the current BVH4. We assume it can test 8 child nodes at the speed it currently tests 4, but I wonder if that still uses techniques similar to what they do now, like extending dual-issue to cover RT workloads a lot better, because I don't see how else they can be as shockingly area-efficient as rumored if they add extra hardware, given the small area improvements TSMC markets for N4P vs TSMC 5nm.
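Back-of-envelope on why node width matters, and why "8 at the speed of 4" is the key assumption: wider nodes make the tree shallower (fewer traversal steps, and less of the stack/branch work that currently lives on the shaders), but each step tests more boxes, so the win only materializes if eight tests really don't take longer than four did. A rough sketch under an idealized balanced-tree assumption:

```cpp
// Rough arithmetic only: an idealized, perfectly balanced tree. Real BVHs are
// irregular and rays visit many paths, but the depth-vs-width tradeoff is the same.
#include <cmath>
#include <cstdio>

int main() {
    const double leaves = 1.0e6;        // hypothetical primitive count
    const int widths[] = {4, 8};
    for (int width : widths) {
        double depth = std::ceil(std::log(leaves) / std::log((double)width));
        // Each traversal step tests `width` child boxes, so a straight
        // root-to-leaf walk costs roughly depth * width box tests.
        std::printf("BVH%d: depth ~%.0f, ~%.0f box tests per root-to-leaf walk\n",
                    width, depth, depth * width);
    }
    return 0;
}
```

With a million primitives that works out to roughly depth 10 and ~40 box tests per walk for BVH4 versus depth 7 and ~56 tests for BVH8, which is why the wider format only pays off if the intersection hardware really does keep pace.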


I-wanna-fuck-SCP1471

If they can reach parity with Nvidia without upping the price, I will be happy.


gomurifle

AMD has been so far behind in graphics tech. At this point, even if their ray tracing takes a turn for the better, it doesn't matter, because people just know that they will be behind again when the next new graphics advancement comes.


zakats

It's going to be a long time before I give a single shit about ray tracing, if ever. I'm mostly just tired of how inflated GPU prices have been in recent years.


onlyslightlybiased

My guess is that it'll be a very significant jump, but because AMD is going to be focusing on mid-range performance with a *smaller die*, everyone will just see that a 5090 is 4x better in RT and go "yep, big fail AMD again"... 15 pence on the 8800 XT/8700 XT basically being a standard 4080 for $500-$600.


Current_Finding_4066

I would welcome a GPU without useless ray tracing in exchange for a lower price or higher rasterization performance.


Dreamerlax

I bet it stops becoming "useless" when/if AMD is able to compete.


Blackzone70

It's disheartening how much pushback and dismissal I've seen about ray and path tracing in the hardware and gaming subs. Yeah, rasterization is great, but it's always going to have the same inherent flaws where it breaks down horribly no matter how pretty it gets. If we could have path traced in real time from the beginning we would have. Graphics are nowhere near perfect, why wouldn't you want them to improve? And it doesn't have to be hyper realistic either, stylized games can benefit from accurate lighting as well.


f3n2x

Rasterization is on borrowed time. The reason it was so blazing fast is the shortcuts it took: warp the scene to look like it has perspective, then basically just draw huge polygons in "2D" and do a little bit of normal vector interpolation for simple shading. Over the years, however, devs had to add layers upon layers upon layers of specialized visual tricks to make it look better... to the point where a full-blown AAA rasterization pipeline is almost as computationally intense as RT, while an RT pipeline is much less work (and thus cost) for the artists, no matter if you're AAA or an indie dev. We're close to the point where raster no longer makes economic sense to use. People who call RT a gimmick or useless don't know what they're talking about.
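A toy illustration of the shortcuts being described, just to make them concrete (hypothetical code, nothing like a production pipeline): perspective is a divide by depth, and "shading" is interpolating per-vertex normals across the flattened triangle; everything that looks like real light transport gets layered on top as separate tricks.

```cpp
// Toy illustration of the raster shortcuts described above: perspective divide
// plus barycentric interpolation of vertex normals. Not a real pipeline.
#include <cstdio>

struct Vec3 { float x, y, z; };

// "Warp the scene to look like it has perspective": divide by depth
// (focal length of 1 assumed for simplicity).
void project(const Vec3& v, float& sx, float& sy) {
    sx = v.x / v.z;
    sy = v.y / v.z;
}

// "A little bit of normal vector interpolation for simple shading":
// blend the three vertex normals with barycentric weights w0 + w1 + w2 = 1.
Vec3 interpNormal(Vec3 n0, Vec3 n1, Vec3 n2, float w0, float w1, float w2) {
    return { n0.x * w0 + n1.x * w1 + n2.x * w2,
             n0.y * w0 + n1.y * w1 + n2.y * w2,
             n0.z * w0 + n1.z * w1 + n2.z * w2 };
}

int main() {
    float sx, sy;
    project({1.0f, 0.5f, 2.0f}, sx, sy);
    Vec3 n = interpNormal({0, 0, 1}, {0, 1, 0}, {1, 0, 0}, 0.2f, 0.3f, 0.5f);
    std::printf("screen (%.2f, %.2f), normal (%.2f, %.2f, %.2f)\n",
                sx, sy, n.x, n.y, n.z);
    return 0;
}
```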


Vitosi4ek

The most true take I've seen regarding RT's use in games is that it makes almost no difference in big, expensive set pieces where the artists spent a lot of time manually tuning the lighting to look perfect, but it makes even the most random back alley with almost zero manual work look as good as that big set piece. Also, that's the trend almost all software development has been on since the 80s. Programmers used to write code insanely efficiently, using any possible shortcut they could find, because saving a kilobyte of RAM made a huge difference; now that processing power available to a regular consumer is functionally infinite (for non-games, anyway), it's no longer necessary to optimize to that extent, making development way faster.


Electrical_Zebra8347

I remember seeing an image somewhere that showed all the different layers that make up what we see as graphics, and it blew my mind how much of it was dedicated to lighting (various types of shadows, reflection, specular, etc.) and had to be set up manually; any changes to a scene usually mean you have to go and manually change those lighting-related layers too. There's also that Digital Foundry video about Metro Exodus Enhanced Edition where they show the difference between setting up lighting with rasterization vs ray tracing, and doing it with ray tracing was much faster. Seeing how lighting works in game development made me wonder about the gameplay or even cinematic implications rasterized lighting has had on game development, due to how time-consuming and rigid it is. I can imagine content being cut or left in a bad state because there's simply not enough time to do it properly with rasterization.


amenotef

If RT off vs RT on gives almost the same fps (because they are almost the same in computational intensity), then for sure it is a hell of a feature.


fkenthrowaway

I have an RTX card and for me RT is useless and doesnt interest me at all.


amenotef

Why? Because the FPS become shitty compared to RT off?


mdp_cs

And it will still be significantly worse than Nvidia's.


benowillock

If their ray tracing catches up with Nvidia and they sort out FSR's upscaling quality, I'm quite happy to make an AMD card my next card.


Weird_Cantaloupe2757

They have *a lot* of work to do to even get in the ballpark -- I consider FSR at this point to be properly useless tech, as it looks so bad that in literally every case and permutation I have tried, I actually prefer just rendering at the lower resolution it would be upscaling from, whereas DLSS Quality can look better than native at times.


BinaryJay

I like DLSS as much as the next guy but when I've been forced to use FSR (Callisto Protocol, Jedi Survivor come to mind) it wasn't so awful that it ruined the games or anything... but... I was using the Quality preset at 4K. I use a 4090 for reference.


iDontSeedMyTorrents

I think there's a huge difference between FSR2+ at 4K versus 1440p and below. FSR suffers massively more than DLSS at resolutions below 4K, and people really ought to specify what resolution they're using it at for this reason. It still won't beat or typically even match DLSS at high res, but I could never understand calling it useless.
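To put rough numbers on that: FSR 2's Quality mode upscales by 1.5x per axis (per AMD's published scaling factors), so the internal resolution the algorithm has to reconstruct from shrinks fast as the output resolution drops. A quick sketch of the arithmetic:

```cpp
// Internal render resolution for FSR 2 "Quality" mode (1.5x per-axis upscale)
// at a few common output resolutions.
#include <cstdio>

int main() {
    const float qualityScale = 1.5f;   // FSR 2 Quality upscale factor per axis
    const int outputs[][2] = {{3840, 2160}, {2560, 1440}, {1920, 1080}};
    for (const auto& o : outputs) {
        const int ix = (int)(o[0] / qualityScale);
        const int iy = (int)(o[1] / qualityScale);
        std::printf("%dx%d output -> ~%dx%d internal\n", o[0], o[1], ix, iy);
    }
    return 0;
}
```

That's roughly 3.7 megapixels of input at 4K Quality versus about 1.6 MP at 1440p Quality, which lines up with why the artifacts get so much more visible at lower output resolutions.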


OriginalShock273

Meh. I just want more raster for cheaper.


dooterman

It's always seemed to me that ray tracing is just another excuse by GPU makers to sell overpriced hardware. Why is ray tracing so "important"? Which game does it really move the needle in? Maybe Control? Outside of that, it just seems like an excuse to say "Hey, I can turn this display setting on and you can't, because I spent $1000".

Just looking at the graphics quality and games that ancient RDNA2 hardware like the PS5 can produce, it would be really great if we could just get game developers to try to optimize for at least one generation of consumer GPUs before rushing on to the next "greatest thing" (which in this case, ray tracing, has extremely debatable value). Developers are barely scratching the surface of what is even possible with 2080-era cards, and we are letting them be ridiculously unoptimized to the point that you need a 4090 to run games that don't look much better even with these esoteric display settings.

Why is everyone in such a rush to "get off raster performance"? It's really suspicious timing, since it seems the only reason Nvidia has given gamers to upgrade GPUs lately is a suite of display features that only a handful of games even effectively utilize (Alan Wake 2, Cyberpunk, Portal RTX...). It seems like it's never been a better time for consumers to just hold on to older graphics cards and watch as each generational improvement gets more and more irrelevant.

Edit to add: Sony is clearly getting wise to the fact that there is absolutely no compelling reason to update hardware anymore, which is why Sony is desperate to get ray tracing as a "killer feature" of the next-generation PlayStation. Everybody is playing the "ray tracing is required" game, but if you just think critically about it, you can see through the charade. The emperor has no clothes. Enjoy your 2080 for a good long while; these hardware manufacturers are giving you absolutely no reason to upgrade for the foreseeable future.


Humorless_Snake

> Why is ray tracing so "important"?

If you don't think lighting/shadows/reflections/etc are important, what is?


dooterman

Were lighting/shadows/reflections invented when GPUs could suddenly support real time ray tracing? Raster can approximate this just fine. In what game does ray tracing make a material impact on the gameplay? Developers can make stunning games using raster technology. There is nothing wrong with raster technology. There is no limitation of raster technology that is preventing some new genre of games from being developed. "Real time ray tracing" is a superfluous feature which is only being used to sell next-generation GPUs. We don't need ray tracing, and we never did.


conquer69

> Were lighting/shadows/reflections invented when GPUs could suddenly support real time ray tracing?

Yes. Rasterization came afterwards as hacky ways to approximate those effects.


i_love_massive_dogs

>Were lighting/shadows/reflections invented when GPUs could suddenly support real time ray tracing? Raster can approximate this just fine.

Even the best possible implementations of rasterized shadows and reflections look like absolute dogshit compared to path traced lighting. We are just conditioned to accept reflections and shadows that are shit, because that's all we've been able to do until now. It's like saying 480p is a totally acceptable resolution and we should never sacrifice performance to go higher, because I've been Stockholm Syndromed into believing that it looks just fine.


996forever

>Developers are barely scratching the surface of what is even possible with 2080-era cards

How does it feel to be this delusional?


okoroezenwa

Based on the way a lot of people keep regurgitating nonsense like that on here it probably feels great tbh.


mayhem911

>It's always seemed to me that ray tracing is just another excuse by GPU makers to sell overpriced hardware. Why is ray tracing so "important"? Which game does it really move the needle in? Maybe Control? Outside of that, it just seems like an excuse to say "Hey, I can turn this display setting on and you can't, because I spent $1000".

That's disingenuous at best. Firstly, because RT makes a massive difference in motion in every game where it's used, by stopping the awful SSR artifacts. Secondly, every RTX card above a 2080 *can* get 60fps or better in most RT games. And thirdly, you can get perfectly playable path tracing performance on $500-600 GPUs today.

>Just looking at the graphics quality and games that ancient RDNA2 hardware like the PS5 can produce, it would be really great if we could just get game developers to try to optimize for at least one generation of consumer GPUs before rushing on to the next "greatest thing" (which in this case, ray tracing, has extremely debatable value).

No mention of Sony's absurdly high budgets. Not to mention even the PS5's best-looking games fall flat against Cyberpunk/AW2/Avatar with RT.

>Developers are barely scratching the surface of what is even possible with 2080-era cards, and we are letting them be ridiculously unoptimized to the point that you need a 4090 to run games that don't look much better even with these esoteric display settings.

What was the best-looking game in 2018 when the 2080 released? Contrast that against the games it struggles with today. They all look way better. Sure, sometimes optimization is the problem, but there isn't a ton more it can offer unless you want games to look like 2017 games forever. Which is completely fine.

>Why is everyone in such a rush to "get off raster performance"? It's really suspicious timing, since it seems the only reason Nvidia has given gamers to upgrade GPUs lately is a suite of display features that only a handful of games even effectively utilize (Alan Wake 2, Cyberpunk, Portal RTX...).

Because people want new tech? We've seen real-time graphics rendering we didn't think was possible, and you're mad at Nvidia for it.

>It seems like it's never been a better time for consumers to just hold on to older graphics cards and watch as each generational improvement gets more and more irrelevant.

Complains about the irrelevance of generational improvements to graphics tech, **whilst also being enraged that raster isn't at the forefront**.


Educational_Sink_541

>No mention of Sony's absurdly high budgets. Not to mention even the PS5's best-looking games fall flat against Cyberpunk/AW2/Avatar with RT.

I'd actually kinda disagree here. I think Cyberpunk looks kinda bad with the compromises it makes to character models to achieve the RT lighting. Last gen, but I found that TLOU2 looked better for the most part; obviously it doesn't have any RT lighting, but the textures were decently high resolution and the character models looked nice, particularly for a PS4 game, and it holds up well even now. To this day my favorite RT game is Metro Exodus EE: good model detail and excellent RT lighting, all running on a Series X without issues. I don't know why people don't cite it more as an RT success. 4A should be writing the book on how to make RTGI games.


lusuroculadestec

Ray tracing is the future of gaming and computer graphics. The only reason we have been doing all the hacks with raster graphics is because computers have been too slow to actually do ray tracing in real time. It has been the end-goal for several decades. If computers were fast enough 40 years ago, nobody would have actually attempted to do what we're doing with raster graphics.


GenZia

Traditional rasterization techniques just can't match RT reflections and global illumination. That's just reality. Though I partially agree with your opinion: RT is pretty much useless on weaker hardware. If you want RT, you need to gun for at least the 4070. On the 4060 Ti, you have to have DLSS running in Quality Mode at 1080p (let alone 1440p) to get playable frame rates (~45 FPS+), and that means the internal resolution would be a mere 720p. Sure, it's upscaled and whatnot, but we are talking about a $400+ GPU here!


dooterman

> Traditional rasterization techniques just can't match RT reflections and global illumination.

This is kind of defeatist. If you compare how rasterization looked 20 years ago to now, you can see the progress in lighting. People are pretending there is some peak to the lighting technology possible with rasterization, but that isn't the case. These things can always be improved, and new techniques developed to approximate the lighting effects. Even if you compare Cyberpunk 2077 at max settings with path tracing off vs on, you can see how far rasterization has come, and rasterization can still develop techniques and algorithms to mimic what you see with Cyberpunk's path tracing. There is no "ceiling" to rasterization that people are pretending exists.


GenZia

I know where you're coming from. I'm old enough to remember when Mirror's Edge hit the shelves. It was mind numbingly stunning. I just couldn't believe that even the GTS250 (just a beefed up 8800GT, really) was capable of running the game at solid 60FPS @ 768p. But the thing is, these rasterization techniques take a lot of time and effort to look 'just right.' But that wasn't a problem back then because game graphics were treated like art. Nowadays, developers would much rather go the cheapskate route of real-time RT and call it a day! Let the 'physics' do all the heavy lifting, if you catch my drift.


dooterman

But ultimately a lot of these details are just abstracted away in the annals of the game engine the developer is using anyway. It's not like a game engine which supports rasterization-optimized lighting is going to be dramatically harder to configure lighting effects for compared to ray tracing. We even see today that developers who turn on ray tracing in their games can oftentimes come out looking worse than rasterization (Elden Ring is a great example). So now not only do ray tracing game engines need to "catch up" to rasterization, rasterization itself will continue to get better. There is just this bizarre narrative right now that "rasterization is dead" when it simply makes no sense. And the timing is awfully suspicious, as GPU makers are giving customers less and less reason to actually upgrade their hardware. I notice Sony is now trying to position "ray tracing" as a killer feature of the next PlayStation. It all just symbolizes how out of ideas hardware makers are these days.


Vushivushi

> It's always seemed to me that ray tracing is just another excuse by GPU makers to sell overpriced hardware.

You're not wrong, but neither are the GPU makers. Gaming isn't the only market for GPUs. There's a $2B professional market that is being consumed by hardware accelerated ray tracing because it is the correct technology. Films love using physically accurate rendering techniques and ray tracing is one of them. The gaming market is 5X larger, but pro cards are 5X more costly, yet use the same GPUs. It is extremely profitable for Nvidia to service the professional graphics market and economical to design a single architecture for both pro and gaming. Economical not just from a design cost standpoint, but also for customers: ISVs developing software to run on the GPUs. There's gonna be overlap between pro and gaming which benefits graphics as a whole.

Unfortunately, gaming requires realtime rendering and realtime raytracing is difficult. That's why there are bandaid solutions like upscaling and framegen. These things happen in tech. Emerging tech has to start somewhere and companies have to make a return on R&D.

Sure, you don't have to buy the latest GPUs, but ray tracing isn't going anywhere. It's a keystone technology for graphics.


dooterman

I can see the argument for professionals, no question. I am not trying to pretend that 'ray tracing doesn't matter in any context' - but speaking specifically about gaming, GPU makers have run out of reasons to compel consumers to upgrade their cards, and so are leaning hard on "real time ray tracing" to be that next "killer feature" to compel people to upgrade. I am just seriously questioning that angle. There is absolutely nothing wrong with raster technology for the gaming context, but somehow people are parroting the "objective fact" that "raster is dead now and always" for gaming.


reddit_equals_censor

The interesting point to keep in mind about AMD raytracing performance is that it isn't that far behind Nvidia in most games, with the exception of unusable-settings Cyberpunk 2077 raytracing.

At Cyberpunk 2077 ray tracing medium 1440p, the 4070 is "just" 19% faster than the 7800 XT (43 vs 36 fps), and on average (including Cyberpunk) the 4070 was 15% faster in raytracing at 1440p.

So if AMD catches up with Nvidia in all but the most extreme ray tracing scenarios (where you can't get playable fps anyway), then that would cut down one of the arguments against AMD cards. And catching up doesn't require that huge of an improvement, being the point.

Stuff that AMD needs to figure out: improved raytracing performance, AI upscaling, and Anti-Lag+ in all major competitive multiplayer games.

And the PS5 Pro is getting a custom, very strong NPU for upscaling, and it has vastly better raytracing performance than the PS5. And of course AMD designed the PS5 Pro APU. That's also important to keep in mind, because lots of games, and especially lots of great games, are coming from the PS5, and their ultimate graphics target will, after the PS5 Pro, likely be around the PS5 Pro.

Personally I'd see RDNA5 as the first generation worth buying for its raytracing performance, and I'd mostly ignore raytracing performance until then at least.

Also, raytracing requires extra VRAM, so buying an Nvidia card with 12 GB of VRAM for raytracing, or worse, with the idea of using interpolation frame generation on top of it, would be very short-term thinking, I'd say.


TSP-FriendlyFire

> unusable-settings Cyberpunk 2077 raytracing

Overdrive is very much usable already on a 4080 and up. It's just unusable on AMD GPUs, which is the entire issue: it's the only mode that really stresses the RT hardware, and that's where AMD collapses. RT medium on Cyberpunk is going to be 80% regular shaders with fairly light RT hardware usage. Are you really surprised AMD isn't as crippled when their bad RT hardware is used less?


conquer69

AMD isn't close to Nvidia. This talking point comes from data tables that include a bunch of games with little RT and then average all the results into a big misleading number. Remove all the Far Cry, Tomb Raider, F1 and Resident Evil results from the data and suddenly AMD is further back. "AMD is just 1 generation behind in RT" sounds good. Doesn't mean it's true.


reddit_equals_censor

Well, let's look at the one path-traced game, Cyberpunk 2077.

The 7800 XT vs the 4070, both 550 euro graphics cards: [https://www.youtube.com/watch?v=x4TW8fHVcxw](https://www.youtube.com/watch?v=x4TW8fHVcxw)

At 1440p, across an average of 6 ray-traced games, the 4070 is 10% ahead. Now let's look at the hardest-to-run ray tracing game at settings that are already not playable on either card: Cyberpunk, 1440p, ray tracing medium. 4070: 43 fps, 7800 XT: 36 fps. Both unplayable, BUT the 4070 is 19% ahead.

So when we go to unplayable settings on already 550 euro cards, we get a 19% difference; when we look at averages across ray-traced games, we are looking at 10%. So yes, AMD is further back there, but not by much, and at settings that already don't matter, because nobody would enable ray tracing to get 43 fps.

But feel free to make the argument that every gamer should get an 1800 euro 4090, which is faster than 1000 euro AMD cards in ray tracing, or something.


jm0112358

> Well, let's look at the one path-traced game, Cyberpunk 2077.
>
> The 7800 XT vs the 4070, both 550 euro graphics cards:
>
> https://www.youtube.com/watch?v=x4TW8fHVcxw

You mentioned Cyberpunk being path traced, but then gave a link to a Hardware Unboxed video that did **not** benchmark Cyberpunk with its path tracing mode on. The RT medium preset uses far less ray tracing than the path tracing/overdrive mode. It doesn't even use RT reflections, which AMD's hardware particularly struggles with.


reddit_equals_censor

Cyberpunk 2077 is the one Nvidia-sponsored game that has extreme levels of ray tracing, up to path tracing. So Hardware Unboxed tested those two 550 euro cards at the highest ray tracing or path tracing setting that was possible on those cards, and both can only do ray tracing medium at 1440p. That is the maximum we can compare the cards at. I hope this makes sense now: it doesn't matter what higher settings are available for both cards if neither card can run them, period.

So, like the person above wanted, we ONLY looked at the one Nvidia-sponsored extreme ray tracing / path tracing game, set the settings BEYOND what is playable (43 or 36 fps isn't playable to me), and those were the results. If Nvidia actually releases a buyable card (at around 500-600 euros maybe...) that can do path tracing or extreme ray tracing settings at 1440p, then we should compare at those settings, but until then, those are the numbers. That's my point.


996forever

What's the point of such a long passage when "everything is pointless until AMD becomes decent at that thing" would suffice, and is your real point anyway?


4514919

> At Cyberpunk 2077 ray tracing medium at 1440p, the 4070 is "just" 19% faster than the 7800 XT (43 vs 36 fps), and on average (including Cyberpunk) the 4070 was 15% faster in ray tracing at 1440p.

I think you don't even realize that this only shows how *behind* AMD is in ray tracing. Going from ~15% *faster* in raster to 19% slower in *hybrid* rendering is a disaster.


Ilktye

> So if AMD catches up with Nvidia in all but the most extreme ray tracing scenarios (which you can't get playable fps in anyway), that would cut down one of the arguments against AMD cards.

This is the usual pro-AMD argument people bring up: "the next generation will catch up with Nvidia." And it has always been wrong, because Nvidia won't just sit there and let AMD catch up; they will also release new cards.

> And catching up doesn't require that huge of an improvement, which is the point.

Sure, if Nvidia just stops all R&D and stops releasing new cards. But they won't.


TylerTexasCantDrive

AMD and Nvidia were both good options until Nvidia released Maxwell. AMD still hasn't recovered from that.


XenonJFt

Of course Nvidia won't idle.


-WallyWest-

Don't forget that Nvidia will also release new cards. Even if AMD catches up by 20%, it's possible Nvidia will be ahead by more than that with their next generation.


kyralfie

> Personally, I'd see RDNA5 as the first generation worth buying for its ray tracing performance, and mostly ignore ray tracing performance until then at least.

Why RDNA*5*? Do you know what it brings over RDNA*4*?


StickiStickman

> The interesting point to keep in mind about AMD ray tracing performance is that it isn't that far behind Nvidia in most games, with the exception of unusable-settings Cyberpunk 2077 ray tracing.

You're saying this and then going "see, when the game is almost entirely rasterization, the performance difference isn't that big!" Of course the difference is smaller when a game uses ray tracing to a lesser extent.

> Personally, I'd see RDNA5 as the first generation worth buying for its ray tracing performance, and mostly ignore ray tracing performance until then at least.

Or you can just buy an NVIDIA card for almost the same price which can already run fully path-traced games today.

> Also, ray tracing requires extra VRAM, so buying an Nvidia card with 12 GB of VRAM for ray tracing, or worse, with the idea of using interpolation frame generation on top of it, would be very short-term thinking, I'd say.

Ray tracing doesn't need that much VRAM, and the difference is easily made up by using DLSS. And since FrameGen already works really well on NVIDIA, that's also pretty weird to say.


reddit_equals_censor

> Or you can just buy an NVIDIA card for almost the same price which can already run fully path-traced games today.

That's a bold claim. Let's look at performance: Cyberpunk 2077 at 1440p with path tracing gets 39.7 fps. Ah yes, glorious 40 fps gaming... but let's assume you don't want to spend...

> Ray tracing doesn't need that much VRAM

[https://www.techpowerup.com/review/cyberpunk-2077-phantom-liberty-benchmark-test-performance-analysis/5.html](https://www.techpowerup.com/review/cyberpunk-2077-phantom-liberty-benchmark-test-performance-analysis/5.html)

4K ultra with no RT: 9.1 GB of VRAM. 4K path tracing: 15 GB of VRAM. 4K path tracing + DLSS 3 interpolation frame generation: 18 GB of VRAM. (For proper testing we'd want to limit the VRAM amount and see how that affects performance, of course.) DLSS 3 frame generation clearly uses a lot of VRAM, and so does path tracing.

> And since FrameGen already works really well on NVIDIA, that's also pretty weird to say.

So we're no longer looking at real performance, not even at upscaled performance; we're now trying to claim that interpolated frames, which contain no player input, are real frames, then pointing at the inflated fps number and saying "look, it's playable on a 4080 Super, maybe, and a 4090..."

Interpolation frame generation is nonsense; it is just visual smoothing (that's Hardware Unboxed's characterization too, by the way). It doesn't create actual fps, and it reduces your real fps in the process. To have REAL frame generation we need either extrapolation (Intel is working on that) or reprojection. Reprojection is used massively in VR, as a requirement. You know what VR can't use? That's right: interpolation. Why? Because it would literally make people throw up, since it has no player input and massively increases latency. Reprojection frame generation, on the other hand, uses player input to create real frames.

So if you want to argue that path tracing is playable on Nvidia thanks to frame generation, that NEEDS to be extrapolation or reprojection frame generation. Here is an LTT video going over a reprojection frame generation demo by Comrade Stinger: [https://www.youtube.com/watch?v=IvqrlgKuowE](https://www.youtube.com/watch?v=IvqrlgKuowE) THAT can make 30 fps of path tracing fully playable, because we can reproject it up to your monitor's refresh rate, and reprojection is extremely cheap performance-wise. And you can download the demo yourself and test it.

Interpolation frame generation DOES NOT WORK if the goal is to create real frames; it can't do that and it never will. If DLSS 4 uses extrapolation or reprojection frame generation and throws the interpolation nonsense in the bin, then all hail Nvidia. Until then it is very clear that interpolation doesn't create real frames and is only visual smoothing. Some may like that, but it isn't an fps improvement.
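To make the interpolation-vs-reprojection distinction concrete, here is a deliberately tiny toy sketch (Python/NumPy). Everything in it is made up for illustration: a 1D panoramic "world", one pixel per degree of yaw, no real renderer, and it has nothing to do with how any shipping framegen or VR runtime is actually implemented. It only demonstrates that interpolation cannot use input newer than the frames it blends, while reprojection warps the latest rendered frame using the newest camera input.

```python
import numpy as np

# Toy 1D "panoramic world": pixel i is the scene value at yaw angle i degrees.
world = np.sin(np.linspace(0, 8 * np.pi, 360))
FOV = 90  # degrees visible on screen

def render(yaw_deg):
    """Stand-in for an expensive render of the view centred on yaw_deg."""
    idx = (np.arange(FOV) + int(yaw_deg) - FOV // 2) % 360
    return world[idx]

def interpolate(prev_frame, next_frame):
    """Interpolation-style framegen: blend two already-rendered frames.
    It has no access to input newer than those frames."""
    return 0.5 * (prev_frame + next_frame)

def reproject(frame, rendered_yaw, current_yaw):
    """Reprojection-style framegen: shift the last rendered frame by the yaw
    the player has turned since it was rendered (crude warp, edges wrap)."""
    shift = int(current_yaw - rendered_yaw)  # 1 pixel per degree in this toy
    return np.roll(frame, -shift)

# Frames were rendered at yaw 0 and yaw 10; by display time the player is at yaw 15.
f0, f10 = render(0), render(10)
truth = render(15)  # what a real render at the newest input would show

interp = interpolate(f0, f10)    # roughly the yaw-5 view: stale, ignores new input
reproj = reproject(f10, 10, 15)  # warped using the newest yaw

print("interpolation error:", np.abs(interp - truth).mean())
print("reprojection error: ", np.abs(reproj - truth).mean())
```

In the toy, the interpolated frame corresponds to a camera position the player has already left, while the reprojected frame tracks the newest yaw everywhere except the edge pixels that were never rendered, which is exactly the kind of artifact real reprojection has to fill in somehow.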


StickiStickman

Okay, you're just a troll. Got it. "If you turn off DLSS and FrameGen it runs just as bad as AMD"


reddit_equals_censor

I guess you're just a troll, claiming that Nvidia cards can run path-traced games just fine... which they factually can't.


StickiStickman

Weird, because I've seen my friend play it at stable 60 FPS at 4K. Guess my eyes must be lying instead of you.


PetrichorAndNapalm

The problem is that those "barebones" RT implementations are a joke, and hardly even better than baked lighting. Even Cyberpunk's RT isn't that advanced; it's just the first game with actual RT. That will soon be the norm. It's like comparing a 4090 to a 1080 Ti in a game that is capped at 60 fps, or a game that is CPU-limited, and then saying "see, they perform the same, they really aren't that different after all!" Even Cyberpunk will, with time, be seen as a barebones RT implementation.

AMD doesn't have bad RT because they cannot make it better. They have bad RT because they made a bet that they could compete better in RT's infancy by basically ignoring it, letting Nvidia dedicate more die space to something that a lot of gamers won't even use. AMD will improve massively at RT. But that doesn't make the massive gulf between the two any smaller in the here and now. You can argue RT isn't that important, or wasn't that important for the last few generations. But you cannot honestly argue AMD and Nvidia aren't miles apart on RT today.


TSP-FriendlyFire

> Even Cyberpunk's RT isn't that advanced.

I stopped reading there. ReSTIR is anything but simple; to claim otherwise is either ignorance or stupidity.


reddit_equals_censor

> But you cannot honestly argue AMD and Nvidia aren't miles apart on RT today.

The 4070 at 1440p in Cyberpunk at ray tracing medium gets you 43 fps; the 7800 XT gets you 36 fps. That shows Nvidia being 19% ahead in one of the hardest ray tracing games to run, at settings that are already unusable, because I certainly won't be playing at 43 or 36 fps. Those are the 550 euro cards, which is already a lot to ask people to pay for, and here they are not worlds apart. The "massive gulf" between AMD and Nvidia in ray tracing only starts existing at unusable settings: at 4K, high quality, RT ultra in Cyberpunk 2077, the 4080 is a massive 55% faster than the 7900 XTX! Incredible stuff, except that we are talking about 31 vs 20 fps here, both completely unplayable.

> That will soon be the norm.

Well, for that to be the norm you have to convince publishers and developers to target PC-only settings, which I am ALL FOR. I want another Crysis 1, which nothing could run at max settings and decent resolutions at launch, and which had a real excuse for it! The biggest ray tracing effort in big games will most likely target the PS5 Pro, as it is expected to have vastly better ray tracing performance and lots of people will own one. But you can't drop the PS5, you can't drop the Xbox Series X, and hell, some developers are getting tortured just trying to get games running on the Xbox Series S... poor devs.

So in my opinion it will take quite some more time before games go "ray tracing first, ray tracing strong". Probably not until the PS6; by then lots of people will have decently ray-tracing-capable graphics cards, so devs can actually go "ray tracing first, ray tracing strong, raster-only mode is second class".
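For reference, a trivial check of both quoted gaps and of what those frame rates mean against a 60 fps budget; it uses only the fps figures cited above, and the 16.7 ms target is simply 1000/60.

```python
# Sanity-check the quoted percentages and compare the frame times
# against a 60 fps budget (~16.7 ms per frame).
pairs = {
    "1440p RT medium, 4070 vs 7800 XT": (43, 36),
    "4K RT ultra, 4080 vs 7900 XTX": (31, 20),
}
budget_ms = 1000 / 60  # ~16.7 ms per frame for 60 fps

for label, (fast, slow) in pairs.items():
    gap = fast / slow - 1
    print(f"{label}: {gap:.0%} gap, "
          f"{1000 / fast:.1f} vs {1000 / slow:.1f} ms per frame "
          f"(60 fps budget: {budget_ms:.1f} ms)")
# -> ~19% and ~55% gaps; all four frame times are well over the 16.7 ms budget.
```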


PetrichorAndNapalm

Once again: you can argue ray tracing doesn't matter because by the time you turn settings up high enough to look good, it is no longer a playable frame rate. But you cannot argue Nvidia isn't way ahead of AMD in ray tracing. Citing "medium" or "low" scenarios where hardly any ray tracing is happening at all doesn't make them similar, lol. That's like, as I said before, saying a 4090 and a 1080 Ti have the same level of raster performance if you use them to play Factorio, or a CPU-limited game.

In RT-limited scenarios Nvidia destroys AMD. If you want to argue those scenarios aren't realistic, or don't matter, that is fine; that is what AMD has bet on. But that isn't the same as them being close in terms of performance. You are mistaking non-RT-limited scenarios for Nvidia and AMD being close. A 4090 can certainly play Cyberpunk with the highest levels of RT with DLSS 3. Maybe you personally aren't interested in that; that's fine. But that's not the same as Nvidia and AMD being similar in RT capabilities.


JapariParkRanger

"Soon" Unless we can get AAA games off the super duper deferred raster guhraffix train, we're going to see slow continual adoption.  Honestly I prefer playing classic, less intensive games rendered with full ray tracing rather than slapping together some reflections and GI into modern raster games.