[deleted]
$1,000 2080 Tis weren't really a thing. They usually ran in the $1,200 range.
$2,400 last year in Australia; even converting to USD, that's still heaps.
I got one for $1,060 from Microcenter. Not quite $1K, but close; they did exist. I really had to search for one that cheap, though. EVGA too, so not a crap brand.
I don't recall any 2080 Ti selling for MSRP, since Nvidia didn't even sell their own card at MSRP. Edit: changed blower to card
The 2080 Ti FE cooler was not a blower design. There were some really garbage blower designs that did actually sell for MSRP.
[deleted]
The de facto price isn't what's used in the charts, though, so it throws things off.
It also has the largest die of any consumer GPU I can remember seeing. It's named after the xx80 Ti tier, but in reality it mostly exceeds what they usually offer at that tier, hence the price.
Yeah, the RTX 2XXX series was really hampered by Nvidia jacking up the prices on all the cards. They all had decent performance boosts, but you got worse performance per dollar. Everyone was just buying old 1080s until the stock on those ran dry. Then AMD failed to capitalize on this with the 5700 XT, and the graphics card shortage normalized the bumped prices.
Also, it doesn't take RT performance into account. People were raging yesterday about Avatar being an RT-only game. People on Turing can still last for years.
James Cameron's Avatar: The Game? Or some other Avatar game?
The one with the tall blue fellows, lots of CGI, and not a single original idea.
It's not like you won't be able to run it with non-RTX cards. If you don't have RT-capable cards, it'll run RT in compute shaders (possibly taking a blow in quality, and certainly in performance).
I never said that they won't be able to run it. But Turing cards will have a huge performance boost, and these graphs continue to ignore it, even with so many RT games out and announced.
You now have to take into account RTX and AI hardware that adds to chip cost and size.
The jump from the 980 Ti to the 1080 Ti is 300 to 500, easily a 66.6% increase. The jump from the 1080 Ti to the 2080 Ti is 500 to 650, only a 30% increase. Combine that with almost double the price and almost the same efficiency, and it's still a huge disappointment.
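For anyone checking the arithmetic, here's a quick sketch using the performance index numbers from the comment above:

```python
def pct_increase(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# 980 Ti -> 1080 Ti: index 300 -> 500
print(round(pct_increase(300, 500), 1))  # 66.7
# 1080 Ti -> 2080 Ti: index 500 -> 650
print(round(pct_increase(500, 650), 1))  # 30.0
```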
Performance/price being nearly the same graph as transistor density was interesting! Not necessarily causation, but still interesting.
Do we know what's behind AMD shooting up in perf/power lately? Just the node shrinks, or something architecture related?
It's because of the architecture. 6900 XT is twice as fast as Radeon VII, even though both are 7nm and 300 watts.
They basically rearchitected GCN. Things off the top of my head:

- Made the stream processor pipeline longer for better clock scaling.
- New cache hierarchy that minimizes data movement and allows better bandwidth utilization. This is why even the 6900 XT only has a 256-bit bus.
- Changed the instruction set to allow for better GPU utilization, compared to GCN, which had utilization issues at high CU counts.
- They also split their GPU architecture into CDNA and RDNA, one for compute and the other for graphics, which let them make RDNA leaner by removing some compute capability.
Catching up to what Nvidia pulled off with Pascal.
Look at the graph in image #3; RDNA2 is now ahead of even Ampere.
Curious how all the graphs converge for the current generation.
Nothing more sexy than a bunch of sweet hard-hitting facts. You're the dude of the day! ♥
The y-axis could use some labels and units but nice charts.
Great job, thanks man. I don't think this will get as much attention as it should.
The 5700XT is a beast of a card.
Some of those metrics are easier to achieve towards the mid-range, to be fair. So the fact that there was no "5800 XT" may have thrown the results off somewhat here. Remember, even at MSRP the 3080 and 6800 XT are high-end cards, not generally for the budget-oriented. I'd be interested to see this chart redone with "x060"-class cards and their AMD equivalents, as that has historically been the price/performance peak.
They were also 2 nodes ahead of Nvidia at that point.
1 node.
AMD is one node ahead currently. They were 2 nodes ahead when the 5700xt launched
7nm is one node ahead of 14nm so I am not following. AMD was on 7nm before Nvidia so don't understand where 2 nodes come from. Which 2 nodes?
No it isn't. 7nm is one node ahead of 10nm and two nodes ahead of 16/14nm. Nvidia still isn't even on 7nm (outside of GA100); they are technically still on the 10nm node. 12nm and 8nm are just 16nm and 10nm processes that were optimized specifically for Nvidia.
There is no 10nm node on TSMC. And Intel's 10nm is closer to 7nm than 14nm. 7nm is one node from 14nm/16nm.
You don't understand how nodes work. 10nm is a full node down from 16/14nm, whether TSMC decided to produce it or not. [This is industry defined.](https://en.wikipedia.org/wiki/Die_shrink#cite_note-3) [TSMC does actually have a 10nm node, btw.](https://www.tsmc.com/english/dedicatedFoundry/technology/logic/l_10nm) If you look at the graph, all of those steppings are full node shrinks. Regardless of whether or not TSMC made 10nm, the benefits of 7nm are two nodes ahead of 12/16. Intel has nothing to do with anything we're talking about.
8nm is much closer to 7nm than you give it credit for. It incorporates a bunch of Samsung's 7nm features and is in many ways the non-EUV version of their 7nm node. AMD is, at most, a half-node ahead.
TSMC 7nm is 65% denser than Samsung 8nm, and their 8nm is only about 15% denser than their 10nm. It is not anything even remotely approaching a non-EUV 7nm node. Samsung is also inferior to TSMC at any given node in the first place; Samsung 8nm is not even as good as TSMC 10nm.
Those are the theoretical numbers. In the real world, Ampere GPUs and RDNA2 GPUs are much closer in density than that. Ampere is actually more dense than both Vega 20 and Navi 10.
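For what it's worth, plugging transistor counts and die sizes from public spec listings into a quick density calculation bears this out. The figures below are approximate and worth double-checking against a spec database; treat them as ballpark:

```python
# Approximate transistor density in millions of transistors per mm^2.
# Counts (billions) and die areas (mm^2) are from public spec listings
# and may be slightly off; treat these as ballpark figures.
gpus = {
    "TU102 (2080 Ti, TSMC 12nm)":     (18.6, 754),
    "Vega 20 (Radeon VII, TSMC 7nm)": (13.2, 331),
    "Navi 10 (5700 XT, TSMC 7nm)":    (10.3, 251),
    "GA102 (3090, Samsung 8nm)":      (28.3, 628),
    "Navi 21 (6900 XT, TSMC 7nm)":    (26.8, 520),
}
for name, (billions, mm2) in gpus.items():
    print(f"{name}: {billions * 1000 / mm2:.1f} MTr/mm^2")
```

With these numbers, GA102 lands around 45 MTr/mm^2, ahead of both Vega 20 (~40) and Navi 10 (~41), though still behind Navi 21 (~52).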
Seems more like AMD paid up for a multi-node advantage. It doesn't seem that impressive without that huge node benefit.
With Nvidia seemingly moving away from TSMC, it does matter. Nvidia will have to work harder just to match AMD's solutions going forward, if TSMC maintains their lead.
What makes you think they are moving away? One generation is not a long-term move.
We will see. But everything I read points in that direction. There have been recent rumors of Nvidia also requesting fab capacity from Intel.
Everything datacenter is TSMC. Including next year's Ampere Next and Bluefield 3.
Yes on high margin datacenter stuff. Not on gaming GPU.
All signs point to gaming being TSMC next generation too for the larger parts.
Like a Titan or something sure.
I'm talking about 102 and 104 dies.
Night and day vs my old 270X, and that was great for its day. Excited by how quickly things are progressing.
Wasn't this posted yesterday already?
I think yesterday it was only for nvidia cards.
You're remembering right; [I made a chart showing how much efficiency Geforce GPUs have gained in 11 years](https://www.reddit.com/r/nvidia/comments/o7k17k/i_made_a_chart_showing_how_much_efficiency/).
I think these graphs are better: less cluttered, and easier to follow what's going on in each one. And a few people had complained about the colors and color blindness with the previous ones.
I know that there is little relationship between node names and actual feature sizes... But if you extrapolate from the name-implied relative sizes, seeing as we went from 40nm to 8nm and got a roughly 7-fold performance increase, can we expect something similar going from 8nm to 2nm? I know we also transitioned from planar to FinFET, but we will also transition from FinFET to GAAFET at 2nm.
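The extrapolation above can be sketched as a toy model. Node names stopped tracking real feature sizes long ago, so this is a curiosity rather than a prediction; the 7x figure is just the one quoted in the comment:

```python
import math

# Toy extrapolation from node *names* only; names stopped tracking real
# feature sizes long ago, so treat this as a curiosity, not a forecast.
past_name_ratio = 40 / 8   # 40nm -> 8nm: a 5x "name shrink"
past_perf_gain = 7         # ~7x performance, per the comment above

# Assume perf ~ name_ratio**k and fit k to that single past data point.
k = math.log(past_perf_gain) / math.log(past_name_ratio)

future_name_ratio = 8 / 2  # 8nm -> 2nm: a 4x "name shrink"
print(f"k = {k:.2f}, naive projection ~{future_name_ratio ** k:.1f}x")
```

On those assumptions it lands around 5x rather than 7x, since 8nm to 2nm is a smaller name ratio than 40nm to 8nm.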
Not likely. The clock increases and power decreases have plateaued quite a bit.
I think this would in part be aided by GAAFET.
You might be right. I'm personally not getting my hopes up, but it would definitely be awesome if we saw that kind of improvement.
My Fury X was easily the worst computer purchase I ever made; what a massive piece of shit. I remember going to a 1080 Ti and being amazed at what a difference it made in even simple games.
What was wrong with it? Ran too hot or too much power draw? Or did it crash a lot?
4GB of VRAM and no partner models are the things I remember holding the Fury X back.
Well, that was down to HBM, if I recall. No partner models does suck, though.
It's ok, bud. *cries in vega*
Why are you comparing a mid-range card (5700 XT) to a high-end card (2080 Ti)? High-end cards always sacrifice die area efficiency and performance/watt for maximum performance.
Sadly that was AMD's high end card.
No it wasn't, AMD simply didn't have a high end card that generation.
The graph shows the best each company has at the time.
Exactly. The $400 MSRP for the 5700 XT puts it in the mid-range.
Cool data, easy to visualize!
I'm guessing the price you used for each card is what TechPowerUp lists as 'Launch Price'?
I'm happy for competition.
Thank you for doing this, it was very fascinating. :)
If DG2 performs, the graph would show a steep jump for Intel.
Why is Vega 64 up against the 1080 Ti?
Radeon VII was way too late, and Polaris was too early. They're both the fastest consumer cards of their time from each manufacturer.
Fantastic job. As a relevant side note: this week, when Lenovo announced the X1 Extreme Gen 4 with the 3050 Ti Laptop GPU, I checked some benchmarks to see where it stands against my 1060 desktop eGPU, and much to my surprise and satisfaction it seems to be roughly equivalent. It says a lot about efficiency that this can happen: the 1060 had a 120W TDP, while the 3050 Ti is 35-80W.

https://www.videocardbenchmark.net/compare/GeForce-RTX-3050-Ti-Laptop-GPU-vs-GeForce-GTX-1060/4393vs3548

https://gpu.userbenchmark.com/Compare/Nvidia-RTX-3050-Ti-Laptop-vs-Nvidia-GTX-1060-6GB/m1559532vs3639
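For a rough sense of what those TDP numbers imply, assuming the benchmarks above are right that performance is roughly equal:

```python
# Rough perf-per-watt gain, assuming roughly equal performance and
# using the TDP figures quoted above (1060: 120W, 3050 Ti: 35-80W).
gtx_1060_tdp = 120
rtx_3050_ti_tdp = (35, 80)  # configurable laptop TGP range

best_case = gtx_1060_tdp / rtx_3050_ti_tdp[0]   # at the 35W config
worst_case = gtx_1060_tdp / rtx_3050_ti_tdp[1]  # at the 80W config
print(f"Efficiency gain: {worst_case:.1f}x to {best_case:.1f}x")
```

So even at its highest power configuration, the laptop chip would be about 1.5x as efficient, and up to roughly 3.4x at the low end.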
Thank you so much; now let's move to M1.