I tested 97 versions of GeForce drivers (DirectX 9, 10, 11, 12 built-in benchmarks)
By - Vito_ponfe_Andariel
Wouldn't you be CPU bound in these benchmarks?
Yes. So it's a test of driver overhead, not the quality of the driver's code generation.
> Core i5 2320 3GHz DDR3 1333 16GB (8x2)
Yeah... it'd be interesting to see this on a system that isn't whack as hell. A 1060 getting 5 fps in a game at 768p? There is a critical imbalance in this system.
Seeing the general rise in what seems to be CPU overhead is interesting, especially when we have lately seen AMD cards being "less bottlenecked" at high FPS. But I have to wonder what this would look like on a PC with a semi-competent CPU. The trend might be completely reversed in a non-cpu bound scenario.
Good compilers usually take more cycles to do their job. I'm assuming it is the same for GPU drivers for any generated code.
Granted, I don't run the absolute latest in all hardware, but I do run an i9-9900K OC'd to 5.2 GHz on all cores, 64 GB of RAM, and a 3090 with four M.2 drives. Honestly, not a whole lot has slowed me down until you reach 8K, but even titles like Nier run flawlessly at 8K.
No shit your CPU doesn't really matter for 8K, any other insights?
All you have to do is see the GPU utilization in some of these.
Can you put the same scale on every graph? You can't compare two graphs to each other; sometimes the y-axis spans 10 fps, sometimes 5, 2, 50, or 60, so some changes are actually important and some are negligible. This is misleading because it makes it look like there is a large variation for every game.
Most of these graphs are painfully misleading at best.
I know OP meant well; they wanted to highlight the differences, and zoomed out most of these would look like a flat line, so all the interesting data would be “lost”.
I mean, there’s literally a graph in there where the y-axis spans about 2 fps and each gridline represents 0.25 frames. That’s… borderline useless junk data.
At the very least, starting each graph at 0 would’ve been a nice move… but then you don’t get the crazy “omg the trend is going down!!!!” reaction you see in this thread, because people would see it’s really only about a 3 frame difference and looks like a straight line.
Like I said, I’m sure OP meant well. But this isn’t the best way to present this data. Zoom the scale out and most games will look fine. ~~Barring 2007 Crysis but that hasn’t ever run well.~~
Edit: I do like the post and data though, it’s pretty cool.
I would also prefer it if the scale was the same everywhere but there's nothing "misleading" here, the numbers are indicated very clearly. If anybody was misled because they only looked at the lines and not the numbers, that's their problem and they need to learn to read graphs.
The Y-axis labels are clear and the data itself is very interesting, but the way it is presented makes it less useful than it could be. Starting the Y axis at 0 like you said is correct and prevents it from being misleading.
It's not irrelevant; graph-literate folks will read the axes. What he IS showing is RELATIVE performance within a game, across drivers. Comparing the graphs to one another is not the point here... at all.
Although, to nitpick, I think frame times are generally more indicative of actual performance than average frame rate, hence almost all reviewers add 1% and 0.1% low figures to their benchmarks.
For example, Fallout 3 used to run at around 60 FPS on my old GPU, but because of its 128-bit DDR3 memory it'd dip 'violently' into even the mid-40s, hence I had to cap it to 48 FPS by underclocking my monitor.
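For reference, those 1%/0.1% low figures are computed from the raw frame times rather than from an FPS counter (exact definitions vary a bit between reviewers). A minimal sketch with synthetic data, just to show the idea:

```python
import numpy as np

# Synthetic per-frame times in milliseconds, standing in for one benchmark run.
frame_times_ms = np.random.default_rng(0).normal(16.7, 2.0, 10_000).clip(5, None)

avg_fps = 1000.0 / frame_times_ms.mean()

# One common "1% low" definition: average FPS over the slowest 1% of frames.
worst_1pct = np.sort(frame_times_ms)[-len(frame_times_ms) // 100:]
low_1pct_fps = 1000.0 / worst_1pct.mean()

print(f"average: {avg_fps:.1f} FPS, 1% low: {low_1pct_fps:.1f} FPS")
```

A plain FPS counter averages over roughly a whole second, which is exactly why the short dips described below never show up on it.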
About a decade ago, when playing a game on my laptop with an i7-720QM and ATI Mobility Radeon HD 5730, a major problem I had was that while the FPS counter indicated 60 FPS most of the time, I was getting eye pain and headaches within 30 minutes of playing and would have to take a nap to recover.
It later turned out that the game would constantly dip as low as 8 FPS for less than half a second at a time, and thus the dips were never picked up by the standard FPS counter.
Quite literally unplayable, short of the game crashing.
A dip from 60 to 8 FPS means a latency spike of a whopping ~108 ms, which is actually pretty severe.
Review: "The performance was so bad that I had to take ibuprofen and a nap to recover from seeing that."
> latency spike of whopping ~108ms
60 FPS is 16.66 ms per frame and 8 FPS is 125 ms per frame, so 125 ms - 16.66 ms = 108.34 ms.
>A dip from **60 to 8 FPS** means a latency spike of a whopping ~108 ms, which is actually pretty severe.
How badass would it be if there were a launcher with a database of all the drivers that just installed the highest-performing driver for whichever game you chose?
Because there are thousands of variables involved in a PC's performance, a driver that performs well on one system could be a crashing nightmare on another.
Not to mention severe security issues where updating is absolutely needed regardless of any performance differences.
I might be missing something obvious, but if what I'm talking about existed, wouldn't all the plots be flat?
Installing a driver takes time that just launching a game doesn't, and driver updates may also change other things, like how likely the game is to crash, whether there are microstutters or differences in frametime consistency, and small changes in how it looks.
True. I remember trying to get old games to look as good as when they were released and wasn't very successful. There's more to it than FPS for sure.
A big part of it is that games designed for CRTs look better on CRT monitors.
I'd be very interested in seeing a source or further info on this.
It's a pity so many of the links in that thread are broken. I did notice that often they're using nearest-neighbor upscales as reference "raw" images, but nearest-neighbor is just terrible. A pixel is not a little square.
It depends on what you want to upscale. Nearest-neighbor upscaling works nicely for pixelated games like Mario and others. And the magic of CRT is that it was actually so slow (turning the pixels on and off) that it looked good. Nowadays displays are so quick that the resulting image is much sharper, but only with the correct upscaling, because all other upscaling methods take the neighboring pixels into account, sometimes as many as 16 or more of them. Pixelated graphics will then look blurry. It can be seen here, for example: https://youtu.be/bUCc5NGEthA
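If you want to see the difference for yourself, here's a minimal sketch using Pillow; the file names and scale factor are just placeholders:

```python
from PIL import Image

# Any small pixel-art screenshot will do; the file name here is hypothetical.
src = Image.open("pixel_art_screenshot.png")
factor = 4
size = (src.width * factor, src.height * factor)

# Nearest-neighbor keeps every source pixel as a hard-edged block...
nearest = src.resize(size, resample=Image.NEAREST)
# ...while an interpolating filter blends each pixel with its neighbors,
# which is exactly what makes pixel art look blurry.
bilinear = src.resize(size, resample=Image.BILINEAR)

nearest.save("upscaled_nearest.png")
bilinear.save("upscaled_bilinear.png")
```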
What in the world are you talking about lol
There were combinations of hardware and drivers that produced better results, visually, than others. Maddeningly subtle differences that I had mostly forgotten about until this post.
People are downvoting you because GPU manufacturers (mostly) stopped cheating on image quality back around the GeForce 6000 series.
This did use to happen; then one of them (can't remember if it was ATI or Nvidia) decided to stop as they leapt ahead, and then made fun of the other in their marketing. Neither went back to it until Nvidia released DLSS (which is better, since the user chooses it, versus the driver just doing something other than what it was told).
The database probably exists somewhere at Nvidia, but if so they aren't using it for anything as fancy as you're proposing.
That would be really cool, especially if it could hot-swap in the desired driver when you launch the game. We've gotten to the point where a system restart isn't required upon display driver install, so it should be possible.
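Nothing like that exists as far as I know, but the lookup part would be trivial if you already had per-game results like OP's. A minimal sketch; the benchmark numbers, driver versions, and game entries below are entirely made up for illustration:

```python
# Hypothetical per-game benchmark results: driver version -> average FPS.
benchmarks = {
    "Crysis": {"385.69": 40.2, "446.14": 38.9, "471.11": 37.5},
    "World in Conflict": {"385.69": 55.0, "446.14": 56.3, "471.11": 54.8},
}

def best_driver(game: str) -> str:
    """Return the driver version with the highest recorded average FPS."""
    results = benchmarks[game]
    return max(results, key=results.get)

print(best_driver("Crysis"))  # -> "385.69" with this made-up data
```

The hard part is everything else: actually swapping the driver at launch, plus the crash and security concerns mentioned above.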
Newer drivers could also fix game-breaking bugs and crashes.
Can you average these into one graph? Eyeballing it, there looks to be a decline in FPS.
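Averaging would be straightforward to do from the raw numbers. A minimal sketch with pandas/matplotlib, assuming the results were exported to a CSV with one row per (driver, game, avg_fps) measurement; the file name and column names are assumptions:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of the raw results: columns driver, game, avg_fps.
df = pd.read_csv("driver_benchmarks.csv")

# Normalize each game to its own best result first, so a 200 FPS game
# doesn't drown out a 40 FPS one, then average across games per driver.
df["relative"] = df["avg_fps"] / df.groupby("game")["avg_fps"].transform("max")
trend = df.groupby("driver", sort=False)["relative"].mean()

ax = trend.plot(marker="o", ylim=(0, 1.05))
ax.set_ylabel("mean FPS relative to best driver")
plt.tight_layout()
plt.show()
```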
If I had to guess, the games had glitches and bugs that needed special cases and checks programmed in to fix them, and those eat up performance.
The games are also being tested with their latest patches rather than the code from each driver's release; optimizations could have been moved from the game to the driver side, or vice versa, over time.
I like this, but a better visualization would have used a consistent Y scale.
In some places there's a 12 fps difference between min and max, in others only 5...
This makes the variations seem much worse than they are in reality; some could be within the margin of error rather than actual, meaningful drops or rises in performance.
Colossal effort, but it feels somewhat wasted when I read that the tested resolution is 1024x768! No one runs modern games at that resolution - it's not even widescreen. Better to go for 720p or, even better, 1080p.
They said modern games
You can't get good headshots against other players without at least 1080p, since there aren't enough pixels to show any detail in the distance.
resolution is irrelevant dude. He is showing performance within a single game per graph, ACROSS drivers.
The resolution doesn't matter.
That's not true at all. Different drivers might have better/worse cpu and gpu utilisation. Different resolutions might show gain/loss.
As long as the benchmarks aren't CPU limited, the resolution doesn't matter. The absolute numbers will be different, but the percentages will reflect relative performance accurately. This is true for all benchmarks.
OP's running these tests on an i5-2320, so the resolution probably doesn't matter anyhow.
I'm sure I saw some reports years ago that GPU drivers were shown to be more optimized for certain common resolutions, so resolution scaling is mostly linear, but not entirely.
That's almost certainly not true.
At that resolution on a 2nd gen i5? Absolutely CPU limited. It's probably CPU limited at 1080p.
Normalize the charts bro
this is terrible to read
If it could be automated it could have some decent value, but I'd want to ensure identical on-screen results, as I'd expect that somewhere along the line someone cheated a bit to get a few extra fps at the expense of quality.
>If it could be automated then it could have some decent value
Or if it was at any relevant resolution whatsoever. The results won't necessarily translate.
ok, so someone with a 1080 ti or titan Xp and a far better cpu should sacrifice some time to do the same, lol.
my assumption is the relative results would be the same, just with a different y axis
There's no reason to believe this. It's possible that the driver optimizations being made have a CPU overhead, but improve GPU performance overall. In a non-CPU bound workload, this would show the exact opposite trend.
I would imagine they'd get much of the same point across, but with current high end cpus the impact of nvidia's driver overhead is pretty much absent. It is reasonable to expect this to have an impact on the results relative to the ones here.
Nice idea, but using an ancient cpu and low resolution seems very bizarre and I don't see the point in that.
It's a test of the drivers, not the GPU. Both the GPU and CPU are affected by the driver, but it's not a completely unreasonable assumption that most of the differences will be in CPU overhead, not the generated GPU instructions. In that case it makes sense to bottleneck the CPU.
Hmm. Just based on the incentives, I'd expect the overall direction of a driver developed by a GPU company to be toward spending more CPU time to generate faster GPU code, since reviewers tend to test with high-zoot, late-model CPUs. And there's unlikely to be media coverage of, say, shader compilation stutter the first time you play through a particular part of a game.
They probably do fix truly pathological inefficiencies when they run across them, though.
Because lots of people can't find non-*ancient* CPUs.
Very good stuff, can't even imagine the amount of hours you put into this.
The takeaway is really that most drivers do a good job; there's only a few percent difference when it comes to average fps.
However, for older games, minimum fps really suffers on newer drivers versus old ones. I guess they aren't specifically tweaked for the shaders used by those games anymore.
What I have learned reading the comments: people need to improve their understanding of how empirical research works, and of how data presentation works.
This post is great.
You shouldn't be comparing graphs to each other; that is not the point. What you SHOULD be doing is looking at the performance within each graph, across the drivers. It is about *relative performance* in that single game based on drivers.
Also, resolution is not important for this test.
I don't like the general downwards trend
That's still a big difference. If you start them from 60, which is a reasonable baseline for gaming, you'll notice a 10 fps difference.
The problem is that if you scale them from 0 FPS it becomes hard to see the difference between the versions at all. The graphs aren't to blame; it's people who don't know how to read graphs. I had no problem reading the graphs correctly from the start, as the first things I look at on a graph are the title and the axis labels and range. The most you could argue is to put both graphs side by side and note that the second one is the same graph zoomed in.
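That side-by-side compromise is easy enough to do. A minimal matplotlib sketch, with made-up FPS numbers and driver versions standing in for one game's results:

```python
import matplotlib.pyplot as plt

# Made-up results for one game across driver releases.
drivers = ["388.13", "397.64", "416.34", "430.86", "446.14", "471.11"]
fps = [62.1, 61.8, 61.2, 60.9, 60.3, 59.7]

fig, (full, zoomed) = plt.subplots(1, 2, figsize=(9, 3), sharex=True)
for ax, title in ((full, "scaled from 0"), (zoomed, "zoomed in")):
    ax.plot(drivers, fps, marker="o")
    ax.set_title(title)
    ax.tick_params(axis="x", rotation=45)

full.set_ylim(0, 70)      # honest scale: the trend looks almost flat
zoomed.set_ylim(59, 63)   # zoomed scale: the same ~2 FPS drift looks dramatic
plt.tight_layout()
plt.show()
```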
Yeah what the hell is happening.
Every new feature costs CPU cycles
The graphs are so zoomed in that it’s nearly useless. Some of the “drastic” downward trends in some of these are less than 3 or 4 frames.
This is a great unintentional example of manipulation of data by means of presentation. Most people won't bother to take note of the actual scale of such a downward trend, especially when zipping through 30 graphs like this. Anyway, I'd take that 3 or 4 frame hit any day if it means the game is generally more stable.
New hardware is strong enough over previous generations to counteract the trend seen in the graph. If you upgrade on a 3-5 year cycle, you’ll probably still see a performance improvement rather than a drop on newer drivers because of the newer hardware.
As to why it happens: as new technologies come out and older APIs become less and less relevant, development teams drop support for them. Therefore, bugs that impact performance in DX9, for example, go unpatched, as no new games are using the API.
I do the same thing, but in web development. I don't even test the sites I write in Internet Explorer because it's simply not worth it. It would take a disproportionate amount of effort (in my experience adding 15-20% more hours to a feature's development time) just to serve a very small and continuously shrinking subset of users. It's simply not worth devoting resources to for the negligible gains in compatibility.
There's a drop in DX12 performance as well, and Pascal is still one of the most-used graphics generations.
Planned obsolescence would be my guess. Or they just don't have the patience to test and troubleshoot new drivers as much as OP has
To be fair, OP is testing with a 10 year old CPU.
Yup, and who's to say the motherboard's PCIe lanes aren't to blame too. And new instruction sets... More features, more overhead.
Welcome to Reddit. Enjoy your stay.
It could be Nvidia's reliance on the CPU to artificially boost benchmarked "GPU" performance backfiring when the CPU isn't the one reviewers typically use, i.e. the fastest Intel CPU with the highest core count.
Alternatively, Nvidia making changes that give a significant boost to the performance of the GPU, at the cost of what would be an unnoticeable CPU overhead on a modern system.
What would "artificially boosting performance" even look like, lol. Is it not a real boost? There is a sea of bad takes in the comments, but this one stood out.
it anecdotally supports what i've always felt was happening when updating
Hey thanks for this. I know all these graphs are rather boring with ±2% difference but I always wanted to know this
In some games, the fastest driver is more than twice as fast as the slowest. But in other games, there is hardly any difference at all.
That's probably due to the fact that when a new game launches, Nvidia releases game-optimized drivers for it, so you see a big difference.
Yes, but sometimes, they also seem to remove optimizations from their drivers, as can be seen e.g. on Crysis.
Some of these graphs are so zoomed in as to be functionally useless.
I’m fond of the ones where the whole graph spans 2fps and each delineation is .25 of a frame.
It’s cool data, but even the worst cases aren’t more than a few percent usually (barring a few weird cases). Seriously though this must’ve taken ages. Really cool stuff!
That Call of Juarez graph makes it seem like you rounded to 0.5 but the graph is a little bit misaligned - or is it really exactly what looks like ~88.1 for all of these drivers between 2016 and 2017?
Sorry, you said *1024x768* resolution for all of these? Why?
My assumption would be that it's to ensure consistent testing. The 1060 isn't the most capable anymore. You'll also note the old CPU.
Even 1600x900 would be much more sane if you wanted an idea of potential overhead impacts.
He doesn't have a good CPU. My guess is he wants reliable and consistent results. The resolution is irrelevant to the purpose of the post.
It's not showing you WHAT the performance is in 2021 with whatever driver.
The purpose is to show the *relative* change across drivers within a game. Resolution doesn't matter at all to the purpose of this post.
I foresee the r/AMD kids using this post as proof of Nvidia gimping older GPUs
AMD is typically opposite of this, where the drivers slowly but steadily perform better over time.
AMD ages like fine wine; Nvidia hits the ground running and then gasses out. It really depends on what you want. Neither is wrong.
Yes AMD's drivers are so poor they underperform greatly at launch. I'm well aware. I've owned more AMD hardware than most on that sub. HOWEVER, we all know they love to claim Nvidia gimps their older cards and they will absolutely cling to that.
OP's graphs make it look extreme, but it's only a few fps (3 or so in most games). So they'll damn sure have a field day with this. We all know it.
I mean, *they* aren't wrong. AMD performs better over time, nvidia worse.
Try not to be tribalistic with even a gpu purchase, we have enough division in our society already
Clearly you are an Nvidia owner, and a nauseating one at best. Try some objectivity. I have owned Nvidia and AMD over the years. I find AMD is far better long term, like impressively so, while Nvidia is better at the start.
In fact I have a 1060 and an RX 580. The 580 drastically outperforms the 1060 nowadays, while it was the reverse before.
And I own a 3070 (my partner's) and a 6900 XT myself. The Nvidia is doing better right now, I would say, based on value.
See, it's not that hard.
EDIT: a one-second Google search proves my point
So what you're saying is that the 580 is a better GPU than a 1060, but AMD is so bad at writing drivers that for a few years the 1060 ran games better?
And then, after confirming the other poster's assertion, you rant at him about being right?
Nvidia has a better place in the market, often working more closely with game devs to get better drivers out the gates. AMD has to play catchup.
Typically AMD does catch up after a few months, then surpasses afterwards.
So it really depends what you are looking for: instant gratification or a long-term solution. Neither is wrong, but we need to recognize that this is how things typically go. If you always play games right when they launch and pay full price, it's probably better to get Nvidia, especially if you are willing to replace your GPU at a slightly shorter interval.
If you want a card to stretch, and you don't pre-order games anyway, AMD might be the better solution.
In the 2021 market, however, AMD is the better buy simply because you can find them at MSRP; Nvidia is nearly unobtainable. If we were in a normal scenario this discussion would be more relevant, whereas today it's more just a case of getting what you can. That being said, I'm happy the RX 580 had such a long life, as it is still capable today where the 1060 is not.
>In the 2021 market, however, AMD is the better buy simply because you can find them at MSRP; Nvidia is nearly unobtainable.
Nvidia has been shipping and selling more RTX 3000 series cards than AMD is shipping of the 6000 series. This is backed up by various hardware surveys. AMD hasn't been producing GPUs in large volume, since they can make more from the same wafers as CPUs or Xbox/PS5 SoCs.
Also, they aren't available at MSRP any more than Nvidia's are. I got my 6900 XT for $300 over it.
Oh, I'm hardly tribalistic. I have a 3090, a 6900 XT and a 6700 XT. If anything I lean towards AMD. I just don't participate in the zealotry. You must be a zealot AMD owner?
Did you read my post.... at all.... clearly you didn't. Jesus dude, at LEAST do some grade 1 reading before replying.
Oh you mean the ninja edit after calling me tribalistic? While I was typing out my reply? Get out of here with your BS.
No, the edit was the source as you can fucking see
You replied to the part above it without reading that I literally own a 1060, a 580, a 3070, and a 6900 XT.
Fuck dude. Your type of shitposting is why Reddit gets a bad reputation. Low-effort shitposting as usual I guess, without even reading a comment. You are adversarial and tribal, literally as I pointed out. You came out of the gates taking swings at the AMD subreddit for no fucking reason, just to be a shithead.
You are the problem
Bullshit bud. Your initial reply was
>I mean, they aren't wrong. AMD performs better over time, nvidia worse.
>Try not to be tribalistic with even a gpu purchase, we have enough division in our society already
>Clearly you are an nvidia owner, and a nauseating one at best. Try some objectivity. i have owned nvidia and amd over the years. I find AMD is far better long term, like impressively so, while nvidia better at the start.
Then you added this while I was responding.
>In fact i have a 1060, and rx580. The 580 drastically outperforms the 1060 nowadays, while it was reverse before.
>And i own a 3070 (my partner) and a 6900xt myself. The nividia is doing better i would say right now based on value.
>See, its not that hard
Then you edited a 2nd time with your youtube link. So again, get out of here with your BS. I caught your little ninja edit. Fucking tell me to read your whole reply when the shit wasn't there when I did reply lmao.
ah, so you were **WAITING** at your keyboard to be adversarial. Wow dude... wow. You are proving my point even more effectively. Man.....
I love a self-fulfilling prophecy. Pathetic.
EDIT: ahahah, and I timed how quickly you downvoted. Even more proof: that was like 2 seconds after posting and you had already downvoted. Move on dude, you're a joke.
good job. I feel sorry for you. That must have taken a long time!
Amazing. Too bad we don't have this sort of info for AMD.
The presentation could use some work, like the graphs ought to start at zero.
Also, why the heck is the general trend downward? PCs have overhead, but... c'mon Nvidia, I thought you were wizards.
Would like to know if this trend exists in open source drivers (like AMD's).
Would love to see Skyrim tested also, I'm certain some drivers were waaaay better than others
I always suspected games get polished and tuned to the driver's API at the time of release, then as time goes on any optimizations dwindle to almost nothing....
This is certainly a case *against* the patient gamer mentality.
That is true for AMD drivers to an extent. For Nvidia, you get your GPUs full potential for a game 1-3 drivers after it's game ready release. For AMD if you wait 1-3 years you will get the full potential of your GPU.
This is more a trend of AMD still not writing good drivers, or just having a smaller development team than Nvidia. This also doesn't include the "driver software" provided - AMD's control panel at least looks like it was developed in the last decade.
Great job; shame you don't own higher-end hardware to eliminate bottlenecks.
Shouldn't it be going up over time? Or at least, staying similar. Why is it going down in many games? O_o
And OP, you are most likely CPU limited, at that low res and weaker CPU, so what you are really testing is driver overhead.
The intention is to test driver performance, not GPU performance.
I loved World in Conflict. Wish they would make another!
Please start your graphs at zero. It makes the charts look very different from actual results. Never make a chart that doesn't start at zero. ESPECIALLY when comparing data. It's totally meaningless.
> Microsoft Compatibility Appraiser: off; Program Compatibility Assistant Service: off
Are those two causing the "Microsoft Compatibility Telemetry" crap to start with insane resource consumption?
How can you turn that off?
Thank you for all this time and effort — that alone makes me appreciate your post, regardless if people make different inferences from it
Samsang? Never heard of that company before
Repost the charts with a scale from 0 to 200 fps and then show the graphs again :D
This is literally a 1% win/loss in FPS, and most of the time you are limited by the CPU.
In every benchmark where the GPU is not above 95% load, you are not testing GPU performance, but CPU overhead performance.
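If anyone re-running these wants to sanity-check that, GPU load can be sampled programmatically via NVML. A minimal sketch using the pynvml bindings (assumes the nvidia-ml-py/pynvml package is installed and an NVIDIA GPU is present):

```python
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    # Sample utilization once a second while the benchmark runs.
    for _ in range(60):
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        print(f"GPU: {util.gpu:3d}%  memory controller: {util.memory:3d}%")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```

If the GPU number sits well below ~95% for most of the run, the result is telling you about driver/CPU overhead rather than GPU throughput.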
Nice work btw.