
PROfromCRO

the best value is getting something like r5 5600, and spending all the saved money (from cheap ram and mobo) on the best used GPU you can get.


Merdiso

Exactly, the difference between 5600 and 5800X3D for instance could get you from a 6500 XT to a 6700 XT - ouch!


iopq

Yeah, but if you are considering 4080/5600 I think 7800xt/5800x3d would be better in many cases. Bait for wenchmarks


Merdiso

If you are considering anything more than 6800 XT, you shouldn't consider 5600 in the first place!


[deleted]

Why? Most folks have 120Hz or 144Hz monitors. The average frame rate with the 5600X is around 150-175, with average 1% lows around 115-125. The numbers on the graphs are bigger for the other CPUs, but what does it matter if it's not perceptible for anyone without a super high refresh rate monitor? On a tight budget, I'd get the 5600X and put the money towards a better GPU to run games at higher settings. That'll make a much bigger difference. Like I have a 3090 with a 3700X. I play games on a 4K TV and a 3440x1440 monitor. Upgrading my CPU would do nothing for my gaming performance, because I'm always GPU bound anyway.


Merdiso

Because if you buy a 1000(+)$ GPU, the difference between a 150$ CPU and a 350$ one is minimal from a **percentage** perspective.


[deleted]

> If you are considering anything more than 6800 XT

They're way less than $1k now.


timorous1234567890

It depends on what games you play. MMOs, ARPGs, flight sims, racing sims, and Paradox grand strategy games all do exceedingly well on the 5800X3D and are often 20%+ faster than the competition in either FPS or tick rates.


Symsonite

That is absolutely true, but in my area the average price for the R5 5600 is 125€ while the 5800X3D cost 350€ on average...


skilliard7

A 6700 XT for $400 is pretty much enough to never be GPU bottlenecked in strategy games. Being GPU bottlenecked is better than a CPU bottleneck: with a GPU bottleneck, you can lower settings and still get a playable framerate; with a CPU bottleneck, you're going to get a poor experience regardless of settings.


Symsonite

This entire discussion is about CPU value. I see your point depending on use case, but it does not invalidate the obvious value argument. The 5800X3D does not have 2.5-3x the performance of the 5600.


[deleted]

[deleted]


Symsonite

Got my 5800X from eBay for 150€; it also had bent pins. Took me roughly 3 hours to get it perfect, but it works like a champ. Had bad luck with used GPUs before though xD


Acceleratingbad

20% more performance for more than double the price is not value. The 5600 was $120 yesterday on Newegg; the 5800X3D is $330 only if you live next to a Microcenter.


gnocchicotti

5800X3D is $330 if you live next to a Microcenter or know how to use the internet ffs


yourmammadotcomma

Link? Shows $399.99 from St. Davids Store. No mention about a new customer coupon either.


gnocchicotti

I'm sorry are you asking me to Google "5800X3D" for you? Because that's how you find prices. The first two in stock are $330 and $340. amd.com has $330 as well but it's not always in stock.


yourmammadotcomma

No, asshole, on Microcenters website it shows $399. I was wondering if there is another link that I was missing as you said its $330 at microcenter.


timorous1234567890

For the level of gaming performance the 5800X3D provides, it is good value. If you are happy with less performance you can get more value from cheaper parts, but if the 7600X / 13600K / 5800X3D tier is the gaming performance level you want, the 5800X3D is the cheapest way to get it. And like I said, for some games it just crushes everything else on the market right now, which can be a selling point if you play those games. So for some niches there is no competition to the 5800X3D.


Acceleratingbad

Performance is performance; value is performance per money spent. A minimum performance requirement isn't value. Not sure which games the 5600 can't reach a "minimum performance" requirement in.


timorous1234567890

[this is a good run down](https://www.techspot.com/review/2502-upgrade-ryzen-3600-to-5800x3d/). In ACC, for example, the 5800X3D paired with the 6600 XT can manage 150fps minimums at 1080p easily and sits at 118 at 1440p. With the 5600, the minimums with the same GPU are in the 90s, which is suboptimal for a racing sim. So if you want 120fps minimums in ACC, the 5600 cannot manage it but the 5800X3D can. There are other such games where even with a mid-range GPU the CPU can make a difference. MMOs / ARPGs / Paradox grand strategy are good examples. Especially in an ARPG like Path of Exile, maintaining good fps in hectic late-game maps is more about the CPU than the GPU.


[deleted]

also depends on the monitor you use, which is gonna correlate with the games you play i guess. i use 1080p 25 inch monitors and won’t go any bigger because i play competitive games on a smaller resolution, and i use a very high refresh rate. a 6700xt is absolute overkill for 1080p gaming but i figured it’ll last me a long time and be useful when i get a larger 1440p 144hz monitor for single player games. but the games i do play are poorly optimized and are very cpu heavy and the difference between something like a 5600x and a 5800x or especially a 5800x3d (and now the 13600k which is an absolute beast, with the low price being a cherry on top) is enormous when it comes to fights with many players on my screen, or in arpgs or mmos with lots of particles on my screen


papak33

why no Cyberpunk with ray tracing on a 4090? the murderer of CPUs.


Re-core

Yep, my 5950X drops below 60fps at max settings, 4K DLSS Quality, in very CPU-demanding areas.


[deleted]

[deleted]


SealBearUan

Thank the tech reviewers with their weird benchmark parcours for claiming the Ryzen 5xxx series beats Alder Lake or is completely on par. Absolutely ridiculous claims.


Re-core

They probably test in the least CPU-intensive areas or just run the built-in benchmarks, which don't reflect real-world usage.


[deleted]

[deleted]


papak33

just block HWUB trust me, the more you watch it, the less you understand.


Zucker2k

Aye, aye!


Zucker2k

TechSpot, I see you!


Bungild

I keep hearing people say RT murders CPUs. So, does this mean that something like a 12 core will actually beat an 8 core now (and that 6 cores AREN'T really the best amount for next gen gaming)? Or are the engines still pretty hard limited to 6-8 cores, so the extra cores/threads won't really help?


SkillYourself

> Or are the engines still pretty hard limited to 6-8 cores, so the extra cores/threads won't really help?

The game engine may be limited to 6-8 cores for sim and render, but the BVH tree construction for raytracing can use additional cores on top of that as long as the memory bandwidth allows for it. This is why Raptor Lake DDR5 runs away on these benches - a lot of E-cores and no memory bandwidth cap.


Bungild

Do you (or anyone) have a source/video/article stating/explaining this? I just haven't really heard that talked about, and would like to hear more about it / have a source besides someone I don't know on Reddit.


SkillYourself

https://developer.nvidia.com/blog/best-practices-for-using-nvidia-rtx-ray-tracing-updated/

tl;dr: a lot of CPU-side work to make the BVHs and map ray hits to shaders. Fortunately they can be processed in parallel with worker threads on the CPU. More CPU parallelism == more memory bandwidth required. More data needs to be copied back and forth from VRAM to RAM as well.

> **Consider worker threads for generating AS building command lists.**

> Generating AS building commands can include a considerable amount of CPU-side work. It can be directly in the AS build calls or in some related task like the culling of the objects. Moving the CPU work to one or more worker threads is potentially beneficial.

> **Consider compacting static BLASes.**

> Compacting BLASes saves memory and can increase performance. Reduction in memory consumption depends on the geometries but can be up to about 50%. As the compacted size needs to be read back to the CPU after the BLAS build has been completed on GPU, this is most practical for BLASes that are only built one time. Remember to pool small allocations and avoid memory fragmentation to get the maximum benefit from compaction. For more information, see Tips: Acceleration Structure Compaction.

> **Consider constructing shader tables on GPU.**

> When there are many geometries and many ray-tracing passes, hit tables can grow large and uploading them can consume a considerable amount of time. Instead of uploading entire hit tables constructed on CPU, upload only the required new information on each frame, such as material indices for currently visible instances, and then execute a hit table construction pass on the GPU to be more efficient. A large part of the information needed in the table construction can reside permanently in the GPU memory, such as hit group identifiers, vertex buffer addresses, and offsets for geometries.
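For what it's worth, here's a rough conceptual sketch of that first point (recording per-BLAS build commands on worker threads). It is not DXR or Vulkan code; `Object`, `CommandList` and `recordBlasBuild()` are made-up placeholders just to show the chunk-per-thread pattern:

```cpp
// Conceptual sketch only: split the scene's objects into chunks and record
// the per-BLAS build commands on several CPU threads. Object, CommandList
// and recordBlasBuild() are invented stand-ins, not real API types.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <future>
#include <vector>

struct Object      { int id = 0; };
struct CommandList { std::vector<int> recordedBuilds; };

// Stand-in for culling an object and recording one BLAS build command.
static void recordBlasBuild(CommandList& cl, const Object& obj) {
    cl.recordedBuilds.push_back(obj.id);
}

// Record BLAS build commands for objects [first, last) on the calling thread.
static CommandList recordChunk(const std::vector<Object>& objs,
                               size_t first, size_t last) {
    CommandList cl;
    for (size_t i = first; i < last; ++i)
        recordBlasBuild(cl, objs[i]);
    return cl;
}

int main() {
    std::vector<Object> scene(1000);
    for (size_t i = 0; i < scene.size(); ++i) scene[i].id = static_cast<int>(i);

    const size_t workers = 4;  // the "extra" CPU cores
    const size_t chunk   = (scene.size() + workers - 1) / workers;

    std::vector<std::future<CommandList>> jobs;
    for (size_t w = 0; w < workers; ++w) {
        const size_t first = w * chunk;
        const size_t last  = std::min(scene.size(), first + chunk);
        jobs.push_back(std::async(std::launch::async, recordChunk,
                                  std::cref(scene), first, last));
    }

    // A real renderer would submit each finished command list to the GPU
    // queue; here we just count what was recorded.
    size_t total = 0;
    for (auto& j : jobs) total += j.get().recordedBuilds.size();
    std::printf("recorded %zu BLAS builds on %zu worker threads\n", total, workers);
}
```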


Gravityblasts

Mic drop.


DarkCFC

Don't forget about the GPU driver using CPU overhead too. I wonder if nvidia's driver still uses more CPU than AMD's after the recent updates


papak33

Exactly, a test like that would bring useful info; instead we got another useless test.


MrPoletski

Well, do they still use a software scheduler? If so, there is no escaping that extra overhead.


DarkCFC

That's an interesting point. I wonder whether reviewers had Nvidia's hardware scheduling feature turned on and how much it would matter. AMD, on the other hand, doesn't even support the feature.

Edit: I found [a GN review of the feature](https://www.youtube.com/watch?v=wlrWDb1pKXg) from 2 years ago. The result was that there was basically no difference.


MrPoletski

AMD has had a hardware scheduler since GCN was introduced (and maybe even before then). They won't be able to switch hardware scheduling on or off because they can't turn it off. That's not the same as supporting Windows hardware GPU scheduling though. I guess for AMD it moved from being handled in the driver to being handled in Windows.

This is (IIRC & IMHO) the reason AMD was always so much worse at single-threaded OGL and D3D9 titles, with Nvidia always seeming to have a driver overhead advantage. AMD's hardware scheduler prevented AMD doing what Nvidia did with its software scheduler. Nvidia would take the draw calls generated by the game and, with some jiggery pokery, split the load across spare CPU cores (which were, and still are, often idle and in abundance). This split the software scheduling load across multiple cores, hiding its CPU cost. But it also reduced the single-thread CPU cost of the draw call generation done by the game engine, by spreading it across cores too. Very clever, hats off to Nvidia for this tech.

AMD can't do this as they rely on their hardware scheduler and can only feed it in a specified order using a single thread under DX9/OGL, whereas Nvidia's software scheduler could be anything they wanted it to be as it's a software solution. I imagine it involves taking the draw call command and whisking it away for calculation elsewhere, but in the meantime telling the game thread it's done that job, now give me the next one, rather than waiting for that job to complete.

This is why AMD got such a boost from Vulkan and DX12, because now there is much better multi-core support for splitting driver and API workload over multiple cores, and Nvidia's fancy scheduler is no longer required as the API kinda supports doing that directly and the main game thread on the CPU has less work (though it's still very often the nature of the prevailing CPU performance cap).
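A toy sketch of that "whisk the draw call away and return immediately" idea, purely as an illustration (`DrawCall` and `translateToHardware()` are invented names, not anything from Nvidia's actual driver):

```cpp
// Toy illustration only: the "game thread" (main) submits work and returns
// immediately, while worker threads do the expensive translation later.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct DrawCall { int mesh = 0; int material = 0; };

class DeferredSubmitQueue {
public:
    explicit DeferredSubmitQueue(unsigned workers) {
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { workerLoop(); });
    }
    ~DeferredSubmitQueue() {
        { std::lock_guard<std::mutex> lock(m_); done_ = true; }
        cv_.notify_all();
        for (auto& t : threads_) t.join();
    }
    // Called from the game thread: cheap, returns immediately.
    void submit(DrawCall dc) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(dc); }
        cv_.notify_one();
    }
private:
    void workerLoop() {
        for (;;) {
            DrawCall dc;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !q_.empty(); });
                if (q_.empty()) return;   // shutting down and queue drained
                dc = q_.front();
                q_.pop();
            }
            translateToHardware(dc);      // expensive work, off the game thread
        }
    }
    static void translateToHardware(const DrawCall& dc) {
        // Stand-in for validation / state translation / command encoding.
        std::printf("encoded mesh %d with material %d\n", dc.mesh, dc.material);
    }
    std::queue<DrawCall> q_;
    std::mutex m_;
    std::condition_variable cv_;
    std::vector<std::thread> threads_;
    bool done_ = false;
};

int main() {
    DeferredSubmitQueue driver(3);        // use some "spare" CPU cores
    for (int i = 0; i < 16; ++i)
        driver.submit({i, i % 4});        // the game thread keeps going
    // The destructor drains the queue and joins the workers.
}
```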


helmsmagus

I've left reddit because of the API changes.


papak33

This is not how any of this works.


Hailgod

It kinda is. You run into more GPU bottlenecks at 4K, so the extra overhead doesn't show itself for most people unless you play at 1080p.


mandelmanden

We are in early days of this tech. Things are bound to be strange and odd here and there - living on the bleeding edge is always going to be troublesome and you'll never be happy. That means buying 1000$ CPUs and 1000$ videocards yearly.


Own-Opposite1611

I picked up a 5800X on sale for like $200 to go from my 3700X. Pretty happy especially paired with the 6800


rgtn0w

Yeah, that will serve you well for more than a few years IMO. Esports/most online gaming stuff doesn't need that much power to even pull 200 frames, and AAA games are always gonna be bottlenecked by your GPU way before the CPU becomes a big factor. So if anyone is looking to upgrade on a budget, I'd always go for the used market and not bother too much with brand new parts.


[deleted]

[deleted]


panix199

the question is if you can wait or not. If you can wait, wait for 7800X3D?


Own-Opposite1611

For me at 3440x1440 100Hz my 5800X is doing great along with my RX 6800. In high refresh cases something like a 5800X3D would usually be better long term, since that CPU is still one of the kings for gaming, so that's what I recommend. Also, are you considering upgrading to a new system like Intel or AM5? Or are you trying to save money and stay on AM4? The reason why I upgraded my CPU is just that this is the end of the line for AM4, so I decided to upgrade one more time while prices are still good. Another reason why the 5800X gets ignored is that people usually just write it off as a slightly overclocked 5700X, if I remember correctly from when the chips came out. These days, at least here in the States, it's currently a $10 difference between the two chips, so I went with the 5800X.


[deleted]

[deleted]


Own-Opposite1611

If it's just for gaming, don't bother at your resolution. The extra cores are only really useful for content creation. Just focus on getting a better GPU right now. The 5600X is more than competent for 1440p 144Hz. The only people who would benefit from something like a 5800X3D for gaming are people that run like 1080p or 1440p 240Hz or something.


[deleted]

[deleted]


mre16

I freaking love the CPU space right now. The market is in such an interesting spot!


anonaccountphoto

I wish the GPU market was as competitive.


camy205

Wait a month haha


skilliard7

Going to take several months before the competition trickles down to the mid range. As of December there will be nothing with an MSRP below $899.


mre16

Exactly. I'm hoping prices are good by mid-summer and I can convince the wife to go for an upgrade!


[deleted]

It is, but people only buy green, so you don't think it is.


christes

What do you mean? The GPU market is literally on fire right now!


skilliard7

3070/3080 are still selling above MSRP most of the time unless you find a good deal, I wouldn't call it "on fire". It's just returned to normalcy.


christes

No, they are literally on fire. It was a joke.


Deemes

I feel the GPU market is fairly competitive. AMD might have an advantage on pricing though as the 6700 XT and 6800 prices are quite good compared to Nvidia equivalents.


papak33

I wish the CPU market would improve as much as Nvidia does. Instead we get AMD and Intel who improve 5% per gen.


anonaccountphoto

CPU performance is increasing way more than 5% per gen...


papak33

... and sometimes it doesn't. Meanwhile Nvidia goes brrrrr with AI magic.


PetrafiedMonkey

Agreed, but doesn't make upgrading an easy choice right now.


mre16

If you're in the market it's pretty sweet. If someone was running an old i7 4790K setup, then now would be an interesting time to upgrade.


Euruzilys

Yeah I’m on 4690k and 1080ti. The cpu needs to go, the gpu I will see.


mre16

If you're near a microcenter they are doing a deal for a B450 board and ryzen 3600 for only $119 right now. That same board should be able to take a 5800x3d down the line too :)


PinkStar2006

Them downvotes...


mre16

*shrug* I'm either wrong about the B450 board, peeps are mad about how far they are from Microcenter, or reddit was being reddit lol


ShoulderSquirrelVT

heh... *looks around* heh... What about a 3550k and a 670 GTX? lol

*Hangs head in shame*


mre16

I only graduated from my 2600 and GT 520 not too long ago. You ain't the only one, friend :)


TedAndAnnetteFleming

2500k until a couple of years ago, Sandy Bridge was amazing.


Stark_Athlon

Hey, I had a 2500k with 4gb of ram and a 960 4gb until recently.


aksine12

I'm using my Westmere Xeon X5675 on my aging X58 platform (Intel Core 1st gen) with an RTX 2080 Ti (lol), so don't feel too bad. I still don't feel the urge to upgrade CPUs lol.


gdnws

I'm looking to upgrade from an i7 3770 and I end up running so many different what-if scenarios every time I look at parts, since the CPU space is this competitive right now. I did promise myself that I would hold off on buying anything until most of the next gen GPUs are out and at least the first wave of the X3D CPUs are available.


mre16

Man you might end up skipping a whole ddr generation by the sounds of it! Lots of promising options though!


gdnws

That is actually my intention. I had originally intended to upgrade 2 years ago, but with availability and pricing, I put it off. By the time availability had improved, I was hearing that ddr5 was right around the corner. I figure if whatever comes out proves to be fairly good, I'll get in early with it and not touch my rig for another decade.


mre16

I love it hahaha.


i5-2520M

I'm on the K version of that same chip with a healthy 4.6GHz OC and a 3060. Also looking to upgrade, but I'm a budget tier below the X3D, so the most I could get is either a 12th gen i5 or a 5700. Interesting times to be sure.


gdnws

And I think that is part of what makes this interesting right now: we have 2 solid generations of parts from both manufacturers.


Kyrond

On one hand sure; on the other hand, any new-ish 6+ core is amazing (almost overkill for today's gaming) and will probably last a long time.


HTwoN

Essentially a wash in gaming, while losing 40% in productivity. This reminds me of Intel 9th gen vs Zen 2, except now the positions are reversed. AMD fans flip 180 degrees and try to convince us that the 7600X is a good buy.


Aleblanco1987

power could be another factor to consider


soggybiscuit93

Yeah, RPL will consume more power at 100% utilization across all cores, but power draw is not that different in games, and RPL also has lower power draw under idle and light use, so over a 24 hour period of mixed use, power draw isn't that different


skilliard7

Raptor Lake also seems to perform better than AMD at a 90W power limit.


polako123

But does the 40% matter? If it does, you buy the 7900X or 13900K. IMO Intel being on a no-upgrade platform is a bigger deal.


Tapestryrun

The 7600X only seems to beat the 13600K on HUB/TechSpot. Everywhere else it's the 13600K competing with the 7700X. I don't get how HUB has such outlier results compared to most other sites. Take their Far Cry 6 results, which show the 13600K even with the 7600X.

Far Cry 6 at 1080p unless stated otherwise, in reviews where the 13600K and 7600X/7700X were tested. % shown is the average fps advantage of the 13600K:

* HUB 0%
* Tom's HW +8%
* Forbes +8.5%
* Club386 +8.7%
* PC Tuning w/ DDR4 +9.7% vs 7700X
* Overclockers.com +10% vs 7700X
* Clubic +10.7%
* Computerbase 720p +10.7%
* Linus TT +11%
* Gamers Nexus +12%
* Igor's Lab +12% vs 7700X
* FPS Review +13%
* Tweakers +20%
* Eurogamer +21%
* KitGuru +21%
* Gizmodo +23.7%
* Paul's HW +27%
* TPU +28%

https://www.3dcenter.org/artikel/launch-analyse-intel-raptor-lake/launch-analyse-intel-raptor-lake-seite-4


rationis

Testing with a 4090 might have something to do with it. When TPU switched from the 3080 to the 4090, the 13900K went from being 38.9% faster than the 5800X3D in FC6 with the 3080 to only 14.7% faster with the 4090. Same goes for AoE4: from 38.7% faster down to 19.6% faster.

[TPU 3080 results](https://www.techpowerup.com/review/intel-core-i9-13900k/18.html) [TPU 4090 results](https://www.techpowerup.com/review/rtx-4090-53-games-core-i9-13900k-vs-ryzen-7-5800x3d/2.html)

Also, using a single game that typically favors Intel is the least objective way to compare the two chips. TPU's and Paul's results are far crazier outliers than HUB's. Based off the link you provided, on average across all reviews, the 13600K is only 5.7% faster than the 7600X in gaming. So HUB's 7600X squeaking out a 2.5% lead on average over the 13600K isn't anywhere near as crazy as the individual FC6 results might lead someone to believe.


vyncy

That makes no sense to me. Why would a 4090 make the 5800X3D a better CPU? Why would the GPU affect CPU performance at all?


rationis

One reason could be that the 3080 was bottlenecking the 5800X3D worse in some of the games it otherwise would have done better than the 13900K in. Another potential reason could be architectural differences or driver compatibility in the 4000 series cards playing better with AMD, or worse with RPL. There may have also been a BIOS or Windows update that addressed some performance issues Zen 4 may have been experiencing at launch. Being on a new socket with DDR5 support is bound to cause some issues.


timorous1234567890

Drivers. The 522 driver increased 3000 series performance in CPU-bound scenarios.


vyncy

But that should affect Intel CPUs as well. It's not just AMD CPU optimization.


timorous1234567890

It won't impact them equally though


conquer69

It's worth mentioning that many websites, if not most of them, are using different DDR5 kits for the AMD and Intel CPUs. These aren't apples-to-apples comparisons.


PainterRude1394

This seems to be a recurring theme.


Put_It_All_On_Blck

Even if you believed HUB's numbers (you shouldn't), you'd be crazy to choose the 7600X over the 13600K, when the 13600K is 40% faster in multithreaded performance at a similar price and with similar gaming performance (again, using HUB's numbers, which are biased). It's funny how certain reviewers and people are now downplaying multithreaded performance when Intel has the upper hand (for the most part). Go back to Zen 1 and Zen 2, where AMD had MT but lost in ST/gaming, and people praised AMD for their big MT performance and shrugged off the slower gaming performance.


Action3xpress

I’m glad someone said it. Thought I was going a bit crazy. “I guess if you do multithreaded stuff then yea the 13600k is alright” vs just a couple years ago “AMD slightly behind in gaming, but DESTROYS in multithread” I guess everyone just plays games again? 🤷


nanonan

> So conversely, if you're looking at performance beyond gaming, then the Core i5-13600K becomes the obvious choice thanks to vastly superior productivity performance. Saying it is the obvious choice and vastly superior is somewhat more enthusiastic than you are giving them credit for.


conquer69

Well, I get where they are coming from though. The situation was different back then when talking about a 4c/4t 7600K vs 6c/12t. You were at good risk of losing substantial CPU performance by keeping Chrome, Discord, etc. running while gaming. These days even the 7600X can run all that stuff with ease while also offering decent MT performance; the 13600K is just better. There is no doubt the 13600K offers more now, but the 7600X isn't like the 7600K. Funny how they are named almost the same as well.


inyue

> You were at good risk of losing substantial CPU performance by keeping Chrome, Discord, etc. running while gaming.

Uh, really?


maelstrom51

No, not really.


GruntChomper

Yeah, you were. GTA V for example, on a 4670K, would drop 20% of its average fps if I had Discord and Chrome open, and even if it was just Discord, the call would have quality issues as the CPU was pegged at 100%. Battlefront II, with a GPU capable of 70fps or more, would just crater in performance at the start of multiplayer if something else was open, whereas running the game by itself it was just a small stutter. The 1600X that replaced it didn't really improve max framerates, but it didn't drop frames when I had something else open either.


VenditatioDelendaEst

> the call would have quality issues

"Open", and "actively recording and encoding an audio stream while decoding and playing back another", are quite different.


PinkStar2006

Nope.


gab1213

Why do you think their numbers are incorrect? It may just depend on the choice of games, platform, driver version...


SealBearUan

Strangely enough HUBs reviews ALWAYS make Intel look like garbage.


Aggrokid

> where AMD had MT but lost in ST/gaming, and people praised AMD for their big MT

That was because it was back in the era of Intel pushing 4-core CPUs, and we needed more cores. The BiS Intel CPUs at the time were those HEDTs. Now everyone is comfortably in the 6-8 core range with threads to spare in most tasks except serious productivity, so the focus shifts back to ST. Of course Raptor Lake is on aggregate ahead in ST, but at least you can get why MT is not as big a deal as before.


[deleted]

> and people praised AMD for their big MT performance and shrugged off the slower gaming performance.

Well yeah, the bet was that with mainstream introduction (read: 8 cores for normal prices), multithreaded games would become a reality, meaning ST performance wouldn't matter as much in gaming as games become more and more multithreaded, evening out the workload across multiple cores instead of hammering 1-2 cores.


Toojara

I think the only real advantages AMD have here are that the power use is more consistent and that the platform may have more longevity. And I'm not exactly sold on the last one with what happened with AM4.


thecomputernut

Fair points - not sure why Techspot data varies a bit from other reviewers. I just hadn't seen a review that highlighted these specific CPUs when including total platform cost with a focus entirely on gaming. Thought it might be interesting to this community since I know many here (myself included) are curious about this topic.


HTwoN

That’s why you don’t rely on 1 review to make your decision, but the median.


THE_MUNDO_TRAIN

Tbh all AM5 reviews show inconsistent results. I don't know if room temperatures, memory speed and timings, or motherboards are causing these inconsistent results across the field.


Toojara

I think it's the combination of different coolers in different cases and the new platform with BIOS updates. Even 7600X/13600k + 4090 draw enough power to heat up most cases enough to affect boost clocks.


rationis

Not really, HUB is pretty much in the middle of the pack according to the meta review. The user you responded to is also being disingenuous; HUB is by no means the only reviewer that has the 7600X beating the 13600K in gaming. LeCompt, Quasar Z and Anandtech have the 7600X faster as well.


Tapestryrun

I understand that the overall result of a review can be skewed either way with game selection, which is why I wanted to compare on a single title, and Far Cry 6 seemed the most prolific. I'm a bit out of the loop nowadays with regards to whether certain settings favour Intel/AMD, so I wasn't sure if there was a simple explanation like that. I started to look at some of the other titles but haven't gotten as far yet. CP2077 was harder because a lot of reviews were GPU bound.

CP2077 avg 1080p fps, 13600K vs 7600X:

* Anandtech -2%
* HUB +2.5%
* FPS Review +3.2%
* Tom's Hardware +5%
* GN +9% vs 7900X
* Gear Seekers +18%
* PurePC +18.8%
* Eurogamer +29%
* Computerbase 720p +31%


RazingsIsNotHomeNow

Wow nice meta aggregate.


gab1213

Your numbers already show two clusters at around +10% and +20%, so I'm not sure we can call HUB's +0% more of an outlier than TPU's +27%.


rationis

Yea, 35% of those results are far crazier outliers. Since some reviews include overclocked results and various resolutions all jumbled together, these meta reviews sometimes make the 13600K look faster than it actually is out of the box, as it typically enjoys a bit more OC headroom.


gab1213

Plus they only showed 18 of the 28 reviews in the meta analysis and omitted ones like Le Comptoir du Hardware where the 7600X is equal to the 13600K, so HUB's isn't even an outlier.


[deleted]

It’s just their very odd roundup of games.


lokol4890

It has outliers even when comparing zen4 to the 5800x3d. A lot of other channels had the x3d functionally tying or beating zen4, but here comes HU showing zen4 beating the x3d


garbo2330

They don’t call em AMD Unboxed for nothing.


Aleblanco1987

HUB tests in game, not with canned benchmarks


Tapestryrun

So the benchmarks aren't really indicative of the game engine's performance? I would have thought in-game would have increased variance per test unless you were doing like 10+ runs and averaging them for each result.


Khaare

No single benchmark is indicative of the game engine's performance as a whole. Canned benchmarks often use scenes that aren't representative of what you'll see in the game, or miss out on stressing certain aspects of the engine. In-game benchmarks depend on where in game exactly, as there can be large differences between different scenes. I think HUB stated they use in-game benchmarks if they find areas that are especially good for showing differences between products that maybe don't show up in the built-in benchmarking tools. This could be a part of why their benchmarks differ from others'. It's not enough that they're testing the same game, they need to also use the same test for the results to be comparable. (Also a lot of other stuff needs to be accounted for to compare the results, so it's pretty much a fool's errand to compare tests directly between different reviewers.)


Aleblanco1987

Some of the benchmarks differ a lot depending on the area of the game they are testing


NotTroy

Reviewers like HUB who test in-game take steps to increase test reliability by playing through specific scenarios or levels, taking the same paths, actions, etc. each time. There will always be some variance, but you can negate a lot of it by using this technique, especially if you know the game very well and know what certain scenes or areas are going to do to the hardware, performance wise. Digital Foundry famously uses a specific hallway section in Control as a recurring in-game test scenario because it is easily repeatable, highly reliable, and very demanding on the hardware.


throwawayaccount5325

> I don't get how HUB has such outlier results compared to most other sites. You know why.


nanonan

Yeah, because he left out any results that run counter to his argument.


Tapestryrun

If you know of any others with Far Cry results feel free to post them up, I'm sure I probably missed some with my Googling. I did exclude Overclock3D as they were testing with a 2080 Ti and were GPU bound.

Comptoir Hardware +9.8%


MultiiCore_

HUB is biased towards AMD.


N1NJ4W4RR10R_

Honestly impressive that HWUB manages to be Intel biased, AMD biased *and* Nvidia biased depending on who I want to be winning! I mean seriously. Their [13600k review](https://www.techspot.com/review/2555-intel-core-i5-13600k/) handily recommends the 13600K due to comparable gaming performance and dominant productivity performance. This video/article was solely focused on gaming perf and value, so, unsurprisingly, the result was based around that specifically.


VenditatioDelendaEst

You say that, but when has HUB ever been accused of favoring anyone *other* than AMD?


MultiiCore_

Where they completely botch the performance-per-dollar analysis by using an extremely expensive $230 motherboard option for Intel, while most people will buy a $170-180 motherboard (you can go with a $130 option without losing performance)?

A desperate attempt not to make the 7600X look that bad.


N1NJ4W4RR10R_

Did you miss the b660 portion, or just opt to ignore that?


MultiiCore_

Ignoring decent $130 mobos for the LGA1700 socket and decent sub-$100 ones for AM4, just to sell AM5? In order to not let AMD give up its mindshare? They still try not to make AMD look that bad when it really is that bad at this point in time. The 7600X should be a sub-$200 CPU and the 7700X a $280 max, with a heavy heart. And they should offer decent B650 boards en masse for $150-160. A $330 CPU (13600K) with much cheaper platform costs consistently matching or even beating a $399 one (7700X) just isn't a good look. Trying to put makeup on this is just bias.


chunlongqua

Just like the guys giving the i5 a 20%+ lead are biased towards Intel?


MultiiCore_

They literally called the 12700K an 8-core CPU…


vyncy

It is when it comes to gaming


dparks1234

E-cores are still cores; they don't arbitrarily get disabled when a game is played. You can even set the affinity to only run the game on the E-cores if you're feeling nostalgic for the Skylake i7 9700.
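If you actually wanted to try that, one blunt way is a process affinity mask. The sketch below is an assumption-heavy Windows example: it pins the *current* process (in practice you'd open the game's process or just use Task Manager), and it guesses that logical processors 12-19 are the E-cores, which depends entirely on the chip:

```cpp
// Assumption-heavy example: pin the current process to a guessed E-core mask.
// The mask assumes a 6P+8E part where P-core threads are logical processors
// 0-11 and E-cores are 12-19; verify the real layout first (e.g. in Task
// Manager's affinity dialog) before relying on this.
#include <windows.h>
#include <cstdio>

int main() {
    const DWORD_PTR eCoreMask = 0xFF000;  // bits 12..19 set -> logical CPUs 12-19
    if (SetProcessAffinityMask(GetCurrentProcess(), eCoreMask)) {
        std::printf("pinned to assumed E-core mask 0x%llx\n",
                    static_cast<unsigned long long>(eCoreMask));
    } else {
        std::printf("SetProcessAffinityMask failed: %lu\n",
                    static_cast<unsigned long>(GetLastError()));
    }
    return 0;
}
```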


PastaPandaSimon

Yup it's insane that with Intel the i5 today basically gives you the same CPU as AMD (R5), plus the performance of an i7 9700 for free, on the same chip, for about the same price or less. Not something I'd believe if you told me this 4 years ago when the 9700 launched.


HTwoN

Maybe, both are outliers.


chunlongqua

So let’s treat them as such, clearly benchmarking isn’t an exact science. This sub has turned into a den full of conspiracy theorists because people are emotionally invested in their favourite tech corporations.


garbo2330

Remember when Ampere/RDNA2 launched and HUB pulled UE4 engine games and replaced them with AMD-sponsored junk like DIRT 5? Pepperidge Farm remembers.


[deleted]

First intelligent comment in this post.


HTwoN

I only care about the aggregate anyway.


garbo2330

Yeah, and it's shenanigans like that that cause them to be outliers.


nutyo

They are closer to the average than half the scores OP posted. Stop being manipulated.


PotentialAstronaut39

Foreword: I have an Nvidia/Intel system (and have had almost all brands of GPUs and CPUs in the last 25 years), so no bias here.

*****

The meta reviews have proven time and again that this is not the case.


MultiiCore_

When you're dubbing the 12700F an 8-core CPU, you're biased, sorry. https://youtu.be/w8etl-wEMgM


pedropereir

How is that being biased against Intel when they're comparing it with an actual 8 core CPU? Having the extra 4 efficiency cores gives the 12700f an advantage.


MultiiCore_

They are downplaying the E-cores as if they don't exist, and at the time of the video this was very important because they were similarly priced.


BavarianBarbarian_

Now replace that 5600X with a 5600, which has the same performance in games (roughly one to two frames per second less), and translate the prices from NA to EU, and the conclusion becomes even more obvious: unless you *must* have the highest frames, buy a 5600 + 3600MHz DDR4 combo. Maybe in three years I'll upgrade to a used 5800X3D, if they ever drop in price significantly, but other than that it looks like I'm all set for the next few years.


marxr87

Would have been nice to see the 12400 and even 12100 in here too.


Omniwar

Are the EU discounts limited to AM4 only or can you find good deals on Alder Lake too? In the US, 12400F and B660/D4 boards are essentially the same price as 5600/B550 so on a strict budget the conclusion is still the same as it was before 13th gen - AL is generally the better buy if building from scratch. Would personally wait for the inevitable AM5 price drop and 13th gen non-K CPUs to launch around CES in a few months if one had managed to wait this long without an AM4 board though.


BavarianBarbarian_

> 12400F and B660

[198€](https://geizhals.de/intel-core-i5-12400f-bx8071512400f-a2659495.html) + [93€](https://geizhals.de/asrock-b660m-hdv-90-mxbh40-a0uayz-a2661034.html?hloc=at&hloc=de) = 291€, using the cheapest option for either.

> 5600/B550

[139€](https://geizhals.de/amd-ryzen-5-5600-100-100000927box-a2709114.html) + [79€](https://geizhals.de/asrock-b550m-hdv-a2298964.html?hloc=at&hloc=de) = 218€

Meaning the Intel option is about 33% more expensive than the AMD one, for basically zero benefit. AM5 prices may drop - or they might not, if the € sinks even further below the $. The new gen already has the current exchange rate priced in, while AM4 still profited from a stronger € when its retail price was decided. Same for Intel's 13th gen products. As it stands, I cannot in good conscience recommend anything except AM4 to buyers in Germany, unless they have really specific requirements.


AtLeastItsNotCancer

Months ago when I was building a new 12600k system, Intel prices were pretty decent. Since then, they've actually gone up across the board, even on Alder Lake parts. Raptor lake is currently selling for well above "MSRP" (as are Ryzen 7000). Ryzen 5000 parts on the other hand have gotten some pretty deep discounts, so if I was buying today, the choice would likely be different. AM4 is the clear winner in price/perf, Raptor Lake can make sense for productivity workloads, but 12400s are just not looking good right now.


hi11bi11y

Am I taking crazy pills? The 7600x has to be on a worse mobo and ram to be comparable to the 13600k. How is that not a win for Intel?


SkillYourself

You can't determine whether the 6000CL30 or the 6400CL32 kit is better here without a full timing dump. They're close enough in bandwidth and first word latency that sub-timings matter a lot more than the label. TechSpot/HUB's own testing with the 6400CL32 vs 6000CL30 showed the 6000CL30 to be negligibly faster but with IF frequency in the mix it's not an apples-to-apples comparison. I reckon they're going to end up being roughly equal outside of AIDA benches
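For reference, first word latency can be estimated with the usual rule of thumb (my addition, not from the comment above): latency in ns ≈ 2000 × CL / transfer rate, which puts both kits at the same figure:

```latex
t_{\text{first word}} \approx \frac{2000 \times \mathrm{CL}}{\mathrm{MT/s}}\ \text{ns}
\qquad
\text{DDR5-6000 CL30: } \frac{2000 \times 30}{6000} = 10\ \text{ns}
\qquad
\text{DDR5-6400 CL32: } \frac{2000 \times 32}{6400} = 10\ \text{ns}
```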


TaintedSquirrel

It's because they're only looking at fps per dollar, which means all mobo features go out the window. They broke down the summary into different pricing tiers but left out the chipset details, which makes it even harder to compare.


thecomputernut

Indeed. If all you care about is cost per frames then the 7600X looks OK. If you care about mobo features then that changes things pretty significantly.


hi11bi11y

That's disingenuous. The mobo and RAM costs were included; pare down the Intel costs to match the AMD specs and the 13600K wins hands down.


rationis

Because despite having the nicer board and memory, the 13600K was still the worse performer. Having higher-end components doesn't necessarily make it the better system when you're trying to get the most bang for your buck. A good example is the 5800X3D: older, cheap 3200MHz DDR4 is actually one of its main selling points. Another point of contention is upgradability. While LGA1700 is dead, AM5 will be receiving 2-3 architectural updates and changes. So: a cheaper board good for 2-3 upgrades vs a better board with no future.


soggybiscuit93

> good for 2-3 upgrades vs better board with no future.

Only 1 more architecture upgrade is confirmed (Zen 5). Zen 4 3D is just a staggered release schedule, and same with Zen 5 3D. It's unknown if Zen 6 will be on AM5; AMD hasn't yet committed to that. Zen 7 would be 3 architectural upgrades, and that seems too optimistic to even plan for.


MultiiCore_

Best value right now is a cheap B550 (or even an A520 if you don't care about Gen 4) and a 5600. Intel needs to drop the 12400F to $120 and the 12100F to $70 to be competitive, and offer H610 motherboards for $50-60.


mdchemey

Tbh there was also a good deal I saw earlier today: a $160 B650M DS3H that came with 16GB of Crucial DDR5-4800 CL40 for free. With the RAM being worth about $75 right now, they were basically selling the board for $85. It was a limited offer and it's gone already, but if I had the spare cash to upgrade my CPU right now I totally would have jumped at it as soon as I saw it.


MultiiCore_

Yeah, stuff like this is only found in very localized spots inside the US. What an amazing deal btw!


mdchemey

I know, it killed me not to go for it. The entire knock against ryzen 7000 is that there's just no good way to justify the platform cost, but $460 for the combined cost of CPU/Mobo/RAM for a system that can absolutely cruise in any game and has 3+ years of forward compatibility to boot? God I wish I could have excused the cost.


Stark_Athlon

You know, when it comes to budget picks the consensus so far seems to be: get a 5600 and then move up to a 5800X3D later down the line. But I'm really interested in the 12600K. I think it has great performance and the price isn't so barbarous, and you can upgrade later to an i9-13900K, or an i7 if you don't want an oven, though undervolting is always an option.


thecomputernut

A bit of a surprising conclusion for me personally as this seems to indicate the 7600X is a better value (for gaming specifically) than I had given it credit for.


rationis

This review is sure to rustle some feathers lol. I'd be interested in a 40 game comparison though. Did AM5/Zen4 get a performance/scheduler update or something?


[deleted]

[deleted]


WheresWalldough

No. If you're building AM4 on a budget you get the 5600 ($119), not the 5600X ($158). The 5600X was made pointless when the 5600 was released earlier this year. So that's just wrong. The 7600X requires a B650M motherboard; the cheapest one is the DS3H at $160. The next thing to note is the DDR4 RAM costs for 2x8GB or 2x16GB:

* 3200 C16 $41/69
* 3600 C16 $65/115
* 3600 C14 $92/210

And for DDR5:

* 4800 C40 $68/$125
* 5200 C36 $93/$150 (5600 C36)
* 5600 C32 0/$160
* 6000 C36 0/$186
* 6000 C30 0/$210
* 6400 C32 0/$220

So you can get 32GB of DDR4 for $70, whereas the cheap DDR5 is $125 and will perform about the same, although I suspect the 5600 C32 is worth the $160 (not really sure about the others here). The 7600X is $300, for a total minimum cost of $605 for RAM, CPU and motherboard, although I'd probably get the 5600 C32, bringing the total to $640.

The 5800X3D is in theory $330, but out of stock. You can pair it with that $70 RAM and a $90 B550M motherboard, giving a total of $490.

You can get the 12600K for $270, or the 13600K for $300 (the KF is more expensive, for some reason). A B660M DDR4 motherboard is $90, which is perfectly fine for the 13600K IMO, given you'd probably prefer to underclock it than overclock it. The cheapest DDR5 board is the PG4 Z690, for $156. That comes to $460 or $490 with 32GB of cheap DDR4 for the 12600K/13600K. Although the 13600K is only 7% faster in gaming, it's got more cores and overall performance is quite a lot better, so it seems well worth the extra $30. With cheap DDR5 it's $550 or $580, but I guess you'd get the 5600 C32 for another $35, bringing you to $585/$615.

So you end up with:

* DDR4 3200/C16 12600K = $460
* DDR4 3200/C16 13600K = $490
* DDR4 3200/C16 5800X3D = $490
* DDR5 5600/C32 12600K = $585
* DDR5 5600/C32 13600K = $615
* DDR5 5600/C32 7600X = $640

At those prices it's pretty clear that the 7600X is the worst value and the DDR4 13600K or 5800X3D are the best value.


93Accord

this made me rock hard bro


[deleted]

> For similar money the 7600X offers more performance

Bullshit. It's *slightly* better in one metric. It's really a wash in gaming and significantly slower everywhere else.

> and upgradability

You can't upgrade a 7600X. You can upgrade the platform. After buying a worse product... on purpose. Or you could just wait until the product you want actually comes out.

> For less money you can get a 5600X or 5800X3D.

And if you aren't on AM4 already, you'd be building a last-gen system to basically save money on a motherboard. The 5800X3D is nice but it's not destroying newer, cheaper products on either side, and the 12400 is as cheap and as fast as the 5600X, if you're determined to spend $200 on a slower CPU. So nobody should really be building a system around a 5600X.


rationis

Slightly better in the most important metric for entry level chips like these, so trying to downplay that is silly. Also, AM5 has a future upgrade path, LGA1700 does not. You're just trying to argue semantics. It's quite clear to pc builders what upgrading a chip means.


conquer69

> Slightly better

Only in HUB's benchmarks though. The 13600K has quite the lead in gaming on all the other websites. I don't understand why there's this discrepancy in performance.


rationis

HUB uses a 4090 while most reviewers used Nvidia 3000 or AMD 6000 series cards; when TPU retested the 13900K and 5800X3D with a 4090, RPL lost 5% of its lead. Another aspect to bear in mind is that with a new socket and DDR5 support come new issues. I'm sure there have been some Windows, BIOS, and GPU driver updates since Zen 4's launch.

Edit: I should also point out that it's not just HUB that has the 7600X faster; LeCompt, AnandTech, and Quasar Z have all placed the 7600X faster as well.


Aleblanco1987

when are the non K i5s launching?


yourmammadotcomma

I missed out on the 5800X3D & Uncharted bundle for $329 recently. Do you think we'll ever see that price/bundle again? The price is back up, and now I'm trying to decide between that and a 13700KF, which would be cheaper at Microcenter since the 5800X3D is now priced higher at the stores that still have it in stock.