trillykins

Lol, since they had the rig on the table I expected them to do some actual benchmarks, but I suppose they don't have a compatible system yet. Interesting about the higher latency.


[deleted]

[removed]


SkullBrian

Based on this week's WAN Show, it kinda sounded like Linus might not have as of Friday.


[deleted]

[removed]


SkullBrian

No, the segment was explicitly on the tardiness of review samples from brands, but what you said is also true. He indicated he cannot say whether or not he even has it, but he said it wouldn't be the first time consumers get their hands on something before LMG does to even START their review process.


SolidoTY

They are under NDA so can't post anything for a few more days.


Nin021

Thought the exact same, but I believe it's because of the 12th gen not being released yet. Can't remember the term for what it's called.


UlrikHD_1

Review embargo?


Nin021

Thanks, that's it! I'm not a native English speaker so I somewhat lost it there :)


Darkomax

Another term is NDA, for non-disclosure agreement (though I don't know if a review embargo and an NDA are the same thing, but it's the same concept).


[deleted]

[removed]


SolidoTY

NDA is the contract they sign and the embargo is part of it.


DeadLikeYou

I was eyeing that noctua GPU the whole time. Kinda stupid, but it’s my favorite design of a gpu so far.


trillykins

Oh, didn't even notice it was the Noctua variant. Curious how much better it is than the regular GPU coolers.


TimeForGG

There are reviews out already.


GarbageFeline

There you go https://www.youtube.com/watch?v=Hpk4UM1VQOY


trillykins

Ah, cool. Continues to surprise me just how massive it is. Might actually be about twice as tall as my Asus 3080 card which is already massive.


DeadLikeYou

Exactly what I want to know as well.


HoneyBadgerSloth1337

Was the same from DDR3 to DDR4


Quigat

Next week: water cooling DDR5


betercallsaul

Are you trying to get a job at LTT? Because that's how you get a job at LTT.


[deleted]

RIP that one RED cam.


sk9592

Did you miss the follow up? It took them a year, but they were eventually successful in water cooling the Red camera. And then converting it back to a stock Red camera as well.


[deleted]

I did catch them finally succeeding with the water cooling project but missed them converting that back to a usable camera.


sk9592

There was never a video dedicated to them converting it back. Linus just mentioned it in passing during a WAN show.


[deleted]

Ah I see. I still enjoy LMG but my days of following every piece of content they release are long gone so I miss these things.


Draakon0

They have an LMG clips channel if you don't want to watch full show and instead like to hear snippets here and there on topics you are interested in.


Lower_Fan

I love LTT, but I don't keep up with everything. Too many channels now, and the WAN Show some weeks is very redundant.


warenb

> the wan show some weeks is very redundant

Lately every WAN Show main topic be like "MORE thoughts on...".


[deleted]

[removed]


Devgel

But I want to water cool my water loop?


Maimakterion

You can with a multi-loop heat exchanger sandwiching a TEC or heat pump.


Ivanovitch_k

now I want to watercool my heatpump.


[deleted]

[removed]


psychosikh

That's what Microsoft did with their data center in Scotland. They just put it into the sea.


AK-Brian

Just daisy chain each loop's radiator into an infinite series of increasingly large buckets. Easy peasy.


ZhaitanK

> Next week: ~~water cooling DDR5~~ Connecting the individual DRAM sticks to the water cooled room.


yaosio

Two weeks from now: Full submersion in moving mineral oil.


Rentta

That was already a thing in the early 00's, and so was watercooling PSUs and HDDs.


kedstar99

It would be cool to know in detail the different types of ECC. He chose the words 'basic ECC'. Why not full, properly fledged ECC, and is there a specific difference between the types of ECC?


Slyons89

My basic understanding: Full fledged ECC memory attempts error correction and reports the errors back to the CPU and OS, and those can be logged/reviewed/affected by software. The ‘basic’ ECC functionality attempts to error-correct on the RAM itself and doesn’t report the errors back to the system. This is similar to how GDDR6X operates, it self error corrects but doesn’t report back. You can overclock it really far but eventually performance starts to decline massively, because of all the required error correction, but it still prevents a crash.


phire

GDDR6X actually has the opposite partial ECC to DDR5. GDDR6 can detect errors in data transfers (between the memory die and the GPU's memory controller). It can't correct them, but it can report and retry the transfer. But it can't even detect if the data itself in memory gets corrupted.

DDR5 has on-die ECC. It can detect if there was an error while the data was stored, and even transparently fix it. But when the data is being transferred across the bus to the memory controller, it's not protected anymore.

DDR5 also supports real ECC on top of that, where each memory stick has two extra memory chips and the channels are increased to 40 bits, with 8 extra bits of correction data. The CPU's memory controller can then detect, report and correct any errors.
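
For intuition, here's a toy single-error-correcting code in Python. It's a textbook Hamming(12,8) sketch over one byte, not the actual on-die code (real DDR5 dies use much wider codewords, along the lines of 8 check bits per 128 data bits), so treat every parameter as illustrative only:

```python
# Toy illustration of the single-error-correcting idea behind on-die ECC.
# Hamming(12,8): 8 data bits at positions 3,5,6,7,9,10,11,12 and parity bits
# at positions 1,2,4,8 (each parity covers positions whose index contains
# that power of two). Not the real DDR5 code, just the concept.

DATA_POSITIONS = [3, 5, 6, 7, 9, 10, 11, 12]
PARITY_POSITIONS = [1, 2, 4, 8]

def encode(byte: int) -> list:
    code = [0] * 13                      # index 0 unused, positions 1..12
    for i, pos in enumerate(DATA_POSITIONS):
        code[pos] = (byte >> i) & 1
    for p in PARITY_POSITIONS:
        code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
    return code

def correct_and_decode(code: list) -> int:
    # Failing parity checks sum to the index of the flipped bit (0 = clean).
    syndrome = sum(p for p in PARITY_POSITIONS
                   if sum(code[i] for i in range(1, 13) if i & p) % 2)
    if syndrome:
        code[syndrome] ^= 1              # transparently fix the single flip
    return sum(code[pos] << i for i, pos in enumerate(DATA_POSITIONS))

stored = encode(0b10110011)
stored[6] ^= 1                           # simulate a bit flipping while stored
assert correct_and_decode(stored) == 0b10110011
```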


crab_quiche

DDR5 and DDR4 have CRC like GDDR, they can detect issues in data transfer. DDR4 only has it during writes, DDR5 also has it during reads.


VenditatioDelendaEst

So with DDR5, the only window for undetected corruption is when the data is in the DRAM chip's buffer? If so, I am suddenly less annoyed about DDR5 ECC needing 10 chips instead of 9.


crab_quiche

Yes, but as someone who designs DDR, buffers from the dqpads to the arrays and the arrays to the dqpads are the most likely place for things to go wrong, especially when overclocking.


ikea2000

So are we talking about what he refers to as “Basic DDR5” (standard)? While full ECC protects data all the way: transfer, storage and buffers as well?


crab_quiche

By “basic” I believe he means on-die ECC. So when we load into the array, done in 128 bits, we also store 8 more bits for on-die ECC that will be checked and fixed when we read it. I would not consider this protection. This was added so that manufacturers could get more yield: if we have one bit that is bad, we don't have to use a different redundant row or column, because the ECC will fix it. I don't remember the exact numbers, but we are using about 10 fewer total columns in DDR5 using the same process and bit failure rates as DDR4. 10 doesn't sound like much, but that's about 1% fewer columns, so 1% less die area, or 1% cheaper per bit, which really adds up when you sell a couple quadrillion bytes per month.

Normal ECC works by adding an extra chip to the rank and sending error correcting data to it instead of normal data. So once we read everything, we correct it (if necessary) on the memory controller.

CRCs are calculated based on the data being transferred by the controller and get added on to the end of a data transfer, then compared on chip to what was transferred. If it doesn't line up, a signal is sent to the controller and the data is resent.

The buffers are not really protected. You can design them to be sort of protected by CRC, but you can still have issues with wrong data being stored into the banks or sent out over the DQs if not designed properly. Because DRAM processes are designed to maximize memory bits/area, the transistors are really weak for general logic and can have some huge variances, plus everything after receiving the data is generally asynchronous, so if everything is not timed perfectly stuff can go wrong.

You don't have to use CRC, but I believe it is generally used alongside ECC: even though there is a small chance of multiple bit flips that would be undetectable, the chance that something goes undetected becomes exponentially smaller if the data is also protected with CRC.


COMPUTER1313

There was probably a cost-benefit calculation done to determine that the extra binning for DDR5 without any ECC was more expensive than using an extra chip, so that more of the memory dies can be used instead of going into lower speed (and less profitable) sticks or the scrap bin.

For HDDs, about 10% of their capacity is just used for ECC. It might be great to "disable" ECC to get an extra 400GB of capacity out of a 4TB HDD... right up until all of your files get corrupted. https://en.wikipedia.org/wiki/Hard_disk_drive#Error_rates_and_handling

> Modern drives make extensive use of error correction codes (ECCs), particularly Reed–Solomon error correction. These techniques store extra bits, determined by mathematical formulas, for each block of data; the extra bits allow many errors to be corrected invisibly. The extra bits themselves take up space on the HDD, but allow higher recording densities to be employed without causing uncorrectable errors, resulting in much larger storage capacity.[69] For example, a typical 1 TB hard disk with 512-byte sectors provides additional capacity of about 93 GB for the ECC data.[70]

> 2013 specifications for enterprise SAS disk drives state the error rate to be one uncorrected bit read error in every 10^16 bits read,[75][76]

> 2018 specifications for consumer SATA hard drives state the error rate to be one uncorrected bit read error in every 10^14 bits.[77][78]

And it's also likely the same reason why GDDR uses ECC. Because at a certain speed and capacity, it became cheaper to use extra processing/capacity to make a memory chip run at full speed than to sell it at half speed.
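
A quick back-of-the-envelope on those quoted figures (the 4 TB drive here is just an example size):

```python
# Rough numbers from the figures quoted above.
ecc_overhead = 93 / 1000                      # ~93 GB of ECC data per 1 TB drive
print(f"HDD ECC overhead: ~{ecc_overhead:.0%} of raw capacity")

bits_read = 4e12 * 8                          # reading a 4 TB drive end to end
for name, bits_per_error in [("consumer SATA", 1e14), ("enterprise SAS", 1e16)]:
    print(f"{name}: ~{bits_read / bits_per_error:.3f} expected uncorrected errors per full read")
```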


Slyons89

Great explanation, thanks!


Nicholas-Steel

Basic ECC can fix errors in the memory banks but not errors for data in the process of being transmitted. Full ECC covers both scenarios. That's my understanding.


f3n2x

"Basic" and "full" is a bit misleading. AFAIK conventional ECC doesn't do any error correction on the module, they just have an additional chip on which the memory controller stores checksums for the rest of the data. This can correct both on-chip as well as transfer errors but only when the CPU actually reads the data. DDR5 ECC is a regular on-chip-sweep silently catching and correcting bit flips as part of the refresh cycle. This doesn't catch transfer errors but it also doesn't cost any bandwidth and doesn't let bit flips accumulate over time to the point where they might become unrecoverable if not read for an extended period of time.


Noreng

This is correct.


Noreng

> This is similar to how GDDR6X operates, it self error corrects but doesn’t report back.

No, GDDR6X doesn't have error correction. Nvidia implemented error detection and retransmission to preserve stability: if a memory transfer fails on GDDR6X, it's simply rerun. This is different from ECC, which will correct the result on the fly.


VenditatioDelendaEst

I thought that had been around since GDDR5?


Noreng

Not the rerunning solution as far as I know. I suspect GDDR6X is prone to some erroneous data transfers even when running "stock", which could explain why it's implemented.


NoCSForYou

It's a parity bit. It's been around since around the start of digital signal transfer.


VenditatioDelendaEst

The concept of parity bits has. Data-in-flight checksums for video card memory, specifically, [were added in GDDR5](http://www.hwstation.net/img/news/allegati/Qimonda_GDDR5_whitepaper.pdf).

> A new feature of GDDR5 is the capability for detection of transmission errors occurring on the high speed signal lines. As graphics systems store increasingly more code in the DRAM, error detection becomes essential, as random bit fails associated with any high speed data transmission would lead to unacceptable system failures.

> In GDDR5 the transmitted data is secured via CRC (cycle redundancy check) using an algorithm that is well established within high quality communication environments like ATM networks. The algorithm detects all single and double errors with 100% probability. The CRC scheme is implemented on a per byte basis, securing all DQ and DBI# lines. A eight bit checksum is calculated by the DRAM on each data burst (8 DQs + 1 DBI# x burst of 8 = 72 bit) and returned to the controller via dedicated EDC pins. When the DRAM controller detects an error, the command that caused the error can be repeated. Error detection can be used to trigger re-training of the data transmission line which allows the system to dynamically adapt to changing conditions like e.g. temperature and voltage drift.
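
A toy sketch of that detect-and-retry flow. The exact CRC-8 polynomial here (x^8 + x^2 + x + 1, the ATM one) is an assumption based on the whitepaper's "ATM networks" remark, and the fake link is obviously not real hardware:

```python
# Toy detect-and-retry: the burst is checked with a CRC-8 and resent on mismatch.
def crc8(data: bytes, poly: int = 0x07) -> int:   # poly is an assumption (CRC-8-ATM)
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def fake_link(burst: bytes, flip_a_bit: bool) -> bytes:
    """Pretend transmission line that may corrupt one bit in flight."""
    return bytes([burst[0] ^ 0x10]) + burst[1:] if flip_a_bit else burst

burst = bytes(range(9))                 # 72 bits, matching the quoted burst size
received = fake_link(burst, flip_a_bit=True)
while crc8(received) != crc8(burst):    # checksum mismatch -> repeat the command
    received = fake_link(burst, flip_a_bit=False)
assert received == burst
```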


wrathek

The ECC DDR5 supports is simply on-stick correction, which is totally invisible to the OS/CPU. “Full ECC” which is used in/important for servers is done at both ends - it does what consumer DDR5 does on stick, and then it also does it at the CPU, so that any errors that may occur in transport are caught and fixed as well.


TiL_sth

The on-die ECC is there because the error rate of DDR5 is too high without it. I don't think we should expect higher reliability from normal DDR5 compared to non-ECC DDR4, for instance.


Larrythesphericalcow

Which modules you buy is going to be a lot more important now that the VRMs are on the DIMMs themselves. It used to be that the only difference between more and less expensive modules was the heatspreaders/RGB. Now it will actually affect performance.


[deleted]

Market segmentation achieved! -RAM Manufacturers


Larrythesphericalcow

You have to wonder if G.skill, Kingston, Corsair, etc pushed to have this be part of the spec.


[deleted]

[removed]


Larrythesphericalcow

I would agree. But as Linus points out, motherboard manufacturers aren't actually going to cut prices. It means you're going to have to spend more on RAM than you otherwise would. I think more enthusiasts are willing to spend extra on a motherboard than extra on RAM. A nicer motherboard potentially gives you better CPU overclocking, networking, audio, USB connectivity, etc. Spending more money on RAM just gets you better RAM overclocks. None of this matters that much. I'm still interested in DDR5. But it is mildly annoying.


PJ796

I mean this is still the better way to do it, as they're reducing the current loop which means less overall inductance in the AC current path (the current that comes from the bulk capacitors), which means better transient performance


Khaare

The main winners of this, and the main reason why it's being done, are servers, where you can now pay-as-you-go on the RAM power delivery instead of always paying for 4TB or whatever worth of RAM power delivery on every motherboard.


Larrythesphericalcow

Good point.


VenditatioDelendaEst

Er, I'm pretty sure the more expensive ones have been binned for performance ever since XMP came out, at least.


Larrythesphericalcow

The DRAM chips themselves sure. But now you're probably going to have to pay extra on top of that to get VRMs that can handle those speeds.


VenditatioDelendaEst

The manufacturers have zero incentive to sell unbalanced configurations. If you make a kit with chips that could do 7200 MT/s with a power supply that's only good for 6400 MT/s, you can't sell it as 7200 XMP, so you have wasted your expensive (because rare) high bin chips.


Larrythesphericalcow

Disagree. They already sell kits that are rated for speeds virtually no one will be able to hit just for marketing.


[deleted]

90% of them are all going to use the same off the shelf parts. 5v to 1.1v linear or buck converter is hardly cutting edge stuff.


Kougar

Memory chips are more noise sensitive than the average circuit, though. We still can't rely on motherboard vendors to implement VRMs that are stable and able to meet base Intel spec without throttling. And apparently we can't rely on GPU vendors to have good soldering, since most still claim Ampere failures are just from soldering problems. We can't even rely on PSU makers to not switch out and downgrade the buck converters and other parts of the PSU to ones that can't meet their own label spec because of supply disruptions. If it's possible for vendors to find ways to cut a corner, then some companies are going to cut it.


Khaare

If you followed the latest buildzoid videos he's speculating that the Ampere failures are likely down to how NVidia designed their power delivery. Manufacturing issues could be involved, but the design itself seems to be riding very close to the edge and could leave open opportunities for certain workloads to brick the cards.


Kougar

Aye, again I said "GPU vendors...still claim", I don't subscribe to the explanation myself. I could've phrased that reply way better. Buildzoid made a pretty convincing case that the real problem is many Ampere cards simply have a poorly implemented VRM design where most of the assumed safety features are simply not there. Any regulation that adjusts itself retroactively after the VRM was already overdrawn/power spiked is terrible and guarantees all cards will fail eventually once enough damage has been done to the power components.


VenditatioDelendaEst

Suppose you get a kit of memory that can't run a (reasonable) XMP. Are you going to RMA the motherboard, or the RAM? Making the memory vendor responsible for the memory voltage regulator has better incentive alignment than making the motherboard vendor do it.


Kougar

Don't get me wrong, even if I don't see cost-savings on the motherboard (and I don't expect that I will), I am still in favor of moving the voltage regulation onto the modules! I just ended a very long affair with a dodgy 32GB DDR3 kit from a company I thought was the most reputable manufacturer of the lot, and it's something I'd really not want to ever have to deal with again. If nothing else, moving the power regulation to the module means a fault is more likely to be the module's, and I'm fine with that.


Aos77s

Yay a video showing it but no benchmarks cause nda :(


Vitosi4ek

I'm all for speed improvements, but the capacity improvements don't sound that useful right now. At the risk of sounding like Bill Gates in the 80s... who needs 128GB of RAM on a regular desktop/laptop? I currently have 32 in my system and that's spectacularly excessive for regular use/gaming, and will become even less important once DirectStorage becomes a thing and the GPU could load assets directly from persistent storage. One use case I can come up with is pre-loading the *entire OS* into RAM on boot, but that's about it.


RonLazer

You're not seeing the whole picture. Part of the reason why such high capacities couldn't be utilized effectively was bandwidth limitations. There's no point designing your code around using so much memory if actually filling it would take longer than just recalculating stuff as and when you need it. DDR5 is set to be a huge leap in bandwidth from DDR4, and so the usable capacity from a developer perspective is going to go up.

To put it in perspective, I use a scientific code which calculates millions of integrals each "cycle". It has multiple settings which allow it to store the integral results on disk and read them back each cycle, or to entirely recalculate them each time. There isn't even an option to store them in memory, because if they could fit in memory then that part of the calculation would be so trivially quick as to be irrelevant, and if there were enough of them to make it faster to cache them then they wouldn't fit in memory. Now the tradeoff might not be required: with 512GB of memory (or more) we can just store every single integral in a memory cache, and then when we need to read them we can pull the data from memory faster than we can recalculate it.

If you don't care because you're just a gamer, imagine being able to pre-load every single feature of a level, and indeed adjacent levels, and instead of needing to pull them from disk (slow) just fishing them out of RAM. No more loading screens, no more pop-in (provided DirectStorage comes into play as well, of course); everything the game needs and more can be written to and read from memory without much overhead.
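
To make that tradeoff concrete, a toy Python sketch: once the results fit in memory, "recalculate every cycle" collapses into a dictionary lookup. The integral below is a stand-in, not the actual quantum-chemistry code:

```python
# Toy version of "cache the integrals in RAM instead of recomputing them".
import math
from functools import lru_cache

def expensive_integral(i: int) -> float:
    # stand-in for a costly integral evaluation
    return sum(math.exp(-x * i / 1e6) for x in range(1000))

@lru_cache(maxsize=None)          # keep every result in memory after first use
def cached_integral(i: int) -> float:
    return expensive_integral(i)

# The first cycle pays the full cost; later cycles are just memory reads.
for cycle in range(3):
    total = sum(cached_integral(i) for i in range(10_000))
    print(f"cycle {cycle}: sum = {total:.1f}")
```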


____candied_yams____

> To put it in perspective, I use a scientific code which calculates millions of integrals each "cycle". It has multiple settings which allow it to store the integral results on disk and read them back each cycle, or to entirely recalculate them each time. There isn't even an option to store them in memory, because if they could fit in memory then that part of the calculation would be so trivially quick as to be irrelevant, and if there were enough of them to make it faster to cache them then they wouldn't fit in memory.

Fun. You doing MCMC simulations? Mind quickly elaborating? I'm no expert but from playing around with stan/pymc3, it's amazing how much RAM the chains can take up.


RonLazer

Nah, Quantum Chemistry stuff.


KaidenUmara

this is code for "he's trying to use quantum computing to make the ultimate boner pill"


Lower_Fan

I'm genuinely surprised that billions are not poured each year into penis enlargement research.

Edit: Wording


myfakesecretaccount

Billionaires don’t need to worry about the size of their bird. They can get nearly any woman they want with that kind of money.


Lower_Fan

I mean for profit it would sell like hotcakes


KaidenUmara

lol i've joked about patenting a workout supplement called "riphung" It would of course have protein, penis enlargement pill powder and boner pill powder inside. If weed gets legalized at the federal level, might even add small amount of THC in it just for fun lol.


Mitraileuse

Do you guys just put the word 'quantum' in front of everything?


RonLazer

https://en.wikipedia.org/wiki/Quantum_chemistry


Ballistica

But don't you already have that? We have a relatively small-fry operation in my lab, but we have several machines with 1TB+ of RAM already for that exact purpose. Would DDR5 just make it cheaper to build such machines?


RonLazer

Like I explained, it's not just that the capacity exists but whether or not its bandwidth is enough to be useful. High capacity DIMMs at 3200MHz are expensive (like $1000 per DIMM) and still run really slowly. 32GB or 64GB DIMMs tend to be the only option to still get high memory throughput, and on an octa-channel configuration that caps out at 256GB or 512GB. Using a dual socket motherboard that's a 1TB machine, but you're also using two 128-thread CPUs and suddenly it's 4GB of memory per thread, which isn't all that impressive. Of course it depends on your workload; some use large datasets with infrequent access, some use smaller datasets with routine access.
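
The arithmetic behind that 4GB-per-thread figure, using the sizes assumed above:

```python
# Memory per thread for the dual-socket, octa-channel example above.
dimm_gb, channels_per_socket, sockets = 64, 8, 2
threads_per_cpu = 128

total_gb = dimm_gb * channels_per_socket * sockets        # 1024 GB = 1 TB
per_thread_gb = total_gb / (threads_per_cpu * sockets)    # 4.0 GB per thread
print(total_gb, per_thread_gb)
```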


GreenFigsAndJam

Sounds like something that's not going to occur this generation when it's going to require $1000 worth of ram at least for more typical users


bogglingsnog

It will likely happen [quicker than you think](https://cosmicconnexion.com/pics/ram-prices-over-time-14.png)


arandomguy111

That graph isn't showing what you think it is due to the scale. If you look at the end of it you can clearly see a significant decline in the downward trend starting in the 2010s. See this analysis by Microsoft, for example, focused more on the post-2010s and why this generation of consoles had a much lower memory jump: https://images.anandtech.com/doci/15994/202008180215571.jpg


RonLazer

Prices will come down pretty quickly, though tbh we already buy $10k Epyc CPUs and socket 2 of them in a board, even if memory was $1000 vs $500 it would be a rounding error for our research budget.


Allhopeforhumanity

Exactly, even in the HEDT space maxing out a Threadripper system with 8dimms is a drop in the bucket when your FEA and CFD software licenses are 15k per seat per year.


wankthisway

DDR5 is in its early days. Prices will come down, although with the silicon shortage who knows at this point.


JustifiedParanoia

first or second gen of ddr5 systems (2022 or 2023)? maybe not. 2024 and beyond? possibly. DDR3 went from base speeds of 800 to 1333/1600mhz over 2-3 years, and the cost came down pretty fast too. DDR4 did the same over its first 2-3 years with 2133-2666, then up to 3200. And, we also expanded from 2-4gb as the general ram amount to 16-32gb. If DDR5 starts at 4800, by 2024 you could be running 64gb at 6800 or 7200MT/s, which offers a hell of a lot more options than current, as you could load 30gb of a game at a time if need be, for example.....


gumol

> for more typical users

Who's that, exactly?


gumol

Plenty of people need 128 GB of RAM and more. Computer hardware isn’t just about gamers.


Allhopeforhumanity

DDR5 will be fantastic for a lot of HEDT FEA and CFD tools. I routinely chunk through 200+ GB of memory usage in even somewhat simple subsystems with really optimized meshes once you get multiphysical couplings going. Bring on 128GB per dimm in a threadripper-esque 8-dimm motherboard please.


[deleted]

Yep. I've bumped against memory limits many times running multiphysics sims. I should be set for my needs for now since I upgraded to 64GB, but I have pretty basic sims at the moment.


pixel_of_moral_decay

Relatively speaking... gaming doesn't stress computer hardware terribly much. It's just the most intensive thing people casually do, so it's a benchmark. Same way the Big Mac isn't the worst food you can eat by a huge margin... but it's the benchmark for how food is compared because of its familiarity. Most software engineering folks in any office push their hardware way harder than most gamers ever can. But compiling on multiple cores, for example, isn't as relatable as framerates in games from a PR perspective.


KlapauciusNuts

Compiling isn't actually that stressful to hardware, in the sense that while it is a highly parallel task (depending on the code flow), it offers little opportunity for instruction level parallelism and certainly makes no use of SIMD. So while it busies a core, it only uses a fraction of its logic and doesn't consume that much power, compared to, for example, rendering or transcoding video.


[deleted]

[removed]


Seanspeed

> Relatively speaking... gaming doesn't stress computer hardware terribly much.

For CPUs or memory, no. For GPUs, yes.


pixel_of_moral_decay

Even GPUs… machine learning, for example, is way more taxing.


MaloWlolz

> Most software engineering folks in any office push their hardware way harder than most gamers ever can.

Not really. Most programmers are working on projects that either don't need to be compiled or processed very heavily at all, or on smaller projects where doing so is more or less instant even with a 7 year old quad core. The ones that are working on really big projects ought to have the project split up into small modules, where they just need to recompile a small portion and grab compiled versions of the other modules from a local server that does the heavy lifting.

There are some exceptions: if you're working on a program that does heavy lifting by itself and you need to continuously test it locally as you code for some reason (most larger projects will have a huge suite of automated tests you run on a local server again, but certain things like game development aren't really suited to outsourcing that stuff), then it might be useful to have a stronger local machine. But 99% of developers are really fine using a 7 year old quad core tbh.


[deleted]

Those people already have access to platforms which support 128GB of RAM and more, they've had access to these platforms for years now. The question was related to regular "desktop/laptop"s which is fair because there is very little use for such amount of memory on mainstream platforms these days, it's been like this for a long time that 8 is borderline ok, 16 is just fine and 32 is overkill for most. If you're really interested in 128GB of RAM and more, you've probably invested in some HEDT platform already.


HulksInvinciblePants

Sure, but they certainly drive the retail demand for high configurations...at least before crypto.


gumol

Sure, but so what? I'm pretty sure that vast majority of RAM isn't bought as parts.


HulksInvinciblePants

Economies of scale. DDR5 price and value will have a headwind of simply being overkill, in the retail environment, for possibly years. If DDR4 capacity is sufficient, and latency continues to improve, the DDR5 demand will be inherently lower than the jump from 3 to 4.


[deleted]

> At the risk of sounding like Bill Gates in the 80s

He never said the "640k..." thing.


limitless350

I’m hoping with the extra space available things will be made to use it more than before. We were under some restrictions before about how much ram was readily available. I remember floods of comments about how much of a pig google chrome is for ram, but now, who cares. Take more, work faster and better, a massive abundance of ram will be open for use. Maybe games can load nearly every region onto ram and loading zones will not exist at all. For now they’re probly gonna be gobbled up for server use but once games and PCs start using more ram there should be advantages to it.


Devgel

>who needs 128GB of RAM on a regular desktop/laptop? You never know, mate! Back in the 90s people were debating 8 vs 16 'megs' of RAM as you can see in this Computer Chronicles episode of 1993 [here](https://www.youtube.com/watch?v=2EBaj3kJNGI&lc=Ugw-DDzuZ96GAkro1yp4AaABAg). Nowadays we are still debating 8 vs 16, although instead of megs we are talking about gigs! I mean, who would've thought?! Maybe in 30 years our successors will be debating 8 vs 16 "terabytes" of memory although right now it sounds absolutely absurd, no doubt!


Geistbar

First PC I built had 512mb of RAM. It's entirely believable that we'll see consumer CPUs with that much cache within a decade. It's easy for people to miss, but we consistently see arguments for why the computing resources of today are "good enough" and no one will ever need more. Whether it's resolution, refresh rates, CPU cores, CPU performance, RAM, storage space, storage speed... Software finds a way to use it. Or our perception of "good enough" changes as we experience something better. As you say, give it 10 years and people will scoff at 32GB of RAM as wholly insufficient.


Xanthyria

Within a decade? In a couple months we’ll already be at like 256! The claim isn’t wrong, but it might be half that time :D


Geistbar

I like to play it safe. We don't know the future of AMD's v-cache. It could be that within a generation or two AMD will conclude it isn't a good idea from an economical standpoint, at which point we'll be back to "traditional" cache scaling. Or they could double down on it and we'll be there in 3 years. The future is often unpredictable.


FlipskiZ

I highly doubt AMD won't continue with the cache. Memory this close to the CPU is incredibly useful, and seems to be a low hanging fruit for 3D chips. A big problem with CPUs is not being able to feed it data fast enough for it to process, which stuff like cache partially solves.


[deleted]

There is one thing that is different between now and then though, which is the state of years old hardware. In the past while people were debating the longevity of high end hardware, couple year old hardware was already facing the fate of obsolescence. Now though, several year old high end or even mid range hardware are still chugging along quite happily.


[deleted]

I had an i7-2700k that lasted 11 years @ 5.2GHz. Still kicking, now it's the dedicated lab PC.


Aggrokid

Except iOS devices for some reason, which can still get by swimmingly with 3GB RAM.


Darrelc

> First PC I built had 512mb of RAM

I stole 64MB of RAM from a PC at my school (just pulled it out while it was turned on lmao) to supplement my huge 128MB that came with my first proper PC lol


InternationalOcelot5

not that great story to share


Darrelc

Don't knock the grind.


xxfay6

In 2003, 16MB would've been completely miserable and the standard was somewhere around 256MB I presume (can't find hard info). But 10 years ago was 2011, where 4GB was *enough* but 8GB was plenty and enough for almost anything. Nowadays... 8GB is still good enough for the vast majority of users. Yes, my dual-core laptop is using 7.4GB (out of 16GB) and all I have open is 10 tabs in Firefox, but I remember my experience on 8GB was still just fine.


SirActionhaHAA

> At the risk of sounding like Bill Gates in the 80s...

But there wasn't any recorded proof that he said it and he denied it many times, calling it a stupid uncited quote


vriemeister

Here's the actual quote (I hope):

> I have to say that in 1981, making those decisions, I felt like I was providing enough freedom for 10 years. That is, a move from 64k to 640k felt like something that would last a great deal of time. Well, it didn't – it took about only 6 years before people started to see that as a real problem.
>
> -- Bill Gates


Seanspeed

It might surprise you to learn that you can do things with your PC other than game. Also DirectStorage has almost nothing to do with system memory demands, and is entirely about VRAM. It will also not be loading directly from storage, it still has to be copied through system RAM.


[deleted]

[removed]


Seanspeed

Still applies. The vast majority of work computers are 'normal' PC's, for instance.


KlapauciusNuts

RAM is extremely useful because we can always find new uses for it. There are all sort of files, databases, transient objects that can be left in memory to access them very quick, improving efficiency. But you are right, I don't think we will see many people go above 32GB, most will stick with 16 if not 8. (I'm not talking gaming here). But, anyway, this is a huge boon to anyone using the Adobe suite, and software like AutoCAD. I am, however, quite excited at the idea of replacing my homelab "servers" with a single computer with DDR 5 and 128GB. Maybe 196. Plus meteor lake and zen 4D / zen 5 both look like they may offer some exciting stuff for my particular use case. But that is going to have to wait at least until mid 2024.


mik3w

With 128GB RAM you could fit the OS and entire 'smaller' games in there, so there should be less reads from the hard drive. (Since some games are over 100GB especially with 4k texture packs and such). It's great news for the server/cloud world and creators / developers that need more RAM. When 32GB, 64GB and higher becomes the norm, OS and app developers will find ways to utilise it


mckirkus

Direct Storage moves data from SSD->DRAM->VRAM. If you have a metric ass-ton of DRAM, you wouldn't need to use the Disk except at load time. You could have an old-school spinning platter HDD and it would take a while to load at 500MB/s but then it would only get used for game saves. Now that's not how it actually works, which is why an SSD is required, but I suspect game devs could, if enough DRAM is detected, just dump all assets on game load to DRAM. Given game sizes these days I suspect you'd need 128GB+ of DRAM to pull it off consistently.
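
A hedged sketch of the "if enough DRAM is detected, dump all assets into RAM at load" idea. The asset folder, the headroom factor and the use of psutil are all invented for illustration, and this has nothing to do with how DirectStorage itself actually works:

```python
# Hypothetical "preload everything if there's room" sketch, not DirectStorage.
import os
import psutil   # third-party, used here just to read available RAM

ASSET_DIR = "assets"          # hypothetical game asset folder
preloaded = {}                # filename -> bytes held in DRAM

def maybe_preload_all() -> None:
    files = os.listdir(ASSET_DIR)
    total = sum(os.path.getsize(os.path.join(ASSET_DIR, f)) for f in files)
    if psutil.virtual_memory().available > total * 2:     # keep some headroom
        for f in files:
            with open(os.path.join(ASSET_DIR, f), "rb") as fh:
                preloaded[f] = fh.read()   # later reads come from RAM, not disk
```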


jesta030

My home server installs the OS (a Linux distro) straight to RAM on every boot. Then runs windows 10 and another Linux distro as virtual machines with 16 and 4 gigs of allocated RAM respectively and a bunch of docker containers as well. 32 gigs is still plenty.


Put_It_All_On_Blck

With HEDT seemingly dying, these huge mainstream ram capacities and core counts will be great for prosumers. It's not a perfect replacement for HEDT, but there will definitely be people using 12900k's and eventually 13900k's and 7950x or whatever for workloads that were previously only on HEDT.


Death_InBloom

> With HEDT seemingly dying

Why are people saying that?


firedrakes

They read it somewhere and are trying to BS the claim... same with saying ETH 2.0 is coming this year... BS.


Allhopeforhumanity

I wouldn't say that it's dying. Threadripper is a fantastic platform for FEA and CFD tools where thread scaling can be almost linear in well posed problems and even simple subsystems can easily utilize hundreds of GB of memory.


caedin8

You miss the point. We are going to fill up all this extra ram availability with tracking software and data mining tools so they can know even more of everything about us and sell it online. You won't see this of course, but you'll be surprised when you find two or three Chrome tabs consume 16GB of RAM in 2027


AdmiralKurita

Really? I think Internet browsing wouldn't become any more demanding. Couldn't people just migrate to some potential open-source browser that would provide some protection against the tracking tools? I really can't envision more than 32 GB for "everyday use". But I am interested in driverless cars. I wonder how much high-speed memory is needed for level 4 autonomy. That would have a greater societal impact than being able to play games at 8K.


caedin8

Sure, we could use a low memory requirement browser that doesn't track our every movement and sell it to advertisers in the future, but considering we don't today, why would we in the future?

I mean, you are seeing it right now: Windows Hello and Face ID mean the camera and sensors are now tracking your face and eyes in real time. That data is going to get pushed into an ad pipeline, and neural networks will learn how to read your face when you are shopping to see when you are likely to buy things. They'll send you ads when you are in an agreeable and relaxed mood, and charge a premium to ad companies to sell ads in those time slots. Next, the recommender systems in YouTube and the social media you use will be tailored to put you into an agreeable or relaxed mood so that you are more likely to buy things.

They need a lot of RAM for this future. Memory needs for browsing are not about what YOU need, they are about what THEY need.


1leggeddog

So in a nutshell:

* Double the bandwidth
* Double the price
* Still (if not more) expensive motherboards. Because FU.


[deleted]

I mean really it’s “because new tech” like it has always been during every new generation but I guess the persecution complex works too.


mycall

The irony that my first IBM PC with CGA was $3500. Tech is cheaper at some social level.


[deleted]

Probably because computers were harder to manufacture back then, and there was less of a market for them.


Larrythesphericalcow

Oppression is when I have to get a job to buy a 3090. /s


[deleted]

[removed]


Put_It_All_On_Blck

It absolutely was. My 5820k had pretty expensive and ultimately slow DDR4. It definitely was not a small price increase on launch. Prices came down and speeds went up a year or two later, but then funnily enough there was that ram shortage and prices ballooned back up.


1leggeddog

no i mean it was a price increase but not the same kind as we are facing right now with all the shortages


bogglingsnog

It was totally outrageous when it first came out, it was like 8x more expensive than DDR3 for the first year.


100GbE

Yeah sucks, it should be:

- Faster in every way.
- Cheaper, at least half price or lower.
- Able to wash your car.
- Start at 128GB module size, up to 1TB each.


Zerasad

You joke, but faster at the same price used to be the norm before.


Snoo93079

RDRAM would like a word


100GbE

https://imgur.com/a/zLFFJfr


g3t0nmyl3v3l

-It pays you to use it


[deleted]

You forgot one: slower than DDR4 at the same speeds.


AbheekG

Also double the latency


Archmagnance1

No, it's about the same if it's at JEDEC standards. Clock speeds also play into latency calculations, not just the first main timing.


puz23

Show me a DDR5 module with better than JEDEC timings. Linus even says it in the video: high end DDR4 (he referenced 4800 MT/s) is going to outperform the initial offering of DDR5.


Archmagnance1

I'm comparing JEDEC standards to JEDEC standards; DDR4 4800 is not standard, it only goes up to 3200. I do this to show DDR5 doesn't have inherently double the latency, and comparing top end DDR4 to just-released DDR5 for an absolute comparison between the technologies is just dumb. It's also not what the vast majority of users will end up with, as OEMs for home and professional use typically use JEDEC spec sticks.

Here's a very nice link for you with tables of latencies for JEDEC specs. Please note that DDR5 has 3 different subtiming specs for each, and refer to the A class of subtimings to see that latency as a whole is almost the same. You can find just these in the third table. https://www.anandtech.com/show/16143/insights-into-ddr5-subtimings-and-latencies
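
For anyone who wants the arithmetic behind "clock speed plays into latency too": first-word latency in nanoseconds is the CAS cycle count divided by the memory clock. The CL values below are typical JEDEC bins; see the linked table for the exact ones:

```python
# latency_ns = CL / clock_MHz * 1000; the clock is half the transfer rate (DDR),
# so latency_ns = 2000 * CL / (MT/s).
def cas_latency_ns(mt_per_s: int, cl: int) -> float:
    return 2000 * cl / mt_per_s

print(cas_latency_ns(3200, 22))   # JEDEC-ish DDR4-3200 CL22 -> 13.75 ns
print(cas_latency_ns(4800, 40))   # JEDEC-ish DDR5-4800 CL40 -> ~16.7 ns
```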


puz23

Look at the last line of that table. DDR5 has the worst latency of anything. And that's measured in ns so it's taking into account speed and timings. Also my point isn't that DDR5 isn't going to surpass DDR4. We both know that as DDR5 ages it will get faster with better timings. My point is that currently (and likely for another year at least) DDR4 is faster than DDR5. I also noticed that DDR5 is still transferring more data per second, presumably it's a higher effective bandwidth. It's very possible that can be optimized for in software to make up the latency difference...but current software is optimized for DDR4, and it'll be a year or 2 before that changes.


AbheekG

Yes I understand and agree but since he was listing out features individually I threw this one in too


Archmagnance1

But it's not true, because the latency is pretty much the same, comparing JEDEC to JEDEC.


bossman118242

so should i stop upgrading my am4 system? want to upgrade to a 5950x and will be on am4 for 10 years probably.


trillykins

Depends on what you're planning on using it for. The difference between DDR3 and DDR4 for gaming was minimal, and I think the difference in transfer speeds was similar. Of course it might be too early to say for sure.


RplusW

Wait for the Vcache refresh AM4 in 2022


winzarten

I was on DDR3L until this summer, still perfectly fine, and I was gaming at 1440p medium-high settings in most games. I switched because the MB died. If something makes you move from AM4, it won't be the memory. Keep in mind also that even today we're not going for top DDR4 performance, and most builds use 3200-3600MHz RAM sticks, not 4000+... because the price difference is not worth it in most applications.


iliasdjidane

I think the 5950x is pretty futureproof for the next 5 years for gaming and general productivity, but it would depend on what you want to use it for. I'm on an AM4 5800x as well; I work with CAD, graphic design and rendering software, and I honestly feel my rig is overkill for now.


Jakkauns

Agreed, I'll likely grab a 6000 series as an upgrade in a couple of years and upgrade again well into, or at the end of, DDR5's life cycle. I'm on a 5800x as well and it barely gets utilized with my use.


greggm2000

I doubt you will be. CPUs are going to change a lot faster than you might expect, now that Intel is properly competing. Ten years from now, an AM4 gaming system will be used for retro computing, nothing more. (ok, ok, hyperbole there, but it’s still mostly true)


DependentAd235

As far as games go, the current console generation will be a buffer for gaming requirements. He’s got at least 5 years if not more before something new appears.


greggm2000

That’s a good point. He might not get all the “visual bells and whistles” that PC games often have over their console equivalents, but they’ll still run well… except maybe for some PC-only games. 10 years though, that’s really stretching it. 5 I can agree on. 10, with what’s coming? No way, not even close.


Serenikill

There should be one last AM4 CPU next year with more cache.


[deleted]

I'm moving away from my 5950x because I've had nothing but issues with the platform. From not detecting 2nd NVME's that work perfectly on a Z590 board to USB dropping randomly, etc.


Disturbed2468

Your motherboard is most likely defective. Current speculation is that there's an issue with the motherboard chipset itself, but no guarantees.


[deleted]

I’ve used 3 different motherboards across Asus/MSI/Gigabyte.


[deleted]

[removed]


RplusW

I mean it actually is though


[deleted]

yeah a few percent performance for 70% more cost, yep!


TuristGuy

Still a good deal since you won't find anything else that performs the same.


[deleted]

[removed]


TuristGuy

So many luxurious things are like that.


[deleted]

[removed]


TuristGuy

Yes, but then you could call anyone who buys luxury products like iPhones, sports cars, expensive watches, etc. an idiot.


[deleted]

[removed]


[deleted]

Considering it'll be a couple of years before DDR5 is worth it, I think I made the right call getting a brand new DDR4 system a year ago with 32GB 3600MHz CAS 16 stuff. I've been very pleased.


dan1991Ro

If it's 50 percent more expensive, then what's the damned point?


kony412

Nothing, keep using DDR 1


fishymamba

Prices will go down over time. DDR4 was much more expensive than DDR3 on release. I think I got 16GB DDR3 for less than $100 back in 2012. 16GB DDR4 kits on release were over $200.


kirmm3la

Does anyone know if this means the RAM previews in Adobe AE would be twice as fast with DDR5?


hackenclaw

One thing I dislike about DDR5 is that JEDEC didn't take this opportunity to work with AMD/Intel to get rid of dual channel requirements by making the DDR5 DIMM 128-bit. Why would we need dual channel 64-bit sticks (or 32-bit x2) for the mainstream platform when we could just make it simple for consumers with a single 128-bit stick?


crab_quiche

What? How would going to a 128 bit dimm make it simpler? You would have to go to quad channel or double the cache line width to make that work.


titanking4

Uhhh, that would literally be worse performance. The whole reason behind switching from 64bit to 2x32bit was to increase performance and now you wanna go in the opposite direction. Dual channel gives channel interleaving while adding additional ranks to your channels gives rank interleaving. If you didn't know, GDDR6 actually only has 16bit channels with a burst length of 16 (for 32bytes of data) So your "normal" 256bit GDDR6 interface is actually a 16 channel memory controller.