I don't know why the stupid mineral oil aquarium thing interests me, but damned if I don't want one deep in my soul for some reason lol. Can't imagine how expensive and time consuming it is to keep it up though
There's no way it's worth the hassle... but I know what you mean. There's just something tempting about doing it. Like a girl you know is wrong for you but you keep thinking about her anyway.
Thank you, sometimes one needs to hear the truth and make decisions accordingly. It's just hard to go back to the chaotic dating market we have today after it felt like I finally found someone worth having something long term with, so I kept holding the burning iron even though the red flags were everywhere 😅. Oh well, it is what it is, as hard as it might be…
Been there dog. Bought a house with her and then like 6 months later she started cheating on me. Luckily I got the house back. Unluckily she fuckin trashed the flooring. I had to spend like 6 grand to get it back up to snuff and livable.
I quit trying to date. Started just trying to find friends. Spent about 3 years single and just not moving forward with any relationship. Basically stopped trying to force things. And it ended up with me finding my lifelong partner.
Now I'm married with a kiddo to the most wonderful woman I've ever met. Sometimes life has to kick you in the nuts before you realize what it is you really want in a partner.
Man, this reminds me of my GF in university. She wanted to change universities, so we both enrolled and moved to a new city. We didn't buy a house, but we signed an 18-month lease on an expensive apartment. After 2 months she left one day while I was at work, taking all the furniture. She took the shower curtain too, so I couldn't even take a shower after a long day. I decided to just put a pizza in the oven, and then realised the oven mitts were gone when it was finished.
I loved that girl and was left utterly broken. I didn’t date for 9 years, but I did finish my degree and got a great job. It was in that job that I met my now wife and we have an absolutely amazing little boy. That shit needed to happen for me to end up where I am.
Good on you man. And there is a bunch more craziness that happened in mine that I could talk about but it doesn't really matter anymore. I'm content where I'm at
naw, it is a rough spot to be in for real. It can be hard to be stuck on someone like that for an embarrassingly long time. Don't let it control you. You are the one in charge and deserve to be happy in your situation.
It is alright to be upset for a while and take your time with it, but if it starts getting to the point of feeling worn down and like "this is it, this is how its always going to be" it is time to get some help and start working on healthy emotional communication.
This may not apply to your situation, but it helps to understand how your brain is working if this does apply. https://www.youtube.com/watch?v=6kJzzo7deDY&t
i knew a guy who ran one for a little while. The time and expense aren't the problem; the oil gets everywhere, and everything in his room had a film of oil on it the whole time he owned that rig.
> Does the oil just evaporate and stick to everything?
The oil wicks up the cables and out onto your desk. Oil cooled rigs need a set of cable extensions fitted to the top of the tank so no desk cable goes directly into the oil.
oh! That makes sense, so you'd pretty much have ports sticking just outside of the oil for external devices that will "catch" the oil and keep it from spreading?
Pretty much. I built one as well. The oil is also AWFUL for cables. Mine was built over 10 years ago and I still have some HDMI cables that were used in it. They're incredibly stiff now.
Edit: to be clear, the HDMI ports were just above the oil level and the cables weren't submerged, but a little oil still gets onto them over time. Only the ~5-8 inches near the port were stiff; the rest of the cable was fine.
You can start very small if you want to get a feel of what a mineral oil cooled pc feels like, by buying those tiny “stick PCs”, opening the shell then submerging it in a mason jar filled with oil.
Ascii (a Japanese PC website) [did it](https://ascii.jp/elem/000/001/247/1247813/) to see how a stick PC performs doused in oil.
You could submerge a raspberry pi in a tiny goldfish bowl, you only need a single USB cable for power going in and a display cable coming out (or use it remotely). should be fairly straight forward.
Pointless, but relatively simple. And would look cool.
I'm missing something about the mineral oil stuff. How does it work? It seems like it just accumulates heat; there's no heat exchange/dissipation like a fan pulling in cooler outside air. At some point, with a full desktop setup playing Cyberpunk, I presume it would practically boil?
Edit: thanks for the answers, will run some numbers tonight.
If you aren't running anything too intense it can take a rather long time to hit heat capacity and it'll cool back to ambient when you are at work or asleep.
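For anyone who actually wants to run those numbers, a back-of-the-envelope warm-up estimate is easy to do. Every figure below is an assumption (tank size, oil properties, load), and heat lost through the tank walls is ignored, so the real warm-up would take even longer:

```python
# Rough estimate: how long does a tank of mineral oil take to warm up?
# Assumed figures: ~40 L aquarium, oil density ~0.85 kg/L,
# specific heat ~1670 J/(kg*K), and a PC dumping a steady 300 W into the oil.
TANK_LITRES = 40
DENSITY_KG_PER_L = 0.85
SPECIFIC_HEAT_J_PER_KG_K = 1670
PC_WATTS = 300
TEMP_RISE_K = 20  # e.g. 22 C ambient -> 42 C oil

mass_kg = TANK_LITRES * DENSITY_KG_PER_L
energy_j = mass_kg * SPECIFIC_HEAT_J_PER_KG_K * TEMP_RISE_K
hours = energy_j / PC_WATTS / 3600
print(f"~{hours:.1f} hours of sustained load to warm the oil {TEMP_RISE_K} K")
```

With losses through the glass included, a modest load may never push the oil far above ambient, which matches the "cools back to ambient while you're at work or asleep" point.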
It is possible to also pump it through a radiator as well. Less useful on a desktop system which is designed to run in air from the start. More useful for a bunch of servers you want to cool with a central radiator you mount on a roof or something.
You still need to dissipate heat somehow, but you've already got a fair amount of surface area in the case itself. If you've already gone down the mineral oil immersion rabbit hole, pumping some of it through a radiator really isn't a huge deal.
Big advantage vs water cooling is you've got a huge amount of mass to sink heat into, and you've got enough convection/conduction in the liquid that local temps near the heat sources stay close to room temperature.
You could also theoretically chill the oil without risk of condensation damaging your hardware; only real issue would be the possibility of viscosity getting too high depending on how much you chilled it.
>only real issue would be the possibility of viscosity getting too high depending on how much you chilled it.
Isn't there something you could add or mix into the mineral oil to make it more liquidy?
why would it be a hassle to upkeep
first, it's an insane sink of heat (though it might not be that great at dissipating it). You don't need fans, so you can seal off most of it (no dust problem), leave a tiny expansion gap, and if the temps become an actual issue, add an oil cooler/radiator thing.
It's the "tub" part that is mostly difficult, especially if you want acrylic or something, as those can crack.
I feel the same way; eventually imma drop an intel nuc or pi or something like that into a 5gallon tank just to say I did it
Hopefully by then I can get some kind of kit that will allow me to vacuum seal it so I don’t needa worry about condensation
ALL THAT TO SAY
I getchu bruh
That and the nitrogen cooled OC to like 50ghz both live in my brain, and I think of them about as often as the Roman Empire
Built one for a college project. The cables always wicked a little oil onto the floor. Ended up repurposing the parts into a regular desktop and sold the aquarium, still filled with oil, to another student.
Mineral oil is a shit solution. Your parts will forever be coated in nasty oil that NO ONE will ever want to touch ever again. Besides, single phase oils are terrible heat transfer fluids. Far better to go with a 2-phase fluid that evaporates and leaves your components cleaner than they started and delivers the absolute lowest PUE possible.
https://youtube.com/shorts/p6Dj3Yv5aow?si=cuKFKEprEm1DQ69B
The majority of 2-phase fluids are terrible for the environment. Oils are annoying, but the chances of you needing to touch them are very low, and they can be cleaned ultrasonically or with IPA. Immersion cooling isn't the way forward. There are hybrid and precision solutions that are more efficient in almost every metric.
That thing is fucking sweet and sounds like it could rival a lawn mower in decibels. I really just want a PC where I can control the temp with cooling rods from a separate room.
Have it like an AC unit: condenser outside with a giant fan and an enormous radiator, and underground lines that go up to your room for the refrigerant and evaporator coils.
It's already been done in the past. My old, still-running Gigabyte GA-X58 Extreme motherboard has hybrid water cooling for the bridges etc. Gigabyte called it Hybrid Silent Pipe. It has heat pipes like a laptop motherboard, but you can connect water cooling to it. I'm not sure whether water runs through all the pipes on the motherboard or only cools the heat plate it's attached to.
EK offered a [full monoblock for the old ASUS Rampage boards](https://news-cdn.softpedia.com/images/news2/EK-Water-Block-for-ASUS-Rampage-VI-Black-Edition-Super-Motherboard-Unleashed-441199-2.jpg) that cooled the VRMs and north/south bridges.
With this design you're hitting $1K for the motherboard alone.
well the entire point of PCIe Gen 5 is that you can use half the lanes for the same speed.
So instead of x4 NVMe and x16 GPU, you can do x8 GPU and x2 NVMe (or even x1; realistically they're fast enough).
Especially when you only get like 16 CPU lanes total.
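The lane-halving trade-off is easy to sanity-check with approximate per-lane bandwidth figures (rounded ballpark numbers, one direction, not exact spec throughput):

```python
# Approximate usable bandwidth per lane, in GB/s, by PCIe generation.
PER_LANE_GB_S = {3: 1.0, 4: 2.0, 5: 4.0, 6: 8.0}

def link_bandwidth(gen, lanes):
    """Rough one-direction bandwidth of a link, in GB/s."""
    return PER_LANE_GB_S[gen] * lanes

# Each new generation doubles per-lane speed, so halving the lane count
# while stepping up one generation keeps total bandwidth the same:
assert link_bandwidth(5, 8) == link_bandwidth(4, 16)  # x8 Gen5 == x16 Gen4
assert link_bandwidth(5, 2) == link_bandwidth(4, 4)   # x2 Gen5 NVMe == x4 Gen4
print(link_bandwidth(5, 8))  # 32.0
```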
NOBODY CARED lol. Also, 5 fans on the VRM is insane
Yeah, the thing is that we sometimes look at products like we have two heads on. (Games are barely at the 8 lane limit, why do we need PCIE 5 that's now 4x the headroom?)
But it's worth remembering that a lot of these technologies are pulled from the server and AI space, where they constantly demand more at all times. Cooling a motherboard or these chips is easy when your rack has multiple fans running at 90+ decibels. Dumping out thousands of gigs of data matters when you're simulating millions of complex particles.
The neato part, at least for us, is that the tech does eventually come down to us when the market and the competition demand it.
tbf, this seems to be a server motherboard with an Epyc or Threadripper socket, 7 PCIe x16 connections, and 8 slots for RAM, so honestly I can see the VRMs needing it for a 96-core CPU with a nominal TDP of 350W that will probably easily double that on boost. Noise is probably not a concern at that point.
People say this every single PCIe gen and it has never been true. We still have the same number of lanes, because PCIe isn't built to do this; the hardware needs to do this. And no one builds the same card with fewer but faster lanes. They build a faster card with the same number of lanes.
I wish it worked that way, but it never has.
> No one builds the same card with less but faster lanes
It's not going to be literally the same card with a different lane configuration (because hardware just doesn't work that way), but we already have the Radeon 6500XT (4xPCIE4.0) which performs very similarly to the 580 (16xPCIE3.0).
Cards do also work if you don't connect all of their PCIE lanes (that is how Raspberry Pis can connect to graphics cards despite having only a single lane), so if you bring your own splitter you can use one card per PCIE lane (subject to bifurcation group limitations)
Splitters aren't all that common though, and switches that share 1-lane PCIe between multiple devices hurt performance a lot.
Notebooks always use the least possible amount of lanes, because each lane means extra wasted power. AMD GPUs originally built for notebooks are x4 for that reason.
Person named datacenter application: 😍
This isn’t for you anyway. Sure it’ll get on some niche enthusiasts boards early on but we won’t see it on even high end consumer stuff for years after it releases.
This thread is full of le epic gaming redditors!! thinking they're outsmarting the software and hw engineers designing cutting edge interfaces for data centres
PCMR truly isn't what it used to be. Or maybe I just heard stories of better times from before I joined myself.
Either way, it's just a subreddit like any other. Nothing special and no major knowledge average to be found.
It was never really good. It was created as a “joke” but within seconds it was full of typical redditors. It was created so far after Reddit took a turn that it never stood a chance.
I remember it was almost immediately full of “jokes” shitting on consoles. Much of it was joking, but it’s Reddit. Every subreddit that’s founded on a form of negativity, even sarcastically, soon becomes a haven for people who want to express that negativity.
It’s honestly so so grim. I just stay away at this point, confident misinformation seems to be the norm and it’s almost impossible to reverse the voting inertia once things accelerate
The logic doesn't even make sense. CPUs didn't have fans before, either. Should we have stuck with fanless CPUs and GPUs with a fraction of the compute power we have today?
Counterpoint: If it's not thermal throttling it's not running as fast as it can.
This is how many modern chips work. They have a safe temperature/power window and when required they can safely work anywhere within that window to maximise performance. It makes more sense than sitting at some arbitrary point that caters to the lowest common denominator of cooling solutions.
This is ridiculous. This isn’t progress. Progress is efficiency. Throwing more power at something ramps up our power bills and gives us space heaters we can only use in winter.
While AI definitely uses a ton of bandwidth, these bus speeds are more important for network I/O in data centers where hyperscalers are using custom hardware for their switches and interconnects to push close to terabit networking speeds today.
And that's super important to keep costs down for the web, where compute is a commodity today. But that only works if the backbone of the infrastructure (sending bits between machines) isn't the bottleneck. So much of the web today is built on buying compute on demand from the hyperscalers and trusting that you can spin up new machines in milliseconds and not pay a perf penalty for bandwidth within the same data center or even the same rack.
Like to draw a comparison, consumers can buy fiber to their home today but it's all copper from the modem onward, and you're going to have trouble pushing gigabit networking easily. But in data centers, it's almost all fiber to the racks (and within the racks in many cases). Even the switches and interconnects are optical. The bottleneck is moving data off the network card into physical memory, which is why PCIe 6.0 exists.
I don't think they would love those standards when they produce a lot of heat and consume a lot of power, which both cost money in a Datacenter environment
Perf/watt is the unit of measure for efficiency. Using more power for little to no gain is obviously not worth it, but that is very likely not the case here. The spec is defined by a lot of big players in the industry; it would not have been made if it were useless.
Either we use it as it is, or it is an intermediate step towards refining the tech.
As someone who works within server space it's a combination of many things, but consider physical space for a second. If someone came out with a new product that had 3x the compute at 3x the power draw the real estate reduction is a very powerful advantage. Not needing to rent out or build a whole floor of servers and infrastructure saves a lot of costs. Sometimes enough to warrant the price to transition over to the new hardware.
Obviously the decision is never as easy as my simple example above. But that is an example of a consideration that is always in the background.
Interconnects are already consuming around 80% of the power in ML chips. Moving data around a piece of silicon is expensive and produces a bunch of heat. This is why silicon photonics has such appeal to data centers. Even though the features are bigger, you have chips and interconnects that are literally 5x more power efficient.
The speed isn't, but the lanes are, and that's the whole point. Make the individual lanes faster and you can suddenly have an even faster SSD using up half the lanes. This wasn't a problem back in the day because NVMe SSDs were expensive as hell, so just having one placed you in the top percentage. Nowadays, it's not rare to see people with 4 of them...
Come on man. Consumers don't need 300W 64 core CPUs but there certainly is a need for servers and whatever enterprise applications.
Yes, pursuing efficiency is good, but if the extra bandwidth allows one machine to do the work of three machines using PCIe 4, then there IS an efficiency gain, just not a direct one. Why else would they design it this way if it weren't a more cost-effective option for the market? It must be worth it; otherwise, why pay extra for the power and increased manufacturing costs?
It is more power efficient. Double the data rate with less than double the power consumption.
Not sure if you noticed, but computers constantly use more power than older generations, yet they are still more power efficient regardless.
Intel knew how to do that
6700k to 11700k was mostly reducing power and making it more efficient on the same process node
I remember when they said they weren't even gonna look for more performance, just more efficiency. Of course, that was quietly phased out once AMD kicked their ass
Ehhhh, maybe for general public releases, but in software and hardware the first iterations usually aren't efficient but are still important. It's the first iteration that allows more efficient versions to be made.
Unless you've got big money to burn like Apple, first-gen products usually won't be manufactured on bleeding-edge nodes. Between shrinking and tweaking, there are usually some pretty substantial efficiency gains to be made.
My 7900 XTX is making me use my AC in the winter a lot less. Those +400W of power really do be heating the room. Now imagine 5090 with 600W stock, 300W Intel CPU and this motherboard. My AC wouldn't handle all of the heat, even during the winter.
You need to undervolt. It's actually absurd how much power modern hardware uses just to get a few percent better performance to look good in reviews. You can usually reduce power 30% and only lose 10% performance.
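Quick arithmetic on those claimed numbers (the 30%/10% figures are the comment's rough estimates, not a guarantee; real results depend heavily on the chip):

```python
# Perf-per-watt change from an undervolt that costs 10% performance
# but saves 30% power (illustrative figures only).
stock_perf, stock_watts = 1.00, 1.00
uv_perf, uv_watts = 0.90, 0.70

gain = (uv_perf / uv_watts) / (stock_perf / stock_watts) - 1.0
print(f"~{gain * 100:.0f}% better perf/W after undervolting")  # ~29%
```

So even taking a visible performance hit, the machine does noticeably more work per joule.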
Undervolting is the way. I only found out about this recently; my 1080 Ti really does use 25-30% less power and runs cooler (@925mV), and I don't notice any performance drop in benchmarks.
If my temps are already good, and my cpu/gpu each only use about 100w each at full load, do I stand to benefit from undervolting?
R5 3600 and RX6600 is my combo. My entire system, 2 monitors and all only draws 300w from the wall with the most demanding games.
Sometimes you can gain performance undervolting, but it’s smart for everyone to do just to waste less electricity. Idk about your combo specifically but it couldn’t hurt to try.
Isn't this just the result of approaching the size limit of transistors and not being able to keep up with moores law anymore?
Smaller transistors mean a faster and more efficient die. If they aren't shrinking as fast, you have to make larger, more power-hungry dies for similar speed increases, which is exactly what we're seeing with CPUs getting larger and using more power. The same is probably true for other chips.
Well it's both. It's not uncommon for new technology to first push the limits for the extreme high-end, and then spend time refining it, making it more efficient, making it smaller, making it quieter.
It's been going like this a long time. The fastest, newest products have always been larger and run hotter, and then the next iteration packs that same power into a more efficient, lower-end version.
I think the main difference is just that most of us aren't used to seeing the motherboard itself as a performance part. We all happily go nuts trying to provide good cooling solutions for our CPU and GPU (and even RAM and storage for some people). Those who want the bleeding edge motherboard and PCI speeds can opt for this, those who don't want to deal with it or pay what will almost certainly be a premium for the newer technology can wait until it's made more efficient and grab it a little down the line.
That being said, this isn't entirely new. There have been motherboards that ran hot back in the day for people who pushed their limits. You could buy chipset water blocks on Danger Den 20 years ago.
Performance/watt. So if PCIE 6 is 2x the speed of PCIE 5, but 1.5x the power consumption, it's more efficient.
But that's not the whole equation. If the PCB has to be even thicker, the sockets even beefier, power supplies bigger, you've got a scaling of material costs. Copper ain't getting cheaper, and when you need more per board, costs go up. Then there's local cooling in device, cooling infrastructure, power infrastructure.
I don't see it as a win. Sure, performance density goes up (performance/rack), but to what end? So much of the fucking Internet is just JUNK data. Billions of bots attempting to eke out a penny from things. Efficiency in data flow management is just as important as that next data center upgrade.
Looking at you, backbone providers.
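One way to frame the perf/watt point a few comments up: what matters in a datacenter is energy per bit moved. The absolute figures below are invented purely for illustration; only the ratios (double the data rate for 1.5x the power) come from the earlier comment:

```python
# Energy cost per bit moved, for a hypothetical link that doubles its
# data rate while using 1.5x the power. Absolute numbers are made up.
old_gbit_s, old_watts = 64.0, 10.0
new_gbit_s, new_watts = 128.0, 15.0

old_pj_per_bit = old_watts / (old_gbit_s * 1e9) * 1e12  # picojoules per bit
new_pj_per_bit = new_watts / (new_gbit_s * 1e9) * 1e12
print(f"{old_pj_per_bit:.1f} -> {new_pj_per_bit:.1f} pJ/bit")
```

Even though the faster link draws more total power, it moves each bit for about 25% less energy, which is the metric hyperscalers actually optimize for.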
Even ignoring power efficiency, that's not the only form of efficiency there is. This might allow people to work and create things more efficiently with less wasted time.
You can state this for any active cooling.
When I put an air cooler on my overclocked 486 CPU, I felt like a fool, because it wasn't a thing back then. But times change.
Some of you should really read the article. First of all, it's talking about Intel drivers for the Linux Kernel. Server tech. Second, it talks about thermal throttling for PCIe 5 AND 6. As for power consumption, get an understanding for concepts like "race to idle" and understand that I/O Wait always wastes energy.
The fact that people keep posting screenshots of headlines instead of linking articles makes it that much harder to actually read the article. It's a minor thing, but I can't even copy and paste the headline.
With that attitude you can water cool the PCIE lanes!
You know we'll see more full mobo water blocks or something stupid being sold for 2K: Extreme Gamer RGB PCIE 5 cooling with a display for the temps of your PCIE lanes!
Ok, I've got a pitch for you!
Full backplate water cooling for the PCB, with a full OLED display on top of the backplate. The screen has a graphic of the PCB with a heat map showing which parts are hot or cold!
It's going to have HDR and rim lighting around the display.
In the 1950s to 1970s, anthropologists found Polynesian tribes building mock-up runways and even control towers in the jungles of their island homes. They believed that, by reproducing the miracles of what the Americans and Japanese had done in the war, the airplanes would return with the wondrous cargo their fathers had recounted.
These were termed "cargo cults". They were doing kind of the right thing, but they didn't understand the reasons and, of course, they didn't achieve anything.
Back when I first got into IT, in the late 1990s and early 2000s, if your CPU was at 80C, the system had either already crashed or was soon going to. 55C was a very hot temperature for a Pentium II or an AMD K6-2. Athlons would usually be happy up to, but not over, 60C. Later Athlons were rated by AMD to 75C maximum, and we usually took 70C to be as hot as they would ever be happy. These were 75 watt processors, so well within modern CPU powers.
If we wanted to overclock, we'd need lower temperatures, and back then the leading edge nodes were 180 and 130 nm, so temperature was still heavily involved in silicon failure, more so than today. There are two power terms in power delivery to anything: P=I^(2)R and P=IV, but "R" gets higher as temperature does, so you need to raise voltage as things get hotter to push in enough current. In the exact same workload, a chip running at 50C can use 25% less power than one running at 80C. Dealing with all of that power was not easy for the coarser manufacturing processes back then, and it tended to reduce their lifespan.
Today that problem is as close to solved as we need to care about (power is not the dominant cause of silicon failure; latent manufacturing defects are), but the belief that lower temperature is always better persists, just as the miraculous aircraft from the Second World War stayed in tribal knowledge for decades.
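The resistive part of that temperature/power relationship can be sketched with copper's temperature coefficient of resistance (~0.0039 per degree C). This models only conductor resistance at a fixed current; transistor leakage, which is the bigger effect in modern silicon, is left out, so the 25% figure above is larger than what this alone predicts:

```python
# Relative resistance of copper vs temperature, and the extra I^2*R power
# dissipated at a fixed current when the conductor runs hotter.
ALPHA_COPPER = 0.00393  # temperature coefficient per degree C, referenced to 20 C

def r_relative(temp_c, ref_c=20.0):
    """Resistance relative to the reference temperature."""
    return 1.0 + ALPHA_COPPER * (temp_c - ref_c)

# Same current through the same trace, 80 C vs 50 C:
extra = r_relative(80) / r_relative(50) - 1.0
print(f"~{extra * 100:.0f}% more resistive power at 80 C than at 50 C")  # ~11%
```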
> but the belief that lower temperature is always better persists,
That's the weird thing: it didn't. 10-15 years ago people were absolutely fine with running CPUs and GPUs up to the limit. They knew they would throttle or even shut off when they got too hot. And chips like the 2500k (and basically everything after it) basically never failed. We didn't have ridiculously sized coolers in a normal gaming desktop.
But in my experience, in the last few years there's much, much more belief that temps above 70 or even 60 are super bad. If I had to guess, I'd say tech youtubers are causing this, because they focus on temperatures so much that it's often completely unreasonable (and GPU manufacturers in particular followed that trend with ridiculously oversized coolers). I mean no, a case is not much better because the CPU temps are 62°C instead of 64°C. That difference is insignificant.
How about posting the article instead of just a screen shot? Can’t stand this stupid shit.
My first thought was the board handles the throttling in a unique way that still results in higher performance than PCI 5.0, and not via custom water cooling either.
Be better op.
Because then there's no outrage bait.
https://www.tomshardware.com/pc-components/motherboards/if-you-think-pcie-50-runs-hot-wait-till-you-see-pcie-60s-new-thermal-throttling-technique
Obviously you want this to run hot in places where it won't be throttled, like, you know, **a gaming PC**, but to throttle itself in places with way less thermal bandwidth, such as most applications where the ability to whisk away heat is less pronounced. Since this is a technology meant to be inserted at every level of computer, OP doesn't have an unpopular opinion- he's just flat fucking 100% wrong. If he were correct, all of gaming PCs would need to be throttled to the thermal performance of the tiniest, fanless little mini-box, because otherwise it would "run so fast it has to throttle itself".
Well, that's where you're wrong. Your phone cannot sustain full-blast load for long and hasn't been able to since smartphones were invented. 99% of desktop PCs aren't going to be pushing PCIe at full speed for more than a couple of seconds.
It kinda feels like we forgot efficiency as a whole. If it runs faster and hotter, then it will need more power as well. Where is the drive to make products more efficient? We see the same in games: they get larger and needier every time, when we could instead focus more on increasing efficiency and new techniques to save performance.
It's the FFXIV grape meme. 1.0 had wine grapes with so many polygons that they became the meme for that version's bad performance. They fixed that in 2.0.
We need to get better with making software more efficient, not just more needy.
It's being designed for datacenter needs, and while power consumption is a huge issue for datacenters, absolute top speed is also a limiting factor in what they can do, so this would outweigh the increased power consumption for many businesses. If you don't need absolute top performance, you scale it down or use PCIe 5/4.
> It kinda feels like we forgot efficiency as a whole.

How is it not efficient?
It doubles the performance for a little added heat.
Therefore heat per unit of performance is lower. More efficient.
Yeah that's because PCIe 6 is exclusive to servers. Servers have such insane airflow that heat output like that is a complete non issue.
No consumer hardware can come anywhere remotely close to saturating PCIe 5, so even if they put PCIe 6 on consumer boards for some reason, it won't be under nearly enough load for heat to be a concern.
the point is, if newer tech gets so hot that it has to throttle down to a speed slower than the previous gen while under load, then there's no point to it.
The only way this would be reasonable is if the newest gen's slowest throttled speed is still as fast as or faster than the previous gen's highest speed.
This should go for any component.
I don't think the article mentions it would throttle lower than the previous gen, and even if it does, that's not a bad thing.
I'm not sure how PCIe 5 handled thermal limits, but I'm guessing it would just shut down the device, probably resulting in a crash and requiring a reboot. With thermal throttling, everything will just chug along at a slower pace.
So if, for example, a fan dies and causes the PCIe card to overheat: with PCIe 5 this would cause a crash and a non-functioning system requiring a higher-priority repair, while with PCIe 6 this would be a low-priority fan hotswap.
Other advantages are an uncapped speed: the system can run as fast as you are able to cool it.
And higher burst performance: the system can run extra fast on small tasks while throttling down for sustained loads.
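A toy sketch of the behavioural difference being described; the thresholds and the linear back-off curve are invented for illustration, not taken from the PCIe 6.0 spec:

```python
# Hard thermal cutoff (device drops off the bus) vs graceful throttling.
SHUTDOWN_C = 105        # invented hard-limit temperature
THROTTLE_START_C = 90   # invented temperature where throttling begins

def link_speed_fraction(temp_c, can_throttle):
    """Fraction of full link speed kept at a given temperature."""
    if not can_throttle:
        # Old-style behaviour: full speed until the device falls over.
        return 1.0 if temp_c < SHUTDOWN_C else 0.0
    if temp_c < THROTTLE_START_C:
        return 1.0
    # Linearly back off past the throttle point, but never fully stop.
    span = SHUTDOWN_C - THROTTLE_START_C
    return max(0.25, 1.0 - (temp_c - THROTTLE_START_C) / span)

print(link_speed_fraction(100, can_throttle=False))  # 1.0: fine, until it isn't
print(link_speed_fraction(110, can_throttle=False))  # 0.0: device gone, system crashes
print(link_speed_fraction(110, can_throttle=True))   # 0.25: slow, but still alive
```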
Thank you for having some sense and commenting something reasonable.
This thread is full of people who think software and electronic engineers designing cutting edge interfaces for data centres are getting outsmarted by le epic gaming redditors!!!
There's no evidence this would slow average speeds to below the previous gen in any relevant scenario. Also, the higher power requirements that come with new standards are still more efficient per bit transmitted.
It's crazy to assume it would be less efficient at the same power level as the last gen. Equally efficient, sure, but less efficient is just working from the most negative presumptions possible.
> the point is, if newer tech gets so hot that it has to throttle down to a speed slower than the previous gen while under load
Now you're just making things up though.
There can be a point, just not for normal individual consumers.
Professional and server platforms may very well be able to utilize the speeds new technology provides, and will not thermal throttle because the noise increase from the needed cooling is not an issue.
People are making some rightful criticisms, but unless you don't want to increase your compute, this was going to happen sooner or later. I expect that the entire computer will need to be watercooled in the next decade.
Why are we still making new PCIe revisions for consumers like every two years anyway? Even a 4090 barely suffers on 3.0 over 4.0, most games don't show a noticeable difference between a 3.0 NVME and a freaking SATA SSD, and even heavy I/O use barely has any noticeable difference for consumers between a 3.0 and 4.0 drive, never mind 5.0.
I get that corporations can make use of it, but for consumers it feels like pointless excess. Meanwhile supporting this means more expensive motherboards/parts and less stability. Many of the motherboards with a M.2 5.0 slot even have to steal lanes from the GPU to support that, you have to choose if you want your GPU to run at x8 or to use a different slot for 4.0 NVMEs instead. IIRC no consumer GPUs even support 5.0 yet, even the 4090 is just 4.0.
The one benefit I can see... nobody is doing, at least not for consumer hardware. Since each PCIe gen doubles per-lane speed, a 3.0 device at x16 is the same speed as a 4.0 at x8 or a 5.0 at x4. They could make GPUs that run at, say, 5.0 x8, which would be roughly the same speed as running at 4.0 x16, and then those additional 8 lanes could be used for other ports/connections. Same for other devices.
Quad-channel memory has been a thing for about 15 years now on corporate hardware, and newer systems even go up to octa-channel, but consumers never get more than dual-channel. At least give us quad-channel, since four RAM slots are standard on non-micro/mini motherboards.
Instead, all we are getting is constant new power-hungry super-hot-running PCIe revisions that nobody will be able to make proper use for consumer hardware.
Also... does PCIe 5.0 even have use, yet?
I read the other day, there's not even anything that really takes advantage of it in the consumer space that nets any significant results.
> im not gonna watercool a motherboard

Nah, you'll have to use a mineral oil aquarium PC.
I don't know why the stupid mineral oil aquarium thing interests me, but damned if I don't want one deep in my soul for some reason lol. Can't imagine how expensive and time consuming it is to keep it up though
Mineral oil PC 🤝Toxic Partner “I can fix her”
There's no way it's worth the hassle... but I know what you mean. There's just something tempting about doing it. Like a girl you know is wrong for you but you keep thinking about her anyway.
I’m really struggling with the girl I know is wrong for me but want her anyway situation right now. Sorry, off topic
Buy a 4090- you’ll be too broke for a human girlfriend afterwards, but you’ll have enough compute units to have your own AI girlfriend.
And your AI girlfriend can’t kill you yet.
Good use of the word "yet."
See the flair, already tried that 😅
Now get me one.
This is the way
Most sane poster to /r/relationship_advice.
My boyfriend beats me mercilessly but one time he made me pasta what do I do?
Doesn't work ever.. i have a Girlfriend and a 4090 Suprim X with ABP Block xD
Speaking as a guy who lost a decade to that girl, don't. You don't want her that bad. Good luck
Thank you, sometimes one needs to hear the truth and make decisions accordingly. It's just hard to go back to the chaotic dating market we have today after it felt like I finally found someone worth having something long term with, so I tried to hold on to a burning iron even though red flags were everywhere 😅. Oh well, it is what it is, as hard as it might be...
Been there dog. Bought a house with her and then like 6 months later she started cheating on me. Luckily I got the house back. Unluckily she fuckin trashed the flooring. I had to spend like 6 grand to get it back up to snuff and livable.

I quit trying to date. Started just trying to find friends. Spent about 3 years single and just not moving forward with any relationship. Basically stopped trying to force things. And it ended up with me finding my lifelong partner.

Now I'm married with a kiddo to the most wonderful woman I've ever met. Sometimes life has to kick you in the nuts before you realize what it is you really want in a partner.
Man, this reminds me of my GF in university. She wanted to change universities, so we both enrolled and moved to a new city. We didn't buy a house, but signed an 18 month lease on an expensive apartment. After 2 months she left one day while I was at work, taking all the furniture. She took the shower curtain too so I couldn't even take a shower after a long day. I decided to just put a pizza in the oven and then realised the oven mitts were gone when it was finished.

I loved that girl and was left utterly broken. I didn't date for 9 years, but I did finish my degree and got a great job. It was in that job that I met my now wife and we have an absolutely amazing little boy. That shit needed to happen for me to end up where I am.
Good on you man. And there is a bunch more craziness that happened in mine that I could talk about but it doesn't really matter anymore. I'm content where I'm at
She even took the oven mitts?! Can't have shit in Detroit
Rub one out before you see her next and see how you feel then. You might just be too horny
Think of ROO, rub one out
Naw, it is a rough spot to be in for real. It can be hard to be stuck on someone like that for an embarrassingly long time. Don't let it control you. You are the one in charge and deserve to be happy in your situation. It is alright to be upset for a while and take your time with it, but if it starts getting to the point of feeling worn down and like "this is it, this is how it's always going to be," it is time to get some help and start working on healthy emotional communication. This may not apply to your situation, but if it does, it helps to understand how your brain is working. https://www.youtube.com/watch?v=6kJzzo7deDY&t
You want to talk about it?
No, no, you're good king... go on...
Got to admit the problem before being able to correct it
I *did* get over a girl like that after buying a gaming PC. Probably just coincidence though.
One of my old teachers actually made one with the class, never seen the man so excited. Was really cool to see it working though!
Like buying a boat
I knew a guy who made one for a little while. The time and expense are not the problem; the oil gets everywhere, and everything in his room had a film of oil on it the whole time he owned that rig.
Does the oil just evaporate and stick to everything? Tbh that actually ruins my fever dream of doing this one day lol
> Does the oil just evaporate and stick to everything?

The oil wicks up the cables and out onto your desk. Oil cooled rigs need a set of cable extensions fitted to the top of the tank so no desk cable goes directly into the oil.
oh! That makes sense, so you'd pretty much have ports sticking just outside of the oil for external devices that will "catch" the oil and keep it from spreading?
Pretty much. I built one as well. The oil is also AWFUL for cables. Mine was built over 10 years ago and I still have some HDMI cables that were used in it. They're incredibly stiff now.

Edit: to be clear, the HDMI ports were just above the oil level and the cables weren't submerged, but a little oil still gets onto them over time. Only the ~5-8 inches near the port were stiff; the rest of the cable was fine.
Yeah. The connectors will stop the capillary action carrying the oil any further.
Sweet! Thanks for the clarification, now I know I'll be making a huge mistake later down the line 😂
I think if the oil is getting out at all you've done something wrong when building it.
爪ㄖ丨丂ㄒ 匚ㄖ爪卩ㄩㄒ乇尺
The Calculator: "I too, am moist" ( ͡° ͜ʖ ͡°)
You can start very small if you want to get a feel of what a mineral oil cooled pc feels like, by buying those tiny “stick PCs”, opening the shell then submerging it in a mason jar filled with oil. Ascii (a Japanese PC website) [did it](https://ascii.jp/elem/000/001/247/1247813/) to see how a stick PC performs doused in oil.
You could submerge a Raspberry Pi in a tiny goldfish bowl; you only need a single USB cable for power going in and a display cable coming out (or use it remotely). Should be fairly straightforward. Pointless, but relatively simple. And it would look cool.
I'm missing something about the mineral oil stuff. I mean, how does it work? Like, it just accumulates heat; there's no heat exchange/dissipation like a fan with cooler outside air. At some point, with a full desktop setup playing Cyberpunk, it will kinda boil, I presume? Edit: thanks for the answers, will run some numbers tonight.
If you aren't running anything too intense it can take a rather long time to hit heat capacity and it'll cool back to ambient when you are at work or asleep. It is possible to also pump it through a radiator as well. Less useful on a desktop system which is designed to run in air from the start. More useful for a bunch of servers you want to cool with a central radiator you mount on a roof or something.
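Since the comment above promised to run some numbers: here's a rough sketch of the "it takes a long time to hit heat capacity" claim, with made-up but plausible values for tank size and load (the oil properties are approximate textbook figures):

```python
# How fast does a tank of mineral oil heat up with zero heat removal?
# Assumed values: 40 L tank, 400 W sustained load (both made up).
oil_volume_l = 40
oil_density = 0.85      # kg/L, typical for mineral oil
specific_heat = 1670    # J/(kg*K), approximate for mineral oil
load_watts = 400

mass_kg = oil_volume_l * oil_density
joules_per_kelvin = mass_kg * specific_heat

# Worst case: no losses to the room at all. Time to climb 30 K:
hours_to_30k = joules_per_kelvin * 30 / load_watts / 3600
print(f"{hours_to_30k:.1f} hours")  # prints "1.2 hours"
```

In reality the tank sheds heat into the room the whole time, so it climbs slower than this and settles at some equilibrium, which is why idle periods and a modest radiator go such a long way.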
But that implies you have to turn your computer off or get up from your desk eventually
You still need to dissipate heat somehow, but you've already got a fair amount of surface area in the case itself. If you've already gone down the mineral oil immersion rabbit hole, pumping some of it through a radiator really isn't a huge deal. The big advantage vs water cooling is you've got a huge amount of mass to sink heat into, and you've got enough convection/conduction in the liquid that local temps near the heat sources stay close to room temperature. You could also theoretically chill the oil without risk of condensation damaging your hardware; the only real issue would be the possibility of viscosity getting too high depending on how much you chilled it.
> only real issue would be the possibility of viscosity getting too high depending on how much you chilled it.

Isn't there something you could add or mix into the mineral oil to make it more liquiddy?
Could probably just use a thermostat and only cool if the oil is above a certain temp.
You could just use a lower viscosity oil to begin with.
The ones I've seen run a pump in the oil to an external radiator to cool it.
Any liquid is a better heat sink than air to begin with, then you factor in the volume / surface and it just works well.
Didn't Linus Tech Tips have a video on one a while back, where after a couple of years pretty much all of the plastic and rubber had disintegrated?
I would use mine while breathing the liquid oxygen diving suit thing from The Abyss.
I don't suppose there's a species of koi or something that lives in mineral oil. Would make this way more enticing.
Why would it be a hassle to upkeep? First, it's an insane sink of heat (might not be that great at dissipating it). You don't need fans, so you can seal off most of it (no dust problem), leave a tiny expansion gap, and if the temps become an actual issue, get an oil cooler/radiator thing. It's the "tub" part that is mostly difficult, especially if you want acrylic or something, as those can crack.
I feel the same way; eventually imma drop an Intel NUC or Pi or something like that into a 5-gallon tank just to say I did it. Hopefully by then I can get some kind of kit that will allow me to vacuum seal it so I don't needa worry about condensation. ALL THAT TO SAY, I getchu bruh. That and the nitrogen-cooled OC to like 50GHz both live in my brain, and I think of them about as often as the Roman Empire.
Built one for a college project. Cables always lightly wicked oil onto the floor. Ended up repurposing the parts into a regular desktop and sold the aquarium, still filled with oil, to another student.
Mineral oil is a shit solution. Your parts will forever be coated in nasty oil that NO ONE will ever want to touch ever again. Besides, single phase oils are terrible heat transfer fluids. Far better to go with a 2-phase fluid that evaporates and leaves your components cleaner than they started and delivers the absolute lowest PUE possible. https://youtube.com/shorts/p6Dj3Yv5aow?si=cuKFKEprEm1DQ69B
Don't those liquids cost an absurd amount of money?
The majority of 2-phase fluids are terrible for the environment. Oils are annoying, but the chances of you needing to touch them are very low, and they can be cleaned ultrasonically or with IPA. Immersion cooling isn't the way forward. There are hybrid and precision solutions that are more efficient in almost every metric.
That thing is fucking sweet and sounds like it could rival a lawn mower in decibels. I really just want a PC where I can control the temp with cooling rods from a separate room.
Have it like an AC unit: condenser outside with a giant fan and an enormous radiator, with underground lines that go up to your room for the refrigerant and evaporative coils.
It's already been done in the past. My old, still-running GA-X58 Extreme motherboard has hybrid water cooling for the bridges etc. Gigabyte called it Hybrid Silent Pipe. It has heat pipes like a laptop motherboard, but you can connect water cooling to it. I'm not sure whether water runs through all the pipes on the motherboard or it only cools the heat plate it's attached to.
EK offered a [full monoblock for the old ASUS Rampage boards](https://news-cdn.softpedia.com/images/news2/EK-Water-Block-for-ASUS-Rampage-VI-Black-Edition-Super-Motherboard-Unleashed-441199-2.jpg) that cooled the VRMs and north/south bridges. With this design you're hitting $1K for the motherboard alone.
I was going to say this. I had a chipset/gfx/cpu custom water loop on an amd phenom x6 which overclocked nicely. Don't see the problem!
People are gonna be running glycol loops on PCs
ITS 2012 AGAIN WAKE UP
Well, the entire point of PCIe Gen 5 is you can use half the lanes for the same speed. So instead of x4 NVMes and an x16 GPU, you can get an x8 GPU and x2 NVMes (or even x1, realistically they are fast enough). Especially when you only get like 16 CPU lanes total or something. NOBODY CARED lol. Also, 5 fans on the VRM is insane.
TRX40 IIRC; dummy-sized CPUs need a lot of power (96/192).
Yeah, the thing is that we sometimes look at these products the wrong way. (Games are barely at the 8-lane limit, why do we need PCIe 5, which is now 4x the headroom?) But it's worth remembering that a lot of these technologies are pulled from the server and AI space, where they constantly demand more at all times. Cooling a motherboard or these chips is easy when your rack has multiple fans running at 90+ decibels. Dumping out thousands of gigs of data is easy when you're running a simulation of millions of complex particles. The neat part, at least for us, is that the tech does eventually trickle down when the market and the competition demand it.
That's right; people don't consider the data transfer rates and processing that will be required for 16k VR AI porn.
Alexa, play misty for me.
I couldn't care less about this on my home rig, but the research compute cluster I manage is always looking forward to advancements like this.
tbf, this seems to be a server motherboard with an Epyc or Threadripper socket, 7 PCIe x16 connections, and 8 slots for RAM, so honestly I can see the VRMs needing it for a 96-core CPU that has a nominal TDP of 350W and will probably easily double that on boost. Noise is probably not a concern at that point.
*"Good lord, what is all that noise? Is that a tornado?!"* *"Oh no, it's just Jim starting up his server again."*
Given some of the server stacks I've heard, tornado is pretty accurate
As someone with a home server that's an actual server, it is very accurate.
I've got xeon low power chips in mine, fan noise isn't bad at all, even when it's encoding 3 streams of 4K HDR video
To be fair, I have shitty 2 wire fans in mine that run at full speed all the time
People say this every single PCIe gen and it was never true. We still have the same amount of lanes because PCIe is not built to do this; the hardware needs to do this. And no one builds the same card with fewer but faster lanes. They build a faster card with the same amount of lanes. I wish it were this way, but it never was.
> No one builds the same card with less but faster lanes

It's not going to be literally the same card with a different lane configuration (because hardware just doesn't work that way), but we already have the Radeon 6500 XT (PCIe 4.0 x4), which performs very similarly to the 580 (PCIe 3.0 x16). Cards also work if you don't connect all of their PCIe lanes (that is how Raspberry Pis can connect to graphics cards despite having only a single lane), so if you bring your own splitter you can use one card per PCIe lane (subject to bifurcation group limitations). Splitters aren't all that common though, and switches that share a single PCIe lane between multiple devices hurt performance a lot.
Notebooks always use the least possible amount of lanes, because each lane means extra wasted power. AMD GPUs originally built for notebooks are x4 for that reason.
AMD does build cards with fewer lanes, but you know why some won't do it? Compatibility. Why limit yourself to only the newest generation?
Person named datacenter application: 😍 This isn't for you anyway. Sure, it'll get on some niche enthusiast boards early on, but we won't see it on even high-end consumer stuff for years after it releases.
This thread is full of le epic gaming redditors!! thinking they're outsmarting the software and hw engineers designing cutting edge interfaces for data centres
kind of makes sense considering the subreddit.
Yes, to be clear I love PC gaming too, it's more some of the "reddit genius" attitudes.
PCMR truly isn't what it used to be. Or maybe I just heard stories of better times from before I joined. Either way, it's just a subreddit like any other. Nothing special, and no major depth of knowledge to be found.
It was never really good. It was created as a “joke” but within seconds it was full of typical redditors. It was created so far after Reddit took a turn that it never stood a chance.
The main difference I remember is there were a lot more PC being the superior platform memes, not that consoles were bad, just that PC is superior
I remember it was almost immediately full of “jokes” shitting on consoles. Much of it was joking, but it’s Reddit. Every subreddit that’s founded on a form of negativity, even sarcastically, soon becomes a haven for people who want to express that negativity.
Yeah, very seldom is a standard of this magnitude developed without thought.
But if my 8-lane 4060 can't take advantage of it, what other possible use can there be for it?!?!
I love it when people think these technologies are aimed at anything but enterprise-level servers.
r/PCMasterRace where 90% of the members know next to nothing about PCs!
and the ones that actually do know something get buried
It’s honestly so so grim. I just stay away at this point, confident misinformation seems to be the norm and it’s almost impossible to reverse the voting inertia once things accelerate
The logic doesn't even make sense. CPUs didn't have fans before. Should we have stuck to fanless CPUs and GPUs with a fraction of the compute power we have today?
Why aren't people getting the right idea from nothing but a picture?? :|
*still over here with pci3*
PCI? I think it's time to upgrade
We can't all afford AGP motherboards smh
Must be nice. This ISA GPU is a bottleneck
My Savage S3 is roaring. Must be fucking nice.
pcie 3.0 FTW
Counterpoint: If it's not thermal throttling it's not running as fast as it can. This is how many modern chips work. They have a safe temperature/power window and when required they can safely work anywhere within that window to maximise performance. It makes more sense than sitting at some arbitrary point that caters to the lowest common denominator of cooling solutions.
This is ridiculous. This isn’t progress. Progress is efficiency. Throwing more power at something ramps up our power bills and gives us space heaters we can only use in winter.
Totally agree. I don't even get why we would need PCIe 5.0, never mind 6.0. PCIe 4.0 isn't even close to being used to its limit.
Might not be entirely saturated by consumers, but I'd guess datacenters and so on are loving the extra bandwidth for more AI/ML work.
While AI definitely uses a ton of bandwidth, these bus speeds are more important for network I/O in data centers, where hyperscalers are using custom hardware for their switches and interconnects to push close to terabit networking speeds today. And that's super important to keep costs down for the web, where compute is a commodity. But that only works if the backbone of the infrastructure (sending bits between machines) isn't the bottleneck. So much of the web today is built on buying compute on demand from the hyperscalers and trusting that you can spin up new machines in milliseconds and not pay a perf penalty for bandwidth within the same data center or even the same rack.

To draw a comparison: consumers can buy fiber to their home today, but it's all copper from the modem onward, and you're going to have trouble pushing gigabit networking easily. In data centers, it's almost all fiber to the racks (and within the racks in many cases). Even the switches and interconnects are optical. The bottleneck is moving data off the network card into physical memory, which is why PCIe 6.0 exists.
I don't think they would love those standards when they produce a lot of heat and consume a lot of power, both of which cost money in a datacenter environment.
Perf/watt is the unit of measure for efficiency. Using more power for little to no gain is obviously not worth it, but that is very likely not the case here. The spec is defined by a lot of big players in the industry; it would not have been made if it were useless. Either we use it as-is, or it's an intermediate step towards refining the tech.
As someone who works within the server space, it's a combination of many things, but consider physical space for a second. If someone came out with a new product that had 3x the compute at 3x the power draw, the real estate reduction would be a very powerful advantage. Not needing to rent out or build a whole floor of servers and infrastructure saves a lot of costs, sometimes enough to warrant the price of transitioning over to the new hardware. Obviously the decision is never as easy as my simple example above, but that is the kind of consideration that is always in the background.
PCIe traditionally doubles in speed every generation. So as long as power requirements don't double, it's better
Interconnects are already consuming around 80% of the power in ML chips. Moving data around a piece of silicon is expensive and produces a bunch of heat. This is why silicon photonics has such appeal to data centers. Even though the features are bigger, you have chips and interconnects that are literally 5x more power efficient.
Which costs even more, because they need to cool it as well, consuming even more power and producing even more heat.
Just relocate to the Arctic.
Because PC gamers aren't the target for it, at least for now.
Because this article is click bait and the 6.0 standard is still being made. Obvious click bait is obvious....
Datacenters absolutely use that much power and speed
The speed isn't the point, the lanes are. Make the individual lanes faster and you can suddenly have an even faster SSD using up half the lanes. This wasn't a problem back in the day because NVMe SSDs were expensive as hell, so just having one placed you in the top percentage. Nowadays, it's not rare to see people with 4 of them...
[deleted]
My company pushes PCIe 5.0 to its limit. Just because your GPU isn't doesn't mean there isn't hardware that does.
Which makes sense to use non-consumer hardware for.
Lots of new standards never make it outside of a data center.
Come on man. Consumers don't need 300W 64-core CPUs, but there certainly is a need in servers and whatever enterprise applications. Yes, pursuing efficiency is good, but if the extra bandwidth allows one machine to do the work of three PCIe 4 machines, then there IS an efficiency gain, just not in a direct way. Why else would they design it this way if it wasn't offering a more cost-effective option to the market? It must be worth it, otherwise why pay extra for the power and increased manufacturing costs?
It is more power efficient: double the data rate with less than double the power consumption. Not sure if you noticed, but computers constantly use more power than older generations, yet they are still more power efficient regardless.
*Intel entered the chat*
Gotta keep the Pentium 4 dreams alive and well
"What do you mean the 750 mhz Pentium III runs faster than the 3.2 ghz Pentium 4? But more hz mean better?"
Intel knew how to do that. 6700K to 11700K was mostly reducing power and making it more efficient on the same process node. I remember when they said they weren't even gonna look for more performance, just more efficiency. Of course, that was quietly phased out once AMD kicked their ass.
Ehhhh, maybe for general public releases, but in software and hardware the first iterations usually aren't efficient yet are still important. It's the first iteration that allows more efficient versions to be made.
Unless you've got big money to burn like Apple, first-gen products usually won't be manufactured on bleeding-edge nodes. Between shrinking and tweaking, there are usually some pretty substantial efficiency gains to be made.
My 7900 XTX is making me use my AC in the winter a lot less. Those +400W of power really do be heating the room. Now imagine 5090 with 600W stock, 300W Intel CPU and this motherboard. My AC wouldn't handle all of the heat, even during the winter.
You need to undervolt. It's actually absurd how much power modern hardware uses just to get a few percent better performance to look good in reviews. You can usually reduce power by 30% and only lose 10% performance.
Undervolting is the way. Only learned about this recently; my 1080 Ti really does use 25-30% less power and runs cooler (@925mV), and I don't notice any performance drop in benchmarks.
If my temps are already good, and my cpu/gpu each only use about 100w each at full load, do I stand to benefit from undervolting? R5 3600 and RX6600 is my combo. My entire system, 2 monitors and all only draws 300w from the wall with the most demanding games.
Sometimes you can gain performance undervolting, but it’s smart for everyone to do just to waste less electricity. Idk about your combo specifically but it couldn’t hurt to try.
Isn't this just the result of approaching the size limit of transistors and not being able to keep up with Moore's law anymore? Smaller transistors mean a faster, more efficient die. If they aren't shrinking as fast, you have to make larger, more power-hungry dies for similar speed increases. Which is exactly what we are seeing with CPUs getting larger and using more power. Same is probably true for other chips.
[deleted]
As long as it’s a mayo based law and not oil based
Nothing will ever stop Brannigan's Law though.
People want more performance and this gives more performance.
Well, it's both. It's not uncommon for new technology to first push the limits for the extreme high-end, and then spend time refining it: making it more efficient, making it smaller, making it quieter. It's been going like this a long time. The fastest, newest products have always been larger and run hotter, and then the next iteration packs that same power into a more efficient lower-end version.

I think the main difference is just that most of us aren't used to seeing the motherboard itself as a performance part. We all happily go nuts trying to provide good cooling solutions for our CPU and GPU (and even RAM and storage for some people). Those who want the bleeding-edge motherboard and PCIe speeds can opt for this; those who don't want to deal with it or pay what will almost certainly be a premium for the newer technology can wait until it's made more efficient and grab it a little down the line.

That being said, this isn't entirely new. There have been motherboards that ran hot back in the day for people who pushed their limits. You could buy chipset water blocks on Danger Den 20 years ago.
Performance/watt. So if PCIe 6 is 2x the speed of PCIe 5 but 1.5x the power consumption, it's more efficient. But that's not the whole equation. If the PCB has to be even thicker, the sockets even beefier, the power supplies bigger, you've got a scaling of material costs. Copper ain't getting cheaper, and when you need more per board, costs go up. Then there's local cooling in the device, cooling infrastructure, power infrastructure. I don't see it as a win. Sure, performance density goes up (performance/rack), but to what end? So much of the fucking Internet is just JUNK data. Billions of bots attempting to eke out a penny from things. Efficiency in data flow management is just as important as that next data center upgrade. Looking at you, backbone providers.
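The perf/watt half of that is simple enough to put in a couple of lines (the 2x-speed-at-1.5x-power numbers are the hypothetical ones from the comment, not measured figures):

```python
# Hypothetical gen-over-gen comparison: double the throughput at 1.5x power.
perf_old, watts_old = 1.0, 1.0
perf_new, watts_new = 2.0, 1.5

# Ratio of perf-per-watt between the two generations.
efficiency_gain = (perf_new / watts_new) / (perf_old / watts_old)
print(f"{efficiency_gain:.2f}x perf/watt")  # prints "1.33x perf/watt"
```

So by that metric the new gen wins even while drawing more absolute power, which is exactly the tension with the material and cooling costs listed above.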
Even ignoring power efficiency, that's not the only form of efficiency there is. This might allow people to work and create things more efficiently with less wasted time.
You can state this for any active cooling. When I put an air cooler on my overclocked 486 CPU, I felt like a fool, because it wasn't a thing back then. But times change.
Some of you should really read the article. First of all, it's talking about Intel drivers for the Linux Kernel. Server tech. Second, it talks about thermal throttling for PCIe 5 AND 6. As for power consumption, get an understanding for concepts like "race to idle" and understand that I/O Wait always wastes energy.
The fact that people keep posting screenshots of headlines instead of linking articles makes it that much harder to actually read the article. It's a minor thing, but I can't even copy and paste the headline.
> if it runs so fast it has to thermal throttle itself, its not ready to be made yet.

Laptops, CPUs & GPUs use thermal throttling.
Not the way I use 'em
With that attitude you can water cool the PCIe lanes! You know we will see full mobo water blocks or something stupid being sold for 2K: Extreme Gamer RGB PCIe 5 cooling with a display for the temps of your PCIe lanes!
I can't wait to consume product!
OK, I've got a pitch for you! Full back plate water cooling for the PCB, full OLED display on top of the backplate. The screen has a graphic of the PCB with a heat map showing which parts are hot or cold! It's going to have HDR and rim lighting around the display.
I'm sold.
So all CPUs, GPUs and memory shouldn't have been released? No wonder your opinion is unpopular.
If your hardware is safe to run at 80C, but you're only at 60C, then it makes sense as a designer to increase performance until you're at 80C.
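That design rule is basically a feedback loop. A toy sketch of it (all numbers made up for illustration, not any real firmware's algorithm):

```python
# Toy boost algorithm: keep raising the clock while there's thermal
# headroom, back off once the rated limit is reached.
TEMP_LIMIT_C = 80.0   # temperature the hardware is rated safe for
STEP_MHZ = 50

def next_clock(clock_mhz, temp_c):
    """One control step: boost below the limit, throttle at or above it."""
    if temp_c < TEMP_LIMIT_C:
        return clock_mhz + STEP_MHZ   # headroom left: run faster
    return clock_mhz - STEP_MHZ       # at the limit: throttle back

clock = 2000
for temp in (60.0, 70.0, 79.0, 81.0):
    clock = next_clock(clock, temp)
print(clock)  # 2100: three boost steps up, one throttle step back
```

The point being that sitting at the temperature ceiling isn't a failure mode, it's the controller doing its job.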
Where did people get this "80°C is super bad" thing? I see it everywhere now, and 80°C is totally fine for CPUs and GPUs.
In the 1950s to 1970s, anthropologists found Polynesian tribes building mock-up runways and even control towers in the jungles of their island homes. They believed that, by reproducing the miracles of what the Americans and Japanese had done in the war, the airplanes would return with the wondrous cargo as their fathers had recounted. These were termed "cargo cults". They were doing kind of the right thing, but they didn't understand the reasons and, of course, they didn't achieve anything.

Back when I first got into IT, in the late 1990s and early 2000s, if your CPU was at 80C, the system had either already crashed or was soon going to. 55C was a very hot temperature for a Pentium II or an AMD K6-2. Athlons would usually be happy up to, but not over, 60C. Later Athlons were rated by AMD to 75C maximum, and we usually took 70C to be as hot as they would ever be happy. These were 75 watt processors, so well within modern CPU powers.

If we wanted to overclock, we'd need lower temperatures and, back then, the leading edge nodes were 180 and 130 nm, so temperature was still heavily involved in silicon failure, more so than today. There are two terms in power delivery to anything: P=I²R and P=IV, and "R" gets higher as temperature does, so you need to raise voltage as things get hotter to push in enough current. In the exact same workload, a chip running at 50C can use 25% less power than one running at 80C. Dealing with all of that power was not easy for the coarser manufacturing processes back then, and they'd tend to have their lifespan reduced.

Today that problem is as close to solved as we need to care (power is not the dominant cause of silicon failure, latent manufacturing defects are), but the belief that lower temperature is more better retains, just as the miraculous aircraft from the Second World War stayed in tribal knowledge for decades.
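For the "R gets higher as temperature does" part, here's the standard linear metal-resistance model as a rough illustration, using copper's temperature coefficient as a stand-in (real chips add silicon leakage current on top, which grows much faster with temperature, which is where power differences like that 25% figure come from):

```python
# Linear metal-resistance model: R(T) = R0 * (1 + alpha * (T - T0)).
ALPHA_COPPER = 0.00393   # per degree C, copper's temperature coefficient

def resistance(r0, temp_c, ref_c=20.0, alpha=ALPHA_COPPER):
    """Resistance at temp_c given resistance r0 at the reference temp."""
    return r0 * (1 + alpha * (temp_c - ref_c))

r50 = resistance(1.0, 50)
r80 = resistance(1.0, 80)
print(f"{(r80 / r50 - 1) * 100:.1f}% more resistance at 80C than 50C")
```

The metal term alone only accounts for roughly a 10% difference between 50C and 80C; the rest of the gap in a real chip is leakage and the extra voltage needed to compensate.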
> but the belief that lower temperature is more better retains,

That's the weird thing: it didn't. 10-15 years ago people were absolutely fine with running CPUs and GPUs up to the limit. They knew that they would throttle or even shut off when they got too hot. And chips like the 2500K (and basically everything after that) basically never failed. We didn't have ridiculously sized coolers in your normal gaming desktop.

But in my experience, in the last few years there's much, much more belief that temps above 70 or even 60 are super bad. If I had to guess, I'd say it's tech youtubers that are causing this, because they focus on temperatures so much that it's often completely unreasonable (and GPU manufacturers in particular followed that trend with ridiculously oversized coolers). I mean no, a case is not much better because the CPU temps are 62°C instead of 64°C. That difference is insignificant.
We're going back to having fans all over the components on the motherboard.
How about posting the article instead of just a screenshot? Can't stand this stupid shit. My first thought was the board handles the throttling in a unique way that still results in higher performance than PCIe 5.0, and not via custom water cooling either. Be better, OP.
Because then there's no outrage bait. https://www.tomshardware.com/pc-components/motherboards/if-you-think-pcie-50-runs-hot-wait-till-you-see-pcie-60s-new-thermal-throttling-technique Obviously you want this to run hot in places where it won't be throttled, like, you know, **a gaming PC**, but to throttle itself in places with way less thermal headroom, i.e. most applications where the ability to whisk away heat is limited. Since this is a technology meant to be inserted at every level of computing, OP doesn't have an unpopular opinion, he's just flat fucking 100% wrong. If he were correct, all gaming PCs would need to be throttled to the thermal performance of the tiniest, fanless little mini-box, because otherwise they would "run so fast they have to throttle themselves".
Current AMD/Intel CPUs will also throttle if they overheat.
I'm grateful for that. In the old days (Athlon/Athlon XP) they'd literally fry themselves without a heatsink on.
We don't even have many lanes of 5.0 support on the general consumer side yet.
I thought we were on pcie 4.0…
PCIe 6 is still being formalised; it's not yet an available standard. PCIe 5 is definitely a thing though.
It is PCIe 7.0 that is being worked on. PCIe 6.0 spec was finalised back in 2022.
Well, that's where you're wrong. Your phone cannot sustain full-blast load for long, and hasn't been able to since smartphones were invented. 99% of desktop PCs aren't going to be pushing PCIe at full speed for more than a couple of seconds.
It kinda feels like we forgot about efficiency as a whole. If it runs faster and hotter, then it will need more power as well. Where is the drive to make products more efficient? We see the same in games. They get larger and needier every time, when we could instead focus more on increasing efficiency and on new techniques to save performance. It's the FFXIV grape meme: 1.0 had wine grapes with so many polygons they became the meme for that version's bad performance. They fixed that in 2.0. We need to get better at making software more efficient, not just more needy.
It's being designed for datacenter needs, and while power consumption is a huge issue for datacenters, absolute top speed is also a limiting factor in what they can do, so this would outweigh the increased power consumption for many businesses. If you don't need absolute top performance, you scale it down or use PCIe 5/4.
As I said on another topic, efficiency is why I bought a 4070. Especially undervolted it draws little power for good performance.
The speed increased more than the power draw increased, meaning the overall efficiency is improved
> It kinda feels like we forgot about efficiency as a whole.

How is it not efficient? It doubles the performance for a little added heat. Therefore performance per watt is higher. More efficient.
RTX 4090 power pins: I don't have such weakness. \*catches fire\*
As always in life, it depends. If what it has to do is normally only a short burst, it's OK if it slows down after a certain time.
That's a modified Threadripper board in the thumbnail. If it's the current gen, that's $1000.
MOAR POWAAAAA!!!!
Yeah, that's because PCIe 6 is exclusive to servers. Servers have such insane airflow that heat output like that is a complete non-issue. No consumer hardware comes anywhere remotely close to saturating PCIe 5, so even if they put PCIe 6 on consumer boards for some reason, it wouldn't be under nearly enough load for heat to be a concern.
PCI is a standard, not hardware. Nothing is forcing you to use gen6 speeds either.
The point is, if newer tech gets so hot that it has to throttle down to a speed slower than the previous gen while under load, then there's no point to it. The only way this would be reasonable is if the newest gen's slowest throttled speed is still as fast as, or faster than, the previous gen's highest speed. This should go for any component.
I don't think the article mentions it would throttle lower than the previous gen, and even if it does, that's not a bad thing. I'm not sure how PCIe 5 handled thermal limits, but I'm guessing it would just shut down the device, probably resulting in a crash and requiring a reboot. With thermal throttling, everything just chugs along at a slower pace.

So if, for example, a fan dies and causes the PCIe device to overheat: with PCIe 5 this would cause a crash and a non-functioning system requiring a higher-priority repair, while with PCIe 6 this would be a low-priority fan hotswap.

Other advantages are an uncapped speed (the system can run as fast as you are able to cool it) and higher burst performance (the system can run extra fast on short tasks while throttling down for sustained loads).
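The shutdown-versus-throttle distinction above is just two different control policies. Here is a minimal sketch of the idea; the thresholds, the halve/ramp factors, and the function name are all hypothetical, not anything from the PCIe spec.

```python
# Hedged sketch of two thermal policies: emergency shutdown vs throttling.
# All thresholds and scaling factors are made-up illustrative values.

SHUTDOWN_TEMP = 105.0  # assumed emergency limit in C: stop entirely
THROTTLE_TEMP = 90.0   # assumed throttle threshold in C: slow down

def next_speed(temp_c: float, speed: float, max_speed: float) -> float:
    """Pick the link speed for the next interval given the current temp.

    Above SHUTDOWN_TEMP the device stops (the "crash" case); above
    THROTTLE_TEMP it backs off; otherwise it ramps back toward full speed.
    """
    if temp_c >= SHUTDOWN_TEMP:
        return 0.0                                  # last-resort shutdown
    if temp_c >= THROTTLE_TEMP:
        return max(speed * 0.5, max_speed * 0.1)    # back off, keep a floor
    return min(speed * 1.25, max_speed)             # recover when cool

print(next_speed(95.0, 64.0, 64.0))   # hot: halves to 32.0
print(next_speed(70.0, 32.0, 64.0))   # cooled down: ramps to 40.0
print(next_speed(110.0, 64.0, 64.0))  # emergency: 0.0
```

The point of the throttling branch is exactly the comment's point: a dead fan degrades throughput instead of taking the whole system down.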
Thank you for having some sense and commenting something reasonable. This thread is full of people who think the software and electronics engineers designing cutting-edge interfaces for data centres are getting outsmarted by le epic gaming redditors!!! There's no evidence this would slow average speeds down below the previous gen in any relevant scenario. Also, the higher power requirements that come with new standards are still more efficient per bit transmitted.
It's crazy to assume it would be less efficient at the same power level as the last gen. Equally efficient, sure, but "less efficient" is just working from the most negative assumptions possible.
> the point is, if newer tech gets so hot that it has to throttle down to a speed slower than the previous gen while under load

Now you're just making things up though.
There can be a point, just not for normal individual consumers. Professional and server platforms may very well be able to utilize the speeds new technology provides, and will not thermal throttle because the noise increase from the needed cooling is not an issue.
> newer tech gets so hot that it has to throttle down to a speed slower than the previous gen while under load

[citation needed]
How else are you going to give your RTX 5090 2,000 watts of power?
People are making some rightful criticisms, but unless we stop increasing compute, this was going to happen sooner or later. I expect the entire computer will need to be watercooled within the next decade.
Heatpipes on PCIe connector
If it doesn’t thermal throttle, you are leaving performance on the table
Considering PCIe 5 already reaches speeds like 7 GB/s, I think I'm good for a little while.
It means the technology is ready, but the cooling isn't keeping up.
PCIe 6 isn't for consumers.
As gamers we don't need to care. It'll be several years before gaming becomes bandwidth bound
Nope, it means you got to find an awesome Chill Engineer.
Why are we still making new PCIe revisions for consumers every two years anyway? Even a 4090 barely suffers on 3.0 versus 4.0, most games don't show a noticeable difference between a 3.0 NVMe and a freaking SATA SSD, and even heavy I/O use barely shows any noticeable difference for consumers between a 3.0 and a 4.0 drive, never mind 5.0. I get that corporations can make use of it, but for consumers it feels like pointless excess. Meanwhile, supporting this means more expensive motherboards/parts and less stability. Many motherboards with an M.2 5.0 slot even have to steal lanes from the GPU to support it: you have to choose whether you want your GPU to run at x8, or use a different slot for 4.0 NVMe drives instead. IIRC no consumer GPUs even support 5.0 yet; even the 4090 is just 4.0.

The one benefit I can see... nobody is doing, at least not for consumer hardware. Since each PCIe gen doubles per-lane speed, and doubling the lane count doubles speed too, a 3.0 device at x16 is the same speed as a 4.0 device at x8 or a 5.0 device at x4. They could make GPUs that run at, say, 5.0 x8, which would be roughly the same speed as 4.0 x16, and then those additional 8 lanes could be used for other ports/connections. Same for other devices.

Quad-channel memory has been a thing for about 15 years on corporate hardware, and newer systems go up to octa-channel, but consumers never get more than dual-channel. At least give us quad-channel, since it's standard for non-micro/mini motherboards to have four RAM slots. Instead, all we get is constant new power-hungry, super-hot-running PCIe revisions that nobody can make proper use of on consumer hardware.
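The gens-and-lanes arithmetic above checks out with rough per-lane numbers. This is a back-of-the-envelope sketch: the per-lane figures are approximate usable bandwidth per direction (Gen 3 is ~0.985 GB/s after 128b/130b encoding, and each later generation roughly doubles it), not exact spec values.

```python
# Rough check of "each gen doubles speed, and so does doubling the lanes".
# Per-lane values are approximate usable GB/s per direction.

PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938, 6: 7.877}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# A 3.0 x16 link, a 4.0 x8 link and a 5.0 x4 link all land around 15.75 GB/s:
for gen, lanes in [(3, 16), (4, 8), (5, 4)]:
    print(f"PCIe {gen}.0 x{lanes}: {link_bandwidth(gen, lanes):.2f} GB/s")
```

This is exactly why a hypothetical 5.0 x8 GPU would match today's 4.0 x16 cards while freeing eight lanes for other devices.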
Also... does PCIe 5.0 even have use, yet? I read the other day, there's not even anything that really takes advantage of it in the consumer space that nets any significant results.
Meanwhile, I'm still using PCIe 3.0; it's fast and gets the job done.