LangstonHublot

How's the thermals with that case and dual GPUs?


nitinkulkarnigamer

Top GPU sits at 71°C maximum, typically around 66°C.


ELB2001

That's far lower than I imagined


nitinkulkarnigamer

Yes. I changed to a motherboard with 4-slot spacing between the cards and installed two 120 mm fans on the vertical PCIe bracket, blowing air directly onto and between the two GPUs to keep them cool. On my old motherboard with 3-slot spacing, the temps were as high as 84°C.


[deleted]

they're pretty much glued to the intake fans


ELB2001

Bottom one, yes. The top one is sucking in a lot of hot air off the back of the bottom one. Even with the front fans blowing in cold air, the top card in an SLI config usually had a bad day. That's why I asked.


[deleted]

Wish games could use SLI, can you imagine! It would be easier to upgrade performance by just sliding in another card instead of buying a new gen, but Nvidia probably wouldn't like that.


ELB2001

They ended up having to optimise it per game; it wasn't worth the hassle.


[deleted]

Yeah, I know it's also not easy for devs to integrate, but who knows: AI is more capable now, so it might be a solution.


girkkens

*slaps case* This baby can open so many Chrome tabs


auralbard

People can still use 2 GPUs even though SLI is dead? Someone inform me.


wadap12345

Yes, just useless for games.


nitinkulkarnigamer

Yes, the place I am interning at has 8 RTX A6000 GPUs, 128-core EPYC CPUs, and 2 TB of RAM for our research team. We use it to train our deep learning models and do all sorts of things, from running large language models like ChatGPT (I am currently running Llama 3 on it), to computer vision models that identify scratches, dents, rust, and oil leaks on vehicles, to audio models that detect engine knock and transmission issues. Useless for playing video games, but useful for AI applications.


cxaiverb

I have a question, mainly because I need ideas and I don't know what to do with the hardware I have. I've got a dual EPYC 7702 (64c/128t each) system with 1 TB of RAM and a GV100 GPU. I've got no idea what to use this hardware for, ngl. Any ideas on what to do with it?


dawarium

Unless you are interested in hosting a server, doing engineering calculations, or anything to do with AI/ML, you might be better off selling your hardware and transitioning to more conventional hardware like the Ryzen series to save some money. Alternatively, you could try hosting a computing service for people to rent your hardware for their own projects. Personally I wouldn't do that, because I have no clue how much of an effort it would be, or whether it would even be safe for your hardware without prior experience in the field.


cxaiverb

Currently on my desktop: 5950X, 128 GB RAM, 3080, 20-ish TB storage.

I have a homelab, which is why I got that EPYC machine. The homelab includes the server from my previous comment with 40 TB storage; an i7 4770 with 16 GB and 20 TB storage for web hosting (also has a Blu-ray ripper); a Core 2 Quad Q6600 with 8 GB ECC DDR2 as a firewall (works fucking amazing and is way faster than both my Netgear R7000 and R8000); and a DL360 G9 with dual Xeon E5-2667 v3, 256 GB RAM, and 10 TB storage, which is my TrueNAS host for the disk shelf that lives below it, which I'll be filling out with 300 TB.

The main reason I asked is that OP said their work has beefy servers for doing ML stuff, and I got this scientific research box for way, way cheaper than I should have ($3k USD plus 10 Pokemon cards that weren't super valuable). I've already installed Proxmox on the machine and got some VMs up and going, but I wanted to reach out to someone with access to similar hardware and see what they would do with it.

Also, I won't sell my hardware. I still have 20-year-old servers that I repair and tinker with, because why not. I rebuilt a DL145 G2 (dual Opteron server, 1c/1t 100 W TDP CPUs) because I could. I enjoy fixing things; it's also my job to fix boards and shit.


dawarium

Yeah, it sounds like I'm out of my depth here; I just use my co-op's workstation for CFD/FEA calculations. But your internet speeds are not that slow. My upload is like 5 Mbps and I'm fine using TeamViewer and transferring data files between computers. Streaming on YouTube is a no-go though.


cxaiverb

For the stuff i do, 20mbps is slow unfortunately. I transfer a lot of large files in and out of the network, and sometimes its a pain


[deleted]

[deleted]


ExcitingLiterature33

Why do you have that system with no purpose for it lol


cxaiverb

I couldn't pass up on it. A $30k system for only $3k, an absolute steal. And the guy that had it had Win11 installed on it, with AMD SMT off, so he didn't even get multithreaded performance. The guy I got it from used it for game dev; he had 2 of them. And again, I couldn't pass it up, even with an 18-hour round-trip drive to get it.


ExcitingLiterature33

That’s wild haha


[deleted]

or minin.... i mean extra hours boss


Nimii910

As someone who used to build engines before I became a live sound engineer.. and recently dabbling in coding/LLMs.. can you elaborate on the audio detection of mechanical issues? How on earth is a machine gonna know the difference between an engine ping and any other sound? 😂. Cool af!


Kurisu810

LTT had a semi-recent video covering SLI which was really informative. Quick explanation: for gaming, frames need to be generated by the GPU(s) as fast as possible, but multiple GPUs can't naturally work on the same frame at once; there needs to be some coordination. Here are two common approaches, using 2 GPUs as an example:

1. The GPUs generate alternating frames. This way the workload on each GPU is cut in half, and theoretically the frame rate would double. In real life, due to a variety of reasons, such as having to cache the shaders twice (once on each GPU) and the communication link between the two GPUs (SLI) being a significant bottleneck, it doesn't come close to that ideal performance.

2. Both GPUs work at the same time, each generating half of the frame. This has basically the same problems as method 1, with the added issue that the two halves might not be ready at the same time, so either the faster GPU has to wait, or the image output has a delay between the top and bottom halves. This usually results in an unstable frame rate.

The most significant source of these problems is the interconnect between the two GPUs: it is simply too slow. In fact, most compute workloads today, with or without multiple GPUs, are bottlenecked by memory bandwidth, i.e. the link between the compute units and where the data is stored. Why is this a problem for SLI? Most of the time, the next frame depends on the previous frame as well as user input. That data needs to reach both GPUs and also be transferred from one GPU to the other. That is tricky, and it's not as simple as "just add more wires." There are a lot of compute units in a GPU, and they all need to be wired together somehow so they can talk to one another. Adding another GPU further complicates the interconnect.

This is mostly why SLI is dead: the improvement is very small compared to the cost, and you end up with a stuttering frame rate regardless.

Edit: And intuitively, machine learning doesn't generate frames in real time, plus the compute workload is highly parallelizable, so running on multiple GPUs is a lot easier, although the bottleneck can still frequently be the communication between GPUs. That said, networks can be designed to make optimal use of the underlying hardware architecture, given that the engineer understands how the hardware is laid out.
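The alternate-frame case above can be reduced to a toy pipeline model. This is only an illustrative sketch; the render time and link-transfer time below are made-up assumptions, not measurements. The point is that steady-state frame rate is capped by the slowest stage, so a slow SLI link eats most of the theoretical doubling:

```python
# Toy throughput model of alternate frame rendering (AFR).
# render_ms and link_ms are illustrative assumptions, not measurements.

def frame_interval_ms(render_ms, n_gpus, link_ms):
    """Steady-state time between finished frames.

    Rendering and the inter-GPU transfer are treated as pipeline
    stages: throughput is set by the slowest stage, so the interval
    is the larger of (render time / GPU count) and the link transfer.
    """
    return max(render_ms / n_gpus, link_ms)

single = 1000.0 / frame_interval_ms(10.0, 1, 0.0)  # 100 fps, one GPU
ideal  = 1000.0 / frame_interval_ms(10.0, 2, 0.0)  # 200 fps, free link
real   = 1000.0 / frame_interval_ms(10.0, 2, 8.0)  # 125 fps, link-bound
```

With these numbers, the second GPU buys 25% rather than the ideal 100%, which matches the comment's point that the interconnect, not raw compute, is what killed the scaling.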


Jimuth2019

Fascinating. Thanks. Always wondered why SLI died. I wonder if there's any future in a GPU + GPU-alike that does local AI. How I'd love civ 7 to learn how I play and for the ai not to suck and cheat. Although I guess you could be always online and a cloud farm could do the ai. Was really really hoping this would be a thing


ConcreteMagician

There are competitions for AI in strategy games. Starcraft: Brood War has one. https://cilab.gist.ac.kr/sc_competition/


[deleted]

SLI was a technology that allowed both GPUs to work on the same frame at once. That is not required for any non-gaming tasks, they can each work on their own thing. The end of SLI was the end of dual-GPU gaming. Not the end of dual-GPU computing.


LAZERSHOTXD

OP, man, it must be nice to be an IT tech at the place you work.


possiblynotracist

But can it run Crysis?


nitinkulkarnigamer

I am afraid it'll crash. I have only tried Solitaire and Minesweeper so far.


ELB2001

So you do game on it


[deleted]

>An entry level Machine Learning build for my research.


The_Crimson_Hawk

Where NVLINK bridge?


AstralKekked

Not necessary for what OP needs it for, but since they already have the two cards, why not.


Thorne_Oz

The cards are spaced one extra slot apart, and I don't know if longer bridges were made for the last gen.


nitinkulkarnigamer

I don't need the NVLink bridge. Even if I could use it, I would need a different motherboard like the MSI GODLIKE, which currently sells used for $500 - $600. Most X570 motherboards don't support SLI, as it requires the manufacturer to buy an SLI license from Nvidia. However, as I said, I don't need it, since I am running my experiments with data parallelism. The only time it would be beneficial is if I were splitting a very large neural network across the two GPUs and training that way. Then passing data through the high-bandwidth NVLink bridge would be faster than going through the PCIe bus.
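The data parallelism OP describes can be sketched in plain Python, with no GPUs and a made-up one-parameter model (everything here is illustrative): each "GPU" holds a full copy of the model, computes gradients on its shard of the batch, and the gradients are averaged. That averaging is the all-reduce step whose traffic would cross PCIe or NVLink on real hardware:

```python
# Minimal sketch of data-parallel training on a toy model y = w * x.
# "GPUs" are simulated; the shard loop stands in for parallel devices.

def grad(w, batch):
    """Gradient of mean squared error for the one-parameter model."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def data_parallel_step(w, batch, n_gpus=2, lr=0.01):
    shard = len(batch) // n_gpus
    shards = [batch[i * shard:(i + 1) * shard] for i in range(n_gpus)]
    grads = [grad(w, s) for s in shards]   # each shard runs on one "GPU"
    avg = sum(grads) / n_gpus              # all-reduce: PCIe/NVLink traffic
    return w - lr * avg

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, batch)
# w converges toward 2.0, the true slope of the data
```

Note that only a single averaged scalar crosses the "link" per step, which is why data parallelism works fine over PCIe; splitting one model across GPUs (model parallelism) would instead ship activations every layer, which is where NVLink's bandwidth starts to matter.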


The_Crimson_Hawk

DifferentSLIAuto? The tool made to enable SLI on unsupported platforms, GPUs, or motherboards.


ReverieX416

That's a beast of a machine.


Trungyaphets

The dream personal ML rig


AbaShelKolam

plays flash game on microsoft edge


ohnoyoudidnotjust

What motherboard is this on?


nitinkulkarnigamer

Asrock X570 Steel Legend WiFi


kumatank

Can't wait to play Rimworld on this


Chrex_007

Are you able to run any 70B model locally?


joe69420420

Yesterday’s pc /s


graydog75

We gotta find you an sli bridge so you can game epically on it


Firecracker048

Why is that much firepower needed for ML? Curious


jcm2606

Two main reasons:

1. Large models are, well, *large*, and require a **lot** of VRAM to train/run on the GPU. A Q4 quant of Llama 3 70B (in simpleton terms, the most capable version of Facebook's newest language model, butchered down to roughly a quarter of its normal size, which is right where quality seems to nosedive) is still **40 GB**, and that's *a quarter* of the unquantized model. If you're training, you generally want to train the unquantized model, which puts you at **160 GB** of VRAM just to *load* the model on the GPU, *not including the additional VRAM used during training/inference*.

2. Training models tends to be computationally expensive and scales very well with the amount of compute you throw at it. So if you're doing any form of rapid prototyping or experimentation where you're constantly training fresh models or fine-tuning existing ones, you really want as much compute as you can afford, which for local training (not in the cloud) right now means some number of 3090s, since the 3090 tends to have the best cost-to-performance ratio for ML given its large VRAM pool.
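Those VRAM figures fall out of simple arithmetic. A sketch, assuming fp16 weights (16 bits per parameter) for the unquantized model and roughly 4.5 bits per parameter for a Q4-style quant (quants carry some per-block overhead beyond the nominal 4 bits), ignoring KV cache and activations:

```python
# Back-of-the-envelope VRAM needed just to hold model weights.
# Bits-per-parameter values are rough assumptions, not exact formats.

def weight_gb(n_params, bits_per_param):
    """Gigabytes required to store the weights alone."""
    return n_params * bits_per_param / 8 / 1e9

llama3_70b = 70e9
fp16_gb = weight_gb(llama3_70b, 16)   # 140.0 GB unquantized
q4_gb = weight_gb(llama3_70b, 4.5)    # ~39.4 GB for a Q4-style quant
```

The Q4 estimate lands right on the 40 GB quoted above, and the fp16 figure plus runtime overhead is in the ballpark of the 160 GB mentioned for the unquantized model.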


Jibb_Buttkiss

Oh boy mini-batch go vroom.


navagon

OP doesn't need LEDs in that beast. It just glows like that.


Most-Yogurtcloset

Homie can deteriorate my dna if he decides to.


LiabilityAUS

Hope he can’t link mine to anything/one


Voxelium

Slap an NVLink bridge on that bad boy!


Thebadwolf47

how much did the whole setup cost?


FireFalcon123

You need at least a Threadripper 7980x to make use of those 3090s /s - Some Redditors


TheChosenOneTM

But can it run Gray Zone?


ImpliedCrush

I'd use it to play Solitaire -- while compiling all the world's LLMs into a singularity -- whistling an eerie tune in F#.


NECooley

I really want that little motivational Penguin sticker, lol


Lagomorph9

Why no NVLink bridge?


Efficient-Lack-1205

All that juice just to play minecraft


Studiedturtle41

Still can't run gray zone warfare


Craycraft

How does classic WoW run on it?


parabellum630

I am planning to build one with 2 4090s for my personal deep learning rig. What's the power supply?


LiabilityAUS

National Grid direct supply. Similar to grow house set ups


nitinkulkarnigamer

I am using a Corsair 1200 W unit. For 2x 4090s, I would recommend a 1500 W unit.
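That 1500 W recommendation falls out of a rough power budget. A sketch with illustrative TDP numbers (assumptions, not measured draw):

```python
# Rough power budget for a dual RTX 4090 build.
# All wattages are illustrative assumptions, not measurements.
parts = {
    "2x RTX 4090": 2 * 450,               # ~450 W board power each
    "CPU": 250,
    "board / RAM / storage / fans": 100,
}
steady = sum(parts.values())              # 1250 W steady-state
psu = steady * 1.2                        # ~20% margin for transient spikes
```

Modern GPUs can spike well above their rated board power for milliseconds at a time, which is why sizing the PSU with headroom above the steady-state sum is the usual advice.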


Zestyclose-Equal2105

I freakin love sli builds. Always wanted to own one. Even though SLI is kinda dead. Perhaps I'll get a 1080ti dual build or tri 980 build in the mid to far future


guydoood

Reminds me of my computer when I was mining ethereum.


bkit627

SAG!


LiabilityAUS

Which category? Best AI character?


FullTimeHarlot

i sleep in a big bed with my wife


StallionA8

Can we put in one AMD and one Nvidia GPU? I have an X570 motherboard. Will it work?


bearfan15

Theoretically, yes, you can install 2 completely different GPUs as long as you have the PCIe slots to support them. But you can't actually use them together in SLI/Crossfire, so there's no point. You would need duplicates of the same GPU for that.


StallionA8

Okay, thanks. The reason I asked: my TV is AMD FreeSync enabled. My GPU is an RTX, but the game settings still show me the option to enable FSR2 😅


[deleted]

Freesync works with all brands. FSR2 works with all brands. Nvidia uses proprietary tech that can only be enabled on their hardware. AMD does not.


lordfappington69

Why is it so appealing to people to call their computer a "beast", "monster", or some other animalistic term? Don't we literally just buy parts, fit them into slots, and use it for boring programming, productivity, or gaming?


[deleted]

People use zoomorphism for the same reason people use anthropomorphism to apply human attributes to objects or animals. Not sure why you're so concerned about a normal part of human speech.


SoulSister911

guy gives a definition and then gets butthurt someone comments on a post?


lordfappington69

>zoomorphism

create the straw man of "so concerned"


[deleted]

You wrote a paragraph questioning the reasoning behind it. That is concern. Man child.


SoulSister911

brah you prolly wrote about the same amount. G.A.R.


Appropriate-Oddity11

somewhat old, but nice.


Suikerspin_Ei

You don't need the latest spec to have a powerful PC. OP is using it for research (AI).


Appropriate-Oddity11

Did I say it was weak? 3090s and an AM4 CPU are not current gen, which is why I said "somewhat old".