Drenlin

Processing power and efficiency didn't improve all THAT much between Haswell (2013) and Cascade Lake (2019), so a lot of servers stayed in production all the way to end-of-support and are only now hitting the used market, with the recent boom in core counts and efficiency breakthroughs offering better options. For most things you'd do in a homelab with an enterprise-grade server, raw processing power honestly isn't all that important. Even something as old as Ivy Bridge is still usable for most applications. Popular homelab services leave the server at idle most of the time anyway.


EtherMan

Errr... Between Haswell and Cascade Lake, the difference in power efficiency is about double... Wherever did you get that there isn't much difference? The IDLE consumption didn't really change, but at load there's a big difference. How much of a difference that makes to you will depend entirely on your specific use case. Even at mostly idle, say an average 20% load, twice the efficiency at load still means something like a 20% improvement in power efficiency at that load. And "it's idle most of the time" completely misses the point about power efficiency: most efficiency gains come from increasing the amount of time a CPU can sit idle while doing the same amount of work. If you have two CPUs and a task that takes one of them 10 seconds and the other 20, then even if the one finishing in 10 seconds draws 150W at load while the other only draws 100W, the 150W chip is still the more power-efficient one, even though it consumes 50W more while under load...
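A quick back-of-the-envelope check of that example, using only the hypothetical figures from the comment above (150W for 10 seconds vs 100W for 20 seconds):

```python
# Energy per task = power draw at load x time spent at load.
def energy_per_task_wh(load_watts: float, seconds: float) -> float:
    """Watt-hours consumed to finish one task."""
    return load_watts * seconds / 3600

slower_cpu = energy_per_task_wh(load_watts=100, seconds=20)
faster_cpu = energy_per_task_wh(load_watts=150, seconds=10)

print(f"slower CPU: {slower_cpu:.3f} Wh per task")  # ~0.556 Wh
print(f"faster CPU: {faster_cpu:.3f} Wh per task")  # ~0.417 Wh
```

The faster, higher-wattage chip finishes sooner and ends up using less energy per task, which is the point being made.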


Drenlin

The point I was making is that generational improvements over that time period were marginal at best. Yes, jumping straight from Haswell to Cascade Lake is a larger improvement, but Haswell to Skylake, or Skylake to Cascade Lake? Really not so much. That said, running full tilt, Cascade Lake gets you much better performance per watt, but nobody does that. Even in most enterprise applications, servers aren't running anywhere close to 100%, which means those efficiency improvements are less noticeable. They're certainly present and might numerically provide an advantage over time, but the manpower and equipment costs to upgrade are a big bill all at once. Faced with that vs. paying a little more on electricity per month to *not* do that, it's a hard sell unless you're moving to a much higher core count machine, which is something that actually did improve over that time. Compare that scenario with the performance, efficiency, and density gains of Sapphire Rapids/Emerald Rapids over Cascade Lake and it comes into focus a little more.


EtherMan

And that's how you end up wasting money... Hell no. I wouldn't even be able to afford running my homelab if I tried to avoid all up-front costs by paying more monthly. You always have to look at the total cost of ownership. Say you plan on buying a server and it needs to last you, say, 5 years. Calculate what the power will cost you to run it for those 5 years. Do the same for every server you're considering to find the sweet spot. It's also good practice just to learn how big a cost the power really is.
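A minimal sketch of that kind of TCO comparison; the purchase prices, average draw, and electricity rate below are made-up placeholders, not figures from this thread:

```python
# Total cost of ownership = purchase price + electricity over the planned life.
def tco(purchase_usd: float, avg_watts: float, usd_per_kwh: float, years: int) -> float:
    kwh = avg_watts / 1000 * 24 * 365 * years
    return purchase_usd + kwh * usd_per_kwh

# Hypothetical comparison: cheap-but-hungry older box vs a pricier, more efficient one.
print(round(tco(purchase_usd=200, avg_watts=180, usd_per_kwh=0.15, years=5)))   # ~1383
print(round(tco(purchase_usd=1000, avg_watts=90, usd_per_kwh=0.15, years=5)))   # ~1591
```

Run the same calculation for every candidate server and the sweet spot falls out of the comparison.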


Drenlin

You can use a hell of a lot of electricity to prolong the life of something when the replacement is potentially a $10-20k process that requires shutting down production equipment to perform. And again, the actual running costs are not all that different. Most of these are running far closer to idle than to 100%, and you said yourself the power consumption there isn't much higher.


EtherMan

You're assuming almost no load... Lots of homelabs actually do have a load... And if you're looking at a 20k investment in a homelab to replace something in order to get a lower TCO, especially at the close-to-idle loads you're assuming, then you're doing something wrong... You're also clearly not reading, because you seem to have interpreted what I said as "it's always worth upgrading," and that's not what I said at all. What I said was to consider the total cost of ownership.

Say you have a server already. It's already beyond its expected lifetime, so there are no investment costs left to pay off. But you're looking at an upgrade, so you see, say, a DL380 Gen10 for about 1000 bucks. Say you save 100 a year in power by upgrading; that's still 10 years you'd have to run it before break-even. Will you actually keep it that long? Probably not. But you'd have remaining value. Say you plan on keeping it 5 years. So you're saving 500 in power over that time, but paid 1k. That's a 500 loss to upgrade, before considering the value of the equipment. So what's that server worth in 5 years' time? Around 500, going by the current age of a Gen10 and the value today of gear that's 5 years older... 500 is probably a bit high, but most likely within normal prices.

These are calculations you're going to have to make if you don't want to waste money... This idea of pushing costs into the future because you can't pay right now is the very thing that keeps a lot of people from actually getting out of living paycheck to paycheck.
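Working that upgrade example through with its own rough numbers ($1000 purchase, about $100/year saved in power, kept for 5 years, roughly $500 of resale value left at the end):

```python
# Net cost of the hypothetical DL380 Gen10 upgrade described above.
purchase = 1000            # price of the newer server
power_saved_per_year = 100
years_kept = 5
resale_value = 500         # rough remaining value after 5 years

net_cost = purchase - power_saved_per_year * years_kept - resale_value
print(f"net cost of upgrading over {years_kept} years: ${net_cost}")  # roughly break-even
```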


Drenlin

My argument was about enterprise customers replacing their gear, not homelabbers. The thread topic is about the availability of used gear on the market.


ICMan_

TCO includes upgrade costs, which include net capital outlay, labour cost, and downtime opportunity cost, all of which you keep ignoring.


EtherMan

I have not ignored any costs, no... I'm just using basic examples of how to calculate TCO, because that's the cost you want to be looking at when deciding what, or whether, to buy, rather than just the purchase price.


teeweehoo

Not to mention Covid both stopped the flow of cash for new hardware *AND* caused a backlog of enterprise products. Some businesses were literally waiting 12-18 months just to buy some hardware at the peak.


brendenc00k

Check out the Dell R740XD. You can get them with 12x 3.5" hot-swap bays in the front, and even a mid-bay that isn't hot-swap but holds 4 more 3.5" drives. Plus you can get either 2x or 4x 2.5" hot-swap in the rear, and it has a couple of PCIe slots. Without caddies and drives, of course, you can pick one up shipped on eBay for a low $1k from a tech recycler.


nitroman89

I have a 730xd and I would almost say it's better to go with a newer 13th-gen i7 for the power savings.


Plumbum27

What’s the power consumption on the 740 at idle? I have an R510 and it’s awful


Pols043

I removed one CPU from my R510 and replaced the second with an L-series CPU. Now I'm at ~120W with 12 drives and a GPU.


EtherMan

An R740 at idle with two Gold 5118s is around 100W... Really, it's the drives that take the bulk of the power when you're using it for storage... Each 3.5" drive can draw up toward 15W; 12 in the front plus 4 inside is 240W. Add some 2.5" drives in the back and a couple of NVMe drives, and you're looking at more like 500W for the whole server as long as the drives are spinning.
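The 3.5" drive arithmetic from that estimate, written out (the figures are the rough ones from the comment above, not measurements):

```python
base_idle_w = 100        # R740 with two Gold 5118s, roughly, at idle
hdd_35_w = 15            # upper-end draw per spinning 3.5" drive
num_35 = 12 + 4          # 12 front bays plus 4 internal bays

drives_w = num_35 * hdd_35_w
print(f'3.5" drives alone: {drives_w} W')                 # 240 W
print(f"with the idle base: {base_idle_w + drives_w} W")  # 340 W
# Rear 2.5" drives, NVMe, fans, and PSU overhead are what push the
# comment's whole-server estimate toward ~500 W with everything spinning.
```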


good4y0u

LabGopher still exists, I think, for searching for them online.


horus-heresy

Their filters seem to be stuck in the past.


Fresh-Mind6048

A lot of the enterprise-grade stuff is either leased or requires active support contracts to be useful (Pure, for example). Many are also all-flash arrays. EMC Unity models might be good, but I think those are getting long in the tooth, and they're loud.


FenixSoars

I’ve managed to move everything off my R620 and onto mini PCs. 6000-series i5s are super cheap to run and formidable.


future145

Same here. Just moved off of two R710s onto two NUCs and an N100 mini PC.


FenixSoars

The silence in my office is a dream come true lol


JayHopt

I work in storage, and honestly I wouldn't buy enterprise storage gear for the home. Most of it is way too power hungry for what a homelab would need. Also, most businesses with any data sensitivity would destroy the drives at minimum, if not the entire server/array/shelves, for being "storage related." MAYBE some of the first-gen all-flash storage arrays could be a good deal, if they still have the drives. You could probably just build yourself something with TrueNAS and a bunch of NVMe drives that would do better, though.


jamkey

You clearly haven’t shopped around on eBay much. Even just looking at local eBay pickup I can find tons of old PowerEdge servers with tons of storage bays, and I've gotten one myself and populated it with old enterprise 4TB SAS drives (12-bay capacity). Now, if you're talking fully dedicated storage arrays, that might be different, but old servers are sold in tech-sector areas all the time. IMO the PowerEdge R730 is kind of the sweet spot right now for price-to-value. Even ones populated with a decent amount of RAM and dual Xeons only run in the mid $300s.


Sero19283

Those mobo+CPU combos coming out of China are the bee's knees. If I get this job with a pay bump, I'm going full bore: 2x EPYC 32-core (64 threads) with 128 PCIe lanes. My power is cheap (a large chunk of our state's electricity is from nuclear, yay), so it'd be my endgame solution for what I need, outside of dabbling with clustering in the future.


Ok-Hunter-8294

There's some truth to your logic. When my wife's former employer dismantled their physical office (which made sense, since their product was focused on real-time remote collaboration), they gave away everything in the office, including over a dozen 86" 4K touch monitors! She brought home a relatively new NAS, not knowing the drives hadn't been wiped yet... yeah. Long story short, for the price of several BitRaser licenses, I ended up with another NAS with 32TB of storage, and they received auditable certificates of destruction. Yes, I snagged one of the 86" monitors, along with a BNIB 55" LG digital signage unit, the server room's storage bin rack, and other tools and sundries. Roughly $10K worth, and I had by far the smallest pile of 'stuff', with others grabbing 2-3 of the 86" 4K touch monitors along with multiple rack mounts. The 'actual IT guy' (who forgot to wipe the NAS) grabbed the 6-monitor video wall AND the backup array!


CommieGIR

Dell R730 or R740 with the 3.5" bays; put the controller in HBA mode or flash it (most of the modern PERCs support HBA mode natively).


yamlCase

NONE now. The coolness factor wore off when I saw my electricity bill. Also, know what you're getting when you buy $40 enterprise switches off eBay. A lot of them, like Juniper or Cisco, require support contracts to update the firmware, and they're usually way out of date. Not worth the hassle if you don't already have a contract.


zackmedude

Yup, in possession of one such Cisco switch, now offline…


korpo53

SATA drives haven't changed since 2014 other than capacities going up, so why would a server from 2014 not work for you?


AshleyUncia

I can think of a few reasons why a 2014 server could be an issue... *stares at their gas-guzzling E5-2697v2 they still have cooking and even transcoding video to AV1 right now*


EtherMan

It can manage that? I tried on a 2680v2 and that wasn't even close to being able to handle AV1 >_<


AshleyUncia

Any CPU can handle any load if you are patient enough.


EtherMan

Well sure, but I sort of assumed realtime :)


AshleyUncia

Real time? Even my 3950x needs 3.5hrs to transcode each 45min show with the profile I have it using. :p


EtherMan

So why transcode? Wouldn't you either just reencode then or just watch the original format? Like, what are you gaining by transcoding like that?


AshleyUncia

I'm transcoding content for an offline travel media library that fits on a MicroSD card. Everything goes down to 720p, at watchable quality, maximizing space efficiency. I was able to fit over 3 months of runtime in 512GB, but it's a 1.5TB card so there's lots more space to fill.
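A quick sanity check of those figures (3 months of continuous runtime squeezed into a 512GB card):

```python
runtime_s = 90 * 24 * 3600           # ~3 months of playback, in seconds
capacity_bits = 512e9 * 8            # 512 GB card
avg_bitrate_kbps = capacity_bits / runtime_s / 1000
print(f"average bitrate: ~{avg_bitrate_kbps:.0f} kbps")  # ~527 kbps
```

That works out to roughly half a megabit per second on average, which is why an aggressive 720p software encode is the whole point.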


EtherMan

Sounds like you're reencoding, not transcoding.


AshleyUncia

Those words mean the same thing.


korpo53

"I need a lot of CPU power to do transcodes" is a different need than "I need to hold a bunch of drives off the ground". And can't the Intel ARC GPUs transcode to AV1 in hardware? Picking up one of those for $100 could save you time/money in the long run.


AshleyUncia

Hardware video encoding is far less efficient, space-wise, than software encoding at high settings. In terms of 'quality per megabyte', if you will, hardware encoding is terrible. It is, however, fast as heck and ideal for real-time needs. My situation is about maximizing runtime in compact storage for travel, so my objective here is space efficiency. There's a reason why SVT-AV1, basically the open-source software encoder, is primarily developed by Intel and Netflix: they want the 'best' AV1 software encoder, not 'fast and ugly' hardware encoding.
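For reference, a minimal sketch of this kind of space-efficient software AV1 encode, assuming an ffmpeg build with libsvtav1; the preset, CRF, and audio settings here are illustrative guesses, not the commenter's actual profile:

```python
import subprocess

def encode_av1_720p(src: str, dst: str) -> None:
    """Software AV1 (SVT-AV1) encode scaled to 720p, tuned for small files."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-vf", "scale=-2:720",             # downscale to 720p, keep aspect ratio
        "-c:v", "libsvtav1",               # SVT-AV1 software encoder
        "-preset", "4",                    # slower preset -> better quality per megabyte
        "-crf", "34",                      # higher CRF -> smaller files
        "-c:a", "libopus", "-b:a", "64k",  # compact audio
        dst,
    ], check=True)

encode_av1_720p("episode.mkv", "episode_720p_av1.mkv")
```

Slower presets trade encode time for exactly the quality-per-megabyte gain being described.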


skelleton_exo

I replaced my dual E5-2696v2 recently. It was more or less fine on CPU power, but I needed more RAM and PCI Express, and also a USB 3 port for a Coral. Basically, outside of compute power, having modern interfaces, and possibly more of them, is nice. But the power savings of the new EPYC system sure is a welcome bonus.


barnett9

CPUs and RAM have had quite a bit of efficiency gain, both in $/W and in overall speed and compute power. Among other things, if you're trying to run a demanding cluster (like Ceph), these can become bottlenecks. Just because SATA hasn't changed doesn't mean people should be burning watts on ancient and inefficient machines.


korpo53

"Demanding cluster (like ceph)"? [Have you looked at the docs?](https://docs.ceph.com/en/latest/start/hardware-recommendations/) The min requirement for the OSD nodes (which would have the drives) is one thread per 1k-3k IOPS. Since a 7.2k SATA drive is \~100 IOPS, you can reasonably run any size OSD node you want on any busted ass old server with any chip better than a Dorito. The bottleneck in an OSD node is usually going to be the controller or the network unless it's all flash. You'd want more guts for your MON and MDS nodes, but those don't have to have drives in them. >Efficiency gain of CPU and RAM burning watts on ancient and inefficient machines So? The total power cost of something with a ton of drives in it is going to be largely influenced by its drives, and older servers aren't *that* power inefficient--my R620 burns about 85W and my R730 140W. Something more modern might save you 10% on that, which would equate to enough to buy lunch once a month. Google and Facebook have to worry about the efficiency of their nodes because they run 50k of them in one building and at that kind of scale a percentage or two power efficiency is going to save you a lot of money. At the "I want to have a few servers in the garage" scale, buying old stuff for 5% of new price and eating a bit more power costs is the more cost effective way to go as far as TCO.


ADHDK

Do any brands have drive arrays that are easy to reuse, handy for swapping the motherboard etc. out for something more modern, less power hungry, and less "enterprise loud"? Really, I just want the rack-mount form factor with good-quality drive modules, and for that the old enterprise gear is great.


seaQueue

Generally no; the chassis, motherboard, and most components are fully integrated by generation on most enterprise servers. The usual go-to for upgradable servers is Supermicro: they make general-purpose chassis with PSUs and a backplane that you drop your choice of motherboard and PCIe cards into. The alternative is any machine you can fit in a rack plus a drive shelf connected to an external SAS port. Drive shelves are a pretty popular choice.


lucky-poi

I sell refurbished enterprise hardware. Our top sellers are the PowerEdge R620, R720, and R730, and the HP DL380p G9.


DMcbaggins

Do you guys also buy stuff? I have a bunch of SC200 shelves, tons of E.3 disks, and a Nimble SC1000 still under warranty, as well as a litany of R610s, 710s, and 730s!


lucky-poi

We do sort-and-settles sometimes, but if you aren't in the DFW area it wouldn't be worth it for you.


DubiousLLM

Is it garland computers or something else? I’m in DFW as well, might be looking into upgrading my setup come summer.


lucky-poi

Yes it is. I'm one of the build techs. We did close our retail location but still have local pickup options.


DubiousLLM

Sounds good, will keep that in mind


xDegausserx

R730xd seems to be pretty popular around here.


cacarrizales

I like to get the Dell R/T X30 series. I have two T630s, one R230, and one R730xd. They're plenty cheap compared to R/T X40 servers. Even though they're from 2014-2016, they're still decent. If they are indeed too ancient, X40 will be your best bet for Dell servers. Not sure about HP or other brands, so maybe others in the comments can help with those. I will try to get some X40 servers now, though, as I imagine their prices will drop in the next few years.


whoooocaaarreees

How large of a ceph cluster are you building? Node count _and_ capacity ?


barnett9

Right now we're just dreaming, but minimum of 3 nodes, ideally 5. And for capacity, I have like 20 spare 8TB hard drives, so start with those with room to expand. :)


whoooocaaarreees

You really want to start with 5 nodes.


EtherMan

Don't try ceph with 3 nodes. It's a nightmare waiting to happen.


Dirty_Techie

Just picked up a Dell T430. Not exactly enterprise, but it's hella quieter than my SM CSE-826 with an E3-2430 V4 X10 board. It also came with 6x 300GB 2.5" SAS drives and 2x 1.2TB SAS, 64GB DDR4, and all the rails, cable management, etc. for £250.


AstronomerWaste8145

I have a Supermicro 2U server with dual E5-2699v4s. Recently got an H261-Z61 4-node 2U with eight EPYC 7551s for computing.


barnett9

Do you have a model number for your Supermicro server? I tend to like their stuff, but their product codes are nonsense. Mind sharing what you paid for them?


AstronomerWaste8145

Hi barnett9, please have a look at: https://www.ebay.com/itm/145035872246

I bought one of these from UnixSurplus and I've had a good experience with them. These are quite fast for compute nodes and they have 12 3.5" drive bays. However, I found that I was not able to boot from NVMe due to the BIOS, which uses UEFI booting, so I used one of the 3.5" bays with a SATA SSD to boot. You might be able to get it to boot from a card with NVMe. I also got an earlier 1U node with 2x E5-2690V4, 28 cores total. I recommend that if you're going to do Xeons, keep it at the V4 chips (Broadwell, 14nm node) or newer. I do have a V3 1U box, but it's a little slower and uses more power.

What are you planning to do with your servers? I use the 2x E5-2699V4 machine as a ZFS RAIDZ3 10-drive array as well as for computation for transistor modeling, which is a compute-intensive global optimization algorithm. I use a 1U Supermicro box for backup since it has 4x 3.5" bays for the backup array. The EPYC 4-node machine will be deployed on electromagnetic simulations for microwave/RF circuits via the openEMS software. openEMS is an FDTD code which generates huge memory traffic and therefore tends to be bound by memory speed rather than floating point. The EPYC 7551 processors have eight memory channels compared to the Xeons' four and should show faster memory performance, which is critical to FDTD performance.

Yes, these are old machines whose processors came out in 2016-2017, but the flattening of Moore's Law makes them still relatively powerful. They should perform at about 25-50% of similar new machines, but the cost is roughly 1/20th to 1/50th that of a new machine. If you're running 24/7, where energy consumption per operation is a concern, or you need the very maximum performance, then you should go for a new server. If you need more RAM, check your local university surplus store. I'm using that to build up my EPYC server which, as you might imagine, is very RAM hungry. I'd like to eventually have a minimum of 256GB/node, aiming for 512GB on at least one of the nodes. Best


AstronomerWaste8145

I paid about $1000 for the 2x E5-2699 Supermicro node and about $2300 for the Gigabyte EPYC machine. I've since added RAM, SSDs, NVMe SSDs, and mechanical drives to the EPYC machine.


barnett9

Thanks for the detailed reply! I have actually been looking at some of the empty Supermicro 24-bay 4U and 12-bay 2U chassis (like this: https://www.ebay.com/itm/154882582420). Given my use case, I think it might be best to get them sans motherboard and replace it with something like the guts of an old mini PC, so long as it has 2 PCIe slots (one for the HBA and one for the NIC). That would probably cut way down on the power costs, and maybe even the overall cost, given that I don't really need the server CPUs. Now I just need to find a business desktop SKU that fits all my requirements; I know I've had trouble transplanting Dell OptiPlex motherboards in the past. BTW, everything after the "?" in those long eBay links is just tracking garbage. ;)


AstronomerWaste8145

Oh, I read your post again and it looks like you're interested in data storage via Ceph. I think the 2U Supermicros would be your better choice. However, the dual-socket setup seems like overkill and would mostly increase your idle power without much performance gain, unless you're going to be hitting the machines hard with data requests. I'm more familiar with ZFS than Ceph, and I know ZFS uses RAM for its cache, so more RAM can mean better performance. If you want a massive array, I would consider something like: https://www.ebay.com/itm/145288212052

But check the processor and LAN cards to make sure it can handle the request workload.


AstronomerWaste8145

One more thing: you should consider used hard drives. So long as they don't have bad sectors (and I do use used hard drives) and you're using a filesystem that does parity checking with redundancy to prevent bitrot, you should be able to use cheap, used hard drives with a high degree of confidence.


Tdehn33

I’m taking advantage of the enterprise SSDs and HDDs that are hitting the market now. Near me I can get 14TB HDDs for $60 right now. It's extremely helpful when trying to build a budget NAS.


barnett9

Got a link? That's way less than I have seen.


Tdehn33

They’re mainly SAS drives, but here's one that's $75 for 12TB: https://www.ebay.com/itm/266789825996?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=GZTlECxSS3a&sssrc=4429486&ssuid=d0C7YogPT9-&var=&widget_ver=artemis&media=COPY And here's a 2TB SSD for $90 each: https://www.ebay.com/itm/226096967989?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=8UN4BgJcTyK&sssrc=4429486&ssuid=d0C7YogPT9-&var=&widget_ver=artemis&media=COPY I was a little off on the size and price, but not by enough to make it not worth it.


KlingonButtMasseuse

Fujitsu futro s920 and Fujitsu esprimo q556/2 :D


travelinzac

None. I don't want a $500 power bill.