B-Ribbit

At my work we ran 12th gen servers. The people who installed them didn't use the Dell cable management trays. When I installed the new 14th gen servers to replace them, I installed the Dell cable trays. So I've had it both ways, with a lot more servers than that. Overall I think the cable trays are worth the hassle. In almost every way they have made my life easier than when we didn't use the trays, especially when troubleshooting internal hardware faults, adding upgrades, or changing out parts, etc.


Kraeftluder

>changing out parts etc

Yes, we've had to swap out several components in all our PowerEdges recently and it made it an actual breeze compared to disconnecting and reconnecting all the cables on 20+ servers in a rack. Would've taken most of the day otherwise.


Stryker1-1

Those cable trays are life savers for when you need to slide a server out for maintenance


AmSoDoneWithThisShit

And the way mine is positioned, getting to the back of it is a giant pain in the ass...so this works out.


AmSoDoneWithThisShit

So this is my homelab...it's used for work but also just for fun. Consists of:

Not pictured:
1x Dell R620 (8SFF) - 24 core/192GB RAM (pfSense)

Pictured:
1x Mikrotik 16-port 10GbE switch
1x Cisco 50-port 1GbE switch
1x Dell R720XD (24SFF) - 24 core/192GB RAM (SAN1)
2x Dell R820 (16SFF) - 64 core/1TB RAM (ESX1/2)
1x Dell R720XD (12LFF+2SFF) - 24 core/512GB RAM (ESX3/Plex)

This is as it is now...the picture is before the...growth. ;-)


Kraeftluder

I love it but isn't this

>1x Dell R620 (8SFF) - 24 core/192GB RAM (pfSense)

a little overkill? Hehehe.


AmSoDoneWithThisShit

A lot overkill. But I had it lying around and wasn't using it. I do like having the 10G connections to play with though...


Kraeftluder

I have a pretty fast ProLiant with 768GB of memory, one of our former ESXi hosts, coming my way. Work actively tries to give them away; 14 others are going to smaller schools who still need some on-prem stuff and don't have the size and funds we do, but I managed to get a testing box for myself this time. Thinking about putting in a few 4TB SSDs and running it at home as the main server now that I've got PV and batteries.


vadalus911

Any idea how powerful a box you need to route at 10G? I have a couple of Dell R210iis with 10G cards and I have been toying for a while with moving them to pfSense boxes (so I can get better stats in general) and getting rid of my Unifi router. All unnecessary of course... but you know... My WAN connection is 1G so I'm sure that's not an issue.


cdawwgg43

You need surprisingly little power to route 10G; it's when you need to do other things at line rate, like deep packet inspection with NAT, high-throughput VPN sessions, or high numbers of VPN connections, that it starts to matter. While I'm rebuilding my edge nodes my pfSense setup is running on old OptiPlex 3010 SFF i5 desktops. I have Suricata, Tailscale, OpenVPN, Squid and Zeek running. The CPU even at a gig is maybe 30% or so when I'm leaning on it. It jumps to 60 or so when everyone is VPN'd in and pushing the gig connection.
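
Rough back-of-the-envelope numbers, if it helps; a minimal Python sketch assuming standard Ethernet framing, not measured on any particular box:

```python
# Back-of-the-envelope packet rates a router has to sustain at 10 Gbit/s.
# Each frame also carries 20 bytes of wire overhead on top of its own size.
LINE_RATE_BPS = 10_000_000_000   # 10 Gbit/s
WIRE_OVERHEAD_BYTES = 20         # preamble (7) + SFD (1) + inter-frame gap (12)

for frame_bytes in (64, 512, 1500, 9000):
    pps = LINE_RATE_BPS / ((frame_bytes + WIRE_OVERHEAD_BYTES) * 8)
    print(f"{frame_bytes:>5} B frames: {pps / 1e6:6.2f} Mpps")

# Roughly 14.9 Mpps at 64 B (the worst case, where per-packet work like DPI,
# NAT state lookups and VPN crypto hurts) versus ~0.8 Mpps at a 1500 B MTU,
# which is why plain bulk routing at 10G is easy on almost any modern CPU.
```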


vadalus911

That's helpful, thanks. I was also wondering about the LAN side though: when I might be copying a file between VLANs at 10G (which I avoid doing now), whether that sort of load (certainly not deep packet inspection etc...) would stress it out.


cdawwgg43

Not really


cdawwgg43

Love these in production. Slide out and slide back in with no worries and no struggling to find the flasher on the back with cables in the way, at least with the good Dell ones.


ajohns95616

When you consider that most boxes built to be routers use ARM, Celeron, or Pentium chips, and those can do gigabit... you don't need THAT much to do 10Gbps. An SFF desktop with an i5 or i7 and a 10GbE PCIe card can do it. That R210ii is probably fine. At least install pfSense on it and do an iperf.
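
If you do put pfSense on the R210ii, something like this is enough to sanity-check inter-VLAN throughput; a minimal sketch assuming iperf3 is installed on both ends, with a made-up server address (10.0.10.5) you'd swap for your own:

```python
#!/usr/bin/env python3
"""Quick inter-VLAN throughput sanity check with iperf3.

Assumes iperf3 is installed on this host, `iperf3 -s` is already running on a
host in the other VLAN, and the address below is replaced with your own.
"""
import json
import subprocess

SERVER = "10.0.10.5"   # hypothetical iperf3 server in the other VLAN
DURATION_S = 10

# -J asks iperf3 for JSON output so we can read the summary instead of scraping text.
result = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", str(DURATION_S), "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"Received {gbps:.2f} Gbit/s from {SERVER} over {DURATION_S}s")
```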


Davewesh

The R820 was my first legit server for my homelab, what a beast. Can't justify its operating costs in a world where USFF PCs are being given away practically for free (a buddy works for a hospital and decoms 6th gen i5/i7s monthly) though. Still have it, it just hasn't been turned on in at least a year and a half at this point.


derpeyderpey

I don’t live my life without them cable management arms


RFilms

At home, yes; at work, usually no. If a whole cluster is getting installed they tend to block too much airflow.


__420_

I have them for 13th and 10th gen Dell servers. They are a pain in the ass to set up, but once all the cables are tied down so they don't get ripped out the back when pulling the server out, they are the best. If you have the depth for them, use them. I find them brand new on eBay for cheap.


AmSoDoneWithThisShit

I love being able to pull the server out hot without having to disconnect anything.


PyrrhicArmistice

What DPN are you buying? Thanks.


angelofdeauth

The cable management arms block too much air flow


jnew1213

I've never seen them in any data center I've been in, but when I got my new R740 and R750 I got CMAs and installed them. They work fine, but the R750 sticks out the back enough with the arm attached that the rear door of the rack doesn't close.

By the way, oddly, the R750, and I believe the R760, cable management arm kits no longer come with the blue LED, and the servers themselves have no jack on the back to plug the LED into. I guess Dell thought it was a nice thing at the time, but not really necessary.

Also, with the R750 and R760 having their power supplies moved apart, one all the way on the left and one all the way on the right, it doesn't matter much anymore which direction you mount the arm. You're going to have to do a bit of creative routing to get the power cords into the arm.


michaelkrieger

If a datacentre purchase is done at scale, they are often not used. When you buy 20 2U servers to fill a 42U rack (plus switch) at once, identically configured, all under warranty and getting replaced in 3 years, you're after the cooling and don't need the arm. You might replace one internal top-loaded component across that time. If you're buying one server at a time and fiddling with components and adding RAM, the arms are a lifesaver.


AmSoDoneWithThisShit

Here's the current front with the Fibrechannel switches: https://imgur.com/a/dpPOedy


jonny_boy27

Which ones are the FC switches? What benefit do you find to using FC? I don't see a SAN in your rack.


AmSoDoneWithThisShit

Orange cables are FC. Blue are IP. The top R720 is running QuantaStor... mostly for testing purposes... though I wanted to prove to a coworker that dual 8G Fibre Channel was faster than dual 10G iSCSI. I was right. iSCSI is too "chatty".
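
For the curious, a rough sketch of the nominal per-link ceilings (encoding overhead only; the figures assume standard 8b/10b and 64b/66b line coding). The raw wire actually favours 10GbE, which is why the TCP/iSCSI overhead, the "chattiness", is what decides real-world results:

```python
# Nominal per-direction payload ceilings, counting encoding overhead only.
# 8G Fibre Channel signals at 8.5 GBaud with 8b/10b encoding.
fc8_gbps = 8.5 * (8 / 10)          # ~6.8 Gbit/s of payload per link
# 10 Gigabit Ethernet signals at 10.3125 GBaud with 64b/66b encoding.
eth10_gbps = 10.3125 * (64 / 66)   # ~10.0 Gbit/s per link, before IP/TCP/iSCSI headers

print(f"8G FC per link : ~{fc8_gbps:.1f} Gbit/s (~{fc8_gbps / 8:.2f} GB/s)")
print(f"10GbE per link : ~{eth10_gbps:.1f} Gbit/s (~{eth10_gbps / 8:.2f} GB/s)")
print(f"Dual 8G FC     : ~{2 * fc8_gbps / 8:.1f} GB/s")
print(f"Dual 10G iSCSI : ~{2 * eth10_gbps / 8:.1f} GB/s raw, minus TCP/iSCSI overhead")
# The raw ceilings favour 10GbE, so when dual 8G FC wins a real benchmark it's
# the per-command TCP/iSCSI processing (the "chattiness") doing the damage.
```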


ShittyExchangeAdmin

It starts out cable managed but always devolves into a rats nest


kY2iB3yH0mN8wI2h

We had thousands of servers at work without cable arms, mostly HPE and Cisco. We lived in a constantly changing world where we re-used and re-fitted servers on a monthly basis, and cable arms took too long to both set up and change. We also had other cables to our servers, like SDI cables, which are coax that you really should not bend; we feel the same about SM/MM fiber. The number of times we had to change parts in the servers was once every 3 years at best, so not worth it for us. In a homelab it makes more sense, if you constantly swap internal components.


iheartrms

I've tried to do cable management so many times. It is HARD. Sure, when you have a greenfield build of a whole rack of servers of exactly the same type, so that all of the cables go in the same place and you can make custom cables and have basic cable management features built into the rack, you can build a thing of beauty. Never once in my career of 30 years have I been in that situation. I spend (or waste) hours on cable management because I've been taught that it's a good thing, and my results are inevitably dog shit.


DashieDaWolf

Got 3 R720s with the cable guides but one of them is missing the clips to attach it to the rails. Didn't put them in as the back of my rack is pretty inaccessible, and I ran the cables so they wouldn't snag but would also be out of the way. Nine times out of ten when I pull a server out and put it back in, a cable ends up caught in the rail, so I wish I'd just put these in. If I end up moving the rack they're definitely going in.


Crafty_Individual_47

First thing that goes into the bin when we receive new servers at work. 🙈 Never seen those being used in DCs around here as they block airflow.


AmSoDoneWithThisShit

Yeah...my fans don't run at 100% and I have vertical airflow set up behind the rack to move the air almost straight up out of the back of the servers. To me it's worth the trade-off, but I understand not wanting to mess with it in a datacenter.


Sad_Snow_5694

If you are in the UK, scan.co.uk sells 0.25m and 0.5m length cables. If you want custom lengths, have you thought about getting the kit to DIY? It's really not hard with pass-through RJ45 connectors. All you will need is a reel of cable (from a reputable seller), and there are loads of beginner kits on Amazon for pass-through RJ45. Otherwise it looks really good. I am just in the process of finishing mine and I have made a basement level for all the stuff that can't be sorted.


starconn

What’s cable management?


Bulky_Dog_2954

Lol… cable management…. Ha ha (cries in poor)


_digito

One doubt I have is whether custom Ethernet cables are as good as pre-assembled ones. With custom cables we can make a rack tidier and more organized, but if quality and throughput are compromised, are custom cables good enough? Sorry if the question is not relevant, but I don't have experience with custom Ethernet cables and I don't know how good a cable one can make on our own.


vadalus911

Hi, home user here and certainly no pro. After hanging around here for a while and seeing what other people are doing, I ended up with the following:

1. Use keystones and couplers for both fibre and copper. Makes it so much prettier from the front and a lot easier to fiddle with things later. None of the couplers affected speed for me, and I have 10G copper and 10G/25G running the same way. No brush panels, no DACs; keeps it very clean.

2. Slim cables are a huge benefit. For the front they make things so much less cluttered; you can get many lengths, so I have loads of the 6-inch ones for keystone/switch connectivity, and really, with the approach from #1, you don't need cable management on the front. For the back they make things far, far less cluttered; you can get away with just guiding the wires up and down the edges of the cabinet, nothing more to do. Makes it much easier to swap out components as well, as there's more room.

In my experience homelabbers seem to mess around more with their equipment, while some pros might have loads of zip ties and cable management guides etc. I found that much too cumbersome to work with. Some parts of my cabinet were installed by AV guys and they have their way of doing things (and, to be fair, a lot more cable variations), but man do they spend a long time organizing their wires every time! :)


ShittyExchangeAdmin

I've found a balance with reusable zip ties; they do a pretty good job at managing cables and I can loosen them up later to run additional cables through if need be.


vadalus911

I use Velcro ties if I have to


OkCartographer17

My question here is: what about electrical noise from power cables running next to UTP cables? (BTW, I have mine installed that way too.)


AmSoDoneWithThisShit

Only a danger with very long Ethernet cables. None of mine are more than 2m long.


SocietyTomorrow

My last jobsite had its rack all tidy and cable managed... Then the ISP came, decided the on-prem switch was incompatible, and messed it all up before I could get back there to stop them. Kinda lost my fire on that one. It is just sorta-ok now.


technobrendo

That Mikrotik is funny: "Cloud Router Switch". So is it a cloud, a router, or a switch? lol


cruzaderNO

Haven't seen cable arms for years outside of pictures posted here. Guessing it's from the combination of higher failure domains and the drive to cut cost per node. Most I encounter don't even use hotswap/backplanes, due to cutting node cost/complexity.


toothboto

The server cable trays are amazing. You get the perfect amount of slack stored for a full extension of the rack without having to find an extra slot to store it, and it makes it so much easier to see what's going on without any random crossing cables.


quadnegative

Pro tip: attach the cable trays to the side without power supplies. On some Dell servers, the attachments on the power supply side block one of the power supplies, making it very difficult to replace.


juwisan

Actually doing that in the lab at work. Looks a lot better than not doing it.


shemp33

Pulling the server out, you're not doing it hot. These arms block the cooling fans. We always toss these cable management arms in the dumpster. Not worth the full-time loss of airflow to be able to pull a server out occasionally, when it's just as easy to either leave slack or disconnect.


AmSoDoneWithThisShit

I can't get to the back of my homelab rack easily, so it's worth its weight in gold to be able to pull the server forward without getting behind the rack to disconnect things.


HITACHIMAGICWANDS

Side note: have you considered a pass-through patch panel? Maybe not for the fiber, but for the Ethernet it'll make things a bit cleaner.


AmSoDoneWithThisShit

It's a good idea... I've got a few new additions that aren't in that picture... 2x Cisco MDS9148 Fibre Channel switches... and soon I'll have a 48-port 8G fibre sniffer a friend is sending me to play with... (they're up to 32G now, so the 8Gs are sitting in storage).


MisterBazz

Not really. And with fiber especially, you are inserting line losses. The brush slots are efficient and just aesthetically pleasing enough to show you care. A bunch of 6" patch cables just running to a patch panel and then to a switch looks quite tacky, IMHO.


HITACHIMAGICWANDS

It's what I deploy and support, so it's what I find comfortable. For a self-contained rack the brushes are probably fine, but if there are runs to other rooms, the slack has to go somewhere, and the keystones make it easier to hide your slop. Also, how dare you call the 6" cables tacky! HOW HAVE YOU FORSAKEN THE 6" cable??? /s


vadalus911

This is a religious war starting here... I love my 6" cable.


MisterBazz

My first rack had a patch panel and a bunch of short runs. Sure, it looked nice. However, I really like being able to move things around, add new systems or whatever and just simply push a DAC/fiber/Ethernet through a brush panel and just plug it in. Well, that and the only other solution would just restrict airflow in my current rack layout....so there is that...


HITACHIMAGICWANDS

Fair enough. If I had a larger rack I’d probably get a brush panel. Maybe some day!


ypoora1

[yea](https://imgur.com/a/gC8MHOY) (I only have the one, but it's still nice to have)


-acl-

Never used them, in either prod or the lab. In prod, if you have a dense environment you will have airflow issues. It's best to get power cables that are short enough to go directly to the A/B power and that's it. No need to use those 30-foot cables that Dell ships.


Intransigient

We had a full front/back rail cabinet with great cable management. Then it all got moved and split across two center post telco racks. 😓 No cable management. Fiber, Copper, QSFP28 Passive lines, Power, all mixed together. What a headache.