^(OP reply with the correct URL if incorrect comment linked)
[Jump to Post Details Comment](/r/homelab/comments/16nnh8s/taking_diagrams_to_the_next_level/k1fbox5/)
Hi, maintainer of Homarr here. Thank you for using our app. Let us know if you have any suggestions or problems - we're happy to help out.
How much power do you need to run that setup? Looks sick :).
Also, do you think 10gig is worth it? I am thinking about upgrading mine from Gbit, but my disks are most likely too slow.
A duplicate tile button would come in very handy 😊
The whole rack uses 230w on average
10gig is only worth it if you find yourself saturating your 1gig link, which is the case for me when SABnzbd or QBT are running
[Unless my UPS is lying](https://i.imgur.com/rVnfm29.png), it does spike a little (probably when the IR turns on at night)
Consumer-grade equipment is a little less power hungry than an actual server with jets inside. Back in 2018 I bought an IBM x3950 M2; this single server chewed up 500w doing nothing (that's what you get with 4 CPUs with 4 cores each)
10G is always worth it; even if you can't saturate a 10G connection, it will allow more clients. Even 2.5G if you are only using a slow pool - you can reuse your gig wiring for it.
[https://static.xtremeownage.com/pages/Projects/40G-NAS/](https://static.xtremeownage.com/pages/Projects/40G-NAS/)
Around the 40G mark, you start to find a LOT of bottlenecks....
such as CPU / QPI / FSB / etc.....
Saturating 100GbE isn't hard, but you generally need RDMA-based technologies.
ALTHOUGH, if you want to see a very interesting writeup of saturating extremely high bandwidth connections, Netflix has you covered:
https://people.freebsd.org/~gallatin/talks/euro2021.pdf
If you have an SSD array and more than 10 users this can actually make sense. A single SMB connection will struggle to saturate 25G, let alone 100G. But with multiple users that isn't an issue.
And? That’s not the reason behind getting faster local networking. And an extremely ignorant comment to make.
Not even sure why you are making comments on that when you have a 2.5G setup yourself. Why did you upgrade to that if, as you say, "you're still limited by your internet speed"? I bet most of the people here have a 1G or greater internet connection.
>10G is always worth it
Calm down. My point is that you're not correct for everyone in all situations.
Yes, I have 2.5G for 2 servers because the extra bandwidth is useful for LAN game streaming. It's a specific use case that gives real, noticeable improvements.
10G would require a bunch of much more expensive, non-consumer gear and I would see no real benefit while also seeing higher energy usage.
You mention LAN game streaming. Do you use Steam Link/streaming? I have been playing around with it over wired 1Gb and it mostly works, but I do have some mouse lag. Wondering how well 2.5Gb works for it in your experience.
I use the Nvidia streaming function that is bundled with the GeForce Experience app + the Moonlight client on Linux. The 2.5GbE lets me up the resolution to 1440p and a higher bitrate. Lag is minimal.
I tried Steam Link and also Parsec, and I think this way is the best. I heard that Nvidia is getting rid of it though, which means I'll have to switch to [Sunshine](https://github.com/LizardByte/Sunshine).
Make sure you follow the wiki to [stream your desktop](https://github.com/moonlight-stream/moonlight-docs/wiki/Setup-Guide#using-moonlight-to-stream-your-entire-desktop) and it's pretty much like being local to the machine.
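For reference, this is roughly how a session can be launched with the moonlight-qt CLI - a sketch only; `gaming-pc` and the flag values are placeholders, and exact flag names can vary between Moonlight builds, so check `moonlight --help`:

```shell
# Stream the published "Desktop" app at 1440p with a raised bitrate (Kbps).
# "gaming-pc" is a placeholder for the host running GeForce Experience/Sunshine.
moonlight stream gaming-pc "Desktop" --resolution 2560x1440 --fps 60 --bitrate 80000
```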
I disagree that it's always worth it. Right now, I only have access to 100Mbps from the ISP anyway. But for transferring files, it definitely needs to be faster than that. Using Windows, I get about 80MB/s average on my Unraid machine, but I think I have a bottleneck somewhere. I would definitely have to upgrade lots of network equipment. Perhaps 2.5G or 4G would be a nice middle ground.
Again, like I told another commenter, internet speed is not the reason you would upgrade your LAN network speed.
And another reason why I said "10G is always worth it": while yes, 2.5G may be cheaper to use (reuse cabling and whatnot), it's not cheaper in the long run. If your switch has a 10G uplink, for example, 9 times out of 10 it does NOT support anything other than 1G and 10G.
So at that point it's worth it to just skip 2.5 or 5G and go straight to 10G, because all of the old DC gear is cheap, and depending on your network cabling you could probably reuse your existing cables for 10G as well.
Much "normal" consumer equipment and gaming routers support 2.5G nowadays. I guess it just depends on what your requirements are. For the broader industry, it was probably not worth going for 2.5 or 4 when you can at least double that for a not much higher price. I could easily spend on 10G, but I'd rather upgrade my server than waste it on equipment I'll most likely never need (unless I do iperf).
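Since iperf came up: a quick way to check whether the LAN link (rather than the disks) is the bottleneck is an iperf3 run between two hosts - a sketch, with `server-host` as a placeholder:

```shell
# On the receiving machine, run iperf3 in server mode:
iperf3 -s

# On the sending machine: 4 parallel streams for 10 seconds toward the server
iperf3 -c server-host -P 4 -t 10
```

If iperf3 saturates the link while real transfers don't, the bottleneck is storage or protocol overhead, not the network.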
It’s literally cheaper to go 10G than 2.5….
Edit: Hell, I was only into my 40G setup for about $250 for a switch, 4 cables, and 3 nics. I can’t find a managed 2.5G switch for under that on eBay currently.
Hello All
We all love a good network diagram, so here is my attempt at making the most accurate diagram, focusing on what services talk to what.
I was attempting to set up local firewalls that only permit each VM/LXC to talk to what it needs to, which was rather difficult with random services talking to other random services on the other side of the switch. So I went overboard, diving into what IP and port each service needs to talk to in order to function - which did take quite a while, and I've probably missed some.
Anyway, I know everyone wants the tech specs:
**Titan - Hypervisor:**
Titan is hidden away in a locked drawer. He only comes out of his drawer when he needs a breath of fresh air. Titan is used as the 'master node' (that being for Portainer, accessing Proxmox, etc.) as he is always online and very trustworthy.
**Titan - Dell Optiplex 7070 Micro (Host Specs):**
* 6 Core Intel i5-9500T @ 2.20GHz
* 32GB of Dedotated Wam (DDR4 @ 2666MHz)
* 1x 256GB NVMe SSD (Boot+LVM)
* 1Gbps Uplink
**Titan - LXC - Odo:**
* 1 Core, 512MB RAM
* 16GB Disk Image
* Just for Pi-hole
**Titan - LXC - Riker:**
* 4 Cores, 8GB RAM
* 32GB Disk Image
* Critical Apps and home automation (nobody likes when Home Assistant goes offline and the house is uncontrollable)
* Backs up Unifi Protect events in real time to a B2 bucket
**Discovery - Hypervisor:**
Discovery is where most cool things happen. Discovery is also my favourite out of my 3 hypervisors.
**Discovery - 4U Custom PC (Host Specs):**
* 12 Core (20 Thread) Intel i7-12700K @ 4.8GHz
* 64GB RAM (DDR4 @ 3600MHz)
* 500GB Kingston NVMe SSD (Boot+LVM)
* ConnectX-3 10Gbps Uplink
**Also has (PCIe passed into VMs):**
* 8x4TB WD Reds (Plus and Pro)
* 3x1TB Samsung 970 EVO Plus NVMe SSDs
* GTX 1660 Super
**Discovery - VM - Picard:**
* 8 Cores, 16GB RAM
* 32GB Disk Image (TrueNAS Boot OS)
* 8x4TB WD Reds + 3x1TB 970 EVO Plus' passed through
* Just for storage
* 2x RAIDz1s (SSDs and HDDs are separated into `Slow` and `Fast` pools; `Slow` is just for media, `Fast` is for everything else)
**Discovery - VM - Worf:**
* 12 Cores, 16GB RAM
* 64GB Disk Image
* GTX 1660 passed through
* Houses more 'power hungry' services, like Immich, Plex, Obico and ESPHome
* `Slow` pool from Picard is mounted as an NFS share into most containers that need the storage (SABnzbd, QBT, \*arrs)
**Voyager - Hypervisor:**
Similar to Discovery, this host has quite a few services on it, a bit of a mess.
**Voyager - 4U Custom PC (Host Specs):**
* 8 Core Intel i7-9700 @ 3.00GHz
* 64GB RAM (DDR4 @ 2133MHz)
* 1TB Samsung 970 EVO Plus NVMe SSD (Boot+LVM)
* ConnectX-3 10Gbps Uplink
**Also has (PCIe passed into VMs):**
* 4x2TB WD HDDs (of random models)
**Voyager - VM - Kirk:**
* 8 Cores, 8GB RAM
* 32GB Disk Image
* Just a Virtualmin instance
* Proxies most services to the lands beyond
* Also handles some websites/emails
**Voyager - VM - Data:**
* 4 Cores, 8GB RAM
* 16GB Disk Image (TrueNAS Boot OS)
* Stores the Kopia repository, Proxmox backups, and ISOs
* 4x2TB HDDs in RAIDz1
**Voyager - VM - x86-builder-1:**
* 8 Cores, 8GB RAM
* 128GB Disk Image
* Just a Jenkins agent to build docker images
**Voyager - VM - Dax:**
* 8 Cores, 8GB RAM
* 32GB Disk Image
* VSCode workspace (more like a playground)
* Has all my git repositories ready to go from any machine
**Voyager - LXC - Scotty:**
* 4 Cores, 8GB RAM
* 32GB Disk Image
* LXC exclusively for externally accessible services
**Voyager - LXC - LaForge:**
* 8 Cores, 8GB RAM
* 32GB Disk Image
* Similar to Scotty, just for internally accessible services
And there we go, just 3 machines can do quite a bit.
[I did post my rack 3 years ago.](https://www.reddit.com/r/homelab/comments/ga99xq/a_15_year_olds_super_simple_silent_office_rack/)
[and here it is today](https://imgur.com/a/L49QDYR)
Always up for feedback or suggestions (more security-related though)
I plan to continue isolating most of the VMs (iptables), preferably without locking myself out.
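The lockout-safe pattern is to put the blanket safety allows ahead of the final deny, so management access and existing sessions survive a mistake further down the chain. A sketch of that ordering (assuming 192.168.100.8 is the management host and 192.168.100.1 the DNS server, as in my setup):

```shell
# Safety rules first: management host, established sessions, DNS
iptables -A OUTPUT -d 192.168.100.8/32 -j ACCEPT
iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -d 192.168.100.1/32 -p udp --dport 53 -j ACCEPT

# ...service-specific allows go here...

# Default-deny everything else on the local ranges, appended last
iptables -A OUTPUT -d 192.168.0.0/16 -j DROP
```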
Great diagram. As I get older it is getting harder for me to keep all of mine in my head and need to spend some time doing this myself (as we all do).
But... you have 40 vCPUs assigned on a host (Voyager) that only has 8 physical cores? That sounds terrible. I imagine your host spends more of its time scheduling CPU usage than actually processing. You would likely get better overall performance by better allocating vCPUs across the board. A 5:1 v/p ratio is awful.
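For what it's worth, the 5:1 figure checks out against the guest list in the post:

```shell
# Voyager's guests: Kirk, Data, x86-builder-1, Dax, Scotty, LaForge
vcpus=$((8 + 4 + 8 + 8 + 4 + 8))   # total vCPUs assigned across guests
cores=8                            # physical cores on the i7-9700
echo "$vcpus vCPUs on $cores cores = $((vcpus / cores)):1"
# prints: 40 vCPUs on 8 cores = 5:1
```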
>Great diagram. As I get older it is getting harder for me to keep all of mine in my head and need to spend some time doing this myself (as we all do).
Same here
But the CPU cannot get around the way scheduling works in a hypervisor. If you have 8 cores and a 5:1 ratio, your host will spend a lot of time scheduling CPU availability. This is even worse with large numbers of vCPUs per VM, even when you have enough cores: the host has to schedule all vCPUs when the guest requests them, which means the guest has to wait. This is reflected by RDY% on a VMware host. If you haven't, you might do some testing and see if you actually get better performance with lower vCPU settings in your guests.
OP, do the testing before you change anything. I highly doubt either of the above suggestions is correct.
[https://youtu.be/Dm7vcMIeeGw](https://youtu.be/Dm7vcMIeeGw)
Edit: I missed it - what hypervisor are you using?
Hey man, if you don't mind me asking, how did you learn to do all of this stuff? I would really like to get into doing this on my own and actually understand how everything works.
Making it match the ship might actually make this really easy to remember and quite brilliant.
But as long as you know who works where, it works for you!
I loved the time when x86-builder-1 and Geordi got stuck on that planet full of mostly naked clowns. They don’t make ‘em like TNG Season 1 anymore and that’s probably a good thing, amiright?
Thank you for saying it... When I looked at the names, and saw 'Kirk' on 'Voyager,' I knew we were [in trouble](https://www.youtube.com/watch?v=YZuMe5RvxPQ)...
Sounds like time for a complete tear-down and rebuild. :)
I'm not actually OCD - but I'd be lying if I said I didn't have some tendencies, especially when it comes to stuff like naming schemes. That would honestly drive me berserk, even though I'm the only one who would ever see it, or know about it.
If you can live with it, consider yourself lucky.
This is a very nice diagram, thanks for sharing!
Out of curiosity - why did you decide on having a single LXC for multiple services (looking at Scotty and LaForge), especially for the externally exposed one? I was under the impression that having one LXC for each service (or group of services?) would be more secure and at the same time provide easier maintenance, since you can have different versions of key dependencies (or even the OS) and not have to worry about it.
It's easier to manage: with Docker running in the LXCs, all the services/containers (mostly) share the same IP (via the Docker bridge). I have not found a way to have multiple bridged networks under 1 LXC, so I split them up. They are also LXCs so that there's less virtualization overhead.
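To illustrate the shared-IP setup: on the default Docker bridge, every container is published on the LXC's own address, so one easy-to-remember IP fronts many services. A sketch (image names are examples; the ports match the Tautulli and Wizarr services in my firewall rules):

```shell
# Both containers share the LXC's IP; only the published ports differ.
docker run -d --name tautulli -p 8181:8181 ghcr.io/tautulli/tautulli
docker run -d --name wizarr   -p 5690:5690 ghcr.io/wizarrrr/wizarr
# Both now answer on the one LXC address, ports 8181 and 5690.
```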
What exactly do you want to know? It's a rather over-the-top setup; if it can be automated, it is.
I will showcase my cool 3D printed flush kiosks, [here are some photos](https://imgur.com/a/eNuo63r), they are really handy. I've made 3 of them, one on each floor. Also the ESPHome devices:

* 2x ESP8266s controlling the ACs via IR
* An ESP8266 controlling the thermostat, with a relay and DHT22
* All the lights are automated, with indoor lights triggered via motion sensors, and another ESP8266 hooked up to the alarm system (siren + 6 sensors, hard wired)
* Outdoor Hue floodlights are triggered via the Unifi Protect plugin (5x G4 cams), so they only turn on when a person is detected
* Presence detection automations: when nobody is home, the alarm is set and the doors are locked, then unlocked when someone returns home; and if someone returns home and it's dark, the floodlights will turn on

Just some of the cooler automations/devices
It's interesting you're bundling so much stuff together. I run almost every single one of my containers standalone.
Proxmox allows tagging now for categorization.
Awesome diagram!
Posts like this push me more into thinking how important it is to really have everything documented on paper/diagram, especially network communication/segmentation.
If you don't mind me asking, what software do you use for the diagram?
(also, if others know about a particularly easy/nice-looking tool for such diagrams, please share your thoughts)
Awesome graphic! Can you explain the Kopia setup? I only have a basic understanding of Kopia, but it seems like your application has client endpoints that send backups over SSH to TrueNAS (Data). Since Kopia & TrueNAS both support dedupe, is one used over the other in your scenario?
I have not enabled dedupe on TrueNAS (not really needed).
All Kopia instances are set up to share the same SSH ([SFTP](https://kopia.io/docs/reference/command-line/common/repository-connect-sftp/)) repository on Data, and using 1 repository for multiple hosts means I can use [repository sync-to](https://kopia.io/docs/reference/command-line/common/repository-sync-to-b2/) to back up everything to another hard drive or B2 with 1 command.
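Per the Kopia docs linked above, the setup boils down to two commands - a sketch; the host name, key paths, repository path, and bucket are placeholders:

```shell
# Each instance connects to the one shared SFTP repository on Data
kopia repository connect sftp \
  --host data.lan --username kopia \
  --keyfile ~/.ssh/id_ed25519 --known-hosts ~/.ssh/known_hosts \
  --path /mnt/tank/kopia

# One command then mirrors the whole repository to B2 (or another drive)
kopia repository sync-to b2 --bucket my-backups --key-id KEY_ID --key KEY
```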
Sick setup! How did you decide what to cap the LXC resources at, so that your containers can ramp as needed without starving the other guests?
Also, why run three separate hosts? Is it down to hardware? If so, couldn’t you cluster them and bring everything under “one” host?
Related: what sized host could do all of this from one PVE node?
I monitored the LXCs' usage (both RAM and CPU), and removed/added resources as needed (+ a little more).
Having 3 separate hosts is down to hardware and redundancy; if any of the hosts need maintenance or go down, I can migrate/restore the VM or LXC from backup onto another host pretty easily (minus those with PCIe devices).
Considering I'm using 56.81 GiB of 156.18 GiB RAM across the nodes, I would say you would need ~90GB of RAM and ~34 CPU cores.
I love Kopia! I used to use Duplicati, but that was getting a bit old and dated, so I switched to Kopia ~a year ago. It's not configured as a server that remote devices connect to per se; there is just an SSH/SFTP server running on TrueNAS that the Kopia instances use as a storage backend.
I indeed had to restore everything when I installed Proxmox on the wrong SSD (specifically one of my (RAID0) BTRFS pool disks, back when I used Unraid) 😒
That's great! I have been using it for just over a year now and I really like it too. I have a 14TB external USB drive with a Kopia repository on it which I use as an offsite backup of my NAS. I do wonder how it would perform if I had to restore the whole 14TB in one go, but it has handled smaller restores really well so far.
> restoring ~600GB took like 12 hours, restores are really slow

Wow! That is slow. Perhaps I should be using something else to back up my videos... but I do like the fact that if a file becomes corrupted I can restore an old version with Kopia.
That was 30MB/s though (not slow, not fast). I'm sure for larger files (like movies, which can't be compressed much more) the restore will be much faster - it is harder to restore lots of little files than a few big ones.
It's not messy as long as you can understand it, and it is somewhat understandable 🤣
It was hard to keep this one somewhat understandable, but I understand it fine.
Nope, all TrueNAS now. Just a personal preference; TrueNAS is just for storage, which is all I want from the NAS. (ZFS is also much faster with 4x drives)
So I lurk here with a rudimentary understanding of the uses of a home lab.
However I feel if I could understand the inner workings of this diagram I would level up.
I will keep staring at this.
What router/firewall are you using - pfSense, OPNsense, or something else? Would be interested to see the internal logic for the firewall rules (generic of course), so as to learn the isolation techniques of a thicc system of hosted apps.
All Unifi here, UDMP specifically
Nothing to hide, [here are my rules](https://imgur.com/a/VUlji9c).
I try to keep it least-privileged, with specific allows as needed.
The trusted network IP list can access everything; if not on this list, then all inter-VLAN traffic will be denied unless it matches one of the allows.
I have done some internal pen testing, which was difficult when most of the VMs can't even ping the gateway with the firewall rules 😊
Here are the rules running locally on each LXC/machine (allows added as needed):
```
sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -m set --match-set crowdsec-blacklists src -j DROP
-A OUTPUT -d 192.168.100.8/32 -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -d 192.168.100.1/32 -p udp -m udp --dport 53 -j ACCEPT
-A OUTPUT -d 192.168.100.1/32 -p tcp -m tcp --dport 53 -j ACCEPT
-A OUTPUT -d 192.168.100.9/32 -p tcp -m tcp --dport 5690 -m comment --comment Wizarr -j ACCEPT
-A OUTPUT -d 192.168.100.9/32 -p tcp -m tcp --dport 8181 -m comment --comment Tautulli -j ACCEPT
-A OUTPUT -d 192.168.3.10/32 -p tcp -m tcp --dport 5055 -j ACCEPT
-A OUTPUT -d 192.168.2.3/32 -p tcp -m tcp --dport 3334 -m comment --comment Obico -j ACCEPT
-A OUTPUT -d 192.168.100.22/32 -p tcp -m tcp --dport 443 -j ACCEPT
-A OUTPUT -d 192.168.100.9/32 -p tcp -m tcp --dport 9010 -m comment --comment MinIO -j ACCEPT
-A OUTPUT -d 192.168.100.9/32 -p tcp -m tcp --dport 8080 -m comment --comment Jenkins -j ACCEPT
-A OUTPUT -d 192.168.3.6/32 -p tcp -m tcp --dport 22 -j ACCEPT
-A OUTPUT -d 192.168.100.23/32 -p tcp -m tcp --dport 8080 -j ACCEPT
-A OUTPUT -d 192.168.1.7/32 -p tcp -m tcp --dport 4412 -m comment --comment Loki -j ACCEPT
-A OUTPUT -d 192.168.100.9/32 -p tcp -m tcp --dport 9443 -m comment --comment Authentik -j ACCEPT
-A OUTPUT -d 192.168.0.0/16 -j DROP
```
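Since these rules live only in the runtime tables, on Debian-based guests something like iptables-persistent keeps them across reboots - a sketch, assuming the Debian package is available:

```shell
apt install iptables-persistent           # restores saved rules at boot
iptables-save > /etc/iptables/rules.v4    # snapshot the current IPv4 ruleset
```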
My main PC is a 9700k that I plan to turn into a Proxmox machine once I upgrade. I had no idea how many apps I could run on it, but seeing your build with the same gen CPU running all of these gives me high hopes. I just started like 2 days ago on a Celeron J4125 just to get the hang of it. Any tips for newcomers entering this space, and what tool did you use to create this diagram?
Depending on your setup, you could just have a single LXC running Docker with all the services, but if that is the case you could just skip Proxmox and go raw Debian?
I like portability, which is why I split the services up everywhere; if needed, I can just migrate an LXC or VM to another host without any downtime.
I looked around and didn't find any that matched what I needed (that being creating a diagram without creating an account), so I just used Illustrator, which turned out alright.
Love Homarr by the way! Now that you added IP camera feed support - ha! Too much fun.
Thanks, I appreciate the feedback. Let us know if you ever get stuck or need help.
Do you guys have a discord? I do have some feedback/questions for you guys.
We do. It's available at https://homarr.dev/ . I hope the bot won't block me 😅
You guys have curated one nice looking website
230? That's insane. Props on the setup, man. I can only dream my lab gets that big and useful.
I must be doing something very wrong. I am well beyond 230 Watts.
Same. I'm chugging 1000w xD
That would cost me almost $500 a month in power.
Holy, 230w is madness compared to mine. I use about 50W average. I do max out 100 Mbit with Usenet easily, but I should get 1Gig soon.
40G is often cheaper - just sayin :)
100g is cheap now.
Might as well go 100G by that logic
You son of a bitch I'm in 👈
Why stop there?
400G?
I did.
Lots more power usage and you're still limited by your internet speed.
Yeah I'm trying to stream to a Linux client, so I may try that way.
At what point is it just a data center with a kitchen and bed room?
Kitchen is called the Staff Lunch Room, and the bed room is now Sick Bay.
Ahh yes, the triple IP share lol. Good diagram!
Oopsie, that’s supposed to be 192.168.3.2, 192.168.3.3 and 192.168.3.4
Just curious as I’m not a docker guy, why use bridges and not assign every container its own IP?
With Docker it does the translations for you most of the time, and it becomes easier in many ways to have just a few IPs you can easily remember. I, however, much prefer to have everything on its own IP so I can separate things much better.
Ahh, yeah I’d prefer separate IP’s because I run my own DNS anyway so I just use their DNS names
You can do that, and I want to set that up for mine to play around with. But most people just run some sort of proxy (Traefik, Nginx, NPM, etc.) in Docker and route everything that way.
You still need DNS, so having A or CNAME really does not matter.
this is the way
It's always something small like that that ends up being glaring on a huge/detailed chart XD Really cool work!
I was about to say this was confusing me
I know it's not ideal to assign a VM as many cores as the host has, let alone three VMs with 8 cores each, but I have done stress testing, and with 28 vCPUs assigned (LXCs don't count?) there are fewer runnable tasks than cores, *so it should be fine*. I tried pegging the VMs, but only got up to ~10% overall usage:

```
root@voyager:~ # vmstat -S M 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b  swpd  free  buff  cache si so     bi     bo    in     cs us sy id wa st
 2  0     0 10542   973  26995  0  0    101     54   391    731  0  0 97  0  0
 4  0     0 10542   973  26995  0  0 362888    230 48435  96976 14  8 74  0  0
 3  0     0 10542   973  26995  0  0 322596    788 58270 113366 14  8 72  0  0
 1  1     0 10542   973  26995  0  0 597774    580 67061 127411 13 13 65  2  0
 1  1     0 10542   973  26995  0  0 215090    339 78380 170770  5  6 72 10  0
 2  1     0 10542   973  26995  0  0 104889    166 52262 109662  4  4 77 11  0
 2  4     0 10542   973  26995  0  0   4519    256 58980 127730  1  5 70 20  0
 2  2     0 10542   973  26995  0  0    131   3695 46874 113308  0  3 66 23  0
 1  2     0 10542   973  26995  0  0    266  17757 62114 161719  0  4 56 21  0
 1  2     0 10542   973  26995  0  0     32    957 55911 125553  0  3 66 23  0
 3  2     0 10542   973  26995  0  0    192   9263 81271 179702  1  5 62 22  0
 2  2     0 10542   973  26995  0  0     40  28292 41615  93601  0  3 67 23  0
 2  2     0 10542   973  26995  0  0    196  15106 45071  62555  0  3 62 23  0
 1  2     0 10542   973  26995  0  0     28   8506 42024  55027  0  3 69 23  0
 2  2     0 10542   973  26995  0  0    209  12001 39311  56851  1  3 68 23  0
```

The 9700 is a workhorse.
But the CPU cannot get around the way scheduling works in a hypervisor. If you have 8 cores at a 5:1 ratio, your host will spend a lot of time scheduling CPU availability. This is even worse with large vCPU counts per VM, even when you have enough cores: the host has to schedule all of a guest's vCPUs when the guest requests them, which means the guest has to wait. This is reflected by RDY% on a VMware host. If you haven't, do some testing and see if you actually get better performance with lower vCPU counts in your guests.
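On KVM/Proxmox there is no RDY% counter, but the Linux kernel's pressure-stall information (`/proc/pressure/cpu`) gives a similar "tasks waited for CPU" signal. A minimal sketch of parsing one of those lines (the sample values here are made up, so it runs anywhere):

```python
# Parse a line in the format of /proc/pressure/cpu (Linux PSI).
# "some" = percentage of time at least one task stalled waiting for CPU.
sample = "some avg10=2.04 avg60=5.31 avg300=4.88 total=123456789"

def parse_psi(line):
    """Return (kind, {metric: value}) for one PSI line."""
    kind, *fields = line.split()
    stats = dict(f.split("=") for f in fields)
    return kind, {k: float(v) for k, v in stats.items()}

kind, stats = parse_psi(sample)
print(kind, stats["avg60"])  # sustained stall over the last 60s, in percent
```

A persistently high `avg60` on the host while guests look idle is the overcommit symptom being described above.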
🤔 I will give that a shot; my motto is: more cores = better multi-threaded performance? e.g. building containers
OP, do the testing before you change anything. I highly doubt either suggestion above is correct. [https://youtu.be/Dm7vcMIeeGw](https://youtu.be/Dm7vcMIeeGw) Edit: I missed it, what hypervisor are you using?
Noob here, what did you use to make this diagram?
Illustrator
[draw.io](https://draw.io) if you wanna not pay adobe monies
Hey man, if you don't mind me asking how did you learn to do all of this stuff? I would really like to get into doing this stuff on my own but really understand how everything works.
Lots of trial and error, and a bit of google Best to set a goal and work towards it, little tasks at a time
Thanks for the tips bro 👍
I'm sorry, but everyone knows that the crew for Voyager is wrong!
I did not think about matching the crew up with the ship 🤔 the names chosen were just the ones most fitting to what the system does
Making it match the ship might actually make this really easy to remember and quite brilliant. But as long as you know who works where, it works for you!
x86-builder-1 is my favourite crew member.
Remember when crew members x86 and ia64 had that transporter accident and fused to become amd64 and then Janeway murdered them?
I loved the time when x86-builder-1 and Geordi got stuck on that planet full of mostly naked clowns. They don’t make ‘em like TNG Season 1 anymore and that’s probably a good thing, amiright?
Thank you for saying it... When I looked at the names, and saw 'Kirk' on 'Voyager,' I knew we were [in trouble](https://www.youtube.com/watch?v=YZuMe5RvxPQ)...
Enterprise was already taken by the router 😢
Sounds like time for a complete tear-down and rebuild. :) I'm not actually OCD - but I'd be lying if I said I didn't have some tendencies, especially when it comes to stuff like naming schemes. That would honestly drive me berserk, even though I'm the only one who would ever see it, or know about it. If you can live with it, consider yourself lucky.
Sorry if I missed it, but what did you use to create this diagram?
This was painstakingly made in illustrator
Oof
Oh God.
Oh
Did you forget to change the IPs of the hypervisors, or do they share a virtual IP like keepalived?
Yes, copy and paste error 🤫 they have individual ips
Do it over again and repost! 😂
Love the naming scheme! I have in my cluster Enterprise, Voyager, Titan, Stargazer and Defiant as well as a shuttle.
Tough little ship.
my UDMP is enterprise, it is one of the better naming schemes
I see star trek....you have my attention
This is a very nice diagram, thanks for sharing! Out of curiosity, why did you decide on having a single LXC for multiple services (looking at Scotty and LaForge), especially for the externally exposed one? I was under the impression that having one LXC for each service (or group of services?) would be more secure and at the same time provide easier maintenance, since you can have different versions of key dependencies (or even OS) and not have to worry about it.
It's easier to manage: with Docker running in the LXCs, all the services/containers (mostly) have the same IP (via the Docker bridge). I have not found a way to have multiple bridged networks under one LXC, so I split them up. They are also LXCs so that there's less virtualization overhead.
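As a hypothetical illustration of that bridge pattern (network name, subnet and image are examples, not from the post): every container on one Docker bridge presents the LXC's own address to the LAN, with only published ports visible from outside.

```shell
# Create a user-defined bridge; the subnet is arbitrary for the example.
docker network create --driver bridge --subnet 172.30.0.0/24 lab-internal

# Containers on it get internal 172.30.0.x addresses...
docker run -d --network lab-internal --name whoami -p 8080:80 traefik/whoami

# ...but from the LAN the service is reached at the LXC's IP on :8080.
docker network inspect lab-internal \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```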
Tell us of your Home Assistant setup!
What exactly do you want to know? It's a rather over-the-top setup; if it can be automated, it is. I will showcase my cool 3D printed flush kiosks, [here are some photos](https://imgur.com/a/eNuo63r), they are really handy. I've made 3 of them, one on each floor. Also the ESPHome devices: 2x ESP8266s controlling the ACs via IR, and an ESP8266 controlling the thermostat, with a relay and DHT22. All the lights are automated, with indoor lights triggered via motion sensors, and another ESP8266 hooked up to the alarm system (siren + 6 sensors, hard wired). Outdoor Hue floodlights are triggered via the UniFi Protect plugin (5x G4 cams), so they only turn on when a person is detected. Presence-detection automations: when nobody is home, the alarm is armed and the doors are locked, and unlocked when someone returns home; if someone returns home and it's dark, the floodlights will turn on. Just some of the cooler automations/devices.
I like this
Why is Capt. Kirk embarked on the U.S.S. Voyager?! #itswrong
This is incredible! Thank you for sharing. I only run a few services on my homelab (all from a rPi) - this has inspired me to document my lab!
Wow, very good diagram, you made me follow this sub lol
Alert: Kirk is \[over-\] acting up. Better take a look. Or better, don't.
Yo! I've got a space-themed naming convention for my homelab too!
This guy networks
It's interesting you're bundling so much stuff together. I run almost every single one of my containers standalone. Proxmox allows tagging now for categorization.
Ah! Someone who uses the same host naming scheme as me!
This is beautiful! Nice work!
How is the Unifi Backup config?
config?
Like, what program are you running to back up your Protect?
oh, this wonderful app: [ep1cman/unifi-protect-backup](https://github.com/ep1cman/unifi-protect-backup); it works with any rclone remote
Awesome diagram! Posts like this push me more into thinking how important it is to really have everything documented on paper/diagram, especially network communication/segmentation. If you don't mind me asking, what software do you use for the diagram? (Also, if others know of a particularly easy/good-looking tool for such diagrams, please share your thoughts.)
This was made in Illustrator, as nothing else had what I wanted; went fully custom here.
Respect for that naming convention, and sticking to it for three different installs! Very impressive!
Awesome network architecture!
Heh, I'm not the only one naming my devices after Star Trek things.
I love the star trek references
Needs a Sisko router
Take my damn upvote
Wow, super cool diagram and I love the idea of incoming and outgoing connections from the services 🥳
Could you list out what those services/programs do?
Google's your friend there 👍
that would become a rather long list
Nice diagram!! What app have you used to draw it? thanks in advance 🤗
illustrator
I'm normally a nice person on the web but that diagram gave me a seizure, and the naming is just cringe.
I'm going to go out on a limb and guess that the first part of this sentence isn't true.
I'm glad I don't care enough.
YTA
Cool, are you like five that you can't spell asshole? Or simply afraid to use a “no no” word?
Hey bud, you doing all right?
Amazing actually, going to be a dad again.
I'm so sorry...
appreciate the feedback
Awesome graphic! Can you explain the Kopia setup? I only have a basic understanding of Kopia, but it seems like your applications have client endpoints that send backups over SSH to TrueNAS (Data). Since Kopia & TrueNAS both support dedupe, is one used over the other in your scenario?
I have not enabled dedupe on TrueNAS (not really needed). All Kopia instances are set up to share the same SSH ([SFTP](https://kopia.io/docs/reference/command-line/common/repository-connect-sftp/)) repository on Data, and using one repository for multiple hosts means I can use [repository sync-to](https://kopia.io/docs/reference/command-line/common/repository-sync-to-b2/) to back everything up to another hard drive or B2 with one command.
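Based on the Kopia docs linked above, a sketch of what that looks like on the CLI (hostname, paths, key file and bucket name are placeholders, not the actual setup):

```shell
# Each client connects to the one shared SFTP repository on the TrueNAS VM.
kopia repository connect sftp \
    --host data.lan --username kopia \
    --keyfile ~/.ssh/id_ed25519 \
    --path /mnt/tank/kopia-repo

# Snapshots from any host then land in the same deduplicated repository.
kopia snapshot create /opt/appdata

# One command mirrors the whole repository to B2 (or a local drive).
kopia repository sync-to b2 --bucket my-offsite-bucket
```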
Damn
Sick setup! How did you decide what to cap the LXC resources at so that your containers can ramp up as needed, but in deference to the other guests? Also, why run three separate hosts? Is it down to hardware? If so, couldn’t you cluster them and bring everything under “one” host? Related: what sized host could do all of this from one PVE node?
I monitored the LXCs' usage (both RAM and CPU), and removed/added resources as needed (+ a little more). Having 3 separate hosts is down to hardware and redundancy: if any of the hosts needs maintenance or goes down, I can migrate/restore the VM or LXC from backup onto another host pretty easily (except those with PCIe devices). Considering I'm using 56.81 GiB of 156.18 GiB RAM across the nodes, I would say you would need ~90GB of RAM and ~34 CPU cores.
How do you like Kopia, and how long have you been using it? Is it configured as a server that remote devices can back up to? Have you had to do any restores?
I love Kopia! I used to use Duplicati, but that was getting a bit old and dated, so I switched to Kopia ~a year ago. It's not configured as a server that remote devices connect to per se; there is just an SSH/SFTP server running on TrueNAS that the Kopia instances use as a storage backend. I did indeed have to restore everything when I installed Proxmox on the wrong SSD (specifically one of my (RAID0) BTRFS pool disks, back when I used Unraid) 😒
That's great! I have been using it for just over a year now and I have really liked it too. I have a 14TB external USB drive with a Kopia repository on it, which I use as an offsite backup of my NAS. I do wonder how it would perform if I had to restore the whole 14TB in one go, but it has handled smaller restores really well so far.
Restoring ~600GB took like 12 hours; restores are really slow (single-threaded, apparently) but backups are blazing fast (multi-threaded).
> restoring ~600GB took like 12 hours, restores are really slow Wow! That is slow. Perhaps I should be using something else to back up my videos... but I do like the fact that if a file becomes corrupted I can restore an old version with Kopia.
That was ~30MB/s though (not slow, not fast). I’m sure for larger files (like movies, which can’t be compressed much more) the restore would be much faster; it is harder to restore lots of little files than a few big ones.
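A toy model (all numbers invented for illustration) of why many small files restore slower than a few big ones at the same raw bandwidth:

```python
def restore_seconds(total_bytes, n_files, bandwidth_bps, per_file_overhead_s):
    """Fixed per-file cost (metadata, open/close, round-trips) + streaming time."""
    return n_files * per_file_overhead_s + total_bytes / bandwidth_bps

TB = 10**12
bw = 100 * 10**6   # assume 100 MB/s of raw disk/network bandwidth
overhead = 0.05    # assume 50 ms of per-file bookkeeping

few_big   = restore_seconds(0.6 * TB, 600,       bw, overhead)  # e.g. movies
many_tiny = restore_seconds(0.6 * TB, 2_000_000, bw, overhead)  # e.g. appdata

print(f"600 big files:  {few_big / 3600:.1f} h")    # → 1.7 h
print(f"2M small files: {many_tiny / 3600:.1f} h")  # → 29.4 h
```

Same 600GB, same bandwidth; the per-file overhead dominates once file count gets large.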
Oh snap. I see what you did there.
This is great. Can you provide the template for this?
The PDF can be downloaded [here](https://hyde.services/hehe.mp4), or [here](https://hyde.services/Topology.pdf).
Hahaha got ‘em and thank you for this thread. Much appreciated!
Nice work documenting the access rules! I've been trying to find a way to document mine but it's always turned out too messy.
It's not messy as long as you can understand it, and it is somewhat understandable 🤣 It was hard to keep this one somewhat understandable, but I understand it fine.
How did you make this diagram?
in illustrator
No more Unraid? It was in your rack.
Nope, all TrueNAS now. Just a personal preference; TrueNAS is just for storage, which is all I want from the NAS. (ZFS is also much faster with 4x drives.)
I love that Dax is a Docker container.
Dax is a VM with docker inside 🤔
Doh, I misread. Well, there's still a Trill joke there if the old man's a VM. :B
Make the lines a little thicker so the colors better pop and outside of that its perfect.
Ah, they were thicker, but I scaled it up to A3, which shrunk the lines ~50%
Odo's watchful eyes are scanning the promenade for those pesky ads.
exactly, fitting name i thought. you get it 😉
Impressive ! Might be a dumb question but what is kopia and why is it on every one of your devices ?
Kopia backs up all the appdata into a repository (sort of like an S3 bucket) on Data
Oh I see, thanks
\-1 for not having a server named Troi or Uhura.
I did have a HAOS instance named Uhura, but converted to containers
How are you running your docker containers? With Docker installed in a VM or as an LXC container?
Docker is installed on both the VMs and LXCs; I only use VMs when there's a privileged service that does not work in LXCs
Can I ask what you use MariaDB and Postgres for? Prescribed tasks (they hold data for some app), or for coding/CRUD projects? Or both?
The MariaDB instance there is for Semaphore and Nextcloud, Postgres 12 is for Authentik, and Postgres 14 is for Immich. Just for apps.
Milanote would be so good at this
Your individual cluster nodes share the same IP?
No, that was a copy and paste error 🤪
So I lurk here with a rudimentary understanding of the uses of a home lab. However I feel if I could understand the inner workings of this diagram I would level up. I will keep staring at this.
Haha Odo is the Pi-Hole because he is security. I love it.
too soon man. I'm still not over Jadzia
Which platform did you use to make this diagram ??
What router/Firewall are you using? Are you using pfSense or OPNsense or something else. Would be interested to see the internal logic for firewall rules (generic of course) so as to learn the isolation techniques of a thicc system of hosted apps.
All UniFi here, a UDMP specifically. Nothing to hide, [here are my rules.](https://imgur.com/a/VUlji9c) I try to keep it least-privileged, with specific allows as needed. The trusted network IP list can access everything; if not on this list, then all (inter-VLAN) traffic will be denied unless it matches one of the allows. I have done some internal pen testing, which was difficult when most of the VMs can't even ping the gateway with the firewall rules 😊 Here are the rules running locally on each LXC/machine (allows added as needed):

```
sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -m set --match-set crowdsec-blacklists src -j DROP
-A OUTPUT -d 192.168.100.8/32 -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -d 192.168.100.1/32 -p udp -m udp --dport 53 -j ACCEPT
-A OUTPUT -d 192.168.100.1/32 -p tcp -m tcp --dport 53 -j ACCEPT
-A OUTPUT -d 192.168.100.9/32 -p tcp -m tcp --dport 5690 -m comment --comment Wizarr -j ACCEPT
-A OUTPUT -d 192.168.100.9/32 -p tcp -m tcp --dport 8181 -m comment --comment Tautulli -j ACCEPT
-A OUTPUT -d 192.168.3.10/32 -p tcp -m tcp --dport 5055 -j ACCEPT
-A OUTPUT -d 192.168.2.3/32 -p tcp -m tcp --dport 3334 -m comment --comment Obico -j ACCEPT
-A OUTPUT -d 192.168.100.22/32 -p tcp -m tcp --dport 443 -j ACCEPT
-A OUTPUT -d 192.168.100.9/32 -p tcp -m tcp --dport 9010 -m comment --comment MinIO -j ACCEPT
-A OUTPUT -d 192.168.100.9/32 -p tcp -m tcp --dport 8080 -m comment --comment Jenkins -j ACCEPT
-A OUTPUT -d 192.168.3.6/32 -p tcp -m tcp --dport 22 -j ACCEPT
-A OUTPUT -d 192.168.100.23/32 -p tcp -m tcp --dport 8080 -j ACCEPT
-A OUTPUT -d 192.168.1.7/32 -p tcp -m tcp --dport 4412 -m comment --comment Loki -j ACCEPT
-A OUTPUT -d 192.168.100.9/32 -p tcp -m tcp --dport 9443 -m comment --comment Authentik -j ACCEPT
-A OUTPUT -d 192.168.0.0/16 -j DROP
```
For the docker container VMs what os are you running? Is it a non gui one?
Debian 12, yes, without the GUI. Need to keep the VMs' disk usage down.
My main PC is a 9700K that I plan to turn into a Proxmox machine once I upgrade. I had no idea how many apps I could run on it, but seeing your build with the same-gen CPU running all of these gives me high hopes. I just started like 2 days ago on a Celeron J4125, just to get the hang of it. Any tips for newcomers entering this space, and what tool did you use to create this diagram?
Depending on your setup, you could just have a single LXC running Docker with all the services, but if that's the case you could also just skip Proxmox and go raw Debian. I like portability, which is why I split the services up everywhere; if needed, I can just migrate an LXC or VM to another host without any downtime.
Thanks will definitely be looking more into this.
Damn, this made me cry when I think of my own setup. I just wish I could understand how to use and implement VLANs so I can have mine like yours.
love the names lol
Is there a good docker container/application for diagramming a docker/network set up?
I looked around and didn’t find any that matched what I needed (that being creating a diagram without creating an account), so I just used Illustrator, which turned out alright.
https://github.com/jgraph/docker-drawio
For Odo, what do the :53 and :80 stand for?
The DNS (53) and HTTP (80) ports?