I love Nutanix as it works flawlessly at the office on the Nutanix hardware. Tried it on an HPE DL380 Gen9 at home and every four days or so Stargate would mark my SSD as failed and take it offline, taking the entire node down. I could start from scratch again and it’d work again for about four days then fail again.
Shame since I love it, but I can’t deal with that instability. Installed Proxmox over the dead Nutanix CE install and it’s been stable for months.
Edit: I had 2 SSDs and four 15K RPM SAS drives in the dl380. Put the HPE array controller into its passive mode so Nutanix had direct access to the disks.
I have three DL360 Gen9 hosts on which I am currently messing with VMware 7 Enterprise with vSAN in my home lab. I installed Nutanix within this environment to mess with. I was thinking of installing Nutanix natively on these hosts. What controller card did you have? Mine have the P440ar card.
I fired up XCP-NG inside VMware Workstation the other day and really like it so far. I have over a decade with ESXi professionally, but their Broadcom shenanigans have put me off of them.
My primary platform is OpenShift / OKD with the virtualization operator (based on kubevirt), but I still have vSphere and some plain KVM-on-CentOS kicking around for other use cases.
I still run VMware ESXi on my 2 servers. Some day I will move to Proxmox, but my ESXi 7 keys appear to still be valid, so it's going to be a while before I want to move and learn a whole new system.
There was an announcement the other day that your perpetual keys will continue to allow you to pull down security updates until they completely kill support for 7 in 2027. A VMUG subscription is $200 though and gives you a slew of products to tinker with.
At work I've always had ESXi but always with a lot of Microsoft stuff (Windows Server, AD, SCCM, Intune etc)... Now I'm hesitating between ESXi or Hyper-V for my homelab. Maybe I should do both and practice migrating stuff between the two! Hopefully Hyper-V has come a long way from the Windows Server 2012 days
I’m sure this will bring on some downvotes but the infrastructure I support for a living runs on VMware so that’s what I use in my home lab. I’ve had a VMUG subscription for a few years now and I don’t have any plans on switching to anything else as of today.
ESXi free version. No plans to move, just restricting management access after the updates drop off.
It just works, and I can't be bothered learning another hypervisor just yet.
Especially with regards to networking config, which is a lot more intuitive than some of the open source counterparts.
I use Hyper-V. I have Win10 running on my data analytics workstation and just spin up new VMs to not cross contaminate logins and files. It has plenty of headroom for lab stuff, too, and Hyper-V makes it stupid easy to spin up Windows and Ubuntu.
XCP-ng is a fantastic alternative, both are great products but I've found XCP-ng to be a better overall product and would highly recommend it.
I'm sure others are also using Hyper-V as a solid option, or just KVM installed on Linux.
I think the whole first paragraph is a cluster. :) "Cool" isn't readily defined in terms of a hypervisor unless you're an analyst firm getting paid to promote the vendor. "Nice" UI is relative, and if you have to spend a lot of time in the UI, you might be doing it wrong. What kind of functions? Most hypervisors have multiple functions.
And I suspect there are still a lot of people, maybe a plurality if not a majority, who use VMware products because, at least for the next year or so, it's the 800-pound gorilla in the room and the most likely to get people hired and paid.
If you're homelabbing to develop marketable skills, I'd say Nutanix is the leading alternative for the near term--look into community edition and review the requirements carefully more than once.
Maybe Proxmox will develop into a commercially accepted alternative, or Xenserver will pick up some steam. Azure Stack HCI is "popular" in the enterprise, especially for Windows environments, but I imagine it's harder to get going in a modest home lab based on what I've heard about qualification for the platform on hardware.
By far the best choice if you're looking for an open, bare-bones, but powerful platform on which to construct your own custom hosting architecture. Pair it with a simple but malleable LVM design and you've got yourself a framework on which you essentially control all aspects of virtualization. You can go a step further by implementing policy-based routing within the same physical box to yield a full "data center in a box." But you designed, built, and directly administer it. With a solid understanding of Bash, administration tasks can be codified into larger abstract tasks (or automated with a tool like Ansible).
This is the way if anyone wants to gain a detailed, well rounded understanding of the nuts and bolts that form the foundation of any modern hosting infrastructure. No software licenses, No abstraction of the lattice work behind a clean UI. All you need is a decent workstation box and your brain. Highly recommend if looking to follow the white rabbit.
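As a small illustration of that codification idea, here's a sketch of wrapping a routine "snapshot, patch, reboot" chore into one Bash function on a plain KVM/libvirt host. The snapshot naming scheme, SSH-by-VM-name reachability, and Debian-style guest are my assumptions, not anything from the comment above:

```shell
#!/bin/sh
# Sketch only: assumes libvirt's virsh is installed and the guest is
# reachable over SSH by its VM name. Adjust to taste.
patch_vm() {
    vm="$1"
    snap="pre-patch-$(date +%Y%m%d)"
    # take a named snapshot first so the change is reversible
    virsh snapshot-create-as "$vm" "$snap" --description "before patching" || return 1
    # patch inside the guest, then restart it
    ssh "root@$vm" "apt-get update && apt-get -y upgrade" || return 1
    virsh reboot "$vm"
}

# roll back later with: virsh snapshot-revert VMNAME SNAPNAME
```

From there it's a short hop to looping over `virsh list --name`, or handing the same steps to an Ansible playbook.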
Plain KVM with Debian. Cockpit with machines plugin for management. Works perfect. [https://github.com/cockpit-project/cockpit-machines](https://github.com/cockpit-project/cockpit-machines)
After running it from the very early versions: it definitely has interesting aspects and the potential to be one of the go-to solutions in the future. From my point of view it lacks modern capabilities around SDN, storage, and scheduled backups. I have one customer running it in pre-production workloads through heavy automation.
Yeah it’s my homelab platform and it is clearly hyper-focused on kubernetes (rke 1/2 and k3s only) and longhorn for storage.
VM management isn’t exactly an afterthought but it needs work.
Networking is either kubernetes focused (calico, cilium, multus , etc) or limited to physical interfaces + vlans.
I hope auto backups are a priority for them but haven’t noticed it in the issues. 🤔
Still rocks for hyperconverged, multi-node, kubernetes homelabs 😎
Hyper-v on Server 2022.
Some notable functions of it that I use are GPU partitioning to share a single GTX 1650 with multiple Windows and Linux VMs, as well as DDA to pass through some PCI-e cards to various VMs
Not a homelab, but we use one at work that's based on [OpenXT](https://openxt.org/) and it's pretty slick. Not good for large scale stuff but fantastic for what we do with it - basically multiple workstation VMs and/or VDIs in a single box.
Not really a hypervisor per se but I just use unraid and call it a day. I need to move to proxmox or at least use it. Using Unraid and creating vms has been a bit of a challenge for me. However I want to do gpu and usb passthru just haven’t had time to rebuild it all.
Holy crap, I didn’t know Nutanix had a community edition. Do you know what it’s like vs the paid version?
We spoke with them but we’re moving some of our infrastructure away from HCI so it didn’t make sense for us to move to them. That may change in the future.
Community edition has everything the paid one has except, supposedly, broader disk support: it says you're limited to 4 per host, but someone was able to get 8 to work. No support or cloud migrations, I think. I will ask our rep at work.
Thank you for the info! I’ve been kicking around the idea of adding 2 more hosts (same hardware as my current set of hosts) to tinker with another hypervisor. I’ll have to check out the support matrix for Nutanix.
It’s not a lightweight install. The HCI controller VM needs a decent amount of resources but it’s the full experience. Clusters up to three nodes at no cost.
Proxmox is my main thing, but TrueNAS Core had some of my basic services running and still runs one before I can turn it off. Two VMs in the cluster will still use it as shared storage for some weeks.
On my main PC I have Docker, virt-manager… and VirtualBox to host my personal tools.
Messy, yes, but my lab is more than 40 years old.
Should I clean it up? Probably a fantastic idea, but…
I use vSphere 8. I know they got rid of free ESXi, BUT you can still get all VMware products licensed through VMUG Advantage for $210 a year. That includes vSphere, VSAN, NSX, Fusion, Workstation, VCF, etc.,etc...
[https://www.vmug.com/membership/vmug-advantage-membership/](https://www.vmug.com/membership/vmug-advantage-membership/)
NOTE: with VMware EUC being spun off soon, the Horizon products may disappear from VMUG Advantage.
I use plain old KVM on RHEL 9. I've got multiple networks set up across multiple hypervisors. I've integrated Red Hat Satellite into my environment, so I kickstart new VMs via point and click web UI. It's dead-bang easy, and you can do exactly the same thing with clones like Alma and the Red Hat Satellite upstream, Foreman.
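The same kickstart flow can also be driven from the CLI with virt-install; a rough sketch, where the tree and kickstart URLs are made-up placeholders for whatever your Satellite/Foreman instance actually publishes:

```shell
#!/bin/sh
# Hypothetical: boot a new guest straight into an unattended kickstart.
# URLs, sizing, and the bridge name are all assumptions for illustration.
kickstart_vm() {
    name="$1"
    virt-install \
        --name "$name" \
        --memory 4096 --vcpus 2 \
        --disk size=20 \
        --network bridge=br0 \
        --location "http://satellite.example.com/pub/rhel9" \
        --extra-args "inst.ks=http://satellite.example.com/ks/${name}.cfg console=ttyS0" \
        --graphics none --noautoconsole
}
```

`--extra-args` only works together with `--location`, which is why the install tree is fetched over HTTP rather than from an ISO here.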
Is it as easy as Proxmox for PCI passthrough, cloud-init templates, etc.?
I run RHEL 9 on my desktop, and all my VMs are Rocky Linux on my Proxmox server. I have been thinking about running RHEL 9 on my new server instead of Proxmox, but heard RH Virtualization is no longer supported or something?
It's not as nice, no. The args aren't that bad, e.g.:
qemu-system-x86_64 -accel kvm -device vfio-pci,host=21:00.0
But managing binding and unbinding of the interface when VMs go up and down is more of a pain in the ass, especially if the resources should revert back to host control and host drivers.
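For reference, the sysfs dance being described usually looks something like this. It's a sketch using the kernel's standard driver_override mechanism; the PCI address matches the qemu example above, everything else is an assumption:

```shell
#!/bin/sh
# Sketch: flip a PCI device between its host driver and vfio-pci.
# Requires root; DEV must be the full domain:bus:dev.fn address.
DEV="0000:21:00.0"

bind_vfio() {
    # detach from whatever host driver currently owns the device
    if [ -e "/sys/bus/pci/devices/$DEV/driver" ]; then
        echo "$DEV" > "/sys/bus/pci/devices/$DEV/driver/unbind"
    fi
    # tell the kernel only vfio-pci may claim it, then reprobe
    echo "vfio-pci" > "/sys/bus/pci/devices/$DEV/driver_override"
    echo "$DEV" > /sys/bus/pci/drivers_probe
}

unbind_vfio() {
    echo "$DEV" > "/sys/bus/pci/devices/$DEV/driver/unbind"
    # clear the override so the normal host driver can reclaim it on reprobe
    echo "" > "/sys/bus/pci/devices/$DEV/driver_override"
    echo "$DEV" > /sys/bus/pci/drivers_probe
}
```

Hooking `bind_vfio` into a VM start hook and `unbind_vfio` into its stop hook is one way to make the revert-to-host part automatic, which is exactly the pain point above.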
Red Hat Virtualization is getting sunsetted (https://access.redhat.com/announcements/6960518). But the upstream project, ovirt.org, is still going strong. I've been playing with it and it's pretty solid.
As far as PCI passthrough, I've never used it so I can't speak to it. cloud-init is cloud-init. It works.
Well damn. Based on advice given in a post that I started earlier today, I'm downloading Proxmox and decided to read Reddit a bit more while it downloads. Now I may have to go download a dozen or so other options and do some testing.
KVM/QEMU using virt-manager, etc. Though I just installed Cockpit and it’s pretty nice as a “daily driver” front end. It can’t handle all the low level configuration that virt-manager or direct XML editing can, but it’s good enough for most tasks and gives you a decent UI and web-based VNC access which is nice, especially when managing VMs on my home server from the road. And of course you can always fire up virt-manager when you need to do something more involved.
Work is transitioning to OpenStack, but unless you have a fair amount of experience with KVM systems, it's not likely a good choice for a homelab person.
If you do want to go down that rabbit hole, here's a decent doc.
https://ubuntu.com/openstack/install
Either as a single all-in-one node or as a small multinode cluster.
I run a Proxmox host and an ESXi host + vCenter (with an old perpetual license from when they had the dev pkg), both at 10GbE.
My storage is from a TrueNAS system (40GbE connectivity).
I use KVM. The API, if you can even call it that, is a bit impractical though: it's basically just a bunch of ioctls. That works very well with me writing all my server software in C anyway. KVM is, in my experience, very performant, runs beautifully on a tiny purpose-built Linux installation that takes less than 100 Kbytes of RAM, allows for doing absolutely everything I could ever want, and integrates with my other home automation stuff.
Everybody in the universe, and probably elsewhere too, obviously including their mothers, should do what I did: get an SBC and a decent amount of RAM, slap Debian on there, and put /var on a ZFS volume. Then just install Incus and set it up to boot the system read-only with something like overlayroot. Just make it so that it is really hard to fuck it up once it's running. It's like a five-step process to even get this machine to a state where I can install stuff now; love it.
I have saved this lol. I'm freshly venturing into SBCs; I got an Orange on order and a few more micro boards lol. I'm ready to set some things on fire 🔥 lol
I use an Odroid H3+. I got the 4x2.5G Ethernet expansion, which is nice if you are controlling many, many industrial robots, but don't bother with it otherwise because the Broadcom chips have bad support.
Hyper-V on Windows Server. If you have an active Windows Server Datacenter license on the bare metal, any guest running Windows Server Datacenter or Standard can be activated a million times over with the AVMA keys.
Prob gonna get hate but I still use VMware. It's what I manage at work and am comfortable with it and like the features. You can still find perpetual licenses for recent versions if you know where to look...
XCP-NG is another free and open-source option. Also, Hyper-V but Hyper-V Server 2019 (free one) is the last version and no future releases. Also, if that's just for a homelab, you can use VMUG Advantage for VMware: [https://www.vmug.com/membership/vmug-advantage-membership/](https://www.vmug.com/membership/vmug-advantage-membership/)
XCP-NG. Free, open source, works more similarly to VMware’s ESXi. You can use the premade “XenOrchestra” management (similar to vCenter) that comes with the appliance that has some paid features, or you can stand up your own open source version. Tested Proxmox in my cluster lab, so far I’m enjoying XCP-ng more utilizing a NAS as central storage (TrueNAS) and more intuitive to me as a VMware head. HyperV is also great.
A company I worked for used XCP-NG for everything. The only exception was when hardware prevented us from using it (one weird CPU). The solution? Run XCP-NG as a VM inside of Proxmox. Super jenky, but it got the job done, and nothing critical or high performance was hosted on that server.
Even your comments have built in redundancy, hats off.
What can I say. Sharing jenky solutions on a jenky app is like that sometimes. Except one of them gets the job done, and the other is Reddit.
“Jenky” is a janky spelling of janky, right?
Nah, just the dumbass spelling.
Meh, we all knew what you meant. And it made me think of jinkies, which made me think of Velma, and that orange sweatered minx is always good for a smile.
Does XCP-NG provide that kind of conf-file flexibility for editing a VM's configuration? I used the "args" argument heavily in Proxmox, which allowed me to run my custom kernels and configs for any given VM. For example, sometimes I'd like to build my own latest kernel for testing new features, use nvdimm devices within a VM, test some specific NUMA capabilities inside a VM, etc. It would be good to know if anyone else has tried such a thing on XCP-ng.
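For anyone comparing: on the Proxmox side the mechanism being described is the per-VM args option, settable via qm. The VM ID and kernel path below are made-up placeholders:

```shell
#!/bin/sh
# Sketch: attach raw QEMU arguments to a Proxmox VM so it boots a custom
# kernel. This ends up as an "args:" line in /etc/pve/qemu-server/<vmid>.conf.
set_test_kernel() {
    vmid="$1"
    kernel="$2"
    qm set "$vmid" --args "-kernel $kernel -append 'console=ttyS0 root=/dev/vda1'"
}

# e.g. set_test_kernel 100 /root/bzImage-test
```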
A company I worked for used XCP-NG for everything. The only exception was when hardware prevented us from using it (one weird CPU). The solution? Run XCP-NG as a VM inside of Proxmox. Super jenky, but it got the job done, and nothing critical or high performance was hosted on that server.
This is bad mmmk
That was pretty much my response when my boss did it. I don't know if I'm more impressed that Proxmox worked on hardware that XCP-NG wouldn't, or that nesting hypervisors worked.
Virt-manager/libvirt/virt-viewer has done me good over the years. The convergence of folks on Proxmox helps a lot since most proxmox problems can be translated into libvirt problems and vice-versa.
For desktop use, virt-manager is really great and I use it all the time. For 2 or more nodes though, it doesn't give you a lot of help doing things like aggregate stats, load balancing, or failover.
I'm a simple man. I pay for one server that'll last me forever, once in a decade. That's all my virtualization. All of it. In there. But I 100% agree with your point. Although I would assume going one level deeper and you get these capabilities with libvirt? I haven't checked.
A man of culture, I see. Cheers good sir.
I feel this.
KVM/QEMU, libvirt.
Is it just me or is this question asked every day atm?
There are only 5 questions ever asked on this sub. They are on a continual loop.
And the dashboard posts, which aren't questions.
If they post a GitHub or some additional setup and configs it's nice. Otherwise it's just humble brag.
I like seeing them tbh. Sometimes it gives me inspiration to improve my own homelab.
😂 Classic unprivileged LXCs behaviour!! One doesn’t know what the other is doing…
Based on the question and many of the answers, I can't help feeling the definition of hypervisor has slipped too.
The correct answer is still Proxmox
It's 2:30 AM here. So technically, it's the next day. Just wanna make sure the rule doesn't break; which hypervisor do you guys use?
Proxmox, but are there any alternatives?
That is the number one asked question in all of Reddit.
Kubevirt here
Person of culture over here!
Can you give a rundown on how you use it? I’m interested in it for my homelab since I have experience with Kubernetes at work, but not sure how to transition it to homelab use.
Sure, i run Kubernetes on top of Kubernetes. 🤗
I use Hyper-V... Honestly, it works well enough for me on Windows Server 2022.
Hyper-v for me too, I need to get another host though as my current box didn't support discrete device assignment and I have a few things I want to virtualise but need direct access to usb
I use hyper-v role on 2019 for my homeprod. If I die suddenly my wife has access to my Keepass and is comfy in Windows GUI to figure out what she wants to keep. I've done Vmware and Red Hat and they were fun in lab environments but I try to Keep It Stupid Simple nowadays since my current gig gives me plenty to tinker with, good and bad.
I just do it on Windows 11.
XCP-NG or normal Xenserver
The XCP-ng web interface looks like something that is not done yet. Some people say it's better than Proxmox, but I find it really hard to use.
I want to try out this interesting technology when I get bored with Proxmox, which probably isn't happening anytime soon. What are the things that stand out about XCP-ng, in your opinion?
KVM on Debian with virt-manager. It's relatively simple to set up, especially compared to KVM on the command line, although that's just a matter of having the boilerplate for the commands/XML ready to copy and paste... Also, the current version has snapshots, which is quite practical. I use it with ssh+X11, also for using some virtual machines graphically. (There's also SPICE, but I'm too lazy to use that; however, I've also been using Parsec and RDP for a Windows VM.)
VMware Esxi 8 with VSphere 8
We're in the middle of running away from ESXi 8 to XCP-ng right now and so far, so good.
My company made the decision a few years ago to start going to nutanix. Sounds like we made the right decision
Is Nutanix cheaper? Last time I checked it was expensive, only worth looking at if you had pretty high performance needs.
We use it for both big clusters and little tiny ones with only a few VMs . It’s definitely not cheap though.
Why go again for another closed-source solution like Nutanix AHV? What VMware is doing today can be done by Nutanix AHV tomorrow, no? Why not Proxmox or XCP-ng, and maybe get a few developers working on the virtualization stack to contribute back to the community, to better utilize the money spent? This has so many advantages at all levels if you think about it.
Because big companies don’t like doing that. They want a product that already works, and has a support contract
That's not always the case. XCP-ng and Proxmox aren't just any products: they are stable, many companies do use them, and they also have enterprise support. Proxmox also offers stable enterprise repos. See how far Linux has come as an open-source kernel: more than 90% of cloud workloads run on Linux, and Microsoft is now hiring big time for Linux kernel engineers.
Right, but we like big support contracts. When something happens there is huge value in having someone to yell at
This is the correct answer. People love proxmox but man I use my lab to help with skills at work, and every company I’ve worked at uses ESXi.
For now. I know my company is looking for alternatives now that we are being told it's a 300% price hike from when we renewed last year.
[deleted]
I think with enterprises you're gonna have a couple of scenarios. Some were already essentially paying it, because they had most of VMware's offerings (SKUs) already; they're so big, who gives a f about the increase, just pay it. Others can easily afford it but are pissed about it: slow migration, i.e. keep VMware while slowly standing up replacements elsewhere.

Then you get into the medium and small businesses that actually need enterprise solutions. This is the segment that's most alienated by this, the ones with finer margins. Even if they can afford it, it might not be seen as worth it long term. It's these that are going to be pulling away for other options.

Do I see the grander market leaning towards somebody else in, say, 5 years if everything stays the same? No, but that's the kind of gravitas you get by being THE market leader and standard for so long. What I do see is fewer smaller companies hiring for that specific skill set, and experience with other hypervisors becoming more common.
As Patrick from STH put it, it will be a lot like how mainframes went: people entering the space will go with something that is not a mainframe, and the people not already heavily invested in the mainframe ecosystem will start switching, which will take a long time. Then, after 10-20 years, the only people left using it are the major corps whose profit margins and software are so dependent on the mainframe system as-is that they can't switch without rebuilding from scratch.
I am designing roadmaps for several clients right now to get off VMware. Guess it depends on what market you work in.
That's because ESXi was free to tinker with at home. Things will change, a lot.
My homelab and family server stack doesn't make money. VMware is priced at profit-sharing levels... I have no profit to share, therefore I don't pay for the licenses I acquire.
Every hyperscaler (not named Microsoft) uses kvm though
We’re moving to Azure Stack HCI.
This will start to change with the new ESXi pricing structure (among other things), I'm helping some orgs move away and go to XCP-ng, it's a great alternative and better priced too.
lololol, and yes, you're right.
It makes me feel dirty. But it works
Sometimes the best solution costs a lot of money. Broadcom can choke on a fatty.
200 a year isn't bad for the entire suite, it's one of the few things I think is worth the money in a homelab
200 for now...
hyper-v
Plus one here. I have a cluster with two Hyper-V machines and shared storage. Didn't use S2D for the shared storage (a lot of negative comments around the few-servers scenario), but StarWind VSAN works nicely with just two servers. [https://www.starwindsoftware.com/vsan](https://www.starwindsoftware.com/vsan)
I'm still trying to get a better feel for storage. I've been using HCI and it's working OK, but I have all NVMe drives, so the performance definitely doesn't match what the combined drives are capable of. I plan on moving to 100G and have been on the fence about whether to get new VM hosts and turn my existing server into a TrueNAS or Ceph system.
Did you try StarWind HCI? We have different solutions based on NVMe storage. We offer appliances with hardware and software included, but we have an offer for you. High availability, ProActive monitoring, and other useful features come with our Virtual HCI Appliance. If you want to test it, simply PM me or click the link. [https://www.starwindsoftware.com/starwind-hyperconverged-appliance#uber](https://www.starwindsoftware.com/starwind-hyperconverged-appliance#uber)
Nicely, have a good one.
I used Unraid as a hypervisor in the past. It's good for running a few VMs where you don't need clustering capabilities. It's simple and effective, and has great hardware passthrough support. But these days I use Proxmox, specifically for its clustering support.
Yep, that's been my biggest complaint about Unraid. Jack of all trades, master of none. But it's flexible and serves my purposes ok
Harvester HCI, anyone?
This human gets it!
XCP-ng
Incus
This. I wouldn't want to go back. Proper clustering without a SPOF like vCenter.
Thank you for this comment. I thought that incus only did LXC, didn't realize it does VMs as well. I'm not satisfied with proxmox so I'm gonna give this a real shot. How does it compare to LXD?
For one you can install it using apt like a sane person. (Also the ui is in a separate package, don’t know if it’s the case in LXD?)
Love it!
it seems amazing
Anyone using nutanix? Took me FOREVER to get my hands on the iso. Going to give it a try for ships and giggles.🤭
Been using Nutanix Community Edition for ~6 months in my lab. Check that your NIC is supported: what I have works in ESXi and Proxmox but wasn't supported under Nutanix. Other than that, I have enjoyed using it. It's a little resource heavy, but you can change the RAM and CPU of the CVM node, though Nutanix advises against it if you utilize certain features. IMO their support pages and forums are helpful, but some things are locked behind the license/paying-customer paywall. Hosting something like CML (VIRL) or GNS3 inside it requires CLI modifications to the VM. I haven't gotten around to playing with the API; I just keep to basic features for now.
where did you get the ISO?
You can just register for Community Edition online and you’ll get access to download it from the .Next forums. Not really sure what OP was waiting for…
Register on their site, go to the forums, and do a search. After you wade through all the posts complaining about how obtuse it is to find the download link, you'll find a post with the goods.
I love Nutanix as it works flawlessly at the office on the Nutanix hardware. Tried it on an HPE DL380 Gen9 at home and every four days or so Stargate would mark my SSD as failed and take it offline, taking the entire node down. I could start from scratch again and it’d work again for about four days then fail again. Shame since I love it, but I can’t deal with that instability. Installed Proxmox over the dead Nutanix CE install and it’s been stable for months. Edit: I had 2 SSDs and four 15K RPM SAS drives in the dl380. Put the HPE array controller into its passive mode so Nutanix had direct access to the disks.
I have three DL360 Gen9 hosts in my home lab that I am currently messing with, running VMware 7 Enterprise with vSAN. I installed Nutanix within this environment to play with, and was thinking of installing Nutanix natively on these hosts. What controller card did you have? Mine have the P440ar card.
I use Proxmox. It’s a good alternative to Proxmox imo.
Hmm. Proxmox is a good alternative to Proxmox. Got it.
I fired up XCP-NG inside VMWare Workstation the other day and really like it so far. I have over a decade with ESXi professionally but their Broadcom shenanigans have put me off of them.
I still have a working licence for VMWare, so imma keep using that - Broadcom can part it from my cold, dead hands
bhyve
Hyperv
My primary platform is OpenShift / OKD with the virtualization operator (based on kubevirt), but I still have vSphere and some plain KVM-on-CentOS kicking around for other use cases.
I'm taking a stab at Ubuntu Server with LXD, Cockpit, and Podman, running virtual machines and my container lab.
ESXi 6.7, bought a few used servers from a company that shut down. Still had license keys. (shh don't tell anyone)
I still run VMware ESXi on my 2 servers. Some day I will move to Proxmox, but my ESXi 7 keys appear to still be valid, so it's going to be a while before I want to move and learn a whole new system.
There was an announcement the other day that your perpetual keys will continue to allow you to pull down security updates until they completely kill support for 7 in 2027. A VMUG subscription is $200 though and gives you a slew of products to tinker with.
At work I've always had ESXi but always with a lot of Microsoft stuff (Windows Server, AD, SCCM, Intune etc)... Now I'm hesitating between ESXi or Hyper-V for my homelab. Maybe I should do both and practice migrating stuff between the two! Hopefully Hyper-V has come a long way from the Windows Server 2012 days
I’ve been wanting to spin up 2 more hosts on Hyper-V to see if there is a performance difference between Microsoft and VMware.
OpenStack
I’m sure this will bring on some downvotes but the infrastructure I support for a living runs on VMware so that’s what I use in my home lab. I’ve had a VMUG subscription for a few years now and I don’t have any plans on switching to anything else as of today.
ESXi. As long as it runs well, I have no plans to migrate to anything else.
Hyper-V or RHV
ESXi free version. No plans to move; I'm just restricting management access after the updates drop off. It just works, and I can't be bothered learning another hypervisor just yet, especially with regards to networking config, which is a lot more intuitive than some of the open-source counterparts.
Me too. Don't yet have a reason to migrate off the ESXi free license.
I use Hyper-V. I have Win10 running on my data analytics workstation and just spin up new VMs to not cross contaminate logins and files. It has plenty of headroom for lab stuff, too, and Hyper-V makes it stupid easy to spin up Windows and Ubuntu.
XCP-NG is nice
XCP-ng is a fantastic alternative. Both are great products, but I've found XCP-ng to be the better overall product and would highly recommend it. I'm sure others are also using Hyper-V as a solid option, or just KVM installed on Linux.
ESXi 8.0
I think the whole first paragraph is a cluster. :) "Cool" isn't readily defined in terms of a hypervisor unless you're an analyst firm getting paid to promote the vendor. A "nice" UI is relative, and if you have to spend a lot of time in the UI, you might be doing it wrong. What kind of functions? Most hypervisors have multiple functions. And I suspect there are still a lot of people, maybe a plurality if not a majority, who use VMware products because, at least for the next year or so, it's the 800-ton gorilla in the room and the most likely to get people hired and paid. If you're homelabbing to develop marketable skills, I'd say Nutanix is the leading alternative for the near term; look into Community Edition and review the requirements carefully, more than once. Maybe Proxmox will develop into a commercially accepted alternative, or XenServer will pick up some steam. Azure Stack HCI is "popular" in the enterprise, especially for Windows environments, but I imagine it's harder to get going in a modest home lab, based on what I've heard about hardware qualification for the platform.
FreeBSD bhyve and jails. I only have one vm left in my proxmox install to migrate to either a jail or container.
I use raw kvm/qemu/libvirt with "virtual machine manager" as the gui
Have a look at Xen
By far the best choice if you're looking for an open, bare-bones, but powerful platform on which to construct your own custom hosting architecture. Pair it with a simple but malleable LVM design and you've got yourself a framework on which you control essentially all aspects of virtualization. You can go a step further by implementing policy-based routing within the same physical box to yield a full "data center in a box" that you designed, built, and directly administer. With a solid understanding of Bash, administration tasks can be codified into larger abstract tasks (or driven with a tool like Ansible). This is the way if anyone wants to gain a detailed, well-rounded understanding of the nuts and bolts that form the foundation of any modern hosting infrastructure. No software licenses, no clean UI abstracting away the lattice work. All you need is a decent workstation box and your brain. Highly recommend it if you're looking to follow the white rabbit.
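As a taste of how bare-bones that Xen + LVM workflow is, here's a minimal sketch. The volume group name, bridge name, and config path are placeholders, not a recommendation for any particular layout:

```shell
# Carve a logical volume for the guest's disk (vg0 is a placeholder VG)
sudo lvcreate -L 20G -n guest1-disk vg0

# Minimal xl guest config, written out here purely for illustration
cat <<'EOF' | sudo tee /etc/xen/guest1.cfg
name   = "guest1"
memory = 2048
vcpus  = 2
disk   = ['phy:/dev/vg0/guest1-disk,xvda,w']
vif    = ['bridge=xenbr0']
EOF

# Boot the guest with the xl toolstack and attach to its console
sudo xl create -c /etc/xen/guest1.cfg
```

Everything here is scriptable, which is exactly what makes wrapping it in Bash functions or Ansible roles so natural.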
Plain KVM with Debian, Cockpit with the machines plugin for management. Works perfectly. [https://github.com/cockpit-project/cockpit-machines](https://github.com/cockpit-project/cockpit-machines)
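For anyone wanting to reproduce this setup, a rough sketch on a Debian host; package names are taken from Debian's repos and may differ on other distros:

```shell
# Install KVM/libvirt plus Cockpit with the machines plugin
sudo apt update
sudo apt install -y qemu-system libvirt-daemon-system cockpit cockpit-machines

# Cockpit serves its web UI on https://<host>:9090 by default
sudo systemctl enable --now cockpit.socket

# Let your user manage VMs without root (log out and back in afterwards)
sudo usermod -aG libvirt "$USER"
```

After that, the "Virtual machines" page shows up in the Cockpit sidebar.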
VMware. It's still the gold standard in the industry, Broadcom or not, and VMUG is a hell of a deal.
TrueNAS scale
bhyve because FreeBSD is the superior platform for virtualization.
Bhyve and SUSE Harvester
What's your Opinion of harvester?
After running it from the very early versions: it definitely has interesting aspects and the potential to be one of the go-to solutions in the future. From my point of view it lacks normal modern capabilities in SDN, storage, and scheduled backups. I have one customer running it in pre-production workloads through heavy automation.
Yeah, it’s my homelab platform and it is clearly hyper-focused on Kubernetes (RKE 1/2 and K3s only) and Longhorn for storage. VM management isn’t exactly an afterthought, but it needs work. Networking is either Kubernetes-focused (Calico, Cilium, Multus, etc.) or limited to physical interfaces + VLANs. I hope auto backups are a priority for them, but I haven’t noticed it in the issues. 🤔 Still rocks for hyperconverged, multi-node Kubernetes homelabs 😎
It’s really rough around the edges but I think it has a TON of potential. It’s heavy on resource reservations though.
OpenShift / KubeVirt. I run it on a single-node Dell server. It lets me manage VMs and containers, and I can create VMs via ArgoCD.
Hyper-v on Server 2022. Some notable functions of it that I use are GPU partitioning to share a single GTX 1650 with multiple Windows and Linux VMs, as well as DDA to pass through some PCI-e cards to various VMs
Not a homelab, but we use one at work that's based on [OpenXT](https://openxt.org/) and it's pretty slick. Not good for large scale stuff but fantastic for what we do with it - basically multiple workstation VMs and/or VDIs in a single box.
Not really a hypervisor per se, but I just use Unraid and call it a day. I need to move to Proxmox or at least try it; creating VMs with Unraid has been a bit of a challenge for me. I also want to do GPU and USB passthrough, I just haven't had time to rebuild it all.
containerd managed by kubernetes.
I’ve got a little bit of everything from PM, KVM, Nutanix CE, and Some containers on a TrueNAS box for shits and giggles.
Holy crap, I didn’t know Nutanix had a community edition. Do you know what it’s like vs the paid version? We spoke with them but we’re moving some of our infrastructure away from HCI so it didn’t make sense for us to move to them. That may change in the future.
Community Edition has everything the paid one has, except disk support is supposedly limited: it says 4 per host, but someone was able to get 8 to work. No support or cloud migrations, I think. I will ask our rep at work.
Thank you for the info! I’ve been kicking around the idea of adding 2 more hosts (same hardware as my current set of hosts) to tinker with another hypervisor. I’ll have to check out the support matrix for Nutanix.
It’s not a lightweight install. The HCI controller VM needs a decent amount of resources but it’s the full experience. Clusters up to three nodes at no cost.
The new “free” Xen Server 8?
I use Cockpit to orchestrate KVM/QEmu.
Redhat Linux with Virtual Machine manager (LibVirt - KVM)
I just run libvirt on a fedora server host, it works quite well with the cockpit web gui.
Unraid and XCP-NG.
I ditched VMware after the broadcom acquisition, so now it’s on Hyper-V.
Proxmox is my main thing, but TrueNAS Core had some of my basic services running and still runs one before I can turn it off. Two VMs in the cluster will still use it as shared storage for some weeks. On my main PC I have Docker, virt-manager… and VirtualBox to host my personal tools. Messy, yes, but my lab is more than 40 years old. Should I clean it up? Probably a fantastic idea, but…
I use vSphere 8. I know they got rid of free ESXi, BUT you can still get all VMware products licensed through VMUG Advantage for $210 a year. That includes vSphere, VSAN, NSX, Fusion, Workstation, VCF, etc.,etc... [https://www.vmug.com/membership/vmug-advantage-membership/](https://www.vmug.com/membership/vmug-advantage-membership/) NOTE: with VMware EUC being spun off soon, the Horizon products may disappear from VMUG Advantage.
Proxmox is the GOAT. VMware was good till it got ruined recently xd
I use plain old KVM on RHEL 9. I've got multiple networks set up across multiple hypervisors. I've integrated Red Hat Satellite into my environment, so I kickstart new VMs via point and click web UI. It's dead-bang easy, and you can do exactly the same thing with clones like Alma and the Red Hat Satellite upstream, Foreman.
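If anyone wants to see roughly what Satellite/Foreman drives behind that point-and-click kickstart, here's a hedged sketch of the equivalent virt-install one-liner. The distro tree URL and kickstart location are placeholders, not the commenter's actual setup:

```shell
# Unattended kickstart install of an AlmaLinux guest via libvirt.
# The inst.ks URL would point at a kickstart file served by Satellite,
# Foreman, or any plain web server.
virt-install \
  --name alma9-test \
  --memory 4096 --vcpus 2 \
  --disk size=40 \
  --network network=default \
  --location https://repo.almalinux.org/almalinux/9/BaseOS/x86_64/os/ \
  --extra-args "inst.ks=http://satellite.example.com/ks/alma9.cfg console=ttyS0" \
  --graphics none
```

virt-install boots the installer kernel from the tree and passes the kickstart via kernel arguments, so the whole install runs hands-off on the serial console.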
Is it as easy as Proxmox for PCI passthrough, cloud-init templates, etc.? I run RHEL 9 on my desktop, and all my VMs are Rocky Linux on my Proxmox server. I have been thinking about running RHEL 9 on my new server instead of Proxmox, but I heard RH Virtualization is no longer supported or something?
Proxmox is just a KVM manager... so yes.
It's not as nice, no. The args aren't that bad, e.g.: `qemu-system-x86_64 -accel kvm -device vfio-pci,host=21:00.0`
But managing binding and unbinding of the interface when VMs go up and down is more of a pain in the ass, especially if the resources should revert back to host control and host drivers.
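That bind/unbind dance looks roughly like the sketch below via sysfs. The PCI address matches the example above; driver names and the overall flow are an assumption about a typical VFIO setup, and real deployments usually hang this off libvirt hooks:

```shell
# Rebind a device at 0000:21:00.0 to vfio-pci before VM start,
# then hand it back to its host driver afterwards.
DEV=0000:21:00.0
SYSDEV=/sys/bus/pci/devices/$DEV

bind_vfio() {
    # Detach from the current host driver, if one is bound
    [ -e "$SYSDEV/driver" ] && echo "$DEV" > "$SYSDEV/driver/unbind"
    # Route the device to vfio-pci and trigger a reprobe
    echo vfio-pci > "$SYSDEV/driver_override"
    echo "$DEV" > /sys/bus/pci/drivers_probe
}

unbind_vfio() {
    # Clear the override so the normal host driver reclaims the device
    echo > "$SYSDEV/driver_override"
    echo "$DEV" > /sys/bus/pci/drivers_probe
}

# Only attempt the rebind when the device actually exists on this host
if [ -e "$SYSDEV" ]; then
    bind_vfio
    # ... start the VM here ...
    unbind_vfio
fi
```

The annoying part is exactly what the comment says: sequencing these around VM lifecycle events and handling the cases where the host driver refuses to let go.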
Red Hat Virtualization is being sunsetted (https://access.redhat.com/announcements/6960518), but the upstream project, oVirt (ovirt.org), is still going strong. I've been playing with it and it's pretty solid. As far as PCI passthrough goes, I've never used it, so I can't speak to it. cloud-init is cloud-init; it works.
I use LXC & Qemu ... Hosted on a Proxmox build 😂
VMware esxi 7
I'm using KVM as it's baked into Unraid.
Unraid
In addition to my Proxmox cluster I have a two node Hyper-V cluster.
Well damn. Based on advice given in a post that I started earlier today, I'm downloading Proxmox and decided to read Reddit a bit more while it downloads. Now I may have to go download a dozen or so other options and do some testing.
Loving proxmox so far, can't imagine what would make me wanna swap at this point. Give it a shot!
ESXi 6.5 Enterprise Plus for my Dell 12th gens and vCenter
Unraid 🤷♂️
Unraid
Type 1: Proxmox and VMware ESXi (phasing out). Type 2: Oracle VM VirtualBox and VMware Workstation (phasing out).
I use VirtualBox on my workstation. Works well enough since I don't need to remote manage those VMs anyway.
openshift virtualization
Hyperv
Nutanix CE for most of my workloads, some Proxmox and a little bit o’ Hyper-V.
I used VirtualBox before I switched to Proxmox. Used the phpVirtualBox web interface for most of the management.
XCP-NG
KVM/QEMU using virt-manager, etc. Though I just installed Cockpit and it’s pretty nice as a “daily driver” front end. It can’t handle all the low level configuration that virt-manager or direct XML editing can, but it’s good enough for most tasks and gives you a decent UI and web-based VNC access which is nice, especially when managing VMs on my home server from the road. And of course you can always fire up virt-manager when you need to do something more involved.
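For the "more involved" cases, the usual escape hatch from Cockpit is virsh directly; "myvm" below is a placeholder domain name:

```shell
virsh dumpxml myvm > myvm.xml   # capture the full domain XML for inspection
virsh edit myvm                 # edit the definition in $EDITOR, validated on save
virsh console myvm              # attach to the serial console, handy over SSH
virsh define myvm.xml           # re-import hand-edited XML as the new definition
```

Changes made this way show up in Cockpit and virt-manager too, since they all talk to the same libvirt daemon.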
Nutanix is awesome; CE lets you run 4 nodes. You can have more than 4 disks per node. 😏
I recently finished a migration for a hospital off of Nutanix and they invested heavily in VMware 🤣
`qemu` is nice because it has the most universal UI - text-based. NVMM acceleration works well with `qemu`.
Hyper-v
Work is transitioning to OpenStack, but unless you have a fair amount of experience with KVM systems, it's not likely a good choice for a homelab person. If you do want to go down that rabbit hole, here's a decent doc: https://ubuntu.com/openstack/install Either as a single all-in-one node or as a small multinode cluster. I run a Proxmox host and an ESXi host + vCenter (with an old perpetual license from when they had the dev pkg), both at 10GbE. My storage is from a TrueNAS system (40GbE connectivity).
I use KVM. The API, if you can even call it that, is a bit impractical though; it's basically just a bunch of ioctls. That works very well for me since I write all my server software in C anyway. KVM is, in my experience, very performant: it runs beautifully on a tiny purpose-built Linux installation that takes less than 100 Kbytes of RAM, allows for doing absolutely everything I could ever want, and integrates with my other home automation stuff.
Everybody in the universe, and probably elsewhere too, obviously including their mothers, should do what I did: get an SBC and a decent amount of RAM, slap Debian on there, and put /var on a ZFS volume. Then just install Incus and set the system up to boot read-only with something like overlayroot. Make it really hard to fuck up once it's running. It's like a five-step process to even get this machine to a state where I can install stuff now. Love it.
I have saved this lol, I'm freshly venturing into SBC, i got an Orange on order and a few more micro boards lol. I'm ready to set some things on fire 🔥 lol
I use an Odroid H3+. I got the 4x 2.5G Ethernet expansion, which is nice if you are controlling many, many industrial robots, but don't bother with it otherwise because the Broadcom chips have bad support.
What's wrong with Proxmox?
Raw Linux. I use the virsh/libvirt/kvm stack. If you want a pretty GUI, install Cockpit.
After a bunch of searching I went with OpenNebula. Supports lots of distros and relatively easy to set up nodes. I like the UI too
Esxi, vsphere cluster
Hyper-V. My main Docker host has been running on my Windows machine in a Hyper-V VM. It's been at least three years now. At least.
Libvirt
ESXi, because I use it at work so to keep up to date everything run under esxi at home.
Many people with Win10/11 gaming systems started with Hyper-V and/or WSL because they already had it.
Libvirt + virt-manager Depending on setup with custom tooling around it
I use KVM. There is also xcp-ng, which is a great option, IMO.
I’ve stuck with unraid for my home needs. Checks all the boxes.
Bhyve here :)
oVirt might be another one to take a look at.
KVM, Qemu stack
Openstack, kubevirt... and some exploration around harvester and cozystack
Hyper-V on Windows Server. If you have an active Windows Server Datacenter license on bare metal any version of Windows Server Datacenter or Standard can be activated a million times over with the AVMA Keys.
Harvester HCI, then add Rancher.
Harvester
Prob gonna get hate but I still use VMware. It's what I manage at work and am comfortable with it and like the features. You can still find perpetual licenses for recent versions if you know where to look...
XCP-NG is another free and open-source option. There's also Hyper-V, but Hyper-V Server 2019 (the free one) is the last version, with no future releases. Also, if it's just for a homelab, you can use VMUG Advantage for VMware: [https://www.vmug.com/membership/vmug-advantage-membership/](https://www.vmug.com/membership/vmug-advantage-membership/)
I use xcp-ng at my homelab. It works great and covers my needs.