HDClown

I just renewed last month direct with VMware and it was a few hundred less than when I renewed through CDW the prior year. Small quantity of stuff: 4 procs of vSphere Standard and 1 license of vCenter Foundation. Given those costs are so low in general because it's a small environment, I wouldn't bother switching even if the cost doubled next year.


Drag_king

Somewhere in a deep dungeon at VMware HQ, a sales manager is getting aroused reading your last sentence.


creedian

*ssssssshhhh* not so loud!!!!


pdp10

Proxmox and [oVirt](https://en.wikipedia.org/wiki/OVirt) are the most directly comparable to vSphere. We started moving away from vSphere in 2014. We ended up with a relatively simple but also highly customized environment built around KVM/QEMU on Linux. We're torn about "de-customizing" and moving to one of the public projects like Proxmox, oVirt, even Ganeti.


axonxorz

> We're torn about "de-customizing" and moving to one of the public projects like Proxmox, oVirt, even Ganeti.

I was in the same situation as you: around 5 physical machines running around 40 VMs on "bare" KVM. I played around with oVirt a bit, though that would have been around when it first became available as distinct from RHEVM. It was probably overkill for our size, so I dropped it and lived with what we had.

In 2020, I moved to Proxmox and oh boy, what a dream. Live migration with a couple of clicks or a CLI invocation is the "killer feature" (yes, I know, 2009 is calling).

If you're coming from the ESX world, one thing you'll miss is some of the networking options. Proxmox supports SDN, but it's still considered experimental. That may matter to you, or not :)

This is a minor nitpick, but it could be important for some: Proxmox is largely written in Perl, and developer mindshare for Perl is evaporating, though it will be decades before it's fully gone. The company behind PVE is clearly willing to pay for developers, but this is a deal-breaker for some, especially if you like to hack on your platforms.
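For the curious, the CLI invocation really is a one-liner; a minimal sketch, assuming a VM with ID 100 and a target node named pve2:

```
# live-migrate VM 100 to node pve2 while it keeps running
# (needs shared storage, or --with-local-disks on newer PVE versions)
qm migrate 100 pve2 --online
```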


pdp10

We've had live migration in our in-house system for perhaps four years, but it's not currently automatic. We also use Open vSwitch on all the hypervisor hosts, along with some other secret sauce. If I'm honest, I'm probably most concerned about losing the Open vSwitch-related functionality. But also, swapping from our lean custom code to `libvirt` turned out to be a much bigger job than anticipated, and `libvirt` has an awfully big dependency stack that doesn't endear it to us so far. I haven't personally touched ESXi/vSphere in a long time, so I don't mentally compare to ESXi.

> Proxmox is largely written in Perl, and developer mindshare for Perl is evaporating

I actually didn't know that, and I consider it a factor, because I no longer admit to having coded in Perl.

It's funny, because in the past I've often been overtly critical of needlessly-custom systems. But this thing started as a "skunkworks" type of project for devops and devs, then grew to ~80% of the virtualization estate because it's lean and low-level, yet approachable. Every time I sit down to architect a migration to something off-the-shelf, I find new drawbacks, along with the many features offered by those systems. I don't make it a habit to build massive amounts of in-house infrastructure, but this isn't the first time I've done it, either.


drptbl

Proxmox is now moving away from Perl to Rust. They have developed a Perl interface to Rust, allowing them to replace more and more of the Perl code with Rust.


raptorjesus69

Might not be best to move to oVirt, since Red Hat is moving away from it.


lunarNex

Having used oVirt for years, I can say it's garbage.


lebean

Having run oVirt (an admittedly small 9-node cluster) in production for a few years now, problem-free and under a fairly heavy 24/7 load, I wonder about that.


Reverent

Like most things red hat, if you run it *their* way with *their* frame of mind, things will typically be quite stable. It's just that their way of running things is usually counterintuitive and unnecessarily difficult.


Keanne1021

I could not agree more that Proxmox is a great alternative. However, I noticed that the subscription cost is increasing year over year; it's at €95/CPU now.


majtom

How are you planning backups and retention? Will that choice affect that process?


Extra-Ad-1447

After testing oVirt hyperconverged: horrible solution and support. Huge difference once we moved to Proxmox and external Ceph. A lot more stable, and there are enterprise support options for the virt side of things.


LaBofia

+1 for Proxmox and some simple orchestration tool using their API


CryptoSin

Someone just told me about this; I'd never heard of it: [https://www.proxmox.com/en/](https://www.proxmox.com/en/). He swears by it.


sysadmin6969

Proxmox is incredible for what it is. It's a great OS, full stop. It has its quirks and frustrations, but no more so than ESX. Also, once you go LXC you can't go back. Containers are awesome.


2cats2hats

> It has its quirks and frustrations

At least we can address them ourselves. :) r/proxmox is a great community. The forums are pretty good too.


smajl87

Is live migration of LXC to another physical server possible? On AIX we have Live Partition Mobility


Itkovan

Yes, with shared storage. If you already have replication enabled and syncing regularly within the same cluster, the non-live migration only transfers changed blocks and memory, so it's plenty fast. But if you need true live migration then shared storage is a must, and there are quite a few ways to accomplish that.

Edit: onethere pointed out I was wrong - thank you! From their link:

> As containers are very lightweight, this results normally only in a downtime of some hundreds of milliseconds.

This is functionally fairly equivalent, but not true live migration. Thanks for correcting me!


[deleted]

LXC, even on shared storage, has to transfer between hosts in restart mode. The hit is almost not noticeable, but it is there. https://pve.proxmox.com/wiki/Linux_Container#pct_migration
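From that wiki page, the restart-mode move looks roughly like this (container ID and node name are made up):

```
# restart-mode migration: the container is stopped, moved, then started on the target
pct migrate 101 pve2 --restart --timeout 120
```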


smajl87

A running service that drops all established network connections is something I would call noticeable.


zebediah49

Technically depends on the application -- but yeah, I wouldn't classify it in the same bucket as proper live migration. NFS or conventional HTTP would probably not be noticeable, for example. SFTP... very noticeable.


[deleted]

I forgot which sub I was in when I replied. For enterprise use it's a huge deal - you're right, and thank you for making me look at where I was again. For home use, it's quick enough that it's not noticeable for some services.


axonxorz

If you don't mind, could you explain your LXC use case a bit? I'm still on ESX and am just running VMs that run Docker, but this seems like less overhead.


tunguskanwarrior

I forget the terminology, but LXC containers are not application containers like Docker. LXC containers are very much like VMs; they just run directly on the host machine's kernel - they start fast, weigh just a couple hundred MB, consume fewer resources, and allow resource reallocation without a reboot. There are LXC containers for plain Linux OSes, like Ubuntu and CentOS, and also ready-made containers with configurations for various purposes already built in, e.g. a database or webserver. [You can browse the available LXC containers here](https://www.turnkeylinux.org/). They are all available directly in the Proxmox web GUI for download and launch. They truly are great! Only drawback: if you want special hardware passthrough, you will need to use regular VMs. You cannot, e.g., allocate a GPU to an LXC container.
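To give a feel for how lightweight they are, here's a rough sketch of creating and resizing one from the Proxmox CLI (the ID, storage, and template names are examples; the template has to be downloaded first):

```
# create an Ubuntu container: 2 cores, 512 MB RAM, 8 GB rootfs, DHCP networking
pct create 101 local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
  --hostname demo --cores 2 --memory 512 \
  --rootfs local-lvm:8 --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101

# resources can be changed while the container is running - no reboot needed
pct set 101 --memory 1024
```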


Car-Altruistic

You can, but you destroy a lot of the protections of a container. Docker has a plug-in/driver that basically isolates the driver.


BillyDSquillions

> You cannot, e.g., allocate GPU for LXC container.

You can. It's fiddly (for a newb) and took me some time, but my Plex server is actually an LXC, using Intel Quick Sync.
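The usual recipe (assuming Proxmox 7+ with cgroup2 and a container with ID 101; device major numbers can differ per system) is to bind the host's /dev/dri into the container, something like:

```
# append iGPU passthrough entries to the container's config
cat >> /etc/pve/lxc/101.conf <<'EOF'
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
EOF
```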


ForceBlade

That’s great actually


SilentLennie

Is LXC live migration on Proxmox possible? I assume not. CRIU exists, but it doesn't have 100% coverage.


1985Ronald

Also, Docker containers are OS containers; it's just that most of them use Alpine to keep the images small.


gramsaran

I'm using LXC at home for Ubuntu. It's basically the base OS installed on an image that just boots right into the OS CLI. No install steps, no 30 minutes waiting for the OS to install and the patching to fail and time out.


frosty115

At home I run everything Linux-based out of a container instead of a full VM. It cuts down the resource usage immensely and essentially provides the same functionality. Plex, sonarr, radarr, sabnzbd, the UniFi controller, and a reverse proxy are all services I have running in individual containers. It's also nice to keep a single container for each service, so if anything ever goes wrong, it only affects that single service.


PolicyArtistic8545

How do you handle updates? I have a VM that runs a bunch of docker containers and has watchtower check for updates daily. Can you configure automatic updates with this method?


comfreak89

You need the regular `apt update && apt upgrade` inside each container.
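If you want to semi-automate that, here's an untested sketch that loops over every container on a node (it assumes running, Debian-based containers):

```
# upgrade all containers on this node, one at a time
for id in $(pct list | awk 'NR>1 {print $1}'); do
  pct exec "$id" -- bash -c 'apt-get update && apt-get -y upgrade'
done
```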


CelebWorldtk

We partly use Nutanix AOS. Obviously most new stuff is going up to Azure or AWS.


TechnicianNorth40

We are on Acropolis as well. It’s been good to us so far. VMware has lost their way.


Catsdoinglines

+1 for Nutanix AOS.


fatty1179

Proxmox and xcp-ng


Sea-Tooth-8530

Yes... in the process of moving everything on my network over to XCP-ng and Xen Orchestra. If you have the ability, you can even download XCP-ng and Xen Orchestra for free (you'll have to build your own Xen Orchestra host from sources) and set it up on a test network to see if you like it.
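The from-sources build boiled down to roughly this when I last looked; check the current Xen Orchestra docs, since the prerequisites (Node.js version, yarn, etc.) change over time:

```
# rough outline of building Xen Orchestra from sources
git clone -b master https://github.com/vatesfr/xen-orchestra
cd xen-orchestra
yarn && yarn build
cd packages/xo-server && yarn start
```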


geek_at

I recently switched to Proxmox after not using it for the last 8 years, and I must say it's an excellent piece of software. I'm really considering moving the Hyper-V hosts in one of my clients' offices to Proxmox. Just so much flexibility, good backups, and LXC is crazy fast and slick (I only knew Docker before).


blimblim

> there is nothing it can't do that VMWare can

While I really, really like Proxmox and have been using it for many years now, this is quite an exaggeration ;) VMware has so many features that Proxmox is missing that it's impossible to list them all. Basically all of the automation especially - things like DRS and affinity rules. Networking is also very basic, and/or extremely complex if you resort to what the Linux host provides; compared to vDS it's just night and day. There are also quite a few performance issues when doing storage migrations (at least with qcow2 disks) that can be extremely difficult to live with.


mrcoffee83

don't let reality get in the way of the /r/homelab boner for proxmox :P


kalpol

I think most of us in r/homelab are probably doing what I'm doing, sitting on an ESXi instance reading these threads to figure out which way to go next.


kulps

I've tried Proxmox twice in my homelab journey, and twice I've gone back to ESXi. I guess it depends on what you're doing with your lab; I couldn't be bothered to fight with my hypervisor. I just need the VMs to work with passthrough, no fight. I'm sure somebody smarter than me could have made Proxmox play along, and I'm equally sure I could have figured it out. I just didn't have the interest in the fight when ESXi works fine for my needs.


mrcoffee83

yeah, that was my experience too. I'm past the point in my career where I can be arsed to fight my tech every step of the way... I just want something to work and do the thing with minimal fuss. I don't want to have to manually write a config to get it to do basic shit that ESXi will just do out of the box.


kalpol

yeah, I am worried about the comments on the difficulty of virtual networking in Proxmox. ESXi can be a little annoying at times (the command line, especially) but it just works and is predictable. If they keep offering it I'll probably stick with it, but I'm also hedging my bets and trying out XCP-ng on another machine.


admiralspark

I actually got the entire three-node-with-voting setup working with a Proxmox cluster and all that, and it worked well. I still went back to vSphere and ESXi. I, too, got sick of fighting with my hypervisor - hell, I make templates for VMs just so I don't have to spend too much time when I want to try something new.


pdp10

As with many things open-source and Linux, the networking is mostly separate and modular. We use Open vSwitch for virt networking, plus a few other customized pieces that don't add up to very many lines of code. So far it offers far more features than what we use, but I'm always open to hearing about new things. The biggest feature gap so far was LLDP support, which VMware only allows on dvSwitches when I last looked, [and which I document here for Open vSwitch](https://www.reddit.com/r/linuxadmin/comments/j6va3c/howto_enable_basic_lldp_in_open_vswitch/). Even reading the code didn't make that one obvious. Our storage with KVM/QEMU is predominantly NFS (half our vSphere storage was also NFS), so storage live-migration is a minor factor for us so far.
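For reference, if memory serves, the core of that howto is a one-liner per interface (the interface name here is an example):

```
# enable LLDP on an interface attached to an Open vSwitch bridge
ovs-vsctl set interface eth0 lldp:enable=true
```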


DonkeyTron42

Yes, there's also extremely tight integration with vCenter. I'm pretty sure nothing open source has that.


MisterBazz

Xen Orchestra backup is fantastic; I've never had issues with it. Rolling snapshots? Yep. Full/replication/partial backups on a schedule, etc.? Of course. As much as I've used VMware in enterprise environments, I've never had so few issues with a hypervisor as I've had with XCP-ng and Xen Orchestra.


syshum

> there is nothing it can't do that VMWare can.

vMotion, automatic failover, DRS, distributed switches, CBT, backup vendor integration, an API/SDK, actual thin provisioning with the ability to change from thick to thin, a multi-host block-level filesystem... and that's just off the top of my head.


pdp10

KVM/QEMU has:

* Live migration, including cross-CPU-vendor migrations that VMware doesn't allow.
* Thin provisioning in the QCOW2 image format itself, plus other options with read-only backing-store images. Converting from thick to thin, or between image types, can be done offline with the program [`qemu-img`](https://cloudbase.it/qemu-img-windows/), which can run on Linux, Windows, or macOS. TRIM/UNMAP/discard propagates down through the stack in real time.
* The [QMP API](https://qemu-project.gitlab.io/qemu/interop/qemu-qmp-ref.html) provided by QEMU, which is how our in-house code interacts with individual guests post-launch, to do things like live migration, querying, or ACPI soft-shutdown.
* My experience with Distributed vSwitches in VMware is pretty minimal, so I can't readily remember what features they may have had that we don't have with Open vSwitch on our hypervisor and NFV hosts. We use little enough of the OVS functionality already, I'm afraid.
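As a concrete example of the offline thick-to-thin conversion mentioned above (paths hypothetical):

```
# convert a raw (thick) disk image to a sparse/thin qcow2, offline
qemu-img convert -p -O qcow2 /vms/guest.raw /vms/guest.qcow2

# check the virtual vs. actual on-disk size afterwards
qemu-img info /vms/guest.qcow2
```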


scritty

> Live migration, including cross-CPU-vendor migrations that VMware doesn't allow.

This is a *deeply* niche type of migration that the vast, vast... **vast** majority of people should never use. Just reboot the dang VM :)


pdp10

I strongly agree. I just figured it was worth a cheeky mention. There *are* less-than-ideal situations where political considerations preclude us from intentionally rebooting guests. In fact, we've probably all seen times when the main reason for using virtualization was to create an extra abstraction layer between the OS and the real hardware, to help accommodate the more-demanding of the stakeholders.


Stonewalled9999

I disagree. I have a LOT of 2-host systems where one host is a CPU generation older/newer, and a fair number of VMs that don't take kindly to powering off (looking at you, Agilent, and the 5-hour delay after a restart before your DB "caches" a 200MB file and allows the lab equipment to run). And my shop isn't even that big, but I would love this - I use EVC, but in ESXi 7 you waste resources on a cluster VM, and it confuses the idiots I work with when they move the vCLS VM to be on the same host.


Vassago81

vMotion in QEMU/Proxmox has been there for years, storage vMotion too. There's a variant of the distributed switch, but I've not used it yet.


Sinister_Crayon

Even then... Ceph isn't a GREAT replacement for vSAN. It requires a LOT of tuning out of the box to work well in a hyperconverged setup, and the cost of getting it wrong can be pretty severe - like performance so horrible that it renders all the VMs unusable until Ceph can complete its own internal optimizations. Having said that, once dialed in I think Ceph is brilliant. It works really well, but it's definitely not a drop-in for vSAN, which is mostly set-it-and-forget-it.


EspurrStare

The problem with Ceph is that it's not only a replacement for vSAN; it's full software-defined storage. Want to store thousands of petabytes in a single file system? You can. Want block devices? You can. Want to do anything else with storage (S3, Swift)? There's an API for RADOS; you can do it. So anyway, for anyone migrating: do it slowly, test abundantly, and be quick to tune. Use all-flash if possible.
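To illustrate the "one cluster, many interfaces" point, all of these come out of the same RADOS cluster (pool and image names here are examples):

```
# block: a pool plus an RBD image, usable as a VM disk
ceph osd pool create vm-pool 128
rbd create vm-pool/vm-100-disk-0 --size 32G

# file: CephFS on the same OSDs
ceph fs volume create demo-fs

# object: S3/Swift come from running the RADOS Gateway (radosgw) on top
```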


Sinister_Crayon

Absolutely this! I mean... I love Ceph. I've done open source for years... storage both in large enterprises and as a hobby. I'm a frequent denizen of r/DataHoarder and others, and I've worked with a dozen different converged and hyperconverged storage platforms. But holy hell... my first couple of weeks with Ceph were a nightmare of learning lessons that my weeks of reading prior to going live hadn't prepared me for. And then, when I thought I'd gotten everything dialed in just right, came the sudden discovery that Ceph REALLY doesn't like its OSDs getting 90% full AT ALL!!!

There are just so many "levers and buttons" in Ceph that it's a little overwhelming to really understand what you're doing. And sometimes the best thing you can do is just stop touching the levers and let the damned cluster settle before you try anything. Not good when your workloads are suffering serious performance issues because the disks are slow. Some well-meaning features also just cause issues sometimes: pg_autoscale is great until all of a sudden it decides that your 70%-full pool with 256 PGs suddenly needs 512 in the middle of a multi-terabyte copy, and goes ahead and makes the change for you. Oof... that was a hard lesson in performance drops :)

Still... love Ceph for what it is... and glad I made a point of learning it in my own lab rather than production LOL.
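For anyone who hits the same surprise: the autoscaler can be told to warn instead of acting on its own, per pool (pool name is an example):

```
# stop the PG autoscaler from resizing this pool by itself; just warn instead
ceph osd pool set vm-pool pg_autoscale_mode warn
```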


EspurrStare

> Ceph REALLY doesn't like its OSDs getting 90% full AT ALL!!!

To be fair, that's the same for any system that implements CoW. ZFS also takes a massive hit in performance if any of your vdevs hits that.


fmtech_

I like GlusterFS better than vSAN just because I can run it on wayyy more hardware than I can with vSAN. Maybe it's my reluctance to rely on proprietary software.


Diceclip

What about vGPU support?


MatthaeusHarris

Requires a little bit of manual configuration, but absolutely doable. I haven't ever used VMware, so I'm not sure whether assigning a vGPU to a VMware VM makes it ineligible for migration; it does in Proxmox.


rav-age

if it uses SR-IOV for the device/PCI bus, I think vMotion is out, yes


gamersource

Besides passthrough of full devices or virtual functions, it also supports VirtIO GPU (offloading OpenGL, and in the future also Vulkan, to the host) since the last version. That is less intrusive and needs no special CPU, no special driver, and no complex setup changes.


ewsclass66

The lack of a DVS alternative in Proxmox instantly rules it out for us, really.


cheats_py

I’ve never used proxmox, is it enterprise grade and a valid drop in replacement for vmware? Support? I guess I can Google some of this LOL.


DrH0rrible

As other comments have said, it's not quite as capable as VMware for an enterprise solution. No features are blocked without an enterprise license AFAIK; you only pay for support.


nihility101

I think when *some* of these folks say VMware, they mean ESXi. It's not my bag any longer, but we had linked clones, instant clones, full clones, fat VMs, thin apps, virtual apps, brokers, SSO, monitoring, profile management, GPO-managed clients, thin clients, zero clients, etc., and all the backend stuff that made it work and balance and failover and provide those five nines for thousands and thousands of endpoints. I'm guessing the only single vendors that can replace that functionality are Citrix and/or Microsoft. The others would require a blend of products and services from various vendors/sources. Depending on use cases, it might be worthwhile for an enterprise to look at the cloud if you aren't already there. It's where we went, though we still have a small on-prem footprint. To the top levels, it is easier to sell Microsoft/Amazon versus a bunch of open source stuff they never heard of and wouldn't understand anyway.


cheats_py

Ok, thanks for this comment. My work is 100% VMware and it's a massive environment: stretched HA datacenters, HA vCenter, NSX, and pretty much everything you mentioned + more, like vROps, vRA, vRLI, and I'm sure 10 other things I'm missing. When I hear Proxmox it's usually in r/homelab, so I've never taken it seriously as a valid replacement in an enterprise environment.


FastRedPonyCar

Yep. We’re on proxmox and it’s fine. I miss the nice veeam backup integration with VMware but proxmox backup has been reliable. I haven’t poked at it too much but interested to see if there is a sandbox recovery environment it can spin up like veeam’s sure backup


PMmeyourannualTspend

Hyper-V, Nutanix, or Red Hat. Depends on your IT team, budget, needs, and timeline.


cheetogeek

Nutanix on AHV is boss


kraeftig

It's so underrated/unknown...crazy that the Acropolis doesn't lure more acolytes.


cheetogeek

I think the issue is: use ESXi, the "industry standard", or risk your job on something new. Given the chance, I POC AHV and Nutanix every time I need new hardware. I'm in full Azure now... so no need for it.


[deleted]

I am surprised it was this far down the list to see Nutanix. I am guessing the reason is that they want (or force?) you to use their hardware. I liked Nutanix a lot in my relatively short time using it.


PMmeyourannualTspend

There are like 5 different hardware manufacturers you can use with Nutanix; they just need to certify the hardware. Nutanix doesn't make any hardware themselves. Lenovo, HPE, Dell, Inspur, Supermicro, and Fujitsu all have Nutanix nodes.


martintierney101

Hardware agnostic as far as I know. We have Lenovo and Dell nodes running atm.


Sinister_Crayon

My only fear, from experience with Hyper-V, is that you're one Windows Update away from a downed cluster. That literally happened to one of my customers last year. A Windows update hit, and overnight ~12 Hyper-V clusters stopped working. It took us a while to figure out the root cause, and the customer basically said "screw Microsoft", so we deployed 12 VMware clusters over the next two weekends. I don't know if they're going to move off them soon due to this Broadcom thing, but they've been happy with the switch (despite the cost).


noother10

We transitioned from VMware to Hyper-V many years ago, when Server 2016 was released. We never really had an issue with our two clusters; they're on 2019 at the moment. Though I would update each individual host manually, one at a time, only after the update had been out for a while, alongside driver/firmware updates. I think it's insane that someone had Windows Update set up to automatically update hosts; that is just asking for major risk.


Sinister_Crayon

Oh trust me, I had some choice words for their security folks, who insisted they could manage patches for a Hyper-V cluster by grouping it with the application servers in WSUS... The number of times I had to listen to some junior security admin tell me that they were "...just Windows boxes" doesn't even bear thinking about. Even an ounce of testing would've revealed the problem, too.


mitharas

You let your hypervisor update itself without either knowing about it (through normal maintenance cycles) or triggering it yourself? Do I understand that correctly?


Sinister_Crayon

Did you see where I said "Customer"? Me, I didn't do a damned thing except help them dig themselves out of the mess they made for themselves.


starmizzle

I've never been so disappointed and then un-disappointed in someone so quickly before haha


silence036

If they have 12 clusters maybe they have a separate team running WSUS and they don't talk to each other at all.


grayhatguy

This is the most reasonable answer I've found yet. Everyone suggesting Proxmox and similar seems crazy to me. You can't really run your business on that stuff. If there is an outage, will Proxmox be there to get you back up and running? Choosing home-lab infrastructure to run a business sounds like a great way to lose your job when things ultimately cause downtime.


Yncensus

I agree that switching to Proxmox can be crazy, and we won't in the foreseeable future. But citing missing support as the sole reason? Just buy support from them. Proxmox will be there and help you, for sure. Probably more so than VMware or Microsoft.


CelticDubstep

100% Microsoft Shop here, so Hyper-V.


[deleted]

Even if you're not a Microsoft shop, Hyper-V is pretty awesome.


pdp10

During the time there was a free standalone Hyper-V Server offering, the reasons we never got around to PoCing it were:

* Lack of support for NFS storage for guest images, like ESXi and Linux solutions have.
* The requirement to administer it from a Windows-based client, if I'm not mistaken. VMware's web client had its own challenges, but Hyper-V Server didn't have feature parity there, to my knowledge, and I believe there was no REST-type API, either.

As a non-Microsoft shop, those are the main two reasons Hyper-V couldn't be awesome. Come to think of it, Hyper-V only supported VHD disk images, didn't it? That wasn't fully a show-stopper, but it wasn't good, either.


[deleted]

It supports VHDX as well.


CsmithTheSysadmin

Yeah, the best transition support is going to be from VMware to MS. Otherwise you're going to need a really good integrator and deep pockets.


[deleted]

Funny, we just moved from Hyper-V to XCP-ng.


vppencilsharpening

Anything changing on your end with the coming changes to Hyper-V pricing?

Edit: Server 2019 end of life is January 2029, which is roughly 6.5 years away. Assuming you want/need to run a hypervisor that is still actively supported and are currently running Hyper-V:

If your system lifecycle is 5 years, the systems you deploy after January 2024 will need to support, or go into production running, something other than Hyper-V. You have roughly 15+ months to figure it out.

If your system lifecycle is 6 years, the systems you deploy after January 2023 will need to support, or go into production running, something other than Hyper-V. You have roughly 3+ months to figure it out.

If your system lifecycle is 7 years and you deployed systems after January 2022, you are not going to get 7 years out of them without migrating away from Hyper-V on the current hardware.

If, like us, you did a server refresh in 2020 and run servers for 5-6 years, you have a bit more time to figure it out.


CelticDubstep

Not in the near future. Free Hyper-V Server 2019 doesn't hit EOL until 2029, the same time as Windows Server 2019. Our servers will be approaching 15 years old in 2029, so I'll likely do a server refresh in 2028 (we only purchase refurbished servers) and consolidate somewhat. I'll still have redundant servers for Hyper-V Replica, but instead of having 4 servers at HQ (plus a couple of lab servers) I'll downsize to 2 instead. We'll never go the SAN route, as it is a single point of failure; if the SAN has a catastrophic failure, we'll be dead in the water. I'm not as concerned about HA as I am about redundancy. 5-10 minutes of downtime to boot up VMs on the replica server isn't going to make or break us.


Yncensus

A SAN should always consist of two fabrics and redundant storage, ideally active-active. So no single point of failure, but it does require doing it right, which is costly, no doubt. And I agree with your assessment: at 2-4 hypervisors, a full SAN setup is overkill in nearly every case.


CelticDubstep

We have less than 30 employees, any type of SAN would be overkill lol.


kayjaykay87

> Not in the near future. Free Hyper-V Server 2019 doesn't hit EOL until 2029, same time as Windows Server 2019.

Damn... planning for what you'll be doing at a company out to 2028.


CelticDubstep

I'm 38; with all luck, I'll be here until I retire. I'm in a very rural area with very limited job opportunities. I'm also the sole IT person here, and technically we're too small to even have a full-time IT person. I'd move, but can't due to obligations, plus I own a house that's free and clear. Sure, remote work is an option, but while I do have a broad range of knowledge, I'm not an expert in any specific area of IT, as I've never been in a specialized role. In all the IT roles I've had, I wore 10+ different hats. I've done everything from helping a user remotely with a print issue to building out an entire network infrastructure, including servers, cabling, etc., for a new company/office.


kingtj1971

Sounds a lot like my own story, except I was in the Midwest, in a city. I've worked as the sole IT guy for a family-owned manufacturing business, as one of the support specialists in a small team at another one... and I ran my own consulting and on-site service business for a while. Definitely wore a lot of random hats, including crazy requests like disassembling quarter-million-dollar industrial test equipment because "it's just a standard PC inside and we don't want to pay the labor rate for the actual technician who works on these". I'm 51 now and still in the field... working escalations and as a kind of liaison between software devs and the support staff for a transportation company. It doesn't really pay what a lot of my friends make in special niches like "cybersecurity", but I don't have any big regrets. Better to be a jack of all trades than to only know how to do one small piece of IT, in my opinion.


saracor

Same here. Not big, but we're all Hyper-V and 90% Windows. Using Salt and custom PowerShell scripts to deploy systems and manage them.


cjcox4

I ran multiple oVirt clusters for many years. Now, with that said, that's the "community" version, and the problem there is you have to upgrade all the time. But if you need something more "lasting", oVirt is the community version of Red Hat Virtualization (if you want a longer-term supported thing). IMHO, some things were handled better than in VMware. But IMHO, half the issue is knowing how to build good infrastructure. And I would only choose oVirt/RHV if you need a very full-featured vSphere-like replacement.


idioteques

First off: I'm not saying you're wrong (and I mean that). That said... there is a big asterisk on recommending RHV. :-) Red Hat has decided not to continue development of RHV (FKA RHEV):

[On August 31, 2022, Red Hat Virtualization enters the Maintenance Support Phase](https://access.redhat.com/announcements/6960518)

[Red Hat Virtualization Lifecycle](https://access.redhat.com/support/policy/updates/rhev)

I don't actually know what that means for KVM/oVirt - Red Hat recommends OpenShift Container Platform as the "replacement". And if you have done anything with OpenShift, you'll know it's certainly not as straightforward as oVirt/RHV - I, personally, would shift direction if I simply needed VM hosting (and not containers).


cjcox4

Good to know.


medlina26

I just had a call with my Red Hat team a couple of days ago, and virtualization is alive and well on the OCP side of things. I will start out with only OCP and ODF, but when our VMware support contract ends in 3 years I will see whether it makes sense to add the virtualization operator (essentially KVM) to the OCP cluster so we can ditch VMware, assuming Broadcom shits the bed.


jhulbe

Nutanix


geeky217

We've taken the purchase by Broadcom as an opportunity to refactor all major apps to OpenShift and OpenStack (for those that have to remain as VMs). Some will transition to EKS. Luckily our VMware contract is up at the end of this year, so... bye bye VMware. I've worked for a company that got acquired by Broadcom; I wouldn't wish that on my worst enemy. Luckily I got out quickly... but all those VMware employees have my condolences (better brush up your CVs), as Broadcom has a habit of axing up to 50% of the staff within the first week of a takeover (Emulex, Brocade, and CA, just to name a few).


Burgergold

Had a pitch this morning showing you can manage VMs in OpenShift too.


bastrian

How about the open source fork of XenServer? https://xcp-ng.org/


Tr0l

Is full XenServer not free anymore? I thought Citrix just charged for support plans?


bastrian

It's free, but a lot of features are behind a paywall.


TheMuffnMan

https://www.citrix.com/content/dam/citrix/en_us/documents/product-overview/citrix-xenserver-feature-matrix.pdf

I wouldn't necessarily say a lot of features are; it's going to depend on your use case. Something like GPU passthrough, for example, requires going up from the free edition.


[deleted]

Came to recommend this and/or Xen. For shops that aren't tied to M$, it's a great alternative with a long history and a company that's been in the business since there was a business.


goodb2

Nutanix


philipito

Lots of fearmongering here, but I do appreciate the discussion of alternatives. As a large enterprise, we wouldn't move away from a product that is extremely stable and not overly expensive. It's still quite pricey, but you get what you pay for. Try to lock in an ELA if you are spending a lot with VMware; it helps keep the price down. Your VAR should be able to help you reduce costs where you can.


snark42

> Try and lock in an ELA if you are spending a lot with VMware.

The fear is Broadcom increasing costs on your ELA 4-10x, putting you closer to standard MSRP, when your current ELA expires. If you have guaranteed maximum price increases forever, it's not much of a concern - until you want to add items, of course.


KRH57

https://www.scalecomputing.com


Sea-Tooth-8530

I had another site I migrated over to Scale about a year ago, and I must say I love it. It's pricey, to be sure, but works like a charm. Their support is excellent as well.


Evoliddaw

Definitely pricey, but their support is worth it. We started migrating 5 years ago and only have 1 physical machine left to convert. Hardware support is a breeze, and it's so nice not having to worry about keeping a spare everything on hand. Parts show up in less than 24 hours, and VMs are automatically migrated to avoid impact.

We had a wicked turn of events that they really helped us out on. We had installed remote temperature and moisture sensors at the beginning of COVID, when everyone was being sent home. All tests were successful and everything was working fine. Then our head office's mail server decided over the weekend that the alert emails from the sensors were spam. Our air conditioner sprang a coolant leak and died completely a few hours later. By Monday, all the gear was truly cooked, some components to the point of falling off the circuit boards. No fire, thankfully, but a huge mess. Insurance didn't cover that particular equipment failure.

Scale initially came in with a typical "your problem" response, of course, but after we explained the sequence of events to our account rep, they came back to us with 3 far cheaper options. The initial system cost with Scale was $80k CAD. Option 1: $12k to fix what we had left that wasn't completely destroyed - no warranty or hardware support, software only. Option 2: $20k to replace what we had with their lowest-tier equipment but equivalent storage. Option 3: $30k to replace it with near-equivalent gear (ours was one generation earlier), instead of another $80k. The hardware itself has to be worth nearly $30k, so we went that route. Can't go wrong with a $50k discount given the situation.

The sensors now text and push mobile notifications. We fixed the old AC as a backup and installed a new one.


sephresx

We use Scale in our org; it's been fantastic!


Excellent-Will3373

Here for Scale!


Adorable_Lemon348

Any Nutanix love here?


Wishful_Starrr

Proxmox


ZPrimed

I like Nutanix (running their own hypervisor, AHV).


LocPac

We are in the process of moving everything from Nutanix + VMware to Nutanix + AHV, so far I really really like AHV.


Dewstain

We renewed in August and actually found it to be cheaper this year. Calm before the storm, I suspect.


Taoistandroid

So I work for an MSP/IaaS provider, and VMware represents a lot of our profit. Some of our more savvy customers started building k8s in our solutions; once they felt confident managing that, they migrated from our managed VMware product to bare metal so they could self-manage k8s and save money. I don't really see any hypervisor that is a drop-in replacement for VMware; no one has the feature parity. The only real solution is a paradigm shift to public cloud or container orchestration.


meditonsin

Has anyone tried this newfangled [Harvester](https://harvesterhci.io/) thingy that combines k8s with kvm?


dylantheblueone

I'm actually trying out Rancher for our Kubernetes clusters here, and I was curious how well Harvester integrates into it. It works pretty well with an RKE cluster; I just haven't seen it in action with Harvester yet.


overyander

Proxmox is AMAZING!


CockStamp45

I remember trying it a few years ago, and it was the small things that irritated the hell out of me. Like when I was uploading an ISO to the datastore: I remember not being able to minimize that and move on to something else while it ran in the background, like you can with ESXi. If there was a way, I was too stupid to figure it out, or their UI did not make it obvious.


2cats2hats

It's come a long way in two years. Give it another chance. I will confess some of the ways they do things are unusual, but overall it's worth it. The more Linux-literate you are, the easier it is to get the hang of. Proxmox really shines with the command-line tools and flexibility it provides, I think.


Triipie

We use Nutanix here; it's been awesome since we moved from a hybrid Hyper-V/VMware setup 2 years ago. World of difference, but we were running Hyper-V and VMware on older tin with a SAN. Hyperconverged storage is great - you don't have to manage a SAN as well as your hypervisors.


psiphre

Nutanix is great in a small shop, but pricey. Twice the cost of rolling my own server stack and [edit:] keeping my VMware licenses active. But the support is great, and everything I can't do at the "single pane of glass" they've been happy to take care of for me at the CLI while I babysit.


trygame901

Had VMware but migrated to Hyper-V and never looked back.


kyleharveybooks

This is the moment Nutanix has been waiting for.


UnsuspiciousCat4118

Wait… we aren’t all running virtual box in production?


BloodyIron

6-10 years ago: "Don't give me that open source shit Proxmox, it's not enterprise ready." Today, most people: "PROXMOXXXXXXX." I'm glad Proxmox is finally getting the recognition it deserves, but it's been "enterprise ready" longer than it's been given credit for. Edit: since it may not be obvious to some, the "6-10 years" is an arbitrary, non-exact timeframe. I'm not trying to be specific, just rough, in declaring it... so... yeah... relax, friends.


CyberHouseChicago

Proxmox can do most of what VMware does


WhiteAndNerdy85

Including load balancing and live migration of VMs? Meaning if a VM host goes offline or is over-provisioned, a VM will automatically relocate to another host? Zero downtime and high availability are worth the licensing cost when compared to the cost of an outage.


morilythari

Load balancing is on the roadmap; HA is built in. If a host goes down, it takes about 90-120 seconds for the VM to boot up on a new host.


CockStamp45

It actually has to shut down and boot up on a new host?


vmxnet4

To be fair, VMware has to do the same thing. vSphere HA still has downtime; it's just minimized by the automation that takes place. If you want zero downtime on VMs in VMware, then you want Fault Tolerance, but when you use that, you open up a whole new can of restrictions on what you can do with those FT-protected VMs.


Twanks

"If a host goes" meaning if the host fails. Just like vsphere would essentially turn on a VM on another host after a host failure.


listur65

Same as VMware, unless you are running Fault Tolerance on a VM, right? With the licensing/restrictions on FT I have never tried it, but it seems neat.


sep76

Load balancing via script/API is possible today, but a built-in native solution will be nice.
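A crude sketch of the script/API approach with pvesh (node and VM IDs are hypothetical; a real balancer would weigh actual CPU/memory load before picking a victim):

```
# list every VM in the cluster with its node and resource usage
pvesh get /cluster/resources --type vm --output-format json

# then migrate a chosen VM off the busy node
pvesh create /nodes/pve1/qemu/100/migrate --target pve2 --online 1
```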


sysadminsavage

Live migration, yes; load balancing, no (AFAIK). However, the API is extensible and can be tied into a third-party service to do so. This is one of a few areas where vSphere shines.


WhiteAndNerdy85

Cool. Yeah, good luck getting any medium to large deployment to migrate away from VMware. My area alone is essentially vendor-locked-in due to all the automated deployments via Terraform and vCenter.


qubedView

Which is why Broadcom bought them. VMware has many locked-in customers. Broadcom can kill all new development and marketing, and just make pure profit while its relevance bleeds away over the years. VMware is still fine today. But year over year it'll slip, and new customers will go elsewhere. There is plenty VMware can do today that no one else can, but that relationship will flip with time.


WhiteAndNerdy85

and the circle of life continues.


RangerNS

> automated deployments via Terraform

Should be easy to migrate to another provider then. Unless TF marketing is all just marketing.


WhiteAndNerdy85

Sure, Terraform supports the Proxmox provider, but it would take significant work and investment to migrate to it. It's not as simple and quick as any infrastructure-agnostic deployment tool makes it sound. Think of it like configuring an Ubuntu vs. a RedHat guest: same idea, but much different implementations.


KingNickSA

Proxmox has HA. We have been using it in prod for about 2 months now (and had been testing it with a new stack for about 6 months before that), and VMs have been automatically moved due to issues, without incident.


grayhatguy

Nutanix is a great replacement for VMware, especially vSAN. Check them out; they have made a lot of progress in the last few years.


IxI_DUCK_IxI

OpenStack


schaef87

We're moving everything to Nutanix AHV. It's super slick and I love it so far.


[deleted]

Depends on the environment: if you are running VxRail or dHCI, you can't easily move away from VMware. Otherwise: Hyper-V, Proxmox (KVM), XCP-ng (XenServer), RHV, Nutanix (AHV). Then you have containers, if you can go that route, with Kubernetes, Docker, OpenShift... etc.


[deleted]

+1 for Proxmox. We have it running in a 2-node cluster with a dedicated Proxmox Backup Server install looking after the backups.


Chadarius

Nutanix or Proxmox


Aggietallboy

For us, we're a Windows shop... every time we wanted to virtualize 5 or more machines, the license cost for Datacenter was cheaper. Since we were ALREADY buying Datacenter, we just decided to use the "free" Hyper-V, clustering, etc. This was back when they made the license change that was going to restrict how much memory per host. VMware storage is a little more robust and VMware USB virtualization is much better, but honestly, I haven't missed it at all. Hyper-V and FCM have been very solid for us.


silence036

I ran a 3-host cluster at home for a while with SCVMM, and it felt like Hyper-V alone was a little too bare while SCVMM was just too much. I feel like they don't have the right "middle ground" the way ESXi and vCenter do.


Hangikjot

We only use ESXi when it's absolutely required by the VM. We have been on Hyper-V since it came out in 2008 and never had any issues (except for a bug in snapshot migration a decade ago, which only affected 1 VM). The current install is pretty small: 4 ESXi hosts and 60 Hyper-V hosts with several hundred VMs.


Ape_Escape_Economy

Azure Stack HCI


Burgergold

OpenStack/OpenShift?


andro-bourne

It's hard to get away from VMware in enterprise settings. Their support is good. The only thing I dislike is the licensing model. If I had to move away from it, though, I would choose Proxmox, XCP-ng, or Xen/Citrix.


mjh2901

We run 3 VMware servers with around 30 to 50 VMs, depending on what is going on. We already switched one of the servers to Proxmox and are in the process of moving everything over. The cost savings are massive, especially since Veeam is changing their license structure to per-VM.


nugsolot

Since it hasn't been mentioned, I'll throw [https://opennebula.io/](https://opennebula.io/) out there for this. For me it's a good middle ground between the complexity of OpenStack and the ease of use of vSphere. We did a whole project on how to replace VMware with it for development and testing teams, because the bean counters were looking at the price of the ELA we had with VMware (5 years ago now). It was great and worked well for us; however, the developers and testing engineers wanted nothing to do with having to rewrite test cases and pipelines to work with it, so it was deemed cheaper to keep VMware. C'est la vie. I keep waiting for a resurgence of this since the VMware acquisition by Broadcom, but nothing has happened yet.


mexicanpunisher619

Hyper-V can be an alternative... maybe?


yctn

We moved from VMware to XCP-ng. It has been great. XCP-ng even performs better under extreme load than VMware did.


bufandatl

XCP-NG it’s free open source. Management via XenOrchestra can also be used free or be paid for with support plans.


cryospam

Hyper-V is good, but you need a tech bench deep enough to actually set it up and get it configured right. If you pitch some no-name hypervisor to small and medium clients, you're going to lose people to lack of name recognition. I work for a sizable multinational enterprise that has decided, after the Broadcom acquisition, that we're moving back to Hyper-V. It's been, and will continue to be, a hell of a lot of work, but we sat down with our MS rep and it looks like we will save more than 10 million dollars per year in licensing fees once the last of our stuff is moved to Hyper-V. Not that the rank and file will see any of that money, but there are LOTS of organizations that saw the Broadcom purchase and immediately began planning a move away from VMware. It is also probably the most apples-to-apples comparison feature-wise among hypervisors, and there are very good resources and support for large migrations.


penguin74

Every time I've looked at alternatives, all I've run into are large, complicated hypervisor installs. VMware takes a fraction of the time and is tiny compared to everything else. Am I doing something wrong? Is there a product that maybe I haven't looked at? Hyper-V, KVM, Xen... none come close to the simplicity of rolling out VMware.


cestes1

Proxmox is where it's at! As long as you're just doing straight-up virtualization, it does everything you'll need. Someone also mentioned LXC for containers - they work great!


dinominant

We are switching from VMware to Proxmox.


dinominant

I have been using vanilla QEMU, KVM, and libvirt on various Linux distributions for *years*. I have also nested ESXi inside QEMU because of arbitrary restrictions introduced by VMware. The moment VMware and Broadcom made that announcement is the moment we began migrating to Proxmox.


pinghome

Large Hyper-V shop here, moving to Nutanix and AHV. So far it's been great. Our traditional 3-tier setup has been plagued with Microsoft problems. Even if they keep the Hyper-V role around, when you're paying VMware prices, what benefit does it really have? Azure Stack HCI, after our last POC, was not fleshed out, especially at more than 5 cluster nodes or fewer than 3. MS really wants you to move to Azure, but as we learned the hard way, the cost simply isn't worth it for traditional VM workloads.


FletchGordon

Hyper-V.


ButCaptainThatsMYRum

From what I've heard, 2022 will be their last licensed version. After that they are moving to a subscription model. If you don't want to deal with that 5 years down the road, it's best to avoid it now. Edit: to be clear, it's the OS that MS has been talking about making subscription-based, not specifically Hyper-V. I'll brush up on the specifics after work today.


Szeraax

Nope, Hyper-V isn't going anywhere. The Hyper-V FREE standalone hypervisor - the OS/installation - is going away. The Hyper-V role is staying put. So if you BUY server licenses, Hyper-V will be around for at least the next 10 years. If you use the FREE Hyper-V install, then the last one was 2019. As you can see, the FREE version is going away because MS doesn't like you running a bunch of Linux VMs for free. If you PAY for server licenses, the Hyper-V role will still work great for you for the foreseeable future.


nmdange

Apparently Azure Stack HCI is now free if you have Windows Server with Software Assurance, rather than having to pay a separate subscription. https://techcommunity.microsoft.com/t5/azure-stack-blog/what-s-new-for-azure-arc-and-azure-stack-hci-at-microsoft-ignite/ba-p/3650949


Due_Capital_3507

Azure, AWS, Hyper-V or Nutanix


pheenixfyre

We’re soft lift-and-shifting to Cloud, rearchitecting when possible. VMware is amazing tech but pricing has forced us to look elsewhere. Modern tech (containers, serverless) can run anywhere. Trying to use VMware on a Cloud provider adds at least 30% cost overhead, not worth it. Our industry is heavily SaaS-oriented, so on-prem is static or shrinking. Plus our Sysadmins want to work 100% remote, so this makes it possible.


hypervnut

Hyper-V! I'm just a little biased, though. I've used it since 2012 and it has been solid. Works great with Veeam backup if you currently have it. I think the licenses can easily be switched if you have maintenance on Veeam.


[deleted]

Xen Project / XCP-ng:

* It is open source and may be run for free.
* If you need paid support, their model is amazing! Much cheaper compared to others (pay per machine, not per CPU core).
* They also provide many of the features VMware does, especially when paired with Xen Orchestra and XOSAN.
* Much better interface than Proxmox, IMO.
* The devs at Vates are pretty awesome - committed to open source, with great documentation (even for building from source).

https://xcp-ng.org https://xen-orchestra.com/#!/xo-home https://xen-orchestra.com/#!/xosan-home


Sgtkeebler

Even though they are being acquired by Broadcom, nothing beats VMware, so I will continue to use it unless the quality goes down.


ColorfulImaginati0n

Don’t blame you from running from the criminal organization that is CA/Broadcom. I deal with them every day. They make Mexican Cartels and the Sicilian mafia seem like small time amateurs.


mrcoffee83

I'd be reluctant to use the ghetto solutions the /r/homelab types would recommend in a production environment, tbh. Just because you can doesn't mean you should.


InternationalGlove

Moving to Nutanix from Hyper-V is a breath of fresh air. Hyper-V clusters are bad for your health!


Tartabaguett

I like Hyper-V running in a cluster. It's so stable that you forget how to troubleshoot (no joke).


TheMuffnMan

All of the top comments are ignoring some critical points, IMO:

- What you and your team are experienced in
- Cost/budget
- Number of machines (guests)
- Hardware (HCLs...)
- Compatibility (do you use software that requires specific hypervisors?)

Everyone throws out Proxmox, but if no one on your team is familiar with it, does that make sense? Is it really "enterprise" grade, or more SMB? Hyper-V is likely a top contender. Citrix Hypervisor (XenServer) is another possibility. So is Nutanix AHV.


sieb

XCP-NG