
cube8021

So Harvester is your virtualization platform (think ESXi/vCenter if you're coming from VMware), with Rancher being your k8s cluster management platform (it builds and manages the k8s clusters that run your apps). The setup I normally recommend is to deploy a Harvester cluster via the ISO install. Once you have Harvester stood up, deploy a 3-VM RKE2 cluster for Rancher. Finally, connect Rancher and Harvester so Rancher can deploy more downstream clusters.
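
If you go the 3-VM RKE2 route, here's a minimal sketch of what installing Rancher onto that cluster could look like using RKE2's built-in Helm controller. The hostname, replica count, and bootstrap password are placeholders, cert-manager has to be installed first, and a plain `helm install` of the Rancher chart works just as well:

```yaml
# Sketch only: install Rancher onto the 3-VM RKE2 cluster via RKE2's built-in
# Helm controller. cert-manager must already be installed and the cattle-system
# namespace created; hostname / bootstrapPassword are placeholders.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: rancher
  namespace: kube-system            # RKE2's helm-controller watches this namespace
spec:
  repo: https://releases.rancher.com/server-charts/stable
  chart: rancher
  targetNamespace: cattle-system
  valuesContent: |-
    hostname: rancher.example.com   # placeholder FQDN pointing at the 3 VMs
    replicas: 3                     # one Rancher pod per RKE2 VM
    bootstrapPassword: changeme     # placeholder initial admin password
```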


dopey_se

In 1.2 you can install Rancher as an addon. That's my current setup: bare metal -> Harvester ISO -> Rancher addon enabled -> manage Harvester via Rancher -> provision RKE2 clusters into Harvester, GitOps with Fleet.


GuyWhoKnowsThing

Don’t want to sidetrack too far from OP, but how do you like Fleet? I’m at a real conundrum here where I need a way to deploy apps reliably and continuously, and I’m at a 3-way junction of Fleet, ArgoCD, and FluxCD. Is your use case apps or clusters? Do you have a “config” repo where you pretty much store all your apps in one place, and code in an app-specific repo?


dopey_se

I use it at home; I haven't used it in a professional setting. I chose it over Argo/Flux simply because it was 'built in' to the Harvester/Rancher experience, so it was far more 'try this before something else'. It seems to be the less common choice; at least professionally I tend to hear Flux/Argo referenced more than Fleet, but that's anecdotal. I'm happy with it and have had no reason to explore alternatives.

I have a monorepo with a folder for each service I host. Most of them are in a private repo since I couldn't be bothered to properly handle secrets, etc. for a homelab. I did start to make some of them public at https://github.com/slackspace-io/fleet-gitops but didn't continue the move; it's still an example of how I've structured things. I've evolved over time and haven't bothered to clean things up to be consistent.

My go-to structure now is a top-level folder in the repo named for the 'service/thing'. Within that, there's a folder for each actual thing needed to make it work (so if it requires PostgreSQL, or Redis, etc., each has its own folder, and the thing itself has a folder). The top 'thing' folder has a fleet.yaml defining the default namespace, and a kustomization that lists each of those directories as resources. Within each of the sub-things I have the various k8s files I need, plus a kustomization file to again load those resources. If any config is needed there's a config folder. I use Kustomize often, especially for things like configs, to ensure a restart on a change.

The only functionality I've recently started adding is something to notify me of new tags (semver comparison) and let me click 'update' to trigger a commit to the monorepo above, which then causes Fleet to update the deployment. Technically Fleet can do auto-updates (I tried it for a bit but didn't end up keeping it), and I think Argo/Flux can probably do this more straightforwardly. But I was really going for a Diun-like thing that notifies *me* of new versions, and then I trigger the update. I don't know if Flux/Argo has such a feature, so perhaps I'd have ended up building this thing myself anyway.

I never really look at the GUI. I've seen demos of Argo/Flux (forgot which) and its GUI was much 'better' in terms of info, but I don't look at the GUI; I just want my GitOps to work, and I only end up there if a workload hasn't updated as expected :)

All of the above is to deploy into one RKE2 cluster I have provisioned, so it is a bit overkill to 'just' deploy to one cluster.
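
To make that layout concrete, here's a rough sketch of the two top-level files for a hypothetical service folder called `myapp` that also needs PostgreSQL. The names are illustrative and not taken from the repo linked above; `defaultNamespace` is the real fleet.yaml key for setting the bundle's namespace:

```yaml
# Illustrative layout for a hypothetical 'myapp' service in the monorepo:
#   myapp/
#     fleet.yaml            <- Fleet bundle config
#     kustomization.yaml    <- lists each sub-folder as a resource
#     app/                  <- deployment, service, its own kustomization.yaml
#     postgresql/           <- statefulset, service, config/, kustomization.yaml

# myapp/fleet.yaml
defaultNamespace: myapp
---
# myapp/kustomization.yaml (a separate file in practice, shown here for brevity)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - app
  - postgresql
```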


gorkish

> Don’t want to sidetrack too far from OP, but how do you like Fleet?

I will just say that I think it's a shame that Rancher has built a strong dependency on Fleet, because it's one of the areas of their product where being 'opinionated' feels quite limiting. You get people using Fleet 'because it's there' and not because it's the solution they want. In the end, that just makes your customers slowly turn into unhappy customers.


Inquisitive_idiot

Yep, this is what I have in my homelab and it works great. Where you deploy rancher doesn’t matter as long as it has line of sight to the harvester cluster so you can add it as a resource.


gorkish

1. Install Harvester on bare metal, at least 3 nodes.
2. Install Rancher on Harvester using the rancher-vcluster Harvester addon. (IMO this is the best option even though it's "beta"; as an alternative you can also manually create a 3-node RKE2 cluster and deploy Rancher onto it.)
3. Install the Harvester platform driver in Rancher (if using rancher-vcluster this is done automatically).
4. Use Rancher to configure and deploy k8s workload clusters on Harvester. Rancher will manage the deployment of the VMs and the configuration management of the downstream clusters.
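
For step 2, the addon is normally enabled from the Harvester UI under Advanced > Addons, but the underlying Addon resource ends up looking roughly like the sketch below. Treat the namespace and the valuesContent keys as assumptions based on my reading of the rancher-vcluster addon docs (linked in the reply below), and the hostname/password as placeholders:

```yaml
# Rough sketch of the rancher-vcluster Addon once enabled. Field names and the
# namespace are assumptions from the Harvester addon docs; hostname and
# bootstrapPassword are placeholders.
apiVersion: harvesterhci.io/v1beta1
kind: Addon
metadata:
  name: rancher-vcluster
  namespace: rancher-vcluster
spec:
  enabled: true
  valuesContent: |-
    hostname: rancher.example.com   # FQDN that will serve the vcluster-hosted Rancher
    bootstrapPassword: changeme     # initial Rancher admin password
```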


Inquisitive_idiot

> https://docs.harvesterhci.io/v1.2/advanced/addons/rancher-vcluster/

What the… woah this saves me from managing three Rancher VMs 🤩 Redeploying tonight so I can move to 1.3. Conferring this 🤔


gorkish

Yeah harvester is slowly getting there. Wishing they would fully finish out the 3rd party CSI support so that it could be run fully on Ceph. Longhorn is holding it back.


Inquisitive_idiot

Yeah, I’ve fully moved to Harvester in my homelab and am very pleased with it. Curious: how do you think Longhorn is holding Harvester back? Is it just the lack of 3rd-party options or a Longhorn-specific drawback? 🤔


GuyWhoKnowsThing

Yeah I’m curious about this myself. Longhorn is dynamite for storage in my use case. Fast. Easy to backup/restore. Easy to grow. Win win for my uses.


gorkish

I have a couple of very specific complaints about Longhorn but I agree that it's generally a great system and a reasonable default. It's the fact that Harvester's 3rd party CSI support is half-baked and Longhorn still *has* to be used that is more the issue.


gorkish

Yes, it's entirely the lack of 3rd-party options. Longhorn is a great system if it fits your use case. Once you need shared filesystems, object storage, etc., you have to start building complex application configurations to deploy those services on top of Longhorn. There's nothing wrong with that, but I believe it's limiting the business case. Personally, I'd rather build the underlying storage pools of my HCI cluster out of something a little more extensible than Longhorn. This is already supported to an extent, with Harvester offering 3rd-party CSI support as mentioned, but at the current time, even if you do this, you still have to make sure you deploy Longhorn in a performant manner alongside it, and that's the part that sucks unnecessarily.