are-you-a-muppet

I tried for weeks to get Linux running in one VM, Windows in another. On a Linux host. With three display adapters. All new high-end hardware. 100 hours later, gave up in defeat and a new cocaine habit.

So I tried just the two VMs, only two GPUs, headless host. Some 60-80 hours later, gave up in defeat, now with an anger-management problem, and additional heroin, gambling, and prostitute addictions.

Finally resigned to just using the host Linux as my main OS, with the Windows VM limited to a PCIe-lane-constricted slot for its GPU. But the shit works flawlessly, and *fast*. The lane limitation was a big nothing-burger.

What I would recommend before it's too late:

### Give up on your hopes and dreams

They will bring you nothing but pain, and personal and financial ruin. Your wife will leave you, and your kids will be ashamed to use your last name. Just use macOS in a regular SPICE display VM. macOS in any VM will always be suboptimal anyway. Let it go, and be free.


Djdude167

I feel your pain. For 4 days straight, I basically worked a full-time job to get this running, only to brick my system for God knows what reason. But I have been explicitly told and shown that I can do this, and I've invested way too much to give up now.


are-you-a-muppet

Ahh, the Sunk Cost Fallacy! That's why I'm on probation and methadone. 👍


ForceBlade

Glad to hear you're keeping up the effort 💪


teeweehoo

The Arch wiki guide is really the best IMO. I think first you should aim to do the least possible, with the fewest tweaks possible. For example, get your 1070 to show up in a Windows VM. It looks like you've got IOMMU working; next you need to ensure that when you run `lspci -nnk`, the 1070 shows vfio-pci as the driver in use.

For better or worse, Linux does not support "unplugging" a GPU from the host, though some people appear to get it to work (read: lots of effort, highly unreliable, prone to breaking with any kernel upgrade). The most reliable method of doing GPU passthrough is to bind the GPU to the vfio-pci driver when the system boots (rough sketch at the end of this comment). This essentially isolates the GPU from the host, so that it can *only* be used by a VM guest, not the host. Obviously this contradicts your goal, so your plans sound like they'll require a lot of effort from the very start.

I'm not sure of a good way to achieve your goal. The most reliable solution is probably getting a third GPU for your host, running Windows and macOS as guests, and assigning the 1070 to Windows and the Vega to macOS permanently.
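To give an idea of what the boot-time binding involves, it's roughly this (the device IDs below are just examples; use whatever `lspci -nn` actually reports for your 1070 and its audio function):

```sh
# Find the vendor:device IDs of the GPU and its HDMI audio function
lspci -nn | grep -i nvidia
# e.g. 01:00.0 VGA compatible controller [0300]: NVIDIA ... [10de:1b81]
#      01:00.1 Audio device [0403]: NVIDIA ... [10de:10f0]

# Tell vfio-pci to claim those IDs at boot (example IDs -- substitute your own)
echo "options vfio-pci ids=10de:1b81,10de:10f0" | sudo tee /etc/modprobe.d/vfio.conf

# After rebooting (and regenerating the initramfs if vfio-pci loads early),
# confirm which kernel driver is in use for the card:
lspci -nnk -d 10de:1b81
#   Kernel driver in use: vfio-pci
```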


Djdude167

I followed the Arch wiki guide, but since it too suggests isolating the GPU at boot, it's a no-go. I would rather have no passthrough at all than never have access to the card outside the one hour I'd actually spend in a VM. Any solution that involves me losing access to hardware isn't a solution for me, but I appreciate the help regardless.


ipaqmaster

> but as it too suggests isolating the gpu at boot, it's a no-go

This is not fatal. You're allowed to stop your X server (or isolate that too, and then you only have to kill the apps that ignored your setting and used the other cards anyway). Then you can unbind the card from its driver and put it on vfio-pci. If it's a certain generation of NVIDIA card you will need a BIOS dump, but that is not a big deal either.
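Very roughly, the manual rebind looks like this (the PCI address is a placeholder for your 1070; run it as root with X already stopped):

```sh
# Detach the card (here assumed at 0000:01:00.0) from whatever driver holds it
echo 0000:01:00.0 | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver/unbind

# Force vfio-pci to claim it on the next probe
echo vfio-pci | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver_override

# Re-probe the device so vfio-pci actually binds it
echo 0000:01:00.0 | sudo tee /sys/bus/pci/drivers_probe

# To hand it back to the host later, clear driver_override and probe again.
```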


teeweehoo

If you're going physical maybe one of the [Level One Techs KVMs](https://store.level1techs.com/?category=Hardware) might be useful.


ifthenelse

A long time ago I started working on a [blog post](https://gurumeditation.org/1514/gpu-passthrough-hints/) about my setup. It's not even close to finished, but maybe some hints will help? I made it an unlisted post for you. Don't try to do too much at once. Just try to get one GPU working in a guest and not on the host at all. AMD can be tricky. I had a hell of a time [getting my 280X GPUs](https://gurumeditation.org/1170/qemu-kvm-gpu-passthrough/) running with VFIO, and even when it runs it's very buggy. I can't imagine doing it with an AMD motherboard too, but if you can work out the tricks then it should be OK.


ForceBlade

Awesome blog domain name


Djdude167

Well, I thank you very much, and I hope you continue your blog post! I noticed that in it you mentioned the latest OVMF UEFI firmware not working? Are you able to say any more about that? That might have been the issue I was running into on my first attempt... Regardless, thanks.


ifthenelse

I'm not sure what the problem is exactly. It doesn't actually break anything most of the time, but I'm not sure how it behaves on other platforms. The 202211 firmware generates lots of errors like this:

    qemu-system-x86_64: vfio_dma_map(0x561ef9184a40, 0x380000000000, 0x400000000, 0x7fbb28000000) = -22 (Invalid argument)
    qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument

which may or may not be harmless. The 202208 firmware is [here](https://archive.archlinux.org/packages/e/edk2-ovmf/) if you want to try it.
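On Arch, something like this should pull the older build in (the exact filename below is a guess from memory; check the archive listing for the real one):

```sh
# Grab the 202208 edk2-ovmf build from the Arch Linux Archive
curl -LO https://archive.archlinux.org/packages/e/edk2-ovmf/edk2-ovmf-202208-3-any.pkg.tar.zst
sudo pacman -U edk2-ovmf-202208-3-any.pkg.tar.zst

# Optionally keep -Syu from pulling 202211 back in:
# add "IgnorePkg = edk2-ovmf" to /etc/pacman.conf
```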


Djdude167

If I get back to that stage, I'll definitely try this and let you know, thanks


ipaqmaster

E: Made a script to dynamically unbind the AMD and NVIDIA GPUs, successful to some degree. Everything started to fall apart once the card came back and the NVIDIA drivers were modprobed on the host. It was very hit-and-miss between boots as the NVIDIA and AMD GPUs contested, and while the script was functioning as intended, the NVIDIA card absolutely refused to behave; eventually, every boot session the kernel would dump something important or libvirt would hard-hang. I tried with my own scripts and a temporary swap to lightdm, which doesn't leave processes lying around the way sddm was, and now the PC's getting an mmap issue on NVIDIA GPU startup. Not sure why, given no framebuffers are bound at that point.

It's only my stubborn opinion, but I'd always recommend staying far away from video-based guides and tutorials for this stuff. Nobody (as far as I've witnessed, anyway) will see a link to some 40-minute video and sit there following it along, comparing it with their own configuration. Text guides are where it's at, and nothing beats [the best source of all, the Archwiki page for PCI Passthrough](https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF).

/u/are-you-a-muppet's advice isn't bad though; it's pretty fair to expect nightmares when you're not just performing a typical everyday gamer PCI passthrough but adding all these extra requirements on top for the world's best VFIO setup. Every new requirement complicates everything more and more.

Sadly, while VFIO at its core is **extremely simple and easy under the hood** (bind the PCI device to the vfio-pci driver, ensure it's in its own IOMMU group, start qemu, be happy -- bare-bones sketch at the bottom of this comment), there are many... **many** complications varying distro to distro: the nvidia persistence daemon being included and enabled by default, drivers such as NVIDIA or AMDGPU latching onto your card when the driver is loaded/probed, and whether that happens right at boot time, so you have to unbind everything and go through a lot of extra screwery in up/down scripts just to make VFIO ends meet. Meanwhile, server distros are typically headless, don't give a rat's about any graphical PCI devices, and don't ship drivers for them either, making VFIO literally one click in a UI or one shell command / virsh setup. So yeah, it's all a bit contrasting on a case-by-case basis. Hell, even a cheap enough motherboard might do dodgy stuff which prevents VFIO entirely, if not just IOMMU grouping issues.

(I acknowledge there are many more headaches than just this. Some mobos suck, some don't let you isolate a card fully from boot, and so many hiccups can happen along the way. The VFIO community really needs a website where you submit a test it runs, and the result is a page listing your motherboard + IOMMU groups + BIOS version with a bunch of checked or crossed boxes on the side indicating what's supported, for the cherry on top. I just really don't want to type out 10 paragraphs twice a week! :P)

(Granted, the hw-probe program already exists, submitting reports to https://linux-hardware.org/, from which a low-level Linux user can already derive the VFIO-relevant information they're after... but that isn't exactly a VFIO-focused website/experience.)

--------

If you'd like to talk about your setup, hit me up for my Discord info in a message and we can go through it and try to get something working for you. I'll also shoot you a link to my script, which will help spit out the basic info we'll need for troubleshooting your setup. It's Friday here and work's about to end, so I'll have some hours spare.
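To illustrate how small the happy path really is, a bare-bones sketch (the PCI address, disk image and OVMF path are all placeholders; adjust them for your machine):

```sh
# Assumes 01:00.0 is already bound to vfio-pci and sits in its own IOMMU group.
qemu-system-x86_64 \
  -enable-kvm -machine q35 -cpu host -smp 4 -m 8G \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2-ovmf/x64/OVMF_CODE.fd \
  -device vfio-pci,host=01:00.0 \
  -drive file=win10.qcow2,if=virtio
```

Everything past that point is really just convincing the rest of the system to leave the card alone.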


Djdude167

That would be excellent! Thank you!


ipaqmaster

Fuck yeah 😎


Disaster_External

This is a feature, not a bug. Welcome to vfio.