I finally got around to whipping up the Windows 7 VM to round out and more or less complete the Engineering Walden Project’s Technical Reference Model. Hooray!
Because I had already configured my KVM/QEMU host with the proper pci-stub entries, which keep the Fedora 23 kernel’s drivers (e.g. nouveau) from claiming my GPU and the other devices intended for passthrough, it was a simple matter of creating the VM and performing the installation as described in that same post, without having to redo the IOMMU/pci-stub configuration.
So once you have the pci-stub arrangement set up on your KVM/QEMU platform, creating additional guest domains that can be started up alternately (not simultaneously, since they share the same GPU and other PCI hardware) is a piece of cake.
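As a reminder of what that host-side piece looks like, here is a sketch of the kernel command line arrangement. The vendor:device IDs below are examples only (a GTX 960 and its HDMI audio function); substitute whatever lspci -nn reports for your own passthrough devices:

```shell
# /etc/default/grub on the Fedora host. The IDs here are examples;
# substitute the vendor:device pairs that lspci -nn reports for your
# own hardware. intel_iommu=on enables the IOMMU, and pci-stub.ids
# claims the listed devices at boot so nouveau and friends never bind
# to them.
GRUB_CMDLINE_LINUX="rhgb quiet intel_iommu=on pci-stub.ids=10de:1401,10de:0fba"

# Regenerate the GRUB configuration afterwards and reboot:
grub2-mkconfig -o /boot/grub2/grub.cfg
```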
Configuring an NVidia GPU for a Windows 7 Guest Domain
There was no trick to installing the Windows guest domain; simply following the same instructions used for Fedora at the link above works just fine. I have also confirmed that you don’t need to get the graphical device configuration exactly right on the first try: if your GPU should fail for whatever reason, you can easily re-add the QXL video device and a VNC display and access the Windows system that way.
That said, you do need the QXL/VNC arrangement in place to access the Windows VM’s graphical interface and perform the initial NVidia driver installation. (You could perhaps use RDP directly to the VM instead, but that is less reliable, since you can’t watch the system while it boots or reach it if the RDP service fails for some reason.) I could not get my Windows 7 VM to display using my GeForce GTX 960 until the NVidia drivers were installed; the monitor attached to the card (via HDMI; perhaps it would have been different had I used DVI) simply showed a black screen with a static cursor in the upper left (not even blinking!).
So, use the QXL/VNC arrangement to boot the system and, with the GPU recognized as a device within Windows, simply perform the NVidia driver installation. Everything should go smoothly, and when you reboot, the Windows guest domain will show its Windows splash screen on the QXL/VNC output, but this will switch immediately to the NVidia GPU and its monitor when Windows boots and brings you to the login screen.
You can then remove the QXL/VNC arrangement. As I said, I have found that re-adding it in the event of an issue is no big deal.
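For reference, re-adding that fallback display later is just a matter of putting something like the following back into the devices section of the domain XML (via virsh edit). This is a sketch; the exact attributes Virt-Manager generates may differ slightly:

```xml
<!-- A VNC display plus a QXL video device, used as an emergency
     fallback console if the passed-through GPU misbehaves. -->
<graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'/>
<video>
  <model type='qxl'/>
</video>
```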
Installing the VirtIO Drivers for Windows
For New Devices
You can obtain some sweet VirtIO drivers for your Windows guest domain from the Fedora Project. They come in ISO format, and you simply load the ISO into the VM using KVM/QEMU’s CD-ROM emulation. Once available in your Windows VM, you can add KVM/QEMU VirtIO devices (such as NICs or disks) to the Windows guest domain and then head into Device Manager within Windows, scan for hardware (if necessary), and then right-click on the new device and install it. Just specify the directory on the CD-ROM which contains the appropriate driver file and away you go.
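The ISO attachment itself is an ordinary emulated CD-ROM in the domain XML, roughly like this (the source path is an example location):

```xml
<!-- Emulated CD-ROM carrying the virtio-win driver ISO from the
     Fedora Project; the source path is an example, so point it at
     wherever you saved the image. -->
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/virtio-win.iso'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>
```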
Be careful to select the appropriate sub-directory (e.g. Windows 7, Windows 8, Windows Server 2008); if you do not, Windows will likely pick an incorrect driver version automatically, and you will receive the following error at installation:
“Windows cannot load the device driver for this hardware. The driver may be corrupted or missing (Code 39)”
If that message is received, you can be pretty sure you’ve selected the wrong VirtIO driver version. Just pick the right one to work around the issue.
Converting Your Primary Hard Disk to VirtIO
If, like me, you didn’t perform the initial Windows guest domain installation with a VirtIO disk (which is possible, so long as you have the driver ISO mounted during installation and select the driver from it so the installer can detect your hard disk), you can still convert your primary disk to VirtIO afterwards.
The process is simple: just add a tiny QEMU/KVM VirtIO disk to the Windows guest domain and install the VirtIO driver from the ISO as described above. Once this is done, you can shut your guest domain down, remove that new disk (only used to install the driver; keep it if you like), and change the guest domain’s primary disk from IDE (or whatever it is) to VirtIO. The system should boot properly and make use of the VirtIO driver for the primary drive.
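In the domain XML, the conversion boils down to changing the target bus on the primary disk. A sketch, with example file and device names:

```xml
<!-- Before: the primary disk attached via emulated IDE. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/win7.qcow2'/>
  <target dev='hda' bus='ide'/>
</disk>

<!-- After: the same image on the VirtIO bus, once the driver is in. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/win7.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```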
Troubleshooting CPU Issues
I noticed that my sound output will occasionally become corrupt and gross, cracking and popping most unappealingly. At first, I thought the NVidia GPU and its driver were to blame, since I am using the NVidia HDMI audio output, but after interrogating the system’s performance further, I found that I am experiencing an unacceptably high level of System Interrupt activity (visible in Resource Monitor in Windows 7), at around 4-5% of my CPU; this value should normally be closer to 0.2%.
Using the very cool LatencyMon utility, I was able to prove that, yes, the system is having fundamental processing trouble that is preventing the sound from working properly rather than just encountering some sort of driver issue.
The main issue for me appears to be USBPORT.SYS. I pass an entire USB root hub from my KVM/QEMU host into the guest so that I can take advantage of simple plug-and-play USB behavior within the guest domain (rather than having to assign individual USB devices from within Virt-Manager or what-have-you). It would be unfortunate to lose that convenience, but if I can’t figure out how to resolve the matter, I may have to consider switching over to standard per-device KVM/QEMU USB passthrough. Flaky sound and skippy performance are no good for games and media.
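For context, here is roughly what the two approaches look like in the domain XML; the PCI address and the USB vendor/product IDs are examples only:

```xml
<!-- What I do now: pass through the whole USB controller as a PCI
     device, so anything plugged into its ports just appears in the
     guest. The address values are examples from lspci. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x1d' function='0x0'/>
  </source>
</hostdev>

<!-- The fallback: assign a single USB device by vendor/product ID
     (example IDs shown), which gives up plug-and-play convenience. -->
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x046d'/>
    <product id='0xc52b'/>
  </source>
</hostdev>
```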
Interestingly, I’ve noticed that the guest domain’s CPU configuration has an impact on its resilience in the face of this issue. Configuring the VM with two vCPUs presented as separate sockets allows the system to function normally about 98% of the time, whereas presenting the same two vCPUs as two cores in one socket means seriously problematic performance. It looks like all the system interrupts are handled by CPU0 when the system believes it has two separately socketed CPUs, which leaves CPU1 free to handle other things.
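In libvirt terms, the difference is just the CPU topology element in the domain XML. A sketch of the two layouts:

```xml
<!-- The layout that behaves well for me: two vCPUs presented as two
     separate single-core sockets, leaving CPU1 free of interrupt
     handling. -->
<vcpu placement='static'>2</vcpu>
<cpu>
  <topology sockets='2' cores='1' threads='1'/>
</cpu>

<!-- The layout that stutters would instead read:
     <topology sockets='1' cores='2' threads='1'/> -->
```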
Sometimes, however, my system runs very smoothly, and an investigation of System Interrupts shows that they’re down in the proper 0.2% range of CPU utilization. I am connecting a single USB 2.0 hub through a 30-foot USB cable (which itself has a midpoint hub to make that length possible), so I might be pushing the Windows USB driver a bit too hard. My Fedora 24 VM, however, has no trouble handling the same arrangement.
Conclusion: Windows 7 Guest Domain with a GeForce GTX 960 = w00t!
So, that’s just some food for thought, but the point is: a Windows 7 guest domain with an IOMMU-enabled PCI passthrough of the NVidia GeForce GTX 960 GPU works, and when I’m not facing the CPU issues described above, it’s absolutely wonderful; games run smoothly and without issue, and performance seems to sit at that theoretical 98% mark relative to non-virtualized hardware capacity.
I can now switch between a Fedora 24 VM and a Windows 7 VM as quickly as a guest domain shuts down and another boots up. I can make full use of a Windows or a GNU/Linux environment from the same hardware platform, and with shared centralized storage in a FreeBSD-hosted ZFS zpool, I can share data between the systems immediately and with ease. Should one of the systems crash, the data remains secure and resilient, and I can simply rebuild the VM and mount back in the storage.