As 2020 draws ever closer, so too does the death of Windows 7. Microsoft will discontinue support in early January, leaving those still using the decade-old OS with no security updates.

While most have already upgraded to Windows 10 or will do so soon, the Linux community has been welcoming many new users who’ve decided they want to give up on Microsoft and its OS entirely (I wonder why?!).

Seeing people ditch Windows rather than move to Windows 10 reminded me of when I moved from Windows to Linux myself. Back in 2016 I played video games far more often than I do now, so I ideally wanted to keep access to a machine running Windows1. I made Linux my primary OS and dual booted Windows, but dual booting quickly became a pain, even with an SSD. The solution to my problems came in the form of virtual machines. I hadn’t even considered VMs since performance is crucial for this use case, but thanks to PCI passthrough, a VM can now achieve near-native performance.

Please note that while this post was originally uploaded in November of 2018, it has since undergone a major rewrite.

Prerequisites

Three main things you’ll need:

  1. A Windows 10 ISO
  2. 2 GPUs (an iGPU is commonly used along with a standard dedicated GPU)
  3. Hardware that’s not ancient

As long as your hardware is from the last decade or so, you should be good in terms of compatibility. Check the Arch Wiki for more details if you’re in doubt.

Accessing your VM later

The point of this VM is for it to feel like a native Windows install. That’ll soon include passing in a mouse and keyboard to the VM, so people usually recommend a spare mouse and keyboard (one for the host OS and one for the VM, or the “guest” OS).

It’s possible not to have spares, but it means that once you’re in the Windows VM, you’ll need to shut it down each time you want to make any changes in the Linux host. I go into detail about how I do this here.

My setup

Part        Specification
OS          Parabola GNU/Linux-libre x86_64
Kernel      4.19.101-gnu-1-lts
Shell       zsh 5.7.1
Resolution  1920x1080, 1920x1080
WM          sway
CPU         Intel i7-8700K (12) @ 4.700GHz
GPU         NVIDIA GeForce GTX 980 Ti
GPU         Intel UHD Graphics 630
Memory      16GiB

There are a few things to note:

  • The two GPUs I’m using are my integrated GPU (Intel UHD Graphics 630) and the NVIDIA GeForce GTX 980 Ti. Note that you can’t use two identical cards with this method, since they share the same vendor and device IDs (a 980 Ti and a 960 are fine, but not two 960s)
  • Parabola Linux is basically just Arch Linux2. PCI passthrough will of course work on other distributions, but I’m only familiar with it on Arch and its derivatives.
  • LTS kernel. This isn’t a requirement but I recommend it; I’ve personally found general performance increases and stability improvements with it.

I’m writing everything for this specific setup. Pay attention, because parts of this guide will change depending on your own hardware.

Getting the host ready

Setting up IOMMU

First, we need to enable virtualisation in the BIOS. Look for an option called “virtualisation technology”, or more specifically “Intel VT-d” or “AMD-Vi”, depending on your CPU.

You also have to set the correct kernel parameters. For GRUB, the steps are as follows:

  • Edit /etc/default/grub as root
  • Now, edit the line GRUB_CMDLINE_LINUX_DEFAULT= (an example of the finished line follows this list):
    • Append intel_iommu=on (for Intel CPUs)
    • Append amd_iommu=on (for AMD CPUs)
    • Append iommu=pt (for both)
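
As a rough sketch, on an Intel system the finished line might look something like this (the quiet is just a stand-in for whatever parameters you already have):

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"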

Generate the GRUB configuration file:

# grub-mkconfig -o /boot/grub/grub.cfg

To check that this has worked, run:

$ dmesg | grep -i -e DMAR -e IOMMU

You’re looking for something about IOMMU being enabled. For example, the top of the output when I run it looks like this:

[    0.008015] ACPI: DMAR 0x00000000A2ADACD8 0000B8 (v01 INTEL  KBL      00000001 INTL 00000001)
[    0.139121] DMAR: IOMMU enabled
[    0.180804] DMAR: Host address width 39
[    0.180805] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.180808] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e

Now, run the following script (it can be downloaded from here):

#!/bin/bash
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in $g/devices/*; do
        echo -e "\t$(lspci -nns ${d##*/})"
    done;
done;

The output lists each IOMMU group and the devices it contains; the top of mine looks like this:

IOMMU Group 0:
  00:00.0 Host bridge [0600]: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers [8086:3ec2] (rev 07)
IOMMU Group 1:
  00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 07)
  01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM200 [GeForce GTX 980 Ti] [10de:17c8] (rev a1)
  01:00.1 Audio device [0403]: NVIDIA Corporation GM200 High Definition Audio [10de:0fb0] (rev a1)

Let’s turn to the Arch Wiki for a definition of “IOMMU group”:

An IOMMU group is the smallest set of physical devices that can be passed to a virtual machine.

We’re looking to pass through the IOMMU group that the GPU is part of, so all we care about is IOMMU Group 1: it contains the GPU and its audio controller3.

Note the IDs at the end of each device’s listing, as we’re going to need them later (in my case, the IDs 10de:17c8 and 10de:0fb0).

Isolating the GPU

We want to assign the GPU to the VM (i.e. “pass it through” to the VM). The Linux host still needs to see the GPU so it can hand it over to the guest OS, but we also don’t want the host to actually use the GPU, so that it’s free to be passed into the VM on demand.

Whereas with most drivers you can do this on the fly, with GPU drivers you unfortunately usually cannot. Instead, we’re going to bind a placeholder (stub) driver to the GPU to stop other drivers on the host from attempting to claim it. There are multiple ways of doing this, but what follows is usually what’s recommended.

We’re going to use a stub driver called vfio-pci, which has been included in Linux since version 4.1 but still needs to be manually loaded.

To do this, edit /etc/modprobe.d/vfio.conf as root and add the following, replacing with your relevant IDs from above:

options vfio-pci ids=10de:17c8,10de:0fb0

To be sure that changes in /etc/modprobe.d/ are registered and applied, open /etc/mkinitcpio.conf as root and check that modconf is listed in HOOKS:

HOOKS=(... modconf ...)

Next, to ensure that vfio-pci loads before other graphics drivers, edit /etc/mkinitcpio.conf as root and add the following, making sure to retain the order:

MODULES=(... vfio_pci vfio vfio_iommu_type1 vfio_virqfd ...)

Finally, regenerate the initramfs:

# mkinitcpio -p linux
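
If you’re on the LTS kernel like I am, regenerate the matching preset instead (mkinitcpio -p linux-lts). To double-check that the vfio modules actually made it into the image, you can list its contents with lsinitcpio, adjusting the image path to match your kernel:

$ lsinitcpio /boot/initramfs-linux.img | grep vfio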

Now it’s time to reboot to apply these changes. Once you do, the host won’t be able to use the passed-through GPU anymore, so it’s important that you’ve set up your other GPU for the host to use instead (in the case of the iGPU I didn’t have to do anything).

Let’s check that vfio-pci has loaded correctly and that it has bound to the correct devices.
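
Filtering the kernel log is a quick way to do this:

$ dmesg | grep -i vfio

The output below is what I get; the important lines are the vfio_pci: add entries, which should list your device IDs.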

[    1.265026] VFIO - User Level meta-driver version: 0.3
[    1.267851] vfio-pci 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
[    1.284271] vfio_pci: add [10de:17c8[ffff:ffff]] class 0x000000/00000000
[    1.300969] vfio_pci: add [10de:0fb0[ffff:ffff]] class 0x000000/00000000
[    8.045659] vfio-pci 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none

Setting up the Guest

Configuring libvirt

Install qemu, libvirt, ovmf, and virt-manager:

# pacman -S --needed qemu libvirt ovmf virt-manager

Start and enable libvirtd.service:

# systemctl enable --now libvirtd
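
As a quick sanity check that the daemon is up, listing defined VMs (as root, so we talk to the system libvirt instance) should print an empty table rather than an error:

# virsh list --all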

Setting up and installing Windows 10 (the guest OS)

If you’re using virt-manager, add your user to the libvirt group so that it can authenticate with libvirt (replacing sseneca with your username):

# usermod -a -G libvirt sseneca
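
Group membership only takes effect on a new login, so log out and back in before continuing; once you have, groups should list libvirt:

$ groups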

To avoid any network-related issues in the following steps, I had to install two extra packages:

# pacman -S ebtables dnsmasq

And finally, restart libvirtd:

# systemctl restart libvirtd

Now start up virt-manager:

  • Press “Create new virtual machine”
  • Choose “Local install media (ISO image or CDROM)”
  • Browse and find your Windows 10 ISO
  • Allocate your Memory and CPUs. This is highly dependent on your build.
    • I have 16GB RAM, so I gave the VM 12GB (12288 MiB)
    • For CPU specification, during the install I just enabled them all. I changed this later and recommend you do too – you can read about the changes here.
  • Choose how much space you’re giving to your VM. I settled on giving the VM around 40GB, but the absolute minimum I’d recommend is around 30GB.
  • It’s essential to select “Customise configuration before install”!

Customising configuration before install

  • Change chipset to Q35
  • Change firmware to UEFI
  • Under CPUs, uncheck Copy host CPU configuration, and manually type in host-passthrough.
  • Change the default IDE disk to a SCSI disk
  • Add a Controller for SCSI drives of the “VirtIO SCSI” model
  • In the “Boot Options” menu, make it boot from the disk with the Windows 10 ISO
  • Under NIC, change Device model to virtio.
  • Press Add Hardware and then Storage.
    • Select or create custom storage and find the VirtIO driver ISO (see the note just after this list if you don’t have it yet). Change the Device type to CDROM device and set the Bus type to SATA.
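
If you don’t already have the VirtIO driver ISO, it’s published by the Fedora project. As a sketch, something like the following fetched the stable build at the time of writing, though the URL may have moved since:

$ wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso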

Once you’re done with all of that, you can go ahead and press Begin Installation. Follow the generic installation until you get to where you can choose between Upgrade and Custom, and choose Custom. Windows will complain that it can’t find somewhere to install to, but that’s fine! Press the Load driver button, Browse, and then find the CD Drive called virtio-win.

Scroll down to viostor and open w10. You’ll probably want to select amd64 here for 64-bit processors.

After you press Next, Windows should be able to see the virtual disk. Install to that disk and carry on with the rest of the generic install. After your install has completed, shut down the VM.

Attaching the PCI devices

It’s time to pass in the GPU. Press Add Hardware, select PCI Host Device and find your GPU (for me, that means adding my card and the related NVIDIA High Definition Audio device). Now we need to pass in the keyboard and mouse. Go to Add Hardware and select your keyboard and mouse under USB Host Device.

Remove unnecessary hardware, such as the USB Redirectors, Video QXL, Console, Display Spice and Tablet, and boot up the VM (if you’re passing in your main keyboard and mouse, you’ll no longer be able to control the host until the VM is turned off!).

Finalising

Drivers are all you have to deal with now. They’re located on the VirtIO CD, which should be accessible from within Windows.

First, let’s get Internet access within the VM. Open up Device Manager and find the Ethernet Controller. Right click on it and install the driver located in the NetKVM folder (we want the w10 driver for Windows 10).

NVIDIA users must follow an extra step (see “NVIDIA’s Error 43 circumvention” below), since NVIDIA make their driver error out when it detects it’s running within a VM.

Let Windows Update do what it wants, then turn off the VM. It’s time to set up audio. There are multiple ways of getting the VM’s audio sent to the host; the method I’m going to explain requires PulseAudio. There are methods that require only ALSA, but I won’t detail them in this post.

  • Edit the user field in /etc/libvirt/qemu.conf (uncomment it and set it to your username):
user = "sseneca"
  • Now open the libvirt configuration for your VM with virsh edit win10. Modify the first line (<domain type='kvm'>) to:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  • At the bottom of the file, between the closing </devices> and </domain> tags, add the following (change 1000 if your uid is different; you can find it by running the id command):
</devices>
    <qemu:commandline>
        <qemu:env name='QEMU_AUDIO_DRV' value='pa'/>
        <qemu:env name='QEMU_PA_SERVER' value='/run/user/1000/pulse/native'/>
    </qemu:commandline>
</domain>
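
To confirm the block was actually saved (libvirt won’t accept it without the xmlns declaration on the first line), you can dump the domain XML and look for it:

$ sudo virsh dumpxml win10 | grep -A 3 'qemu:commandline'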

Restart libvirtd.service:

# systemctl restart libvirtd

And do the same for pulseaudio.service:

$ systemctl --user restart pulseaudio.service

Conclusion

If you’re here, you should have a Windows 10 VM set up with near-native performance! Install your games and have fun. There is more customisation you can do, so if you’re not satisfied yet, feel free to read on.

Miscellaneous

NVIDIA’s Error 43 circumvention

NVIDIA don’t like you doing this, so even once you’ve installed their drivers on Windows, you won’t be able to use the graphics card (you’ll get an Error 43). Fortunately, the solution is incredibly simple. Add this to your libvirt domain configuration:

<features>
  <hyperv>
    ...
    <vendor_id state='on' value='1234567890ab'/>
    ...
  </hyperv>
  ...
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>

Switching between Host and Guest OS

When I decide I want to use Windows, I open virt-manager and start up the VM. Once it has loaded in the background, I simply run an attach script which passes in my keyboard and mouse, and switch my monitors to the connections used by the VM.

#!/bin/bash
# Only set $vm if the win10 VM is actually running
if [[ $(virsh list | grep win10) != "" ]]; then
    vm=win10
fi

sudo virsh attach-device $vm $(dirname $0)/mouse.xml
sudo virsh attach-device $vm $(dirname $0)/keyboard.xml

keyboard.xml and mouse.xml describe the USB devices to attach; mine look like this (the first is my keyboard, the second my mouse):

<hostdev mode='subsystem' type='usb' managed='no'>
  <source>
    <vendor id='0x2516'/>
    <product id='0x003b'/>
  </source>
</hostdev>
<hostdev mode='subsystem' type='usb' managed='no'>
  <source>
    <vendor id='0x1af3'/>
    <product id='0x0001'/>
  </source>
</hostdev>

Within Windows, if I want to access Linux, I can ssh into it through PowerShell. I can then detach from there if I want, using a detach script similar to the attach script above (it’s also on my GitLab); a rough sketch follows.
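
This isn’t the exact script from my GitLab, but a minimal sketch of the idea looks like this, assuming the same win10 VM name and the same mouse.xml and keyboard.xml files as above:

#!/bin/bash
# Detach the keyboard and mouse from the running win10 VM,
# handing them back to the Linux host
if [[ $(virsh list | grep win10) != "" ]]; then
    vm=win10
fi

sudo virsh detach-device $vm $(dirname $0)/mouse.xml
sudo virsh detach-device $vm $(dirname $0)/keyboard.xml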

You’ll want to change win10 to whatever your VM is named. Also, note the vendor and product IDs above. To find these, install usbutils (on Arch) to get the lsusb command. This is what lsusb displays for me:

Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 005: ID 0b05:1872 ASUSTek Computer, Inc. AURA LED Controller
Bus 001 Device 004: ID 0b05:185c ASUSTek Computer, Inc. Bluetooth Radio
Bus 001 Device 003: ID 1af3:0001 Kingsis Peripherals ZOWIE Gaming mouse
Bus 001 Device 002: ID 2516:003b Cooler Master Co., Ltd. MasterKeys Pro L
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Note how you have to prefix the IDs with 0x in the XML.

MSI (Message Signaled Interrupts)

Some people find that enabling MSI lowers audio latency compared to the default line-based interrupts. To check whether MSI is supported, run this command:

# lspci -vs 01:00.0 | grep 'MSI:'

Substitute 01:00.0 with the appropriate value for your graphics card.

Your output should look something like this:

Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+

A - after Enable means MSI is supported but not in use by the VM, while a + means the VM is using it.

When in the VM, use MSI Utility to set this up. There are alternatives, but they’re pretty complex. This is the guide I used before the GUI tool was created – it also contains some more information on the topic if you’re interested.

CPU Host Performance

I hadn’t initially looked into any of the more involved methods of CPU pinning, but after a while I started to notice fairly poor CPU performance within the VM. In short, for my 6-core, 12-thread i7-8700K, the following additions to the VM’s XML gave me dramatically better performance:

<vcpu placement='static'>8</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='8'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='9'/>
  <vcpupin vcpu='4' cpuset='4'/>
  <vcpupin vcpu='5' cpuset='10'/>
  <vcpupin vcpu='6' cpuset='5'/>
  <vcpupin vcpu='7' cpuset='11'/>
  <emulatorpin cpuset='0-1,6-7'/>
</cputune>
...
</features>
<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' cores='4' threads='2'/>
</cpu>

There is an explanation of what this does on the Arch Wiki.
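
The pinning above pairs hyperthread siblings (2 with 8, 3 with 9, and so on) and leaves cores 0-1 and their siblings 6-7 for the host and the emulator. If you want to adapt it to your own machine, lscpu can show which logical CPUs share a physical core:

$ lscpu -e=CPU,CORE,SOCKET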

Cable Management

I use two monitors for my PC: one is a 144Hz monitor and the other is an ordinary 60Hz monitor. To get both to work in the guest OS and the host OS, I have to be particular about how I connect them up (since I also want to leverage the 144Hz refresh rate of my main monitor in both Linux and Windows). This is how I do it:

  • Connected to my 144Hz monitor, I use Display Port and DVI-D, which both support 144Hz
  • Connected to my 60Hz monitor, I use HDMI and DVI-D

Because of the space on my motherboard and graphics card, I’m using a (rather bulky) DVI-D cable to connect my 60Hz monitor to the motherboard. The HDMI cable from my 60Hz monitor goes to the graphics card so that it can be used while I’m in Windows. For my 144Hz monitor, the DVI-D cable goes to the graphics card and the DisplayPort cable into the motherboard. This gives me 144Hz no matter which connection I’m using.


  1. I am of course aware of emulators and compatibility layers such as Wine, but would rather run a native Windows install because of the performance gains (and the lack of support for some titles, often due to dodgy anti-cheat software). It’s also worth mentioning that Valve’s Proton didn’t actually exist in 2016. ↩︎

  2. btw, I use Arch ↩︎

  3. The Intel Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (the PCI bridge at 00:01.0) is also in IOMMU Group 1 along with the GPU and its audio controller. This is because my GPU is connected to a processor-based PCIe slot, which does not support isolation properly, but that’s fine in this case. If you find other devices in the same group as your GPU, refer to the Arch Wiki. ↩︎