PCI Passthrough guide

1 Introduction

There is a claim that PC gaming is superior to console gaming because of a lack of “exclusive” titles. However, this logic immediately fails when one considers the software needed to run games on PC. Most games are indeed exclusive – you have to run Windows.

Gaming on macOS is possible, but those without Apple hardware are forced to look at Linux, and gaming on Linux isn’t ideal. Whilst it’s true that compatibility layers such as Wine (and particularly Valve Corporation’s fork of Wine, Proton) have made giant leaps over the past few years in performance and compatibility, they still fail for games with particularly invasive anti-cheat software, for example. It’s also true that many games do run natively on Linux (all Source-based Valve games, for instance, as well as others such as Rocket League and BioShock Infinite). Despite this, the number of AAA games you’d miss out on by only using Linux is large (GTA V, Apex Legends, and PUBG are all widely popular games that can’t be played on Linux). Moreover, AAA Linux exclusives simply don’t exist.

So then, what’s the solution? Dual boot, I hear you say…? No. Not dual booting. It’s a pain! You have to restart your machine each time you want to use a program in the other OS, saving everything you were working on at the time – yuck! The experience is not seamless whatsoever.

Allow me to introduce a different solution – virtual machines. Using KVM, OVMF, and PCI passthrough, you can play games within Linux at near bare-metal performance. The following guide describes how I set this up on my machine; it can more or less be followed on similar machines to set up a similar VM.

Please note: You’ll still need a Windows ISO for this to work – you can find Windows LTSB/LTSC for free online. Also, a lot of this is lifted from the Arch Wiki entry for this topic, and there are many other guides you could follow, so one may ask…

2 Why should you read this guide?

One of my favourite content creators on YouTube uploaded a video recently and I thought many parallels could be drawn between his experiences with video game mapping tutorials and general guides/tutorials in technology, or specifically Linux. I’ve had my VM set up for a long while now and think it’s great, so I wanted to apply the KISS principle (“Keep it simple, stupid”) to tutorials on getting it set up.

The upshot of this is straightforward: I don’t go into much detail here – this is pretty bare bones. If you need detail, check the Arch Wiki.

2.1 My setup

First, a quick look at my specs:

Part Specification
OS Arch Linux x86_64
Kernel 4.19.49-1-lts
Shell zsh 5.7.1
Resolution 1920x1080, 1920x1080
WM i3
CPU Intel i7-8700K (12) @ 4.700GHz
GPU NVIDIA GeForce GTX 980 Ti
GPU Intel UHD Graphics 630
Memory 15955MiB

There are a few things to note:

  • I have two GPUs, a dedicated NVIDIA card and an integrated GPU. You’re going to need two separate GPUs (the only restriction being that you can’t have two of the same model of GPU – for instance, I couldn’t use two 980 Ti cards, but it’s fine to use a 960 and a 980 Ti)
  • Arch Linux! PCI passthrough works on other distributions (like Fedora), but I’m only familiar with it on Arch and Arch-based derivatives like Manjaro
  • LTS kernel. At first, I used vanilla linux, but I’ve found that general performance improvements can be had when using linux-lts, as well as fewer bugs

I’m writing everything for these specifications. You’ll need to pay attention because parts of this will change depending on your personal setup.

2.2 Prerequisites

2.2.1 Windows 10 ISO

You’ll need a Windows 10 ISO. I use the Windows 10 Education edition, which I get for free from my university, but you can use other free versions of Windows 10 such as Windows 10 Enterprise LTSC.

2.2.2 CPU compatibility

Here is a list of Intel CPUs which support hardware virtualisation… so, as long as your CPU isn’t ancient, you should be good.

For AMD, any CPU post-Bulldozer (2011) should be fine – that includes Zen (any Ryzen chips are fine).
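
If you’d rather check from a running Linux system, one quick way is to look for the relevant CPU flags (vmx for Intel VT-x, svm for AMD-V):

$ grep -E -c '(vmx|svm)' /proc/cpuinfo

Anything greater than 0 means your CPU advertises hardware virtualisation.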

2.2.3 Motherboard compatibility

Your motherboard has to support IOMMU. Again, basically anything new will support this. If you’re in doubt, check the Arch Wiki, which links to a couple of useful pages. You’ll be able to verify your whole machine’s compatibility shortly, so don’t worry if you’re not sure.

2.2.4 GPU compatibility

Your GPU ROM needs to support UEFI – this should be fine for any GPU post-2012.

2.2.5 Accessing your VM later

Soon you’re going to have a VM, and you’ll want to control it, view it, and so on, so I recommend at least having a spare keyboard and mouse. Personally, I didn’t use a spare anything, so you can proceed without one if you want – it may just get a bit tricky switching to and from your VM later, you’ve been warned! For how I go about accessing the VM, see the section on switching between host and guest OS below.

2.3 Setting up IOMMU

Go to your BIOS. Enable either “Intel VT-d”, “AMD-Vi”, or something more vague like “Virtualization technology”. This is usually found in the CPU or CPU-related section. In my case, I have an option which is closer to the more vague “Virtualization technology” rather than a specific name, but either way one of these has to be enabled.

Now you have to set the correct kernel parameter. For GRUB:

  • Edit /etc/default/grub as root
  • Now, to the line GRUB_CMDLINE_LINUX_DEFAULT=:
    • If you’ve an Intel CPU, append intel_iommu=on
    • If you’ve an AMD CPU, append amd_iommu=on
    • On either CPU, also append iommu=pt (see the example line below).
  • Once you’ve written the file, run grub-mkconfig -o /boot/grub/grub.cfg
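
As a rough example, on an Intel system the edited line might end up looking something like this (any parameters you already have, such as quiet, stay as they are):

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"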

Reboot and run dmesg | grep -e DMAR -e IOMMU, your output should look something like this:

[    0.008057] ACPI: DMAR 0x00000000A2ADACD8 0000B8 (v01 INTEL  KBL      00000001 INTL 00000001)
[    0.136701] DMAR: IOMMU enabled
[    0.178393] DMAR: Host address width 39
[    0.178393] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.178396] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[    0.178397] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.178400] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.178401] DMAR: RMRR base: 0x000000b03b7000 end: 0x000000b03d6fff
[    0.178401] DMAR: RMRR base: 0x000000b3800000 end: 0x000000b7ffffff
[    0.178403] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.178403] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.178404] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.179797] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    1.035503] DMAR: [Firmware Bug]: RMRR entry for device 05:00.0 is broken - applying workaround
[    1.035504] DMAR: [Firmware Bug]: RMRR entry for device 06:00.0 is broken - applying workaround
[    1.035505] DMAR: No ATSR found
[    1.035540] DMAR: dmar0: Using Queued invalidation
[    1.035585] DMAR: dmar1: Using Queued invalidation
[    1.035808] DMAR: Hardware identity mapping for device 0000:00:00.0
[    1.035809] DMAR: Hardware identity mapping for device 0000:00:01.0
[    1.035858] DMAR: Hardware identity mapping for device 0000:00:02.0
[    1.035859] DMAR: Hardware identity mapping for device 0000:00:14.0
[    1.035859] DMAR: Hardware identity mapping for device 0000:00:16.0
[    1.035860] DMAR: Hardware identity mapping for device 0000:00:17.0
[    1.035861] DMAR: Hardware identity mapping for device 0000:00:1b.0
[    1.035862] DMAR: Hardware identity mapping for device 0000:00:1c.0
[    1.035863] DMAR: Hardware identity mapping for device 0000:00:1c.2
[    1.035864] DMAR: Hardware identity mapping for device 0000:00:1c.4
[    1.035864] DMAR: Hardware identity mapping for device 0000:00:1c.6
[    1.035865] DMAR: Hardware identity mapping for device 0000:00:1d.0
[    1.035866] DMAR: Hardware identity mapping for device 0000:00:1f.0
[    1.035867] DMAR: Hardware identity mapping for device 0000:00:1f.2
[    1.035868] DMAR: Hardware identity mapping for device 0000:00:1f.3
[    1.035868] DMAR: Hardware identity mapping for device 0000:00:1f.4
[    1.035869] DMAR: Hardware identity mapping for device 0000:00:1f.6
[    1.035871] DMAR: Hardware identity mapping for device 0000:01:00.0
[    1.035872] DMAR: Hardware identity mapping for device 0000:01:00.1
[    1.035874] DMAR: Hardware identity mapping for device 0000:04:00.0
[    1.035876] DMAR: Hardware identity mapping for device 0000:05:00.0
[    1.035877] DMAR: Hardware identity mapping for device 0000:06:00.0
[    1.035878] DMAR: Setting RMRR:
[    1.035878] DMAR: Ignoring identity map for HW passthrough device 0000:00:02.0 [0xb3800000 - 0xb7ffffff]
[    1.035879] DMAR: Ignoring identity map for HW passthrough device 0000:00:14.0 [0xb03b7000 - 0xb03d6fff]
[    1.035880] DMAR: Ignoring identity map for HW passthrough device 0000:05:00.0 [0xb03b7000 - 0xb03d6fff]
[    1.035880] DMAR: Ignoring identity map for HW passthrough device 0000:06:00.0 [0xb03b7000 - 0xb03d6fff]
[    1.035881] DMAR: Prepare 0-16MiB unity mapping for LPC
[    1.035882] DMAR: Ignoring identity map for HW passthrough device 0000:00:1f.0 [0x0 - 0xffffff]
[    1.035941] DMAR: Intel(R) Virtualization Technology for Directed I/O
[   10.039023] DMAR: 32bit 0000:04:00.0 uses non-identity mapping

Now, run the following script:

#!/bin/bash
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done | grep "NVIDIA"

First off, note that I changed this script so it’d only show NVIDIA devices (hence the grep "NVIDIA") – remove the | grep "NVIDIA" if you use a different card and look for your card in the output manually. Your output should look like this:

IOMMU Group 1 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM200 [GeForce GTX 980 Ti] [10de:17c8] (rev a1)
IOMMU Group 1 01:00.1 Audio device [0403]: NVIDIA Corporation GM200 High Definition Audio [10de:0fb0] (rev a1)

So for me, the IDs I care about are 10de:17c8 and 10de:0fb0 (the IDs associated with NVIDIA devices, found in the square brackets). Note these down, you’ll need them in a bit.

2.4 Isolating the GPU

The GPU and all the devices sharing the same IOMMU group must have their driver replaced by a stub or VFIO driver. This is to prevent the host OS (in this case, Arch) from interacting with them – we want the GPU to be reserved for the guest OS (Windows). The mainline Linux kernel includes vfio-pci as a module, but it isn’t loaded by default, so we’ll need to set that up. Some kernels (like linux-vfio on the AUR) have vfio-pci built in – for those, follow the instructions on the Arch Wiki.

First, create (or edit) /etc/modprobe.d/vfio.conf like so:

options vfio-pci ids=10de:17c8,10de:0fb0

and then edit /etc/mkinitcpio.conf so that the MODULES array includes

MODULES=(vfio_pci vfio vfio_iommu_type1 vfio_virqfd)

Retain this order, and place these modules at the beginning of the array, before any existing entries. Now run mkinitcpio -p linux as root (or mkinitcpio -p linux-lts if, like me, you’re on the LTS kernel).

2.5 Verifying everything worked

Reboot and run dmesg | grep -i vfio; you should get an output that looks something like this:

[    1.265026] VFIO - User Level meta-driver version: 0.3
[    1.267851] vfio-pci 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
[    1.284271] vfio_pci: add [10de:17c8[ffff:ffff]] class 0x000000/00000000
[    1.300969] vfio_pci: add [10de:0fb0[ffff:ffff]] class 0x000000/00000000
[    8.045659] vfio-pci 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
[  888.466022] vfio-pci 0000:01:00.0: enabling device (0000 -> 0003)
[  888.466215] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x…@0x…
[  888.466225] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x…@0x…
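
Another quick way to confirm the card is now bound to vfio-pci is to ask lspci directly, using the device IDs from earlier (substitute your own):

$ lspci -nnk -d 10de:17c8

Look for Kernel driver in use: vfio-pci in the output.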

2.6 Configuring libvirt

Install qemu, libvirt, ovmf, and virt-manager. Now edit /etc/libvirt/qemu.conf as follows:

nvram = [
    "/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd"
]

Start and enable libvirtd.service:

# systemctl start libvirtd
# systemctl enable libvirtd

and do the same with its logging component, virtlogd.socket.
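
In other words, assuming the standard unit name shipped with Arch’s libvirt package:

# systemctl start virtlogd.socket
# systemctl enable virtlogd.socket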

2.7 Setting up and installing Windows 10 (the guest OS)

About half way through now! Add your user to the libvirt user group:

# usermod -a -G libvirt sseneca

(replace sseneca with your username).

To stop any network-related issues in the following step, I had to install a few packages:

# pacman -S ebtables dnsmasq firewalld

Start the firewalld process and then restart libvirtd:

# systemctl start firewalld
# systemctl enable firewalld
# systemctl restart libvirtd

Now start up virt-manager:

  • Press “Create new virtual machine”
  • Choose “Local install media (ISO image or CDROM)”
  • Browse and find your Windows 10 ISO
  • Allocate your Memory and CPUs. This is highly dependent on your build.
    • I have 16GB RAM, so I gave the VM 12GB (12288MiB, since 12 x 1024 = 12288). For the CPU specification, during install I initially just enabled them all. However, I’ve since changed this – check out my CPU Pinning section.
  • Choose how much space you’re giving to your VM. I only have Windows 10 stored here so I only needed to give it 25GB (more than enough to install Windows 10 with nothing else). However, if you plan on having anything more than that, you’ll need to adjust the storage setting here to reflect that.
  • It’s essential to select “Customise configuration before install”!

2.7.1 Customising configuration before install

  • Change chipset to Q35
  • Change firmware to UEFI
  • Under CPUs, uncheck Copy host CPU configuration, and manually type in host-passthrough.
  • For me, I also had to manually set the CPU topology (1 socket, 6 cores and 12 threads). Again, this will vary depending on your CPU – if you’re stuck, look it up on your favourite search engine. A rough sketch of the resulting XML is shown after this list.
  • Under IDE Disk 1, change Disk bus to VirtIO (you’ll need drivers for this later). You should see the disk’s name change to VirtIO Disk 1.
  • Under Boot Options, check Enable boot menu, and point it to the disk which itself points to your Windows 10 ISO.
  • Under NIC, change Device model to virtio.
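
For reference, here’s a rough sketch of what the <cpu> element in the resulting domain XML (viewable later with virsh edit win10) might look like after these changes – note that in libvirt XML, threads means threads per core, so a 6-core/12-thread CPU is expressed as threads='2':

<cpu mode='host-passthrough'>
  <!-- 1 socket x 6 cores x 2 threads per core = 12 logical CPUs -->
  <topology sockets='1' cores='6' threads='2'/>
</cpu>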

Now you need to download those drivers from here.

  • Press Add Hardware and then Storage. Select or create custom storage and find the driver ISO. Change Device type to CDROM device and set the Bus type as SATA.

Once you’re done with all of that, you can go ahead and press Begin Installation. Follow the generic installation until you get to where you can choose between Upgrade and Custom, and choose Custom. Windows will complain that it can’t find somewhere to install to, but that’s fine! Press the Load driver button, Browse, and then find the CD Drive called virtio-win.

Scroll down to viostor and open w10. Select amd64, because you’d better be running on a 64-bit machine…

Press Next. You should now find the previously created disk, and can carry on with the rest of the generic install as normal.

2.8 Attaching the PCI devices

After your install has completed, shut down the VM. This is where you pass in your GPU – press Add Hardware, select PCI Host Device and find your GPU (so for me, I add my card and the related NVIDIA High Definition Audio). Also add your keyboard and mouse: go to Add Hardware and select them under USB Host Device.

Remove unnecessary hardware, such as the USB Redirectors, Video QXL, Console, Display Spice and Tablet.

Boot up the VM.

2.9 Finalising

Drivers are all you have to deal with now. Open up Device Manager and find the Ethernet Controller. Right click on it and install the driver: browse to the VirtIO CD, and install the w10 driver under NetKVM. Now you’ll have access to the Internet in your VM.

Finally, install your graphics drivers! NVIDIA users should check out an important extra step or else the drivers will crash. Now, let Windows Update do what it wants to, and after a reboot, you should be done!

3 More

3.1 Getting guest audio to output properly

  • Edit the user field in /etc/libvirt/qemu.conf:

user = "sseneca"

  • Now open the libvirt configuration for your VM with virsh edit win10. Modify the first line (<domain type='kvm'>) to:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  • At the bottom of the file, in between the two closing </devices> and </domain> tags, add:
</devices>
    <qemu:commandline>
        <qemu:env name='QEMU_AUDIO_DRV' value='pa'/>
        <qemu:env name='QEMU_PA_SERVER' value='/run/user/1000/pulse/native'/>
    </qemu:commandline>
</domain>

Don’t forget to change 1000 if your uid is different – run id to check.
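
For example, to print just the uid:

$ id -u
1000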

Now restart libvirtd.service:

# systemctl restart libvirtd

as well as pulseaudio.service:

$ systemctl --user restart pulseaudio.service

3.2 NVIDIA’s Error 43 circumvention

NVIDIA don’t like you doing this, so even once you’ve installed their NVIDIA drivers on Windows, you won’t be able to use the graphics card (you’ll get an Error 43). The solution to this is simple, just add this to your libvirt domain configuration:

# virsh edit win10
<features>
  <hyperv>
        ...
        <vendor_id state='on' value='1234567890ab'/>
        ...
  </hyperv>
  ...
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>

3.3 Switching between Host and Guest OS

I’m always booted into Linux. When I decide I want to play a game, or for some reason need to use Windows, I open a terminal and run a script which runs a few preliminary commands before starting up the actual VM. I decided to separate the action of starting the VM from passing in my keyboard and mouse, which means that as the VM boots up I can continue to use the host OS. When I want to begin using the VM and I know it’s ready (Windows boots in less than 10 seconds), I run the following attach script as root and switch display input (see the cable management section):

#!/bin/bash
# Attach my keyboard and mouse to the Windows VM, but only set $vm if the VM
# is actually running (otherwise the virsh calls below will simply fail).
if [[ $(virsh list | grep win10) != "" ]]; then
    vm=win10
fi

virsh attach-device $vm /home/sseneca/Documents/mouse.xml
virsh attach-device $vm /home/sseneca/Documents/keyboard.xml

Now my keyboard and mouse will be passed into the guest OS (Windows 10). I’ll do what I want to do, and if I want to switch back into Linux, I’ll open PuTTY. With PuTTY open, I’ll ssh into the Host OS (Arch) and run the detach script (again, with root privileges):

#!/bin/bash
# Detach the keyboard and mouse from the Windows VM so the host gets them back.
if [[ $(virsh list | grep win10) != "" ]]; then
    vm=win10
fi

virsh detach-device $vm /home/sseneca/Documents/mouse.xml
virsh detach-device $vm /home/sseneca/Documents/keyboard.xml

You’ll want to change win10 to whatever your VM is named, and point the script to where your mouse.xml and keyboard.xml are stored on your machine.

keyboard.xml:

<hostdev mode='subsystem' type='usb' managed='no'>
  <source>
    <vendor id='0x2516'/>
    <product id='0x003b'/>
  </source>
</hostdev>

mouse.xml:

<hostdev mode='subsystem' type='usb' managed='no'>
  <source>
    <vendor id='0x1af3'/>
    <product id='0x0001'/>
  </source>
</hostdev>

Here, vendor id and product id are taken from the output of lsusb. This is my output:

Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 002: ID 0d8c:0008 C-Media Electronics, Inc.
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 005: ID 0b05:1872 ASUSTek Computer, Inc.
Bus 001 Device 004: ID 0b05:185c ASUSTek Computer, Inc.
Bus 001 Device 003: ID 1af3:0001  
Bus 001 Device 002: ID 2516:003b Cooler Master Co., Ltd.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

So I’ll be using 2516:003b for the keyboard and 1af3:0001 for the mouse (not sure why no name shows up for it). As you can see, the vendor id is 0x followed by the part of the ID before the colon, and the product id is 0x followed by the part after the colon.

3.4 MSI (Message Signaled Interrupts)

Some people find that with MSI enabled, audio latency is lower than with the default line-based interrupts. To check whether MSI is supported, run this command:

# lspci -vs 01:00.0 | grep 'MSI:'

Substitute 01:00.0 with the appropriate value for your graphics card.

Your output should look something like this:

Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+

A - after Enable means MSI is supported, but not used by the VM, while a + says that the VM is using it.

When in the VM, use MSI Utility to set this up. There are alternatives, but they’re pretty complex. This is the guide I used before the GUI tool was created – it also contains some more information on the topic if you’re interested.

3.5 Networking not working

I recently started to have issues with networking. I’d get an error message, and to cut a long story short, it turned out that firewalld wasn’t using the iptables backend, which broke things. There is a simple, quick fix:

  • Edit /etc/firewalld/firewalld.conf as root and change FirewallBackend=nftables to FirewallBackend=iptables (see the shell snippet below).
  • Restart the firewalld service and then restart the libvirtd service.
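
If you prefer doing this from the shell, something like the following should work (it assumes the default FirewallBackend=nftables line is present in the file):

# sed -i 's/^FirewallBackend=nftables/FirewallBackend=iptables/' /etc/firewalld/firewalld.conf
# systemctl restart firewalld
# systemctl restart libvirtd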

That should be it!

3.6 CPU Host Performance

I hadn’t initially looked into any of the more detailed methods of CPU pinning, but after a while I started to notice fairly poor CPU performance within the VM. In short, for my 6-core, 12-thread i7-8700K, the following additions to the VM’s XML gave me dramatically better performance:

<vcpu placement='static'>8</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='8'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='9'/>
  <vcpupin vcpu='4' cpuset='4'/>
  <vcpupin vcpu='5' cpuset='10'/>
  <vcpupin vcpu='6' cpuset='5'/>
  <vcpupin vcpu='7' cpuset='11'/>
  <emulatorpin cpuset='0-1,6-7'/>
</cputune>
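
The cpuset pairs here (2/8, 3/9, 4/10, 5/11) are the hyperthread siblings of four physical cores, with the remaining cores (0-1 and their siblings 6-7) left to the host and the emulator threads. To find the sibling pairs on your own CPU, you can inspect the topology with either of these:

$ lscpu -e=CPU,CORE
$ cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list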

I haven’t seen many good explanations of CPU Pinning. The Arch Wiki has one, and this website has a detailed example for a build similar to mine (along with enabling huge memory pages, something which I don’t use right now).

3.7 Cable Management

I use two monitors for my PC: one is a 144Hz monitor and the other is an ordinary 60Hz monitor. To get both to work in the guest OS and the host OS, I have to be particular about how I connect them up (since I also want to leverage the 144Hz refresh rate of my main monitor in both Linux and Windows). This is how I do it:

  • Connected to my 144Hz monitor, I use Display Port and DVI-D, which both support 144Hz
  • Connected to my 60Hz monitor, I use HDMI and DVI-D

Because of the space on my motherboard and graphics card, I’m using a (rather bulky) DVI-D cable to connect my 60Hz monitor to the motherboard. The HDMI cable that goes from my 60Hz monitor goes to the graphics card so that it can be used while I’m in Windows. For my 144Hz monitor, I connect the DVI-D cable to the graphics card and Display Port into the motherboard. This gives me 144Hz no matter which connection I’m using.