Intel NUC GPU passthrough in Proxmox 6.1 with Plex and Docker

April 26, 2020 · 4 minutes

Plex is a fantastic piece of software that, when combined with a Plex Pass subscription, allows for hardware-accelerated transcoding of video streams. Before jumping from a bare-metal home server on Ubuntu 18.04 to one virtualised in a Proxmox node, I found that many people seem to have great difficulty getting this working.

After a lot of research I've managed to configure my Intel NUC running Proxmox VE 6.1-8 to passthrough the integrated GPU on the i5-8259U CPU to a guest VM running Ubuntu 20.04 and Docker that enables hardware transcoding of video streams from a Plex server.

Proxmox setup

First, you need to enable IOMMU on the Proxmox host. This makes the PCI passthrough option available in the GUI.

nano /etc/default/grub

Then add intel_iommu=on to the GRUB_CMDLINE_LINUX_DEFAULT option. It should look like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

Next, add the required kernel modules to /etc/modules so the VFIO driver stack loads at boot (these are the modules listed in the Proxmox PCI passthrough documentation):

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Finally, run update-grub and reboot!
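Once the host is back up, it's worth checking that the IOMMU actually came on before touching any VM settings. A quick sketch, run on the Proxmox host (the exact kernel log wording varies between kernel versions):

```shell
# Confirm the kernel enabled the IOMMU; Intel systems log "DMAR" lines,
# e.g. "DMAR: IOMMU enabled".
dmesg | grep -i -e DMAR -e IOMMU | head

# List the IOMMU groups (one symlink per device). The iGPU at 00:02.0
# should appear in one of them.
find /sys/kernel/iommu_groups/ -type l
```

If the second command prints nothing at all, the intel_iommu=on flag hasn't taken effect and passthrough won't work.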

VM setup

My Plex instance runs inside an Ubuntu 20.04 VM running Docker Engine, with access to 4 vCPUs of type "host", and is managed with Docker Compose (more on this later).

VM CPU Configuration

Identify the GPU

Once the VM is set up, SSH into the Proxmox host and use the lspci command to figure out which PCI device is the GPU we want to pass through to the guest. The output will look something like this:

root@pve:~# lspci
00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers (rev 08)
00:02.0 VGA compatible controller: Intel Corporation Iris Plus Graphics 655 (rev 01)
00:08.0 System peripheral: Intel Corporation Skylake Gaussian Mixture Model
00:12.0 Signal processing controller: Intel Corporation Cannon Point-LP Thermal Controller (rev 30)
00:14.0 USB controller: Intel Corporation Cannon Point-LP USB 3.1 xHCI Controller (rev 30)
00:14.2 RAM memory: Intel Corporation Cannon Point-LP Shared SRAM (rev 30)
00:14.3 Network controller: Intel Corporation Cannon Point-LP CNVi [Wireless-AC] (rev 30)
# ... snip ...

In that output we've found the GPU we want to use: the identifier for "Intel Corporation Iris Plus Graphics 655" is 00:02.0. This is what we will select when adding hardware to the VM.
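If the lspci output is long, you can pull the slot out directly instead of scanning by eye. A small sketch, assuming the NUC's single VGA controller (the gpu_slot variable name is just for illustration):

```shell
# Extract the PCI slot of the VGA controller, e.g. "00:02.0".
gpu_slot=$(lspci | awk '/VGA compatible controller/ {print $1}')
echo "$gpu_slot"
```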

Add the PCI device to the guest

Using this identifier, you'll need to add a PCI device to the guest using the "Hardware" tab, "Add", "PCI Device".

Add PCI device menu

The modal that pops up is where we'll select the device with that identifier. Select it, and make sure "All Functions" is ticked and "Primary GPU" is not ticked.

Select PCI device

I'm not entirely clear on what "All Functions" does (it appears to pass through every function of the selected PCI device, e.g. 00:02.1 as well as 00:02.0, rather than just one), but having it ticked works for me.

Finally, reboot the VM, SSH or VNC into it, and run lspci. If you see the 00:02.0 identifier in the output, it worked!
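Beyond lspci, the real giveaway inside the guest is the DRI device nodes that Plex's transcoder will open. A quick check (node names like renderD128 can vary):

```shell
# The i915 driver should bind to the passed-through GPU...
lspci -k | grep -A 2 -i 'vga'

# ...and expose device nodes such as card0 and renderD128.
ls -l /dev/dri
```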

Disclaimer: I'm not 100% sure whether the CPU type of "host" or the PCI passthrough configuration is what makes this work; this is what worked for me after trying multiple variations of similar things.

Mount the device to the Plex container

The last and definitely the easiest part is mounting the device into the Plex container. A simple docker-compose.yml illustrates this nicely:

version: '3.7'

services:
  plex:
    # ... snip ...
    image: linuxserver/plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=docker
      # ... snip ...
    devices:
      - /dev/dri:/dev/dri

The devices section at the end of this file mounts the /dev/dri device into the container. All that's left is to enable hardware transcoding in the "Transcoder" settings in Plex.
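Before blaming Plex, you can confirm the device actually made it into the container. Assuming the compose service is named plex (adjust to whatever yours is called):

```shell
# The render node must be visible inside the container for the
# transcoder to open it.
docker exec plex ls -l /dev/dri

# On the guest itself, the i915 driver should be loaded.
lsmod | grep i915
```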

Hardware transcoding option

To check this worked, start playing something, adjust the quality options to trigger a transcode, and take a look at the dashboard.

Dashboard showing HW transcode

If you're seeing Transcode (hw) in the now playing section you've successfully set up GPU passthrough in Proxmox to a guest VM 🎉


