cleaning up!

2024-09-26 10:49:22 +02:00
parent b670534cca
commit bad8c3664b
5 changed files with 0 additions and 1183 deletions


@@ -1,154 +0,0 @@
# Ultimate Beginner's Guide to Proxmox GPU Passthrough
Welcome all, to the first installment of my Idiot Friendly tutorial series! I'll be guiding you through the process of configuring GPU Passthrough for your Proxmox Virtual Machine Guests. This guide is aimed at beginners to virtualization, particularly for Proxmox users. It is intended as an overall guide for passing through a GPU (or multiple GPUs) to your Virtual Machine(s). It is not intended as an all-exhaustive how-to guide; however, I will do my best to provide you with all the necessary resources and sources for the passthrough process, from start to finish. If something doesn't work properly, please check /r/Proxmox, /r/Homelab, r/VFIO, or /r/linux4noobs for further assistance from the community.
### Before We Begin (Credits)
This guide wouldn't be possible without the fantastic online Proxmox community; both here on Reddit and on the official forums, as well as the individual user guides that helped me along the way (so that I could help you!). If I've missed a credit source, please let me know! Your work is appreciated.
Disclaimer: In no way, shape, or form does this guide claim to work for all instances of Proxmox/GPU configurations. Use at your own risk. I am not responsible if you blow up your server, your home, or yourself. Surgeon General Warning: do not operate this guide while under the influence of intoxicating substances. Do not let your cat operate this guide. You have been warned.
### Let's Get Started (Pre-configuration Checklist)
It's important to make note of all your hardware/software setup before we begin the GPU passthrough. For reference, I will list what I am using for hardware and software. This guide may or may not work the same on any given hardware/software configuration, and it is intended to help give you an overall understanding and basic setup of GPU passthrough for Proxmox only.
Your hardware should, at the very least, support: VT-d (or AMD-Vi on AMD systems), interrupt remapping, and UEFI BIOS.
### My Hardware Configuration:
Motherboard: Supermicro X9SCM-F (Rev 1.1 Board + Latest BIOS)
CPU: Xeon E3-1220 v2 (LGA1155 socket) ¹
Memory: 16GB DDR3 (ECC, Unregistered)
GPU: 2x GTX 1050 Ti 4GB, 2x GTX 1060 6GB ²
### My Software Configuration:
Latest Proxmox Build (5.3 as of this writing)
Windows 10 LTSC Enterprise (Virtual Machine) ³
### Notes:
¹ On most Xeon E3 CPUs, IOMMU grouping is a mess, so some extra configuration is needed. More on this later.
² It is not recommended to use multiple GPUs of the exact same brand/model. More on this later.
³ Any Windows 10 installation ISO should work; however, try to stick to the latest available ISO from Microsoft.
### Configuring Proxmox
This guide assumes you have, at the very least, installed Proxmox on your server, can log in to the WebGUI, and have access to the server node's Shell terminal. If you need help with installing base Proxmox, I highly recommend the official "Getting Started" guide and their official YouTube guides.
### Step 1: Configuring GRUB
Either SSH directly into your Proxmox server or use the noVNC Shell terminal under "Node", then open up the /etc/default/grub file. I prefer to use nano, but you can use whatever text editor you prefer.
nano /etc/default/grub
Look for this line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
Then change it to look like this:
For Intel CPUs:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
For AMD CPUs:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"
IMPORTANT ADDITIONAL COMMANDS
You might need to add additional options to this line if the passthrough ends up failing, for example if you're using a CPU similar to mine (Xeon E3-12xx series), which has horrible IOMMU grouping, or if you're trying to pass through your only GPU.
These additional options essentially tell Proxmox not to use the GPUs for itself, and they help split each PCI device into its own IOMMU group. This is important because if the GPU you want to pass through sits in, say, IOMMU group 1, and group 1 also contains other devices the host needs (your CPU, for example), then your GPU passthrough will fail.
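Later, once IOMMU is enabled and the host has rebooted, you can check how your devices are actually grouped. The short loop below is a minimal sketch that reads the standard sysfs layout and prints every IOMMU group with the devices inside it:

```shell
# Print each IOMMU group and the PCI devices assigned to it
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo -e "\t$(lspci -nns "${d##*/}")"
  done
done
```

Your GPU (and its audio function) should ideally end up in a group of its own.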
Here are my grub command line settings:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off"
For more information on what these commands do and how they help:
A. Disabling the Framebuffer: video=vesafb:off,efifb:off
B. ACS Override for IOMMU groups: pcie_acs_override=downstream,multifunction
When you have finished editing /etc/default/grub, run this command:
update-grub
### Step 2: VFIO Modules
You'll need to add a few VFIO modules to your Proxmox system. Again, using nano (or whatever), edit the file /etc/modules
nano /etc/modules
Add the following (copy/paste) to the /etc/modules file:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Then save and exit.
### Step 3: IOMMU interrupt remapping
I'm not going to get too much into this; all you really need to do is run the following commands in your Shell:
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf
### Step 4: Blacklisting Drivers
We don't want the Proxmox host system utilizing our GPU(s), so we need to blacklist the drivers. Run these commands in your Shell:
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
### Step 5: Adding GPU to VFIO
Run this command:
lspci -v
Your shell window should output a bunch of stuff. Look for the line(s) that show your video card. It'll look something like this:
01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1) (prog-if 00 [VGA controller])
01:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
Make note of the first set of numbers (e.g. 01:00.0 and 01:00.1). We'll need them for the next step.
Run the command below. Replace 01:00 with whatever number was next to your GPU when you ran the previous command:
lspci -n -s 01:00
This should output your GPU's vendor and device IDs, usually one pair for the GPU and one pair for the Audio bus. It'll look a little something like this:
01:00.0 0000: 10de:1b81 (rev a1)
01:00.1 0000: 10de:10f0 (rev a1)
What we want to keep are these ID pairs: 10de:1b81 and 10de:10f0.
Now we add the GPU's IDs to VFIO (remember to replace the IDs with your own!):
echo "options vfio-pci ids=10de:1b81,10de:10f0 disable_vga=1"> /etc/modprobe.d/vfio.conf
Finally, we run this command:
update-initramfs -u
And reboot:
reboot
Now your Proxmox host should be ready to passthrough GPUs!
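After the reboot, it's worth a quick sanity check that IOMMU came up and that the host is no longer claiming the card. A small verification sketch (replace 01:00 with your own PCI address):

```shell
# Confirm the IOMMU was enabled at boot
dmesg | grep -e DMAR -e IOMMU

# Check which kernel driver is bound to the GPU and its audio function;
# "Kernel driver in use" should report vfio-pci for both
lspci -nnk -s 01:00
```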


@@ -1,55 +0,0 @@
---
title: Disable IPv6 on Proxmox
description: How to permanently disable IPv6 on Proxmox VE Server.
template: comments.html
tags: [proxmox, ipv6]
---
# Disable IPv6 on Proxmox Permanently
By default, IPv6 is enabled on Proxmox after installation. This means that the IPv6 stack is active and the host can communicate with other hosts on the same network via the IPv6 protocol.
Output of `ip addr` command:
![Default IPv6 Proxmox][default-ipv6-proxmox-img]
You can disable IPv6 on Proxmox VE by editing the `/etc/default/grub` file.
```shell
nano /etc/default/grub
```
Add `ipv6.disable=1` to the end of the `GRUB_CMDLINE_LINUX_DEFAULT` and `GRUB_CMDLINE_LINUX` lines. Don't change the other values on those lines.
```bash
GRUB_CMDLINE_LINUX_DEFAULT="quiet ipv6.disable=1"
GRUB_CMDLINE_LINUX="ipv6.disable=1"
```
The config should look like this:
![Grub Configuration][grub-configuration-img]
Save and exit, then update the grub configuration.
```shell
update-grub
```
Reboot the Proxmox server to apply the changes.
Output of `ip addr` command after disabling IPv6 on Proxmox VE:
![No IPv6 Proxmox Image][no-ipv6-proxmox-img]
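To double-check from the shell, you can list IPv6 addresses directly; after the reboot the command below should print nothing (a quick verification sketch, assuming nothing re-enables the IPv6 stack):

```shell
# With ipv6.disable=1 active, no IPv6 addresses should be listed
ip -6 addr show
```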
<!-- appendices -->
<!-- urls -->
<!-- images -->
[default-ipv6-proxmox-img]: ./assets/images/1ee15c1c-bd9a-11ec-926f-3b1ee33b95ee.jpg 'Default IPv6 Proxmox Image'
[grub-configuration-img]: ./assets/images/f1f18772-f881-11ec-9918-afad89ede03c.jpg 'Grub Configuration'
[no-ipv6-proxmox-img]: ./assets/images/542c7a30-bd9c-11ec-848e-932ce851a8c3.jpg 'No IPv6 Proxmox Image'
<!-- end appendices -->


@@ -1,375 +0,0 @@
---
title: GPU Passthrough to VM
description: Proxmox full gpu passthrough to VM configuration for hardware acceleration.
template: comments.html
tags: [proxmox, gpu, passthrough]
---
# Proxmox GPU Passthrough to VM
## Introduction
GPU passthrough is a technology that allows the Linux kernel to present a physical PCI GPU directly to the virtual machine. The device behaves as if it were driven directly by the virtual machine, and the virtual machine detects the PCI device as if it were physically connected.
We will cover how to enable GPU passthrough to a virtual machine in Proxmox VE.
!!! Warning ""
**Your mileage may vary depending on your hardware.**
## Proxmox Configuration for GPU Passthrough
The following examples use an `SSH` connection to the Proxmox server. The editor is `nano`, but feel free to use any other editor.
We will be editing the `grub` configuration file.
Find the PCI address of the GPU Device. The following command will show the PCI address of the GPU devices in Proxmox server:
```shell
lspci -nnv | grep VGA
```
Find the GPU you want to pass through in the result; it should be similar to this:
```shell
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU104 [GeForce RTX 2080 SUPER] [10de:1e81] (rev a1) (prog-if 00 [VGA controller])
```
What we are looking for is the PCI address of the GPU device. In this case it's `01:00.0`.
`01:00.0` is only one part of a group of PCI devices on the GPU.
We can list all the devices in the group `01:00` by using the following command:
```shell
lspci -s 01:00
```
The usual output will include a VGA device and an Audio device. In my case, there is also a USB controller and a Serial bus controller:
```shell
01:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2080 SUPER] (rev a1)
01:00.1 Audio device: NVIDIA Corporation TU104 HD Audio Controller (rev a1)
01:00.2 USB controller: NVIDIA Corporation TU104 USB 3.1 Host Controller (rev a1)
01:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller (rev a1)
```
Now we need to get the IDs of those devices. We can do this with the following command:
```shell
lspci -s 01:00 -n
```
The output should look similar to this:
```shell
01:00.0 0300: 10de:1e81 (rev a1)
01:00.1 0403: 10de:10f8 (rev a1)
01:00.2 0c03: 10de:1ad8 (rev a1)
01:00.3 0c80: 10de:1ad9 (rev a1)
```
What we are looking for are the vendor:device ID pairs; we will use these IDs to hand every device in the group over to `vfio-pci`.
```shell
10de:1e81,10de:10f8,10de:1ad8,10de:1ad9
```
Now it's time to edit the `grub` configuration file.
```shell
nano /etc/default/grub
```
Find the line that starts with `GRUB_CMDLINE_LINUX_DEFAULT`; by default it should look like this:
```shell
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
```
=== "For Intel CPU"
``` shell
intel_iommu=on
```
=== "For AMD CPU"
``` shell
amd_iommu=on
```
Then change it to look like this (Intel CPU example) and replace `vfio-pci.ids=` with the IDs for the GPU you want to pass through:
```shell
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream,multifunction video=efifb:off video=vesa:off vfio-pci.ids=10de:1e81,10de:10f8,10de:1ad8,10de:1ad9 vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1 modprobe.blacklist=radeon,nouveau,nvidia,nvidiafb,nvidia-gpu"
```
Save the changed config and then update GRUB.
```shell
update-grub
```
Next we need to add `vfio` modules to allow PCI passthrough.
Edit the `/etc/modules` file.
```shell
nano /etc/modules
```
Add the following lines to the end of the file:
```shell
# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```
Save and exit the editor.
Apply the configuration changes made in your /etc filesystem:
```shell
update-initramfs -u -k all
```
**Reboot Proxmox to apply the changes**
Verify that IOMMU is enabled
```shell
dmesg | grep -e DMAR -e IOMMU
```
There should be a line that looks like `DMAR: IOMMU enabled`. If there is no output, something is wrong.
```shell hl_lines="2"
[0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
[0.067203] DMAR: IOMMU enabled
[2.573920] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[2.580393] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[2.581776] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
```
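Interrupt remapping is also worth confirming, since passthrough needs it (or the unsafe-interrupts override used in the grub line above). A quick check; the exact wording of the message differs between Intel and AMD systems:

```shell
# Typical output is "DMAR-IR: Enabled IRQ remapping in x2apic mode" (Intel)
# or "AMD-Vi: Interrupt remapping enabled" (AMD)
dmesg | grep -i remapping
```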
Check that the GPU is in a separate IOMMU group by running the following script:
```shell
#!/bin/bash
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
echo "IOMMU Group ${g##*/}:"
for d in $g/devices/*; do
echo -e "\t$(lspci -nns ${d##*/})"
done;
done;
```
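If you only care about the GPU itself, you can query its group directly through sysfs. A small sketch; adjust the PCI address to your own:

```shell
# Print the IOMMU group the GPU belongs to (the last path component is the group number)
readlink /sys/bus/pci/devices/0000:01:00.0/iommu_group

# List everything that shares that group; ideally only the GPU's own functions appear
ls /sys/bus/pci/devices/0000:01:00.0/iommu_group/devices/
```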
Now your Proxmox host should be ready for GPU passthrough!
## Windows Virtual Machine GPU Passthrough Configuration
For better results it's recommended to use this [Windows 10/11 Virtual Machine configuration for Proxmox][windows-vm-configuration-url].
!!! Failure "Limitations & Workarounds"
- In order for the GPU to function properly in the VM, you must disable Proxmox's Virtual Display - set it to `none`.
- You will lose the ability to connect to the VM via Proxmox's Console.
- A display must be connected to the physical output of the GPU for the Windows host to initialize the GPU properly.
- **You can use an [HDMI Dummy Plug][hdmi-dummy-pluh-amazon-url]{target=\_blank} as a workaround - it will present itself as an HDMI display to the Windows host.**
- Make sure you have an alternative way to connect to the VM, for example via Remote Desktop (RDP).
Find the PCI address of the GPU.
```shell
lspci -nnv | grep VGA
```
This should result in output similar to this:
```shell
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU104 [GeForce RTX 2080 SUPER] [10de:1e81] (rev a1) (prog-if 00 [VGA controller])
```
If you have multiple VGA devices, look for the one that matches the GPU you want to pass through.
Here, the PCI address of the GPU is `01:00.0`.
![Proxmox lspci vga][proxmox-lspci-vga-img]
For best performance, the VM's `Machine` type should be set to ==q35==.
This will allow the VM to utilize PCI-Express passthrough.
Open the web gui and navigate to the `Hardware` tab of the VM you want to add a vGPU.
Click `Add` above the device list and then choose `PCI Device`
![Windows VM Add PCI Device][windows-vm-add-pci-device-img]
Open the `Device` dropdown and select the GPU, which you can find using its PCI address. This list uses a different format for PCI addresses: `01:00.0` is listed as `0000:01:00.0`.
![Add GPU to VM][general-vm-add-gpu-to-vm-img]
Select `All Functions`, `ROM-Bar`, `Primary GPU`, `PCI-Express` and then click `Add`.
![Windows VM GPU PCI Settings][windows-vm-gpu-pci-settings-img]
The Windows Virtual Machine Proxmox Setting should look like this:
![Windows VM GPU Hardware Settings][windows-vm-gpu-hardware-settings-img]
Power on the Windows Virtual Machine.
Connect to the VM via Remote Desktop (RDP) or any other remote access protocol you prefer.
Install the latest version of GPU Driver for your GPU.
If all went well, you should see the following output in `Device Manager` and [GPU-Z][gpu-z-url]{target=\_blank}:
![GPU-Z and Device Manager GPU][gpu-z-and-device-manager-gpu-img]
That's it!
## Linux Virtual Machine GPU Passthrough Configuration
We will be using Ubuntu Server 20.04 LTS for this guide.
From Proxmox Terminal find the PCI address of the GPU.
```shell
lspci -nnv | grep VGA
```
This should result in output similar to this:
```shell
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU104 [GeForce RTX 2080 SUPER] [10de:1e81] (rev a1) (prog-if 00 [VGA controller])
```
If you have multiple VGA devices, look for the one that matches the GPU you want to pass through.
Here, the PCI address of the GPU is `01:00.0`.
![lspci-nnv-vga][proxmox-lspci-vga-img]
For best performance, the VM's `Machine` type should be set to ==q35==.
This will allow the VM to utilize PCI-Express passthrough.
![Ubuntu VM Add PCI Device][ubuntu-vm-add-pci-device-img]
Open the `Device` dropdown and select the GPU, which you can find using its PCI address. This list uses a different format for PCI addresses: `01:00.0` is listed as `0000:01:00.0`.
![Add GPU to VM][general-vm-add-gpu-to-vm-img]
Select `All Functions`, `ROM-Bar`, `PCI-Express` and then click `Add`.
![Ubuntu VM GPU PCI Settings][ubuntu-vm-gpu-pci-settings-img]
The Ubuntu Virtual Machine Proxmox Setting should look like this:
![Ubuntu VM GPU Hardware Settings][ubuntu-vm-gpu-hardware-settings-img]
Boot the VM. To test the GPU passthrough was successful, you can use the following command in the VM:
```shell
sudo lspci -nnv | grep VGA
```
The output should include the GPU:
```shell
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU104 [GeForce RTX 2080 SUPER] [10de:1e81] (rev a1) (prog-if 00 [VGA controller])
```
Now we need to install the GPU Driver. I'll be covering the installation of Nvidia Drivers in the next example.
Search for the latest Nvidia Driver for your GPU.
```shell
sudo apt search nvidia-driver
```
In the next step we will install the Nvidia Driver v535.
!!! note
**--no-install-recommends** is important for a headless server: `nvidia-driver-535` would otherwise pull in xorg (GUI), and the `--no-install-recommends` flag prevents the GUI from being installed.
```shell
sudo apt install --no-install-recommends -y build-essential nvidia-driver-535 nvidia-headless-535 nvidia-utils-535 nvidia-cuda-toolkit
```
This will take a while to install. After the installation is complete, you should reboot the VM.
Now let's test the driver initialization. Run the following command in the VM:
```shell
nvidia-smi && nvidia-smi -L
```
If all went well you should see the following output:
![Ubuntu VM GPU Nvidia-smi][ubuntu-vm-gpu-nvidia-smi]
That's it! You should now be able to use the GPU for hardware acceleration inside the VM.
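As an optional extra check, you can try a small hardware-accelerated encode. This is a hedged sketch that assumes your `ffmpeg` build includes NVENC support; the first command confirms whether it does:

```shell
# Confirm the NVENC encoders are available in this ffmpeg build
ffmpeg -hide_banner -encoders | grep nvenc

# Encode a 5-second synthetic test clip with the NVENC H.264 encoder;
# if the GPU and driver are working, this completes without errors
ffmpeg -y -f lavfi -i testsrc=duration=5:size=1280x720:rate=30 \
       -c:v h264_nvenc /tmp/nvenc-test.mp4
```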
## Debug
Debug messages - shows hardware initialization and errors
```shell
dmesg -w
```
Display PCI devices information
```shell
lspci
```
Display Driver in use for PCI devices
```shell
lspci -k
```
Display IOMMU Groups the PCI devices are assigned to
```shell
#!/bin/bash
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
echo "IOMMU Group ${g##*/}:"
for d in $g/devices/*; do
echo -e "\t$(lspci -nns ${d##*/})"
done;
done;
```
<!-- urls -->
[windows-vm-configuration-url]: ../windows-vm-configuration.md 'Windows VM Configuration'
[gpu-z-url]: https://www.techpowerup.com/gpuz/ 'GPU-Z Homepage'
[hdmi-dummy-pluh-amazon-url]: https://amzn.to/391shDL 'HDMI Dummy Plug Amazon'
<!-- images -->
<!-- Proxmox/general Images-->
[proxmox-lspci-vga-img]: ./assets/images/8886bc4a-be38-11ec-ba3b-d3e0526955c4.jpg 'Proxmox lspci vga'
[general-vm-add-gpu-to-vm-img]: ./assets/images/a7d93848-be38-11ec-9607-2ba8ccd0b5ab.jpg 'Add GPU to VM'
<!-- Windows Images-->
[windows-vm-add-pci-device-img]: ./assets/images/893555e4-b914-11ec-8e85-df9da2014d5a.jpg 'Windows VM Add PCI Device'
[windows-vm-gpu-pci-settings-img]: ./assets/images/d48456fc-be38-11ec-a8da-c747b71c446f.jpg 'Windows VM GPU PCI Settings'
[windows-vm-gpu-hardware-settings-img]: ./assets/images/157b55e8-be3e-11ec-a2c2-97d25fe194df.jpg 'Windows VM GPU Hardware Settings'
[gpu-z-and-device-manager-gpu-img]: ./assets/images/13d3484a-be39-11ec-9c17-d311291bdb58.jpg 'GPU-Z and Device Manager GPU'
<!-- Ubuntu Images-->
[ubuntu-vm-add-pci-device-img]: ./assets/images/3d942380-be3d-11ec-99fc-0778f9dc8acd.jpg 'Ubuntu VM Add PCI Device'
[ubuntu-vm-gpu-pci-settings-img]: ./assets/images/4dc679d8-be3d-11ec-8ef7-03c9f9ba3344.jpg 'Ubuntu VM GPU PCI Settings'
[ubuntu-vm-gpu-hardware-settings-img]: ./assets/images/6953aefa-be3d-11ec-bfe8-7f9219dc10e2.jpg 'Ubuntu VM GPU Hardware Settings'
[ubuntu-vm-gpu-nvidia-smi]: ./assets/images/a6de4412-be40-11ec-85e6-338ef50c9599.jpg 'Ubuntu VM GPU Nvidia-smi'


@@ -1,286 +0,0 @@
---
title: iGPU Passthrough to VM
description: Proxmox iGPU passthrough to VM configuration for hardware acceleration.
template: comments.html
tags: [proxmox, igpu, passthrough]
---
# iGPU Passthrough to VM (Intel Integrated Graphics)
## Introduction
Intel Integrated Graphics (iGPU) is a GPU that is integrated into the CPU. Proxmox may be configured to pass the iGPU through to a VM, allowing the VM to use it for hardware acceleration, for example video encoding/decoding and transcoding for services like Plex and Emby.
This guide will show you how to configure Proxmox to use iGPU passthrough to VM.
!!! Warning ""
**Your mileage may vary depending on your hardware. The following guide was tested with Intel Gen8 CPU.**
There are two ways to use iGPU passthrough to a VM. The first way is to use the `Full iGPU Passthrough`. The second way is to use the `iGPU GVT-g` technology, which allows us to split the iGPU into two parts. We will be covering the `Full iGPU Passthrough`. If you want to use the `iGPU GVT-g Split Passthrough`, you can find the guide [here][igpu-split-gvt-g-passthrough-url].
## Proxmox Configuration for iGPU Full Passthrough
The following examples use an `SSH` connection to the Proxmox server. The editor is `nano`, but feel free to use any other editor.
Edit the `grub` configuration file:
```shell
nano /etc/default/grub
```
Find the line that starts with `GRUB_CMDLINE_LINUX_DEFAULT`; by default it should look like this:
```shell
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
```
We want to allow `passthrough` and blacklist known graphics drivers to prevent Proxmox from utilizing the iGPU.
!!! Warning
**You will lose the ability to use the onboard graphics card to access Proxmox's console, since Proxmox won't be able to use the Intel GPU.**
Your `GRUB_CMDLINE_LINUX_DEFAULT` should look like this:
```shell
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction initcall_blacklist=sysfb_init video=simplefb:off video=vesafb:off video=efifb:off video=vesa:off disable_vga=1 vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1 modprobe.blacklist=radeon,nouveau,nvidia,nvidiafb,nvidia-gpu,snd_hda_intel,snd_hda_codec_hdmi,i915"
```
!!! Note
This will blacklist most of the graphics drivers from Proxmox. If there is a specific driver the Proxmox host needs, remove it from `modprobe.blacklist`.
Save and exit the editor.
Update the grub configuration to apply the changes the next time the system boots.
```shell
update-grub
```
Next we need to add `vfio` modules to allow PCI passthrough.
Edit the `/etc/modules` file.
```shell
nano /etc/modules
```
Add the following lines to the end of the file:
```shell
# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```
Save and exit the editor.
Apply the configuration changes made in your /etc filesystem:
```shell
update-initramfs -u -k all
```
**Reboot Proxmox to apply the changes**
Verify that IOMMU is enabled
```shell
dmesg | grep -e DMAR -e IOMMU
```
There should be a line that looks like `DMAR: IOMMU enabled`. If there is no output, something is wrong.
```shell hl_lines="2"
[0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
[0.067203] DMAR: IOMMU enabled
[2.573920] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[2.580393] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[2.581776] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
```
## Windows Virtual Machine iGPU Passthrough Configuration
For better results it's recommended to use this [Windows 10/11 Virtual Machine configuration for Proxmox][windows-vm-configuration-url].
Find the PCI address of the iGPU.
```shell
lspci -nnv | grep VGA
```
This should result in output similar to this:
```shell
00:02.0 VGA compatible controller [0300]: Intel Corporation CometLake-S GT2 [UHD Graphics 630] [8086:3e92] (prog-if 00 [VGA controller])
```
If you have multiple VGA devices, look for the one that has `Intel` in the name.
Here, the PCI address of the iGPU is `00:02.0`.
![Proxmox lspci vga][proxmox-lspci-vga-img]
For best performance, the VM's `Machine` type should be set to ==q35==.
This will allow the VM to utilize PCI-Express passthrough.
Open the web gui and navigate to the `Hardware` tab of the VM you want to add a vGPU.
Click `Add` above the device list and then choose `PCI Device`
![Windows VM Add PCI Device][windows-vm-add-pci-device-img]
Open the `Device` dropdown and select the iGPU, which you can find using its PCI address. This list uses a different format for PCI addresses: `00:02.0` is listed as `0000:00:02.0`.
![Add iGPU to VM][general-vm-add-igpu-to-vm-img]
Select `All Functions`, `ROM-Bar`, `PCI-Express` and then click `Add`.
![Windows VM iGPU PCI Settings][windows-vm-igpu-pci-settings-img]
!!! tip
I've found that the most consistent way to get GPU acceleration is to disable the VM's virtual graphics card in Proxmox. The drawback of disabling the virtual graphics card is that you will no longer be able to access the VM via Proxmox's VNC console. The workaround is to enable Remote Desktop (RDP) on the VM before disabling the virtual graphics card, and then access the VM via RDP or any other remote desktop client. If you lose the ability to access the VM via RDP, you can temporarily remove the GPU PCI device and re-enable the virtual graphics card.
The Windows Virtual Machine Proxmox Setting should look like this:
![Windows VM iGPU Hardware Settings][windows-vm-igpu-hardware-settings-img]
Power on the Windows Virtual Machine.
Connect to the VM via Remote Desktop (RDP) or any other remote access protocol you prefer.
Install the latest version of [Intel's Graphics Driver][intel-gpu-drivers-url]{target=\_blank} or use the [Intel Driver & Support Assistant][intel-driver-and-support-assistant-url]{target=\_blank} installer.
If all went well, you should see the following output in `Device Manager` and [GPU-Z][gpu-z-url]{target=\_blank}:
![GPU-Z and Device Manager iGPU][gpu-z-and-device-manager-igpu-img]
That's it!
## Linux Virtual Machine iGPU Passthrough Configuration
We will be using Ubuntu Server 20.04 LTS for this guide.
From Proxmox Terminal find the PCI address of the iGPU.
```shell
lspci -nnv | grep VGA
```
This should result in output similar to this:
```shell
00:02.0 VGA compatible controller [0300]: Intel Corporation CometLake-S GT2 [UHD Graphics 630] [8086:3e92] (prog-if 00 [VGA controller])
```
If you have multiple VGA devices, look for the one that has `Intel` in the name.
Here, the PCI address of the iGPU is `00:02.0`.
![lspci-nnv-vga][proxmox-lspci-vga-img]
![Ubuntu VM Add PCI Device][ubuntu-vm-add-pci-device-img]
Open the `Device` dropdown and select the iGPU, which you can find using its PCI address. This list uses a different format for PCI addresses: `00:02.0` is listed as `0000:00:02.0`.
![Add iGPU to VM][general-vm-add-igpu-to-vm-img]
Select `All Functions`, `ROM-Bar` and then click `Add`.
![Ubuntu VM iGPU PCI Settings][ubuntu-vm-igpu-pci-settings-img]
The Ubuntu Virtual Machine Proxmox Setting should look like this:
![Ubuntu VM iGPU Hardware Settings][ubuntu-vm-igpu-hardware-settings-img]
Boot the VM. To test the iGPU passthrough was successful, you can use the following command:
```shell
sudo lspci -nnv | grep VGA
```
The output should include the Intel iGPU:
```shell
00:10.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 630 (Desktop) [8086:3e92] (prog-if 00 [VGA controller])
```
Now we need to check whether the GPU driver initialization is working.
```shell
cd /dev/dri && ls -la
```
The output should include the `renderD128`
![VM renderD128][vm-renderd128-img]
That's it! You should now be able to use the iGPU for hardware acceleration inside the VM.
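If you plan to use the iGPU for transcoding, a quick VA-API probe confirms the driver is usable from inside the VM. A sketch assuming the `vainfo` tool and the Intel media driver are installed (package names as on Ubuntu 20.04+):

```shell
# Install the VA-API probe tool and the Intel media driver
sudo apt install -y vainfo intel-media-va-driver-non-free

# List the encode/decode profiles exposed through /dev/dri; a healthy iGPU
# passthrough prints a table of supported VAProfile entries
vainfo
```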
## Debug
Debug messages - shows hardware initialization and errors
```shell
dmesg -w
```
Display PCI devices information
```shell
lspci
```
Display Driver in use for PCI devices
```shell
lspci -k
```
Display IOMMU Groups the PCI devices are assigned to
```shell
#!/bin/bash
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
echo "IOMMU Group ${g##*/}:"
for d in $g/devices/*; do
echo -e "\t$(lspci -nns ${d##*/})"
done;
done;
```
<!-- appendices -->
<!-- urls -->
[igpu-full-passthrough-url]: gpu-passthrough-to-vm.md#igpu-full-passthrough 'iGPU Full Passthrough'
[igpu-split-gvt-g-passthrough-url]: igpu-split-passthrough.md 'iGPU Split GVT-g Passthrough'
[windows-vm-configuration-url]: ../windows-vm-configuration.md 'Windows VM Configuration'
[intel-gpu-drivers-url]: https://www.intel.com/content/www/us/en/support/articles/000090440/graphics.html 'Intel GPU Drivers'
[intel-driver-and-support-assistant-url]: https://www.intel.com/content/www/us/en/support/detect.html 'Intel Driver and Support Assistant'
[gpu-z-url]: https://www.techpowerup.com/gpuz/ 'GPU-Z Homepage'
<!-- images -->
<!-- Proxmox/general Images-->
[proxmox-lspci-vga-img]: ./assets/images/c98e4e9a-b912-11ec-9100-c3da7dd122f2.jpg 'Proxmox lspci vga'
[general-vm-add-igpu-to-vm-img]: ./assets/images/d3a4d31c-b918-11ec-ac96-a7ff358e0685.jpg 'Add iGPU to VM'
<!-- Windows Images-->
[windows-vm-add-pci-device-img]: ./assets/images/893555e4-b914-11ec-8e85-df9da2014d5a.jpg 'Windows VM Add PCI Device'
[windows-vm-igpu-pci-settings-img]: ./assets/images/cc1c3650-b91b-11ec-8215-bb07cf790912.jpg 'Windows VM iGPU PCI Settings'
[windows-vm-igpu-hardware-settings-img]: ./assets/images/496fa0ba-b91c-11ec-bcb5-3759896bab7f.jpg 'Windows VM iGPU Hardware Settings'
[gpu-z-and-device-manager-igpu-img]: ./assets/images/7c9df2f6-b91d-11ec-b08b-775e53b2c017.jpg 'GPU-Z and Device Manager iGPU'
<!-- Ubuntu Images-->
[ubuntu-vm-add-pci-device-img]: ./assets/images/19bbed86-bc34-11ec-bdef-d76764bad4d0.jpg 'Ubuntu VM Add PCI Device'
[ubuntu-vm-igpu-pci-settings-img]: ./assets/images/1bb4b41e-bdb1-11ec-9af2-4b05eacea61c.jpg 'Ubuntu VM iGPU PCI Settings'
[ubuntu-vm-igpu-hardware-settings-img]: ./assets/images/b177a31c-bc35-11ec-9045-2b011e6c011d.jpg 'Ubuntu VM iGPU Hardware Settings'
[vm-renderd128-img]: ./assets/images/7660a1d4-bd8e-11ec-a58e-3f9f3e6c485d.jpg 'VM renderD128'
<!-- end appendices -->


@@ -1,313 +0,0 @@
---
title: iGPU Split Passthrough
description: Proxmox iGPU split passthrough to VM configuration for hardware acceleration.
template: comments.html
tags: [proxmox, igpu, passthrough]
---
# iGPU Split Passthrough (Intel Integrated Graphics)
## Introduction
Intel Integrated Graphics (iGPU) is a GPU that is integrated into the CPU. Proxmox may be configured to use iGPU split passthrough to a VM, allowing the VM to use the iGPU for hardware acceleration, for example video encoding/decoding and transcoding for services like Plex and Emby.
This guide will show you how to configure Proxmox to use iGPU passthrough to VM.
!!! Warning ""
**Your mileage may vary depending on your hardware. The following guide was tested with Intel Gen8 CPU.**
!!! Failure "Supported CPUs"
`iGPU GVT-g Split Passthrough` is supported only on Intel's **5th generation to 10th generation** CPUs!
Known supported CPU families:
- **Broadwell**
- **Skylake**
- **Kaby Lake**
- **Coffee Lake**
- **Comet Lake**
There are two ways to use iGPU passthrough to a VM. The first way is to use the `Full iGPU Passthrough`. The second way is to use the `iGPU GVT-g` technology, which allows us to split the iGPU into two parts. We will be covering the `iGPU GVT-g Split Passthrough`. If you want to use the `Full iGPU Passthrough`, you can find the guide [here][igpu-full-passthrough-url].
## Proxmox Configuration for GVT-g Split Passthrough
The following examples use an `SSH` connection to the Proxmox server. The editor is `nano`, but feel free to use any other editor.
Edit the `grub` configuration file:
```shell
nano /etc/default/grub
```
Find the line that starts with `GRUB_CMDLINE_LINUX_DEFAULT`; by default it should look like this:
```shell
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
```
We want to allow `passthrough` and blacklist known graphics drivers to prevent Proxmox from utilizing the iGPU.
Your `GRUB_CMDLINE_LINUX_DEFAULT` should look like this:
```shell
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1 iommu=pt pcie_acs_override=downstream,multifunction video=efifb:off video=vesa:off vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1 modprobe.blacklist=radeon,nouveau,nvidia,nvidiafb,nvidia-gpu"
```
!!! Note
This will blacklist most of the graphics drivers from Proxmox. If there is a specific driver the Proxmox host needs, remove it from `modprobe.blacklist`.
Save and exit the editor.
Update the grub configuration to apply the changes the next time the system boots.
```shell
update-grub
```
Next we need to add `vfio` modules to allow PCI passthrough.
Edit the `/etc/modules` file.
```shell
nano /etc/modules
```
Add the following lines to the end of the file:
```shell
# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
# Modules required for Intel GVT-g Split
kvmgt
```
Save and exit the editor.
Apply the configuration changes made in your /etc filesystem:
```shell
update-initramfs -u -k all
```
**Reboot Proxmox to apply the changes**
Verify that IOMMU is enabled
```shell
dmesg | grep -e DMAR -e IOMMU
```
There should be a line that looks like `DMAR: IOMMU enabled`. If there is no output, something is wrong.
```shell hl_lines="2"
[0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
[0.067203] DMAR: IOMMU enabled
[2.573920] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[2.580393] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[2.581776] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
```
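Before moving on to the VM configuration, you can also confirm that GVT-g virtual GPU types are actually being exposed after the reboot. A small sketch that assumes the iGPU sits at PCI address `00:02.0` (adjust if yours differs):

```shell
# List the GVT-g mediated device types offered by the iGPU
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types

# Show the description (resolution, video memory, instances) of each type
for t in /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/*; do
  echo "${t##*/}:"
  cat "$t/description"
  echo
done
```

These are the same entries that appear later in the `Mdev Type` dropdown.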
## Windows Virtual Machine iGPU Passthrough Configuration
For better results it's recommended to use this [Windows 10/11 Virtual Machine configuration for Proxmox][windows-vm-configuration-url].
Find the PCI address of the iGPU.
```shell
lspci -nnv | grep VGA
```
This should result in output similar to this:
```shell
00:02.0 VGA compatible controller [0300]: Intel Corporation CometLake-S GT2 [UHD Graphics 630] [8086:3e92] (prog-if 00 [VGA controller])
```
If you have multiple VGA devices, look for the one that has `Intel` in the name.
Here, the PCI address of the iGPU is `00:02.0`.
![Proxmox lspci vga][proxmox-lspci-vga-img]
For best performance, the VM's `Machine` type should be set to ==q35==.
This will allow the VM to utilize PCI-Express passthrough.
Open the web gui and navigate to the `Hardware` tab of the VM you want to add a vGPU.
Click `Add` above the device list and then choose `PCI Device`
![Windows VM Add PCI Device][windows-vm-add-pci-device-img]
Open the `Device` dropdown and select the iGPU, which you can find using its PCI address. This list uses a different format for PCI addresses: `00:02.0` is listed as `0000:00:02.0`.
![Add iGPU MDev to VM][general-add-igpu-mdev-to-vm-img]
Click `Mdev Type`; you should be presented with a list of the available split passthrough devices. Choose the better-performing one for the VM.
![Windows VM Add iGPU Split to VM][windows-vm-add-igpu-split-to-vm-img]
Select `ROM-Bar`, `PCI-Express` and then click `Add`.
![Windows VM iGPU PCI Split Settings][windows-vm-igpu-pci-split-settings-img]
The Windows Virtual Machine Proxmox Setting should look like this:
![Windows VM iGPU Split Hardware Settings][windows-vm-igpu-split-hardware-settings-img]
Power on the Windows Virtual Machine.
Open the VM's Console.
Install the latest version of [Intel's Graphics Driver][intel-gpu-drivers-url]{target=\_blank} or use the [Intel Driver & Support Assistant][intel-driver-and-support-assistant-url]{target=\_blank} installer.
If all went well, you should see the following output in `Device Manager` and [GPU-Z][gpu-z-url]{target=\_blank}:
![GPU-Z and Device Manager iGPU][gpu-z-and-device-manager-igpu-img]
That's it! You should now be able to use the iGPU for hardware acceleration inside the VM and still have proxmox's output on the screen.
## Linux Virtual Machine iGPU Passthrough Configuration
We will be using Ubuntu Server 20.04 LTS for this guide.
From Proxmox Terminal find the PCI address of the iGPU.
```shell
lspci -nnv | grep VGA
```
This should result in output similar to this:
```shell
00:02.0 VGA compatible controller [0300]: Intel Corporation CometLake-S GT2 [UHD Graphics 630] [8086:3e92] (prog-if 00 [VGA controller])
```
If you have multiple VGA devices, look for the one that has `Intel` in the name.
![Proxmox lspci vga][proxmox-lspci-vga-img]
Here, the PCI address of the iGPU is `00:02.0`.
The VM's `Machine` type should be set to ==i440fx==.
Open the web gui and navigate to the `Hardware` tab of the VM you want to add a vGPU to.
Click `Add` above the device list and then choose `PCI Device`
![Ubuntu VM Add PCI Device][ubuntu-vm-add-pci-device-img]
Open the `Device` dropdown and select the iGPU, which you can find using its PCI address. This list uses a different format for PCI addresses: `00:02.0` is listed as `0000:00:02.0`.
![Add iGPU MDev to VM][general-add-igpu-mdev-to-vm-img]
Click `Mdev Type`; you should be presented with a list of the available split passthrough devices. Choose the better-performing one for the VM.
![Add iGPU Split Mdev to VM][ubuntu-vm-add-igpu-split-to-vm-img]
Select `ROM-Bar`, and then click `Add`.
![Ubuntu VM iGPU PCI Split Settings][ubuntu-vm-igpu-pci-split-settings-img]
The Ubuntu Virtual Machine Proxmox Setting should look like this:
![Ubuntu VM iGPU Split Hardware Settings][ubuntu-vm-igpu-split-hardware-settings-img]
Boot the VM. To test the iGPU passthrough was successful, you can use the following command:
```shell
sudo lspci -nnv | grep VGA
```
The output should include the Intel iGPU:
```shell
00:10.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 630 (Desktop) [8086:3e92] (prog-if 00 [VGA controller])
```
Now we need to check whether the GPU driver initialization is working.
```shell
cd /dev/dri && ls -la
```
The output should include the `renderD128`
![VM renderD128][vm-renderd128-img]
That's it! You should now be able to use the iGPU for hardware acceleration inside the VM and still have proxmox's output on the screen.
## Debug
Debug messages - shows hardware initialization and errors
```shell
dmesg -w
```
Display PCI devices information
```shell
lspci
```
Display Driver in use for PCI devices
```shell
lspci -k
```
Display IOMMU Groups the PCI devices are assigned to
```shell
#!/bin/bash
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
echo "IOMMU Group ${g##*/}:"
for d in $g/devices/*; do
echo -e "\t$(lspci -nns ${d##*/})"
done;
done;
```
<!-- appendices -->
<!-- urls -->
[igpu-full-passthrough-url]: gpu-passthrough-to-vm.md#igpu-full-passthrough 'iGPU Full Passthrough'
[igpu-split-gvt-g-passthrough-url]: igpu-split-passthrough.md 'iGPU Split GVT-g Passthrough'
[windows-vm-configuration-url]: ../windows-vm-configuration.md 'Windows VM Configuration'
[intel-gpu-drivers-url]: https://www.intel.com/content/www/us/en/support/articles/000090440/graphics.html 'Intel GPU Drivers'
[intel-driver-and-support-assistant-url]: https://www.intel.com/content/www/us/en/support/detect.html 'Intel Driver and Support Assistant'
[gpu-z-url]: https://www.techpowerup.com/gpuz/ 'GPU-Z Homepage'
<!-- images -->
<!-- Proxmox/general Images-->
[proxmox-lspci-vga-img]: ./assets/images/c98e4e9a-b912-11ec-9100-c3da7dd122f2.jpg 'Proxmox lspci vga'
[general-add-igpu-mdev-to-vm-img]: ./assets/images/2cf3d69c-bd89-11ec-af8c-67974c4ba3f0.jpg 'Add iGPU MDev to VM'
<!-- Windows Images-->
[windows-vm-add-pci-device-img]: ./assets/images/893555e4-b914-11ec-8e85-df9da2014d5a.jpg 'Windows VM Add PCI Device'
[windows-vm-add-igpu-split-to-vm-img]: ./assets/images/393f9ce0-bc41-11ec-976a-cb1d91990157.jpg 'Windows VM Add iGPU Split to VM'
[windows-vm-igpu-pci-split-settings-img]: ./assets/images/0bb26720-bc42-11ec-97d5-0f6751fb6075.jpg 'Windows VM iGPU PCI Split Settings'
[windows-vm-igpu-split-hardware-settings-img]: ./assets/images/d1d0f06c-bd9f-11ec-993d-77cc04f321dc.jpg 'Windows VM iGPU Split Hardware Settings'
[gpu-z-and-device-manager-igpu-img]: ./assets/images/7c9df2f6-b91d-11ec-b08b-775e53b2c017.jpg 'GPU-Z and Device Manager iGPU'
<!-- Ubuntu Images-->
[ubuntu-vm-add-pci-device-img]: ./assets/images/19bbed86-bc34-11ec-bdef-d76764bad4d0.jpg 'Ubuntu VM Add PCI Device'
[ubuntu-vm-add-igpu-split-to-vm-img]: ./assets/images/3802e9b8-bd8b-11ec-a4ba-8305e0d2d682.jpg 'Ubuntu VM Add iGPU Split to VM'
[ubuntu-vm-igpu-pci-split-settings-img]: ./assets/images/c605680c-bd8c-11ec-81f9-4755a5d3fa24.jpg 'Ubuntu VM iGPU PCI Split Settings'
[ubuntu-vm-igpu-split-hardware-settings-img]: ./assets/images/375ed1c8-bd8d-11ec-94c6-cf0bac60954a.jpg 'Ubuntu VM iGPU Split Hardware Settings'
[vm-renderd128-img]: ./assets/images/7660a1d4-bd8e-11ec-a58e-3f9f3e6c485d.jpg 'VM renderD128'
<!-- end appendices -->