Nvidia GPU not detected inside Jellyfin container - beckettloose - 2025-11-10

Hello all,

I'm in the process of setting up GPU transcoding for my Jellyfin server, but I'm running into an annoying issue at the moment. This is my current setup for reference:

Host:
  • Chassis: Dell R720xd
  • CPU: 2x Intel Xeon E5-2680 v2
  • RAM: 320GB ECC Memory
  • Hypervisor OS: Proxmox VE 9.0.11
  • GPU: Nvidia Quadro P2000

VM Specs:
  • CPU: 4 cores
  • RAM: 8 GB
  • OS: Ubuntu 25.04 Server
  • Nvidia Drivers: 535.274.02
  • Nvidia Container Toolkit: 1.18.0-1
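
For completeness, the NVIDIA Container Toolkit on the VM was wired up to Docker following the standard NVIDIA instructions, roughly along these lines (a sketch from the docs rather than my exact shell history):
Code:
# Install the toolkit, register it as a Docker runtime, then restart Docker
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker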


Jellyfin playback fails whenever transcoding is required, which appears to be caused by an issue with passing the GPU through from the VM to the container. In my troubleshooting, I started by checking the output of nvidia-smi on the VM itself and inside the container:

nvidia-smi run from VM:
Code:
redacted@docker-02:~/docker/nvidia-test$ nvidia-smi
Mon Nov 10 16:13:23 2025
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.274.02            Driver Version: 535.274.02  CUDA Version: 12.2    |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf          Pwr:Usage/Cap |        Memory-Usage | GPU-Util  Compute M. |
|                                        |                      |              MIG M. |
|=========================================+======================+======================|
|  0  Quadro P2000                  Off | 00000000:00:10.0 Off |                  N/A |
| 64%  51C    P0              18W /  75W |      0MiB /  5120MiB |      0%      Default |
|                                        |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU  GI  CI        PID  Type  Process name                            GPU Memory |
|        ID  ID                                                            Usage      |
|=======================================================================================|
|  No running processes found                                                          |
+---------------------------------------------------------------------------------------+
This gives the expected output showing the GPU is detected, and there are no active processes using it.
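
For what it's worth, the passthrough at the VM level can also be sanity-checked at the PCI layer with something along these lines (just a sketch of the kind of check, not output I've captured here):
Code:
# Confirm the GPU shows up on the VM's PCI bus and is bound to the nvidia driver
lspci -nnk | grep -iA3 nvidia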

nvidia-smi run inside jellyfin container:
Code:
redacted@docker-02:~/docker/jellyfin$ docker exec -it jellyfin /bin/bash
I have no name!@docker-02:/$ nvidia-smi
Mon Nov 10 16:15:49 2025
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.274.02            Driver Version: 535.274.02  CUDA Version: 12.2    |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf          Pwr:Usage/Cap |        Memory-Usage | GPU-Util  Compute M. |
|                                        |                      |              MIG M. |
|=========================================+======================+======================|
Killed
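If I'm reading it right, the "Killed" line means nvidia-smi received SIGKILL rather than erroring out on its own, so one thing still on my list is checking whether the kernel OOM killer (or something else) is responsible. A sketch of that check, output not captured yet:
Code:
# Look for OOM-killer or other kill messages in the kernel log around the time of the failure
sudo dmesg -T | grep -iE 'oom|killed process' | tail -n 20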
At this point, it seemed like the GPU passthrough was not working inside Docker, since nvidia-smi was killed before it could print the GPU info. Next, I took Jellyfin out of the equation by running nvidia-smi from a plain ubuntu image:
Code:
redacted@docker-02:~/docker/jellyfin$ docker run --rm --entrypoint "nvidia-smi" --runtime nvidia --gpus all ubuntu
Mon Nov 10 16:19:15 2025
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.274.02            Driver Version: 535.274.02  CUDA Version: 12.2    |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf          Pwr:Usage/Cap |        Memory-Usage | GPU-Util  Compute M. |
|                                        |                      |              MIG M. |
|=========================================+======================+======================|
|  0  Quadro P2000                  Off | 00000000:00:10.0 Off |                  N/A |
| 67%  51C    P0              18W /  75W |      0MiB /  5120MiB |      0%      Default |
|                                        |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU  GI  CI        PID  Type  Process name                            GPU Memory |
|        ID  ID                                                            Usage      |
|=======================================================================================|
|  No running processes found                                                          |
+---------------------------------------------------------------------------------------+

Interestingly, this worked just fine. I'm fairly sure my Jellyfin compose file is set up correctly, but I'll include it here anyway:
Code:
services:
  jellyfin:
    image: jellyfin/jellyfin
    container_name: 'jellyfin'
    user: 1000:1000
    group_add: # added by ID, as these groups may not exist inside the container; needed for access to the VAAPI devices
      - '107' #render
      - '44' #video
    # Network mode of 'host' exposes the ports on the host. This is needed for DLNA access.
    network_mode: 'host'
    volumes:
      - jellyfin_config:/config
      - jellyfin_cache:/cache
      - /mnt/media-vault:/media
    restart: always
    runtime: nvidia
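    # GPU access is requested in two ways here: the legacy 'runtime: nvidia' setting above
    # and the Compose device reservation under 'deploy' at the bottom of this service.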
    devices:
      # VAAPI Devices
      #- /dev/dri/renderD128:/dev/dri/renderD128
      #- /dev/dri/card0:/dev/dri/card0
      # Devices below were added during troubleshooting but don't seem to make a difference
      - /dev/nvidia-caps:/dev/nvidia-caps
      - /dev/nvidia0:/dev/nvidia0
      - /dev/nvidiactl:/dev/nvidiactl
      - /dev/nvidia-modeset:/dev/nvidia-modeset
      - /dev/nvidia-uvm:/dev/nvidia-uvm
      - /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
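
One more isolation step on my list is pointing the same one-off test at the Jellyfin image itself, which should show whether the problem is specific to the image or to the compose setup (sketch only; I haven't captured the output of this yet):
Code:
# Same test as the ubuntu one above, but against the Jellyfin image
docker run --rm --runtime nvidia --gpus all --entrypoint nvidia-smi jellyfin/jellyfin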

Any ideas as to what's wrong here, or anything else I should look at, would be greatly appreciated.