Jellyfin Forum › Support › Troubleshooting
    Jellyfin container works with NVIDIA GPU 1, but not GPU 0

    Jellyfin ffmpeg is able to use both GPUs on the host, but only works with my second GPU inside the container
spongeboy03 (Junior Member · Posts: 2 · Joined: Nov 2025)
#1 · 2025-11-22, 01:33 AM (This post was last modified: 2025-11-23, 03:51 AM by spongeboy03. Edited 3 times in total.)
    Server Version: 10.11.3

I'm having an issue trying to switch my Jellyfin container from using my A2 to using my RTX 3090 (both Ampere). Here is the output of nvidia-smi:

    Code:
    Fri Nov 21 20:02:46 2025
    +-----------------------------------------------------------------------------------------+
    | NVIDIA-SMI 580.105.08            Driver Version: 580.105.08    CUDA Version: 13.0    |
    +-----------------------------------------+------------------------+----------------------+
    | GPU  Name                Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf          Pwr:Usage/Cap |          Memory-Usage | GPU-Util  Compute M. |
    |                                        |                        |              MIG M. |
    |=========================================+========================+======================|
    |  0  NVIDIA GeForce RTX 3090        Off |  00000000:01:00.0 Off |                  N/A |
    | 31%  43C    P8            17W /  350W |      15MiB /  24576MiB |      0%      Default |
    |                                        |                        |                  N/A |
    +-----------------------------------------+------------------------+----------------------+
    |  1  NVIDIA A2                      Off |  00000000:02:00.0 Off |                    0 |
    |  0%  38C    P8              6W /  60W |      14MiB /  15356MiB |      0%      Default |
    |                                        |                        |                  N/A |
    +-----------------------------------------+------------------------+----------------------+

    +-----------------------------------------------------------------------------------------+
    | Processes:                                                                              |
    |  GPU  GI  CI              PID  Type  Process name                        GPU Memory |
    |        ID  ID                                                              Usage      |
    |=========================================================================================|
    |    0  N/A  N/A            1669      G  /usr/lib/xorg/Xorg                        4MiB |
    |    1  N/A  N/A            1669      G  /usr/lib/xorg/Xorg                        4MiB |
    +-----------------------------------------------------------------------------------------+

Here is my Docker Compose file. I tried adding the /dev/ device mounts specified on the hardware acceleration documentation page, but it didn't fix it.
    Code:
    services:
      jellyfin:
        image: jellyfin/jellyfin
        container_name: jellyfin
        ports:
          - 8096:8096/tcp
          - 7359:7359/udp
        volumes:
          - ./config:/config
          - ./cache:/cache

          # Media
          - type: bind
            source: /mnt/omvpool/Movies
            target: /Movies

          # NVIDIA GPU device mounts
          - /dev/nvidia-caps:/dev/nvidia-caps
          - /dev/nvidia0:/dev/nvidia0
          - /dev/nvidiactl:/dev/nvidiactl
          - /dev/nvidia-modeset:/dev/nvidia-modeset
          - /dev/nvidia-uvm:/dev/nvidia-uvm
          - /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools

        restart: "unless-stopped"

        extra_hosts:
          - "host.docker.internal:host-gateway"

        runtime: nvidia

        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  device_ids: ["0"]
                  capabilities: [gpu]

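Side note: from what I've read, device_ids should also accept the GPU UUIDs reported by nvidia-smi -L instead of indexes, which would rule out any host-vs-container numbering confusion. I haven't confirmed this changes anything; the UUID below is a placeholder, the real one comes from nvidia-smi -L on the host.
Code:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              # Placeholder UUID; substitute the real value from `nvidia-smi -L`
              device_ids: ["GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"]
              capabilities: [gpu]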
When I start the container with device_ids: ["1"], playback works flawlessly, but when I change it to ["0"] to use my 3090, playback fails with a fatal error. I exec'ed into the container to check whether Jellyfin's ffmpeg could use it for a simple decode, and I get this error:

    Quote:[hevc @ 0x7f35548dd400] decoder->cvdl->cuvidGetDecoderCaps(&caps) failed -> CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
    [hevc @ 0x7f35548dd400] Failed setup for format cuda: hwaccel initialisation returned error.
    [hevc_nvenc @ 0x7f35548dc600] OpenEncodeSessionEx failed: unsupported device (2): (no details)
    [vost#0:0/hevc_nvenc @ 0x7f353fa7b100] Error while opening encoder - maybe incorrect parameters such as bit_rate, rate, width or height.
    [vf#0:0 @ 0x7f3566281dc0] Error sending frames to consumers: Function not implemented
    [vf#0:0 @ 0x7f3566281dc0] Task finished with error code: -38 (Function not implemented)
    [vf#0:0 @ 0x7f3566281dc0] Terminating thread with return code -38 (Function not implemented)
    [vost#0:0/hevc_nvenc @ 0x7f353fa7b100] Could not open encoder before EOF
    [vost#0:0/hevc_nvenc @ 0x7f353fa7b100] Task finished with error code: -22 (Invalid argument)
    [vost#0:0/hevc_nvenc @ 0x7f353fa7b100] Terminating thread with return code -22 (Invalid argument)
    [out#0/matroska @ 0x7f3566281ac0] Nothing was written into output file, because at least one of its streams received no packets.
    frame=    0 fps=0.0 q=0.0 Lsize=      0KiB time=N/A bitrate=N/A speed=N/A
    Conversion failed!
Here is the command I used to test it. I also downloaded jellyfin-ffmpeg on the Debian host and ran the same command on the same file there, and it decoded fine.
    Quote:/usr/lib/jellyfin-ffmpeg/ffmpeg \
      -hwaccel cuda -hwaccel_output_format cuda \
      -init_hw_device cuda=cu:0 \
      -i /Movies/TEST_MOVIE.mkv \
      -c:v hevc_nvenc -c:a copy \
      -y "/cache/transcodes/test_output.mkv"

I am extremely curious why one GPU works and not the other when both are the same architecture, driver version, and CUDA version. The only difference between the working and non-working setups is that device_ids parameter.
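For anyone trying to reproduce this, the container's view can be compared against the host with something like the following (this assumes the container name jellyfin from the compose file, and that nvidia-smi is available inside the container, which depends on the driver capabilities the runtime exposes):
Code:
# GPUs the NVIDIA runtime actually exposed to the container
docker exec jellyfin nvidia-smi -L
# Device nodes present inside the container (glob must expand in the container)
docker exec jellyfin sh -c 'ls -l /dev/nvidia*'
# NVIDIA_VISIBLE_DEVICES and related environment as the container sees it
docker exec jellyfin env | grep -i nvidia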
spongeboy03
#2 · 2025-11-22, 06:56 AM
To add on: the 3090 works fine in my compute containers (Ollama, vLLM, etc.). I shut those down while testing this.