
    Transcoding Issues on Windows Docker

    Videos fail to load with "Playback failed due to a fatal player error."
    mistamoronic
    Offline

    Junior Member

    Posts: 16
    Threads: 2
    Joined: 2025 Jan
    Reputation: 0
    #1
    2025-01-12, 08:05 AM (This post was last modified: 2025-01-12, 08:07 AM by mistamoronic. Edited 1 time in total.)
    Hello, I am running Jellyfin 10.10.3 in Docker on Docker for Windows and I am having trouble getting transcoding to work. I tried following the steps listed in https://jellyfin.org/docs/general/admini...ualization
    but I am not sure how to complete step 4, which says to add my user to the video group with the
    Code:
    sudo usermod -aG video $USER
    command. I don't know what to put for "$USER". Isn't my user just the 1000:1000 user I set in the yaml file? Sorry, I am new to all of this.
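
    For context, a hedged sketch of what that step does on a plain Linux host (the username below is only an example): $USER expands to the login name of the account the shell runs as, which is separate from the 1000:1000 UID set in the compose file.
    Code:
    # Hypothetical example on a Linux host: $USER expands to the shell's login name
    echo "$USER"                      # e.g. prints "user1" (example name)
    sudo usermod -aG video "$USER"    # same as: sudo usermod -aG video user1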

    Then step 5 says to run the "docker exec -it jellyfin ldconfig" command, and when I try that I get:
    Code:
    docker container exec -it jellyfin ldconfig

    ldconfig: Can't create temporary cache file /etc/ld.so.cache~: Permission denied
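
    Presumably this fails because the container runs as the unprivileged 1000:1000 user from the compose file, while ldconfig needs to write /etc/ld.so.cache as root. A minimal sketch (assuming the container name from the compose file) of running that one command as root instead:
    Code:
    # --user root overrides the container's default user for this single exec
    docker exec -u root -it jellyfin ldconfig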


    Anyway, I tried to just enable transcoding without that step to see what would happen, and I get "Playback failed due to a fatal player error" when I try to watch something that requires transcoding. Checking the logs, this is what it says at the end:
    Code:
    ffmpeg version 7.0.2-Jellyfin Copyright (c) 2000-2024 the FFmpeg developers
      built with gcc 12 (Debian 12.2.0-14)
      configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-ptx-compression --disable-static --disable-libxcb --disable-sdl2 --disable-xlib --enable-lto=auto --enable-gpl --enable-version3 --enable-shared --enable-gmp --enable-gnutls --enable-chromaprint --enable-opencl --enable-libdrm --enable-libxml2 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libfontconfig --enable-libharfbuzz --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libopenmpt --enable-libdav1d --enable-libsvtav1 --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --enable-libfdk-aac --arch=amd64 --enable-libshaderc --enable-libplacebo --enable-vulkan --enable-vaapi --enable-amf --enable-libvpl --enable-ffnvcodec --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvdec --enable-nvenc
      libavutil      59.  8.100 / 59.  8.100
      libavcodec    61.  3.100 / 61.  3.100
      libavformat    61.  1.100 / 61.  1.100
      libavdevice    61.  1.100 / 61.  1.100
      libavfilter    10.  1.100 / 10.  1.100
      libswscale      8.  1.100 /  8.  1.100
      libswresample  5.  1.100 /  5.  1.100
      libpostproc    58.  1.100 / 58.  1.100
    [AVHWDeviceContext @ 0x564d10bf4d40] Cannot load libcuda.so.1
    [AVHWDeviceContext @ 0x564d10bf4d40] Could not dynamically load CUDA
    Device creation failed: -1.
    Failed to set value 'cuda=cu:0' for option 'init_hw_device': Operation not permitted
    Error parsing global options: Operation not permitted
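
    For reference, "Cannot load libcuda.so.1" means the dynamic loader inside the container cannot find the NVIDIA driver library: either the NVIDIA runtime never mapped it in, or the linker cache was never refreshed, which is presumably what the skipped ldconfig step is for. A hedged way to check what the loader currently sees (this only reads the cache, so the permission error above does not apply):
    Code:
    # An empty result means libcuda is not resolvable inside the container yet
    docker exec -it jellyfin sh -c "ldconfig -p | grep -i libcuda"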


    So I'm obviously doing something wrong lol. Is it because my user is not added to the video group? Here is my docker-compose yaml file btw:
    Code:
    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        container_name: jellyfin
        user: 1000:1000
        network_mode: 'host'
        ports:
          - 8096:8096
        volumes:
          - C:\Users\User1\docker\jellyfin\config:/config
          - C:\Users\User1\docker\jellyfin\cache:/cache
          - type: bind
            source: D:\Documents\Videos\Movies
            target: /media
            read_only: true
        restart: 'unless-stopped'
        runtime: nvidia
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: all
                  capabilities: [all]
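
    Two hedged observations on the compose file: "ports:" has no effect while network_mode is 'host', and the GPU can be requested either with the legacy "runtime: nvidia" key (which needs the nvidia runtime registered with the Docker daemon) or with the "deploy" reservation that Docker's WSL2 GPU docs describe, so carrying both is redundant. A sketch of just the GPU-related part, keeping the rest of the service as posted above:
    Code:
    services:
      jellyfin:
        # ... image, user, volumes, etc. unchanged ...
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: all
                  capabilities: [gpu]   # "gpu" is the capability Docker's compose docs use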
    Efficient_Good_5784
    Offline

    Community Moderator

    Posts: 1,167
    Threads: 3
    Joined: 2023 Jun
    Reputation: 50
    #2
    2025-01-12, 08:20 AM
    You're using WSL right? Make sure you're on WSL2 and have an Nvidia GPU (as it's the only brand of GPU that's supported on Docker with WSL2).

    This might be helpful: https://docs.docker.com/desktop/features/gpu/
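
    A quick way to confirm the WSL 2 part from PowerShell (the output shown is only an example, and distro names can differ between Docker Desktop versions):
    Code:
    wsl -l -v
    #   NAME              STATE    VERSION
    #   docker-desktop    Running  2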
    mistamoronic
    Offline

    Junior Member

    Posts: 16
    Threads: 2
    Joined: 2025 Jan
    Reputation: 0
    #3
    2025-01-12, 03:08 PM
    Yes, I've done that part, and it works when I use the benchmark command from that page.
    mistamoronic
    Offline

    Junior Member

    Posts: 16
    Threads: 2
    Joined: 2025 Jan
    Reputation: 0
    #4
    2025-01-13, 09:25 PM (This post was last modified: 2025-01-14, 05:27 AM by mistamoronic. Edited 1 time in total.)
    (2025-01-12, 08:20 AM)Efficient_Good_5784 Wrote: You're using WSL right? Make sure you're on WSL2 and have an Nvidia GPU (as it's the only brand of GPU that's supported on Docker with WSL2).

    This might be helpful: https://docs.docker.com/desktop/features/gpu/

    Following up on this, here is the output when I run the "docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark" command from that page.

    Code:
    docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
    Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
            -fullscreen      (run n-body simulation in fullscreen mode)
            -fp64            (use double precision floating point values for simulation)
            -hostmem          (stores simulation data in host memory)
            -benchmark        (run benchmark to measure performance)
            -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
            -device=<d>      (where d=0,1,2.... for the CUDA device to use)
            -numdevices=<i>  (where i=(number of CUDA devices > 0) to use for simulation)
            -compare          (compares simulation results running once on the default GPU and once on the CPU)
            -cpu              (run n-body simulation on the CPU)
            -tipsy=<file.bin> (load a tipsy model file for simulation)

    NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

    > Windowed mode
    > Simulation data stored in video memory
    > Single precision floating point simulation
    > 1 Devices used for simulation
    GPU Device 0: "Ampere" with compute capability 8.6

    > Compute 8.6 CUDA device: [NVIDIA GeForce RTX 3070]
    47104 bodies, total time for 10 iterations: 40.967 ms
    = 541.601 billion interactions per second
    = 10832.025 single-precision GFLOP/s at 20 flops per interaction


    However, when I try the "docker exec -it jellyfin nvidia-smi" command from this page https://jellyfin.org/docs/general/admini...ualization
    this is the result I get:
    Code:
    docker exec -it jellyfin nvidia-smi

    OCI runtime exec failed: exec failed: unable to start container process: exec: "nvidia-smi": executable file not found in $PATH: unknown

    Not sure why that is happening, because when I run the nvidia-smi test from this page https://docs.nvidia.com/datacenter/cloud...kload.html by itself, it works:

    Code:
    docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
    Tue Jan 14 05:26:59 2025
    +-----------------------------------------------------------------------------------------+
    | NVIDIA-SMI 565.77.01              Driver Version: 566.36        CUDA Version: 12.7    |
    |-----------------------------------------+------------------------+----------------------+
    | GPU  Name                Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf          Pwr:Usage/Cap |          Memory-Usage | GPU-Util  Compute M. |
    |                                        |                        |              MIG M. |
    |=========================================+========================+======================|
    |  0  NVIDIA GeForce RTX 3070        On  |  00000000:09:00.0  On |                  N/A |
    | 30%  32C    P8            23W /  220W |    1626MiB /  8192MiB |      1%      Default |
    |                                        |                        |                  N/A |
    +-----------------------------------------+------------------------+----------------------+

    +-----------------------------------------------------------------------------------------+
    | Processes:                                                                              |
    |  GPU  GI  CI        PID  Type  Process name                              GPU Memory |
    |        ID  ID                                                              Usage      |
    |=========================================================================================|
    |    0  N/A  N/A        37      G  /Xwayland                                  N/A      |
    +-----------------------------------------------------------------------------------------+
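
    If it helps with debugging: the jellyfin image does not ship nvidia-smi itself; the binary only shows up inside a container that was created with GPU access, so "executable file not found" suggests this particular container never got the GPU wired in. A hedged way to check what the container was actually created with, and to recreate it after changing the compose file:
    Code:
    # Show the runtime and device requests recorded for the existing container
    docker inspect jellyfin --format '{{.HostConfig.Runtime}} {{json .HostConfig.DeviceRequests}}'
    # Recreate the container so compose changes (GPU reservation, etc.) take effect
    docker compose up -d --force-recreate jellyfin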
    mistamoronic
    Offline

    Junior Member

    Posts: 16
    Threads: 2
    Joined: 2025 Jan
    Reputation: 0
    #5
    2025-01-14, 05:59 AM (This post was last modified: 2025-01-14, 06:00 AM by mistamoronic. Edited 3 times in total.)
    Just wanted to put my transcoding settings here for additional information. Looking at the encode/decode support matrix listed here: https://developer.nvidia.com/video-encod...matrix-new
    I should be able to decode everything and encode everything except AV1.


    [Image: ikanhBZ.png]
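
    A hedged way to double-check the codec side independently of the CUDA device error (the ffmpeg path assumes the layout of the official jellyfin image):
    Code:
    # List the NVENC encoders and CUVID/NVDEC decoders the bundled ffmpeg was built with
    docker exec -it jellyfin sh -c "/usr/lib/jellyfin-ffmpeg/ffmpeg -hide_banner -encoders | grep nvenc"
    docker exec -it jellyfin sh -c "/usr/lib/jellyfin-ffmpeg/ffmpeg -hide_banner -decoders | grep cuvid"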