Jellyfin Forum
Transcoding Issues on Windows Docker - Printable Version

+- Jellyfin Forum (https://forum.jellyfin.org)
+-- Forum: Support (https://forum.jellyfin.org/f-support)
+--- Forum: Troubleshooting (https://forum.jellyfin.org/f-troubleshooting)
+--- Thread: Transcoding Issues on Windows Docker (/t-transcoding-issues-on-windows-docker)



Transcoding Issues on Windows Docker - mistamoronic - 2025-01-12

Hello, I am using Jellyfin 10.10.3 on Docker for Windows and I am having trouble getting transcoding to work. I tried following the steps listed in https://jellyfin.org/docs/general/administration/hardware-acceleration/nvidia#configure-with-linux-virtualization
But I am not sure how to complete step 4, which says to add my user to the video group with the
Code:
sudo usermod -aG video $USER
command. I don't know what to put for "$USER". Isn't my user just the 1000:1000 user I set in the yaml file? Sorry, I am new to all of this.
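(For anyone finding this later: `$USER` there is the username of the Linux account you are logged in as inside the WSL2 distro, not the 1000:1000 pair from the compose file, though on a default WSL install that account's UID is usually 1000. A hedged sketch of how to check, run inside the WSL distro rather than PowerShell:)

```shell
# Inside the WSL2 distro (not PowerShell). $USER expands to the name of
# the account you are logged in as; it is a username string, not the
# 1000:1000 UID:GID pair from docker-compose.yml (though on a default
# WSL install that user's UID is usually 1000).
echo "username: ${USER:-$(id -un)}"
echo "uid:gid : $(id -u):$(id -g)"

# Step 4 from the docs then becomes (needs sudo, so run it yourself):
#   sudo usermod -aG video "$USER"
# Log out and back in, then confirm membership with:
#   id -nG "$USER" | grep -qw video && echo "in the video group"
```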

Then step 5 says to run the "docker exec -it jellyfin ldconfig" command and when I try that I get
Code:
docker container exec -it jellyfin ldconfig

ldconfig: Can't create temporary cache file /etc/ld.so.cache~: Permission denied
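(Side note: that "Permission denied" is consistent with the container running as user 1000:1000, since `ldconfig` has to rewrite the root-owned /etc/ld.so.cache. A possible workaround, my guess rather than anything from the official docs, is to exec as root:)

```shell
# /etc/ld.so.cache inside the container is owned by root, but the compose
# file starts the container as user 1000:1000, so ldconfig cannot write it.
# Exec-ing with -u root runs the command as root inside the container.
# (Assumes the container name "jellyfin" from the compose file.)
docker exec -u root -it jellyfin ldconfig
```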


Anyway, I tried to just enable transcoding without that step to see what would happen, and I get "Playback failed due to a fatal player error" when I try to watch something that requires transcoding. Checking the logs, I see this at the end:
Code:
ffmpeg version 7.0.2-Jellyfin Copyright (c) 2000-2024 the FFmpeg developers
  built with gcc 12 (Debian 12.2.0-14)
  configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-ptx-compression --disable-static --disable-libxcb --disable-sdl2 --disable-xlib --enable-lto=auto --enable-gpl --enable-version3 --enable-shared --enable-gmp --enable-gnutls --enable-chromaprint --enable-opencl --enable-libdrm --enable-libxml2 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libfontconfig --enable-libharfbuzz --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libopenmpt --enable-libdav1d --enable-libsvtav1 --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --enable-libfdk-aac --arch=amd64 --enable-libshaderc --enable-libplacebo --enable-vulkan --enable-vaapi --enable-amf --enable-libvpl --enable-ffnvcodec --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvdec --enable-nvenc
  libavutil      59.  8.100 / 59.  8.100
  libavcodec    61.  3.100 / 61.  3.100
  libavformat    61.  1.100 / 61.  1.100
  libavdevice    61.  1.100 / 61.  1.100
  libavfilter    10.  1.100 / 10.  1.100
  libswscale      8.  1.100 /  8.  1.100
  libswresample  5.  1.100 /  5.  1.100
  libpostproc    58.  1.100 / 58.  1.100
[AVHWDeviceContext @ 0x564d10bf4d40] Cannot load libcuda.so.1
[AVHWDeviceContext @ 0x564d10bf4d40] Could not dynamically load CUDA
Device creation failed: -1.
Failed to set value 'cuda=cu:0' for option 'init_hw_device': Operation not permitted
Error parsing global options: Operation not permitted


So I'm obviously doing something wrong lol. Is it because my user is not added to the video group? Here is my docker-compose yaml file btw
Code:
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    user: 1000:1000
    network_mode: 'host'
    ports:
      - 8096:8096
    volumes:
      - C:\Users\User1\docker\jellyfin\config:/config
      - C:\Users\User1\docker\jellyfin\cache:/cache
      - type: bind
        source: D:\Documents\Videos\Movies
        target: /media
        read_only: true
    restart: 'unless-stopped'
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [all]
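(A hedged sketch of the GPU-related parts of that compose file, assuming Docker Desktop with WSL2. Two things stand out: with `network_mode: host` the `ports:` mapping is ignored anyway, and the Compose spec expects `capabilities: [gpu]` in the device reservation; with that stanza present, the separate `runtime: nvidia` line is usually redundant. The commented-out `user:` line is only a guess tied to the ldconfig permission error above, not an official recommendation:)

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    # user: 1000:1000   # running as root instead would avoid the ldconfig
    #                   # "Permission denied" error; re-add once it works
    network_mode: 'host'   # with host networking, a ports: mapping is ignored
    volumes:
      - C:\Users\User1\docker\jellyfin\config:/config
      - C:\Users\User1\docker\jellyfin\cache:/cache
      - type: bind
        source: D:\Documents\Videos\Movies
        target: /media
        read_only: true
    restart: 'unless-stopped'
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]   # Compose spec uses "gpu" here
```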



RE: Transcoding Issues on Windows Docker - Efficient_Good_5784 - 2025-01-12

You're using WSL, right? Make sure you're on WSL2 and have an Nvidia GPU (it's the only brand of GPU supported on Docker with WSL2).

This might be helpful: https://docs.docker.com/desktop/features/gpu/


RE: Transcoding Issues on Windows Docker - mistamoronic - 2025-01-12

Yes, I've done that part, and it works when I use the benchmark command on that page.


RE: Transcoding Issues on Windows Docker - mistamoronic - 2025-01-13

(2025-01-12, 08:20 AM)Efficient_Good_5784 Wrote: You're using WSL right? Make sure you're on WSL2 and have an Nvidia GPU (as it's the only brand of GPU that's supported on Docker with WSL2).

This might be helpful: https://docs.docker.com/desktop/features/gpu/

Following up on this, here is the output when I run the "docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark" command from that page.

Code:
docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen      (run n-body simulation in fullscreen mode)
        -fp64            (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>      (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>  (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "Ampere" with compute capability 8.6

> Compute 8.6 CUDA device: [NVIDIA GeForce RTX 3070]
47104 bodies, total time for 10 iterations: 40.967 ms
= 541.601 billion interactions per second
= 10832.025 single-precision GFLOP/s at 20 flops per interaction


However, when I try to run the "docker exec -it jellyfin nvidia-smi" command from this page: https://jellyfin.org/docs/general/administration/hardware-acceleration/nvidia/#configure-with-linux-virtualization
this is the result I get:
Code:
docker exec -it jellyfin nvidia-smi

OCI runtime exec failed: exec failed: unable to start container process: exec: "nvidia-smi": executable file not found in $PATH: unknown
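(One possible explanation, speculative rather than from the official docs: the Jellyfin image doesn't ship `nvidia-smi`; the NVIDIA runtime injects the driver utilities and libraries when the container starts. So if `nvidia-smi` is missing inside the container, the container was probably created without GPU access and needs to be recreated after the GPU settings are fixed. A quick diagnostic sketch:)

```shell
# If the NVIDIA runtime attached the GPU, driver files should be visible
# inside the container; empty output suggests the container was started
# without GPU access. On WSL2 the driver libraries typically live under
# /usr/lib/wsl/lib and the GPU device node is /dev/dxg.
docker exec jellyfin sh -c 'ls /usr/lib/wsl/lib/libcuda.so* 2>/dev/null'
docker exec jellyfin sh -c 'ls /dev/dxg /dev/nvidia* 2>/dev/null'

# If both are empty, recreate the container after fixing the compose file:
#   docker compose up -d --force-recreate
```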

Not sure why that is happening, because when I run the nvidia-smi test from this page on its own, it works: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/sample-workload.html

Code:
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
Tue Jan 14 05:26:59 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.77.01              Driver Version: 566.36        CUDA Version: 12.7    |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf          Pwr:Usage/Cap |          Memory-Usage | GPU-Util  Compute M. |
|                                        |                        |              MIG M. |
|=========================================+========================+======================|
|  0  NVIDIA GeForce RTX 3070        On  |  00000000:09:00.0  On |                  N/A |
| 30%  32C    P8            23W /  220W |    1626MiB /  8192MiB |      1%      Default |
|                                        |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU  GI  CI        PID  Type  Process name                              GPU Memory |
|        ID  ID                                                              Usage      |
|=========================================================================================|
|    0  N/A  N/A        37      G  /Xwayland                                  N/A      |
+-----------------------------------------------------------------------------------------+



RE: Transcoding Issues on Windows Docker - mistamoronic - 2025-01-14

Just wanted to put my transcoding settings here for additional information. Looking at the encode/decode support matrix listed here: https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new
I should be able to decode everything and encode everything except AV1.


[Image: ikanhBZ.png]