2025-12-16, 03:02 PM
Okay, so that helped me.
Here is what I f*cked up. I fixed it with some help from Perplexity AI, but this is what I did:
What was broken
On the host, libnvidia-encode.so.1 was a directory, not a shared library file, so anything that needed NVENC (the container runtime, ffmpeg, Jellyfin) crashed with “Cannot load libnvidia-encode.so.1” (quick checks for this are below the list).
CasaOS plus my manual bind‑mounts tried to mount that same path into the container, which conflicted with the Nvidia runtime and prevented it from creating its own symlink inside the container overlay.
Inside the Jellyfin container, ffmpeg therefore couldn’t load libnvidia-encode.so.1, and every transcode died with exit code 255.
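For reference, a couple of quick checks like these would have shown the broken state right away (paths assume a Debian/Ubuntu multiarch layout, and the driver version in the comment is just an example — adjust for your distro):

```bash
# On the host: this should be a symlink to the real library, not a directory
file /usr/lib/x86_64-linux-gnu/libnvidia-encode.so.1
ls -l /usr/lib/x86_64-linux-gnu/libnvidia-encode*

# Healthy output looks roughly like:
#   libnvidia-encode.so.1 -> libnvidia-encode.so.535.xxx.xx
# In my broken state, `file` reported "directory" instead.
```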
Steps that actually fixed it
Repaired the encode library on the host
Deleted the bogus directory and reinstalled the encode library so libnvidia-encode.so.1 became a proper symlink to the real .so file.
Verified that GPU containers work again by running a CUDA test container and seeing the 1080 Ti in nvidia-smi.
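Roughly what that looked like on my host. The package name and CUDA image tag are examples, so treat this as a sketch and match them to your driver version rather than copy-pasting:

```bash
# Remove the bogus directory that was masquerading as the library
sudo rm -rf /usr/lib/x86_64-linux-gnu/libnvidia-encode.so.1

# Reinstall the package that owns libnvidia-encode
# (example name: libnvidia-encode-535 for a 535-series driver on Ubuntu)
sudo apt-get install --reinstall libnvidia-encode-535
sudo ldconfig

# Sanity check: a throwaway CUDA container should see the 1080 Ti
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```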
Stopped fighting the Nvidia runtime
Removed all custom bind‑mounts of libnvidia-encode.so* from the Jellyfin container configuration so the Nvidia container runtime could inject its own libraries and symlinks.
Left only the correct GPU settings:
Devices: /dev/nvidia0, /dev/nvidiactl, /dev/nvidia-uvm, /dev/nvidia-modeset, /dev/dri
Env: NVIDIA_VISIBLE_DEVICES=all, NVIDIA_DRIVER_CAPABILITIES=compute,video,utility.
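If you're launching the container by hand instead of through CasaOS, the equivalent docker run flags look roughly like this (container name, ports and volume paths are just illustrative — the point is the devices and env vars, with no libnvidia-* bind mounts anywhere):

```bash
docker run -d --name jellyfin \
  --runtime=nvidia \
  --device /dev/nvidia0 --device /dev/nvidiactl \
  --device /dev/nvidia-uvm --device /dev/nvidia-modeset \
  --device /dev/dri \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  -p 8096:8096 \
  jellyfin/jellyfin
```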
Confirmed NVENC inside the container
Used the Jellyfin ffmpeg binary to test hardware acceleration directly with a synthetic source, and it successfully encoded video with h264_nvenc for multiple frames without errors.
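The test was along these lines, run against the container (the ffmpeg path is where jellyfin-ffmpeg lives in the official image; adjust if yours differs):

```bash
docker exec -it jellyfin /usr/lib/jellyfin-ffmpeg/ffmpeg \
  -f lavfi -i testsrc2=duration=5:size=1920x1080:rate=30 \
  -c:v h264_nvenc -frames:v 150 -f null -
# If NVENC is healthy this finishes without "Cannot load libnvidia-encode.so.1"
# and reports encoded frames instead of dying with exit code 255.
```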
Pointed Jellyfin at the working ffmpeg + NVENC
Set the ffmpeg path in Jellyfin to the container’s jellyfin-ffmpeg binary and enabled NVIDIA NVENC hardware acceleration.
As a result, Jellyfin now calls the same working NVENC pipeline, and transcoding runs on the 1080 Ti instead of failing immediately.
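An easy way to double-check the GPU is actually doing the work: start a transcode in a browser and watch the encoder from the host (plain nvidia-smi, nothing Jellyfin-specific):

```bash
# Encoder utilization should climb above 0% while a transcode is running
nvidia-smi dmon -s u
# Or just look for the jellyfin-ffmpeg process in the process list
nvidia-smi
```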

