Yesterday, 07:44 AM
(This post was last modified: Yesterday, 07:50 AM by skittle. Edited 1 time in total.)
I ran into this as well, or at least something very similar, with Jellyfin running in a Docker container on Debian 13. Here is the original error I got:
Code:
$ docker compose up
Attaching to jellyfin
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running prestart hook #0: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: nvml error: driver not loaded
I then noticed that nvidia-smi reported it couldn't communicate with the driver. Weirder still, no NVIDIA kernel module was loaded at all, even though I had definitely installed the driver not a week earlier.
Code:
$ nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
$ lsmod | grep nvidia
$
Here was the smoking gun: my running kernel has a different version number than the one the nvidia-current module was built against!
Code:
$ uname -r
6.12.57+deb13-amd64
$ sudo dkms status
nvidia-current/550.163.01, 6.12.48+deb13-amd64, x86_64: installed
I think what happened is that at some point after setting up the driver, I ran a seemingly harmless apt upgrade without noticing that it pulled in a new kernel. Because the kernel version changed and nothing was around to rebuild the NVIDIA module for it, there was no module to load when the machine later rebooted, which ultimately led to the GPU transcoding errors. Reinstalling the NVIDIA driver at this point would fix it, since that builds a module for your current kernel, but you can also rebuild the module yourself, and that's probably a bit faster. The key observation is that the culprit is probably not some new piece of software you installed; a routine update that bumped your kernel is enough to leave the NVIDIA driver without a module for the kernel you are now running.
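If you want a quick way to confirm the mismatch, a one-liner like the sketch below should do it; it assumes the same Debian nvidia-current DKMS package that shows up in my output above.
Code:
# prints the DKMS entry only if a module has been built for the kernel you are running right now
sudo dkms status nvidia-current | grep "$(uname -r)" || echo "no nvidia module built for $(uname -r)"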
If you're like me and you've confirmed that the running kernel doesn't match the kernel the driver module was built for, here is how to rebuild the NVIDIA module for your new kernel.
First, make sure the headers for your current kernel are installed, replacing my version (6.12.57+deb13-amd64) with whatever uname -r reports. You can also reboot at this point; it might not strictly be needed, but it ensures everything matches once you're back up. When the header installation finishes, run dkms autoinstall:
Code:
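# install the headers that match the running kernel (adjust the version to whatever uname -r reports)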
sudo apt install -y linux-headers-6.12.57+deb13-amd64
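# build and install every registered DKMS module (including nvidia-current) for the running kernel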
sudo dkms autoinstall
Hopefully that finishes successfully and you see something like:
Code:
Autoinstall on 6.12.57+deb13-amd64 succeeded for module(s) nvidia-current.
Hooray! Now you can load the module manually, confirm that it's listed, and run nvidia-smi to make sure everything works:
Code:
$ sudo modprobe nvidia
$ lsmod | grep nvidia
nvidia 60702720 0
drm 774144 5 drm_kms_helper,drm_shmem_helper,nvidia,virtio_gpu
$ nvidia-smi
Sun Dec 14 23:10:11 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.163.01 Driver Version: 550.163.01 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 Quadro P2000 Off | 00000000:09:00.0 Off | N/A |
| 51% 25C P0 18W / 75W | 0MiB / 5120MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
Now Jellyfin should be happy, because the driver is working properly again; you shouldn't need to reinstall or change anything else. To keep this from biting me again, I put the kernel metapackages on hold in apt so that a routine upgrade won't pull in a new kernel, even when a newer one is available. From now on it's my responsibility to check for updates, upgrade the kernel deliberately, and rebuild the NVIDIA module when I do.
Code:
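# hold the kernel metapackages so a routine apt upgrade can't pull in a new kernel unnoticed
# (to upgrade deliberately later: apt-mark unhold them, upgrade, then rerun sudo dkms autoinstall)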
sudo apt-mark hold linux-image-amd64 linux-headers-amd64
.
..
...
..
.
linux-image-amd64 set on hold.
linux-headers-amd64 set on hold.
Hopefully this helps somebody else who searches for the same error messages I did and stumbles on this thread, even if their issue isn't exactly the same. Good luck!
