2025-01-13, 09:25 PM
(2025-01-12, 08:20 AM)Efficient_Good_5784 Wrote: You're using WSL, right? Make sure you're on WSL2 and have an Nvidia GPU (it's the only GPU brand supported by Docker on WSL2).
This might be helpful: https://docs.docker.com/desktop/features/gpu/
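For reference, this is how I checked the WSL2 prerequisites before touching Docker (as far as I know these are the standard commands for it):
Code:
# From a Windows terminal: the VERSION column should show 2 for the distro
wsl.exe -l -v

# Inside the WSL distro: the Windows NVIDIA driver already exposes nvidia-smi here
nvidia-smi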
Following up on this, here is the output I get when I run the "docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark" command from that page.
Code:
docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen       (run n-body simulation in fullscreen mode)
        -fp64             (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>       (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "Ampere" with compute capability 8.6
> Compute 8.6 CUDA device: [NVIDIA GeForce RTX 3070]
47104 bodies, total time for 10 iterations: 40.967 ms
= 541.601 billion interactions per second
= 10832.025 single-precision GFLOP/s at 20 flops per interaction
However, when I run the "docker exec -it jellyfin nvidia-smi" command from this page https://jellyfin.org/docs/general/admini...ualization, this is the result I get:
Code:
docker exec -it jellyfin nvidia-smi
OCI runtime exec failed: exec failed: unable to start container process: exec: "nvidia-smi": executable file not found in $PATH: unknown
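In case it helps narrow things down, I believe docker inspect can show which runtime and GPU device requests the container was created with (container name as above; these are just the fields I think are relevant):
Code:
# Should print the container runtime (runc vs nvidia) and any GPU device requests
docker inspect --format '{{.HostConfig.Runtime}} {{.HostConfig.DeviceRequests}}' jellyfin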
I'm not sure why that's happening, because when I run the nvidia-smi test from this page: https://docs.nvidia.com/datacenter/cloud...kload.html on its own, it works:
Code:
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
Tue Jan 14 05:26:59 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.77.01              Driver Version: 566.36         CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3070        On  |   00000000:09:00.0  On |                  N/A |
| 30%   32C    P8             23W /  220W |    1626MiB /   8192MiB |      1%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A              37    G   /Xwayland                                   N/A  |
+-----------------------------------------------------------------------------------------+
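Since that standalone test works, my guess (and it's only a guess) is that the jellyfin container itself was created without GPU access, so the NVIDIA runtime never mounted nvidia-smi into it. If that's the case, recreating the container with the GPU options should fix it; something along these lines, where the volume paths are just placeholders for my setup:
Code:
docker stop jellyfin && docker rm jellyfin
docker run -d \
  --name jellyfin \
  --runtime=nvidia \
  --gpus all \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -v /path/to/config:/config \
  -v /path/to/cache:/cache \
  -v /path/to/media:/media \
  -p 8096:8096 \
  jellyfin/jellyfin
If that's right, the "docker exec -it jellyfin nvidia-smi" check from the Jellyfin docs should then print the same table as the Ubuntu test above.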