2025-01-13, 04:59 PM
(This post was last modified: 2025-01-13, 05:03 PM by JellyEll. Edited 1 time in total.)
Hi All
[edits for general coherence sake]
Apologies, I've scoured the forums, and although I can see problems like mine, trying the solutions either hasn't worked, or I've not seen the exact same behaviour in Jellyfin or the Jellyfin logs. I'll put all of the information I can here; some may be too much, and I may have missed other parts. Feel free to ask for more, and thank you in advance.
When users are direct streaming OR transcoding, the time to start a film, or to scrub through it, is longer than I would expect (ranging from 12 seconds to 3 minutes). It is consistently slower when transcoding, but still often slow when direct streaming. Most of the time, once the stream has started, it plays fine, but scrubbing forwards or backwards causes buffering again.
I'm running Jellyfin 10.10.3 on K3s (not sure how we feel about container orchestration, but I already have a bunch of services in Kubernetes, so it made sense).
The path is:
End User -> CloudFlare Proxy -> Router (900 down / 150 up) -> Gigabit Switch -> One of the Kubernetes hosts (https://www.aliexpress.com/item/1005005234874016.html, 16Gi RAM instance) -> Gigabit Switch -> Synology TS-653 Pro w/ WD Golds.
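As a sanity check on the upload side of that path, here is a back-of-envelope calculation. The 10 Mbps bitrate is my assumption for a typical 1080p HEVC file, not a measurement; adjust it to the actual media:

```python
# Rough time to push one 6 s HLS segment upstream.
bitrate_mbps = 10      # ASSUMED stream bitrate for a 1080p HEVC file
upload_mbps = 150      # upload bandwidth from the router spec above
segment_seconds = 6    # hls_time used by Jellyfin's ffmpeg command

segment_megabits = bitrate_mbps * segment_seconds   # 60 Mb per segment
transfer_seconds = segment_megabits / upload_mbps   # ~0.4 s per segment

print(f"{transfer_seconds:.1f} s to upload one segment")  # → 0.4 s
```

At well under a second per 6-second segment, raw upload bandwidth alone shouldn't account for 12-second to 3-minute start times, which is part of why I doubt it's network related.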
I'm running HW transcoding with QSV (I've also tried VAAPI) using /dev/dri/renderD128. I've logged into the container and can confirm that the hardware is passed through properly and that it is accessible by the running user.
I read in some other posts that if the ffmpeg command doesn't have a path after the -i switch, it isn't HW transcoding, but I'm not sure whether that's accurate. I've configured it the way I understood those posts to recommend.
Code:
root@jellyfin-76d4d6768c-qqxj7:/# ls -lah /dev/dri/renderD128
crw-rw---- 1 root 104 226, 128 Jan 12 19:03 /dev/dri/renderD128
I've tried creating a new library on local storage (m2), copying a file to it, and playing that; this doesn't seem to change the buffer time.
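For anyone who wants to reproduce the storage comparison more directly, a minimal sequential-read timing sketch follows. The scratch file makes it self-contained; in practice you'd point `read_throughput` at a movie file on the NAS mount versus the m2 library (note the OS page cache can inflate the number, so this is an upper bound):

```python
import os
import tempfile
import time

def read_throughput(path: str, chunk_mb: int = 8, limit_mb: int = 512) -> float:
    """Sequentially read up to limit_mb from path; return MB/s."""
    chunk = chunk_mb * 1024 * 1024
    read = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while read < limit_mb * 1024 * 1024:
            data = f.read(chunk)
            if not data:
                break
            read += len(data)
    elapsed = time.perf_counter() - start
    return (read / (1024 * 1024)) / elapsed

# Self-contained example: write a 32 MB scratch file, then time reading it back.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(32 * 1024 * 1024))
rate = read_throughput(tmp.name, limit_mb=32)
os.unlink(tmp.name)
print(f"{rate:.0f} MB/s")
```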
I've tried disabling the proxying in CloudFlare, no change there.
I've recently moved house, with new internet and a new switch; that hasn't sped things up, so I won't rule it out, but I'm loath to believe it's network related.
These are the logs from a stream start that took 24 seconds from pressing play to the media actually playing:
Code:
[16:42:44] [INF] [3] Jellyfin.Api.Helpers.MediaInfoHelper: User policy for Me. EnablePlaybackRemuxing: True EnableVideoPlaybackTranscoding: True EnableAudioPlaybackTranscoding: True
[16:42:44] [INF] [92] Jellyfin.Api.Controllers.DynamicHlsController: Current HLS implementation doesn't support non-keyframe breaks but one is requested, ignoring that request
[16:42:44] [INF] [92] MediaBrowser.MediaEncoding.Transcoding.TranscodeManager: /usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 200M -probesize 1G -fflags +genpts -f matroska -i file:"/movies/The Jester (2023)/The.Jester.2023.1080p.BluRay.DDP5.1.x265.10bit-GalaxyRG265[TGx].mkv" -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:s -codec:v:0 copy -tag:v:0 hvc1 -bsf:v hevc_mp4toannexb -start_at_zero -codec:a:0 libfdk_aac -ac 6 -ab 640000 -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 6 -hls_segment_type fmp4 -hls_fmp4_init_filename "4d56fba990b106dde8bbacde0ded367b-1.mp4" -start_number 0 -hls_segment_filename "/config/data/transcodes/4d56fba990b106dde8bbacde0ded367b%d.mp4" -hls_playlist_type vod -hls_list_size 0 -y "/config/data/transcodes/4d56fba990b106dde8bbacde0ded367b.m3u8"
[16:42:48] [INF] [16] MediaBrowser.Controller.MediaEncoding.TranscodingJob: Stopping ffmpeg process with q command for /config/data/transcodes/4d56fba990b106dde8bbacde0ded367b.m3u8
[16:42:49] [INF] [92] MediaBrowser.MediaEncoding.Transcoding.TranscodeManager: FFmpeg exited with code 0
[16:42:49] [INF] [16] Jellyfin.Api.Controllers.DynamicHlsController: Current HLS implementation doesn't support non-keyframe breaks but one is requested, ignoring that request
[16:42:49] [INF] [16] MediaBrowser.MediaEncoding.Transcoding.TranscodeManager: /usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 200M -probesize 1G -ss 00:51:44.851 -noaccurate_seek -fflags +genpts -f matroska -i file:"/movies/The Jester (2023)/The.Jester.2023.1080p.BluRay.DDP5.1.x265.10bit-GalaxyRG265[TGx].mkv" -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:s -codec:v:0 copy -tag:v:0 hvc1 -bsf:v hevc_mp4toannexb -start_at_zero -codec:a:0 libfdk_aac -ac 6 -ab 640000 -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 6 -hls_segment_type fmp4 -hls_fmp4_init_filename "4d56fba990b106dde8bbacde0ded367b-1.mp4" -start_number 516 -hls_segment_filename "/config/data/transcodes/4d56fba990b106dde8bbacde0ded367b%d.mp4" -hls_playlist_type vod -hls_list_size 0 -y "/config/data/transcodes/4d56fba990b106dde8bbacde0ded367b.m3u8"
[16:43:00] [INF] [100] MediaBrowser.MediaEncoding.Transcoding.TranscodeManager: Transcoding kill timer stopped for JobId b14a2a04e07d43bb90b42037a654885c PlaySessionId c95282ff2dc74216a118f16bce5c1bc3. Killing transcoding
[16:43:00] [INF] [100] MediaBrowser.Controller.MediaEncoding.TranscodingJob: Stopping ffmpeg process with q command for /config/data/transcodes/ca4dc911ca8351a6dd6d09ac33a07bb6.m3u8
[16:43:00] [INF] [100] MediaBrowser.MediaEncoding.Transcoding.TranscodeManager: FFmpeg exited with code 0
[16:43:00] [INF] [100] MediaBrowser.MediaEncoding.Transcoding.TranscodeManager: Deleting partial stream file(s) /config/data/transcodes/ca4dc911ca8351a6dd6d09ac33a07bb6.m3u8
[16:43:06] [INF] [16] Emby.Server.Implementations.Session.SessionWebSocketListener: Sending ForceKeepAlive message to 1 inactive WebSockets.
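To make the sequence easier to follow, here is a small script that diffs the timestamps copied from the log excerpt above, showing where the time goes between pressing play and playback:

```python
from datetime import datetime

# Timestamps and event summaries copied from the log excerpt above.
events = [
    ("16:42:44", "first ffmpeg (stream copy + audio transcode) launched"),
    ("16:42:48", "ffmpeg stopped with q command"),
    ("16:42:49", "second ffmpeg launched with -ss 00:51:44.851 (seek)"),
    ("16:43:00", "kill timer stopped, transcode killed"),
    ("16:43:06", "ForceKeepAlive sent to 1 inactive WebSocket"),
]

fmt = "%H:%M:%S"
start = datetime.strptime(events[0][0], fmt)
for stamp, label in events:
    delta = datetime.strptime(stamp, fmt) - start
    print(f"+{int(delta.total_seconds()):>2}s  {label}")
```

The log itself spans 22 seconds; the extra couple of seconds before playback presumably sit on the client side.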
This is all with just one user at a time and I see you wonderful people running a lot more on a lot less hardware, so I'm clearly missing something.
Any ideas on where my configuration is wrong, or how to tune it, would be hugely appreciated. Thank you for reading through!