2025-04-02, 08:28 PM
(This post was last modified: 2025-04-03, 05:43 PM by timitt. Edited 1 time in total.)
I did some tests on that audio delay. It is definitely a problem with ffmpeg, not with hls.js, Jellyfin, or WebOS. If the audio does not start until the first segment file has finished, it just won't work. ffmpeg prints a notification about the issue but still creates the files. If I just copy the audio into the stream, the stream is completely broken (at least for MPC-HC), but if we transcode it, ffmpeg creates playable files with the wrong timing.
If I use ffmpeg to create an MPEG-TS stream instead, it works in both cases (copy or transcode).
Also, if I make the segments larger (by setting hls_time higher), the delay fits within the first segment file, and then the fMP4 stream works in both cases too (copy or transcode).
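For reference, the kind of commands I've been testing look roughly like this. These are illustrative sketches, not the exact invocation Jellyfin builds; the input file name and segment length are placeholders:

```shell
# fMP4 HLS: with short segments the initial audio delay breaks the timing
ffmpeg -i input.mkv -c:v copy -c:a aac \
  -f hls -hls_segment_type fmp4 -hls_time 3 broken.m3u8

# MPEG-TS HLS: works in both cases (copy or transcode)
ffmpeg -i input.mkv -c:v copy -c:a copy \
  -f hls -hls_segment_type mpegts -hls_time 3 works.m3u8

# fMP4 HLS with larger segments: the delay fits inside the
# first segment, so playback timing comes out right
ffmpeg -i input.mkv -c:v copy -c:a copy \
  -f hls -hls_segment_type fmp4 -hls_time 10 also_works.m3u8
```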
EDIT: I found one way to solve the audio delay issue with fMP4 too, but it seems to require separate audio and video streams, like in my first mail: you just need to add -movflags delay_moov to the audio stream. I doubt separating the streams will happen anytime soon (or ever), though.
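To show what I mean by separate streams, something along these lines (again a sketch with placeholder names, not Jellyfin's actual command):

```shell
# video-only fMP4 HLS stream
ffmpeg -i input.mkv -map 0:v -c:v copy \
  -f hls -hls_segment_type fmp4 -hls_time 3 video.m3u8

# audio-only fMP4 HLS stream; delay_moov delays writing the moov atom
# so the initial audio offset is accounted for in the timing
ffmpeg -i input.mkv -map 0:a -c:a aac -movflags delay_moov \
  -f hls -hls_segment_type fmp4 -hls_time 3 audio.m3u8
```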