Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - Printable Version

+- Jellyfin Forum (https://forum.jellyfin.org)
+-- Forum: Off Topic (https://forum.jellyfin.org/f-off-topic)
+--- Forum: General Discussion (https://forum.jellyfin.org/f-general-discussion)
+--- Thread: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) (/t-encoding-discussion-megathread-ffmpeg-handbrake-av1-etc)
RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - TheDreadPirate - 2024-12-04

What is the full ffmpeg command you used? And can you run this command and share the output?

Code:
ffprobe -loglevel error -i "video.mkv" -select_streams v -show_entries stream -v quiet -of json


RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - TheDreadPirate - 2024-12-04

Circling back to the cropdetect. Two things: 1) I got accelerated decoding working with crop detect. 2) I've recently re-encountered an edge case that I've run into a few times in the past.

The accelerated command looks like this. It doesn't make crop detection any faster; in fact, it slows things down by about 50%. But it massively lowers CPU usage.

Code:
ffmpeg -init_hw_device vaapi=va:,driver=iHD,kernel_driver=i915 -init_hw_device qsv=qs@va -filter_hw_device qs -hwaccel qsv -hwaccel_output_format qsv -ss 120 -t 5:00 -i Bookworm.mkv -vf hwdownload,format=p010le,fps=1/2,cropdetect=mode=black:reset=60 -f null - 2>&1

It's probably the hwdownload slowing things down. But, as you mentioned, I couldn't find a QSV filter to do everything in hardware. You will have to do some detection first with ffprobe: the pixel format passed to the format filter needs to match the video. "p010le" for 10-bit video and "nv12" for 8-bit video are the two that QSV supports.

Regarding #2: if, at the start of the cropdetect at the -ss timestamp, your video has a different aspect ratio, either for stylistic effect or because it is a small logo on an all-black background, the cropdetect will get it wrong if the window isn't large enough. Your -t 5:00 helped me with that. I had been using "-ss 120 -to 180". MOST videos will have gone back to 16:9 or 21:9 by the end of that 5 minute window. But I have a new movie with an extended intro/prelude at a different resolution and aspect ratio than the rest of the movie, and it lasts the entirety of the 5 minute cropdetect window.

Extending how long crop detect runs seems like a waste. I'm debating how to handle this edge case. Two crop detects at different timestamps? Pick the one with the larger canvas? Each crop detect at a dynamic start time depending on the length of the video?


RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - bitmap - 2024-12-05

I would skip further in. I've been using -ss 300 (five minutes into the media) to avoid detecting any intros. It's arbitrary, but you're less likely to get anything odd this way. The example I found had cropdetect running for 10 minutes, likely to avoid situations like the edge case you describe.

I would pick a value for -ss that is going to fit MOST cases. Skipping five or seven minutes into the media is unlikely to run into those odd aspect ratio or long intro problems, and it's still a short enough skip that you're not likely to land on something midway through the movie. Even if you do, that's why the crop values get sorted and counted so the most frequent one wins (a rough pipeline for this is sketched at the end of this post). Dunno if you've ever run cropdetect with full terminal output, but it's... verbose.

(2024-12-04, 06:05 PM)TheDreadPirate Wrote: What is the full ffmpeg command you used?

My go-to is libsvtav1 if I have issues with HWA encoding. I ran this media through that and was happy with the results as far as size and quality. I've just not seen this sort of artifacting before...
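A rough sketch of that sort-and-count approach, assuming a software decode (the accelerated decode from earlier in the thread could be swapped in), a 10-minute window starting at -ss 300, and one sampled frame every two seconds. The file name and window are placeholders; the parsing simply keeps the most frequent crop=W:H:X:Y string that cropdetect logs:

Code:
# Sketch only: pick the most frequent crop value cropdetect reports.
INPUT="video.mkv"

# Sample one frame every two seconds for ten minutes, starting five minutes in,
# and keep only the crop=W:H:X:Y strings from the cropdetect log (on stderr).
CROP=$(ffmpeg -hide_banner -ss 300 -t 600 -i "$INPUT" \
         -vf fps=1/2,cropdetect=mode=black:reset=60 -f null - 2>&1 \
       | grep -o 'crop=[0-9:]*' \
       | sort | uniq -c | sort -nr | head -n1 | awk '{print $2}')

echo "Most frequent crop value: $CROP"
# $CROP can then be dropped straight into the encode, e.g. -vf "$CROP,..."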
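And on TheDreadPirate's point further up about matching the format filter to the source's bit depth before hwdownload, a quick ffprobe check along these lines might do. The pattern match on pix_fmt is a simplification (it only distinguishes common 10-bit formats from everything else), and the variable names are placeholders:

Code:
INPUT="video.mkv"
# Read the first video stream's pixel format.
PIX_FMT=$(ffprobe -v error -select_streams v:0 \
            -show_entries stream=pix_fmt -of default=nw=1:nk=1 "$INPUT")

# Pick the matching software format for hwdownload,format=...
case "$PIX_FMT" in
  *10le|*10be) HW_FORMAT="p010le" ;;  # 10-bit source
  *)           HW_FORMAT="nv12"   ;;  # assume 8-bit otherwise
esac
echo "pix_fmt=$PIX_FMT -> use format=$HW_FORMAT"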
RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - bitmap - 2024-12-06

Here's the full command I used for that messed up encode:

Code:
MEDIA="Media" && INPUT="Media.mkv" && \

It's a piece of media I struggled to get to a reasonable size, or to get much of a reduction on at all, with av1_qsv, so I was trying to reduce it to 720p and keep the bitrate high. The result is what you see above. Reproducible through multiple encodes.


RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - bitmap - 2024-12-27

(2024-11-08, 09:11 PM)TheDreadPirate Wrote: Our chat over in troubleshooting got me to come back here and update my process. Now that jf-ffmpeg7 has Dolby Vision removal bundled, it turned a multi-step workflow into a "one liner". This also removed a bunch of steps I needed to take to keep video, audio, and subtitles in sync.

Okay, so I've found a happy medium. The code you provided here does AMAZING things with recently-produced media and offers some improvement on older, grainy media. For my test media, file sizes are approximately halved for live action with light to zero grain. I see much less improvement in file size with animated media, grain or no grain. For the latter, non-grainy sources work pretty well with vanilla av1_qsv, so I'm not too hurt on that front.

What I've done is research on the web as well as toil through LOTS of trial and error (mostly error, and I can't overstate how much error) to find a good solution for grainier media, animated and live action alike. I wanted to keep some grain but reduce file sizes while maintaining the visual clarity and detail I've come to expect from my encodes. Earlier I mentioned using nlmeans_opencl, but I didn't have a good tune put together, which hurt performance and didn't provide the results I wanted. Now? I have an easily-modified solution that provides amazing results. The only issue is that I wouldn't recommend automating it... I'm not aware of a noise-detecting algorithm that could provide something like a first pass or a scene-by-scene CRF/QP map.

Code:
MEDIA="MEDIA" && INPUT="INPUT" && \

So the big thing here is that I figured out how to run nlmeans_opencl in conjunction with QSV filters through hwdownload and format. This offers the amazing performance of nlmeans_opencl (compared to vanilla nlmeans) while maintaining the flexibility of vpp_qsv or other QSV filters. You can also run software filters BEFORE the first hwupload (e.g., crop, or scale if you want lanczos or other flags). You could likely run them after as well, or use an alternative like scale_opencl, which supports lanczos as an algorithm. Deinterlacing gets super wonky, so the solution is to use filter_complex to tie everything together, and it works like a charm (a rough sketch of this kind of chain follows below).

Now, I don't really benchmark, but I average around 10x encoding speed with just QSV on 1080p content. It's a wide range because that's the nature of media. With nlmeans_opencl I generally only get 0.9-1.7x encoding speed, but I can take a really stubborn source that encodes to around 11-12 GB with QSV alone and get it down to 4-6 GB with very little loss of apparent grain and excellent retention of visual clarity. I've seen anywhere from a 30-70% reduction in file size.

On to how it's easily modifiable...

Code:
nlmeans_opencl=s=2.3:p=7:pc=3:r=7:rc=3

So this is really the filter of interest. We have strength (s), patch size (p), chroma plane patch size (pc), research window (r), and chroma plane research window (rc).
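To put that filter in context, here is a minimal sketch of the kind of filter_complex chain described above. It is not the actual command (which was truncated): the OpenCL device index, crop values, nv12 (8-bit) format, and the av1_qsv preset/quality are all assumptions to adjust for your source (e.g. a 10-bit format for 10-bit media, if your OpenCL stack handles it):

Code:
INPUT="INPUT.mkv" && OUTPUT="OUTPUT.mkv" && \
ffmpeg -init_hw_device opencl=ocl:0.0 -filter_hw_device ocl -i "$INPUT" \
  -filter_complex "[0:v]crop=1920:800:0:140,format=nv12,hwupload,nlmeans_opencl=s=2.3:p=7:pc=3:r=7:rc=3,hwdownload,format=nv12[v]" \
  -map "[v]" -map 0:a? -c:v av1_qsv -preset slower -global_quality 27 \
  -c:a copy "$OUTPUT"

The software crop runs before the first hwupload, the denoising happens on the OpenCL device, and the frames come back to system memory for av1_qsv, which accepts system-memory input and handles the QSV side itself.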
So if you have a stubborn source and the ability to use OpenCL filters, I highly recommend cropping your source and giving nlmeans_opencl a try. Play with the strength for each source (which is why automation is difficult) and see what you get. I recently took a source with 22 segments of 15 GB each down to 650 MB - 1.7 GB per segment with, admittedly, more quality loss than I'd prefer due to bumping up my preferred CRF, but nlmeans_opencl was a godsend for getting this media to a manageable size.
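If it helps anyone tune the strength per source, one low-effort approach is to encode a short sample at a few values and compare them by eye. This is only a sketch: the input name, the one-minute window starting ten minutes in, the OpenCL device index, and the quality value are all placeholders.

Code:
INPUT="INPUT.mkv"
for S in 1.5 2.3 3.0; do
  # One-minute sample starting ten minutes in, denoised at this strength.
  ffmpeg -hide_banner -init_hw_device opencl=ocl:0.0 -filter_hw_device ocl \
    -ss 600 -t 60 -i "$INPUT" \
    -vf "format=nv12,hwupload,nlmeans_opencl=s=${S}:p=7:pc=3:r=7:rc=3,hwdownload,format=nv12" \
    -c:v av1_qsv -global_quality 27 -an "sample_s${S}.mkv"
done

Comparing the sample sizes (and how they look) against a plain QSV encode of the same window gives a quick read on how much grain reduction each strength buys before committing to a full encode.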