Jellyfin Forum
Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - Printable Version

+- Jellyfin Forum (https://forum.jellyfin.org)
+-- Forum: Off Topic (https://forum.jellyfin.org/f-off-topic)
+--- Forum: General Discussion (https://forum.jellyfin.org/f-general-discussion)
+--- Thread: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) (/t-encoding-discussion-megathread-ffmpeg-handbrake-av1-etc)



RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - bitmap - 2024-01-29

Okay, so I've been gone for a long while dealing with a variety of personal priorities, but I wanted to report that I was able to get LA_ICQ working with my A380. I had to manually compile ffmpeg (which is a given), upgrade the encoder(s), and upgrade everything Intel -- in particular the driver and OneVPL for GPUs. The only option to get to a version new enough to achieve this, in my experience, is to compile it on your own. Here's the working command:
Code:
ffmpeg -y -hide_banner -v verbose -stats \
    -hwaccel qsv -hwaccel_output_format qsv -qsv_device /dev/dri/renderD129 \
    -i "${i}" \
    -map 0 -map_metadata 0 \
        -c copy -c:v:0 "av1_qsv" -preset "slower" \
    -global_quality:v:0 23 \
    -extbrc 1 -look_ahead_depth 100 \
    -c:a libopus -b:a 160k -ac 2 \
    "${o}"

Obviously some things will be different, but all the av1_qsv encoder needs to get this functioning is -extbrc 1 and -look_ahead_depth [1-100]. I believe the lookahead limit is 100, but I can experiment with other values to see what I get. The return to quality > bitrate has yielded MUCH smaller files with exceptional quality and low bitrates, which is what I expected. My latest encode took a 790 GB series remux and brought it down to 63 GB without majorly impacting quality.
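For anyone wanting to check whether their build is new enough before going through a full recompile, the encoder's private options show up in ffmpeg's built-in help; if -extbrc and -look_ahead_depth are listed there, LA_ICQ should at least be reachable (a quick sanity check, not a guarantee the driver side cooperates):

```shell
# List av1_qsv's private options; extbrc and look_ahead_depth only
# appear if the ffmpeg build (and underlying libvpl) expose them.
ffmpeg -hide_banner -h encoder=av1_qsv | grep -E 'extbrc|look_ahead_depth'
```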


RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - Mel_Gibson_Real - 2024-01-31

Has anyone tried or found a way to add synthetic grain to QSV AV1 encodes? It's really the only thing stopping me from letting my A310 convert my entire library to AV1.

Synthetic grain has really allowed me to make perfect transcodes of film media. It even helps make some purely digital media look better in small amounts.
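For reference, the software route for synthetic grain is exposed through SVT-AV1's param string; a hedged sketch (film-grain strength and CRF values here are illustrative, not tuned recommendations):

```shell
# Software AV1 with synthetic grain via SVT-AV1 (no QSV equivalent yet).
# film-grain=8 is a moderate synthesis strength; film-grain-denoise=0
# leaves the source untouched and only signals grain to the decoder.
ffmpeg -i input.mkv \
    -c:v libsvtav1 -preset 5 -crf 24 \
    -svtav1-params "film-grain=8:film-grain-denoise=0" \
    -c:a copy output.mkv
```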


RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - TheDreadPirate - 2024-01-31

(2024-01-29, 06:20 PM)bitmap Wrote: Okay, so I've been gone for a long while dealing with a variety of personal priorities, but I wanted to report that I was able to get LA_ICQ working with my A380. I had to manually compile ffmpeg (which is a given), upgrade the encoder(s), and upgrade everything Intel -- in particular the driver and OneVPL for GPUs. The only option to get to a version new enough to achieve this, in my experience, is to compile it on your own. Here's the working command:
Code:
ffmpeg -y -hide_banner -v verbose -stats \
    -hwaccel qsv -hwaccel_output_format qsv -qsv_device /dev/dri/renderD129 \
    -i "${i}" \
    -map 0 -map_metadata 0 \
    -c copy -c:v:0 "av1_qsv" -preset "slower" \
    -global_quality:v:0 23 \
    -extbrc 1 -look_ahead_depth 100 \
    -c:a libopus -b:a 160k -ac 2 \
    "${o}"

Obviously some things will be different, but all the av1_qsv encoder needs to get this functioning is -extbrc 1 and -look_ahead_depth [1-100]. I believe the lookahead limit is 100, but I can experiment with other values to see what I get. The return to quality > bitrate has yielded MUCH smaller files with exceptional quality and low bitrates, which is what I expected. My latest encode took a 790 GB series remux and brought it down to 63 GB without majorly impacting quality.

Is jellyfin-ffmpeg6 new enough to use global_quality and/or extbrc?  I have global_quality in my current script, but I never A/B/X tested whether it does anything.


RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - bitmap - 2024-02-01

(2024-01-31, 06:36 PM)Mel_Gibson_Real Wrote: Anyone tried or found a way to add synthetic grain to qsv AV1 encodes. It really the only thing stopping me from letting my a310 convert my entire library to av1.

Synthetic grain has really allowed me to make perfect transcodes of film media. It even helps make some purely digital media look better in small amounts.

Yeah, I think that's my largest drawback right now. It's not that it can't be used; in fact, the AV1 QSV spec lists grain as one of the features, but it hasn't been implemented anywhere I know of. The interesting part for me is that my SVT-AV1 encodes with grain synthesis occasionally look worse (subjectively) than my QSV encodes. For example, I just encoded A Wind Named Amnesia from a remux and QSV did a fantastic job. My older, non-HD sources that I have to deinterlace (and that use grain for atmospheric purposes) suffer significantly.

It'll happen, the consumer tools just aren't there yet.

(2024-01-31, 06:43 PM)TheDreadPirate Wrote: Is jellyfin-ffmpeg6 new enough to use global_quality and/or extbrc?  I have global_quality in my current script, but I never A/B/X tested whether it does anything.

I don't know which snapshot jellyfin-ffmpeg derives from, but I'm using 6.1.x snapshots to accomplish this. I would assume the Jellyfin fork is pulling in some of the major changes (for AV1 encoders), but I haven't had a chance to test. Back before my hiatus I tried jellyfin-ffmpeg6 and had no luck, but I've also since discovered a number of mistakes I made along the way. For instance, negative mapping is WAY better for encoding applications, and you can map stream metadata to the end result if you wish (not just plain -map_metadata).
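To illustrate the negative-mapping point (a sketch with hypothetical stream choices, not my actual script): a leading -map 0 grabs everything, negative -map entries subtract what you don't want, and per-stream -map_metadata:s copies stream-level tags explicitly:

```shell
# Keep every stream except data streams, and copy per-stream
# metadata (titles, language tags) for video and audio explicitly.
ffmpeg -i input.mkv \
    -map 0 -map -0:d \
    -map_metadata 0 \
    -map_metadata:s:v 0:s:v -map_metadata:s:a 0:s:a \
    -c copy output.mkv
```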

I'll share more as I experiment, I have a ton of remuxes I'm working through currently with the NHL on all-star break.


RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - bitmap - 2024-02-10

@Mel_Gibson_Real So using libplacebo, you may be able to do a partially-hardware-accelerated encode and use grain synthesis with that filter. It's not standard; you'd have to compile your own ffmpeg.

@TheDreadPirate It looks like jellyfin-ffmpeg is 6.0.1, which might work. I browsed through the release notes and don't see anything about a fix since May of last year, but it definitely works now. Honestly may have been malformed scripting on my end, but even nyanmisaka mentioned that LA_ICQ was borked.


RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - nyanmisaka - 2024-02-10

The Intel driver supporting AV1 ICQ is included in the latest jellyfin-ffmpeg6 deb package. We have had the libplacebo filter enabled on Linux for a long time, and the Mesa ANV Vulkan driver that libplacebo requires is also included. So if you work out how to make it interoperate with VA-API/QSV to avoid video memory copies, applying film grain to video frames would be very fast.

https://ffmpeg.org/ffmpeg-all.html#toc-Examples-133


RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - bitmap - 2024-02-14

(2024-02-10, 08:51 AM)nyanmisaka Wrote: So if you work out how to make it interoperate with VA-API/QSV to avoid video memory copies, applying film grain to video frames would be very fast.

https://ffmpeg.org/ffmpeg-all.html#toc-Examples-133

What do you mean by avoiding video memory copies?

As in not using the hwupload filter (really helps with deinterlacing)? Or that you'd literally have two copies of each frame because of how the process works? I thought libplacebo could work with HWA so would be part of the filter pipeline...


RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - nyanmisaka - 2024-02-14

(2024-02-14, 02:45 AM)bitmap Wrote:
(2024-02-10, 08:51 AM)nyanmisaka Wrote: So if you work out how to make it interoperate with VA-API/QSV to avoid video memory copies, applying film grain to video frames would be very fast.

https://ffmpeg.org/ffmpeg-all.html#toc-Examples-133

What do you mean by avoiding video memory copies?

As in not using the hwupload filter (really helps with deinterlacing)? Or that you'd literally have two copies of each frame because of how the process works? I thought libplacebo could work with HWA so would be part of the filter pipeline...

That is, referencing/mapping VA-API/QSV memory as Vulkan memory instead of making a copy.

libplacebo currently only accepts Vulkan memory.

decoder(vaapi) -> hwmap -> libplacebo(vulkan) -> hwmap -> encoder(vaapi)
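A hedged sketch of that chain as a single ffmpeg command, following the pattern in the linked docs example. Device paths, the libplacebo options, and the exact hwmap incantation are placeholders and may need tweaking per driver; the point is that hwmap references the frames across APIs instead of copying them:

```shell
# Decode with VA-API, map (not copy) frames into Vulkan for libplacebo,
# then map back to VA-API for the hardware encoder.
ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD129 \
    -init_hw_device vulkan=vk@va \
    -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device va \
    -i input.mkv \
    -vf "hwmap=derive_device=vulkan,libplacebo=w=1920:h=1080,hwmap=derive_device=vaapi:reverse=1" \
    -c:v av1_vaapi output.mkv
```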


RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - TheDreadPirate - 2024-05-02

I've picked my automation script back up after making a couple of breakthroughs with converting DV to HDR10, language selection, programmatically selecting tracks for FLAC/vorbis conversion, and using parallel to run independent tasks simultaneously.  My small ~50 line script has grown into a monster 250+ line script (including documentation and comments).

Still tweaking, working out the bugs, finding redundancies, and finding where I can increase parallelization. And attempting to make it less jank.
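For anyone following along, the GNU parallel pattern for this is roughly the following (a sketch; encode_one is a hypothetical placeholder for whatever per-file work the real script does, and -j 2 assumes the GPU/CPU can sustain two jobs):

```shell
#!/usr/bin/env bash
# Run up to two independent encode jobs at once with GNU parallel.
encode_one() {
    local in="$1"
    # Placeholder per-file work; swap in the real ffmpeg invocation.
    ffmpeg -y -hide_banner -i "$in" \
        -c:v libsvtav1 -crf 24 -c:a copy "${in%.mkv}_av1.mkv"
}
export -f encode_one  # required so parallel's subshells can see the function

find /path/to/remuxes -name '*.mkv' -print0 | parallel -0 -j 2 encode_one {}
```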


RE: Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...) - bitmap - 2024-05-20

Anybody have experience with mixed acceleration encoding?

Thus far, I've been able to do a mixed encode with libsvtav1 (and will check my results after the encode is complete) like this:

Code:
ffmpeg -y -hide_banner -v verbose -stats \
  -init_hw_device vaapi=hw:/dev/dri/renderD129 -init_hw_device opencl=hw1@hw -filter_hw_device hw1 \
  -i "${i}" \
  -filter:v:0 "yadif=mode=1,hwupload=extra_hw_frames=64,scale_opencl=h=720:w=-1:algo=lanczos,nlmeans_opencl=1.0:7:5:3:3,hwdownload,format=yuv420p" \
  -c copy -c:v:0 libsvtav1 -pix_fmt yuv420p10le \
  -svtav1-params "preset=2:crf=18:tune=0:film-grain=10:lookahead=120" \
  -c:a libopus -b:a:0 160k -ac:a:0 2 \
  out_file.mkv

The draw is being able to use lanczos for scaling and nlmeans for denoising, both (supposedly) better than the defaults but unavailable with QSV. I guess I'm wondering if it's feasible to do mixed-acceleration encoding with QSV for decode/encode and OpenCL for filtering. Something like this...

Code:
ffmpeg -y -hide_banner -v verbose -stats \
  -init_hw_device vaapi=hw:/dev/dri/renderD129 -init_hw_device qsv=qsv@hw -init_hw_device opencl=ocl@hw \
  -hwaccel qsv -filter_hw_device ocl \
  -i "${i}" \
  -filter:v:0 "yadif=mode=1,hwupload=extra_hw_frames=64,scale_opencl=h=720:w=-1:algo=lanczos,nlmeans_opencl=1.0:7:5:3:3,hwdownload,format=yuv420p" \
  -c copy -c:v:0 libsvtav1 -pix_fmt yuv420p10le \
  -svtav1-params "preset=2:crf=18:tune=0:film-grain=10:lookahead=120" \
  -c:a libopus -b:a:0 160k -ac:a:0 2 \
  out_file.mkv

Or is this where derive_device comes in handy? I think VA-API has to be used as the source for OpenCL rather than QSV, but I'm not sure... any thoughts before I go down this rabbit hole?
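For what it's worth, the documented QSV-plus-OpenCL pattern derives the OpenCL device from the QSV one and maps frames back with reverse=1. A hedged sketch along those lines (untested on Arc; device paths and filter values are illustrative, and note that yadif is a software filter, so keeping it would force download/upload round trips anyway; deinterlace_qsv would keep the chain on-device):

```shell
# Decode on QSV, map frames to OpenCL for filtering, map back, encode on QSV.
ffmpeg -y -hide_banner -v verbose -stats \
    -init_hw_device qsv=qs:/dev/dri/renderD129 -init_hw_device opencl=ocl@qs \
    -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs \
    -i "${i}" \
    -vf "hwmap=derive_device=opencl,nlmeans_opencl=1.0:7:5:3:3,hwmap=derive_device=qsv:reverse=1:extra_hw_frames=16" \
    -c:v av1_qsv -preset slower -global_quality 23 \
    -c:a copy "${o}"
```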