2024-05-06, 08:27 AM
(2024-05-06, 07:52 AM)MrFusion Wrote: The question I have is about Hardware Acceleration, as this is a newer idea for me. I'm seeing mixed commentary about HWA for encodes.
I'm currently encoding a scene to do an A/B test using your settings against Apple VideoToolbox H.265, but I'm not confident about mapping Constant Quality from RF to the CQ slider, so I'll need to dial it in, I guess. Are there any insights on this?
Is a software encode more likely to get better quality?
1. You're likely going to have to test for yourself and decide on your range of acceptable quality. If you want a starting point, RF runs from 0-51 while CQ runs from 1-100, so a simple proportion gives you a rough conversion: (18 / 51) × 100 ≈ 35, meaning ~CQ 36 might be a good place to start if you're aiming for RF 18 quality. I saw somebody recommend starting at CQ 65 and I nearly choked, but if you're okay with feces smeared on a wall where your media used to be, give it a try. I use ffmpeg and do a significant amount of AV1 encoding. The common recommendation is CRF 25-35 for SVT-AV1 with ffmpeg, and that's just giving up way too much quality most of the time -- I generally go with Preset 3 (analogous to "slower" for non-AV1 encoders) and a CRF of 20-23, depending on how much I care about that media. Even leaving everything at CRF 20 and using film grain synthesis (without denoising), my encodes generally come out at 2-6 GB from a remux and look great.
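For reference, here's roughly what my SVT-AV1 invocation looks like. This is a sketch, not my exact command: the file names are placeholders, and film-grain=8 is an assumed middling strength (it's content-dependent; SVT-AV1 accepts 0-50).
Code:
# Software SVT-AV1 encode: Preset 3, CRF 20, grain synthesis with denoising off
ffmpeg -i input.mkv \
  -c:v libsvtav1 -preset 3 -crf 20 \
  -svtav1-params film-grain=8:film-grain-denoise=0 \
  -c:a copy \
  output.mkv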
Most of my encoding, however, is done with AV1_QSV, which is hardware-accelerated. My CRF ("global_quality") range on this tends to be 18-23, depending on content. There's no fancy film grain synth or even a decent denoising filter IMO, so these encodes tend to be larger and occasionally struggle with certain media. Which brings me to your second question.
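An AV1_QSV run looks something like the below. Again a sketch with placeholder file names; as I understand it, -global_quality is how ffmpeg exposes the constant-quality knob for the QSV encoders.
Code:
# Hardware AV1 encode on an Intel Arc via QSV
ffmpeg -init_hw_device qsv=hw -i input.mkv \
  -c:v av1_qsv -preset veryslow -global_quality 20 \
  -c:a copy \
  output.mkv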
2. Yes. Software encoding will generally give you a better result, and, currently, the best AV1 encoder out there does not support QSV acceleration. I believe they're beginning to support NVIDIA HWA encoding (or at least I saw a pull request for it). In general, software > Intel > NVIDIA >>>>> AMD in terms of quality. Hardware-accelerated encoding will also produce larger files of lesser quality much of the time. You can compensate by lowering your CRF/CQ/GQ/etc., but you'll almost always end up with larger encodes for the same quality as a software encode.
Now that I've said a lot of bad things about hardware encoding, let me tell you why a lot of people, myself included, are on the bandwagon. Intel leads the pack, and I'd say most of my encodes are indistinguishable from a good SVT-AV1 encode (file size aside). But on my machine, with an i7-13700K, an SVT-AV1 encode with the options listed above generally runs between 0.2x and 0.5x (sometimes a little quicker, content dependent). That means the encode takes two to five times the runtime of the media, and that's FAST compared to what it used to be even a year ago. My A380? I just ran through 6 1080p, 3 4K HDR, and 3 4K SDR->HD encodes in about three hours. I get 2-12x speeds with the A380 under decent conditions, and even with tonemapping (which I've only dipped my toes into) I rarely fall below 1.25x.
Lots of folks set up their machines incorrectly, botch their installs, or don't follow directions. I run everything in docker-compose (see the sketch below), and Intel HWA with Jellyfin or ffmpeg is a breeze (unless there are bugs in the software you want to use). So... are you encoding a LOT of media? Are you serving several clients simultaneously with media that might require transcoding? If so, HWA is absolutely the way to go. If you're not sure on either question, I'd think and play a bit more before investing. An Arc A380 (or even the A310) is fairly inexpensive in a hobby that's an endless money pit.
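To give you an idea of how little the Docker side takes, the Intel HWA portion of a compose file boils down to passing /dev/dri through. This is a trimmed sketch of an LSIO-style Jellyfin service, not my actual file; the paths and IDs are placeholders.
Code:
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    environment:
      - PUID=1000
      - PGID=1000
    devices:
      - /dev/dri:/dev/dri   # exposes the Arc GPU for QSV/VA-API
    volumes:
      - /path/to/config:/config
      - /path/to/media:/data/media
    ports:
      - 8096:8096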
Jellyfin 10.10.0 LSIO Docker | Ubuntu 24.04 LTS | i7-13700K | Arc A380 6 GB | 64 GB RAM | 79 TB Storage