2024-05-20, 05:32 PM
Anybody have experience with mixed acceleration encoding?
Thus far, I've been able to do a mixed encode with libsvtav1 (and will check my results after the encode is complete) like this:
Code:
ffmpeg -y -hide_banner -v verbose -stats \
-init_hw_device vaapi=hw:/dev/dri/renderD129 -init_hw_device opencl=hw1@hw -filter_hw_device hw1 \
-i "${i}" \
-filter:v:0 "yadif=mode=1,hwupload=extra_hw_frames=64,scale_opencl=h=720:w=-1:algo=lanczos,nlmeans_opencl=1.0:7:5:3:3,hwdownload,format=yuv420p" \
-c copy -c:v:0 libsvtav1 -pix_fmt yuv420p10le \
-svtav1-params "preset=2:crf=18:tune=0:film-grain=10:lookahead=120" \
-c:a libopus -b:a:0 160k -ac:a:0 2 \
out_file.mkv
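For context, that command runs inside a shell loop over the input files. A minimal sketch of the wrapper (the filenames and the output-name derivation are placeholders of mine, not from my actual script):

```shell
# Sketch of the surrounding loop; input names and output naming are
# placeholders. The echo stands in for the full ffmpeg command above.
for i in "episode01.mkv" "episode02.mkv"; do
  out="${i%.mkv}.av1.mkv"   # per-file output instead of a fixed out_file.mkv
  echo "encoding $i -> $out"
done
```

A fixed out_file.mkv inside the loop would get overwritten on every pass, so deriving the output name from `${i}` is probably worth doing anyway.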
The draw is being able to use lanczos for scaling and nlmeans for denoising, both of which are (supposedly) better than the defaults but unavailable with QSV. I guess I'm wondering if it's feasible to do mixed-acceleration encoding with QSV for decode/encode and OpenCL for filtering. Something like this...
Code:
ffmpeg -y -hide_banner -v verbose -stats \
-init_hw_device vaapi=hw:/dev/dri/renderD129 -init_hw_device qsv=qsv@hw -init_hw_device opencl=ocl@hw \
-hwaccel qsv -hwaccel_device qsv -filter_hw_device ocl \
-i "${i}" \
-filter:v:0 "yadif=mode=1,hwupload=extra_hw_frames=64,scale_opencl=h=720:w=-1:algo=lanczos,nlmeans_opencl=1.0:7:5:3:3,hwdownload,format=yuv420p" \
-c copy -c:v:0 libsvtav1 -pix_fmt yuv420p10le \
-svtav1-params "preset=2:crf=18:tune=0:film-grain=10:lookahead=120" \
-c:a libopus -b:a:0 160k -ac:a:0 2 \
out_file.mkv
Or is this where `derive_device` comes in handy? I think VA-API has to be the source device for OpenCL derivation rather than QSV, but I'm not sure. Any thoughts before I go down this rabbit hole?
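For what it's worth, here's the filtergraph I'd try for the `derive_device` route, kept in a variable so it's easy to tweak. The `hwmap=derive_device=opencl` step and the option choices are my untested assumptions; and with `-hwaccel_output_format qsv` the software yadif can't touch hardware frames anymore, so deinterlacing would presumably have to move to something like deinterlace_qsv before the map:

```shell
# Untested sketch: QSV decode -> map frames into OpenCL for filtering ->
# download to system memory for libsvtav1. Assumes hwmap can derive an
# OpenCL context from the decoder's QSV device.
vf="hwmap=derive_device=opencl,format=opencl"
vf="$vf,scale_opencl=h=720:w=-1:algo=lanczos"
vf="$vf,nlmeans_opencl=1.0:7:5:3:3"
vf="$vf,hwdownload,format=yuv420p"
echo "$vf"
```

The idea would be to pass this as `-vf "$vf"` after `-hwaccel qsv -hwaccel_output_format qsv`. No idea yet whether the QSV-to-OpenCL mapping actually works on the A380; that's the rabbit hole.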
Jellyfin 10.10.0 LSIO Docker | Ubuntu 24.04 LTS | i7-13700K | Arc A380 6 GB | 64 GB RAM | 79 TB Storage