    Encoding Discussion Megathread (ffmpeg, Handbrake, AV1, etc...)

    Bring your resources, guides, questions, suggestions, cluelessness. All is welcome.
    bitmap
    Offline

    Community Moderator

    Posts: 781
    Threads: 9
    Joined: 2023 Jul
    Reputation: 24
    #81
    2024-01-29, 06:20 PM
    Okay, so I've been gone for a long while dealing with a variety of personal priorities, but I wanted to report that I was able to get LA_ICQ working with my A380. I had to manually compile ffmpeg (which is a given), upgrade the encoder(s), and upgrade everything Intel -- in particular the driver and OneVPL for GPUs. The only option to get to a version new enough to achieve this, in my experience, is to compile it on your own. Here's the working command:
    Code:
    ffmpeg -y -hide_banner -v verbose -stats \
        -hwaccel qsv -hwaccel_output_format qsv -qsv_device /dev/dri/renderD129 \
        -i "${i}" \
        -map 0 -map_metadata 0 \
            -c copy -c:v:0 "av1_qsv" -preset "slower" \
        -global_quality:v:0 23 \
        -extbrc 1 -look_ahead_depth 100 \
        -c:a libopus -b:a 160k -ac 2 \
        "${o}"

    Obviously some things will be different, but all the av1_qsv encoder needs to get this functioning is -extbrc 1 and -look_ahead_depth [1-100]. I believe the lookahead limit is 100, but I can experiment with other values to see what I get. The return to quality > bitrate has yielded MUCH smaller files with exceptional quality and low bitrates, which is what I expected. My latest encode took a 790 GB series remux and brought it down to 63 GB without majorly impacting quality.
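    For reference, ${i} and ${o} in that command come from a basic wrapper loop. Roughly something like this (the paths and output naming are placeholders, adjust to taste):
    Code:
    # Placeholder wrapper loop -- only the ffmpeg flags above matter; paths/naming are examples.
    for i in /path/to/remuxes/*.mkv; do
        o="/path/to/av1/$(basename "${i%.mkv}") AV1.mkv"
        ffmpeg -y -hide_banner -v verbose -stats \
            -hwaccel qsv -hwaccel_output_format qsv -qsv_device /dev/dri/renderD129 \
            -i "${i}" \
            -map 0 -map_metadata 0 \
            -c copy -c:v:0 "av1_qsv" -preset "slower" \
            -global_quality:v:0 23 \
            -extbrc 1 -look_ahead_depth 100 \
            -c:a libopus -b:a 160k -ac 2 \
            "${o}"
    done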
    Jellyfin 10.10.7 LSIO Docker | Ubuntu 24.04 LTS | i7-13700K | Arc A380 6 GB | 64 GB RAM | 79 TB Storage

    Mel_Gibson_Real
    Offline

    Junior Member

    Posts: 35
    Threads: 3
    Joined: 2023 Sep
    Reputation: 0
    #82
    2024-01-31, 06:36 PM
    Has anyone tried or found a way to add synthetic grain to QSV AV1 encodes? It's really the only thing stopping me from letting my A310 convert my entire library to AV1.

    Synthetic grain has really allowed me to make perfect transcodes of film media. It even helps make some purely digital media look better in small amounts.
    TheDreadPirate
    Offline

    Community Moderator

    Posts: 15,375
    Threads: 10
    Joined: 2023 Jun
    Reputation: 460
    Country: United States
    #83
    2024-01-31, 06:43 PM
    (2024-01-29, 06:20 PM)bitmap Wrote: Okay, so I've been gone for a long while dealing with a variety of personal priorities, but I wanted to report that I was able to get LA_ICQ working with my A380. I had to manually compile ffmpeg (which is a given), upgrade the encoder(s), and upgrade everything Intel -- in particular the driver and OneVPL for GPUs. The only option to get to a version new enough to achieve this, in my experience, is to compile it on your own. Here's the working command:
    Code:
    ffmpeg -y -hide_banner -v verbose -stats \
    -hwaccel qsv -hwaccel_output_format qsv -qsv_device /dev/dri/renderD129 \
    -i "${i}" \
    -map 0 -map_metadata 0 \
            -c copy -c:v:0 "av1_qsv" -preset "slower" \
    -global_quality:v:0 23 \
    -extbrc 1 -look_ahead_depth 100 \
    -c:a libopus -b:a 160k -ac 2 \
    "${o}"

    Obviously some things will be different, but all the av1_qsv encoder needs to get this functioning is -extbrc 1 and -look_ahead_depth [1-100]. I believe the lookahead limit is 100, but I can experiment with other values to see what I get. The return to quality > bitrate has yielded MUCH smaller files with exceptional quality and low bitrates, which is what I expected. My latest encode took a 790 GB series remux and brought it down to 63 GB without majorly impacting quality.

    Is jellyfin-ffmpeg6 new enough to use global_quality and/or extbrc?  I have global_quality in my current script, but I never A/B/X tested whether it does anything.
    Jellyfin 10.10.7 (Docker)
    Ubuntu 24.04.2 LTS w/HWE
    Intel i3 12100
    Intel Arc A380
    OS drive - SK Hynix P41 1TB
    Storage
        4x WD Red Pro 6TB CMR in RAIDZ1
    bitmap
    Offline

    Community Moderator

    Posts: 781
    Threads: 9
    Joined: 2023 Jul
    Reputation: 24
    #84
    2024-02-01, 03:50 AM
    (2024-01-31, 06:36 PM)Mel_Gibson_Real Wrote: Has anyone tried or found a way to add synthetic grain to QSV AV1 encodes? It's really the only thing stopping me from letting my A310 convert my entire library to AV1.

    Synthetic grain has really allowed me to make perfect transcodes of film media. It even helps make some purely digital media look better in small amounts.

    Yeah, I think that's my largest drawback right now. It's not that it can't be utilized (in fact, the AV1 QSV spec lists grain as one of its features); it just hasn't been implemented anywhere I know of. The interesting part for me is that my SVT-AV1 encodes with grain synthesis occasionally look worse (subjectively) than my QSV encodes. For example, I just encoded A Wind Named Amnesia from a remux and QSV did a fantastic job. My older, non-HD sources that I have to deinterlace (and that use grain for atmospheric purposes) suffer significantly.

    It'll happen; the consumer tools just aren't there yet.
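    For comparison, the grain synthesis I'm talking about on the SVT-AV1 side is just the film-grain knob in -svtav1-params. Roughly this shape (the numbers are placeholders, not a recommendation):
    Code:
    # Example SVT-AV1 software encode with synthesized grain -- tune film-grain (and crf/preset) per source.
    ffmpeg -y -hide_banner -stats \
        -i "${i}" \
        -map 0 -map_metadata 0 \
        -c copy -c:v:0 libsvtav1 -pix_fmt yuv420p10le \
        -svtav1-params "preset=4:crf=22:tune=0:film-grain=8:film-grain-denoise=1" \
        -c:a libopus -b:a 160k -ac 2 \
        "${o}"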

    (2024-01-31, 06:43 PM)TheDreadPirate Wrote: Is jellyfin-ffmpeg6 new enough to use global_quality and/or extbrc?  I have global_quality in my current script, but I never A/B/X tested whether it does anything.

    I don't know which snapshot jellyfin-ffmpeg derives from, but I'm using 6.1.x snapshots to accomplish this. I would assume the Jellyfin fork is pulling in some of the major changes (for the AV1 encoders), but I haven't had a chance to test. Back before my hiatus I tried jellyfin-ffmpeg6 and had no luck, but I've also discovered a number of mistakes I made along the way. For instance, negative mapping is WAY better for encoding applications, and you can map stream metadata to the end result if you wish (not just plain -map_metadata).
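    To illustrate those two points (the stream specifiers here are only examples, adapt them to your own files):
    Code:
    # Example only: negative mapping drops data streams while keeping everything else,
    # and per-stream metadata is mapped explicitly in addition to the global tags.
    ffmpeg -y -hide_banner -stats \
        -i "${i}" \
        -map 0 -map -0:d \
        -map_metadata 0 \
        -map_metadata:s:a:0 0:s:a:0 \
        -map_metadata:s:s:0 0:s:s:0 \
        -c copy \
        "${o}"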

    I'll share more as I experiment; I have a ton of remuxes I'm working through currently with the NHL on its All-Star break.
    Jellyfin 10.10.7 LSIO Docker | Ubuntu 24.04 LTS | i7-13700K | Arc A380 6 GB | 64 GB RAM | 79 TB Storage

    bitmap
    Offline

    Community Moderator

    Posts: 781
    Threads: 9
    Joined: 2023 Jul
    Reputation: 24
    #85
    2024-02-10, 04:39 AM
    @Mel_Gibson_Real So using libplacebo, you may be able to do a partially hardware-accelerated encode and use grain synthesis with that filter. It's not standard; you'd have to compile your own ffmpeg.

    @TheDreadPirate It looks like jellyfin-ffmpeg is on 6.0.1, which might work. I browsed through the release notes and didn't see anything about a fix since May of last year, but it definitely works now. Honestly, it may have been malformed scripting on my end, but even nyanmisaka mentioned that LA_ICQ was borked.
    Jellyfin 10.10.7 LSIO Docker | Ubuntu 24.04 LTS | i7-13700K | Arc A380 6 GB | 64 GB RAM | 79 TB Storage

    nyanmisaka
    Offline

    Team Member

    Posts: 236
    Threads: 0
    Joined: 2023 Jun
    Reputation: 8
    #86
    2024-02-10, 08:51 AM
    The Intel driver supporting AV1 ICQ is included in the latest jellyfin-ffmpeg6 deb package. We have had the libplacebo filter enabled on Linux for a long time, and the Vulkan driver (Mesa ANV) required by libplacebo is also included. So if you learn how to make it interoperate with VA-API/QSV to avoid video memory copies, applying film grain to video frames would be very fast.

    https://ffmpeg.org/ffmpeg-all.html#toc-Examples-133
    bitmap
    Offline

    Community Moderator

    Posts: 781
    Threads: 9
    Joined: 2023 Jul
    Reputation: 24
    #87
    2024-02-14, 02:45 AM
    (2024-02-10, 08:51 AM)nyanmisaka Wrote: So if you learn how to make it interoperate with VA-API/QSV to avoid video memory copies, applying film grain to video frames would be very fast.

    https://ffmpeg.org/ffmpeg-all.html#toc-Examples-133

    What do you mean by avoiding video memory copies?

    As in not using the hwupload filter (which really helps with deinterlacing)? Or that you'd literally have two copies of each frame because of how the process works? I thought libplacebo could work with HWA, so it would be part of the filter pipeline...
    Jellyfin 10.10.7 LSIO Docker | Ubuntu 24.04 LTS | i7-13700K | Arc A380 6 GB | 64 GB RAM | 79 TB Storage

    nyanmisaka
    Offline

    Team Member

    Posts: 236
    Threads: 0
    Joined: 2023 Jun
    Reputation: 8
    #88
    2024-02-14, 05:43 PM
    (2024-02-14, 02:45 AM)bitmap Wrote:
    (2024-02-10, 08:51 AM)nyanmisaka Wrote: So if you learn how to make it interoperate with VA-API/QSV to avoid video memory copies, applying film grain to video frames would be very fast.

    https://ffmpeg.org/ffmpeg-all.html#toc-Examples-133

    What do you mean by avoiding video memory copies?

    As in not using the hwupload filter (which really helps with deinterlacing)? Or that you'd literally have two copies of each frame because of how the process works? I thought libplacebo could work with HWA, so it would be part of the filter pipeline...

    That is, referencing/mapping VA-API/QSV memory as Vulkan memory instead of making a copy.

    libplacebo currently only accepts Vulkan memory.

    decoder(vaapi) -> hwmap -> libplacebo(vulkan) -> hwmap -> encoder(vaapi)
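    In command-line terms that is roughly the following (untested sketch; the libplacebo options and exact hwmap parameters are placeholders):
    Code:
    # Sketch: VA-API decode -> map to Vulkan for libplacebo -> map back to VA-API for encoding.
    ffmpeg -hide_banner \
        -init_hw_device vaapi=va:/dev/dri/renderD129 \
        -init_hw_device vulkan=vk@va -filter_hw_device vk \
        -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device va \
        -i input.mkv \
        -vf "hwmap,libplacebo=format=yuv420p10le,hwmap=derive_device=vaapi:reverse=1" \
        -c:v av1_vaapi \
        output.mkv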
    TheDreadPirate
    Offline

    Community Moderator

    Posts: 15,375
    Threads: 10
    Joined: 2023 Jun
    Reputation: 460
    Country: United States
    #89
    2024-05-02, 06:38 PM (This post was last modified: 2024-05-02, 06:41 PM by TheDreadPirate. Edited 1 time in total.)
    I've picked my automation script back up after making a couple of breakthroughs: converting DV to HDR10, language selection, programmatically selecting tracks for FLAC/Vorbis conversion, and using parallel to run independent tasks simultaneously.  My small ~50-line script has grown into a monster 250+ line script (including documentation and comments).

    Still tweaking, working out the bugs, finding redundancies, and finding where I can increase parallelization. And attempting to make it less jank.
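    The parallel part is conceptually nothing more than this (simplified illustration, not the actual script; paths are placeholders):
    Code:
    # Illustration only: run two ffmpeg jobs at a time across a folder of remuxes with GNU parallel.
    parallel -j 2 ffmpeg -y -hide_banner -i {} -map 0 -c copy {.}.copy.mkv ::: /path/to/remuxes/*.mkv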
    Jellyfin 10.10.7 (Docker)
    Ubuntu 24.04.2 LTS w/HWE
    Intel i3 12100
    Intel Arc A380
    OS drive - SK Hynix P41 1TB
    Storage
        4x WD Red Pro 6TB CMR in RAIDZ1
    bitmap
    Offline

    Community Moderator

    Posts: 781
    Threads: 9
    Joined: 2023 Jul
    Reputation: 24
    #90
    2024-05-20, 05:32 PM
    Anybody have experience with mixed acceleration encoding?

    Thus far, I've been able to do a mixed encode with libsvtav1 (and will check my results after the encode is complete) like this:

    Code:
    ffmpeg -y -hide_banner -v verbose -stats \
      -init_hw_device vaapi=hw:/dev/dri/renderD129 -init_hw_device opencl=hw1@hw -filter_hw_device hw1 \
      -i "${i}" \
      -filter:v:0 "yadif=mode=1,hwupload=extra_hw_frames=64,scale_opencl=h=720:w=-1:algo=lanczos,nlmeans_opencl=1.0:7:5:3:3,hwdownload,format=yuv420p," \
      -c copy -c:v:0 libsvtav1 -pix_fmt yuv420p10le \
      -svtav1-params "preset=2:crf=18:tune=0:film-grain=10:lookahead=120" \
      -c:a libopus -b:a:0 160k -ac:a:0 2 \
      out_file.mkv

    The draws are being able to use lanczos for scaling and nlmeans for denoising, both (supposedly) better than the defaults but unavailable with QSV. I guess I'm wondering if it's feasible to do mixed-acceleration encoding with QSV for decode/encode and OpenCL for filtering. Something like this...

    Code:
    ffmpeg -y -hide_banner -v verbose -stats \
      -init_hw_device vaapi=hw:/dev/dri/renderD129 -init_hw_device qsv=qsv@hw -init_hw_device opencl=ocl@hw \
      -hwaccel qsv -filter_hw_device ocl \
      -i "${i}" \
      -filter:v:0 "yadif=mode=1,hwupload=extra_hw_frames=64,scale_opencl=h=720:w=-1:algo=lanczos,nlmeans_opencl=1.0:7:5:3:3,hwdownload,format=yuv420p," \
      -c copy -c:v:0 libsvtav1 -pix_fmt yuv420p10le \
      -svtav1-params "preset=2:crf=18:tune=0:film-grain=10:lookahead=120" \
      -c:a libopus -b:a:0 160k -ac:a:0 2 \
      out_file.mkv

    Or is this where device derivation comes in handy? I think VA-API has to be used as the source device for OpenCL rather than QSV, but I'm not sure... any thoughts before I go down this rabbit hole?
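    In case it helps frame the question, here's the derive-device version I'm picturing, completely untested and purely speculative:
    Code:
    # Speculative: derive QSV and OpenCL from one VA-API device; decode with QSV to system memory,
    # deinterlace in software, filter in OpenCL via hwupload/hwdownload, then encode with av1_qsv.
    ffmpeg -y -hide_banner -v verbose -stats \
      -init_hw_device vaapi=va:/dev/dri/renderD129 \
      -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va \
      -filter_hw_device ocl \
      -hwaccel qsv -hwaccel_device qs \
      -i "${i}" \
      -filter:v:0 "yadif=mode=1,hwupload=extra_hw_frames=64,scale_opencl=h=720:w=-1:algo=lanczos,nlmeans_opencl=1.0:7:5:3:3,hwdownload,format=yuv420p" \
      -c copy -c:v:0 av1_qsv -preset slower -global_quality:v:0 23 -extbrc 1 -look_ahead_depth 100 \
      -c:a libopus -b:a 160k -ac 2 \
      "${o}"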
    Jellyfin 10.10.7 LSIO Docker | Ubuntu 24.04 LTS | i7-13700K | Arc A380 6 GB | 64 GB RAM | 79 TB Storage
