GPU Showdown - Printable Version
Jellyfin Forum (https://forum.jellyfin.org) > Support (https://forum.jellyfin.org/f-support) > General Questions (https://forum.jellyfin.org/f-general-questions)
Thread: GPU Showdown (/t-gpu-showdown)
GPU Showdown - _Nick - 2025-04-03

Hi all, I've read older threads suggesting Intel Quick Sync (QS) is generally preferred over NVENC, but why exactly is that the case, setup difficulty aside (e.g., GPU passthrough)? What's the basis for that recommendation, especially across different QS and NVENC generations? Is it quality per bit, power draw, price, or something else?
Also, how big is the actual gap between NVENC gen 8 (RTX 40 series) and Quick Sync ver. 9 (Arc GPUs, Core Ultra 5 245K, etc.)? Remember that Intel dropped its lower-end options this generation, so there are no i3-14100/13100 equivalents, just Core Ultra starting at the 245K. Maybe a fairer comparison is NVENC vs. Quick Sync ver. 8, which was standard in all Intel 11th–14th gen desktop CPUs? (Note: QS ver. 8 has no AV1 encode.) Let's discuss.

Nick.

RE: GPU Showdown - _Nick - 2025-04-03

Reason: I'm nearly done migrating Jellyfin to dedicated hardware (see specs above) and considering replacing the 3080 Ti with a cheaper, more up-to-date GPU.

RE: GPU Showdown - TheDreadPirate - 2025-04-03

Nvidia GPUs are, generally, much more expensive than Intel GPUs, even more so when you add the scalper markup on more recent models. And non-scalpers don't understand that charging the original MSRP for an RTX 2000 GPU is not reasonable. The price difference is even harder to justify if you don't intend to also game on the PC running Jellyfin, or don't have some other workload that requires CUDA.

Regarding quality: for a long time, Intel Quick Sync was the gold standard in terms of quality per bit. In my experience the gap is still present with Nvidia GTX 1000 series GPUs, but by around the RTX 2000 series the quality-per-bit gap was maybe no longer noticeable. To be clear, I am referring to the gap between contemporary Intel and Nvidia parts at the time.

There is also the benefit that Quick Sync is present in the integrated graphics on Intel's CPUs. While not as performant as Intel's dedicated Arc or Battlemage graphics, Quick Sync on integrated graphics is still very quick and more than sufficient for most users, especially if you don't need tone mapping. This reduces idle system power usage and energy costs, if that is a concern for you. Speaking specifically of dedicated graphics, Arc and, to a lesser degree, Battlemage do have noticeably higher idle power draw than contemporary Nvidia GPUs, especially if you have an older motherboard that does not support ASPM. If you have cheap power where you live, the idle power draw of Intel Arc/Battlemage is not really a concern.

Then, as you stated, setup difficulty, specifically with Linux, and even more so when using containers, whether Docker or LXCs. The basic Intel i915 driver is built directly into the Linux kernel, and GPU passthrough into a container is simple. Additionally, because Intel releases all of its drivers under the MIT or BSD license, jellyfin-ffmpeg is able to bundle all of Intel's user-space media drivers, so you don't have to worry about installing those. Only the OpenCL driver needs to be installed separately. It also uses the MIT license, but I think there are technical reasons it can't be bundled with jellyfin-ffmpeg.

With Nvidia, you HAVE to install their proprietary drivers and encoding packages separately. If you are using a container, you also have to install their Container Toolkit. While the additional Docker and LXC configuration is well documented, it is an extra step you don't have to take with Intel (or AMD, for that matter).

There ARE benefits to the DKMS driver route that Nvidia takes vs. Intel's in-kernel driver: you can update drivers separately from kernel updates. If you so chose, you could keep using an end-of-life Linux distro that no longer receives kernel updates and still get Nvidia driver updates. With Intel, you only get driver updates with kernel updates.
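To make the passthrough difference concrete, here is a minimal sketch using the official jellyfin/jellyfin image. The host paths are placeholders, and the Nvidia variant assumes the proprietary driver and the NVIDIA Container Toolkit are already installed on the host:

```
# Intel: pass the kernel's render device nodes straight into the container
docker run -d --device /dev/dri:/dev/dri \
  -v /path/to/config:/config -v /path/to/media:/media \
  jellyfin/jellyfin

# Nvidia: needs the proprietary driver + NVIDIA Container Toolkit on the host first
docker run -d --gpus all \
  -v /path/to/config:/config -v /path/to/media:/media \
  jellyfin/jellyfin
```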
Regarding AV1: whether you should use it depends on your needs. I chose an Arc GPU for AV1 support because I have somewhat limited upload bandwidth (40Mbps) and an even lower configured max bit rate per stream (10Mbps), so that when multiple users are streaming I still have enough bandwidth to serve them all adequately (four 10Mbps streams fit within 40Mbps) while still maximizing quality.

I also pre-transcode all my content to AV1, and all audio to OPUS, to reduce storage use. I typically achieve a 50-90% size reduction with AV1, vs. 30-60% when I was pre-transcoding to HEVC, with some exceptions (older films with a lot of film grain). This also has the added benefit of not needing to transcode AT ALL, since all of my devices support AV1 and pre-transcoding usually brings the native bit rate below the allowed maximum of 10Mbps.

That is another thing to weigh if you're deciding between pre-transcoding to AV1 and just having an AV1-capable GPU for on-the-fly transcoding: how many of your devices support AV1? A surprising number of phones support AV1, while a surprising number of streaming boxes only recently added AV1 support.

RE: GPU Showdown - _Nick - 2025-04-03

Very interesting. The driver information you shared was extremely useful, as was the note on idle power draw. I also hadn't considered pre-transcoding everything to AV1. I'm probably one of only two users on my server with a large-screen device that supports AV1 decoding, but it would make sense for storage, since my 12TB HDD is filling up fast. Even if older devices force it to be transcoded back to AVC or HEVC, I'd still benefit from the space savings.

When you batch-transcoded your library to AV1 using the A380, were you able to preserve HDR10 and/or DoVi without noticeable quality loss (to your eyes or the average viewer) when re-encoding 70GB 4K HDR remuxes? Lastly, was there a specific reason you chose the slightly more expensive A380 over the A310, considering both use the exact same media engine?

RE: GPU Showdown - TheDreadPirate - 2025-04-03

Your typical Dolby Vision video is usually profile 7 or 8. Those profiles don't support AV1; there is a separate profile 10 for Dolby Vision with AV1. AFAIK there is no way, or at least no easy way, to convert profiles 7 and 8 to profile 10, nor are there clients that support it. However, you can convert Dolby Vision to plain HDR10, since Dolby Vision profiles 7 and 8 are built on top of HDR10. I wrote a guide for that conversion process (linked below). The guide is specifically about removing Dolby Vision without re-encoding, but encoding Dolby Vision HEVC to AV1 with jellyfin-ffmpeg 7 automatically strips Dolby Vision without any additional steps or parameters.

https://forum.jellyfin.org/t-converting-dolby-vision-to-hdr10

I also wrote a script for automating pre-transcoding, available on Github. FYI, my script also converts most audio codecs to OPUS, except MP3, AAC, FLAC, VORBIS, and tracks that are already OPUS (duh).

https://github.com/solidsnake1298/Arc-Encoding-Automator

When I got my A380, the A310 was only available in China and Taiwan, IIRC. It is more widely available now.
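For anyone curious what the pre-transcode step described above roughly boils down to, here is a minimal sketch. This is illustrative only, not the exact command from the linked script; the file names, quality value, and Opus bit rate are placeholders, and it assumes jellyfin-ffmpeg 7 on a QSV-capable Intel GPU:

```
# Hypothetical example: video to AV1 via Quick Sync (av1_qsv), audio to Opus,
# everything else copied. Per the post above, jellyfin-ffmpeg 7 strips
# Dolby Vision metadata automatically during this re-encode.
/usr/lib/jellyfin-ffmpeg/ffmpeg -i input.mkv \
  -map 0 -c copy \
  -c:v av1_qsv -preset veryslow -global_quality 25 \
  -c:a libopus -b:a 128k \
  output.mkv
```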
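And a rough outline of the remove-without-re-encoding route: the linked guide has the authoritative steps, and the tool names and flags below (dovi_tool plus the MKVToolNix utilities) are assumptions that may differ from what it actually describes:

```
# Hypothetical outline: strip the Dolby Vision RPU, keep the HEVC stream as plain HDR10
mkvextract input.mkv tracks 0:video.hevc          # extract the video track
dovi_tool remove video.hevc -o video.hdr10.hevc   # drop the DoVi RPU/enhancement layer
mkvmerge -o output.mkv video.hdr10.hevc --no-video input.mkv  # remux with original audio/subs
```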