    Stuck on iGPU passthrough / Can't get renderD128 to appear.

    iGPU Transcoder Issues.
    Vaemarr
    Offline

    Junior Member

    Posts: 9
    Threads: 1
    Joined: 2025 Feb
    Reputation: 0
    Country: Australia
    #1
    2025-02-18, 12:37 PM
    Hi all,

    I'm in need of some help as I am pulling my hair out trying to get transcoding working.

    I've tried everything I can find online but I am genuinely stuck and desperately need help.

    I'm on an Intel N4505 with an integrated Intel GPU and for the life of me I cannot get /dev/dri/renderD128 to appear.

    So for my setup I have Proxmox running on an Intel NUC, a CentOS VM running Docker, and Jellyfin running as a Docker container.

    By all appearances it seems I have the iGPU passing through to the VM.

    This is what I get from my CentOS virtual machine.

    [root@styx ~]$ lspci | grep VGA
    00:01.0 VGA compatible controller: Device 1234:1111 (rev 02)
    01:00.0 VGA compatible controller: Intel Corporation Device 4e55 (rev 01)

    [root@styx ~]$ lspci -nnk -d 8086:4e55
    01:00.0 VGA compatible controller [0300]: Intel Corporation Device [8086:4e55] (rev 01)
            Subsystem: Intel Corporation Device [8086:3027]

    [root@styx ~]$ lspci -k | grep -A 3 VGA
    00:01.0 VGA compatible controller: Device 1234:1111 (rev 02)
            Subsystem: Red Hat, Inc. Device 1100
            Kernel driver in use: bochs-drm
            Kernel modules: bochs_drm
    --
    01:00.0 VGA compatible controller: Intel Corporation Device 4e55 (rev 01)
            Subsystem: Intel Corporation Device 3027
    05:01.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
    05:02.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge

    So to me it definitely appears that the iGPU passthrough is working, but I think the issue has something to do with the VM not exposing the render node.
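    For reference, the usual way to check from inside the VM whether a render node exists and whether the i915 driver has actually bound to the card is something like this (just generic commands, nothing specific to my setup):

    [root@styx ~]$ ls -l /dev/dri
    [root@styx ~]$ lsmod | grep i915
    [root@styx ~]$ dmesg | grep -i i915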

    As for my Proxmox info:

    Kernel Version Linux 6.2.16-20-bpo11-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-20~bpo11+1 (2023-12-01T14:42Z)
    PVE Manager Version pve-manager/7.4-19/f98bf8d4

    And I am running on CentOS 7 for my VM running docker.

    This is in my GRUB config:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"


    I created /etc/modprobe.d/i915.conf and added:

    options i915 enable_guc=3


    I also have /etc/modprobe.d/vfio.conf with:

    options vfio-pci ids=8086:4e55 disable_vga=1

    Then /etc/modules:

    # Modules required for PCI passthrough
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd
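
    After editing these files on the Proxmox host, the usual follow-up (a sketch, assuming a standard GRUB-based Proxmox install) is to regenerate the boot config and initramfs and then confirm the iGPU is actually bound to vfio-pci:

    # on the Proxmox host
    update-grub
    update-initramfs -u -k all
    reboot
    # after the reboot
    dmesg | grep -e DMAR -e IOMMU
    lspci -nnk -d 8086:4e55 | grep "Kernel driver in use"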

    I have no idea what else I am missing or might have done wrong.

    P.S. I have also tried this:

    https://cetteup.com/216/how-to-use-an-in...roxmox-vm/

    If someone can help me I would REALLY appreciate it. I'm also happy for someone to remote in and take a look if that's easier.



    TheDreadPirate
    Offline

    Community Moderator

    Posts: 15,375
    Threads: 10
    Joined: 2023 Jun
    Reputation: 460
    Country: United States
    #2
    2025-02-18, 02:00 PM
    Any particular reason for continuing to use CentOS 7?

    And can you clarify which host is running the 6.2 kernel? Is that Proxmox? Or is that the VM? If that is Proxmox, which kernel is the VM running?
    Jellyfin 10.10.7 (Docker)
    Ubuntu 24.04.2 LTS w/HWE
    Intel i3 12100
    Intel Arc A380
    OS drive - SK Hynix P41 1TB
    Storage
        4x WD Red Pro 6TB CMR in RAIDZ1
    Vaemarr
    Offline

    Junior Member

    Posts: 9
    Threads: 1
    Joined: 2025 Feb
    Reputation: 0
    Country: Australia
    #3
    2025-02-18, 09:09 PM
    (2025-02-18, 02:00 PM)TheDreadPirate Wrote: Any particular reason for continuing to use CentOS 7?

    And can you clarify which host is running the 6.2 kernel?  Is that Proxmox?  Or is that the VM?  If that is Proxmox, which kernel is the VM running?

    So I am using CentOS 7 for the VM just because it's what I am most familiar with; I also have many containers already established on it.

    The 6.2 kernel was for the Proxmox host, and my CentOS VM is on 3.10.0-1160.119.1.el7.x86_64.
    TheDreadPirate
    Offline

    Community Moderator

    Posts: 15,375
    Threads: 10
    Joined: 2023 Jun
    Reputation: 460
    Country: United States
    #4
    2025-02-18, 09:20 PM
    The VM's kernel is not new enough. Jasper Lake support was added in 5.6. If this were an LXC, it would use the host's kernel and this wouldn't be a problem.

    CentOS 7 has been EOL since June 30, 2024. For security reasons alone, you should upgrade to a supported distro, since you will no longer get security patches.
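
    For what it's worth, if you ever go the LXC route instead, passing the render node into the container on Proxmox looks roughly like this (a sketch only; the container ID, cgroup version, and privilege level all vary by setup):

    # /etc/pve/lxc/<id>.conf
    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file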
    Vaemarr
    Offline

    Junior Member

    Posts: 9
    Threads: 1
    Joined: 2025 Feb
    Reputation: 0
    Country: Australia
    #5
    2025-02-19, 01:57 PM
    (2025-02-18, 09:20 PM)TheDreadPirate Wrote: The VM's kernel is not new enough.  Jasper Lake support was added in 5.6.  If this was a LXC, it would use the host's kernel and this wouldn't be a problem.

    CentOS 7 is EOL as of June 30, 2024.  Just for security reasons, you should upgrade to supported distro since you will no longer get security patches.

    OK, so it turns out you were correct.

    I spun up an Ubuntu VM, installed Docker, set up GPU passthrough, and deployed Jellyfin in Docker.

    I now have /dev/dri/renderD128; however, I am still having issues getting videos to transcode.

    I have added /dev/dri/renderD128 to my --devices (via Portainer).

    When I am inside the container, I can see and access that location.
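
    For reference, the device mapping I've set up is the equivalent of this docker run form (a sketch only; the config/media paths are placeholders and my actual container is managed through Portainer):

    docker run -d \
      --name jellyfin \
      --device /dev/dri/renderD128:/dev/dri/renderD128 \
      -v /path/to/config:/config \
      -v /path/to/media:/media \
      jellyfin/jellyfin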

    Ideas?
    TheDreadPirate
    Offline

    Community Moderator

    Posts: 15,375
    Threads: 10
    Joined: 2023 Jun
    Reputation: 460
    Country: United States
    #6
    2025-02-19, 03:39 PM
    I'd need to see the ffmpeg logs to figure that out. My first guess is that you did not install the Intel GuC and HuC firmware, which are required for Jasper Lake.

    https://jellyfin.org/docs/general/admini...ion/intel/

    Go down to the "low power encoding" section.
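
    On an Ubuntu VM that section boils down to roughly the following (a sketch; double check the linked doc for the exact steps and enable_guc value for your release):

    # inside the Ubuntu VM
    sudo apt install linux-firmware
    echo "options i915 enable_guc=2" | sudo tee /etc/modprobe.d/i915.conf
    sudo update-initramfs -u
    sudo reboot
    # after reboot, confirm the GuC/HuC firmware loaded
    sudo dmesg | grep -i guc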
    Vaemarr
    Offline

    Junior Member

    Posts: 9
    Threads: 1
    Joined: 2025 Feb
    Reputation: 0
    Country: Australia
    #7
    2025-02-19, 11:04 PM
    (2025-02-19, 03:39 PM)TheDreadPirate Wrote: I'd need to see the ffmpeg logs to figure that out.  My first guess is that you did not install the Intel GuC and HuC firmware, which are required for Jasper Lake.

    https://jellyfin.org/docs/general/admini...ion/intel/

    Go down to the "low power encoding" section.

    Hi mate,

    I followed the instructions from that link.

    I'm not sure if it's working still, though; when I run intel-gpu-top I don't see my renderer showing up.
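
    One quick way to check whether VA-API can actually open the render node from inside the VM (a sketch; vainfo comes from the libva-utils package and may need installing first):

    sudo vainfo --display drm --device /dev/dri/renderD128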


    Attached Files

    .txt   ffmpeg_log_jellyfin.txt (Size: 43.16 KB / Downloads: 27)
    Vaemarr
    Offline

    Junior Member

    Posts: 9
    Threads: 1
    Joined: 2025 Feb
    Reputation: 0
    Country: Australia
    #8
    2025-02-20, 12:39 AM (This post was last modified: 2025-02-20, 12:40 AM by Vaemarr. Edited 1 time in total.)
    Update:

    So I noticed two things:

    1) When I run intel-gpu-top I get card1 instead of renderD128, even though renderD128 is appearing to the host and I have selected it as the render device in Jellyfin settings.

    2) I noticed that if I reduce the quality of a video, transcoding does happen, but only on card1 and not renderD128. I thought it was supposed to transcode on renderD128?

    Is this meant to be the case, or am I misunderstanding something? Apologies, my knowledge on this stuff is very limited.
    TheDreadPirate
    Offline

    Community Moderator

    Posts: 15,375
    Threads: 10
    Joined: 2023 Jun
    Reputation: 460
    Country: United States
    #9
    2025-02-20, 02:06 PM
    "card1" is the overarching graphics card. "renderD128" is Quick Sync, which is a sub-component of the GPU.
    Vaemarr
    Offline

    Junior Member

    Posts: 9
    Threads: 1
    Joined: 2025 Feb
    Reputation: 0
    Country: Australia
    #10
    2025-02-20, 07:56 PM
    (2025-02-20, 02:06 PM)TheDreadPirate Wrote: "card1" is the overarching graphics card. "renderD128" is Quick Sync, which is a sub-component of the GPU.

    Right. So does that mean the transcoding is working correctly?

    I still have some files that won't even play.