Jellyfin Forum › Support › Troubleshooting › RAM Usage


     

    RAM Usage

    natzilla
    Offline

    Junior Member

    Posts: 26
    Threads: 3
    Joined: 2023 Jun
    Reputation: 0
    #11
    2023-06-21, 04:31 PM
    (2023-06-21, 08:25 AM)Venson Wrote:
    (2023-06-21, 04:16 AM)natzilla Wrote:
    (2023-06-20, 06:14 AM)Venson Wrote:
    (2023-06-20, 04:19 AM)joshuaboniface Wrote: Neither error seems related to memory: the first is just a client disconnecting uncleanly, and the second is a playback failure (wrong HWA perhaps). But I'm not sure either of them would really cause such a massive memory leak. For ref, my instance has been up 2 weeks and is only using ~3.3GB

    Code:
    ● jellyfin.service - Jellyfin Media Server
        Loaded: loaded (/etc/systemd/system/jellyfin.service; enabled; vendor preset: enabled)
        Drop-In: /etc/systemd/system/jellyfin.service.d
                └─jellyfin.service.conf
        Active: active (running) since Mon 2023-06-05 21:59:44 EDT; 2 weeks 0 days ago
      Main PID: 476 (jellyfin)
          Tasks: 23 (limit: 19171)
        Memory: 3.3G
            CPU: 2d 2h 38min 29.487s
        CGroup: /system.slice/jellyfin.service
                └─476 /usr/bin/jellyfin --webdir=/usr/share/jellyfin/web --restartpath=/usr/lib/jellyfin/restart.sh --ffmpeg=/usr/local/bin/ffmpeg

    It would also help to have more details on the specifics of your setup: what version, what OS, package format, etc.

    Looking at the logfile he has a lot of errored websocket connections, maybe there is actually a memory leak there?
    On the other hand, the "Webhook Item Added Notifier" message comes every few seconds, which could be an issue?

    @natzilla I would recommend disabling all plugins and seeing if that helps, as a troubleshooting step.

    After making that change it did take longer for the RAM to fill up. It does at least appear to go down when there is no activity, but to still get as high as 16G? That still seems like a lot to me. I have included a new batch of logs in this reply.

    Are you sure that graph only shows jellyfin memory and not "overall" system memory?

    Yes I am sure. That graph is just for jellyfin only.
    joshuaboniface
    Offline

    Project Leader

    Posts: 115
    Threads: 25
    Joined: 2023 Jun
    Reputation: 16
    Country: Canada
    #12
    2023-06-21, 05:52 PM (This post was last modified: 2023-06-21, 05:54 PM by joshuaboniface.)
    Good catch @Venson, yea @natzilla your actual Jellyfin memory usage is only 2.9G as reported by your systemctl output, and that seems normal. Something else is using up all your memory.

    (2023-06-21, 04:31 PM)natzilla Wrote: Yes I am sure. That graph is just for jellyfin only.

    For your Jellyfin *system* but not the process. The process itself is only using what systemctl reports.

    Try posting your entire ps aux output, or htop sorted by %Mem, that will show what's really using it. If it is Jellyfin then we learned something new about systemctl's accuracy, but I suspect it's not.
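    As a side note (my own sketch, not from the post): sorting ps by resident memory gets you the top consumers directly, without pasting the full dump:

    ```shell
    # Top 10 processes by resident memory; %MEM is RSS as a share of total RAM
    ps aux --sort=-%mem | head -n 11
    # Interactively: run htop, then press Shift+M to sort by memory
    ```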
    natzilla
    Offline

    Junior Member

    Posts: 26
    Threads: 3
    Joined: 2023 Jun
    Reputation: 0
    #13
    2023-06-21, 07:31 PM
    (2023-06-21, 05:52 PM)joshuaboniface Wrote: Good catch @Venson, yea @natzilla your actual Jellyfin memory usage is only 2.9G as reported by your systemctl output, and that seems normal. Something else is using up all your memory.

    (2023-06-21, 04:31 PM)natzilla Wrote: Yes I am sure. That graph is just for jellyfin only.

    For your Jellyfin *system* but not the process. The process itself is only using what systemctl reports.

    Try posting your entire ps aux output, or htop sorted by %Mem, that will show what's really using it. If it is Jellyfin then we learned something new about systemctl's accuracy, but I suspect it's not.

    As requested:

    Code:
    USER        PID %CPU %MEM    VSZ  RSS TTY      STAT START  TIME COMMAND
    root          1  0.0  0.0 167744 13032 ?        Ss  Jun20  0:08 /sbin/init
    root          2  0.0  0.0      0    0 ?        S    Jun20  0:00 [kthreadd]
    root          3  0.0  0.0      0    0 ?        I<  Jun20  0:00 [rcu_gp]
    root          4  0.0  0.0      0    0 ?        I<  Jun20  0:00 [rcu_par_gp]
    root          5  0.0  0.0      0    0 ?        I<  Jun20  0:00 [slub_flushwq]
    root          6  0.0  0.0      0    0 ?        I<  Jun20  0:00 [netns]
    root          8  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/0:0H-events_highpri]
    root          10  0.0  0.0      0    0 ?        I<  Jun20  0:00 [mm_percpu_wq]
    root          11  0.0  0.0      0    0 ?        S    Jun20  0:00 [rcu_tasks_rude_]
    root          12  0.0  0.0      0    0 ?        S    Jun20  0:00 [rcu_tasks_trace]
    root          13  0.0  0.0      0    0 ?        S    Jun20  0:00 [ksoftirqd/0]
    root          14  0.0  0.0      0    0 ?        I    Jun20  0:16 [rcu_sched]
    root          15  0.0  0.0      0    0 ?        S    Jun20  0:00 [migration/0]
    root          16  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/0]
    root          18  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/0]
    root          19  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/1]
    root          20  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/1]
    root          21  0.0  0.0      0    0 ?        S    Jun20  0:01 [migration/1]
    root          22  0.0  0.0      0    0 ?        S    Jun20  0:00 [ksoftirqd/1]
    root          24  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/1:0H-events_highpri]
    root          25  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/2]
    root          26  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/2]
    root          27  0.0  0.0      0    0 ?        S    Jun20  0:01 [migration/2]
    root          28  0.0  0.0      0    0 ?        S    Jun20  0:00 [ksoftirqd/2]
    root          30  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/2:0H-events_highpri]
    root          31  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/3]
    root          32  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/3]
    root          33  0.0  0.0      0    0 ?        S    Jun20  0:00 [migration/3]
    root          34  0.0  0.0      0    0 ?        S    Jun20  0:44 [ksoftirqd/3]
    root          36  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/3:0H-events_highpri]
    root          37  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/4]
    root          38  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/4]
    root          39  0.0  0.0      0    0 ?        S    Jun20  0:01 [migration/4]
    root          40  0.0  0.0      0    0 ?        S    Jun20  0:00 [ksoftirqd/4]
    root          42  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/4:0H-events_highpri]
    root          43  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/5]
    root          44  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/5]
    root          45  0.0  0.0      0    0 ?        S    Jun20  0:01 [migration/5]
    root          46  0.0  0.0      0    0 ?        S    Jun20  0:00 [ksoftirqd/5]
    root          48  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/5:0H-events_highpri]
    root          49  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/6]
    root          50  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/6]
    root          51  0.0  0.0      0    0 ?        S    Jun20  0:01 [migration/6]
    root          52  0.0  0.0      0    0 ?        S    Jun20  0:00 [ksoftirqd/6]
    root          54  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/6:0H-events_highpri]
    root          55  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/7]
    root          56  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/7]
    root          57  0.0  0.0      0    0 ?        S    Jun20  0:00 [migration/7]
    root          58  0.0  0.0      0    0 ?        S    Jun20  0:02 [ksoftirqd/7]
    root          60  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/7:0H-events_highpri]
    root          61  0.0  0.0      0    0 ?        S    Jun20  0:00 [kdevtmpfs]
    root          62  0.0  0.0      0    0 ?        I<  Jun20  0:00 [inet_frag_wq]
    root          63  0.0  0.0      0    0 ?        S    Jun20  0:00 [kauditd]
    root          65  0.0  0.0      0    0 ?        S    Jun20  0:00 [khungtaskd]
    root          66  0.0  0.0      0    0 ?        S    Jun20  0:00 [oom_reaper]
    root          67  0.0  0.0      0    0 ?        I<  Jun20  0:00 [writeback]
    root          68  0.0  0.0      0    0 ?        S    Jun20  0:11 [kcompactd0]
    root          69  0.0  0.0      0    0 ?        SN  Jun20  0:00 [ksmd]
    root          70  0.0  0.0      0    0 ?        SN  Jun20  0:01 [khugepaged]
    root        116  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kintegrityd]
    root        117  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kblockd]
    root        118  0.0  0.0      0    0 ?        I<  Jun20  0:00 [blkcg_punt_bio]
    root        119  0.0  0.0      0    0 ?        I    Jun20  0:00 [kworker/6:1-events]
    root        120  0.0  0.0      0    0 ?        I<  Jun20  0:00 [tpm_dev_wq]
    root        121  0.0  0.0      0    0 ?        I<  Jun20  0:00 [ata_sff]
    root        122  0.0  0.0      0    0 ?        I<  Jun20  0:00 [md]
    root        123  0.0  0.0      0    0 ?        I<  Jun20  0:00 [edac-poller]
    root        124  0.0  0.0      0    0 ?        I<  Jun20  0:00 [devfreq_wq]
    root        125  0.0  0.0      0    0 ?        S    Jun20  0:00 [watchdogd]
    root        129  0.0  0.0      0    0 ?        I<  Jun20  0:09 [kworker/4:1H-kblockd]
    root        134  0.0  0.0      0    0 ?        S    Jun20  0:08 [kswapd0]
    root        135  0.0  0.0      0    0 ?        S    Jun20  0:00 [ecryptfs-kthrea]
    root        137  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kthrotld]
    root        138  0.0  0.0      0    0 ?        I<  Jun20  0:00 [acpi_thermal_pm]
    root        140  0.0  0.0      0    0 ?        S    Jun20  0:00 [scsi_eh_0]
    root        141  0.0  0.0      0    0 ?        I<  Jun20  0:00 [scsi_tmf_0]
    root        142  0.0  0.0      0    0 ?        S    Jun20  0:00 [scsi_eh_1]
    root        143  0.0  0.0      0    0 ?        I<  Jun20  0:00 [scsi_tmf_1]
    root        145  0.0  0.0      0    0 ?        I<  Jun20  0:00 [vfio-irqfd-clea]
    root        146  0.0  0.0      0    0 ?        I<  Jun20  0:00 [mld]
    root        147  0.0  0.0      0    0 ?        I<  Jun20  0:02 [kworker/6:1H-kblockd]
    root        148  0.0  0.0      0    0 ?        I<  Jun20  0:00 [ipv6_addrconf]
    root        158  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kstrp]
    root        161  0.0  0.0      0    0 ?        I<  Jun20  0:00 [zswap-shrink]
    root        167  0.0  0.0      0    0 ?        I<  Jun20  0:00 [charger_manager]
    root        204  0.0  0.0      0    0 ?        I<  Jun20  0:02 [kworker/0:1H-kblockd]
    root        210  0.0  0.0      0    0 ?        S    Jun20  0:00 [scsi_eh_2]
    root        211  0.0  0.0      0    0 ?        I<  Jun20  0:00 [scsi_tmf_2]
    root        224  0.0  0.0      0    0 ?        I<  Jun20  0:00 [cryptd]
    root        234  0.0  0.0      0    0 ?        I<  Jun20  0:02 [kworker/3:1H-kblockd]
    root        235  0.0  0.0      0    0 ?        I<  Jun20  0:02 [kworker/1:1H-kblockd]
    root        238  0.0  0.0      0    0 ?        I<  Jun20  0:03 [kworker/7:1H-kblockd]
    root        239  0.0  0.0      0    0 ?        I<  Jun20  0:02 [kworker/2:1H-kblockd]
    root        243  0.0  0.0      0    0 ?        I<  Jun20  0:03 [kworker/5:1H-kblockd]
    root        305  0.0  0.0      0    0 ?        I<  Jun20  0:00 [raid5wq]
    root        352  0.0  0.0      0    0 ?        S    Jun20  0:49 [jbd2/sda2-8]
    root        353  0.0  0.0      0    0 ?        I<  Jun20  0:00 [ext4-rsv-conver]
    root        429  0.0  0.5 195284 83492 ?        S<s  Jun20  0:33 /lib/systemd/systemd-journald
    root        465  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kaluad]
    root        467  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kmpath_rdacd]
    root        468  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kmpathd]
    root        469  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kmpath_handlerd]
    root        473  0.0  0.1 289348 27100 ?        SLsl Jun20  0:18 /sbin/multipathd -d -s
    root        477  0.0  0.0  25876  6704 ?        Ss  Jun20  0:01 /lib/systemd/systemd-udevd
    root        532  0.0  0.0      0    0 ?        I    Jun20  0:02 [kworker/2:3-events]
    _rpc        547  0.0  0.0  8100  4132 ?        Ss  Jun20  0:00 /sbin/rpcbind -f -w
    systemd+    548  0.0  0.0  89352  6516 ?        Ssl  Jun20  0:00 /lib/systemd/systemd-timesyncd
    root        553  0.0  0.0      0    0 ?        I<  Jun20  0:00 [rpciod]
    root        554  0.0  0.0      0    0 ?        I<  Jun20  0:00 [xprtiod]
    systemd+    704  0.0  0.0  16116  7968 ?        Ss  Jun20  0:01 /lib/systemd/systemd-networkd
    root        706  0.0  0.0      0    0 ?        S    Jun20  0:00 [nv_queue]
    root        707  0.0  0.0      0    0 ?        S    Jun20  0:00 [nv_queue]
    systemd+    710  0.0  0.0  25528 12768 ?        Ss  Jun20  0:00 /lib/systemd/systemd-resolved
    root        712  0.0  0.0      0    0 ?        S    Jun20  0:00 [nvidia-modeset/]
    root        713  0.0  0.0      0    0 ?        S    Jun20  0:00 [nvidia-modeset/]
    root        723  0.0  0.0      0    0 ?        S    Jun20  0:00 [UVM global queu]
    root        724  0.0  0.0      0    0 ?        S    Jun20  0:00 [UVM deferred re]
    root        725  0.0  0.0      0    0 ?        S    Jun20  0:00 [UVM Tools Event]
    message+    748  0.0  0.0  8868  4768 ?        Ss  Jun20  0:00 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
    root        753  0.0  0.0  82772  4004 ?        Ssl  Jun20  0:09 /usr/sbin/irqbalance --foreground
    root        755  0.0  0.1  32796 17188 ?        Ss  Jun20  0:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
    nvidia-+    757  0.0  0.0  5308  1960 ?        Ss  Jun20  0:00 /usr/bin/nvidia-persistenced --user nvidia-persistenced --no-persistence-mode --verbose
    root        758  0.0  0.0 239268  8956 ?        Ssl  Jun20  0:00 /usr/libexec/polkitd --no-debug
    root        759  0.0  0.0      0    0 ?        I<  Jun20  0:00 [nfsiod]
    syslog      760  0.0  0.0 222400  5348 ?        Ssl  Jun20  0:05 /usr/sbin/rsyslogd -n -iNONE
    root        768  0.0  0.2 1540728 42540 ?      Ssl  Jun20  0:21 /usr/lib/snapd/snapd
    root        776  0.0  0.0  48124  7860 ?        Ss  Jun20  0:00 /lib/systemd/systemd-logind
    root        784  0.0  0.0 392572 12856 ?        Ssl  Jun20  0:00 /usr/libexec/udisks2/udisksd
    root        792  0.0  0.0  15420  9152 ?        Ss  Jun20  0:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
    root        802  0.0  0.0 317956 12032 ?        Ssl  Jun20  0:00 /usr/sbin/ModemManager
    root        834  0.0  0.0      0    0 ?        S    Jun20  0:00 [NFSv4 callback]
    root        848  0.0  0.1 109748 19840 ?        Ssl  Jun20  0:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
    root        853  0.0  0.0  6892  2936 ?        Ss  Jun20  0:00 /usr/sbin/cron -f -P
    daemon      855  0.0  0.0  3860  1296 ?        Ss  Jun20  0:00 /usr/sbin/atd -f
    root        866  0.0  0.0  6172  1084 tty1    Ss+  Jun20  0:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
    root        867  2.0  0.3 865432 62216 ?        Ssl  Jun20  35:10 /usr/sbin/tailscaled --state=/var/lib/tailscale/tailscaled.state --socket=/run/tailscale/tailscaled.sock --port=41641
    root        2098  0.0  0.0  17172 10844 ?        Ss  Jun20  0:00 sshd: pfs [priv]
    pfs        2101  0.0  0.0  17052  9592 ?        Ss  Jun20  0:00 /lib/systemd/systemd --user
    pfs        2102  0.0  0.0 169308  3856 ?        S    Jun20  0:00 (sd-pam)
    pfs        2183  0.0  0.0  17304  7892 ?        R    Jun20  0:00 sshd: pfs@pts/0
    pfs        2184  0.0  0.0  8732  5184 pts/0    Ss  Jun20  0:00 -bash
    jellyfin    2204  3.2 17.7 7034120 2850508 ?    Ssl  Jun20  51:08 /usr/bin/jellyfin --webdir=/usr/share/jellyfin/web --restartpath=/usr/lib/jellyfin/restart.sh --ffmpeg=/usr/lib/jellyfin-ffmpeg/ffmpeg
    root        6153  0.0  0.0      0    0 ?        I    Jun20  0:03 [kworker/1:1-events]
    root        7138  0.0  0.0 239608  8608 ?        Ssl  Jun20  0:00 /usr/libexec/upowerd
    root        7638  0.0  0.0      0    0 ?        I    Jun20  0:00 [kworker/5:0-mm_percpu_wq]
    root        9266  0.0  0.0      0    0 ?        I    03:29  0:00 [kworker/1:0-cgroup_destroy]
    root        9705  0.0  0.1 295552 20412 ?        Ssl  03:30  0:00 /usr/libexec/packagekitd
    root      10208  0.0  0.0      0    0 ?        I    04:37  0:00 [kworker/7:0-events]
    root      13694  0.0  0.0      0    0 ?        I    13:12  0:00 [kworker/2:1-events]
    root      14418  0.0  0.0      0    0 ?        I    15:07  0:00 [kworker/4:0-events]
    root      14524  0.0  0.0      0    0 ?        I    15:24  0:00 [kworker/6:0-mm_percpu_wq]
    root      14796  0.0  0.0      0    0 ?        I    16:12  0:00 [kworker/0:0-events]
    root      15423  0.0  0.4 429948 70436 ?        Ssl  17:59  0:02 /usr/libexec/fwupd/fwupd
    root      15456  0.0  0.0      0    0 ?        I    17:59  0:00 [kworker/3:2-events]
    root      15547  0.0  0.0      0    0 ?        I    18:09  0:00 [kworker/5:1-mm_percpu_wq]
    root      15717  0.0  0.0      0    0 ?        I    18:34  0:01 [kworker/u16:6-events_unbound]
    root      15780  0.0  0.0      0    0 ?        I    18:46  0:00 [kworker/0:2-events]
    root      15874  0.0  0.0      0    0 ?        I    18:57  0:00 [kworker/4:2-events]
    root      15900  0.0  0.0      0    0 ?        I    19:03  0:00 [kworker/7:1-events]
    root      15931  0.0  0.0      0    0 ?        I    19:10  0:00 [kworker/3:1-events]
    root      15944  0.0  0.0      0    0 ?        I    19:12  0:00 [kworker/u16:5-nfsiod]
    root      15946  0.0  0.0      0    0 ?        I    19:12  0:00 [kworker/u16:7-nfsiod]
    jellyfin  15988  2.9  1.1 9498676 191252 ?      Sl  19:16  0:23 /usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 50000000 -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda -hwaccel_output_format cuda -threads 1 -a
    root      15990  0.3  0.0      0    0 ?        S    19:16  0:02 [irq/38-nvidia]
    root      15991  0.0  0.0      0    0 ?        S    19:16  0:00 [nvidia]
    root      15992  0.0  0.0      0    0 ?        S    19:16  0:00 [nv_queue]
    root      15994  0.0  0.0      0    0 ?        S    19:16  0:00 [UVM GPU1 BH]
    root      16015  0.0  0.0      0    0 ?        I<  19:18  0:00 [kworker/u17:2-xprtiod]
    root      16021  0.0  0.0      0    0 ?        I<  19:19  0:00 [kworker/u17:3-xprtiod]
    root      16047  0.0  0.0      0    0 ?        I    19:26  0:00 [kworker/u16:0-nfsiod]
    root      16052  0.0  0.0      0    0 ?        I    19:27  0:00 [kworker/u16:1-events_unbound]
    root      16053  0.0  0.0      0    0 ?        I<  19:27  0:00 [kworker/u17:0-xprtiod]
    root      16057  0.0  0.0      0    0 ?        I    19:27  0:00 [kworker/u16:2-rpciod]
    pfs        16073  0.0  0.0  10068  1612 pts/0    R+  19:30  0:00 ps aux
    joshuaboniface
    Offline

    Project Leader

    Posts: 115
    Threads: 25
    Joined: 2023 Jun
    Reputation: 16
    Country: Canada
    #14
    2023-06-21, 08:09 PM
    Looking through the entries, Jellyfin itself is using 17% of the memory, which if it's 16G is 2.7GB so pretty close to what systemctl said. FFmpeg is also using another 1.1%, and since it's a child process of Jellyfin, it would be counted, so we're basically at the ~3GB that systemctl reported:

    Code:
    USER        PID %CPU %MEM    VSZ  RSS TTY      STAT START  TIME COMMAND
    [///]
    jellyfin    2204  3.2 17.7 7034120 2850508 ?    Ssl  Jun20  51:08 /usr/bin/jellyfin --webdir=/usr/share/jellyfin/web --restartpath=/usr/lib/jellyfin/restart.sh --ffmpeg=/usr/lib/jellyfin-ffmpeg/ffmpeg
    jellyfin  15988  2.9  1.1 9498676 191252 ?      Sl  19:16  0:23 /usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 50000000 -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda -hwaccel_output_format cuda -threads 1 -a

    However, nothing else in the output seems to be using any more RAM than this.
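    The arithmetic above can be sanity-checked directly; a minimal sketch, assuming the VM has 16 GiB (16384 MiB) of RAM:

    ```shell
    # ps reports %MEM as RSS divided by total RAM.
    # jellyfin 17.7% + ffmpeg child 1.1% = 18.8% of 16384 MiB:
    awk 'BEGIN { total = 16384; pct = 0.177 + 0.011
                 printf "%.0f MiB (~%.1f GiB)\n", total*pct, total*pct/1024 }'
    # → 3080 MiB (~3.0 GiB)
    ```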

    Looking back at your graph, all it says is "Memory Usage". What monitoring tool is this? It's very likely that what you're seeing is not Jellyfin - or anything - actually using all the RAM, but Linux's page cache being put to use. https://www.linuxatemyram.com/ This is super common to see in more Linux-naive monitoring tools that treat anything using memory as "used" even though that memory is actually free to applications.

    You can check this yourself with free -m and look at the buff/cache, available, and free columns. "free" will, after some time, get very low, while buff/cache grows and available stays high. buff/cache is memory used by the page cache, i.e. files cached in memory for faster access, while available is the memory that applications can actually allocate (free plus reclaimable cache).

    Graphing/monitoring of Linux memory usage should differentiate between these values so you can see actual application memory usage versus page cache usage, but your graph doesn't seem to, so I suspect you're not seeing the whole picture there and are thus thinking that Jellyfin is "using all the RAM" when in reality it's only using ~19% of it.

    Really, things only become a problem when you actually start seeing swap space being used. If swap isn't being used, even sitting at "100%" memory usage is fine as it's all page cache. Once things start swapping, or you start getting Out Of Memory errors in your syslogs (or dmesg), then you are truly out of memory.

    Hopefully that helps!
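    The free -m breakdown described above can be read mechanically; a sketch, assuming the post-3.3.10 procps-ng column layout:

    ```shell
    # On procps-ng >= 3.3.10, the "Mem:" line of `free -m` has columns:
    #   total  used  free  shared  buff/cache  available
    # A low "free" with a large "buff/cache" just means the page cache is
    # doing its job; "available" is what applications can still allocate.
    free -m | awk '/^Mem:/ {
        printf "total=%s used=%s free=%s buff/cache=%s available=%s (MiB)\n",
               $2, $3, $4, $6, $7 }'
    # Swap actually in use is the real warning sign:
    free -m | awk '/^Swap:/ { print "swap used:", $3, "MiB" }'
    ```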
    natzilla
    Offline

    Junior Member

    Posts: 26
    Threads: 3
    Joined: 2023 Jun
    Reputation: 0
    #15
    2023-06-21, 09:21 PM
    (2023-06-21, 08:09 PM)joshuaboniface Wrote: Looking through the entries, Jellyfin itself is using 17% of the memory, which if it's 16G is 2.7GB so pretty close to what systemctl said. [...] Once things start swapping, or you start getting Out Of Memory errors in your syslogs (or dmesg), then you are truly out of memory.

    This does help put it into perspective. The graph is from Proxmox: I selected the Ubuntu VM, and this is how it reports the memory usage. The one thing I have noticed is that the logs look much better than before, when the webhook was enabled. I will continue to monitor and see if I eventually hit the same non-responsive state I have been facing every few days.
    joshuaboniface
    Offline

    Project Leader

    Posts: 115
    Threads: 25
    Joined: 2023 Jun
    Reputation: 16
    Country: Canada
    #16
    2023-06-21, 11:58 PM
    Aah, knowing that it's Proxmox definitely explains it. Because of how KVM virtualization in Proxmox works, from Proxmox's perspective the RAM is indeed "used" by the VM even though it's just page cache data inside the VM: the hypervisor can't hand that RAM out to other VMs, unlike inside the VM itself, where that memory can easily be reused by another application.

    Best thing to do would just be to shrink the VM. If Jellyfin isn't using 16GB (and it isn't, not even close), the VM doesn't need to be 16GB and thus it won't steal 16GB of RAM from the hypervisor. Based on your usage a 4GB VM is probably plenty.
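    On the Proxmox side, resizing the allocation is a one-liner with qm; a sketch, where VM ID 100 is a placeholder (the new size takes effect on the next VM stop/start):

    ```shell
    # Run on the Proxmox host, not inside the guest; replace 100 with your VM ID.
    if command -v qm >/dev/null 2>&1; then
        # Shrink the VM's RAM allocation to 4 GiB
        qm set 100 --memory 4096
        # Alternative: keep the ceiling but let the hypervisor reclaim idle RAM
        # via ballooning:  qm set 100 --memory 16384 --balloon 4096
    else
        echo "qm not found; run this on the Proxmox host"
    fi
    ```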
    natzilla
    Offline

    Junior Member

    Posts: 26
    Threads: 3
    Joined: 2023 Jun
    Reputation: 0
    #17
    2023-06-24, 12:06 AM
    (2023-06-21, 11:58 PM)joshuaboniface Wrote: Aah knowing that it's ProxMox definitely explains it. Because of how the KVM virtualization in ProxMox works, from ProxMox's perspective, the RAM is indeed "used" by the VM even though it's just page cache data inside the VM, because the hypervisor can't hand that RAM out to other VMs. Unlike inside the VM itself where that memory could easily be used by another application.

    Best thing to do would just be to shrink the VM. If Jellyfin isn't using 16GB (and it isn't, not even close), the VM doesn't need to be 16GB and thus it won't steal 16GB of RAM from the hypervisor. Based on your usage a 4GB VM is probably plenty.

    Ok, so today I was advised the server had become unresponsive and slow, not doing anything for playback requests. I grabbed logs when it was reported, and I am seeing thread pool starvation messages in them. It does not seem the RAM is really to blame, as ps aux showed only around 27% in use, while systemctl showed 13.2G. I've attached logs. The only way to restore playback is by restarting the VM.

    Code:
    ● jellyfin.service - Jellyfin Media Server
        Loaded: loaded (/lib/systemd/system/jellyfin.service; enabled; vendor preset: enabled)
        Drop-In: /etc/systemd/system/jellyfin.service.d
                └─jellyfin.service.conf
        Active: active (running) since Tue 2023-06-20 17:11:44 UTC; 3 days ago
      Main PID: 2204 (jellyfin)
          Tasks: 1893 (limit: 18652)
        Memory: 13.2G
            CPU: 7h 22min 50.748s
        CGroup: /system.slice/jellyfin.service
                └─2204 /usr/bin/jellyfin --webdir=/usr/share/jellyfin/web --restartpath=/usr/lib/jellyfin/restart.sh --ffmpeg=/usr/lib/jellyfin-ffmpeg/ffmpeg

    Jun 23 23:50:20 jellyfin jellyfin[2204]: [23:50:20] [WRN] Slow HTTP Response from https://x.x.x.x/Sessions/Playing/Progress to 67.169.246.80 in 0:00:52.9345716 with Status Code 204
    Jun 23 23:50:37 jellyfin jellyfin[2204]: [23:50:37] [WRN] As of "06/23/2023 23:50:27 +00:00", the heartbeat has been running for "00:00:10.0824389" which is longer than "00:00:01". This could be caused by thread pool starvatio>
    Jun 23 23:50:43 jellyfin jellyfin[2204]: [23:50:43] [WRN] Slow HTTP Response from https://x.x.x.x/Sessions/Playing/Progress to 67.169.246.80 in 0:00:31.6790499 with Status Code 204
    Jun 23 23:50:58 jellyfin jellyfin[2204]: [23:50:58] [WRN] Slow HTTP Response from https://x.x.x.x/Sessions/Playing/Progress to 67.169.246.80 in 0:01:01.8857423 with Status Code 204
    Jun 23 23:51:10 jellyfin jellyfin[2204]: [23:51:10] [WRN] Slow HTTP Response from https://x.x.x.x/Sessions/Playing/Progress to 67.169.246.80 in 0:00:39.1877933 with Status Code 204
    Jun 23 23:51:15 jellyfin jellyfin[2204]: [23:51:15] [WRN] Slow HTTP Response from https://x.x.x.x/Sessions/Playing/Progress to 67.169.246.80 in 0:01:33.2583702 with Status Code 204
    Jun 23 23:51:20 jellyfin jellyfin[2204]: [23:51:20] [WRN] Slow HTTP Response from https://x.x.x.x/Sessions/Playing/Progress to 67.169.246.80 in 0:02:07.9183341 with Status Code 204
    Jun 23 23:51:51 jellyfin jellyfin[2204]: [23:51:51] [WRN] Slow HTTP Response from https://x.x.x.x/Sessions/Playing/Progress to 67.169.246.80 in 0:01:09.2732972 with Status Code 204
    Jun 23 23:52:17 jellyfin jellyfin[2204]: [23:52:17] [WRN] Slow HTTP Response from https://x.x.x.x/Sessions/Playing/Progress to 67.169.246.80 in 0:01:20.6886333 with Status Code 204
    Jun 23 23:52:45 jellyfin jellyfin[2204]: [23:52:45] [WRN] As of "06/23/2023 23:52:41 +00:00", the heartbeat has been running for "00:00:03.8813143" which is longer than "00:00:01". This could be caused by thread pool starvatio>

    Output of the date command: Fri Jun 23 11:53:19 PM UTC 2023

    Code:
    USER        PID %CPU %MEM    VSZ  RSS TTY      STAT START  TIME COMMAND
    root          1  0.0  0.0 167744 12208 ?        Ss  Jun20  0:13 /sbin/init
    root          2  0.0  0.0      0    0 ?        S    Jun20  0:01 [kthreadd]
    root          3  0.0  0.0      0    0 ?        I<  Jun20  0:00 [rcu_gp]
    root          4  0.0  0.0      0    0 ?        I<  Jun20  0:00 [rcu_par_gp]
    root          5  0.0  0.0      0    0 ?        I<  Jun20  0:00 [slub_flushwq]
    root          6  0.0  0.0      0    0 ?        I<  Jun20  0:00 [netns]
    root          8  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/0:0H-events_highpri]
    root          10  0.0  0.0      0    0 ?        I<  Jun20  0:00 [mm_percpu_wq]
    root          11  0.0  0.0      0    0 ?        S    Jun20  0:00 [rcu_tasks_rude_]
    root          12  0.0  0.0      0    0 ?        S    Jun20  0:00 [rcu_tasks_trace]
    root          13  0.0  0.0      0    0 ?        S    Jun20  0:00 [ksoftirqd/0]
    root          14  0.0  0.0      0    0 ?        I    Jun20  0:53 [rcu_sched]
    root          15  0.0  0.0      0    0 ?        S    Jun20  0:02 [migration/0]
    root          16  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/0]
    root          18  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/0]
    root          19  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/1]
    root          20  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/1]
    root          21  0.0  0.0      0    0 ?        S    Jun20  0:02 [migration/1]
    root          22  0.0  0.0      0    0 ?        S    Jun20  0:00 [ksoftirqd/1]
    root          24  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/1:0H-events_highpri]
    root          25  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/2]
    root          26  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/2]
    root          27  0.0  0.0      0    0 ?        S    Jun20  0:02 [migration/2]
    root          28  0.0  0.0      0    0 ?        S    Jun20  0:00 [ksoftirqd/2]
    root          30  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/2:0H-events_highpri]
    root          31  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/3]
    root          32  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/3]
    root          33  0.0  0.0      0    0 ?        S    Jun20  0:02 [migration/3]
    root          34  0.0  0.0      0    0 ?        S    Jun20  2:13 [ksoftirqd/3]
    root          36  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/3:0H-events_highpri]
    root          37  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/4]
    root          38  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/4]
    root          39  0.0  0.0      0    0 ?        S    Jun20  0:02 [migration/4]
    root          40  0.0  0.0      0    0 ?        S    Jun20  0:00 [ksoftirqd/4]
    root          42  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/4:0H-events_highpri]
    root          43  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/5]
    root          44  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/5]
    root          45  0.0  0.0      0    0 ?        S    Jun20  0:02 [migration/5]
    root          46  0.0  0.0      0    0 ?        S    Jun20  0:00 [ksoftirqd/5]
    root          48  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/5:0H-events_highpri]
    root          49  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/6]
    root          50  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/6]
    root          51  0.0  0.0      0    0 ?        S    Jun20  0:02 [migration/6]
    root          52  0.0  0.0      0    0 ?        S    Jun20  0:00 [ksoftirqd/6]
    root          54  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/6:0H-events_highpri]
    root          55  0.0  0.0      0    0 ?        S    Jun20  0:00 [cpuhp/7]
    root          56  0.0  0.0      0    0 ?        S    Jun20  0:00 [idle_inject/7]
    root          57  0.0  0.0      0    0 ?        S    Jun20  0:02 [migration/7]
    root          58  0.0  0.0      0    0 ?        S    Jun20  0:06 [ksoftirqd/7]
    root          60  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kworker/7:0H-events_highpri]
    root          61  0.0  0.0      0    0 ?        S    Jun20  0:00 [kdevtmpfs]
    root          62  0.0  0.0      0    0 ?        I<  Jun20  0:00 [inet_frag_wq]
    root          63  0.0  0.0      0    0 ?        S    Jun20  0:00 [kauditd]
    root          65  0.0  0.0      0    0 ?        S    Jun20  0:00 [khungtaskd]
    root          66  0.0  0.0      0    0 ?        S    Jun20  0:00 [oom_reaper]
    root          67  0.0  0.0      0    0 ?        I<  Jun20  0:00 [writeback]
    root          68  0.0  0.0      0    0 ?        S    Jun20  0:55 [kcompactd0]
    root          69  0.0  0.0      0    0 ?        SN  Jun20  0:00 [ksmd]
    root          70  0.0  0.0      0    0 ?        SN  Jun20  0:04 [khugepaged]
    root        116  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kintegrityd]
    root        117  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kblockd]
    root        118  0.0  0.0      0    0 ?        I<  Jun20  0:00 [blkcg_punt_bio]
    root        120  0.0  0.0      0    0 ?        I<  Jun20  0:00 [tpm_dev_wq]
    root        121  0.0  0.0      0    0 ?        I<  Jun20  0:00 [ata_sff]
    root        122  0.0  0.0      0    0 ?        I<  Jun20  0:00 [md]
    root        123  0.0  0.0      0    0 ?        I<  Jun20  0:00 [edac-poller]
    root        124  0.0  0.0      0    0 ?        I<  Jun20  0:00 [devfreq_wq]
    root        125  0.0  0.0      0    0 ?        S    Jun20  0:00 [watchdogd]
    root        129  0.0  0.0      0    0 ?        I<  Jun20  0:38 [kworker/4:1H-kblockd]
    root        134  0.0  0.0      0    0 ?        S    Jun20  0:42 [kswapd0]
    root        135  0.0  0.0      0    0 ?        S    Jun20  0:00 [ecryptfs-kthrea]
    root        137  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kthrotld]
    root        138  0.0  0.0      0    0 ?        I<  Jun20  0:00 [acpi_thermal_pm]
    root        140  0.0  0.0      0    0 ?        S    Jun20  0:00 [scsi_eh_0]
    root        141  0.0  0.0      0    0 ?        I<  Jun20  0:00 [scsi_tmf_0]
    root        142  0.0  0.0      0    0 ?        S    Jun20  0:00 [scsi_eh_1]
    root        143  0.0  0.0      0    0 ?        I<  Jun20  0:00 [scsi_tmf_1]
    root        145  0.0  0.0      0    0 ?        I<  Jun20  0:00 [vfio-irqfd-clea]
    root        146  0.0  0.0      0    0 ?        I<  Jun20  0:00 [mld]
    root        147  0.0  0.0      0    0 ?        I<  Jun20  0:12 [kworker/6:1H-kblockd]
    root        148  0.0  0.0      0    0 ?        I<  Jun20  0:00 [ipv6_addrconf]
    root        158  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kstrp]
    root        161  0.0  0.0      0    0 ?        I<  Jun20  0:00 [zswap-shrink]
    root        167  0.0  0.0      0    0 ?        I<  Jun20  0:00 [charger_manager]
    root        204  0.0  0.0      0    0 ?        I<  Jun20  0:10 [kworker/0:1H-kblockd]
    root        210  0.0  0.0      0    0 ?        S    Jun20  0:00 [scsi_eh_2]
    root        211  0.0  0.0      0    0 ?        I<  Jun20  0:00 [scsi_tmf_2]
    root        224  0.0  0.0      0    0 ?        I<  Jun20  0:00 [cryptd]
    root        234  0.0  0.0      0    0 ?        I<  Jun20  0:09 [kworker/3:1H-kblockd]
    root        235  0.0  0.0      0    0 ?        I<  Jun20  0:09 [kworker/1:1H-kblockd]
    root        238  0.0  0.0      0    0 ?        I<  Jun20  0:14 [kworker/7:1H-kblockd]
    root        239  0.0  0.0      0    0 ?        I<  Jun20  0:09 [kworker/2:1H-kblockd]
    root        243  0.0  0.0      0    0 ?        I<  Jun20  0:15 [kworker/5:1H-kblockd]
    root        305  0.0  0.0      0    0 ?        I<  Jun20  0:00 [raid5wq]
    root        352  0.0  0.0      0    0 ?        S    Jun20  3:34 [jbd2/sda2-8]
    root        353  0.0  0.0      0    0 ?        I<  Jun20  0:00 [ext4-rsv-conver]
    root        429  0.0  0.1  72660 21188 ?        S<s  Jun20  1:56 /lib/systemd/systemd-journald
    root        465  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kaluad]
    root        467  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kmpath_rdacd]
    root        468  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kmpathd]
    root        469  0.0  0.0      0    0 ?        I<  Jun20  0:00 [kmpath_handlerd]
    root        473  0.0  0.1 289348 27100 ?        SLsl Jun20  0:51 /sbin/multipathd -d -s
    root        477  0.0  0.0  25876  5680 ?        Ss  Jun20  0:01 /lib/systemd/systemd-udevd
    _rpc        547  0.0  0.0  8100  4128 ?        Ss  Jun20  0:00 /sbin/rpcbind -f -w
    systemd+    548  0.0  0.0  89352  6188 ?        Ssl  Jun20  0:01 /lib/systemd/systemd-timesyncd
    root        553  0.0  0.0      0    0 ?        I<  Jun20  0:00 [rpciod]
    root        554  0.0  0.0      0    0 ?        I<  Jun20  0:00 [xprtiod]
    systemd+    704  0.0  0.0  16116  7592 ?        Ss  Jun20  0:04 /lib/systemd/systemd-networkd
    root        706  0.0  0.0      0    0 ?        S    Jun20  0:00 [nv_queue]
    root        707  0.0  0.0      0    0 ?        S    Jun20  0:00 [nv_queue]
    systemd+    710  0.0  0.0  25528 12412 ?        Ss  Jun20  0:01 /lib/systemd/systemd-resolved
    root        712  0.0  0.0      0    0 ?        S    Jun20  0:00 [nvidia-modeset/]
    root        713  0.0  0.0      0    0 ?        S    Jun20  0:00 [nvidia-modeset/]
    root        723  0.0  0.0      0    0 ?        S    Jun20  0:00 [UVM global queu]
    root        724  0.0  0.0      0    0 ?        S    Jun20  0:00 [UVM deferred re]
    root        725  0.0  0.0      0    0 ?        S    Jun20  0:00 [UVM Tools Event]
    message+    748  0.0  0.0  8868  4552 ?        Ss  Jun20  0:01 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
    root        753  0.0  0.0  82772  4000 ?        Ssl  Jun20  0:28 /usr/sbin/irqbalance --foreground
    root        755  0.0  0.0  32796 13832 ?        Ss  Jun20  0:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
    nvidia-+    757  0.0  0.0  5308  1960 ?        Ss  Jun20  0:00 /usr/bin/nvidia-persistenced --user nvidia-persistenced --no-persistence-mode --verbose
    root        758  0.0  0.0 239268  8768 ?        Ssl  Jun20  0:00 /usr/libexec/polkitd --no-debug
    root        759  0.0  0.0      0    0 ?        I<  Jun20  0:00 [nfsiod]
    syslog      760  0.0  0.0 222400  5444 ?        Ssl  Jun20  0:18 /usr/sbin/rsyslogd -n -iNONE
    root        768  0.0  0.2 1540728 44136 ?      Ssl  Jun20  0:47 /usr/lib/snapd/snapd
    root        776  0.0  0.0  48124  7436 ?        Ss  Jun20  0:01 /lib/systemd/systemd-logind
    root        784  0.0  0.0 392572 12260 ?        Ssl  Jun20  0:01 /usr/libexec/udisks2/udisksd
    root        792  0.0  0.0  15420  7744 ?        Ss  Jun20  0:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
    root        802  0.0  0.0 317956 11580 ?        Ssl  Jun20  0:00 /usr/sbin/ModemManager
    root        834  0.0  0.0      0    0 ?        S    Jun20  0:00 [NFSv4 callback]
    root        848  0.0  0.1 109748 16632 ?        Ssl  Jun20  0:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
    root        853  0.0  0.0  6892  2932 ?        Ss  Jun20  0:01 /usr/sbin/cron -f -P
    daemon      855  0.0  0.0  3860  1296 ?        Ss  Jun20  0:00 /usr/sbin/atd -f
    root        866  0.0  0.0  6172  1084 tty1    Ss+  Jun20  0:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
    root        867  2.9  0.3 865688 59908 ?        Ssl  Jun20 144:02 /usr/sbin/tailscaled --state=/var/lib/tailscale/tailscaled.state --socket=/run/tailscale/tailscaled.sock --port=41641
    root        2098  0.0  0.0  17172  9112 ?        Ss  Jun20  0:00 sshd: pfs [priv]
    pfs        2101  0.0  0.0  17052  8816 ?        Ss  Jun20  0:00 /lib/systemd/systemd --user
    pfs        2102  0.0  0.0 169308  3780 ?        S    Jun20  0:00 (sd-pam)
    pfs        2183  0.0  0.0  17304  6424 ?        R    Jun20  0:00 sshd: pfs@pts/0
    pfs        2184  0.0  0.0  8732  5180 pts/0    Ss  Jun20  0:00 -bash
    jellyfin    2204  5.0 27.3 24647540 4385952 ?    Ssl  Jun20 239:27 /usr/bin/jellyfin --webdir=/usr/share/jellyfin/web --restartpath=/usr/lib/jellyfin/restart.sh --ffmpeg=/usr/lib/jellyfin-ffmpeg/ffmpeg
    root        7138  0.0  0.0 239608  8532 ?        Ssl  Jun20  0:00 /usr/libexec/upowerd
    root        9705  0.0  0.1 295552 20152 ?        Ssl  Jun21  0:03 /usr/libexec/packagekitd
    root      26286  0.0  0.0      0    0 ?        I    Jun22  0:03 [kworker/6:3-mm_percpu_wq]
    root      33299  0.0  0.0      0    0 ?        I    06:49  0:03 [kworker/0:3-events]
    root      33833  0.0  0.0      0    0 ?        I    08:30  0:01 [kworker/2:1-mm_percpu_wq]
    root      36092  0.0  0.0      0    0 ?        I    13:19  0:00 [kworker/5:1]
    root      36816  0.0  0.0      0    0 ?        I    15:22  0:01 [kworker/5:0-mm_percpu_wq]
    root      36934  0.0  0.0      0    0 ?        I    15:41  0:00 [kworker/2:0-events]
    root      37067  0.0  0.0      0    0 ?        I    15:59  0:01 [kworker/4:1-events]
    root      37208  0.0  0.0      0    0 ?        I    16:22  0:00 [kworker/1:1-events]
    root      37423  0.0  0.0      0    0 ?        I    17:04  0:00 [kworker/6:0-mm_percpu_wq]
    root      37623  0.0  0.0      0    0 ?        I    17:35  0:00 [kworker/0:0-cgroup_destroy]
    root      44858  0.0  0.0      0    0 ?        I    20:11  0:00 [kworker/3:2-events]
    root      45080  0.0  0.0      0    0 ?        I    20:44  0:01 [kworker/4:0-mm_percpu_wq]
    root      45556  0.0  0.0      0    0 ?        I    21:50  0:00 [kworker/7:0-mm_percpu_wq]
    root      45640  0.0  0.0      0    0 ?        I    22:02  0:00 [kworker/7:1-mm_percpu_wq]
    root      47174  0.0  0.0      0    0 ?        I    23:21  0:00 [kworker/1:3-mm_percpu_wq]
    root      47339  0.0  0.0      0    0 ?        I    23:24  0:00 [kworker/u16:5-events_unbound]
    root      47619  0.0  0.0      0    0 ?        I    23:31  0:00 [kworker/u16:1-flush-8:0]
    root      47633  0.0  0.0      0    0 ?        I<  23:32  0:00 [kworker/u17:0-xprtiod]
    root      47652  0.0  0.0      0    0 ?        I<  23:32  0:00 [kworker/u17:3-xprtiod]
    root      48217  0.0  0.0      0    0 ?        I    23:41  0:00 [kworker/3:0-mm_percpu_wq]
    root      48244  0.0  0.0      0    0 ?        R    23:42  0:00 [kworker/u16:0-events_unbound]
    root      48269  0.0  0.0      0    0 ?        I    23:48  0:00 [kworker/u16:2-flush-8:0]
    root      48276  0.0  0.0      0    0 ?        I    23:51  0:00 [kworker/7:2-mm_percpu_wq]
    pfs        48278  0.0  0.0  10068  1608 pts/0    R+  23:51  0:00 ps aux



    After restarting ===========================================================================================

    ● jellyfin.service - Jellyfin Media Server
        Loaded: loaded (/lib/systemd/system/jellyfin.service; enabled; vendor preset: enabled)
        Drop-In: /etc/systemd/system/jellyfin.service.d
                └─jellyfin.service.conf
        Active: active (running) since Fri 2023-06-23 23:55:01 UTC; 9min ago
      Main PID: 754 (jellyfin)
          Tasks: 44 (limit: 18652)
        Memory: 1.0G
            CPU: 59.870s
        CGroup: /system.slice/jellyfin.service
                └─754 /usr/bin/jellyfin --webdir=/usr/share/jellyfin/web --restartpath=/usr/lib/jellyfin/restart.sh --ffmpeg=/usr/lib/jellyfin-ffmpeg/ffmpeg

    Jun 24 00:00:01 jellyfin jellyfin[754]: [00:00:01] [INF] Daily trigger for Playback Reporting Trim Db set to fire at 2023-06-25 00:00:00.000 +00:00, which is 23:59:58.9954745 from now.
    Jun 24 00:00:12 jellyfin jellyfin[754]: [00:00:12] [INF] Playback stopped reported by app Jellyfin Mobile (iOS) 1.5.0 playing Bedazzled. Stopped at 5588581 ms
    Jun 24 00:00:12 jellyfin jellyfin[754]: [00:00:12] [INF] Playback stop did not have a tracker : cb75206c-7d16-4fa0-89c8-6a898044ad14-9e742a13ff6c46c590273ef186d46136-d4872b55ec85282b1a294b8427912cd6
    Jun 24 00:01:00 jellyfin jellyfin[754]: [00:01:00] [INF] DailyTrigger fired for task: Remove Old Cached Data
    Jun 24 00:01:00 jellyfin jellyfin[754]: [00:01:00] [INF] Queuing task HousekeepingTask
    Jun 24 00:01:00 jellyfin jellyfin[754]: [00:01:00] [INF] Executing Remove Old Cached Data
    Jun 24 00:01:00 jellyfin jellyfin[754]: [00:01:00] [INF] Remove Old Cached Data Completed after 0 minute(s) and 0 seconds
    Jun 24 00:01:00 jellyfin jellyfin[754]: [00:01:00] [INF] ExecuteQueuedTasks
    Jun 24 00:01:01 jellyfin jellyfin[754]: [00:01:01] [INF] Daily trigger for Remove Old Cached Data set to fire at 2023-06-25 00:01:00.000 +00:00, which is 23:59:58.9946607 from now.
    Jun 24 00:04:30 jellyfin jellyfin[754]: [00:04:30] [WRN] Slow HTTP Response from https://x.x.x.x/Items/4a590227d0f2d46216...16&quality=>
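One way to tell whether usage creeps back up after a restart like this is to log the same figure systemctl shows periodically. A rough sketch (the unit name jellyfin.service is taken from the status output above; the sample count and interval are placeholder values, something like 600 seconds is more useful in practice):

```shell
# Append a timestamped reading of the unit's memory use; MemoryCurrent
# is the same figure "systemctl status" shows on its Memory: line.
# INTERVAL is kept small here for illustration; 600 (10 min) is more useful.
for i in $(seq 1 "${SAMPLES:-3}"); do
    printf '%s %s\n' "$(date -Is)" \
        "$(systemctl show -p MemoryCurrent --value jellyfin.service)"
    sleep "${INTERVAL:-1}"
done >> jellyfin-mem.log
```

Plotting or just eyeballing that log should show whether the growth is steady or tied to specific events (playback, scans, transcodes).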


    Attached Files
    .zip   logs-06-23.zip (Size: 125.41 KB / Downloads: 65)
    joshuaboniface
    Offline

    Project Leader

    Posts: 115
    Threads: 25
    Joined: 2023 Jun
    Reputation: 16
    Country: Canada
    #18
    2023-06-24, 07:50 AM
    OK, there's definitely something wrong there; it should not be consuming that much RAM. It's definitely weird that ps aux is reporting so much less than systemctl as well. Unless someone beats me to it, I'll check out the logs soon.
    natzilla
    Offline

    Junior Member

    Posts: 26
    Threads: 3
    Joined: 2023 Jun
    Reputation: 0
    #19
    2023-06-26, 03:19 PM
    (2023-06-24, 07:50 AM)joshuaboniface Wrote: OK, there's definitely something wrong there; it should not be consuming that much RAM. It's definitely weird that ps aux is reporting so much less than systemctl as well. Unless someone beats me to it, I'll check out the logs soon.

    I want to add that it might have something to do with transcoding, but I'm unsure, at least from the behavior I saw over the weekend.
    joshuaboniface
    Offline

    Project Leader

    Posts: 115
    Threads: 25
    Joined: 2023 Jun
    Reputation: 16
    Country: Canada
    #20
    2023-06-26, 04:35 PM
    (2023-06-26, 03:19 PM)natzilla Wrote:
    (2023-06-24, 07:50 AM)joshuaboniface Wrote: OK, there's definitely something wrong there; it should not be consuming that much RAM. It's definitely weird that ps aux is reporting so much less than systemctl as well. Unless someone beats me to it, I'll check out the logs soon.

    I want to add that it might have something to do with transcoding, but I'm unsure, at least from the behavior I saw over the weekend.

    It definitely seems related, though nothing jumped out in the logs either. I'd be really curious what the ps aux and free -m (and htop for good measure) outputs look like while doing transcodes.
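A loop like this can capture all three views at once while a transcode runs (a sketch; the unit name jellyfin.service is assumed, and the sample count and sleep are kept small here, something like SAMPLES=20 SLEEP=30 gives about ten minutes of data on the real server):

```shell
# Sample system and per-process memory periodically, then share the log.
# Resolve Jellyfin's main PID from systemd rather than hardcoding it.
PID=$(systemctl show -p MainPID --value jellyfin.service)
for i in $(seq 1 "${SAMPLES:-3}"); do
    date
    free -m
    ps -o pid,rss,vsz,%mem,cmd -p "$PID"
    echo "----"
    sleep "${SLEEP:-1}"
done | tee jellyfin-mem-during-transcode.log
```

Starting a transcode partway through the run makes any jump in RSS easy to line up with the timestamps.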