Technical limits with Jellyfin ? - Printable Version

+- Jellyfin Forum (https://forum.jellyfin.org)
+-- Forum: Support (https://forum.jellyfin.org/f-support)
+--- Forum: Troubleshooting (https://forum.jellyfin.org/f-troubleshooting)
+--- Thread: Technical limits with Jellyfin ? (/t-technical-limits-with-jellyfin)
RE: Technical limits with Jellyfin ? - ZmgwsYzhM2nV - 2023-10-06

(2023-10-06, 03:57 PM)TheDreadPirate Wrote: While having your database on a ZFS file system is possible, it is very much not recommended. Database READ performance is fine, but when you are writing to a database on a ZFS file system there are serious performance penalties.

As I'm thinking of transferring my Jellyfin install to a TrueNAS Scale server (using the built-in TrueCharts app), are you recommending against that? Even if I store the config files etc. on an SSD within that system, it would still be ZFS...

RE: Technical limits with Jellyfin ? - TheDreadPirate - 2023-10-06

(2023-10-06, 10:50 PM)ZmgwsYzhM2nV Wrote: As I'm thinking of transferring my Jellyfin install to a TrueNAS Scale server (using the built-in TrueCharts app), are you recommending against that? Even if I store the config files etc. on an SSD within that system, it would still be ZFS...

If the ZFS file system is a single SSD and the block size is 4KB or 8KB, it would be acceptable. The problems start when the database is on a multi-disk ZFS array, where the block size is usually something larger, 128KB or more.

ZFS is a copy-on-write file system, meaning that instead of modifying a block in place, it writes a new copy of the entire block being modified. So no matter how small a change you make, you are writing at least one full block. If you are making A LOT of write transactions to the Jellyfin database, which happens during a library scan, there is a lot of unnecessary writing, and that seriously reduces database performance.

This is probably not noticeable for a small library with the database on an SSD. But if the database is on a hard-drive-based ZFS array with, probably, larger block sizes, combined with a large library and, thus, a large database, you are going to run into serious performance issues. You are going to experience something called "write amplification".
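The block-rewrite arithmetic behind write amplification can be sketched in a few lines of Python. All the numbers below are hypothetical, chosen only to mirror the discussion; the model assumes each small logical write dirties a single ZFS record that must be rewritten in full:

```python
# Illustrative write-amplification arithmetic for a database on a CoW file
# system. Assumption: one small logical write dirties one record, and ZFS
# rewrites that entire record (copy-on-write) rather than updating in place.

def write_amplification(logical_write_bytes: int, recordsize: int) -> float:
    """Ratio of bytes physically written to bytes logically changed."""
    physical_bytes = recordsize  # at minimum, one full record is rewritten
    return physical_bytes / logical_write_bytes

# A 4 KiB database page update on an 8 KiB recordsize dataset:
print(write_amplification(4096, 8192))    # 2.0
# The same 4 KiB update on a 128 KiB recordsize dataset:
print(write_amplification(4096, 131072))  # 32.0
```

The same 4 KiB change costs 16x more physical writing at the larger recordsize, which is why the single-SSD small-block case above is called acceptable while the large-recordsize array case is not.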
If you want to write 50MB to your database, the file system forces you to write 1GB (numbers pulled out of my butt).

It sounds like the OP used a symlink to move the Jellyfin data directory onto the ZFS array storing their library, instead of leaving it on the root file system, which is usually EXT4 on Linux.

RE: Technical limits with Jellyfin ? - ZmgwsYzhM2nV - 2023-10-07

Right. It's kind of a bummer, because the whole appeal of something like TrueNAS is data integrity and security. Then again, in my case it would probably be OK: there are only a few hundred movies, I maybe add one every couple of days, I'm the only user, etc. Anyway, sorry for derailing the thread a little.

RE: Technical limits with Jellyfin ? - vincen - 2023-10-07

(2023-10-06, 03:57 PM)TheDreadPirate Wrote: While having your database on a ZFS file system is possible, it is very much not recommended. Database READ performance is fine, but when you are writing to a database on a ZFS file system there are serious performance penalties.

This is the first time I've heard of such issues with ZFS. Do you have any technical references about it? I also use Plex on that server, which likewise uses a flat-file DB, and storing it on a ZFS volume has never been a problem. Unhappy Right now I'm not able to move the DB to ext4 storage, all the more so with all the metadata directories that go with it.

(2023-10-06, 11:26 PM)TheDreadPirate Wrote: If the ZFS file system is a single SSD and the block size is 4KB or 8KB, it would be acceptable. The problems start when the database is on a multi-disk ZFS array, where the block size is usually something larger, 128KB or more.

Thanks for the clarification about the "issue" with ZFS, but I have two additional questions:
-> Is it still a problem on a fast ZFS array? Mine reads/writes at around 2.5 GB/s, which should compensate for the problem, no?
-> The issue is only for write operations, so nearly only when you scan libraries, right?
Outside of that, the amount of data written to the DB is ridiculously small, no?

RE: Technical limits with Jellyfin ? - TheDreadPirate - 2023-10-07

(2023-10-07, 09:47 AM)vincen Wrote: Thanks for the clarification about the "issue" with ZFS, but I have two additional questions:

Throughput isn't the problem. It is I/O performance. As you add disks to a ZFS array, or any kind of RAID array, your SEQUENTIAL throughput increases, but your random I/O performance does not increase nearly as much. You are physically limited by how fast the read/write head in a hard drive can move around. Random I/O performance is very important for database operations, read or write. In general, we recommend that the Jellyfin databases reside on an SSD.

Your issues are compounded by having a large database on a hard-drive-based ZFS array. And since you have the entire /var/lib/jellyfin directory on the same array, your database is competing for resources with the Jellyfin cache and with reads of your media being sent to users.

RE: Technical limits with Jellyfin ? - Kevin Nord - 2023-11-14

(2023-10-06, 03:57 PM)TheDreadPirate Wrote: While having your database on a ZFS file system is possible, it is very much not recommended. Database READ performance is fine, but when you are writing to a database on a ZFS file system there are serious performance penalties.

Hi, it sounds like you have a fair bit of knowledge on this, and I'm curious. Why does a DB on ZFS perform so differently than on EXT4? Additionally, how does BTRFS stack up? I've never really thought much about performance on different file system types. Thank you for planting that seed in my head.

RE: Technical limits with Jellyfin ? - TheDreadPirate - 2023-11-14

ZFS is a CoW (copy-on-write) file system, meaning that when data is modified, the modified blocks are written to a new location instead of being overwritten in place on the disk. This isn't a big deal for most things.
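TheDreadPirate's point above, that raw throughput doesn't rescue random I/O on spinning disks, can be quantified with a back-of-the-envelope estimate. Every figure here is an illustrative assumption (roughly 200 MB/s sequential and roughly 100 random IOPS per 7200 rpm disk are common ballpark numbers), not a measurement of any particular array:

```python
# Back-of-the-envelope: sequential vs random throughput for an HDD array.
# Assumptions (illustrative only): ~200 MB/s sequential and ~100 seek-limited
# random IOPS per disk; striping scales both linearly across disks.

def sequential_mb_s(disks: int, per_disk_mb_s: float = 200.0) -> float:
    """Streaming throughput scales with disk count."""
    return disks * per_disk_mb_s

def random_mb_s(disks: int, iops_per_disk: float = 100.0,
                io_size_bytes: int = 4096) -> float:
    """Random throughput is capped by seek-limited IOPS, not bandwidth."""
    return disks * iops_per_disk * io_size_bytes / 1e6

print(sequential_mb_s(6))  # 1200.0 MB/s streaming media
print(random_mb_s(6))      # ~2.5 MB/s of 4 KiB random database I/O
```

Under these assumptions, a six-disk array that streams media at over 1 GB/s sustains only a couple of MB/s of small random database writes, which is why an array that "reads/writes at 2.5 GB/s" does not compensate for the problem.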
But for a database, where you are making a lot of small changes very frequently to the same file, this can result in significant write amplification, meaning that a small number of small write operations results in a large number of bytes written. This can be problematic both from a performance perspective and from an SSD endurance perspective. BTRFS is also a CoW file system.

Having said all of that, my understanding is that the use of journal files by SQLite, and other modern DBs, mitigates these issues. So having your DB on ZFS or BTRFS isn't as bad as it used to be, as long as it is on an SSD. Especially if you intend to use the snapshot capabilities of ZFS and BTRFS.
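For anyone checking their own setup, the journaling mentioned above corresponds to SQLite's write-ahead log, which turns many small in-place page updates into sequential appends to a separate WAL file. A minimal sketch with only the Python standard library (the database filename is hypothetical):

```python
# Minimal sketch: enabling SQLite's write-ahead log (WAL), the journal
# mechanism that batches small writes into sequential appends and so softens
# CoW write amplification. The filename is a hypothetical example.

import sqlite3

conn = sqlite3.connect("jellyfin-example.db")  # hypothetical database file
# PRAGMA journal_mode=WAL returns the mode actually in effect.
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
print(mode)  # "wal" on file systems that support it

conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO items (name) VALUES (?)", ("example",))
conn.commit()  # commits append to the WAL file, not the main database pages
conn.close()
```

WAL mode is persistent per database file, so it only needs to be set once; Jellyfin's own database configuration may already do this, so treat the snippet as a way to inspect and experiment rather than a required fix.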