"Too many open files in system error" during library scan - Printable Version

+- Jellyfin Forum (https://forum.jellyfin.org)
+-- Forum: Support (https://forum.jellyfin.org/f-support)
+--- Forum: Troubleshooting (https://forum.jellyfin.org/f-troubleshooting)
+--- Thread: "Too many open files in system error" during library scan (/t-too-many-open-files-in-system-error-during-library-scan)
"Too many open files in system error" during library scan - awkward-gopher - 2025-01-29

Greetings! I'm running Jellyfin in a Docker container on a spare macOS laptop, via docker compose.

Code:
jellyfin:

I mount a volume called "jellyfin" via SMB from a Synology NAS, which is where all of my media is stored. When I run a library scan, it runs for a while, then these errors start pouring out:

Code:
jellyfin | System.IO.IOException: Too many open files in system : '/data/shows/Yada-Yada/Season 4/metadata/YadaYada.jpg'

I've noticed that some media which is present in the file structure is missing from the web UI, and I suspect these errors are the root cause (though I could be wrong). I've checked the ulimit on the macOS host as well as within the Docker container; all of the limits I've seen appear quite high, or I've already raised them. Does anyone have any good troubleshooting ideas? Here are some questions that come to mind:

1. Is there some way to slow the scan down a bit to allow the file handles to close in time?
2. Is there some way of testing by opening a bunch of file handles on my SMB mount? Maybe this is only a problem over SMB?
3. Is there a bug that's keeping the file handles open for longer than they should be?

RE: "Too many open files in system error" during library scan - TheDreadPirate - 2025-01-29

I believe there is a ulimit for Samba that is separate from the host's. And, AFAICT, the default behavior of Samba is to use some hardcoded limit. I've found Linux instructions for configuring Samba to use the host system's ulimit, which is configurable, but I'm not sure how that translates to macOS.

https://serverfault.com/questions/325608/samba-stuck-at-maximum-of-1024-open-files

If you have real-time monitoring enabled, try disabling that?

RE: "Too many open files in system error" during library scan - awkward-gopher - 2025-01-30

I'd already disabled real-time monitoring, to no effect.
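[Editor's note: the descriptor-exhaustion test the thread goes on to describe was truncated from this printable export. A minimal sketch of that kind of test, addressing question 2 above, might look like the following; the file names are placeholders, and the ulimit line is only there so a local demo finishes quickly.]

```shell
#!/bin/bash
# Sketch of a file-descriptor exhaustion test (hypothetical; not the
# thread's exact script). Pass the SMB-mounted directory as $1, or let
# it default to a temp directory for a quick local demonstration.
dir="${1:-$(mktemp -d)}"
target="$dir/fdtest.$$"
echo hello > "$target"

# Lower the soft limit so the demo finishes fast; drop this line when
# probing a real mount's actual behavior.
ulimit -S -n 64

i=0
while true; do
  i=$((i + 1))
  # Allocate a fresh descriptor each iteration and never close it.
  if ! exec {fd}<"$target" 2>/dev/null; then
    echo "open failed after $i attempts"
    break
  fi
done
```

Run against an SMB mount (without the ulimit line), this should reproduce the behavior described below: an immediate "Too many open files" on a local filesystem, versus a slow crawl over SMB.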
I do think there's something going on with Samba in my case, and I'm going to dive into that a bit further. I tried running a simple script to open enough file descriptors to hit the limit. This let me test directly on the NAS via ssh, as well as from macOS.

Code:
i=0

On the NAS, this runs almost instantly, fails at 1012 open files, and returns "Too many open files". From the Mac, accessing the same location via Samba, it's much slower (no real surprise there) and eventually crawls to about one new attempt per minute, but it never hits a "Too many open files" error. So I'm now more confident that something related to SMB is the issue here. Thanks so much for your time, and let me know if you think of anything else I should try. I do understand that an SMB mount issue is somewhat outside the scope of this forum.

RE: "Too many open files in system error" during library scan - awkward-gopher - 2025-01-31

Well, I did something absolutely wacky, and it worked. Here's how it breaks down: SMB support on macOS is just really unstable, and I couldn't get things working that way. Instead, I baked a new container; I'll provide the Dockerfile and entrypoint for reference in case anyone is interested. With these changes, the SMB share is mounted *within* the container, instead of being mounted in macOS and accessed via a Docker volume. Here's how it works.

I added a new Dockerfile:

Code:
FROM jellyfin/jellyfin:latest

And the entrypoint:

Code:
#!/bin/bash

And an update to the compose service:

Code:
jellyfin:

With this in place, everything works and I'm able to scan the library. I think it might be a bit snappier as well, but that's not measured, just a feeling.

RE: "Too many open files in system error" during library scan - gnattu - 2025-01-31

I don't recommend using Docker on macOS unless you are fine with giving up all hardware acceleration capabilities. Also, the way you are checking the max open files on macOS is not right.
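[Editor's note: the Dockerfile, entrypoint, and compose files above were truncated to their first lines in this printable export. A rough sketch of the in-container mount approach, not the poster's exact files, could look like this; the share name, credentials, and environment variable names are assumptions.]

```dockerfile
# Sketch: extend the official image with cifs-utils so the container can
# mount the SMB share itself instead of relying on a host bind mount.
FROM jellyfin/jellyfin:latest
RUN apt-get update \
 && apt-get install -y --no-install-recommends cifs-utils \
 && rm -rf /var/lib/apt/lists/*
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

```shell
#!/bin/bash
# Sketch of an entrypoint that mounts the NAS share in-container before
# starting Jellyfin. NAS_HOST, SMB_USER, and SMB_PASS are assumed to be
# supplied via the compose service's environment.
set -e
mkdir -p /data
mount -t cifs "//${NAS_HOST}/jellyfin" /data \
  -o "username=${SMB_USER},password=${SMB_PASS},iocharset=utf8"
exec /jellyfin/jellyfin "$@"
```

Note that for mount to succeed inside the container, the compose service needs elevated privileges (e.g. cap_add: [SYS_ADMIN], or privileged: true), which is presumably part of the compose update the poster mentions.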
You should run launchctl limit; the default is only 256 open files. To make a persistent change that increases this limit, save the following as limit.maxfiles.plist, put it under /Library/LaunchDaemons/, then reboot your Mac. On the next boot, launchctl limit should show 524288 for maxfiles.

Bind-mounting a remote network share into Docker was never stable, especially on macOS/Windows, where such a mount has to cross a VM boundary. If you have to use Docker, I recommend using a Docker volume with a CIFS backend instead of hacking mount points into the container. It is easier and works more reliably.

Code:
<?xml version="1.0" encoding="UTF-8"?>
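[Editor's note: the plist body was truncated to its first line in this export. The commonly documented form of such a launchd job, matching the 524288 figure mentioned above, looks like this:]

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>limit.maxfiles</string>
    <key>ProgramArguments</key>
    <array>
      <string>launchctl</string>
      <string>limit</string>
      <string>maxfiles</string>
      <string>524288</string>
      <string>524288</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
```

And a sketch of the CIFS-backed named volume this post recommends, using Docker's local volume driver; the server name and credentials are placeholders:

```yaml
volumes:
  media:
    driver: local
    driver_opts:
      type: cifs
      device: //nas.local/jellyfin
      o: "username=myuser,password=mypass,uid=1000,gid=1000"
```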