Jellyfin Forum
"Too many open files in system error" during library scan - Printable Version

+- Jellyfin Forum (https://forum.jellyfin.org)
+-- Forum: Support (https://forum.jellyfin.org/f-support)
+--- Forum: Troubleshooting (https://forum.jellyfin.org/f-troubleshooting)
+--- Thread: "Too many open files in system error" during library scan (/t-too-many-open-files-in-system-error-during-library-scan)



"Too many open files in system error" during library scan - awkward-gopher - 2025-01-29

Greetings!

I'm running Jellyfin in a Docker container on a spare macOS laptop, via docker compose.

Code:
  jellyfin:
    container_name: jellyfin
    image: jellyfin/jellyfin
    network_mode: host
    restart: unless-stopped
    volumes:
      - ./jellyfin-config:/config
      - /Volumes/jellyfin:/data
    ports:
      - "8096:8096"
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles

I mount a volume via SMB called "jellyfin" from a Synology NAS which is where all of my media is stored. When I run a library scan, it will run for a while, then these errors will start pouring out:

Code:
jellyfin  | System.IO.IOException: Too many open files in system : '/data/shows/Yada-Yada/Season 4/metadata/YadaYada.jpg'
jellyfin  |    at System.IO.FileSystemInfo.Create(String fullPath, String fileName, Boolean asDirectory, FileStatus& fileStatus)
jellyfin  |    at System.IO.Enumeration.FileSystemEntry.ToFileSystemInfo()
jellyfin  |    at System.IO.Enumeration.FileSystemEnumerator`1.MoveNext()
jellyfin  |    at System.Linq.Enumerable.SelectEnumerableIterator`2.ToArray()
jellyfin  |    at MediaBrowser.Controller.Providers.DirectoryService.GetFiles(String path)
jellyfin  |    at MediaBrowser.LocalMetadata.Images.EpisodeLocalImageProvider.GetImages(BaseItem item, IDirectoryService directoryService)
jellyfin  |    at System.Linq.Enumerable.SelectManySingleSelectorIterator`2.ToList()
jellyfin  |    at MediaBrowser.Providers.Manager.MetadataService`2.RefreshMetadata(BaseItem item, MetadataRefreshOptions refreshOptions, CancellationToken cancellationToken)

I've noticed that some media present in the file structure is missing from the web UI, and I suspect these errors are the root cause (though I could be wrong).

I've tried checking the ulimit on the macOS host, as well as within the Docker container. Every limit I've checked is either already quite high or one I raised myself.
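In case it helps, here's roughly how I checked the limits (a sketch; the container name "jellyfin" comes from my compose file above):

Code:
```shell
#!/usr/bin/env bash
# Print the soft and hard open-file limits for the current shell.
echo "soft limit: $(ulimit -Sn)"
echo "hard limit: $(ulimit -Hn)"
```

The same check can be run inside the container with: docker exec jellyfin sh -c 'ulimit -n'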

Does anyone have any good troubleshooting ideas? Here are some questions that come to mind.

1. Is there some way to slow the scan down a bit to allow the file handles to close in time?
2. Is there some way of testing by opening a bunch of file handles on my SMB mount? Maybe this is only a problem over SMB?
3. Is there a bug that's keeping the file handles open for longer than they should be?


RE: "Too many open files in system error" during library scan - TheDreadPirate - 2025-01-29

I believe there is a ulimit for Samba that is separate from the host's. And, AFAICT, Samba's default behavior is to use a hardcoded limit. I've found Linux instructions for configuring Samba to use the host system's ulimit, which is configurable, but I'm not sure how that translates to macOS.

https://serverfault.com/questions/325608/samba-stuck-at-maximum-of-1024-open-files
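For reference, the knob that thread is about is Samba's "max open files" parameter, which on most Linux setups goes in the [global] section of smb.conf (whether Synology's DSM exposes this file for editing, I'm not sure):

Code:
```
[global]
    # Upper bound on files smbd keeps open per client connection;
    # by default it is derived from smbd's own resource limits.
    max open files = 16384
```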

If you have real time monitoring enabled, try disabling that?


RE: "Too many open files in system error" during library scan - awkward-gopher - 2025-01-30

I'd already disabled real time monitoring to no effect. I do think there's something going on with Samba in my case, and I'm going to try to dive into that a bit further.

I tried running a simple script that opens file descriptors until it hits a limit. This let me test directly on the NAS via SSH, as well as from macOS.

Code:
#!/usr/bin/env bash
# Keep opening read-only descriptors until the OS refuses.
i=0
while true; do
    touch "testfile_$i"
    exec {fd}<"testfile_$i" || break   # {fd} auto-allocates the next free descriptor
    echo "Opened file descriptor $i"
    ((i++))
done

echo "Hit limit at $i open files!"


On the NAS, this runs instantly and fails at 1012 open files with "Too many open files".

From the Mac, accessing the same location but via Samba, it's much slower (no real surprise there). It eventually slows to a crawl, roughly one new attempt per minute, but never hits a "Too many open files" error. So I'm a bit more confident that something related to SMB is the issue here.
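For anyone who wants to watch the descriptor count directly, here's a rough Linux-only sketch using /proc (I'm using the shell's own PID for illustration; inside the container you'd substitute jellyfin's PID):

Code:
```shell
#!/usr/bin/env bash
# Count the open file descriptors held by a PID via /proc (Linux only).
# $$ (this shell) is used purely for illustration; substitute the PID of
# the process you actually care about, e.g. jellyfin's.
pid=$$
count=$(ls "/proc/$pid/fd" | wc -l)
echo "PID $pid has $count open file descriptors"
```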

Thanks so much for your time, and let me know if you think of anything else I should try. I do understand that an SMB mount issue is sort of outside the scope of this forum, though.


RE: "Too many open files in system error" during library scan - awkward-gopher - 2025-01-31

Well, I did something absolutely wacky, and it worked.

Here's how it breaks down: SMB support on macOS is just really unstable, and I couldn't get it working that way. Instead, I baked a new container image; I'll provide the Dockerfile and entrypoint below for reference, in case anyone is interested. With these changes, the SMB share is mounted *within* the container, instead of being mounted in macOS and accessed via a Docker bind mount.

Here's how it works:

Added a new Dockerfile:

Code:
FROM jellyfin/jellyfin:latest

# Install CIFS utilities for mounting SMB shares
RUN apt-get update && \
    apt-get install -y cifs-utils && \
    apt-get clean

# Copy custom entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# Override entrypoint
ENTRYPOINT ["/entrypoint.sh"]

And the entrypoint
Code:
#!/bin/bash

set -e

# Ensure required env vars are set
if [[ -z "$SMB_SERVER" || -z "$SMB_SHARE" || -z "$SMB_USER" || -z "$SMB_PASS" ]]; then
    echo "Missing SMB environment variables! Exiting."
    exit 1
fi

# Define the mount point
MOUNT_POINT="/data"

# Create the mount directory if it doesn't exist
mkdir -p "$MOUNT_POINT"

# Attempt to mount the SMB share
echo "Mounting SMB share: //$SMB_SERVER/$SMB_SHARE at $MOUNT_POINT..."
mount -t cifs "//$SMB_SERVER/$SMB_SHARE" "$MOUNT_POINT" \
    -o username="$SMB_USER",password="$SMB_PASS",vers=3.0,uid=1000,gid=1000,iocharset=utf8,rw

# Verify the mount succeeded
if ! mountpoint -q "$MOUNT_POINT"; then
    echo "Failed to mount SMB share! Exiting."
    exit 1
fi

echo "SMB share mounted successfully."

# Run the default Jellyfin entrypoint
exec /jellyfin/jellyfin

And an update to the compose service
Code:
  jellyfin:
    container_name: jellyfin
    build:
      context: .
      dockerfile: Dockerfile
    privileged: true
    cap_add:
      - SYS_ADMIN
      - DAC_READ_SEARCH
    devices:
      - /dev/fuse
    tmpfs:
      - /run
      - /tmp
    network_mode: host
    restart: unless-stopped
    volumes:
      - ./jellyfin-config:/config
    ports:
      - "8096:8096"
    environment:
      PUID: 1000
      PGID: 1000
      TZ: America/Los_Angeles
      SMB_SERVER: my-server-name
      SMB_SHARE: jellyfin
      SMB_USER: username
      SMB_PASS: password

By doing this, everything works and I'm able to scan the library. I think it might be a bit snappier as well. But that's not measured, just a feeling.


RE: "Too many open files in system error" during library scan - gnattu - 2025-01-31

I don't recommend using Docker on macOS unless you are fine with giving up all hardware acceleration capabilities. Also, the way you are checking the max open files on macOS is not right: run launchctl limit, and you'll see the default is only 256 open files.

To make a persistent change that increases this limit, paste the following into a file named limit.maxfiles.plist, put it under /Library/LaunchDaemons/, then reboot your Mac. On the next boot, launchctl limit should show 524288 for maxfiles.

Bind mounts for remote network shares on Docker were never stable, especially on macOS/Windows, where such a mount has to cross a VM boundary. If you have to use Docker, I recommend a Docker volume with a CIFS backend instead of hacking mount points into the container. It is easier and works more reliably.

Code:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
      <string>limit.maxfiles</string>
    <key>ProgramArguments</key>
      <array>
        <string>launchctl</string>
        <string>limit</string>
        <string>maxfiles</string>
        <string>524288</string>
        <string>524288</string>
      </array>
    <key>RunAtLoad</key>
      <true/>
  </dict>
</plist>
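For the CIFS-backed Docker volume approach, the compose change looks roughly like this (a sketch; server name, share, and credentials are placeholders taken from the earlier posts):

Code:
```yaml
volumes:
  jellyfin-media:
    driver: local
    driver_opts:
      type: cifs
      # Docker performs the CIFS mount itself when the volume is first used
      o: "username=username,password=password,vers=3.0,uid=1000,gid=1000"
      device: "//my-server-name/jellyfin"
```

The jellyfin service then mounts it like any other volume (jellyfin-media:/data), with no privileged flag, cap_add, or custom entrypoint needed.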