2026-03-02, 09:41 PM
(This post was last modified: 2026-03-03, 02:20 AM by mjd. Edited 1 time in total.)
TL;DR: An automated check of the /health endpoint caused a connection leak in my Caddy reverse proxy.
Posting here since it took me a minute to troubleshoot this and maybe someone else will come across it too.
Running Jellyfin 10.11.6 behind Caddy 2.11.1. I have a remote uptime monitor hitting https://jellyfin.mycaddyserver/health every 60 seconds. Over time (even within a few hours) the Caddy server was holding over 7,000 ESTABLISHED TCP connections to the Jellyfin backend, even though nobody besides the uptime monitor was accessing the server. The Jellyfin logs showed a constant flood of:
Code:
[2026-03-02 00:29:46.802 -05:00] [INF] WS "[REDACTED-IP]" request
[2026-03-02 00:29:51.805 -05:00] [INF] WS "[REDACTED-IP]" request
[2026-03-02 00:29:56.813 -05:00] [INF] WS "[REDACTED-IP]" request
[2026-03-02 00:29:58.703 -05:00] [INF] Sending ForceKeepAlive message to 3 inactive WebSockets.
[2026-03-02 00:29:58.703 -05:00] [INF] Lost 3 WebSockets.

Caddy was holding on to those connections to Jellyfin even after the uptime monitor's requests had finished. So I added this to the reverse_proxy block in my Caddyfile:
Code:
reverse_proxy jellyfin.internal:8096 {
    transport http {
        # How long an idle upstream connection is kept open before Caddy closes it
        keepalive 30s
        # Caps the number of idle connections kept warm per backend
        keepalive_idle_conns 10
    }
}

The count immediately dropped to fewer than 10 ESTABLISHED connections and stayed there. When I reloaded Caddy, Jellyfin logged a burst of these:
Code:
[2026-03-02 14:49:12.644 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.644 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.645 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.645 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.645 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.645 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.645 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.645 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.645 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.645 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.645 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.645 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.645 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.645 -05:00] [INF] WS "[REDACTED-IP]" closed
[2026-03-02 14:49:12.645 -05:00] [INF] WS "[REDACTED-IP]" closed

My assumption is that a request to the /health endpoint shouldn't be holding open a WebSocket connection at all. But if anyone else runs into instability with a reverse proxy and health checks: there you go.
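If you want to check whether your own proxy is doing this, one quick way is to count ESTABLISHED sockets to the backend port from the Caddy host. This assumes a Linux box with iproute2's ss installed, and uses Jellyfin's default HTTP port 8096 (adjust for your setup):

Code:
# Count ESTABLISHED TCP connections to the Jellyfin backend port.
# 8096 is Jellyfin's default HTTP port - change it if yours differs.
# tail strips the header line ss prints above the results.
ss -tn state established '( dport = :8096 )' | tail -n +2 | wc -l

A healthy setup should hover somewhere near your keepalive_idle_conns value; a number that climbs steadily over hours means something is holding connections open.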

