2026-03-01, 05:44 PM
(This post was last modified: 2026-03-01, 05:57 PM by kosem. Edited 1 time in total.)
tldr; iptables magic to make Jellyfin client auto discovery work for a Docker container on a bridge network
Hi guys, not sure if this is the right place, but I just had to share this one.
I'm running Jellyfin in Docker on a Linux host, and so far it's been a great experience.
Recently, though, I realized that my Docker bridge network setup breaks client Auto Discovery.
Client Auto Discovery works via UDP broadcast on your LAN segment, using port 7359.
UDP broadcasts don't make it through a Docker bridge network, so auto discovery fails.
I've seen many people online, and on the Jellyfin forums as well, trying to get it working on bridge, and all the advice boils down to switching to network: host.
For obvious security reasons, that wouldn't be my first choice.
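For reference, the discovery exchange can be sketched in Python. The probe payload "who is JellyfinServer?" and the JSON reply fields are my understanding of the Jellyfin discovery protocol; treat them as assumptions and adjust for your server version:

```python
# Sketch of the Jellyfin client auto-discovery exchange (assumption: the
# probe is the string "who is JellyfinServer?" and the reply is a small
# JSON blob whose "Address" field carries the server URL).
import json
import socket

DISCOVERY_PORT = 7359
PROBE = b"who is JellyfinServer?"

def parse_reply(data: bytes) -> dict:
    """Decode the server's JSON reply (server URL under 'Address')."""
    return json.loads(data.decode("utf-8"))

def discover(timeout: float = 2.0):
    """Broadcast the probe on the LAN and collect any replies."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.settimeout(timeout)
        s.sendto(PROBE, ("255.255.255.255", DISCOVERY_PORT))
        replies = []
        try:
            while True:
                data, addr = s.recvfrom(4096)
                replies.append((addr, parse_reply(data)))
        except socket.timeout:
            pass
        return replies
```

On a working LAN, `discover()` returns one entry per reachable server; on a bridge-networked container it returns nothing, which is exactly the problem below.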
When you use a bridge network and expose UDP port 7359 with the normal docker or docker-compose syntax, Docker creates a DNAT rule (rewriting the destination to the Jellyfin container) behind the scenes using iptables.
The main issue: although the incoming broadcast traffic gets through DNAT and reaches the Jellyfin container, the Linux host doesn't keep a conntrack entry (connection state) for broadcast traffic. So the packet reaches the container, but when the reply returns to the Docker host it stops there, because there's no state recording the original client IP (the request was a broadcast). This is default Docker NAT behavior; the broadcast is what breaks it.
An example of the problem:
The Android Jellyfin client (192.168.1.5) sends a discovery broadcast on the LAN on port 7359.
The host (192.168.1.10:RANDOM_PORT_1) receives the request and DNATs it to the Jellyfin container (172.16.0.2:7359).
Docker host (172.16.0.1:RANDOM_PORT_2) -> Jellyfin container (172.16.0.2:7359) - normal Docker NAT behavior.
Jellyfin container (172.16.0.2:7359) -> Docker host (172.16.0.1:RANDOM_PORT_2) - the server's reply with the Jellyfin URL info etc.
....
Nothing happens. Why?
There's no conntrack entry for the original sender, the Android Jellyfin client (192.168.1.5).
The Jellyfin container only sees the packet after NAT, so from its point of view the sender IP is 172.16.0.1 (the Docker host). Dead end.
One option would be to disable NAT completely, but then you'd have to manage all the iptables rules for every exposed port yourself.
So what's the magic trick?
After an extended fight with the Docker ecosystem, Linux iptables, and some GPT, I came across the TEE target in iptables.
TEE clones a packet and routes the clone through a gateway of your choice.
Why is that great? We can route incoming broadcast traffic for UDP port 7359 straight into the container; the container then sees the original sender's address and can reply directly:
```bash
sudo iptables -t mangle -I PREROUTING -p udp --dport 7359 -j TEE --gateway IP_OF_JELLYFIN_CONTAINER
```
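To avoid hard-coding the container IP, you can look it up with `docker inspect` and build the rule from that. A minimal sketch, assuming a container named "jellyfin" and a single bridge network (both are assumptions); the rule itself must be applied as root on the host:

```python
# Sketch: resolve the container's bridge IP via `docker inspect`, then
# assemble the iptables TEE rule as an argument list. The container name
# "jellyfin" is an assumption - substitute your own.
import subprocess

def container_ip(name: str = "jellyfin") -> str:
    """Return the container's first bridge-network IP via `docker inspect`."""
    out = subprocess.run(
        ["docker", "inspect", "-f",
         "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}", name],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

def tee_rule(gateway_ip: str, port: int = 7359) -> list:
    """Build the iptables TEE rule that clones discovery broadcasts to the container."""
    return ["iptables", "-t", "mangle", "-I", "PREROUTING",
            "-p", "udp", "--dport", str(port),
            "-j", "TEE", "--gateway", gateway_ip]

# usage (as root, with docker available):
#   ip = container_ip("jellyfin")
#   subprocess.run(tee_rule(ip), check=True)
```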
With this, you don't even need to expose the port via docker/docker-compose syntax.
Yes, of course you could use the ROUTE target instead, but it requires you to predefine routes in a file and/or specify both an IP and an interface (less ideal when you want to drive this from a docker-compose file with only the container IP).
Sorry if that was a lot of words.
I felt like many people on the forums were disappointed about being forced to expose all Jellyfin ports via host network mode.
I'll try to make this fully automatic in my docker-compose file and will update if it works.
I hope we break the network: host meta.
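One possible shape for that automation is a one-shot helper container that applies the rule from the host's network namespace. This is a hypothetical sketch, not something I've finalized: the service names, the static bridge IP (172.16.0.2, matching the example above), and the subnet are all assumptions, and the host kernel needs the xt_TEE module.

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    networks:
      jfnet:
        ipv4_address: 172.16.0.2

  # one-shot helper: runs in the host network namespace with NET_ADMIN,
  # so its iptables rule lands in the host's mangle table
  discovery-tee:
    image: alpine
    network_mode: host
    cap_add: [NET_ADMIN]
    depends_on: [jellyfin]
    command: >
      sh -c "apk add --no-cache iptables &&
             iptables -t mangle -I PREROUTING -p udp --dport 7359
             -j TEE --gateway 172.16.0.2"

networks:
  jfnet:
    ipam:
      config:
        - subnet: 172.16.0.0/24
```

Pinning the container IP via ipam avoids having to re-resolve it every time the stack restarts.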
Kosem


