VPS, nginx, ssh tunnel to local server - Printable Version (Jellyfin Forum, https://forum.jellyfin.org)
VPS, nginx, ssh tunnel to local server - raccoonsummer - 2023-08-18

I'm trying to set up a VPS so that my parents can use their Roku to load / play things from my Jellyfin server. The VPS is running an nginx server with a reverse proxy set up as per the Jellyfin documentation, as far as I can tell, but I think some parts are misconfigured, and there are some differences between how I have my system set up and the official documentation, so I'm not really sure how to get it all working correctly.

The general idea that I'm going for is:

internet client -> https to my VPS -> proxy connection via SSH tunnel -> my Synology -> Docker container running Jellyfin

My starting point for this was this thread on Reddit: https://www.reddit.com/r/jellyfin/comments/10w8b34/confused_about_sharing_jellyfin_to_a_vps_to_allow/

My environment:

Local: Jellyfin via Docker, running on a Synology DS920+ which handles hardware transcoding and has all the local storage for media.

VPS: Linode VPS with a static IP; nginx installed and configured; Let's Encrypt SSL cert is OK; custom DNS and domain name all set up (using a subdomain setup, i.e. jellyfin.cloud.mydomain.com); locked down to SSH certificate login only, with Fail2Ban set up.

Where I went off script / tried customizing things:

I can't get the Jellyfin Docker container to handle the SSH tunnel to the VPS itself, so I figured I could establish the tunnel from the Synology via the command line for testing, then later set it up as a scheduled task so that the tunnel comes up on reboot and/or resets daily at some set time, so that it's sure to be up and running. I haven't been able to confirm whether this is actually working, but it appears to be?

Configuration stuff:

VPS: /etc/nginx/conf.d/jellyfin.conf

Code:
# jellyfin configuration, taken from https://jellyfin.org/docs/general/networking/nginx/

On the Synology, via the command line, I'm running the command below to establish the SSH tunnel:

Code:
ssh -NTC -o ServerAliveInterval=60 -o ExitOnForwardFailure=yes -R 127.0.0.1:8096:127.0.0.1:8096 nginx-ssh@jellyfin.cloud.mydomain.tld

I can see that the connection is established on the VPS:

Code:
[vpshost]$ sudo netstat -at
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address            Foreign Address          State
tcp        0      0 localhost:8096           0.0.0.0:*                LISTEN
tcp        0      0 0.0.0.0:https            0.0.0.0:*                LISTEN
tcp        0      0 0.0.0.0:http             0.0.0.0:*                LISTEN
tcp        0      0 0.0.0.0:ssh              0.0.0.0:*                LISTEN
tcp        0    288 cloud.mydomain.tld:ssh   redacted.ip.addr:15100   ESTABLISHED
tcp        0      0 cloud.mydomain.tld:ssh   redacted.ip.addr:54494   ESTABLISHED
tcp6       0      0 [::]:https               [::]:*                   LISTEN
tcp6       0      0 [::]:http                [::]:*                   LISTEN
tcp6       0      0 [::]:ssh                 [::]:*                   LISTEN
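One sanity check I can run on the VPS itself, to see whether the tunnel is actually passing traffic (just my guess at a useful test - the path below assumes the stock Jellyfin web client is being served on port 8096):

Code:
# on the VPS: talk to the tunnelled port directly, bypassing nginx
curl -v http://127.0.0.1:8096/web/

If the tunnel is doing its job, that should come back with the web client's HTML; if it hangs or the connection is refused, the problem is in the tunnel or the container's port binding rather than in nginx.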
However, when I try to load my Jellyfin instance, I am getting a 502 Bad Gateway error from the web server:

https://jellyfin.cloud.mydomain.tld/ forwards to https://jellyfin.cloud.mydomain.tld/web/ ---> returns 502 Bad Gateway (nginx/1.20.1)

/var/log/nginx/error.log

Code:
2023/08/18 14:02:40 [crit] 699#699: connect() to 23.214.95.221:80 failed (13: Permission denied) while requesting certificate status, responder: r3.o.lencr.org, peer: 23.214.95.221:80, certificate: "/etc/letsencrypt/live/jellyfin.cloud.mydomain.tld/fullchain.pem"
2023/08/18 14:02:40 [crit] 699#699: connect() to 23.214.95.212:80 failed (13: Permission denied) while requesting certificate status, responder: r3.o.lencr.org, peer: 23.214.95.212:80, certificate: "/etc/letsencrypt/live/jellyfin.cloud.mydomain.tld/fullchain.pem"
2023/08/18 14:02:40 [crit] 699#699: connect() to [2600:1406:5600:3::17d6:5fdd]:80 failed (13: Permission denied) while requesting certificate status, responder: r3.o.lencr.org, peer: [2600:1406:5600:3::17d6:5fdd]:80, certificate: "/etc/letsencrypt/live/jellyfin.cloud.mydomain.tld/fullchain.pem"
2023/08/18 14:02:40 [crit] 699#699: connect() to [2600:1406:5600:3::17d6:5fd4]:80 failed (13: Permission denied) while requesting certificate status, responder: r3.o.lencr.org, peer: [2600:1406:5600:3::17d6:5fd4]:80, certificate: "/etc/letsencrypt/live/jellyfin.cloud.mydomain.tld/fullchain.pem"
2023/08/18 14:02:41 [crit] 699#699: *1 connect() to 127.0.0.1:8096 failed (13: Permission denied) while connecting to upstream, client: 146.70.174.92, server: jellyfin.cloud.mydomain.tld, request: "GET /web/ HTTP/2.0", upstream: "http://127.0.0.1:8096/web/index.html", host: "jellyfin.cloud.mydomain.tld"
2023/08/18 14:02:41 [crit] 699#699: *1 connect() to 127.0.0.1:8096 failed (13: Permission denied) while connecting to upstream, client: 146.70.174.92, server: jellyfin.cloud.mydomain.tld, request: "GET /favicon.ico HTTP/2.0", upstream: "http://127.0.0.1:8096/favicon.ico", host: "jellyfin.cloud.mydomain.tld", referrer: "https://jellyfin.cloud.mydomain.tld/web/"
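For context on the nginx side: jellyfin.conf follows the docs template, so the server block is roughly the shape below. This is a trimmed, illustrative sketch rather than a verbatim copy of my file (only the server_name, cert paths, and upstream port are taken from what's shown above):

Code:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name jellyfin.cloud.mydomain.tld;

    ssl_certificate     /etc/letsencrypt/live/jellyfin.cloud.mydomain.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jellyfin.cloud.mydomain.tld/privkey.pem;

    location / {
        # hand everything to the tunnelled Jellyfin port on localhost
        proxy_pass http://127.0.0.1:8096;

        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # websocket support for the web client
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}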
Potential problems that I'm considering, but don't know how to troubleshoot:

NOTE: I've replaced all instances of my actual domain name with "mydomain.tld"

Thank you for reading, and for any suggestions or troubleshooting steps! I'd have put this all in the Jellyfin Reddit thread on the subject, but it appears that that's no longer possible. I'm hoping that this setup isn't too weird, since I'd really like to avoid having to forward ports from my home network.

RE: VPS, nginx, ssh tunnel to local server - TheDreadPirate - 2023-08-18

For the SSH command, I think the "-R" part should be

Code:
-R 8096:127.0.0.1:8096

The documentation says it should be <remote port>:<localhost>:<local port>.
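Spelled out with the rest of your flags, the whole command would look something like this (untested on my end, so treat it as a sketch):

Code:
ssh -NTC -o ServerAliveInterval=60 -o ExitOnForwardFailure=yes -R 8096:127.0.0.1:8096 nginx-ssh@jellyfin.cloud.mydomain.tld

With that form, sshd on the VPS listens on port 8096 (loopback only, unless GatewayPorts is enabled, which is what you want here anyway) and forwards whatever arrives there back through the tunnel to 127.0.0.1:8096 on the Synology.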
I think it makes more sense to use something like WireGuard to tunnel between your VPS and your Synology box.

Also, is there a reason you are using a VPS instead of hosting nginx in another container on your Synology NAS? Or did you already have the VPS for other reasons?

RE: VPS, nginx, ssh tunnel to local server - raccoonsummer - 2023-08-18

TheDreadPirate, I just tried your suggested change, and unfortunately I'm still getting the 502 Bad Gateway page.

I'd like to use WireGuard, but I'm even less familiar with how to arrange that to allow VPS traffic to be proxied to my local system.

The reason behind trying to put it up on a VPS is that the VPS gets a static IP that I can point my domain name at, it's a separate system from anything at my home, and if anything goes wrong with the VPS, I can just nuke it and set up a new one. I like the idea of the separation in devices for security's sake. Also, getting https:// working properly at home is kinda funky with the DDNS stuff, I think? If I were to use my local hardware to host externally, I'd have to open a port, firewall it properly (which isn't so bad), and also run some DDNS service to keep my IP / DNS entries up to date. Not the worst thing... and if it'd be significantly easier to set up I might consider it - I just also haven't found a good, clear guide on how to do it securely in such a way that I can be pretty confident that a security issue won't expose my whole home network.

RE: VPS, nginx, ssh tunnel to local server - TheDreadPirate - 2023-08-18

With Let's Encrypt and certbot, the fact that you are using DDNS is completely transparent. The certificate request process is exactly the same. Certbot manages both of my certs, and both certs use my NoIP DDNS addresses.

As for security concerns: if you use non-standard ports, you significantly reduce the chance of someone attempting to break in. Most cyber-actors/script kiddies are scanning common ports to run their cookie-cutter attacks against - ports 80, 443, 22, 25, 3389, etc. If you run your services on ephemeral ports (49152-65535) you are unlikely to be noticed. If you keep your software up to date, cookie-cutter exploit scripts will be ineffective.

And, let's be real, none of us are worth someone going out of their way to scan every port. We are not worth someone using their secret zero-day exploit against. If I'm a hacker of a high enough caliber that I discovered a zero-day vulnerability and developed the capability to exploit it, I'm using that against some massive multi-national company, defense contractor, government, or intelligence agency. If I'm a bottom-of-the-barrel script kiddie, I'm going after easy targets and letting this random script I found on a hacker forum do all the work.

And if you set up your nginx container properly, it kind of is separate from your other containers and devices. Even on an entirely bare metal setup like mine, as long as you properly use groups and permissions, you are requiring this hypothetical hacker to have additional exploits to break out to system-wide access.
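To make the certbot point above concrete: the request against a DDNS name is just the ordinary invocation, for example (placeholder hostname, not one of mine):

Code:
sudo certbot --nginx -d jellyfin.yourname.ddns.net

Certbot neither knows nor cares that the A record behind that name is kept current by a DDNS client.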