Hi TCP users,
Currently, I have a homelab server that runs Jellyfin with direct access to local media content, and a reverse proxy pointing to it. While it works well for people in Europe (where the server is), it is quite slow for some of my friends living in Asia. I have a few options in mind:
- Rent a VPS in Asia and set up another Jellyfin instance there. This works, but I don’t really want to run two Jellyfin instances with two databases, and giving the second one access to the local media content will be cumbersome to manage.
- Rent a VPS in Asia and set up a CDN, but I am not sure whether that would even work with Jellyfin?
So I would like to ask: do you know anything about this, or have any ideas to improve the situation?
Thank you very much!
Edit: Thanks for all of your responses. Based on my experience, I think the slowness comes from the number of hops traffic has to cross before reaching the final client. So I will try a few things:
- Try to optimize my upload speed; it is fast enough, but it has not been very stable recently, so that could have some impact
- Set up a second Jellyfin instance and sync a part of my library there for my friends.
Edit: “Slow” here means both slow page loading and slow buffering.
Convince one of your Asian friends to run a mirror and sync everything to them if possible.
Hi, kinda late to the party. I’m in a similar rut with intercontinental internet issues and would like to share my thoughts.
While not a full-fledged CDN, you may consider setting up an Asian VPS as a second reverse proxy/ingress route: terminate TLS there and route plaintext HTTP back to your homelab, with that leg carried inside a WireGuard tunnel. As I figured out in my blog post here (see scenario 2), this lets the initial TCP and TLS handshakes happen near the user instead of going all the way to Europe and back.
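To make it concrete, here’s a minimal sketch of the VPS side. Everything below (interface name, addresses, keys, domain) is a placeholder, and it assumes Jellyfin listens on its default port 8096 at home:

```
# /etc/wireguard/wg0.conf on the VPS -- the homelab side sets
# Endpoint = <vps-public-ip>:51820 and PersistentKeepalive = 25
[Interface]
Address = 10.0.0.2/24
PrivateKey = <vps-private-key>
ListenPort = 51820

[Peer]
PublicKey = <homelab-public-key>
AllowedIPs = 10.0.0.1/32
```

```
# nginx on the VPS: TLS terminates here, plaintext HTTP goes over the tunnel
server {
    listen 443 ssl;
    server_name jf.example.com;  # placeholder domain

    ssl_certificate     /etc/letsencrypt/live/jf.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jf.example.com/privkey.pem;

    location / {
        proxy_pass http://10.0.0.1:8096;  # Jellyfin at home, via WireGuard
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;  # the web client uses websockets
        proxy_set_header Connection "upgrade";
    }
}
```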
You can consider setting up a separate Jellyfin instance for Asia, but of course that comes with syncing media, maintaining separate user credentials, and so on. So before renting compute, I suggest trying these smaller steps first; if they work, you might not need a VPS at all:
- Look into tuning Linux network parameters via sysctl; a sketch of the kind of /etc/sysctl.conf tweaks I mean is below this list
- Implement some sort of Smart Queue Management on your router (e.g. the CAKE algorithm) to avoid bufferbloat
- Enable HTTP/3 + QUIC on your reverse proxy to cut handshake round trips, though native Jellyfin clients are unlikely to benefit from it
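Here’s the sysctl sketch I mentioned in the list. These are commonly suggested starting values for high-latency paths, not a prescription; measure before and after:

```
# /etc/sysctl.d/99-wan-tuning.conf -- apply with `sysctl --system`
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr   # BBR copes with long fat pipes better than cubic
net.core.rmem_max = 16777216            # allow bigger TCP windows on high-BDP paths
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_mtu_probing = 1            # recovers when a hop drops ICMP "frag needed"
```

And for the HTTP/3 point, on nginx 1.25+ built with QUIC support it’s roughly a QUIC listener next to the TLS one:

```
listen 443 ssl;
listen 443 quic reuseport;                 # HTTP/3
add_header Alt-Svc 'h3=":443"; ma=86400';  # advertise h3 to browsers
```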
Curious to see if any of this helps :)
You’re describing a CDN. You can’t afford it.
I’d look more into boosting whatever your uplink is versus trying to distribute to localized users.
The uplink isn’t the problem as it works for viewers in Europe.
Uplink is exactly the problem. Not sure why you think otherwise. The internet doesn’t work by multicast.
Maybe we’re not talking about the same thing. The uplink at OP’s router isn’t the problem; there is enough upload speed that others in Europe can stream. Users in Asia don’t get enough bandwidth, so there’s a bottleneck somewhere in between.
And yes, a VPN could help by routing the traffic through different hops, but chances are it won’t help, or may even make things worse; still, it’s worth trying.
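One way to find that bottleneck: have one of the friends in Asia run mtr toward the server and watch which hop the loss/latency jumps at (the hostname is a placeholder):

```
# per-hop loss and latency from the client toward the server;
# --tcp -P 443 probes with TCP like the real traffic would
mtr -r -c 100 -wz --tcp -P 443 jf.example.com
```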
Bandwidth does not degrade over distance. That’s not how that works…
Again, I’m confused on what you’re suggesting the actual issue is here.
If the uplink bandwidth is more than sufficient for users in Europe, and it doesn’t degrade over distance, then why is the same uplink not enough for the exact same thing in Asia?
Exactly, bandwidth doesn’t degrade over distance, so why would the uplink bandwidth be the issue for Asia when it’s fine for Europe?
Ok, you’re almost there. It is plenty fast for people in Europe but slow for those in Asia. So bandwidth is not the issue.
So it’s not just me. The peering between Europe and Asia IS crap!
I was in Thailand in November and the connections to Europe were hit or miss the whole time. The latency was poor and the reliability varied day by day.
The only thing that made any difference was switching providers on the EU side. It seems that some ISPs have better peering than others.
Also, lowering the MTU for the VPN tunnel seemed to help a lot, but that might’ve been a placebo.
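For anyone who wants to try the MTU thing, it’s one line either way (wg0 is a placeholder interface name):

```
# temporary, on a running tunnel
ip link set dev wg0 mtu 1280

# persistent: add to the [Interface] section of the WireGuard config
# MTU = 1280
```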
I’ve often described Europe as the ‘other end of the internet’: from Australia, traffic is often routed over the Pacific to the US (via Hawaii and either Guam or New Zealand), across the US, then over the Atlantic.
tu.berlin is 316ms away.
Even large streaming services drop their servers close to the users to make the experience good. They just do better at scaling.
You could use federated authentication so only one LDAP service needs to be maintained. You could also sync media from one device to the other so you don’t need to update both manually.
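For the media half, a one-way rsync over SSH on a cron/systemd timer is usually enough; paths and host below are placeholders:

```
# mirror the library to the remote box; --delete propagates removals,
# --partial resumes interrupted transfers (skip -z, media is already compressed)
rsync -a --partial --delete /srv/media/ user@asia-vps:/srv/media/
```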
IMHO Jellyfin processes everything it sends to clients, so I do not think it’s possible to put it behind a CDN (maybe it would be possible if server-side transcoding is off). Please define “slow”: slow on what part? It should be around 250ms RTT to your server, which is not much for web-based apps.
Define “slow”. Pages hang before loading? Or it often stops to buffer a stream?
Tailscale, Headscale, or something along those lines may help optimize the route, but as others have said, to resolve this in an actual fashion you’d need a CDN, which requires significant geo-redundant hardware and comes at a pretty significant cost. That being said, I think your friend has a good shot if you implement the former.
I was trying to stream from my Jellyfin server on vacation. Over my tailnet I couldn’t reliably stream anything; over a VPN it was as good as local. I can’t believe it’s just a routing issue, but I wasn’t being proxied, so the path should have been the same. So a VPN for one user might fix the issue. The headaches of segmenting the network on that VPN are another problem, but doable if the hardware/router is capable.
Is it possible you misconfigured your tailnet, and instead of a direct connection to your local subnet router you were going through a DERP relay? You can read into it more in Tailscale’s documentation, but essentially you need to leave UDP port 41641 open inbound from the WAN to your subnet router.
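If the port is the problem, opening it is one line on most distros (assuming ufw; adapt for your firewall):

```
# let peers reach Tailscale's default WireGuard port directly from the WAN
sudo ufw allow 41641/udp
```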
I checked for a relay. I recall it’s pretty easy to see on the desktop icon. I’ll have to try again next time I’m away to see.
I don’t know if it’s on the icon; I believe you have to use the CLI (“tailscale status”) to view your tailnet nodes’ connection types.
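e.g. (the node name is a placeholder):

```
# per-peer connection type: look for "direct <ip:port>" vs "relay <region>"
tailscale status

# actively probes a peer and reports whether replies come direct or via DERP
tailscale ping my-homelab
```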
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| HTTP | Hypertext Transfer Protocol, the Web |
| IP | Internet Protocol |
| SSL | Secure Sockets Layer, for transparent encryption |
| TCP | Transmission Control Protocol, most often over IP |
| TLS | Transport Layer Security, supersedes SSL |
| VPN | Virtual Private Network |
| VPS | Virtual Private Server (opposed to shared hosting) |
5 acronyms in this thread; the most compressed thread commented on today has 13 acronyms.