2023-10-09, 07:41 PM
To be clear -- and this is murky because of the language I'm using -- I'm not advocating for completely adaptive streaming. I think that might be out of scope. But better handling of low-bandwidth or poorly-networked clients seems like a good target, as stuttering video due to poor connectivity is a common issue that folks experience and remote network troubleshooting is difficult at best.
Most of what I'm wondering about is the history of how we got to the current implementation, which seems to be "user knows best": the user manually picks audio, video, and bit rate, and that leads to all kinds of issues. I don't necessarily agree with other posts that Jellyfin should automate ALL of this, or that it should obfuscate media selection entirely, but a bit of intelligence in bit rate selection (even if it increases transcoding) would offer a better end-user experience.
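To make "a bit of intelligence" concrete, here's a minimal sketch of what I mean by smarter bit rate selection: pick the highest available bit rate that fits under a fraction of the client's measured bandwidth, falling back to the lowest option when nothing fits. This is purely illustrative; the function name, parameters, and the 80% headroom figure are my assumptions, not anything from Jellyfin's actual code or API.

```python
# Hypothetical sketch of bandwidth-aware bit rate selection.
# All names and values here are illustrative, not Jellyfin's actual API.
def select_bitrate(available_bitrates_kbps, measured_bandwidth_kbps, headroom=0.8):
    """Pick the highest bit rate that fits within a fraction of measured bandwidth."""
    budget = measured_bandwidth_kbps * headroom  # leave headroom for jitter
    candidates = [b for b in available_bitrates_kbps if b <= budget]
    # Fall back to the lowest available bit rate if nothing fits.
    return max(candidates) if candidates else min(available_bitrates_kbps)

print(select_bitrate([2000, 4000, 8000, 16000], 10000))  # → 8000
```

Something this simple wouldn't handle a fluctuating connection, but even a one-shot choice like this at playback start would avoid the worst stuttering cases without taking control away from the user.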
Jellyfin 10.10.3 LSIO Docker | Ubuntu 24.04 LTS | i7-13700K | Arc A380 6 GB | 64 GB RAM | 79 TB Storage