(07-22-2018, 12:03 PM)Vincent. wrote: Hello Marc,
the question is clear, but there are perhaps few Tidal subscribers here. In my case it's Qobuz, with no problems. Have you tried leaving the buffer settings empty?
Thanks for your reply, Vincent, and now that you mention it, I believe there was originally no value in the buffer field and I had no problems! I'll try it right away, thanks in any case!
In the meantime I'm reading this, which I find interesting:
Originally Posted by garym
@pippin, could you elaborate on this? So my Transporter has a smaller internal buffer than my Touch or Radio?
Well, it's all about that "Net Neutrality" thing everybody keeps talking about.
Yes. I don't know the exact figures for the Transporter, but Touch and Radio use 3 MB, which is a bit of a top-end compromise. iPeng by default uses 8 MB, but to do this we had to include an option to use a smaller buffer, because the large one conflicts with some services, namely Pandora; internet radio stations can also be affected and e.g. show wrong track information.
You rarely really need a big buffer on your local network; iPeng's is primarily there for people using it e.g. in a car.
What a larger buffer buys you is reliability with unstable connections. It doesn't help when bandwidth is the problem (because then you won't get a large buffer filled anyway) and you don't need it when everything is fine.
But streaming over the internet _can_ be unstable. "Unstable" in this case might simply mean that a packet shows up 10s late because something on the way between the server and you is congested. This is a problem for streaming and the reason streaming providers build up those huge CDNs (content distribution networks). It's not the bandwidth itself (which is usually sufficient) and not the latency (which doesn't matter, you're going to buffer more than one or two seconds anyway and you almost never have more latency than that), it's congestion causing interruptions.
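To put those buffer sizes into perspective, here is a rough back-of-envelope sketch in Python (my own illustration; the bitrates are assumed averages for CD-quality and 24/96 FLAC, not figures from the post). It converts a buffer size into seconds of playback:

# Back-of-envelope: how many seconds of audio fit in a stream buffer?
# The bitrates below are assumed averages, not figures from the post.

def buffer_seconds(buffer_mb: float, bitrate_kbps: float) -> float:
    """Playback time in seconds, using 1 MB = 1,000,000 bytes."""
    return buffer_mb * 1_000_000 * 8 / (bitrate_kbps * 1000)

for mb in (3, 8):                 # Touch/Radio vs. iPeng default, per the post
    for kbps in (900, 2300):      # ~CD FLAC vs. ~24/96 FLAC (assumptions)
        print(f"{mb} MB at {kbps} kbps ≈ {buffer_seconds(mb, kbps):.0f} s")

Under these assumptions a full 3 MB buffer holds roughly 27 s of CD-quality FLAC but only about 10 s of hi-res material, which is exactly the order of delay described above for a congested packet.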
Now... a little background on what happens on your network on Saturday evening...
What the internet is being used for (in the US) on Saturday evening is Netflix and YouTube. The two combined make up more than 50% of peak internet traffic in the US.
http://www.techtimes.com/articles/20...out-others.htm
That's a lot of traffic you need to get your packets through. But that alone is not the biggest issue. The biggest issue is that Netflix and YouTube have priority access to your network (to the distribution network from the backbone to your home) because they don't feed their streams through the "normal" internet backbone but directly into the ISPs' networks.
http://www.forbes.com/sites/realspin...ix-neutrality/
What this means in this case is: when your TIDAL stream arrives at your ISP's network, all those Netflix and YouTube streams are already there using up bandwidth, and your packets have to make their way through the rest. There is enough capacity that they will get through, which is why you still see high bandwidth figures when you measure your connection, but some may arrive later because they had to wait (or even be re-transmitted if it's an IP-based stream), and if the delay for one packet exceeds your buffer size... dang.
You can get around this with a bigger buffer because the overall bandwidth is not affected. After some of your packets have waited for 10 s, a lot of them may arrive almost at the same time, so you _do_ have the bandwidth to fill a bigger buffer, which is why you often build bigger buffers into the native apps for such services - after all, the service knows it has to support that.
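As a toy illustration of that mechanism (entirely my own sketch, assuming a ~2,300 kbps hi-res stream and a 15 s congestion stall; nothing here is measured on TIDAL or taken from Squeezebox code): the average bandwidth is untouched because the delayed bytes arrive later in a burst, yet the small buffer underruns while the large one rides the stall out.

# Toy simulation: a congestion stall delays packets but doesn't reduce
# average bandwidth; the delayed bytes arrive afterwards in a burst.

def survives_stall(buffer_cap: int, bitrate: int = 287_500,
                   stall=range(30, 45)) -> bool:
    """True if playback never underruns; bitrate is bytes/s (~2,300 kbps)."""
    buffered = buffer_cap                    # assume a full buffer pre-stall
    backlog = 0                              # bytes held up, not lost
    for t in range(120):                     # two minutes, 1 s per step
        if t in stall:
            backlog += bitrate               # the network keeps falling behind
        else:
            burst = min(backlog, 3 * bitrate)   # delayed bytes catch up in a burst
            backlog -= burst
            buffered = min(buffer_cap, buffered + bitrate + burst)
        if buffered < bitrate:               # under one second of audio left
            return False                     # audible dropout
        buffered -= bitrate                  # the player consumes one second
    return True

for cap_mb in (3, 8):
    result = "survives" if survives_stall(cap_mb * 1_000_000) else "underruns"
    print(f"{cap_mb} MB buffer {result} a 15 s stall")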
All of that said: That's for "normal" operation.
From my experience with occasional TIDAL issues, I actually had the impression that when there is heavy stuttering, there is a bandwidth issue of some kind, because streams never really recover. I don't know whether it's on my side or on TIDAL's. It usually goes away after a few minutes (it's also pretty rare). It could be that e.g. something in my area is temporarily eating a lot of bandwidth. Or, if it's on TIDAL's side, it could be that the node they are streaming through is too congested. Given how fast things usually seem to improve, I'd almost guess it's the latter, and TIDAL (or their CDN; I think they are not operating that themselves) re-configures the network to use other, less congested nodes instead.
That's why it might make sense to report issues to them; after all, they can learn whether there is some kind of pattern behind the congestion.
Later in the thread, Tidal's technical staff advise switching DNS servers when this happens.
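For what it's worth, here is a quick way to see why switching DNS servers can matter, sketched with the third-party dnspython package (the hostname is a placeholder, not TIDAL's real streaming endpoint): CDN-backed names often resolve to different edge nodes depending on which resolver you ask, so a DNS change can route you around a congested node.

# Sketch: ask two public DNS resolvers about the same hostname.
# Requires dnspython (pip install dnspython); the hostname is a placeholder.
import dns.exception
import dns.resolver

HOSTNAME = "streaming.example.com"        # illustrative CDN-backed name

for server in ("8.8.8.8", "1.1.1.1"):     # Google and Cloudflare public DNS
    resolver = dns.resolver.Resolver()
    resolver.nameservers = [server]       # query this resolver only
    try:
        answers = resolver.resolve(HOSTNAME, "A")
        result = ", ".join(rr.address for rr in answers)
    except dns.exception.DNSException as exc:
        result = f"lookup failed: {exc}"
    print(f"{server} -> {result}")

If the two resolvers return different addresses, the name is served by a CDN, and a DNS switch really can land you on a different, possibly less congested, node.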
Hattor Big passive preamp (Takman REX) w. Reddo Audio J92 linear PSU + Hattor tube stage (ECC82 GE Triple Mica) + Benchmark AHB2 - AudioGD M7S DAC - Primare CD31 CD player - Mano Ultra streamer on Soundcare feet - Dali Euphonia MS4 speakers on Soundcare stands - Legato Referenza Superiore speaker cabling, Rastacable 4SE module by Rastabill, Espace Musical Muse2 - Tidal HiFi and a 2 TB hard drive of music