Posts: 1,900
Threads: 24
Joined: Dec 2015
Type: Individual
(03-16-2019, 10:33 AM)Gulistan wrote: tipunch >
test of different players > optical S/PDIF output > recording > comparison with the original rip (by polarity inversion)
this way we eliminate the "disturbances" introduced by a wired link.
If the recovered files are identical to the source, we are entitled to say that the players have no influence on the rendering.
Of course, beware of players that go through an OS layer doing on-the-fly resampling, or of those that include active processing.
You will never convince the audiophiles who hear ... because you would have to demonstrate that the sound arriving at their ears is identical to the sound just before it enters the players.
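For reference, here is a minimal Python sketch of the null test described above (comparing the S/PDIF capture with the original rip by polarity inversion). The file names and the assumption that the two files are already sample-aligned are illustrative, not taken from the original post:

```python
# Hypothetical file names; assumes capture and rip are already sample-aligned,
# with the same channel count and sample rate (no OS resampling in the path).
import numpy as np
import soundfile as sf

original, fs_rip = sf.read("rip_original.wav", dtype="int32")
capture, fs_cap = sf.read("capture_spdif.wav", dtype="int32")
assert fs_rip == fs_cap, "sample rates differ: something resampled on the fly"

n = min(len(original), len(capture))
# Polarity inversion followed by summation is the same as a sample-wise difference.
residual = original[:n].astype(np.int64) - capture[:n].astype(np.int64)

if np.count_nonzero(residual) == 0:
    print("Null test passed: the capture is bit-identical to the rip.")
else:
    print(f"{np.count_nonzero(residual)} samples differ: the player (or OS) altered the data.")
```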
Posts: 9,079
Threads: 98
Joined: Dec 2015
Type: Individual
(03-16-2019, 11:38 AM)bz31 wrote: identical to the sound just before it enters the players.
before it enters the player, it isn't a sound!
Posts: 1,900
Threads: 24
Joined: Dec 2015
Type: Individual
(03-16-2019, 11:47 AM)bbill wrote: (03-16-2019, 11:38 AM)bz31 wrote: identical to the sound just before it enters the players.
before it enters the player, it isn't a sound!
Yes it is. It's an audio file. He wants to show that the players all give the same result for that same audio file at the player's output.
So he took measurements with "his definition of the output". Now the audiophiles are telling him he picked the wrong player output: according to the audiophiles, the player's output is the input of their ears ...
Posts: 2,893
Threads: 64
Joined: Mar 2016
Type: Individual
03-16-2019, 12:28 PM
(This post was last modified: 03-16-2019, 12:51 PM by a supprimer merci.)
To put it differently:
- the test demonstrates that the player does not change the digital representation of the audio file (the bits)
- what the test does not address is the "electrical" quality of the signal at the player's output (the noise)
Whatever the transmission mode used to carry the digital signal to the DAC (coaxial, USB, Toslink, I2S), electrical disturbances are transmitted to the DAC.
Here, for example, is what John Brown, the engineer at ECDesigns, explains:
"Source noise spectrum can be measured when streaming digital silence. However, it is likely that this spectrum changes radically when streaming music. When streaming music, source noise is masked by this much stronger data signal and the CPU load in the source is likely to increase considerably.
Based on my personal research the source dependency problem has to be (partially) caused by ripple voltage on the digital interface signal(s). With USB this translates to a ripple voltage on the differential signals, differential interfaces cancel common mode noise but fail to cancel source related (unequal) noise on both interface signals. With S/PDIF coax it translates to ripple voltage on the electrical signal. With Toslink the ripple voltage is translated to fluctuations in the light output and translated to ripple voltage in the optical receiver.
The problem is that we can't simply use a "voltage stabiliser" to get rid of this ripple, as the bandwidth of such a circuit would be far too low. Even zener diodes are not fast enough and introduce unwanted non-linear effects that make matters even worse.
The only practical solution is using the cleanest possible digital audio source, and this also applies to many other DACs, even if these are based on built-in low phase noise clocks and FIFO buffers."
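To make the point about differential interfaces concrete, here is a small numerical sketch (my own illustration, not from ECDesigns; the noise amplitudes are arbitrary) showing that subtracting D+ and D- cancels common-mode noise but leaves any noise that differs between the two lines:

```python
# Toy model: an ideal differential pair carrying a square data signal, plus
# common-mode noise (identical on both lines) and source-related noise that
# is NOT identical on both lines. All amplitudes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(2000)
data = np.where((t // 20) % 2 == 0, 1.65, -1.65)       # ideal differential data

common = 0.2 * rng.standard_normal(t.size)             # same noise on D+ and D-
unequal_p = 0.05 * rng.standard_normal(t.size)         # source noise on D+ only
unequal_n = 0.05 * rng.standard_normal(t.size)         # source noise on D- only

d_plus = data + common + unequal_p
d_minus = -data + common + unequal_n

received = d_plus - d_minus                            # differential receiver output
leftover = received - 2 * data                         # equals unequal_p - unequal_n

print("common-mode noise cancelled; remaining noise RMS =",
      round(float(np.sqrt(np.mean(leftover ** 2))), 4))
```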
Posts: 831
Threads: 3
Joined: Feb 2019
Type: Individual
(03-16-2019, 12:28 PM)paulw wrote: With Toslink the ripple voltage is translated to fluctuations in the light output and translated to ripple voltage in the optical receiver.
In binary, the "ripple" hardly matters; it's the change of state that defines the transition from 0 to 1 and from 1 to 0.
You really have to understand this principle.
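Gulistan's point can be illustrated with a short sketch (mine, with arbitrary voltages and noise level): as long as the ripple stays well away from the decision threshold, the decoded bits are unchanged:

```python
# The receiver only cares on which side of the threshold the voltage sits.
# Arbitrary 0 V / 3.3 V logic levels, 1.5 V threshold, moderate ripple.
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 64)                        # data to transmit
levels = np.where(bits == 1, 3.3, 0.0)               # ideal electrical levels
ripple = 0.3 * rng.standard_normal(bits.size)        # ripple riding on the signal

decoded = (levels + ripple > 1.5).astype(int)        # threshold decision
print("bits unchanged despite ripple:", np.array_equal(bits, decoded))
```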
Posts: 2,893
Threads: 64
Joined: Mar 2016
Type: Individual
03-16-2019, 01:04 PM
(This post was last modified: 03-16-2019, 01:18 PM by a supprimer merci.)
The next question is: what is the effect of these disturbances on the analog audio signal at the DAC's output?
That is more complicated. I will try to dig up some explanations on the subject.
For my part, comparative listening of ECDesigns' UPL source (very low noise) against several other players (PC, Chromecast, various streamers) is clear-cut. Those who have listened to it confirm this.
These "subjective" impressions, resulting from tests that were not done blind, can certainly be challenged, but it is still interesting to try the experiment.
(03-16-2019, 12:54 PM)Gulistan wrote: (03-16-2019, 12:28 PM)paulw wrote: With Toslink the ripple voltage is translated to fluctuations in the light output and translated to ripple voltage in the optical receiver.
In binary, the "ripple" hardly matters; it's the change of state that defines the transition from 0 to 1 and from 1 to 0.
You really have to understand this principle.
Always the same misunderstanding...
You act as if the DAC were perfect. But no DAC is, and these electrical disturbances influence the processing inside the DAC.
Honestly, I don't think either you or I have the technical skills to understand all of this.
The fact remains that your test demonstrates nothing more than "bits are bits" at the player's output.
That's all!
You are only looking at one aspect of the problem (the "binary" side) and not the whole picture.
You could have the intellectual honesty to acknowledge the limits of your test, rather than drawing conclusions that go beyond its scope!
Yes, the players are "equal" when you only look at the "binary" side, and no, the players are not equal when you look at signal quality. That really isn't hard to understand...
Posts: 831
Threads: 3
Joined: Feb 2019
Type: Individual
I'm not talking about the DAC, but about what reaches it.
My test shows that whatever the player, the data arrives without loss or modification, just like every transfer inside the computer (hard drive, RAM, etc.).
Integrity is preserved all the way to the DAC.
Posts: 2,893
Threads: 64
Joined: Mar 2016
Type: Individual
03-16-2019, 01:25 PM
(This post was last modified: 03-16-2019, 01:41 PM by a supprimer merci.)
Yes, I fully agree with that.
Unfortunately, in computer audio, things do not stop there!
So drawing general conclusions to the effect that all players are equal is not possible from the simple test you did.
Agreed?
In your first post, you start with "I replied that it wouldn't change the sound at all." Well no, your test does not allow you to reach that conclusion...
Here is what John Swenson explained on the subject back in 2013 (here: https://www.audioasylum.com/cgi/t.mpl?f=...o&m=125989). There has been no significant progress since.
"I've been working on a new USB DAC design recently, so I have a setup that I'm continuously looking at with scopes and logic analyzers etc. In this situation the logic analyzer said everything was fine, the bits were perfect. A logic analyzer runs the analog voltage on the wire through a "threshold" to distinguish if it is high or low, what you see on the screen is just high or low, ie "bits". But when I looked with the scope which shows the actual voltage levels of the signal I saw some extra signal riding on top of the highs and lows of the "bits". This turned out to be noise on the ground plane caused by the processor that was generating the bits. (it was much worse than it should have been due to a poor board layout of the processor reference board) That noise was enough to cause significant change to the audio out even though the "bits" were correct.
One interesting aspect of this was that you could easily see changes in this ground plane noise depending on what the processor was doing. While this was a fairly gross example of the effect, it clearly shows that things going on in the processing and transmission of the bits can have an effect on the sound at the output, even though the correct bits get to the DAC chip.
Next you might ask "well isn't that a broken system, if it was "good" shouldn't it not be an issue?" Note that this was the official reference board for the processor made by the manufacturer, who should know how to make things that work well with their processor. This just goes to show that things that can cause audible differences in digital audio are frequently not part of "it works as a digital system", the board did what it was supposed to, it delivered the bits.
A better board design could have cut this ground noise down significantly, but it would still be there.
What we DAC designers have to do is figure out ways to design products that produce analog out that is immune to this sort of thing. Unfortunately this is extremely difficult to do. There are many people on this board that expect that this is easy to do, just put in the right 50cent part and presto the design is completely immune to everything. It doesn't work that way. High frequency ground noise is extremely pernicious stuff, it will find a way to get around just about any obstacle you throw in its path.
Different designers take different approaches to try and achieve this with varying degrees of success. The different approaches will usually be effective at decreasing susceptibility to different types of noise so one DAC model may not care about a certain aspect (say cable differences) while another may be pretty immune to cable aspects but be susceptible to timing variations in packets. This may be a part of why some people say they can hear certain aspects and others say they cannot.
These techniques for noise suppression are pretty esoteric knowledge, there really are only a few people that really understand all this, there are very few places in the real world where the combined knowledge to make this really work right is required, thus very few people have a good grasp on all of this. The result is that many actual designs on the market are fairly lacking in this department, or are only targeting one aspect of it.
This is slowly changing and companies are starting to get an inkling of what it takes to do well with this and are hiring people with some knowledge in this field, but there aren't nearly enough to go around, so it's going to be some time before all digital audio systems you can buy do a good job in this regard. "
Posts: 1,588
Threads: 1
Joined: Jul 2016
Type: Individual
(03-16-2019, 01:04 PM)paulw wrote: Always the same misunderstanding...
You act as if the DAC were perfect. But no DAC is, and these electrical disturbances influence the processing inside the DAC.
Yes, the players are "equal" when you only look at the "binary" side, and no, the players are not equal when you look at signal quality. That really isn't hard to understand...
One misunderstanding for another.
Between the player - the software - and the physical output (S/PDIF, USB or whatever), there is hardware whose "hard" qualities
(power supplies, clocks, buffers, etc. and who knows what else)
could affect the "electrical" quality of the output, fine.
But the question posed is different: with an identical hardware environment, a single variable:
how can the playback software, if it is bit-perfect, influence the result?
Posts: 2,893
Threads: 64
Joined: Mar 2016
Type: Individual
03-16-2019, 01:39 PM
(This post was last modified: 03-16-2019, 01:49 PM by a supprimer merci.)
The thread I just quoted, which dates from 2013, answers your question. There are other interesting contributions on this subject, and I'll try to find them again.
Here is what Swenson explained:
"The insight is to note that in my previous post I was talking about things that affect the sound that are NOT changes to the bits, but things like ground plane noise. I was trying to show that what is happening inside the computer (processor, memory acceses etc) can change the ground plane noise. Not just the amplitude but also the spectrum of the noise. I'll give some specific examples later.
Not all programs that read files and send bits to a DAC do it exactly the same way. Some may have several buffers the data goes throuigh on it's path, some may only have one or two. Some may be built using a "layered" hierachical approach with different software "modules" that call each other, where others may be fairly "flat" with just one routine that does all the processing.
The exact sequence of instructions and memory accesses is guaranteed to be different between the programs. Since it is these instructions and memory accesses that cause the ground plane noise, I hope you can see that differences in how a task is done can produce different noise.
And BTW this CAN be measured. I've built a little ground noise analyzer that can easily see the difference in the noise from different programs doing supposedly the same thing.
Now for a concrete example. Let's take a simple program that is just copying audio data from a file to a buffer and then to a simple output port. It has two threads, one reading the file and putting the data in the buffer, and one taking data out of the buffer and putting it on the out port using an external clock to time the operation. The first thread waits until the buffer is empty then fills it up and goes back to sleep. (in reality there would be two buffers used in a ping pong arrangement, but that is irrelevant to the issue at hand).
So let's take this program and make two copies, one which has a small buffer and one which has a large buffer. The total amount of processing is exactly the same, the code is exactly the same, but is the ground plane noise the same? NO!
In the case of the small buffer the first thread spends a fairly short period of time waiting since the buffer empties out quickly. It spends a small amount of work often. With the large buffer each time it wakes up it has to handle a lot more data, but it waits a much longer time between sessions.
So why does this matter? If you look at the "work performed by the thread" over time the large buffer version shows a very "bursty" activity, but the small buffer shows a much more uniform activity. If you look at this in the frequency domain the small buffer version is dominated by relatively low intensity at high frequencies, mostly above the human hearing range. But when you look at the large buffer version you see higher intensity at much lower frequencies that are right smack dab in the middle of the human hearing range. This latter noise is going to have a much bigger effect on audibility.
And note this was exactly the same code, just different buffer sizes. Think what can happen when you are comparing different programs that use very different program architectures.
As an analogy, think about getting a group of people from point A to point B, either using a two seater sports car or a 30 person bus. The sports car has to go much faster and more often, the bus can only take a few trips and lumber along. But the result is the same. All the people get from point A to point B in the same total amount of time. But if you stand at the side of the road and have to put up with the noise, is it the same? "
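Swenson's small-buffer versus large-buffer argument can be reproduced numerically. The sketch below (my own, with arbitrary buffer periods and an arbitrary "activity sample rate") schedules the same total amount of work either as frequent small refills or as rare large refills, and shows where the resulting activity spectrum ends up:

```python
# Sketch: the same total work scheduled as frequent small refills (small
# buffer) or rare large refills (large buffer). Time scale and periods are
# arbitrary; only the relative shift of the spectrum is the point.
import numpy as np

fs = 10_000                               # "activity" samples per second (arbitrary)
n = fs                                    # one second of simulated thread activity

def refill_activity(period, burst):
    """1.0 while the refill thread is busy, 0.0 while it sleeps."""
    a = np.zeros(n)
    for start in range(0, n, period):
        a[start:start + burst] = 1.0
    return a

small = refill_activity(period=10, burst=1)       # little work, very often
large = refill_activity(period=1000, burst=100)   # lots of work, rarely
print("same total work:", small.sum() == large.sum())

freqs = np.fft.rfftfreq(n, d=1 / fs)
for name, sig in (("small buffer", small), ("large buffer", large)):
    spectrum = np.abs(np.fft.rfft(sig - sig.mean()))
    print(f"{name}: dominant activity frequency ~ {freqs[np.argmax(spectrum)]:.0f} Hz")
```

With these (arbitrary) numbers the small-buffer pattern concentrates its activity at the high end of the simulated band, while the large-buffer pattern pushes it down to very low frequencies, which is the shift Swenson describes.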
And another quote from Swenson, still in the same thread, which seems quite clear to me:
"What is digital?
Digital is a nonlinear system.
Take a piece of wire with a voltage on it. These days many chips use a voltage range from 0 - 3.3V. The voltage on that wire can be anywhere in the range from 0 - 3.3V, it IS in actuality an analog signal that can take on any value in that range.
We then run that into a circuit that applies a threshold, let's make it easy and say the threshold is at 1.5 volts. What this circuit does is any value on its input below 1.5V produces 0V on the output. And any value above 1.5V produces 3.3V on the output.
This circuit "quantizes" the input to just two values 0V and 3.3V (of course it is not perfect, but that is the ideal behavior).
So how does this circuit get this 0V and 3.3V for the output? It is the values on the power and ground pins of the chip. Remember we were talking about ground plane noise, well the ground pin of the chip is connected to the ground plane. The power pin is connected to a 3.3V supply that usually has even more noise than the ground plane.
So when you look at the analog voltage coming out of the chip with a scope you see noise on both the upper and lower parts of the "waveform". It is never an absolutely pure 0V or 3.3V. BUT as long as this noise never gets above 1.5V for the "low" part of the signal or below 1.5V for the "high" part of the signal, the next chip in the chain can correctly interpret it as either high or low with ITS threshold circuit. That is the basic concept of digital, because of the nonlinear behavior (the threshold) it can withstand large amounts of noise and still keep the integrity of the data intact.
This is why so many people think "bits are bits".
BUUUT, as I was trying to point out in the previous posts it's not all just about those thresholded values. The noise on the ground plane produced by the processing and transmission of those bits can affect the sound WITHOUT being large enough to change bits.
It can cause the oscillator producing the timing signals to have much more jitter than it normally would, it can cause the analog out of the DAC chip to be noisier than it normally would be, it can cause any analog output circuitry to be noisier than it normally would be.
And here is one that is a little harder to understand. What determines that infamous threshold in the digital chips? It turns out that in most chips used today it is a ratio between the voltages on the power and ground pins. Remember that both of these pins have noise on them, that means the threshold value is going to be changing as well. So what? as long as the noise on the input is not near the changing threshold, who cares? Well it comes back to that "the output is not really ideal" statement. It takes time for the signal to go from 0V to 3.3V. This is called the "ramp time" and you can easily see this on a scope. So the input signal is a ramp (with noise embedded on it!) going into a threshold that is also dancing around, what is the result? The time at which the threshold circuit says it changed from low to high varies quite a bit, this is the dreaded "JITTER".
This jitter is directly related to the ground plane noise talked about before. Note that the bits stay intact through all this! The threshold circuit can still correctly interpret the change from low to high.
This whole thing is about there being other paths that can change the sound other than just "the bits". And those paths CAN be affected by differences in how different programs process and produce exactly the same data.
That test you are bringing up is doing a digital loopback, it takes the bits from one place into another, then sends them back out and compares them, they are the same. That's what the circuit chain is supposed to do, and it does it. BUT every one of those parts of the chain is producing noise on its ground plane, which CAN cause changes in the sound coming out of the DAC, EVEN THOUGH the bits didn't actually change.
Does that make sense? I don't know how to better explain it. "
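The ramp-plus-moving-threshold mechanism Swenson describes can also be sketched numerically. In the toy model below (mine; the rise time, noise amplitudes and number of trials are arbitrary), noise on the edge and on the supply-derived threshold shifts the crossing instant from edge to edge, which is exactly timing jitter, while the decoded bit stays correct:

```python
# Toy model: a rising edge ramps from 0 V to 3.3 V in `ramp_time`.
# The receiver threshold is half the (noisy) supply rail; the edge itself
# also carries noise. Both shift the crossing time: that shift is jitter.
import numpy as np

rng = np.random.default_rng(2)
trials = 10_000
ramp_time = 1e-9                                   # 1 ns rise time (arbitrary)

edge_noise = 0.05 * rng.standard_normal(trials)    # noise riding on the edge (V)
rail_noise = 0.05 * rng.standard_normal(trials)    # noise on the 3.3 V rail (V)
threshold = (3.3 + rail_noise) / 2                 # threshold derived from the rails

# Edge voltage: v(t) = 3.3 * t / ramp_time + edge_noise. The crossing happens
# when v(t) == threshold, so t_cross = (threshold - edge_noise) * ramp_time / 3.3.
t_cross = (threshold - edge_noise) * ramp_time / 3.3

print(f"RMS timing jitter ~ {np.std(t_cross) * 1e12:.1f} ps "
      "(the bits still decode correctly)")
```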