192kHz considered harmful

  • bluegaspode
    Senior Member
    • Jul 2009
    • 3229

    #61
    Originally posted by Phil Leigh
    What some people are misunderstanding here is that it is the reconstruction filter that recovers the analogue signal, not the DAC. The DAC simply presents the filter with a set of voltages over time. It is within the filter that the sinc function becomes manifest and this is indeed an infinite series - the mathematical definition of a filter is a continuous function over time. A filter is not a step function!
    To be clear on this, what comes out of the filter IS a mathematically perfect sine wave...
    I read through the paper posted before in the meantime (http://www.lavryengineering.com/docu...ing_Theory.pdf )

    Is there any more (hopefully easy to understand) information about how these reconstruction filters work in practice? The paper comes close around page 18, where it is shown how at least the right half of a sinc function can be produced by some circuit.
    I'm missing the left part of the sinc function, because based on the explanation it is needed as well to recreate the original waveform.

    I obviously never cared about how DA converters work; based on the paper I now think of thousands of sinc-producing circuits which all add up to the final waveform.
    Probably this is not how it works in practice, but this is a missing link for me before I can agree that the Nyquist theorem and good circuits are all that we need.
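
    As a purely numerical sketch of that "sum of sinc circuits" picture (this is just the math in numpy with an arbitrary test signal, not how any real DAC is built): every stored sample contributes one scaled, shifted sinc, and their sum is the reconstructed waveform.

      import numpy as np

      fs = 44100.0                       # sample rate in Hz
      n = np.arange(256)                 # sample indices
      t_s = n / fs                       # the sample instants
      x = np.sin(2 * np.pi * 1000 * t_s) + 0.3 * np.sin(2 * np.pi * 9000 * t_s)

      t = np.linspace(t_s[0], t_s[-1], 20000)          # a fine "analog" time grid
      # each sample n contributes x[n] * sinc(fs*t - n); the output is their sum
      x_hat = np.sum(x[None, :] * np.sinc(t[:, None] * fs - n[None, :]), axis=1)

      x_true = np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 9000 * t)
      mid = (t > t_s[40]) & (t < t_s[-40])             # stay away from the block edges
      print("max reconstruction error:", np.max(np.abs(x_hat[mid] - x_true[mid])))

    The only error left in the middle of the block comes from truncating what is ideally an infinite sum; oversampling DACs in essence approximate the same sum with a finite digital filter.
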
    Did you know: SqueezePlayer will stream all your music to your Android device. Take your music everywhere!
    Remote Control + Streaming to your iPad? Squeezebox + iPad = SqueezePad
    Want to see a Weather Forecast on your Radio/Touch/Controller ? => why not try my Weather Forecast Applet
    Want to use the Headphones with your Controller ? => why not try my Headphone Switcher Applet


    • DaveWr
      Senior Member
      • Jan 2007
      • 629

      #62
      Originally posted by maggior
      Just trying to educate myself here...

      Does this mean that there could be audible distortion introduced due to ringing in a recording that has clipped waveforms? Taken to an extreme, clipping could start to approximate a square wave.

      BTW, I'm finding this discussion to be very interesting...I'm learning a lot.
      Clipping itself is an extreme distortion; at medium power levels, the high-frequency content of these waveforms can easily destroy loudspeaker tweeters.

      The clipped waveform will usually have some of this high-frequency content removed by the anti-alias filter that is the first part of the ADC (analogue to digital converter). Although it will now be more benign, it is still a distortion.

      The ringing issue from square waves is no longer an issue, due to the way ADCs are designed: they no longer have a very sharp (often 7th-order) 'brickwall' filter at 20kHz. That filter was always the source of some low-level errors, repeated again in the playback DAC chain. Ten years ago it was trendy to have multiple filter choices in the DAC chain of your CD player, and they did sound slightly different.

      As with all things digital, speeds have gone up, and the digital guys made it easier to get more linear ADC/DAC systems by using only a few bits at very high oversampled rates, achieving better results than standard multibit DACs.
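
      As a rough sketch of that "few bits at a very high rate" idea (an assumed first-order loop with illustrative numbers, not any particular chip): a 1-bit delta-sigma modulator whose output average tracks the input, with the quantisation error pushed up in frequency.

        import numpy as np

        osr = 64                                   # oversampling ratio (illustrative)
        fs = 44100 * osr
        t = np.arange(fs // 10) / fs               # 0.1 s of signal
        x = 0.5 * np.sin(2 * np.pi * 1000 * t)     # input, kept well inside [-1, 1]

        y = np.zeros_like(x)                       # the 1-bit output stream (+1 / -1)
        acc = 0.0
        for i in range(len(x)):
            acc += x[i] - (y[i - 1] if i else 0.0) # integrate the error vs. the last output
            y[i] = 1.0 if acc >= 0 else -1.0       # 1-bit quantiser

        # even a crude moving-average decimator recovers the tone reasonably well;
        # real converters use higher-order loops and much better decimation filters
        rec = np.convolve(y, np.ones(osr) / osr, mode="same")
        print("rms error after averaging:", np.sqrt(np.mean((rec - x) ** 2)))
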

      Dave


      • Soulkeeper
        Senior Member
        • Dec 2009
        • 1226

        #63
        Originally posted by bluegaspode
        Is there any more (hopefully easy to understand) information about how these reconstruction filters work in practice?
        I found this, which looks promising. I'll start reading it myself, now.


        • DaveWr
          Senior Member
          • Jan 2007
          • 629

          #64
          Originally posted by Soulkeeper
          I found this, which looks promising. I'll start reading it myself, now.
          Those systems are 1 bit A/D and D/A systems. This is the technology used by Sony in their DSD techniques as used in SACDs.

          Dave


          • Soulkeeper
            Senior Member
            • Dec 2009
            • 1226

            #65
            AFAIU, delta-sigma DACs are used for PCM decoding. It's the most widespread type of audio DAC, at least according to some of what I've read.

            Unfortunately the article I linked to went into more detail about ADC than DAC.

            After a quick scan of the DSD article on Wikipedia, I get the impression that DSD stores audio in a delta-sigma modulated format, while delta-sigma DACs convert a PCM signal to a delta-sigma modulated signal as part of the DA conversion.
            Last edited by Soulkeeper; 2012-03-07, 22:27.


            • DaveWr
              Senior Member
              • Jan 2007
              • 629

              #66
              Originally posted by Soulkeeper
              AFAIU, delta-sigma DACs are used for PCM decoding. It's the most widespread type of audio DAC, at least according to some of what I've read.

              Unfortunately the article I linked to went into more detail about ADC than DAC.

              After a quick scan of the DSD article on Wikipedia, I get the idea that DSD stores audio in a delta-sigma modulated format, while delta-sigma DACs convert a PCM signal to a delta-sigma modulated signal as part of the DA conversion.
              You are exactly right - virtually all modern DACs are multibit delta-sigma DACs. These bring another filter that does affect sound quality - the interpolation filter. This is used to manufacture samples that don't exist in the original sampling. This is usually where designers claim all their specialities. For example, Linn in their DS products don't use the DAC's interpolation, but their own design of interpolation and noise-shaping digital filters. Whether this is different / better is probably a moot point. I think it is very much a low-level effect.


              • pippin
                Senior Member
                • Oct 2007
                • 14809

                #67
                Originally posted by bluegaspode
                I obviously never cared about how DA converters work; based on the paper I now think of thousands of sinc-producing circuits which all add up to the final waveform.
                Probably this is not how it works in practice, but this is a missing link for me before I can agree that the Nyquist theorem and good circuits are all that we need.

                We have to do away with a huge misunderstanding here, I believe.
                This article is NOT about how a DAC should work, which technology is best and what kind of limitations practical recording and playing equipment may have.

                All of this has nothing, really NOTHING to do with the topic at hand. It just doesn't matter how good or bad the DAC or the ADC or the speaker or the microphone is. All of this has absolutely NOTHING to do with the storage format for the music.


                All that Nyquist/Shannon says is: if you have a frequency X, which is the maximum frequency you are interested in (here: the highest frequency you could probably ever hear), then if you use a sampling frequency of 2*X to store your sampled data, then ALL information contained in the signals below X will be included in the information you store. There is NO additional information you get by using a higher sampling frequency. Nothing. All you get is additional information about frequencies ABOVE X but not below, the information on the frequencies below X is already there and it's complete.


                What we have to understand here is that this has nothing to do with any imperfections in the recording or playback process. These will of course be there, but if your recording is crap, it won't get better just because you STORE more of it at a higher sample rate. And if your DAC is distorting, then it will not get any better just because you throw higher frequencies at it - on the contrary, the article implies it actually gets WORSE, because of effects letting distortions from the higher frequencies leak into the lower frequency spectrum.
                To be clear: this is NOT "missing information" that was just not recorded due to the low sample rate, it's DISTORTED information due to the bad reproduction process in the DAC.


                What the article does NOT say is that it doesn't make sense to use different sample rates or sample sizes for processing. It can make sense to use something different while processing your data, for example because of limitations of the technology you use. A good example is the 24-bit sample size used in the Squeezebox internally. This makes perfect sense because the Squeezebox does digital processing to change the volume. If you stick to 16-bit data, you get rounding errors and information losses from this processing that you can avoid if you go to 24 bit during processing.
                But it does NOT mean that anything gets better if the data you throw at it is already 24 bit.
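
                A toy version of that rounding argument (assumed numbers, nothing to do with the actual Squeezebox firmware): attenuate 16-bit samples by about 24 dB and then undo it. With a 16-bit intermediate the low-order bits are gone for good; with 8 extra bits of headroom they survive.

                  import numpy as np

                  rng = np.random.default_rng(0)
                  x = rng.integers(-32768, 32767, size=100000, dtype=np.int32)  # 16-bit samples
                  gain = 1 / 16                                                 # roughly -24 dB

                  att16 = np.round(x * gain).astype(np.int32)        # rounded back to a 16-bit grid
                  att24 = np.round(x * gain * 256).astype(np.int32)  # 8 extra bits of headroom

                  print("worst error, 16-bit intermediate:", np.max(np.abs(att16 / gain - x)))          # ~8 LSB
                  print("worst error, 24-bit intermediate:", np.max(np.abs(att24 / (gain * 256) - x)))  # 0 LSB
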

                To use a somewhat different analogy: when a bank calculates interest, it will use four additional digits behind the cent (so 1 $ is 1.00 0000 for them). Why? Because if you get, for example, 1% interest on your dollar per year and that interest is paid monthly, the monthly interest would amount to 0.0833 ct. If you round that to whole cents, you just get 0, so you would never get any interest, which would be plain wrong, because to calculate things right they have to pay you 1 ct per year.
                HOWEVER, they will never actually PAY you 0.0833 ct, because there is no such thing. And your dollar doesn't get any different just because you write it as 1.00 0000 $; it's still exactly the same thing as 1 $, and everything behind the last cent digit has no meaning at all (or it would already be a rounding error).
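
                The same arithmetic as a quick check (assumed rates and rounding rules, purely for illustration): 1% a year on one dollar, credited monthly.

                  from decimal import Decimal, ROUND_HALF_UP

                  monthly = Decimal("0.01") / 12            # 1% annual interest on $1, per month
                  cents_only = Decimal("0")
                  six_digits = Decimal("0")
                  for _ in range(12):
                      # rounding each month to whole cents loses the interest entirely
                      cents_only += monthly.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
                      # carrying four digits behind the cent keeps it
                      six_digits += monthly.quantize(Decimal("0.000001"), rounding=ROUND_HALF_UP)

                  print(cents_only)                            # 0.00 - you never get paid
                  print(six_digits.quantize(Decimal("0.01")))  # 0.01 - the 1 ct per year
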


                Likewise, nobody says that there will be no way to invent some fancy technology that does a more accurate recording of the analog audio signal at a 2 MHz sample rate, and that this can be superior to a 16-bit, 44.1 kHz recording.
                HOWEVER: if you then take the digital output of that hypothetical processor and down-convert it to 44.1 kHz sample-rate audio, then for all frequencies below 22.05 kHz there will be NO loss of information, not even the slightest.
                So it doesn't make sense to STORE and TRANSMIT the data at higher frequencies.
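
                That claim is easy to check numerically (assuming numpy/scipy and a test signal that is already band-limited well below 20 kHz): take it at 192 kHz, decimate properly to 48 kHz, go back up to 192 kHz and compare.

                  import numpy as np
                  from scipy.signal import resample_poly

                  fs_hi = 192000
                  t = np.arange(fs_hi) / fs_hi                        # one second at 192 kHz
                  x = sum(np.sin(2 * np.pi * f * t) for f in (440.0, 3000.0, 9000.0, 12000.0))

                  down = resample_poly(x, 1, 4)                       # 192 kHz -> 48 kHz
                  back = resample_poly(down, 4, 1)                    # and back up to 192 kHz

                  core = slice(fs_hi // 10, -(fs_hi // 10))           # ignore the filter edges
                  err = np.max(np.abs(back[core] - x[core])) / np.max(np.abs(x))
                  print("worst-case round-trip error relative to peak:", err)

                What little error remains comes from the resampling filter itself, not from any information lost at 48 kHz, and it shrinks as that filter is made longer.
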

                There are a few good arguments for 48 kHz, most notably that, since it's common to use 96 kHz or 192 kHz equipment in processing (remember: it can STILL make a lot of sense to do PROCESSING at higher frequencies, especially in the digital domain), you get much simpler up/downsampling logic.
                I can't argue about 24 vs. 16 bit; that does indeed depend on the actual dynamic range you can record and reproduce, and I don't know where technology is here. Purely from an information theory standpoint, a 24-bit word size DOES contain more information than a 16-bit word size - that's different from the sample rate question.


                Now there is a third thing, and that's the "trust your ears" thing.
                1. Yes, you should, because in the end it's all that matters
                2. Normal people do that, but in my experience audiophiles don't. They just trust the money or some other rationale; otherwise they would not be so opposed to double-blind tests.

                The problem with "trust your ears" is also two-fold:
                1. It has nothing, really nothing, to do with all we've discussed above.
                2. It can lead to unexpected results.

                What you PERCEIVE as superior sound does not have to be the sound that has the higher similarity to the original signal. All of what we discussed above was only about how similar the signal you are reproducing is to the original sound as it was mastered. It says nothing about how GOOD that actually sounds.

                One extreme example: if you compare 128kbps mp3 compressed audio (lame codec at high quality) to the original file and you do that across a large sample of songs and on good equipment, your chances are somewhat high that you are able to discriminate between the two signals.
                What that means is: when you do an ABX test, where X is either A or B and you don't know which is which, you have a pretty high chance of saying correctly whether X is actually A or B.

                Now if you ask an ENTIRELY DIFFERENT question - whether A or B sounds BETTER - you have about a 50% chance of finding the mp3 to sound better than the original file. If you have good hearing, your chances of preferring the mp3 are actually a bit HIGHER (because most hearing defects destroy, to one extent or another, the psychoacoustic masking that mp3 relies on).
                The same can of course hold true of a 192 kHz sample rate with reproduction artifacts coming from ultrasonic frequencies; it's just that if you KNOW what you are listening to, you will say "yes, that mp3 degraded the quality", while with the HD stuff you say "oh yes, that additional sample rate added more detail".

                Chances are very high that if you hear more "detail" due to an inaudible processing difference, it's because that "additional detail" just wasn't there in the original recording and was made up by your equipment.


                To sum up:
                1. It CAN make a lot of sense to use whatever technology gives you the best result in creating a digital representation of an analog signal; if this involves high frequencies, so be it, but you always have to be aware that the opposite can be true as well.
                2. For STORAGE and TRANSPORTATION of data, it's perfectly fine to downconvert this to, for example, 48 kHz or 44.1 kHz sample rate. Since you do this in the digital domain you don't even have losses due to bad equipment, inaccuracies or whatever; you will lose no information, neither theoretical nor practical, about the audible frequency range.
                3. When you are reproducing (playing) the audio then, again, what is the best technology available to you can depend on a lot of technical details. Evidence from the article above seems to indicate that using ultrasonic frequencies in your samples makes things worse, not better, but in the end all of this will be up to the engineer developing the DA conversion system. But again: for all of this it makes NO difference whether the material you throw at it is 48 or 96 kHz, except maybe for some practicality reasons (not having to transcode data at some source), but that's pure handling and has no impact on quality - if it does, the DAC designer has done a really bad job.
                4. "Good" or "bad" in 1-3 doesn't mean how well something sounds to your ears but how similar it is to the original signal; in some cases a "bad" or "lossy" signal can actually sound better to most ears.
                ---
                learn more about iPeng, the iPhone and iPad remote for the Squeezebox and
                Logitech UE Smart Radio as well as iPeng Party, the free Party-App,
                at penguinlovesmusic.com
                New: iPeng 9, the Universal App for iPhone, iPad and Apple Watch


                • Mnyb
                  Senior Member
                  • Feb 2006
                  • 16539

                  #68
                  Thanks pippin .

                  It is not about specific techniques; let's assume that the best state-of-the-art 192k gear is used during mastering and playback.

                  I too tried to make this point; let's just leave bit depth aside for a while.

                  Ponder for a moment that the recorded analog signal had no content above 24 kHz.

                  Then a 48k signal and a 192k signal would contain exactly the same signal; each would reconstruct to exactly the same waveform, and you would not even forensically be able to tell them apart.

                  What is said is that a digital signal has the necessary data to completely describe any signal below fs/2.

                  This includes the time domain. It is a fallacy to believe that the content between samples is lost, or that there is a timing inaccuracy of +/- 1/fs. This is false.
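
                  A quick numerical check of that "between the samples" point (numpy assumed, illustrative numbers): delay a band-limited tone by one tenth of a sample period, sample it, and read that delay straight back out of the samples.

                    import numpy as np

                    fs = 48000.0
                    N = 512
                    t_s = np.arange(N) / fs
                    f0 = 50 * fs / N                            # 4687.5 Hz, an exact FFT bin for simplicity
                    delay = 0.1 / fs                            # one tenth of a sample period

                    x0 = np.sin(2 * np.pi * f0 * t_s)           # the tone on the sample grid
                    x1 = np.sin(2 * np.pi * f0 * (t_s - delay)) # the same tone, delayed by 0.1 samples

                    X0 = np.fft.rfft(x0)
                    X1 = np.fft.rfft(x1)
                    measured = -np.angle(X1[50] / X0[50]) / (2 * np.pi * f0)
                    print("delay recovered from the samples:", measured * fs, "samples")  # ~0.1
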

                  So an argument for higher fs necessarily implies that there is some merit in playing back ultrasonics.


                  There may be thousands of reasons for a studio or for DAW software to operate at whatever rate suits it.


                  As pippin and many others say, any kind of higher-fs recording and processing can be involved in getting to that high-quality 44.1 or 48 signal, but that's not really the point. On the contrary, employing such methods will probably yield a signal that is, for all intents and purposes, perfect, and actually proves the point that an fs of 44.1 encodes all the information we can hear in the frequency domain.
                  But that's all beside the point.
                  --------------------------------------------------------------------
                  Main hifi: Rasbery PI digi+ MeridianG68J MeridianHD621 MeridianG98DH 2 x MeridianDSP5200 MeridianDSP5200HC 2 xMeridianDSP3100 +Rel Stadium 3 sub.
                  Bedroom/Office: Boom
                  Loggia: Raspi hifiberry dac + Adams
                  Bathroom : Radio (with battery)
                  iPad with iPengHD & SqueezePad
                  (spares Touch, SB3, reciever ,controller )
                  server Intel NUC Esxi VM Linux mint 18 LMS 7.9.2

                  http://people.xiph.org/~xiphmont/demo/neil-young.html


                  • bluegaspode
                    Senior Member
                    • Jul 2009
                    • 3229

                    #69
                    Originally posted by pippin
                    All of this has nothing, really NOTHING to do with the topic at hand. It just doesn't matter how good or bad the DAC or the ADC or the speaker or the microphone is. All of this has absolutely NOTHING to do with the storage format for the music.
                    You are missing my point.
                    It's fundamental that I am asking about how DACs work, because otherwise the Nyquist theorem is of no value in our practical world.

                    To make my argument clear, let's go back to circles: I think there is some theory that claims that all the data you need to draw a perfect circle is its center point and radius (that would be 2 samples).
                    Without asking how the circle will be drawn later, it is WRONG to just conclude that it doesn't help to record more samples (like 100 points on the perimeter of the circle).

                    Let's say the hardware that is used to draw the circle is only able to work with a rough estimate of PI (2 instead of 3.14...). The circle that this device would draw would be a very bad approximation of the original circle, and every graphophile lover of circles would complain. Now in such a scenario you will draw BETTER circles with 100 sampled points on the perimeter, and even better circles with 1000 sampled points on the perimeter.

                    So as long as a DAC (or the reproduction filter) isn't working with good enough sinc waveforms the Nyquist theory remains just a nice theory.
                    Maybe you all know so much more about DACs than me, that you are not questioning the sinc functions in your DAC anymore.

                    But I am ... and maybe so are many audiophiles who read that article, so don't stop at claiming 'Nyquist is enough, there is no more to talk about'.
                    If you want to prove that in today's world higher sample rates don't help, you cannot stop at Nyquist; you need to go all the way to the loudspeaker.

                    You are exactly right - virtually all modern DACs are multibit delta-sigma DACs. These bring another filter that does affect sound quality - the interpolation filter. This is used to manufacture samples that don't exist in the original sampling.
                    Uhh ohhh. And why is this better than just using one of the original samples if I had double the sample rate? Or are we in DAC snake-oil territory already when it comes to creating interpolation filters that invent some new samples?
                    Last edited by bluegaspode; 2012-03-08, 07:21.
                    Did you know: SqueezePlayer will stream all your music to your Android device. Take your music everywhere!
                    Remote Control + Streaming to your iPad? Squeezebox + iPad = SqueezePad
                    Want to see a Weather Forecast on your Radio/Touch/Controller ? => why not try my Weather Forecast Applet
                    Want to use the Headphones with your Controller ? => why not try my Headphone Switcher Applet


                    • pippin
                      Senior Member
                      • Oct 2007
                      • 14809

                      #70
                      Originally posted by bluegaspode
                      Let's say the hardware that is used to draw the circle is only able to work with a rough estimate of PI (2 instead of 3.14...). The circle that this device would draw would be a very bad approximation of the original circle, and every graphophile lover of circles would complain. Now in such a scenario you will draw BETTER circles with 100 sampled points on the perimeter, and even better circles with 1000 sampled points on the perimeter.
                      Sorry, but that's nonsense.

                      We are talking about digital information processing here.
                      If the hardware that does the drawing of the circle can only use rough approximations, you need to convert your perfect circle (center plus radius) into whatever strange approximation your machine needs to draw a good circle.
                      All you need for that is a computer that understands pi and how to do a perfect circle from the center point and the radius (and of course your machine's limitation).

                      The information (center point and radius) is still complete. There is nothing else you will ever need to describe it.
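
                      A toy version of that point (pure illustration): from the complete description you can generate, on demand, however many perimeter points the drawing device happens to need.

                        import math

                        def perimeter_points(cx, cy, r, n):
                            """n points on the circle, derived from center and radius alone."""
                            return [(cx + r * math.cos(2 * math.pi * k / n),
                                     cy + r * math.sin(2 * math.pi * k / n)) for k in range(n)]

                        coarse = perimeter_points(0.0, 0.0, 1.0, 100)    # for a crude plotter
                        fine = perimeter_points(0.0, 0.0, 1.0, 1000)     # for a better one
                        print(len(coarse), len(fine))
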

                      Even worse: what you are postulating assumes that a WORSE representation of the circle could actually lead to better results, through a process in which the CREATOR of the limited approximation has a better understanding of the limitations of your reproduction machine than that machine itself (because it is creating a worse approximation of the circle to assist the latter's reproduction process).

                      Actually a pretty accurate description of a lot of stuff being done in audio but still by no means sensible.

                      So as long as a DAC (or the reproduction filter) isn't working with good enough sinc waveforms the Nyquist theory remains just a nice theory.
                      No, that, too, is wrong. The theory is correct. Just because your filter is bad doesn't mean that any different data will serve it better. Worse. Now you need to do an end-to-end optimization just to avoid using perfect information. Doesn't make sense and will certainly not work.

                      Maybe you all know so much more about DACs than me, that you are not questioning the sinc functions in your DAC anymore.
                      No, we have understood Shannon and so we know it's two completely separate problems.

                      Again (that's what I wrote above): if your DAC needs something other than 44.1/16, it can CREATE IT FROM THAT. There is no issue with upsampling 44.1/16 to 12.3GHz@486 bits if that's what your DAC needs. There will not be any information loss. But you gain exactly nothing from transmitting and storing data in that format.
                      ---
                      learn more about iPeng, the iPhone and iPad remote for the Squeezebox and
                      Logitech UE Smart Radio as well as iPeng Party, the free Party-App,
                      at penguinlovesmusic.com
                      New: iPeng 9, the Universal App for iPhone, iPad and Apple Watch


                      • bluegaspode
                        Senior Member
                        • Jul 2009
                        • 3229

                        #71
                        Originally posted by pippin
                        No, that, too, is wrong. The theory is correct. Just because your filter is bad doesn't mean that any different data will serve it better. Worse. Now you need to do an end-to-end optimization just to avoid using perfect information. Doesn't make sense and will certainly not work.
                        Come on. It doesn't make sense to adhere to a perfect theory when the hardware cannot come close to what the theory claims (as said: I don't know if DACs currently come close or not).

                        I want to listen to music NOW with the best quality possible NOW, and it's of no help if one insists that, based on the theory, I don't need more samples, when real-world hardware with existing deficiencies might still come closer to the original waveform when fed with more samples. Maybe such a discussion is a bit obsolete, now that DACs seem to be able to do oversampling internally with ease.

                        There is no issue with upsampling 44.1/16 to 12.3GHz@486 bits if that's what your DAC needs.
                        But is it really that easy? Reading the paper (both the one provided first, but even more the one posted by DaveWR) I come to the conclusion that oversampling in the digital world isn't trivial at all, because I need a big enough set of points from sinc-functions from previous and future samples to get the actual oversampled value between two provided samples.
                        So when talking about good-enough oversampling, we are in the domain of buffer sizes, look-ahead and computing power, and compromises will be necessary, as we cannot do 'correct' oversampling based on Nyquist/Shannon in the digital world with limited computational power.
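
                        To put rough numbers on that trade-off (numpy assumed, purely illustrative): estimate the value half-way between two samples of a tone with a truncated, windowed sinc and watch the error fall as the look-ahead grows.

                          import numpy as np

                          fs = 48000.0
                          f0 = 5000.0                                    # a tone comfortably below fs/2
                          target = 1000.5                                # half-way between samples 1000 and 1001
                          true_val = np.sin(2 * np.pi * f0 * target / fs)

                          for taps in (8, 32, 128):                      # samples used on EACH side
                              n = np.arange(1001 - taps, 1001 + taps)    # past and future samples around the target
                              x = np.sin(2 * np.pi * f0 * n / fs)
                              h = np.sinc(target - n) * np.hanning(2 * taps)   # truncated, windowed sinc
                              print(taps, "taps per side -> error", abs(np.sum(x * h) - true_val))

                        So yes, it is a compromise, but the error falls off quickly with a fairly modest buffer.
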

                        So the question comes up: what is cheaper to produce: a DAC that internally can do oversampling with very high accuracy or a DAC without oversampling that is just fed with 192kHz?


                        Or taking examples from other domains, where theory alone does not help:

                        Why are there error correction bits on (data) CDs? In theory they are not needed, as bits are bits. But in reality the hardware has deficiencies (e.g. scratches), so we deliberately put extra bits on the disc to overcome the deficiencies.

                        No one would ever argue that these extra bits are a waste because, in theory, with perfect CDs that cannot be harmed by scratches, they are not needed. And no one tries to build perfectly scratch-resistant CDs either; we just accept that we use some extra bits to work around deficiencies of the hardware.
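
                        The same idea in miniature (a toy scheme; real CDs use cross-interleaved Reed-Solomon coding, not this): redundancy is added on purpose so that a damaged read can still be corrected.

                          def encode(bits):
                              return [b for b in bits for _ in range(3)]  # each data bit stored three times

                          def decode(stored):
                              # a majority vote over each group of three survives any single flipped bit
                              return [int(sum(stored[i:i + 3]) >= 2) for i in range(0, len(stored), 3)]

                          data = [1, 0, 1, 1, 0]
                          stored = encode(data)
                          stored[4] ^= 1                                  # a "scratch" flips one stored bit
                          print(decode(stored) == data)                   # True: the damage is repaired
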

                        The same logic also applies to the Nyquist theorem. It is NOT enough to just point to the theorem and then stop all discussion about higher sample rates (which I think is a flaw in that part of the paper, though the other parts surely compensate for this).

                        So my main question remains: are there known deficiencies in DACs that are easier to overcome with more samples, or is it cheaper to work on the deficiencies, thus just improving the DAC that gets fed the same input?

                        Just pointing to Nyquist does not answer such a question.
                        Always adhering to the perfect theory is too expensive in the real world, so workarounds dominate here.
                        Did you know: SqueezePlayer will stream all your music to your Android device. Take your music everywhere!
                        Remote Control + Streaming to your iPad? Squeezebox + iPad = SqueezePad
                        Want to see a Weather Forecast on your Radio/Touch/Controller ? => why not try my Weather Forecast Applet
                        Want to use the Headphones with your Controller ? => why not try my Headphone Switcher Applet


                        • cliveb
                          Senior Member
                          • Apr 2005
                          • 2071

                          #72
                          Pippin,

                          Overall you make good points, but you missed an important issue:

                          Originally posted by pippin
                          All that Nyquist/Shannon says is: if you have a frequency X, which is the maximum frequency you are interested in (here: the highest frequency you could probably ever hear), then if you use a sampling frequency of 2*X to store your sampled data, then ALL information contained in the signals below X will be included in the information you store. There is NO additional information you get by using a higher sampling frequency. Nothing. All you get is additional information about frequencies ABOVE X but not below, the information on the frequencies below X is already there and it's complete.
                          What you have failed to point out is that if you sample at 2*X, and the signal being sampled contains any frequency components greater than X, then the result does NOT accurately encode the information up to X - it will include aliasing components below X that were not in the original signal. This is why the signal needs to be band-limited before sampling. I'm sure you understand this and not mentioning it was simply an oversight, but I mention it so that you get nit-picked by a friend rather than a foe :-)
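
                          For anyone who wants to see it happen (numpy assumed, illustrative numbers): a 30 kHz component sampled at 44.1 kHz without band-limiting first doesn't disappear, it folds down to 44.1 - 30 = 14.1 kHz, right into the audible band.

                            import numpy as np

                            fs = 44100
                            N = 44100                                            # one second
                            x = np.sin(2 * np.pi * 30000 * np.arange(N) / fs)    # ultrasonic tone, NOT filtered out

                            spec = np.abs(np.fft.rfft(x))
                            alias = np.fft.rfftfreq(N, d=1 / fs)[np.argmax(spec)]
                            print("strongest component after sampling:", alias, "Hz")   # 14100.0 Hz
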
                          Until recently: Transporter -> ATC SCM100A, now sold :-(
                          House move forced change to: piCorePlayer(RPi2/HiFiBerry DIGI2 Pro) -> Meridian 218 -> Meridian M6


                          • cliveb
                            Senior Member
                            • Apr 2005
                            • 2071

                            #73
                            Originally posted by bluegaspode
                            But is it really that easy? Reading the paper (both the one provided first, but even more the one posted by DaveWR) I come to the conclusion that oversampling in the digital world isn't trivial at all, because I need a big enough set of points from sinc-functions from previous and future samples to get the actual oversampled value between two provided samples.
                            Er, no. Traditional oversampling (by factors of 2) is extremely trivial - you just stuff zero valued samples in between the existing samples. This does not create any extra information - it just alters the aliasing artefacts and moves them further up the frequency spectrum. Each doubling of the oversampling rate moves the artefacts one octave higher. The purpose of oversampling is simply to allow a gentler analogue reconstruction filter to be used.
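
                            A small spectrum check of that description (numpy assumed, illustrative signal): stuff a zero between every sample of a 10 kHz tone recorded at 44.1 kHz. The tone stays at 10 kHz and an image appears at 34.1 kHz, well clear of the audio band - which is exactly the room the gentler analogue filter gets to work with.

                              import numpy as np

                              fs = 44100
                              N = 4410                                     # 0.1 s of signal
                              x = np.sin(2 * np.pi * 10000 * np.arange(N) / fs)

                              up = np.zeros(2 * N)
                              up[::2] = x                                  # zero-stuffing: no new information added

                              spec = np.abs(np.fft.rfft(up))
                              freqs = np.fft.rfftfreq(2 * N, d=1 / (2 * fs))
                              print(np.sort(freqs[np.argsort(spec)[-2:]])) # [10000. 34100.]
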

                            I'm not so sure about what's now called "upsampling" - where the increase in sample rate is not a factor of 2. That does require new sample values to be computed. (I'd guess that it's done by first oversampling by a large factor of 2, applying a digital reconstruction filter, then resampling the result at the desired target rate. Please can someone correct me on this.) Frankly I cannot see the point of upsampling - it achieves nothing of practical value that simple oversampling doesn't.
                            Until recently: Transporter -> ATC SCM100A, now sold :-(
                            House move forced change to: piCorePlayer(RPi2/HiFiBerry DIGI2 Pro) -> Meridian 218 -> Meridian M6


                            • pippin
                              Senior Member
                              • Oct 2007
                              • 14809

                              #74
                              Originally posted by bluegaspode
                              Come on. It doesn't make sense to adhere to a perfect theory when the hardware cannot come close to what the theory claims (as said: I don't know if DACs currently come close or not).
                              It can. All this is about is transmission and storage of data.
                              And we do have hardware that can do that perfectly, so no problem to be solved here.
                              I want to listen to music NOW with the best quality possible NOW, and it's of no help if one insists that, based on the theory, I don't need more samples, when real-world hardware with existing deficiencies might still come closer to the original waveform when fed with more samples.
                              Yes. The point is: it can't. Theory or not.

                              Maybe such a discussion is a bit obsolete, now that DACs seem to be able to do oversampling internally with ease.
                              Again: The DAC has nothing to do with this. It's all about the bits on your harddisc only.

                              Handling more data actually causes a lot of real-world problems that DO degrade your audio experience: lack of disc space, lack of bandwidth, higher power consumption, processing requirements, noise, heat and reduced battery life of components - and we still haven't even entered the DAC.
                              All just to shovel around redundant data that you could just as well create with a cheap 2-ct-a-piece logic component right before the DAC.

                              So the question comes up: what is cheaper to produce: a DAC that internally can do oversampling with very high accuracy or a DAC without oversampling that is just fed with 192kHz?
                              No. The question is: is it cheaper to blow up a 48 kHz sample-rate file right in front of your DAC using a shift register, or to transport, store and process the blown-up data end-to-end?

                              Or taking examples from other domains, where theory alone does not help:

                              Why are there error correction bits on (data) CDs? In theory they are not needed, as bits are bits. But in reality the hardware has deficiencies (e.g. scratches), so we deliberately put extra bits on the disc to overcome the deficiencies.
                              Again: you are mixing information theory and technical implementation. The CD production process is lossy. If you don't trust your harddrive, add error correction bits, too. But you would not add three empty tracks to your CD in the hope of making the chances of reading the non-empty ones better by some obscure theory.

                              The same logic also applies to the Nyquist theorem.
                              No. And you can repeat that as often as you like. The Nyquist theorem is a mathematical theorem and it holds. Always. No need for fudge-ups.
                              Fudge up your data storage, transmission, DACs, loudspeakers, microphones, mixing equipment, whatever, but there is no need to fudge up your data.

                              So my main question remains: are there known deficiencies in DACs that are easier to overcome with more samples, or is it cheaper to work on the deficiencies, thus just improving the DAC that gets fed the same input?
                              Again. I know you don't want to understand it: but if you need more samples, YOU CAN MAKE THEM UP. It's cheaper, it's easier and it's even BETTER!

                              To stay with your other example: if you need padding zeroes in your database tables, you will create them yourself. You don't go to Oracle and tell them to ship a number of expensive, certified zeroes to you to fill up your database.
                              Last edited by pippin; 2012-03-08, 10:13.
                              ---
                              learn more about iPeng, the iPhone and iPad remote for the Squeezebox and
                              Logitech UE Smart Radio as well as iPeng Party, the free Party-App,
                              at penguinlovesmusic.com
                              New: iPeng 9, the Universal App for iPhone, iPad and Apple Watch


                              • pippin
                                Senior Member
                                • Oct 2007
                                • 14809

                                #75
                                Originally posted by cliveb
                                What you have failed to point out is that if you sample at 2*X, and the signal being sampled contains any frequency components greater than X, then the result does NOT accurately encode the information up to X - it will include aliasing components below X that were not in the original signal. This is why the signal needs to be band-limited before sampling. I'm sure you understand this and not mentioning it was simply an oversight, but I mention it so that you get nit-picked by a friend rather than a foe :-)
                                Err... no.
                                Again (why is this so hard to understand???): the discussion is NOT about limitations in the sampling process or the reproduction process. It's about which part of that information you then need to keep in order to store, transmit and use the data.
                                Aliasing is adding noise (or reducing the quality of your signal) during the sampling process. I never did or will claim that any sampling technology is perfect, none will ever be.
                                And yes, that first paper as well as your argument indicate that using a smaller band for sampling actually helps with the quality of the result, but I would not claim that it's impossible to do good recordings at higher sample rates, too.

                                But that wasn't my point about Shannon/Nyquist.
                                ---
                                learn more about iPeng, the iPhone and iPad remote for the Squeezebox and
                                Logitech UE Smart Radio as well as iPeng Party, the free Party-App,
                                at penguinlovesmusic.com
                                New: iPeng 9, the Universal App for iPhone, iPad and Apple Watch

