  1. #21
    Senior Member
    Join Date
    May 2015
    Location
    Grosse Pointe Woods, MI
    Posts
    769
    Quote Originally Posted by Wombat View Post
    I was there also
    But this is how it goes. Higher DR must sound better. Higher bit rate must sound better. Strangely, the people who claim to hear all kinds of things put the most trust in these numbers.
    Lately I read that the new 24/44.1 Metallica release is said to sound better than the CD, even though its RMS level is exactly 0.9 dB lower, as are the peaks.
    Since the 24/44.1 is said to have been created for the iTunes version, it may simply be the CD version dropped in volume to provide the headroom that Mastered for iTunes requires to avoid AAC clipping.

    Edit: and there is also the chance that your newly purchased 24/44.1 download went from 16 to 24 bits simply through the process of adding a steady watermark.
    If I think about it, I can't help but smile at the naive thinking that higher DR, higher bit rate, or any other technical improvement must necessarily sound better.

    The first and biggest problem is that for something to truly sound better, it has to at least sound different. In the modern audio world, the baseline for technical performance is often so good that we are actually pretty far into diminishing or even vanishing returns. So, sounding different is not always a given, or even possible.

    The second problem is that there is no uniform, generally agreed-upon standard for "better sound". All you have to do is look at the people who think vinyl or analog tape sound better, look at the actual technical performance these ancient and inherently audibly flawed media can possibly provide, and, once you stop retching, realize that we have yet another example of a total lack of correlation between improved technical performance and improved sound quality as perceived by some.

  2. #22
    Senior Member pablolie's Avatar
    Join Date
    Feb 2006
    Location
    bay area, california.
    Posts
    1,077
    Yes 16/44 is plenty, and we deserve more recordings that... uh... deserve it.

    Based on my reading, even the best human platinum ears cannot hear beyond 20/44. Again, in some extreme borderline cases the quantization error is arguably the issue, not the sampling frequency (Nyquist nailed that one).

    But I have not EVER heard of ONE scientifically conducted test that ever remotely indicates any human on the planet would benefit from anything beyond 20/44... and most stuff we get in 16/44 doesn't remotely deserve it, thanks music industry...
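    A quick back-of-the-envelope sketch of where those figures come from (the bit depths and sample rates are just the ones mentioned above, and 6.02*N + 1.76 dB is the textbook SNR of a full-scale sine):

    # Theoretical numbers only; no listening claims attached.
    def quantization_snr_db(bits: int) -> float:
        """Theoretical SNR of a full-scale sine quantized to `bits` bits."""
        return 6.02 * bits + 1.76

    def nyquist_khz(sample_rate_hz: int) -> float:
        """Highest representable frequency for a given sample rate."""
        return sample_rate_hz / 2 / 1000

    for bits in (16, 20, 24):
        print(f"{bits}-bit: ~{quantization_snr_db(bits):.0f} dB SNR")
    for rate in (44_100, 48_000, 96_000):
        print(f"{rate} Hz sampling: bandwidth to {nyquist_khz(rate):.2f} kHz")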
    ...pablo
    Server: Virtual Machine (on VMware Workstation 12) running Ubuntu 16.04 + LMS 7.9
    System: SB Touch -optical-> Benchmark DAC2HGC -AnalysisPlus Oval Copper XLR-> NAD M22 Power Amp -AnalysisPlus Black Mesh Oval-> Totem Element Fire
    Other Rooms: 2x SB Boom; 1x SB Radio; 1x SB Classic-> NAD D7050 -> Totem DreamCatcher + Velodyne Minivee Sub
    Computer audio: workstation -USB-> audioengine D1 -> Grado PS500e/Shure 1540

  3. #23
    Senior Member Julf's Avatar
    Join Date
    Dec 2010
    Posts
    2,451
    Quote Originally Posted by pablolie View Post
    Based on my reading, even the best human platinum ears cannot hear beyond 20/44.
    And I would question the 20. Even 16 bits means hearing stuff that is way below the background noise level of your listening room while listening to music at 120 dB...
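    As a rough arithmetic sketch of that point (the 120 dB peak is from the line above; the 30 dB SPL room-noise figure is purely an assumption for illustration):

    # Back-of-the-envelope only: where the plain 16-bit quantization floor sits.
    peak_spl_db = 120.0                    # assumed playback peak level (from the post)
    undithered_16bit_range_db = 6.02 * 16  # ~96 dB
    room_noise_db = 30.0                   # assumed noise floor of a quiet room

    noise_floor_spl = peak_spl_db - undithered_16bit_range_db
    print(f"16-bit quantization floor: ~{noise_floor_spl:.0f} dB SPL")
    print(f"That is ~{room_noise_db - noise_floor_spl:.0f} dB below the assumed room noise.")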
    "To try to judge the real from the false will always be hard. In this fast-growing art of 'high fidelity' the quackery will bear a solid gilt edge that will fool many people" - Paul W Klipsch, 1953

  4. #24
    Senior Member
    Join Date
    May 2015
    Location
    Grosse Pointe Woods, MI
    Posts
    769
    Quote Originally Posted by pablolie View Post
    Yes 16/44 is plenty, and we deserve more recordings that... uh... deserve it.

    Based on my reading, even the best human platinum ears cannot hear beyond 20/44. Again, in some extreme borderline cases the quantization error is arguably the issue, not the sampling frequency (Nyquist nailed that one).

    But I have not EVER heard of ONE scientifically conducted test that ever remotely indicates any human on the planet would benefit from anything beyond 20/44... and most stuff we get in 16/44 doesn't remotely deserve it, thanks music industry...
    Your interpretation of the accepted scientific facts in this area is correct, but you may be asking the wrong question.

    I claim that the more relevant question is whether we can hear the removal of music above a certain frequency, since that is what we are actually doing. We always remove signals above a certain frequency when we make recordings and the like. The relevant question is how low we can set the limit and not hear the difference.

    This becomes relevant because of masking. At the highest frequencies, lower-frequency signals mask higher-frequency signals of the same amplitude because the sensitivity of our ears falls off so rapidly there.

    Furthermore, musical sounds, with few exceptions, have amplitudes that inherently drop off rapidly above certain frequencies due to the physics of how they are produced. There is a saying in analytical physics that "everything is a combination of second-order systems," and second-order systems naturally roll off at 12 dB per octave above resonance.

    If you examine the spectral content of a variety of recordings, you will find that just about all of them have their peak amplitude at 12 kHz or less, and naturally roll off pretty sharply above that. For example, people talk about the high frequencies created by cymbals, but cymbals generally peak around 7 kHz and roll off rapidly above that.

    It turns out that in the most sensitive test cases, a sharp roll-off above 16 kHz, if well done (and it generally is these days), is not detectable using musical program material, even for listeners with really good hearing. For the rest of us, the threshold could be half that or worse.
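    For what it is worth, the effect of such a roll-off is easy to sketch with Python and SciPy. The 16 kHz cutoff is the figure from above; the gently rolled-off noise is only a crude stand-in for real program material, and the filter lengths are arbitrary choices:

    import numpy as np
    from scipy import signal

    fs = 44_100
    rng = np.random.default_rng(0)
    x = rng.standard_normal(fs * 5)                                 # 5 s of white noise
    b_shape = signal.firwin2(501, [0, 0.02, 1.0], [1.0, 1.0, 0.1])  # crude HF roll-off
    x = signal.lfilter(b_shape, 1.0, x)                             # roughly music-like spectrum

    lp = signal.firwin(1001, 16_000, fs=fs)                         # steep linear-phase low-pass
    y = signal.lfilter(lp, 1.0, x)

    def band_power_db(sig, lo, hi):
        f, pxx = signal.welch(sig, fs=fs, nperseg=4096)
        band = (f >= lo) & (f < hi)
        return 10 * np.log10(pxx[band].mean())

    print("16-22 kHz power before:", round(band_power_db(x, 16_000, 22_000), 1), "dB")
    print("16-22 kHz power after: ", round(band_power_db(y, 16_000, 22_000), 1), "dB")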

  5. #25
    Senior Member
    Join Date
    Aug 2005
    Posts
    556

    bits and sampling rate do different things

    The Nyquist theorem says that sampling at (more than) twice the highest frequency in the source will reproduce it perfectly. So 44.1 kHz will get to 22 kHz in principle. But it is critical that there be NO signal above half the sample rate, or it is aliased back below 22 kHz into the audio band as bad distortion. So, players must have a sharp low-pass filter in the stream. The problem with this is that if the amplitude response has a sharp cutoff, the phase response oscillates wildly. The phase is equivalent to delay, and this can affect imaging. So, if the sample rate is, say, 96 kHz, you can make a nice smooth (e.g., Gaussian) filter that has a smooth amplitude and phase response. But it doubles the file size. IMHO, the DVD standard of 48 kHz should be sufficient for flat response to 20 kHz.
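    As an aside, the aliasing half of that is easy to demonstrate with a few lines of Python (the 26 kHz tone is an arbitrary choice; anything above 22.05 kHz behaves the same way):

    import numpy as np

    fs = 44_100
    f_tone = 26_000                      # above the 22.05 kHz Nyquist limit
    n = np.arange(fs)                    # 1 second of samples
    x = np.sin(2 * np.pi * f_tone * n / fs)

    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    # Prints ~18100 Hz, i.e. 44100 - 26000: the tone folds back into the audio band.
    print(f"strongest component: {freqs[np.argmax(spectrum)]:.0f} Hz")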
    The number of bits, to my ear, does make a difference, especially on loud congested music, for example a symphony playing many parts loudly and at the same time (Ives' Symphony No. 3). 16 bits gets congested. It is hard to have the instruments maintain their unique place in the soundstage. In principle, by doing some sleight of hand (interpolating randomly between bit levels) CDs claim to be able to get 19 bits, which might be sufficient. And I have heard some very good sounding CDs. But not that many. Remember that with 16 bits, there are only 65,536 levels (half of them negative), so there is an inherent 1/325 % distortion due to imperfect representation of the sample height.
    I care more about 24 bits than 96 kHz, since I am old and am lucky to hear above 15 kHz.

  6. #26
    Senior Member
    Join Date
    May 2015
    Location
    Grosse Pointe Woods, MI
    Posts
    769
    Quote Originally Posted by jarome View Post
    The Nyquist theorem says that sampling at (more than) twice the highest frequency in the source will reproduce it perfectly. So 44.1 kHz will get to 22 kHz in principle. But it is critical that there be NO signal above half the sample rate, or it is aliased back below 22 kHz into the audio band as bad distortion.
    In fact, DACs for high-fidelity use have been built without reconstruction (anti-imaging) filters. Some of them are sold commercially and are highly admired by some audiophiles. In general they don't sound all that bad, because program material with significant content above 20 kHz is relatively rare.

    Secondly, many modern DACs have what are called linear-phase filters, and they work as advertised. Their phase-shift characteristic closely matches that of a simple short delay, so in a certain sense they have no excess delay beyond that which is inherent in playing a recording some time after it was made.
    So, players must have a sharp low-pass filter in the stream.
    False for the reasons given.

    The problem with this is that if the amplitude response has a sharp cutoff, the phase response oscillates wildly.
    This is false even when linear-phase filters are not used. The phrase "oscillates wildly", while poetic, is not accurate. The oscillation is damped, and therefore brief. Furthermore, the pre-ringing can be eliminated entirely if the filter has what is known as a minimum-phase characteristic, which is possible to achieve fairly economically given the continually falling cost of digital logic circuitry. The damped ringing takes place at the Nyquist frequency, which in a common CD player is outside the normal audible range.
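    To illustrate the linear-phase point, here is a small Python/SciPy sketch (the filter length and cutoff are arbitrary illustrative choices). A sharp linear-phase FIR has a constant group delay across the band, i.e. it behaves as a plain time shift rather than anything that "oscillates wildly":

    from scipy import signal

    fs = 44_100
    taps = signal.firwin(255, 20_000, fs=fs)        # sharp linear-phase low-pass
    w, gd = signal.group_delay((taps, [1.0]), fs=fs)

    in_band = w < 18_000                            # look only at the passband
    print("group delay (samples) in passband, min/max:",
          round(gd[in_band].min(), 3), round(gd[in_band].max(), 3))
    # Both come out at (255 - 1) / 2 = 127 samples: a constant delay, no oscillation.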

    The number of bits, to my ear, does make a difference, especially on loud congested music, for example a symphony playing many parts loudly and at the same time (Ives' Symphony No. 3). 16 bits gets congested. It is hard to have the instruments maintain their unique place in the soundstage. In principle, by doing some sleight of hand (interpolating randomly between bit levels) CDs claim to be able to get 19 bits, which might be sufficient. And I have heard some very good sounding CDs. But not that many. Remember that with 16 bits, there are only 65,536 levels (half of them negative), so there is an inherent 1/325 % distortion due to imperfect representation of the sample height.
    I care more about 24 bits than 96 kHz, since I am old and am lucky to hear above 15 kHz.
    The above comments that I am trying to correct here are false for the reasons given. I can debunk the second paragraph as well, but I think the proven falsehoods in the first paragraph make my point, which is that these kinds of comments constitute a kind of religious faith that is not uncommon among poorly informed audiophiles. Knowledgeable audiophiles simply know better.

  7. #27
    Senior Member Julf's Avatar
    Join Date
    Dec 2010
    Posts
    2,451
    Quote Originally Posted by jarome View Post
    16 bits gets congested.
    I still haven't come across a commercial recording that uses more than 16 bits of dynamic range.

    In principle, by doing some sleight of hand (interpolating randomly between bit levels) CDs claim to be able to get 19 bits, which might be sufficient.
    Tell us more - how does that work? It sounds like you are talking about dither, which applies to any digital signal, not just CD. The only sleight of hand a CD performs is error correction when there are read errors.

    Remember that with 16 bits, there are only 65,536 levels (half of them negative), so there is an inherent 1/325 % distortion due to imperfect representation of the sample height.
    Not distortion, but quantization noise. And the "1/325 %" (0.00003) is also misleading, because you also have to look at the frequency distribution of the error.
    "To try to judge the real from the false will always be hard. In this fast-growing art of 'high fidelity' the quackery will bear a solid gilt edge that will fool many people" - Paul W Klipsch, 1953

  8. #28
    Senior Member
    Join Date
    May 2015
    Location
    Grosse Pointe Woods, MI
    Posts
    769
    Quote Originally Posted by jarome View Post
    The number of bits, to my ear, does make a difference, especially on loud congested music, for example a symphony playing many parts loudly and at the same time (Ives' Symphony No. 3). 16 bits gets congested.
    This begs a question, raised by the use of yet more unscientific, placebophile poetic language:

    What does "congested" mean? To me, "congested" means intermodulation distortion, which really means nonlinear distortion.

    Friendly advice: if you are going to try to school knowledgeable people about audio, first learn the appropriate terms of art and what they mean.

    If you want to listen to audible amounts of IM, let me introduce you to two legacy formats, LP and analog tape, that are rife with it.

    In contrast, properly dithered digital is free of IM or, more properly, of any kind of nonlinear distortion. No, not just inaudible IM. None at all.

    Unless you intentionally futz with it, the digital domain is utterly free of any kind of frequency response, phase, amplitude or modulation distortion. Any such distortion one might find in the digital domain actually comes from the analog domain.

    For example, if you generate any kind of frequency response, phase response, THD or IM test signal in the digital domain and analyze it there, there are no added artifacts or spurious responses. The frequency response you measure is not +/- 0.1 dB. It is +/- zero dB or as close to that as your numerical calculations allow.

    It is hard to have the instruments maintain their unique place in the soundstage.
    That sounds to me like problems with channel balance or separation. Again, in the digital domain those are perfect. If you find any kind of errors there, they probably come from the signal's tarry in the analog domain.

    In principle, by doing some sleight of hand (interpolating randomly between bit levels) CDs claim to be able to get 19 bits, which might be sufficient.
    Wrong. The thing you seem to be alluding to is not sleight of hand. It is how proper digital works, and has worked since digital audio was developed by Bell Labs starting in the 1930s. You seem to be referring to what knowledgeable people call "shaped dither". With perceptually shaped dither, 16 bits can deliver the subjective equivalent of 120 dB dynamic range, which is, by the way, equal to SACD.

    And I have heard some very good sounding CDs. But not that many.
    If you don't like what you hear on CDs, blame the people who might actually have some responsibility, like the artists and production staff. They obviously peed in the soup, because CDs are sonically transparent. That means that if you do a fair job of making them, they are not possible to audibly differentiate from their analog sources. These days everything starts out and finishes up analog, right?


    Remember that with 16 bits, there are only 65,536 levels (half of them negative), so there is an inherent 1/325 % distortion due to imperfect representation of the sample height.
    That is false, and by claiming this we have a tacit admission of (1) No formal education related to digital audio and (2) No practical hands-on experience with digital audio at any reasonable technical level.

    In fact, if you properly (IOW, just follow the cookbook and don't pee in the soup) record a pure sine wave with 16 bits and try to measure its distortion artifacts, it has none.
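    For anyone who wants to try that measurement, here is a rough Python sketch (the tone frequency, level, and length are arbitrary choices): quantize a sine to 16 bits with TPDF dither and look at the error spectrum. The error is a flat noise floor with no spurs at the harmonics.

    import numpy as np

    fs = 48_000
    f0 = 997                                     # test tone frequency (arbitrary)
    n = np.arange(fs * 4)                        # 4 seconds of samples
    x = 0.5 * np.sin(2 * np.pi * f0 * n / fs)    # -6 dBFS sine

    lsb = 2 / (2 ** 16)                          # 16-bit step size for a +/-1.0 range
    dither = (np.random.uniform(-0.5, 0.5, x.size) +
              np.random.uniform(-0.5, 0.5, x.size)) * lsb       # TPDF dither
    q = np.round((x + dither) / lsb) * lsb       # quantize to 16-bit levels

    err = q - x
    spectrum = np.abs(np.fft.rfft(err * np.hanning(err.size)))
    freqs = np.fft.rfftfreq(err.size, 1 / fs)
    harmonic_bins = [np.argmin(np.abs(freqs - k * f0)) for k in (2, 3, 4, 5)]
    # Values hover around 0 dB, i.e. the harmonics are indistinguishable from the noise floor.
    print("error at harmonics relative to the median noise bin (dB):",
          [round(20 * np.log10(spectrum[b] / np.median(spectrum)), 1) for b in harmonic_bins])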

    I care more about 24 bits than 96 kHz, since I am old and am lucky to hear above 15 kHz.
    Yet another audiophile myth. The reason why cutting off all music above 20 kHz causes no audible effects is masking, not any inability to hear isolated test tones > 15 kHz.

  9. #29
    Senior Member
    Join Date
    Mar 2007
    Location
    UK
    Posts
    1,293
    Numerical calculation errors are easily demonstrated to be measurable in the real world, e.g. in DAC on-board digital filters and SRC software. It's a big myth that these calculations are generally perfect in the real world (even though they could be in particular implementations).

    For reference:-
    (1) Benchmark are one of the "good guys" and yet: https://benchmarkmedia.com/blogs/app...e-measurements "A careful examination of the two curves will also show that the DAC1 has slightly more ripple in the frequency response. However this ripple is insignificant from an audibility standpoint and it is hard to see even on this expanded scale. This difference is due to the improved digital filters in the DAC2." Note the DAC2 still exhibits this, albeit less.
    (2) Comparison of various popular SRC software, some so poor they even have errors measurable in 16 bits!: http://src.infinitewave.ca/
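    (As a rough illustration of how such SRC errors can be measured, here is a small Python/SciPy sketch comparing a proper polyphase resampler with naive linear interpolation; the test tone and the exact numbers are arbitrary and entirely implementation-dependent.)

    import numpy as np
    from scipy import signal

    fs_in, fs_out, f0 = 44_100, 48_000, 10_000
    t_in = np.arange(fs_in) / fs_in              # 1 second of input samples
    t_out = np.arange(fs_out) / fs_out           # 1 second at the target rate
    x = np.sin(2 * np.pi * f0 * t_in)
    ref = np.sin(2 * np.pi * f0 * t_out)         # ideal result of the conversion

    good = signal.resample_poly(x, fs_out, fs_in)   # windowed-sinc polyphase SRC
    naive = np.interp(t_out, t_in, x)               # linear interpolation

    def err_db(y):
        n = min(len(y), len(ref))
        sl = slice(1000, n - 1000)               # ignore edge transients
        e = y[sl] - ref[sl]
        return 10 * np.log10(np.mean(e ** 2) / np.mean(ref[sl] ** 2))

    print("polyphase SRC error: ", round(err_db(good), 1), "dB relative to the signal")
    print("linear-interp error: ", round(err_db(naive), 1), "dB relative to the signal")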

    Also there are historical shenanigans with cheap and/or poor ADCs which have caused measurable issues in a great many recordings.

    So I think the argument descends to audibility. Realistically, it can't be won with digital perfection.

    Please understand my point: I've no evidence that the above issues are audible.
    Last edited by darrenyeats; 2017-06-26 at 05:11.
    Check it, add to it! http://www.dr.loudness-war.info/

    SB Touch

  10. #30
    Senior Member
    Join Date
    May 2015
    Location
    Grosse Pointe Woods, MI
    Posts
    769
    Quote Originally Posted by darrenyeats View Post

    Also there are historical shenanigans with cheap and/or poor ADCs which have caused measurable issues in a great many recordings.
    Attempts to measure this or validate it with DBTs have come up empty.

    In general, the legacy ADCs were both very good and very expensive. For example, in the early 70s I worked in grad school with a digital interface that was used to connect an EAI 680 hybrid computer to an IBM 1130 digital computer. It was typical of the best precision conversion hardware of the day. At its core was a 320,000-sample/second, 16-bit ADC/DAC pair, with an analog multiplexer that allowed dividing it among up to 8 concurrent channels. It was based on a resistive ladder and had +/- 1 LSB precision. It cost a half-million dollars. It was a catalog, off-the-shelf item: if you could proffer the purchase order credibly, in due time they delivered.

    At about the same time, I was part of this DBT of a piece of digital gear: http://djcarlst.provide.net/abx_digi.htm. The critical evaluation by over 20 experienced audiophiles and some of the best recording engineers in the city was that there was no audible difference. My recollection is that additional listening tests involving non-musical signals that generally taxed the capabilities of analog tape also passed through it blamelessly. It ran in the low 5-figure range.

    Back in the early days of digital audio (pre-CD), there were some questionable ADCs, with perhaps the most commonly accused being the conversion subsystem of the 3M digital recorder. Read about it here: http://www.mixonline.com/news/news-p...-system/377974 . Note that in its day, with all its faults, it was judged by leading professionals to be superior to 15 or 30 ips half-track analog recording on the best tape stock, which was the previous high standard for quality work. A recording that was mastered on it, "Bop Till You Drop" by Ry Cooder, is often cited as an objectionable recording, which analog bigots blame on the 3M mastering. I have an early CD of this album, and it stands head and shoulders above most recordings of the day. Many consider it to be an exemplary work. In the face of a controversy like this, resolving it in favor of analog bigotry seems unwise.

    So I think the argument descends to audibility. Realistically, it can't be won with digital perfection.
    Mentioning digital perfection seems like an excluded middle argument.

    Perfection is always an impossible goal in the real world, but between the realistic constraints on recording acoustic events and the limitations of the human ear, sonically blameless digital performance has been possible for almost half a century, and is currently available for walking-around money.

    For example, my M-Audio MicroTrack is a stand-alone recording system with balanced mic inputs and phantom power. It is now about 8 years old and, in its lossless modes, sonically blameless. I think its performance can be duplicated today with modern hardware for less than $100. In its day it sold for not much more than twice that.

    I believe that today sonically transparent DAC chips run about $1, and a USB DAC with sonically blameless performance for line-level outputs can be had for under $10.

    Many of the esoteric formats that people buy overpriced hardware to play either have a negligible catalog of recordings, or the works available in them can also be had by simply buying the same work from the same source in a mainstream format.

    IOW, people seem to be inventing technically unwarranted recording formats to sell overpriced DACs.
