Well... I just installed SqueezePlayer on my tablet and it's playing well up to 24/44.1.
Above that it stops playing.
Is there something I need to set up in LMS so that songs with resolutions up to 24/192 can be played, or will this not be possible?
It's not that I will hear a difference between 16/44.1 and higher rates, but it would be more convenient if they just played. Honestly, I don't like the idea of downsampling them myself (lazy me).
-
Originally posted by pippin View Post
No, what he's saying is that if you just drop every second sample, that would be the same process as sampling at the lower sampling rate all along.
If there is really any improvement in a particular recording from going from a 48 kHz sample rate to a 96 kHz sample rate, that improvement will be lost in the process, but you still get something slightly superior to CD quality, which should be fine for applications where you don't have a 96 kHz DAC anyway. You might be able to get a very slight improvement over that by interpolation, so you would "save" some of the benefits of the higher-sample-rate recording, but if you don't have the CPU required, the result is actually broken playback, which is the worst SQ you'll ever have. Working always beats "theoretically better but not working".
Those upsampled tracks have gone through so many potentially distortion- and aliasing-adding conversions that it probably doesn't matter what you do to them; they will be worse than the original 16/44.1 recording anyway.
One obvious choice we all have is to just down-convert everything offline. I'm convinced that I can't hear the difference even on state-of-the-art recordings down-converted to 16/44.1. If you do 24/44.1 and 24/48 versions of everything, you are probably good to go and can enjoy life and music.
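For anyone who wants the offline route, here is a rough sketch of a batch down-convert using the sox command line (my own example, not something built into LMS; the folder names and the 16/44.1 target are just illustrative, and it assumes sox is installed and on the PATH):

# Minimal batch down-conversion sketch using the sox CLI.
import subprocess
from pathlib import Path

SRC = Path("hires")       # hypothetical folder holding the hi-res FLACs
DST = Path("converted")   # hypothetical folder for the 16/44.1 copies
DST.mkdir(exist_ok=True)

for flac in sorted(SRC.glob("*.flac")):
    out = DST / flac.name
    # -b 16 sets the output bit depth; "rate -v 44100" is sox's very-high-quality
    # resampler; "dither" adds dither for the 24 -> 16 bit reduction.
    subprocess.run(
        ["sox", str(flac), "-b", "16", str(out), "rate", "-v", "44100", "dither"],
        check=True,
    )

Keep the originals somewhere and point the low-powered server at the converted copies.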
-
No, what he's saying is that if you just drop every second sample, that would be the same process as sampling at the lower sampling rate all along.
If there is really any improvement in a particular recording from going from a 48 kHz sample rate to a 96 kHz sample rate, that improvement will be lost in the process, but you still get something slightly superior to CD quality, which should be fine for applications where you don't have a 96 kHz DAC anyway. You might be able to get a very slight improvement over that by interpolation, so you would "save" some of the benefits of the higher-sample-rate recording, but if you don't have the CPU required, the result is actually broken playback, which is the worst SQ you'll ever have. Working always beats "theoretically better but not working".
Those upsampled tracks have gone through so many potentially distortion- and aliasing-adding conversions that it probably doesn't matter what you do to them; they will be worse than the original 16/44.1 recording anyway.
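To make the first point concrete, a tiny numpy check (my own illustration, synthetic signal and rates): for content that is already band-limited below 24 kHz, keeping every second sample of a 96 kHz capture gives exactly the numbers that sampling at 48 kHz would have given in the first place.

# Dropping every second sample of a 96 kHz capture of a <24 kHz signal
# equals sampling that same signal at 48 kHz directly.
import numpy as np

fs_hi, fs_lo = 96_000, 48_000
n_lo = 4800                         # 0.1 s at 48 kHz
t_hi = np.arange(n_lo * 2) / fs_hi
t_lo = np.arange(n_lo) / fs_lo

def band_limited(t):
    # a few tones, all well below the 24 kHz Nyquist limit of the target rate
    return sum(np.sin(2 * np.pi * f * t) for f in (440.0, 5_000.0, 18_000.0))

dropped = band_limited(t_hi)[::2]   # "puncture" every second sample
direct  = band_limited(t_lo)        # sample at 48 kHz directly

print(np.max(np.abs(dropped - direct)))   # prints 0.0: the sample sets are identical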
-
Originally posted by philippe_44 View Post
You're right, but here is what I meant more precisely.
Sampling a signal s with a period of T has the effect of duplicating its spectrum an infinite number of times with an n/T spacing. Obviously, if the spectrum is wider than 1/T (max complex frequency 1/(2T)), then overlap (aliasing) occurs and original info is lost.
So, assuming that the spectrum is below that 1/T limit, sampling with a period of T/2 has the effect of duplicating the spectrum every 2n/T, and sampling with a period of T/4 spaces it at 4n/T. So, mathematically speaking, a signal with a spectrum below 1/T and sampled at T/2 or T/4 can simply be punctured at 1/2 or 1/4 without any loss of information, assuming that you do a perfect cardinal sine (sinc) filtering when you switch back to the analogue domain.
What I meant by "no filtering needed" is that if you want to downsample by a non-integer factor of the initial rate, then you have to re-interpolate (filtering...) in the digital domain at the new rate. When it is an integer factor, there is no need for that. And if you downsample while respecting the spectrum width, there is no need to low-pass filter.
In "real life", the analogue conversion filtering is not a perfect cardinal sine, and the benefit of oversampling is that by increasing the space between spectrum "replicas", you ease the analogue filtering requirements. But, without entering into an audiophile debate, and assuming a decent DAC, I was suggesting that in case the host CPU is a problem, moving from 96k to 48k without filtering should be done by 1/2 puncturing, which requires essentially no CPU; re-interpolating and low-pass filtering are not needed.
(PS: I'm not trying to be pedantic, sorry if it looks like that)
So there is not much that could alias down? Or for other reasons there is nothing much above the limit.
There might be real-world issues anyway, but that's beyond my detailed understanding. I think the current use of SoX is best practice.
But the LMS architecture may need a compromise solution for low-CPU servers that just does as you suggest with integer multiples of the sample rate, giving end results that are playable but may be compromised. Or is there a less CPU-demanding resampler out there?
Or is it as simple as giving SoX the right commands so that it runs a less demanding procedure?
But how many low-CPU servers are there today? Wouldn't Moore's law fix this faster than the community finds a solution?
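On that last question: SoX's rate effect does expose cheaper quality settings (-q, -l, -m, -h, -v), so a less demanding procedure exists inside the same tool; whether and how LMS can be made to pass those flags through is a separate question I can't answer. A rough timing sketch, assuming sox is on the PATH and a local 24/96 file named test96.flac (hypothetical name):

# Compare the CPU cost of sox's cheapest and best resamplers on one file.
import subprocess
import time

def time_sox(quality_flag, out_name):
    start = time.perf_counter()
    subprocess.run(
        ["sox", "test96.flac", "-b", "16", out_name,
         "rate", quality_flag, "48000"],
        check=True,
    )
    return time.perf_counter() - start

print("rate -q (quick):     %.2f s" % time_sox("-q", "out_q.flac"))
print("rate -v (very high): %.2f s" % time_sox("-v", "out_v.flac"))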
-
Originally posted by Mnyb View Post
There is always a need for filtering, for various reasons, but I'm afraid I can't explain it very well.
Simplest case: there is signal content, even if it's just noise, above the Nyquist limit for 44.1 or 48 kHz (22.05 kHz, 24 kHz), or in practice a little bit lower; this will alias back into the signal if you just drop samples. Therefore the tracks are filtered before downsampling. There must be no signal content at all above the Nyquist limit of the target sample rate.
Sampling a signal s with a period of T has the effect of duplicating its spectrum an infinite number of times with an n/T spacing. Obviously, if the spectrum is wider than 1/T (max complex frequency 1/(2T)), then overlap (aliasing) occurs and original info is lost.
So, assuming that the spectrum is below that 1/T limit, sampling with a period of T/2 has the effect of duplicating the spectrum every 2n/T, and sampling with a period of T/4 spaces it at 4n/T. So, mathematically speaking, a signal with a spectrum below 1/T and sampled at T/2 or T/4 can simply be punctured at 1/2 or 1/4 without any loss of information, assuming that you do a perfect cardinal sine (sinc) filtering when you switch back to the analogue domain.
What I meant by "no filtering needed" is that if you want to downsample by a non-integer factor of the initial rate, then you have to re-interpolate (filtering...) in the digital domain at the new rate. When it is an integer factor, there is no need for that. And if you downsample while respecting the spectrum width, there is no need to low-pass filter.
In "real life", the analogue conversion filtering is not a perfect cardinal sine, and the benefit of oversampling is that by increasing the space between spectrum "replicas", you ease the analogue filtering requirements. But, without entering into an audiophile debate, and assuming a decent DAC, I was suggesting that in case the host CPU is a problem, moving from 96k to 48k without filtering should be done by 1/2 puncturing, which requires essentially no CPU; re-interpolating and low-pass filtering are not needed.
(PS: I'm not trying to be pedantic, sorry if it looks like that)
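For reference, the spectrum-replication statement above written out as the standard sampling-theory identity (notation matching the post, with T the sampling period and X(f) the spectrum of the continuous signal):

\[
X_s(f) = \frac{1}{T} \sum_{n=-\infty}^{\infty} X\!\left(f - \frac{n}{T}\right)
\]

The replicas do not overlap (no aliasing) as long as X(f) = 0 for |f| >= 1/(2T), i.e. the signal is band-limited below half the sampling rate.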
-
There is a low-end alternative to LAME, shine. The results are worse, but it actually works on the SheevaPlug and other machines without floating point.
I have not yet seen a cruder alternative to SoX for low-end servers, though?
Depending on CPU architecture and FPU, I've had a server where LAME used much more CPU than SoX.
But enough off-topic from me; let's see some backyard systems.
-
Originally posted by philippe_44 View Post
Agreed, but in the special case of 192/96/48 there is no need for filtering, so I was wondering if that optimization was there and could help by setting downsampling to 48 instead of 44.1.
There is always a need for filtering, for various reasons, but I'm afraid I can't explain it very well.
Simplest case: there is signal content, even if it's just noise, above the Nyquist limit for 44.1 or 48 kHz (22.05 kHz, 24 kHz), or in practice a little bit lower; this will alias back into the signal if you just drop samples. Therefore the tracks are filtered before downsampling. There must be no signal content at all above the Nyquist limit of the target sample rate.
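A minimal numpy illustration of that folding (my own synthetic example, rates matching the 96 kHz to 48 kHz case discussed here):

# A 30 kHz tone in a 96 kHz stream folds to 18 kHz if you just drop samples.
import numpy as np

fs_hi, fs_lo = 96_000, 48_000
t = np.arange(9600) / fs_hi            # 0.1 s at 96 kHz
x = np.sin(2 * np.pi * 30_000 * t)     # content at 30 kHz, above the 24 kHz target Nyquist

dropped = x[::2]                       # decimate to 48 kHz with no low-pass

spectrum = np.abs(np.fft.rfft(dropped))
freqs = np.fft.rfftfreq(dropped.size, d=1 / fs_lo)
print(freqs[np.argmax(spectrum)])      # 18000.0: the tone has folded down to
                                       # 48 kHz - 30 kHz = 18 kHz, i.e. into the audio band

A low-pass below 24 kHz before (or as part of) the decimation, which is what a proper resampler such as SoX applies, removes that folded tone.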
-
Originally posted by Mnyb View Post
Yes, pippin is right, LMS does state-of-the-art downsampling when a player needs it; it's inaudible to the listener, a very good choice.
But if your server can't take it, that's a problem.
Very, very old server versions (>10 years ago), that were around at the time of the SliMP3 and SB1, might just have dumped half of the samples, but at that time Squeezeboxes were fun gadgets, before anyone at Slim Devices realised they had hi-fi potential; that came with the SB2.
Agreed, but in the special case of 192/96/48 there is no need for filtering, so I was wondering if that optimization was there and could help by setting downsampling to 48 instead of 44.1.
Last edited by philippe_44; 2015-08-14, 05:42.
-
Originally posted by pippin View Post
LMS uses SoX, which does rather complex interpolation and actually eats quite a bit of CPU in the process. Even more than MP3 encoding and dramatically more than FLAC encoding.
Keeps surprising me, too.
Originally posted by philippe_44 View Post
Just as a curiosity, wouldn't LMS downsampling from 96 to 48 be a simple puncturing of every other sample, so a very limited CPU requirement? (No spectrum aliasing expected.)
Yes, pippin is right, LMS does state-of-the-art downsampling when a player needs it; it's inaudible to the listener, a very good choice.
But if your server can't take it, that's a problem.
Very, very old server versions (>10 years ago), that were around at the time of the SliMP3 and SB1, might just have dumped half of the samples, but at that time Squeezeboxes were fun gadgets, before anyone at Slim Devices realised they had hi-fi potential; that came with the SB2.
-
LMS uses SoX, which does rather complex interpolation and actually eats quite a bit of CPU in the process. Even more than MP3 encoding and dramatically more than FLAC encoding.
Keeps surprising me, too.
-
Originally posted by marflao View Post
So far I have connected my Nexus 7 via Bluetooth to a Nude Audio "Super M" while playing songs from Google Music.
But it doesn't work without hiccups... lots of buffering.
So this LMS => SqueezePlayer combo might be a better option.
I'm just not 100% sure if my NAS has enough power for the downsampling?!
-
Sure, it's in the context menu for the current track under "more info"
-
Originally posted by pippin View Post
Hm? iPeng does play 24/96 natively.
Did you maybe enable bitrate limiting?
-
Hm? iPeng does play 24/96 natively.
Did you maybe enable bitrate limiting?
-
Originally posted by marflao View Post
So far I have connected my Nexus 7 via Bluetooth to a Nude Audio "Super M" while playing songs from Google Music.
But it doesn't work without hiccups... lots of buffering.
So this LMS => SqueezePlayer combo might be a better option.
I'm just not 100% sure if my NAS has enough power for the downsampling?!