
View Full Version : 6.01 eval and performance



Free Lunch
2005-04-01, 12:38
Hi,

I just tried the April 1 nightly of 6.01. Nice to see that there is
now the concept of a maintenance branch!

This is my third attempt at evaluating 6.X.

1. I just tried to unshuffle a playlist with 5147 tracks. The server
went offline for 57 seconds, the music stopped, slimp3 display
blanked, etc. See the vmstat output below.

2. Navigating the music folder is still rather slow.

3. Navigating 'Artists' is also still rather slow. Especially when
you drop into an artist directory and then pop back up. Why the delay
of a few seconds? (there are approx 2100 artists listed)

4. My first impression of 6.0 has not changed - it is not ready to be
on the front page for people to download as a 'release'. Basic
functionality is still broken. I cannot imagine demoing it to someone
who sells audio gear for a living or does home installations.
Hopefully it will eventually get there now that the maintenance
concept exists (we've been waiting at least a couple years for
stability and performance).

Back to 5.4.1..


Regards,

FL

procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
0 0 0 180 3108 47956 350424 0 0 0 32 1184 178 11 0 89
0 0 0 180 3108 47956 350424 0 0 0 0 1189 190 12 0 88
0 0 0 180 3112 47956 350424 0 0 0 0 1178 170 9 0 91
0 0 0 180 2984 47956 350552 0 0 128 0 1176 173 10 0 90
1 0 0 180 2920 48028 350608 0 0 128 0 1193 137 62 1 37
1 0 0 180 2808 48208 350688 0 0 260 0 1176 133 97 3 0
1 0 0 180 3792 48148 349728 0 0 132 0 1145 76 97 3 0
0 1 0 180 3600 48244 349840 0 0 208 0 1163 96 97 3 0
0 1 0 180 3408 48304 349900 0 0 120 0 1136 63 99 1 0
0 1 0 180 3216 48380 349944 0 0 120 0 1141 70 99 1 0
1 0 0 180 2856 48540 349976 0 0 192 0 1164 100 98 2 0
1 0 0 180 3664 48496 349024 0 0 156 0 1145 87 99 1 0
1 0 0 180 3344 48668 349096 0 0 244 0 1179 126 98 2 0
0 1 0 180 3088 48844 349120 0 0 200 0 1156 105 97 3 0
1 0 0 180 2788 48964 349172 0 0 172 0 1201 96 99 1 0
1 0 0 180 3664 48932 348192 0 0 140 0 1144 78 98 2 0
1 0 0 180 3472 49032 348220 0 0 128 24 1143 69 98 2 0
1 0 0 180 3152 49156 348260 0 0 164 0 1156 91 99 1 0
1 0 0 180 2884 49280 348304 0 0 168 0 1152 87 98 2 0
1 0 0 180 3792 49204 347352 0 0 124 0 1149 73 98 2 0
1 0 0 180 3600 49312 347420 0 0 176 0 1152 92 99 1 0
0 1 0 180 3280 49404 347480 0 0 152 0 1143 84 98 2 0
1 0 0 180 3024 49516 347512 0 0 144 0 1146 74 98 2 0
1 0 0 180 2760 49608 347568 0 0 148 0 1152 80 99 1 0
0 1 0 180 3472 49608 346740 0 0 196 0 1164 105 99 1 0
1 0 0 180 3152 49720 346792 0 0 164 0 1152 88 98 2 0
1 0 0 180 3088 49772 346812 0 0 72 0 1133 44 99 1 0
1 0 0 180 3024 49772 346812 0 0 0 0 1106 6 99 1 0
1 0 0 180 3024 49772 346812 0 0 0 0 1106 3 99 1 0
1 0 0 180 3024 49772 346812 0 0 0 0 1114 13 100 0 0
1 0 0 180 3024 49772 346816 0 0 4 0 1115 5 99 1 0
1 0 0 180 3024 49772 346816 0 0 0 0 1110 11 100 0 0
1 0 0 180 3024 49772 346816 0 0 0 0 1107 3 99 1 0
1 0 0 180 3024 49772 346816 0 0 0 0 1106 5 99 1 0
1 0 0 180 3024 49772 346816 0 0 0 0 1113 7 100 0 0
1 0 0 180 3024 49772 346816 0 0 0 0 1106 5 99 1 0
1 0 0 180 3024 49772 346816 0 0 0 0 1106 7 100 0 0
1 0 0 180 3024 49772 346816 0 0 0 0 1106 5 99 1 0
1 0 0 180 3024 49772 346816 0 0 0 0 1108 5 100 0 0
1 0 0 180 3024 49772 346816 0 0 0 0 1114 11 99 1 0
1 0 0 180 3024 49772 346820 0 0 4 0 1128 5 100 0 0
1 0 0 180 3024 49772 346820 0 0 0 0 1149 51 99 1 0
1 0 0 180 3024 49772 346820 0 0 0 0 1111 10 100 0 0
1 0 0 180 3024 49772 346820 0 0 0 0 1107 5 99 1 0
1 0 0 180 3024 49772 346820 0 0 0 0 1112 7 99 1 0
1 0 0 180 3024 49772 346820 0 0 0 0 1106 5 100 0 0
1 0 0 180 3024 49772 346820 0 0 0 0 1106 7 99 1 0
1 0 0 180 3024 49772 346820 0 0 0 0 1106 9 99 1 0
1 0 0 180 3028 49772 346820 0 0 0 0 1106 5 99 1 0
1 0 0 180 3876 49780 346816 0 0 8 0 1119 17 99 1 0
1 0 0 180 3876 49780 346816 0 0 0 0 1106 9 100 0 0
1 0 0 180 3876 49780 346816 0 0 0 0 1106 3 99 1 0
1 0 0 180 3876 49780 346816 0 0 0 0 1107 5 100 0 0
2 0 0 180 3884 49780 346816 0 0 0 0 1115 9 99 1 0
1 0 0 180 3884 49792 346816 0 0 0 80 1112 15 100 0 0
1 0 0 180 3820 49792 346932 0 0 76 0 1133 47 99 1 0
1 0 0 180 3756 49796 347040 0 0 28 0 1113 19 87 13 0
0 0 0 180 3692 49812 346948 0 0 28 844 1366 144 89 7 4
0 0 0 180 3564 49812 347076 0 0 128 0 1181 176 9 0 91
0 0 0 180 3564 49812 347076 0 0 0 0 1175 160 10 0 90

--

Jack Coates
2005-04-01, 13:45
Free Lunch wrote:
> Hi,
....
> procs memory swap io system cpu
> r b w swpd free buff cache si so bi bo in cs us sy id
> 0 0 0 180 3108 47956 350424 0 0 0 32 1184 178 11 0 89
> 0 0 0 180 3108 47956 350424 0 0 0 0 1189 190 12 0 88
> 0 0 0 180 3112 47956 350424 0 0 0 0 1178 170 9 0 91
> 0 0 0 180 2984 47956 350552 0 0 128 0 1176 173 10 0 90
> 1 0 0 180 2920 48028 350608 0 0 128 0 1193 137 62 1 37
> 1 0 0 180 2808 48208 350688 0 0 260 0 1176 133 97 3 0
> 1 0 0 180 3792 48148 349728 0 0 132 0 1145 76 97 3 0
> 0 1 0 180 3600 48244 349840 0 0 208 0 1163 96 97 3 0
> 0 1 0 180 3408 48304 349900 0 0 120 0 1136 63 99 1 0
> 0 1 0 180 3216 48380 349944 0 0 120 0 1141 70 99 1 0
> 1 0 0 180 2856 48540 349976 0 0 192 0 1164 100 98 2 0
> 1 0 0 180 3664 48496 349024 0 0 156 0 1145 87 99 1 0
> 1 0 0 180 3344 48668 349096 0 0 244 0 1179 126 98 2 0
> 0 1 0 180 3088 48844 349120 0 0 200 0 1156 105 97 3 0
> 1 0 0 180 2788 48964 349172 0 0 172 0 1201 96 99 1 0
> 1 0 0 180 3664 48932 348192 0 0 140 0 1144 78 98 2 0
> 1 0 0 180 3472 49032 348220 0 0 128 24 1143 69 98 2 0
> 1 0 0 180 3152 49156 348260 0 0 164 0 1156 91 99 1 0
> 1 0 0 180 2884 49280 348304 0 0 168 0 1152 87 98 2 0
> 1 0 0 180 3792 49204 347352 0 0 124 0 1149 73 98 2 0
> 1 0 0 180 3600 49312 347420 0 0 176 0 1152 92 99 1 0
> 0 1 0 180 3280 49404 347480 0 0 152 0 1143 84 98 2 0
> 1 0 0 180 3024 49516 347512 0 0 144 0 1146 74 98 2 0
> 1 0 0 180 2760 49608 347568 0 0 148 0 1152 80 99 1 0
> 0 1 0 180 3472 49608 346740 0 0 196 0 1164 105 99 1 0
> 1 0 0 180 3152 49720 346792 0 0 164 0 1152 88 98 2 0
> 1 0 0 180 3088 49772 346812 0 0 72 0 1133 44 99 1 0
....

say, sure does take you a long time to get information out of your
disk... hdparm -tT /dev/[your-music-disk] please?

--
Jack at Monkeynoodle dot Org: It's a Scientific Venture...
Riding the Emergency Third Rail Power Trip since 1996!

Jason Snell
2005-04-02, 10:06
>1. I just tried to unshuffle a playlist with 5147 tracks. The server
>went offline for 57 seconds, the music stopped, slimp3 display
>blanked, etc. See the vmstat output below.

I can confirm this behavior with the 4/2 build (and similar behavior
in 6.0) -- in my case (Mac OS X) the slim server process actually
quit, so I had to start it again via the preference pane.
>
>2. Navigating the music folder is still rather slow.

Agreed -- especially if you're entering a list with a lot of items,
be it artists or tracks. There's a notable delay.

-jason
--
Jason Snell / Editorial Director, Mac Publishing / jsnell (AT) macworld (DOT) com
415-243-3565 / AIM: MW jsnell / www.macworld.com / www.playlistmag.com

Free Lunch
2005-04-02, 12:15
On Apr 1, 2005 3:45 PM, Jack Coates <jack (AT) monkeynoodle (DOT) org> wrote:
> Free Lunch wrote:
> > Hi,
> ...
> say, sure does take you a long time to get information out of your
> disk... hdparm -tT /dev/[your-music-disk] please?

I think you're on the wrong track. I am very picky about disk performance ;-)

First, this is unique to 6.X. Playlist sorting performance is poor in
5.4.1 but not nearly this bad.

Second, the vmstat output shows the CPU being burned in User space, not
system. Waiting on disk would typically show up as Sys or wait time.
The amount of CPU time spent in User space suggests some serious code
churn.
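One quick way to quantify that reading from a captured trace (a sketch, not part of the original post; the field position assumes the 2.4-era vmstat column layout shown above, where `us` is field 14, and the sample rows are lifted from the trace above):

```shell
# Count intervals where the `us` column (field 14 in the 2.4-era vmstat
# layout) is pegged above 90% -- i.e. CPU burned in user space, not disk wait.
awk '$14 ~ /^[0-9]+$/ && $14 > 90 {n++} END {print n, "user-bound intervals"}' <<'EOF'
0 0 0 180 3108 47956 350424 0 0 0 32 1184 178 11 0 89
1 0 0 180 2808 48208 350688 0 0 260 0 1176 133 97 3 0
1 0 0 180 3024 49772 346816 0 0 0 0 1106 5 99 1 0
EOF
```

Piping `vmstat 1` through the same awk filter during a sort would flag the user-bound seconds live.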

Regardless, it takes no time at all to stat(3) every music file in
that list. Of course accessing the disk should be unnecessary since
we're just re-sorting an in-memory playlist - right?

The system can stat every file in the playlist in just over a tenth of
a second. It is worth noting that the time to sort the playlist does
not improve with subsequent attempts (the disk caching makes no
difference). The file metadata is all in cache:

% time ls -lR >/dev/null
0.082u 0.041s 0:00.13 92.3% 0+0k 0+0io 0pf+0w

% ls -lR | wc
8476 62685 580699

/dev/hde1:
Timing buffer-cache reads: 128 MB in 0.43 seconds = 299.11 MB/sec
Timing buffered disk reads: 64 MB in 1.15 seconds = 55.52 MB/sec


Thank you for your response,

FL

Dan Sully
2005-04-02, 12:17
* Free Lunch shaped the electrons to say...

>First, this is unique to 6.X. Playlist sorting performance is poor in
>5.4.1 but not nearly this bad.

Is this "Shuffle by Album", "Shuffle By Song" or both?

-D
--
<dmercer> Because that is what our industry does.
Churns out useless shit. Followed by inferior re-implementations of useless shit.

Jack Coates
2005-04-02, 14:53
Free Lunch wrote:
> On Apr 1, 2005 3:45 PM, Jack Coates <jack (AT) monkeynoodle (DOT) org> wrote:
>
>>Free Lunch wrote:
>>
>>>Hi,
>>
>>...
>>say, sure does take you a long time to get information out of your
>>disk... hdparm -tT /dev/[your-music-disk] please?
>
>
> I think you're on the wrong track. I am very picky about disk performance ;-)
>
> First, this is unique to 6.X. Playlist sorting performance is poor in
> 5.4.1 but not nearly this bad.
>
> Second, the vmstat output shows the CPU being burned in User space, not
> system. Waiting on disk would typically be shown as Sys or wait time.
> The amount of CPU time spent in user space suggests some serious code
> churn.
>

what was the time interval on that vmstat? I wasn't even looking at the
CPU, as it only got hit for three intervals, followed by twenty-four
intervals of disk reading. See why I'm asking about your disk?

> Regardless, it takes no time at all to stat(3) every music file in
> that list. Of course accessing the disk should be unnecessary since
> we're just re-sorting an in-memory playlist - right?
>

I think you're spending too much time on theory and not looking at the
practical reports from your system. I haven't read the code closely
enough to say what it ought to be doing at this point, but your vmstat
clearly shows that you're reading disk. Lots of it, and slowly to boot.

> The system can stat every file in the playlist in just over a tenth of
> a second. It is worth noting that the time to sort the playlist does
> not improve with subsequent attempts (the disk caching makes no
> difference). The file metadata is all in cache:
>
> % time ls -lR >/dev/null
> 0.082u 0.041s 0:00.13 92.3% 0+0k 0+0io 0pf+0w
>
> % ls -lR | wc
> 8476 62685 580699
>
> /dev/hde1:
> Timing buffer-cache reads: 128 MB in 0.43 seconds = 299.11 MB/sec
> Timing buffered disk reads: 64 MB in 1.15 seconds = 55.52 MB/sec
>

[root@felix tftpboot]# hdparm -tT /dev/hdc

/dev/hdc:
Timing buffer-cache reads: 1520 MB in 2.00 seconds = 760.00 MB/sec
Timing buffered disk reads: 134 MB in 3.00 seconds = 44.67 MB/sec

Maybe not picky enough about performance :) Granted that's a nice new
300GB disk, but I get similar numbers from a two-year-old 40GB at
/dev/hda. Okay, how about -v?

[root@felix tftpboot]# hdparm -v /dev/hdc

/dev/hdc:
multcount = 16 (on)
IO_support = 1 (32-bit)
unmaskirq = 1 (on)
using_dma = 1 (on)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 36473/255/63, sectors = 585940320, start = 0

--
Jack at Monkeynoodle dot Org: It's a Scientific Venture...
Riding the Emergency Third Rail Power Trip since 1996!

Moses Leslie
2005-04-02, 16:27
On Sat, 2 Apr 2005, Jack Coates wrote:

> Free Lunch wrote:
> > On Apr 1, 2005 3:45 PM, Jack Coates <jack (AT) monkeynoodle (DOT) org> wrote:
> >
> > First, this is unique to 6.X. Playlist sorting performance is poor in
> > 5.4.1 but not nearly this bad.
> >
> > Second, the vmstat output shows the CPU being burned in User space, not
> > system. Waiting on disk would typically be shown as Sys or wait time.
> > The amount of CPU time spent in user space suggests some serious code
> > churn.

FWIW, I get similarly poor performance with 6.0 on my setup, which I
consider fairly hefty and underworked in general. I did get significantly
improved responsiveness by changing the IO scheduler in 2.6 to cfq (don't
even get me started on the horrible schedulers and vm in linux..)
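For reference, on recent 2.6 kernels the scheduler can be switched per-device at runtime through sysfs (a sketch, not from the original post; `hda` is a placeholder for your music disk, CFQ must be compiled into your kernel, and the echo needs root):

```shell
# List the available schedulers for the device; the active one is bracketed.
cat /sys/block/hda/queue/scheduler
# Switch this device to CFQ; takes effect immediately, no reboot needed.
echo cfq > /sys/block/hda/queue/scheduler
```

On kernels without the sysfs knob, the `elevator=cfq` boot parameter sets it system-wide instead.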

> Maybe not picky enough about performance :) Granted that's a nice new
> 300GB disk, but I get similar numbers from a two-year-old 40GB at
> /dev/hda. Okay, how about -v?

FWIW, I believe that the testing (-tT) in hdparm is no longer considered
applicable, since it's all just read out of cache anyway. I feel the best
way to benchmark disk performance under *nix is to use bonnie (or
bonnie++) with a file size of at least double your RAM, to ensure that
there's as little caching going on as possible. It's not perfect, but
it's pretty accurate.
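A typical invocation along those lines (a sketch, not from the original post; the directory, sizes, and user are placeholders -- `-s` is the test file size in MB, set to roughly double RAM, `-r` tells bonnie++ the machine's RAM size, and `-u` names a non-root user to run as):

```shell
# 512M RAM box, so use a 1024M test file to defeat the page cache.
bonnie++ -d /tmp -s 1024 -r 512 -u nobody
```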

The server I'm seeing poor performance on is an athlon xp 2600+, 512M ram,
8x250G 7200rpm drives, 4 each on two 3ware raid cards that are just doing
jbod, so that it's SCSI to the OS. They're tied together in an 8 disk
software raid 5, since 3ware raid 5 performance is fairly dismal.

The performance for me is significantly worse with 6.x than it was in 5.x.
Shuffle by album is the worst, but shuffle by song is fairly painful too.

Maybe there's some perl library where older versions are significantly
less efficient (but still work)?

I've got a mid sized collection (slimserver says: 1325 albums with 17594
songs by 1498 artists), and above average hardware, and 6.0 feels much
slower than 5.4.1 for general use.

Moses

Jack Coates
2005-04-02, 19:04
Moses Leslie wrote:
> On Sat, 2 Apr 2005, Jack Coates wrote:
>
>
>>Free Lunch wrote:
>>
>>>On Apr 1, 2005 3:45 PM, Jack Coates <jack (AT) monkeynoodle (DOT) org> wrote:
>>>
>>>First, this is unique to 6.X. Playlist sorting performance is poor in
>>>5.4.1 but not nearly this bad.
>>>
>>>Second, the vmstat output shows the CPU being burned in User space, not
>>>system. Waiting on disk would typically be shown as Sys or wait time.
>>>The amount of CPU time spent in user space suggests some serious code
>>>churn.
>
>
> FWIW, I get similarly poor performance with 6.0 on my setup, which I
> consider fairly hefty and underworked in general. I did get significantly
> improved responsiveness by changing the IO scheduler in 2.6 to cfq (don't
> even get me started on the horrible schedulers and vm in linux..)
>

You know, that might be a factor; I've kept my kernel backported to 2.4
for a lot of reasons. I also don't use shuffle much.

>
>>Maybe not picky enough about performance :) Granted that's a nice new
>>300GB disk, but I get similar numbers from a two-year-old 40GB at
>>/dev/hda. Okay, how about -v?
>
>
> FWIW, I believe that the testing (-tT) in hdparm is no longer considered
> applicable, since it's all just read out of cache anyway. I feel the best
> way to benchmark disk performance under *nix is to use bonnie (or
> bonnie++) with a filesize of at least double your ram, to ensure that
> there's as little caching going on as possible. It's not perfect, but
> it's pretty accurate.
>

Sure, that is a more accurate measurement of platter speed, but the
hdparm measurement is not totally inapplicable (as evidenced by
repeatable performance differences). hdparm -tT measures speed to get
stuff from the disk subsystem as opposed to speed from the individual
disk hardware. If you want to benchmark a piece of hardware, bonnie's
your gal; if you want to measure a real world system, you need to look
at the whole thing.

In other words, if we're seeing performance differences with hdparm -tT,
we're going to see performance differences with bonnie too.

> The server I'm seeing poor performance on is an athlon xp 2600+, 512M ram,
> 8x250G 7200rpm drives, 4 each on two 3ware raid cards that are just doing
> jbod, so that it's SCSI to the OS. They're tied together in an 8 disk
> software raid 5, since 3ware raid 5 performance is fairly dismal.
>

what's your vmstat look like during a sort?

> The performance for me is significantly worse with 6.x than it was in 5.x.
> Shuffle by album is the worst, but shuffle by song is fairly painful too.
>
> Maybe there's some perl library where older versions are significantly
> less efficient (but still work)?
>

Possible, but Slimserver doesn't use many external versions; it
generally bundles its own modules.

> I've got a mid sized collection (slimserver says: 1325 albums with 17594
> songs by 1498 artists), and above average hardware, and 6.0 feels much
> slower than 5.4.1 for general use.
>
> Moses
>

I've got just shy of 8K tracks on a mid-range home-brew server which is
fairly busy with other stuff.

--
Jack at Monkeynoodle dot Org: It's a Scientific Venture...
Riding the Emergency Third Rail Power Trip since 1996!

Moses Leslie
2005-04-02, 22:25
On Sat, 2 Apr 2005, Jack Coates wrote:

> > FWIW, I get similarly poor performance with 6.0 on my setup, which I
> > consider fairly hefty and underworked in general. I did get significantly
> > improved responsiveness by changing the IO scheduler in 2.6 to cfq (don't
> > even get me started on the horrible schedulers and vm in linux..)
> >
>
> You know, that might be a factor; I've kept my kernel backported to 2.4
> for a lot of reasons. I also don't use shuffle much.

I had the same-ish issues under 2.4; I went to 2.6 fairly recently
specifically to use the alternative schedulers. 2.2 was when I was last
happy with linux vm stuff :)

> what's your vmstat look like during a sort?

Unfortunately, I'm not sure. The vmstat that comes with debian stable
apparently doesn't like the new /proc structure that comes with 2.6, and
promptly dies after starting it :)

If I have to reboot for some other reason I'll boot into 2.4 and see what
vmstat looks like.

> > Maybe there's some perl library where older versions are significantly
> > less efficient (but still work)?
> >
>
> Possible, but Slimserver doesn't use many external versions; it
> generally bundles its own modules.

The only reason I even thought of this is because of a similar experience
I had with mod_perl and mason trying to install request tracker a long
time ago. If you were using 5.005 + some other older versions of a few
modules, the web page would take *forever* to load. If you used 5.6 (then
pretty new), it was fast. Apparently everyone who developed it just used
5.6 and had never tried with 5.005, because it was almost unusable :)

> I've got just shy of 8K tracks on a mid-range home-brew server which is
> fairly busy with other stuff.

It's literally over a minute of 100% cpu usage (well, 95ish that it
actually gets to use, but 100% of the cpu used) on this 2600+ when shuffle
by album is selected with all the tracks, so I think there has to be
something common in the setups or usage of some people that's absent from
others that causes this.

The startup scan (if I wipe the sql file before starting) is twice as long
with 6.0 as well (2000s vs 1000s); it's almost all CPU time there as well.

On the plus side, 6.0 uses about half the ram (65M currently after being
up for a couple days).

Moses

Jack Coates
2005-04-03, 00:13
Moses Leslie wrote:
> On Sat, 2 Apr 2005, Jack Coates wrote:
> ....
>
> It's literally over a minute of 100% cpu usage (well, 95ish that it
> actually gets to use, but 100% of the cpu used) on this 2600+ when shuffle
> by album is selected with all the tracks, so I think there has to be
> something common in the setups or usage of some people that's absent from
> others that causes this.
>
> ....
>
> Moses

7710 songs.

add all songs to a new playlist (35 seconds or so), then hit shuffle by
album (maybe 70 seconds before the webui refreshed with the new data). I
have high CPU usage, but that's what it's there for... it's hardly
interfering with anything else. The slimp3 display goes dark for about
30 seconds, then it's fine. I wasn't trying to play music, people are
trying to sleep here :)

You'll note that I have a very different disk I/O picture than Free
Lunch does... and I'll also postulate that if you're seeing serious
Slimserver degradation and can't even run vmstat, you could probably
also use a good hard look at your system's overall performance
characteristics.

[jack@felix jack]$ vmstat 1 1000
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 12528 19032 59764 137536 0 0 4 2 6 7 3 1 96 0
0 0 12528 19028 59764 137536 0 0 0 0 133 118 1 0 99 0
0 0 12528 19192 59764 137536 0 0 0 0 129 108 0 0 100 0
0 0 12528 19188 59768 137536 0 0 0 140 123 114 0 0 100 0
0 0 12528 19188 59768 137536 0 0 0 0 118 97 3 0 97 0
0 0 12528 19188 59768 137536 0 0 0 0 113 97 0 0 100 0
0 0 12528 19188 59768 137536 0 0 0 0 121 94 0 0 100 0
1 0 12528 19184 59772 137536 0 0 4 0 136 115 0 0 100 0
0 0 12528 19180 59776 137536 0 0 0 180 156 121 0 0 100 0
0 0 12528 19180 59776 137536 0 0 0 0 141 114 4 0 96 0
0 0 12528 19180 59776 137536 0 0 0 0 129 109 0 0 100 0
0 0 12528 19180 59776 137536 0 0 0 0 142 135 0 0 100 0
2 0 12528 14608 59780 140804 0 0 2832 0 581 226 29 5 66 0
1 0 12528 12188 59788 140368 0 0 0 96 112 110 99 1 0 0
2 0 12528 10268 59788 140368 0 0 0 0 126 110 100 0 0 0
1 0 12528 10168 59788 140368 0 0 0 0 112 92 100 0 0 0
4 0 12528 9860 59812 140652 0 0 64 56 161 140 84 1 15 0
1 0 12528 9496 59812 141016 0 0 0 0 109 95 100 0 0 0
1 0 12528 9108 59844 141372 0 0 24 144 121 118 100 0 0 0
2 0 12528 8872 59844 141608 0 0 28 32 139 106 100 0 0 0
1 0 12528 8652 59848 141760 0 0 156 0 267 96 100 0 0 0
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
3 0 12528 8180 59848 142016 0 0 256 0 405 173 87 4 9 0
0 0 12528 8240 59856 142016 0 0 8 0 234 154 20 0 80 0
0 0 12528 8236 59860 142016 0 0 0 216 147 118 0 0 100 0
2 0 12528 8164 59860 142016 0 0 0 0 131 105 3 0 97 0
0 0 12528 8164 59860 142016 0 0 0 0 118 102 0 0 100 0
0 0 12528 8164 59860 142016 0 0 0 0 120 104 0 0 100 0
0 0 12528 8164 59860 142016 0 0 0 0 129 112 0 0 100 0
1 0 12528 8160 59868 142016 0 0 0 44 140 137 0 0 100 0
0 0 12528 8228 59868 142016 0 0 0 0 160 129 6 0 94 0
0 0 12528 8228 59868 142016 0 0 0 0 130 112 0 0 100 0
2 0 12528 8228 59868 142020 0 0 4 0 112 95 87 1 12 0
2 0 12528 8228 59868 142020 0 0 0 0 108 89 99 1 0 0
1 0 12528 8220 59876 142020 0 0 0 112 114 103 99 1 0 0
2 0 12528 8220 59876 142020 0 0 0 0 108 92 100 0 0 0
1 0 12528 8220 59876 142020 0 0 0 0 107 89 99 1 0 0
1 0 12528 8220 59876 142020 0 0 0 0 108 92 97 3 0 0
1 0 12528 8216 59880 142020 0 0 4 0 117 114 100 0 0 0
1 0 12528 8212 59884 142020 0 0 0 56 111 99 99 1 0 0
1 0 12528 8212 59884 142020 0 0 0 0 110 86 100 0 0 0
1 0 12528 8212 59884 142020 0 0 0 0 109 92 99 1 0 0
3 0 12528 8212 59884 142020 0 0 0 0 108 86 99 1 0 0
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 12528 8212 59884 142020 0 0 0 32 109 103 100 0 0 0
1 0 12528 8212 59884 142020 0 0 0 0 108 87 100 0 0 0
2 0 12528 8212 59884 142020 0 0 0 0 109 92 100 0 0 0
1 0 12528 8212 59884 142020 0 0 0 0 107 84 99 1 0 0
1 0 12528 8212 59884 142020 0 0 0 0 108 94 100 0 0 0
1 0 12528 7956 59892 142036 0 0 8 1024 154 100 99 0 1 0
1 0 12528 7956 59892 142036 0 0 0 0 108 90 99 1 0 0
1 0 12528 7956 59892 142036 0 0 0 0 107 87 99 1 0 0
1 0 12528 7956 59892 142036 0 0 0 0 117 93 99 1 0 0
2 0 12528 7956 59892 142036 0 0 0 0 107 88 99 1 0 0
1 0 12528 7952 59896 142036 0 0 0 496 139 108 99 1 0 0
1 0 12528 7948 59900 142036 0 0 4 0 112 91 100 0 0 0
2 0 12528 7948 59900 142036 0 0 0 0 107 90 100 0 0 0
1 0 12528 7948 59900 142036 0 0 0 0 108 86 100 0 0 0
1 0 12528 7948 59900 142036 0 0 0 0 116 112 100 0 0 0
2 0 12528 7944 59904 142036 0 0 0 36 129 93 100 0 0 0
2 0 12528 7944 59904 142036 0 0 0 0 110 92 99 1 0 0
2 0 12528 7944 59904 142036 0 0 0 0 108 83 100 0 0 0
1 0 12528 7944 59904 142036 0 0 0 0 113 94 99 1 0 0
2 0 12528 7944 59904 142036 0 0 0 0 108 87 100 0 0 0
1 0 12528 7940 59908 142036 0 0 0 108 111 105 100 0 0 0
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 12528 7940 59908 142036 0 0 0 0 107 87 100 0 0 0
2 0 12528 7940 59908 142036 0 0 0 0 108 90 100 0 0 0
1 0 12528 7940 59908 142036 0 0 0 0 109 89 99 1 0 0
1 0 12528 7940 59908 142036 0 0 0 0 121 123 99 1 0 0
1 0 12528 7940 59908 142036 0 0 0 112 119 95 98 2 0 0
1 0 12528 7936 59912 142036 0 0 4 0 109 97 96 4 0 0
2 0 12528 7936 59912 142036 0 0 0 0 109 83 99 1 0 0
1 0 12528 7936 59912 142036 0 0 0 0 125 113 99 1 0 0
2 0 12528 7936 59912 142036 0 0 0 0 108 90 100 0 0 0
2 0 12528 7936 59912 142036 0 0 0 36 111 105 100 0 0 0
1 0 12528 7548 59912 142036 0 0 0 0 118 113 100 0 0 0
3 0 12528 7548 59912 142036 0 0 0 0 109 97 100 0 0 0
5 0 12528 6912 58480 139604 0 0 0 0 109 155 98 2 0 0
3 0 12528 6828 58064 134624 0 0 0 0 108 479 93 7 0 0
1 0 12528 19252 57576 132968 0 0 0 860 190 206 93 7 0 0
2 0 12528 19252 57576 132968 0 0 0 0 108 92 99 1 0 0
1 0 12528 19252 57576 132968 0 0 0 0 108 87 99 1 0 0
1 0 12528 19252 57576 132968 0 0 0 0 117 100 99 1 0 0
3 0 12528 19252 57576 132968 0 0 0 0 112 88 99 1 0 0
1 0 12528 19240 57588 132968 0 0 0 740 189 119 100 0 0 0
1 0 12528 19740 57592 132968 0 0 4 0 110 100 100 0 0 0
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
3 0 12528 19740 57592 132968 0 0 0 0 108 87 100 0 0 0
1 0 12528 19740 57592 132968 0 0 0 0 109 83 100 0 0 0
1 0 12528 19740 57592 132968 0 0 0 0 119 101 100 0 0 0
1 0 12528 19736 57596 132968 0 0 0 80 112 97 100 0 0 0
1 0 12528 19736 57596 132968 0 0 0 0 117 112 99 1 0 0
1 0 12528 19736 57596 132968 0 0 0 0 108 84 100 0 0 0
1 0 12528 19736 57596 132968 0 0 0 0 117 97 100 0 0 0
2 0 12528 19736 57596 132968 0 0 0 0 108 90 100 0 0 0
2 0 12528 19732 57600 132968 0 0 0 100 110 102 100 0 0 0
1 0 12528 19732 57600 132968 0 0 0 0 108 83 100 0 0 0
2 0 12528 19732 57600 132968 0 0 0 0 107 98 100 0 0 0
1 0 12528 19732 57600 132968 0 0 0 0 111 86 99 1 0 0
1 0 12528 19732 57600 132968 0 0 0 0 108 97 100 0 0 0
2 0 12528 19732 57600 132968 0 0 0 168 144 113 99 1 0 0
1 0 12528 19732 57600 132968 0 0 0 0 110 91 100 0 0 0
1 0 12528 19728 57604 132968 0 0 4 0 108 91 100 0 0 0
1 0 12528 19728 57604 132968 0 0 0 0 118 98 100 0 0 0
2 0 12528 19696 57608 132996 0 0 28 296 180 133 91 2 7 0
2 0 12528 19688 57616 132996 0 0 4 136 131 112 99 1 0 0
0 0 12528 19580 57616 133008 0 0 12 0 238 118 72 2 26 0
2 0 12528 19584 57616 133008 0 0 0 0 180 124 35 1 64 0
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 12528 19584 57616 133008 0 0 0 0 161 89 97 3 0 0
0 0 12528 19504 57616 133012 0 0 0 0 164 157 23 0 77 0
0 0 12528 19496 57624 133012 0 0 0 712 239 128 0 1 99 0
0 0 12528 19496 57624 133012 0 0 0 0 128 113 0 0 100 0


--
Jack at Monkeynoodle dot Org: It's a Scientific Venture...
Riding the Emergency Third Rail Power Trip since 1996!

Moses Leslie
2005-04-03, 00:41
On Sat, 2 Apr 2005, Jack Coates wrote:

> 7710 songs.
>
> add all songs to a new playlist (35 seconds or so), then hit shuffle by
> album (maybe 70 seconds before the webui refreshed with the new data). I
> have high CPU usage, but that's what it's there for... it's hardly
> interfering with anything else. The slimp3 display goes dark for about
> 30 seconds, then it's fine. I wasn't trying to play music, people are
> trying to sleep here :)

Ok, well, that's what I'm complaining about :) It completely stops
playing music for a minute or so (2x the tracks, 2x the wait I'm
guessing). It wasn't nearly that bad under 5.4, you could often hit the
shuffle and have no dropouts.

I don't really care about how much CPU time it uses, I'm just noting that
for reference. The problem is that the music stops playing :)

Moses

Craig
2005-04-03, 01:55
Moses Leslie wrote:
> On Sat, 2 Apr 2005, Jack Coates wrote:
>
>> 7710 songs.
>>
>> add all songs to a new playlist (35 seconds or so), then hit shuffle
>> by album (maybe 70 seconds before the webui refreshed with the new
>> data). I have high CPU usage, but that's what it's there for... it's
>> hardly interfering with anything else. The slimp3 display goes dark
>> for about 30 seconds, then it's fine. I wasn't trying to play music,
>> people are trying to sleep here :)
>
> Ok, well, that's what I'm complaining about :) It completely stops
> playing music for a minute or so (2x the tracks, 2x the wait I'm
> guessing). It wasn't nearly that bad under 5.4, you could often hit
> the shuffle and have no dropouts.
>
> I don't really care about how much CPU time it uses, I'm just noting
> that for reference. The problem is that the music stops playing :)
>
> Moses

I think this might be bug #1160 that I reported a while back

Craig

Jack Coates
2005-04-03, 07:32
Craig wrote:
> Moses Leslie wrote:
>
> ....
>
> I think this might be bug #1160 that I reported a while back
>
> Craig

yeah, sounds like it. Kind of odd usage, which explains why so few
people have seen it... why reshuffle your already playing playlist?

--
Jack at Monkeynoodle dot Org: It's a Scientific Venture...
Riding the Emergency Third Rail Power Trip since 1996!

Free Lunch
2005-04-03, 13:00
On Apr 3, 2005 3:13 AM, Jack Coates <jack (AT) monkeynoodle (DOT) org> wrote:
>
> 7710 songs.
>
> add all songs to a new playlist (35 seconds or so), then hit shuffle by
> album (maybe 70 seconds before the webui refreshed with the new data). I
> have high CPU usage, but that's what it's there for... it's hardly
> interfering with anything else. The slimp3 display goes dark for about
> 30 seconds, then it's fine. I wasn't trying to play music, people are
> trying to sleep here :)

Ah-hah! But what about when you *are* trying to play music? ;-)

It seems like you're seeing a similar performance problem, regardless
of the disk i/o pattern specifics.

I run into this one a lot. I frequently listen to shuffled playlists
and then decide I want to continue listening to the artist or release
currently playing. So I simply unshuffle the playlist. But then I
incur the music outages, slimserver limbo, etc. Pushing a button on
the remote shouldn't knock out the display for 30 or 70 seconds (the
"OH - Don't push that button!!" button when doing a demo).

I note that re-sorting the playlist on the system is pretty quick:

% time sort -r __00_04_20_04_0a_b1.m3u > /dev/null
0.008u 0.002s 0:00.00 0.0% 0+0k 0+0io 0pf+0w


Regards,

FL

Dan Sully
2005-04-03, 15:02
* Free Lunch shaped the electrons to say...

>I note that re-sorting the playlist on the system is pretty quick:
>
>% time sort -r __00_04_20_04_0a_b1.m3u > /dev/null
>0.008u 0.002s 0:00.00 0.0% 0+0k 0+0io 0pf+0w

Unfortunately - due to some legacy bits, that's not all the shuffle code needs to do.

What shuffle mode are you going from -> to?

-D
--
<dr.pox> do they call it 'gq' because it makes your text fashionable?

Free Lunch
2005-04-04, 06:53
Hi Dan,

On Apr 3, 2005 6:02 PM, Dan Sully <dan (AT) slimdevices (DOT) com> wrote:
> * Free Lunch shaped the electrons to say...
>
> >I note that re-sorting the playlist on the system is pretty quick:
> >
> >% time sort -r __00_04_20_04_0a_b1.m3u > /dev/null
> >0.008u 0.002s 0:00.00 0.0% 0+0k 0+0io 0pf+0w
>
> Unfortunately - due to some legacy bits, that's not all the shuffle code needs to do.

Understood. It just seems the code in this case is so far away from
what the hardware is capable of. Seems like it would be faster to
/bin/sort the playlist and re-load it. Sure, that would be a kludge,
but it says something when a 'hammer' like that suddenly looks lithe
and elegant.
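
For the record, here is a rough sketch of that kludge using coreutils.
The file names are invented for illustration, and it assumes a simple
one-path-per-line .m3u (no #EXTINF metadata lines, which would have to
travel with their paths):

```shell
# Hypothetical playlist contents, just to make the sketch runnable.
printf '/music/b.mp3\n/music/a.mp3\n/music/c.mp3\n' > playlist.m3u

# "Unshuffle": restore sorted order in a single pass over the file.
sort playlist.m3u > unshuffled.m3u

# Re-shuffle: coreutils shuf randomizes line order, also in one pass.
shuf playlist.m3u > shuffled.m3u

cat unshuffled.m3u   # /music/a.mp3, then b, then c
```

Both passes finish in milliseconds even on thousands of lines; the
slow part in the server is whatever bookkeeping it does per track, not
the reordering itself.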

> What shuffle mode are you going from -> to?

All, because I do this using the remote and it cycles through them.

This is the same method I have my folks using with their squeezebox.
Listen to a big random playlist, and when you hear something you
really like or that suits your mood, press the button on the remote to
unshuffle. When you get bored, press it to go back to random.


Regards,

FL

Craig
2005-04-04, 07:53
Free Lunch wrote:
> Hi Dan,
>
> On Apr 3, 2005 6:02 PM, Dan Sully <dan (AT) slimdevices (DOT) com> wrote:
>> * Free Lunch shaped the electrons to say...
>>
>>> I note that re-sorting the playlist on the system is pretty quick:
>>>
>>> % time sort -r __00_04_20_04_0a_b1.m3u > /dev/null
>>> 0.008u 0.002s 0:00.00 0.0% 0+0k 0+0io 0pf+0w
>>
>> Unfortunately - due to some legacy bits, that's not all the shuffle
>> code needs to do.
>
> Understood. It just seems the code in this case is so far away from
> what the hardware is capable of. Seems like it would be faster to
> /bin/sort the playlist and re-load it. Sure, that would be a kludge
> but it says something when using a 'hammer' suddenly looks lithe and
> elegant.
>
>> What shuffle mode are you going from -> to?
>
> All, because I do this using the remote and it cycles through them.
>
> This is the same method I have my folks using with their squeezebox.
> Listen to this big random playlist and when you hear something you
> really like or suits your mood, press the button on the remote to
> unshuffle. When you get bored, press it to go back to random.
>
>
That's how I found it too. It's going through 'shuffle by album' that
messes things up, but if you are in 'shuffle by song' there is no choice:
using the remote, you have to cycle through 'shuffle by album' to turn
shuffle off. If there were a pause before the shuffle started, maybe you
could step over the 'shuffle by album' mode.

Craig