Home of the Squeezebox™ & Transporter® network music players.
  1. #11
    Senior Member
    Join Date
    Jan 2011
    Location
    Staffordshire. UK
    Posts
    5,223
    Quote Originally Posted by zzzap View Post
Also do a test and see whether Music Folder browsing and Material Skin respond more quickly. That's the real reason I'm spending time on this.

    Although my system only has 600 MHz on tap for each core. And as far as I can tell, LMS only uses a single core when serving its web content.
    I'm done testing.

    ronnie

  2. #12
    Senior Member
    Join Date
    Oct 2005
    Location
    Ireland
    Posts
    21,828
    Quote Originally Posted by zzzap View Post
    Although my system only has 600 MHz on tap for each core. And as far as I can tell, LMS only uses a single core when serving its web content.
    All of LMS runs on a single core; it is a single-threaded system - this is historical.
    So altering CPU priority on a 4-core RPi4 is not going to make much difference compared to when LMS used to run on a single-core processor.

    LMS transcoding applications will run on the other cores but there is still I/O from these processes to LMS.

    As you can see, LMS CPU usage is very small - LMS is usually I/O bound (i.e. waiting for network comms to complete, such as the web UI or player data/UI).
    LMS uses memory caches whenever it can.
    Your network setup may be as much a part of the issue - for example, a wired connection from router to RPi and players should give better performance, as wireless is a shared "half-duplex" medium.
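    A quick way to sanity-check the single-threaded claim is the thread count in /proc. This is only a sketch - PID 1 is used as a stand-in here so it runs anywhere; on your server substitute the LMS pid (e.g. from pgrep; the process name varies by install):

    ```shell
    # Thread count (the "Threads:" field) of a process, read from /proc.
    # PID 1 is a placeholder; use the LMS server's pid on a real system.
    awk '/^Threads:/ {print $2}' /proc/1/status
    ```

    A value of 1 for the main server process would match the single-threaded description; transcoding helpers show up as separate processes, not threads.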

  3. #13
    Thanks, guys, for participating.

    I guess seeing is believing. As I mentioned, the system feels much snappier when the service's CPU priority is manipulated. Here are the results of using wget to load the index file 20 times from a remote client.

    The first 11 runs are with PRI = -23:
    CPUSchedulingPolicy=fifo
    CPUSchedulingPriority=22

    Runs 11-20 are with the default setting, where PRI is 20.

    Code:
    user@G9-793-72E5:~$ sudo wget "http://192.168.10.253:9000/Material"
    --2022-08-13 12:53:39--  http://192.168.10.253:9000/Material
    Connecting to 192.168.10.253:9000... connected.
    HTTP request sent, awaiting response... 301 Moved Permanently
    Location: /Material/ [following]
    --2022-08-13 12:53:39--  http://192.168.10.253:9000/Material/
    Connecting to 192.168.10.253:9000... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 5325 (5.2K) [text/html]
    Saving to: ‘Material’
    
    Material                    100%[=========================================>]   5.20K  --.-KB/s    in 0s
    
    (101 MB/s) - ‘Material’ saved [5325/5325]
    (101 MB/s) - ‘Material.1’ saved [5325/5325]
    (134 MB/s) - ‘Material.2’ saved [5325/5325]
    (130 MB/s) - ‘Material.3’ saved [5325/5325]
    (206 MB/s) - ‘Material.4’ saved [5325/5325]
    (126 MB/s) - ‘Material.5’ saved [5325/5325]
    (172 MB/s) - ‘Material.6’ saved [5325/5325]
    (137 MB/s) - ‘Material.7’ saved [5325/5325]
    (92.2 MB/s) - ‘Material.8’ saved [5325/5325]
    (181 MB/s) - ‘Material.9’ saved [5325/5325]
    (162 MB/s) - ‘Material.10’ saved [5325/5325]
    
    Next 10 wget's are with default service value 20
    
    (208 MB/s) - ‘Material.11’ saved [5325/5325]
    (178 MB/s) - ‘Material.12’ saved [5325/5325]
    (212 MB/s) - ‘Material.13’ saved [5325/5325]
    (212 MB/s) - ‘Material.14’ saved [5325/5325]
    (215 MB/s) - ‘Material.15’ saved [5325/5325]
    (211 MB/s) - ‘Material.16’ saved [5325/5325]
    (192 MB/s) - ‘Material.17’ saved [5325/5325]
    (221 MB/s) - ‘Material.18’ saved [5325/5325]
    (132 MB/s) - ‘Material.19’ saved [5325/5325]
    (209 MB/s) - ‘Material.20’ saved [5325/5325]
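    For anyone who wants to try the same thing, settings like the ones above would normally go into a systemd drop-in rather than the packaged unit file. A sketch - the service name logitechmediaserver.service is an assumption and varies by distro:

    ```ini
    # /etc/systemd/system/logitechmediaserver.service.d/priority.conf  (assumed unit name)
    [Service]
    CPUSchedulingPolicy=fifo
    CPUSchedulingPriority=22
    ```

    followed by a systemctl daemon-reload and a restart of the service.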
    Last edited by zzzap; 2022-08-13 at 06:42. Reason: spelling

  4. #14
    Senior Member
    Join Date
    Oct 2005
    Location
    Ireland
    Posts
    21,828
    Quote Originally Posted by zzzap View Post
    Runs 11-20 are with the default setting, where PRI is 20.
    Have all the caches (OS & LMS) been cleared between two runs ?

  5. #15
    Quote Originally Posted by bpa View Post
    Have all the caches (OS & LMS) been cleared between two runs ?
    The systemd daemon and LMS have to be reloaded between setting changes.

    The client cache (WCL Debian) was not cleared. The numbers should be beneficial for the last run, since its cache wasn't cleared, but they're not.
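    For completeness, the OS page cache can be dropped explicitly between runs. A sketch - the actual drop needs root, so it is left commented out here:

    ```shell
    # Flush dirty pages to disk first, then (as root) drop the page cache,
    # dentries and inodes so the next run starts cold.
    sync
    cat /proc/sys/vm/drop_caches    # harmless read of the sysctl
    # echo 3 | sudo tee /proc/sys/vm/drop_caches   # the actual drop (root only)
    ```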

  6. #16
    I guess what we can take from this is that top is rather useless for measuring an idle system.

    While nice ranges from 20 to -20, CPUSchedulingPriority ranges from 1 to 99.

    BTW, the default Squeezelite install on RPi-OS runs with PRI = -46.

  7. #17
    Senior Member
    Join Date
    Oct 2005
    Location
    Ireland
    Posts
    21,828
    Quote Originally Posted by zzzap View Post
    I guess what we can take from this is that top is rather useless for measuring an idle system.
    There was a problem recently where LMS would go into a CPU loop.
    On a Pi4, top showed CPU usage as 25% (i.e. one CPU core was at 100% - the other three were mostly idle) - this is because top averages CPU usage across all cores. In this case "idle" was a measure of the usage of the other cores.

    /proc/stat gives a snapshot of each CPU core's usage, and htop will show it graphically.
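    A minimal way to see the per-core split that top averages away:

    ```shell
    # One aggregate "cpu" line plus one "cpuN" line per core;
    # the columns are jiffies spent in user, nice, system, idle, iowait, ...
    grep '^cpu' /proc/stat
    ```

    Take two snapshots a few seconds apart and diff the idle column per core to see which core LMS is pinning.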

    If you want to separate out parts of LMS to tune, play an internet station "direct" to the player - this means LMS will only be doing I/O to the player (status, not data) and UIs - no database and no audio data processing.

    Be careful of caches: I'm not sure, but I think some may persist between LMS restarts if there is no significant delay - it may be better to delete the relevant LMS caches between measurement runs.
