  1. #41 — ralphy (Senior Member, Canada, joined Jan 2006)
    Quote Originally Posted by fpo View Post
    Unfortunately no change, still dropouts. The KeepAlive behavior seems unchanged. You say you have enabled KeepAlive on the slimproto socket. Is that the one on port 9000 or on port 3483? It is needed on port 9000.

    Here is the traffic on port 9000 prior to the 100% CPU state:

    056.2501063 HOST to CLIENT 2759 TCP:[Continuation to #2754]Flags=...A...., SrcPort=9000, DstPort=62970, PayloadLen=1460, Seq=1349354153 - 1349355613, Ack=3133506948, Win=229 {TCP:45, IPv4:1}
    056.2501063 HOST to CLIENT 2760 TCP:[Continuation to #2754]Flags=...AP..., SrcPort=9000, DstPort=62970, PayloadLen=730, Seq=1349355613 - 1349356343, Ack=3133506948, Win=229 {TCP:45, IPv4:1}
    056.2501196 CLIENT to HOST 2761 TCP:Flags=...A...., SrcPort=62970, DstPort=9000, PayloadLen=0, Seq=3133506948, Ack=1349356343, Win=1289 {TCP:45, IPv4:1}
    056.6534090 HOST to CLIENT 2762 TCP:Flags=...A...F, SrcPort=9000, DstPort=62970, PayloadLen=0, Seq=1349356343, Ack=3133506948, Win=229 {TCP:45, IPv4:1}
    056.6534705 CLIENT to HOST 2763 TCP:Flags=...A...., SrcPort=62970, DstPort=9000, PayloadLen=0, Seq=3133506948, Ack=1349356344, Win=1289 {TCP:45, IPv4:1}
    140.4406135 CLIENT to HOST 4733 TCP:[Dup Ack #2763]Flags=...A...., SrcPort=62970, DstPort=9000, PayloadLen=0, Seq=3133506948, Ack=1349356344, Win=1329 {TCP:45, IPv4:1}
    140.4409142 HOST to CLIENT 4734 TCP:Flags=.....R.., SrcPort=9000, DstPort=62970, PayloadLen=0, Seq=1349356344, Ack=0, Win=0 {TCP:45, IPv4:1}

    Host sends data: PayloadLen 1460, then 730 (the stream buffer is seemingly full at this point)
    Client acknowledges the data up to seq 1349356343
    Host acks the ack and sends FIN with seq 1349356343
    Client acks with seq 1349356344 (note: +1)
    There is no traffic on port 9000 between time 56.653 and 140.44; there is traffic on port 3483, however (not shown)
    Client dup-acks with seq 1349356344 again
    Host resets with seq 1349356344

    After that the 100% CPU load starts.
    squeezelite-1.9.8-1324-test1-win64.zip had keepalive set on the slimproto socket (port 3483).

    squeezelite-1.9.8-1324-test2-win64.zip enables keepalive on the stream socket, which is not always port 9000, depending on the stream source, and I removed the keepalive change from test1.

    The test zip files include a patch file with the code changes.
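    For reference, the heart of the keepalive change is just a setsockopt() call along these lines; this is a minimal sketch of the idea under Winsock, not the exact code from the patch file:
    Code:
    /* Minimal sketch only; the actual change is in the patch file included
       in the test zip. Enables TCP keepalive so an idle connection is
       probed periodically and a dead peer is detected instead of the
       socket hanging forever. */
    #include <winsock2.h>

    static int enable_keepalive(SOCKET sock) {
        int on = 1;
        if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
                       (const char *)&on, sizeof(on)) == SOCKET_ERROR) {
            return WSAGetLastError();
        }
        return 0;
    }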
    Last edited by ralphy; 2021-01-15 at 05:15.

  2. #42 — Junior Member (joined Jan 2021)
    Quote Originally Posted by ralphy View Post
    squeezelite-1.9.8-1324-test1-win64.zip had keepalive set on the slimproto socket (port 3483).

    squeezelite-1.9.8-1324-test2-win64.zip enables keepalive on the stream socket, which is not always port 9000, depending on the stream source, and I removed the keepalive change from test1.

    The test zip files include a patch file with the code changes.
    For some unknown reason Dropbox does not let me download the file squeezelite-1.9.8-1324-test2-win64.zip.

    ...Got it now, sorry.
    Last edited by fpo; 2021-01-14 at 11:28.

  3. #43 — Junior Member (joined Jan 2021)
    Quote Originally Posted by ralphy View Post
    ...
    squeezelite-1.9.8-1324-test2-win64.zip
    ...
    Still dropouts, although the timing has changed slightly. At the packet level, however, I see no difference in how the client responds to a KeepAlive probe from the host, i.e. still no [Keep Alive ACK] from the client on the stream socket. I can't see a fault in the patch either.

    Strange... Why the host sends KeepAlive packets is a mystery; it shouldn't do that unless explicitly configured to. Nevertheless, the client should be able to cope with it.

  4. #44 — ralphy (Senior Member, Canada, joined Jan 2006)
    Quote Originally Posted by fpo View Post
    Still dropouts, although the timing has changed slightly. At the packet level, however, I see no difference in how the client responds to a KeepAlive probe from the host, i.e. still no [Keep Alive ACK] from the client on the stream socket. I can't see a fault in the patch either.

    Strange... Why the host sends KeepAlive packets is a mystery; it shouldn't do that unless explicitly configured to. Nevertheless, the client should be able to cope with it.
    I've ported over a change to the stream socket handling from the esp32 squeezelite project to see if it helps.

    The stream thread now waits/sleeps until any previous polling operation has completed before opening a new stream socket.
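    The core of the idea looks roughly like this; it's a simplified sketch with made-up names, not the actual patch (the real change is in the patch file included in the zip):
    Code:
    /* Simplified sketch only, identifiers are placeholders: don't open a
       new stream socket while a previous poll may still be using the old
       descriptor. */
    #include <stdbool.h>
    #include <pthread.h>
    #include <unistd.h>

    static bool polling;                  /* set while a poll on the old socket is in flight */
    static pthread_mutex_t streammutex = PTHREAD_MUTEX_INITIALIZER;

    static void stream_sock_open(void (*do_open)(void)) {
        for (;;) {
            pthread_mutex_lock(&streammutex);
            bool busy = polling;
            pthread_mutex_unlock(&streammutex);
            if (!busy) break;
            usleep(10000);                /* back off ~10 ms, then re-check */
        }
        do_open();
    }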

    Please try squeezelite-1.9.8-1324-test3-win64.zip.

  5. #45 — Junior Member (joined Dec 2020)
    I found another thread which directly addresses this issue with "audio stopping for 30 seconds".
    https://forums.slimdevices.com/showt...l=1#post986931

    It was also referenced elsewhere in the thread:
    https://forums.slimdevices.com/showt...ine#post987080

    I have verified that the -b fix also works for me with SqueezeLite-X. I first noticed this issue with SqueezeLite-X, before I found that SqueezeLite (command line) also had the same issue. By adding the -b parameter I also cured the issue in SqueezeLite-X, which I prefer to use, and it has been stable for hours with no interruptions.

    I would like to know: is this the *actual* solution?

    Using a larger buffer would cure certain problems, such as overrunning the boundary of a struct allocated with malloc. But it might also merely side-step the problem, which would then re-emerge later because the real cause is somewhere else. Only by reproducing and testing can we be sure, although I understand that @ralphy has not been able to reproduce the issue yet. Maybe those other forum postings will provide some further clues on how to verify the fix.

    If this actually does cure the problem, then setting a new default buffer size will put this problem to rest.
    Last edited by foopydog; 2021-01-15 at 16:37. Reason: added remark about SqueezeLite-X

  6. #46 — Junior Member (joined Jan 2021)
    Quote Originally Posted by ralphy View Post
    ...
    Please try squeezelite-1.9.8-1324-test3-win64.zip.

    No luck yet. See my log below, with lines inserted where I detect CPU load.

    [22:52:50.094] process_strm:280 strm command t
    [22:52:50.094] sendSTAT:195 STAT: STMt
    [22:52:52.096] sendSTAT:195 STAT: STMt
    [22:52:54.097] sendSTAT:195 STAT: STMt
    [22:52:55.095] process:527 strm
    [22:52:55.095] process_strm:280 strm command t
    [22:52:55.095] sendSTAT:195 STAT: STMt
    [22:52:56.095] sendSTAT:195 STAT: STMt
    [22:52:58.095] sendSTAT:195 STAT: STMt
    [22:53:00.094] sendSTAT:195 STAT: STMt
    [22:53:00.094] process:527 strm
    [22:53:00.094] process_strm:280 strm command t
    [22:53:00.094] sendSTAT:195 STAT: STMt
    [22:53:02.096] sendSTAT:195 STAT: STMt
    [22:53:04.096] sendSTAT:195 STAT: STMt
    -22:53:04.208- ########## START OF HEAVY CPU LOAD
    -22:53:04.208- ########## CPU LOAD: total:64,0% user:12,2% kernel:51,8%
    -22:53:04.701- ########## CPU LOAD: total:100,0% user:9,5% kernel:91,9%
    [22:53:05.094] process:527 strm
    [22:53:05.094] process_strm:280 strm command t
    [22:53:05.094] sendSTAT:195 STAT: STMt
    -22:53:05.196- ########## CPU LOAD: total:100,0% user:9,5% kernel:91,5%
    -22:53:05.708- ########## CPU LOAD: total:97,8% user:9,2% kernel:88,7%
    [22:53:06.095] sendSTAT:195 STAT: STMt
    -22:53:06.197- ########## CPU LOAD: total:100,0% user:19,2% kernel:83,0%
    -22:53:06.696- ########## CPU LOAD: total:100,0% user:3,1% kernel:97,1%
    -22:53:07.208- ########## CPU LOAD: total:97,7% user:15,3% kernel:82,4%
    -22:53:07.699- ########## CPU LOAD: total:100,0% user:19,1% kernel:82,7%
    [22:53:08.097] sendSTAT:195 STAT: STMt
    -22:53:08.207- ########## CPU LOAD: total:98,4% user:12,3% kernel:86,1%
    -22:53:08.696- ########## CPU LOAD: total:100,0% user:22,4% kernel:79,9%
    -22:53:09.206- ########## CPU LOAD: total:97,9% user:12,2% kernel:85,7%
    -22:53:09.700- ########## CPU LOAD: total:100,0% user:9,5% kernel:91,7%
    [22:53:10.095] sendSTAT:195 STAT: STMt
    [22:53:10.095] process:527 strm
    [22:53:10.095] process_strm:280 strm command t
    [22:53:10.095] sendSTAT:195 STAT: STMt
    -22:53:10.209- ########## CPU LOAD: total:98,4% user:12,3% kernel:86,1%
    -22:53:10.703- ########## CPU LOAD: total:100,0% user:22,1% kernel:79,1%
    -22:53:11.196- ########## CPU LOAD: total:98,2% user:6,3% kernel:91,9%
    -22:53:11.707- ########## CPU LOAD: total:97,7% user:15,3% kernel:82,5%
    [22:53:12.095] sendSTAT:195 STAT: STMt
    -22:53:12.199- ########## CPU LOAD: total:100,0% user:19,1% kernel:82,6%
    -22:53:12.696- ########## CPU LOAD: total:100,0% user:3,1% kernel:97,6%
    -22:53:13.207- ########## CPU LOAD: total:97,7% user:15,3% kernel:82,4%
    -22:53:13.701- ########## CPU LOAD: total:100,0% user:6,3% kernel:94,9%
    [22:53:14.096] sendSTAT:195 STAT: STMt
    -22:53:14.210- ########## CPU LOAD: total:98,2% user:3,1% kernel:95,2%
    -22:53:14.697- ########## CPU LOAD: total:100,0% user:9,6% kernel:93,0%
    [22:53:15.094] process:527 strm
    [22:53:15.094] process_strm:280 strm command t
    [22:53:15.094] sendSTAT:195 STAT: STMt
    -22:53:15.196- ########## CPU LOAD: total:100,0% user:6,3% kernel:97,1%
    -22:53:15.707- ########## CPU LOAD: total:98,0% user:15,3% kernel:82,7%
    [22:53:16.095] sendSTAT:195 STAT: STMt
    -22:53:16.203- ########## CPU LOAD: total:100,0% user:12,6% kernel:88,2%
    -22:53:16.696- ########## CPU LOAD: total:100,0% user:0,0% kernel:100,0%
    -22:53:17.203- ########## CPU LOAD: total:98,6% user:3,1% kernel:95,5%
    -22:53:17.696- ########## CPU LOAD: total:100,0% user:9,5% kernel:91,8%
    [22:53:18.095] sendSTAT:195 STAT: STMt
    -22:53:18.205- ########## CPU LOAD: total:98,2% user:9,2% kernel:89,0%
    -22:53:18.697- ########## CPU LOAD: total:100,0% user:0,0% kernel:100,0%
    -22:53:19.209- ########## CPU LOAD: total:97,6% user:3,1% kernel:94,6%
    -22:53:19.703- ########## CPU LOAD: total:100,0% user:6,3% kernel:95,0%
    [22:53:20.094] sendSTAT:195 STAT: STMt
    [22:53:20.094] process:527 strm
    [22:53:20.094] process_strm:280 strm command t
    [22:53:20.094] sendSTAT:195 STAT: STMt
    -22:53:20.210- ########## CPU LOAD: total:98,6% user:9,2% kernel:89,4%
    -22:53:20.695- ########## CPU LOAD: total:100,0% user:6,4% kernel:96,5%
    -22:53:21.203- ########## CPU LOAD: total:98,4% user:12,3% kernel:86,1%
    -22:53:21.696- ########## CPU LOAD: total:100,0% user:19,0% kernel:82,5%
    [22:53:22.096] sendSTAT:195 STAT: STMt
    -22:53:22.204- ########## CPU LOAD: total:98,4% user:9,2% kernel:89,1%
    -22:53:22.709- ########## CPU LOAD: total:98,9% user:6,2% kernel:92,8%
    -22:53:23.202- ########## CPU LOAD: total:100,0% user:9,5% kernel:92,0%
    -22:53:23.711- ########## CPU LOAD: total:100,0% user:18,4% kernel:82,9%
    [22:53:24.097] sendSTAT:195 STAT: STMt
    [22:53:24.097] sendSTAT:195 STAT: STMo
    [22:53:24.099] process:527 strm
    [22:53:24.099] process_strm:280 strm command p
    [22:53:24.099] sendSTAT:195 STAT: STMp
    [22:53:24.099] process_strm:316 pause interval: 0
    -22:53:24.216- ########## CPU LOAD: total:98,9% user:9,3% kernel:89,7%
    [22:53:24.231] process:527 strm
    [22:53:24.231] process_strm:280 strm command t
    [22:53:24.231] sendSTAT:195 STAT: STMt
    [22:53:24.533] process:527 strm
    [22:53:24.533] process_strm:280 strm command t
    [22:53:24.533] sendSTAT:195 STAT: STMt
    -22:53:24.715- ########## CPU LOAD: total:100,0% user:3,1% kernel:97,2%
    [22:53:24.835] process:527 strm
    [22:53:24.835] process_strm:280 strm command t
    [22:53:24.835] sendSTAT:195 STAT: STMt
    [22:53:25.094] process:527 strm
    [22:53:25.094] process_strm:280 strm command t
    [22:53:25.094] sendSTAT:195 STAT: STMt

    Heavy CPU load starts at 22:53:04.208. At 22:53:24.097, about 20 seconds later, the behavior of the stream thread changes from 'STAT: STMt' to 'STMo', followed by 'strm command p' and 'pause interval: 0'. I'm curious whether that behavior changes once the 100% CPU load is avoided.

    Let me explain my reasoning: when I attached the Visual Studio debugger to squeezelite and interrupted it during the 100% CPU load, I saw that the load comes from a single thread, and that this thread sometimes had its priority elevated slightly, presumably while in kernel code. Such a situation could derail other threads and trigger unexpected behavior. 100% CPU load is in essence an extreme stress test for the system, and the system obviously fails it. We don't really know whether the CPU load is cause or effect, but it seems pointless to me to look for a solution in a stressed system we don't fully control. The fault may well be in the Windows TCP/IP stack implementation, and we would not know. So my priority would be to get rid of the CPU load first, and then look further.

    I assume the source of the CPU load is a tight loop in
    Code:
    static void slimproto_run() {
        ...
        while (running && !new_server) {
            ...
            if ((ev = wait_readwake(ehandles, 1000)) != EVENT_TIMEOUT) {
                ...
            }
            ...
        }
    }
    Inside the if block there are two continue statements that, if triggered permanently, would make the loop tight. The stream thread is presumably a separate thread; it might be perfectly fine on its own and only get derailed inside a system call. We don't know yet.

    My proposal would be to find the source of the CPU load, slow the thread down a bit by inserting a short sleep(), add some temporary logging, and see what happens.

    Does it make sense to insert a debug log followed by a sleep(1) immediately before the two continues? You know the code better than I do, but it should not happen very often that we get an EVENT_READ and then read nothing.
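    Purely to illustrate what I mean (identifiers and the logging call are placeholders, not squeezelite's real ones): log the empty wakeup and back off briefly before continuing, so a spinning loop shows up in the output instead of pinning a core.
    Code:
    /* Illustration only, not the actual slimproto.c code. */
    #include <stdio.h>
    #include <unistd.h>

    static void throttle_continue(const char *why) {
        fprintf(stderr, "slimproto: wakeup with nothing to do (%s), backing off\n", why);
        usleep(1000);   /* ~1 ms; note that sleep(1) would be a full second */
    }

    /* Usage in the loop would then be roughly:
     *   if (got EVENT_READ but read returned nothing) {
     *       throttle_continue("empty read");
     *       continue;
     *   }
     */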

  7. #47 — Junior Member (joined Jan 2021)
    Quote Originally Posted by foopydog View Post
    I would like to know: is this the *actual* solution?

    Using a larger buffer would cure certain problems, such as overrunning the boundary of a struct allocated with malloc. But it might also merely side-step the problem, which would then re-emerge later because the real cause is somewhere else. Only by reproducing and testing can we be sure, although I understand that @ralphy has not been able to reproduce the issue yet. Maybe those other forum postings will provide some further clues on how to verify the fix.

    If this actually does cure the problem, then setting a new default buffer size will put this problem to rest.
    The default stream buffer size is 2 MiB. One post mentions -b 655360:1310720, which amounts to roughly 1.9 GiB of total buffer space (stream + output). I'm tempted to assume that is not the solution if squeezelite, as a core component, is supposed to run on small systems. What you get with a big stream buffer is that the whole mp3 file is read in one fell swoop, which might just mask the problem. And what about folks who listen to classical music? Those files can be fairly large.

    Moreover, the problem seems not only related to the buffer size but also specific to mp3 files; I have not noticed dropouts when playing flac files. I'm testing with -b 256, which results in a 256 KiB stream buffer, since the value is multiplied internally by 1024. The files I'm testing with are about 6 MB in size, so there should be plenty of wrapping around in the stream buffer. Wrapping around does not seem to be a problem with flac data. Is there anything special about wrapping around mp3 data? Looking at the code, I don't think so. The dropouts are really fickle. For example, when testing I read stderr programmatically and write it to a memory buffer, and I can make the dropouts disappear by putting a small delay in the logging function. So it seems contingent on the internal timing within squeezelite; different CPU frequencies, different cache sizes etc. can thus make the dropouts appear or disappear.
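    To make the wrap-around point concrete, here is a minimal generic sketch of how a ring-buffer write wraps; it is not squeezelite's buf.c, and it leaves out the free-space check a real buffer needs:
    Code:
    /* Generic ring-buffer write sketch, not squeezelite's actual code.
       With a 256 KiB buffer and ~6 MB files, the write pointer wraps many
       times per track, regardless of codec. Assumes len <= size. */
    #include <stddef.h>
    #include <string.h>

    static void ring_write(unsigned char *buf, size_t size, size_t *writep,
                           const unsigned char *src, size_t len) {
        size_t first = size - *writep;          /* room before the physical end */
        if (first > len) first = len;
        memcpy(buf + *writep, src, first);      /* fill up to the end of the buffer */
        memcpy(buf, src + first, len - first);  /* wrap: continue at the start */
        *writep = (*writep + len) % size;       /* new write position after the wrap */
    }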

  8. #48 — Junior Member (joined Jan 2021)
    @ralphy

    I don't see any volatile qualifiers in the code, although there is data shared among threads, e.g. the read/write positions of the buffers.

    Is that by design?

    Everything seems to be synchronized via mutexes; however, if I understand volatile correctly, data updated on one thread could be lost on another thread if that thread has the data cached.
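    For concreteness, the pattern I'm asking about looks roughly like this; a generic sketch, not the actual squeezelite buffer struct:
    Code:
    /* Generic sketch of the pattern in question, not squeezelite's actual
       buffer code: shared read/write positions that are only ever accessed
       while holding the buffer's mutex. POSIX specifies that mutex
       lock/unlock operations synchronize memory between threads, so plain
       (non-volatile) variables are fine as long as every access happens
       under the lock. */
    #include <pthread.h>
    #include <stddef.h>

    struct sharedbuf {
        pthread_mutex_t mutex;
        size_t readp;
        size_t writep;
    };

    static size_t buf_used(struct sharedbuf *b, size_t size) {
        pthread_mutex_lock(&b->mutex);
        size_t used = (b->writep - b->readp + size) % size;
        pthread_mutex_unlock(&b->mutex);
        return used;
    }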

  9. #49 — Junior Member (Germany, joined Mar 2016)
    I have the same issue.
    I am currently running LMS 8.1.0 on a Synology DS220+ as a Docker container.
    It works flawlessly, but when I use SqueezeLite-X on Win 10 20H2 I get the same dropouts mentioned here: very high CPU load and a 30-second dropout.
    Same on my notebook and my PC.

  10. #50 — ralphy (Senior Member, Canada, joined Jan 2006)
    Quote Originally Posted by fpo View Post
    No luck yet.

    Let me explain my reasoning: when I attached the Visual Studio debugger to squeezelite and interrupted it during the 100% CPU load, I saw that the load comes from a single thread, and that this thread sometimes had its priority elevated slightly, presumably while in kernel code. Such a situation could derail other threads and trigger unexpected behavior. 100% CPU load is in essence an extreme stress test for the system, and the system obviously fails it. We don't really know whether the CPU load is cause or effect, but it seems pointless to me to look for a solution in a stressed system we don't fully control. The fault may well be in the Windows TCP/IP stack implementation, and we would not know. So my priority would be to get rid of the CPU load first, and then look further.

    I assume the source of the CPU load is a tight loop in
    Code:
    static void slimproto_run() {
        ...
        while (running && !new_server) {
            ...
            if ((ev = wait_readwake(ehandles, 1000)) != EVENT_TIMEOUT) {
                ...
            }
            ...
        }
    }
    Inside the if block there are two continue statements that, if triggered permanently, would make the loop tight. The stream thread is presumably a separate thread; it might be perfectly fine on its own and only get derailed inside a system call. We don't know yet.

    My proposal would be to find the source of the CPU load, slow the thread down a bit by inserting a short sleep(), add some temporary logging, and see what happens.

    Does it make sense to insert a debug log followed by a sleep(1) immediately before the two continues? You know the code better than I do, but it should not happen very often that we get an EVENT_READ and then read nothing.
    squeezelite-1.9.8-1324-test4-win64.zip adds a short delay before both of the continue statements in slimproto.c.
