squeezelite-1.9.8-1324-test1-win64.zip had keepalive enabled on the slimproto control socket (port 3483).
squeezelite-1.9.8-1324-test2-win64.zip enables keepalive on the stream socket instead (which is not always port 9000, depending on the stream source) and removes the keepalive change from test1.
The test zip files include a patch file with the code changes.
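For reference, enabling TCP keepalive on a connected socket is normally a single setsockopt() call; the sketch below shows the general shape of that kind of change (illustrative only, not the actual test1/test2 patch -- the Windows build would use the equivalent winsock call on a SOCKET handle).
Code:
/* Illustrative sketch only -- not the actual test1/test2 patch. */
#include <stdio.h>
#include <sys/socket.h>

static void enable_keepalive(int sock) {
	int opt = 1;
	/* ask the kernel to send TCP keepalive probes on this connection */
	if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE,
	               (const char *)&opt, sizeof(opt)) < 0) {
		fprintf(stderr, "failed to enable SO_KEEPALIVE\n");
	}
}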
-
2021-01-14, 10:43 #41
Last edited by ralphy; 2021-01-15 at 05:15.
Ralphy
1-Touch, 5-Classics, 3-Booms, 2-UE Radio
Squeezebox client builds donations always appreciated.
-
2021-01-14, 11:25 #42
Join Date: Jan 2021 | Posts: 15
-
2021-01-14, 14:35 #43
Join Date: Jan 2021 | Posts: 15
Still dropouts, although the timing has changed slightly. However, I see no difference at the packet level in the client's response to a keepalive probe from the host, i.e. still no [Keep-Alive ACK] from the client on the stream socket. I can't see a fault in the patch either.
Strange... Why the host sends keepalive packets at all is a mystery; it shouldn't do that unless explicitly configured to. Nevertheless, the client should be able to cope with it.
-
2021-01-15, 12:26 #44
I've implemented a change to the stream socket handling from the esp32 squeezelite project to see if it helps.
The stream thread now waits/sleeps until any previous polling operation has completed before opening a new stream socket.
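Roughly the shape of that change, as a sketch only (the flag and function names below are illustrative, not the actual code pulled in from the esp32 project):
Code:
/* Sketch only -- not the code actually taken from the esp32 project.
 * A flag marks an in-flight poll on the old stream socket; the stream
 * thread waits for it to clear before opening the new connection. */
#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

static bool polling;                 /* set while poll()/select() runs */
static pthread_mutex_t poll_mutex = PTHREAD_MUTEX_INITIALIZER;

static void wait_for_poll_done(void) {
	pthread_mutex_lock(&poll_mutex);
	while (polling) {
		pthread_mutex_unlock(&poll_mutex);
		usleep(10000);               /* 10 ms, let the poller finish */
		pthread_mutex_lock(&poll_mutex);
	}
	pthread_mutex_unlock(&poll_mutex);
	/* ...now safe to close the old socket and open the new stream... */
}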
Please try squeezelite-1.9.8-1324-test3-win64.zip.
Ralphy
1-Touch, 5-Classics, 3-Booms, 2-UE Radio
Squeezebox client builds donations always appreciated.
-
2021-01-15, 16:31 #45
Join Date: Dec 2020 | Posts: 15
I found another thread which directly addresses this issue with "audio stopping for 30 seconds".
https://forums.slimdevices.com/showt...l=1#post986931
It was also referenced elsewhere in the thread:
https://forums.slimdevices.com/showt...ine#post987080
I have verified that the -b fix also works for me with SqueezeLite-X. I first noticed this issue with SqueezeLite-X, before I found that SqueezeLite (command line) had the same issue. Adding the -b parameter also cured the issue in SqueezeLite-X, which I prefer to use, and it has been stable for hours with no interruptions.
I would like to know: is this the *actual* solution?
Using a larger buffer would cure certain problems, such as overrunning the boundary of the buffer obtained from malloc. But it might also just side-step the problem, which could re-emerge later because the real cause lies somewhere else. Only by reproducing and testing can we be sure, although I understand that @ralphy has not been able to reproduce the issue yet. Maybe those other forum postings will provide further clues on how to verify the fix.
If this actually does cure the problem, then setting a new default buffer size will put this problem to rest.
Last edited by foopydog; 2021-01-15 at 16:37. Reason: added remark about SqueezeLite-X
-
2021-01-15, 16:35 #46
Join Date: Jan 2021 | Posts: 15
No luck yet. See my log below with lines inserted where I detect cpu load.
[22:52:50.094] process_strm:280 strm command t
[22:52:50.094] sendSTAT:195 STAT: STMt
[22:52:52.096] sendSTAT:195 STAT: STMt
[22:52:54.097] sendSTAT:195 STAT: STMt
[22:52:55.095] process:527 strm
[22:52:55.095] process_strm:280 strm command t
[22:52:55.095] sendSTAT:195 STAT: STMt
[22:52:56.095] sendSTAT:195 STAT: STMt
[22:52:58.095] sendSTAT:195 STAT: STMt
[22:53:00.094] sendSTAT:195 STAT: STMt
[22:53:00.094] process:527 strm
[22:53:00.094] process_strm:280 strm command t
[22:53:00.094] sendSTAT:195 STAT: STMt
[22:53:02.096] sendSTAT:195 STAT: STMt
[22:53:04.096] sendSTAT:195 STAT: STMt
-22:53:04.208- ########## START OF HEAVY CPU LOAD
-22:53:04.208- ########## CPU LOAD: total:64,0% user:12,2% kernel:51,8%
-22:53:04.701- ########## CPU LOAD: total:100,0% user:9,5% kernel:91,9%
[22:53:05.094] process:527 strm
[22:53:05.094] process_strm:280 strm command t
[22:53:05.094] sendSTAT:195 STAT: STMt
-22:53:05.196- ########## CPU LOAD: total:100,0% user:9,5% kernel:91,5%
-22:53:05.708- ########## CPU LOAD: total:97,8% user:9,2% kernel:88,7%
[22:53:06.095] sendSTAT:195 STAT: STMt
-22:53:06.197- ########## CPU LOAD: total:100,0% user:19,2% kernel:83,0%
-22:53:06.696- ########## CPU LOAD: total:100,0% user:3,1% kernel:97,1%
-22:53:07.208- ########## CPU LOAD: total:97,7% user:15,3% kernel:82,4%
-22:53:07.699- ########## CPU LOAD: total:100,0% user:19,1% kernel:82,7%
[22:53:08.097] sendSTAT:195 STAT: STMt
-22:53:08.207- ########## CPU LOAD: total:98,4% user:12,3% kernel:86,1%
-22:53:08.696- ########## CPU LOAD: total:100,0% user:22,4% kernel:79,9%
-22:53:09.206- ########## CPU LOAD: total:97,9% user:12,2% kernel:85,7%
-22:53:09.700- ########## CPU LOAD: total:100,0% user:9,5% kernel:91,7%
[22:53:10.095] sendSTAT:195 STAT: STMt
[22:53:10.095] process:527 strm
[22:53:10.095] process_strm:280 strm command t
[22:53:10.095] sendSTAT:195 STAT: STMt
-22:53:10.209- ########## CPU LOAD: total:98,4% user:12,3% kernel:86,1%
-22:53:10.703- ########## CPU LOAD: total:100,0% user:22,1% kernel:79,1%
-22:53:11.196- ########## CPU LOAD: total:98,2% user:6,3% kernel:91,9%
-22:53:11.707- ########## CPU LOAD: total:97,7% user:15,3% kernel:82,5%
[22:53:12.095] sendSTAT:195 STAT: STMt
-22:53:12.199- ########## CPU LOAD: total:100,0% user:19,1% kernel:82,6%
-22:53:12.696- ########## CPU LOAD: total:100,0% user:3,1% kernel:97,6%
-22:53:13.207- ########## CPU LOAD: total:97,7% user:15,3% kernel:82,4%
-22:53:13.701- ########## CPU LOAD: total:100,0% user:6,3% kernel:94,9%
[22:53:14.096] sendSTAT:195 STAT: STMt
-22:53:14.210- ########## CPU LOAD: total:98,2% user:3,1% kernel:95,2%
-22:53:14.697- ########## CPU LOAD: total:100,0% user:9,6% kernel:93,0%
[22:53:15.094] process:527 strm
[22:53:15.094] process_strm:280 strm command t
[22:53:15.094] sendSTAT:195 STAT: STMt
-22:53:15.196- ########## CPU LOAD: total:100,0% user:6,3% kernel:97,1%
-22:53:15.707- ########## CPU LOAD: total:98,0% user:15,3% kernel:82,7%
[22:53:16.095] sendSTAT:195 STAT: STMt
-22:53:16.203- ########## CPU LOAD: total:100,0% user:12,6% kernel:88,2%
-22:53:16.696- ########## CPU LOAD: total:100,0% user:0,0% kernel:100,0%
-22:53:17.203- ########## CPU LOAD: total:98,6% user:3,1% kernel:95,5%
-22:53:17.696- ########## CPU LOAD: total:100,0% user:9,5% kernel:91,8%
[22:53:18.095] sendSTAT:195 STAT: STMt
-22:53:18.205- ########## CPU LOAD: total:98,2% user:9,2% kernel:89,0%
-22:53:18.697- ########## CPU LOAD: total:100,0% user:0,0% kernel:100,0%
-22:53:19.209- ########## CPU LOAD: total:97,6% user:3,1% kernel:94,6%
-22:53:19.703- ########## CPU LOAD: total:100,0% user:6,3% kernel:95,0%
[22:53:20.094] sendSTAT:195 STAT: STMt
[22:53:20.094] process:527 strm
[22:53:20.094] process_strm:280 strm command t
[22:53:20.094] sendSTAT:195 STAT: STMt
-22:53:20.210- ########## CPU LOAD: total:98,6% user:9,2% kernel:89,4%
-22:53:20.695- ########## CPU LOAD: total:100,0% user:6,4% kernel:96,5%
-22:53:21.203- ########## CPU LOAD: total:98,4% user:12,3% kernel:86,1%
-22:53:21.696- ########## CPU LOAD: total:100,0% user:19,0% kernel:82,5%
[22:53:22.096] sendSTAT:195 STAT: STMt
-22:53:22.204- ########## CPU LOAD: total:98,4% user:9,2% kernel:89,1%
-22:53:22.709- ########## CPU LOAD: total:98,9% user:6,2% kernel:92,8%
-22:53:23.202- ########## CPU LOAD: total:100,0% user:9,5% kernel:92,0%
-22:53:23.711- ########## CPU LOAD: total:100,0% user:18,4% kernel:82,9%
[22:53:24.097] sendSTAT:195 STAT: STMt
[22:53:24.097] sendSTAT:195 STAT: STMo
[22:53:24.099] process:527 strm
[22:53:24.099] process_strm:280 strm command p
[22:53:24.099] sendSTAT:195 STAT: STMp
[22:53:24.099] process_strm:316 pause interval: 0
-22:53:24.216- ########## CPU LOAD: total:98,9% user:9,3% kernel:89,7%
[22:53:24.231] process:527 strm
[22:53:24.231] process_strm:280 strm command t
[22:53:24.231] sendSTAT:195 STAT: STMt
[22:53:24.533] process:527 strm
[22:53:24.533] process_strm:280 strm command t
[22:53:24.533] sendSTAT:195 STAT: STMt
-22:53:24.715- ########## CPU LOAD: total:100,0% user:3,1% kernel:97,2%
[22:53:24.835] process:527 strm
[22:53:24.835] process_strm:280 strm command t
[22:53:24.835] sendSTAT:195 STAT: STMt
[22:53:25.094] process:527 strm
[22:53:25.094] process_strm:280 strm command t
[22:53:25.094] sendSTAT:195 STAT: STMt
Heavy CPU load starts at 22:53:04.208. At 22:53:24.097, about 20 seconds later, the behavior of the stream thread changes from 'STAT: STMt' to 'STMo', followed by 'strm command p' and 'pause interval: 0'. I'm curious whether that behavior changes when the 100% CPU load is avoided.
Let me explain my reasoning: what I saw when I attached the Visual Studio debugger to squeezelite and interrupted it while at 100% CPU load was that the load comes from one thread, and this thread sometimes had its priority elevated slightly, presumably while in kernel code. Such a situation could derail other threads and trigger unexpected behavior. 100% CPU load is in essence an extreme stress test for the system, and it obviously fails it. We don't really know whether the CPU load is cause or effect, but it seems pointless to me to hunt for the fault in a stressed system we don't fully control. The fault may well be in the Windows TCP/IP stack implementation; we would never know. So my priority would be to get rid of the CPU load first, then look further.
I assume the source of the cpu load is a tight loop in
Code:
static void slimproto_run() {
	...
	while (running && !new_server) {
		...
		if ((ev = wait_readwake(ehandles, 1000)) != EVENT_TIMEOUT) {
			...
		}
		...
	}
}
My proposal would be to find the source of the CPU load, slow the thread down a bit by inserting a short sleep(), add some temporary logging, and see what happens.
Does it make sense to insert a debug log followed by a sleep(1) immediately before the two continues? You know the code better than I do, but it shouldn't happen too often that we get an EVENT_READ and then read nothing.
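As a sketch of the placement I have in mind (not a worked-out diff; I'm assuming squeezelite's usual LOG_DEBUG macro and usleep() are usable here, and "n" stands for whatever the preceding read returned):
Code:
/* Sketch of the proposal only -- inside the while() loop of
 * slimproto_run(), immediately before one of the two continues. */
if (n <= 0) {
	LOG_DEBUG("woke for read but got no data");   /* temporary logging */
	usleep(1000);   /* ~1 ms pause so the loop cannot spin at 100% cpu */
	continue;
}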
-
2021-01-15, 19:50 #47
Join Date: Jan 2021 | Posts: 15
The default stream buffer size is 2 MiB. One post mentions -b 655360:1310720, which is roughly 1.9 GiB of total buffer space (stream + output). I'm tempted to assume that is not the solution if squeezelite, as a core component, is supposed to run on small systems. What you get with a huge stream buffer is that the whole mp3 file is read in one fell swoop, which might just mask the problem. And what about folks who listen to classical music? Those files can be fairly large.
Moreover, the problem seems not only related to the buffer size but also specific to mp3 files. I have not noticed dropouts when playing flac files. I'm testing with -b 256, which results in a 256 KiB stream buffer since the value is multiplied internally by 1024. The files I'm testing with are about 6 MB in size, so there should be plenty of wrapping around in the stream buffer. Wrapping around does not seem to be a problem with flac data. Is there anything special about wrapping around mp3 data? Looking at the code, I don't think so. The dropouts are really fickle. For example, when testing I read stderr programmatically and write it to a memory buffer; I can make the dropouts disappear by putting a small delay in the logging function. So it seems contingent on the internal timing within squeezelite. Different CPU frequencies, different cache sizes, etc. can thus make the dropouts appear or disappear.
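For reference, the wrap-around I'm talking about is the usual ring-buffer split copy; a minimal illustration follows (not squeezelite's actual buffer code, the names are mine):
Code:
/* Minimal ring-buffer read showing the wrap-around I mean -- not
 * squeezelite's actual buffer code, the names are mine. The buffer
 * holds at most size-1 bytes with this read/write convention. */
#include <stddef.h>
#include <string.h>

struct ring {
	unsigned char *buf;      /* base of the allocation              */
	size_t size;             /* e.g. 256 * 1024 for "-b 256"        */
	size_t readp, writep;    /* positions, always < size            */
};

static size_t ring_read(struct ring *r, unsigned char *dst, size_t n) {
	size_t used = r->writep >= r->readp
	            ? r->writep - r->readp
	            : r->size - r->readp + r->writep;
	if (n > used) n = used;
	size_t first = r->size - r->readp;        /* bytes before the wrap */
	if (first > n) first = n;
	memcpy(dst, r->buf + r->readp, first);
	memcpy(dst + first, r->buf, n - first);   /* remainder from start  */
	r->readp = (r->readp + n) % r->size;
	return n;
}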
-
2021-01-15, 20:01 #48
Join Date: Jan 2021 | Posts: 15
@ ralphy
I don't see any volatile modifier in the code although there is data shared among threads, e.g. the read/write positions of the buffers.
Is that by design?
Everything seems to be synchronized via mutexes; however, if I understand volatile correctly, data updated in one thread could be missed by another thread if that thread still has the old value cached.
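To make clear which pattern I mean, here is a minimal illustration (again not squeezelite's actual code); as far as I know, POSIX specifies that mutex lock/unlock synchronizes memory, which may be why volatile isn't needed, but I'd like to confirm:
Code:
/* Illustration of the pattern I mean -- not squeezelite's actual code.
 * The shared position is only touched while holding the mutex; POSIX
 * specifies that the lock/unlock pair synchronizes memory, so the
 * writer's update becomes visible to the reader without volatile. */
#include <pthread.h>
#include <stddef.h>

static size_t writep;                        /* shared between threads */
static pthread_mutex_t buf_mutex = PTHREAD_MUTEX_INITIALIZER;

static void producer_advance(size_t n) {
	pthread_mutex_lock(&buf_mutex);
	writep += n;                             /* update under the lock  */
	pthread_mutex_unlock(&buf_mutex);        /* publishes the update   */
}

static size_t consumer_snapshot(void) {
	pthread_mutex_lock(&buf_mutex);          /* acquires latest value  */
	size_t w = writep;
	pthread_mutex_unlock(&buf_mutex);
	return w;
}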
-
2021-01-17, 13:45 #49
Join Date: Mar 2016 | Location: Germany | Posts: 14
I have the same issue.
Currently I am running LMS 8.1.0 on a Synology DS220+ as a Docker container.
It works flawlessly, but when I use Squeezelite X on Win 10 20H2 I get the same dropouts mentioned here: very high CPU load and a 30-second dropout.
Same on my notebook and PC.
-
2021-01-18, 06:52 #50
squeezelite-1.9.8-1324-test4-win64.zip adds a short delay before both of the continue statements in slimproto.c.
Ralphy
1-Touch, 5-Classics, 3-Booms, 2-UE Radio
Squeezebox client builds donations always appreciated.