Home of the Squeezebox™ & Transporter® network music players.
  1. #11 | Junior Member | Join Date: May 2021 | Posts: 12
    Now let's look at the squeezelite logs:

    Code:
    [01:16:08.861] stream_thread:428 streambuf read 3132 bytes
    [01:16:08.961] stream_thread:428 streambuf read 1566 bytes
    [01:16:09.026] sendSTAT:166 ms_played: 328110 (frames_played: 14470092 device_frames: 441)
    [01:16:09.026] sendSTAT:195 STAT: STMt
    [01:16:09.026] sendSTAT:200 received bytesL: 12661166 streambuf: 2093499 outputbuf: 3325264 calc elapsed: 328110 real elapsed: 328203 (diff: -93) device: 10 delay: 0        <-- first STMt after LMS closed connection
    [01:16:09.026] process:528 strm
    [01:16:09.026] process_strm:280 strm command t
    [01:16:09.026] sendSTAT:166 ms_played: 328110 (frames_played: 14470092 device_frames: 441)
    [01:16:09.026] sendSTAT:195 STAT: STMt
    [01:16:09.026] sendSTAT:200 received bytesL: 12661166 streambuf: 2093499 outputbuf: 3325264 calc elapsed: 328110 real elapsed: 328203 (diff: -93) device: 10 delay: 0
    [01:16:09.062] stream_thread:428 streambuf read 3652 bytes
    
    ... reading from the buffer continues
    
    [01:18:31.890] stream_thread:428 streambuf read 3340 bytes                              <-- frame 17677 with TCP FIN arrives
    [01:18:31.991] stream_thread:428 streambuf read 3131 bytes
    [01:18:32.021] process:528 strm
    [01:18:32.021] process_strm:280 strm command t
    [01:18:32.021] sendSTAT:166 ms_played: 471080 (frames_played: 20775069 device_frames: 441)
    [01:18:32.021] sendSTAT:195 STAT: STMt
    [01:18:32.021] sendSTAT:200 received bytesL: 16819726 streambuf: 2097151 outputbuf: 3306184 calc elapsed: 471080 real elapsed: 471203 (diff: -123) device: 10 delay: 0
    [01:18:32.092] stream_thread:428 streambuf read 3445 bytes
    [01:18:32.193] stream_thread:428 streambuf read 1253 bytes
    
    ... reading from the buffer continues
    
    [01:19:40.919] stream_thread:428 streambuf read 4280 bytes
    [01:19:41.019] stream_thread:428 streambuf read 3236 bytes
    [01:19:41.021] sendSTAT:166 ms_played: 540080 (frames_played: 23817969 device_frames: 441)
    [01:19:41.021] sendSTAT:195 STAT: STMt
    [01:19:41.021] sendSTAT:200 received bytesL: 18902856 streambuf: 2097151 outputbuf: 3320872 calc elapsed: 540080 real elapsed: 540203 (diff: -123) device: 10 delay: 0
    [01:19:41.120] stream_thread:428 streambuf read 2296 bytes
    [01:19:41.220] stream_thread:428 streambuf read 3654 bytes
    [01:19:41.321] stream_thread:428 streambuf read 2818 bytes
    [01:19:41.422] stream_thread:428 streambuf read 3237 bytes
    [01:19:41.522] stream_thread:428 streambuf read 3445 bytes                               <-- last read from the stream buffer
    [01:19:43.021] sendSTAT:166 ms_played: 542080 (frames_played: 23906169 device_frames: 441)
    [01:19:43.021] sendSTAT:195 STAT: STMt
    [01:19:43.021] sendSTAT:200 received bytesL: 18918306 streambuf: 2049237 outputbuf: 3334120 calc elapsed: 542080 real elapsed: 542203 (diff: -123) device: 10 delay: 0
    [01:19:44.019] process:528 strm
    [01:19:44.019] process_strm:280 strm command t
    [01:19:44.019] sendSTAT:166 ms_played: 543080 (frames_played: 23950269 device_frames: 441)
    [01:19:44.019] sendSTAT:195 STAT: STMt
    [01:19:44.019] sendSTAT:200 received bytesL: 18918306 streambuf: 2020947 outputbuf: 3322312 calc elapsed: 543080 real elapsed: 543203 (diff: -123) device: 10 delay: 0
    [01:19:45.020] sendSTAT:166 ms_played: 544080 (frames_played: 23994369 device_frames: 441)
    [01:19:45.020] sendSTAT:195 STAT: STMt
    [01:19:45.020] sendSTAT:200 received bytesL: 18918306 streambuf: 1994641 outputbuf: 3301288 calc elapsed: 544080 real elapsed: 544203 (diff: -123) device: 10 delay: 0
    [01:19:47.021] sendSTAT:166 ms_played: 546096 (frames_played: 24082569 device_frames: 441)
    [01:19:47.021] sendSTAT:195 STAT: STMt
    [01:19:47.021] sendSTAT:200 received bytesL: 18918306 streambuf: 1940677 outputbuf: 3314536 calc elapsed: 546096 real elapsed: 546203 (diff: -107) device: 10 delay: 16
    
    ...
    
    [01:20:59.017] sendSTAT:200 received bytesL: 18918306 streambuf: 2019 outputbuf: 209176 calc elapsed: 618036 real elapsed: 618203 (diff: -167) device: 10 delay: 16
    [01:21:00.018] sendSTAT:166 ms_played: 618638 (frames_played: 27281711 device_frames: 441)
    [01:21:00.018] sendSTAT:195 STAT: STMo
    [01:21:00.018] sendSTAT:200 received bytesL: 18918306 streambuf: 2019 outputbuf: 0 calc elapsed: 618638 real elapsed: 619203 (diff: -565) device: 10 delay: 16
    [01:21:00.018] process:528 strm
    [01:21:00.018] process_strm:280 strm command t
    [01:21:00.018] sendSTAT:166 ms_played: 618638 (frames_played: 27281711 device_frames: 441)
    [01:21:00.018] sendSTAT:195 STAT: STMt
    [01:21:00.018] sendSTAT:200 received bytesL: 18918306 streambuf: 2019 outputbuf: 0 calc elapsed: 618638 real elapsed: 619203 (diff: -565) device: 10 delay: 16
    [01:21:00.028] process:528 strm
    [01:21:00.028] process_strm:280 strm command p
    [01:21:00.028] sendSTAT:166 ms_played: 618654 (frames_played: 27281711 device_frames: 441)
    [01:21:00.028] sendSTAT:195 STAT: STMp
    [01:21:00.028] sendSTAT:200 received bytesL: 18918306 streambuf: 2019 outputbuf: 0 calc elapsed: 618654 real elapsed: 619219 (diff: -565) device: 10 delay: 32
    [01:21:00.028] process_strm:316 pause interval: 0
    [01:21:00.160] process:528 strm
    [01:21:00.160] process_strm:280 strm command t
    squeezelite performs its last successful read from the stream buffer at 01:19:41.522, which matches the arrival of the RST packet from the server. Once Windows receives the RST packet, it destroys the TCP connection and the associated buffers, so squeezelite has nothing left to read from. Once squeezelite's buffer empties, it pauses the music and sends STMo to the server; 20-30 seconds later, playback resumes.

  2. #12 | Junior Member | Join Date: May 2021 | Posts: 12
    We have a lot of different variables at play here: song size, song bitrate, LMS, squeezelite, the squeezelite stream buffer, the server TCP send buffer, the client TCP receive buffer, and the server TCP FIN timeout. Let's briefly look at what each of these contributes.


    Song size and song bitrate - we don't have any control over these, but they obviously play a big role in this failure. The song needs to be big enough to fill all the buffers, and the bitrate needs to be low enough to slow down how fast the buffers are drained. Most songs do not meet the requirements to trigger this issue, though.

    LMS is not really doing anything wrong: it sends the data out as fast as it can, and once all of the data has been written to the socket, it closes it.

    squeezelite is also not at fault here: it is reading data as fast as it can when the socket suddenly disappears on it. I ran this same test with squeezeplay and the failure mode was exactly the same. Squeezeplay uses a larger buffer (3145728 bytes vs 2097152), so everything failed closer to the end of the song.

    TCP send buffer - the size of this buffer does not really matter, because the kernel TCP stack sends the TCP FIN packet to the client only after this buffer has been emptied.

    squeezelite stream buffer - this can be increased, which will reduce how often the issue happens, but it will not eliminate it. You can always find a file that is larger than the buffer.

    TCP receive buffer and TCP FIN timeout - the combination of these is what causes the issue. If the TCP receive buffer holds more data than the client can drain within the TCP FIN timeout at the song's bitrate, we are going to have a failure. To test this, I increased the TCP FIN timeout (net.ipv4.tcp_fin_timeout) on the server from 60 sec to 120 sec, and that of course allowed enough time for the client to finish reading data from the TCP buffers. The client finished reading 84 seconds after the server finished sending.
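    As a back-of-the-envelope check of that condition, here is a small Python sketch. The buffer size matches squeezelite's default; the 192 kbit/s bitrate is illustrative, not measured:

```python
# Failure condition: the client needs longer to drain its buffered data
# than the server's FIN timeout allows.

def drain_outlasts_fin_timeout(buffered_bytes, bitrate_bps, fin_timeout_s):
    """True if draining buffered_bytes at the song's bitrate takes longer
    than the server's FIN timeout."""
    drain_time_s = buffered_bytes / (bitrate_bps / 8)  # bitrate is in bits/s
    return drain_time_s > fin_timeout_s

# 2 MiB of buffered data at 192 kbit/s takes ~87 s to drain.
print(drain_outlasts_fin_timeout(2 * 1024 * 1024, 192_000, 60))   # True: 87 s > 60 s
print(drain_outlasts_fin_timeout(2 * 1024 * 1024, 192_000, 120))  # False: 87 s < 120 s
```

    With the default 60 sec timeout the connection is reset mid-drain; doubling the timeout leaves headroom, which lines up with the observed 84-second drain.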

    [Attachment: tcp_session_ok_large_fin_timeout.jpg]


    What are some possible solutions? I haven't given these much thought, so some might not make sense.

    Increasing the squeezelite stream buffer - that's what most of the people who experience this issue are doing now.

    Increasing the TCP FIN timeout (net.ipv4.tcp_fin_timeout) on the server - this is a system-wide setting and might not be a good idea if LMS is not running on a dedicated box.
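    For reference, the current value is visible on any Linux server; a minimal Python read of the setting:

```python
# Read the system-wide FIN timeout on a Linux box. Changing it requires
# root (e.g. sysctl -w net.ipv4.tcp_fin_timeout=120) and affects every
# TCP socket on the machine, not just LMS.
with open("/proc/sys/net/ipv4/tcp_fin_timeout") as f:
    fin_timeout = int(f.read())
print(fin_timeout)  # 60 by default on stock Linux
```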

    LMS setting the FIN timeout at the socket level using SO_LINGER - not sure if this is possible though.
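    Setting SO_LINGER itself is certainly possible at the socket level (Python sketch below; LMS being Perl, the same setsockopt call exists there). One caveat worth flagging: as far as I know, SO_LINGER only controls how close() handles unsent data in the send buffer; it does not extend the kernel's FIN-WAIT-2 timer, which is what net.ipv4.tcp_fin_timeout governs, so it may not actually help here.

```python
import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# struct linger { int l_onoff; int l_linger; }
# l_onoff=1, l_linger=30: close() blocks for up to 30 s while unsent
# data is flushed and acknowledged.
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 30))

onoff, seconds = struct.unpack(
    "ii", s.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 8))
print(onoff, seconds)  # 1 30
s.close()
```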

    Squeezelite explicitly setting the TCP receive buffer size at the socket level using SO_RCVBUF - is this possible?
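    It is possible: SO_RCVBUF can be set before connect(). Sketched in Python here (in squeezelite's C source it would be the same setsockopt call). A smaller receive buffer means less undelivered data is stranded in the kernel when the server closes the connection, at the cost of more round trips while streaming:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Set before connect() so it takes effect for window negotiation.
# Linux doubles the requested value to account for bookkeeping overhead
# and caps the result at net.core.rmem_max.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
effective = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(effective)  # 131072 on Linux: the kernel doubles the request
s.close()
```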

    LMS waiting to close the socket until Squeezelite notifies it that all of the data has been received - this would be the only way to cover 100% of the cases, but is it possible to do?
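    It should be technically possible: slimproto already has a control channel (the STAT/strm traffic in the logs above), so a "fully received" notification could in principle ride on it. A hypothetical sketch of the idea over a plain loopback socket, where the server half-closes with shutdown() and waits for the client's acknowledgement before the final close:

```python
import socket
import threading

PAYLOAD = b"x" * 100_000  # stand-in for the tail of a song

def server(listener):
    conn, _ = listener.accept()
    conn.sendall(PAYLOAD)
    conn.shutdown(socket.SHUT_WR)   # send FIN but keep the read side open
    assert conn.recv(4) == b"DONE"  # wait until the client confirms receipt
    conn.close()                    # only now tear the socket down fully

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.create_connection(listener.getsockname())
received = 0
while True:
    chunk = client.recv(8192)
    if not chunk:               # EOF: the server's FIN arrived
        break
    received += len(chunk)
client.sendall(b"DONE")         # tell the server everything was read
client.close()
t.join()
listener.close()
print(received)  # 100000
```

    This is not how the slimproto stream works today; it is only meant to show that the socket API supports holding the final close until the receiver has drained everything.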


    Thanks for reading and sorry for the long posts!

  3. #13 | Junior Member | Join Date: May 2021 | Posts: 12
    The last thing I was curious about is why seemingly only Windows users experience this issue. Logically, the underlying condition should occur on Linux just the same, so I ran a test on Linux to find out.

    [Attachment: tcp_session_linux.jpg]

    As we can see, the TCP flow timing on Linux looks very similar to Windows. The server sends a TCP FIN packet, and the client takes another 150 seconds to drain the TCP receive buffer and reply with FIN, ACK. The server then responds with RST instead of the final ACK, because the socket on the server has already been destroyed. The only difference is that Linux does not send TCP Window Update packets to the server as the buffer is drained, so it does not trigger an RST from the server before the client has had time to read all the data from the buffer.

  4. #14 | ralphy | Senior Member | Join Date: Jan 2006 | Location: Canada | Posts: 2,880
    Thanks for taking the time to research what's happening. Sorry it's taken so long for me to reply; there is a lot of information to digest and I wanted to take the time to review it all.

    One thought I had was whether enabling TCP keep-alive would prevent the server from closing the socket. I've sent you a PM with a test Windows squeezelite build that enables 30-second TCP keep-alives on the streaming socket. It's 32-bit, since my 64-bit build laptop is waiting for a replacement power adapter to arrive.

    You should see this in the squeezelite debug log.

    Code:
    [13:52:01.701] set_keepalive:294 tcp keepalive enabled
    and I've included a patch file in the zip with the code change.
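    For anyone curious what such a change looks like at the socket level, here are the equivalent calls sketched in Python. TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT are the Linux option names; a Windows build would presumably use SIO_KEEPALIVE_VALS instead, and the values below are illustrative rather than taken from the patch:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)   # idle seconds before first probe
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)  # seconds between probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)     # unanswered probes before giving up
keepalive_on = s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
print(keepalive_on)  # 1
s.close()
```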
    Ralphy

    1-Touch, 5-Classics, 3-Booms, 2-UE Radio
    Squeezebox client builds donations always appreciated.

  5. #15 | Junior Member | Join Date: May 2021 | Posts: 12
    Ralphy, thanks so much for looking into this.

    The good news is that your change did take effect, and the client machine now sends TCP Keep-Alive packets every 30 seconds after the last transmission from the server (frames 22775 and 23007 in the Wireshark trace below).

    The bad news is that the Linux TCP stack does not seem to care. My guess is that once Linux starts the TCP FIN timeout timer after the FIN packet is sent out, it does not reset it, even if a Keep-Alive packet is received.


    [Attachment: tcp_session_rst_02_v1389.png]
