Why is LMS using such an old SQLite version?



cparker
2018-08-09, 04:33
Hi

Is there any reason why LMS is using such an old SQLite version? 3.7.7.1 dates back to 2011.

Looking through the SQLite release log, there seem to be pretty impressive performance gains made over the years, so I'm just wondering why this hasn't been rolled into LMS. (DBD-SQLite-1.58 was released in March 2018.)

Cheers

PS: Can we have the old forum software back? Thanks :p (Scrub that, I found the link at the bottom... phew :) )

mherger
2018-08-09, 05:15
> Is there any reason why LMS is using such an old SQLite version, 3.7.7.1
> which dates back to 2011.

There are many potential reasons. Call it laziness, or "never touch a
running system", or lack of priorities, premature optimization, lack of
time...

Updating the library for all platforms is virtually impossible. Dealing
with the potential issues of running different versions on different
platforms might lead to some unwanted decisions (eg. rollback, drop
support for some platforms etc.) and certainly would consume
considerable amounts of time. Which probably would better be spent
elsewhere.

Maybe there just isn't any compelling reason to update.

> Looking through the SQLite release log there seem to be pretty
> impressive performance gains made over the years so just wondering why
> this hasn't been rolled into LMS. (DBD-SQLite-1.58 released March
> 2018)

Did you carefully read where those improvements would apply? Does it
apply to LMS? Did you confirm it's any faster at all? And where do you
see the database as the bottleneck, btw?

I'd be happy to see a thorough comparison of current LMS and LMS on
latest SQLite, scanning libraries of various sizes (5k, 30k, 100k) on
various platforms (Pi, Mac, Windows, desktop Linux). Or at least one of
them showing that "impressive performance gain". What would you measure?
Scanning time? Browsing performance? Time to playback?

> PS Can we have the old forum software back.. thanks :p

Can it get any older?... It's vBulletin 4, announced in 2009. Its
successor, v5, was announced in 2012. We never updated beyond security
updates.

--

Michael

mherger
2018-08-09, 06:38
Ok, I thought I'd give it the quickest try I could, running a test on my dev machine (2017 MBP, SSD, 16GB RAM). Here are the results:

DBD::SQLite 1.58 (sqlite 3.22.0)

Discovering files/directories: /Users/mh/Documents/SqueezeZeugs/flac100000 (107993 of 107993) Complete 00:00:19
Scanning new music files: /Users/mh/Documents/SqueezeZeugs/flac100000 (99663 of 99663) Complete 00:06:01
Building full text index (7 of 7) Complete 00:05:30
Create library views (2 of 2) Complete 00:00:00
Database Optimize (2 of 2) Complete 00:00:08

The server has finished scanning your media library.
Total Time: 00:11:58 (Thursday, August 9, 2018 / 2:57 PM)

Discovering files/directories: /Users/mh/Documents/SqueezeZeugs/flac100000 (107993 of 107993) Complete 00:00:17
Scanning new music files: /Users/mh/Documents/SqueezeZeugs/flac100000 (99663 of 99663) Complete 00:06:11
Building full text index (7 of 7) Complete 00:05:35
Create library views (2 of 2) Complete 00:00:00
Database Optimize (2 of 2) Complete 00:00:08

The server has finished scanning your media library.
Total Time: 00:12:11 (Thursday, August 9, 2018 / 3:34 PM)


DBD::SQLite 1.34_01 (sqlite 3.7.7.1)

Discovering files/directories: /Users/mh/Documents/SqueezeZeugs/flac100000 (107993 of 107993) Complete 00:00:20
Scanning new music files: /Users/mh/Documents/SqueezeZeugs/flac100000 (99663 of 99663) Complete 00:08:43
Building full text index (7 of 7) Complete 00:00:21
Create library views (2 of 2) Complete 00:00:00
Database Optimize (2 of 2) Complete 00:00:15

The server has finished scanning your media library.
Total Time: 00:09:39 (Thursday, August 9, 2018 / 3:08 PM)

Discovering files/directories: /Users/mh/Documents/SqueezeZeugs/flac100000 (107993 of 107993) Complete 00:00:19
Scanning new music files: /Users/mh/Documents/SqueezeZeugs/flac100000 (99663 of 99663) Complete 00:08:55
Building full text index (7 of 7) Complete 00:00:30
Create library views (2 of 2) Complete 00:00:01
Database Optimize (2 of 2) Complete 00:00:17

The server has finished scanning your media library.
Total Time: 00:10:02 (Thursday, August 9, 2018 / 3:20 PM)


There clearly is an improvement in some parts of the newer code, but unfortunately a massive loss in another part. Now you could say "I don't use FTS". But I do.

That might show you that a decision for or against can't easily be made. Things are a bit more complicated than they might seem.

cparker
2018-08-19, 02:25
Thanks for the research. Long story short: my interest is due to a low-powered Wintel system grinding to a halt when doing a full rescan due to TrackStat. So I was looking at the DB and noticed the old version, hence the interest. Is there any way of decompressing/patching a Wintel system to play around with this further?

Cheers

mherger
2018-08-19, 03:34
> Thanks for the research. Long story short: my interest is due to a
> low-powered Wintel system grinding to a halt when doing a full rescan
> due to TrackStat. So I was looking at the DB and noticed the old
> version, hence the interest. Is there any way of decompressing/patching
> a Wintel system to play around with this further?

You can run LMS from the source if you're willing to install ActiveState
Perl 5.14 (I could provide you with a copy if needed).

I guess you've already upped the memory parameter for the database? It
can make a huge difference (depending on library size, storage type etc.).
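
For illustration, this kind of setting essentially tunes SQLite's page
cache and temp storage per connection. A plain-sqlite3 sketch with
example values - these are not LMS's actual pref names or defaults, and
library.db stands in for wherever your database lives:

sqlite3 library.db <<'EOF'
-- negative cache_size is in KiB: roughly a 256MB page cache
PRAGMA cache_size = -262144;
-- keep temp tables and indices in RAM instead of on disk
PRAGMA temp_store = MEMORY;
EOF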

How large is your library?
What is the bottleneck? Memory? CPU? Disk I/O? If you run out of memory,
then any other optimization might be irrelevant.
Do you know what kind of query it is that plugin is chewing so badly on?

--

Michael

cparker
2018-08-20, 04:37
>> Thanks for the research. Long story short: my interest is due to a
>> low-powered Wintel system grinding to a halt when doing a full rescan
>> due to TrackStat. So I was looking at the DB and noticed the old
>> version, hence the interest. Is there any way of decompressing/patching
>> a Wintel system to play around with this further?
>
> You can run LMS from the source if you're willing to install ActiveState
> Perl 5.14 (I could provide you with a copy if needed).
>
> I guess you've already upped the memory parameter for the database? It
> can make a huge difference (depending on library size, storage type etc.).
>
> How large is your library?
> What is the bottleneck? Memory? CPU? Disk I/O? If you run out of memory,
> then any other optimization might be irrelevant.
> Do you know what kind of query it is that plugin is chewing so badly on?

Hi Michael

That would be great if you could supply me with the source.

My library is 30k tracks (on a NAS) and the bottleneck is I/O for sure: the SATA died and wiped out the disks, so I rebuilt the machine with Win7 booting from a USB drive. It has 8 (i7) CPU cores and 16GB RAM, so it is strong enough. I can't change the OS due to other software; it works fine day to day, but it definitely chews on the DB rebuild when TrackStat does its massive update query after a full rescan. So I would like to play around with the database side of things, and I'm happy to share anything useful I find.

Thanks in advance

mherger
2018-08-20, 05:31
> My library is 30k tracks (on a NAS) and the bottleneck is I/O for sure:
> the SATA died and wiped out the disks, so I rebuilt the machine with
> Win7 booting from a USB drive.

You're running LMS from a different computer than where the files are
saved? This kind of I/O might be the bottleneck during the scan. But I
could imagine that when TrackStat is doing its job, it's LMS server side
only, no access to the files. I might be wrong. But please double check,
because your machine seems more than powerful enough to process 30k
tracks. Check the resource monitor while TrackStat is running. Monitor
disk, network, cpu and memory usage.

> It has 8 (i7) CPU cores and 16GB RAM so is strong

Did you check LMS' memory settings for the database?

--

Michael

cparker
2018-08-23, 07:33
> You're running LMS from a different computer than where the files are
> saved? This kind of I/O might be the bottleneck during the scan. But I
> could imagine that when TrackStat is doing its job, it's LMS server side
> only, no access to the files. I might be wrong. But please double check,
> because your machine seems more than powerful enough to process 30k
> tracks. Check the resource monitor while TrackStat is running. Monitor
> disk, network, cpu and memory usage.

Correct, yes, the music is on a powerful NAS box, and my setup has always been like this, so I think it's due to the laptop now running from USB2. Ouch!

>> It has 8 (i7) CPU cores and 16GB RAM so is strong
>
> Did you check LMS' memory settings for the database?

Yep, it's set to the maximum, 2GB RAM.

Cheers

mherger
2018-09-11, 00:41
If you want to run LMS from the source, grab a copy from
https://github.com/Logitech/slimserver. Then run "perl slimserver.pl".
Running from the source would allow you to replace DBD::SQLite (inside
the CPAN folder). Please share your experience. I'm still considering
updating... Just have to figure out what was causing the slow-down in
the fulltext indexing.
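
A minimal sketch of those steps; the CPAN/arch destination is an
assumption (the exact subdirectory depends on your Perl version and
platform), so check your checkout:

git clone https://github.com/Logitech/slimserver.git
cd slimserver
perl slimserver.pl
# a DBD::SQLite built against the same Perl would then replace the
# bundled copy, somewhere under e.g.
#   CPAN/arch/<perl-version>/<platform>/auto/DBD/SQLite/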

--

Michael

Roland0
2018-09-11, 12:42
> Running from the source would allow you to replace DBD::SQLite (inside
> the CPAN folder). Please share your experience. I'm still considering
> updating... Just have to figure out what was causing the slow-down in
> the fulltext indexing.


I've been running LMS with updated modules / libraries (http://forums.slimdevices.com/showthread.php?107040-Howto-update-perl-modules-bundled-with-LMS) (DBD::SQLite 1.54, sqlite 3.13.0) since 03/2017 on an RPi3 without any issues.
I can confirm that fulltext indexing is painfully slow; however, since I haven't tried it with any other version of SQLite, I have no comparison.

mherger
2018-09-11, 21:54
> I've been running LMS with updated modules / libraries
> (http://forums.slimdevices.com/showthread.php?107040-Howto-update-perl-modules-bundled-with-LMS)
> (DBD::SQLite 1.54 (sqlite 3.13.0)) since 03/2017 on a RPi3 without any
> issues.

Great! Any improvements you've experienced?

> I can confirm that fulltext indexing is painfully slow, however, since I
> haven't tried it with any other version of SQLite, I have no comparison.

That's an odd one. In my previous tests this was consistent: every test
I did showed a massive slowdown in FTS. But I always deleted the
library.db to start from scratch for every test. Now I repeated the test
for the same collection, without deleting that file, and performance was
as before (plus the speed-up in other aspects). As if re-using a file was
much faster than growing it from zero.
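
For anyone reproducing this cold-vs-warm comparison: "wipecache"
(rebuild from scratch) and "rescan" (re-use the existing library.db) are
standard LMS CLI commands, and can be triggered over the same JSON-RPC
interface used by the benchmark script later in this thread. A sketch,
with the host name as a placeholder:

# cold: wipe the database and rebuild it from zero
curl -s -H "Content-Type: application/json" -X POST \
  -d '{"id":1,"method":"slim.request","params":["-",["wipecache"]]}' \
  http://localhost:9000/jsonrpc.js

# warm: rescan, re-using the existing library.db
curl -s -H "Content-Type: application/json" -X POST \
  -d '{"id":1,"method":"slim.request","params":["-",["rescan"]]}' \
  http://localhost:9000/jsonrpc.js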

I'm now running my own system at home on an oldish Intel Atom CPU with
the latest DBD::SQLite. In this "real life" scenario, with about 20k
tracks and several plugins which use the database extensively in some
aspect (Music And Artist!), it's about 20% faster overall, with FTS
being pretty much the same as before.

I'll try to update my pCP based system, too, to get another datapoint.
And I'll run a bunch more tests with my testing environment and the 100k
library, using the tweaked DBD::SQLite update (stock vs. some stuff
disabled, plus ICU support added). And then I'd have to figure out how
to compile this for Windows...

--

Michael

DJanGo
2018-09-12, 02:52
> That's an odd one. In my previous tests this was consistent: every test
> I did showed a massive slowdown in FTS. But I always deleted the
> library.db to start from scratch for every test. Now I repeated the test
> for the same collection, without deleting that file, and performance was
> as before (plus the speed-up in other aspects). As if re-using a file was
> much faster than growing it from zero.

Hi Michael,

Is that because of /usr/share/squeezeboxserver/lib/DBIx/Migration.pm,
which calls /usr/share/squeezeboxserver/SQL/SQLite/schema_20_up.sql, which does:

DROP TABLE IF EXISTS fulltext;
DROP TABLE IF EXISTS fulltext_terms;
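
One hedged way to check whether those FTS tables really are dropped and rebuilt between runs is to inspect the schema before and after a scan; the library.db path below is a placeholder (it varies by install), and the table names are the ones from the SQL above:

sqlite3 /path/to/library.db \
  "SELECT name FROM sqlite_master WHERE name LIKE 'fulltext%';"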

DJanGo
2018-09-12, 06:45
Hi Michael,
To answer my own "question" - maybe...

E.g. running a clean scan from scratch -> no library.db (with the std. "old" SQLite version) on 7.9.2 (latest) needs 38 min.
Running a clean scan over the GUI on the same hardware/software needs 22 min.
There is some very minor difference between using DROP TABLE IF EXISTS fulltext; / DROP TABLE IF EXISTS fulltext_terms; and not.

Roland0
2018-09-13, 16:07
>> I've been running LMS with updated modules / libraries
>> (http://forums.slimdevices.com/showthread.php?107040-Howto-update-perl-modules-bundled-with-LMS)
>> (DBD::SQLite 1.54, sqlite 3.13.0) since 03/2017 on an RPi3 without any
>> issues.
>
> Great! Any improvements you've experienced?

Unfortunately, this was a new setup on new hardware, so I don't have anything to compare it to.



> That's an odd one. In my previous tests this was consistent: every test
> I did showed a massive slowdown in FTS. But I always deleted the
> library.db to start from scratch for every test. Now I repeated the test
> for the same collection, without deleting that file, and performance was
> as before (plus the speed-up in other aspects). As if re-using a file was
> much faster than growing it from zero.
>
> I'm now running my own system at home on an oldish Intel Atom CPU with
> the latest DBD::SQLite. In this "real life" scenario, with about 20k
> tracks and several plugins which use the database extensively in some
> aspect (Music And Artist!), it's about 20% faster overall, with FTS
> being pretty much the same as before.


I've just updated my LMS to use the latest DBD::SQLite (1.58), so I can run some benchmarks, then downgrade to 1.34 and re-run them.
The question is how to benchmark (re-scanning the library is simple, but not really representative of normal use) - how did you measure the 20% performance gain?

Currently, I'm using this script to simulate regular use:


#!/bin/bash

# adjust genre_id, playlist_id, search term and URL below

BCMDS='{"id":1,"method":"slim.request","params":["-", ["playlists", 0, 999]]}
{"id":1,"method":"slim.request","params":["-", ["titles", 0, 99,"tags:acCdefgGiJklnpPDqrROstTuwy"]]}
{"id":1,"method":"slim.request","params":["-", ["albums", 0, 999,"genre_id:771"]]}
{"id":1,"method":"slim.request","params":["-", ["artists", 0, 999,"genre_id:771"]]}
{"id":1,"method":"slim.request","params":["-", ["search", 0, 999,"term:Front"]]}
{"id":1,"method":"slim.request","params":["-", ["playlists", "tracks", 0, 999,"playlist_id:475485", "tags:acCdefgGiJklnpPDqrROstTuwy"]]}'
LCNT=1
TS=$(date +%s.%N)
VERBOSE=1 # set to 0 to get CSV values

# run each JSON-RPC query in turn against the LMS web interface
while IFS= read -r line ; do
    [ $VERBOSE -ne 0 ] && echo "running query $LCNT"
    RES=$(curl -Ss -H "Content-Type: application/json" -X POST -d "${line}" -o - "http://raspi3-64:9000/jsonrpc.js")
    RC=$?
    [ $RC -ne 0 ] && echo "curl error: $RC" && break
    echo "$RES" | grep -q '"result":{"count":0}' && echo "No results for $line"
    LCNT=$((LCNT+1))
done <<< "$BCMDS"
TE=$(date +%s.%N)
RT=$(bc <<< "$TE-$TS")
if [ $VERBOSE -ne 0 ]; then
    echo "runtime: $RT"
else
    echo "$(date +'%H:%M:%S');$RT"
fi


Running this in a loop

while true; do ./bench-lms.sh; done

puts 100% load on the server and should be fully DB-bound.
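
To compare two DBD::SQLite builds with it, one could collect the CSV output over a fixed number of runs (this assumes VERBOSE is set to 0 in the script above; the output file name is just an example):

for i in $(seq 1 20); do ./bench-lms.sh; done >> timings-sqlite-1.58.csv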

mherger
2018-09-13, 20:36
> I've just updated my LMS to use the latest DBD::SQLite (1.58), so I can
> run some benchmarks, then downgrade to 1.34 and re-run them.
> The question is how to benchmark (re-scanning the library is simple, but
> not really representative of normal use) - how did you measure the 20%
> performance gain?

I only tested the scan. It's very heavy on database stuff, with a good
mix of read and write. When scanning the collection is 20% faster,
something has been improved - either optimized query paths, data
processing, I/O, or whatever.

> Running this in a loop
>
> Code:
> --------------------
> while true; do ./bench-lms.sh; done
> --------------------
>
> puts 100% load on the server and should be fully DB-bound.

Agreed, benchmarking is difficult to do correctly. Running this kind of
stuff in a loop would likely only hit SQLite's buffer - which is not
representative either. In general I've had little reason for complaints
with my setup in day-to-day use. But I know that eg. some of Erland's
plugins are extremely heavy on database stuff. If you're running one of
those, that might be interesting, too.

One other reason why I was looking into updating SQLite is that it
supports new features in the fulltext indexing I'd like to leverage.

--

Michael

mherger
2018-09-14, 04:58
FWIW: I just pushed support for DBD::SQLite 1.58 on select platforms to
7.9. See https://forums.slimdevices.com/showthread.php?109460 for details.
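
To verify which build is actually loaded after such an update, one
option is to ask DBD::SQLite itself - run this with the same Perl and
module path that LMS uses, otherwise you'll see your system's copy
($DBD::SQLite::sqlite_version is the version of the bundled SQLite
library):

perl -MDBD::SQLite -e 'print "DBD::SQLite $DBD::SQLite::VERSION, sqlite $DBD::SQLite::sqlite_version\n"'

The same information shows up as "Database Version:" in the server information, as quoted later in this thread.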

--

Michael

Roland0
2018-09-15, 03:15
> Running this kind of stuff in a loop would likely only hit SQLite's buffer - which is not representative either.

Not sure about that. If I were to speculate: SQLite doesn't seem to do any real query caching, and the buffer seems very limited, leading to a weird spiking performance profile:
[attached chart; bar chart = seconds one execution of the benchmark takes]
No I/O; Database Memory Config: high (LMS consumes about 400MB RAM); nothing else running on the RPi3.


> In general I've had little reason for complaints
> with my setup in day-to-day use. But I know that eg. some of Erland's
> plugins are extremely heavy on database stuff. If you're running one of
> those, that might be interesting, too.

I've added some TrackStat queries:


BCMDS='{"id":1,"method":"slim.request","params":["-", ["playlists", 0, 999]]}
{"id":1,"method":"slim.request","params":["-", ["titles", 0, 99,"tags:acCdefgGiJklnpPDqrROstTuwy"]]}
{"id":1,"method":"slim.request","params":["-", ["albums", 0, 999,"genre_id:771"]]}
{"id":1,"method":"slim.request","params":["-", ["artists", 0, 999,"genre_id:771"]]}
{"id":1,"method":"slim.request","params":["aa:bb:cc:dd:ee:ff", ["trackstat", "statisticsjive", "statistics:topratedgenres", "_start:1","_itemsPerResponse:999"]]}
{"id":1,"method":"slim.request","params":["aa:bb:cc:dd:ee:ff", ["trackstat", "statisticsjive", "statistics:topratedartists", "_start:1","_itemsPerResponse:999"]]}
{"id":1,"method":"slim.request","params":["aa:bb:cc:dd:ee:ff", ["trackstat", "statisticsjive", "statistics:notcompletelyratedalbums", "_start:1","_itemsPerResponse:999"]]}
{"id":1,"method":"slim.request","params":["aa:bb:cc:dd:ee:ff", ["trackstat", "statisticsjive", "statistics:leastplayedalbums", "_start:1","_itemsPerResponse:999"]]}
{"id":1,"method":"slim.request","params":["aa:bb:cc:dd:ee:ff", ["trackstat", "statisticsjive", "statistics:mostplayednotrecent", "_start:1","_itemsPerResponse:999"]]}
{"id":1,"method":"slim.request","params":["aa:bb:cc:dd:ee:ff", ["trackstat", "statisticsjive", "statistics:mostplayednotrecentartists", "_start:1","_itemsPerResponse:999"]]}
{"id":1,"method":"slim.request","params":["aa:bb:cc:dd:ee:ff", ["trackstat", "statisticsjive", "statistics:partlyplayedartists", "_start:1","_itemsPerResponse:999"]]}
{"id":1,"method":"slim.request","params":["-", ["search", 0, 999,"term:Front"]]}
{"id":1,"method":"slim.request","params":["-", ["playlists", "tracks", 0, 999,"playlist_id:475485", "tags:acCdefgGiJklnpPDqrROstTuwy"]]}'

Results are comparable (one benchmark run takes 42s/26s); only now the spike occurs every third run.
Increasing the Database Memory Config to max shaves off ~1 sec, but doesn't do anything else (there is no significant increase in memory consumption, either).
I'll repeat these with a newly-built library.db and with the old SQLite version.

drmatt
2018-09-15, 04:16
Tbh, given how many apps go for the complexity of full-blown MySQL/Postgres when they are only storing a few configuration items, I'm amazed LMS works so well with SQLite underneath.


Transcoded from Matt's brain by Tapatalk

Mnyb
2018-09-15, 04:35
I did not get the latest version for some reason using the amd64 deb...

Logitech Media Server Version: 7.9.2 - 1536946607 @ Fri Sep 14 20:16:26 CEST 2018
Hostname: SqueezeVM
Server IP Address: 192.168.1.82
Server HTTP Port Number: 9000
Operating system: Debian - EN - utf8
Platform Architecture: x86_64-linux
Perl Version: 5.22.1 - x86_64-linux-gnu-thread-multi
Audio::Scan: 0.95
IO::Socket::SSL: 2.024
Database Version: DBD::SQLite 1.34_01 (sqlite 3.7.7.1)
Total Players Recognized: 5

mherger
2018-09-15, 06:40
> I did not get the latest version for some reason using the amd64 deb...
>
> Perl Version: 5.22.1 - x86_64-linux-gnu-thread-multi

The reason is explained in the announcement :-). I'll add that to the
todo list.
--

Michael

mherger
2018-09-15, 06:44
>> Running this kind of stuff in a loop would likely only hit SQLite's
>> buffer - which is not representative either.
>>
> Not sure about that. If I were to speculate: SQLite doesn't seem to do
> any real query caching, and the buffer seems very limited, leading to a
> weird spiking performance profile:

If I get that chart right, then your LMS is CPU bound. Which means it's
not waiting for I/O.

--

Michael

Mnyb
2018-09-15, 07:22
>> I did not get the latest version for some reason using the amd64 deb...
>>
>> Perl Version: 5.22.1 - x86_64-linux-gnu-thread-multi

> The reason is explained in the announcement :-). I'll add that to the
> todo list.

Dang, so Linux Mint 18 has such an old Perl version :)

mherger
2018-09-15, 07:34
> Dang, so Linux Mint 18 has such an old Perl version :)

Guess why I have 5.10 binaries? Because my system is still CentOS 6
based. That's an old Perl :-).

--

Michael

Mnyb
2018-09-15, 10:01
>> Dang, so Linux Mint 18 has such an old Perl version :)
>
> Guess why I have 5.10 binaries? Because my system is still CentOS 6
> based. That's an old Perl :-).

Respect for what you are doing. Perl seems to vary between 5.8 and 5.24 in the user base.
I'll test when it's available. No need to rush; I have a modest collection of 50k tracks and a reasonably fast machine anyway.

mherger
2018-09-15, 10:53
> Respect for what you are doing. Perl seems to vary between 5.8 and 5.24
> in the user base.

5.26 ;-)

> I'll test when it's available. No need to rush; I have a modest
> collection of 50k tracks and a reasonably fast machine anyway.

It should be available now. Please keep a copy of the old package: I
don't have a system running this Perl version, therefore I built it
using https://perlbrew.pl for the first time. If that works fine, it'll
save me some headaches in the future.

--

Michael

Roland0
2018-09-15, 12:55
>>> Running this kind of stuff in a loop would likely only hit SQLite's
>>> buffer - which is not representative either.
>>
>> Not sure about that. If I were to speculate: SQLite doesn't seem to do
>> any real query caching, and the buffer seems very limited, leading to a
>> weird spiking performance profile:
>
> If I get that chart right, then your LMS is CPU bound. Which means it's
> not waiting for I/O.

Yes, it's 100% CPU bound. There is zero I/O during the run (the DB files are already cached in RAM), which is one of the main reasons I prefer this benchmark to a library rebuild / file scan: the I/O of reading the files introduces a lot of overhead unrelated to DB performance, and increases variability (e.g. due to file system caching).
I've posted the (quite promising) results in the other thread (https://forums.slimdevices.com/showthread.php?109460).