
  1. #1
    Senior Member · Join Date: Mar 2017 · Posts: 3,682

    Announce: Bliss DSTM mixer

    This is a mixer for "Don't Stop the Music" that uses the results of bliss analysis to find suitable tracks. For details about bliss itself, please refer to its website (https://lelele.io/bliss.html).

    There are two parts to this mixer:

    1. A Linux/macOS/Windows app to analyse your music, save results to an SQLite database, and upload results to LMS
    2. An LMS plugin that contains pre-built mixer binaries for Linux (x86_64, arm, 64-bit arm), macOS (fat binary), and Windows


    The LMS plugin can be installed from my repo.

    Binaries for the analyser will be placed on the GitHub releases page. The analyser requires ffmpeg to be installed on Linux and macOS (via Homebrew); the required libraries are bundled with the Windows version. Contained within each ZIP is a README.md file with detailed usage steps. The current ZIPs can be downloaded from:



    As a quick guide:

    1. Install the LMS plugin
    2. Download the relevant ZIP of bliss-analyser
    3. Install ffmpeg (Linux or macOS only)
    4. Edit 'config.ini' in the bliss-analyser folder so that it contains the correct path to your music files and the correct LMS hostname or IP address (see the sketch after this list)
    5. Analyse your files with: bliss-analyser analyse
    6. Once analysed, upload DB to LMS with: bliss-analyser upload
    7. Choose 'Bliss' as DSTM mixer in LMS
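
    As a rough illustration of steps 3 to 6, the whole flow looks something like this on Linux (the config section and key names here are indicative only - the README.md inside each ZIP has the definitive format):

        # Step 3: install ffmpeg (package manager on Linux, Homebrew on macOS)
        sudo apt install ffmpeg      # Debian/Ubuntu
        brew install ffmpeg          # macOS

        # Step 4: config.ini - point the analyser at your music and your LMS server
        [Bliss]
        music=/home/user/Music
        db=bliss.db
        lms=192.168.1.10

        # Steps 5 and 6: analyse the files, then upload the resulting DB to LMS
        ./bliss-analyser analyse
        ./bliss-analyser upload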


    On a 2015-era 8-core i7 laptop with an SSD I can analyse almost 14,000 tracks/hour. Obviously this will vary depending upon track lengths, etc., but it gives a rough idea of how long the analysis stage will take.

    The analyser only stores relative paths in its database - hence you can analyse on one machine and run the mixer on another. For example, if your music is stored in /home/user/Music, then /home/user/Music/Artist/Album/01-Track.mp3 is stored in the database as Artist/Album/01-Track.mp3.
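
    To make the cross-machine case concrete (the Pi path below is just an example - the relative part is all that has to line up with the library LMS sees):

        Laptop used for analysis: /home/user/Music/Artist/Album/01-Track.mp3
        Stored in bliss.db as:    Artist/Album/01-Track.mp3
        Library on the LMS Pi:    /media/usb/Music/Artist/Album/01-Track.mp3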

    This mixer and analyser are Rust ports of the Bliss part of MusicSimilarity. I started that plugin to see if merging Essentia results with Musly results would improve things, and then discovered Bliss. For my music collection Bliss appears to create better mixes, and it is much faster than Essentia. Hence this plugin.

    to-bliss.py can be used to convert a MusicSimilarity DB file (if it contains bliss analysis) into a bliss.db - avoiding the need to re-analyse music that has already been analysed with bliss.
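
    Something along these lines (the exact arguments are not documented here, so check the script's own usage notes in MusicSimilarity before running it):

        # Convert an existing MusicSimilarity DB (with bliss analysis) to bliss.db,
        # then upload as normal - the argument order shown is illustrative only
        python3 to-bliss.py music-similarity.db bliss.db
        ./bliss-analyser upload
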
    Last edited by cpd73; 2022-06-18 at 02:21.
    Material debug: 1. Launch via http://SERVER:9000/material/?debug=json (use http://SERVER:9000/material/?debug=json,cometd to also see update messages, e.g. play queue) 2. Open browser's developer tools 3. Open console tab in developer tools 4. REQ/RESP messages sent to/from LMS will be logged here.

  2. #2
    Senior Member · Join Date: Jan 2010 · Location: Hertfordshire · Posts: 9,438
    Quote Originally Posted by cpd73 View Post
    This is a mixer for "Don't Stop the Music" that uses the results of bliss analysis to find suitable tracks. [...] However, whilst MusicSimilarity supports CUE files (it splits them apart for analysis), bliss-analyser currently does not. I realised I only had 3 CUE albums, and it was easier to just split them into individual files.

    I currently use MusicIP, which adds fingerprinting to a track's tags. If I understand correctly, Bliss doesn't use tags but stores the info in a database.
    I add my music to a portable USB drive connected to a Windows laptop, where I add tags, apply ReplayGain and analyse using MusicIP.
    I then copy the music to another USB drive plugged into a Pi4 using FreeFileSync.
    From the description it sounds like I can analyse on the laptop then upload the database to LMS on the Pi.
    If I add new music to my library, is it possible to only analyse the new music, or does the analyser analyse the whole library, skipping the tracks already in the database?
    By the way, the link in your post to Bliss doesn't work - is this the correct one?
    https://lelele.io/bliss.html


  3. #3
    Senior Member · Join Date: Mar 2017 · Posts: 3,682
    Quote Originally Posted by slartibartfast View Post
    I currently use MusicIP, which adds fingerprinting to a track's tags. If I understand correctly, Bliss doesn't use tags but stores the info in a database.
    That is correct. Whilst the analysis data is quite small (20 floating point numbers) and could be stored in a tag, I didn't want to touch the actual music files. Plus reading data from a DB is quicker than re-reading tags from all files.
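
    As a rough picture of how those 20 numbers get used: each track is a point in 20-dimensional space, and candidates for the mix are ranked by their distance from the seed track. A simplified sketch (illustration only, the real mixer is more involved):

        // Sketch: rank candidates by Euclidean distance between their
        // 20-value bliss analysis vectors and return the closest track.
        fn distance(a: &[f32; 20], b: &[f32; 20]) -> f32 {
            a.iter()
                .zip(b.iter())
                .map(|(x, y)| (x - y).powi(2))
                .sum::<f32>()
                .sqrt()
        }

        fn closest<'a>(seed: &[f32; 20], candidates: &'a [(String, [f32; 20])]) -> Option<&'a str> {
            candidates
                .iter()
                .min_by(|(_, a), (_, b)| distance(seed, a).total_cmp(&distance(seed, b)))
                .map(|(path, _)| path.as_str())
        }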

    Quote Originally Posted by slartibartfast View Post
    From the description it sounds like I can analyse on the laptop then upload the database to LMS on the Pi.
    Yes, that's what I do. I scan on my i7 laptop, but the mixer (and LMS) run on a Pi4.

    Quote Originally Posted by slartibartfast View Post
    If I add new music to my library, is it possible to only analyse the new music, or does the analyser analyse the whole library, skipping the tracks already in the database?
    Only new files that are not in its DB are analysed, and any tracks that no longer exist on disk are removed from the DB (unless --keep-old is used).
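
    So an incremental update is just a re-run of the same commands, for example (see the README.md for the exact placement of --keep-old):

        # Only files missing from bliss.db are analysed on a re-run
        ./bliss-analyser analyse
        # Or keep DB entries for files that no longer exist on disk
        ./bliss-analyser analyse --keep-old
        # Push the updated DB to LMS again
        ./bliss-analyser upload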

    Quote Originally Posted by slartibartfast View Post
    By the way, the link in your post to Bliss doesn't work - is this the correct one?
    https://lelele.io/bliss.html
    That link is correct. However, the link in my original post works for me - tried on both desktop and mobile.

  4. #4
    Senior Member · Join Date: Jan 2010 · Location: Hertfordshire · Posts: 9,438
    Quote Originally Posted by cpd73 View Post
    That link is correct. However, the link in my original post works for me - tried on both desktop and mobile.

    I tried the link from Tapatalk, where it doesn't work, but it does work from a browser. I'll give this a try.


  5. #5
    Senior Member · Join Date: Jun 2005 · Location: The South, UK · Posts: 441
    Sounds interesting. I currently run LMS on a Win10 server, but am considering migrating to a pCP solution for the server. Would I be able to upload the Bliss DB to the pCP server and run the plugin/DSTM mixer on that platform?

    I currently use MusicIP tags to drive the DSTM mixer, but this is not easily transferable to the pCP platform, and the analysis is slow.
    Location 1: LMS 8.3 on Win 10 Brix Server, x3 SB Radios, x1 Touch, x1 Controller : Location 2: LMS 8.3 on Win 10 Brix Server, x2 SB Radios, x1 Duet Receiver, x1 Controller : Alexa Mediaserver Smart Skill, Material Android, SqueezeliteX control

  6. #6
    Senior Member · Join Date: Mar 2017 · Posts: 3,682
    Quote Originally Posted by staresy View Post
    Sounds interesting. I currently run LMS on a Win10 server, but am considering migrating to a pCP solution for the server. Would I be able to upload the Bliss DB to the pCP server and run the plugin/DSTM mixer on that platform?
    That's the idea. I don't use pCP myself so cannot confirm it works, but I see no reason why it should not.

  7. #7
    Senior Member · Join Date: Jan 2010 · Location: Hertfordshire · Posts: 9,438
    In the repo I see the Bliss Mixer plugin is called Auto Play - is that right? There is also another plugin with the same name but a different description.


  8. #8
    Senior Member · Join Date: Mar 2017 · Posts: 3,682
    Quote Originally Posted by slartibartfast View Post
    In the repo I see the Bliss Mixer plugin is called Auto Play - is that right? There is also another plugin with the same name but a different description.
    Ah, oops! Should be fixed now!

  9. #9
    Senior Member · Join Date: Jan 2010 · Location: Hertfordshire · Posts: 9,438
    There are some genres that I never want to appear in mixes, like Comedy, Spoken Word, etc. It would be nice to have an easier way to exclude them; otherwise I would need to make a "Genre group" containing all of the genres that I do want to be included.


  10. #10
    Senior Member · Join Date: Mar 2017 · Posts: 3,682
    Quote Originally Posted by slartibartfast View Post
    There are some genres that I never want to appear in mixes, like Comedy, Spoken Word, etc. It would be nice to have an easier way to exclude them; otherwise I would need to make a "Genre group" containing all of the genres that I do want to be included.
    You have two choices:

    1. In each folder containing these tracks (or in their parent folder) create a file named '.notmusic' (note the leading dot). Whenever the analyser sees this file it skips that folder and all of its child folders (see the example below).
    2. Use the 'ignore' feature.


    Both of these are mentioned in the README.md.
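
    For example, for the Comedy and Spoken Word folders mentioned above (paths are illustrative), option 1 is just an empty marker file:

        # Create an empty '.notmusic' file in each folder (or parent folder) to skip
        touch "/home/user/Music/Comedy/.notmusic"
        touch "/home/user/Music/Spoken Word/.notmusic"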
