Home of the Squeezebox™ & Transporter® network music players.
Page 9 of 19
Results 81 to 90 of 181
  1. #81
    Senior Member bakker_be's Avatar
    Join Date
    May 2009
    Location
    Kwaadmechelen, Belgium
    Posts
    996
    Quote Originally Posted by prabbit View Post
    If you're referring to the lack of an HTML page response, that is to be expected. The server is running on port 11000, but its job is not to respond with an HTML page but with a list of tracks to feed DSTM, which it does. Running in debug mode you'll see this. If you're interested in the details, it's easier to review the messages if you pipe them into a log file.

    Based on what I see in the image, you should be able to get Similarity mixes from DSTM (assuming you've configured the Music Similarity plugin settings).
    Try as I might, I can't seem to redirect the DEBUG output to a file. AFAIK "./music-similarity.py -l DEBUG > musicsimilarity.log" should send all output to the musicsimilarity.log file. It does create the file, but all other output remains in the terminal window. The only info in the file is:
    Code:
     * Serving Flask app 'lib.app' (lazy loading)
     * Environment: production
       WARNING: This is a development server. Do not use it in a production deployment.
       Use a production WSGI server instead.
     * Debug mode: off
    Main System: Touch; Marantz SR-5004 + TMA Premium 905 + TMA Premium 901 + Teufel Ultima 20 Mk 2 + BK Monolith+ FF + Lenovo T460 + Kodi + Pioneer PDP-LX5090H
    Workshop: iPad 32GB Wifi + Squeezepad (local playback activated)
    Wherever needed: Acer Iconia Tab A700 + Squeezeplayer
    Kitchen: iPhone 5s + iPeng (local playback activated) + NAD 312 + Teufel Ultima 20 Mk 2
    Headphone (cozy corner): Lenovo T550 + Squeezelite-X + Cyrus Soundkey + Topping A30 + Focal Elear
    Car: TBC ...

  2. #82
    Senior Member
    Join Date
    Mar 2017
    Posts
    3,502
    Quote Originally Posted by prabbit View Post
    Understood, completely. It's your tool for your use case that you've chosen to freely share. I was commenting on what I saw after I took a peek behind the curtains and started to understand how things were connected. I don't currently have the skills to fork what you've done thus far to extend Similarity into SmartMix 2.0. But maybe someone else does or maybe you do, if the idea of creating playlists or influencing playlists based off of specific Musly/Essentia metadata is interesting or exciting. And if not, no complaints here. This is freeware/donation-ware, and I accept everything that comes with such a plugin.
    I'm more than happy to add such functionality, or have others submit pull requests, etc., to implement this. It's just not the current focus. I love the DSTM feature; it's how I mainly listen to my music now - play 1 track, or an album, and let LMS continue to add more tracks based on that.

    Quote Originally Posted by prabbit View Post
    I'm not a math(s) major
    Me neither! I really am just making this up as I go along!

    Quote Originally Posted by prabbit View Post
    I've also only had an intro to statistics course at university. If 1000 tracks are selected to build the model music style database and then the 4 highest or lowest are used to filter based on a comparison to the seed track, I wonder if we're running into a sampling bias (or some other statistical bias). The 1000 tracks get further split into genres/genre groups based on relative percentage to the existing catalog and that affects the model. I do understand Similarity is pulling from the entire music collection when selecting a song to play and not just the 1000 model tracks. Like you, I only use local files; I do not use any streaming services.
    I know nothing about statistics, or indeed how Musly does its similarity or how Essentia matches against models, etc. I'm just trying to put bits together! My initial thinking was that Musly is pretty good at getting similar tracks, so then adding filtering by BPM, Key, etc. would help improve even more.

    Quote Originally Posted by prabbit View Post
    Right now Similarity is a bit of a black box to me and it's not clear how much of an effect any one lever/setting has on a mix, so I've chosen to disable most settings and start fairly wide open. Next, I may go to the opposite extreme and use a very narrow range of setting values. As I get a sense for what's happening, I'll know where to go to make adjustments.
    I agree, hence the initial post asking for best default settings, etc.

    Quote Originally Posted by prabbit View Post
    I do fully understand that and appreciate the distinction. The closer to 0 a value, the less likely it is that type; the closer to 1 the more likely it is. The intro to statistics course taught me that much. I do admit that I don't recall everything from the class though.
    Sorry, I didn't mean to sound condescending. You probably know more about the theory of this than I do.
    Material debug: 1. Launch via http://SERVER:9000/material/?debug=json (Use http://SERVER:9000/material/?debug=json,cometd to also see update messages, e.g. play queue) 2. Open browser's developer tools 3. Open console tab in developer tools 4. REQ/RESP messages sent to/from LMS will be logged here.

  3. #83
    Senior Member
    Join Date
    Mar 2017
    Posts
    3,502
    Quote Originally Posted by bakker_be View Post
    Try as I might, I can't seem to redirect the DEBUG output to a file. AFAIK "./music-similarity.py -l DEBUG > musicsimilarity.log" should send all output to the musicsimilarity.log file. It does create the file, but all other output remains in the terminal window
    Try:

    Code:
    ./music-similarity.py -l DEBUG 2>&1 | tee musicsimilarity.log
    '2>&1' ensures standard-error messages are placed on the same output stream as standard output. '|' pipes the output of the left-hand side to the right-hand side. 'tee' takes this, shows it on the terminal, and writes it into 'musicsimilarity.log'.
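    For reference, the same merge-and-log behaviour can be sketched in Python. This is a hypothetical helper, not part of music-similarity; the command and log-file names are illustrative.

```python
# A minimal Python equivalent of `cmd 2>&1 | tee logfile`:
# run a command, merge stderr into stdout, and write each line
# to both the terminal and a log file.
import subprocess
import sys

def run_and_tee(cmd, log_path):
    """Run cmd, echoing merged stdout/stderr to the terminal and log_path."""
    with open(log_path, "w") as log, subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    ) as proc:
        for line in proc.stdout:       # lines arrive as the process emits them
            sys.stdout.write(line)     # echo to the terminal
            log.write(line)            # and into the log file
    return proc.returncode
```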

  4. #84
    Senior Member
    Join Date
    Jun 2009
    Posts
    144
    Code:
     cmd > filename
    will redirect only the standard output to the file "filename" but not the errors.

    To collect both you have to use
    Code:
     cmd > filename 2>&1
    But I suggest using
    Code:
    cmd | tee -a filename
    That way you will see all output (standard and error) on the terminal and it will be written into the file "filename".

    Edit: Damn! I am too slow. :-) and wrong: "tee" reads only from standard input. :-(
    Last edited by jd68; 2022-01-17 at 14:51.

  5. #85
    Senior Member
    Join Date
    Aug 2012
    Location
    Austria
    Posts
    1,262
    Quote Originally Posted by prabbit View Post
    It was a fantastic tool that allowed us to create mixes based on a variety of attributes — just like the ones in the music-similarity database. It had slider bars that allowed us to set a range for various attributes and then build a playlist from the tracks that met that criteria.

    Imagine I want to query the music-similarity database for all songs with these values and produce a playlist—that was SmartMix and that's what I think the purpose of those values would or could be.

    • happy: > 0.9
    • aggressive: 0.4 - 0.6
    • danceable: >0.75
    • electronic: >0.8
    • dark: >0.5

    You can basically do this with LMS Essentia (using either extgui4lms or the LMS Playlist Editor as a frontend).

    If you are interested in some background info regarding musly/essentia/similarity, see this thread.
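    As a sketch of the idea, the attribute-range query from the quoted example could look like this in Python, assuming each track is a dict of Essentia-style probabilities. The attribute names and thresholds come from the example above; the actual database schema will differ.

```python
# Filter tracks whose attribute probabilities fall inside the requested
# ranges; `criteria` maps attribute -> (lo, hi) inclusive range.
def matches(track, criteria):
    return all(lo <= track.get(attr, 0.0) <= hi
               for attr, (lo, hi) in criteria.items())

criteria = {
    "happy":      (0.9, 1.0),
    "aggressive": (0.4, 0.6),
    "danceable":  (0.75, 1.0),
    "electronic": (0.8, 1.0),
    "dark":       (0.5, 1.0),
}

def smart_mix(tracks, criteria):
    """Return every track meeting all attribute ranges."""
    return [t for t in tracks if matches(t, criteria)]
```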
    Various SW: Web Interface | Text Interface | Playlist Editor / Generator | Music Classification | Similar Music | Announce | EventTrigger | Ambient Noise Mixer | DB Optimizer | Image Enhancer | Chiptunes | LMSlib2go | ...
    Various HowTos: build a self-contained LMS | Bluetooth/ALSA | Control LMS with any device | ...

  6. #86
    Senior Member bakker_be's Avatar
    Join Date
    May 2009
    Location
    Kwaadmechelen, Belgium
    Posts
    996
    Quote Originally Posted by cpd73 View Post
    Try:

    Code:
    ./music-similarity.py -l DEBUG 2>&1 | tee musicsimilarity.log
    '2>&1' ensures standard-error messages are placed on the same output stream as standard output. '|' pipes the output of the left-hand side to the right-hand side. 'tee' takes this, shows it on the terminal, and writes it into 'musicsimilarity.log'.
    That did the trick! Thanks, "tee" is now stored in my tips & tricks toolbox.

  7. #87
    Senior Member
    Join Date
    Mar 2017
    Posts
    3,502
    Quote Originally Posted by prabbit View Post
    Imagine I want to query the music-similarity database for all songs with these values and produce a playlist—that was SmartMix and that's what I think the purpose of those values would or could be.

    • happy: > 0.9
    • aggressive: 0.4 - 0.6
    • danceable: >0.75
    • electronic: >0.8
    • dark: >0.5


    Those criteria find 124 songs in my music collection. I'd like to put that on play.
    Thinking about this some more: would it make sense to allow creation of 'Smart Mixes' that select a number of tracks, allow these to be added to the queue, and then let DSTM take over adding new tracks? DSTM, however, would not know about the SmartMix, just that these would be the initial songs to create mixes from. If so, creating another API that returns X songs based upon some attributes should be doable. It then requires the plugin to implement a JSONRPC call and the creation of a UI to create these mixes. Obviously for me the UI would be created in Material. I know SlimBrowse would allow some sort of UI creation for this, but that's beyond my knowledge level at the moment - and as I only use Material it's not of major importance to me.

    e.g. I would add a 'Smart Mixes' (or similar) entry to 'My Music' to list current mixes, allow editing, etc. The edit dialog would allow you to specify which attributes to use, number of tracks, and (perhaps) order of tracks (closest to attributes, random, etc.)
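    A rough sketch of the selection helper such an API might wrap, following the edit-dialog description above (which attributes, number of tracks, ordering). All names and the ordering options are hypothetical, not the plugin's actual API.

```python
# Pick up to `count` tracks matching attribute ranges, ordered either
# randomly or by closeness to the centre of each requested range.
import random

def build_smart_mix(tracks, criteria, count=25, order="random"):
    def in_range(t):
        return all(lo <= t.get(a, 0.0) <= hi for a, (lo, hi) in criteria.items())

    def distance(t):
        # total distance from the midpoint of each requested range
        return sum(abs(t.get(a, 0.0) - (lo + hi) / 2)
                   for a, (lo, hi) in criteria.items())

    matching = [t for t in tracks if in_range(t)]
    if order == "closest":
        matching.sort(key=distance)   # closest to the requested attributes first
    else:
        random.shuffle(matching)      # default: random order
    return matching[:count]
```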

  8. #88
    Senior Member
    Join Date
    Mar 2017
    Posts
    3,502
    Quote Originally Posted by prabbit View Post
    If 1000 tracks are selected to build the model music style database and then the 4 highest or lowest are used to filter based on a comparison to the seed track, I wonder if we're running into a sampling bias
    The "4" that I referred to was not tracks, but attributes. As in I find the 4 strongest Essentia attributes a track has (so attributes >=0.8 or <=0.2) and then filter tracks against those. What my script does is:

    Code:
    for each seed track (the LMS plugin sends a maximum of 5)
        get 5000 most similar tracks from Musly
            for each of these similar tracks
                filter out based on meta-data - to stop repeated titles, artists, etc.
                check BPM of candidate track against seed track
                check key of candidate track against seed track
                check loudness of candidate track against seed track
                for each of the 4 strongest Essentia attributes of seed track
                    check candidate track's attribute against seed track
            stop when we have enough matching tracks for this seed
    
    randomise selected tracks
    return required amount of tracks
    ...there's more to it than that, but that is the basic idea of the filtering.
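    The pseudocode above might translate into Python along these lines. The helper names are placeholders, and, as noted, the real script does more than this.

```python
# Loose rendering of the seed-track filtering loop: for each seed, walk its
# similarity candidates, keep those passing all filters, stop when enough
# matches are found, then shuffle and trim the combined selection.
import random

def pick_tracks(seeds, candidates_for, passes_filters, wanted_per_seed, total):
    selected = []
    for seed in seeds:                      # LMS plugin sends at most 5 seeds
        matched = 0
        for cand in candidates_for(seed):   # e.g. 5000 most similar via Musly
            if passes_filters(seed, cand):  # metadata, BPM, key, loudness, attributes
                selected.append(cand)
                matched += 1
            if matched >= wanted_per_seed:  # enough matches for this seed
                break
    random.shuffle(selected)                # randomise selected tracks
    return selected[:total]                 # return required amount
```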

  9. #89
    Senior Member
    Join Date
    Apr 2007
    Posts
    326
    Quote Originally Posted by Roland0 View Post
    You can basically do this with LMS Essentia (using either extgui4lms or the LMS Playlist Editor as a frontend)

    if you are interested in some background info regarding musly/essentia/similarity, see this thread

    I tried to install your plugins and was successful with some of them, but others require either systems, knowledge, or skills that I don't have at the moment. I remember using extgui4lms many years ago when it was still very new. No doubt your contributions have been important and valuable, because they helped Craig with his Similarity project. If I could figure out how to take my existing high-level Essentia JSON analysis files and have them re-processed by a Windows-based or Ubuntu-on-Windows-built tool, then I'd probably be able to proceed.

    Quote Originally Posted by cpd73 View Post
    Thinking about this some more. Would it make sense to allow creation of 'Smart Mixes' that will select a number of tracks, allow this to be added to the queue, and then DSTM takes over adding new tracks. DSTM, however, would not know about the SmartMix, just that these would be the initial songs to create mixes from. If so creating another API that returns X songs based upon some attributes should be doable. It then requires the plugin to implement a JSONRPC and the creation of a UI to create these mixes. Obviously for me the UI would be created in Material. I know SlimBrowse would allow some sort of UI creation for this, but that's beyond my knowledge level at the mo - and as I only use Material it's not of major importance to me.

    e.g. I would add a 'Smart Mixes' (or similar) entry to 'My Music' to list current mixes, allow editing, etc. The edit dialog would allow you to specify which attributes to use, number of tracks, and (perhaps) order of tracks (closest to attributes, random, etc.)
    This makes sense to me: Mood Mixes.

    I also only use Material (in Chrome, Material app on Android, in SLX on Windows).

    It seems straightforward enough to build the API and UI to expose all low- and high-level attributes in the music-similarity.db file; that is, if one knows how to do these sorts of things.
    Ask users to adjust sliders to produce a range for each attribute, along with some onscreen hints that each attribute indicates the strength of the probability the song matches that attribute type.
    Gather all responses, pass those to the database, return a result. (Maybe 'maximum number of files to return' is also an attribute to pass so you're not returning thousands of entries, or if you do it's because the user specifically requested it.)
    The result could be a new playlist. I'd likely almost always choose 'random', but others may want them played in some order — of course, what order is unknown because no attribute would be weighted more than others, so there'd be no way to rank them. Unless, that too, was something that was requested: rank by [ordered list of attributes].
    For the UI, I'd convert the attribute values to 0 to 100 rather than 0 to 1, since most people understand that more easily. If a user doesn't want to filter on an attribute, they should allow all values by having the slider range extend from 0 to 100 (the default position). Behind the scenes I would also only use the hundredths position (e.g., 0.91) and no further (e.g., 0.9152377, which would round to 0.92). I wouldn't expect rounding to the hundredths position to materially (no pun intended) affect the mix.

    After the playlist runs out, it seems reasonable that DSTM would kick in using whatever rules are set for DSTM.
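    The slider-to-probability conversion described above could be sketched like this. The function names are illustrative, not part of any existing UI.

```python
# UI shows 0-100 sliders; the database holds 0-1 probabilities kept to
# two decimal places. A slider left at its full 0-100 default applies
# no filter at all.
def slider_to_prob(value):
    """Map a 0-100 slider position to a probability rounded to hundredths."""
    return round(value / 100.0, 2)

def is_filtered(lo, hi):
    """True if the slider range actually constrains the attribute."""
    return (lo, hi) != (0, 100)
```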

  10. #90
    Senior Member
    Join Date
    Aug 2012
    Location
    Austria
    Posts
    1,262
    Quote Originally Posted by prabbit View Post
    If I could figure out how to take my existing high-level Essentia JSON analysis files and have them re-processed by a Windows-based or Ubuntu-on-Windows-built tool, then I'd probably be able to proceed.
    LMS Essentia analysis/upload should run on any Linux system, so Ubuntu-on-Windows should work (using the "Gaia/SVM" variant as described on the homepage)
    Re-using the existing high-level Essentia JSONs should be possible, with one provision: LMS Essentia adds the file's path (as used by LMS) to the JSON (as metadata.tags.file_path), so the existing JSONs would have to be modified before uploading them to LMS Essentia's database.
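    The modification Roland0 describes could be sketched like this in Python. The metadata.tags.file_path key comes from the post; the function name and JSON handling are illustrative.

```python
# Inject the LMS file path into an existing high-level Essentia JSON
# (as metadata.tags.file_path) before uploading it.
import json

def add_file_path(json_text, lms_path):
    data = json.loads(json_text)
    # create metadata.tags if the existing JSON lacks them
    tags = data.setdefault("metadata", {}).setdefault("tags", {})
    tags["file_path"] = lms_path
    return json.dumps(data, indent=2)
```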
