Home of the Squeezebox™ & Transporter® network music players.
  1. #1
    Senior Member
    Join Date
    May 2005
    Location
    UK
    Posts
    741

    'Official' docker container for LMS?

    Hi all,

    Apologies if this ends up being posted twice, first version didn't appear.

    I've been gradually docker-ising a lot of my services, and wondered if an
    'official' docker container for running LMS would be considered?

    Would seem an ideal way to 'standardise' installs to ensure a sane Perl /
    packages setup, and (hopefully) wouldn't require too much ongoing effort to
    update as new versions are released.

    Thoughts?

    Andy


  2. #2
    Babelfish's Best Boy (mherger)
    Join Date
    Apr 2005
    Location
    Switzerland
    Posts
    20,391

    'Official' docker container for LMS?

    > I've been gradually docker-ising a lot of my services, and wondered if an
    > 'official' docker container for running LMS would be considered?


    I'm not too much of a Docker expert myself. Last time I looked into
    existing images, I thought many of them were lacking when it came to
    updates. If you could provide a good, working image which allowed for
    easy updating, was truly platform agnostic etc., that would be great.
    And I would certainly consider supporting it.

    --

    Michael

  3. #3
    Senior Member
    Join Date
    May 2005
    Location
    UK
    Posts
    741
    Quote Originally Posted by mherger View Post
    I'm not too much of a Docker expert myself. Last time I looked into
    existing images, I thought many of them were lacking when it came to
    updates. If you could provide a good, working image which allowed for
    easy updating, was truly platform agnostic etc., that would be great.
    And I would certainly consider supporting it.
    Sadly, I wouldn't consider myself 'expert' enough to be able to put this together either. I might have a go though.

    As you say, I also had a look at the various containers available, and none seemed to fit the bill.

    Ideally, you'd want something that was fired off as part of your nightly builds, and automatically uploaded the new container to the Docker registry, enabling people to update automatically.
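    What I have in mind is roughly the following, as a sketch only: the image name and tags here are placeholders I made up, not an existing repo, and the Docker steps are gated behind a flag so nothing runs without a daemon and registry credentials.

    ```shell
    #!/bin/sh
    # Hypothetical nightly-build hook; "example/lms" is a placeholder image name.
    IMAGE="example/lms"
    TAG="nightly-$(date +%Y%m%d)"    # one immutable tag per nightly build
    echo "would build and push $IMAGE:$TAG"

    # Set APPLY=1 on a machine with a Docker daemon and registry credentials.
    if [ "${APPLY:-0}" = 1 ]; then
      docker build -t "$IMAGE:$TAG" .
      docker tag "$IMAGE:$TAG" "$IMAGE:nightly"   # moving "nightly" alias
      docker push "$IMAGE:$TAG"
      docker push "$IMAGE:nightly"
    fi
    ```

    Users would then just pull the `nightly` tag to update, while the dated tags allow rolling back.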

    Will see what I can find out, but it's not really my forte!

    Andy

  4. #4
    Senior Member
    Join Date
    May 2005
    Location
    UK
    Posts
    741
    Can you point me to some installation instructions for getting LMS running from git? My google skills are failing me today.

    Also, somewhere that lists which paths would be worth 'exporting' from the container so that they're persisted when the container is rebuilt (config, database, installed plugins perhaps?)

    Cheers

    Andy

  5. #5
    Senior Member
    Join Date
    Apr 2008
    Location
    Paris, France
    Posts
    2,233
    Hi. Michael told me to bomb this thread instead of another one.

    Here is what I've seen about the current unofficial docker image landscape:
    • They're all designed for x86. I imagine we'd want x86 + arm32v7 instead
    • Many are not that elegant. The one I liked most was, IIRC, from justifiably (github repo here)
    • They're heavy as hell: ubuntu or debian "minimal" docker images are huge. Alpine would be a much better candidate if LMS can compile against the musl libc (I don't know what I'm talking about here, but the image size difference is spectacular, at least on ARM)


    I think images are heavy because the LMS build process is not documented, or the documentation is not fresh enough. I think this is the most prominent issue.
    • My cluster environment is built with Buildroot, and I wanted to add an LMS package to Buildroot; but being very clumsy with makefiles in general, and not being able to understand the LMS build process (some binary parts are included; the player firmware I understand, but there seems to be more), I had to give up.
      An "official" Buildroot LMS makefile would be neat, IMHO.
    • The other seemingly popular route towards a clean .tar filesystem (or a Docker image) is to deliver an official Docker image that contains the compiler and build scripts. Run that, and it produces an executable image/tar. The same one every time, since the container's build environment starts afresh on each run.
    • In theory, with multi-stage Dockerfiles, you can in one file (hence a single pull from Docker Hub) compile if necessary and then use the resulting image (without keeping gcc et al. in a buried layer of the image). I don't believe this is strictly necessary, though: Docker is not exactly on the way up. I would be wary about using too many "Dockerisms" in general (including Compose, that Python Hydra.)
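    For what it's worth, the multi-stage idea might look roughly like this. Purely a sketch: it assumes LMS can run from a git checkout on Debian's system perl, which nothing in this thread confirms, and the stage layout is the point, not the package list.

    ```dockerfile
    # Build stage: has git and the compiler toolchain.
    FROM debian:buster-slim AS build
    RUN apt-get update \
        && apt-get install -y --no-install-recommends git ca-certificates build-essential perl \
        && rm -rf /var/lib/apt/lists/*
    RUN git clone --depth 1 https://github.com/Logitech/slimserver.git /opt/lms
    # ...build any missing XS/CPAN modules here...

    # Runtime stage: perl only; gcc and friends stay behind in the build stage.
    FROM debian:buster-slim
    RUN apt-get update \
        && apt-get install -y --no-install-recommends perl \
        && rm -rf /var/lib/apt/lists/*
    COPY --from=build /opt/lms /opt/lms
    EXPOSE 9000 9090 3483 3483/udp
    ENTRYPOINT ["perl", "/opt/lms/slimserver.pl", "--prefsdir", "/config"]
    ```

    Only the second stage ends up in the published image, so the pulled size stays close to base-image-plus-LMS.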


    The second major issue a Dockerized LMS has, I think, is networking. Docker networking, last time I checked, is worthless for us outside of the basic "host" mode or classic bridge networking. SDN designers seem to play with addressing schemes (MAC and/or IP) with enthusiasm, but that kind of creativity does not sit well with slimproto and LMS, which expect MACs and IPs to be unique on the network. Broadcasts have to work, and this is not a given with software-defined networks.
    • Many LMS images out there use host networking. That is fine in that it allows including hardware players. It isn't great from a security standpoint, as this mode exposes all of the host's interfaces inside the container. In addition, and in my experience, LMS is fine when listening on all interfaces; restricting it to a particular interface or IP always broke discovery for me.
    • Some images/containers use routing and don't care that broadcast discovery is broken between hosts. To me, this is not acceptable.
    • The option I chose, after trying a few things, was to use a named Docker bridge on each host. IP pools have to be computed for each host (e.g. in 192.168.1.0/24, host 1 uses IPs 192.168.1.1-10, host 2 uses 192.168.1.11-20, etc.) Then in the bridge I add a vxlan device with the same VNI and port on all hosts. The vxlan link acts as an interconnect cable between network switches: all containers see all MACs/IPs, and discovery works. The IP pool splitting and vxlan creation have to be done in a host script.
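    The per-host script I mean is along these lines. A sketch only: the bridge/network/VNI names are arbitrary picks of mine, the multicast group is illustrative, and the actual network commands are gated behind a flag since they need root on a real host.

    ```shell
    #!/bin/sh
    # Per-host setup sketch for the named-bridge + vxlan scheme described above.
    HOST_INDEX=${1:-1}     # unique per host: 1, 2, 3...
    POOL_SIZE=10
    SUBNET="192.168.1"

    # Carve this host's slice of the shared /24 so IPs never clash across hosts.
    START=$(( (HOST_INDEX - 1) * POOL_SIZE + 1 ))
    END=$(( HOST_INDEX * POOL_SIZE ))
    echo "host $HOST_INDEX owns $SUBNET.$START through $SUBNET.$END"

    if [ "${APPLY:-0}" = 1 ]; then
      # Named bridge for LMS traffic on this host.
      docker network create -d bridge \
        -o com.docker.network.bridge.name=lmsbr \
        --subnet "$SUBNET.0/24" lmsnet
      # Same VNI and UDP port on every host: vxlan is the "cable" between bridges.
      ip link add vx0 type vxlan id 42 dstport 4789 dev eth0 group 239.1.1.1
      ip link set vx0 master lmsbr up
    fi
    ```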


    Integrating physical players into a dedicated network is a bit of a pita, and as usual wifi is the worst offender. Assuming a dedicated network:
    • One of the containers in the cluster has to be a DHCP server (working off an IP pool not distributed by Docker). I don't believe hardware SBs are zeroconf-aware, and I'm sure some software "audiophile" players aren't, so I see a DHCP container as a must. dnsmasq is my favourite (and it is great at clustered DNS, too; the "loop-detect" option can be a life saver.)
    • The bridge has to include a VLAN interface on the hosts that are near a network switch with hardware players connected. The switch has to define the chosen VLAN ID as the PVID on the ports the hardware players are connected to; this is because none of the SB players are VLAN-aware, AFAIK.
      I did not implement that; thinking about it, for a domestic setup, perhaps the bridged/vxlan network I described above could be a bridged/vlan network all along. Vxlan is less prone to local configuration clashes, but in this case that is not a concern.
    • And now, to wifi: the easiest option would be to plug an AP into a VLAN PVID port, just like an SB3. The other options are to run an AP on a host or in a container: the wireless AP interface has to be bridged with the bridge Docker uses, and all is well.
      • Doing it on the host is one more external script, and likely to break if the host configuration changes.
      • Doing it in a container is much preferable for repeatability, except the container will require exclusive access to the wifi phy (and firmware blobs, I think.) Quick solution: run the AP container in host mode; not great.
        Better solution: use a host script to export the host's phy into the namespace of the AP container (once it has started; on its side, the AP has to wait until its environment magically sprouts a wifi interface...) This is feasible, but in my use case I had hosts with more than one phy, and my Linux 4.14 kernel has a tendency to mix up phys and export the wrong interface to the container (!) So, possible, but fiddly to make work reliably, I think.
        Achieving that is kind of cool, however, since you can add an AP on as many wifi-capable hosts as you want, thus optimising wifi coverage. Also, hostapd now implements a multi-AP feature; it entails using an ethernet backhaul as a control channel between APs. Sounds exactly like what the doctor ordered, but I haven't looked into it.
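    The DHCP container's dnsmasq config would be something like this fragment. The interface name and address range are my assumptions; the one firm constraint is that the range must stay outside any pool Docker itself hands out.

    ```conf
    # Illustrative dnsmasq fragment for the dedicated player network.
    interface=lmsbr
    bind-interfaces
    # Lease range, kept clear of the per-host Docker pools.
    dhcp-range=192.168.1.100,192.168.1.150,12h
    # The life-saver mentioned above: detect DNS forwarding loops.
    loop-detect
    ```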


    Regarding audio files, I haven't explored the subject at all, but I think the following options are possible:
    • Map a volume on the host that has some storage device connected, export it via NFS (Docker can do that), and have the LMS server consume the NFS share. Host-side scripting might be necessary, e.g. to detect a USB drive.
    • Do the same, but distributed. In this case I think a discovery/merge/re-export container would be needed to abstract file distribution from the LMS container. The LMS DB should then be able to cope gracefully with files that vanish or come back; I don't know how LMS copes with that currently.
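    For the first option, Docker's "local" volume driver can mount an NFS export as a named volume, roughly like this. The NFS server address, export path and image name are all assumptions, and the Docker commands are gated behind a flag.

    ```shell
    #!/bin/sh
    # Sketch: NFS-backed music volume consumed by the LMS container.
    NFS_HOST="192.168.1.5"       # placeholder: host with the storage attached
    NFS_EXPORT="/export/music"   # placeholder: its NFS export
    echo "would mount $NFS_HOST:$NFS_EXPORT read-only into the LMS container"

    if [ "${APPLY:-0}" = 1 ]; then
      # "local" driver with NFS options; read-only, since LMS only scans it.
      docker volume create --driver local \
        -o type=nfs -o "o=addr=$NFS_HOST,ro" -o "device=:$NFS_EXPORT" lms-music
      docker run -d --name lms -v lms-music:/music example/lms
    fi
    ```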


    Web front-end discovery can become a problem. In my case I use orchestration such that I don't know in advance which host will run LMS on the dedicated network. Docker will do the NAT dance all right to allow access from the physical network, but it does not advertise which services are available on which host. I added a bit of mDNS advertising on the host once the LMS container is confirmed to have started OK. "Lately", vulnerabilities have been found in mDNS/DNS-SD, so the "Bonjour tab" is gone from browsers (iOS never had it implemented). I have added a Bonjour plugin to my browsers, but I don't think that is a viable solution.
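    The host-side advertising I mean is essentially one avahi call, sketched below. The service name is arbitrary; 9000 is the standard LMS web port; the actual call is gated behind a flag since it needs avahi running.

    ```shell
    #!/bin/sh
    # Sketch: advertise the LMS web UI via mDNS once the container is up.
    SVC_NAME="LMS on $(hostname)"
    SVC_PORT=9000
    echo "would advertise '$SVC_NAME' as _http._tcp on port $SVC_PORT"

    if [ "${APPLY:-0}" = 1 ]; then
      # avahi-publish-service blocks, so background it after the container starts.
      avahi-publish-service "$SVC_NAME" _http._tcp "$SVC_PORT" &
    fi
    ```

    Run it on whichever host the orchestrator placed LMS, and mDNS-capable clients can find the web UI without knowing the host in advance.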

    As I mentioned in my previous post, LMS feels a bit of a monolith. If it could be split into different parts, perhaps load and performance could be improved. IMHO anything "High Availability" (i.e. stateful replication) is a bad idea. But cloning stateless processes, like perhaps a web front-end, could make sense. To do this sort of thing I would not rely on Docker's swarm/services: easy to get working, but it comes with a lot of trade-offs. IMHO, swarm mode is another Dockerism that will fade away.
    I would rather look into setting up an independent "ingress" network with its own load-balancing policy. I understand this is the kind of thing Traefik was designed for. Perhaps service advertisement could be done there, too.


    Apologies for the long, long post, HTH.
    I subscribe to the thread, just in case.
    Last edited by epoch1970; 2020-03-21 at 05:31.

  6. #6
    Senior Member
    Join Date
    May 2005
    Location
    UK
    Posts
    741
    Quite a lot of your concerns shouldn't actually be a problem.

    Regarding the network, you just expose the appropriate ports from the container to the host. Then all devices would use the host's IP address to communicate with the server. I'm not sure if that would prevent auto-discovery from working, but if you haven't got that many devices it shouldn't be too hard to enter the address into them manually.
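    Something like the following is what I mean; the image name is a placeholder, and the run is gated behind a flag. 9000 is the web UI, 9090 the CLI, and 3483 tcp+udp is slimproto (discovery uses the UDP side).

    ```shell
    #!/bin/sh
    # Sketch: port-mapped LMS container; players are pointed at the host's IP.
    WEB_PORT=9000
    echo "players would be pointed at <host-ip>:$WEB_PORT"

    if [ "${APPLY:-0}" = 1 ]; then
      docker run -d --name lms \
        -p 9000:9000 -p 9090:9090 \
        -p 3483:3483 -p 3483:3483/udp \
        example/lms
    fi
    ```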

    Similarly for the audio files: if they're available on the host, it's simple in Docker to make a host directory available to the container.

    I've got as far as installing a 'base' Alpine VM, and if someone can assist me with installation instructions I'm going to try to get LMS installed and running on it. If this works, I should then know what steps are required to do the same in a Docker container.

    My setup is all x86, and I have no real experience of what would need to be 'built' for Arm. I would need a bit of assistance with that if it ever got that far.

    Andy

  7. #7
    Babelfish's Best Boy (mherger)
    Join Date
    Apr 2005
    Location
    Switzerland
    Posts
    20,391

    'Official' docker container for LMS?

    > My setup is all x86, and I have no real experience of what would need to
    > be 'build' for Arm. Would need a bit of assistance with that if it ever
    > got that far.


    If one image should do it for all platforms, then it'll become
    difficult. We do provide binaries for some platforms. But you'd have to
    pick a combination of Perl version and OS that is available on both
    platforms. Or you build binaries during deployment - which would flood
    the container with dev tools.

    --

    Michael

  8. #8
    Senior Member
    Join Date
    May 2005
    Location
    UK
    Posts
    741
    Can you point me to 'definitive' install instructions @mherger?

    Also, a list of paths that should possibly be exported from the container.

    Andy

  9. #9
    Babelfish's Best Boy (mherger)
    Join Date
    Apr 2005
    Location
    Switzerland
    Posts
    20,391

    'Official' docker container for LMS?

    > Can you point me to 'definitive' install instructions @mherger ?

    No, because they don't exist. What would you want to know specifically?
    I mean, you could stick with some Debian based base system. But you
    don't want that because of its size. And with Alpine you're on your own.

    > Also, a list of paths that should possibly be exported from the
    > container.


    I'd certainly store the prefs folder outside. Cache probably, too.
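    That advice maps to two bind mounts, roughly as below. The host paths and image name are placeholders; `--prefsdir` and `--cachedir` are real slimserver.pl options, and the container run is gated behind a flag.

    ```shell
    #!/bin/sh
    # Sketch: keep prefs and cache on the host so they survive a rebuild.
    PREFS_DIR="${PREFS_DIR:-$PWD/lms-prefs}"
    CACHE_DIR="${CACHE_DIR:-$PWD/lms-cache}"
    mkdir -p "$PREFS_DIR" "$CACHE_DIR"

    if [ "${APPLY:-0}" = 1 ]; then
      docker run -d --name lms \
        -v "$PREFS_DIR":/config -v "$CACHE_DIR":/cache \
        example/lms --prefsdir /config --cachedir /cache
    fi
    ```

    Rebuilding or updating the container then leaves settings and the scanner database intact on the host.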

    --

    Michael

  10. #10
    Senior Member
    Join Date
    May 2005
    Location
    UK
    Posts
    741
    Pretty much: how to get from a git checkout to a running system. I appreciate you can't provide specific instructions for Alpine, but something along the lines of 'check out this directory, make sure perl and these modules are installed, run the service', etc.
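    My rough reading of that is the following, entirely untested: the repo URL is the public one, the CPAN/XS step is exactly the undocumented part, and the clone/run is gated behind a flag.

    ```shell
    #!/bin/sh
    # Sketch: git checkout to running system, pieced together from this thread.
    LMS_REPO="https://github.com/Logitech/slimserver.git"
    echo "would clone $LMS_REPO and start slimserver.pl"

    if [ "${APPLY:-0}" = 1 ]; then
      git clone --depth 1 "$LMS_REPO" slimserver
      cd slimserver || exit 1
      # Prebuilt XS modules ship under CPAN/ for common platforms; on Alpine
      # (musl) you would presumably have to rebuild those yourself.
      perl slimserver.pl --prefsdir "$HOME/lms-prefs" --logdir "$HOME/lms-logs"
    fi
    ```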

    As you say, I guess I could just go with a Debian base image and install the latest nightly, but if I'm going to do this I may as well try to do it 'properly'.

    Cheers

    Andy
