Working on remote LXD daemons? #41
Currently, LXDock is hardcoded for local connections, yes (it's not pylxd, it's us). But fundamentally, there's nothing stopping it from working well with remote LXD servers (at least I don't think so). It's just that @ellmetha and I have no experience with this kind of setup. Of course, a PR to add this feature would be very welcome.
Thanks! I've gotten it working in a Go project before using the native lib, so hopefully it'll be a similar process in Python. I'll take a whack at it if I get a few minutes in the next week or two.
What about shared folders, @hsoft? Do you expect them to work? Because they expect sharing from host to container, not from client to container, I think that would require setting up network sharing such as NFS, which is not natively supported if I'm not mistaken.
@luisfaceira I honestly don't have much of a clue with regards to that. I very rarely use something other than Linux, so I'm not familiar with this problem domain.
Just to clarify: using LXD as a daemon, although notably useful for using LXC outside of Linux, is not a question specific to that situation. LXD is a (v2) client-server implementation of the original LXC (v1), which was "local-only". This was what grabbed my attention on this project as a "vagrant is too heavy" solution. Because if it's local-only, it's not a big improvement over what our team currently uses, which is Vagrant with the vagrant-lxc plugin/provider; we have already had a solution to "vagrant is too heavy" for some years. I've looked into upgrading the existing vagrant-lxc (LXC v1) provider to a vagrant-lxd (LXC v2) provider myself, which would leverage all the remaining features of Vagrant, but one of the tricky parts is exactly making it work remotely, particularly the shared folders.
@luisfaceira It is true that LXD is a networked daemon, but it also provides other functionality: an image system (LXD is image-based), snapshots, profiles, ... All of these ease the use of containers, both locally and remotely. LXDock does not currently work with remote LXD daemons, but this is probably one of the features we'll be working on for the next releases. That said, I doubt that shared folders could work the way you describe. When you add a disk device to a remote container using the LXD client, the source path has to be located on the server hosting the containers. 😉
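For reference, pylxd itself can already reach a remote daemon over HTTPS with a client certificate; with no endpoint it falls back to the local unix socket. A minimal sketch of how LXDock might pick between the two — the helper name `client_kwargs` and the certificate paths are illustrative assumptions, not LXDock's actual API:

```python
import os


def client_kwargs(endpoint=None, cert_dir="~/.config/lxc"):
    """Return keyword arguments for pylxd.Client.

    With no endpoint, pylxd defaults to the local unix socket; for a
    remote daemon we pass the HTTPS endpoint plus a client cert/key pair.
    The cert file names below are assumptions for illustration.
    """
    if endpoint is None:
        return {}  # pylxd.Client() -> local unix socket
    cert_dir = os.path.expanduser(cert_dir)
    return {
        "endpoint": endpoint,  # e.g. "https://lxd.example.com:8443"
        "cert": (os.path.join(cert_dir, "client.crt"),
                 os.path.join(cert_dir, "client.key")),
        "verify": False,  # or a CA bundle path in production
    }


# Usage (requires a reachable, trusted daemon):
#   import pylxd
#   client = pylxd.Client(**client_kwargs("https://lxd.example.com:8443"))
```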
The idea I've been having for a while is unison integration to act as "shared folders". |
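As a sketch of what such an integration might shell out to (the function, host name, and paths here are hypothetical, not an existing LXDock API; `-batch` just suppresses unison's interactive prompts):

```python
# Hypothetical sketch: build the unison command LXDock could run to keep a
# local directory in sync with a directory reachable over SSH, emulating
# a shared folder with a remote host.

def unison_command(local_dir, remote_host, remote_dir):
    """Build an argv list syncing local_dir with remote_dir over SSH.

    For an absolute remote_dir, unison's ssh:// root syntax needs a double
    slash after the host, which the single format slash plus the leading
    "/" of remote_dir produces.
    """
    return [
        "unison",
        local_dir,
        "ssh://{}/{}".format(remote_host, remote_dir.lstrip("/")),
        "-batch",  # run without interactive prompts
    ]


# Example:
# unison_command("./src", "me@lxd-host", "/home/me/project/src")
# -> ["unison", "./src", "ssh://me@lxd-host/home/me/project/src", "-batch"]
```

Whether a polling or file-watching trigger drives the sync is a separate design question; this only shows the shape of the per-share command.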
@ellmetha Unfortunately, I also think that it's not easy to add shared folder support; that was what I tried to point out in my first comment. Adding remote LXD support without shared folders steers away completely from the vagrant-like workflow. And I'm also arguing that supporting remote daemons should be a core feature for this project.

My argument is not that the main feature of LXD is being a networked daemon. Containers are incredible, and system containers such as the ones on LXD are significantly different from process containers such as Docker's; I get that. It's when compared to LXC that I would say LXD's main feature is being networked. AFAIK LXC was already image-based and had snapshots, profiles, etc.

My point is that Vagrant, combined with the vagrant-lxc provider plugin, already provides a solution to "Vagrant is too heavy", has provided it for a very long time (we've been using it for years), and it's not "a workflow similar to Vagrant": it is the Vagrant workflow with all its bells and whistles, such as supporting other providers (e.g. AWS) and other provisioners (e.g. Chef/shell) that are very useful to us.

Now, I'm all in favor of alternatives and competition; I'm just trying to share and contribute my own perspective: for this to stand out as a potential vagrant-killer (that's how we looked at it for our own use), it will be essential that it takes advantage of the "networking" part, to solve some of the issues that we have with a vagrant-lxc setup and potentially gain additional features:
To be fair, we would also benefit from not being tied to a version of LXC and a vagrant plugin that are getting outdated, and we also might (though doubtfully) leverage some of its more recent cool features:
Don't get me wrong, I applaud this initiative (if I find the time I will even try to contribute to it with more than words). But while it does not support remote daemons, nor the hardest parts of a "workflow similar to vagrant" (which are likely shared folders and port forwarding from client to container), I would call this LXCdock :) and unfortunately it's not something our team can consider using and actively contributing to. I hope that someone is able to look into remote daemon support; I was pretty excited yesterday when I thought that this project had solved that, that it was a "vagrant workflow for networked LXC (LXD)"! Keep up the good work!
Progress (397e4ab): thanks to LXD's networking facility, it's quite straightforward to add fundamental support. So far, so good. For shared folders, what we may need in the next steps: (1) reconsidering the concept of …; (2) creating …. Any ideas?
@lingxiaoyang great! Regarding (1), I'd say that we should make ACL fiddling local-only and simply issue warnings/errors when we encounter configurations trying to mix ACLs and remote servers. Regarding (2), yes, totally. But I don't think we need to do that now. Having limitations regarding shared folders when we deal with remote LXD servers seems like a very acceptable situation.
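A sketch of the local-only guard described here. The share-dict keys (`source`, `set_host_acl`) are a hypothetical stand-in for LXDock's parsed share configuration, not its actual schema:

```python
import warnings


def validate_shares(shares, is_remote):
    """Warn about and disable share options that only make sense locally.

    `shares` is a list of dicts standing in for LXDock's parsed `shares`
    config; `set_host_acl` mirrors the idea of host-side ACL fiddling,
    which cannot work when the containers live on a remote server.
    """
    for share in shares:
        if is_remote and share.get("set_host_acl", True):
            warnings.warn(
                "Host-side ACL setup is local-only; ignoring it for "
                "remote-server share {!r}".format(share.get("source")))
            share["set_host_acl"] = False
    return shares


# With a remote server, ACL handling is switched off with a warning:
# validate_shares([{"source": "./src", "dest": "/app"}], is_remote=True)
```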
I'm excited that someone has been able to take the first steps in this direction! Kudos to @lingxiaoyang. I don't fully understand the code being submitted (I don't have the time for a thorough read), so please excuse me if some of my comments are already answered or addressed in the actual solution in your commit.

(1) Regarding the semantic doubt (who is the …): for the project to be well architected and prepared for real remote support, I don't think it's wise to treat "remote" as an exception, but as the rule (which still applies to having a remote on …). I don't think we need to have the user configure an ….

(2) Regarding …: another thing that concerns me is the way the remote servers are to be configured. If I'm understanding your commit correctly (mostly based on the updated docs), you propose that the configuration of the remote server be included in the …. That is a file that is committed along with the project, and it describes the steps to create a set of virtual environments to execute it. But I don't think the answer to the question "in which LXD daemon shall I run this?" is project-specific; instead, it depends on who/where it is being run. So, instead of reading the configurations from the file and automatically defining those to be available in the lxc client remote list, IMHO it would make more sense to do something closer to the opposite.

For a multi-remote situation, I don't see a practical use case, so I wouldn't consider it a priority, but to provide such flexibility one could simply add the ….

I think both my comments about …. I think that going with (A), though tempting, might make it more difficult to eventually become (B), which, as I've expressed before, is what I assumed LXDock to be, and what IMHO it should be for it to be a "vagrant-killer" and at least for my team to use it (since Vagrant already supports local LXC, with plugins).

Hope my comments are helpful, keep up the good work!
I hadn't looked at the code yet, but now that I look at it, I tend to agree with @luisfaceira regarding remote config. I don't see the use case for letting LXDock set up remotes. Let's just add the ability for it to use remotes other than the default one.
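The approach both comments point at — resolving a remote name against the lxc client's own configuration instead of declaring servers in the project file — could look like this. The `default-remote`/`remotes`/`addr` keys match the layout of the lxc client's `~/.config/lxc/config.yml`; the resolver function itself is a hypothetical sketch:

```python
# Sketch: given the dict parsed from the lxc client's config.yml (e.g. via
# yaml.safe_load), map a remote name to its endpoint. "local" means the
# local unix socket; anything else must already exist in the user's
# remote list, so LXDock never has to set up remotes itself.

def resolve_remote(lxc_config, name=None):
    """Return (remote_name, endpoint); endpoint is None for local."""
    name = name or lxc_config.get("default-remote", "local")
    if name == "local":
        return name, None  # local unix socket
    remotes = lxc_config.get("remotes", {})
    if name not in remotes:
        raise KeyError("unknown remote: {}".format(name))
    return name, remotes[name]["addr"]


# Example against a parsed config.yml:
config = {
    "default-remote": "vagrant",
    "remotes": {"vagrant": {"addr": "https://127.0.0.1:8443"}},
}
# resolve_remote(config) -> ("vagrant", "https://127.0.0.1:8443")
```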
@luisfaceira Thank you for the comments! It's great to have your feedback and know your use cases so we can eventually improve LXDock. Regarding (B), more implementation details may need to be discussed, as it conflicts with several localhost assumptions baked into LXDock at this moment (e.g., hostname setting, container IP, shares...). But I agree with you that we should try to keep the learning curve low and the project maintainable. @hsoft I'll change this behavior; I think it'll be quite straightforward.
I reviewed my code (sorry, I forgot a few details since it has been some time) and in fact the main reason that I added …
Update: I'm working on interactive exec in pylxd (canonical/pylxd#241). If this is accepted and merged, we can completely throw away the …
I discovered lxdock recently, but the initial elation I felt vanished when I discovered it expected to run local-only. Then I saw this issue thread. Having this feature will make lxdock most attractive indeed, especially if (at least) one folder-sharing scheme is available too. Well done, folks. This is impressive. Kudos to all of the project team making this possible, and kudos to @luisfaceira for excellently articulating important use cases.
FYI - I am working on a remote version (I forked, and have a PR in my own fork waiting). It's not quite the same as what you guys are going for, and it also breaks almost all the tests. :) I used SLP and a simple crontab script to populate a node list, and am writing a scheduler around that. I really like this project!
I run LXD kinda like `docker-machine` on OSX for dev work. I use the `Vagrantfile` in the LXD repo to start up a vagrant machine, then point my local `lxc` client at the vagrant box. Once the default remote is configured, the `lxc` client and my go-based pet project pick up the default remote values from `~/.config/lxc/config.yml` without a hitch.

When I installed `lxdock` and run `lxdock status` I immediately get …

Is `lxdock` hard-coded to look for a local unix socket, or is this an artifact of using the `pylxd` client? Is there any specific reason to not support remote LXD daemons?