This repository has been archived by the owner on May 1, 2024. It is now read-only.

Working on remote LXD daemons? #41

colebrumley opened this issue Mar 10, 2017 · 17 comments

@colebrumley

I run LXD kinda like docker-machine on OSX for dev work. I use the Vagrantfile in the LXD repo to start up a vagrant machine, then point my local lxc client at the vagrant box. Once the default remote is configured, the lxc client and my go-based pet project pick up the default remote values from ~/.config/lxc/config.yml without a hitch.

When I install lxdock and run lxdock status, I immediately get:

  File "/Users/cole/py3.5/lib/python3.5/site-packages/requests_unixsocket/adapters.py", line 32, in connect
    sock.connect(socket_path)
FileNotFoundError: [Errno 2] No such file or directory

Is lxdock hard-coded to look for a local unix socket, or is this an artifact of using the pylxd client?

Is there any specific reason to not support remote LXD daemons?


ghost commented Mar 10, 2017

Currently, LXDock is hardcoded for local connections, yes (it's not pylxd, it's us). But fundamentally, there's nothing stopping it from working well with remote LXD servers (at least I don't think so). It's just that @ellmetha and I have no experience working with this kind of setup.
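
For what it's worth, pylxd itself can already talk to a remote endpoint; here's a minimal sketch (the endpoint and certificate paths are placeholders):

    import pylxd

    # Default: connect over the local unix socket.
    local_client = pylxd.Client()

    # Remote: connect over HTTPS with a client certificate.
    # The endpoint and cert/key paths below are placeholders.
    remote_client = pylxd.Client(
        endpoint='https://lxd.example.com:8443',
        cert=('/path/to/client.crt', '/path/to/client.key'),
        verify=False,  # or a path to the server's CA certificate
    )

    print(remote_client.containers.all())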

Of course, a PR to add this feature would be very welcome.

@colebrumley

Thanks! I've gotten it working in a Go project before using the native lib - hopefully it'll be a similar process in Python. I'll take a whack at it if I get a few minutes in the next week or two.

@luisfaceira

What about shared folders, @hsoft? Do you expect them to work? I think they assume sharing from host to container, not from client to container; supporting the latter would require setting up network sharing such as NFS, which is not natively supported, if I'm not mistaken.


ghost commented Mar 27, 2017

@luisfaceira I honestly don't have much of a clue with regards to that. I very rarely use anything other than Linux, so I'm not familiar with this problem domain.

@luisfaceira

Just to clarify: using LXD as a remote daemon, although notably useful in the context of using LXC outside of Linux, is not a question specific to that situation.

LXD is a (v2) client-server implementation of the original LXC (v1), which was "local-only".

This is what grabbed my attention about this project as a "vagrant is too heavy" alternative. If it's local-only, it's not a big improvement over what our team currently uses, which is Vagrant with the vagrant-lxc plugin/provider. We've had a solution to "vagrant is too heavy" for some years already.

I've looked into upgrading the existing vagrant-lxc (LXC1) provider to a vagrant-lxd (LXC2) one myself... which would leverage all the remaining features of Vagrant... but one of the tricky parts is precisely making it work remotely, particularly the shared folders.

@ellmetha

@luisfaceira It is true that LXD is a networked daemon, but it also provides other functionality: an image system (LXD is image-based), snapshots, profiles, ... All of these make containers easier to use (both locally and remotely).

LXDock does not currently work with remote LXD daemons, but this is probably one of the features we'll be working on for the next releases. That said, I doubt that shared folders could work the way you describe. When you add a disk device to a remote container using the LXD client, the source path has to be located on the server hosting the containers. 😉
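
For example, with pylxd the device definition looks roughly like this (container name and paths are illustrative); note that source is resolved on the LXD server, not on the client:

    import pylxd

    client = pylxd.Client()  # local or remote, it makes no difference here
    container = client.containers.get('my-container')  # illustrative name

    # 'source' is a path on the machine running the LXD daemon,
    # not on the machine running this script.
    container.devices['project-share'] = {
        'type': 'disk',
        'source': '/srv/projects/myapp',  # must exist on the LXD server
        'path': '/myapp',                 # mount point inside the container
    }
    container.save(wait=True)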


ghost commented Mar 28, 2017

The idea I've been having for a while is unison integration to act as "shared folders".
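
Roughly, on the client side it could boil down to something like this (a sketch; the host and paths are placeholders, and unison would have to be installed on both ends):

    import subprocess

    def sync_share(local_path, server_host, server_path):
        """Two-way sync between the client machine and the LXD server."""
        subprocess.check_call([
            'unison',
            local_path,
            'ssh://{0}/{1}'.format(server_host, server_path),
            '-batch',  # run without interactive prompts
        ])

    # e.g. sync_share('./src', 'lxd.example.com', '/srv/projects/myapp/src')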

@luisfaceira

@ellmetha Unfortunately, I also think it's not easy to add shared folder support; that was what I tried to point out in my first comment. Adding remote LXD support without shared folders steers completely away from the vagrant-like workflow.

And I'm also arguing that supporting remote daemons should be a core feature for this project.

My argument is not that the main feature of LXD is being a networked daemon. Containers are incredible, and system containers such as LXD's are significantly different from process containers such as Docker's; I get that. It's when compared to LXC that I would say LXD's main feature is being networked. AFAIK, LXC was already image-based and already had snapshots, profiles, etc.

My point is that Vagrant, combined with the vagrant-lxc provider plugin, already provides a solution to "Vagrant is too heavy", and has for a very long time (we've been using it for years). And it's not "a workflow similar to Vagrant": it is the Vagrant workflow, with all its bells and whistles, such as support for other providers (e.g. AWS) and other provisioners (e.g. Chef/Shell) that are very useful to us.

Now, I'm all in favor of alternatives and competition; I'm just trying to share and contribute my own perspective: for this to stand out as a potential vagrant-killer (that's how we looked at it for our own use), it will be essential that it takes advantage of the "networking" part, to solve some of the issues we have with a vagrant-lxc setup and potentially gain additional features:

  • Better cross-platform setup (currently, we have Windows/OSX developers using the "too heavy" vagrant/VirtualBox with the same Vagrantfile - or a full Desktop-Development-Linux-VM)
  • Share a local LXD server across multiple developers, with advantages such as:
    ◦ Reusing space: base images only exist once and, if done properly, whole applications only exist once for multiple developers
    ◦ A much more efficient CI, with multiple small agents doing parallel runs on a fat server
    ◦ Having our builds/tests triggered locally by developers but run on a powerful server (before committing/opening pull requests)
  • Share containers for debugging within the team

To be fair, we would also benefit from not being tied to a version of LXC and a Vagrant plugin that are getting outdated, and we might also (though doubtfully) leverage some of LXD's more recent cool features:

  • Snapshotting running containers
    ◦ In theory, we could snapshot an application server on a breakpoint and repeat debugging multiple times, or have others do it
  • Live container migration
    ◦ "It passed the tests; let's promote it to production"

Don't get me wrong, I applaud this initiative (if I find the time I will even try to contribute to it with more than words).

But while it does not support remote daemons, nor the hardest parts of a "workflow similar to vagrant" (which are likely shared folders and port forwarding from client to container), I would call this LXCdock :) and it's unfortunately not something our team can consider using and contributing to actively.

I hope someone is able to look into remote daemon support. I was pretty excited yesterday when I thought this project had already solved that, that it was a "vagrant workflow for networked LXC (LXD)"!

Keep up the good work!


lingxiaoyang commented Jun 15, 2017

Progress (397e4ab): thanks to LXD's networking facility, it's quite straightforward to add fundamental support. It works well so far.

For shared folders, here's what we may need in the next steps:

(1) Reconsidering the concept of Host. Is it the local host or the remote host? ACL configuration requires executing commands on the remote host; do we ask the user for an SSH command in the LXDock file?

(2) Creating RemoteSharing classes between the local host and the remote host, conceptually similar to the Provisioner classes between host and guest. This would enable us to support multiple types of sharing: rsync, NFS, unison, Vagrant-style, etc. (see the sketch below).
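
A rough sketch of what (2) could look like (class and method names are purely illustrative; nothing like this exists yet):

    import subprocess

    class RemoteSharing:
        """Base: sync a folder from the local host to the remote host."""

        def __init__(self, local_path, remote_host, remote_path):
            self.local_path = local_path
            self.remote_host = remote_host
            self.remote_path = remote_path

        def sync(self):
            raise NotImplementedError

    class RsyncSharing(RemoteSharing):
        """Push-only sharing implemented on top of rsync over SSH."""

        def sync(self):
            subprocess.check_call([
                'rsync', '-az', '--delete',
                self.local_path + '/',
                '{0}:{1}'.format(self.remote_host, self.remote_path),
            ])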

Any ideas?


ghost commented Jun 15, 2017

@lingxiaoyang great!

Regarding (1), I'd say that we should make ACL fiddling local-only and simply issue warnings/errors when we encounter configurations trying to mix ACLs and remote servers.
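
Something along these lines, conceptually (a sketch; the 'set_host_acl' option name is made up):

    import logging

    logger = logging.getLogger(__name__)

    def check_shares(shares, server_is_local):
        """Warn when a config mixes share ACL options with a remote server."""
        for share in shares:
            if share.get('set_host_acl') and not server_is_local:
                logger.warning(
                    'Share %s requests host ACLs, but the LXD server is '
                    'remote; skipping ACL setup.', share.get('source'))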

Regarding (2), yes, totally. But I don't think we need to do that now. Having limitations regarding shared folders when we deal with remote LXD servers seems like a very acceptable situation.

@luisfaceira

I'm excited that someone has been able to take the first steps in this direction! Kudos to @lingxiaoyang

I don't fully understand the code being submitted (I don't have the time for a thorough read), so please excuse me if some of my comments are already answered or addressed by the actual solution in your commit.


(1)

Regarding the semantic doubt (who is the Host?), I think it's wrong to consider the local computer a Host. That was confusing when there was the assumption that the lxc client and the LXD daemon were on the same node (the host was always local), but if this commit finally makes this project agnostic about where the LXD daemon is, then IMHO we always have a local client and a remote host, even if sometimes the host of the containers is reached through 127.0.0.1.

For the project to be well architected and prepared for real remote support, I don't think it's wise to treat "remote" as the exception, but rather as the rule (which still applies when the remote is at 127.0.0.1).

I don't think we need to have the user configure SSH; there's not even a guarantee that the container will have an SSH daemon, and I don't think we need one, because LXD already provides us a way to execute commands inside the container. Using lxc exec should allow executing a provisioning command inside the container (independently of whether the host is local or remote) without needing to set up anything else, shouldn't it?
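
For non-interactive commands, pylxd exposes the same mechanism; a minimal sketch (the container name is illustrative):

    import pylxd

    client = pylxd.Client()  # local or remote endpoint, same API either way
    container = client.containers.get('my-container')  # illustrative name

    # Runs inside the container via the LXD API; no SSH involved.
    # (The return value's shape varies across pylxd versions.)
    result = container.execute(['sh', '-c', 'hostname && id'])
    print(result)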


(2)

Regarding RemoteSharing, I agree with @hsoft that it's possible to take a progressive approach and only have a single type of folder sharing, BUT I think at least one solution is needed for LXDock to be useful. IMO there is no "vagrant workflow" without folder sharing/syncing.


Another thing I'm concerned about is the way the remote servers are to be configured. If I'm understanding your commit correctly (mostly based on the updated docs), you propose that the configuration of the remote server be included in the .lxdock.yml file. I strongly advise against that.

That is a file that is meant to be committed along with the project, and it describes the steps to create the set of virtual environments needed to execute it. But I don't think the answer to the question "on which LXD daemon shall I run this?" is project-specific; it depends on who is running it and where.
For example, when I work at the office, I could have a high-powered server for my dev team to use, but at home I would use the local unix socket instead, and for running the project inside a CI environment in the cloud I would use yet another server in the cloud.
Another example: on my Linux machine I use a local server, but when I go to a Windows machine to develop/test my project with MS's browser (yes, it is possible to use the lxc client on Windows), I use a cloud-hosted LXD as the remote.

So, instead of reading the configuration from the file and automatically adding those remotes to the lxc client's remote list, IMHO it would make more sense to do something closer to the opposite.
My proposal is that LXDock simply use whatever is configured as the default remote in the lxc client (which, in a vanilla installation, is the local unix socket). In my CI environment, I would then simply set a different default, configure SSL settings differently, etc. Whoever wants to run it locally runs it locally; whoever wants to run it remotely configures their client to do so, without affecting the rest of the team by changing a committed file. This would also keep things simpler on the LXDock side, and easier to learn/maintain, wouldn't it?
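
A sketch of that idea, assuming the standard lxc client config layout (a default-remote key plus a remotes map in ~/.config/lxc/config.yml):

    import os
    import yaml  # PyYAML

    def default_remote():
        """Return (name, addr) of the lxc client's default remote.

        Falls back to the local unix socket when no config exists.
        """
        path = os.path.expanduser('~/.config/lxc/config.yml')
        if not os.path.exists(path):
            return 'local', 'unix://'
        with open(path) as f:
            config = yaml.safe_load(f) or {}
        name = config.get('default-remote', 'local')
        addr = config.get('remotes', {}).get(name, {}).get('addr', 'unix://')
        return name, addr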

For a multi-remote situation, I don't see a practical use case, so I wouldn't consider it a priority. But to provide such flexibility, one could simply add a remote config setting to the LXDock file, similar to what is in the commit but providing only the name; the actual configuration would then be done in each environment, on the lxc client itself, instead of in the committed file.


I think both my comment about Host and the one about the .lxdock.yml file express an idea that I want to reinforce: one thing is for (A) LXDock to be local and then "we sort of also support remote"; another is for it to be (B) independent/agnostic of being local or remote, simply using the lxc client against whatever host it is configured for.

I think that going with (A), though tempting, might make it more difficult to eventually become (B), which, as I've expressed before, is what I assumed LXDock to be, and what IMHO it should be for it to be a "vagrant-killer" and for my team, at least, to use it (since Vagrant already supports local LXC, with plugins).


Hope my comments are helpful, keep up the good work!


ghost commented Jun 16, 2017

I hadn't looked at the code yet, but now that I have, I tend to agree with @luisfaceira regarding remote config. I don't see the use case for letting LXDock set up remotes. Let's just add the ability for it to use remotes other than the default one.

@lingxiaoyang

@luisfaceira Thank you for the comments! It's great to have your feedback and know your use cases to eventually improve LXDock.

Regarding (B), more implementation details may need to be discussed, as it conflicts with several localhost assumptions made by LXDock at this moment (e.g., the hostname setting, container IPs, shares...). But I agree with you that we should try to keep the learning curve low and the project maintainable.

@hsoft I'll change this behavior. I think it'll be quite straightforward.

@lingxiaoyang

I reviewed my code (sorry, I forgot a few details since it has been some time) and, in fact, the main reason I added lxc remote add was to call lxc exec for lxdock shell. This is because pylxd doesn't support interactive execution. Otherwise, I really don't need to register an lxc remote name.
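
In other words, the workaround boils down to something like this (names are illustrative):

    import subprocess

    def open_shell(remote_name, container_name):
        """Open an interactive shell by delegating to the lxc CLI.

        pylxd has no interactive exec, so we shell out to lxc exec,
        which is why the remote must first be registered with
        lxc remote add.
        """
        subprocess.call([
            'lxc', 'exec',
            '{0}:{1}'.format(remote_name, container_name),
            '--', 'su', '-l',
        ])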

@lingxiaoyang

Update: I'm working on the pylxd interactive exec (canonical/pylxd#241). If this is accepted and merged, we can completely throw away the lxc remote name and focus on the endpoint!


laymonk commented Aug 25, 2017

I discovered lxdock recently, but the initial elation I felt vanished when I discovered it expected to run locally only. Then I saw this issue thread. Having this feature will make lxdock most attractive indeed, especially if (at least) one folder-sharing scheme is available too. Well done, folks. This is impressive. Kudos to the whole project team for making this possible, and kudos to @luisfaceira for excellently articulating important use cases.

@grsmith-projects

FYI: I am working on a remote version (I forked, and have a PR waiting in my own fork). It's not quite the same as what you guys are going for, and it also breaks almost all the tests. :)

I used SLP and a simple crontab script to populate a node list, and I'm now writing a scheduler around that.
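
The scheduling part is roughly this shape (a sketch; the file path and JSON layout are purely illustrative):

    import json

    def pick_node(node_list_path='/var/cache/lxd-nodes.json'):
        """Pick the least-loaded node from the list the crontab job maintains."""
        with open(node_list_path) as f:
            nodes = json.load(f)['nodes']  # [{"addr": "...", "containers": 3}, ...]
        return min(nodes, key=lambda node: node['containers'])['addr']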

I really like this project!
