This repository (b0ch3nski/vagrant-lxc-k8s) was archived by the owner on Oct 28, 2021. It is now read-only.
vagrant-lxc-k8s

Run multi-node Kubernetes locally using Linux Containers.

Caveats

  • Usage of sudo is required. Containers must start in privileged mode to ensure proper operation of Docker and Kubernetes.
  • Only a single Kubernetes master configuration is currently supported.
  • The Ansible role that initializes Kubernetes is NOT idempotent: there is no easy way to detect already-initialized clusters/nodes with kubeadm.

Requirements

  • LXC >= 2.1
    • configured to use lxcbr0 network bridge
  • Vagrant >= 2.1.5
    • with plugin vagrant-lxc >= 1.4.0
  • Ansible >= 2.6

Optionally, Kubectl and Helm can be installed on the host system to control the cluster running in containers (make sure that their versions match those installed in the containers).
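
The host-side requirements above can be sanity-checked with a small script. This is only a sketch: it reports whether each tool is on PATH and does not verify the minimum versions listed above.

```shell
#!/bin/sh
# Report whether each required (or optional) host tool is available on PATH.
# Version checks are intentionally left out; see the minimum versions above.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "found: $tool"
    else
      echo "missing: $tool"
    fi
  done
}

check_tools lxc-ls vagrant ansible kubectl helm
```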

Configuration

Basic support for profiles is included - the default configuration is available in the default.yml file.

Variable names are pretty much self-explanatory (-:

Use the following environment variable to switch between profiles:

export VAGRANT_LXC_K8S_PROFILE=my-awesome-profile.yml

Any Ansible variable can be overridden in the ansible section.

For example, to install different versions of Kubernetes and Helm, copy default.yml to my-awesome-profile.yml with an addition of:

ansible:
  k8s_version: 1.9.11
  k8s_helm_version: 2.9.1
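
The steps above can be sketched as a small shell snippet. The profile filename matches the example; the `touch` fallback is only a placeholder so the sketch runs outside the repository root:

```shell
# Sketch: derive a custom profile from the default one (run from the repo root)
[ -f default.yml ] || touch default.yml   # placeholder when run outside the repo
cp default.yml my-awesome-profile.yml

# Append the override from the example above
cat >> my-awesome-profile.yml <<'EOF'
ansible:
  k8s_version: 1.9.11
  k8s_helm_version: 2.9.1
EOF

# Point Vagrant at the new profile
export VAGRANT_LXC_K8S_PROFILE=my-awesome-profile.yml
```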

To print all default Ansible variables, execute:

cat vars/global.yml roles/*/defaults/main.yml

Usage

Initialize

Create containers and install Kubernetes:

vagrant up

This command brings up the desired number of containers and deploys the Kubernetes cluster using the Ansible provisioner.

The administrator Kubectl configuration is automatically copied to ~/.kube/config on the host system.

The cluster is fully operational when the kubectl get pods -n kube-system query returns all pods in the Running state.
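
A simple way to wait for that condition is to poll it. This sketch wraps the check in a function (the pod-listing command is passed in as arguments, which also makes it testable); it assumes kubectl is configured as described above, and the 5-second interval is arbitrary:

```shell
#!/bin/sh
# Succeeds when no kube-system pod reports a status other than Running.
# The command producing the pod list is passed as arguments; with
# `kubectl get pods --no-headers`, the STATUS column is field 3.
all_running() {
  ! "$@" 2>/dev/null | awk '{print $3}' | grep -qv '^Running$'
}

# Poll until the cluster is fully operational
until all_running kubectl get pods -n kube-system --no-headers; do
  echo "waiting for kube-system pods to reach Running..."
  sleep 5
done
```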

The Dashboard should be available at the following URL:

http://<K8S-1-CONTAINER-IP>:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
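
To build that URL without looking up the address by hand, the container IP can be queried with lxc-info. The container name k8s-1 is an assumption based on the placeholder above; check `sudo lxc-ls -f` for the real names:

```shell
#!/bin/sh
# Build the Dashboard URL for a given master container IP
dashboard_url() {
  echo "http://$1:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"
}

# Container name "k8s-1" is an assumption; adjust to your profile.
# lxc-info -i prints the IP, -H disables human-readable formatting.
if command -v lxc-info >/dev/null 2>&1; then
  dashboard_url "$(sudo lxc-info -n k8s-1 -iH | head -n1)"
fi
```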

Re-initialize

Reset Kubernetes configuration without recreating LXC containers:

ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbooks/reset_kubernetes.yml
vagrant provision

Hint: Launching the install_kubernetes.yml playbook directly is also possible (with support for Ansible tags).
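
A direct launch might look like this. The inventory path comes from the reset command above; which tags exist depends on the roles, so the sketch lists them first with the standard --list-tags option of ansible-playbook:

```shell
#!/bin/sh
# Helper: run a playbook against the Vagrant-generated inventory
run_playbook() {
  ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory "$@"
}

# List available tags first, then run a subset (tag names depend on the roles)
if command -v ansible-playbook >/dev/null 2>&1; then
  run_playbook playbooks/install_kubernetes.yml --list-tags
  # run_playbook playbooks/install_kubernetes.yml --tags <some-tag>
fi
```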

Status

Check container status:

vagrant status
sudo lxc-ls -f

Hint: The latter also prints IP addresses.

Shutdown

Stop all containers:

vagrant halt

Hint: Append the --force parameter to force shutdown.

Removal

Remove all containers:

vagrant destroy

Hint: Append the --parallel parameter to speed up removal.

Troubleshooting

Enable verbose logging:

export VAGRANT_LOG=debug
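
Vagrant writes its log output to stderr, so the full debug log of a single run can be captured to a file for inspection (the file name is just an example):

```shell
# Capture the debug log of one command to a file; `|| true` keeps a failed
# run from aborting a calling script
VAGRANT_LOG=debug vagrant status 2> vagrant-debug.log || true
```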