Local Kubernetes cluster with Vagrant and Ansible

A local Kubernetes cluster with a single master node and multiple worker nodes, provisioned with Vagrant and Ansible.

Requirements

You need the following installed on the host machine:

  • Vagrant
  • Ansible
  • A Vagrant-supported VM provider (e.g. VirtualBox)
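
To confirm the host tools are available:

# Verify the host-side tooling
vagrant --version
ansible --version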

Setup

  • x1 Master Node

    • hostname: k8s-master
    • memory: 2GiB
    • cpus: 2
    • ip: 192.168.50.10
  • x2 Worker Nodes (can be changed by modifying NUM_WORKERS in the Vagrantfile)

    • hostnames: node-1, node-2
    • memory: 2GiB
    • cpus: 2
    • ips: 192.168.50.11, 192.168.50.12
  • Network: 192.168.50.0/24

  • Pod Cluster Network: 10.88.0.0/16
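
Once the machines are up, you can sanity-check this topology from the host:

# List the VMs Vagrant manages for this project
vagrant status

# The master should answer on the private network
ping -c 1 192.168.50.10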

Versions

All versions are parameterized. You can edit them in the Vagrantfile.
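
Once the cluster is built, you can check which versions actually got installed from the master node:

# Show the client and server Kubernetes versions
kubectl version

# Show the kubeadm build used to bootstrap the cluster
kubeadm version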

Building the Cluster

To build the whole cluster:

  1. cd into the repository
  2. Run: vagrant up (this will also execute the Ansible playbooks)
  3. Wait for the whole process to finish
  4. Run: vagrant ssh k8s-master to ssh into the master node
  5. Run: kubectl get nodes to check that all the nodes are registered and Ready (this may take up to a minute)
  6. Run: kubectl get pods --all-namespaces and check that all pods are Running
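
The same steps as a condensed shell session:

# Bring up the VMs and run the Ansible playbooks
vagrant up

# SSH into the master node
vagrant ssh k8s-master

# From the master: all nodes should eventually report Ready
kubectl get nodes

# All system pods (etcd, kube-apiserver, CNI, ...) should be Running
kubectl get pods --all-namespaces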

Context

The Ansible playbooks follow the official Kubernetes documentation for creating a cluster with kubeadm.

The hardware requirements for kubeadm are set in the Vagrantfile.

Containerd and its dependencies are installed from source, because the official Ubuntu repo version is broken.

  • If your pods hang on deletion or show similar symptoms, it might be due to the Ubuntu-packaged containerd version.
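
To double-check which containerd build a node ended up with:

# Print the containerd version installed on the master
vagrant ssh k8s-master -c "containerd --version"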

After building the cluster, the kubernetes-setup folder will contain the kubectl config file named kube-config, in case you want to copy it to your host machine. Otherwise, you can run kubectl on the master node, where it is already configured.
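
For example, to use the copied config from the host (assuming kubectl is installed there):

# Point kubectl at the generated config for this shell session
export KUBECONFIG=$PWD/kubernetes-setup/kube-config
kubectl get nodes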

To ssh into the master, use:

vagrant ssh k8s-master

Deploying Pods

After vagrant up completes successfully, you can deploy pods. Here's an example of two nginx Pods with a NodePort Service load balancing between them.

# Deploy the pods
kubectl run --image=nginx --port=80 nginx01
kubectl run --image=nginx --port=80 nginx02

# Check that the pods are running and note on which node they are deployed
kubectl get pods -o wide

# Tag them with some common labels (e.g. name=myapp)
kubectl label pod/nginx01 name=myapp
kubectl label pod/nginx02 name=myapp

# Create a NodePort Service with cluster port 35321 (the external node port is auto-assigned)
kubectl create service nodeport myapp --tcp=35321:80

# Specify the selector to be name=myapp so that it loadbalances between the nginx pods
kubectl set selector svc/myapp name=myapp

# See the node port it listens on
# At the line "NodePort:" see the second value with format <PORT>/TCP
kubectl describe svc/myapp
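
Alternatively, you can print just the allocated node port:

# Extract the node port directly from the Service spec
kubectl get svc myapp -o jsonpath='{.spec.ports[0].nodePort}'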

At this point, we have a NodePort Service listening on <PORT>. We can now access the nginx pods from our browser using any of the nodes' IP addresses.

For example, in your browser type: 192.168.50.11:<PORT>
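
Or run the same check from a terminal, substituting the node port you found above:

# Fetch the nginx welcome page through the node port
curl http://192.168.50.11:<PORT>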

But to see that the Service actually load balances between the two pods, we can edit each pod's nginx index.html file.

# Enter pod/nginx01's shell
kubectl exec -it nginx01 -- bash

# Inside the pod: change nginx to nginx01, then leave the shell
sed -i 's/nginx!/nginx01!/' /usr/share/nginx/html/index.html
exit

# Enter pod/nginx02's shell
kubectl exec -it nginx02 -- bash

# Inside the pod: change nginx to nginx02, then leave the shell
sed -i 's/nginx!/nginx02!/' /usr/share/nginx/html/index.html
exit

Now go back to the browser and (hard) reload the page. You should see the page switch between Welcome to nginx01! and Welcome to nginx02!.
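
The same alternation is visible from the terminal; over a handful of requests both pods should show up:

# The title line should switch between nginx01 and nginx02
for i in 1 2 3 4 5; do curl -s http://192.168.50.11:<PORT> | grep '<title>'; done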

Hurray! We have a working cluster :)
