twogg-git/k8s-intro


Note: To follow this tutorial we are going to use Katacoda Single-Node-Cluster, an online Kubernetes/Minikube playground. If you want to try these exercises locally, here is the Minikube setup link.

Start a Kubernetes cluster with Minikube

We are going to use Minikube to deploy Kubernetes locally. It comes with support for a variety of hypervisors, including VirtualBox, VMware Fusion, KVM, and xhyve, and operating systems, including macOS, Windows, and Linux.

The Minikube CLI can be used to start, stop, delete, obtain status, and perform other actions on the virtual machine.

# Start a virtual machine with Minikube; a Kubernetes cluster will then be running in that VM
minikube start

# The dashboard allows you to view your applications in a UI 
minikube dashboard
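
The other lifecycle actions mentioned above map to their own subcommands. A minimal sketch (run these only when you want to pause or discard the local cluster):

# Check the status of the VM and the cluster
minikube status

# Stop the VM without deleting the cluster state
minikube stop

# Delete the VM and the cluster entirely
minikube delete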

Basic commands

Kubectl is the native CLI for Kubernetes; it performs actions on the Kubernetes cluster once the Minikube virtual machine has been started.

# Display addresses of the master and services with label kubernetes.io/cluster-service=true 
kubectl cluster-info

# To further debug and diagnose cluster problems use
kubectl cluster-info dump

# Show all nodes in the cluster; only nodes in "Ready" status will accept applications for deployment
kubectl get nodes

# Outputs the most important information about all the resources available
kubectl get all 

Deployment Commands

Now, we are going to practice deployment and management commands with a Golang app image stored in Docker Hub.

kubectl run twogg --image=twogghub/k8s-intro:1.4-k8s

The run command creates a new deployment. Here we include the deployment name and the app image location (include the full repository URL for images hosted outside Docker Hub).
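
For example, pulling from a registry other than Docker Hub would look something like this (the registry host and image name below are hypothetical; the rest of the tutorial keeps using the twogg deployment created above):

# Hypothetical example: deploy an image hosted on a private registry
kubectl run my-app --image=registry.example.com/team/my-app:1.0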

This command performed a few things for you:

  • Searched for a suitable node where an instance of the application could be run (we have only 1 available node)
  • Scheduled the application to run on that Node
  • Configured the cluster to reschedule the instance on a new Node when needed
kubectl get deployments

In this case, there is 1 deployment running a single instance of twogg. (The instance is running inside a Docker container on that node.) Pods that are running inside Kubernetes are running on a private, isolated network. By default they are visible from other pods and services within the same Kubernetes cluster, but not outside that network. When we use kubectl, we're interacting through an API endpoint to communicate with our application.
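
To see what the deployment actually created (its replica count, the pod template, and recent events), a describe call is handy; this is an optional extra step, not part of the original walkthrough:

# Inspect the deployment's replica count, pod template, and events
kubectl describe deployment twogg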

Cluster IP Exposure

The following are the exposure types that Kubernetes provides for your applications:

  • ClusterIP: Exposes the service on a cluster-internal IP, reachable inside the k8s cluster as ip/name:port
  • NodePort: Exposes the service on each node's (VM's) IP at a static port, so it is also reachable from outside the k8s cluster as ip/name:port
  • LoadBalancer: Exposes the service to external connections through whatever you defined in your load balancer setup.
kubectl expose deployment twogg --port=8080 --external-ip=$(minikube ip) --type=LoadBalancer

Now, we are going to deploy a service named twogg accessible on port 8080, and expose an IP to access it from outside the cluster. We also need to obtain and set an external IP from Minikube, which is what $(minikube ip) does in the command above.
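
As an alternative sketch (not part of the original walkthrough, and only if you skip the LoadBalancer expose above, since the service name must be unique), the same deployment could be exposed as a NodePort service and reached through the Minikube VM's IP:

# Alternative: expose the deployment as a NodePort service
kubectl expose deployment twogg --port=8080 --type=NodePort

# Ask Minikube for the URL where the service is reachable
minikube service twogg --url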

kubectl get pods,services --output wide

To get detailed information on both the running pods and services, use the --output wide flag. This flag is available on most kubectl get commands.

Update and Scale Commands

# First we are going to get more info from our deployed service
kubectl describe service twogg

# Now we set a new image to be deployed 
kubectl set image deployment twogg twogg=twogghub/k8s-intro:1.5-k8s

# And we are going to scale our pods to 3 replicas in our service 
kubectl scale --replicas=3 deployment twogg

# Check the deployment changes with 
kubectl get deployment twogg --output wide
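
To follow how the new image rolls out across the replicas (and roll it back if something goes wrong), kubectl's rollout subcommand can be used; this is a small optional addition to the walkthrough:

# Watch the rollout of the new image until it completes
kubectl rollout status deployment twogg

# Roll back to the previous image if needed
kubectl rollout undo deployment twogg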

Pod interaction commands

# To get all pods detailed output
kubectl get pods --output wide 

# To keep watching all pods, streaming changes until you press Ctrl+C
kubectl get pods --output wide --watch

# Review a pod's logs
kubectl logs <pod-name>

# Get pod's info output in a yaml file
kubectl get pod <pod-name> --output=yaml

# To open an interactive shell inside the pod
kubectl exec -ti <pod-name> -- /bin/bash

Kubectl Proxy

Note: Please try these commands locally; the Katacoda Kubernetes playground does not support the Kubernetes proxy.

This kubectl command creates a proxy that forwards communications into the cluster-wide, private network. The proxy won't show any output while it's running, and it can be terminated by pressing Ctrl+C.

kubectl proxy

With the proxy running now we have a connection between our host and the Kubernetes cluster.

The API server will automatically create an endpoint for each pod, based on the pod name.

# First we need to get the pod name(s); we'll store them in the environment variable POD_NAME
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')

# To output our pod names
echo $POD_NAME
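
Since the deployment was scaled to 3 replicas earlier, the template above prints one name per line; a small optional tweak keeps only the first pod name:

# Optional: keep only the first pod name
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | head -n 1)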

To make an HTTP request to the application running in that pod: http://localhost:8001/api/v1/proxy/namespaces/default/pods/$POD_NAME/
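
With the proxy still running in another terminal, that request can be made with curl against the same URL shown above:

# Query the application through the kubectl proxy (assumes the proxy is still listening on port 8001)
curl http://localhost:8001/api/v1/proxy/namespaces/default/pods/$POD_NAME/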

Cleaning up your cluster

Every pod is generated from its deployment: every time you delete a pod, a new one comes up again because you defined 'replicas: X' in the deployment. To delete a pod permanently, you first have to delete the deployment that manages it and then delete the pod.

# To delete a specific pod
kubectl delete pod <pod-name>

# This deletes the deployment itself permanently, along with its pod(s)
kubectl delete deployment <deploy-name>

# Alternatively, you can delete the resources defined in a deployment file
kubectl delete -f deployment_file_name.yml

# To delete a specific service
kubectl delete service <service-name>

# To delete ALL pods, services, and deployments in your workspace
kubectl delete pods,services,deployments --all