infrastructure

Configuration files, scripts, Ansible playbooks, and information needed to establish the fundamental components of my infrastructure, such as Proxmox and a Kubernetes cluster.

Kubernetes

Prerequisites

  • Ansible installed on your local machine.
  • SSH access to the target machines where you want to install k3s.
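
A quick sanity check for both prerequisites; the user and host names are placeholders for whatever is in your inventory:

ansible --version
ssh <user>@<target-host> 'hostname'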

Add SSH keys on the local device

  1. Copy the private and public key:
cp /path/to/my/key/ansible ~/.ssh/ansible
cp /path/to/my/key/ansible.pub ~/.ssh/ansible.pub
  2. Change the permissions on the files:
sudo chmod 600 ~/.ssh/ansible
sudo chmod 600 ~/.ssh/ansible.pub
  3. Make the ssh agent use the copied key:
ssh-add ~/.ssh/ansible
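
If no agent is running yet, start one first; ssh-add -l should then list the fingerprint of the ansible key:

eval "$(ssh-agent -s)"
ssh-add -l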

Setting up VM hosts on Proxmox

  1. Create a VM from the template and boot it up.

  2. Log in to the VM and set the hostname.

    hostnamectl set-hostname <hostname>
  3. Reboot.

  4. Set a static IP for the VM in the DHCP server.

  5. Reboot.

  6. Set this hostname in the Ansible inventory hosts.ini file (see the example layout below).
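
A minimal sketch of what hosts.ini might look like; the hostnames are placeholders and the group names are an assumption, so align them with the groups the playbooks in this repository expect:

# group names below are an assumption -- match them to what the playbooks use
[master]
k3s-master-01

[node]
k3s-worker-01
k3s-worker-02

[k3s_cluster:children]
master
node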

Install k3s

Usage

  1. Clone this repository to your local machine:
git clone https://github.com/theautomation/infrastructure.git
  2. Modify the inventory file inventory.ini to specify the target machines where you want to install k3s. Ensure you have the appropriate SSH access and privileges.
  3. Run one of the following Ansible playbooks to initialize the k3s master and optional worker nodes (a quick post-install check is sketched after this list):
    1. Bare installation: execute the k3s_install_cluster_bare.yaml playbook to install a clean cluster without additional deployments:
      sudo ansible-playbook --ask-vault-pass -kK k3s_install_cluster_bare.yaml

    2. Minimal installation with additional deployments: execute the k3s_install_cluster_minimal.yaml playbook to install the bare minimum plus additional deployments:
      sudo ansible-playbook --ask-vault-pass -kK k3s_install_cluster_minimal.yaml
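
Once a playbook has finished, a quick check that all nodes joined the cluster is to list them on the master with the kubectl that k3s embeds (or locally, after copying the cluster's kubeconfig to ~/.kube/config):

sudo k3s kubectl get nodes -o wide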
      

Local initialization and setup

Flush the DNS cache; this is needed when the same hostnames were used by previous machines (optional).

resolvectl flush-caches

Kubectl

Install kubectl on your local machine; the official Kubernetes documentation describes how to install kubectl on Linux.
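
For example, the upstream-documented way to fetch and install the latest stable binary on Linux amd64 looks like this:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client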

Bitnami Kubeseal

Install kubeseal locally to use Bitnami sealed secrets in k8s.

sudo wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.19.1/kubeseal-0.19.1-linux-amd64.tar.gz

sudo tar xzvf kubeseal-0.19.1-linux-amd64.tar.gz

sudo install -m 755 kubeseal /usr/local/bin/kubeseal
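
To confirm the binary is on the PATH and working:

kubeseal --version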

Kubernetes Cheatsheet

Drain and terminate all pods gracefully on the node while marking the node as unschedulable

kubectl drain --ignore-daemonsets --delete-emptydir-data <nodename>

Make the node unschedulable

kubectl cordon <nodename>

Make the node schedulable

kubectl uncordon <nodename>

Encode a value to base64

echo -n '<value>' | base64
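
And the reverse, decoding a base64 value:

echo -n '<base64_value>' | base64 --decode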

Show a secret's data (e.g. config file contents); the values are base64-encoded

kubectl get secret <secret_name> -o jsonpath='{.data}' -n <namespace>
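
To pull out a single key and decode it in one go (replace <key> with the key name inside .data):

kubectl get secret <secret_name> -n <namespace> -o jsonpath='{.data.<key>}' | base64 --decode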

Create secret from file

kubectl create secret generic <secret name> --from-file=<secret filelocation> --dry-run=client  --output=yaml > secrets.yaml
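
The --dry-run output is only written to secrets.yaml; applying it to the cluster is a separate step:

kubectl apply -f secrets.yaml -n <namespace>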

Restart pods of a deployment

kubectl rollout restart deployment <deployment name> -n <namespace>

Change PV reclaim policy

kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

Shell into pod

kubectl exec -it <pod_name> -- /bin/bash

Copy to or from pod

kubectl cp <namespace>/<pod>:/tmp/foo /tmp/bar
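
The reverse direction (local file into the pod) uses the same syntax with the arguments swapped:

kubectl cp /tmp/bar <namespace>/<pod>:/tmp/foo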

Reuse PV in PVC

  1. Remove the claimRef in the PV; this will set the PV status from Released to Available.
kubectl patch pv <pv_name> -p '{"spec":{"claimRef": null}}'
  2. Add volumeName in the PVC:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-iscsi-home-assistant-data
  namespace: home-automation
  labels:
    app: home-assistant
spec:
  storageClassName: freenas-iscsi-csi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: pvc-iscsi-home-assistant-data
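
After applying the PVC, the PV should go from Available to Bound; a quick check using the names from the example above (the manifest filename is a placeholder):

kubectl apply -f <pvc-manifest>.yaml
kubectl get pvc pvc-iscsi-home-assistant-data -n home-automation
kubectl get pv pvc-iscsi-home-assistant-data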

Bitnami Sealed Secret

Create TLS (unencrypted) secret

kubectl create secret tls cloudflare-tls --key origin-ca.pk --cert origin-ca.crt --dry-run=client -o yaml > cloudflare-tls.yaml

Encrypt secret with custom public certificate.

kubeseal --cert "./sealed-secret-tls-2.crt" --format=yaml < secret.yaml > sealed-secret.yaml

Add a sealed value to an existing sealed-secret file

echo -n <mypassword_value> | kubectl create secret generic <secretname> --dry-run=client --from-file=<password_key>=/dev/stdin -o json | kubeseal --cert ./sealed-secret-tls-2.crt -o yaml \
-n democratic-csi --merge-into <secret>.yaml

Raw sealed secret

strict scope (default):

echo -n foo | kubeseal --raw --from-file=/dev/stdin --namespace bar --name mysecret
AgBChHUWLMx...

namespace-wide scope:

echo -n foo | kubeseal --cert ./sealed-secret-tls-2.crt --raw --from-file=/dev/stdin --namespace bar --scope namespace-wide
AgAbbFNkM54...

cluster-wide scope:

echo -n foo | kubeseal --cert ./sealed-secret-tls-2.crt --raw --from-file=/dev/stdin --scope cluster-wide
AgAjLKpIYV+...
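
The raw output is pasted into a SealedSecret manifest under spec.encryptedData; a minimal sketch reusing the name and namespace from the strict-scope example above (the key name under encryptedData is your choice):

---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: mysecret
  namespace: bar
spec:
  encryptedData:
    <key_name>: AgBChHUWLMx...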

Include the sealedsecrets.bitnami.com/namespace-wide annotation in the SealedSecret

metadata:
  annotations:
    sealedsecrets.bitnami.com/namespace-wide: "true"

Include the sealedsecrets.bitnami.com/cluster-wide annotation in the SealedSecret

metadata:
  annotations:
    sealedsecrets.bitnami.com/cluster-wide: "true"

  • Github
  • AWS Bitnami tutorial
  • Blogpost Tutorial

Node Feature Discovery

Show node labels

kubectl get no -o json | jq .items[].metadata.labels
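
To show just one label as a column instead of the full JSON (the label below is one that NFD commonly sets; adjust it to whatever is present on the nodes):

kubectl get nodes -L feature.node.kubernetes.io/cpu-model.vendor_id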

Rsync

rsync exact copy

sudo rsync -axHAWXS --numeric-ids --info=progress2 /mnt/sourcePart/ /mnt/destPart

ISCSI

Discover targets on the iSCSI server

sudo iscsiadm --mode discovery -t sendtargets --portal storage-server-lagg.lan.theautomation.nl

Mount disk

sudo iscsiadm --mode node --targetname iqn.2005-10.org.freenas.ctl:<disk-name> --portal storage-server-lagg.lan.theautomation.nl --login

Unmount disk

sudo iscsiadm --mode node --targetname iqn.2005-10.org.freenas.ctl:<disk-name> --portal storage-server-lagg.lan.theautomation.nl -u

Repair iSCSI share

  1. SSH into one of the nodes in the cluster and start discovery
    sudo iscsiadm -m discovery -t st -p storage-server-lagg.lan.theautomation.nl
  2. Login to target
    sudo iscsiadm --mode node --targetname iqn.2005-10.org.freenas.ctl:<disk-name> --portal storage-server-lagg.lan.theautomation.nl --login
  3. See the device using lsblk. In the following steps, assume sdd is the device name.
  4. Create a local mount point and mount the device to replay the logfile
    sudo mkdir -vp /mnt/data-0 && sudo mount /dev/sdd /mnt/data-0/
  5. Unmount the device
    sudo umount /mnt/data-0/
  6. Run a check and ncheck
    sudo xfs_repair -n /dev/sdd; sudo xfs_ncheck /dev/sdd
    If filesystem corruption was corrected by replaying the logfile, xfs_ncheck should produce a list of inodes and pathnames instead of the error log.
  7. If needed, run xfs_repair
    sudo xfs_repair /dev/sdd
  8. Logout from target
    sudo iscsiadm --mode node --targetname iqn.2005-10.org.freenas.ctl:<disk-name> --portal storage-server-lagg.lan.theautomation.nl --logout
  9. Volumes are now ready to be mounted as PVCs.

TrueNAS

Rename volume

zfs rename r01_1tb/k8s/<current_zvol_name> r01_1tb/k8s/<new_zvol_name>
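
To confirm the rename, list the zvols under the same dataset:

zfs list -t volume -r r01_1tb/k8s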
