
Kubernetes - Kubeadm - Cluster Setup

Prerequisites:

Make sure each node has a unique MAC address and product_uuid:


ip link
sudo cat /sys/class/dmi/id/product_uuid

Load the br_netfilter module at boot so bridged traffic is visible to iptables:


cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
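
To load the module immediately without rebooting, and to verify it is loaded:


sudo modprobe br_netfilter
lsmod | grep br_netfilter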

Let iptables see bridged traffic, for both IPv4 and IPv6:


cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Apply the settings without rebooting:


sudo sysctl --system
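
Verify that both settings took effect ( each should print 1 ):


sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables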

Remove Swap

Disabling / removing swap space is required. By default the kubelet will refuse to start if swap is enabled.

Required:


sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab
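
Verify that swap is off ( swapon should print nothing ):


swapon --show
free -h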

Optional ( reclaim the space if your system used a swap file; the path may differ ):


sudo rm /swap.img

Install Kubeadm / Kubelet / Kubectl

Install these:


sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

Download the repo key ( the first, commented-out command is for the old, deprecated apt.kubernetes.io repo ):


#sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg


Add the repo ( again, the commented-out command is for the old, deprecated repo ):


#echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Install and pin versions:


sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
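
Verify the installed versions:


kubeadm version
kubectl version --client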

Cgroup Driver

The kubelet and the container runtime must use the same cgroup driver. On systemd-based hosts the systemd driver is recommended; the steps below set it for both the kubelet and Docker.

Check system cgroup version ( v1 or v2 ):


grep cgroup /proc/filesystems

If you see cgroup2 in the output it is available:


nodev   cgroup
nodev   cgroup2
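
Another check: see which filesystem is mounted at /sys/fs/cgroup ( cgroup2fs means v2, tmpfs means v1 ):


stat -fc %T /sys/fs/cgroup/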


Kubernetes Cgroup Driver

Note: don't run kubeadm init at this point; it is run later in the Setup section. Also, since kubeadm v1.22 the kubelet cgroup driver defaults to systemd when none is set, so this config file may not be strictly necessary.

Explicitly set the cgroup driver to systemd with a minimal config ( kubernetesVersion should match your kubeadm/kubelet version ):


# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.28.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

Use this config when you run init:


sudo kubeadm init --config kubeadm-config.yaml
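
After running init you can confirm which driver the kubelet ended up with ( kubeadm writes the kubelet config to /var/lib/kubelet/config.yaml ):


sudo grep cgroupDriver /var/lib/kubelet/config.yaml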


Docker Cgroup Driver

Check the Docker service and see which driver it is using:

 
systemctl status docker

Edit the service config and add options to the ExecStart line:


sudo vi /lib/systemd/system/docker.service

Replace this line:


ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock 

With this:


ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
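
Alternatively, set the driver in /etc/docker/daemon.json instead of editing the unit file ( unit files under /lib/systemd can be overwritten by package upgrades ). Don't set it in both places, or Docker will fail to start with a conflicting-option error:


cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF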


Restart Docker:


sudo systemctl daemon-reload
sudo systemctl restart docker

Verify the cgroup driver in use by Docker:


docker info | grep -i cgroup

Want to see this:


 Cgroup Driver: systemd
 Cgroup Version: 1

Not this:


 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 

Setup

On your first server, add a hosts entry for the cluster endpoint ( later this name can point at a VIP / load balancer for your cluster ) and initialize:



echo "192.168.3.214 cluster-endpoint" | sudo tee -a /etc/hosts

sudo kubeadm init --control-plane-endpoint=cluster-endpoint --pod-network-cidr=10.244.0.0/16

Also add a matching entry on each node, adjusted like this ( or add it in DNS ):

/etc/hosts
192.168.3.221 cluster-endpoint kube-test1

As a regular user:


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

echo "export KUBECONFIG=$HOME/.kube/config" >> ~/.bashrc     # might be optional

There are many pod network add-ons to choose from ( Flannel and Calico are two popular options ). You will generally use a command that looks like this to set up your pod network:


kubectl apply -f podnetwork.yaml

I chose Flannel, and this is what I used:


kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

Check that the CoreDNS pods are Running. That indicates pod networking is working.


kubectl get pods --all-namespaces

Output should look like this:


user1@swan1:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-64897985d-ppfwj         1/1     Running   0          24m
kube-system   coredns-64897985d-txh7j         1/1     Running   0          24m
kube-system   etcd-swan1                      1/1     Running   1          25m
kube-system   kube-apiserver-swan1            1/1     Running   1          25m
kube-system   kube-controller-manager-swan1   1/1     Running   1          25m
kube-system   kube-flannel-ds-gzqmq           1/1     Running   0          71s
kube-system   kube-proxy-dm4ln                1/1     Running   0          24m
kube-system   kube-scheduler-swan1            1/1     Running   1          25m

SKIP THIS ( probably ) - Turn off control plane isolation so that you can run pods on the same node as the control plane:


kubectl taint nodes --all node-role.kubernetes.io/control-plane-
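
Verify that the taint is gone:


kubectl describe node <node name> | grep -i taint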

Worker Node

Get the token from the control plane host if you don’t have it:


kubeadm token list

Create a new token if it has expired ( tokens expire after 24 hours ):


kubeadm token create
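
You can also have kubeadm print a complete, ready-to-run join command ( token and cert hash included ):


kubeadm token create --print-join-command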

Get the cert hash if you don’t have it:


openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'

On a worker node:


sudo su - 
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
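
Back on the control plane, confirm that the new node joined:


kubectl get nodes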

Watch control plane pods start up:


kubectl get pod -n kube-system -w

In case you need to re-upload the certs after the certificate key's 2 hour expiration:


sudo kubeadm init phase upload-certs --upload-certs

Join other control plane nodes:


sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07

Join worker nodes:


sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866

See the official docs for external etcd setup or manual cert distribution.

Other

Rebalance CoreDNS after another control-plane node has joined:


kubectl -n kube-system rollout restart deployment coredns 

Use kubectl from a client host:


scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
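
Or point KUBECONFIG at the copied file for your session:


export KUBECONFIG=$PWD/admin.conf
kubectl get nodes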

With kubectl proxy running, access the API server here: http://localhost:8001/api/v1

Deprovision cluster:

Drain and reset:


kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
kubeadm reset

Clean up iptables and IPVS state:


sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
sudo ipvsadm -C

Delete node and reset:


kubectl delete node <node name>
sudo kubeadm reset

HA Cluster

Steps:

Initialize:


sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs

--upload-certs uploads the control plane certs to the cluster so that you don't need to distribute them to the other control plane nodes manually.

Dashboard


kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

kubectl proxy  

Access the dashboard through the proxy like this: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
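
The dashboard login screen asks for a bearer token. A minimal sketch, assuming you are OK granting cluster-admin and using an example ServiceAccount named admin-user:


kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
kubectl -n kubernetes-dashboard create token admin-user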

More

Checking the kubelet:


systemctl status kubelet
journalctl -xeu kubelet

This was not working for me:


kubectl port-forward service/mongo 28015:27017

Fix Issues

Fix problems caused by Docker restarting:


sudo systemctl restart docker.service
sudo kubeadm reset

The following error might be caused by CPU / memory pressure:



Pod sandbox changed, it will be killed and re-created.

Editing the CoreDNS forwarder sometimes fixes connectivity:



kubectl edit configmap coredns -n kube-system

Replace this line:

        forward . /etc/resolv.conf {

With this:

        forward . 8.8.8.8 {
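
Then restart CoreDNS so the change takes effect:


kubectl -n kube-system rollout restart deployment coredns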

The kubelet controls the other cluster components. Restart it whenever you can't connect with kubectl because all services, including the API server, are down:



sudo systemctl restart kubelet

More Info

Configs here:



/etc/kubernetes/*
/etc/kubernetes/admin.conf
