Using non-Talos nodes with a Talos K8s Cluster
Talos Linux is an operating system built to run one thing: Kubernetes. After using kubeadm on Debian, Talos is, in my opinion, the quickest way to bootstrap a Kubernetes cluster on bare metal.
When running a Talos cluster, however, you might want to join worker nodes that can't run Talos, such as a kubelet node on a Raspberry Pi 5 (at the time of writing, the RPi 5 lacks the U-Boot support required to boot Talos).
In that case, you can still join a node running a separate distro (here: Raspberry Pi OS) to the Talos cluster with a few straightforward, albeit hacky, steps.
Prerequisites
- a running Talos cluster (follow the Talos - Getting Started guide)
- a running target node to join, with containerd and kubelet installed (here for an RPi 5, follow this guide; see the quick check below)
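Before going further, you can quickly confirm both are present on the target (assuming $TARGET is set to the node's IP/hostname, as in the scripts that follow, and that both binaries are on the target's PATH):
ssh root@$TARGET "containerd --version && kubelet --version"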
Retrieving Talos Kubernetes config
(adapted with thanks from siderolabs/talos/issues#3990)
To start, we’ll fetch the configuration of the existing Talos cluster:
#!/bin/bash -e
# NOTE: set $VIP and $TARGET before running
# $VIP is the IP of a control-plane node running Talos
# $TARGET is the IP/hostname of the machine to install these files on
talosctl -n "$VIP" cat /etc/kubernetes/kubeconfig-kubelet > ./kubelet.conf
talosctl -n "$VIP" cat /etc/kubernetes/bootstrap-kubeconfig > ./bootstrap-kubelet.conf
talosctl -n "$VIP" cat /etc/kubernetes/pki/ca.crt > ./ca.crt
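To sanity-check the fetched files, you can inspect the CA certificate (assuming openssl is available on your workstation):
openssl x509 -in ./ca.crt -noout -subject -enddate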
Next, modify both kubeconfigs so they point at the control plane endpoint:
sed -i "/server:/ s|:.*|: https://${VIP}:6443|g" \
./kubelet.conf \
./bootstrap-kubelet.conf
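You can confirm both files now point at the VIP:
grep "server:" ./kubelet.conf ./bootstrap-kubelet.conf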
Fetch the cluster domain and DNS:
clusterDomain=$(talosctl -n "$VIP" get kubeletconfig -o jsonpath="{.spec.clusterDomain}")
clusterDNS=$(talosctl -n "$VIP" get kubeletconfig -o jsonpath="{.spec.clusterDNS}")
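For reference, on a stock cluster these usually resolve to the Kubernetes defaults (your values may differ):
echo "clusterDomain=$clusterDomain"  # typically cluster.local
echo "clusterDNS=$clusterDNS"        # typically the in-cluster DNS service IP, e.g. 10.96.0.10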
Create a kubelet configuration file with our Talos cluster domain & DNS. We'll assume your node is running containerd as its container runtime:
cat > var-lib-kubelet-config.yaml <<EOT
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
clusterDomain: "$clusterDomain"
clusterDNS: $clusterDNS
runtimeRequestTimeout: "0s"
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock
EOT
Next, let's copy all this configuration over to the target node:
scp bootstrap-kubelet.conf root@$TARGET:/etc/kubernetes/bootstrap-kubelet.conf
scp kubelet.conf root@$TARGET:/etc/kubernetes/kubelet.conf
ssh root@$TARGET "mkdir -p /etc/kubernetes/pki"
scp ca.crt root@$TARGET:/etc/kubernetes/pki/ca.crt
scp var-lib-kubelet-config.yaml root@$TARGET:/var/lib/kubelet/config.yaml
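The kubelet on the target must actually be pointed at these files. If you installed kubelet from the Kubernetes apt repository, its default systemd drop-in already references these exact paths; otherwise, a minimal drop-in along these lines should work (a sketch, assuming a systemd-managed kubelet installed at /usr/bin/kubelet; the drop-in filename is arbitrary):
cat > kubelet-dropin.conf <<'EOT'
[Service]
# Reset any packaged ExecStart, then point kubelet at the Talos-issued configs
ExecStart=
ExecStart=/usr/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --config=/var/lib/kubelet/config.yaml
EOT
ssh root@$TARGET "mkdir -p /etc/systemd/system/kubelet.service.d"
scp kubelet-dropin.conf root@$TARGET:/etc/systemd/system/kubelet.service.d/20-talos-join.conf
ssh root@$TARGET "systemctl daemon-reload && systemctl restart kubelet"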
Proxying KubePrism traffic
Whether you're using Flannel or Cilium as your CNI, KubePrism won't be running on your non-Talos nodes, so your CNI can't reach the Kubernetes API through the local TCP load balancer it expects on localhost:7445.
To bypass this, we'll stand up a small HAProxy service on the target node:
ssh root@$TARGET "apt-get update && apt-get install -y haproxy"
cat > haproxy_patch.cfg <<EOT
frontend kubeprism
mode tcp
bind localhost:7445
default_backend k8s_api
backend k8s_api
mode tcp
server lb $VIP:6443
EOT
scp haproxy_patch.cfg root@$TARGET:/etc/haproxy/haproxy.cfg
ssh root@$TARGET "systemctl restart haproxy"
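To check that the proxy is forwarding correctly, you can query the API server's version endpoint through it (unauthenticated access to /version is permitted by default Kubernetes RBAC):
ssh root@$TARGET "curl -ks https://localhost:7445/version"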
Lastly, if you're running on a Raspberry Pi, enable cgroup limits:
ssh pi@raspberrypi.local
sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1/' /boot/firmware/cmdline.txt
sudo reboot
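After the reboot, you can confirm the controllers were picked up (assuming the cgroup v2 layout that current Raspberry Pi OS uses):
ssh pi@raspberrypi.local "cat /sys/fs/cgroup/cgroup.controllers"
# output should now include cpuset and memory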
If everything has been configured correctly, you should now see a Ready non-Talos node in your Talos cluster:
kubectl get nodes
NAME            STATUS   ROLES           AGE   VERSION
raspberrypi     Ready    <none>          98m   v1.32.2
talos-pij-o3h   Ready    <none>          8h    v1.32.2
talos-rur-zf3   Ready    control-plane   8h    v1.32.2