Create Kubernetes cluster on CentOS Stream

Introduction

This is a step-by-step installation of a bare-metal Kubernetes cluster on a set of CentOS Stream servers or virtual machines. It is definitely not the best way of doing it, but it is a first step toward self-hosted Kubernetes if you are already familiar with CentOS.

Common

Install CentOS Stream on the master and worker nodes. You may leave SELinux in enforcing mode. Many tutorials advise disabling SELinux, but it appears this is no longer needed.

During the installation process you MUST disable swap. The installer will complain about it, but you can safely ignore the warning.
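
If you kept a swap partition by mistake, you can still disable it after installation. A minimal sketch (as root@node), assuming swap is declared in /etc/fstab:

swapoff -a # disable swap immediately
sed -i '/ swap / s/^/#/' /etc/fstab # comment out the swap entry so it stays off after reboot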

Set up passwordless SSH access to root on the nodes: add your own SSH public key to root's authorized keys on each node.
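
This is typically done with ssh-copy-id from your host (node stands for the hostname of each node):

ssh-copy-id root@node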

You should have the certificate of your own Certificate Authority available as ca.crt.
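
If you do not have your own CA yet, here is a minimal sketch with openssl; the subject fields are placeholders, and ca.key must be kept private as it is used later to sign the registry certificate:

openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 -subj "/C=XX/L=City/O=You/CN=My Own CA" -out ca.crt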

Setup the firewall rules (as root@node):

firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API (Master)
firewall-cmd --permanent --add-port=2379-2380/tcp # etcd (Master) Only for multi-master
firewall-cmd --permanent --add-port=8090/tcp # Platform Agent (Master/Worker)
firewall-cmd --permanent --add-port=8091/tcp # Platform API (Operator)
firewall-cmd --permanent --add-port=10250/tcp # Kubelet API (Master/Worker)
firewall-cmd --permanent --add-port=10251/tcp # kube-scheduler, only for multi-master
firewall-cmd --permanent --add-port=10252/tcp # kube-controller-manager, only for multi-master
firewall-cmd --permanent --add-port=10255/tcp # Kubelet read-only port
firewall-cmd --permanent --add-port=5000/tcp # Private Registry
firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort
firewall-cmd --permanent --add-port=8285/udp # Flannel (Master/Worker), only needed for the udp backend
firewall-cmd --permanent --add-port=8472/udp # Flannel (Master/Worker)
firewall-cmd --permanent --add-port=5353/udp # DNS Ingress Controller ?
firewall-cmd --permanent --add-port=5353/tcp # DNS Ingress Controller ?
firewall-cmd --add-masquerade --permanent # (Master/Worker)
firewall-cmd --reload

Make iptables see bridged traffic (as root@node):

modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Copy your CA certificate to the node as a trusted certificate, from your host:

scp ca.crt root@node:/etc/pki/ca-trust/source/anchors/

Set it as trusted (as root@node):

update-ca-trust

Install docker from its repo (as root@node):

dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
dnf install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
dnf install -y --allowerasing docker-ce

By default, docker uses the cgroupfs cgroup driver, and it needs to be switched to systemd. For that, edit /usr/lib/systemd/system/docker.service and add the native.cgroupdriver option as shown below.

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
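
Since the unit file was modified, reload the systemd configuration before starting docker (as root@node):

systemctl daemon-reload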

Enable docker daemon and start it (as root@node):

systemctl enable --now docker
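
You can verify that the cgroup driver switch took effect; the output should report systemd:

docker info | grep -i 'cgroup driver'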

Now install Kubernetes from its repo (as root@node):

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet and kubectl. Enable kubelet and start it (as root@node):

dnf install -y kubeadm kubelet kubectl
systemctl enable --now kubelet

Master node

This section describes the steps specific to the master node only.

Initialize the master node with kubeadm (as root@master). Keep track of the reported join command, as you will need it for the worker nodes. The parameter --pod-network-cidr=10.244.0.0/16 is required for the flannel network. You also need to set up your user environment as reported by the command; this user is referenced as user in this document. Do the same for the root account.

kubeadm init --pod-network-cidr=10.244.0.0/16
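
The user environment setup reported by kubeadm init typically looks like this (as user@master):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config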

In Kubernetes, we need to choose a network implementation. Install the flannel network (as user@master):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This step is optional and should not be done in a production environment. It allows the master node to also run workloads as a worker node (as user@master):

kubectl taint nodes --all node-role.kubernetes.io/master-
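
You can check that the taint is gone (as user@master); assuming the master node is named master, the Taints field should report <none>:

kubectl describe node master | grep Taints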

Worker node

This section describes the steps specific to the worker nodes only.

For each worker node, initialize it and join the cluster (as root@worker):

kubeadm join 192.168.0.11:6443 --token e66.change.me.1yn --discovery-token-ca-cert-hash sha256:325.change.me.79e 

If you no longer have the token, you can create a new one on the master node (as root@master):

kubeadm token create --print-join-command
kubeadm token list 

Test the cluster

You can test that everything is working with these simple tests (as user@master). They will do the following:

  1. Get the nodes
  2. Get the pods
  3. Deploy and run the busybox image and make it sleep for a long long time
  4. Execute a simple command on the running busybox pod
  5. Deploy dns utils
  6. Execute dns debugging commands on the running pod

kubectl get nodes
kubectl get pod
kubectl run busybox --image=busybox:1.28.4 --command -- sleep 99999
kubectl exec busybox -- cat /etc/resolv.conf
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
kubectl exec -i -t dnsutils -- nslookup kubernetes
kubectl exec -i -t dnsutils -- cat /etc/resolv.conf

Deployment of the Dashboard

Deploy the dashboard (as user@master):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
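
Check that the dashboard pods come up (as user@master):

kubectl get pods -n kubernetes-dashboard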

Configure access to the dashboard (as user@master):

mkdir ~/dashboard
cat > ~/dashboard/dashboard-admin.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Create the service account and retrieve its login token (as user@master):

kubectl apply -f ~/dashboard/dashboard-admin.yaml

kubectl get secret -n kubernetes-dashboard $(kubectl get serviceaccount admin-user -n kubernetes-dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode

Launch the proxy on the master node (as user@master):

kubectl proxy

The dashboard is only accessible from inside the cluster, so we need to forward port 8001 over SSH from your point of connection to the master node:

ssh -L 8001:localhost:8001 root@master

The dashboard is then available from the host at the following URL:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Copy/paste the token reported by the kubectl get secret command into the login popup.

Create a private registry

Create a PersistentVolume and PersistentVolumeClaim

Create a PersistentVolume on the master and the corresponding PersistentVolumeClaim. First, you must create the folder on the master node (as root@master):

mkdir -p /data/registry

Create the PersistentVolume and PersistentVolumeClaim for the registry:

cat > registry-storage.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: local
  local:
    path: /data/registry
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - master
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pvc
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
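
The file is only created at this point; apply it to install the PersistentVolume and PersistentVolumeClaim (as user@master):

kubectl apply -f registry-storage.yaml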

Deploy the registry

Create a certificate for the registry (for the master node) and sign it with your own CA:

openssl genrsa -out "registry.key" 2048
openssl req -new -subj "/C=XX/L=City/O=You/CN=master/emailAddress=foo@bar.com" -key "registry.key" -out "registry.csr"
openssl x509 -req -days 3650 -in "registry.csr" -CA "ca.crt" -CAkey "ca.key" -CAcreateserial -out "registry.crt" -sha256 

Create a secret to be used for securing the deployment of the registry:

kubectl create secret tls registry-tls --key="registry.key" --cert="registry.crt"
kubectl describe secret registry-tls

Create the YAML for the deployment of the registry:

cat > registry.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-repository
  labels:
    app: private-repository
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-repository
  template:
    metadata:
      labels:
        app: private-repository
    spec:
      volumes:
      - name: certs-vol
        secret:
          secretName: registry-tls
      - name: registry-vol
        persistentVolumeClaim:
          claimName: registry-pvc
      containers:
        - image: registry:2
          name: private-repository
          imagePullPolicy: IfNotPresent
          env:
          - name: REGISTRY_HTTP_TLS_CERTIFICATE
            value: "/certs/tls.crt"
          - name: REGISTRY_HTTP_TLS_KEY
            value: "/certs/tls.key"
          ports:
            - containerPort: 5000
          volumeMounts:
          - name: certs-vol
            mountPath: /certs
          - name: registry-vol
            mountPath: /var/lib/registry
EOF

And finally, deploy it:

kubectl apply -f registry.yaml

Create the service

Create the YAML of the registry service:

cat > registry-service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: registry-service
spec:
  type: NodePort
  selector:
    app: private-repository
  ports:
  - port: 5000
    targetPort: 5000
    nodePort: 30500
EOF

Install the registry service:

kubectl apply -f registry-service.yaml

The registry is now accessible from the master node on port 30500.
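
You can check that the registry answers over TLS from your host, assuming master resolves to the master node and ca.crt is in the current directory:

curl --cacert ca.crt https://master:30500/v2/_catalog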

Create a secret

Create a secret for connecting to the local registry:

kubectl create secret docker-registry ocirsecret --docker-server=master:30500 --docker-username='a' --docker-password='b' --docker-email='a@b.com'
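
To pull images from the private registry in a pod, reference this secret via imagePullSecrets. A minimal sketch, assuming an image blah:1.0 was pushed to the registry as in the next section (the deployment name and labels are illustrative):

cat > blah-deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blah
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blah
  template:
    metadata:
      labels:
        app: blah
    spec:
      containers:
      - name: blah
        image: master:30500/blah:1.0
      imagePullSecrets:
      - name: ocirsecret
EOF
kubectl apply -f blah-deployment.yaml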

Push an image from podman to our private registry

Pushing a podman image to our private registry so it can be used in the cluster is done with the podman push command. First retrieve the image ID with podman image ls, then push it:

podman image ls
podman push <<IMAGE ID>> docker://master:30500/blah:1.0

The previous command may fail if your certificate was not properly generated or trusted. You may bypass the check with --tls-verify=false.
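
A better option is to make podman trust your CA by placing ca.crt in the certs.d directory matching the registry host and port (as root on the machine running podman). Note that recent clients may also require the certificate to carry a subjectAltName matching the registry host:

mkdir -p /etc/containers/certs.d/master:30500
cp ca.crt /etc/containers/certs.d/master:30500/ca.crt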

Conclusion

This was clearly not the correct way to deploy a bare-metal Kubernetes cluster, and CentOS Stream is arguably not the right distribution for this purpose. Fedora CoreOS is most probably a better choice, but its installation is very different from CentOS Stream if you are used to regular Linux distributions.

The manual steps for creating the cluster are good for learning and understanding, but they are not the way to go for production. One may try Ansible and the wealth of pre-existing playbooks.
