Deploying a Kubernetes cluster on Container Linux with kubeadm (Part 2)


So you want to use Kubernetes but aren’t sure how to get started? In this article, you’ll see exactly how to get Kubernetes up and running in your cluster.

This is the second part in a series dedicated to getting your application up and running in the cloud. This part assumes you already have some infrastructure up and running with the necessary ports open, etc. If you don’t, or you’re not sure, see Provisioning an infrastructure for Kubernetes on AWS with Terraform (Part 1) first, then come back!

kubeadm is a command-line tool that helps you set up a minimal Kubernetes cluster which conforms to best practices as set forth by the upstream Kubernetes community and the Cloud Native Computing Foundation. This could be a cluster you use to familiarize yourself with Kubernetes, do some initial testing, or serve as a building block for a more complex cluster.

For additional reference, see https://kubernetes.io/docs/setup/independent/install-kubeadm/


Section 1: Installation

First, you’ll need to install and set up the following packages on the master node and all worker nodes: kubeadm, kubelet, and kubectl. This section will walk you through the process. Make sure the versions of these tools match the version of Kubernetes you end up running; we’ll take care of that in Step 6 by passing the installed release to kubeadm init.

You’ll need to execute the following steps on each machine in your cluster. For each instance, SSH in as the core user: ssh core@<ip-address-of-instance>. Most of the following commands need root privileges; instead of typing sudo before each command, you can become root for the session by executing:

sudo -i

Step 1: Install CNI Plugins

First, we need to install the Container Network Interface (CNI) plugins. These set up the machine for container-level networking — that is, assigning IP addresses to containers and mapping the host machine’s IP address and ports to a specific container (or pod, in the case of Kubernetes).

Check https://github.com/containernetworking/plugins/releases for the latest version of the CNI plugins, and set the CNI_VERSION in the code block below to match it.

CNI_VERSION="v0.7.5"
mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz
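To confirm the plugins landed where the kubelet expects them, list the directory; you should see binaries such as bridge, host-local, and loopback:

ls /opt/cni/bin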

Step 2: Install crictl

Next, you need to install crictl using the commands below. crictl is a command-line interface for the Kubelet Container Runtime Interface (CRI). The CRI allows the kubelet to interact with the application that actually runs the containers (such as Docker), allowing Kubernetes to control, among other things, starting and stopping containers.

Check https://github.com/kubernetes-sigs/cri-tools/releases for the latest version.

CRICTL_VERSION="v1.14.0"
mkdir -p /opt/bin
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | tar -C /opt/bin -xz
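As a quick sanity check, ask crictl for its version:

/opt/bin/crictl --version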

Step 3: Install kubeadm, kubelet, and kubectl

These are the tools we’ll be focusing on, and will be described in more detail later. For now, go ahead and install them using the following commands.

RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
cd /opt/bin
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}
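Container Linux already includes /opt/bin on the default PATH, so you can verify the installs right away:

kubeadm version
kubectl version --client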

Step 4: Add the kubelet systemd service

This will set things up to have the kubelet service run when the system boots up, and also start it up now.

curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service 
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl enable --now kubelet

The kubelet service will now be waiting for instructions from kubeadm.

 — Checkpoint — 

To check that the kubelet service is running, run the following command:

systemctl status kubelet

While waiting for instructions from kubeadm, the service will crash and restart every few seconds; this crash loop is expected. The service should be recognized and loaded, but the “Active” status will not be “running”.
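For reference, the status of a correctly installed but still-idle kubelet looks roughly like this (timestamps and details will differ on your machine):

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code)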

Step 5: Repeat

This is the end of the portion that has to be done on every node (master and workers). If you haven’t yet, go back to the beginning of Section 1 and install these utilities on each of your remaining nodes.


Section 2: Kubernetes Master Setup

Step 6: Start kubeadm

kubeadm will set up your Kubernetes cluster. This is executed from the host machine which will run the master node.

There are many arguments you can pass when running kubeadm init; they’re beyond the scope of this tutorial, but you can find the details at https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

For Kubernetes to be able to set up a network amongst the pods to allow them to communicate, a pod network add-on must be set up. Here, we’ll use Flannel, but there are many options. See https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network for more information. Flannel and tools like it are referred to as a “network fabric”. Network fabrics are responsible for allocating subnet addresses to hosts, providing a software-defined IPv4 network for your cluster.

Now, go ahead and run

kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-cert-extra-sans=<master-node-public-IP> \
--kubernetes-version=${RELEASE}
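Note that ${RELEASE} is the environment variable we set back in Step 3. If you’ve opened a new shell since then (for example, by running sudo -i again), set it again before running kubeadm init:

RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"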

The first argument is a requirement for Flannel. If you use a different network fabric, they may require different arguments to be passed to kubeadm init.

The second argument makes sure that the certificates that are generated allow us to access our cluster using kubectl from a remote machine (e.g. your laptop). Make sure to replace <master-node-public-IP> with the public IP address of the master node.

The last argument ensures that the Kubernetes version we start is the same as the version we installed for kubeadm, kubelet, and kubectl.

When you run kubeadm init, it will do some prechecks, then download and install the Kubernetes control plane. It may take a few minutes. When done, the output should look similar to the following:

Your Kubernetes master has initialized successfully!

...

You can now join any number of machines by running the following on each node as root:

  kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

(Output abridged; see https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ for the full output.)

Make note of the command in the last line of the output, particularly the values of master-ip, master-port, token, and hash. We’ll use this later to connect new nodes to our cluster.

At this point, we no longer need to act as the root user, and can execute

exit

to go back to being the regular user.

But, as the output states, we now need to do some setup in order to enable regular users to access the cluster configuration, which is needed for the rest of the commands in this tutorial. Run the following commands to allow the regular user to use the cluster.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 7: Set up the network

Run the following to get Flannel set up and running. This will also start up CoreDNS, which allows DNS to be used inside the private network that your cluster is using.

sudo sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
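You can watch the network come up by listing the pods in the kube-system namespace; once the Flannel pod is running, the CoreDNS pods should move from Pending to Running:

kubectl -n kube-system get pods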

Step 8 (Optional): Single-Machine Cluster Setup

If you only have one host machine, it will need to function as both the Kubernetes master node and as a worker node. This is not enabled by default — the master node is “tainted” with the key node-role.kubernetes.io/master. The taint can be removed to allow the scheduler to schedule pods on this node. So, if you only have one machine in your cluster, run the following command. Otherwise, skip it.

kubectl taint nodes --all node-role.kubernetes.io/master-
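To confirm the taint was removed, check the node’s taint list (substitute the node name shown by kubectl get nodes); it should report <none>:

kubectl describe node <node-name> | grep Taints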

 — Checkpoint — 

At this point, let’s confirm that things are up and running.

kubectl get pods --all-namespaces

This will show you all the Kubernetes pods that are currently running. There should be a handful: the control-plane components (etcd, kube-apiserver, kube-controller-manager, kube-scheduler), plus kube-proxy, Flannel, and CoreDNS, all in the kube-system namespace.
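The listing should look roughly like the following (the generated suffixes in the pod names will differ):

NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   coredns-<suffix>                            1/1     Running   0          3m
kube-system   etcd-<master-node-name>                     1/1     Running   0          3m
kube-system   kube-apiserver-<master-node-name>           1/1     Running   0          3m
kube-system   kube-controller-manager-<master-node-name>  1/1     Running   0          3m
kube-system   kube-flannel-ds-<suffix>                    1/1     Running   0          1m
kube-system   kube-proxy-<suffix>                         1/1     Running   0          3m
kube-system   kube-scheduler-<master-node-name>           1/1     Running   0          3m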

Now, run

systemctl status kubelet

This will show you the status of the kubelet service that we started up a few steps ago. Now that kubeadm has given it instructions, the “Active” status should read “active (running)”.


Section 3: Worker Node Setup

Step 9: Joining Nodes to the Master

So now you’ve confirmed that you have the master node up and running. The next step is to get the other nodes to join the Kubernetes cluster.

Recall the command with the master IP address, port, token, and hash that you noted in Step 6 after running kubeadm init. If you don’t have it handy, you can retrieve the token by running kubeadm token list from the machine which hosts the master node. To retrieve the value of --discovery-token-ca-cert-hash, run:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

Tokens expire after 24 hours. If your token has expired, you can get a new one by running kubeadm token create from the machine which hosts the master node.
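If you’re on a recent enough version of kubeadm, you can also have it mint a fresh token and print the complete join command in one step:

kubeadm token create --print-join-command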

From the node that you want to join to the Kubernetes cluster, run:

kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

If you get an error related to “failed to request cluster info”, make sure you have the necessary ports opened on your machines (see https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports).

— Checkpoint — 

Now is a good time to make sure all the nodes have actually joined successfully. From the host running the master node, run:

kubectl get nodes

This will list the master and worker nodes (distinguished by the value in the ROLES column of the output). If you don’t see a node you were expecting, go back and try joining again.
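The output should look something like this (node names here are hypothetical; worker nodes show <none> in the ROLES column):

NAME       STATUS   ROLES    AGE   VERSION
master     Ready    master   15m   v1.14.1
worker-1   Ready    <none>   3m    v1.14.1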

Then celebrate! You have a Kubernetes cluster up and running!


Section 4: Remote Administrator (Optional)

It’s not ideal that we have to SSH to the master node in order to use kubectl to administer the cluster. Instead, we can configure kubectl on a local machine (e.g. your laptop) to administer the cluster remotely.

Step 10: Install kubectl locally

First, install kubectl on your local machine. For instructions, see Install and Set Up kubectl (https://kubernetes.io/docs/tasks/tools/install-kubectl/).

Step 11: Copy config from master node

Next, copy the kubectl config from the master node to your local machine.

mkdir -p ~/.kube
scp core@<master-node-public-IP>:/home/core/.kube/config ~/.kube/config

Step 12: Update config with master node public IP address

The copy of the kubectl config file likely has the internal IP address of the master node. We need to update it to the public IP address.

Open the file ~/.kube/config in a text editor. It should begin with something similar to the following:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.0.0.x:6443
  name: kubernetes

Replace the string “10.0.0.x” with the master node’s public IP address.
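If you’d rather not edit the file by hand, kubectl can apply the same change for you (this assumes the cluster entry is named kubernetes, as in the snippet above):

kubectl config set-cluster kubernetes --server=https://<master-node-public-IP>:6443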

Save the file and test it out by running:

kubectl get nodes

You should see the same set of nodes as you did from running the command earlier on the master instance.


Section 5: Tear Down

When you want to stop running your cluster, you should tear it down cleanly. To do this, you’ll need to sever the tie between the master and each worker from both sides: from the master and from the worker node itself.

Step 13: Remove node connections from the master side

To remove a node from the cluster, first drain it and then delete it by running the following commands from the master node. Do this for every node that you want to remove from the cluster. The <node name> can be found in the output of the kubectl get nodes command, which we ran in the Checkpoint above.

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
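For example, to remove a worker that kubectl get nodes lists as worker-1 (a hypothetical name):

kubectl drain worker-1 --delete-local-data --force --ignore-daemonsets
kubectl delete node worker-1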

Step 14: Remove master connections from the node side

Then, from each of those nodes, reset its state so that it doesn’t think it’s joined any cluster by running:

sudo kubeadm reset

That’s it! You’ve now set up and torn down a Kubernetes cluster! Congratulations! I’ll be posting more tutorials about using your Kubernetes cluster to do useful things. In the meantime, check out the official documentation at https://kubernetes.io/docs/home/.