Installing Kubernetes cluster on Linux

Intro: Kubernetes is a very popular container orchestration platform. It lets you run a flexible cluster of several nodes and add further nodes at any time. It manages how containers are distributed across this cluster and controls their lifecycle.

What is a container?

A container is a level of virtualization where the content of the container is isolated from the operating system it runs on. This concept has many advantages:

  • you can run several different versions of components, each with its own set of libraries
  • you can easily spin up such a container and destroy it again
  • you can run several instances of the same container with the same set of services, distributed across the nodes of your cluster

One of the most popular container solutions is Docker.

Before we start

This tutorial assumes that you have basic knowledge of Linux administration. The following steps were executed on a virtual machine with Ubuntu Linux 18.04.1. There are several providers of this kind of hosting, such as Vultr, Linode, DigitalOcean and many more. You can even use your own "bare metal" PC. Once you have such a Linux machine, let's start with the basic setup.

Create at least two virtual linux machines

You can easily do that in your VPS (Virtual Private Server) hosting. Make sure that you enable private networking between these machines.

Enable private networking

First of all, enable private networking in your hosting control panel (I'm using vultr.com). Then create a netplan configuration for the private interface:

vi /etc/netplan/10-ens7.yaml

Make sure that it looks like:

network:
  version: 2
  renderer: networkd
  ethernets:
    ens7:
      mtu: 1450
      dhcp4: no
      addresses: [{your host private IP}/16]

Replace {your host private IP} with the address that is displayed in your control panel in the private network settings, then apply the configuration:

netplan apply

After that, you should see a new network interface when you type:

ifconfig

Do this for each host in your cluster. Afterwards, you should be able to ping these private IPs between the hosts in your private network.
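If you are provisioning several hosts, the netplan file shown above can also be generated with a small script. This is just a sketch: the interface name ens7 and the IP 10.1.36.2 are assumptions from this setup, and the file is written to /tmp for illustration, whereas the real one belongs in /etc/netplan/.

```shell
# Sketch: generate the netplan config for one host.
# PRIVATE_IP and the interface name ens7 are placeholders for your own values.
PRIVATE_IP="10.1.36.2"
CONFIG="/tmp/10-ens7.yaml"   # real location: /etc/netplan/10-ens7.yaml

cat > "$CONFIG" <<EOF
network:
  version: 2
  renderer: networkd
  ethernets:
    ens7:
      mtu: 1450
      dhcp4: no
      addresses: [${PRIVATE_IP}/16]
EOF
```

After copying the file into /etc/netplan/, run netplan apply as shown above.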

Install docker

On each machine of your VPS network, install Docker:

sudo apt install docker.io

Install kubernetes

On each machine, add the Kubernetes apt repository and install the tools (as root). Note that kubeadm refuses to start with swap enabled, so disable it first with swapoff -a:

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update && apt-get install -y kubelet kubeadm kubectl

The following lines set up the kubectl configuration for your user. Since /etc/kubernetes/admin.conf only exists on the master after kubeadm init has been run (see below), execute them on the master afterwards:

rm -rf $HOME/.kube
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Initialize master

What is a "master"? The master is a single VPS machine which acts as the controller of the other machines in the cluster. You should use it for that single purpose and not run any workload containers on it (this is the default setting).

The following steps have to be executed on the master only:

Networking

sysctl net.bridge.bridge-nf-call-iptables=1
export KUBECONFIG=/etc/kubernetes/admin.conf

We need to install a CNI; we will use the flannel CNI.

What is a CNI? CNI stands for Container Network Interface, a specification for container networking. It allows you to have a software-defined network between your containers.

Let's initialize kubernetes

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address={your master IP} 

After that, you will get information on how other nodes can join the cluster (sample):

(do not run, just copy to clipboard):

kubeadm join 15.139.149.102:6443 --token gxqr5t.4132n8s2yu36x --discovery-token-ca-cert-hash sha256:fb7558407f1234a9c76125b2344ac6f19d01720b8a901fff30a7095531807e

This is the command you need to run on a worker so that it becomes visible to the master.

Installing flannel CNI

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Checking

kubectl get pod --all-namespaces

All of them have to be in the status "Running" and all containers available, e.g. 1/1 or 3/3.

Running a docker image from a private docker repository

First, log in to your registry; this stores your credentials in ~/.docker/config.json:

docker login {your_private_repository_url}
cat ~/.docker/config.json

Then, create a Kubernetes secret holding these registry credentials:

kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

Then, create a file {your-pod.yaml}:

apiVersion: v1
kind: Pod
metadata:
  name: {name of your pod}
spec:
  containers:
  - name: {container name}
    image: {your image URL in private docker repository}
  imagePullSecrets:
  - name: regcred

Then, run it:

kubectl create -f {your-pod.yaml}

To check it:

kubectl get pods

It has to be in the state Running.
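For anything beyond a quick test, you would typically not create a bare pod but let a Deployment manage it, so that crashed pods are recreated automatically and the number of replicas can be scaled. A minimal sketch using the same placeholder convention as above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {name of your deployment}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: {app label}
  template:
    metadata:
      labels:
        app: {app label}
    spec:
      containers:
      - name: {container name}
        image: {your image URL in private docker repository}
      imagePullSecrets:
      - name: regcred
```

kubectl create -f works on this file as well; kubectl get deployments then shows how many replicas are ready.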

To see what is happening on the pod's console:

kubectl attach {name of your pod}

Attaching to the shell of your container:

kubectl exec -it {name of your pod} sh

From within the container, you can try the following:

ping codegravity.com

You will quickly realize that the hostname is not resolved and you cannot reach this address.

Accessing external network

When you run the pod, you will realize that it cannot access anything other than what is inside its own internal network.

You have to be careful with this setting, but it is possible to allow outbound connections:

kubectl delete pod {name of your pod}

Wait for a moment until it's terminated, then:

Edit {your-pod.yaml} and add the two lines at the end (hostNetwork and dnsPolicy):

apiVersion: v1
kind: Pod
metadata:
  name: {name of your pod}
spec:
  containers:
  - name: {container name}
    image: {your image URL in private docker repository}
  imagePullSecrets:
  - name: regcred
  hostNetwork: true
  dnsPolicy: Default
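Note that dnsPolicy: Default makes the pod inherit the node's DNS configuration, so external names resolve but cluster-internal service names do not. If you need host networking and cluster DNS at the same time, Kubernetes provides the ClusterFirstWithHostNet policy instead:

```yaml
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
```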

Kubectl autocomplete

With autocomplete, you can simply hit the [tab] key to see a list of available commands.

To enable the autocomplete, just run:

source <(kubectl completion bash)
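This only affects the current shell session. To load the completion in every new session, you can append the same command to your ~/.bashrc (assuming bash is your login shell):

```shell
# Load kubectl completion in every new bash session
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```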

Join additional nodes to your cluster

On the master, run the following command:

kubeadm token create --print-join-command

It will print out something like:

kubeadm join 25.79.121.119:6443 --token 8m98hq.reaaz02g7xdfquyx --discovery-token-ca-cert-hash sha256:ac33444217d906884eb9a2a4ea017737c99bd1ec89a6fb805b0d590c12a783e6

Run this command on the node that you want to join to the cluster.

Then, verify that it appears in the list of nodes:

kubectl get nodes

Issue with Kubernetes DNS

When running this command:

kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

the Kubernetes DNS pod kept crashing:


root@master:~# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME                        READY     STATUS             RESTARTS   AGE
kube-dns-86f4d74b45-tr5jk   2/3       CrashLoopBackOff   1313       7d

This is what actually helped to restart the DNS:

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
iptables -P FORWARD ACCEPT
systemctl start docker
systemctl start kubelet
kubectl delete pods -n kube-system -l k8s-app=kube-dns 

Persistence

Install NFS server

apt install nfs-kernel-server
vi /etc/exports
/share 10.1.36.2/16(rw,sync,no_subtree_check,no_root_squash)

Replace 10.1.36.2 with your own private-networking IP on which the server is running. Make sure that the path you've specified exists and has the correct permissions, then reload the export table with exportfs -ra. On every client machine, install the NFS client tools:

apt-get install nfs-common

Then, try to mount the NFS folder:

mkdir /mnt/nfs
mount 10.1.36.2:/share /mnt/nfs 

(Replace 10.1.36.2 with your own IP)

Voilà! When you access /mnt/nfs, it serves the content from /share.
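To mount the share automatically after a reboot, you can add it to /etc/fstab instead of mounting by hand (same placeholder IP and paths as above):

```
10.1.36.2:/share  /mnt/nfs  nfs  defaults  0  0
```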

This server should only be accessible within your private network. Make sure that it is not reachable from the outside world.

For further configuration options, please refer to the NFS server documentation.

Creating Kubernetes Persistent Volume

This assumes that you have your Kubernetes cluster up and running.

Create a new file: pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /share
    server: 10.1.96.3

(Replace 10.1.96.3 with your own IP and /share with the path the NFS server exports)

After that, execute:

kubectl create -f pv.yaml 

Check, if the Kubernetes persistent volume has been created successfully:

kubectl get pv
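Pods do not use a PersistentVolume directly; they request storage through a PersistentVolumeClaim, which Kubernetes then binds to a matching volume. A minimal claim matching the volume above (same access mode and storageClassName; the name pvc0001 is just an example):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc0001
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: slow
  resources:
    requests:
      storage: 5Gi
```

Save it as pvc.yaml and run kubectl create -f pvc.yaml; kubectl get pvc should then show the claim as Bound. A pod can mount it by referencing the claim name in its volumes section.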