How to Set Up a Kubernetes Cluster on CentOS 7

In this guide, you will learn how to set up a Kubernetes cluster on CentOS 7 or other Red Hat based Linux distros.

What is Kubernetes?

Kubernetes, or K8s, is an open-source container orchestration system for automating application deployment, management and scaling across clusters of hosts. Kubernetes was initially developed by Google and is now maintained by the Cloud Native Computing Foundation. Kubernetes talks to its container runtime through the Container Runtime Interface (CRI); supported runtimes include Docker, containerd and CRI-O.

In our previous article, we configured a Docker Swarm cluster on CentOS 7 for container orchestration. In this article, we are installing a two-node Kubernetes (K8s) cluster with Docker CE on CentOS 7.

This article covers the installation and configuration of Kubernetes on CentOS 7; it does not address the technical details of Kubernetes architecture and components. If you are interested in reading more about Kubernetes, you should read Kubernetes in Action (PAID LINK) by Manning Publications.

System Specification:

We have two CentOS 7 virtual machines with the following specifications.

Hostname:            kubemaster-01         kubenode-01
IP Address:          192.168.116.160/24    192.168.116.161/24
Cluster Role:        K8s master            K8s node
CPU:                 3.4 GHz (2 cores) *   3.4 GHz (2 cores) *
Memory:              2 GB                  2 GB
Storage:             40 GB                 40 GB
Operating System:    CentOS 7.6            CentOS 7.6
Docker version:      18.09.5               18.09.5
Kubernetes version:  1.14.1                1.14.1

* Kubernetes requires at least 2 CPU cores on each node.

Make sure the hostnames are resolvable on all nodes. You can either setup a Private DNS Server or use Local DNS Resolver for this purpose.
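
If you do not have a DNS server, a minimal /etc/hosts entry on both nodes is sufficient. The sketch below uses the IP addresses and hostnames from the specification table above.

# cat >> /etc/hosts << EOF
> 192.168.116.160 kubemaster-01.centlinux.com kubemaster-01
> 192.168.116.161 kubenode-01.centlinux.com kubenode-01
> EOF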

Install Docker CE on CentOS 7:

We are configuring Docker CE as the container runtime for Kubernetes. Other runtimes that implement the Kubernetes CRI include containerd, CRI-O and Frakti.

Connect to the Kubernetes master kubemaster-01.centlinux.com using ssh as the root user.

Install the Docker CE prerequisite packages using the yum command.

# yum install -y device-mapper-persistent-data lvm2 yum-utils

Add the Docker yum repository as follows:

# yum-config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo

Build the yum cache for the Docker repository.

# yum makecache fast
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.xeonbd.com
 * extras: mirror.xeonbd.com
 * updates: mirror.xeonbd.com
base                                                     | 3.6 kB     00:00
docker-ce-stable                                         | 3.5 kB     00:00
extras                                                   | 3.4 kB     00:00
updates                                                  | 3.4 kB     00:00
(1/2): docker-ce-stable/x86_64/primary_db                  |  27 kB   00:00
(2/2): docker-ce-stable/x86_64/updateinfo                  |   55 B   00:01
Metadata Cache Created

Install Docker CE using the yum command.

# yum install -y docker-ce

Configure the Docker daemon for use by Kubernetes. On hosts that use systemd as init, Kubernetes recommends the systemd cgroup driver, which is what the daemon.json below sets.

# mkdir /etc/docker
# cat > /etc/docker/daemon.json << EOF
> {
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2",
>   "storage-opts": [
>     "overlay2.override_kernel_check=true"
>   ]
> }
> EOF

Enable and start the Docker service.

# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
# systemctl start docker.service
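
To verify that Docker picked up the systemd cgroup driver from daemon.json, you can query docker info with a Go template (standard Docker CLI syntax); it should print systemd:

# docker info --format '{{.CgroupDriver}}'
systemd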

Docker CE has been installed. Repeat the above steps to install Docker CE on kubenode-01.centlinux.com.

Install Kubernetes on CentOS 7:

Set the following kernel parameters required by Kubernetes.

# cat > /etc/sysctl.d/kubernetes.conf << EOF
> net.ipv4.ip_forward = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF

Load the br_netfilter kernel module and reload the kernel parameter configuration files.

# modprobe br_netfilter
# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/kubernetes.conf ...
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
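
Note that modprobe loads br_netfilter only for the current boot. To load the module automatically at every boot, you can drop a file into /etc/modules-load.d/ (a standard systemd mechanism; the file name below is our own choice):

# cat > /etc/modules-load.d/kubernetes.conf << EOF
> br_netfilter
> EOF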

Turn off swap, as required by the kubelet. The sed command comments out the swap entry in /etc/fstab so the change persists across reboots.

# swapoff -a
# sed -e '/swap/s/^/#/g' -i /etc/fstab
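
As a quick sanity check, list the active swap devices; if the command prints nothing, swap is off:

# swapon -s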

Kubernetes uses the following service ports on the master node.

Port        Protocol   Purpose
6443        TCP        Kubernetes API server
2379-2380   TCP        etcd server client API
10250       TCP        Kubelet API
10251       TCP        kube-scheduler
10252       TCP        kube-controller-manager

Allow the Kubernetes service ports on kubemaster-01.centlinux.com in the Linux firewall.

# firewall-cmd --permanent --add-port={6443,2379,2380,10250,10251,10252}/tcp
success
# firewall-cmd --reload
success
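
You can verify the firewall configuration; the output should list the six ports added above:

# firewall-cmd --list-ports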

Kubernetes uses the following service ports on worker nodes.

Port          Protocol   Purpose
10250         TCP        Kubelet API
30000-32767   TCP        NodePort Services

Allow the Kubernetes service ports on kubenode-01.centlinux.com in the Linux firewall.

# firewall-cmd --permanent --add-port={10250,30000-32767}/tcp
success
# firewall-cmd --reload
success

Switch SELinux to permissive mode using the following commands.

# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
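
Confirm the new mode with getenforce:

# getenforce
Permissive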

Add the Kubernetes yum repository as follows.

# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF

Build the yum cache for the Kubernetes repository.

# yum makecache fast
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.xeonbd.com
 * extras: mirror.xeonbd.com
 * updates: mirror.xeonbd.com
base                                                     | 3.6 kB     00:00
docker-ce-stable                                         | 3.5 kB     00:00
extras                                                   | 3.4 kB     00:00
kubernetes/signature                                     |  454 B     00:00
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
 Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
 From       : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
kubernetes/signature                                     | 1.4 kB     00:07 !!!
updates                                                  | 3.4 kB     00:00
kubernetes/primary                                         |  47 kB   00:00
kubernetes                                                              339/339
Metadata Cache Created

Install the Kubernetes packages using the yum command.

# yum install -y kubelet kubeadm kubectl

To enable automatic completion of kubectl commands, source the completion script generated by kubectl itself. Make sure the bash-completion package is installed first.

# source <(kubectl completion bash)

To make completion persistent, write the script to the bash completion directory.

# kubectl completion bash > /etc/bash_completion.d/kubectl

Kubernetes has been installed. Repeat the above steps to install Kubernetes on kubenode-01.centlinux.com.

Configure Kubelet Service on Master Node:

Use the kubeadm command to pull the container images required to set up the Kubernetes control plane.

# kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.14.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.14.1
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.14.1
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.14.1
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.3.10
[config/images] Pulled k8s.gcr.io/coredns:1.3.1

Initialize the Kubernetes control plane as follows:

# kubeadm init
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubemaster-01.centlinux.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.116.160]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubemaster-01.centlinux.com localhost] and IPs [192.168.116.160 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubemaster-01.centlinux.com localhost] and IPs [192.168.116.160 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 42.152638 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node kubemaster-01.centlinux.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubemaster-01.centlinux.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: mm20xq.goxx7plwzrx75tv3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.116.160:6443 --token mm20xq.goxx7plwzrx75tv3 \
    --discovery-token-ca-cert-hash sha256:00065886b183ea9cc2e9fbb68ff2a82b52574c2ab5ad8868c4fd6c2feb006d6f

Execute the following commands, as suggested by the kubeadm init output above.

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
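
At this point kubectl can reach the cluster. As a quick check, kubectl cluster-info should report the API server at https://192.168.116.160:6443:

# kubectl cluster-info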

Enable and start the kubelet service.

# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
# systemctl start kubelet.service

Add a Node to the Kubernetes Cluster:

Check the status of the nodes in the Kubernetes cluster.

# kubectl get nodes
NAME                          STATUS     ROLES    AGE   VERSION
kubemaster-01.centlinux.com   NotReady   master   50m   v1.14.1

Join the second node to the Kubernetes cluster by running the kubeadm join command printed by kubeadm init. Execute it as root on kubenode-01.centlinux.com.

# kubeadm join 192.168.116.160:6443 --token mm20xq.goxx7plwzrx75tv3 \
>     --discovery-token-ca-cert-hash sha256:00065886b183ea9cc2e9fbb68ff2a82b52574c2ab5ad8868c4fd6c2feb006d6f
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
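
If you lose the join command, or the bootstrap token expires (tokens are valid for 24 hours by default), you can print a fresh join command on the master node:

# kubeadm token create --print-join-command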

The nodes remain in the NotReady state until a pod network add-on is deployed. Install Flannel by applying its manifest on the master node; its DaemonSet then runs a pod on every node.

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.extensions/kube-flannel-ds-amd64 unchanged
daemonset.extensions/kube-flannel-ds-arm64 unchanged
daemonset.extensions/kube-flannel-ds-arm unchanged
daemonset.extensions/kube-flannel-ds-ppc64le unchanged
daemonset.extensions/kube-flannel-ds-s390x unchanged

Check the status of the nodes in the Kubernetes cluster again.

# kubectl get nodes
NAME                          STATUS   ROLES    AGE   VERSION
kubemaster-01.centlinux.com   Ready    master   45m   v1.14.1
kubenode-01.centlinux.com     Ready    <none>   43m   v1.14.1

We have successfully set up a two-node Kubernetes cluster on CentOS 7.
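
As an optional smoke test (not part of the original setup), you can deploy a throwaway nginx deployment and expose it through a NodePort; the name nginx below is arbitrary:

# kubectl create deployment nginx --image=nginx
# kubectl expose deployment nginx --port=80 --type=NodePort
# kubectl get svc nginx

The NodePort shown by the last command is reachable on the worker node's IP address, because the 30000-32767 range was opened in the firewall earlier.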

Conclusion – Setup Kubernetes Cluster:

In this guide, you have learned how to set up a Kubernetes cluster on CentOS 7 or other Red Hat based Linux distros.
