K8S installation procedure
The procedure below is for a single-master, multi-node deployment, which is most suitable for personal QA and development purposes.
The following procedure has been tested on:
CentOS 7.6
Kubernetes v1.13.4
Docker 18.09
This procedure should work on VMs created on VirtualBox, VMware and Nutanix. It can also work on instances of cloud platforms such as AWS.
Note:
- All commands are run as the root user unless explicitly mentioned.
- Be sure that hostnames are properly set for both master and node VMs. If hostnames do not resolve, then add entries for all of them in /etc/hosts.
- Do not change hostnames after installation; doing so will break the cluster.
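The hostname check from the note above can be scripted. A minimal sketch (getent is standard on CentOS; the helper function name is our own):

```shell
# Report whether a hostname resolves; if it does not, it needs an /etc/hosts entry.
check_host() {
  if getent hosts "$1" >/dev/null; then
    echo "$1 resolves"
  else
    echo "$1 does not resolve; add it to /etc/hosts"
  fi
}
check_host localhost
```

Run it with the hostname of each master and node VM before starting the installation.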
Step 1) Install and configure required kernel modules. To be performed on both Master VM and Node VMs.
Install ipvs
# yum install ipvsadm -y
Create modules file
# vi /etc/modules-load.d/ip_vs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
br_netfilter
nf_conntrack_ipv4
Enable netfilter bridge for IPv4
# vi /usr/lib/sysctl.d/00-system.conf
net.bridge.bridge-nf-call-iptables = 1
Enable IP forwarding if it is disabled. Edit the /etc/sysctl.conf file and set net.ipv4.ip_forward to 1.
# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
# sysctl -p
Disable SELinux, firewalld and the swap partition. It is recommended to create VMs without a swap partition.
Disable SELINUX
# sed -i --follow-symlinks
's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Disable swap partition
# swapoff -a
<comment lines for swap partition in /etc/fstab>
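The fstab edit can also be done with sed. The snippet below demonstrates the expression on a sample file with hypothetical device names; on the VM, run the same sed against /etc/fstab (after backing it up):

```shell
# Create a sample fstab (hypothetical entries) to demonstrate the edit.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Prefix '#' to every uncommented line that mounts a swap device.
sed -i 's|^\([^#].*[[:space:]]swap[[:space:]].*\)$|#\1|' "$FSTAB"
cat "$FSTAB"
```

Only the swap entry is commented out; the root filesystem line is left untouched.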
Disable firewall service
# systemctl disable firewalld
Reboot the machine and verify that all kernel modules are loaded properly.
# reboot
Check after the reboot that all modules are loaded:
# lsmod | grep
'^\(ip_vs\|ip_vs_rr\|ip_vs_wrr\|ip_vs_sh\|nf_conntrack_ipv4\|br_netfilter\)'
nf_conntrack_ipv4      15053  0
br_netfilter           22256  0
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
Verify the netfilter bridge flag
# sysctl -a | grep bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
Step 2) Install Docker and Kubernetes. To be performed on both Master VM and Node VMs.
Install lvm2 and device mapper. Both are required for Docker and Kubernetes to work.
# yum install -y yum-utils device-mapper-persistent-data lvm2
Install Docker repo
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Setup Kubernetes repo
# cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install Docker, Kubernetes and the Kubernetes client
# yum install -y --nogpgcheck docker-ce kubelet kubeadm kubectl kubernetes-cni
Start and enable both the docker and kubelet services (very important step)
# systemctl start docker && systemctl enable docker
# systemctl start kubelet && systemctl enable kubelet
It is recommended to reboot the VM after this step
# reboot
Step 3) Initialize Master Node. To be performed only on Master VM.
Here we use kubeadm to initialize the Kubernetes master node. This command starts the various components of the Kubernetes master, such as the API server, proxy, controller manager, etcd and scheduler. All of these components are containerized and run as Docker containers.
Initialise the cluster using kubeadm. Note that MasternodeIP is the IP associated with the VM's ethernet interface. In most cases, the VM gets this IP from the company's LAN through DHCP.
# kubeadm init --apiserver-advertise-address <yourMasternodeIP> --pod-network-cidr=192.168.0.0/16
Example:
# kubeadm init --apiserver-advertise-address 10.3.36.34 --pod-network-cidr=192.168.0.0/16
The output of the above kubeadm command includes the following.
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a
regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f
[podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node as root:
kubeadm join 10.3.36.133:6443 --token ilwb4j.0af8ar1n9emvja73 --discovery-token-ca-cert-hash sha256:86003cacb999e3a9f583a3c5bcc3fde39d0863edd3e419f9a202099918215deb
Please observe and understand:
- The warning about the Docker version can be ignored. Although Kubernetes has not been officially validated against the latest Docker release, it works perfectly fine, and it is only a minor version difference.
- You must see the message "Your Kubernetes master has initialized successfully!" or "Your Kubernetes control-plane has initialized successfully!" in the output.
- The admin.conf settings are required for the client program kubectl to work. We run those commands on the master VM so that the master can also act as a client. (Client and Node are two different things.) A separate VM or Docker instance can also be configured as a client to manage the cluster.
- The next instruction is to install a Pod network. We will do that in the next step.
- At the end, the output contains a kubeadm join command. Copy this command as is; the same command will be run on the Node VMs to add them to the cluster.
Step 4) Install Pod Network, set up a client and the Kubernetes Dashboard. To be performed on Master VM.
Configure Master as a client so that kubectl command can work.
# mkdir -p $HOME/.kube
# cp /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
Run first kubectl command on master
# kubectl get nodes
NAME                  STATUS     ROLES    AGE    VERSION
k8s-training-master   NotReady   master   100s   v1.13.4
You can see the STATUS is "NotReady". That is because the Pod network has not yet been installed.
Let's install the Pod network "WeaveNet".
# IPALLOC_RANGE=192.168.0.0/16
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=$IPALLOC_RANGE"
Run the command below again to see whether the master is Ready.
# kubectl get nodes
NAME                  STATUS   ROLES    AGE     VERSION
k8s-training-master   Ready    master   3m27s   v1.13.4
Install the Dashboard. The Dashboard is simply a GUI client. Run the command below to install it.
# kubectl apply -f "https://gist.githubusercontent.com/initcron/32ff89394c881414ea7ef7f4d3a1d499/raw/baffda78ffdcaf8ece87a76fb2bb3fd767820a3f/kube-dashboard.yaml"
The Dashboard service above is of NodePort type. The NodePort can be found using the command below; it is listed under PORT(S) as a five-digit number starting with 3. The Dashboard can then be accessed using the IP of any VM in the cluster with that port appended.
Ex. http://10.3.36.133:32179
# kubectl get svc kubernetes-dashboard -n kube-system
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.110.200.10   <none>        80:32179/TCP   7d8h
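The NodePort can be pulled out of the PORT(S) column with plain shell string handling. A sketch using the example values from this document:

```shell
# PORT(S) value from the kubectl output; the format is <port>:<nodePort>/<proto>.
PORTS="80:32179/TCP"
NODE_IP=10.3.36.133          # IP of any VM in the cluster
NODE_PORT=${PORTS#*:}        # strip everything up to the first ':'
NODE_PORT=${NODE_PORT%/*}    # strip the '/TCP' suffix
echo "http://${NODE_IP}:${NODE_PORT}"
```

This prints the dashboard URL, http://10.3.36.133:32179, matching the example above.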
Step 5) Join Node VMs to form a cluster. To be performed on Node VMs.
Join the nodes to the master to set up the cluster. (Note: the IP, token and cert hash will be different for each deployment.)
# kubeadm join 10.3.36.133:6443 --token ilwb4j.0af8ar1n9emvja73 --discovery-token-ca-cert-hash sha256:86003cacb999e3a9f583a3c5bcc3fde39d0863edd3e419f9a202099918215deb
This is the same command that "kubeadm init" printed while initializing the master.
Run the following command on the master and you should see that the nodes have joined the cluster and are in the Ready state. It takes a couple of minutes for nodes to become Ready after running the "kubeadm join" command.
# kubectl get nodes
NAME                  STATUS     ROLES    AGE     VERSION
k8s-training-master   Ready      master   7m53s   v1.13.4
k8s-training-node1    Ready      <none>   61s     v1.13.4
k8s-training-node2    NotReady   <none>   3s      v1.13.4
That is all. Your k8s cluster is ready to use.
Optional Step) Set up a client.
We already set up the master as a client in step 4). However, the k8s cluster can also be managed from another Linux VM, a Windows machine, or even a Docker container that is not part of the cluster.
On a Linux VM or Docker container, set up the Kubernetes repo as given in step 2) and run the command below:
# yum install kubectl
Then copy admin.conf from the master to the client machine. Before that, we can choose to use a different user than root.
# useradd k8s
# su - k8s
As k8s user
$ mkdir -p $HOME/.kube
$ scp -p root@<MasterIP>:/etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
The kubectl command will now point to your newly set up k8s cluster.
For Windows, follow the link https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl, select the Windows tab and follow the instructions.
Reset Cluster
Follow the steps below to reset nodes.
# kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
# kubectl delete node <node name>
After deleting the node, reset all kubeadm-installed state:
# kubeadm reset
Once worker nodes are reset, they can be re-added to the cluster using the "kubeadm join" command.
If the join command is lost, it can be regenerated by running the command below:
# kubeadm token create --print-join-command
On the master node, the "kubeadm init" command can be run to re-initialize the cluster.