Basics of Kubernetes
Why Kubernetes
Imagine a scenario where you have quite a few dockerized applications (let's say 10) and 3 servers where you want to host them. The applications might include a service that handles authentication, a service with the core features, a few helper services, databases, etc. The load on these services obviously differs, as the core services might need more resources (CPU, memory) than the helper services. So you deploy all the non-critical, less resource-hungry services on 2 of the servers and the service that needs more resources on the remaining one.
With this kind of setup, a few problems can come up over time:
- What if your core application crashes and goes down?
- What if the traffic is very high during office hours (9-6) and very low outside them, and you need to scale your applications up/down accordingly?
- What if the user base is rapidly growing and you have to scale accordingly?
- What if the server running the core application runs out of resources while you have plenty of resources on the other 2 servers where the non-critical services are running?
Kubernetes solves these kinds of problems. I won't be focusing on how Kubernetes solves these and similar problems, but will give an overview of a few basic concepts as well as how to use kubectl, the command-line way to interact with the API server that Kubernetes exposes.
Basics of Kubernetes
Docker
To understand Kubernetes, we first need to understand what Docker is. The aim of this article is to get a basic understanding of Kubernetes, not Docker. If you are unfamiliar with Docker, I suggest you go through this article first.
Pod
A pod is the smallest unit in a Kubernetes cluster. A pod encapsulates one or more containers. Generally a pod contains a single container, but that is not always true. Since the main purpose of Kubernetes (k8s) is scaling, a single pod usually contains a single container with a specific responsibility. Say you deployed a pod with two containers, one handling authentication and one handling the core features. The load on the core-feature container can be much higher than on the authentication container, so we might want to add another replica of this pod. But since both applications are in the same pod, and the pod is the smallest unit, we would end up with two pods each running both an authentication and a core-service container, and the memory consumed by the extra authentication container is wasted and could have been utilized elsewhere. So it is much better to build pods from containers in a way that helps us when we scale up/down.
Now that we know what a pod is, let us try to create one. But before we create a pod, we need a k8s cluster to practice on. For learning purposes we can install either minikube or kind to create a local Kubernetes cluster.
Creating a k8s cluster with kind
Please go through this for the installation of kind on your local machine.
On macOS you can just install it with Homebrew.
brew install kind
Let us create a cluster with kind.
kind create cluster
We can verify if our cluster is up and running by using the following command.
╭─k8s@kuberentes ~/Desktop/demo
╰─$ kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:52459
CoreDNS is running at https://127.0.0.1:52459/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
If you do not have kubectl installed, check out this link for the installation.
From the above output, we can see that the control plane is running at https://127.0.0.1:52459. The control plane is the node that orchestrates everything in the Kubernetes cluster, but let us set that aside for now and try hitting the endpoint directly.
╭─k8s@kuberentes ~/Desktop/demo
╰─$ curl https://127.0.0.1:52459 -k
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
- We tried to access the control plane anonymously and it was forbidden. So how do we log in to this cluster? The answer is the ~/.kube/config file in your home directory, which contains the cluster-admin credentials for this local Kubernetes cluster. You can inspect it as shown below.
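If you are curious, you can inspect that file through kubectl itself; the --minify flag restricts the output to the currently active context:
kubectl config view --minify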
Now that we have everything we need, let us create our first pod on kubernetes cluster.
Creating pods
╭─k8s@kuberentes ~/Desktop/demo
╰─$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: web
    image: nginx
- apiVersion: Determines which version of the Kubernetes API we want to use to create this object. This is a mandatory field for creating any object in Kubernetes. v1 was the first stable API version; it contains many of the core Kubernetes objects and is the one used for creating pods. Feel free to check this article to find out which apiVersion to use for the different Kubernetes objects.
- kind: Specifies the type of object that we want to create, a Pod in this case.
- metadata: As the name suggests, this contains the metadata for the pod, such as its name.
- spec: Specifies the state that we want this pod to have. For this pod, we want a container named web running the nginx image, which in this scenario will be fetched from Docker Hub. Notice that the entries under containers form an array, which means a single pod can have multiple containers (see the sketch below).
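To illustrate that, here is a minimal sketch of a manifest for a pod with two containers. The pod name web-with-sidecar, the second container, and its command are hypothetical, made up just to show the array syntax:
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar      # hypothetical name, not used elsewhere in this article
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar             # hypothetical second container sharing the pod
    image: busybox
    command: ["sh", "-c", "tail -f /dev/null"]   # just keeps the container alive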
Let us create our first pod.
╭─k8s@kuberentes ~/Desktop/demo
╰─$ kubectl apply -f pod.yaml
pod/nginx created
The pod is created. Let us verify that the pod we just created exists.
╭─k8s@kuberentes ~/Desktop/demo
╰─$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          31s
The pod has been successfully created.
- NAME: The name that we have mentioned on the metadata
- READY: This pod has a single container and that container is in the ready state. If the entry were 1/2, it would mean the pod had two containers and only one of them was ready.
- STATUS: The pod is successfully running
- RESTARTS: There have been zero restarts for this pod
- AGE: Time since the pod was created
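As a side note, kubectl get pods -o wide adds a few more columns to this listing, such as the pod IP and the node the pod was scheduled on:
kubectl get pods -o wide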
Deployment
A deployment is something that sits on top of pods, encapsulating them, similar to the pod-container relation. But why do we need a deployment when we already have a pod? Well, on our own, if we want to scale the pods up we have to create new pods ourselves and delete them when we no longer need them. Since a deployment encapsulates the pods, we can directly define on the deployment how many replicas of a pod we want, scale the pods up/down, and much more. There are many features that we can use with deployments.
Creating a deployment
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
- apiVersion: The apiVersion that we need to create a deployment is apps/v1.
- metadata: The name of this deployment is nginx, similar to the pod that we created earlier.
- spec: The state that we want our deployment to be in.
- replicas: Defines the number of actively running pods at any time. Say one of the nginx pods crashes under heavy load; the deployment is responsible for creating a new pod so that there are always three pods running.
- selector: Defines which pods this deployment will manage. In this case we are asking the deployment to match the label app: nginx, so it will apply to the pods which have the label app: nginx set (you can try the same selector yourself with the command shown after this list).
- template: This looks somewhat similar to the specification of the pod that we created earlier. We have the label app: nginx inside the metadata field, and a spec which uses the nginx image. We do not need to specify an apiVersion in this case.
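Since the deployment selects its pods by label, the same selector works on the command line too; -l is the standard kubectl flag for filtering by label:
kubectl get pods -l app=nginx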
Let us apply this manifest file and create a new deployment.
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl apply -f deploy.yaml
deployment.apps/nginx created
Listing the pods
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
nginx                    1/1     Running             0          17m
nginx-6799fc88d8-gr2hw   0/1     ContainerCreating   0          12s
nginx-6799fc88d8-pfwrn   1/1     Running             0          12s
nginx-6799fc88d8-phmjm   0/1     ContainerCreating   0          12s
Notice that two of the newly created pods are in the ContainerCreating state while one of them is already Running.
After some time:
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx                    1/1     Running   0          18m
nginx-6799fc88d8-gr2hw   1/1     Running   0          83s
nginx-6799fc88d8-pfwrn   1/1     Running   0          83s
nginx-6799fc88d8-phmjm   1/1     Running   0          83s
- All of the pods are successfully created.
- If you look at the names of the newly created pods, there is something different from the single nginx pod we created earlier. The name tells us that each pod is part of a deployment called nginx and part of a ReplicaSet (discussed later) called nginx-6799fc88d8, and that the pod's own name is, for example, nginx-6799fc88d8-phmjm.
Listing the deployment
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           8m23s
- NAME: Name of the deployment
- READY: 3 pods out of 3 desired pods are ready
- UP-TO-DATE: Number of pods that have been updated to achieve the current desired state
- AVAILABLE: Number of pods available to the end users
- AGE: Time since the deployment was created
ReplicaSet
Along with the pods, a deployment also encapsulates a ReplicaSet. A ReplicaSet is a Kubernetes object which ensures that the desired number of pods is running at any given time. Deployments use ReplicaSets under the hood to scale up, scale down, and restart the pods.
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY   AGE
nginx-6799fc88d8   3         3         3       15m
Scaling the deployment
Now that we have our deployment ready, let us scale it up so that we have 6 pods at any time.
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl scale deployment nginx --replicas=6
deployment.apps/nginx scaled
Listing the pods
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx                    1/1     Running   0          35m
nginx-6799fc88d8-gr2hw   1/1     Running   0          18m
nginx-6799fc88d8-h6gwj   1/1     Running   0          16s
nginx-6799fc88d8-jbpsw   1/1     Running   0          16s
nginx-6799fc88d8-nksf9   1/1     Running   0          16s
nginx-6799fc88d8-pfwrn   1/1     Running   0          18m
nginx-6799fc88d8-phmjm   1/1     Running   0          18m
Now we have 6 instances of the nginx pod running.
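Note that kubectl scale changes the replica count imperatively. The declarative alternative, which keeps the manifest as the source of truth, is to edit the replicas field in deploy.yaml (for example to replicas: 6) and re-apply it:
kubectl apply -f deploy.yaml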
Restarting the deployment
Imagine a scenario where we want to restart all the created pods. It would be very hard if we had 1000 instances of the pod running and had to restart each and every one manually. But the deployment makes our life much easier: we can restart every pod within a deployment using a single command.
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl rollout restart deployment nginx
deployment.apps/nginx restarted
Listing the pods
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
nginx                    1/1     Running             0          38m
nginx-6799fc88d8-gr2hw   0/1     Terminating         0          21m
nginx-6799fc88d8-pfwrn   1/1     Running             0          21m
nginx-6799fc88d8-phmjm   0/1     Terminating         0          21m
nginx-6d9967585d-c2tnl   0/1     ContainerCreating   0          7s
nginx-6d9967585d-ghgk2   1/1     Running             0          14s
nginx-6d9967585d-jrdsc   1/1     Running             0          10s
nginx-6d9967585d-ss6s2   0/1     ContainerCreating   0          5s
nginx-6d9967585d-v74c5   1/1     Running             0          14s
nginx-6d9967585d-vcs4q   1/1     Running             0          14s
- We can see that a few of the newly created pods (AGE 14s) are in the Running state, a few are in the ContainerCreating state, and some of the old pods are Terminating.
- After we ran the rollout restart command, a new ReplicaSet was created with replicas=6 and the older ReplicaSet was scaled down to 0 replicas. We can see the change of ReplicaSet in the names of the pods that are part of the deployment, and we can follow the rollout directly as shown below.
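These are standard rollout subcommands: status waits until the rollout has finished, and history lists the revisions the deployment has gone through.
kubectl rollout status deployment nginx
kubectl rollout history deployment nginx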
Listing the replicaset
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl get replicaset
NAME               DESIRED   CURRENT   READY   AGE
nginx-6799fc88d8   0         0         0       47m
nginx-6d9967585d   6         6         6       26m
After some time, all of the pods are in the running state.
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx                    1/1     Running   0          66m
nginx-6d9967585d-c2tnl   1/1     Running   0          27m
nginx-6d9967585d-ghgk2   1/1     Running   0          27m
nginx-6d9967585d-jrdsc   1/1     Running   0          27m
nginx-6d9967585d-ss6s2   1/1     Running   0          27m
nginx-6d9967585d-v74c5   1/1     Running   0          27m
nginx-6d9967585d-vcs4q   1/1     Running   0          27m
Deleting the deployment
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl delete deployment nginx
deployment.apps "nginx" deleted
Deleting the deployment is very straightforward. This deletes all the pods as well as the ReplicaSets managed by the deployment.
After deletion:
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl get deployment,replicasets,pods
NAME        READY   STATUS    RESTARTS   AGE
pod/nginx   1/1     Running   0          68m
- We can see that only one pod is left running, the standalone one we created at the beginning.
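When you eventually want to clean up, this standalone pod can be deleted just as simply, either by name (we keep it around for the debugging examples below):
kubectl delete pod nginx
or through the manifest it was created from:
kubectl delete -f pod.yaml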
Debugging the issues
Once in a while, you are guaranteed to run into issues while playing with the cluster. So, here are a few things that will help you debug them.
Getting a shell inside a pod
Just like we exec into a Docker container, we can exec into a pod using the kubectl exec command and get a shell.
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl exec -it nginx -- bash
root@nginx:/# id
uid=0(root) gid=0(root) groups=0(root)
- Format: kubectl exec -it <pod_name> -- <command>
- -it runs the command in interactive mode, similar to Docker
- nginx is the name of the pod
- -- separates the command that we want to run inside the container from all the other commands and flags
- bash is the command that we want to run inside the pod
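You do not always need an interactive shell; kubectl exec can also run a one-off command and print its output, for example checking the nginx version inside the pod:
kubectl exec nginx -- nginx -v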
Reading the container logs
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl logs nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/07/24 15:41:11 [notice] 1#1: using the "epoll" event method
2022/07/24 15:41:11 [notice] 1#1: nginx/1.23.1
2022/07/24 15:41:11 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/07/24 15:41:11 [notice] 1#1: OS: Linux 5.10.104-linuxkit
2022/07/24 15:41:11 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/07/24 15:41:11 [notice] 1#1: start worker processes
2022/07/24 15:41:11 [notice] 1#1: start worker process 32
2022/07/24 15:41:11 [notice] 1#1: start worker process 33
2022/07/24 15:41:11 [notice] 1#1: start worker process 34
2022/07/24 15:41:11 [notice] 1#1: start worker process 35
- Format: kubectl logs <pod_name>
- The -f flag keeps following the logs as they are written, like tail -f does
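Two other log flags worth knowing, both standard kubectl flags: --previous shows the logs of the last terminated container (useful after a crash), and -c selects a specific container in a multi-container pod:
kubectl logs nginx --previous
kubectl logs <pod_name> -c <container_name>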
Describe
We can use kubectl describe to get details about the different Kubernetes objects, which can also be used to debug issues.
╭─k8s@kuberentes ~/Desktop/demo/k8s
╰─$ kubectl describe pod nginx
Name:         nginx
Namespace:    default
Priority:     0
Node:         kind-control-plane/172.18.0.2
Start Time:   Sun, 24 Jul 2022 21:26:04 +0545
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.0.8
IPs:
  IP:  10.244.0.8
Containers:
  web:
    Container ID:   containerd://a8b331cdd9bee501e217e10e10f9d1d32075f30c2602b9712130c7ce34cbd157
    Image:          nginx
    Image ID:       docker.io/library/nginx@sha256:1761fb5661e4d77e107427d8012ad3a5955007d997e0f4a3d41acc9ff20467c7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sun, 24 Jul 2022 21:26:11 +0545
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qnbqq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-qnbqq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
- Format: kubectl describe <kubernetes-object> <object-name>
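When debugging, the Events section at the bottom of the describe output is usually the most useful part; it records why a pod is stuck (failed image pulls, scheduling problems, and so on). The same command works for other object types too, for example:
kubectl describe deployment nginx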