Labels are key-value pairs that can be attached to Kubernetes objects such as Nodes, Pods, etc. They are either predefined or user-defined. User-defined labels are set under the metadata section of the object configuration, which is in YAML format.
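For instance, a minimal metadata section carrying a user-defined label could look like this (the app: web pair here is purely illustrative):

metadata:
  name: example-pod
  labels:           # user-defined labels go under metadata > labels
    app: web        # key 'app' with value 'web'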

In this scenario, we are going to assign a label to a node, and then schedule a pod on that node by referencing the same label in the spec > nodeSelector section of the Pod.

To check the list of nodes in the cluster
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get nodes
NAME                                                STATUS    ROLES     AGE       VERSION
gke-standard-cluster-1-default-pool-7a916bd9-cpxw   Ready     <none>    2m        v1.11.5-gke.4
gke-standard-cluster-1-default-pool-7a916bd9-v1jd   Ready     <none>    2m        v1.11.5-gke.4
gke-standard-cluster-1-default-pool-7a916bd9-xg0f   Ready     <none>    2m        v1.11.5-gke.4
networkandcode@cloudshell:~ (kubernetes-cka-224606)$

Nodes carry certain predefined labels. To check these labels
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get nodes --show-labels
NAME                                                STATUS    ROLES     AGE       VERSION         LABELS
gke-standard-cluster-1-default-pool-7a916bd9-cpxw   Ready     <none>    9m        v1.11.5-gke.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=gke-standard-cluster-1-default-pool-7a916bd9-cpxw
gke-standard-cluster-1-default-pool-7a916bd9-v1jd   Ready     <none>    9m        v1.11.5-gke.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=gke-standard-cluster-1-default-pool-7a916bd9-v1jd
gke-standard-cluster-1-default-pool-7a916bd9-xg0f   Ready     <none>    9m        v1.11.5-gke.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=gke-standard-cluster-1-default-pool-7a916bd9-xg0f

To see the labels associated with the 3rd node
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get nodes/gke-standard-cluster-1-default-pool-7a916bd9-xg0f --show-labels
NAME                                                STATUS    ROLES     AGE       VERSION         LABELS
gke-standard-cluster-1-default-pool-7a916bd9-xg0f   Ready     <none>    20m       v1.11.5-gke.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=gke-standard-cluster-1-default-pool-7a916bd9-xg0f

Let’s assign a new user-defined label to the 3rd node; the label’s key is nodeNumber and its value is 3
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl label node gke-standard-cluster-1-default-pool-7a916bd9-xg0f nodeNumber=3
node "gke-standard-cluster-1-default-pool-7a916bd9-xg0f" labeled

Let’s check the labels assigned to the 3rd node again; it should now show the newly assigned label
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get nodes gke-standard-cluster-1-default-pool-7a916bd9-xg0f --show-labels
NAME                                                STATUS    ROLES     AGE       VERSION         LABELS
gke-standard-cluster-1-default-pool-7a916bd9-xg0f   Ready     <none>    24m       v1.11.5-gke.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=gke-standard-cluster-1-default-pool-7a916bd9-xg0f,nodeNumber=3
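As a side note, labels can also be used as selectors to filter objects; for instance, the node we just labeled can be listed on its own with kubectl's -l flag:

kubectl get nodes -l nodeNumber=3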

As of now, we haven’t scheduled any pods on the nodes. To check the list of pods in the cluster
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get pods
No resources found.
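Note that this lists only the pods in the current (default) namespace; system pods running in other namespaces, such as kube-system, can be viewed with

kubectl get pods --all-namespaces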

Let’s define a pod configuration in YAML

networkandcode@cloudshell:~ (kubernetes-cka-224606)$ cat >> podConfig.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  nodeSelector:    # matching labels on node(s), defined under the pod's spec section
    nodeNumber: "3"

^C
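Optionally, the file can be checked before creating anything, using kubectl's client-side dry run (the boolean --dry-run form shown below is the one used by kubectl versions of this era):

kubectl create -f podConfig.yaml --dry-run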

There is a nodeSelector section inside the spec of the pod, carrying the label key and value that we assigned to the node earlier. This ensures the Pod gets scheduled on the particular node matching the label, in this case the 3rd node. If we hadn't specified this selector, we would have no control over where (on which node) to place this pod; the placement would be handled automatically by the Kubernetes scheduler.
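Also note that if no node in the cluster carried a matching label, the pod would remain in the Pending state; the reason (a FailedScheduling event) could then be inspected with

kubectl describe pod nginx-pod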

Let’s apply the pod configuration to schedule the pod on the 3rd node

networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl create -f podConfig.yaml
pod "nginx-pod" created

To check the list of pods again
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get pods
NAME        READY     STATUS    RESTARTS   AGE
nginx-pod   1/1       Running   0          1m

To see the node where the pod is scheduled
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get pods -o wide
NAME        READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-pod   1/1       Running   0          1m        10.8.0.10   gke-standard-cluster-1-default-pool-7a916bd9-xg0f

The above output shows the pod is scheduled on the 3rd node, gke-standard-cluster-1-default-pool-7a916bd9-xg0f, which is the desired behavior
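The node name can also be read directly from the pod's spec field, where the scheduler records it, using a jsonpath output query:

kubectl get pod nginx-pod -o jsonpath='{.spec.nodeName}'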

To delete the pod
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl delete pod nginx-pod
pod "nginx-pod" deleted

networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get pods
No resources found.

To remove the added label from the 3rd node, append a hyphen to the label's key
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl label node gke-standard-cluster-1-default-pool-7a916bd9-xg0f nodeNumber-
node "gke-standard-cluster-1-default-pool-7a916bd9-xg0f" labeled

The label with key ‘nodeNumber’ should no longer be present on the 3rd node
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get node gke-standard-cluster-1-default-pool-7a916bd9-xg0f --show-labels
NAME                                                STATUS    ROLES     AGE       VERSION         LABELS
gke-standard-cluster-1-default-pool-7a916bd9-xg0f   Ready     <none>    2h        v1.11.5-gke.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=gke-standard-cluster-1-default-pool-7a916bd9-xg0f

--end-of-post--