Kubernetes > Pods > Containers > Resources > Memory
The memory resources we are going to look at in this blog post are
- Memory request = the minimum, guaranteed amount of memory reserved for the container
- Memory limit = the maximum amount of memory the container may use; exceeding it gets the container OOM-killed
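The request is what the scheduler checks against a node’s allocatable memory when placing the Pod, while the limit is enforced at runtime through the container’s memory cgroup. As a quick sketch (the node name below is a placeholder, substitute one from kubectl get nodes), we can check how much memory a node can actually offer to Pods:
# Compare a node's total capacity with what is allocatable for Pods
kubectl describe node <node-name> | grep -A 5 Allocatable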
Let’s define the Pod configuration
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ cat ex1.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: default
spec:
  containers:
  - name: ctr1
    image: polinux/stress
    resources:
      limits:
        memory: "150Mi"
      requests:
        memory: "100Mi"
    command: ["stress"]
    # --vm-bytes 120M makes the stress worker try to allocate 120MiB of memory
    args: ["--vm", "1", "--vm-bytes", "120M", "--vm-hang", "1"]
...
Let’s create the Pod
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl create -f ex1.yml
pod/pod1 created
The Pod should now be running
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          24m
To view more details about the Pod
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get pod pod1 -o yaml
--TRUNCATED--
spec:
  containers:
  - args:
    - --vm
    - "1"
    - --vm-bytes
    - 120M
    - --vm-hang
    - "1"
    command:
    - stress
    image: polinux/stress
    imagePullPolicy: Always
    name: ctr1
    resources:
      limits:
        memory: 150Mi
      requests:
        cpu: 100m
        memory: 100Mi
--TRUNCATED--
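Note that the output shows a CPU request of 100m even though we never specified one; this is presumably a default injected by a LimitRange object in the namespace (GKE clusters ship with one). A quick sketch to check, assuming such an object exists in the default namespace:
# List any LimitRange objects that apply defaults to this namespace
kubectl get limitrange -n default
kubectl describe limitrange -n default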
To get the resource usage metrics of the Pod
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl top pod pod1
NAME   CPU(cores)   MEMORY(bytes)
pod1   80m          120Mi
So, it’s currently consuming 120Mi, which is 20Mi above the ‘memory request’ value (100Mi) we specified, but still 30Mi below the configured ‘memory limit’ (150Mi), so the container keeps running.
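When a Pod has multiple containers, kubectl top can also break usage down per container; a small sketch, assuming the metrics server is available on the cluster:
# Show per-container CPU and memory usage within the Pod
kubectl top pod pod1 --containers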
Let’s define another Pod whose memory limit (100Mi) is lower than what the stress process tries to allocate (120M)
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ cat ex2.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  namespace: default
spec:
  containers:
  - name: ctr2
    image: polinux/stress
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "120M", "--vm-hang", "1"]
...
Let’s create this Pod now
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl create -f ex2.yml
pod/pod2 created
This Pod isn’t running: the container tries to allocate 120MiB, above its 100Mi limit, so the kernel OOM-kills it and the kubelet keeps restarting it with an increasing back-off, hence the CrashLoopBackOff status
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get pod pod2
NAME   READY   STATUS             RESTARTS   AGE
pod2   0/1     CrashLoopBackOff   3          67s
There are no metrics since the container isn’t running
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl top pod pod2
NAME   CPU(cores)   MEMORY(bytes)
pod2   0m           0Mi
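Metrics only show up while the container is actually running. To observe the crash and restart cycle live, we could watch the Pod instead; a small sketch:
# Stream status changes until interrupted with Ctrl+C
kubectl get pod pod2 --watch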
Let’s check for more details
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get pod pod2 -o yaml
--TRUNCATED--
containerStatuses:
- containerID: docker://7a42ac697350621cbf53245a60a5e550b1314971b65eae204334cebef6aabd26
  image: polinux/stress:latest
  imageID: docker-pullable://polinux/stress@sha256:6d1825288ddb6b3cec8d3ac8a488c8ec2449334512ecb938483fc2b25cbbdb9a
  lastState:
    terminated:
      containerID: docker://5e34e57ffa71e06dfd9abf4550bf847320dcf14ea78f2e16d130f72fcce47362
      exitCode: 1
      finishedAt: 2019-03-23T18:25:55Z
      reason: OOMKilled
      startedAt: 2019-03-23T18:25:54Z
  name: ctr2
  ready: false
  restartCount: 4
  state:
    terminated:
      containerID: docker://7a42ac697350621cbf53245a60a5e550b1314971b65eae204334cebef6aabd26
      exitCode: 1
      finishedAt: 2019-03-23T18:26:45Z
      reason: OOMKilled
      startedAt: 2019-03-23T18:26:44Z
--TRUNCATED--
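Instead of scanning the full YAML, the termination reason can be extracted directly with a JSONPath query; a minimal sketch, assuming the Pod has a single container:
# Print only the reason of the last terminated state, e.g. OOMKilled
kubectl get pod pod2 -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'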
We can see these OOM-kill warnings in the node events as well
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl describe nodes | grep Warning
Warning OOMKilling 3m8s kernel-monitor, gke-cluster-default-pool-f22707e6-28mw Memory cgroup out of memory: Kill process 11226 (stress) score 1958 or sacrifice child
Warning OOMKilling 3m7s kernel-monitor, gke-cluster-default-pool-f22707e6-28mw Memory cgroup out of memory: Kill process 11274 (stress) score 1959 or sacrifice child
Warning OOMKilling 2m53s kernel-monitor, gke-cluster-default-pool-f22707e6-28mw Memory cgroup out of memory: Kill process 11386 (stress) score 1958 or sacrifice child
Warning OOMKilling 2m28s kernel-monitor, gke-cluster-default-pool-f22707e6-28mw Memory cgroup out of memory: Kill process 11482 (stress) score 1958 or sacrifice child
Warning OOMKilling 98s kernel-monitor, gke-cluster-default-pool-f22707e6-28mw Memory cgroup out of memory: Kill process 11674 (stress) score 1959 or sacrifice child
Warning OOMKilling 8s kernel-monitor, gke-cluster-default-pool-f22707e6-28mw Memory cgroup out of memory: Kill process 11985 (stress) score 1958 or sacrifice child
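The same information is available through the events API without grepping; a sketch, assuming the event reason on your cluster is also OOMKilling (this string comes from GKE’s node problem detector and may differ elsewhere):
# Filter cluster events down to OOM kills
kubectl get events --field-selector reason=OOMKilling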
Let’s delete both the Pods we created - pod1 and pod2
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl delete pods --all
pod "pod1" deleted
pod "pod2" deleted
Let’s define a new Pod with a very high memory request, more than the available memory on any single node in the cluster
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ cat ex3.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod3
  namespace: default
spec:
  containers:
  - name: ctr3
    image: polinux/stress
    resources:
      requests:
        memory: "900Gi"
      limits:
        memory: "1000Gi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "120M", "--vm-hang", "1"]
...
It should remain in the Pending state, as the scheduler cannot find a node with enough allocatable memory to satisfy the 900Gi request
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl create -f ex3.yml
pod/pod3 created
networkandcode@cloudshell:~ (kubernetes-cka-224606)$ kubectl get pods pod3
NAME   READY   STATUS    RESTARTS   AGE
pod3   0/1     Pending   0          73s
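To confirm why it is stuck, we can look at the Pod’s events; a sketch, with the exact message wording varying by cluster size and Kubernetes version:
# The Events section should show a FailedScheduling warning mentioning Insufficient memory
kubectl describe pod pod3 | grep -A 5 Events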
Reference: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/
--end-of-post--