Prerequisites: Services

Network Policies are defined as Kubernetes objects and are used to control inbound/outbound network traffic. However, they work as desired only if the networking plugin used in the cluster supports them. If you are using GKE, you need to enable the network-policy addon while creating the cluster, as follows:
networkandcode@cloudshell:~$ gcloud container clusters create cluster1 --enable-network-policy --zone us-central1-a
Here cluster1 is the name of the cluster.
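
If the cluster already exists, network policy enforcement can also be switched on afterwards; per the GKE documentation the addon is enabled first and enforcement is then turned on. A rough sketch (adjust the cluster name and zone to your setup; the nodes may be recreated in the process):

gcloud container clusters update cluster1 --update-addons=NetworkPolicy=ENABLED --zone us-central1-a
gcloud container clusters update cluster1 --enable-network-policy --zone us-central1-a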

To generate a kubeconfig entry for the cluster:
networkandcode@cloudshell:~$ gcloud container clusters get-credentials cluster1 --zone us-central1-a

Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster1.
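
To confirm kubectl is now pointing at this cluster, the current context can be checked, for instance with:

kubectl config current-context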

Let's define a deployment manifest and create it

networkandcode@k8s-master:~$ cat ex25-deploy.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deploy25
spec:
  template:
    metadata:
      labels:
        tag: label25
    spec:
      containers:
      - name: ctr25
        image: nginx
  replicas: 4
  selector:
    matchLabels:
      tag: label25
...

networkandcode@k8s-master:~$ kubectl create -f ex25-deploy.yaml
deployment.extensions/deploy25 created

networkandcode@k8s-master:~$ kubectl get deploy deploy25
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
deploy25   4/4     4            4           12s
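
Note that the extensions/v1beta1 API group used above for Deployments is deprecated and has been removed in Kubernetes 1.16+; on newer clusters the same Deployment would be written against apps/v1, roughly as below (the rest of the exercise is unchanged):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy25
spec:
  replicas: 4
  selector:
    matchLabels:
      tag: label25
  template:
    metadata:
      labels:
        tag: label25
    spec:
      containers:
      - name: ctr25  # same nginx container as before
        image: nginx
...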

Let's expose this deployment using a ClusterIP service

networkandcode@k8s-master:~$ cat ex25-svc-ci.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: svc25-ci
spec:
  type: ClusterIP  # ClusterIP is the default service type even if not specified
  selector:
    tag: label25
  ports:
  - name: port25
    protocol: TCP
    port: 8080  # this is the service port
    targetPort: 80 # this is the container port
...

networkandcode@k8s-master:~$ kubectl create -f ex25-svc-ci.yaml
service/svc25-ci created

networkandcode@cloudshell:~$ kubectl get svc svc25-ci
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
svc25-ci   ClusterIP   10.35.245.194   <none>        8080/TCP   21s
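
Before testing, it's worth confirming that the service has picked up the four nginx Pods as its backends; the command below lists the endpoints (output omitted here). Inside the cluster, the service is typically also reachable by its DNS name, svc25-ci.default.svc.cluster.local, in addition to the ClusterIP.

kubectl get endpoints svc25-ci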

Let's create a separate Pod from which we will try to access the service created above

networkandcode@cloudshell:~$ cat ex25-po-apache.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: po25-apache
spec:
  containers:
  - name: ctr25-apache
    image: httpd
...

networkandcode@cloudshell:~$ kubectl create -f ex25-po-apache.yaml
pod/po25-apache created

networkandcode@cloudshell:~$ kubectl get po po25-apache
NAME          READY   STATUS    RESTARTS   AGE
po25-apache   1/1     Running   0          39s

Log in to this Pod and try to curl the service IP on the service port defined (8080)

networkandcode@cloudshell:~$ kubectl exec -it po25-apache -- bash
root@po25-apache:/usr/local/apache2# curl 10.35.245.194:8080
bash: curl: command not found

# curl is not available in the httpd image, so we need to install it
root@po25-apache:/usr/local/apache2# apt update -y

root@po25-apache:/usr/local/apache2# apt install curl -y

root@po25-apache:/usr/local/apache2# curl 10.35.245.194:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
--TRUNCATED--

So access is allowed from the standalone apache Pod to the nginx service, as network traffic is not blocked by default within the cluster. We can now try to block this traffic using a Network Policy.

Let's check the short name and apiVersion for NetworkPolicy

networkandcode@cloudshell:~$ kubectl api-resources | grep NetworkPolicy
networkpolicies                   netpol       extensions                     true         NetworkPolicy
networkpolicies                   netpol       networking.k8s.io              true         NetworkPolicy

networkandcode@cloudshell:~$ kubectl explain netpol
KIND:     NetworkPolicy
VERSION:  extensions/v1beta1
--TRUNCATED--
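
kubectl explain defaults to the older extensions/v1beta1 group here; the schema of the stable networking.k8s.io/v1 group can be inspected by passing the API version explicitly, for instance:

kubectl explain networkpolicy --api-version=networking.k8s.io/v1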

We may now define and create the Network Policy

networkandcode@cloudshell:~$ cat ex25-netpol.yaml
---
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: netpol25
spec:
  podSelector:  # netpol gets applied to Pod(s) with these matching labels
    matchLabels:
      tag: label25
  ingress:  # allow inbound traffic
  - from:  # only from Pod(s) with these matching labels
    - podSelector:
        matchLabels:
          tag: label25-apache
...

networkandcode@cloudshell:~$ kubectl create -f ex25-netpol.yaml
networkpolicy.extensions/netpol25 created

networkandcode@cloudshell:~$ kubectl get netpol
NAME       POD-SELECTOR   AGE
netpol25   tag=label25    5s
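
As with the Deployment, the extensions/v1beta1 group for NetworkPolicy is deprecated; on current clusters the stable equivalent uses networking.k8s.io/v1 and is otherwise identical:

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpol25
spec:
  podSelector:  # netpol gets applied to Pod(s) with these matching labels
    matchLabels:
      tag: label25
  ingress:  # allow inbound traffic
  - from:  # only from Pod(s) with these matching labels
    - podSelector:
        matchLabels:
          tag: label25-apache
...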

The Network Policy now applies to the nginx Pods backing the service, as it selects Pods with the label 'tag: label25'; these Pods, and hence the service, can only receive packets from Pod(s) that have the label 'tag: label25-apache'

We may now try to curl the service again from the apache Pod; however, it shouldn't work, as the network policy is now in place and the apache Pod doesn't have the required label

networkandcode@cloudshell:~$ kubectl exec -it po25-apache -- curl 10.35.245.194:8080
^Ccommand terminated with exit code 130
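
The curl hangs and has to be interrupted with Ctrl+C: once any NetworkPolicy selects a Pod, ingress traffic that no policy explicitly allows is dropped for that Pod. The same behaviour is often used deliberately; a namespace-wide 'deny all ingress' policy, for example, just uses an empty podSelector and no ingress rules. A rough sketch with the stable API group (illustrative only, not part of this exercise):

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress  # hypothetical name
spec:
  podSelector: {}  # selects every Pod in the namespace
  policyTypes:
  - Ingress  # no ingress rules listed, so all ingress is denied
...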

Let's now add a label to the Pod and try to curl again; it should work this time because of the label

networkandcode@cloudshell:~$ kubectl label po po25-apache tag=label25-apache
pod/po25-apache labeled

# The label should show up in metadata > labels
# we could also add this declaratively by modifying the Pod manifest file and then applying it with 'kubectl apply -f ex25-po-apache.yaml', as sketched further below
networkandcode@cloudshell:~$ kubectl get po po25-apache -o yaml
apiVersion: v1
kind: Pod
metadata:
  --TRUNCATED--
  labels:
    tag: label25-apache
--TRUNCATED--
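
For reference, the declarative alternative mentioned above would be to add the label to the Pod manifest itself and re-apply it; a sketch of the modified ex25-po-apache.yaml:

---
apiVersion: v1
kind: Pod
metadata:
  name: po25-apache
  labels:
    tag: label25-apache  # label expected by the network policy
spec:
  containers:
  - name: ctr25-apache
    image: httpd
...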

networkandcode@cloudshell:~$ kubectl exec -it po25-apache -- curl 10.35.245.194:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
--TRUNCATED--

In this post, we have seen how to filter incoming traffic, AKA ingress; in a similar way, we could also police outbound or egress traffic, and use NetworkPolicy objects for various other scenarios
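
For instance, an egress rule follows the same pattern with an 'egress' section and a 'to' block. A rough sketch (the policy name is illustrative and this is not part of the exercise above) that would restrict outbound traffic from the apache Pod to only the nginx Pods on port 80:

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpol25-egress  # hypothetical name
spec:
  podSelector:
    matchLabels:
      tag: label25-apache  # applies to the apache Pod
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          tag: label25  # the nginx Pods
    ports:
    - protocol: TCP
      port: 80
...

Note that once egress is restricted like this, DNS traffic would also have to be allowed explicitly for name resolution from the Pod to keep working.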

Reference: https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/

--end-of-post--