Delivering modern cloud-native applications with open source technologies on Azure Kubernetes Service
When you run modern, microservices-based applications in Kubernetes, you often want to control which components can communicate with each other. The principle of least privilege should be applied to how traffic can flow between pods in an Azure Kubernetes Service (AKS) cluster. For example, you likely want to block traffic directly to back-end applications. The Network Policy feature in Kubernetes lets you define rules for ingress and egress traffic between pods in a cluster.
Create a new AKS cluster with advanced networking
Our AKS cluster from lab 1 uses basic networking, but this lab requires AKS advanced networking. Follow these steps to create a new cluster:
Create a virtual network and subnet
az network vnet create \
--resource-group $RGNAME \
--name myVnet \
--address-prefixes 10.0.0.0/8 \
--subnet-name myAKSSubnet \
--subnet-prefix 10.240.0.0/16
Validate service principal values in profile
echo $APPID
echo $CLIENTSECRET
Get the virtual network resource ID and subnet ID
VNET_ID=$(az network vnet show --resource-group $RGNAME --name myVnet --query id -o tsv)
echo $VNET_ID
SUBNET_ID=$(az network vnet subnet list --resource-group $RGNAME --vnet-name myVnet --query "[].id" --output tsv)
echo $SUBNET_ID
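The "[].id" query returns the ID of every subnet in the VNet; since we created only one, $SUBNET_ID holds a single ID. As a sanity check that it points at the subnet we just created, the subnet and VNet names can be pulled out of the resource ID with plain shell string manipulation (a sketch; the subscription GUID and resource group below are placeholders):

```shell
# Example Azure subnet resource ID (placeholder subscription GUID and resource group)
SUBNET_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/myAKSSubnet"

# The subnet name is the last path segment of the resource ID
SUBNET_NAME="${SUBNET_ID##*/}"
echo "$SUBNET_NAME"    # expect: myAKSSubnet

# The VNet name sits between ".../virtualNetworks/" and "/subnets/..."
VNET_FROM_ID="${SUBNET_ID##*/virtualNetworks/}"
VNET_FROM_ID="${VNET_FROM_ID%%/*}"
echo "$VNET_FROM_ID"   # expect: myVnet
```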
Assign the service principal Contributor permissions to the virtual network resource
az role assignment create --assignee $APPID --scope $VNET_ID --role Contributor
Create AKS Cluster
Note: the --network-policy parameter below enables Network Policy enforcement on the cluster
CLUSTERNAME=aks-np-${UNIQUE_SUFFIX}
az aks create \
--resource-group $RGNAME \
--name $CLUSTERNAME \
--node-count 3 \
--generate-ssh-keys \
--network-plugin azure \
--service-cidr 10.0.0.0/16 \
--dns-service-ip 10.0.0.10 \
--docker-bridge-address 172.17.0.1/16 \
--vnet-subnet-id $SUBNET_ID \
--service-principal $APPID \
--client-secret $CLIENTSECRET \
--network-policy azure
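Note that --dns-service-ip must fall inside --service-cidr (and should not be the first usable address in the range, which is typically reserved for the cluster's own kubernetes service). A quick local sanity check of the values above, sketched in plain bash:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip_to_int() {
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

SERVICE_CIDR="10.0.0.0/16"
DNS_SERVICE_IP="10.0.0.10"

NET="${SERVICE_CIDR%/*}"    # network address: 10.0.0.0
BITS="${SERVICE_CIDR#*/}"   # prefix length: 16
MASK=$(( 0xFFFFFFFF << (32 - BITS) & 0xFFFFFFFF ))

# The DNS service IP is inside the CIDR if both share the same network bits
if [ $(( $(ip_to_int "$DNS_SERVICE_IP") & MASK )) -eq $(( $(ip_to_int "$NET") & MASK )) ]; then
  echo "$DNS_SERVICE_IP is inside $SERVICE_CIDR"
else
  echo "$DNS_SERVICE_IP is OUTSIDE $SERVICE_CIDR"
fi
```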
Get credentials
az aks get-credentials --resource-group $RGNAME --name $CLUSTERNAME
Deploy our application
Create namespace
kubectl create ns hackfest
Create secret to allow pods to access Cosmos from this new cluster
export MONGODB_USER=$(az cosmosdb show --name $COSMOSNAME --resource-group $RGNAME --query "name" -o tsv)
export MONGODB_PASSWORD=$(az cosmosdb list-keys --name $COSMOSNAME --resource-group $RGNAME --query "primaryMasterKey" -o tsv)
Use the Instrumentation Key from lab 2 (Build Application Components and Prerequisites)
export APPINSIGHTS_INSTRUMENTATIONKEY='replace-me'
kubectl create secret generic cosmos-db-secret --from-literal=user=$MONGODB_USER --from-literal=pwd=$MONGODB_PASSWORD --from-literal=appinsights=$APPINSIGHTS_INSTRUMENTATIONKEY -n hackfest
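Kubernetes stores secret values base64-encoded, so if you later inspect cosmos-db-secret (for example with kubectl get secret cosmos-db-secret -n hackfest -o yaml) the values will not be readable until decoded. A quick illustration, using a placeholder standing in for the real Cosmos DB account name:

```shell
# Placeholder value standing in for a real Cosmos DB account name
MONGODB_USER="my-cosmos-account"

# kubectl stores this base64-encoded in the secret's data map...
ENCODED=$(printf '%s' "$MONGODB_USER" | base64)
echo "$ENCODED"

# ...and decoding recovers the original value
DECODED=$(printf '%s' "$ENCODED" | base64 --decode)
echo "$DECODED"
```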
Follow the steps from lab 3 (Helm Setup and Deploy Application)
Note: the Helm charts from lab 3 should already be updated and should work without further edits
Test and ensure the app works correctly (browse the UI and update data)
Deny all inbound traffic to a pod (data-api)
Quickly test access to one of our APIs from a pod
kubectl run --rm -it --image=alpine network-policy --namespace hackfest --generator=run-pod/v1
wget -qO- http://data-api.hackfest:3009/status
# should see a result such as:
{"message":"api default endpoint for data api","payload":{"uptime":"3 hours"}}
Exit the pod:
exit
Create the deny policy
Review the file block-access-to-data-api.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: data-api-policy
  namespace: hackfest
spec:
  podSelector:
    matchLabels:
      app: data-api
  ingress: []
kubectl apply -f ./labs/networking/network-policy/block-access-to-data-api.yaml
Retry accessing the pod
kubectl run --rm -it --image=alpine network-policy --namespace hackfest --generator=run-pod/v1
wget -qO- http://data-api.hackfest:3009/status
# no results...
Exit the pod:
exit
Notice that if you browse the service-tracker-ui web page, the app no longer works. The other APIs can no longer reach the data-api, so the app is now broken. We should probably fix this.
Allow inbound traffic based on pod label
Update the policy to allow the flights-api pods access
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: data-api-policy
  namespace: hackfest
spec:
  podSelector:
    matchLabels:
      app: data-api
  ingress:
  - from:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          app: flights-api
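A subtlety worth noting in the rule above: because namespaceSelector and podSelector appear in the same "from" element, both conditions must match (a logical AND), so traffic is allowed from pods labeled app: flights-api in any namespace. Written as two separate elements, either condition alone would admit traffic (a logical OR):

```yaml
# Two separate "from" elements: traffic is admitted if EITHER matches
ingress:
- from:
  - namespaceSelector: {}     # any pod in any namespace, or...
  - podSelector:              # ...any flights-api pod in the policy's namespace
      matchLabels:
        app: flights-api
```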
Apply the policy
kubectl apply -f ./labs/networking/network-policy/fix-access-data-api.yaml
Test access from flights-api
# lookup your pod name as it will be different
kubectl exec -it flights-api-9f9bb5b86-4x7z8 -n hackfest sh
wget -qO- http://data-api.hackfest:3009/status
{"message":"api default endpoint for data api","payload":{"uptime":"4 hours"}}
Exit the pod:
exit
Test access from weather-api (this should fail)
# lookup your pod name as it will be different
kubectl exec -it weather-api-<pod-name> -n hackfest sh
wget -qO- http://data-api.hackfest:3009/status
# no results...
Exit the pod:
exit
Allow inbound traffic based on namespace
Create and label the production namespace
kubectl create namespace production
kubectl label namespace/production purpose=production
Update the policy to allow traffic from the production namespace as well as the API pods in hackfest
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: data-api-policy
  namespace: hackfest
spec:
  podSelector:
    matchLabels:
      app: data-api
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: production
    - podSelector:
        matchLabels:
          app: flights-api
    - podSelector:
        matchLabels:
          app: weather-api
    - podSelector:
        matchLabels:
          app: quakes-api
Apply the policy
kubectl apply -f ./labs/networking/network-policy/fix-access-namespace.yaml
Validate that all pods and the web page are working properly
Validate that a pod in the production namespace can also access the data-api pod
kubectl run --rm -it --image=alpine network-policy --namespace production --generator=run-pod/v1
wget -qO- http://data-api.hackfest:3009/status
{"message":"api default endpoint for data api","payload":{"uptime":"5 hours"}}
Exit the pod:
exit