75 Kubernetes Deployment and Runtime Validation
Deploy your application to Kubernetes and validate runtime behavior, resilience, and observability.
In this lab, you will deploy your containerized application to Kubernetes and validate that it behaves correctly under real orchestration conditions.
You will move from single-container execution to cluster-managed execution.
Context and goals
So far, you have:
- Built a Node.js application
- Containerized it with Docker
- Orchestrated services with Docker Compose
- Automated validation using CI
In this lab, you will:
- Deploy your application to Kubernetes
- Expose it via a Service
- Validate runtime behavior
- Simulate failures
- Explore scaling and resilience
This lab is intentionally deep. Core sections are required. At least one extension lane item is required for grading.
Verify your environment
Your goal is to end up with a working Kubernetes cluster and kubectl access.
For this lab, you must use k3s on the course Ubuntu Server VM.
If you are working on a personal machine, consult the official documentation for your chosen Kubernetes distribution, but the lab instructions assume k3s on Ubuntu Server.
At the end of setup, this must work:
kubectl get nodes

You must see at least one node in Ready state.
Install k3s on the course VM (recommended)
Run these commands inside your Ubuntu VM.
# Update packages
sudo apt update

# Install k3s (single-node cluster) and make kubeconfig readable
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_EXEC="server --write-kubeconfig-mode 644" sh -

# Put kubeconfig in the default location for kubectl
mkdir -p "$HOME/.kube"
cp /etc/rancher/k3s/k3s.yaml "$HOME/.kube/config"

# Ensure kubectl uses your local kubeconfig
export KUBECONFIG="$HOME/.kube/config"

# Persist for future sessions
echo 'export KUBECONFIG="$HOME/.kube/config"' >> ~/.bashrc

# Verify cluster access
kubectl get nodes
kubectl get pods -A

Notes:
- No user group modification is required.
- kubectl must use your local $HOME/.kube/config.
- Do not use sudo kubectl for this lab.
- To restart k3s: sudo systemctl restart k3s

Tools checklist
You should have:
- kubectl
- docker (for building the image)
On the course VM:
- kubectl is provided by k3s (it installs a kubectl binary)
- Docker should already be available from earlier labs
Quick checks:
kubectl version --client
docker version

Create a dedicated namespace
Create a namespace for this lab:
kubectl create namespace quote-lab

Set it as default for your session:
kubectl config set-context --current --namespace=quote-lab

Verify:
kubectl get pods

Build and tag your image for Kubernetes
Build your Docker image locally:
docker build -t quote-app:local -f docker/Dockerfile .

Image note for k3s
k3s uses containerd by default.
For this lab, use the image workflow recommended in class.
If your cluster cannot access your local Docker images directly, you may need to:
- Tag the image appropriately
- Push it to a registry
- Or import it into containerd
We will keep the focus on Kubernetes behavior rather than image distribution mechanics.
Fallback: ImagePullBackOff or image not found
If your pod status shows ImagePullBackOff, Kubernetes cannot find your local Docker image.
k3s uses containerd internally. Your Docker image store is separate.
Import your image into k3s:
docker save quote-app:local | sudo k3s ctr images import -

Verify the image exists inside k3s:
sudo k3s ctr images list | grep quote-app

If you have already applied the Deployment, force Kubernetes to retry by deleting the failing pod:
kubectl get pods
kubectl delete pod <pod-name>
kubectl get pods -w

If you have not created the Deployment yet, continue the lab and apply your manifests first.
If you are unsure whether the cluster itself works, deploy a simple public image:
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=ClusterIP
kubectl get pods

If nginx runs successfully, your cluster is healthy and the issue is only image distribution.
If using another environment, adjust accordingly.
Create a Deployment
Create a file named deployment.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quote-app
  template:
    metadata:
      labels:
        app: quote-app
    spec:
      containers:
        - name: quote-app
          image: quote-app:local
          ports:
            - containerPort: 3000
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 3
            periodSeconds: 5

Apply it:
kubectl apply -f deployment.yaml

Observe:
kubectl get pods
kubectl describe pod <pod-name>

Expose the application with a Service
Create service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: quote-app
spec:
  selector:
    app: quote-app
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP

Apply it:
kubectl apply -f service.yaml

Verify:
kubectl get svc

Access the application
Use port-forwarding:
kubectl port-forward svc/quote-app 8080:80

Open:
http://localhost:8080

Verify that:
- The app loads
- The health endpoint responds
- Basic routes function correctly
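You can check the same items from the terminal while the port-forward is running. Only / and /health are assumed here, matching the readiness probe; adjust for whatever other routes your app serves:

curl -i http://localhost:8080/
curl -i http://localhost:8080/health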
Validate readiness behavior
Delete the running pod:
kubectl delete pod <pod-name>

Observe:
kubectl get pods -w

Confirm:
- A new pod is created automatically
- The Service continues routing traffic
- Downtime is minimal
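If you want to watch for dropped requests while the pod is replaced, note that kubectl port-forward is pinned to a single pod and breaks when that pod is deleted. It is easier to poll the Service from inside the cluster with a throwaway pod; the curlimages/curl image is an arbitrary public choice:

kubectl run curl-check --rm -it --restart=Never --image=curlimages/curl -- \
  sh -c 'while true; do curl -s -o /dev/null -w "%{http_code}\n" http://quote-app/health; sleep 1; done'

Run this in a second terminal, delete the pod again, and watch whether any non-200 codes appear. Ctrl+C stops the loop, and --rm removes the helper pod.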
Scale the Deployment
Scale to multiple replicas:
kubectl scale deployment quote-app --replicas=3

Observe:
kubectl get pods

Confirm:
- Three pods are running
- All are Ready
Discuss:
- What does scaling change?
- What does it not change?
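One concrete way to ground the discussion: scaling changes how many pods stand behind the Service, not the Service itself. The endpoint list grows while the Service name, ClusterIP, and port stay the same:

kubectl get endpoints quote-app
kubectl get pods -o wide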
Introduce a controlled failure
This step is required for grading.
Modify your deployment to intentionally break readiness.
Examples:
- Change readinessProbe path to a non-existent endpoint (see the sketch after this list)
- Use the wrong container port
- Set an invalid image tag
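As a sketch of the first option, the readinessProbe in deployment.yaml could point at a path the app does not serve; the path below is made up:

readinessProbe:
  httpGet:
    path: /does-not-exist    # intentionally wrong; the app only serves /health
    port: 3000
  initialDelaySeconds: 3
  periodSeconds: 5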
Apply the change and observe:
kubectl apply -f deployment.yaml
kubectl get pods
kubectl describe pod <pod-name>

Confirm:
- Pods fail readiness
- Traffic is not routed
- Kubernetes reports the failure
Restore the correct configuration.
Inspect logs and events
Inspect application logs:
kubectl logs <pod-name>

Inspect cluster events:
kubectl get events

Observe how Kubernetes surfaces runtime issues.
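A few variations that help during the failure exercise (all standard kubectl flags):

# Follow logs live
kubectl logs -f <pod-name>

# Logs from the previous container instance, useful after a restart
kubectl logs <pod-name> --previous

# Events sorted by time, which makes probe failures easier to spot
kubectl get events --sort-by='.metadata.creationTimestamp'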
Extension lanes
At least one extension item is required for grading. You may complete more for additional distinction.
Lane A: Resource management and limits
- Add resource requests and limits to the container (a sketch follows this list)
- Observe scheduling behavior
- Simulate memory pressure
- Explain how limits protect cluster stability
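A minimal sketch for Lane A: add a resources section to the quote-app container in deployment.yaml. The numbers are illustrative starting points, not tuned recommendations:

resources:
  requests:
    cpu: 100m        # what the scheduler reserves for this pod
    memory: 128Mi
  limits:
    cpu: 250m        # CPU above this is throttled
    memory: 256Mi    # exceeding this gets the container OOM-killed

kubectl describe node then shows how the requests count against the node's allocatable capacity.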
Lane B: Liveness probes
- Add a livenessProbe (example after this list)
- Simulate an internal failure
- Observe container restarts
- Explain readiness vs liveness differences
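For Lane B, a livenessProbe can sit next to the existing readinessProbe; reusing /health is an assumption, and a dedicated liveness endpoint is often preferable:

livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 3    # restart the container after three consecutive failures

A failing readiness probe removes the pod from the Service's endpoints; a failing liveness probe restarts the container (watch the RESTARTS column in kubectl get pods).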
Lane C: Configuration management
- Move configuration to a ConfigMap (sketched after this list)
- Inject environment variables
- Change config without rebuilding the image
- Explain why immutable images matter
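For Lane C, one possible shape, with invented key names (use whatever configuration your app actually reads). Define a ConfigMap, for example in configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: quote-app-config
data:
  LOG_LEVEL: "debug"
  APP_GREETING: "Quote of the day"

Then inject it into the container spec in deployment.yaml:

envFrom:
  - configMapRef:
      name: quote-app-config

After changing a value, apply the ConfigMap again and run kubectl rollout restart deployment quote-app. The image never changes, which is exactly why keeping images immutable pays off.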
Lane D: Database integration
- Deploy PostgreSQL inside Kubernetes (see the manifest sketch after this list)
- Use a Service for connectivity
- Add a PersistentVolumeClaim
- Validate data persistence across restarts
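For Lane D, a minimal single-file sketch covering the PVC, Deployment, and Service. It uses the public postgres:16 image, and the inline password is for the lab only (a Secret would normally hold it); k3s ships a default local-path StorageClass, so the claim should bind automatically:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: "labpassword"                    # lab only; use a Secret in real setups
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata  # keep data in a subdirectory of the volume
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-data
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432

Your application reaches the database at postgres:5432 inside the namespace. Deleting the postgres pod should not lose data, because the PersistentVolumeClaim outlives it.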
Lane E: Horizontal scaling reasoning
- Simulate traffic load (commands sketched after this list)
- Observe CPU usage
- Discuss what HPA would require
- Explain why autoscaling depends on metrics
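For Lane E, a sketch of what a metrics-driven experiment looks like. kubectl top needs a metrics source (k3s normally bundles metrics-server), an HPA can only compute a CPU percentage if the container declares CPU requests (see Lane A), and the busybox loop is only a crude stand-in for real traffic:

# Current CPU/memory usage per pod
kubectl top pods

# Crude load generator hitting the Service from inside the cluster
kubectl run load-gen --rm -it --restart=Never --image=busybox -- \
  sh -c 'while true; do wget -q -O- http://quote-app/ > /dev/null; done'

# What an HPA would look like once requests and metrics are in place
kubectl autoscale deployment quote-app --cpu-percent=70 --min=1 --max=5
kubectl get hpa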
Lane F: Multi-stage validation
- Create separate namespaces for dev and staging (commands sketched after this list)
- Deploy different versions
- Compare behaviors
- Explain how this fits into CI/CD pipelines
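For Lane F, the namespace mechanics are only a few commands; the namespace names are placeholders, and each environment would point at its own image tag (for example quote-app:dev versus quote-app:staging):

kubectl create namespace quote-dev
kubectl create namespace quote-staging

# Apply the same manifests into each namespace, adjusting the image per environment
kubectl apply -f deployment.yaml -f service.yaml -n quote-dev
kubectl apply -f deployment.yaml -f service.yaml -n quote-staging

# Compare what is running where
kubectl get pods -n quote-dev -o wide
kubectl get pods -n quote-staging -o wide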
Reflect on orchestration
Answer the following verbally during review:
- What does Kubernetes automate?
- What remains your responsibility?
- What signals determine application health?
- How does this differ from Docker Compose?
Wrap-up
You have:
- Deployed an application to Kubernetes
- Exposed it with a Service
- Validated runtime behavior
- Simulated failures
- Explored scaling and resilience
This completes the practical orchestration portion of the course.
In larger systems, this foundation expands to:
- Ingress controllers
- Service meshes
- Observability stacks
- GitOps workflows
- Infrastructure automation
You now understand the core control loop that powers modern cloud-native systems.