Kubernetes Deployment and Runtime Validation

Deploy your application to Kubernetes and validate runtime behavior, resilience, and observability.

In this lab, you will deploy your containerized application to Kubernetes and validate that it behaves correctly under real orchestration conditions.

You will move from single-container execution to cluster-managed execution.

Context and goals

So far, you have:

  • Built a Node.js application
  • Containerized it with Docker
  • Orchestrated services with Docker Compose
  • Automated validation using CI

In this lab, you will:

  • Deploy your application to Kubernetes
  • Expose it via a Service
  • Validate runtime behavior
  • Simulate failures
  • Explore scaling and resilience

This lab is intentionally deep. Core sections are required. At least one extension lane item is required for grading.


Verify your environment

Your goal is to end up with a working Kubernetes cluster and kubectl access.

For this lab, you must use k3s on the course Ubuntu Server VM.

If you are working on a personal machine, consult the official documentation for your chosen Kubernetes distribution, but the lab instructions assume k3s on Ubuntu Server.

At the end of setup, this must work:

Terminal window
kubectl get nodes

You must see at least one node in Ready state.
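The exact values vary, but the output should resemble the following (the node name and k3s version here are placeholders):

Terminal window
NAME        STATUS   ROLES                  AGE   VERSION
ubuntu-vm   Ready    control-plane,master   2m    v1.28.8+k3s1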

k3s + kubectl setup on Ubuntu Server

Run these commands inside your Ubuntu VM.

Terminal window
# Update packages
sudo apt update
# Install k3s (single-node cluster) and make kubeconfig readable
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="server --write-kubeconfig-mode 644" sh -
# Put kubeconfig in the default location for kubectl
mkdir -p "$HOME/.kube"
cp /etc/rancher/k3s/k3s.yaml "$HOME/.kube/config"
# Ensure kubectl uses your local kubeconfig
export KUBECONFIG="$HOME/.kube/config"
# Persist for future sessions
echo 'export KUBECONFIG="$HOME/.kube/config"' >> ~/.bashrc
# Verify cluster access
kubectl get nodes
kubectl get pods -A

Notes:

  • No user group modification is required.
  • kubectl must use your local $HOME/.kube/config.
  • Do not use sudo kubectl for this lab.
  • To restart k3s:
Terminal window
sudo systemctl restart k3s

Tools checklist

Tools you should have

You should have:

  • kubectl
  • docker (for building the image)

On the course VM:

  • kubectl is provided by k3s (it installs a kubectl binary)
  • Docker should already be available from earlier labs

Quick checks:

Terminal window
kubectl version --client
docker version

Create a dedicated namespace

Create a namespace for this lab:

Terminal window
kubectl create namespace quote-lab

Set it as default for your session:

Terminal window
kubectl config set-context --current --namespace=quote-lab

Verify:

Terminal window
kubectl get pods
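An empty namespace prints "No resources found in quote-lab namespace." — that is expected at this point. To double-check which namespace your context now targets:

Terminal window
kubectl config view --minify -o jsonpath='{..namespace}'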

Build and tag your image for Kubernetes

Build your Docker image locally:

Terminal window
docker build -t quote-app:local -f docker/Dockerfile .

Image note for k3s

k3s uses containerd by default.

For this lab, use the image workflow recommended in class.
If your cluster cannot access your local Docker images directly, you may need to:

  • Tag the image appropriately
  • Push it to a registry
  • Or import it into containerd

We will keep the focus on Kubernetes behavior rather than image distribution mechanics.

Fallback: ImagePullBackOff or image not found

If your pod status shows ImagePullBackOff or ErrImagePull, Kubernetes cannot find your local Docker image.

k3s uses containerd internally. Your Docker image store is separate.

Import your image into k3s:

Terminal window
docker save quote-app:local | sudo k3s ctr images import -

Verify the image exists inside k3s:

Terminal window
sudo k3s ctr images list | grep quote-app

If you have already applied the Deployment, force Kubernetes to retry by deleting the failing pod:

Terminal window
kubectl get pods
kubectl delete pod <pod-name>
kubectl get pods -w

If you have not created the Deployment yet, continue the lab and apply your manifests first.

If you are unsure whether the cluster itself works, deploy a simple public image:

Terminal window
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=ClusterIP
kubectl get pods

If nginx runs successfully, your cluster is healthy and the issue is only image distribution.
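Once the check passes, remove the test resources so they do not clutter the namespace:

Terminal window
kubectl delete deployment nginx-test
kubectl delete service nginx-test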

If using another environment, adjust accordingly.


Create a Deployment

Create a file named deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quote-app
  template:
    metadata:
      labels:
        app: quote-app
    spec:
      containers:
        - name: quote-app
          image: quote-app:local
          ports:
            - containerPort: 3000
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 3
            periodSeconds: 5

Apply it:

Terminal window
kubectl apply -f deployment.yaml

Observe:

Terminal window
kubectl get pods
kubectl describe pod <pod-name>
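You can also wait for the rollout to finish instead of polling manually:

Terminal window
kubectl rollout status deployment/quote-app

The command blocks until all replicas are available or the rollout stalls.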

Expose the application with a Service

Create service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: quote-app
spec:
  selector:
    app: quote-app
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP

Apply it:

Terminal window
kubectl apply -f service.yaml

Verify:

Terminal window
kubectl get svc
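Before port-forwarding, you can optionally confirm in-cluster routing with a throwaway pod. This sketch assumes the public busybox image is reachable and that your app serves /health, as the readiness probe does:

Terminal window
kubectl run tmp-check --rm -it --restart=Never --image=busybox -- wget -qO- http://quote-app/health

Because your context defaults to the quote-lab namespace, the short name quote-app resolves to the Service on port 80.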

Access the application

Use port-forwarding:

Terminal window
kubectl port-forward svc/quote-app 8080:80

Open:

http://localhost:8080

Verify that:

  • The app loads
  • The health endpoint responds (a curl check follows this list)
  • Basic routes function correctly
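For the health endpoint check, run this in a second terminal while the port-forward is active (the /health path matches the readiness probe; adjust if your app differs):

Terminal window
curl http://localhost:8080/health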

Validate readiness behavior

Delete the running pod:

Terminal window
kubectl delete pod <pod-name>

Observe:

Terminal window
kubectl get pods -w
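In a second terminal, you can also watch the Service's endpoints while the pod is replaced. The old pod's IP disappears, and the new pod's IP is added only once it passes readiness:

Terminal window
kubectl get endpoints quote-app -w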

Confirm:

  • A new pod is created automatically
  • The Service continues routing traffic
  • Downtime is minimal

Scale the Deployment

Scale to multiple replicas:

Terminal window
kubectl scale deployment quote-app --replicas=3

Observe:

Terminal window
kubectl get pods

Confirm:

  • Three pods are running
  • All are Ready
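The Deployment summary shows the same information at a glance:

Terminal window
kubectl get deployment quote-app

The READY column should report 3/3.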

Discuss:

  • What does scaling change?
  • What does it not change?

Introduce a controlled failure

This step is required for grading.

Modify your deployment to intentionally break readiness.

Examples:

  • Change the readinessProbe path to a non-existent endpoint (sketched after this list)
  • Use the wrong container port
  • Set an invalid image tag
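As a minimal sketch of the first option, the probe section of deployment.yaml might look like this (the path /does-not-exist is deliberately wrong):

          readinessProbe:
            httpGet:
              path: /does-not-exist
              port: 3000
            initialDelaySeconds: 3
            periodSeconds: 5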

Apply the change and observe:

Terminal window
kubectl apply -f deployment.yaml
kubectl get pods
kubectl describe pod <pod-name>

Confirm:

  • Pods fail readiness
  • Traffic is not routed
  • Kubernetes reports the failure

Restore the correct configuration.


Inspect logs and events

Inspect application logs:

Terminal window
kubectl logs <pod-name>

Inspect cluster events:

Terminal window
kubectl get events
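Events are unsorted by default; sorting by timestamp makes the sequence easier to follow:

Terminal window
kubectl get events --sort-by=.metadata.creationTimestamp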

Observe how Kubernetes surfaces runtime issues.


Extension lanes

At least one extension item is required for grading. You may complete more for additional distinction.

Lane A: Resource management and limits
  • Add resource requests and limits to the container (a combined sketch for Lanes A and B follows this list)
  • Observe scheduling behavior
  • Simulate memory pressure
  • Explain how limits protect cluster stability
Lane B: Liveness probes
  • Add a livenessProbe
  • Simulate an internal failure
  • Observe container restarts
  • Explain readiness vs liveness differences
Lane C: Configuration management
  • Move configuration to a ConfigMap
  • Inject environment variables
  • Change config without rebuilding the image
  • Explain why immutable images matter
Lane D: Database integration
  • Deploy PostgreSQL inside Kubernetes
  • Use a Service for connectivity
  • Add a PersistentVolumeClaim
  • Validate data persistence across restarts
Lane E: Horizontal scaling reasoning
  • Simulate traffic load
  • Observe CPU usage
  • Discuss what HPA would require
  • Explain why autoscaling depends on metrics
Lane F: Multi-stage validation
  • Create separate namespaces for dev and staging
  • Deploy different versions
  • Compare behaviors
  • Explain how this fits into CI/CD pipelines
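As a starting point for Lanes A and B, here is a sketch of how the container spec in deployment.yaml might grow. The numbers are illustrative assumptions, not recommendations; tune them against your app's actual footprint:

        - name: quote-app
          image: quote-app:local
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 10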

Reflect on orchestration

Answer the following verbally during review:

  • What does Kubernetes automate?
  • What remains your responsibility?
  • What signals determine application health?
  • How does this differ from Docker Compose?

Wrap-up

You have:

  • Deployed an application to Kubernetes
  • Exposed it with a Service
  • Validated runtime behavior
  • Simulated failures
  • Explored scaling and resilience

This completes the practical orchestration portion of the course.

In larger systems, this foundation expands to:

  • Ingress controllers
  • Service meshes
  • Observability stacks
  • GitOps workflows
  • Infrastructure automation

You now understand the core control loop that powers modern cloud-native systems.