Deployments: Managed Updates

Key Takeaways for AI & Readers
  • Role: Manages ReplicaSets to provide declarative updates for Pods.
  • Strategies: RollingUpdate (gradual) vs. Recreate (all at once).
  • Features: Built-in rollback, scaling, and status tracking.

1. Deep Dive: RollingUpdate Logic

The default strategy is RollingUpdate. It replaces Pods gradually. You control the speed and safety with two parameters:

maxSurge (Default: 25%)

  • Definition: How many extra pods can be created above the desired replica count.
  • Example: Replicas=4, maxSurge=25% (1 pod).
  • Result: During update, you might have up to 5 pods running (4 old + 1 new).
  • Higher Value: Faster rollout, but consumes more CPU/RAM quota.

maxUnavailable (Default: 25%)

  • Definition: How many pods can be down during the update.
  • Example: Replicas=4, maxUnavailable=25% (1 pod).
  • Result: You are guaranteed to have at least 3 pods running at all times.
  • Zero Value: Setting this to 0 ensures 100% capacity is maintained, but requires maxSurge > 0.

Pro Tip: For critical high-availability apps, set maxUnavailable: 0 and maxSurge: 1. This ensures you never drop below full capacity.
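
Putting both parameters together, here is a minimal sketch of that high-availability setup (the name my-app and the image tag are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow 1 extra Pod above the 4 replicas during the update
      maxUnavailable: 0    # never drop below 4 ready Pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:v2   # placeholder image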

Revision History Limit

By default, K8s keeps 10 old ReplicaSets so you can roll back.

  • Setting: .spec.revisionHistoryLimit
  • Trade-off: Set it too high and stale ReplicaSet objects clutter etcd. Set it to 0 and all history is garbage-collected, so you can't roll back at all!
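
The field lives directly under spec, alongside replicas and strategy; a sketch:

spec:
  revisionHistoryLimit: 5   # keep 5 old ReplicaSets (default: 10); 0 disables rollback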

Progress Deadline Seconds

How long should K8s wait for a Deployment to finish before giving up?

  • Default: 600 seconds (10 minutes).
  • Behavior: If the Deployment makes no progress (e.g., Pods stuck in ImagePullBackOff) for 10 minutes, the controller sets the Progressing condition to False with reason ProgressDeadlineExceeded. It does not roll back automatically, but the signal is crucial for CI/CD pipelines to know when to fail a job.
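
Like revisionHistoryLimit, this is a top-level spec field; a sketch with a tighter deadline:

spec:
  progressDeadlineSeconds: 300   # report ProgressDeadlineExceeded after 5 minutes without progress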

2. Managing Rollouts

Check Status

kubectl rollout status deployment/my-app

Blocks until the rollout finishes, and exits non-zero if it fails (e.g., the progress deadline is exceeded). Useful in CI/CD scripts!
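
In a pipeline you would typically gate the next step on its exit code; a sketch (the timeout value is arbitrary):

# Fail the CI job if the rollout does not complete in time
kubectl rollout status deployment/my-app --timeout=5m || exit 1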

Pause & Resume

You can pause a rollout to verify a "canary" set of pods before letting it finish.

kubectl rollout pause deployment/my-app
# ... verify the new version ...
kubectl rollout resume deployment/my-app

Rollback (The "Undo" Button)

If you deploy v2 and it's crashing, you can instantly revert.

kubectl rollout undo deployment/my-app

This updates the Deployment to use the previous ReplicaSet revision.
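
If the previous revision is also bad, you can inspect the recorded history and jump to a specific revision (the revision number below is illustrative):

# List recorded revisions
kubectl rollout history deployment/my-app

# Inspect what a specific revision contained
kubectl rollout history deployment/my-app --revision=2

# Roll back to that exact revision
kubectl rollout undo deployment/my-app --to-revision=2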


3. Deployment Patterns

Blue/Green (Not native, but possible)

  1. Create Deployment blue (v1). Service points to blue.
  2. Create Deployment green (v2).
  3. Wait for green to be healthy.
  4. Update Service selector to point to green.
  5. Delete blue.
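
Step 4 can be done with a selector patch. This sketch assumes the Pod templates carry a version label (blue/green) and the Service is named my-app:

kubectl patch service my-app \
  -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}'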

Canary (Native-ish)

  1. Deployment app-primary (10 replicas). Service points to label app: my-app.
  2. Deployment app-canary (1 replica). Also has label app: my-app.
  3. Traffic is split roughly 10:1 (the canary sees about 1 in 11 requests), because the Service load-balances evenly across all Pods matching app: my-app.
  4. If canary is good, update app-primary to new version and delete app-canary.
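
A sketch of the two Deployments. Both Pod templates carry app: my-app so the Service selects them, while an extra track label keeps each Deployment's own selector distinct (all names and images are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-primary
spec:
  replicas: 10
  selector:
    matchLabels:
      app: my-app
      track: primary
  template:
    metadata:
      labels:
        app: my-app      # matched by the Service
        track: primary   # matched only by this Deployment
    spec:
      containers:
        - name: my-app
          image: my-app:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      track: canary
  template:
    metadata:
      labels:
        app: my-app      # matched by the Service
        track: canary    # matched only by this Deployment
    spec:
      containers:
        - name: my-app
          image: my-app:v2   # the candidate version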

4. Common Pitfalls

  1. Missing Probes: Without a Readiness Probe, Kubernetes assumes a new Pod is "Ready" as soon as its container starts, and kills old Pods immediately, potentially causing downtime if your app takes 10s to boot (see the probe sketch after this list).
  2. Resource Quotas: If your Namespace has a strict quota, a maxSurge update might fail because the cluster forbids creating the extra temporary Pod.
  3. Label Selector Immutability: You cannot change the label selector of an existing Deployment. You must delete and recreate it.
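
For pitfall 1, a minimal readiness probe sketch; the endpoint path, port, and timings are assumptions about the app:

containers:
  - name: my-app
    image: my-app:v2
    readinessProbe:
      httpGet:
        path: /healthz          # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5    # give the app time to boot
      periodSeconds: 5

With this in place, the rolling update only retires an old Pod once its replacement actually reports Ready.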