Deployments describe applications hosted on the cluster. They provide declarative updates on top of ReplicaSets: you declare the desired state of your Pods and their configuration, and Kubernetes orchestrates the changes needed to reach it.


Deployment specifications, like those of the ReplicaSets they manage, comprise:

  • A selector, which matches Pods.
  • A count of replicas defining the desired number of Pods.
  • A Pod template, used for creation of new Pods.

For example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 12
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: nginx
          image: nginx


The key operations that differ from ReplicaSets are:

  • Creating the Deployment, which in turn creates a ReplicaSet and its Pods.
  • Updating the Pod template, e.g. for configuration, secrets or the container image, which triggers a rollout.
  • Scaling the number of replicas up and down.
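These operations can be sketched with kubectl (the manifest filename and image tag here are illustrative assumptions):

```shell
# Create the Deployment (and, implicitly, its first ReplicaSet and Pods)
kubectl apply -f deployment.yaml

# Update the Pod template's container image, triggering a rollout
kubectl set image deployment/my-app nginx=nginx:1.27
```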


Deployments utilise ReplicaSets to implement their rollout strategies. There are two supported strategies:

  • RollingUpdate (the default) creates a new ReplicaSet in which it prepares replacement pods, terminating the old ones only once the pods are ready.
    • maxUnavailable (% or integer) caps the number of replicas that may be unavailable at any given point during the rollout. Kubernetes will not take old Pods out of service if doing so would breach this limit, preserving uptime.
    • maxSurge (% or integer) defines the maximum number of replicas that may exist above the desired count while the rollout is in progress.
  • Recreate terminates all pods in the current ReplicaSet prior to scaling up the new one, and should be used for applications which don't allow multiple versions to co-exist.
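The strategy is configured on the Deployment spec; a minimal sketch with illustrative values:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most a quarter of replicas out of service
      maxSurge: 1           # at most one extra replica during the rollout
```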

Throughout a rollout, the Service fronting the Deployment adds and removes endpoints as Pods become ready or are terminated. Each revision of the Deployment has its own ReplicaSet, whose name includes the pod template hash.


Get a rolling, high-level overview of the replica counts with:

kubectl rollout status deployment/my-app

The exit status of this command reflects the state of the deployment.
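This makes the command useful in scripts and CI pipelines; a sketch of a hypothetical pipeline step (the timeout value is an assumption):

```shell
# Wait for the rollout; roll back automatically if it does not complete in time
if ! kubectl rollout status deployment/my-app --timeout=120s; then
    kubectl rollout undo deployment/my-app
    exit 1
fi
```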

More detailed information can be obtained through describe.
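For example, to see events, conditions and the old and new ReplicaSets:

```shell
kubectl describe deployment/my-app
```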

There are three states:

  • Complete signals that all update work is done.
  • Progressing indicates that an update is in-flight.
  • Failed means that the update couldn't complete because the new ReplicaSet couldn't scale to the desired number of replicas, e.g. because of a resource constraint or failing readiness probe.
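How long Kubernetes waits before reporting a rollout as Failed is governed by progressDeadlineSeconds, which defaults to 600; a sketch on the Deployment spec:

```yaml
spec:
  progressDeadlineSeconds: 300  # report Failed after 5 minutes without progress
```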


An in-flight rollout can be paused to allow corrections to be made. This is useful because further changes made to a Deployment during a rollout start a new rollout to the latest revision rather than editing the existing one; pausing lets multiple changes be batched into a single rollout.

kubectl rollout pause deployment/my-app
kubectl edit deployment/my-app
kubectl rollout resume deployment/my-app

The deployment can be restarted:

kubectl rollout restart deployment/my-app

Rolling back

Previous ReplicaSets are retained up to revisionHistoryLimit, allowing us to roll back to previous versions. We can investigate these:

kubectl rollout history deployment/my-app
kubectl rollout history --revision=1 deployment/my-app

To roll back to the previous revision:

kubectl rollout undo deployment/my-app

Or to any other revision:

kubectl rollout undo --to-revision=1 deployment/my-app


kubectl scale scales up or down the number of Pods behind a ReplicaSet or Deployment.
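For example, assuming the my-app Deployment above:

```shell
kubectl scale deployment/my-app --replicas=5
```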

Understanding changes

What changed?

kubectl rollout history deploy/dog-sitter

Use the replicasets to understand why:

kubectl get replicaset \
    -l app=dog-sitter \
    -o custom-columns="REVISION:.metadata.annotations.deployment\.kubernetes\.io/revision,CREATED:.metadata.creationTimestamp" \
    --sort-by ".metadata.annotations.deployment\.kubernetes\.io/revision"

Diff two revisions, with pretty formatting:

git diff \
    <(kubectl rollout history deploy/dog-sitter --revision 665) \
    <(kubectl rollout history deploy/dog-sitter --revision 666)
