Part 6 - Deployment & ReplicaSet in Kubernetes

Erhan Cetin
10 min read · Nov 20, 2020

Smooth Transition to Kubernetes

Before starting, please have a quick look at the sample project used in this post.

As we know from the previous post, a ReplicationController is used for replicating pods and rescheduling them when they fail. A similar resource called ReplicaSet, which is almost identical to a ReplicationController, was later introduced in Kubernetes. But, one minute, why do we need a ReplicaSet? I did some googling … It seems a ReplicaSet behaves exactly like a ReplicationController, but it has more expressive pod selectors and is therefore more useful. Here is what I mean about the pod selector:

As you can see in the labels in the YAML portion above, a ReplicationController supports only equality-based selectors like “=, ==, !=”. An RS (ReplicaSet), on the other hand, also supports set-based label selectors like “in, notin, exists”. You can think of it as: with an RC → app=newsproducer; with an RS → app in (’newsproducer’), env notin (’state’, ’develop’)
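A minimal, hedged sketch of an RS with a set-based selector could look like the following (the image name and label values are assumptions for illustration, not the exact YAML from the sample project):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: newsproducer-rs
spec:
  replicas: 3
  selector:
    matchExpressions:            # set-based selector, not available on an RC
      - key: app
        operator: In             # other operators: NotIn, Exists, DoesNotExist
        values:
          - newsproducer
  template:
    metadata:
      labels:
        app: newsproducer
    spec:
      containers:
        - name: newsproducer
          image: erhancetin/k8s-news-tracker-producer:latest  # assumed image name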

I’m not going to talk too much about an RS here, because I’ve already explained most of this for the RC here. Let’s take a quick look at a ReplicaSet resource.

$ kubectl apply -f https://raw.githubusercontent.com/ErhanCetin/k8s-smooth-transition/develop/k8s/medium-post/replicaset/news-tracker-job-rs.yaml  # create a ReplicaSet
$ kubectl get rs newsproducer-rs
$ kubectl get pods

An RC and an RS both manage pods by a label selector and ensure that a certain number of them are always up and running. The main difference between them is the selection method: equality-based versus set-based selectors.

Now I want to jump into another resource called the Deployment Controller, which wraps around and extends the ReplicaSet Controller. We’re going to manage the rollout process through a Deployment Controller.

What is a Deployment Controller?

Actually, we can also ask ourselves, “How do we update a running app in a Kubernetes cluster?” We have three options: a ReplicationController, a ReplicaSet, or a Deployment Controller. If you use a ReplicationController, you have to do it imperatively, by executing kubectl commands explicitly. A Deployment Controller introduces an easier, declarative way. The Deployment Controller extends the ReplicaSet Controller and is responsible for rolling out software updates: you create pods with a deployment resource and update them with new versions of your software. We can also roll out an app via a ReplicaSet, but a deployment resource gives us declarative application updates.

Let’s look at a sample deployment resource. It is similar to a ReplicaSet resource:
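The original post shows the YAML as an image; a minimal sketch of what newsapi-deployment.yaml roughly looks like is below (the labels, image name, and port are assumptions for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: newsapi-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: newsapi                 # label assumed for illustration
  template:
    metadata:
      labels:
        app: newsapi
    spec:
      containers:
        - name: newsapi
          image: erhancetin/k8s-news-tracker-api:latest   # image name assumed
          ports:
            - containerPort: 8080                         # port assumed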

$ kubectl delete rs newsproducer-rs  # clean up the ReplicaSet on your local machine if it exists.
$ kubectl apply -f https://raw.githubusercontent.com/ErhanCetin/k8s-smooth-transition/develop/k8s/medium-post/deployment/newsapi-deployment.yaml  # create a deployment
$ kubectl get pods  # check the pods that belong to the deployment.

A Pod created by a Deployment resource contains the deployment resource’s name in its own name, like newsapi-deployment-7f56599b76-dzdxx. You can tell that all three pods belong to the same deployment from the shared prefix → newsapi-deployment-7f56599b76

$ kubectl get deployment -o wide # check the deployment
$ kubectl get rs -o wide # When you create a deployment resource, the Deployment Controller automatically creates a new ReplicaSet :

You can think of how a pod is managed via the Deployment Controller as follows:

BTW, let me give you a little detail about pod naming. “newsapi-deployment-7f56599b76-dzdxx” is a combination of the deployment name + the ReplicaSet hash + a unique pod suffix.
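If you want to verify which ReplicaSet a pod belongs to, the Deployment Controller also stamps that hash onto a pod-template-hash label, so you can check it yourself:

$ kubectl get pods --show-labels   # each pod carries a pod-template-hash label matching the middle part of its name
$ kubectl get rs --show-labels     # the owning ReplicaSet carries the same hash in its name and labels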

Ways of updating pods :

As I’ve written above, we can roll out pods via the Deployment Controller. When you change the image within a deployment resource, a rolling update is initiated. A deployment rollout is an asynchronous process, and it naturally takes some time to complete. Before digging into how to roll out an app through a Deployment Controller, let’s look at the strategies for updating a pod.

Side Note: After a pod is created, it is not possible to change an existing pod's image. You have to remove old pods and replace them with new ones. There are two ways of doing that:

  1. Delete all existing pods first and then start the new ones. This makes your app unavailable for a short time.
  2. Start the new ones and, once they’re up, delete the old ones, either by adding all the new pods and then deleting all the old ones at once, or gradually, by adding new pods and removing old ones step by step. With this option, the app has to handle two versions running at the same time.

Updating pods through a Deployment Controller :

There are several strategies used to replace old pods with new pods in a deployment resource :

Recreate Deployment:

  • All existing pods are killed before new ones are created.

Rolling Update Deployment ( default ):

The Deployment Controller updates the Pods in a rolling fashion, which lets you perform a rolling update with zero downtime. There are two parameters in the YAML file that control the rolling update, “max unavailable” and “max surge”:

  • max unavailable: is an optional field that specifies the maximum number of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of the desired Pods immediately when the rolling update starts. Once new pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available at all times during the update is at least 70% of the desired Pods.
  • max surge: is an optional field that specifies the maximum number of Pods that can be created over the desired number of Pods. For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new Pods does not exceed 130% of the desired Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130% of the desired Pods (see the sketch after this list).
  • For details: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
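As a rough sketch, the 30% figures from the bullets above would map onto a Deployment spec like this (the replica count is only illustrative):

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 30%   # at most 3 of the 10 desired pods may be unavailable during the update
      maxSurge: 30%         # at most 13 pods (10 desired + 3 extra) may exist at any moment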

Time to play with a Deployment Controller to make a rolling update :

Scaling down :

$ kubectl apply -f  https://raw.githubusercontent.com/ErhanCetin/k8s-smooth-transition/develop/k8s/medium-post/deployment/newsconsumer-deployment-replica3.yaml
$ kubectl get pods

Change replicas to 2 and apply the YAML. One pod will automatically be killed.

$ kubectl apply -f https://raw.githubusercontent.com/ErhanCetin/k8s-smooth-transition/develop/k8s/medium-post/deployment/newsconsumer-deployment-replica2.yaml

Scaling up :

Just change the replica count from 2 to 3 and apply the YAML file. The Deployment controller will automatically create a new pod.

Rolling Update :

$  kubectl delete deployment newsconsumer-deployment # be sure you have no newsconsumer-deployment resource in your minikube.

A Deployment’s rollout is triggered if and only if the Deployment’s pod template (that is, .spec.template) is changed, for example, if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout. For our example, to initiate a rollout we need to change the image in the deployment YAML. I’ll use v1.1 and the latest consumer YAML.

Here is our modified YAML file :
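The YAML appears as an image in the original post; based on the details used later in this walkthrough (replicas=2, maxSurge=2, maxUnavailable=0, and the consumer image), a minimal sketch of it could look like this (the labels are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: newsconsumer-deployment
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2          # up to 2 extra pods may be created during a rollout
      maxUnavailable: 0    # no old pod is taken down before its replacement is ready
  selector:
    matchLabels:
      app: newsconsumer    # label assumed for illustration
  template:
    metadata:
      labels:
        app: newsconsumer
    spec:
      containers:
        - name: newsconsumer
          image: erhancetin/k8s-news-tracker-consumer:latest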

Rolling Update Steps :

$ kubectl apply -f https://raw.githubusercontent.com/ErhanCetin/k8s-smooth-transition/develop/k8s/medium-post/deployment/newconsumer-deployment-replica2.yaml --record  # create a deployment with the latest release
$ kubectl get pods
# check the history
$ kubectl rollout history deployment.v1.apps/newsconsumer-deployment

Now, we have the latest version with two pods and want to change replicas to 5 to proceed with our sample.

$ kubectl scale deployment.v1.apps/newsconsumer-deployment --replicas=5 --record
$ kubectl get pods

Three new pods are added ( no need for new ReplicaSet — just scaling up happened).

Now we can roll out with v1.1. There are 5 pods with the version latest (v1.0). According to “maxUnavailable: 0” in the YAML file above, when we deploy the v1.1 version, no old pod will be taken down before a replacement with the new version is ready, so the old pods are removed only as the new ones come up.

Change the image version to initiate a rolling out:

$ kubectl set image deployment/newsconsumer-deployment newsconsumer=erhancetin/k8s-news-tracker-consumer:1.1 --record
$ kubectl describe deployment newsconsumer-deployment # Let’s check the image of the pod to be sure we have v1.1.
$ kubectl rollout history deployment.v1.apps/newsconsumer-deployment # check the rollout history

Side Note: The “--record” flag writes the executed command into the resource annotation kubernetes.io/change-cause. The recorded change is useful for future introspection, for example, to see the command executed in each Deployment revision.
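For instance, you can read the recorded change-cause straight from the deployment’s annotations (a quick sketch, assuming the deployment from this walkthrough):

$ kubectl get deployment newsconsumer-deployment -o jsonpath='{.metadata.annotations.kubernetes\.io/change-cause}'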

What we’ve done so far :

  • Created a deployment with replicas=2 via the “kubectl apply …” command.
  • Changed replicas to 5 for scaling up via the “kubectl scale …” command.
  • Triggered a rollout for v1.1 via “kubectl set image …”.
  • Checked the history of the rollout via “kubectl rollout history …”.

Details of the rollout :

I want to give some details about the scaling up and rolling out steps. If you check the deployment YAML file again, you will remember it contains maxSurge=2 (at most 2 extra pods can be created at once) and maxUnavailable=0 (no pod can be unavailable, which means a zero-downtime rollout).

Let’s check the deployment events :

$ kubectl describe deployment newsconsumer-deployment

Events order :

  • 1. Scaling up from 2 to 5 replicas.
  • 2. A new ReplicaSet is created for v1.1 and 2 new pods are scaled up at once due to maxSurge=2.
  • 3. One old pod is scaled down from the old ReplicaSet.
  • 4., 5., 6., 7., 8. show the gradual scaling up and down. In step 9, the remaining replicas are scaled down from the old ReplicaSet.
  • Long story short: a new ReplicaSet is created and the Deployment moves the Pods from the old ReplicaSet to the new one at a controlled rate (see the illustrative events sketch below).
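Since the screenshot of those events doesn't survive here, the Events section of kubectl describe deployment looks roughly like the sketch below (ages and the new ReplicaSet hash are placeholders; 574c9dd545 is the old ReplicaSet referenced later in the post):

Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  ...   deployment-controller  Scaled up replica set newsconsumer-deployment-574c9dd545 to 5
  Normal  ScalingReplicaSet  ...   deployment-controller  Scaled up replica set newsconsumer-deployment-<new-hash> to 2
  Normal  ScalingReplicaSet  ...   deployment-controller  Scaled down replica set newsconsumer-deployment-574c9dd545 to 4
  Normal  ScalingReplicaSet  ...   deployment-controller  Scaled up replica set newsconsumer-deployment-<new-hash> to 3
  ...
  Normal  ScalingReplicaSet  ...   deployment-controller  Scaled down replica set newsconsumer-deployment-574c9dd545 to 0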

Rollback a Deployment :

It seems I need to touch a little bit on the rollback mechanism. When a Deployment is not stable, we can roll it back, but how?

$ kubectl rollout history deployment.v1.apps/newsconsumer-deployment # check the rollout history

According to the revision list above, the current version is v1.1. Let’s roll it back to revision 1 (the “latest” image). We have just 2 revisions, which is why we can roll back in either of two ways:

$ kubectl rollout undo deployment.v1.apps/newsconsumer-deployment ## just previous version.
$ kubectl rollout undo deployment.v1.apps/newsconsumer-deployment --to-revision=1 ## rollback to a specific revision.
# I’ll proceed with the second option:
$ kubectl rollout undo deployment.v1.apps/newsconsumer-deployment --to-revision=1
$ kubectl get pods

The newsconsumer-deployment-574c9dd545 belongs to revision 1 ( old ReplicaSet → check “Now we can roll out with v1.1” section in the post).

Some Useful Deployment Commands

$ kubectl get deploy
$ kubectl describe deployment newsconsumer-deployment
$ kubectl delete deployment newsproducer-deployment
$ kubectl get rs
$ kubectl delete rs newsproducer-rs
$ kubectl rollout undo deployment.v1.apps/newsconsumer-deployment --to-revision=2
$ kubectl rollout undo deployment.v1.apps/newsconsumer-deployment  # previous one
$ kubectl rollout history deployment.v1.apps/newsconsumer-deployment --revision=1
$ kubectl set image deployment/newsconsumer-deployment newsconsumer=erhancetin/k8s-news-tracker-consumer:1.1 --record
$ kubectl rollout status deployment.v1.apps/newsconsumer-deployment
