Sec : 5 : Abs:Beginners : Kubernetes Pods, ReplicaSets, Deployments

 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Section : 6 : Kubernetes Concepts : PODs, ReplicaSets, Deployments

Sections :

  • PODs with YAML
  • Demo PODs with YAML
  • Tips and Tricks developing Kubernetes Manifest files with Visual Studio Code
  • Hands on Lab familiarizing with the Lab environment
  • Hands On Labs
  • Solutions : Pods with YAML Lab
  • Replication Controller and Replica Sets

 

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

 

 22. PODs with YAML


apiVersion: the version of the Kubernetes API to use. A few possible values for this key are v1 and apps/v1, depending on the kind of object.

The kind refers to the type of object that we are creating, for example Pod.

Once you create the pod, how do you see it?

$ kubectl get pods 

To see detailed information about the pod:

$ kubectl describe pod myapp-pod

We are going to create a pod using a YAML file. Our goal is to create a YAML file with the Pod specification in it.
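A minimal pod-definition.yaml might look like the sketch below. The pod name, labels, and container name are placeholders chosen to match the myapp-pod / nginx examples used later in these notes:

```yaml
# pod-definition.yaml -- a minimal sketch; names and labels are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: nginx-container
      image: nginx
```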


You can use either kubectl create or kubectl apply to create the pod from the file:

$ kubectl create -f pod.yaml

$ kubectl apply -f pod.yaml

$ kubectl get pods

$ kubectl describe pod nginx


24 : Tips & Tricks -- Developing Kubernetes Manifest files with Visual Studio Code.

kubectl run nginx --image=nginx

27 : Solution : Pods with YAML Lab

 Find out which node the pod is hosted on:

 kubectl get pods -o wide

 How many containers are part of the Pod webapp?

What images are used in the new webapp pod?

nginx and agentx


What does the READY column of kubectl get pods indicate?

1/2 -- this means there are two containers in this pod, of which one is ready and one is not.

> Delete the webapp pod

kubectl delete pod webapp

> Create a new Pod with the name redis and the image redis123

kubectl run redis --image=redis123 --dry-run -o yaml

-o yaml --> output in YAML format

 The --dry-run flag is deprecated; it is replaced with --dry-run=client.

kubectl run redis --image=redis123 --dry-run=client -o yaml    

kubectl run redis --image=redis123 --dry-run=client -o yaml > redis.yaml
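The generated redis.yaml will look roughly like the sketch below (kubectl also emits a few defaults such as resources: {} and restartPolicy; note the intentionally wrong image name that the lab asks for):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: redis
  name: redis
spec:
  containers:
  - name: redis
    image: redis123    # intentionally wrong image name; correct it to "redis" later
    resources: {}
  restartPolicy: Always
```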



  Now we can create a pod using the file that we generated with the dry run.

 kubectl create -f redis.yaml

 If you now run kubectl get pods, you will see the status ErrImagePull. This is because there is no image named "redis123"; we purposely gave the pod a wrong image name.

You can fix it using kubectl edit, or by opening up the redis.yaml file and correcting the image name.

Now we are going to apply the changes

kubectl apply -f redis.yaml


28 : Replication Controllers and ReplicaSets

Controllers are the brain behind Kubernetes; they are the processes that monitor Kubernetes objects and respond accordingly. In this lecture we will discuss one controller in particular: the Replication Controller.

What is a replica, and why do we need a Replication Controller?

Instead of having a single pod hosting an application, we create multiple replicas. To prevent users from losing access to our application, we would like to have more than one instance of the Pod running at the same time. That way, even if one fails, we still have our application running on the other one.

The Replication Controller helps us run multiple instances of a single Pod,

thus ensuring High Availability. Does that mean you cannot have a Replication Controller if you are running your application in a single Pod?

Nope: even if you have a single Pod, the Replication Controller can help by automatically bringing up a new Pod when the existing one fails. Thus the Replication Controller ensures that the specified number of Pods is running at all times, even if it is just one or one hundred.

Another reason we need Replication Controllers is to create multiple Pods to share the load. In the example below, as the number of users increases, we deploy additional Pods to balance the load across them.

If the demand increases further and we were to run out of resources on the first node, we could deploy additional Pods across the other nodes in the cluster.

As you can see, the Replication Controller spans multiple nodes in the cluster. It helps us balance the load across multiple Pods on different Nodes, as well as scale our application when the demand increases.

It is important to know that there are two similar terms 

1. Replication Controller 

2. Replica Sets

Both have the same purpose but they are not the same.

Replication Controller is an older technology that is being replaced by Replica Set

Replica Set is the new recommended way to set up replication.

Now let's see how we create a Replication Controller.

The apiVersion is specific to what we are creating.

In this case, the Replication Controller is supported in API version v1.


So far it has been the same as what we created in the previous lectures.

The next section is the most crucial part of the definition file: the specification, written as spec:. For any Kubernetes definition file, the spec section defines what is inside the object we are creating. In this case we know that the Replication Controller creates multiple instances of a Pod. But which Pod?

We create a template section under spec to provide a POD template to be used by the Replication Controller to create the replicas.

Now, how do we define the POD template? It is not that hard, because we have already done it: remember, we created a POD definition file in the previous exercise.


We can reuse the contents of that file to populate the template section. Move all the contents of the Pod definition file into the template section,

except for the first few lines: apiVersion and kind.

Remember, whatever we move must be under the template section, meaning it must be indented with more spaces than the template line itself. These lines should be children of the template section.

Looking at our file now, we have two metadata sections: one for the Replication Controller and one for the Pod. And we have two spec sections, one for each. We have nested two definition files, one being the Replication Controller and the other the Pod definition file.

The Replication Controller is the parent and the Pod definition is the child. There is still something missing: we haven't said how many replicas we need in the Replication Controller. For that, add another property to the spec called "replicas:"


and input the number of replicas you need under it. Remember, template and replicas are direct children of the spec section.
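Putting the pieces together, the rc-definition.yml might look like this sketch (the names, labels, and image are placeholders consistent with the earlier pod example):

```yaml
# rc-definition.yml -- a sketch; replicas and template are direct children of spec
apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
spec:
  replicas: 3
  template:            # the Pod definition, minus its apiVersion and kind
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx-container
          image: nginx
```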


Once the file is ready, use kubectl create -f <yaml-file>

The replication controller is created.

To check the replicas created by the replication controller:

>> kubectl get replicationcontroller

Replica Set

Now let's see the ReplicaSet. It is very similar to the Replication Controller.

apiVersion: apps/v1    -- this is different for a ReplicaSet; it is not the plain v1 used by the Replication Controller.

If you get this wrong, you are likely to get an error saying there are no matches for kind "ReplicaSet" in the version you specified.



However, there is one major difference between the ReplicaSet and the Replication Controller:

a ReplicaSet requires a "Selector Definition".

The selector section helps a ReplicaSet identify which Pods fall under it.

But why would you have to specify which Pods fall under it, if you have already provided the Pod definition itself in the template? That's because a ReplicaSet can also manage Pods that were not created as part of the ReplicaSet.

For instance, if there were Pods created before the ReplicaSet that match the labels specified in the selector, the ReplicaSet will take those Pods into consideration as well.

The selector is one of the major differences between a Replication Controller and a ReplicaSet.

The selector field is not a required field for a Replication Controller. It is still available, but when you skip it, it is assumed to be the same as the labels specified in the Pod definition file. In the case of a ReplicaSet, user input is required for this property, and it has to be written in the form of matchLabels.


The matchLabels specified under selector simply match the labels listed under it with the labels of the Pod. The ReplicaSet also provides many other options for matching labels that were not available in the Replication Controller.
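A replicaset-definition.yml sketch showing the required selector with matchLabels (names, labels, and image are illustrative placeholders):

```yaml
# replicaset-definition.yml -- a sketch; note apiVersion apps/v1 and the selector
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp        # must match the labels in the Pod template below
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx-container
          image: nginx
```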

And to create a Replica Set 

>>> kubectl create -f replicasets-definition.yml

To get the replica set info

>>> kubectl get replicaset

 To get pods info

>>> kubectl get pods

Labels & Selectors

Why do we label our Pods and objects in Kubernetes?

example :

Let's say we deployed three instances of our webapp application as three Pods. We would like to create a Replication Controller or ReplicaSet to make sure we have three Pods at all times. That is one of the use cases of ReplicaSets: you can use one to monitor existing Pods, if you have them already created as in this example. In case they were not created, the ReplicaSet will create them for you.

The role of the ReplicaSet is to monitor the PODs and, if any of them fail, deploy new ones. A ReplicaSet is a process that monitors the PODs. Now, how does the ReplicaSet know which Pods to monitor? There could be hundreds of other Pods running different applications.

This is where Labeling our Pods during creation comes in handy.

Now we can provide these labels as filter to Replicasets
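For instance, the filter is just the same label key/value pair appearing in both places (the tier: front-end label here is an illustrative example):

```yaml
# In the Pod definition:
metadata:
  labels:
    tier: front-end
---
# In the ReplicaSet definition, the selector filters on that same label:
selector:
  matchLabels:
    tier: front-end
```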



This way the ReplicaSet knows which Pods to monitor. The same concept of labels and selectors is used throughout Kubernetes.

How do we scale the ReplicaSet?

kubectl scale rs new-replica-set --replicas=5 

another way

kubectl edit rs new-replica-set  // the file opens up for editing.

Say we started with 3 replicas and later decided to scale to 6. How do we update our ReplicaSet to 6 replicas? Well, there are multiple ways to do it.

1. First, update the number of replicas to 6 in the replicaset-definition.yml file.

2. Then use the command:

    >> kubectl replace -f replicaset-definition.yml

 >> kubectl get rs    -- rs is the short form for replicaset.

The second method is the scale command:

kubectl scale --replicas=6 -f replicaset-definition.yml

 

However, remember that using the file name as input will not result in the number of replicas being updated in the file itself:

even though you have used the scale command, the replica count will remain 3 in the definition file.

 kubectl scale --replicas=6 -f replicaset-definition.yml

 





you can use 

>>> kubectl describe replicaset myapp-replicaset  

to see more detailed output.

Let's say we need to increase the replicas in our ReplicaSet; we can do so with

>>> kubectl edit replicaset myapp-replicaset


29. Deployments

Say, for example, you have a web server that you need to deploy in a production environment. Whenever a newer build of the image becomes available in the Docker registry, you want to upgrade your Docker instances seamlessly. However, when you upgrade your instances, you do not want to upgrade all of them at once.

You might want to upgrade them one after the other. That kind of upgrade is known as a rolling update.

How do we create a Deployment?
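A deployment-definition.yml sketch is below. The contents of the spec are the same as for a ReplicaSet; only the kind changes (names, labels, and image are placeholders):

```yaml
# deployment-definition.yml -- a sketch; identical in shape to a ReplicaSet
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx-container
          image: nginx
```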

 


$ kubectl create -f deployment-definition.yml

$ kubectl get deployment

This deployment automatically creates a ReplicaSet, which you can see with kubectl get replicaset.

The ReplicaSet ultimately creates the Pods, which you can see with kubectl get pods.

To see all the created objects at once:

$ kubectl get all

30. Demo : Deployment




$ kubectl get all   -- this command shows all the objects created in the cluster


32. Solution : Deployment

33. Deployments update and roll back :

First, let's understand rollouts and versioning in our deployments.

When you first create a deployment, it triggers a rollout. A new rollout creates a new deployment revision; let's call it Revision 1.

When the application is updated, meaning the container version is updated to a new one, a new rollout is triggered and a new deployment revision is created, named Revision 2. This helps us keep track of the changes made to the deployment and enables us to roll back to a previous version of the deployment if necessary.

You can check the status of the rollout:

$ kubectl rollout status deployment/myapp-deployment

And to see the revisions and history of the rollout:

$ kubectl rollout history deployment/myapp-deployment


There are two deployment strategies.

Recreate Strategy: destroy all the existing Pods first, then bring up the newer versions. The problem is that the application is down between the two steps.

Rolling Update: take down the older version and bring up the newer version one Pod at a time. Rolling update is the default deployment strategy.
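The strategy is configured under the deployment's spec; a sketch (the maxUnavailable/maxSurge values here are illustrative):

```yaml
spec:
  strategy:
    type: RollingUpdate     # the default; use "Recreate" to replace all Pods at once
    rollingUpdate:
      maxUnavailable: 1     # illustrative: at most one Pod down during the update
      maxSurge: 1           # illustrative: at most one extra Pod above the desired count
```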



How exactly do you update your deployments?

Once you make any changes or updates to the deployment-definition.yml file, you can apply the changes with

kubectl apply -f deployment-definition.yml

But there is another way to do the same thing

you can use the kubectl set command to set the image of your application. 

 kubectl set image deployment/myapp-deployment nginx=nginx:1.9.1 

But doing it this way will result in the live deployment having a different configuration from your definition file, so you should be careful when making changes in the future.



34 . Demo -- Deployments , Update and roll back :

kubectl rollout status deployment.apps/myapp-deployment


lets delete this deployment

$ kubectl delete deployment myapp-deployment

I am going to create the deployment and run the status immediately after that

 

$ kubectl create -f deployment-definition.yml

kubectl rollout status deployment.apps/myapp-deployment

As soon as you run the status command, you will see the status of each replica as it is being brought up.

The history command:

kubectl rollout history deployment.apps/myapp-deployment


Note the CHANGE-CAUSE column.

You will notice that there is no change cause specified. That is because we didn't specifically ask Kubernetes to record the change. Let's go back and fix that.

we will delete the deployment again.

$ kubectl delete deployment myapp-deployment

Check if the pods are getting terminated; wait for the pods to terminate.

$ kubectl get pods

 



