Sec 5: Abs:Beginners : Kubernetes Concepts : Pods, ReplicaSets, Deployments


19. Pods with YAML

apiVersion : the version of the Kubernetes API used to create the object.

Values for apiVersion:

> Pod : v1
> Service : v1
> ReplicationController : v1
> ReplicaSet : apps/v1
> Deployment : apps/v1

kind : refers to the kind of object that we are creating (Pod, ReplicaSet, Deployment, Service, etc.)
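
Along with apiVersion and kind, every definition file also has metadata and spec sections. A minimal pod-definition.yml sketch (the names and labels here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: nginx-container
    image: nginx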


The command below generates the YAML definition for a Pod running the image you want:

kubectl run redis --image=redis --dry-run -o yaml

In the course lab, redis123 was a deliberately invalid image name; the dry run still generates the YAML. With a real image, such as redis, you can create the Pod as shown below.


Note: the bare --dry-run flag is deprecated; use --dry-run=client instead:


kubectl run redis --image=redis --dry-run=client -o yaml 

Redirect the output to write it to a file:

kubectl run redis --image=redis --dry-run=client -o yaml  > redis.yaml

Now create the Pod from the generated file:

kubectl create -f redis.yaml


Building a Replication Controller / Replica Set

Create the replication controller from its definition file:

> kubectl create -f rc-definition.yml

> kubectl get replicationcontroller

If you want to see the pods created by the replication controller:

> kubectl get pods

Replica Sets

A ReplicaSet is very similar to a replication controller; it is the newer, recommended way to manage replicas.
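
A sketch of a replicaset-definition.yml (names and labels are illustrative). The two main differences from a replication controller are the apiVersion (apps/v1) and the required selector with matchLabels:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx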


Scaling the number of replicas from 3 to 6:

Open the definition file, update the replicas field to 6, and then run:

kubectl replace -f replicaset-definition.yml

The other way to do this:

kubectl scale --replicas=6 -f replicaset-definition.yml

or 

kubectl scale --replicas=6 replicaset myapp-replicaset







You can otherwise use the below command to do that 

kubectl edit replicaset myapp-replicaset

This opens the running configuration in a text editor, in YAML format. It is a temporary file created by Kubernetes in memory to let us edit the object; changes made to this file are applied directly to the running object in the cluster as soon as the file is saved.



Another way to do this:

kubectl scale replicaset myapp-replicaset --replicas=2


26. Demo - ReplicaSets

kubectl explain replicaset

kubectl get rs

29. Deployments


Deployments come higher in the hierarchy: a Deployment manages ReplicaSets, which in turn manage Pods.

The contents of a deployment definition file are exactly similar to those of a replica set definition file, except for the kind:

kind: Deployment

The deployment automatically creates a ReplicaSet: if you run kubectl get replicaset, you will see a new ReplicaSet named after the deployment.

The ReplicaSet ultimately creates the Pods.
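
A sketch of a deployment-definition.yml (identical in structure to the ReplicaSet file shown earlier, apart from the kind; names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx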


To view all the created objects at once, run the below command:

kubectl get all


30. Demo - Deployment

33. Deployments - Update and Rollback

-- Rollout and Versioning

Let's understand what rollout and versioning mean in a deployment.

When you first create a deployment, it triggers a rollout. A new rollout creates a new deployment revision.

Let's call it Revision 1.

In the future, when the application is updated (i.e. the container version is updated to a new one), a new rollout is triggered and a new deployment revision is created (Revision 2).


This helps us keep track of the changes we have made to our deployment and enables us to roll back to the previous version of deployment if necessary.

Check the status of the Rollout 

kubectl rollout status deployment/myapp-deployment
kubectl rollout status deployment/<deployment_name>

See the revisions and history of rollouts 

kubectl rollout history deployment/myapp-deployment

There are two types of deployment strategies.

For instance, say you have 5 replicas of your application deployed.

-- One way to upgrade these is to destroy all of them and then create newer versions of the application instances: first destroy the 5 running instances and then deploy 5 new instances.

The disadvantage of this method is that during the period between destroying the old instances and bringing up the new ones, the application is down and inaccessible to users.

This strategy is called Recreate, and thankfully it is not the default deployment strategy.

Second strategy:

With the second strategy we do not destroy all the instances at once; instead we take down the older version and bring up the newer version one by one. This way the application never goes down and the upgrade is seamless.

Remember: if you do not specify a strategy in the deployment definition, it is assumed to be RollingUpdate.

RollingUpdate is the default deployment strategy.
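
The strategy is specified in the deployment's spec; a minimal sketch:

spec:
  strategy:
    type: RollingUpdate    # the default; the alternative is Recreate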


We talked about upgrades; how exactly do you update your deployment? When I say update, it could mean different things:

-- Such as updating your application version (the container image).

Once we make the changes to the deployment definition file, we can apply them with:

kubectl apply -f deployment-definition.yml

A new rollout is triggered and a new revision of the deployment is created.

But there is another way to do the same thing. 

You can use the kubectl set image command to update the image of your application.

But remember, doing it this way will result in the deployment definition file having a configuration that differs from what is actually running, so be careful when using the same file to make changes in the future.
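
For example (the deployment, container, and image names here are illustrative):

kubectl set image deployment/myapp-deployment nginx-container=nginx:1.9.1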

Let's see how a deployment performs upgrades under the hood.

When a new deployment is created, say to deploy 5 replicas, it first creates a ReplicaSet automatically, which in turn creates the number of Pods required to meet the number of replicas. When you upgrade your application, the Kubernetes deployment object creates a new ReplicaSet under the hood and starts deploying the containers there, while at the same time taking down the Pods in the old ReplicaSet, following the rolling update strategy.
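
After an upgrade you can see both ReplicaSets side by side; illustrative output (the hash suffixes will differ):

> kubectl get replicasets
NAME                          DESIRED   CURRENT   READY   AGE
myapp-deployment-67c749c58c   0         0         0       22m
myapp-deployment-7d57dbdb8d   5         5         5       20m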

Say, for instance, that once you upgrade your application you realize something isn't right: something is wrong with the new version of the build you used, so you would like to roll back the upgrade.

Kubernetes deployments allow you to roll back to the previous revision.

To undo a change, run the below command:

kubectl rollout undo deployment/myapp-deployment

The deployment will then destroy the Pods in the new ReplicaSet and bring the older ones back up in the old ReplicaSet.


34. Demo - Deployments - Update and Rollback

kubectl create -f deployment.yaml 


Section 6: Networking in Kubernetes

37. Basics of Networking in Kubernetes


Networking in Kubernetes:

There are networking solutions available for Kubernetes, such as Calico, Flannel, Cilium, or Weave Net, that implement the cluster network for you.

With any of these setups in place, each node has its own IP address, and every Pod is assigned a unique IP address on the Pod network.

With simple routing techniques, the networking solution enables communication between the different Pods and nodes in the cluster. This is how all the Pods can communicate with each other using their assigned IP addresses.

Section 7: Services

Kubernetes services enable communication between various components within and outside of the application. They help us connect applications together, or connect applications with users. For example, say our application has groups of Pods running various sections: a group serving the frontend to users, a group running backend processes, and a third group connecting to an external data source. It is services that enable connectivity between these groups of Pods.

Services enable the frontend application to be made available to end users, enable communication between backend and frontend Pods, and help establish connectivity to an external data source.

Thus services enable loose coupling between microservices in our application.



Let's look at some other aspects of networking in this lecture.

Let's start with external communication.

Say we deployed a Pod with a web application running on it. How do we, as an external user, access the webpage?

In the example below, the Kubernetes node has an IP address (192.168.1.2), and my laptop is on the same network. The internal Pod network has its own address range, and the Pod has its own IP address (10.244.0.2). Clearly, I cannot ping or access the Pod directly from my laptop, since the Pod network is internal to the node. So what are the options to access the webpage?

-- First, if we were to SSH into the Kubernetes node at 192.168.1.2, from inside the node we would be able to access the webpage with a curl to http://10.244.0.2, or, if the node has a GUI, we could fire up a web browser and view the page at that address. But this is from inside the Kubernetes node, and that's not what I really want.



I want to access the web server from my own laptop, without having to SSH into the node, simply by using the IP of the Kubernetes node.

So we need something in the middle to map requests from our laptop to the Pod through the node.


This is where the Kubernetes service comes into the picture. A Kubernetes service is an object, just like the Pods, ReplicaSets, or deployments that we worked with before. One of its use cases is to listen on a port on the node and forward requests on that port to a Pod running the web application. This type of service is called NodePort.

There are several kinds of services available, which we will now discuss:

1. NodePort:

The service listens on a port on the node and forwards requests on that port to the Pods.

2. ClusterIP:

In this case the service creates a virtual IP inside the cluster to enable communication between different tiers, such as from a set of frontend servers to a set of backend servers.

3. LoadBalancer:

The third type provisions a load balancer for our application on supported cloud providers. A good example would be distributing load across the different web servers in your frontend tier.


NodePort Service :

A service can help us by mapping a port on the node to a Port on the Pod

There are three ports involved:


-- The port on the Pod where the actual web service is running is port 80, and it is referred to as the targetPort, because that is where the service forwards requests to.
-- The second port is the one on the service itself; it is simply referred to as the port. Remember, these terms are from the viewpoint of the service. A service is in fact like a virtual server inside the node; within the cluster it has its own IP address, called the cluster IP of the service.
-- And finally, we have a port on the node itself, which we use to access the web server externally: the nodePort. In this example it is set to 30008. That is because node ports can only be in a valid range, which by default is 30000-32767.


Let's now see how to create the service (service-definition.yml).

Just as we created a deployment, ReplicaSet, or Pod in the past, we will use a definition file to create a service.

The type field in the file below refers to the type of service we are creating.
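
A sketch of service-definition.yml for a NodePort service (the name is illustrative; note that it has no selector yet, which we address below):

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30008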


If you do not provide a targetPort, it is assumed to be the same as the port value. And if you do not provide a nodePort, a free port in the valid range between 30000 and 32767 is automatically allocated.

Also note that ports is an array, so you can have multiple port mappings under a single service. So we have all this information in, but something is still missing:

there is nothing in the definition file that connects the service to the Pod. We have simply mentioned the targetPort, but not which Pods to target; there could be hundreds of other Pods running a web service on port 80.

We will use labels and selectors to link the two together. We know that the Pod was created with a label; we need to bring that label into this service definition file. So we have a new property in the spec section: selector.

Under selector, provide a list of labels to identify the Pod; for these, refer to the Pod definition file.

Pull the labels from the Pod definition file and place them under the selector section. This links the service to the Pod.
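
Assuming the Pod was created with labels like these (illustrative), the service's spec gains a selector section:

  selector:
    app: myapp
    type: front-end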

Once done, create the service using the kubectl command:

kubectl create -f service-definition.yml

kubectl get services 


To access the web server from outside, use curl with the IP address of the node and port 30008, which is the NodePort.
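
For example, using the node IP from earlier:

curl http://192.168.1.2:30008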

So far we spoke about a service mapped to a single Pod. But that is not the case all the time; what do you do when you have multiple Pods?

In a production environment you have multiple instances of your web application running, for high availability and load balancing purposes. In this case we have multiple similar Pods running our web application.

They all have the same label (app: myapp).

The same label is used as the selector when creating the service. So when the service is created, it looks for matching Pods with that label and finds three of them. The service then automatically selects all three Pods as endpoints to forward the external requests coming from users. You do not have to do any additional configuration to make this happen. Thus the service acts as a built-in load balancer to distribute load across the Pods.

And finally, let's see what happens when the Pods are distributed across multiple nodes. In this case we have the web application on Pods on separate nodes in the cluster.



When we create the service, without us having to do any additional configuration, Kubernetes automatically makes the service span all the nodes in the cluster and maps the target port to the same NodePort on all the nodes.

This way you can access the application using the IP of any node in the cluster with the same port number, which in this case is 30008.

40. Services - ClusterIP

In a full-stack application, you may have different sets of Pods running different parts of the application.



You may have a set of Pods running a frontend server, another set of Pods running a backend server, another set running a key-value store like Redis, and perhaps another set running a persistent database like MySQL.

The web frontend servers need to communicate with the backend servers, and the backend servers need to communicate with the database as well as the Redis service, and so on. So what is the best way to establish connectivity between these services or tiers of the application? The Pods all have an IP address assigned to them, but these IPs, as we know, are not static: Pods can go down at any time and new Pods are created all the time, so these IP addresses cannot be relied upon for internal communication between the application components.

Also, if one of the frontend Pods wants to connect to the backend service, which backend Pod would it connect to, and who makes that decision?

A Kubernetes service can help us group the Pods together and provide a single interface to access the Pods in the group.

For example, a service created for the backend Pods will group all the backend Pods together and provide a single interface for other Pods to access it; requests are forwarded to one of the Pods at random. Similarly, create additional services for the other tiers (such as Redis) so they too can be accessed through a service. This enables us to easily and effectively deploy a microservices-based application on a Kubernetes cluster. Each layer can scale or move as required without impacting communication between the various services.

Each service gets an IP and a name assigned to it inside the cluster, and that name is what other Pods should use to access the service. This type of service is known as ClusterIP.

To create such a service, use a definition file (service-definition.yml) as before.

The targetPort here is the port where the backend is exposed, in this case 80; port is where the service is exposed, which is 80 as well.

To link the service to a set of Pods, we use the selector: refer to the Pod definition file, copy its labels, and move them under the selector section.
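
A sketch of the resulting service-definition.yml (the name and labels are illustrative; ClusterIP is also the default type, so the type line may be omitted):

apiVersion: v1
kind: Service
metadata:
  name: back-end
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: back-end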


And that should be it. You can now create the service:

kubectl create -f service-definition.yml

and then check its status with kubectl get services.



41. Services - Load Balancer


One option is to create a new VM for load-balancing purposes, install and configure a suitable load balancer on it, such as HAProxy or NGINX, and then configure the load balancer to route traffic to the underlying nodes.

Now, managing and maintaining such a load balancer can be a tedious task. If I am on GCP or AWS, I can instead leverage the cloud provider's native load balancer. Kubernetes has support for integrating with the native load balancers of certain cloud providers and configuring that for us.

So all you need to do is set your service type to LoadBalancer
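
In the service definition only the type changes; a sketch:

spec:
  type: LoadBalancer
  ports:
  - targetPort: 80
    port: 80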


Note that this only works on supported cloud platforms (GCP, Azure, and AWS are definitely supported). If you set the service type to LoadBalancer in an unsupported environment such as VirtualBox, it will have the same effect as setting it to NodePort.

Section 8: Microservices Application

Let's see how we can put together the application stack on a single Docker engine using docker run commands. Let's assume all the application's images are already built and available on a Docker repository.

Let's start with the data layer.

First we run the Docker run command to start an instance of redis 

> docker run -d --name=redis redis 

-- We add the "-d" flag to run this container in the background, and we also name the container redis. Naming the containers is important -- why? Hold that thought; we will come to it in a bit.

Next we will deploy the postgreSQL database 

> docker run -d --name=db postgres:9.4

Next we will start with the application services. We will deploy the frontend app for the voting interface by running an instance of the voting-app image:

> docker run -d --name=vote -p 5000:80 voting-app

Since this is a web server with a UI listening on port 80, we publish it to port 5000 on the host system so that we can access it from a browser.

Next we will deploy the result web application, which shows the results to the user:

> docker run -d --name=result -p 5001:80 result-app

This way we can access the web UI of the result app in a browser.


Finally, we deploy the worker by running an instance of the worker image:

> docker run -d --name=worker worker

This is all good, and we can see all the instances running on the host. But there is a problem.


The problem is that, although we have successfully run all the containers, we haven't linked them together. We haven't told the voting application to use this particular Redis instance, and there could be multiple Redis instances running. We haven't told the worker and the result app to use the particular PostgreSQL database that we ran. So how do we do that? That's where we use links.

--link is a command-line option that allows us to link two containers together. For example, the voting-app web service depends on the Redis service: when the web server starts, it looks for a Redis service running on host redis.


But the voting-app container cannot resolve a host by the name redis. To make the voting-app aware of the Redis service, we add a link option when running the voting-app container, linking it to the Redis container:

adding the --link redis:redis option to the docker run command:

> docker run -d --name=vote -p 5000:80 --link redis:redis voting-app

The format is --link <container_name>:<host_name>, so redis:redis links the container named redis under the hostname redis.

This is the reason we named the containers when we ran them the first time: so that we can use their names while creating the links.

What this in fact does is create an entry in the /etc/hosts file of the voting-app container, adding the hostname redis with the internal IP of the Redis container.
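
The added entry looks roughly like this (the IP is whatever Docker assigned to the redis container, so this is illustrative):

172.17.0.2    redis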

Similarly, we add a link to allow the result-app to communicate with the database:

> docker run -d --name=result  -p 5001:80 --link db:db  result-app

Finally, the worker application, as per its code, needs access to both the Redis and the PostgreSQL databases:

> docker run -d --name=worker --link db:db --link redis:redis worker

Using links this way is deprecated, and support may be removed in the future by Docker.


Section 8: Microservices Application on Kubernetes

We just saw how the voting app works on Docker; now let's see how to deploy it on Kubernetes. It is important to have an idea of what we are planning to achieve and to plan accordingly before we get started. We already know how the application works, and it is a good idea to write down what we plan to do.

Our Goal is to deploy this application as containers.

1. Deploy containers on a kubernetes cluster
2. Enable connectivity between the containers so the applications can access each other and the databases.
3. Enable external access for the external-facing applications, which are the voting and result apps, so that users can access them through their web browsers.

We know that we cannot deploy containers directly on Kubernetes. We learned that the smallest object we can create on a Kubernetes cluster is a Pod. Therefore we must deploy these applications as Pods on our Kubernetes cluster, or we can deploy them as ReplicaSets or Deployments.

For simplicity we will stick to Pods in this lecture, and later we will see how to convert them into Deployments.

Once the Pods are deployed, we will establish connectivity between the services. So it is important to know what the connectivity requirements are: we must be really clear about which application requires access to which services.

We know the Redis database is accessed by the voting app and the worker app: the voting app writes votes to the Redis database, and the worker app reads the votes from it.
We know that the PostgreSQL database is accessed by the worker app, which updates it with the total count of votes, and is also accessed by the result app, which reads the total count of votes to display on the results web page.

We know that the voting app is accessed by the external users (the voters), and the result app is also accessed by external users to view the results. So most of the components are accessed by another component, except for the worker app.

Note that the worker app is not accessed by anyone: it simply reads votes from the Redis database and updates the totals in the PostgreSQL database.


Now, the voting app has a Python web server that listens on port 80, and the result app is a Node.js app that also listens on port 80. The Redis database listens on port 6379 and the PostgreSQL database listens on port 5432. The worker app does not have any service, as it is accessed by no one.


So how do you make one component accessible by another. 

The right way to do this is with a service. We learned that a service can be used to expose an application to other applications, or to users for external access. So we will create a service for the Redis Pod so that it can be accessed by the voting app and the worker app.


We will call it the redis service, and it will be accessible anywhere within the cluster by the name of the service, redis. Why is the service name important? The source code of the voting app and the worker app is hard-coded to point to a Redis database running on a host named redis, so it is important to name the service exactly that.

(Hard-coding details like this in source code is not a best practice; you should use environment variables instead, but we keep it simple here.) These services are not meant to be accessed from outside the cluster, so they should be of type ClusterIP.
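
A sketch of such a redis service (the selector labels must match whatever labels the Redis Pod was created with; the ones here are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    name: redis-pod
    app: demo-voting-app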


To be continued.... (going just by the videos, without documenting further).

46. Demo - Deploying Microservices Application on Kubernetes






Worker Pod below.

Now we will create the services.