Architecture
Explaining the Architecture with an Example:
Requirement: Deploy a hello-world container image in the Kubernetes cluster. Assume that the container image is hosted on Docker Hub.
Step 1: I am the DevOps engineer, and I need to deploy the hello-world container in the Kubernetes cluster by running the command "kubectl create -f deployment.yaml". For now, you can assume that "deployment.yaml" is the instruction file that tells the cluster what to download and deploy into the Kubernetes cluster.
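As a reference, here is a minimal sketch of what such a "deployment.yaml" could look like. The names and labels are illustrative assumptions, not part of the original example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hello-world:latest   # image hosted on Docker Hub
```

You would then apply it with "kubectl create -f deployment.yaml" (or "kubectl apply -f deployment.yaml").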
Step 2: kubectl can be executed from the CLI or run through Jenkins or other CI/CD tools. As a first step, the "kubectl create" command must reach the Kubernetes cluster (https://iam7hillskubernetescluster:443) using a proper authentication mechanism (user credentials or an access token). The API server in the Kubernetes control plane performs this authentication. In general, all traffic to the Kubernetes cluster (control plane and worker nodes) goes through the API server.
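For context, kubectl finds the cluster endpoint and credentials in a kubeconfig file. A simplified sketch, reusing the example cluster URL above (the user, token, and context names are assumptions):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: iam7hills
    cluster:
      server: https://iam7hillskubernetescluster:443
users:
  - name: devops-engineer        # illustrative user entry
    user:
      token: <access-token>      # or client certificates / user credentials
contexts:
  - name: iam7hills-context
    context:
      cluster: iam7hills
      user: devops-engineer
current-context: iam7hills-context
```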
Step 3: The Cluster Store (backed by etcd in Kubernetes) stores the cluster configuration and state; it is basically the cluster database. You can have one or more control planes; let us assume that when you execute the kubectl command, one of your control planes is down. The request is then redirected to another control plane to honor it, which means the cluster store helps you achieve High Availability (HA).
Step 4: The Controller Manager is an essential component in the control plane; it compares the currently deployed state with the desired state (what you want to deploy as part of your "deployment.yaml").
For example, if the previously deployed version of your hello-world is 10 and the new version in your yaml is 11, it will make sure the current state matches the desired state and is set to 11. Only then is its job done. That's why I call it so critical. In addition, it also helps you perform autoscaling and self-healing, which I will explain in later sections.
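In manifest terms, bumping the version simply means changing the image tag in the pod template; the Controller Manager (through the Deployment controller) keeps reconciling until the running pods match it. A sketch with hypothetical tags 10 and 11:

```yaml
spec:
  template:
    spec:
      containers:
        - name: hello-world
          # desired state: version 11 (previously deployed version was 10)
          image: hello-world:11
```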
Step 5: The scheduler continuously monitors all the worker nodes to check whether they are healthy enough to take on new deployments and maintains a ranking among them. Based on that ranking, it assigns the work from the kubectl command to one of the worker nodes. In my example, worker node 1 is the healthy node that performs the deployment in this case.
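The scheduler normally picks the node for you, but you can influence its choice, for example with a nodeSelector in the pod template. A sketch, assuming worker node 1 carries a hypothetical hostname label:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-node1   # hypothetical node name
```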
Step 6: The agent (kubelet) on each worker node constantly checks with the API server to see if the scheduler has assigned any tasks to it. Once the worker node sees the job posted, it immediately picks up the task and starts working on the request.
Step 7: Worker node 1 then reads the "deployment.yaml" file and starts executing the instructions given in the yaml.
Step 8: In my case, the container image is stored on Docker Hub, from where the worker node has to download and deploy it. So it goes back to the API server and then connects to Docker Hub to download the image onto worker node 1.
Step 9: kube-proxy helps you perform load balancing and internal routing. If the hello-world container depends on other pods/applications running on worker nodes 2 and 3, kube-proxy handles this routing for you.
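kube-proxy does this routing based on Service objects. A minimal sketch of a Service in front of the hello-world pods (the name and ports are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc       # illustrative name
spec:
  selector:
    app: hello-world          # matches the pod labels in the Deployment
  ports:
    - port: 80                # port the Service exposes inside the cluster
      targetPort: 8080        # hypothetical container port
```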