Part 1 - Kubernetes Architecture

Erhan Cetin
5 min read · Nov 15, 2020

Smooth Transition to Kubernetes

I hesitated to write another post on Kubernetes since there are already tons of great Kubernetes posts on the internet. But I plan to publish a series of posts, specifically for developers, about Kubernetes, microservices, and cloud-native applications. Hence, this is where it all starts: Kubernetes components will be the first part of this long journey.

My post series will walk you through a real-world application that I've prepared for this article and the ones that follow. With the sample application below, you will be able to learn some of the core Kubernetes components.

News Tracker App Architecture

Please visit the News Tracker Application and have a look at the readme in the project; it will make the rest of this series easier to follow. Keep in mind that my goal is simply to give you a smooth transition to Kubernetes by explaining the concepts through this sample app. Before starting, I want to give a big thanks to Marko Luksa, the author of the Kubernetes in Action book. I've been inspired by his book, and it is well worth spending time on. Let's get started with the traditional introduction: what is Kubernetes?

What is Kubernetes?

Kubernetes, originally developed by Google, written in Go, and later donated to the CNCF, is an open-source orchestrator for deploying containerized applications. It was introduced in 2014 and has become popular for building cloud-native applications. There are tons of definitions of Kubernetes around the internet, but my favorite comes from the Kubernetes in Action book:

Kubernetes abstracts away the hardware infrastructure and exposes your whole datacenter as a single enormous computational resource. It allows you to deploy and run your software components without having to know about the actual servers underneath. When deploying a multi-component application through Kubernetes, it selects a server for each component, deploys it, and enables it to easily find and communicate with all the other components of your application.
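To make that "single enormous computational resource" idea concrete, here is a minimal sketch using the official Kubernetes Python client (this is not part of the News Tracker app, and it assumes a working kubeconfig at the default location): one API call lists every node the cluster can schedule your components onto, without you having to care which physical server is which.

```python
# Minimal sketch: ask the cluster what compute it has, via the Kubernetes API.
# Assumes the official "kubernetes" Python client and a working ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()          # same credentials kubectl uses
v1 = client.CoreV1Api()

for node in v1.list_node().items:  # every node the scheduler can place pods on
    alloc = node.status.allocatable
    print(f"{node.metadata.name}: cpu={alloc['cpu']}, memory={alloc['memory']}")
```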

As already mentioned, I just want you to have a smooth transition to Kubernetes. For that, I've prepared a series on the Kubernetes components. In this first part, we'll take a look at the core components. Let's get started.

Kubernetes Core Components:

Kubernetes Core Component Architecture

1. Master/Control Plane Node

The master node is the primary control plane; it manages all administrative tasks such as scheduling, deploying containers, and handling users' interactions with the cluster. A cluster can have more than one master node for fault tolerance. Usually, all the master components are set up on a single host, but high-availability scenarios require redundant masters. The control plane consists of several components:

  • API Server: The Kubernetes API Server exposes the REST API used to perform all administrative tasks on the cluster. It sits at the center of cluster management and receives REST requests from clients such as kubectl. The API Server is also the only entry point for writing cluster configuration to the etcd cluster to make it persistent (a small example of talking to the API Server follows this list).
  • etcd: Kubernetes stores all cluster configuration data in etcd, an open-source distributed key-value store developed by CoreOS that is accessible to all nodes in the cluster. Only the API Server writes to etcd directly; none of the other components can. etcd is responsible for storing and replicating this data. For details: etcd.
  • Controller Manager: Also known as the controllers, it regulates the Kubernetes cluster by running various non-terminating control loops that watch the desired state of Kubernetes objects (deployments, replicas, nodes, and so on) through the API Server. It contains controllers such as the Replication, Deployment, Node, Namespace, and Endpoints controllers. It keeps track of worker nodes, handles node failures, replicates components, and so on.
  • Kube-Scheduler: This component schedules pods onto worker nodes; based on resource usage, it decides where each workload should run. It always keeps track of node capacity and resources through the API Server, which reads that state from etcd.
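As a small illustration of the API Server's role as the single front door (again a sketch using the official Python client with a working kubeconfig, not code from the News Tracker app), the snippet below watches pod events in one namespace. This is the same watch pattern the controllers and the kubelet rely on: they never touch etcd directly, they observe and update state through the API Server.

```python
# Sketch: watch pod events through the API Server, the way controllers do.
# Assumes the official "kubernetes" Python client and a working kubeconfig.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

w = watch.Watch()
# Stream ADDED / MODIFIED / DELETED events for pods in the "default" namespace.
for event in w.stream(v1.list_namespaced_pod, namespace="default",
                      timeout_seconds=60):
    pod = event["object"]
    print(f"{event['type']}: {pod.metadata.name} -> {pod.status.phase}")
```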

2. Kubernetes Worker Nodes

A worker node contains everything needed to run containers: it manages the networking between containers, communicates with the master node, and assigns resources to containers as scheduled. Its components are:

  • Kubelet: It runs as an agent on each worker node and manages the state of that node, ensuring that all containers on the node are running and healthy. To do that, it receives pod assignments from the API Server (placed there by the Scheduler) and reports performance and health information about the node, pods, and containers back to the control plane through the API Server. The kubelet monitors the state of each pod; if a node fails, a controller such as the replication controller observes the change and reschedules the affected pods onto another healthy node. It reports node activity, such as pod and node health and liveness-probe results, to the API Server.
  • Kube-proxy: It is a network proxy and load balancer that runs on each worker node and listens to the API Server for Service endpoint creation and deletion. It manages network routing of TCP and UDP packets so that Services remain reachable, both inside the cluster and by external clients.
  • Pods: A pod is the smallest deployable unit in Kubernetes and consists of one or more containers. The containers in a pod are always co-located and managed as a single unit by the kubelet. Different pods of the same application can run on different nodes, so an application's containers can span more than one machine. All containers in a pod share the same IP address and port space; they can communicate over localhost or via standard inter-process communication (see the pod sketch after this list).
  • Container Runtime: All applications (databases, API servers, queue systems, etc.) run in containers placed inside one or more pods, and need an external IP address to be accessible from outside the cluster. The container runtime on each worker node manages the containers' lifecycle. It can be Docker, rkt, CRI-O, Frakti, or any other runtime that runs containers.
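To tie the worker-node pieces together, here is a hedged sketch (the pod name and images are illustrative, not taken from the News Tracker app) that creates a two-container pod with the official Python client. Because both containers share the pod's IP address and port space, the sidecar can reach the web server at localhost:80. The API Server persists the object in etcd, the scheduler picks a node, and that node's kubelet asks the container runtime to start the containers.

```python
# Sketch: create a two-container pod; both containers share one IP and port space.
# Names/images are illustrative. Assumes the official Python client + kubeconfig.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="news-tracker-demo", labels={"app": "demo"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(
            name="web", image="nginx:1.19",
            ports=[client.V1ContainerPort(container_port=80)]),
        client.V1Container(
            name="sidecar", image="busybox:1.32",
            # The sidecar reaches the web container over localhost, not a pod IP.
            command=["sh", "-c",
                     "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]),
    ]),
)

v1.create_namespaced_pod(namespace="default", body=pod)
```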

For more details about Kubernetes concepts, you can visit the link.
