How to use Kubernetes components

Feb 14, 2022
6 min read

As a software engineer, when you deploy an application, you make it accessible to its intended audience. A common way to do this is to containerize the application, but containers on their own come with challenges.

A container can become unavailable if traffic is high or if the server it runs on goes down. To stay available, you need to deploy the application across multiple containers so that if one becomes unavailable, others can step in to handle the traffic. However, managing multiple containers by hand is also challenging.

This is where Kubernetes can help. It’s an open-source platform for orchestrating the containers your application runs in. Originally developed at Google and now maintained by the Cloud Native Computing Foundation (CNCF), it is highly reliable and exposes extensible APIs for managing containers.

But how does Kubernetes work? In this article, you’ll learn about all the major components that make up a Kubernetes deployment and how they combine to form a system.

Getting started

A Kubernetes cluster is made up of a control plane, which runs on one or more master (control plane) nodes, and a set of worker nodes. Worker nodes are where your apps run, but they are managed by the master nodes, which act as the cluster’s central managers.

The fundamental goal of Kubernetes is to manage containers. However, containers are not managed independently and are instead managed as part of a bigger entity called a pod.

A pod is made up of one or more containers that share the same IP address, storage, and namespaces. In most cases, one container in a pod hosts the application, while any others assist the primary container.
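
As a rough illustration, here is a minimal pod sketch with a primary application container and a helper (sidecar) container sharing a volume; the names, images, and paths are illustrative, not taken from this article:

```yaml
# Sketch: one pod, two containers that share the pod's IP address, storage
# (the emptyDir volume), and namespaces. Names and images are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}                     # scratch storage shared by the containers
  containers:
    - name: app                        # primary container hosting the application
      image: nginx:1.25
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-shipper                # sidecar that assists the primary container
      image: busybox:1.36
      command: ["sh", "-c", "tail -n+1 -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
```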

Kubernetes architecture

Control plane

The control plane is responsible for managing the cluster. Its core components are the kube-apiserver, etcd, the kube-scheduler, the kube-controller-manager, and the cloud controller manager. It is often supplemented by solutions like cluster-level logging, cluster DNS, and resource monitoring, some of which are third party.

Here, you’ll look at the different components of the control plane nodes that are responsible for handling the overall process of container orchestration.

Kube-apiserver

The kube-apiserver is the central component in a Kubernetes cluster. It’s responsible for handling internal and external traffic.

The kube-apiserver is the only component that connects to the etcd database, and it acts as the front end for the cluster’s shared state. Every API call passes through it, where authentication, authorization, and admission control are applied before any change is persisted.
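
For example, the authorization step is often configured with RBAC objects that the kube-apiserver evaluates on every request. The sketch below assumes an illustrative role name, namespace, and user:

```yaml
# Minimal RBAC sketch: the kube-apiserver checks rules like these when it
# authorizes a request. All names below are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]          # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
  - kind: User
    name: jane               # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```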

Etcd database

etcd is the key-value database that stores cluster state, networking information, and other persistent data. When an update is needed, it doesn’t overwrite existing pairs; instead, it appends a new entry to the end and marks the previous entry for later removal. It works with most HTTP libraries and with curl. Every update you make to the database travels through the kube-apiserver.

One interesting thing to note is that if there are simultaneous requests to update the same value, they all go through the kube-apiserver to the etcd database. The first request is processed normally, but the second, which still references the previous version of the object, fails because the object has already been rewritten by the first request. In that case, the caller receives a 409 Conflict error and is expected to fetch the latest version of the object and retry the update.
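
This optimistic concurrency is surfaced through each object’s metadata.resourceVersion field. The following is a rough sketch of a conflicting update; the object and version numbers are illustrative:

```yaml
# Sketch of optimistic concurrency via resourceVersion (values illustrative).
# Client A and client B both read this object at resourceVersion "1234".
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: default
  resourceVersion: "1234"
data:
  logLevel: info

# Client A's update succeeds and the object moves to a newer resourceVersion.
# Client B's update, still carrying resourceVersion "1234", is rejected by the
# kube-apiserver with 409 Conflict; client B must re-read the object and retry.
```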

Kubernetes scheduler

The kube-scheduler determines which worker node will host a pod, using different scheduling algorithms. It first checks the availability of resources on the nodes and, based on what the pod requests, assigns it to a node that can accommodate it.

Every node that meets a pod’s scheduling requirements is referred to as a feasible node. If no node is suitable for the pod, it remains unscheduled until a suitable node becomes available.

Scheduling decisions are based on resource availability and other factors like hardware, software, workload, and policy constraints. Together, these factors determine whether a node is suitable for a given pod. By making these placement decisions quickly, the kube-scheduler keeps workloads running in a timely way across the whole cluster.
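
Resource requests and node selectors are two common examples of the constraints the scheduler weighs when deciding whether a node is feasible. A minimal sketch, with illustrative image and label values:

```yaml
# Sketch: constraints the kube-scheduler considers when picking a node.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: nginx:1.25          # illustrative image
      resources:
        requests:
          cpu: "250m"            # a feasible node must have this much unreserved CPU
          memory: "128Mi"        # and this much memory available
  nodeSelector:
    disktype: ssd                # only nodes labeled disktype=ssd are feasible
```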

Kube-controller-manager

The kube-controller-manager runs the cluster’s various control loops. It constantly queries the kube-apiserver to determine the current state of the cluster, and whenever that state doesn’t match the desired state, the relevant controller takes action to reconcile the two.

The controllers that make up the kube-controller-manager include the node controller, endpoints controller, replication controller, and service account and token controllers.
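
A Deployment is an easy way to see this reconciliation in action: the controller keeps the number of running pods equal to the declared replica count, recreating any pod that disappears. A minimal sketch with an illustrative image:

```yaml
# Sketch: the Deployment/ReplicaSet controllers keep the observed pod count
# equal to spec.replicas; if a pod disappears, a replacement is created.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # desired state: three running pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: nginx:1.25      # illustrative image
```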

Cloud controller manager

The cloud controller manager is similar to the kube-controller-manager, except that it interacts with cloud-specific APIs. As Kubernetes has advanced, it has also taken over some controller tasks that were previously handled by the kube-controller-manager.

This separation distinguishes the components that interface with the cloud platform from those that only interact with your cluster. For more information on the detailed functions of the cloud controller manager, you can visit the Kubernetes docs.
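
One common place you can see it at work is a Service of type LoadBalancer: the service controller inside the cloud controller manager asks the cloud provider to provision a load balancer for it. A minimal sketch with illustrative names and ports:

```yaml
# Sketch: for type LoadBalancer, the cloud controller manager's service
# controller provisions a cloud load balancer that forwards to the pods.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80          # external port on the cloud load balancer
      targetPort: 8080  # container port on the selected pods
```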

Worker node

The worker nodes are where the actual application is deployed. To run your containerized application, you package its containers into pods that are scheduled onto different nodes. A node is simply a physical or virtual machine, depending on the type of cluster in your Kubernetes environment.

As mentioned above, each node is managed through the kube-apiserver, and pod placement happens via the kube-scheduler, which means every node is under the control of the control plane. A cluster typically contains multiple worker nodes, and each of them runs two node-level components: the kubelet and kube-proxy. Let’s take a more detailed look at both.

Kubelet

The kubelet is the agent on every worker node that makes sure the containers that are supposed to run are actually running. It receives pod specifications through the API server and configures the worker node until those specifications are met.

Importantly, the kubelet does not manage any container that was not created by Kubernetes.
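
Health checking is a concrete example of this responsibility: the kubelet runs the probes defined in a pod spec and restarts containers that keep failing them. A minimal sketch, with an illustrative image and health endpoint:

```yaml
# Sketch: the kubelet runs this liveness probe on the node and restarts
# the container if the probe keeps failing.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: nginx:1.25            # illustrative image
      livenessProbe:
        httpGet:
          path: /healthz           # illustrative health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```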

Kube-proxy

Kube-proxy is responsible for the networking requirements of the node, typically by maintaining iptables rules. It enables communication to your workloads from within and outside the cluster.

Kube-proxy uses an operating system packet filtering layer if one exists and is currently available. Otherwise, kube-proxy itself forwards the traffic.
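
For instance, when you create an ordinary ClusterIP Service, kube-proxy on every node programs the packet-filtering rules that map the service’s virtual IP and port to the backing pods. A minimal sketch with illustrative names and ports:

```yaml
# Sketch: kube-proxy on each node translates traffic sent to this Service's
# cluster IP and port into connections to the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80          # virtual service port
      targetPort: 8080  # port the pods actually listen on
```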

Add-ons

Kubernetes add-ons are extra resources that implement cluster-level features. They aren’t core components, but they help developers run and manage clusters more smoothly.

Add-ons integrate easily with your Kubernetes architecture to help you get the most out of it. Below are a few of the add-ons you can use in Kubernetes:

Cluster DNS

Cluster DNS is a DNS server that serves DNS records for Kubernetes services. Although it’s an add-on, every cluster should include it because it provides a lightweight mechanism for service discovery.

Containers started by Kubernetes automatically include this DNS server in their DNS searches.
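
In practice, this means a pod can reach a Service by name rather than by IP address; cluster DNS resolves names of the form <service>.<namespace>.svc.cluster.local. A minimal sketch, assuming an illustrative Service called web in the default namespace:

```yaml
# Sketch: this pod reaches the "web" Service through cluster DNS instead of
# hard-coding an IP address. Image and Service name are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
    - name: curl
      image: curlimages/curl:8.5.0
      # Cluster DNS resolves web.default.svc.cluster.local to the Service's IP.
      args: ["-s", "http://web.default.svc.cluster.local"]
```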

Web UI (dashboard)

The web UI (Dashboard) is a web-based interface for a Kubernetes cluster that helps users manage and troubleshoot the cluster and the applications running on it.

Container resource monitoring

Container resource monitoring maintains a single collection of generic time-series metrics about containers and provides a user interface for browsing that data.

Most engineering teams use additional tooling for long-term monitoring of Kubernetes resources and metrics.

Cluster-level logging

Cluster-level logging saves container logs to a central log store and provides an interface to search and browse them, whether you’re tracking down an error in a containerized application or monitoring how the cluster is behaving.

Many teams take it a step further with alerting and by correlating logs with other metrics and Kubernetes events.

Final thoughts

Once you’ve created an application, you need to deploy it and make it accessible to your intended audience. There are several options for this, but the most popular choice is to run your containerized application on a managed Kubernetes service from a major cloud provider like AWS (EKS), Azure (AKS), or Google Cloud (GKE). When you choose to containerize your application, Kubernetes becomes vital tooling due to its ability to handle applications of nearly any scale.

In this article, you learned about the different components that make up Kubernetes. As Kubernetes becomes an increasingly prominent tool, learning how it works is vital to your success and the success of your organization. While it may be more than a small-scale application needs, for most large-scale applications it is both cost-effective and efficient, and well worth learning.

If you're looking to build an internal dashboard to make monitoring your Kubernetes clusters easy and efficient, try out Airplane. With Airplane, you can transform queries, scripts, APIs, and more into custom workflows and UIs. Airplane Views is a React-based platform that makes it easy to build complex UIs. Get started by using Airplane's pre-built component library and template libraries.

Sign up for a free account or book a demo to test it out and build your first internal dashboard.

Gourav Singh Bais
Gourav is a Data Scientist at Allianz. He is skilled in developing machine learning and deep learning pipelines, building retraining systems, and turning data science prototypes into production solutions.
