Kubernetes deployments: a complete guide

Jun 20, 2022
7 min read

Kubernetes is an open-source platform that allows you to automate the deployment and management of containerized applications. Containerization involves packaging an application, its configuration, and dependencies into an isolated unit, so it can be easily deployed and executed across different environments.

In Kubernetes, a cluster is the unit of a deployed, running installation. A cluster contains a control plane and a set of worker nodes. On the worker nodes, containerized applications are grouped into pods, the smallest deployable units in a Kubernetes cluster. A node runs one or more pods, each holding one or more containers, along with the resources those containers need to interact with other Kubernetes subcomponents.

The control plane has components that watch, schedule, and manage the pods in the worker nodes. The communication between the control plane and worker nodes happens through the Kubernetes API server, which is implemented in the kube-apiserver, a component of the control plane.

The Kubernetes architecture is designed to watch the state of the deployed containers and manage them in a way that maintains the desired state. The controller manager, which is part of the control plane, consists of control loops that watch the shared state of the cluster components, and make changes when necessary to return to the desired state.

Sometimes, a pod may fail or become inaccessible, perhaps due to an error in resource configuration or an incorrect container image. One way to manage this is to use a ReplicaSet to run replicas of your pod instances in order to maintain a specific number of running pods.
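
To make this concrete, here’s a minimal ReplicaSet manifest; the name, label, image, and replica count are just illustrative values:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 3              # keep exactly three matching pods running at all times
  selector:
    matchLabels:
      app: nginx
  template:                # pod template used to create replacement pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
```

In practice, you’ll rarely create a ReplicaSet directly; as described next, a deployment can manage ReplicaSets for you.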

Deployments allow you to specify a desired state for your pods and ReplicaSets. Kubernetes’ declarative system enables you to define how you want your containers to be run and managed, after which the deployment can issue declarative updates for your cluster’s pods and ReplicaSets. Kubernetes then takes it from there, ensuring that the state of the cluster matches your declared definition, and modifying the present state to conform to the desired state if required.

In this article, you’ll learn the use cases and benefits of Kubernetes deployments, as well as the different strategies for executing them. Finally, you’ll walk through creating a deployment with Kubernetes yourself.

Why do you need Kubernetes deployments?

A deployment describes the desired end state of a rollout, where a rollout is the process of moving a cluster from its current state to the desired state without downtime.

Using deployments is essential in many cases:

  • Deployments are used to create new pods and ReplicaSets automatically. The definition file has specifications for Kubernetes to perform replication of pods and ReplicaSets.
  • Deployments give you a declarative way to describe the desired state of pods and ReplicaSets. The YAML definition file helps you to specify what the end state of the pods and ReplicaSets will look like.
  • Deployments help you revert a cluster to a previous state if the present state is unstable. Kubernetes keeps a revision history for each deployment, so rollbacks can be performed when needed. This is useful if, for example, a new release leaves you with a crash-looping pod. Rolling back also records a new revision of the deployment.
  • Deployments come in handy when you need to scale up to accommodate more load. You can scale a deployment manually or configure it to scale automatically as demand grows, and Kubernetes updates the underlying ReplicaSet and creates the additional pods for you. Both rollback and scaling are sketched in the commands after this list.
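
As a rough sketch of the rollback and scaling points above, these are the kinds of kubectl commands involved (the deployment name nginx-deployment matches the one created later in this tutorial):

```bash
# inspect the revision history of a deployment
kubectl rollout history deployment/nginx-deployment

# roll back to the previous revision, or to a specific one
kubectl rollout undo deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment --to-revision=2

# let Kubernetes add or remove pods automatically based on CPU usage
kubectl autoscale deployment/nginx-deployment --min=2 --max=10 --cpu-percent=80
```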

What makes up a Kubernetes deployment?

Before creating a deployment, you should know the parts that make up the deployment, and how they work together to make a deployment functional.

A deployment is made up of the following components:

  • YAML file: A YAML file describes the desired state for the Kubernetes cluster.
  • Pods: Pods consist of containers, configurations, and environments to run the applications.
  • ReplicaSet: This is a group of identical pod instances, configured so that the number of running pods always matches the number of pods specified by the YAML file. It ensures that a new pod is created when one fails.
  • kube-scheduler: The kube-scheduler is a component of the control plane that decides which worker node each newly created pod should run on.
  • kube-controller-manager: This is another component of the control plane. It watches and modifies the present cluster state to match the desired state defined in the YAML file. It creates, updates, and removes pods and ReplicaSets.

Benefits of using Kubernetes deployments

Using deployments helps you maintain the stability and high availability of your containers. For example, if a node crashes, you’ll want a deployment or ReplicaSet in place to replace its failed pods. Unless you need to customize the update orchestration yourself, it’s better to let deployments manage your ReplicaSets than to manage them directly.

Without deployments, you would be creating naked pods, which are considered an antipattern. A naked pod is one that isn’t bound to a ReplicaSet or deployment, so it won’t be replaced in the event of a pod failure.

The following are some benefits of using deployments:

  • Deployments automate the deploying, updating, and scaling of containerized applications. This removes much of the tedious overhead work involved, and allows your team to focus on their primary tasks.
  • Deployments are generally faster and less prone to errors than creating your pods manually, as pod instances are automatically launched, and are run as specified in the deployment definition across all cluster nodes.
  • The controller manager uses the specifications in the deployment to know when it needs to replace a failed pod or inaccessible node as it watches the health of the pods and nodes. This ensures that business-critical applications are kept running.

Different deployment strategies

No matter what your goal is, choosing the right Kubernetes deployment strategy is crucial to achieving resilience in production. This choice is typically guided by your business goals, costs, your cloud provider, the stability of the application to be deployed, how well-tested the application is, and how much downtime you can tolerate in production. Here’s a closer look at five common deployment strategies:

  • Ramped: This strategy avoids downtime by slowly replacing old pods with new pods, one by one (see the manifest snippet after this list for how this is configured on a deployment). This makes it easy to pause or roll back if any challenges or unexpected issues arise during a deployment. The slow release across instances suits stateful applications.
  • Recreate: This deployment strategy shuts down the old pods and creates new ones for replacement. This could result in downtime, though, and as such is best implemented in a development or testing environment, not production. It’s commonly used when testing or doing quality assurance analyses.
  • Canary: This deployment style runs the ReplicaSet of the new pods alongside the old pods until you’re confident that there are no errors or problems. Then you can scale up the new pods and replace the old ones. This allows you to test the new version of your application with a small section of your user base. The canary rollout is slow, but the strategy offers fast rollback when problems arise in deployment.
  • A/B: The A/B deployment style works like the canary deployment pattern, but targets a specific section of the user base. This is because it seeks to monitor more than just stability. It measures how the features of the application or usage factors such as device type and geographic location influence business goals. Although it might require more configuration than other deployment types, the A/B strategy is helpful in circumstances when you need to run different versions in parallel.
  • Blue/green: This deployment strategy releases a new version to production and lets it run alongside the old version. Once the new version has been well tested, traffic is switched over and it replaces the older version. This gives you instant rollout or rollback, which is an advantage. However, it’s difficult to achieve when deploying stateful applications, and it requires roughly double the storage, memory, compute power, and other resources while both versions run.
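
Of these, the ramped and recreate patterns are built into the Deployment object itself through the .spec.strategy field; canary, blue/green, and A/B rollouts are usually assembled on top of it with labels, multiple deployments, or additional tooling such as a service mesh. Here’s a minimal sketch of the built-in options, with example surge values:

```yaml
spec:
  strategy:
    type: RollingUpdate      # the ramped pattern; use "Recreate" to shut down old pods first
    rollingUpdate:
      maxSurge: 1            # at most one extra pod above the desired count during an update
      maxUnavailable: 0      # never drop below the desired count while updating
```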

Creating a deployment with Kubernetes

In this section of the article, you’ll take a look at how to create a Kubernetes deployment. The kubectl command allows you to create and manage deployments with Kubernetes through the CLI.

Before you start, you’ll need to ensure that you have installed kubectl and minikube on your development machine. Without these, the kubectl command won’t work in your terminal. If you don’t have them installed already, you can follow the minikube and kubectl installation guides.
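
With both tools installed, you can start a local cluster and check that kubectl can reach it. Something along these lines should work, though the exact output depends on your versions:

```bash
# start a local single-node Kubernetes cluster
minikube start

# confirm the client is installed and the cluster is reachable
kubectl version --client
kubectl cluster-info
kubectl get nodes
```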

Once those are installed, create a new folder to hold the files you’ll need for this tutorial:

```bash
# the folder name is arbitrary; use whatever you like
mkdir nginx-deployment-demo
cd nginx-deployment-demo
```

Create a new YAML file called nginx-deployment.yaml, and add the following code to it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
```

The file you’ve just created is your deployment file, which contains the usual Kubernetes configuration. It indicates:

  • The apiVersion being deployed is apps/v1.
  • The kind of file, which is Deployment.
  • The metadata, which specifies the deployment name.
  • The spec, which defines the deployment configurations and replicas.

In addition, it specifies that:

  • The deployment named nginx-deployment is being created. This is shown in .metadata.name.
  • Four replicated pods are to be created. This is shown in the .spec.replicas field. You can modify this field later on when you update your deployment.
  • The pods are to be labeled app: nginx, as shown in the .spec.template.metadata.labels field (the deployment itself carries the same label under .metadata.labels).
  • The pods are to run the latest version of the nginx container image from Docker Hub. This is shown in the .spec.template.spec field. If you want to deploy your own application instead, make sure you have an image of your containerized application hosted in a container registry like Docker Hub. To learn more about containerizing applications, follow the official Docker Getting Started guide.

To create the deployment, run the following command:

```bash
# kubectl create -f nginx-deployment.yaml also works for the first rollout
kubectl apply -f nginx-deployment.yaml
```

Upon successful creation of the deployment, you will get a terminal response like the following:

```bash
deployment.apps/nginx-deployment created
```

If you get the above output in your terminal, congratulations! You have just created a deployment with Kubernetes.

To see your deployment, run the following command:

```bash
kubectl get deployments
```

You should get a terminal output listing the available deployments, as follows:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   4/4     4            4           6s

To view the currently running pods, use the following command:

```bash
kubectl get pods
```

Then, a list of the pods will be printed out like this:

NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-9456bbbf9-fr8mc   1/1     Running   0          20s
nginx-deployment-9456bbbf9-lpn89   1/1     Running   0          20s
nginx-deployment-9456bbbf9-wq76m   1/1     Running   0          20s
nginx-deployment-9456bbbf9-xxczh   1/1     Running   0          20s
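
As a quick demonstration of the self-healing behavior discussed earlier, delete one of these pods and list them again; the deployment’s ReplicaSet creates a replacement almost immediately. The pod name below is taken from the sample output above, so substitute one from your own list:

```bash
kubectl delete pod nginx-deployment-9456bbbf9-fr8mc
kubectl get pods
```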

Check out the Kubernetes documentation to learn more about how to use the kubectl command to interact with your deployments.
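
For example, a few common follow-ups look like this; the nginx:1.25 tag is just a placeholder for whichever image version you want to roll out:

```bash
# scale the deployment up or down
kubectl scale deployment/nginx-deployment --replicas=6

# roll out a new container image and watch the rollout progress
kubectl set image deployment/nginx-deployment nginx=nginx:1.25
kubectl rollout status deployment/nginx-deployment

# inspect the deployment in detail
kubectl describe deployment nginx-deployment
```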

Final thoughts

In summary, you’ve taken a high-level look at Kubernetes deployments, and you have a good idea of the components of a deployment, why deployments are important, and the benefits they provide when you use them. On top of that, you also learned how to write a YAML definition file, and how to use the kubectl command to create a deployment.

If you're looking for a maintenance-free platform to build a deployment pipeline dashboard, then try using Airplane. Airplane is the developer platform for building internal tools. With Airplane, you can build Tasks (single or multi-step operations that anyone on your team can use) and Views (a React-based platform for quickly building UIs). You can use Airplane's deployment pipeline dashboard template to get started easily.

Sign up for a free account or book a demo to try it out!

Jekayin-Oluwa Olabemiwo
Jekayin is a software engineer, skilled technical writer, and speaker. He enjoys building APIs and working on backend engineering projects.
