Should you run a database on Kubernetes?

Nov 4, 2021
6 min read

Kubernetes is a free and open-source orchestration solution aimed at automating the deployment, scaling, and management of containerized applications. Although it was originally intended for stateless applications, the rising popularity of containerized stateful workloads means Kubernetes is increasingly used for those applications as well.

By default, containers are stateless: if they crash or have to be restarted, any data living inside them is lost. As a container orchestrator, Kubernetes regularly restarts containers and moves them from node to node. This becomes an important concern for stateful workloads, which need to preserve their data no matter what Kubernetes does to the containers running them.

An example of a stateful application would be a database server.

How could a database run on Kubernetes? Are there any mechanisms to manage such workloads in Kubernetes? And if so, what types of databases and data are best suited for it?

In this article, we will find the answers.

Different options for running databases

Let’s first think about the different ways to run a database server in the enterprise.

  • Self-managed database: This is where you host your database server either on-premises or in the cloud using virtual machines. You’re fully responsible for installing the database server, securing it, patching, upgrading, provisioning storage, providing high availability, scaling, taking backups, and handling other day-to-day database administrator (DBA) operations. It’s the most manual option, but it’s also the one where you have full control of your database and data. Many companies still opt for this approach.
  • Managed databases: Solutions like Amazon RDS, Azure Databases, Google Cloud Databases, or Instaclustr make it easier to deploy and scale database servers in the cloud. The vendor takes care of the storage, compute, network bandwidth, installation, upgrades, high availability, and so forth. As a consumer, your enterprise only needs to host the databases on one of the vendor-supplied instances running a database engine of your choice (e.g., SQL or NoSQL). This is a popular option for most modern enterprises.
  • Kubernetes-managed databases: This is a hybrid between running a database entirely by yourself and using a managed database service, since Kubernetes itself can run on-premises, in the cloud, or as a managed service. With this approach, you can take advantage of many of Kubernetes’s strengths, like automated scheduling, self-healing, or horizontal scaling. Database administration tasks, like performance tuning, backup, and restore, will still require your attention and might differ slightly due to some containerization abstractions.

Persistent storage and other Kubernetes features

Although Kubernetes was developed to manage containerized applications that didn’t need data persistence, it now offers solutions for managing stateful workloads. Persistent volumes provide an API that allows Kubernetes administrators to manage volumes, which, paired with storage classes, offer a safe and abstracted way to store and manage data.
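To make this concrete, here is a minimal sketch of a StorageClass and a PersistentVolumeClaim that a database pod could mount. The provisioner, class name, and size are illustrative assumptions (the example assumes the GKE persistent disk CSI driver); substitute whatever your cluster or cloud provider offers.

```yaml
# Illustrative StorageClass and PersistentVolumeClaim. The provisioner and
# class name depend on your cluster; the values below are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: pd.csi.storage.gke.io   # assumption: GKE persistent disk CSI driver
parameters:
  type: pd-ssd
reclaimPolicy: Retain                # keep the underlying disk even if the claim is deleted
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data                # hypothetical claim for a database pod
spec:
  accessModes:
    - ReadWriteOnce                  # a single node mounts the volume read/write
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 50Gi
```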

However, the cloud is unpredictable, and Kubernetes often needs to restart and reschedule pods. When that happens, the persistent volume has to follow the pod to its new node and be attached to the correct container again, which is not trivial. To make things more complex, some databases run in a multi-node cluster configuration.

Kubernetes introduced constructs in version 1.5 to help tackle these problems. StatefulSets ensure that pods are created from an identical container specification while each retains a stable, unique identity even if it is moved to another node. By coupling pods with persistent volumes through that identity, it’s possible to maintain the workload’s state even when pods are rescheduled. DaemonSets, although slightly more complex, are also an option when you need to run a copy of your workload on every node (or a subset of nodes) in your cluster.
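As a sketch of how this coupling works, the manifest below runs PostgreSQL as a StatefulSet with a volumeClaimTemplate, so each pod gets a stable name (postgres-0, postgres-1, ...) and its own PersistentVolumeClaim that follows it across reschedules. The image tag, password handling, and storage class are illustrative assumptions, not a production setup.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres                      # headless Service gives each pod a stable DNS name
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14          # illustrative image tag
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD # in practice, reference a Secret instead
              value: changeme
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:               # one PVC per pod: data-postgres-0, data-postgres-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-ssd    # the hypothetical class from the earlier example
        resources:
          requests:
            storage: 50Gi
```

Note that the StatefulSet only provides stable identities and per-pod storage; configuring replication between the instances is still up to the database itself or to an operator, as discussed below.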

Distributed stateful workloads often require a complex sequence of actions that predefined resources can’t handle. For example, a distributed database may need to perform a specific set of actions whenever a database node (in the case of Kubernetes, a pod) goes down. Examples of such actions can be electing a leader, balancing data, and so on.

Native Kubernetes capabilities can’t really handle such situations, but its custom resources can help. Custom resources allow the Kubernetes API to be extended with domain-specific logic by defining new resource types and the controllers that manage them. The Operator pattern builds on custom resources to manage applications and their components, making it easier to develop these custom solutions.

OSS frameworks like kubebuilder or the Operator Framework provide the building blocks to create operators such as the Postgres Operator, MySQL Operator for Kubernetes, Elastic Cloud on Kubernetes (ECK), or K8ssandra.
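To illustrate the pattern, here is what a custom resource for a database cluster might look like. The API group, kind, and fields below are hypothetical; each operator defines its own schema, but the idea is the same: you declare database-level intent, and the operator’s controller translates it into StatefulSets, Services, Secrets, backups, and failover handling.

```yaml
# Hypothetical custom resource: the group, kind, and fields are invented
# for illustration; real operators define their own schemas.
apiVersion: databases.example.com/v1alpha1
kind: PostgresCluster
metadata:
  name: orders-db
spec:
  version: "14"
  instances: 3                 # the operator's controller creates the StatefulSet,
  storage:                     # Services, and Secrets, and handles failover
    storageClassName: fast-ssd
    size: 100Gi
  backups:
    schedule: "0 3 * * *"      # nightly backups, expressed as domain-level intent
```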

Distributed database features

Most database engines come with one or more ways to distribute data and make it highly available. When choosing a database to run on Kubernetes, you need to consider these features:

  • Replication: Does the database support replication? If so, what types of replication does it support (e.g., bidirectional, transactional, or snapshot)? This will help improve reliability, fault tolerance, and accessibility.
  • Sharding: Is the database capable of partitioning data and holding different pieces in separate instances (i.e., pods)? This can help improve redundancy and spread the load.
  • Failover: Is the database capable of failing over from the primary (read-write) node to a read-only node and promoting that node to primary? This will also help improve reliability, fault tolerance, and accessibility.
  • Scalability: Does your database handle scaling (in and out)? Kubernetes paves the way for horizontal scaling, but your database needs to handle adding or removing instances on demand. This can help handle the increased load or reduce costs when the load decreases.

Databases that natively have these capabilities (e.g., PostgreSQL, Elasticsearch, MongoDB, or Cassandra) can more easily deal with the uncertainty of heterogeneous cloud environments.
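One way to help a replicated database cope with that uncertainty is to keep its replicas on separate nodes, so a single node failure doesn’t take out multiple copies. A minimal sketch, assuming the app: postgres label from the StatefulSet example above, adds a pod anti-affinity rule to the pod template:

```yaml
# Fragment to merge into the StatefulSet's spec: spread database replicas
# across nodes so one node failure does not take out multiple copies.
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: postgres
              topologyKey: kubernetes.io/hostname   # at most one replica per node
```

With this rule in place, scaling the StatefulSet up or down (by changing replicas) adds or removes instances while keeping at most one per node.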

Data availability requirements to consider

Since pods and compute nodes are generally ephemeral by nature, Kubernetes is better suited for some types of data than others. It’s important to understand how critical the data is and to what degree it has to be available.

In order to achieve high availability, some database engines use what’s called eventual consistency. Eventual consistency is a consistency model which guarantees that, if no new updates are made to a given piece of data, all reads of it will eventually return the last updated value. While updates are still propagating, different nodes may return different values depending on where you read from; once the updates have propagated, all nodes hold an identical copy, and all client requests get the same data. When you run database systems in Kubernetes, you’ll need to consider whether that’s acceptable from a business perspective. For example, it may not be an option for financial systems.

Some database engines can deal with failovers (for example, when the pod running the primary read/write copy of the data is rescheduled or crashes), but it may take some time for the secondary or standby node to recover and assume the primary role. You need to consider how much data unavailability you can afford in such cases and whether stale data is acceptable. It may not be, for example, in a healthcare solution showing patient history records.

As you can see, it all depends on business requirements. Workloads dealing with transient data (e.g., caching layers), read-only data (e.g., look-up tables), or data that can be easily rebuilt (e.g., API outputs) are a better fit for Kubernetes.

Conclusion

As an orchestration technology, Kubernetes simplifies many common operational concerns, like scheduling, auto-scaling, or failover. While stateless workloads fit perfectly within this model, stateful ones, like databases, have other areas of concern that need to be addressed. As we have seen:

  • Persistent volumes and storage classes offer a safe and abstracted way to manage data.
  • StatefulSets and DaemonSets can build on these concepts by allowing pods to be integrally tied to the persistent data.
  • Custom resources and operators can help provide custom logic for applications that need data persistence.

However, it’s important to consider the available support for the database engine you want to run on Kubernetes, as well as the type of data you want to store and that data’s availability requirements. Running services in Kubernetes involves catering to a certain amount of volatility. Databases that can handle replication, sharding, and failover are, therefore, better candidates for Kubernetes. Similarly, data that can be easily and quickly regenerated is also a good candidate for Kubernetes hosting. At the end of the day, it will come down to how much fault tolerance the business requires.

If you're looking for a maintenance-free solution to build workflows and internal UIs that make connecting and managing your databases quick and efficient, consider using Airplane. With Airplane, you can connect various database resources and build internal tasks and dashboards that can help you view, update, and monitor your databases.

Sign up for free or book a demo and build your first database dashboard within minutes.

Ricardo Castro
Ricardo Castro is a Senior Site Reliability Engineer at FARFETCH, as well as a Certified Kubernetes Administrator (CKA) and Certified Kubernetes Application Developer (CKAD).
