8 tips for Kubernetes load testing

Apr 6, 2022
10 min read

Kubernetes is a popular choice for orchestrating containerized workloads, but before going live, it’s crucial to test your Kubernetes clusters for irregularities, stability, and responsiveness.

While Kubernetes provides built-in tools and services for collecting metrics on performance parameters, they can’t simulate a workload on a given application to test its scalability and speed. This is where Kubernetes load and performance testing tools come into play. These tools let developers test their application before it reaches production, allowing them to remedy performance issues before they become problems.

There are plenty of Kubernetes load and performance testing tools available. In this article, we’ll discuss some of the most popular ones and look at how they complement the built-in Kubernetes testing capabilities.

Kubernetes testing basics

Testing a Kubernetes application is fairly straightforward. A typical setup requires a load-testing primary node, as well as multiple worker nodes that simulate traffic against the main application node. The primary node acts as an entry point for the testing workloads, and from there, the performance of the workloads can be monitored with different tools for logs and metrics.
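For example, while a test runs, the built-in kubectl tooling can surface basic resource metrics and logs. The sketch below assumes the metrics-server addon is installed, and "my-app" is a placeholder namespace and deployment name:

```shell
# Node-level CPU/memory while the load test runs (requires metrics-server)
kubectl top nodes

# Per-pod resource usage for the application under test
# ("my-app" is a placeholder namespace)
kubectl top pods -n my-app

# Follow logs from the deployment receiving the simulated traffic
kubectl logs -n my-app deploy/my-app --tail=50 -f
```

Watching these alongside a load generator is often enough to spot pods that are CPU-throttled or restarting under load.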

Just like any other form of testing, you need to decide on a target for your Kubernetes testing. These targets could be anything from simple bug discovery or quality assurance, to larger scopes such as cost optimization or error prevention. Each of these requires you to modify your testing strategy accordingly. Without a clear target in mind, you might find yourself trying to test for many things at once, reducing the amount of useful data from your testing efforts.

If you are new to the trade, you might want to consider trying out the simple open-source solutions before committing to a paid service. This will allow you to gain experience with load testing so that you better understand what you want to accomplish with your Kubernetes testing setup. If you’re already sure of your testing goals, you can try a few of the more advanced tools and choose the one that suits your needs the best. We explore both basic and advanced tools below.

The top 8 Kubernetes testing tools

In this section, we will list some of the popular tools and services used to implement load and performance testing in Kubernetes.

Basic tools

Before diving into more advanced or managed testing solutions, here are a couple of simple, open-source tools that you can use to get started with Kubernetes testing quickly and easily.

k6

k6 is an open-source load testing tool written in Go. It allows users to write test scripts in JavaScript rather than a proprietary scripting language. This shortens the learning curve compared to many tools, since most developers are familiar with JavaScript, or can at least quickly grasp the basics. Combine that with the fact that JavaScript is a popular and comprehensive language, and you’ve got an entire community that can help with troubleshooting.

After the tests are written, you simply run them using the included CLI tool. All in all, k6 is an easy way to get started with load testing.
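As a sketch of what that looks like, here is a minimal script and run command. This assumes the k6 CLI is installed locally, and the target URL is a placeholder:

```shell
# Write a minimal k6 script; https://my-app.example.com is a placeholder.
cat > load-test.js <<'EOF'
import http from 'k6/http';
import { check, sleep } from 'k6';

// Run 50 virtual users for 30 seconds.
export const options = { vus: 50, duration: '30s' };

export default function () {
  const res = http.get('https://my-app.example.com/');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
EOF

# Execute the test with the k6 CLI
k6 run load-test.js
```

At the end of the run, k6 prints a summary of request rates, latencies, and check failures to the terminal.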

Kube-burner

Kube-burner is a stress-testing tool for Kubernetes clusters. It’s got one simple purpose: to load test your applications. Unlike some of the other tools in this roundup, Kube-burner is not something you install into your cluster. Instead, it’s a CLI you install locally, which can then be used to run one-off tests against your cluster and applications.

This can be an advantage for some, as it simplifies setup. It also doesn’t interfere with any existing infrastructure, as it runs from an engineer’s local computer. However, that isn’t the only way you can run this tool. Should you decide you want to integrate the tool more deeply into your workflow, it has the advantage of being a simple binary file. As long as you have control of your CI/CD pipeline runners, it’s easy to integrate and run as part of your CI/CD pipeline.

The methodology of Kube-burner is fairly simple. It runs the load test, collects the desired Prometheus metrics, and saves them in a time-series database. Kube-burner isn’t an all-in-one solution—you’ll still have to do some work of your own to get the most out of it—but for many people, this is a reasonable trade for the flexibility it offers.
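A job definition might look like the following sketch. The field names follow kube-burner's job-based YAML format but should be verified against its documentation; the Prometheus URL, metrics profile, and workload template are placeholders:

```shell
# Sketch of a kube-burner job config (verify field names against the docs)
cat > cfg.yml <<'EOF'
jobs:
  - name: api-load
    jobIterations: 100      # how many times to run the job
    qps: 20                 # queries per second against the API server
    burst: 20
    namespace: kube-burner-test
    objects:
      - objectTemplate: deployment.yml  # your workload template (placeholder)
        replicas: 1
EOF

# Run the test from your local machine, pointing at Prometheus for metrics
kube-burner init -c cfg.yml -u http://prometheus.example.com -m metrics.yml
```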

Hey

Hey is a tiny command-line tool that you can use to send requests to your web app. It can also spawn child processes to generate the required load for testing your Kubernetes applications.

Hey is also written in Go, and is very easy to start using. While you can’t set up automated tests using Hey, you can use this tool to instantly generate loads for on-demand testing of your Kubernetes app.
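For instance, you can port-forward a service and point Hey at it. The service name and ports here are placeholders:

```shell
# Expose the service locally ("my-app" and the ports are placeholders)
kubectl port-forward svc/my-app 8080:80 &

# 50 concurrent workers for 30 seconds, limited to 10 req/s per worker
hey -z 30s -c 50 -q 10 http://localhost:8080/
```

Hey prints a latency histogram and response-time percentiles when the run finishes, which makes it handy for quick before/after comparisons.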

Advanced solutions

Once you have an idea of what you intend to do with your Kubernetes load-testing efforts, you can pick one of these tools to further up your game.

Speedscale

Speedscale is a managed performance and load-testing tool that allows testing of your Kubernetes applications using simulated user traffic. The most basic way to use the tool is by setting it up to record your application, then visiting your website or sending an API request and having the tool record the request. From here, you can choose to replay it to test the performance of your application, or you can multiply the request and make sure your application can handle a certain load.

Speedscale integrates seamlessly with Kubernetes via its own CLI tool, speedctl, which can be installed on a Kubernetes cluster using kubectl commands. Once speedctl is configured, everything from running performance tests to simulating application responses can be controlled without affecting production environments and without writing any code. You’re also able to integrate the performance testing directly into your CI/CD pipeline, creating gates so your application won’t be deployed if its performance doesn’t meet the expected standards.

Speedscale is a great option if you want something that’s deeply integrated with your workflow, rather than just a simple one-off test.

Stormforge

Stormforge is one of the few Kubernetes testing-as-a-service products, and provides a comprehensive set of tools to test your Kubernetes clusters thoroughly. Stormforge boasts the ability to create advanced, easily-scalable load tests in under three minutes. You can also easily integrate it in your CI/CD pipelines to automate load testing while building your app artifacts.

Stormforge relies on an open workload model that provides a more accurate simulation of real-world situations than closed workload models. Scaling your load tests to thousands of requests and millions of concurrent users is a simple task for Stormforge.

One of the key goals of Stormforge is to be able to compare different runs of your tests to provide you with insights on how your changes are affecting your Kubernetes clusters. On top of that, Stormforge is completely cloud-native, making it an easy choice for Kubernetes testing.

Kube-monkey

Many engineers have heard of Chaos Engineering, a practice made popular by Netflix’s Chaos Monkey tool. For those who don’t know of it, it’s based on, quite literally, chaos. You introduce controlled chaos into your infrastructure, thereby making sure that it’s able to handle any unforeseen events that could happen in the future.

An implementation of the Chaos Monkey project, Kube-monkey is a cluster resiliency tool designed to natively test Kubernetes clusters. Kube-monkey simulates cluster failures by randomly terminating pods during a pre-configured timeframe. This helps create a more fault-tolerant production environment, in which the cluster can tolerate random failures.

The failure testing simulation is opt-in and allows you to add critical namespaces to a blacklist, ensuring that their pods will never be terminated in a test.
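Opting in is done with labels on your workloads. The label keys below follow kube-monkey's documented convention; the values are illustrative, and the manifest is abbreviated:

```yaml
# Opt a Deployment in to kube-monkey's termination schedule
# (fragment; the Deployment spec itself is omitted).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    kube-monkey/enabled: enabled      # opt in to chaos testing
    kube-monkey/identifier: my-app    # groups pods belonging to this app
    kube-monkey/mtbf: "2"             # mean time between failures, in days
    kube-monkey/kill-mode: fixed      # terminate a fixed number of pods
    kube-monkey/kill-value: "1"       # ...one pod per run
```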

JMeter

JMeter is an open-source, Java-based load testing tool that can be integrated with Kubernetes. Unlike most load-testing tools, which only offer a CLI, JMeter has a CLI mode but is more commonly used via a GUI that allows developers to test clusters without writing any code.

For some teams, this is a drawback in terms of developer experience, as most engineers are accustomed to command-line tools. If you’re an individual developer wanting to load test your application, it’s up to you whether this is a pro or a con; but if you’re looking to introduce the tool into your organization, it’s worth considering how its users prefer to work. In some cases, the fact that JMeter can be used without any coding is a big advantage.

While the interface may be a concern for some groups, the performance doesn’t need to be. The user interface allows anyone to configure load tests and run them immediately. Developers can create test scenarios with just a few clicks, perform different types of stress and service tests, and generate reports based on test execution. JMeter even allows recording user actions in the browser, which can later be integrated into tests simulating a large number of concurrent users.
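Once a test plan has been built in the GUI, running it headlessly in CI looks roughly like this (the .jmx plan name is a placeholder):

```shell
# Run an existing test plan in non-GUI (CLI) mode:
#   -n  non-GUI mode       -t  test plan file
#   -l  results log file   -e/-o  generate an HTML report into report/
jmeter -n -t load-test.jmx -l results.jtl -e -o report/
```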

LitmusChaos

LitmusChaos is an alternative to Kube-monkey, providing the ability to stress test your cluster. The tool takes inspiration from cloud-native principles: it runs as a set of microservices and uses custom resource definitions (CRDs) to define and execute chaos experiments.

This can be an advantage, as it allows you to integrate the tool deeply into your infrastructure, providing greater insights. It also allows you to run Litmuschaos like you would any other application and integrate it with your existing setup. This allows you to focus only on general infrastructure management, and removes the need to come up with entirely new ideas for running a load-testing tool. Whether this is a pro or a con will, of course, depend on what your existing infrastructure looks like.

LitmusChaos is being developed to support several specific scenarios. The first and most obvious use case is supporting developers while they’re working on an application. Litmus can be used as part of the development pipeline to make sure everything is working as intended.

Extending the developer use case, Litmus can also be integrated into your CI/CD pipeline, ensuring your application isn’t deployed unless you know for sure that it will run as expected.

Lastly, LitmusChaos is being developed with the intent to help site reliability engineers ensure that already-deployed applications work as intended. SREs can schedule chaos experiments and figure out whether any part of the infrastructure has weaknesses.
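Experiments are declared as Kubernetes resources. Below is a sketch of a ChaosEngine that runs the pod-delete experiment against a hypothetical deployment; the names, labels, and service account are placeholders, and the experiment itself must be installed in the cluster first:

```yaml
# ChaosEngine targeting a Deployment with the pod-delete experiment
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: my-app-chaos
  namespace: default
spec:
  appinfo:
    appns: default          # namespace of the app under test
    applabel: app=my-app    # label selector for target pods
    appkind: deployment
  engineState: active
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-delete      # randomly deletes matching pods
```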

A quick note on cloud providers

An important thing to note is that many people host their Kubernetes clusters on managed cloud services such as Amazon Elastic Kubernetes Service, Google Kubernetes Engine, and Microsoft’s Azure Kubernetes Service. While these services are often thought of as all-in-one solutions, none of them offer a native load testing tool. However, with the exception of Hey, the tools mentioned in this article can be deployed to your cluster via config files, and several offer direct integration with cloud providers.

Kubernetes load and performance testing best practices

When you begin to set up load and performance testing infrastructure in your Kubernetes application, there are a few key points you should keep in mind to get the most out of your efforts.

Take a service-oriented approach to Kubernetes testing

There’s a high chance that if you are using Kubernetes to deploy your application, you’ve already designed your app to be modular. If you’ve used one of the service-based architectures, testing your app becomes easier. This is because such architectures define clear boundaries between various sections in your app. The more modular your app is, the easier it is to take it apart and test each component individually. Here’s what you should keep in mind when breaking your app down into services:

Services should be independent of each other

Continuing on the point from above, your app’s components or services should be as independent of each other as possible. This not only helps you to test each component independently, but also helps you when handling traffic spikes by enabling you to scale each component up and down based on user requirements.
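In Kubernetes terms, independent services mean each one can get its own HorizontalPodAutoscaler and scale on its own signal. A sketch, with "payments" as a hypothetical deployment:

```yaml
# Scale one service independently of the rest of the app
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: payments
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payments          # placeholder deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

A load test against the payments service alone then exercises this scaling behavior without touching the rest of the app.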

Choose optimal sizes for your services

While breaking your app down into components, you should endeavor to keep the services roughly equal in size and responsibility. While it’s not possible to do this in every scenario, doing so can greatly streamline management and testing. It gives you a reasonable number of services to test, and it keeps the number of concerns each service addresses minimal. With services that handle only a few business concerns at once, you can rest assured that each of those concerns will be thoroughly and independently tested before being rolled out to your users.

Be smart with your load tests on each service

When setting up load tests for your services, you can optimize your resource consumption by allocating test resources to each service based on its significance in the app, as well as the traffic it will handle. Important services such as payments or registrations need to be tested rigorously, since you can’t afford for them to go down. Testing on less-important services, such as backup managers, can be dialed back, since even if they go down, it won’t affect the normal functioning of your app. That being said, you should test extensively if you have the resources to do so.

Target a geographically diverse testing setup

Testing a Kubernetes setup requires you to look at it from the perspective of your users. If, as is common for SaaS applications, your app’s users are located around the world, you should look for a testing setup that can imitate the behavior of your global userbase by sending user actions to your app from various locations around the world. This way, you can ensure there is minimal latency for users, regardless of their location, and that no local networks are blocking access to your app.

Monitor and track your clusters’ performance thoroughly

Without a doubt, the best partner to testing is monitoring. While testing helps you find issues in your app before release, monitoring your app in production provides real-world insights into its performance. Third-party solutions help you gain deep insights into the all-around performance of your Kubernetes clusters in real time.

In some cases, it’s not possible to load test a production Kubernetes cluster without disturbing the end-user experience. In situations like these, it’s worth considering monitoring as an alternative to testing.

Final thoughts

In this article, you’ve learned about the importance of load and performance testing your Kubernetes applications, as well as the tools available for these tests. One thing to know before getting started is that these tools all have comparable features but tend to shine in specific areas. It’s important to identify your specific use cases first, as that will help you find the right tool for your load-testing goals. There are basic and advanced tools, as well as open-source and SaaS offerings.

If you're looking for a way to build a dashboard to monitor your Kubernetes applications or a workflow to help streamline testing, you should try out Airplane. With Airplane, you can create internal operations that support your engineering workflows.

You can get started for free or book a demo to learn more about Airplane.

Harshit Mehndiratta
Harshit Mehndiratta is a Software Developer at 5Data. In addition, he is a technical author who enjoys writing about his experiences using Kubernetes as well as JavaScript and React JS.
