Istio vs. Linkerd vs. Consul: Comparing popular service meshes

Nov 29, 2021
8 min read

With the rise of microservices infrastructures, organizations now run a multitude of services in parallel. These services and applications are often containerized and rely upon decoupled APIs to function properly. While it’s true that these services aren’t always integrated with one another, there are situations where inter-service communication is necessary.

This is where a service mesh can come in handy.

Service meshes act as additional infrastructure layers within your existing deployment—facilitating chatter between two or more services over the network. They effectively control the delivery of service requests. Administrators deploy these meshes by running a sidecar proxy instance alongside each service. These sidecars attach themselves to individual services (in a microservices deployment) or to application containers, VMs, and Kubernetes pods.
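
For example, on Kubernetes the sidecar is usually injected automatically once a namespace opts in. A minimal sketch, using Istio's injection label and Linkerd's injection annotation (both meshes are covered below) as alternative illustrations on a hypothetical "shop" namespace:

```shell
# Istio's opt-in: label a namespace so every new pod gets a sidecar proxy injected
kubectl label namespace shop istio-injection=enabled

# Linkerd's equivalent opt-in uses an annotation instead
kubectl annotate namespace shop linkerd.io/inject=enabled
```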

What makes a service mesh important?

A service mesh is a cleaner, low-maintenance alternative to traditional API gateways, which cloud-savvy organizations have long used for message exchange between services.

The mesh itself thankfully doesn’t require manual updating as you add or remove microservices. Service meshes are highly scalable and flexible on the network side, and they offer a few other features as well:

  • Service discovery
  • Load balancing
  • Encryption
  • Policy management
  • Tracing
  • Telemetry and metrics collection

A service mesh is therefore designed to boost transparency, performance, and security across your services ecosystem.

The average enterprise runs 464 custom applications; accounting for each service and its functional relationship with others can be daunting. These dedicated sidecars can even facilitate improved health monitoring.

We’ll break down three leading service meshes—Istio, Linkerd, and Consul—and evaluate which could be best for your organization.

How we’ve compared service meshes

This article’s goal is to provide an objective assessment of these three service meshes while sprinkling in relevant commentary on look, feel, and user-friendliness. We’ll grade each offering based on the following:

  • Setup. What’s required to get each mesh up and running?
  • Maintenance and ease of use. Is it straightforward to maintain the tool and complete normal tasks?
  • Security. Does the mesh have any glaring vulnerabilities, and does it help boost ecosystem security?
  • Features. What functions and benefits does the tool have that set it apart?

After assessing each service mesh, we’ll determine whether or not one option rises above the rest. Let’s jump into the comparison.

Istio

Istio launched in 2017 and has morphed into an all-encompassing service mesh solution for DevOps teams. The platform has its fair share of provider integration support—namely from Google Cloud, IBM Cloud, Red Hat, and VMware (among others). It remains a leading service mesh today for Kubernetes deployments, though it also supports VMs and endpoints outside of Kubernetes.

Setup process

Setting up Istio requires a working Kubernetes platform to begin with. Istio supports a wide variety of vendor solutions, each with its own Istio cluster setup guide. At the time of writing, the service mesh explicitly supports Kubernetes versions 1.19, 1.20, 1.21, and 1.22.

Next, Istio asks you to choose from a variety of installation approaches. These include Istio’s command line (istioctl), the Istio Operator, a multicluster setup, Helm, virtual machines, and an external control plane. The documentation recommends double-checking any pod and service requirements before proceeding.

This comparison tackles Istio installation and setup using Istio’s default profile. The following command installs the default profile onto your cluster:

```shell
# Installs Istio's default configuration profile onto the active cluster
istioctl install
```

To install Istio using a YAML configuration file, use the following command instead (replacing the file name with your configuration file):

```shell
# Replace my-config.yaml with your own IstioOperator configuration file
istioctl install -f my-config.yaml
```

This process is very quick. The command line also permits a wide range of customizations once the base configuration is in place.
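
For instance, istioctl exposes the IstioOperator configuration through --set flags, so small tweaks don't require editing YAML. A brief sketch (the specific settings shown are only examples):

```shell
# Reinstall with the demo profile and turn on Envoy access logging
istioctl install --set profile=demo \
  --set meshConfig.accessLogFile=/dev/stdout

# Preview the full manifest a given configuration would produce
istioctl manifest generate --set profile=demo > generated-manifest.yaml
```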

Maintenance and ease of use

Thankfully, there are multiple pathways for upgrading, downgrading, and maintaining Istio through the central control plane. You can perform canary upgrades, in-place production upgrades or downgrades, Helm-based upgrades and configuration changes, or staged gateway upgrades.

Istio warns against jumping more than two minor versions in a single upgrade (say, from 1.6 to 1.9), as this can cause problems.
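
As a sketch of the canary approach, you install the new control plane under a named revision and migrate workloads namespace by namespace (the revision and namespace names below are examples):

```shell
# Install the new control plane alongside the existing one, tagged as a revision
istioctl install --set revision=canary

# Point a namespace's injection at the new revision, then restart its workloads
kubectl label namespace my-app istio.io/rev=canary istio-injection- --overwrite
kubectl rollout restart deployment -n my-app
```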

Istio’s command line interface makes it straightforward to use. It’s easy to install and to wield during daily use, and while you can get very granular, quick configuration changes are possible without requiring numerous steps.

Security

Istio can secure all service-to-service communications with mutual TLS (mTLS) encryption. The mesh enforces strong authentication and authorization rules tied to workload and end-user identities. Rate limits, quotas, and access controls can prevent traffic-related attacks and shut out users without proper privileges.
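
As an example of the authentication piece, mutual TLS can be locked to strict mode mesh-wide with a single PeerAuthentication resource; a minimal sketch:

```shell
# Require mTLS for every workload in the mesh (applied in Istio's root namespace)
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF
```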

Features

Istio focuses on four chief areas:

  • connections
  • security
  • control
  • observation

Traffic-management controls steal the show, as Istio excels at distributing API calls and user activity. Canary rollouts and percentage-based traffic distribution are also beneficial.
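
A percentage-based split is typically expressed as a VirtualService with weighted routes. A minimal sketch, assuming a hypothetical reviews service whose v1 and v2 subsets are already defined in a DestinationRule:

```shell
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90   # keep 90% of traffic on v1
    - destination:
        host: reviews
        subset: v2
      weight: 10   # canary 10% onto v2
EOF
```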

Envoy proxies are central to the Istio experience; each workload’s sidecar is an Envoy instance, which keeps the mesh approachable and performant. You can configure a single virtual service to handle “all services in a specific namespace.” The mesh’s service-discovery feature automatically identifies critical endpoints. Istio also supports custom resource definitions (CRDs) in Kubernetes and can even span numerous Kubernetes clusters simultaneously.

Istio authenticates workloads using custom Kubernetes APIs, and security settings are managed through a privileged administrator account.

Finally, Istio’s templating enables rapid attribute building. Overall, the solution offers many perks to Kubernetes devotees.

Cons

Istio is still relatively complex, just like the microservices architecture it runs on top of. The platform isn’t as lightweight as other options, and therefore incurs a greater resource cost relative to competitors.

Linkerd

Linkerd 2.0 launched in late 2018 and has been regularly updated since. Built mainly for Kubernetes, Linkerd is also open source and has a strong following, counting Walmart, Comcast, eBay, and others among its adopters.

Setup process

Linkerd is lightweight and therefore easy to install in roughly five minutes. Before you begin, you must have a working Kubernetes cluster and the kubectl CLI available on your machine. Once that’s validated, install the Linkerd CLI using the following command:

```shell
# Download and install the latest Linkerd CLI
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh

# Add the CLI to your PATH for the current shell session
export PATH=$PATH:$HOME/.linkerd2/bin
```

Validate that things are looking shipshape by using the following verification command:

```shell
# Confirm the CLI is working and the cluster is ready for Linkerd
linkerd check --pre
```

Next, install the Linkerd control plane onto your cluster:

```shell
# Render the control-plane manifests and apply them to the cluster
linkerd install | kubectl apply -f -
```

Verify success by running linkerd check. Linkerd allows you to install a demo application via a simple command, which you can modify to suit your own needs and deployment:

```shell
# Install Linkerd's Emojivoto demo application into the emojivoto namespace
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -
```

To configure Linkerd further, you can install any number of included extensions to supplement the basic control plane.
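
For example, the viz extension adds the on-cluster metrics stack and dashboard; a short sketch:

```shell
# Install the viz extension (metrics, tap, and the dashboard)
linkerd viz install | kubectl apply -f -

# Confirm the extension's components are healthy
linkerd viz check
```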

Maintenance and ease of use

Linkerd is designed to be bare bones. That means you’re not wrestling with numerous functions or components that might otherwise interrupt your administrative workflows. It’s easy to use the CLI to execute service mesh commands. Additionally, Linkerd supports visualizations through Buoyant Cloud and the default dashboard (launched with a simple CLI invocation), making it easier to monitor your services in real time.
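
With the viz extension installed, that invocation is a single command; a quick sketch:

```shell
# Proxy the on-cluster dashboard to your local browser
linkerd viz dashboard &

# Or pull live golden metrics straight from the CLI
linkerd viz stat deploy -n emojivoto
```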

Security

Linkerd enables mutual TLS for all meshed communications out of the box, without needing custom configuration. The solution also supports adjacent features like diagnostics and debugging to prevent vulnerabilities from wreaking havoc over time. Automatic rotation of the proxies’ TLS certificates every 24 hours is also standard within Linkerd.

Features

Linkerd is both lightweight and hassle free. It retains excellent compatibility with existing platforms, and minimal work is needed to get up and running. Linkerd also offers the following features:

  • Rate, latency, and request-volume tracking
  • Load balancing, automatic retries, timeouts, and blue-green deployments
  • A self-contained control plane and performant data plane written in Rust

Linkerd also offers an extension library, which lets you bolt additional functions onto Linkerd’s default configuration. Extensions are entirely optional, so Linkerd stays out of the way until you need the supplemental support.

Linkerd even supports route-based metrics tracking for greater visibility into your infrastructure. It’s easy to spot performance issues and make changes based on this information. Linkerd also integrates natively with Grafana.
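
Per-route metrics rely on service profiles; once a profile exists for a service, the viz extension can break traffic down by route. A sketch, using the demo app's web service as a hypothetical target:

```shell
# Generate a ServiceProfile by watching live traffic for a few seconds
linkerd viz profile -n emojivoto web-svc --tap deploy/web --tap-duration 10s \
  | kubectl apply -f -

# Show success rate, request volume, and latency broken down per route
linkerd viz routes -n emojivoto deploy/web
```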

Cons

While not strictly a con, Linkerd is known for its simplicity. This may leave some teams wanting more from their service mesh. For example, there’s no rate limiting, delay injection, or circuit breaking. Lastly, Linkerd exposes no dedicated API for control plane operation.

Consul


Consul’s value proposition is simple: securely connect services from any platform, using any public or private cloud. Backed by HashiCorp, Consul is the trio’s oldest service mesh, launching in 2014.

Setup process

There are three ways to install Consul:

  • precompiled binary
  • source
  • Kubernetes

We’ll take the binary approach. It’s simpler than installing from source, and Consul’s Kubernetes CLI is still in beta. Documentation states that it’s not yet ready for production.

To begin, grab the compiled binary matched to your machine’s OS—macOS, Windows, Linux, FreeBSD, or Solaris. You can either use the Homebrew package manager or opt for Consul’s 64-bit ZIP download.

Unzip this file into any directory on your machine. The archive contains a single binary (consul, or consul.exe on Windows), and no additional files are required. Note, however, that CLI access requires the binary to be somewhere on your PATH.
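
On a Linux or macOS machine, that typically looks like the following (the archive name varies by version and platform):

```shell
# Unpack the single consul binary and move it somewhere on your PATH
unzip consul_1.10.4_linux_amd64.zip
sudo mv consul /usr/local/bin/
```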

Finally, verify your installation using the following command:

```shell
# Print the installed Consul version to confirm the binary is on your PATH
consul version
```

Maintenance and ease of use

Consul offers users options right out of the gate. You can use the CLI to your heart’s content for more advanced configurations, or opt for the Consul UI. The latter lets you visualize your services, nodes, key-value pairs, and intentions. You can also navigate through access controls. Should you want to dive deeper, the GUI offers a look into topology, instances, intentions, routing, and tags.
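
The quickest way to explore the UI locally is a development-mode agent, which serves the web interface on port 8500; a brief sketch:

```shell
# Start a single-node dev agent with the web UI enabled
consul agent -dev

# In another terminal, confirm the agent is up, then browse to http://localhost:8500/ui
consul members
```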

Consul is meant to be long running, so keeping its node agent updated is essential. The process typically includes the following steps:

  • Performing any compatibility checks
  • Installing the new version on each server
  • Shutting down older versions
  • Rolling out clients in stages

Large version jumps are possible with Consul’s guidance. There may also be some backward-incompatible upgrades in the future, for which Consul has a defined process.

Security

Consul offers encryption and access control while outlining security models based on one’s environment, deployment, and workloads. This ensures that your configuration is airtight and tuned properly. TLS encryption and key-based gossip encryption are supported. It’s easy for admins to stay on top of things, both in real time and from day one.
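
As an example of the gossip side, the encryption key is generated with the CLI and referenced in each agent's configuration (the config path shown is illustrative):

```shell
# Generate a base64-encoded gossip encryption key
consul keygen

# Add the key to every agent's config, e.g. in /etc/consul.d/consul.hcl:
#   encrypt = "<output of consul keygen>"
```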

Features

Consul provides well-rounded control features centered on configuration and segmentation. You can deploy Consul’s data plane using its built-in proxy or an external proxy framework; the latter supports Envoy. Consul runs a local client agent on each node (on Kubernetes, as a pod on every node), and services interact with it through the Consul API.

The service mesh also offers the following features:

  • Service discovery via DNS or HTTP
  • Health checks across different nodes, clusters, and services—plus traffic stoppages to problematic hosts
  • A hierarchical key-value store accessible via API
  • Multi-datacenter support out of the box, without additional layers of abstraction

Overall, Consul is centralized, provides easy service discovery, includes network automation, and even employs a zero-trust security approach. It’s purpose-built for dynamic infrastructure, including multi-cloud environments with private data centers in tow.
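
To illustrate the discovery interfaces from the feature list above, a registered service can be resolved through Consul's DNS port or its HTTP API (the "web" service name is hypothetical):

```shell
# DNS interface: Consul answers *.service.consul queries on port 8600
dig @127.0.0.1 -p 8600 web.service.consul SRV

# HTTP API: list only the healthy instances of the same service
curl "http://127.0.0.1:8500/v1/health/service/web?passing"
```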

Cons

Because Consul is mature and feature-rich, it can be more difficult to master. Additionally, some experimentation may be necessary to get Consul up and running effectively. For example, the company’s documentation on traffic management points to Envoy’s own documentation—complicating matters slightly.

Final thoughts

It’s clear that each project, while taking a unique approach to service meshing, offers a solution that lands close to its competitors’. So it’s not so much about which mesh is best, but which fits your use case and skill level. Linkerd is perhaps best for those with less experience, and it may even be preferred for deployments where every ounce of performance is critical. Meanwhile, something like Consul is best reserved for veterans who need ultimate configurability.

Before making your decision, form a clear understanding of your infrastructure and daily priorities. This will help steer you toward a compatible and capable service mesh.

If you're looking for a platform to build robust internal tools that make it easier to manage your services and applications, then check out Airplane. Airplane is the developer platform for building internal tools. With Airplane, you can build custom workflows and UIs using scripts, queries, APIs, and more. Airplane also offers strong out-of-the-box capabilities, such as job scheduling, audit logs, permissions settings, and more.

To build your first engineering workflow that makes monitoring your applications easy, sign up for a free account or book a demo.  

Tyler Charboneau
Tyler is a hardware-software devotee and researcher. He specializes in simplifying the complex while speaking effectively to all audiences.
