Introducing the Kubernetes Container Runtime Interface

Standards and conventions are an essential part of computer technology. For example, the HTTP and TCP/IP standards make the Internet possible. The JSON and YAML data formats enable platform-agnostic data exchange. The MP3 and MP4 standards make multimedia streaming possible.

However, standards and conventions do not appear by magic. They evolve. Typically, companies start out doing things their own way. Over time, more and more companies begin doing the same thing, and eventually they all standardize, cooperating on the common ground while each company innovates selectively where its own interests lie. TCP/IP, Ethernet, the RS-232 connector for wired connections, and the IEEE 802.11x wireless protocols all emerged from a hodgepodge of earlier proprietary networking technologies.

Standards and conventions expand the possibilities. They don’t limit them.

This is also true of the Kubernetes container orchestration framework. Initially, Kubernetes used only Docker to create the containers that run on the virtual machines in a Kubernetes cluster. Docker was hugely popular at the time and arguably pushed container technology into the computing mainstream. Other container engines emerged, but Kubernetes remained tightly coupled to Docker. This changed in 2016 with the release of Kubernetes version 1.5, which introduced the Container Runtime Interface (CRI) to break that tight binding so that any compliant container engine could run under Kubernetes.

How does it all work? Let’s look at the details and start with the nature of an interface.

Understanding a programming interface

To understand how the CRI works, you first need to understand what an interface is.

An interface is a programming artifact that describes what a software component is supposed to do, but does not describe how to do it. Think of an interface as a contract between the component and the consumer of the component. Both parties agree to interact according to the definition published by the component.

It’s like buying a taco from a taco truck. There’s an implied contract at play. The person at the truck window takes the order, accepts payment, and hands over the taco. The buyer knows how to place an order, pay, and receive the taco. Both parties participate according to the terms described by the common interface for buying a taco. How the taco actually gets made, however, is the private affair of whoever is inside the truck.

The example below shows an interface named ICalculator, written in the C# programming language, that describes the behavior of a calculator.

using System;

interface ICalculator {
  double add(double a, double b);
  double subtract(double a, double b);
  double multiply(double a, double b);
  double divide(double a, double b);
}

Note that the ICalculator interface publishes the method names, their parameters, and the expected return type for each method. But there is no behavior, only a description of the methods an implementing class must provide.

Now take a look at the example below. A class named Calculator implements the ICalculator interface, and the methods of the class have real behavior.

using System;

class Calculator : ICalculator {
   public double add(double a, double b) {
      return a + b;
   }
   public double subtract(double a, double b) {
      return a - b;
   }
   public double multiply(double a, double b) {
      return a * b;
   }
   public double divide(double a, double b) {
      return a / b;
   }
}

In programming parlance, we say that the Calculator class implements the ICalculator interface. Things get really interesting when you add a programming principle called programming to the interface.

Programming to the interface means that the developer works only with the interface of a component. The developer doesn’t need to know the details of a component’s implementation, only what it’s supposed to do, hence the interface. The C# code below demonstrates the concept:

using System;

namespace SimpleInterface
{
    interface ICalculator
    {
        double Add(double a, double b);
        double Subtract(double a, double b);
        double Multiply(double a, double b);
        double Divide(double a, double b);
    }

    // a class that implements the interface, ICalculator
    class Calculator : ICalculator
    {
        public double Add(double a, double b)
        {
            return a + b;
        }
        public double Divide(double a, double b)
        {
            return a / b;
        }

        public double Multiply(double a, double b)
        {
            return a * b;
        }
        public double Subtract(double a, double b)
        {
            return a - b;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            // Create a variable named calculator, typed as the
            // ICalculator interface, that holds a Calculator instance
            ICalculator calculator = new Calculator();
            Console.WriteLine(calculator.Add(1, 2));
            Console.WriteLine(calculator.Subtract(1, 2));
            Console.WriteLine(calculator.Multiply(1, 2));
            Console.WriteLine(calculator.Divide(1, 2));
        }
    }
}

Notice the statement:

ICalculator calculator = new Calculator();

This statement creates a new instance of the Calculator class but assigns it to a variable typed as the ICalculator interface, so only the members of the interface are visible. Thus, the developer programs to the interface.

The implication of programming to the interface is that a developer can write the same code against any number of software components with predictable results. As in the taco truck example above, as long as buyers know how the buy-a-taco interface works, they can buy tacos from any taco truck, no matter what’s going on inside.
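
To see the payoff, consider a second, hypothetical implementation of ICalculator. The CheckedCalculator class below is an invented example that behaves like Calculator except that it rejects division by zero instead of silently returning Infinity:

using System;

// A hypothetical second implementation of the ICalculator interface
// defined above. Only Divide differs from the Calculator class.
class CheckedCalculator : ICalculator
{
    public double Add(double a, double b) { return a + b; }
    public double Subtract(double a, double b) { return a - b; }
    public double Multiply(double a, double b) { return a * b; }
    public double Divide(double a, double b)
    {
        // Guard against division by zero rather than returning Infinity
        if (b == 0)
        {
            throw new DivideByZeroException("Cannot divide by zero.");
        }
        return a / b;
    }
}

The only line in Main that needs to change is the construction, ICalculator calculator = new CheckedCalculator();. All of the code written against ICalculator runs unchanged.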

The principle of programming to the interface carries over directly to container runtimes under Kubernetes. Before looking at this connection, however, we need to look at the role of the kubelet component in Kubernetes container orchestration.

A Kubernetes cluster is separated into two sets of virtual machines: the control plane and the worker nodes. The control plane manages the cluster, while the worker nodes do the actual work of a given application: containers running on one or more worker nodes execute the application logic.

The component that creates containers on a worker node is called kubelet. Each worker node in a Kubernetes cluster runs a kubelet instance.

The control plane determines which containers should be created and which worker node in the cluster has the capacity to host them. It then contacts the kubelet instance running on that node and directs it to create the required containers. The kubelet, in turn, works with components internal to the node to create the containers. (See Figure 1 below.)

Figure 1: A kubelet instance running on each worker node in a Kubernetes cluster creates the containers on that node.

As mentioned earlier, when Kubernetes was first released, kubelet agents worked directly with Docker to build and run containers. Over time, kubelet needed more flexibility, so the tight connection between kubelet and Docker had to be broken. This is where CRI comes in.

The CRI describes the contract by which the kubelet interacts with container engines to create and run containers on a worker node.

The CRI is divided into two parts: the ImageService interface and the RuntimeService interface. These interfaces are defined according to the Protocol Buffers (protobuf) specification, and gRPC serves as the communication protocol between the kubelet and the components that implement the CRI.

The CRI ImageService interface describes methods for working with container images, which are the abstract templates from which a container is created. Operationally, the component that implements the ImageService interface is called a container manager.

The CRI RuntimeService interface describes methods for creating and running containers in memory. The component that implements the CRI RuntimeService interface is called the container runtime.
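
The actual CRI definitions live in a protobuf file and carry many more methods and much richer request and response types. Still, in the spirit of the ICalculator examples above, a deliberately simplified C# analogy of the two services might look like the sketch below; the signatures are illustrative stand-ins, not the real CRI:

using System;

// A rough, simplified analogy of the two CRI services expressed as
// C# interfaces. The real CRI is defined in protobuf and served over
// gRPC, with many more methods and richer message types.
interface IImageService
{
    string[] ListImages();              // enumerate locally stored images
    void PullImage(string imageRef);    // fetch an image from a registry
    void RemoveImage(string imageRef);  // delete a locally stored image
}

interface IRuntimeService
{
    string CreateContainer(string imageRef);  // returns a container ID
    void StartContainer(string containerId);
    void StopContainer(string containerId);
    void RemoveContainer(string containerId);
}

Because the kubelet is written against the CRI contract rather than against any particular engine, any component that implements the contract can plug in underneath, just as Calculator could be swapped for CheckedCalculator earlier.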

There are many container managers available that support the CRI, such as containerd and CRI-O. Examples of container runtimes are runc and crun.

Know that the term container runtime is used in different ways depending on the company and project. Sometimes the term describes both the container manager and the container runtime; other times it refers specifically to the component that actually runs the container. In this article, container runtime refers specifically to the component that creates containers in memory, e.g., runc.

Figure 2 below shows how the kubelet works with the CRI to manage containers.

Figure 2: CRI services describe how containers should be managed and created by kubelet.

Around the time the CRI decoupled Docker from Kubernetes, Docker overhauled its own architecture. Originally, Docker shipped the container manager, the container runtime, and the CLI tools developers use to interact with the container manager in a single deployment unit. This made Docker bulky, and it also violated a fundamental principle of modular software design: a component should do only one thing.

Thus, Docker separated the CLI tool, the container manager (containerd), and the container runtime (runc) into separate deployment units. Containerd and runc support the CRI and are the defaults in most Kubernetes distributions.

CRI gives Kubernetes the flexibility to run a variety of container managers and container runtimes, so businesses can use the container manager and container runtime technology that best suits their needs.

Many companies choose to work with Kubernetes’ default container manager, containerd, and default container runtime, runc. However, some companies have operational requirements to isolate their containers from the host machine’s kernel. In that case, Kata Containers used with containerd creates each container inside a lightweight virtual machine to ensure a high degree of isolation.
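
Kubernetes surfaces this choice through its RuntimeClass resource. The sketch below assumes a worker node whose containerd is already configured with a Kata Containers handler named kata; that handler name is an assumption and must match the node’s actual configuration:

# A RuntimeClass that maps to a runtime handler configured on the node
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata   # assumed handler name in containerd's configuration
---
# A pod that opts into the Kata runtime for stronger isolation
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: nginx

Pods that omit runtimeClassName continue to run under the default runc-based runtime.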

The importance of the CRI is that it makes Kubernetes more flexible. Yet, as is so often the case, increased flexibility brings increased complexity. At the operational level, working with the CRI beyond the default containerd/runc setup requires some familiarity with the low-level aspects of Kubernetes, and there is a significant learning curve.

For companies running complex, mission-critical applications that support millions of users, that steep learning curve is a worthwhile investment. The thing to keep in mind is that the CRI was intended to make Kubernetes more powerful. As demonstrated by the explosion of container manager and container runtime technologies, the CRI is achieving its goal.