Container Performance Outlook

Containers have their origins in the Linux chroot and build on a set of kernel features (a minimal usage sketch follows the list):

  • namespaces (isolated views of the filesystem, process tree, etc.), 
  • cgroups (limit resource usage of CPU, memory, disk I/O, network, etc.), 
  • seccomp (restricts which system calls a process may issue against the OS) and 
  • capabilities (fine-grained privileges, e.g., being allowed to set the system time). 
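
To make this concrete, here is a minimal sketch in Go (assuming Linux and root privileges) that uses only the namespace feature: it re-executes itself in new UTS, PID and mount namespaces and starts a command inside them. The hostname and paths are purely illustrative; cgroups, seccomp and capabilities are left out.

```go
// Minimal sketch (not a real runtime): re-execute this program in fresh
// UTS, PID and mount namespaces and run a command inside. Assumes Linux
// and root privileges; cgroups, seccomp and capabilities are left out.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	switch os.Args[1] {
	case "run":
		// Re-exec ourselves as "child", but inside new namespaces.
		cmd := exec.Command("/proc/self/exe", append([]string{"child"}, os.Args[2:]...)...)
		cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
		cmd.SysProcAttr = &syscall.SysProcAttr{
			Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
		}
		must(cmd.Run())
	case "child":
		// We now have our own hostname, PID 1 and mount table.
		must(syscall.Sethostname([]byte("container")))
		must(syscall.Mount("", "/", "", syscall.MS_REC|syscall.MS_PRIVATE, "")) // keep mounts private
		must(syscall.Mount("proc", "/proc", "proc", 0, ""))                     // so `ps` sees the new PID tree
		cmd := exec.Command(os.Args[2], os.Args[3:]...)
		cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
		must(cmd.Run())
	}
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}
```

Running it as `sudo go run main.go run /bin/sh` and then calling `hostname` or `ps` inside the shell shows the isolated view; a real runtime adds cgroup limits, seccomp filters and capability dropping on top of this.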

For some, a container is the abstraction of how software gets packaged and/or how the execution environment is encapsulated. The difference to hypervisors lies in the operating system (OS): containers on the same hardware share one OS kernel, whereas each virtual machine brings its own. Efficient access to peripherals is the challenge for both. 

On Linux, containers are executed by so-called runtimes. Runtimes fall into two classes: a) low-level and b) high-level. 

  • Low-level runtimes specify images (their format and capabilities such as OS process-tree integration) and how these are interpreted. The commonly known example is the Open Container Initiative (OCI) specification. 
  • High-level runtimes provide a more or less standardized API. The commonly known example is the Container Runtime Interface (CRI), standardized under the Cloud Native Computing Foundation (CNCF) umbrella. In the CRI model, the so-called kubelets are the local node agents, acting as a proxy between the control plane and the low-level container runtime (see the sketch below).
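
As a rough sketch of what "standardized API" means in practice, the snippet below talks to a CRI-compatible high-level runtime over gRPC the same way the kubelet does, using the k8s.io/cri-api Go bindings. The containerd socket path is an assumption; CRI-O, for example, listens on a different socket.

```go
// Sketch: query a CRI-compatible high-level runtime over its gRPC API.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Assumed endpoint: containerd's CRI socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// Version() is the simplest CRI call; it reports which runtime answers.
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("runtime: %s %s (CRI %s)\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```

On a Kubernetes node this is exactly the proxy role described above: the kubelet issues such CRI calls, and the high-level runtime translates them into invocations of the low-level (OCI) runtime.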

Let’s have a look at the low-level container runtimes. The most distinguishing factors are

  • the programming language used (Go, Rust, C, ...), which impacts memory-safety overhead, e.g., garbage collection in Go (illustrated in the sketch after this list) or the missing default type and reference safety in C(++), and
  • the statically and dynamically linked libraries, which impact binary size and memory footprint. 
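
To illustrate the garbage-collection cost mentioned above, the following toy Go program allocates a lot of short-lived objects and then prints the runtime's own GC statistics. The allocation count is arbitrary; the point is only that a GC-based runtime pays for memory safety with collection cycles and pauses.

```go
// Toy sketch of observing Go's garbage-collection overhead.
package main

import (
	"fmt"
	"runtime"
)

var sink []byte // global sink so the allocations are not optimized away

func main() {
	// Allocate a bunch of short-lived buffers to give the GC work.
	for i := 0; i < 1_000_000; i++ {
		sink = make([]byte, 1024)
	}
	runtime.GC() // force a collection so the stats below are populated

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("GC cycles: %d, total GC pause: %d ns, heap in use: %d bytes\n",
		m.NumGC, m.PauseTotalNs, m.HeapInuse)
}
```

C/C++-based runtimes avoid this particular cost but give up the default memory safety, which is exactly the trade-off the bullet points at.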

A common issue for all Linux-based container approaches is latency, often for memory access before/after a trap between user and kernel space (because features such as direct memory access (DMA) are reserved for the OS), as well as throughput for peripheral access. 
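
The trap cost can be made visible with a crude micro-benchmark: timing a cheap system call (which crosses the user/kernel boundary) against a plain function call (which does not). The numbers depend heavily on hardware and kernel mitigations and are only illustrative.

```go
// Rough sketch of the user/kernel boundary cost: time a cheap system call
// (getpid) against a plain function call. Numbers are illustrative only.
package main

import (
	"fmt"
	"syscall"
	"time"
)

//go:noinline
func plainCall() int { return 42 }

func main() {
	const n = 1_000_000

	start := time.Now()
	for i := 0; i < n; i++ {
		syscall.Syscall(syscall.SYS_GETPID, 0, 0, 0) // traps into the kernel
	}
	perSyscall := time.Since(start) / n

	start = time.Now()
	for i := 0; i < n; i++ {
		plainCall() // stays in user space
	}
	perCall := time.Since(start) / n

	fmt.Printf("syscall: ~%v per call, plain function: ~%v per call\n",
		perSyscall, perCall)
}
```

On typical hardware the system call is orders of magnitude slower than the in-process call, which is why designs that trap on every peripheral or memory access suffer.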

In comparison, hypervisors invert the Linux container issues: register access is slow and memory access is fast, because the hypervisor has kernel-space components that expose features such as DMA.

As a side note: some OSes provide features such as CPU and memory-bank dedication, possibly making use of mechanisms like memory-mapped I/O (MMIO), where peripheral access (the corresponding registers) is mapped into the CPU’s memory space. 
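
A sketch of what MMIO looks like from user space on Linux: mapping a peripheral register page from /dev/mem and reading a register as if it were ordinary memory. The base address below is made up for illustration; real addresses come from the SoC datasheet or device tree, and the access typically requires root and a kernel without CONFIG_STRICT_DEVMEM restrictions.

```go
// Sketch of memory-mapped I/O from user space: map a (hypothetical)
// peripheral register page from /dev/mem and read a 32-bit register.
package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"os"
	"syscall"
)

const (
	regBase = 0x3F200000 // hypothetical peripheral base address (page-aligned)
	regSize = 4096       // map one page
)

func main() {
	f, err := os.OpenFile("/dev/mem", os.O_RDWR|os.O_SYNC, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Map the physical register page into this process' address space.
	mem, err := syscall.Mmap(int(f.Fd()), regBase, regSize,
		syscall.PROT_READ|syscall.PROT_WRITE, syscall.MAP_SHARED)
	if err != nil {
		log.Fatal(err)
	}
	defer syscall.Munmap(mem)

	// Registers now appear as plain memory; read the first 32-bit word.
	val := binary.LittleEndian.Uint32(mem[0:4])
	fmt.Printf("register @0x%X = 0x%08X\n", regBase, val)
}
```

Combined with CPU and memory-bank dedication, such mappings let a container or VM touch peripheral registers without paying a trap per access.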

For hypervisors like KVM, the VirtIO specification provides a good framework to overcome the aforementioned downsides. As described here, “a virtio device is a device that exposes a virtio interface for the software to manage and exchange information. It can be exposed to the emulated environment using PCI, Memory Mapping I/O”. The device usually is an emulated one, e.g., by KVM/QEMU. 

Promising future approaches for (Linux) containers would be

  • hardware, e.g., microcontrollers, which by default expose VirtIO interfaces, and
  • programming-language and opcode agnosticity, e.g., via WebAssembly-like runtimes.