When we first published this blog post in 2017, the technology landscape for containers was quite different than it is today. Over the past few years, we have seen significant changes that have affected, and continue to affect, how containers are adopted. Read on to understand the changes and developments we saw, and to hear our view of where we believe containers are heading in the near future.
For this trip, step into my DeLorean time machine, and let’s journey to 1979, when the concept of containers first emerged.
1979: Unix V7
Note to reader: yes, I was less than 10 years old at the time. During the development of Unix V7 in 1979, the chroot system call was introduced, which changes the root directory of a process and its children to a new location in the filesystem. This advance was the beginning of process isolation: segregating file access for each process. Chroot was added to BSD in 1982.
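A hedged sketch of what a chroot "jail" looks like from a shell (requires root; the busybox path is an assumption, chosen because a statically linked shell runs without shared libraries):

```shell
# Build a minimal root directory and confine a shell to it (requires root)
mkdir -p /tmp/jail/bin
cp /bin/busybox /tmp/jail/bin/sh    # statically linked shell (assumption)
chroot /tmp/jail /bin/sh            # '/' for this shell is now /tmp/jail
```

Inside that shell, `ls /` shows only the jail's contents; the rest of the host filesystem is unreachable by path, which is exactly the file-access segregation described above.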
2000: FreeBSD Jails
Flash-forward nearly two decades later to 2000, when a small shared-environment hosting provider came up with FreeBSD jails to achieve clear-cut separation between its own services and those of its customers, for security and ease of administration. FreeBSD jails allow administrators to partition a FreeBSD computer system into several independent, smaller systems – called “jails” – and to assign an IP address and configuration to each of them.
2001: Linux VServer
Like FreeBSD jails, Linux VServer is a jail mechanism that can partition resources (file systems, network addresses, memory) on a computer system. Introduced in 2001, this operating system-level virtualization is implemented by patching the Linux kernel. Experimental patches are still available, but the last stable patch was released in 2006.
2004: Solaris Containers
In 2004, the first public beta of Solaris Containers was released, combining the system resource controls and boundary separation provided by zones, which were able to leverage features like snapshots and cloning from ZFS.
2005: OpenVZ (Open Virtuozzo)
This is an operating system-level virtualization technology for Linux which uses a patched Linux kernel for virtualization, isolation, resource management and checkpointing. The code was not released as part of the official Linux kernel.
2006: Process Containers
Process Containers (launched by Google in 2006) was designed for limiting, accounting for and isolating the resource usage (CPU, memory, disk I/O, network) of a collection of processes. It was renamed “Control Groups” (cgroups) a year later and eventually merged into Linux kernel 2.6.24.
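As a hedged sketch of the mechanism, here is what limiting a process group looks like through the modern cgroup v2 filesystem interface (a later evolution of the 2007-era interface; requires root, and the group name is illustrative):

```shell
# Create a cgroup and cap it at half a CPU (requires root, cgroup v2)
mkdir /sys/fs/cgroup/demo
echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max   # 50ms quota per 100ms period
echo $$ > /sys/fs/cgroup/demo/cgroup.procs          # move this shell into the group
```

Every child process the shell spawns now inherits the CPU cap – the same accounting-and-limiting idea that container runtimes build on.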
2008: LXC
LXC (LinuX Containers) was the first complete implementation of a Linux container manager. It was implemented in 2008 using cgroups and Linux namespaces, and it works on a single Linux kernel without requiring any patches.
2011: Warden
Cloud Foundry started Warden in 2011, using LXC in the early stages and later replacing it with its own implementation. Warden can isolate environments on any operating system, running as a daemon and providing an API for container management. It uses a client-server model to manage a collection of containers across multiple hosts, and includes a service to manage cgroups, namespaces and the process life cycle.
2013: LMCTFY
Let Me Contain That For You (LMCTFY) kicked off in 2013 as an open-source version of Google’s container stack, providing Linux application containers. Applications can be made “container aware,” creating and managing their own subcontainers. Active development of LMCTFY stopped in 2015 after Google started contributing core LMCTFY concepts to libcontainer, which is now part of the Open Container Initiative (OCI).
2013: Docker
When Docker emerged in 2013, containers exploded in popularity. It’s no coincidence the growth of Docker and container use goes hand-in-hand.
Just as Warden did, Docker also used LXC in its initial stages and later replaced that container manager with its own library, libcontainer. But there’s no doubt that Docker separated itself from the pack by offering an entire ecosystem for container management.
2016: The Importance of Container Security Is Revealed
With the wide adoption of container-based applications, systems became more complex and risk increased, laying the groundwork for container security. Vulnerabilities like Dirty COW only furthered this thinking. This led to a shift left in security along the software development lifecycle, making it a key part of each stage in container app development, also known as DevSecOps. The goal is to build secure containers from the ground up without reducing time to market.
2017: Container Tools Become Mature
In 2017, many container management tools entered the mainstream. Kubernetes was adopted by the Cloud Native Computing Foundation (CNCF) in 2016, and in 2017 VMware, Azure, AWS, and Docker announced their support for it.
This was also the year of early tooling that helped manage important aspects of container infrastructure. Ceph and REX-Ray set standards for container storage, while Flannel connected millions of containers across datacenters.
Adoption of rkt and containerd by CNCF
The container ecosystem is unique as it is powered by a community-wide effort and commitment to open source projects. Docker’s donation of the containerd project to the CNCF in 2017 is emblematic of this concept, as is the CNCF’s adoption of the rkt (pronounced “rocket”) container runtime around the same time. This has led to greater collaboration between projects, more choices for users, and a community centered around improving the container ecosystem.
Kubernetes Grows Up
In 2017 the open-source project demonstrated great strides toward becoming a more mature technology. Kubernetes supports increasingly complex classes of applications – enabling enterprise transition to both hybrid cloud and microservices. At DockerCon in Copenhagen, Docker announced it would support the Kubernetes container orchestrator, and Azure and AWS fell in line with AKS (Azure Kubernetes Service) and EKS (Elastic Kubernetes Service), the latter a Kubernetes service to rival AWS’s proprietary ECS. Kubernetes was also the first project adopted by the CNCF, and it commands a growing list of third-party system integration service providers.
2018: The Gold Standard
2018 saw containerization become the foundation for modern software infrastructure and Kubernetes being used for most enterprise container projects. In 2018, the Kubernetes project on GitHub had over 1,500 contributors (today there are more than double that number). The massive adoption of Kubernetes pushed cloud vendors such as AWS, Google with GKE (Google Kubernetes Engine), and Azure, to offer managed Kubernetes services. Furthermore, leading software vendors such as VMware, Red Hat, and Rancher started offering Kubernetes-based management platforms.
Infrastructure provider VMware moved toward adopting Kubernetes when in late 2018 it announced that it was acquiring Heptio, a consulting firm that helps enterprises deploy and manage Kubernetes.
Open source projects such as Kata Containers, gVisor, and Nabla attempt to provide secure container runtimes backed by lightweight virtual machines that perform the way containers do, but provide stronger workload isolation.
Another innovation in 2018 was Podman, a daemonless, open-source, Linux-native tool designed to manage containers and pods (groups of containers).
2019: A Shifting Landscape
The year ushered in significant changes in the container landscape. New runtime engines began to replace the Docker runtime engine, most notably containerd, an open source container runtime engine, and CRI-O, a lightweight runtime engine for Kubernetes.
In 2019 we saw a tectonic shift take place in the container landscape, when Docker Enterprise was acquired by Mirantis and split off, resulting in Docker Swarm being put on a two-year end-of-life horizon. At the same time, we witnessed the decline in popularity of the rkt container engine, though it officially remained part of the CNCF stable.
VMware doubled down on its commitment to Kubernetes by first acquiring Heptio and then Pivotal Software (with both PAS and PKS). The move was intended to give enterprises the cloud-like capabilities of cloud native deployments in their on-premises environments.
2019 also saw advances in the adoption of serverless technology, with platforms such as Knative, a Kubernetes-based platform for managing serverless workloads, gaining traction with organizations.
2019 saw the launch of Kubernetes-based hybrid-cloud solutions such as Google Anthos, AWS Outposts, and Azure Arc. These cloud platforms blur the traditional lines between cloud and on-prem environments, as you can now manage on-prem and cloud clusters side by side.
2020: Kubernetes Grows Up
In 2020, Kubernetes matured and added several features that provided much-needed support for “day 2” operations.
Dockershim Removal from Kubernetes
Kubernetes shocked the industry by announcing the deprecation and subsequent removal of Dockershim, the compatibility layer that allowed Kubernetes to use the Docker runtime through the Container Runtime Interface (CRI). The deprecation was announced with Kubernetes 1.20 in late 2020, and the removal was completed in version 1.24. This was a significant move, as Docker was one of the most popular container runtimes used with Kubernetes.
The removal of Dockershim was not a rejection of Docker, but rather a move toward standardization. Kubernetes wanted to standardize on the Container Runtime Interface (CRI) as the way to interface with all container runtimes, and Dockershim was a non-standard, Docker-specific legacy component. Its removal means that clusters need a CRI-compliant runtime like containerd or CRI-O to run containers on Kubernetes; images built with Docker are OCI-compliant and continue to run unchanged.
Ingress API
Available as a beta feature since Kubernetes 1.1, the Ingress API received numerous enhancements and graduated to general availability in Kubernetes 1.19 (2020). It has become a popular choice among users and is supported by a wide range of load balancers and ingress controllers. The Ingress API handles external access to services, exposing HTTP and HTTPS routes, and performs tasks such as load balancing, name-based virtual hosting, and SSL/TLS termination.
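A minimal Ingress manifest illustrating these routing and TLS features might look like this (host, service, and secret names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # hypothetical name
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls        # TLS certificate stored in a Secret (assumption)
  rules:
    - host: app.example.com      # name-based virtual hosting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-svc    # Service receiving the traffic (assumption)
                port:
                  number: 80
```

An ingress controller (such as one backed by a cloud load balancer) watches objects like this and programs the actual routing and TLS termination.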
Kubectl Node Debugging
Kubectl node debugging enables end users to debug nodes through kubectl by running a debugging pod on the node, rather than restarting nodes or entering containers by hand. Debugging tasks such as filesystem checks, running additional debug utilities, and issuing network requests from the host namespace can all be performed this way.
With this feature, Kubernetes aimed to eliminate the need for SSH in node debugging and maintenance. The feature is available from Kubernetes version 1.20.
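The workflow above amounts to a single kubectl invocation; a hedged sketch (node name and image are placeholders):

```shell
# Start an interactive debugging pod on a specific node; the node's root
# filesystem is mounted at /host inside the pod
kubectl debug node/worker-1 -it --image=busybox

# Inside the debug shell, switch into the host's filesystem to run checks:
#   chroot /host
```

Note that the debugging pod is not removed automatically when the session ends and must be deleted afterwards.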
Kubernetes Topology Manager
High-performance workloads often require a combination of CPUs and hardware accelerators for parallel computation and high throughput. Optimizations such as CPU isolation, memory, and device allocations are essential for achieving top performance.
The Kubernetes Topology Manager, which reached beta in version 1.18, is a kubelet component that reduces latency and enhances performance in critical applications. It acts as a single source of truth for topology-aware resource allocation, collecting hints from components such as the CPU Manager and Device Manager through an interface called Hint Providers. This enables the kubelet to make resource allocation decisions aligned with the hardware topology – for example, keeping a pod’s CPUs and devices on the same NUMA node – delivering low latency and optimized performance for critical workloads.
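Topology Manager is configured per node through the kubelet. A sketch of a kubelet configuration fragment enabling NUMA-aligned allocation (policy names as documented in the Kubernetes docs; treat this as illustrative, not a drop-in config):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Require all of a pod's resources (CPUs, devices) to come from one NUMA node
topologyManagerPolicy: single-numa-node
# The static CPU manager policy is needed so exclusive CPUs can be aligned
cpuManagerPolicy: static
```

With `single-numa-node`, pods whose resources cannot be satisfied from a single NUMA node are rejected at admission rather than scheduled with degraded locality.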
2021: Containers for the Enterprise
2021 saw numerous developments in Kubernetes, ranging from autoscaling and security improvements to third-party tools that introduce new capabilities. Many vendors worked to make Kubernetes more user-friendly and accessible for organizations.
Multicluster Kubernetes management
Multicluster management became a top priority in 2021. Projects such as the Cluster API, the Kubernetes Multi-Cluster API, Hypershift, and kcp aimed to help organizations better manage multicluster Kubernetes environments. These initiatives emerged in response to the growing adoption of GitOps, cloud and edge computing, and increasing demand for multiclusters as organizations expand.
Kubernetes autoscaling evolves
The Kubernetes Event-Driven Autoscaling (KEDA) project matured and was accepted as a CNCF incubating project as adoption among end users grew. KEDA, installed as a Kubernetes operator, adds or removes cluster resources based on events from external data sources. This development marked the growth and expansion of Kubernetes deployments in the industry.
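As a sketch, a KEDA ScaledObject that scales a deployment on queue depth might look like this (names and trigger metadata are illustrative; consult the KEDA scaler docs for the exact fields of each trigger type):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-scaler            # hypothetical name
spec:
  scaleTargetRef:
    name: orders-consumer        # Deployment to scale (hypothetical)
  minReplicaCount: 0             # scale to zero when there is no work
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq             # one of many external event sources KEDA supports
      metadata:
        queueName: orders
        mode: QueueLength
        value: "50"              # target messages per replica (assumption)
```

The operator watches the external source and adjusts the deployment’s replica count accordingly, including scaling to and from zero.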
MITRE ATT&CK Framework for Containers
The ATT&CK framework for containers was developed to provide a detailed understanding of the security risks associated with container environments, and how attacks on these environments can be detected and prevented. It includes a variety of tactics and techniques, such as Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Exfiltration, and Command and Control.
This framework has become an indispensable tool for security professionals dealing with container environments. It provides them with the knowledge and tools needed to defend against attacks and to respond effectively when an attack does occur.
eBPF Foundation
Extended Berkeley Packet Filter (eBPF) is a technology that can run sandboxed programs in the Linux kernel without changing kernel source code or loading kernel modules. The eBPF Foundation, established under the Linux Foundation in 2021, aims to develop and promote the adoption of eBPF technology. It works to improve eBPF and to foster innovative, high-performance networking, security, and tracing tools for Linux, which have become widely used in the cloud native community.
2022: Record Adoption of Container Technologies
2022 was a significant year for cloud-native technologies, particularly for Kubernetes, which emerged as the preferred platform for digital transformation and cloud-native workloads.
Record adoption of Kubernetes
Kubernetes experienced tremendous growth throughout the year. According to a 2022 report from the CNCF, 96% of surveyed participants were either using or evaluating Kubernetes, and 79% used managed services like EKS, AKS, or GKE.
Kubernetes becomes widely accessible
Initially, Kubernetes appeared to be a tool that only large enterprises could benefit from due to its steep learning curve and the need for experts to wield it. However, advancements in usability and the rise of managed services offerings have made Kubernetes accessible for small-to-medium businesses in 2022.
Azure Container Apps
Azure Container Apps, which became generally available in 2022, is a serverless container service offered by Microsoft Azure. It allows developers to deploy and run containerized applications on a fully managed platform, and offers features like automatic scaling, integrated CI/CD, and enterprise-grade security.
Increase in edge usage
2022 also saw a growing interest in deploying Kubernetes at the edge and within bare-metal instances. The move to the edge was driven by several factors, including the need to run high-throughput computation, such as artificial intelligence, closer to the data source. Several CNCF open-source projects, including KubeEdge, SuperEdge, and Akri, can facilitate edge Kubernetes deployments.
Increasing use of stateful deployments
While containers are designed to be ephemeral and stateless, most applications still require some form of persistent storage. The community has developed several approaches to enable stateful deployments in Kubernetes. These include improved support for persistent volumes (PVs), Kubernetes-native backup architectures, automated data backup plans, recovery in the correct order, and backup processes that are agnostic to the underlying database type.
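The basic building block for persistent storage in Kubernetes is the PersistentVolumeClaim; a minimal sketch (the claim and storage class names are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                  # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce              # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard     # assumes a 'standard' StorageClass exists
```

A pod then references the claim under `spec.volumes`, and the cluster’s storage provisioner binds it to a PersistentVolume that outlives any individual container.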