What Is a Container?

Rani Osnat
January 15, 2023

What Is Container Technology?

Containers provide a lightweight package that lets you deploy applications anywhere, making them more portable. Container images package services and applications together with their configurations and dependencies. This makes development and testing easier, because applications can be automatically deployed to a realistic staging environment. It also eases scalability in production environments.

Another important element of containers is that they are immutable, meaning that at least in principle, they should not be changed after being deployed. To modify a container, you tear it down and deploy a new one. 
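
For example, here is a minimal sketch of that replace-rather-than-modify workflow using the Docker CLI (the container and image names, such as myapp, are hypothetical):

  # Tear down the running container instead of patching it in place
  docker stop myapp
  docker rm myapp

  # Deploy a replacement from a newer image tag (or an older one, to roll back)
  docker run -d --name myapp registry.example.com/myapp:2.0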

The unique properties of containers make them much more reliable than traditional infrastructure: deployments become repeatable, and you can easily roll back by deploying an older version of a container image. Immutability also makes it possible to deploy the same container image in development, testing, and production environments, supporting agile development principles.

This is part of our series of articles about Docker containers.

What Are the Benefits of Containers?

Containers provide a highly effective way to deploy applications and services at scale on any hardware. Applications or services running as containers use a small fraction of the resources on the host (enabling a large number of containers to run on one host). They are well isolated, so they don’t interfere with each other or directly affect the host’s operations.

Here are the main benefits of containers compared to other ways of running software on host infrastructure:

  • Lightweight—because containers share the system’s operating system kernel, there is no need to run a complete operating system instance for each application, reducing the size of container files and resources needed. Containers can start quickly, are torn down easily, and are easy to scale horizontally, meaning they can better support cloud-native applications.
  • Portability and platform independence—containers carry all their dependencies with them. This means that the same software can be built once and run consistently on laptops, on-premises hardware, or in the cloud, with no reconfiguration required.
  • Support for modern architectures—containers can be constructed from a simple configuration file and have a high level of portability and consistency. This makes them highly suitable for DevOps, microservices architectures, and serverless computing, in which software is built from small components that are iteratively developed.
  • Increased utilization—containers allow developers and operators to increase CPU and memory utilization on physical machines. Containers allow granular deployment and scaling of application components, which can support microservices design patterns.

How Containers Work

In a containerized environment, the host operating system controls each container’s access to computing resources (i.e., storage, memory, CPU) to ensure that no container consumes all the host’s resources. 
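
As an illustration, most container engines expose this control through resource limits; here is a sketch with the Docker CLI (the image name is hypothetical):

  # Cap the container at half a CPU core and 256 MB of memory
  docker run -d --name web --cpus="0.5" --memory="256m" myapp:1.0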

A container image file is a static, complete, executable version of a service or application. Different technologies use different image types. A Docker image comprises several read-only layers, starting with a base image that contains the dependencies needed to execute the container's code. At runtime, these static layers are topped with a thin readable and writable container layer. Because every container gets its own writable layer, the underlying image layers are reusable—developers can save and apply them to other containers.

A container engine executes the container images. Most organizations use container orchestration or scheduling solutions like Kubernetes to manage their container deployments. Containers are highly portable because every image contains the dependencies required to execute its code.

The main advantage of containerization is that users can execute a container image on a cloud instance for testing and then deploy it on an on-premises production server. The application performs correctly in both environments without requiring changes to the code within a container. 
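
A brief sketch of that workflow, assuming a hypothetical registry at registry.example.com:

  # Build and tag the image once, then push it to a registry
  docker build -t registry.example.com/myapp:1.0 .
  docker push registry.example.com/myapp:1.0

  # On any other host (a cloud test instance or an on-premises server), pull and run the same image
  docker pull registry.example.com/myapp:1.0
  docker run -d -p 8080:8080 registry.example.com/myapp:1.0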

What Is a Container Image?

A container image is a static, immutable file with instructions that specify how a container should run and what should run inside it. An image contains the executable code that runs as an isolated process on IT infrastructure, along with platform components, such as system libraries and tools, that the software needs to run on a containerization platform like Docker.

Container images are compiled from file system layers built onto a base or parent image. The term base image usually refers to a new image with basic infrastructure components, to which developers can add their own custom components. Compiling a container image using layers enables you to reuse components rather than creating each image from scratch.
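
For example, in this minimal, hypothetical Dockerfile, each instruction adds a layer on top of the python:3.11-slim base image, and unchanged layers are reused from the build cache on subsequent builds:

  # Base (parent) image providing the OS libraries and the Python runtime
  FROM python:3.11-slim

  # Each instruction below adds another read-only layer
  WORKDIR /app
  COPY requirements.txt .
  RUN pip install --no-cache-dir -r requirements.txt
  COPY . .

  CMD ["python", "app.py"]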

What Is Docker?

Docker is an open source platform for creating, deploying, and managing virtualized application containers. It offers an ecosystem of tools for packaging, provisioning, and running containers.

Docker utilizes a client-server architecture. Here is how it works:

  • The daemon deploys containers—a Docker client talks to a daemon that builds, runs and distributes Docker containers. 
  • Clients and daemons can share resources—a Docker daemon and client can run on the same system. Alternatively, you can connect the client to a remote daemon. 
  • Clients and daemons communicate via APIs—a Docker daemon and client can communicate via a REST API over a network interface or UNIX sockets, as sketched below.
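
Here is a sketch of both communication paths (the remote hostname is hypothetical; the UNIX socket path is the Docker default on Linux):

  # Talk to the local daemon over its UNIX socket, the default on Linux
  curl --unix-socket /var/run/docker.sock http://localhost/version

  # Point the client at a remote daemon over the network instead
  DOCKER_HOST=tcp://docker-host.example.com:2376 docker ps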

What Are Windows Containers?

In the past, Docker Toolbox, a variant of Docker for Windows, ran a VirtualBox instance with a Linux operating system on top of it. It allowed Windows developers to test containers before deploying them on production Linux servers.

Later, Microsoft adopted container technology, enabling containers to run natively on Windows 10 and Windows Server. Microsoft and Docker worked together to build a native Docker for Windows variant. Support for Kubernetes and Docker Swarm followed shortly after.

It is now possible to create and run native Windows and Linux containers on Windows 10 devices. You can also deploy and orchestrate these on Windows servers or Linux servers if you use Linux containers. 
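
For instance, on a Windows host with Docker switched to Windows-container mode, a native Windows container can be run directly from one of Microsoft's base images (the tag shown is one common example and may differ on your system):

  # Run a native Windows container from the Nano Server base image
  docker run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c echo Hello from a Windows container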

What Is Windows Subsystem for Linux?

The Windows Subsystem for Linux (WSL) lets you run a Linux file system, Linux command-line tools, and GUI applications directly on Windows. WSL is a feature of the Windows operating system that enables you to use Linux with the traditional Windows desktop and applications. 

Here are common WSL use cases:

  • Use Bash, Linux-first languages and frameworks like Ruby and Python, and common Linux tools like sed and awk alongside Windows productivity tools.
  • Run various Linux distributions, such as Ubuntu, openSUSE, Debian, Kali, and Alpine, in a Bash shell, using their command-line tools and applications.
  • Use Windows applications and Linux command-line tools on the same set of files.

WSL requires less CPU, memory, and storage resources than a VM. Developers use WSL to deploy to Linux server environments or work on open source web development projects. 
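
A short sketch of getting started with WSL and mixing Linux tools with Windows files (the paths are illustrative; /mnt/c is WSL's default mount point for the C: drive):

  # From PowerShell: install WSL with the default Ubuntu distribution
  wsl --install -d Ubuntu

  # From the WSL shell: run Linux tools directly against files on the Windows C: drive
  grep -ri "todo" /mnt/c/Users/dev/projects/my-site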

What Are Container Runtimes?

Containers are lightweight, isolated virtual entities that include their dependencies. They require a container runtime (which typically comes with the container engine) that can unpack the container image file and translate it into a process that can run on a computer.

You can find various types of available container runtimes. Ideally, you should choose the runtime compatible with the container engine of your choice. Here are key container runtimes to consider:

  • containerd—this container runtime manages the container lifecycle on a host, which can be a physical or virtual machine (VM). containerd is a daemon process that can create, start, stop, and destroy containers. It can also pull container images from registries, enable networking for a container, and mount storage.
  • LXC—this Linux container runtime consists of templates, tools, and language and library bindings. LXC is low-level, highly flexible, and covers all containment features supported by the upstream kernel.
  • CRI-O—this is an implementation of the Kubernetes Container Runtime Interface (CRI) that enables you to use Open Container Initiative (OCI)-compatible runtimes. CRI-O offers a lightweight alternative to employing Docker as a runtime for Kubernetes. It lets Kubernetes use any OCI-compliant runtime as a container runtime for running pods. CRI-O supports Kata and runc as container runtimes, but you can plug in any OCI-conformant runtime.
  • Kata—a Kata container can improve the isolation and security of container workloads. It offers the benefits of using a hypervisor, including enhanced security, alongside the container orchestration functionality provided by Kubernetes. Unlike the runc runtime, the Kata container runtime uses a hypervisor for isolation when spawning containers, creating lightweight VMs and placing containers inside them.

The Open Container Initiative (OCI) is a set of open standards that helps developers build container runtimes that work with Kubernetes and other container orchestrators. It specifies how a runtime should function, and it also includes a standard specification for container images, covering the image manifest, configuration, and file-system layers.
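
As a small illustration, containerd ships with a low-level ctr client that pulls an OCI image and runs it straight through the runtime, without Docker in the picture (the container name "demo" is arbitrary):

  # Pull an OCI image and start a container directly through containerd
  ctr images pull docker.io/library/alpine:latest
  ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"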

Learn more in our detailed guide to container runtimes

Containers vs. Virtual Machines

A virtual machine (VM) is an environment created on a physical hardware system that acts as a virtual computer system with its own CPU, memory, network interfaces, and storage. Each VM runs a “guest operating system” on top of the “host operating system” installed directly on the host machine.

Containerization and virtualization are similar in that both let applications run in multiple environments. The main differences are size, portability, and the level of isolation:

  • VMs—Each VM has its own operating system, which can perform multiple resource-intensive functions at once. Because more resources are available on the VM, it can abstract, partition, clone, and emulate servers, operating systems, desktops, databases, and networks. A VM has strong isolation because it runs its own operating system.
  • Containers—run specific packaged applications, their dependencies, and the minimal execution environment they require. A container typically runs one or more applications, and does not attempt to emulate or replicate an entire server. A container has inherently weaker isolation because it shares the operating system kernel with other containers and processes.

Learn more in our guide to Docker vs. virtual machines

Containers and Kubernetes

Kubernetes is a container orchestration platform provided as open source software. It enables you to unify a cluster of machines as a single pool of computing resources. You can employ Kubernetes to organize applications into groups of containers. Kubernetes runs those containers through a container runtime (such as containerd or CRI-O), ensuring your application runs as intended.

Here are key features of Kubernetes:

  • Compute scheduling—Kubernetes automatically considers the resource needs of containers to find a suitable place to run them (see the manifest sketch after this list).
  • Self-healing—when a container crashes, Kubernetes creates a new one to replace it.
  • Horizontal scaling—Kubernetes can observe CPU or custom metrics and add or remove instances according to actual needs.
  • Volume management—Kubernetes can manage your application’s persistent storage.
  • Service discovery and load balancing—Kubernetes can expose a group of instances under a single IP address or DNS name and load balance traffic across them.
  • Automated rollouts and rollbacks—Kubernetes monitors the health of new instances during updates. The platform can automatically roll back to a previous version if a failure occurs.
  • Secret and configuration management—Kubernetes can manage secrets and application configuration.
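
For example, a minimal, hypothetical Deployment manifest exercises several of these features at once: resource requests feed compute scheduling, the replica count enables horizontal scaling, and the liveness probe drives self-healing:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: myapp
  spec:
    replicas: 3                        # horizontal scaling: three identical instances
    selector:
      matchLabels:
        app: myapp
    template:
      metadata:
        labels:
          app: myapp
      spec:
        containers:
          - name: myapp
            image: registry.example.com/myapp:1.0
            resources:
              requests:                # informs compute scheduling decisions
                cpu: 250m
                memory: 128Mi
            livenessProbe:             # failed probes trigger self-healing replacement
              httpGet:
                path: /healthz
                port: 8080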

Containers serve as the foundation of modern, cloud native applications. Docker offers the tools needed to create container images easily, and Kubernetes provides a platform that runs everything.

Best Practices for Building Container Images

Use the following best practices when writing Dockerfiles to build images:

  • Ephemeral—you should build containers as ephemeral entities that you can stop or delete at any moment. This enables you to replace a container with a new one from the Dockerfile with minimal configuration and setup.
  • dockerignore—a .dockerignore file can help you reduce image size and build time by excluding unnecessary files from the build context. By default, the build context includes the recursive contents of the directory in which the Dockerfile resides; .dockerignore lets you specify files that should not be included (see the sketch after this list).
  • Size—you should reduce image file sizes to minimize the attack surface. Use small base images such as Alpine Linux or distroless Linux images. However, you do need to keep Dockerfiles readable. You can apply a multi-stage build (available only for Docker 17.05 or higher) or a builder pattern.
  • Multi-stage build—this build lets you use multiple FROM statements within a single Dockerfile. It enables you to selectively copy artifacts from one stage to another, leaving behind anything unneeded in the final image. You can use it to reduce image file sizes without maintaining separate Dockerfiles and custom scripts for a builder pattern.
  • Packages—never install unnecessary packages when you build images.
  • Commands—avoid unnecessary RUN commands. When possible, combine related commands into a single multi-line RUN instruction, for example, when you need to install a list of packages. This reduces the number of image layers and speeds up builds.
  • Linters—use a linter to automatically catch errors in your Dockerfile and clean up your syntax and layout.
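
Here is a hedged sketch that ties several of these practices together: a .dockerignore file plus a small multi-stage Dockerfile (the Go module layout and file names are hypothetical):

  # .dockerignore: keep unnecessary files out of the build context
  .git
  node_modules
  *.log

  # Dockerfile
  # Stage 1: build the application with the full toolchain
  FROM golang:1.21-alpine AS build
  WORKDIR /src
  COPY . .
  RUN go build -o /out/myapp ./cmd/myapp

  # Stage 2: copy only the compiled binary into a minimal final image
  FROM alpine:3.19
  RUN apk add --no-cache ca-certificates
  COPY --from=build /out/myapp /usr/local/bin/myapp
  USER nobody
  ENTRYPOINT ["/usr/local/bin/myapp"]

The final image contains only the compiled binary and its runtime dependencies, not the Go toolchain used to build it.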

Learn more in our detailed guide to container images

Best Practices for Container Security

Container security is a process that includes various steps. It covers container building, content and configuration assessment, runtime assessment, and risk analysis. Here are key security best practices for containers:

  • Prefer slim containers—you can minimize the application’s attack surface by removing unnecessary components.
  • Use only trusted base images—the CI/CD process should only include images that were previously scanned and tested for reliability.
  • Harden the host operating system—you should use a script to configure the host properly according to CIS benchmarks. You can use a lightweight Linux distribution designed for hosting containers, such as CoreOS or Red Hat Enterprise Linux Atomic Host.
  • Remove privileges—you should never run a privileged container, because it allows malicious users to take over the host system and threatens your entire infrastructure (see the sketch after this list).
  • Manage secrets—a secret can include database credentials, SSL keys, encryption keys, or API keys. You must manage secrets so that they cannot be discovered by unauthorized parties.
  • Run source code tests—software composition analysis (SCA) and static application security testing (SAST) tools have evolved to support DevOps and automation. They are integral to container security, helping you track open source software, license restrictions, and code vulnerabilities.
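
As one concrete illustration of running without privileges (the image name and user ID are hypothetical):

  # Run as an unprivileged user, drop all Linux capabilities, and mount the root filesystem read-only
  docker run -d \
    --user 1000:1000 \
    --cap-drop ALL \
    --read-only \
    myapp:1.0
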
Rani Osnat
Rani is the SVP of Strategy at Aqua. Rani has worked in enterprise software companies for more than 25 years, spanning project management, product management, and marketing, including a decade as VP of marketing for innovative startups in the cyber-security and cloud arenas. Previously, Rani was a management consultant in the London office of Booz & Co. He holds an MBA from INSEAD in Fontainebleau, France. Rani is an avid wine geek, and a slightly less avid painter and electronic music composer.