
How Do Containers Actually Work? Container Images

Writer: TheNextHacker

Updated: Apr 3, 2023


A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.

It is a read-only template that contains all the instructions needed to create a container. Container images are designed to be portable, so they can be easily shared, deployed, and run on any system that supports containerization, without the need for additional dependencies or modifications.

Container images are typically packaged using a layered file system approach, where each layer contains a set of changes or modifications to the previous layer. This allows container images to be built and updated quickly and efficiently, as only the layers that have changed need to be rebuilt or downloaded. The base layer typically contains the operating system, while subsequent layers contain additional software, libraries, and application code.

Each layer is identified by a unique identifier or hash, which is used to cache and retrieve the layer from a registry or cache. Container images can be built using a variety of tools and technologies, such as Dockerfiles, build scripts, or configuration management tools, and can be stored in local or remote registries for distribution and sharing.
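To see this layering in practice, here is a minimal sketch that prints the content-addressed digest of every layer in an image; it assumes you already have the python:3.9-slim-buster image (used later in this post) pulled locally:

# Print the sha256 digest of each layer that makes up the image
docker image inspect --format '{{json .RootFS.Layers}}' python:3.9-slim-buster

Each digest is a hash of the layer's contents, which is what lets Docker cache layers and share them between images.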

The internals of a container image are made up of a number of different components, including:

  • A filesystem: The filesystem contains the code, libraries, and other files that are needed to run the application.

  • A runtime: The runtime is responsible for executing the application's code.

  • A set of system tools: The system tools include utilities like ls, cat, and pwd that are used to manage files and directories.

  • A set of system libraries: The system libraries provide the functionality that the application needs to run, such as networking and file I/O.

  • A set of settings: The settings include things like the application's environment variables, its working directory, and its default command (see the sketch just after this list for how to view them).
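As a quick illustration of where these settings live, here is a hedged sketch that reads the environment variables, working directory, and default command stored in an image's configuration; the image name my-image is only a placeholder:

# Show the settings stored in the image configuration
docker image inspect --format 'Env: {{.Config.Env}}' my-image
docker image inspect --format 'WorkingDir: {{.Config.WorkingDir}}' my-image
docker image inspect --format 'Cmd: {{.Config.Cmd}}' my-image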

All of these components are packaged together into a single image that can be used to run the application on any Docker-compatible platform. Let's take a deep dive into container images.

In container technology, layered file systems are used to package and distribute container images. The file system inside a container image consists of the set of files and directories that together make up a complete application or service.

Here are some examples of how file systems work in container images:

Suppose you have a simple Python web application that includes a set of HTML templates, CSS files, and Python scripts. You can create a Dockerfile that includes the necessary files and configurations to build the container image. When the image is built, the files and directories are organized into layers that make up the complete file system for the container.

Here is an example Dockerfile:


# Start from a slim Python 3.9 base image
FROM python:3.9-slim-buster

# Install the Python dependencies first so this layer is cached
# when only the application code changes
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt

# Copy the application code into the image
COPY . /app

# Set the working directory and the default command
WORKDIR /app

CMD [ "python", "app.py" ]

In this example, the base image is python:3.9-slim-buster, which includes the Python runtime and the system libraries needed to run the application. The COPY and RUN instructions copy the application files and dependency list into the image and install the dependencies.
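To turn this Dockerfile into an image and run it, you would use something like the following sketch; the tag my-image matches the examples later in this post, and the port mapping is only an assumption about what app.py listens on:

# Build the image from the Dockerfile in the current directory
docker build -t my-image .

# Run a container from the image
# (the 5000:5000 port mapping assumes app.py serves on port 5000)
docker run --rm -p 5000:5000 my-image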


How does this Dockerfile build multiple layers of the Linux filesystem?

When you build a Docker image from a Dockerfile, each instruction in the Dockerfile adds a new layer to the image, resulting in a set of stacked read-only file systems. Instructions that change the filesystem, such as RUN, COPY, and ADD, produce new filesystem layers, while instructions like WORKDIR and CMD only record metadata. This layering allows for efficient use of storage space and enables easy sharing and re-use of common layers across multiple images.

Each layer represents a change to the file system, such as adding a new file, modifying an existing file, or installing a package. Each layer is identified by a unique hash value, which is generated from the contents of the layer. You can use the docker history command to see the layers that were used to build a Docker image: it shows the history of the image, including the layers that were created and the instruction that produced each one.


To see the layers of the image built from the example Dockerfile above, you can run the following command:

docker history <image-name>

For example, if the image was built with the tag my-image, you would run:

docker history my-image

In the output you will see one row per Dockerfile instruction: the RUN pip install step, the two COPY steps, and the WORKDIR and CMD instructions (which add only metadata and therefore show a size of 0B), along with the rows inherited from the python:3.9-slim-buster base image.
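If the default docker history output is hard to read, you can narrow it down with a Go template; this sketch prints just the instruction that created each layer and the layer's size:

# Show the creating instruction and size of each layer, without truncation
docker history --no-trunc --format '{{.CreatedBy}}: {{.Size}}' my-image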

If you want to examine the layers of a Docker image directly on disk, rather than through the docker history output, you can use the tar command to extract the contents of the image and then examine the resulting directory structure.

First, you'll need to find the location of the Docker image file. This can be done using the docker image inspect command, which will show you the location of the image's layers on disk. For example:
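Here is a sketch of that command (the exact output depends on which storage driver your Docker daemon uses, such as aufs or overlay2):

# Show where the storage driver keeps this image's layer data on disk
docker image inspect --format '{{json .GraphDriver}}' my-image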



In this example, Docker is using the aufs storage driver, and the layers are stored under directories such as /var/lib/docker/aufs/diff/01234567, /var/lib/docker/aufs/diff/89012345, and /var/lib/docker/aufs/diff/67890abc. On newer installations that use the overlay2 driver, the layer directories live under /var/lib/docker/overlay2/ instead.

Next, you can use the tar command to extract the contents of the image:
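For instance, a minimal sketch assuming the image is tagged my-image and you want its contents in a directory called myimage:

# Export the image to a tar archive
docker save my-image -o myimage.tar

# Unpack the archive into a working directory
mkdir myimage
tar -xf myimage.tar -C myimage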

This will create a directory called myimage containing the contents of the Docker image, with each layer stored as a separate tarball alongside the image's manifest.json.

Then, you can locate the manifest and the layer tarballs inside the extracted archive:
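For example, continuing with the myimage directory from the previous step (the exact layout of the archive varies between Docker versions):

# List what the exported archive contains
tar -tf myimage.tar | head

# Pretty-print the manifest, which maps the image to its layer tarballs
jq . myimage/manifest.json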



The manifest is a JSON file that describes the image's layers and configuration. Each layer is identified by a SHA256 hash and corresponds to a tarball inside the archive. You can use the tar command to extract the contents of each layer tarball, as the complete example below shows.


Here's an example of how to inspect the layers of a Docker image using Linux commands:



# Export the Docker image to a tar archive
docker save alpine:latest > alpine.tar

# Extract the manifest and layer tarballs from the archive
mkdir alpine && cd alpine
tar -xf ../alpine.tar
cat manifest.json

# Extract the contents of the first layer tarball
# (manifest.json lists the path of each layer inside the archive)
mkdir layers && cd layers
tar -xf "../$(jq -r '.[0].Layers[0]' ../manifest.json)"
ls -l

# Inspect the contents of the layer
cat etc/os-release

Running these commands extracts the first layer of the alpine image into the layers directory: the ls -l output lists the files and directories contained in that layer, and cat etc/os-release shows the Alpine Linux release information stored inside it.

In conclusion, container images play a crucial role in the world of containers by providing a lightweight, portable, and isolated environment for applications to run in. These images are built using various underlying technologies, including a layered filesystem, tar archives, and overlays. Understanding these technologies is important for efficient management and optimization of container images.

Thank you for your support! If you find my content informative and helpful, please consider subscribing to my page/channel and sharing it with others who may find it valuable. Your support is greatly appreciated and helps me reach a wider audience.

Facebook: https://www.facebook.com/thenexthacker

 
 
 
