Introduction to Container Architecture with Docker: Core Concepts and Basic Commands
Hello dear readers, I am Gökhan Güngör. In today's rapidly evolving software world, running applications more efficiently, consistently, and portably has become a priority for every IT professional and developer. This is precisely where container technology and its most popular representative, Docker, come into play. In this article, we will delve into container architecture with Docker, explain core concepts, and examine the most frequently used Docker commands step by step.
What is Container Architecture? Why Do We Need It?
In traditional virtualization methods (VMs), each application runs on a separate virtual machine with its own operating system. This increases resource consumption and prolongs startup times. Containers, on the other hand, are lightweight, portable structures that run the application and all its dependencies (libraries, settings, etc.) within an isolated package, without needing their own operating system. Containers share the host operating system's kernel.
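This kernel sharing is easy to observe first-hand. Assuming Docker is already installed (installation is covered below), the following sketch compares the kernel version reported by the host with the one reported inside a container; on a Linux host the two match, because the container has no kernel of its own:

```shell
# Kernel version seen by the host (on a Linux machine)
uname -r

# Kernel version seen inside a minimal Alpine container;
# --rm removes the container again once the command exits.
# The output matches the host, because containers share the host kernel.
docker run --rm alpine uname -r
```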
Advantages Provided by Containers:
- Portability: Containers guarantee that applications run identically in every environment (development, testing, production). The "it worked on my machine!" problem disappears.
- Isolation: Applications run in isolated environments without affecting each other. A problem in one container does not impact others.
- Efficiency: They consume significantly fewer resources (CPU, RAM) compared to virtual machines. They start much faster.
- Consistency: Ensures the same environment at every stage, from development to production.
- Rapid Deployment and Scalability: New containers can be launched in seconds, allowing applications to scale quickly.
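The startup-speed claim is easy to check yourself. As a rough, illustrative measurement (assuming Docker is installed and the alpine image has already been pulled):

```shell
# Create, start, run, and clean up a container, timing the whole round trip;
# with the image already local, this typically completes in under a second
time docker run --rm alpine echo "container started"
```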
What is Docker and Its Place in the Container World
Docker is an open-source platform that makes container technology accessible and easy to use for everyone. It provides a set of tools and services for building, distributing, and running applications within containers. Docker manages containers using the isolation features (cgroups and namespaces) provided by the Linux kernel.
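As a small illustration of cgroups at work, a container's resource consumption can be capped at startup. The container name limited-nginx below is just an example choice:

```shell
# Limit the container to 256 MB of RAM and half a CPU core (enforced via cgroups)
docker run -d --name limited-nginx --memory=256m --cpus=0.5 nginx:latest

# Read the limits back from Docker's metadata
# (memory is reported in bytes, CPU in billionths of a core)
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited-nginx
```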
Core Components of Docker:
- Docker Engine: Consists of Docker Daemon (dockerd), REST API, and Docker CLI. It is the background service that creates, runs, and manages containers.
- Docker Images: Executable, read-only templates that contain your application and all its dependencies. Multiple containers can be created from a single Docker image.
- Docker Containers: Runnable instances of Docker images. They are the isolated and running form of your application.
- Docker Hub (Registries): A central repository used for storing and sharing Docker images. Docker Hub is the most well-known, but private registries can also be used.
- Dockerfile: A text-based script that defines step-by-step how a Docker image should be built.
Docker Installation (Overview)
Installing Docker is quite straightforward, and official installation guides are available for many operating systems (Linux, Windows, macOS). It can typically be installed easily via an operating system-specific package manager or the Docker Desktop application. We will not cover installation details in this article, but it is recommended to refer to the official Docker documentation.
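After installation, a quick smoke test is worthwhile before moving on. The following sketch checks that both the client and the background daemon are reachable, then runs Docker's official hello-world test image:

```shell
# Client version only (works even if the daemon is not running)
docker --version

# Talks to the daemon; fails if the Docker service is not up
docker info

# End-to-end test: pulls and runs the official hello-world image
docker run hello-world
```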
Basic Docker Commands
Here are the essential commands you need to know to start working with Docker:
1. Check Docker Version:
Checks if Docker is installed on your system and which version is running.
docker --version
2. Pull an Image:
Downloads a Docker image from a registry like Docker Hub to your local system.
docker pull ubuntu:latest
3. List Images:
Lists all Docker images on your local system.
docker images
4. Run a Container:
Creates and runs a new container from a Docker image. The -d flag runs the container in the background (detached mode), and -p maps a host port to a container port.
docker run -d -p 80:80 nginx:latest
This command starts a container from the Nginx image, runs it in the background, and forwards the host machine's port 80 to the container's port 80.
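A slightly more complete variant of the same command, with a container name and a different host port, makes the result easier to verify. The name my-nginx and host port 8080 here are just example choices:

```shell
# Map host port 8080 to the container's port 80 and give the container a name
docker run -d --name my-nginx -p 8080:80 nginx:latest

# Confirm the container is up...
docker ps --filter "name=my-nginx"

# ...and that Nginx answers on the mapped host port
curl http://localhost:8080
```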
5. List Running Containers:
Lists all currently running containers. The -a or --all parameter shows all containers, including stopped ones.
docker ps
docker ps -a
6. Stop a Container:
Stops the specified container.
docker stop [container_name_or_id]
7. Start a Container:
Restarts a stopped container.
docker start [container_name_or_id]
8. Restart a Container:
Stops and then restarts a running container.
docker restart [container_name_or_id]
9. Remove a Container:
Deletes a stopped container from the system. To delete a running container, you must stop it first.
docker rm [container_name_or_id]
10. Remove an Image:
Deletes a Docker image from the local system. If there are running or stopped containers associated with the image, you must delete those containers first.
docker rmi [image_name_or_id]
11. Enter a Container (Exec):
Allows you to execute commands inside a running container. -it opens an interactive terminal.
docker exec -it [container_name_or_id] bash
12. View Container Logs:
Displays the standard output and error streams (logs) of a container. The -f parameter allows you to follow logs in real-time.
docker logs [container_name_or_id]
docker logs -f [container_name_or_id]
Building Our Own Image with Dockerfile (Simple Example)
Creating a Docker image for your own application requires writing a Dockerfile. Here's a simple Dockerfile example that containerizes a Node.js application:
First, create a file named Dockerfile in your application's directory:
# Specify the base image
FROM node:14-alpine
# Set the working directory
WORKDIR /app
# Copy the dependency manifests (package.json and package-lock.json)
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy application source code
COPY . .
# Expose the port the application listens on
EXPOSE 3000
# Start the application
CMD ["npm", "start"]
To build an image using this Dockerfile:
docker build -t my-nodejs-app:1.0 .
To run the created image:
docker run -d -p 3000:3000 my-nodejs-app:1.0
Conclusion
Docker and container architecture have become an indispensable part of modern software development and deployment processes. Thanks to the portability, isolation, and efficiency advantages it provides, it significantly accelerates the workflows of developers and operations teams. The core concepts and commands covered in this article are sufficient for you to make a solid start in the world of Docker. I wish you success on your containerization journey!
See you in the next article!