Note that this method has drawbacks compared to creating images that support multiple architectures:
– Cross-compilation adds complexity and overhead to the build process because it works differently for each programming language.
– Building an image takes more time because of installing and configuring the cross-compilation toolchains.
– Creating distinct Dockerfiles for each architecture becomes necessary, leading to a less maintainable and scalable approach.
– Distinguishing the image’s architecture relies on tags or names, so each architecture needs its own tag. With multi-arch images, a single tag or name can cover all architectures.
Let’s create a multi-arch image for our application. We will use the Dockerfile we created earlier.
docker buildx create --use --name multi-arch # create a builder instance
docker buildx build --platform linux/amd64,linux/arm64 -t auth-app:latest .
Buildx is a Docker CLI plugin, built on top of BuildKit, that extends the Docker build command. Because we are using Colima with the Docker runtime inside, we can use Buildx. Podman offers similar multi-platform build support. The `--platform` flag specifies the target platforms. The “linux/amd64” platform targets 64-bit x86 systems, and “linux/arm64” targets 64-bit ARM systems, such as Apple’s M-series chips.
Under the hood, Buildx uses QEMU to emulate the target architectures. The build can take longer than usual because each non-native target is emulated. After the build is complete, you can find out the image’s available architectures by using the following command:
docker inspect auth-app | jq '.[].Architecture'
You need to install the `jq` tool to run this and subsequent commands. It is a command-line JSON processor that helps you parse and manipulate JSON data.
brew install jq
You will get the following output:
“amd64”
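As an aside, jq’s filter language is worth a quick look if you haven’t used it before. Here is a minimal, self-contained taste (the JSON document below is invented purely for illustration):

```shell
# Pipe any JSON into jq and select a field with a filter expression;
# -r prints the raw string value without surrounding quotes.
echo '{"architecture": "arm64", "os": "linux"}' | jq -r '.architecture'
# → arm64
```

The same pattern (a filter like `.[].Architecture` applied to command output) is what we use throughout this chapter.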
You might notice that only one architecture is available. This is because Buildx uses the `--output=docker` exporter by default, which cannot export multi-platform images. Instead, multi-platform images must be pushed to a registry, either with `--output=type=oci` or simply with the `--push` flag. When you use this flag, Docker creates a manifest that lists all available architectures for the image and pushes it to the registry alongside the per-architecture images. When you pull the image, the variant matching your architecture is chosen automatically. Let’s check the manifest for the [official Rust image](https://hub.docker.com/_/rust) on the Docker Hub registry:
docker manifest inspect rust:1.73-bookworm | jq '.manifests[].platform'
Why don’t we specify any URL for the remote Docker Hub registry? Because the Docker CLI has a default registry, so the command above, written out explicitly, looks like this:
docker manifest inspect docker.io/rust:1.73-bookworm | jq '.manifests[].platform'
You will see output like so:
{
  "architecture": "amd64",
  "os": "linux"
}
{
  "architecture": "arm",
  "os": "linux",
  "variant": "v7"
}
{
  "architecture": "arm64",
  "os": "linux",
  "variant": "v8"
}
{
  "architecture": "386",
  "os": "linux"
}
You can see that the Rust image supports four architectures. Roughly speaking, the “arm” architecture covers 32-bit ARM boards such as the Raspberry Pi, “386” is for 32-bit x86 systems, “amd64” is for 64-bit x86 systems, and “arm64” is for 64-bit ARM systems, including Apple’s M-series chips.
The Role of Docker in Modern Development
Docker has transformed modern software development by providing a standardized approach through containerization. This approach has made software development, testing, and operations more efficient. Docker builds container images for various hardware configurations, including traditional x86-64 and ARM architectures. It integrates with multiple programming languages, making development and deployment more accessible and versatile for developers.
Docker is helpful both for individual development environments and for container orchestration and management. Organizations use Docker to streamline their software delivery pipelines, making them more efficient and reliable. Docker provides a comprehensive tool suite for containerization, which impacts software development at all stages.
Our journey doesn’t end with Docker alone as we navigate the complex world of modern development. The following section will explain the critical role of Kubernetes in orchestration and how it fits into the contemporary development landscape. Let’s explore how Kubernetes can orchestrate containerized applications.
Understanding Kubernetes’ Role in Orchestration
Building on our prior knowledge, we understand that container deployment is straightforward. What Kubernetes brings to the table, as detailed earlier, is large-scale container orchestration – particularly beneficial in complex microservice and multi-cloud environments.
Kubernetes, often regarded as the cloud’s operating system, extends beyond its origins as Google’s internal project, now serving as a cornerstone in the orchestration of containerized applications. It is a robust system for automating containerized application deployment, scaling, and management. It is a portable, extensible, and open-source platform. It is also a production-ready platform that powers some of the most extensive applications worldwide. Google, Spotify, The New York Times, and many other companies use Kubernetes at scale.
With the increasing complexity of microservices, Kubernetes’ vibrant community, including contributors from leading entities like Google and Red Hat, continually enhances its capabilities to simplify its management. Its active development mirrors the characteristic rapid evolution of open-source projects. Expect more discussions about Kubernetes involving IT professionals and individuals from diverse technical backgrounds, even those less familiar with technology.
Comparing Docker Compose and Kubernetes
Docker is a container platform. Kubernetes is a platform for orchestrating containers. It’s crucial to recognize that these two platforms cater to distinct purposes. An alternative to Kubernetes, even if incomplete, is Docker Compose. It presents a simpler solution for running Docker applications with multiple containers, finding its niche in local development environments. Some fearless individuals even deploy it in production. However, when comparing them, Docker Compose is like a small forklift that moves containers. Kubernetes, on the other hand, can be envisioned as a cutting-edge logistics center comparable to the top-tier facilities in Amazon’s warehouses. It provides advanced automation, offering unparalleled container management at scale.
Docker Compose for Multi-Container Applications
With Docker Compose, you can define and run multiple containers. It uses a simple YAML file structure to configure the services. A service definition contains the configuration that is applied to each container. You can create and start all the services from your configuration with a single command.
Let’s enhance our auth-app application. Suppose it requires in-memory storage to keep the users’ data; we will use Redis for that. We also need a broker to send messages to a queue; we will use RabbitMQ, a traditional choice. Let’s create a `compose.yml` file with the following content:
version: "3"
services:
  auth-app:
    image: <username>/auth-app:latest
    ports:
      - "8080:8080"
    environment:
      RUST_LOG: info
      REDIS_HOST: redis
      REDIS_PORT: 6379
      RABBITMQ_HOST: rabbitmq
      RABBITMQ_PORT: 5672
  redis:
    image: redis:latest
    volumes:
      - redis:/data
    ports:
      - 6379
  rabbitmq:
    image: rabbitmq:latest
    volumes:
      - rabbitmq:/var/lib/rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    ports:
      - 5672
volumes:
  redis:
  rabbitmq:
To create and start all three containers, use the following command:
docker-compose up
Often it’s practical to run containers in the background:
docker-compose up -d
And follow the logs in the same terminal session:
docker-compose logs -f
To stop all the compose’s containers, use the following command:
docker-compose down
Transitioning from Docker Compose to Kubernetes Orchestration
Migrating from Docker Compose to Kubernetes can offer several benefits and enhance the capabilities of your containerized applications. There are various reasons why Kubernetes can be a suitable option for this transition:
– Docker Compose is constrained to a single host, restricting deployment to just one machine. Conversely, Kubernetes effectively manages containers across multiple hosts.
– In Docker Compose, the failure of the host running containers results in the failure of all containers on that host. In contrast, Kubernetes employs a primary node to oversee the cluster and multiple worker nodes. If a worker node fails, the cluster can operate with minimal disruption.
– Kubernetes boasts many features and can be extended with new components and functionalities. Although Docker Compose supports a few extensions, it falls well short of Kubernetes in scope and popularity.
– With robust cloud-native support, Kubernetes facilitates deployment on any cloud provider. This flexibility has contributed to its growing popularity among software developers in recent years.
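To make the comparison concrete, here is a rough sketch of how the auth-app service from our compose.yml might be expressed as a Kubernetes Deployment. The names and labels are illustrative, and the Redis and RabbitMQ counterparts, as well as the Service objects needed for networking, would be separate manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-app
spec:
  replicas: 2            # multiple replicas that Kubernetes can spread across hosts
  selector:
    matchLabels:
      app: auth-app
  template:
    metadata:
      labels:
        app: auth-app
    spec:
      containers:
        - name: auth-app
          image: <username>/auth-app:latest
          ports:
            - containerPort: 8080
          env:
            - name: REDIS_HOST
              value: redis   # assumes a Service named "redis" exists
```

Notice how the container definition itself looks familiar; what Kubernetes adds around it is the machinery for replication, scheduling, and self-healing.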
Conclusion
This section discusses how software packaging has evolved from traditional methods to modern containerization techniques using Docker and Kubernetes. It explains the benefits and considerations associated with Docker Engine, Docker Desktop, Podman, and Colima. The book will further explore the practical aspects of encapsulating applications into containers, the importance of Docker in current development methods, and the crucial role Kubernetes plays in orchestrating containerized applications at scale.
Docker and Kubernetes: Understanding Containerization
Creating a Local Cluster with Minikube
Minikube is a tool that makes it easy to run Kubernetes locally. It simplifies the process by running a single-node cluster inside a virtual machine (VM) on your device, which can emulate a multi-node Kubernetes cluster. Minikube is the most used local Kubernetes cluster. It is a great way to get started with Kubernetes. It is also an excellent environment for testing Kubernetes applications before deploying them to a production cluster.
There are equivalent alternatives to Minikube, such as Kubernetes support in Docker Desktop and Kind (Kubernetes in Docker), where you can also run Kubernetes clusters locally. However, Minikube is the most favored and widely used tool. It is also the most straightforward. It is a single binary that you can quickly download and run on your machine. It is also available for Windows, macOS, and Linux.
Installing Minikube
To install Minikube, download the binary from the [official website](https://minikube.sigs.k8s.io/docs/start/). For example, if you use macOS on an Intel chip, run these commands:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube
If you prefer not to use the curl and sudo combination, you can use Homebrew:
brew install minikube
Configuring and Launching Your Minikube Cluster
You can start Minikube in the simplest way possible, with the default configuration:
minikube start
While the provided command is generally functional, it’s recommended to explicitly specify the Minikube driver to enhance understanding of future provisioning configurations. For instance, the Container Network Interface (CNI) is set to auto by default, potentially leading to unforeseen consequences depending on the Minikube-selected driver.
It’s worth noting that Minikube often selects the driver based on the underlying operating system configuration. For example, if the Docker service runs, Minikube might default to using the Docker driver. Explicitly specifying the driver ensures a more predictable and tailored configuration for your specific needs.
minikube start --cpus=4 --memory=8192 --disk-size=50g --driver=docker --addons=ingress --addons=metrics-server
Most options are self-explanatory. The `--driver` option specifies the virtualization driver. By default, Minikube prefers the Docker driver, or a VM on macOS if Docker is not installed. On Linux, the Docker, KVM2, and Podman drivers are favored; however, you can use any of the seven currently available options. The `--addons` option specifies the list of add-ons to enable. You can list the available add-ons by using the following command:
minikube addons list
If you use Docker Desktop, make sure the virtual machine’s CPU and memory settings are higher than Minikube’s settings. Otherwise, you will get an error like:
Exiting due to MK_USAGE: Docker Desktop has only 7959MB memory, but you specified 8192MB.
Once you’ve started, use this command to check the cluster’s status:
minikube status
And get:
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
Interacting with Minikube Cluster
The kubectl command-line tool is the most common way to interact with Kubernetes. It should be the first tool for any Kubernetes user, as it is the official client for the Kubernetes API. Minikube bundles its own copy, and we could use it; however, the recommended way is to install kubectl from the [official website](https://kubernetes.io/docs/tasks/tools/) and use it separately from Minikube, not least because Minikube’s kubectl is not always up to date and can be a few versions behind.
You can check Minikube’s kubectl version by using the following command:
minikube kubectl -- version
Alternatively, if you have kubectl installed separately, you can use it by using the following command:
kubectl version
From now on, we will use the kubectl command-line tool installed separately from Minikube.
You will receive the client version (kubectl itself) and the server version (the Kubernetes cluster). It’s okay if the versions differ, as the Kubernetes server has a different release cycle than kubectl. While it’s better to aim for identical versions, it’s not always necessary.
To get the list of nodes in the cluster, use the following command:
kubectl get nodes
You will get our cluster’s single node:
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane 10m v1.24.1
This output means that we have one node that was created 10 minutes ago. The node has the control-plane role, which makes it the primary node. Usually, control-plane nodes run Kubernetes components (the things that make Kubernetes work), not user workloads (the applications that users deploy on Kubernetes). But since Minikube is meant for development, this single node serves both purposes.
It is also worth noting that this single node exposes the Kubernetes API server. You can find out the URL of it by using the following command:
kubectl cluster-info
You will get the same address that kubectl sends its requests to:
Kubernetes control plane is running at https://127.0.0.1:59813
Finally, let’s use the first add-on we enabled earlier. The metrics server is a cluster-wide aggregator of resource usage data. It collects metrics from Kubernetes, such as CPU and memory usage per node and pod. It is a prerequisite for the autoscaling mechanism we will discuss later in this book. For now, let’s check cluster node resource usage:
kubectl top node
You will receive data showing the utilization of CPU and memory resources by the node. In our case, the usage might appear minimal because nothing has been deployed yet. The specific percentages can vary depending on background processes and Minikube’s overhead.
NAME CPU (cores) CPU% MEMORY (bytes) MEMORY%
minikube 408m 10% 1600Mi 20%
Stopping and Deleting Your Minikube Cluster
To stop Minikube, use the following command:
minikube stop
You can also delete the cluster by using the following command:
minikube delete
Recipe: Deploying Your First Application to Kubernetes
In this recipe, we will deploy our first application to the Kubernetes cluster. We will use the same application we containerized in the previous recipe; that is, the same Docker image we built earlier. However, to start with simple things, we will deliberately use the less common imperative approach based on command-line commands. We will switch to the declarative way later in this chapter, as soon as we warm up. For now, let’s refresh our fundamental computer science knowledge and recall the differences between these two approaches.
Understanding Imperative vs. Declarative Management Model
The imperative paradigm is a term that is mainly, but not always, related to programming. In this style, the engineer tells the computer step by step how to do a task. The imperative approach is used to operate programs or issue direct commands to configure infrastructure. For example, starting a Docker container with a terminal command demonstrates the imperative approach.
In the declarative paradigm, the engineer tells the computer what to do, not how. The goal is to describe the desired state of the system. The declarative approach is mostly used to configure infrastructure, especially cloud infrastructure. The `compose.yml` file also describes and runs a containerized application in a declarative way. Usually, the declarative approach centers on a manifest file: a text file describing the system’s final state.
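For illustration, a minimal Kubernetes manifest for a single pod might look like the following sketch. The names are illustrative, and we will work with real manifests later in this chapter:

```yaml
# Declarative description of the desired state:
# one pod running our auth-app image, listening on port 8080.
apiVersion: v1
kind: Pod
metadata:
  name: auth-app
spec:
  containers:
    - name: auth-app
      image: <username>/auth-app:latest
      ports:
        - containerPort: 8080
```

Applying such a file with `kubectl apply -f pod.yml` asks Kubernetes to make reality match the description, rather than telling it each step to take.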
Even though the declarative approach is essential for infrastructure, particularly for Kubernetes, in some situations, such as debugging and real-time troubleshooting, the imperative method still has its place, so let’s start with it.
Pushing Your Container Image to a Registry
Before we start, we need to push the image to a registry. We will use the Docker Hub registry. You can create a free account on the [official website](https://hub.docker.com/). Once you’ve created an account and generated an access token, you can log in to the registry by using the following command:
docker login
You will be prompted to enter your username and password. After that, you can push the image to the registry by using the following command:
docker tag auth-app:latest <username>/auth-app:latest
docker push <username>/auth-app:latest
Imperative Deployment with kubectl run
The fastest way to instantly deploy an application is to use the `kubectl run` command.
This command creates a Kubernetes Pod object. A pod is the smallest and simplest unit of deployment in Kubernetes. For now, let’s think of it as a group of one or more containers that share storage, network, and a specification. It is the basic building block of Kubernetes.
Let’s start Minikube and create a pod. Use the following command:
kubectl run auth-app --image=<username>/auth-app:latest --port=8080
Then check the pod status by using the following command:
kubectl get pods
You will get the following output:
NAME READY STATUS RESTARTS AGE
auth-app 1/1 Running 0 4m55s
To see the events that brought the pod to the Running state, use the following command:
kubectl get events --field-selector involvedObject.name=auth-app
You will get the following output:
LAST SEEN TYPE REASON OBJECT MESSAGE
10m Normal Scheduled pod/auth-app Successfully assigned default/auth-app to minikube
10m Normal Pulling pod/auth-app Pulling image "<username>/auth-app:latest"
10m Normal Pulled pod/auth-app Successfully pulled image "<username>/auth-app:latest" in 7.158188757s
10m Normal Created pod/auth-app Created container auth-app
10m Normal Started pod/auth-app Started container auth-app
The pod reached the Running state in four steps. First, it was scheduled to the node. Then, the node pulled the image from the registry. After that, the container was created and started. We now have a running pod, but we cannot access it from outside the cluster. To do that, we need to expose the pod’s port.
Exposing Your Application with Port Forwarding
To expose the pod to the outside world, we can use the `kubectl port-forward` command. It forwards a local port to a port on the pod. Use the following command to make the pod accessible on port 8080:
kubectl port-forward pod/auth-app 8080:8080
After that, you can request the `/health` endpoint by using the following command:
curl http://localhost:8080/health
You will get the following output:
{"status": "OK"}
We can also check the pod’s access log by using the following command:
kubectl logs -f pod/auth-app
You will see the following line, logged for our request:
[2023-11-11T12:58:01Z INFO actix_web::middleware::logger] 127.0.0.1 "GET /health HTTP/1.1" 200 15 "-" "curl/8.1.2" 0.000163
Using port-forwarding exposes the pod, but it’s not advised for production-like infrastructure: it is not scalable, forwards one port at a time, and is insecure. It is also unreliable because it has no retry mechanism. And it remains an imperative, less convenient command.
You can use port-forwarding with complete confidence in a local development environment, for example, when you need to debug or test the application manually. Sometimes it also makes sense in CI/CD pipelines: when you just need to run integration or system tests, a declarative description looks redundant compared to a simple command.
Conclusion
In this section, we have introduced Minikube as a local Kubernetes environment, outlined its installation and usage, and demonstrated deploying and managing an application through an imperative method, emphasizing Minikube’s capabilities for local development, testing, and learning Kubernetes fundamentals.