Certificates and LetsEncrypt
Inspiration:
https://suda.pl/5-minute-home-server-with/
https://faun.pub/wildcard-k8s-4998173b16c8
https://collabnix.github.io/kubetools/
https://forum.netcup.de/netcup-intern/technik/11841-let-s-encrypt-wildcard-zertifikate-via-certbot/
To run the cluster properly, certificates are needed.
That is why I installed cert-manager.
#!/bin/bash
############################################################################################
# $Date: 2021-10-21 21:40:29 +0200 (Do, 21. Okt 2021) $
# $Revision: 659 $
# $Author: alfred $
# $HeadURL: https://monitoring.slainte.at/svn/slainte/trunk/k8s/k8s/K5_certmanager.sh $
# $Id: K5_certmanager.sh 659 2021-10-21 19:40:29Z alfred $
#
# cert-manager
#
############################################################################################
#shopt -o -s errexit # Terminates the shell script if a command returns an error code.
shopt -o -s xtrace # Displays each command before it is executed.
shopt -o -s nounset # No variables without definition
# Prerequisite: run the scripts in the correct order
#
# Definitions for the deployment
#
sname=$(basename "$0")
app="mikrok8s/install/${sname}"
pf=\$"Revision: "
sf=" "\$
fr="\$Revision: 659 $"
revision=${fr#*"$pf"}
revision=${revision%"$sf"*}
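# The two parameter expansions above strip everything up to and including the "Revision: "
# keyword prefix and then the trailing " $" marker, leaving only the revision number (here 659).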
xd=(`date '+%Y-%m-%d'`)
wd="${HOME}/copy/${app}/${xd}/r${revision}"
id="/opt/cluster/${app}/${xd}/r${revision}"
rm -f -R ${wd}
mkdir -p ${wd}
#
cat <<EOF > ${wd}/install_certmanager.sh
#!/bin/bash
#
# \$Date: 2021-10-21 21:40:29 +0200 (Do, 21. Okt 2021) $
# \$Revision: 659 $
# \$Author: alfred $
# \$HeadURL: https://monitoring.slainte.at/svn/slainte/trunk/k8s/k8s/K5_certmanager.sh $
# \$Id: K5_certmanager.sh 659 2021-10-21 19:40:29Z alfred $
#
# Installation of cert-manager with helm
#
#shopt -o -s errexit # Terminates the shell script if a command returns an error code.
shopt -o -s xtrace # Displays each command before it is executed.
shopt -o -s nounset # No variables without definition
# Prerequisite: run the scripts in the correct order
microk8s kubectl create namespace cert-manager
microk8s helm3 repo add jetstack https://charts.jetstack.io
microk8s helm3 repo update
microk8s helm3 install cert-manager jetstack/cert-manager \
--namespace cert-manager --version v1.5.4 \
--set installCRDs=true \
--set ingressShim.defaultIssuerName=letsencrypt-production \
--set ingressShim.defaultIssuerKind=ClusterIssuer \
--set ingressShim.defaultIssuerGroup=cert-manager.io
#
#wget https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml --output-document=${id}/cert-manager.yaml
#wget https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.crds.yaml --output-document=${id}/cert-manager.crds.yaml
#microk8s kubectl apply -f ${id}/*.yaml
#
sleep 1m
microk8s kubectl get pods --namespace cert-manager
EOF
chmod 755 ${wd}/install_certmanager.sh
#
ansible pc1 -m shell -a ${id}'/install_certmanager.sh'
#
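Before defining issuers and ingress routes it is worth verifying that cert-manager actually came up. A minimal check, sketched here with the same ansible/microk8s calls used throughout, could look like this:
ansible pc1 -m shell -a 'microk8s kubectl get crd | grep cert-manager'
ansible pc1 -m shell -a 'microk8s kubectl get pods --namespace cert-manager'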
#!/bin/bash
############################################################################################
# $Date: 2021-11-28 11:05:45 +0100 (So, 28. Nov 2021) $
# $Revision: 1404 $
# $Author: alfred $
# $HeadURL: https://monitoring.slainte.at/svn/slainte/trunk/k8s/k8s/K14_webserver.sh $
# $Id: K14_webserver.sh 1404 2021-11-28 10:05:45Z alfred $
#
# Apply the local configurations - slainte is the production namespace.
# https://stackoverflow.com/questions/67430592/how-to-setup-letsencrypt-with-kubernetes-microk8s-using-default-ingress
#
############################################################################################
#shopt -o -s errexit # Terminates the shell script if a command returns an error code.
shopt -o -s xtrace # Displays each command before it is executed.
shopt -o -s nounset # No variables without definition
#
# Definitions for the deployment
#
sname=$(basename "$0")
app="mikrok8s/install/${sname}"
pf=\$"Revision: "
sf=" "\$
fr="\$Revision: 1404 $"
revision=${fr#*"$pf"}
revision=${revision%"$sf"*}
xd=(`date '+%Y-%m-%d'`)
wd="${HOME}/copy/${app}/${xd}/r${revision}"
id="/opt/cluster/${app}/${xd}/r${revision}"
rm -f -R ${wd}
mkdir -p ${wd}
#
cat <<EOF > ${wd}/webserver-depl-svc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-depl
  namespace: slainte
spec:
  selector:
    matchLabels:
      app: webserver-app
  template:
    metadata:
      labels:
        app: webserver-app
    spec:
      containers:
      - name: webserver-app
        image: nginx:1.20
---
apiVersion: v1
kind: Service
metadata:
  name: webserver-svc
  namespace: slainte
spec:
  selector:
    app: webserver-app
  ports:
  - port: 80
    name: http
    targetPort: 80
    protocol: TCP
  - port: 443
    name: https
    targetPort: 443
    protocol: TCP
EOF
ansible pc1 -m shell -a 'microk8s kubectl apply -f '${id}'/webserver-depl-svc.yaml'
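# Optional check (sketch): wait until the test deployment has rolled out before creating the issuers
ansible pc1 -m shell -a 'microk8s kubectl -n slainte rollout status deployment/webserver-depl --timeout=120s'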
cat <<EOF > ${wd}/letsencrypt-staging.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # change to your email
    email: slainte@slainte.at
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: public
EOF
ansible pc1 -m shell -a 'microk8s kubectl apply -f '${id}'/letsencrypt-staging.yaml'
cat <<EOF > ${wd}/letsencrypt-prod.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # change to your email
    email: slainte@slainte.at
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: public
EOF
ansible pc1 -m shell -a 'microk8s kubectl apply -f '${id}'/letsencrypt-prod.yaml'
cat <<EOF > ${wd}/ingress-routes-update.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webserver-routes
  namespace: slainte
  annotations:
    # Check the class with: kubectl -n ingress describe daemonset.apps/nginx-ingress-microk8s-controller
    kubernetes.io/ingress.class: public
    # This is for the certificate
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    # This is for the http -> https forwarding
    # See https://kubernetes.github.io/ingress-nginx/examples/rewrite/
    nginx.ingress.kubernetes.io/rewrite-target: /\$1
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-temporary-redirect: "false"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-proxy-headers: "X-Forwarded-Proto: https"
    nginx.ingress.kubernetes.io/proxy-body-size: 0m
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    # nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # https://github.com/nginxinc/kubernetes-ingress/tree/v1.12.0/examples/ssl-services
    # nginx.ingress.kubernetes.io/ssl-services: "\${image}-svc"
    # nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - k8s.slainte.at
    secretName: k8s-slainte-at-tls
  rules:
  - host: k8s.slainte.at
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: webserver-svc
            port:
              number: 80
  defaultBackend:
    service:
      name: webserver-svc
      port:
        number: 80
EOF
ansible pc1 -m shell -a 'microk8s kubectl apply -f '${id}'/ingress-routes-update.yaml '
# Service PROD
# The service must be reachable via http
curl -k -v http://k8s.slainte.at
# but also via https
curl -k -v https://k8s.slainte.at
#
## Check the certificates
ansible pc1 -m shell -a 'microk8s kubectl get certificate --all-namespaces'
ansible pc1 -m shell -a 'microk8s kubectl describe certificate --all-namespaces'
ansible pc1 -m shell -a 'microk8s kubectl get certificaterequests.cert-manager.io '
ansible pc1 -m shell -a 'microk8s kubectl describe certificaterequests '
ansible pc1 -m shell -a 'microk8s kubectl get certificatesigningrequests.certificates.k8s.io '
ansible pc1 -m shell -a 'microk8s kubectl get Issuer'
ansible pc1 -m shell -a 'microk8s kubectl get ClusterIssuer'
ansible pc1 -m shell -a 'microk8s kubectl describe ClusterIssuer letsencrypt-prod '
ansible pc1 -m shell -a 'microk8s kubectl get challenges.acme.cert-manager.io '
ansible pc1 -m shell -a 'microk8s kubectl describe challenges.acme.cert-manager.io '
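# Optional check (sketch): inspect the certificate that is actually served for the host,
# e.g. to confirm the Let's Encrypt issuer and the validity period
echo | openssl s_client -connect k8s.slainte.at:443 -servername k8s.slainte.at 2>/dev/null | openssl x509 -noout -issuer -subject -dates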
##
exit
The certificates are only created when they are needed; the definition lives in the Ingress. Currently there is one URL per namespace, and therefore one certificate per namespace.
The namespaces are defined here.
#!/bin/bash
############################################################################################
# $Date: 2021-11-23 18:03:25 +0100 (Di, 23. Nov 2021) $
# $Revision: 1272 $
# $Author: alfred $
# $HeadURL: https://monitoring.slainte.at/svn/slainte/trunk/k8s/k8s_app/namespace/slainte_env.sh $
# $Id: slainte_env.sh 1272 2021-11-23 17:03:25Z alfred $
#
# Build and deploy
#
############################################################################################
#shopt -o -s errexit # Terminates the shell script if a command returns an error code.
shopt -o -s xtrace # Displays each command before it is executed.
shopt -o -s nounset # No variables without definition
export secretName="k8s-slainte-at-tls"
export host="k8s.slainte.at"
export namespace_comment="Namespace for production"
export cluster_issuer="letsencrypt-prod"
export docker_registry="docker.registry:5000"
#
Or, respectively, for the test namespace:
#!/bin/bash
############################################################################################
# $Date: 2021-11-23 18:03:25 +0100 (Di, 23. Nov 2021) $
# $Revision: 1272 $
# $Author: alfred $
# $HeadURL: https://monitoring.slainte.at/svn/slainte/trunk/k8s/k8s_app/namespace/default_env.sh $
# $Id: default_env.sh 1272 2021-11-23 17:03:25Z alfred $
#
# Build and deploy
#
############################################################################################
#shopt -o -s errexit # Terminates the shell script if a command returns an error code.
shopt -o -s xtrace # Displays each command before it is executed.
shopt -o -s nounset # No variables without definition
export secretName="default-k8s-slainte-at-tls"
export host="default.k8s.slainte.at"
export namespace_comment="Namespace for testing"
# export cluster_issuer="letsencrypt-staging"
# Use prod here as well, because of HTTP Strict Transport Security (HSTS)
export cluster_issuer="letsencrypt-prod"
export docker_registry="docker.registry:5000"
#
In the Ingress this then looks as follows:
---
# YAML for ${image}:${tag}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${image}-routes
  namespace: ${namespace}
  annotations:
    kubernetes.io/ingress.class: public
    cert-manager.io/cluster-issuer: "${cluster_issuer}"
    # https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-temporary-redirect: "false"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-proxy-headers: "X-Forwarded-Proto: https"
    nginx.ingress.kubernetes.io/proxy-body-size: 0m
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    # https://github.com/nginxinc/kubernetes-ingress/tree/v1.12.0/examples/ssl-services
    nginx.ingress.kubernetes.io/ssl-services: "${image}-svc"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - ${host}
    secretName: ${secretName}
  rules:
  - host: ${host}
    http:
      paths: # https://github.com/google/re2/wiki/Syntax, https://www.regular-expressions.info/refcapture.html
      - path: /${image}(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: ${image}-svc
            port:
              number: 443
  defaultBackend:
    service:
      name: default-svc
      port:
        number: 80
---
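How the ${...} placeholders in this template get filled is not shown here. One possible mechanism is envsubst; a minimal sketch, assuming the environment file has been sourced and that image and namespace are set elsewhere (the file names are made up for the example):
#!/bin/bash
# sketch: render the ingress template with the values exported by the *_env.sh scripts
source ./slainte_env.sh                       # exports secretName, host, cluster_issuer, ...
export image="webserver" namespace="slainte"  # hypothetical values for the remaining placeholders
envsubst '${image} ${namespace} ${cluster_issuer} ${host} ${secretName}' \
  < ingress-template.yaml > ingress-rendered.yaml
microk8s kubectl apply -f ingress-rendered.yaml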
One could also work with wildcard certificates, but in our case that would be a little over the top. Since a separate Ingress is needed per namespace and per host anyway, a dedicated certificate can simply be issued for each of them.
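For completeness: a wildcard certificate would require the DNS-01 challenge instead of HTTP-01. A minimal sketch of such a ClusterIssuer, purely for illustration and assuming a Cloudflare-managed DNS zone with an API token stored in a secret (neither of which is part of this setup):
cat <<EOF > letsencrypt-wildcard.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-wildcard
spec:
  acme:
    email: slainte@slainte.at
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-wildcard
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token   # hypothetical secret holding a DNS API token
            key: api-token
EOF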
Development environment
Inspiration:
https://pimylifeup.com/ubuntu-install-docker/
https://brjapon.medium.com/setting-up-ubuntu-20-04-arm-64-under-raspberry-pi-4-970654d12696
https://microk8s.io/docs/registry-built-in
https://microk8s.io/docs/registry-private
https://github.com/docker-library/hello-world
https://www.freecodecamp.org/news/how-to-remove-images-in-docker/
https://gobyexample.com/hello-world
https://linuxconfig.org/how-to-install-go-on-ubuntu-20-04-focal-fossa-linux
https://forums.docker.com/t/docker-private-registry-how-to-list-all-images/21136/2
https://github.com/fraunhoferfokus/deckschrubber
https://collabnix.github.io/kubetools/
To be able to use the source repository, we need a development environment. For this we use our development Raspberry Pi.
Figure 8: Overall system
This development Raspberry is a Raspberry Pi 4 with 8 GB RAM and a 120 GB SD card. In addition there is a 1 TB USB disk for all the backups.
Docker is installed on this machine.
alfred@monitoring:~$ docker version
Client:
Version: 20.10.7
API version: 1.41
Go version: go1.13.8
Git commit: 20.10.7-0ubuntu1~20.04.1
Built: Wed Aug 4 22:53:01 2021
OS/Arch: linux/arm64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.7
API version: 1.41 (minimum version 1.12)
Go version: go1.13.8
Git commit: 20.10.7-0ubuntu1~20.04.1
Built: Wed Aug 4 19:07:47 2021
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.5.2-0ubuntu1~20.04.2
GitCommit:
runc:
Version: 1.0.0~rc95-0ubuntu1~20.04.2
GitCommit:
docker-init:
Version: 0.19.0
GitCommit:
alfred@monitoring:~$
There are plenty of guides on how to install Docker. I use the following script for it.
#!/bin/bash
############################################################################################
# $Date: 2021-11-22 18:47:00 +0100 (Mo, 22. Nov 2021) $
# $Revision: 1252 $
# $Author: alfred $
# $HeadURL: https://monitoring.slainte.at/svn/slainte/trunk/k8s/k8s_app/portainer/portainer.sh $
# $Id: portainer.sh 1252 2021-11-22 17:47:00Z alfred $
#
# https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-20-04-de
#
# Install Docker and Portainer
#
############################################################################################
#shopt -o -s errexit # Terminates the shell script if a command returns an error code.
shopt -o -s xtrace # Displays each command before it is executed.
shopt -o -s nounset # No variables without definition
#
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu focal stable" # arm64, since the development machine is a Raspberry Pi
sudo apt update
apt-cache policy docker-ce
sudo apt install docker-ce
sudo usermod -aG docker ${USER}
#
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9443:9443 --name portainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:latest
docker ps
docker run -d -p 9001:9001 --name portainer_agent \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/docker/volumes:/var/lib/docker/volumes \
portainer/agent:latest
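A quick smoke test after the script has run might look like this (a sketch that only uses the ports from the commands above):
docker ps --filter name=portainer      # both Portainer containers should be "Up"
curl -k -I https://localhost:9443      # the Portainer UI answers over HTTPS on port 9443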
This also gives us Portainer, which is quite handy for managing the local Docker installation.
Figure 9: Portainer
We have to add the registry that runs on the Kubernetes cluster to the configuration of the development machine.
alfred@monitoring:~/go$ cat /etc/hosts
127.0.0.1 localhost
127.0.0.1 monitoring
127.0.0.1 monitoring.slainte.at
192.168.0.213 docker.registry
192.168.0.201 pc1
192.168.0.202 pc2
192.168.0.203 pc3
192.168.0.204 pc4
192.168.0.205 pc5
alfred@monitoring:~/go$ sudo cat /etc/docker/daemon.json
{
"insecure-registries" : ["docker.registry:5000"]
}
The registry is highly available as a LoadBalancer service. Its storage is on a persistent volume, so it works from anywhere in the cluster.
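After changing /etc/docker/daemon.json, the Docker daemon must be restarted for the insecure-registries entry to take effect. A quick reachability check from the development machine could then be (a sketch):
sudo systemctl restart docker
curl http://docker.registry:5000/v2/   # an empty JSON object {} means the registry API answers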
Now we can try to create a hello world.
package main

import "fmt"

func main() {
	fmt.Println("hello world")
}
This program is written in Go. Go is installed as follows.
alfred@monitoring:~$ sudo apt install golang
alfred@monitoring:~$ go version
go version go1.13.8 linux/arm64
alfred@monitoring:~$
Then the Go program can be tested and built.
alfred@monitoring:~$ mkdir go
alfred@monitoring:~$ cd go
alfred@monitoring:~/go$ nano hello-world.go
alfred@monitoring:~/go$ go run hello-world.go
hello world
alfred@monitoring:~/go$ go build hello-world.go
alfred@monitoring:~/go$ ls -lisa
total 1936
4688691 4 drwxrwxr-x 2 alfred alfred 4096 Sep 30 11:26 .
1140881 4 drwxr-xr-x 21 alfred alfred 4096 Sep 30 11:25 ..
127267 1924 -rwxrwxr-x 1 alfred alfred 2097605 Sep 30 11:26 hello-world
4688693 4 -rw-rw-r-- 1 alfred alfred 75 Sep 30 11:25 hello-world.go
alfred@monitoring:~/go$ ./hello-world
hello world
alfred@monitoring:~/go$
Now we have the Go program and need to package it into a Docker container.
alfred@monitoring:~/go$ docker rmi $(docker images -q) # deletes all existing images in the local repository
To build a container image, we need a Dockerfile.
alfred@monitoring:~/dev/hello-world$ cat dockerfile
# syntax=docker/dockerfile:1
# Alpine is chosen for its small footprint
# compared to Ubuntu
FROM golang:1.16-alpine
WORKDIR /app
LABEL maintainer="microk8s.raspberry@slainte.at"
COPY go.mod ./
COPY *.go ./
RUN go build -o /hallo
EXPOSE 80
CMD ["/hallo"]
alfred@monitoring:~/dev/hello-world$
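The COPY go.mod line assumes that a go.mod file already exists next to hello-world.go. If it does not, it can be created once in the project directory; the module name below is an arbitrary choice for this example:
cd ~/dev/hello-world
go mod init hello-world   # creates go.mod; the module name is an arbitrary choice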
Now we build the container.
alfred@monitoring:~/dev/hello-world$ docker build . -t docker.registry:5000/hello-world:20211003
Sending build context to Docker daemon 9.216kB
Step 1/8 : FROM golang:1.16-alpine
1.16-alpine: Pulling from library/golang
be307f383ecc: Already exists
e31131f141ae: Pull complete
7f3ae2225eeb: Pull complete
27b4cf6759f9: Pull complete
05c56ed0aaf5: Pull complete
Digest: sha256:45412fe3f5016509fc448b83faefc34e6f9e9bcc8ca1db1c54505d5528264e16
Status: Downloaded newer image for golang:1.16-alpine
---> bebfc96a903d
Step 2/8 : WORKDIR /app
---> Running in c81df142320d
Removing intermediate container c81df142320d
---> a81936939cb7
Step 3/8 : LABEL maintainer="microk8s.raspberry@slainte.at"
---> Running in 88c7acfdf90e
Removing intermediate container 88c7acfdf90e
---> b023746a02d8
Step 4/8 : COPY go.mod ./
---> 599501f1547a
Step 5/8 : COPY *.go ./
---> 9f60a66c2ac0
Step 6/8 : RUN go build -o /hallo
---> Running in 03fd6b04a839
Removing intermediate container 03fd6b04a839
---> e62606c61d94
Step 7/8 : EXPOSE 80
---> Running in 2b3562abb146
Removing intermediate container 2b3562abb146
---> 121727752180
Step 8/8 : CMD ["hallo"]
---> Running in cfa9e78598dc
Removing intermediate container cfa9e78598dc
---> 52dfdc427a61
Successfully built 52dfdc427a61
Successfully tagged docker.registry:5000/hello-world:20211003
alfred@monitoring:~/dev/hello-world$
Listing in the local repository:
alfred@monitoring:~/go$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.registry:5000/hello-world 20211003 ebe5a2422dca 24 seconds ago 2.1MB
alfred@monitoring:~/go$
We can inspect the contents by exporting the container to a tar file.
alfred@monitoring:~/go$ docker ps -lq
fc63f4826cff
alfred@monitoring:~/go$
alfred@monitoring:~/go$ mkdir test
alfred@monitoring:~/go$ docker export fc63f4826cff > ./test/hello-world.tar
alfred@monitoring:~/go$ cd test/
alfred@monitoring:~/go/test$ tar -xf hello-world.tar
alfred@monitoring:~/go/test$ ll
total 4132
drwxrwxr-x 6 alfred alfred 4096 Oct 3 09:37 ./
drwxrwxr-x 4 alfred alfred 4096 Oct 3 09:37 ../
-rwxr-xr-x 1 alfred alfred 0 Oct 3 09:32 .dockerenv*
drwxr-xr-x 4 alfred alfred 4096 Oct 3 09:32 dev/
drwxr-xr-x 2 alfred alfred 4096 Oct 3 09:32 etc/
-rwxrwxr-x 1 alfred alfred 2097605 Sep 30 11:26 hello-world*
-rw-rw-r-- 1 alfred alfred 2105344 Oct 3 09:37 hello-world.tar
drwxr-xr-x 2 alfred alfred 4096 Oct 3 09:32 proc/
drwxr-xr-x 2 alfred alfred 4096 Oct 3 09:32 sys/
alfred@monitoring:~/go/test$
We can also run the image locally.
alfred@monitoring:~/dev/hello-world$ docker run docker.registry:5000/hello-world:20211003
hello world
alfred@monitoring:~/dev/hello-world$
Push to the remote repository. In this case one of the underlying layers already exists.
alfred@monitoring:~/dev/hello-world$ docker push docker.registry:5000/hello-world:20211003
The push refers to repository [docker.registry:5000/hello-world]
6fa65fc3457a: Pushed
af86841dfdf3: Pushed
001055a1bdf7: Pushed
350991c52258: Pushed
93bd567aa306: Pushed
b48145e02449: Pushed
20211003: digest: sha256:cf16e57415939719367e5e17c09d2f17e36af2c1b84f24208f5750ea2c18b485 size: 1570
alfred@monitoring:~/dev/hello-world$
Listing all existing images in the remote repository:
alfred@monitoring:~/go$ curl docker.registry:5000/v2/_catalog
{"repositories":["hello-world"]}
alfred@monitoring:~/go$
alfred@monitoring:~/go$ curl docker.registry:5000/v2/hello-world/tags/list
{"name":"hello-world","tags":["20211003","20211001","20210830","20210930"]}
alfred@monitoring:~/go$
alfred@monitoring:~/go$ curl docker.registry:5000/v2/hello-world/manifests/20211003
{
"schemaVersion": 1,
"name": "hello-world",
"tag": "20211003",
"architecture": "arm64",
"fsLayers": [
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:34b32ef7cc99133a34d56a284820986ce50c2ff9c1864da8557458489d7428a1"
}
],
"history": [
{
"v1Compatibility": "{\"architecture\":\"arm64\",\"config\":{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[\"/hello-world\"],\"Image\":\"sha256:c7642e609037b051e03214a85ddc50434de042a6b08c23bcac16dcb92629ee46\",\"Volumes\":null,\"WorkingDir\":\"\",\"Entrypoint\":null,\"OnBuild\":null,\"Labels\":null},\"container\":\"6d8917fc20a0fcbece354d49781ee260e15cb02df65c35efa5978f583948f9c4\",\"container_config\":{\"Hostname\":\"6d8917fc20a0\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[\"/bin/sh\",\"-c\",\"#(nop) \",\"CMD [\\\"/hello-world\\\"]\"],\"Image\":\"sha256:c7642e609037b051e03214a85ddc50434de042a6b08c23bcac16dcb92629ee46\",\"Volumes\":null,\"WorkingDir\":\"\",\"Entrypoint\":null,\"OnBuild\":null,\"Labels\":{}},\"created\":\"2021-10-03T07:40:42.472956387Z\",\"docker_version\":\"20.10.7\",\"id\":\"031921ba69921f2705920021974bfdc500b6f42530f679aa56481bdbf1543d3e\",\"os\":\"linux\",\"parent\":\"91fdfdcec09b0c57e4f28bd9c1d199796b66e5162c2e0432f8a9dc689952e13c\",\"throwaway\":true,\"variant\":\"v8\"}"
},
{
"v1Compatibility": "{\"id\":\"91fdfdcec09b0c57e4f28bd9c1d199796b66e5162c2e0432f8a9dc689952e13c\",\"created\":\"2021-10-03T07:40:35.230309736Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) COPY file:c46bbf237d2238c6f6241fa3c853f5d4861e4981bd19d09a99fa3f1ef76ed73a in / \"]}}"
}
],
"signatures": [
{
"header": {
"jwk": {
"crv": "P-256",
"kid": "AADV:MDEU:C5BD:UTIA:ME3T:42HB:WJUA:GTV4:DHU6:KPQM:OXBY:RRAS",
"kty": "EC",
"x": "PEw45Fk1zyjGc6iHHYh6_Ydk3mxLC8UE1mvlo_6XY24",
"y": "8e-Ad67SUvJMaYHjlyh05hqv7kcekIm6J-jR8It9unA"
},
"alg": "ES256"
},
"signature": "0iZv8OxENn9Oo_2eTyIQpfxFgBBsIW1z4p0AfkxP-bBxDE-1i5SN76FTwsLdTI9SRHnyY6R0_AoMSB1gWacdeA",
"protected": "eyJmb3JtYXRMZW5ndGgiOjIxMjUsImZvcm1hdFRhaWwiOiJDbjAiLCJ0aW1lIjoiMjAyMS0xMC0wM1QwODoxNjoxM1oifQ"
}
]
}
alfred@monitoring:~/go$
The image is now in the repository. Next we need a service definition.
alfred@pc1:/opt/cluster/go$ cat hello-world.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  #namespace: default
spec:
  selector:
    matchLabels:
      app: hello-world
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      containers:
      - name: hello-world
        image: docker.registry:5000/hello-world:20211003
        # resource limits
        resources:
          requests:
            memory: "24Mi"
            cpu: "500m" # half a vcpu
          limits:
            memory: "64Mi"
            cpu: "1000m" # one vcpu
        env:
        # currently no env vars used for this container
        - name: FOO
          value: bar
        # check for lifetime liveness, restarts the container if dead
        livenessProbe:
          exec:
            command:
            - /hello-world
          initialDelaySeconds: 5
          periodSeconds: 10
        # check for initial readiness
        readinessProbe:
          exec:
            command:
            - /hello-world
          initialDelaySeconds: 3
          periodSeconds: 3
      restartPolicy: Always
      dnsPolicy: ClusterFirst
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
  #namespace: default
  labels:
    app: hello-world
spec:
  ports:
  # port = available to other containers
  - port: 1234
    name: hello
    # targetPort = exposed from inside the container
    targetPort: 1234
    protocol: TCP
  selector:
    app: hello-world
---
alfred@pc1:/opt/cluster/go$
Now we install the service.
alfred@pc1:/opt/cluster/go$ kubectl apply -f hello-world.yaml
deployment.apps/hello-world created
service/hello-world-service created
alfred@pc1:/opt/cluster/go$
We check whether the service starts correctly.
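The pod name used in the describe command below can be looked up via the app label from the manifest, for example (a sketch):
kubectl get pods -n default -l app=hello-world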
alfred@pc1:/opt/cluster/go$ kubectl describe -n default pod hello-world-6bb7844865-4j5bw
Name: hello-world-6bb7844865-4j5bw
Namespace: default
Priority: 0
Node: pc5/192.168.0.205
Start Time: Sun, 03 Oct 2021 20:56:03 +0200
Labels: app=hello-world
pod-template-hash=6bb7844865
Annotations: cni.projectcalico.org/podIP: 10.1.80.118/32
cni.projectcalico.org/podIPs: 10.1.80.118/32
sidecar.istio.io/inject: false
Status: Running
IP: 10.1.80.118
IPs:
IP: 10.1.80.118
Controlled By: ReplicaSet/hello-world-6bb7844865
Containers:
hello-world:
Container ID: containerd://e9d5a90c4ac98c57f9f27bdbe88b0bd5535d4e297ecbb103118a963d554615a9
Image: docker.registry:5000/hello-world:20211003
Image ID: docker.registry:5000/hello-world@sha256:1bb9b5564a34689396c097bb410a837d5e074b61caff822cb43921f425b09a50
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 03 Oct 2021 20:56:08 +0200
Finished: Sun, 03 Oct 2021 20:56:08 +0200
Ready: False
Restart Count: 1
Limits: