Added kubernetes post.
.gitea/workflows/release.yaml (new file, 61 lines)
@@ -0,0 +1,61 @@
name: "Release"
|
||||
|
||||
on:
|
||||
push:
|
||||
tags: ["*"]
|
||||
|
||||
env:
|
||||
DEPLOY_RELEASE_NAME: "blog"
|
||||
DEPLOY_IMAGE_URL: "gitea.le-memese.com/s3rius/blog"
|
||||
DEPLOY_NAMESPACE: "s3rius"
|
||||
|
||||
jobs:
|
||||
docker_build:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v4
|
||||
- name: Set up QEMU
|
||||
uses: docker/setup-qemu-action@v3
|
||||
- name: Set up Docker Buildx
|
||||
uses: docker/setup-buildx-action@v3
|
||||
- name: Login to DockerHub
|
||||
uses: docker/login-action@v1
|
||||
with:
|
||||
registry: gitea.le-memese.com
|
||||
username: ${{ gitea.actor }}
|
||||
password: ${{ secrets.PACKAGE_PAT }}
|
||||
- name: Build and push
|
||||
uses: docker/build-push-action@v6
|
||||
with:
|
||||
context: .
|
||||
push: true
|
||||
file: ./Dockerfile
|
||||
platforms: linux/amd64
|
||||
tags: |
|
||||
${{ env.DEPLOY_IMAGE_URL }}:latest
|
||||
${{ env.DEPLOY_IMAGE_URL }}:${{ gitea.ref_name }}
|
||||
deploy_helm:
|
||||
runs-on: ubuntu-latest
|
||||
needs: docker_build
|
||||
steps:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v4
|
||||
- name: Setup helm
|
||||
uses: azure/setup-helm@v4.3.0
|
||||
- name: Deploy
|
||||
run: |
|
||||
echo "${{secrets.KUBE_CONFIG}}" > /tmp/kubeconfig
|
||||
helm upgrade \
|
||||
"${{ env.DEPLOY_RELEASE_NAME }}" \
|
||||
'oci://gitea.le-memese.com/common/charts/py-app' \
|
||||
--install \
|
||||
--wait \
|
||||
--atomic \
|
||||
--kubeconfig "/tmp/kubeconfig" \
|
||||
--namespace="${{ env.DEPLOY_NAMESPACE }}" \
|
||||
--create-namespace \
|
||||
--values=./values.yaml \
|
||||
--version "0.1.0" \
|
||||
--set-literal "image.tag=${{ gitea.ref_name }}" \
|
||||
--set-literal "image.repository=${{ env.DEPLOY_IMAGE_URL }}"
|
@@ -11,6 +11,7 @@ generate_robots_txt = true

[markdown]
highlight_code = true
highlight_theme = "nord"
render_emojis = true
smart_punctuation = true
content/posts/kube-intro/imgs/reqs.gif (new binary file, 89 KiB, not shown)
@@ -6,7 +6,7 @@ date = "2025-07-22"



-### Problem in learning kubernetes
+## Problem in learning kubernetes

Lots of people are considering learning Kubernetes but are struggling to take the first
steps because the official docs are complete dogshit for beginners.
@@ -23,7 +23,7 @@ Which is actually complete bullshit as well.
Because Kubernetes automates so many things, it is actually easier to use it than not to use it.
And it is not that hard to start.

-### What is Kubernetes? And how is better than Docker?
+## What is Kubernetes? And how is it better than Docker?

People who are quite good with Docker might ask something like: `why should I care? Because I can run docker in cluster mode with docker-swarm!`
Weeeeell, here's my response to you. The main problem with docker swarm is that it
@@ -33,7 +33,7 @@ and won't have unless they change their approach for declaring workloads. Kubern
supported by a lot of cloud providers, which makes it a de-facto standard for container orchestration. Sad to admit,
but docker swarm has been silently dying since the day it was born.

-### What's inside of Kubernetes?
+## What's inside of Kubernetes?

Since I want to give a high-level overview of Kubernetes, I won't go into details about each component
and networking, but I will give you a brief overview of the main components that make up Kubernetes.
@@ -140,7 +140,7 @@ But please keep in mind that by default k8s encodes all secrets using base64, wh
To make secrets actually secret, you'd better use [Vault](https://developer.hashicorp.com/vault/docs/platform/k8s)
or the [External secrets operator](https://external-secrets.io/latest/) or something similar. But for now let's just use the default base64-encoded secrets.

-### How to deploy kubernetes
+## How to deploy kubernetes

For local development you have several options:
* [K3D](https://k3d.io/) - k3s in Docker.
@@ -195,7 +195,7 @@ With this configuration, you can create a cluster with a single command:
k3d cluster create --config "k3d.yaml"
```

-### Connecting to the cluster
+#### Connecting to the cluster

After you install kubectl, you should be able to locate the file `~/.kube/config`. This file contains all the information required to connect to clusters. Tools like minikube, k3d or kind will automatically update this file when you create a cluster.

@@ -230,5 +230,219 @@ contexts:
# Currently selected context. It will be used as a default one
# for all kubectl commands.
current-context: k3d-default

```

I will set `k3d-default` as my current context with this command:

```bash
kubectl config use-context k3d-default
```

Also, we can check that we are connected to the cluster by running:
```bash
kubectl cluster-info
```

Once we are connected to the cluster, we can start deploying applications and managing resources. But before that
I want to mention [Lens](https://k8slens.dev/) and [K9S](https://k9scli.io/).
These two tools will help you a lot in getting into Kubernetes. Because kubectl is great, no shit,
I'll be using it to show everything that is going on.

But I highly recommend installing `Lens` or `K9S` to get a better overview of your cluster and resources.
That way you will be able to see what is going on in your cluster, what pods are running, what services are available,
and so on. I personally use `k9s` and I think it's much better, because it has everything you need and it's fast as hell.
Lens is a bit heavier, but it's still a great tool for getting started with Kubernetes.

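If you want to give `k9s` a quick try, a minimal session is just a couple of commands. This is a rough sketch, assuming you installed it with your package manager and your kubeconfig already points at the k3d cluster:

```bash
# Open k9s against the currently selected kubeconfig context.
k9s

# Or jump straight into a specific context and namespace.
k9s --context k3d-default -n kube-system
```
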

## Deploying your first application

I'm going to use python to write a small server to deploy in the cluster. Here's the application:

```python,name=server.py
import os
from aiohttp import web

routes = web.RouteTableDef()

@routes.get("/")
async def index(req: web.Request) -> web.Response:
    # We increment the request count and return the state
    req.app["state"]["requests"] += 1
    return web.json_response(req.app["state"])


app = web.Application()
app["state"] = {
    "requests": 0,
    "hostname": os.environ.get("HOSTNAME", "unknown"),
}
app.router.add_routes(routes)

if __name__ == "__main__":
    web.run_app(app, port=8000, host="0.0.0.0")
```

As you can see, this is a simple aiohttp application that returns the number of requests and the hostname of the pod.
This is a good example of an application that can be deployed in Kubernetes, because it is stateless and can be scaled easily.

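If you want to sanity-check the server before putting it into a container, a quick local run is enough. A rough sketch, assuming Python and aiohttp are available on your machine:

```bash
pip install "aiohttp>=3.12,<4.0"
# Run the server in the background and hit it once.
python server.py &
# Outside of a pod the hostname falls back to "unknown".
curl -s http://localhost:8000/
kill %1
```
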
Let's create a Dockerfile for this application:
```dockerfile,name=Dockerfile
FROM python:3.11-alpine3.19

RUN pip install "aiohttp>=3.12,<4.0"
WORKDIR /app
COPY server.py .
CMD ["python", "server.py"]
```

Here's a simple Dockerfile that installs aiohttp and runs the application. Let's build an image and upload it to our running cluster.

```bash
docker build -t "registry.localhost:5000/small-app:latest" .
docker push "registry.localhost:5000/small-app:latest"
```

Now let's deploy this application so it becomes available from our host machine.

```yaml
# First we create the namespace that all the resources below are deployed into.
apiVersion: v1
kind: Namespace
metadata:
  name: small-app
---
apiVersion: apps/v1
kind: Deployment
# Here's the name of the deployment, which is small-app.
metadata:
  creationTimestamp: null
  name: small-app
  namespace: small-app
spec:
  replicas: 1
  # Here we define the selector that will help this deployment
  # to find pods that it manages. Usually it is the
  # same as the labels in the pod template.
  selector:
    matchLabels:
      app: small-app
  strategy: {}
  # Here we define the pod template that will be used to create
  # pods for this deployment.
  template:
    metadata:
      # Each pod created by this deployment will have these labels.
      labels:
        app: small-app
    spec:
      containers:
        - image: registry.localhost:5000/small-app:latest
          name: small-app
          ports:
            - containerPort: 8000
              protocol: TCP
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: small-app
  namespace: small-app
spec:
  # This type of service will make it accessible only
  # from inside the cluster.
  # If you want to expose it to the outside world,
  # you can use LoadBalancer or NodePort.
  # But we don't need it for now.
  type: ClusterIP
  # Here we define ports available for this service.
  # Port is the port that will be exposed by the service,
  # targetPort is the port that the service
  # will forward traffic to in target pods.
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8000
  # Here we define where to route traffic for this service.
  # In this case we route traffic to pods that have
  # the label app=small-app.
  selector:
    app: small-app
---
# The Ingress routes external traffic for our host to the service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: small-app
  namespace: small-app
spec:
  rules:
    # Here's the host configuration for the Ingress.
    # It will route traffic for this host to the small-app service.
    - host: small-app.localhost
      http:
        paths:
          # Here we define the path and the backend service.
          # The path is set to '/' which means all
          # traffic to this host will be routed
          # to the defined backend service.
          - path: /
            pathType: Prefix
            backend:
              service:
                name: small-app
                port:
                  number: 80
```

Now let's save this file as `small-app.yaml` and apply it to our cluster:

```bash
kubectl apply -f small-app.yaml
```

Once the command is executed, you should see that the deployment and service are created successfully.

```bash
❯ kubectl get pods -n small-app
NAME                         READY   STATUS    RESTARTS   AGE
small-app-54f455696b-rp8qw   1/1     Running   0          13m
```
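The pod list only covers the Deployment; to double-check that the Service and Ingress made it in as well, something like this should do (the exact names and addresses will differ in your cluster):

```bash
# List the remaining resources in the small-app namespace.
kubectl get deployments,services,ingresses -n small-app
```
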
If you were using my k3d configuration, you should be able to access the application at [small-app.localhost](http://small-app.localhost/).

Let's check if it works. I'm gonna use curl and jq for that:

```bash
❯ curl -s http://small-app.localhost | jq
{
  "requests": 1,
  "hostname": "small-app-54f455696b-rp8qw"
}
```

Now you can scale up the application by changing the number of replicas in the deployment.
You can do it by editing the deployment, updating the `replicas` field to 3, for example, and then running

```bash
kubectl apply -f "small-app.yaml"
```

Or alternatively you can use the `kubectl scale` command:
```bash
kubectl scale deployment -n small-app small-app --replicas 3
```

Let's verify that the application is scaled up:
```bash
❯ kubectl get pods -n small-app
NAME                         READY   STATUS    RESTARTS   AGE
small-app-54f455696b-6tkxx   1/1     Running   0          21s
small-app-54f455696b-9sd7r   1/1     Running   0          21s
small-app-54f455696b-rp8qw   1/1     Running   0          25m
```

Now let's fire some requests to the application and see how it works.

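I'll generate the traffic with a simple curl loop; a rough sketch (the hostnames will of course differ on your machine):

```bash
# Fire ten requests and print which pod answered each one.
for i in $(seq 1 10); do
  curl -s http://small-app.localhost | jq -r '.hostname'
done
```
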

![](imgs/reqs.gif)

Works as expected. The application is scaled up and we can see that the requests are distributed between the pods.

I guess that is more than enough to get you started with Kubernetes. I might create some more posts on how to tune up your cluster, how to use volumes, how to use secrets and configmaps, and so on.

But now I'm tired and just want to publish it already. So please go easy on me. Stay tuned.
values.yaml (new file, 22 lines)
@@ -0,0 +1,22 @@
nameOverride: "blog"

image:
  tag: "latest"

service:
  port: 80

ingress:
  enabled: true
  annotations:
    cert-manager.io/cluster-issuer: cert-issuer
    external-dns.alpha.kubernetes.io/hostname: s3rius.blog
  tls:
    - hosts:
        - s3rius.blog
      secretName: blog-tls
  hosts:
    - host: s3rius.blog
      paths:
        - path: /
          pathType: Prefix