Self-hosted Blog almost for free


This blog is self-hosted and runs on a home NAS, in a virtual machine with a K3s cluster.

This post is not a How-To or a tutorial. This is just to share my approach.

Some might think this is overkill, but here are my objectives:

  • learn more about K8s (using K3s) and practice troubleshooting skills;
  • get it done via self-hosting;
  • automate the pipeline as much as possible;
  • keep downtime to a minimum.

A post about my home NAS hardware can be found here.
For the OS I prefer headless Ubuntu, the latest available LTS version. But since I didn't want to spend too much time reinventing the wheel with the NAS setup, I went with TrueNAS Scale.
Thank you to the TrueNAS maintainers and community. It is a very good option compared to the paid alternatives. All the essential containers are available and can be installed from the apps section of the dashboard with a few clicks; the terminal is not required.

Although TrueNAS Scale 24.04 includes K3s for managing apps, the issue is that it’s running on a single machine, which means my apps will experience downtime.
I decided to spin up a separate K3s cluster on a dedicated VM.
For the VM I went with NixOS, having discovered this OS while doing research on what to choose for the NAS.
Eventually, I plan to have three machines in the cluster, but for now there are two. The second machine is a Raspberry Pi 400.

Note: with the release of 24.10, TrueNAS Scale moved from K3s to Docker. Learn more here.


NixOS became my choice for the K3s setup. It has an awesome feature: the whole system is described by a config file at /etc/nixos/configuration.nix.
This makes it very easy to re-create a system with a few commands after a fresh installation.
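For example, restoring a machine from a saved config is roughly this (a sketch; /backup/configuration.nix is a hypothetical location for a backed-up copy of the config):

# Restore the saved config and rebuild the system from it (hypothetical backup path)
sudo cp /backup/configuration.nix /etc/nixos/configuration.nix
sudo nixos-rebuild switch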

Installing NixOS in a VM was quite easy, so I'm not going to go into details on this one.
Just go through the steps in the installation wizard and enable SSH.
I only needed to make a few changes to make it work as a K3s server.

# Open ports in the firewall.
networking.firewall.allowedTCPPorts = [ 80 443 22
  6443 # k3s: required so that pods can reach the API server (running on port 6443 by default)
  2379 # k3s, etcd clients: required if using a "High Availability Embedded etcd" configuration
  2380 # k3s, etcd peers: required if using a "High Availability Embedded etcd" configuration
];
networking.firewall.allowedUDPPorts = [
  8472 # k3s, flannel: required if using multi-node for inter-node networking
];

services.k3s = {
  enable = true;
  role = "server";
  clusterInit = true;
  token = "<some-token>";
};

services.k3s.extraFlags = toString [
  "--write-kubeconfig-mode" "644" # "--kubelet-arg=v=4" # Optionally add additional args to k3s
];

Then run sudo nixos-rebuild switch.

Validate it is running with kubectl get nodes.

NAME       STATUS   ROLES                       AGE   VERSION
nixos      Ready    control-plane,etcd,master   5s   v1.30.4+k3s1

If something goes wrong with K3s, check the journal logs with journalctl -xeu k3s.service.

The server token is located at /var/lib/rancher/k3s/server/node-token. This token is required to connect an agent to the K3s server.
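Reading it on the server node is a one-liner:

# Print the join token that agents use to connect
sudo cat /var/lib/rancher/k3s/server/node-token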


Installing NixOS on the Raspberry Pi 4 was a bit harder. The main problem is that the configuration.nix file is completely empty.
You can find detailed instructions on the official wiki website.

After completing the NixOS setup, the next step is the K3s agent.
Its config differs slightly from the previous one and doesn't need services.k3s.extraFlags.

networking.firewall.allowedTCPPorts = [ 80 443 22
  6443 # k3s: required so that pods can reach the API server (running on port 6443 by default)
  2379 # k3s, etcd clients: required if using a "High Availability Embedded etcd" configuration
  2380 # k3s, etcd peers: required if using a "High Availability Embedded etcd" configuration
];
networking.firewall.allowedUDPPorts = [
  8472 # k3s, flannel: required if using multi-node for inter-node networking
];

services.k3s = {
  enable = true;
  role = "agent";
  token = "<some-token>";
  serverAddr = "https://<ip of first node>:6443";
};

Run sudo nixos-rebuild switch and it should connect to the first machine.

Run kubectl get nodes on the first machine and we should see something like this:

kubectl get nodes
NAME       STATUS   ROLES                       AGE   VERSION
nixos      Ready    control-plane,etcd,master   43m   v1.30.4+k3s1
nixosrpi   Ready    <none>                      6s    v1.30.4+k3s1

Yay! 🎉

Now let's switch nixosrpi to be a control-plane node to create an HA cluster.
Change the role to server in its config and rebuild NixOS again.

NAME       STATUS   ROLES                       AGE    VERSION
nixos      Ready    control-plane,etcd,master   44m    v1.30.4+k3s1
nixosrpi   Ready    control-plane,etcd,master   30s    v1.30.4+k3s1

Again, ideally an HA (high availability) cluster needs three machines, so that etcd can keep quorum if one of them goes down.


My blog is built with Hugo and uses a modified mini theme.
There are plenty of instructions out there on how to get started, so I will share only the things that matter most.
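For reference, the initial setup boils down to something like this (a sketch; the theme repository URL is a placeholder, and the theme is added as a git submodule so the CI checkout below can pull it in):

# Create the site skeleton and add the theme as a git submodule
hugo new site blog && cd blog
git init
git submodule add <theme-repo-url> themes/mini
echo "theme = 'mini'" >> hugo.toml

# Preview locally with drafts enabled
hugo server -D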

After creating my blog and pushing it to a repository, I needed to publish an image.
First, enable workflow permissions:
in Repository settings -> Actions -> General, under Workflow permissions, select Read and write permissions.

Create the file .github/workflows/deploy.yaml with the following workflow. Replace <github-user-name> with yours.

name: deploy
on:
  push:
    branches:
      - 'main'
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          submodules: true

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@f95db51fddba0c2d1ec667646a06c2ce06100226 # v3.0.0

      - name: Set version tag
        id: vars
        run: echo "VERSION=${GITHUB_SHA:0:7}" >> $GITHUB_ENV # Tag with short commit SHA

      - name: Log in to the Container registry
        uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
        with:
          registry: https://ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          push: true
          tags: ghcr.io/<github-user-name>/blog:${{ env.VERSION }},ghcr.io/<github-user-name>/blog:latest
          platforms: linux/amd64,linux/arm64

Since I have two different architectures, I need to specify two platforms in the build step.
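To double-check which architectures the nodes actually run, the standard kubernetes.io/arch label can be printed as an extra column:

# Shows amd64 for the VM and arm64 for the Raspberry Pi
kubectl get nodes -L kubernetes.io/arch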

It took me some time to create a perfect two-stage build Dockerfile. The resulting image is just a static website served by nginx.

I think this is the Gem of this post 😄

Copy the contents below into a Dockerfile.

FROM --platform=$BUILDPLATFORM alpine:latest AS builder

# Download and install hugo
ENV HUGO_VERSION 0.140.1

# Installing Hugo and ca-certificates
RUN set -x &&\
  apk add --no-cache --update gcompat wget go git ca-certificates &&\
  case "$(uname -m)" in \
  x86_64) ARCH=amd64 ;; \
  aarch64) ARCH=arm64 ;; \
  *) echo "hugo official release only support amd64 and arm64 now"; exit 1 ;; \
  esac && \
  HUGO_DIRECTORY="hugo_extended_${HUGO_VERSION}_linux-${ARCH}" && \
  HUGO_BINARY="${HUGO_DIRECTORY}.tar.gz" && \
  wget https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/${HUGO_BINARY} &&\
  tar xzf ${HUGO_BINARY} &&\
  rm -fr ${HUGO_BINARY} README.md LICENSE && \
  mv hugo /usr/bin/hugo && \
  mkdir /usr/share/blog

# Set working directory and copy site content
WORKDIR /src
COPY . /src

RUN hugo -d /output --environment production --enableGitInfo

# Stage 2: Serve the static files with Nginx
FROM nginx:alpine-slim
COPY --from=builder /output /usr/share/nginx/html

# Expose port
EXPOSE 80
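Before relying on CI, the image can be sanity-checked locally (a sketch; blog:local is a throwaway tag, and Docker with BuildKit is assumed):

# Build the image and serve it on localhost:8080
docker build -t blog:local .
docker run --rm -p 8080:80 blog:local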

With the workflow and Dockerfile in place, the blog image is published to ghcr.io.

Now, to pull that image into the cluster, we need to give K3s access to the private registry.
But first we need to acquire a read:packages token.

  1. Go to your user settings on GitHub.
  2. Select Developer Settings in the left side menu.
  3. Select Personal access tokens.
  4. Select Tokens.
  5. Click Generate New Token and select classic.
  6. Give it a name via Note; I prefer “read_packages”.
  7. Choose only read:packages.
  8. Click Generate token.

Now let's store this token as a secret in the K3s cluster and deploy the blog. The general recommendation is to create a separate namespace to keep things tidy.

  1. First, create the namespace:

kubectl create namespace blog-hugo

  2. Set the token as a secret:

kubectl create secret docker-registry ghcr --docker-server=ghcr.io \
                                           --docker-username=<user-name> \
                                           --docker-password="<github-token>" \
                                           -n blog-hugo

  3. Create the file blog-deployments.yaml and replace <github-user-name> with yours:
apiVersion: v1
kind: Namespace
metadata:
  name: blog-hugo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-hugo
  namespace: blog-hugo
spec:
  replicas: 3  # Set the desired number of replicas
  selector:
    matchLabels:
      app: blog-hugo
  template:
    metadata:
      labels:
        app: blog-hugo
    spec:
      containers:
      - name: blog-hugo
        image: ghcr.io/<github-user-name>/blog
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: ghcr
---
apiVersion: v1
kind: Service
metadata:
  name: blog-hugo
  namespace: blog-hugo
  labels:
    app: blog-hugo
spec:
  ports:
  - name: 80-8080
    port: 8080  # Exposed service port
    targetPort: 80  # Port inside the container
    protocol: TCP
  selector:
    app: blog-hugo
  type: LoadBalancer  # Expose the service externally
  4. Run kubectl apply -f blog-deployments.yaml

Run kubectl get pods -n blog-hugo to confirm that the pods are running:

NAME                         READY   STATUS    RESTARTS        AGE
blog-hugo-658c6444dc-x97g8   1/1     Running   1 (6h59m ago)   19h
blog-hugo-658c6444dc-xf8g2   1/1     Running   0               19h
blog-hugo-658c6444dc-z44fs   1/1     Running   0               19h

Run kubectl get pods -n blog-hugo -o wide to confirm that the pods are distributed between the nodes:

NAME                         READY   STATUS    RESTARTS     AGE   IP            NODE       NOMINATED NODE   READINESS GATES
blog-hugo-658c6444dc-x97g8   1/1     Running   1 (7h ago)   19h   10.42.0.140   nixos      <none>           <none>
blog-hugo-658c6444dc-xf8g2   1/1     Running   0            19h   10.42.1.30    nixosrpi   <none>           <none>
blog-hugo-658c6444dc-z44fs   1/1     Running   0            19h   10.42.1.29    nixosrpi   <none>           <none>

The blog should now be accessible via http://<vm-ip>:8080.
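A quick check from another machine on the LAN (replace <vm-ip> with the node's address):

# Expect an HTTP 200 response from nginx
curl -I http://<vm-ip>:8080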


The last thing is to expose the blog to the public Internet.

My preferred option is a Cloudflare Tunnel. I think the free tier is enough for a blog.

My domain is managed by Cloudflare, which makes this setup easy. The steps below assume your domain is with Cloudflare or has been transferred there.

  1. Go to the Cloudflare dashboard.
  2. Open Zero Trust.
  3. Open Networks; Tunnels should be selected.
  4. Create a tunnel with Cloudflared.
  5. Give it a name and click Next.
  6. Each installation option shows a base64 token; copy it.
  7. Create the namespace:

kubectl create namespace cloudflared

  8. Set the token as a secret:

kubectl create secret generic cloudflared-token \
    --from-literal=token=<token> \
    -n cloudflared

  9. Create the file cloudflared-deployment.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: cloudflared
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared-blog
  namespace: cloudflared
  labels:
    app: cloudflared-blog
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cloudflared-blog
  template:
    metadata:
      labels:
        app: cloudflared-blog
    spec:
      containers:
      - name: cloudflared
        image: cloudflare/cloudflared:latest
        args:
          - tunnel
          - --no-autoupdate
          - --protocol=quic
          - run
          - --url=http://blog-hugo.blog-hugo.svc.cluster.local:8080
        env:
        - name: TUNNEL_TOKEN
          valueFrom:
            secretKeyRef:
              name: cloudflared-token
              key: token
  10. Run kubectl apply -f cloudflared-deployment.yaml

Run kubectl get pods -n cloudflared to confirm that the pods are running:

NAME                                READY   STATUS    RESTARTS     AGE
cloudflared-blog-6bcd649ddc-8k7zs   1/1     Running   0            23h
cloudflared-blog-6bcd649ddc-dlqtj   1/1     Running   0            23h
cloudflared-blog-6bcd649ddc-z7ppt   1/1     Running   1 (8h ago)   25h
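If the tunnel doesn't come up, the cloudflared logs usually explain why:

# Check recent logs from one of the tunnel pods
kubectl logs -n cloudflared deployment/cloudflared-blog --tail=50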

One more problem is left to solve: automated rollout of updates. I will leave that for next time.
For now, the setup gives me an automated build process and allows for a more reliable manual rollout of the image with higher availability.

Set the new image with its tag:

kubectl set image deployment/blog-hugo blog-hugo=ghcr.io/sakrist/blog:<tag> -n blog-hugo

Watch the rollout:

kubectl rollout status deployment/blog-hugo -n blog-hugo
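If a new image turns out to be broken, standard kubectl commands can roll it back:

# Inspect past revisions and revert to the previous one if needed
kubectl rollout history deployment/blog-hugo -n blog-hugo
kubectl rollout undo deployment/blog-hugo -n blog-hugo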

Happy New Year to you all!!! 🎅🎄


