Self-hosted Blog almost for free

· 1833 words · 9 minute read

This blog is self-hosted and is running on a home NAS in a virtual machine with K3s cluster.

This post is not a How-To or a tutorial. This is just to share my approach.

Some might think this is overkill, but here are my objectives:

  • learn more about K8s (using K3s) and practice troubleshooting skills;
  • get it done via self-hosting;
  • automate the pipeline as much as possible;
  • keep downtime to a minimum.

Post about my home NAS hardware can be found here.
For the OS I usually prefer headless Ubuntu, the latest available LTS version. But since I didn’t want to spend too much time reinventing the wheel with the NAS setup, I went with TrueNAS Scale.
Thank you to the TrueNAS maintainers and community. It is a very good option compared to the paid alternatives. All essential containers are available. They can be installed in the apps section of the dashboard with a few clicks. The terminal is not required.

Although TrueNAS Scale 24.04 includes K3s for managing apps, it runs on a single machine, which means my apps would experience downtime.
I decided to spin up a separate K3s cluster on a dedicated VM.
For the VM I went with NixOS, having discovered this OS while doing research on what to choose for the NAS.
Eventually I plan to have three machines in the cluster, but for now there are two. The second machine is a Raspberry Pi 400.

Note: TrueNAS Scale, with the release of 24.10, moved from K3s to Docker. Learn more here


NixOS became my choice for the K3s setup. It has an awesome feature: the whole system is described in a config file at /etc/nixos/configuration.nix, which makes it very easy to re-create a system with a few commands after a fresh installation.

Installing NixOS as a VM was quite easy, so I won’t go into details on this one.
Just follow the steps in the installation wizard and enable SSH.
I only needed to make a few changes to make it work as a K3s server.

  # Open ports in the firewall.
  networking.firewall.allowedTCPPorts = [
    80 443 22
    6443 # k3s: required so that pods can reach the API server (running on port 6443 by default)
    2379 # k3s, etcd clients: required if using a "High Availability Embedded etcd" configuration
    2380 # k3s, etcd peers: required if using a "High Availability Embedded etcd" configuration
  ];
  networking.firewall.allowedUDPPorts = [
    8472 # k3s, flannel: required if using multi-node for inter-node networking
  ];

  services.k3s = {
    enable = true;
    role = "server";
    clusterInit = true;
    token = "<some-token>";
  };

  services.k3s.extraFlags = toString [
    "--write-kubeconfig-mode" "644" # "--kubelet-arg=v=4" # Optionally add additional args to k3s
  ];

Then run sudo nixos-rebuild switch.

Validate it is running with kubectl get nodes.

NAME       STATUS   ROLES                       AGE   VERSION
nixos      Ready    control-plane,etcd,master   5s   v1.30.4+k3s1

If something goes wrong with K3s, check the journal logs with journalctl -xeu k3s.service.

The server token is located at /var/lib/rancher/k3s/server/node-token. This token is required to connect an agent to the K3s server.
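As an aside, instead of hard-coding the token in configuration.nix, the NixOS k3s module also offers a tokenFile option, which keeps the secret out of the config. A minimal sketch (the file path below is just an example):

```nix
  services.k3s = {
    enable = true;
    role = "server";
    clusterInit = true;
    # Read the token from a file instead of embedding it in the config.
    # The path is an example - use any root-readable location.
    tokenFile = "/var/lib/rancher/k3s-token";
  };
```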


Installing NixOS on Raspberry Pi 4 was a bit harder. The main problem is that the configuration.nix file is completely empty.
You can find detailed instructions on the official wiki website.

After completing the NixOS setup, the next step is the K3s agent.
Its config differs slightly from the server’s and doesn’t need services.k3s.extraFlags.

  networking.firewall.allowedTCPPorts = [
    80 443 22
    6443 # k3s: required so that pods can reach the API server (running on port 6443 by default)
    2379 # k3s, etcd clients: required if using a "High Availability Embedded etcd" configuration
    2380 # k3s, etcd peers: required if using a "High Availability Embedded etcd" configuration
  ];
  networking.firewall.allowedUDPPorts = [
    8472 # k3s, flannel: required if using multi-node for inter-node networking
  ];

  services.k3s = {
    enable = true;
    role = "agent";
    token = "<some-token>";
    serverAddr = "https://<ip of first node>:6443";
  };

Do sudo nixos-rebuild switch and it should connect to the first machine.

Run kubectl get nodes on the first machine and we should see something like this:

kubectl get nodes
NAME       STATUS   ROLES                       AGE   VERSION
nixos      Ready    control-plane,etcd,master   43m   v1.30.4+k3s1
nixosrpi   Ready    <none>                      6s    v1.30.4+k3s1

Yay! 🎉

Now let’s switch nixosrpi to be a control-plane node to create an HA cluster.
Change its role to server and rebuild NixOS again.

NAME       STATUS   ROLES                       AGE    VERSION
nixos      Ready    control-plane,etcd,master   44m    v1.30.4+k3s1
nixosrpi   Ready    control-plane,etcd,master   30s    v1.30.4+k3s1

Again, an ideal HA (high availability) cluster needs 3 machines: etcd requires a majority of members to be up, so with only two nodes, losing either one halts the cluster.


My blog is built with hugo and uses a modified mini theme.
There are plenty of instructions out there on how to get started, so I will share only the things that matter most.

After creating my blog and pushing it to a repository, I needed to publish an image.
First, enable write permissions: in
Repository settings -> Actions -> General, under Workflow permissions, select Read and write permissions.

Create the file .github/workflows/deploy.yaml with the workflow below. Replace <github-user-name> with yours.

name: deploy
on:
  push:
    branches:
      - 'main'
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
  
jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          submodules: true

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@f95db51fddba0c2d1ec667646a06c2ce06100226 # v3.0.0

      - name: Set version tag
        id: vars
        run: echo "VERSION=${GITHUB_SHA:0:7}" >> $GITHUB_ENV # Tag with short commit SHA

      - name: Log in to the Container registry
        uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          push: true
          tags: ghcr.io/<github-user-name>/blog:${{ env.VERSION }},ghcr.io/<github-user-name>/blog:latest
          platforms: linux/amd64,linux/arm64

Since I have two different architectures, I need to specify two platforms in the build step.
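The version-tag step relies on Bash substring expansion, `${VAR:offset:length}`. A quick standalone illustration (the SHA below is made up):

```shell
# Bash substring expansion: take the first 7 characters of the commit SHA.
# The SHA here is a made-up example.
GITHUB_SHA="f4a9c1d2e3b4a5c6d7e8f9a0b1c2d3e4f5a6b7c8"
VERSION="${GITHUB_SHA:0:7}"
echo "$VERSION"  # f4a9c1d
```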

It took me some time to create a good two-stage Dockerfile. The resulting image is just a static website served by nginx.

I think this is the Gem of this post 😄

Copy the contents below into a Dockerfile.

FROM --platform=$BUILDPLATFORM alpine:latest AS builder

# Hugo version to download
ENV HUGO_VERSION=0.140.1

# Install Hugo and ca-certificates
RUN set -x &&\
  apk add --no-cache --update gcompat wget go git ca-certificates &&\
  case "$(uname -m)" in \
  x86_64) ARCH=amd64 ;; \
  aarch64) ARCH=arm64 ;; \
  *) echo "hugo official release only support amd64 and arm64 now"; exit 1 ;; \
  esac && \
  HUGO_DIRECTORY="hugo_extended_${HUGO_VERSION}_linux-${ARCH}" && \
  HUGO_BINARY="${HUGO_DIRECTORY}.tar.gz" && \
  wget https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/${HUGO_BINARY} &&\
  tar xzf ${HUGO_BINARY} &&\
  rm -fr ${HUGO_BINARY} README.md LICENSE  && \
  mv hugo /usr/bin/hugo && \
  mkdir /usr/share/blog

# Set working directory and copy site content
WORKDIR /src
COPY . /src

RUN hugo -d /output --environment production --enableGitInfo

# Stage 2: Serve the static files with Nginx
FROM nginx:alpine-slim
COPY --from=builder /output /usr/share/nginx/html

# Expose port
EXPOSE 80
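The case statement in the build stage maps the builder’s CPU architecture (from uname -m) to the names Hugo uses in its release artifacts. Extracted as a tiny standalone function, it looks like this:

```shell
# Map `uname -m` output to Hugo's release architecture names,
# mirroring the case statement in the Dockerfile above.
arch_for() {
  case "$1" in
    x86_64)  echo amd64 ;;
    aarch64) echo arm64 ;;
    *)       echo unsupported ;;
  esac
}

echo "$(arch_for x86_64) $(arch_for aarch64)"  # amd64 arm64
```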

With that, the blog image is published to ghcr.io.

Now, to pull the image, we need to give the K3s cluster access to the private registry.
But first we need a token with read access to packages.

  1. Go to your user settings on GitHub.
  2. Select Developer Settings in the left side menu.
  3. Select Personal access tokens.
  4. Select Tokens.
  5. Click Generate new token and select the classic variant.
  6. Give it a name via Note; I prefer “read_packages”.
  7. Choose only read:packages.
  8. Click Generate token.

Now let’s set this token as a secret in the K3s cluster and deploy the blog. The general recommendation is to create a separate namespace to keep things tidy.

  1. First, create the namespace:
kubectl create namespace blog-hugo
  2. Set the token as a secret:
kubectl create secret docker-registry ghcr --docker-server=ghcr.io \
                                           --docker-username=<user-name>  \
                                           --docker-password="<github-token>" \
                                           -n blog-hugo
  3. Create a file blog-deployments.yaml and replace <github-user-name> with yours.
apiVersion: v1
kind: Namespace
metadata:
  name: blog-hugo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-hugo
  namespace: blog-hugo
spec:
  replicas: 3  # Set the desired number of replicas
  selector:
    matchLabels:
      app: blog-hugo
  template:
    metadata:
      labels:
        app: blog-hugo
    spec:
      containers:
      - name: blog-hugo
        image: ghcr.io/<github-user-name>/blog
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: ghcr
---
apiVersion: v1
kind: Service
metadata:
  name: blog-hugo
  namespace: blog-hugo
  labels:
    app: blog-hugo
spec:
  ports:
  - name: 80-8080
    port: 8080  # Exposed service port
    targetPort: 80  # Port inside the container
    protocol: TCP
  selector:
    app: blog-hugo
  type: LoadBalancer  # Expose the service externally
  4. Run kubectl apply -f blog-deployments.yaml
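For the curious: the docker-registry secret created above is just a base64-encoded .dockerconfigjson holding the registry credentials. A rough sketch of what it contains, with made-up credentials:

```shell
# Recreate (roughly) what `kubectl create secret docker-registry` stores.
# The credentials below are made up for illustration.
user="myuser"; token="mytoken"
auth="$(printf '%s:%s' "$user" "$token" | base64)"
printf '{"auths":{"ghcr.io":{"username":"%s","password":"%s","auth":"%s"}}}\n' \
  "$user" "$token" "$auth"
```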

Run this to confirm that the pods are running: kubectl get pods -n blog-hugo

NAME                         READY   STATUS    RESTARTS        AGE
blog-hugo-658c6444dc-x97g8   1/1     Running   1 (6h59m ago)   19h
blog-hugo-658c6444dc-xf8g2   1/1     Running   0               19h
blog-hugo-658c6444dc-z44fs   1/1     Running   0               19h

Run this to confirm that pods are distributed between nodes: kubectl get pods -n blog-hugo -o wide

NAME                         READY   STATUS    RESTARTS     AGE   IP            NODE       NOMINATED NODE   READINESS GATES
blog-hugo-658c6444dc-x97g8   1/1     Running   1 (7h ago)   19h   10.42.0.140   nixos      <none>           <none>
blog-hugo-658c6444dc-xf8g2   1/1     Running   0            19h   10.42.1.30    nixosrpi   <none>           <none>
blog-hugo-658c6444dc-z44fs   1/1     Running   0            19h   10.42.1.29    nixosrpi   <none>           <none>

The blog should now be accessible via http://<vm-ip>:8080.


The last thing is to expose the blog to the public Internet.

My preferred option is to use a Cloudflare Tunnel. I think the free tier is enough for a blog.

I have my domain with Cloudflare, which makes this setup easy. The steps below assume your domain is with Cloudflare or has been transferred there.

  1. Go to the Cloudflare dashboard.
  2. Open Zero Trust.
  3. Open Networks; Tunnels should be selected.
  4. Create a tunnel with Cloudflared.
  5. Give it a name and click Next.
  6. Each installation option shows a base64 token; copy it.
  7. Create the namespace:
kubectl create namespace cloudflared
  8. Set the token as a secret:
kubectl create secret generic cloudflared-token \
    --from-literal=token=<token> \
    -n cloudflared
  9. Create a file cloudflared-deployment.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cloudflared
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared-blog
  namespace: cloudflared
  labels:
    app: cloudflared-blog
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cloudflared-blog
  template:
    metadata:
      labels:
        app: cloudflared-blog
    spec:
      containers:
      - name: cloudflared
        image: cloudflare/cloudflared:latest
        args:
          - tunnel
          - --no-autoupdate
          - --protocol=quic
          - run
          - --url=http://blog-hugo.blog-hugo.svc.cluster.local:8080
        env:
        - name: TUNNEL_TOKEN
          valueFrom:
            secretKeyRef:
              name: cloudflared-token
              key: token
  10. Run kubectl apply -f cloudflared-deployment.yaml
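Note the --url flag above: it points cloudflared at the blog Service by its in-cluster DNS name, which follows the standard <service>.<namespace>.svc.<cluster-domain> pattern:

```shell
# Compose the in-cluster DNS name of a Service.
# The cluster domain defaults to cluster.local on K3s.
svc="blog-hugo"; ns="blog-hugo"; domain="cluster.local"
echo "http://${svc}.${ns}.svc.${domain}:8080"
# → http://blog-hugo.blog-hugo.svc.cluster.local:8080
```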

Run this to confirm that the pods are running: kubectl get pods -n cloudflared

NAME                                READY   STATUS    RESTARTS     AGE
cloudflared-blog-6bcd649ddc-8k7zs   1/1     Running   0            23h
cloudflared-blog-6bcd649ddc-dlqtj   1/1     Running   0            23h
cloudflared-blog-6bcd649ddc-z7ppt   1/1     Running   1 (8h ago)   25h

One more problem left to solve: automated rollout of updates. I will leave that for next time.
The current setup already gives us an automated build process, plus a reliable manual rollout of new images with high availability.

Set the new image tag:

kubectl set image deployment/blog-hugo blog-hugo=ghcr.io/sakrist/blog:<tag> -n blog-hugo

Watch the rollout status:

kubectl rollout status deployment/blog-hugo -n blog-hugo

Happy New Year to you all!!! 🎅🎄


Additional sources: