🧩 So, I over-engineered my blog
or, How to run Ghost on Google Kubernetes Engine
The world seems to be moving firmly in the direction of Kubernetes, and over the past several years I've managed my fair share of teams and systems that built and deployed on k8s (or other, similar container orchestration environments). But I hadn't played around with Kubernetes personally all that much, and I was craving a first-hand experience of the alleged joys (and frustrations).
Until now.
I've been running this blog on Ghost with a fairly vanilla setup: a VM (on Google Compute Engine) running Ghost; nginx for the reverse proxy and SSL termination; certificate management using Let's Encrypt.
I wanted to migrate this over to a Kubernetes setup (specifically, in Google Kubernetes Engine aka GKE). Googling confirmed my suspicion that not many people have been down this path.
Create a StatefulSet
Here's the YAML first; we'll break it down piece by piece:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
  name: floatingsun-net
spec:
  replicas: 1
  selector:
    matchLabels:
      app: floatingsun-net
  serviceName: floatingsun-net
  template:
    metadata:
      labels:
        app: floatingsun-net
    spec:
      containers:
      - image: ghost:alpine
        name: ghost
        env:
        - name: url
          value: "http://floatingsun.net"
        - name: admin__url
          value: "https://floatingsun.net"
        - name: mail__transport
          value: SMTP
        - name: mail__options__service
          value: Mailgun
        - name: mail__options__auth__user
          valueFrom:
            secretKeyRef:
              name: mail-secrets
              key: mailuser
        - name: mail__options__auth__pass
          valueFrom:
            secretKeyRef:
              name: mail-secrets
              key: mailpass
        - name: mail__options__port
          value: "2525"
        ports:
        - containerPort: 2368
        volumeMounts:
        - mountPath: /var/lib/ghost/content
          name: floatingsun-net-data
  volumeClaimTemplates:
  - metadata:
      name: floatingsun-net-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
Starting at the very top:
- Since a blog is decidedly NOT stateless, we're going to use a StatefulSet instead of a regular Deployment.
- I'm using the ghost:alpine image; you can choose another image if you prefer. Notice the peculiar setup of the url and admin__url env variables; ignore them for now, we'll come back to this later.
- Since a local postfix mailer won't work on Google Cloud, you will want to use a third-party service. Ghost recommends (and I second) Mailgun; they have a great free tier for GCP customers. Create a k8s secret for the Mailgun credentials with kubectl create secret generic mail-secrets --from-literal=mailuser=[user] --from-literal=mailpass=[pass]
- We'll mount a persistent volume at /var/lib/ghost/content to store the SQLite database, themes, images and any other content. I decided to stick with SQLite for simplicity even though past versions of this blog used MySQL. Besides, SQLite should easily be able to handle the tiny traffic volume!
- This persistent volume will be backed by a dynamic persistent volume claim. This way GKE can handle all the volume creation, and will also retain the volume for us (i.e. the volume won't be deleted automatically if/when you delete the StatefulSet).
Deploy this with kubectl apply -f statefulset.yaml. Now that we have a Pod running, let's expose it via a Service.
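Before creating that Service, it's worth a quick check that everything actually came up: the Pod is Running, Ghost's logs look clean, the volume claim is Bound, and the mail secret exists. A minimal sanity check, using the names from the manifest above (floatingsun-net-0 is the pod name a single-replica StatefulSet with this name produces):
kubectl get pods -l app=floatingsun-net   # expect floatingsun-net-0 in Running state
kubectl logs floatingsun-net-0            # Ghost should report that it booted and is listening on 2368
kubectl get pvc                           # the dynamically provisioned claim should be Bound
kubectl get secret mail-secrets           # confirm the Mailgun credentials secret exists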
Create a Service
apiVersion: v1
kind: Service
metadata:
  name: floatingsun-net
  labels:
    app: floatingsun-net
spec:
  selector:
    app: floatingsun-net
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 2368
The only notable items here are the type, set to NodePort, and the targetPort, set to the default Ghost port. I'm using NodePort here mainly because I wanted to use GKE's managed SSL feature. Another common option is a LoadBalancer service, which is also pretty useful for testing your setup before you are ready to cut over the DNS.
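For reference, the testing variant is the same manifest with just the type swapped; GKE then provisions an external IP you can hit directly before touching DNS. A sketch (the -test name is a hypothetical throwaway service, not part of the final setup):
apiVersion: v1
kind: Service
metadata:
  name: floatingsun-net-test   # hypothetical throwaway service for testing
spec:
  selector:
    app: floatingsun-net
  type: LoadBalancer           # GKE provisions an external IP for this service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 2368
Once kubectl get svc floatingsun-net-test shows an external IP, you can curl it directly.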
IMPORTANT: make sure to verify that the pod is running, Ghost is up and the service is able to route traffic to the pod before proceeding to the next step.
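A simple way to do that without touching DNS is to port-forward to the Service and hit Ghost locally; something like this, assuming the manifest above was saved as service.yaml:
kubectl apply -f service.yaml
kubectl port-forward svc/floatingsun-net 8080:80 &
curl -sI http://localhost:8080/   # expect an HTTP 200 from Ghost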
Create a Managed Certificate
I think we can all agree that Let's Encrypt has been a boon for security on the web: getting SSL certificates has never been easier or cheaper (FREE!). But you know what's better? Not having to manage a certificate yourself at all! Why not let Google do it for you?
GKE has a managed SSL certificate feature (in beta) that looked promising, and worked as advertised in my limited testing. Creating a certificate is simple:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: floatingsun-net
spec:
  domains:
  - floatingsun.net
The key detail is obviously the domains field. Once you apply the change, you can check the status by running kubectl get managedcertificate – the "Status" field will NOT transition from "Provisioning" to "Active" yet; that's expected. Per the documentation:
For the Google-managed certificate to be issued and its resource state to become ACTIVE, you must have a load balancer setup, including a target proxy and forwarding rule, that references the certificate resource, as well as a DNS configuration that resolves your domain's hostname to the forwarding rule's IP address.
So let's go ahead and create a static (reserved) IP for our domain. Note that this MUST be a global reservation, like so: gcloud compute addresses create floatingsun-net --global. You can get the actual address by running gcloud compute addresses describe floatingsun-net --global.
Once you have a static IP reserved, we're ready to create an ingress. This MUST be done after the pods are up and running.
Create the Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: floatingsun-net
  annotations:
    kubernetes.io/ingress.global-static-ip-name: floatingsun-net
    networking.gke.io/managed-certificates: floatingsun-net
spec:
  backend:
    serviceName: floatingsun-net
    servicePort: 80
Note the two annotations – those are what tell the ingress to use our managed certificate and the static IP from earlier. The backend spec should be self-explanatory.
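Once the ingress is applied and your DNS A record points at the reserved IP, the certificate should eventually provision (this can take a while). A rough way to keep an eye on it, assuming the manifest was saved as ingress.yaml:
kubectl apply -f ingress.yaml
kubectl get ingress floatingsun-net                   # should show the reserved static IP
kubectl describe managedcertificate floatingsun-net   # status should eventually flip from Provisioning to Active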
Remember the weird discrepancy between the url and admin__url values in the Ghost env? Here's the thing: the GKE ingress automatically creates health checks against the backend services. Specifically, it expects services to "serve a response with an HTTP 200 status to GET requests on the / path". The problem is that if you configure Ghost with an https url, it will return a 301 (permanent redirect), even though the server is actually running on localhost serving vanilla HTTP! This fails the health check, and consequently no traffic will be sent to the Ingress. I've asked about this in the Ghost forum – see the post for more details and related issues on GitHub.
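You can reproduce what the health checker sees by port-forwarding straight to the pod; a rough illustration, using the pod name from the StatefulSet above:
kubectl port-forward pod/floatingsun-net-0 2368:2368 &
curl -sI http://localhost:2368/ | head -n 1
# with url set to https://... this prints a 301 redirect and the health check fails;
# with url set to http://... it prints a 200 and the check passes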
Until then, a (sub-optimal) workaround is to use an HTTP url. Sadly, this does mean that some of the URLs Ghost generates when rendering post content (e.g. image links) will be insecure. I recommend still using the https version for admin__url.
So, was it worth it?
Congratulations, now you have an over-engineered Ghost setup. But is it worth it?
Let's put it this way: here are all the things I'm looking forward to NOT doing anymore:
- manually upgrading Ghost: just kubectl delete the pod and relax while GKE spins up a new pod with the latest image.
- worrying about managing or renewing the SSL certificate ever again.
- worrying about keeping the OS and various packages on the VM updated.
- SSH-ing into the VM to see what went wrong, configuring nginx by hand, managing a MySQL database, etc.
Please spare me any wisecracks about how I could just pay for Ghost Pro – where's the fun in that?