DIY Distributed HomeServer
Now that social isolation is in full effect, I find myself with some extra time on my hands. Instead of watching Netflix/Disney+/Prime, I decided to migrate my home automation to a new server. My old installation ran on FreeBSD. Although super stable and flexible with the help of Jails (yes, another container-like framework), I often had difficulty installing new software. So I wanted to migrate to a more open container platform like Docker. I say "like Docker" because I think Docker isn't as popular as it used to be. For example, as of Fedora 31, Docker is no longer part of the distribution; it ships with Podman as the default container platform.
Now, you might ask yourself: what does he want to do, run a nuclear reactor from home? No, far simpler than that. I want to run several servers. For example, I'm running PiHole for DNS blocking, a Ubiquiti controller, and of course Home Assistant for some home automation fun.
For my system I have the following requirements:
- Out of the box scheduling
- Web based UI so that I can manage stuff from my couch with my phone
- Flexible, it needs to be able to run more than just Docker
- Easy to install and maintain
- Low footprint
I first looked at using OpenShift for my home server, since I have experience using it in a production environment as a developer. OpenShift has some cool features regarding networking, and I can read and write the configuration YAML needed to deploy a container. So my first action was to try to install it on my server. After reading the instructions, "easy to install and maintain" is not the first thing that comes to mind, let alone a low footprint. Of course I could use crc (CodeReady Containers), a super easy way to install OpenShift on your dev machine. Easy to install, but man, it eats your resources faster than a Chrome browser.
Ok, so no OpenShift for me. How about the next best thing, plain vanilla Kubernetes (or as the cool kids say, k8s)? This appears to be a hassle to install as well, and as a non-ops guy it seems hard to maintain too. Another no-go. I could, however, install MicroK8s, the lightweight version of Kubernetes. After some reading it still felt heavy: it runs several containers just in order to operate. Still, MicroK8s goes on the shortlist. It looks like a cool platform, and I have experience with Kubernetes, albeit through OpenShift, which in my opinion still counts as experience.
So after some more googling and soul searching, I briefly thought about going native Docker! But luckily I came across a nice (beta) product called Nomad.
Their website calls it: “A simple and flexible workload orchestrator to deploy and manage containers and non-containerized applications across on-prem and clouds at scale.”
Cool, I want that, and who am I not to believe such a statement! I looked at some how-tos on installing it, and it looked like an easy system to install, run, maintain and, last but not least, upgrade. Although it is still in the beta stage of development, it has the look and feel of a product I could use for my simple use case (running a couple of containers and other stuff on my server).
So what is Nomad, and why should you take a look at it as well? Nomad is an orchestrator of jobs, and a job can be a lot of things. Out of the box, a job can use one of the following drivers (https://nomadproject.io/docs/drivers/):
- Docker
- Isolated Exec
- Raw Exec
- Java
- QEMU (had to look it up; in short: run a VM)
- Community (build your own driver): LXC, Podman, Jails, Firecracker, etc. already exist
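To give an idea of what a job looks like, here is a minimal sketch of a Nomad job file that runs PiHole under the Docker driver. The job name, datacenter, image, ports and resource numbers are all illustrative, not taken from my actual setup:

```hcl
# pihole.nomad -- illustrative job specification, not a production config
job "pihole" {
  datacenters = ["home"]   # assumed datacenter name
  type        = "service"  # long-running service, restarted if it dies

  group "dns" {
    task "pihole" {
      driver = "docker"    # one of the built-in drivers listed above

      config {
        image = "pihole/pihole:latest"
      }

      resources {
        cpu    = 200  # MHz
        memory = 256  # MB
      }
    }
  }
}
```

You would then submit it with `nomad job run pihole.nomad` and check on it with `nomad status pihole`, either from the CLI or through the web UI.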
Nomad provides an interface (a CLI and a web-based UI) that you can use to create and schedule a job. Now, on its own this is all nice and fun, but where does it actually run? That is the second part of the orchestration: Nomad allows you to create a cluster of machines that can run the jobs.
Nomad allows you to set up a cluster of servers across multiple regions that can run jobs for you. In the picture above, the "Nomad Servers" are the admin servers of Nomad; they decide what to run and where. The clients are the actual servers that do the hard work of running the tasks. Adding a client to the pool is as simple as installing the Nomad agent and configuring it to join the cluster. The Nomad servers, based on the configuration, determine the capabilities of each client and allocate tasks to it.
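Joining a client really is mostly a matter of a small agent configuration file. A sketch, assuming a single Nomad server reachable at 192.168.1.10 (the address and paths are made up for the example):

```hcl
# /etc/nomad.d/client.hcl -- illustrative client configuration
data_dir = "/var/lib/nomad"

client {
  enabled = true
  # Address of the Nomad server to join; 4647 is Nomad's default RPC port.
  servers = ["192.168.1.10:4647"]
}
```

Starting the agent with `nomad agent -config /etc/nomad.d` makes the machine join the cluster, after which the servers can start allocating tasks to it.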
Now that we have seen a short analysis of what Nomad is and what it can do, the question is: why would I use it over Kubernetes, the current orchestrator to beat? Keep in mind that I am looking for a simple orchestration solution with which I can host a range of workloads.
| MicroK8s | Nomad |
|---|---|
| Container orchestration | Orchestration of containers and more (e.g. Java, Linux exec, etc.) |
| Runs multiple containers to run the system | Single lightweight agent |
| Meant for local installation for development purposes | Single-node cluster uses the same installer as the enterprise multi-region, multi-cloud setup |
| Extra features: container registry, DNS, Knative, Istio | Besides orchestration: none |
Now to answer the question I stated in the beginning: what should I use as the orchestration server for my home server? The answer is Nomad, because of its simplicity and ease of setup. I like MicroK8s and will use it for research and development purposes, but MicroK8s is meant for exactly that, not for running production. Although I have no doubt I could use it for my "production" home server, I still prefer the simplicity of Nomad and the fact that it only requires a single executable (the agent) to install. And although they are all very cool, I don't need Knative, Istio, etc. Although a container registry would have been nice!
In my next post I will show you what my Nomad setup is, what limitations I found, and what workarounds I had to use. So stay tuned!!