Home server setup

I have two Raspberry Pi 4s that have more than enough headroom to handle all of the services listed below. One is a 4GB model and the other is 8GB. They advertise their hostnames over mDNS but I still assign them static IPs in my router’s admin panel. I use Ansible and Docker to manage the servers.

I’ve named the servers after artifacts and places from Tolkien’s Middle-earth: monitoring is handled by Palantir, storage is on Arkenstone, plotters are controlled by Ithildin, and my SMB share is named Erebor. A palantir is a magical stone that shows what’s happening in other places, the Arkenstone is a unique gemstone, ithildin is a metallic inlay used for magical runes, and Erebor is a Dwarven kingdom in a mountain.


The 4GB Raspberry Pi monitors the network, blocks ads, and runs any services that collect metrics.

Grafana and VictoriaMetrics handle most of the visualization and data collection, but I also needed to run an instance of InfluxDB to round out their capabilities. The problem is that some InfluxDB clients expect to run InfluxQL queries against the server, which VictoriaMetrics doesn’t support: it only provides an InfluxDB-compatible ingestion endpoint and expects queries in PromQL or MetricsQL, its own query language.

I’m also running Pi-hole to block ads at the DNS level, more deeply than most browser ad blockers, and it’s the scariest part of the system: if the server or the service goes down, DNS resolution stops working for my whole network and I’ll need to log on to the router to point DNS back at my ISP. So far it’s been working well, blocking about 5% of DNS queries and showing me which hostnames are requested most often.

Because both servers run almost all of their services in Docker, I use docker_stats_exporter to get a per-container breakdown of CPU and memory usage. On Raspberry Pis, I had to add cgroup_enable=cpuset cgroup_enable=memory to /boot/cmdline.txt to enable tracking CPU and memory statistics by container.
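For reference, the flags are appended to the single existing line in cmdline.txt (on recent Raspberry Pi OS releases the file has moved to /boot/firmware/cmdline.txt). Everything before the two cgroup flags below is illustrative — your existing line will differ:

```
console=tty1 root=PARTUUID=xxxxxxxx-02 rootfstype=ext4 fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_enable=memory
```

A reboot is needed before Docker starts reporting per-container memory figures.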

The speedtest-exporter runs every few hours to measure observed internet speeds and let my ISP know that their service is being monitored.

I wrote a little Go program that serves a metrics endpoint reporting how many articles are stored in my Instapaper account, broken down by folder. There’s an excellent library called instapaper-go-client that provides a nearly one-to-one mapping onto the API.

An AirGradient DIY Pro sits on my desk running firmware that serves Prometheus metrics on the local network, giving me charts of the particulate matter, CO2 concentration, and temperature of my office over time.
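Pulling it into the dashboards is then just one more scrape target. A Prometheus-style scrape config (which VictoriaMetrics can consume via its -promscrape.config flag or vmagent) might look like this — the hostname and port are placeholders for wherever your device answers:

```yaml
scrape_configs:
  - job_name: airgradient
    scrape_interval: 1m
    static_configs:
      - targets: ["airgradient.local:9926"]  # hostname and port are assumptions
```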


The 8GB Raspberry Pi 4 is used as a storage server, installed in an Argon 40 Neo case and connected to an OWC Mercury Elite Pro Quad drive bay.

The large hard drives are in a ZFS pool and exposed as SMB shares with Samba, via crazy-max/docker-samba. The two drives are set up in a mirror configuration, so I still have a local restore option in the event of a single hardware failure. I should have bought the two drives from different vendors, but when I expand the pool to four, I’ll pair up the old and new drives as mirrors. zfs_exporter makes the volume size available to the monitoring server, and smart_exporter reports on any detectable hardware failures.
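That layout maps onto a couple of zpool commands. The pool name and device paths below are placeholders; pairing old with new drives in each vdev would additionally involve zpool attach/detach to shuffle the existing mirror:

```
# initial two-drive mirror
zpool create tank mirror /dev/disk/by-id/ata-DRIVE_A /dev/disk/by-id/ata-DRIVE_B

# later: grow the pool by adding a second mirror vdev
zpool add tank mirror /dev/disk/by-id/ata-DRIVE_C /dev/disk/by-id/ata-DRIVE_D
```

Using /dev/disk/by-id paths rather than /dev/sdX keeps the pool stable when the drive bay enumerates devices in a different order.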

To back up the data on the drives to Backblaze B2, I’m using restic and autorestic so the configuration can be declarative instead of a shell script. It was relatively easy to set up and, with a post-backup hook, can report metrics on the last backup time and size.
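As a rough sketch of what the declarative setup looks like — the bucket, paths, and hook script are placeholders, and the exact schema is worth checking against the autorestic docs:

```yaml
version: 2

backends:
  b2:
    type: b2
    path: my-bucket:arkenstone

locations:
  tank:
    from: /tank
    to: b2
    hooks:
      after:
        - /usr/local/bin/push-backup-metrics.sh  # placeholder post-backup hook
```

The post-backup hook is where the last-backup time and size get pushed to the monitoring server.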

The only other service running on this server is Gitea to keep my notes and source code repositories.


A 2GB Raspberry Pi 3 is connected to my AxiDraw MiniKit 2 with AxiCLI installed. It doesn’t have any persistent services running on it aside from the node exporter.