Home server setup
I have two Raspberry Pi 4s that have more than enough headroom to handle all of the services listed below. One is a 4GB model and the other is 8GB. They advertise their hostnames over mDNS but I still assign them static IPs in my router’s admin panel. I use Ansible and Docker to manage the servers.
I’ve named the servers after artifacts and places from Tolkien’s Middle-earth: monitoring is handled by Palantir, storage is on Arkenstone, plotters are controlled by Ithildin, and my SMB share is named Erebor. A palantir is a magical stone that shows what’s happening in other places, the Arkenstone is a unique gemstone, ithildin is a metallic inlay used for magical runes, and Erebor is a Dwarven kingdom in a mountain.
Monitoring
The 4GB Raspberry Pi monitors the network, blocks ads, and runs any services that collect metrics.
Grafana and VictoriaMetrics provide most of the visualization and data collection for the server, but I also needed to run an instance of InfluxDB to round out their capabilities. The problem is that some InfluxDB clients expect to run queries against the server, which VictoriaMetrics doesn’t support: it only provides an InfluxDB-compatible ingestion endpoint and expects queries to be written in PromQL or its own MetricsQL.
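As a sketch of the ingestion side, VictoriaMetrics accepts InfluxDB line protocol on its /write endpoint (8428 is its default port; the measurement name and values here are made up):

```shell
# Push a made-up air-quality measurement to VictoriaMetrics'
# InfluxDB-compatible /write endpoint. Writes work like this,
# but reads have to go through PromQL/MetricsQL instead of InfluxQL.
curl -X POST 'http://localhost:8428/write' \
  --data-binary 'office_air,room=office co2=612,pm25=4.2'
```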
I’m also running Pi-hole to block ads at the DNS level, more deeply than most browser-based ad blockers. It’s the scariest part of the system: if the server or the service goes down, DNS stops working for my network and I’ll need to log on to the router to point DNS back at my ISP. So far it’s been working well, blocking about 5% of traffic and showing me which host names are requested most often.
Because both servers use Docker for almost all of their services, I use docker_stats_exporter to show a per-container breakdown of CPU and memory usage.
For the Raspberry Pis, I had to add cgroup_enable=cpuset cgroup_enable=memory to /boot/cmdline.txt to enable tracking CPU and memory statistics per container.
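A sketch of that edit, working on a copy with a made-up existing command line (on the Pi itself you’d edit /boot/cmdline.txt and reboot):

```shell
# cmdline.txt must stay a single line, so the cgroup flags are
# appended to the end of line 1 rather than added on a new line.
printf 'console=serial0,115200 root=PARTUUID=0000 rootwait\n' > /tmp/cmdline.txt
sed -i '1 s/$/ cgroup_enable=cpuset cgroup_enable=memory/' /tmp/cmdline.txt
cat /tmp/cmdline.txt
```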
The speedtest-exporter runs every few hours to measure observed internet speeds and let my ISP know that their service is being monitored.
I wrote a little Go program that serves a metrics endpoint for how many articles are stored in my Instapaper account, broken down by folders. There’s an excellent library called instapaper-go-client that provides almost a one-to-one function interface with the API.
An AirGradient DIY Pro on my desk, flashed with firmware that serves Prometheus metrics on the local network, provides visualizations of particulate matter, CO2 concentration, and temperature in my office over time.
Storage
The 8GB Raspberry Pi 4 is used as a storage server, installed in an Argon 40 Neo case and connected to an OWC Mercury Elite Pro Quad drive bay.
I’m using ZFS for the large hard drives that serve as SMB shares using Samba via crazy-max/docker-samba. The two drives are set up in a mirror configuration, so I still have a local restore option in the event of a single hardware failure. I should have bought the two drives from different vendors, but when I expand the pool to four, I’ll pair up the old and new drives as mirrors. zfs_exporter makes the volume size available to the monitoring server and smart_exporter reports on any detectable hardware failures.
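As a sketch of that layout (the pool name and device paths are placeholders), the current two-way mirror and the planned four-drive expansion look like:

```shell
# Create the pool as a single two-way mirror (placeholder device paths).
zpool create tank mirror /dev/disk/by-id/ata-DRIVE_A /dev/disk/by-id/ata-DRIVE_B

# Later, add a second mirror vdev; data is then striped across
# the two mirrors, and each mirror still survives one drive failure.
zpool add tank mirror /dev/disk/by-id/ata-DRIVE_C /dev/disk/by-id/ata-DRIVE_D
```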
To back up the data on the drives to Backblaze B2, I’m using restic and autorestic so the configuration can be declarative instead of a shell script. It was relatively easy to set up and, with a post-backup hook, can report metrics on the last backup time and size.
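A minimal sketch of that declarative config, assuming the standard .autorestic.yml layout; the bucket name, location name, and hook script are made up:

```yaml
# .autorestic.yml (sketch)
backends:
  b2:
    type: b2
    path: my-backup-bucket:restic
locations:
  tank:
    from: /tank
    to:
      - b2
    hooks:
      success:
        - /usr/local/bin/report-backup-metrics.sh
```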
The only other service running on this server is Gitea to keep my notes and source code repositories.
Plotting
A 2GB Raspberry Pi 3 is connected to my AxiDraw MiniKit 2 with AxiCLI installed. It doesn’t have any persistent services running on it aside from the node exporter.
Roadmap
Meta
- Add proper host names and DNS for the servers, instead of relying on mDNS.
- Get Traefik working to add SSL termination and to reference services over DNS.
- Add Dashy to act as a central dashboard for all the services.
- Use Woodpecker to automatically deploy new services that I’m developing locally and find some way to serve the Docker containers.
- Have Watchtower automatically upgrade any Docker containers.
- Figure out a way to view logs from all services centrally, without needing to run docker logs from each host.
- Add a k3s cluster to run stateless services, like the metrics programs I’ve been writing.
- Add an IoT-specific, locked down 2.4GHz network.
- Automatically unlock the ZFS volume when the storage server is booting or, failing that, write an Ansible playbook to unlock it manually.
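For the manual-unlock fallback, a minimal playbook sketch (the host group, pool name, and the assumption that the dataset’s keylocation is already configured are all mine):

```yaml
# unlock-zfs.yml (sketch)
- hosts: arkenstone
  become: true
  tasks:
    - name: Load the encryption key for the pool
      command: zfs load-key tank
    - name: Mount all ZFS datasets
      command: zfs mount -a
```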
Hardware
- Organize the servers, router, hubs, and hard drives into a mini rack with a single power supply.
- Boot all the Raspberry Pis off SSDs instead of SD cards.
- Add a UPS so the drives can be shut down cleanly in the event of a power outage.
- Set up a small Windows PC for Nvidia GameStream.
- Set up an Nvidia Jetson Nano to monitor 3D printer cameras for failed prints and convert handwriting from my writing tablet.
- Convert the storage server to a Raspberry Pi CM4 connected to a 4-SATA carrier board and a 2.5Gbps Ethernet link.
Monitoring
- Write metrics services for Netlify analytics, Aranet4 and Atmotube air quality readings, Flume water usage, Rheem EcoNet water heater usage, washer and dryer duty cycles, kitchen appliances, Xfinity data usage, PG&E Share My Data electricity usage, thermostat requests, internet router statistics, and anything else I can possibly monitor.
- Set up an IotaWatt to report per-circuit energy usage to InfluxDB.
- Connect the EMU-2 to the monitoring server for monitoring electricity usage at the meter.
- Monitor the precise energy usage of certain outlets, like the one powering these servers.
Storage
- Add automatic restores of the backed up data to validate that files can be recovered in the event of local data loss.
- Simplify the ZFS volume layout.
- Periodically run ZFS maintenance routines.
- Turn off the hard drives and Samba server when I go to sleep.
- Add a photo organization service and catalog all of my photos, automatically downloading them from connected phones and cameras.
- Accept scans from any document scanners wirelessly.
- Automatically archive all links to external websites in my notes.
- Start running Paperless-ngx to organize scanned documents.
- Add rmFakecloud to store any notebooks created on my writing tablet.
- Find some service that can visualize financial data locally.
Experimental
- Connect my 3D printer to the network again using OctoPrint.
- Connect both pen plotters to Ithildin and write a service for sending jobs to them.
- Try out PXE (network) booting any Raspberry Pis used as cluster nodes.