Running NixOS in a VM

When I first started building a home server in February 2022, NixOS was a tempting option because the entire system would be defined by configuration files. While I got VMs working with NixOS, I struggled to cross-compile them from an Intel laptop to be deployed on ARM64 Raspberry Pis and eventually moved to Ansible and Docker Compose. This note describes how I was going to test server configurations before deploying them.

You can use any directory structure you’d like for the configuration repository – it’s completely arbitrary – but I found a few people following the common/, hosts/, and ops/ convention. Most of this boilerplate is from Xe’s article on Morph and NixOS.
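For orientation, the files described in this note end up laid out roughly like this (ops/ is part of the convention but goes unused here):

.
├── default.nix
├── common
│   ├── default.nix
│   ├── generic-qemu.nix
│   └── users
│       └── default.nix
├── hosts
│   └── vilya
│       └── configuration.nix
└── ops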

To run NixOS under QEMU, common/generic-qemu.nix is slightly modified from Xe’s generic-libvirtd.nix:

{ modulesPath, ... }: {
  imports = [ (modulesPath + "/profiles/qemu-guest.nix") ];

  services.openssh.enable = true;

  boot.initrd.availableKernelModules =
    [ "ata_piix" "uhci_hcd" "virtio_pci" "sr_mod" "virtio_blk" ];
  boot.initrd.kernelModules = [ ];
  boot.kernelModules = [ "kvm-intel" ];
  boot.extraModulePackages = [ ];

  boot.kernelParams = [
    "console=tty1"
    "console=ttyS0,115200"
  ];

  fileSystems."/" = {
    device = "/dev/vda1";
    fsType = "ext4";
  };
}

Crucially, boot.kernelParams points the guest's console at the serial port, so when QEMU is launched from the command line with -nographic, the VM's console is echoed to the terminal it was started from. That matters for a headless NixOS, since there's no graphical display to fall back on.

The host I’m using is named Vilya, and its configuration is at hosts/vilya/configuration.nix:

{ config, pkgs, ... }: {
  imports = [
    ../../common/generic-qemu.nix
    ../../common
  ];

  networking.hostName = "vilya";
  networking.firewall.enable = false;
}

This pulls in all of the common configuration and declares it as a QEMU guest machine. It could also import any host-specific services like Grafana or Prometheus, potentially defined in a hosts/vilya/services/ directory.
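As a sketch of what that could look like, a hypothetical hosts/vilya/services/monitoring.nix might contain nothing more than the enable switches (the file name is my own placeholder):

{ ... }: {
  # Host-specific services for vilya, imported from configuration.nix.
  services.prometheus.enable = true;
  services.grafana.enable = true;
}

Adding ./services/monitoring.nix to the imports list in hosts/vilya/configuration.nix would wire it in.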

The rest of the configuration is common to most hosts, but I’ll include it here for completeness. common/default.nix holds everything shared by all hosts and imports common/users/default.nix, which should define your user. This is the common/default.nix:

{ ... }: {
  imports = [ ./users ];

  boot.cleanTmpDir = true;
  nix.settings.auto-optimise-store = true;

  services.journald.extraConfig = ''
    SystemMaxUse=100M
    MaxFileSec=7day
  '';

  services.resolved = {
    enable = true;
    dnssec = "false";
  };
}

And my common/users/default.nix:

{ config, pkgs, ... }: {
  users.users.matt = {
    isNormalUser = true;
    # This is a personal preference, but hints at the possible customization.
    shell = pkgs.fish;
    openssh.authorizedKeys.keys = [
      "ssh-ed25519 <key>"
    ];
  };

  users.users.root.openssh.authorizedKeys.keys =
      config.users.users.matt.openssh.authorizedKeys.keys;
}

I’ve removed my public key, but this lets the user log in over SSH without entering a password. The user doesn’t have a password at all, so I may need to add one by setting the user’s hashedPassword option to the output of mkpasswd -m sha-512.
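If I do, it’s a one-line addition; a sketch, with the placeholder standing in for the generated hash:

# Added alongside the user definition in common/users/default.nix.
users.users.matt.hashedPassword = "<output of mkpasswd -m sha-512>";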

Deployment

There’s probably a way to drive the whole build-and-run workflow from a single Nix file, but I don’t know Nix well enough to write that. Instead, default.nix at the root of the repository defines a shell with the tools I need to build and run VMs:

with import <nixpkgs> {};

mkShell {
  name = "lab";
  buildInputs = [
    pkgs.qemu
    pkgs.nixos-generators
  ];
}

Running nix-shell makes these tools available in the current shell. To create a virtual disk to act as the storage medium, I used qemu-img:

; qemu-img create -f qcow2 vdisk1 10G

This creates a qcow2 image named vdisk1 in the current working directory with a virtual size of 10GiB; the file itself starts small and grows as the guest writes to it. To generate the ISO to boot from:

; nixos-generate -c hosts/vilya/configuration.nix -f iso

This step takes about 5 minutes on my ThinkPad X270, which was a bit too long for rapid feedback. And finally, to run the VM:

; qemu-system-x86_64 -enable-kvm -nographic -m 2048 -boot d \
    -cdrom <iso-path> -hda vdisk1 \
    -net user,hostfwd=tcp::10022-:22 -net nic

This forwards the guest’s SSH port 22 to port 10022 on the host (-net user,hostfwd=tcp::10022-:22), while -nographic leaves the controlling terminal attached to the VM’s serial console. I can access the machine over SSH with ssh matt@localhost -p 10022.
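As a small convenience, a host entry in ~/.ssh/config saves retyping the port; the alias name here is my own choice:

Host vilya-vm
    HostName localhost
    Port 10022
    User matt

With that in place, ssh vilya-vm does the same thing.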