QVM - Lightweight QEMU Development VM Wrapper
A standalone CLI tool for running commands in an isolated NixOS VM with persistent state and shared caches.
Motivation
Complex per-project VM systems create too many qcow2 images (~7GB each). QVM provides a simpler approach:
- One master image shared across all projects
- Persistent overlay for VM state
- Shared caches for cargo, pnpm, etc.
- Mount any directory as workspace
Primary use case: Running AI coding agents (opencode, etc.) in isolation to prevent host filesystem access while maintaining build cache performance.
Security Model
Full VM isolation (not containers):
- Container escapes via kernel exploits are a known attack surface
- VM escapes are rare - hypervisor boundary is fundamentally stronger
- For long unattended AI sessions, VM isolation is the safer choice
9p mount restrictions:
- Only explicitly mounted directories are accessible
- Uses `security_model=mapped-xattr` (no passthrough)
- Host filesystem outside mounts is invisible to the VM
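On the guest side, each share is attached by its 9p mount tag over virtio. A dry-run sketch of the equivalent manual mount command (the `msize` value here is an assumed tuning choice, not mandated anywhere in this design):

```shell
# Print the guest-side mount command for one share (dry run, no root needed).
# msize (max 9p message size) is an illustrative tuning value.
tag=cargo
target=/cache/cargo
opts="trans=virtio,version=9p2000.L,msize=104857600"
echo "mount -t 9p -o $opts $tag $target"
```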
Architecture
Host layout (9p arrows show the mount target inside the VM):
~/.local/share/qvm/
└── base.qcow2 (read-only base image, ~7GB)
~/.local/state/qvm/
├── overlay.qcow2 (persistent VM state, CoW)
├── vm.pid (QEMU process ID)
├── ssh.port (forwarded SSH port)
├── serial.log (console output)
└── workspaces.json (mounted workspace registry)
~/.cache/qvm/
├── cargo-home/ ──9p──▶ /cache/cargo/
├── cargo-target/ ──9p──▶ /cache/target/
├── pnpm-store/ ──9p──▶ /cache/pnpm/
└── sccache/ ──9p──▶ /cache/sccache/
$(pwd) ──9p──▶ /workspace/{hash}/
~/.config/qvm/
└── flake/ (user's NixOS flake definition)
├── flake.nix
└── flake.lock
Multiple Workspaces
When qvm run is called from different directories, each gets mounted simultaneously:
/workspace/abc123/ ← /home/josh/projects/foo
/workspace/def456/ ← /home/josh/projects/bar
The hash is derived from the absolute path. Commands run with CWD set to their workspace.
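The path-to-ID mapping above can be sketched as a short content hash of the absolute path (illustrative scheme; the real tool may pick a different algorithm or ID length):

```shell
# Derive a stable workspace ID from an absolute path (illustrative scheme).
ws_hash() {
  printf '%s' "$1" | sha256sum | cut -c1-6
}
ws_hash /home/josh/projects/foo   # prints a stable 6-char hex ID
```

The same directory always maps to the same ID, so repeated `qvm run` calls from one project reuse one mount.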
Shared Caches
Caches are mounted from host, so:
- Cargo dependencies shared across all projects
- pnpm store shared (content-addressable)
- sccache compilation cache shared
- Each project still uses its own Cargo.lock/package.json - different versions coexist
CLI Interface
# Run a command in VM (mounts $PWD as workspace, CDs into it)
qvm run opencode
qvm run "cargo build --release"
qvm run bash # interactive shell
# VM lifecycle
qvm start # start VM daemon if not running
qvm stop # graceful shutdown
qvm status # show VM state, SSH port, mounted workspaces
# Maintenance
qvm rebuild # rebuild base image from flake
qvm reset # wipe overlay, start fresh (keeps base image)
# Direct access
qvm ssh # SSH into VM
qvm ssh -c "command" # run command via SSH
Behavior Details
qvm run <command>:
- If VM not running, start it (blocking until SSH ready)
- Mount $PWD into VM if not already mounted (via 9p hotplug or pre-mount)
- SSH in and execute `cd /workspace/{hash} && <command>`
- Stream stdout/stderr to terminal
- Exit with command's exit code
- VM stays running for next command
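The steps above could be wired together roughly like this. Helper names such as `vm_running` and `mount_workspace`, and the `dev@localhost` login, are assumptions for the sketch, not the final implementation:

```shell
# Rough shape of qvm-run; vm_running and mount_workspace are assumed helpers
# from lib/common.sh, and "dev" matches the default flake's user.
STATE_DIR="${STATE_DIR:-$HOME/.local/state/qvm}"

workspace_path() {   # stable guest path for a host directory
  printf '/workspace/%s' "$(printf '%s' "$1" | sha256sum | cut -c1-6)"
}

qvm_run() {
  vm_running || qvm start                        # start VM if needed
  mount_workspace "$PWD"                         # ensure 9p mount exists
  ssh -p "$(cat "$STATE_DIR/ssh.port")" dev@localhost \
    "cd $(workspace_path "$PWD") && $*"
  # ssh exits with the remote command's status, so the caller sees it
}
```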
qvm start:
- Check if VM already running (via pid file)
- If base.qcow2 missing, run `qvm rebuild` first
- Create overlay.qcow2 if missing (backed by base.qcow2)
- Launch QEMU with KVM, virtio, 9p mounts
- Wait for SSH to become available
- Print SSH port
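Two of the steps above, sketched against the directory layout described earlier (the SSH readiness probe is one reasonable approach, not a mandated one):

```shell
# Overlay creation: a CoW image backed by the read-only base image.
create_overlay() {
  qemu-img create -f qcow2 \
    -b "$HOME/.local/share/qvm/base.qcow2" -F qcow2 \
    "$HOME/.local/state/qvm/overlay.qcow2"
}

# Block until sshd answers on the forwarded port (up to ~60 s).
wait_for_ssh() {
  local port=$1 tries=60
  while ! ssh -o ConnectTimeout=1 -o StrictHostKeyChecking=no \
        -p "$port" dev@localhost true 2>/dev/null; do
    tries=$((tries - 1)); [ "$tries" -gt 0 ] || return 1
    sleep 1
  done
}
```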
qvm stop:
- Send ACPI shutdown to VM
- Wait for graceful shutdown (timeout 30s)
- If timeout, SIGKILL QEMU
- Clean up pid file
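A sketch of that sequence. Since the QEMU configuration in this document uses `-monitor none`, triggering ACPI shutdown needs some control channel: this sketch assumes a QMP socket (`-qmp unix:...,server,nowait`) was added to the QEMU command line and uses `socat` to talk to it, both of which are assumptions beyond the spec above:

```shell
# Graceful stop: ACPI power button via an assumed QMP socket, 30 s grace,
# then SIGKILL as a last resort.
stop_vm() {
  local pid
  pid=$(cat "$STATE_DIR/vm.pid" 2>/dev/null) || return 0   # not running: no-op
  printf '%s\n' '{"execute":"qmp_capabilities"}' '{"execute":"system_powerdown"}' \
    | socat - "UNIX-CONNECT:$STATE_DIR/qmp.sock"           # ACPI shutdown
  for _ in $(seq 30); do                                   # wait up to 30 s
    kill -0 "$pid" 2>/dev/null || break
    sleep 1
  done
  kill -0 "$pid" 2>/dev/null && kill -9 "$pid"             # hard kill on timeout
  rm -f "$STATE_DIR/vm.pid"
}
```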
qvm rebuild:
- Run `nix build` on ~/.config/qvm/flake
- Copy result to ~/.local/share/qvm/base.qcow2
- If VM running, warn user to restart for changes
qvm reset:
- Stop VM if running
- Delete overlay.qcow2
- Delete workspaces.json
- Next start creates fresh overlay
Default NixOS Flake
The tool ships with a default flake template. User can customize at ~/.config/qvm/flake/.
Included by Default
{
  # Base system
  boot.kernelPackages = pkgs.linuxPackages_latest;

  # Shell
  programs.zsh.enable = true;
  users.users.dev = {
    isNormalUser = true;
    shell = pkgs.zsh;
    extraGroups = [ "wheel" ];
  };

  # Essential tools
  environment.systemPackages = with pkgs; [
    git
    vim
    tmux
    htop
    curl
    wget
    jq
    ripgrep
    fd
    # Language tooling (user can remove/customize)
    rustup
    nodejs_22
    pnpm
    python3
    go
  ];

  # SSH server
  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = false;
  };

  # 9p mounts (populated at runtime)
  fileSystems."/cache/cargo" = { device = "cargo"; fsType = "9p"; options = [...]; };
  fileSystems."/cache/target" = { device = "target"; fsType = "9p"; options = [...]; };
  fileSystems."/cache/pnpm" = { device = "pnpm"; fsType = "9p"; options = [...]; };
  fileSystems."/cache/sccache" = { device = "sccache"; fsType = "9p"; options = [...]; };

  # Environment
  environment.variables = {
    CARGO_HOME = "/cache/cargo";
    CARGO_TARGET_DIR = "/cache/target";
    PNPM_HOME = "/cache/pnpm";
    SCCACHE_DIR = "/cache/sccache";
  };

  # Disk size
  virtualisation.diskSize = 20480; # 20GB
}
User Customization
Users edit ~/.config/qvm/flake/flake.nix to:
- Add/remove packages
- Change shell configuration
- Add custom NixOS modules
- Pin nixpkgs version
- Include their dotfiles
After editing: qvm rebuild
QEMU Configuration
# Built as a bash array so the grouping comments stay valid shell.
qemu_args=(
  -enable-kvm
  -cpu host
  -m "${MEMORY:-8G}"
  -smp "${CPUS:-4}"

  # Disk (overlay backed by base)
  -drive file=overlay.qcow2,format=qcow2,if=virtio

  # Network (user mode with SSH forward)
  -netdev "user,id=net0,hostfwd=tcp::${SSH_PORT}-:22"
  -device virtio-net-pci,netdev=net0

  # 9p shares for caches
  -virtfs "local,path=${CACHE_DIR}/cargo-home,mount_tag=cargo,security_model=mapped-xattr"
  -virtfs "local,path=${CACHE_DIR}/cargo-target,mount_tag=target,security_model=mapped-xattr"
  -virtfs "local,path=${CACHE_DIR}/pnpm-store,mount_tag=pnpm,security_model=mapped-xattr"
  -virtfs "local,path=${CACHE_DIR}/sccache,mount_tag=sccache,security_model=mapped-xattr"

  # 9p shares for workspaces (added dynamically or pre-mounted)
  -virtfs "local,path=/path/to/project,mount_tag=ws_abc123,security_model=mapped-xattr"

  # Console
  -serial file:serial.log
  -monitor none
  -nographic

  # Daemonize
  -daemonize
  -pidfile vm.pid
)
qemu-system-x86_64 "${qemu_args[@]}"
Resource Allocation
Default: 50% of RAM, 90% of CPUs (can be configured via env vars or config file later)
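Those defaults can be derived at start-up roughly like this (variable names match the QEMU snippet above; the exact rounding is illustrative):

```shell
# Compute defaults: 50% of host RAM, 90% of host CPUs (min 1).
# MEMORY / CPUS env vars take precedence when set.
total_mem_mb=$(awk '/^MemTotal/ {print int($2 / 1024)}' /proc/meminfo)
MEMORY="${MEMORY:-$(( total_mem_mb / 2 ))M}"
cpus=$(( $(nproc) * 9 / 10 )); [ "$cpus" -ge 1 ] || cpus=1
CPUS="${CPUS:-$cpus}"
echo "MEMORY=$MEMORY CPUS=$CPUS"
```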
Implementation Plan
Phase 1: Core Scripts
- `qvm` main script - dispatcher for subcommands
- `qvm-start` - launch QEMU with all mounts
- `qvm-stop` - graceful shutdown
- `qvm-run` - mount workspace + execute command via SSH
- `qvm-ssh` - direct SSH access
- `qvm-status` - show state
Phase 2: Image Management
- `qvm-rebuild` - build image from flake
- `qvm-reset` - wipe overlay
- Default flake template - copy to ~/.config/qvm/flake on first run
Phase 3: Polish
- First-run experience - auto-create dirs, copy default flake, build image
- Error handling - clear messages for common failures
- README - usage docs
File Structure (New Repo)
qvm/
├── bin/
│ ├── qvm # Main dispatcher
│ ├── qvm-start
│ ├── qvm-stop
│ ├── qvm-run
│ ├── qvm-ssh
│ ├── qvm-status
│ ├── qvm-rebuild
│ └── qvm-reset
├── lib/
│ └── common.sh # Shared functions
├── flake/
│ ├── flake.nix # Default NixOS flake template
│ └── flake.lock
├── flake.nix # Nix flake for installing qvm itself
├── README.md
└── LICENSE
Edge Cases & Decisions
| Scenario | Behavior |
|---|---|
| `qvm run` with no args | Error: "Usage: qvm run <command>" |
| `qvm run` while image missing | Auto-trigger `qvm rebuild` first |
| `qvm run` from same dir twice | Reuse existing mount, just run command |
| `qvm stop` when not running | No-op, exit 0 |
| `qvm rebuild` while VM running | Warn but proceed; user must restart VM |
| `qvm reset` while VM running | Stop VM first, then reset |
| SSH not ready after 60s | Error with troubleshooting hints |
| QEMU crashes | Detect via pid, clean up state files |
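The last two rows suggest a liveness helper along these lines: a pid file alone is not proof the process is alive, so probe it and tidy stale state (helper name and `$STATE_DIR` are illustrative):

```shell
# True only if the recorded QEMU process is actually alive; otherwise
# removes stale state files left behind by a crash.
vm_running() {
  local pidfile="$STATE_DIR/vm.pid"
  [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null && return 0
  rm -f "$pidfile" "$STATE_DIR/ssh.port"   # stale or missing: clean up
  return 1
}
```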
Not In Scope (Explicit Exclusions)
- Multi-VM: Only one VM at a time
- Per-project configs: Single global config (use qai for project-specific VMs)
- Windows/macOS: Linux + KVM only
- GUI/Desktop: Headless only
- Snapshots: Just overlay reset, no checkpoint/restore
- Resource limits: Trust the VM, no cgroups on host
Dependencies
- `qemu` (with KVM support)
- `nix` (for building images)
- `openssh` (ssh client)
- `jq` (for workspace registry)
Future Considerations (Not v1)
- Config file for memory/CPU/ports
- Tab completion for zsh/bash
- Systemd user service for auto-start
- Health checks and auto-restart
- Workspace unmount command