QVM - Lightweight QEMU Development VM Wrapper
A standalone CLI tool for running commands in an isolated NixOS VM with persistent state and shared caches.
Motivation
Running AI coding agents on a developer machine poses a security challenge: they need strong isolation from the host. Containers provide some isolation, but kernel exploits remain a real attack surface. QVM uses full VM isolation for stronger security guarantees while maintaining developer ergonomics.
Why QVM?
- VM isolation over container isolation - Hypervisor boundary is fundamentally stronger than kernel namespaces
- One master image, shared caches - Single ~7GB base image instead of per-project images
- Transparent workspace mounting - Current directory automatically available in VM
- Persistent state - VM overlay preserves installed tools and configuration
- Shared build caches - Cargo, pnpm, and sccache caches shared across all projects
Primary use case: Running AI coding agents (opencode, aider, cursor) in isolation to prevent unintended host filesystem access while maintaining build cache performance.
Installation
System-wide installation (NixOS flake)
Add QVM to your NixOS configuration:
{
  inputs.qvm.url = "github:yourusername/qvm";

  # ...then, in your system configuration:
  environment.systemPackages = [
    inputs.qvm.packages.${system}.default
  ];
}
Direct execution
Run without installation:
nix run github:yourusername/qvm -- start
Development shell
For local development:
git clone https://github.com/yourusername/qvm
cd qvm
nix develop
Quick Start
# Start the VM (auto-builds base image on first run)
qvm start
# Run commands in VM with current directory mounted
qvm run opencode
qvm run cargo build --release
qvm run npm install
# Interactive SSH session
qvm ssh
# Check VM status
qvm status
# Stop VM
qvm stop
The first qvm start will build the base image, which takes several minutes. Subsequent starts are fast (3-5 seconds).
Commands
qvm start
Start the VM daemon. Creates base image and overlay if they don't exist.
qvm start
Behavior:
- Checks if VM already running (no-op if yes)
- Builds base image if missing (via qvm rebuild)
- Creates overlay.qcow2 if missing (copy-on-write backed by base image)
- Launches QEMU with KVM acceleration
- Mounts all registered workspaces and cache directories via 9p
- Waits for SSH to become available
- Exits when SSH is ready (VM runs as background daemon)
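For orientation, the launch amounts to a single QEMU invocation along these lines. This is a simplified sketch, not the exact command QVM builds; the 9p shares and networking flags are covered in the Security Model and Architecture sections.
# Simplified sketch of the QEMU launch (illustrative, not QVM's exact command)
qemu-system-x86_64 \
  -enable-kvm -m 8G -smp 4 \
  -drive file="$HOME/.local/state/qvm/overlay.qcow2",if=virtio,format=qcow2 \
  -display none -serial file:"$HOME/.local/state/qvm/serial.log" \
  -daemonize -pidfile "$HOME/.local/state/qvm/vm.pid"
  # ...plus one -virtfs share per workspace/cache directory and user-mode
  # networking with an SSH host-forward (see Security Model and Architecture)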
qvm stop
Gracefully stop the running VM.
qvm stop
Behavior:
- Sends ACPI shutdown signal to VM
- Waits up to 30 seconds for graceful shutdown
- Force kills QEMU if timeout exceeded
- Cleans up PID file
- No-op if VM not running
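The stop logic boils down to a small wait-and-force-kill loop, roughly as follows. This is a sketch based on the behavior above; how the ACPI signal is delivered is not shown here.
pid=$(cat ~/.local/state/qvm/vm.pid)
# ...ACPI shutdown is requested first (e.g. via the QEMU monitor)...
for _ in $(seq 1 30); do                        # wait up to 30 seconds
  kill -0 "$pid" 2>/dev/null || break
  sleep 1
done
kill -0 "$pid" 2>/dev/null && kill -9 "$pid"    # force kill on timeout
rm -f ~/.local/state/qvm/vm.pid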
qvm run
Execute a command in the VM with current directory mounted as workspace.
qvm run <command> [args...]
Examples:
qvm run cargo build
qvm run npm install
qvm run "ls -la"
qvm run bash # Interactive shell in VM
Behavior:
- Generates hash from current directory absolute path
- Registers workspace in ~/.local/state/qvm/workspaces.json
- Auto-starts VM if not running
- Checks if workspace is mounted (warns to restart VM if newly registered)
- SSHs into VM and executes: cd /workspace/{hash} && <command>
- Streams stdout/stderr to terminal
- Exits with command's exit code
- VM stays running for next command
Note: Workspaces are mounted at VM startup. If you run from a new directory, you'll need to restart the VM to mount it.
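Conceptually the run path looks something like the following. This is an illustrative sketch, not the actual implementation: the hash scheme and the jq-based registry update are assumptions.
ws_hash=$(printf '%s' "$PWD" | sha256sum | cut -c1-8)   # hypothetical hash scheme
state=~/.local/state/qvm
# register the workspace (assumes workspaces.json already exists)
jq --arg h "$ws_hash" --arg p "$PWD" '.[$h] = $p' \
  "$state/workspaces.json" > "$state/workspaces.json.tmp" \
  && mv "$state/workspaces.json.tmp" "$state/workspaces.json"
# run the command inside the VM over the forwarded SSH port
ssh -p "$(cat "$state/ssh.port")" root@localhost "cd /workspace/$ws_hash && $*"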
qvm ssh
Open interactive SSH session or run command in VM.
qvm ssh # Interactive shell
qvm ssh -c "command" # Run single command
Examples:
qvm ssh # Drop into root shell
qvm ssh -c "systemctl status" # Check systemd status
qvm ssh -c "df -h" # Check disk usage
Connects as the root user. The root password is root, though SSH key authentication is configured so it is rarely needed.
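Under the hood this is plain SSH against the forwarded port, along these lines (the exact options passed are an assumption):
port=$(cat ~/.local/state/qvm/ssh.port)
ssh -p "$port" -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@localhost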
qvm status
Show VM state, SSH port, and mounted workspaces.
qvm status
Example output:
VM Status: Running
PID: 12345
SSH Port: 2222
Mounted Workspaces:
abc12345 -> /home/josh/projects/qvm
def67890 -> /home/josh/projects/myapp
Cache Directories:
/cache/cargo -> ~/.cache/qvm/cargo-home
/cache/target -> ~/.cache/qvm/cargo-target
/cache/pnpm -> ~/.cache/qvm/pnpm-store
/cache/sccache -> ~/.cache/qvm/sccache
qvm rebuild
Rebuild the base VM image from NixOS flake.
qvm rebuild
Behavior:
- Runs nix build on ~/.config/qvm/flake
- Copies the result to ~/.local/share/qvm/base.qcow2
- Warns if the VM is running (restart required to use the new image)
Use when:
- Customizing the VM configuration (after editing ~/.config/qvm/flake/flake.nix)
- Updating the base image with new packages or settings
- Pulling latest changes from upstream NixOS
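In plain commands, a rebuild is roughly the following. This is a sketch: the flake's output attribute and the image file name inside ./result are assumptions.
nix build ~/.config/qvm/flake                                      # build the user flake's default package
install -Dm644 result/nixos.qcow2 ~/.local/share/qvm/base.qcow2   # path inside ./result is an assumption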
qvm reset
Delete overlay and start fresh. Keeps base image intact.
qvm reset
Behavior:
- Stops VM if running
- Deletes overlay.qcow2 (all VM state changes)
- Deletes workspaces.json (registered workspaces)
- The next qvm start creates a fresh overlay from the base image
Use when:
- VM state is corrupted
- Want to return to clean base image state
- Testing fresh install scenarios
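A reset is equivalent to removing the state files by hand; a minimal sketch of the behavior above:
qvm stop
rm -f ~/.local/state/qvm/overlay.qcow2     # all VM state changes
rm -f ~/.local/state/qvm/workspaces.json   # registered workspaces
qvm start                                  # recreates a fresh overlay from base.qcow2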
Directory Layout
QVM uses XDG-compliant directories:
~/.config/qvm/
└── flake/ # User's customizable NixOS flake
├── flake.nix # VM system configuration
└── flake.lock # Pinned dependencies
~/.local/share/qvm/
└── base.qcow2 # Base VM image (~7GB, read-only)
~/.local/state/qvm/
├── overlay.qcow2 # Persistent VM state (copy-on-write)
├── vm.pid # QEMU process ID
├── ssh.port # SSH forwarded port
├── serial.log # VM console output
└── workspaces.json # Registered workspace mounts
~/.cache/qvm/
├── cargo-home/ # Shared cargo registry/cache
├── cargo-target/ # Shared cargo build artifacts
├── pnpm-store/ # Shared pnpm content-addressable store
└── sccache/ # Shared compilation cache
Inside the VM
/workspace/
├── abc12345/ # Mounted from /home/josh/projects/qvm
└── def67890/ # Mounted from /home/josh/projects/myapp
/cache/
├── cargo/ # Mounted from ~/.cache/qvm/cargo-home
├── target/ # Mounted from ~/.cache/qvm/cargo-target
├── pnpm/ # Mounted from ~/.cache/qvm/pnpm-store
└── sccache/ # Mounted from ~/.cache/qvm/sccache
Environment variables in VM:
- CARGO_HOME=/cache/cargo
- CARGO_TARGET_DIR=/cache/target
- PNPM_HOME=/cache/pnpm
- SCCACHE_DIR=/cache/sccache
Workspace Management
When you run qvm run from different directories, each gets registered and mounted:
cd ~/projects/myapp
qvm run cargo build # Workspace abc12345
cd ~/projects/other
qvm run npm install # Workspace def67890
Both workspaces are mounted simultaneously in the VM. The hash is derived from the absolute path, so the same directory always maps to the same workspace ID.
Important: Workspaces are mounted at VM start time. If you run from a new directory:
- The workspace gets registered in workspaces.json
- qvm run detects that it is not mounted and warns you
- Restart the VM to mount it: qvm stop && qvm start
Cache Sharing
Build caches are shared between host and VM via 9p mounts:
Cargo:
- Registry and crate cache shared across all projects
- Each project still uses its own Cargo.lock
- Different crate versions coexist peacefully
pnpm:
- Content-addressable store shared
- Each project links to shared store
- Massive disk space savings
sccache:
- Compilation cache shared
- Speeds up repeated builds across projects
All caches persist across VM restarts and resets (they live in ~/.cache/qvm, not in the overlay).
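In practice this means a second project benefits immediately. For example (the project paths are illustrative; sccache --show-stats is standard sccache):
cd ~/projects/app-a && qvm run cargo build    # warms the shared registry and sccache
cd ~/projects/app-b && qvm run cargo build    # reuses downloaded crates and cached objects
qvm ssh -c "sccache --show-stats"             # inspect cache hit counters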
Customization
Editing VM Configuration
The VM is defined by a NixOS flake at ~/.config/qvm/flake/flake.nix. Edit this file to customize the VM.
Default packages included:
- git, vim, tmux, htop
- curl, wget, jq, ripgrep, fd
- opencode (AI coding agent)
- Language toolchains can be added
Example: Add Rust and Node.js:
environment.systemPackages = with pkgs; [
  # ... existing packages ...

  # Add language toolchains
  rustc
  cargo
  nodejs_22
  python3
];
Example: Add custom shell aliases:
environment.shellAliases = {
  "ll" = "ls -lah";
  "gst" = "git status";
};
Example: Include your dotfiles:
environment.etc."vimrc".source = /path/to/your/vimrc;
Apply changes:
# Edit the flake
vim ~/.config/qvm/flake/flake.nix
# Rebuild base image
qvm rebuild
# Restart VM to use new image
qvm stop
qvm start
Resource Allocation
Default resources:
- Memory: 8GB
- CPUs: 4 cores
- Disk: 20GB
To customize, set environment variables before qvm start:
export QVM_MEMORY="16G"
export QVM_CPUS="8"
qvm start
These will be configurable in a config file in future versions.
Security Model
VM Isolation Benefits
Why VM over containers?
- Container escapes via kernel exploits are well-documented
- VM escapes require hypervisor exploits (far rarer)
- For long-running unattended AI sessions, VM isolation provides stronger guarantees
9p Mount Security
Filesystem sharing uses security_model=mapped-xattr:
- No direct passthrough to host filesystem
- Only explicitly mounted directories are visible
- Host filesystem outside mounts is completely invisible to VM
- File ownership and permissions mapped via extended attributes
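On the host side, each shared directory corresponds to one -virtfs argument on the QEMU command line, for example (the mount_tag name is illustrative):
  -virtfs local,path="$HOME/.cache/qvm/cargo-home",mount_tag=cache-cargo,security_model=mapped-xattr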
What the VM can access:
- Registered workspaces (directories you have run qvm run from)
- Shared cache directories
- Nothing else on the host
What the VM cannot access:
- Your home directory (except mounted workspaces)
- System directories
- Other users' files
- Any unmounted host paths
Network Isolation
VM uses QEMU user-mode networking:
- VM can make outbound connections
- No inbound connections to VM except forwarded SSH port
- SSH port forwarded to random high port on localhost only
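In QEMU terms this corresponds to a user-mode netdev with a single host-forward, for example (the port number is illustrative):
  -netdev user,id=net0,hostfwd=tcp:127.0.0.1:2222-:22 \
  -device virtio-net-pci,netdev=net0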
Troubleshooting
VM won't start
Check if QEMU/KVM is available:
qemu-system-x86_64 --version
lsmod | grep kvm
Check the serial log for errors:
cat ~/.local/state/qvm/serial.log
SSH timeout
If SSH doesn't become ready within 60 seconds:
- Check if the VM process is running: ps aux | grep qemu
- Check the serial log: cat ~/.local/state/qvm/serial.log
- Try increasing the timeout (future feature)
Workspace not mounted
If you see "Workspace not mounted in VM":
qvm stop
qvm start
qvm run <command>
Workspaces are only mounted at VM start, so a directory registered while the VM is already running requires a restart before it appears under /workspace.
Build cache not working
Verify cache directories exist and are mounted:
qvm ssh -c "ls -la /cache"
qvm ssh -c "echo \$CARGO_HOME"
Check that cache directories on host exist:
ls -la ~/.cache/qvm/
VM is slow
Ensure KVM is enabled (not using emulation):
lsmod | grep kvm_intel # or kvm_amd
Check resource allocation:
qvm status # Shows allocated CPUs/memory (future feature)
Limitations
Explicit exclusions:
- Multi-VM: Only one VM at a time
- Per-project configs: Single global VM (use other tools for project-specific VMs)
- Cross-platform: Linux + KVM only (no macOS/Windows)
- GUI: Headless only, no desktop environment
- Snapshots: Only overlay reset, no checkpoint/restore
Dependencies
Required on host system:
- qemu (with KVM support)
- nix (for building VM images)
- openssh (SSH client)
- jq (JSON processing)
- nc (netcat, for port checking)
All dependencies are included automatically when installing via Nix flake.
Architecture
HOST VM
──────────────────────────────── ──────────────────────────
~/.local/share/qvm/base.qcow2 → (read-only base image)
↓
~/.local/state/qvm/overlay.qcow2 → (persistent changes)
~/.cache/qvm/cargo-home/ ──9p──→ /cache/cargo/
~/.cache/qvm/cargo-target/ ──9p──→ /cache/target/
~/.cache/qvm/pnpm-store/ ──9p──→ /cache/pnpm/
~/.cache/qvm/sccache/ ──9p──→ /cache/sccache/
$(pwd) ──9p──→ /workspace/{hash}/
Image layering:
- Base image contains NixOS system from flake
- Overlay is copy-on-write layer for runtime changes
- qvm reset deletes the overlay, preserves the base
- qvm rebuild updates the base, keeps the overlay
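The overlay is a standard qcow2 backing-file setup, created roughly like this (standard qemu-img usage):
qemu-img create -f qcow2 -F qcow2 \
  -b ~/.local/share/qvm/base.qcow2 \
  ~/.local/state/qvm/overlay.qcow2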
9p virtfs mounts:
- Used for workspace and cache sharing
- security_model=mapped-xattr for security
- msize=104857600 for performance (100MB transfer size)
- Mounts are configured at VM start; no hotplug
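Inside the guest, mounting one share would look roughly like this (the mount tag is illustrative; in practice the NixOS configuration declares these as fileSystems entries):
mount -t 9p -o trans=virtio,version=9p2000.L,msize=104857600 cache-cargo /cache/cargo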
Contributing
Contributions welcome! This is a simple Bash-based tool designed to be readable and hackable.
Key files:
- bin/qvm - Main dispatcher
- bin/qvm-* - Subcommand implementations
- lib/common.sh - Shared utilities and paths
- flake/default-vm/flake.nix - Default VM template
Development:
git clone https://github.com/yourusername/qvm
cd qvm
nix develop
./bin/qvm start
License
MIT