VMM Backends
Four backends behind a common trait. Same code, same protocol, same result format. Pick the right trade-off per workload.
libkrun
Embedded VMM via FFI. No separate process. Transparent socket networking via TSI (Transparent Socket Impersonation). The fastest path from code to VM.
Firecracker
AWS Lambda's VMM. Minimal device model, battle-tested jailer. ext4 block device rootfs. The production hardening choice.
Cloud Hypervisor
Modern rust-vmm VMM. virtio-fs rootfs via virtiofsd. Snapshot/restore, warm migration, GPU passthrough via VFIO.
QEMU
Full device emulation. Broadest hardware support. GPU passthrough with OVMF firmware boot. The escape hatch for complex workloads.
Feature Matrix
Boot times measured on bare metal (AMD Ryzen 9 7950X3D, KVM). Full lifecycle: OCI rootfs assembly, VM boot, command execution, result collection, teardown.
libkrun
An embedded VMM that runs inside the hotcell worker process via FFI. No separate binary to manage, no REST API, no socket coordination. The VM starts when you call krun_start_enter() and the worker process becomes the guest.
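The embedding flow can be sketched with ctypes. This is an illustrative sketch, not hotcell's actual code: the function names in the call sequence come from libkrun's public C API, while `start_microvm` and all file paths are hypothetical, and the sketch assumes libkrun is installed as `libkrun.so`.

```python
import ctypes

# The libkrun C API call sequence for an embedded boot.
# (Function names are from libkrun's public header; everything else
#  here is an illustrative sketch, not hotcell's actual code.)
KRUN_CALLS = [
    "krun_create_ctx",     # allocate a configuration context, returns a ctx id
    "krun_set_vm_config",  # vCPU count and RAM in MiB
    "krun_set_root",       # host directory to expose as the guest rootfs
    "krun_set_exec",       # guest entrypoint binary, argv, envp
    "krun_start_enter",    # boot; the calling process becomes the guest
]

def start_microvm(rootfs: bytes, exec_path: bytes,
                  vcpus: int = 1, ram_mib: int = 512) -> None:
    """Hypothetical helper: boot a libkrun microVM in-process.
    Does not return on success -- krun_start_enter() turns the
    calling process into the guest."""
    lib = ctypes.CDLL("libkrun.so")  # raises OSError if libkrun is absent
    ctx = lib.krun_create_ctx()
    lib.krun_set_vm_config(ctx, vcpus, ram_mib)
    lib.krun_set_root(ctx, rootfs)
    argv = (ctypes.c_char_p * 2)(exec_path, None)  # NULL-terminated argv
    envp = (ctypes.c_char_p * 1)(None)             # empty environment
    lib.krun_set_exec(ctx, exec_path, argv, envp)
    lib.krun_start_enter(ctx)  # no return: the process is now the VM
```

Note there is nothing to tear down from the host side: because the worker process itself becomes the guest, the VM's lifetime is the process's lifetime.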
Uses Hypervisor.framework on macOS and KVM on Linux. The only backend that runs on macOS, making it the default for development.
Firecracker
The VMM that powers AWS Lambda and Fargate. A separate binary configured via REST API over a Unix socket. Minimal device model with a tiny attack surface. Uses ext4 block device images built from OCI rootfs layers.
Full snapshot/restore support for warm migration between hosts. Pause a running VM, serialize its memory and CPU state, transfer to another host, restore.
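The configure-then-snapshot flow is easiest to see as the sequence of API requests involved. The endpoint names and field names below follow Firecracker's documented REST API; the concrete file paths (`vmlinux`, `rootfs.ext4`, `vm.snap`, `vm.mem`) are placeholder assumptions for illustration.

```python
# Each (method, path, body) triple is one request to Firecracker's API
# socket before InstanceStart. Endpoints follow the Firecracker API
# spec; file paths are illustrative placeholders.
CONFIGURE = [
    ("PUT", "/machine-config", {"vcpu_count": 2, "mem_size_mib": 512}),
    ("PUT", "/boot-source", {
        "kernel_image_path": "vmlinux",
        "boot_args": "console=ttyS0 reboot=k panic=1",
    }),
    ("PUT", "/drives/rootfs", {
        "drive_id": "rootfs",
        "path_on_host": "rootfs.ext4",  # ext4 image built from OCI layers
        "is_root_device": True,
        "is_read_only": False,
    }),
    ("PUT", "/actions", {"action_type": "InstanceStart"}),
]

# Snapshot for warm migration: pause the VM, then serialize guest
# memory and vCPU state into two files that can move between hosts.
SNAPSHOT = [
    ("PATCH", "/vm", {"state": "Paused"}),
    ("PUT", "/snapshot/create", {
        "snapshot_type": "Full",
        "snapshot_path": "vm.snap",   # device and vCPU state
        "mem_file_path": "vm.mem",    # guest memory
    }),
]
```

On the destination host, the mirror-image call is a `PUT /snapshot/load` pointing at the transferred files, followed by resuming the VM.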
Cloud Hypervisor
A modern, Rust-based VMM built on the rust-vmm crate ecosystem. Uses virtio-fs via external virtiofsd processes for rootfs and shared directory access. Configured via REST API over a Unix socket.
The most feature-rich backend: snapshot/restore, warm migration, live pre-copy migration plumbing, GPU passthrough via VFIO with firmware boot mode (CLOUDHV.fd).
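The virtio-fs rootfs arrangement shows up directly in the `vm.create` request body. The field names below follow Cloud Hypervisor's `/api/v1` REST API; the kernel path, socket path, and snapshot URL are illustrative assumptions.

```python
# vm.create request body for Cloud Hypervisor's /api/v1 endpoint.
# Field names follow the cloud-hypervisor API; paths are placeholders.
VM_CONFIG = {
    "cpus": {"boot_vcpus": 2, "max_vcpus": 2},
    # virtio-fs requires shared guest memory so the external
    # virtiofsd process can map it
    "memory": {"size": 512 * 1024 * 1024, "shared": True},
    "payload": {
        "kernel": "vmlinux",
        "cmdline": "console=hvc0 rootfstype=virtiofs root=rootfs rw",
    },
    # rootfs served by an external virtiofsd process over this socket
    "fs": [{"tag": "rootfs", "socket": "virtiofsd.sock"}],
}

# Lifecycle endpoints, in order, for boot and a later snapshot.
LIFECYCLE = [
    ("PUT", "/api/v1/vm.create", VM_CONFIG),
    ("PUT", "/api/v1/vm.boot", None),
    ("PUT", "/api/v1/vm.pause", None),
    ("PUT", "/api/v1/vm.snapshot", {"destination_url": "file://snapshots/vm0"}),
]
```

Restoring on another host is the inverse: `vm.restore` with a `source_url` pointing at the transferred snapshot directory, then `vm.resume`.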
QEMU
The most mature VMM in the ecosystem. Full device emulation, broadest hardware support. Configured via command-line arguments, managed via QMP (the QEMU Machine Protocol: JSON over a Unix socket).
QEMU is hotcell's escape hatch for workloads that need OVMF firmware boot (required for NVIDIA GPU ROM initialization), complex PCI topologies, or specific device emulation. Uses virtio-fs via virtiofsd for rootfs access.
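A representative invocation combining OVMF firmware boot, a virtiofsd-backed rootfs, and a QMP control socket can be sketched as an argument-list builder. The flag names are standard QEMU options; the file paths are illustrative assumptions, not hotcell's actual configuration.

```python
def qemu_argv(ovmf_code: str, virtiofsd_sock: str,
              mem_mib: int = 4096) -> list[str]:
    """Assemble a QEMU command line for OVMF firmware boot with a
    virtio-fs rootfs and a QMP socket. Illustrative sketch: flag
    names are standard QEMU options, paths are placeholders."""
    return [
        "qemu-system-x86_64",
        "-machine", "q35,accel=kvm",
        "-m", str(mem_mib),
        # OVMF UEFI firmware, needed for NVIDIA GPU ROM initialization
        "-drive", f"if=pflash,format=raw,readonly=on,file={ovmf_code}",
        # virtio-fs needs guest RAM backed by a shareable memory object
        # so the external virtiofsd process can map it
        "-object", f"memory-backend-memfd,id=mem,size={mem_mib}M,share=on",
        "-numa", "node,memdev=mem",
        # vhost-user-fs device wired to virtiofsd's Unix socket
        "-chardev", f"socket,id=vfs,path={virtiofsd_sock}",
        "-device", "vhost-user-fs-pci,chardev=vfs,tag=rootfs",
        # QMP management socket (JSON protocol over a Unix socket)
        "-qmp", "unix:qmp.sock,server=on,wait=off",
        "-nographic",
    ]
```

GPU passthrough would add a `-device vfio-pci,host=...` entry per passed-through PCI function on top of this baseline.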
Choosing a Backend
Developing on macOS?
Use libkrun (the default). It's the only backend that supports macOS via Hypervisor.framework.
Multi-tenant production?
Use Firecracker. Battle-tested at AWS scale, minimal attack surface, snapshot/restore for migration.
GPU compute (CUDA)?
Use Cloud Hypervisor or QEMU. Both support VFIO passthrough. QEMU adds OVMF firmware for NVIDIA driver initialization.
Need to share host directories?
Use libkrun, Cloud Hypervisor, or QEMU. All three support virtio-fs shared mounts. Firecracker uses ext4 block devices only.
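The decision rules above can be condensed into a small selection helper. This is a hypothetical illustration of the decision logic, not a hotcell API.

```python
def choose_backend(host_os: str, *, multi_tenant: bool = False,
                   gpu: bool = False, nvidia: bool = False,
                   shared_dirs: bool = False) -> str:
    """Map workload requirements to a backend, mirroring the
    decision rules above. Hypothetical helper for illustration."""
    if host_os == "macos":
        return "libkrun"        # only backend with Hypervisor.framework support
    if gpu:
        # OVMF firmware boot is needed for NVIDIA GPU ROM initialization
        return "qemu" if nvidia else "cloud-hypervisor"
    if multi_tenant and not shared_dirs:
        return "firecracker"    # minimal attack surface; ext4 rootfs only
    return "libkrun"            # default: fastest path, virtio-fs shared mounts
```

Note the `shared_dirs` guard: Firecracker is the one backend without virtio-fs, so a multi-tenant workload that also needs host directory sharing falls through to libkrun.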