Experimental — Hotcell is under active development and should not be used in production.

CLI

The hotcell CLI is the fastest way to run OCI images in microVMs. Pull an image, boot a VM, stream output. No server, no embedding — just run it.

Quick Start

terminal
$ hotcell run alpine -- echo "hello from a VM"
hello from a VM

The CLI pulls the image (with layer caching), assembles a root filesystem, boots a microVM, streams console output to stdout, and exits with the guest's exit code. Logs and errors go to stderr, so piping works correctly.

Run Python

terminal
$ hotcell run -m 512 python:3.12-slim -- python3 -c "print('hello')"
hello

Run with networking

terminal
$ hotcell run --network inet alpine -- wget -qO- http://example.com
<!doctype html>
...

Usage

Environment variables

terminal
$ hotcell run -e NAME=hotcell alpine -- sh -c 'echo Hello $NAME'
Hello hotcell

Volume mounts

Share host directories into the guest. The format is HOST_PATH:GUEST_TAG, where the tag becomes the mount point inside the VM at /<tag>. Files are shared directly between host and guest — no copying.

terminal
$ hotcell run -v /tmp/data:data alpine -- ls /data
input.txt

Exit codes and piping

The CLI exits with the guest's exit code and sends VM output to stdout while keeping logs on stderr, so it composes cleanly in shell pipelines and scripts.

exit codes
$ hotcell run alpine -- sh -c "exit 42"
$ echo $?
42
piping
$ hotcell run alpine -- cat /etc/os-release | grep PRETTY
PRETTY_NAME="Alpine Linux v3.21"
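Because the exit code propagates, hotcell fits naturally into shell conditionals. A minimal sketch (check.sh is a hypothetical script placed in the shared directory):

```shell
# Run a hypothetical check script inside the VM and branch on its result.
# The guest's exit status becomes hotcell's exit status.
if hotcell run -v "$PWD:work" alpine -- sh /work/check.sh; then
  echo "check passed"
else
  echo "check failed" >&2
fi
```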

Pre-cache images

Use hotcell pull to download and cache image layers without running anything. Subsequent runs using the same image skip the download.

terminal
$ hotcell pull alpine:latest
$ hotcell pull python:3.12-slim

Options Reference

hotcell run

Flag                   Default    Description
<IMAGE>                required   OCI image reference (e.g. alpine, python:3.12-slim)
-- <COMMAND>           required   Command to run inside the VM
-m, --memory           256        VM memory in MiB
--timeout              30         Execution timeout (seconds)
--network              disabled   disabled, inet (internet only), or full (all TCP)
-e, --env                         KEY=VALUE env var (repeatable)
-v, --volume                      Share a host directory into the VM as HOST:TAG (repeatable)

Advanced options

Flag                   Default           Description
--cpus                 1                 Virtual CPUs
--allow-host                             Restrict network to specific destinations (CIDR or host:port, repeatable)
--workdir                                Working directory inside the VM
--backend              libkrun           VMM backend: libkrun, firecracker, or ch
--worker               auto              Path to hotcell-libkrun-worker binary
--firecracker-bin      firecracker       Path to Firecracker binary
--firecracker-kernel   vmlinux.bin       Path to Firecracker kernel image
--ch-bin               cloud-hypervisor  Path to Cloud Hypervisor binary
--ch-kernel            vmlinux           Path to Cloud Hypervisor kernel image
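--allow-host narrows networking to specific destinations. A hedged sketch, assuming it composes with --network inet as documented; the destination is illustrative:

```shell
# Allow outbound traffic only to example.com on port 80.
$ hotcell run --network inet --allow-host example.com:80 \
    alpine -- wget -qO- http://example.com
```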

hotcell pull

Flag      Default    Description
<IMAGE>   required   OCI image reference to pre-cache

Environment variables

Variable                     Description
HOTCELL_WORKER               Path to hotcell-libkrun-worker binary
HOTCELL_BACKEND              Default VMM backend (libkrun, firecracker, or ch)
HOTCELL_FIRECRACKER_BIN      Path to Firecracker binary
HOTCELL_FIRECRACKER_KERNEL   Path to Firecracker kernel image
HOTCELL_CH_BIN               Path to Cloud Hypervisor binary
HOTCELL_CH_KERNEL            Path to Cloud Hypervisor kernel image
HOTCELL_LOG                  Log filter (default: hotcell=warn). Uses tracing EnvFilter syntax.
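HOTCELL_LOG accepts tracing EnvFilter directives, so log verbosity can be raised for a single run. A sketch (hotcell=debug is one plausible filter; adjust the target and level as needed):

```shell
$ HOTCELL_LOG=hotcell=debug hotcell run alpine -- echo hi
```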

Backend Selection

The CLI supports all three VMM backends; select one with --backend. The default, libkrun, works on both macOS and Linux. Firecracker and Cloud Hypervisor require Linux with KVM.

firecracker backend
$ hotcell run --backend firecracker \
    --firecracker-bin /usr/bin/firecracker \
    --firecracker-kernel /var/lib/hotcell/vmlinux.bin \
    alpine -- echo "hello from Firecracker"
hello from Firecracker
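The Cloud Hypervisor backend follows the same shape with its own flags. A sketch with illustrative binary and kernel paths:

```shell
$ hotcell run --backend ch \
    --ch-bin /usr/bin/cloud-hypervisor \
    --ch-kernel /var/lib/hotcell/vmlinux \
    alpine -- echo "hello from Cloud Hypervisor"
```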

Worker binary

The libkrun backend requires the hotcell-libkrun-worker binary. The CLI discovers it automatically by checking, in order: (1) the --worker flag or the HOTCELL_WORKER environment variable, (2) the directory adjacent to the hotcell binary, (3) PATH.
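If the worker lives outside those locations, point to it explicitly. Either form below works; the path is illustrative:

```shell
$ HOTCELL_WORKER=/opt/hotcell/hotcell-libkrun-worker hotcell run alpine -- echo ok
$ hotcell run --worker /opt/hotcell/hotcell-libkrun-worker alpine -- echo ok
```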

Image References

The CLI normalizes short image references following Docker conventions. You don't need to type the full registry path — bare names are expanded to docker.io/library/<name>:latest.

normalization rules
alpine                    → docker.io/library/alpine:latest
alpine:3.19               → docker.io/library/alpine:3.19
myuser/myimage            → docker.io/myuser/myimage:latest
ghcr.io/owner/image:v1    → ghcr.io/owner/image:v1  (pass-through)