Why VMs, Not Containers?
Containers share the host kernel. That's fine for trusted workloads. For untrusted code, you need something stronger. This page explains the isolation spectrum and where different tools sit on it.
The Problem
Containers are not sandboxes
Docker containers use Linux namespaces and cgroups to isolate processes. But every container on a host shares the same kernel. A kernel vulnerability, a misconfigured capability, or a container runtime bug can let code escape and access the host.
For your own microservices, this is an acceptable trade-off. For running other people's code — user submissions, AI agent tool calls, CI jobs, plugins — the shared kernel is the problem.
The escape history
- CVE-2019-5736 — runc container escape via /proc/self/exe overwrite
- CVE-2020-15257 — containerd shim API exposed to host-network containers via abstract unix sockets, enabling privilege escalation
- CVE-2022-0185 — Linux kernel heap overflow reachable from unprivileged containers
- CVE-2024-21626 — runc container breakout via a leaked working-directory file descriptor
Bugs like these affect any container runtime that shares the host kernel: patching the runtime does not help when the kernel itself is the attack surface.
The Isolation Spectrum
There's no single "right" isolation level. Every approach trades off between security, performance, and compatibility. The question is which trade-off fits your threat model.
Docker / OCI
Namespaces and cgroups. Shared kernel. Fast, compatible, well-understood. No protection against kernel bugs.
gVisor
User-space kernel intercepts syscalls. Reduces kernel attack surface but still runs on the host kernel. Some syscall compatibility gaps.
Kata Containers
Full VM per container via QEMU or Cloud Hypervisor. Strong isolation. Complex to deploy, heavier resource footprint, Kubernetes-centric.
MicroVMs
Purpose-built VMMs (Firecracker, Cloud Hypervisor, libkrun). Minimal device model, fast boot. VM isolation without the VM overhead. This is where hotcell sits.
Full VMs
QEMU/KVM with full device emulation. Maximum isolation. Seconds-to-minutes boot, heavy memory footprint. Overkill for ephemeral workloads.
Weaker isolation, faster ← Docker — gVisor — Kata — MicroVMs — Full VMs → Stronger isolation, heavier
Approaches in Detail
gVisor
gVisor (a Google project) intercepts application syscalls in user space, reimplementing a Linux-compatible kernel (the Sentry) that handles most operations without touching the host kernel. This dramatically reduces the kernel attack surface: the application never makes direct host syscalls.
Strengths
- Lightweight, no hardware virtualization needed
- Drop-in replacement for Docker runtime
- Reduces host kernel exposure significantly
Limitations
- Still runs on the host kernel (some syscalls pass through)
- Syscall compatibility gaps can break applications
- Performance overhead for syscall-heavy workloads
Kata Containers
Kata (an OpenInfra Foundation project) wraps each container in its own lightweight VM, providing hardware-level isolation while presenting a standard OCI container interface. It integrates with Kubernetes via containerd and CRI-O.
Strengths
- True VM isolation with separate kernel
- Kubernetes-native, CRI-compatible
- Mature, production-proven at scale
Limitations
- Complex deployment (agent, shim, hypervisor, kernel)
- Kubernetes-centric — hard to use outside K8s
- Heavier resource footprint per container
E2B
E2B (commercial, cloud-hosted) provides sandboxes purpose-built for AI agents. You call their API, they boot a Firecracker microVM, and your agent runs code in it. Designed for the "let the LLM run code" use case.
Strengths
- Zero infrastructure to manage
- Purpose-built for AI agent tool use
- SDKs in Python, TypeScript, etc.
Limitations
- Cloud-only — code leaves your network
- Metered by compute-second
- Closed-source isolation layer
Modal
Modal (commercial, cloud-hosted) is a serverless compute platform for running Python functions in the cloud. Define a function and Modal runs it in a sandboxed container on their infrastructure. Focused on ML/AI workloads: training, inference, batch jobs, and increasingly agent tool execution.
Strengths
- GPU support, excellent for ML workloads
- Great Python developer experience
- Scales to zero, pay-per-use
Limitations
- Cloud-only — code leaves your network
- Metered by compute-second
- Container-based isolation (not VM-level)
SlicerVM
SlicerVM (commercial, self-hosted) provides lightweight Linux VMs that boot in under a second, backed by Firecracker. It's a VM management platform: create, run, and manage persistent or ephemeral VMs via CLI, REST API, or Go SDK. From the team behind OpenFaaS and Actuated.
Strengths
- Production-proven (3M+ CI minutes for CNCF)
- Self-hosted, data stays on your network
- Full OS experience (systemd, SSH, GPU passthrough)
Limitations
- Proprietary, commercial license ($25–250/mo)
- Requires a guest agent inside VMs
- Platform, not an embeddable library
Comparison

| Tool | Isolation | Deployment |
| --- | --- | --- |
| gVisor | User-space kernel on the host kernel | Drop-in Docker runtime |
| Kata Containers | VM per container | Kubernetes via containerd / CRI-O |
| E2B | Firecracker microVM | Cloud-hosted API |
| Modal | Sandboxed container | Cloud-hosted serverless platform |
| SlicerVM | Firecracker VM | Self-hosted platform (CLI, REST API, Go SDK) |
| hotcell | MicroVM plus host-side jail | Embeddable Rust library, self-hosted |
Where Hotcell Fits
Hotcell sits in the microVM tier of the isolation spectrum. It gives you VM-level isolation — separate kernel, separate memory, separate process tree — with sub-200ms boot times and standard OCI image support. Use it for ephemeral one-shot execution or persistent long-lived services with automatic port forwarding.
What makes hotcell different
It's a library, not a platform
Add hotcell as a Rust dependency and call backend.run(). No daemon, no sidecar, no Kubernetes. You embed it in your application.
Defense-in-depth, not just a VM
The VM is the first boundary. On Linux, the VMM process itself is jailed with 22 hardening layers: syscall filtering, filesystem restrictions, capability dropping, resource limits. If someone escapes the VM, they land in a sandbox.
Open and auditable
MIT-licensed. Every syscall in the allowlist is documented. Every hardening layer is in the source code. 37 adversarial tests verify escape attempts fail.
Ephemeral and persistent
Run one-shot commands that return structured results, or create persistent VMs that run long-lived services with automatic port forwarding. Both modes share the same security model and API.
Your hardware, your data
Runs on your machines. No cloud dependency, no metered billing, no data leaving your network. Works on macOS for development and Linux for production.
When hotcell is not the right choice
- You need GPU access — hotcell doesn't support PCI passthrough. Look at Modal or SlicerVM.
- You want zero infrastructure — E2B or Modal handle everything. Hotcell requires you to run and manage the host.
- You need full OS management with SSH and systemd — hotcell persistent VMs run services with port forwarding, but SlicerVM offers a full Linux experience with SSH access, secret injection, and OS-level management.
- You're already on Kubernetes — Kata Containers integrates natively with the container ecosystem. Hotcell is standalone.
- You need production guarantees today — hotcell is experimental (v0.1.0). It has not been independently audited.