hotcell-server
An HTTP server that exposes the Hotcell library as a JSON-RPC 2.0 service.
Overview
Run sandboxed code via HTTP. Start the server, send a JSON-RPC request with an image and command, get structured results back. All requests go through a single endpoint with Bearer token authentication.
The server supports ephemeral execution (sandbox.run), persistent VMs (vm.*), real-time streaming (SSE and WebSocket), and named function executors.
Start the server
# Build everything
cargo build
./scripts/sign.sh   # macOS only

# Start the server
hotcell-server \
  --auth-token my-secret-token \
  --worker-bin ./target/debug/hotcell-libkrun-worker \
  --listen 127.0.0.1:8080
Or use an environment variable for the token:
HOTCELL_AUTH_TOKEN=my-secret-token hotcell-server \
  --worker-bin ./target/debug/hotcell-libkrun-worker
Server options
- Auth token (--auth-token flag, or HOTCELL_AUTH_TOKEN env)
- Default network mode (disabled, inet, full)
- VMM backend (libkrun, firecracker, or ch)
- VM data directory (HOTCELL_VM_DATA_DIR env)
- Maximum persistent VMs (HOTCELL_MAX_PERSISTENT_VMS env)
- Registry credentials (HOTCELL_REGISTRY_USER env, HOTCELL_REGISTRY_PASSWORD env)

API Methods
All requests are POST /api/v1/rpc with Authorization: Bearer <token>.
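Since every method shares the same endpoint and envelope, a thin client helper covers the whole API. A minimal Python sketch, assuming only what this page documents (the /api/v1/rpc path, Bearer auth, and the JSON-RPC 2.0 envelope); build_envelope and rpc_call are hypothetical helper names:

```python
import json
import urllib.request

def build_envelope(method: str, params=None, id_=1) -> dict:
    """Construct a JSON-RPC 2.0 request envelope."""
    envelope = {"jsonrpc": "2.0", "method": method, "id": id_}
    if params is not None:
        envelope["params"] = params
    return envelope

def rpc_call(base_url: str, token: str, method: str, params=None, id_=1):
    """POST one JSON-RPC request to /api/v1/rpc and return its result field."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/api/v1/rpc",
        data=json.dumps(build_envelope(method, params, id_)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    if "error" in reply:  # JSON-RPC error object instead of a result
        raise RuntimeError(f"{reply['error']['code']}: {reply['error']['message']}")
    return reply["result"]
```

With a server running, `rpc_call("http://127.0.0.1:8080", "my-secret-token", "health")` should return the health result documented next.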
health
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"health","id":1}'

{"jsonrpc":"2.0","result":{"status":"ok"},"id":1}

sandbox.run
Run a command in a VM:
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "sandbox.run",
"id": 2,
"params": {
"image": "docker.io/library/alpine:latest",
"command": ["/bin/sh", "-c", "echo hello && cat /etc/alpine-release"],
"timeout_secs": 15
}
}'

{
"jsonrpc": "2.0",
"result": {
"exit_code": 0,
"console_output": "hello\r\n3.23.3\r\n",
"result": null
},
"id": 2
}

sandbox.run with networking
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "sandbox.run",
"id": 3,
"params": {
"image": "docker.io/library/alpine:latest",
"command": ["/bin/sh", "-c", "wget -q -O - http://example.com | head -1"],
"timeout_secs": 15,
"network": "inet"
}
}'

{
"jsonrpc": "2.0",
"result": {
"exit_code": 0,
"console_output": "<!doctype html>...",
"result": null
},
"id": 3
}

sandbox.run with structured input/output
Pass JSON input to the VM via HOTCELL_INPUT and receive structured results:
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "sandbox.run",
"id": 4,
"params": {
"image": "docker.io/library/alpine:latest",
"command": ["/bin/sh", "-c", "echo processing... && echo \"{\\\"computed\\\": 42}\" > /hotcell/result.json"],
"input": {"key": "value"},
"timeout_secs": 15
}
}'

{
"jsonrpc": "2.0",
"result": {
"exit_code": 0,
"console_output": "processing...\r\n",
"result": {"computed": 42}
},
"id": 4
}

console_output captures stdout/stderr (logs). result captures the parsed JSON from /hotcell/result.json (structured output). They don't interfere with each other.
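Inside the guest, the contract has two halves: the input arrives JSON-encoded in the HOTCELL_INPUT environment variable, and any JSON written to the result file comes back in the result field. A Python sketch of a guest entrypoint; the result path is taken from the example above, and guest_main is a hypothetical name:

```python
import json
import os

def guest_main(result_path: str = "/hotcell/result.json") -> dict:
    """Read HOTCELL_INPUT, do some work, and write the structured result."""
    payload = json.loads(os.environ.get("HOTCELL_INPUT", "{}"))
    # Real work would go here; this sketch just echoes the input back.
    result = {"echo": payload}
    with open(result_path, "w") as f:
        json.dump(result, f)
    return result
```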
sandbox.run with backend selection
Select a VMM backend per-request. Pass "backend": "firecracker" or "backend": "ch" to use a specific backend (Linux only), or omit to use the server default:
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "sandbox.run",
"id": 7,
"params": {
"image": "docker.io/library/alpine:latest",
"command": ["/bin/echo", "hello from firecracker"],
"backend": "firecracker"
}
}'

sandbox.run_function
Run a named function executor. With an executor registry file, start the server with --registry executors.json, then:
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "sandbox.run_function",
"id": 5,
"params": {
"executor": "python-data",
"input": {"name": "world"}
}
}'
The server resolves python-data to its image, boots the runtime, passes the input via HOTCELL_INPUT, and returns the handler's result.
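The handler convention itself is not specified on this page, so the following is illustration only: assume the python runtime calls a function named handler with the decoded input and treats its return value as the structured result.

```python
def handler(payload: dict) -> dict:
    """Hypothetical python-runtime handler: payload is the decoded
    HOTCELL_INPUT value; the return value becomes the structured result."""
    name = payload.get("name", "anonymous")
    return {"greeting": f"hello, {name}"}
```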
sandbox.pull
Pre-cache an image so subsequent sandbox.run calls skip the download:
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "sandbox.pull",
"id": 6,
"params": {"image": "docker.io/library/python:3.12-slim"}
}'

{"jsonrpc":"2.0","result":{"pulled":true,"image":"docker.io/library/python:3.12-slim"},"id":6}

VM Methods
Run long-lived services inside isolated VMs. Unlike sandbox.run (which boots, executes, and tears down), vm.* methods create VMs that persist across requests. Use this for web servers, background workers, or any process that needs to stay running.
With the libkrun backend, persistent VMs with networking get automatic port forwarding — the server allocates a host port and forwards traffic to guest port 8080. The port field in the response tells you where to reach the service. With Firecracker, the VM gets a dedicated IP via TAP networking.
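The forwarded port can accept connections slightly after vm.create returns, because the guest service still has to start. A small readiness poll before the first request avoids spurious connection errors; a Python sketch (wait_for_port is a hypothetical helper, not part of the API):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll a forwarded host port until the guest service accepts connections."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True  # something is listening
        except OSError:
            time.sleep(0.2)  # not up yet; retry until the deadline
    return False
```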
Example: run a service in a VM
1. Create the VM — specify the image, command (your server process), and enable networking:
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"vm.create","id":1,"params":{
"image": "my-app:latest",
"command": ["/usr/bin/my-server", "--port", "8080"],
"memory_mib": 512,
"network": "inet",
"label": "my-service"
}}'

2. Read the response — the port field is the host port forwarded to guest:8080:
{"result": {"vm_id": "vm-a1b2c3d4", "port": 10022, "state": "running", ...}}

3. Access the service — connect to the host port:
curl http://127.0.0.1:10022/api/health
4. When done — stop and destroy:
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"vm.destroy","id":2,"params":{"vm_id":"vm-a1b2c3d4"}}'

VmInfo object
Most vm.* methods return a VmInfo object:
- vm_id: unique VM identifier
- label: optional user-supplied label
- state: running, stopped, or failed
- guest_ip: guest IP address (null once stopped)
- port: forwarded host port (null once stopped)
- vcpus: allocated vCPUs
- memory_mib: allocated memory in MiB
- created_at: creation timestamp

vm.create
Create and boot a persistent VM. Accepts image, command, env, vcpus, memory_mib, network, shared_dirs, and an optional label. Returns a VmInfo object.
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "vm.create",
"id": 10,
"params": {
"image": "alpine:latest",
"command": ["/bin/sh"],
"vcpus": 2,
"memory_mib": 512,
"network": "inet",
"label": "my-dev-vm"
}
}'

{
"jsonrpc": "2.0",
"result": {
"vm_id": "vm-a1b2c3d4",
"label": "my-dev-vm",
"state": "running",
"guest_ip": "192.168.64.2",
"port": 10022,
"vcpus": 2,
"memory_mib": 512,
"created_at": "2026-03-25T12:00:00Z"
},
"id": 10
}

vm.status
Query the current state of a VM by vm_id. Returns a VmInfo object.
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "vm.status",
"id": 11,
"params": { "vm_id": "vm-a1b2c3d4" }
}'

{
"jsonrpc": "2.0",
"result": {
"vm_id": "vm-a1b2c3d4",
"label": "my-dev-vm",
"state": "running",
"guest_ip": "192.168.64.2",
"port": 10022,
"vcpus": 2,
"memory_mib": 512,
"created_at": "2026-03-25T12:00:00Z"
},
"id": 11
}

vm.list
List all persistent VMs. Takes no parameters. Returns an array of VmInfo objects.
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"vm.list","id":12}'

{
"jsonrpc": "2.0",
"result": [
{
"vm_id": "vm-a1b2c3d4",
"label": "my-dev-vm",
"state": "running",
"guest_ip": "192.168.64.2",
"port": 10022,
"vcpus": 2,
"memory_mib": 512,
"created_at": "2026-03-25T12:00:00Z"
}
],
"id": 12
}

vm.logs
Retrieve console output from a running or stopped VM.
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "vm.logs",
"id": 13,
"params": { "vm_id": "vm-a1b2c3d4" }
}'

{
"jsonrpc": "2.0",
"result": {
"logs": "/ # echo hello\nhello\n/ # "
},
"id": 13
}

vm.stop
Gracefully stop a running VM. Returns VmInfo with state: "stopped". The VM data is preserved and can be inspected.
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "vm.stop",
"id": 14,
"params": { "vm_id": "vm-a1b2c3d4" }
}'

{
"jsonrpc": "2.0",
"result": {
"vm_id": "vm-a1b2c3d4",
"label": "my-dev-vm",
"state": "stopped",
"guest_ip": null,
"port": null,
"vcpus": 2,
"memory_mib": 512,
"created_at": "2026-03-25T12:00:00Z"
},
"id": 14
}

vm.destroy
Permanently destroy a VM and delete all its associated data. The VM is stopped first if still running.
curl -s -X POST http://127.0.0.1:8080/api/v1/rpc \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "vm.destroy",
"id": 15,
"params": { "vm_id": "vm-a1b2c3d4" }
}'

{"jsonrpc":"2.0","result":{"destroyed":true},"id":15}

Parameters
Parameters for the sandbox.run method:
- image: OCI image reference to run
- command: argv executed in the guest
- timeout_secs: execution timeout in seconds
- network: disabled, inet, or full
- input: JSON payload delivered to the guest via the HOTCELL_INPUT env var
- backend: libkrun, firecracker, or ch

Streaming
The batch JSON-RPC endpoint returns output only after the VM exits. For long-running commands, dedicated HTTP streaming endpoints deliver console output in real-time via SSE or WebSocket. These are not JSON-RPC — they are separate REST-style endpoints with their own paths and request formats.
Endpoints
- sandbox.run output via SSE
- sandbox.run_function output via SSE
- sandbox.run output via WebSocket
- sandbox.run_function output via WebSocket

SSE (Server-Sent Events)
SSE endpoints accept the same JSON body as their batch equivalents (without the JSON-RPC envelope). Auth is via Authorization: Bearer header.
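On the client side this is standard SSE framing: an event: line naming the type, a data: line carrying JSON, and a blank line between events. A minimal Python parser, sketched against the framing shown in the example that follows (it assumes one event: and one JSON data: line per event):

```python
import json

def parse_sse(stream: str):
    """Yield (event_type, payload) pairs from an SSE text stream."""
    event_type, data = None, None
    for line in stream.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data = line[len("data:"):].strip()
        elif line == "" and event_type is not None:
            # blank line terminates one event
            yield event_type, json.loads(data)
            event_type, data = None, None
    if event_type is not None:  # stream ended without a trailing blank line
        yield event_type, json.loads(data)
```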
curl -N -X POST http://127.0.0.1:8080/api/v1/stream/run \
-H "Authorization: Bearer my-secret-token" \
-H "Content-Type: application/json" \
-d '{
"image": "docker.io/library/alpine:latest",
"command": ["/bin/sh", "-c", "for i in 1 2 3; do echo line $i; sleep 1; done"],
"timeout_secs": 15
}'

The server responds with a stream of SSE events. Three event types are emitted:
event: output
data: {"data":"line 1\n"}
event: output
data: {"data":"line 2\n"}
event: output
data: {"data":"line 3\n"}
event: done
data: {"exit_code":0,"console_output":"line 1\nline 2\nline 3\n","result":null}

The third event type, error, reports the failure class (timeout, worker, or internal).

WebSocket
WebSocket endpoints authenticate via ?token= query parameter (since browsers cannot set custom headers on WebSocket connections). Send run params as the first text message; output arrives as JSON text frames with the same event types.
// Connect to WebSocket endpoint
const ws = new WebSocket(
"ws://127.0.0.1:8080/api/v1/ws/run?token=my-secret-token"
);
// Send run params as first message
ws.onopen = () => ws.send(JSON.stringify({
image: "docker.io/library/alpine:latest",
command: ["/bin/sh", "-c", "echo hello"],
timeout_secs: 15
}));
// Receive streamed events
ws.onmessage = (e) => {
const msg = JSON.parse(e.data);
// msg.type is "output", "done", or "error"
console.log(msg);
};

Executor Registry
Define named function templates that map a short name to an OCI image with default configuration (memory, timeout, runtime). Call functions by name via sandbox.run_function without repeating image details on every request.
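Resolution is a plain name lookup plus defaults. To make the shape concrete, here is a Python sketch that resolves an entry from a registry dict and normalizes its duration string; parse_timeout and resolve are hypothetical names, and only the "<seconds>s" duration form used in the example file is handled:

```python
import re
from datetime import timedelta

def parse_timeout(s: str) -> timedelta:
    """Parse a duration string like "60s" (the only form assumed here)."""
    m = re.fullmatch(r"(\d+)s", s)
    if m is None:
        raise ValueError(f"unsupported duration: {s!r}")
    return timedelta(seconds=int(m.group(1)))

def resolve(registry: dict, name: str) -> dict:
    """Look up an executor entry and normalize its default_timeout."""
    entry = dict(registry[name])  # copy so the registry stays untouched
    entry["default_timeout"] = parse_timeout(entry["default_timeout"])
    return entry
```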
JSON config
Define an executors.json file that maps executor names to their OCI images and default configuration. Start the server with --registry executors.json to load it.
{
"python-data": {
"image": "docker.io/library/python:3.12-slim",
"runtime": "python",
"default_timeout": "60s",
"default_memory_mib": 512
},
"node-api": {
"image": "docker.io/library/node:22-slim",
"runtime": "node",
"default_timeout": "30s",
"default_memory_mib": 256
}
}

ExecutorEntry fields
- image: OCI image reference
- runtime: "python" or "node". Automatically invokes handler functions.
- default_timeout: default execution timeout
- default_memory_mib: default memory allocation in MiB
- default_vcpus: default vCPU count

Programmatic usage
use hotcell::ExecutorRegistry;
use std::path::Path;
use std::time::Duration;
// From a JSON file
let registry = ExecutorRegistry::from_file(Path::new("executors.json"))?;
// From a JSON string
let registry = ExecutorRegistry::from_json(r#"{ ... }"#)?;
// Resolve a name
let entry = registry.resolve("python-data").unwrap();
assert_eq!(entry.image, "docker.io/library/python:3.12-slim");
assert_eq!(entry.default_memory_mib, 512);
assert_eq!(entry.default_timeout, Duration::from_secs(60));

Or register entries programmatically:

use hotcell::registry::{ExecutorRegistry, ExecutorEntry};
use std::time::Duration;
let mut registry = ExecutorRegistry::new();
registry.register("my-func".into(), ExecutorEntry {
image: "my-registry.io/my-func:latest".into(),
runtime: Some("python".into()),
default_timeout: Duration::from_secs(120),
default_memory_mib: 1024,
default_vcpus: 2,
});

Error Codes
Error response format:
{
"jsonrpc": "2.0",
"error": {"code": -32000, "message": "worker process failed: ..."},
"id": 2
}
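A client usually wants these surfaced as exceptions rather than inspected ad hoc. A hedged Python sketch that distinguishes result replies from error replies (RpcError and unwrap are hypothetical names):

```python
class RpcError(Exception):
    """Raised when a JSON-RPC response carries an error object."""
    def __init__(self, code: int, message: str):
        super().__init__(f"RPC error {code}: {message}")
        self.code = code
        self.message = message

def unwrap(reply: dict):
    """Return reply["result"], or raise RpcError for an error response."""
    if "error" in reply:
        err = reply["error"]
        raise RpcError(err["code"], err["message"])
    return reply["result"]
```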