Firecracker sandboxes for AI agents
Run LLM-generated code safely in production.
Scale to 1,000 concurrent sandboxes without enterprise commitments or pricing walls.
import { Sandbox } from '@simplesandbox/sdk'
const client = Sandbox()
const sandbox = await client.sandboxes.create({ image: 'node:lts' })
const result = await sandbox.exec("echo 'Hello, world!'")
console.log(result.stdout) // Hello, world!
await sandbox.kill()

Integrate in 6 lines of code. No setup calls, no complex configuration.
Most code execution platforms are built for enterprises, not developers
You shouldn't need enterprise contracts and sales calls just to run sandboxes.
Complex APIs eat up your time
Most enterprise solutions have proprietary SDKs and documentation that assumes you already know how everything works. Integration takes days when it should take an afternoon.
Too much process before you can start
Sales calls, onboarding sessions, contract negotiations. It can take weeks before you write any code.
Platform lock-in limits your options
Some providers only work if you're already using their full stack. You shouldn't have to migrate everything just to add sandboxing.
How it works
npm install @simplesandbox/sdk
await client.sandboxes.create()

That's it. Scale from 10 to 1,000 agents without talking to sales.
5-minute integration with OpenAPI-documented REST API
Standard REST endpoints. No proprietary SDKs required. Works with curl, axios, fetch, or any HTTP client.
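Because everything is plain JSON over HTTPS, a sandbox-create call can be sketched with any HTTP client. The endpoint path, bearer-token auth, and payload shape below are illustrative assumptions, not the documented contract; check the published OpenAPI spec for the real one:

```javascript
// Hypothetical base URL and request shape, for illustration only.
const API_BASE = 'https://api.simplesandbox.example/v1'

// Build the request options separately so they are easy to inspect or test.
function buildCreateSandboxRequest(apiKey, body) {
  return {
    url: `${API_BASE}/sandboxes`,
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  }
}

// Any HTTP client works: fetch, axios, or curl on the command line.
async function createSandbox(apiKey, image) {
  const req = buildCreateSandboxRequest(apiKey, { image })
  const res = await fetch(req.url, { method: req.method, headers: req.headers, body: req.body })
  if (!res.ok) throw new Error(`create failed: HTTP ${res.status}`)
  return res.json()
}
```

The same request translates one-to-one into a `curl -X POST` call, which is the point of a standard REST surface: no SDK is required.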
Works with your existing setup
Not tied to a specific cloud provider or platform. Standard REST API that integrates anywhere.
Transparent pricing: 1M credits = $1
Billed per-second. Free tier doesn't require a credit card. 50% cheaper than E2B at $0.0252/vCPU-hour.
What you can build with LLM sandboxes
From code interpreters to agent workflows. Each example below is a complete snippet you can run as-is.
import { Sandbox } from '@simplesandbox/sdk'
const client = Sandbox()
const sandbox = await client.sandboxes.create({ image: 'node:lts-alpine3.22' })
const shell = await sandbox.exec("echo 'Hello from shell!' && pwd")
console.log(shell.stdout.trim())
await sandbox.kill()

import { Sandbox } from '@simplesandbox/sdk'
const client = Sandbox()
const sandbox = await client.sandboxes.create({ image: 'python:3.12-slim' })
await sandbox.exec('pip install --quiet pandas', { timeoutMs: 60_000 })
const program = `
import pandas as pd
data = {'product': ['A', 'B', 'C'], 'sales': [100, 200, 150]}
df = pd.DataFrame(data)
print(df['sales'].sum())
`
await sandbox.files.write('script.py', program)
const result = await sandbox.exec('python script.py', { timeoutMs: 30_000 })
console.log(result.stdout.trim())
await sandbox.kill()

import { Sandbox } from '@simplesandbox/sdk'
const client = Sandbox()
const sandbox = await client.sandboxes.create({
image: 'node:lts-alpine3.22',
timeoutMs: 60_000,
})
const script = `
const fs = require('fs')
fs.writeFileSync('/tmp/output.json', JSON.stringify({ generated: Date.now() }))
console.log('wrote output.json')
`
await sandbox.files.write('/tmp/script.js', script)
const result = await sandbox.exec('node /tmp/script.js')
const output = await sandbox.files.read('/tmp/output.json')
console.log(result.stdout)
console.log('File contents:', output)
await sandbox.kill()

import { Sandbox } from '@simplesandbox/sdk'
const client = Sandbox()
const sandbox = await client.sandboxes.create({ image: 'python:3.12-slim' })
const flaskApp = `
from flask import Flask
app = Flask(__name__)
@app.get('/')
def hello():
return 'Hello from Sandbox!'
if __name__ == '__main__':
app.run(host='::', port=5000)
`
await sandbox.files.write('server.py', flaskApp)
await sandbox.exec('pip install flask')
await sandbox.exec('python server.py >/tmp/server.log 2>&1', { background: true })
const host = sandbox.expose(5000)
console.log(`Preview URL: https://${host}`)
// Clean sandbox later
// await sandbox.kill()

import { Sandbox } from '@simplesandbox/sdk'
const client = Sandbox()
const sandbox = await client.sandboxes.create({ image: 'node:lts-alpine3.22' })
const expressApp = `
const express = require('express')
const app = express()
app.get('/', (req, res) => {
res.json({ message: 'Hello from Sandbox!' })
})
app.listen(3000, '::', () => console.log('server listening on 3000'))
`
await sandbox.exec('mkdir app')
await sandbox.files.write('app/app.js', expressApp)
await sandbox.exec('cd app && npm init -y')
await sandbox.exec('cd app && npm install express')
await sandbox.exec('cd app && node app.js >/tmp/server.log 2>&1', { background: true })
const host = sandbox.expose(3000)
console.log(`API available at https://${host}`)

Built for developers who need to scale fast
Production-ready infrastructure without enterprise complexity.
Firecracker microVM cold start in ~1s
MicroVMs start in 800-1200ms using AWS Lambda-grade Firecracker technology. Warm pools (coming Q1 2026) start in under 100ms.
REST API + Native SDKs
OpenAPI-documented REST API with official SDKs for JavaScript, Python, and Go. Works with Express, FastAPI, or any HTTP client. 5-minute integration time.
Per-second billing at $0.0252/vCPU-hour
1M credits = $1. Pay only for what you use, billed per-second. 50% cheaper than comparable services. No hidden costs or hourly minimums.
Firecracker microVM isolation
AWS Lambda-grade isolation technology. Network-isolated by default with optional internet access. Run untrusted LLM-generated code safely in production.
Any Docker image supported
Use official images like node:lts, python:3.12, or bring your own custom Docker images. Full control over the runtime environment and dependencies.
Standard REST, no lock-in
Works with your existing infrastructure on any cloud. Standard JSON over HTTPS. No proprietary protocols or vendor lock-in. Migrate anytime.
No 15-minute Lambda limits
Run training jobs for hours or keep agents alive indefinitely. No maximum duration. Billed per-second with no minimum. Perfect for long-running AI workflows.
Scale to 1,000+ concurrent (Pro tier)
From 3 concurrent on free tier to 1,000+ on Pro ($50/mo). Scale instantly without enterprise commitments. No rate limit negotiations required.
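Scaling from 3 to 1,000 concurrent sandboxes only needs a client-side throttle so you stay inside your tier's limit. A generic promise pool like this sketch (not part of the SDK) caps in-flight sandbox work at your plan's concurrency:

```javascript
// Run `tasks` (functions returning promises) with at most `limit` in flight at once.
async function runWithConcurrency(tasks, limit) {
  const results = new Array(tasks.length)
  let next = 0
  // Each worker pulls the next unclaimed task until none remain.
  async function worker() {
    while (next < tasks.length) {
      const i = next++
      results[i] = await tasks[i]()
    }
  }
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker)
  await Promise.all(workers)
  return results
}
```

With `limit` set to your tier's concurrency, a Pro client can fan 1,000 sandbox creations out at full speed while a free-tier client runs 3 at a time.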
AWS Lambda-grade security without AWS complexity
Same Firecracker isolation technology, simpler pricing and deployment
Firecracker Isolation
AWS Lambda-grade microVM technology. Complete network isolation by default with optional internet access. Each sandbox runs in its own encrypted environment with kernel-level separation.
Fly.io Infrastructure
Multi-region deployment on Fly.io with 99.9% uptime SLA. SOC 2 Type II certification in progress. Automated failover and health monitoring across all regions.
Data Privacy
Your code and data never leave the sandbox. No logging of execution content or file contents. GDPR and CCPA compliant. Full data encryption at rest and in transit.
50% cheaper than E2B, no enterprise commitments
1M credits = $1. Billed per-second. Scale without talking to sales.
- Up to 3 concurrent sandboxes
- 17 hours of compute time
- No credit card required
- Community support
- Up to 10 concurrent sandboxes
- 171 hours of compute time
- Priority email support
- Usage dashboard
- Up to 1,000 concurrent sandboxes
- 855 hours of compute time
- Priority support + Slack
- Pre-warmed pools (coming soon)
- Unlimited concurrent sandboxes
- Custom credit packages
- Dedicated support + SLA
- Volume discounts
Auto top-up: When you run low on credits, we automatically add 1M credits ($1) to keep your sandboxes running.
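The auto top-up rule is simple enough to model in a few lines. This is a sketch of the semantics only; the 100k-credit low-water mark below is a made-up example, since the docs state only the top-up amount:

```javascript
const TOP_UP_CREDITS = 1_000_000 // 1M credits = $1

// Apply $1 top-ups until the balance is back above the low-water mark.
// The 100k-credit threshold is an illustrative assumption, not the platform's value.
function applyAutoTopUp(balanceCredits, thresholdCredits = 100_000) {
  let balance = balanceCredits
  let topUps = 0
  while (balance < thresholdCredits) {
    balance += TOP_UP_CREDITS
    topUps += 1
  }
  return { balance, topUps }
}
```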
30-day money-back guarantee. Cancel anytime, no questions asked.
Built for scaling without enterprise fees
How we compare to alternatives when you need to scale
* Pay only for what you use. Plans start at $0 (free tier), $10/mo (hobby), or $50/mo (pro) with included credits. Auto top-up when needed.
Questions from developers like you
How long does integration actually take?
Most developers integrate in under 5 minutes. Install the SDK, create a sandbox, execute code. That's it. No complex setup, no sales calls, no onboarding sessions.
Is 1-second cold start fast enough for production?
For most agent workloads—data processing, code execution, API calls—a one-second start time won't be noticeable to users. If you need faster starts for real-time use cases, we're working on pre-warmed pools that start in under 500ms.
How does per-second billing work?
You're billed only for the time your sandboxes are running, calculated to the second. If you run a sandbox for 30 seconds, you pay for 30 seconds. No hourly minimums, no idle charges. 1M credits = $1, so costs are completely transparent.
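At 1M credits to the dollar, the advertised rate works out to a round per-second number: $0.0252/vCPU-hour is 25,200 credits per vCPU-hour, i.e. exactly 7 credits per vCPU-second. A sketch of the arithmetic (whether partial seconds round up is an assumption, not documented behavior):

```javascript
const CREDITS_PER_DOLLAR = 1_000_000
const USD_PER_VCPU_HOUR = 0.0252

// $0.0252/vCPU-hour x 1,000,000 credits/$ / 3,600 s/hour = 7 credits per vCPU-second.
// Rounded to absorb floating-point noise in the intermediate product.
const CREDITS_PER_VCPU_SECOND = Math.round((USD_PER_VCPU_HOUR * CREDITS_PER_DOLLAR) / 3600)

// Cost of one run in credits; rounding partial seconds up is an assumed billing rule.
function runCostCredits(vcpus, seconds) {
  return CREDITS_PER_VCPU_SECOND * vcpus * Math.ceil(seconds)
}
```

So that 30-second sandbox on one vCPU costs 210 credits, roughly $0.0002.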
What isolation technology do you use?
We use Firecracker microVM isolation, the same technology that powers AWS Lambda, running on Fly.io infrastructure with a 99.9% uptime SLA.
What languages and runtimes are supported?
Any language with an official Docker image: Node.js, Python, Go, Ruby, Java, Rust, PHP, and more. Use official images like node:lts, python:3.12, or bring your own custom Docker images with pre-installed dependencies.
How do I debug when something goes wrong?
All stdout/stderr is captured and returned in the exec response. Set timeoutMs higher during debugging. Access full execution logs via the dashboard or API. WebSocket support for real-time logs coming Q1 2026.
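A small helper makes failed executions readable in your own logs. The `{ stdout, stderr, exitCode }` result shape below is an assumption for illustration; check the SDK's types for the actual fields:

```javascript
// Summarize an exec result for logging; assumes a { stdout, stderr, exitCode } shape.
function summarizeExec(result) {
  const lines = [`exit code: ${result.exitCode}`]
  if (result.stdout.trim()) lines.push(`stdout: ${result.stdout.trim()}`)
  if (result.stderr.trim()) lines.push(`stderr: ${result.stderr.trim()}`)
  return lines.join('\n')
}

// During debugging, give slow commands room to finish before the timeout fires.
const DEBUG_TIMEOUT_MS = 120_000
```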
Can I use this for production workloads?
Yes. Built on Fly.io infrastructure with 99.9% uptime. Firecracker provides AWS Lambda-grade isolation. Currently processing 10M+ executions monthly for AI startups. Start with free tier to validate your use case.
How does this compare to AWS Lambda or Cloud Run?
Unlike Lambda's 15-minute limit, sandboxes run indefinitely. Unlike Cloud Run's complexity, we handle all orchestration. Better for dynamic AI workloads needing full file system access and long-running processes. 50% cheaper per vCPU-hour than comparable services.
Do you have persistent storage/volumes?
Persistence via volumes is on our roadmap for Q1 2026. Current sandboxes are ephemeral, optimized for stateless agent tasks. Subscribe to our changelog for updates.
What happens if I exceed my plan's credits?
Your sandboxes won't stop. Auto top-up adds 1M credits ($1) to keep running. No surprise shutdowns mid-execution. You can also manually add credits anytime via the dashboard.
Start building—free, no credit card required
1M free credits monthly—enough for 17 hours of compute.
Deploy your first LLM sandbox in 5 minutes.