Firecracker sandboxes for AI agents

Run LLM-generated code safely in production.
Scale to 1,000 concurrent sandboxes without enterprise commitments or pricing walls.

~1s cold start
No duration limits
1,000 concurrent sandboxes
quickstart.js
import { Sandbox } from '@simplesandbox/sdk'

const client = new Sandbox()
const sandbox = await client.sandboxes.create({ image: 'node:lts' })

const result = await sandbox.exec("echo 'Hello, world!'")
console.log(result.stdout) // Hello, world!

await sandbox.kill()

Integrate in 6 lines of code. No setup calls, no complex configuration.

Most code execution platforms are built for enterprises, not developers

You shouldn't need enterprise contracts and sales calls just to run sandboxes.

Complex APIs eat up your time

Most enterprise solutions have proprietary SDKs and documentation that assumes you already know how everything works. Integration takes days when it should take an afternoon.

Too much process before you can start

Sales calls, onboarding sessions, contract negotiations. It can take weeks before you write any code.

Platform lock-in limits your options

Some providers only work if you're already using their full stack. You shouldn't have to migrate everything just to add sandboxing.

How it works

1
Install SDK
npm install @simplesandbox/sdk
2
Create sandbox
await client.sandboxes.create()
3
Ship to production
That's it.
Average integration time: 5 minutes

Scale from 10 to 1,000 agents without talking to sales

1

5-minute integration with OpenAPI-documented REST API

Standard REST endpoints. No proprietary SDKs required. Works with curl, axios, fetch, or any HTTP client.
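
Because the API is plain JSON over HTTPS, you can build requests as ordinary data and send them with any client. A minimal sketch, assuming a hypothetical endpoint path, bearer-token auth header, and request body shape (none of these are taken from the documented API):

```javascript
// Build a create-sandbox request as plain data. The endpoint URL, auth
// header, and body fields below are illustrative assumptions, not the
// documented API. Keeping the request as data makes it portable across
// fetch, axios, curl, or any other HTTP client.
function buildCreateSandboxRequest(apiKey, image) {
  return {
    url: 'https://api.simplesandbox.example/v1/sandboxes', // hypothetical endpoint
    options: {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ image }),
    },
  };
}

// With fetch, the call itself is one line:
// const { url, options } = buildCreateSandboxRequest(process.env.API_KEY, 'node:lts');
// const sandbox = await (await fetch(url, options)).json();
```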

2

Works with your existing setup

Not tied to a specific cloud provider or platform. Standard REST API that integrates anywhere.

3

Transparent pricing: 1M credits = $1

Billed per-second. Free tier doesn't require a credit card. 50% cheaper than E2B at $0.0252/vCPU-hour.

What you can build with LLM sandboxes

From code interpreters to agent workflows. Tap a file to preview the implementation.

import { Sandbox } from '@simplesandbox/sdk'

const client = new Sandbox()
const sandbox = await client.sandboxes.create({ image: 'node:lts-alpine3.22' })

const shell = await sandbox.exec("echo 'Hello from shell!' && pwd")
console.log(shell.stdout.trim())

await sandbox.kill()

Built for developers who need to scale fast

Production-ready infrastructure without enterprise complexity.

Firecracker microVM cold start in ~1s

MicroVMs start in 800-1200ms using AWS Lambda-grade Firecracker technology. Warm pools (coming Q1 2026) start in under 100ms.

REST API + Native SDKs

OpenAPI-documented REST API with official SDKs for JavaScript, Python, and Go. Works with Express, FastAPI, or any HTTP client. 5-minute integration time.

Per-second billing at $0.0252/vCPU-hour

1M credits = $1. Pay only for what you use, billed per-second. 50% cheaper than comparable services. No hidden costs or hourly minimums.

Firecracker microVM isolation

AWS Lambda-grade isolation technology. Network-isolated by default with optional internet access. Run untrusted LLM-generated code safely in production.

Any Docker image supported

Use official images like node:lts, python:3.12, or bring your own custom Docker images. Full control over the runtime environment and dependencies.

Standard REST, no lock-in

Works with your existing infrastructure on any cloud. Standard JSON over HTTPS. No proprietary protocols or vendor lock-in. Migrate anytime.

No 15-minute Lambda limits

Run training jobs for hours or keep agents alive indefinitely. No maximum duration. Billed per-second with no minimum. Perfect for long-running AI workflows.

Scale to 1,000+ concurrent (Pro tier)

From 3 concurrent on free tier to 1,000+ on Pro ($50/mo). Scale instantly without enterprise commitments. No rate limit negotiations required.

AWS Lambda-grade security without AWS complexity

Same Firecracker isolation technology, simpler pricing and deployment

🔒

Firecracker Isolation

AWS Lambda-grade microVM technology. Complete network isolation by default with optional internet access. Each sandbox runs in its own encrypted environment with kernel-level separation.

🏢

Fly.io Infrastructure

Multi-region deployment on Fly.io with 99.9% uptime SLA. SOC 2 Type II certification in progress. Automated failover and health monitoring across all regions.

📊

Data Privacy

Your code and data never leave the sandbox. No logging of execution content or file contents. GDPR and CCPA compliant. Full data encryption at rest and in transit.

50% cheaper than E2B, no enterprise commitments

1M credits = $1. Billed per-second. Scale without talking to sales.

Free
$0
1M credits/month
  • Up to 3 concurrent sandboxes
  • 17 hours of compute time
  • No credit card required
  • Community support
Create free account
Most Popular
Hobby
$10
10M credits/month
  • Up to 10 concurrent sandboxes
  • 171 hours of compute time
  • Priority email support
  • Usage dashboard
Get started
Pro
$50
50M credits/month
  • Up to 1,000 concurrent sandboxes
  • 855 hours of compute time
  • Priority support + Slack
  • Pre-warmed pools (coming soon)
Get started
Enterprise
Custom
Volume pricing
  • Unlimited concurrent sandboxes
  • Custom credit packages
  • Dedicated support + SLA
  • Volume discounts
Contact sales

Auto top-up: When you run low on credits, we automatically add 1M credits ($1) to keep your sandboxes running.

30-day money-back guarantee. Cancel anytime, no questions asked.

Built for scaling without enterprise fees

How we compare to alternatives when you need to scale

E2B
  • Up to 100 concurrent: $150/mo + usage
  • 101-1,000 concurrent: $150/mo + add-ons
  • 1,000+ concurrent: $100k/year
  • vCPU: $0.0504/hour • 24-hour max duration

SimpleSandbox (our advantage)
  • Up to 1,000 concurrent: pay per use*
  • 1,000+ concurrent: pay per use*
  • Enterprise commitment: none required
  • vCPU: $0.0252/hour (50% cheaper) • No duration limit

* Pay only for what you use. Plans start at $0 (free tier), $10/mo (hobby), or $50/mo (pro) with included credits. Auto top-up when needed.
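
The "50% cheaper" claim follows directly from the two published vCPU rates above. A quick sketch comparing only the metered compute portion (base-plan fees and included credits are ignored here):

```javascript
// Compare the two published vCPU rates from the table above:
// $0.0504/vCPU-hour (E2B) vs $0.0252/vCPU-hour (SimpleSandbox).
// This only models metered compute, not monthly plan fees.
const E2B_USD_PER_VCPU_HOUR = 0.0504;
const OURS_USD_PER_VCPU_HOUR = 0.0252;

function computeCostUSD(vcpuHours, ratePerHour) {
  return vcpuHours * ratePerHour;
}

const hours = 1000; // e.g. a month of steady agent traffic
console.log(computeCostUSD(hours, E2B_USD_PER_VCPU_HOUR).toFixed(2));  // "50.40"
console.log(computeCostUSD(hours, OURS_USD_PER_VCPU_HOUR).toFixed(2)); // "25.20"
```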

Questions from developers like you

How long does integration actually take?

Most developers integrate in under 5 minutes. Install the SDK, create a sandbox, execute code. That's it. No complex setup, no sales calls, no onboarding sessions.

Is 1-second cold start fast enough for production?

For most agent workloads (data processing, code execution, API calls), a one-second start time won't be noticeable to users. If you need faster starts for real-time use cases, pre-warmed pools (coming Q1 2026) will start in under 100ms.

How does per-second billing work?

You're billed only for the time your sandboxes are running, calculated to the second. If you run a sandbox for 30 seconds, you pay for 30 seconds. No hourly minimums, no idle charges. 1M credits = $1, so costs are completely transparent.
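
The arithmetic is simple enough to sketch. This helper converts run time to credits at the published rate of $0.0252/vCPU-hour with 1M credits = $1; it assumes billing is pure vCPU-time, while real sandboxes may meter memory or run more than one vCPU (the plan hour figures suggest so), so treat it as an estimate, not an invoice:

```javascript
// Estimate credits for a sandbox run at the published rate:
// $0.0252 per vCPU-hour, 1,000,000 credits per dollar.
// Assumes pure vCPU-time billing on a single-vCPU sandbox by default.
const USD_PER_VCPU_HOUR = 0.0252;
const CREDITS_PER_USD = 1_000_000;

function creditsForRun(seconds, vcpus = 1) {
  const usd = (seconds / 3600) * vcpus * USD_PER_VCPU_HOUR;
  return Math.round(usd * CREDITS_PER_USD);
}

console.log(creditsForRun(30));      // 30-second run on 1 vCPU -> 210 credits ($0.00021)
console.log(creditsForRun(3600, 2)); // one hour on 2 vCPUs -> 50400 credits ($0.0504)
```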

What isolation technology do you use?

We use Firecracker microVM isolation, the same technology powering AWS Lambda, running on Fly.io infrastructure with a 99.9% uptime SLA.

What languages and runtimes are supported?

Any language with an official Docker image: Node.js, Python, Go, Ruby, Java, Rust, PHP, and more. Use official images like node:lts, python:3.12, or bring your own custom Docker images with pre-installed dependencies.

How do I debug when something goes wrong?

All stdout/stderr is captured and returned in the exec response. Set timeoutMs higher during debugging. Access full execution logs via the dashboard or API. WebSocket support for real-time logs coming Q1 2026.
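
Since stdout and stderr come back on the exec response, a small helper can turn a result into a readable log line. The `exitCode` field and the exact result shape below are assumptions for illustration (only stdout/stderr are confirmed above), as is the `timeoutMs` usage shown in the comment:

```javascript
// Summarize an exec result for debugging logs. The FAQ confirms stdout and
// stderr are returned on the exec response; exitCode and the overall result
// shape here are illustrative assumptions.
function summarizeExec(result) {
  const lines = [];
  if (result.exitCode !== undefined && result.exitCode !== 0) {
    lines.push(`exec failed with exit code ${result.exitCode}`);
  }
  if (result.stderr) lines.push(`stderr: ${result.stderr.trim()}`);
  if (result.stdout) lines.push(`stdout: ${result.stdout.trim()}`);
  return lines.join('\n') || 'exec produced no output';
}

// Usage sketch with a raised timeout while debugging:
// const result = await sandbox.exec('npm test', { timeoutMs: 120_000 });
// console.log(summarizeExec(result));
```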

Can I use this for production workloads?

Yes. Built on Fly.io infrastructure with 99.9% uptime. Firecracker provides AWS Lambda-grade isolation. Currently processing 10M+ executions monthly for AI startups. Start with free tier to validate your use case.

How does this compare to AWS Lambda or Cloud Run?

Unlike Lambda's 15-minute limit, sandboxes run indefinitely. Unlike Cloud Run's complexity, we handle all orchestration. Better for dynamic AI workloads needing full file system access and long-running processes. 50% cheaper per vCPU-hour than comparable services.

Do you have persistent storage/volumes?

Persistence via volumes is on our roadmap for Q1 2026. Current sandboxes are ephemeral, optimized for stateless agent tasks. Subscribe to our changelog for updates.

What happens if I exceed my plan's credits?

Your sandboxes won't stop. Auto top-up adds 1M credits ($1) to keep them running. No surprise shutdowns mid-execution. You can also manually add credits anytime via the dashboard.

Start building—free, no credit card required

1M free credits monthly—enough for 17 hours of compute.
Deploy your first LLM sandbox in 5 minutes.