Run 5, 10, 20 AI agents in parallel — each on its own task, its own branch, in an isolated container. You stay in control. Every decision is yours.
You've tried running multiple Claude Code sessions. They collide — overwriting each other's changes, rebuilding shared code, stepping on each other in the same repo.
Label an issue in Linear. 10times.dev handles everything else — cloning, containerizing, running the agent, notifying you, previewing, merging.
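Under the hood, that pipeline boils down to a few familiar steps. A minimal sketch of the per-issue flow (all names here — the issue key, image name, and prompt — are illustrative placeholders, not the real 10times.dev internals; the docker step is shown but not executed):

```shell
set -e
ISSUE=ENG-123                       # hypothetical Linear issue key
WORKDIR=$(mktemp -d)
REPO_URL=$WORKDIR/origin.git        # stand-in for your real repo URL
git init -q --bare "$REPO_URL"

# 1. Clone into an isolated workspace, one per agent
git clone -q "$REPO_URL" "$WORKDIR/$ISSUE"
cd "$WORKDIR/$ISSUE"
git checkout -q -b "agent/$ISSUE"   # assumed branch-naming convention

# 2. Run the agent inside a container (`:` is a no-op, so this line only
#    illustrates the docker invocation without actually running it)
: docker run --rm -v "$PWD:/repo" my-agent-image claude -p "Fix $ISSUE"

# 3. Notify and hand off for preview and merge (placeholder)
echo "agent/$ISSUE ready for review"
```

One branch per issue is what keeps the agents from colliding: each workspace is a separate clone, so nothing shares a working tree.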
AI doesn't know good from mediocre. It can't feel that the UI is off by just one pixel in a way that kills conversion. It doesn't have 10 years of context about your users, your team, and your product.
With 10 agents working in parallel, your judgment — what to build, what to reject, when it's good enough — becomes the bottleneck. Your taste is the compiler. Your experience is the architecture.
| Mode | What you get | Who it's for |
|---|---|---|
| Local | Docker · localhost · hot reload | You, fast review |
| Remote | Supabase branch DB + Netlify deploy · public URL · auto-expires after 24 h | Client, tester, PM |
Two-tier commit classification means agents never block on a review. They mark what needs your attention and keep moving. You catch up asynchronously.
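One way to make that classification visible is in the commit subjects themselves. A hedged sketch, assuming agents prefix each commit with 🟢 (safe to merge) or 🟡 (needs your review) — the prefix convention is an illustration, not necessarily the exact 10times.dev format:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email agent@example.com
git config user.name agent

# Two commits from an agent: one routine, one flagged for human review
git commit -q --allow-empty -m "🟢 chore: bump dependency"
git commit -q --allow-empty -m "🟡 feat: new migration in supabase/migrations/"

# Your async catch-up: list only the commits flagged for review
git log --pretty=%s | grep '^🟡'
```

The agent never waits on you; the 🟡 filter is your review queue whenever you get to it.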
Agents write schema changes to supabase/migrations/ and mark the commit 🟡. You run migrations manually on stage, then prod. Always.
Each agent pushes only to its own branch, only with --force-with-lease. It never touches another agent's code.
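Why `--force-with-lease` and not plain `--force`: it refuses the push if the remote branch has moved since the agent last fetched, so an agent can overwrite its own stale push but never clobber work someone else put on the branch. A self-contained demo in a throwaway repo (branch name is illustrative):

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/agent-a"
cd "$tmp/agent-a"
git config user.email agent@example.com
git config user.name agent-a

git checkout -q -b agent/ENG-123      # each agent owns exactly one branch
echo change > notes.txt
git add notes.txt
git commit -qm "🟢 feat: first pass"
git push -q origin agent/ENG-123

# Rewrite local history, then push safely: succeeds only because the
# remote branch still matches what this agent last saw
git commit -q --amend -m "🟢 feat: first pass, cleaned up"
git push -q --force-with-lease origin agent/ENG-123 && echo "safe push ok"
```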
A talk about what nobody in the AI hype cycle wants to admit: more agents means more human judgment, not less. Your taste, context, and intuition become the system's only meaningful bottleneck.
Live demo. Real repo. Real agents running in parallel. No slides pretending this works — it works on stage.
Former academic lecturer. Solo developer. Founder of UkryteSkarby.pl. Building in public with Claude Code since it shipped. This talk is live documentation of a real system running a real product.
10times.dev is in private early access. Join the list and get the repo, the CLAUDE_GLOBAL.md, and a walkthrough of the full setup.
No pitch. You'll get a GitHub link and a setup guide.