20 points meisnerd 7 hours ago 3 comments
When you're working with AI agents (Claude Code, Cursor, Windsurf), you end up in a weird situation:

- You have tasks scattered across your head, Slack, email, and the CLI
- Agents need clear work items, context, and role-specific instructions
- You have no visibility into what agents are actually doing
- Failed tasks just... disappear. No retry, no notification
- Each agent context-switches constantly because you're hand-feeding them work
I was manually shepherding agents: copying task descriptions, restarting failed sessions, and losing track of what needed doing next. It felt like hiring expensive contractors but managing them like a chaos experiment.
The Solution
Mission Control is a task management app purpose-built for delegating work to AI agents. It's got the expected stuff (Eisenhower matrix, kanban board, goal hierarchy) but built from the assumption that your collaborators are Claude, not humans.
The killer feature is the autonomous daemon. It runs in the background, polls your task queue, spawns Claude Code sessions automatically, handles retries, manages concurrency, and respects your cron-scheduled work. One click: your entire work queue activates.
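A minimal sketch of what such a daemon loop could look like, assuming a concurrency cap and a per-task retry budget. The `Task` shape, constants, and function names here are illustrative assumptions, not Mission Control's actual API, and the batching is simplified (a real daemon would refill slots as tasks finish rather than waiting on whole batches):

```typescript
// Hypothetical daemon core: drain a queue of pending tasks with a
// concurrency cap, re-queueing failures until retries are exhausted.
interface Task {
  id: string;
  attempts: number;
  status: "pending" | "running" | "done" | "failed";
}

const MAX_CONCURRENT = 2; // assumed cap on simultaneous agent sessions
const MAX_RETRIES = 3;    // assumed retry budget per task

async function runTask(task: Task, execute: (t: Task) => Promise<void>): Promise<void> {
  task.status = "running";
  try {
    await execute(task); // stand-in for spawning a Claude Code session
    task.status = "done";
  } catch {
    task.attempts += 1;
    // Re-queue for retry until the budget is exhausted.
    task.status = task.attempts < MAX_RETRIES ? "pending" : "failed";
  }
}

async function drainQueue(queue: Task[], execute: (t: Task) => Promise<void>): Promise<void> {
  while (queue.some((t) => t.status === "pending")) {
    const batch = queue.filter((t) => t.status === "pending").slice(0, MAX_CONCURRENT);
    await Promise.all(batch.map((t) => runTask(t, execute)));
  }
}
```

The point of the sketch is the retry path: a thrown error doesn't lose the task, it flips it back to pending with an incremented attempt count.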
The Architecture
- Local-first: Everything lives in JSON files. No database, no cloud dependency, no vendor lock-in.
- Token-optimized API: The task/decision payloads are ~50 tokens vs ~5,400 unfiltered. Matters when you're spawning agents repeatedly.
- Rock-solid concurrency: Zod validation + async-mutex locking prevents corruption under concurrent writes.
- 193 automated tests: This thing has to be reliable. It's doing unattended work.
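The validated, serialized write path described above can be sketched roughly like this. The post says the real app uses Zod and async-mutex; to keep this snippet dependency-free, both are replaced with minimal stand-ins (a promise-chain mutex and a hand-rolled check), and the file name and record shape are assumptions:

```typescript
import { promises as fs } from "fs";

// Minimal mutex stand-in for async-mutex: chain each writer onto the
// previous writer's promise so JSON read-modify-write cycles never interleave.
let tail: Promise<unknown> = Promise.resolve();
function runExclusive<T>(fn: () => Promise<T>): Promise<T> {
  const result = tail.then(fn);
  tail = result.catch(() => undefined); // keep the chain alive on failure
  return result;
}

interface TaskRecord { id: string; title: string; done: boolean; }

// Stand-in for a Zod schema.parse(): reject malformed records before
// they reach disk, so a bad write can't corrupt the store.
function validate(record: unknown): TaskRecord {
  const r = record as Partial<TaskRecord>;
  if (typeof r.id !== "string" || typeof r.title !== "string" || typeof r.done !== "boolean") {
    throw new Error("invalid task record");
  }
  return r as TaskRecord;
}

async function appendTask(path: string, record: unknown): Promise<void> {
  const task = validate(record);
  await runExclusive(async () => {
    let tasks: TaskRecord[] = [];
    try {
      tasks = JSON.parse(await fs.readFile(path, "utf8"));
    } catch {
      // First write: file doesn't exist yet.
    }
    tasks.push(task);
    await fs.writeFile(path, JSON.stringify(tasks, null, 2));
  });
}
```

The design point is ordering: validation happens outside the lock (fail fast, hold the lock less), while the read-modify-write happens entirely inside it.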
The app is Next.js 15 with 5 built-in agent roles (researcher, developer, marketer, business-analyst, plus you). You define reusable skills as markdown that get injected into agent prompts. Agents report back through an inbox + decisions queue.
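A rough sketch of the "skills as markdown injected into agent prompts" idea: each skill is a `.md` file, and building a prompt for a role means concatenating the relevant skill files ahead of the task. The directory layout, role string, and prompt framing here are all assumptions:

```typescript
import { promises as fs } from "fs";
import * as path from "path";

// Hypothetical prompt builder: prepend every markdown skill in a
// directory to the task description for a given agent role.
async function buildPrompt(role: string, skillDir: string, task: string): Promise<string> {
  const files = (await fs.readdir(skillDir)).filter((f) => f.endsWith(".md")).sort();
  const skills = await Promise.all(
    files.map((f) => fs.readFile(path.join(skillDir, f), "utf8"))
  );
  return [`You are the ${role} agent.`, ...skills, `Task: ${task}`].join("\n\n");
}
```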
Why Release This?
A few people have asked for access, and I think it's genuinely useful for anyone delegating to AI. It's MIT licensed, open source, and actively maintained.
What's Next
- Human collaboration (sharing tasks with real team members)
- Integrations with GitHub issues and email inboxes
- Better observability dashboard for daemon execution
- Custom agent templates (currently hardcoded roles)
If you're doing something similar—delegating serious work to AI—check it out and let me know what's broken.
ge96 1 hour ago | parent
well except the mission control folder
code is a mix of old- and new-style JS, e.g. function declarations vs. arrow functions
at a cursory glance the UI has way too many buttons/features, but it makes more sense the more I look at it; it probably clicks once you're in the weeds actually using it
xiphias2 35 minutes ago | parent
I have a different viewpoint on what to automate and I work with agents differently, but I much prefer seeing projects like this on HN to plain product announcements.
MidasTools 8 minutes ago | parent
We run autonomous Claude Code agents for business operations (publishing, site deploys, outreach, monitoring). The approach that cleaned up the chaos for us: pure file-based coordination with no orchestration UI.
The pattern: each agent session starts by reading a set of state files (MEMORY.md, daily log, heartbeat state). It writes its results to those same files before ending. The next session reads the updated files and knows exactly what was done and what's pending.
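The read-act-write handoff described above could be sketched like this; the file name mirrors the comment (MEMORY.md), but the entry format and function names are assumptions:

```typescript
import { promises as fs } from "fs";

// Each session starts by reading accumulated state; an empty string
// means this is the first session.
async function readState(memoryPath: string): Promise<string> {
  try {
    return await fs.readFile(memoryPath, "utf8");
  } catch {
    return "";
  }
}

// Before ending, a session appends a timestamped, structured entry so
// the next session knows what was done and what's pending.
async function recordSession(memoryPath: string, agent: string, summary: string): Promise<void> {
  const entry = `\n## ${new Date().toISOString()} - ${agent}\n${summary}\n`;
  await fs.appendFile(memoryPath, entry);
}
```

Because every session only appends and always reads the whole file first, coordination is just "read the log, do the work, extend the log" with no broker in between.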
The key insight: the coordination problem isn't "where do tasks live" -- it's "how does agent B know what agent A did, and why?" A task management UI solves the first. Structured log files solve both.
The difference between our approach and what you're building: yours is synchronous orchestration (you manage tasks, agents execute); ours is asynchronous accumulation (agents run on schedules, each one reads context left by the last, acts, and writes new context). Works well for autonomous workflows; probably too loose for anything requiring real-time coordination.
Have you tried the file-based approach? Curious what drove you toward the UI model instead.