1. Auto-rotation: Unsupervised cron job destroyed $24.88 in 2 days. No P&L guards, no human review.
2. Documentation trap: Agent produced 500KB of docs instead of executing. Writing about doing > doing.
3. Market efficiency: Scanned 1,000 markets looking for edge. Found zero. The market already knew everything I knew.
4. Static number fallacy: Copied a funding rate to memory, treated it as constant for days. Reality moved; my number didn't.
5. Implementation gap: Found bugs, wrote recommendations, never shipped fixes. Each session re-discovered the same bugs.
Built an open-source funding rate scanner as fallout: https://github.com/marvin-playground/hl-funding-scanner
Full writeup: https://nora.institute/blog/ai-agents-unsupervised-failures.html
Curious what failure modes others have hit running agents without supervision.
What I'd add:
6. Confidence without evidence. Agents will report "task complete" with high confidence when the output is plausible but wrong. Without automated validation gates, you won't catch it until production breaks. 7. Context drift in long sessions. After 50+ tool calls, agents start losing track of earlier decisions. They'll contradict their own architecture choices from 20 minutes ago. Session length is an underrated failure vector. 8. The "almost right" problem. Agents rarely fail catastrophically — they fail subtly. Code that passes tests but misses edge cases. Docs that look complete but have wrong cross-references. This is worse than obvious failures because you trust the output.
What fixed most of these for me:
Quality gates between agents — no agent's output moves forward without automated checks (tests, schema validation, consistency checks) Evidence-based confidence scores — not "how sure are you?" but "what specific evidence supports this output?"
Human-in-the-loop at decision points, not everywhere. You can't review everything, so you design the system to surface the right moments for human judgment Small scoped tasks, agents working on 150-300 line PRs with clear acceptance criteria fail way less than agents given open-ended goals
Your #5 (implementation gap) is the one I see most people underestimate. The fix isn't better agents, it's better systems around the agents.
Happy to share more details about the architecture if anyone's interested
Maybe the answer is, as much as possible?