disinto/factory/best-practices/memory.md
openhands 5eb17020d5 feat: progressive disclosure + escalate everything to claude
- PROMPT.md references best-practices/ files instead of inlining all knowledge
- best-practices/{memory,disk,ci,dev-agent,git}.md — loaded on demand by claude
- All alerts go to claude -p. Claude decides what to fix and what to escalate.
- update-prompt.sh targets specific best-practices files for self-learning
2026-03-12 13:04:50 +00:00

Memory Best Practices

Environment

  • VPS: 8GB RAM, 4GB swap, Debian
  • Running: Docker stack (8 containers), Woodpecker CI, OpenClaw gateway

Safe Fixes (no permission needed)

  • Kill stale claude processes (>3h old): pgrep -f "claude" --older 10800 | xargs kill
  • Drop filesystem caches: sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
  • Restart the bloated Anvil container: sudo docker restart harb-anvil-1 (it grows to 12GB+ over hours)
  • Kill orphan node processes from dead worktrees
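Taken together, the safe fixes above can be sketched as a small script. This is a sketch, not existing tooling: the helper names and the hard-coded 3-hour threshold are assumptions.

```shell
#!/bin/sh
STALE_SECS=10800   # 3 hours, per the stale-claude rule above

# PIDs of claude processes older than STALE_SECS (procps-ng -O/--older)
stale_claude_pids() {
    pgrep -f "claude" --older "$STALE_SECS" || true
}

# MemAvailable from /proc/meminfo, in MB
mem_available_mb() {
    awk '/^MemAvailable:/ {print int($2/1024)}' /proc/meminfo
}

safe_fixes() {
    pids=$(stale_claude_pids)
    if [ -n "$pids" ]; then
        echo "$pids" | xargs kill
    fi
    # Drop caches only after killing, so freed pages get reclaimed
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null
    echo "MemAvailable now: $(mem_available_mb) MB"
}
```

Run `safe_fixes` manually or from the triage flow; `mem_available_mb` on its own is a cheap before/after check.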

Dangerous (escalate)

  • docker system prune -a --volumes — kills CI images, hours to rebuild
  • Stopping harb stack containers — breaks dev environment
  • OOM that survives all safe fixes — needs human decision on what to kill
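One way to keep the dangerous fixes behind a human gate is to re-check MemAvailable after the safe fixes and escalate instead of pruning. A sketch; the 512MB floor and the alert hook are assumptions:

```shell
#!/bin/sh
FLOOR_MB=512   # assumed floor for an 8GB box; tune to taste

avail=$(awk '/^MemAvailable:/ {print int($2/1024)}' /proc/meminfo)
if [ "$avail" -lt "$FLOOR_MB" ]; then
    # Still starved after the safe fixes: hand off rather than prune.
    # e.g. feed context to claude -p per the triage flow, or page a human.
    echo "still low: ${avail} MB available; escalating" >&2
fi
```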

Known Memory Hogs

  • claude processes from dev-agent: 200MB+ each, can zombie
  • dockerd: 600MB+ baseline (normal)
  • openclaw-gateway: 500MB+ (normal)
  • Anvil container: starts small, grows unbounded over hours
  • forge build with via_ir: can spike to 4GB+. Use forge build --skip test script to skip compiling test and script files and cut the peak.
  • Vite dev servers inside containers: 150MB+ each
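When one of these flares up, a quick triage pass is to list the biggest resident-set processes and per-container usage. A sketch under the assumption of procps `ps` and docker access; the function names are mine:

```shell
#!/bin/sh
# Top 10 processes by resident memory, in MB (unprivileged)
top_mem() {
    ps -eo rss,pid,comm --sort=-rss |
        awk 'NR>1 && NR<=11 {printf "%d MB\t%s\t%s\n", $1/1024, $2, $3}'
}

# Per-container memory, to spot a bloated Anvil (needs docker access)
container_mem() {
    sudo docker stats --no-stream --format '{{.Name}}\t{{.MemUsage}}'
}
```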

Lessons Learned

  • After killing processes, always sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
  • Swap doesn't drain from dropping caches alone; it holds memory that was actually paged out of processes, so it only shrinks when those processes free memory or restart.
  • Running CI + full harb stack = 14+ containers on 8GB. Only one pipeline at a time.
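The swap lesson can be checked directly: swap is accounted per process as VmSwap in /proc/<pid>/status, so a helper like this (the name is an assumption) shows which processes to restart to actually drain it:

```shell
#!/bin/sh
# Largest per-process swap users. Dropping caches never touches these;
# only exiting/restarting the process (or swapoff/swapon, which is risky
# on a loaded box) frees them.
swap_hogs() {
    for f in /proc/[0-9]*/status; do
        awk '/^Name:/ {n=$2} /^VmSwap:/ {if ($2 > 0) print $2, "kB", n}' "$f" 2>/dev/null
    done | sort -rn | head -n 10
}
```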