# Memory Best Practices
## Environment
- VPS: 8GB RAM, 4GB swap, Debian
- Running: Docker stack (8 containers), Woodpecker CI, OpenClaw gateway
## Safe Fixes (no permission needed)
- Kill stale `claude` processes (>3h old): `pgrep -f "claude" --older 10800 | xargs -r kill`
- Drop filesystem caches: `sync && echo 3 | sudo tee /proc/sys/vm/drop_caches`
- Restart bloated Anvil (grows to 12GB+ over hours): `sudo docker restart ${PROJECT_NAME}-anvil-1`
- Kill orphan node processes from dead worktrees
## Dangerous (escalate)
- `docker system prune -a --volumes`: kills CI images, hours to rebuild
- Stopping project stack containers: breaks the dev environment
- OOM that survives all safe fixes: needs a human decision on what to kill
## Known Memory Hogs
- `claude` processes from dev-agent: 200MB+ each, can zombie
- `dockerd`: 600MB+ baseline (normal)
- `openclaw-gateway`: 500MB+ (normal)
- Anvil container: starts small, grows unbounded over hours
- `forge build` with `via_ir`: can spike to 4GB+. Use `--skip test script` to reduce.
- Vite dev servers inside containers: 150MB+ each
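To see which of these hogs is currently worst, a quick triage pair (a sketch; the `docker stats` line needs the daemon running and sudo rights):

```shell
# Top 10 processes by resident memory (RSS, KiB);
# the etimes column helps spot >3h-old claude processes.
ps -eo pid,rss,etimes,comm --sort=-rss | head -n 11

# Per-container memory usage, one line each.
sudo docker stats --no-stream --format '{{.Name}}: {{.MemUsage}}'
```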
## Lessons Learned
- After killing processes, always run `sync && echo 3 | sudo tee /proc/sys/vm/drop_caches`
- Swap doesn't drain from dropping caches alone; it's actual paged-out process memory
- Running CI + the full project stack means 14+ containers on 8GB RAM. Run only one pipeline at a time.
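Because dropping caches never drains swap, the only way to pull paged-out memory back is to cycle swap itself. A hedged sketch (root only, and only safe when available RAM exceeds used swap; script name is hypothetical):

```shell
#!/usr/bin/env bash
# drain-swap.sh -- hypothetical helper; run as root after the safe fixes.
avail=$(free -b | awk '/^Mem:/  {print $7}')  # available RAM, bytes
used=$(free -b  | awk '/^Swap:/ {print $3}')  # used swap, bytes

if [ "$avail" -gt "$used" ]; then
  swapoff -a && swapon -a   # forces swapped pages back into RAM
else
  echo "refusing: only ${avail}B available for ${used}B of swap" >&2
  exit 1
fi
```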