Disinto
Autonomous code factory — disinto.ai
A mining robot, lost and confused, builds a Disinto from scrap —
a device so powerful it vaporizes three-quarters of a mountain on a single battery.
— Isaac Asimov, "Robot AL-76 Goes Astray" (1942)
Point disinto at a git repo with a Woodpecker CI pipeline and it will pick up issues, implement them, review PRs, and keep the system healthy — all on its own.
Architecture
entrypoint.sh (while-true polling loop, 5 min base interval)
│
├── every 5 min ──→ review-poll.sh ← finds unreviewed PRs, spawns review
│ └── review-pr.sh ← claude -p: review → approve/request changes
│
├── every 5 min ──→ dev-poll.sh ← pulls ready issues, spawns dev-agent
│ └── dev-agent.sh ← claude -p: implement → PR → CI → review → merge
│
├── every 6h ────→ gardener-run.sh ← backlog grooming (duplicates, stale, tech-debt)
│ └── claude -p: triage → promote/close/escalate
│
├── every 6h ────→ architect-run.sh ← strategic decomposition of vision into sprints
│
├── every 12h ───→ planner-run.sh ← gap-analyse VISION.md, create backlog issues
│ └── claude -p: update AGENTS.md → create issues
│
└── every 24h ───→ predictor-run.sh ← infrastructure pattern detection
entrypoint-edge.sh (edge container)
├── dispatcher.sh ← polls ops repo for vault actions
└── every 20 min → supervisor-run.sh ← health checks (bash checks, zero tokens)
├── all clear? → exit 0
└── problem? → claude -p (diagnose, fix, or escalate)
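The schedule above can be sketched as a single bash loop. The snippet below is an illustrative reconstruction, not the actual entrypoint.sh; the due helper and loop_once names are invented for this example:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the entrypoint polling loop (not the real entrypoint.sh).
BASE_INTERVAL=300                      # 5 min base tick

# due LAST NOW INTERVAL: succeeds when INTERVAL seconds have elapsed since LAST
due() { (( $2 - $1 >= $3 )); }

last_gardener=0 last_planner=0 last_predictor=0
loop_once() {                          # one tick of the while-true loop
  local now; now=$(date +%s)
  review/review-poll.sh                # every tick: find unreviewed PRs
  dev/dev-poll.sh                      # every tick: find ready issues
  due "$last_gardener"  "$now" $((6*3600))  && { gardener/gardener-run.sh;   last_gardener=$now; }
  due "$last_planner"   "$now" $((12*3600)) && { planner/planner-run.sh;     last_planner=$now; }
  due "$last_predictor" "$now" $((24*3600)) && { predictor/predictor-run.sh; last_predictor=$now; }
}

# The real loop would then be: while true; do loop_once; sleep "$BASE_INTERVAL"; done
```

A single loop keeps all scheduling in one process and one log, with no crontab state to restore after a container restart.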
Prerequisites
Required:
- Claude CLI — claude in PATH, authenticated
- Docker — for provisioning a local Forgejo instance (or a running Forgejo/Gitea instance)
- Woodpecker CI — local instance connected to your forge; disinto monitors pipelines, retries failures, and queries the Woodpecker Postgres DB directly
- PostgreSQL client (psql) — for Woodpecker DB queries (pipeline status, build counts)
- jq, curl, git
Optional:
- Foundry (forge, cast, anvil) — only needed if your target project uses Solidity
- Node.js — only needed if your target project uses Node
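A quick way to confirm the required tools are installed is a preflight loop over command -v. This is a hypothetical convenience snippet, not part of disinto itself:

```shell
# Hypothetical preflight check: report any required tool missing from PATH.
preflight() {
  local missing=0 tool
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool" >&2; missing=1; }
  done
  return "$missing"
}

# preflight claude docker jq curl git psql && echo "all prerequisites found"
```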
Setup
# 1. Clone
git clone https://github.com/johba/disinto.git
cd disinto
# 2. Bootstrap a project (provisions local Forgejo, creates tokens, clones repo)
disinto init https://github.com/yourorg/yourproject
Or configure manually — edit .env with your values:
# Forge (auto-populated by disinto init)
FORGE_URL=http://localhost:3000 # local Forgejo instance
FORGE_TOKEN=... # dev-bot token
FORGE_REVIEW_TOKEN=... # review-bot token
# Woodpecker CI
WOODPECKER_SERVER=http://localhost:8000
WOODPECKER_TOKEN=...
WOODPECKER_DB_PASSWORD=...
WOODPECKER_DB_USER=woodpecker
WOODPECKER_DB_HOST=127.0.0.1
WOODPECKER_DB_NAME=woodpecker
# Tuning
CLAUDE_TIMEOUT=7200 # max seconds per Claude invocation (default: 2h)
# 3. Start the agent and edge containers
docker compose up -d
# 4. Verify the entrypoint loop is running
docker exec disinto-agents-1 tail -f /home/agent/data/agent-entrypoint.log
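The agent scripts read these values through lib/env.sh (see Directory Structure below). A minimal loader, shown here as an illustrative sketch rather than the actual implementation, could auto-export everything in .env:

```shell
# Illustrative .env loader; the real lib/env.sh may differ.
load_env() {                 # load_env [FILE], defaults to ./.env
  local file="${1:-.env}"
  [ -f "$file" ] || { echo "load_env: $file not found" >&2; return 1; }
  set -a                     # auto-export every variable sourced below
  # shellcheck disable=SC1090
  . "$file"
  set +a
}
```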
Directory Structure
disinto/
├── .env.example # Template — copy to .env, add secrets + project config
├── .gitignore # Excludes .env, logs, state files
├── lib/
│ ├── env.sh # Shared: load .env, PATH, API helpers
│ └── ci-debug.sh # Woodpecker CI log/failure helper
├── dev/
│ ├── dev-poll.sh # Poll: find ready issues
│ └── dev-agent.sh # Implementation agent (claude -p)
├── review/
│ ├── review-poll.sh # Poll: find unreviewed PRs
│ └── review-pr.sh # Review agent (claude -p)
├── gardener/
│ ├── gardener-run.sh # Executor: backlog grooming
│ └── best-practices.md # Gardener knowledge base
├── planner/
│ ├── planner-run.sh # Executor: vision gap analysis
│ └── (formula-driven) # run-planner.toml executed by dispatcher
├── vault/
│ └── vault-env.sh # Shared env setup (vault redesign in progress, see #73-#77)
├── docs/
│ └── VAULT.md # Vault PR workflow and branch protection documentation
└── supervisor/
├── supervisor-poll.sh # Supervisor: health checks + claude -p
├── update-prompt.sh # Self-learning: append to best-practices
└── best-practices/ # Progressive disclosure knowledge base
├── memory.md
├── disk.md
├── ci.md
├── forge.md
├── dev-agent.md
├── review-agent.md
└── git.md
Agents
| Agent | Trigger | Job |
|---|---|---|
| Supervisor | Every 20 min | Health checks (RAM, disk, CI, git). Calls Claude only when something is broken. Self-improving via best-practices/. |
| Dev | Every 5 min | Picks up backlog-labeled issues, creates a branch, implements, opens a PR, monitors CI, responds to review, merges. |
| Review | Every 5 min | Finds PRs without review, runs Claude-powered code review, approves or requests changes. |
| Gardener | Every 6h | Grooms the issue backlog: detects duplicates, promotes tech-debt to backlog, closes stale issues, escalates ambiguous items. |
| Planner | Every 12h | Updates AGENTS.md documentation to reflect recent code changes, then gap-analyses VISION.md vs current state and creates up to 5 backlog issues for the highest-leverage gaps. |
Vault: Being redesigned as a PR-based approval workflow (issues #73-#77). See docs/VAULT.md for the vault PR workflow and branch protection details.
Design Principles
- Bash for checks, AI for judgment — polling and health checks are shell scripts; Claude is only invoked when something needs diagnosing or deciding
- Pull over push — dev-agent derives readiness from merged dependencies, not labels or manual assignment
- Progressive disclosure — the supervisor reads only the best-practices file relevant to the current problem, not all of them
- Self-improving — when Claude fixes something new, the lesson is appended to best-practices for next time
- Project-agnostic — all project-specific values (repo, paths, CI IDs) come from .env, not hardcoded in scripts
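The progressive-disclosure principle can be made concrete with a small dispatch helper. The mapping below is illustrative: the file names follow the supervisor/best-practices/ layout shown earlier, but the function itself is invented for this example:

```shell
# Illustrative: map a failed health check to the one best-practices file
# the supervisor should feed to Claude (progressive disclosure).
practice_file() {
  case "$1" in
    memory) echo "supervisor/best-practices/memory.md" ;;
    disk)   echo "supervisor/best-practices/disk.md" ;;
    ci)     echo "supervisor/best-practices/ci.md" ;;
    forge)  echo "supervisor/best-practices/forge.md" ;;
    *)      echo "supervisor/best-practices/git.md" ;;
  esac
}

# claude -p would then receive only: "$(cat "$(practice_file disk)")"
```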
Runtime constraints
Disinto is intentionally opinionated about its own runtime. These are hard constraints, not preferences:
- Debian + GNU userland — all scripts target Debian with standard GNU tools (bash, awk, sed, date, timeout). No portability shims for macOS or BSD.
- Shell + a small set of runtimes — every agent is a bash script. The only interpreted runtimes used by disinto core are python3 (TOML parsing in lib/load-project.sh, JSON state tracking in dev/dev-poll.sh, recipe matching in gardener/gardener-poll.sh) and claude (the AI CLI). No Ruby, Perl, or other runtimes. Do not add new runtime dependencies without a strong justification.
- Few, powerful dependencies — required non-standard tools: jq, curl, git, tmux, psql, and python3 (≥ 3.11 for tomllib; or install tomli for older Pythons). Adding anything beyond this list requires justification.
- Node.js and Foundry are target-project dependencies — if your target repo uses Node or Solidity, install those on the host. They are not part of disinto's core and must not be assumed present in disinto scripts.
The goal: any Debian machine with the prerequisites listed above can run disinto. Keep it that way.