Step 2 of .woodpecker/nomad-validate.yml previously ran
`nomad job validate` against a single explicit path
(nomad/jobs/forgejo.nomad.hcl, wired up during the S1.1 review). Replace
that with a POSIX-sh loop over nomad/jobs/*.nomad.hcl so every jobspec
gets CI coverage automatically — no "edit the pipeline" step to forget
when the next jobspec (woodpecker, caddy, agents, …) lands.
Why reverse S1.1's explicit-line approach: the "no-ad-hoc-steps"
principle that drove the explicit list was about keeping step *classes*
enumerated, not about re-listing every file of the same class. Globbing
over `*.nomad.hcl` still encodes a single class ("jobspec validation")
and is strictly tighter — a jobspec dropped into `nomad/jobs/` can't
silently bypass CI because someone forgot to add its line. The `.nomad.hcl` suffix (set as
convention by S1.1 review) is what keeps non-jobspec HCL out of this
loop.
Implementation notes:
- `[ -f "$f" ] || continue` guards the no-match case. POSIX sh has no
nullglob, so an empty jobs/ dir would otherwise leave the literal
glob in $f and fail nomad job validate with "no such file". Not
reachable today (forgejo.nomad.hcl exists), but keeps the step safe
against any transient empty state during future refactors.
- `set -e` inside the block ensures the first failing jobspec aborts the
step (Woodpecker's default behavior, but explicit is cheap).
- Loop echoes the file being validated so CI logs point at the
specific jobspec on failure.
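Taken together, the notes above amount to a step body like the following sketch (not the actual contents of `.woodpecker/nomad-validate.yml` — the function wrapper and the `VALIDATE` override are added here for illustration, so the loop can be exercised without the Nomad CLI installed):

```shell
# POSIX-sh loop over every jobspec; VALIDATE defaults to the real
# `nomad job validate` but can be overridden for local dry runs.
validate_jobs() {
  set -e
  for f in "${1:-nomad/jobs}"/*.nomad.hcl; do
    # POSIX sh has no nullglob: with zero matches, $f holds the literal
    # glob pattern, so skip anything that is not an actual file.
    [ -f "$f" ] || continue
    echo "validating $f"                  # point CI logs at the file
    ${VALIDATE:-nomad job validate} "$f"  # first failure aborts (set -e)
  done
}
```

The same guard makes the empty-directory case a no-op instead of a "no such file" failure.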
Docs (nomad/AGENTS.md):
- "How CI validates these files" now lists all *five* steps (the S1.1
review added step 2 but didn't update the doc; fixed in passing).
- Step 2 is documented with explicit scope: what offline validate
catches (unknown stanzas, missing required fields, wrong value
types, bad driver config) and what it does NOT catch (cross-file
host_volume name resolution against client.hcl, which is a
scheduling-time check, and image reachability).
- "Adding a jobspec" step 4 updated: no pipeline edit required as
long as the file follows the `*.nomad.hcl` naming convention. The
suffix is now documented as load-bearing in step 1.
- Step 2 of the "Adding a jobspec" checklist cross-links the
host_volume scheduling-time check, so contributors know the
paired-write rule (client.hcl + cluster-up.sh) is the real
guardrail for that class of drift.
Acceptance criteria:
- Broken jobspec (typo in stanza, missing required field) fails step
2 with nomad's error message — covered by the loop over every file.
- Fixed jobspec passes — standard validate behavior.
- Step 1 (nomad config validate) untouched.
- No .sh changes, so no shellcheck impact; manual shellcheck pass
shown clean.
- Trigger path `nomad/**` already covers `nomad/jobs/**` (confirmed,
no change needed to `when:` block).
Refs: #843 (S1.4), #825 (S0.5 base pipeline), #840 (S1.1 first jobspec)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Disinto
Autonomous code factory — disinto.ai
A mining robot, lost and confused, builds a Disinto from scrap —
a device so powerful it vaporizes three-quarters of a mountain on a single battery.
— Isaac Asimov, "Robot AL-76 Goes Astray" (1942)
Point it at a git repo with a Woodpecker CI pipeline and it will pick up issues, implement them, review PRs, and keep the system healthy — all on its own.
Architecture
```
entrypoint.sh (while-true polling loop, 5 min base interval)
│
├── every 5 min ──→ review-poll.sh    ← finds unreviewed PRs, spawns review
│                    └── review-pr.sh ← claude -p: review → approve/request changes
│
├── every 5 min ──→ dev-poll.sh       ← pulls ready issues, spawns dev-agent
│                    └── dev-agent.sh ← claude -p: implement → PR → CI → review → merge
│
├── every 6h ────→ gardener-run.sh    ← backlog grooming (duplicates, stale, tech-debt)
│                    └── claude -p: triage → promote/close/escalate
│
├── every 6h ────→ architect-run.sh   ← strategic decomposition of vision into sprints
│
├── every 12h ───→ planner-run.sh     ← gap-analyse VISION.md, create backlog issues
│                    └── claude -p: update AGENTS.md → create issues
│
└── every 24h ───→ predictor-run.sh   ← infrastructure pattern detection

entrypoint-edge.sh (edge container)
├── dispatcher.sh                     ← polls ops repo for vault actions
└── every 20 min → supervisor-run.sh  ← health checks (bash checks, zero tokens)
     ├── all clear? → exit 0
     └── problem?   → claude -p (diagnose, fix, or escalate)
```
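The scheduling above can be sketched as simple tick arithmetic (an illustration only — the tick bookkeeping and the `due_jobs` helper are assumptions, not entrypoint.sh's actual code):

```shell
# One tick = one 5-minute base interval. Prints the jobs due at a tick.
due_jobs() {
  tick=$1
  jobs="review-poll dev-poll"                                   # every tick (5 min)
  [ $((tick % 72))  -eq 0 ] && jobs="$jobs gardener architect"  # 6h  = 72 ticks
  [ $((tick % 144)) -eq 0 ] && jobs="$jobs planner"             # 12h = 144 ticks
  [ $((tick % 288)) -eq 0 ] && jobs="$jobs predictor"           # 24h = 288 ticks
  echo "$jobs"
}
```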
Prerequisites
Required:
- Claude CLI — `claude` in PATH, authenticated
- Docker — for provisioning a local Forgejo instance (or a running Forgejo/Gitea instance)
- Woodpecker CI — local instance connected to your forge; disinto monitors pipelines, retries failures, and queries the Woodpecker Postgres DB directly
- PostgreSQL client (`psql`) — for Woodpecker DB queries (pipeline status, build counts)
- `jq`, `curl`, `git`

Optional:
- Foundry (`forge`, `cast`, `anvil`) — only needed if your target project uses Solidity
- Node.js — only needed if your target project uses Node
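For a sense of what "queries the Woodpecker Postgres DB directly" means, a query might look like this (the `pipelines` table and `status` column are assumptions about Woodpecker's schema — verify against your version; the function name is hypothetical):

```shell
# Latest pipeline status straight from the Woodpecker Postgres DB,
# using the WOODPECKER_DB_* values from .env.
wp_last_status() {
  PGPASSWORD="$WOODPECKER_DB_PASSWORD" psql \
    -h "$WOODPECKER_DB_HOST" -U "$WOODPECKER_DB_USER" \
    -d "$WOODPECKER_DB_NAME" -tAc \
    "SELECT status FROM pipelines ORDER BY id DESC LIMIT 1"
}
```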
Setup
```sh
# 1. Clone
git clone https://github.com/johba/disinto.git
cd disinto

# 2. Bootstrap a project (provisions local Forgejo, creates tokens, clones repo)
disinto init https://github.com/yourorg/yourproject
```

This will generate a docker-compose.yml file.

Or configure manually — edit `.env` with your values:

```sh
# Forge (auto-populated by disinto init)
FORGE_URL=http://localhost:3000   # local Forgejo instance
FORGE_TOKEN=...                   # dev-bot token
FORGE_REVIEW_TOKEN=...            # review-bot token

# Woodpecker CI
WOODPECKER_SERVER=http://localhost:8000
WOODPECKER_TOKEN=...
WOODPECKER_DB_PASSWORD=...
WOODPECKER_DB_USER=woodpecker
WOODPECKER_DB_HOST=127.0.0.1
WOODPECKER_DB_NAME=woodpecker

# Tuning
CLAUDE_TIMEOUT=7200               # max seconds per Claude invocation (default: 2h)
```

```sh
# 3. Start the agent and edge containers
docker compose up -d

# 4. Verify the entrypoint loop is running
docker exec disinto-agents tail -f /home/agent/data/agent-entrypoint.log
```
Directory Structure
```
disinto/
├── .env.example           # Template — copy to .env, add secrets + project config
├── .gitignore             # Excludes .env, logs, state files
├── lib/
│   ├── env.sh             # Shared: load .env, PATH, API helpers
│   └── ci-debug.sh        # Woodpecker CI log/failure helper
├── dev/
│   ├── dev-poll.sh        # Poll: find ready issues
│   └── dev-agent.sh       # Implementation agent (claude -p)
├── review/
│   ├── review-poll.sh     # Poll: find unreviewed PRs
│   └── review-pr.sh       # Review agent (claude -p)
├── gardener/
│   ├── gardener-run.sh    # Executor: backlog grooming
│   └── best-practices.md  # Gardener knowledge base
├── planner/
│   ├── planner-run.sh     # Executor: vision gap analysis
│   └── (formula-driven)   # run-planner.toml executed by dispatcher
├── vault/
│   └── vault-env.sh       # Shared env setup (vault redesign in progress, see #73-#77)
├── docs/
│   └── VAULT.md           # Vault PR workflow and branch protection documentation
└── supervisor/
    ├── supervisor-poll.sh # Supervisor: health checks + claude -p
    ├── update-prompt.sh   # Self-learning: append to best-practices
    └── best-practices/    # Progressive disclosure knowledge base
        ├── memory.md
        ├── disk.md
        ├── ci.md
        ├── forge.md
        ├── dev-agent.md
        ├── review-agent.md
        └── git.md
```
Agents
| Agent | Trigger | Job |
|---|---|---|
| Supervisor | Every 20 min | Health checks (RAM, disk, CI, git). Calls Claude only when something is broken. Self-improving via best-practices/. |
| Dev | Every 5 min | Picks up backlog-labeled issues, creates a branch, implements, opens a PR, monitors CI, responds to review, merges. |
| Review | Every 5 min | Finds PRs without review, runs Claude-powered code review, approves or requests changes. |
| Gardener | Every 6h | Grooms the issue backlog: detects duplicates, promotes tech-debt to backlog, closes stale issues, escalates ambiguous items. |
| Planner | Every 12h | Updates AGENTS.md documentation to reflect recent code changes, then gap-analyses VISION.md vs current state and creates up to 5 backlog issues for the highest-leverage gaps. |
Vault: Being redesigned as a PR-based approval workflow (issues #73-#77). See docs/VAULT.md for the vault PR workflow and branch protection details.
Design Principles
- Bash for checks, AI for judgment — polling and health checks are shell scripts; Claude is only invoked when something needs diagnosing or deciding
- Pull over push — dev-agent derives readiness from merged dependencies, not labels or manual assignment
- Progressive disclosure — the supervisor reads only the best-practices file relevant to the current problem, not all of them
- Self-improving — when Claude fixes something new, the lesson is appended to best-practices for next time
- Project-agnostic — all project-specific values (repo, paths, CI IDs) come from `.env`, not hardcoded scripts
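The progressive-disclosure principle can be pictured as a tiny dispatcher (a hypothetical sketch — the supervisor's actual problem-to-file mapping may differ, and the fallback choice here is an assumption):

```shell
# Map a detected problem class to the single best-practices file worth
# loading, instead of feeding Claude the whole knowledge base.
pick_practice() {
  case "$1" in
    memory|ram) echo "best-practices/memory.md" ;;
    disk)       echo "best-practices/disk.md" ;;
    ci)         echo "best-practices/ci.md" ;;
    forge)      echo "best-practices/forge.md" ;;
    *)          echo "best-practices/git.md" ;;  # hypothetical fallback
  esac
}
```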
Runtime constraints
Disinto is intentionally opinionated about its own runtime. These are hard constraints, not preferences:
- Debian + GNU userland — all scripts target Debian with standard GNU tools (`bash`, `awk`, `sed`, `date`, `timeout`). No portability shims for macOS or BSD.
- Shell + a small set of runtimes — every agent is a bash script. The only interpreted runtimes used by disinto core are `python3` (TOML parsing in `lib/load-project.sh`, JSON state tracking in `dev/dev-poll.sh`, recipe matching in `gardener/gardener-poll.sh`) and `claude` (the AI CLI). No Ruby, Perl, or other runtimes. Do not add new runtime dependencies without a strong justification.
- Few, powerful dependencies — required non-standard tools: `jq`, `curl`, `git`, `tmux`, `psql`, and `python3` (≥ 3.11 for `tomllib`; or install `tomli` for older Pythons). Adding anything beyond this list requires justification.
- Node.js and Foundry are target-project dependencies — if your target repo uses Node or Solidity, install those on the host. They are not part of disinto's core and must not be assumed present in disinto scripts.
The goal: any Debian machine with the prerequisites listed above can run disinto. Keep it that way.
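The `tomllib`/`tomli` constraint can be satisfied with a shim like this (a sketch — the function name and TOML keys are hypothetical, not `lib/load-project.sh`'s actual code):

```shell
# Read one key from a TOML file via python3, falling back to tomli on
# Pythons older than 3.11.
toml_get() {  # usage: toml_get FILE TABLE KEY
  python3 - "$1" "$2" "$3" <<'PY'
import sys
try:
    import tomllib               # stdlib on Python >= 3.11
except ModuleNotFoundError:
    import tomli as tomllib      # `pip install tomli` on older Pythons
with open(sys.argv[1], "rb") as fh:
    data = tomllib.load(fh)
print(data[sys.argv[2]][sys.argv[3]])
PY
}
```

Keeping the Python inline in a heredoc avoids adding a new script file for a three-line parse.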