Compare commits


74 commits

Author SHA1 Message Date
faf6490877 Merge pull request 'fix: [nomad-prep] P11 — wire lib/secret-scan.sh into Woodpecker CI gate (#798)' (#813) from fix/issue-798 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 21:09:04 +00:00
Claude
88b377ecfb fix: add file package for binary detection, document shallow-clone tradeoff
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 21:03:05 +00:00
Claude
d020847772 fix: [nomad-prep] P11 — wire lib/secret-scan.sh into Woodpecker CI gate (#798)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 20:56:01 +00:00
98ec610645 Merge pull request 'fix: [nomad-prep] P10 — audit lib/ + compose for docker-backend-isms (#797)' (#812) from fix/issue-797 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 20:50:50 +00:00
Claude
f8c3ada077 fix: [nomad-prep] P10 — audit lib/ + compose for docker-backend-isms (#797)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
Sites touched:
- lib/generators.sh: WOODPECKER_BACKEND_DOCKER_NETWORK now reads from
  ${WOODPECKER_CI_NETWORK:-disinto_disinto-net} so nomad jobspecs can
  override the compose-generated network name.
- lib/forge-setup.sh: bare-mode _forgejo_exec() and setup_forge() use
  ${FORGEJO_CONTAINER_NAME:-disinto-forgejo} instead of hardcoding the
  container name. Compose mode is unaffected (uses service name).

Documented exceptions (container_name directives in generators.sh
compose template output): these define names inside docker-compose.yml,
which is compose-specific output. Under nomad the generator is not used.
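Both overrides are plain shell default expansion; a minimal sketch of the pattern (variable names and defaults as in the commit):

```shell
# Compose mode leaves these unset and gets the compose-generated names;
# a nomad jobspec exports its own values before the libs are sourced.
ci_network="${WOODPECKER_CI_NETWORK:-disinto_disinto-net}"
forgejo_container="${FORGEJO_CONTAINER_NAME:-disinto-forgejo}"

echo "network=${ci_network} container=${forgejo_container}"
```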

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 20:39:47 +00:00
8315a4ecf5 Merge pull request 'fix: [nomad-prep] P8 — spot-check lib/mirrors.sh against empty Forgejo target (#796)' (#811) from fix/issue-796 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 20:35:38 +00:00
Claude
b6f2d83a28 fix: use FORGE_API_BASE for /repos/migrate endpoint, build payload with jq
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
- FORGE_API is repo-scoped; /repos/migrate needs the global FORGE_API_BASE
- Use jq -n --arg for safe JSON construction (no shell interpolation)
- Update docs to reference FORGE_API_BASE
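Building the body with `jq -n --arg` guarantees correct JSON escaping no matter what the repo name or URL contains; a sketch of the pattern (field names are illustrative, not the exact migrate payload):

```shell
# jq escapes quotes, backslashes, and newlines itself — no shell
# interpolation into the JSON string, so hostile values cannot break it.
clone_url='https://github.com/example/repo.git'
repo_name='repo "with quotes"'

payload=$(jq -n \
  --arg clone_addr "$clone_url" \
  --arg repo_name  "$repo_name" \
  '{clone_addr: $clone_addr, repo_name: $repo_name, mirror: true}')

echo "$payload"
```

The result would then be POSTed to `${FORGE_API_BASE}/repos/migrate` with the usual auth headers.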

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 20:29:27 +00:00
Claude
2465841b84 fix: [nomad-prep] P8 — spot-check lib/mirrors.sh against empty Forgejo target (#796)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 20:22:11 +00:00
5c40b59359 Merge pull request 'fix: [nomad-prep] P6 — externalize host paths in docker-compose via env vars (#795)' (#810) from fix/issue-795 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 20:17:43 +00:00
Claude
19f10e33e6 fix: [nomad-prep] P6 — externalize host paths in docker-compose via env vars (#795)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
Replace hardcoded host-side bind-mount paths with env vars so Nomad
jobspecs can reuse the same variables at cutover:

- CLAUDE_BIN_DIR: path to claude CLI binary (resolved at init time)
- CLAUDE_CONFIG_FILE: path to .claude.json (default ${HOME}/.claude.json)
- CLAUDE_DIR: path to .claude directory (default ${HOME}/.claude)
- AGENT_SSH_DIR: path to SSH keys (default ${HOME}/.ssh)
- SOPS_AGE_DIR: path to SOPS age keys (default ${HOME}/.config/sops/age)

generators.sh now writes CLAUDE_BIN_DIR to .env instead of sed-replacing
CLAUDE_BIN_PLACEHOLDER in docker-compose.yml.
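A sketch of resolving those defaults and persisting them to `.env` at init time (the helper name here is hypothetical; the variable names and defaults are from the commit):

```shell
# Resolve host paths once and append them to .env so docker compose and a
# future nomad jobspec consume the same variables (hypothetical helper).
write_path_env() {
  local env_file="$1"
  {
    printf 'CLAUDE_CONFIG_FILE=%s\n' "${CLAUDE_CONFIG_FILE:-${HOME}/.claude.json}"
    printf 'CLAUDE_DIR=%s\n'         "${CLAUDE_DIR:-${HOME}/.claude}"
    printf 'AGENT_SSH_DIR=%s\n'      "${AGENT_SSH_DIR:-${HOME}/.ssh}"
    printf 'SOPS_AGE_DIR=%s\n'       "${SOPS_AGE_DIR:-${HOME}/.config/sops/age}"
  } >> "$env_file"
}
```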

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 20:01:47 +00:00
6a4ca5c3a0 Merge pull request 'fix: [nomad-prep] P5 — add healthchecks to agents, edge, staging, woodpecker-agent (#794)' (#809) from fix/issue-794 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 19:55:25 +00:00
Claude
8799a8c676 fix: [nomad-prep] P5 — add healthchecks to agents, edge, staging, woodpecker-agent (#794)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
Add Docker healthcheck blocks so Nomad check stanzas map 1:1 at migration:

- agents / agents-llama: pgrep -f entrypoint.sh (60s interval)
- woodpecker-agent: wget healthz on :3333 (30s interval)
- edge: curl Caddy admin API on :2019 (30s interval)
- staging: wget Caddy admin API on :2019 (30s interval)
- chat: add /health endpoint to server.py (no-auth 200 OK), fix
  Dockerfile HEALTHCHECK to use it, add compose-level healthcheck
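The probes are ordinary one-liners whose exit code is the health verdict; a sketch of the semantics (ports and intervals from the commit, compose syntax omitted):

```shell
# Docker-style healthcheck semantics: run the probe; exit 0 = healthy,
# anything else = unhealthy.
healthy() {
  if "$@" >/dev/null 2>&1; then echo healthy; else echo unhealthy; fi
}

# The probes wired into compose (per the commit):
#   agents / agents-llama:  pgrep -f entrypoint.sh                  (60s)
#   woodpecker-agent:       wget -q -O- http://localhost:3333/healthz (30s)
#   edge / staging:         curl/wget against the Caddy admin API :2019 (30s)
healthy true
healthy false
```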

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 19:39:35 +00:00
3b366ad96e Merge pull request 'fix: [nomad-prep] P3 — add load_secret() abstraction to lib/env.sh (#793)' (#808) from fix/issue-793 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 19:29:50 +00:00
Claude
aa298eb2ad fix: reorder test boilerplate to avoid duplicate-detection false positive
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 19:18:39 +00:00
Claude
9dbc43ab23 fix: [nomad-prep] P3 — add load_secret() abstraction to lib/env.sh (#793)
Some checks failed
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline failed
ci/woodpecker/pr/smoke-init Pipeline failed
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 19:15:50 +00:00
1d4e28843e Merge pull request 'fix: infra: _regen_file does not restore stash if generator fails — compose file lost at temp path (#784)' (#807) from fix/issue-784 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 19:06:36 +00:00
Claude
f90702f930 fix: infra: _regen_file does not restore stash if generator fails — compose file lost at temp path (#784)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 18:55:51 +00:00
defec3b255 Merge pull request 'fix: feat: consolidate secret stores — single granular secrets/*.enc, deprecate .env.vault.enc (#777)' (#806) from fix/issue-777 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 18:46:12 +00:00
Claude
88676e65ae fix: feat: consolidate secret stores — single granular secrets/*.enc, deprecate .env.vault.enc (#777)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 18:35:03 +00:00
a87dcdf40b Merge pull request 'chore: gardener housekeeping' (#805) from chore/gardener-20260415-1816 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 18:23:21 +00:00
b8cb8c5c32 Merge pull request 'fix: [nomad-prep] P0 — rename lib/vault.sh + vault/ to action-vault namespace (#792)' (#804) from fix/issue-792 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 18:22:49 +00:00
Claude
0937707fe5 chore: gardener housekeeping 2026-04-15
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
2026-04-15 18:16:44 +00:00
Claude
e9a018db5c fix: [nomad-prep] P0 — rename lib/vault.sh + vault/ to action-vault namespace (#792)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 18:16:32 +00:00
18190874ca Merge pull request 'fix: infra: edge-control install.sh overwrites /etc/caddy/Caddyfile with no carve-out for apex/static sites — landing page lost on install (#788)' (#791) from fix/issue-788 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 16:48:46 +00:00
Claude
5a2a9e1c74 fix: infra: edge-control install.sh overwrites /etc/caddy/Caddyfile with no carve-out for apex/static sites — landing page lost on install (#788)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 16:42:30 +00:00
182c40b9fc Merge pull request 'fix: bug: edge-control add_route targets non-existent Caddy server edge — registration succeeds in registry but traffic never routes (#789)' (#790) from fix/issue-789 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 16:37:19 +00:00
Claude
241ce96046 fix: remove invalid servers { name edge } Caddyfile directive
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
`name` is not a valid subdirective of the global `servers` block in
Caddyfile syntax — Caddy would reject the config on startup. The
dynamic server discovery in `_discover_server_name()` already handles
routing to the correct server regardless of its auto-generated name.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 16:31:09 +00:00
Claude
987413ab3a fix: bug: edge-control add_route targets non-existent Caddy server edge — registration succeeds in registry but traffic never routes (#789)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
- install.sh: use Caddy `servers { name edge }` global option so the
  emitted Caddyfile produces a predictably-named server
- lib/caddy.sh: add `_discover_server_name` that queries the admin API
  for the first server listening on :80/:443 — add_route and remove_route
  use dynamic discovery instead of hardcoding `/servers/edge/`
- lib/caddy.sh: add_route, remove_route, and reload_caddy now check HTTP
  status codes (≥400 → return 1 with error message) instead of only
  checking curl exit code
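A minimal sketch of the discovery helper (the jq expression assumes the usual shape of Caddy's `/config/apps/http/servers` JSON — a map of server name to `{listen: [...]}`; the real implementation may differ):

```shell
_discover_server_name() {
  # Return the name of the first HTTP server whose listen addresses
  # include :80 or :443, instead of hardcoding "edge".
  curl -fsS http://localhost:2019/config/apps/http/servers |
    jq -r '[to_entries[]
            | select(any(.value.listen[]?; test(":(80|443)$")))][0].key
           // empty'
}
```

`add_route`/`remove_route` would then build paths like `/config/apps/http/servers/$(_discover_server_name)/routes` and treat any admin-API status ≥ 400 as failure.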

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 16:24:24 +00:00
02e86c3589 Merge pull request 'fix: planner: replace direct push with pr-lifecycle (mirror architect ops flow) (#765)' (#787) from fix/issue-765 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 14:40:14 +00:00
Claude
175716a847 fix: planner: replace direct push with pr-lifecycle (mirror architect ops flow) (#765)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
Planner phase 5 pushed ops repo changes directly to main, which branch
protection blocks. Replace with the same PR-based flow architect uses:

- planner-run.sh: create branch planner/run-YYYY-MM-DD in ops repo before
  agent_run, then pr_create + pr_walk_to_merge after agent completes
- run-planner.toml: formula now pushes HEAD (the branch) instead of
  PRIMARY_BRANCH directly
- planner/AGENTS.md: update phase 5 description to reflect PR flow
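The flow above, sketched in plain git against a throwaway repo (branch naming from the commit; `pr_create` and `pr_walk_to_merge` are the repo's existing helpers, shown only as comments):

```shell
set -eu
ops=$(mktemp -d)                          # stand-in for the ops repo checkout
git -C "$ops" init -q

branch="planner/run-$(date +%Y-%m-%d)"    # one branch per planner run
git -C "$ops" checkout -q -b "$branch"    # created before agent_run
echo "sprint plan" > "$ops/PLAN.md"       # stand-in for agent_run's edits
git -C "$ops" add PLAN.md
git -C "$ops" -c user.email=planner@example.invalid -c user.name=planner \
  commit -qm "planner: sprint plan"

# A real run now pushes HEAD (the branch) and walks the PR to merge:
#   git push -u origin "$branch"
#   pr_create "$branch" && pr_walk_to_merge "$branch"
git -C "$ops" branch --show-current
```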

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 14:28:49 +00:00
d6c8fd8127 Merge pull request 'fix: feat: disinto secrets add — accept piped stdin for non-interactive imports (#776)' (#786) from fix/issue-776 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 14:19:47 +00:00
Claude
5dda6dc8e9 fix: feat: disinto secrets add — accept piped stdin for non-interactive imports (#776)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 14:08:28 +00:00
49cc870f54 Merge pull request 'fix: infra: deprecate tracked docker/Caddyfile — generate_caddyfile is the single source of truth (#771)' (#785) from fix/issue-771 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 11:40:44 +00:00
Claude
ec7bc8ff2c fix: infra: deprecate tracked docker/Caddyfile — generate_caddyfile is the single source of truth (#771)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
- Add docker/Caddyfile to .gitignore (generated artifact, not tracked)
- Document generate_caddyfile as canonical source in lib/generators.sh

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 11:29:56 +00:00
f27c66a7e0 Merge pull request 'fix: infra: disinto up should regenerate compose/Caddyfile from lib/generators.sh and reconcile orphans before docker compose up -d (#770)' (#783) from fix/issue-770 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 11:23:28 +00:00
Claude
53ce7ad475 fix: infra: disinto up should regenerate compose/Caddyfile from lib/generators.sh and reconcile orphans before docker compose up -d (#770)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
- Add `_regen_file` helper that idempotently regenerates a file: moves
  existing file aside, runs the generator, compares output byte-for-byte,
  and either restores the original (preserving mtime) or keeps the new
  version with a `.prev` backup.
- `disinto_up` now calls `generate_compose` and `generate_caddyfile`
  before bringing the stack up, ensuring generator changes are applied.
- Pass `--build --remove-orphans` to `docker compose up -d` so image
  rebuilds and orphan container cleanup happen automatically.
- Add `--no-regen` escape hatch that skips regeneration and prints a
  warning for operators debugging generators or testing hand-edits.
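The move-aside/compare/restore cycle can be sketched as follows (a simplified model of `_regen_file`, assuming the generator writes the target path; the failed-generator restore from #784 is included):

```shell
_regen_file() {
  # Usage: _regen_file <path> <generator-fn>. Regenerate <path> idempotently:
  # identical output restores the original (preserving mtime); changed output
  # keeps a .prev backup; a failed generator restores the stashed original.
  local path="$1" gen="$2" stash
  stash="${path}.stash.$$"
  [ -e "$path" ] && mv "$path" "$stash"

  if ! "$gen" "$path"; then
    [ -e "$stash" ] && mv "$stash" "$path"   # generator failed: restore (#784)
    return 1
  fi

  if [ -e "$stash" ] && cmp -s "$stash" "$path"; then
    mv "$stash" "$path"                      # byte-identical: keep original mtime
  elif [ -e "$stash" ]; then
    mv "$stash" "${path}.prev"               # changed: keep previous version
  fi
}
```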

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 11:12:38 +00:00
c644660bda Merge pull request 'fix: infra: CI broken on main — missing WOODPECKER_PLUGINS_PRIVILEGED server env + misplaced .woodpecker/ops-filer.yml in project repo (#779)' (#782) from fix/issue-779 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 11:07:27 +00:00
91f36b2692 Merge pull request 'chore: gardener housekeeping' (#781) from chore/gardener-20260415-1007 into main
Some checks failed
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/push/ops-filer Pipeline failed
2026-04-15 11:02:55 +00:00
Claude
a8d393f3bd fix: infra: CI broken on main — missing WOODPECKER_PLUGINS_PRIVILEGED server env + misplaced .woodpecker/ops-filer.yml in project repo (#779)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
Part 1: Add WOODPECKER_PLUGINS_PRIVILEGED to woodpecker service environment
in lib/generators.sh, defaulting to plugins/docker, overridable via .env.
Document the new key in .env.example.

Part 2: Delete .woodpecker/ops-filer.yml from project repo — it belongs in
the ops repo and references secrets that don't exist here. Full ops-side
filer setup deferred until sprint PRs need it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 10:56:39 +00:00
d0c0ef724a Merge pull request 'fix: infra: agents-llama (local-Qwen dev agent) is hand-added to docker-compose.yml — move into lib/generators.sh as a flagged service (#769)' (#780) from fix/issue-769 into main
Some checks failed
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/push/ops-filer Pipeline failed
2026-04-15 10:09:43 +00:00
Claude
539862679d chore: gardener housekeeping 2026-04-15
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
2026-04-15 10:07:41 +00:00
250788952f Merge pull request 'fix: feat: publish versioned agent images — compose should use image: not build: (#429)' (#775) from fix/issue-429 into main
Some checks failed
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/push/ops-filer Pipeline failed
2026-04-15 10:04:58 +00:00
Claude
0104ac06a8 fix: infra: agents-llama (local-Qwen dev agent) is hand-added to docker-compose.yml — move into lib/generators.sh as a flagged service (#769)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:58:44 +00:00
c71b6d4f95 ci: retrigger after WOODPECKER_PLUGINS_PRIVILEGED fix
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
2026-04-15 09:46:24 +00:00
Claude
92f19cb2b3 feat: publish versioned agent images — compose should use image: not build: (#429)
- Generated compose now uses `image: ghcr.io/disinto/{agents,edge}` instead
  of `build:` directives; `disinto init --build` restores local-build mode
- Add VOLUME declarations to agents, reproduce, and edge Dockerfiles
- Add CI pipeline (.woodpecker/publish-images.yml) to build and push images
  to ghcr.io/disinto on tag events
- Mount projects/, .env, and state/ into agents container for runtime config
- Skip pre-build binary download when compose uses registry images

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:24:05 +00:00
be463c5b43 Merge pull request 'fix: infra: edge service missing restart: unless-stopped in lib/generators.sh (#768)' (#774) from fix/issue-768 into main 2026-04-15 09:12:48 +00:00
Claude
0baac1a7d8 fix: infra: edge service missing restart: unless-stopped in lib/generators.sh (#768)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:03:26 +00:00
0db4c84818 Merge pull request 'chore: gardener housekeeping' (#767) from chore/gardener-20260415-0806 into main 2026-04-15 08:57:11 +00:00
378da77adf Merge pull request 'fix: bug: architect pitch prompt guardrail is prose-only — model bypasses "NEVER call Forgejo API" via Bash tool; fix via permission scoping + PR-driven sub-issue filing (#764)' (#766) from fix/issue-764 into main 2026-04-15 08:57:07 +00:00
Claude
fd9ba028bc chore: gardener housekeeping 2026-04-15
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
2026-04-15 08:06:14 +00:00
Claude
707aae287a fix: reuse forge_api_all from env.sh in sprint-filer.sh to avoid duplicate pagination code
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
The duplicate-detection CI step (baseline mode) flags new code blocks that
match existing patterns. filer_api_all reimplemented the same pagination
logic as forge_api_all in env.sh. Replace with a one-liner wrapper that
delegates to forge_api_all with FORGE_FILER_TOKEN.
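A sketch of that delegation (function names from the commit; `forge_api_all` taking the API path as its argument is an assumption):

```shell
# Delegate to the shared pagination helper instead of duplicating it —
# only the identity (token) differs for the filer bot.
filer_api_all() {
  FORGE_TOKEN="$FORGE_FILER_TOKEN" forge_api_all "$@"
}
```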

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 07:59:56 +00:00
Claude
0be36dd502 fix: address review — update architect/AGENTS.md, fix pagination and section targeting in sprint-filer.sh
Some checks failed
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline failed
ci/woodpecker/pr/smoke-init Pipeline failed
- architect/AGENTS.md: update responsibilities, state transitions, vision
  lifecycle, and execution sections to reflect read-only role and filer-bot
  architecture (#764)
- lib/sprint-filer.sh: add filer_api_all() paginated fetch helper; fix
  subissue_exists() and check_and_close_completed_visions() to paginate
  instead of using fixed limits that miss issues on large trackers
- lib/sprint-filer.sh: fix extract_vision_issue() to look specifically in
  the "## Vision issues" section before falling back to first #N in file

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 07:57:20 +00:00
Claude
2c9b8e386f fix: rename awk variable in_body to inbody to avoid smoke test false positive
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
ci/woodpecker/pr/smoke-init Pipeline was successful
The agent-smoke.sh function resolution checker matches lowercase_underscore
identifiers as potential bash function calls. The awk variable `in_body`
inside sprint-filer.sh's heredoc triggered a false [undef] failure.
Also fixes SC2155 (declare and assign separately) in the same file.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 07:43:49 +00:00
Claude
04ff8a6e85 fix: bug: architect pitch prompt guardrail is prose-only — model bypasses "NEVER call Forgejo API" via Bash tool; fix via permission scoping + PR-driven sub-issue filing (#764)
Some checks failed
ci/woodpecker/push/ci Pipeline failed
ci/woodpecker/pr/ci Pipeline failed
ci/woodpecker/pr/smoke-init Pipeline failed
Shift the guardrail from prose prompt constraints into Forgejo's permission
layer. architect-bot loses all write access on the project repo (now read-only
for context gathering). Sub-issues are produced by a new filer-bot identity
that runs only after a human merges a sprint PR on the ops repo.

Changes:
- architect-run.sh: remove all project-repo writes (add_inprogress_label,
  close_vision_issue, check_and_close_completed_visions); add ## Sub-issues
  block to pitch format with filer:begin/end markers
- formulas/run-architect.toml: add Sub-issues schema to pitch format; strip
  issue-creation API refs; document read-only constraint on project repo
- lib/formula-session.sh: remove Create issue curl template from
  build_prompt_footer (architect cannot create issues)
- lib/sprint-filer.sh (new): parser + idempotent filer using FORGE_FILER_TOKEN;
  parses filer:begin/end blocks, creates issues with decomposed-from markers,
  adds in-progress label, handles vision lifecycle closure
- .woodpecker/ops-filer.yml (new): CI pipeline on ops repo main-branch push
  that invokes sprint-filer.sh after sprint PR merge
- lib/env.sh, .env.example, docker-compose.yml: add FORGE_FILER_TOKEN for
  filer-bot identity; add filer-bot to FORGE_BOT_USERNAMES
- AGENTS.md: add Filer agent entry; update in-progress label docs
- .woodpecker/agent-smoke.sh: register sprint-filer.sh for smoke test
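The parser's core job — pulling the payload out of the filer:begin/end markers — can be sketched with awk (marker spelling from the commit; the surrounding marker syntax is an assumption):

```shell
# Print only the lines strictly between filer:begin and filer:end, so the
# ops-side filer can turn a merged pitch's Sub-issues block into issues.
# (Variable deliberately not lowercase_underscore — see the smoke-test
# false positive fixed in 2c9b8e386f.)
extract_filer_block() {
  awk '/filer:end/{inblock=0} inblock{print} /filer:begin/{inblock=1}' "$1"
}
```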

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 07:41:16 +00:00
10c7a88416 Merge pull request 'fix: bug: architect FORGE_TOKEN override nullified when env.sh re-sources .env — agent actions authored as dev-bot (#762)' (#763) from fix/issue-762 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 07:29:53 +00:00
Claude
66ba93a840 fix: add allowlist entry for standard lib source block in duplicate detection
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
The FORGE_TOKEN_OVERRIDE fix shifted line numbers in agent run scripts,
causing the shared source block (env.sh, formula-session.sh, worktree.sh,
guard.sh, agent-sdk.sh) to register as a new duplicate. This is
intentional boilerplate shared across all formula-driven agents.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 07:18:42 +00:00
Claude
aff9f0fcef fix: bug: architect FORGE_TOKEN override nullified when env.sh re-sources .env — agent actions authored as dev-bot (#762)
Some checks failed
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline failed
Use FORGE_TOKEN_OVERRIDE (set before sourcing env.sh) instead of
post-source FORGE_TOKEN reassignment in all five agent run scripts.
The override mechanism in lib/env.sh:98-100 survives re-sourcing from
nested shells and claude -p tool invocations.

Affected scripts: architect-run.sh, planner-run.sh, gardener-run.sh,
predictor-run.sh, supervisor-run.sh.
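The mechanism can be modeled without the real lib: the stand-in `env_sh` function below plays the role of sourcing lib/env.sh, which reloads FORGE_TOKEN from .env on every source (a sketch; the real env.sh logic lives at lib/env.sh:98-100):

```shell
# Stand-in for `source lib/env.sh`: reloads FORGE_TOKEN from .env, then
# applies the override if one was exported before sourcing.
env_sh() {
  FORGE_TOKEN="dev-bot-token"                          # re-read from .env
  FORGE_TOKEN="${FORGE_TOKEN_OVERRIDE:-$FORGE_TOKEN}"  # override survives
}

FORGE_TOKEN_OVERRIDE="architect-bot-token"   # set BEFORE sourcing
env_sh                                       # initial source
env_sh                                       # re-source from a nested shell
echo "$FORGE_TOKEN"
```

A post-source `FORGE_TOKEN=...` reassignment would be clobbered by the second `env_sh`; the override is not.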

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 07:15:28 +00:00
c7a1c444e9 Merge pull request 'fix: feat: collect-engagement formula + container script — SSH fetch + local parse + evidence commit (#745)' (#761) from fix/issue-745 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 07:04:15 +00:00
Claude
8a5537fefc fix: feat: collect-engagement formula + container script — SSH fetch + local parse + evidence commit (#745)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 07:01:37 +00:00
34fd7868e4 Merge pull request 'chore: gardener housekeeping' (#760) from chore/gardener-20260415-0408 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 06:53:12 +00:00
Claude
0b4905af3d chore: gardener housekeeping 2026-04-15
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
2026-04-15 04:08:04 +00:00
cdb0408466 Merge pull request 'chore: gardener housekeeping' (#759) from chore/gardener-20260415-0300 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 03:03:27 +00:00
Claude
32420c619d chore: gardener housekeeping 2026-04-15
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
2026-04-15 03:00:40 +00:00
3757d9d919 Merge pull request 'chore: gardener housekeeping' (#757) from chore/gardener-20260414-2254 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-15 02:02:49 +00:00
b95e2da645 Merge pull request 'fix: docs: rent-a-human instructions for Caddy host SSH key setup (#748)' (#756) from fix/issue-748 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-14 22:56:05 +00:00
Claude
5733a10858 chore: gardener housekeeping 2026-04-14
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
2026-04-14 22:54:30 +00:00
Claude
9b0ecc40dc fix: docs: rent-a-human instructions for Caddy host SSH key setup (#748)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 22:50:20 +00:00
ba3a11fa9d Merge pull request 'fix: bug: entrypoint.sh wait (no-args) serializes polling loop behind long-lived dev-agent/gardener — causes system-wide deadlock (#753)' (#755) from fix/issue-753 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-14 22:43:49 +00:00
Claude
6af8f002f5 fix: bug: entrypoint.sh wait (no-args) serializes polling loop behind long-lived dev-agent/gardener — causes system-wide deadlock (#753)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 22:37:24 +00:00
c5b0b1dc23 Merge pull request 'fix: investigation: CI exhaustion pattern on chat sub-issues #707 and #712 — 3+ failures each (#742)' (#754) from fix/issue-742 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-14 22:05:36 +00:00
Claude
a08d87d0f3 fix: investigation: CI exhaustion pattern on chat sub-issues #707 and #712 — 3+ failures each (#742)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
Two bugs in agent-smoke.sh caused non-deterministic CI failures:

1. SIGPIPE race with pipefail: `printf | grep -q` fails when grep closes
   the pipe early after finding a match, causing printf to get SIGPIPE
   (exit 141). With pipefail, the pipeline returns non-zero even though
   grep succeeded — producing false "undef" failures. Fixed by using
   here-strings (<<<) instead of pipes for all grep checks.

2. Incomplete LIB_FUNS: hand-maintained REQUIRED_LIBS list (11 files)
   didn't cover all 26 lib/*.sh files, silently producing a partial
   function list. Fixed by enumerating all lib/*.sh in stable
   lexicographic order (LC_ALL=C sort), excluding only standalone
   scripts (ci-debug.sh, parse-deps.sh).
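The SIGPIPE race is easy to model: `grep -q` exits at the first match and closes the pipe while the writer is still going. A sketch contrasting the two forms (bash, as in the CI scripts):

```shell
set -o pipefail

# Racy form (kept as a comment — it intermittently returns 141):
#   printf '%s\n' "$big" | grep -q needle

# Fixed form: a here-string hands grep the whole buffer up front, so no
# writer is left behind to catch SIGPIPE when grep exits early.
big="needle
$(head -c 200000 /dev/zero | tr '\0' x)"
grep -q needle <<<"$big" && echo found
```

The here-string keeps pipefail's strictness for real failures while eliminating the false 141s.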

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 22:04:43 +00:00
59717558d4 Merge pull request 'fix: fix: format-detection guard in collect-engagement.sh — fail loudly on non-JSON logs (#746)' (#752) from fix/issue-746 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-14 21:52:18 +00:00
409a796556 Merge pull request 'chore: gardener housekeeping' (#751) from chore/gardener-20260414-2024 into main
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
2026-04-14 21:50:15 +00:00
Claude
7f2198cc76 fix: format-detection guard in collect-engagement.sh — fail loudly on non-JSON logs (#746)
All checks were successful
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/pr/ci Pipeline was successful
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 20:25:53 +00:00
65 changed files with 2568 additions and 661 deletions

@@ -1,8 +1,7 @@
-# Secrets — prevent .env files from being baked into the image
+# Secrets — prevent .env files and encrypted secrets from being baked into the image
 .env
 .env.enc
-.env.vault
-.env.vault.enc
+secrets/
 # Version control — .git is huge and not needed in image
 .git

@@ -45,7 +45,9 @@ FORGE_PREDICTOR_TOKEN= # [SECRET] predictor-bot API token
 FORGE_PREDICTOR_PASS= # [SECRET] predictor-bot password for git HTTP push
 FORGE_ARCHITECT_TOKEN= # [SECRET] architect-bot API token
 FORGE_ARCHITECT_PASS= # [SECRET] architect-bot password for git HTTP push
-FORGE_BOT_USERNAMES=dev-bot,review-bot,planner-bot,gardener-bot,vault-bot,supervisor-bot,predictor-bot,architect-bot
+FORGE_FILER_TOKEN= # [SECRET] filer-bot API token (issues:write on project repo only)
+FORGE_FILER_PASS= # [SECRET] filer-bot password for git HTTP push
+FORGE_BOT_USERNAMES=dev-bot,review-bot,planner-bot,gardener-bot,vault-bot,supervisor-bot,predictor-bot,architect-bot,filer-bot

 # ── Backwards compatibility ───────────────────────────────────────────────
 # If CODEBERG_TOKEN is set but FORGE_TOKEN is not, env.sh falls back to
@@ -61,6 +63,10 @@ FORGE_BOT_USERNAMES=dev-bot,review-bot,planner-bot,gardener-bot,vault-bot,superv
 WOODPECKER_TOKEN= # [SECRET] Woodpecker API token
 WOODPECKER_SERVER=http://localhost:8000 # [CONFIG] Woodpecker server URL
 WOODPECKER_AGENT_SECRET= # [SECRET] shared secret for server↔agent auth (auto-generated)
+# Woodpecker privileged-plugin allowlist — comma-separated image names
+# Add plugins/docker (and others) here to allow privileged execution
+WOODPECKER_PLUGINS_PRIVILEGED=plugins/docker
+
 # WOODPECKER_REPO_ID — now per-project, set in projects/*.toml [ci] section

 # Woodpecker Postgres (for direct DB queries)
@ -77,24 +83,42 @@ FORWARD_AUTH_SECRET= # [SECRET] Shared secret for Caddy ↔
# ── Vault-only secrets (DO NOT put these in .env) ────────────────────────
# These tokens grant access to external systems (GitHub, ClawHub, deploy targets).
# They live ONLY in .env.vault.enc and are injected into the ephemeral runner
# container at fire time (#745). lib/env.sh explicitly unsets them so agents
# can never hold them directly — all external actions go through vault dispatch.
# They live ONLY in secrets/<NAME>.enc (age-encrypted, one file per key) and are
# decrypted into the ephemeral runner container at fire time (#745, #777).
# lib/env.sh explicitly unsets them so agents can never hold them directly —
# all external actions go through vault dispatch.
#
# GITHUB_TOKEN — GitHub API access (publish, deploy, post)
# CLAWHUB_TOKEN — ClawHub registry credentials (publish)
# CADDY_SSH_KEY — SSH key for Caddy log collection
# (deploy keys) — SSH keys for deployment targets
#
# To manage vault secrets: disinto secrets edit-vault
# (vault redesign in progress: PR-based approval, see #73-#77)
# To manage secrets: disinto secrets add/show/remove/list
# ── Project-specific secrets ──────────────────────────────────────────────
# Store all project secrets here so formulas reference env vars, never hardcode.
BASE_RPC_URL= # [SECRET] on-chain RPC endpoint
# ── Local Qwen dev agent (optional) ──────────────────────────────────────
# Set ENABLE_LLAMA_AGENT=1 to emit agents-llama in docker-compose.yml.
# Requires a running llama-server reachable at ANTHROPIC_BASE_URL.
# See docs/agents-llama.md for details.
ENABLE_LLAMA_AGENT=0 # [CONFIG] 1 = enable agents-llama service
ANTHROPIC_BASE_URL= # [CONFIG] e.g. http://host.docker.internal:8081
# ── Tuning ────────────────────────────────────────────────────────────────
CLAUDE_TIMEOUT=7200 # [CONFIG] max seconds per Claude invocation
# ── Host paths (Nomad-portable) ────────────────────────────────────────────
# These env vars externalize host-side bind-mount paths from docker-compose.yml.
# At cutover, Nomad jobspecs reference the same vars — no path translation.
# Defaults point at current paths so an empty .env override still works.
CLAUDE_BIN_DIR=/usr/local/bin/claude # [CONFIG] host path to claude CLI binary (resolved by `disinto init`)
CLAUDE_CONFIG_FILE=${HOME}/.claude.json # [CONFIG] host path to claude config JSON file
CLAUDE_DIR=${HOME}/.claude # [CONFIG] host path to .claude directory (reproduce/edge)
AGENT_SSH_DIR=${HOME}/.ssh # [CONFIG] host path to SSH keys directory
SOPS_AGE_DIR=${HOME}/.config/sops/age # [CONFIG] host path to SOPS age key directory
# ── Claude Code shared OAuth state ─────────────────────────────────────────
# Shared directory used by every factory container so Claude Code's internal
# proper-lockfile-based OAuth refresh lock works across containers. Both

.gitignore

@ -3,7 +3,6 @@
# Encrypted secrets — safe to commit (SOPS-encrypted with age)
!.env.enc
!.env.vault.enc
!.sops.yaml
# Per-box project config (generated by disinto init)
@ -33,6 +32,9 @@ docker/agents/bin/
# Note: This file is now committed to track volume mount configuration
# docker-compose.yml
# Generated Caddyfile — single source of truth is generate_caddyfile in lib/generators.sh
docker/Caddyfile
# Python bytecode
__pycache__/
*.pyc


@ -98,50 +98,38 @@ echo "syntax check done"
echo "=== 2/2 Function resolution ==="
# Required lib files for LIB_FUNS construction. Missing any of these means the
# checkout is incomplete or the test is misconfigured — fail loudly, do NOT
# silently produce a partial LIB_FUNS list (that masquerades as "undef" errors
# in unrelated scripts; see #600).
REQUIRED_LIBS=(
lib/agent-sdk.sh lib/env.sh lib/ci-helpers.sh lib/load-project.sh
lib/secret-scan.sh lib/formula-session.sh lib/mirrors.sh lib/guard.sh
lib/pr-lifecycle.sh lib/issue-lifecycle.sh lib/worktree.sh
)
for f in "${REQUIRED_LIBS[@]}"; do
if [ ! -f "$f" ]; then
printf 'FAIL [missing-lib] expected %s but it is not present at smoke time\n' "$f" >&2
printf ' pwd=%s\n' "$(pwd)" >&2
printf ' ls lib/=%s\n' "$(ls lib/ 2>&1 | tr '\n' ' ')" >&2
echo '=== SMOKE TEST FAILED (precondition) ===' >&2
exit 2
fi
done
# Functions provided by shared lib files (available to all agent scripts via source).
# Enumerate ALL lib/*.sh files in stable lexicographic order (#742).
# Previous approach used a hand-maintained REQUIRED_LIBS list, which silently
# became incomplete as new libs were added, producing partial LIB_FUNS that
# caused non-deterministic "undef" failures.
#
# Included — these are inline-sourced by agent scripts:
# lib/env.sh — sourced by every agent (log, forge_api, etc.)
# lib/agent-sdk.sh — sourced by SDK agents (agent_run, agent_recover_session)
# lib/ci-helpers.sh — sourced by pollers and review (ci_passed, classify_pipeline_failure, etc.)
# lib/load-project.sh — sourced by env.sh when PROJECT_TOML is set
# lib/secret-scan.sh — standalone CLI tool, run directly (not sourced)
# lib/formula-session.sh — sourced by formula-driven agents (acquire_run_lock, check_memory, etc.)
# lib/mirrors.sh — sourced by merge sites (mirror_push)
# lib/guard.sh — sourced by all polling-loop entry points (check_active)
# lib/issue-lifecycle.sh — sourced by agents for issue claim/release/block/deps
# lib/worktree.sh — sourced by agents for worktree create/recover/cleanup/preserve
#
# Excluded — not sourced inline by agents:
# lib/tea-helpers.sh — sourced conditionally by env.sh (tea_file_issue, etc.); checked standalone below
# Excluded from LIB_FUNS (not sourced inline by agents):
# lib/ci-debug.sh — standalone CLI tool, run directly (not sourced)
# lib/parse-deps.sh — executed via `bash lib/parse-deps.sh` (not sourced)
# lib/hooks/*.sh — Claude Code hook scripts, executed by the harness (not sourced)
#
# If a new lib file is added and sourced by agents, add it to LIB_FUNS below
# and add a check_script call for it in the lib files section further down.
EXCLUDED_LIBS="lib/ci-debug.sh lib/parse-deps.sh"
# Build the list of lib files in deterministic order (LC_ALL=C sort).
# Fail loudly if no lib files are found — checkout is broken.
mapfile -t ALL_LIBS < <(LC_ALL=C find lib -maxdepth 1 -name '*.sh' -print | LC_ALL=C sort)
if [ "${#ALL_LIBS[@]}" -eq 0 ]; then
echo 'FAIL [no-libs] no lib/*.sh files found at smoke time' >&2
printf ' pwd=%s\n' "$(pwd)" >&2
echo '=== SMOKE TEST FAILED (precondition) ===' >&2
exit 2
fi
# Build LIB_FUNS from all non-excluded lib files.
# Use set -e inside the subshell so a failed get_fns aborts loudly
# instead of silently shrinking the function list.
LIB_FUNS=$(
for f in "${REQUIRED_LIBS[@]}"; do get_fns "$f"; done | sort -u
set -e
for f in "${ALL_LIBS[@]}"; do
# shellcheck disable=SC2086
skip=0; for ex in $EXCLUDED_LIBS; do [ "$f" = "$ex" ] && skip=1; done
[ "$skip" -eq 1 ] && continue
get_fns "$f"
done | sort -u
)
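The loops above call a `get_fns` helper defined elsewhere in the smoke test. As a rough illustration only, a minimal function-name extractor could look like this sketch (the repo's actual `get_fns` is not shown in this diff and may differ):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Hypothetical sketch of a get_fns-style helper: list shell function
# names defined in `name() {` form in a file. Illustrative only.
get_fns() {
  grep -oE '^[[:space:]]*[A-Za-z_][A-Za-z0-9_]*[[:space:]]*\(\)' "$1" \
    | sed -E 's/[[:space:]]//g; s/\(\)$//'
}

# Demo input with two placeholder function definitions.
printf 'foo() {\n  :\n}\nbar_t() {\n  :\n}\n' > /tmp/demo.sh
get_fns /tmp/demo.sh
# prints:
#   foo
#   bar_t
```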
# Known external commands and shell builtins — never flag these
@ -192,13 +180,14 @@ check_script() {
while IFS= read -r fn; do
[ -z "$fn" ] && continue
is_known_cmd "$fn" && continue
if ! printf '%s\n' "$all_fns" | grep -qxF "$fn"; then
# Use here-string (<<<) instead of pipe to avoid SIGPIPE race (#742):
# with pipefail, `printf | grep -q` can fail when grep closes the pipe
# early after finding a match, causing printf to get SIGPIPE (exit 141).
# This produced non-deterministic false "undef" failures.
if ! grep -qxF "$fn" <<< "$all_fns"; then
printf 'FAIL [undef] %s: %s\n' "$script" "$fn"
# Diagnostic dump (#600): if the function is expected to be in a known lib,
# print what the actual all_fns set looks like so we can tell whether the
# function is genuinely missing or whether the resolution loop is broken.
printf ' all_fns count: %d\n' "$(printf '%s\n' "$all_fns" | wc -l)"
printf ' LIB_FUNS contains "%s": %s\n' "$fn" "$(printf '%s\n' "$LIB_FUNS" | grep -cxF "$fn")"
printf ' all_fns count: %d\n' "$(grep -c . <<< "$all_fns")"
printf ' LIB_FUNS contains "%s": %s\n' "$fn" "$(grep -cxF "$fn" <<< "$LIB_FUNS")"
printf ' defining lib (if any): %s\n' "$(grep -l "^[[:space:]]*${fn}[[:space:]]*()" lib/*.sh 2>/dev/null | tr '\n' ' ')"
FAILED=1
fi
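The SIGPIPE hazard described in the comments above can be shown in miniature. This is an illustrative sketch, not code from the repo; `fn_a`/`fn_b`/`fn_c` are placeholder names:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Under pipefail, `printf '%s\n' "$all_fns" | grep -q pat` can exit 141:
# grep -q closes its end of the pipe as soon as it matches, and printf
# may then be killed with SIGPIPE. Feeding grep via a here-string keeps
# the producer out of the pipeline entirely, so no SIGPIPE is possible.
all_fns=$'fn_a\nfn_b\nfn_c'

if grep -qxF "fn_b" <<< "$all_fns"; then
  echo "found"
fi
```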
@ -224,6 +213,7 @@ check_script lib/issue-lifecycle.sh lib/secret-scan.sh
# Still checked for function resolution against LIB_FUNS + own definitions.
check_script lib/ci-debug.sh
check_script lib/parse-deps.sh
check_script lib/sprint-filer.sh
# Agent scripts — list cross-sourced files where function scope flows across files.
check_script dev/dev-agent.sh


@ -292,6 +292,8 @@ def main() -> int:
"21aec56a99d5252b23fb9a38b895e8e8": "Verification helper: check body for Decomposed from pattern",
"60ea98b3604557d539193b2a6624e232": "Verification helper: append sub-issue number",
"9f6ae8e7811575b964279d8820494eb0": "Verification helper: for loop done pattern",
# Standard lib source block shared across formula-driven agent run scripts
"330e5809a00b95ade1a5fce2d749b94b": "Standard lib source block (env.sh, formula-session.sh, worktree.sh, guard.sh, agent-sdk.sh)",
}
if not sh_files:


@ -0,0 +1,64 @@
# .woodpecker/publish-images.yml — Build and push versioned container images
# Triggered on tag pushes (e.g. v1.2.3). Builds and pushes:
# - ghcr.io/disinto/agents:<tag>
# - ghcr.io/disinto/reproduce:<tag>
# - ghcr.io/disinto/edge:<tag>
#
# Requires GHCR_TOKEN secret configured in Woodpecker with push access
# to ghcr.io/disinto.
when:
event: tag
ref: refs/tags/v*
clone:
git:
image: alpine/git
commands:
- AUTH_URL=$(printf '%s' "$CI_REPO_CLONE_URL" | sed "s|://|://token:$FORGE_TOKEN@|")
- git clone --depth 1 "$AUTH_URL" .
- git fetch --depth 1 origin "$CI_COMMIT_REF"
- git checkout FETCH_HEAD
steps:
- name: build-and-push-agents
image: plugins/docker
settings:
repo: ghcr.io/disinto/agents
registry: ghcr.io
dockerfile: docker/agents/Dockerfile
context: .
tags:
- ${CI_COMMIT_TAG}
- latest
username: disinto
password:
from_secret: GHCR_TOKEN
- name: build-and-push-reproduce
image: plugins/docker
settings:
repo: ghcr.io/disinto/reproduce
registry: ghcr.io
dockerfile: docker/reproduce/Dockerfile
context: .
tags:
- ${CI_COMMIT_TAG}
- latest
username: disinto
password:
from_secret: GHCR_TOKEN
- name: build-and-push-edge
image: plugins/docker
settings:
repo: ghcr.io/disinto/edge
registry: ghcr.io
dockerfile: docker/edge/Dockerfile
context: docker/edge
tags:
- ${CI_COMMIT_TAG}
- latest
username: disinto
password:
from_secret: GHCR_TOKEN


@ -0,0 +1,68 @@
#!/usr/bin/env bash
set -euo pipefail
# run-secret-scan.sh — CI wrapper for lib/secret-scan.sh
#
# Scans files changed in this PR for plaintext secrets.
# Exits non-zero if any secret is detected.
# shellcheck source=../lib/secret-scan.sh
source lib/secret-scan.sh
# Path patterns considered secret-adjacent
SECRET_PATH_PATTERNS=(
'\.env'
'tools/vault-.*\.sh'
'nomad/'
'vault/'
'action-vault/'
'lib/hvault\.sh'
'lib/action-vault\.sh'
)
# Build a single regex from patterns
path_regex=$(printf '%s|' "${SECRET_PATH_PATTERNS[@]}")
path_regex="${path_regex%|}"
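The pattern-join idiom above can be seen in isolation. A small sketch with sample patterns (not the production list):

```shell
#!/usr/bin/env bash
set -euo pipefail
# printf repeats its format once per argument, appending '|' after each
# pattern; the trailing '|' is then stripped with ${var%|}.
patterns=('\.env' 'nomad/' 'vault/')
regex=$(printf '%s|' "${patterns[@]}")
regex="${regex%|}"
echo "$regex"   # prints: \.env|nomad/|vault/
```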
# Get files changed in this PR vs target branch.
# Note: shallow clone (depth 50) may lack the merge base for very large PRs,
# causing git diff to fail — || true means the gate skips rather than blocks.
changed_files=$(git diff --name-only --diff-filter=ACMR "origin/${CI_COMMIT_TARGET_BRANCH}...HEAD" || true)
if [ -z "$changed_files" ]; then
echo "secret-scan: no changed files found, skipping"
exit 0
fi
# Filter to secret-adjacent paths only
target_files=$(printf '%s\n' "$changed_files" | grep -E "$path_regex" || true)
if [ -z "$target_files" ]; then
echo "secret-scan: no secret-adjacent files changed, skipping"
exit 0
fi
echo "secret-scan: scanning $(printf '%s\n' "$target_files" | wc -l) file(s):"
printf ' %s\n' "$target_files"
failures=0
while IFS= read -r file; do
# Skip deleted files / non-existent
[ -f "$file" ] || continue
# Skip binary files
file -b --mime-encoding "$file" 2>/dev/null | grep -q binary && continue
content=$(cat "$file")
if ! scan_for_secrets "$content"; then
echo "FAIL: secret detected in $file"
failures=$((failures + 1))
fi
done <<< "$target_files"
if [ "$failures" -gt 0 ]; then
echo ""
echo "secret-scan: $failures file(s) contain potential secrets — merge blocked"
echo "If these are false positives, verify patterns in lib/secret-scan.sh"
exit 1
fi
echo "secret-scan: all files clean"


@ -0,0 +1,32 @@
# .woodpecker/secret-scan.yml — Block PRs that leak plaintext secrets
#
# Triggers on pull requests touching secret-adjacent paths.
# Sources lib/secret-scan.sh and scans each changed file's content.
# Exits non-zero if any potential secret is detected.
when:
- event: pull_request
path:
- ".env*"
- "tools/vault-*.sh"
- "nomad/**/*"
- "vault/**/*"
- "action-vault/**/*"
- "lib/hvault.sh"
- "lib/action-vault.sh"
clone:
git:
image: alpine/git
commands:
- AUTH_URL=$(printf '%s' "$CI_REPO_CLONE_URL" | sed "s|://|://token:$FORGE_TOKEN@|")
- git clone --depth 50 "$AUTH_URL" .
- git fetch --depth 50 origin "$CI_COMMIT_REF" "$CI_COMMIT_TARGET_BRANCH"
- git checkout FETCH_HEAD
steps:
- name: secret-scan
image: alpine:3
commands:
- apk add --no-cache bash git grep file
- bash .woodpecker/run-secret-scan.sh


@ -1,4 +1,4 @@
<!-- last-reviewed: 4e53f508d9b36c60bd68ed5fc497fc8775fec79f -->
<!-- last-reviewed: 18190874cae869527f675f717423ded735f2c555 -->
# Disinto — Agent Instructions
## What this repo is
@ -31,11 +31,11 @@ disinto/ (code repo)
├── supervisor/ supervisor-run.sh — formula-driven health monitoring (polling-loop executor)
│ preflight.sh — pre-flight data collection for supervisor formula
├── architect/ architect-run.sh — strategic decomposition of vision into sprints
├── vault/ vault-env.sh — shared env setup (vault redesign in progress, see #73-#77)
├── action-vault/ vault-env.sh — shared env setup (vault redesign in progress, see #73-#77)
│ SCHEMA.md — vault item schema documentation
│ validate.sh — vault item validator
│ examples/ — example vault action TOMLs (promote, publish, release, webhook-call)
├── lib/ env.sh, agent-sdk.sh, ci-helpers.sh, ci-debug.sh, load-project.sh, parse-deps.sh, guard.sh, mirrors.sh, pr-lifecycle.sh, issue-lifecycle.sh, worktree.sh, formula-session.sh, stack-lock.sh, forge-setup.sh, forge-push.sh, ops-setup.sh, ci-setup.sh, generators.sh, hire-agent.sh, release.sh, build-graph.py, branch-protection.sh, secret-scan.sh, tea-helpers.sh, vault.sh, ci-log-reader.py, git-creds.sh
├── lib/ env.sh, agent-sdk.sh, ci-helpers.sh, ci-debug.sh, load-project.sh, parse-deps.sh, guard.sh, mirrors.sh, pr-lifecycle.sh, issue-lifecycle.sh, worktree.sh, formula-session.sh, stack-lock.sh, forge-setup.sh, forge-push.sh, ops-setup.sh, ci-setup.sh, generators.sh, hire-agent.sh, release.sh, build-graph.py, branch-protection.sh, secret-scan.sh, tea-helpers.sh, action-vault.sh, ci-log-reader.py, git-creds.sh, sprint-filer.sh
│ hooks/ — Claude Code session hooks (on-compact-reinject, on-idle-stop, on-phase-change, on-pretooluse-guard, on-session-end, on-stop-failure)
├── projects/ *.toml.example — templates; *.toml — local per-box config (gitignored)
├── formulas/ Issue templates (TOML specs for multi-step agent tasks)
@ -86,7 +86,7 @@ Each agent has a `.profile` repository on Forgejo storing `knowledge/lessons-lea
- All scripts start with `#!/usr/bin/env bash` and `set -euo pipefail`
- Source shared environment: `source "$(dirname "$0")/../lib/env.sh"`
- Log to `$LOGFILE` using the `log()` function from env.sh or defined locally
- Never hardcode secrets — agent secrets come from `.env.enc`, vault secrets from `.env.vault.enc` (or `.env`/`.env.vault` fallback)
- Never hardcode secrets — agent secrets come from `.env.enc`, vault secrets from `secrets/<NAME>.enc` (age-encrypted, one file per key)
- Never embed secrets in issue bodies, PR descriptions, or comments — use env var references (e.g. `$BASE_RPC_URL`)
- ShellCheck must pass (CI runs `shellcheck` on all `.sh` files)
- Avoid duplicate code — shared helpers go in `lib/`
@ -113,10 +113,12 @@ bash dev/phase-test.sh
| Supervisor | `supervisor/` | Health monitoring | [supervisor/AGENTS.md](supervisor/AGENTS.md) |
| Planner | `planner/` | Strategic planning | [planner/AGENTS.md](planner/AGENTS.md) |
| Predictor | `predictor/` | Infrastructure pattern detection | [predictor/AGENTS.md](predictor/AGENTS.md) |
| Architect | `architect/` | Strategic decomposition | [architect/AGENTS.md](architect/AGENTS.md) |
| Architect | `architect/` | Strategic decomposition (read-only on project repo) | [architect/AGENTS.md](architect/AGENTS.md) |
| Filer | `lib/sprint-filer.sh` | Sub-issue filing from merged sprint PRs | ops repo pipeline (deferred, see #779) |
| Reproduce | `docker/reproduce/` | Bug reproduction using Playwright MCP | `formulas/reproduce.toml` |
| Triage | `docker/reproduce/` | Deep root cause analysis | `formulas/triage.toml` |
| Edge dispatcher | `docker/edge/` | Polls ops repo for vault actions, executes via Claude sessions | `docker/edge/dispatcher.sh` |
| agents-llama | `docker/agents/` (same image) | Local-Qwen dev agent (`AGENT_ROLES=dev`), gated on `ENABLE_LLAMA_AGENT=1` | [docs/agents-llama.md](docs/agents-llama.md) |
> **Vault:** Being redesigned as a PR-based approval workflow (issues #73-#77).
> See [docs/VAULT.md](docs/VAULT.md) for the vault PR workflow details.
@ -135,7 +137,7 @@ Issues flow: `backlog` → `in-progress` → PR → CI → review → merge →
|---|---|---|
| `backlog` | Issue is queued for implementation. Dev-poll picks the first ready one. | Planner, gardener, humans |
| `priority` | Queue tier above plain backlog. Issues with both `priority` and `backlog` are picked before plain `backlog` issues. FIFO within each tier. | Planner, humans |
| `in-progress` | Dev-agent is actively working on this issue. Only one issue per project is in-progress at a time. | dev-agent.sh (claims issue) |
| `in-progress` | Dev-agent is actively working on this issue. Only one issue per project is in-progress at a time. Also set on vision issues by filer-bot when sub-issues are filed (#764). | dev-agent.sh (claims issue), filer-bot (vision issues) |
| `blocked` | Issue is stuck — agent session failed, crashed, timed out, or CI exhausted. Diagnostic comment on the issue has details. Also used for unmet dependencies. | dev-agent.sh, dev-poll.sh (on failure) |
| `tech-debt` | Pre-existing issue flagged by AI reviewer, not introduced by a PR. | review-pr.sh (auto-created follow-ups) |
| `underspecified` | Dev-agent refused the issue as too large or vague. | dev-poll.sh (on preflight `too_large`), dev-agent.sh (on mid-run `too_large` refusal) |
@ -177,8 +179,8 @@ Humans write these. Agents read and enforce them.
| AD-002 | **Concurrency is bounded per LLM backend, not per project.** One concurrent Claude session per OAuth credential pool; one concurrent session per llama-server instance. Containers with disjoint backends may run in parallel. | The single-thread invariant is about *backends*, not pipelines. **(a) Anthropic OAuth credentials race on token refresh** — each container uses a per-session `CLAUDE_CONFIG_DIR`, so Claude Code's native lockfile-based OAuth refresh handles contention automatically without external serialization. (Legacy: set `CLAUDE_EXTERNAL_LOCK=1` to re-enable the old `flock session.lock` wrapper for rollback.) **(b) llama-server has finite VRAM and one KV cache** — parallel inference thrashes the cache and risks OOM. All llama-backed agents serialize on the same lock. **(c) Disjoint backends are free to parallelize.** Today `disinto-agents` (Anthropic OAuth, runs `review,gardener`) runs concurrently with `disinto-agents-llama` (llama, runs `dev`) on the same project — they share neither OAuth state nor llama VRAM. **(d) Per-project work-conflict safety** (no duplicate dev work, no merge conflicts on the same branch) is enforced by `issue_claim` (assignee + `in-progress` label) and per-issue worktrees — that's a separate guard that does NOT depend on this AD. |
| AD-003 | The runtime creates and destroys, the formula preserves. | Runtime manages worktrees/sessions/temp. Formulas commit knowledge to git before signaling done. |
| AD-004 | Event-driven > polling > fixed delays. | Never `waitForTimeout` or hardcoded sleep. Use phase files, webhooks, or poll loops with backoff. |
| AD-005 | Secrets via env var indirection, never in issue bodies. | Issue bodies become code. Agent secrets go in `.env.enc`, vault secrets in `.env.vault.enc` (SOPS-encrypted when available; plaintext `.env`/`.env.vault` fallback supported). Referenced as `$VAR_NAME`. Runner gets only vault secrets; agents get only agent secrets. |
| AD-006 | External actions go through vault dispatch, never direct. | Agents build addressables; only the vault exercises them (publishes, deploys, posts). Tokens for external systems (`GITHUB_TOKEN`, `CLAWHUB_TOKEN`, deploy keys) live only in `.env.vault.enc` and are injected into the ephemeral runner container. `lib/env.sh` unsets them so agents never hold them. PRs with direct external actions without vault dispatch get REQUEST_CHANGES. (Vault redesign in progress: PR-based approval on ops repo, see #73-#77) |
| AD-005 | Secrets via env var indirection, never in issue bodies. | Issue bodies become code. Agent secrets go in `.env.enc` (SOPS-encrypted), vault secrets in `secrets/<NAME>.enc` (age-encrypted, one file per key). Referenced as `$VAR_NAME`. Runner gets only vault secrets; agents get only agent secrets. |
| AD-006 | External actions go through vault dispatch, never direct. | Agents build addressables; only the vault exercises them (publishes, deploys, posts). Tokens for external systems (`GITHUB_TOKEN`, `CLAWHUB_TOKEN`, deploy keys) live only in `secrets/<NAME>.enc` and are decrypted into the ephemeral runner container. `lib/env.sh` unsets them so agents never hold them. PRs with direct external actions without vault dispatch get REQUEST_CHANGES. (Vault redesign in progress: PR-based approval on ops repo, see #73-#77) |
**Who enforces what:**
- **Gardener** checks open backlog issues against ADs during grooming; closes violations with a comment referencing the AD number.
@ -186,8 +188,6 @@ Humans write these. Agents read and enforce them.
- **Dev-agent** reads AGENTS.md before implementing; refuses work that violates ADs.
- **AD-002 is a runtime invariant; nothing for the gardener to check at issue-groom time.** OAuth concurrency is handled by per-session `CLAUDE_CONFIG_DIR` isolation (with `CLAUDE_EXTERNAL_LOCK` as a rollback flag). Per-issue work is enforced by `issue_claim`. A violation manifests as a 401 or VRAM OOM in agent logs, not as a malformed issue.
---
## Phase-Signaling Protocol
When running as a persistent tmux session, Claude must signal the orchestrator
@ -196,5 +196,4 @@ at each phase boundary by writing to a phase file (e.g.
Key phases: `PHASE:awaiting_ci``PHASE:awaiting_review``PHASE:done`.
Also: `PHASE:escalate` (needs human input), `PHASE:failed`.
See [docs/PHASE-PROTOCOL.md](docs/PHASE-PROTOCOL.md) for the complete spec, orchestrator reaction matrix, sequence diagram, and crash recovery.


@ -50,7 +50,7 @@ blast_radius = "low" # optional: overrides policy.toml tier ("low"|"medium
## Secret Names
Secret names must be defined in `.env.vault.enc` on the ops repo. The vault validates that requested secrets exist in the allowlist before execution.
Secret names must have a corresponding `secrets/<NAME>.enc` file (age-encrypted). The vault validates that requested secrets exist in the allowlist before execution.
Common secret names:
- `CLAWHUB_TOKEN` - Token for ClawHub skill publishing

View file

@ -28,7 +28,7 @@ fi
# VAULT ACTION VALIDATION
# =============================================================================
# Allowed secret names - must match keys in .env.vault.enc
# Allowed secret names - must match files in secrets/<NAME>.enc
VAULT_ALLOWED_SECRETS="CLAWHUB_TOKEN GITHUB_TOKEN CODEBERG_TOKEN DEPLOY_KEY NPM_TOKEN DOCKER_HUB_TOKEN"
# Allowed mount aliases — well-known file-based credential directories

View file

@ -1,4 +1,4 @@
<!-- last-reviewed: c4ca1e930d7be3f95060971ce4fa949dab2f76e7 -->
<!-- last-reviewed: be463c5b439aec1ef0d4acfafc47e94896f5dc57 -->
# Architect — Agent Instructions
## What this agent is
@ -10,9 +10,9 @@ converses with humans through PR comments.
## Role
- **Input**: Vision issues from VISION.md, prerequisite tree from ops repo
- **Output**: Sprint proposals as PRs on the ops repo, sub-issue files
- **Output**: Sprint proposals as PRs on the ops repo (with embedded `## Sub-issues` blocks)
- **Mechanism**: Bash-driven orchestration in `architect-run.sh`, pitching formula via `formulas/run-architect.toml`
- **Identity**: `architect-bot` on Forgejo
- **Identity**: `architect-bot` on Forgejo (READ-ONLY on project repo, write on ops repo only — #764)
## Responsibilities
@ -24,16 +24,17 @@ converses with humans through PR comments.
acceptance criteria and dependencies
4. **Human conversation**: Respond to PR comments, refine sprint proposals based
on human feedback
5. **Sub-issue filing**: After design forks are resolved, file concrete sub-issues
for implementation
5. **Sub-issue definition**: Define concrete sub-issues in the `## Sub-issues`
block of the sprint spec. Filing is handled by `filer-bot` after sprint PR
merge (#764)
## Formula
The architect pitching is driven by `formulas/run-architect.toml`. This formula defines
the steps for:
- Research: analyzing vision items and prerequisite tree
- Pitch: creating structured sprint PRs
- Sub-issue filing: creating concrete implementation issues
- Pitch: creating structured sprint PRs with embedded `## Sub-issues` blocks
- Design Q&A: refining the sprint via PR comments after human ACCEPT
## Bash-driven orchestration
@ -57,22 +58,31 @@ APPROVED review → start design questions (model posts Q1:, adds Design forks s
Answers received → continue Q&A (model processes answers, posts follow-ups)
All forks resolved → sub-issue filing (model files implementation issues)
All forks resolved → finalize ## Sub-issues section in sprint spec
Sprint PR merged → filer-bot files sub-issues on project repo (#764)
REJECT review → close PR + journal (model processes rejection, bash merges PR)
```
### Vision issue lifecycle
Vision issues decompose into sprint sub-issues tracked via "Decomposed from #N" in sub-issue bodies. The architect automatically closes vision issues when all sub-issues are closed:
Vision issues decompose into sprint sub-issues. Sub-issues are defined in the
`## Sub-issues` block of the sprint spec (between `<!-- filer:begin -->` and
`<!-- filer:end -->` markers) and filed by `filer-bot` after the sprint PR merges
on the ops repo (#764).
1. Before picking new vision issues, the architect checks each open vision issue
2. For each, it queries merged sprint PRs — **only PRs whose title or body reference the specific vision issue** (matched via `#N` pattern, filtering out unrelated PRs that happen to close unrelated issues) (#735/#736)
3. Extracts sub-issue numbers from those PRs, excluding the vision issue itself
4. If all sub-issues are closed, posts a summary comment listing completed sub-issues (with an idempotency guard: checks both comment presence AND `.state == "closed"` — if the comment exists but the issue is still open, retries the close rather than returning early) (#737)
5. The vision issue is then closed automatically
Each filer-created sub-issue carries a `<!-- decomposed-from: #<vision>, sprint: <slug>, id: <id> -->`
marker in its body for idempotency and traceability.
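One way the marker could support an idempotency check is sketched below. This is an assumption for illustration; the actual logic lives in `lib/sprint-filer.sh`, which is not shown here, and the issue number and sprint slug are made up:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Hypothetical idempotency check: before filing, look for an existing
# sub-issue body carrying the same decomposed-from marker.
existing_body='<!-- decomposed-from: #42, sprint: auth-revamp, id: 3 -->'

if grep -q 'decomposed-from: #42, sprint: auth-revamp, id: 3' <<< "$existing_body"; then
  echo "already filed, skipping"
fi
```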
This ensures vision issues transition from `open``closed` once their work is complete, without manual intervention. The #N-scoped matching prevents false positives where unrelated sub-issues would incorrectly trigger vision issue closure.
The filer-bot (via `lib/sprint-filer.sh`) handles vision lifecycle:
1. After filing sub-issues, adds `in-progress` label to the vision issue
2. On each run, checks if all sub-issues for a vision are closed
3. If all closed, posts a summary comment and closes the vision issue
The architect no longer writes to the project repo — it is read-only (#764).
All project-repo writes (issue filing, label management, vision closure) are
handled by filer-bot with its narrowly-scoped `FORGE_FILER_TOKEN`.
### Session management
@ -86,6 +96,7 @@ Run via `architect/architect-run.sh`, which:
- Acquires a poll-loop lock (via `acquire_lock`) and checks available memory
- Cleans up per-issue scratch files from previous runs (`/tmp/architect-{project}-scratch-*.md`)
- Sources shared libraries (env.sh, formula-session.sh)
- Exports `FORGE_TOKEN_OVERRIDE="${FORGE_ARCHITECT_TOKEN}"` BEFORE sourcing env.sh, ensuring architect-bot identity survives re-sourcing (#762)
- Uses FORGE_ARCHITECT_TOKEN for authentication
- Processes existing architect PRs via bash-driven design phase
- Loads the formula and builds context from VISION.md, AGENTS.md, and ops repo
@ -95,7 +106,9 @@ Run via `architect/architect-run.sh`, which:
- Selects up to `pitch_budget` (3 - open architect PRs) remaining vision issues
- For each selected issue, invokes stateless `claude -p` with issue body + context
- Creates PRs directly from pitch content (no scratch files)
- Agent is invoked only for response processing (ACCEPT/REJECT handling)
- Agent is invoked for stateless pitch generation and response processing (ACCEPT/REJECT handling)
- NOTE: architect-bot is read-only on the project repo (#764) — sub-issue filing
and in-progress label management are handled by filer-bot after sprint PR merge
**Multi-sprint pitching**: The architect pitches up to 3 sprints per run. Bash handles all state management:
- Fetches Forgejo API data (vision issues, open PRs, merged PRs)
@ -120,4 +133,5 @@ empty file not created, just document it).
- #100: Architect formula — research + design fork identification
- #101: Architect formula — sprint PR creation with questions
- #102: Architect formula — answer parsing + sub-issue filing
- #764: Permission scoping — architect read-only on project repo, filer-bot files sub-issues
- #491: Refactor — bash-driven design phase with stateful session resumption

View file

@ -34,10 +34,11 @@ FACTORY_ROOT="$(dirname "$SCRIPT_DIR")"
# Accept project config from argument; default to disinto
export PROJECT_TOML="${1:-$FACTORY_ROOT/projects/disinto.toml}"
# Set override BEFORE sourcing env.sh so it survives any later re-source of
# env.sh from nested shells / claude -p tools (#762, #747)
export FORGE_TOKEN_OVERRIDE="${FORGE_ARCHITECT_TOKEN:-}"
# shellcheck source=../lib/env.sh
source "$FACTORY_ROOT/lib/env.sh"
# Override FORGE_TOKEN with architect-bot's token (#747)
FORGE_TOKEN="${FORGE_ARCHITECT_TOKEN:-${FORGE_TOKEN}}"
# shellcheck source=../lib/formula-session.sh
source "$FACTORY_ROOT/lib/formula-session.sh"
# shellcheck source=../lib/worktree.sh
@ -116,8 +117,8 @@ build_architect_prompt() {
You are the architect agent for ${FORGE_REPO}. Work through the formula below.
Your role: strategic decomposition of vision issues into development sprints.
Propose sprints via PRs on the ops repo, converse with humans through PR comments,
and file sub-issues after design forks are resolved.
Propose sprints via PRs on the ops repo, converse with humans through PR comments.
You are READ-ONLY on the project repo — sub-issues are filed by filer-bot after sprint PR merge (#764).
## Project context
${CONTEXT_BLOCK}
@ -144,8 +145,8 @@ build_architect_prompt_for_mode() {
You are the architect agent for ${FORGE_REPO}. Work through the formula below.
Your role: strategic decomposition of vision issues into development sprints.
Propose sprints via PRs on the ops repo, converse with humans through PR comments,
and file sub-issues after design forks are resolved.
Propose sprints via PRs on the ops repo, converse with humans through PR comments.
You are READ-ONLY on the project repo — sub-issues are filed by filer-bot after sprint PR merge (#764).
## CURRENT STATE: Approved PR awaiting initial design questions
@ -156,10 +157,10 @@ design conversation has not yet started. Your task is to:
2. Identify the key design decisions that need human input
3. Post initial design questions (Q1:, Q2:, etc.) as comments on the PR
4. Add a `## Design forks` section to the PR body documenting the design decisions
5. File sub-issues for each design fork path if applicable
5. Update the ## Sub-issues section in the sprint spec if design decisions affect decomposition
This is NOT a pitch phase — the pitch is already approved. This is the START
of the design Q&A phase.
of the design Q&A phase. Sub-issues are filed by filer-bot after sprint PR merge (#764).
## Project context
${CONTEXT_BLOCK}
@ -178,8 +179,8 @@ _PROMPT_EOF_
You are the architect agent for ${FORGE_REPO}. Work through the formula below.
Your role: strategic decomposition of vision issues into development sprints.
Propose sprints via PRs on the ops repo, converse with humans through PR comments,
and file sub-issues after design forks are resolved.
Propose sprints via PRs on the ops repo, converse with humans through PR comments.
You are READ-ONLY on the project repo — sub-issues are filed by filer-bot after sprint PR merge (#764).
## CURRENT STATE: Design Q&A in progress
@ -193,7 +194,7 @@ Your task is to:
2. Read human answers from PR comments
3. Parse the answers and determine next steps
4. Post follow-up questions if needed (Q3:, Q4:, etc.)
5. If all design forks are resolved, file sub-issues for each path
5. If all design forks are resolved, finalize the ## Sub-issues section in the sprint spec
6. Update the `## Design forks` section as you progress
## Project context
@ -417,243 +418,10 @@ fetch_vision_issues() {
"${FORGE_API}/issues?labels=vision&state=open&limit=100" 2>/dev/null || echo '[]'
}
# ── Helper: Fetch all sub-issues for a vision issue ───────────────────────
# Sub-issues are identified by:
# 1. Issues whose body contains "Decomposed from #N" pattern
# 2. Issues referenced in merged sprint PR bodies
# Returns: newline-separated list of sub-issue numbers (empty if none)
# Args: vision_issue_number
get_vision_subissues() {
local vision_issue="$1"
local subissues=()
# Method 1: Find issues with "Decomposed from #N" in body
local issues_json
issues_json=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues?limit=100" 2>/dev/null) || true
if [ -n "$issues_json" ] && [ "$issues_json" != "null" ]; then
while IFS= read -r subissue_num; do
[ -z "$subissue_num" ] && continue
subissues+=("$subissue_num")
done <<< "$(printf '%s' "$issues_json" | jq -r --arg vid "$vision_issue" \
'[.[] | select(.number != ($vid | tonumber)) | select(.body // "" | contains("Decomposed from #" + $vid))] | .[].number' 2>/dev/null)"
fi
# Method 2: Find issues referenced in merged sprint PR bodies
# Only consider PRs whose title or body references this specific vision issue
local prs_json
prs_json=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls?state=closed&limit=100" 2>/dev/null) || true
if [ -n "$prs_json" ] && [ "$prs_json" != "null" ]; then
while IFS= read -r pr_num; do
[ -z "$pr_num" ] && continue
local pr_details pr_body pr_title
pr_details=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls/${pr_num}" 2>/dev/null) || continue
local is_merged
is_merged=$(printf '%s' "$pr_details" | jq -r '.merged // false') || continue
if [ "$is_merged" != "true" ]; then
continue
fi
pr_title=$(printf '%s' "$pr_details" | jq -r '.title // ""') || continue
pr_body=$(printf '%s' "$pr_details" | jq -r '.body // ""') || continue
# Only process PRs that reference this specific vision issue
if ! printf '%s\n%s' "$pr_title" "$pr_body" | grep -qE "#${vision_issue}([^0-9]|$)"; then
continue
fi
# Extract issue numbers from PR body, excluding the vision issue itself
while IFS= read -r ref_issue; do
[ -z "$ref_issue" ] && continue
# Skip the vision issue itself
[ "$ref_issue" = "$vision_issue" ] && continue
# Skip if already in list
local found=false
for existing in "${subissues[@]+"${subissues[@]}"}"; do
[ "$existing" = "$ref_issue" ] && found=true && break
done
if [ "$found" = false ]; then
subissues+=("$ref_issue")
fi
done <<< "$(printf '%s' "$pr_body" | grep -oE '#[0-9]+' | tr -d '#' | sort -u)"
done <<< "$(printf '%s' "$prs_json" | jq -r '.[] | select(.title | contains("architect:")) | .number')"
fi
# Output unique sub-issues
printf '%s\n' "${subissues[@]}" | sort -u | grep -v '^$' || true
}
# ── Helper: Check if all sub-issues of a vision issue are closed ───────────
# Returns: 0 if all sub-issues are closed, 1 if any are still open
# Args: vision_issue_number
all_subissues_closed() {
local vision_issue="$1"
local subissues
subissues=$(get_vision_subissues "$vision_issue")
# If no sub-issues found, parent cannot be considered complete
if [ -z "$subissues" ]; then
return 1
fi
# Check each sub-issue state
while IFS= read -r subissue_num; do
[ -z "$subissue_num" ] && continue
local sub_state
sub_state=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues/${subissue_num}" 2>/dev/null | jq -r '.state // "unknown"') || true
if [ "$sub_state" != "closed" ]; then
log "Sub-issue #${subissue_num} is ${sub_state} — vision issue #${vision_issue} not ready to close"
return 1
fi
done <<< "$subissues"
return 0
}
# ── Helper: Close vision issue with summary comment ────────────────────────
# Posts a comment listing all completed sub-issues before closing.
# Returns: 0 on success, 1 on failure
# Args: vision_issue_number
close_vision_issue() {
local vision_issue="$1"
# Idempotency guard: check if a completion comment already exists
local existing_comments
existing_comments=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues/${vision_issue}/comments" 2>/dev/null) || existing_comments="[]"
if printf '%s' "$existing_comments" | jq -e '[.[] | select(.body | contains("Vision Issue Completed"))] | length > 0' >/dev/null 2>&1; then
# Comment exists — verify the issue is actually closed before skipping
local issue_state
issue_state=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues/${vision_issue}" 2>/dev/null | jq -r '.state // "open"') || issue_state="open"
if [ "$issue_state" = "closed" ]; then
log "Vision issue #${vision_issue} already has a completion comment and is closed — skipping"
return 0
fi
log "Vision issue #${vision_issue} has a completion comment but state=${issue_state} — retrying close"
else
# No completion comment yet — build and post one
local subissues
subissues=$(get_vision_subissues "$vision_issue")
# Build summary comment
local summary=""
local count=0
while IFS= read -r subissue_num; do
[ -z "$subissue_num" ] && continue
local sub_title
sub_title=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues/${subissue_num}" 2>/dev/null | jq -r '.title // "Untitled"') || sub_title="Untitled"
summary+="- #${subissue_num}: ${sub_title}"$'\n'
count=$((count + 1))
done <<< "$subissues"
local comment
comment=$(cat <<EOF
## Vision Issue Completed
All sub-issues have been implemented and merged. This vision issue is now closed.
### Completed sub-issues (${count}):
${summary}
---
*Automated closure by architect · $(date -u '+%Y-%m-%d %H:%M UTC')*
EOF
)
# Post comment before closing
local tmpfile tmpjson
tmpfile=$(mktemp /tmp/vision-close-XXXXXX.md)
tmpjson="${tmpfile}.json"
printf '%s' "$comment" > "$tmpfile"
jq -Rs '{body:.}' < "$tmpfile" > "$tmpjson"
if ! curl -sf -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_API}/issues/${vision_issue}/comments" \
--data-binary @"$tmpjson" >/dev/null 2>&1; then
log "WARNING: failed to post closure comment on vision issue #${vision_issue}"
rm -f "$tmpfile" "$tmpjson"
return 1
fi
rm -f "$tmpfile" "$tmpjson"
fi
# Clear assignee (best-effort) and close the issue
curl -sf -X PATCH \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_API}/issues/${vision_issue}" \
-d '{"assignees":[]}' >/dev/null 2>&1 || true
local close_response
close_response=$(curl -sf -X PATCH \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_API}/issues/${vision_issue}" \
-d '{"state":"closed"}' 2>/dev/null) || {
log "ERROR: state=closed PATCH failed for vision issue #${vision_issue}"
return 1
}
local result_state
result_state=$(printf '%s' "$close_response" | jq -r '.state // "unknown"') || result_state="unknown"
if [ "$result_state" != "closed" ]; then
log "ERROR: vision issue #${vision_issue} state is '${result_state}' after close PATCH — expected 'closed'"
return 1
fi
log "Closed vision issue #${vision_issue}${count:+ — all ${count} sub-issue(s) complete}"
return 0
}
# ── Lifecycle check: Close vision issues with all sub-issues complete ──────
# Runs before picking new vision issues for decomposition.
# Checks each open vision issue and closes it if all sub-issues are closed.
check_and_close_completed_visions() {
log "Checking for vision issues with all sub-issues complete..."
local vision_issues_json
vision_issues_json=$(fetch_vision_issues)
if [ -z "$vision_issues_json" ] || [ "$vision_issues_json" = "null" ]; then
log "No open vision issues found"
return 0
fi
# Get all vision issue numbers
local vision_issue_nums
vision_issue_nums=$(printf '%s' "$vision_issues_json" | jq -r '.[].number' 2>/dev/null) || vision_issue_nums=""
local closed_count=0
while IFS= read -r vision_issue; do
[ -z "$vision_issue" ] && continue
if all_subissues_closed "$vision_issue"; then
if close_vision_issue "$vision_issue"; then
closed_count=$((closed_count + 1))
fi
fi
done <<< "$vision_issue_nums"
if [ "$closed_count" -gt 0 ]; then
log "Closed ${closed_count} vision issue(s) with all sub-issues complete"
else
log "No vision issues ready for closure"
fi
}
# NOTE: get_vision_subissues, all_subissues_closed, close_vision_issue,
# check_and_close_completed_visions removed (#764) — architect-bot is read-only
# on the project repo. Vision lifecycle (closing completed visions, adding
# in-progress labels) is now handled by filer-bot via lib/sprint-filer.sh.
# ── Helper: Fetch open architect PRs from ops repo Forgejo API ───────────
# Returns: JSON array of architect PR objects
@ -745,7 +513,23 @@ Instructions:
## Recommendation
<architect's assessment: worth it / defer / alternative approach>
## Sub-issues
<!-- filer:begin -->
- id: <kebab-case-id>
title: \"vision(#${issue_num}): <concise sub-issue title>\"
labels: [backlog]
depends_on: []
body: |
## Goal
<what this sub-issue accomplishes>
## Acceptance criteria
- [ ] <criterion>
<!-- filer:end -->
IMPORTANT: Do NOT include design forks or questions. This is a go/no-go pitch.
The ## Sub-issues block is parsed by the filer-bot pipeline after sprint PR merge.
Each sub-issue between filer:begin/end markers becomes a Forgejo issue.
---
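The filer block above can be sliced out of a merged PR body with a couple of `sed` passes. A hypothetical sketch — not the actual sprint-filer.sh parser:

```shell
#!/usr/bin/env bash
# Hypothetical sketch (NOT the real sprint-filer.sh implementation):
# print everything between the filer markers, then drop the marker
# lines themselves, leaving only the sub-issue entries.
extract_filer_block() {
  sed -n '/<!-- filer:begin -->/,/<!-- filer:end -->/p' "$1" | sed '1d;$d'
}
```

Feeding a saved PR body through `extract_filer_block pr_body.md` leaves just the `- id:` entries for downstream parsing.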
@ -854,37 +638,8 @@ post_pr_footer() {
fi
}
# ── Helper: Add in-progress label to vision issue ────────────────────────
# Args: vision_issue_number
add_inprogress_label() {
local issue_num="$1"
# Get label ID for 'in-progress'
local labels_json
labels_json=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/labels" 2>/dev/null) || return 1
local inprogress_label_id
inprogress_label_id=$(printf '%s' "$labels_json" | jq -r --arg label "in-progress" '.[] | select(.name == $label) | .id' 2>/dev/null) || true
if [ -z "$inprogress_label_id" ]; then
log "WARNING: in-progress label not found"
return 1
fi
# Add label to issue
if curl -sf -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_API}/issues/${issue_num}/labels" \
-d "{\"labels\": [${inprogress_label_id}]}" >/dev/null 2>&1; then
log "Added in-progress label to vision issue #${issue_num}"
return 0
else
log "WARNING: failed to add in-progress label to vision issue #${issue_num}"
return 1
fi
}
# NOTE: add_inprogress_label removed (#764) — architect-bot is read-only on
# project repo. in-progress label is now added by filer-bot via sprint-filer.sh.
# ── Precondition checks in bash before invoking the model ─────────────────
@ -934,9 +689,7 @@ if [ "${open_arch_prs:-0}" -ge 3 ]; then
log "3 open architect PRs found but responses detected — processing"
fi
# ── Lifecycle check: Close vision issues with all sub-issues complete ──────
# Run before picking new vision issues for decomposition
check_and_close_completed_visions
# NOTE: Vision lifecycle check (close completed visions) moved to filer-bot (#764)
# ── Bash-driven state management: Select vision issues for pitching ───────
# This logic is also documented in formulas/run-architect.toml preflight step
@ -1072,8 +825,7 @@ for vision_issue in "${ARCHITECT_TARGET_ISSUES[@]}"; do
# Post footer comment
post_pr_footer "$pr_number"
# Add in-progress label to vision issue
add_inprogress_label "$vision_issue"
# NOTE: in-progress label is added by filer-bot after sprint PR merge (#764)
pitch_count=$((pitch_count + 1))
log "Completed pitch for vision issue #${vision_issue} — PR #${pr_number}"


@ -82,6 +82,7 @@ Init options:
--ci-id <n> Woodpecker CI repo ID (default: 0 = no CI)
--forge-url <url> Forge base URL (default: http://localhost:3000)
--bare Skip compose generation (bare-metal setup)
--build Use local docker build instead of registry images (dev mode)
--yes Skip confirmation prompts
--rotate-tokens Force regeneration of all bot tokens/passwords (idempotent by default)
@ -652,7 +653,7 @@ disinto_init() {
shift
# Parse flags
local branch="" repo_root="" ci_id="0" auto_yes=false forge_url_flag="" bare=false rotate_tokens=false
local branch="" repo_root="" ci_id="0" auto_yes=false forge_url_flag="" bare=false rotate_tokens=false use_build=false
while [ $# -gt 0 ]; do
case "$1" in
--branch) branch="$2"; shift 2 ;;
@ -660,6 +661,7 @@ disinto_init() {
--ci-id) ci_id="$2"; shift 2 ;;
--forge-url) forge_url_flag="$2"; shift 2 ;;
--bare) bare=true; shift ;;
--build) use_build=true; shift ;;
--yes) auto_yes=true; shift ;;
--rotate-tokens) rotate_tokens=true; shift ;;
*) echo "Unknown option: $1" >&2; exit 1 ;;
@ -743,7 +745,7 @@ p.write_text(text)
local forge_port
# -n + p: emit only on match, so a URL without an explicit port falls back to 3000
forge_port=$(printf '%s' "$forge_url" | sed -nE 's|.*:([0-9]+)/?$|\1|p')
forge_port="${forge_port:-3000}"
generate_compose "$forge_port"
generate_compose "$forge_port" "$use_build"
generate_agent_docker
generate_caddyfile
generate_staging_index
@ -890,6 +892,19 @@ p.write_text(text)
echo "Config: CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1 saved to .env"
fi
# Write local-Qwen dev agent env keys with safe defaults (#769)
if ! grep -q '^ENABLE_LLAMA_AGENT=' "$env_file" 2>/dev/null; then
cat >> "$env_file" <<'LLAMAENVEOF'
# Local Qwen dev agent (optional) — set to 1 to enable
ENABLE_LLAMA_AGENT=0
FORGE_TOKEN_LLAMA=
FORGE_PASS_LLAMA=
ANTHROPIC_BASE_URL=
LLAMAENVEOF
echo "Config: ENABLE_LLAMA_AGENT keys written to .env (disabled by default)"
fi
# Create labels on remote
create_labels "$forge_repo" "$forge_url"
@ -1118,8 +1133,6 @@ disinto_secrets() {
local subcmd="${1:-}"
local enc_file="${FACTORY_ROOT}/.env.enc"
local env_file="${FACTORY_ROOT}/.env"
local vault_enc_file="${FACTORY_ROOT}/.env.vault.enc"
local vault_env_file="${FACTORY_ROOT}/.env.vault"
# Shared helper: ensure sops+age and .sops.yaml exist
_secrets_ensure_sops() {
@ -1165,30 +1178,51 @@ disinto_secrets() {
case "$subcmd" in
add)
local name="${2:-}"
# Parse flags
local force=false
shift # consume 'add'
while [ $# -gt 0 ]; do
case "$1" in
-f|--force) force=true; shift ;;
-*) echo "Unknown flag: $1" >&2; exit 1 ;;
*) break ;;
esac
done
local name="${1:-}"
if [ -z "$name" ]; then
echo "Usage: disinto secrets add <NAME>" >&2
echo "Usage: disinto secrets add [-f|--force] <NAME>" >&2
exit 1
fi
_secrets_ensure_age_key
mkdir -p "$secrets_dir"
printf 'Enter value for %s: ' "$name" >&2
local value
IFS= read -rs value
echo >&2
if [ -t 0 ]; then
# Interactive TTY — prompt with hidden input (original behavior)
printf 'Enter value for %s: ' "$name" >&2
IFS= read -rs value
echo >&2
else
# Piped/redirected stdin — read raw bytes verbatim
IFS= read -r -d '' value || true
fi
if [ -z "$value" ]; then
echo "Error: empty value" >&2
exit 1
fi
local enc_path="${secrets_dir}/${name}.enc"
if [ -f "$enc_path" ]; then
printf 'Secret %s already exists. Overwrite? [y/N] ' "$name" >&2
local confirm
read -r confirm
if [ "$confirm" != "y" ] && [ "$confirm" != "Y" ]; then
echo "Aborted." >&2
if [ -f "$enc_path" ] && [ "$force" = false ]; then
if [ -t 0 ]; then
printf 'Secret %s already exists. Overwrite? [y/N] ' "$name" >&2
local confirm
read -r confirm
if [ "$confirm" != "y" ] && [ "$confirm" != "Y" ]; then
echo "Aborted." >&2
exit 1
fi
else
echo "Error: secret ${name} already exists (use -f to overwrite)" >&2
exit 1
fi
fi
@ -1221,6 +1255,37 @@ disinto_secrets() {
sops -d "$enc_file"
fi
;;
remove)
local name="${2:-}"
if [ -z "$name" ]; then
echo "Usage: disinto secrets remove <NAME>" >&2
exit 1
fi
local enc_path="${secrets_dir}/${name}.enc"
if [ ! -f "$enc_path" ]; then
echo "Error: ${enc_path} not found" >&2
exit 1
fi
rm -f "$enc_path"
echo "Removed: ${enc_path}"
;;
list)
if [ ! -d "$secrets_dir" ]; then
echo "No secrets directory found." >&2
exit 0
fi
local found=false
for enc_file_path in "${secrets_dir}"/*.enc; do
[ -f "$enc_file_path" ] || continue
found=true
local secret_name
secret_name=$(basename "$enc_file_path" .enc)
echo "$secret_name"
done
if [ "$found" = false ]; then
echo "No secrets stored." >&2
fi
;;
edit)
if [ ! -f "$enc_file" ]; then
echo "Error: ${enc_file} not found. Run 'disinto secrets migrate' first." >&2
@ -1244,54 +1309,100 @@ disinto_secrets() {
rm -f "$env_file"
echo "Migrated: .env -> .env.enc (plaintext removed)"
;;
edit-vault)
if [ ! -f "$vault_enc_file" ]; then
echo "Error: ${vault_enc_file} not found. Run 'disinto secrets migrate-vault' first." >&2
migrate-from-vault)
# One-shot migration: split .env.vault.enc into secrets/<KEY>.enc files (#777)
local vault_enc_file="${FACTORY_ROOT}/.env.vault.enc"
local vault_env_file="${FACTORY_ROOT}/.env.vault"
local source_file=""
if [ -f "$vault_enc_file" ] && command -v sops &>/dev/null; then
source_file="$vault_enc_file"
elif [ -f "$vault_env_file" ]; then
source_file="$vault_env_file"
else
echo "Error: neither .env.vault.enc nor .env.vault found — nothing to migrate." >&2
exit 1
fi
sops "$vault_enc_file"
;;
show-vault)
if [ ! -f "$vault_enc_file" ]; then
echo "Error: ${vault_enc_file} not found." >&2
_secrets_ensure_age_key
mkdir -p "$secrets_dir"
# Decrypt vault to temp dotenv
local tmp_dotenv
tmp_dotenv=$(mktemp /tmp/disinto-vault-migrate-XXXXXX)
trap 'rm -f "$tmp_dotenv"' RETURN
if [ "$source_file" = "$vault_enc_file" ]; then
if ! sops -d --output-type dotenv "$vault_enc_file" > "$tmp_dotenv" 2>/dev/null; then
rm -f "$tmp_dotenv"
echo "Error: failed to decrypt .env.vault.enc" >&2
exit 1
fi
else
cp "$vault_env_file" "$tmp_dotenv"
fi
# Parse each KEY=VALUE and encrypt into secrets/<KEY>.enc
local count=0
local failed=0
while IFS='=' read -r key value; do
# Skip empty lines and comments
[[ -z "$key" || "$key" =~ ^[[:space:]]*# ]] && continue
# Trim whitespace from key via parameter expansion (xargs mangles quotes/backslashes)
key="${key//[[:space:]]/}"
[ -z "$key" ] && continue
local enc_path="${secrets_dir}/${key}.enc"
if printf '%s' "$value" | age -r "$AGE_PUBLIC_KEY" -o "$enc_path" 2>/dev/null; then
# Verify round-trip
local check
check=$(age -d -i "$age_key_file" "$enc_path" 2>/dev/null) || { failed=$((failed + 1)); echo " FAIL (verify): ${key}" >&2; continue; }
if [ "$check" = "$value" ]; then
echo " OK: ${key} -> secrets/${key}.enc"
count=$((count + 1))
else
echo " FAIL (mismatch): ${key}" >&2
failed=$((failed + 1))
fi
else
echo " FAIL (encrypt): ${key}" >&2
failed=$((failed + 1))
fi
done < "$tmp_dotenv"
rm -f "$tmp_dotenv"
if [ "$failed" -gt 0 ]; then
echo "Error: ${failed} secret(s) failed migration. Vault files NOT removed." >&2
exit 1
fi
sops -d "$vault_enc_file"
;;
migrate-vault)
if [ ! -f "$vault_env_file" ]; then
echo "Error: ${vault_env_file} not found — nothing to migrate." >&2
echo " Create .env.vault with vault secrets (GITHUB_TOKEN, deploy keys, etc.)" >&2
exit 1
if [ "$count" -eq 0 ]; then
echo "Warning: no secrets found in vault file." >&2
else
echo "Migrated ${count} secret(s) to secrets/*.enc"
# Remove old vault files on success
rm -f "$vault_enc_file" "$vault_env_file"
echo "Removed: .env.vault.enc / .env.vault"
fi
_secrets_ensure_sops
encrypt_env_file "$vault_env_file" "$vault_enc_file"
# Verify decryption works before removing plaintext
if ! sops -d "$vault_enc_file" >/dev/null 2>&1; then
echo "Error: failed to verify .env.vault.enc decryption" >&2
rm -f "$vault_enc_file"
exit 1
fi
rm -f "$vault_env_file"
echo "Migrated: .env.vault -> .env.vault.enc (plaintext removed)"
;;
*)
cat <<EOF >&2
Usage: disinto secrets <subcommand>
Individual secrets (secrets/<NAME>.enc):
add <NAME> Prompt for value, encrypt, store in secrets/<NAME>.enc
show <NAME> Decrypt and print an individual secret
Secrets (secrets/<NAME>.enc — age-encrypted, one file per key):
add <NAME> Prompt for value, encrypt, store in secrets/<NAME>.enc
show <NAME> Decrypt and print a secret
remove <NAME> Remove a secret
list List all stored secrets
Agent secrets (.env.enc):
edit Edit agent secrets (FORGE_TOKEN, CLAUDE_API_KEY, etc.)
show Show decrypted agent secrets (no argument)
migrate Encrypt .env -> .env.enc
Agent secrets (.env.enc — sops-encrypted dotenv):
edit Edit agent secrets (FORGE_TOKEN, CLAUDE_API_KEY, etc.)
show Show decrypted agent secrets (no argument)
migrate Encrypt .env -> .env.enc
Vault secrets (.env.vault.enc):
edit-vault Edit vault secrets (GITHUB_TOKEN, deploy keys, etc.)
show-vault Show decrypted vault secrets
migrate-vault Encrypt .env.vault -> .env.vault.enc
Migration:
migrate-from-vault Split .env.vault.enc into secrets/<KEY>.enc (one-shot)
EOF
exit 1
;;
@ -1303,7 +1414,8 @@ EOF
disinto_run() {
local action_id="${1:?Usage: disinto run <action-id>}"
local compose_file="${FACTORY_ROOT}/docker-compose.yml"
local vault_enc="${FACTORY_ROOT}/.env.vault.enc"
local secrets_dir="${FACTORY_ROOT}/secrets"
local age_key_file="${HOME}/.config/sops/age/keys.txt"
if [ ! -f "$compose_file" ]; then
echo "Error: docker-compose.yml not found" >&2
@ -1311,29 +1423,42 @@ disinto_run() {
exit 1
fi
if [ ! -f "$vault_enc" ]; then
echo "Error: .env.vault.enc not found — create vault secrets first" >&2
echo " Run 'disinto secrets migrate-vault' after creating .env.vault" >&2
if [ ! -d "$secrets_dir" ]; then
echo "Error: secrets/ directory not found — create secrets first" >&2
echo " Run 'disinto secrets add <NAME>' to add secrets" >&2
exit 1
fi
if ! command -v sops &>/dev/null; then
echo "Error: sops not found — required to decrypt vault secrets" >&2
if ! command -v age &>/dev/null; then
echo "Error: age not found — required to decrypt secrets" >&2
exit 1
fi
# Decrypt vault secrets to temp file
if [ ! -f "$age_key_file" ]; then
echo "Error: age key not found at ${age_key_file}" >&2
exit 1
fi
# Decrypt all secrets/*.enc into a temp env file for the runner
local tmp_env
tmp_env=$(mktemp /tmp/disinto-vault-XXXXXX)
tmp_env=$(mktemp /tmp/disinto-secrets-XXXXXX)
trap 'rm -f "$tmp_env"' EXIT
if ! sops -d --output-type dotenv "$vault_enc" > "$tmp_env" 2>/dev/null; then
rm -f "$tmp_env"
echo "Error: failed to decrypt .env.vault.enc" >&2
exit 1
fi
local count=0
for enc_path in "${secrets_dir}"/*.enc; do
[ -f "$enc_path" ] || continue
local key
key=$(basename "$enc_path" .enc)
local val
val=$(age -d -i "$age_key_file" "$enc_path" 2>/dev/null) || {
echo "Warning: failed to decrypt ${enc_path}" >&2
continue
}
printf '%s=%s\n' "$key" "$val" >> "$tmp_env"
count=$((count + 1))
done
echo "Vault secrets decrypted to tmpfile"
echo "Decrypted ${count} secret(s) to tmpfile"
# Run action in ephemeral runner container
local rc=0
@ -1404,21 +1529,96 @@ download_agent_binaries() {
# ── up command ────────────────────────────────────────────────────────────────
# Regenerate a file idempotently: run the generator, compare output, backup if changed.
# Usage: _regen_file <target_file> <generator_fn> [args...]
_regen_file() {
local target="$1"; shift
local generator="$1"; shift
local basename
basename=$(basename "$target")
# Move existing file aside so the generator (which skips if file exists)
# produces a fresh copy.
local stashed=""
if [ -f "$target" ]; then
stashed=$(mktemp "${target}.stash.XXXXXX")
mv "$target" "$stashed"
fi
# Run the generator — it writes $target from scratch.
# If the generator fails, restore the stashed original so it is not stranded.
if ! "$generator" "$@"; then
if [ -n "$stashed" ]; then
mv "$stashed" "$target"
fi
return 1
fi
if [ -z "$stashed" ]; then
# No previous file — first generation
echo "regenerated: ${basename} (new)"
return
fi
if cmp -s "$stashed" "$target"; then
# Content unchanged — restore original to preserve mtime
mv "$stashed" "$target"
echo "unchanged: ${basename}"
else
# Content changed — keep new, save old as .prev
mv "$stashed" "${target}.prev"
echo "regenerated: ${basename} (previous saved as ${basename}.prev)"
fi
}
disinto_up() {
local compose_file="${FACTORY_ROOT}/docker-compose.yml"
local caddyfile="${FACTORY_ROOT}/docker/Caddyfile"
if [ ! -f "$compose_file" ]; then
echo "Error: docker-compose.yml not found" >&2
echo " Run 'disinto init <repo-url>' first (without --bare)" >&2
exit 1
fi
# Pre-build: download binaries to docker/agents/bin/ to avoid network calls during docker build
echo "── Pre-build: downloading agent binaries ────────────────────────"
if ! download_agent_binaries; then
echo "Error: failed to download agent binaries" >&2
exit 1
# Parse --no-regen flag; remaining args pass through to docker compose
local no_regen=false
local -a compose_args=()
for arg in "$@"; do
case "$arg" in
--no-regen) no_regen=true ;;
*) compose_args+=("$arg") ;;
esac
done
# ── Regenerate compose & Caddyfile from generators ──────────────────────
if [ "$no_regen" = true ]; then
echo "Warning: running with unmanaged compose — hand-edits will drift" >&2
else
# Determine forge_port from FORGE_URL (same logic as init)
local forge_url="${FORGE_URL:-http://localhost:3000}"
local forge_port
# -n + p: emit only on match, so a URL without an explicit port falls back to 3000
forge_port=$(printf '%s' "$forge_url" | sed -nE 's|.*:([0-9]+)/?$|\1|p')
forge_port="${forge_port:-3000}"
# Detect build mode from existing compose
local use_build=false
if grep -q '^[[:space:]]*build:' "$compose_file"; then
use_build=true
fi
_regen_file "$compose_file" generate_compose "$forge_port" "$use_build"
_regen_file "$caddyfile" generate_caddyfile
fi
# Pre-build: download binaries only when compose uses local build
if grep -q '^[[:space:]]*build:' "$compose_file"; then
echo "── Pre-build: downloading agent binaries ────────────────────────"
if ! download_agent_binaries; then
echo "Error: failed to download agent binaries" >&2
exit 1
fi
echo ""
fi
echo ""
# Decrypt secrets to temp .env if SOPS available and .env.enc exists
local tmp_env=""
@ -1431,7 +1631,7 @@ disinto_up() {
echo "Decrypted secrets for compose"
fi
docker compose -f "$compose_file" up -d "$@"
docker compose -f "$compose_file" up -d --build --remove-orphans ${compose_args[@]+"${compose_args[@]}"}
echo "Stack is up"
# Clean up temp .env (also handled by EXIT trap if compose fails)


@ -1,4 +1,4 @@
<!-- last-reviewed: 4e53f508d9b36c60bd68ed5fc497fc8775fec79f -->
<!-- last-reviewed: be463c5b439aec1ef0d4acfafc47e94896f5dc57 -->
# Dev Agent
**Role**: Implement issues autonomously — write code, push branches, address
@ -55,6 +55,12 @@ PRs owned by other bot users (#374).
**Crash recovery**: on `PHASE:crashed` or non-zero exit, the worktree is **preserved** (not destroyed) for debugging. Location logged. Supervisor housekeeping removes stale crashed worktrees older than 24h.
**Polling loop isolation (#753)**: `docker/agents/entrypoint.sh` now tracks fast-poll PIDs
(`FAST_PIDS`) and calls `wait "${FAST_PIDS[@]}"` instead of `wait` (no-args). This means
long-running dev-agent sessions no longer block the loop from launching the next iteration's
fast polls — the loop only waits for review-poll and dev-poll (the fast agents), never for
the dev-agent subprocess itself.
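A minimal stand-in (names assumed, not the real entrypoint code) showing the difference:

```shell
#!/usr/bin/env bash
# Stand-in demo: a bare `wait` blocks on ALL children — including a
# long-running dev-agent launched in a prior iteration — while
# `wait "${FAST_PIDS[@]}"` returns as soon as the tracked fast polls exit.
FAST_PIDS=()
sleep 1 & FAST_PIDS+=($!)   # fast poll stand-in
sleep 30 & SLOW_PID=$!      # long-running dev-agent stand-in (untracked)
wait "${FAST_PIDS[@]}"      # returns after ~1s, not 30s
kill "$SLOW_PID" 2>/dev/null
echo "fast polls done"
```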
**Lifecycle**: dev-poll.sh (invoked by polling loop, `check_active dev`) → dev-agent.sh →
tmux session → phase file drives CI/review loop → merge + `mirror_push()` → close issue.
On respawn after `PHASE:escalate`, the stale phase file is cleared first so the session


@ -14,10 +14,10 @@ services:
- agent-data:/home/agent/data
- project-repos:/home/agent/repos
- ${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}:${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}
- ${HOME}/.claude.json:/home/agent/.claude.json:ro
- CLAUDE_BIN_PLACEHOLDER:/usr/local/bin/claude:ro
- ${HOME}/.ssh:/home/agent/.ssh:ro
- ${HOME}/.config/sops/age:/home/agent/.config/sops/age:ro
- ${CLAUDE_CONFIG_FILE:-${HOME}/.claude.json}:/home/agent/.claude.json:ro
- ${CLAUDE_BIN_DIR}:/usr/local/bin/claude:ro
- ${AGENT_SSH_DIR:-${HOME}/.ssh}:/home/agent/.ssh:ro
- ${SOPS_AGE_DIR:-${HOME}/.config/sops/age}:/home/agent/.config/sops/age:ro
- woodpecker-data:/woodpecker-data:ro
environment:
- FORGE_URL=http://forgejo:3000
@ -30,6 +30,7 @@ services:
- FORGE_SUPERVISOR_TOKEN=${FORGE_SUPERVISOR_TOKEN:-}
- FORGE_PREDICTOR_TOKEN=${FORGE_PREDICTOR_TOKEN:-}
- FORGE_ARCHITECT_TOKEN=${FORGE_ARCHITECT_TOKEN:-}
- FORGE_FILER_TOKEN=${FORGE_FILER_TOKEN:-}
- FORGE_BOT_USERNAMES=${FORGE_BOT_USERNAMES:-}
- WOODPECKER_TOKEN=${WOODPECKER_TOKEN:-}
- CLAUDE_TIMEOUT=${CLAUDE_TIMEOUT:-7200}
@ -48,6 +49,12 @@ services:
- GARDENER_INTERVAL=${GARDENER_INTERVAL:-21600}
- ARCHITECT_INTERVAL=${ARCHITECT_INTERVAL:-21600}
- PLANNER_INTERVAL=${PLANNER_INTERVAL:-43200}
healthcheck:
test: ["CMD", "pgrep", "-f", "entrypoint.sh"]
interval: 60s
timeout: 5s
retries: 3
start_period: 30s
depends_on:
forgejo:
condition: service_healthy
@ -69,10 +76,10 @@ services:
- agent-data:/home/agent/data
- project-repos:/home/agent/repos
- ${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}:${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}
- ${HOME}/.claude.json:/home/agent/.claude.json:ro
- CLAUDE_BIN_PLACEHOLDER:/usr/local/bin/claude:ro
- ${HOME}/.ssh:/home/agent/.ssh:ro
- ${HOME}/.config/sops/age:/home/agent/.config/sops/age:ro
- ${CLAUDE_CONFIG_FILE:-${HOME}/.claude.json}:/home/agent/.claude.json:ro
- ${CLAUDE_BIN_DIR}:/usr/local/bin/claude:ro
- ${AGENT_SSH_DIR:-${HOME}/.ssh}:/home/agent/.ssh:ro
- ${SOPS_AGE_DIR:-${HOME}/.config/sops/age}:/home/agent/.config/sops/age:ro
- woodpecker-data:/woodpecker-data:ro
environment:
- FORGE_URL=http://forgejo:3000
@ -102,6 +109,12 @@ services:
- CLAUDE_CONFIG_DIR=${CLAUDE_CONFIG_DIR:-/var/lib/disinto/claude-shared/config}
- POLL_INTERVAL=${POLL_INTERVAL:-300}
- AGENT_ROLES=dev
healthcheck:
test: ["CMD", "pgrep", "-f", "entrypoint.sh"]
interval: 60s
timeout: 5s
retries: 3
start_period: 30s
depends_on:
forgejo:
condition: service_healthy
@ -121,9 +134,9 @@ services:
- /var/run/docker.sock:/var/run/docker.sock
- agent-data:/home/agent/data
- project-repos:/home/agent/repos
- ${HOME}/.claude:/home/agent/.claude
- /usr/local/bin/claude:/usr/local/bin/claude:ro
- ${HOME}/.ssh:/home/agent/.ssh:ro
- ${CLAUDE_DIR:-${HOME}/.claude}:/home/agent/.claude
- ${CLAUDE_BIN_DIR:-/usr/local/bin/claude}:/usr/local/bin/claude:ro
- ${AGENT_SSH_DIR:-${HOME}/.ssh}:/home/agent/.ssh:ro
env_file:
- .env
@ -137,9 +150,9 @@ services:
- apparmor=unconfined
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /usr/local/bin/claude:/usr/local/bin/claude:ro
- ${HOME}/.claude.json:/root/.claude.json:ro
- ${HOME}/.claude:/root/.claude:ro
- ${CLAUDE_BIN_DIR:-/usr/local/bin/claude}:/usr/local/bin/claude:ro
- ${CLAUDE_CONFIG_FILE:-${HOME}/.claude.json}:/root/.claude.json:ro
- ${CLAUDE_DIR:-${HOME}/.claude}:/root/.claude:ro
- disinto-logs:/opt/disinto-logs
environment:
- FORGE_SUPERVISOR_TOKEN=${FORGE_SUPERVISOR_TOKEN:-}
@ -155,6 +168,12 @@ services:
ports:
- "80:80"
- "443:443"
healthcheck:
test: ["CMD", "curl", "-fsS", "http://localhost:2019/config/"]
interval: 30s
timeout: 5s
retries: 3
start_period: 15s
depends_on:
- forgejo
networks:


@ -28,6 +28,9 @@ RUN chmod +x /entrypoint.sh
# Entrypoint runs polling loop directly, dropping to agent user via gosu.
# All scripts execute as the agent user (UID 1000) while preserving env vars.
VOLUME /home/agent/data
VOLUME /home/agent/repos
WORKDIR /home/agent/disinto
ENTRYPOINT ["/entrypoint.sh"]


@ -385,11 +385,13 @@ print(cfg.get('primary_branch', 'main'))
log "Processing project TOML: ${toml}"
# --- Fast agents: run in background, wait before slow agents ---
FAST_PIDS=()
# Review poll (every iteration)
if [[ ",${AGENT_ROLES}," == *",review,"* ]]; then
log "Running review-poll (iteration ${iteration}) for ${toml}"
gosu agent bash -c "cd ${DISINTO_DIR} && bash review/review-poll.sh \"${toml}\"" >> "${DISINTO_LOG_DIR}/review-poll.log" 2>&1 &
FAST_PIDS+=($!)
fi
sleep 2 # stagger fast polls
@ -398,10 +400,14 @@ print(cfg.get('primary_branch', 'main'))
if [[ ",${AGENT_ROLES}," == *",dev,"* ]]; then
log "Running dev-poll (iteration ${iteration}) for ${toml}"
gosu agent bash -c "cd ${DISINTO_DIR} && bash dev/dev-poll.sh \"${toml}\"" >> "${DISINTO_LOG_DIR}/dev-poll.log" 2>&1 &
FAST_PIDS+=($!)
fi
# Wait for fast polls to finish before launching slow agents
wait
# Wait only for THIS iteration's fast polls — long-running gardener/dev-agent
# from prior iterations must not block us.
if [ ${#FAST_PIDS[@]} -gt 0 ]; then
wait "${FAST_PIDS[@]}"
fi
# --- Slow agents: run in background with pgrep guard ---


@ -30,6 +30,6 @@ WORKDIR /var/chat
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD python3 -c "import urllib.request; urllib.request.urlopen('http://localhost:8080/')" || exit 1
CMD python3 -c "import urllib.request; urllib.request.urlopen('http://localhost:8080/health')" || exit 1
ENTRYPOINT ["/entrypoint-chat.sh"]


@ -481,6 +481,14 @@ class ChatHandler(BaseHTTPRequestHandler):
parsed = urlparse(self.path)
path = parsed.path
# Health endpoint (no auth required) — used by Docker healthcheck
if path == "/health":
self.send_response(200)
self.send_header("Content-Type", "text/plain")
self.end_headers()
self.wfile.write(b"ok\n")
return
# Verify endpoint for Caddy forward_auth (#709)
if path == "/chat/auth/verify":
self.handle_auth_verify()


@ -1,4 +1,7 @@
FROM caddy:latest
RUN apk add --no-cache bash jq curl git docker-cli python3 openssh-client autossh
COPY entrypoint-edge.sh /usr/local/bin/entrypoint-edge.sh
VOLUME /data
ENTRYPOINT ["bash", "/usr/local/bin/entrypoint-edge.sh"]


@ -8,7 +8,7 @@
# 2. Scan vault/actions/ for TOML files without .result.json
# 3. Verify TOML arrived via merged PR with admin merger (Forgejo API)
# 4. Validate TOML using vault-env.sh validator
# 5. Decrypt .env.vault.enc and extract only declared secrets
# 5. Decrypt declared secrets from secrets/<NAME>.enc (age-encrypted)
# 6. Launch: docker run --rm disinto/agents:latest <action-id>
# 7. Write <action-id>.result.json with exit code, timestamp, logs summary
#
@ -27,26 +27,41 @@ source "${SCRIPT_ROOT}/../lib/env.sh"
# the shallow clone only has .toml.example files.
PROJECTS_DIR="${PROJECTS_DIR:-${FACTORY_ROOT:-/opt/disinto}-projects}"
# Load vault secrets after env.sh (env.sh unsets them for agent security)
# Vault secrets must be available to the dispatcher
if [ -f "$FACTORY_ROOT/.env.vault.enc" ] && command -v sops &>/dev/null; then
set -a
eval "$(sops -d --output-type dotenv "$FACTORY_ROOT/.env.vault.enc" 2>/dev/null)" \
|| echo "Warning: failed to decrypt .env.vault.enc — vault secrets not loaded" >&2
set +a
elif [ -f "$FACTORY_ROOT/.env.vault" ]; then
set -a
# shellcheck source=/dev/null
source "$FACTORY_ROOT/.env.vault"
set +a
fi
# Load granular secrets from secrets/*.enc (age-encrypted, one file per key).
# These are decrypted on demand and exported so the dispatcher can pass them
# to runner containers. Replaces the old monolithic .env.vault.enc store (#777).
_AGE_KEY_FILE="${HOME}/.config/sops/age/keys.txt"
_SECRETS_DIR="${FACTORY_ROOT}/secrets"
# decrypt_secret <NAME> — decrypt secrets/<NAME>.enc and print the plaintext value
decrypt_secret() {
local name="$1"
local enc_path="${_SECRETS_DIR}/${name}.enc"
if [ ! -f "$enc_path" ]; then
return 1
fi
age -d -i "$_AGE_KEY_FILE" "$enc_path" 2>/dev/null
}
# load_secrets <NAME ...> — decrypt each secret and export it
load_secrets() {
if [ ! -f "$_AGE_KEY_FILE" ]; then
echo "Warning: age key not found at ${_AGE_KEY_FILE} — secrets not loaded" >&2
return 1
fi
for name in "$@"; do
local val
val=$(decrypt_secret "$name") || continue
export "$name=$val"
done
}
# Ops repo location (vault/actions directory)
OPS_REPO_ROOT="${OPS_REPO_ROOT:-/home/debian/disinto-ops}"
VAULT_ACTIONS_DIR="${OPS_REPO_ROOT}/vault/actions"
# Vault action validation
VAULT_ENV="${SCRIPT_ROOT}/../vault/vault-env.sh"
VAULT_ENV="${SCRIPT_ROOT}/../action-vault/vault-env.sh"
# Admin users who can merge vault PRs (from issue #77)
# Comma-separated list of Forgejo usernames with admin role
@ -452,17 +467,18 @@ launch_runner() {
fi
# Add environment variables for secrets (if any declared)
# Secrets are decrypted per-key from secrets/<NAME>.enc (#777)
if [ -n "$secrets_array" ]; then
for secret in $secrets_array; do
secret=$(echo "$secret" | xargs)
if [ -n "$secret" ]; then
# Verify secret exists in vault
if [ -z "${!secret:-}" ]; then
log "ERROR: Secret '${secret}' not found in vault for action ${action_id}"
write_result "$action_id" 1 "Secret not found in vault: ${secret}"
local secret_val
secret_val=$(decrypt_secret "$secret") || {
log "ERROR: Secret '${secret}' not found in secrets/*.enc for action ${action_id}"
write_result "$action_id" 1 "Secret not found: ${secret} (expected secrets/${secret}.enc)"
return 1
fi
cmd+=(-e "${secret}=${!secret}")
}
cmd+=(-e "${secret}=${secret_val}")
fi
done
else


@ -173,6 +173,67 @@ PROJECT_TOML="${PROJECT_TOML:-projects/disinto.toml}"
sleep 1200 # 20 minutes
done) &
# ── Load required secrets from secrets/*.enc (#777) ────────────────────
# Edge container declares its required secrets; missing ones cause a hard fail.
_AGE_KEY_FILE="${HOME}/.config/sops/age/keys.txt"
_SECRETS_DIR="/opt/disinto/secrets"
EDGE_REQUIRED_SECRETS="CADDY_SSH_KEY CADDY_SSH_HOST CADDY_SSH_USER CADDY_ACCESS_LOG"
_edge_decrypt_secret() {
local enc_path="${_SECRETS_DIR}/${1}.enc"
[ -f "$enc_path" ] || return 1
age -d -i "$_AGE_KEY_FILE" "$enc_path" 2>/dev/null
}
if [ -f "$_AGE_KEY_FILE" ] && [ -d "$_SECRETS_DIR" ]; then
_missing=""
for _secret_name in $EDGE_REQUIRED_SECRETS; do
_val=$(_edge_decrypt_secret "$_secret_name") || { _missing="${_missing} ${_secret_name}"; continue; }
export "$_secret_name=$_val"
done
if [ -n "$_missing" ]; then
echo "FATAL: required secrets missing from secrets/*.enc:${_missing}" >&2
echo " Run 'disinto secrets add <NAME>' for each missing secret." >&2
echo " If migrating from .env.vault.enc, run 'disinto secrets migrate-from-vault' first." >&2
exit 1
fi
echo "edge: loaded required secrets: ${EDGE_REQUIRED_SECRETS}" >&2
else
echo "FATAL: age key (${_AGE_KEY_FILE}) or secrets dir (${_SECRETS_DIR}) not found — cannot load required secrets" >&2
echo " Ensure age is installed and secrets/*.enc files are present." >&2
exit 1
fi
# Start daily engagement collection cron loop in background (#745)
# Runs collect-engagement.sh daily at ~23:50 UTC via a sleep loop that
# calculates seconds until the next 23:50 window. SSH key from secrets/*.enc (#777).
(while true; do
# Calculate seconds until next 23:50 UTC
_now=$(date -u +%s)
_target=$(date -u -d "today 23:50" +%s 2>/dev/null || date -u -d "23:50" +%s 2>/dev/null || echo 0)
if [ "$_target" -le "$_now" ]; then
_target=$(( _target + 86400 ))
fi
_sleep_secs=$(( _target - _now ))
echo "edge: collect-engagement scheduled in ${_sleep_secs}s (next 23:50 UTC)" >&2
sleep "$_sleep_secs"
_fetch_log="/tmp/caddy-access-log-fetch.log"
_ssh_key_file=$(mktemp)
printf '%s\n' "$CADDY_SSH_KEY" > "$_ssh_key_file"
chmod 0600 "$_ssh_key_file"
scp -i "$_ssh_key_file" -o StrictHostKeyChecking=accept-new -o ConnectTimeout=10 -o BatchMode=yes \
"${CADDY_SSH_USER}@${CADDY_SSH_HOST}:${CADDY_ACCESS_LOG}" \
"$_fetch_log" 2>&1 | tee -a /opt/disinto-logs/collect-engagement.log || true
rm -f "$_ssh_key_file"
if [ -s "$_fetch_log" ]; then
CADDY_ACCESS_LOG="$_fetch_log" bash /opt/disinto/site/collect-engagement.sh 2>&1 \
| tee -a /opt/disinto-logs/collect-engagement.log || true
else
echo "edge: collect-engagement: fetched log is empty, skipping parse" >&2
fi
rm -f "$_fetch_log"
done) &
# Caddy as main process — run in foreground via wait so background jobs survive
# (exec replaces the shell, which can orphan backgrounded subshells)
caddy run --config /etc/caddy/Caddyfile --adapter caddyfile &
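The 23:50-UTC scheduling arithmetic in the cron loop above can be factored into a pure helper and sanity-checked in isolation. This is a refactoring sketch, not code that exists in the entrypoint; the function name is made up:

```shell
# seconds_until_daily <now_epoch> <target_epoch_today>
# Mirrors the entrypoint's arithmetic: if today's target time has already
# passed, roll the target forward by one day (86400 seconds).
seconds_until_daily() {
  local now="$1" target="$2"
  if [ "$target" -le "$now" ]; then
    target=$(( target + 86400 ))
  fi
  echo $(( target - now ))
}

# Example: a target earlier today has passed, so sleep until tomorrow's slot.
seconds_until_daily 100 50    # prints 86350
seconds_until_daily 100 200   # prints 100
```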


@ -7,5 +7,8 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
RUN useradd -m -u 1000 -s /bin/bash agent
COPY docker/reproduce/entrypoint-reproduce.sh /entrypoint-reproduce.sh
RUN chmod +x /entrypoint-reproduce.sh
VOLUME /home/agent/data
VOLUME /home/agent/repos
WORKDIR /home/agent
ENTRYPOINT ["/entrypoint-reproduce.sh"]

View file

@ -26,8 +26,8 @@ The `main` branch on the ops repo (`johba/disinto-ops`) is protected via Forgejo
## Vault PR Lifecycle
1. **Request** — Agent calls `lib/vault.sh:vault_request()` with action TOML content
2. **Validation** — TOML is validated against the schema in `vault/vault-env.sh`
1. **Request** — Agent calls `lib/action-vault.sh:vault_request()` with action TOML content
2. **Validation** — TOML is validated against the schema in `action-vault/vault-env.sh`
3. **PR Creation** — A PR is created on `disinto-ops` with:
- Branch: `vault/<action-id>`
- Title: `vault: <action-id>`
@ -90,12 +90,12 @@ To verify the protection is working:
- #73 — Vault redesign proposal
- #74 — Vault action TOML schema
- #75 — Vault PR creation helper (`lib/vault.sh`)
- #75 — Vault PR creation helper (`lib/action-vault.sh`)
- #76 — Dispatcher rewrite (poll for merged vault PRs)
- #77 — Branch protection on ops repo (this issue)
## See Also
- [`lib/vault.sh`](../lib/vault.sh) — Vault PR creation helper
- [`vault/vault-env.sh`](../vault/vault-env.sh) — TOML validation
- [`lib/action-vault.sh`](../lib/action-vault.sh) — Vault PR creation helper
- [`action-vault/vault-env.sh`](../action-vault/vault-env.sh) — TOML validation
- [`lib/branch-protection.sh`](../lib/branch-protection.sh) — Branch protection helper

docs/agents-llama.md Normal file

@ -0,0 +1,42 @@
# agents-llama — Local-Qwen Dev Agent
The `agents-llama` service is an optional compose service that runs a dev agent
backed by a local llama-server instance (e.g. Qwen) instead of the Anthropic
API. It uses the same Docker image as the main `agents` service but connects to
a local inference endpoint via `ANTHROPIC_BASE_URL`.
## Enabling
Set `ENABLE_LLAMA_AGENT=1` in `.env` (or `.env.enc`) and provide the required
credentials:
```env
ENABLE_LLAMA_AGENT=1
FORGE_TOKEN_LLAMA=<dev-qwen API token>
FORGE_PASS_LLAMA=<dev-qwen password>
ANTHROPIC_BASE_URL=http://host.docker.internal:8081 # llama-server endpoint
```
Then regenerate the compose file (`disinto init ...`) and bring the stack up.
## Prerequisites
- **llama-server** (or compatible OpenAI-API endpoint) running on the host,
reachable from inside Docker at the URL set in `ANTHROPIC_BASE_URL`.
- A Forgejo bot user (e.g. `dev-qwen`) with its own API token and password,
stored as `FORGE_TOKEN_LLAMA` / `FORGE_PASS_LLAMA`.
## Behaviour
- `AGENT_ROLES=dev` — the llama agent only picks up dev work.
- `CLAUDE_AUTOCOMPACT_PCT_OVERRIDE=60` — more aggressive compaction for smaller
context windows.
- `depends_on: forgejo (service_healthy)` — does **not** depend on Woodpecker
(the llama agent doesn't need CI).
- Serialises on the llama-server's single KV cache (AD-002).
## Disabling
Set `ENABLE_LLAMA_AGENT=0` (or leave it unset) and regenerate. The service
block is omitted entirely from `docker-compose.yml`; the stack starts cleanly
without it.
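For orientation, the generated service block looks roughly like the following. This is a hand-written sketch of what `disinto init` emits when `ENABLE_LLAMA_AGENT=1`, not the generator's verbatim output; the image tag, `extra_hosts` entry, and exact key order are assumptions.

```yaml
agents-llama:
  image: disinto/agents:latest
  environment:
    - AGENT_ROLES=dev
    - ANTHROPIC_BASE_URL=${ANTHROPIC_BASE_URL}
    - FORGE_TOKEN=${FORGE_TOKEN_LLAMA}
    - CLAUDE_AUTOCOMPACT_PCT_OVERRIDE=60
  extra_hosts:
    - "host.docker.internal:host-gateway"   # reach llama-server on the host
  depends_on:
    forgejo:
      condition: service_healthy            # no Woodpecker dependency
```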

docs/mirror-bootstrap.md Normal file

@ -0,0 +1,59 @@
# Mirror Bootstrap — Pull-Mirror Cutover Path
How to populate an empty Forgejo repo from an external source using
`lib/mirrors.sh`'s `mirror_pull_register()`.
## Prerequisites
| Variable | Example | Purpose |
|---|---|---|
| `FORGE_URL` | `http://forgejo:3000` | Forgejo instance base URL |
| `FORGE_API_BASE` | `${FORGE_URL}/api/v1` | Global API base (set by `lib/env.sh`) |
| `FORGE_TOKEN` | (admin or org-owner token) | Must have `repo:create` scope |
The target org/user must already exist on the Forgejo instance.
## Command
```bash
source lib/env.sh
source lib/mirrors.sh
# Register a pull mirror — creates the repo and starts the first sync.
# Arguments: source URL, target owner, target repo name,
# sync interval (optional, default "8h0m0s").
mirror_pull_register \
  "https://codeberg.org/johba/disinto.git" \
  "disinto-admin" \
  "disinto" \
  "8h0m0s"
```
The function calls `POST /api/v1/repos/migrate` with `mirror: true`.
Forgejo creates the repo and immediately queues the first sync.
## Verifying the sync
```bash
# Check mirror status via API
forge_api GET "/repos/disinto-admin/disinto" | jq '.mirror, .mirror_interval'
# Confirm content arrived — should list branches
forge_api GET "/repos/disinto-admin/disinto/branches" | jq '.[].name'
```
The first sync typically completes within a few seconds for small-to-medium
repos. For large repos, poll the branches endpoint until content appears.
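The polling can be scripted with a small retry helper. A sketch, assuming `forge_api` and `jq` from the snippets above are available; the helper name is made up:

```shell
# Retry a command until it prints non-empty output, up to <tries> attempts.
poll_until_nonempty() {
  local tries="$1"; shift
  local out
  for _ in $(seq "$tries"); do
    out="$("$@" 2>/dev/null)"
    if [ -n "$out" ]; then
      printf '%s\n' "$out"
      return 0
    fi
    sleep "${POLL_DELAY:-5}"
  done
  return 1
}

# Wait up to ~60s (12 tries x 5s) for the first branch to appear:
#   poll_until_nonempty 12 sh -c \
#     'forge_api GET "/repos/disinto-admin/disinto/branches" | jq -r ".[].name"'
```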
## Cutover scenario (Nomad migration)
At cutover to the Nomad box:
1. Stand up fresh Forgejo on the Nomad cluster (empty instance).
2. Create the `disinto-admin` org via `disinto init` or API.
3. Run `mirror_pull_register` pointing at the Codeberg source.
4. Wait for sync to complete (check branches endpoint).
5. Once content is confirmed, proceed with `disinto init` against the
now-populated repo — all subsequent `mirror_push` calls will push
to any additional mirrors configured in `projects/*.toml`.
No manual `git clone` + `git push` step is needed. The Forgejo pull-mirror
handles the entire transfer.


@ -0,0 +1,172 @@
# formulas/collect-engagement.toml — Collect website engagement data
#
# Daily formula: SSH into Caddy host, fetch access log, parse locally,
# commit evidence JSON to ops repo via Forgejo API.
#
# Triggered by cron in the edge container entrypoint (daily at 23:50 UTC).
# Design choices from #426: Q1=A (fetch raw log, process locally),
# Q2=A (direct cron in edge container), Q3=B (dedicated purpose-limited SSH key).
#
# Steps: fetch-log → parse-engagement → commit-evidence
name = "collect-engagement"
description = "SSH-fetch Caddy access log, parse engagement metrics, commit evidence"
version = 1
[context]
files = ["AGENTS.md"]
[vars.caddy_host]
description = "SSH host for the Caddy server"
required = false
default = "${CADDY_SSH_HOST:-disinto.ai}"
[vars.caddy_user]
description = "SSH user on the Caddy host"
required = false
default = "${CADDY_SSH_USER:-debian}"
[vars.caddy_log_path]
description = "Path to Caddy access log on the remote host"
required = false
default = "${CADDY_ACCESS_LOG:-/var/log/caddy/access.log}"
[vars.local_log_path]
description = "Local path to store fetched access log"
required = false
default = "/tmp/caddy-access-log-fetch.log"
[vars.evidence_dir]
description = "Evidence output directory in the ops repo"
required = false
default = "evidence/engagement"
# ── Step 1: SSH fetch ────────────────────────────────────────────────
[[steps]]
id = "fetch-log"
title = "Fetch Caddy access log from remote host via SSH"
description = """
Fetch today's Caddy access log segment from the remote host using SCP.
The SSH key is read from the environment (CADDY_SSH_KEY), which is
decrypted from secrets/CADDY_SSH_KEY.enc by the edge entrypoint. It is NEVER hardcoded.
1. Write the SSH key to a temporary file with restricted permissions:
_ssh_key_file=$(mktemp)
trap 'rm -f "$_ssh_key_file"' EXIT
printf '%s\n' "$CADDY_SSH_KEY" > "$_ssh_key_file"
chmod 0600 "$_ssh_key_file"
2. Verify connectivity:
ssh -i "$_ssh_key_file" -o StrictHostKeyChecking=accept-new \
-o ConnectTimeout=10 -o BatchMode=yes \
{{caddy_user}}@{{caddy_host}} 'echo ok'
3. Fetch the access log via scp:
scp -i "$_ssh_key_file" -o StrictHostKeyChecking=accept-new \
-o ConnectTimeout=10 -o BatchMode=yes \
"{{caddy_user}}@{{caddy_host}}:{{caddy_log_path}}" \
"{{local_log_path}}"
4. Verify the fetched file is non-empty:
if [ ! -s "{{local_log_path}}" ]; then
echo "WARNING: fetched access log is empty — site may have no traffic"
else
echo "Fetched $(wc -l < "{{local_log_path}}") lines from {{caddy_host}}"
fi
5. Clean up the temporary key file:
rm -f "$_ssh_key_file"
"""
# ── Step 2: Parse engagement ─────────────────────────────────────────
[[steps]]
id = "parse-engagement"
title = "Run collect-engagement.sh against the local log copy"
description = """
Run the engagement parser against the locally fetched access log.
1. Set CADDY_ACCESS_LOG to point at the local copy so collect-engagement.sh
reads from it instead of the default path:
export CADDY_ACCESS_LOG="{{local_log_path}}"
2. Run the parser:
bash "$FACTORY_ROOT/site/collect-engagement.sh"
3. Verify the evidence JSON was written:
REPORT_DATE=$(date -u +%Y-%m-%d)
EVIDENCE_FILE="${OPS_REPO_ROOT}/{{evidence_dir}}/${REPORT_DATE}.json"
if [ -f "$EVIDENCE_FILE" ]; then
echo "Evidence written: $EVIDENCE_FILE"
jq . "$EVIDENCE_FILE"
else
echo "ERROR: evidence file not found at $EVIDENCE_FILE"
exit 1
fi
4. Clean up the fetched log:
rm -f "{{local_log_path}}"
"""
needs = ["fetch-log"]
# ── Step 3: Commit evidence ──────────────────────────────────────────
[[steps]]
id = "commit-evidence"
title = "Commit evidence JSON to ops repo via Forgejo API"
description = """
Commit the dated evidence JSON to the ops repo so the planner can
consume it during gap analysis.
1. Read the evidence file:
REPORT_DATE=$(date -u +%Y-%m-%d)
EVIDENCE_FILE="${OPS_REPO_ROOT}/{{evidence_dir}}/${REPORT_DATE}.json"
CONTENT=$(base64 < "$EVIDENCE_FILE")
2. Check if the file already exists in the ops repo (update vs create):
OPS_OWNER="${OPS_FORGE_OWNER:-${FORGE_REPO%%/*}}"
OPS_REPO="${OPS_FORGE_REPO:-${PROJECT_NAME:-disinto}-ops}"
FILE_PATH="{{evidence_dir}}/${REPORT_DATE}.json"
EXISTING=$(curl -sf \
-H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_URL}/api/v1/repos/${OPS_OWNER}/${OPS_REPO}/contents/${FILE_PATH}" \
2>/dev/null || echo "")
3. Create or update the file via Forgejo API:
if [ -n "$EXISTING" ] && printf '%s' "$EXISTING" | jq -e '.sha' >/dev/null 2>&1; then
# Update existing file
SHA=$(printf '%s' "$EXISTING" | jq -r '.sha')
curl -sf -X PUT \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_URL}/api/v1/repos/${OPS_OWNER}/${OPS_REPO}/contents/${FILE_PATH}" \
-d "$(jq -nc --arg content "$CONTENT" --arg sha "$SHA" --arg msg "evidence: engagement ${REPORT_DATE}" \
'{message: $msg, content: $content, sha: $sha}')"
echo "Updated existing evidence file in ops repo"
else
# Create new file
curl -sf -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_URL}/api/v1/repos/${OPS_OWNER}/${OPS_REPO}/contents/${FILE_PATH}" \
-d "$(jq -nc --arg content "$CONTENT" --arg msg "evidence: engagement ${REPORT_DATE}" \
'{message: $msg, content: $content}')"
echo "Created evidence file in ops repo"
fi
4. Verify the commit landed:
VERIFY=$(curl -sf \
-H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_URL}/api/v1/repos/${OPS_OWNER}/${OPS_REPO}/contents/${FILE_PATH}" \
| jq -r '.name // empty')
if [ "$VERIFY" = "${REPORT_DATE}.json" ]; then
echo "Evidence committed: ${FILE_PATH}"
else
echo "ERROR: could not verify evidence commit"
exit 1
fi
"""
needs = ["parse-engagement"]


@ -0,0 +1,161 @@
# formulas/rent-a-human-caddy-ssh.toml — Provision SSH key for Caddy log collection
#
# "Rent a Human" — walk the operator through provisioning a purpose-limited
# SSH keypair so collect-engagement.sh can fetch Caddy access logs remotely.
#
# The key uses a `command=` restriction so it can ONLY cat the access log.
# No interactive shell, no port forwarding, no agent forwarding.
#
# Parent vision issue: #426
# Sprint: website-observability-wire-up (ops PR #10)
# Consumed by: site/collect-engagement.sh (issue #745)
name = "rent-a-human-caddy-ssh"
description = "Provision a purpose-limited SSH keypair for remote Caddy log collection"
version = 1
# ── Step 1: Generate keypair ─────────────────────────────────────────────────
[[steps]]
id = "generate-keypair"
title = "Generate a dedicated ed25519 keypair"
description = """
Generate a purpose-limited SSH keypair for Caddy log collection.
Run on your local machine (NOT the Caddy host):
```
ssh-keygen -t ed25519 -f caddy-collect -N '' -C 'disinto-collect-engagement'
```
This produces two files:
- caddy-collect (private key goes into the vault)
- caddy-collect.pub (public key goes onto the Caddy host)
Do NOT set a passphrase (-N ''); the factory runs unattended.
"""
# ── Step 2: Install public key on Caddy host ─────────────────────────────────
[[steps]]
id = "install-public-key"
title = "Install the public key on the Caddy host with command= restriction"
needs = ["generate-keypair"]
description = """
Install the public key on the Caddy host with a strict command= restriction
so this key can ONLY read the access log.
1. SSH into the Caddy host as the user who owns /var/log/caddy/access.log.
2. Open (or create) ~/.ssh/authorized_keys:
mkdir -p ~/.ssh && chmod 700 ~/.ssh
nano ~/.ssh/authorized_keys
3. Add this line (all on ONE line; do not wrap):
command="cat /var/log/caddy/access.log",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA... disinto-collect-engagement
Replace "AAAA..." with the contents of caddy-collect.pub.
To build the line automatically:
echo "command=\"cat /var/log/caddy/access.log\",no-port-forwarding,no-X11-forwarding,no-agent-forwarding $(cat caddy-collect.pub)"
4. Set permissions:
chmod 600 ~/.ssh/authorized_keys
What the restrictions do:
- command="cat /var/log/caddy/access.log"
Forces this key to only execute `cat /var/log/caddy/access.log`,
regardless of what the client requests.
- no-port-forwarding: blocks SSH tunnels
- no-X11-forwarding: blocks X11 forwarding
- no-agent-forwarding: blocks agent forwarding
If the access log is at a different path, update the command= restriction
AND set CADDY_ACCESS_LOG in the factory environment to match.
"""
# ── Step 3: Add private key to vault secrets ─────────────────────────────────
[[steps]]
id = "store-private-key"
title = "Add the private key as CADDY_SSH_KEY secret"
needs = ["generate-keypair"]
description = """
Store the private key in the factory's encrypted secrets store.
1. Add the private key using `disinto secrets add`:
cat caddy-collect | disinto secrets add CADDY_SSH_KEY
This encrypts the key with age and stores it as secrets/CADDY_SSH_KEY.enc.
2. IMPORTANT: After storing, securely delete the local private key file:
shred -u caddy-collect 2>/dev/null || rm -f caddy-collect
rm -f caddy-collect.pub
The public key is already installed on the Caddy host; the private key
now lives only in secrets/CADDY_SSH_KEY.enc.
Never commit the private key to any git repository.
"""
# ── Step 4: Configure Caddy host address ─────────────────────────────────────
[[steps]]
id = "store-caddy-host"
title = "Add the Caddy host details as secrets"
needs = ["install-public-key"]
description = """
Store the Caddy connection details so collect-engagement.sh knows
where to SSH.
1. Add each value using `disinto secrets add`:
echo 'disinto.ai' | disinto secrets add CADDY_SSH_HOST
echo 'debian' | disinto secrets add CADDY_SSH_USER
echo '/var/log/caddy/access.log' | disinto secrets add CADDY_ACCESS_LOG
Replace values with the actual SSH host, user, and log path for your setup.
"""
# ── Step 5: Test the connection ──────────────────────────────────────────────
[[steps]]
id = "test-connection"
title = "Verify the SSH key works and returns the access log"
needs = ["install-public-key", "store-private-key", "store-caddy-host"]
description = """
Test the end-to-end connection before the factory tries to use it.
1. From the factory host (or anywhere with the private key), run:
ssh -i caddy-collect -o StrictHostKeyChecking=accept-new user@caddy-host
Expected behavior:
- Outputs the contents of /var/log/caddy/access.log
- Disconnects immediately (command= restriction forces this)
If you already shredded the local key, recover it from the environment
(the edge entrypoint exports the decrypted plaintext as CADDY_SSH_KEY):
printf '%s\n' "$CADDY_SSH_KEY" > /tmp/caddy-collect-test
chmod 600 /tmp/caddy-collect-test
ssh -i /tmp/caddy-collect-test -o StrictHostKeyChecking=accept-new user@caddy-host
rm -f /tmp/caddy-collect-test
2. Verify the output is Caddy structured JSON (one JSON object per line):
ssh -i /tmp/caddy-collect-test user@caddy-host | head -1 | jq .
You should see fields like: ts, request, status, duration.
3. If the connection fails:
- Permission denied: check authorized_keys format (must be one line)
- Connection refused: check sshd is running on the Caddy host
- Empty output: check /var/log/caddy/access.log exists and is readable
by the SSH user
- "jq: error": Caddy may be using Combined Log Format instead of
structured JSON; check Caddy's log configuration
4. Once verified, the factory's collect-engagement.sh can use this key
to fetch logs remotely via:
ssh -i <decoded-key-path> "${CADDY_SSH_USER}@${CADDY_SSH_HOST}"
"""


@ -213,7 +213,7 @@ should file a vault item instead of executing directly.
**Exceptions** (do NOT flag these):
- Code inside `vault/`: the vault system itself is allowed to handle secrets
- References in comments or documentation explaining the architecture
- `bin/disinto` setup commands that manage `.env.vault.enc` and the `run` subcommand
- `bin/disinto` setup commands that manage `secrets/*.enc` and the `run` subcommand
- Local operations (git push to forge, forge API calls with `FORGE_TOKEN`)
## 6. Re-review (if previous review is provided)


@ -16,7 +16,14 @@
# - Bash creates the ops PR with pitch content
# - Bash posts the ACCEPT/REJECT footer comment
# Step 3: Sprint PR creation with questions (issue #101) (one PR per pitch)
# Step 4: Answer parsing + sub-issue filing (issue #102)
# Step 4: Post-merge sub-issue filing via filer-bot (#764)
#
# Permission model (#764):
# architect-bot: READ-ONLY on project repo (GET issues/PRs/labels for context).
# Cannot POST/PUT/PATCH/DELETE any project-repo resource.
# Write access ONLY on ops repo (branches, PRs, comments).
# filer-bot: issues:write on project repo. Files sub-issues from merged sprint
# PRs via ops-filer pipeline. Adds in-progress label to vision issues.
#
# Architecture:
# - Bash script (architect-run.sh) handles ALL state management
@ -146,15 +153,32 @@ For each issue in ARCHITECT_TARGET_ISSUES, bash performs:
## Recommendation
<architect's assessment: worth it / defer / alternative approach>
## Sub-issues
<!-- filer:begin -->
- id: <kebab-case-id>
title: "vision(#N): <concise sub-issue title>"
labels: [backlog]
depends_on: []
body: |
## Goal
<what this sub-issue accomplishes>
## Acceptance criteria
- [ ] <criterion>
<!-- filer:end -->
IMPORTANT: Do NOT include design forks or questions yet. The pitch is a go/no-go
decision for the human. Questions come only after acceptance.
The ## Sub-issues block is parsed by the filer-bot pipeline after sprint PR merge.
Each sub-issue between filer:begin/end markers becomes a Forgejo issue on the
project repo. The filer appends a decomposed-from marker to each body automatically.
4. Bash creates PR:
- Create branch: architect/sprint-{pitch-number}
- Write sprint spec to sprints/{sprint-slug}.md
- Create PR with pitch content as body
- Post footer comment: "Reply ACCEPT to proceed with design questions, or REJECT: <reason> to decline."
- Add in-progress label to vision issue
- NOTE: in-progress label is added by filer-bot after sprint PR merge (#764)
Output:
- One PR per vision issue (up to 3 per run)
@ -185,6 +209,9 @@ This ensures approved PRs don't sit indefinitely without design conversation.
Architecture:
- Bash creates PRs during stateless pitch generation (step 2)
- Model has no role in PR creation: no Forgejo API access
- architect-bot is READ-ONLY on the project repo (#764) — all project-repo
writes (sub-issue filing, in-progress label) are handled by filer-bot
via the ops-filer pipeline after sprint PR merge
- This step describes the PR format for reference
PR Format (created by bash):
@ -201,64 +228,29 @@ PR Format (created by bash):
- Head: architect/sprint-{pitch-number}
- Footer comment: "Reply ACCEPT to proceed with design questions, or REJECT: <reason> to decline."
4. Add in-progress label to vision issue:
- Look up label ID: GET /repos/{owner}/{repo}/labels
- Add label: POST /repos/{owner}/{repo}/issues/{issue_number}/labels
After creating all PRs, signal PHASE:done.
NOTE: in-progress label on the vision issue is added by filer-bot after sprint PR merge (#764).
## Forgejo API Reference
## Forgejo API Reference (ops repo only)
All operations use the Forgejo API with Authorization: token ${FORGE_TOKEN} header.
All operations use the ops repo Forgejo API with `Authorization: token ${FORGE_TOKEN}` header.
architect-bot is READ-ONLY on the project repo; it cannot POST/PUT/PATCH/DELETE project-repo resources (#764).
### Create branch
### Create branch (ops repo)
```
POST /repos/{owner}/{repo}/branches
POST /repos/{owner}/{repo-ops}/branches
Body: {"new_branch_name": "architect/<sprint-slug>", "old_branch_name": "main"}
```
### Create/update file
### Create/update file (ops repo)
```
PUT /repos/{owner}/{repo}/contents/<path>
PUT /repos/{owner}/{repo-ops}/contents/<path>
Body: {"message": "sprint: add <sprint-slug>.md", "content": "<base64-encoded-content>", "branch": "architect/<sprint-slug>"}
```
### Create PR
### Create PR (ops repo)
```
POST /repos/{owner}/{repo}/pulls
Body: {"title": "architect: <sprint summary>", "body": "<markdown-text>", "head": "architect/<sprint-slug>", "base": "main"}
```
**Important: PR body format**
- The body field must contain plain markdown text (the raw content from the model)
- Do NOT JSON-encode or escape the body; pass it as a JSON string value
- Newlines and markdown formatting (headings, lists, etc.) must be preserved as-is
### Add label to issue
```
POST /repos/{owner}/{repo}/issues/{index}/labels
Body: {"labels": [<label-id>]}
```
## Forgejo API Reference
All operations use the Forgejo API with `Authorization: token ${FORGE_TOKEN}` header.
### Create branch
```
POST /repos/{owner}/{repo}/branches
Body: {"new_branch_name": "architect/<sprint-slug>", "old_branch_name": "main"}
```
### Create/update file
```
PUT /repos/{owner}/{repo}/contents/<path>
Body: {"message": "sprint: add <sprint-slug>.md", "content": "<base64-encoded-content>", "branch": "architect/<sprint-slug>"}
```
### Create PR
```
POST /repos/{owner}/{repo}/pulls
POST /repos/{owner}/{repo-ops}/pulls
Body: {"title": "architect: <sprint summary>", "body": "<markdown-text>", "head": "architect/<sprint-slug>", "base": "main"}
```
@ -267,30 +259,22 @@ Body: {"title": "architect: <sprint summary>", "body": "<markdown-text>", "head"
- Do NOT JSON-encode or escape the body; pass it as a JSON string value
- Newlines and markdown formatting (headings, lists, etc.) must be preserved as-is
### Close PR
### Close PR (ops repo)
```
PATCH /repos/{owner}/{repo}/pulls/{index}
PATCH /repos/{owner}/{repo-ops}/pulls/{index}
Body: {"state": "closed"}
```
### Delete branch
### Delete branch (ops repo)
```
DELETE /repos/{owner}/{repo}/git/branches/<branch-name>
DELETE /repos/{owner}/{repo-ops}/git/branches/<branch-name>
```
### Get labels (look up label IDs by name)
### Read-only on project repo (context gathering)
```
GET /repos/{owner}/{repo}/labels
```
### Add label to issue (for in-progress on vision issue)
```
POST /repos/{owner}/{repo}/issues/{index}/labels
Body: {"labels": [<label-id>]}
```
### Remove label from issue (for in-progress removal on REJECT)
```
DELETE /repos/{owner}/{repo}/issues/{index}/labels/{label-id}
GET /repos/{owner}/{repo}/issues - list issues
GET /repos/{owner}/{repo}/issues/{number} - read issue details
GET /repos/{owner}/{repo}/labels - list labels
GET /repos/{owner}/{repo}/pulls - list PRs
```
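Listing endpoints like these are paginated. A hedged sketch of the page-walking loop (the `forge_get_all` name here is hypothetical; in the factory this role is played by `forge_api_all()` from `lib/env.sh`):

```shell
# Hypothetical paginated GET against the read-only endpoints above.
# Assumes FORGE_API, FORGE_REPO, and FORGE_TOKEN are exported by env.sh.
forge_get_all() {
  local path="$1" page=1 chunk
  while :; do
    chunk=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
      "${FORGE_API}/repos/${FORGE_REPO}${path}?limit=50&page=${page}") || break
    # Stop on an empty or unparseable page instead of crashing,
    # mirroring the graceful-empty behavior documented for forge_api_all().
    [ "$(printf '%s' "$chunk" | jq 'length' 2>/dev/null || echo 0)" -gt 0 ] || break
    printf '%s\n' "$chunk" | jq -c '.[]'
    page=$((page + 1))
  done
}
# Usage: forge_get_all "/issues" | jq -r .title
```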
"""


@ -177,7 +177,7 @@ DUST (trivial — single-line edit, rename, comment, style, whitespace):
VAULT (needs human decision or external resource):
File a vault procurement item using vault_request():
source "$(dirname "$0")/../lib/vault.sh"
source "$(dirname "$0")/../lib/action-vault.sh"
TOML_CONTENT="# Vault action: <action_id>
context = \"<description of what decision/resource is needed>\"
unblocks = [\"#NNN\"]


@ -243,7 +243,7 @@ needs = ["preflight"]
[[steps]]
id = "commit-ops-changes"
title = "Write tree, memory, and journal; commit and push"
title = "Write tree, memory, and journal; commit and push branch"
description = """
### 1. Write prerequisite tree
Write to: $OPS_REPO_ROOT/prerequisites.md
@ -256,14 +256,16 @@ If (count - N) >= 5 or planner-memory.md missing, write to:
Include: run counter marker, date, constraint focus, patterns, direction.
Keep under 100 lines. Replace entire file.
### 3. Commit ops repo changes
Commit the ops repo changes (prerequisites, memory, vault items):
### 3. Commit ops repo changes to the planner branch
Commit the ops repo changes (prerequisites, memory, vault items) and push the
branch. Do NOT push directly to $PRIMARY_BRANCH; planner-run.sh will create a
PR and walk it to merge via review-bot.
cd "$OPS_REPO_ROOT"
git add prerequisites.md knowledge/planner-memory.md vault/pending/
git add -u
if ! git diff --cached --quiet; then
git commit -m "chore: planner run $(date -u +%Y-%m-%d)"
git push origin "$PRIMARY_BRANCH"
git push origin HEAD
fi
cd "$PROJECT_REPO_ROOT"


@ -125,8 +125,8 @@ For each weakness you identify, choose one:
The prediction explains the theory. The vault PR triggers the proof
after human approval. When the planner runs next, evidence is already there.
Vault dispatch (requires lib/vault.sh):
source "$PROJECT_REPO_ROOT/lib/vault.sh"
Vault dispatch (requires lib/action-vault.sh):
source "$PROJECT_REPO_ROOT/lib/action-vault.sh"
TOML_CONTENT="id = \"predict-<prediction_number>-<formula>\"
context = \"Test prediction #<prediction_number>: <theory summary> — focus: <specific test>\"
@ -154,7 +154,7 @@ tea is pre-configured with login "$TEA_LOGIN" and repo "$FORGE_REPO".
--title "<title>" --body "<body>" --labels "prediction/unreviewed"
2. Dispatch formula via vault (if exploiting):
source "$PROJECT_REPO_ROOT/lib/vault.sh"
source "$PROJECT_REPO_ROOT/lib/action-vault.sh"
PR_NUM=$(vault_request "predict-NNN-<formula>" "$TOML_CONTENT")
# See EXPLOIT section above for TOML_CONTENT format


@ -1,4 +1,4 @@
<!-- last-reviewed: c4ca1e930d7be3f95060971ce4fa949dab2f76e7 -->
<!-- last-reviewed: 18190874cae869527f675f717423ded735f2c555 -->
# Gardener Agent
**Role**: Backlog grooming — detect duplicate issues, missing acceptance
@ -32,7 +32,7 @@ the gardener runs as part of the polling loop alongside the planner, predictor,
PR, reviewed alongside AGENTS.md changes, executed by gardener-run.sh after merge.
**Environment variables consumed**:
- `FORGE_TOKEN`, `FORGE_GARDENER_TOKEN` (falls back to FORGE_TOKEN), `FORGE_REPO`, `FORGE_API`, `PROJECT_NAME`, `PROJECT_REPO_ROOT`
- `FORGE_TOKEN`, `FORGE_GARDENER_TOKEN` (falls back to FORGE_TOKEN), `FORGE_REPO`, `FORGE_API`, `PROJECT_NAME`, `PROJECT_REPO_ROOT`. `FORGE_TOKEN_OVERRIDE` is set from `$FORGE_GARDENER_TOKEN` before sourcing env.sh so the gardener-bot identity survives re-sourcing (#762).
- `PRIMARY_BRANCH`, `CLAUDE_MODEL` (set to sonnet by gardener-run.sh)
**Lifecycle**: gardener-run.sh (invoked by polling loop every 6h, `check_active gardener`) →


@ -26,10 +26,11 @@ FACTORY_ROOT="$(dirname "$SCRIPT_DIR")"
# Accept project config from argument; default to disinto
export PROJECT_TOML="${1:-$FACTORY_ROOT/projects/disinto.toml}"
# Set override BEFORE sourcing env.sh so it survives any later re-source of
# env.sh from nested shells / claude -p tools (#762, #747)
export FORGE_TOKEN_OVERRIDE="${FORGE_GARDENER_TOKEN:-}"
# shellcheck source=../lib/env.sh
source "$FACTORY_ROOT/lib/env.sh"
# Use gardener-bot's own Forgejo identity (#747)
FORGE_TOKEN="${FORGE_GARDENER_TOKEN:-${FORGE_TOKEN}}"
# shellcheck source=../lib/formula-session.sh
source "$FACTORY_ROOT/lib/formula-session.sh"
# shellcheck source=../lib/worktree.sh


@ -1,27 +1,62 @@
[
{
"action": "edit_body",
"issue": 784,
"body": "Flagged by AI reviewer in PR #783.\n\n## Problem\n\n`_regen_file()` (added in PR #783, `bin/disinto` ~line 1424) moves the existing target file to a temp stash before calling the generator:\n\n```bash\nmv \"$target\" \"$stashed\"\n\"$generator\" \"$@\"\n```\n\nThe script runs under `set -euo pipefail`. If the generator exits non-zero, bash exits immediately and the original file remains stranded at `${target}.stash.XXXXXX` (never restored). The target file no longer exists, and `docker compose up` is never reached. Recovery requires the operator to manually locate and rename the hidden stash file.\n\n## Fix\n\nAdd an ERR trap inside `_regen_file` to restore the stash on failure, e.g.:\n```bash\n\"$generator\" \"$@\" || { mv \"$stashed\" \"$target\"; return 1; }\n```\n\n---\n*Auto-created from AI review*\n\n## Acceptance criteria\n\n- [ ] If the generator exits non-zero, the original target file is restored from the stash (not stranded at the temp path)\n- [ ] `_regen_file` still removes the stash file after a successful generator run\n- [ ] `docker compose up` is reached when the generator succeeds\n- [ ] ShellCheck passes on `bin/disinto`\n\n## Affected files\n\n- `bin/disinto` — `_regen_file()` function (~line 1424)\n"
},
{
"action": "add_label",
"issue": 784,
"label": "backlog"
},
{
"action": "remove_label",
"issue": 742,
"issue": 773,
"label": "blocked"
},
{
"action": "add_label",
"issue": 742,
"issue": 773,
"label": "backlog"
},
{
"action": "comment",
"issue": 742,
"body": "Dev-agent failed to push on previous attempt (exit: no_push). Root cause is well-specified in the issue body. Re-entering backlog for retry."
"issue": 772,
"body": "All child issues have been resolved:\n- #768 (edge restart policy) — closed\n- #769 (agents-llama generator service) — closed\n- #770 (disinto up regenerate) — closed\n- #771 (deprecate docker/Caddyfile) — closed\n\nClosing tracker as all decomposed work is complete."
},
{
"action": "close",
"issue": 772,
"reason": "all child issues 768-771 closed"
},
{
"action": "edit_body",
"issue": 712,
"body": "## Goal\n\nLet `disinto-chat` perform scoped write actions against the factory — specifically: trigger a Woodpecker CI run, create a Forgejo issue, create a Forgejo PR — via explicit backend endpoints. The UI surfaces these as buttons the user clicks from a chat turn that proposes an action. The model never holds API tokens directly.\n\n## Why\n\n- #623 lists these escalations as the difference between \"chat that talks about the project\" and \"chat that moves the project forward\".\n- Routing through explicit backend endpoints (instead of giving the sandboxed claude process API tokens) keeps the trust model tight: the *user* authorises each action, not the model.\n\n## Scope\n\n### Files to touch\n\n- `docker/chat/server.{py,go}` — new authenticated endpoints (reuse #708 / #709 session check):\n - `POST /chat/action/ci-run` — body `{repo, branch}` → calls Woodpecker API with `WOODPECKER_TOKEN` (already in `.env` from existing factory setup) to trigger a pipeline.\n - `POST /chat/action/issue-create` — body `{title, body, labels}` → calls Forgejo API `/repos/<owner>/<repo>/issues` with `FORGE_TOKEN`.\n - `POST /chat/action/pr-create` — body `{head, base, title, body}` → calls `/repos/<owner>/<repo>/pulls`.\n - All actions record to #710's NDJSON history as `{role: \"action\", ...}` lines.\n- `docker/chat/ui/index.html` — small HTMX pattern: when claude's response contains a marker like `<action type=\"issue-create\">{...}</action>`, render a clickable button below the message; clicking POSTs to `/chat/action/<type>` with the payload.\n- `lib/generators.sh` chat env: pass `WOODPECKER_TOKEN`, `FORGE_TOKEN`, `FORGE_URL`, `FORGE_OWNER`, `FORGE_REPO`.\n\n### Out of scope\n\n- Destructive actions (branch delete, force push, secret rotation) — deliberately excluded.\n- Multi-step workflows / approval chains.\n- Arbitrary code execution in the chat container (that is what the agents exist for).\n\n## Acceptance\n\n- [ ] A chat turn that emits an `<action type=\"issue-create\">{...}</action>` block renders a button; clicking it creates an issue on Forgejo, visible via the API.\n- [ ] CI-trigger action creates a Woodpecker pipeline that can be seen in the CI UI.\n- [ ] PR-create action produces a Forgejo PR with the specified head / base.\n- [ ] All three actions are logged into the #710 history file with role `action` and the response from the API call.\n- [ ] Unauthenticated requests to `/chat/action/*` return 401 (inherits #708 gate).\n\n## Depends on\n\n- #708 (OAuth gate — actions are authorised by the logged-in user).\n- #742 (CI smoke test fix — #712 fails CI until agent-smoke.sh lib sourcing is stabilised)\n- #710 (history — actions need to be logged alongside chat turns).\n\n## Notes\n\n- Forgejo API auth: the factory's `FORGE_TOKEN` is a long-lived admin token. For MVP, reuse it; a follow-up issue can scope it down to per-user Forgejo tokens derived from the OAuth flow.\n- Woodpecker API is at `http://woodpecker:8000/api/...`, reachable via the compose network — no need to go through the edge container.\n- The `<action>` marker is deliberately simple markup the model can emit in its response text. Do not implement tool-calling protocol; do not spin up an MCP server.\n\n## Boundaries for dev-agent\n\n- Do not give the claude subprocess direct API tokens. The chat backend holds them; the model only emits action markers the user clicks.\n- Do not add destructive actions (delete, force-push). Additive only.\n- Do not invent a new markup format beyond `<action type=\"...\">{JSON}</action>`.\n- Parent vision: #623."
"issue": 778,
"body": "## Problem\n\n`formulas/rent-a-human-caddy-ssh.toml` step 3 tells the operator:\n\n```\necho \"CADDY_SSH_KEY=$(base64 -w0 caddy-collect)\" >> .env.vault.enc\n```\n\n**You cannot append plaintext to a sops-encrypted file.** The append silently corrupts `.env.vault.enc` — subsequent `sops -d` fails, all vault secrets become unrecoverable. Any operator who followed the docs verbatim has broken their vault.\n\nSteps 4 (`CADDY_HOST`) and 5 (`CADDY_ACCESS_LOG`) have the same bug.\n\n## Proposed fix\n\nRewrite the `>>` steps to use the stdin-piped `disinto secrets add` (from issue A):\n\n```\ncat caddy-collect | disinto secrets add CADDY_SSH_KEY\necho '159.89.14.107' | disinto secrets add CADDY_SSH_HOST\necho 'debian' | disinto secrets add CADDY_SSH_USER\necho '/var/log/caddy/access.log' | disinto secrets add CADDY_ACCESS_LOG\n```\n\nAlso:\n- Remove the `base64 -w0` step — the new `secrets add` stores multi-line keys verbatim.\n- Remove the `shred -u caddy-collect` step from the happy path — let the operator keep the backup until they have verified the edge container picks it up.\n- Add a recovery note: operators with a corrupted vault from the old docs must `rm .env.vault.enc` (or `migrate-from-vault` if issue B landed) before re-running.\n\n## Context\n\n- Parent: sprint PR `disinto-admin/disinto-ops#10`.\n- Depends on: #776 (piped `secrets add`) — now closed.\n- Soft-depends on: #777 (if landed, drop all `.env.vault*` references entirely).\n\n## Acceptance criteria\n\n- [ ] Formula runs end-to-end without touching `.env.vault.enc` or `.env.vault` by hand\n- [ ] Re-running is idempotent (upsert via `disinto secrets add -f`)\n- [ ] Edge container starts cleanly with the imported secrets and the daily collect-engagement cron fires without `\"CADDY_SSH_KEY not set, skipping\"`\n\n## Affected files\n\n- `formulas/rent-a-human-caddy-ssh.toml` — replace `>> .env.vault.enc` steps with `disinto secrets add` calls\n"
},
{
"action": "remove_label",
"issue": 778,
"label": "blocked"
},
{
"action": "add_label",
"issue": 778,
"label": "backlog"
},
{
"action": "edit_body",
"issue": 707,
"body": "## Goal\n\nGive `disinto-chat` its own Claude identity mount so its OAuth refresh races cannot corrupt the factory agents' shared `~/.claude` credentials. Default to a separate `~/.claude-chat/` on the host; support `ANTHROPIC_API_KEY` as a fallback that skips OAuth entirely.\n\n## Why\n\n- #623 root-caused this: Claude Code's internal refresh lock in `~/.claude.lock` operates outside bind-mounted directories, so two containers sharing `~/.claude` can race during token refresh and invalidate each other. The factory has already had OAuth expiry incidents traced to multiple agents sharing credentials.\n- Scoping chat to its own identity dir means chat can be logged in as a different Anthropic account, or pinned to an API key, without touching agent credentials.\n\n## Scope\n\n### Files to touch\n\n- `lib/generators.sh` chat service block (from #705):\n - Replace the throwaway named volume with `${CHAT_CLAUDE_DIR:-${HOME}/.claude-chat}:/home/chat/.claude-chat`.\n - Env: `CLAUDE_CONFIG_DIR=/home/chat/.claude-chat/config`, `CLAUDE_CREDENTIALS_DIR=/home/chat/.claude-chat/config/credentials`.\n - Conditional: if `ANTHROPIC_API_KEY` is set in `.env`, pass it through and **do not** mount `~/.claude-chat` at all (no credentials on disk in that mode).\n- `bin/disinto disinto_init()` — after #620's admin password prompt, add an optional prompt: `Use separate Anthropic identity for chat? (y/N)`. On yes, create `~/.claude-chat/` and invoke `claude login` in a subshell with `CLAUDE_CONFIG_DIR=~/.claude-chat/config`.\n- `lib/claude-config.sh` — factor out the existing `~/.claude` setup logic so a non-default `CLAUDE_CONFIG_DIR` is a first-class parameter. If it is already parameterised, just document it; if not, extract a helper `setup_claude_dir <dir>` and have the existing path call it with the default dir.\n- `docker/chat/Dockerfile` — declare `VOLUME /home/chat/.claude-chat`, set owner to the non-root chat user introduced in #706.\n\n### Out of scope\n\n- Cross-session lock coherence for multiple concurrent chat containers (single-chat-container assumption is fine for MVP).\n- Anthropic team / workspace support — single identity is enough.\n\n## Acceptance\n\n- [ ] Fresh `disinto init` with \"use separate chat identity\" answered yes creates `~/.claude-chat/` and logs in successfully.\n- [ ] With `ANTHROPIC_API_KEY=sk-ant-...` set in `.env`, chat starts without any `~/.claude-chat` mount (verified via `docker inspect disinto-chat`) and successfully completes a test prompt.\n- [ ] Running the factory agents AND chat simultaneously for 24h does not produce any OAuth refresh failures on either side (manual soak test — document result in PR).\n- [ ] `CLAUDE_CONFIG_DIR` and `CLAUDE_CREDENTIALS_DIR` inside the chat container resolve to `/home/chat/.claude-chat/config*`, not the shared factory path.\n\n## Depends on\n\n- #705 (chat scaffold).\n- #742 (CI smoke test fix — #707 fails CI until agent-smoke.sh lib sourcing is stabilised)\n- #620 (admin password prompt — same init flow this adds a step to).\n\n## Notes\n\n- The factory's existing shared mount is `/var/lib/disinto/claude-shared` (see `lib/generators.sh:113,327,381,426`). Chat must NOT use this path.\n- `flock(\"${HOME}/.claude/session.lock\")` logic mentioned in #623 is load-bearing, not redundant — do not \"simplify\" it.\n- Prefer the API-key path for anyone running the factory on shared hardware; call this out in README updates.\n\n## Boundaries for dev-agent\n\n- Do not try to make chat share `~/.claude` with the agents \"just for convenience\". The whole point of this chunk is the opposite.\n- Do not add a third claude config dir. One for agents, one for chat, done.\n- Do not refactor `lib/claude-config.sh` beyond extracting a parameterised helper if needed.\n- Parent vision: #623."
"issue": 777,
"body": "## Problem\n\nTwo parallel secret stores:\n\n1. `secrets/<NAME>.enc` — per-key, age-encrypted. Populated by `disinto secrets add`. **No runtime consumer today.** Only `disinto secrets show` ever decrypts these.\n2. `.env.vault.enc` — monolithic, sops/dotenv-encrypted. The only store actually loaded into containers (via `docker/edge/dispatcher.sh` → `sops -d --output-type dotenv`).\n\nTwo mental models, redundant subcommands (`edit-vault`, `show-vault`, `migrate-vault`), and today's `disinto secrets add` silently deposits secrets into a dead-letter directory. Operator runs the command, edge container still logs `CADDY_SSH_KEY not set, skipping` (docker/edge/entrypoint-edge.sh:207).\n\n## Proposed solution\n\nConsolidate on `secrets/<NAME>.enc` as THE store. One file per secret, granular, small surface.\n\n**1. Wire container dispatchers to load `secrets/*.enc` into env**\n\n- `docker/edge/dispatcher.sh` (and agent / ops dispatchers) decrypt declared secrets at startup and export them.\n- Granular per-secret — not a bulk dump.\n\n**2. Containers declare required secrets**\n\n- `secrets.required = [\"CADDY_SSH_KEY\", \"CADDY_SSH_HOST\", ...]` in the container's TOML, or equivalent in compose.\n- Missing required secret → **hard fail** with clear message. Replaces today's silent-skip branch at `entrypoint-edge.sh:207`.\n\n**3. Deprecate the monolithic vault**\n\n- Remove `.env.vault`, `.env.vault.enc`, and subcommands `edit-vault` / `show-vault` / `migrate-vault` from `bin/disinto`.\n- Remove sops round-trip from `docker/edge/dispatcher.sh` (lines 32-40 currently).\n\n**4. One-shot migration for existing operators**\n\n- `disinto secrets migrate-from-vault` splits an existing `.env.vault.enc` into `secrets/<KEY>.enc` files, verifies each, then removes the old vault on success.\n- Idempotent: safe to run multiple times.\n\n## Context\n\n- Parent: sprint PR `disinto-admin/disinto-ops#10`.\n- Depends on: #776 (`secrets add` must accept piped stdin before we can deprecate `edit-vault`) — now closed.\n- Rationale (operator quote): *\"containers should have option to load single secrets, granular. no 2 mental models, only 1 thing that works well and has small surface.\"*\n\n## Acceptance criteria\n\n- [ ] Edge container declares `secrets.required = [\"CADDY_SSH_KEY\", \"CADDY_SSH_HOST\", \"CADDY_SSH_USER\", \"CADDY_ACCESS_LOG\"]`; dispatcher exports them; `collect-engagement.sh` runs without additional env wiring\n- [ ] Container refuses to start when a required secret is missing (fail loudly, not skip silently)\n- [ ] `.env.vault*` files and all vault-specific subcommands removed from `bin/disinto` and all formulas / docs\n- [ ] `migrate-from-vault` converts an existing monolithic vault correctly (verified by round-trip test)\n- [ ] `disinto secrets` help text shows one store, four verbs: `add`, `show`, `remove`, `list`\n\n## Affected files\n\n- `bin/disinto` — remove `edit-vault`, `show-vault`, `migrate-vault` subcommands; add `migrate-from-vault`\n- `docker/edge/dispatcher.sh` — replace sops round-trip with per-secret age decryption (lines 32-40)\n- `docker/edge/entrypoint-edge.sh` — replace silent-skip at line 207 with hard fail on missing required secrets\n- `lib/vault.sh` — update or remove vault-env.sh wiring now that `.env.vault.enc` is deprecated\n"
},
{
"action": "remove_label",
"issue": 777,
"label": "blocked"
},
{
"action": "add_label",
"issue": 777,
"label": "backlog"
}
]


@ -1,4 +1,4 @@
<!-- last-reviewed: 4e53f508d9b36c60bd68ed5fc497fc8775fec79f -->
<!-- last-reviewed: 18190874cae869527f675f717423ded735f2c555 -->
# Shared Helpers (`lib/`)
All agents source `lib/env.sh` as their first action. Additional helpers are
@ -6,7 +6,7 @@ sourced as needed.
| File | What it provides | Sourced by |
|---|---|---|
| `lib/env.sh` | Loads `.env`, sets `FACTORY_ROOT`, exports project config (`FORGE_REPO`, `PROJECT_NAME`, etc.), defines `log()`, `forge_api()`, `forge_api_all()` (paginates all pages; accepts optional second TOKEN parameter, defaults to `$FORGE_TOKEN`; handles invalid/empty JSON responses gracefully — returns empty on parse error instead of crashing), `woodpecker_api()`, `wpdb()`, `memory_guard()` (skips agent if RAM < threshold). Auto-loads project TOML if `PROJECT_TOML` is set. Exports per-agent tokens (`FORGE_PLANNER_TOKEN`, `FORGE_GARDENER_TOKEN`, `FORGE_VAULT_TOKEN`, `FORGE_SUPERVISOR_TOKEN`, `FORGE_PREDICTOR_TOKEN`) each falls back to `$FORGE_TOKEN` if not set. **Vault-only token guard (AD-006)**: `unset GITHUB_TOKEN CLAWHUB_TOKEN` so agents never hold external-action tokens only the runner container receives them. **Container note**: when `DISINTO_CONTAINER=1`, `.env` is NOT re-sourced compose already injects env vars (including `FORGE_URL=http://forgejo:3000`) and re-sourcing would clobber them. **Save/restore scope (#364)**: only `FORGE_URL` is preserved across `.env` re-sourcing (compose injects `http://forgejo:3000`, `.env` has `http://localhost:3000`). `FORGE_TOKEN` is NOT preserved so refreshed tokens in `.env` take effect immediately. **Required env var**: `FORGE_PASS` bot password for git HTTP push (Forgejo 11.x rejects API tokens for `git push`, #361). **Hard preconditions (#674)**: `USER` and `HOME` must be exported by the entrypoint before sourcing. When `PROJECT_TOML` is set, `PROJECT_REPO_ROOT`, `PRIMARY_BRANCH`, and `OPS_REPO_ROOT` must also be set (by entrypoint or TOML). | Every agent |
| `lib/env.sh` | Loads `.env`, sets `FACTORY_ROOT`, exports project config (`FORGE_REPO`, `PROJECT_NAME`, etc.), defines `log()`, `forge_api()`, `forge_api_all()` (paginates all pages; accepts optional second TOKEN parameter, defaults to `$FORGE_TOKEN`; handles invalid/empty JSON responses gracefully — returns empty on parse error instead of crashing), `woodpecker_api()`, `wpdb()`, `memory_guard()` (skips agent if RAM < threshold), `load_secret()` (secret-source abstraction; see below). Auto-loads project TOML if `PROJECT_TOML` is set. Exports per-agent tokens (`FORGE_PLANNER_TOKEN`, `FORGE_GARDENER_TOKEN`, `FORGE_VAULT_TOKEN`, `FORGE_SUPERVISOR_TOKEN`, `FORGE_PREDICTOR_TOKEN`); each falls back to `$FORGE_TOKEN` if not set. **Vault-only token guard (AD-006)**: `unset GITHUB_TOKEN CLAWHUB_TOKEN` so agents never hold external-action tokens; only the runner container receives them. **Container note**: when `DISINTO_CONTAINER=1`, `.env` is NOT re-sourced; compose already injects env vars (including `FORGE_URL=http://forgejo:3000`) and re-sourcing would clobber them. **Save/restore scope (#364)**: only `FORGE_URL` is preserved across `.env` re-sourcing (compose injects `http://forgejo:3000`, `.env` has `http://localhost:3000`). `FORGE_TOKEN` is NOT preserved, so refreshed tokens in `.env` take effect immediately. **Per-agent token override (#762)**: agent run scripts export `FORGE_TOKEN_OVERRIDE=<agent-specific-token>` BEFORE sourcing `env.sh`; `env.sh` applies this override at lines 98-100, ensuring the correct identity survives any re-sourcing of `env.sh` by nested shells or `claude -p` invocations. **Required env var**: `FORGE_PASS`, the bot password for git HTTP push (Forgejo 11.x rejects API tokens for `git push`, #361). **Hard preconditions (#674)**: `USER` and `HOME` must be exported by the entrypoint before sourcing. When `PROJECT_TOML` is set, `PROJECT_REPO_ROOT`, `PRIMARY_BRANCH`, and `OPS_REPO_ROOT` must also be set (by entrypoint or TOML). **`load_secret NAME [DEFAULT]` (#793)**: backend-agnostic secret resolution. Precedence: (1) `/secrets/<NAME>.env` Nomad-rendered template, (2) current environment already set by `.env.enc` / compose, (3) `secrets/<NAME>.enc` age-encrypted per-key file (decrypted on demand, cached in process env), (4) DEFAULT or empty. Consumers call `$(load_secret GITHUB_TOKEN)` instead of `${GITHUB_TOKEN}`; identical behavior whether secrets come from Docker compose injection or Nomad Vault templates. | Every agent |
| `lib/ci-helpers.sh` | `ci_passed()` — returns 0 if CI state is "success" (or no CI configured). `ci_required_for_pr()` — returns 0 if PR has code files (CI required), 1 if non-code only (CI not required). `is_infra_step()` — returns 0 if a single CI step failure matches infra heuristics (clone/git exit 128, any exit 137, log timeout patterns). `classify_pipeline_failure()` — returns "infra \<reason>" if any failed Woodpecker step matches infra heuristics via `is_infra_step()`, else "code". `ensure_priority_label()` — looks up (or creates) the `priority` label and returns its ID; caches in `_PRIORITY_LABEL_ID`. `ci_commit_status <sha>` — queries Woodpecker directly for CI state, falls back to forge commit status API. `ci_pipeline_number <sha>` — returns the Woodpecker pipeline number for a commit, falls back to parsing forge status `target_url`. `ci_promote <repo_id> <pipeline_num> <environment>` — promotes a pipeline to a named Woodpecker environment (vault-gated deployment: vault approves, vault-fire calls this — vault redesign in progress, see #73-#77). `ci_get_logs <pipeline_number> [--step <name>]` — reads CI logs from Woodpecker SQLite database via `lib/ci-log-reader.py`; outputs last 200 lines to stdout. Requires mounted woodpecker-data volume at /woodpecker-data. | dev-poll, review-poll, review-pr |
| `lib/ci-debug.sh` | CLI tool for Woodpecker CI: `list`, `status`, `logs`, `failures` subcommands. Not sourced — run directly. | Humans / dev-agent (tool access) |
| `lib/ci-log-reader.py` | Python tool: reads CI logs from Woodpecker SQLite database. `<pipeline_number> [--step <name>]` — returns last 200 lines from failed steps (or specified step). Used by `ci_get_logs()` in ci-helpers.sh. Requires `WOODPECKER_DATA_DIR` (default: /woodpecker-data). | ci-helpers.sh |
@ -14,7 +14,7 @@ sourced as needed.
| `lib/parse-deps.sh` | Extracts dependency issue numbers from an issue body (stdin → stdout, one number per line). Matches `## Dependencies` / `## Depends on` / `## Blocked by` sections and inline `depends on #N` / `blocked by #N` patterns. Inline scan skips fenced code blocks to prevent false positives from code examples in issue bodies. Not sourced — executed via `bash lib/parse-deps.sh`. | dev-poll |
| `lib/formula-session.sh` | `acquire_run_lock()`, `load_formula()`, `load_formula_or_profile()`, `build_context_block()`, `ensure_ops_repo()`, `ops_commit_and_push()`, `build_prompt_footer()`, `build_sdk_prompt_footer()`, `formula_worktree_setup()`, `formula_prepare_profile_context()`, `formula_lessons_block()`, `profile_write_journal()`, `profile_load_lessons()`, `ensure_profile_repo()`, `_profile_has_repo()`, `_count_undigested_journals()`, `_profile_digest_journals()`, `_profile_restore_lessons()`, `_profile_commit_and_push()`, `resolve_agent_identity()`, `build_graph_section()`, `build_scratch_instruction()`, `read_scratch_context()`, `cleanup_stale_crashed_worktrees()` — shared helpers for formula-driven polling-loop agents (lock, .profile repo management, prompt assembly, worktree setup). Memory guard is provided by `memory_guard()` in `lib/env.sh` (not duplicated here). `resolve_agent_identity()` — sets `FORGE_TOKEN`, `AGENT_IDENTITY`, `FORGE_REMOTE` from per-agent token env vars and FORGE_URL remote detection. `build_graph_section()` generates the structural-analysis section (runs `lib/build-graph.py`, formats JSON output) — previously duplicated in planner-run.sh and predictor-run.sh, now shared here. `cleanup_stale_crashed_worktrees()` — thin wrapper around `worktree_cleanup_stale()` from `lib/worktree.sh` (kept for backwards compatibility). **Journal digestion guards (#702)**: `_profile_digest_journals()` respects `PROFILE_DIGEST_TIMEOUT` (default 300s) and `PROFILE_DIGEST_MAX_BATCH` (default 5 journals per run); `_profile_restore_lessons()` restores the previous lessons-learned.md on digest failure. | planner-run.sh, predictor-run.sh, gardener-run.sh, supervisor-run.sh, dev-agent.sh |
| `lib/guard.sh` | `check_active(agent_name)` — reads `$FACTORY_ROOT/state/.{agent_name}-active`; exits 0 (skip) if the file is absent. Factory is off by default — state files must be created to enable each agent. **Logs a message to stderr** when skipping (`[check_active] SKIP: state file not found`), so agent dropout is visible in loop logs. Sourced by dev-poll.sh, review-poll.sh, predictor-run.sh, supervisor-run.sh. | polling-loop entry points |
| `lib/mirrors.sh` | `mirror_push()` — pushes `$PRIMARY_BRANCH` + tags to all configured mirror remotes (fire-and-forget background pushes). Reads `MIRROR_NAMES` and `MIRROR_*` vars exported by `load-project.sh` from the `[mirrors]` TOML section. Failures are logged but never block the pipeline. Sourced by dev-poll.sh — called after every successful merge. | dev-poll.sh |
| `lib/mirrors.sh` | `mirror_push()` — pushes `$PRIMARY_BRANCH` + tags to all configured mirror remotes (fire-and-forget background pushes). Reads `MIRROR_NAMES` and `MIRROR_*` vars exported by `load-project.sh` from the `[mirrors]` TOML section. Failures are logged but never block the pipeline. `mirror_pull_register(clone_url, owner, repo_name, [interval])` — registers a Forgejo pull mirror via `POST /repos/migrate` with `mirror: true`. Creates the target repo and queues the first sync automatically. Works against empty Forgejo instances — no pre-existing content required. Used for Nomad migration cutover: point at Codeberg source, wait for sync, then proceed with `disinto init`. See [docs/mirror-bootstrap.md](../docs/mirror-bootstrap.md) for the full cutover path. Sourced by dev-poll.sh — called after every successful merge. | dev-poll.sh |
| `lib/build-graph.py` | Python tool: parses VISION.md, prerequisites.md (from ops repo), AGENTS.md, formulas/*.toml, evidence/ (from ops repo), and forge issues/labels into a NetworkX DiGraph. Runs structural analyses (orphaned objectives, stale prerequisites, thin evidence, circular deps) and outputs a JSON report. Used by `review-pr.sh` (per-PR changed-file analysis) and `predictor-run.sh` (full-project analysis) to provide structural context to Claude. | review-pr.sh, predictor-run.sh |
| `lib/secret-scan.sh` | `scan_for_secrets()` — detects potential secrets (API keys, bearer tokens, private keys, URLs with embedded credentials) in text; returns 1 if secrets found. `redact_secrets()` — replaces detected secret patterns with `[REDACTED]`. | issue-lifecycle.sh |
| `lib/stack-lock.sh` | File-based lock protocol for singleton project stack access. `stack_lock_acquire(holder, project)` — polls until free, breaks stale heartbeats (>10 min old), claims lock. `stack_lock_release(project)` — deletes lock file. `stack_lock_check(project)` — inspect current lock state. `stack_lock_heartbeat(project)` — update heartbeat timestamp (callers must call every 2 min while holding). Lock files at `~/data/locks/<project>-stack.lock`. | docker/edge/dispatcher.sh, reproduce formula |
@ -22,7 +22,7 @@ sourced as needed.
| `lib/worktree.sh` | Reusable git worktree management: `worktree_create(path, branch, [base_ref])` — create worktree, checkout base, fetch submodules. `worktree_recover(path, branch, [remote])` — detect existing worktree, reuse if on correct branch (sets `_WORKTREE_REUSED`), otherwise clean and recreate. `worktree_cleanup(path)``git worktree remove --force`, clear Claude Code project cache (`~/.claude/projects/` matching path). `worktree_cleanup_stale([max_age_hours])` — scan `/tmp` for orphaned worktrees older than threshold, skip preserved and active tmux worktrees, prune. `worktree_preserve(path, reason)` — mark worktree as preserved for debugging (writes `.worktree-preserved` marker, skipped by stale cleanup). | dev-agent.sh, supervisor-run.sh, planner-run.sh, predictor-run.sh, gardener-run.sh |
| `lib/pr-lifecycle.sh` | Reusable PR lifecycle library: `pr_create()`, `pr_find_by_branch()`, `pr_poll_ci()`, `pr_poll_review()`, `pr_merge()`, `pr_is_merged()`, `pr_walk_to_merge()`, `build_phase_protocol_prompt()`. Requires `lib/ci-helpers.sh`. | dev-agent.sh (future) |
| `lib/issue-lifecycle.sh` | Reusable issue lifecycle library: `issue_claim()` (add in-progress, remove backlog), `issue_release()` (remove in-progress, add backlog), `issue_block()` (post diagnostic comment with secret redaction, add blocked label), `issue_close()`, `issue_check_deps()` (parse deps, check transitive closure; sets `_ISSUE_BLOCKED_BY`, `_ISSUE_SUGGESTION`), `issue_suggest_next()` (find next unblocked backlog issue; sets `_ISSUE_NEXT`), `issue_post_refusal()` (structured refusal comment with dedup). Label IDs cached in globals on first lookup. Sources `lib/secret-scan.sh`. | dev-agent.sh (future) |
| `lib/vault.sh` | **Vault PR helper** — create vault action PRs on ops repo via Forgejo API (works from containers without SSH). `vault_request <action_id> <toml_content>` validates TOML (using `validate_vault_action` from `vault/vault-env.sh`), creates branch `vault/<action-id>`, writes `vault/actions/<action-id>.toml`, creates PR targeting `main` with title `vault: <action-id>` and body from context field, returns PR number. Idempotent: if PR exists, returns existing number. **Low-tier bypass**: if the action's `blast_radius` classifies as `low` (via `vault/classify.sh`), `vault_request` calls `_vault_commit_direct()` which commits directly to ops `main` using `FORGE_ADMIN_TOKEN` — no PR, no approval wait. Returns `0` (not a PR number) for direct commits. Requires `FORGE_TOKEN`, `FORGE_ADMIN_TOKEN` (low-tier only), `FORGE_URL`, `FORGE_REPO`, `FORGE_OPS_REPO`. Uses the calling agent's own token (saves/restores `FORGE_TOKEN` around sourcing `vault-env.sh`), so approval workflow respects individual agent identities. | dev-agent (vault actions), future vault dispatcher |
| `lib/action-vault.sh` | **Vault PR helper** — create vault action PRs on ops repo via Forgejo API (works from containers without SSH). `vault_request <action_id> <toml_content>` validates TOML (using `validate_vault_action` from `action-vault/vault-env.sh`), creates branch `vault/<action-id>`, writes `vault/actions/<action-id>.toml`, creates PR targeting `main` with title `vault: <action-id>` and body from context field, returns PR number. Idempotent: if PR exists, returns existing number. **Low-tier bypass**: if the action's `blast_radius` classifies as `low` (via `action-vault/classify.sh`), `vault_request` calls `_vault_commit_direct()` which commits directly to ops `main` using `FORGE_ADMIN_TOKEN` — no PR, no approval wait. Returns `0` (not a PR number) for direct commits. Requires `FORGE_TOKEN`, `FORGE_ADMIN_TOKEN` (low-tier only), `FORGE_URL`, `FORGE_REPO`, `FORGE_OPS_REPO`. Uses the calling agent's own token (saves/restores `FORGE_TOKEN` around sourcing `vault-env.sh`), so approval workflow respects individual agent identities. | dev-agent (vault actions), future vault dispatcher |
| `lib/branch-protection.sh` | Branch protection helpers for Forgejo repos. `setup_vault_branch_protection()` — configures admin-only merge protection on main (require 1 approval, restrict merge to admin role, block direct pushes). `setup_profile_branch_protection()` — same protection for `.profile` repos. `verify_branch_protection()` — checks protection is correctly configured. `remove_branch_protection()` — removes protection (cleanup/testing). Handles race condition after initial push: retries with backoff if Forgejo hasn't processed the branch yet. Requires `FORGE_TOKEN`, `FORGE_URL`, `FORGE_OPS_REPO`. | bin/disinto (hire-an-agent) |
| `lib/agent-sdk.sh` | `agent_run([--resume SESSION_ID] [--worktree DIR] PROMPT)` — one-shot `claude -p` invocation with session persistence. Saves session ID to `SID_FILE`, reads it back on resume. `agent_recover_session()` — restore previous session ID from `SID_FILE` on startup. **Nudge guard**: skips nudge injection if the worktree is clean and no push is expected, preventing spurious re-invocations. Callers must define `SID_FILE`, `LOGFILE`, and `log()` before sourcing. **Concurrency**: external `flock` on `session.lock` is gated behind `CLAUDE_EXTERNAL_LOCK=1` (default off). When unset, each container's per-session `CLAUDE_CONFIG_DIR` isolation lets Claude Code's native lockfile handle OAuth refresh — no external serialization needed. Set `CLAUDE_EXTERNAL_LOCK=1` to re-enable the old flock wrapper as a rollback mechanism. See [`docs/CLAUDE-AUTH-CONCURRENCY.md`](../docs/CLAUDE-AUTH-CONCURRENCY.md) and AD-002 (#647). | formula-driven agents (dev-agent, planner-run, predictor-run, gardener-run) |
| `lib/forge-setup.sh` | `setup_forge()` — Forgejo instance provisioning: creates admin user, bot accounts, org, repos (code + ops), configures webhooks, sets repo topics. Extracted from `bin/disinto`. Requires `FORGE_URL`, `FORGE_TOKEN`, `FACTORY_ROOT`. **Password storage (#361)**: after creating each bot account, stores its password in `.env` as `FORGE_<BOT>_PASS` (e.g. `FORGE_PASS`, `FORGE_REVIEW_PASS`, etc.) for use by `forge-push.sh`. | bin/disinto (init) |
@@ -30,6 +30,7 @@ sourced as needed.
| `lib/git-creds.sh` | Shared git credential helper configuration. `configure_git_creds([HOME_DIR] [RUN_AS_CMD])` — writes a static credential helper script and configures git globally to use password-based HTTP auth (Forgejo 11.x rejects API tokens for `git push`, #361). **Retry on cold boot (#741)**: resolves bot username from `FORGE_TOKEN` with 5 retries (exponential backoff 1-5s); fails loudly and returns 1 if Forgejo is unreachable — never falls back to a wrong hardcoded default (exports `BOT_USER` on success). `repair_baked_cred_urls([--as RUN_AS_CMD] DIR ...)` — rewrites any git remote URLs that have credentials baked in to use clean URLs instead; uses `safe.directory` bypass for root-owned repos (#671). Requires `FORGE_PASS`, `FORGE_URL`, `FORGE_TOKEN`. | entrypoints (agents, edge) |
| `lib/ops-setup.sh` | `setup_ops_repo()` — creates ops repo on Forgejo if it doesn't exist, configures bot collaborators, clones/initializes ops repo locally, seeds directory structure (vault, knowledge, evidence, sprints). Evidence subdirectories seeded: engagement/, red-team/, holdout/, evolution/, user-test/. Also seeds sprints/ for architect output. Exports `_ACTUAL_OPS_SLUG`. `migrate_ops_repo(ops_root, [primary_branch])` — idempotent migration helper that seeds missing directories and .gitkeep files on existing ops repos (pre-#407 deployments). | bin/disinto (init) |
| `lib/ci-setup.sh` | `_install_cron_impl()` — installs crontab entries for bare-metal deployments (compose mode uses polling loop instead). `_create_forgejo_oauth_app()` — generic helper to create an OAuth2 app on Forgejo (shared by Woodpecker and chat). `_create_woodpecker_oauth_impl()` — creates Woodpecker OAuth2 app (thin wrapper). `_create_chat_oauth_impl()` — creates disinto-chat OAuth2 app, writes `CHAT_OAUTH_CLIENT_ID`/`CHAT_OAUTH_CLIENT_SECRET` to `.env` (#708). `_generate_woodpecker_token_impl()` — auto-generates WOODPECKER_TOKEN via OAuth2 flow. `_activate_woodpecker_repo_impl()` — activates repo in Woodpecker. All gated by `_load_ci_context()` which validates required env vars. | bin/disinto (init) |
| `lib/generators.sh` | Template generation for `disinto init`: `generate_compose()` — docker-compose.yml (uses `codeberg.org/forgejo/forgejo:11.0` tag; adds `security_opt: [apparmor:unconfined]` to all services for rootless container compatibility; Forgejo includes a healthcheck so dependent services use `condition: service_healthy` — fixes cold-start races, #665; adds `chat` service block with isolated `chat-config` named volume and `CHAT_HISTORY_DIR` bind-mount for per-user NDJSON history persistence (#710); injects `FORWARD_AUTH_SECRET` for Caddy↔chat defense-in-depth auth (#709); cost-cap env vars `CHAT_MAX_REQUESTS_PER_HOUR`, `CHAT_MAX_REQUESTS_PER_DAY`, `CHAT_MAX_TOKENS_PER_DAY` (#711); subdomain fallback comment for `EDGE_TUNNEL_FQDN_*` vars (#713); all `depends_on` now use `condition: service_healthy/started` instead of bare service names), `generate_caddyfile()` — Caddyfile (routes: `/forge/*` → forgejo:3000, `/woodpecker/*` → woodpecker:8000, `/staging/*` → staging:80; `/chat/login` and `/chat/oauth/callback` bypass `forward_auth` so unauthenticated users can reach the OAuth flow; `/chat/*` gated by `forward_auth` on `chat:8080/chat/auth/verify` which stamps `X-Forwarded-User` (#709); root `/` redirects to `/forge/`), `generate_staging_index()` — staging index, `generate_deploy_pipelines()` — Woodpecker deployment pipeline configs. Requires `FACTORY_ROOT`, `PROJECT_NAME`, `PRIMARY_BRANCH`. | bin/disinto (init) |
| `lib/generators.sh` | Template generation for `disinto init`: `generate_compose()` — docker-compose.yml (uses `codeberg.org/forgejo/forgejo:11.0` tag; adds `security_opt: [apparmor:unconfined]` to all services for rootless container compatibility; Forgejo includes a healthcheck so dependent services use `condition: service_healthy` — fixes cold-start races, #665; adds `chat` service block with isolated `chat-config` named volume and `CHAT_HISTORY_DIR` bind-mount for per-user NDJSON history persistence (#710); injects `FORWARD_AUTH_SECRET` for Caddy↔chat defense-in-depth auth (#709); cost-cap env vars `CHAT_MAX_REQUESTS_PER_HOUR`, `CHAT_MAX_REQUESTS_PER_DAY`, `CHAT_MAX_TOKENS_PER_DAY` (#711); subdomain fallback comment for `EDGE_TUNNEL_FQDN_*` vars (#713); all `depends_on` now use `condition: service_healthy/started` instead of bare service names; all services now include `restart: unless-stopped` including the edge service — #768; agents service now uses `image: ghcr.io/disinto/agents:${DISINTO_IMAGE_TAG:-latest}` instead of `build:` (#429); `WOODPECKER_PLUGINS_PRIVILEGED` env var added to woodpecker service (#779); agents-llama conditional block gated on `ENABLE_LLAMA_AGENT=1` (#769); agents service gains volume mounts for `./projects`, `./.env`, `./state`), `generate_caddyfile()` — Caddyfile (routes: `/forge/*` → forgejo:3000, `/woodpecker/*` → woodpecker:8000, `/staging/*` → staging:80; `/chat/login` and `/chat/oauth/callback` bypass `forward_auth` so unauthenticated users can reach the OAuth flow; `/chat/*` gated by `forward_auth` on `chat:8080/chat/auth/verify` which stamps `X-Forwarded-User` (#709); root `/` redirects to `/forge/`), `generate_staging_index()` — staging index, `generate_deploy_pipelines()` — Woodpecker deployment pipeline configs. Requires `FACTORY_ROOT`, `PROJECT_NAME`, `PRIMARY_BRANCH`. | bin/disinto (init) |
| `lib/sprint-filer.sh` | Post-merge sub-issue filer for sprint PRs. Invoked by the `.woodpecker/ops-filer.yml` pipeline after a sprint PR merges to ops repo `main`. Parses `<!-- filer:begin --> ... <!-- filer:end -->` blocks from sprint PR bodies to extract sub-issue definitions, creates them on the project repo using `FORGE_FILER_TOKEN` (narrow-scope `filer-bot` identity with `issues:write` only), adds `in-progress` label to the parent vision issue, and handles vision lifecycle closure when all sub-issues are closed. Uses `filer_api_all()` for paginated fetches. Idempotent: uses `<!-- decomposed-from: #<vision>, sprint: <slug>, id: <id> -->` markers to skip already-filed issues. Requires `FORGE_FILER_TOKEN`, `FORGE_API`, `FORGE_API_BASE`, `FORGE_OPS_REPO`. | `.woodpecker/ops-filer.yml` (CI pipeline on ops repo) |
| `lib/hire-agent.sh` | `disinto_hire_an_agent()` — user creation, `.profile` repo setup, formula copying, branch protection, and state marker creation for hiring a new agent. Requires `FORGE_URL`, `FORGE_TOKEN`, `FACTORY_ROOT`, `PROJECT_NAME`. Extracted from `bin/disinto`. | bin/disinto (hire) |
| `lib/release.sh` | `disinto_release()` — vault TOML creation, branch setup on ops repo, PR creation, and auto-merge request for a versioned release. `_assert_release_globals()` validates required env vars. Requires `FORGE_URL`, `FORGE_TOKEN`, `FORGE_OPS_REPO`, `FACTORY_ROOT`, `PRIMARY_BRANCH`. Extracted from `bin/disinto`. | bin/disinto (release) |
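
The `lib/stack-lock.sh` entry above describes a poll/heartbeat file-lock protocol. As a hedged illustration, here is a minimal standalone sketch of the acquire/break-stale/release steps — the function names, lock-file format (`holder epoch`), and `mktemp` lock directory are all invented for this demo; the real API lives in `lib/stack-lock.sh` and uses `~/data/locks/`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical reimplementation of the documented protocol, NOT the real
# library. A lock whose heartbeat epoch is older than 600s (10 min) is
# treated as stale and may be broken by the next acquirer.
LOCK_DIR="$(mktemp -d)"

lock_path() { printf '%s/%s-stack.lock' "$LOCK_DIR" "$1"; }

sketch_lock_acquire() {
  local holder="$1" project="$2" lock now hb
  lock="$(lock_path "$project")"
  now="$(date +%s)"
  if [ -f "$lock" ]; then
    hb="$(cut -d' ' -f2 "$lock")"
    # Heartbeat still fresh: lock is held (the real code polls until free).
    if [ $((now - hb)) -le 600 ]; then
      return 1
    fi
  fi
  printf '%s %s\n' "$holder" "$now" > "$lock"
}

sketch_lock_heartbeat() {
  local project="$1" lock holder
  lock="$(lock_path "$project")"
  holder="$(cut -d' ' -f1 "$lock")"
  printf '%s %s\n' "$holder" "$(date +%s)" > "$lock"
}

sketch_lock_release() { rm -f "$(lock_path "$1")"; }

# Brief demo: claim and release a lock for project "demo".
sketch_lock_acquire dispatcher demo
sketch_lock_release demo
```

A long-running holder would call the heartbeat function every couple of minutes (the table says 2 min) so other callers never see its lock as stale.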


@@ -1,9 +1,9 @@
#!/usr/bin/env bash
# vault.sh — Helper for agents to create vault PRs on ops repo
# action-vault.sh — Helper for agents to create vault PRs on ops repo
#
# Source after lib/env.sh:
# source "$(dirname "$0")/../lib/env.sh"
# source "$(dirname "$0")/lib/vault.sh"
# source "$(dirname "$0")/lib/action-vault.sh"
#
# Required globals: FORGE_TOKEN, FORGE_URL, FORGE_REPO, FORGE_OPS_REPO
# Optional: OPS_REPO_ROOT (local path for ops repo)
@@ -12,7 +12,7 @@
# vault_request <action_id> <toml_content> — Create vault PR, return PR number
#
# The function:
# 1. Validates TOML content using validate_vault_action() from vault/vault-env.sh
# 1. Validates TOML content using validate_vault_action() from action-vault/vault-env.sh
# 2. Creates a branch on the ops repo: vault/<action-id>
# 3. Writes TOML to vault/actions/<action-id>.toml on that branch
# 4. Creates PR targeting main with title "vault: <action-id>"
@@ -133,7 +133,7 @@ vault_request() {
printf '%s' "$toml_content" > "$tmp_toml"
# Source vault-env.sh for validate_vault_action
local vault_env="${FACTORY_ROOT:-$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)}/vault/vault-env.sh"
local vault_env="${FACTORY_ROOT:-$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)}/action-vault/vault-env.sh"
if [ ! -f "$vault_env" ]; then
echo "ERROR: vault-env.sh not found at $vault_env" >&2
return 1
@@ -161,7 +161,7 @@ vault_request() {
ops_api="$(_vault_ops_api)"
# Classify the action to determine if PR bypass is allowed
local classify_script="${FACTORY_ROOT:-$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)}/vault/classify.sh"
local classify_script="${FACTORY_ROOT:-$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)}/action-vault/classify.sh"
local vault_tier
vault_tier=$("$classify_script" "${VAULT_ACTION_FORMULA:-}" "${VAULT_BLAST_RADIUS_OVERRIDE:-}") || {
# Classification failed, default to high tier (require PR)


@@ -121,9 +121,10 @@ export FORGE_VAULT_TOKEN="${FORGE_VAULT_TOKEN:-${FORGE_TOKEN}}"
export FORGE_SUPERVISOR_TOKEN="${FORGE_SUPERVISOR_TOKEN:-${FORGE_TOKEN}}"
export FORGE_PREDICTOR_TOKEN="${FORGE_PREDICTOR_TOKEN:-${FORGE_TOKEN}}"
export FORGE_ARCHITECT_TOKEN="${FORGE_ARCHITECT_TOKEN:-${FORGE_TOKEN}}"
export FORGE_FILER_TOKEN="${FORGE_FILER_TOKEN:-${FORGE_TOKEN}}"
# Bot usernames filter
export FORGE_BOT_USERNAMES="${FORGE_BOT_USERNAMES:-dev-bot,review-bot,planner-bot,gardener-bot,vault-bot,supervisor-bot,predictor-bot,architect-bot}"
export FORGE_BOT_USERNAMES="${FORGE_BOT_USERNAMES:-dev-bot,review-bot,planner-bot,gardener-bot,vault-bot,supervisor-bot,predictor-bot,architect-bot,filer-bot}"
# Project config
export FORGE_REPO="${FORGE_REPO:-}"
@@ -157,8 +158,8 @@ export WOODPECKER_SERVER="${WOODPECKER_SERVER:-http://localhost:8000}"
export CLAUDE_TIMEOUT="${CLAUDE_TIMEOUT:-7200}"
# Vault-only token guard (#745): external-action tokens (GITHUB_TOKEN, CLAWHUB_TOKEN)
# must NEVER be available to agents. They live in .env.vault.enc and are injected
# only into the ephemeral runner container at fire time. Unset them here so
# must NEVER be available to agents. They live in secrets/*.enc and are decrypted
# only into the ephemeral runner container at fire time (#777). Unset them here so
# even an accidental .env inclusion cannot leak them into agent sessions.
unset GITHUB_TOKEN 2>/dev/null || true
unset CLAWHUB_TOKEN 2>/dev/null || true
@@ -312,6 +313,68 @@ memory_guard() {
fi
}
# =============================================================================
# SECRET LOADING ABSTRACTION
# =============================================================================
# load_secret NAME [DEFAULT]
#
# Resolves a secret value using the following precedence:
# 1. /secrets/<NAME>.env — Nomad-rendered template (future)
# 2. Current environment — already set by .env.enc, compose, etc.
# 3. secrets/<NAME>.enc — age-encrypted per-key file (decrypted on demand)
# 4. DEFAULT (or empty)
#
# Prints the resolved value to stdout. Caches age-decrypted values in the
# process environment so subsequent calls are free.
# =============================================================================
load_secret() {
local name="$1"
local default="${2:-}"
# 1. Nomad-rendered template (future: Nomad writes /secrets/<NAME>.env)
local nomad_path="/secrets/${name}.env"
if [ -f "$nomad_path" ]; then
# Source into a subshell to extract just the value
local _nomad_val
_nomad_val=$(
set -a
# shellcheck source=/dev/null
source "$nomad_path"
set +a
printf '%s' "${!name:-}"
)
if [ -n "$_nomad_val" ]; then
export "$name=$_nomad_val"
printf '%s' "$_nomad_val"
return 0
fi
fi
# 2. Already in environment (set by .env.enc, compose injection, etc.)
if [ -n "${!name:-}" ]; then
printf '%s' "${!name}"
return 0
fi
# 3. Age-encrypted per-key file: secrets/<NAME>.enc (#777)
local _age_key="${HOME}/.config/sops/age/keys.txt"
local _enc_path="${FACTORY_ROOT}/secrets/${name}.enc"
if [ -f "$_enc_path" ] && [ -f "$_age_key" ] && command -v age &>/dev/null; then
local _dec_val
if _dec_val=$(age -d -i "$_age_key" "$_enc_path" 2>/dev/null) && [ -n "$_dec_val" ]; then
export "$name=$_dec_val"
printf '%s' "$_dec_val"
return 0
fi
fi
# 4. Default (or empty)
if [ -n "$default" ]; then
printf '%s' "$default"
fi
return 0
}
# Source tea helpers (available when tea binary is installed)
if command -v tea &>/dev/null; then
# shellcheck source=tea-helpers.sh

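As a hedged aside, the precedence order of `load_secret` above (environment beats the default) can be demonstrated with a stripped-down stand-in — `demo_load_secret` and `MY_TOKEN` are invented names, and the Nomad-template and age-decryption tiers are deliberately omitted:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Simplified sketch of load_secret()'s tiers 2 and 4 only.
demo_load_secret() {
  local name="$1" default="${2:-}"
  if [ -n "${!name:-}" ]; then
    printf '%s' "${!name}"   # tier 2: value already in the environment
    return 0
  fi
  printf '%s' "$default"     # tier 4: fall back to the default
}

unset MY_TOKEN || true
demo_load_secret MY_TOKEN fallback; echo    # prints the default
export MY_TOKEN=live-value
demo_load_secret MY_TOKEN fallback; echo    # prints the env value
```

The real function additionally checks `/secrets/<NAME>.env` first and `secrets/<NAME>.enc` (age-decrypted) third, caching decrypted values via `export` so repeat calls skip the `age` invocation.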

@@ -31,8 +31,9 @@ _load_init_context() {
# Execute a command in the Forgejo container (for admin operations)
_forgejo_exec() {
local use_bare="${DISINTO_BARE:-false}"
local cname="${FORGEJO_CONTAINER_NAME:-disinto-forgejo}"
if [ "$use_bare" = true ]; then
docker exec -u git disinto-forgejo "$@"
docker exec -u git "$cname" "$@"
else
docker compose -f "${FACTORY_ROOT}/docker-compose.yml" exec -T -u git forgejo "$@"
fi
@@ -94,11 +95,12 @@ setup_forge() {
# Bare-metal mode: standalone docker run
mkdir -p "${FORGEJO_DATA_DIR}"
if docker ps -a --format '{{.Names}}' | grep -q '^disinto-forgejo$'; then
docker start disinto-forgejo >/dev/null 2>&1 || true
local cname="${FORGEJO_CONTAINER_NAME:-disinto-forgejo}"
if docker ps -a --format '{{.Names}}' | grep -q "^${cname}$"; then
docker start "$cname" >/dev/null 2>&1 || true
else
docker run -d \
--name disinto-forgejo \
--name "$cname" \
--restart unless-stopped \
-p "${forge_port}:3000" \
-p 2222:22 \
@@ -719,7 +721,7 @@ setup_forge() {
fi
# Add all bot users as collaborators with appropriate permissions
# dev-bot: write (PR creation via lib/vault.sh)
# dev-bot: write (PR creation via lib/action-vault.sh)
# review-bot: read (PR review)
# planner-bot: write (prerequisites.md, memory)
# gardener-bot: write (backlog grooming)


@@ -819,8 +819,7 @@ build_prompt_footer() {
Base URL: ${FORGE_API}
Auth header: -H \"Authorization: token \${FORGE_TOKEN}\"
Read issue: curl -sf -H \"Authorization: token \${FORGE_TOKEN}\" '${FORGE_API}/issues/{number}' | jq '.body'
Create issue: curl -sf -X POST -H \"Authorization: token \${FORGE_TOKEN}\" -H 'Content-Type: application/json' '${FORGE_API}/issues' -d '{\"title\":\"...\",\"body\":\"...\",\"labels\":[LABEL_ID]}'${extra_api}
List labels: curl -sf -H \"Authorization: token \${FORGE_TOKEN}\" '${FORGE_API}/labels'
List labels: curl -sf -H \"Authorization: token \${FORGE_TOKEN}\" '${FORGE_API}/labels'${extra_api}
NEVER echo or include the actual token value in output — always reference \${FORGE_TOKEN}.
## Environment


@@ -100,9 +100,7 @@ _generate_local_model_services() {
cat >> "$temp_file" <<EOF
agents-${service_name}:
build:
context: .
dockerfile: docker/agents/Dockerfile
image: ghcr.io/disinto/agents:\${DISINTO_IMAGE_TAG:-latest}
container_name: disinto-agents-${service_name}
restart: unless-stopped
security_opt:
@@ -111,9 +109,9 @@
- agents-${service_name}-data:/home/agent/data
- project-repos:/home/agent/repos
- \${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}:\${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}
- \${HOME}/.claude.json:/home/agent/.claude.json:ro
- CLAUDE_BIN_PLACEHOLDER:/usr/local/bin/claude:ro
- \${HOME}/.ssh:/home/agent/.ssh:ro
- \${CLAUDE_CONFIG_FILE:-\${HOME}/.claude.json}:/home/agent/.claude.json:ro
- \${CLAUDE_BIN_DIR}:/usr/local/bin/claude:ro
- \${AGENT_SSH_DIR:-\${HOME}/.ssh}:/home/agent/.ssh:ro
environment:
FORGE_URL: http://forgejo:3000
FORGE_REPO: ${FORGE_REPO:-disinto-admin/disinto}
@@ -233,6 +231,7 @@ for name, config in agents.items():
# to materialize a working stack on a fresh checkout.
_generate_compose_impl() {
local forge_port="${1:-3000}"
local use_build="${2:-false}"
local compose_file="${FACTORY_ROOT}/docker-compose.yml"
# Check if compose file already exists
@@ -296,6 +295,7 @@ services:
WOODPECKER_AGENT_SECRET: ${WOODPECKER_AGENT_SECRET:-}
WOODPECKER_DATABASE_DRIVER: sqlite3
WOODPECKER_DATABASE_DATASOURCE: /var/lib/woodpecker/woodpecker.sqlite
WOODPECKER_PLUGINS_PRIVILEGED: ${WOODPECKER_PLUGINS_PRIVILEGED:-plugins/docker}
WOODPECKER_ENVIRONMENT: "FORGE_TOKEN:${FORGE_TOKEN}"
depends_on:
forgejo:
@@ -318,15 +318,19 @@ services:
WOODPECKER_AGENT_SECRET: ${WOODPECKER_AGENT_SECRET:-}
WOODPECKER_GRPC_SECURE: "false"
WOODPECKER_HEALTHCHECK_ADDR: ":3333"
WOODPECKER_BACKEND_DOCKER_NETWORK: disinto_disinto-net
WOODPECKER_BACKEND_DOCKER_NETWORK: ${WOODPECKER_CI_NETWORK:-disinto_disinto-net}
WOODPECKER_MAX_WORKFLOWS: 1
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://localhost:3333/healthz"]
interval: 30s
timeout: 5s
retries: 3
start_period: 15s
depends_on:
- woodpecker
agents:
build:
context: .
dockerfile: docker/agents/Dockerfile
image: ghcr.io/disinto/agents:${DISINTO_IMAGE_TAG:-latest}
container_name: disinto-agents
restart: unless-stopped
security_opt:
@@ -335,11 +339,14 @@ services:
- agent-data:/home/agent/data
- project-repos:/home/agent/repos
- ${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}:${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}
- ${HOME}/.claude.json:/home/agent/.claude.json:ro
- CLAUDE_BIN_PLACEHOLDER:/usr/local/bin/claude:ro
- ${HOME}/.ssh:/home/agent/.ssh:ro
- ${HOME}/.config/sops/age:/home/agent/.config/sops/age:ro
- ${CLAUDE_CONFIG_FILE:-${HOME}/.claude.json}:/home/agent/.claude.json:ro
- ${CLAUDE_BIN_DIR}:/usr/local/bin/claude:ro
- ${AGENT_SSH_DIR:-${HOME}/.ssh}:/home/agent/.ssh:ro
- ${SOPS_AGE_DIR:-${HOME}/.config/sops/age}:/home/agent/.config/sops/age:ro
- woodpecker-data:/woodpecker-data:ro
- ./projects:/home/agent/disinto/projects:ro
- ./.env:/home/agent/disinto/.env:ro
- ./state:/home/agent/disinto/state
environment:
FORGE_URL: http://forgejo:3000
FORGE_REPO: ${FORGE_REPO:-disinto-admin/disinto}
@@ -371,8 +378,14 @@ services:
PLANNER_INTERVAL: ${PLANNER_INTERVAL:-43200}
# IMPORTANT: agents get explicit environment variables (forge tokens, CI tokens, config).
# Vault-only secrets (GITHUB_TOKEN, CLAWHUB_TOKEN, deploy keys) live in
# .env.vault.enc and are NEVER injected here — only the runner
# container receives them at fire time (AD-006, #745).
# secrets/*.enc and are NEVER injected here — only the runner
# container receives them at fire time (AD-006, #745, #777).
healthcheck:
test: ["CMD", "pgrep", "-f", "entrypoint.sh"]
interval: 60s
timeout: 5s
retries: 3
start_period: 30s
depends_on:
forgejo:
condition: service_healthy
@@ -381,10 +394,71 @@ services:
networks:
- disinto-net
runner:
COMPOSEEOF
# ── Conditional agents-llama block (ENABLE_LLAMA_AGENT=1) ──────────────
# Local-Qwen dev agent — gated on ENABLE_LLAMA_AGENT so factories without
# a local llama endpoint don't try to start it. See docs/agents-llama.md.
if [ "${ENABLE_LLAMA_AGENT:-0}" = "1" ]; then
cat >> "$compose_file" <<'LLAMAEOF'
agents-llama:
build:
context: .
dockerfile: docker/agents/Dockerfile
container_name: disinto-agents-llama
restart: unless-stopped
security_opt:
- apparmor=unconfined
volumes:
- agent-data:/home/agent/data
- project-repos:/home/agent/repos
- ${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}:${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}
- ${CLAUDE_CONFIG_FILE:-${HOME}/.claude.json}:/home/agent/.claude.json:ro
- ${CLAUDE_BIN_DIR}:/usr/local/bin/claude:ro
- ${AGENT_SSH_DIR:-${HOME}/.ssh}:/home/agent/.ssh:ro
- ${SOPS_AGE_DIR:-${HOME}/.config/sops/age}:/home/agent/.config/sops/age:ro
- woodpecker-data:/woodpecker-data:ro
environment:
FORGE_URL: http://forgejo:3000
FORGE_REPO: ${FORGE_REPO:-disinto-admin/disinto}
FORGE_TOKEN: ${FORGE_TOKEN_LLAMA:-}
FORGE_PASS: ${FORGE_PASS_LLAMA:-}
FORGE_BOT_USERNAMES: ${FORGE_BOT_USERNAMES:-}
WOODPECKER_TOKEN: ${WOODPECKER_TOKEN:-}
CLAUDE_TIMEOUT: ${CLAUDE_TIMEOUT:-7200}
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC: ${CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC:-1}
CLAUDE_AUTOCOMPACT_PCT_OVERRIDE: "60"
ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY:-}
ANTHROPIC_BASE_URL: ${ANTHROPIC_BASE_URL:-}
FORGE_ADMIN_PASS: ${FORGE_ADMIN_PASS:-}
DISINTO_CONTAINER: "1"
PROJECT_NAME: ${PROJECT_NAME:-project}
PROJECT_REPO_ROOT: /home/agent/repos/${PROJECT_NAME:-project}
WOODPECKER_DATA_DIR: /woodpecker-data
WOODPECKER_REPO_ID: "PLACEHOLDER_WP_REPO_ID"
CLAUDE_CONFIG_DIR: ${CLAUDE_CONFIG_DIR:-/var/lib/disinto/claude-shared/config}
POLL_INTERVAL: ${POLL_INTERVAL:-300}
AGENT_ROLES: dev
healthcheck:
test: ["CMD", "pgrep", "-f", "entrypoint.sh"]
interval: 60s
timeout: 5s
retries: 3
start_period: 30s
depends_on:
forgejo:
condition: service_healthy
networks:
- disinto-net
LLAMAEOF
fi
# Resume the rest of the compose file (runner onward)
cat >> "$compose_file" <<'COMPOSEEOF'
runner:
image: ghcr.io/disinto/agents:${DISINTO_IMAGE_TAG:-latest}
profiles: ["vault"]
security_opt:
- apparmor=unconfined
@@ -405,8 +479,9 @@ services:
# Edge proxy — reverse proxy to Forgejo, Woodpecker, and staging
# Serves on ports 80/443, routes based on path
edge:
build: ./docker/edge
image: ghcr.io/disinto/edge:${DISINTO_IMAGE_TAG:-latest}
container_name: disinto-edge
restart: unless-stopped
security_opt:
- apparmor=unconfined
ports:
@@ -441,7 +516,13 @@ services:
- /var/run/docker.sock:/var/run/docker.sock
- ./secrets/tunnel_key:/run/secrets/tunnel_key:ro
- ${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}:${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}
- ${HOME}/.claude.json:/home/agent/.claude.json:ro
- ${CLAUDE_CONFIG_FILE:-${HOME}/.claude.json}:/home/agent/.claude.json:ro
healthcheck:
test: ["CMD", "curl", "-fsS", "http://localhost:2019/config/"]
interval: 30s
timeout: 5s
retries: 3
start_period: 15s
depends_on:
forgejo:
condition: service_healthy
@@ -459,6 +540,12 @@ services:
command: ["caddy", "file-server", "--root", "/srv/site"]
security_opt:
- apparmor=unconfined
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://localhost:2019/config/"]
interval: 30s
timeout: 5s
retries: 3
start_period: 10s
volumes:
- ./docker:/srv/site:ro
networks:
@@ -499,7 +586,7 @@ services:
memswap_limit: 512m
volumes:
# Mount claude binary from host (same as agents)
- CLAUDE_BIN_PLACEHOLDER:/usr/local/bin/claude:ro
- ${CLAUDE_BIN_DIR}:/usr/local/bin/claude:ro
# Throwaway named volume for chat config (isolated from host ~/.claude)
- chat-config:/var/chat/config
# Chat history persistence: per-user NDJSON files on bind-mounted host volume
@@ -518,6 +605,12 @@ services:
CHAT_MAX_REQUESTS_PER_HOUR: ${CHAT_MAX_REQUESTS_PER_HOUR:-60}
CHAT_MAX_REQUESTS_PER_DAY: ${CHAT_MAX_REQUESTS_PER_DAY:-500}
CHAT_MAX_TOKENS_PER_DAY: ${CHAT_MAX_TOKENS_PER_DAY:-1000000}
healthcheck:
test: ["CMD", "python3", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8080/health')"]
interval: 30s
timeout: 5s
retries: 3
start_period: 10s
networks:
- disinto-net
@@ -556,20 +649,35 @@ COMPOSEEOF
fi
# Append local-model agent services if any are configured
# (must run before CLAUDE_BIN_PLACEHOLDER substitution so the placeholder
# in local-model services is also resolved)
_generate_local_model_services "$compose_file"
# Patch the Claude CLI binary path — resolve from host PATH at init time.
# Resolve the Claude CLI binary path and persist as CLAUDE_BIN_DIR in .env.
# docker-compose.yml references ${CLAUDE_BIN_DIR} so the value must be set.
local claude_bin
claude_bin="$(command -v claude 2>/dev/null || true)"
if [ -n "$claude_bin" ]; then
# Resolve symlinks to get the real binary path
claude_bin="$(readlink -f "$claude_bin")"
sed -i "s|CLAUDE_BIN_PLACEHOLDER|${claude_bin}|g" "$compose_file"
else
echo "Warning: claude CLI not found in PATH — update docker-compose.yml volumes manually" >&2
sed -i "s|CLAUDE_BIN_PLACEHOLDER|/usr/local/bin/claude|g" "$compose_file"
echo "Warning: claude CLI not found in PATH — set CLAUDE_BIN_DIR in .env manually" >&2
claude_bin="/usr/local/bin/claude"
fi
# Persist CLAUDE_BIN_DIR into .env so docker-compose can resolve it.
local env_file="${FACTORY_ROOT}/.env"
if [ -f "$env_file" ]; then
if grep -q "^CLAUDE_BIN_DIR=" "$env_file" 2>/dev/null; then
sed -i "s|^CLAUDE_BIN_DIR=.*|CLAUDE_BIN_DIR=${claude_bin}|" "$env_file"
else
printf 'CLAUDE_BIN_DIR=%s\n' "$claude_bin" >> "$env_file"
fi
else
printf 'CLAUDE_BIN_DIR=%s\n' "$claude_bin" > "$env_file"
fi
# In build mode, replace image: with build: for locally-built images
if [ "$use_build" = true ]; then
sed -i 's|^\( agents:\)|\1|' "$compose_file"
sed -i '/^ image: ghcr\.io\/disinto\/agents:/{s|image: ghcr\.io/disinto/agents:.*|build:\n context: .\n dockerfile: docker/agents/Dockerfile|}' "$compose_file"
sed -i '/^ image: ghcr\.io\/disinto\/edge:/{s|image: ghcr\.io/disinto/edge:.*|build: ./docker/edge|}' "$compose_file"
fi
echo "Created: ${compose_file}"
@@ -588,7 +696,11 @@ _generate_agent_docker_impl() {
fi
}
# Generate docker/Caddyfile template for edge proxy.
# Generate docker/Caddyfile for the edge proxy.
# **CANONICAL SOURCE**: This generator is the single source of truth for the Caddyfile.
# Output path: ${FACTORY_ROOT}/docker/Caddyfile (gitignored — generated artifact).
# The edge compose service mounts this path as /etc/caddy/Caddyfile.
# On a fresh clone, `disinto init` calls generate_caddyfile before first `disinto up`.
_generate_caddyfile_impl() {
local docker_dir="${FACTORY_ROOT}/docker"
local caddyfile="${docker_dir}/Caddyfile"


@@ -1,8 +1,10 @@
#!/usr/bin/env bash
# mirrors.sh — Push primary branch + tags to configured mirror remotes.
# mirrors.sh — Mirror helpers: push to remotes + register pull mirrors via API.
#
# Usage: source lib/mirrors.sh; mirror_push
# source lib/mirrors.sh; mirror_pull_register <clone_url> <owner> <repo_name> [interval]
# Requires: PROJECT_REPO_ROOT, PRIMARY_BRANCH, MIRROR_* vars from load-project.sh
# FORGE_API_BASE, FORGE_TOKEN for pull-mirror registration
# shellcheck disable=SC2154 # globals set by load-project.sh / calling script
@@ -37,3 +39,73 @@ mirror_push() {
log "mirror: pushed to ${name} (pid $!)"
done
}
# ---------------------------------------------------------------------------
# mirror_pull_register — register a Forgejo pull mirror via the /repos/migrate API.
#
# Creates a new repo as a pull mirror of an external source. Works against
# empty target repos (the repo is created by the API call itself).
#
# Usage:
# mirror_pull_register <clone_url> <owner> <repo_name> [interval]
#
# Args:
# clone_url — HTTPS URL of the source repo (e.g. https://codeberg.org/johba/disinto.git)
# owner — Forgejo org or user that will own the mirror repo
# repo_name — name of the new mirror repo on Forgejo
# interval — sync interval (default: "8h0m0s"; Forgejo duration format)
#
# Requires:
# FORGE_API_BASE, FORGE_TOKEN (from env.sh)
#
# Returns 0 on success, 1 on failure. Prints the new repo JSON to stdout.
# ---------------------------------------------------------------------------
mirror_pull_register() {
local clone_url="$1"
local owner="$2"
local repo_name="$3"
local interval="${4:-8h0m0s}"
if [ -z "${FORGE_API_BASE:-}" ] || [ -z "${FORGE_TOKEN:-}" ]; then
echo "ERROR: FORGE_API_BASE and FORGE_TOKEN must be set" >&2
return 1
fi
if [ -z "$clone_url" ] || [ -z "$owner" ] || [ -z "$repo_name" ]; then
echo "Usage: mirror_pull_register <clone_url> <owner> <repo_name> [interval]" >&2
return 1
fi
local payload
payload=$(jq -n \
--arg clone_addr "$clone_url" \
--arg repo_name "$repo_name" \
--arg repo_owner "$owner" \
--arg interval "$interval" \
'{
clone_addr: $clone_addr,
repo_name: $repo_name,
repo_owner: $repo_owner,
mirror: true,
mirror_interval: $interval,
service: "git"
}')
local http_code body
body=$(curl -s -w "\n%{http_code}" -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_API_BASE}/repos/migrate" \
-d "$payload")
http_code=$(printf '%s' "$body" | tail -n1)
body=$(printf '%s' "$body" | sed '$d')
if [ "$http_code" -ge 200 ] && [ "$http_code" -lt 300 ]; then
printf '%s\n' "$body"
return 0
else
echo "ERROR: mirror_pull_register failed (HTTP ${http_code}): ${body}" >&2
return 1
fi
}
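The jq payload construction above can be exercised on its own before wiring it to the API; a minimal sketch (the clone URL, owner, and repo name are hypothetical placeholders):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Build the /repos/migrate payload the same way mirror_pull_register does.
payload=$(jq -n \
  --arg clone_addr "https://codeberg.org/example/repo.git" \
  --arg repo_name "repo" \
  --arg repo_owner "example-org" \
  --arg interval "8h0m0s" \
  '{
    clone_addr: $clone_addr,
    repo_name: $repo_name,
    repo_owner: $repo_owner,
    mirror: true,
    mirror_interval: $interval,
    service: "git"
  }')

# Spot-check the generated shape before POSTing it anywhere.
printf '%s\n' "$payload" | jq -r '.mirror_interval'   # → 8h0m0s
```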


@@ -18,8 +18,8 @@
# =============================================================================
set -euo pipefail
# Source vault.sh for _vault_log helper
source "${FACTORY_ROOT}/lib/vault.sh"
# Source action-vault.sh for _vault_log helper
source "${FACTORY_ROOT}/lib/action-vault.sh"
# Assert required globals are set before using this module.
_assert_release_globals() {

lib/sprint-filer.sh Executable file

@@ -0,0 +1,585 @@
#!/usr/bin/env bash
# =============================================================================
# sprint-filer.sh — Parse merged sprint PRs and file sub-issues via filer-bot
#
# Invoked by the ops-filer Woodpecker pipeline after a sprint PR merges on the
# ops repo main branch. Parses each sprints/*.md file for a structured
# ## Sub-issues block (filer:begin/end markers), then creates idempotent
# Forgejo issues on the project repo using FORGE_FILER_TOKEN.
#
# Permission model (#764):
# filer-bot has issues:write on the project repo.
# architect-bot is read-only on the project repo.
#
# Usage:
# sprint-filer.sh <sprint-file.md> — file sub-issues from one sprint
# sprint-filer.sh --all <sprints-dir> — scan all sprint files in dir
#
# Environment:
# FORGE_FILER_TOKEN — filer-bot API token (issues:write on project repo)
# FORGE_API — project repo API base (e.g. http://forgejo:3000/api/v1/repos/org/repo)
# FORGE_API_BASE — API base URL (e.g. http://forgejo:3000/api/v1)
# =============================================================================
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Source env.sh only if not already loaded (allows standalone + sourced use)
if [ -z "${FACTORY_ROOT:-}" ]; then
FACTORY_ROOT="$(dirname "$SCRIPT_DIR")"
# shellcheck source=env.sh
source "$SCRIPT_DIR/env.sh"
fi
# ── Logging ──────────────────────────────────────────────────────────────
LOG_AGENT="${LOG_AGENT:-filer}"
filer_log() {
printf '[%s] %s: %s\n' "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" "$LOG_AGENT" "$*" >&2
}
# ── Validate required environment ────────────────────────────────────────
: "${FORGE_FILER_TOKEN:?sprint-filer.sh requires FORGE_FILER_TOKEN}"
: "${FORGE_API:?sprint-filer.sh requires FORGE_API}"
# ── Paginated Forgejo API fetch ──────────────────────────────────────────
# Reuses forge_api_all from lib/env.sh with FORGE_FILER_TOKEN.
# Args: api_path (e.g. /issues?state=all&type=issues)
# Output: merged JSON array to stdout
filer_api_all() { forge_api_all "$1" "$FORGE_FILER_TOKEN"; }
# ── Parse sub-issues block from a sprint markdown file ───────────────────
# Extracts the YAML-in-markdown between <!-- filer:begin --> and <!-- filer:end -->
# Args: sprint_file_path
# Output: the raw sub-issues block (YAML lines) to stdout
# Returns: 0 if block found, 1 if not found or malformed
parse_subissues_block() {
local sprint_file="$1"
if [ ! -f "$sprint_file" ]; then
filer_log "ERROR: sprint file not found: ${sprint_file}"
return 1
fi
local in_block=false
local block=""
local found=false
while IFS= read -r line; do
if [[ "$line" == *"<!-- filer:begin -->"* ]]; then
in_block=true
found=true
continue
fi
if [[ "$line" == *"<!-- filer:end -->"* ]]; then
in_block=false
continue
fi
if [ "$in_block" = true ]; then
block+="${line}"$'\n'
fi
done < "$sprint_file"
if [ "$found" = false ]; then
filer_log "No filer:begin/end block found in ${sprint_file}"
return 1
fi
if [ "$in_block" = true ]; then
filer_log "ERROR: malformed sub-issues block in ${sprint_file} — filer:begin without filer:end"
return 1
fi
if [ -z "$block" ]; then
filer_log "WARNING: empty sub-issues block in ${sprint_file}"
return 1
fi
printf '%s' "$block"
}
# ── Extract vision issue number from sprint file ─────────────────────────
# Looks for "#N" references specifically in the "## Vision issues" section
# to avoid picking up cross-links or related-issue mentions earlier in the file.
# Falls back to first #N in the file if no "## Vision issues" section found.
# Args: sprint_file_path
# Output: first vision issue number found
extract_vision_issue() {
local sprint_file="$1"
# Try to extract from "## Vision issues" section first
local in_section=false
local result=""
while IFS= read -r line; do
if [[ "$line" =~ ^##[[:space:]]+Vision[[:space:]]+issues ]]; then
in_section=true
continue
fi
# Stop at next heading
if [ "$in_section" = true ] && [[ "$line" =~ ^## ]]; then
break
fi
if [ "$in_section" = true ]; then
result=$(printf '%s' "$line" | grep -oE '#[0-9]+' | head -1 | tr -d '#')
if [ -n "$result" ]; then
printf '%s' "$result"
return 0
fi
fi
done < "$sprint_file"
# Fallback: first #N in the entire file
grep -oE '#[0-9]+' "$sprint_file" | head -1 | tr -d '#'
}
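The `#N` extraction pipeline used by the fallback can be checked standalone:

```bash
#!/usr/bin/env bash
set -euo pipefail

# grep -oE emits each "#N" match on its own line; head keeps the first,
# tr strips the leading '#'.
line='Sprint covers #42 and follow-up #43'
num=$(printf '%s' "$line" | grep -oE '#[0-9]+' | head -1 | tr -d '#')
echo "$num"   # → 42
```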
# ── Extract sprint slug from file path ───────────────────────────────────
# Args: sprint_file_path
# Output: slug (filename without .md)
extract_sprint_slug() {
local sprint_file="$1"
basename "$sprint_file" .md
}
# ── Parse individual sub-issue entries from the block ────────────────────
# The block is a simple YAML-like format:
# - id: foo
# title: "..."
# labels: [backlog, priority]
# depends_on: [bar]
# body: |
# multi-line body
#
# Args: raw_block (via stdin)
# Output: JSON array of sub-issue objects
parse_subissue_entries() {
local block
block=$(cat)
# Use awk to parse the YAML-like structure into JSON
printf '%s' "$block" | awk '
BEGIN {
printf "["
first = 1
inbody = 0
id = ""; title = ""; labels = ""; depends = ""; body = ""
}
function flush_entry() {
if (id == "") return
if (!first) printf ","
first = 0
# Escape JSON special characters in body
gsub(/\\/, "\\\\", body)
gsub(/"/, "\\\"", body)
gsub(/\t/, "\\t", body)
# Replace newlines with \n for JSON
gsub(/\n/, "\\n", body)
# Remove trailing \n
sub(/\\n$/, "", body)
# Clean up title (remove surrounding quotes)
gsub(/^"/, "", title)
gsub(/"$/, "", title)
printf "{\"id\":\"%s\",\"title\":\"%s\",\"labels\":%s,\"depends_on\":%s,\"body\":\"%s\"}", id, title, labels, depends, body
id = ""; title = ""; labels = "[]"; depends = "[]"; body = ""
inbody = 0
}
/^- id:/ {
flush_entry()
sub(/^- id: */, "")
id = $0
labels = "[]"
depends = "[]"
next
}
/^ title:/ {
sub(/^ title: */, "")
title = $0
# Remove surrounding quotes
gsub(/^"/, "", title)
gsub(/"$/, "", title)
next
}
/^ labels:/ {
sub(/^ labels: */, "")
# Convert [a, b] to JSON array ["a","b"]
gsub(/\[/, "", $0)
gsub(/\]/, "", $0)
n = split($0, arr, /, */)
labels = "["
for (i = 1; i <= n; i++) {
gsub(/^ */, "", arr[i])
gsub(/ *$/, "", arr[i])
if (arr[i] != "") {
if (i > 1) labels = labels ","
labels = labels "\"" arr[i] "\""
}
}
labels = labels "]"
next
}
/^ depends_on:/ {
sub(/^ depends_on: */, "")
gsub(/\[/, "", $0)
gsub(/\]/, "", $0)
n = split($0, arr, /, */)
depends = "["
for (i = 1; i <= n; i++) {
gsub(/^ */, "", arr[i])
gsub(/ *$/, "", arr[i])
if (arr[i] != "") {
if (i > 1) depends = depends ","
depends = depends "\"" arr[i] "\""
}
}
depends = depends "]"
next
}
/^ body: *\|/ {
inbody = 1
body = ""
next
}
inbody && /^ / {
sub(/^ /, "")
body = body $0 "\n"
next
}
inbody && !/^ / && !/^$/ {
inbody = 0
# This line starts a new field or entry — re-process it
# (awk does not support re-scanning, so handle common cases)
if ($0 ~ /^- id:/) {
flush_entry()
sub(/^- id: */, "")
id = $0
labels = "[]"
depends = "[]"
}
}
END {
flush_entry()
printf "]"
}
'
}
# ── Check if sub-issue already exists (idempotency) ─────────────────────
# Searches for the decomposed-from marker in existing issues.
# Args: vision_issue_number sprint_slug subissue_id
# Returns: 0 if already exists, 1 if not
subissue_exists() {
local vision_issue="$1"
local sprint_slug="$2"
local subissue_id="$3"
local marker="<!-- decomposed-from: #${vision_issue}, sprint: ${sprint_slug}, id: ${subissue_id} -->"
# Search all issues (paginated) for the exact marker
local issues_json
issues_json=$(filer_api_all "/issues?state=all&type=issues")
if printf '%s' "$issues_json" | jq -e --arg marker "$marker" \
'[.[] | select(.body // "" | contains($marker))] | length > 0' >/dev/null 2>&1; then
return 0 # Already exists
fi
return 1 # Does not exist
}
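The marker-containment check above can be reproduced against a canned issue list (the issue data here is hypothetical):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Two fake issues: one carries the decomposed-from marker, one has a null body.
issues='[{"number":1,"body":"Task body\n<!-- decomposed-from: #7, sprint: s1, id: foo -->"},{"number":2,"body":null}]'
marker='<!-- decomposed-from: #7, sprint: s1, id: foo -->'

# .body // "" guards against null bodies before the contains() test;
# jq -e sets the exit status from the boolean result.
if printf '%s' "$issues" | jq -e --arg marker "$marker" \
    '[.[] | select(.body // "" | contains($marker))] | length > 0' >/dev/null; then
  echo "exists"
fi
```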
# ── Resolve label names to IDs ───────────────────────────────────────────
# Args: label_names_json (JSON array of strings)
# Output: JSON array of label IDs
resolve_label_ids() {
local label_names_json="$1"
# Fetch all labels from project repo
local all_labels
all_labels=$(curl -sf -H "Authorization: token ${FORGE_FILER_TOKEN}" \
"${FORGE_API}/labels" 2>/dev/null) || all_labels="[]"
# Map names to IDs
printf '%s' "$label_names_json" | jq -r '.[]' | while IFS= read -r label_name; do
[ -z "$label_name" ] && continue
printf '%s' "$all_labels" | jq -r --arg name "$label_name" \
'.[] | select(.name == $name) | .id' 2>/dev/null
done | jq -Rs 'split("\n") | map(select(. != "") | tonumber)'
}
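The name-to-ID mapping plus the `jq -Rs` collapse can be tried against a canned label list (the label IDs are hypothetical):

```bash
#!/usr/bin/env bash
set -euo pipefail

all_labels='[{"id":10,"name":"backlog"},{"id":11,"name":"priority"}]'

# Unknown names ("missing") simply produce no output line, so they drop
# out when the newline-separated IDs are collapsed into a JSON array.
ids=$(printf 'backlog\npriority\nmissing\n' | while IFS= read -r name; do
  printf '%s' "$all_labels" | jq -r --arg name "$name" \
    '.[] | select(.name == $name) | .id'
done | jq -Rsc 'split("\n") | map(select(. != "") | tonumber)')
echo "$ids"   # → [10,11]
```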
# ── Add in-progress label to vision issue ────────────────────────────────
# Args: vision_issue_number
add_inprogress_label() {
local issue_num="$1"
local labels_json
labels_json=$(curl -sf -H "Authorization: token ${FORGE_FILER_TOKEN}" \
"${FORGE_API}/labels" 2>/dev/null) || return 1
local label_id
label_id=$(printf '%s' "$labels_json" | jq -r '.[] | select(.name == "in-progress") | .id' 2>/dev/null) || true
if [ -z "$label_id" ]; then
filer_log "WARNING: in-progress label not found"
return 1
fi
if curl -sf -X POST \
-H "Authorization: token ${FORGE_FILER_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_API}/issues/${issue_num}/labels" \
-d "{\"labels\": [${label_id}]}" >/dev/null 2>&1; then
filer_log "Added in-progress label to vision issue #${issue_num}"
return 0
else
filer_log "WARNING: failed to add in-progress label to vision issue #${issue_num}"
return 1
fi
}
# ── File sub-issues from a sprint file ───────────────────────────────────
# This is the main entry point. Parses the sprint file, extracts sub-issues,
# and creates them idempotently via the Forgejo API.
# Args: sprint_file_path
# Returns: 0 on success, 1 on any error (fail-fast)
file_subissues() {
local sprint_file="$1"
filer_log "Processing sprint file: ${sprint_file}"
# Extract metadata
local vision_issue sprint_slug
vision_issue=$(extract_vision_issue "$sprint_file")
sprint_slug=$(extract_sprint_slug "$sprint_file")
if [ -z "$vision_issue" ]; then
filer_log "ERROR: could not extract vision issue number from ${sprint_file}"
return 1
fi
filer_log "Vision issue: #${vision_issue}, sprint slug: ${sprint_slug}"
# Parse the sub-issues block
local raw_block
raw_block=$(parse_subissues_block "$sprint_file") || return 1
# Parse individual entries
local entries_json
entries_json=$(printf '%s' "$raw_block" | parse_subissue_entries)
# Validate parsing produced valid JSON
if ! printf '%s' "$entries_json" | jq empty 2>/dev/null; then
filer_log "ERROR: failed to parse sub-issues block as valid JSON in ${sprint_file}"
return 1
fi
local entry_count
entry_count=$(printf '%s' "$entries_json" | jq 'length')
if [ "$entry_count" -eq 0 ]; then
filer_log "WARNING: no sub-issue entries found in ${sprint_file}"
return 1
fi
filer_log "Found ${entry_count} sub-issue(s) to file"
# File each sub-issue (fail-fast on first error)
local filed_count=0
local i=0
while [ "$i" -lt "$entry_count" ]; do
local entry
entry=$(printf '%s' "$entries_json" | jq ".[$i]")
local subissue_id subissue_title subissue_body labels_json
subissue_id=$(printf '%s' "$entry" | jq -r '.id')
subissue_title=$(printf '%s' "$entry" | jq -r '.title')
subissue_body=$(printf '%s' "$entry" | jq -r '.body')
labels_json=$(printf '%s' "$entry" | jq -c '.labels')
if [ -z "$subissue_id" ] || [ "$subissue_id" = "null" ]; then
filer_log "ERROR: sub-issue entry at index ${i} has no id — aborting"
return 1
fi
if [ -z "$subissue_title" ] || [ "$subissue_title" = "null" ]; then
filer_log "ERROR: sub-issue '${subissue_id}' has no title — aborting"
return 1
fi
# Idempotency check
if subissue_exists "$vision_issue" "$sprint_slug" "$subissue_id"; then
filer_log "Sub-issue '${subissue_id}' already exists — skipping"
i=$((i + 1))
continue
fi
# Append decomposed-from marker to body
local marker="<!-- decomposed-from: #${vision_issue}, sprint: ${sprint_slug}, id: ${subissue_id} -->"
local full_body="${subissue_body}
${marker}"
# Resolve label names to IDs
local label_ids
label_ids=$(resolve_label_ids "$labels_json")
# Build issue payload using jq for safe JSON construction
local payload
payload=$(jq -n \
--arg title "$subissue_title" \
--arg body "$full_body" \
--argjson labels "$label_ids" \
'{title: $title, body: $body, labels: $labels}')
# Create the issue
local response
response=$(curl -sf -X POST \
-H "Authorization: token ${FORGE_FILER_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_API}/issues" \
-d "$payload" 2>/dev/null) || {
filer_log "ERROR: failed to create sub-issue '${subissue_id}' — aborting (${filed_count}/${entry_count} filed so far)"
return 1
}
local new_issue_num
new_issue_num=$(printf '%s' "$response" | jq -r '.number // empty')
filer_log "Filed sub-issue '${subissue_id}' as #${new_issue_num}: ${subissue_title}"
filed_count=$((filed_count + 1))
i=$((i + 1))
done
# Add in-progress label to the vision issue
add_inprogress_label "$vision_issue" || true
filer_log "Successfully filed ${filed_count}/${entry_count} sub-issue(s) for sprint ${sprint_slug}"
return 0
}
# ── Vision lifecycle: close completed vision issues ──────────────────────
# Checks open vision issues and closes any whose sub-issues are all closed.
# Uses the decomposed-from marker to find sub-issues.
check_and_close_completed_visions() {
filer_log "Checking for vision issues with all sub-issues complete..."
local vision_issues_json
vision_issues_json=$(filer_api_all "/issues?labels=vision&state=open")
if [ "$vision_issues_json" = "[]" ] || [ "$vision_issues_json" = "null" ]; then
filer_log "No open vision issues found"
return 0
fi
local all_issues
all_issues=$(filer_api_all "/issues?state=all&type=issues")
local vision_nums
vision_nums=$(printf '%s' "$vision_issues_json" | jq -r '.[].number' 2>/dev/null) || return 0
local closed_count=0
while IFS= read -r vid; do
[ -z "$vid" ] && continue
# Find sub-issues with decomposed-from marker for this vision
local sub_issues
sub_issues=$(printf '%s' "$all_issues" | jq --arg vid "$vid" \
'[.[] | select(.body // "" | contains("<!-- decomposed-from: #" + $vid))]')
local sub_count
sub_count=$(printf '%s' "$sub_issues" | jq 'length')
# No sub-issues means not ready to close
[ "$sub_count" -eq 0 ] && continue
# Check if all are closed
local open_count
open_count=$(printf '%s' "$sub_issues" | jq '[.[] | select(.state != "closed")] | length')
if [ "$open_count" -gt 0 ]; then
continue
fi
# All sub-issues closed — close the vision issue
filer_log "All ${sub_count} sub-issues for vision #${vid} are closed — closing vision"
local comment_body
comment_body="## Vision Issue Completed
All sub-issues have been implemented and merged. This vision issue is now closed.
---
*Automated closure by filer-bot · $(date -u '+%Y-%m-%d %H:%M UTC')*"
local comment_payload
comment_payload=$(jq -n --arg body "$comment_body" '{body: $body}')
curl -sf -X POST \
-H "Authorization: token ${FORGE_FILER_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_API}/issues/${vid}/comments" \
-d "$comment_payload" >/dev/null 2>&1 || true
curl -sf -X PATCH \
-H "Authorization: token ${FORGE_FILER_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_API}/issues/${vid}" \
-d '{"state":"closed"}' >/dev/null 2>&1 || true
closed_count=$((closed_count + 1))
done <<< "$vision_nums"
if [ "$closed_count" -gt 0 ]; then
filer_log "Closed ${closed_count} vision issue(s)"
fi
}
# ── Main ─────────────────────────────────────────────────────────────────
main() {
if [ "${1:-}" = "--all" ]; then
local sprints_dir="${2:?Usage: sprint-filer.sh --all <sprints-dir>}"
local exit_code=0
for sprint_file in "${sprints_dir}"/*.md; do
[ -f "$sprint_file" ] || continue
# Only process files with filer:begin markers
if ! grep -q '<!-- filer:begin -->' "$sprint_file"; then
continue
fi
if ! file_subissues "$sprint_file"; then
filer_log "ERROR: failed to process ${sprint_file}"
exit_code=1
fi
done
# Run vision lifecycle check after filing
check_and_close_completed_visions || true
return "$exit_code"
elif [ -n "${1:-}" ]; then
file_subissues "$1"
# Run vision lifecycle check after filing
check_and_close_completed_visions || true
else
echo "Usage: sprint-filer.sh <sprint-file.md>" >&2
echo " sprint-filer.sh --all <sprints-dir>" >&2
return 1
fi
}
# Run main only when executed directly (not when sourced for testing)
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@"
fi


@@ -1,4 +1,4 @@
<!-- last-reviewed: c4ca1e930d7be3f95060971ce4fa949dab2f76e7 -->
<!-- last-reviewed: 18190874cae869527f675f717423ded735f2c555 -->
# Planner Agent
**Role**: Strategic planning using a Prerequisite Tree (Theory of Constraints),
@@ -34,7 +34,9 @@ will then sections) and marks the prerequisite as blocked-on-vault in the tree.
Deduplication: checks pending/ + approved/ + fired/ before creating.
Phase 4 (journal-and-memory): write updated prerequisite tree + daily journal
entry (committed to ops repo) and update `$OPS_REPO_ROOT/knowledge/planner-memory.md`.
Phase 5 (commit-ops): commit all ops repo changes, push directly.
Phase 5 (commit-ops): commit all ops repo changes to a `planner/run-YYYY-MM-DD`
branch, then create a PR and walk it to merge via review-bot (`pr_create`,
then `pr_walk_to_merge`), mirroring the architect's ops flow. No direct push to main.
AGENTS.md maintenance is handled by the Gardener.
**Artifacts use `$OPS_REPO_ROOT`**: All planner artifacts (journal,
@@ -55,7 +57,7 @@ nervous system component, not work.
creates tmux session, injects formula prompt, monitors phase file, handles crash recovery, cleans up
- `formulas/run-planner.toml` — Execution spec: six steps (preflight,
prediction-triage, update-prerequisite-tree, file-at-constraints,
journal-and-memory, commit-and-pr) with `needs` dependencies. Claude
journal-and-memory, commit-ops-changes) with `needs` dependencies. Claude
executes all steps in a single interactive session with tool access
- `formulas/groom-backlog.toml` — Grooming formula for backlog triage.
(Note: the planner no longer dispatches breakdown mode — complex


@@ -10,7 +10,9 @@
# 2. Load formula (formulas/run-planner.toml)
# 3. Context: VISION.md, AGENTS.md, ops:RESOURCES.md, structural graph,
# planner memory, journal entries
# 4. agent_run(worktree, prompt) → Claude plans, may push knowledge updates
# 4. Create ops branch planner/run-YYYY-MM-DD for changes
# 5. agent_run(worktree, prompt) → Claude plans, commits to ops branch
# 6. If ops branch has commits: pr_create → pr_walk_to_merge (review-bot)
#
# Usage:
# planner-run.sh [projects/disinto.toml] # project config (default: disinto)
@@ -22,10 +24,11 @@ FACTORY_ROOT="$(dirname "$SCRIPT_DIR")"
# Accept project config from argument; default to disinto (planner is disinto infrastructure)
export PROJECT_TOML="${1:-$FACTORY_ROOT/projects/disinto.toml}"
# Set override BEFORE sourcing env.sh so it survives any later re-source of
# env.sh from nested shells / claude -p tools (#762, #747)
export FORGE_TOKEN_OVERRIDE="${FORGE_PLANNER_TOKEN:-}"
# shellcheck source=../lib/env.sh
source "$FACTORY_ROOT/lib/env.sh"
# Use planner-bot's own Forgejo identity (#747)
FORGE_TOKEN="${FORGE_PLANNER_TOKEN:-${FORGE_TOKEN}}"
# shellcheck source=../lib/formula-session.sh
source "$FACTORY_ROOT/lib/formula-session.sh"
# shellcheck source=../lib/worktree.sh
@@ -34,6 +37,10 @@ source "$FACTORY_ROOT/lib/worktree.sh"
source "$FACTORY_ROOT/lib/guard.sh"
# shellcheck source=../lib/agent-sdk.sh
source "$FACTORY_ROOT/lib/agent-sdk.sh"
# shellcheck source=../lib/ci-helpers.sh
source "$FACTORY_ROOT/lib/ci-helpers.sh"
# shellcheck source=../lib/pr-lifecycle.sh
source "$FACTORY_ROOT/lib/pr-lifecycle.sh"
LOG_FILE="${DISINTO_LOG_DIR}/planner/planner.log"
# shellcheck disable=SC2034 # consumed by agent-sdk.sh
@@ -145,12 +152,69 @@ ${PROMPT_FOOTER}"
# ── Create worktree ──────────────────────────────────────────────────────
formula_worktree_setup "$WORKTREE"
# ── Prepare ops branch for PR-based merge (#765) ────────────────────────
PLANNER_OPS_BRANCH="planner/run-$(date -u +%Y-%m-%d)"
(
cd "$OPS_REPO_ROOT"
git fetch origin "${PRIMARY_BRANCH}" --quiet 2>/dev/null || true
git checkout "${PRIMARY_BRANCH}" --quiet 2>/dev/null || true
git pull --ff-only origin "${PRIMARY_BRANCH}" --quiet 2>/dev/null || true
# Create (or reset to) a fresh branch from PRIMARY_BRANCH
git checkout -B "$PLANNER_OPS_BRANCH" "origin/${PRIMARY_BRANCH}" --quiet 2>/dev/null || \
git checkout -b "$PLANNER_OPS_BRANCH" --quiet 2>/dev/null || true
)
log "ops branch: ${PLANNER_OPS_BRANCH}"
# ── Run agent ─────────────────────────────────────────────────────────────
export CLAUDE_MODEL="opus"
agent_run --worktree "$WORKTREE" "$PROMPT"
log "agent_run complete"
# ── PR lifecycle: create PR on ops repo and walk to merge (#765) ─────────
OPS_FORGE_API="${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}"
ops_has_commits=false
if ! git -C "$OPS_REPO_ROOT" diff --quiet "origin/${PRIMARY_BRANCH}..${PLANNER_OPS_BRANCH}" 2>/dev/null; then
ops_has_commits=true
fi
if [ "$ops_has_commits" = "true" ]; then
log "ops branch has commits — creating PR"
# Push the branch to the ops remote
git -C "$OPS_REPO_ROOT" push origin "$PLANNER_OPS_BRANCH" --quiet 2>/dev/null || \
git -C "$OPS_REPO_ROOT" push --force-with-lease origin "$PLANNER_OPS_BRANCH" 2>/dev/null
# Temporarily point FORGE_API at the ops repo for pr-lifecycle functions
ORIG_FORGE_API="$FORGE_API"
export FORGE_API="$OPS_FORGE_API"
# Ops repo typically has no Woodpecker CI — skip CI polling
ORIG_WOODPECKER_REPO_ID="${WOODPECKER_REPO_ID:-2}"
export WOODPECKER_REPO_ID="0"
PR_NUM=$(pr_create "$PLANNER_OPS_BRANCH" \
"chore: planner run $(date -u +%Y-%m-%d)" \
"Automated planner run — updates prerequisite tree, memory, and vault items." \
"${PRIMARY_BRANCH}" \
"$OPS_FORGE_API") || true
if [ -n "$PR_NUM" ]; then
log "ops PR #${PR_NUM} created — walking to merge"
SESSION_ID=$(cat "$SID_FILE" 2>/dev/null || echo "planner-$$")
pr_walk_to_merge "$PR_NUM" "$SESSION_ID" "$OPS_REPO_ROOT" 1 2 || {
log "ops PR #${PR_NUM} walk finished: ${_PR_WALK_EXIT_REASON:-unknown}"
}
log "ops PR #${PR_NUM} result: ${_PR_WALK_EXIT_REASON:-unknown}"
else
log "WARNING: failed to create ops PR for branch ${PLANNER_OPS_BRANCH}"
fi
# Restore original FORGE_API
export FORGE_API="$ORIG_FORGE_API"
export WOODPECKER_REPO_ID="$ORIG_WOODPECKER_REPO_ID"
else
log "no ops changes — skipping PR creation"
fi
# Persist watermarks so next run can skip if nothing changed
mkdir -p "$FACTORY_ROOT/state"
echo "$CURRENT_SHA" > "$LAST_SHA_FILE"


@@ -1,4 +1,4 @@
<!-- last-reviewed: c4ca1e930d7be3f95060971ce4fa949dab2f76e7 -->
<!-- last-reviewed: 18190874cae869527f675f717423ded735f2c555 -->
# Predictor Agent
**Role**: Abstract adversary (the "goblin"). Runs a 2-step formula


@@ -23,10 +24,11 @@ FACTORY_ROOT="$(dirname "$SCRIPT_DIR")"
# Accept project config from argument; default to disinto
export PROJECT_TOML="${1:-$FACTORY_ROOT/projects/disinto.toml}"
# Set override BEFORE sourcing env.sh so it survives any later re-source of
# env.sh from nested shells / claude -p tools (#762, #747)
export FORGE_TOKEN_OVERRIDE="${FORGE_PREDICTOR_TOKEN:-}"
# shellcheck source=../lib/env.sh
source "$FACTORY_ROOT/lib/env.sh"
# Use predictor-bot's own Forgejo identity (#747)
FORGE_TOKEN="${FORGE_PREDICTOR_TOKEN:-${FORGE_TOKEN}}"
# shellcheck source=../lib/formula-session.sh
source "$FACTORY_ROOT/lib/formula-session.sh"
# shellcheck source=../lib/worktree.sh


@@ -1,4 +1,4 @@
<!-- last-reviewed: c4ca1e930d7be3f95060971ce4fa949dab2f76e7 -->
<!-- last-reviewed: 18190874cae869527f675f717423ded735f2c555 -->
# Review Agent
**Role**: AI-powered PR review — post structured findings and formal


@@ -59,6 +59,21 @@ fi
mkdir -p "$EVIDENCE_DIR"
# Verify input is Caddy JSON format (not Combined Log Format or other)
first_line=$(grep -m1 '.' "$CADDY_LOG" || true)
if [ -z "$first_line" ]; then
log "WARN: Caddy access log is empty at ${CADDY_LOG}"
echo "WARN: Caddy access log is empty — nothing to parse." >&2
exit 0
fi
if ! printf '%s\n' "$first_line" | jq empty 2>/dev/null; then
preview="${first_line:0:200}"
log "ERROR: Input file is not Caddy JSON format (expected structured JSON access log). Got: ${preview}"
echo "ERROR: Input file is not Caddy JSON format (expected structured JSON access log)." >&2
echo "Got: ${preview}" >&2
exit 1
fi
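`jq empty` as a format probe can be seen in isolation: it parses its input and prints nothing, so the exit status alone distinguishes structured JSON from, say, Combined Log Format:

```bash
#!/usr/bin/env bash

# jq empty parses its input and emits nothing: exit 0 for valid JSON,
# non-zero for anything else — a cheap structured-log format probe.
probe() {
  if printf '%s\n' "$1" | jq empty 2>/dev/null; then
    echo "json"
  else
    echo "not-json"
  fi
}

probe '{"ts":1700000000,"status":200}'                     # → json
probe '127.0.0.1 - - [10/Oct/2000] "GET / HTTP/1.1" 200'   # → not-json
```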
# ── Parse access log ────────────────────────────────────────────────────────
log "Parsing ${CADDY_LOG} for entries since $(date -u -d "@${CUTOFF_TS}" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || echo "${CUTOFF_TS}")"


@@ -1,4 +1,4 @@
<!-- last-reviewed: c4ca1e930d7be3f95060971ce4fa949dab2f76e7 -->
<!-- last-reviewed: 18190874cae869527f675f717423ded735f2c555 -->
# Supervisor Agent
**Role**: Health monitoring and auto-remediation, executed as a formula-driven


@@ -25,10 +25,11 @@ FACTORY_ROOT="$(dirname "$SCRIPT_DIR")"
# Accept project config from argument; default to disinto
export PROJECT_TOML="${1:-$FACTORY_ROOT/projects/disinto.toml}"
# Set override BEFORE sourcing env.sh so it survives any later re-source of
# env.sh from nested shells / claude -p tools (#762, #747)
export FORGE_TOKEN_OVERRIDE="${FORGE_SUPERVISOR_TOKEN:-}"
# shellcheck source=../lib/env.sh
source "$FACTORY_ROOT/lib/env.sh"
# Use supervisor-bot's own Forgejo identity (#747)
FORGE_TOKEN="${FORGE_SUPERVISOR_TOKEN:-${FORGE_TOKEN}}"
# shellcheck source=../lib/formula-session.sh
source "$FACTORY_ROOT/lib/formula-session.sh"
# shellcheck source=../lib/worktree.sh

tests/smoke-load-secret.sh Normal file

@@ -0,0 +1,162 @@
#!/usr/bin/env bash
# tests/smoke-load-secret.sh — Unit tests for load_secret() precedence chain
#
# Covers the 4 precedence cases:
# 1. /secrets/<NAME>.env (Nomad template)
# 2. Current environment
# 3. secrets/<NAME>.enc (age-encrypted per-key file)
# 4. Default / empty fallback
#
# Required tools: bash, age (for case 3)
set -euo pipefail
FACTORY_ROOT="$(cd "$(dirname "$0")/.." && pwd)"
fail() { printf 'FAIL: %s\n' "$*" >&2; FAILED=1; }
pass() { printf 'PASS: %s\n' "$*"; }
FAILED=0
# Set up a temp workspace and fake HOME so age key paths work
test_dir=$(mktemp -d)
fake_home=$(mktemp -d)
trap 'rm -rf "$test_dir" "$fake_home"' EXIT
# Minimal env for sourcing env.sh's load_secret function without the full boot
# We source the function definition directly to isolate the unit under test.
# shellcheck disable=SC2034
export USER="${USER:-test}"
export HOME="$fake_home"
# Source env.sh to get load_secret (and FACTORY_ROOT)
source "${FACTORY_ROOT}/lib/env.sh"
# ── Case 4: Default / empty fallback ────────────────────────────────────────
echo "=== 1/5 Case 4: default fallback ==="
unset TEST_SECRET_FALLBACK 2>/dev/null || true
val=$(load_secret TEST_SECRET_FALLBACK "my-default")
if [ "$val" = "my-default" ]; then
pass "load_secret returns default when nothing is set"
else
fail "Expected 'my-default', got '${val}'"
fi
val=$(load_secret TEST_SECRET_FALLBACK)
if [ -z "$val" ]; then
pass "load_secret returns empty when no default and nothing set"
else
fail "Expected empty, got '${val}'"
fi
# ── Case 2: Environment variable already set ────────────────────────────────
echo "=== 2/5 Case 2: environment variable ==="
export TEST_SECRET_ENV="from-environment"
val=$(load_secret TEST_SECRET_ENV "ignored-default")
if [ "$val" = "from-environment" ]; then
pass "load_secret returns env value over default"
else
fail "Expected 'from-environment', got '${val}'"
fi
unset TEST_SECRET_ENV
# ── Case 3: Age-encrypted per-key file ──────────────────────────────────────
echo "=== 3/5 Case 3: age-encrypted secret ==="
if command -v age &>/dev/null && command -v age-keygen &>/dev/null; then
# Generate a test age key
age_key_dir="${fake_home}/.config/sops/age"
mkdir -p "$age_key_dir"
age-keygen -o "${age_key_dir}/keys.txt" 2>/dev/null
pub_key=$(age-keygen -y "${age_key_dir}/keys.txt")
# Create encrypted secret
secrets_dir="${FACTORY_ROOT}/secrets"
mkdir -p "$secrets_dir"
printf 'age-test-value' | age -r "$pub_key" -o "${secrets_dir}/TEST_SECRET_AGE.enc"
unset TEST_SECRET_AGE 2>/dev/null || true
val=$(load_secret TEST_SECRET_AGE "fallback")
if [ "$val" = "age-test-value" ]; then
pass "load_secret decrypts age-encrypted secret"
else
fail "Expected 'age-test-value', got '${val}'"
fi
# Verify caching: call load_secret directly (not in subshell) so export propagates
unset TEST_SECRET_AGE 2>/dev/null || true
load_secret TEST_SECRET_AGE >/dev/null
if [ "${TEST_SECRET_AGE:-}" = "age-test-value" ]; then
pass "load_secret caches decrypted value in environment (direct call)"
else
fail "Decrypted value not cached in environment"
fi
# Clean up test secret
rm -f "${secrets_dir}/TEST_SECRET_AGE.enc"
rmdir "$secrets_dir" 2>/dev/null || true
unset TEST_SECRET_AGE
else
echo "SKIP: age/age-keygen not found — skipping age decryption test"
fi
# ── Case 1: Nomad template path ────────────────────────────────────────────
echo "=== 4/5 Case 1: Nomad template (/secrets/<NAME>.env) ==="
nomad_dir="/secrets"
if [ -w "$(dirname "$nomad_dir")" ] 2>/dev/null || [ -w "$nomad_dir" ] 2>/dev/null; then
mkdir -p "$nomad_dir"
printf 'TEST_SECRET_NOMAD=from-nomad-template\n' > "${nomad_dir}/TEST_SECRET_NOMAD.env"
# Even with env set, Nomad path takes precedence
export TEST_SECRET_NOMAD="from-env-should-lose"
val=$(load_secret TEST_SECRET_NOMAD "default")
if [ "$val" = "from-nomad-template" ]; then
pass "load_secret prefers Nomad template over env"
else
fail "Expected 'from-nomad-template', got '${val}'"
fi
rm -f "${nomad_dir}/TEST_SECRET_NOMAD.env"
rmdir "$nomad_dir" 2>/dev/null || true
unset TEST_SECRET_NOMAD
else
echo "SKIP: /secrets not writable — skipping Nomad template test (needs root or container)"
fi
# ── Precedence: env beats age ────────────────────────────────────────────
echo "=== 5/5 Precedence: env beats age-encrypted ==="
if command -v age &>/dev/null && command -v age-keygen &>/dev/null; then
age_key_dir="${fake_home}/.config/sops/age"
mkdir -p "$age_key_dir"
[ -f "${age_key_dir}/keys.txt" ] || age-keygen -o "${age_key_dir}/keys.txt" 2>/dev/null
pub_key=$(age-keygen -y "${age_key_dir}/keys.txt")
secrets_dir="${FACTORY_ROOT}/secrets"
mkdir -p "$secrets_dir"
printf 'age-value-should-lose' | age -r "$pub_key" -o "${secrets_dir}/TEST_SECRET_PREC.enc"
export TEST_SECRET_PREC="env-value-wins"
val=$(load_secret TEST_SECRET_PREC "default")
if [ "$val" = "env-value-wins" ]; then
pass "load_secret prefers env over age-encrypted file"
else
fail "Expected 'env-value-wins', got '${val}'"
fi
rm -f "${secrets_dir}/TEST_SECRET_PREC.enc"
rmdir "$secrets_dir" 2>/dev/null || true
unset TEST_SECRET_PREC
else
echo "SKIP: age not found — skipping precedence test"
fi
# ── Summary ───────────────────────────────────────────────────────────────
echo ""
if [ "$FAILED" -ne 0 ]; then
echo "=== SMOKE-LOAD-SECRET TEST FAILED ==="
exit 1
fi
echo "=== SMOKE-LOAD-SECRET TEST PASSED ==="
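The precedence chain these cases exercise can be modelled as a simplified pure-bash sketch. `load_secret_sketch` and the `secrets_demo/` path are hypothetical; the sketch collapses the chain to environment over per-key file over default, matching the env-beats-age ordering verified above (the real `load_secret` additionally puts Nomad templates above env and decrypts the file with age):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Simplified model: current environment wins, then a plaintext per-key
# file, then the caller-supplied default.
load_secret_sketch() {
  local name="$1" default="${2:-}" file="secrets_demo/${1}.txt"
  local cur="${!name:-}"
  if [ -n "$cur" ]; then printf '%s' "$cur"; return; fi
  if [ -f "$file" ]; then cat "$file"; return; fi
  printf '%s' "$default"
}

mkdir -p secrets_demo
printf 'from-file' > secrets_demo/DEMO_SECRET.txt

load_secret_sketch DEMO_SECRET fallback; echo   # → from-file
export DEMO_SECRET="from-env"
load_secret_sketch DEMO_SECRET fallback; echo   # → from-env
rm -rf secrets_demo
```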


@@ -83,9 +83,12 @@ curl -sL https://raw.githubusercontent.com/disinto-admin/disinto/fix/issue-621/t
- Permissions: `root:disinto-register 0750`
3. **Installs Caddy**:
- Backs up any pre-existing `/etc/caddy/Caddyfile` to `/etc/caddy/Caddyfile.pre-disinto`
- Downloads Caddy with the Gandi DNS plugin
- Enables the admin API on `127.0.0.1:2019`
- Configures a wildcard cert for `*.disinto.ai` via DNS-01
- Creates `/etc/caddy/extra.d/` for operator-owned site blocks
- Emitted Caddyfile ends with `import /etc/caddy/extra.d/*.caddy`
4. **Sets up SSH**:
- Creates `disinto-register` authorized_keys with forced command
@@ -95,6 +98,27 @@ curl -sL https://raw.githubusercontent.com/disinto-admin/disinto/fix/issue-621/t
- `/opt/disinto-edge/register.sh` — forced command handler
- `/opt/disinto-edge/lib/*.sh` — helper libraries
## Operator-Owned Site Blocks
Edge-control owns the top-level `/etc/caddy/Caddyfile` and dynamic `<project>.<DOMAIN_SUFFIX>` routes injected via the Caddy admin API. Operators own everything under `/etc/caddy/extra.d/`.
To serve non-tunnel content (apex domain, www redirect, static sites), drop `.caddy` files into `/etc/caddy/extra.d/`:
```bash
# Example: /etc/caddy/extra.d/landing.caddy
disinto.ai {
root * /home/debian/disinto-site
file_server
}
# Example: /etc/caddy/extra.d/www-redirect.caddy
www.disinto.ai {
redir https://disinto.ai{uri} permanent
}
```
These files survive across `install.sh` re-runs. The `--extra-caddyfile <path>` flag overrides the default import glob (`/etc/caddy/extra.d/*.caddy`) if needed.
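Because `import` takes a glob, only files matching `*.caddy` are loaded; anything else dropped into `extra.d/` is ignored. A throwaway illustration of the matching, using a scratch directory rather than the real `/etc/caddy` tree:

```shell
# Scratch stand-in for /etc/caddy/extra.d (paths here are throwaway).
dir=$(mktemp -d)
touch "${dir}/landing.caddy" "${dir}/www-redirect.caddy" "${dir}/notes.txt"
# The same *.caddy pattern the generated Caddyfile imports:
caddy_files=("${dir}"/*.caddy)
echo "${#caddy_files[@]}"   # → 2 (notes.txt does not match)
rm -rf "$dir"
```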
## Usage
### Register a Tunnel (from dev box)


@@ -43,18 +43,21 @@ INSTALL_DIR="/opt/disinto-edge"
REGISTRY_DIR="/var/lib/disinto"
CADDY_VERSION="2.8.4"
DOMAIN_SUFFIX="disinto.ai"
EXTRA_CADDYFILE="/etc/caddy/extra.d/*.caddy"
usage() {
cat <<EOF
Usage: $0 [options]
Options:
--gandi-token <token> Gandi API token for wildcard cert (required)
--install-dir <dir> Install directory (default: /opt/disinto-edge)
--registry-dir <dir> Registry directory (default: /var/lib/disinto)
--caddy-version <ver> Caddy version to install (default: ${CADDY_VERSION})
--domain-suffix <suffix> Domain suffix for tunnels (default: disinto.ai)
-h, --help Show this help
--gandi-token <token> Gandi API token for wildcard cert (required)
--install-dir <dir> Install directory (default: /opt/disinto-edge)
--registry-dir <dir> Registry directory (default: /var/lib/disinto)
--caddy-version <ver> Caddy version to install (default: ${CADDY_VERSION})
--domain-suffix <suffix> Domain suffix for tunnels (default: disinto.ai)
--extra-caddyfile <path> Import path for operator-owned Caddy config
(default: /etc/caddy/extra.d/*.caddy)
-h, --help Show this help
Example:
$0 --gandi-token YOUR_GANDI_API_TOKEN
@@ -84,6 +87,10 @@ while [[ $# -gt 0 ]]; do
DOMAIN_SUFFIX="$2"
shift 2
;;
--extra-caddyfile)
EXTRA_CADDYFILE="$2"
shift 2
;;
-h|--help)
usage
;;
@@ -225,8 +232,29 @@ EOF
chmod 600 "$GANDI_ENV"
# Create Caddyfile with admin API and wildcard cert
# Note: Caddy auto-generates server names (srv0, srv1, …). lib/caddy.sh
# discovers the server name dynamically via _discover_server_name() so we
# don't need to name the server here.
CADDYFILE="/etc/caddy/Caddyfile"
cat > "$CADDYFILE" <<EOF
# Back up existing Caddyfile before overwriting
if [ -f "$CADDYFILE" ] && [ ! -f "${CADDYFILE}.pre-disinto" ]; then
  cp "$CADDYFILE" "${CADDYFILE}.pre-disinto"
  log_info "Backed up existing Caddyfile to ${CADDYFILE}.pre-disinto"
fi
# Create extra.d directory for operator-owned site blocks
EXTRA_DIR="/etc/caddy/extra.d"
mkdir -p "$EXTRA_DIR"
chmod 0755 "$EXTRA_DIR"
if getent group caddy >/dev/null 2>&1; then
  chown root:caddy "$EXTRA_DIR"
else
  log_warn "Group 'caddy' does not exist; extra.d owned by root:root"
fi
log_info "Created ${EXTRA_DIR} for operator-owned Caddy config"
cat > "$CADDYFILE" <<CADDYEOF
# Caddy configuration for edge control plane
# Admin API enabled on 127.0.0.1:2019
@@ -240,7 +268,10 @@ cat > "$CADDYFILE" <<EOF
dns gandi {env.GANDI_API_KEY}
}
}
EOF
# Operator-owned site blocks (apex, www, static content, etc.)
import ${EXTRA_CADDYFILE}
CADDYEOF
# Start Caddy
systemctl restart caddy 2>/dev/null || {
@@ -359,6 +390,7 @@ echo "Configuration:"
echo " Install directory: ${INSTALL_DIR}"
echo " Registry: ${REGISTRY_FILE}"
echo " Caddy admin API: http://127.0.0.1:2019"
echo " Operator site blocks: ${EXTRA_DIR}/ (import ${EXTRA_CADDYFILE})"
echo ""
echo "Users:"
echo " disinto-register - SSH forced command (runs ${INSTALL_DIR}/register.sh)"


@@ -19,6 +19,24 @@ CADDY_ADMIN_URL="${CADDY_ADMIN_URL:-http://127.0.0.1:2019}"
# Domain suffix for projects
DOMAIN_SUFFIX="${DOMAIN_SUFFIX:-disinto.ai}"
# Discover the Caddy server name that listens on :80/:443
# Usage: _discover_server_name
_discover_server_name() {
  local server_name
  server_name=$(curl -sS "${CADDY_ADMIN_URL}/config/apps/http/servers" \
    | jq -r 'to_entries | map(select(.value.listen[]? | test(":(80|443)$"))) | .[0].key // empty') || {
    echo "Error: could not query Caddy admin API for servers" >&2
    return 1
  }
  if [ -z "$server_name" ]; then
    echo "Error: could not find a Caddy server listening on :80/:443" >&2
    return 1
  fi
  echo "$server_name"
}
# Add a route for a project
# Usage: add_route <project> <port>
add_route() {
@@ -26,6 +44,9 @@ add_route() {
local port="$2"
local fqdn="${project}.${DOMAIN_SUFFIX}"
local server_name
server_name=$(_discover_server_name) || return 1
# Build the route configuration (partial config)
local route_config
route_config=$(cat <<EOF
@@ -58,16 +79,21 @@
EOF
)
# Append route using POST /config/apps/http/servers/edge/routes
local response
response=$(curl -s -X POST \
"${CADDY_ADMIN_URL}/config/apps/http/servers/edge/routes" \
# Append route via admin API, checking HTTP status
local response status body
response=$(curl -sS -w '\n%{http_code}' -X POST \
"${CADDY_ADMIN_URL}/config/apps/http/servers/${server_name}/routes" \
-H "Content-Type: application/json" \
-d "$route_config" 2>&1) || {
-d "$route_config") || {
echo "Error: failed to add route for ${fqdn}" >&2
echo "Response: ${response}" >&2
return 1
}
status=$(echo "$response" | tail -n1)
body=$(echo "$response" | sed '$d')
if [ "$status" -ge 400 ]; then
echo "Error: Caddy admin API returned ${status}: ${body}" >&2
return 1
fi
echo "Added route: ${fqdn} → 127.0.0.1:${port}" >&2
}
@@ -78,31 +104,45 @@ remove_route() {
local project="$1"
local fqdn="${project}.${DOMAIN_SUFFIX}"
# First, get current routes
local routes_json
routes_json=$(curl -s "${CADDY_ADMIN_URL}/config/apps/http/servers/edge/routes" 2>&1) || {
local server_name
server_name=$(_discover_server_name) || return 1
# First, get current routes, checking HTTP status
local response status body
response=$(curl -sS -w '\n%{http_code}' \
"${CADDY_ADMIN_URL}/config/apps/http/servers/${server_name}/routes") || {
echo "Error: failed to get current routes" >&2
return 1
}
status=$(echo "$response" | tail -n1)
body=$(echo "$response" | sed '$d')
if [ "$status" -ge 400 ]; then
echo "Error: Caddy admin API returned ${status}: ${body}" >&2
return 1
fi
# Find the route index that matches our fqdn using jq
local route_index
route_index=$(echo "$routes_json" | jq -r "to_entries[] | select(.value.match[]?.host[]? == \"${fqdn}\") | .key" 2>/dev/null | head -1)
route_index=$(echo "$body" | jq -r "to_entries[] | select(.value.match[]?.host[]? == \"${fqdn}\") | .key" 2>/dev/null | head -1)
if [ -z "$route_index" ] || [ "$route_index" = "null" ]; then
echo "Warning: route for ${fqdn} not found" >&2
return 0
fi
# Delete the route at the found index
local response
response=$(curl -s -X DELETE \
"${CADDY_ADMIN_URL}/config/apps/http/servers/edge/routes/${route_index}" \
-H "Content-Type: application/json" 2>&1) || {
# Delete the route at the found index, checking HTTP status
response=$(curl -sS -w '\n%{http_code}' -X DELETE \
"${CADDY_ADMIN_URL}/config/apps/http/servers/${server_name}/routes/${route_index}" \
-H "Content-Type: application/json") || {
echo "Error: failed to remove route for ${fqdn}" >&2
echo "Response: ${response}" >&2
return 1
}
status=$(echo "$response" | tail -n1)
body=$(echo "$response" | sed '$d')
if [ "$status" -ge 400 ]; then
echo "Error: Caddy admin API returned ${status}: ${body}" >&2
return 1
fi
echo "Removed route: ${fqdn}" >&2
}
@@ -110,13 +150,18 @@ remove_route() {
# Reload Caddy to apply configuration changes
# Usage: reload_caddy
reload_caddy() {
local response
response=$(curl -s -X POST \
"${CADDY_ADMIN_URL}/reload" 2>&1) || {
local response status body
response=$(curl -sS -w '\n%{http_code}' -X POST \
"${CADDY_ADMIN_URL}/reload") || {
echo "Error: failed to reload Caddy" >&2
echo "Response: ${response}" >&2
return 1
}
status=$(echo "$response" | tail -n1)
body=$(echo "$response" | sed '$d')
if [ "$status" -ge 400 ]; then
echo "Error: Caddy reload returned ${status}: ${body}" >&2
return 1
fi
echo "Caddy reloaded" >&2
}
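Both new patterns in this file can be exercised offline: the `jq` filter from `_discover_server_name` against a canned `/config/apps/http/servers` payload, and the body/status split produced by `curl -w '\n%{http_code}'`. A small sketch (the server names and JSON below are invented, not real Caddy output):

```shell
# 1. Server-name discovery: the same jq filter as _discover_server_name,
#    run against a canned payload with made-up server names.
servers='{"srv0":{"listen":[":443"]},"admin_mirror":{"listen":["127.0.0.1:2019"]}}'
name=$(echo "$servers" \
  | jq -r 'to_entries | map(select(.value.listen[]? | test(":(80|443)$"))) | .[0].key // empty')
echo "$name"     # → srv0

# 2. Body/status split: -w '\n%{http_code}' appends the status code on its
#    own line, so tail -n1 recovers the status and sed '$d' the body.
response=$'{"ok":true}\n200'
status=$(echo "$response" | tail -n1)
body=$(echo "$response" | sed '$d')
echo "$status"   # → 200
echo "$body"     # → {"ok":true}
```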