Compare commits


3 commits

Author SHA1 Message Date
johba
daf08fe62d chore: gardener housekeeping 2026-03-27
Some checks failed
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/push/smoke-init Pipeline failed
- Update AGENTS.md watermarks to current HEAD across all 10 files
- dev/AGENTS.md: document rebase-before-push and _inject_into_session
- review/AGENTS.md: document formal forge review fallback in review-poll
- gardener/pending-actions.json: close #3 (already implemented in PR #2)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-27 00:05:07 +00:00
9b91c6a5bd Merge pull request 'fix: Docker stack: edge proxy + staging container from bootstrap (#1)' (#2) from fix/issue-1 into main 2026-03-26 21:22:06 +00:00
johba
723167d2f2 fix: Docker stack: edge proxy + staging container from bootstrap (#1)
Some checks failed
ci/woodpecker/push/ci Pipeline was successful
ci/woodpecker/push/smoke-init Pipeline failed
- Add edge (Caddy) service to docker-compose.yml as reverse proxy for
  Forgejo (/forgejo/*), Woodpecker (/ci/*), and staging (default)
- Replace placeholder staging service with Caddy-based static file server
- Generate docker/Caddyfile template during disinto init
- Generate default "Nothing shipped yet" staging page in docker/staging-seed/
- Add caddy_data and staging-site volumes
- Staging container seeds default page on first boot; CI overwrites later

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 21:16:08 +00:00
172 changed files with 9114 additions and 23940 deletions


@@ -1,7 +1,7 @@
name: Bug Report
about: Something is broken or behaving incorrectly
labels:
- bug-report
- bug
body:
- type: textarea
  id: what


@@ -1,19 +0,0 @@
# Secrets — prevent .env files and encrypted secrets from being baked into the image
.env
.env.enc
secrets/
# Version control — .git is huge and not needed in image
.git
# Archives — not needed at runtime
*.tar.gz
# Prometheus data — large, ephemeral data
prometheus-data/
# Compose files — only needed at runtime via volume mount
docker-compose.yml
# Project TOML files — gitignored anyway, won't be in build context
projects/*.toml


@@ -19,54 +19,26 @@ FORGE_URL=http://localhost:3000 # [CONFIG] local Forgejo instance
# ── Auth tokens ───────────────────────────────────────────────────────────
# Each agent has its own Forgejo account and API token (#747).
# Per-agent tokens fall back to FORGE_TOKEN if not set.
#
# Tokens and passwords are auto-generated by `disinto init` and stored in .env.
# Each bot user gets:
# - FORGE_TOKEN_<BOT> = API token for REST calls (user identity via /api/v1/user)
# - FORGE_PASS_<BOT> = password for git HTTP push (#361, Forgejo 11.x limitation)
#
# Local-model agents (agents-llama) use FORGE_TOKEN_LLAMA / FORGE_PASS_LLAMA
# with FORGE_BOT_USER_LLAMA=dev-qwen to ensure correct attribution (#563).
FORGE_TOKEN= # [SECRET] dev-bot API token (default for all agents)
FORGE_PASS= # [SECRET] dev-bot password for git HTTP push (#361)
FORGE_TOKEN_LLAMA= # [SECRET] dev-qwen API token (for agents-llama)
FORGE_PASS_LLAMA= # [SECRET] dev-qwen password for git HTTP push
FORGE_REVIEW_TOKEN= # [SECRET] review-bot API token
FORGE_REVIEW_PASS= # [SECRET] review-bot password for git HTTP push
FORGE_PLANNER_TOKEN= # [SECRET] planner-bot API token
FORGE_PLANNER_PASS= # [SECRET] planner-bot password for git HTTP push
FORGE_GARDENER_TOKEN= # [SECRET] gardener-bot API token
FORGE_GARDENER_PASS= # [SECRET] gardener-bot password for git HTTP push
FORGE_VAULT_TOKEN= # [SECRET] vault-bot API token
FORGE_VAULT_PASS= # [SECRET] vault-bot password for git HTTP push
FORGE_SUPERVISOR_TOKEN= # [SECRET] supervisor-bot API token
FORGE_SUPERVISOR_PASS= # [SECRET] supervisor-bot password for git HTTP push
FORGE_PREDICTOR_TOKEN= # [SECRET] predictor-bot API token
FORGE_PREDICTOR_PASS= # [SECRET] predictor-bot password for git HTTP push
FORGE_ARCHITECT_TOKEN= # [SECRET] architect-bot API token
FORGE_ARCHITECT_PASS= # [SECRET] architect-bot password for git HTTP push
FORGE_FILER_TOKEN= # [SECRET] filer-bot API token (issues:write on project repo only)
FORGE_FILER_PASS= # [SECRET] filer-bot password for git HTTP push
FORGE_BOT_USERNAMES=dev-bot,review-bot,planner-bot,gardener-bot,vault-bot,supervisor-bot,predictor-bot,architect-bot,filer-bot
FORGE_ACTION_TOKEN= # [SECRET] action-bot API token
FORGE_BOT_USERNAMES=dev-bot,review-bot,planner-bot,gardener-bot,vault-bot,supervisor-bot,predictor-bot,action-bot
# ── Backwards compatibility ───────────────────────────────────────────────
# If CODEBERG_TOKEN is set but FORGE_TOKEN is not, env.sh falls back to
# CODEBERG_TOKEN automatically (same for REVIEW_BOT_TOKEN, CODEBERG_REPO,
# CODEBERG_BOT_USERNAMES). No action needed for existing deployments.
# Per-agent tokens default to FORGE_TOKEN when unset (single-token setups).
#
# Note: `disinto init` auto-generates all bot tokens/passwords when you
# configure [agents.llama] in a project TOML. The credentials are stored
# in .env.enc (encrypted) or .env (plaintext fallback).
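The fallback chain described above (per-agent token, then FORGE_TOKEN, then legacy CODEBERG_TOKEN) can be sketched with POSIX parameter expansion. This is a hypothetical illustration with demo values, not the actual code in lib/env.sh, which may differ in detail:

```shell
#!/bin/sh
# Sketch of the token fallback chain — demo values stand in for real secrets.
CODEBERG_TOKEN="legacy-token-123"   # old-style variable, demo value
FORGE_TOKEN=""                      # new-style variable left empty
FORGE_REVIEW_TOKEN=""               # per-agent token left empty

# Backwards compatibility: fall back to CODEBERG_TOKEN when FORGE_TOKEN
# is unset or empty (":-" treats both the same).
FORGE_TOKEN="${FORGE_TOKEN:-$CODEBERG_TOKEN}"

# Per-agent tokens default to the shared token (single-token setups).
FORGE_REVIEW_TOKEN="${FORGE_REVIEW_TOKEN:-$FORGE_TOKEN}"

echo "FORGE_TOKEN=$FORGE_TOKEN"
echo "FORGE_REVIEW_TOKEN=$FORGE_REVIEW_TOKEN"
```

The same `${VAR:-fallback}` pattern generalizes to every `FORGE_*_TOKEN`/`FORGE_*_PASS` pair listed above.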
# ── Woodpecker CI ─────────────────────────────────────────────────────────
WOODPECKER_TOKEN= # [SECRET] Woodpecker API token
WOODPECKER_SERVER=http://localhost:8000 # [CONFIG] Woodpecker server URL
WOODPECKER_AGENT_SECRET= # [SECRET] shared secret for server↔agent auth (auto-generated)
# Woodpecker privileged-plugin allowlist — comma-separated image names
# Add plugins/docker (and others) here to allow privileged execution
WOODPECKER_PLUGINS_PRIVILEGED=plugins/docker
# WOODPECKER_REPO_ID — now per-project, set in projects/*.toml [ci] section
# Woodpecker Postgres (for direct DB queries)
@@ -75,59 +47,26 @@ WOODPECKER_DB_USER=woodpecker # [CONFIG] Postgres user
WOODPECKER_DB_HOST=127.0.0.1 # [CONFIG] Postgres host
WOODPECKER_DB_NAME=woodpecker # [CONFIG] Postgres database name
# ── Chat OAuth (#708) ────────────────────────────────────────────────────
CHAT_OAUTH_CLIENT_ID= # [SECRET] Chat OAuth2 client ID (auto-generated by init)
CHAT_OAUTH_CLIENT_SECRET= # [SECRET] Chat OAuth2 client secret (auto-generated by init)
DISINTO_CHAT_ALLOWED_USERS= # [CONFIG] CSV of allowed usernames (disinto-admin always allowed)
FORWARD_AUTH_SECRET= # [SECRET] Shared secret for Caddy ↔ chat forward_auth (#709)
# ── Vault-only secrets (DO NOT put these in .env) ────────────────────────
# These tokens grant access to external systems (GitHub, ClawHub, deploy targets).
# They live ONLY in secrets/<NAME>.enc (age-encrypted, one file per key) and are
# decrypted into the ephemeral runner container at fire time (#745, #777).
# lib/env.sh explicitly unsets them so agents can never hold them directly —
# all external actions go through vault dispatch.
# They live ONLY in .env.vault.enc and are injected into the ephemeral vault-runner
# container at fire time (#745). lib/env.sh explicitly unsets them so agents
# can never hold them directly — all external actions go through vault dispatch.
#
# GITHUB_TOKEN — GitHub API access (publish, deploy, post)
# CLAWHUB_TOKEN — ClawHub registry credentials (publish)
# CADDY_SSH_KEY — SSH key for Caddy log collection
# (deploy keys) — SSH keys for deployment targets
#
# To manage secrets: disinto secrets add/show/remove/list
# To manage vault secrets: disinto secrets edit-vault
# See also: vault/vault-run-action.sh, vault/vault-fire.sh
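The "agents can never hold them directly" guarantee described above amounts to scrubbing the vault-only names from the agent environment. A minimal sketch, assuming the variable list from the comments (the real lib/env.sh may scrub more):

```shell
#!/bin/sh
# Hypothetical sketch of vault-only secret scrubbing; demo values
# simulate secrets that leaked into the agent environment.
GITHUB_TOKEN="demo"
CLAWHUB_TOKEN="demo"
CADDY_SSH_KEY="demo"

# Explicitly unset every vault-only name so no agent process inherits it.
for v in GITHUB_TOKEN CLAWHUB_TOKEN CADDY_SSH_KEY; do
  unset "$v"
done

# After scrubbing, external actions must go through vault dispatch.
echo "GITHUB_TOKEN='${GITHUB_TOKEN:-}'"
```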
# ── Project-specific secrets ──────────────────────────────────────────────
# Store all project secrets here so formulas reference env vars and never hardcode values.
BASE_RPC_URL= # [SECRET] on-chain RPC endpoint
# ── Local Qwen dev agent (optional) ──────────────────────────────────────
# Set ENABLE_LLAMA_AGENT=1 to emit agents-llama in docker-compose.yml.
# Requires a running llama-server reachable at ANTHROPIC_BASE_URL.
# See docs/agents-llama.md for details.
ENABLE_LLAMA_AGENT=0 # [CONFIG] 1 = enable agents-llama service
ANTHROPIC_BASE_URL= # [CONFIG] e.g. http://host.docker.internal:8081
# ── Tuning ────────────────────────────────────────────────────────────────
CLAUDE_TIMEOUT=7200 # [CONFIG] max seconds per Claude invocation
# ── Host paths (Nomad-portable) ────────────────────────────────────────────
# These env vars externalize host-side bind-mount paths from docker-compose.yml.
# At cutover, Nomad jobspecs reference the same vars — no path translation.
# Defaults point at current paths so an empty .env override still works.
CLAUDE_BIN_DIR=/usr/local/bin/claude # [CONFIG] host path to claude CLI binary (resolved by `disinto init`)
CLAUDE_CONFIG_FILE=${HOME}/.claude.json # [CONFIG] host path to claude config JSON file
CLAUDE_DIR=${HOME}/.claude # [CONFIG] host path to .claude directory (reproduce/edge)
AGENT_SSH_DIR=${HOME}/.ssh # [CONFIG] host path to SSH keys directory
SOPS_AGE_DIR=${HOME}/.config/sops/age # [CONFIG] host path to SOPS age key directory
# ── Claude Code shared OAuth state ─────────────────────────────────────────
# Shared directory used by every factory container so Claude Code's internal
# proper-lockfile-based OAuth refresh lock works across containers. Both
# values must live outside $HOME (so docker bind mounts don't depend on UID
# mapping) and must be the same absolute path on host and inside each
# container. See docs/CLAUDE-AUTH-CONCURRENCY.md.
CLAUDE_SHARED_DIR=/var/lib/disinto/claude-shared
CLAUDE_CONFIG_DIR=${CLAUDE_SHARED_DIR}/config
# ── Factory safety ────────────────────────────────────────────────────────
# Disables Claude Code auto-updater, telemetry, error reporting, and bug
# command. Factory sessions are production processes — they must never phone

.gitignore (vendored), 19 changed lines

@@ -3,6 +3,7 @@
# Encrypted secrets — safe to commit (SOPS-encrypted with age)
!.env.enc
!.env.vault.enc
!.sops.yaml
# Per-box project config (generated by disinto init)
@@ -21,21 +22,3 @@ metrics/supervisor-metrics.jsonl
.DS_Store
dev/ci-fixes-*.json
gardener/dust.jsonl
# Individual encrypted secrets (managed by disinto secrets add)
secrets/
# Pre-built binaries for Docker builds (avoid network calls during build)
docker/agents/bin/
# Generated docker-compose.yml (run 'bin/disinto init' to regenerate)
# Note: This file is now committed to track volume mount configuration
# docker-compose.yml
# Generated Caddyfile — single source of truth is generate_caddyfile in lib/generators.sh
docker/Caddyfile
# Python bytecode
__pycache__/
*.pyc
*.pyo


@@ -6,16 +6,13 @@
# 2. Every custom function called by agent scripts is defined in lib/ or the script itself
#
# Fast (<10s): no network, no tmux, no Claude needed.
# Would have caught: kill_tmux_session (renamed), create_agent_session (missing),
# read_phase (missing from dev-agent.sh scope)
set -euo pipefail
cd "$(dirname "$0")/.."
# CI-side filesystem snapshot: show lib/ state at smoke time (#600)
echo "=== smoke environment snapshot ==="
ls -la lib/ 2>&1 | head -50
echo "=== "
FAILED=0
# ── helpers ─────────────────────────────────────────────────────────────────
@@ -24,16 +21,14 @@ FAILED=0
# Uses awk instead of grep -Eo for busybox/Alpine compatibility (#296).
get_fns() {
local f="$1"
# Pure-awk implementation: avoids grep/sed cross-platform differences
# (BusyBox grep BRE quirks, sed ; separator issues on Alpine).
awk '
/^[[:space:]]*[a-zA-Z_][a-zA-Z0-9_][a-zA-Z0-9_]*[[:space:]]*[(][)]/ {
    line = $0
    gsub(/^[[:space:]]+/, "", line)
    sub(/[[:space:]]*[(].*/, "", line)
    print line
}
' "$f" 2>/dev/null | sort -u || true
# Use POSIX character classes and bracket-escaped parens for BusyBox awk
# compatibility (BusyBox awk does not expand \t to tab in character classes
# and may handle \( differently in ERE patterns).
awk '/^[[:space:]]*[a-zA-Z_][a-zA-Z0-9_]+[[:space:]]*[(][)]/ {
    sub(/^[[:space:]]+/, "")
    sub(/[[:space:]]*[(][)].*/, "")
    print
}' "$f" 2>/dev/null | sort -u || true
}
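As a usage sketch, the awk extraction in get_fns turns a script's function definitions into a sorted name list. The demo file below is hypothetical, not a real lib file:

```shell
#!/bin/sh
# Demo of the function-name extraction used by get_fns, run against a
# throwaway script instead of a real lib/*.sh file.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
do_thing() { echo hi; }
  indented_helper() { :; }
not_a_function="value()"
EOF

# Same pattern as get_fns: match definition lines, strip leading
# whitespace and everything from "()" onward, print the bare name.
FNS=$(awk '/^[[:space:]]*[a-zA-Z_][a-zA-Z0-9_]+[[:space:]]*[(][)]/ {
    sub(/^[[:space:]]+/, "")
    sub(/[[:space:]]*[(][)].*/, "")
    print
}' "$tmp" | sort -u)
rm -f "$tmp"

printf '%s\n' "$FNS"   # prints: do_thing, indented_helper
```

Note that the assignment line is correctly skipped: the anchored pattern requires `()` immediately after the identifier, and `not_a_function=` has `=` there instead.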
# Extract call-position identifiers that look like custom function calls:
@@ -91,44 +86,37 @@ while IFS= read -r -d '' f; do
printf 'FAIL [syntax] %s\n' "$f"
FAILED=1
fi
done < <(find dev gardener review planner supervisor architect lib vault -name "*.sh" -print0 2>/dev/null)
done < <(find dev gardener review planner supervisor lib vault action -name "*.sh" -print0 2>/dev/null)
echo "syntax check done"
# ── 2. Function-resolution check ─────────────────────────────────────────────
echo "=== 2/2 Function resolution ==="
# Enumerate ALL lib/*.sh files in stable lexicographic order (#742).
# Previous approach used a hand-maintained REQUIRED_LIBS list, which silently
# became incomplete as new libs were added, producing partial LIB_FUNS that
# caused non-deterministic "undef" failures.
# Functions provided by shared lib files (available to all agent scripts via source).
#
# Excluded from LIB_FUNS (not sourced inline by agents):
# Included — these are inline-sourced by agent scripts:
# lib/env.sh — sourced by every agent (log, forge_api, etc.)
# lib/agent-session.sh — sourced by orchestrators (create_agent_session, monitor_phase_loop, etc.)
# lib/ci-helpers.sh — sourced by pollers and review (ci_passed, classify_pipeline_failure, etc.)
# lib/load-project.sh — sourced by env.sh when PROJECT_TOML is set
# lib/file-action-issue.sh — sourced by gardener-run.sh (file_action_issue)
# lib/secret-scan.sh — sourced by file-action-issue.sh, phase-handler.sh (scan_for_secrets, redact_secrets)
# lib/formula-session.sh — sourced by formula-driven agents (acquire_cron_lock, run_formula_and_monitor, etc.)
# lib/mirrors.sh — sourced by merge sites (mirror_push)
# lib/guard.sh — sourced by all cron entry points (check_active)
#
# Excluded — not sourced inline by agents:
# lib/tea-helpers.sh — sourced conditionally by env.sh (tea_file_issue, etc.); checked standalone below
# lib/ci-debug.sh — standalone CLI tool, run directly (not sourced)
# lib/parse-deps.sh — executed via `bash lib/parse-deps.sh` (not sourced)
# lib/hooks/*.sh — Claude Code hook scripts, executed by the harness (not sourced)
EXCLUDED_LIBS="lib/ci-debug.sh lib/parse-deps.sh"
# Build the list of lib files in deterministic order (LC_ALL=C sort).
# Fail loudly if no lib files are found — checkout is broken.
mapfile -t ALL_LIBS < <(LC_ALL=C find lib -maxdepth 1 -name '*.sh' -print | LC_ALL=C sort)
if [ "${#ALL_LIBS[@]}" -eq 0 ]; then
echo 'FAIL [no-libs] no lib/*.sh files found at smoke time' >&2
printf ' pwd=%s\n' "$(pwd)" >&2
echo '=== SMOKE TEST FAILED (precondition) ===' >&2
exit 2
fi
# Build LIB_FUNS from all non-excluded lib files.
# Use set -e inside the subshell so a failed get_fns aborts loudly
# instead of silently shrinking the function list.
#
# If a new lib file is added and sourced by agents, add it to LIB_FUNS below
# and add a check_script call for it in the lib files section further down.
LIB_FUNS=$(
set -e
for f in "${ALL_LIBS[@]}"; do
# shellcheck disable=SC2086
skip=0; for ex in $EXCLUDED_LIBS; do [ "$f" = "$ex" ] && skip=1; done
[ "$skip" -eq 1 ] && continue
get_fns "$f"
for f in lib/agent-session.sh lib/env.sh lib/ci-helpers.sh lib/load-project.sh lib/secret-scan.sh lib/file-action-issue.sh lib/formula-session.sh lib/mirrors.sh lib/guard.sh; do
if [ -f "$f" ]; then get_fns "$f"; fi
done | sort -u
)
@@ -180,15 +168,8 @@ check_script() {
while IFS= read -r fn; do
[ -z "$fn" ] && continue
is_known_cmd "$fn" && continue
# Use here-string (<<<) instead of pipe to avoid SIGPIPE race (#742):
# with pipefail, `printf | grep -q` can fail when grep closes the pipe
# early after finding a match, causing printf to get SIGPIPE (exit 141).
# This produced non-deterministic false "undef" failures.
if ! grep -qxF "$fn" <<< "$all_fns"; then
if ! printf '%s\n' "$all_fns" | grep -qxF "$fn"; then
printf 'FAIL [undef] %s: %s\n' "$script" "$fn"
printf ' all_fns count: %d\n' "$(grep -c . <<< "$all_fns")"
printf ' LIB_FUNS contains "%s": %s\n' "$fn" "$(grep -cxF "$fn" <<< "$LIB_FUNS")"
printf ' defining lib (if any): %s\n' "$(grep -l "^[[:space:]]*${fn}[[:space:]]*()" lib/*.sh 2>/dev/null | tr '\n' ' ')"
FAILED=1
fi
done <<< "$candidates"
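The SIGPIPE race described in the #742 comments can be isolated in a few lines. This is a standalone bash illustration of the fix, not code from the repo:

```shell
#!/bin/bash
# Standalone illustration of the #742 fix: test membership in a
# newline-separated list via a here-string instead of a pipe.
set -euo pipefail

all_fns=$'log\nforge_api\nci_passed'

# Here-string form: grep reads from temporary input, so there is no
# upstream writer left to receive SIGPIPE when grep exits early.
if grep -qxF "forge_api" <<< "$all_fns"; then
  echo "found"
fi

# By contrast, `printf '%s\n' "$all_fns" | grep -q ...` can, under
# pipefail, fail with exit 141 when grep closes the pipe early after
# a match and printf receives SIGPIPE.
```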
@@ -198,37 +179,42 @@ check_script() {
# These are already in LIB_FUNS (their definitions are available to agents),
# but this verifies calls *within* each lib file are also resolvable.
check_script lib/env.sh lib/mirrors.sh
check_script lib/agent-sdk.sh
check_script lib/agent-session.sh
check_script lib/ci-helpers.sh
check_script lib/secret-scan.sh
check_script lib/file-action-issue.sh lib/secret-scan.sh
check_script lib/tea-helpers.sh lib/secret-scan.sh
check_script lib/formula-session.sh lib/ops-setup.sh
check_script lib/formula-session.sh lib/agent-session.sh
check_script lib/load-project.sh
check_script lib/mirrors.sh lib/env.sh
check_script lib/guard.sh
check_script lib/pr-lifecycle.sh
check_script lib/issue-lifecycle.sh lib/secret-scan.sh
# Standalone lib scripts (not sourced by agents; run directly or as services).
# Still checked for function resolution against LIB_FUNS + own definitions.
check_script lib/ci-debug.sh
check_script lib/parse-deps.sh
check_script lib/sprint-filer.sh
# Agent scripts — list cross-sourced files where function scope flows across files.
check_script dev/dev-agent.sh
# dev-agent.sh sources phase-handler.sh; phase-handler.sh calls helpers defined in dev-agent.sh.
check_script dev/dev-agent.sh dev/phase-handler.sh
check_script dev/phase-handler.sh dev/dev-agent.sh lib/secret-scan.sh
check_script dev/dev-poll.sh
check_script dev/phase-test.sh
check_script gardener/gardener-run.sh lib/formula-session.sh
check_script review/review-pr.sh lib/agent-sdk.sh
check_script gardener/gardener-run.sh
check_script review/review-pr.sh lib/agent-session.sh
check_script review/review-poll.sh
check_script planner/planner-run.sh lib/formula-session.sh
check_script planner/planner-run.sh lib/agent-session.sh lib/formula-session.sh
check_script supervisor/supervisor-poll.sh
check_script supervisor/update-prompt.sh
check_script supervisor/supervisor-run.sh lib/formula-session.sh
check_script vault/vault-agent.sh
check_script vault/vault-fire.sh
check_script vault/vault-poll.sh
check_script vault/vault-reject.sh
check_script action/action-poll.sh
check_script action/action-agent.sh dev/phase-handler.sh
check_script supervisor/supervisor-run.sh
check_script supervisor/preflight.sh
check_script predictor/predictor-run.sh
check_script architect/architect-run.sh
echo "function resolution check done"


@@ -8,19 +8,6 @@
when:
event: [push, pull_request]
# Override default clone to authenticate against Forgejo using FORGE_TOKEN.
# Required because Forgejo is configured with REQUIRE_SIGN_IN, so anonymous
# git clones fail with exit code 128. FORGE_TOKEN is injected globally via
# WOODPECKER_ENVIRONMENT in docker-compose.yml (generated by lib/generators.sh).
clone:
git:
image: alpine/git
commands:
- AUTH_URL=$(printf '%s' "$CI_REPO_CLONE_URL" | sed "s|://|://token:$FORGE_TOKEN@|")
- git clone --depth 1 "$AUTH_URL" .
- git fetch --depth 1 origin "$CI_COMMIT_REF"
- git checkout FETCH_HEAD
steps:
- name: shellcheck
image: koalaman/shellcheck-alpine:stable
@@ -29,8 +16,6 @@ steps:
- name: agent-smoke
image: alpine:3
when:
event: pull_request
commands:
- apk add --no-cache bash
- bash .woodpecker/agent-smoke.sh
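The token-injection rewrite used by the (now removed) custom clone step can be checked in isolation: it splices `token:<secret>@` after the URL scheme. Demo values below, not real credentials:

```shell
#!/bin/sh
# Isolated check of the authenticated-clone URL rewrite from the custom
# clone step: insert "token:<secret>@" after the "://" scheme separator.
CI_REPO_CLONE_URL="http://forgejo:3000/disinto/disinto.git"  # demo value
FORGE_TOKEN="s3cret"                                          # demo value

AUTH_URL=$(printf '%s' "$CI_REPO_CLONE_URL" | sed "s|://|://token:$FORGE_TOKEN@|")
echo "$AUTH_URL"
```

Since sed's `s` replaces only the first match and a clone URL contains a single `://`, the rewrite is safe even if the token itself contains slashes (the `|` delimiter avoids clashing with them).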


@@ -179,16 +179,9 @@ def collect_findings(root):
Returns ``(ap_hits, dup_groups)`` with file paths relative to *root*.
"""
root = Path(root)
# Skip architect scripts for duplicate detection (stub formulas, see #99)
EXCLUDED_SUFFIXES = ("architect/architect-run.sh",)
def is_excluded(p):
    """Check if path should be excluded by suffix match."""
    return p.suffix == ".sh" and ".git" not in p.parts and any(
        str(p).endswith(suffix) for suffix in EXCLUDED_SUFFIXES
    )
sh_files = sorted(p for p in root.rglob("*.sh") if not is_excluded(p))
sh_files = sorted(
    p for p in root.rglob("*.sh") if ".git" not in p.parts
)
ap_hits = check_anti_patterns(sh_files)
dup_groups = check_duplicates(sh_files)
@@ -245,56 +238,9 @@ def print_duplicates(groups, label=""):
# ---------------------------------------------------------------------------
def main() -> int:
# Skip architect scripts for duplicate detection (stub formulas, see #99)
EXCLUDED_SUFFIXES = ("architect/architect-run.sh",)
def is_excluded(p):
    """Check if path should be excluded by suffix match."""
    return p.suffix == ".sh" and ".git" not in p.parts and any(
        str(p).endswith(suffix) for suffix in EXCLUDED_SUFFIXES
    )
sh_files = sorted(p for p in Path(".").rglob("*.sh") if not is_excluded(p))
# Standard patterns that are intentionally repeated across formula-driven agents
# These are not copy-paste violations but the expected structure
ALLOWED_HASHES = {
# Standard agent header: shebang, set -euo pipefail, directory resolution
"c93baa0f19d6b9ba271428bf1cf20b45": "Standard agent header (set -euo pipefail, SCRIPT_DIR, FACTORY_ROOT)",
# formula_prepare_profile_context followed by scratch context reading
"eaa735b3598b7b73418845ab00d8aba5": "Standard .profile context setup (formula_prepare_profile_context + SCRATCH_CONTEXT)",
# Standard prompt template: GRAPH_SECTION, SCRATCH_CONTEXT, FORMULA_CONTENT, SCRATCH_INSTRUCTION
"2653705045fdf65072cccfd16eb04900": "Standard prompt template (GRAPH_SECTION, SCRATCH_CONTEXT, FORMULA_CONTENT)",
"93726a3c799b72ed2898a55552031921": "Standard prompt template continuation (SCRATCH_CONTEXT, FORMULA_CONTENT, SCRATCH_INSTRUCTION)",
"c11eaaacab69c9a2d3c38c75215eca84": "Standard prompt template end (FORMULA_CONTENT, SCRATCH_INSTRUCTION)",
# Appears in stack_lock_acquire (lib/stack-lock.sh) and lib/pr-lifecycle.sh
"29d4f34b703f44699237713cc8d8065b": "Structural end-of-while-loop+case (return 1, esac, done, closing brace)",
# Forgejo org-creation API call pattern shared between forge-setup.sh and ops-setup.sh
# Extracted from bin/disinto (not a .sh file, excluded from prior scans) into lib/forge-setup.sh
"059b11945140c172465f9126b829ed7f": "Forgejo org-creation curl pattern (forge-setup.sh + ops-setup.sh)",
# Docker compose environment block for agents service (generators.sh + hire-agent.sh)
# Intentional duplicate - both generate the same docker-compose.yml template
"8066210169a462fe565f18b6a26a57e0": "Docker compose environment block (generators.sh + hire-agent.sh) - old",
"fd978fcd726696e0f280eba2c5198d50": "Docker compose environment block continuation (generators.sh + hire-agent.sh) - old",
"e2760ccc2d4b993a3685bd8991594eb2": "Docker compose env_file + depends_on block (generators.sh + hire-agent.sh) - old",
# The hash shown in output is 161a80f7 - need to match exactly what the script finds
"161a80f7296d6e9d45895607b7f5b9c9": "Docker compose env_file + depends_on block (generators.sh + hire-agent.sh) - old",
# New hash after explicit environment fix (#381)
"83fa229b86a7fdcb1d3591ab8e718f9d": "Docker compose explicit environment block (generators.sh + hire-agent.sh) - #381",
# Verification mode helper functions - intentionally duplicated in dispatcher and entrypoint
# These functions check if bug-report parent issues have all sub-issues closed
"b783d403276f78b49ad35840845126a1": "Verification helper: sub_issues variable declaration",
"4b19b9a1bdfbc62f003fc237ed270ed9": "Verification helper: python3 -c invocation",
"cc1d0a9f85dfe0cc32e9ef6361cb8c3a": "Verification helper: Python imports and args",
"768926748b811ebd30f215f57db5de40": "Verification helper: json.load from /dev/stdin",
"4c58586a30bcf6b009c02010ed8f6256": "Verification helper: sub_issues list initialization",
"53ea3d6359f51d622467bd77b079cc88": "Verification helper: iterate issues in data",
"21aec56a99d5252b23fb9a38b895e8e8": "Verification helper: check body for Decomposed from pattern",
"60ea98b3604557d539193b2a6624e232": "Verification helper: append sub-issue number",
"9f6ae8e7811575b964279d8820494eb0": "Verification helper: for loop done pattern",
# Standard lib source block shared across formula-driven agent run scripts
"330e5809a00b95ade1a5fce2d749b94b": "Standard lib source block (env.sh, formula-session.sh, worktree.sh, guard.sh, agent-sdk.sh)",
}
sh_files = sorted(
    p for p in Path(".").rglob("*.sh") if ".git" not in p.parts
)
if not sh_files:
print("No .sh files found.")
@@ -330,13 +276,8 @@ def main() -> int:
# Duplicate diff: key by content hash
base_dup_hashes = {g[0] for g in base_dups}
# Filter out allowed standard patterns that are intentionally repeated
new_dups = [
    g for g in cur_dups
    if g[0] not in base_dup_hashes and g[0] not in ALLOWED_HASHES
]
# Also filter allowed hashes from pre_dups for reporting
pre_dups = [g for g in cur_dups if g[0] in base_dup_hashes and g[0] not in ALLOWED_HASHES]
new_dups = [g for g in cur_dups if g[0] not in base_dup_hashes]
pre_dups = [g for g in cur_dups if g[0] in base_dup_hashes]
# Report pre-existing as info
if pre_ap or pre_dups:


@@ -1,64 +0,0 @@
# .woodpecker/publish-images.yml — Build and push versioned container images
# Triggered on tag pushes (e.g. v1.2.3). Builds and pushes:
# - ghcr.io/disinto/agents:<tag>
# - ghcr.io/disinto/reproduce:<tag>
# - ghcr.io/disinto/edge:<tag>
#
# Requires GHCR_TOKEN secret configured in Woodpecker with push access
# to ghcr.io/disinto.
when:
event: tag
ref: refs/tags/v*
clone:
git:
image: alpine/git
commands:
- AUTH_URL=$(printf '%s' "$CI_REPO_CLONE_URL" | sed "s|://|://token:$FORGE_TOKEN@|")
- git clone --depth 1 "$AUTH_URL" .
- git fetch --depth 1 origin "$CI_COMMIT_REF"
- git checkout FETCH_HEAD
steps:
- name: build-and-push-agents
image: plugins/docker
settings:
repo: ghcr.io/disinto/agents
registry: ghcr.io
dockerfile: docker/agents/Dockerfile
context: .
tags:
- ${CI_COMMIT_TAG}
- latest
username: disinto
password:
from_secret: GHCR_TOKEN
- name: build-and-push-reproduce
image: plugins/docker
settings:
repo: ghcr.io/disinto/reproduce
registry: ghcr.io
dockerfile: docker/reproduce/Dockerfile
context: .
tags:
- ${CI_COMMIT_TAG}
- latest
username: disinto
password:
from_secret: GHCR_TOKEN
- name: build-and-push-edge
image: plugins/docker
settings:
repo: ghcr.io/disinto/edge
registry: ghcr.io
dockerfile: docker/edge/Dockerfile
context: docker/edge
tags:
- ${CI_COMMIT_TAG}
- latest
username: disinto
password:
from_secret: GHCR_TOKEN


@@ -1,19 +1,31 @@
# .woodpecker/smoke-init.yml — End-to-end smoke test for disinto init
#
# Uses the Forgejo image directly (not as a service) so we have CLI
# access to set up Forgejo and create the bootstrap admin user.
# Then runs disinto init --bare --yes against the local Forgejo instance.
#
# Forgejo refuses to run as root, so all forgejo commands use su-exec
# to run as the 'git' user (pre-created in the Forgejo Docker image).
when:
- event: pull_request
path:
- "bin/disinto"
- "lib/load-project.sh"
- "lib/env.sh"
- "lib/generators.sh"
- "tests/**"
- ".woodpecker/smoke-init.yml"
event: [push, pull_request]
steps:
- name: smoke-init
image: python:3-alpine
image: codeberg.org/forgejo/forgejo:11.0
environment:
SMOKE_FORGE_URL: http://localhost:3000
commands:
- apk add --no-cache bash curl jq git coreutils
- python3 tests/mock-forgejo.py & echo $! > /tmp/mock-forgejo.pid
- sleep 2
# Install test dependencies (Alpine-based image)
- apk add --no-cache bash curl jq python3 git >/dev/null 2>&1
# Set up Forgejo data directories and config (owned by git user)
- mkdir -p /data/gitea/conf /data/gitea/repositories /data/gitea/lfs /data/gitea/log /data/git/.ssh /data/ssh
- printf '[database]\nDB_TYPE = sqlite3\nPATH = /data/gitea/forgejo.db\n\n[server]\nHTTP_PORT = 3000\nROOT_URL = http://localhost:3000/\nLFS_START_SERVER = false\n\n[security]\nINSTALL_LOCK = true\n\n[service]\nDISABLE_REGISTRATION = true\n' > /data/gitea/conf/app.ini
- chown -R git:git /data
# Start Forgejo as git user in background and wait for API
- su-exec git forgejo web --config /data/gitea/conf/app.ini &
- for i in $(seq 1 30); do curl -sf http://localhost:3000/api/v1/version >/dev/null 2>&1 && break; sleep 1; done
# Create bootstrap admin user via CLI
- su-exec git forgejo admin user create --admin --username setup-admin --password "SetupPass-789xyz" --email "setup-admin@smoke.test" --must-change-password=false --config /data/gitea/conf/app.ini
# Run the smoke test (as root is fine — only forgejo binary needs git user)
- bash tests/smoke-init.sh
- kill $(cat /tmp/mock-forgejo.pid) 2>/dev/null || true

AGENTS.md, 130 changed lines

@@ -1,65 +1,43 @@
<!-- last-reviewed: 18190874cae869527f675f717423ded735f2c555 -->
<!-- last-reviewed: 9b91c6a5bdd83513ca262dd468ea14db675971c1 -->
# Disinto — Agent Instructions
## What this repo is
Disinto is an autonomous code factory. It manages ten agents (dev, review,
gardener, supervisor, planner, predictor, architect, reproduce, triage, edge
dispatcher) that pick up issues from forge, implement them, review PRs, plan
from the vision, and keep the system healthy — all via a polling loop (`docker/agents/entrypoint.sh`) and `claude -p`.
The dispatcher executes formula-based operational tasks.
Disinto is an autonomous code factory. It manages eight agents (dev, review,
gardener, supervisor, planner, predictor, action, vault) that pick up issues from forge,
implement them, review PRs, plan from the vision, gate dangerous actions, and
keep the system healthy — all via cron and `claude -p`.
Each agent has a `.profile` repository on Forgejo that stores lessons learned
from prior sessions, providing continuous improvement across runs.
> **Note:** The vault is being redesigned as a PR-based approval workflow on the
> ops repo (see issues #73-#77). See [docs/VAULT.md](docs/VAULT.md) for details. Old vault scripts are being removed.
See `README.md` for the full architecture and `disinto-factory/SKILL.md` for setup.
See `README.md` for the full architecture and `BOOTSTRAP.md` for setup.
## Directory layout
```
disinto/ (code repo)
├── dev/ dev-poll.sh, dev-agent.sh, phase-test.sh — issue implementation
├── dev/ dev-poll.sh, dev-agent.sh, phase-handler.sh — issue implementation
├── review/ review-poll.sh, review-pr.sh — PR review
├── gardener/ gardener-run.sh — polling-loop executor for run-gardener formula
│ best-practices.md — gardener best-practice reference
│ pending-actions.json — queued gardener actions
├── predictor/ predictor-run.sh — polling-loop executor for run-predictor formula
├── planner/ planner-run.sh — polling-loop executor for run-planner formula
├── supervisor/ supervisor-run.sh — formula-driven health monitoring (polling-loop executor)
├── gardener/ gardener-run.sh — direct cron executor for run-gardener formula
├── predictor/ predictor-run.sh — daily cron executor for run-predictor formula
├── planner/ planner-run.sh — direct cron executor for run-planner formula
├── supervisor/ supervisor-run.sh — formula-driven health monitoring (cron wrapper)
│ preflight.sh — pre-flight data collection for supervisor formula
├── architect/ architect-run.sh — strategic decomposition of vision into sprints
├── action-vault/ vault-env.sh — shared env setup (vault redesign in progress, see #73-#77)
│ SCHEMA.md — vault item schema documentation
│ validate.sh — vault item validator
│ examples/ — example vault action TOMLs (promote, publish, release, webhook-call)
├── lib/ env.sh, agent-sdk.sh, ci-helpers.sh, ci-debug.sh, load-project.sh, parse-deps.sh, guard.sh, mirrors.sh, pr-lifecycle.sh, issue-lifecycle.sh, worktree.sh, formula-session.sh, stack-lock.sh, forge-setup.sh, forge-push.sh, ops-setup.sh, ci-setup.sh, generators.sh, hire-agent.sh, release.sh, build-graph.py, branch-protection.sh, secret-scan.sh, tea-helpers.sh, action-vault.sh, ci-log-reader.py, git-creds.sh, sprint-filer.sh
│ hooks/ — Claude Code session hooks (on-compact-reinject, on-idle-stop, on-phase-change, on-pretooluse-guard, on-session-end, on-stop-failure)
│ supervisor-poll.sh — legacy bash orchestrator (superseded)
├── vault/ vault-poll.sh, vault-agent.sh, vault-fire.sh — action gating + procurement
├── action/ action-poll.sh, action-agent.sh — operational task execution
├── lib/ env.sh, agent-session.sh, ci-helpers.sh, ci-debug.sh, load-project.sh, parse-deps.sh, guard.sh, mirrors.sh, build-graph.py
├── projects/ *.toml.example — templates; *.toml — local per-box config (gitignored)
├── formulas/ Issue templates (TOML specs for multi-step agent tasks)
├── docker/ Dockerfiles and entrypoints: reproduce, triage, edge dispatcher, chat (server.py, entrypoint-chat.sh, Dockerfile, ui/)
├── tools/ Operational tools: edge-control/ (register.sh, install.sh, verify-chat-sandbox.sh)
├── docs/ Protocol docs (PHASE-PROTOCOL.md, EVIDENCE-ARCHITECTURE.md)
├── site/ disinto.ai website content
├── tests/ Test files (mock-forgejo.py, smoke-init.sh)
├── templates/ Issue templates
├── bin/ The `disinto` CLI script
├── disinto-factory/ Setup documentation and skill
├── state/ Runtime state
├── .woodpecker/ Woodpecker CI pipeline configs
├── VISION.md High-level project vision
└── CLAUDE.md Claude Code project instructions
└── docs/ Protocol docs (PHASE-PROTOCOL.md, EVIDENCE-ARCHITECTURE.md)
disinto-ops/ (ops repo — {project}-ops)
├── vault/
│ ├── actions/ where vault action TOMLs land (core of vault workflow)
│ ├── pending/ vault items awaiting approval
│ ├── approved/ approved vault items
│ ├── fired/ executed vault items
│ └── rejected/ rejected vault items
├── sprints/ sprint planning artifacts
├── journal/
│ ├── planner/ daily planning logs
│ └── supervisor/ operational health logs
├── knowledge/ shared agent knowledge + best practices
├── evidence/ engagement data, experiment results
├── portfolio.md addressables + observables
@@ -67,11 +45,10 @@ disinto-ops/ (ops repo — {project}-ops)
└── RESOURCES.md accounts, tokens (refs), infra inventory
```
## Agent .profile Model
Each agent has a `.profile` repository on Forgejo storing `knowledge/lessons-learned.md` (injected into each session prompt) and `journal/` reflection entries (digested into lessons). Pre-session: `formula_prepare_profile_context()` loads lessons. Post-session: `profile_write_journal` records reflections. See `lib/formula-session.sh`.
> **Terminology note:** "Formulas" are TOML issue templates in `formulas/` that orchestrate multi-step agent tasks. Distinct from "processes" in `docs/EVIDENCE-ARCHITECTURE.md`.
> **Terminology note:** "Formulas" in this repo are TOML issue templates in `formulas/` that
> orchestrate multi-step agent tasks (e.g., `run-gardener.toml`, `run-planner.toml`). This is
> distinct from "processes" described in `docs/EVIDENCE-ARCHITECTURE.md`, which are measurement
> and mutation pipelines that read external platforms and write structured evidence to git.
## Tech stack
@@ -86,7 +63,7 @@ Each agent has a `.profile` repository on Forgejo storing `knowledge/lessons-lea
- All scripts start with `#!/usr/bin/env bash` and `set -euo pipefail`
- Source shared environment: `source "$(dirname "$0")/../lib/env.sh"`
- Log to `$LOGFILE` using the `log()` function from env.sh or defined locally
- Never hardcode secrets — agent secrets come from `.env.enc`, vault secrets from `secrets/<NAME>.enc` (age-encrypted, one file per key)
- Never hardcode secrets — agent secrets come from `.env.enc`, vault secrets from `.env.vault.enc` (or `.env`/`.env.vault` fallback)
- Never embed secrets in issue bodies, PR descriptions, or comments — use env var references (e.g. `$BASE_RPC_URL`)
- ShellCheck must pass (CI runs `shellcheck` on all `.sh` files)
- Avoid duplicate code — shared helpers go in `lib/`
@@ -113,15 +90,8 @@ bash dev/phase-test.sh
| Supervisor | `supervisor/` | Health monitoring | [supervisor/AGENTS.md](supervisor/AGENTS.md) |
| Planner | `planner/` | Strategic planning | [planner/AGENTS.md](planner/AGENTS.md) |
| Predictor | `predictor/` | Infrastructure pattern detection | [predictor/AGENTS.md](predictor/AGENTS.md) |
| Architect | `architect/` | Strategic decomposition (read-only on project repo) | [architect/AGENTS.md](architect/AGENTS.md) |
| Filer | `lib/sprint-filer.sh` | Sub-issue filing from merged sprint PRs | ops repo pipeline (deferred, see #779) |
| Reproduce | `docker/reproduce/` | Bug reproduction using Playwright MCP | `formulas/reproduce.toml` |
| Triage | `docker/reproduce/` | Deep root cause analysis | `formulas/triage.toml` |
| Edge dispatcher | `docker/edge/` | Polls ops repo for vault actions, executes via Claude sessions | `docker/edge/dispatcher.sh` |
| agents-llama | `docker/agents/` (same image) | Local-Qwen dev agent (`AGENT_ROLES=dev`), gated on `ENABLE_LLAMA_AGENT=1` | [docs/agents-llama.md](docs/agents-llama.md) |
> **Vault:** Being redesigned as a PR-based approval workflow (issues #73-#77).
> See [docs/VAULT.md](docs/VAULT.md) for the vault PR workflow details.
| Action | `action/` | Operational task execution | [action/AGENTS.md](action/AGENTS.md) |
| Vault | `vault/` | Action gating + resource procurement | [vault/AGENTS.md](vault/AGENTS.md) |
See [lib/AGENTS.md](lib/AGENTS.md) for the full shared helper reference.
@@ -137,28 +107,35 @@ Issues flow: `backlog` → `in-progress` → PR → CI → review → merge →
|---|---|---|
| `backlog` | Issue is queued for implementation. Dev-poll picks the first ready one. | Planner, gardener, humans |
| `priority` | Queue tier above plain backlog. Issues with both `priority` and `backlog` are picked before plain `backlog` issues. FIFO within each tier. | Planner, humans |
| `in-progress` | Dev-agent is actively working on this issue. Only one issue per project is in-progress at a time. Also set on vision issues by filer-bot when sub-issues are filed (#764). | dev-agent.sh (claims issue), filer-bot (vision issues) |
| `blocked` | Issue is stuck — agent session failed, crashed, timed out, or CI exhausted. Diagnostic comment on the issue has details. Also used for unmet dependencies. | dev-agent.sh, dev-poll.sh (on failure) |
| `in-progress` | Dev-agent is actively working on this issue. Only one issue per project is in-progress at a time. | dev-agent.sh (claims issue) |
| `blocked` | Issue is stuck — agent session failed, crashed, timed out, or CI exhausted. Diagnostic comment on the issue has details. Also used for unmet dependencies. | dev-agent.sh, action-agent.sh, dev-poll.sh (on failure) |
| `tech-debt` | Pre-existing issue flagged by AI reviewer, not introduced by a PR. | review-pr.sh (auto-created follow-ups) |
| `underspecified` | Dev-agent refused the issue as too large or vague. | dev-poll.sh (on preflight `too_large`), dev-agent.sh (on mid-run `too_large` refusal) |
| `bug-report` | Issue describes user-facing broken behavior with reproduction steps. Separate triage track for reproduction automation. | Gardener (bug-report detection in grooming) |
| `in-triage` | Bug reproduced but root cause not obvious — triage agent investigates. Set alongside `bug-report`. | reproduce-agent (when reproduction succeeds but cause unclear) |
| `rejected` | Issue formally rejected — cannot reproduce, out of scope, or invalid. | reproduce-agent, humans |
| `vision` | Goal anchors — high-level objectives from VISION.md. | Planner, humans |
| `prediction/unreviewed` | Unprocessed prediction filed by predictor. | predictor-run.sh |
| `prediction/dismissed` | Prediction triaged as DISMISS — planner disagrees, closed with reason. | Planner (triage-predictions step) |
| `prediction/actioned` | Prediction promoted or dismissed by planner. | Planner (triage-predictions step) |
| `formula` | Issue is a formula-based operational task. Dev-poll skips these; dispatcher handles them. | Dispatcher (when dispatching formula tasks) |
| `action` | Operational task for the action-agent to execute via formula. | Planner, humans |
### Dependency conventions
Issues declare dependencies via `## Dependencies` / `## Depends on` sections listing `#N` refs. `lib/parse-deps.sh` extracts these; dev-poll only picks issues whose deps are all closed. See AD-002 for concurrency bounds per LLM backend.
Issues declare dependencies in their body using a `## Dependencies` or
`## Depends on` section listing `#N` references. The dev-poll scheduler uses
`lib/parse-deps.sh` to extract these and only picks issues whose dependencies
are all closed.
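The extraction step can be sketched as a small shell helper; this is a simplified stand-in for `lib/parse-deps.sh` (the real helper's name and flags may differ), using GNU sed/grep:

```shell
# Minimal sketch of dependency extraction as described above
# (simplified; see lib/parse-deps.sh for the real implementation).
extract_deps() {
  # Print the #N refs listed under a "## Dependencies" / "## Depends on" heading.
  printf '%s\n' "$1" |
    sed -n '/^## Depend\(encies\|s on\)/,/^## /p' |
    grep -o '#[0-9]\+' | tr -d '#'
}

body='Fix the widget.

## Dependencies
- #4
- #7
'
extract_deps "$body"   # prints 4 and 7, one per line
```

The scheduler would then check each printed issue number against the forge API and skip the issue if any dependency is still open.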
### Single-threaded pipeline
Each project processes one issue at a time. Dev-poll will not start new work
while an open PR is waiting for CI or review. This keeps context clear and
prevents merge conflicts between concurrent changes.
---
## Addressables and Observables
## Addressables
Concrete artifacts the factory has produced or is building. Observables have measurement wired — the gardener promotes addressables once an evidence process is connected.
Concrete artifacts the factory has produced or is building. The gardener
maintains this table during grooming — see `formulas/run-gardener.toml`.
| Artifact | Location | Observable? |
|----------|----------|-------------|
@@ -167,6 +144,14 @@ Concrete artifacts the factory has produced or is building. Observables have mea
| Skill | ClawHub (in progress) | No |
| GitHub org | github.com/Disinto | No |
## Observables
Addressables with measurement wired — the factory can read structured
feedback from these. The gardener promotes addressables here once an
evidence process is connected.
None yet.
---
## Architecture Decisions
@@ -175,18 +160,19 @@ Humans write these. Agents read and enforce them.
| ID | Decision | Rationale |
|---|---|---|
| AD-001 | Nervous system runs from a polling loop (`docker/agents/entrypoint.sh`), not PR-based actions. | Planner, predictor, gardener, supervisor run directly via `*-run.sh`. They create work, they don't become work. (See PR #474 revert.) |
| AD-002 | **Concurrency is bounded per LLM backend, not per project.** One concurrent Claude session per OAuth credential pool; one concurrent session per llama-server instance. Containers with disjoint backends may run in parallel. | The single-thread invariant is about *backends*, not pipelines. **(a) Anthropic OAuth credentials race on token refresh** — each container uses a per-session `CLAUDE_CONFIG_DIR`, so Claude Code's native lockfile-based OAuth refresh handles contention automatically without external serialization. (Legacy: set `CLAUDE_EXTERNAL_LOCK=1` to re-enable the old `flock session.lock` wrapper for rollback.) **(b) llama-server has finite VRAM and one KV cache** — parallel inference thrashes the cache and risks OOM. All llama-backed agents serialize on the same lock. **(c) Disjoint backends are free to parallelize.** Today `disinto-agents` (Anthropic OAuth, runs `review,gardener`) runs concurrently with `disinto-agents-llama` (llama, runs `dev`) on the same project — they share neither OAuth state nor llama VRAM. **(d) Per-project work-conflict safety** (no duplicate dev work, no merge conflicts on the same branch) is enforced by `issue_claim` (assignee + `in-progress` label) and per-issue worktrees — that's a separate guard that does NOT depend on this AD. |
| AD-001 | Nervous system runs from cron, not action issues. | Planner, predictor, gardener, supervisor run directly via `*-run.sh`. They create work, they don't become work. (See PR #474 revert.) |
| AD-002 | Single-threaded pipeline per project. | One dev issue at a time. No new work while a PR awaits CI or review. Prevents merge conflicts and keeps context clear. |
| AD-003 | The runtime creates and destroys, the formula preserves. | Runtime manages worktrees/sessions/temp. Formulas commit knowledge to git before signaling done. |
| AD-004 | Event-driven > polling > fixed delays. | Never `waitForTimeout` or hardcoded sleep. Use phase files, webhooks, or poll loops with backoff. |
| AD-005 | Secrets via env var indirection, never in issue bodies. | Issue bodies become code. Agent secrets go in `.env.enc` (SOPS-encrypted), vault secrets in `secrets/<NAME>.enc` (age-encrypted, one file per key). Referenced as `$VAR_NAME`. Runner gets only vault secrets; agents get only agent secrets. |
| AD-006 | External actions go through vault dispatch, never direct. | Agents build addressables; only the vault exercises them (publishes, deploys, posts). Tokens for external systems (`GITHUB_TOKEN`, `CLAWHUB_TOKEN`, deploy keys) live only in `secrets/<NAME>.enc` and are decrypted into the ephemeral runner container. `lib/env.sh` unsets them so agents never hold them. PRs with direct external actions without vault dispatch get REQUEST_CHANGES. (Vault redesign in progress: PR-based approval on ops repo, see #73-#77) |
| AD-005 | Secrets via env var indirection, never in issue bodies. | Issue bodies become code. Agent secrets go in `.env.enc`, vault secrets in `.env.vault.enc` (both SOPS-encrypted). Referenced as `$VAR_NAME`. Vault-runner gets only vault secrets; agents get only agent secrets. |
| AD-006 | External actions go through vault dispatch, never direct. | Agents build addressables; only the vault exercises them (publishes, deploys, posts). Tokens for external systems (`GITHUB_TOKEN`, `CLAWHUB_TOKEN`, deploy keys) live only in `.env.vault.enc` and are injected into the ephemeral vault-runner container. `lib/env.sh` unsets them so agents never hold them. PRs with direct external actions without vault dispatch get REQUEST_CHANGES. |
**Who enforces what:**
- **Gardener** checks open backlog issues against ADs during grooming; closes violations with a comment referencing the AD number.
- **Planner** plans within the architecture; does not create issues that violate ADs.
- **Dev-agent** reads AGENTS.md before implementing; refuses work that violates ADs.
- **AD-002 is a runtime invariant; nothing for the gardener to check at issue-groom time.** OAuth concurrency is handled by per-session `CLAUDE_CONFIG_DIR` isolation (with `CLAUDE_EXTERNAL_LOCK` as a rollback flag). Per-issue work is enforced by `issue_claim`. A violation manifests as a 401 or VRAM OOM in agent logs, not as a malformed issue.
---
## Phase-Signaling Protocol
@@ -196,4 +182,6 @@ at each phase boundary by writing to a phase file (e.g.
Key phases: `PHASE:awaiting_ci``PHASE:awaiting_review``PHASE:done`.
Also: `PHASE:escalate` (needs human input), `PHASE:failed`.
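The orchestrator side of this handshake can be sketched as a poll loop with capped backoff, consistent with AD-004 (no fixed sleeps). The helper name and timeouts below are assumptions, not the actual orchestrator code:

```shell
# Sketch: wait for a phase marker in a phase file, with exponential
# backoff capped at 60s (helper name and limits are assumptions).
wait_for_phase() {
  local phase_file=$1 want=$2 limit=${3:-300} delay=1 max=60 waited=0
  while [ "$waited" -lt "$limit" ]; do
    if [ -f "$phase_file" ] && grep -q "PHASE:${want}" "$phase_file"; then
      return 0
    fi
    sleep "$delay"; waited=$((waited + delay))
    delay=$((delay * 2)); [ "$delay" -le "$max" ] || delay=$max
  done
  return 1   # caller treats a timeout like PHASE:failed
}

phase_file=$(mktemp)
echo "PHASE:awaiting_ci" > "$phase_file"
wait_for_phase "$phase_file" awaiting_ci && echo "ready"   # prints: ready
```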
See [docs/PHASE-PROTOCOL.md](docs/PHASE-PROTOCOL.md) for the complete spec, orchestrator reaction matrix, sequence diagram, and crash recovery.
See [docs/PHASE-PROTOCOL.md](docs/PHASE-PROTOCOL.md) for the complete spec
including the orchestrator reaction matrix, sequence diagram, and crash recovery.
BOOTSTRAP.md — new file (460 lines)
@@ -0,0 +1,460 @@
# Bootstrapping a New Project
How to point disinto at a new target project and get all agents running.
## Prerequisites
Before starting, ensure you have:
- [ ] A **git repo** (GitHub, Codeberg, or any URL) with at least one issue labeled `backlog`
- [ ] A **Woodpecker CI** pipeline (`.woodpecker/` dir with at least one `.yml`)
- [ ] **Docker** installed (for local Forgejo provisioning) — or a running Forgejo instance
- [ ] A **local clone** of the target repo on the same machine as disinto
- [ ] `claude` CLI installed and authenticated (`claude --version`)
- [ ] `tmux` installed (`tmux -V`) — required for persistent dev sessions (issue #80+)
## Quick Start
The fastest path is `disinto init`, which provisions a local Forgejo instance, creates bot users and tokens, clones the repo, and sets up cron — all in one command:
```bash
disinto init https://github.com/org/repo
```
This will:
1. Start a local Forgejo instance via Docker (at `http://localhost:3000`)
2. Create admin + bot users (dev-bot, review-bot) with API tokens
3. Create the repo on Forgejo and push your code
4. Generate a `projects/<name>.toml` config
5. Create standard labels (backlog, in-progress, blocked, etc.)
6. Install cron entries for the agents
No external accounts or tokens needed.
## 1. Secret Management (SOPS + age)
Disinto encrypts secrets at rest using [SOPS](https://github.com/getsops/sops) with [age](https://age-encryption.org/) encryption. When `sops` and `age` are installed, `disinto init` automatically:
1. Generates an age key at `~/.config/sops/age/keys.txt` (if none exists)
2. Creates `.sops.yaml` pinning the age public key
3. Encrypts all secrets into `.env.enc` (safe to commit)
4. Removes the plaintext `.env`
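For reference, the generated `.sops.yaml` might look like the fragment below. This is an illustrative sketch: the `path_regex` and the age recipient are placeholders, not values produced by a real `disinto init` run.

```yaml
# Hypothetical .sops.yaml pinning the age public key (placeholder recipient).
creation_rules:
  - path_regex: \.env.*
    age: age1qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
```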
**Install the tools:**
```bash
# age (key generation)
apt install age # Debian/Ubuntu
brew install age # macOS
# sops (encryption/decryption)
# Download from https://github.com/getsops/sops/releases
```
**The age private key** at `~/.config/sops/age/keys.txt` is the single file that must be protected. Back it up securely — without it, `.env.enc` cannot be decrypted. LUKS disk encryption on the VPS protects this key at rest.
**Managing secrets after setup:**
```bash
disinto secrets edit # Opens .env.enc in $EDITOR, re-encrypts on save
disinto secrets show # Prints decrypted secrets (for debugging)
disinto secrets migrate # Converts existing plaintext .env -> .env.enc
```
**Fallback:** If `sops`/`age` are not installed, `disinto init` writes secrets to a plaintext `.env` file with a warning. All agents load secrets transparently — `lib/env.sh` checks for `.env.enc` first, then falls back to `.env`.
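The load order can be sketched as follows; the function name is an assumption and the real logic lives in `lib/env.sh`:

```shell
# Sketch of the .env.enc-first, .env-fallback load order described above
# (function name assumed; see lib/env.sh for the real implementation).
load_env() {
  local root=$1
  if [ -f "$root/.env.enc" ] && command -v sops >/dev/null 2>&1; then
    set -a; eval "$(sops -d "$root/.env.enc")"; set +a
  elif [ -f "$root/.env" ]; then
    set -a; . "$root/.env"; set +a
  fi
}

dir=$(mktemp -d)
echo 'FORGE_URL=http://localhost:3000' > "$dir/.env"
load_env "$dir"
echo "$FORGE_URL"   # prints: http://localhost:3000
```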
## 2. Configure `.env`
```bash
cp .env.example .env
```
Fill in:
```bash
# ── Forge (auto-populated by disinto init) ─────────────────
FORGE_URL=http://localhost:3000 # local Forgejo instance
FORGE_TOKEN= # dev-bot token (auto-generated)
FORGE_REVIEW_TOKEN= # review-bot token (auto-generated)
# ── Woodpecker CI ───────────────────────────────────────────
WOODPECKER_TOKEN=tok_xxxxxxxx
WOODPECKER_SERVER=http://localhost:8000
# WOODPECKER_REPO_ID — now per-project, set in projects/*.toml [ci] section
# Woodpecker Postgres (for direct pipeline queries)
WOODPECKER_DB_PASSWORD=secret
WOODPECKER_DB_USER=woodpecker
WOODPECKER_DB_HOST=127.0.0.1
WOODPECKER_DB_NAME=woodpecker
# ── Tuning ──────────────────────────────────────────────────
CLAUDE_TIMEOUT=7200 # seconds per Claude invocation
```
### Backwards compatibility
If you have an existing deployment using `CODEBERG_TOKEN` / `REVIEW_BOT_TOKEN` in `.env`, those still work — `env.sh` falls back to the old names automatically. No migration needed.
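The fallback amounts to a default-value expansion; the exact expressions in `env.sh` may differ, but the effect is:

```shell
# Sketch of the legacy-name fallback (exact expansion in env.sh may differ).
unset FORGE_TOKEN FORGE_REVIEW_TOKEN
CODEBERG_TOKEN="legacy-token-123"   # value from an old .env

FORGE_TOKEN="${FORGE_TOKEN:-${CODEBERG_TOKEN:-}}"
FORGE_REVIEW_TOKEN="${FORGE_REVIEW_TOKEN:-${REVIEW_BOT_TOKEN:-}}"

echo "$FORGE_TOKEN"   # prints: legacy-token-123
```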
## 3. Configure Project TOML
Each project needs a `projects/<name>.toml` file with box-specific settings
(absolute paths, Woodpecker CI IDs, forge URL). These files are
**gitignored** — they are local installation config, not shared code.
To create one:
```bash
# Automatic — generates TOML, clones repo, sets up cron:
disinto init https://github.com/org/repo
# Manual — copy a template and fill in your values:
cp projects/myproject.toml.example projects/myproject.toml
vim projects/myproject.toml
```
The `forge_url` field in the TOML tells all agents where to find the forge API:
```toml
name = "myproject"
repo = "org/myproject"
forge_url = "http://localhost:3000"
```
The repo ships `projects/*.toml.example` templates showing the expected
structure. See any `.toml.example` file for the full field reference.
## 4. Claude Code Global Settings
Configure `~/.claude/settings.json` with **only** permissions and `skipDangerousModePermissionPrompt`. Do not add hooks to the global settings — `agent-session.sh` injects per-worktree hooks automatically.
Match the configuration from harb-staging exactly. The file should contain only permission grants and the dangerous-mode flag:
```json
{
"permissions": {
"allow": [
"..."
]
},
"skipDangerousModePermissionPrompt": true
}
```
### Seed `~/.claude.json`
Run `claude --dangerously-skip-permissions` once interactively to create `~/.claude.json`. This file must exist before cron-driven agents can run.
```bash
claude --dangerously-skip-permissions
# Exit after it initializes successfully
```
## 5. File Ownership
Everything under `/home/debian` must be owned by `debian:debian`. Root-owned files cause permission errors when agents run as the `debian` user.
```bash
chown -R debian:debian /home/debian/harb /home/debian/dark-factory
```
Verify no root-owned files exist in agent temp directories:
```bash
# These should return nothing
find /tmp/dev-* /tmp/harb-* /tmp/review-* -not -user debian 2>/dev/null
```
## 5b. Woodpecker CI + Forgejo Integration
`disinto init` automatically configures Woodpecker to use the local Forgejo instance as its forge backend if `WOODPECKER_SERVER` is set in `.env`. This includes:
1. Creating an OAuth2 application on Forgejo for Woodpecker
2. Writing `WOODPECKER_FORGEJO_*` env vars to `.env`
3. Activating the repo in Woodpecker
### Manual setup (if Woodpecker runs outside of `disinto init`)
If you manage Woodpecker separately, configure these env vars in its server config:
```bash
WOODPECKER_FORGEJO=true
WOODPECKER_FORGEJO_URL=http://localhost:3000
WOODPECKER_FORGEJO_CLIENT=<oauth2-client-id>
WOODPECKER_FORGEJO_SECRET=<oauth2-client-secret>
```
To create the OAuth2 app on Forgejo:
```bash
# Create OAuth2 application (redirect URI = Woodpecker authorize endpoint)
curl -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"http://localhost:3000/api/v1/user/applications/oauth2" \
-d '{"name":"woodpecker-ci","redirect_uris":["http://localhost:8000/authorize"],"confidential_client":true}'
```
The response contains `client_id` and `client_secret` for `WOODPECKER_FORGEJO_CLIENT` / `WOODPECKER_FORGEJO_SECRET`.
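Extracting the two fields can be done with `jq` if available, or plain `sed`. The response shape below is illustrative, not captured from a real Forgejo instance:

```shell
# Sketch: pull client_id / client_secret out of the OAuth2 creation response.
# The JSON below is an illustrative sample, not real output.
response='{"id":1,"name":"woodpecker-ci","client_id":"abc123","client_secret":"s3cr3t"}'

client_id=$(printf '%s' "$response" | sed -n 's/.*"client_id":"\([^"]*\)".*/\1/p')
client_secret=$(printf '%s' "$response" | sed -n 's/.*"client_secret":"\([^"]*\)".*/\1/p')

echo "WOODPECKER_FORGEJO_CLIENT=$client_id"   # prints: WOODPECKER_FORGEJO_CLIENT=abc123
```

Prefer `jq -r .client_id` when `jq` is installed; the `sed` version breaks on escaped quotes.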
To activate the repo in Woodpecker:
```bash
woodpecker-cli repo add <org>/<repo>
# Or via API:
curl -X POST \
-H "Authorization: Bearer ${WOODPECKER_TOKEN}" \
"http://localhost:8000/api/repos" \
-d '{"forge_remote_id":"<org>/<repo>"}'
```
Woodpecker will now trigger pipelines on pushes to Forgejo and push commit status back. Disinto queries Woodpecker directly for CI status (with a forge API fallback), so pipeline results are visible even if Woodpecker's status push to Forgejo is delayed.
## 6. Prepare the Target Repo
### Required: CI pipeline
The repo needs at least one Woodpecker pipeline. Disinto monitors CI status to decide when a PR is ready for review and when it can merge.
### Required: `CLAUDE.md`
Create a `CLAUDE.md` in the repo root. This is the context document that dev-agent and review-agent read before working. It should cover:
- **What the project is** (one paragraph)
- **Tech stack** (languages, frameworks, DB)
- **How to build/run/test** (`npm install`, `npm test`, etc.)
- **Coding conventions** (import style, naming, linting rules)
- **Project structure** (key directories and what lives where)
The dev-agent reads this file via `claude -p` before implementing any issue. The better this file, the better the output.
### Required: Issue labels
`disinto init` creates these automatically. If setting up manually, create these labels on the forge repo:
| Label | Purpose |
|-------|---------|
| `backlog` | Issues ready to be picked up by dev-agent |
| `in-progress` | Managed by dev-agent (auto-applied, auto-removed) |
Optional but recommended:
| Label | Purpose |
|-------|---------|
| `tech-debt` | Gardener can promote these to `backlog` |
| `blocked` | Dev-agent marks issues with unmet dependencies |
| `formula` | **Not yet functional.** Formula dispatch lives on the unmerged `feat/formula` branch. Dev-agent will skip any issue with this label until that branch is merged. Template files exist in `formulas/` for future use. |
### Required: Branch protection
On Forgejo, set up branch protection for your primary branch:
- **Require pull request reviews**: enabled
- **Required approvals**: 1 (from the review bot account)
- **Restrict push**: only allow merges via PR
This ensures dev-agent can't merge its own PRs — it must wait for review-agent (running as the bot account) to approve.
> **Common pitfall:** Approvals alone are not enough. You must also:
> 1. Add `review-bot` as a **write** collaborator on the repo (Settings → Collaborators)
> 2. Set both `approvals_whitelist_username` **and** `merge_whitelist_usernames` to include `review-bot` in the branch protection rule
>
> Without write access, the bot's approval is counted but the merge API returns HTTP 405.
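The rule can also be applied via the Forgejo API instead of the web UI. The payload below follows Forgejo's branch-protection endpoint, but verify the field names against your Forgejo version before relying on it:

```shell
# Sketch: branch protection payload with both whitelists set (field names
# follow Forgejo's branch_protections endpoint; double-check per version).
payload=$(cat <<'EOF'
{
  "branch_name": "main",
  "required_approvals": 1,
  "enable_approvals_whitelist": true,
  "approvals_whitelist_username": ["review-bot"],
  "enable_merge_whitelist": true,
  "merge_whitelist_usernames": ["review-bot"]
}
EOF
)
printf '%s\n' "$payload" | grep -q review-bot && echo "payload ok"
# Then apply it:
# curl -X POST -H "Authorization: token ${FORGE_TOKEN}" \
#   -H "Content-Type: application/json" \
#   "http://localhost:3000/api/v1/repos/<org>/<repo>/branch_protections" \
#   -d "$payload"
```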
### Required: Seed the `AGENTS.md` tree
The planner maintains an `AGENTS.md` tree — architecture docs with
per-file `<!-- last-reviewed: SHA -->` watermarks. You must seed this before
the first planner run, otherwise the planner sees no watermarks and treats the
entire repo as "new", generating a noisy first-run diff.
1. **Create `AGENTS.md` in the repo root** with a one-page overview of the
project: what it is, tech stack, directory layout, key conventions. Link
to sub-directory AGENTS.md files.
2. **Create sub-directory `AGENTS.md` files** for each major directory
(e.g. `frontend/AGENTS.md`, `backend/AGENTS.md`). Keep each under ~200
lines — architecture and conventions, not implementation details.
3. **Set the watermark** on line 1 of every AGENTS.md file to the current HEAD:
```bash
SHA=$(git rev-parse --short HEAD)
for f in $(find . -name "AGENTS.md" -not -path "./.git/*"); do
sed -i "1s/^/<!-- last-reviewed: ${SHA} -->\n/" "$f"
done
```
4. **Symlink `CLAUDE.md`** so Claude Code picks up the same file:
```bash
ln -sf AGENTS.md CLAUDE.md
```
5. Commit and push. The planner will now see 0 changes on its first run and
only update files when real commits land.
See `formulas/run-planner.toml` (agents-update step) for the full AGENTS.md conventions.
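Reading a watermark back is the inverse of the seeding step above; a planner-style staleness check might extract it like this (helper name assumed, real logic lives in the planner formula):

```shell
# Sketch: read the last-reviewed SHA from line 1 of an AGENTS.md file
# (helper name assumed), for comparison against `git rev-parse --short HEAD`.
watermark_of() {
  sed -n '1s/.*last-reviewed: \([0-9a-f]\+\).*/\1/p' "$1"
}

f=$(mktemp)
printf '<!-- last-reviewed: abc1234 -->\n# AGENTS.md\n' > "$f"
watermark_of "$f"   # prints: abc1234
```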
## 7. Write Good Issues
Dev-agent works best with issues that have:
- **Clear title** describing the change (e.g., "Add email validation to customer form")
- **Acceptance criteria** — what "done" looks like
- **Dependencies** — reference blocking issues with `#NNN` in the body or a `## Dependencies` section:
```
## Dependencies
- #4
- #7
```
Dev-agent checks that all referenced issues are closed (= merged) before starting work. If any are open, the issue is skipped and checked again next cycle.
## 8. Install Cron
```bash
crontab -e
```
### Single project
Add (adjust paths):
```cron
FACTORY_ROOT=/home/you/disinto
# Supervisor — health checks, auto-healing (every 10 min)
0,10,20,30,40,50 * * * * $FACTORY_ROOT/supervisor/supervisor-poll.sh
# Review agent — find unreviewed PRs (every 10 min, offset +3)
3,13,23,33,43,53 * * * * $FACTORY_ROOT/review/review-poll.sh $FACTORY_ROOT/projects/myproject.toml
# Dev agent — find ready issues, implement (every 10 min, offset +6)
6,16,26,36,46,56 * * * * $FACTORY_ROOT/dev/dev-poll.sh $FACTORY_ROOT/projects/myproject.toml
# Gardener — backlog grooming (daily)
15 8 * * * $FACTORY_ROOT/gardener/gardener-poll.sh
# Planner — AGENTS.md maintenance + gap analysis (weekly)
0 9 * * 1 $FACTORY_ROOT/planner/planner-poll.sh
```
`review-poll.sh`, `dev-poll.sh`, and `gardener-poll.sh` all take a project TOML file as their first argument.
### Multiple projects
Stagger each project's polls so they don't overlap. With the example below, cross-project gaps are 2 minutes:
```cron
FACTORY_ROOT=/home/you/disinto
# Supervisor (shared)
0,10,20,30,40,50 * * * * $FACTORY_ROOT/supervisor/supervisor-poll.sh
# Project A — review +3, dev +6
3,13,23,33,43,53 * * * * $FACTORY_ROOT/review/review-poll.sh $FACTORY_ROOT/projects/project-a.toml
6,16,26,36,46,56 * * * * $FACTORY_ROOT/dev/dev-poll.sh $FACTORY_ROOT/projects/project-a.toml
# Project B — review +8, dev +1 (2-min gap from project A)
8,18,28,38,48,58 * * * * $FACTORY_ROOT/review/review-poll.sh $FACTORY_ROOT/projects/project-b.toml
1,11,21,31,41,51 * * * * $FACTORY_ROOT/dev/dev-poll.sh $FACTORY_ROOT/projects/project-b.toml
# Gardener — per-project backlog grooming (daily)
15 8 * * * $FACTORY_ROOT/gardener/gardener-poll.sh $FACTORY_ROOT/projects/project-a.toml
45 8 * * * $FACTORY_ROOT/gardener/gardener-poll.sh $FACTORY_ROOT/projects/project-b.toml
# Planner — AGENTS.md maintenance + gap analysis (weekly)
0 9 * * 1 $FACTORY_ROOT/planner/planner-poll.sh
```
The staggered offsets prevent agents from competing for resources. Each project gets its own lock file (`/tmp/dev-agent-{name}.lock`) derived from the `name` field in its TOML, so concurrent runs across projects are safe.
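The lock-path derivation can be sketched as below; the TOML parsing is deliberately simplified (the real code goes through `lib/load-project.sh`):

```shell
# Sketch: derive the per-project lock path from the TOML `name` field
# (parsing simplified; the real code uses lib/load-project.sh).
lock_path_for() {
  local toml=$1 name
  name=$(sed -n 's/^name *= *"\(.*\)"/\1/p' "$toml")
  echo "/tmp/dev-agent-${name}.lock"
}

toml=$(mktemp)
echo 'name = "project-a"' > "$toml"
lock_path_for "$toml"   # prints: /tmp/dev-agent-project-a.lock

# Typical usage inside a poll script:
# exec 9>"$(lock_path_for "$toml")"
# flock -n 9 || exit 0   # another instance of this project is running
```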
## 9. Verify
```bash
# Should complete with "all clear" (no problems to fix)
bash supervisor/supervisor-poll.sh
# Should list backlog issues (or "no backlog issues")
bash dev/dev-poll.sh
# Should find no unreviewed PRs (or review one if exists)
bash review/review-poll.sh
```
Check logs after a few cycles:
```bash
tail -30 supervisor/supervisor.log
tail -30 dev/dev-agent.log
tail -30 review/review.log
```
## Lifecycle
Once running, the system operates autonomously:
```
You write issues (with backlog label)
→ dev-poll finds ready issues
→ dev-agent implements in a worktree, opens PR
→ CI runs (Woodpecker)
→ review-agent reviews, approves or requests changes
→ dev-agent addresses feedback (if any)
→ merge, close issue, clean up
Meanwhile:
supervisor-poll monitors health, kills stale processes, manages resources
gardener grooms backlog: closes duplicates, promotes tech-debt, escalates ambiguity
planner rebuilds AGENTS.md from git history, gap-analyses against VISION.md
```
## Troubleshooting
| Symptom | Check |
|---------|-------|
| Dev-agent not picking up issues | `cat /tmp/dev-agent.lock` — is another instance running? Issues labeled `backlog`? Dependencies met? |
| PR not getting reviewed | `tail review/review.log` — CI must pass first. Review bot token valid? |
| CI stuck | `bash lib/ci-debug.sh` — check Woodpecker. Rate-limited? (exit 128 = wait 15 min) |
| Claude not found | `which claude` — must be in PATH. Check `lib/env.sh` adds `~/.local/bin`. |
| Merge fails | Branch protection misconfigured? Review bot needs write access to the repo. |
| Memory issues | Supervisor auto-heals at <500 MB free. Check `supervisor/supervisor.log` for P0 alerts. |
| Works on one box but not another | Diff configs first (`~/.claude/settings.json`, `.env`, crontab, branch protection). Never fix it in code: config mismatches are the #1 cause of cross-box failures. |
### Multi-project common blockers
| Symptom | Cause | Fix |
|---------|-------|-----|
| Dev-agent for project B never starts | Shared lock file path | Each TOML `name` field must be unique — lock is `/tmp/dev-agent-{name}.lock` |
| Review-poll skips all PRs | CI gate with no CI configured | Set `woodpecker_repo_id = 0` in the TOML `[ci]` section to bypass the CI check |
| Approved PRs never merge (HTTP 405) | `review-bot` not in merge/approvals whitelist | Add as write collaborator; set both `approvals_whitelist_username` and `merge_whitelist_usernames` in branch protection |
| Dev-agent churns through issues without waiting for open PRs to land | No single-threaded enforcement | `WAITING_PRS` check in dev-poll holds new work — verify TOML `name` is consistent across invocations |
| Label ping-pong (issue reopened then immediately re-closed) | `already_done` handler doesn't close issue | Review dev-agent log; `already_done` status should auto-close the issue |
## Security: Docker Socket Sharing in CI
The `woodpecker-agent` service mounts `/var/run/docker.sock` to execute `type: docker` CI pipelines. This grants root-equivalent access to the Docker host — any CI pipeline step can run privileged containers, mount arbitrary host paths, or access other containers' data.
**Mitigations:**
- **Run disinto in an LXD/VM container, not on bare metal.** When the Docker daemon runs inside an LXD container, LXD's user namespace mapping and resource limits contain the blast radius. A compromised CI step cannot reach the real host.
- **`WOODPECKER_MAX_WORKFLOWS: 1`** limits concurrent CI resource usage, preventing a runaway pipeline from exhausting host resources.
- **`WOODPECKER_AGENT_SECRET`** authenticates the agent↔server gRPC connection. `disinto init` auto-generates this secret and stores it in `.env` (or `.env.enc` when SOPS is available).
- Consider setting `WOODPECKER_BACKEND_DOCKER_VOLUMES` on the agent to restrict which host volumes CI pipelines can mount.
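A hypothetical docker-compose fragment for the agent, showing the volume restriction alongside the other mitigations (check the Woodpecker docs for the exact semantics of `WOODPECKER_BACKEND_DOCKER_VOLUMES` on your version):

```yaml
# Hypothetical agent service fragment; paths are placeholders.
woodpecker-agent:
  environment:
    - WOODPECKER_MAX_WORKFLOWS=1
    - WOODPECKER_BACKEND_DOCKER_VOLUMES=/var/lib/woodpecker-cache:/cache
```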
**Threat model:** PRs are created by the dev-agent (Claude) and auto-reviewed by the review-bot. A crafted backlog issue could theoretically produce a PR whose CI step exploits the Docker socket. The LXD containment boundary is the primary defense — treat the LXD container as the trust boundary, not the Docker daemon inside it.
## Action Runner — disinto (harb-staging)
Added 2026-03-19. Polls disinto repo for `action`-labeled issues.
```
*/5 * * * * cd /home/debian/dark-factory && bash action/action-poll.sh projects/disinto.toml >> /tmp/action-disinto-cron.log 2>&1
```
Runs locally on harb-staging, the same box where Caddy and the site live, for formulas that need local resources (publish-site, etc.).
### Fix applied: action-agent.sh needs +x
The scripts weren't executable after git clone. Run:
```bash
chmod +x action/action-agent.sh action/action-poll.sh
```

View file

@@ -1,6 +0,0 @@

# CLAUDE.md
This repo is **disinto** — an autonomous code factory.
Read `AGENTS.md` for architecture, coding conventions, and per-file documentation.
For setup and operations, load the `disinto-factory` skill (`disinto-factory/SKILL.md`).

View file

@@ -21,29 +21,25 @@ Point it at a git repo with a Woodpecker CI pipeline and it will pick up issues,
## Architecture
```
entrypoint.sh (while-true polling loop, 5 min base interval)
├── every 5 min ──→ review-poll.sh ← finds unreviewed PRs, spawns review
│ └── review-pr.sh ← claude -p: review → approve/request changes
├── every 5 min ──→ dev-poll.sh ← pulls ready issues, spawns dev-agent
│ └── dev-agent.sh ← claude -p: implement → PR → CI → review → merge
├── every 6h ────→ gardener-run.sh ← backlog grooming (duplicates, stale, tech-debt)
│ └── claude -p: triage → promote/close/escalate
├── every 6h ────→ architect-run.sh ← strategic decomposition of vision into sprints
├── every 12h ───→ planner-run.sh ← gap-analyse VISION.md, create backlog issues
│ └── claude -p: update AGENTS.md → create issues
└── every 24h ───→ predictor-run.sh ← infrastructure pattern detection
cron (*/10) ──→ supervisor-poll.sh ← supervisor (bash checks, zero tokens)
├── all clear? → exit 0
└── problem? → claude -p (diagnose, fix, or escalate)
cron (*/10) ──→ dev-poll.sh ← pulls ready issues, spawns dev-agent
└── dev-agent.sh ← claude -p: implement → PR → CI → review → merge
cron (*/10) ──→ review-poll.sh ← finds unreviewed PRs, spawns review
└── review-pr.sh ← claude -p: review → approve/request changes
cron (daily) ──→ gardener-poll.sh ← backlog grooming (duplicates, stale, tech-debt)
└── claude -p: triage → promote/close/escalate
cron (weekly) ──→ planner-poll.sh ← gap-analyse VISION.md, create backlog issues
└── claude -p: update AGENTS.md → create issues
cron (*/30) ──→ vault-poll.sh ← safety gate for dangerous/irreversible actions
└── claude -p: classify → auto-approve/reject or escalate
entrypoint-edge.sh (edge container)
├── dispatcher.sh ← polls ops repo for vault actions
└── every 20 min → supervisor-run.sh ← health checks (bash checks, zero tokens)
├── all clear? → exit 0
└── problem? → claude -p (diagnose, fix, or escalate)
```
## Prerequisites
@@ -72,8 +68,6 @@ cd disinto
disinto init https://github.com/yourorg/yourproject
```
This will generate a `docker-compose.yml` file.
Or configure manually — edit `.env` with your values:
```bash
@@ -95,11 +89,18 @@ CLAUDE_TIMEOUT=7200 # max seconds per Claude invocation (default: 2h)
```
```bash
# 3. Start the agent and edge containers
docker compose up -d
# 3. Install cron (staggered to avoid overlap)
crontab -e
# Add:
# 0,10,20,30,40,50 * * * * /path/to/disinto/supervisor/supervisor-poll.sh
# 3,13,23,33,43,53 * * * * /path/to/disinto/review/review-poll.sh
# 6,16,26,36,46,56 * * * * /path/to/disinto/dev/dev-poll.sh
# 15 8 * * * /path/to/disinto/gardener/gardener-poll.sh
# 0,30 * * * * /path/to/disinto/vault/vault-poll.sh
# 0 9 * * 1 /path/to/disinto/planner/planner-poll.sh
# 4. Verify the entrypoint loop is running
docker exec disinto-agents tail -f /home/agent/data/agent-entrypoint.log
# 4. Verify
bash supervisor/supervisor-poll.sh # should log "all clear"
```
## Directory Structure
@@ -112,23 +113,26 @@ disinto/
│ ├── env.sh # Shared: load .env, PATH, API helpers
│ └── ci-debug.sh # Woodpecker CI log/failure helper
├── dev/
│ ├── dev-poll.sh # Poll: find ready issues
│ ├── dev-poll.sh # Cron entry: find ready issues
│ └── dev-agent.sh # Implementation agent (claude -p)
├── review/
│ ├── review-poll.sh # Poll: find unreviewed PRs
│ ├── review-poll.sh # Cron entry: find unreviewed PRs
│ └── review-pr.sh # Review agent (claude -p)
├── gardener/
│ ├── gardener-run.sh # Executor: backlog grooming
│ ├── gardener-poll.sh # Cron entry: backlog grooming
│ └── best-practices.md # Gardener knowledge base
├── planner/
│ ├── planner-run.sh # Executor: vision gap analysis
│ └── (formula-driven) # run-planner.toml executed by dispatcher
│ ├── planner-poll.sh # Cron entry: weekly vision gap analysis
│ └── (formula-driven) # run-planner.toml executed by action-agent
├── vault/
│ └── vault-env.sh # Shared env setup (vault redesign in progress, see #73-#77)
├── docs/
│ └── VAULT.md # Vault PR workflow and branch protection documentation
│ ├── vault-poll.sh # Cron entry: process pending dangerous actions
│ ├── vault-agent.sh # Classifies and routes actions (claude -p)
│ ├── vault-fire.sh # Executes an approved action
│ ├── vault-reject.sh # Marks an action as rejected
│ └── PROMPT.md # System prompt for vault agent
└── supervisor/
├── supervisor-poll.sh # Supervisor: health checks + claude -p
├── PROMPT.md # Supervisor's system prompt
├── update-prompt.sh # Self-learning: append to best-practices
└── best-practices/ # Progressive disclosure knowledge base
├── memory.md
@@ -144,14 +148,12 @@ disinto/
| Agent | Trigger | Job |
|-------|---------|-----|
| **Supervisor** | Every 20 min | Health checks (RAM, disk, CI, git). Calls Claude only when something is broken. Self-improving via `best-practices/`. |
| **Dev** | Every 5 min | Picks up `backlog`-labeled issues, creates a branch, implements, opens a PR, monitors CI, responds to review, merges. |
| **Review** | Every 5 min | Finds PRs without review, runs Claude-powered code review, approves or requests changes. |
| **Gardener** | Every 6h | Grooms the issue backlog: detects duplicates, promotes `tech-debt` to `backlog`, closes stale issues, escalates ambiguous items. |
| **Planner** | Every 12h | Updates AGENTS.md documentation to reflect recent code changes, then gap-analyses VISION.md vs current state and creates up to 5 backlog issues for the highest-leverage gaps. |
> **Vault:** Being redesigned as a PR-based approval workflow (issues #73-#77).
> See [docs/VAULT.md](docs/VAULT.md) for the vault PR workflow and branch protection details.
| **Supervisor** | Every 10 min | Health checks (RAM, disk, CI, git). Calls Claude only when something is broken. Self-improving via `best-practices/`. |
| **Dev** | Every 10 min | Picks up `backlog`-labeled issues, creates a branch, implements, opens a PR, monitors CI, responds to review, merges. |
| **Review** | Every 10 min | Finds PRs without review, runs Claude-powered code review, approves or requests changes. |
| **Gardener** | Daily | Grooms the issue backlog: detects duplicates, promotes `tech-debt` to `backlog`, closes stale issues, escalates ambiguous items. |
| **Planner** | Weekly | Updates AGENTS.md documentation to reflect recent code changes, then gap-analyses VISION.md vs current state and creates up to 5 backlog issues for the highest-leverage gaps. |
| **Vault** | Every 30 min | Safety gate for dangerous or irreversible actions. Classifies pending actions via Claude: auto-approve, auto-reject, or escalate to a human via vault/forge. |
## Design Principles

View file

@@ -1,97 +0,0 @@
# Vault Action TOML Schema
This document defines the schema for vault action TOML files used in the PR-based approval workflow (issue #74).
## File Location
Vault actions are stored in `vault/actions/<action-id>.toml` on the ops repo.
## Schema Definition
```toml
# Required
id = "publish-skill-20260331"
formula = "clawhub-publish"
context = "SKILL.md bumped to 0.3.0"
# Required secrets to inject (env vars)
secrets = ["CLAWHUB_TOKEN"]
# Optional file-based credential mounts
mounts = ["ssh"]
# Optional
model = "sonnet"
tools = ["clawhub"]
timeout_minutes = 30
blast_radius = "low" # optional: overrides policy.toml tier ("low"|"medium"|"high")
```
## Field Specifications
### Required Fields
| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique identifier for the vault action. Format: `<action-type>-<date>` (e.g., `publish-skill-20260331`) |
| `formula` | string | Formula name from `formulas/` directory that defines the operational task to execute |
| `context` | string | Human-readable explanation of why this action is needed. Used in PR description |
| `secrets` | array of strings | List of secret names to inject into the execution environment. Only these secrets are passed to the container |
### Optional Fields
| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `mounts` | array of strings | `[]` | Well-known mount aliases for file-based credentials. The dispatcher maps each alias to a read-only volume flag |
| `model` | string | `sonnet` | Override the default Claude model for this action |
| `tools` | array of strings | `[]` | MCP tools to enable during execution |
| `timeout_minutes` | integer | `60` | Maximum execution time in minutes |
| `blast_radius` | string | _(from policy.toml)_ | Override blast-radius tier for this invocation. Valid values: `"low"`, `"medium"`, `"high"`. See [docs/BLAST-RADIUS.md](../docs/BLAST-RADIUS.md) |
## Secret Names
Secret names must have a corresponding `secrets/<NAME>.enc` file (age-encrypted). The vault validates that requested secrets exist in the allowlist before execution.
Common secret names:
- `CLAWHUB_TOKEN` - Token for ClawHub skill publishing
- `GITHUB_TOKEN` - GitHub API token for repository operations
- `DEPLOY_KEY` - Infrastructure deployment key
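The two checks described above (allowlist membership, plus an existing `secrets/<NAME>.enc` file) can be sketched as a standalone function. Function name and allowlist contents are illustrative, not the vault's real code:

```shell
# Illustrative allowlist (the real one lives in vault-env.sh)
VAULT_ALLOWED_SECRETS="CLAWHUB_TOKEN GITHUB_TOKEN DEPLOY_KEY"

check_secret() {
  local name="$1"
  # Pad with spaces so whole-word matching works
  case " $VAULT_ALLOWED_SECRETS " in
    *" $name "*) ;;  # in allowlist, continue
    *) echo "not in allowlist: $name" >&2; return 1 ;;
  esac
  # The age-encrypted file must exist alongside the allowlist entry
  [ -f "secrets/${name}.enc" ] ||
    { echo "missing secrets/${name}.enc" >&2; return 1; }
}
```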
## Mount Aliases
Mount aliases map to read-only volume flags passed to the runner container:
| Alias | Maps to |
|-------|---------|
| `ssh` | `-v ${HOME}/.ssh:/home/agent/.ssh:ro` |
| `gpg` | `-v ${HOME}/.gnupg:/home/agent/.gnupg:ro` |
| `sops` | `-v ${HOME}/.config/sops/age:/home/agent/.config/sops/age:ro` |
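A minimal sketch of how a dispatcher could map these aliases to volume flags (the function name is hypothetical; the paths mirror the table above):

```shell
# Map a mount alias to a read-only docker volume flag; unknown aliases fail.
mount_flag() {
  case "$1" in
    ssh)  printf -- '-v %s/.ssh:/home/agent/.ssh:ro\n' "$HOME" ;;
    gpg)  printf -- '-v %s/.gnupg:/home/agent/.gnupg:ro\n' "$HOME" ;;
    sops) printf -- '-v %s/.config/sops/age:/home/agent/.config/sops/age:ro\n' "$HOME" ;;
    *)    echo "unknown mount alias: $1" >&2; return 1 ;;
  esac
}
```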
## Validation Rules
1. **Required fields**: `id`, `formula`, `context`, and `secrets` must be present
2. **Formula validation**: The formula must exist in the `formulas/` directory
3. **Secret validation**: All secrets in the `secrets` array must be in the allowlist
4. **No unknown fields**: The TOML must not contain fields outside the schema
5. **ID uniqueness**: The `id` must be unique across all vault actions
## Example Files
See `vault/examples/` for complete examples:
- `webhook-call.toml` - Example of calling an external webhook
- `promote.toml` - Example of promoting a build/artifact
- `publish.toml` - Example of publishing a skill to ClawHub
## Usage
Validate a vault action file:
```bash
./vault/validate.sh vault/actions/<action-id>.toml
```
The validator will check:
- All required fields are present
- Secret names are in the allowlist
- No unknown fields are present
- Formula exists in the formulas directory

View file

@@ -1,53 +0,0 @@
#!/usr/bin/env bash
# classify.sh — Blast-radius classification engine
#
# Reads the ops-repo policy.toml and prints the tier for a given formula.
# An optional blast_radius override (from the action TOML) takes precedence.
#
# Usage: classify.sh <formula-name> [blast_radius_override]
# Output: prints "low", "medium", or "high" to stdout; exits 0
#
# Source lib/env.sh directly (not vault-env.sh) to avoid circular dependency:
# vault-env.sh calls classify.sh, so classify.sh must not source vault-env.sh.
# The only variable needed here is OPS_REPO_ROOT, which comes from lib/env.sh.
# shellcheck source=../lib/env.sh
set -euo pipefail
source "$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)/lib/env.sh"
formula="${1:-}"
override="${2:-}"
if [ -z "$formula" ]; then
echo "Usage: classify.sh <formula-name> [blast_radius_override]" >&2
exit 1
fi
# If the action TOML provides a blast_radius override, use it directly
if [[ "$override" =~ ^(low|medium|high)$ ]]; then
echo "$override"
exit 0
fi
# Read tier from ops-repo policy.toml
policy_file="${OPS_REPO_ROOT}/vault/policy.toml"
if [ -f "$policy_file" ]; then
# Parse: look for `formula_name = "tier"` under [tiers]
# Escape regex metacharacters in formula name for safe grep
escaped_formula=$(printf '%s' "$formula" | sed 's/[].[*^$\\]/\\&/g')
# grep may find no match (exit 1); guard with || true to avoid pipefail abort
tier=$(sed -n '/^\[tiers\]/,/^\[/{/^\[tiers\]/d;/^\[/d;p}' "$policy_file" \
| { grep -E "^${escaped_formula}[[:space:]]*=" || true; } \
| sed -E 's/^[^=]+=[[:space:]]*"([^"]+)".*/\1/' \
| head -n1)
if [[ "$tier" =~ ^(low|medium|high)$ ]]; then
echo "$tier"
exit 0
fi
fi
# Default-deny: unknown formulas are high
echo "high"
exit 0
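The `[tiers]` lookup above can be exercised in isolation against a throwaway policy file. This sketch reuses the same sed/grep pipeline with a simplified default (names and the heredoc contents are illustrative):

```shell
# Throwaway policy file mimicking vault/policy.toml
policy_file=$(mktemp)
cat > "$policy_file" <<'EOF'
[tiers]
groom-backlog = "low"
release = "high"
EOF

lookup_tier() {
  local formula="$1" escaped tier
  # Escape regex metacharacters in the formula name for safe grep
  escaped=$(printf '%s' "$formula" | sed 's/[].[*^$\\]/\\&/g')
  tier=$(sed -n '/^\[tiers\]/,/^\[/{/^\[tiers\]/d;/^\[/d;p}' "$policy_file" \
    | { grep -E "^${escaped}[[:space:]]*=" || true; } \
    | sed -E 's/^[^=]+=[[:space:]]*"([^"]+)".*/\1/' \
    | head -n1)
  echo "${tier:-high}"   # default-deny: unknown formulas are high
}
```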

View file

@@ -1,21 +0,0 @@
# vault/examples/promote.toml
# Example: Promote a build/artifact to production
#
# This vault action demonstrates promoting a built artifact to a
# production environment with proper authentication.
id = "promote-20260331"
formula = "run-supervisor"
context = "Promote build v1.2.3 to production environment"
# Secrets to inject for deployment authentication
secrets = ["DEPLOY_KEY", "DOCKER_HUB_TOKEN"]
# Optional: use larger model for complex deployment logic
model = "sonnet"
# Optional: enable MCP tools for container operations
tools = ["docker"]
# Optional: deployments may take longer
timeout_minutes = 45

View file

@@ -1,21 +0,0 @@
# vault/examples/publish.toml
# Example: Publish a skill to ClawHub
#
# This vault action demonstrates publishing a skill to ClawHub
# using the clawhub-publish formula.
id = "publish-site-20260331"
formula = "run-publish-site"
context = "Publish updated site to production"
# Secrets to inject (only these get passed to the container)
secrets = ["DEPLOY_KEY"]
# Optional: use sonnet model
model = "sonnet"
# Optional: enable MCP tools
tools = []
# Optional: 30 minute timeout
timeout_minutes = 30

View file

@@ -1,37 +0,0 @@
# vault/examples/release.toml
# Example: Release vault item schema
#
# This example demonstrates the release vault item schema for creating
# versioned releases with vault-gated approval.
#
# The release formula tags Forgejo main, pushes to mirrors, builds and
# tags the agents Docker image, and restarts agent containers.
#
# Example vault item (auto-generated by `disinto release v1.2.0`):
#
# id = "release-v120"
# formula = "release"
# context = "Release v1.2.0"
# secrets = []
# mounts = ["ssh"]
#
# Steps executed by the release formula:
# 1. preflight - Validate prerequisites (version, FORGE_TOKEN, Docker)
# 2. tag-main - Create tag on Forgejo main via API
# 3. push-mirrors - Push tag to Codeberg and GitHub mirrors
# 4. build-image - Build agents Docker image with --no-cache
# 5. tag-image - Tag image with version (disinto-agents:v1.2.0)
# 6. restart-agents - Restart agent containers with new image
# 7. commit-result - Write release result to tracking file
id = "release-v120"
formula = "release"
context = "Release v1.2.0 — includes vault redesign, .profile system, architect agent"
secrets = ["GITHUB_TOKEN", "CODEBERG_TOKEN"]
mounts = ["ssh"]
# Optional: specify a larger model for complex release logic
# model = "sonnet"
# Optional: releases may take longer due to Docker builds
# timeout_minutes = 60

View file

@@ -1,21 +0,0 @@
# vault/examples/webhook-call.toml
# Example: Call an external webhook with authentication
#
# This vault action demonstrates calling an external webhook endpoint
# with proper authentication via injected secrets.
id = "webhook-call-20260331"
formula = "run-rent-a-human"
context = "Notify Slack channel about deployment completion"
# Secrets to inject (only these get passed to the container)
secrets = ["DEPLOY_KEY"]
# Optional: use sonnet model for this action
model = "sonnet"
# Optional: enable MCP tools
tools = []
# Optional: 30 minute timeout
timeout_minutes = 30

View file

@@ -1,30 +0,0 @@
# vault/policy.toml — Blast-radius tier classification for formulas
#
# Each formula maps to a tier: "low", "medium", or "high".
# Unknown formulas default to "high" (default-deny).
#
# This file is a template. `disinto init` copies it to
# $OPS_REPO_ROOT/vault/policy.toml where operators can override tiers
# per-deployment without a disinto PR.
[tiers]
# Read-only / internal bookkeeping — no external side-effects
groom-backlog = "low"
triage = "low"
reproduce = "low"
review-pr = "low"
# Create issues, PRs, or internal plans — visible but reversible
dev = "medium"
run-planner = "medium"
run-gardener = "medium"
run-predictor = "medium"
run-supervisor = "medium"
run-architect = "medium"
upgrade-dependency = "medium"
# External-facing or irreversible operations
run-publish-site = "high"
run-rent-a-human = "high"
add-rpc-method = "high"
release = "high"

View file

@@ -1,47 +0,0 @@
#!/usr/bin/env bash
# vault/validate.sh — Validate vault action TOML files
#
# Usage: ./vault/validate.sh <path-to-toml>
#
# Validates a vault action TOML file according to the schema defined in
# vault/SCHEMA.md. Checks:
# - Required fields are present
# - Secret names are in the allowlist
# - No unknown fields are present
# - Formula exists in formulas/
set -euo pipefail
# Get script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Source vault environment
source "$SCRIPT_DIR/vault-env.sh"
# Get the TOML file to validate
TOML_FILE="${1:-}"
if [ -z "$TOML_FILE" ]; then
echo "Usage: $0 <path-to-toml>" >&2
echo "Example: $0 vault/examples/publish.toml" >&2
exit 1
fi
# Resolve relative paths
if [[ "$TOML_FILE" != /* ]]; then
TOML_FILE="$(cd "$(dirname "$TOML_FILE")" && pwd)/$(basename "$TOML_FILE")"
fi
# Run validation
if validate_vault_action "$TOML_FILE"; then
echo "VALID: $TOML_FILE"
echo " ID: $VAULT_ACTION_ID"
echo " Formula: $VAULT_ACTION_FORMULA"
echo " Context: $VAULT_ACTION_CONTEXT"
echo " Secrets: $VAULT_ACTION_SECRETS"
echo " Mounts: ${VAULT_ACTION_MOUNTS:-none}"
exit 0
else
echo "INVALID: $TOML_FILE" >&2
exit 1
fi

View file

@@ -1,190 +0,0 @@
#!/usr/bin/env bash
# vault-env.sh — Shared vault environment: loads lib/env.sh and activates
# vault-bot's Forgejo identity (#747).
# Source this instead of lib/env.sh in vault scripts.
# shellcheck source=../lib/env.sh
source "$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)/lib/env.sh"
# Use vault-bot's own Forgejo identity
FORGE_TOKEN="${FORGE_VAULT_TOKEN:-${FORGE_TOKEN}}"
export FORGE_TOKEN
# Export FORGE_ADMIN_TOKEN for direct commits (low-tier bypass)
# This token is used to commit directly to ops main without PR workflow
export FORGE_ADMIN_TOKEN="${FORGE_ADMIN_TOKEN:-}"
# Vault redesign in progress (PR-based approval workflow)
# This file is kept for shared env setup; scripts being replaced by #73
# Blast-radius classification — set VAULT_TIER if a formula is known
# Callers may set VAULT_ACTION_FORMULA before sourcing, or pass it later.
if [ -n "${VAULT_ACTION_FORMULA:-}" ]; then
VAULT_TIER=$("$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/classify.sh" \
"$VAULT_ACTION_FORMULA" "${VAULT_BLAST_RADIUS_OVERRIDE:-}")
export VAULT_TIER
fi
# =============================================================================
# VAULT ACTION VALIDATION
# =============================================================================
# Allowed secret names - must match files in secrets/<NAME>.enc
VAULT_ALLOWED_SECRETS="CLAWHUB_TOKEN GITHUB_TOKEN CODEBERG_TOKEN DEPLOY_KEY NPM_TOKEN DOCKER_HUB_TOKEN"
# Allowed mount aliases — well-known file-based credential directories
VAULT_ALLOWED_MOUNTS="ssh gpg sops"
# Validate a vault action TOML file
# Usage: validate_vault_action <path-to-toml>
# Returns: 0 if valid, 1 if invalid
# Sets: VAULT_ACTION_ID, VAULT_ACTION_FORMULA, VAULT_ACTION_CONTEXT on success
validate_vault_action() {
local toml_file="$1"
if [ -z "$toml_file" ]; then
echo "ERROR: No TOML file specified" >&2
return 1
fi
if [ ! -f "$toml_file" ]; then
echo "ERROR: File not found: $toml_file" >&2
return 1
fi
log "Validating vault action: $toml_file"
# Get script directory for relative path resolution
# FACTORY_ROOT is set by lib/env.sh which is sourced above
local formulas_dir="${FACTORY_ROOT:-$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)}/formulas"
# Extract TOML values using grep/sed (basic TOML parsing)
local toml_content
toml_content=$(cat "$toml_file")
# Extract string values (id, formula, context)
local id formula context
id=$(echo "$toml_content" | grep -E '^id\s*=' | sed -E 's/^id\s*=\s*"(.*)"/\1/' | tr -d '\r')
formula=$(echo "$toml_content" | grep -E '^formula\s*=' | sed -E 's/^formula\s*=\s*"(.*)"/\1/' | tr -d '\r')
context=$(echo "$toml_content" | grep -E '^context\s*=' | sed -E 's/^context\s*=\s*"(.*)"/\1/' | tr -d '\r')
# Extract secrets array
local secrets_line secrets_array
secrets_line=$(echo "$toml_content" | grep -E '^secrets\s*=' | tr -d '\r')
secrets_array=$(echo "$secrets_line" | sed -E 's/^secrets\s*=\s*\[(.*)\]/\1/' | tr -d '[]"' | tr ',' ' ' | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
# Extract mounts array (optional)
local mounts_line mounts_array
mounts_line=$(echo "$toml_content" | grep -E '^mounts\s*=' | tr -d '\r') || true
mounts_array=$(echo "$mounts_line" | sed -E 's/^mounts\s*=\s*\[(.*)\]/\1/' | tr -d '[]"' | tr ',' ' ' | sed 's/^[[:space:]]*//;s/[[:space:]]*$//') || true
# Check for unknown fields (any top-level key not in allowed list)
local unknown_fields
unknown_fields=$(echo "$toml_content" | grep -E '^[a-zA-Z_][a-zA-Z0-9_]*\s*=' | sed -E 's/^([a-zA-Z_][a-zA-Z0-9_]*)\s*=.*/\1/' | sort -u | while read -r field; do
case "$field" in
id|formula|context|secrets|mounts|model|tools|timeout_minutes|dispatch_mode|blast_radius) ;;
*) echo "$field" ;;
esac
done)
if [ -n "$unknown_fields" ]; then
echo "ERROR: Unknown fields in TOML: $(echo "$unknown_fields" | tr '\n' ', ' | sed 's/,$//')" >&2
return 1
fi
# Validate required fields
if [ -z "$id" ]; then
echo "ERROR: Missing required field: id" >&2
return 1
fi
if [ -z "$formula" ]; then
echo "ERROR: Missing required field: formula" >&2
return 1
fi
if [ -z "$context" ]; then
echo "ERROR: Missing required field: context" >&2
return 1
fi
# Validate formula exists in formulas/ (.toml for Claude reasoning, .sh for mechanical)
if [ ! -f "$formulas_dir/${formula}.toml" ] && [ ! -f "$formulas_dir/${formula}.sh" ]; then
echo "ERROR: Formula not found: $formula (checked .toml and .sh)" >&2
return 1
fi
# Validate secrets field exists and is not empty
if [ -z "$secrets_line" ]; then
echo "ERROR: Missing required field: secrets" >&2
return 1
fi
# Validate each secret is in the allowlist
for secret in $secrets_array; do
secret=$(echo "$secret" | tr -d '"' | xargs) # trim whitespace and quotes
if [ -n "$secret" ]; then
if ! echo " $VAULT_ALLOWED_SECRETS " | grep -q " $secret "; then
echo "ERROR: Unknown secret (not in allowlist): $secret" >&2
return 1
fi
fi
done
# Validate each mount alias is in the allowlist
if [ -n "$mounts_array" ]; then
for mount in $mounts_array; do
mount=$(echo "$mount" | tr -d '"' | xargs) # trim whitespace and quotes
if [ -n "$mount" ]; then
if ! echo " $VAULT_ALLOWED_MOUNTS " | grep -q " $mount "; then
echo "ERROR: Unknown mount alias (not in allowlist): $mount" >&2
return 1
fi
fi
done
fi
# Validate optional fields if present
# model
if echo "$toml_content" | grep -qE '^model\s*='; then
local model_value
model_value=$(echo "$toml_content" | grep -E '^model\s*=' | sed -E 's/^model\s*=\s*"(.*)"/\1/' | tr -d '\r')
if [ -z "$model_value" ]; then
echo "ERROR: 'model' must be a non-empty string" >&2
return 1
fi
fi
# tools
if echo "$toml_content" | grep -qE '^tools\s*='; then
local tools_line
tools_line=$(echo "$toml_content" | grep -E '^tools\s*=' | tr -d '\r')
if ! echo "$tools_line" | grep -q '\['; then
echo "ERROR: 'tools' must be an array" >&2
return 1
fi
fi
# timeout_minutes
if echo "$toml_content" | grep -qE '^timeout_minutes\s*='; then
local timeout_value
timeout_value=$(echo "$toml_content" | grep -E '^timeout_minutes\s*=' | sed -E 's/^timeout_minutes\s*=\s*([0-9]+)/\1/' | tr -d '\r')
if ! [[ "$timeout_value" =~ ^[0-9]+$ ]] || [ "$timeout_value" -le 0 ]; then
echo "ERROR: 'timeout_minutes' must be a positive integer" >&2
return 1
fi
fi
# Export validated values (for use by caller script)
export VAULT_ACTION_ID="$id"
export VAULT_ACTION_FORMULA="$formula"
export VAULT_ACTION_CONTEXT="$context"
export VAULT_ACTION_SECRETS="$secrets_array"
export VAULT_ACTION_MOUNTS="${mounts_array:-}"
log "VAULT_ACTION_ID=$VAULT_ACTION_ID"
log "VAULT_ACTION_FORMULA=$VAULT_ACTION_FORMULA"
log "VAULT_ACTION_SECRETS=$VAULT_ACTION_SECRETS"
log "VAULT_ACTION_MOUNTS=${VAULT_ACTION_MOUNTS:-none}"
return 0
}

34
action/AGENTS.md Normal file
View file

@@ -0,0 +1,34 @@
<!-- last-reviewed: 9b91c6a5bdd83513ca262dd468ea14db675971c1 -->
# Action Agent
**Role**: Execute operational tasks described by action formulas — run scripts,
call APIs, send messages, collect human approval. Shares the same phase handler
as the dev-agent: if an action produces code changes, the orchestrator creates a
PR and drives the CI/review loop; otherwise Claude closes the issue directly.
**Trigger**: `action-poll.sh` runs every 10 min via cron. Sources `lib/guard.sh`
and calls `check_active action` first — skips if `$FACTORY_ROOT/state/.action-active`
is absent. Then scans for open issues labeled `action` that have no active tmux
session, and spawns `action-agent.sh <issue-number>`.
**Key files**:
- `action/action-poll.sh` — Cron scheduler: finds open action issues with no active tmux session, spawns action-agent.sh
- `action/action-agent.sh` — Orchestrator: fetches issue body + prior comments, **checks all dependencies via `lib/parse-deps.sh` before spawning** (skips silently if any dep is still open), creates tmux session (`action-{project}-{issue_num}`) with interactive `claude`, injects formula prompt with phase protocol, enters `monitor_phase_loop` (shared via `dev/phase-handler.sh`) for CI/review lifecycle or direct completion
**Session lifecycle**:
1. `action-poll.sh` finds open `action` issues with no active tmux session.
2. Spawns `action-agent.sh <issue_num>`.
3. Agent creates tmux session `action-{project}-{issue_num}`, injects prompt (formula + prior comments + phase protocol).
4. Agent enters `monitor_phase_loop` (shared with dev-agent via `dev/phase-handler.sh`).
5. **Path A (git output):** Claude pushes branch → `PHASE:awaiting_ci` → handler creates PR, polls CI → injects failures → Claude fixes → push → re-poll → CI passes → `PHASE:awaiting_review` → handler polls reviews → injects REQUEST_CHANGES → Claude fixes → approved → merge → cleanup.
6. **Path B (no git output):** Claude posts results as comment, closes issue → `PHASE:done` → handler cleans up (kill session, docker compose down, remove temp files).
7. For human input: Claude writes `PHASE:escalate`; human responds via vault/forge.
**Crash recovery**: on `PHASE:crashed` or non-zero exit, the worktree is **preserved** (not destroyed) for debugging. Location logged. Supervisor housekeeping removes stale crashed worktrees older than 24h.
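The 24h sweep can be sketched as a small helper. Function name, the `/tmp` parent, and the `action-*` glob are assumptions based on the worktree paths above, not the real supervisor code:

```shell
# Remove crashed action worktrees whose mtime is older than 24h (1440 min).
sweep_crashed_worktrees() {
  local parent="${1:-/tmp}"
  find "$parent" -mindepth 1 -maxdepth 1 -type d -name 'action-*' \
    -mmin +1440 -exec rm -rf {} +
}
```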
**Environment variables consumed**:
- `FORGE_TOKEN`, `FORGE_ACTION_TOKEN` (falls back to FORGE_TOKEN), `FORGE_REPO`, `FORGE_API`, `FORGE_URL`, `PROJECT_NAME`, `FORGE_WEB`
- `ACTION_IDLE_TIMEOUT` — Max seconds before killing idle session (default 14400 = 4h)
- `ACTION_MAX_LIFETIME` — Max total session wall-clock seconds (default 28800 = 8h); caps session independently of idle timeout
**FORGE_REMOTE**: `action-agent.sh` auto-detects the git remote for `FORGE_URL` (same logic as dev-agent). Exported as `FORGE_REMOTE`, used for worktree creation and push instructions injected into the Claude prompt.
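The auto-detection could look roughly like this: find the remote whose push URL contains the forge host. A hypothetical sketch, since the actual dev-agent logic may differ:

```shell
# Print the name of the git remote whose push URL matches the given host.
detect_forge_remote() {
  local host="$1"
  git remote -v | awk -v h="$host" '$2 ~ h && $3 == "(push)" { print $1; exit }'
}
```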

363
action/action-agent.sh Executable file
View file

@@ -0,0 +1,363 @@
#!/usr/bin/env bash
# action-agent.sh — Autonomous action agent: tmux + Claude + action formula
#
# Usage: ./action-agent.sh <issue-number> [project.toml]
#
# Lifecycle:
# 1. Fetch issue body (action formula) + existing comments
# 2. Create isolated git worktree: /tmp/action-{issue}-{timestamp}
# 3. Create tmux session: action-{project}-{issue_num} with interactive claude in worktree
# 4. Inject initial prompt: formula + comments + phase protocol instructions
# 5. Monitor phase file via monitor_phase_loop (shared with dev-agent)
# Path A (git output): Claude pushes → handler creates PR → CI poll → review
# injection → merge → cleanup (same loop as dev-agent via phase-handler.sh)
# Path B (no git output): Claude posts results → PHASE:done → cleanup
# 6. For human input: Claude writes PHASE:escalate; human responds via vault/forge
# 7. Cleanup on terminal phase: kill children, destroy worktree, remove temp files
#
# Key principle: The runtime creates and destroys. The formula preserves.
# The formula must push results before signaling done — the worktree is nuked after.
#
# Session: action-{project}-{issue_num} (tmux)
# Log: action/action-poll-{project}.log
set -euo pipefail
ISSUE="${1:?Usage: action-agent.sh <issue-number> [project.toml]}"
export PROJECT_TOML="${2:-${PROJECT_TOML:-}}"
source "$(dirname "$0")/../lib/env.sh"
# Use action-bot's own Forgejo identity (#747)
FORGE_TOKEN="${FORGE_ACTION_TOKEN:-${FORGE_TOKEN}}"
source "$(dirname "$0")/../lib/ci-helpers.sh"
source "$(dirname "$0")/../lib/agent-session.sh"
source "$(dirname "$0")/../lib/formula-session.sh"
# shellcheck source=../dev/phase-handler.sh
source "$(dirname "$0")/../dev/phase-handler.sh"
SESSION_NAME="action-${PROJECT_NAME}-${ISSUE}"
LOCKFILE="/tmp/action-agent-${ISSUE}.lock"
LOGFILE="${FACTORY_ROOT}/action/action-poll-${PROJECT_NAME:-default}.log"
IDLE_TIMEOUT="${ACTION_IDLE_TIMEOUT:-14400}" # 4h default
MAX_LIFETIME="${ACTION_MAX_LIFETIME:-28800}" # 8h default wall-clock cap
SESSION_START_EPOCH=$(date +%s)
# --- Phase handler globals (agent-specific; defaults in phase-handler.sh) ---
# shellcheck disable=SC2034 # used by phase-handler.sh
API="${FORGE_API}"
BRANCH="action/issue-${ISSUE}"
# shellcheck disable=SC2034 # used by phase-handler.sh
WORKTREE="/tmp/action-${ISSUE}-$(date +%s)"
PHASE_FILE="/tmp/action-session-${PROJECT_NAME:-default}-${ISSUE}.phase"
IMPL_SUMMARY_FILE="/tmp/action-impl-summary-${PROJECT_NAME:-default}-${ISSUE}.txt"
PREFLIGHT_RESULT="/tmp/action-preflight-${ISSUE}.json"
SCRATCH_FILE="/tmp/action-${ISSUE}-scratch.md"
log() {
printf '[%s] action#%s %s\n' "$(date -u '+%Y-%m-%d %H:%M:%S UTC')" "$ISSUE" "$*" >> "$LOGFILE"
}
status() {
log "$*"
}
# --- Action-specific helpers for phase-handler.sh ---
cleanup_worktree() {
cd "${PROJECT_REPO_ROOT}" 2>/dev/null || true
git worktree remove "$WORKTREE" --force 2>/dev/null || true
rm -rf "$WORKTREE"
# Clear Claude Code session history for this worktree to prevent hallucinated "already done"
local claude_project_dir
claude_project_dir="$HOME/.claude/projects/$(echo "$WORKTREE" | sed 's|/|-|g; s|^-||')"
rm -rf "$claude_project_dir" 2>/dev/null || true
log "destroyed worktree: ${WORKTREE}"
}
cleanup_labels() { :; } # action agent doesn't use in-progress labels
# --- Concurrency lock (per issue) ---
if [ -f "$LOCKFILE" ]; then
LOCK_PID=$(cat "$LOCKFILE" 2>/dev/null || echo "")
if [ -n "$LOCK_PID" ] && kill -0 "$LOCK_PID" 2>/dev/null; then
log "SKIP: action-agent already running for #${ISSUE} (PID ${LOCK_PID})"
exit 0
fi
rm -f "$LOCKFILE"
fi
echo $$ > "$LOCKFILE"
cleanup() {
local exit_code=$?
# Kill lifetime watchdog if running
if [ -n "${LIFETIME_WATCHDOG_PID:-}" ] && kill -0 "$LIFETIME_WATCHDOG_PID" 2>/dev/null; then
kill "$LIFETIME_WATCHDOG_PID" 2>/dev/null || true
wait "$LIFETIME_WATCHDOG_PID" 2>/dev/null || true
fi
rm -f "$LOCKFILE"
agent_kill_session "$SESSION_NAME"
# Kill any remaining child processes spawned during the run
local children
children=$(jobs -p 2>/dev/null) || true
if [ -n "$children" ]; then
# shellcheck disable=SC2086 # intentional word splitting
kill $children 2>/dev/null || true
# shellcheck disable=SC2086
wait $children 2>/dev/null || true
fi
# Best-effort docker cleanup for containers started during this action
(cd "${WORKTREE}" 2>/dev/null && docker compose down 2>/dev/null) || true
# Preserve worktree on crash for debugging; clean up on success
local final_phase=""
[ -f "$PHASE_FILE" ] && final_phase=$(head -1 "$PHASE_FILE" 2>/dev/null || true)
if [ "${final_phase:-}" = "PHASE:crashed" ] || [ "${_MONITOR_LOOP_EXIT:-}" = "crashed" ] || [ "$exit_code" -ne 0 ]; then
log "PRESERVED crashed worktree for debugging: $WORKTREE"
else
cleanup_worktree
fi
rm -f "$PHASE_FILE" "${PHASE_FILE%.phase}.context" "$IMPL_SUMMARY_FILE" "$PREFLIGHT_RESULT"
}
trap cleanup EXIT
# --- Memory guard ---
AVAIL_MB=$(awk '/MemAvailable/ {printf "%d", $2/1024}' /proc/meminfo)
if [ "$AVAIL_MB" -lt 2000 ]; then
log "SKIP: only ${AVAIL_MB}MB available (need 2000MB)"
exit 0
fi
# --- Fetch issue ---
log "fetching issue #${ISSUE}"
ISSUE_JSON=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues/${ISSUE}") || true
if [ -z "$ISSUE_JSON" ] || ! printf '%s' "$ISSUE_JSON" | jq -e '.id' >/dev/null 2>&1; then
log "ERROR: failed to fetch issue #${ISSUE}"
exit 1
fi
ISSUE_TITLE=$(printf '%s' "$ISSUE_JSON" | jq -r '.title')
ISSUE_BODY=$(printf '%s' "$ISSUE_JSON" | jq -r '.body // ""')
ISSUE_STATE=$(printf '%s' "$ISSUE_JSON" | jq -r '.state')
if [ "$ISSUE_STATE" != "open" ]; then
log "SKIP: issue #${ISSUE} is ${ISSUE_STATE}"
exit 0
fi
log "Issue: ${ISSUE_TITLE}"
# --- Dependency check (skip before spawning Claude) ---
DEPS=$(printf '%s' "$ISSUE_BODY" | bash "${FACTORY_ROOT}/lib/parse-deps.sh")
if [ -n "$DEPS" ]; then
ALL_MET=true
while IFS= read -r dep; do
[ -z "$dep" ] && continue
DEP_STATE=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues/${dep}" | jq -r '.state // "open"') || DEP_STATE="open"
if [ "$DEP_STATE" != "closed" ]; then
log "SKIP: dependency #${dep} still open — not spawning session"
ALL_MET=false
break
fi
done <<< "$DEPS"
if [ "$ALL_MET" = false ]; then
rm -f "$LOCKFILE"
exit 0
fi
log "all dependencies met"
fi
# --- Extract model from YAML front matter (if present) ---
YAML_MODEL=$(printf '%s' "$ISSUE_BODY" | \
sed -n '/^---$/,/^---$/p' | grep '^model:' | awk '{print $2}' | tr -d '"' || true)
if [ -n "$YAML_MODEL" ]; then
export CLAUDE_MODEL="$YAML_MODEL"
log "model from front matter: ${YAML_MODEL}"
fi
# --- Resolve bot username(s) for comment filtering ---
_bot_login=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API%%/repos*}/user" | jq -r '.login // empty' 2>/dev/null || true)
# Build list: token owner + any extra names from FORGE_BOT_USERNAMES (comma-separated)
_bot_logins="${_bot_login}"
if [ -n "${FORGE_BOT_USERNAMES:-}" ]; then
_bot_logins="${_bot_logins:+${_bot_logins},}${FORGE_BOT_USERNAMES}"
fi
# --- Fetch existing comments (resume context, excluding bot comments) ---
COMMENTS_JSON=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues/${ISSUE}/comments?limit=50") || true
PRIOR_COMMENTS=""
if [ -n "$COMMENTS_JSON" ] && [ "$COMMENTS_JSON" != "null" ] && [ "$COMMENTS_JSON" != "[]" ]; then
PRIOR_COMMENTS=$(printf '%s' "$COMMENTS_JSON" | \
jq -r --arg bots "$_bot_logins" \
'($bots | split(",") | map(select(. != ""))) as $bl |
.[] | select(.user.login as $u | $bl | index($u) | not) |
"[\(.user.login) at \(.created_at[:19])]\n\(.body)\n---"' 2>/dev/null || true)
fi
# --- Create isolated worktree ---
log "creating worktree: ${WORKTREE}"
cd "${PROJECT_REPO_ROOT}"
# Determine which git remote corresponds to FORGE_URL
_forge_host=$(echo "$FORGE_URL" | sed 's|https\?://||; s|/.*||')
FORGE_REMOTE=$(git remote -v | awk -v host="$_forge_host" '$2 ~ host && /\(push\)/ {print $1; exit}')
FORGE_REMOTE="${FORGE_REMOTE:-origin}"
export FORGE_REMOTE
git fetch "${FORGE_REMOTE}" "${PRIMARY_BRANCH}" 2>/dev/null || true
if ! git worktree add "$WORKTREE" "${FORGE_REMOTE}/${PRIMARY_BRANCH}" 2>&1; then
log "ERROR: worktree creation failed"
exit 1
fi
log "worktree ready: ${WORKTREE}"
# --- Read scratch file (compaction survival) ---
SCRATCH_CONTEXT=$(read_scratch_context "$SCRATCH_FILE")
SCRATCH_INSTRUCTION=$(build_scratch_instruction "$SCRATCH_FILE")
# --- Build initial prompt ---
PRIOR_SECTION=""
if [ -n "$PRIOR_COMMENTS" ]; then
PRIOR_SECTION="## Prior comments (resume context)
${PRIOR_COMMENTS}
"
fi
# Build phase protocol from shared function (Path B covered in Instructions section above)
PHASE_PROTOCOL_INSTRUCTIONS="$(build_phase_protocol_prompt "$PHASE_FILE" "$IMPL_SUMMARY_FILE" "$BRANCH")"
# Write phase protocol to context file for compaction survival
write_compact_context "$PHASE_FILE" "$PHASE_PROTOCOL_INSTRUCTIONS"
INITIAL_PROMPT="You are an action agent. Your job is to execute the action formula
in the issue below.
## Issue #${ISSUE}: ${ISSUE_TITLE}
${ISSUE_BODY}
${SCRATCH_CONTEXT}
${PRIOR_SECTION}## Instructions
1. Read the action formula steps in the issue body carefully.
2. Execute each step in order using your Bash tool and any other tools available.
3. Post progress as comments on issue #${ISSUE} after significant steps:
curl -sf -X POST \\
-H \"Authorization: token \${FORGE_TOKEN}\" \\
-H 'Content-Type: application/json' \\
\"${FORGE_API}/issues/${ISSUE}/comments\" \\
-d \"{\\\"body\\\": \\\"your comment here\\\"}\"
4. If a step requires human input or approval, write PHASE:escalate with a reason.
A human will review and respond via the forge.
### Path A: If this action produces code changes (e.g. config updates, baselines):
- You are already in an isolated worktree at: ${WORKTREE}
- Create and switch to branch: git checkout -b ${BRANCH}
- Make your changes, commit, and push: git push ${FORGE_REMOTE} ${BRANCH}
- **IMPORTANT:** The worktree is destroyed after completion. Push all
results before signaling done — unpushed work will be lost.
- Follow the phase protocol below — the orchestrator handles PR creation,
CI monitoring, and review injection.
### Path B: If this action produces no code changes (investigation, report):
- Post results as a comment on issue #${ISSUE}.
- **IMPORTANT:** The worktree is destroyed after completion. Copy any
files you need to persistent paths before signaling done.
- Close the issue:
curl -sf -X PATCH \\
-H \"Authorization: token \${FORGE_TOKEN}\" \\
-H 'Content-Type: application/json' \\
\"${FORGE_API}/issues/${ISSUE}\" \\
-d '{\"state\": \"closed\"}'
- Signal completion: echo \"PHASE:done\" > \"${PHASE_FILE}\"
5. Environment variables available in your bash sessions:
FORGE_TOKEN, FORGE_API, FORGE_REPO, FORGE_WEB, PROJECT_NAME
(all sourced from ${FACTORY_ROOT}/.env)
### CRITICAL: Never embed secrets in issue bodies, comments, or PR descriptions
- NEVER put API keys, tokens, passwords, or private keys in issue text or comments.
- Always reference secrets via env var names (e.g. \\\$BASE_RPC_URL, \\\${FORGE_TOKEN}).
- If a formula step needs a secret, read it from .env or the environment at runtime.
- Before posting any comment, verify it contains no credentials, hex keys > 32 chars,
or URLs with embedded API keys.
If the prior comments above show work already completed, resume from where it
left off.
${SCRATCH_INSTRUCTION}
${PHASE_PROTOCOL_INSTRUCTIONS}"
# --- Create tmux session ---
log "creating tmux session: ${SESSION_NAME}"
if ! create_agent_session "${SESSION_NAME}" "${WORKTREE}" "${PHASE_FILE}"; then
log "ERROR: failed to create tmux session"
exit 1
fi
# --- Inject initial prompt ---
inject_formula "${SESSION_NAME}" "${INITIAL_PROMPT}"
log "initial prompt injected into session"
# --- Wall-clock lifetime watchdog (background) ---
# Caps total session time independently of idle timeout. When the cap is
# hit the watchdog kills the tmux session, posts a summary comment on the
# issue, and writes PHASE:failed so monitor_phase_loop exits.
_lifetime_watchdog() {
local remaining=$(( MAX_LIFETIME - ($(date +%s) - SESSION_START_EPOCH) ))
[ "$remaining" -le 0 ] && remaining=1
sleep "$remaining"
local hours=$(( MAX_LIFETIME / 3600 ))
log "MAX_LIFETIME (${hours}h) reached — killing session"
agent_kill_session "$SESSION_NAME"
# Post summary comment on issue
local body="Action session killed: wall-clock lifetime cap (${hours}h) reached."
curl -sf -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H 'Content-Type: application/json' \
"${FORGE_API}/issues/${ISSUE}/comments" \
-d "{\"body\": \"${body}\"}" >/dev/null 2>&1 || true
printf 'PHASE:failed\nReason: max_lifetime (%sh) reached\n' "$hours" > "$PHASE_FILE"
# Touch phase-changed marker so monitor_phase_loop picks up immediately
touch "/tmp/phase-changed-${SESSION_NAME}.marker"
}
_lifetime_watchdog &
LIFETIME_WATCHDOG_PID=$!
# --- Monitor phase loop (shared with dev-agent) ---
status "monitoring phase: ${PHASE_FILE} (action agent)"
monitor_phase_loop "$PHASE_FILE" "$IDLE_TIMEOUT" _on_phase_change "$SESSION_NAME"
# Handle exit reason from monitor_phase_loop
case "${_MONITOR_LOOP_EXIT:-}" in
idle_timeout)
# Post diagnostic comment + label blocked
post_blocked_diagnostic "idle_timeout"
rm -f "$PHASE_FILE" "${PHASE_FILE%.phase}.context" "$IMPL_SUMMARY_FILE" "$SCRATCH_FILE"
;;
idle_prompt)
# Notification + blocked label already handled by _on_phase_change(PHASE:failed) callback
rm -f "$PHASE_FILE" "${PHASE_FILE%.phase}.context" "$IMPL_SUMMARY_FILE" "$SCRATCH_FILE"
;;
PHASE:failed)
# Check if this was a max_lifetime kill (phase file contains the reason)
if grep -q 'max_lifetime' "$PHASE_FILE" 2>/dev/null; then
post_blocked_diagnostic "max_lifetime"
fi
rm -f "$PHASE_FILE" "${PHASE_FILE%.phase}.context" "$IMPL_SUMMARY_FILE" "$SCRATCH_FILE"
;;
done)
# Belt-and-suspenders: callback handles primary cleanup,
# but ensure sentinel files are removed if callback was interrupted
rm -f "$PHASE_FILE" "${PHASE_FILE%.phase}.context" "$IMPL_SUMMARY_FILE" "$SCRATCH_FILE"
;;
esac
log "action-agent finished for issue #${ISSUE}"

action/action-poll.sh Executable file

@ -0,0 +1,75 @@
#!/usr/bin/env bash
# action-poll.sh — Cron scheduler: find open 'action' issues, spawn action-agent
#
# An issue is ready for action if:
# - It is open and labeled 'action'
# - No tmux session named action-{project}-{issue_num} is already active
#
# Usage:
# cron every 10min
# action-poll.sh [projects/foo.toml] # optional project config
set -euo pipefail
export PROJECT_TOML="${1:-}"
source "$(dirname "$0")/../lib/env.sh"
# Use action-bot's own Forgejo identity (#747)
FORGE_TOKEN="${FORGE_ACTION_TOKEN:-${FORGE_TOKEN}}"
# shellcheck source=../lib/guard.sh
source "$(dirname "$0")/../lib/guard.sh"
check_active action
LOGFILE="${FACTORY_ROOT}/action/action-poll-${PROJECT_NAME:-default}.log"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
log() {
printf '[%s] poll: %s\n' "$(date -u '+%Y-%m-%d %H:%M:%S UTC')" "$*" >> "$LOGFILE"
}
# --- Memory guard ---
memory_guard 2000
# --- Find open 'action' issues ---
log "scanning for open action issues"
ACTION_ISSUES=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues?state=open&labels=action&limit=50&type=issues") || true
if [ -z "$ACTION_ISSUES" ] || [ "$ACTION_ISSUES" = "null" ]; then
log "no action issues found"
exit 0
fi
COUNT=$(printf '%s' "$ACTION_ISSUES" | jq 'length')
if [ "$COUNT" -eq 0 ]; then
log "no action issues found"
exit 0
fi
log "found ${COUNT} open action issue(s)"
# Spawn action-agent for each issue that has no active tmux session.
# Only one agent is spawned per poll to avoid memory pressure; the next
# poll picks up remaining issues.
for i in $(seq 0 $((COUNT - 1))); do
ISSUE_NUM=$(printf '%s' "$ACTION_ISSUES" | jq -r ".[$i].number")
SESSION="action-${PROJECT_NAME}-${ISSUE_NUM}"
if tmux has-session -t "$SESSION" 2>/dev/null; then
log "issue #${ISSUE_NUM}: session ${SESSION} already active, skipping"
continue
fi
LOCKFILE="/tmp/action-agent-${ISSUE_NUM}.lock"
if [ -f "$LOCKFILE" ]; then
LOCK_PID=$(cat "$LOCKFILE" 2>/dev/null || echo "")
if [ -n "$LOCK_PID" ] && kill -0 "$LOCK_PID" 2>/dev/null; then
log "issue #${ISSUE_NUM}: agent starting (PID ${LOCK_PID}), skipping"
continue
fi
fi
log "spawning action-agent for issue #${ISSUE_NUM}"
nohup "${SCRIPT_DIR}/action-agent.sh" "$ISSUE_NUM" "$PROJECT_TOML" >> "$LOGFILE" 2>&1 &
log "started action-agent PID $! for issue #${ISSUE_NUM}"
break
done


@ -1,137 +0,0 @@
<!-- last-reviewed: be463c5b439aec1ef0d4acfafc47e94896f5dc57 -->
# Architect — Agent Instructions
## What this agent is
The architect is a strategic decomposition agent that breaks down vision issues
into development sprints. It proposes sprints via PRs on the ops repo and
converses with humans through PR comments.
## Role
- **Input**: Vision issues from VISION.md, prerequisite tree from ops repo
- **Output**: Sprint proposals as PRs on the ops repo (with embedded `## Sub-issues` blocks)
- **Mechanism**: Bash-driven orchestration in `architect-run.sh`, pitching formula via `formulas/run-architect.toml`
- **Identity**: `architect-bot` on Forgejo (READ-ONLY on project repo, write on ops repo only — #764)
## Responsibilities
1. **Strategic decomposition**: Break down large vision items into coherent
sprints that can be executed by the dev agent
2. **Design fork identification**: When multiple implementation approaches exist,
identify the forks and file sub-issues for each path
3. **Sprint PR creation**: Propose sprints as PRs on the ops repo with clear
acceptance criteria and dependencies
4. **Human conversation**: Respond to PR comments, refine sprint proposals based
on human feedback
5. **Sub-issue definition**: Define concrete sub-issues in the `## Sub-issues`
block of the sprint spec. Filing is handled by `filer-bot` after sprint PR
merge (#764)
## Formula
Architect pitching is driven by `formulas/run-architect.toml`. This formula defines
the steps for:
- Research: analyzing vision items and prerequisite tree
- Pitch: creating structured sprint PRs with embedded `## Sub-issues` blocks
- Design Q&A: refining the sprint via PR comments after human ACCEPT
## Bash-driven orchestration
Bash in `architect-run.sh` handles state detection and orchestration:
- **Deterministic state detection**: Bash reads the Forgejo reviews API to detect
ACCEPT/REJECT decisions, checking formal APPROVED reviews in addition to PR comments rather than comments alone (#718)
- **Human guidance injection**: Review body text from ACCEPT reviews is injected
directly into the research prompt as context
- **Response processing**: When ACCEPT/REJECT responses are detected, bash invokes
the agent with appropriate context (session resumed for questions phase)
- **Pitch capture**: `pitch_output` is written to a temp file instead of captured via `$()` subshell, because `agent_run` writes to side-channels (`SID_FILE`, `LOGFILE`) that subshell capture would suppress (#716)
- **PR URL construction**: existing-PR check uses `${FORGE_API}/pulls` directly (not `${FORGE_API}/repos/…`) — the base URL already includes the repos segment (#717)
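The review-state check can be sketched roughly as follows. This is a minimal illustration, not the real `architect-run.sh` code: the function name is hypothetical, and mapping REJECT to a formal `REQUEST_CHANGES` review is an assumption (the real flow may also parse comment text).

```shell
# detect_decision REVIEWS_JSON -> prints "accept", "reject", or "pending"
# REVIEWS_JSON is the body returned by the Forgejo pull-reviews endpoint.
detect_decision() {
  local reviews_json="$1"
  # jq -e exits 0 only when a non-null, non-false value is selected
  if printf '%s' "$reviews_json" | jq -e \
      '.[] | select(.state == "APPROVED")' >/dev/null 2>&1; then
    echo accept
  elif printf '%s' "$reviews_json" | jq -e \
      '.[] | select(.state == "REQUEST_CHANGES")' >/dev/null 2>&1; then
    echo reject
  else
    echo pending
  fi
}
```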
### State transitions
```
New vision issue → pitch PR (model generates pitch, bash creates PR)
APPROVED review → start design questions (model posts Q1:, adds Design forks section)
Answers received → continue Q&A (model processes answers, posts follow-ups)
All forks resolved → finalize ## Sub-issues section in sprint spec
Sprint PR merged → filer-bot files sub-issues on project repo (#764)
REJECT review → close PR + journal (model processes rejection, bash merges PR)
```
### Vision issue lifecycle
Vision issues decompose into sprint sub-issues. Sub-issues are defined in the
`## Sub-issues` block of the sprint spec (between `<!-- filer:begin -->` and
`<!-- filer:end -->` markers) and filed by `filer-bot` after the sprint PR merges
on the ops repo (#764).
Each filer-created sub-issue carries a `<!-- decomposed-from: #<vision>, sprint: <slug>, id: <id> -->`
marker in its body for idempotency and traceability.
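A minimal sketch of the idempotency check this marker enables (function name and fixture are hypothetical; the real filing logic lives in `lib/sprint-filer.sh`):

```shell
# marker_exists ISSUES_JSON VISION SPRINT ID -> exit 0 if an issue body
# already carries the decomposed-from marker, nonzero otherwise.
marker_exists() {
  local issues_json="$1" vision="$2" sprint="$3" id="$4"
  local marker="<!-- decomposed-from: #${vision}, sprint: ${sprint}, id: ${id} -->"
  # jq -e maps a false result to a nonzero exit status
  printf '%s' "$issues_json" | jq -e --arg m "$marker" \
    '[.[] | select(.body // "" | contains($m))] | length > 0' >/dev/null
}
```

A filer run would call this before each planned sub-issue and skip filing when it returns success.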
The filer-bot (via `lib/sprint-filer.sh`) handles vision lifecycle:
1. After filing sub-issues, adds `in-progress` label to the vision issue
2. On each run, checks if all sub-issues for a vision are closed
3. If all closed, posts a summary comment and closes the vision issue
The architect no longer writes to the project repo — it is read-only (#764).
All project-repo writes (issue filing, label management, vision closure) are
handled by filer-bot with its narrowly-scoped `FORGE_FILER_TOKEN`.
### Session management
The agent maintains a global session file at `/tmp/architect-session-{project}.sid`.
When processing responses, bash checks if the PR is in the questions phase and
resumes the session using `--resume session_id` to preserve codebase context.
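The resume decision can be factored roughly like this (a sketch with a hypothetical helper; the real session handling lives in `agent-sdk.sh`):

```shell
# build_resume_args SID_FILE -> prints "--resume <sid>" when a session id
# has been saved, or nothing for a fresh session.
build_resume_args() {
  local sid_file="$1"
  if [ -s "$sid_file" ]; then
    printf -- '--resume %s' "$(cat "$sid_file")"
  fi
}
# Usage (illustrative): claude -p $(build_resume_args "$SID_FILE") "$PROMPT"
```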
## Execution
Run via `architect/architect-run.sh`, which:
- Acquires a poll-loop lock (via `acquire_lock`) and checks available memory
- Cleans up per-issue scratch files from previous runs (`/tmp/architect-{project}-scratch-*.md`)
- Sources shared libraries (env.sh, formula-session.sh)
- Exports `FORGE_TOKEN_OVERRIDE="${FORGE_ARCHITECT_TOKEN}"` BEFORE sourcing env.sh, ensuring architect-bot identity survives re-sourcing (#762)
- Uses FORGE_ARCHITECT_TOKEN for authentication
- Processes existing architect PRs via bash-driven design phase
- Loads the formula and builds context from VISION.md, AGENTS.md, and ops repo
- Bash orchestrates state management:
- Fetches open vision issues, open architect PRs, and merged sprint PRs from Forgejo API
- Filters out visions already with open PRs, in-progress label, sub-issues, or merged sprint PRs
- Selects up to `pitch_budget` (3 minus the number of open architect PRs) remaining vision issues
- For each selected issue, invokes stateless `claude -p` with issue body + context
- Creates PRs directly from pitch content (no scratch files)
- Agent is invoked for stateless pitch generation and response processing (ACCEPT/REJECT handling)
- NOTE: architect-bot is read-only on the project repo (#764) — sub-issue filing
and in-progress label management are handled by filer-bot after sprint PR merge
**Multi-sprint pitching**: The architect pitches up to 3 sprints per run. Bash handles all state management:
- Fetches Forgejo API data (vision issues, open PRs, merged PRs)
- Filters and deduplicates (no model-level dedup or journal-based memory)
- For each selected vision issue, bash invokes stateless `claude -p` to generate pitch markdown
- Bash creates the PR with pitch content and posts ACCEPT/REJECT footer comment
- Branch names use issue number (architect/sprint-vision-{issue_number}) to avoid collisions
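The pitch-budget arithmetic described above amounts to the following (a sketch, clamped at zero so an already-full PR queue yields no new pitches):

```shell
# pitch_budget OPEN_ARCHITECT_PRS -> number of new sprint pitches this run
pitch_budget() {
  local open_prs="$1"
  local budget=$(( 3 - open_prs ))
  if [ "$budget" -lt 0 ]; then
    budget=0
  fi
  echo "$budget"
}
```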
## Schedule
The architect runs every 6 hours as part of the polling loop in
`docker/agents/entrypoint.sh` (iteration math at lines 196-208).
## State
Architect state is tracked in `state/.architect-active`. The architect is
disabled by default: the empty marker file is not created automatically, only documented here.
## Related issues
- #96: Architect agent parent issue
- #100: Architect formula — research + design fork identification
- #101: Architect formula — sprint PR creation with questions
- #102: Architect formula — answer parsing + sub-issue filing
- #764: Permission scoping — architect read-only on project repo, filer-bot files sub-issues
- #491: Refactor — bash-driven design phase with stateful session resumption


@ -1,907 +0,0 @@
#!/usr/bin/env bash
# =============================================================================
# architect-run.sh — Polling-loop wrapper: architect execution via SDK + formula
#
# Synchronous bash loop using claude -p (one-shot invocation).
# No tmux sessions, no phase files — the bash script IS the state machine.
#
# Flow:
# 1. Guards: run lock, memory check
# 2. Precondition checks: skip if no work (no vision issues, no responses)
# 3. Load formula (formulas/run-architect.toml)
# 4. Context: VISION.md, AGENTS.md, ops:prerequisites.md, structural graph
# 5. Stateless pitch generation: for each selected issue:
# - Fetch issue body from Forgejo API (bash)
# - Invoke claude -p with issue body + context (stateless, no API calls)
# - Create PR with pitch content (bash)
# - Post footer comment (bash)
# 6. Response processing: handle ACCEPT/REJECT on existing PRs
#
# Precondition checks (bash before model):
# - Skip if no vision issues AND no open architect PRs
# - Skip if 3+ architect PRs open AND no ACCEPT/REJECT responses to process
# - Only invoke model when there's actual work: new pitches or response processing
#
# Usage:
# architect-run.sh [projects/disinto.toml] # project config (default: disinto)
#
# Called by: entrypoint.sh polling loop (every 6 hours)
# =============================================================================
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
FACTORY_ROOT="$(dirname "$SCRIPT_DIR")"
# Accept project config from argument; default to disinto
export PROJECT_TOML="${1:-$FACTORY_ROOT/projects/disinto.toml}"
# Set override BEFORE sourcing env.sh so it survives any later re-source of
# env.sh from nested shells / claude -p tools (#762, #747)
export FORGE_TOKEN_OVERRIDE="${FORGE_ARCHITECT_TOKEN:-}"
# shellcheck source=../lib/env.sh
source "$FACTORY_ROOT/lib/env.sh"
# shellcheck source=../lib/formula-session.sh
source "$FACTORY_ROOT/lib/formula-session.sh"
# shellcheck source=../lib/worktree.sh
source "$FACTORY_ROOT/lib/worktree.sh"
# shellcheck source=../lib/guard.sh
source "$FACTORY_ROOT/lib/guard.sh"
# shellcheck source=../lib/agent-sdk.sh
source "$FACTORY_ROOT/lib/agent-sdk.sh"
LOG_FILE="${DISINTO_LOG_DIR}/architect/architect.log"
# shellcheck disable=SC2034 # consumed by agent-sdk.sh
LOGFILE="$LOG_FILE"
# shellcheck disable=SC2034 # consumed by agent-sdk.sh
SID_FILE="/tmp/architect-session-${PROJECT_NAME}.sid"
# Per-PR session files for stateful resumption across runs
SID_DIR="/tmp/architect-sessions-${PROJECT_NAME}"
mkdir -p "$SID_DIR"
SCRATCH_FILE="/tmp/architect-${PROJECT_NAME}-scratch.md"
SCRATCH_FILE_PREFIX="/tmp/architect-${PROJECT_NAME}-scratch"
WORKTREE="/tmp/${PROJECT_NAME}-architect-run"
# Override LOG_AGENT for consistent agent identification
# shellcheck disable=SC2034 # consumed by agent-sdk.sh and env.sh
LOG_AGENT="architect"
# Override log() to append to architect-specific log file
# shellcheck disable=SC2034
log() {
local agent="${LOG_AGENT:-architect}"
printf '[%s] %s: %s\n' "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" "$agent" "$*" >> "$LOG_FILE"
}
# ── Guards ────────────────────────────────────────────────────────────────
check_active architect
acquire_run_lock "/tmp/architect-run.lock"
memory_guard 2000
log "--- Architect run start ---"
# ── Resolve forge remote for git operations ─────────────────────────────
# Run git operations from the project checkout, not the baked code dir
cd "$PROJECT_REPO_ROOT"
resolve_forge_remote
# ── Resolve agent identity for .profile repo ────────────────────────────
if [ -z "${AGENT_IDENTITY:-}" ] && [ -n "${FORGE_ARCHITECT_TOKEN:-}" ]; then
AGENT_IDENTITY=$(curl -sf -H "Authorization: token ${FORGE_ARCHITECT_TOKEN}" \
"${FORGE_URL:-http://localhost:3000}/api/v1/user" 2>/dev/null | jq -r '.login // empty' 2>/dev/null || true)
fi
# ── Load formula + context ───────────────────────────────────────────────
load_formula_or_profile "architect" "$FACTORY_ROOT/formulas/run-architect.toml" || exit 1
build_context_block VISION.md AGENTS.md ops:prerequisites.md
# ── Prepare .profile context (lessons injection) ─────────────────────────
formula_prepare_profile_context
# ── Build structural analysis graph ──────────────────────────────────────
build_graph_section
# ── Read scratch file (compaction survival) ───────────────────────────────
SCRATCH_CONTEXT=$(read_scratch_context "$SCRATCH_FILE")
SCRATCH_INSTRUCTION=$(build_scratch_instruction "$SCRATCH_FILE")
# ── Build prompt ─────────────────────────────────────────────────────────
build_sdk_prompt_footer
# Architect prompt: strategic decomposition of vision into sprints
# See: architect/AGENTS.md for full role description
# Pattern: heredoc function to avoid inline prompt construction
# Note: Uses CONTEXT_BLOCK, GRAPH_SECTION, SCRATCH_CONTEXT from formula-session.sh
# Architecture Decision: AD-003 — The runtime creates and destroys, the formula preserves.
build_architect_prompt() {
cat <<_PROMPT_EOF_
You are the architect agent for ${FORGE_REPO}. Work through the formula below.
Your role: strategic decomposition of vision issues into development sprints.
Propose sprints via PRs on the ops repo, converse with humans through PR comments.
You are READ-ONLY on the project repo — sub-issues are filed by filer-bot after sprint PR merge (#764).
## Project context
${CONTEXT_BLOCK}
${GRAPH_SECTION}
${SCRATCH_CONTEXT}
$(formula_lessons_block)
## Formula
${FORMULA_CONTENT}
${SCRATCH_INSTRUCTION}
${PROMPT_FOOTER}
_PROMPT_EOF_
}
# ── Build prompt for specific session mode ───────────────────────────────
# Args: session_mode (pitch / questions_phase / start_questions)
# Returns: prompt text via stdout
build_architect_prompt_for_mode() {
local session_mode="$1"
case "$session_mode" in
"start_questions")
cat <<_PROMPT_EOF_
You are the architect agent for ${FORGE_REPO}. Work through the formula below.
Your role: strategic decomposition of vision issues into development sprints.
Propose sprints via PRs on the ops repo, converse with humans through PR comments.
You are READ-ONLY on the project repo — sub-issues are filed by filer-bot after sprint PR merge (#764).
## CURRENT STATE: Approved PR awaiting initial design questions
A sprint pitch PR has been approved by the human (via APPROVED review), but the
design conversation has not yet started. Your task is to:
1. Read the approved sprint pitch from the PR body
2. Identify the key design decisions that need human input
3. Post initial design questions (Q1:, Q2:, etc.) as comments on the PR
4. Add a `## Design forks` section to the PR body documenting the design decisions
5. Update the ## Sub-issues section in the sprint spec if design decisions affect decomposition
This is NOT a pitch phase — the pitch is already approved. This is the START
of the design Q&A phase. Sub-issues are filed by filer-bot after sprint PR merge (#764).
## Project context
${CONTEXT_BLOCK}
${GRAPH_SECTION}
${SCRATCH_CONTEXT}
$(formula_lessons_block)
## Formula
${FORMULA_CONTENT}
${SCRATCH_INSTRUCTION}
${PROMPT_FOOTER}
_PROMPT_EOF_
;;
"questions_phase")
cat <<_PROMPT_EOF_
You are the architect agent for ${FORGE_REPO}. Work through the formula below.
Your role: strategic decomposition of vision issues into development sprints.
Propose sprints via PRs on the ops repo, converse with humans through PR comments.
You are READ-ONLY on the project repo — sub-issues are filed by filer-bot after sprint PR merge (#764).
## CURRENT STATE: Design Q&A in progress
A sprint pitch PR is in the questions phase:
- The PR has a `## Design forks` section
- Initial questions (Q1:, Q2:, etc.) have been posted
- Humans may have posted answers or follow-up questions
Your task is to:
1. Read the existing questions and the PR body
2. Read human answers from PR comments
3. Parse the answers and determine next steps
4. Post follow-up questions if needed (Q3:, Q4:, etc.)
5. If all design forks are resolved, finalize the ## Sub-issues section in the sprint spec
6. Update the `## Design forks` section as you progress
## Project context
${CONTEXT_BLOCK}
${GRAPH_SECTION}
${SCRATCH_CONTEXT}
$(formula_lessons_block)
## Formula
${FORMULA_CONTENT}
${SCRATCH_INSTRUCTION}
${PROMPT_FOOTER}
_PROMPT_EOF_
;;
"pitch"|*)
# Default: pitch new sprints (original behavior)
build_architect_prompt
;;
esac
}
# ── Create worktree ──────────────────────────────────────────────────────
formula_worktree_setup "$WORKTREE"
# ── Detect if PR is in questions-awaiting-answers phase ──────────────────
# A PR is in the questions phase if it has a `## Design forks` section and
# question comments. We check this to decide whether to resume the session
# from the research/questions run (preserves codebase context for answer parsing).
detect_questions_phase() {
local pr_number=""
local pr_body=""
# Get open architect PRs on ops repo
local ops_repo="${OPS_REPO_ROOT:-/home/agent/data/ops}"
if [ ! -d "${ops_repo}/.git" ]; then
return 1
fi
# Use Forgejo API to find open architect PRs
local response
response=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls?state=open" 2>/dev/null) || return 1
# Check each open PR for architect markers
pr_number=$(printf '%s' "$response" | jq -r '.[] | select(.title | contains("architect:")) | .number' 2>/dev/null | head -1) || return 1
if [ -z "$pr_number" ]; then
return 1
fi
# Fetch PR body
pr_body=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls/${pr_number}" 2>/dev/null | jq -r '.body // empty') || return 1
# Check for `## Design forks` section (added by #101 after ACCEPT)
if ! printf '%s' "$pr_body" | grep -q "## Design forks"; then
return 1
fi
# Check for question comments (Q1:, Q2:, etc.)
# Use jq to extract body text before grepping (handles JSON escaping properly)
local comments
comments=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/issues/${pr_number}/comments" 2>/dev/null) || return 1
if ! printf '%s' "$comments" | jq -r '.[].body // empty' | grep -qE 'Q[0-9]+:'; then
return 1
fi
# PR is in questions phase
log "Detected PR #${pr_number} in questions-awaiting-answers phase"
return 0
}
# ── Detect if PR is approved and awaiting initial design questions ────────
# A PR is in this state when:
# - It's an open architect PR on ops repo
# - It has an APPROVED review (from human acceptance)
# - It has NO `## Design forks` section yet
# - It has NO Q1:, Q2:, etc. comments yet
# This means the human accepted the pitch and we need to start the design
# conversation by posting initial questions and adding the Design forks section.
detect_approved_pending_questions() {
local pr_number=""
local pr_body=""
# Get open architect PRs on ops repo
local ops_repo="${OPS_REPO_ROOT:-/home/agent/data/ops}"
if [ ! -d "${ops_repo}/.git" ]; then
return 1
fi
# Use Forgejo API to find open architect PRs
local response
response=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls?state=open" 2>/dev/null) || return 1
# Check each open PR for architect markers
pr_number=$(printf '%s' "$response" | jq -r '.[] | select(.title | contains("architect:")) | .number' 2>/dev/null | head -1) || return 1
if [ -z "$pr_number" ]; then
return 1
fi
# Fetch PR body
pr_body=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls/${pr_number}" 2>/dev/null | jq -r '.body // empty') || return 1
# Check for APPROVED review
local reviews
reviews=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls/${pr_number}/reviews" 2>/dev/null) || return 1
if ! printf '%s' "$reviews" | jq -e '.[] | select(.state == "APPROVED")' >/dev/null 2>&1; then
return 1
fi
# Check that PR does NOT have `## Design forks` section yet
# (we're in the "start questions" phase, not "process answers" phase)
if printf '%s' "$pr_body" | grep -q "## Design forks"; then
# Has design forks section — this is either in questions phase or past it
return 1
fi
# Check that PR has NO question comments yet (Q1:, Q2:, etc.)
local comments
comments=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/issues/${pr_number}/comments" 2>/dev/null) || return 1
if printf '%s' "$comments" | jq -r '.[].body // empty' | grep -qE 'Q[0-9]+:'; then
# Has question comments — this is either in questions phase or past it
return 1
fi
# PR is approved and awaiting initial design questions
log "Detected PR #${pr_number} approved and awaiting initial design questions"
return 0
}
# ── Sub-issue existence check ────────────────────────────────────────────
# Check if a vision issue already has sub-issues filed from it.
# Returns 0 if sub-issues exist and are open, 1 otherwise.
# Args: vision_issue_number
has_open_subissues() {
local vision_issue="$1"
local subissue_count=0
# Search for issues whose body contains 'Decomposed from #N' pattern
# Fetch all open issues with bodies in one API call (avoids N+1 calls)
local issues_json
issues_json=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues?state=open&limit=100" 2>/dev/null) || return 1
# Check each issue for the decomposition pattern using jq to extract bodies
subissue_count=$(printf '%s' "$issues_json" | jq -r --arg vid "$vision_issue" '
[.[] | select(.number != ($vid | tonumber)) | select(.body // "" | contains("Decomposed from #" + $vid))] | length
' 2>/dev/null) || subissue_count=0
if [ "$subissue_count" -gt 0 ]; then
log "Vision issue #${vision_issue} has ${subissue_count} open sub-issue(s) — skipping"
return 0 # Has open sub-issues
fi
log "Vision issue #${vision_issue} has no open sub-issues"
return 1 # No open sub-issues
}
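# As a quick illustration, the same jq filter can be run against canned sample
# data (the issue numbers and bodies below are invented for the sketch):

```shell
# Hypothetical sample: only issue #7 declares "Decomposed from #5".
issues='[{"number":7,"body":"Decomposed from #5"},{"number":8,"body":"unrelated"}]'
count=$(printf '%s' "$issues" | jq -r --arg vid "5" '
  [.[] | select(.number != ($vid | tonumber))
       | select(.body // "" | contains("Decomposed from #" + $vid))] | length')
echo "$count"   # 1
```

# Note that `contains` is substring-based, so "Decomposed from #5" would also
# match a body saying "Decomposed from #55"; an anchored jq regex such as
# test("Decomposed from #5\\b") would be stricter.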
# ── Merged sprint PR check ───────────────────────────────────────────────
# Check if a vision issue already has a merged sprint PR on the ops repo.
# Returns 0 if a merged sprint PR exists, 1 otherwise.
# Args: vision_issue_number
has_merged_sprint_pr() {
local vision_issue="$1"
# Get closed PRs from ops repo
local prs_json
prs_json=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls?state=closed&limit=100" 2>/dev/null) || return 1
# Check each closed PR for architect markers and vision issue reference
local pr_numbers
pr_numbers=$(printf '%s' "$prs_json" | jq -r '.[] | select(.title | contains("architect:")) | .number' 2>/dev/null) || return 1
local pr_num
while IFS= read -r pr_num; do
[ -z "$pr_num" ] && continue
# Get PR details including merged status
local pr_details
pr_details=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls/${pr_num}" 2>/dev/null) || continue
# Check if PR is actually merged (not just closed)
local is_merged
is_merged=$(printf '%s' "$pr_details" | jq -r '.merged // false') || continue
if [ "$is_merged" != "true" ]; then
continue
fi
# Get PR body and check for vision issue reference
local pr_body
pr_body=$(printf '%s' "$pr_details" | jq -r '.body // ""') || continue
# Check if PR body references the vision issue number (e.g. "refs #419" or a
# bare "#419") without matching longer numbers such as "#4190"
if printf '%s' "$pr_body" | grep -qE "#${vision_issue}([^0-9]|$)"; then
log "Found merged sprint PR #${pr_num} referencing vision issue #${vision_issue} — skipping"
return 0 # Has merged sprint PR
fi
done <<< "$pr_numbers"
log "Vision issue #${vision_issue} has no merged sprint PR"
return 1 # No merged sprint PR
}
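# The number-boundary behaviour the grep pattern above depends on can be
# exercised in isolation (the inputs are invented, not real PR bodies):

```shell
# "#41" must match as a whole reference but not as a prefix of "#419".
matches() { printf '%s' "$1" | grep -qE '#41([^0-9]|$)' && echo yes || echo no; }
matches "closes #41"   # yes
matches "refs #419"    # no
```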
# ── Helper: Fetch all open vision issues from Forgejo API ─────────────────
# Returns: JSON array of vision issue objects
fetch_vision_issues() {
curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues?labels=vision&state=open&limit=100" 2>/dev/null || echo '[]'
}
# NOTE: get_vision_subissues, all_subissues_closed, close_vision_issue,
# check_and_close_completed_visions removed (#764) — architect-bot is read-only
# on the project repo. Vision lifecycle (closing completed visions, adding
# in-progress labels) is now handled by filer-bot via lib/sprint-filer.sh.
# ── Helper: Fetch open architect PRs from ops repo Forgejo API ───────────
# Returns: JSON array of architect PR objects
fetch_open_architect_prs() {
curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls?state=open&limit=100" 2>/dev/null || echo '[]'
}
# ── Helper: Get vision issue body by number ──────────────────────────────
# Args: issue_number
# Returns: issue body text
get_vision_issue_body() {
local issue_num="$1"
curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues/${issue_num}" 2>/dev/null | jq -r '.body // ""'
}
# ── Helper: Get vision issue title by number ─────────────────────────────
# Args: issue_number
# Returns: issue title
get_vision_issue_title() {
local issue_num="$1"
curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues/${issue_num}" 2>/dev/null | jq -r '.title // ""'
}
# ── Helper: Create a sprint pitch via stateless claude -p call ───────────
# The model NEVER calls Forgejo API. It only reads context and generates pitch.
# Args: vision_issue_number vision_issue_title vision_issue_body
# Returns: pitch markdown to stdout
#
# This is a stateless invocation: the model has no memory between calls.
# All state management (which issues to pitch, dedup logic, etc.) happens in bash.
generate_pitch() {
local issue_num="$1"
local issue_title="$2"
local issue_body="$3"
# Build context block with vision issue details
local pitch_context
pitch_context="
## Vision Issue #${issue_num}
### Title
${issue_title}
### Description
${issue_body}
## Project Context
${CONTEXT_BLOCK}
${GRAPH_SECTION}
$(formula_lessons_block)
## Formula
${FORMULA_CONTENT}
${SCRATCH_INSTRUCTION}
${PROMPT_FOOTER}
"
# Prompt: model generates pitch markdown only, no API calls
local pitch_prompt="You are the architect agent for ${FORGE_REPO}. Write a sprint pitch for the vision issue above.
Instructions:
1. Output ONLY the pitch markdown (no explanations, no preamble, no postscript)
2. Use this exact format:
# Sprint: <sprint-name>
## Vision issues
- #${issue_num} — ${issue_title}
## What this enables
<what the project can do after this sprint that it can't do now>
## What exists today
<current state — infrastructure, interfaces, code that can be reused>
## Complexity
<number of files/subsystems, estimated sub-issues>
<gluecode vs greenfield ratio>
## Risks
<what could go wrong, what breaks if this is done badly>
## Cost — new infra to maintain
<what ongoing maintenance burden does this sprint add>
<new services, scheduled tasks, formulas, agent roles>
## Recommendation
<architect's assessment: worth it / defer / alternative approach>
## Sub-issues
<!-- filer:begin -->
- id: <kebab-case-id>
title: \"vision(#${issue_num}): <concise sub-issue title>\"
labels: [backlog]
depends_on: []
body: |
## Goal
<what this sub-issue accomplishes>
## Acceptance criteria
- [ ] <criterion>
<!-- filer:end -->
IMPORTANT: Do NOT include design forks or questions. This is a go/no-go pitch.
The ## Sub-issues block is parsed by the filer-bot pipeline after sprint PR merge.
Each sub-issue between filer:begin/end markers becomes a Forgejo issue.
---
${pitch_context}
"
# Execute stateless claude -p call
agent_run "$pitch_prompt" 2>>"$LOGFILE" || true
# Extract pitch content from JSON response
local pitch
pitch=$(printf '%s' "$_AGENT_LAST_OUTPUT" | jq -r '.result // empty' 2>/dev/null) || pitch=""
if [ -z "$pitch" ]; then
log "WARNING: empty pitch generated for vision issue #${issue_num}"
return 1
fi
# Output pitch to stdout for caller to use
printf '%s' "$pitch"
}
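# The `.result` extraction can be tried against a canned response; the JSON
# shape here is an assumption about the agent output, not taken from its docs:

```shell
# Canned agent response; the real value comes from agent_run / claude -p.
resp='{"result":"# Sprint: demo\npitch body"}'
pitch=$(printf '%s' "$resp" | jq -r '.result // empty')
[ -n "$pitch" ] && echo "non-empty pitch"   # non-empty pitch
```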
# ── Helper: Create PR on ops repo via Forgejo API ────────────────────────
# Args: sprint_title sprint_body branch_name
# Returns: PR number on success, empty on failure
create_sprint_pr() {
local sprint_title="$1"
local sprint_body="$2"
local branch_name="$3"
# Create branch on ops repo
if ! curl -sf -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/branches" \
-d "{\"new_branch_name\": \"${branch_name}\", \"old_branch_name\": \"${PRIMARY_BRANCH:-main}\"}" >/dev/null 2>&1; then
log "WARNING: failed to create branch ${branch_name}"
return 1
fi
# Extract sprint name from title for filename
local sprint_name
sprint_name=$(printf '%s' "$sprint_title" | sed 's/^architect: *//; s/ *$//')
local sprint_slug
sprint_slug=$(printf '%s' "$sprint_name" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | sed 's/--*/-/g')
# Prepare sprint spec content
local sprint_spec="# Sprint: ${sprint_name}
${sprint_body}
"
# Base64 encode the content (-w 0 disables line wrapping; GNU coreutils —
# on BSD/macOS use `base64 | tr -d '\n'` instead)
local sprint_spec_b64
sprint_spec_b64=$(printf '%s' "$sprint_spec" | base64 -w 0)
# Write sprint spec file to branch
if ! curl -sf -X PUT \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/contents/sprints/${sprint_slug}.md" \
-d "{\"message\": \"sprint: add ${sprint_slug}.md\", \"content\": \"${sprint_spec_b64}\", \"branch\": \"${branch_name}\"}" >/dev/null 2>&1; then
log "WARNING: failed to write sprint spec file"
return 1
fi
# Create PR - use jq to build JSON payload safely (prevents injection from markdown)
local pr_payload
pr_payload=$(jq -n \
--arg title "$sprint_title" \
--arg body "$sprint_body" \
--arg head "$branch_name" \
--arg base "${PRIMARY_BRANCH:-main}" \
'{title: $title, body: $body, head: $head, base: $base}')
local pr_response
pr_response=$(curl -sf -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls" \
-d "$pr_payload" 2>/dev/null) || return 1
# Extract PR number
local pr_number
pr_number=$(printf '%s' "$pr_response" | jq -r '.number // empty')
log "Created sprint PR #${pr_number}: ${sprint_title}"
printf '%s' "$pr_number"
}
# ── Helper: Post footer comment on PR ────────────────────────────────────
# Args: pr_number
post_pr_footer() {
local pr_number="$1"
local footer="Reply \`ACCEPT\` to proceed with design questions, or \`REJECT: <reason>\` to decline."
# Build payload with jq (matches the injection-safe pattern used elsewhere)
if curl -sf -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/issues/${pr_number}/comments" \
-d "$(jq -nc --arg b "$footer" '{body: $b}')" >/dev/null 2>&1; then
log "Posted footer comment on PR #${pr_number}"
return 0
else
log "WARNING: failed to post footer comment on PR #${pr_number}"
return 1
fi
}
# NOTE: add_inprogress_label removed (#764) — architect-bot is read-only on
# project repo. in-progress label is now added by filer-bot via sprint-filer.sh.
# ── Precondition checks in bash before invoking the model ─────────────────
# Check 1: Skip if no vision issues exist and no open architect PRs to handle
vision_count=$(curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/issues?labels=vision&state=open&limit=1" 2>/dev/null | jq length) || vision_count=0
if [ "${vision_count:-0}" -eq 0 ]; then
# Check for open architect PRs that need handling (ACCEPT/REJECT responses)
open_arch_prs=$(curl -sf -H "Authorization: token $FORGE_TOKEN" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls?state=open&limit=10" 2>/dev/null | jq '[.[] | select(.title | startswith("architect:"))] | length') || open_arch_prs=0
if [ "${open_arch_prs:-0}" -eq 0 ]; then
log "no vision issues and no open architect PRs — skipping"
exit 0
fi
fi
# Check 2: Scan for ACCEPT/REJECT responses on open architect PRs (unconditional)
# This ensures responses are processed regardless of open_arch_prs count
has_responses_to_process=false
pr_numbers=$(curl -sf -H "Authorization: token $FORGE_TOKEN" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls?state=open&limit=100" 2>/dev/null | jq -r '.[] | select(.title | startswith("architect:")) | .number') || pr_numbers=""
for pr_num in $pr_numbers; do
# Check formal reviews first (Forgejo green check via review API)
reviews=$(curl -sf -H "Authorization: token $FORGE_TOKEN" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls/${pr_num}/reviews" 2>/dev/null) || reviews="[]"
if printf '%s' "$reviews" | jq -e '.[] | select(.state == "APPROVED" or .state == "REQUEST_CHANGES")' >/dev/null 2>&1; then
has_responses_to_process=true
break
fi
# Then check ACCEPT/REJECT in comments (legacy / human-typed).
# The PR footer asks for a bare `ACCEPT` or `REJECT: <reason>`.
comments=$(curl -sf -H "Authorization: token $FORGE_TOKEN" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/issues/${pr_num}/comments" 2>/dev/null) || continue
if printf '%s' "$comments" | jq -r '.[].body // empty' | grep -qE '^ACCEPT\b|^REJECT:'; then
has_responses_to_process=true
break
fi
done
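# The review-decision check above hinges on jq -e's exit status: it exits 0
# only when the filter produces a non-null, non-false result. A sketch with
# canned data (not a live API response):

```shell
# One COMMENTED review and one APPROVED review; the select matches APPROVED.
reviews='[{"state":"COMMENTED"},{"state":"APPROVED"}]'
if printf '%s' "$reviews" | jq -e \
     '.[] | select(.state == "APPROVED" or .state == "REQUEST_CHANGES")' >/dev/null; then
  echo "decision found"   # decision found
fi
```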
# Check 2 (continued): Skip if already at max open pitches (3), unless there are responses to process
open_arch_prs=$(curl -sf -H "Authorization: token $FORGE_TOKEN" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls?state=open&limit=100" 2>/dev/null | jq '[.[] | select(.title | startswith("architect:"))] | length') || open_arch_prs=0
if [ "${open_arch_prs:-0}" -ge 3 ]; then
if [ "$has_responses_to_process" = false ]; then
log "already 3 open architect PRs with no responses to process — skipping"
exit 0
fi
log "3 open architect PRs found but responses detected — processing"
fi
# NOTE: Vision lifecycle check (close completed visions) moved to filer-bot (#764)
# ── Bash-driven state management: Select vision issues for pitching ───────
# This logic is also documented in formulas/run-architect.toml preflight step
# Fetch all data from Forgejo API upfront (bash handles state, not model)
vision_issues_json=$(fetch_vision_issues)
open_arch_prs_json=$(fetch_open_architect_prs)
# Build list of vision issues that already have open architect PRs
declare -A _arch_vision_issues_with_open_prs
while IFS= read -r pr_num; do
[ -z "$pr_num" ] && continue
pr_body=$(curl -sf -H "Authorization: token $FORGE_TOKEN" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls/${pr_num}" 2>/dev/null | jq -r '.body // ""') || continue
# Extract vision issue numbers referenced in PR body (e.g., "refs #419" or "#419")
while IFS= read -r ref_issue; do
[ -z "$ref_issue" ] && continue
_arch_vision_issues_with_open_prs["$ref_issue"]=1
done <<< "$(printf '%s' "$pr_body" | grep -oE '#[0-9]+' | tr -d '#' | sort -u)"
done <<< "$(printf '%s' "$open_arch_prs_json" | jq -r '.[] | select(.title | startswith("architect:")) | .number')"
# (open vision issues were already fetched above via fetch_vision_issues —
# no need to hit the API again)
# Get issues with in-progress label
in_progress_issues=$(curl -sf -H "Authorization: token $FORGE_TOKEN" \
"${FORGE_API}/issues?labels=in-progress&state=open&limit=100" 2>/dev/null | jq -r '.[].number' 2>/dev/null) || in_progress_issues=""
# Select vision issues for pitching
ARCHITECT_TARGET_ISSUES=()
vision_issue_count=0
pitch_budget=$((3 - open_arch_prs))
# Get all vision issue numbers
vision_issue_nums=$(printf '%s' "$vision_issues_json" | jq -r '.[].number' 2>/dev/null) || vision_issue_nums=""
while IFS= read -r vision_issue; do
[ -z "$vision_issue" ] && continue
vision_issue_count=$((vision_issue_count + 1))
# Skip if pitch budget exhausted
if [ "${pitch_budget}" -le 0 ] || [ ${#ARCHITECT_TARGET_ISSUES[@]} -ge "$pitch_budget" ]; then
log "Pitch budget exhausted (${#ARCHITECT_TARGET_ISSUES[@]}/${pitch_budget})"
break
fi
# Skip if vision issue already has open architect PR
if [ "${_arch_vision_issues_with_open_prs[$vision_issue]:-}" = "1" ]; then
log "Vision issue #${vision_issue} already has open architect PR — skipping"
continue
fi
# Skip if vision issue has in-progress label
if printf '%s\n' "$in_progress_issues" | grep -q "^${vision_issue}$"; then
log "Vision issue #${vision_issue} has in-progress label — skipping"
continue
fi
# Skip if vision issue has open sub-issues (already being worked on)
if has_open_subissues "$vision_issue"; then
log "Vision issue #${vision_issue} has open sub-issues — skipping"
continue
fi
# Skip if vision issue has merged sprint PR (decomposition already done)
if has_merged_sprint_pr "$vision_issue"; then
log "Vision issue #${vision_issue} has merged sprint PR — skipping"
continue
fi
# Add to target issues
ARCHITECT_TARGET_ISSUES+=("$vision_issue")
log "Selected vision issue #${vision_issue} for pitching"
done <<< "$vision_issue_nums"
# If no issues selected, decide whether to exit or process responses
if [ ${#ARCHITECT_TARGET_ISSUES[@]} -eq 0 ]; then
if [ "${has_responses_to_process:-false}" = "true" ]; then
log "No new pitches needed — responses to process"
# Fall through to response processing block below
else
log "No vision issues available for pitching (all have open PRs, sub-issues, or merged sprint PRs) — signaling PHASE:done"
# Signal PHASE:done by writing to phase file if it exists
if [ -f "/tmp/architect-${PROJECT_NAME}.phase" ]; then
echo "PHASE:done" > "/tmp/architect-${PROJECT_NAME}.phase"
fi
exit 0
fi
fi
log "Selected ${#ARCHITECT_TARGET_ISSUES[@]} vision issue(s) for pitching: ${ARCHITECT_TARGET_ISSUES[*]}"
# ── Stateless pitch generation and PR creation (bash-driven, no model API calls) ──
# For each target issue:
# 1. Fetch issue body from Forgejo API (bash)
# 2. Invoke claude -p with issue body + context (stateless, no API calls)
# 3. Create PR with pitch content (bash)
# 4. Post footer comment (bash)
pitch_count=0
for vision_issue in "${ARCHITECT_TARGET_ISSUES[@]}"; do
log "Processing vision issue #${vision_issue}"
# Fetch vision issue details from Forgejo API (bash, not model)
issue_title=$(get_vision_issue_title "$vision_issue")
issue_body=$(get_vision_issue_body "$vision_issue")
if [ -z "$issue_title" ] || [ -z "$issue_body" ]; then
log "WARNING: failed to fetch vision issue #${vision_issue} details"
continue
fi
# Generate pitch via stateless claude -p call (model has no API access)
log "Generating pitch for vision issue #${vision_issue}"
pitch=$(generate_pitch "$vision_issue" "$issue_title" "$issue_body") || true
if [ -z "$pitch" ]; then
log "WARNING: failed to generate pitch for vision issue #${vision_issue}"
continue
fi
# Create sprint PR (bash, not model)
# Use issue number in branch name to avoid collisions across runs
branch_name="architect/sprint-vision-${vision_issue}"
pr_number=$(create_sprint_pr "architect: ${issue_title}" "$pitch" "$branch_name")
if [ -z "$pr_number" ]; then
log "WARNING: failed to create PR for vision issue #${vision_issue}"
continue
fi
# Post footer comment
post_pr_footer "$pr_number"
# NOTE: in-progress label is added by filer-bot after sprint PR merge (#764)
pitch_count=$((pitch_count + 1))
log "Completed pitch for vision issue #${vision_issue} — PR #${pr_number}"
done
log "Generated ${pitch_count} sprint pitch(es)"
# ── Run agent for response processing if needed ───────────────────────────
# Always process ACCEPT/REJECT responses when present, regardless of new pitches
if [ "${has_responses_to_process:-false}" = "true" ]; then
log "Processing ACCEPT/REJECT responses on existing PRs"
# Check if any PRs have responses that need agent handling
needs_agent=false
pr_numbers=$(curl -sf -H "Authorization: token $FORGE_TOKEN" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls?state=open&limit=100" 2>/dev/null | jq -r '.[] | select(.title | startswith("architect:")) | .number') || pr_numbers=""
for pr_num in $pr_numbers; do
# Check for ACCEPT/REJECT in comments
comments=$(curl -sf -H "Authorization: token $FORGE_TOKEN" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/issues/${pr_num}/comments" 2>/dev/null) || continue
# Check for review decisions (higher precedence)
reviews=$(curl -sf -H "Authorization: token $FORGE_TOKEN" \
"${FORGE_API_BASE}/repos/${FORGE_OPS_REPO}/pulls/${pr_num}/reviews" 2>/dev/null) || reviews=""
# Check for ACCEPT (APPROVED review or ACCEPT comment)
if printf '%s' "$reviews" | jq -e '.[] | select(.state == "APPROVED")' >/dev/null 2>&1; then
log "PR #${pr_num} has APPROVED review — needs agent handling"
needs_agent=true
elif printf '%s' "$comments" | jq -r '.[].body // empty' | grep -qiE '^ACCEPT\b'; then
log "PR #${pr_num} has ACCEPT comment — needs agent handling"
needs_agent=true
elif printf '%s' "$comments" | jq -r '.[].body // empty' | grep -qiE '^REJECT:'; then
log "PR #${pr_num} has REJECT comment — needs agent handling"
needs_agent=true
fi
done
# Run agent only if there are responses to process
if [ "$needs_agent" = "true" ]; then
# Determine session handling based on PR state
RESUME_ARGS=()
SESSION_MODE="fresh"
if detect_questions_phase; then
# PR is in questions-awaiting-answers phase — resume from that session
if [ -f "$SID_FILE" ]; then
RESUME_SESSION=$(cat "$SID_FILE")
RESUME_ARGS=(--resume "$RESUME_SESSION")
SESSION_MODE="questions_phase"
log "PR in questions-awaiting-answers phase — resuming session: ${RESUME_SESSION:0:12}..."
else
log "PR in questions phase but no session file — starting fresh session"
fi
elif detect_approved_pending_questions; then
# PR is approved but awaiting initial design questions — start fresh with special prompt
SESSION_MODE="start_questions"
log "PR approved and awaiting initial design questions — starting fresh session"
else
log "PR not in questions phase — starting fresh session"
fi
# Build prompt with appropriate mode
PROMPT_FOR_MODE=$(build_architect_prompt_for_mode "$SESSION_MODE")
agent_run "${RESUME_ARGS[@]}" --worktree "$WORKTREE" "$PROMPT_FOR_MODE"
log "agent_run complete"
fi
fi
# ── Clean up scratch files (legacy single file + per-issue files) ──────────
rm -f "$SCRATCH_FILE"
rm -f "${SCRATCH_FILE_PREFIX}"-*.md
# Write journal entry post-session
profile_write_journal "architect-run" "Architect run $(date -u +%Y-%m-%d)" "complete" "" || true
log "--- Architect run done ---"

File diff suppressed because it is too large


@@ -1,44 +1,22 @@
<!-- last-reviewed: be463c5b439aec1ef0d4acfafc47e94896f5dc57 -->
<!-- last-reviewed: 9b91c6a5bdd83513ca262dd468ea14db675971c1 -->
# Dev Agent
**Role**: Implement issues autonomously — write code, push branches, address
CI failures and review feedback.
**Trigger**: `dev-poll.sh` is invoked by the polling loop in `docker/agents/entrypoint.sh`
every 5 minutes (iteration math at lines 171-175). Sources `lib/guard.sh` and calls
`check_active dev` first — skips if `$FACTORY_ROOT/state/.dev-active` is absent. Then
performs a direct-merge scan (approved + CI green PRs — including chore/gardener PRs
without issue numbers), then checks the agent lock and scans for ready issues using a
two-tier priority queue: (1) `priority`+`backlog` issues first (FIFO within tier), then
(2) plain `backlog` issues (FIFO). Orphaned in-progress issues are also picked up. The
direct-merge scan runs before the lock check so approved PRs get merged even while a
dev-agent session is active.
**Trigger**: `dev-poll.sh` runs every 10 min via cron. Sources `lib/guard.sh` and
calls `check_active dev` first — skips if `$FACTORY_ROOT/state/.dev-active` is
absent. Then performs a direct-merge scan (approved + CI green PRs — including
chore/gardener PRs without issue numbers), then checks the agent lock and scans
for ready issues using a two-tier priority queue: (1) `priority`+`backlog` issues
first (FIFO within tier), then (2) plain `backlog` issues (FIFO). Orphaned
in-progress issues are also picked up. The direct-merge scan runs before the lock
check so approved PRs get merged even while a dev-agent session is active.
**Key files**:
- `dev/dev-poll.sh` — Polling loop participant: finds next ready issue, handles merge/rebase
of approved PRs, tracks CI fix attempts. Invoked by `docker/agents/entrypoint.sh` every 5
minutes. `BOT_USER` is resolved once at startup via the Forge `/user` API and cached for
all assignee checks. Formula guard skips issues labeled `formula`, `prediction/dismissed`,
or `prediction/unreviewed`. **Race prevention**: checks issue assignee before claiming —
skips if assigned to a different bot user. **Stale branch abandonment**: closes PRs and
deletes branches that are behind `$PRIMARY_BRANCH` (restarts poll cycle for a fresh start).
**Stale in-progress recovery**: on each poll cycle, scans for issues labeled `in-progress`.
If the issue has a `vision` label, sets `BLOCKED_BY_INPROGRESS=true` and skips further
stale checks (vision issues are managed by the architect). If the issue is assigned to
`$BOT_USER` (this agent), checks for pending review feedback first — if an open PR has
`REQUEST_CHANGES`, spawns the dev-agent to address it before setting `BLOCKED_BY_INPROGRESS=true`;
otherwise just sets blocked. If assigned to another agent, logs and falls through (does not
block). If no assignee, no open PR, and no agent lock file — removes `in-progress`, adds
`blocked` with a human-triage comment. **Post-crash self-assigned recovery (#749)**: when the
issue is self-assigned (this bot) but there is no open PR, dev-poll now checks for a lock
file (`/tmp/dev-impl-summary-$PROJECT_NAME-$ISSUE_NUM.txt`) AND a remote branch
(`fix/issue-$ISSUE_NUM`) before declaring "my thread is busy". If neither exists after a cold
boot, it spawns a fresh dev-agent for recovery instead of looping forever. **Per-agent open-PR gate**: before starting new work,
filters open waiting PRs to only those assigned to this agent (`$BOT_USER`). Other agents'
PRs do not block this agent's pipeline (#358, #369). **Pre-lock merge scan own-PRs only**:
the direct-merge scan only merges PRs whose linked issue is assigned to this agent — skips
PRs owned by other bot users (#374).
- `dev/dev-agent.sh` — Orchestrator: claims issue, creates worktree + tmux session with interactive `claude`, monitors phase file, injects CI results and review feedback, merges on approval. **Launched as a subshell** (`("${SCRIPT_DIR}/dev-agent.sh" ...) &`) — not via `nohup` — to avoid deadlocking the polling loop and review-poll when running in the same container (#693).
- `dev/dev-poll.sh` — Cron scheduler: finds next ready issue, handles merge/rebase of approved PRs, tracks CI fix attempts. Formula guard skips issues labeled `formula`, `action`, `prediction/dismissed`, or `prediction/unreviewed`. Also injects CI failures and review feedback into active tmux sessions via `_inject_into_session()` (uses `tmux load-buffer` + `paste-buffer` to handle multi-line text safely).
- `dev/dev-agent.sh` — Orchestrator: claims issue, creates worktree + tmux session with interactive `claude`, monitors phase file, injects CI results and review feedback, merges on approval
- `dev/phase-handler.sh` — Phase callback functions: `post_refusal_comment()`, `_on_phase_change()`, `build_phase_protocol_prompt()`. `do_merge()` detects already-merged PRs on HTTP 405 (race with dev-poll's pre-lock scan) and returns success instead of escalating. Sources `lib/mirrors.sh` and calls `mirror_push()` after every successful merge.
- `dev/phase-test.sh` — Integration test for the phase protocol
**Environment variables consumed** (via `lib/env.sh` + project TOML):
@@ -55,15 +33,11 @@ PRs owned by other bot users (#374).
**Crash recovery**: on `PHASE:crashed` or non-zero exit, the worktree is **preserved** (not destroyed) for debugging. Location logged. Supervisor housekeeping removes stale crashed worktrees older than 24h.
**Polling loop isolation (#753)**: `docker/agents/entrypoint.sh` now tracks fast-poll PIDs
(`FAST_PIDS`) and calls `wait "${FAST_PIDS[@]}"` instead of `wait` (no-args). This means
long-running dev-agent sessions no longer block the loop from launching the next iteration's
fast polls — the loop only waits for review-poll and dev-poll (the fast agents), never for
the dev-agent subprocess itself.
**Rebase-before-push**: the phase protocol instructs Claude to `git fetch && git rebase` on `$PRIMARY_BRANCH` before every push (initial, CI fix, and review address). This avoids merge conflicts when main has advanced since branch creation. Uses `--force-with-lease` on CI/review fix pushes.
**Lifecycle**: dev-poll.sh (invoked by polling loop, `check_active dev`) → dev-agent.sh →
tmux session → phase file drives CI/review loop → merge + `mirror_push()` → close issue.
On respawn after `PHASE:escalate`, the stale phase file is cleared first so the session
starts clean; the reinject prompt tells Claude not to re-escalate for the same reason.
On respawn for any active PR, the prompt explicitly tells Claude the PR already exists
and not to create a new one via API.
**Lifecycle**: dev-poll.sh (`check_active dev`) → dev-agent.sh → tmux `dev-{project}-{issue}` → phase file
drives CI/review loop → merge + `mirror_push()` → close issue. On respawn after
`PHASE:escalate`, the stale phase file is cleared first so the session starts
clean; the reinject prompt tells Claude not to re-escalate for the same reason.
On respawn for any active PR, the prompt explicitly tells Claude the PR already
exists and not to create a new one via API.
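The two-tier queue described above can be sketched with jq over hypothetical
issue data (labels and numbers invented; the real scan lives in `dev-poll.sh`):

```shell
# priority+backlog first, then plain backlog; FIFO (lowest number) per tier.
issues='[{"number":12,"labels":["backlog"]},{"number":9,"labels":["backlog","priority"]},{"number":11,"labels":["backlog","priority"]}]'
next=$(printf '%s' "$issues" | jq -r '
  map(select(.labels | index("backlog")))
  | (map(select(.labels | index("priority"))) | sort_by(.number)) as $t1
  | (map(select((.labels | index("priority")) | not)) | sort_by(.number)) as $t2
  | ($t1 + $t2) | first | .number')
echo "$next"   # 9
```

Issue #9 wins: it is in tier 1 (priority) and has the lowest number there,
even though #12 was listed first.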

File diff suppressed because it is too large

File diff suppressed because it is too large

dev/phase-handler.sh (new file, 809 lines)

@@ -0,0 +1,809 @@
#!/usr/bin/env bash
# dev/phase-handler.sh — Phase callback functions for dev-agent.sh
#
# Source this file from agent orchestrators after lib/agent-session.sh is loaded.
# Defines: post_refusal_comment(), _on_phase_change(), build_phase_protocol_prompt()
#
# Required globals (set by calling agent before or after sourcing):
# ISSUE, FORGE_TOKEN, API, FORGE_WEB, PROJECT_NAME, FACTORY_ROOT
# BRANCH, PHASE_FILE, WORKTREE, IMPL_SUMMARY_FILE
# PRIMARY_BRANCH, SESSION_NAME, LOGFILE, ISSUE_TITLE
# WOODPECKER_REPO_ID, WOODPECKER_TOKEN, WOODPECKER_SERVER
#
# Globals with defaults (agents can override after sourcing):
# PR_NUMBER, CI_POLL_TIMEOUT, MAX_CI_FIXES, MAX_REVIEW_ROUNDS,
# REVIEW_POLL_TIMEOUT, CI_RETRY_COUNT, CI_FIX_COUNT, REVIEW_ROUND,
# CLAIMED, PHASE_POLL_INTERVAL
#
# Calls back to agent-defined helpers:
# cleanup_worktree(), cleanup_labels(), status(), log()
#
# shellcheck shell=bash
# shellcheck disable=SC2154 # globals are set in dev-agent.sh before calling
# shellcheck disable=SC2034 # CLAIMED is read by cleanup() in dev-agent.sh
# Load secret scanner for redacting tmux output before posting to issues
# shellcheck source=../lib/secret-scan.sh
source "$(dirname "${BASH_SOURCE[0]}")/../lib/secret-scan.sh"
# Load shared CI helpers (is_infra_step, classify_pipeline_failure, etc.)
# shellcheck source=../lib/ci-helpers.sh
source "$(dirname "${BASH_SOURCE[0]}")/../lib/ci-helpers.sh"
# Load mirror push helper
# shellcheck source=../lib/mirrors.sh
source "$(dirname "${BASH_SOURCE[0]}")/../lib/mirrors.sh"
# --- Default globals (agents can override after sourcing) ---
: "${CI_POLL_TIMEOUT:=1800}"
: "${REVIEW_POLL_TIMEOUT:=10800}"
: "${MAX_CI_FIXES:=3}"
: "${MAX_REVIEW_ROUNDS:=5}"
: "${CI_RETRY_COUNT:=0}"
: "${CI_FIX_COUNT:=0}"
: "${REVIEW_ROUND:=0}"
: "${PR_NUMBER:=}"
: "${CLAIMED:=false}"
: "${PHASE_POLL_INTERVAL:=30}"
# --- Post diagnostic comment + label issue as blocked ---
# Captures tmux pane output, posts a structured comment on the issue, removes
# in-progress label, and adds the "blocked" label.
#
# Args: reason [session_name]
# Uses globals: ISSUE, SESSION_NAME, PR_NUMBER, FORGE_TOKEN, API
post_blocked_diagnostic() {
local reason="$1"
local session="${2:-${SESSION_NAME:-}}"
# Capture last 50 lines from tmux pane (before kill)
local tmux_output=""
if [ -n "$session" ] && tmux has-session -t "$session" 2>/dev/null; then
tmux_output=$(tmux capture-pane -p -t "$session" -S -50 2>/dev/null || true)
fi
# Redact any secrets from tmux output before posting to issue
if [ -n "$tmux_output" ]; then
tmux_output=$(redact_secrets "$tmux_output")
fi
# Build diagnostic comment body
local comment
comment="### Session failure diagnostic
| Field | Value |
|---|---|
| Exit reason | \`${reason}\` |
| Timestamp | \`$(date -u +%Y-%m-%dT%H:%M:%SZ)\` |"
[ -n "${PR_NUMBER:-}" ] && [ "${PR_NUMBER:-0}" != "0" ] && \
comment="${comment}
| PR | #${PR_NUMBER} |"
if [ -n "$tmux_output" ]; then
comment="${comment}
<details><summary>Last 50 lines from tmux pane</summary>

\`\`\`
${tmux_output}
\`\`\`
</details>"
fi
# Post comment to issue
curl -sf -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${API}/issues/${ISSUE}/comments" \
-d "$(jq -nc --arg b "$comment" '{body:$b}')" >/dev/null 2>&1 || true
# Remove in-progress, add blocked
cleanup_labels
local blocked_id
blocked_id=$(ensure_blocked_label_id)
if [ -n "$blocked_id" ]; then
curl -sf -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${API}/issues/${ISSUE}/labels" \
-d "{\"labels\":[${blocked_id}]}" >/dev/null 2>&1 || true
fi
CLAIMED=false
_BLOCKED_POSTED=true
}
# --- Build phase protocol prompt (shared across agents) ---
# Generates the phase-signaling instructions for Claude prompts.
# Args: phase_file summary_file branch [remote]
# Output: The protocol text (stdout)
build_phase_protocol_prompt() {
local _pf="$1" _sf="$2" _br="$3" _remote="${4:-${FORGE_REMOTE:-origin}}"
cat <<_PHASE_PROTOCOL_EOF_
## Phase-Signaling Protocol (REQUIRED)
You are running in a persistent tmux session managed by an orchestrator.
Communicate progress by writing to the phase file. The orchestrator watches
this file and injects events (CI results, review feedback) back into this session.
### Key files
\`\`\`
PHASE_FILE="${_pf}"
SUMMARY_FILE="${_sf}"
\`\`\`
### Phase transitions — write these exactly:
**After committing and pushing your branch:**
\`\`\`bash
# Rebase on target branch before push to avoid merge conflicts
git fetch ${_remote} ${PRIMARY_BRANCH} && git rebase ${_remote}/${PRIMARY_BRANCH}
git push ${_remote} ${_br}
# Write a short summary of what you implemented:
printf '%s' "<your summary>" > "\${SUMMARY_FILE}"
# Signal the orchestrator to create the PR and watch for CI:
echo "PHASE:awaiting_ci" > "${_pf}"
\`\`\`
Then STOP and wait. The orchestrator will inject CI results.
**When you receive a "CI passed" injection:**
\`\`\`bash
echo "PHASE:awaiting_review" > "${_pf}"
\`\`\`
Then STOP and wait. The orchestrator will inject review feedback.
**When you receive a "CI failed:" injection:**
Fix the CI issue, then rebase on target branch and push:
\`\`\`bash
git fetch ${_remote} ${PRIMARY_BRANCH} && git rebase ${_remote}/${PRIMARY_BRANCH}
git push --force-with-lease ${_remote} ${_br}
echo "PHASE:awaiting_ci" > "${_pf}"
\`\`\`
Then STOP and wait.
**When you receive a "Review: REQUEST_CHANGES" injection:**
Address ALL review feedback, then rebase on target branch and push:
\`\`\`bash
git fetch ${_remote} ${PRIMARY_BRANCH} && git rebase ${_remote}/${PRIMARY_BRANCH}
git push --force-with-lease ${_remote} ${_br}
echo "PHASE:awaiting_ci" > "${_pf}"
\`\`\`
(CI runs again after each push — always write awaiting_ci, not awaiting_review)
**When you need human help (CI exhausted, merge blocked, stuck on a decision):**
\`\`\`bash
printf 'PHASE:escalate\nReason: %s\n' "describe what you need" > "${_pf}"
\`\`\`
Then STOP and wait. A human will review and respond via the forge.
**On unrecoverable failure:**
\`\`\`bash
printf 'PHASE:failed\nReason: %s\n' "describe what failed" > "${_pf}"
\`\`\`
_PHASE_PROTOCOL_EOF_
}
# --- Merge helper ---
# do_merge — attempt to merge PR via forge API.
# Args: pr_num
# Returns:
# 0 = merged successfully
# 1 = other failure (conflict, network error, etc.)
# 2 = not enough approvals (HTTP 405) — PHASE:escalate already written
do_merge() {
local pr_num="$1"
local merge_response merge_http_code merge_body
merge_response=$(curl -s -w "\n%{http_code}" -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H 'Content-Type: application/json' \
"${API}/pulls/${pr_num}/merge" \
-d '{"Do":"merge","delete_branch_after_merge":true}') || true
merge_http_code=$(echo "$merge_response" | tail -1)
merge_body=$(echo "$merge_response" | sed '$d')
if [ "$merge_http_code" = "200" ] || [ "$merge_http_code" = "204" ]; then
log "do_merge: PR #${pr_num} merged (HTTP ${merge_http_code})"
return 0
fi
# HTTP 405 — could be "merge requirements not met" OR "already merged" (race with dev-poll).
# Before escalating, check whether the PR was already merged by another agent.
if [ "$merge_http_code" = "405" ]; then
local pr_state
pr_state=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${API}/pulls/${pr_num}" | jq -r '.merged // false') || pr_state="false"
if [ "$pr_state" = "true" ]; then
log "do_merge: PR #${pr_num} already merged (detected after HTTP 405) — treating as success"
return 0
fi
log "do_merge: PR #${pr_num} blocked — merge requirements not met (HTTP 405): ${merge_body:0:200}"
printf 'PHASE:escalate\nReason: %s\n' \
"PR #${pr_num} merge blocked — merge requirements not met (HTTP 405): ${merge_body:0:200}" \
> "$PHASE_FILE"
return 2
fi
log "do_merge: PR #${pr_num} merge failed (HTTP ${merge_http_code}): ${merge_body:0:200}"
return 1
}
# --- Refusal comment helper ---
post_refusal_comment() {
local emoji="$1" title="$2" body="$3"
local last_has_title
last_has_title=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${API}/issues/${ISSUE}/comments?limit=5" | \
jq -r --arg t "Dev-agent: ${title}" '[.[] | .body // ""] | any(contains($t)) | tostring') || true
if [ "$last_has_title" = "true" ]; then
log "skipping duplicate refusal comment: ${title}"
return 0
fi
local comment
comment="${emoji} **Dev-agent: ${title}**
${body}
---
*Automated assessment by dev-agent · $(date -u '+%Y-%m-%d %H:%M UTC')*"
printf '%s' "$comment" > "/tmp/refusal-comment.txt"
jq -Rs '{body: .}' < "/tmp/refusal-comment.txt" > "/tmp/refusal-comment.json"
curl -sf -o /dev/null -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${API}/issues/${ISSUE}/comments" \
--data-binary @"/tmp/refusal-comment.json" 2>/dev/null || \
log "WARNING: failed to post refusal comment"
rm -f "/tmp/refusal-comment.txt" "/tmp/refusal-comment.json"
}
# =============================================================================
# PHASE DISPATCH CALLBACK
# =============================================================================
# _on_phase_change — Phase dispatch callback for monitor_phase_loop
# Receives the current phase as $1.
# Returns 0 to continue the loop, 1 to break (terminal phase reached).
_on_phase_change() {
local phase="$1"
# ── PHASE: awaiting_ci ──────────────────────────────────────────────────────
if [ "$phase" = "PHASE:awaiting_ci" ]; then
# Release session lock — Claude is idle during CI polling (#724)
session_lock_release
# Create PR if not yet created
if [ -z "${PR_NUMBER:-}" ]; then
status "creating PR for issue #${ISSUE}"
IMPL_SUMMARY=""
if [ -f "$IMPL_SUMMARY_FILE" ]; then
# Don't treat refusal JSON as a PR summary
if ! jq -e '.status' < "$IMPL_SUMMARY_FILE" >/dev/null 2>&1; then
IMPL_SUMMARY=$(head -c 4000 "$IMPL_SUMMARY_FILE")
fi
fi
printf 'Fixes #%s\n\n## Changes\n%s' "$ISSUE" "$IMPL_SUMMARY" > "/tmp/pr-body-${ISSUE}.txt"
jq -n \
--arg title "fix: ${ISSUE_TITLE} (#${ISSUE})" \
--rawfile body "/tmp/pr-body-${ISSUE}.txt" \
--arg head "$BRANCH" \
--arg base "${PRIMARY_BRANCH}" \
'{title: $title, body: $body, head: $head, base: $base}' > "/tmp/pr-request-${ISSUE}.json"
PR_RESPONSE=$(curl -s -w "\n%{http_code}" -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${API}/pulls" \
--data-binary @"/tmp/pr-request-${ISSUE}.json")
PR_HTTP_CODE=$(echo "$PR_RESPONSE" | tail -1)
PR_RESPONSE_BODY=$(echo "$PR_RESPONSE" | sed '$d')
rm -f "/tmp/pr-body-${ISSUE}.txt" "/tmp/pr-request-${ISSUE}.json"
if [ "$PR_HTTP_CODE" = "201" ] || [ "$PR_HTTP_CODE" = "200" ]; then
PR_NUMBER=$(echo "$PR_RESPONSE_BODY" | jq -r '.number')
log "created PR #${PR_NUMBER}"
elif [ "$PR_HTTP_CODE" = "409" ]; then
# PR already exists (race condition) — find it
FOUND_PR=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${API}/pulls?state=open&limit=20" | \
jq -r --arg branch "$BRANCH" \
'.[] | select(.head.ref == $branch) | .number' | head -1) || true
if [ -n "$FOUND_PR" ]; then
PR_NUMBER="$FOUND_PR"
log "PR already exists: #${PR_NUMBER}"
else
log "ERROR: PR creation got 409 but no existing PR found"
agent_inject_into_session "$SESSION_NAME" "ERROR: Could not create PR (HTTP 409, no existing PR found). Check the forge API. Retry by writing PHASE:awaiting_ci again after verifying the branch was pushed."
return 0
fi
else
log "ERROR: PR creation failed (HTTP ${PR_HTTP_CODE})"
agent_inject_into_session "$SESSION_NAME" "ERROR: Could not create PR (HTTP ${PR_HTTP_CODE}). Check branch was pushed: git push ${FORGE_REMOTE:-origin} ${BRANCH}. Then write PHASE:awaiting_ci again."
return 0
fi
fi
# No CI configured? Treat as success immediately
if [ "${WOODPECKER_REPO_ID:-2}" = "0" ]; then
log "no CI configured — treating as passed"
agent_inject_into_session "$SESSION_NAME" "CI passed on PR #${PR_NUMBER} (no CI configured for this project).
Write PHASE:awaiting_review to the phase file, then stop and wait for review feedback."
return 0
fi
# Poll CI until done or timeout
status "waiting for CI on PR #${PR_NUMBER}"
CI_CURRENT_SHA=$(git -C "${WORKTREE}" rev-parse HEAD 2>/dev/null || \
curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${API}/pulls/${PR_NUMBER}" | jq -r '.head.sha')
CI_DONE=false
CI_STATE="unknown"
CI_POLL_ELAPSED=0
while [ "$CI_POLL_ELAPSED" -lt "$CI_POLL_TIMEOUT" ]; do
sleep 30
CI_POLL_ELAPSED=$(( CI_POLL_ELAPSED + 30 ))
# Check session still alive during CI wait (exit_marker + tmux fallback)
if [ -f "/tmp/claude-exited-${SESSION_NAME}.ts" ] || ! tmux has-session -t "${SESSION_NAME}" 2>/dev/null; then
log "session died during CI wait"
break
fi
# Re-fetch HEAD — Claude may have pushed new commits since loop started
CI_CURRENT_SHA=$(git -C "${WORKTREE}" rev-parse HEAD 2>/dev/null || echo "$CI_CURRENT_SHA")
CI_STATE=$(ci_commit_status "$CI_CURRENT_SHA")
if [ "$CI_STATE" = "success" ] || [ "$CI_STATE" = "failure" ] || [ "$CI_STATE" = "error" ]; then
CI_DONE=true
[ "$CI_STATE" = "success" ] && CI_FIX_COUNT=0
break
fi
done
if ! $CI_DONE; then
log "TIMEOUT: CI didn't complete in ${CI_POLL_TIMEOUT}s"
agent_inject_into_session "$SESSION_NAME" "CI TIMEOUT: CI did not complete within ${CI_POLL_TIMEOUT}s for PR #${PR_NUMBER} (SHA: ${CI_CURRENT_SHA:0:7}). This may be an infrastructure issue. Write PHASE:escalate if you cannot proceed."
return 0
fi
log "CI: ${CI_STATE}"
if [ "$CI_STATE" = "success" ]; then
agent_inject_into_session "$SESSION_NAME" "CI passed on PR #${PR_NUMBER}.
Write PHASE:awaiting_review to the phase file, then stop and wait for review feedback:
echo \"PHASE:awaiting_review\" > \"${PHASE_FILE}\""
else
# Fetch CI error details
PIPELINE_NUM=$(ci_pipeline_number "$CI_CURRENT_SHA")
FAILED_STEP=""
FAILED_EXIT=""
IS_INFRA=false
if [ -n "$PIPELINE_NUM" ]; then
FAILED_INFO=$(curl -sf \
-H "Authorization: Bearer ${WOODPECKER_TOKEN}" \
"${WOODPECKER_SERVER}/api/repos/${WOODPECKER_REPO_ID}/pipelines/${PIPELINE_NUM}" | \
jq -r '.workflows[]?.children[]? | select(.state=="failure") | "\(.name)|\(.exit_code)"' | head -1 || true)
FAILED_STEP=$(echo "$FAILED_INFO" | cut -d'|' -f1)
FAILED_EXIT=$(echo "$FAILED_INFO" | cut -d'|' -f2)
fi
log "CI failed: step=${FAILED_STEP:-unknown} exit=${FAILED_EXIT:-?}"
if [ -n "$FAILED_STEP" ] && is_infra_step "$FAILED_STEP" "${FAILED_EXIT:-0}" >/dev/null 2>&1; then
IS_INFRA=true
fi
if [ "$IS_INFRA" = true ] && [ "${CI_RETRY_COUNT:-0}" -lt 1 ]; then
CI_RETRY_COUNT=$(( CI_RETRY_COUNT + 1 ))
log "infra failure — retrigger CI (retry ${CI_RETRY_COUNT})"
(cd "$WORKTREE" && git commit --allow-empty \
-m "ci: retrigger after infra failure (#${ISSUE})" --no-verify 2>&1 | tail -1)
# Rebase on target branch before push to avoid merge conflicts
if ! (cd "$WORKTREE" && \
git fetch "${FORGE_REMOTE:-origin}" "${PRIMARY_BRANCH}" 2>/dev/null && \
git rebase "${FORGE_REMOTE:-origin}/${PRIMARY_BRANCH}" 2>&1 | tail -5); then
log "rebase conflict detected — aborting, agent must resolve"
(cd "$WORKTREE" && git rebase --abort 2>/dev/null || git reset --hard HEAD 2>/dev/null) || true
agent_inject_into_session "$SESSION_NAME" "REBASE CONFLICT: Cannot rebase onto ${PRIMARY_BRANCH} automatically.
Please resolve merge conflicts manually:
1. Check conflict status: git status
2. Resolve conflicts in the conflicted files
3. Stage resolved files: git add <files>
4. Continue rebase: git rebase --continue
If you cannot resolve conflicts, abort: git rebase --abort
Then write PHASE:escalate with a reason."
return 0
fi
# Rebase succeeded — push the result
(cd "$WORKTREE" && git push --force-with-lease "${FORGE_REMOTE:-origin}" "$BRANCH" 2>&1 | tail -3)
# Touch phase file so we recheck CI on the new SHA
# Do NOT update LAST_PHASE_MTIME here — let the main loop detect the fresh mtime
touch "$PHASE_FILE"
CI_CURRENT_SHA=$(git -C "${WORKTREE}" rev-parse HEAD 2>/dev/null || true)
return 0
fi
CI_FIX_COUNT=$(( CI_FIX_COUNT + 1 ))
_ci_pipeline_url="${WOODPECKER_SERVER}/repos/${WOODPECKER_REPO_ID}/pipeline/${PIPELINE_NUM:-0}"
if [ "$CI_FIX_COUNT" -gt "$MAX_CI_FIXES" ]; then
log "CI failure not recoverable after ${CI_FIX_COUNT} fix attempts — escalating"
printf 'PHASE:escalate\nReason: ci_exhausted after %d attempts (step: %s)\n' "$CI_FIX_COUNT" "${FAILED_STEP:-unknown}" > "$PHASE_FILE"
# Do NOT update LAST_PHASE_MTIME here — let the main loop detect PHASE:escalate
return 0
fi
CI_ERROR_LOG=""
if [ -n "$PIPELINE_NUM" ]; then
CI_ERROR_LOG=$(bash "${FACTORY_ROOT}/lib/ci-debug.sh" failures "$PIPELINE_NUM" 2>/dev/null | tail -80 | head -c 8000 || echo "")
fi
# Save CI result for crash recovery
printf 'CI failed (attempt %d/%d)\nStep: %s\nExit: %s\n\n%s' \
"$CI_FIX_COUNT" "$MAX_CI_FIXES" "${FAILED_STEP:-unknown}" "${FAILED_EXIT:-?}" "$CI_ERROR_LOG" \
> "/tmp/ci-result-${PROJECT_NAME}-${ISSUE}.txt" 2>/dev/null || true
agent_inject_into_session "$SESSION_NAME" "CI failed on PR #${PR_NUMBER} (attempt ${CI_FIX_COUNT}/${MAX_CI_FIXES}).
Failed step: ${FAILED_STEP:-unknown} (exit code ${FAILED_EXIT:-?}, pipeline #${PIPELINE_NUM:-?})
CI debug tool:
bash ${FACTORY_ROOT}/lib/ci-debug.sh failures ${PIPELINE_NUM:-0}
bash ${FACTORY_ROOT}/lib/ci-debug.sh logs ${PIPELINE_NUM:-0} <step-name>
Error snippet:
${CI_ERROR_LOG:-No logs available. Use ci-debug.sh to query the pipeline.}
Instructions:
1. Run ci-debug.sh failures to get the full error output.
2. Read the failing test file(s) — understand what the tests EXPECT.
3. Fix the root cause — do NOT weaken tests.
4. Rebase on target branch and push: git fetch ${FORGE_REMOTE:-origin} ${PRIMARY_BRANCH} && git rebase ${FORGE_REMOTE:-origin}/${PRIMARY_BRANCH}
git push --force-with-lease ${FORGE_REMOTE:-origin} ${BRANCH}
5. Write: echo \"PHASE:awaiting_ci\" > \"${PHASE_FILE}\"
6. Stop and wait."
fi
# ── PHASE: awaiting_review ──────────────────────────────────────────────────
elif [ "$phase" = "PHASE:awaiting_review" ]; then
# Release session lock — Claude is idle during review wait (#724)
session_lock_release
status "waiting for review on PR #${PR_NUMBER:-?}"
CI_FIX_COUNT=0 # Reset CI fix budget for this review cycle
if [ -z "${PR_NUMBER:-}" ]; then
log "WARNING: awaiting_review but PR_NUMBER unknown — searching for PR"
FOUND_PR=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${API}/pulls?state=open&limit=20" | \
jq -r --arg branch "$BRANCH" \
'.[] | select(.head.ref == $branch) | .number' | head -1) || true
if [ -n "$FOUND_PR" ]; then
PR_NUMBER="$FOUND_PR"
log "found PR #${PR_NUMBER}"
else
agent_inject_into_session "$SESSION_NAME" "ERROR: Cannot find open PR for branch ${BRANCH}. Did you push? Verify with git status and git push ${FORGE_REMOTE:-origin} ${BRANCH}, then write PHASE:awaiting_ci."
return 0
fi
fi
REVIEW_POLL_ELAPSED=0
REVIEW_FOUND=false
while [ "$REVIEW_POLL_ELAPSED" -lt "$REVIEW_POLL_TIMEOUT" ]; do
sleep 300 # 5 min between review checks
REVIEW_POLL_ELAPSED=$(( REVIEW_POLL_ELAPSED + 300 ))
# Check session still alive (exit_marker + tmux fallback)
if [ -f "/tmp/claude-exited-${SESSION_NAME}.ts" ] || ! tmux has-session -t "${SESSION_NAME}" 2>/dev/null; then
log "session died during review wait"
REVIEW_FOUND=false
break
fi
# Check if phase was updated while we wait (e.g., Claude reacted to something)
NEW_MTIME=$(stat -c %Y "$PHASE_FILE" 2>/dev/null || echo 0)
if [ "$NEW_MTIME" -gt "$LAST_PHASE_MTIME" ]; then
log "phase file updated during review wait — re-entering main loop"
# Do NOT update LAST_PHASE_MTIME here — leave it stale so the outer
# loop detects the change on its next tick and dispatches the new phase.
REVIEW_FOUND=true # Prevent timeout injection
# Clean up review-poll sentinel if it exists (session already advanced)
rm -f "/tmp/review-injected-${PROJECT_NAME}-${PR_NUMBER}"
break
fi
REVIEW_SHA=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${API}/pulls/${PR_NUMBER}" | jq -r '.head.sha') || true
REVIEW_COMMENT=$(forge_api_all "/issues/${PR_NUMBER}/comments" | \
jq -r --arg sha "$REVIEW_SHA" \
'[.[] | select(.body | contains("<!-- reviewed: " + $sha))] | last // empty') || true
if [ -n "$REVIEW_COMMENT" ] && [ "$REVIEW_COMMENT" != "null" ]; then
REVIEW_TEXT=$(echo "$REVIEW_COMMENT" | jq -r '.body')
# Skip error reviews — they have no verdict
if echo "$REVIEW_TEXT" | grep -q "review-error\|Review — Error"; then
log "review was an error, waiting for re-review"
continue
fi
VERDICT=$(echo "$REVIEW_TEXT" | grep -oP '\*\*(APPROVE|REQUEST_CHANGES|DISCUSS)\*\*' | head -1 | tr -d '*' || true)
log "review verdict: ${VERDICT:-unknown}"
# Also check formal forge reviews
if [ -z "$VERDICT" ]; then
VERDICT=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${API}/pulls/${PR_NUMBER}/reviews" | \
jq -r '[.[] | select(.stale == false)] | last | .state // empty' || true)
if [ "$VERDICT" = "APPROVED" ]; then
VERDICT="APPROVE"
elif [ "$VERDICT" != "REQUEST_CHANGES" ]; then
VERDICT=""
fi
[ -n "$VERDICT" ] && log "verdict from formal review: $VERDICT"
fi
# Skip injection if review-poll.sh already injected (sentinel present).
# Exception: APPROVE always falls through so do_merge() runs even when
# review-poll injected first — this prevents Claude from writing PHASE:done
# on a failed merge without the orchestrator detecting the error.
REVIEW_SENTINEL="/tmp/review-injected-${PROJECT_NAME}-${PR_NUMBER}"
if [ -n "$VERDICT" ] && [ -f "$REVIEW_SENTINEL" ] && [ "$VERDICT" != "APPROVE" ]; then
log "review already injected by review-poll (sentinel exists) — skipping"
rm -f "$REVIEW_SENTINEL"
REVIEW_FOUND=true
break
fi
rm -f "$REVIEW_SENTINEL" # consume sentinel before APPROVE handling below
if [ "$VERDICT" = "APPROVE" ]; then
REVIEW_FOUND=true
_merge_rc=0; do_merge "$PR_NUMBER" || _merge_rc=$?
if [ "$_merge_rc" -eq 0 ]; then
# Merge succeeded — close issue and signal done
curl -sf -X PATCH \
-H "Authorization: token ${FORGE_TOKEN}" \
-H 'Content-Type: application/json' \
"${API}/issues/${ISSUE}" \
-d '{"state":"closed"}' >/dev/null 2>&1 || true
# Pull merged primary branch and push to mirrors
git -C "$PROJECT_REPO_ROOT" fetch "${FORGE_REMOTE:-origin}" "$PRIMARY_BRANCH" 2>/dev/null || true
git -C "$PROJECT_REPO_ROOT" checkout "$PRIMARY_BRANCH" 2>/dev/null || true
git -C "$PROJECT_REPO_ROOT" pull --ff-only "${FORGE_REMOTE:-origin}" "$PRIMARY_BRANCH" 2>/dev/null || true
mirror_push
printf 'PHASE:done\n' > "$PHASE_FILE"
elif [ "$_merge_rc" -ne 2 ]; then
# Other merge failure (conflict, etc.) — delegate to Claude for rebase + retry
agent_inject_into_session "$SESSION_NAME" "Approved! PR #${PR_NUMBER} has been approved, but the merge failed (likely conflicts).
Rebase onto ${PRIMARY_BRANCH} and push:
git fetch ${FORGE_REMOTE:-origin} ${PRIMARY_BRANCH} && git rebase ${FORGE_REMOTE:-origin}/${PRIMARY_BRANCH}
git push --force-with-lease ${FORGE_REMOTE:-origin} ${BRANCH}
echo \"PHASE:awaiting_ci\" > \"${PHASE_FILE}\"
Do NOT merge or close the issue — the orchestrator handles that after CI passes.
If rebase repeatedly fails, write PHASE:escalate with a reason."
fi
# _merge_rc=2: PHASE:escalate already written by do_merge()
break
elif [ "$VERDICT" = "REQUEST_CHANGES" ] || [ "$VERDICT" = "DISCUSS" ]; then
REVIEW_ROUND=$(( REVIEW_ROUND + 1 ))
if [ "$REVIEW_ROUND" -ge "$MAX_REVIEW_ROUNDS" ]; then
log "hit max review rounds (${MAX_REVIEW_ROUNDS})"
log "PR #${PR_NUMBER}: hit ${MAX_REVIEW_ROUNDS} review rounds, needs human attention"
fi
REVIEW_FOUND=true
agent_inject_into_session "$SESSION_NAME" "Review feedback (round ${REVIEW_ROUND}) on PR #${PR_NUMBER}:
${REVIEW_TEXT}
Instructions:
1. Address each piece of feedback carefully.
2. Run lint and tests when done.
3. Rebase on target branch and push: git fetch ${FORGE_REMOTE:-origin} ${PRIMARY_BRANCH} && git rebase ${FORGE_REMOTE:-origin}/${PRIMARY_BRANCH}
git push --force-with-lease ${FORGE_REMOTE:-origin} ${BRANCH}
4. Write: echo \"PHASE:awaiting_ci\" > \"${PHASE_FILE}\"
5. Stop and wait for the next CI result."
log "review REQUEST_CHANGES received (round ${REVIEW_ROUND})"
break
else
# No verdict found in comment or formal review — keep waiting
log "review comment found but no verdict, continuing to wait"
continue
fi
fi
# Check if PR was merged or closed externally
PR_JSON=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${API}/pulls/${PR_NUMBER}") || true
PR_STATE=$(echo "$PR_JSON" | jq -r '.state // "unknown"')
PR_MERGED=$(echo "$PR_JSON" | jq -r '.merged // false')
if [ "$PR_STATE" != "open" ]; then
if [ "$PR_MERGED" = "true" ]; then
log "PR #${PR_NUMBER} was merged externally"
curl -sf -X PATCH -H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${API}/issues/${ISSUE}" -d '{"state":"closed"}' >/dev/null 2>&1 || true
cleanup_labels
agent_kill_session "$SESSION_NAME"
cleanup_worktree
rm -f "$PHASE_FILE" "$IMPL_SUMMARY_FILE" "${SCRATCH_FILE:-}"
exit 0
else
log "PR #${PR_NUMBER} was closed WITHOUT merge — NOT closing issue"
cleanup_labels
agent_kill_session "$SESSION_NAME"
cleanup_worktree
exit 0
fi
fi
log "waiting for review on PR #${PR_NUMBER} (${REVIEW_POLL_ELAPSED}s elapsed)"
done
if ! $REVIEW_FOUND && [ "$REVIEW_POLL_ELAPSED" -ge "$REVIEW_POLL_TIMEOUT" ]; then
log "TIMEOUT: no review after ${REVIEW_POLL_TIMEOUT}s"
agent_inject_into_session "$SESSION_NAME" "TIMEOUT: No review received after ${REVIEW_POLL_TIMEOUT}s for PR #${PR_NUMBER}. Write PHASE:escalate to escalate to a human reviewer."
fi
# ── PHASE: escalate ──────────────────────────────────────────────────────
elif [ "$phase" = "PHASE:escalate" ]; then
status "escalated — waiting for human input on issue #${ISSUE}"
ESCALATE_REASON=$(sed -n '2p' "$PHASE_FILE" 2>/dev/null | sed 's/^Reason: //' || echo "")
log "phase: escalate — reason: ${ESCALATE_REASON:-none}"
# Session stays alive — human input arrives via vault/forge
# ── PHASE: done ─────────────────────────────────────────────────────────────
# PR merged and issue closed (by orchestrator or Claude). Just clean up local state.
elif [ "$phase" = "PHASE:done" ]; then
if [ -n "${PR_NUMBER:-}" ]; then
status "phase done — PR #${PR_NUMBER} merged, cleaning up"
else
status "phase done — issue #${ISSUE} complete, cleaning up"
fi
# Belt-and-suspenders: ensure in-progress label removed (idempotent)
cleanup_labels
# Local cleanup
agent_kill_session "$SESSION_NAME"
cleanup_worktree
rm -f "$PHASE_FILE" "$IMPL_SUMMARY_FILE" "${SCRATCH_FILE:-}" \
"/tmp/ci-result-${PROJECT_NAME}-${ISSUE}.txt"
[ -n "${PR_NUMBER:-}" ] && rm -f "/tmp/review-injected-${PROJECT_NAME}-${PR_NUMBER}"
CLAIMED=false # Don't unclaim again in cleanup()
# ── PHASE: failed ───────────────────────────────────────────────────────────
elif [ "$phase" = "PHASE:failed" ]; then
if [[ -f "$PHASE_FILE" ]]; then
FAILURE_REASON=$(sed -n '2p' "$PHASE_FILE" | sed 's/^Reason: //')
fi
FAILURE_REASON="${FAILURE_REASON:-unspecified}"
log "phase: failed — reason: ${FAILURE_REASON}"
# Gitea labels API requires []int64 — look up the "backlog" label ID once
BACKLOG_LABEL_ID=$(forge_api GET "/labels" 2>/dev/null \
| jq -r '.[] | select(.name == "backlog") | .id' 2>/dev/null || true)
BACKLOG_LABEL_ID="${BACKLOG_LABEL_ID:-1300815}"
UNDERSPECIFIED_LABEL_ID=$(forge_api GET "/labels" 2>/dev/null \
| jq -r '.[] | select(.name == "underspecified") | .id' 2>/dev/null || true)
UNDERSPECIFIED_LABEL_ID="${UNDERSPECIFIED_LABEL_ID:-1300816}"
# Check if this is a refusal (Claude wrote refusal JSON to IMPL_SUMMARY_FILE)
REFUSAL_JSON=""
if [ -f "$IMPL_SUMMARY_FILE" ] && jq -e '.status' < "$IMPL_SUMMARY_FILE" >/dev/null 2>&1; then
REFUSAL_JSON=$(cat "$IMPL_SUMMARY_FILE")
fi
if [ -n "$REFUSAL_JSON" ] && [ "$FAILURE_REASON" = "refused" ]; then
REFUSAL_STATUS=$(printf '%s' "$REFUSAL_JSON" | jq -r '.status')
log "claude refused: ${REFUSAL_STATUS}"
# Write preflight result for dev-poll.sh
printf '%s' "$REFUSAL_JSON" > "$PREFLIGHT_RESULT"
# Unclaim issue (restore backlog label, remove in-progress)
cleanup_labels
curl -sf -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${API}/issues/${ISSUE}/labels" \
-d "{\"labels\":[${BACKLOG_LABEL_ID}]}" >/dev/null 2>&1 || true
case "$REFUSAL_STATUS" in
unmet_dependency)
BLOCKED_BY_MSG=$(printf '%s' "$REFUSAL_JSON" | jq -r '.blocked_by // "unknown"')
SUGGESTION=$(printf '%s' "$REFUSAL_JSON" | jq -r '.suggestion // empty')
COMMENT_BODY="### Blocked by unmet dependency
${BLOCKED_BY_MSG}"
if [ -n "$SUGGESTION" ] && [ "$SUGGESTION" != "null" ]; then
COMMENT_BODY="${COMMENT_BODY}
**Suggestion:** Work on #${SUGGESTION} first."
fi
post_refusal_comment "🚧" "Unmet dependency" "$COMMENT_BODY"
;;
too_large)
REASON=$(printf '%s' "$REFUSAL_JSON" | jq -r '.reason // "unspecified"')
post_refusal_comment "📏" "Too large for single session" "### Why this can't be implemented as-is
${REASON}
### Next steps
A maintainer should split this issue or add more detail to the spec."
curl -sf -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${API}/issues/${ISSUE}/labels" \
-d "{\"labels\":[${UNDERSPECIFIED_LABEL_ID}]}" >/dev/null 2>&1 || true
curl -sf -X DELETE \
-H "Authorization: token ${FORGE_TOKEN}" \
"${API}/issues/${ISSUE}/labels/${BACKLOG_LABEL_ID}" >/dev/null 2>&1 || true
;;
already_done)
REASON=$(printf '%s' "$REFUSAL_JSON" | jq -r '.reason // "unspecified"')
post_refusal_comment "✅" "Already implemented" "### Existing implementation
${REASON}
Closing as already implemented."
curl -sf -X PATCH \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${API}/issues/${ISSUE}" \
-d '{"state":"closed"}' >/dev/null 2>&1 || true
;;
*)
post_refusal_comment "❓" "Unable to proceed" "The dev-agent could not process this issue.
Raw response:
\`\`\`json
$(printf '%s' "$REFUSAL_JSON" | head -c 2000)
\`\`\`"
;;
esac
CLAIMED=false # Don't unclaim again in cleanup()
agent_kill_session "$SESSION_NAME"
cleanup_worktree
rm -f "$PHASE_FILE" "$IMPL_SUMMARY_FILE" "${SCRATCH_FILE:-}" \
"/tmp/ci-result-${PROJECT_NAME}-${ISSUE}.txt"
[ -n "${PR_NUMBER:-}" ] && rm -f "/tmp/review-injected-${PROJECT_NAME}-${PR_NUMBER}"
return 1
else
# Genuine unrecoverable failure — label blocked with diagnostic
log "session failed: ${FAILURE_REASON}"
post_blocked_diagnostic "$FAILURE_REASON"
agent_kill_session "$SESSION_NAME"
if [ -n "${PR_NUMBER:-}" ]; then
log "keeping worktree (PR #${PR_NUMBER} still open)"
else
cleanup_worktree
fi
rm -f "$PHASE_FILE" "$IMPL_SUMMARY_FILE" "${SCRATCH_FILE:-}" \
"/tmp/ci-result-${PROJECT_NAME}-${ISSUE}.txt"
[ -n "${PR_NUMBER:-}" ] && rm -f "/tmp/review-injected-${PROJECT_NAME}-${PR_NUMBER}"
return 1
fi
# ── PHASE: crashed ──────────────────────────────────────────────────────────
# Session died unexpectedly (OOM kill, tmux crash, etc.). Label blocked with
# diagnostic comment so humans can triage directly on the issue.
elif [ "$phase" = "PHASE:crashed" ]; then
log "session crashed for issue #${ISSUE}"
post_blocked_diagnostic "crashed"
log "PRESERVED crashed worktree for debugging: $WORKTREE"
rm -f "$PHASE_FILE" "$IMPL_SUMMARY_FILE" "${SCRATCH_FILE:-}" \
"/tmp/ci-result-${PROJECT_NAME}-${ISSUE}.txt"
[ -n "${PR_NUMBER:-}" ] && rm -f "/tmp/review-injected-${PROJECT_NAME}-${PR_NUMBER}"
else
log "WARNING: unknown phase value: ${phase}"
fi
}

View file

@@ -8,13 +8,8 @@
set -euo pipefail
# Inline read_phase() function (previously from lib/agent-session.sh)
# Read the current phase from a phase file, stripped of whitespace.
# Usage: read_phase [file] — defaults to $PHASE_FILE
read_phase() {
local file="${1:-${PHASE_FILE:-}}"
{ cat "$file" 2>/dev/null || true; } | head -1 | tr -d '[:space:]'
}
# Source canonical read_phase() from shared library
source "$(dirname "$0")/../lib/agent-session.sh"
PROJECT="testproject"
ISSUE="999"
@@ -89,7 +84,7 @@ else
fail "PHASE:failed format: first='$first_line' second='$second_line'"
fi
# ── Test 5: orchestrator read function (inline read_phase)
# ── Test 5: orchestrator read function (canonical read_phase from lib/agent-session.sh)
echo "PHASE:awaiting_ci" > "$PHASE_FILE"
phase=$(read_phase "$PHASE_FILE")
if [ "$phase" = "PHASE:awaiting_ci" ]; then

View file

@@ -1,27 +0,0 @@
---
name: disinto-factory
description: Set up and operate a disinto autonomous code factory.
---
# Disinto Factory
You are helping the user set up and operate a **disinto autonomous code factory**.
## Guides
- **[Setup guide](setup.md)** — First-time factory setup: environment, init, verification, backlog seeding
- **[Operations guide](operations.md)** — Day-to-day: status checks, CI debugging, unsticking issues, Forgejo access
## Important context
- Read `AGENTS.md` for per-agent architecture and file-level docs
- Read `VISION.md` for project philosophy
- The factory uses a single internal Forgejo as its forge, regardless of where mirrors go
- Dev-agent uses `claude -p` for one-shot implementation sessions
- Mirror pushes happen automatically after every merge
- Polling loop in `docker/agents/entrypoint.sh`: dev-poll/review-poll every 5m, gardener/architect every 6h, planner every 12h, predictor every 24h
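The polling cadence above can be sketched with a small stamp-file scheduler. This is a hedged illustration, not the actual `docker/agents/entrypoint.sh` — the `run_if_due` helper and the `/tmp/last-run-*` stamp paths are assumptions:

```bash
# Hypothetical scheduling helper — runs a command only if its interval elapsed.
run_if_due() {  # run_if_due <name> <interval_seconds> <command...>
  local name="$1" interval="$2"; shift 2
  local stamp="/tmp/last-run-${name}" now last
  now=$(date +%s)
  last=$(cat "$stamp" 2>/dev/null || echo 0)
  if [ $(( now - last )) -ge "$interval" ]; then
    "$@"
    echo "$now" > "$stamp"
  fi
}

rm -f /tmp/last-run-demo
run_if_due demo 300 echo "ran"   # first call: interval elapsed, command runs
run_if_due demo 300 echo "ran"   # within 5m of the stamp: skipped
```

An outer `while true; do …; sleep 60; done` loop would then call `run_if_due` once per agent with the intervals listed above (300s for dev-poll/review-poll, 21600s for gardener/architect, and so on).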
## References
- [Troubleshooting](references/troubleshooting.md)
- [Factory status script](scripts/factory-status.sh)

View file

@@ -1,54 +0,0 @@
# Ongoing operations
### Check factory status
```bash
source .env
# Issues
curl -sf "http://localhost:3000/api/v1/repos/<org>/<repo>/issues?state=open" \
-H "Authorization: token $FORGE_TOKEN" \
| jq -r '.[] | "#\(.number) [\(.labels | map(.name) | join(","))] \(.title)"'
# PRs
curl -sf "http://localhost:3000/api/v1/repos/<org>/<repo>/pulls?state=open" \
-H "Authorization: token $FORGE_TOKEN" \
| jq -r '.[] | "PR #\(.number) [\(.head.ref)] \(.title)"'
# Agent logs
docker exec disinto-agents-1 tail -20 /home/agent/data/logs/dev/dev-agent.log
```
### Check CI
```bash
source .env
WP_CSRF=$(curl -sf -b "user_sess=$WOODPECKER_TOKEN" http://localhost:8000/web-config.js \
| sed -n 's/.*WOODPECKER_CSRF = "\([^"]*\)".*/\1/p')
curl -sf -b "user_sess=$WOODPECKER_TOKEN" -H "X-CSRF-Token: $WP_CSRF" \
"http://localhost:8000/api/repos/1/pipelines?page=1&per_page=5" \
| jq '.[] | {number, status, event}'
```
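To drill into a failing pipeline, the same session cookie works against the pipeline-detail endpoint. The jq filter below matches the workflow shape used by the dev-agent's CI handling; the pipeline number and the sample payload are illustrative so the filter can be tried offline:

```bash
# Live query (replace 5 with the pipeline number from the list above):
#   curl -sf -b "user_sess=$WOODPECKER_TOKEN" -H "X-CSRF-Token: $WP_CSRF" \
#     "http://localhost:8000/api/repos/1/pipelines/5" > /tmp/pipeline.json
# Offline sample payload with one passing and one failing step:
cat > /tmp/pipeline.json <<'EOF'
{"workflows":[{"children":[
  {"name":"lint","state":"success","exit_code":0},
  {"name":"test","state":"failure","exit_code":1}]}]}
EOF
# List failed steps with their exit codes:
jq -r '.workflows[]?.children[]? | select(.state=="failure") | "\(.name): exit \(.exit_code)"' \
  /tmp/pipeline.json
# → test: exit 1
```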
### Unstick a blocked issue
When a dev-agent run fails (CI timeout, implementation error), the issue gets labeled `blocked`:
1. Close stale PR and delete the branch
2. `docker exec disinto-agents-1 rm -f /tmp/dev-agent-*.json /tmp/dev-agent-*.lock`
3. Relabel the issue to `backlog`
4. Update agent repo: `docker exec -u agent disinto-agents-1 bash -c "cd /home/agent/repos/<name> && git fetch origin && git reset --hard origin/main"`
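The four steps above can be sketched as one script. This is a hedged sketch, not a tested runbook: the endpoint paths assume a Gitea-compatible API, the issue/PR/branch values are placeholders, and `<org>/<repo>`/`<name>` must be filled in for your project:

```bash
source .env
API="http://localhost:3000/api/v1/repos/<org>/<repo>"
ISSUE=34; PR=35; BRANCH="fix/issue-34"     # placeholders — substitute real numbers
# 1. Close the stale PR and delete its branch
curl -sf -X PATCH -H "Authorization: token $FORGE_TOKEN" \
  -H "Content-Type: application/json" \
  "$API/pulls/$PR" -d '{"state":"closed"}' >/dev/null
git push origin --delete "$BRANCH" || true
# 2. Clear dev-agent state
docker exec disinto-agents-1 rm -f /tmp/dev-agent-*.json /tmp/dev-agent-*.lock
# 3. Relabel to backlog (the labels API wants numeric IDs, not names)
BACKLOG_ID=$(curl -sf -H "Authorization: token $FORGE_TOKEN" "$API/labels" \
  | jq -r '.[] | select(.name == "backlog") | .id')
curl -sf -X POST -H "Authorization: token $FORGE_TOKEN" \
  -H "Content-Type: application/json" \
  "$API/issues/$ISSUE/labels" -d "{\"labels\":[$BACKLOG_ID]}" >/dev/null
# 4. Hard-reset the agent's working copy
docker exec -u agent disinto-agents-1 bash -c \
  "cd /home/agent/repos/<name> && git fetch origin && git reset --hard origin/main"
```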
### Access Forgejo UI
If running in an LXD container with reverse tunnel:
```bash
# From your machine:
ssh -L 3000:localhost:13000 user@jump-host
# Open http://localhost:3000
```
Reset admin password if needed:
```bash
docker exec disinto-forgejo-1 su -c "forgejo admin user change-password --username disinto-admin --password <new-pw> --must-change-password=false" git
```

View file

@@ -1,53 +0,0 @@
# Troubleshooting
## WOODPECKER_TOKEN empty after init
The OAuth2 flow failed. Common causes:
1. **URL-encoded redirect_uri mismatch**: Forgejo logs show "Unregistered Redirect URI".
The init script must rewrite both plain and URL-encoded Docker hostnames.
2. **Forgejo must_change_password**: Admin user was created with forced password change.
The init script calls `--must-change-password=false` but Forgejo 11.x sometimes ignores it.
3. **WOODPECKER_OPEN not set**: WP refuses first-user OAuth registration without it.
Manual fix: reset the admin password and re-run the token generation step, or
create a token via the Woodpecker UI.
## WP CI agent won't connect (DeadlineExceeded)
gRPC over Docker bridge fails in LXD (and possibly other nested container environments).
The compose template uses `network_mode: host` + `privileged: true` for the agent.
If you see this error, check:
- Server exposes port 9000: `grep "9000:9000" docker-compose.yml`
- Agent uses `localhost:9000`: `grep "WOODPECKER_SERVER" docker-compose.yml`
- Agent has `network_mode: host`
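The three checks can be run in one pass. The snippet writes a minimal compose fragment so the greps can be tried anywhere; point them at your real `docker-compose.yml` in practice (the service names here are assumed):

```bash
# Minimal fragment with the LXD workaround settings in place:
cat > /tmp/compose-check.yml <<'EOF'
services:
  woodpecker-server:
    ports: ["9000:9000"]
  woodpecker-agent:
    network_mode: host
    environment:
      WOODPECKER_SERVER: localhost:9000
EOF
# Verify each required setting is present:
for pat in '9000:9000' 'WOODPECKER_SERVER: localhost:9000' 'network_mode: host'; do
  grep -q "$pat" /tmp/compose-check.yml && echo "ok: $pat" || echo "MISSING: $pat"
done
```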
## CI clone fails (could not resolve host)
CI containers need to resolve Docker service names (e.g., `forgejo`).
Check `WOODPECKER_BACKEND_DOCKER_NETWORK` is set on the agent.
## Webhooks not delivered
Forgejo blocks outgoing webhooks by default. Check:
```bash
docker logs disinto-forgejo-1 2>&1 | grep "webhook.*ALLOWED_HOST_LIST"
```
Fix: add `FORGEJO__webhook__ALLOWED_HOST_LIST: "private"` to Forgejo environment.
Also verify the webhook exists:
```bash
curl -sf -u "disinto-admin:<password>" "http://localhost:3000/api/v1/repos/<org>/<repo>/hooks" | jq '.[].config.url'
```
If missing, deactivate and reactivate the repo in Woodpecker to auto-create it.
## Dev-agent fails with "cd: no such file or directory"
`PROJECT_REPO_ROOT` inside the agents container points to a host path that doesn't
exist in the container. Check the compose env:
```bash
docker inspect disinto-agents-1 --format '{{range .Config.Env}}{{println .}}{{end}}' | grep PROJECT_REPO_ROOT
```
Should be `/home/agent/repos/<name>`, not `/home/<user>/<name>`.


@@ -1,44 +0,0 @@
#!/usr/bin/env bash
# factory-status.sh — Quick status check for a running disinto factory
set -euo pipefail
FACTORY_ROOT="${1:-$(cd "$(dirname "$0")/../.." && pwd)}"
source "${FACTORY_ROOT}/.env" 2>/dev/null || { echo "No .env found at ${FACTORY_ROOT}"; exit 1; }
FORGE_URL="${FORGE_URL:-http://localhost:3000}"
REPO=$(grep '^repo ' "${FACTORY_ROOT}/projects/"*.toml 2>/dev/null | head -1 | sed 's/.*= *"//;s/"//')
[ -z "$REPO" ] && { echo "No project TOML found"; exit 1; }
echo "=== Stack ==="
docker ps --format "table {{.Names}}\t{{.Status}}" 2>/dev/null | grep disinto || echo "(no disinto containers running)"
echo ""
echo "=== Open Issues ==="
curl -sf "${FORGE_URL}/api/v1/repos/${REPO}/issues?state=open&limit=20" \
-H "Authorization: token ${FORGE_TOKEN}" \
| jq -r '.[] | "#\(.number) [\(.labels | map(.name) | join(","))] \(.title)"' 2>/dev/null || echo "(API error)"
echo ""
echo "=== Open PRs ==="
curl -sf "${FORGE_URL}/api/v1/repos/${REPO}/pulls?state=open&limit=10" \
-H "Authorization: token ${FORGE_TOKEN}" \
| jq -r '.[] | "PR #\(.number) [\(.head.ref)] \(.title)"' 2>/dev/null || echo "none"
echo ""
echo "=== Agent Activity ==="
docker exec disinto-agents-1 bash -c "tail -5 /home/agent/data/logs/dev/dev-agent.log 2>/dev/null" || echo "(no logs)"
echo ""
echo "=== Claude Running? ==="
docker exec disinto-agents-1 bash -c "
found=false
for f in /proc/[0-9]*/cmdline; do
cmd=\$(tr '\0' ' ' < \"\$f\" 2>/dev/null)
if echo \"\$cmd\" | grep -q 'claude.*-p'; then found=true; echo 'Yes — Claude is actively working'; break; fi
done
\$found || echo 'No — idle'
" 2>/dev/null
echo ""
echo "=== Mirrors ==="
cd "${FACTORY_ROOT}" 2>/dev/null && git remote -v | grep -E 'github|codeberg' | grep push || echo "none configured"


@@ -1,191 +0,0 @@
# First-time setup
Walk the user through these steps interactively. Ask questions where marked with [ASK].
### 1. Environment
[ASK] Where will the factory run? Options:
- **LXD container** (recommended for isolation) — need Debian 12, Docker, nesting enabled
- **Bare VM or server** — need Debian/Ubuntu with Docker
- **Existing container** — check prerequisites
Verify prerequisites:
```bash
docker --version && git --version && jq --version && curl --version && tmux -V && python3 --version && claude --version
```
If any tool is missing, help the user install it before continuing.
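A single pass over the tool list is easier to act on than seven version strings; this helper is a sketch that mirrors the check above:

```shell
# Sketch: report missing prerequisites in one pass.
# The tool list mirrors the verification command above.
check_prereqs() {
  local missing=0 tool
  for tool in docker git jq curl tmux python3 claude; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "all prerequisites present"
  return "$missing"
}
```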
### 2. Clone disinto and choose a target project
Clone the disinto factory itself:
```bash
git clone https://codeberg.org/johba/disinto.git && cd disinto
```
[ASK] What repository should the factory develop? Provide the **remote repository URL** in one of these formats:
- Full URL: `https://github.com/johba/harb.git` or `https://codeberg.org/johba/harb.git`
- Short slug: `johba/harb` (uses local Forgejo as the primary remote)
The factory will clone from the remote URL (if provided) or from your local Forgejo, then mirror to the remote.
Then initialize the factory for that project:
```bash
bin/disinto init johba/harb --yes
# or with full URL:
bin/disinto init https://github.com/johba/harb.git --yes
```
The `init` command will:
- Create all bot users (dev-bot, review-bot, etc.) on the local Forgejo
- Generate and save `WOODPECKER_TOKEN`
- Start the stack containers
- Clone the target repo into the agent workspace
> **Note:** The `--repo-root` flag is optional and only needed if you want to customize
> where the cloned repo lives. By default, it goes under `/home/agent/repos/<name>`.
### 3. Post-init verification
Run this checklist — fix any failures before proceeding:
```bash
# Stack healthy?
docker ps --format "table {{.Names}}\t{{.Status}}"
# Expected: forgejo, woodpecker (healthy), woodpecker-agent (healthy), agents, edge, staging
# Token generated?
grep -q '^WOODPECKER_TOKEN=.' .env && echo "OK" || echo "MISSING — see references/troubleshooting.md"
# Agent entrypoint loop running?
docker exec disinto-agents-1 tail -5 /home/agent/data/agent-entrypoint.log
# Agent can reach Forgejo?
docker exec disinto-agents-1 bash -c "source /home/agent/disinto/.env && curl -sf http://forgejo:3000/api/v1/version | jq .version"
# Agent repo cloned?
docker exec -u agent disinto-agents-1 ls /home/agent/repos/
```
If the agent repo is missing, clone it:
```bash
docker exec disinto-agents-1 chown -R agent:agent /home/agent/repos
docker exec -u agent disinto-agents-1 bash -c "source /home/agent/disinto/.env && git clone http://dev-bot:\${FORGE_TOKEN}@forgejo:3000/<org>/<repo>.git /home/agent/repos/<name>"
```
### 4. Create the project configuration file
The factory uses a TOML file to configure how it manages your project. Create
`projects/<name>.toml` based on the template format:
```toml
# projects/harb.toml
name = "harb"
repo = "johba/harb"
forge_url = "http://localhost:3000"
repo_root = "/home/agent/repos/harb"
primary_branch = "master"
[ci]
woodpecker_repo_id = 0
stale_minutes = 60
[services]
containers = ["ponder"]
[monitoring]
check_prs = true
check_dev_agent = true
check_pipeline_stall = true
# [mirrors]
# github = "git@github.com:johba/harb.git"
# codeberg = "git@codeberg.org:johba/harb.git"
```
**Key fields:**
- `name`: Project identifier (used for file names, logs, etc.)
- `repo`: The source repo in `owner/name` format
- `forge_url`: URL of your local Forgejo instance
- `repo_root`: Where the agent clones the repo
- `primary_branch`: Default branch name (e.g., `main` or `master`)
- `woodpecker_repo_id`: Set to `0` initially; auto-populated on first CI run
- `containers`: List of Docker containers the factory should manage
- `mirrors`: Optional external forge URLs for backup/sync
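Before starting the stack, the TOML can be sanity-checked with Python's stdlib `tomllib` (Python 3.11+), the same parser the agent entrypoint uses. The required-key list below is an assumption based on the fields documented above:

```shell
# Sketch: validate a project TOML with python3's stdlib tomllib.
# The required-key list is an assumption drawn from the field docs above.
validate_project_toml() {
  python3 - "$1" <<'PY'
import sys, tomllib
with open(sys.argv[1], 'rb') as f:
    cfg = tomllib.load(f)
missing = [k for k in ('name', 'repo', 'repo_root', 'primary_branch') if k not in cfg]
if missing:
    print('missing keys:', ', '.join(missing))
    sys.exit(1)
print('ok:', cfg['name'])
PY
}
```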
### 5. Mirrors (optional)
[ASK] Should the factory mirror to external forges? If yes, which?
- GitHub: need repo URL and SSH key added to GitHub account
- Codeberg: need repo URL and SSH key added to Codeberg account
Show the user their public key:
```bash
cat ~/.ssh/id_ed25519.pub
```
Test SSH access:
```bash
ssh -T git@github.com 2>&1; ssh -T git@codeberg.org 2>&1
```
If SSH host keys are missing: `ssh-keyscan github.com codeberg.org >> ~/.ssh/known_hosts 2>/dev/null`
Edit `projects/<name>.toml` to uncomment and configure mirrors:
```toml
[mirrors]
github = "git@github.com:Org/repo.git"
codeberg = "git@codeberg.org:user/repo.git"
```
Test with a manual push:
```bash
source .env && source lib/env.sh && export PROJECT_TOML=projects/<name>.toml && source lib/load-project.sh && source lib/mirrors.sh && mirror_push
```
### 6. Seed the backlog
[ASK] What should the factory work on first? Brainstorm with the user.
Help them create issues on the local Forgejo. Each issue needs:
- A clear title prefixed with `fix:`, `feat:`, or `chore:`
- A body describing what to change, which files, and any constraints
- The `backlog` label (so the dev-agent picks it up)
```bash
source .env
BACKLOG_ID=$(curl -sf "http://localhost:3000/api/v1/repos/<org>/<repo>/labels" \
-H "Authorization: token $FORGE_TOKEN" | jq -r '.[] | select(.name=="backlog") | .id')
curl -sf -X POST "http://localhost:3000/api/v1/repos/<org>/<repo>/issues" \
-H "Authorization: token $FORGE_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"title\": \"<title>\", \"body\": \"<body>\", \"labels\": [$BACKLOG_ID]}"
```
For issues with dependencies, add `Depends-on: #N` in the body — the dev-agent checks
these before starting.
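The `Depends-on: #N` convention can be parsed with a one-liner; this sketch reads an issue body on stdin and prints one dependency number per line:

```shell
# Sketch: extract dependency issue numbers from an issue body that uses
# the Depends-on: #N convention. Reads stdin, prints one number per line.
parse_depends() {
  grep -oE 'Depends-on: #[0-9]+' | grep -oE '[0-9]+' || true
}
```

Usage: `printf 'Do X.\nDepends-on: #12\n' | parse_depends` prints `12`.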
Use labels:
- `backlog` — ready for the dev-agent
- `blocked` — parked, not for the factory
- No label — tracked but not for autonomous work
### 7. Watch it work
The dev-agent runs every 5 minutes via the entrypoint polling loop. Trigger it manually to see activity immediately:
```bash
source .env
export PROJECT_TOML=projects/<name>.toml
docker exec -u agent disinto-agents-1 bash -c "cd /home/agent/disinto && bash dev/dev-poll.sh projects/<name>.toml"
```
Then monitor:
```bash
# Watch the agent work
docker exec disinto-agents-1 tail -f /home/agent/data/logs/dev/dev-agent.log
# Check for Claude running
docker exec disinto-agents-1 bash -c "for f in /proc/[0-9]*/cmdline; do cmd=\$(tr '\0' ' ' < \$f 2>/dev/null); echo \$cmd | grep -q 'claude.*-p' && echo 'Claude is running'; done"
```


@@ -1,217 +0,0 @@
version: "3.8"
services:
agents:
build:
context: .
dockerfile: docker/agents/Dockerfile
image: disinto/agents:latest
container_name: disinto-agents
restart: unless-stopped
security_opt:
- apparmor=unconfined
volumes:
- agent-data:/home/agent/data
- project-repos:/home/agent/repos
- ${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}:${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}
- ${CLAUDE_CONFIG_FILE:-${HOME}/.claude.json}:/home/agent/.claude.json:ro
- ${CLAUDE_BIN_DIR}:/usr/local/bin/claude:ro
- ${AGENT_SSH_DIR:-${HOME}/.ssh}:/home/agent/.ssh:ro
- ${SOPS_AGE_DIR:-${HOME}/.config/sops/age}:/home/agent/.config/sops/age:ro
- woodpecker-data:/woodpecker-data:ro
environment:
- FORGE_URL=http://forgejo:3000
- FORGE_REPO=${FORGE_REPO:-disinto-admin/disinto}
- FORGE_TOKEN=${FORGE_TOKEN:-}
- FORGE_REVIEW_TOKEN=${FORGE_REVIEW_TOKEN:-}
- FORGE_PLANNER_TOKEN=${FORGE_PLANNER_TOKEN:-}
- FORGE_GARDENER_TOKEN=${FORGE_GARDENER_TOKEN:-}
- FORGE_VAULT_TOKEN=${FORGE_VAULT_TOKEN:-}
- FORGE_SUPERVISOR_TOKEN=${FORGE_SUPERVISOR_TOKEN:-}
- FORGE_PREDICTOR_TOKEN=${FORGE_PREDICTOR_TOKEN:-}
- FORGE_ARCHITECT_TOKEN=${FORGE_ARCHITECT_TOKEN:-}
- FORGE_FILER_TOKEN=${FORGE_FILER_TOKEN:-}
- FORGE_BOT_USERNAMES=${FORGE_BOT_USERNAMES:-}
- WOODPECKER_TOKEN=${WOODPECKER_TOKEN:-}
- CLAUDE_TIMEOUT=${CLAUDE_TIMEOUT:-7200}
- CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=${CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC:-1}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- FORGE_PASS=${FORGE_PASS:-}
- FORGE_ADMIN_PASS=${FORGE_ADMIN_PASS:-}
- FACTORY_REPO=${FORGE_REPO:-disinto-admin/disinto}
- DISINTO_CONTAINER=1
- PROJECT_NAME=${PROJECT_NAME:-project}
- PROJECT_REPO_ROOT=/home/agent/repos/${PROJECT_NAME:-project}
- WOODPECKER_DATA_DIR=/woodpecker-data
- WOODPECKER_REPO_ID=${WOODPECKER_REPO_ID:-}
- CLAUDE_CONFIG_DIR=${CLAUDE_CONFIG_DIR:-/var/lib/disinto/claude-shared/config}
- POLL_INTERVAL=${POLL_INTERVAL:-300}
- GARDENER_INTERVAL=${GARDENER_INTERVAL:-21600}
- ARCHITECT_INTERVAL=${ARCHITECT_INTERVAL:-21600}
- PLANNER_INTERVAL=${PLANNER_INTERVAL:-43200}
healthcheck:
test: ["CMD", "pgrep", "-f", "entrypoint.sh"]
interval: 60s
timeout: 5s
retries: 3
start_period: 30s
depends_on:
forgejo:
condition: service_healthy
woodpecker:
condition: service_started
networks:
- disinto-net
agents-llama:
build:
context: .
dockerfile: docker/agents/Dockerfile
image: disinto/agents-llama:latest
container_name: disinto-agents-llama
restart: unless-stopped
security_opt:
- apparmor=unconfined
volumes:
- agent-data:/home/agent/data
- project-repos:/home/agent/repos
- ${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}:${CLAUDE_SHARED_DIR:-/var/lib/disinto/claude-shared}
- ${CLAUDE_CONFIG_FILE:-${HOME}/.claude.json}:/home/agent/.claude.json:ro
- ${CLAUDE_BIN_DIR}:/usr/local/bin/claude:ro
- ${AGENT_SSH_DIR:-${HOME}/.ssh}:/home/agent/.ssh:ro
- ${SOPS_AGE_DIR:-${HOME}/.config/sops/age}:/home/agent/.config/sops/age:ro
- woodpecker-data:/woodpecker-data:ro
environment:
- FORGE_URL=http://forgejo:3000
- FORGE_REPO=${FORGE_REPO:-disinto-admin/disinto}
- FORGE_TOKEN=${FORGE_TOKEN_LLAMA:-}
- FORGE_PASS=${FORGE_PASS_LLAMA:-}
- FORGE_SUPERVISOR_TOKEN=${FORGE_SUPERVISOR_TOKEN:-}
- FORGE_PREDICTOR_TOKEN=${FORGE_PREDICTOR_TOKEN:-}
- FORGE_ARCHITECT_TOKEN=${FORGE_ARCHITECT_TOKEN:-}
- FORGE_VAULT_TOKEN=${FORGE_VAULT_TOKEN:-}
- FORGE_PLANNER_TOKEN=${FORGE_PLANNER_TOKEN:-}
- FORGE_BOT_USERNAMES=${FORGE_BOT_USERNAMES:-}
- WOODPECKER_TOKEN=${WOODPECKER_TOKEN:-}
- CLAUDE_TIMEOUT=${CLAUDE_TIMEOUT:-7200}
- CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=${CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC:-1}
- CLAUDE_AUTOCOMPACT_PCT_OVERRIDE=60
- CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- ANTHROPIC_BASE_URL=${ANTHROPIC_BASE_URL:-}
- FORGE_ADMIN_PASS=${FORGE_ADMIN_PASS:-}
- DISINTO_CONTAINER=1
- PROJECT_TOML=projects/disinto.toml
- PROJECT_NAME=${PROJECT_NAME:-project}
- PROJECT_REPO_ROOT=/home/agent/repos/${PROJECT_NAME:-project}
- WOODPECKER_DATA_DIR=/woodpecker-data
- WOODPECKER_REPO_ID=${WOODPECKER_REPO_ID:-}
- CLAUDE_CONFIG_DIR=${CLAUDE_CONFIG_DIR:-/var/lib/disinto/claude-shared/config}
- POLL_INTERVAL=${POLL_INTERVAL:-300}
- AGENT_ROLES=dev
healthcheck:
test: ["CMD", "pgrep", "-f", "entrypoint.sh"]
interval: 60s
timeout: 5s
retries: 3
start_period: 30s
depends_on:
forgejo:
condition: service_healthy
woodpecker:
condition: service_started
networks:
- disinto-net
reproduce:
build:
context: .
dockerfile: docker/reproduce/Dockerfile
image: disinto-reproduce:latest
network_mode: host
profiles: ["reproduce"]
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- agent-data:/home/agent/data
- project-repos:/home/agent/repos
- ${CLAUDE_DIR:-${HOME}/.claude}:/home/agent/.claude
- ${CLAUDE_BIN_DIR:-/usr/local/bin/claude}:/usr/local/bin/claude:ro
- ${AGENT_SSH_DIR:-${HOME}/.ssh}:/home/agent/.ssh:ro
env_file:
- .env
edge:
build:
context: docker/edge
dockerfile: Dockerfile
image: disinto/edge:latest
container_name: disinto-edge
security_opt:
- apparmor=unconfined
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ${CLAUDE_BIN_DIR:-/usr/local/bin/claude}:/usr/local/bin/claude:ro
- ${CLAUDE_CONFIG_FILE:-${HOME}/.claude.json}:/root/.claude.json:ro
- ${CLAUDE_DIR:-${HOME}/.claude}:/root/.claude:ro
- disinto-logs:/opt/disinto-logs
environment:
- FORGE_SUPERVISOR_TOKEN=${FORGE_SUPERVISOR_TOKEN:-}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- CLAUDE_MODEL=claude-sonnet-4-6
- FORGE_TOKEN=${FORGE_TOKEN:-}
- FORGE_URL=http://forgejo:3000
- FORGE_REPO=disinto-admin/disinto
- FORGE_OPS_REPO=disinto-admin/disinto-ops
- PRIMARY_BRANCH=main
- DISINTO_CONTAINER=1
- FORGE_ADMIN_USERS=disinto-admin,vault-bot,admin
ports:
- "80:80"
- "443:443"
healthcheck:
test: ["CMD", "curl", "-fsS", "http://localhost:2019/config/"]
interval: 30s
timeout: 5s
retries: 3
start_period: 15s
depends_on:
- forgejo
networks:
- disinto-net
forgejo:
image: codeberg.org/forgejo/forgejo:11.0
container_name: disinto-forgejo
restart: unless-stopped
security_opt:
- apparmor=unconfined
volumes:
- forgejo-data:/data
environment:
- FORGEJO__database__DB_TYPE=sqlite3
- FORGEJO__server__ROOT_URL=http://forgejo:3000/
- FORGEJO__server__HTTP_PORT=3000
- FORGEJO__security__INSTALL_LOCK=true
- FORGEJO__service__DISABLE_REGISTRATION=true
- FORGEJO__webhook__ALLOWED_HOST_LIST=private
healthcheck:
test: ["CMD", "curl", "-sf", "http://localhost:3000/api/v1/version"]
interval: 5s
timeout: 3s
retries: 30
start_period: 30s
ports:
- "3000:3000"
networks:
- disinto-net
volumes:
disinto-logs:
agent-data:
project-repos:
woodpecker-data:
forgejo-data:
networks:
disinto-net:
driver: bridge


@@ -1,18 +1,14 @@
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
bash curl git jq tmux python3 python3-pip openssh-client ca-certificates age shellcheck procps gosu \
&& pip3 install --break-system-packages networkx \
bash curl git jq tmux cron python3 openssh-client ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Pre-built binaries (copied from docker/agents/bin/)
# SOPS — encrypted data decryption tool
COPY docker/agents/bin/sops /usr/local/bin/sops
RUN chmod +x /usr/local/bin/sops
# tea CLI — official Gitea/Forgejo CLI for issue/label/comment operations
COPY docker/agents/bin/tea /usr/local/bin/tea
RUN chmod +x /usr/local/bin/tea
# Checksum from https://dl.gitea.com/tea/0.9.2/tea-0.9.2-linux-amd64.sha256
RUN curl -sL https://dl.gitea.com/tea/0.9.2/tea-0.9.2-linux-amd64 -o /usr/local/bin/tea \
&& echo "be10cdf9a619e3c0f121df874960ed19b53e62d1c7036cf60313a28b5227d54d /usr/local/bin/tea" | sha256sum -c - \
&& chmod +x /usr/local/bin/tea
# Claude CLI is mounted from the host via docker-compose volume.
# No internet access to cli.anthropic.com required at build time.
@@ -20,17 +16,11 @@ RUN chmod +x /usr/local/bin/tea
# Non-root user
RUN useradd -m -u 1000 -s /bin/bash agent
# Copy disinto code into the image
COPY . /home/agent/disinto
COPY docker/agents/entrypoint.sh /entrypoint.sh
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Entrypoint runs polling loop directly, dropping to agent user via gosu.
# All scripts execute as the agent user (UID 1000) while preserving env vars.
VOLUME /home/agent/data
VOLUME /home/agent/repos
WORKDIR /home/agent/disinto
# Entrypoint runs as root to start the cron daemon;
# cron jobs execute as the agent user (crontab -u agent).
WORKDIR /home/agent
ENTRYPOINT ["/entrypoint.sh"]


@@ -1,122 +1,50 @@
#!/usr/bin/env bash
set -euo pipefail
# entrypoint.sh — Start agent container with polling loop
# entrypoint.sh — Start agent container with cron in foreground
#
# Runs as root inside the container. Drops to agent user via gosu for all
# poll scripts. All Docker Compose env vars are inherited (PATH, FORGE_TOKEN,
# ANTHROPIC_API_KEY, etc.).
#
# AGENT_ROLES env var controls which scripts run: "review,dev,gardener,architect,planner,predictor"
# (default: all six). Uses while-true loop with staggered intervals:
# - review-poll: every 5 minutes (offset by 0s)
# - dev-poll: every 5 minutes (offset by 2 minutes)
# - gardener: every GARDENER_INTERVAL seconds (default: 21600 = 6 hours)
# - architect: every ARCHITECT_INTERVAL seconds (default: 21600 = 6 hours)
# - planner: every PLANNER_INTERVAL seconds (default: 43200 = 12 hours)
# - predictor: every 24 hours (288 iterations * 5 min)
# Runs as root inside the container. Installs crontab entries for the
# agent user from project TOMLs, then starts cron in the foreground.
# All cron jobs execute as the agent user (UID 1000).
DISINTO_BAKED="/home/agent/disinto"
DISINTO_LIVE="/home/agent/repos/_factory"
DISINTO_DIR="$DISINTO_BAKED" # start with baked copy; switched to live checkout after bootstrap
DISINTO_DIR="/home/agent/disinto"
LOGFILE="/home/agent/data/agent-entrypoint.log"
# Create all expected log subdirectories and set ownership as root before dropping to agent.
# This handles both fresh volumes and stale root-owned dirs from prior container runs.
mkdir -p /home/agent/data/logs/{dev,action,review,supervisor,vault,site,metrics,gardener,planner,predictor,architect,dispatcher}
chown -R agent:agent /home/agent/data
mkdir -p /home/agent/data
chown agent:agent /home/agent/data
log() {
printf '[%s] %s\n' "$(date -u '+%Y-%m-%d %H:%M:%S UTC')" "$*" | tee -a "$LOGFILE"
}
# Initialize state directory and files if they don't exist
init_state_dir() {
local state_dir="${DISINTO_DIR}/state"
mkdir -p "$state_dir"
# Create empty state files so check_active guards work
for agent in dev reviewer gardener architect planner predictor; do
touch "$state_dir/.${agent}-active" 2>/dev/null || true
# Build crontab from project TOMLs and install for the agent user.
install_project_crons() {
local cron_lines=""
for toml in "${DISINTO_DIR}"/projects/*.toml; do
[ -f "$toml" ] || continue
local pname
pname=$(python3 -c "
import sys, tomllib
with open(sys.argv[1], 'rb') as f:
print(tomllib.load(f)['name'])
" "$toml" 2>/dev/null) || continue
cron_lines="${cron_lines}
# disinto: ${pname}
2,7,12,17,22,27,32,37,42,47,52,57 * * * * ${DISINTO_DIR}/review/review-poll.sh ${toml} >/dev/null 2>&1
4,9,14,19,24,29,34,39,44,49,54,59 * * * * ${DISINTO_DIR}/dev/dev-poll.sh ${toml} >/dev/null 2>&1
0 0,6,12,18 * * * cd ${DISINTO_DIR} && bash gardener/gardener-run.sh ${toml} >/dev/null 2>&1"
done
chown -R agent:agent "$state_dir"
log "Initialized state directory"
}
# Source shared git credential helper library (#604).
# shellcheck source=lib/git-creds.sh
source "${DISINTO_BAKED}/lib/git-creds.sh"
# Wrapper that calls the shared configure_git_creds with agent-specific paths,
# then repairs any legacy baked-credential URLs in existing clones.
_setup_git_creds() {
_GIT_CREDS_LOG_FN=log configure_git_creds "/home/agent" "gosu agent"
if [ -n "${FORGE_PASS:-}" ] && [ -n "${FORGE_URL:-}" ]; then
log "Git credential helper configured (password auth)"
fi
# Repair legacy clones with baked-in stale credentials (#604).
_GIT_CREDS_LOG_FN=log repair_baked_cred_urls --as "gosu agent" /home/agent/repos
}
# Configure git author identity for commits made by this container.
# Derives identity from the resolved bot user (BOT_USER) to ensure commits
# are visibly attributable to the correct bot in the forge timeline.
# BOT_USER is normally set by configure_git_creds() (#741); this function
# only falls back to its own API call if BOT_USER was not already resolved.
configure_git_identity() {
# Resolve BOT_USER from FORGE_TOKEN if not already set (configure_git_creds
# exports BOT_USER on success, so this is a fallback for edge cases only).
if [ -z "${BOT_USER:-}" ] && [ -n "${FORGE_TOKEN:-}" ]; then
BOT_USER=$(curl -sf --max-time 10 \
-H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_URL:-http://localhost:3000}/api/v1/user" 2>/dev/null | jq -r '.login // empty') || true
fi
if [ -z "${BOT_USER:-}" ]; then
log "WARNING: Could not resolve bot username for git identity — commits will use fallback"
BOT_USER="agent"
fi
# Configure git identity for all repositories
gosu agent git config --global user.name "${BOT_USER}"
gosu agent git config --global user.email "${BOT_USER}@disinto.local"
log "Git identity configured: ${BOT_USER} <${BOT_USER}@disinto.local>"
}
# Configure tea CLI login for forge operations (runs as agent user).
# tea stores config in ~/.config/tea/ — persistent across container restarts
# only if that directory is on a mounted volume.
configure_tea_login() {
if command -v tea &>/dev/null && [ -n "${FORGE_TOKEN:-}" ] && [ -n "${FORGE_URL:-}" ]; then
local_tea_login="forgejo"
case "$FORGE_URL" in
*codeberg.org*) local_tea_login="codeberg" ;;
esac
gosu agent bash -c "tea login add \
--name '${local_tea_login}' \
--url '${FORGE_URL}' \
--token '${FORGE_TOKEN}' \
--no-version-check 2>/dev/null || true"
log "tea login configured: ${local_tea_login} -> ${FORGE_URL}"
if [ -n "$cron_lines" ]; then
printf '%s\n' "$cron_lines" | crontab -u agent -
log "Installed crontab for agent user"
else
log "tea login: skipped (tea not found or FORGE_TOKEN/FORGE_URL not set)"
log "No project TOMLs found — crontab empty"
fi
}
log "Agent container starting"
# Set USER and HOME for scripts that source lib/env.sh.
# These are preconditions required by lib/env.sh's surface contract.
# gosu agent inherits the parent's env, so exports here propagate to all children.
export USER=agent
export HOME=/home/agent
# Source lib/env.sh to get DISINTO_LOG_DIR and other shared environment.
# This must happen after USER/HOME are set (env.sh preconditions).
# shellcheck source=lib/env.sh
source "${DISINTO_BAKED}/lib/env.sh"
# Verify Claude CLI is available (expected via volume mount from host).
if ! command -v claude &>/dev/null; then
log "FATAL: claude CLI not found in PATH."
@@ -132,338 +60,33 @@ log "Claude CLI: $(claude --version 2>&1 || true)"
# auth method is active so operators can debug 401s.
if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
log "Auth: ANTHROPIC_API_KEY is set — using API key (no OAuth rotation)"
elif [ -f "${CLAUDE_CONFIG_DIR:-/home/agent/.claude}/.credentials.json" ]; then
log "Auth: OAuth credentials mounted from host (${CLAUDE_CONFIG_DIR:-~/.claude})"
elif [ -f /home/agent/.claude/credentials.json ]; then
log "Auth: OAuth credentials mounted from host (~/.claude)"
else
log "WARNING: No ANTHROPIC_API_KEY and no OAuth credentials found."
log "Run 'claude auth login' on the host, or set ANTHROPIC_API_KEY in .env"
fi
# Bootstrap ops repos for each project TOML (#586).
# In compose mode the ops repo lives on a Docker named volume at
# /home/agent/repos/<project>-ops. If init ran migrate_ops_repo on the host
# the container never saw those changes. This function clones from forgejo
# when the repo is missing, or configures the remote and pulls when it exists
# but has no remote (orphaned local-only checkout).
bootstrap_ops_repos() {
local repos_dir="/home/agent/repos"
mkdir -p "$repos_dir"
chown agent:agent "$repos_dir"
install_project_crons
for toml in "${DISINTO_DIR}"/projects/*.toml; do
[ -f "$toml" ] || continue
# Configure tea CLI login for forge operations (runs as agent user).
# tea stores config in ~/.config/tea/ — persistent across container restarts
# only if that directory is on a mounted volume.
if command -v tea &>/dev/null && [ -n "${FORGE_TOKEN:-}" ] && [ -n "${FORGE_URL:-}" ]; then
local_tea_login="forgejo"
case "$FORGE_URL" in
*codeberg.org*) local_tea_login="codeberg" ;;
esac
su -s /bin/bash agent -c "tea login add \
--name '${local_tea_login}' \
--url '${FORGE_URL}' \
--token '${FORGE_TOKEN}' \
--no-version-check 2>/dev/null || true"
log "tea login configured: ${local_tea_login} -> ${FORGE_URL}"
else
log "tea login: skipped (tea not found or FORGE_TOKEN/FORGE_URL not set)"
fi
# Extract project name, ops repo slug, repo slug, and primary branch from TOML
local project_name ops_slug primary_branch
local _toml_vals
_toml_vals=$(python3 -c "
import tomllib, sys
with open(sys.argv[1], 'rb') as f:
cfg = tomllib.load(f)
print(cfg.get('name', ''))
print(cfg.get('ops_repo', ''))
print(cfg.get('repo', ''))
print(cfg.get('primary_branch', 'main'))
" "$toml" 2>/dev/null || true)
project_name=$(sed -n '1p' <<< "$_toml_vals")
[ -n "$project_name" ] || continue
ops_slug=$(sed -n '2p' <<< "$_toml_vals")
local repo_slug
repo_slug=$(sed -n '3p' <<< "$_toml_vals")
primary_branch=$(sed -n '4p' <<< "$_toml_vals")
primary_branch="${primary_branch:-main}"
# Fall back to convention if ops_repo not in TOML
if [ -z "$ops_slug" ]; then
if [ -n "$repo_slug" ]; then
ops_slug="${repo_slug}-ops"
else
ops_slug="disinto-admin/${project_name}-ops"
fi
fi
local ops_root="${repos_dir}/${project_name}-ops"
local remote_url="${FORGE_URL}/${ops_slug}.git"
if [ ! -d "${ops_root}/.git" ]; then
# Clone ops repo from forgejo
log "Ops bootstrap: cloning ${ops_slug} -> ${ops_root}"
if gosu agent git clone --quiet "$remote_url" "$ops_root" 2>/dev/null; then
log "Ops bootstrap: ${ops_slug} cloned successfully"
else
# Remote may not exist yet (first run before init); create empty repo
log "Ops bootstrap: clone failed for ${ops_slug} — initializing empty repo"
gosu agent bash -c "
mkdir -p '${ops_root}' && \
git -C '${ops_root}' init --initial-branch='${primary_branch}' -q && \
git -C '${ops_root}' remote add origin '${remote_url}'
"
fi
else
# Repo exists — ensure remote is configured and pull latest
local current_remote
current_remote=$(git -C "$ops_root" remote get-url origin 2>/dev/null || true)
if [ -z "$current_remote" ]; then
log "Ops bootstrap: adding missing remote to ${ops_root}"
gosu agent git -C "$ops_root" remote add origin "$remote_url"
elif [ "$current_remote" != "$remote_url" ]; then
log "Ops bootstrap: fixing remote URL in ${ops_root}"
gosu agent git -C "$ops_root" remote set-url origin "$remote_url"
fi
# Pull latest from forgejo to pick up any host-side migrations
log "Ops bootstrap: pulling latest for ${project_name}-ops"
gosu agent bash -c "
cd '${ops_root}' && \
git fetch origin '${primary_branch}' --quiet 2>/dev/null && \
git reset --hard 'origin/${primary_branch}' --quiet 2>/dev/null
" || log "Ops bootstrap: pull failed for ${ops_slug} (remote may not exist yet)"
fi
done
}
# Bootstrap the factory (disinto) repo from Forgejo into the project-repos
# volume so the entrypoint runs from a live git checkout that receives
# updates via `git pull`, not the stale baked copy from `COPY .` (#593).
bootstrap_factory_repo() {
local repo="${FACTORY_REPO:-}"
if [ -z "$repo" ]; then
log "Factory bootstrap: FACTORY_REPO not set — running from baked copy"
return 0
fi
local remote_url="${FORGE_URL}/${repo}.git"
local primary_branch="${PRIMARY_BRANCH:-main}"
if [ ! -d "${DISINTO_LIVE}/.git" ]; then
log "Factory bootstrap: cloning ${repo} -> ${DISINTO_LIVE}"
if gosu agent git clone --quiet --branch "$primary_branch" "$remote_url" "$DISINTO_LIVE" 2>&1; then
log "Factory bootstrap: cloned successfully"
else
log "Factory bootstrap: clone failed — running from baked copy"
return 0
fi
else
log "Factory bootstrap: pulling latest ${repo}"
gosu agent bash -c "
cd '${DISINTO_LIVE}' && \
git fetch origin '${primary_branch}' --quiet 2>/dev/null && \
git reset --hard 'origin/${primary_branch}' --quiet 2>/dev/null
" || log "Factory bootstrap: pull failed — using existing checkout"
fi
# Copy project TOMLs from baked dir — they are gitignored AND docker-ignored,
# so neither the image nor the clone normally contains them. If the baked
# copy has any (e.g. operator manually placed them), propagate them.
if compgen -G "${DISINTO_BAKED}/projects/*.toml" >/dev/null 2>&1; then
mkdir -p "${DISINTO_LIVE}/projects"
cp "${DISINTO_BAKED}"/projects/*.toml "${DISINTO_LIVE}/projects/"
chown -R agent:agent "${DISINTO_LIVE}/projects"
log "Factory bootstrap: copied project TOMLs to live checkout"
fi
# Verify the live checkout has the expected structure
if [ -f "${DISINTO_LIVE}/lib/env.sh" ]; then
DISINTO_DIR="$DISINTO_LIVE"
log "Factory bootstrap: DISINTO_DIR switched to live checkout at ${DISINTO_LIVE}"
else
log "Factory bootstrap: live checkout missing expected files — falling back to baked copy"
fi
}
# Ensure the project repo is cloned on first run (#589).
# The agents container uses a named volume (project-repos) at /home/agent/repos.
# On first startup, if the project repo is missing, clone it from FORGE_URL/FORGE_REPO.
# This makes the agents container self-healing and independent of init's host clone.
ensure_project_clone() {
# shellcheck disable=SC2153
local repo_dir="/home/agent/repos/${PROJECT_NAME}"
if [ -d "${repo_dir}/.git" ]; then
log "Project repo present at ${repo_dir}"
return 0
fi
if [ -z "${FORGE_REPO:-}" ] || [ -z "${FORGE_URL:-}" ]; then
log "Cannot clone project repo: FORGE_REPO or FORGE_URL unset"
return 1
fi
log "Cloning ${FORGE_URL}/${FORGE_REPO}.git -> ${repo_dir} (first run)"
mkdir -p "$(dirname "$repo_dir")"
chown -R agent:agent "$(dirname "$repo_dir")"
if gosu agent git clone --quiet "${FORGE_URL}/${FORGE_REPO}.git" "$repo_dir"; then
log "Project repo cloned"
else
log "Project repo clone failed — agents may fail until manually fixed"
return 1
fi
}
# Pull latest factory code at the start of each poll iteration (#593).
# Runs as the agent user; failures are non-fatal (stale code still works).
pull_factory_repo() {
[ "$DISINTO_DIR" = "$DISINTO_LIVE" ] || return 0
local primary_branch="${PRIMARY_BRANCH:-main}"
gosu agent bash -c "
cd '${DISINTO_LIVE}' && \
git fetch origin '${primary_branch}' --quiet 2>/dev/null && \
git reset --hard 'origin/${primary_branch}' --quiet 2>/dev/null
" || log "Factory pull failed — continuing with current checkout"
}
# Configure git and tea once at startup (as root, then drop to agent)
_setup_git_creds
configure_git_identity
configure_tea_login
# Clone project repo on first run (makes agents self-healing, #589)
ensure_project_clone
# Bootstrap ops repos from forgejo into container volumes (#586)
bootstrap_ops_repos
# Bootstrap factory repo — switch DISINTO_DIR to live checkout (#593)
bootstrap_factory_repo
# Initialize state directory for check_active guards
init_state_dir
# Parse AGENT_ROLES env var (default: all agents)
# Expected format: comma-separated list like "review,dev,gardener"
AGENT_ROLES="${AGENT_ROLES:-review,dev,gardener,architect,planner,predictor}"
log "Agent roles configured: ${AGENT_ROLES}"
# Poll interval in seconds (5 minutes default)
POLL_INTERVAL="${POLL_INTERVAL:-300}"
# Gardener and architect intervals (default 6 hours = 21600 seconds)
GARDENER_INTERVAL="${GARDENER_INTERVAL:-21600}"
ARCHITECT_INTERVAL="${ARCHITECT_INTERVAL:-21600}"
PLANNER_INTERVAL="${PLANNER_INTERVAL:-43200}"
log "Entering polling loop (interval: ${POLL_INTERVAL}s, roles: ${AGENT_ROLES})"
log "Gardener interval: ${GARDENER_INTERVAL}s, Architect interval: ${ARCHITECT_INTERVAL}s, Planner interval: ${PLANNER_INTERVAL}s"
# Main polling loop using iteration counter for gardener scheduling
iteration=0
while true; do
iteration=$((iteration + 1))
now=$(date +%s)
# Pull latest factory code so poll scripts stay current (#593)
pull_factory_repo
# Stale .sid cleanup — needed for agents that don't support --resume
# Run this as the agent user
gosu agent bash -c "rm -f /tmp/dev-session-*.sid /tmp/review-session-*.sid 2>/dev/null || true"
# Poll each project TOML
# Fast agents (review-poll, dev-poll) run in background so they don't block
# each other. Slow agents (gardener, architect, planner, predictor) also run
# in background but are guarded by pgrep so only one instance runs at a time.
# Per-session CLAUDE_CONFIG_DIR isolation handles OAuth concurrency natively.
# Set CLAUDE_EXTERNAL_LOCK=1 to re-enable the legacy flock serialization.
for toml in "${DISINTO_DIR}"/projects/*.toml; do
[ -f "$toml" ] || continue
# Parse project name and primary branch from TOML so env.sh preconditions
# are satisfied when agent scripts source it (#674).
_toml_vals=$(python3 -c "
import tomllib, sys
with open(sys.argv[1], 'rb') as f:
cfg = tomllib.load(f)
print(cfg.get('name', ''))
print(cfg.get('primary_branch', 'main'))
" "$toml" 2>/dev/null || true)
_pname=$(sed -n '1p' <<< "$_toml_vals")
_pbranch=$(sed -n '2p' <<< "$_toml_vals")
[ -n "$_pname" ] || { log "WARNING: could not parse project name from ${toml} — skipping"; continue; }
export PROJECT_NAME="$_pname"
export PROJECT_REPO_ROOT="/home/agent/repos/${_pname}"
export OPS_REPO_ROOT="/home/agent/repos/${_pname}-ops"
export PRIMARY_BRANCH="${_pbranch:-main}"
log "Processing project TOML: ${toml}"
# --- Fast agents: run in background, wait before slow agents ---
FAST_PIDS=()
# Review poll (every iteration)
if [[ ",${AGENT_ROLES}," == *",review,"* ]]; then
log "Running review-poll (iteration ${iteration}) for ${toml}"
gosu agent bash -c "cd ${DISINTO_DIR} && bash review/review-poll.sh \"${toml}\"" >> "${DISINTO_LOG_DIR}/review-poll.log" 2>&1 &
FAST_PIDS+=($!)
fi
sleep 2 # stagger fast polls
# Dev poll (every iteration)
if [[ ",${AGENT_ROLES}," == *",dev,"* ]]; then
log "Running dev-poll (iteration ${iteration}) for ${toml}"
gosu agent bash -c "cd ${DISINTO_DIR} && bash dev/dev-poll.sh \"${toml}\"" >> "${DISINTO_LOG_DIR}/dev-poll.log" 2>&1 &
FAST_PIDS+=($!)
fi
# Wait only for THIS iteration's fast polls — long-running gardener/dev-agent
# from prior iterations must not block us.
if [ ${#FAST_PIDS[@]} -gt 0 ]; then
wait "${FAST_PIDS[@]}"
fi
# --- Slow agents: run in background with pgrep guard ---
# Gardener (interval configurable via GARDENER_INTERVAL env var)
if [[ ",${AGENT_ROLES}," == *",gardener,"* ]]; then
gardener_iteration=$((iteration * POLL_INTERVAL))
# Fires when elapsed seconds reach a multiple of the interval
# (GARDENER_INTERVAL must be a multiple of POLL_INTERVAL).
if [ $((gardener_iteration % GARDENER_INTERVAL)) -eq 0 ]; then
if ! pgrep -f "gardener-run.sh" >/dev/null; then
log "Running gardener (iteration ${iteration}, ${GARDENER_INTERVAL}s interval) for ${toml}"
gosu agent bash -c "cd ${DISINTO_DIR} && bash gardener/gardener-run.sh \"${toml}\"" >> "${DISINTO_LOG_DIR}/gardener.log" 2>&1 &
else
log "Skipping gardener — already running"
fi
fi
fi
# Architect (interval configurable via ARCHITECT_INTERVAL env var)
if [[ ",${AGENT_ROLES}," == *",architect,"* ]]; then
architect_iteration=$((iteration * POLL_INTERVAL))
if [ $((architect_iteration % ARCHITECT_INTERVAL)) -eq 0 ]; then
if ! pgrep -f "architect-run.sh" >/dev/null; then
log "Running architect (iteration ${iteration}, ${ARCHITECT_INTERVAL}s interval) for ${toml}"
gosu agent bash -c "cd ${DISINTO_DIR} && bash architect/architect-run.sh \"${toml}\"" >> "${DISINTO_LOG_DIR}/architect.log" 2>&1 &
else
log "Skipping architect — already running"
fi
fi
fi
# Planner (interval configurable via PLANNER_INTERVAL env var)
if [[ ",${AGENT_ROLES}," == *",planner,"* ]]; then
planner_iteration=$((iteration * POLL_INTERVAL))
if [ $((planner_iteration % PLANNER_INTERVAL)) -eq 0 ]; then
if ! pgrep -f "planner-run.sh" >/dev/null; then
log "Running planner (iteration ${iteration}, ${PLANNER_INTERVAL}s interval) for ${toml}"
gosu agent bash -c "cd ${DISINTO_DIR} && bash planner/planner-run.sh \"${toml}\"" >> "${DISINTO_LOG_DIR}/planner.log" 2>&1 &
else
log "Skipping planner — already running"
fi
fi
fi
# Predictor (every 24 hours = 288 iterations * 5 min = 86400 seconds)
if [[ ",${AGENT_ROLES}," == *",predictor,"* ]]; then
predictor_iteration=$((iteration * POLL_INTERVAL))
predictor_interval=$((24 * 60 * 60)) # 24 hours in seconds
if [ $((predictor_iteration % predictor_interval)) -eq 0 ]; then
if ! pgrep -f "predictor-run.sh" >/dev/null; then
log "Running predictor (iteration ${iteration}, 24-hour interval) for ${toml}"
gosu agent bash -c "cd ${DISINTO_DIR} && bash predictor/predictor-run.sh \"${toml}\"" >> "${DISINTO_LOG_DIR}/predictor.log" 2>&1 &
else
log "Skipping predictor — already running"
fi
fi
fi
done
sleep "${POLL_INTERVAL}"
done
# Run cron in the foreground (note: unreachable while the polling loop above runs). Cron jobs execute as the agent user.
log "Starting cron daemon"
exec cron -f
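The modulo scheduling used for the slow agents above can be sketched numerically. A standalone illustration using the script's default values (not part of the entrypoint itself):

```python
# Sketch of the modulo-based scheduling in the polling loop above.
# Values mirror the script defaults; the schedule only fires when the
# slow-agent interval is a multiple of the poll period.
POLL_INTERVAL = 300        # 5 minutes
GARDENER_INTERVAL = 21600  # 6 hours

# Which of the first 288 iterations (one day) trigger the gardener?
fired = [i for i in range(1, 289) if (i * POLL_INTERVAL) % GARDENER_INTERVAL == 0]
print(fired)  # [72, 144, 216, 288] -> once every 6 hours
```

This also shows why an interval that is not a multiple of POLL_INTERVAL (say 21000s) would never fire: the remainder never reaches zero.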


@@ -1,35 +0,0 @@
# disinto-chat — minimal HTTP backend for Claude chat UI
#
# Small Debian slim base with Python runtime.
# Chosen for simplicity and small image size (~100MB).
#
# Image size: ~100MB (well under the 200MB ceiling)
#
# The claude binary is mounted from the host at runtime via docker-compose,
# not baked into the image — same pattern as the agents container.
FROM debian:bookworm-slim
# Install the Python runtime (apt packages only; no pip dependencies)
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 \
&& rm -rf /var/lib/apt/lists/*
# Non-root user — fixed UID 10001 for sandbox hardening (#706)
RUN useradd -m -u 10001 -s /bin/bash chat
# Copy application files
COPY server.py /usr/local/bin/server.py
COPY entrypoint-chat.sh /entrypoint-chat.sh
COPY ui/ /var/chat/ui/
RUN chmod +x /entrypoint-chat.sh /usr/local/bin/server.py
USER chat
WORKDIR /var/chat
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD python3 -c "import urllib.request; urllib.request.urlopen('http://localhost:8080/health')" || exit 1
ENTRYPOINT ["/entrypoint-chat.sh"]
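Per the comments above, the claude binary is volume-mounted at runtime rather than baked in. A compose service for this container might look like the following sketch (service name, build path, and port mapping are illustrative, not taken from the repo's docker-compose.yml):

```yaml
services:
  chat:
    build: ./docker/chat          # illustrative build context
    ports:
      - "8080:8080"
    volumes:
      # Mount the host claude binary read-only, same pattern as the agents container
      - /usr/local/bin/claude:/usr/local/bin/claude:ro
```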


@@ -1,37 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# entrypoint-chat.sh — Start the disinto-chat backend server
#
# Exec-replace pattern: this script is the container entrypoint and runs
# the server directly (no wrapper needed). Logs to stdout for docker logs.
LOGFILE="/tmp/chat.log"
log() {
printf '[%s] %s\n' "$(date -u '+%Y-%m-%d %H:%M:%S UTC')" "$*" | tee -a "$LOGFILE"
}
# Sandbox sanity checks (#706) — fail fast if isolation is broken
if [ -e /var/run/docker.sock ]; then
log "FATAL: /var/run/docker.sock is accessible — sandbox violation"
exit 1
fi
if [ "$(id -u)" = "0" ]; then
log "FATAL: running as root (uid 0) — sandbox violation"
exit 1
fi
# Verify Claude CLI is available (expected via volume mount from host).
if ! command -v claude &>/dev/null; then
log "FATAL: claude CLI not found in PATH"
log "Mount the host binary into the container, e.g.:"
log " volumes:"
log " - /usr/local/bin/claude:/usr/local/bin/claude:ro"
exit 1
fi
log "Claude CLI: $(claude --version 2>&1 || true)"
# Start the Python server (exec-replace so signals propagate correctly)
log "Starting disinto-chat server on port 8080..."
exec python3 /usr/local/bin/server.py


@@ -1,957 +0,0 @@
#!/usr/bin/env python3
"""
disinto-chat server - minimal HTTP backend for Claude chat UI.
Routes:
GET /chat/auth/verify -> Caddy forward_auth callback (returns 200+X-Forwarded-User or 401)
GET /chat/login -> 302 to Forgejo OAuth authorize
GET /chat/oauth/callback -> exchange code for token, validate user, set session
GET /chat/ -> serves index.html (session required)
GET /chat/static/* -> serves static assets (session required)
POST /chat -> spawns `claude --print` with user message (session required)
GET /ws -> reserved for future streaming upgrade (returns 501)
OAuth flow:
1. User hits any /chat/* route without a valid session cookie -> 302 /chat/login
2. /chat/login redirects to Forgejo /login/oauth/authorize
3. Forgejo redirects back to /chat/oauth/callback with ?code=...&state=...
4. Server exchanges code for access token, fetches /api/v1/user
5. Asserts user is in allowlist, sets HttpOnly session cookie
6. Redirects to /chat/
The claude binary is expected to be mounted from the host at /usr/local/bin/claude.
"""
import datetime
import json
import os
import re
import secrets
import subprocess
import sys
import time
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.parse import urlparse, parse_qs, urlencode
# Configuration
HOST = os.environ.get("CHAT_HOST", "0.0.0.0")
PORT = int(os.environ.get("CHAT_PORT", 8080))
UI_DIR = "/var/chat/ui"
STATIC_DIR = os.path.join(UI_DIR, "static")
CLAUDE_BIN = "/usr/local/bin/claude"
# OAuth configuration
FORGE_URL = os.environ.get("FORGE_URL", "http://localhost:3000")
CHAT_OAUTH_CLIENT_ID = os.environ.get("CHAT_OAUTH_CLIENT_ID", "")
CHAT_OAUTH_CLIENT_SECRET = os.environ.get("CHAT_OAUTH_CLIENT_SECRET", "")
EDGE_TUNNEL_FQDN = os.environ.get("EDGE_TUNNEL_FQDN", "")
# Shared secret for Caddy forward_auth verify endpoint (#709).
# When set, only requests carrying this value in X-Forward-Auth-Secret are
# allowed to call /chat/auth/verify. When empty the endpoint is unrestricted
# (acceptable during local dev; production MUST set this).
FORWARD_AUTH_SECRET = os.environ.get("FORWARD_AUTH_SECRET", "")
# Rate limiting / cost caps (#711)
CHAT_MAX_REQUESTS_PER_HOUR = int(os.environ.get("CHAT_MAX_REQUESTS_PER_HOUR", 60))
CHAT_MAX_REQUESTS_PER_DAY = int(os.environ.get("CHAT_MAX_REQUESTS_PER_DAY", 500))
CHAT_MAX_TOKENS_PER_DAY = int(os.environ.get("CHAT_MAX_TOKENS_PER_DAY", 1000000))
# Allowed users - disinto-admin always allowed; CSV allowlist extends it
_allowed_csv = os.environ.get("DISINTO_CHAT_ALLOWED_USERS", "")
ALLOWED_USERS = {"disinto-admin"}
if _allowed_csv:
ALLOWED_USERS.update(u.strip() for u in _allowed_csv.split(",") if u.strip())
# Session cookie name
SESSION_COOKIE = "disinto_chat_session"
# Session TTL: 24 hours
SESSION_TTL = 24 * 60 * 60
# Chat history directory (bind-mounted from host)
CHAT_HISTORY_DIR = os.environ.get("CHAT_HISTORY_DIR", "/var/lib/chat/history")
# Regex for valid conversation_id (12-char hex, no slashes)
CONVERSATION_ID_PATTERN = re.compile(r"^[0-9a-f]{12}$")
# In-memory session store: token -> {"user": str, "expires": float}
_sessions = {}
# Pending OAuth state tokens: state -> expires (float)
_oauth_states = {}
# Per-user rate limiting state (#711)
# user -> list of request timestamps (for sliding-window hourly/daily caps)
_request_log = {}
# user -> {"tokens": int, "date": "YYYY-MM-DD"}
_daily_tokens = {}
# MIME types for static files
MIME_TYPES = {
".html": "text/html; charset=utf-8",
".js": "application/javascript; charset=utf-8",
".css": "text/css; charset=utf-8",
".json": "application/json; charset=utf-8",
".png": "image/png",
".jpg": "image/jpeg",
".svg": "image/svg+xml",
".ico": "image/x-icon",
}
def _build_callback_uri():
"""Build the OAuth callback URI based on tunnel configuration."""
if EDGE_TUNNEL_FQDN:
return f"https://{EDGE_TUNNEL_FQDN}/chat/oauth/callback"
return "http://localhost/chat/oauth/callback"
def _session_cookie_flags():
"""Return cookie flags appropriate for the deployment mode."""
flags = "HttpOnly; SameSite=Lax; Path=/chat"
if EDGE_TUNNEL_FQDN:
flags += "; Secure"
return flags
def _validate_session(cookie_header):
"""Check session cookie and return username if valid, else None."""
if not cookie_header:
return None
for part in cookie_header.split(";"):
part = part.strip()
if part.startswith(SESSION_COOKIE + "="):
token = part[len(SESSION_COOKIE) + 1:]
session = _sessions.get(token)
if session and session["expires"] > time.time():
return session["user"]
# Expired - clean up
_sessions.pop(token, None)
return None
return None
def _gc_sessions():
"""Remove expired sessions (called opportunistically)."""
now = time.time()
expired = [k for k, v in _sessions.items() if v["expires"] <= now]
for k in expired:
del _sessions[k]
expired_states = [k for k, v in _oauth_states.items() if v <= now]
for k in expired_states:
del _oauth_states[k]
def _exchange_code_for_token(code):
"""Exchange an authorization code for an access token via Forgejo."""
import urllib.request
import urllib.error
data = urlencode({
"grant_type": "authorization_code",
"code": code,
"client_id": CHAT_OAUTH_CLIENT_ID,
"client_secret": CHAT_OAUTH_CLIENT_SECRET,
"redirect_uri": _build_callback_uri(),
}).encode()
req = urllib.request.Request(
f"{FORGE_URL}/login/oauth/access_token",
data=data,
headers={"Accept": "application/json", "Content-Type": "application/x-www-form-urlencoded"},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=10) as resp:
return json.loads(resp.read().decode())
except (urllib.error.URLError, json.JSONDecodeError, OSError) as e:
print(f"OAuth token exchange failed: {e}", file=sys.stderr)
return None
def _fetch_user(access_token):
"""Fetch the authenticated user from Forgejo API."""
import urllib.request
import urllib.error
req = urllib.request.Request(
f"{FORGE_URL}/api/v1/user",
headers={"Authorization": f"token {access_token}", "Accept": "application/json"},
)
try:
with urllib.request.urlopen(req, timeout=10) as resp:
return json.loads(resp.read().decode())
except (urllib.error.URLError, json.JSONDecodeError, OSError) as e:
print(f"User fetch failed: {e}", file=sys.stderr)
return None
# =============================================================================
# Rate Limiting Functions (#711)
# =============================================================================
def _check_rate_limit(user):
"""Check per-user rate limits. Returns (allowed, retry_after, reason) (#711).
Checks hourly request cap, daily request cap, and daily token cap.
"""
now = time.time()
one_hour_ago = now - 3600
today = datetime.date.today().isoformat()
# Prune old entries from request log
timestamps = _request_log.get(user, [])
timestamps = [t for t in timestamps if t > now - 86400]
_request_log[user] = timestamps
# Hourly request cap
hourly = [t for t in timestamps if t > one_hour_ago]
if len(hourly) >= CHAT_MAX_REQUESTS_PER_HOUR:
oldest_in_window = min(hourly)
retry_after = int(oldest_in_window + 3600 - now) + 1
return False, max(retry_after, 1), "hourly request limit"
# Daily request cap
start_of_day = time.mktime(datetime.date.today().timetuple())
daily = [t for t in timestamps if t >= start_of_day]
if len(daily) >= CHAT_MAX_REQUESTS_PER_DAY:
next_day = start_of_day + 86400
retry_after = int(next_day - now) + 1
return False, max(retry_after, 1), "daily request limit"
# Daily token cap
token_info = _daily_tokens.get(user, {"tokens": 0, "date": today})
if token_info["date"] != today:
token_info = {"tokens": 0, "date": today}
_daily_tokens[user] = token_info
if token_info["tokens"] >= CHAT_MAX_TOKENS_PER_DAY:
next_day = start_of_day + 86400
retry_after = int(next_day - now) + 1
return False, max(retry_after, 1), "daily token limit"
return True, 0, ""
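The sliding-window hourly check above can be exercised in isolation. A minimal, self-contained sketch with the cap reduced for illustration (the real server reads it from CHAT_MAX_REQUESTS_PER_HOUR):

```python
CHAT_MAX_REQUESTS_PER_HOUR = 3  # reduced cap for illustration

def check_hourly(timestamps, now):
    """Sliding-window hourly cap, mirroring the check in _check_rate_limit."""
    hourly = [t for t in timestamps if t > now - 3600]
    if len(hourly) >= CHAT_MAX_REQUESTS_PER_HOUR:
        # Window frees up when the oldest request ages past one hour
        retry_after = int(min(hourly) + 3600 - now) + 1
        return False, max(retry_after, 1)
    return True, 0

now = 10_000.0
log = [now - 3500, now - 1800, now - 60]  # three requests within the last hour
allowed, retry = check_hourly(log, now)
print(allowed, retry)  # False 101 -> blocked until the oldest request expires
```

The Retry-After value is derived from the oldest in-window timestamp, so clients back off only as long as strictly necessary.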
def _record_request(user):
"""Record a request timestamp for the user (#711)."""
_request_log.setdefault(user, []).append(time.time())
def _record_tokens(user, tokens):
"""Record token usage for the user (#711)."""
today = datetime.date.today().isoformat()
token_info = _daily_tokens.get(user, {"tokens": 0, "date": today})
if token_info["date"] != today:
token_info = {"tokens": 0, "date": today}
token_info["tokens"] += tokens
_daily_tokens[user] = token_info
def _parse_stream_json(output):
"""Parse stream-json output from claude --print (#711).
Returns (text_content, total_tokens). Falls back gracefully if the
usage event is absent or malformed.
"""
text_parts = []
total_tokens = 0
for line in output.splitlines():
line = line.strip()
if not line:
continue
try:
event = json.loads(line)
except json.JSONDecodeError:
continue
etype = event.get("type", "")
# Collect assistant text
if etype == "content_block_delta":
delta = event.get("delta", {})
if delta.get("type") == "text_delta":
text_parts.append(delta.get("text", ""))
elif etype == "assistant":
# Full assistant message (non-streaming)
content = event.get("content", "")
if isinstance(content, str) and content:
text_parts.append(content)
elif isinstance(content, list):
for block in content:
if isinstance(block, dict) and block.get("text"):
text_parts.append(block["text"])
# Parse usage from result event
if etype == "result":
usage = event.get("usage", {})
total_tokens = usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
elif "usage" in event:
usage = event["usage"]
if isinstance(usage, dict):
total_tokens = usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
return "".join(text_parts), total_tokens
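The two event shapes _parse_stream_json handles most often can be demonstrated on sample lines. The payloads below are illustrative, not captured CLI output:

```python
import json

# Sample stream-json lines of the shapes the parser above handles
lines = [
    '{"type": "content_block_delta", "delta": {"type": "text_delta", "text": "Hel"}}',
    '{"type": "content_block_delta", "delta": {"type": "text_delta", "text": "lo"}}',
    '{"type": "result", "usage": {"input_tokens": 12, "output_tokens": 5}}',
]

text_parts, total_tokens = [], 0
for line in lines:
    event = json.loads(line)
    if event.get("type") == "content_block_delta":
        delta = event.get("delta", {})
        if delta.get("type") == "text_delta":
            text_parts.append(delta.get("text", ""))
    elif event.get("type") == "result":
        usage = event.get("usage", {})
        total_tokens = usage.get("input_tokens", 0) + usage.get("output_tokens", 0)

print("".join(text_parts), total_tokens)  # Hello 17
```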
# =============================================================================
# Conversation History Functions (#710)
# =============================================================================
def _generate_conversation_id():
"""Generate a new conversation ID (12-char hex string)."""
return secrets.token_hex(6)
def _validate_conversation_id(conv_id):
"""Validate that conversation_id matches the required format."""
return bool(CONVERSATION_ID_PATTERN.match(conv_id))
def _get_user_history_dir(user):
"""Get the history directory path for a user."""
return os.path.join(CHAT_HISTORY_DIR, user)
def _get_conversation_path(user, conv_id):
"""Get the full path to a conversation file."""
user_dir = _get_user_history_dir(user)
return os.path.join(user_dir, f"{conv_id}.ndjson")
def _ensure_user_dir(user):
"""Ensure the user's history directory exists."""
user_dir = _get_user_history_dir(user)
os.makedirs(user_dir, exist_ok=True)
return user_dir
def _write_message(user, conv_id, role, content):
"""Append a message to a conversation file in NDJSON format."""
conv_path = _get_conversation_path(user, conv_id)
_ensure_user_dir(user)
record = {
"ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
"user": user,
"role": role,
"content": content,
}
with open(conv_path, "a", encoding="utf-8") as f:
f.write(json.dumps(record, ensure_ascii=False) + "\n")
def _read_conversation(user, conv_id):
"""Read all messages from a conversation file."""
conv_path = _get_conversation_path(user, conv_id)
messages = []
if not os.path.exists(conv_path):
return None
try:
with open(conv_path, "r", encoding="utf-8") as f:
for line in f:
line = line.strip()
if line:
try:
messages.append(json.loads(line))
except json.JSONDecodeError:
# Skip malformed lines
continue
except IOError:
return None
return messages
def _list_user_conversations(user):
"""List all conversation files for a user with first message preview."""
user_dir = _get_user_history_dir(user)
conversations = []
if not os.path.exists(user_dir):
return conversations
try:
for filename in os.listdir(user_dir):
if not filename.endswith(".ndjson"):
continue
conv_id = filename[:-7] # Remove .ndjson extension
if not _validate_conversation_id(conv_id):
continue
conv_path = os.path.join(user_dir, filename)
messages = _read_conversation(user, conv_id)
if messages:
first_msg = messages[0]
preview = first_msg.get("content", "")[:50]
if len(first_msg.get("content", "")) > 50:
preview += "..."
conversations.append({
"id": conv_id,
"created_at": first_msg.get("ts", ""),
"preview": preview,
"message_count": len(messages),
})
else:
# Empty conversation file
conversations.append({
"id": conv_id,
"created_at": "",
"preview": "(empty)",
"message_count": 0,
})
except OSError:
pass
# Sort by created_at descending
conversations.sort(key=lambda x: x["created_at"] or "", reverse=True)
return conversations
def _delete_conversation(user, conv_id):
"""Delete a conversation file."""
conv_path = _get_conversation_path(user, conv_id)
if os.path.exists(conv_path):
os.remove(conv_path)
return True
return False
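The NDJSON round trip performed by _write_message and _read_conversation can be sketched with a temporary directory (the record shape matches the helpers above; the path is illustrative):

```python
import json
import os
import tempfile

record = {"ts": "2026-03-27T00:00:00Z", "user": "alice", "role": "user", "content": "hi"}

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "abc123def456.ndjson")
    # Append one record per line, as _write_message does
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    # Read it back, skipping blank lines, as _read_conversation does
    with open(path, encoding="utf-8") as f:
        messages = [json.loads(line) for line in f if line.strip()]

print(len(messages), messages[0]["role"])  # 1 user
```

One file per conversation keeps appends atomic enough for a single-process server and makes deletion a plain os.remove.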
class ChatHandler(BaseHTTPRequestHandler):
"""HTTP request handler for disinto-chat with Forgejo OAuth."""
def log_message(self, format, *args):
"""Log to stderr."""
print(f"[{self.log_date_time_string()}] {format % args}", file=sys.stderr)
def send_error_page(self, code, message=None):
"""Custom error response."""
self.send_response(code)
self.send_header("Content-Type", "text/plain; charset=utf-8")
self.end_headers()
if message:
self.wfile.write(message.encode("utf-8"))
def _require_session(self):
"""Check session; redirect to /chat/login if missing. Returns username or None."""
user = _validate_session(self.headers.get("Cookie"))
if user:
return user
self.send_response(302)
self.send_header("Location", "/chat/login")
self.end_headers()
return None
def _check_forwarded_user(self, session_user):
"""Defense-in-depth: verify X-Forwarded-User matches session user (#709).
Returns True if the request may proceed, False if a 403 was sent.
When X-Forwarded-User is absent (forward_auth removed from Caddy),
the request is rejected - fail-closed by design.
"""
forwarded = self.headers.get("X-Forwarded-User")
if not forwarded:
rid = self.headers.get("X-Request-Id", "-")
print(
f"WARN: missing X-Forwarded-User for session_user={session_user} "
f"req_id={rid} - fail-closed (#709)",
file=sys.stderr,
)
self.send_error_page(403, "Forbidden: missing forwarded-user header")
return False
if forwarded != session_user:
rid = self.headers.get("X-Request-Id", "-")
print(
f"WARN: X-Forwarded-User mismatch: header={forwarded} "
f"session={session_user} req_id={rid} (#709)",
file=sys.stderr,
)
self.send_error_page(403, "Forbidden: user identity mismatch")
return False
return True
def do_GET(self):
"""Handle GET requests."""
parsed = urlparse(self.path)
path = parsed.path
# Health endpoint (no auth required) — used by Docker healthcheck
if path == "/health":
self.send_response(200)
self.send_header("Content-Type", "text/plain")
self.end_headers()
self.wfile.write(b"ok\n")
return
# Verify endpoint for Caddy forward_auth (#709)
if path == "/chat/auth/verify":
self.handle_auth_verify()
return
# OAuth routes (no session required)
if path == "/chat/login":
self.handle_login()
return
if path == "/chat/oauth/callback":
self.handle_oauth_callback(parsed.query)
return
# Conversation list endpoint: GET /chat/history
if path == "/chat/history":
user = self._require_session()
if not user:
return
if not self._check_forwarded_user(user):
return
self.handle_conversation_list(user)
return
# Single conversation endpoint: GET /chat/history/<id>
if path.startswith("/chat/history/"):
user = self._require_session()
if not user:
return
if not self._check_forwarded_user(user):
return
conv_id = path[len("/chat/history/"):]
self.handle_conversation_get(user, conv_id)
return
# Serve index.html at root
if path in ("/", "/chat", "/chat/"):
user = self._require_session()
if not user:
return
if not self._check_forwarded_user(user):
return
self.serve_index()
return
# Serve static files
if path.startswith("/chat/static/") or path.startswith("/static/"):
user = self._require_session()
if not user:
return
if not self._check_forwarded_user(user):
return
self.serve_static(path)
return
# Reserved WebSocket endpoint (future use)
if path.startswith("/ws"):
self.send_error_page(501, "WebSocket upgrade not yet implemented")
return
# 404 for unknown paths
self.send_error_page(404, "Not found")
def do_POST(self):
"""Handle POST requests."""
parsed = urlparse(self.path)
path = parsed.path
# New conversation endpoint (session required)
if path == "/chat/new":
user = self._require_session()
if not user:
return
if not self._check_forwarded_user(user):
return
self.handle_new_conversation(user)
return
# Chat endpoint (session required)
if path in ("/chat", "/chat/"):
user = self._require_session()
if not user:
return
if not self._check_forwarded_user(user):
return
self.handle_chat(user)
return
# 404 for unknown paths
self.send_error_page(404, "Not found")
def handle_auth_verify(self):
"""Caddy forward_auth callback - validate session and return X-Forwarded-User (#709).
Caddy calls this endpoint for every /chat/* request. If the session
cookie is valid the endpoint returns 200 with the X-Forwarded-User
header set to the session username. Otherwise it returns 401 so Caddy
knows the request is unauthenticated.
Access control: when FORWARD_AUTH_SECRET is configured, the request must
carry a matching X-Forward-Auth-Secret header (shared secret between
Caddy and the chat backend).
"""
# Shared-secret gate
if FORWARD_AUTH_SECRET:
provided = self.headers.get("X-Forward-Auth-Secret", "")
if not secrets.compare_digest(provided, FORWARD_AUTH_SECRET):
self.send_error_page(403, "Forbidden: invalid forward-auth secret")
return
user = _validate_session(self.headers.get("Cookie"))
if not user:
self.send_error_page(401, "Unauthorized: no valid session")
return
self.send_response(200)
self.send_header("X-Forwarded-User", user)
self.send_header("Content-Type", "text/plain; charset=utf-8")
self.end_headers()
self.wfile.write(b"ok")
def handle_login(self):
"""Redirect to Forgejo OAuth authorize endpoint."""
_gc_sessions()
if not CHAT_OAUTH_CLIENT_ID:
self.send_error_page(500, "Chat OAuth not configured (CHAT_OAUTH_CLIENT_ID missing)")
return
state = secrets.token_urlsafe(32)
_oauth_states[state] = time.time() + 600 # 10 min validity
params = urlencode({
"client_id": CHAT_OAUTH_CLIENT_ID,
"redirect_uri": _build_callback_uri(),
"response_type": "code",
"state": state,
})
self.send_response(302)
self.send_header("Location", f"{FORGE_URL}/login/oauth/authorize?{params}")
self.end_headers()
def handle_oauth_callback(self, query_string):
"""Exchange authorization code for token, validate user, set session."""
params = parse_qs(query_string)
code = params.get("code", [""])[0]
state = params.get("state", [""])[0]
# Validate state
expected_expiry = _oauth_states.pop(state, None) if state else None
if not expected_expiry or expected_expiry < time.time():
self.send_error_page(400, "Invalid or expired OAuth state")
return
if not code:
self.send_error_page(400, "Missing authorization code")
return
# Exchange code for access token
token_resp = _exchange_code_for_token(code)
if not token_resp or "access_token" not in token_resp:
self.send_error_page(502, "Failed to obtain access token from Forgejo")
return
access_token = token_resp["access_token"]
# Fetch user info
user_info = _fetch_user(access_token)
if not user_info or "login" not in user_info:
self.send_error_page(502, "Failed to fetch user info from Forgejo")
return
username = user_info["login"]
# Check allowlist
if username not in ALLOWED_USERS:
self.send_response(403)
self.send_header("Content-Type", "text/plain; charset=utf-8")
self.end_headers()
self.wfile.write(
f"Not authorised: user '{username}' is not in the allowed users list.\n".encode()
)
return
# Create session
session_token = secrets.token_urlsafe(48)
_sessions[session_token] = {
"user": username,
"expires": time.time() + SESSION_TTL,
}
cookie_flags = _session_cookie_flags()
self.send_response(302)
self.send_header("Set-Cookie", f"{SESSION_COOKIE}={session_token}; {cookie_flags}")
self.send_header("Location", "/chat/")
self.end_headers()
def serve_index(self):
"""Serve the main index.html file."""
index_path = os.path.join(UI_DIR, "index.html")
if not os.path.exists(index_path):
self.send_error_page(500, "UI not found")
return
try:
with open(index_path, "r", encoding="utf-8") as f:
content = f.read()
self.send_response(200)
self.send_header("Content-Type", MIME_TYPES[".html"])
self.send_header("Content-Length", len(content.encode("utf-8")))
self.end_headers()
self.wfile.write(content.encode("utf-8"))
except IOError as e:
self.send_error_page(500, f"Error reading index.html: {e}")
def serve_static(self, path):
"""Serve static files from the static directory."""
# Strip /chat/static/ or /static/ prefix
if path.startswith("/chat/static/"):
relative_path = path[len("/chat/static/"):]
else:
relative_path = path[len("/static/"):]
if ".." in relative_path or relative_path.startswith("/"):
self.send_error_page(403, "Forbidden")
return
file_path = os.path.join(STATIC_DIR, relative_path)
if not os.path.exists(file_path):
self.send_error_page(404, "Not found")
return
# Determine MIME type
_, ext = os.path.splitext(file_path)
content_type = MIME_TYPES.get(ext.lower(), "application/octet-stream")
try:
with open(file_path, "rb") as f:
content = f.read()
self.send_response(200)
self.send_header("Content-Type", content_type)
self.send_header("Content-Length", len(content))
self.end_headers()
self.wfile.write(content)
except IOError as e:
self.send_error_page(500, f"Error reading file: {e}")
def _send_rate_limit_response(self, retry_after, reason):
"""Send a 429 response with Retry-After header and HTMX fragment (#711)."""
body = (
f'<div class="rate-limit-error">'
f"Rate limit exceeded: {reason}. "
f"Please try again in {retry_after} seconds."
f"</div>"
)
self.send_response(429)
self.send_header("Retry-After", str(retry_after))
self.send_header("Content-Type", "text/html; charset=utf-8")
self.send_header("Content-Length", str(len(body.encode("utf-8"))))
self.end_headers()
self.wfile.write(body.encode("utf-8"))
def handle_chat(self, user):
"""
Handle chat requests by spawning `claude --print` with the user message.
Enforces per-user rate limits and tracks token usage (#711).
"""
# Check rate limits before processing (#711)
allowed, retry_after, reason = _check_rate_limit(user)
if not allowed:
self._send_rate_limit_response(retry_after, reason)
return
# Read request body
content_length = int(self.headers.get("Content-Length", 0))
if content_length == 0:
self.send_error_page(400, "No message provided")
return
body = self.rfile.read(content_length)
try:
# Parse form-encoded body
body_str = body.decode("utf-8")
params = parse_qs(body_str)
message = params.get("message", [""])[0]
conv_id = params.get("conversation_id", [None])[0]
except (UnicodeDecodeError, KeyError):
self.send_error_page(400, "Invalid message format")
return
if not message:
self.send_error_page(400, "Empty message")
return
# Validate Claude binary exists
if not os.path.exists(CLAUDE_BIN):
self.send_error_page(500, "Claude CLI not found")
return
# Generate new conversation ID if not provided
if not conv_id or not _validate_conversation_id(conv_id):
conv_id = _generate_conversation_id()
# Record request for rate limiting (#711)
_record_request(user)
try:
# Save user message to history
_write_message(user, conv_id, "user", message)
# Spawn claude --print with stream-json for token tracking (#711)
proc = subprocess.Popen(
[CLAUDE_BIN, "--print", "--output-format", "stream-json", message],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
)
# communicate() drains both pipes concurrently, avoiding the deadlock
# that sequential .read() calls risk when one pipe's buffer fills.
raw_output, error_output = proc.communicate()
if error_output:
print(f"Claude stderr: {error_output}", file=sys.stderr)
if proc.returncode != 0:
self.send_error_page(500, f"Claude CLI failed with exit code {proc.returncode}")
return
# Parse stream-json for text and token usage (#711)
response, total_tokens = _parse_stream_json(raw_output)
# Track token usage - does not block *this* request (#711)
if total_tokens > 0:
_record_tokens(user, total_tokens)
print(
f"Token usage: user={user} tokens={total_tokens}",
file=sys.stderr,
)
# Fall back to raw output if stream-json parsing yielded no text
if not response:
response = raw_output
# Save assistant response to history
_write_message(user, conv_id, "assistant", response)
self.send_response(200)
self.send_header("Content-Type", "application/json; charset=utf-8")
self.end_headers()
self.wfile.write(json.dumps({
"response": response,
"conversation_id": conv_id,
}, ensure_ascii=False).encode("utf-8"))
except FileNotFoundError:
self.send_error_page(500, "Claude CLI not found")
except Exception as e:
self.send_error_page(500, f"Error: {e}")
# =======================================================================
# Conversation History Handlers
# =======================================================================
def handle_conversation_list(self, user):
"""List all conversations for the logged-in user."""
conversations = _list_user_conversations(user)
self.send_response(200)
self.send_header("Content-Type", "application/json; charset=utf-8")
self.end_headers()
self.wfile.write(json.dumps(conversations, ensure_ascii=False).encode("utf-8"))
def handle_conversation_get(self, user, conv_id):
"""Get a specific conversation for the logged-in user."""
# Validate conversation_id format
if not _validate_conversation_id(conv_id):
self.send_error_page(400, "Invalid conversation ID")
return
messages = _read_conversation(user, conv_id)
if messages is None:
self.send_error_page(404, "Conversation not found")
return
self.send_response(200)
self.send_header("Content-Type", "application/json; charset=utf-8")
self.end_headers()
self.wfile.write(json.dumps(messages, ensure_ascii=False).encode("utf-8"))
def handle_conversation_delete(self, user, conv_id):
"""Delete a specific conversation for the logged-in user."""
# Validate conversation_id format
if not _validate_conversation_id(conv_id):
self.send_error_page(400, "Invalid conversation ID")
return
if _delete_conversation(user, conv_id):
self.send_response(204) # No Content
self.end_headers()
else:
self.send_error_page(404, "Conversation not found")
def handle_new_conversation(self, user):
"""Create a new conversation and return its ID."""
conv_id = _generate_conversation_id()
self.send_response(200)
self.send_header("Content-Type", "application/json; charset=utf-8")
self.end_headers()
self.wfile.write(json.dumps({"conversation_id": conv_id}, ensure_ascii=False).encode("utf-8"))
def do_DELETE(self):
"""Handle DELETE requests."""
parsed = urlparse(self.path)
path = parsed.path
# Delete conversation endpoint
if path.startswith("/chat/history/"):
user = self._require_session()
if not user:
return
if not self._check_forwarded_user(user):
return
conv_id = path[len("/chat/history/"):]
self.handle_conversation_delete(user, conv_id)
return
# 404 for unknown paths
self.send_error_page(404, "Not found")
def main():
"""Start the HTTP server."""
server_address = (HOST, PORT)
httpd = HTTPServer(server_address, ChatHandler)
print(f"Starting disinto-chat server on {HOST}:{PORT}", file=sys.stderr)
print(f"UI available at http://localhost:{PORT}/chat/", file=sys.stderr)
if CHAT_OAUTH_CLIENT_ID:
print(f"OAuth enabled (client_id={CHAT_OAUTH_CLIENT_ID[:8]}...)", file=sys.stderr)
print(f"Allowed users: {', '.join(sorted(ALLOWED_USERS))}", file=sys.stderr)
else:
print("WARNING: CHAT_OAUTH_CLIENT_ID not set - OAuth disabled", file=sys.stderr)
if FORWARD_AUTH_SECRET:
print("forward_auth secret configured (#709)", file=sys.stderr)
else:
print("WARNING: FORWARD_AUTH_SECRET not set - verify endpoint unrestricted", file=sys.stderr)
print(
f"Rate limits (#711): {CHAT_MAX_REQUESTS_PER_HOUR}/hr, "
f"{CHAT_MAX_REQUESTS_PER_DAY}/day, "
f"{CHAT_MAX_TOKENS_PER_DAY} tokens/day",
file=sys.stderr,
)
httpd.serve_forever()
if __name__ == "__main__":
main()


@ -1,521 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>disinto-chat</title>
<script src="/static/htmx.min.js"></script>
<style>
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen, Ubuntu, sans-serif;
background: #1a1a2e;
color: #eaeaea;
min-height: 100vh;
display: flex;
}
/* Sidebar styles */
.sidebar {
width: 280px;
background: #16213e;
border-right: 1px solid #0f3460;
display: flex;
flex-direction: column;
height: 100vh;
position: fixed;
left: 0;
top: 0;
z-index: 100;
}
.sidebar-header {
padding: 1rem;
border-bottom: 1px solid #0f3460;
}
.sidebar-header h1 {
font-size: 1.25rem;
font-weight: 600;
margin-bottom: 0.5rem;
}
.new-chat-btn {
width: 100%;
background: #e94560;
color: white;
border: none;
border-radius: 6px;
padding: 0.75rem 1rem;
font-size: 0.9rem;
font-weight: 600;
cursor: pointer;
transition: background 0.2s;
}
.new-chat-btn:hover {
background: #d63447;
}
.new-chat-btn:disabled {
background: #555;
cursor: not-allowed;
}
.conversations-list {
flex: 1;
overflow-y: auto;
padding: 0.5rem;
}
.conversation-item {
padding: 0.75rem 1rem;
border-radius: 6px;
cursor: pointer;
margin-bottom: 0.25rem;
transition: background 0.2s;
border: 1px solid transparent;
}
.conversation-item:hover {
background: #1a1a2e;
}
.conversation-item.active {
background: #0f3460;
border-color: #e94560;
}
.conversation-item .preview {
font-size: 0.875rem;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
opacity: 0.9;
}
.conversation-item .meta {
font-size: 0.75rem;
opacity: 0.6;
margin-top: 0.25rem;
}
.conversation-item .message-count {
float: right;
font-size: 0.7rem;
background: #0f3460;
padding: 0.125rem 0.5rem;
border-radius: 10px;
}
.main-content {
margin-left: 280px;
display: flex;
flex-direction: column;
width: 100%;
height: 100vh;
}
header {
background: #16213e;
padding: 1rem 2rem;
border-bottom: 1px solid #0f3460;
}
header h1 {
font-size: 1.25rem;
font-weight: 600;
}
main {
flex: 1;
display: flex;
flex-direction: column;
max-width: 900px;
margin: 0 auto;
width: 100%;
padding: 1rem;
}
#messages {
flex: 1;
overflow-y: auto;
padding: 1rem;
background: #16213e;
border-radius: 8px;
margin-bottom: 1rem;
}
.message {
margin-bottom: 1rem;
padding: 0.75rem 1rem;
border-radius: 8px;
line-height: 1.5;
}
.message.user {
background: #0f3460;
margin-left: 2rem;
}
.message.assistant {
background: #1a1a2e;
margin-right: 2rem;
}
.message.system {
background: #1a1a2e;
font-style: italic;
color: #888;
text-align: center;
}
.message .role {
font-weight: 600;
font-size: 0.875rem;
margin-bottom: 0.25rem;
opacity: 0.8;
}
.message .content {
white-space: pre-wrap;
word-wrap: break-word;
}
.input-area {
display: flex;
gap: 0.5rem;
padding: 1rem;
background: #16213e;
border-radius: 8px;
}
textarea {
flex: 1;
background: #1a1a2e;
border: 1px solid #0f3460;
border-radius: 6px;
padding: 0.75rem;
color: #eaeaea;
font-family: inherit;
font-size: 1rem;
resize: none;
min-height: 80px;
}
textarea:focus {
outline: none;
border-color: #e94560;
}
button {
background: #e94560;
color: white;
border: none;
border-radius: 6px;
padding: 0.75rem 1.5rem;
font-size: 1rem;
font-weight: 600;
cursor: pointer;
transition: background 0.2s;
}
button:hover {
background: #d63447;
}
button:disabled {
background: #555;
cursor: not-allowed;
}
.loading {
opacity: 0.6;
}
.empty-state {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
height: 100%;
color: #888;
text-align: center;
}
.empty-state p {
margin-top: 1rem;
}
/* Responsive sidebar toggle */
.sidebar-toggle {
display: none;
position: fixed;
top: 1rem;
left: 1rem;
z-index: 200;
background: #e94560;
color: white;
border: none;
border-radius: 6px;
padding: 0.5rem;
cursor: pointer;
}
@media (max-width: 768px) {
.sidebar {
transform: translateX(-100%);
transition: transform 0.3s;
}
.sidebar.open {
transform: translateX(0);
}
.sidebar-toggle {
display: block;
}
.main-content {
margin-left: 0;
}
}
</style>
</head>
<body>
<button class="sidebar-toggle" id="sidebar-toggle">☰</button>
<aside class="sidebar" id="sidebar">
<div class="sidebar-header">
<h1>disinto-chat</h1>
<button class="new-chat-btn" id="new-chat-btn">+ New Chat</button>
</div>
<div class="conversations-list" id="conversations-list">
<!-- Conversations will be loaded here -->
</div>
</aside>
<div class="main-content">
<header>
<h1>disinto-chat</h1>
</header>
<main>
<div id="messages">
<div class="message system">
<div class="role">system</div>
<div class="content">Welcome to disinto-chat. Type a message to start chatting with Claude.</div>
</div>
</div>
<form class="input-area" id="chat-form">
<textarea name="message" placeholder="Type your message..." required></textarea>
<button type="submit" id="send-btn">Send</button>
</form>
</main>
</div>
<script>
// State
let currentConversationId = null;
let conversations = [];
// DOM elements
const messagesDiv = document.getElementById('messages');
const sendBtn = document.getElementById('send-btn');
const textarea = document.querySelector('textarea');
const conversationsList = document.getElementById('conversations-list');
const newChatBtn = document.getElementById('new-chat-btn');
const sidebar = document.getElementById('sidebar');
const sidebarToggle = document.getElementById('sidebar-toggle');
// Load conversations list
async function loadConversations() {
try {
const response = await fetch('/chat/history');
if (response.ok) {
conversations = await response.json();
renderConversationsList();
}
} catch (error) {
console.error('Failed to load conversations:', error);
}
}
// Render conversations list
function renderConversationsList() {
conversationsList.innerHTML = '';
if (conversations.length === 0) {
conversationsList.innerHTML = '<div style="padding: 1rem; color: #888; text-align: center; font-size: 0.875rem;">No conversations yet</div>';
return;
}
conversations.forEach(conv => {
const item = document.createElement('div');
item.className = 'conversation-item';
if (conv.id === currentConversationId) {
item.classList.add('active');
}
item.dataset.conversationId = conv.id;
const previewDiv = document.createElement('div');
previewDiv.className = 'preview';
previewDiv.textContent = conv.preview || '(empty)';
const metaDiv = document.createElement('div');
metaDiv.className = 'meta';
const date = conv.created_at ? new Date(conv.created_at).toLocaleDateString() : '';
metaDiv.innerHTML = `${date} <span class="message-count">${conv.message_count || 0} msg${conv.message_count !== 1 ? 's' : ''}</span>`;
item.appendChild(previewDiv);
item.appendChild(metaDiv);
item.addEventListener('click', () => loadConversation(conv.id));
conversationsList.appendChild(item);
});
}
// Load a specific conversation
async function loadConversation(convId) {
// Early-return if already showing this conversation
if (convId === currentConversationId) {
return;
}
// Clear messages
messagesDiv.innerHTML = '';
// Update active state in sidebar
document.querySelectorAll('.conversation-item').forEach(item => {
item.classList.remove('active');
});
document.querySelector(`[data-conversation-id="${convId}"]`)?.classList.add('active');
currentConversationId = convId;
try {
const response = await fetch(`/chat/history/${convId}`);
if (response.ok) {
const messages = await response.json();
if (messages && messages.length > 0) {
messages.forEach(msg => {
addMessage(msg.role, msg.content);
});
} else {
addSystemMessage('This conversation is empty');
}
} else {
addSystemMessage('Failed to load conversation');
}
} catch (error) {
console.error('Failed to load conversation:', error);
addSystemMessage('Error loading conversation');
}
// Close sidebar on mobile
if (window.innerWidth <= 768) {
sidebar.classList.remove('open');
}
}
// Create a new conversation
async function createNewConversation() {
try {
const response = await fetch('/chat/new', { method: 'POST' });
if (response.ok) {
const data = await response.json();
currentConversationId = data.conversation_id;
messagesDiv.innerHTML = '';
addSystemMessage('New conversation started');
await loadConversations();
} else {
addSystemMessage('Failed to create new conversation');
}
} catch (error) {
console.error('Failed to create new conversation:', error);
addSystemMessage('Error creating new conversation');
}
}
// Add message to display
function addMessage(role, content, streaming = false) {
const msgDiv = document.createElement('div');
msgDiv.className = `message ${role}`;
msgDiv.innerHTML = `
<div class="role">${role}</div>
<div class="content${streaming ? ' streaming' : ''}">${escapeHtml(content)}</div>
`;
messagesDiv.appendChild(msgDiv);
messagesDiv.scrollTop = messagesDiv.scrollHeight;
return msgDiv.querySelector('.content');
}
function addSystemMessage(content) {
const msgDiv = document.createElement('div');
msgDiv.className = 'message system';
msgDiv.innerHTML = `
<div class="role">system</div>
<div class="content">${escapeHtml(content)}</div>
`;
messagesDiv.appendChild(msgDiv);
messagesDiv.scrollTop = messagesDiv.scrollHeight;
}
function escapeHtml(text) {
const div = document.createElement('div');
div.textContent = text;
return div.innerHTML.replace(/\n/g, '<br>');
}
// Send message handler
async function sendMessage() {
const message = textarea.value.trim();
if (!message) return;
// Disable input
textarea.disabled = true;
sendBtn.disabled = true;
sendBtn.textContent = 'Sending...';
// Add user message
addMessage('user', message);
textarea.value = '';
// If no conversation ID, create one inline; calling createNewConversation()
// here would clear the user message just added to the display
if (!currentConversationId) {
try {
const createResp = await fetch('/chat/new', { method: 'POST' });
if (createResp.ok) {
const created = await createResp.json();
currentConversationId = created.conversation_id;
}
} catch (err) {
console.error('Failed to create new conversation:', err);
}
}
try {
// Use fetch with URLSearchParams for application/x-www-form-urlencoded
const params = new URLSearchParams();
params.append('message', message);
params.append('conversation_id', currentConversationId);
const response = await fetch('/chat', {
method: 'POST',
headers: {
'Content-Type': 'application/x-www-form-urlencoded'
},
body: params
});
if (!response.ok) {
throw new Error(`HTTP ${response.status}`);
}
// Read the response as JSON (now returns JSON with response and conversation_id)
const data = await response.json();
addMessage('assistant', data.response);
} catch (error) {
addSystemMessage(`Error: ${error.message}`);
} finally {
textarea.disabled = false;
sendBtn.disabled = false;
sendBtn.textContent = 'Send';
textarea.focus();
messagesDiv.scrollTop = messagesDiv.scrollHeight;
// Refresh conversations list
await loadConversations();
}
}
// Event listeners
sendBtn.addEventListener('click', sendMessage);
newChatBtn.addEventListener('click', createNewConversation);
textarea.addEventListener('keydown', (e) => {
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault();
sendMessage();
}
});
// Sidebar toggle for mobile
sidebarToggle.addEventListener('click', () => {
sidebar.classList.toggle('open');
});
// Close sidebar when clicking outside on mobile
document.addEventListener('click', (e) => {
if (window.innerWidth <= 768) {
if (!sidebar.contains(e.target) && !sidebarToggle.contains(e.target)) {
sidebar.classList.remove('open');
}
}
});
// Initial focus
textarea.focus();
// Load conversations on page load
loadConversations();
</script>
</body>
</html>

File diff suppressed because one or more lines are too long


@ -1,7 +0,0 @@
FROM caddy:latest
RUN apk add --no-cache bash jq curl git docker-cli python3 openssh-client autossh
COPY entrypoint-edge.sh /usr/local/bin/entrypoint-edge.sh
VOLUME /data
ENTRYPOINT ["bash", "/usr/local/bin/entrypoint-edge.sh"]

File diff suppressed because it is too large


@ -1,243 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# Set USER and HOME before sourcing env.sh — preconditions for lib/env.sh (#674).
export USER="${USER:-agent}"
export HOME="${HOME:-/home/agent}"
FORGE_URL="${FORGE_URL:-http://forgejo:3000}"
# Derive FORGE_REPO from PROJECT_TOML if available, otherwise require explicit env var
if [ -z "${FORGE_REPO:-}" ]; then
# Try to find a project TOML to derive FORGE_REPO from
_project_toml="${PROJECT_TOML:-}"
if [ -z "$_project_toml" ] && [ -d "${FACTORY_ROOT:-/opt/disinto}/projects" ]; then
for toml in "${FACTORY_ROOT:-/opt/disinto}"/projects/*.toml; do
if [ -f "$toml" ]; then
_project_toml="$toml"
break
fi
done
fi
if [ -n "$_project_toml" ] && [ -f "$_project_toml" ]; then
# Parse FORGE_REPO from project TOML using load-project.sh
if source "${FACTORY_ROOT:-/opt/disinto}/lib/load-project.sh" "$_project_toml" 2>/dev/null; then
if [ -n "${FORGE_REPO:-}" ]; then
echo "Derived FORGE_REPO from PROJECT_TOML: $_project_toml" >&2
fi
fi
fi
# If still not set, fail fast with a clear error message
if [ -z "${FORGE_REPO:-}" ]; then
echo "FATAL: FORGE_REPO environment variable not set" >&2
echo "Set FORGE_REPO=<owner>/<repo> in .env (e.g. FORGE_REPO=disinto-admin/disinto)" >&2
exit 1
fi
fi
# Detect bind-mount of a non-git directory before attempting clone
if [ -d /opt/disinto ] && [ ! -d /opt/disinto/.git ] && [ -n "$(ls -A /opt/disinto 2>/dev/null)" ]; then
echo "FATAL: /opt/disinto contains files but no .git directory." >&2
echo "If you bind-mounted a directory at /opt/disinto, ensure it is a git working tree." >&2
echo "Sleeping 60s before exit to throttle the restart loop..." >&2
sleep 60
exit 1
fi
# Set HOME early so credential helper and git config land in the right place.
export HOME=/home/agent
mkdir -p "$HOME"
# Configure git credential helper before cloning (#604).
# /opt/disinto does not exist yet so we cannot source lib/git-creds.sh;
# inline a minimal credential-helper setup here.
if [ -n "${FORGE_PASS:-}" ] && [ -n "${FORGE_URL:-}" ]; then
_forge_host=$(printf '%s' "$FORGE_URL" | sed 's|https\?://||; s|/.*||')
_forge_proto=$(printf '%s' "$FORGE_URL" | sed 's|://.*||')
_bot_user=""
if [ -n "${FORGE_TOKEN:-}" ]; then
_bot_user=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_URL}/api/v1/user" 2>/dev/null | jq -r '.login // empty') || _bot_user=""
fi
_bot_user="${_bot_user:-dev-bot}"
cat > "${HOME}/.git-credentials-helper" <<CREDEOF
#!/bin/sh
# Reads \$FORGE_PASS from env at runtime — file is safe to read on disk.
[ "\$1" = "get" ] || exit 0
cat >/dev/null
echo "protocol=${_forge_proto}"
echo "host=${_forge_host}"
echo "username=${_bot_user}"
echo "password=\$FORGE_PASS"
CREDEOF
chmod 755 "${HOME}/.git-credentials-helper"
git config --global credential.helper "${HOME}/.git-credentials-helper"
git config --global --add safe.directory '*'
fi
# Shallow clone at the pinned version — use clean URL, credential helper
# supplies auth (#604).
# Retry with exponential backoff — forgejo may still be starting (#665).
if [ ! -d /opt/disinto/.git ]; then
echo "edge: cloning ${FORGE_URL}/${FORGE_REPO} (branch ${DISINTO_VERSION:-main})..." >&2
_clone_ok=false
_backoff=2
_max_backoff=30
_max_attempts=10
for _attempt in $(seq 1 "$_max_attempts"); do
if git clone --depth 1 --branch "${DISINTO_VERSION:-main}" "${FORGE_URL}/${FORGE_REPO}.git" /opt/disinto 2>&1; then
_clone_ok=true
break
fi
rm -rf /opt/disinto # clean up partial clone before retry
if [ "$_attempt" -lt "$_max_attempts" ]; then
echo "edge: clone attempt ${_attempt}/${_max_attempts} failed, retrying in ${_backoff}s..." >&2
sleep "$_backoff"
_backoff=$(( _backoff * 2 ))
if [ "$_backoff" -gt "$_max_backoff" ]; then _backoff=$_max_backoff; fi
fi
done
if [ "$_clone_ok" != "true" ]; then
echo >&2
echo "FATAL: failed to clone ${FORGE_URL}/${FORGE_REPO}.git (branch ${DISINTO_VERSION:-main}) after ${_max_attempts} attempts" >&2
echo "Likely causes:" >&2
echo " - Forgejo at ${FORGE_URL} is unreachable from the edge container" >&2
echo " - Repository '${FORGE_REPO}' does not exist on this forge" >&2
echo " - FORGE_TOKEN/FORGE_PASS is invalid or has no read access to '${FORGE_REPO}'" >&2
echo " - Branch '${DISINTO_VERSION:-main}' does not exist in '${FORGE_REPO}'" >&2
echo "Workaround: bind-mount a local git checkout into /opt/disinto." >&2
echo "Sleeping 60s before exit to throttle the restart loop..." >&2
sleep 60
exit 1
fi
fi
# Repair any legacy baked-credential URLs in /opt/disinto (#604).
# Now that /opt/disinto exists, source the shared lib.
if [ -f /opt/disinto/lib/git-creds.sh ]; then
# shellcheck source=/opt/disinto/lib/git-creds.sh
source /opt/disinto/lib/git-creds.sh
_GIT_CREDS_LOG_FN="echo" repair_baked_cred_urls /opt/disinto
fi
# Ensure log directory exists
mkdir -p /opt/disinto-logs
# ── Reverse tunnel (optional) ──────────────────────────────────────────
# When EDGE_TUNNEL_HOST is set, open a single reverse-SSH forward so the
# DO edge box can reach this container's Caddy on the project's assigned port.
# Guarded: if EDGE_TUNNEL_HOST is empty/unset the block is skipped entirely,
# keeping local-only dev working without errors.
if [ -n "${EDGE_TUNNEL_HOST:-}" ]; then
_tunnel_key="/run/secrets/tunnel_key"
if [ ! -f "$_tunnel_key" ]; then
echo "WARN: EDGE_TUNNEL_HOST is set but ${_tunnel_key} is missing — skipping tunnel" >&2
else
# Ensure correct permissions (bind-mount may arrive as 644)
chmod 0400 "$_tunnel_key" 2>/dev/null || true
: "${EDGE_TUNNEL_USER:=tunnel}"
: "${EDGE_TUNNEL_PORT:?EDGE_TUNNEL_PORT must be set when EDGE_TUNNEL_HOST is set}"
export AUTOSSH_GATETIME=0 # don't exit if the first attempt fails quickly
autossh -M 0 -N -f \
-o StrictHostKeyChecking=accept-new \
-o ServerAliveInterval=30 \
-o ServerAliveCountMax=3 \
-o ExitOnForwardFailure=yes \
-i "$_tunnel_key" \
-R "127.0.0.1:${EDGE_TUNNEL_PORT}:localhost:80" \
"${EDGE_TUNNEL_USER}@${EDGE_TUNNEL_HOST}"
echo "edge: reverse tunnel → ${EDGE_TUNNEL_HOST}:${EDGE_TUNNEL_PORT}" >&2
fi
fi
# Set project context vars for scripts that source lib/env.sh (#674).
# These satisfy env.sh's preconditions for edge-container scripts.
export PROJECT_REPO_ROOT="${PROJECT_REPO_ROOT:-/opt/disinto}"
export PRIMARY_BRANCH="${PRIMARY_BRANCH:-main}"
export OPS_REPO_ROOT="${OPS_REPO_ROOT:-/home/agent/repos/${PROJECT_NAME:-disinto}-ops}"
# Start dispatcher in background
bash /opt/disinto/docker/edge/dispatcher.sh &
# Start supervisor loop in background
PROJECT_TOML="${PROJECT_TOML:-projects/disinto.toml}"
(while true; do
bash /opt/disinto/supervisor/supervisor-run.sh "/opt/disinto/${PROJECT_TOML}" 2>&1 | tee -a /opt/disinto-logs/supervisor.log || true
sleep 1200 # 20 minutes
done) &
# ── Load required secrets from secrets/*.enc (#777) ────────────────────
# Edge container declares its required secrets; missing ones cause a hard fail.
_AGE_KEY_FILE="${HOME}/.config/sops/age/keys.txt"
_SECRETS_DIR="/opt/disinto/secrets"
EDGE_REQUIRED_SECRETS="CADDY_SSH_KEY CADDY_SSH_HOST CADDY_SSH_USER CADDY_ACCESS_LOG"
_edge_decrypt_secret() {
local enc_path="${_SECRETS_DIR}/${1}.enc"
[ -f "$enc_path" ] || return 1
age -d -i "$_AGE_KEY_FILE" "$enc_path" 2>/dev/null
}
if [ -f "$_AGE_KEY_FILE" ] && [ -d "$_SECRETS_DIR" ]; then
_missing=""
for _secret_name in $EDGE_REQUIRED_SECRETS; do
_val=$(_edge_decrypt_secret "$_secret_name") || { _missing="${_missing} ${_secret_name}"; continue; }
export "$_secret_name=$_val"
done
if [ -n "$_missing" ]; then
echo "FATAL: required secrets missing from secrets/*.enc:${_missing}" >&2
echo " Run 'disinto secrets add <NAME>' for each missing secret." >&2
echo " If migrating from .env.vault.enc, run 'disinto secrets migrate-from-vault' first." >&2
exit 1
fi
echo "edge: loaded required secrets: ${EDGE_REQUIRED_SECRETS}" >&2
else
echo "FATAL: age key (${_AGE_KEY_FILE}) or secrets dir (${_SECRETS_DIR}) not found — cannot load required secrets" >&2
echo " Ensure age is installed and secrets/*.enc files are present." >&2
exit 1
fi
# Start daily engagement collection cron loop in background (#745)
# Runs collect-engagement.sh daily at ~23:50 UTC via a sleep loop that
# calculates seconds until the next 23:50 window. SSH key from secrets/*.enc (#777).
(while true; do
# Calculate seconds until next 23:50 UTC
_now=$(date -u +%s)
_target=$(date -u -d "today 23:50" +%s 2>/dev/null || date -u -d "23:50" +%s 2>/dev/null || echo 0)
if [ "$_target" -le "$_now" ]; then
_target=$(( _target + 86400 ))
fi
# Guard against date(1) variants (e.g. busybox) where -d parsing failed:
# fall back to a flat 24h sleep instead of a negative interval
if [ "$_target" -le "$_now" ]; then
_target=$(( _now + 86400 ))
fi
_sleep_secs=$(( _target - _now ))
echo "edge: collect-engagement scheduled in ${_sleep_secs}s (next 23:50 UTC)" >&2
sleep "$_sleep_secs"
_fetch_log="/tmp/caddy-access-log-fetch.log"
_ssh_key_file=$(mktemp)
printf '%s\n' "$CADDY_SSH_KEY" > "$_ssh_key_file"
chmod 0600 "$_ssh_key_file"
scp -i "$_ssh_key_file" -o StrictHostKeyChecking=accept-new -o ConnectTimeout=10 -o BatchMode=yes \
"${CADDY_SSH_USER}@${CADDY_SSH_HOST}:${CADDY_ACCESS_LOG}" \
"$_fetch_log" 2>&1 | tee -a /opt/disinto-logs/collect-engagement.log || true
rm -f "$_ssh_key_file"
if [ -s "$_fetch_log" ]; then
CADDY_ACCESS_LOG="$_fetch_log" bash /opt/disinto/site/collect-engagement.sh 2>&1 \
| tee -a /opt/disinto-logs/collect-engagement.log || true
else
echo "edge: collect-engagement: fetched log is empty, skipping parse" >&2
fi
rm -f "$_fetch_log"
done) &
# Caddy as main process — run in foreground via wait so background jobs survive
# (exec replaces the shell, which can orphan backgrounded subshells)
caddy run --config /etc/caddy/Caddyfile --adapter caddyfile &
# Exit when any child dies (caddy crash → container restart via docker compose)
wait -n
exit 1
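The clone-retry loop above sleeps 2s after the first failure and doubles the delay up to a 30s ceiling across 10 attempts. The resulting schedule, sketched in Python:

```python
def backoff_schedule(max_attempts=10, initial=2, ceiling=30):
    """Yield the sleep taken after each failed attempt except the last,
    doubling from `initial` and capping at `ceiling`, matching the
    _backoff arithmetic in entrypoint-edge.sh."""
    delay = initial
    for _ in range(max_attempts - 1):
        yield delay
        delay = min(delay * 2, ceiling)
```

With the defaults the worst case waits 2+4+8+16+30*5 = 180 seconds before the FATAL message, which bounds how long a slow-starting Forgejo can delay the edge container.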


@ -1,38 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Nothing shipped yet</title>
<style>
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, sans-serif;
display: flex;
align-items: center;
justify-content: center;
min-height: 100vh;
margin: 0;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
color: white;
}
.container {
text-align: center;
padding: 2rem;
}
h1 {
font-size: 3rem;
margin: 0 0 1rem 0;
}
p {
font-size: 1.25rem;
opacity: 0.9;
}
</style>
</head>
<body>
<div class="container">
<h1>Nothing shipped yet</h1>
<p>CI pipelines will update this page with your staging artifacts.</p>
</div>
</body>
</html>


@ -1,14 +0,0 @@
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
bash curl git jq docker.io docker-compose-plugin \
nodejs npm chromium \
&& npm install -g @anthropic-ai/mcp-playwright \
&& rm -rf /var/lib/apt/lists/*
RUN useradd -m -u 1000 -s /bin/bash agent
COPY docker/reproduce/entrypoint-reproduce.sh /entrypoint-reproduce.sh
RUN chmod +x /entrypoint-reproduce.sh
VOLUME /home/agent/data
VOLUME /home/agent/repos
WORKDIR /home/agent
ENTRYPOINT ["/entrypoint-reproduce.sh"]

File diff suppressed because it is too large


@ -1,115 +0,0 @@
#!/usr/bin/env bash
# entrypoint-runner.sh — Vault runner entrypoint
#
# Receives an action-id, reads the vault action TOML to get the formula name,
# then dispatches to the appropriate executor:
# - formulas/<name>.sh → bash (mechanical operations like release)
# - formulas/<name>.toml → claude -p (reasoning tasks like triage, architect)
#
# Usage: entrypoint-runner.sh <action-id>
#
# Expects:
# OPS_REPO_ROOT — path to the ops repo (mounted by compose)
# FACTORY_ROOT — path to disinto code (default: /home/agent/disinto)
#
# Part of #516.
set -euo pipefail
FACTORY_ROOT="${FACTORY_ROOT:-/home/agent/disinto}"
OPS_REPO_ROOT="${OPS_REPO_ROOT:-/home/agent/ops}"
log() {
printf '[%s] runner: %s\n' "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" "$*"
}
# Configure git credential helper so formulas can clone/push without
# needing tokens embedded in remote URLs (#604).
if [ -f "${FACTORY_ROOT}/lib/git-creds.sh" ]; then
# shellcheck source=lib/git-creds.sh
source "${FACTORY_ROOT}/lib/git-creds.sh"
# shellcheck disable=SC2119 # no args intended — uses defaults
configure_git_creds
fi
# ── Argument parsing ─────────────────────────────────────────────────────
action_id="${1:-}"
if [ -z "$action_id" ]; then
log "ERROR: action-id argument required"
echo "Usage: entrypoint-runner.sh <action-id>" >&2
exit 1
fi
# ── Read vault action TOML ───────────────────────────────────────────────
action_toml="${OPS_REPO_ROOT}/vault/actions/${action_id}.toml"
if [ ! -f "$action_toml" ]; then
log "ERROR: vault action TOML not found: ${action_toml}"
exit 1
fi
# Extract formula name from TOML
formula=$(grep -E '^formula[[:space:]]*=' "$action_toml" \
| sed -E 's/^formula[[:space:]]*=[[:space:]]*"(.*)"/\1/' | tr -d '\r')
if [ -z "$formula" ]; then
log "ERROR: no 'formula' field found in ${action_toml}"
exit 1
fi
# Extract context for logging
context=$(grep -E '^context[[:space:]]*=' "$action_toml" \
| sed -E 's/^context[[:space:]]*=[[:space:]]*"(.*)"/\1/' | tr -d '\r')
log "Action: ${action_id}, formula: ${formula}, context: ${context:-<none>}"
# Export action TOML path so formula scripts can use it directly
export VAULT_ACTION_TOML="$action_toml"
# ── Dispatch: .sh (mechanical) vs .toml (Claude reasoning) ──────────────
formula_sh="${FACTORY_ROOT}/formulas/${formula}.sh"
formula_toml="${FACTORY_ROOT}/formulas/${formula}.toml"
if [ -f "$formula_sh" ]; then
# Mechanical operation — run directly
log "Dispatching to shell script: ${formula_sh}"
exec bash "$formula_sh" "$action_id"
elif [ -f "$formula_toml" ]; then
# Reasoning task — launch Claude with the formula as prompt
log "Dispatching to Claude with formula: ${formula_toml}"
formula_content=$(cat "$formula_toml")
action_context=$(cat "$action_toml")
prompt="You are a vault runner executing a formula-based operational task.
## Vault action
\`\`\`toml
${action_context}
\`\`\`
## Formula
\`\`\`toml
${formula_content}
\`\`\`
## Instructions
Execute the steps defined in the formula above. The vault action context provides
the specific parameters for this run. Execute each step in order, verifying
success before proceeding to the next.
FACTORY_ROOT=${FACTORY_ROOT}
OPS_REPO_ROOT=${OPS_REPO_ROOT}
"
exec claude -p "$prompt" \
--dangerously-skip-permissions \
${CLAUDE_MODEL:+--model "$CLAUDE_MODEL"}
else
log "ERROR: no formula found for '${formula}' — checked ${formula_sh} and ${formula_toml}"
exit 1
fi
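The `grep`/`sed` pipeline above extracts only simple top-level `key = "value"` lines rather than parsing full TOML. The same pattern, sketched in Python (the function name is illustrative):

```python
import re

def read_toml_string_field(text, key):
    """Return the value of a top-level `key = "value"` line, or None.
    Mirrors the grep/sed extraction in entrypoint-runner.sh; it does not
    handle full TOML (tables, multiline strings, escapes)."""
    m = re.search(rf'^{re.escape(key)}\s*=\s*"(.*)"\s*$', text, re.MULTILINE)
    return m.group(1) if m else None
```

This is deliberately narrow: a vault action TOML that moves `formula` into a table, or uses single quotes, would need a real TOML parser.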


@ -114,3 +114,4 @@ When reviewing PRs or designing new agents, ask:
| gardener | 1242 (agent 471 + poll 771) | Medium — backlog triage, duplicate detection, tech-debt scoring | Poll is heavy orchestration; agent is prompt-driven |
| vault | 442 (4 scripts) | Medium — approval flow, human gate decisions | Intentionally bash-heavy (security gate should be deterministic) |
| planner | 382 | Medium — AGENTS.md update, gap analysis | Tmux+formula (done, #232) |
| action-agent | 192 | Light — formula execution | Close to target |


@ -1,25 +0,0 @@
# Vault blast-radius tiers
## Tiers
| Tier | Meaning | Dispatch path |
|------|---------|---------------|
| low | Revertible, no external side effects | Direct commit to ops main; no human gate |
| medium | Significant but reversible | PR on ops repo; blocks calling agent until merged |
| high | Irreversible or high-blast-radius | PR on ops repo; hard blocks |
## Which agents are affected
Vault-blocking applies to: predictor, planner, architect, deploy pipelines, releases, shipping.
It does NOT apply to dev-agent — dev-agent work is always committed to a feature branch and
is revertible via `git revert`. Dev-agent never needs a vault gate.
## Default tier
Unknown formulas default to `high`. When adding a new formula, add it to
`vault/policy.toml` (in the ops repo, seeded during `disinto init` from the disinto repo template).
## Per-action override
A vault action TOML may include `blast_radius = "low"` to override the policy tier
for that specific invocation. Use sparingly — policy.toml is the authoritative source.
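The resolution order described above can be sketched as follows; the function and field names are illustrative, not the actual vault implementation:

```python
def resolve_blast_radius(action, policy, formula):
    """Tier resolution: a per-action override wins, then the policy.toml
    entry for the formula, then the 'high' default for unknown formulas.
    Names are assumptions for illustration."""
    if "blast_radius" in action:
        return action["blast_radius"]
    return policy.get(formula, "high")
```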


@ -1,138 +0,0 @@
# Claude Code OAuth Concurrency Model
## Problem statement
The factory runs multiple concurrent Claude Code processes across
containers. OAuth access tokens are short-lived; refresh tokens rotate
on each use. If two processes POST the same refresh token to Anthropic's
token endpoint simultaneously, only one wins — the other gets
`invalid_grant` and the operator is forced to re-login.
Claude Code already serializes OAuth refreshes internally using
`proper-lockfile` (`src/utils/auth.ts:1485-1491`):
```typescript
release = await lockfile.lock(claudeDir)
```
`proper-lockfile` creates a lockfile via an atomic `mkdir(${path}.lock)`
call — a cross-process primitive that works across any number of
processes on the same filesystem. The problem was never the lock
implementation; it was that our old per-container bind-mount layout
(`~/.claude` mounted but `/home/agent/` container-local) caused each
container to compute a different lockfile path, so the locks never
coordinated.
## The fix: shared `CLAUDE_CONFIG_DIR`
`CLAUDE_CONFIG_DIR` is an officially supported env var in Claude Code
(`src/utils/envUtils.ts`). It controls where Claude resolves its config
directory instead of the default `~/.claude`.
By setting `CLAUDE_CONFIG_DIR` to a path on a shared bind mount, every
container computes the **same** lockfile location. `proper-lockfile`'s
atomic `mkdir(${CLAUDE_CONFIG_DIR}.lock)` then gives free cross-container
serialization — no external wrapper needed.
## Current layout
```
Host filesystem:
/var/lib/disinto/claude-shared/ ← CLAUDE_SHARED_DIR
└── config/ ← CLAUDE_CONFIG_DIR
├── .credentials.json
├── settings.json
└── ...
Inside every container:
Same absolute path: /var/lib/disinto/claude-shared/config
Env: CLAUDE_CONFIG_DIR=/var/lib/disinto/claude-shared/config
```
The shared directory is mounted at the **same absolute path** inside
every container, so `proper-lockfile` resolves an identical lock path
everywhere.
### Where these values are defined
| What | Where |
|------|-------|
| Defaults for `CLAUDE_SHARED_DIR`, `CLAUDE_CONFIG_DIR` | `lib/env.sh:138-140` |
| `.env` documentation | `.env.example:92-99` |
| Container mounts + env passthrough (edge dispatcher) | `docker/edge/dispatcher.sh:446-448` (and analogous blocks for reproduce, triage, verify) |
| Auth detection using `CLAUDE_CONFIG_DIR` | `docker/agents/entrypoint.sh:101-102` |
| Bootstrap / migration during `disinto init` | `lib/claude-config.sh:setup_claude_config_dir()`, `bin/disinto:952-962` |
## Migration for existing dev boxes
For operators upgrading from the old `~/.claude` bind-mount layout,
`disinto init` handles the migration interactively (or with `--yes`).
The manual equivalent is:
```bash
# 1. Stop the factory
disinto down
# 2. Create the shared directory
mkdir -p /var/lib/disinto/claude-shared
# 3. Move existing config
mv "$HOME/.claude" /var/lib/disinto/claude-shared/config
# 4. Create a back-compat symlink so host-side claude still works
ln -sfn /var/lib/disinto/claude-shared/config "$HOME/.claude"
# 5. Export the env var (add to shell rc for persistence)
export CLAUDE_CONFIG_DIR=/var/lib/disinto/claude-shared/config
# 6. Start the factory
disinto up
```
## Verification
Watch for these analytics events during concurrent agent runs:
| Event | Meaning |
|-------|---------|
| `tengu_oauth_token_refresh_lock_acquiring` | A process is attempting to acquire the refresh lock |
| `tengu_oauth_token_refresh_lock_acquired` | Lock acquired; refresh proceeding |
| `tengu_oauth_token_refresh_lock_retry` | Lock is held by another process; retrying |
| `tengu_oauth_token_refresh_lock_race_resolved` | Contention detected and resolved normally |
| `tengu_oauth_token_refresh_lock_retry_limit_reached` | Lock acquisition failed after all retries |
**Healthy:** `_race_resolved` appearing during contention windows — this
means multiple processes tried to refresh simultaneously and the lock
correctly serialized them.
**Bad:** `_lock_retry_limit_reached` — indicates the lock is stuck or
the shared mount is not working. Verify that `CLAUDE_CONFIG_DIR` resolves
to the same path in all containers and that the filesystem supports
`mkdir` atomicity (any POSIX filesystem does).
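A quick probe for that `mkdir` semantics, runnable on the mount in question (`mktemp -d` stands in for the real shared path here):

```bash
# Probe: creating the same lock directory twice must fail the second time.
probe="$(mktemp -d)/probe.lock"
mkdir "$probe"
if mkdir "$probe" 2>/dev/null; then
  echo "UNEXPECTED: duplicate mkdir succeeded"
else
  echo "ok: second mkdir refused, lock semantics hold"
fi
rmdir "$probe"
```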
## The deferred external `flock` wrapper
`lib/agent-sdk.sh:139,144` still wraps every `claude` invocation in an
external `flock` on `${HOME}/.claude/session.lock`:
```bash
local lock_file="${HOME}/.claude/session.lock"
...
output=$(cd "$run_dir" && ( flock -w 600 9 || exit 1;
claude_run_with_watchdog claude "${args[@]}" ) 9>"$lock_file" ...)
```
With the `CLAUDE_CONFIG_DIR` fix in place, this external lock is
**redundant but harmless** — `proper-lockfile` serializes the refresh
internally, and `flock` serializes the entire invocation externally.
The external flock remains as a defense-in-depth measure; removal is
tracked as a separate vision-tier issue.
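For reference, the `flock` idiom in isolation (a sketch with a throwaway lock file, unlike the real `session.lock`):

```bash
# Same pattern as agent-sdk.sh: FD 9 holds the lock for the duration of
# the subshell, so concurrent invocations of this snippet serialize.
lock_file=$(mktemp)
output=$( ( flock -w 5 9 || exit 1
            echo "critical section ran" ) 9>"$lock_file" )
echo "$output"
rm -f "$lock_file"
```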
## See also
- `lib/env.sh:138-140``CLAUDE_SHARED_DIR` / `CLAUDE_CONFIG_DIR` defaults
- `lib/claude-config.sh` — migration helper used by `disinto init`
- `lib/agent-sdk.sh:139,144` — the external `flock` wrapper (deferred removal)
- `docker/agents/entrypoint.sh:101-102``CLAUDE_CONFIG_DIR` auth detection
- `.env.example:92-99` — operator-facing documentation of the env vars
- Issue #623 — chat container auth strategy

View file

@ -39,11 +39,9 @@ programmatically instead of parsing SKILL.md instructions.
(`mcp` package). This adds a build step, runtime dependency, and
language that no current contributor or agent maintains.
2. **Persistent process.** The factory already runs a long-lived polling loop
(`docker/agents/entrypoint.sh`), so an MCP server is not architecturally
alien — the loop could keep an MCP client alive across iterations. However,
adding a second long-running process increases the monitoring surface and
restart complexity.
2. **Persistent process.** The factory is cron-driven — no long-running
daemons. An MCP server must stay up, be monitored, and be restarted on
failure. This contradicts the factory's event-driven architecture (AD-004).
3. **Thin wrapper over existing APIs.** Every proposed MCP tool maps directly
to a forge API call or a skill script invocation. The MCP server would be

View file

@ -92,9 +92,10 @@ PHASE:failed → label issue blocked, post diagnostic comment
### `idle_prompt` exit reason
The phase monitor can exit with `_MONITOR_LOOP_EXIT=idle_prompt`. This happens
when Claude returns to the interactive prompt (``) for **3 consecutive polls**
without writing any phase signal to the phase file.
`monitor_phase_loop` (in `lib/agent-session.sh`) can exit with
`_MONITOR_LOOP_EXIT=idle_prompt`. This happens when Claude returns to the
interactive prompt (``) for **3 consecutive polls** without writing any phase
signal to the phase file.
**Trigger conditions:**
- The phase file is empty (no phase has ever been written), **and**
@ -110,13 +111,14 @@ without writing any phase signal to the phase file.
callback without the phase file actually containing that value.
**Agent requirements:**
- **Callback:** Must handle `PHASE:failed` defensively — the session is already
dead, so any tmux send-keys or session-dependent logic must be skipped or
guarded.
- **Callback (`_on_phase_change` / `formula_phase_callback`):** Must handle
`PHASE:failed` defensively — the session is already dead, so any tmux
send-keys or session-dependent logic must be skipped or guarded.
- **Post-loop exit handler (`case $_MONITOR_LOOP_EXIT`):** Must include an
`idle_prompt)` branch. Typical actions: log the event, clean up temp files,
and (for agents that use escalation) write an escalation entry or notify via
vault/forge. See `dev/dev-agent.sh` for reference implementations.
vault/forge. See `dev/dev-agent.sh`, `action/action-agent.sh`, and
`gardener/gardener-agent.sh` for reference implementations.
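The exit-handler shape described above, as a minimal sketch (branch bodies are placeholders; see `dev/dev-agent.sh` for a real implementation):

```bash
# Post-loop dispatch on the monitor's exit reason.
handle_monitor_exit() {
  case "${_MONITOR_LOOP_EXIT:-}" in
    idle_prompt)
      # Session idled at the prompt: log, clean up, optionally escalate.
      echo "idle_prompt: cleaning up and escalating"
      ;;
    *)
      echo "exit reason: ${_MONITOR_LOOP_EXIT:-unknown}"
      ;;
  esac
}
```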
## Crash Recovery

View file

@ -1,101 +0,0 @@
# Vault PR Workflow
This document describes the vault PR-based approval workflow for the ops repo.
## Overview
The vault system enables agents to request execution of privileged actions (deployments, token operations, etc.) through a PR-based approval process. This replaces the old vault directory structure with a more auditable, collaborative workflow.
## Branch Protection
The `main` branch on the ops repo (`johba/disinto-ops`) is protected via Forgejo branch protection to enforce:
- **Require 1 approval before merge** — All vault PRs must have at least one approval from an admin user
- **Admin-only merge** — Only users with admin role can merge vault PRs (regular collaborators and bot accounts cannot)
- **Block direct pushes** — All changes to `main` must go through PRs
### Protection Rules
| Setting | Value |
|---------|-------|
| `enable_push` | `false` |
| `enable_force_push` | `false` |
| `enable_merge_commit` | `true` |
| `required_approvals` | `1` |
| `admin_enforced` | `true` |
## Vault PR Lifecycle
1. **Request** — Agent calls `lib/action-vault.sh:vault_request()` with action TOML content
2. **Validation** — TOML is validated against the schema in `action-vault/vault-env.sh`
3. **PR Creation** — A PR is created on `disinto-ops` with:
- Branch: `vault/<action-id>`
- Title: `vault: <action-id>`
- Labels: `vault`, `pending-approval`
- File: `vault/actions/<action-id>.toml`
- **Auto-merge enabled** — Forgejo will auto-merge after approval
4. **Approval** — Admin user reviews and approves the PR
5. **Auto-merge** — Forgejo automatically merges the PR once required approvals are met
6. **Execution** — Dispatcher (issue #76) polls for merged vault PRs and executes them
7. **Cleanup** — Executed vault items are moved to `fired/` (via PR)
## Bot Account Behavior
Bot accounts (dev-bot, review-bot, vault-bot, etc.) **cannot merge vault PRs** even if they have approval, due to the `admin_enforced` setting. This ensures:
- Only human admins can approve sensitive vault actions
- Bot accounts can only create vault PRs, not execute them
- Bot accounts cannot self-approve vault PRs (Forgejo prevents this automatically)
- Manual admin review is always required for privileged operations
## Setup
To set up branch protection on the ops repo:
```bash
# Source environment
source lib/env.sh
source lib/branch-protection.sh
# Set up protection
setup_vault_branch_protection main
# Verify setup
verify_branch_protection main
```
Or use the CLI directly:
```bash
export FORGE_TOKEN="<admin-token>"
export FORGE_URL="https://codeberg.org"
export FORGE_OPS_REPO="johba/disinto-ops"
# Set up protection
bash lib/branch-protection.sh setup main
# Verify
bash lib/branch-protection.sh verify main
```
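Under the hood this amounts to a single API call. A sketch of the payload, built locally with `jq` (field names from the protection-rules table above; the endpoint path is the standard Forgejo one and should be verified against your instance's API docs):

```bash
# Build the branch-protection payload. POSTing it is left as a comment
# since it needs a live instance and an admin token:
#   curl -X POST -H "Authorization: token $FORGE_TOKEN" \
#     -H 'Content-Type: application/json' -d "$payload" \
#     "$FORGE_URL/api/v1/repos/$FORGE_OPS_REPO/branch_protections"
payload=$(jq -n '{
  branch_name: "main",
  enable_push: false,
  enable_force_push: false,
  required_approvals: 1
}')
echo "$payload"
```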
## Testing
To verify the protection is working:
1. **Bot cannot merge** — Attempt to merge a PR with a bot token (should fail with HTTP 405)
2. **Admin can merge** — Attempt to merge with admin token (should succeed)
3. **Direct push blocked** — Attempt `git push origin main` (should be rejected)
## Related Issues
- #73 — Vault redesign proposal
- #74 — Vault action TOML schema
- #75 — Vault PR creation helper (`lib/action-vault.sh`)
- #76 — Dispatcher rewrite (poll for merged vault PRs)
- #77 — Branch protection on ops repo (this issue)
## See Also
- [`lib/action-vault.sh`](../lib/action-vault.sh) — Vault PR creation helper
- [`action-vault/vault-env.sh`](../action-vault/vault-env.sh) — TOML validation
- [`lib/branch-protection.sh`](../lib/branch-protection.sh) — Branch protection helper

View file

@ -1,42 +0,0 @@
# agents-llama — Local-Qwen Dev Agent
The `agents-llama` service is an optional compose service that runs a dev agent
backed by a local llama-server instance (e.g. Qwen) instead of the Anthropic
API. It uses the same Docker image as the main `agents` service but connects to
a local inference endpoint via `ANTHROPIC_BASE_URL`.
## Enabling
Set `ENABLE_LLAMA_AGENT=1` in `.env` (or `.env.enc`) and provide the required
credentials:
```env
ENABLE_LLAMA_AGENT=1
FORGE_TOKEN_LLAMA=<dev-qwen API token>
FORGE_PASS_LLAMA=<dev-qwen password>
ANTHROPIC_BASE_URL=http://host.docker.internal:8081 # llama-server endpoint
```
Then regenerate the compose file (`disinto init ...`) and bring the stack up.
## Prerequisites
- **llama-server** (or compatible OpenAI-API endpoint) running on the host,
reachable from inside Docker at the URL set in `ANTHROPIC_BASE_URL`.
- A Forgejo bot user (e.g. `dev-qwen`) with its own API token and password,
stored as `FORGE_TOKEN_LLAMA` / `FORGE_PASS_LLAMA`.
## Behaviour
- `AGENT_ROLES=dev` — the llama agent only picks up dev work.
- `CLAUDE_AUTOCOMPACT_PCT_OVERRIDE=60` — more aggressive compaction for smaller
context windows.
- `depends_on: forgejo (service_healthy)` — does **not** depend on Woodpecker
(the llama agent doesn't need CI).
- Serialises on the llama-server's single KV cache (AD-002).
## Disabling
Set `ENABLE_LLAMA_AGENT=0` (or leave it unset) and regenerate. The service
block is omitted entirely from `docker-compose.yml`; the stack starts cleanly
without it.

View file

@ -1,149 +0,0 @@
# Edge Routing Fallback: Per-Project Subdomains
> **Status:** Contingency plan. Only implement if subpath routing (#704 / #708)
> proves unworkable.
## Context
The primary approach routes services under subpaths of `<project>.disinto.ai`:
| Service | Primary (subpath) |
|------------|--------------------------------------------|
| Forgejo | `<project>.disinto.ai/forge/` |
| Woodpecker | `<project>.disinto.ai/ci/` |
| Chat | `<project>.disinto.ai/chat/` |
| Staging | `<project>.disinto.ai/staging/` |
The fallback uses per-service subdomains instead:
| Service | Fallback (subdomain) |
|------------|--------------------------------------------|
| Forgejo | `forge.<project>.disinto.ai/` |
| Woodpecker | `ci.<project>.disinto.ai/` |
| Chat | `chat.<project>.disinto.ai/` |
| Staging | `<project>.disinto.ai/` (root) |
The wildcard cert from #621 covers `*.<project>.disinto.ai`, so per-service
subdomains like `forge.<project>.disinto.ai` need no new certificates. DNS is
the catch: a `*.disinto.ai` wildcard record only matches one level deep, so
for sub-subdomains we would need either a second wildcard (`*.*.disinto.ai`)
or explicit DNS records per project. Both are straightforward with the
existing Gandi DNS-01 setup.
## Pivot Decision Criteria
**Pivot if:**
- Forgejo `ROOT_URL` under a subpath (`/forge/`) causes redirect loops that
cannot be fixed with `X-Forwarded-Prefix` or Caddy `uri strip_prefix`.
- Woodpecker's `WOODPECKER_HOST` does not honour subpath prefixes, causing
OAuth callback mismatches that persist after adjusting redirect URIs.
- Forward-auth on `/chat/*` conflicts with Forgejo's own OAuth flow when both
share the same origin (cookie collision, CSRF token mismatch).
**Do NOT pivot if:**
- Forgejo login redirects to `/` instead of `/forge/` — fixable with Caddy
`handle_path` + `uri prefix` rewrite.
- Woodpecker UI assets 404 under `/ci/` — fixable with asset prefix config
(`WOODPECKER_ROOT_PATH`).
- A single OAuth app needs a second redirect URI — Forgejo supports multiple
`redirect_uris` in the same app.
## Fallback Topology
### Caddyfile
Replace the single `:80` block with four host blocks:
```caddy
# Main project domain — staging / landing
<project>.disinto.ai {
reverse_proxy staging:80
}
# Forgejo — root path, no subpath rewrite needed
forge.<project>.disinto.ai {
reverse_proxy forgejo:3000
}
# Woodpecker CI — root path
ci.<project>.disinto.ai {
reverse_proxy woodpecker:8000
}
# Chat — with forward_auth (same as #709, but on its own host)
chat.<project>.disinto.ai {
handle /login {
reverse_proxy chat:8080
}
handle /oauth/callback {
reverse_proxy chat:8080
}
handle /* {
forward_auth chat:8080 {
uri /auth/verify
copy_headers X-Forwarded-User
header_up X-Forward-Auth-Secret {$FORWARD_AUTH_SECRET}
}
reverse_proxy chat:8080
}
}
```
**Current file:** `docker/Caddyfile` (generated by `lib/generators.sh:_generate_caddyfile_impl`, line ~596).
### Service Configuration Changes
| Variable / Setting | Current (subpath) | Fallback (subdomain) | File |
|----------------------------|------------------------------------------------|-------------------------------------------------|-----------------------------|
| Forgejo `ROOT_URL` | `https://<project>.disinto.ai/forge/` | `https://forge.<project>.disinto.ai/` | forgejo `app.ini` |
| `WOODPECKER_HOST` | `http://localhost:8000` (subpath via proxy) | `https://ci.<project>.disinto.ai` | `lib/ci-setup.sh` line ~164 |
| Woodpecker OAuth redirect | `https://<project>.disinto.ai/ci/authorize` | `https://ci.<project>.disinto.ai/authorize` | `lib/ci-setup.sh` line ~153 |
| Chat OAuth redirect | `https://<project>.disinto.ai/chat/oauth/callback` | `https://chat.<project>.disinto.ai/oauth/callback` | `lib/ci-setup.sh` line ~188 |
| `EDGE_TUNNEL_FQDN` | `<project>.disinto.ai` | unchanged (main domain) | `lib/generators.sh` line ~432 |
### New Environment Variables (pivot only)
These would be added to `lib/generators.sh` `_generate_compose_impl()` in the
edge service environment block (currently line ~415):
| Variable | Value |
|------------------------------|----------------------------------------|
| `EDGE_TUNNEL_FQDN_FORGE` | `forge.<project>.disinto.ai` |
| `EDGE_TUNNEL_FQDN_CI` | `ci.<project>.disinto.ai` |
| `EDGE_TUNNEL_FQDN_CHAT` | `chat.<project>.disinto.ai` |
### DNS
No new records needed if the registrar supports `*.*.disinto.ai` wildcards.
Otherwise, add explicit A/CNAME records per project:
```
forge.<project>.disinto.ai → edge server IP
ci.<project>.disinto.ai → edge server IP
chat.<project>.disinto.ai → edge server IP
```
The edge server already handles TLS via Caddy's automatic HTTPS with the
existing ACME / DNS-01 challenge.
### Edge Control (`tools/edge-control/register.sh`)
Currently `do_register()` creates a single route for `<project>.disinto.ai`.
The fallback would need to register four routes (or accept a `--subdomain`
parameter). See the TODO in `register.sh`.
## Files to Change on Pivot
| File | What changes |
|-----------------------------------|-----------------------------------------------------------------|
| `docker/Caddyfile` | Replace single host block → four host blocks (see above) |
| `lib/generators.sh` | Add `EDGE_TUNNEL_FQDN_{FORGE,CI,CHAT}` env vars to compose |
| `lib/ci-setup.sh` ~line 153 | Woodpecker OAuth redirect URI → `ci.<project>` subdomain |
| `lib/ci-setup.sh` ~line 188 | Chat OAuth redirect URI → `chat.<project>` subdomain |
| `tools/edge-control/register.sh` | Register four routes per project instead of one |
| `tools/edge-control/lib/caddy.sh`| `add_route()` gains subdomain support |
| forgejo `app.ini` | `ROOT_URL``https://forge.<project>.disinto.ai/` |
Estimated effort for a full pivot: **under one day** given this plan.

View file

@ -1,123 +0,0 @@
# Investigation: Reviewer approved destructive compose rewrite in PR #683
**Issue**: #685
**Date**: 2026-04-11
**PR under investigation**: #683 (fix: config: gardener=1h, architect=9m, planner=11m)
## Summary
The reviewer agent approved PR #683 in ~1 minute without flagging that it
contained a destructive rewrite of `docker-compose.yml` — dropping named
volumes, bind mounts, env vars, restart policy, and security options. Six
structural gaps in the review pipeline allowed this to pass.
## Root causes
### 1. No infrastructure-file-specific review checklist
The review formula (`formulas/review-pr.toml`) has a generic review checklist
(bugs, security, imports, architecture, bash specifics, dead code). It has
**no special handling for infrastructure files** — `docker-compose.yml`,
`Dockerfile`, CI configs, or `entrypoint.sh` are reviewed with the same
checklist as application code.
Infrastructure files have a different failure mode: a single dropped line
(a volume mount, an env var, a restart policy) can break a running deployment
without any syntax error or linting failure. The generic checklist doesn't
prompt the reviewer to check for these regressions.
**Fix applied**: Added step 3c "Infrastructure file review" to
`formulas/review-pr.toml` with a compose-specific checklist covering named
volumes, bind mounts, env vars, restart policy, and security options.
### 2. No scope discipline
Issue #682 asked for ~3 env var changes + `PLANNER_INTERVAL` plumbing — roughly
10-15 lines across 3-4 files. PR #683's diff rewrote the entire compose service
block (~50+ lines changed in `docker-compose.yml` alone).
The review formula **does not instruct the reviewer to compare diff size against
issue scope**. A scope-aware reviewer would flag: "this PR changes more lines
than the issue scope warrants — request justification for out-of-scope changes."
**Fix applied**: Added step 3d "Scope discipline" to `formulas/review-pr.toml`
requiring the reviewer to compare actual changes against stated issue scope and
flag out-of-scope modifications to infrastructure files.
### 3. Lessons-learned bias toward approval
The reviewer's `.profile/knowledge/lessons-learned.md` contains multiple entries
that systematically bias toward approval:
- "Approval means 'ready to ship,' not 'perfect.'"
- "'Different from how I'd write it' is not a blocker."
- "Reserve request_changes for genuinely blocking concerns."
These lessons are well-intentioned (they prevent nit-picking and false blocks)
but they create a blind spot: the reviewer suppresses its instinct to flag
suspicious-looking changes because the lessons tell it not to block on
"taste-based" concerns. A compose service block rewrite *looks* like a style
preference ("the dev reorganized the file") but is actually a correctness
regression.
**Recommendation**: The lessons-learned are not wrong — they should stay. But
the review formula now explicitly carves out infrastructure files from the
"bias toward APPROVE" guidance, making it clear that dropped infra
configuration is a blocking concern, not a style preference.
### 4. No ground-truth for infrastructure files
The reviewer only sees the diff. It has no way to compare against the running
container's actual volume/env config. When dev-qwen rewrote a 30-line service
block from scratch, the reviewer saw a 30-line addition and a 30-line deletion
with no reference point.
**Recommendation (future work)**: Maintain a `docker/expected-compose-config.yml`
or have the reviewer fetch `docker compose config` output as ground truth when
reviewing compose changes. This would let the reviewer diff the proposed config
against the known-good config.
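The core of that check is just a diff of two rendered configs. A sketch (rendering with `docker compose config` is shown as comments since it needs a daemon; the stand-in files here are hypothetical):

```bash
# Render both sides, then diff:
#   git show origin/main:docker-compose.yml > base.yml
#   docker compose -f base.yml config            > known-good.yml
#   docker compose -f docker-compose.yml config  > proposed.yml
base=$(mktemp); proposed=$(mktemp)
printf 'volumes:\n  caddy_data: {}\n' > "$base"      # stand-in known-good
printf 'volumes: {}\n'                > "$proposed"  # stand-in proposed
if diff -u "$base" "$proposed" >/dev/null; then
  verdict="configs match"
else
  verdict="configs diverged: review dropped keys"
fi
echo "$verdict"
rm -f "$base" "$proposed"
```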
### 5. Structural analysis blind spot
`lib/build-graph.py` tracks changes to files in `formulas/`, agent directories
(`dev/`, `review/`, etc.), and `evidence/`. It does **not track infrastructure
files** (`docker-compose.yml`, `docker/`, `.woodpecker/`). Changes to these
files produce no alerts in the graph report — the reviewer gets no
"affected objectives" signal for infrastructure changes.
**Recommendation (future work)**: Add infrastructure file tracking to
`build-graph.py` so that compose/Dockerfile/CI changes surface in the
structural analysis.
### 6. Model and time budget
Reviews use Sonnet (`CLAUDE_MODEL="sonnet"` at `review-pr.sh:229`) with a
15-minute timeout. The PR #683 review completed in ~1 minute. Sonnet is
optimized for speed, which is appropriate for most code reviews, but
infrastructure changes benefit from the deeper reasoning of a more capable
model.
**Recommendation (future work)**: Consider escalating to a more capable model
when the diff includes infrastructure files (compose, Dockerfiles, CI configs).
## Changes made
1. **`formulas/review-pr.toml`** — Added two new review steps:
- **Step 3c: Infrastructure file review** — When the diff touches
`docker-compose.yml`, `Dockerfile*`, `.woodpecker/`, or `docker/`,
requires checking for dropped volumes, bind mounts, env vars, restart
policy, security options, and network config. Instructs the reviewer to
read the full file (not just the diff) and compare against the base branch.
- **Step 3d: Scope discipline** — Requires comparing the actual diff
footprint against the stated issue scope. Flags out-of-scope rewrites of
infrastructure files as blocking concerns.
## What would have caught this
With the changes above, the reviewer would have:
1. Seen step 3c trigger for `docker-compose.yml` changes
2. Read the full compose file and compared against the base branch
3. Noticed the dropped named volumes, bind mounts, env vars, restart policy
4. Seen step 3d flag that a 3-env-var issue produced a 50+ line compose rewrite
5. Issued REQUEST_CHANGES citing specific dropped configuration

View file

@ -1,59 +0,0 @@
# Mirror Bootstrap — Pull-Mirror Cutover Path
How to populate an empty Forgejo repo from an external source using
`lib/mirrors.sh`'s `mirror_pull_register()`.
## Prerequisites
| Variable | Example | Purpose |
|---|---|---|
| `FORGE_URL` | `http://forgejo:3000` | Forgejo instance base URL |
| `FORGE_API_BASE` | `${FORGE_URL}/api/v1` | Global API base (set by `lib/env.sh`) |
| `FORGE_TOKEN` | (admin or org-owner token) | Must have `repo:create` scope |
The target org/user must already exist on the Forgejo instance.
## Command
```bash
source lib/env.sh
source lib/mirrors.sh
# Register a pull mirror — creates the repo and starts the first sync.
# Arguments: source URL, target owner, target repo name,
# sync interval (optional, default 8h). Comments must not follow the
# line-continuation backslashes, or bash treats them as arguments.
mirror_pull_register \
  "https://codeberg.org/johba/disinto.git" \
  "disinto-admin" \
  "disinto" \
  "8h0m0s"
```
The function calls `POST /api/v1/repos/migrate` with `mirror: true`.
Forgejo creates the repo and immediately queues the first sync.
## Verifying the sync
```bash
# Check mirror status via API
forge_api GET "/repos/disinto-admin/disinto" | jq '.mirror, .mirror_interval'
# Confirm content arrived — should list branches
forge_api GET "/repos/disinto-admin/disinto/branches" | jq '.[].name'
```
The first sync typically completes within a few seconds for small-to-medium
repos. For large repos, poll the branches endpoint until content appears.
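The poll can be a small retry helper (a generic sketch; in practice the command is the `forge_api ... | jq` pipeline above):

```bash
# Run a command until it succeeds, up to max_tries with a fixed delay.
retry_until() {
  max_tries=$1; delay=$2; shift 2
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$max_tries" ] && return 1
    sleep "$delay"
  done
}

# Usage against the branches endpoint:
#   retry_until 30 10 sh -c \
#     'forge_api GET "/repos/disinto-admin/disinto/branches" | jq -e "length > 0" >/dev/null'
```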
## Cutover scenario (Nomad migration)
At cutover to the Nomad box:
1. Stand up fresh Forgejo on the Nomad cluster (empty instance).
2. Create the `disinto-admin` org via `disinto init` or API.
3. Run `mirror_pull_register` pointing at the Codeberg source.
4. Wait for sync to complete (check branches endpoint).
5. Once content is confirmed, proceed with `disinto init` against the
now-populated repo — all subsequent `mirror_push` calls will push
to any additional mirrors configured in `projects/*.toml`.
No manual `git clone` + `git push` step is needed. The Forgejo pull-mirror
handles the entire transfer.

View file

@ -1,175 +0,0 @@
# Updating the Disinto Factory
How to update the disinto factory code on a deployment box (e.g. harb-dev-box)
after a new version lands on the upstream Forgejo.
## Prerequisites
- SSH access to the deployment box
- The upstream remote (`devbox`) pointing to the disinto-dev-box Forgejo
## Step 1: Pull the latest code
```bash
cd ~/disinto
git fetch devbox main
git log --oneline devbox/main -5 # review what changed
git stash # save any local fixes
git merge devbox/main
```
## Note: docker-compose.yml is generator-only
The `docker-compose.yml` file is now generated exclusively by `bin/disinto init`.
The tracked file has been removed. If you have a local `docker-compose.yml` from
before this change, it is now "yours" and won't be touched by future updates.
To pick up generator improvements, delete the existing file and run `bin/disinto init`.
## Step 2: Preserve local config
These files are not in git but are needed at runtime. Back them up before
any compose regeneration:
```bash
cp .env .env.backup
cp projects/harb.toml projects/harb.toml.backup
cp docker-compose.override.yml docker-compose.override.yml.backup 2>/dev/null
```
## Step 3: Regenerate docker-compose.yml
If `generate_compose()` changed or you need a fresh compose file:
```bash
rm docker-compose.yml
source .env
bin/disinto init https://codeberg.org/johba/harb --branch master --yes
```
This will regenerate the compose but may fail partway through (token collisions,
existing users). The compose file is written early — check it exists even if
init errors out.
### Known post-regeneration fixes (resolved)
Most generator issues have been fixed; the following items no longer apply:
- **AppArmor (#492)** — Fixed: all services now have `apparmor=unconfined`
- **Forgejo image tag (#493)** — Fixed: generator uses `forgejo:11.0`
- **Agent credential mounts (#495)** — Fixed: `.claude`, `.claude.json`, `.ssh`, and `project-repos` volumes are auto-generated
- **Repo path (#494)** — Not applicable: `projects/*.toml` files are gitignored and preserved
If you need to add custom volumes, edit the generated `docker-compose.yml` directly.
It will not be overwritten by future `init` runs (the generator skips existing files).
## Step 4: Rebuild and restart
```bash
# Rebuild agents image (code is baked in via COPY)
docker compose build agents
# Restart all disinto services
docker compose up -d
# If edge fails to build (caddy:alpine has no apt-get), skip it:
docker compose up -d forgejo woodpecker woodpecker-agent agents staging
```
## Step 5: Verify
```bash
# All containers running?
docker ps --format 'table {{.Names}}\t{{.Status}}' | grep disinto
# Forgejo responding?
curl -sf -o /dev/null -w 'HTTP %{http_code}' http://localhost:3000/
# Claude auth works?
docker exec -u agent disinto-agents bash -c 'claude -p "say ok" 2>&1'
# Agent polling loop running?
docker exec disinto-agents pgrep -f entrypoint.sh
# If no process: check that entrypoint.sh is the container CMD and projects TOML is mounted.
# Agent repo cloned?
docker exec disinto-agents ls /home/agent/repos/harb/.git && echo ok
# If missing:
docker exec disinto-agents chown -R agent:agent /home/agent/repos
source .env
docker exec -u agent disinto-agents bash -c \
"git clone http://dev-bot:${FORGE_TOKEN}@forgejo:3000/johba/harb.git /home/agent/repos/harb"
# Git safe.directory (needed after volume recreation)
docker exec -u agent disinto-agents git config --global --add safe.directory /home/agent/repos/harb
```
## Step 6: Verify harb stack coexistence
```bash
# Harb stack still running?
cd ~/harb && docker compose ps --format 'table {{.Name}}\t{{.Status}}'
# No port conflicts?
# Forgejo: 3000, Woodpecker: 8000, harb caddy: 8081, umami: 3001
ss -tlnp | grep -E '3000|3001|8000|8081'
```
## Step 7: Docker disk hygiene
The reproduce image is ~1.3GB. Dangling images accumulate fast.
```bash
# Check disk
df -h /
# Prune dangling images (safe — only removes unused)
docker image prune -f
# Nuclear option (removes ALL unused images, volumes, networks):
docker system prune -af
# WARNING: this removes cached layers, requiring full rebuilds
```
## Troubleshooting
### Forgejo at 170%+ CPU, not responding
AppArmor issue. Add `security_opt: [apparmor=unconfined]` and recreate:
```bash
docker compose up -d forgejo
```
### "Not logged in" / OAuth expired
Re-auth on the host:
```bash
claude auth login
```
Credentials are bind-mounted into containers automatically.
Multiple containers sharing OAuth can cause frequent expiry — consider
using `ANTHROPIC_API_KEY` in `.env` instead.
### Agent loop not running after restart
The entrypoint reads `projects/*.toml` to determine which agents to run.
If the TOML isn't mounted or the disinto directory is read-only,
the polling loop won't start agents. Check:
```bash
docker exec disinto-agents ls /home/agent/disinto/projects/harb.toml
docker logs disinto-agents --tail 20 # look for "Entering polling loop"
```
### "fatal: not a git repository"
After image rebuilds, the baked-in `/home/agent/disinto` has no `.git`.
This breaks review-pr.sh (#408). Workaround:
```bash
docker exec -u agent disinto-agents git config --global --add safe.directory '*'
```
### Dev-agent stuck on closed issue
The dev-poll latches onto in-progress issues. If the issue was closed
externally, the agent skips it every cycle but never moves on. Check:
```bash
docker exec disinto-agents tail -5 /home/agent/data/logs/dev/dev-agent.log
```
Fix: clean the worktree and let it re-scan:
```bash
docker exec disinto-agents rm -rf /tmp/harb-worktree-*
```

View file

@ -1,172 +0,0 @@
# formulas/collect-engagement.toml — Collect website engagement data
#
# Daily formula: SSH into Caddy host, fetch access log, parse locally,
# commit evidence JSON to ops repo via Forgejo API.
#
# Triggered by cron in the edge container entrypoint (daily at 23:50 UTC).
# Design choices from #426: Q1=A (fetch raw log, process locally),
# Q2=A (direct cron in edge container), Q3=B (dedicated purpose-limited SSH key).
#
# Steps: fetch-log → parse-engagement → commit-evidence
name = "collect-engagement"
description = "SSH-fetch Caddy access log, parse engagement metrics, commit evidence"
version = 1
[context]
files = ["AGENTS.md"]
[vars.caddy_host]
description = "SSH host for the Caddy server"
required = false
default = "${CADDY_SSH_HOST:-disinto.ai}"
[vars.caddy_user]
description = "SSH user on the Caddy host"
required = false
default = "${CADDY_SSH_USER:-debian}"
[vars.caddy_log_path]
description = "Path to Caddy access log on the remote host"
required = false
default = "${CADDY_ACCESS_LOG:-/var/log/caddy/access.log}"
[vars.local_log_path]
description = "Local path to store fetched access log"
required = false
default = "/tmp/caddy-access-log-fetch.log"
[vars.evidence_dir]
description = "Evidence output directory in the ops repo"
required = false
default = "evidence/engagement"
# ── Step 1: SSH fetch ────────────────────────────────────────────────
[[steps]]
id = "fetch-log"
title = "Fetch Caddy access log from remote host via SSH"
description = """
Fetch today's Caddy access log segment from the remote host using SCP.
The SSH key is read from the environment (CADDY_SSH_KEY), which is
decrypted from secrets/CADDY_SSH_KEY.enc by the edge entrypoint. It is NEVER hardcoded.
1. Write the SSH key to a temporary file with restricted permissions:
_ssh_key_file=$(mktemp)
trap 'rm -f "$_ssh_key_file"' EXIT
printf '%s\n' "$CADDY_SSH_KEY" > "$_ssh_key_file"
chmod 0600 "$_ssh_key_file"
2. Verify connectivity:
ssh -i "$_ssh_key_file" -o StrictHostKeyChecking=accept-new \
-o ConnectTimeout=10 -o BatchMode=yes \
{{caddy_user}}@{{caddy_host}} 'echo ok'
3. Fetch the access log via scp:
scp -i "$_ssh_key_file" -o StrictHostKeyChecking=accept-new \
-o ConnectTimeout=10 -o BatchMode=yes \
"{{caddy_user}}@{{caddy_host}}:{{caddy_log_path}}" \
"{{local_log_path}}"
4. Verify the fetched file is non-empty:
if [ ! -s "{{local_log_path}}" ]; then
echo "WARNING: fetched access log is empty — site may have no traffic"
else
echo "Fetched $(wc -l < "{{local_log_path}}") lines from {{caddy_host}}"
fi
5. Clean up the temporary key file:
rm -f "$_ssh_key_file"
"""
# ── Step 2: Parse engagement ─────────────────────────────────────────
[[steps]]
id = "parse-engagement"
title = "Run collect-engagement.sh against the local log copy"
description = """
Run the engagement parser against the locally fetched access log.
1. Set CADDY_ACCESS_LOG to point at the local copy so collect-engagement.sh
reads from it instead of the default path:
export CADDY_ACCESS_LOG="{{local_log_path}}"
2. Run the parser:
bash "$FACTORY_ROOT/site/collect-engagement.sh"
3. Verify the evidence JSON was written:
REPORT_DATE=$(date -u +%Y-%m-%d)
EVIDENCE_FILE="${OPS_REPO_ROOT}/{{evidence_dir}}/${REPORT_DATE}.json"
if [ -f "$EVIDENCE_FILE" ]; then
echo "Evidence written: $EVIDENCE_FILE"
jq . "$EVIDENCE_FILE"
else
echo "ERROR: evidence file not found at $EVIDENCE_FILE"
exit 1
fi
4. Clean up the fetched log:
rm -f "{{local_log_path}}"
"""
needs = ["fetch-log"]
# ── Step 3: Commit evidence ──────────────────────────────────────────
[[steps]]
id = "commit-evidence"
title = "Commit evidence JSON to ops repo via Forgejo API"
description = """
Commit the dated evidence JSON to the ops repo so the planner can
consume it during gap analysis.
1. Read the evidence file:
REPORT_DATE=$(date -u +%Y-%m-%d)
EVIDENCE_FILE="${OPS_REPO_ROOT}/{{evidence_dir}}/${REPORT_DATE}.json"
CONTENT=$(base64 -w0 < "$EVIDENCE_FILE")
2. Check if the file already exists in the ops repo (update vs create):
OPS_OWNER="${OPS_FORGE_OWNER:-${FORGE_REPO%%/*}}"
OPS_REPO="${OPS_FORGE_REPO:-${PROJECT_NAME:-disinto}-ops}"
FILE_PATH="{{evidence_dir}}/${REPORT_DATE}.json"
EXISTING=$(curl -sf \
-H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_URL}/api/v1/repos/${OPS_OWNER}/${OPS_REPO}/contents/${FILE_PATH}" \
2>/dev/null || echo "")
3. Create or update the file via Forgejo API:
if [ -n "$EXISTING" ] && printf '%s' "$EXISTING" | jq -e '.sha' >/dev/null 2>&1; then
# Update existing file
SHA=$(printf '%s' "$EXISTING" | jq -r '.sha')
curl -sf -X PUT \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_URL}/api/v1/repos/${OPS_OWNER}/${OPS_REPO}/contents/${FILE_PATH}" \
-d "$(jq -nc --arg content "$CONTENT" --arg sha "$SHA" --arg msg "evidence: engagement ${REPORT_DATE}" \
'{message: $msg, content: $content, sha: $sha}')"
echo "Updated existing evidence file in ops repo"
else
# Create new file
curl -sf -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_URL}/api/v1/repos/${OPS_OWNER}/${OPS_REPO}/contents/${FILE_PATH}" \
-d "$(jq -nc --arg content "$CONTENT" --arg msg "evidence: engagement ${REPORT_DATE}" \
'{message: $msg, content: $content}')"
echo "Created evidence file in ops repo"
fi
4. Verify the commit landed:
VERIFY=$(curl -sf \
-H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_URL}/api/v1/repos/${OPS_OWNER}/${OPS_REPO}/contents/${FILE_PATH}" \
| jq -r '.name // empty')
if [ "$VERIFY" = "${REPORT_DATE}.json" ]; then
echo "Evidence committed: ${FILE_PATH}"
else
echo "ERROR: could not verify evidence commit"
exit 1
fi
"""
needs = ["parse-engagement"]

View file

@ -1,175 +0,0 @@
# formulas/dev.toml — Dev agent formula (issue implementation)
#
# Executed by dev/dev-agent.sh via tmux session with Claude.
# dev-agent.sh is called by dev-poll.sh which finds the next ready issue
# from the backlog (priority tier first, then plain backlog).
#
# Steps: preflight → implement → CI → review → merge → journal
#
# Key behaviors:
# - Creates worktree for isolation
# - Uses tmux session for persistent Claude interaction
# - Phase-file signaling for orchestrator coordination
# - Auto-retry on CI failures (max 3 attempts)
# - Direct-merge for approved PRs (bypasses lock)
name = "dev"
description = "Issue implementation: code, commit, push, address CI/review"
version = 1
model = "sonnet"
[context]
files = ["AGENTS.md", "dev/AGENTS.md", "lib/env.sh", "lib/pr-lifecycle.sh", "lib/ci-helpers.sh"]
[[steps]]
id = "preflight"
title = "Review the issue and prepare implementation plan"
description = """
Read the issue body carefully. Understand:
- What needs to be implemented
- Any dependencies (check `## Dependencies` section)
- Existing code that might be affected
- Testing requirements
Then create a plan:
1. What files need to be modified/created
2. What tests need to be added
3. Any documentation updates
Check the preflight metrics from supervisor if available:
cat "$OPS_REPO_ROOT/journal/supervisor/$(date -u +%Y-%m-%d).md"
Note: Only proceed if all dependency issues are closed.
"""
[[steps]]
id = "implement"
title = "Write code to implement the issue"
description = """
Implement the changes:
1. Create a new worktree:
cd "$PROJECT_REPO_ROOT"
git worktree add -b "dev/{agent}-{issue}" ../{agent}-{issue}
2. Make your changes to the codebase
3. Add tests if applicable
4. Update documentation if needed
5. Commit with conventional commits:
git add -A
git commit -m "feat({issue}): {description}"
6. Push to forge:
git push -u origin dev/{agent}-{issue}
7. Create PR via API or web interface
- Title: feat({issue}): {description}
- Body: Link to issue, describe changes
- Labels: backlog, in-progress
Note: The worktree is preserved on crash for debugging.
"""
needs = ["preflight"]
[[steps]]
id = "ci"
title = "Wait for CI and address failures"
description = """
Monitor CI pipeline status via Woodpecker API:
woodpecker_api /repos/${WOODPECKER_REPO_ID}/pipelines?branch=dev/{agent}-{issue}
Wait for CI to complete. If CI fails:
1. Read the CI logs to understand the failure
2. Fix the issue
3. Amend commit and force push
4. Track CI attempts (max 3 retries)
CI fix tracker file:
$DISINTO_LOG_DIR/dev/ci-fixes-{project}.json
On CI success, proceed to review.
If CI exhausted (3 failures), escalate via PHASE:escalate.
"""
needs = ["implement"]
[[steps]]
id = "review"
title = "Address review feedback"
description = """
Check PR for review comments:
curl -sf "${FORGE_API}/pulls/{pr-number}/comments"
For each comment:
1. Understand the feedback
2. Make changes to fix the issue
3. Amend commit and force push
4. Address the comment in the PR
If review approves, proceed to merge.
If stuck or needs clarification, escalate via PHASE:escalate.
"""
needs = ["ci"]
[[steps]]
id = "merge"
title = "Merge the PR"
description = """
Check if PR is approved and CI is green:
curl -sf "${FORGE_API}/pulls/{pr-number}"
If approved (merged=true or approved_by set):
1. Merge the PR:
curl -sf -X PUT "${FORGE_API}/pulls/{pr-number}/merge" \\
-d '{"merge_method":"merge"}'
2. Mirror push to other remotes:
mirror_push
3. Close the issue:
curl -sf -X PATCH "${FORGE_API}/issues/{issue-number}" \\
-d '{"state":"closed"}'
4. Delete the branch:
git push origin --delete dev/{agent}-{issue}
If direct merge is blocked, note in journal and escalate.
"""
needs = ["review"]
[[steps]]
id = "journal"
title = "Write implementation journal"
description = """
Append a timestamped entry to the dev journal:
File path:
$OPS_REPO_ROOT/journal/dev/$(date -u +%Y-%m-%d).md
If the file already exists (multiple PRs merged same day), append.
If it does not exist, create it.
Format:
## Dev implementation — {issue-number}
Time: {timestamp}
PR: {pr-number}
Branch: dev/{agent}-{issue}
### Changes
- {summary of changes}
### CI attempts: {n}
### Review feedback: {n} comments addressed
### Lessons learned
- {what you learned during implementation}
### Knowledge added
If you discovered something new, add to knowledge:
echo "### Lesson title
Description." >> "${OPS_REPO_ROOT}/knowledge/{topic}.md"
After writing the journal, write the phase signal:
echo 'PHASE:done' > "$PHASE_FILE"
"""
needs = ["merge"]

View file

@ -203,7 +203,7 @@ If all tiers clear, write the completion summary and signal done:
echo "ACTION: grooming complete — 0 tech-debt remaining" >> "$RESULT_FILE"
echo 'PHASE:done' > "$PHASE_FILE"
Vault items filed during this run appear as PRs on ops repo for human approval.
Vault items filed during this run are picked up by vault-poll automatically.
On unrecoverable error (API unavailable, repeated failures):
printf 'PHASE:failed\nReason: %s\n' 'describe what failed' > "$PHASE_FILE"

View file

@ -1,187 +0,0 @@
#!/usr/bin/env bash
# formulas/release.sh — Mechanical release script
#
# Implements the release workflow without Claude:
# 1. Validate prerequisites
# 2. Tag Forgejo main via API
# 3. Push tag to mirrors (Codeberg, GitHub) via token auth
# 4. Build and tag the agents Docker image
# 5. Restart agent containers
#
# Usage: release.sh <action-id>
#
# Expects env vars:
# FORGE_URL, FORGE_TOKEN, FORGE_REPO, PRIMARY_BRANCH
# GITHUB_TOKEN — for pushing tags to GitHub mirror
# CODEBERG_TOKEN — for pushing tags to Codeberg mirror
#
# The action TOML context field must contain the version, e.g.:
# context = "Release v1.2.0"
#
# Part of #516.
set -euo pipefail
FACTORY_ROOT="${FACTORY_ROOT:-/home/agent/disinto}"
OPS_REPO_ROOT="${OPS_REPO_ROOT:-/home/agent/ops}"
log() {
printf '[%s] release: %s\n' "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" "$*"
}
# ── Argument parsing ─────────────────────────────────────────────────────
# VAULT_ACTION_TOML is exported by the runner entrypoint (entrypoint-runner.sh)
action_id="${1:-}"
if [ -z "$action_id" ]; then
log "ERROR: action-id argument required"
exit 1
fi
action_toml="${VAULT_ACTION_TOML:-${OPS_REPO_ROOT}/vault/actions/${action_id}.toml}"
if [ ! -f "$action_toml" ]; then
log "ERROR: vault action TOML not found: ${action_toml}"
exit 1
fi
# Extract version from context field (e.g. "Release v1.2.0" → "v1.2.0")
context=$(grep -E '^context\s*=' "$action_toml" \
| sed -E 's/^context\s*=\s*"(.*)"/\1/' | tr -d '\r')
RELEASE_VERSION=$(echo "$context" | grep -oE 'v[0-9]+\.[0-9]+\.[0-9]+') || true
if [ -z "${RELEASE_VERSION:-}" ]; then
log "ERROR: could not extract version from context: '${context}'"
log "Context must contain a version like v1.2.0"
exit 1
fi
log "Starting release ${RELEASE_VERSION} (action: ${action_id})"
# ── Step 1: Preflight ────────────────────────────────────────────────────
log "Step 1/6: Preflight checks"
# Validate version format
if ! echo "$RELEASE_VERSION" | grep -qE '^v[0-9]+\.[0-9]+\.[0-9]+$'; then
log "ERROR: invalid version format: ${RELEASE_VERSION}"
exit 1
fi
# Required env vars
for var in FORGE_URL FORGE_TOKEN FORGE_REPO PRIMARY_BRANCH; do
if [ -z "${!var:-}" ]; then
log "ERROR: required env var not set: ${var}"
exit 1
fi
done
# Check Docker access
if ! docker info >/dev/null 2>&1; then
log "ERROR: Docker not accessible"
exit 1
fi
# Check tag doesn't already exist on Forgejo
if curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_URL}/api/v1/repos/${FORGE_REPO}/tags/${RELEASE_VERSION}" >/dev/null 2>&1; then
log "ERROR: tag ${RELEASE_VERSION} already exists on Forgejo"
exit 1
fi
log "Preflight passed"
# ── Step 2: Tag main via Forgejo API ─────────────────────────────────────
log "Step 2/6: Creating tag ${RELEASE_VERSION} on Forgejo"
# Get HEAD SHA of primary branch
head_sha=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_URL}/api/v1/repos/${FORGE_REPO}/branches/${PRIMARY_BRANCH}" \
| jq -r '.commit.id // empty')
if [ -z "$head_sha" ]; then
log "ERROR: could not get HEAD SHA for ${PRIMARY_BRANCH}"
exit 1
fi
# Create tag via API
curl -sf -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H "Content-Type: application/json" \
"${FORGE_URL}/api/v1/repos/${FORGE_REPO}/tags" \
-d "{\"tag_name\":\"${RELEASE_VERSION}\",\"target\":\"${head_sha}\",\"message\":\"Release ${RELEASE_VERSION}\"}" \
>/dev/null
log "Tag ${RELEASE_VERSION} created (SHA: ${head_sha})"
# ── Step 3: Push tag to mirrors ──────────────────────────────────────────
log "Step 3/6: Pushing tag to mirrors"
# Extract org/repo from FORGE_REPO (e.g. "disinto-admin/disinto" → "disinto")
project_name="${FORGE_REPO##*/}"
# Push to GitHub mirror (if GITHUB_TOKEN is available)
if [ -n "${GITHUB_TOKEN:-}" ]; then
log "Pushing tag to GitHub mirror"
# Create tag on GitHub via API
if curl -sf -X POST \
-H "Authorization: token ${GITHUB_TOKEN}" \
-H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/Disinto/${project_name}/git/refs" \
-d "{\"ref\":\"refs/tags/${RELEASE_VERSION}\",\"sha\":\"${head_sha}\"}" \
>/dev/null 2>&1; then
log "GitHub: tag pushed"
else
log "WARNING: GitHub tag push failed (may already exist)"
fi
else
log "WARNING: GITHUB_TOKEN not set — skipping GitHub mirror"
fi
# Push to Codeberg mirror (if CODEBERG_TOKEN is available)
if [ -n "${CODEBERG_TOKEN:-}" ]; then
log "Pushing tag to Codeberg mirror"
# Codeberg uses Gitea-compatible API
# Extract owner from FORGE_REPO for Codeberg (use same owner)
codeberg_owner="${FORGE_REPO%%/*}"
if curl -sf -X POST \
-H "Authorization: token ${CODEBERG_TOKEN}" \
-H "Content-Type: application/json" \
"https://codeberg.org/api/v1/repos/${codeberg_owner}/${project_name}/tags" \
-d "{\"tag_name\":\"${RELEASE_VERSION}\",\"target\":\"${head_sha}\",\"message\":\"Release ${RELEASE_VERSION}\"}" \
>/dev/null 2>&1; then
log "Codeberg: tag pushed"
else
log "WARNING: Codeberg tag push failed (may already exist)"
fi
else
log "WARNING: CODEBERG_TOKEN not set — skipping Codeberg mirror"
fi
# ── Step 4: Build agents Docker image ────────────────────────────────────
log "Step 4/6: Building agents Docker image"
cd "$FACTORY_ROOT" || exit 1
docker compose build --no-cache agents 2>&1 | tail -5
log "Image built"
# ── Step 5: Tag image with version ───────────────────────────────────────
log "Step 5/6: Tagging image"
docker tag disinto/agents:latest "disinto/agents:${RELEASE_VERSION}"
log "Tagged disinto/agents:${RELEASE_VERSION}"
# ── Step 6: Restart agent containers ─────────────────────────────────────
log "Step 6/6: Restarting agent containers"
docker compose stop agents agents-llama 2>/dev/null || true
docker compose up -d agents agents-llama
log "Agent containers restarted"
# ── Done ─────────────────────────────────────────────────────────────────
log "Release ${RELEASE_VERSION} completed successfully"

View file

@ -1,245 +0,0 @@
# formulas/release.toml — Release formula
#
# Defines the release workflow: tag Forgejo main, push to mirrors, build
# and tag the agents Docker image, and restart agents.
#
# Triggered by vault PR approval (human creates vault PR, approves it, then
# runner executes via `disinto run <id>`).
#
# Example vault item:
# id = "release-v1.2.0"
# formula = "release"
# context = "Tag v1.2.0 — includes vault redesign, .profile system, architect agent"
# secrets = []
#
# Steps: preflight → tag-main → push-mirrors → build-image → tag-image → restart-agents → commit-result
name = "release"
description = "Tag Forgejo main, push to mirrors, build and tag agents image, restart agents"
version = 1
[context]
files = ["docker-compose.yml"]
# ─────────────────────────────────────────────────────────────────────────────────
# Step 1: preflight
# ─────────────────────────────────────────────────────────────────────────────────
[[steps]]
id = "preflight"
title = "Validate release prerequisites"
description = """
Validate release prerequisites before proceeding.
1. Check that RELEASE_VERSION is set:
- Must be in format: v1.2.3 (semver with 'v' prefix)
- Validate with regex: ^v[0-9]+\\.[0-9]+\\.[0-9]+$
- If not set, exit with error
2. Check that FORGE_TOKEN and FORGE_URL are set:
- Required for Forgejo API calls
3. Check that DOCKER_HOST is accessible:
- Test with: docker info
- Required for image build
4. Check current branch is main:
- git rev-parse --abbrev-ref HEAD
- Must be 'main' or 'master'
5. Pull latest code:
- git fetch origin "$PRIMARY_BRANCH"
- git reset --hard origin/"$PRIMARY_BRANCH"
- Ensure working directory is clean
6. Check if tag already exists locally:
- git tag -l "$RELEASE_VERSION"
- If exists, exit with error
7. Check if tag already exists on Forgejo:
- curl -sf -H "Authorization: token $FORGE_TOKEN" \
- "$FORGE_URL/api/v1/repos/$FORGE_REPO/tags/$RELEASE_VERSION"
- If exists, exit with error
8. Export RELEASE_VERSION for subsequent steps:
- export RELEASE_VERSION (already set from vault action)
"""
# ─────────────────────────────────────────────────────────────────────────────────
# Step 2: tag-main
# ─────────────────────────────────────────────────────────────────────────────────
[[steps]]
id = "tag-main"
title = "Create tag on Forgejo main via API"
description = """
Create the release tag on Forgejo main via the Forgejo API.
1. Get current HEAD SHA of main:
- curl -sf -H "Authorization: token $FORGE_TOKEN" \
- "$FORGE_URL/api/v1/repos/$FORGE_REPO/branches/$PRIMARY_BRANCH"
- Parse sha field from response
2. Create tag via Forgejo API:
- curl -sf -X POST \
- -H "Authorization: token $FORGE_TOKEN" \
- -H "Content-Type: application/json" \
- "$FORGE_URL/api/v1/repos/$FORGE_REPO/tags" \
- -d "{\"tag_name\":\"$RELEASE_VERSION\",\"target\":\"$HEAD_SHA\",\"message\":\"Release $RELEASE_VERSION\"}"
- Parse response for success
3. Log the tag creation:
- echo "Created tag $RELEASE_VERSION on Forgejo (SHA: $HEAD_SHA)"
4. Store HEAD SHA for later verification:
- echo "$HEAD_SHA" > /tmp/release-head-sha
"""
# ─────────────────────────────────────────────────────────────────────────────────
# Step 3: push-mirrors
# ─────────────────────────────────────────────────────────────────────────────────
[[steps]]
id = "push-mirrors"
title = "Push tag to mirrors (Codeberg, GitHub)"
description = """
Push the newly created tag to all configured mirrors.
1. Add mirror remotes if not already present:
- Codeberg: git remote add codeberg git@codeberg.org:${FORGE_REPO_OWNER}/${PROJECT_NAME}.git
- GitHub: git remote add github git@github.com:disinto/${PROJECT_NAME}.git
- Check with: git remote -v
2. Push tag to Codeberg:
- git push codeberg "refs/tags/$RELEASE_VERSION"
- Or push all tags: git push codeberg --tags
3. Push tag to GitHub:
- git push github "refs/tags/$RELEASE_VERSION"
- Or push all tags: git push github --tags
4. Verify tags exist on mirrors:
- curl -sf -H "Authorization: token $GITHUB_TOKEN" \
- "https://api.github.com/repos/disinto/${PROJECT_NAME}/git/ref/tags/$RELEASE_VERSION"
- curl -sf -H "Authorization: token $FORGE_TOKEN" \
- "$FORGE_URL/api/v1/repos/$FORGE_REPO/tags/$RELEASE_VERSION"
5. Log success:
- echo "Tag $RELEASE_VERSION pushed to mirrors"
"""
# ─────────────────────────────────────────────────────────────────────────────────
# Step 4: build-image
# ─────────────────────────────────────────────────────────────────────────────────
[[steps]]
id = "build-image"
title = "Build agents Docker image"
description = """
Build the new agents Docker image with the tagged code.
1. Build image without cache to ensure fresh build:
- docker compose build --no-cache agents
2. Verify image was created:
- docker images | grep disinto-agents
- Check image exists and has recent timestamp
3. Store image ID for later:
- docker images disinto-agents --format "{{.ID}}" > /tmp/release-image-id
4. Log build completion:
- echo "Built disinto-agents image"
"""
# ─────────────────────────────────────────────────────────────────────────────────
# Step 5: tag-image
# ─────────────────────────────────────────────────────────────────────────────────
[[steps]]
id = "tag-image"
title = "Tag Docker image with version"
description = """
Tag the newly built agents image with the release version.
1. Get the untagged image ID:
- docker images disinto-agents --format "{{.ID}}" --no-trunc | head -1
2. Tag the image:
- docker tag disinto-agents disinto-agents:$RELEASE_VERSION
3. Verify tag:
- docker images disinto-agents
4. Log tag:
- echo "Tagged disinto-agents:$RELEASE_VERSION"
"""
# ─────────────────────────────────────────────────────────────────────────────────
# Step 6: restart-agents
# ─────────────────────────────────────────────────────────────────────────────────
[[steps]]
id = "restart-agents"
title = "Restart agent containers with new image"
description = """
Restart agent containers to use the new image.
1. Pull the new image (in case it was pushed somewhere):
- docker compose pull agents
2. Stop and remove existing agent containers:
- docker compose down agents agents-llama 2>/dev/null || true
3. Start agents with new image:
- docker compose up -d agents agents-llama
4. Wait for containers to be healthy:
- for i in {1..30}; do
- if docker inspect --format='{{.State.Health.Status}}' agents | grep -q healthy; then
- echo "Agents container healthy"; break
- fi
- sleep 5
- done
5. Verify containers are running:
- docker compose ps agents agents-llama
6. Log restart:
- echo "Restarted agents containers"
"""
# ─────────────────────────────────────────────────────────────────────────────────
# Step 7: commit-result
# ─────────────────────────────────────────────────────────────────────────────────
[[steps]]
id = "commit-result"
title = "Write release result"
description = """
Write the release result to a file for tracking.
1. Get the image ID:
- IMAGE_ID=$(cat /tmp/release-image-id)
2. Create result file:
- cat > /tmp/release-result.json <<EOF
- {
- "version": "$RELEASE_VERSION",
- "image_id": "$IMAGE_ID",
- "forgejo_tag_url": "$FORGE_URL/$FORGE_REPO/src/$RELEASE_VERSION",
- "timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
- "status": "success"
- }
- EOF
3. Copy result to data directory:
- mkdir -p "$PROJECT_REPO_ROOT/release"
- cp /tmp/release-result.json "$PROJECT_REPO_ROOT/release/$RELEASE_VERSION.json"
4. Log result:
- cat /tmp/release-result.json
5. Clean up temp files:
- rm -f /tmp/release-head-sha /tmp/release-image-id /tmp/release-result.json
"""

View file

@ -1,161 +0,0 @@
# formulas/rent-a-human-caddy-ssh.toml — Provision SSH key for Caddy log collection
#
# "Rent a Human" — walk the operator through provisioning a purpose-limited
# SSH keypair so collect-engagement.sh can fetch Caddy access logs remotely.
#
# The key uses a `command=` restriction so it can ONLY cat the access log.
# No interactive shell, no port forwarding, no agent forwarding.
#
# Parent vision issue: #426
# Sprint: website-observability-wire-up (ops PR #10)
# Consumed by: site/collect-engagement.sh (issue #745)
name = "rent-a-human-caddy-ssh"
description = "Provision a purpose-limited SSH keypair for remote Caddy log collection"
version = 1
# ── Step 1: Generate keypair ─────────────────────────────────────────────────
[[steps]]
id = "generate-keypair"
title = "Generate a dedicated ed25519 keypair"
description = """
Generate a purpose-limited SSH keypair for Caddy log collection.
Run on your local machine (NOT the Caddy host):
```
ssh-keygen -t ed25519 -f caddy-collect -N '' -C 'disinto-collect-engagement'
```
This produces two files:
- caddy-collect (private key goes into the vault)
- caddy-collect.pub (public key goes onto the Caddy host)
Do NOT set a passphrase (-N ''); the factory runs unattended.
"""
# ── Step 2: Install public key on Caddy host ─────────────────────────────────
[[steps]]
id = "install-public-key"
title = "Install the public key on the Caddy host with command= restriction"
needs = ["generate-keypair"]
description = """
Install the public key on the Caddy host with a strict command= restriction
so this key can ONLY read the access log.
1. SSH into the Caddy host as the user who owns /var/log/caddy/access.log.
2. Open (or create) ~/.ssh/authorized_keys:
mkdir -p ~/.ssh && chmod 700 ~/.ssh
nano ~/.ssh/authorized_keys
3. Add this line (all on ONE line; do not wrap):
command="cat /var/log/caddy/access.log",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA... disinto-collect-engagement
Replace "AAAA..." with the contents of caddy-collect.pub.
To build the line automatically:
echo "command=\"cat /var/log/caddy/access.log\",no-port-forwarding,no-X11-forwarding,no-agent-forwarding $(cat caddy-collect.pub)"
4. Set permissions:
chmod 600 ~/.ssh/authorized_keys
What the restrictions do:
- command="cat /var/log/caddy/access.log"
Forces this key to only execute `cat /var/log/caddy/access.log`,
regardless of what the client requests.
- no-port-forwarding: blocks SSH tunnels
- no-X11-forwarding: blocks X11 forwarding
- no-agent-forwarding: blocks agent forwarding
If the access log is at a different path, update the command= restriction
AND set CADDY_ACCESS_LOG in the factory environment to match.
"""
# ── Step 3: Add private key to vault secrets ─────────────────────────────────
[[steps]]
id = "store-private-key"
title = "Add the private key as CADDY_SSH_KEY secret"
needs = ["generate-keypair"]
description = """
Store the private key in the factory's encrypted secrets store.
1. Add the private key using `disinto secrets add`:
cat caddy-collect | disinto secrets add CADDY_SSH_KEY
This encrypts the key with age and stores it as secrets/CADDY_SSH_KEY.enc.
2. IMPORTANT: After storing, securely delete the local private key file:
shred -u caddy-collect 2>/dev/null || rm -f caddy-collect
rm -f caddy-collect.pub
The public key is already installed on the Caddy host; the private key
now lives only in secrets/CADDY_SSH_KEY.enc.
Never commit the private key to any git repository.
"""
# ── Step 4: Configure Caddy host address ─────────────────────────────────────
[[steps]]
id = "store-caddy-host"
title = "Add the Caddy host details as secrets"
needs = ["install-public-key"]
description = """
Store the Caddy connection details so collect-engagement.sh knows
where to SSH.
1. Add each value using `disinto secrets add`:
echo 'disinto.ai' | disinto secrets add CADDY_SSH_HOST
echo 'debian' | disinto secrets add CADDY_SSH_USER
echo '/var/log/caddy/access.log' | disinto secrets add CADDY_ACCESS_LOG
Replace values with the actual SSH host, user, and log path for your setup.
"""
# ── Step 5: Test the connection ──────────────────────────────────────────────
[[steps]]
id = "test-connection"
title = "Verify the SSH key works and returns the access log"
needs = ["install-public-key", "store-private-key", "store-caddy-host"]
description = """
Test the end-to-end connection before the factory tries to use it.
1. From the factory host (or anywhere with the private key), run:
ssh -i caddy-collect -o StrictHostKeyChecking=accept-new user@caddy-host
Expected behavior:
- Outputs the contents of /var/log/caddy/access.log
- Disconnects immediately (command= restriction forces this)
If you already shredded the local key, recreate it from the factory
environment (the entrypoint exports the decrypted secret as CADDY_SSH_KEY,
the same way the collect-engagement fetch step consumes it):
printf '%s\n' "$CADDY_SSH_KEY" > /tmp/caddy-collect-test
chmod 600 /tmp/caddy-collect-test
ssh -i /tmp/caddy-collect-test -o StrictHostKeyChecking=accept-new user@caddy-host
rm -f /tmp/caddy-collect-test
2. Verify the output is Caddy structured JSON (one JSON object per line):
ssh -i /tmp/caddy-collect-test user@caddy-host | head -1 | jq .
You should see fields like: ts, request, status, duration.
3. If the connection fails:
- Permission denied: check authorized_keys format (must be one line)
- Connection refused: check sshd is running on the Caddy host
- Empty output: check /var/log/caddy/access.log exists and is readable
by the SSH user
- "jq: error": Caddy may be using Combined Log Format instead of
structured JSON; check Caddy's log configuration
4. Once verified, the factory's collect-engagement.sh can use this key
to fetch logs remotely via:
ssh -i <decoded-key-path> "$CADDY_SSH_USER@$CADDY_SSH_HOST"
"""

View file

@ -1,37 +0,0 @@
# formulas/reproduce.toml — Reproduce-agent formula
#
# Declares the reproduce-agent's runtime parameters.
# The dispatcher reads this to configure the sidecar container.
#
# stack_script: path (relative to PROJECT_REPO_ROOT) of the script used to
# restart/rebuild the project stack before reproduction. Omit (or leave
# blank) to connect to an existing staging environment instead.
#
# tools: MCP servers to pass to claude via --mcp-server flags.
#
# timeout_minutes: hard upper bound on the Claude session.
#
# Exit gate logic (standard mode):
# 1. Can I reproduce it? → NO → rejected/blocked → EXIT
# → YES → continue
# 2. Is the cause obvious? → YES → in-progress + backlog issue → EXIT
# → NO → in-triage → EXIT
#
# Exit gate logic (verification mode):
# Triggered when all sub-issues of a parent bug-report are closed.
# 1. Bug fixed → comment "verified fixed", remove in-progress, close issue
# 2. Bug persists → comment "still reproduces", add in-triage, re-enter triage
#
# Turn budget (standard mode): 60% on step 1 (reproduction), 40% on step 2 (cause check).
# Turn budget (verification mode): 100% on re-running reproduction steps.
name = "reproduce"
description = "Primary: reproduce the bug. Secondary: check if cause is obvious. Exit gates enforced."
version = 1
# Set stack_script to the restart command for local stacks.
# Leave empty ("") to target an existing staging environment.
stack_script = ""
tools = ["playwright"]
timeout_minutes = 15

View file

@ -61,83 +61,6 @@ Do NOT flag:
- Things that look wrong but actually work; verify by reading the code first
- Files that were truncated from the diff (the orchestrator notes truncation)
## 3b. Architecture and documentation consistency
For each BEHAVIORAL change in the diff (not pure bug fixes or formatting):
1. Identify what behavior changed (e.g., scheduling mechanism, auth flow,
container lifecycle, secret handling)
2. Search AGENTS.md for claims about that behavior:
grep -n '<keyword>' AGENTS.md
Also check docs/ and any per-directory AGENTS.md files.
3. Search for Architecture Decision references (AD-001 through AD-006):
grep -n 'AD-0' AGENTS.md
Read each AD and check if the PR's changes contradict it.
4. If the PR changes behavior described in AGENTS.md or contradicts an AD
but does NOT update the documentation in the same PR:
REQUEST_CHANGES require the documentation update in the same PR.
This check is SKIPPED for pure bug fixes where the intended behavior is
unchanged (the code was wrong, not the documentation).
## 3c. Infrastructure file review (conditional)
If the diff touches ANY of these files, apply this additional checklist:
- `docker-compose.yml` or `docker-compose.*.yml`
- `Dockerfile` or `docker/*`
- `.woodpecker/` CI configs
- `docker/agents/entrypoint.sh`
Infrastructure files have a different failure mode from application code:
a single dropped line (a volume mount, an env var, a restart policy) can
break a running deployment with no syntax error. Treat dropped
infrastructure configuration as a **blocking defect**, not a style choice.
### For docker-compose.yml changes:
1. **Read the full file** in the PR branch; do not rely only on the diff.
2. Run `git diff <base>..HEAD -- docker-compose.yml` to see the complete
change, not just the truncated diff.
3. Check that NONE of the following were dropped without explicit
justification in the PR description:
- Named volumes (e.g. `agent-data`, `project-repos`)
- Bind mounts (especially for config, secrets, SSH keys, shared dirs)
- Environment variables (compare the full `environment:` block against
the base branch)
- `restart:` policy (should be `unless-stopped` for production services)
- `security_opt:` settings
- Network configuration
- Resource limits / deploy constraints
4. If ANY production configuration was dropped and the PR description does
not explain why, **REQUEST_CHANGES**. List each dropped item explicitly.
### For Dockerfile / entrypoint changes:
1. Check that base image, installed packages, and runtime deps are preserved.
2. Verify that entrypoint/CMD changes don't break the container startup.
### For CI config changes:
1. Check that pipeline steps aren't silently removed.
2. Verify that secret references still match available secrets.
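The "compare the full `environment:` block" check from the compose checklist can be mechanized. A sketch with invented variable lists; `comm -23` prints lines present only in the base branch, i.e. variables the PR dropped:

```shell
base_env='FORGE_TOKEN
FORGE_URL
WOODPECKER_HOST'
pr_env='FORGE_URL
WOODPECKER_HOST'

# lines unique to the base list = configuration dropped by the PR
dropped=$(comm -23 <(printf '%s\n' "$base_env" | sort) <(printf '%s\n' "$pr_env" | sort))
echo "dropped from PR: ${dropped:-<none>}"   # prints: dropped from PR: FORGE_TOKEN
```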
## 3d. Scope discipline
Compare the actual diff footprint against the stated issue scope:
1. Read the PR title and description to identify what the issue asked for.
2. Estimate the expected diff size (e.g., "add 3 env vars" = ~5-10 lines
in compose + ~5 lines in scripts).
3. If the actual diff in ANY single file exceeds 3x the expected scope,
flag it: "this file changed N lines but the issue scope suggests ~M."
For infrastructure files (compose, Dockerfiles, CI), scope violations are
**blocking**: REQUEST_CHANGES and ask the author to split out-of-scope
changes into a separate PR or justify them in the description.
For non-infrastructure files, scope violations are advisory: leave a
non-blocking COMMENT noting the scope creep.
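The 3x rule above can be mechanized; a sketch, assuming GNU awk, with the heredoc standing in for `git diff --numstat <base>..HEAD` output and `EXPECTED_LINES` as the reviewer's estimate from the issue text:

```shell
# Sketch: flag files whose diff footprint exceeds 3x the expected scope.
EXPECTED_LINES=10
flagged=$(awk -v limit=$((EXPECTED_LINES * 3)) \
  '{ changed = $1 + $2
     if (changed > limit)
       print $3 " changed " changed " lines (expected ~" limit/3 ")" }' <<'EOF'
5 3 docker-compose.yml
120 40 scripts/dev.sh
EOF
)
echo "$flagged"
```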
## 4. Vault item quality (conditional)
If the PR adds or modifies vault item files (`vault/pending/*.md` in the ops repo), apply these
@ -189,7 +112,7 @@ near-duplicate exists, REQUEST_CHANGES and reference the existing item.
Agents must NEVER execute external actions directly. Any action that touches
an external system (publish, deploy, post, push to external registry, API
calls to third-party services) MUST go through vault dispatch, i.e., the
agent files a vault item (`$OPS_REPO_ROOT/vault/pending/*.json`) and the runner
agent files a vault item (`$OPS_REPO_ROOT/vault/pending/*.json`) and the vault-runner
container executes it with injected secrets.
Scan the diff for these patterns:
@ -205,7 +128,8 @@ Scan the diff for these patterns:
If ANY of these patterns appear in agent code (scripts in `dev/`, `action/`,
`planner/`, `gardener/`, `supervisor/`, `predictor/`, `review/`, `formulas/`,
`lib/`) WITHOUT routing through vault dispatch (file a vault PR on ops repo, see #73-#77), **REQUEST_CHANGES**.
`lib/`) WITHOUT routing through vault dispatch (`$OPS_REPO_ROOT/vault/pending/`, `vault-fire.sh`,
`vault-run-action.sh`), **REQUEST_CHANGES**.
Explain that external actions must use vault dispatch per AD-006. The agent
should file a vault item instead of executing directly.
@ -213,7 +137,7 @@ should file a vault item instead of executing directly.
**Exceptions** (do NOT flag these):
- Code inside `vault/`: the vault system itself is allowed to handle secrets
- References in comments or documentation explaining the architecture
- `bin/disinto` setup commands that manage `secrets/*.enc` and the `run` subcommand
- `bin/disinto` setup commands that manage `.env.vault.enc`
- Local operations (git push to forge, forge API calls with `FORGE_TOKEN`)
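A rough sketch of the scan, under the assumption that external calls are identifiable by URLs outside the forge host; `FORGE_HOST` and the sample diff are hypothetical, and the real pattern list is broader than HTTP calls:

```shell
# Sketch: scan a diff's added lines for external HTTP calls that bypass
# vault dispatch. Added lines start with '+'; context lines are ignored.
FORGE_HOST="forge.internal"   # assumption: the local forge hostname
cat > /tmp/pr.diff <<'EOF'
+curl -X POST https://registry.example.com/api/publish -d @pkg.json
 curl -sf "https://forge.internal/api/v1/repos/x/issues"
EOF
externals=$(grep '^+' /tmp/pr.diff | grep -E 'https?://' | grep -v "$FORGE_HOST" || true)
if [ -n "$externals" ]; then
  echo "REQUEST_CHANGES: external action outside vault dispatch:"
  echo "$externals"
fi
```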
## 6. Re-review (if previous review is provided)
@ -254,16 +178,8 @@ tech-debt issues via API so they are tracked separately:
-H "Content-Type: application/json" "$FORGE_API/issues" \
-d '{"title":"...","body":"Flagged by AI reviewer in PR #NNN.\n\n## Problem\n...\n\n---\n*Auto-created from AI review*","labels":[TECH_DEBT_ID]}'
File a tech-debt issue for every finding rated **medium** or higher that
is pre-existing (not introduced by this PR). Also file for **low** findings
that represent correctness risks (dead code that masks bugs, misleading
documentation, unguarded variables under set -u).
Do NOT file for: style preferences, naming opinions, missing comments,
or speculative improvements with no concrete failure mode.
When in doubt, file. A closed-as-wontfix tech-debt issue costs nothing;
an unfiled bug costs a future debugging session.
Only create follow-ups for clear, actionable tech debt. Do not create
issues for minor style nits or speculative improvements.
## 8. Verdict
@ -276,13 +192,6 @@ Bias toward APPROVE for small, correct changes. Use REQUEST_CHANGES only
for actual problems (bugs, security issues, broken functionality, missing
required behavior). Use DISCUSS sparingly.
Note: The bias toward APPROVE applies to code correctness and style decisions.
It does NOT apply to documentation consistency (step 3b), infrastructure file
findings (step 3c), or tech-debt filing (step 7): those are separate concerns
that should be handled regardless of the change's correctness. In particular,
dropped production configuration (volumes, bind mounts, env vars, restart
policy) is a blocking defect, not a style preference.
## 9. Output
Write a single JSON object to the file path from REVIEW_OUTPUT_FILE.
@ -1,280 +0,0 @@
# formulas/run-architect.toml — Architect formula
#
# Executed by architect-run.sh via polling loop — strategic decomposition of vision
# issues into development sprints.
#
# This formula orchestrates the architect agent's workflow:
# Step 1: Preflight — bash handles state management:
# - Fetch open vision issues from Forgejo API
# - Fetch open architect PRs on ops repo
# - Fetch merged architect PRs (already pitched visions)
# - Filter: remove visions with open PRs, merged sprints, or sub-issues
# - Select up to 3 remaining vision issues for pitching
# Step 2: Stateless pitch generation — for each selected issue:
# - Invoke claude -p with: vision issue body + codebase context
# - Model NEVER calls Forgejo API — only generates pitch markdown
# - Bash creates the ops PR with pitch content
# - Bash posts the ACCEPT/REJECT footer comment
# Step 3: Sprint PR creation with questions (issue #101) (one PR per pitch)
# Step 4: Post-merge sub-issue filing via filer-bot (#764)
#
# Permission model (#764):
# architect-bot: READ-ONLY on project repo (GET issues/PRs/labels for context).
# Cannot POST/PUT/PATCH/DELETE any project-repo resource.
# Write access ONLY on ops repo (branches, PRs, comments).
# filer-bot: issues:write on project repo. Files sub-issues from merged sprint
# PRs via ops-filer pipeline. Adds in-progress label to vision issues.
#
# Architecture:
# - Bash script (architect-run.sh) handles ALL state management
# - Model calls are stateless — no Forgejo API access, no memory between calls
# - Dedup is automatic via bash filters (no journal-based memory needed)
# - Max 3 open architect PRs at any time
#
# AGENTS.md maintenance is handled by the gardener (#246).
name = "run-architect"
description = "Architect: strategic decomposition of vision into sprints"
version = 2
model = "opus"
[context]
files = ["VISION.md", "AGENTS.md"]
# Prerequisite tree loaded from ops repo (ops: prefix)
# Sprints directory tracked in ops repo
[[steps]]
id = "preflight"
title = "Preflight: bash-driven state management and issue selection"
description = """
This step performs preflight checks and selects up to 3 vision issues for pitching.
IMPORTANT: All state management is handled by bash (architect-run.sh), NOT the model.
Architecture Decision: Bash-driven orchestration with stateless model calls
- The model NEVER calls Forgejo API during pitching
- Bash fetches all data from Forgejo API (vision issues, open PRs, merged PRs)
- Bash filters and deduplicates (no model-level dedup or journal-based memory)
- For each selected issue, bash invokes stateless claude -p (model only generates pitch)
- Bash creates PRs and posts footer comments (no model API access)
Bash Actions (in architect-run.sh):
1. Fetch open vision issues from Forgejo API: GET /repos/{owner}/{repo}/issues?labels=vision&state=open
2. Fetch open architect PRs from ops repo: GET /repos/{owner}/{repo}/pulls?state=open
3. Fetch merged sprint PRs: GET /repos/{owner}/{repo}/pulls?state=closed (filter merged=true)
4. Filter out visions that:
- Already have open architect PRs (check PR body for issue number reference)
- Have in-progress label
- Have open sub-issues (check for 'Decomposed from #N' pattern)
- Have merged sprint PRs (decomposition already done)
5. Select up to (3 - open_architect_pr_count) remaining vision issues
6. If no issues remain AND no responses to process, signal PHASE:done
If open architect PRs exist, handle accept/reject responses FIRST (see Capability B below).
After handling existing PRs, count remaining open architect PRs and calculate pitch_budget.
## Multi-pitch selection (up to 3 per run)
After handling existing PRs, determine how many new pitches can be created:
pitch_budget = 3 - <number of open architect PRs remaining after handling>
For each available pitch slot:
1. From the vision issues list, skip any issue that already has an open architect PR
2. Skip any issue that already has the `in-progress` label
3. Check for existing sub-issues filed from this vision issue
4. Check for merged sprint PRs referencing this vision issue
5. From remaining candidates, pick the most unblocking issue first
6. Add to ARCHITECT_TARGET_ISSUES array
Skip conditions:
- If no vision issues are found, signal PHASE:done
- If pitch_budget <= 0 (already 3 open architect PRs), skip pitching
- If all vision issues already have open architect PRs, signal PHASE:done
- If all vision issues have open sub-issues, skip pitching
- If all vision issues have merged sprint PRs, skip pitching
Output:
- Sets ARCHITECT_TARGET_ISSUES as a JSON array of issue numbers to pitch (up to 3)
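The filter in steps 1-5 can be sketched as a single jq pass; the inline JSON stands in for the Forgejo API responses, and the PR-body matching is simplified to a regex test:

```shell
# Sketch: drop visions that are in-progress or already referenced by an open
# architect PR body, then take up to pitch_budget candidates.
visions='[{"number":10,"labels":[{"name":"vision"}]},
          {"number":11,"labels":[{"name":"vision"},{"name":"in-progress"}]},
          {"number":12,"labels":[{"name":"vision"}]}]'
open_pr_bodies='["Pitch for vision #10"]'
pitch_budget=3
ARCHITECT_TARGET_ISSUES=$(jq -nc \
  --argjson v "$visions" --argjson prs "$open_pr_bodies" --argjson n "$pitch_budget" '
  [$v[]
   | select([.labels[].name] | index("in-progress") | not)
   | select(. as $i | $prs | map(test("#" + ($i.number|tostring) + "\\b")) | any | not)
   | .number]
  | .[0:$n]')
echo "$ARCHITECT_TARGET_ISSUES"
```

Here #10 is excluded by the open-PR check, #11 by its in-progress label, leaving #12.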
"""
[[steps]]
id = "research_pitch"
title = "Stateless pitch generation: model generates content, bash creates PRs"
description = """
IMPORTANT: This step is executed by bash (architect-run.sh) via stateless claude -p calls.
The model NEVER calls Forgejo API; it only reads context and generates pitch markdown.
Architecture:
- Bash orchestrates the loop over ARCHITECT_TARGET_ISSUES
- For each issue: bash fetches issue body from Forgejo API, then invokes stateless claude -p
- Model receives: vision issue body + codebase context (VISION.md, AGENTS.md, prerequisites.md)
- Model outputs: sprint pitch markdown ONLY (no API calls, no side effects)
- Bash creates the PR and posts the ACCEPT/REJECT footer comment
For each issue in ARCHITECT_TARGET_ISSUES, bash performs:
1. Fetch vision issue details from Forgejo API:
- GET /repos/{owner}/{repo}/issues/{issue_number}
- Extract: title, body
2. Invoke stateless claude -p with prompt:
"Write a sprint pitch for this vision issue. Output only the pitch markdown."
Context provided:
- Vision issue #N: <title>
- Vision issue body
- Project context (VISION.md, AGENTS.md)
- Codebase context (prerequisites.md, graph section)
- Formula content
3. Model generates pitch markdown (NO API CALLS):
# Sprint: <sprint-name>
## Vision issues
- #N — <title>
## What this enables
<what the project can do after this sprint that it can't do now>
## What exists today
<current state infrastructure, interfaces, code that can be reused>
## Complexity
<number of files/subsystems, estimated sub-issues>
<gluecode vs greenfield ratio>
## Risks
<what could go wrong, what breaks if this is done badly>
## Cost — new infra to maintain
<what ongoing maintenance burden does this sprint add>
<new services, scheduled tasks, formulas, agent roles>
## Recommendation
<architect's assessment: worth it / defer / alternative approach>
## Sub-issues
<!-- filer:begin -->
- id: <kebab-case-id>
title: "vision(#N): <concise sub-issue title>"
labels: [backlog]
depends_on: []
body: |
## Goal
<what this sub-issue accomplishes>
## Acceptance criteria
- [ ] <criterion>
<!-- filer:end -->
IMPORTANT: Do NOT include design forks or questions yet. The pitch is a go/no-go
decision for the human. Questions come only after acceptance.
The ## Sub-issues block is parsed by the filer-bot pipeline after sprint PR merge.
Each sub-issue between filer:begin/end markers becomes a Forgejo issue on the
project repo. The filer appends a decomposed-from marker to each body automatically.
4. Bash creates PR:
- Create branch: architect/sprint-{pitch-number}
- Write sprint spec to sprints/{sprint-slug}.md
- Create PR with pitch content as body
- Post footer comment: "Reply ACCEPT to proceed with design questions, or REJECT: <reason> to decline."
- NOTE: in-progress label is added by filer-bot after sprint PR merge (#764)
Output:
- One PR per vision issue (up to 3 per run)
- Each PR contains the pitch markdown
- If ARCHITECT_TARGET_ISSUES is empty, skip this step
"""
[[steps]]
id = "sprint_pr_creation"
title = "Sprint PR creation with questions (issue #101) — handled by bash"
description = """
IMPORTANT: PR creation is handled by bash (architect-run.sh) during the pitch step.
This step is for documentation only; the actual PR creation happens in research_pitch.
## Approved PR → Initial design questions (issue #570)
When a sprint pitch PR receives an APPROVED review but has no `## Design forks`
section and no Q1:, Q2: comments yet, the architect enters a new state:
1. detect_approved_pending_questions() identifies this state
2. A fresh agent session starts with a special prompt
3. The agent reads the approved pitch, posts initial design questions (Q1:, Q2:, etc.)
4. The agent adds a `## Design forks` section to the PR body
5. The PR transitions into the questions phase, where the existing Q&A loop takes over
This ensures approved PRs don't sit indefinitely without design conversation.
Architecture:
- Bash creates PRs during stateless pitch generation (step 2)
- Model has no role in PR creation (no Forgejo API access)
- architect-bot is READ-ONLY on the project repo (#764) — all project-repo
writes (sub-issue filing, in-progress label) are handled by filer-bot
via the ops-filer pipeline after sprint PR merge
- This step describes the PR format for reference
PR Format (created by bash):
1. Branch: architect/sprint-{pitch-number}
2. Sprint spec file: sprints/{sprint-slug}.md
Contains the pitch markdown from the model.
3. PR via Forgejo API:
- Title: architect: <sprint summary>
- Body: plain markdown text from model output
- Base: main (or PRIMARY_BRANCH)
- Head: architect/sprint-{pitch-number}
- Footer comment: "Reply ACCEPT to proceed with design questions, or REJECT: <reason> to decline."
After creating all PRs, signal PHASE:done.
NOTE: in-progress label on the vision issue is added by filer-bot after sprint PR merge (#764).
## Forgejo API Reference (ops repo only)
All operations use the ops repo Forgejo API with `Authorization: token ${FORGE_TOKEN}` header.
architect-bot is READ-ONLY on the project repo; it cannot POST/PUT/PATCH/DELETE project-repo resources (#764).
### Create branch (ops repo)
```
POST /repos/{owner}/{repo-ops}/branches
Body: {"new_branch_name": "architect/<sprint-slug>", "old_branch_name": "main"}
```
### Create/update file (ops repo)
```
PUT /repos/{owner}/{repo-ops}/contents/<path>
Body: {"message": "sprint: add <sprint-slug>.md", "content": "<base64-encoded-content>", "branch": "architect/<sprint-slug>"}
```
### Create PR (ops repo)
```
POST /repos/{owner}/{repo-ops}/pulls
Body: {"title": "architect: <sprint summary>", "body": "<markdown-text>", "head": "architect/<sprint-slug>", "base": "main"}
```
**Important: PR body format**
- The `body` field must contain **plain markdown text** (the raw content from the scratch file)
- Do NOT JSON-encode or escape the body; pass it as a JSON string value
- Newlines and markdown formatting (headings, lists, etc.) must be preserved as-is
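One way to satisfy this, assuming jq 1.6+ for `--rawfile`: build the payload with jq so newlines and quotes in the markdown are escaped exactly once, at the JSON layer. File and branch names here are illustrative:

```shell
# Sketch: build the create-PR payload from a markdown file without manual
# escaping. jq handles JSON string encoding of the raw body.
printf '# Sprint: demo\n\n## Vision issues\n- #42\n' > /tmp/pitch.md
payload=$(jq -n --rawfile body /tmp/pitch.md \
  --arg title "architect: demo sprint" \
  --arg head "architect/demo" --arg base "main" \
  '{title: $title, body: $body, head: $head, base: $base}')
echo "$payload"
```

The resulting `.body` field round-trips back to the original markdown, headings and newlines intact.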
### Close PR (ops repo)
```
PATCH /repos/{owner}/{repo-ops}/pulls/{index}
Body: {"state": "closed"}
```
### Delete branch (ops repo)
```
DELETE /repos/{owner}/{repo-ops}/git/branches/<branch-name>
```
### Read-only on project repo (context gathering)
```
GET /repos/{owner}/{repo}/issues            (list issues)
GET /repos/{owner}/{repo}/issues/{number}   (read issue details)
GET /repos/{owner}/{repo}/labels            (list labels)
GET /repos/{owner}/{repo}/pulls             (list PRs)
```
"""
@ -1,15 +1,16 @@
# formulas/run-gardener.toml — Gardener housekeeping formula
#
# Defines the gardener's complete run: grooming (Claude session via
# gardener-run.sh) + AGENTS.md maintenance + final commit-and-pr.
# gardener-run.sh) + blocked-review + AGENTS.md maintenance + final
# commit-and-pr.
#
# Gardener has journaling via .profile (issue #97), so it learns from
# past runs and improves over time.
# No memory, no journal. The gardener does mechanical housekeeping
# based on current state — it doesn't need to remember past runs.
#
# Steps: preflight -> grooming -> dust-bundling -> agents-update -> commit-and-pr
# Steps: preflight → grooming → dust-bundling → blocked-review → stale-pr-recycle → agents-update → commit-and-pr
name = "run-gardener"
description = "Mechanical housekeeping: grooming, dust bundling, docs update"
description = "Mechanical housekeeping: grooming, blocked review, docs update"
version = 1
[context]
@ -76,63 +77,6 @@ Pre-checks (bash, zero tokens — detect problems before invoking Claude):
6. Tech-debt promotion: list all tech-debt labeled issues; the goal is to
process them all (promote to backlog or classify as dust).
7. Bug-report detection: for each open unlabeled issue (no backlog, no
bug-report, no in-progress, no blocked, no underspecified, no vision,
no tech-debt), check whether it describes a user-facing bug with
reproduction steps. Criteria ALL must be true:
a. Body describes broken behavior (something that should work but
doesn't), NOT a feature request or enhancement
b. Body contains steps to reproduce (numbered list, "steps to
reproduce" heading, or clear sequence of actions that trigger the bug)
c. Issue is not already labeled
If all criteria match, enrich the issue body and write the manifest actions:
Body enrichment (CRITICAL: turns raw reports into actionable investigation briefs):
Before writing the add_label action, construct an enriched body by appending
these sections to the original issue body:
a. ``## What was reported``
One or two sentence summary of the user's claim. Distill the broken
behavior concisely: what the user expected vs. what actually happened.
b. ``## Known context``
What can be inferred from the codebase without running anything:
- Which contracts/components/files are involved (use AGENTS.md layout
and file paths mentioned in the issue or body)
- What the expected behavior should be (from VISION.md, docs, code)
- Any recent changes to involved components:
git log --oneline -5 -- <paths>
- Related issues or prior fixes (cross-reference by number if known)
c. ``## Reproduction plan``
Concrete steps for a reproduce-agent or human. Be specific:
- Which environment to use (e.g. "start fresh stack with
\`./scripts/dev.sh restart --full\`")
- Which transactions or actions to execute (with \`cast\` commands,
API calls, or UI navigation steps where applicable)
- What state to check after each step (contract reads, API queries,
UI observations, log output)
d. ``## What needs verification``
Checkboxes distinguishing known facts from unknowns:
- ``- [ ]`` Does the reported behavior actually occur? (reproduce)
- ``- [ ]`` Is <component X> behaving as expected? (check state)
- ``- [ ]`` Is the data flow correct from <A> to <B>? (trace)
Tailor these to the specific bug: three to five items covering the
key unknowns a reproduce-agent must resolve.
e. Construct full new body = original body text + appended sections.
Write an edit_body action BEFORE the add_label action:
echo '{"action":"edit_body","issue":NNN,"body":"<full new body>"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
f. Write the add_label action:
echo '{"action":"add_label","issue":NNN,"label":"bug-report"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
echo "ACTION: labeled #NNN as bug-report — <reason>" >> "$RESULT_FILE"
Do NOT also add the backlog label; bug-report is a separate triage
track that feeds into reproduction automation.
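Because the enriched body is multiline and may contain quotes, a raw `echo` of hand-built JSON would corrupt the JSONL; a sketch using jq instead (issue number and paths are illustrative):

```shell
# Sketch: write edit_body and add_label manifest actions with jq so the
# multiline body is JSON-escaped correctly into one JSONL line each.
MANIFEST=/tmp/pending-actions.jsonl
: > "$MANIFEST"   # start from a clean manifest for this sketch
NEW_BODY='Original report text

## What was reported
User says "deploy" fails on fresh init.'
jq -nc --argjson issue 123 --arg body "$NEW_BODY" \
  '{action:"edit_body", issue:$issue, body:$body}' >> "$MANIFEST"
jq -nc --argjson issue 123 \
  '{action:"add_label", issue:$issue, label:"bug-report"}' >> "$MANIFEST"
```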
For each issue, choose ONE action and write to result file:
ACTION (substantial: promote, close duplicate, add acceptance criteria):
@ -176,17 +120,15 @@ DUST (trivial — single-line edit, rename, comment, style, whitespace):
of 3+ into one backlog issue.
VAULT (needs human decision or external resource):
File a vault procurement item using vault_request():
source "$(dirname "$0")/../lib/action-vault.sh"
TOML_CONTENT="# Vault action: <action_id>
context = \"<description of what decision/resource is needed>\"
unblocks = [\"#NNN\"]
[execution]
# Commands to run after approval
"
PR_NUM=$(vault_request "<action_id>" "$TOML_CONTENT")
echo "VAULT: filed PR #${PR_NUM} for #NNN — <reason>" >> "$RESULT_FILE"
File a vault procurement item at $OPS_REPO_ROOT/vault/pending/<id>.md:
# <What decision or resource is needed>
## What
<description>
## Why
<which issue this unblocks>
## Unblocks
- #NNN — <title>
Log: echo "VAULT: filed $OPS_REPO_ROOT/vault/pending/<id>.md for #NNN — <reason>" >> "$RESULT_FILE"
CLEAN (only if truly nothing to do):
echo 'CLEAN' >> "$RESULT_FILE"
@ -200,7 +142,25 @@ Sibling dependency rule (CRITICAL):
NEVER add bidirectional ## Dependencies between siblings (creates deadlocks).
Use ## Related for cross-references: "## Related\n- #NNN (sibling)"
6. Quality gate (backlog label enforcement):
7. Architecture decision alignment check (AD check):
For each open issue labeled 'backlog', check whether the issue
contradicts any architecture decision listed in the
## Architecture Decisions section of AGENTS.md.
Read AGENTS.md and extract the AD table. For each backlog issue,
compare the issue title and body against each AD. If an issue
clearly violates an AD:
a. Write a comment action to the manifest:
echo '{"action":"comment","issue":NNN,"body":"Closing: violates AD-NNN (<decision summary>). See AGENTS.md § Architecture Decisions."}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
b. Write a close action to the manifest:
echo '{"action":"close","issue":NNN,"reason":"violates AD-NNN"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
c. Log to the result file:
echo "ACTION: closed #NNN — violates AD-NNN" >> "$RESULT_FILE"
Only close for clear, unambiguous violations. If the issue is
borderline or could be interpreted as compatible, leave it open
and file a VAULT item for human decision instead.
8. Quality gate (backlog label enforcement):
For each open issue labeled 'backlog', verify it has the required
sections for dev-agent pickup:
a. Acceptance criteria body must contain at least one checkbox
@ -221,65 +181,28 @@ Sibling dependency rule (CRITICAL):
Well-structured issues (both sections present) are left untouched;
they are ready for dev-agent pickup.
8. Bug-report lifecycle (auto-close resolved parent issues):
For each open issue, check whether it is a parent that was decomposed
into sub-issues. A parent is identified by having OTHER issues whose
body contains "Decomposed from #N" where N is the parent's number.
9. Portfolio lifecycle (maintain ## Addressables and ## Observables in AGENTS.md):
Read the current Addressables and Observables tables from AGENTS.md.
Algorithm:
a. From the open issues fetched in step 1, collect all issue numbers.
b. For each open issue number N, search ALL issues (open AND closed)
for bodies containing "Decomposed from #N":
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/issues?state=all&type=issues&limit=50" \
| jq -r --argjson n N \
'[.[] | select(.body != null) | select(.body | test("Decomposed from #" + ($n | tostring) + "\\b"))] | length'
If zero sub-issues found, skip; this is not a decomposed parent.
a. ADD: if a recently closed issue shipped a new deployment, listing,
package, or external presence not yet in the table, add a row.
b. PROMOTE: if an addressable now has measurement wired (an evidence
process reads from it), move it to the Observables section.
c. REMOVE: if an addressable was decommissioned (vision change
invalidated it, service shut down), remove the row and log why.
d. FLAG: if an addressable has been live > 2 weeks with Observable? = No
and no evidence process is planned, add a comment to the result file:
echo "ACTION: flagged addressable '<name>' — live >2 weeks, no observation path" >> "$RESULT_FILE"
c. If sub-issues exist, check whether ALL of them are closed:
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/issues?state=all&type=issues&limit=50" \
| jq -r --argjson n N \
'[.[] | select(.body != null) | select(.body | test("Decomposed from #" + ($n | tostring) + "\\b"))]
| {total: length, closed: [.[] | select(.state == "closed")] | length}
| .total == .closed'
If the result is "false", some sub-issues are still open; skip.
d. If ALL sub-issues are closed, collect sub-issue numbers and titles:
SUB_ISSUES=$(curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/issues?state=all&type=issues&limit=50" \
| jq -r --argjson n N \
'[.[] | select(.body != null) | select(.body | test("Decomposed from #" + ($n | tostring) + "\\b"))]
| .[] | "- #\(.number) \(.title)"')
e. Write a comment action listing the resolved sub-issues.
Use jq to build valid JSON (sub-issue titles may contain quotes/backslashes,
and SUB_ISSUES is multiline; raw interpolation would break JSONL):
COMMENT_BODY=$(printf 'All sub-issues have been resolved:\n%s\n\nClosing this parent issue as all decomposed work is complete.' "$SUB_ISSUES")
jq -n --argjson issue N --arg body "$COMMENT_BODY" \
'{action:"comment", issue: $issue, body: $body}' \
>> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
f. Write a close action:
jq -n --argjson issue N \
'{action:"close", issue: $issue, reason: "all sub-issues resolved"}' \
>> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
g. Log the action:
echo "ACTION: closed #N — all sub-issues resolved" >> "$RESULT_FILE"
Edge cases:
- Already closed parent: skipped (only open issues are processed)
- No sub-issues found: skipped (not a decomposed issue)
- Multi-cause bugs: stays open until ALL sub-issues are closed
Stage AGENTS.md if changed the commit-and-pr step handles the actual commit.
Processing order:
1. Handle PRIORITY_blockers_starving_factory first: promote or resolve
2. Quality gate: strip backlog from issues missing acceptance criteria or affected files
3. Bug-report detection: label qualifying issues before other classification
4. Bug-report lifecycle: close parents whose sub-issues are all resolved
5. Process tech-debt issues by score (impact/effort)
6. Classify remaining items as dust or route to vault
2. AD alignment check: close backlog issues that violate architecture decisions
3. Quality gate: strip backlog from issues missing acceptance criteria or affected files
4. Process tech-debt issues by score (impact/effort)
5. Classify remaining items as dust or route to vault
6. Portfolio lifecycle: update addressables/observables tables
Do NOT bundle dust yourself; the dust-bundling step handles accumulation,
dedup, TTL expiry, and bundling into backlog issues.
@ -334,22 +257,137 @@ session, so changes there would be lost.
5. If no DUST items were emitted and no groups are ripe, skip this step.
CRITICAL: If this step fails, log the failure and move on.
CRITICAL: If this step fails, log the failure and move on to blocked-review.
"""
needs = ["grooming"]
# ─────────────────────────────────────────────────────────────────────
# Step 4: agents-update — AGENTS.md watermark staleness + size enforcement
# Step 4: blocked-review — triage blocked issues
# ─────────────────────────────────────────────────────────────────────
[[steps]]
id = "blocked-review"
title = "Review issues labeled blocked"
description = """
Review all issues labeled 'blocked' and decide their fate.
(See issue #352 for the blocked label convention.)
1. Fetch all blocked issues:
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/issues?state=open&type=issues&labels=blocked&limit=50"
2. For each blocked issue, read the full body and comments:
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/issues/<number>"
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/issues/<number>/comments"
3. Check dependencies: extract issue numbers from ## Dependencies /
## Depends on / ## Blocked by sections. For each dependency:
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/issues/<dep_number>"
Check if the dependency is now closed.
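The extraction in step 3 can be sketched with awk; the sample body is illustrative, and the heading regex covers the three section names listed above:

```shell
# Sketch: pull dependency issue numbers out of a blocked issue body.
# awk keeps only lines between a dependencies-style heading and the next
# "## " heading; references elsewhere in the body (like #99) are ignored.
body='## Goal
Do the thing.

## Blocked by
- #41 needs vault decision
- #52

## Notes
See #99 for history.'
deps=$(printf '%s\n' "$body" \
  | awk '/^## (Dependencies|Depends on|Blocked by)/{f=1; next} /^## /{f=0} f' \
  | grep -oE '#[0-9]+' | tr -d '#')
echo "$deps"
```

Each number in `$deps` then gets the GET call above to check whether it is closed.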
4. For each blocked issue, choose ONE action:
UNBLOCK: all dependencies are now closed or the blocking condition resolved:
a. Write a remove_label action to the manifest:
echo '{"action":"remove_label","issue":NNN,"label":"blocked"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
b. Write a comment action to the manifest:
echo '{"action":"comment","issue":NNN,"body":"Unblocked: <explanation of what resolved the blocker>"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
NEEDS HUMAN: blocking condition is ambiguous, requires architectural
decision, or involves external factors:
a. Write a comment action to the manifest:
echo '{"action":"comment","issue":NNN,"body":"<diagnostic: what you found and what decision is needed>"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
b. Leave the 'blocked' label in place
CLOSE: issue is stale (blocked 30+ days with no progress on blocker),
the blocker is wontfix, or the issue is no longer relevant:
a. Write a comment action to the manifest:
echo '{"action":"comment","issue":NNN,"body":"Closing: <reason — stale blocker, no longer relevant, etc.>"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
b. Write a close action to the manifest:
echo '{"action":"close","issue":NNN,"reason":"<stale blocker / no longer relevant / etc.>"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
CRITICAL: If this step fails, log the failure and move on.
"""
needs = ["dust-bundling"]
# ─────────────────────────────────────────────────────────────────────
# Step 5: stale-pr-recycle — recycle stale failed PRs back to backlog
# ─────────────────────────────────────────────────────────────────────
[[steps]]
id = "stale-pr-recycle"
title = "Recycle stale failed PRs back to backlog"
description = """
Detect open PRs where CI has failed and no work has happened in 24+ hours.
These represent abandoned dev-agent attempts recycle them so the pipeline
can retry with a fresh session.
1. Fetch all open PRs:
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/pulls?state=open&limit=50"
2. For each PR, check all four conditions before recycling:
a. CI failed: get the HEAD SHA from the PR's head.sha field, then:
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/commits/<head_sha>/status"
Only proceed if the combined state is "failure" or "error".
Skip PRs with "success", "pending", or no CI status.
b. Last push > 24 hours ago: get the commit details:
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/git/commits/<head_sha>"
Parse the committer.date field. Only proceed if it is older than:
$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)
c. Linked issue exists: extract the issue number from the PR body.
Look for "Fixes #NNN" patterns (case-insensitive).
If no linked issue found, skip this PR (cannot reset labels).
d. No active tmux session:
tmux has-session -t "dev-${PROJECT_NAME}-<issue_number>" 2>/dev/null
If a session exists, someone may still be working; skip this PR.
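The staleness comparison in condition b works as plain string comparison, because ISO-8601 UTC timestamps sort chronologically; a sketch, assuming GNU `date` and a sample commit date:

```shell
# Sketch: lexicographic comparison of ISO-8601 UTC timestamps decides
# staleness. The commit date is sample data standing in for .committer.date.
cutoff=$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)
commit_date="2020-01-01T00:00:00Z"
if [[ "$commit_date" < "$cutoff" ]]; then
  stale=yes
else
  stale=no
fi
echo "$stale"
```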
3. For each PR that passes all checks (failed CI, 24+ hours stale,
linked issue found, no active session):
a. Write a comment on the PR explaining the recycle:
echo '{"action":"comment","issue":<pr_number>,"body":"Recycling stale CI failure for fresh attempt. Previous PR: #<pr_number>"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
b. Write a close_pr action:
echo '{"action":"close_pr","pr":<pr_number>}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
c. Remove the in-progress label from the linked issue:
echo '{"action":"remove_label","issue":<issue_number>,"label":"in-progress"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
d. Add the backlog label to the linked issue:
echo '{"action":"add_label","issue":<issue_number>,"label":"backlog"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
e. Log to result file:
echo "ACTION: recycled PR #<pr_number> (linked issue #<issue_number>) — stale CI failure" >> "$RESULT_FILE"
4. If no stale failed PRs found, skip this step.
CRITICAL: If this step fails, log the failure and move on to agents-update.
"""
needs = ["blocked-review"]
# ─────────────────────────────────────────────────────────────────────
# Step 6: agents-update — AGENTS.md watermark staleness + size enforcement
# ─────────────────────────────────────────────────────────────────────
[[steps]]
id = "agents-update"
title = "Check AGENTS.md watermarks, discover structural changes, update stale files"
title = "Check AGENTS.md watermarks, update stale files, enforce size limit"
description = """
Maintain all AGENTS.md files by detecting structural drift since the last
review. Uses git history as the source of truth, not vibes.
Check all AGENTS.md files for staleness, update any that are outdated, and
enforce the ~200-line size limit via progressive disclosure splitting.
This keeps documentation fresh; it runs 2x/day so drift stays small.
## Part A: Discover what changed
## Part A: Watermark staleness check and update
1. Read the HEAD SHA from preflight:
HEAD_SHA=$(cat /tmp/gardener-head-sha)
@ -359,80 +397,110 @@ review. Uses git history as the source of truth — not vibes.
3. For each file, read the watermark from line 1:
<!-- last-reviewed: <sha> -->
If no watermark exists, treat the file as fully stale (review everything).
4. Check for changes since the watermark:
git log --oneline <watermark>..HEAD -- <directory>
If zero changes, the file is current; skip it.
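Steps 3 and 4 reduce to a small amount of shell; the file content and SHAs below are stubbed so the watermark handling can be seen end to end:

```shell
# Hedged sketch: read the line-1 watermark, then refresh it after updating
f=$(mktemp)
printf '<!-- last-reviewed: abc123 -->\n# Dev Agent\n' > "$f"
wm=$(sed -n '1s/.*last-reviewed: \([0-9a-f]*\).*/\1/p' "$f")
echo "watermark: $wm"   # feed into: git log --oneline "$wm"..HEAD -- <directory>
HEAD_SHA=def456
sed -i "1s/last-reviewed: [0-9a-f]*/last-reviewed: $HEAD_SHA/" "$f"
head -1 "$f"
```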
5. For each stale file, run a STRUCTURAL DIFF; this is the core of the step:
5. For stale files:
- Read the AGENTS.md and the source files in that directory
- Update the documentation to reflect code changes since the watermark
- Set the watermark to the HEAD SHA from the preflight step
- Conventions: architecture and WHY, not implementation details
a. FILE INVENTORY: list files at watermark vs HEAD for this directory:
git ls-tree -r --name-only <watermark> -- <directory>
git ls-tree -r --name-only HEAD -- <directory>
Diff the two lists. Categorize:
- NEW files: in HEAD but not in watermark
- DELETED files: in watermark but not in HEAD
- Check AGENTS.md layout section: does it list each current file?
Files present in the directory but absent from the layout = GAPS.
Files listed in the layout but missing from the directory = LIES.
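The inventory diff in (a) can be sketched with `comm`; the file lists are stubbed in place of real `git ls-tree` output:

```shell
# Hedged sketch: categorize NEW vs DELETED files between watermark and HEAD
old=$(printf 'a.sh\nb.sh\n')   # git ls-tree -r --name-only <watermark> -- <dir> | sort
new=$(printf 'a.sh\nc.sh\n')   # git ls-tree -r --name-only HEAD -- <dir> | sort
new_files=$(comm -13 <(echo "$old") <(echo "$new"))   # in HEAD only
deleted=$(comm -23 <(echo "$old") <(echo "$new"))     # in watermark only
echo "NEW: $new_files"
echo "DELETED: $deleted"
```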
## Part B: Size limit enforcement (progressive disclosure split)
b. REFERENCE VALIDATION: extract every file path, function name, and
shell variable referenced in the AGENTS.md. For each:
- File paths: verify the file exists (ls or git ls-tree HEAD)
- Function names: grep for the definition in the codebase
- Script names: verify they exist where claimed
Any reference that fails validation is a LIE; flag it for correction.
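A minimal sketch of the file-path validation in (b); the extraction regex is a heuristic and the document content is stubbed:

```shell
# Hedged sketch: flag referenced paths that do not exist on disk
doc=$(mktemp)
printf 'See dev/dev-poll.sh and lib/missing.sh for details.\n' > "$doc"
lies=$(grep -oE '[A-Za-z0-9_./-]+\.(sh|py|toml)' "$doc" | sort -u | while read -r p; do
  [ -e "$p" ] || echo "LIE: $p"
done)
echo "$lies"
```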
c. SEMANTIC CHANGES: for files that existed at both watermark and HEAD,
check if they changed meaningfully:
git diff <watermark>..HEAD -- <directory>/*.sh <directory>/*.py <directory>/*.toml
Look for: new exported functions, removed functions, renamed files,
changed CLI flags, new environment variables, new configuration.
Ignore: internal refactors, comment changes, formatting.
6. For each stale file, apply corrections:
- Add NEW files to the layout section
- Remove DELETED files from the layout section
- Fix every LIE found in reference validation
- Add notes about significant SEMANTIC CHANGES
- Set the watermark to HEAD_SHA
- Conventions: document architecture and WHY, not implementation details
## Part B: Size limit enforcement
After all updates, count lines in the root AGENTS.md:
After all updates are done, count lines in the root AGENTS.md:
wc -l < "$PROJECT_REPO_ROOT/AGENTS.md"
If it exceeds 200 lines, split verbose sections into per-directory files
using progressive disclosure:
If the root AGENTS.md exceeds 200 lines, perform a progressive disclosure
split. The principle: agent reads the map, drills into detail only when
needed. You wouldn't dump a 500-page wiki on a new hire's first morning.
7. Identify sections that can be extracted to per-directory files.
Keep the root AGENTS.md as a table of contents: brief overview,
directory layout, summary tables with links to detail files.
6. Identify per-directory sections to extract. Each agent section under
"## Agents" (e.g. "### Dev (`dev/`)", "### Review (`review/`)") and
each helper section (e.g. "### Shared helpers (`lib/`)") is a candidate.
Also extract verbose subsections like "## Issue lifecycle and label
conventions" and "## Phase-Signaling Protocol" into docs/ or the
relevant directory.
8. For each extracted section, create a `{dir}/AGENTS.md` with:
7. For each section to extract, create a `{dir}/AGENTS.md` file with:
- Line 1: watermark <!-- last-reviewed: <HEAD_SHA> -->
- The full section content, preserving structure and detail
- The full section content (role, trigger, key files, env vars, lifecycle)
- Keep the same markdown structure and detail level
9. Replace extracted sections in root with concise summaries + links.
Example for dev/:
```
<!-- last-reviewed: abc123 -->
# Dev Agent
10. Verify root is under 200 lines. If still over, extract more.
**Role**: Implement issues autonomously ...
**Trigger**: dev-poll.sh runs every 10 min ...
**Key files**: ...
**Environment variables consumed**: ...
**Lifecycle**: ...
```
8. Replace extracted sections in the root AGENTS.md with a concise
directory map table. The root file keeps ONLY:
- Watermark (line 1)
- ## What this repo is (brief overview)
- ## Directory layout (existing tree)
- ## Tech stack
- ## Coding conventions
- ## How to lint and test
- ## Agents — replaced with a summary table pointing to per-dir files:
## Agents
| Agent | Directory | Role | Guide |
|-------|-----------|------|-------|
| Dev | dev/ | Issue implementation | [dev/AGENTS.md](dev/AGENTS.md) |
| Review | review/ | PR review | [review/AGENTS.md](review/AGENTS.md) |
| Gardener | gardener/ | Backlog grooming | [gardener/AGENTS.md](gardener/AGENTS.md) |
| ... | ... | ... | ... |
- ## Shared helpers — replaced with a brief pointer:
"See [lib/AGENTS.md](lib/AGENTS.md) for the full helper reference."
Keep the summary table if it fits, or move it to lib/AGENTS.md.
- ## Issue lifecycle and label conventions — keep a brief summary
(labels table + dependency convention) or move verbose parts to
docs/PHASE-PROTOCOL.md
- ## Architecture Decisions — keep in root (humans write, agents enforce)
- ## Phase-Signaling Protocol — keep a brief summary with pointer:
"See [docs/PHASE-PROTOCOL.md](docs/PHASE-PROTOCOL.md) for the full spec."
9. Verify the root AGENTS.md is now under 200 lines:
LINE_COUNT=$(wc -l < "$PROJECT_REPO_ROOT/AGENTS.md")
if [ "$LINE_COUNT" -gt 200 ]; then
echo "WARNING: root AGENTS.md still $LINE_COUNT lines after split"
fi
If still over 200, trim further; move more detail into per-directory
files. The root should read like a table of contents, not an encyclopedia.
10. Each new per-directory AGENTS.md must have a watermark on line 1.
The gardener maintains freshness for ALL AGENTS.md files, root and
per-directory, using the same watermark mechanism from Part A.
## Staging
11. Stage all AGENTS.md files created or changed:
11. Stage ALL AGENTS.md files you created or changed; do NOT commit yet.
All git writes happen in the commit-and-pr step at the end:
find . -name "AGENTS.md" -not -path "./.git/*" -exec git add {} +
12. If no files need updating AND root is under 200 lines, skip entirely.
12. If no AGENTS.md files need updating AND root is under 200 lines,
skip this step entirely.
CRITICAL: If this step fails for any reason, log the failure and move on.
Do NOT let an AGENTS.md failure prevent the commit-and-pr step.
"""
needs = ["dust-bundling"]
needs = ["stale-pr-recycle"]
# ─────────────────────────────────────────────────────────────────────
# Step 5: commit-and-pr — single commit with all file changes
# Step 7: commit-and-pr — single commit with all file changes
# ─────────────────────────────────────────────────────────────────────
[[steps]]
@ -486,14 +554,16 @@ executes them after the PR merges.
PR_NUMBER=$(echo "$PR_RESPONSE" | jq -r '.number')
h. Save PR number for orchestrator tracking:
echo "$PR_NUMBER" > /tmp/gardener-pr-${PROJECT_NAME}.txt
i. The orchestrator handles CI/review via pr_walk_to_merge.
The gardener stays alive to inject CI results and review feedback
as they come in, then executes the pending-actions manifest after merge.
i. Signal the orchestrator to monitor CI:
echo "PHASE:awaiting_ci" > "$PHASE_FILE"
j. STOP and WAIT. Do NOT return to the primary branch.
The orchestrator polls CI, injects results and review feedback.
When you receive injected CI or review feedback, follow its
instructions, then write PHASE:awaiting_ci and wait again.
4. If no file changes existed (step 2 found nothing):
# Nothing to commit — the gardener has no work to do this run.
exit 0
echo "PHASE:done" > "$PHASE_FILE"
5. If PR creation fails, log the error and exit.
5. If PR creation fails, log the error and write PHASE:failed.
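Sub-steps (g) and (h) can be sketched as below; building the payload with `jq -n` avoids quoting bugs, and the `$FORGE_API/pulls` endpoint path is an assumption to verify against the forge API before relying on it:

```shell
# Hedged sketch: build the PR payload with jq, then POST it (curl left commented)
BRANCH="gardener/$(date -u +%Y%m%d)"
payload=$(jq -n --arg t "chore: gardener run" --arg h "$BRANCH" --arg b "${PRIMARY_BRANCH:-main}" \
  '{title: $t, head: $h, base: $b}')
echo "$payload"
# PR_RESPONSE=$(curl -sf -X POST -H "Authorization: token $FORGE_TOKEN" \
#   -H "Content-Type: application/json" -d "$payload" "$FORGE_API/pulls")
# PR_NUMBER=$(echo "$PR_RESPONSE" | jq -r '.number')
```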
"""
needs = ["agents-update"]

View file

@ -1,10 +1,10 @@
# formulas/run-planner.toml — Strategic planning formula (v4: graph-driven)
#
# Executed directly by planner-run.sh via polling loop — no action issues.
# Executed directly by planner-run.sh via cron — no action issues.
# planner-run.sh creates a tmux session with Claude (opus) and injects
# this formula as context, plus the graph report from build-graph.py.
#
# Steps: preflight → triage-and-plan → commit-ops-changes
# Steps: preflight → triage-and-plan → journal-and-commit
#
# v4 changes from v3:
# - Graph report (orphans, cycles, thin objectives, bottlenecks) replaces
@ -13,8 +13,7 @@
# - 3 steps instead of 6.
#
# AGENTS.md maintenance is handled by the gardener (#246).
# All git writes (tree, memory) happen in one commit at the end.
# Journal writing is delegated to generic profile_write_journal() function.
# All git writes (tree, journal, memory) happen in one commit at the end.
name = "run-planner"
description = "Planner v4: graph-driven planning with tea helpers"
@ -152,10 +151,13 @@ From the updated tree + graph bottlenecks, identify the top 5 constraints.
A constraint is an unresolved prerequisite blocking the most downstream objectives.
Graph bottlenecks (high betweenness centrality) and thin objectives inform ranking.
HUMAN_BLOCKED handling (needs human decision or external resource):
- File a vault procurement item instead of skipping. First check for duplicates
across ALL vault directories (pending/, approved/, fired/); if a file with the
same slug already exists in any of them, do NOT create a new one.
Stuck issue handling:
- BOUNCED/LABEL_CHURN: do NOT re-promote. Dispatch groom-backlog formula instead:
tea_file_issue "chore: break down #<N> — bounced <count>x" "<body>" "action"
- HUMAN_BLOCKED (needs human decision or external resource): file a vault
procurement item instead of skipping. First check for duplicates across ALL
vault directories (pending/, approved/, fired/); if a file with the same
slug already exists in any of them, do NOT create a new one.
Naming: $OPS_REPO_ROOT/vault/pending/<project>-<slug>.md (e.g. disinto-github-org.md).
Write with this template:
@ -183,37 +185,10 @@ HUMAN_BLOCKED handling (needs human decision or external resource):
Then mark the prerequisite in the tree as "blocked-on-vault ($OPS_REPO_ROOT/vault/pending/<id>.md)".
Do NOT skip or mark as "awaiting human decision"; the vault owns the human interface.
Template-or-vision filing gate (for non-stuck constraints):
1. Read issue templates from .codeberg/ISSUE_TEMPLATE/*.yaml:
- bug.yaml: for broken/incorrect behavior (error in logs, failing test)
- feature.yaml: for new capabilities (prerequisite doesn't exist)
- refactor.yaml: for restructuring without behavior change
2. Attempt to fill template fields:
- affected_files: list 3 or fewer specific files
- acceptance_criteria: write concrete, checkable criteria (max 5)
- proposed_solution/approach: is there one clear approach, or design forks?
3. Complexity test:
- If work touches ONE subsystem (3 or fewer files) AND no design forks
(only one reasonable approach) AND template fields fill confidently:
File as `backlog` using matching template format
- Otherwise: label `vision` with a short body:
- Problem statement
- Why it's vision-sized
- Which objectives it blocks
- Include "## Why vision" section explaining complexity
4. Template selection heuristic:
- Bug template: planner identifies something broken (error in logs,
incorrect behavior, failing test)
- Feature template: new capability needed (prerequisite doesn't exist)
- Refactor template: existing code needs restructuring without behavior change
5. Filing steps:
- Check if issue already exists (match by #number in tree or title search)
- If no issue, create with tea_file_issue using template format
- If issue exists and is open, skip; no duplicates
Filing gate (for non-stuck constraints):
1. Check if issue already exists (match by #number in tree or title search)
2. If no issue, create one with tea_file_issue using the template above
3. If issue exists and is open, skip; no duplicates
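The duplicate check in step 1 can be sketched like this; the open-issue list is stubbed, and in a real run it would come from `tea issues list` (the exact flag names are an assumption):

```shell
# Hedged sketch: skip filing when an open issue with the same title already exists
existing=$(printf 'feat: add edge proxy\nfix: staging seed page\n')
# real run: existing=$(tea issues list --login "$TEA_LOGIN" --repo "$FORGE_REPO" --state open)
title="fix: staging seed page"
if echo "$existing" | grep -qiF "$title"; then
  result="duplicate, skipping"
else
  result="filing"   # tea_file_issue "$title" "<body>" "backlog"
fi
echo "$result"
```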
Priority label sync:
- Add priority to current top-5 constraint issues (if missing):
@ -242,13 +217,50 @@ CRITICAL: If any part of this step fails, log the failure and continue.
needs = ["preflight"]
[[steps]]
id = "commit-ops-changes"
title = "Write tree, memory, and journal; commit and push branch"
id = "journal-and-commit"
title = "Write tree, journal, optional memory; commit and PR"
description = """
### 1. Write prerequisite tree
Write to: $OPS_REPO_ROOT/prerequisites.md
### 2. Memory update (every 5th run)
### 2. Write journal entry
Create/append to: $OPS_REPO_ROOT/journal/planner/$(date -u +%Y-%m-%d).md
Format:
# Planner run — YYYY-MM-DD HH:MM UTC
## Predictions triaged
- #NNN: ACTION — reasoning (or "No unreviewed predictions")
## Prerequisite tree updates
- Resolved: <list>
- Discovered: <list>
- Proposed: <list>
## Top 5 constraints
1. <prerequisite> blocks N objectives; #NNN (existing|filed)
## Stuck issues detected
- #NNN: BOUNCED (Nx) — dispatched groom-backlog as #MMM
(or "No stuck issues detected")
## Vault items filed
- $OPS_REPO_ROOT/vault/pending/<id>.md: <what> blocks #NNN
(or "No vault items filed")
## Issues created
- #NNN: title — why (or "No new issues")
## Priority label changes
- Added/removed priority: #NNN (or "No priority changes")
## Observations
- Key patterns noticed this run
## Deferred
- Items in tree beyond top 5, why not filed
Keep it concise: 30-50 lines max.
### 3. Memory update (every 5th run)
Count "# Planner run —" headers across all journal files.
Check "<!-- summarized-through-run: N -->" in planner-memory.md.
If (count - N) >= 5 or planner-memory.md missing, write to:
@ -256,21 +268,15 @@ If (count - N) >= 5 or planner-memory.md missing, write to:
Include: run counter marker, date, constraint focus, patterns, direction.
Keep under 100 lines. Replace entire file.
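The run-counter arithmetic above can be sketched against stubbed journal files; the header text and paths are illustrative:

```shell
# Hedged sketch: count planner runs and read the summarized-through marker
jdir=$(mktemp -d)
printf '# Planner run - a\n...\n# Planner run - b\n' > "$jdir/2026-03-25.md"
printf '# Planner run - c\n' > "$jdir/2026-03-26.md"
count=$(grep -h '^# Planner run' "$jdir"/*.md | wc -l)
n=$(sed -n 's/.*summarized-through-run: \([0-9]*\).*/\1/p' "$jdir/planner-memory.md" 2>/dev/null)
echo "runs=$count summarized=${n:-0}"   # rewrite memory when count - n >= 5 or file missing
```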
### 3. Commit ops repo changes to the planner branch
Commit the ops repo changes (prerequisites, memory, vault items) and push the
branch. Do NOT push directly to $PRIMARY_BRANCH; planner-run.sh will create a
PR and walk it to merge via review-bot.
### 4. Commit ops repo changes
Commit the ops repo changes (prerequisites, journal, memory, vault items):
cd "$OPS_REPO_ROOT"
git add prerequisites.md knowledge/planner-memory.md vault/pending/
git add prerequisites.md journal/planner/ knowledge/planner-memory.md vault/pending/
git add -u
if ! git diff --cached --quiet; then
git commit -m "chore: planner run $(date -u +%Y-%m-%d)"
git push origin HEAD
git push origin "$PRIMARY_BRANCH"
fi
cd "$PROJECT_REPO_ROOT"
### 4. Write journal entry (generic)
The planner-run.sh wrapper will handle journal writing via profile_write_journal()
after the formula completes. This step is informational only.
"""
needs = ["triage-and-plan"]

View file

@ -6,7 +6,7 @@
# Memory: previous predictions on the forge ARE the memory.
# No separate memory file — the issue tracker is the source of truth.
#
# Executed by predictor/predictor-run.sh via polling loop — no action issues.
# Executed by predictor/predictor-run.sh via cron — no action issues.
# predictor-run.sh creates a tmux session with Claude (sonnet) and injects
# this formula as context. Claude executes all steps autonomously.
#
@ -119,24 +119,27 @@ For each weakness you identify, choose one:
**Suggested action:** <what the planner should consider>
**EXPLOIT** high confidence, have a theory you can test:
File a prediction/unreviewed issue AND a vault PR that dispatches
a formula to generate evidence (AD-006: external actions go through vault).
File a prediction/unreviewed issue AND an action issue that dispatches
a formula to generate evidence.
The prediction explains the theory. The vault PR triggers the proof
after human approval. When the planner runs next, evidence is already there.
The prediction explains the theory. The action generates the proof.
When the planner runs next, evidence is already there.
Vault dispatch (requires lib/action-vault.sh):
source "$PROJECT_REPO_ROOT/lib/action-vault.sh"
Action issue body format (label: action):
Dispatched by predictor to test theory in #<prediction_number>.
TOML_CONTENT="id = \"predict-<prediction_number>-<formula>\"
context = \"Test prediction #<prediction_number>: <theory summary> — focus: <specific test>\"
formula = \"<formula-name>\"
secrets = []
# Unblocks: #<prediction_number>
# Expected evidence: evidence/<dir>/<date>-<name>.json
"
PR_NUM=$(vault_request "predict-<prediction_number>-<formula>" "$TOML_CONTENT")
echo "Vault PR #${PR_NUM} filed to test prediction #<prediction_number>"
## Task
Run <formula name> with focus on <specific test>.
## Expected evidence
Results in evidence/<dir>/<date>-<name>.json
## Acceptance criteria
- [ ] Formula ran to completion
- [ ] Evidence file written with structured results
## Affected files
- evidence/<dir>/
Available formulas (check $PROJECT_REPO_ROOT/formulas/*.toml for current list):
cat "$PROJECT_REPO_ROOT/formulas/"*.toml | grep '^name' | head -10
@ -153,10 +156,10 @@ tea is pre-configured with login "$TEA_LOGIN" and repo "$FORGE_REPO".
tea issues create --login "$TEA_LOGIN" --repo "$FORGE_REPO" \
--title "<title>" --body "<body>" --labels "prediction/unreviewed"
2. Dispatch formula via vault (if exploiting):
source "$PROJECT_REPO_ROOT/lib/action-vault.sh"
PR_NUM=$(vault_request "predict-NNN-<formula>" "$TOML_CONTENT")
# See EXPLOIT section above for TOML_CONTENT format
2. File action dispatches (if exploiting):
tea issues create --login "$TEA_LOGIN" --repo "$FORGE_REPO" \
--title "action: test prediction #NNN — <formula> <focus>" \
--body "<body>" --labels "action"
3. Close superseded predictions:
tea issues close <number> --login "$TEA_LOGIN" --repo "$FORGE_REPO"
@ -170,11 +173,11 @@ tea is pre-configured with login "$TEA_LOGIN" and repo "$FORGE_REPO".
## Rules
- Max 5 actions total (predictions + vault dispatches combined)
- Each exploit counts as 2 (prediction + vault dispatch)
- Max 5 actions total (predictions + action dispatches combined)
- Each exploit counts as 2 (prediction + action dispatch)
- So: 5 explores, or 2 exploits + 1 explore, or 1 exploit + 3 explores
- Never re-file a dismissed prediction without new evidence
- Vault dispatches must reference existing formulas; don't invent formulas
- Action issues must reference existing formulas; don't invent formulas
- Be specific: name the file, the metric, the threshold, the formula
- If no weaknesses found, file nothing; that's a strong signal the project is healthy

View file

@ -3,7 +3,7 @@
# Trigger: action issue created by planner (gap analysis), dev-poll (post-merge
# hook detecting site/ changes), or gardener (periodic SHA drift check).
#
# The dispatcher picks up the issue, executes these steps, posts results
# The action-agent picks up the issue, executes these steps, posts results
# as a comment, and closes the issue.
name = "run-publish-site"
@ -216,7 +216,7 @@ Check 3 — engagement evidence has been collected at least once:
jq -r '" visitors=\(.unique_visitors) pages=\(.page_views) referrals=\(.referred_visitors)"' "$LATEST" 2>/dev/null || true
else
echo "NOTE: No engagement reports yet — run: bash site/collect-engagement.sh"
echo "The first report will appear after the scheduled collection runs (daily at 23:55 UTC)."
echo "The first report will appear after the cron job runs (daily at 23:55 UTC)."
fi
Summary:

View file

@ -5,7 +5,7 @@
# the action and notifies the human for one-click copy-paste execution.
#
# Trigger: action issue created by planner or any formula.
# The dispatcher picks up the issue, executes these steps, writes a draft
# The action-agent picks up the issue, executes these steps, writes a draft
# to vault/outreach/{platform}/drafts/, notifies the human via the forge,
# and closes the issue.
#

View file

@ -1,7 +1,7 @@
# formulas/run-supervisor.toml — Supervisor formula (health monitoring + remediation)
#
# Executed by supervisor/supervisor-run.sh via polling loop (every 20 minutes).
# supervisor-run.sh runs claude -p via agent-sdk.sh and injects
# Executed by supervisor/supervisor-run.sh via cron (every 20 minutes).
# supervisor-run.sh creates a tmux session with Claude (sonnet) and injects
# this formula with pre-collected metrics as context.
#
# Steps: preflight → health-assessment → decide-actions → report → journal
@ -34,15 +34,13 @@ and injected into your prompt above. Review them now.
(24h grace period). Check the "Stale Phase Cleanup" section for any
files cleaned or in grace period this run.
2. Check vault state: read ${OPS_VAULT_ROOT:-$OPS_REPO_ROOT/vault/pending}/*.md for any procurement items
2. Check vault state: read $OPS_REPO_ROOT/vault/pending/*.md for any procurement items
the planner has filed. Note items relevant to the health assessment
(e.g. a blocked resource that explains why the pipeline is stalled).
Note: In degraded mode, vault items are stored locally.
3. Read the supervisor journal for recent history:
JOURNAL_FILE="${OPS_JOURNAL_ROOT:-$OPS_REPO_ROOT/journal/supervisor}/$(date -u +%Y-%m-%d).md"
JOURNAL_FILE="$OPS_REPO_ROOT/journal/supervisor/$(date -u +%Y-%m-%d).md"
if [ -f "$JOURNAL_FILE" ]; then cat "$JOURNAL_FILE"; fi
Note: In degraded mode, the journal is stored locally and not committed to git.
4. Note any values that cross these thresholds:
- RAM available < 500MB or swap > 3GB → P0 (memory crisis)
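The P0 memory threshold can be checked with a quick probe; the awk field names assume Linux's /proc/meminfo and standard `free` output:

```shell
# Hedged sketch: read available RAM (MB) and swap used (MB) on Linux
ram_avail=$(awk '/^MemAvailable:/ {print int($2/1024)}' /proc/meminfo)
swap_used=$(free -m | awk '/^Swap:/ {print $3}')
echo "ram_avail=${ram_avail}MB swap_used=${swap_used}MB"
```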
@ -107,13 +105,8 @@ For each finding from the health assessment, decide and execute an action.
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null 2>&1 || true
**P1 Disk pressure:**
# First pass: dangling only (cheap, safe)
# Docker cleanup
sudo docker system prune -f >/dev/null 2>&1 || true
# If still > 80%, escalate to all unused images (more aggressive but necessary)
_pct=$(df -h / | awk 'NR==2{print $5}' | tr -d '%')
if [ "${_pct:-0}" -gt 80 ]; then
sudo docker system prune -a -f >/dev/null 2>&1 || true
fi
# Truncate logs > 10MB
for f in "$FACTORY_ROOT"/{dev,review,supervisor,gardener,planner,predictor}/*.log; do
[ -f "$f" ] && [ "$(du -k "$f" | cut -f1)" -gt 10240 ] && truncate -s 0 "$f"
@ -144,22 +137,21 @@ For each finding from the health assessment, decide and execute an action.
**P3 Stale PRs (CI done >20min, no push since):**
Do NOT read dev-poll.sh, push branches, attempt merges, or investigate pipeline code.
Instead, file a vault item for the dev-agent to pick up:
Write ${OPS_VAULT_ROOT:-$OPS_REPO_ROOT/vault/pending}/stale-pr-${ISSUE_NUM}.md:
# Stale PR: ${PR_TITLE}
## What
CI finished >20min ago but no git push has been made to the PR branch.
## Why
P3 Factory degraded: PRs should be pushed within 20min of CI completion.
## Unblocks
- Factory health: dev-agent will push the branch and continue the workflow
Instead, nudge the dev-agent via tmux injection if a session is alive:
# Find the dev session for this issue
SESSION=$(tmux list-sessions -F '#{session_name}' 2>/dev/null | grep "dev-.*-${ISSUE_NUM}" | head -1)
if [ -n "$SESSION" ]; then
# Inject a nudge into the dev-agent session
tmux send-keys -t "$SESSION" "# [supervisor] PR stale >20min — CI finished, please push or update" Enter
fi
If no active tmux session exists, note it in the journal for the next dev-poll cycle.
Do NOT file vault items for stale PRs unless they remain stale for >3 consecutive runs.
### Cannot auto-fix → file vault item
For P0-P2 issues that persist after auto-fix attempts, or issues requiring
human judgment, file a vault procurement item:
Write ${OPS_VAULT_ROOT:-$OPS_REPO_ROOT/vault/pending}/supervisor-<issue-slug>.md:
Write $OPS_REPO_ROOT/vault/pending/supervisor-<issue-slug>.md:
# <What is needed>
## What
<description of the problem and why the supervisor cannot fix it>
@ -167,24 +159,14 @@ human judgment, file a vault procurement item:
<impact on factory health; reference the priority level>
## Unblocks
- Factory health: <what this resolves>
Vault PR filed on ops repo; human approves via PR review.
Note: In degraded mode (no ops repo), vault items are written locally to ${OPS_VAULT_ROOT:-local path}.
The vault-poll will notify the human and track the request.
### Reading best-practices files
Read the relevant best-practices file before taking action. In degraded mode,
use the bundled knowledge files from ${OPS_KNOWLEDGE_ROOT:-$OPS_REPO_ROOT/knowledge}:
cat "${OPS_KNOWLEDGE_ROOT:-$OPS_REPO_ROOT/knowledge}/memory.md" # P0
cat "${OPS_KNOWLEDGE_ROOT:-$OPS_REPO_ROOT/knowledge}/disk.md" # P1
cat "${OPS_KNOWLEDGE_ROOT:-$OPS_REPO_ROOT/knowledge}/ci.md" # P2 CI
cat "${OPS_KNOWLEDGE_ROOT:-$OPS_REPO_ROOT/knowledge}/dev-agent.md" # P2 agent
cat "${OPS_KNOWLEDGE_ROOT:-$OPS_REPO_ROOT/knowledge}/git.md" # P2 git
cat "${OPS_KNOWLEDGE_ROOT:-$OPS_REPO_ROOT/knowledge}/review-agent.md" # P2 review
cat "${OPS_KNOWLEDGE_ROOT:-$OPS_REPO_ROOT/knowledge}/forge.md" # P2 forge
Note: If OPS_REPO_ROOT is not available (degraded mode), the bundled knowledge
files in ${OPS_KNOWLEDGE_ROOT:-<unset>} provide fallback guidance.
Read the relevant best-practices file before taking action:
cat "$OPS_REPO_ROOT/knowledge/memory.md" # P0
cat "$OPS_REPO_ROOT/knowledge/disk.md" # P1
cat "$OPS_REPO_ROOT/knowledge/ci.md" # P2 CI
cat "$OPS_REPO_ROOT/knowledge/dev-agent.md" # P2 agent
cat "$OPS_REPO_ROOT/knowledge/git.md" # P2 git
Track what you fixed and what vault items you filed for the report step.
"""
@ -226,7 +208,7 @@ description = """
Append a timestamped entry to the supervisor journal.
File path:
${OPS_JOURNAL_ROOT:-$OPS_REPO_ROOT/journal/supervisor}/$(date -u +%Y-%m-%d).md
$OPS_REPO_ROOT/journal/supervisor/$(date -u +%Y-%m-%d).md
If the file already exists (multiple runs per day), append a new section.
If it does not exist, create it.
@ -259,24 +241,7 @@ run-to-run context so future supervisor runs can detect trends
IMPORTANT: Do NOT commit or push the journal; it is a local working file.
The journal directory is committed to git periodically by other agents.
Note: In degraded mode (no ops repo), the journal is written locally to
${OPS_JOURNAL_ROOT:-<unset>} and is NOT automatically committed to any repo.
## Learning
If you discover something new during this run:
- In full mode (ops repo available): append to the relevant knowledge file:
echo "### Lesson title
Description of what you learned." >> "${OPS_REPO_ROOT}/knowledge/<file>.md"
- In degraded mode: write to the local knowledge directory for reference:
echo "### Lesson title
Description of what you learned." >> "${OPS_KNOWLEDGE_ROOT:-<unset>}/<file>.md"
Knowledge files: memory.md, disk.md, ci.md, forge.md, dev-agent.md,
review-agent.md, git.md.
After writing the journal, the agent session completes automatically.
After writing the journal, write the phase signal:
echo 'PHASE:done' > "$PHASE_FILE"
"""
needs = ["report"]

View file

@ -1,267 +0,0 @@
# formulas/triage.toml — Triage-agent formula (generic template)
#
# This is the base template for triage investigations.
# Project-specific formulas (e.g. formulas/triage-harb.toml) extend this by
# overriding the fields in the [project] section and providing stack-specific
# step descriptions.
#
# Triggered by: bug-report + in-triage label combination.
# Set by the reproduce-agent when:
# - Bug was confirmed (reproduced)
# - Quick log analysis did not reveal an obvious root cause
# - Reproduce-agent documented all steps taken and logs examined
#
# Steps:
# 1. read-findings — parse issue comments for prior reproduce-agent evidence
# 2. trace-data-flow — follow symptom through UI → API → backend → data store
# 3. instrumentation — throwaway branch, add logging, restart, observe
# 4. decompose — file backlog issues for each root cause
# 5. link-back — update original issue, swap in-triage → in-progress
# 6. cleanup — delete throwaway debug branch
#
# Best practices:
# - Start from reproduce-agent findings; do not repeat their work
# - Budget: 70% tracing data flow, 30% instrumented re-runs
# - Multiple causes: check if layered (Depends-on) or independent (Related)
# - Always delete the throwaway debug branch before finishing
# - If inconclusive after full turn budget: leave in-triage, post what was
# tried, do NOT relabel — supervisor handles stale triage sessions
#
# Project-specific formulas extend this template by defining:
# - stack_script: how to start/stop the project stack
# - [project].data_flow: layer names (e.g. "chain → indexer → GraphQL → UI")
# - [project].api_endpoints: which APIs/services to inspect
# - [project].stack_lock: stack lock configuration
# - Per-step description overrides with project-specific commands
#
# No hard timeout — runs until Claude hits its turn limit.
# Stack lock held for full run (triage is rare; blocking CI is acceptable).
name = "triage"
description = "Deep root cause analysis: trace data flow, add debug instrumentation, decompose causes into backlog issues."
version = 2
# Set stack_script to the restart command for local stacks.
# Leave empty ("") to connect to an existing staging environment.
stack_script = ""
tools = ["playwright"]
# ---------------------------------------------------------------------------
# Project-specific extension fields.
# Override these in formulas/triage-<project>.toml.
# ---------------------------------------------------------------------------
[project]
# Human-readable layer names for the data-flow trace (generic default).
# Example project override: "chain → indexer → GraphQL → UI"
data_flow = "UI → API → backend → data store"
# Comma-separated list of API endpoints or services to inspect.
# Example: "GraphQL /graphql, REST /api/v1, RPC ws://localhost:8545"
api_endpoints = ""
# Stack lock configuration (leave empty for default behavior).
# Example: "full" to hold a full stack lock during triage.
stack_lock = ""
# ---------------------------------------------------------------------------
# Steps
# ---------------------------------------------------------------------------
[[steps]]
id = "read-findings"
title = "Read reproduce-agent findings"
description = """
Before doing anything else, parse all prior evidence from the issue comments.
1. Fetch the issue body and all comments:
curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues/${ISSUE_NUMBER}" | jq -r '.body'
curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues/${ISSUE_NUMBER}/comments" | jq -r '.[].body'
2. Identify the reproduce-agent comment (look for sections like
"Reproduction steps", "Logs examined", "What was tried").
3. Extract and note:
- The exact symptom (error message, unexpected value, visual regression)
- Steps that reliably trigger the bug
- Log lines or API responses already captured
- Any hypotheses the reproduce-agent already ruled out
Do NOT repeat work the reproduce-agent already did. Your job starts where
theirs ended. If no reproduce-agent comment is found, note it and proceed
with fresh investigation using the issue body only.
"""
[[steps]]
id = "trace-data-flow"
title = "Trace data flow from symptom to source"
description = """
Systematically follow the symptom backwards through each layer of the stack.
Spend ~70% of your total turn budget here before moving to instrumentation.
Generic layer traversal (adapt to the project's actual stack):
UI → API → backend → data store
For each layer boundary:
1. What does the upstream layer send?
2. What does the downstream layer expect?
3. Is there a mismatch? If yes: is this the root cause or a symptom?
Tracing checklist:
a. Start at the layer closest to the visible symptom.
b. Read the relevant source files; do not guess data shapes.
c. Cross-reference API contracts: compare what the code sends vs what it
should send according to schemas, type definitions, or documentation.
d. Check recent git history on suspicious files:
git log --oneline -20 -- <file>
e. Search for related issues or TODOs in the code:
grep -r "TODO\|FIXME\|HACK" -- <relevant directory>
Capture for each layer:
- The data shape flowing in and out (field names, types, nullability)
- Whether the layer's behavior matches its documented contract
- Any discrepancy found
If a clear root cause becomes obvious during tracing, note it and continue
checking whether additional causes exist downstream.
"""
needs = ["read-findings"]
[[steps]]
id = "instrumentation"
title = "Add debug instrumentation on a throwaway branch"
description = """
Use ~30% of your total turn budget here. Only instrument after tracing has
identified the most likely failure points; do not instrument blindly.
1. Create a throwaway debug branch (NEVER commit this to main):
cd "$PROJECT_REPO_ROOT"
git checkout -b debug/triage-${ISSUE_NUMBER}
2. Add targeted logging at the layer boundaries identified during tracing:
- Console.log / structured log statements around the suspicious code path
- Log the actual values flowing through: inputs, outputs, intermediate state
- Add verbose mode flags if the stack supports them
- Keep instrumentation minimal: only what confirms or refutes the hypothesis
3. Restart the stack using the configured script (if set):
${stack_script:-"# No stack_script configured — restart manually or connect to staging"}
4. Re-run the reproduction steps from the reproduce-agent findings.
5. Observe and capture new output:
- Paste relevant log lines into your working notes
- Note whether the observed values match or contradict the hypothesis
6. If the first instrumentation pass is inconclusive, iterate:
- Narrow the scope to the next most suspicious boundary
- Re-instrument, restart, re-run
- Maximum 2-3 instrumentation rounds before declaring inconclusive
Do NOT push the debug branch. It will be deleted in the cleanup step.
"""
needs = ["trace-data-flow"]
[[steps]]
id = "decompose"
title = "Decompose root causes into backlog issues"
description = """
After tracing and instrumentation, articulate each distinct root cause.
For each root cause found:
1. Determine the relationship to other causes:
- Layered (one causes another) use Depends-on in the issue body
- Independent (separate code paths fail independently) use Related
2. Create a backlog issue for each root cause:
curl -sf -X POST "${FORGE_API}/issues" \\
-H "Authorization: token ${FORGE_TOKEN}" \\
-H "Content-Type: application/json" \\
-d '{
"title": "fix: <specific description of root cause N>",
"body": "## Root cause\\n<exact code path, file:line>\\n\\n## Fix suggestion\\n<recommended approach>\\n\\n## Context\\nDecomposed from #${ISSUE_NUMBER} (cause N of M)\\n\\n## Dependencies\\n<#X if this depends on another cause being fixed first>",
"labels": [{"name": "backlog"}]
}'
3. Note the newly created issue numbers.
If only one root cause is found, still create a single backlog issue with
the specific code location and fix suggestion.
If the investigation is inconclusive (no clear root cause found), skip this
step and proceed directly to link-back with the inconclusive outcome.
"""
needs = ["instrumentation"]
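The create call in the decompose step discards the API response, but step 3 needs the new issue numbers so later causes can reference them. One way to capture them, sketched here with a fixture standing in for the forge's reply (the `.number` field is an assumption based on the issue API used elsewhere in this formula):

```shell
# Sketch: extract the new issue number from a create-issue response so it can
# be cited in later causes ("depends on #X"). The fixture stands in for the
# real reply captured with: resp=$(curl -sf -X POST "${FORGE_API}/issues" ...)
resp='{"number": 42, "title": "fix: null user id dropped at API boundary"}'
new_issue=$(printf '%s' "$resp" | jq -r '.number // empty')
echo "created backlog issue #${new_issue}"
```

With the number in hand, subsequent `-d` payloads can interpolate `#${new_issue}` into their Dependencies section.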
[[steps]]
id = "link-back"
title = "Update original issue and relabel"
description = """
Post a summary comment on the original issue and update its labels.
### If root causes were found (conclusive):
Post a comment:
"## Triage findings
Found N root cause(s):
- #X — <one-line description> (cause 1 of N)
- #Y — <one-line description> (cause 2 of N, depends on #X)
Data flow traced: <layer where the bug originates>
Instrumentation: <key log output that confirmed the cause>
Next step: backlog issues above will be implemented in dependency order."
Then swap labels:
- Remove: in-triage
- Add: in-progress
### If investigation was inconclusive (turn budget exhausted):
Post a comment:
"## Triage — inconclusive
Traced: <layers checked>
Tried: <instrumentation attempts and what they showed>
Hypothesis: <best guess at cause, if any>
No definitive root cause identified. Leaving in-triage for supervisor
to handle as a stale triage session."
Do NOT relabel. Leave in-triage. The supervisor monitors stale triage
sessions and will escalate or reassign.
**CRITICAL: Write the outcome file.** Always record the run's outcome:
- If root causes found (conclusive): echo "reproduced" > /tmp/triage-outcome-${ISSUE_NUMBER}.txt
- If inconclusive: echo "needs-triage" > /tmp/triage-outcome-${ISSUE_NUMBER}.txt
"""
needs = ["decompose"]
[[steps]]
id = "cleanup"
title = "Delete throwaway debug branch"
description = """
Always delete the debug branch, even if the investigation was inconclusive.
1. Switch back to the main branch:
cd "$PROJECT_REPO_ROOT"
git checkout "$PRIMARY_BRANCH"
2. Delete the local debug branch:
git branch -D debug/triage-${ISSUE_NUMBER}
3. Confirm no remote was pushed (if accidentally pushed, delete it too):
git push origin --delete debug/triage-${ISSUE_NUMBER} 2>/dev/null || true
4. Verify the worktree is clean:
git status
git worktree list
A clean repo is a prerequisite for the next dev-agent run. Never leave
debug branches behind; they accumulate and pollute the branch list.
"""
needs = ["link-back"]


@@ -1,4 +1,4 @@
<!-- last-reviewed: 18190874cae869527f675f717423ded735f2c555 -->
<!-- last-reviewed: 9b91c6a5bdd83513ca262dd468ea14db675971c1 -->
# Gardener Agent
**Role**: Backlog grooming — detect duplicate issues, missing acceptance
@@ -7,38 +7,34 @@ the quality gate: strips the `backlog` label from issues that lack acceptance
criteria checkboxes (`- [ ]`) or an `## Affected files` section. Invokes
Claude to fix what it can; files vault items for what it cannot.
**Trigger**: `gardener-run.sh` is invoked by the polling loop in `docker/agents/entrypoint.sh`
every 6 hours (iteration math at line 182-194). Sources `lib/guard.sh` and calls
`check_active gardener` first — skips if `$FACTORY_ROOT/state/.gardener-active` is absent.
**Early-exit optimization**: if no issues, PRs, or repo files have changed since the last
run (checked via Forgejo API and `git diff`), the model is not invoked — the run exits
immediately (no tmux session, no tokens consumed). Otherwise, creates a tmux session with
`claude --model sonnet`, injects `formulas/run-gardener.toml` as context, monitors the
phase file, and cleans up on completion or timeout (2h max session). No action issues —
the gardener runs as part of the polling loop alongside the planner, predictor, and supervisor.
**Trigger**: `gardener-run.sh` runs 4x/day via cron. Sources `lib/guard.sh` and
calls `check_active gardener` first — skips if `$FACTORY_ROOT/state/.gardener-active`
is absent. Then creates a tmux session with `claude --model sonnet`, injects
`formulas/run-gardener.toml` as context, monitors the phase file, and cleans up
on completion or timeout (2h max session). No action issues — the gardener runs
directly from cron like the planner, predictor, and supervisor.
**Key files**:
- `gardener/gardener-run.sh` — Polling loop participant + orchestrator: lock, memory guard,
- `gardener/gardener-run.sh` — Cron wrapper + orchestrator: lock, memory guard,
sources disinto project config, creates tmux session, injects formula prompt,
monitors phase file via custom `_gardener_on_phase_change` callback (passed to
`run_formula_and_monitor`). Stays alive through CI/review/merge cycle after
`PHASE:awaiting_ci` — injects CI results and review feedback, re-signals
`PHASE:awaiting_ci` after fixes, signals `PHASE:awaiting_review` on CI pass.
Executes pending-actions manifest after PR merge.
- `formulas/run-gardener.toml` — Execution spec: preflight, grooming, dust-bundling,
agents-update, commit-and-pr
- `formulas/run-gardener.toml` — Execution spec: preflight, grooming, dust-bundling, blocked-review, agents-update, commit-and-pr
- `gardener/pending-actions.json` — Manifest of deferred repo actions (label changes,
closures, comments, issue creation). Written during grooming steps, committed to the
PR, reviewed alongside AGENTS.md changes, executed by gardener-run.sh after merge.
**Environment variables consumed**:
- `FORGE_TOKEN`, `FORGE_GARDENER_TOKEN` (falls back to FORGE_TOKEN), `FORGE_REPO`, `FORGE_API`, `PROJECT_NAME`, `PROJECT_REPO_ROOT`. `FORGE_TOKEN_OVERRIDE` is exported to `$FORGE_GARDENER_TOKEN` before sourcing env.sh so the gardener-bot identity survives re-sourcing (#762).
- `FORGE_TOKEN`, `FORGE_GARDENER_TOKEN` (falls back to FORGE_TOKEN), `FORGE_REPO`, `FORGE_API`, `PROJECT_NAME`, `PROJECT_REPO_ROOT`
- `PRIMARY_BRANCH`, `CLAUDE_MODEL` (set to sonnet by gardener-run.sh)
**Lifecycle**: gardener-run.sh (invoked by polling loop every 6h, `check_active gardener`)
→ lock + memory guard → load formula + context → create tmux session →
**Lifecycle**: gardener-run.sh (cron 0,6,12,18) → `check_active gardener` → lock + memory guard
→ load formula + context → create tmux session →
Claude grooms backlog (writes proposed actions to manifest), bundles dust,
updates AGENTS.md, commits manifest + docs to PR →
reviews blocked issues, updates AGENTS.md, commits manifest + docs to PR →
`PHASE:awaiting_ci` (stays alive) → CI pass → `PHASE:awaiting_review`
→ review feedback → address + re-signal → merge → gardener-run.sh executes
manifest actions via API → `PHASE:done`. When blocked on external resources

gardener/PROMPT.md Normal file

@@ -0,0 +1,50 @@
# Gardener Prompt — Dust vs Ore
> **Note:** This is human documentation. The actual LLM prompt is built
> inline in `gardener-poll.sh` (with dynamic context injection). This file
> documents the design rationale for reference.
## Rule
Don't promote trivial tech-debt individually. Each promotion costs a full
factory cycle: CI + dev-agent + review + merge. Don't fill minecarts with
dust — put ore inside.
## What is dust?
- Comment fix
- Variable rename
- Style-only change (whitespace, formatting)
- Single-line edit
- Trivial cleanup with no behavioral impact
## What is ore?
- Multi-file changes
- Behavioral fixes
- Architectural improvements
- Security or correctness issues
- Anything requiring design thought
## LLM output format
When a tech-debt issue is dust, the LLM outputs:
```
DUST: {"issue": NNN, "group": "<file-or-subsystem>", "title": "...", "reason": "..."}
```
The `group` field clusters related dust by file or subsystem (e.g.
`"gardener"`, `"lib/env.sh"`, `"dev-poll"`).
## Bundling
The script collects dust items into `gardener/dust.jsonl`. When a group
accumulates 3+ items, the script automatically:
1. Creates one bundled backlog issue referencing all source issues
2. Closes the individual source issues with a cross-reference comment
3. Removes bundled items from the staging file
This converts N trivial issues into 1 actionable issue, saving N-1 factory
cycles.
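The 3+ threshold check can be sketched with jq over the staging file. This is illustrative only; the actual grouping logic lives in the script, and the sample records follow the DUST line format shown above:

```shell
# Sketch: list dust groups that have reached the 3+ bundling threshold.
# Each line of dust.jsonl is one DUST record, as described above.
printf '%s\n' \
  '{"issue":101,"group":"gardener","title":"fix comment"}' \
  '{"issue":102,"group":"gardener","title":"rename var"}' \
  '{"issue":103,"group":"gardener","title":"whitespace"}' \
  '{"issue":104,"group":"lib/env.sh","title":"typo"}' > /tmp/dust.jsonl

# Slurp the JSONL, bucket by group, keep buckets of 3+.
jq -rs 'group_by(.group) | map(select(length >= 3))
        | .[] | "\(.[0].group): \(length) items ready to bundle"' /tmp/dust.jsonl
```

Here only the `gardener` group crosses the threshold, so one bundled issue would replace three source issues.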


@@ -51,4 +51,3 @@ Compact, decision-ready. Human should be able to reply "1a 2c 3b" and be done.
- Dev-agent doesn't understand the product — clear acceptance criteria save 2-3 CI cycles
- Feature issues MUST list affected e2e test files
- Issue templates from ISSUE-TEMPLATES.md propagate via triage gate
- **AD-002 is a runtime invariant; nothing for the gardener to check at issue-groom time.** Concurrency is enforced by `flock session.lock` within each container and by `issue_claim` for per-issue work. A violation manifests as a 401 or VRAM OOM in agent logs, not as a malformed issue.


@@ -1,23 +1,15 @@
#!/usr/bin/env bash
# =============================================================================
# gardener-run.sh — Polling-loop wrapper: gardener execution via SDK + formula
# gardener-run.sh — Cron wrapper: gardener execution via Claude + formula
#
# Synchronous bash loop using claude -p (one-shot invocation).
# No tmux sessions, no phase files — the bash script IS the state machine.
#
# Flow:
# 1. Guards: run lock, memory check
# 2. Load formula (formulas/run-gardener.toml)
# 3. Build context: AGENTS.md, scratch file, prompt footer
# 4. agent_run(worktree, prompt) → Claude does maintenance, pushes if needed
# 5. If pushed: pr_walk_to_merge() from lib/pr-lifecycle.sh
# 6. Post-merge: execute pending actions manifest (gardener/pending-actions.json)
# 7. Mirror push
# Runs 4x/day (or on-demand). Guards against concurrent runs and low memory.
# Creates a tmux session with Claude (sonnet) reading formulas/run-gardener.toml.
# No action issues — the gardener is a nervous system component, not work (AD-001).
#
# Usage:
# gardener-run.sh [projects/disinto.toml] # project config (default: disinto)
#
# Called by: entrypoint.sh polling loop (every 6 hours)
# Cron: 0 0,6,12,18 * * * cd /home/debian/dark-factory && bash gardener/gardener-run.sh projects/disinto.toml
# =============================================================================
set -euo pipefail
@@ -26,87 +18,59 @@ FACTORY_ROOT="$(dirname "$SCRIPT_DIR")"
# Accept project config from argument; default to disinto
export PROJECT_TOML="${1:-$FACTORY_ROOT/projects/disinto.toml}"
# Set override BEFORE sourcing env.sh so it survives any later re-source of
# env.sh from nested shells / claude -p tools (#762, #747)
export FORGE_TOKEN_OVERRIDE="${FORGE_GARDENER_TOKEN:-}"
# shellcheck source=../lib/env.sh
source "$FACTORY_ROOT/lib/env.sh"
# Use gardener-bot's own Forgejo identity (#747)
FORGE_TOKEN="${FORGE_GARDENER_TOKEN:-${FORGE_TOKEN}}"
# shellcheck source=../lib/agent-session.sh
source "$FACTORY_ROOT/lib/agent-session.sh"
# shellcheck source=../lib/formula-session.sh
source "$FACTORY_ROOT/lib/formula-session.sh"
# shellcheck source=../lib/worktree.sh
source "$FACTORY_ROOT/lib/worktree.sh"
# shellcheck source=../lib/ci-helpers.sh
source "$FACTORY_ROOT/lib/ci-helpers.sh"
# shellcheck source=../lib/mirrors.sh
source "$FACTORY_ROOT/lib/mirrors.sh"
# shellcheck source=../lib/guard.sh
source "$FACTORY_ROOT/lib/guard.sh"
# shellcheck source=../lib/agent-sdk.sh
source "$FACTORY_ROOT/lib/agent-sdk.sh"
# shellcheck source=../lib/pr-lifecycle.sh
source "$FACTORY_ROOT/lib/pr-lifecycle.sh"
LOG_FILE="${DISINTO_LOG_DIR}/gardener/gardener.log"
# shellcheck disable=SC2034 # consumed by agent-sdk.sh
LOGFILE="$LOG_FILE"
# shellcheck disable=SC2034 # consumed by agent-sdk.sh
SID_FILE="/tmp/gardener-session-${PROJECT_NAME}.sid"
LOG_FILE="$SCRIPT_DIR/gardener.log"
# shellcheck disable=SC2034 # consumed by run_formula_and_monitor
SESSION_NAME="gardener-${PROJECT_NAME}"
PHASE_FILE="/tmp/gardener-session-${PROJECT_NAME}.phase"
# shellcheck disable=SC2034 # read by monitor_phase_loop in lib/agent-session.sh
PHASE_POLL_INTERVAL=15
SCRATCH_FILE="/tmp/gardener-${PROJECT_NAME}-scratch.md"
RESULT_FILE="/tmp/gardener-result-${PROJECT_NAME}.txt"
GARDENER_PR_FILE="/tmp/gardener-pr-${PROJECT_NAME}.txt"
WORKTREE="/tmp/${PROJECT_NAME}-gardener-run"
LAST_SHA_FILE="${DISINTO_DATA_DIR}/gardener-last-sha.txt"
# Override LOG_AGENT for consistent agent identification
# shellcheck disable=SC2034 # consumed by agent-sdk.sh and env.sh log()
LOG_AGENT="gardener"
# Merge-through state (used by _gardener_on_phase_change callback)
_GARDENER_PR=""
_GARDENER_MERGE_START=0
_GARDENER_MERGE_TIMEOUT=1800 # 30 min
_GARDENER_CI_FIX_COUNT=0
_GARDENER_REVIEW_ROUND=0
_GARDENER_CRASH_COUNT=0
log() { echo "[$(date -u +%Y-%m-%dT%H:%M:%S)Z] $*" >> "$LOG_FILE"; }
# ── Guards ────────────────────────────────────────────────────────────────
check_active gardener
acquire_run_lock "/tmp/gardener-run.lock"
memory_guard 2000
acquire_cron_lock "/tmp/gardener-run.lock"
check_memory 2000
log "--- Gardener run start ---"
# ── Resolve forge remote for git operations ─────────────────────────────
# Run git operations from the project checkout, not the baked code dir
cd "$PROJECT_REPO_ROOT"
resolve_forge_remote
# ── Precondition checks: skip if nothing to do ────────────────────────────
# Check for new commits since last run
CURRENT_SHA=$(git -C "$FACTORY_ROOT" rev-parse HEAD 2>/dev/null || echo "")
LAST_SHA=$(cat "$LAST_SHA_FILE" 2>/dev/null || echo "")
# Check for open issues needing grooming
backlog_count=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues?labels=backlog&state=open&limit=1" 2>/dev/null | jq length) || backlog_count=0
tech_debt_count=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues?labels=tech-debt&state=open&limit=1" 2>/dev/null | jq length) || tech_debt_count=0
if [ "$CURRENT_SHA" = "$LAST_SHA" ] && [ "${backlog_count:-0}" -eq 0 ] && [ "${tech_debt_count:-0}" -eq 0 ]; then
log "no new commits and no issues to groom — skipping"
exit 0
fi
log "current sha: ${CURRENT_SHA:0:8}..., backlog issues: ${backlog_count}, tech-debt issues: ${tech_debt_count}"
# ── Resolve agent identity for .profile repo ────────────────────────────
resolve_agent_identity || true
# ── Load formula + context ───────────────────────────────────────────────
load_formula_or_profile "gardener" "$FACTORY_ROOT/formulas/run-gardener.toml" || exit 1
load_formula "$FACTORY_ROOT/formulas/run-gardener.toml"
build_context_block AGENTS.md
# ── Prepare .profile context (lessons injection) ─────────────────────────
formula_prepare_profile_context
# ── Read scratch file (compaction survival) ───────────────────────────────
SCRATCH_CONTEXT=$(read_scratch_context "$SCRATCH_FILE")
SCRATCH_INSTRUCTION=$(build_scratch_instruction "$SCRATCH_FILE")
# ── Build prompt ─────────────────────────────────────────────────────────
# ── Build prompt (manifest format reference for deferred actions) ─────────
GARDENER_API_EXTRA="
## Pending-actions manifest (REQUIRED)
@@ -125,21 +89,34 @@ Supported actions:
The commit-and-pr step converts JSONL to JSON array. The orchestrator executes
actions after the PR merges. Do NOT call mutation APIs directly during the run."
build_prompt_footer "$GARDENER_API_EXTRA"
build_sdk_prompt_footer "$GARDENER_API_EXTRA"
PROMPT_FOOTER="${PROMPT_FOOTER}## Completion protocol (REQUIRED)
When the commit-and-pr step creates a PR, write the PR number and stop:
# Extend phase protocol with merge-through instructions for compaction survival
PROMPT_FOOTER="${PROMPT_FOOTER}
## Merge-through protocol (commit-and-pr step)
After creating the PR, write the PR number and signal CI:
echo \"\$PR_NUMBER\" > '${GARDENER_PR_FILE}'
Then STOP. Do NOT write PHASE: signals — the orchestrator handles CI, review, and merge.
If no file changes exist (empty commit-and-pr), just stop — no PR needed."
echo 'PHASE:awaiting_ci' > '${PHASE_FILE}'
Then STOP and WAIT for CI results.
When 'CI passed' is injected:
echo 'PHASE:awaiting_review' > '${PHASE_FILE}'
Then STOP and WAIT.
When 'CI failed' is injected:
Fix, commit, push, then: echo 'PHASE:awaiting_ci' > '${PHASE_FILE}'
When review feedback is injected:
Address all feedback, commit, push, then: echo 'PHASE:awaiting_ci' > '${PHASE_FILE}'
If no file changes in commit-and-pr:
echo 'PHASE:done' > '${PHASE_FILE}'"
PROMPT="You are the issue gardener for ${FORGE_REPO}. Work through the formula below.
# shellcheck disable=SC2034 # consumed by run_formula_and_monitor
PROMPT="You are the issue gardener for ${FORGE_REPO}. Work through the formula below. Follow the phase protocol: if the commit-and-pr step creates a PR, write PHASE:awaiting_ci and wait for orchestrator CI/review/merge handling. If no file changes, write PHASE:done. The orchestrator will time you out if you return to the prompt without signalling.
You have full shell access and --dangerously-skip-permissions.
Fix what you can. File vault items for what you cannot. Do NOT ask permission — act first, report after.
## Project context
${CONTEXT_BLOCK}$(formula_lessons_block)
${CONTEXT_BLOCK}
${SCRATCH_CONTEXT:+${SCRATCH_CONTEXT}
}
## Result file
@@ -151,12 +128,14 @@ ${FORMULA_CONTENT}
${SCRATCH_INSTRUCTION}
${PROMPT_FOOTER}"
# ── Create worktree ──────────────────────────────────────────────────────
formula_worktree_setup "$WORKTREE"
# ── Phase callback for merge-through ─────────────────────────────────────
# Handles CI polling, review injection, merge, and cleanup after PR creation.
# Lighter than dev/phase-handler.sh — tailored for gardener doc-only PRs.
# ── Post-merge manifest execution ────────────────────────────────────────
# ── Post-merge manifest execution ─────────────────────────────────────
# Reads gardener/pending-actions.json and executes each action via API.
# Failed actions are logged but do not block completion.
# shellcheck disable=SC2317 # called indirectly via _gardener_merge
_gardener_execute_manifest() {
local manifest_file="$PROJECT_REPO_ROOT/gardener/pending-actions.json"
if [ ! -f "$manifest_file" ]; then
@@ -181,21 +160,19 @@ _gardener_execute_manifest() {
case "$action" in
add_label)
local label label_id http_code resp
local label label_id
label=$(jq -r ".[$i].label" "$manifest_file")
label_id=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/labels" | jq -r --arg n "$label" \
'.[] | select(.name == $n) | .id') || true
if [ -n "$label_id" ]; then
resp=$(curl -sf -w "\n%{http_code}" -X POST -H "Authorization: token ${FORGE_TOKEN}" \
if curl -sf -X POST -H "Authorization: token ${FORGE_TOKEN}" \
-H 'Content-Type: application/json' \
"${FORGE_API}/issues/${issue}/labels" \
-d "{\"labels\":[${label_id}]}" 2>/dev/null) || true
http_code=$(echo "$resp" | tail -1)
if [ "$http_code" = "200" ] || [ "$http_code" = "201" ]; then
-d "{\"labels\":[${label_id}]}" >/dev/null 2>&1; then
log "manifest: add_label '${label}' to #${issue}"
else
log "manifest: FAILED add_label '${label}' to #${issue}: HTTP ${http_code}"
log "manifest: FAILED add_label '${label}' to #${issue}"
fi
else
log "manifest: FAILED add_label — label '${label}' not found"
@@ -203,19 +180,17 @@
;;
remove_label)
local label label_id http_code resp
local label label_id
label=$(jq -r ".[$i].label" "$manifest_file")
label_id=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/labels" | jq -r --arg n "$label" \
'.[] | select(.name == $n) | .id') || true
if [ -n "$label_id" ]; then
resp=$(curl -sf -w "\n%{http_code}" -X DELETE -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues/${issue}/labels/${label_id}" 2>/dev/null) || true
http_code=$(echo "$resp" | tail -1)
if [ "$http_code" = "200" ] || [ "$http_code" = "204" ]; then
if curl -sf -X DELETE -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/issues/${issue}/labels/${label_id}" >/dev/null 2>&1; then
log "manifest: remove_label '${label}' from #${issue}"
else
log "manifest: FAILED remove_label '${label}' from #${issue}: HTTP ${http_code}"
log "manifest: FAILED remove_label '${label}' from #${issue}"
fi
else
log "manifest: FAILED remove_label — label '${label}' not found"
@@ -223,38 +198,34 @@
;;
close)
local reason http_code resp
local reason
reason=$(jq -r ".[$i].reason // empty" "$manifest_file")
resp=$(curl -sf -w "\n%{http_code}" -X PATCH -H "Authorization: token ${FORGE_TOKEN}" \
if curl -sf -X PATCH -H "Authorization: token ${FORGE_TOKEN}" \
-H 'Content-Type: application/json' \
"${FORGE_API}/issues/${issue}" \
-d '{"state":"closed"}' 2>/dev/null) || true
http_code=$(echo "$resp" | tail -1)
if [ "$http_code" = "200" ] || [ "$http_code" = "204" ]; then
-d '{"state":"closed"}' >/dev/null 2>&1; then
log "manifest: closed #${issue} (${reason})"
else
log "manifest: FAILED close #${issue}: HTTP ${http_code}"
log "manifest: FAILED close #${issue}"
fi
;;
comment)
local body escaped_body http_code resp
local body escaped_body
body=$(jq -r ".[$i].body" "$manifest_file")
escaped_body=$(printf '%s' "$body" | jq -Rs '.')
resp=$(curl -sf -w "\n%{http_code}" -X POST -H "Authorization: token ${FORGE_TOKEN}" \
if curl -sf -X POST -H "Authorization: token ${FORGE_TOKEN}" \
-H 'Content-Type: application/json' \
"${FORGE_API}/issues/${issue}/comments" \
-d "{\"body\":${escaped_body}}" 2>/dev/null) || true
http_code=$(echo "$resp" | tail -1)
if [ "$http_code" = "200" ] || [ "$http_code" = "201" ]; then
-d "{\"body\":${escaped_body}}" >/dev/null 2>&1; then
log "manifest: commented on #${issue}"
else
log "manifest: FAILED comment on #${issue}: HTTP ${http_code}"
log "manifest: FAILED comment on #${issue}"
fi
;;
create_issue)
local title body labels escaped_title escaped_body label_ids http_code resp
local title body labels escaped_title escaped_body label_ids
title=$(jq -r ".[$i].title" "$manifest_file")
body=$(jq -r ".[$i].body" "$manifest_file")
labels=$(jq -r ".[$i].labels // [] | .[]" "$manifest_file")
@@ -274,46 +245,40 @@ done <<< "$labels"
done <<< "$labels"
[ -n "$ids_json" ] && label_ids="[${ids_json}]"
fi
resp=$(curl -sf -w "\n%{http_code}" -X POST -H "Authorization: token ${FORGE_TOKEN}" \
if curl -sf -X POST -H "Authorization: token ${FORGE_TOKEN}" \
-H 'Content-Type: application/json' \
"${FORGE_API}/issues" \
-d "{\"title\":${escaped_title},\"body\":${escaped_body},\"labels\":${label_ids}}" 2>/dev/null) || true
http_code=$(echo "$resp" | tail -1)
if [ "$http_code" = "200" ] || [ "$http_code" = "201" ]; then
-d "{\"title\":${escaped_title},\"body\":${escaped_body},\"labels\":${label_ids}}" >/dev/null 2>&1; then
log "manifest: created issue '${title}'"
else
log "manifest: FAILED create_issue '${title}': HTTP ${http_code}"
log "manifest: FAILED create_issue '${title}'"
fi
;;
edit_body)
local body escaped_body http_code resp
local body escaped_body
body=$(jq -r ".[$i].body" "$manifest_file")
escaped_body=$(printf '%s' "$body" | jq -Rs '.')
resp=$(curl -sf -w "\n%{http_code}" -X PATCH -H "Authorization: token ${FORGE_TOKEN}" \
if curl -sf -X PATCH -H "Authorization: token ${FORGE_TOKEN}" \
-H 'Content-Type: application/json' \
"${FORGE_API}/issues/${issue}" \
-d "{\"body\":${escaped_body}}" 2>/dev/null) || true
http_code=$(echo "$resp" | tail -1)
if [ "$http_code" = "200" ] || [ "$http_code" = "204" ]; then
-d "{\"body\":${escaped_body}}" >/dev/null 2>&1; then
log "manifest: edited body of #${issue}"
else
log "manifest: FAILED edit_body #${issue}: HTTP ${http_code}"
log "manifest: FAILED edit_body #${issue}"
fi
;;
close_pr)
local pr http_code resp
local pr
pr=$(jq -r ".[$i].pr" "$manifest_file")
resp=$(curl -sf -w "\n%{http_code}" -X PATCH -H "Authorization: token ${FORGE_TOKEN}" \
if curl -sf -X PATCH -H "Authorization: token ${FORGE_TOKEN}" \
-H 'Content-Type: application/json' \
"${FORGE_API}/pulls/${pr}" \
-d '{"state":"closed"}' 2>/dev/null) || true
http_code=$(echo "$resp" | tail -1)
if [ "$http_code" = "200" ] || [ "$http_code" = "204" ]; then
-d '{"state":"closed"}' >/dev/null 2>&1; then
log "manifest: closed PR #${pr}"
else
log "manifest: FAILED close_pr #${pr}: HTTP ${http_code}"
log "manifest: FAILED close_pr #${pr}"
fi
;;
@@ -328,57 +293,387 @@
log "manifest: execution complete (${count} actions processed)"
}
# ── Reset result file ────────────────────────────────────────────────────
rm -f "$RESULT_FILE" "$GARDENER_PR_FILE"
touch "$RESULT_FILE"
# shellcheck disable=SC2317 # called indirectly by monitor_phase_loop
_gardener_merge() {
local merge_response merge_http_code
merge_response=$(curl -s -w "\n%{http_code}" -X POST \
-H "Authorization: token ${FORGE_TOKEN}" \
-H 'Content-Type: application/json' \
"${FORGE_API}/pulls/${_GARDENER_PR}/merge" \
-d '{"Do":"merge","delete_branch_after_merge":true}') || true
merge_http_code=$(echo "$merge_response" | tail -1)
# ── Run agent ─────────────────────────────────────────────────────────────
export CLAUDE_MODEL="sonnet"
agent_run --worktree "$WORKTREE" "$PROMPT"
log "agent_run complete"
# ── Detect PR ─────────────────────────────────────────────────────────────
PR_NUMBER=""
if [ -f "$GARDENER_PR_FILE" ]; then
PR_NUMBER=$(tr -d '[:space:]' < "$GARDENER_PR_FILE")
fi
# Fallback: search for open gardener PRs
if [ -z "$PR_NUMBER" ]; then
PR_NUMBER=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/pulls?state=open&limit=10" | \
jq -r '[.[] | select(.head.ref | startswith("chore/gardener-"))] | .[0].number // empty') || true
fi
# ── Walk PR to merge ──────────────────────────────────────────────────────
if [ -n "$PR_NUMBER" ]; then
log "walking PR #${PR_NUMBER} to merge"
pr_walk_to_merge "$PR_NUMBER" "$_AGENT_SESSION_ID" "$WORKTREE" || true
if [ "$_PR_WALK_EXIT_REASON" = "merged" ]; then
# Post-merge: pull primary, mirror push, execute manifest
git -C "$PROJECT_REPO_ROOT" fetch "${FORGE_REMOTE}" "$PRIMARY_BRANCH" 2>/dev/null || true
if [ "$merge_http_code" = "200" ] || [ "$merge_http_code" = "204" ]; then
log "gardener PR #${_GARDENER_PR} merged"
# Pull merged primary branch and push to mirrors
git -C "$PROJECT_REPO_ROOT" fetch origin "$PRIMARY_BRANCH" 2>/dev/null || true
git -C "$PROJECT_REPO_ROOT" checkout "$PRIMARY_BRANCH" 2>/dev/null || true
git -C "$PROJECT_REPO_ROOT" pull --ff-only "${FORGE_REMOTE}" "$PRIMARY_BRANCH" 2>/dev/null || true
git -C "$PROJECT_REPO_ROOT" pull --ff-only origin "$PRIMARY_BRANCH" 2>/dev/null || true
mirror_push
_gardener_execute_manifest
rm -f "$SCRATCH_FILE"
log "gardener PR #${PR_NUMBER} merged — manifest executed"
else
log "PR #${PR_NUMBER} not merged (reason: ${_PR_WALK_EXIT_REASON:-unknown})"
printf 'PHASE:done\n' > "$PHASE_FILE"
return 0
fi
else
log "no PR created — gardener run complete"
# Already merged (race)?
if [ "$merge_http_code" = "405" ]; then
local pr_merged
pr_merged=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/pulls/${_GARDENER_PR}" | jq -r '.merged // false') || true
if [ "$pr_merged" = "true" ]; then
log "gardener PR #${_GARDENER_PR} already merged"
# Pull merged primary branch and push to mirrors
git -C "$PROJECT_REPO_ROOT" fetch origin "$PRIMARY_BRANCH" 2>/dev/null || true
git -C "$PROJECT_REPO_ROOT" checkout "$PRIMARY_BRANCH" 2>/dev/null || true
git -C "$PROJECT_REPO_ROOT" pull --ff-only origin "$PRIMARY_BRANCH" 2>/dev/null || true
mirror_push
_gardener_execute_manifest
printf 'PHASE:done\n' > "$PHASE_FILE"
return 0
fi
log "gardener merge blocked (HTTP 405)"
printf 'PHASE:failed\nReason: gardener PR #%s merge blocked (HTTP 405)\n' \
"$_GARDENER_PR" > "$PHASE_FILE"
return 0
fi
# Other failure (likely conflicts) — tell Claude to rebase
log "gardener merge failed (HTTP ${merge_http_code}) — requesting rebase"
agent_inject_into_session "${_MONITOR_SESSION:-$SESSION_NAME}" \
"Merge failed for PR #${_GARDENER_PR} (likely conflicts). Rebase and push:
git fetch origin ${PRIMARY_BRANCH} && git rebase origin/${PRIMARY_BRANCH}
git push --force-with-lease origin HEAD
echo \"PHASE:awaiting_ci\" > \"${PHASE_FILE}\"
If rebase fails, write PHASE:failed with a reason."
}
# shellcheck disable=SC2317 # called indirectly by monitor_phase_loop
_gardener_timeout_cleanup() {
log "gardener merge-through timed out (${_GARDENER_MERGE_TIMEOUT}s) — closing PR"
if [ -n "$_GARDENER_PR" ]; then
curl -sf -X PATCH \
-H "Authorization: token ${FORGE_TOKEN}" \
-H 'Content-Type: application/json' \
"${FORGE_API}/pulls/${_GARDENER_PR}" \
-d '{"state":"closed"}' >/dev/null 2>&1 || true
fi
printf 'PHASE:failed\nReason: merge-through timeout (%ss)\n' \
"$_GARDENER_MERGE_TIMEOUT" > "$PHASE_FILE"
}
# shellcheck disable=SC2317 # called indirectly by monitor_phase_loop
_gardener_handle_ci() {
# Start merge-through timer on first CI phase
if [ "$_GARDENER_MERGE_START" -eq 0 ]; then
_GARDENER_MERGE_START=$(date +%s)
fi
# Check merge-through timeout
local elapsed
elapsed=$(( $(date +%s) - _GARDENER_MERGE_START ))
if [ "$elapsed" -ge "$_GARDENER_MERGE_TIMEOUT" ]; then
_gardener_timeout_cleanup
return 0
fi
# Discover PR number if unknown
if [ -z "$_GARDENER_PR" ]; then
if [ -f "$GARDENER_PR_FILE" ]; then
_GARDENER_PR=$(tr -d '[:space:]' < "$GARDENER_PR_FILE")
fi
# Fallback: search for open gardener PRs
if [ -z "$_GARDENER_PR" ]; then
_GARDENER_PR=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/pulls?state=open&limit=10" | \
jq -r '[.[] | select(.head.ref | startswith("chore/gardener-"))] | .[0].number // empty') || true
fi
if [ -z "$_GARDENER_PR" ]; then
log "ERROR: cannot find gardener PR"
agent_inject_into_session "${_MONITOR_SESSION:-$SESSION_NAME}" \
"ERROR: Could not find the gardener PR. Verify branch was pushed and PR created. Write the PR number to ${GARDENER_PR_FILE}, then write PHASE:awaiting_ci again."
return 0
fi
log "tracking gardener PR #${_GARDENER_PR}"
fi
# Skip CI for doc-only PRs
if ! ci_required_for_pr "$_GARDENER_PR" 2>/dev/null; then
log "CI not required (doc-only) — treating as passed"
agent_inject_into_session "${_MONITOR_SESSION:-$SESSION_NAME}" \
"CI passed on PR #${_GARDENER_PR} (doc-only changes, CI not required).
Write PHASE:awaiting_review to the phase file, then stop and wait:
echo \"PHASE:awaiting_review\" > \"${PHASE_FILE}\""
return 0
fi
# No CI configured?
if [ "${WOODPECKER_REPO_ID:-2}" = "0" ]; then
log "no CI configured — treating as passed"
agent_inject_into_session "${_MONITOR_SESSION:-$SESSION_NAME}" \
"CI passed on PR #${_GARDENER_PR} (no CI configured).
Write PHASE:awaiting_review to the phase file, then stop and wait:
echo \"PHASE:awaiting_review\" > \"${PHASE_FILE}\""
return 0
fi
# Get HEAD SHA from PR
local head_sha
head_sha=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/pulls/${_GARDENER_PR}" | jq -r '.head.sha // empty') || true
if [ -z "$head_sha" ]; then
log "WARNING: could not get HEAD SHA for PR #${_GARDENER_PR}"
agent_inject_into_session "${_MONITOR_SESSION:-$SESSION_NAME}" \
"WARNING: Could not read HEAD SHA for PR #${_GARDENER_PR}. Verify push succeeded. Then write PHASE:awaiting_ci again."
return 0
fi
# Poll CI (15 min max within this phase)
local ci_done=false ci_state="unknown" ci_elapsed=0 ci_timeout=900
while [ "$ci_elapsed" -lt "$ci_timeout" ]; do
sleep 30
ci_elapsed=$((ci_elapsed + 30))
# Session health check
if [ -f "/tmp/claude-exited-${_MONITOR_SESSION:-$SESSION_NAME}.ts" ] || \
! tmux has-session -t "${_MONITOR_SESSION:-$SESSION_NAME}" 2>/dev/null; then
log "session died during CI wait"
return 0
fi
# Merge-through timeout check
elapsed=$(( $(date +%s) - _GARDENER_MERGE_START ))
if [ "$elapsed" -ge "$_GARDENER_MERGE_TIMEOUT" ]; then
_gardener_timeout_cleanup
return 0
fi
# Re-fetch HEAD in case Claude pushed new commits
head_sha=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/pulls/${_GARDENER_PR}" | jq -r '.head.sha // empty') || true
ci_state=$(ci_commit_status "$head_sha") || ci_state="unknown"
case "$ci_state" in
success|failure|error) ci_done=true; break ;;
esac
done
if ! $ci_done; then
log "CI timeout for PR #${_GARDENER_PR}"
agent_inject_into_session "${_MONITOR_SESSION:-$SESSION_NAME}" \
"CI TIMEOUT: CI did not complete within 15 minutes for PR #${_GARDENER_PR}. Write PHASE:failed with a reason if you cannot proceed."
return 0
fi
log "CI: ${ci_state} for PR #${_GARDENER_PR}"
if [ "$ci_state" = "success" ]; then
_GARDENER_CI_FIX_COUNT=0
agent_inject_into_session "${_MONITOR_SESSION:-$SESSION_NAME}" \
"CI passed on PR #${_GARDENER_PR}.
Write PHASE:awaiting_review to the phase file, then stop and wait:
echo \"PHASE:awaiting_review\" > \"${PHASE_FILE}\""
else
_GARDENER_CI_FIX_COUNT=$(( _GARDENER_CI_FIX_COUNT + 1 ))
if [ "$_GARDENER_CI_FIX_COUNT" -gt 3 ]; then
log "CI exhausted after ${_GARDENER_CI_FIX_COUNT} attempts"
printf 'PHASE:failed\nReason: gardener CI exhausted after %d attempts\n' \
"$_GARDENER_CI_FIX_COUNT" > "$PHASE_FILE"
return 0
fi
# Get error details
local pipeline_num ci_error_log
pipeline_num=$(ci_pipeline_number "$head_sha")
ci_error_log=""
if [ -n "$pipeline_num" ]; then
ci_error_log=$(bash "${FACTORY_ROOT}/lib/ci-debug.sh" failures "$pipeline_num" 2>/dev/null \
| tail -80 | head -c 8000 || true)
fi
agent_inject_into_session "${_MONITOR_SESSION:-$SESSION_NAME}" \
"CI failed on PR #${_GARDENER_PR} (attempt ${_GARDENER_CI_FIX_COUNT}/3).
${ci_error_log:+Error output:
${ci_error_log}
}Fix the issue, commit, push, then write:
echo \"PHASE:awaiting_ci\" > \"${PHASE_FILE}\"
Then stop and wait."
fi
}
# shellcheck disable=SC2317 # called indirectly by monitor_phase_loop
_gardener_handle_review() {
log "waiting for review on PR #${_GARDENER_PR:-?}"
_GARDENER_CI_FIX_COUNT=0 # Reset CI fix budget for next review cycle
local review_elapsed=0 review_timeout=1800
while [ "$review_elapsed" -lt "$review_timeout" ]; do
sleep 60 # 1 min between review checks (gardener PRs are fast-tracked)
review_elapsed=$((review_elapsed + 60))
# Session health check
if [ -f "/tmp/claude-exited-${_MONITOR_SESSION:-$SESSION_NAME}.ts" ] || \
! tmux has-session -t "${_MONITOR_SESSION:-$SESSION_NAME}" 2>/dev/null; then
log "session died during review wait"
return 0
fi
# Merge-through timeout check
local elapsed
elapsed=$(( $(date +%s) - _GARDENER_MERGE_START ))
if [ "$elapsed" -ge "$_GARDENER_MERGE_TIMEOUT" ]; then
_gardener_timeout_cleanup
return 0
fi
# Check if phase changed while we wait (e.g. review-poll injected feedback)
local new_mtime
new_mtime=$(stat -c %Y "$PHASE_FILE" 2>/dev/null || echo 0)
if [ "$new_mtime" -gt "${LAST_PHASE_MTIME:-0}" ]; then
log "phase changed during review wait — returning to monitor loop"
return 0
fi
# Check for review on current HEAD
local review_sha review_comment
review_sha=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/pulls/${_GARDENER_PR}" | jq -r '.head.sha // empty') || true
review_comment=$(forge_api_all "/issues/${_GARDENER_PR}/comments" 2>/dev/null | \
jq -r --arg sha "${review_sha:-none}" \
'[.[] | select(.body | contains("<!-- reviewed: " + $sha))] | last // empty') || true
if [ -n "$review_comment" ] && [ "$review_comment" != "null" ]; then
local review_text verdict
review_text=$(echo "$review_comment" | jq -r '.body')
# Skip error reviews
if echo "$review_text" | grep -q "review-error\|Review — Error"; then
continue
fi
verdict=$(echo "$review_text" | grep -oP '\*\*(APPROVE|REQUEST_CHANGES|DISCUSS)\*\*' | head -1 | tr -d '*' || true)
# Check formal forge reviews as fallback
if [ -z "$verdict" ]; then
verdict=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/pulls/${_GARDENER_PR}/reviews" | \
jq -r '[.[] | select(.stale == false)] | last | .state // empty' || true)
[ "$verdict" = "APPROVED" ] && verdict="APPROVE"
[[ "$verdict" != "REQUEST_CHANGES" && "$verdict" != "APPROVE" ]] && verdict=""
fi
# Check review-poll sentinel to avoid double injection
local review_sentinel="/tmp/review-injected-${PROJECT_NAME}-${_GARDENER_PR}"
if [ -n "$verdict" ] && [ -f "$review_sentinel" ] && [ "$verdict" != "APPROVE" ]; then
log "review already injected by review-poll — skipping"
rm -f "$review_sentinel"
break
fi
rm -f "$review_sentinel"
if [ "$verdict" = "APPROVE" ]; then
log "gardener PR #${_GARDENER_PR} approved — merging"
_gardener_merge
return 0
elif [ "$verdict" = "REQUEST_CHANGES" ] || [ "$verdict" = "DISCUSS" ]; then
_GARDENER_REVIEW_ROUND=$(( _GARDENER_REVIEW_ROUND + 1 ))
log "review REQUEST_CHANGES on PR #${_GARDENER_PR} (round ${_GARDENER_REVIEW_ROUND})"
agent_inject_into_session "${_MONITOR_SESSION:-$SESSION_NAME}" \
"Review feedback on PR #${_GARDENER_PR} (round ${_GARDENER_REVIEW_ROUND}):
${review_text}
Address all feedback, commit, push, then write:
echo \"PHASE:awaiting_ci\" > \"${PHASE_FILE}\"
Then stop and wait."
return 0
fi
fi
# Check if PR was merged or closed externally
local pr_json pr_state pr_merged
pr_json=$(curl -sf -H "Authorization: token ${FORGE_TOKEN}" \
"${FORGE_API}/pulls/${_GARDENER_PR}") || true
pr_state=$(echo "$pr_json" | jq -r '.state // "unknown"')
pr_merged=$(echo "$pr_json" | jq -r '.merged // false')
if [ "$pr_merged" = "true" ]; then
log "gardener PR #${_GARDENER_PR} merged externally"
_gardener_execute_manifest
printf 'PHASE:done\n' > "$PHASE_FILE"
return 0
fi
if [ "$pr_state" != "open" ]; then
log "gardener PR #${_GARDENER_PR} closed without merge"
printf 'PHASE:failed\nReason: PR closed without merge\n' > "$PHASE_FILE"
return 0
fi
log "waiting for review on PR #${_GARDENER_PR} (${review_elapsed}s)"
done
if [ "$review_elapsed" -ge "$review_timeout" ]; then
log "review wait timed out for PR #${_GARDENER_PR}"
agent_inject_into_session "${_MONITOR_SESSION:-$SESSION_NAME}" \
"No review received after ${review_timeout}s for PR #${_GARDENER_PR}. Write PHASE:failed with a reason if you cannot proceed."
fi
}
# shellcheck disable=SC2317 # called indirectly by monitor_phase_loop
_gardener_on_phase_change() {
local phase="$1"
log "phase: ${phase}"
case "$phase" in
PHASE:awaiting_ci)
_gardener_handle_ci
;;
PHASE:awaiting_review)
_gardener_handle_review
;;
PHASE:done|PHASE:merged)
agent_kill_session "${_MONITOR_SESSION:-$SESSION_NAME}"
;;
PHASE:failed|PHASE:escalate)
agent_kill_session "${_MONITOR_SESSION:-$SESSION_NAME}"
;;
PHASE:crashed)
if [ "${_GARDENER_CRASH_COUNT:-0}" -gt 0 ]; then
log "ERROR: session crashed again — giving up"
return 0
fi
_GARDENER_CRASH_COUNT=$(( _GARDENER_CRASH_COUNT + 1 ))
log "WARNING: session crashed — attempting recovery"
if create_agent_session "${_MONITOR_SESSION:-$SESSION_NAME}" \
"${_FORMULA_SESSION_WORKDIR:-$PROJECT_REPO_ROOT}" "$PHASE_FILE" 2>/dev/null; then
agent_inject_into_session "${_MONITOR_SESSION:-$SESSION_NAME}" "$PROMPT"
log "recovery session started"
else
log "ERROR: could not restart session after crash"
fi
;;
*)
log "WARNING: unknown phase: ${phase}"
;;
esac
}
# ── Reset result file ────────────────────────────────────────────────────
rm -f "$RESULT_FILE"
touch "$RESULT_FILE"
# ── Run session ──────────────────────────────────────────────────────────
export CLAUDE_MODEL="sonnet"
run_formula_and_monitor "gardener" 7200 "_gardener_on_phase_change"
# ── Cleanup on exit ──────────────────────────────────────────────────────
# FINAL_PHASE already set by run_formula_and_monitor
if [ "${FINAL_PHASE:-}" = "PHASE:done" ]; then
rm -f "$SCRATCH_FILE"
fi
# Write journal entry post-session
profile_write_journal "gardener-run" "Gardener run $(date -u +%Y-%m-%d)" "complete" "" || true
rm -f "$GARDENER_PR_FILE"
# Persist last-seen SHA for next run comparison
echo "$CURRENT_SHA" > "$LAST_SHA_FILE"
log "--- Gardener run done ---"
[ -n "$_GARDENER_PR" ] && rm -f "/tmp/review-injected-${PROJECT_NAME}-${_GARDENER_PR}"
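The phase-file protocol the script above relies on can be sketched in isolation. This is a minimal model only: `signal_phase` and `current_phase` are hypothetical stand-ins, and the real dispatch lives in `run_formula_and_monitor` and `_gardener_on_phase_change`.

```shell
# Minimal sketch of the phase-file handshake. Helper names here
# (signal_phase, current_phase) are hypothetical stand-ins.
PHASE_FILE="$(mktemp)"

# Agent side: signal a state transition by overwriting the phase file.
signal_phase() { printf 'PHASE:%s\n' "$1" > "$PHASE_FILE"; }

# Monitor side: read the first line and dispatch on it.
current_phase() { head -n1 "$PHASE_FILE"; }

signal_phase awaiting_ci
case "$(current_phase)" in
  PHASE:awaiting_ci)       echo "would call _gardener_handle_ci" ;;
  PHASE:awaiting_review)   echo "would call _gardener_handle_review" ;;
  PHASE:done|PHASE:failed) echo "would tear down the session" ;;
esac
```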


@@ -1,62 +1,12 @@
[
{
"action": "edit_body",
"issue": 784,
"body": "Flagged by AI reviewer in PR #783.\n\n## Problem\n\n`_regen_file()` (added in PR #783, `bin/disinto` ~line 1424) moves the existing target file to a temp stash before calling the generator:\n\n```bash\nmv \"$target\" \"$stashed\"\n\"$generator\" \"$@\"\n```\n\nThe script runs under `set -euo pipefail`. If the generator exits non-zero, bash exits immediately and the original file remains stranded at `${target}.stash.XXXXXX` (never restored). The target file no longer exists, and `docker compose up` is never reached. Recovery requires the operator to manually locate and rename the hidden stash file.\n\n## Fix\n\nAdd an ERR trap inside `_regen_file` to restore the stash on failure, e.g.:\n```bash\n\"$generator\" \"$@\" || { mv \"$stashed\" \"$target\"; return 1; }\n```\n\n---\n*Auto-created from AI review*\n\n## Acceptance criteria\n\n- [ ] If the generator exits non-zero, the original target file is restored from the stash (not stranded at the temp path)\n- [ ] `_regen_file` still removes the stash file after a successful generator run\n- [ ] `docker compose up` is reached when the generator succeeds\n- [ ] ShellCheck passes on `bin/disinto`\n\n## Affected files\n\n- `bin/disinto` — `_regen_file()` function (~line 1424)\n"
},
{
"action": "add_label",
"issue": 784,
"label": "backlog"
},
{
"action": "remove_label",
"issue": 773,
"label": "blocked"
},
{
"action": "add_label",
"issue": 773,
"label": "backlog"
},
{
"action": "comment",
"issue": 772,
"body": "All child issues have been resolved:\n- #768 (edge restart policy) — closed\n- #769 (agents-llama generator service) — closed\n- #770 (disinto up regenerate) — closed\n- #771 (deprecate docker/Caddyfile) — closed\n\nClosing tracker as all decomposed work is complete."
"issue": 3,
"body": "Closing: this issue was fully implemented by PR #2 (commit `723167d`). All acceptance criteria are met:\n- `edge` (Caddy) service added to docker-compose\n- `staging` Caddy static file server configured\n- `docker/Caddyfile` template generated by `disinto init`\n- Default \"Nothing shipped yet\" staging page in `docker/staging-seed/`\n\nThe dev-agent sessions recorded here crashed after the implementation was already merged into `main`. Closing as implemented."
},
{
"action": "close",
"issue": 772,
"reason": "all child issues 768-771 closed"
},
{
"action": "edit_body",
"issue": 778,
"body": "## Problem\n\n`formulas/rent-a-human-caddy-ssh.toml` step 3 tells the operator:\n\n```\necho \"CADDY_SSH_KEY=$(base64 -w0 caddy-collect)\" >> .env.vault.enc\n```\n\n**You cannot append plaintext to a sops-encrypted file.** The append silently corrupts `.env.vault.enc` — subsequent `sops -d` fails, all vault secrets become unrecoverable. Any operator who followed the docs verbatim has broken their vault.\n\nSteps 4 (`CADDY_HOST`) and 5 (`CADDY_ACCESS_LOG`) have the same bug.\n\n## Proposed fix\n\nRewrite the `>>` steps to use the stdin-piped `disinto secrets add` (from issue A):\n\n```\ncat caddy-collect | disinto secrets add CADDY_SSH_KEY\necho '159.89.14.107' | disinto secrets add CADDY_SSH_HOST\necho 'debian' | disinto secrets add CADDY_SSH_USER\necho '/var/log/caddy/access.log' | disinto secrets add CADDY_ACCESS_LOG\n```\n\nAlso:\n- Remove the `base64 -w0` step — the new `secrets add` stores multi-line keys verbatim.\n- Remove the `shred -u caddy-collect` step from the happy path — let the operator keep the backup until they have verified the edge container picks it up.\n- Add a recovery note: operators with a corrupted vault from the old docs must `rm .env.vault.enc` (or `migrate-from-vault` if issue B landed) before re-running.\n\n## Context\n\n- Parent: sprint PR `disinto-admin/disinto-ops#10`.\n- Depends on: #776 (piped `secrets add`) — now closed.\n- Soft-depends on: #777 (if landed, drop all `.env.vault*` references entirely).\n\n## Acceptance criteria\n\n- [ ] Formula runs end-to-end without touching `.env.vault.enc` or `.env.vault` by hand\n- [ ] Re-running is idempotent (upsert via `disinto secrets add -f`)\n- [ ] Edge container starts cleanly with the imported secrets and the daily collect-engagement cron fires without `\"CADDY_SSH_KEY not set, skipping\"`\n\n## Affected files\n\n- `formulas/rent-a-human-caddy-ssh.toml` — replace `>> .env.vault.enc` steps with `disinto secrets add` calls\n"
},
{
"action": "remove_label",
"issue": 778,
"label": "blocked"
},
{
"action": "add_label",
"issue": 778,
"label": "backlog"
},
{
"action": "edit_body",
"issue": 777,
"body": "## Problem\n\nTwo parallel secret stores:\n\n1. `secrets/<NAME>.enc` — per-key, age-encrypted. Populated by `disinto secrets add`. **No runtime consumer today.** Only `disinto secrets show` ever decrypts these.\n2. `.env.vault.enc` — monolithic, sops/dotenv-encrypted. The only store actually loaded into containers (via `docker/edge/dispatcher.sh` → `sops -d --output-type dotenv`).\n\nTwo mental models, redundant subcommands (`edit-vault`, `show-vault`, `migrate-vault`), and today's `disinto secrets add` silently deposits secrets into a dead-letter directory. Operator runs the command, edge container still logs `CADDY_SSH_KEY not set, skipping` (docker/edge/entrypoint-edge.sh:207).\n\n## Proposed solution\n\nConsolidate on `secrets/<NAME>.enc` as THE store. One file per secret, granular, small surface.\n\n**1. Wire container dispatchers to load `secrets/*.enc` into env**\n\n- `docker/edge/dispatcher.sh` (and agent / ops dispatchers) decrypt declared secrets at startup and export them.\n- Granular per-secret — not a bulk dump.\n\n**2. Containers declare required secrets**\n\n- `secrets.required = [\"CADDY_SSH_KEY\", \"CADDY_SSH_HOST\", ...]` in the container's TOML, or equivalent in compose.\n- Missing required secret → **hard fail** with clear message. Replaces today's silent-skip branch at `entrypoint-edge.sh:207`.\n\n**3. Deprecate the monolithic vault**\n\n- Remove `.env.vault`, `.env.vault.enc`, and subcommands `edit-vault` / `show-vault` / `migrate-vault` from `bin/disinto`.\n- Remove sops round-trip from `docker/edge/dispatcher.sh` (lines 32-40 currently).\n\n**4. One-shot migration for existing operators**\n\n- `disinto secrets migrate-from-vault` splits an existing `.env.vault.enc` into `secrets/<KEY>.enc` files, verifies each, then removes the old vault on success.\n- Idempotent: safe to run multiple times.\n\n## Context\n\n- Parent: sprint PR `disinto-admin/disinto-ops#10`.\n- Depends on: #776 (`secrets add` must accept piped stdin before we can deprecate `edit-vault`) — now closed.\n- Rationale (operator quote): *\"containers should have option to load single secrets, granular. no 2 mental models, only 1 thing that works well and has small surface.\"*\n\n## Acceptance criteria\n\n- [ ] Edge container declares `secrets.required = [\"CADDY_SSH_KEY\", \"CADDY_SSH_HOST\", \"CADDY_SSH_USER\", \"CADDY_ACCESS_LOG\"]`; dispatcher exports them; `collect-engagement.sh` runs without additional env wiring\n- [ ] Container refuses to start when a required secret is missing (fail loudly, not skip silently)\n- [ ] `.env.vault*` files and all vault-specific subcommands removed from `bin/disinto` and all formulas / docs\n- [ ] `migrate-from-vault` converts an existing monolithic vault correctly (verified by round-trip test)\n- [ ] `disinto secrets` help text shows one store, four verbs: `add`, `show`, `remove`, `list`\n\n## Affected files\n\n- `bin/disinto` — remove `edit-vault`, `show-vault`, `migrate-vault` subcommands; add `migrate-from-vault`\n- `docker/edge/dispatcher.sh` — replace sops round-trip with per-secret age decryption (lines 32-40)\n- `docker/edge/entrypoint-edge.sh` — replace silent-skip at line 207 with hard fail on missing required secrets\n- `lib/vault.sh` — update or remove vault-env.sh wiring now that `.env.vault.enc` is deprecated\n"
},
{
"action": "remove_label",
"issue": 777,
"label": "blocked"
},
{
"action": "add_label",
"issue": 777,
"label": "backlog"
"issue": 3,
"reason": "already implemented in PR #2 (commit 723167d)"
}
]


@@ -0,0 +1,16 @@
# gardener/recipes/cascade-rebase.toml — PR outdated after main moved
#
# Trigger: PR mergeable=false (stale branch or dismissed approval)
# Playbook: rebase only — merge and re-approval happen on subsequent cycles
# after CI reruns on the rebased branch (rebase is async via Gitea API)
name = "cascade-rebase"
description = "PR outdated after main moved — mergeable=false or stale approval"
priority = 20
[trigger]
pr_mergeable = false
[[playbook]]
action = "rebase-pr"
description = "Rebase PR onto main (async — CI reruns, merge on next cycle)"
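How a `pr_mergeable` trigger might be evaluated can be sketched against a static PR payload. The recipe engine itself is not part of this diff, so the matching logic below is an illustrative assumption; only the `mergeable` field name mirrors the forge API.

```shell
# Hypothetical sketch of evaluating the pr_mergeable trigger.
pr_json='{"number": 42, "mergeable": false, "state": "open"}'

# Extract the boolean without a JSON parser (illustration only).
mergeable=$(printf '%s\n' "$pr_json" | grep -o '"mergeable": *[a-z]*' | awk '{print $2}')

action=""
if [ "$mergeable" = "false" ]; then
  action="rebase-pr"   # the playbook entry this recipe would fire
fi
echo "$action"   # → rebase-pr
```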


@@ -0,0 +1,25 @@
# gardener/recipes/chicken-egg-ci.toml — PR introduces CI step that fails on pre-existing code
#
# Trigger: New .woodpecker/*.yml in PR + lint/check step + failures on unchanged files
# Playbook: make step non-blocking, create per-file issues, create follow-up to remove bypass
name = "chicken-egg-ci"
description = "PR introduces a CI pipeline/linting step that fails on pre-existing code"
priority = 10
[trigger]
pr_files = '\.woodpecker/.*\.yml$'
step_name = '(?i)(lint|shellcheck|check)'
failures_on_unchanged = true
[[playbook]]
action = "make-step-non-blocking"
description = "Make failing step non-blocking (|| true) in the PR"
[[playbook]]
action = "lint-per-file"
description = "Create per-file fix issues for pre-existing violations (generic linter support)"
[[playbook]]
action = "create-followup-remove-bypass"
description = "Create follow-up issue to remove || true once fixes land"
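The `make-step-non-blocking` action amounts to appending `|| true` to the failing step's command. A sketch of an idempotent version of that edit (the actual YAML-editing logic is not part of this diff):

```shell
# Append "|| true" to a step command unless it is already bypassed.
cmd='shellcheck bin/*.sh'
case "$cmd" in
  *'|| true') : ;;             # already non-blocking, leave untouched
  *) cmd="$cmd || true" ;;
esac
echo "$cmd"   # → shellcheck bin/*.sh || true
```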


@@ -0,0 +1,20 @@
# gardener/recipes/flaky-test.toml — CI fails intermittently
#
# Trigger: Test step fails + multiple CI attempts (same step, different output)
# Playbook: retrigger CI (max 2x), quarantine test if still failing
name = "flaky-test"
description = "CI fails intermittently — same step fails across multiple attempts"
priority = 30
[trigger]
step_name = '(?i)test'
min_attempts = 2
[[playbook]]
action = "retrigger-ci"
description = "Retrigger CI (max 2 retries)"
[[playbook]]
action = "quarantine-test"
description = "If still failing, quarantine test and create fix issue"
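The retry budget above (at most two retriggers, then quarantine) can be sketched as a plain loop; `flaky_step` below is a simulated stand-in that fails once and then passes.

```shell
# Sketch of the flaky-test retry budget: retry at most twice, then quarantine.
attempts=0
max_retries=2
flaky_step() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 2 ]        # first call fails, second succeeds
}

outcome=""
try=0
while [ "$try" -le "$max_retries" ]; do
  if flaky_step; then outcome="passed"; break; fi
  try=$((try + 1))
done
[ -n "$outcome" ] || outcome="quarantine"
echo "$outcome"   # → passed
```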


@@ -0,0 +1,20 @@
# gardener/recipes/shellcheck-violations.toml — ShellCheck step fails
#
# Trigger: Step named *shellcheck* fails with SC#### codes in output
# Playbook: parse per-file, create one issue per file, label backlog
name = "shellcheck-violations"
description = "ShellCheck step fails with SC#### codes in output"
priority = 40
[trigger]
step_name = '(?i)shellcheck'
output = 'SC\d{4}'
[[playbook]]
action = "shellcheck-per-file"
description = "Parse output by file, create one fix issue per file with specific SC codes"
[[playbook]]
action = "label-backlog"
description = "Label created issues as backlog"
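The per-file grouping this playbook describes can be sketched with standard text tools; the `findings` sample below is fabricated for illustration and does not come from a real ShellCheck run.

```shell
# Sketch of shellcheck-per-file grouping: one issue per file, listing
# the distinct SC codes found in it.
findings='bin/disinto:12:3: warning: SC2086
bin/disinto:40:1: note: SC2034
lib/env.sh:7:9: warning: SC2155'

files=$(printf '%s\n' "$findings" | cut -d: -f1 | sort -u)
for f in $files; do
  codes=$(printf '%s\n' "$findings" | grep "^$f:" | grep -o 'SC[0-9]*' | sort -u | xargs)
  echo "issue: fix $f ($codes)"
done
```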


@@ -1,28 +0,0 @@
# CI/CD — Best Practices
## CI Pipeline Issues (P2)
When CI pipelines are stuck running >20min or pending >30min:
### Investigation Steps
1. Check pipeline status via Forgejo API:
```bash
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/pipelines?limit=50" | jq '.[] | {number, status, created}'
```
2. Check Woodpecker CI if configured:
```bash
curl -sf -H "Authorization: Bearer $WOODPECKER_TOKEN" \
"$WOODPECKER_SERVER/api/repos/${WOODPECKER_REPO_ID}/pipelines?limit=10"
```
### Common Fixes
- **Stuck pipeline**: Cancel via Forgejo API, retrigger
- **Pending pipeline**: Check queue depth, scale CI runners
- **Failed pipeline**: Review logs, fix failing test/step
### Prevention
- Set timeout limits on CI pipelines
- Monitor runner capacity and scale as needed
- Use caching for dependencies to reduce build time


@@ -1,28 +0,0 @@
# Dev Agent — Best Practices
## Dev Agent Issues (P2)
When dev-agent is stuck, blocked, or in bad state:
### Dead Lock File
```bash
# Check if process still exists
ps -p $(cat /path/to/lock.file) 2>/dev/null || rm -f /path/to/lock.file
```
### Stale Worktree Cleanup
```bash
cd "$PROJECT_REPO_ROOT"
git worktree remove --force /tmp/stale-worktree 2>/dev/null || true
git worktree prune 2>/dev/null || true
```
### Blocked Pipeline
- Check if PR is awaiting review or CI
- Verify no other agent is actively working on same issue
- Check for unmet dependencies (issues with `Depends on` refs)
### Prevention
- Concurrency bounded per LLM backend (AD-002)
- Clear lock files in EXIT traps
- Use phase files to track agent state


@@ -1,35 +0,0 @@
# Disk Management — Best Practices
## Disk Pressure Response (P1)
When disk usage exceeds 80%, take these actions in order:
### Immediate Actions
1. **Docker cleanup** (safe, low impact):
```bash
sudo docker system prune -f
```
2. **Aggressive Docker cleanup** (if still >80%):
```bash
sudo docker system prune -a -f
```
This removes unused images in addition to containers/volumes.
3. **Log rotation**:
```bash
for f in "$FACTORY_ROOT"/{dev,review,supervisor,gardener,planner,predictor}/*.log; do
[ -f "$f" ] && [ "$(du -k "$f" | cut -f1)" -gt 10240 ] && truncate -s 0 "$f"
done
```
### Prevention
- Monitor disk with alerts at 70% (warning) and 80% (critical)
- Set up automatic log rotation for agent logs
- Clean up old Docker images regularly
- Consider using separate partitions for `/var/lib/docker`
### When to Escalate
- Disk stays >80% after cleanup (indicates legitimate growth)
- No unused Docker images to clean
- Critical data filling disk (check /home, /var/log)


@@ -1,25 +0,0 @@
# Forgejo Operations — Best Practices
## Forgejo Issues
When Forgejo operations encounter issues:
### API Rate Limits
- Monitor rate limit headers in API responses
- Implement exponential backoff on 429 responses
- Use agent-specific tokens (#747) to increase limits
### Authentication Issues
- Verify FORGE_TOKEN is valid and not expired
- Check agent identity matches token (#747)
- Use FORGE_<AGENT>_TOKEN for agent-specific identities
### Repository Access
- Verify FORGE_REMOTE matches actual git remote
- Check token has appropriate permissions (repo, write)
- Use `resolve_forge_remote()` to auto-detect remote
### Prevention
- Set up monitoring for API failures
- Rotate tokens before expiry
- Document required permissions per agent


@@ -1,28 +0,0 @@
# Git State Recovery — Best Practices
## Git State Issues (P2)
When git repo is on wrong branch or in broken rebase state:
### Wrong Branch Recovery
```bash
cd "$PROJECT_REPO_ROOT"
git checkout "$PRIMARY_BRANCH" 2>/dev/null || git checkout master 2>/dev/null
```
### Broken Rebase Recovery
```bash
cd "$PROJECT_REPO_ROOT"
git rebase --abort 2>/dev/null || true
git checkout "$PRIMARY_BRANCH" 2>/dev/null || git checkout master 2>/dev/null
```
### Stale Lock File Cleanup
```bash
rm -f /path/to/stale.lock
```
### Prevention
- Always checkout primary branch after rebase conflicts
- Remove lock files after agent sessions complete
- Use `git status` to verify repo state before operations


@@ -1,27 +0,0 @@
# Memory Management — Best Practices
## Memory Crisis Response (P0)
When RAM available drops below 500MB or swap usage exceeds 3GB, take these actions:
### Immediate Actions
1. **Kill stale claude processes** (>3 hours old):
```bash
pgrep -f "claude -p" --older 10800 2>/dev/null | xargs kill 2>/dev/null || true
```
2. **Drop filesystem caches**:
```bash
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null 2>&1 || true
```
### Prevention
- Set memory_guard to 2000MB minimum (default in env.sh)
- Configure swap usage alerts at 2GB
- Monitor for memory leaks in long-running processes
- Use cgroups for process memory limits
### When to Escalate
- RAM stays <500MB after cache drop
- Swap continues growing after process kills
- System becomes unresponsive (OOM killer active)


@@ -1,23 +0,0 @@
# Review Agent — Best Practices
## Review Agent Issues
When review agent encounters issues with PRs:
### Stale PR Handling
- PRs stale >20min (CI done, no push since) → file vault item for dev-agent
- Do NOT push branches or attempt merges directly
- File vault item with:
- What: Stale PR requiring push
- Why: Factory degraded
- Unblocks: dev-agent will push the branch
### Circular Dependencies
- Check backlog for issues with circular `Depends on` refs
- Use `lib/parse-deps.sh` to analyze dependency graph
- Report to planner for resolution
### Prevention
- Review agent only reads PRs, never modifies
- Use vault items for actions requiring dev-agent
- Monitor for PRs stuck in review state


@@ -1,4 +1,4 @@
<!-- last-reviewed: 18190874cae869527f675f717423ded735f2c555 -->
<!-- last-reviewed: 9b91c6a5bdd83513ca262dd468ea14db675971c1 -->
# Shared Helpers (`lib/`)
All agents source `lib/env.sh` as their first action. Additional helpers are
@@ -6,31 +6,16 @@ sourced as needed.
| File | What it provides | Sourced by |
|---|---|---|
| `lib/env.sh` | Loads `.env`, sets `FACTORY_ROOT`, exports project config (`FORGE_REPO`, `PROJECT_NAME`, etc.), defines `log()`, `forge_api()`, `forge_api_all()` (paginates all pages; accepts optional second TOKEN parameter, defaults to `$FORGE_TOKEN`; handles invalid/empty JSON responses gracefully — returns empty on parse error instead of crashing), `woodpecker_api()`, `wpdb()`, `memory_guard()` (skips agent if RAM < threshold), `load_secret()` (secret-source abstraction; see below). Auto-loads project TOML if `PROJECT_TOML` is set. Exports per-agent tokens (`FORGE_PLANNER_TOKEN`, `FORGE_GARDENER_TOKEN`, `FORGE_VAULT_TOKEN`, `FORGE_SUPERVISOR_TOKEN`, `FORGE_PREDICTOR_TOKEN`); each falls back to `$FORGE_TOKEN` if not set. **Vault-only token guard (AD-006)**: `unset GITHUB_TOKEN CLAWHUB_TOKEN` so agents never hold external-action tokens; only the runner container receives them. **Container note**: when `DISINTO_CONTAINER=1`, `.env` is NOT re-sourced; compose already injects env vars (including `FORGE_URL=http://forgejo:3000`) and re-sourcing would clobber them. **Save/restore scope (#364)**: only `FORGE_URL` is preserved across `.env` re-sourcing (compose injects `http://forgejo:3000`, `.env` has `http://localhost:3000`). `FORGE_TOKEN` is NOT preserved, so refreshed tokens in `.env` take effect immediately. **Per-agent token override (#762)**: agent run scripts export `FORGE_TOKEN_OVERRIDE=<agent-specific-token>` BEFORE sourcing `env.sh`; `env.sh` applies this override at lines 98-100, ensuring the correct identity survives any re-sourcing of `env.sh` by nested shells or `claude -p` invocations. **Required env var**: `FORGE_PASS`, the bot password for git HTTP push (Forgejo 11.x rejects API tokens for `git push`, #361). **Hard preconditions (#674)**: `USER` and `HOME` must be exported by the entrypoint before sourcing. When `PROJECT_TOML` is set, `PROJECT_REPO_ROOT`, `PRIMARY_BRANCH`, and `OPS_REPO_ROOT` must also be set (by entrypoint or TOML). **`load_secret NAME [DEFAULT]` (#793)**: backend-agnostic secret resolution. Precedence: (1) `/secrets/<NAME>.env`, a Nomad-rendered template; (2) the current environment, already set by `.env.enc` / compose; (3) `secrets/<NAME>.enc`, an age-encrypted per-key file (decrypted on demand, cached in process env); (4) DEFAULT or empty. Consumers call `$(load_secret GITHUB_TOKEN)` instead of `${GITHUB_TOKEN}`; behavior is identical whether secrets come from Docker compose injection or Nomad Vault templates. | Every agent |
| `lib/ci-helpers.sh` | `ci_passed()` — returns 0 if CI state is "success" (or no CI configured). `ci_required_for_pr()` — returns 0 if PR has code files (CI required), 1 if non-code only (CI not required). `is_infra_step()` — returns 0 if a single CI step failure matches infra heuristics (clone/git exit 128, any exit 137, log timeout patterns). `classify_pipeline_failure()` — returns "infra \<reason>" if any failed Woodpecker step matches infra heuristics via `is_infra_step()`, else "code". `ensure_priority_label()` — looks up (or creates) the `priority` label and returns its ID; caches in `_PRIORITY_LABEL_ID`. `ci_commit_status <sha>` — queries Woodpecker directly for CI state, falls back to forge commit status API. `ci_pipeline_number <sha>` — returns the Woodpecker pipeline number for a commit, falls back to parsing forge status `target_url`. `ci_promote <repo_id> <pipeline_num> <environment>` — promotes a pipeline to a named Woodpecker environment (vault-gated deployment: vault approves, vault-fire calls this — vault redesign in progress, see #73-#77). `ci_get_logs <pipeline_number> [--step <name>]` — reads CI logs from Woodpecker SQLite database via `lib/ci-log-reader.py`; outputs last 200 lines to stdout. Requires mounted woodpecker-data volume at /woodpecker-data. | dev-poll, review-poll, review-pr |
| `lib/env.sh` | Loads `.env`, sets `FACTORY_ROOT`, exports project config (`FORGE_REPO`, `PROJECT_NAME`, etc.), defines `log()`, `forge_api()`, `forge_api_all()` (accepts optional second TOKEN parameter, defaults to `$FORGE_TOKEN`), `woodpecker_api()`, `wpdb()`, `memory_guard()` (skips agent if RAM < threshold). Auto-loads project TOML if `PROJECT_TOML` is set. Exports per-agent tokens (`FORGE_PLANNER_TOKEN`, `FORGE_GARDENER_TOKEN`, `FORGE_VAULT_TOKEN`, `FORGE_SUPERVISOR_TOKEN`, `FORGE_PREDICTOR_TOKEN`, `FORGE_ACTION_TOKEN`); each falls back to `$FORGE_TOKEN` if not set. **Vault-only token guard (AD-006)**: `unset GITHUB_TOKEN CLAWHUB_TOKEN` so agents never hold external-action tokens; only the vault-runner container receives them. **Container note**: when `DISINTO_CONTAINER=1`, `.env` is NOT re-sourced; compose already injects env vars (including `FORGE_URL=http://forgejo:3000`) and re-sourcing would clobber them. | Every agent |
| `lib/ci-helpers.sh` | `ci_passed()` — returns 0 if CI state is "success" (or no CI configured). `ci_required_for_pr()` — returns 0 if PR has code files (CI required), 1 if non-code only (CI not required). `is_infra_step()` — returns 0 if a single CI step failure matches infra heuristics (clone/git exit 128, any exit 137, log timeout patterns). `classify_pipeline_failure()` — returns "infra \<reason>" if any failed Woodpecker step matches infra heuristics via `is_infra_step()`, else "code". `ensure_priority_label()` — looks up (or creates) the `priority` label and returns its ID; caches in `_PRIORITY_LABEL_ID`. `ci_commit_status <sha>` — queries Woodpecker directly for CI state, falls back to forge commit status API. `ci_pipeline_number <sha>` — returns the Woodpecker pipeline number for a commit, falls back to parsing forge status `target_url`. `ci_promote <repo_id> <pipeline_num> <environment>` — promotes a pipeline to a named Woodpecker environment (vault-gated deployment: vault approves, vault-fire calls this). | dev-poll, review-poll, review-pr, supervisor-poll |
| `lib/ci-debug.sh` | CLI tool for Woodpecker CI: `list`, `status`, `logs`, `failures` subcommands. Not sourced — run directly. | Humans / dev-agent (tool access) |
| `lib/ci-log-reader.py` | Python tool: reads CI logs from Woodpecker SQLite database. `<pipeline_number> [--step <name>]` — returns last 200 lines from failed steps (or specified step). Used by `ci_get_logs()` in ci-helpers.sh. Requires `WOODPECKER_DATA_DIR` (default: /woodpecker-data). | ci-helpers.sh |
| `lib/load-project.sh` | Parses a `projects/*.toml` file into env vars (`PROJECT_NAME`, `FORGE_REPO`, `WOODPECKER_REPO_ID`, monitoring toggles, mirror config, etc.). Also exports `FORGE_REPO_OWNER` (the owner component of `FORGE_REPO`, e.g. `disinto-admin` from `disinto-admin/disinto`). Reads `repo_root` and `ops_repo_root` from the TOML for host-CLI callers. **Container path handling (#674)**: no longer derives `PROJECT_REPO_ROOT` or `OPS_REPO_ROOT` inside the script — container entrypoints export the correct paths before agent scripts source `env.sh`, and the `DISINTO_CONTAINER` guard (line 90) skips TOML overrides when those vars are already set. | env.sh (when `PROJECT_TOML` is set) |
| `lib/parse-deps.sh` | Extracts dependency issue numbers from an issue body (stdin → stdout, one number per line). Matches `## Dependencies` / `## Depends on` / `## Blocked by` sections and inline `depends on #N` / `blocked by #N` patterns. Inline scan skips fenced code blocks to prevent false positives from code examples in issue bodies. Not sourced — executed via `bash lib/parse-deps.sh`. | dev-poll |
| `lib/formula-session.sh` | `acquire_run_lock()`, `load_formula()`, `load_formula_or_profile()`, `build_context_block()`, `ensure_ops_repo()`, `ops_commit_and_push()`, `build_prompt_footer()`, `build_sdk_prompt_footer()`, `formula_worktree_setup()`, `formula_prepare_profile_context()`, `formula_lessons_block()`, `profile_write_journal()`, `profile_load_lessons()`, `ensure_profile_repo()`, `_profile_has_repo()`, `_count_undigested_journals()`, `_profile_digest_journals()`, `_profile_restore_lessons()`, `_profile_commit_and_push()`, `resolve_agent_identity()`, `build_graph_section()`, `build_scratch_instruction()`, `read_scratch_context()`, `cleanup_stale_crashed_worktrees()` — shared helpers for formula-driven polling-loop agents (lock, .profile repo management, prompt assembly, worktree setup). Memory guard is provided by `memory_guard()` in `lib/env.sh` (not duplicated here). `resolve_agent_identity()` — sets `FORGE_TOKEN`, `AGENT_IDENTITY`, `FORGE_REMOTE` from per-agent token env vars and FORGE_URL remote detection. `build_graph_section()` generates the structural-analysis section (runs `lib/build-graph.py`, formats JSON output) — previously duplicated in planner-run.sh and predictor-run.sh, now shared here. `cleanup_stale_crashed_worktrees()` — thin wrapper around `worktree_cleanup_stale()` from `lib/worktree.sh` (kept for backwards compatibility). **Journal digestion guards (#702)**: `_profile_digest_journals()` respects `PROFILE_DIGEST_TIMEOUT` (default 300s) and `PROFILE_DIGEST_MAX_BATCH` (default 5 journals per run); `_profile_restore_lessons()` restores the previous lessons-learned.md on digest failure. | planner-run.sh, predictor-run.sh, gardener-run.sh, supervisor-run.sh, dev-agent.sh |
| `lib/guard.sh` | `check_active(agent_name)` — reads `$FACTORY_ROOT/state/.{agent_name}-active`; exits 0 (skip) if the file is absent. Factory is off by default — state files must be created to enable each agent. **Logs a message to stderr** when skipping (`[check_active] SKIP: state file not found`), so agent dropout is visible in loop logs. Sourced by dev-poll.sh, review-poll.sh, predictor-run.sh, supervisor-run.sh. | polling-loop entry points |
| `lib/mirrors.sh` | `mirror_push()` — pushes `$PRIMARY_BRANCH` + tags to all configured mirror remotes (fire-and-forget background pushes). Reads `MIRROR_NAMES` and `MIRROR_*` vars exported by `load-project.sh` from the `[mirrors]` TOML section. Failures are logged but never block the pipeline. `mirror_pull_register(clone_url, owner, repo_name, [interval])` — registers a Forgejo pull mirror via `POST /repos/migrate` with `mirror: true`. Creates the target repo and queues the first sync automatically. Works against empty Forgejo instances — no pre-existing content required. Used for Nomad migration cutover: point at Codeberg source, wait for sync, then proceed with `disinto init`. See [docs/mirror-bootstrap.md](../docs/mirror-bootstrap.md) for the full cutover path. Sourced by dev-poll.sh — called after every successful merge. | dev-poll.sh |
| `lib/load-project.sh` | Parses a `projects/*.toml` file into env vars (`PROJECT_NAME`, `FORGE_REPO`, `WOODPECKER_REPO_ID`, monitoring toggles, mirror config, etc.). | env.sh (when `PROJECT_TOML` is set), supervisor-poll (per-project iteration) |
| `lib/parse-deps.sh` | Extracts dependency issue numbers from an issue body (stdin → stdout, one number per line). Matches `## Dependencies` / `## Depends on` / `## Blocked by` sections and inline `depends on #N` / `blocked by #N` patterns. Inline scan skips fenced code blocks to prevent false positives from code examples in issue bodies. Not sourced — executed via `bash lib/parse-deps.sh`. | dev-poll, supervisor-poll |
| `lib/formula-session.sh` | `acquire_cron_lock()`, `check_memory()`, `load_formula()`, `build_context_block()`, `consume_escalation_reply()`, `start_formula_session()`, `formula_phase_callback()`, `build_prompt_footer()`, `build_graph_section()`, `run_formula_and_monitor(AGENT [TIMEOUT] [CALLBACK])` — shared helpers for formula-driven cron agents (lock, memory guard, formula loading, prompt assembly, tmux session, monitor loop, crash recovery). `build_graph_section()` generates the structural-analysis section (runs `lib/build-graph.py`, formats JSON output) — previously duplicated in planner-run.sh and predictor-run.sh, now shared here. `formula_phase_callback()` handles `PHASE:escalate` (unified escalation path — kills the session). `run_formula_and_monitor` accepts an optional CALLBACK (default: `formula_phase_callback`) so callers can install custom merge-through or escalation handlers. | planner-run.sh, predictor-run.sh, gardener-run.sh, supervisor-run.sh, dev-agent.sh, action-agent.sh |
| `lib/guard.sh` | `check_active(agent_name)` — reads `$FACTORY_ROOT/state/.{agent_name}-active`; exits 0 (skip) if the file is absent. Factory is off by default — state files must be created to enable each agent. **Logs a message to stderr** when skipping (`[check_active] SKIP: state file not found`), so agent dropout is visible in cron logs. Sourced by dev-poll.sh, review-poll.sh, action-poll.sh, predictor-run.sh, supervisor-run.sh. | cron entry points |
| `lib/mirrors.sh` | `mirror_push()` — pushes `$PRIMARY_BRANCH` + tags to all configured mirror remotes (fire-and-forget background pushes). Reads `MIRROR_NAMES` and `MIRROR_*` vars exported by `load-project.sh` from the `[mirrors]` TOML section. Failures are logged but never block the pipeline. Sourced by dev-poll.sh and dev/phase-handler.sh — called after every successful merge. | dev-poll.sh, phase-handler.sh |
| `lib/build-graph.py` | Python tool: parses VISION.md, prerequisites.md (from ops repo), AGENTS.md, formulas/*.toml, evidence/ (from ops repo), and forge issues/labels into a NetworkX DiGraph. Runs structural analyses (orphaned objectives, stale prerequisites, thin evidence, circular deps) and outputs a JSON report. Used by `review-pr.sh` (per-PR changed-file analysis) and `predictor-run.sh` (full-project analysis) to provide structural context to Claude. | review-pr.sh, predictor-run.sh |
| `lib/secret-scan.sh` | `scan_for_secrets()` — detects potential secrets (API keys, bearer tokens, private keys, URLs with embedded credentials) in text; returns 1 if secrets found. `redact_secrets()` — replaces detected secret patterns with `[REDACTED]`. | issue-lifecycle.sh |
| `lib/stack-lock.sh` | File-based lock protocol for singleton project stack access. `stack_lock_acquire(holder, project)` — polls until free, breaks stale heartbeats (>10 min old), claims lock. `stack_lock_release(project)` — deletes lock file. `stack_lock_check(project)` — inspect current lock state. `stack_lock_heartbeat(project)` — update heartbeat timestamp (callers must call every 2 min while holding). Lock files at `~/data/locks/<project>-stack.lock`. | docker/edge/dispatcher.sh, reproduce formula |
| `lib/secret-scan.sh` | `scan_for_secrets()` — detects potential secrets (API keys, bearer tokens, private keys, URLs with embedded credentials) in text; returns 1 if secrets found. `redact_secrets()` — replaces detected secret patterns with `[REDACTED]`. | file-action-issue.sh, phase-handler.sh |
| `lib/file-action-issue.sh` | `file_action_issue()` — dedup check, secret scan, label lookup, and issue creation for formula-driven cron wrappers. Sets `FILED_ISSUE_NUM` on success. Returns 4 if secrets detected in body. | (available for future use) |
| `lib/tea-helpers.sh` | `tea_file_issue(title, body, labels...)` — create issue via tea CLI with secret scanning; sets `FILED_ISSUE_NUM`. `tea_relabel(issue_num, labels...)` — replace labels using tea's `edit` subcommand (not `label`). `tea_comment(issue_num, body)` — add comment with secret scanning. `tea_close(issue_num)` — close issue. All use `TEA_LOGIN` and `FORGE_REPO` from env.sh. Labels by name (no ID lookup). Tea binary download verified via sha256 checksum. Sourced by env.sh when `tea` binary is available. | env.sh (conditional) |
| `lib/worktree.sh` | Reusable git worktree management: `worktree_create(path, branch, [base_ref])` — create worktree, checkout base, fetch submodules. `worktree_recover(path, branch, [remote])` — detect existing worktree, reuse if on correct branch (sets `_WORKTREE_REUSED`), otherwise clean and recreate. `worktree_cleanup(path)``git worktree remove --force`, clear Claude Code project cache (`~/.claude/projects/` matching path). `worktree_cleanup_stale([max_age_hours])` — scan `/tmp` for orphaned worktrees older than threshold, skip preserved and active tmux worktrees, prune. `worktree_preserve(path, reason)` — mark worktree as preserved for debugging (writes `.worktree-preserved` marker, skipped by stale cleanup). | dev-agent.sh, supervisor-run.sh, planner-run.sh, predictor-run.sh, gardener-run.sh |
| `lib/pr-lifecycle.sh` | Reusable PR lifecycle library: `pr_create()`, `pr_find_by_branch()`, `pr_poll_ci()`, `pr_poll_review()`, `pr_merge()`, `pr_is_merged()`, `pr_walk_to_merge()`, `build_phase_protocol_prompt()`. Requires `lib/ci-helpers.sh`. | dev-agent.sh (future) |
| `lib/issue-lifecycle.sh` | Reusable issue lifecycle library: `issue_claim()` (add in-progress, remove backlog), `issue_release()` (remove in-progress, add backlog), `issue_block()` (post diagnostic comment with secret redaction, add blocked label), `issue_close()`, `issue_check_deps()` (parse deps, check transitive closure; sets `_ISSUE_BLOCKED_BY`, `_ISSUE_SUGGESTION`), `issue_suggest_next()` (find next unblocked backlog issue; sets `_ISSUE_NEXT`), `issue_post_refusal()` (structured refusal comment with dedup). Label IDs cached in globals on first lookup. Sources `lib/secret-scan.sh`. | dev-agent.sh (future) |
| `lib/action-vault.sh` | **Vault PR helper** — create vault action PRs on ops repo via Forgejo API (works from containers without SSH). `vault_request <action_id> <toml_content>` validates TOML (using `validate_vault_action` from `action-vault/vault-env.sh`), creates branch `vault/<action-id>`, writes `vault/actions/<action-id>.toml`, creates PR targeting `main` with title `vault: <action-id>` and body from context field, returns PR number. Idempotent: if PR exists, returns existing number. **Low-tier bypass**: if the action's `blast_radius` classifies as `low` (via `action-vault/classify.sh`), `vault_request` calls `_vault_commit_direct()` which commits directly to ops `main` using `FORGE_ADMIN_TOKEN` — no PR, no approval wait. Returns `0` (not a PR number) for direct commits. Requires `FORGE_TOKEN`, `FORGE_ADMIN_TOKEN` (low-tier only), `FORGE_URL`, `FORGE_REPO`, `FORGE_OPS_REPO`. Uses the calling agent's own token (saves/restores `FORGE_TOKEN` around sourcing `vault-env.sh`), so approval workflow respects individual agent identities. | dev-agent (vault actions), future vault dispatcher |
| `lib/branch-protection.sh` | Branch protection helpers for Forgejo repos. `setup_vault_branch_protection()` — configures admin-only merge protection on main (require 1 approval, restrict merge to admin role, block direct pushes). `setup_profile_branch_protection()` — same protection for `.profile` repos. `verify_branch_protection()` — checks protection is correctly configured. `remove_branch_protection()` — removes protection (cleanup/testing). Handles race condition after initial push: retries with backoff if Forgejo hasn't processed the branch yet. Requires `FORGE_TOKEN`, `FORGE_URL`, `FORGE_OPS_REPO`. | bin/disinto (hire-an-agent) |
| `lib/agent-sdk.sh` | `agent_run([--resume SESSION_ID] [--worktree DIR] PROMPT)` — one-shot `claude -p` invocation with session persistence. Saves session ID to `SID_FILE`, reads it back on resume. `agent_recover_session()` — restore previous session ID from `SID_FILE` on startup. **Nudge guard**: skips nudge injection if the worktree is clean and no push is expected, preventing spurious re-invocations. Callers must define `SID_FILE`, `LOGFILE`, and `log()` before sourcing. **Concurrency**: external `flock` on `session.lock` is gated behind `CLAUDE_EXTERNAL_LOCK=1` (default off). When unset, each container's per-session `CLAUDE_CONFIG_DIR` isolation lets Claude Code's native lockfile handle OAuth refresh — no external serialization needed. Set `CLAUDE_EXTERNAL_LOCK=1` to re-enable the old flock wrapper as a rollback mechanism. See [`docs/CLAUDE-AUTH-CONCURRENCY.md`](../docs/CLAUDE-AUTH-CONCURRENCY.md) and AD-002 (#647). | formula-driven agents (dev-agent, planner-run, predictor-run, gardener-run) |
| `lib/forge-setup.sh` | `setup_forge()` — Forgejo instance provisioning: creates admin user, bot accounts, org, repos (code + ops), configures webhooks, sets repo topics. Extracted from `bin/disinto`. Requires `FORGE_URL`, `FORGE_TOKEN`, `FACTORY_ROOT`. **Password storage (#361)**: after creating each bot account, stores its password in `.env` as `FORGE_<BOT>_PASS` (e.g. `FORGE_PASS`, `FORGE_REVIEW_PASS`, etc.) for use by `forge-push.sh`. | bin/disinto (init) |
| `lib/forge-push.sh` | `push_to_forge()` — pushes a local clone to the Forgejo remote and verifies the push. `_assert_forge_push_globals()` validates required env vars before use. Requires `FORGE_URL`, `FORGE_PASS`, `FACTORY_ROOT`, `PRIMARY_BRANCH`. **Auth**: uses `FORGE_PASS` (bot password) for git HTTP push — Forgejo 11.x rejects API tokens for `git push` (#361). | bin/disinto (init) |
| `lib/git-creds.sh` | Shared git credential helper configuration. `configure_git_creds([HOME_DIR] [RUN_AS_CMD])` — writes a static credential helper script and configures git globally to use password-based HTTP auth (Forgejo 11.x rejects API tokens for `git push`, #361). **Retry on cold boot (#741)**: resolves bot username from `FORGE_TOKEN` with 5 retries (exponential backoff 1-5s); fails loudly and returns 1 if Forgejo is unreachable — never falls back to a wrong hardcoded default (exports `BOT_USER` on success). `repair_baked_cred_urls([--as RUN_AS_CMD] DIR ...)` — rewrites any git remote URLs that have credentials baked in to use clean URLs instead; uses `safe.directory` bypass for root-owned repos (#671). Requires `FORGE_PASS`, `FORGE_URL`, `FORGE_TOKEN`. | entrypoints (agents, edge) |
| `lib/ops-setup.sh` | `setup_ops_repo()` — creates ops repo on Forgejo if it doesn't exist, configures bot collaborators, clones/initializes ops repo locally, seeds directory structure (vault, knowledge, evidence, sprints). Evidence subdirectories seeded: engagement/, red-team/, holdout/, evolution/, user-test/. Also seeds sprints/ for architect output. Exports `_ACTUAL_OPS_SLUG`. `migrate_ops_repo(ops_root, [primary_branch])` — idempotent migration helper that seeds missing directories and .gitkeep files on existing ops repos (pre-#407 deployments). | bin/disinto (init) |
| `lib/ci-setup.sh` | `_install_cron_impl()` — installs crontab entries for bare-metal deployments (compose mode uses polling loop instead). `_create_forgejo_oauth_app()` — generic helper to create an OAuth2 app on Forgejo (shared by Woodpecker and chat). `_create_woodpecker_oauth_impl()` — creates Woodpecker OAuth2 app (thin wrapper). `_create_chat_oauth_impl()` — creates disinto-chat OAuth2 app, writes `CHAT_OAUTH_CLIENT_ID`/`CHAT_OAUTH_CLIENT_SECRET` to `.env` (#708). `_generate_woodpecker_token_impl()` — auto-generates WOODPECKER_TOKEN via OAuth2 flow. `_activate_woodpecker_repo_impl()` — activates repo in Woodpecker. All gated by `_load_ci_context()` which validates required env vars. | bin/disinto (init) |
| `lib/generators.sh` | Template generation for `disinto init`: `generate_compose()` — docker-compose.yml (uses `codeberg.org/forgejo/forgejo:11.0` tag; adds `security_opt: [apparmor:unconfined]` to all services for rootless container compatibility; Forgejo includes a healthcheck so dependent services use `condition: service_healthy` — fixes cold-start races, #665; adds `chat` service block with isolated `chat-config` named volume and `CHAT_HISTORY_DIR` bind-mount for per-user NDJSON history persistence (#710); injects `FORWARD_AUTH_SECRET` for Caddy↔chat defense-in-depth auth (#709); cost-cap env vars `CHAT_MAX_REQUESTS_PER_HOUR`, `CHAT_MAX_REQUESTS_PER_DAY`, `CHAT_MAX_TOKENS_PER_DAY` (#711); subdomain fallback comment for `EDGE_TUNNEL_FQDN_*` vars (#713); all `depends_on` now use `condition: service_healthy/started` instead of bare service names; all services now include `restart: unless-stopped` including the edge service — #768; agents service now uses `image: ghcr.io/disinto/agents:${DISINTO_IMAGE_TAG:-latest}` instead of `build:` (#429); `WOODPECKER_PLUGINS_PRIVILEGED` env var added to woodpecker service (#779); agents-llama conditional block gated on `ENABLE_LLAMA_AGENT=1` (#769); agents service gains volume mounts for `./projects`, `./.env`, `./state`), `generate_caddyfile()` — Caddyfile (routes: `/forge/*` → forgejo:3000, `/woodpecker/*` → woodpecker:8000, `/staging/*` → staging:80; `/chat/login` and `/chat/oauth/callback` bypass `forward_auth` so unauthenticated users can reach the OAuth flow; `/chat/*` gated by `forward_auth` on `chat:8080/chat/auth/verify` which stamps `X-Forwarded-User` (#709); root `/` redirects to `/forge/`), `generate_staging_index()` — staging index, `generate_deploy_pipelines()` — Woodpecker deployment pipeline configs. Requires `FACTORY_ROOT`, `PROJECT_NAME`, `PRIMARY_BRANCH`. | bin/disinto (init) |
| `lib/sprint-filer.sh` | Post-merge sub-issue filer for sprint PRs. Invoked by the `.woodpecker/ops-filer.yml` pipeline after a sprint PR merges to ops repo `main`. Parses `<!-- filer:begin --> ... <!-- filer:end -->` blocks from sprint PR bodies to extract sub-issue definitions, creates them on the project repo using `FORGE_FILER_TOKEN` (narrow-scope `filer-bot` identity with `issues:write` only), adds `in-progress` label to the parent vision issue, and handles vision lifecycle closure when all sub-issues are closed. Uses `filer_api_all()` for paginated fetches. Idempotent: uses `<!-- decomposed-from: #<vision>, sprint: <slug>, id: <id> -->` markers to skip already-filed issues. Requires `FORGE_FILER_TOKEN`, `FORGE_API`, `FORGE_API_BASE`, `FORGE_OPS_REPO`. | `.woodpecker/ops-filer.yml` (CI pipeline on ops repo) |
| `lib/hire-agent.sh` | `disinto_hire_an_agent()` — user creation, `.profile` repo setup, formula copying, branch protection, and state marker creation for hiring a new agent. Requires `FORGE_URL`, `FORGE_TOKEN`, `FACTORY_ROOT`, `PROJECT_NAME`. Extracted from `bin/disinto`. | bin/disinto (hire) |
| `lib/release.sh` | `disinto_release()` — vault TOML creation, branch setup on ops repo, PR creation, and auto-merge request for a versioned release. `_assert_release_globals()` validates required env vars. Requires `FORGE_URL`, `FORGE_TOKEN`, `FORGE_OPS_REPO`, `FACTORY_ROOT`, `PRIMARY_BRANCH`. Extracted from `bin/disinto`. | bin/disinto (release) |
| `lib/agent-session.sh` | Shared tmux + Claude session helpers: `create_agent_session()`, `inject_formula()`, `agent_wait_for_claude_ready()`, `agent_inject_into_session()`, `agent_kill_session()`, `monitor_phase_loop()`, `read_phase()`, `write_compact_context()`. `create_agent_session(session, workdir, [phase_file])` optionally installs a PostToolUse hook (matcher `Bash\|Write`) that detects phase file writes in real-time — when Claude writes to the phase file, the hook writes a marker so `monitor_phase_loop` reacts on the next poll instead of waiting for mtime changes. Also installs a StopFailure hook (matcher `rate_limit\|server_error\|authentication_failed\|billing_error`) that writes `PHASE:failed` with an `api_error` reason to the phase file and touches the phase-changed marker, so the orchestrator discovers API errors within one poll cycle instead of waiting for idle timeout. Also installs a SessionStart hook (matcher `compact`) that re-injects phase protocol instructions after context compaction — callers write the context file via `write_compact_context(phase_file, content)`, and the hook (`on-compact-reinject.sh`) outputs the file content to stdout so Claude retains critical instructions. When `phase_file` is set, passes it to the idle stop hook (`on-idle-stop.sh`) so the hook can **nudge Claude** (up to 2 times) if Claude returns to the prompt without writing to the phase file — the hook injects a tmux reminder asking Claude to signal PHASE:done or PHASE:awaiting_ci. The PreToolUse guard hook (`on-pretooluse-guard.sh`) receives the session name as a third argument — formula agents (`gardener-*`, `planner-*`, `predictor-*`, `supervisor-*`) are identified this way and allowed to access `FACTORY_ROOT` from worktrees (they need env.sh, AGENTS.md, formulas/, lib/). **OAuth flock**: when `DISINTO_CONTAINER=1`, Claude CLI is wrapped in `flock -w 300 ~/.claude/session.lock` to queue concurrent token refresh attempts and prevent rotation races across agents sharing the same credentials. `monitor_phase_loop` sets `_MONITOR_LOOP_EXIT` to one of: `done`, `idle_timeout`, `idle_prompt` (Claude returned to `>` for 3 consecutive polls without writing any phase — callback invoked with `PHASE:failed`, session already dead), `crashed`, or `PHASE:escalate` / other `PHASE:*` string. **Unified escalation**: `PHASE:escalate` is the signal that a session needs human input (renamed from `PHASE:needs_human`). **Callers must handle `idle_prompt`** in both their callback and their post-loop exit handler — see [`docs/PHASE-PROTOCOL.md` idle_prompt](docs/PHASE-PROTOCOL.md#idle_prompt-exit-reason) for the full contract. | dev-agent.sh, action-agent.sh |
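The inline `depends on #N` / `blocked by #N` matching that `lib/parse-deps.sh` performs can be sketched roughly as follows. This is an illustrative stand-in, not the real script: `extract_deps` is hypothetical, and the real implementation additionally parses `## Dependencies` sections and skips fenced code blocks.

```shell
#!/usr/bin/env bash
# Illustrative sketch of lib/parse-deps.sh's inline pattern matching.
# extract_deps is a hypothetical stand-in, not the real implementation
# (which also handles "## Dependencies" sections and fenced-block skipping).
extract_deps() {
  grep -oiE '(depends on|blocked by) #[0-9]+' | grep -oE '[0-9]+'
}

# prints 12 and 34, one per line
printf 'Fix the widget.\ndepends on #12\nAlso blocked by #34\n' | extract_deps
```

As in the real helper, the output is one issue number per line, ready for `while read`-style consumption by dev-poll.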


@@ -1,312 +0,0 @@
#!/usr/bin/env bash
# action-vault.sh — Helper for agents to create vault PRs on ops repo
#
# Source after lib/env.sh:
# source "$(dirname "$0")/../lib/env.sh"
# source "$(dirname "$0")/lib/action-vault.sh"
#
# Required globals: FORGE_TOKEN, FORGE_URL, FORGE_REPO, FORGE_OPS_REPO
#   FORGE_ADMIN_TOKEN: used for low-tier direct commits (falls back to FORGE_TOKEN)
# Optional: OPS_REPO_ROOT (local path for ops repo)
# Also requires: pr_find_by_branch() and pr_create() from lib/pr-lifecycle.sh
#
# Functions:
# vault_request <action_id> <toml_content> — Create vault PR, return PR number
#
# The function:
# 1. Validates TOML content using validate_vault_action() from action-vault/vault-env.sh
# 2. Creates a branch on the ops repo: vault/<action-id>
# 3. Writes TOML to vault/actions/<action-id>.toml on that branch
# 4. Creates PR targeting main with title "vault: <action-id>"
# 5. Body includes context field from TOML
# 6. Returns PR number (existing or newly created)
#
# Idempotent: if PR for same action-id exists, returns its number
#
# Uses Forgejo REST API (not git push) — works from containers without SSH
set -euo pipefail
# Internal log helper
_vault_log() {
  if declare -f log >/dev/null 2>&1; then
    log "vault: $*"
  else
    printf '[%s] vault: %s\n' "$(date -u '+%Y-%m-%d %H:%M:%S UTC')" "$*" >&2
  fi
}
# Get ops repo API URL
_vault_ops_api() {
  printf '%s' "${FORGE_URL}/api/v1/repos/${FORGE_OPS_REPO}"
}
# -----------------------------------------------------------------------------
# _vault_commit_direct — Commit low-tier action directly to ops main
# Args: ops_api tmp_toml_file action_id
# Uses FORGE_ADMIN_TOKEN to bypass PR workflow
# -----------------------------------------------------------------------------
_vault_commit_direct() {
  local ops_api="$1"
  local tmp_toml="$2"
  local action_id="$3"
  local file_path="vault/actions/${action_id}.toml"

  # Use FORGE_ADMIN_TOKEN for direct commit (vault-bot identity)
  local admin_token="${FORGE_ADMIN_TOKEN:-${FORGE_TOKEN}}"
  if [ -z "$admin_token" ]; then
    echo "ERROR: FORGE_ADMIN_TOKEN is required for low-tier commits" >&2
    return 1
  fi

  # Get main branch SHA
  local main_sha
  main_sha=$(curl -sf -H "Authorization: token ${admin_token}" \
    "${ops_api}/git/branches/${PRIMARY_BRANCH:-main}" 2>/dev/null | \
    jq -r '.commit.id // empty' || true)
  if [ -z "$main_sha" ]; then
    main_sha=$(curl -sf -H "Authorization: token ${admin_token}" \
      "${ops_api}/git/refs/heads/${PRIMARY_BRANCH:-main}" 2>/dev/null | \
      jq -r '.object.sha // empty' || true)
  fi
  if [ -z "$main_sha" ]; then
    echo "ERROR: could not get main branch SHA" >&2
    return 1
  fi

  _vault_log "Committing ${file_path} directly to ${PRIMARY_BRANCH:-main}"

  # Encode TOML content as base64
  local encoded_content
  encoded_content=$(base64 -w 0 < "$tmp_toml")

  # Commit directly to main branch using Forgejo content API
  if ! curl -sf -X PUT \
    -H "Authorization: token ${admin_token}" \
    -H "Content-Type: application/json" \
    "${ops_api}/contents/${file_path}" \
    -d "{\"message\":\"vault: add ${action_id} (low-tier)\",\"branch\":\"${PRIMARY_BRANCH:-main}\",\"content\":\"${encoded_content}\",\"committer\":{\"name\":\"vault-bot\",\"email\":\"vault-bot@${FORGE_REPO}\"},\"overwrite\":true}" >/dev/null 2>&1; then
    echo "ERROR: failed to write ${file_path} to ${PRIMARY_BRANCH:-main}" >&2
    return 1
  fi

  _vault_log "Direct commit successful for ${action_id}"
}
# -----------------------------------------------------------------------------
# vault_request — Create a vault PR or return existing one
# Args: action_id toml_content
# Stdout: PR number
# Returns: 0=success, 1=validation failed, 2=API error
# -----------------------------------------------------------------------------
vault_request() {
  local action_id="$1"
  local toml_content="$2"

  if [ -z "$action_id" ]; then
    echo "ERROR: action_id is required" >&2
    return 1
  fi
  if [ -z "$toml_content" ]; then
    echo "ERROR: toml_content is required" >&2
    return 1
  fi

  # Get admin token for API calls (FORGE_ADMIN_TOKEN for low-tier, FORGE_TOKEN otherwise)
  local admin_token="${FORGE_ADMIN_TOKEN:-${FORGE_TOKEN}}"

  # Check if PR already exists for this action
  local existing_pr
  existing_pr=$(pr_find_by_branch "vault/${action_id}" "$(_vault_ops_api)") || true
  if [ -n "$existing_pr" ]; then
    _vault_log "PR already exists for action $action_id: #${existing_pr}"
    printf '%s' "$existing_pr"
    return 0
  fi
  # Validate TOML content
  local tmp_toml
  tmp_toml=$(mktemp /tmp/vault-XXXXXX.toml)
  trap 'rm -f "$tmp_toml"' RETURN
  printf '%s' "$toml_content" > "$tmp_toml"

  # Source vault-env.sh for validate_vault_action
  local vault_env="${FACTORY_ROOT:-$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)}/action-vault/vault-env.sh"
  if [ ! -f "$vault_env" ]; then
    echo "ERROR: vault-env.sh not found at $vault_env" >&2
    return 1
  fi

  # Save caller's FORGE_TOKEN, source vault-env.sh for validate_vault_action,
  # then restore caller's token so PR creation uses agent's identity (not vault-bot)
  local _saved_forge_token="${FORGE_TOKEN:-}"
  if ! source "$vault_env"; then
    FORGE_TOKEN="${_saved_forge_token:-}"
    echo "ERROR: failed to source vault-env.sh" >&2
    return 1
  fi
  # Restore caller's FORGE_TOKEN after validation
  FORGE_TOKEN="${_saved_forge_token:-}"

  # Run validation
  if ! validate_vault_action "$tmp_toml"; then
    echo "ERROR: TOML validation failed" >&2
    return 1
  fi

  # Get ops repo API URL
  local ops_api
  ops_api="$(_vault_ops_api)"

  # Classify the action to determine if PR bypass is allowed
  local classify_script="${FACTORY_ROOT:-$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)}/action-vault/classify.sh"
  local vault_tier
  vault_tier=$("$classify_script" "${VAULT_ACTION_FORMULA:-}" "${VAULT_BLAST_RADIUS_OVERRIDE:-}") || {
    # Classification failed; default to high tier (require PR)
    vault_tier="high"
    _vault_log "Warning: classification failed, defaulting to high tier"
  }
  export VAULT_TIER="${vault_tier}"

  # For low-tier actions, commit directly to ops main using FORGE_ADMIN_TOKEN
  if [ "$vault_tier" = "low" ]; then
    _vault_log "low-tier: committing directly to ops main"
    # Add dispatch_mode field to indicate direct commit (no PR)
    local direct_toml
    direct_toml=$(mktemp /tmp/vault-direct-XXXXXX.toml)
    trap 'rm -f "$tmp_toml" "$direct_toml"' RETURN
    # Prepend dispatch_mode = "direct" to the TOML
    printf 'dispatch_mode = "direct"\n%s\n' "$toml_content" > "$direct_toml"
    # Propagate direct-commit failure as an API error (return code 2)
    _vault_commit_direct "$ops_api" "$direct_toml" "${action_id}" || return 2
    return 0
  fi
  # Extract values for PR creation (medium/high tier)
  local pr_title pr_body
  pr_title="vault: ${action_id}"
  pr_body="Vault action: ${action_id}

Context: ${VAULT_ACTION_CONTEXT:-No context provided}
Formula: ${VAULT_ACTION_FORMULA:-}
Secrets: ${VAULT_ACTION_SECRETS:-}

---

This vault action has been created by an agent and requires admin approval
before execution. See the TOML file for details."

  # Create branch
  local branch="vault/${action_id}"
  local branch_exists
  branch_exists=$(curl -s -o /dev/null -w "%{http_code}" \
    -H "Authorization: token ${admin_token}" \
    "${ops_api}/git/branches/${branch}" 2>/dev/null || echo "0")

  if [ "$branch_exists" != "200" ]; then
    # Branch doesn't exist; create it from main
    _vault_log "Creating branch ${branch} on ops repo"

    # Get the commit SHA of main branch
    local main_sha
    main_sha=$(curl -sf -H "Authorization: token ${admin_token}" \
      "${ops_api}/git/branches/${PRIMARY_BRANCH:-main}" 2>/dev/null | \
      jq -r '.commit.id // empty' || true)
    if [ -z "$main_sha" ]; then
      # Fallback: get from refs
      main_sha=$(curl -sf -H "Authorization: token ${admin_token}" \
        "${ops_api}/git/refs/heads/${PRIMARY_BRANCH:-main}" 2>/dev/null | \
        jq -r '.object.sha // empty' || true)
    fi
    if [ -z "$main_sha" ]; then
      echo "ERROR: could not get main branch SHA" >&2
      return 1
    fi

    # Create the branch
    if ! curl -sf -X POST \
      -H "Authorization: token ${admin_token}" \
      -H "Content-Type: application/json" \
      "${ops_api}/git/branches" \
      -d "{\"ref\":\"${branch}\",\"sha\":\"${main_sha}\"}" >/dev/null 2>&1; then
      echo "ERROR: failed to create branch ${branch}" >&2
      return 1
    fi
  else
    _vault_log "Branch ${branch} already exists"
  fi
  # Write TOML file to branch via API
  local file_path="vault/actions/${action_id}.toml"
  _vault_log "Writing ${file_path} to branch ${branch}"

  # Encode TOML content as base64
  local encoded_content
  encoded_content=$(printf '%s' "$toml_content" | base64 -w 0)

  # Upload file using Forgejo content API
  if ! curl -sf -X PUT \
    -H "Authorization: token ${admin_token}" \
    -H "Content-Type: application/json" \
    "${ops_api}/contents/${file_path}" \
    -d "{\"message\":\"vault: add ${action_id}\",\"branch\":\"${branch}\",\"content\":\"${encoded_content}\",\"committer\":{\"name\":\"vault-bot\",\"email\":\"vault-bot@${FORGE_REPO}\"},\"overwrite\":true}" >/dev/null 2>&1; then
    echo "ERROR: failed to write ${file_path} to branch ${branch}" >&2
    return 1
  fi

  # Create PR
  _vault_log "Creating PR for ${branch}"
  local pr_num
  pr_num=$(pr_create "$branch" "$pr_title" "$pr_body" "$PRIMARY_BRANCH" "$ops_api") || {
    echo "ERROR: failed to create PR" >&2
    return 1
  }

  # Enable auto-merge on the PR — Forgejo will auto-merge after approval
  _vault_log "Enabling auto-merge for PR #${pr_num}"
  curl -sf -X POST \
    -H "Authorization: token ${admin_token}" \
    -H "Content-Type: application/json" \
    "${ops_api}/pulls/${pr_num}/merge" \
    -d '{"Do":"merge","merge_when_checks_succeed":true}' >/dev/null 2>&1 || {
    _vault_log "Warning: failed to enable auto-merge (may already be enabled or not supported)"
  }

  # Add labels to PR (vault, pending-approval)
  _vault_log "PR #${pr_num} created, adding labels"

  # Get label IDs
  local vault_label_id pending_label_id
  vault_label_id=$(curl -sf -H "Authorization: token ${admin_token}" \
    "${ops_api}/labels" 2>/dev/null | \
    jq -r --arg n "vault" '.[] | select(.name == $n) | .id // empty' || true)
  pending_label_id=$(curl -sf -H "Authorization: token ${admin_token}" \
    "${ops_api}/labels" 2>/dev/null | \
    jq -r --arg n "pending-approval" '.[] | select(.name == $n) | .id // empty' || true)

  # Add labels if they exist
  if [ -n "$vault_label_id" ]; then
    curl -sf -X POST \
      -H "Authorization: token ${admin_token}" \
      -H "Content-Type: application/json" \
      "${ops_api}/issues/${pr_num}/labels" \
      -d "[{\"id\":${vault_label_id}}]" >/dev/null 2>&1 || true
  fi
  if [ -n "$pending_label_id" ]; then
    curl -sf -X POST \
      -H "Authorization: token ${admin_token}" \
      -H "Content-Type: application/json" \
      "${ops_api}/issues/${pr_num}/labels" \
      -d "[{\"id\":${pending_label_id}}]" >/dev/null 2>&1 || true
  fi

  printf '%s' "$pr_num"
  return 0
}
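Both commit paths above depend on a lossless base64 round-trip of the TOML body through the Forgejo contents API. A minimal check of that encode/decode contract, assuming GNU coreutils `base64` (the same `-w 0` no-wrap flag the script uses):

```shell
#!/usr/bin/env bash
# Sketch of the base64 round-trip vault_request relies on when sending
# TOML through the Forgejo contents API. Assumes GNU coreutils base64;
# -w 0 disables line wrapping, matching the script. The TOML fields here
# are illustrative only.
toml='action_id = "demo"
context = "illustrative payload"'

encoded=$(printf '%s' "$toml" | base64 -w 0)
decoded=$(printf '%s' "$encoded" | base64 -d)

[ "$decoded" = "$toml" ] && echo "roundtrip ok"
```

Using `printf '%s'` rather than `echo` on both sides matters: it avoids a trailing newline being silently appended to the stored TOML.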

Some files were not shown because too many files have changed in this diff.