diff --git a/AGENTS.md b/AGENTS.md index db61b8e..dba187e 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -1,4 +1,4 @@ - + # Disinto — Agent Instructions ## What this repo is diff --git a/action/AGENTS.md b/action/AGENTS.md index b0b3629..15c8438 100644 --- a/action/AGENTS.md +++ b/action/AGENTS.md @@ -1,4 +1,4 @@ - + # Action Agent **Role**: Execute operational tasks described by action formulas — run scripts, diff --git a/dev/AGENTS.md b/dev/AGENTS.md index cf37fd8..3827840 100644 --- a/dev/AGENTS.md +++ b/dev/AGENTS.md @@ -1,4 +1,4 @@ - + # Dev Agent **Role**: Implement issues autonomously — write code, push branches, address diff --git a/gardener/AGENTS.md b/gardener/AGENTS.md index f179c1a..413c576 100644 --- a/gardener/AGENTS.md +++ b/gardener/AGENTS.md @@ -1,4 +1,4 @@ - + # Gardener Agent **Role**: Backlog grooming — detect duplicate issues, missing acceptance diff --git a/gardener/pending-actions.json b/gardener/pending-actions.json index 87ec531..fe51488 100644 --- a/gardener/pending-actions.json +++ b/gardener/pending-actions.json @@ -1,37 +1 @@ -[ - { - "action": "edit_body", - "issue": 619, - "body": "Depends on: *Containerize full stack with docker-compose*\r\n\r\n## Problem\r\n\r\nDendrite currently runs on the bare host as a systemd service (`dendrite.service`), manually installed and configured. The `matrix_listener.sh` daemon also runs on the host via its own systemd unit (`matrix_listener.service`), hardcoded to `/home/admin/disinto`. 
This is the last piece of the stack that isn't containerized — Forgejo, Woodpecker, and the agents are inside compose, but the Matrix homeserver sits outside.\r\n\r\nThe result: `disinto init` can't provision Matrix automatically, the listener systemd unit has a hardcoded user and path, and if someone sets up disinto fresh on a new VPS they need to install Dendrite manually, create users, create rooms, and wire everything together before notifications and human-in-the-loop escalation work.\r\n\r\n## Solution\r\n\r\nAdd Dendrite as a fourth service in `docker-compose.yml`. Provision the bot user, coordination room, and access token during `disinto init`. Move the `matrix_listener.sh` daemon into the agent container's entrypoint alongside cron.\r\n\r\n## Scope\r\n\r\n### 1. Dendrite service in docker-compose.yml\r\n\r\n```yaml\r\n dendrite:\r\n image: matrixdotorg/dendrite-monolith:latest\r\n restart: unless-stopped\r\n volumes:\r\n - dendrite-data:/etc/dendrite\r\n environment:\r\n DENDRITE_DOMAIN: disinto.local\r\n networks:\r\n - disinto-net\r\n```\r\n\r\nNo host ports exposed — the agents talk to Dendrite over the internal Docker network at `http://dendrite:8008`. There's no need for federation or external Matrix clients unless the user explicitly wants to connect their own Matrix client (e.g. Element), in which case they can add a port mapping themselves.\r\n\r\n### 2. 
Provisioning in `disinto init`\r\n\r\nAfter Dendrite is healthy, `disinto init` creates:\r\n\r\n- A server signing key (Dendrite generates this on first start if missing)\r\n- A bot user via Dendrite's admin API (`POST /_dendrite/admin/createOrModifyAccount` or `create-account` CLI tool via `docker compose exec dendrite`)\r\n- A coordination room via the Matrix client-server API (`POST /_matrix/client/v3/createRoom`)\r\n- An access token for the bot (via login: `POST /_matrix/client/v3/login`)\r\n\r\nStore the resulting `MATRIX_TOKEN`, `MATRIX_ROOM_ID`, and `MATRIX_BOT_USER` in `.env` (or `.env.enc` if SOPS is available). Set `MATRIX_HOMESERVER=http://dendrite:8008` — this URL only needs to resolve inside the Docker network.\r\n\r\nFor the interactive case: after creating the room, print the room alias or ID so the user can join from their own Matrix client (Element, etc.) to receive notifications and reply to escalations. If they don't have a Matrix client, the factory still works — escalations just go unanswered until they check manually.\r\n\r\n### 3. Move `matrix_listener.sh` into agent container\r\n\r\nThe listener is currently a systemd service on the host. In the compose setup it runs inside the agent container as a background process alongside cron. Update `docker/agents/entrypoint.sh`:\r\n\r\n```bash\r\n#!/bin/bash\r\n# Start matrix listener in background (if configured)\r\nif [ -n \"${MATRIX_TOKEN:-}\" ] && [ -n \"${MATRIX_ROOM_ID:-}\" ]; then\r\n /home/agent/disinto/lib/matrix_listener.sh &\r\nfi\r\n\r\n# Start cron in foreground\r\nexec cron -f\r\n```\r\n\r\nRemove the pidfile guard in `matrix_listener.sh` (lines 24–31) or make it work with the container lifecycle — inside a container the PID file from a previous run doesn't exist. The trap on EXIT already cleans up.\r\n\r\n### 4. Remove `matrix_listener.service`\r\n\r\nThe systemd unit file at `lib/matrix_listener.service` becomes dead code once the listener runs inside the agent container. 
Keep it for bare-metal deployments (`disinto init --bare`) but document it as the legacy path.\r\n\r\n### 5. Update `MATRIX_HOMESERVER` default\r\n\r\nIn `.env.example`, change the default from `http://localhost:8008` to `http://dendrite:8008`. In `lib/env.sh`, the default should detect the environment:\r\n\r\n- Inside a container (compose): `MATRIX_HOMESERVER` defaults to `http://dendrite:8008`\r\n- On bare metal: defaults to `http://localhost:8008`\r\n\r\nThis can use the same container detection from the compose issue (e.g. checking for `/.dockerenv` or a `DISINTO_COMPOSE=1` env var set in the compose file).\r\n\r\n### 6. Per-project Matrix rooms (optional enhancement)\r\n\r\nThe current setup uses one room for all projects, with project-specific thread maps for routing. This works fine inside compose — no change needed. But `disinto init` for a new project could optionally create a per-project room and store it in the project TOML under `[matrix] room_id`. The listener already dispatches by project name via the thread map, so per-project rooms would just reduce noise.\r\n\r\nThis is optional — single-room works, document multi-room as a possible configuration.\r\n\r\n## Affected files\r\n\r\n- `docker-compose.yml` (generated) — add `dendrite` service and `dendrite-data` volume\r\n- `docker/agents/entrypoint.sh` — start `matrix_listener.sh` as background process\r\n- `bin/disinto` — Dendrite provisioning (bot user, room, token) during init\r\n- `.env.example` — update `MATRIX_HOMESERVER` default, document compose vs bare-metal\r\n- `lib/matrix_listener.sh` — make pidfile guard container-friendly (no stale PID from previous container)\r\n- `lib/matrix_listener.service` — keep for `--bare` mode, document as legacy\r\n\r\n## Not in scope\r\n\r\n- Matrix federation with external homeservers\r\n- End-to-end encryption for the coordination room (Dendrite supports it, but agent bots don't need it for an internal channel)\r\n- Element or other Matrix client setup 
(user's responsibility)\r\n- Replacing Matrix with a different notification system\r\n- Migrating existing Matrix room history into the containerized Dendrite\r\n\r\n## Acceptance criteria\r\n\r\n- `disinto init` provisions Dendrite, creates a bot user and coordination room, and stores credentials in `.env`\r\n- Agent notifications (`matrix_send`) work via `http://dendrite:8008` inside the Docker network\r\n- `matrix_listener.sh` runs inside the agent container and dispatches escalation replies to agent sessions\r\n- Dendrite is only reachable from within `disinto-net` — no host ports exposed by default\r\n- Users can join the coordination room from an external Matrix client by adding a port mapping to compose and joining via room alias\r\n- The factory works end-to-end without Matrix configured (all `matrix_send` calls already guard on `[ -z \"${MATRIX_TOKEN:-}\" ]`)\r\n" - }, - { - "action": "add_label", - "issue": 619, - "label": "backlog" - }, - { - "action": "edit_body", - "issue": 614, - "body": "Depends on: *Replace Codeberg dependency with local Forgejo instance*\r\n\r\n## Problem\r\n\r\nWith agents operating against a local Forgejo instance, the code is no longer visible on any public forge. For adoption (stars, forks, contributors — goals listed in VISION.md) the repo needs a public presence. Codeberg and GitHub serve different audiences: Codeberg for the FOSS community, GitHub for wider reach.\r\n\r\n## Solution\r\n\r\nAfter every successful merge to the primary branch, push to configured mirror remotes. Mirrors are read-only — agents never read from them. Pushes are fire-and-forget: failures are logged but never block the pipeline.\r\n\r\n## Scope\r\n\r\n### 1. 
Project TOML configuration\r\n\r\nAdd a `[mirrors]` section:\r\n\r\n```toml\r\nname = \"harb\"\r\nrepo = \"johba/harb\"\r\nforge_url = \"http://localhost:3000\"\r\nprimary_branch = \"master\"\r\n\r\n[mirrors]\r\ngithub = \"git@github.com:johba/harb.git\"\r\ncodeberg = \"git@codeberg.org:johba/harb.git\"\r\n```\r\n\r\nValues are push URLs, not slugs — this keeps it explicit and avoids guessing SSH vs HTTPS. Any number of mirrors can be listed; the key (github, codeberg, etc.) is just a human-readable name used in log messages.\r\n\r\n### 2. `lib/load-project.sh` — parse mirrors\r\n\r\nExtend the TOML parser to export mirror URLs. Add to the Python block:\r\n\r\n```python\r\nmirrors = cfg.get('mirrors', {})\r\nfor name, url in mirrors.items():\r\n emit(f'MIRROR_{name.upper()}', url)\r\n```\r\n\r\nThis exports `MIRROR_GITHUB`, `MIRROR_CODEBERG`, etc. as env vars. Also emit a space-separated list for iteration (joined explicitly, so the shell can `for`-loop over it):\r\n\r\n```python\r\nif mirrors:\r\n emit('MIRROR_NAMES', ' '.join(mirrors.keys()))\r\n emit('MIRROR_URLS', ' '.join(mirrors.values()))\r\n```\r\n\r\n### 3. 
`lib/mirrors.sh` — shared push helper\r\n\r\nNew file:\r\n\r\n```bash\r\n#!/usr/bin/env bash\r\n# mirrors.sh — Push primary branch + tags to configured mirror remotes.\r\n#\r\n# Usage: source lib/mirrors.sh; mirror_push\r\n# Requires: PROJECT_REPO_ROOT, PRIMARY_BRANCH, MIRROR_* vars from load-project.sh\r\n\r\nmirror_push() {\r\n [ -z \"${MIRROR_NAMES:-}\" ] && return 0\r\n\r\n local name url\r\n for name in $MIRROR_NAMES; do\r\n url=$(eval \"echo \\$MIRROR_$(echo \"$name\" | tr '[:lower:]' '[:upper:]')\")\r\n [ -z \"$url\" ] && continue\r\n\r\n # Ensure remote exists\r\n git -C \"$PROJECT_REPO_ROOT\" remote get-url \"$name\" &>/dev/null \\\r\n || git -C \"$PROJECT_REPO_ROOT\" remote add \"$name\" \"$url\"\r\n\r\n # Fire-and-forget push (background, no failure propagation)\r\n git -C \"$PROJECT_REPO_ROOT\" push \"$name\" \"$PRIMARY_BRANCH\" --tags 2>/dev/null &\r\n log \"mirror: push started for ${name} (pid $!)\"\r\n done\r\n}\r\n```\r\n\r\nBackground pushes so the agent doesn't block on slow upstreams. SSH keys for GitHub/Codeberg are the user's responsibility (existing SSH agent or deploy keys).\r\n\r\n### 4. Call `mirror_push()` at the three merge sites\r\n\r\nThere are three places where PRs get merged:\r\n\r\n**`dev/phase-handler.sh` — `do_merge()`** (line ~193): the main dev-agent merge path. After the successful merge (HTTP 200/204 block), pull the merged primary branch locally and call `mirror_push()`.\r\n\r\n**`dev/dev-poll.sh` — `try_direct_merge()`** (line ~189): fast-path merge for approved + CI-green PRs that don't need a Claude session. Same insertion point after the success check.\r\n\r\n**`gardener/gardener-run.sh` — `_gardener_merge()`** (line ~293): gardener PR merge. 
Same pattern.\r\n\r\nAt each site, after the forge API confirms the merge:\r\n\r\n```bash\r\n# Pull merged primary branch and push to mirrors\r\ngit -C \"$REPO_ROOT\" fetch origin \"$PRIMARY_BRANCH\" 2>/dev/null || true\r\ngit -C \"$REPO_ROOT\" checkout \"$PRIMARY_BRANCH\" 2>/dev/null || true\r\ngit -C \"$REPO_ROOT\" pull --ff-only origin \"$PRIMARY_BRANCH\" 2>/dev/null || true\r\nmirror_push\r\n```\r\n\r\nThe fetch/pull is necessary because the merge happened via the forge API, not locally — the local clone needs to pick up the merge commit before it can push to mirrors.\r\n\r\n### 5. `disinto init` — set up mirror remotes\r\n\r\nIf `[mirrors]` is present in the TOML, add the remotes to the local clone during init:\r\n\r\n```bash\r\nfor name in $MIRROR_NAMES; do\r\n url=$(eval \"echo \\$MIRROR_$(echo \"$name\" | tr '[:lower:]' '[:upper:]')\")\r\n git -C \"$repo_root\" remote add \"$name\" \"$url\" 2>/dev/null || true\r\ndone\r\n```\r\n\r\nAlso do an initial push to sync the mirrors with the current state of the primary branch.\r\n\r\n### 6. Matrix notification\r\n\r\nOn successful mirror push, include the mirror info in the existing merge notification. 
On failure (if the background job exits non-zero), log a warning but don't escalate — mirror failures are cosmetic.\r\n\r\n## Affected files\r\n\r\n- `lib/mirrors.sh` — new file, shared `mirror_push()` helper\r\n- `lib/load-project.sh` — parse `[mirrors]` section from TOML\r\n- `dev/phase-handler.sh` — call `mirror_push()` after `do_merge()` success\r\n- `dev/dev-poll.sh` — call `mirror_push()` after `try_direct_merge()` success\r\n- `gardener/gardener-run.sh` — call `mirror_push()` after `_gardener_merge()` success\r\n- `bin/disinto` — add mirror remotes during init\r\n- `projects/*.toml.example` — show `[mirrors]` section\r\n\r\n## Not in scope\r\n\r\n- Two-way sync (pulling GitHub Issues or PRs into local Forgejo)\r\n- Mirror webhooks or status badges\r\n- Mirroring branches other than the primary branch\r\n- HTTPS push with token auth (SSH only for mirrors)\r\n- Automatic deploy key generation on GitHub/Codeberg\r\n\r\n## Acceptance criteria\r\n\r\n- Merges to the primary branch are pushed to all configured mirrors within seconds\r\n- Mirror push failures are logged but never block the dev/review/gardener pipeline\r\n- `disinto init` sets up git remotes for configured mirrors\r\n- Projects with no `[mirrors]` section work exactly as before (no-op)\r\n- Mirror remotes are push-only — no agent ever reads from them\r" - }, - { - "action": "add_label", - "issue": 614, - "label": "backlog" - }, - { - "action": "edit_body", - "issue": 613, - "body": "Depends on: *Replace Codeberg dependency with local Forgejo instance*\r\n\r\n## Problem\r\n\r\nAll authentication tokens and database passwords sit in plaintext on disk in `.env` and `~/.netrc`. 
On a VPS this means anyone with disk access (compromised account, provider snapshot, backup leak) gets full control over the forge, CI, Matrix bot, and database.\r\n\r\nThe secrets in question:\r\n\r\n- `FORGE_TOKEN` (née `CODEBERG_TOKEN`) — full forge API access: create/delete issues, merge PRs, push code\r\n- `FORGE_REVIEW_TOKEN` (née `REVIEW_BOT_TOKEN`) — same, for the review bot account\r\n- `WOODPECKER_TOKEN` — trigger pipelines, read logs, retry builds\r\n- `WOODPECKER_DB_PASSWORD` — direct Postgres access to all CI state\r\n- `MATRIX_TOKEN` — send messages as the bot, read room history\r\n- Any project-specific secrets (e.g. `BASE_RPC_URL` for on-chain operations)\r\n\r\nMeanwhile the non-secret config (repo slugs, paths, branch names, server URLs, repo IDs) is harmless if leaked and already lives in plaintext in `projects/*.toml` where it belongs.\r\n\r\n## Solution\r\n\r\nUse SOPS with age encryption. Secrets go into `.env.enc` (encrypted, safe to commit). The age private key at `~/.config/sops/age/keys.txt` is the single file that must be protected — LUKS disk encryption on the VPS handles that layer.\r\n\r\n## Scope\r\n\r\n### 1. Secret loading in `lib/env.sh`\r\n\r\nReplace the `.env` source block (lines 10–16) with a two-tier loader:\r\n\r\n```bash\r\nif [ -f \"$FACTORY_ROOT/.env.enc\" ] && command -v sops &>/dev/null; then\r\n set -a\r\n eval \"$(sops -d --input-type dotenv --output-type dotenv \"$FACTORY_ROOT/.env.enc\" 2>/dev/null)\"\r\n set +a\r\nelif [ -f \"$FACTORY_ROOT/.env\" ]; then\r\n set -a\r\n source \"$FACTORY_ROOT/.env\"\r\n set +a\r\nfi\r\n```\r\n\r\nIf `.env.enc` exists and `sops` is available, decrypt and load. Otherwise fall back to plaintext `.env`. Existing deployments keep working unchanged. The explicit `--input-type dotenv` is needed because sops can't infer the dotenv store from the `.enc` extension.\r\n\r\n### 2. 
`disinto init` generates encrypted secrets\r\n\r\nAfter the Forgejo provisioning step generates tokens (from the Forgejo issue), store them encrypted instead of plaintext:\r\n\r\n- Check for `age-keygen` and `sops` in PATH\r\n- If no age key exists at `~/.config/sops/age/keys.txt`, generate one: `mkdir -p ~/.config/sops/age && age-keygen -o ~/.config/sops/age/keys.txt 2>/dev/null` (the `mkdir -p` matters — `age-keygen` won't create the parent directory)\r\n- Extract the public key: `age-keygen -y ~/.config/sops/age/keys.txt`\r\n- Create a `.sops.yaml` in the factory root that pins the age recipient:\r\n\r\n```yaml\r\ncreation_rules:\r\n - path_regex: \\.env\\.enc$\r\n age: \"age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\r\n```\r\n\r\n- Write secrets to a temp file and encrypt: `sops -e --input-type dotenv --output-type dotenv .env.tmp > .env.enc`\r\n- Remove the temp file\r\n- If `sops` is not available, fall back to writing plaintext `.env` with a warning\r\n\r\n### 3. Remove `~/.netrc` token storage\r\n\r\n`bin/disinto` currently writes forge tokens to `~/.netrc` (the `write_netrc()` function, lines 61–81). With local Forgejo the tokens are generated programmatically and go straight into `.env.enc`. The `~/.netrc` codepath and the fallback read in `lib/env.sh` line 29 should be removed. Git credential access to the local Forgejo can use the token in the URL or a git credential helper instead.\r\n\r\n### 4. Preflight and documentation\r\n\r\n- Add `sops` and `age` to the optional-tools check in `preflight_check()` — warn if missing, don't hard-fail (plaintext `.env` still works)\r\n- Update `.env.example` to document which vars are secrets vs. config\r\n- Update `.gitignore`: add `.env.enc` as safe-to-commit (remove from ignore), keep `.env` ignored\r\n- Update `BOOTSTRAP.md` with the age key setup and SOPS workflow\r\n\r\n### 5. 
Secret rotation helper\r\n\r\nAdd a `disinto secrets` subcommand:\r\n\r\n- `disinto secrets edit` — runs `sops .env.enc` (opens in `$EDITOR`, re-encrypts on save)\r\n- `disinto secrets show` — runs `sops -d .env.enc` (prints decrypted, for debugging)\r\n- `disinto secrets migrate` — reads existing plaintext `.env`, encrypts to `.env.enc`, removes `.env`\r\n\r\n## What counts as a secret\r\n\r\nThe dividing line: if the value is leaked, do you need to rotate it?\r\n\r\n**Secret (goes in `.env.enc`):** `FORGE_TOKEN`, `FORGE_REVIEW_TOKEN`, `WOODPECKER_TOKEN`, `WOODPECKER_DB_PASSWORD`, `MATRIX_TOKEN`, `BASE_RPC_URL`, any future API keys or credentials.\r\n\r\n**Not secret (stays in plaintext `projects/*.toml` or `.env`):** `FORGE_REPO`, `PROJECT_NAME`, `PRIMARY_BRANCH`, `WOODPECKER_REPO_ID`, `WOODPECKER_SERVER`, `WOODPECKER_DB_USER`, `WOODPECKER_DB_HOST`, `WOODPECKER_DB_NAME`, `MATRIX_HOMESERVER`, `MATRIX_ROOM_ID`, `MATRIX_BOT_USER`, `CLAUDE_TIMEOUT`, `PROJECT_REPO_ROOT`, `FORGE_URL`.\r\n\r\n## Affected files\r\n\r\n- `lib/env.sh` — SOPS decryption block, remove `~/.netrc` fallback\r\n- `bin/disinto` — age key generation, SOPS encryption during init, remove `write_netrc()`, add `secrets` subcommand\r\n- `.env.example` — annotate secret vs. 
config vars\r\n- `.gitignore` — `.env.enc` safe to commit, `.env` stays ignored\r\n- `.sops.yaml` — generated by init, committed to repo\r\n- `BOOTSTRAP.md` — document SOPS + age setup, key backup, rotation\r\n\r\n## Not in scope\r\n\r\n- LUKS disk encryption setup (host-level concern, not a disinto issue)\r\n- HSM or TPM-backed age keys\r\n- Per-project separate encryption keys (all projects share one age key for now)\r\n- Encrypting `projects/*.toml` files (they contain no secrets)\r\n\r\n## Acceptance criteria\r\n\r\n- `disinto init` generates an age key if missing and encrypts secrets into `.env.enc`\r\n- All agents load secrets from `.env.enc` transparently via `lib/env.sh`\r\n- No plaintext secrets on disk when SOPS + age are available\r\n- Existing deployments with plaintext `.env` and no SOPS installed continue to work\r\n- `disinto secrets edit` opens the encrypted file in `$EDITOR` for manual changes\r\n- `disinto secrets migrate` converts an existing `.env` to `.env.enc`\r" - }, - { - "action": "add_label", - "issue": 613, - "label": "backlog" - }, - { - "action": "add_label", - "issue": 466, - "label": "backlog" - } -] +[] diff --git a/lib/AGENTS.md b/lib/AGENTS.md index 6eb9367..60820a4 100644 --- a/lib/AGENTS.md +++ b/lib/AGENTS.md @@ -1,4 +1,4 @@ - + # Shared Helpers (`lib/`) All agents source `lib/env.sh` as their first action. Additional helpers are @@ -6,12 +6,12 @@ sourced as needed. | File | What it provides | Sourced by | |---|---|---| -| `lib/env.sh` | Loads `.env`, sets `FACTORY_ROOT`, exports project config (`FORGE_REPO`, `PROJECT_NAME`, etc.), defines `log()`, `forge_api()`, `forge_api_all()` (accepts optional second TOKEN parameter, defaults to `$FORGE_TOKEN`), `woodpecker_api()`, `wpdb()`, `matrix_send()`, `matrix_send_ctx()`. Auto-loads project TOML if `PROJECT_TOML` is set. 
| Every agent | +| `lib/env.sh` | Loads `.env`, sets `FACTORY_ROOT`, exports project config (`FORGE_REPO`, `PROJECT_NAME`, etc.), defines `log()`, `forge_api()`, `forge_api_all()` (accepts optional second TOKEN parameter, defaults to `$FORGE_TOKEN`), `woodpecker_api()`, `wpdb()`, `matrix_send()`, `matrix_send_ctx()`. Auto-loads project TOML if `PROJECT_TOML` is set. Auto-detects `MATRIX_HOMESERVER`: defaults to `http://dendrite:8008` inside a container (`DISINTO_CONTAINER=1`) or `http://localhost:8008` on bare metal; can be overridden via `.env`. | Every agent | | `lib/ci-helpers.sh` | `ci_passed()` — returns 0 if CI state is "success" (or no CI configured). `ci_required_for_pr()` — returns 0 if PR has code files (CI required), 1 if non-code only (CI not required). `is_infra_step()` — returns 0 if a single CI step failure matches infra heuristics (clone/git exit 128, any exit 137, log timeout patterns). `classify_pipeline_failure()` — returns "infra \" if any failed Woodpecker step matches infra heuristics via `is_infra_step()`, else "code". `ensure_priority_label()` — looks up (or creates) the `priority` label and returns its ID; caches in `_PRIORITY_LABEL_ID`. `ci_commit_status ` — queries Woodpecker directly for CI state, falls back to forge commit status API. `ci_pipeline_number ` — returns the Woodpecker pipeline number for a commit, falls back to parsing forge status `target_url`. | dev-poll, review-poll, review-pr, supervisor-poll | | `lib/ci-debug.sh` | CLI tool for Woodpecker CI: `list`, `status`, `logs`, `failures` subcommands. Not sourced — run directly. | Humans / dev-agent (tool access) | | `lib/load-project.sh` | Parses a `projects/*.toml` file into env vars (`PROJECT_NAME`, `FORGE_REPO`, `WOODPECKER_REPO_ID`, monitoring toggles, Matrix config, etc.). | env.sh (when `PROJECT_TOML` is set), supervisor-poll (per-project iteration) | | `lib/parse-deps.sh` | Extracts dependency issue numbers from an issue body (stdin → stdout, one number per line). 
Matches `## Dependencies` / `## Depends on` / `## Blocked by` sections and inline `depends on #N` / `blocked by #N` patterns. Inline scan skips fenced code blocks to prevent false positives from code examples in issue bodies. Not sourced — executed via `bash lib/parse-deps.sh`. | dev-poll, supervisor-poll | -| `lib/matrix_listener.sh` | Long-poll Matrix sync daemon. Dispatches thread replies to the correct agent via tmux session injection (dev, action, vault, review) or well-known files (`/tmp/{agent}-escalation-reply` for supervisor/gardener). Handles all agent reply routing. Run as systemd service. | Standalone daemon | +| `lib/matrix_listener.sh` | Long-poll Matrix sync daemon. Dispatches thread replies to the correct agent via tmux session injection (dev, action, vault, review) or well-known files (`/tmp/{agent}-escalation-reply` for supervisor/gardener). Handles all agent reply routing. In compose mode, started as a background process by `docker/agents/entrypoint.sh`; on bare metal, run as systemd service (see `matrix_listener.service`). | Standalone daemon | | `lib/formula-session.sh` | `acquire_cron_lock()`, `check_memory()`, `load_formula()`, `build_context_block()`, `consume_escalation_reply()`, `start_formula_session()`, `formula_phase_callback()`, `build_prompt_footer()`, `run_formula_and_monitor(AGENT [TIMEOUT] [CALLBACK])` — shared helpers for formula-driven cron agents (lock, memory guard, formula loading, prompt assembly, tmux session, monitor loop, crash recovery). `formula_phase_callback()` handles `PHASE:escalate` (unified escalation path — kills the session; callers may follow up via Matrix). `run_formula_and_monitor` accepts an optional CALLBACK (default: `formula_phase_callback`) so callers can install custom merge-through or escalation handlers. 
| planner-run.sh, predictor-run.sh, gardener-run.sh, supervisor-run.sh, dev-agent.sh, action-agent.sh | | `lib/guard.sh` | `check_active(agent_name)` — reads `$FACTORY_ROOT/state/.{agent_name}-active`; exits 0 (skip) if the file is absent. Factory is off by default — state files must be created to enable each agent. Sourced by dev-poll.sh, review-poll.sh, action-poll.sh, predictor-run.sh, supervisor-run.sh. | cron entry points | | `lib/mirrors.sh` | `mirror_push()` — pushes `$PRIMARY_BRANCH` + tags to all configured mirror remotes (fire-and-forget background pushes). Reads `MIRROR_NAMES` and `MIRROR_*` vars exported by `load-project.sh` from the `[mirrors]` TOML section. Failures are logged but never block the pipeline. Sourced by dev-poll.sh and dev/phase-handler.sh — called after every successful merge. | dev-poll.sh, phase-handler.sh | diff --git a/planner/AGENTS.md b/planner/AGENTS.md index abd1654..77387cf 100644 --- a/planner/AGENTS.md +++ b/planner/AGENTS.md @@ -1,4 +1,4 @@ - + # Planner Agent **Role**: Strategic planning using a Prerequisite Tree (Theory of Constraints), diff --git a/predictor/AGENTS.md b/predictor/AGENTS.md index 2ba726f..6bb9de2 100644 --- a/predictor/AGENTS.md +++ b/predictor/AGENTS.md @@ -1,4 +1,4 @@ - + # Predictor Agent **Role**: Abstract adversary (the "goblin"). 
Runs a 2-step formula diff --git a/review/AGENTS.md b/review/AGENTS.md index ba0a91c..9f9b61b 100644 --- a/review/AGENTS.md +++ b/review/AGENTS.md @@ -1,4 +1,4 @@ - + # Review Agent **Role**: AI-powered PR review — post structured findings and formal diff --git a/supervisor/AGENTS.md b/supervisor/AGENTS.md index 46d1198..b837237 100644 --- a/supervisor/AGENTS.md +++ b/supervisor/AGENTS.md @@ -1,4 +1,4 @@ - + # Supervisor Agent **Role**: Health monitoring and auto-remediation, executed as a formula-driven diff --git a/vault/AGENTS.md b/vault/AGENTS.md index 13b2edc..7ac59dd 100644 --- a/vault/AGENTS.md +++ b/vault/AGENTS.md @@ -1,4 +1,4 @@ - + # Vault Agent **Role**: Dual-purpose gate — action safety classification and resource procurement.