disinto/gardener/pending-actions.json


[
{
"action": "edit_body",
"issue": 619,
"body": "Depends on: *Containerize full stack with docker-compose*\r\n\r\n## Problem\r\n\r\nDendrite currently runs on the bare host as a systemd service (`dendrite.service`), manually installed and configured. The `matrix_listener.sh` daemon also runs on the host via its own systemd unit (`matrix_listener.service`), hardcoded to `/home/admin/disinto`. This is the last piece of the stack that isn't containerized — Forgejo, Woodpecker, and the agents are inside compose, but the Matrix homeserver sits outside.\r\n\r\nThe result: `disinto init` can't provision Matrix automatically, the listener systemd unit has a hardcoded user and path, and anyone setting up disinto fresh on a new VPS must install Dendrite manually, create users, create rooms, and wire everything together before notifications and human-in-the-loop escalation work.\r\n\r\n## Solution\r\n\r\nAdd Dendrite as a fourth service in `docker-compose.yml`. Provision the bot user, coordination room, and access token during `disinto init`. Move the `matrix_listener.sh` daemon into the agent container's entrypoint alongside cron.\r\n\r\n## Scope\r\n\r\n### 1. Dendrite service in docker-compose.yml\r\n\r\n```yaml\r\n  dendrite:\r\n    image: matrixdotorg/dendrite-monolith:latest\r\n    restart: unless-stopped\r\n    volumes:\r\n      - dendrite-data:/etc/dendrite\r\n    environment:\r\n      DENDRITE_DOMAIN: disinto.local\r\n    networks:\r\n      - disinto-net\r\n```\r\n\r\nNo host ports exposed — the agents talk to Dendrite over the internal Docker network at `http://dendrite:8008`. There's no need for federation; if the user wants to connect an external Matrix client (e.g. Element), they can add a port mapping themselves.\r\n\r\n### 2. 
Provisioning in `disinto init`\r\n\r\nAfter Dendrite is healthy, `disinto init` creates:\r\n\r\n- A server signing key (Dendrite generates this on first start if missing)\r\n- A bot user via Dendrite's admin API (`POST /_dendrite/admin/createOrModifyAccount` or the `create-account` CLI tool via `docker compose exec dendrite`)\r\n- A coordination room via the Matrix client-server API (`POST /_matrix/client/v3/createRoom`)\r\n- An access token for the bot (via login: `POST /_matrix/client/v3/login`)\r\n\r\nStore the resulting `MATRIX_TOKEN`, `MATRIX_ROOM_ID`, and `MATRIX_BOT_USER` in `.env` (or `.env.enc` if SOPS is available). Set `MATRIX_HOMESERVER=http://dendrite:8008` — this URL only needs to resolve inside the Docker network.\r\n\r\nFor the interactive case: after creating the room, print the room alias or ID so the user can join from their own Matrix client (Element, etc.) to receive notifications and reply to escalations. If they don't have a Matrix client, the factory still works — escalations just go unanswered until they check manually.\r\n\r\n### 3. Move `matrix_listener.sh` into agent container\r\n\r\nThe listener is currently a systemd service on the host. In the compose setup it runs inside the agent container as a background process alongside cron. Update `docker/agents/entrypoint.sh`:\r\n\r\n```bash\r\n#!/bin/bash\r\n# Start matrix listener in background (if configured)\r\nif [ -n \"${MATRIX_TOKEN:-}\" ] && [ -n \"${MATRIX_ROOM_ID:-}\" ]; then\r\n    /home/agent/disinto/lib/matrix_listener.sh &\r\nfi\r\n\r\n# Start cron in foreground\r\nexec cron -f\r\n```\r\n\r\nRemove the pidfile guard in `matrix_listener.sh` (line 2431) or make it work with the container lifecycle: inside a container, the PID file from a previous run doesn't exist. The trap on EXIT already cleans up.\r\n\r\n### 4. Remove `matrix_listener.service`\r\n\r\nThe systemd unit file at `lib/matrix_listener.service` becomes dead code once the listener runs inside the agent container. 
Keep it for bare-metal deployments (`disinto init --bare`) but document it as the legacy path.\r\n\r\n### 5. Update `MATRIX_HOMESERVER` default\r\n\r\nIn `.env.example`, change the default from `http://localhost:8008` to `http://dendrite:8008`. In `lib/env.sh`, the default should detect the environment: use `http://dendrite:8008` when running inside the compose network, `http://localhost:8008` on bare metal."
},
{
"action": "add_label",
"issue": 619,
"label": "backlog"
},
{
"action": "edit_body",
"issue": 614,
"body": "Depends on: *Replace Codeberg dependency with local Forgejo instance*\r\n\r\n## Problem\r\n\r\nWith agents operating against a local Forgejo instance, the code is no longer visible on any public forge. For adoption (stars, forks, contributors — goals listed in VISION.md) the repo needs a public presence. Codeberg and GitHub serve different audiences: Codeberg for the FOSS community, GitHub for wider reach.\r\n\r\n## Solution\r\n\r\nAfter every successful merge to the primary branch, push to configured mirror remotes. Mirrors are read-only — agents never read from them. Pushes are fire-and-forget: failures are logged but never block the pipeline.\r\n\r\n## Scope\r\n\r\n### 1. Project TOML configuration\r\n\r\nAdd a `[mirrors]` section:\r\n\r\n```toml\r\nname = \"harb\"\r\nrepo = \"johba/harb\"\r\nforge_url = \"http://localhost:3000\"\r\nprimary_branch = \"master\"\r\n\r\n[mirrors]\r\ngithub = \"git@github.com:johba/harb.git\"\r\ncodeberg = \"git@codeberg.org:johba/harb.git\"\r\n```\r\n\r\nValues are push URLs, not slugs — this keeps it explicit and avoids guessing SSH vs HTTPS. Any number of mirrors can be listed; the key (github, codeberg, etc.) is just a human-readable name used in log messages.\r\n\r\n### 2. `lib/load-project.sh` — parse mirrors\r\n\r\nExtend the TOML parser to export mirror URLs. Add to the Python block:\r\n\r\n```python\r\nmirrors = cfg.get('mirrors', {})\r\nfor name, url in mirrors.items():\r\n    emit(f'MIRROR_{name.upper()}', url)\r\n```\r\n\r\nThis exports `MIRROR_GITHUB`, `MIRROR_CODEBERG`, etc. as env vars. Also emit space-separated lists for iteration:\r\n\r\n```python\r\nif mirrors:\r\n    emit('MIRROR_NAMES', ' '.join(mirrors.keys()))\r\n    emit('MIRROR_URLS', ' '.join(mirrors.values()))\r\n```\r\n\r\n### 3. 
`lib/mirrors.sh` — shared push helper\r\n\r\nNew file:\r\n\r\n```bash\r\n#!/usr/bin/env bash\r\n# mirrors.sh — Push primary branch + tags to configured mirror remotes.\r\n#\r\n# Usage: source lib/mirrors.sh; mirror_push\r\n# Requires: PROJECT_REPO_ROOT, PRIMARY_BRANCH, MIRROR_* vars from load-project.sh\r\n\r\nmirror_push() {\r\n    [ -z \"${MIRROR_NAMES:-}\" ] && return 0\r\n\r\n    local name url url_var\r\n    for name in $MIRROR_NAMES; do\r\n        url_var=\"MIRROR_${name^^}\"\r\n        url=\"${!url_var}\"\r\n        [ -z \"$url\" ] && continue\r\n\r\n        # Ensure remote exists\r\n        git -C \"$PROJECT_REPO_ROOT\" remote get-url \"$name\" &>/dev/null \\\r\n            || git -C \"$PROJECT_REPO_ROOT\" remote add \"$name\" \"$url\"\r\n\r\n        # Fire-and-forget push (background, no failure propagation)\r\n        git -C \"$PROJECT_REPO_ROOT\" push \"$name\" \"$PRIMARY_BRANCH\" --tags 2>/dev/null &\r\n        log \"mirror: push to ${name} started (pid $!)\"\r\n    done\r\n}\r\n```\r\n\r\nBash indirect expansion (`${!url_var}`) avoids an `eval`. Background pushes so the agent doesn't block on slow upstreams. SSH keys for GitHub/Codeberg are the user's responsibility (existing SSH agent or deploy keys).\r\n\r\n### 4. Call `mirror_push()` at the three merge sites\r\n\r\nThere are three places where PRs get merged:\r\n\r\n**`dev/phase-handler.sh` — `do_merge()`** (line ~193): the main dev-agent merge path. After the successful merge (HTTP 200/204 block), pull the merged primary branch locally and call `mirror_push()`.\r\n\r\n**`dev/dev-poll.sh` — `try_direct_merge()`** (line ~189): fast-path merge for approved + CI-green PRs that don't need a Claude session. Same insertion point after the success check.\r\n\r\n**`gardener/gardener-run.sh` — `_gardener_merge()`** (line ~293): gardener PR merge. 
Same pattern.\r\n\r\nAt each site, after the forge API confirms the merge:\r\n\r\n```bash\r\n# Pull merged primary branch and push to mirrors\r\ngit -C \"$REPO_ROOT\" fetch origin \"$PRIMARY_BRANCH\" 2>/dev/null || true\r\ngit -C \"$REPO_ROOT\" checkout \"$PRIMARY_BRANCH\" 2>/dev/null || true\r\ngit -C \"$REPO_ROOT\" pull --ff-only origin \"$PRIMARY_BRANCH\" 2>/dev/null || true\r\nmirror_push\r\n```\r\n\r\nThe fetch/pull is necessary because the merge happened via the forge API, not locally: the local clone needs the merged commits before it can push them to the mirrors."
},
{
"action": "add_label",
"issue": 614,
"label": "backlog"
},
{
"action": "edit_body",
"issue": 613,
"body": "Depends on: *Replace Codeberg dependency with local Forgejo instance*\r\n\r\n## Problem\r\n\r\nAll authentication tokens and database passwords sit in plaintext on disk in `.env` and `~/.netrc`. On a VPS this means anyone with disk access (compromised account, provider snapshot, backup leak) gets full control over the forge, CI, Matrix bot, and database.\r\n\r\nThe secrets in question:\r\n\r\n- `FORGE_TOKEN` (née `CODEBERG_TOKEN`) — full forge API access: create/delete issues, merge PRs, push code\r\n- `FORGE_REVIEW_TOKEN` (née `REVIEW_BOT_TOKEN`) — same, for the review bot account\r\n- `WOODPECKER_TOKEN` — trigger pipelines, read logs, retry builds\r\n- `WOODPECKER_DB_PASSWORD` — direct Postgres access to all CI state\r\n- `MATRIX_TOKEN` — send messages as the bot, read room history\r\n- Any project-specific secrets (e.g. `BASE_RPC_URL` for on-chain operations)\r\n\r\nMeanwhile the non-secret config (repo slugs, paths, branch names, server URLs, repo IDs) is harmless if leaked and already lives in plaintext in `projects/*.toml` where it belongs.\r\n\r\n## Solution\r\n\r\nUse SOPS with age encryption. Secrets go into `.env.enc` (encrypted, safe to commit). The age private key at `~/.config/sops/age/keys.txt` is the single file that must be protected — LUKS disk encryption on the VPS handles that layer.\r\n\r\n## Scope\r\n\r\n### 1. Secret loading in `lib/env.sh`\r\n\r\nReplace the `.env` source block (lines 1016) with a two-tier loader:\r\n\r\n```bash\r\nif [ -f \"$FACTORY_ROOT/.env.enc\" ] && command -v sops &>/dev/null; then\r\n    set -a\r\n    eval \"$(sops -d --output-type dotenv \"$FACTORY_ROOT/.env.enc\" 2>/dev/null)\"\r\n    set +a\r\nelif [ -f \"$FACTORY_ROOT/.env\" ]; then\r\n    set -a\r\n    source \"$FACTORY_ROOT/.env\"\r\n    set +a\r\nfi\r\n```\r\n\r\nIf `.env.enc` exists and `sops` is available, decrypt and load. Otherwise fall back to plaintext `.env`. Existing deployments keep working unchanged.\r\n\r\n### 2. 
`disinto init` generates encrypted secrets\r\n\r\nAfter the Forgejo provisioning step generates tokens (from the Forgejo issue), store them encrypted instead of plaintext:\r\n\r\n- Check for `age-keygen` and `sops` in PATH\r\n- If no age key exists at `~/.config/sops/age/keys.txt`, generate one: `age-keygen -o ~/.config/sops/age/keys.txt 2>/dev/null`\r\n- Extract the public key: `age-keygen -y ~/.config/sops/age/keys.txt`\r\n- Create a `.sops.yaml` in the factory root that pins the age recipient:\r\n\r\n```yaml\r\ncreation_rules:\r\n  - path_regex: \\.env\\.enc$\r\n    age: \"age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\r\n```\r\n\r\n- Write secrets to a temp file and encrypt: `sops -e --input-type dotenv --output-type dotenv .env.tmp > .env.enc`\r\n- Remove the temp file\r\n- If `sops` is not available, fall back to writing plaintext `.env` with a warning\r\n\r\n### 3. Remove `~/.netrc` token storage\r\n\r\n`bin/disinto` currently writes forge tokens to `~/.netrc` (the `write_netrc()` function, lines 6181). With local Forgejo the tokens are generated programmatically and go straight into `.env.enc`. The `~/.netrc` codepath and the fallback read in `lib/env.sh` line 29 should be removed. Git credential access to the local Forgejo can use the token in the URL or a git credential helper instead.\r\n\r\n### 4. Preflight and documentation\r\n\r\n- Add `sops` and `age` to the optional-tools check in `preflight_check()`: warn if missing, don't hard-fail (plaintext `.env` still works)\r\n- Update `.env.example` to document which vars are secrets vs. config\r\n- Update `.gitignore`: add `.env.enc` as safe-to-commit (remove from ignore), keep `.env` ignored\r\n- Update `BOOTSTRAP.md` with the age key setup and SOPS workflow\r\n\r\n### 5. 
Secret rotation helper\r\n\r\nAdd a `disinto secrets` subcommand:\r\n\r\n- `disinto secrets edit` runs `sops .env.enc` (opens in `$EDITOR`, re-encrypts on save)\r\n- `disinto secrets show` runs `sops -d .env.enc` (prints decrypted, for debugging)\r\n- `disinto secrets migrate` reads existing plaintext `.env`, encrypts it to `.env.enc`"
},
{
"action": "add_label",
"issue": 613,
"label": "backlog"
},
{
"action": "add_label",
"issue": 466,
"label": "backlog"
}
]