27 lines
12 KiB
JSON
[
{
"action": "edit_body",
"issue": 614,
"body": "Depends on: *Replace Codeberg dependency with local Forgejo instance*\r\n\r\n## Problem\r\n\r\nWith agents operating against a local Forgejo instance, the code is no longer visible on any public forge. For adoption (stars, forks, contributors — goals listed in VISION.md) the repo needs a public presence. Codeberg and GitHub serve different audiences: Codeberg for the FOSS community, GitHub for wider reach.\r\n\r\n## Solution\r\n\r\nAfter every successful merge to the primary branch, push to configured mirror remotes. Mirrors are read-only — agents never read from them. Pushes are fire-and-forget: failures are logged but never block the pipeline.\r\n\r\n## Scope\r\n\r\n### 1. Project TOML configuration\r\n\r\nAdd a `[mirrors]` section:\r\n\r\n```toml\r\nname = \"harb\"\r\nrepo = \"johba/harb\"\r\nforge_url = \"http://localhost:3000\"\r\nprimary_branch = \"master\"\r\n\r\n[mirrors]\r\ngithub = \"git@github.com:johba/harb.git\"\r\ncodeberg = \"git@codeberg.org:johba/harb.git\"\r\n```\r\n\r\nValues are push URLs, not slugs — this keeps it explicit and avoids guessing SSH vs HTTPS. Any number of mirrors can be listed; the key (github, codeberg, etc.) is just a human-readable name used in log messages.\r\n\r\n### 2. `lib/load-project.sh` — parse mirrors\r\n\r\nExtend the TOML parser to export mirror URLs. Add to the Python block:\r\n\r\n```python\r\nmirrors = cfg.get('mirrors', {})\r\nfor name, url in mirrors.items():\r\n    emit(f'MIRROR_{name.upper()}', url)\r\n```\r\n\r\nThis exports `MIRROR_GITHUB`, `MIRROR_CODEBERG`, etc. as env vars (keep mirror names alphanumeric, since they become env var suffixes). Also emit a space-separated list for iteration:\r\n\r\n```python\r\nif mirrors:\r\n    emit('MIRROR_NAMES', ' '.join(mirrors.keys()))\r\n    emit('MIRROR_URLS', ' '.join(mirrors.values()))\r\n```\r\n\r\n### 3. 
`lib/mirrors.sh` — shared push helper\r\n\r\nNew file:\r\n\r\n```bash\r\n#!/usr/bin/env bash\r\n# mirrors.sh — Push primary branch + tags to configured mirror remotes.\r\n#\r\n# Usage: source lib/mirrors.sh; mirror_push\r\n# Requires: PROJECT_REPO_ROOT, PRIMARY_BRANCH, MIRROR_* vars from load-project.sh\r\n\r\nmirror_push() {\r\n  [ -z \"${MIRROR_NAMES:-}\" ] && return 0\r\n\r\n  local name var url\r\n  for name in $MIRROR_NAMES; do\r\n    var=\"MIRROR_$(echo \"$name\" | tr '[:lower:]' '[:upper:]')\"\r\n    url=\"${!var}\"\r\n    [ -z \"$url\" ] && continue\r\n\r\n    # Ensure remote exists\r\n    git -C \"$PROJECT_REPO_ROOT\" remote get-url \"$name\" &>/dev/null \\\r\n      || git -C \"$PROJECT_REPO_ROOT\" remote add \"$name\" \"$url\"\r\n\r\n    # Fire-and-forget push (background, no failure propagation)\r\n    git -C \"$PROJECT_REPO_ROOT\" push \"$name\" \"$PRIMARY_BRANCH\" --tags 2>/dev/null &\r\n    log \"mirror: push to ${name} started (pid $!)\"\r\n  done\r\n}\r\n```\r\n\r\nPushes run in the background so the agent doesn't block on slow upstreams. SSH keys for GitHub/Codeberg are the user's responsibility (existing SSH agent or deploy keys).\r\n\r\n### 4. Call `mirror_push()` at the three merge sites\r\n\r\nThere are three places where PRs get merged:\r\n\r\n**`dev/phase-handler.sh` — `do_merge()`** (line ~193): the main dev-agent merge path. After the successful merge (HTTP 200/204 block), pull the merged primary branch locally and call `mirror_push()`.\r\n\r\n**`dev/dev-poll.sh` — `try_direct_merge()`** (line ~189): fast-path merge for approved + CI-green PRs that don't need a Claude session. Same insertion point after the success check.\r\n\r\n**`gardener/gardener-run.sh` — `_gardener_merge()`** (line ~293): gardener PR merge. 
Same pattern.\r\n\r\nAt each site, after the forge API confirms the merge:\r\n\r\n```bash\r\n# Pull merged primary branch and push to mirrors\r\ngit -C \"$REPO_ROOT\" fetch origin \"$PRIMARY_BRANCH\" 2>/dev/null || true\r\ngit -C \"$REPO_ROOT\" checkout \"$PRIMARY_BRANCH\" 2>/dev/null || true\r\ngit -C \"$REPO_ROOT\" pull --ff-only origin \"$PRIMARY_BRANCH\" 2>/dev/null || true\r\nmirror_push\r\n```\r\n\r\nThe fetch/pull is necessary because the merge happened via the forge API, not locally — the local clone needs to pick up the merge commit before it can push to mirrors.\r\n\r\n### 5. `disinto init` — set up mirror remotes\r\n\r\nIf `[mirrors]` is present in the TOML, add the remotes to the local clone during init:\r\n\r\n```bash\r\nfor name in $MIRROR_NAMES; do\r\n  var=\"MIRROR_$(echo \"$name\" | tr '[:lower:]' '[:upper:]')\"\r\n  git -C \"$repo_root\" remote add \"$name\" \"${!var}\" 2>/dev/null || true\r\ndone\r\n```\r\n\r\nAlso do an initial push to sync the mirrors with the current state of the primary branch.\r\n\r\n### 6. Matrix notification\r\n\r\nOn successful mirror push, include the mirror info in the existing merge notification. 
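The fire-and-forget `&` discards the push's exit status, so detecting failure needs a small wrapper. A hedged sketch (the `mirror_push_logged` helper is hypothetical, not an existing function): run the push inside a detached subshell that logs the outcome when the push finishes:\r\n\r\n```bash\r\n# Hypothetical wrapper: run a push command in the background and report its\r\n# exit status once it finishes, without blocking the caller.\r\nmirror_push_logged() {\r\n  local name=\"$1\"; shift\r\n  (\r\n    if \"$@\"; then\r\n      echo \"mirror: push to ${name} ok\"\r\n    else\r\n      echo \"mirror: push to ${name} failed (exit $?)\" >&2\r\n    fi\r\n  ) &\r\n}\r\n\r\n# e.g. mirror_push_logged github git -C \"$PROJECT_REPO_ROOT\" push github master --tags\r\n```\r\n\r\nInside the else branch, `$?` still holds the failed push's exit status at the point the message expands. 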
On failure (if the background job exits non-zero), log a warning but don't escalate — mirror failures are cosmetic.\r\n\r\n## Affected files\r\n\r\n- `lib/mirrors.sh` — new file, shared `mirror_push()` helper\r\n- `lib/load-project.sh` — parse `[mirrors]` section from TOML\r\n- `dev/phase-handler.sh` — call `mirror_push()` after `do_merge()` success\r\n- `dev/dev-poll.sh` — call `mirror_push()` after `try_direct_merge()` success\r\n- `gardener/gardener-run.sh` — call `mirror_push()` after `_gardener_merge()` success\r\n- `bin/disinto` — add mirror remotes during init\r\n- `projects/*.toml.example` — show `[mirrors]` section\r\n\r\n## Not in scope\r\n\r\n- Two-way sync (pulling GitHub Issues or PRs into local Forgejo)\r\n- Mirror webhooks or status badges\r\n- Mirroring branches other than the primary branch\r\n- HTTPS push with token auth (SSH only for mirrors)\r\n- Automatic deploy key generation on GitHub/Codeberg\r\n\r\n## Acceptance criteria\r\n\r\n- Merges to the primary branch are pushed to all configured mirrors within seconds\r\n- Mirror push failures are logged but never block the dev/review/gardener pipeline\r\n- `disinto init` sets up git remotes for configured mirrors\r\n- Projects with no `[mirrors]` section work exactly as before (no-op)\r\n- Mirror remotes are push-only — no agent ever reads from them\r"
},
{
"action": "add_label",
"issue": 614,
"label": "backlog"
},
{
"action": "edit_body",
"issue": 613,
"body": "Depends on: *Replace Codeberg dependency with local Forgejo instance*\r\n\r\n## Problem\r\n\r\nAll authentication tokens and database passwords sit in plaintext on disk in `.env` and `~/.netrc`. On a VPS this means anyone with disk access (compromised account, provider snapshot, backup leak) gets full control over the forge, CI, Matrix bot, and database.\r\n\r\nThe secrets in question:\r\n\r\n- `FORGE_TOKEN` (née `CODEBERG_TOKEN`) — full forge API access: create/delete issues, merge PRs, push code\r\n- `FORGE_REVIEW_TOKEN` (née `REVIEW_BOT_TOKEN`) — same, for the review bot account\r\n- `WOODPECKER_TOKEN` — trigger pipelines, read logs, retry builds\r\n- `WOODPECKER_DB_PASSWORD` — direct Postgres access to all CI state\r\n- `MATRIX_TOKEN` — send messages as the bot, read room history\r\n- Any project-specific secrets (e.g. `BASE_RPC_URL` for on-chain operations)\r\n\r\nMeanwhile the non-secret config (repo slugs, paths, branch names, server URLs, repo IDs) is harmless if leaked and already lives in plaintext in `projects/*.toml` where it belongs.\r\n\r\n## Solution\r\n\r\nUse SOPS with age encryption. Secrets go into `.env.enc` (encrypted, safe to commit). The age private key at `~/.config/sops/age/keys.txt` is the single file that must be protected — LUKS disk encryption on the VPS handles that layer.\r\n\r\n## Scope\r\n\r\n### 1. Secret loading in `lib/env.sh`\r\n\r\nReplace the `.env` source block (lines 10–16) with a two-tier loader:\r\n\r\n```bash\r\nif [ -f \"$FACTORY_ROOT/.env.enc\" ] && command -v sops &>/dev/null; then\r\n  set -a\r\n  eval \"$(sops -d --input-type dotenv --output-type dotenv \"$FACTORY_ROOT/.env.enc\" 2>/dev/null)\"\r\n  set +a\r\nelif [ -f \"$FACTORY_ROOT/.env\" ]; then\r\n  set -a\r\n  source \"$FACTORY_ROOT/.env\"\r\n  set +a\r\nfi\r\n```\r\n\r\nThe explicit `--input-type dotenv` matters because sops infers the format from the file extension, and `.env.enc` doesn't match dotenv. If `.env.enc` exists and `sops` is available, decrypt and load. Otherwise fall back to plaintext `.env`. Existing deployments keep working unchanged.\r\n\r\n### 2. 
`disinto init` generates encrypted secrets\r\n\r\nAfter the Forgejo provisioning step generates tokens (from the Forgejo issue), store them encrypted instead of plaintext:\r\n\r\n- Check for `age-keygen` and `sops` in PATH\r\n- If no age key exists at `~/.config/sops/age/keys.txt`, generate one: `age-keygen -o ~/.config/sops/age/keys.txt 2>/dev/null`\r\n- Extract the public key: `age-keygen -y ~/.config/sops/age/keys.txt`\r\n- Create a `.sops.yaml` in the factory root that pins the age recipient:\r\n\r\n```yaml\r\ncreation_rules:\r\n  - path_regex: \.env\.enc$\r\n    age: \"age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\r\n```\r\n\r\n- Write secrets to a temp file and encrypt: `sops -e --input-type dotenv --output-type dotenv .env.tmp > .env.enc`\r\n- Remove the temp file\r\n- If `sops` is not available, fall back to writing plaintext `.env` with a warning\r\n\r\n### 3. Remove `~/.netrc` token storage\r\n\r\n`bin/disinto` currently writes forge tokens to `~/.netrc` (the `write_netrc()` function, lines 61–81). With local Forgejo the tokens are generated programmatically and go straight into `.env.enc`. The `~/.netrc` codepath and the fallback read in `lib/env.sh` line 29 should be removed. Git credential access to the local Forgejo can use the token in the URL or a git credential helper instead.\r\n\r\n### 4. Preflight and documentation\r\n\r\n- Add `sops` and `age` to the optional-tools check in `preflight_check()` — warn if missing, don't hard-fail (plaintext `.env` still works)\r\n- Update `.env.example` to document which vars are secrets vs. config\r\n- Update `.gitignore`: add `.env.enc` as safe-to-commit (remove from ignore), keep `.env` ignored\r\n- Update `BOOTSTRAP.md` with the age key setup and SOPS workflow\r\n\r\n### 5. 
Secret rotation helper\r\n\r\nAdd a `disinto secrets` subcommand:\r\n\r\n- `disinto secrets edit` — runs `sops .env.enc` (opens in `$EDITOR`, re-encrypts on save)\r\n- `disinto secrets show` — runs `sops -d .env.enc` (prints decrypted, for debugging)\r\n- `disinto secrets migrate` — reads existing plaintext `.env`, encrypts to `.env.enc`, removes `.env`\r\n\r\n## What counts as a secret\r\n\r\nThe dividing line: if the value is leaked, do you need to rotate it?\r\n\r\n**Secret (goes in `.env.enc`):** `FORGE_TOKEN`, `FORGE_REVIEW_TOKEN`, `WOODPECKER_TOKEN`, `WOODPECKER_DB_PASSWORD`, `MATRIX_TOKEN`, `BASE_RPC_URL`, any future API keys or credentials.\r\n\r\n**Not secret (stays in plaintext `projects/*.toml` or `.env`):** `FORGE_REPO`, `PROJECT_NAME`, `PRIMARY_BRANCH`, `WOODPECKER_REPO_ID`, `WOODPECKER_SERVER`, `WOODPECKER_DB_USER`, `WOODPECKER_DB_HOST`, `WOODPECKER_DB_NAME`, `MATRIX_HOMESERVER`, `MATRIX_ROOM_ID`, `MATRIX_BOT_USER`, `CLAUDE_TIMEOUT`, `PROJECT_REPO_ROOT`, `FORGE_URL`.\r\n\r\n## Affected files\r\n\r\n- `lib/env.sh` — SOPS decryption block, remove `~/.netrc` fallback\r\n- `bin/disinto` — age key generation, SOPS encryption during init, remove `write_netrc()`, add `secrets` subcommand\r\n- `.env.example` — annotate secret vs. 
config vars\r\n- `.gitignore` — `.env.enc` safe to commit, `.env` stays ignored\r\n- `.sops.yaml` — generated by init, committed to repo\r\n- `BOOTSTRAP.md` — document SOPS + age setup, key backup, rotation\r\n\r\n## Not in scope\r\n\r\n- LUKS disk encryption setup (host-level concern, not a disinto issue)\r\n- HSM or TPM-backed age keys\r\n- Per-project separate encryption keys (all projects share one age key for now)\r\n- Encrypting `projects/*.toml` files (they contain no secrets)\r\n\r\n## Acceptance criteria\r\n\r\n- `disinto init` generates an age key if missing and encrypts secrets into `.env.enc`\r\n- All agents load secrets from `.env.enc` transparently via `lib/env.sh`\r\n- No plaintext secrets on disk when SOPS + age are available\r\n- Existing deployments with plaintext `.env` and no SOPS installed continue to work\r\n- `disinto secrets edit` opens the encrypted file in `$EDITOR` for manual changes\r\n- `disinto secrets migrate` converts an existing `.env` to `.env.enc`\r"
},
{
"action": "add_label",
"issue": 613,
"label": "backlog"
},
{
"action": "add_label",
"issue": 466,
"label": "backlog"
}
]