fix: feat: gardener-agent.sh — tmux + Claude interactive gardener using agent-session.sh (#159) (#163)

Fixes #159

## Changes

Add gardener-agent.sh (tmux + Claude) and lib/agent-session.sh (shared helpers). gardener-poll.sh is slimmed down to a cron wrapper; grooming is delegated to the new agent; the recipe engine for CI escalations is unchanged.

Co-authored-by: openhands <openhands@all-hands.dev>
Reviewed-on: https://codeberg.org/johba/disinto/pulls/163
Reviewed-by: review_bot <review_bot@noreply.codeberg.org>

parent 4d88464edb
commit 6d5cc4458f

4 changed files with 586 additions and 649 deletions
AGENTS.md (14 lines changed)
@@ -15,11 +15,11 @@ See `README.md` for the full architecture and `BOOTSTRAP.md` for setup.
 disinto/
 ├── dev/        dev-poll.sh, dev-agent.sh — issue implementation
 ├── review/     review-poll.sh, review-pr.sh — PR review
-├── gardener/   gardener-poll.sh — backlog grooming
+├── gardener/   gardener-poll.sh, gardener-agent.sh — backlog grooming
 ├── planner/    planner-poll.sh, planner-agent.sh — vision gap analysis
 ├── supervisor/ supervisor-poll.sh — health monitoring
 ├── vault/      vault-poll.sh, vault-agent.sh, vault-fire.sh — action gating
-├── lib/        env.sh, ci-helpers.sh, ci-debug.sh, load-project.sh, parse-deps.sh, matrix_listener.sh
+├── lib/        env.sh, agent-session.sh, ci-helpers.sh, ci-debug.sh, load-project.sh, parse-deps.sh, matrix_listener.sh
 ├── projects/   *.toml — per-project config
 ├── formulas/   Issue templates
 └── docs/       Protocol docs (PHASE-PROTOCOL.md, etc.)
@@ -48,10 +48,10 @@ disinto/
 # ShellCheck all scripts
 shellcheck dev/dev-poll.sh dev/dev-agent.sh dev/phase-test.sh \
   review/review-poll.sh review/review-pr.sh \
-  gardener/gardener-poll.sh \
+  gardener/gardener-poll.sh gardener/gardener-agent.sh \
   supervisor/supervisor-poll.sh supervisor/update-prompt.sh \
   lib/env.sh lib/ci-debug.sh lib/ci-helpers.sh lib/load-project.sh \
-  lib/parse-deps.sh lib/matrix_listener.sh
+  lib/parse-deps.sh lib/matrix_listener.sh lib/agent-session.sh

 # Run phase protocol test
 bash dev/phase-test.sh
@@ -116,10 +116,11 @@ Claude to fix or escalate to a human via Matrix.
 optional project TOML argument.

 **Key files**:
-- `gardener/gardener-poll.sh` — All-in-one: bash pre-analysis (duplicates, missing criteria, staleness, etc.) then `claude -p` for remediation
+- `gardener/gardener-poll.sh` — Cron wrapper: lock, escalation-reply injection for dev sessions, calls `gardener-agent.sh`, then processes dev-agent CI escalations via recipe engine
+- `gardener/gardener-agent.sh` — Orchestrator: bash pre-analysis, creates tmux session (`gardener-{project}`) with interactive `claude`, monitors phase file, parses result file (ACTION:/DUST:/ESCALATE), handles dust bundling

 **Environment variables consumed**:
-- `CODEBERG_TOKEN`, `CODEBERG_REPO`, `CODEBERG_API`, `PROJECT_NAME`
+- `CODEBERG_TOKEN`, `CODEBERG_REPO`, `CODEBERG_API`, `PROJECT_NAME`, `PROJECT_REPO_ROOT`
 - `CLAUDE_TIMEOUT`
 - `MATRIX_TOKEN`, `MATRIX_ROOM_ID`, `MATRIX_HOMESERVER`
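The ACTION:/DUST:/ESCALATE result-file contract above can be exercised on its own. A minimal illustration, with a throwaway temp file standing in for the real `/tmp/gardener-result-{project}.txt`:

```shell
# Illustrative only: write two findings in the gardener's result-file
# format, then count them the way a downstream parser would.
RESULT_FILE=$(mktemp)
echo 'ACTION: closed #12 as duplicate of #7' >> "$RESULT_FILE"
echo 'DUST: {"issue": 34, "group": "lib", "title": "typo", "reason": "comment fix"}' >> "$RESULT_FILE"
grep -c '^ACTION:' "$RESULT_FILE"   # prints 1
grep -c '^DUST: '  "$RESULT_FILE"   # prints 1
rm -f "$RESULT_FILE"
```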
@@ -200,6 +201,7 @@ sourced as needed.
 | `lib/load-project.sh` | Parses a `projects/*.toml` file into env vars (`PROJECT_NAME`, `CODEBERG_REPO`, `WOODPECKER_REPO_ID`, monitoring toggles, Matrix config, etc.). | env.sh (when `PROJECT_TOML` is set), supervisor-poll (per-project iteration) |
 | `lib/parse-deps.sh` | Extracts dependency issue numbers from an issue body (stdin → stdout, one number per line). Matches `## Dependencies` / `## Depends on` / `## Blocked by` sections and inline `depends on #N` patterns. Not sourced — executed via `bash lib/parse-deps.sh`. | dev-poll, supervisor-poll |
 | `lib/matrix_listener.sh` | Long-poll Matrix sync daemon. Dispatches thread replies to the correct agent via well-known files (`/tmp/{agent}-escalation-reply`). Handles supervisor, gardener, dev, review, and vault reply routing. Run as systemd service. | Standalone daemon |
+| `lib/agent-session.sh` | Shared tmux + Claude session helpers: `agent_wait_for_claude_ready()`, `agent_inject_into_session()`, `agent_kill_session()`. | gardener-agent.sh |

 ---
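The dependency-extraction contract in the table above (issue body on stdin, one issue number per line on stdout) can be sketched as follows. This is an illustrative approximation, not the shipped `lib/parse-deps.sh`; for instance, it ignores the inline `depends on #N` form and is case-sensitive:

```shell
# Illustrative sketch only: print one dependency number per line from an
# issue body on stdin. The real lib/parse-deps.sh may differ.
parse_deps_sketch() {
  awk '
    /^##?[[:space:]]*(Dependencies|Depends on|Blocked by)/ { in_sec = 1; next }
    /^##?[[:space:]]/ { in_sec = 0 }   # any other header closes the section
    in_sec {
      while (match($0, /#[0-9]+/)) {   # collect every #N reference
        print substr($0, RSTART + 1, RLENGTH - 1)
        $0 = substr($0, RSTART + RLENGTH)
      }
    }
  '
}

# prints 12 and 34; #99 is outside the Dependencies section
printf '## Dependencies\n- #12\n- #34\n\n## Notes\nsee #99\n' | parse_deps_sketch
```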

gardener/gardener-agent.sh (new file, 535 lines)
@@ -0,0 +1,535 @@
#!/usr/bin/env bash
# gardener-agent.sh — tmux + Claude interactive gardener session manager
#
# Usage: ./gardener-agent.sh [project-toml]
# Called by: gardener-poll.sh
#
# Lifecycle:
# 1. Read escalation reply (from ESCALATION_REPLY env var)
# 2. Fetch open issues + bash pre-checks (zero tokens)
# 3. If no problems detected, exit 0
# 4. Build prompt with result-file output + phase protocol instructions
# 5. Create tmux session: gardener-{project} with interactive claude
# 6. Inject prompt via tmux
# 7. Monitor phase file — Claude writes PHASE:done when finished
# 8. Parse result file (ACTION:/DUST:/ESCALATE) → Matrix + dust.jsonl
# 9. Dust bundling: groups with 3+ items → one backlog issue
#
# Phase file:  /tmp/gardener-session-{project}.phase
# Result file: /tmp/gardener-result-{project}.txt
# Session:     gardener-{project} (tmux)

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
FACTORY_ROOT="$(dirname "$SCRIPT_DIR")"

export PROJECT_TOML="${1:-}"
# shellcheck source=../lib/env.sh
source "$FACTORY_ROOT/lib/env.sh"
# shellcheck source=../lib/agent-session.sh
source "$FACTORY_ROOT/lib/agent-session.sh"

LOG_FILE="$SCRIPT_DIR/gardener.log"
CLAUDE_TIMEOUT="${CLAUDE_TIMEOUT:-3600}"

SESSION_NAME="gardener-${PROJECT_NAME}"
PHASE_FILE="/tmp/gardener-session-${PROJECT_NAME}.phase"
RESULT_FILE="/tmp/gardener-result-${PROJECT_NAME}.txt"
DUST_FILE="$SCRIPT_DIR/dust.jsonl"

PHASE_POLL_INTERVAL=15
MAX_RUNTIME="${CLAUDE_TIMEOUT}"

log() { echo "[$(date -u +%Y-%m-%dT%H:%M:%S)Z] $*" >> "$LOG_FILE"; }

read_phase() {
  { cat "$PHASE_FILE" 2>/dev/null || true; } | head -1 | tr -d '[:space:]'
}
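As a quick illustration of the normalization `read_phase` performs (first line only, whitespace stripped), using a self-contained copy of the helper and a throwaway file:

```shell
# Demonstrate read_phase normalization on a throwaway phase file.
PHASE_FILE=$(mktemp)
printf 'PHASE:done \nReason: all good\n' > "$PHASE_FILE"
read_phase() {
  { cat "$PHASE_FILE" 2>/dev/null || true; } | head -1 | tr -d '[:space:]'
}
read_phase   # prints PHASE:done; trailing space and second line are dropped
rm -f "$PHASE_FILE"
```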

log "--- gardener-agent start ---"

# ── Read escalation reply (passed via env by gardener-poll.sh) ────────────
ESCALATION_REPLY="${ESCALATION_REPLY:-}"

# ── Fetch all open issues ─────────────────────────────────────────────────
ISSUES_JSON=$(codeberg_api GET "/issues?state=open&type=issues&limit=50&sort=updated&direction=desc" 2>/dev/null || true)
if [ -z "$ISSUES_JSON" ] || [ "$ISSUES_JSON" = "null" ]; then
  log "Failed to fetch issues"
  exit 1
fi

ISSUE_COUNT=$(echo "$ISSUES_JSON" | jq 'length')
log "Found $ISSUE_COUNT open issues"

if [ "$ISSUE_COUNT" -eq 0 ]; then
  log "No open issues — nothing to groom"
  exit 0
fi

# ── Bash pre-checks (zero tokens) ─────────────────────────────────────────

PROBLEMS=""

# 1. Duplicate detection: issues with very similar titles
TITLES=$(echo "$ISSUES_JSON" | jq -r '.[] | "\(.number)\t\(.title)"')
DUPES=""
while IFS=$'\t' read -r num1 title1; do
  while IFS=$'\t' read -r num2 title2; do
    [ "$num1" -ge "$num2" ] && continue
    # Normalize: lowercase, strip prefixes + series names, collapse whitespace
    t1=$(echo "$title1" | tr '[:upper:]' '[:lower:]' | sed 's/^feat:\|^fix:\|^refactor://;s/llm seed[^—]*—\s*//;s/push3 evolution[^—]*—\s*//;s/[^a-z0-9 ]//g;s/  */ /g')
    t2=$(echo "$title2" | tr '[:upper:]' '[:lower:]' | sed 's/^feat:\|^fix:\|^refactor://;s/llm seed[^—]*—\s*//;s/push3 evolution[^—]*—\s*//;s/[^a-z0-9 ]//g;s/  */ /g')
    # Count shared words (>60% overlap = suspect)
    WORDS1=$(echo "$t1" | tr ' ' '\n' | sort -u)
    WORDS2=$(echo "$t2" | tr ' ' '\n' | sort -u)
    SHARED=$(comm -12 <(echo "$WORDS1") <(echo "$WORDS2") | wc -l)
    TOTAL1=$(echo "$WORDS1" | wc -l)
    TOTAL2=$(echo "$WORDS2" | wc -l)
    MIN_TOTAL=$(( TOTAL1 < TOTAL2 ? TOTAL1 : TOTAL2 ))
    if [ "$MIN_TOTAL" -gt 2 ] && [ "$SHARED" -gt 0 ]; then
      OVERLAP=$(( SHARED * 100 / MIN_TOTAL ))
      if [ "$OVERLAP" -ge 60 ]; then
        DUPES="${DUPES}possible_dupe: #${num1} vs #${num2} (${OVERLAP}% word overlap)\n"
      fi
    fi
  done <<< "$TITLES"
done <<< "$TITLES"
[ -n "$DUPES" ] && PROBLEMS="${PROBLEMS}${DUPES}"
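A hand-run of the same overlap arithmetic on two near-identical titles, assuming bash and GNU coreutils:

```shell
# Hand-run of the duplicate heuristic on two example titles (bash).
t1="add retry logic to poller"
t2="add retry logic to the poller"
WORDS1=$(echo "$t1" | tr ' ' '\n' | sort -u)
WORDS2=$(echo "$t2" | tr ' ' '\n' | sort -u)
SHARED=$(comm -12 <(echo "$WORDS1") <(echo "$WORDS2") | wc -l)  # 5 shared words
TOTAL1=$(echo "$WORDS1" | wc -l)                                # 5
TOTAL2=$(echo "$WORDS2" | wc -l)                                # 6
MIN_TOTAL=$(( TOTAL1 < TOTAL2 ? TOTAL1 : TOTAL2 ))
OVERLAP=$(( SHARED * 100 / MIN_TOTAL ))
echo "$OVERLAP"  # 100, well past the 60% threshold
```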

# 2. Missing acceptance criteria: issues with short body and no checkboxes
while IFS=$'\t' read -r num body_len has_checkbox; do
  if [ "$body_len" -lt 100 ] && [ "$has_checkbox" = "false" ]; then
    PROBLEMS="${PROBLEMS}thin_issue: #${num} — body < 100 chars, no acceptance criteria\n"
  fi
done < <(echo "$ISSUES_JSON" | jq -r '.[] | "\(.number)\t\(.body | length)\t\(.body | test("- \\[[ x]\\]") // false)"')

# 3. Stale issues: no update in 14+ days
NOW_EPOCH=$(date +%s)
while IFS=$'\t' read -r num updated_at; do
  UPDATED_EPOCH=$(date -d "$updated_at" +%s 2>/dev/null || echo 0)
  AGE_DAYS=$(( (NOW_EPOCH - UPDATED_EPOCH) / 86400 ))
  if [ "$AGE_DAYS" -ge 14 ]; then
    PROBLEMS="${PROBLEMS}stale: #${num} — no activity for ${AGE_DAYS} days\n"
  fi
done < <(echo "$ISSUES_JSON" | jq -r '.[] | "\(.number)\t\(.updated_at)"')

# 4. Blocker detection: find issues blocking backlog items that aren't themselves backlog
# This is the HIGHEST PRIORITY — a non-backlog blocker starves the entire factory
BACKLOG_ISSUES=$(echo "$ISSUES_JSON" | jq -r '.[] | select(.labels | map(.name) | index("backlog")) | .number')
BLOCKER_NUMS=""
for BNUM in $BACKLOG_ISSUES; do
  BBODY=$(echo "$ISSUES_JSON" | jq -r --arg n "$BNUM" '.[] | select(.number == ($n | tonumber)) | .body // ""')
  # Extract deps from ## Dependencies / ## Depends on / ## Blocked by
  IN_SECTION=false
  while IFS= read -r line; do
    if echo "$line" | grep -qiP '^##?\s*(Dependencies|Depends on|Blocked by)'; then IN_SECTION=true; continue; fi
    if echo "$line" | grep -qP '^##?\s' && [ "$IN_SECTION" = true ]; then IN_SECTION=false; fi
    if [ "$IN_SECTION" = true ]; then
      for dep in $(echo "$line" | grep -oP '#\d+' | grep -oP '\d+'); do
        [ "$dep" = "$BNUM" ] && continue
        # Check if dep is open but NOT backlog-labeled
        DEP_STATE=$(echo "$ISSUES_JSON" | jq -r --arg n "$dep" '.[] | select(.number == ($n | tonumber)) | .state' 2>/dev/null || true)
        DEP_LABELS=$(echo "$ISSUES_JSON" | jq -r --arg n "$dep" '.[] | select(.number == ($n | tonumber)) | [.labels[].name] | join(",")' 2>/dev/null || true)
        if [ "$DEP_STATE" = "open" ] && ! echo ",$DEP_LABELS," | grep -q ',backlog,'; then
          BLOCKER_NUMS="${BLOCKER_NUMS} ${dep}"
        fi
      done
    fi
  done <<< "$BBODY"
done
# Deduplicate blockers
BLOCKER_NUMS=$(echo "$BLOCKER_NUMS" | tr ' ' '\n' | sort -un | head -10)
if [ -n "$BLOCKER_NUMS" ]; then
  BLOCKER_LIST=""
  for bnum in $BLOCKER_NUMS; do
    BTITLE=$(echo "$ISSUES_JSON" | jq -r --arg n "$bnum" '.[] | select(.number == ($n | tonumber)) | .title' 2>/dev/null || true)
    BLABELS=$(echo "$ISSUES_JSON" | jq -r --arg n "$bnum" '.[] | select(.number == ($n | tonumber)) | [.labels[].name] | join(",")' 2>/dev/null || true)
    BLOCKER_LIST="${BLOCKER_LIST}#${bnum} [${BLABELS:-unlabeled}] ${BTITLE}\n"
  done
  PROBLEMS="${PROBLEMS}PRIORITY_blockers_starving_factory: these issues block backlog items but are NOT labeled backlog — promote them FIRST:\n${BLOCKER_LIST}\n"
fi

# 5. Tech-debt issues needing promotion to backlog (secondary to blockers)
TECH_DEBT_ISSUES=$(echo "$ISSUES_JSON" | jq -r '.[] | select(.labels | map(.name) | index("tech-debt")) | "#\(.number) \(.title)"')
if [ -n "$TECH_DEBT_ISSUES" ]; then
  TECH_DEBT_COUNT=$(echo "$TECH_DEBT_ISSUES" | wc -l)
  PROBLEMS="${PROBLEMS}tech_debt_promotion: ${TECH_DEBT_COUNT} tech-debt issues need processing (goal: zero tech-debt):\n$(echo "$TECH_DEBT_ISSUES" | head -50)\n"
fi

PROBLEM_COUNT=$(echo -e "$PROBLEMS" | grep -c '.' || true)
log "Detected $PROBLEM_COUNT potential problems"

if [ "$PROBLEM_COUNT" -eq 0 ] && [ -z "$ESCALATION_REPLY" ]; then
  log "Backlog is clean — nothing to groom"
  exit 0
fi

# ── Build prompt ──────────────────────────────────────────────────────────
log "Building gardener prompt"

# Build issue summary for context (titles + labels + deps)
ISSUE_SUMMARY=$(echo "$ISSUES_JSON" | jq -r '.[] | "#\(.number) [\(.labels | map(.name) | join(","))] \(.title)"')

# Build list of issues already staged as dust (so the LLM doesn't re-emit them)
STAGED_DUST=""
if [ -s "$DUST_FILE" ]; then
  STAGED_DUST=$(jq -r '"#\(.issue) (\(.group))"' "$DUST_FILE" 2>/dev/null | sort -u || true)
fi

PROMPT="You are the issue gardener for ${CODEBERG_REPO}. Your job: keep the backlog clean, well-structured, and actionable.

## Current open issues
$ISSUE_SUMMARY

## Problems detected
$(echo -e "$PROBLEMS")

## Tools available
- Codeberg API: use curl with the CODEBERG_TOKEN env var (already set in your environment)
- Base URL: ${CODEBERG_API}
- Read issue: \`curl -sf -H \"Authorization: token \$CODEBERG_TOKEN\" '${CODEBERG_API}/issues/{number}' | jq '.body'\`
- Relabel: \`curl -sf -H \"Authorization: token \$CODEBERG_TOKEN\" -X PUT -H 'Content-Type: application/json' '${CODEBERG_API}/issues/{number}/labels' -d '{\"labels\":[LABEL_ID]}'\`
- Comment: \`curl -sf -H \"Authorization: token \$CODEBERG_TOKEN\" -X POST -H 'Content-Type: application/json' '${CODEBERG_API}/issues/{number}/comments' -d '{\"body\":\"...\"}'\`
- Close: \`curl -sf -H \"Authorization: token \$CODEBERG_TOKEN\" -X PATCH -H 'Content-Type: application/json' '${CODEBERG_API}/issues/{number}' -d '{\"state\":\"closed\"}'\`
- Edit body: \`curl -sf -H \"Authorization: token \$CODEBERG_TOKEN\" -X PATCH -H 'Content-Type: application/json' '${CODEBERG_API}/issues/{number}' -d '{\"body\":\"new body\"}'\`
- List labels: \`curl -sf -H \"Authorization: token \$CODEBERG_TOKEN\" '${CODEBERG_API}/labels'\` (to find label IDs)
- NEVER echo, log, or include the actual token value in any output — always reference \$CODEBERG_TOKEN
- You're running in the project repo root. Read README.md and any docs/ files before making decisions.

## Primary mission: unblock the factory
Issues prefixed with PRIORITY_blockers_starving_factory are your TOP priority. These are non-backlog issues that block existing backlog items — the dev-agent is completely starved until these are promoted. Process ALL of them before touching regular tech-debt.

## Your objective: zero tech-debt issues

Tech-debt is unprocessed work — it sits outside the factory pipeline
(dev-agent only pulls backlog). Every tech-debt issue is a decision
you haven't made yet:

- Substantial? → promote to backlog (add affected files, acceptance
criteria, dependencies)
- Dust? → bundle into an ore issue
- Duplicate? → close with cross-reference
- Invalid/wontfix? → close with explanation
- Needs human decision? → escalate

Process ALL tech-debt issues every run. The goal is zero tech-debt
when you're done. If you can't reach zero (needs human input,
unclear scope), escalate those specifically and close out everything
else.

Tech-debt is your inbox. An empty inbox is a healthy factory.

## Dust vs Ore — bundle trivial tech-debt
Don't promote trivial tech-debt individually — each costs a full factory cycle (CI + dev-agent + review + merge). If an issue is dust (comment fix, rename, style-only, single-line change, trivial cleanup), output a DUST line instead of promoting:

DUST: {\"issue\": NNN, \"group\": \"<file-or-subsystem>\", \"title\": \"issue title\", \"reason\": \"why it's dust\"}

Group by file or subsystem (e.g. \"gardener\", \"lib/env.sh\", \"dev-poll\"). The script collects dust items into a staging file. When a group accumulates 3+ items, the script bundles them into one backlog issue automatically.

Only promote tech-debt that is substantial: multi-file changes, behavioral fixes, architectural improvements. Dust is any issue where the fix is a single-line edit, a rename, a comment tweak, or a style-only change.
$(if [ -n "$STAGED_DUST" ]; then echo "
These issues are ALREADY staged as dust — do NOT emit DUST lines for them again:
${STAGED_DUST}"; fi)

## Other rules
1. **Duplicates**: If confident (>80% overlap + same scope after reading bodies), close the newer one with a comment referencing the older. If unsure, ESCALATE.
2. **Thin issues** (non-tech-debt): Add acceptance criteria. Read the body first.
3. **Stale issues**: If clearly superseded or no longer relevant, close with explanation. If unclear, ESCALATE.
4. **Oversized issues**: If >5 acceptance criteria touching different files/concerns, ESCALATE with suggested split.
5. **Dependencies**: If an issue references another that must land first, add a \`## Dependencies\n- #NNN\` section if missing.
6. **Sibling issues**: When creating multiple issues from the same source (PR review, code audit), NEVER add bidirectional dependencies between them. Siblings are independent work items, not parent/child. Use \`## Related\n- #NNN (sibling)\` for cross-references between siblings — NOT \`## Dependencies\`. The dev-poll \`get_deps()\` parser only reads \`## Dependencies\` / \`## Depends on\` / \`## Blocked by\` headers, so \`## Related\` is safely ignored. Bidirectional deps create permanent deadlocks that stall the entire factory.

## Escalation format
For anything needing human decision, output EXACTLY this format (one block, all items):
\`\`\`
ESCALATE
1. #NNN \"title\" — reason (a) option1 (b) option2 (c) option3
2. #NNN \"title\" — reason (a) option1 (b) option2
\`\`\`

## Output format (MANDATORY — write each line to result file using bash)
Write your structured output to ${RESULT_FILE}. Use bash to append each line:
echo \"ACTION: description of what you did\" >> '${RESULT_FILE}'
echo 'DUST: {\"issue\": NNN, \"group\": \"...\", \"title\": \"...\", \"reason\": \"...\"}' >> '${RESULT_FILE}'
For escalations, write the full block to the result file:
printf 'ESCALATE\n1. #NNN \"title\" — reason (a) option1 (b) option2\n' >> '${RESULT_FILE}'
If truly nothing to do: echo 'CLEAN' >> '${RESULT_FILE}'

## Important
- You MUST process the tech_debt_promotion items listed above. Read each issue, add acceptance criteria + affected files, then relabel to backlog.
- If an issue is ambiguous or needs a design decision, ESCALATE it — don't skip it silently.
- Every tech-debt issue in the list above should result in either an ACTION (promoted) or an ESCALATE (needs decision). Never skip silently.
$(if [ -n "$ESCALATION_REPLY" ]; then echo "
## Human Response to Previous Escalation
The human replied with shorthand choices keyed to the previous ESCALATE block.
Format: '1a 2c 3b' means question 1→option (a), question 2→option (c), question 3→option (b).

Raw reply:
${ESCALATION_REPLY}

Execute each chosen option NOW via the Codeberg API before processing new items.
If a choice is unclear, re-escalate that single item with a clarifying question."; fi)

## Phase protocol (REQUIRED)
When you have finished ALL work, write to the phase file:
echo 'PHASE:done' > '${PHASE_FILE}'
On unrecoverable error:
printf 'PHASE:failed\nReason: %s\n' 'describe error' > '${PHASE_FILE}'"

# ── Reset phase + result files ────────────────────────────────────────────
kill_tmux_session
rm -f "$PHASE_FILE" "$RESULT_FILE"
touch "$RESULT_FILE"

# ── Create tmux session ───────────────────────────────────────────────────
log "Creating tmux session: ${SESSION_NAME}"
if ! create_agent_session "$SESSION_NAME" "$PROJECT_REPO_ROOT"; then
  log "ERROR: failed to create tmux session ${SESSION_NAME}"
  exit 1
fi

inject_into_session "$PROMPT"
log "Prompt sent to tmux session"
matrix_send "gardener" "🌱 Gardener session started for ${CODEBERG_REPO}" 2>/dev/null || true

# ── Phase monitoring loop ─────────────────────────────────────────────────
log "Monitoring phase file: ${PHASE_FILE}"
LAST_PHASE_MTIME=0
IDLE_ELAPSED=0
CRASHED=false

while true; do
  sleep "$PHASE_POLL_INTERVAL"
  IDLE_ELAPSED=$((IDLE_ELAPSED + PHASE_POLL_INTERVAL))

  # --- Session health check ---
  if ! tmux has-session -t "$SESSION_NAME" 2>/dev/null; then
    CURRENT_PHASE=$(read_phase)
    case "$CURRENT_PHASE" in
      PHASE:done|PHASE:failed)
        # Expected terminal phase — exit loop
        break
        ;;
      *)
        if [ "$CRASHED" = true ]; then
          log "ERROR: session crashed again after recovery — giving up"
          break
        fi
        CRASHED=true
        log "WARNING: tmux session died unexpectedly (phase: ${CURRENT_PHASE:-none})"
        # Attempt one crash recovery
        RECOVERY_MSG="The previous gardener session was interrupted unexpectedly.

Re-run your analysis from scratch:
1. Fetch open issues and identify problems using the Codeberg API
2. Take all necessary actions (close dupes, add criteria, promote tech-debt, etc.)
3. Write structured output to ${RESULT_FILE}:
- echo \"ACTION: ...\" >> '${RESULT_FILE}'
- echo 'DUST: {...}' >> '${RESULT_FILE}'
- printf 'ESCALATE\n1. ...\n' >> '${RESULT_FILE}'
4. When finished: echo 'PHASE:done' > '${PHASE_FILE}'"

        rm -f "$RESULT_FILE"
        touch "$RESULT_FILE"
        if create_agent_session "$SESSION_NAME" "$PROJECT_REPO_ROOT" 2>/dev/null; then
          inject_into_session "$RECOVERY_MSG"
          log "Recovery session started"
          IDLE_ELAPSED=0
        else
          log "ERROR: could not restart session after crash"
          break
        fi
        continue
        ;;
    esac
  fi

  # --- Check phase file for changes ---
  PHASE_MTIME=$(stat -c %Y "$PHASE_FILE" 2>/dev/null || echo 0)
  CURRENT_PHASE=$(read_phase)

  if [ -z "$CURRENT_PHASE" ] || [ "$PHASE_MTIME" -le "$LAST_PHASE_MTIME" ]; then
    # No phase change — check idle timeout
    if [ "$IDLE_ELAPSED" -ge "$MAX_RUNTIME" ]; then
      log "TIMEOUT: gardener session idle for ${MAX_RUNTIME}s — killing"
      matrix_send "gardener" "⚠️ Gardener session timed out after ${MAX_RUNTIME}s" 2>/dev/null || true
      kill_tmux_session
      break
    fi
    continue
  fi

  # Phase changed
  LAST_PHASE_MTIME="$PHASE_MTIME"
  IDLE_ELAPSED=0
  log "phase: ${CURRENT_PHASE}"

  if [ "$CURRENT_PHASE" = "PHASE:done" ] || [ "$CURRENT_PHASE" = "PHASE:failed" ]; then
    kill_tmux_session
    break
  fi
done

FINAL_PHASE=$(read_phase)
log "Final phase: ${FINAL_PHASE:-none}"

if [ "$FINAL_PHASE" != "PHASE:done" ]; then
  log "gardener-agent finished without PHASE:done (phase: ${FINAL_PHASE:-none})"
  exit 0
fi

log "claude finished — parsing result file"

# ── Parse result file ─────────────────────────────────────────────────────
CLAUDE_OUTPUT=""
if [ -s "$RESULT_FILE" ]; then
  CLAUDE_OUTPUT=$(cat "$RESULT_FILE")
fi

# ── Parse escalations ─────────────────────────────────────────────────────
ESCALATION=$(echo "$CLAUDE_OUTPUT" | awk '/^ESCALATE$/{found=1;next} found && /^(ACTION:|DUST:|CLEAN|PHASE:)/{found=0} found{print}' || true)
if [ -z "$ESCALATION" ]; then
  ESCALATION=$(echo "$CLAUDE_OUTPUT" | grep -A50 "^ESCALATE" | grep -E '^[0-9]' || true)
fi
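The awk extraction above can be checked against a sample transcript; it keeps only the lines between the `ESCALATE` marker and the next structured line:

```shell
# Illustrative: the awk program isolates the numbered escalation lines
# between an ESCALATE marker and the next ACTION:/DUST:/CLEAN/PHASE: line.
out='ACTION: promoted #5
ESCALATE
1. #9 "dup?" (a) close (b) keep
2. #11 "split?" (a) split (b) leave
ACTION: done'
echo "$out" | awk '/^ESCALATE$/{found=1;next} found && /^(ACTION:|DUST:|CLEAN|PHASE:)/{found=0} found{print}'
# prints only the two numbered lines
```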

if [ -n "$ESCALATION" ]; then
  ITEM_COUNT=$(echo "$ESCALATION" | grep -c '.' || true)
  log "Escalating $ITEM_COUNT items to human"

  # Send via Matrix (threaded — replies route back via listener)
  matrix_send "gardener" "🌱 Issue Gardener — ${ITEM_COUNT} item(s) need attention

${ESCALATION}

Reply with numbers+letters (e.g. 1a 2c) to decide." 2>/dev/null || true
fi

# ── Log actions taken ─────────────────────────────────────────────────────
ACTIONS=$(echo "$CLAUDE_OUTPUT" | grep "^ACTION:" || true)
if [ -n "$ACTIONS" ]; then
  echo "$ACTIONS" | while read -r line; do
    log "  $line"
  done
fi

# ── Collect dust items ────────────────────────────────────────────────────
# DUST_FILE already set above (before prompt construction)
DUST_LINES=$(echo "$CLAUDE_OUTPUT" | grep "^DUST: " | sed 's/^DUST: //' || true)
if [ -n "$DUST_LINES" ]; then
  # Build set of issue numbers already in dust.jsonl for dedup
  EXISTING_DUST_ISSUES=""
  if [ -s "$DUST_FILE" ]; then
    EXISTING_DUST_ISSUES=$(jq -r '.issue' "$DUST_FILE" 2>/dev/null | sort -nu || true)
  fi

  DUST_COUNT=0
  while IFS= read -r dust_json; do
    [ -z "$dust_json" ] && continue
    # Validate JSON
    if ! echo "$dust_json" | jq -e '.issue and .group' >/dev/null 2>&1; then
      log "WARNING: invalid dust JSON: $dust_json"
      continue
    fi
    # Deduplicate: skip if this issue is already staged
    dust_issue_num=$(echo "$dust_json" | jq -r '.issue')
    if echo "$EXISTING_DUST_ISSUES" | grep -qx "$dust_issue_num" 2>/dev/null; then
      log "Skipping duplicate dust entry for issue #${dust_issue_num}"
      continue
    fi
    EXISTING_DUST_ISSUES="${EXISTING_DUST_ISSUES}
${dust_issue_num}"
    echo "$dust_json" | jq -c '. + {"ts": "'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'"}' >> "$DUST_FILE"
    DUST_COUNT=$((DUST_COUNT + 1))
  done <<< "$DUST_LINES"
  log "Collected $DUST_COUNT dust item(s) (duplicates skipped)"
fi

# ── Expire stale dust entries (30-day TTL) ────────────────────────────────
if [ -s "$DUST_FILE" ]; then
  CUTOFF=$(date -u -d '30 days ago' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || true)
  if [ -n "$CUTOFF" ]; then
    BEFORE_COUNT=$(wc -l < "$DUST_FILE")
    if jq -c --arg c "$CUTOFF" 'select(.ts >= $c)' "$DUST_FILE" > "${DUST_FILE}.ttl" 2>/dev/null; then
      mv "${DUST_FILE}.ttl" "$DUST_FILE"
      AFTER_COUNT=$(wc -l < "$DUST_FILE")
      EXPIRED=$((BEFORE_COUNT - AFTER_COUNT))
      [ "$EXPIRED" -gt 0 ] && log "Expired $EXPIRED stale dust entries (>30 days old)"
    else
      rm -f "${DUST_FILE}.ttl"
      log "WARNING: TTL cleanup failed — dust.jsonl left unchanged"
    fi
  fi
fi
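The TTL comparison `.ts >= $c` works because ISO-8601 UTC timestamps sort lexicographically, so no date parsing is needed. A minimal illustration of the same string comparison:

```shell
# ISO-8601 UTC timestamps compare correctly as plain strings, so a
# string >= test against the cutoff implements the 30-day TTL check.
printf '%s\n' 2025-01-01T00:00:00Z 2025-03-01T00:00:00Z \
  | awk -v cutoff="2025-02-01T00:00:00Z" '$0 >= cutoff'
# prints only 2025-03-01T00:00:00Z; the older entry would be expired
```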
|
||||||
|
|
+
+# ── Bundle dust groups with 3+ distinct issues ──────────────────────────
+if [ -s "$DUST_FILE" ]; then
+  # Count distinct issues per group (not raw entries)
+  DUST_GROUPS=$(jq -r '[.group, (.issue | tostring)] | join("\t")' "$DUST_FILE" 2>/dev/null \
+    | sort -u | cut -f1 | sort | uniq -c | sort -rn || true)
+  while read -r count group; do
+    [ -z "$group" ] && continue
+    [ "$count" -lt 3 ] && continue
+
+    log "Bundling dust group '$group' ($count distinct issues)"
+
+    # Collect deduplicated issue references and details for this group
+    BUNDLE_ISSUES=$(jq -r --arg g "$group" 'select(.group == $g) | "#\(.issue) \(.title // "untitled") — \(.reason // "dust")"' "$DUST_FILE" | sort -u)
+    BUNDLE_ISSUE_NUMS=$(jq -r --arg g "$group" 'select(.group == $g) | .issue' "$DUST_FILE" | sort -nu)
+    DISTINCT_COUNT=$(echo "$BUNDLE_ISSUE_NUMS" | grep -c '.' || true)
+
+    bundle_title="fix: bundled dust cleanup — ${group}"
+    bundle_body="## Bundled dust cleanup — \`${group}\`
+
+Gardener bundled ${DISTINCT_COUNT} trivial tech-debt items into one issue to save factory cycles.
+
+### Items
+$(echo "$BUNDLE_ISSUES" | sed 's/^/- /')
+
+### Instructions
+Fix all items above in a single PR. Each is a small change (rename, comment, style fix, single-line edit).
+
+### Affected files
+- Files in \`${group}\` subsystem
+
+### Acceptance criteria
+- [ ] All listed items resolved
+- [ ] ShellCheck passes"
+
+    new_bundle=$(curl -sf -X POST \
+      -H "Authorization: token ${CODEBERG_TOKEN}" \
+      -H "Content-Type: application/json" \
+      "${CODEBERG_API}/issues" \
+      -d "$(jq -nc --arg t "$bundle_title" --arg b "$bundle_body" \
+        '{"title":$t,"body":$b,"labels":["backlog"]}')" 2>/dev/null | jq -r '.number // ""') || true
+
+    if [ -n "$new_bundle" ]; then
+      log "Created bundle issue #${new_bundle} for dust group '$group' ($DISTINCT_COUNT items)"
+      matrix_send "gardener" "📦 Bundled ${DISTINCT_COUNT} dust items (${group}) → #${new_bundle}" 2>/dev/null || true
+
+      # Close source issues with cross-reference
+      for src_issue in $BUNDLE_ISSUE_NUMS; do
+        curl -sf -X POST \
+          -H "Authorization: token ${CODEBERG_TOKEN}" \
+          -H "Content-Type: application/json" \
+          "${CODEBERG_API}/issues/${src_issue}/comments" \
+          -d "$(jq -nc --arg b "Bundled into #${new_bundle} (dust cleanup)" '{"body":$b}')" 2>/dev/null || true
+        curl -sf -X PATCH \
+          -H "Authorization: token ${CODEBERG_TOKEN}" \
+          -H "Content-Type: application/json" \
+          "${CODEBERG_API}/issues/${src_issue}" \
+          -d '{"state":"closed"}' 2>/dev/null || true
+        log "Closed source issue #${src_issue} → bundled into #${new_bundle}"
+      done
+
+      # Remove bundled items from dust.jsonl — only if jq succeeds
+      if jq -c --arg g "$group" 'select(.group != $g)' "$DUST_FILE" > "${DUST_FILE}.tmp" 2>/dev/null; then
+        mv "${DUST_FILE}.tmp" "$DUST_FILE"
+      else
+        rm -f "${DUST_FILE}.tmp"
+        log "WARNING: failed to prune bundled group '$group' from dust.jsonl"
+      fi
+    fi
+  done <<< "$DUST_GROUPS"
+fi
+
+log "--- gardener-agent done ---"
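The group-counting pipeline (`jq … | sort -u | cut -f1 | sort | uniq -c | sort -rn`) counts distinct issues per group, not raw DUST lines, so a re-emitted entry for the same issue cannot push a group over the 3-item bundling threshold. A toy run (data invented for illustration):

```shell
# Toy dust.jsonl: group "lib" has 3 distinct issues, one of them duplicated;
# group "dev" has 1.
tmp=$(mktemp -d)
dust="$tmp/dust.jsonl"
printf '%s\n' \
  '{"group":"lib","issue":10}' \
  '{"group":"lib","issue":10}' \
  '{"group":"lib","issue":11}' \
  '{"group":"lib","issue":12}' \
  '{"group":"dev","issue":20}' > "$dust"

# Emit "group<TAB>issue" pairs, dedupe the pairs, then count per group.
groups=$(jq -r '[.group, (.issue | tostring)] | join("\t")' "$dust" \
  | sort -u | cut -f1 | sort | uniq -c | sort -rn)
echo "$groups"
# First line: "3 lib" (the duplicate #10 entry collapsed), then "1 dev"
```

Without the intermediate `sort -u` on the pairs, "lib" would count 4 and a single noisy issue could trigger a premature bundle.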
@@ -1,29 +1,24 @@
 #!/usr/bin/env bash
 # =============================================================================
-# gardener-poll.sh — Issue backlog grooming agent
+# gardener-poll.sh — Cron wrapper for the gardener agent
 #
-# Cron: daily (or 2x/day). Reads open issues, detects problems, invokes
-# claude -p to fix or escalate.
+# Cron: daily (or 2x/day). Handles lock management, escalation reply
+# injection, and delegates backlog grooming to gardener-agent.sh.
+# Then processes dev-agent CI escalations via the recipe engine.
 #
-# Problems detected (bash, zero tokens):
+# Grooming (delegated to gardener-agent.sh):
 # - Duplicate titles / overlapping scope
 # - Missing acceptance criteria
-# - Missing dependencies (references other issues but no dep link)
-# - Oversized issues (too many acceptance criteria or change files)
-# - Stale issues (no activity > 14 days, still open)
-# - Closed issues with open dependents still referencing them
+# - Stale issues (no activity > 14 days)
+# - Blockers starving the factory
+# - Tech-debt promotion / dust bundling
 #
-# Actions taken (claude -p):
-# - Close duplicates with cross-reference comment
-# - Add acceptance criteria template
-# - Set dependency labels
-# - Split oversized issues (create sub-issues, close parent)
-# - Escalate decisions to human via openclaw system event
-#
-# Escalation format (compact, decision-ready):
-# 🌱 Issue Gardener — N items need attention
-# 1. #123 "title" — duplicate of #456? (a) close #123 (b) close #456 (c) merge scope
-# 2. #789 "title" — needs decision: (a) backlog (b) wontfix (c) split into X,Y
+# CI escalation (recipe-driven, handled here):
+# - ShellCheck per-file sub-issues
+# - Generic CI failure issues
+# - Chicken-egg CI handling
+# - Cascade rebase + retry merge
+# - Flaky test quarantine
 # =============================================================================
 set -euo pipefail

@@ -38,7 +33,6 @@ source "$FACTORY_ROOT/lib/env.sh"

 LOG_FILE="$SCRIPT_DIR/gardener.log"
 LOCK_FILE="/tmp/gardener-poll.lock"
-CLAUDE_TIMEOUT="${CLAUDE_TIMEOUT:-3600}"

 log() { echo "[$(date -u +%Y-%m-%dT%H:%M:%S)Z] $*" >> "$LOG_FILE"; }

@@ -63,6 +57,7 @@ if [ -s /tmp/gardener-escalation-reply ]; then
   rm -f /tmp/gardener-escalation-reply
   log "Got escalation reply: $(echo "$ESCALATION_REPLY" | head -1)"
 fi
+export ESCALATION_REPLY

 # ── Inject human replies into needs_human dev sessions (backup to supervisor) ─
 HUMAN_REPLY_FILE="/tmp/dev-escalation-reply"
@@ -106,395 +101,9 @@ Instructions:
   break # only one reply to deliver
 done

-# ── Fetch all open issues ─────────────────────────────────────────────────
-ISSUES_JSON=$(codeberg_api GET "/issues?state=open&type=issues&limit=50&sort=updated&direction=desc" 2>/dev/null || true)
-if [ -z "$ISSUES_JSON" ] || [ "$ISSUES_JSON" = "null" ]; then
-  log "Failed to fetch issues"
-  exit 1
-fi
+# ── Backlog grooming (delegated to gardener-agent.sh) ────────────────────
+log "Invoking gardener-agent.sh for backlog grooming"
+bash "$SCRIPT_DIR/gardener-agent.sh" "${1:-}" || log "WARNING: gardener-agent.sh exited with error"

-ISSUE_COUNT=$(echo "$ISSUES_JSON" | jq 'length')
-log "Found $ISSUE_COUNT open issues"

-if [ "$ISSUE_COUNT" -eq 0 ]; then
-  log "No open issues — nothing to groom"
-  exit 0
-fi

-# ── Bash pre-checks (zero tokens) ────────────────────────────────────────

-PROBLEMS=""

-# 1. Duplicate detection: issues with very similar titles
-TITLES=$(echo "$ISSUES_JSON" | jq -r '.[] | "\(.number)\t\(.title)"')
-DUPES=""
-while IFS=$'\t' read -r num1 title1; do
-  while IFS=$'\t' read -r num2 title2; do
-    [ "$num1" -ge "$num2" ] && continue
-    # Normalize: lowercase, strip prefixes + series names, collapse whitespace
-    t1=$(echo "$title1" | tr '[:upper:]' '[:lower:]' | sed 's/^feat:\|^fix:\|^refactor://;s/llm seed[^—]*—\s*//;s/push3 evolution[^—]*—\s*//;s/[^a-z0-9 ]//g;s/ */ /g')
-    t2=$(echo "$title2" | tr '[:upper:]' '[:lower:]' | sed 's/^feat:\|^fix:\|^refactor://;s/llm seed[^—]*—\s*//;s/push3 evolution[^—]*—\s*//;s/[^a-z0-9 ]//g;s/ */ /g')
-    # Count shared words (>60% overlap = suspect)
-    WORDS1=$(echo "$t1" | tr ' ' '\n' | sort -u)
-    WORDS2=$(echo "$t2" | tr ' ' '\n' | sort -u)
-    SHARED=$(comm -12 <(echo "$WORDS1") <(echo "$WORDS2") | wc -l)
-    TOTAL1=$(echo "$WORDS1" | wc -l)
-    TOTAL2=$(echo "$WORDS2" | wc -l)
-    MIN_TOTAL=$(( TOTAL1 < TOTAL2 ? TOTAL1 : TOTAL2 ))
-    if [ "$MIN_TOTAL" -gt 2 ] && [ "$SHARED" -gt 0 ]; then
-      OVERLAP=$(( SHARED * 100 / MIN_TOTAL ))
-      if [ "$OVERLAP" -ge 60 ]; then
-        DUPES="${DUPES}possible_dupe: #${num1} vs #${num2} (${OVERLAP}% word overlap)\n"
-      fi
-    fi
-  done <<< "$TITLES"
-done <<< "$TITLES"
-[ -n "$DUPES" ] && PROBLEMS="${PROBLEMS}${DUPES}"
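For reference, the word-overlap heuristic in the block above can be exercised in isolation. A sketch with two invented titles, already normalized the way the script normalizes them:

```shell
# Two hypothetical normalized titles to compare.
t1="add gardener dust bundling support"
t2="gardener dust bundling cleanup"

# Unique word sets, one word per line, sorted for comm(1).
WORDS1=$(echo "$t1" | tr ' ' '\n' | sort -u)
WORDS2=$(echo "$t2" | tr ' ' '\n' | sort -u)

# comm -12 prints only lines common to both sorted inputs.
SHARED=$(comm -12 <(echo "$WORDS1") <(echo "$WORDS2") | wc -l)
TOTAL1=$(echo "$WORDS1" | wc -l)
TOTAL2=$(echo "$WORDS2" | wc -l)
MIN_TOTAL=$(( TOTAL1 < TOTAL2 ? TOTAL1 : TOTAL2 ))

# Overlap relative to the SHORTER title, so a terse duplicate still scores high.
OVERLAP=$(( SHARED * 100 / MIN_TOTAL ))
echo "$OVERLAP"   # 3 shared words / min(5,4) words = 75, above the 60% threshold
```

Dividing by the smaller word count is the design choice worth noting: "fix gardener" vs "fix gardener dust bundling in poll loop" still flags, whereas dividing by the union would dilute short titles.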

-# 2. Missing acceptance criteria: issues with short body and no checkboxes
-while IFS=$'\t' read -r num body_len has_checkbox; do
-  if [ "$body_len" -lt 100 ] && [ "$has_checkbox" = "false" ]; then
-    PROBLEMS="${PROBLEMS}thin_issue: #${num} — body < 100 chars, no acceptance criteria\n"
-  fi
-done < <(echo "$ISSUES_JSON" | jq -r '.[] | "\(.number)\t\(.body | length)\t\(.body | test("- \\[[ x]\\]") // false)"')

-# 3. Stale issues: no update in 14+ days
-NOW_EPOCH=$(date +%s)
-while IFS=$'\t' read -r num updated_at; do
-  UPDATED_EPOCH=$(date -d "$updated_at" +%s 2>/dev/null || echo 0)
-  AGE_DAYS=$(( (NOW_EPOCH - UPDATED_EPOCH) / 86400 ))
-  if [ "$AGE_DAYS" -ge 14 ]; then
-    PROBLEMS="${PROBLEMS}stale: #${num} — no activity for ${AGE_DAYS} days\n"
-  fi
-done < <(echo "$ISSUES_JSON" | jq -r '.[] | "\(.number)\t\(.updated_at)"')
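The staleness check is plain integer epoch arithmetic: parse `updated_at`, subtract from now, divide by 86400 seconds. A self-contained sketch (the timestamp is fabricated from the current clock; GNU `date -d` assumed, matching the script's own usage):

```shell
NOW_EPOCH=$(date +%s)

# A hypothetical issue last touched exactly 20 days before NOW_EPOCH,
# rendered in the same ISO-8601 shape the Codeberg API returns.
updated_at=$(date -u -d "@$((NOW_EPOCH - 20 * 86400))" +%Y-%m-%dT%H:%M:%SZ)

UPDATED_EPOCH=$(date -d "$updated_at" +%s 2>/dev/null || echo 0)
AGE_DAYS=$(( (NOW_EPOCH - UPDATED_EPOCH) / 86400 ))
echo "$AGE_DAYS"   # 20, past the 14-day stale threshold
```

The `|| echo 0` fallback is worth keeping in mind: an unparseable timestamp makes the issue look epoch-old, i.e. maximally stale, which fails safe toward flagging rather than skipping.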

-# 4. Issues referencing closed deps
-while IFS=$'\t' read -r num body; do
-  REFS=$(echo "$body" | grep -oP '#\d+' | grep -oP '\d+' | sort -u || true)
-  for ref in $REFS; do
-    [ "$ref" = "$num" ] && continue
-    REF_STATE=$(echo "$ISSUES_JSON" | jq -r --arg n "$ref" '.[] | select(.number == ($n | tonumber)) | .state' 2>/dev/null || true)
-    # If ref not in our open set, check if it's closed
-    if [ -z "$REF_STATE" ]; then
-      REF_STATE=$(codeberg_api GET "/issues/$ref" 2>/dev/null | jq -r '.state // "unknown"' 2>/dev/null || true)
-      # Rate limit protection
-      sleep 0.5
-    fi
-  done
-done < <(echo "$ISSUES_JSON" | jq -r '.[] | "\(.number)\t\(.body // "")"' | head -20)

-# 5. Blocker detection: find issues blocking backlog items that aren't themselves backlog
-# This is the HIGHEST PRIORITY — a non-backlog blocker starves the entire factory
-BACKLOG_ISSUES=$(echo "$ISSUES_JSON" | jq -r '.[] | select(.labels | map(.name) | index("backlog")) | .number')
-BLOCKER_NUMS=""
-for BNUM in $BACKLOG_ISSUES; do
-  BBODY=$(echo "$ISSUES_JSON" | jq -r --arg n "$BNUM" '.[] | select(.number == ($n | tonumber)) | .body // ""')
-  # Extract deps from ## Dependencies / ## Depends on / ## Blocked by
-  IN_SECTION=false
-  while IFS= read -r line; do
-    if echo "$line" | grep -qiP '^##?\s*(Dependencies|Depends on|Blocked by)'; then IN_SECTION=true; continue; fi
-    if echo "$line" | grep -qP '^##?\s' && [ "$IN_SECTION" = true ]; then IN_SECTION=false; fi
-    if [ "$IN_SECTION" = true ]; then
-      for dep in $(echo "$line" | grep -oP '#\d+' | grep -oP '\d+'); do
-        [ "$dep" = "$BNUM" ] && continue
-        # Check if dep is open but NOT backlog-labeled
-        DEP_STATE=$(echo "$ISSUES_JSON" | jq -r --arg n "$dep" '.[] | select(.number == ($n | tonumber)) | .state' 2>/dev/null || true)
-        DEP_LABELS=$(echo "$ISSUES_JSON" | jq -r --arg n "$dep" '.[] | select(.number == ($n | tonumber)) | [.labels[].name] | join(",")' 2>/dev/null || true)
-        if [ "$DEP_STATE" = "open" ] && ! echo ",$DEP_LABELS," | grep -q ',backlog,'; then
-          BLOCKER_NUMS="${BLOCKER_NUMS} ${dep}"
-        fi
-      done
-    fi
-  done <<< "$BBODY"
-done
-# Deduplicate blockers
-BLOCKER_NUMS=$(echo "$BLOCKER_NUMS" | tr ' ' '\n' | sort -un | head -10)
-if [ -n "$BLOCKER_NUMS" ]; then
-  BLOCKER_LIST=""
-  for bnum in $BLOCKER_NUMS; do
-    BTITLE=$(echo "$ISSUES_JSON" | jq -r --arg n "$bnum" '.[] | select(.number == ($n | tonumber)) | .title' 2>/dev/null || true)
-    BLABELS=$(echo "$ISSUES_JSON" | jq -r --arg n "$bnum" '.[] | select(.number == ($n | tonumber)) | [.labels[].name] | join(",")' 2>/dev/null || true)
-    BLOCKER_LIST="${BLOCKER_LIST}#${bnum} [${BLABELS:-unlabeled}] ${BTITLE}\n"
-  done
-  PROBLEMS="${PROBLEMS}PRIORITY_blockers_starving_factory: these issues block backlog items but are NOT labeled backlog — promote them FIRST:\n${BLOCKER_LIST}\n"
-fi

-# 6. Tech-debt issues needing promotion to backlog (secondary to blockers)
-TECH_DEBT_ISSUES=$(echo "$ISSUES_JSON" | jq -r '.[] | select(.labels | map(.name) | index("tech-debt")) | "#\(.number) \(.title)"')
-if [ -n "$TECH_DEBT_ISSUES" ]; then
-  TECH_DEBT_COUNT=$(echo "$TECH_DEBT_ISSUES" | wc -l)
-  PROBLEMS="${PROBLEMS}tech_debt_promotion: ${TECH_DEBT_COUNT} tech-debt issues need processing (goal: zero tech-debt):\n$(echo "$TECH_DEBT_ISSUES" | head -50)\n"
-fi

-PROBLEM_COUNT=$(echo -e "$PROBLEMS" | grep -c '.' || true)
-log "Detected $PROBLEM_COUNT potential problems"

-if [ "$PROBLEM_COUNT" -eq 0 ]; then
-  log "Backlog is clean — nothing to groom"
-  exit 0
-fi

-# ── Invoke claude -p ──────────────────────────────────────────────────────
-log "Invoking claude -p for grooming"

-# Build issue summary for context (titles + labels + deps)
-ISSUE_SUMMARY=$(echo "$ISSUES_JSON" | jq -r '.[] | "#\(.number) [\(.labels | map(.name) | join(","))] \(.title)"')

-# Build list of issues already staged as dust (so LLM doesn't re-emit them)
-DUST_FILE="$SCRIPT_DIR/dust.jsonl"
-STAGED_DUST=""
-if [ -s "$DUST_FILE" ]; then
-  STAGED_DUST=$(jq -r '"#\(.issue) (\(.group))"' "$DUST_FILE" 2>/dev/null | sort -u || true)
-fi

-PROMPT="You are the issue gardener for ${CODEBERG_REPO}. Your job: keep the backlog clean, well-structured, and actionable.

-## Current open issues
-$ISSUE_SUMMARY

-## Problems detected
-$(echo -e "$PROBLEMS")

-## Tools available
-- Codeberg API: use curl with the CODEBERG_TOKEN env var (already set in your environment)
-- Base URL: ${CODEBERG_API}
-- Read issue: \`curl -sf -H \"Authorization: token \$CODEBERG_TOKEN\" '${CODEBERG_API}/issues/{number}' | jq '.body'\`
-- Relabel: \`curl -sf -H \"Authorization: token \$CODEBERG_TOKEN\" -X PUT -H 'Content-Type: application/json' '${CODEBERG_API}/issues/{number}/labels' -d '{\"labels\":[LABEL_ID]}'\`
-- Comment: \`curl -sf -H \"Authorization: token \$CODEBERG_TOKEN\" -X POST -H 'Content-Type: application/json' '${CODEBERG_API}/issues/{number}/comments' -d '{\"body\":\"...\"}'\`
-- Close: \`curl -sf -H \"Authorization: token \$CODEBERG_TOKEN\" -X PATCH -H 'Content-Type: application/json' '${CODEBERG_API}/issues/{number}' -d '{\"state\":\"closed\"}'\`
-- Edit body: \`curl -sf -H \"Authorization: token \$CODEBERG_TOKEN\" -X PATCH -H 'Content-Type: application/json' '${CODEBERG_API}/issues/{number}' -d '{\"body\":\"new body\"}'\`
-- List labels: \`curl -sf -H \"Authorization: token \$CODEBERG_TOKEN\" '${CODEBERG_API}/labels'\` (to find label IDs)
-- NEVER echo, log, or include the actual token value in any output — always reference \$CODEBERG_TOKEN
-- You're running in the project repo root. Read README.md and any docs/ files before making decisions.

-## Primary mission: unblock the factory
-Issues prefixed with PRIORITY_blockers_starving_factory are your TOP priority. These are non-backlog issues that block existing backlog items — the dev-agent is completely starved until these are promoted. Process ALL of them before touching regular tech-debt.

-## Your objective: zero tech-debt issues

-Tech-debt is unprocessed work — it sits outside the factory pipeline
-(dev-agent only pulls backlog). Every tech-debt issue is a decision
-you haven't made yet:

-- Substantial? → promote to backlog (add affected files, acceptance
-  criteria, dependencies)
-- Dust? → bundle into an ore issue
-- Duplicate? → close with cross-reference
-- Invalid/wontfix? → close with explanation
-- Needs human decision? → escalate

-Process ALL tech-debt issues every run. The goal is zero tech-debt
-when you're done. If you can't reach zero (needs human input,
-unclear scope), escalate those specifically and close out everything
-else.

-Tech-debt is your inbox. An empty inbox is a healthy factory.

-## Dust vs Ore — bundle trivial tech-debt
-Don't promote trivial tech-debt individually — each costs a full factory cycle (CI + dev-agent + review + merge). If an issue is dust (comment fix, rename, style-only, single-line change, trivial cleanup), output a DUST line instead of promoting:

-DUST: {\"issue\": NNN, \"group\": \"<file-or-subsystem>\", \"title\": \"issue title\", \"reason\": \"why it's dust\"}

-Group by file or subsystem (e.g. \"gardener\", \"lib/env.sh\", \"dev-poll\"). The script collects dust items into a staging file. When a group accumulates 3+ items, the script bundles them into one backlog issue automatically.

-Only promote tech-debt that is substantial: multi-file changes, behavioral fixes, architectural improvements. Dust is any issue where the fix is a single-line edit, a rename, a comment tweak, or a style-only change.
-$(if [ -n "$STAGED_DUST" ]; then echo "
-These issues are ALREADY staged as dust — do NOT emit DUST lines for them again:
-${STAGED_DUST}"; fi)

-## Other rules
-1. **Duplicates**: If confident (>80% overlap + same scope after reading bodies), close the newer one with a comment referencing the older. If unsure, ESCALATE.
-2. **Thin issues** (non-tech-debt): Add acceptance criteria. Read the body first.
-3. **Stale issues**: If clearly superseded or no longer relevant, close with explanation. If unclear, ESCALATE.
-4. **Oversized issues**: If >5 acceptance criteria touching different files/concerns, ESCALATE with suggested split.
-5. **Dependencies**: If an issue references another that must land first, add a \`## Dependencies\n- #NNN\` section if missing.
-6. **Sibling issues**: When creating multiple issues from the same source (PR review, code audit), NEVER add bidirectional dependencies between them. Siblings are independent work items, not parent/child. Use \`## Related\n- #NNN (sibling)\` for cross-references between siblings — NOT \`## Dependencies\`. The dev-poll \`get_deps()\` parser only reads \`## Dependencies\` / \`## Depends on\` / \`## Blocked by\` headers, so \`## Related\` is safely ignored. Bidirectional deps create permanent deadlocks that stall the entire factory.

-## Escalation format
-For anything needing human decision, output EXACTLY this format (one block, all items):
-\`\`\`
-ESCALATE
-1. #NNN \"title\" — reason (a) option1 (b) option2 (c) option3
-2. #NNN \"title\" — reason (a) option1 (b) option2
-\`\`\`

-## Output format (MANDATORY — the script parses these exact prefixes)
-- After EVERY action you take, print exactly: ACTION: <description>
-- For trivial tech-debt (dust), print exactly: DUST: {\"issue\": NNN, \"group\": \"<subsystem>\", \"title\": \"...\", \"reason\": \"...\"}
-- For issues needing human decision, output EXACTLY:
-  ESCALATE
-  1. #NNN \"title\" — reason (a) option1 (b) option2
-- If truly nothing to do, print: CLEAN

-## Important
-- You MUST process the tech_debt_promotion items listed above. Read each issue, add acceptance criteria + affected files, then relabel to backlog.
-- If an issue is ambiguous or needs a design decision, ESCALATE it — don't skip it silently.
-- Every tech-debt issue in the list above should result in either an ACTION (promoted) or an ESCALATE (needs decision). Never skip silently.
-$(if [ -n "$ESCALATION_REPLY" ]; then echo "
-## Human Response to Previous Escalation
-The human replied with shorthand choices keyed to the previous ESCALATE block.
-Format: '1a 2c 3b' means question 1→option (a), question 2→option (c), question 3→option (b).

-Raw reply:
-${ESCALATION_REPLY}

-Execute each chosen option NOW via the Codeberg API before processing new items.
-If a choice is unclear, re-escalate that single item with a clarifying question."; fi)"

-CLAUDE_OUTPUT=$(cd "${PROJECT_REPO_ROOT}" && CODEBERG_TOKEN="$CODEBERG_TOKEN" timeout "$CLAUDE_TIMEOUT" \
-  claude -p "$PROMPT" \
-  --model sonnet \
-  --dangerously-skip-permissions \
-  --max-turns 30 \
-  2>/dev/null) || true

-log "claude finished ($(echo "$CLAUDE_OUTPUT" | wc -c) bytes)"

-# ── Parse escalations ────────────────────────────────────────────────────
-ESCALATION=$(echo "$CLAUDE_OUTPUT" | sed -n '/^ESCALATE$/,/^```$/p' | grep -v '^ESCALATE$\|^```$' || true)
-if [ -z "$ESCALATION" ]; then
-  ESCALATION=$(echo "$CLAUDE_OUTPUT" | grep -A50 "^ESCALATE" | grep '^\d' || true)
-fi

-if [ -n "$ESCALATION" ]; then
-  ITEM_COUNT=$(echo "$ESCALATION" | grep -c '.' || true)
-  log "Escalating $ITEM_COUNT items to human"

-  # Send via Matrix (threaded — replies route back via listener)
-  matrix_send "gardener" "🌱 Issue Gardener — ${ITEM_COUNT} item(s) need attention

-${ESCALATION}

-Reply with numbers+letters (e.g. 1a 2c) to decide." 2>/dev/null || true
-fi
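The `1a 2c` reply shorthand sent to Matrix above is interpreted by the LLM on the next run rather than parsed in bash. Purely to illustrate the format, a hypothetical tokenizer (variable names invented):

```shell
# A hypothetical human reply: item 1 -> option a, item 2 -> option c, item 3 -> option b.
REPLY="1a 2c 3b"

# Pull out number+letter tokens, then render each as an item/option pair.
parsed=$(echo "$REPLY" | grep -oE '[0-9]+[a-z]' \
  | sed -E 's/^([0-9]+)([a-z])$/item \1 -> option \2/')
echo "$parsed"
# item 1 -> option a
# item 2 -> option c
# item 3 -> option b
```

Because `grep -oE` simply ignores anything that is not a number+letter token, a chatty reply like "ok, 1a and 2c please" still yields the same pairs.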

-# ── Log actions taken ─────────────────────────────────────────────────────
-ACTIONS=$(echo "$CLAUDE_OUTPUT" | grep "^ACTION:" || true)
-if [ -n "$ACTIONS" ]; then
-  echo "$ACTIONS" | while read -r line; do
-    log "  $line"
-  done
-fi

-# ── Collect dust items ───────────────────────────────────────────────────
-# DUST_FILE already set above (before prompt construction)
-DUST_LINES=$(echo "$CLAUDE_OUTPUT" | grep "^DUST: " | sed 's/^DUST: //' || true)
-if [ -n "$DUST_LINES" ]; then
-  # Build set of issue numbers already in dust.jsonl for dedup
-  EXISTING_DUST_ISSUES=""
-  if [ -s "$DUST_FILE" ]; then
-    EXISTING_DUST_ISSUES=$(jq -r '.issue' "$DUST_FILE" 2>/dev/null | sort -nu || true)
-  fi

-  DUST_COUNT=0
-  while IFS= read -r dust_json; do
-    [ -z "$dust_json" ] && continue
-    # Validate JSON
-    if ! echo "$dust_json" | jq -e '.issue and .group' >/dev/null 2>&1; then
-      log "WARNING: invalid dust JSON: $dust_json"
-      continue
-    fi
-    # Deduplicate: skip if this issue is already staged
-    dust_issue_num=$(echo "$dust_json" | jq -r '.issue')
-    if echo "$EXISTING_DUST_ISSUES" | grep -qx "$dust_issue_num" 2>/dev/null; then
-      log "Skipping duplicate dust entry for issue #${dust_issue_num}"
-      continue
-    fi
-    EXISTING_DUST_ISSUES="${EXISTING_DUST_ISSUES}
-${dust_issue_num}"
-    echo "$dust_json" | jq -c '. + {"ts": "'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'"}' >> "$DUST_FILE"
-    DUST_COUNT=$((DUST_COUNT + 1))
-  done <<< "$DUST_LINES"
-  log "Collected $DUST_COUNT dust item(s) (duplicates skipped)"
-fi

-# ── Expire stale dust entries (30-day TTL) ───────────────────────────────
-if [ -s "$DUST_FILE" ]; then
-  CUTOFF=$(date -u -d '30 days ago' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || true)
-  if [ -n "$CUTOFF" ]; then
-    BEFORE_COUNT=$(wc -l < "$DUST_FILE")
-    if jq -c --arg c "$CUTOFF" 'select(.ts >= $c)' "$DUST_FILE" > "${DUST_FILE}.ttl" 2>/dev/null; then
-      mv "${DUST_FILE}.ttl" "$DUST_FILE"
-      AFTER_COUNT=$(wc -l < "$DUST_FILE")
-      EXPIRED=$((BEFORE_COUNT - AFTER_COUNT))
-      [ "$EXPIRED" -gt 0 ] && log "Expired $EXPIRED stale dust entries (>30 days old)"
-    else
-      rm -f "${DUST_FILE}.ttl"
-      log "WARNING: TTL cleanup failed — dust.jsonl left unchanged"
-    fi
-  fi
-fi

-# ── Bundle dust groups with 3+ distinct issues ──────────────────────────
-if [ -s "$DUST_FILE" ]; then
-  # Count distinct issues per group (not raw entries)
-  DUST_GROUPS=$(jq -r '[.group, (.issue | tostring)] | join("\t")' "$DUST_FILE" 2>/dev/null \
-    | sort -u | cut -f1 | sort | uniq -c | sort -rn || true)
-  while read -r count group; do
-    [ -z "$group" ] && continue
-    [ "$count" -lt 3 ] && continue

-    log "Bundling dust group '$group' ($count distinct issues)"

-    # Collect deduplicated issue references and details for this group
-    BUNDLE_ISSUES=$(jq -r --arg g "$group" 'select(.group == $g) | "#\(.issue) \(.title // "untitled") — \(.reason // "dust")"' "$DUST_FILE" | sort -u)
-    BUNDLE_ISSUE_NUMS=$(jq -r --arg g "$group" 'select(.group == $g) | .issue' "$DUST_FILE" | sort -nu)
-    DISTINCT_COUNT=$(echo "$BUNDLE_ISSUE_NUMS" | grep -c '.' || true)

-    bundle_title="fix: bundled dust cleanup — ${group}"
-    bundle_body="## Bundled dust cleanup — \`${group}\`

-Gardener bundled ${DISTINCT_COUNT} trivial tech-debt items into one issue to save factory cycles.

-### Items
-$(echo "$BUNDLE_ISSUES" | sed 's/^/- /')

-### Instructions
-Fix all items above in a single PR. Each is a small change (rename, comment, style fix, single-line edit).

-### Affected files
-- Files in \`${group}\` subsystem

-### Acceptance criteria
-- [ ] All listed items resolved
-- [ ] ShellCheck passes"

-    new_bundle=$(curl -sf -X POST \
-      -H "Authorization: token ${CODEBERG_TOKEN}" \
-      -H "Content-Type: application/json" \
-      "${CODEBERG_API}/issues" \
-      -d "$(jq -nc --arg t "$bundle_title" --arg b "$bundle_body" \
-        '{"title":$t,"body":$b,"labels":["backlog"]}')" 2>/dev/null | jq -r '.number // ""') || true

-    if [ -n "$new_bundle" ]; then
-      log "Created bundle issue #${new_bundle} for dust group '$group' ($DISTINCT_COUNT items)"
-      matrix_send "gardener" "📦 Bundled ${DISTINCT_COUNT} dust items (${group}) → #${new_bundle}" 2>/dev/null || true

-      # Close source issues with cross-reference
-      for src_issue in $BUNDLE_ISSUE_NUMS; do
-        curl -sf -X POST \
-          -H "Authorization: token ${CODEBERG_TOKEN}" \
-          -H "Content-Type: application/json" \
-          "${CODEBERG_API}/issues/${src_issue}/comments" \
-          -d "$(jq -nc --arg b "Bundled into #${new_bundle} (dust cleanup)" '{"body":$b}')" 2>/dev/null || true
-        curl -sf -X PATCH \
-          -H "Authorization: token ${CODEBERG_TOKEN}" \
-          -H "Content-Type: application/json" \
-          "${CODEBERG_API}/issues/${src_issue}" \
-          -d '{"state":"closed"}' 2>/dev/null || true
-        log "Closed source issue #${src_issue} → bundled into #${new_bundle}"
-      done

-      # Remove bundled items from dust.jsonl — only if jq succeeds
-      if jq -c --arg g "$group" 'select(.group != $g)' "$DUST_FILE" > "${DUST_FILE}.tmp" 2>/dev/null; then
-        mv "${DUST_FILE}.tmp" "$DUST_FILE"
-      else
-        rm -f "${DUST_FILE}.tmp"
-        log "WARNING: failed to prune bundled group '$group' from dust.jsonl"
-      fi
-    fi
-  done <<< "$DUST_GROUPS"
-fi

 # ── Recipe matching engine ────────────────────────────────────────────────

@@ -1,255 +1,46 @@
 #!/usr/bin/env bash
-# lib/agent-session.sh — Reusable tmux + Claude agent runtime
+# agent-session.sh — Shared tmux + Claude interactive session helpers
 #
-# Source this in any agent script after lib/env.sh.
+# Source this into agent orchestrator scripts for reusable session management.
 #
-# Required globals (set by the caller before using functions):
-# SESSION_NAME — tmux session name (e.g., "dev-harb-935")
-# PHASE_FILE — path to phase file
-# LOGFILE — path to log file
-# ISSUE — issue/context identifier (used in log prefix)
-# STATUSFILE — path to status file
-# THREAD_FILE — path to Matrix thread ID file
-# WORKTREE — agent working directory (for crash recovery)
-# PRIMARY_BRANCH — primary git branch (for crash recovery diff)
-#
-# Optional globals:
-# PHASE_POLL_INTERVAL — seconds between phase polls (default: 30)
-#
-# Globals exported by monitor_phase_loop (readable by phase callbacks):
-# LAST_PHASE_MTIME — mtime of the phase file when the current phase was dispatched
-# _MONITOR_LOOP_EXIT — set on return: "idle_timeout", "crash_recovery_failed",
-# or "callback_break"
+# Functions:
+# agent_wait_for_claude_ready SESSION_NAME [TIMEOUT_SECS]
+# agent_inject_into_session SESSION_NAME TEXT
+# agent_kill_session SESSION_NAME

-# log — Timestamped logging to LOGFILE
-# Usage: log <message>
-log() {
-  printf '[%s] #%s %s\n' "$(date -u '+%Y-%m-%d %H:%M:%S UTC')" "${ISSUE:-?}" "$*" >> "${LOGFILE:-/dev/null}"
-}
+# Wait for the Claude ❯ ready prompt in a tmux pane.
+# Returns 0 if ready within TIMEOUT_SECS (default 120), 1 otherwise.
+agent_wait_for_claude_ready() {
+  local session="$1"
+  local timeout="${2:-120}"

-# status — Log + write current status to STATUSFILE
-# Usage: status <message>
-status() {
-  printf '[%s] agent #%s: %s\n' "$(date -u '+%Y-%m-%d %H:%M:%S UTC')" "${ISSUE:-?}" "$*" > "${STATUSFILE:-/dev/null}"
-  log "$*"
-}

-# notify — Send plain-text Matrix notification into the issue thread
-# Usage: notify <message>
-notify() {
-  local thread_id=""
-  [ -f "${THREAD_FILE:-}" ] && thread_id=$(cat "$THREAD_FILE" 2>/dev/null || true)
-  matrix_send "dev" "🔧 #${ISSUE}: $*" "${thread_id}" 2>/dev/null || true
-}

-# notify_ctx — Send rich Matrix notification with HTML context into the issue thread
-# Falls back to plain send (registering a thread root) when no thread exists.
-# Usage: notify_ctx <plain_text> <html_body>
-notify_ctx() {
-  local plain="$1" html="$2"
-  local thread_id=""
-  [ -f "${THREAD_FILE:-}" ] && thread_id=$(cat "$THREAD_FILE" 2>/dev/null || true)
-  if [ -n "$thread_id" ]; then
-    matrix_send_ctx "dev" "🔧 #${ISSUE}: ${plain}" "🔧 #${ISSUE}: ${html}" "${thread_id}" 2>/dev/null || true
-  else
-    # No thread — fall back to plain send so a thread root is registered
-    matrix_send "dev" "🔧 #${ISSUE}: ${plain}" "" "${ISSUE}" 2>/dev/null || true
-  fi
-}

-# read_phase — Read current value from PHASE_FILE, stripping whitespace
-# Usage: read_phase
-read_phase() {
-  { cat "${PHASE_FILE}" 2>/dev/null || true; } | head -1 | tr -d '[:space:]'
-}

-# wait_for_claude_ready — Poll SESSION_NAME tmux pane until Claude shows ❯ prompt
-# Usage: wait_for_claude_ready [timeout_seconds]
-# Returns: 0 if ready, 1 if timeout
-wait_for_claude_ready() {
|
|
||||||
local timeout="${1:-120}"
|
|
||||||
local elapsed=0
|
local elapsed=0
|
||||||
while [ "$elapsed" -lt "$timeout" ]; do
|
while [ "$elapsed" -lt "$timeout" ]; do
|
||||||
# Claude Code shows ❯ when ready for input
|
if tmux capture-pane -t "$session" -p 2>/dev/null | grep -q '❯'; then
|
||||||
if tmux capture-pane -t "${SESSION_NAME}" -p 2>/dev/null | grep -q '❯'; then
|
|
||||||
return 0
|
return 0
|
||||||
fi
|
fi
|
||||||
sleep 2
|
sleep 2
|
||||||
elapsed=$((elapsed + 2))
|
elapsed=$((elapsed + 2))
|
||||||
done
|
done
|
||||||
log "WARNING: claude not ready after ${timeout}s — proceeding anyway"
|
|
||||||
return 1
|
return 1
|
||||||
}
|
}
|
||||||
|
|
||||||
# inject_into_session — Paste text into the tmux session via tmux buffer
|
# Paste TEXT into SESSION (waits for Claude to be ready first), then press Enter.
|
||||||
# Usage: inject_into_session <text>
|
agent_inject_into_session() {
|
||||||
inject_into_session() {
|
local session="$1"
|
||||||
local text="$1"
|
local text="$2"
|
||||||
local tmpfile
|
local tmpfile
|
||||||
wait_for_claude_ready 120
|
agent_wait_for_claude_ready "$session" 120 || true
|
||||||
tmpfile=$(mktemp /tmp/tmux-inject-XXXXXX)
|
tmpfile=$(mktemp /tmp/agent-inject-XXXXXX)
|
||||||
printf '%s' "$text" > "$tmpfile"
|
printf '%s' "$text" > "$tmpfile"
|
||||||
tmux load-buffer -b "inject-${ISSUE}" "$tmpfile"
|
tmux load-buffer -b "agent-inject-$$" "$tmpfile"
|
||||||
tmux paste-buffer -t "${SESSION_NAME}" -b "inject-${ISSUE}"
|
tmux paste-buffer -t "$session" -b "agent-inject-$$"
|
||||||
sleep 0.5
|
sleep 0.5
|
||||||
tmux send-keys -t "${SESSION_NAME}" "" Enter
|
tmux send-keys -t "$session" "" Enter
|
||||||
tmux delete-buffer -b "inject-${ISSUE}" 2>/dev/null || true
|
tmux delete-buffer -b "agent-inject-$$" 2>/dev/null || true
|
||||||
rm -f "$tmpfile"
|
rm -f "$tmpfile"
|
||||||
}
|
}
|
||||||
|
|
||||||
# kill_tmux_session — Kill SESSION_NAME tmux session
|
# Kill a tmux session gracefully (no-op if not found).
|
||||||
# Usage: kill_tmux_session
|
agent_kill_session() {
|
||||||
kill_tmux_session() {
|
tmux kill-session -t "$1" 2>/dev/null || true
|
||||||
tmux kill-session -t "${SESSION_NAME}" 2>/dev/null || true
|
|
||||||
}
|
|
||||||
|
|
||||||
# create_agent_session — Create (or reuse) a detached tmux session running claude
|
|
||||||
# Sets SESSION_NAME to $1 and uses $2 as the working directory.
|
|
||||||
# Usage: create_agent_session <session_name> <workdir>
|
|
||||||
# Returns: 0 on success, 1 on failure
|
|
||||||
create_agent_session() {
|
|
||||||
SESSION_NAME="${1:-${SESSION_NAME}}"
|
|
||||||
local workdir="${2:-${WORKTREE}}"
|
|
||||||
|
|
||||||
if tmux has-session -t "${SESSION_NAME}" 2>/dev/null; then
|
|
||||||
log "reusing existing tmux session: ${SESSION_NAME}"
|
|
||||||
return 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Kill any stale entry before creating
|
|
||||||
tmux kill-session -t "${SESSION_NAME}" 2>/dev/null || true
|
|
||||||
|
|
||||||
tmux new-session -d -s "${SESSION_NAME}" -c "${workdir}" \
|
|
||||||
"claude --dangerously-skip-permissions"
|
|
||||||
|
|
||||||
if ! tmux has-session -t "${SESSION_NAME}" 2>/dev/null; then
|
|
||||||
log "ERROR: failed to create tmux session ${SESSION_NAME}"
|
|
||||||
return 1
|
|
||||||
fi
|
|
||||||
log "tmux session created: ${SESSION_NAME}"
|
|
||||||
|
|
||||||
if ! wait_for_claude_ready 120; then
|
|
||||||
log "ERROR: claude did not become ready in ${SESSION_NAME}"
|
|
||||||
kill_tmux_session
|
|
||||||
return 1
|
|
||||||
fi
|
|
||||||
return 0
|
|
||||||
}
|
|
||||||
|
|
||||||
# inject_formula — Send a formula/prompt into the agent session
|
|
||||||
# Usage: inject_formula <session_name> <formula_text> [context]
|
|
||||||
inject_formula() {
|
|
||||||
SESSION_NAME="${1:-${SESSION_NAME}}"
|
|
||||||
local formula_text="$2"
|
|
||||||
# $3 context is available for future use by callers
|
|
||||||
inject_into_session "$formula_text"
|
|
||||||
}
|
|
||||||
|
|
||||||
# Globals exported by monitor_phase_loop for use by phase callbacks.
|
|
||||||
# LAST_PHASE_MTIME: mtime of phase file at the time the current phase was dispatched.
|
|
||||||
# _MONITOR_LOOP_EXIT: reason monitor_phase_loop returned — check after the call.
|
|
||||||
LAST_PHASE_MTIME=0
|
|
||||||
_MONITOR_LOOP_EXIT=""
|
|
||||||
|
|
||||||
# monitor_phase_loop — Watch PHASE_FILE and dispatch phase changes to a callback
|
|
||||||
#
|
|
||||||
# Handles: phase change detection, idle timeout, and session crash recovery.
|
|
||||||
# The phase callback receives the current phase string as $1.
|
|
||||||
# Return 1 from the callback to break the loop; return 0 (or default) to continue.
|
|
||||||
#
|
|
||||||
# On idle timeout: kills the session, sets _MONITOR_LOOP_EXIT=idle_timeout, breaks.
|
|
||||||
# On crash recovery failure: sets _MONITOR_LOOP_EXIT=crash_recovery_failed, breaks.
|
|
||||||
# On callback return 1: sets _MONITOR_LOOP_EXIT=callback_break, breaks.
|
|
||||||
#
|
|
||||||
# LAST_PHASE_MTIME is updated before each callback invocation so callbacks can
|
|
||||||
# detect subsequent phase file changes (e.g., during inner polling loops).
|
|
||||||
#
|
|
||||||
# Usage: monitor_phase_loop <phase_file> <idle_timeout_secs> <phase_callback_fn>
|
|
||||||
monitor_phase_loop() {
|
|
||||||
local phase_file="${1:-${PHASE_FILE}}"
|
|
||||||
local idle_timeout="${2:-7200}"
|
|
||||||
local callback_fn="${3:-}"
|
|
||||||
local poll_interval="${PHASE_POLL_INTERVAL:-30}"
|
|
||||||
local current_phase phase_mtime crash_diff recovery_msg
|
|
||||||
|
|
||||||
PHASE_FILE="$phase_file"
|
|
||||||
LAST_PHASE_MTIME=0
|
|
||||||
_MONITOR_LOOP_EXIT=""
|
|
||||||
local idle_elapsed=0
|
|
||||||
|
|
||||||
while true; do
|
|
||||||
sleep "$poll_interval"
|
|
||||||
idle_elapsed=$(( idle_elapsed + poll_interval ))
|
|
||||||
|
|
||||||
# --- Session health check ---
|
|
||||||
if ! tmux has-session -t "${SESSION_NAME}" 2>/dev/null; then
|
|
||||||
current_phase=$(read_phase)
|
|
||||||
case "$current_phase" in
|
|
||||||
PHASE:done|PHASE:failed)
|
|
||||||
# Expected terminal phases — fall through to phase dispatch below
|
|
||||||
;;
|
|
||||||
*)
|
|
||||||
log "WARNING: tmux session died unexpectedly (phase: ${current_phase:-none})"
|
|
||||||
notify "session crashed (phase: ${current_phase:-none}), attempting recovery"
|
|
||||||
|
|
||||||
# Attempt crash recovery: restart session with recovery context
|
|
||||||
crash_diff=$(git -C "${WORKTREE}" diff "origin/${PRIMARY_BRANCH}..HEAD" --stat 2>/dev/null | head -20 || echo "(no diff)")
|
|
||||||
recovery_msg="## Session Recovery
|
|
||||||
|
|
||||||
Your Claude Code session for issue #${ISSUE} was interrupted unexpectedly.
|
|
||||||
The git worktree at ${WORKTREE} is intact — your changes survived.
|
|
||||||
|
|
||||||
Last known phase: ${current_phase:-unknown}
|
|
||||||
|
|
||||||
Work so far:
|
|
||||||
${crash_diff}
|
|
||||||
|
|
||||||
Run: git log --oneline -5 && git status
|
|
||||||
Then resume from the last phase following the original phase protocol.
|
|
||||||
Phase file: ${PHASE_FILE}"
|
|
||||||
|
|
||||||
if tmux new-session -d -s "${SESSION_NAME}" -c "${WORKTREE}" \
|
|
||||||
"claude --dangerously-skip-permissions" 2>/dev/null; then
|
|
||||||
inject_into_session "$recovery_msg"
|
|
||||||
log "recovery session started"
|
|
||||||
idle_elapsed=0
|
|
||||||
else
|
|
||||||
log "ERROR: could not restart session after crash"
|
|
||||||
notify "session crashed and could not recover — needs human attention"
|
|
||||||
_MONITOR_LOOP_EXIT="crash_recovery_failed"
|
|
||||||
break
|
|
||||||
fi
|
|
||||||
continue
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
fi
|
|
||||||
|
|
||||||
# --- Check phase file for changes ---
|
|
||||||
phase_mtime=$(stat -c %Y "$phase_file" 2>/dev/null || echo 0)
|
|
||||||
current_phase=$(read_phase)
|
|
||||||
|
|
||||||
if [ -z "$current_phase" ] || [ "$phase_mtime" -le "$LAST_PHASE_MTIME" ]; then
|
|
||||||
# No phase change — check idle timeout
|
|
||||||
if [ "$idle_elapsed" -ge "$idle_timeout" ]; then
|
|
||||||
log "TIMEOUT: no phase update for ${idle_timeout}s — killing session"
|
|
||||||
kill_tmux_session
|
|
||||||
_MONITOR_LOOP_EXIT="idle_timeout"
|
|
||||||
break
|
|
||||||
fi
|
|
||||||
continue
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Phase changed — update tracking state and dispatch to callback
|
|
||||||
LAST_PHASE_MTIME="$phase_mtime"
|
|
||||||
idle_elapsed=0
|
|
||||||
log "phase: ${current_phase}"
|
|
||||||
status "${current_phase}"
|
|
||||||
|
|
||||||
if [ -n "$callback_fn" ]; then
|
|
||||||
if ! "$callback_fn" "$current_phase"; then
|
|
||||||
_MONITOR_LOOP_EXIT="callback_break"
|
|
||||||
break
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
}
|
}
|
||||||