# formulas/run-gardener.toml — Gardener housekeeping formula
#
# Defines the gardener's complete run: grooming (Claude session via
# gardener-run.sh) + blocked-review + AGENTS.md maintenance + final
# commit-and-pr.
#
# No memory, no journal. The gardener does mechanical housekeeping
# based on current state — it doesn't need to remember past runs.
#
# Steps: preflight → grooming → dust-bundling → blocked-review → stale-pr-recycle → agents-update → commit-and-pr
name = "run-gardener"
description = "Mechanical housekeeping: grooming, blocked review, docs update"
version = 1

[context]
files = ["AGENTS.md", "VISION.md", "README.md"]
# ─────────────────────────────────────────────────────────────────────
# Step 1: preflight
# ─────────────────────────────────────────────────────────────────────
[[steps]]
id = "preflight"
title = "Pull latest code"
description = """
Set up the working environment for this gardener run.

1. Change to the project repository:
   cd "$PROJECT_REPO_ROOT"

2. Pull the latest code:
   git fetch origin "$PRIMARY_BRANCH" --quiet
   git checkout "$PRIMARY_BRANCH" --quiet
   git pull --ff-only origin "$PRIMARY_BRANCH" --quiet

3. Record the current HEAD SHA for AGENTS.md watermarks:
   HEAD_SHA=$(git rev-parse HEAD)
   echo "$HEAD_SHA" > /tmp/gardener-head-sha
4. Initialize the pending-actions manifest (JSONL, converted to JSON at commit time):
   printf '' > "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
"" "
# ─────────────────────────────────────────────────────────────────────
# Step 2: grooming — Claude-driven backlog grooming
# ─────────────────────────────────────────────────────────────────────
[[steps]]
id = "grooming"
title = "Backlog grooming — triage all open issues"
description = """
Groom the open issue backlog. This step is the core Claude-driven analysis
(Claude performs pre-checks inline before deeper analysis).
Pre-checks (bash, zero tokens — detect problems before invoking Claude):

1. Fetch all open issues:
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/issues?state=open&type=issues&limit=50&sort=updated&direction=desc"
2. Duplicate detection: compare issue titles pairwise. Normalize
   (lowercase, strip prefixes like feat:/fix:/refactor:, collapse whitespace)
   and flag pairs with >60% word overlap as possible duplicates.
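The normalization and overlap check above can be sketched in bash. The formula does not pin down the overlap metric, so shared-words over union-of-words is an assumption here; the titles are hypothetical examples:

```shell
# Sketch of the duplicate pre-check (bash: uses process substitution).
# normalize_title: lowercase, strip feat:/fix:/refactor: prefixes,
# collapse whitespace, trim.
normalize_title() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/^(feat|fix|refactor)[[:space:]]*:[[:space:]]*//' \
    | tr -s '[:space:]' ' ' \
    | sed -E 's/^ +| +$//g'
}

# overlap_pct: percentage of shared words over the union of words
# (one reasonable reading of ">60% word overlap"; an assumption).
overlap_pct() {
  a=$(normalize_title "$1"); b=$(normalize_title "$2")
  common=$(comm -12 <(printf '%s\n' $a | sort -u) \
                    <(printf '%s\n' $b | sort -u) | wc -l)
  total=$(printf '%s\n' $a $b | sort -u | wc -l)
  [ "$total" -eq 0 ] && { echo 0; return; }
  echo $(( common * 100 / total ))
}
```

Pairs scoring above 60 would then be flagged as possible duplicates for the deeper Claude pass to confirm.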
3. Missing acceptance criteria: flag issues with body <100 chars and
   no checkboxes (- [ ] or - [x]).

4. Stale issues: flag issues with no update in 14+ days.

5. Blockers starving the factory (HIGHEST PRIORITY): find issues that
   block backlog items but are NOT themselves labeled backlog. These
   starve the dev-agent completely. Extract deps from ## Dependencies /
   ## Depends on / ## Blocked by sections of backlog issues and check
   if each dependency is open + not backlog-labeled.
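Extracting those dependency references can be sketched as follows (the section names are the ones listed above; the issue body in the usage is hypothetical):

```shell
# Pull #NNN references out of ## Dependencies / ## Depends on /
# ## Blocked by sections of an issue body (sketch).
extract_deps() {
  printf '%s\n' "$1" \
    | awk 'tolower($0) ~ /^## (dependencies|depends on|blocked by)/ {grab=1; next}
           /^## / {grab=0}
           grab' \
    | grep -oE '#[0-9]+' | tr -d '#' | sort -nu
}
```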
6. Tech-debt promotion: list all tech-debt labeled issues — goal is to
   process them all (promote to backlog or classify as dust).

For each issue, choose ONE action and write to result file:

ACTION (substantial — promote, close duplicate, add acceptance criteria):
   echo "ACTION: promoted #NNN to backlog — <reason>" >> "$RESULT_FILE"
   echo "ACTION: closed #NNN as duplicate of #OLDER" >> "$RESULT_FILE"
Body enrichment on promotion (CRITICAL — prevents quality-gate bounce):
When promoting ANY issue to backlog, you MUST enrich the issue body so
it passes the quality gate (step 8) on the next gardener run. Before
writing the add_label manifest action:
a. Check whether the body already contains ``## Acceptance criteria``
   (with at least one ``- [ ]`` checkbox) and ``## Affected files``
   (with at least one file path). If both are present, skip to (d).
b. If ``## Affected files`` is missing, infer from the body — look for
   file paths (e.g. ``lib/agent-session.sh:266``), function names,
   script names, or directory references. Use the AGENTS.md directory
   layout to resolve ambiguous mentions (e.g. "gardener" →
   ``gardener/gardener-run.sh``, "dev-poll" → ``dev/dev-poll.sh``).
   Format as a bulleted list under a ``## Affected files`` heading.
c. If ``## Acceptance criteria`` is missing, derive ``- [ ]`` checkboxes
   from the problem description — each a verifiable condition the fix
   must satisfy.
d. Construct the full new body = original body text + appended missing
   sections. Write an edit_body action BEFORE the add_label action:
   echo '{"action":"edit_body","issue":NNN,"body":"<full new body>"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
e. Write the add_label action:
   echo '{"action":"add_label","issue":NNN,"label":"backlog"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
This ensures promoted issues already have the required sections when
the next gardener run's quality gate inspects them.
DUST (trivial — single-line edit, rename, comment, style, whitespace):
   echo 'DUST: {"issue": NNN, "group": "<file-or-subsystem>", "title": "...", "reason": "..."}' >> "$RESULT_FILE"
   Group by file or subsystem (e.g. "gardener", "lib/env.sh", "dev-poll").
Do NOT close dust issues — the dust-bundling step auto-bundles groups
of 3+ into one backlog issue.
VAULT (needs human decision or external resource):
File a vault procurement item at $PROJECT_REPO_ROOT/vault/pending/<id>.md:

   # <What decision or resource is needed>
   ## What
   <description>
   ## Why
   <which issue this unblocks>
   ## Unblocks
   - #NNN — <title>

Log: echo "VAULT: filed vault/pending/<id>.md for #NNN — <reason>" >> "$RESULT_FILE"
CLEAN (only if truly nothing to do):
   echo 'CLEAN' >> "$RESULT_FILE"

Dust vs ore rules:
   Dust: comment fix, variable rename, whitespace/formatting, single-line edit, trivial cleanup with no behavior change
   Ore: multi-file changes, behavioral fixes, architectural improvements, security/correctness issues

Sibling dependency rule (CRITICAL):
Issues from the same PR review or code audit are SIBLINGS — independent work items.
NEVER add bidirectional ## Dependencies between siblings (creates deadlocks).
Use ## Related for cross-references: "## Related\n- #NNN (sibling)"
7. Architecture decision alignment check (AD check):
   For each open issue labeled 'backlog', check whether the issue
   contradicts any architecture decision listed in the
   ## Architecture Decisions section of AGENTS.md.
   Read AGENTS.md and extract the AD table. For each backlog issue,
   compare the issue title and body against each AD. If an issue
   clearly violates an AD:
   a. Write a comment action to the manifest:
      echo '{"action":"comment","issue":NNN,"body":"Closing: violates AD-NNN (<decision summary>). See AGENTS.md § Architecture Decisions."}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
   b. Write a close action to the manifest:
      echo '{"action":"close","issue":NNN,"reason":"violates AD-NNN"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
   c. Log to the result file:
      echo "ACTION: closed #NNN — violates AD-NNN" >> "$RESULT_FILE"
   Only close for clear, unambiguous violations. If the issue is
   borderline or could be interpreted as compatible, leave it open
   and file a VAULT item for human decision instead.
8. Quality gate — backlog label enforcement:
   For each open issue labeled 'backlog', verify it has the required
   sections for dev-agent pickup:
   a. Acceptance criteria — body must contain at least one checkbox
      (``- [ ]`` or ``- [x]``)
   b. Affected files — body must contain an "Affected files" or
      "## Affected files" section with at least one file path
   If either section is missing:
   a. Write a comment action to the manifest:
echo '{"action":"comment","issue":NNN,"body":"This issue is missing required sections. Please use the issue templates at `.forgejo/ISSUE_TEMPLATE/` — needs: <missing items>."}' > > "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
      Where <missing items> is a comma-separated list of what's absent
      (e.g. "acceptance criteria, affected files" or just "affected files").
   b. Write a remove_label action to the manifest:
      echo '{"action":"remove_label","issue":NNN,"label":"backlog"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
   c. Log to the result file:
echo "ACTION: stripped backlog from #NNN — missing: <missing items>" > > "$RESULT_FILE"
Well-structured issues ( both sections present ) are left untouched —
they are ready for dev-agent pickup .
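The two structural checks in the quality gate can be sketched as a single helper. The issue bodies here are hypothetical; in the real run, Claude performs this check while reading the issue:

```shell
# Return a comma-separated list of sections missing from an issue
# body; empty output means the issue passes the gate (sketch).
missing_sections() {
  body="$1"; missing=""
  # at least one "- [ ]" or "- [x]" checkbox anywhere in the body
  printf '%s\n' "$body" | grep -qE -e '- \[( |x)\]' \
    || missing="acceptance criteria"
  # an "Affected files" heading, with or without leading '#' marks
  printf '%s\n' "$body" | grep -qiE -e '^#* *affected files' \
    || missing="${missing:+$missing, }affected files"
  printf '%s' "$missing"
}
```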
Processing order:
1. Handle PRIORITY_blockers_starving_factory first — promote or resolve
2. AD alignment check — close backlog issues that violate architecture decisions
3. Quality gate — strip backlog from issues missing acceptance criteria or affected files
4. Process tech-debt issues by score (impact/effort)
5. Classify remaining items as dust or route to vault
Do NOT bundle dust yourself — the dust-bundling step handles accumulation,
dedup, TTL expiry, and bundling into backlog issues.
CRITICAL: If this step fails for any reason, log the failure and move on.
"""
needs = ["preflight"]
# ─────────────────────────────────────────────────────────────────────
# Step 3: dust-bundling — accumulate, expire, and bundle dust items
# ─────────────────────────────────────────────────────────────────────
[[steps]]
id = "dust-bundling"
title = "Accumulate dust, expire stale entries, and bundle groups"
description = """
Process DUST items emitted during grooming. This step maintains the
persistent dust accumulator at $PROJECT_REPO_ROOT/gardener/dust.jsonl.

IMPORTANT: Use $PROJECT_REPO_ROOT/gardener/dust.jsonl (the main repo
checkout), NOT the worktree copy — the worktree is destroyed after the
session, so changes there would be lost.

1. Collect DUST JSON lines emitted during grooming (from the result file
   or your notes). Each has: {"issue": NNN, "group": "...", "title": "...", "reason": "..."}
2. Deduplicate: read existing dust.jsonl and skip any issue numbers that
   are already staged:
   DUST_FILE="$PROJECT_REPO_ROOT/gardener/dust.jsonl"
   touch "$DUST_FILE"
   EXISTING=$(jq -r '.issue' "$DUST_FILE" 2>/dev/null | sort -nu || true)
   For each new dust item, check if its issue number is in EXISTING.
   Add new entries with a timestamp:
   echo '{"issue":NNN,"group":"...","title":"...","reason":"...","ts":"YYYY-MM-DDTHH:MM:SSZ"}' >> "$DUST_FILE"
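The membership check against EXISTING can be done with an exact-line grep, e.g.:

```shell
# True when an issue number is already staged (sketch; EXISTING holds
# one issue number per line, as produced by the jq pipeline above).
already_staged() {
  printf '%s\n' "$EXISTING" | grep -qx "$1"
}
```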
3. Expire stale entries (30-day TTL):
   CUTOFF=$(date -u -d '30 days ago' +%Y-%m-%dT%H:%M:%SZ)
   jq -c --arg c "$CUTOFF" 'select(.ts >= $c)' "$DUST_FILE" > "${DUST_FILE}.tmp" && mv "${DUST_FILE}.tmp" "$DUST_FILE"
4. Bundle groups with 3+ distinct issues:
   a. Count distinct issues per group:
      jq -r '[.group, (.issue | tostring)] | join("\\t")' "$DUST_FILE" | sort -u | cut -f1 | sort | uniq -c | sort -rn
   b. For each group with count >= 3:
      - Collect issue details and distinct issue numbers for the group
      - Write a create_issue action to the manifest:
        echo '{"action":"create_issue","title":"fix: bundled dust cleanup — GROUP","body":"...","labels":["backlog"]}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
      - Write comment + close actions for each source issue:
        echo '{"action":"comment","issue":NNN,"body":"Bundled into dust cleanup issue for GROUP"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
        echo '{"action":"close","issue":NNN,"reason":"bundled into dust cleanup for GROUP"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
      - Remove bundled items from dust.jsonl:
        jq -c --arg g "GROUP" 'select(.group != $g)' "$DUST_FILE" > "${DUST_FILE}.tmp" && mv "${DUST_FILE}.tmp" "$DUST_FILE"
5. If no DUST items were emitted and no groups are ripe, skip this step.
CRITICAL: If this step fails, log the failure and move on to blocked-review.
"""
needs = ["grooming"]
# ─────────────────────────────────────────────────────────────────────
# Step 4: blocked-review — triage blocked issues
# ─────────────────────────────────────────────────────────────────────
[[steps]]
id = "blocked-review"
title = "Review issues labeled blocked"
description = """
Review all issues labeled 'blocked' and decide their fate.
( See issue #352 for the blocked label convention.)
1. Fetch all blocked issues:
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/issues?state=open&type=issues&labels=blocked&limit=50"
2. For each blocked issue, read the full body and comments:
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/issues/<number>"
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/issues/<number>/comments"
3. Check dependencies — extract issue numbers from ## Dependencies /
   ## Depends on / ## Blocked by sections. For each dependency:
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/issues/<dep_number>"
   Check if the dependency is now closed.
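Given the fetched dependency JSON, the closed check is a one-liner (the JSON shape follows the Gitea/Forgejo issue API, which exposes a `state` field):

```shell
# Exit 0 when the fetched issue JSON reports state "closed" (sketch).
dep_closed() {
  printf '%s' "$1" | jq -e '.state == "closed"' > /dev/null
}
```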
4. For each blocked issue, choose ONE action:
UNBLOCK — all dependencies are now closed or the blocking condition resolved:
   a. Write a remove_label action to the manifest:
      echo '{"action":"remove_label","issue":NNN,"label":"blocked"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
   b. Write a comment action to the manifest:
      echo '{"action":"comment","issue":NNN,"body":"Unblocked: <explanation of what resolved the blocker>"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
NEEDS HUMAN — blocking condition is ambiguous, requires architectural
decision, or involves external factors:
   a. Write a comment action to the manifest:
      echo '{"action":"comment","issue":NNN,"body":"<diagnostic: what you found and what decision is needed>"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
   b. Leave the 'blocked' label in place

CLOSE — issue is stale (blocked 30+ days with no progress on blocker),
the blocker is wontfix, or the issue is no longer relevant:
   a. Write a comment action to the manifest:
      echo '{"action":"comment","issue":NNN,"body":"Closing: <reason — stale blocker, no longer relevant, etc.>"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
   b. Write a close action to the manifest:
      echo '{"action":"close","issue":NNN,"reason":"<stale blocker / no longer relevant / etc.>"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
CRITICAL: If this step fails, log the failure and move on.
"""
needs = [ "dust-bundling" ]
# ─────────────────────────────────────────────────────────────────────
# Step 5: stale-pr-recycle — recycle stale failed PRs back to backlog
# ─────────────────────────────────────────────────────────────────────
[[steps]]
id = "stale-pr-recycle"
title = "Recycle stale failed PRs back to backlog"
description = """
Detect open PRs where CI has failed and no work has happened in 24+ hours.
These represent abandoned dev-agent attempts — recycle them so the pipeline
can retry with a fresh session.

1. Fetch all open PRs:
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/pulls?state=open&limit=50"
2. For each PR, check all four conditions before recycling:
   a. CI failed — get the HEAD SHA from the PR's head.sha field, then:
curl -sf -H "Authorization: token $FORGE_TOKEN" \
"$FORGE_API/commits/<head_sha>/status"
      Only proceed if the combined state is "failure" or "error".
      Skip PRs with "success", "pending", or no CI status.
   b. Last push >24 hours ago — get the commit details:
      curl -sf -H "Authorization: token $FORGE_TOKEN" \
        "$FORGE_API/git/commits/<head_sha>"
      Parse the committer.date field. Only proceed if it is older than:
      $(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)
   c. Linked issue exists — extract the issue number from the PR body.
      Look for "Fixes #NNN" patterns (case-insensitive).
      If no linked issue is found, skip this PR (cannot reset labels).
   d. No active tmux session — check:
      tmux has-session -t "dev-${PROJECT_NAME}-<issue_number>" 2>/dev/null
      If a session exists, someone may still be working — skip this PR.
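Check (c) above can be sketched as follows. The case-insensitive match covers both "Fixes #NNN" and "fixes #NNN"; the PR bodies in the tests are hypothetical:

```shell
# First linked issue number from a PR body; empty if none (sketch).
linked_issue() {
  printf '%s' "$1" \
    | grep -oiE 'fixes[[:space:]]+#[0-9]+' \
    | head -n1 | grep -oE '[0-9]+'
}
```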
3. For each PR that passes all checks (failed CI, 24+ hours stale,
   linked issue found, no active session):
   a. Write a comment on the PR explaining the recycle:
      echo '{"action":"comment","issue":<pr_number>,"body":"Recycling stale CI failure for fresh attempt. Previous PR: #<pr_number>"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
   b. Write a close_pr action:
      echo '{"action":"close_pr","pr":<pr_number>}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
   c. Remove the in-progress label from the linked issue:
      echo '{"action":"remove_label","issue":<issue_number>,"label":"in-progress"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
   d. Add the backlog label to the linked issue:
      echo '{"action":"add_label","issue":<issue_number>,"label":"backlog"}' >> "$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
   e. Log to result file:
      echo "ACTION: recycled PR #<pr_number> (linked issue #<issue_number>) — stale CI failure" >> "$RESULT_FILE"

4. If no stale failed PRs found, skip this step.

CRITICAL: If this step fails, log the failure and move on to agents-update.
"""
needs = ["blocked-review"]
# ─────────────────────────────────────────────────────────────────────
# Step 6: agents-update — AGENTS.md watermark staleness + size enforcement
# ─────────────────────────────────────────────────────────────────────
[[steps]]
id = "agents-update"
title = "Check AGENTS.md watermarks, update stale files, enforce size limit"
description = "" "
Check all AGENTS.md files for staleness, update any that are outdated, and
enforce the ~200-line size limit via progressive disclosure splitting.
This keeps documentation fresh — runs 2x/day so drift stays small.
## Part A: Watermark staleness check and update
1. Read the HEAD SHA from preflight:
   HEAD_SHA=$(cat /tmp/gardener-head-sha)
2. Find all AGENTS.md files:
   find "$PROJECT_REPO_ROOT" -name "AGENTS.md" -not -path "*/.git/*"
3. For each file, read the watermark from line 1:
   <!-- last-reviewed: <sha> -->
4. Check for changes since the watermark:
   git log --oneline <watermark>..HEAD -- <directory>
   If zero changes, the file is current — skip it.
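The watermark parse in step 3 can be sketched as:

```shell
# Extract the watermark SHA from line 1 of an AGENTS.md file; prints
# nothing when the watermark is absent or malformed (sketch).
watermark_sha() {
  sed -n '1s/.*<!-- last-reviewed: *\([0-9a-f]\{7,40\}\) *-->.*/\1/p' "$1"
}
```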
5. For stale files:
   - Read the AGENTS.md and the source files in that directory
   - Update the documentation to reflect code changes since the watermark
   - Set the watermark to the HEAD SHA from the preflight step
   - Conventions: architecture and WHY, not implementation details

## Part B: Size limit enforcement (progressive disclosure split)

After all updates are done, count lines in the root AGENTS.md:
   wc -l < "$PROJECT_REPO_ROOT/AGENTS.md"
If the root AGENTS.md exceeds 200 lines, perform a progressive disclosure
split. The principle: the agent reads the map, drilling into detail only when
needed. You wouldn't dump a 500-page wiki on a new hire's first morning.

6. Identify per-directory sections to extract. Each agent section under
   "## Agents" (e.g. "### Dev (`dev/`)", "### Review (`review/`)") and
   each helper section (e.g. "### Shared helpers (`lib/`)") is a candidate.
   Also extract verbose subsections like "## Issue lifecycle and label
   conventions" and "## Phase-Signaling Protocol" into docs/ or the
   relevant directory.
7. For each section to extract, create a `{dir}/AGENTS.md` file with:
   - Line 1: watermark <!-- last-reviewed: <HEAD_SHA> -->
   - The full section content (role, trigger, key files, env vars, lifecycle)
   - Keep the same markdown structure and detail level
   Example for dev/:
   ```
   <!-- last-reviewed: abc123 -->
   # Dev Agent
   **Role**: Implement issues autonomously...
   **Trigger**: dev-poll.sh runs every 10 min...
   **Key files**: ...
   **Environment variables consumed**: ...
   **Lifecycle**: ...
   ```
8. Replace extracted sections in the root AGENTS.md with a concise
   directory map table. The root file keeps ONLY:
   - Watermark (line 1)
   - ## What this repo is (brief overview)
   - ## Directory layout (existing tree)
   - ## Tech stack
   - ## Coding conventions
   - ## How to lint and test
   - ## Agents — replaced with a summary table pointing to per-dir files:
       ## Agents
       | Agent | Directory | Role | Guide |
       |-------|-----------|------|-------|
       | Dev | dev/ | Issue implementation | [dev/AGENTS.md](dev/AGENTS.md) |
       | Review | review/ | PR review | [review/AGENTS.md](review/AGENTS.md) |
       | Gardener | gardener/ | Backlog grooming | [gardener/AGENTS.md](gardener/AGENTS.md) |
       | ... | ... | ... | ... |
   - ## Shared helpers — replaced with a brief pointer:
       "See [lib/AGENTS.md](lib/AGENTS.md) for the full helper reference."
       Keep the summary table if it fits, or move it to lib/AGENTS.md.
   - ## Issue lifecycle and label conventions — keep a brief summary
     (labels table + dependency convention) or move verbose parts to
     docs/PHASE-PROTOCOL.md
   - ## Architecture Decisions — keep in root (humans write, agents enforce)
   - ## Phase-Signaling Protocol — keep a brief summary with pointer:
       "See [docs/PHASE-PROTOCOL.md](docs/PHASE-PROTOCOL.md) for the full spec."
9. Verify the root AGENTS.md is now under 200 lines:
   LINE_COUNT=$(wc -l < "$PROJECT_REPO_ROOT/AGENTS.md")
   if [ "$LINE_COUNT" -gt 200 ]; then
     echo "WARNING: root AGENTS.md still $LINE_COUNT lines after split"
   fi
   If still over 200, trim further — move more detail into per-directory
   files. The root should read like a table of contents, not an encyclopedia.
10. Each new per-directory AGENTS.md must have a watermark on line 1.
    The gardener maintains freshness for ALL AGENTS.md files — root and
    per-directory — using the same watermark mechanism from Part A.

## Staging

11. Stage ALL AGENTS.md files you created or changed — do NOT commit yet.
    All git writes happen in the commit-and-pr step at the end:
    find . -name "AGENTS.md" -not -path "./.git/*" -exec git add {} +
12. If no AGENTS.md files need updating AND root is under 200 lines,
    skip this step entirely.
CRITICAL: If this step fails for any reason, log the failure and move on.
Do NOT let an AGENTS.md failure prevent the commit-and-pr step.
"""
needs = [ "stale-pr-recycle" ]
# ─────────────────────────────────────────────────────────────────────
# Step 7: commit-and-pr — single commit with all file changes
# ─────────────────────────────────────────────────────────────────────
[[steps]]
id = "commit-and-pr"
title = "One commit with all file changes, push, create PR, monitor to merge"
description = "" "
Collect all file changes from this run (AGENTS.md updates + pending-actions
manifest) into a single commit. All repo mutation API calls (comments, closures,
label changes, issue creation) are deferred to the manifest — the orchestrator
executes them after the PR merges.
1. Convert the JSONL manifest to a JSON array:
cd "$PROJECT_REPO_ROOT"
   JSONL_FILE="$PROJECT_REPO_ROOT/gardener/pending-actions.jsonl"
   JSON_FILE="$PROJECT_REPO_ROOT/gardener/pending-actions.json"
   if [ -s "$JSONL_FILE" ]; then
     jq -s '.' "$JSONL_FILE" > "$JSON_FILE"
   else
     echo '[]' > "$JSON_FILE"
   fi
   rm -f "$JSONL_FILE"
2. Check for staged or unstaged changes:
git status --porcelain
   If there are no file changes (no AGENTS.md updates AND manifest is empty []),
   skip to step 4 — no commit, no PR needed.
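The manifest half of that check can be sketched as (the file path is the one produced in step 1):

```shell
# True when the converted manifest holds zero actions (sketch).
manifest_empty() {
  count=$(jq 'length' "$1" 2>/dev/null) || count=0
  [ "${count:-0}" -eq 0 ]
}
```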
3. If there are changes:
   a. Create a branch:
      BRANCH="chore/gardener-$(date -u +%Y%m%d-%H%M)"
      git checkout -B "$BRANCH"
   b. Stage all modified AGENTS.md files:
      find . -name "AGENTS.md" -not -path "./.git/*" -exec git add {} +
2026-03-22 23:58:50 +00:00
c . Stage the pending-actions manifest :
git add gardener / pending-actions . json
d . Also stage any other files the gardener modified ( if any ) :
2026-03-20 12:11:58 +01:00
git add -u
2026-03-22 23:58:50 +00:00
e . Commit :
2026-03-20 12:11:58 +01:00
git commit -m "chore: gardener housekeeping $(date -u +%Y-%m-%d)"
2026-03-22 23:58:50 +00:00
f . Push :
2026-03-20 12:11:58 +01:00
git push -u origin "$BRANCH"

g. Create a PR:
PR_RESPONSE=$(curl -sf -X POST \
-H "Authorization: token $FORGE_TOKEN" \
-H "Content-Type: application/json" \
"$FORGE_API/pulls" \
-d '{"title": "chore: gardener housekeeping",
"head": "'"$BRANCH"'", "base": "'"$PRIMARY_BRANCH"'",
"body": "Automated gardener housekeeping — AGENTS.md updates + pending actions manifest.\\n\\nReview `gardener/pending-actions.json` for proposed grooming actions (label changes, closures, comments). These execute after merge."}')
PR_NUMBER=$(echo "$PR_RESPONSE" | jq -r '.number')
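
A hedged sketch of guarding the extracted number before proceeding.
`PR_RESPONSE` here is a canned sample so the snippet runs standalone; in the
real step it comes from the curl call above, and the guard itself is an
illustrative addition, not part of the formula:

```shell
# Verify the API returned a numeric PR number before writing the phase file.
PR_RESPONSE='{"number": 42, "title": "chore: gardener housekeeping"}'
PR_NUMBER=$(echo "$PR_RESPONSE" | jq -r '.number // empty')
case "$PR_NUMBER" in
  ''|*[!0-9]*) echo "PR creation failed" ;;   # empty or non-numeric
  *)           echo "created PR #$PR_NUMBER" ;;
esac
```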

h. Save PR number for orchestrator tracking:
echo "$PR_NUMBER" > /tmp/gardener-pr-${PROJECT_NAME}.txt
i. Signal the orchestrator to monitor CI:
echo "PHASE:awaiting_ci" > "$PHASE_FILE"

j. STOP and WAIT. Do NOT return to the primary branch.
The orchestrator polls CI, injects results and review feedback.
When you receive injected CI or review feedback, follow its
instructions, then write PHASE:awaiting_ci and wait again.

4. If no file changes existed (step 2 found nothing):

echo "PHASE:done" > "$PHASE_FILE"

5. If PR creation fails, log the error and write PHASE:failed.
"""
needs = ["agents-update"]