# formulas/run-planner.toml — Strategic planning formula
#
# Executed directly by planner-run.sh via cron — not dispatched via
# action issues. planner-run.sh creates a tmux session with Claude
# (opus) and injects this formula as context. Claude executes all
# steps autonomously.
#
# Steps: preflight → prediction-triage → strategic-planning
#        → journal-and-memory → commit-and-pr
#
# AGENTS.md maintenance is handled by the gardener (#246).
# All git writes (journal entry) happen in one commit at the end.

name = "run-planner"
description = "Strategic planning: triage predictions, resource+leverage gap analysis, journal"
version = 2
model = "opus"

[context]
files = ["VISION.md", "AGENTS.md", "RESOURCES.md"]

[[steps]]
id = "preflight"
title = "Pull latest code and load planner memory"
description = """
Set up the working environment for this planning run.

1. Change to the project repository:

       cd "$PROJECT_REPO_ROOT"

2. Pull the latest code:

       git fetch origin "$PRIMARY_BRANCH" --quiet
       git checkout "$PRIMARY_BRANCH" --quiet
       git pull --ff-only origin "$PRIMARY_BRANCH" --quiet

3. Record the current HEAD SHA:

       HEAD_SHA=$(git rev-parse HEAD)
       echo "$HEAD_SHA" > /tmp/planner-head-sha

4. Read the planner memory file at:

       $FACTORY_ROOT/planner/MEMORY.md

   If it does not exist, this is the first planning run.

Keep this memory context in mind for all subsequent steps.
"""

[[steps]]
id = "prediction-triage"
title = "Triage prediction/unreviewed issues"
description = """
Triage prediction issues filed by the predictor (goblin). Evidence
from the preflight step informs whether each prediction is valid
(e.g. "red-team stale since March 12" is confirmed by evidence/
timestamps).

1. Fetch unreviewed predictions:

       curl -sf -H "Authorization: token $CODEBERG_TOKEN" \
         "$CODEBERG_API/issues?state=open&type=issues&labels=prediction%2Funreviewed&limit=50"

   If there are none, note that and proceed to strategic-planning.

2.
   Read available formulas from $FACTORY_ROOT/formulas/*.toml so you
   know what actions can be dispatched.

3. Fetch all open issues to check for overlap:

       curl -sf -H "Authorization: token $CODEBERG_TOKEN" \
         "$CODEBERG_API/issues?state=open&type=issues&limit=50"

3b. Resolve the label IDs needed for triage (fetch via
    $CODEBERG_API/labels):

    - prediction/unreviewed
    - prediction/backlog
    - prediction/actioned (create if missing, color #c2e0c6,
      description "Prediction triaged by planner")
    - backlog
    - action

    These are DISTINCT labels — do not reuse IDs across them.

4. For each prediction, read the title and body. Choose one action:

   - PROMOTE_ACTION: maps to an available formula → create an action
     issue with YAML front matter referencing the formula name and
     vars. Relabel prediction/unreviewed → prediction/actioned, then
     close with comment "Actioned as #NNN — <reasoning>".
   - PROMOTE_BACKLOG: warrants dev work → create a backlog issue.
     Relabel prediction/unreviewed → prediction/actioned, then close
     with comment "Actioned as #NNN — <reasoning>".
   - WATCH: not urgent but worth tracking → post a comment explaining
     why it is not urgent, then relabel from prediction/unreviewed to
     prediction/backlog. Do NOT close.
   - DISMISS: noise, already covered by an open issue, or not
     actionable → relabel prediction/unreviewed → prediction/actioned,
     post a comment with explicit reasoning, then close the
     prediction.

   Every decision MUST include reasoning in a comment on the
   prediction issue.

5. Executing triage decisions via API:

   For PROMOTE_ACTION / PROMOTE_BACKLOG:

   a. Create the new issue with the 'action' or 'backlog' label:

          curl -sf -X POST -H "Authorization: token $CODEBERG_TOKEN" \
            -H "Content-Type: application/json" "$CODEBERG_API/issues" \
            -d '{"title":"...","body":"...","labels":[<label-id>]}'

   b.
      Comment on the prediction with "Actioned as #NNN — <reasoning>":

          curl -sf -X POST -H "Authorization: token $CODEBERG_TOKEN" \
            -H "Content-Type: application/json" \
            "$CODEBERG_API/issues/<prediction>/comments" \
            -d '{"body":"Actioned as #NNN — <reasoning>"}'

   c. Relabel: remove prediction/unreviewed, add prediction/actioned:

          curl -sf -X DELETE -H "Authorization: token $CODEBERG_TOKEN" \
            "$CODEBERG_API/issues/<prediction>/labels/<unreviewed-id>"
          curl -sf -X POST -H "Authorization: token $CODEBERG_TOKEN" \
            -H "Content-Type: application/json" \
            "$CODEBERG_API/issues/<prediction>/labels" \
            -d '{"labels":[<actioned-id>]}'

   d. Close the prediction:

          curl -sf -X PATCH -H "Authorization: token $CODEBERG_TOKEN" \
            -H "Content-Type: application/json" \
            "$CODEBERG_API/issues/<prediction>" \
            -d '{"state":"closed"}'

   For WATCH:

   a. Comment with reasoning why it is not urgent.
   b. Replace the prediction/unreviewed label with prediction/backlog:

          curl -sf -X DELETE -H "Authorization: token $CODEBERG_TOKEN" \
            "$CODEBERG_API/issues/<prediction>/labels/<unreviewed-id>"
          curl -sf -X POST -H "Authorization: token $CODEBERG_TOKEN" \
            -H "Content-Type: application/json" \
            "$CODEBERG_API/issues/<prediction>/labels" \
            -d '{"labels":[<backlog-id>]}'

   For DISMISS:

   a. Comment with explicit reasoning:

          curl -sf -X POST -H "Authorization: token $CODEBERG_TOKEN" \
            -H "Content-Type: application/json" \
            "$CODEBERG_API/issues/<prediction>/comments" \
            -d '{"body":"Dismissed — <reasoning>"}'

   b. Relabel: remove prediction/unreviewed, add prediction/actioned:

          curl -sf -X DELETE -H "Authorization: token $CODEBERG_TOKEN" \
            "$CODEBERG_API/issues/<prediction>/labels/<unreviewed-id>"
          curl -sf -X POST -H "Authorization: token $CODEBERG_TOKEN" \
            -H "Content-Type: application/json" \
            "$CODEBERG_API/issues/<prediction>/labels" \
            -d '{"labels":[<actioned-id>]}'

   c. Close the prediction:

          curl -sf -X PATCH -H "Authorization: token $CODEBERG_TOKEN" \
            -H "Content-Type: application/json" \
            "$CODEBERG_API/issues/<prediction>" \
            -d '{"state":"closed"}'

6. Track promoted predictions — they compete with vision gaps in the
   strategic-planning step for the per-cycle 5-issue limit. Record
   each promotion (issue number, title, type) for hand-off.

7.
   Validation: if you reference a formula, verify it exists on disk.
   Fall back to a freeform backlog issue for unknown formulas.

Be decisive — the predictor intentionally over-signals; your job is
to filter.

CRITICAL: If this step fails, log the failure and move on to
strategic-planning.
"""
needs = ["preflight"]

[[steps]]
id = "strategic-planning"
title = "Strategic planning — resource+leverage gap analysis"
description = """
This is the core planning step. Reason about leverage and create the
highest-impact issues.

Read these inputs:

- VISION.md — where we want to be
- All AGENTS.md files — what exists today
- $FACTORY_ROOT/RESOURCES.md — what we have (may not exist)
- $FACTORY_ROOT/formulas/*.toml — what actions can be dispatched
- Open issues (fetched via API) — what's already planned
- $FACTORY_ROOT/metrics/supervisor-metrics.jsonl — operational trends
  (may not exist)
- Planner memory (loaded in preflight)
- Promoted predictions from prediction-triage (these count toward the
  per-cycle issue limit — they compete with vision gaps for priority)

Reason through these five questions:

1. **What resources do you need that you don't have?**
   Analytics, domains, accounts, compute, integrations — things
   required by the vision that aren't in RESOURCES.md or aren't set
   up yet.
2. **What resources are underutilized?**
   Compute capacity idle most of the day. Domains with no traffic.
   CI capacity unused at night. Accounts not being leveraged.
3. **What's the highest-leverage action?**
   The one thing that unblocks the most progress toward the vision.
   Can you dispatch a formula for it?
4. **What task gaps remain?**
   Things in VISION.md not covered by open issues or the current
   project state.
5. **What should be deferred?**
   Things that depend on blocked resources or aren't high-leverage
   right now. Do NOT create issues for these.
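The gap check in question 4 needs the open-issue titles in hand. As an
illustrative sketch only: the canned $issues value below stands in for
the response from the open-issues fetch in prediction-triage, and the
grep/cut pipeline is one dependency-free way to pull titles out of it
(jq is more robust if the runner has it):

```shell
# Illustrative sketch: extract issue titles from open-issues JSON.
# $issues is a stand-in for the real API response.
issues='[{"number":12,"title":"Set up analytics"},{"number":15,"title":"Fix CI cache"}]'
printf '%s\n' "$issues" | grep -o '"title":"[^"]*"' | cut -d'"' -f4
# prints:
#   Set up analytics
#   Fix CI cache
```

Note the pattern breaks on titles containing escaped quotes, which is
why a proper JSON parser is preferable when available.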
Then create up to 5 issues total (including promotions from
prediction-triage), prioritized by leverage.

For formula-matching gaps, include YAML front matter in the body:

    ---
    formula: <formula-name>
    vars:
      key: "value"
    ---

For freeform gaps, create each issue via the API with the 'backlog'
label:

    curl -sf -X POST \
      -H "Authorization: token $CODEBERG_TOKEN" \
      -H "Content-Type: application/json" \
      "$CODEBERG_API/issues" \
      -d '{"title":"...","body":"...","labels":[<backlog-id>]}'

Rules:

- Max 5 new issues total (promoted predictions + vision gaps) —
  highest leverage first
- Do NOT create issues that overlap with ANY existing open issue
- Do NOT create issues for items you identified as "deferred"
- Each body: what's missing, why it matters, rough approach
- When deploying/operating, reference the resource alias from
  RESOURCES.md
- Add a ## Depends on section for issues that depend on other open
  issues
- Only reference formulas that exist in formulas/*.toml
- When metrics show systemic problems, create optimization issues

If there are no gaps, note that the backlog is aligned with the
vision.
"""
needs = ["prediction-triage"]

[[steps]]
id = "journal-and-memory"
title = "Write journal entry and update planner memory"
description = """
Two outputs from this step:

### 1. Journal entry (committed to git)

Create a daily journal file at:

    $FACTORY_ROOT/planner/journal/$(date -u +%Y-%m-%d).md

If the file already exists (multiple runs per day), append a new
section with a timestamp header. Format:

    # Planner run — YYYY-MM-DD HH:MM UTC

    ## Predictions triaged
    - #NNN: PROMOTE_ACTION/PROMOTE_BACKLOG/WATCH/DISMISS — reasoning
    (or "No unreviewed predictions" if none)

    ## Issues created
    - #NNN: title — why
    (or "No new issues — backlog aligned with vision" if none)

    ## Observations
    - Key patterns, resource state, metric trends noticed during
      this run

    ## Deferred
    - Items considered but deferred, and why

Keep each entry concise — 30-50 lines max.

### 2.
Memory update (committed to git)

Write to: $FACTORY_ROOT/planner/MEMORY.md (replace the entire file)

Include:

- Date of this run
- What was observed (resource state, metric trends, project progress)
- What was decided (issues created, predictions triaged, what was
  deferred)
- Patterns and learnings useful for future planning runs
- Things to watch for next time

Rules:

- Keep under 100 lines total
- Replace the file contents — prune outdated entries from previous
  runs
- Focus on PATTERNS and LEARNINGS, not transient state
- Do NOT include specific issue counts or numbers that will be stale
- Most recent entries at top

Format: simple markdown with dated sections.
"""
needs = ["strategic-planning"]

[[steps]]
id = "commit-and-pr"
title = "One commit with all file changes, push, create PR"
description = """
Collect all file changes from this run into a single commit. API
calls (issue creation, prediction triage) already happened during
the run — only file changes (journal entries, MEMORY.md) need the PR.

1. Check for staged or unstaged changes:

       cd "$PROJECT_REPO_ROOT"
       git status --porcelain

   If there are no file changes, skip this entire step — no commit,
   no PR.

2. If there are changes:

   a. Create a branch:

          BRANCH="chore/planner-$(date -u +%Y%m%d-%H%M)"
          git checkout -B "$BRANCH"

   b. Stage journal entries and planner memory:

          git add planner/journal/ 2>/dev/null || true
          git add planner/MEMORY.md 2>/dev/null || true

   c. Stage any other tracked files modified during the run:

          git add -u

   d. Check if there is anything to commit:

          git diff --cached --quiet && echo "Nothing staged"

      If nothing is staged, skip the remaining sub-steps.

   e. Commit:

          git commit -m "chore: planner run $(date -u +%Y-%m-%d)"

   f. Push:

          git push -u origin "$BRANCH"

   g. Create a PR:

          curl -sf -X POST \
            -H "Authorization: token $CODEBERG_TOKEN" \
            -H "Content-Type: application/json" \
            "$CODEBERG_API/pulls" \
            -d '{"title":"chore: planner run journal",
                 "head":"<branch>","base":"<primary-branch>",
                 "body":"Automated planner run — journal entry from strategic planning session."}'

   h.
      Return to the primary branch:

          git checkout "$PRIMARY_BRANCH"

3. If the PR creation fails, log and continue — the journal is
   committed locally.
"""
needs = ["journal-and-memory"]