feat: CI log access — disinto ci-logs + dev-agent CI failure context (#136)
parent 19969586e5
commit a2d5d71c04
5 changed files with 245 additions and 2 deletions
@@ -7,7 +7,7 @@ sourced as needed.
 | File | What it provides | Sourced by |
 |---|---|---|
 | `lib/env.sh` | Loads `.env`, sets `FACTORY_ROOT`, exports project config (`FORGE_REPO`, `PROJECT_NAME`, etc.), defines `log()`, `forge_api()`, `forge_api_all()` (accepts optional second TOKEN parameter, defaults to `$FORGE_TOKEN`), `woodpecker_api()`, `wpdb()`, `memory_guard()` (skips agent if RAM < threshold). Auto-loads project TOML if `PROJECT_TOML` is set. Exports per-agent tokens (`FORGE_PLANNER_TOKEN`, `FORGE_GARDENER_TOKEN`, `FORGE_VAULT_TOKEN`, `FORGE_SUPERVISOR_TOKEN`, `FORGE_PREDICTOR_TOKEN`) — each falls back to `$FORGE_TOKEN` if not set. **Vault-only token guard (AD-006)**: `unset GITHUB_TOKEN CLAWHUB_TOKEN` so agents never hold external-action tokens — only the runner container receives them. **Container note**: when `DISINTO_CONTAINER=1`, `.env` is NOT re-sourced — compose already injects env vars (including `FORGE_URL=http://forgejo:3000`) and re-sourcing would clobber them. | Every agent |
-| `lib/ci-helpers.sh` | `ci_passed()` — returns 0 if CI state is "success" (or no CI configured). `ci_required_for_pr()` — returns 0 if PR has code files (CI required), 1 if non-code only (CI not required). `is_infra_step()` — returns 0 if a single CI step failure matches infra heuristics (clone/git exit 128, any exit 137, log timeout patterns). `classify_pipeline_failure()` — returns "infra \<reason>" if any failed Woodpecker step matches infra heuristics via `is_infra_step()`, else "code". `ensure_priority_label()` — looks up (or creates) the `priority` label and returns its ID; caches in `_PRIORITY_LABEL_ID`. `ci_commit_status <sha>` — queries Woodpecker directly for CI state, falls back to forge commit status API. `ci_pipeline_number <sha>` — returns the Woodpecker pipeline number for a commit, falls back to parsing forge status `target_url`. `ci_promote <repo_id> <pipeline_num> <environment>` — promotes a pipeline to a named Woodpecker environment (vault-gated deployment: vault approves, vault-fire calls this — vault redesign in progress, see #73-#77). | dev-poll, review-poll, review-pr, supervisor-poll |
+| `lib/ci-helpers.sh` | `ci_passed()` — returns 0 if CI state is "success" (or no CI configured). `ci_required_for_pr()` — returns 0 if PR has code files (CI required), 1 if non-code only (CI not required). `is_infra_step()` — returns 0 if a single CI step failure matches infra heuristics (clone/git exit 128, any exit 137, log timeout patterns). `classify_pipeline_failure()` — returns "infra \<reason>" if any failed Woodpecker step matches infra heuristics via `is_infra_step()`, else "code". `ensure_priority_label()` — looks up (or creates) the `priority` label and returns its ID; caches in `_PRIORITY_LABEL_ID`. `ci_commit_status <sha>` — queries Woodpecker directly for CI state, falls back to forge commit status API. `ci_pipeline_number <sha>` — returns the Woodpecker pipeline number for a commit, falls back to parsing forge status `target_url`. `ci_promote <repo_id> <pipeline_num> <environment>` — promotes a pipeline to a named Woodpecker environment (vault-gated deployment: vault approves, vault-fire calls this — vault redesign in progress, see #73-#77). `ci_get_logs <pipeline_number> [--step <name>]` — reads CI logs from the Woodpecker SQLite database; outputs the last 200 lines to stdout. Requires the woodpecker-data volume mounted at /woodpecker-data. | dev-poll, review-poll, review-pr, supervisor-poll |
 | `lib/ci-debug.sh` | CLI tool for Woodpecker CI: `list`, `status`, `logs`, `failures` subcommands. Not sourced — run directly. | Humans / dev-agent (tool access) |
 | `lib/load-project.sh` | Parses a `projects/*.toml` file into env vars (`PROJECT_NAME`, `FORGE_REPO`, `WOODPECKER_REPO_ID`, monitoring toggles, mirror config, etc.). | env.sh (when `PROJECT_TOML` is set), supervisor-poll (per-project iteration) |
 | `lib/parse-deps.sh` | Extracts dependency issue numbers from an issue body (stdin → stdout, one number per line). Matches `## Dependencies` / `## Depends on` / `## Blocked by` sections and inline `depends on #N` / `blocked by #N` patterns. Inline scan skips fenced code blocks to prevent false positives from code examples in issue bodies. Not sourced — executed via `bash lib/parse-deps.sh`. | dev-poll, supervisor-poll |
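The infra-vs-code split that `is_infra_step()` and `classify_pipeline_failure()` perform can be sketched in Python. This is a minimal sketch, not the shell implementation: the step shape (`name`, `exit_code`, `log`) and the timeout patterns are assumptions; only the documented heuristics (clone/git exit 128, any exit 137, log timeout patterns) come from the table above.

```python
import re

# Assumed timeout patterns — the real shell heuristics may differ.
TIMEOUT_PATTERNS = [r"timeout", r"context deadline exceeded"]

def is_infra_step(step: dict) -> bool:
    """True if a single failed step looks infrastructural (hypothetical step shape)."""
    if step["exit_code"] == 137:
        return True  # SIGKILL: usually OOM or runner eviction
    if step["exit_code"] == 128 and step["name"] in ("clone", "git"):
        return True  # clone/git transport failure
    return any(re.search(p, step["log"], re.IGNORECASE) for p in TIMEOUT_PATTERNS)

def classify_pipeline_failure(failed_steps: list) -> str:
    """Return 'infra <reason>' if any failed step matches the heuristics, else 'code'."""
    for step in failed_steps:
        if is_infra_step(step):
            return f"infra {step['name']}"
    return "code"
```

Callers retry "infra" failures but hand "code" failures to the dev agent, so the heuristics deliberately err toward "code" when in doubt.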
@@ -267,3 +267,42 @@ ci_promote() {
     echo "$new_num"
 }
+
+# ci_get_logs <pipeline_number> [--step <step_name>]
+# Reads CI logs from the Woodpecker SQLite database.
+# Requires: WOODPECKER_DATA_DIR env var or mounted volume at /woodpecker-data
+# Returns: 0 on success, 1 on failure. Outputs log text to stdout.
+#
+# Usage:
+#   ci_get_logs 346                    # Get all failed step logs
+#   ci_get_logs 346 --step smoke-init  # Get logs for specific step
+ci_get_logs() {
+    local pipeline_number="$1"
+    shift || true
+
+    local step_name=""
+    while [ $# -gt 0 ]; do
+        case "$1" in
+            --step|-s)
+                step_name="$2"
+                shift 2
+                ;;
+            *)
+                echo "Unknown option: $1" >&2
+                return 1
+                ;;
+        esac
+    done
+
+    local log_reader="${FACTORY_ROOT:-/home/agent/disinto}/lib/ci-log-reader.py"
+    if [ -f "$log_reader" ]; then
+        if [ -n "$step_name" ]; then
+            python3 "$log_reader" "$pipeline_number" --step "$step_name"
+        else
+            python3 "$log_reader" "$pipeline_number"
+        fi
+    else
+        echo "ERROR: ci-log-reader.py not found at $log_reader" >&2
+        return 1
+    fi
+}
lib/ci-log-reader.py — new executable file, 125 lines
@@ -0,0 +1,125 @@
+#!/usr/bin/env python3
+"""
+ci-log-reader.py — Read CI logs from Woodpecker SQLite database.
+
+Usage:
+    ci-log-reader.py <pipeline_number> [--step <step_name>]
+
+Reads log entries from the Woodpecker SQLite database and outputs them to stdout.
+If --step is specified, filters to that step only. Otherwise returns logs from
+all failed steps, truncated to the last 200 lines to avoid context bloat.
+
+Environment:
+    WOODPECKER_DATA_DIR - Path to Woodpecker data directory (default: /woodpecker-data)
+
+The SQLite database is located at: $WOODPECKER_DATA_DIR/woodpecker.sqlite
+"""
+
+import argparse
+import sqlite3
+import sys
+import os
+
+DEFAULT_DB_PATH = "/woodpecker-data/woodpecker.sqlite"
+DEFAULT_WOODPECKER_DATA_DIR = "/woodpecker-data"
+MAX_OUTPUT_LINES = 200
+
+
+def get_db_path():
+    """Determine the path to the Woodpecker SQLite database."""
+    env_dir = os.environ.get("WOODPECKER_DATA_DIR", DEFAULT_WOODPECKER_DATA_DIR)
+    return os.path.join(env_dir, "woodpecker.sqlite")
+
+
+def query_logs(pipeline_number: int, step_name: str | None = None) -> list[str]:
+    """
+    Query log entries from the Woodpecker database.
+
+    Args:
+        pipeline_number: The pipeline number to query
+        step_name: Optional step name to filter by
+
+    Returns:
+        List of log data strings
+    """
+    db_path = get_db_path()
+
+    if not os.path.exists(db_path):
+        print(f"ERROR: Woodpecker database not found at {db_path}", file=sys.stderr)
+        print(f"Set WOODPECKER_DATA_DIR or mount volume to {DEFAULT_WOODPECKER_DATA_DIR}", file=sys.stderr)
+        sys.exit(1)
+
+    conn = sqlite3.connect(db_path)
+    conn.row_factory = sqlite3.Row
+    cursor = conn.cursor()
+
+    if step_name:
+        # Query logs for a specific step
+        query = """
+            SELECT le.data
+            FROM log_entries le
+            JOIN steps s ON le.step_id = s.id
+            JOIN pipelines p ON s.pipeline_id = p.id
+            WHERE p.number = ? AND s.name = ?
+            ORDER BY le.id
+        """
+        cursor.execute(query, (pipeline_number, step_name))
+    else:
+        # Query logs for all failed steps in the pipeline
+        query = """
+            SELECT le.data
+            FROM log_entries le
+            JOIN steps s ON le.step_id = s.id
+            JOIN pipelines p ON s.pipeline_id = p.id
+            WHERE p.number = ? AND s.state IN ('failure', 'error', 'killed')
+            ORDER BY le.id
+        """
+        cursor.execute(query, (pipeline_number,))
+
+    logs = [row["data"] for row in cursor.fetchall()]
+    conn.close()
+    return logs
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Read CI logs from Woodpecker SQLite database"
+    )
+    parser.add_argument(
+        "pipeline_number",
+        type=int,
+        help="Pipeline number to query"
+    )
+    parser.add_argument(
+        "--step", "-s",
+        dest="step_name",
+        default=None,
+        help="Filter to a specific step name"
+    )
+
+    args = parser.parse_args()
+
+    logs = query_logs(args.pipeline_number, args.step_name)
+
+    if not logs:
+        if args.step_name:
+            print(f"No logs found for pipeline #{args.pipeline_number}, step '{args.step_name}'", file=sys.stderr)
+        else:
+            print(f"No failed steps found in pipeline #{args.pipeline_number}", file=sys.stderr)
+        sys.exit(0)
+
+    # Join all log data and output
+    full_output = "\n".join(logs)
+
+    # Truncate to last N lines to avoid context bloat
+    lines = full_output.split("\n")
+    if len(lines) > MAX_OUTPUT_LINES:
+        # Keep last N lines
+        truncated = lines[-MAX_OUTPUT_LINES:]
+        print("\n".join(truncated))
+    else:
+        print(full_output)
+
+
+if __name__ == "__main__":
+    main()
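The failed-step query above can be exercised against a throwaway in-memory database containing just the three tables and the columns the JOIN touches (the real Woodpecker schema has more columns; this reduced schema is an assumption for illustration):

```python
import sqlite3

def failed_step_logs(conn: sqlite3.Connection, pipeline_number: int) -> list:
    """Run the same failed-step query as ci-log-reader.py against a connection."""
    return [r[0] for r in conn.execute("""
        SELECT le.data
        FROM log_entries le
        JOIN steps s ON le.step_id = s.id
        JOIN pipelines p ON s.pipeline_id = p.id
        WHERE p.number = ? AND s.state IN ('failure', 'error', 'killed')
        ORDER BY le.id
    """, (pipeline_number,)).fetchall()]

# Minimal fixture: one pipeline with a passing and a failing step.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pipelines (id INTEGER PRIMARY KEY, number INTEGER);
    CREATE TABLE steps (id INTEGER PRIMARY KEY, pipeline_id INTEGER, name TEXT, state TEXT);
    CREATE TABLE log_entries (id INTEGER PRIMARY KEY, step_id INTEGER, data TEXT);
    INSERT INTO pipelines VALUES (1, 346);
    INSERT INTO steps VALUES (10, 1, 'build', 'success'), (11, 1, 'smoke-init', 'failure');
    INSERT INTO log_entries VALUES (100, 10, 'build ok'), (101, 11, 'smoke failed: boom');
""")
```

Only the failing step's log entries come back, which is what keeps the agent prompt focused on the actual failure.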
@@ -414,6 +414,23 @@ pr_walk_to_merge() {
     fi

     _prl_log "CI failed — invoking agent (attempt ${ci_fix_count}/${max_ci_fixes})"
+
+    # Get CI logs from SQLite database if available
+    local ci_logs=""
+    if [ -n "$_PR_CI_PIPELINE" ] && [ -n "${FACTORY_ROOT:-}" ]; then
+        ci_logs=$(ci_get_logs "$_PR_CI_PIPELINE" 2>/dev/null | tail -50) || ci_logs=""
+    fi
+
+    local logs_section=""
+    if [ -n "$ci_logs" ]; then
+        logs_section="
+CI Log Output (last 50 lines):
+\`\`\`
+${ci_logs}
+\`\`\`
+"
+    fi

     agent_run --resume "$session_id" --worktree "$worktree" \
         "CI failed on PR #${pr_num} (attempt ${ci_fix_count}/${max_ci_fixes}).

@@ -421,7 +438,7 @@ Pipeline: #${_PR_CI_PIPELINE:-?}
 Failure type: ${_PR_CI_FAILURE_TYPE:-unknown}

 Error log:
-${_PR_CI_ERROR_LOG:-No logs available.}
+${_PR_CI_ERROR_LOG:-No logs available.}${logs_section}

 Fix the issue, run tests, commit, rebase on ${PRIMARY_BRANCH}, and push:
  git fetch ${remote} ${PRIMARY_BRANCH} && git rebase ${remote}/${PRIMARY_BRANCH}
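The conditional prompt assembly in the hunk above — append a fenced log excerpt only when logs were actually retrieved, so the agent prompt gains nothing when `ci_get_logs` fails — reduces to a small pure function. A Python restatement of the shell string-building (function name hypothetical):

```python
def build_logs_section(ci_logs: str) -> str:
    """Empty string when there are no logs; otherwise a fenced block for the agent prompt."""
    if not ci_logs:
        return ""
    return f"\nCI Log Output (last 50 lines):\n```\n{ci_logs}\n```\n"
```

Because the section is empty rather than a placeholder, the existing `${_PR_CI_ERROR_LOG:-No logs available.}` fallback still reads cleanly when the SQLite reader is unavailable.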