fix: Two parallel activation paths for llama agents (ENABLE_LLAMA_AGENT vs [agents.X] TOML) (#846)

This commit is contained in:
dev-qwen2 2026-04-16 18:40:35 +00:00
parent c77fb1dc53
commit 28eb182487
6 changed files with 38 additions and 323 deletions

@@ -2,9 +2,12 @@
 Local-model agents run the same agent code as the Claude-backed agents, but
 connect to a local llama-server (or compatible OpenAI-API endpoint) instead of
-the Anthropic API. This document describes the current activation flow using
+the Anthropic API. This document describes the canonical activation flow using
 `disinto hire-an-agent` and `[agents.X]` TOML configuration.
+> **Note:** The legacy `ENABLE_LLAMA_AGENT=1` env flag has been removed (#846).
+> Activation is now done exclusively via `[agents.X]` sections in project TOML.
 ## Overview
 Local-model agents are configured via `[agents.<name>]` sections in