fix: Two parallel activation paths for llama agents (ENABLE_LLAMA_AGENT vs [agents.X] TOML) (#846)
parent c77fb1dc53
commit 28eb182487
6 changed files with 38 additions and 323 deletions
@@ -2,9 +2,12 @@
 Local-model agents run the same agent code as the Claude-backed agents, but
 connect to a local llama-server (or compatible OpenAI-API endpoint) instead of
-the Anthropic API. This document describes the current activation flow using
+the Anthropic API. This document describes the canonical activation flow using
 `disinto hire-an-agent` and `[agents.X]` TOML configuration.
 
+> **Note:** The legacy `ENABLE_LLAMA_AGENT=1` env flag has been removed (#846).
+> Activation is now done exclusively via `[agents.X]` sections in project TOML.
+
 ## Overview
 
 Local-model agents are configured via `[agents.<name>]` sections in
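For context, the docs changed above center on `[agents.<name>]` TOML sections. A minimal sketch of such a section follows; the key names (`endpoint`, `model`) are hypothetical illustrations, not taken from this commit, which only shows the section header convention:

```toml
# Hypothetical [agents.<name>] section -- key names are illustrative only.
[agents.llama-local]
# Local llama-server (or any OpenAI-API-compatible) endpoint,
# used in place of the Anthropic API per the docs above.
endpoint = "http://127.0.0.1:8080/v1"
model = "llama-3.1-8b-instruct"
```

The actual keys accepted by `disinto hire-an-agent` should be taken from the project's own documentation.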