feat(orchestrator): Phase 2 engine + tokenizer + tool dispatch
Orchestrator engine:
- engine.rs: unified Mistral Conversations API tool loop that emits OrchestratorEvent instead of calling Matrix/gRPC directly
- tool_dispatch.rs: ToolSide routing (client vs server tools)
- Memory loading stubbed (migrates in Phase 4)

Server-side tokenizer:
- tokenizer.rs: HuggingFace tokenizers-rs with Mistral's BPE tokenizer
- count_tokens() for accurate usage metrics
- Loads from local tokenizer.json or falls back to bundled vocab
- Config: mistral.tokenizer_path (optional)

No behavior change: engine is wired but not yet called from sync.rs or session.rs (Phase 2 continuation).
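The client/server split in tool_dispatch.rs could look roughly like the sketch below. Only the `ToolSide` name comes from the commit; the variants, the `tool_side` helper, and the example tool names are illustrative assumptions, not the actual API.

```rust
// Illustrative sketch only: `ToolSide` is named in the commit message, but the
// variants, helper, and tool names below are assumptions.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ToolSide {
    /// Executed by the connected client (e.g. UI-facing tools).
    Client,
    /// Executed inside the orchestrator process.
    Server,
}

/// Hypothetical routing table: map a tool name to the side that runs it.
fn tool_side(name: &str) -> ToolSide {
    match name {
        // Assumed examples; the real mapping lives in tool_dispatch.rs.
        "open_file" | "notify_user" => ToolSide::Client,
        _ => ToolSide::Server,
    }
}
```

Routing on the orchestrator side keeps the engine loop uniform: it emits the same OrchestratorEvent for both sides and lets the dispatcher decide where execution happens.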
```diff
@@ -102,6 +102,9 @@ pub struct MistralConfig {
     pub research_model: String,
     #[serde(default = "default_max_tool_iterations")]
     pub max_tool_iterations: usize,
+    /// Path to a local `tokenizer.json` file. If unset, downloads from HuggingFace Hub.
+    #[serde(default)]
+    pub tokenizer_path: Option<String>,
 }

 #[derive(Debug, Clone, Deserialize)]
```
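The new optional `tokenizer_path` field implies a small resolution step at startup: use the configured file if present, otherwise fall back (the commit body says "bundled vocab", the doc comment says a HuggingFace Hub download). A minimal sketch of that decision, where `TokenizerSource` and `resolve_tokenizer` are invented names for illustration:

```rust
use std::path::Path;

/// Illustrative only: this enum and function are not in the diff.
#[derive(Debug, PartialEq, Eq)]
enum TokenizerSource {
    /// Load from the configured local tokenizer.json.
    LocalFile(String),
    /// No usable path configured; fall back (bundled vocab or Hub download,
    /// per the commit body and doc comment respectively).
    Fallback,
}

/// Resolve where the tokenizer is loaded from, given the optional
/// `mistral.tokenizer_path` config value.
fn resolve_tokenizer(tokenizer_path: Option<&str>) -> TokenizerSource {
    match tokenizer_path {
        Some(p) if Path::new(p).exists() => TokenizerSource::LocalFile(p.to_string()),
        // Treating a configured-but-missing file as a fallback is one plausible
        // choice; the real code might instead make it a hard config error.
        _ => TokenizerSource::Fallback,
    }
}
```

`#[serde(default)]` on an `Option<String>` means an absent key deserializes to `None`, so existing config files keep working without the new field.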
||||