sol/config/sol.toml
Sienna Meridian Satterwhite 7580c10dda feat: multi-agent architecture with Conversations API and persistent state
Mistral Agents + Conversations API integration:
- Orchestrator agent created on startup with Sol's personality + tools
- ConversationRegistry routes messages through persistent conversations
- Per-room conversation state (room_id → conversation_id + token counts)
- Function call handling within conversation responses
- Configurable via [agents] section in sol.toml (use_conversations_api flag)
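The routing described above can be sketched as a std-only registry (type and method names are illustrative, not the actual `src/conversations.rs` API): each room ID maps to its persistent conversation ID plus running token counts.

```rust
use std::collections::HashMap;

/// Per-room conversation state: the Mistral conversation id plus token
/// counters used later for the compaction decision. Illustrative names.
#[derive(Default, Debug)]
struct ConversationState {
    conversation_id: Option<String>,
    prompt_tokens: u64,
    completion_tokens: u64,
}

/// Routes each Matrix room to its persistent conversation.
#[derive(Default)]
struct ConversationRegistry {
    by_room: HashMap<String, ConversationState>,
}

impl ConversationRegistry {
    /// Fetch (or lazily create) the state for a room.
    fn conversation_for(&mut self, room_id: &str) -> &mut ConversationState {
        self.by_room.entry(room_id.to_string()).or_default()
    }

    /// Accumulate token usage reported by a conversation response.
    fn record_usage(&mut self, room_id: &str, prompt: u64, completion: u64) {
        let state = self.conversation_for(room_id);
        state.prompt_tokens += prompt;
        state.completion_tokens += completion;
    }
}
```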

Multimodal support:
- m.image detection and Matrix media download (mxc:// → base64 data URI)
- ContentPart-based messages sent to Mistral vision models
- Archive stores media_urls for image messages
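The mxc:// → base64 pipeline can be sketched std-only (function names and the hand-rolled base64 helper are illustrative; real code would use a base64 crate and an async HTTP download):

```rust
/// Resolve an mxc:// URI to the Matrix media v3 download endpoint.
fn mxc_to_download_url(homeserver: &str, mxc: &str) -> Option<String> {
    let rest = mxc.strip_prefix("mxc://")?;
    let (server, media_id) = rest.split_once('/')?;
    Some(format!(
        "{}/_matrix/media/v3/download/{}/{}",
        homeserver, server, media_id
    ))
}

/// Minimal standard base64 encoder (padding included), for illustration only.
fn base64_encode(data: &[u8]) -> String {
    const ALPHABET: &[u8] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    let mut out = String::new();
    for chunk in data.chunks(3) {
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = ((b[0] as u32) << 16) | ((b[1] as u32) << 8) | b[2] as u32;
        out.push(ALPHABET[(n >> 18) as usize & 63] as char);
        out.push(ALPHABET[(n >> 12) as usize & 63] as char);
        out.push(if chunk.len() > 1 { ALPHABET[(n >> 6) as usize & 63] as char } else { '=' });
        out.push(if chunk.len() > 2 { ALPHABET[n as usize & 63] as char } else { '=' });
    }
    out
}

/// Wrap downloaded bytes as a data URI for the vision model's ContentPart.
fn to_data_uri(mime: &str, bytes: &[u8]) -> String {
    format!("data:{};base64,{}", mime, base64_encode(bytes))
}
```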

System prompt rewrite:
- Cut from 687 to 150 lines: dense phrasing, few-shot examples, hard rules
- {room_context_rules} placeholder for group vs DM behavior
- Sender prefixing (<@user:server>) for multi-user turns in group rooms
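A minimal sketch of the sender-prefixing rule (function name is illustrative): in group rooms each turn carries the sender's Matrix user ID so the model can attribute messages; DMs are left bare.

```rust
/// Prefix group-room turns with the sender's Matrix ID; DMs pass through.
fn format_turn(is_group_room: bool, sender: &str, body: &str) -> String {
    if is_group_room {
        format!("<{}> {}", sender, body)
    } else {
        body.to_string()
    }
}
```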

SQLite persistence (/data/sol.db):
- Conversation mappings and agent IDs survive reboots
- WAL mode for concurrent reads
- Falls back to in-memory on failure (sneezes into all rooms to signal the degraded state)
- PVC already mounted at /data alongside Matrix SDK state store
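The fallback behavior can be modeled as a store trait with an in-memory implementation (the real store uses SQLite in WAL mode at /data/sol.db; trait and names here are illustrative, std-only):

```rust
use std::collections::HashMap;

/// State that must survive reboots: room → conversation mappings.
trait StateStore {
    fn save_conversation(&mut self, room_id: &str, conversation_id: &str);
    fn load_conversation(&self, room_id: &str) -> Option<String>;
}

/// In-memory fallback used when opening the SQLite database fails.
/// State is lost on restart, hence the in-room signal of degraded mode.
#[derive(Default)]
struct MemoryStore {
    conversations: HashMap<String, String>,
}

impl StateStore for MemoryStore {
    fn save_conversation(&mut self, room_id: &str, conversation_id: &str) {
        self.conversations.insert(room_id.into(), conversation_id.into());
    }

    fn load_conversation(&self, room_id: &str) -> Option<String> {
        self.conversations.get(room_id).cloned()
    }
}
```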

New modules:
- src/persistence.rs — SQLite state store
- src/conversations.rs — ConversationRegistry + message merging
- src/agents/{mod,definitions,registry}.rs — agent lifecycle
- src/agent_ux.rs — reaction + thread progress UX
- src/tools/bridge.rs — tool dispatch for domain agents

102 tests passing.
2026-03-21 22:21:14 +00:00

49 lines
1.4 KiB
TOML

[matrix]
homeserver_url = "http://tuwunel.matrix.svc.cluster.local:6167"
user_id = "@sol:sunbeam.pt"
state_store_path = "/data/matrix-state"
db_path = "/data/sol.db"

[opensearch]
url = "http://opensearch.data.svc.cluster.local:9200"
index = "sol_archive"
memory_index = "sol_user_memory"
batch_size = 50
flush_interval_ms = 2000
embedding_pipeline = "tuwunel_embedding_pipeline"

[mistral]
default_model = "mistral-medium-latest"
evaluation_model = "ministral-3b-latest"
research_model = "mistral-large-latest"
max_tool_iterations = 5

[behavior]
response_delay_min_ms = 100
response_delay_max_ms = 2300
spontaneous_delay_min_ms = 15000
spontaneous_delay_max_ms = 60000
spontaneous_threshold = 0.85
room_context_window = 30
dm_context_window = 100
backfill_on_join = true
backfill_limit = 10000
instant_responses = false
cooldown_after_response_ms = 15000
evaluation_context_window = 200
reaction_threshold = 0.6
reaction_enabled = true
detect_sol_in_conversation = true
script_timeout_secs = 5
script_max_heap_mb = 64
script_fetch_allowlist = []
memory_extraction_enabled = true
# evaluation_prompt_active = "custom prompt when Sol is already in conversation..."
# evaluation_prompt_passive = "custom prompt when Sol hasn't spoken yet..."

[agents]
orchestrator_model = "mistral-medium-latest"
domain_model = "mistral-medium-latest"
compaction_threshold = 118000
use_conversations_api = true
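The [agents] section can be mirrored as a plain struct whose defaults match the values above (the real code presumably deserializes sol.toml with serde; this std-only sketch just shows how compaction_threshold gates conversation compaction):

```rust
/// Mirror of the [agents] section; defaults match sol.toml. Illustrative.
struct AgentsConfig {
    orchestrator_model: String,
    domain_model: String,
    compaction_threshold: u64,
    use_conversations_api: bool,
}

impl Default for AgentsConfig {
    fn default() -> Self {
        Self {
            orchestrator_model: "mistral-medium-latest".into(),
            domain_model: "mistral-medium-latest".into(),
            compaction_threshold: 118_000,
            use_conversations_api: true,
        }
    }
}

impl AgentsConfig {
    /// Compact once a conversation's accumulated tokens cross the threshold.
    fn needs_compaction(&self, total_tokens: u64) -> bool {
        total_tokens >= self.compaction_threshold
    }
}
```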