Implement Langfuse tracing integration for LLM service calls to capture prompts, responses, latency, token usage, and errors, enabling comprehensive monitoring and debugging of AI model interactions for performance analysis and cost optimization.
# Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [Unreleased]
### Added
- ✨(backend) enable user creation via email for external integrations
- ✨(summary) add Langfuse observability for LLM API calls
## [1.0.1] - 2025-12-17
### Changed
- ♿(frontend) improve accessibility:
  - ♿(frontend) improve hover controls, focus handling, and screen reader (SR) support #803
  - ♿(frontend) change push-to-talk (PTT) keybinding from Space to V #813
  - ♿(frontend) indicate that the external link opens in a new window on feedback #816
  - ♿(frontend) fix heading level in modal to maintain semantic hierarchy #815
  - ♿(frontend) improve focus management when opening and closing chat #807