✨(summary) add Langfuse observability for LLM API calls
Implement Langfuse tracing for LLM service calls, capturing prompts, responses, latency, token usage, and errors. This enables monitoring and debugging of AI model interactions for performance analysis and cost optimization.
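As a rough illustration of what such a trace record contains, the sketch below wraps an LLM call and collects the fields the commit message lists (prompt, response, latency, token usage, errors). All names here (`traced_llm_call`, `fake_llm`, the dict layout) are hypothetical stand-ins, not the Langfuse SDK API; the real integration would send this record to Langfuse instead of returning it.

```python
import time

def traced_llm_call(llm_fn, prompt, model="gpt-4o-mini"):
    """Hypothetical wrapper: records the fields an observability trace
    would capture for one LLM call. Not the actual Langfuse API."""
    trace = {"model": model, "input": prompt}
    start = time.perf_counter()
    try:
        response = llm_fn(prompt)
        trace["output"] = response["text"]
        trace["usage"] = response.get("usage", {})
        trace["status"] = "ok"
    except Exception as exc:
        trace["error"] = str(exc)
        trace["status"] = "error"
        raise
    finally:
        # Latency is recorded whether the call succeeded or failed.
        trace["latency_ms"] = round((time.perf_counter() - start) * 1000, 2)
        # The real integration would flush `trace` to Langfuse here.
    return response, trace

# Fake LLM client standing in for the real service call.
def fake_llm(prompt):
    return {
        "text": "summary: " + prompt[:20],
        "usage": {"prompt_tokens": len(prompt.split()), "completion_tokens": 5},
    }

response, trace = traced_llm_call(fake_llm, "Summarize the quarterly report")
print(trace["status"], trace["usage"])
```

Capturing latency in `finally` ensures failed calls are still timed, which is what makes error traces useful for debugging.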
commit aff87d4953
parent c81ef38005
committed by aleb_the_flash
@@ -11,6 +11,7 @@ and this project adheres to
 ### Added

 - ✨(backend) enable user creation via email for external integrations
+- ✨(summary) add Langfuse observability for LLM API calls

 ## [1.0.1] - 2025-12-17