The previous attempt to make the Deepgram configuration extensible
introduced unnecessary complexity for a very limited use case and
made it harder to add new STT backends.
Refactor to a deliberately simple and explicit design with minimal
cognitive overhead. Configuration is now fully driven by environment
variables and provides enough flexibility for ops to select and
parameterize the STT backend.
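The intended pattern can be sketched roughly as follows (the variable names, factory functions, and registry are illustrative assumptions, not the actual code):

```python
import os

# Hypothetical backend factories; in the real code these would wrap
# the corresponding LiveKit STT plugins.
def _deepgram_stt(**kwargs):
    return ("deepgram", kwargs)

def _kyutai_stt(**kwargs):
    return ("kyutai", kwargs)

# Explicit registry: adding a new STT backend means adding one entry here.
STT_BACKENDS = {
    "deepgram": _deepgram_stt,
    "kyutai": _kyutai_stt,
}

def build_stt():
    """Select and parameterize the STT backend from environment variables."""
    backend = os.environ.get("STT_BACKEND", "deepgram")
    if backend not in STT_BACKENDS:
        raise ValueError(f"Unsupported STT backend: {backend}")
    model = os.environ.get("STT_MODEL", "nova-3")
    return STT_BACKENDS[backend](model=model)
```

The point of the design is that ops only set environment variables, and developers only touch the registry when adding a backend.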
Until a pull request adding Kyutai API support is merged into
livekit-agent, we will use a custom, hacky Python library built from
Arnaud's research and published on an unofficial PyPI project page.
Everything is still quite "draft", but it lets us deploy and test
Arnaud's work in real situations.
Some system dependencies were unexpectedly missing, causing the
LiveKit agent framework to fail at runtime.
Install the required dependencies based on runtime error logs.
This fixes Docker image failures in the remote (staging) environment.
- add admin action to retry a recording notification to external services
- log more Celery tasks' parameters
- add multilingual support for real-time subtitles
- update backend dependencies
Add dynamic configuration for Deepgram STT via environment variables,
enabling multilingual real-time subtitles with automatic language
detection.
Changes:
- Add a DEEPGRAM_STT_* environment variable pattern for configuration
- Implement _build_deepgram_stt_kwargs() to dynamically build STT
parameters from environment variables
- Add whitelist of supported parameters (model, language) for LiveKit
Deepgram plugin
- Log warnings for unsupported parameters (diarize, smart_format, etc.)
- Set default configuration: model=nova-3, language=multi
- Document supported parameters in Helm values.yaml
Configuration:
- DEEPGRAM_STT_MODEL: Deepgram model (default: nova-3)
- DEEPGRAM_STT_LANGUAGE: Language or 'multi' for automatic detection
of 10 languages (en, es, fr, de, hi, ru, pt, ja, it, nl)
Note: Advanced features like diarization and smart_format are not
supported by the LiveKit Deepgram plugin in streaming mode.
Create Python script based on LiveKit's multi-user transcriber example
with enhanced request_fnc handler that ensures job uniqueness by room.
A transcriber sends segments to every participant present in a room and
transcribes every participant's audio, so we don't need several
transcribers in the same room. We made the worker hidden: by default an
agent uses auto dispatch and is visible like any other participant, but
a transcriber showing up as a participant would be odd, since no other
videoconference tool presents this feature as a bot joining the call.
Job uniqueness is ensured through the agent identity: we forge a
deterministic identity for each transcriber per room, which guarantees
that two transcribers can never join the same room. This may be a bit
harsh, but our API call listing a room's participants before accepting
a new transcription job should already filter out situations where an
agent is triggered twice.
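The identity scheme and the pre-dispatch guard can be sketched like this (the identity format and function names are illustrative assumptions, not the actual code):

```python
def transcriber_identity(room_name: str) -> str:
    """Forge a deterministic transcriber identity per room, so a second
    transcriber targeting the same room collides on identity."""
    return f"transcriber-{room_name}"

def should_accept_job(room_name: str, participant_identities) -> bool:
    """Hypothetical guard run before accepting a transcription job: the
    backend lists the room's participants and refuses the job if a
    transcriber is already present."""
    return transcriber_identity(room_name) not in set(participant_identities)
```

Determinism is the key property: the identity depends only on the room, so no coordination or shared state is needed to detect a duplicate.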
We chose explicit worker orchestration over auto-dispatch because we
want to keep control of this feature, which will be challenging to
scale. LiveKit agent scaling is documented, but we need to experiment
with their Worker/Job mechanism in real-life situations.
Currently uses Deepgram, since Arnaud's draft Kyutai plugin isn't ready
for production; this lets our ops team move forward on deploying and
monitoring agents. Deepgram was a fairly arbitrary choice offering 200
free hours, though it only works for English. The ASR provider needs to
be refactored into a pluggable system selectable through environment
variables or settings.
Agent dispatch will be triggered via a new REST API endpoint to our
backend. This is a first, naive version of a minimal dockerized
LiveKit agent, meant to start experimenting with the framework.