- upload local translation files on push
- make crowdin create a pull request when new translations are made
through the crowdin website (webhook configured on crowdin-end)
- dynamically load locale files for smaller footprint
- have a namespace for each feature. At first I figured I'd put each
namespace in its own feature folder, but it's kinda cumbersome to
manage if we want to link that to i18n management services like crowdin…
Updated CI to use "npm" instead of yarn for the frontend project based
on @manuhabitela's recommendations. Also updated the dependencies-related CI
steps that were previously missed.
Heavily inspired by the openfun handbook instructions,
https://handbook.openfun.fr/git
Document how to release a new version. This might become common
documentation shared with the Impress, Regie and Meet projects.
Let's discuss it.
It's quite important that anyone knows how to release,
as we plan to ship an enhanced version every week.
While refactoring 'Impress' to introduce features from 'Magnify',
a few unnecessary changes crept into the database migrations.
Do some cleanup before releasing a first version in production.
Deploying LiveKit on Kubernetes is quite challenging when using a private cloud provider.
@rouja faced some issues while configuring the exposed ports necessary for the
STUN and TURN servers to work when the user is connected to a network behind
a firewall.
@rouja quickly deployed a temporary LiveKit instance on a VM with its own STUN
and TURN servers to avoid relying on the Google infrastructure.
Soon we will have a proper Python API that will interact with the Egress
service.
Up to this point, I've shared how to record data from a meeting, so we can
extract data from the LiveKit server and use it as a sample to build the
AI pipeline.
Please note this documentation is minimal; it's a mini-tutorial.
LiveKit CLI is essential to interact with the running server and its
ecosystem.
I recommend installing it, as you can list rooms, find a participant's
identity, create an egress to record a room, etc.
It helped a lot when debugging the Egress service and discovering its features.
LiveKit offers Universal Egress, designed to export LiveKit sessions or
tracks to a file or a stream.
Egress is kept outside of the server to keep the load off the SFU and avoid
impacting real-time audio or video performance/quality.
Followed the "Running Locally" steps from the https://github.com/livekit/egress
repository, but adapted them to docker-compose.
By default, I chose to run both the LiveKit server and Egress when you
bring up the stack. If we see any performance issue, we could run only the
LiveKit server, which is the core of the product.
Egress is useful only when recording/exporting data.
The Egress service will output file recordings to "./docker/livekit/out".
Note: the Egress service doesn't run as root. You need to update the "/out"
permissions so all users can write to it.
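The compose setup described above could look roughly like this — a minimal sketch, assuming illustrative service names, image tags and file paths (the actual compose file may differ):

```yaml
services:
  redis:
    image: redis:7

  livekit:
    image: livekit/livekit-server
    command: --config /etc/livekit.yaml
    volumes:
      - ./docker/livekit/livekit.yaml:/etc/livekit.yaml
    depends_on:
      - redis

  egress:
    image: livekit/egress
    environment:
      # path to the egress config inside the container
      - EGRESS_CONFIG_FILE=/etc/egress.yaml
    volumes:
      - ./docker/livekit/egress.yaml:/etc/egress.yaml
      # recordings land here; the directory must be writable by non-root users
      - ./docker/livekit/out:/out
    depends_on:
      - redis
```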
The LiveKit server configuration was the default one, which doesn't
connect to any Redis instance. When running a standalone
LiveKit server, Redis is not needed.
However, when adding other LiveKit ecosystem services, e.g. Egress, the
LiveKit server publishes jobs to a Redis queue, which are handled by
the Egress workers.
(Precisely, they use Redis Pub/Sub to communicate, but I am no expert.)
The LiveKit server and Egress need to be connected to the same
Redis instance. This commit configures the LiveKit server before
adding the Egress service to the compose stack.
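For reference, pointing both services at the same Redis instance boils down to a block like this in each config file (a sketch; the address assumes a compose service named `redis`):

```yaml
# in livekit.yaml and egress.yaml — both must point to the same instance
redis:
  address: redis:6379
```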
- we now have "features" to try to organize code by intent instead of
code type. everything at the root of frontend/, not in feature/, is
global
- customized the panda config a bunch to try to begin building an actual
design system. The idea is to avoid arbitrary values here and
there in the code, and use semantic tokens instead
- changed the userAuth code logic to handle the fact that a 401 on the
users/me call is not really an error per se, but rather an indication
that the user is not logged in
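The 401 handling can be sketched as follows — a hypothetical illustration, not the actual userAuth code; the names are made up:

```typescript
// Interpret the status of a GET /users/me response: a 401 means the user
// is simply not logged in, which is an expected state, not a failure.
type AuthState =
  | { kind: "authenticated" }
  | { kind: "anonymous" }
  | { kind: "error"; status: number };

function interpretMeResponse(status: number): AuthState {
  if (status === 401) return { kind: "anonymous" }; // logged out, not an error
  if (status >= 400) return { kind: "error", status }; // real failure
  return { kind: "authenticated" };
}
```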
The ChangeLog won't be of any use before the first release.
To save us time (and the world some useless computation), remove the CI steps.
They'll be added back as soon as they're necessary.
Configured Nginx to set caching headers for static assets by adding
a location block to match common static file extensions and set
an expiration time of 30 days.
It should result in faster loading times, reduced bandwidth usage,
and a more efficient and responsive user experience.
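The location block could look something like this (a sketch; the exact extension list and header values are assumptions):

```nginx
# cache common static assets for 30 days
location ~* \.(?:css|js|png|jpe?g|gif|svg|ico|woff2?)$ {
    expires 30d;
    add_header Cache-Control "public";
}
```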
Wdyt @manuhabitela?
Configured Nginx to serve index.html for all requests, allowing
the client-side router (Wouter) to manage the routing.
Added a try_files directive to attempt to serve static files first,
falling back to index.html if the requested file is not found.
Added an error_page directive to handle 404 errors by internally
redirecting to index.html without modifying the URL path.
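Put together, the directives described above would look roughly like this (a sketch, not the exact configuration):

```nginx
location / {
    # serve the file if it exists, otherwise fall back to the SPA entry point
    try_files $uri $uri/ /index.html;
}

# handle 404s internally without changing the URL, so the client router sees it
error_page 404 /index.html;
```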
Wouter should do the rest.
It seems appropriate that the backend owns the responsibility of knowing any
information/configuration about the LiveKit server, and then shares it
with the frontend.
Please see my previous commit to understand why environment variables are
not appropriate for deployment in several remote environments.
As of today, the LiveKit server URL is the only configuration exposed
dynamically to the frontend, which doesn't justify adding a new API route
responsible for exposing configurations (e.g. /configuration).
As the frontend needs to call the backend when it wants to initiate a new
webconference room, let's pass the server URL when retrieving the room's token.
It makes sense to get both the room location and the keys to open the room in
the same call.
I preferred to be pragmatic; if the need arises any time soon, I'll refactor
these parts.
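The token endpoint's response could then look something like this (field names and values are purely illustrative, not the actual API shape):

```json
{
  "room": "my-room",
  "token": "<livekit-access-token>",
  "livekit_url": "wss://livekit.example.fr"
}
```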
Discussed IRL with @manuhabitela. In development, we build the Docker
image locally. Thus, we can pass values to the frontend before the npm build
command is called.
Environment variables are great for configuration, and work perfectly in dev
mode, where the Docker image is built on the fly.
However, in other environments (e.g. staging, pre-prod, prod) we'll pull a
common Docker image published in a remote registry. All the cited environments
should use the same Docker image to make tests/deployments reproducible
between envs.
As the Docker image is not rebuilt on the fly, we cannot easily configure
customized environment variables for each environment.
The API base URL would have a different value in each environment, and would
require a different environment variable.
Inspired by the Impress work: if no environment variable is passed for the
API URL, the window origin is used and the API path appended.
Frontend and backend are always deployed on the same URL; usually the
frontend is at the '/' route and the backend at the '/api/vXX/' route.
If any configuration is required per remote environment, it will be
retrieved from the API at runtime.
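The fallback logic can be sketched like this — a hypothetical helper, with an illustrative env variable and API version path:

```typescript
// Build the API base URL: prefer the build-time env value, otherwise
// fall back to the window origin (frontend and backend share the same host).
function apiBaseUrl(viteApiUrl: string | undefined, windowOrigin: string): string {
  const base = (viteApiUrl ?? windowOrigin).replace(/\/+$/, "");
  return `${base}/api/v1.0/`; // version path is an assumption
}
```

In the app it would be called as `apiBaseUrl(import.meta.env.VITE_API_URL, window.location.origin)`.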
Voila! Don't hesitate to challenge this commit.
@manuhabitela configured panda css to manage project styling.
Panda codegen generates a new folder, 'styled-system', which was not
ignored by Eslint, resulting in ~40 Eslint errors.
Adapted the Eslint configuration to ignore this path.
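For the record, the ignore rule amounts to something like this in .eslintrc (a sketch; a flat config would use `ignores` instead):

```json
{
  "ignorePatterns": ["styled-system/"]
}
```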
Values for staging, pre-prod, prod environments were adapted to read
the newly introduced LiveKit secrets.
The extra/template/secrets.yaml should be moved to a proper location.
Uncommenting the line left the original commented line in place,
which was misleading because the comment indicated to comment
the next line, which was already commented.
Fixed!
I have updated the staging, pre-prod and production environments.
Done:
- Remove silenced security checks, as SECURE_PROXY_SSL_HEADER is set in prod.
- Rename "impress" to "meet"
- Rename "docs" to "meet"
- Remove unused values (webrtc, ingressWS)
I haven't yet received the definitive DNS configuration from Florian or Olivier.
The hosts meet.numerique.gouv.fr and all meet-*.beta.numerique.gouv.fr are
only hypothetical at this point.
panda needs to generate types to work. We used to generate those after
installing deps, but it's not really necessary, since we generate them before
running the dev env and before building the prod build.
This fixes the `npm ci` error in the frontend Docker build.
the idea is to use react aria, panda-css, react query and wouter as a
base, in addition to livekit react components
this is still mostly wip but it's usable:
- homepage shows a login link to create a room
- before joining a room you are asked to configure your audio/video/user
name
- note that if you directly go to a conference URL it creates the room even
if you are not logged in… very secure!
Done:
- Rename all occurrences of "impress" to "meet".
- Update Agent Connect secrets credentials for the dev environment.
- Add new development secrets for LiveKit.
- Remove Minio from the dev stack (no cold storage required).
- Add LiveKit chart to the stack.
- Remove templates and values related to the WebSocket server.
The integration of LiveKit was inspired by an example from the "numerique-gouv/infrastructure" repo.
However, a notable issue persists with LiveKit's default chart: we are unable to override
the namespace, resulting in all LiveKit components running in the default namespace.
thx to @rouja for his help.
The start-kind.sh script was read-only after copying the repository, preventing
the "build-k8s-cluster" make command from running it. Updated its permissions
to 755.
Configured the frontend to use environment variables (prefixed with "VITE_") for frontend
port and host configuration, which will be overridden in the Helm chart values
to ensure correct values are used in different environments.
Helm requires the frontend port to be 8081 and use the public host,
not the default "localhost" value.
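A sketch of how the dev server could read those variables in vite.config.ts — the variable names are assumptions, not necessarily the ones actually used:

```typescript
import { defineConfig, loadEnv } from "vite";

export default defineConfig(({ mode }) => {
  // loadEnv only exposes variables prefixed with "VITE_" by default
  const env = loadEnv(mode, process.cwd());
  return {
    server: {
      host: env.VITE_FRONTEND_HOST ?? "localhost",
      port: Number(env.VITE_FRONTEND_PORT ?? 3000),
    },
  };
});
```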
Renamed docker/files/usr/local/etc/gunicorn/impress.py to .../meet.py to match the updated
backend source filenames. This resolves the issue where the Dockerfile was attempting
to copy a non-existent file, causing the build to fail.
Configured the frontend to use environment variables (prefixed with "VITE_") for API
and LiveKit server URLs, which will be overridden in the Helm chart values
to ensure correct URLs are used in different environments.
Inspired by the Docker images from numerique-gouv/people and numerique-gouv/impress
(see commit 1a3b396 in the "people" repository).
Due to the lack of a certified cold storage solution (e.g., S3) for serving static files,
we've containerized the frontend as a temporary deployment solution.
Vite.js static output is served using an Nginx reverse proxy.
I am not quite sure about this commit; @manuhabitela, could you please review
how I exposed the static build from Vite in my Nginx server, and make the
appropriate fix if necessary?
Resolved minor TypeScript errors in the Proof of Concept (PoC)
that were causing the "npm run build" command to fail.
These fixes were necessary to prepare the frontend for
containerization with Docker.
ASAP, a CI step will prevent this kind of error.