Request the necessary scopes from the ProConnect service.
Update the configuration in every environment.
Note: request the `given_name` and `usual_name` scopes to get users' info.
(These scopes should be granted by default by ProConnect when
requesting a client id / client secret.)
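For reference, a minimal sketch of the settings side, assuming a mozilla-django-oidc-style integration (the setting name is standard for that library, but whether the repo uses it is an assumption):

```python
# Django settings sketch: request the ProConnect identity scopes alongside
# the standard OIDC ones (assumes a mozilla-django-oidc-style backend).
OIDC_RP_SCOPES = "openid email given_name usual_name"
```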
Egress is already deployed in staging, but while working locally on a
feature relying on Egress, staging is not suitable for testing or iterating.
In particular, I'll need to test the connection between Egress
and the MinIO bucket in my next PR.
We faced quite a few issues while starting the whole stack.
Egress didn't want to start: its connection to the LiveKit server failed
while the egress participant was joining the room.
The TURN part of the LiveKit server Helm chart was activated; we needed
to update a few values in the Helm configuration to get TURN working.
Updated CoreDNS so the Egress pod can reach MinIO. Egress tried connecting to
MinIO at 127.0.0.1, where no instance exists: minio.127.0.0.1.nip.io resolves
to 127.0.0.1, causing Egress to connect to itself for uploads. A CoreDNS
rewrite rule now directs this hostname to the Ingress IP, correctly routing to MinIO.
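The nip.io behaviour behind the bug is easy to reproduce; a quick sketch:

```python
import socket

# nip.io resolves any <name>.<ip>.nip.io hostname to <ip>, so from inside the
# Egress pod this returns 127.0.0.1 and uploads loop back to Egress itself.
print(socket.gethostbyname("minio.127.0.0.1.nip.io"))  # -> 127.0.0.1
```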
Inspired by Impress and @sampaccoud's work.
We use Indie Hoster’s Kubernetes objects in staging and production.
In the "dev" environment, we install the `bitnami/minio` chart to mimic
Indie Hoster’s MinIO setup.
To access the MinIO admin interface in dev, use port forwarding
(e.g. `kubectl port-forward svc/minio 9001:9001`; the exact service name
depends on the Helm release); the interface runs on port 9001.
Based on feedback from @rouja, I've updated the Helm configuration for PostHog
to use separate ingress resources for each service. Although the documentation
suggests sharing the same ingress, the services have different externalName
values, which conflicts with the use of a vhost in the ingress annotations.
This change ensures proper service redirection by aligning each service with
its own ingress.
Declare the support and analytics env variables expected by the frontend
for each environment, to avoid consuming too many credits from our PostHog
free-tier plan.
When a new secret is added to the backend secrets, it is not synced at the
beginning of the ArgoCD synchronisation and jobs are blocked. These new
annotations fix this issue.
Updated Django's ALLOWED_HOSTS setting from '*' to the specific host of the
server. Setting ALLOWED_HOSTS to '*' is a security risk as it allows any host
to access the application, potentially exposing it to malicious attacks.
Restricting ALLOWED_HOSTS to the server's host ensures only legitimate
requests are processed.
In a Kubernetes environment, we also needed to whitelist the pod's IP address
to allow health checks to pass. This ensures that Kubernetes liveness and
readiness probes can access the application to verify its health.
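A minimal sketch of the resulting settings; the environment variable name is an assumption, and the pod IP lookup mirrors a common Kubernetes pattern:

```python
import os
import socket

# Restrict ALLOWED_HOSTS to the server's host (env variable name is assumed).
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "").split(",")

# Liveness/readiness probes hit the pod directly by IP, so whitelist it too.
ALLOWED_HOSTS.append(socket.gethostbyname(socket.gethostname()))
```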
Updated the liveness and readiness probe interval (`periodSeconds`) from
every 10 seconds to every 30 seconds. This change reduces the load on the
server by decreasing the frequency of health checks.
Given the current stability of the application, a 30-second interval is
sufficient to ensure that the application remains responsive and healthy.
Silenced certain Django security warnings because the application is served
behind a reverse proxy. These warnings are not applicable in our deployment
context, where the reverse proxy handles these security concerns.
This change ensures relevant security measures are appropriately managed
while avoiding unnecessary warnings. Any questions? Ask @rouja.
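As a sketch, the mechanism would be Django's SILENCED_SYSTEM_CHECKS setting; which check IDs this commit actually silences is an assumption:

```python
# Security checks handled by the reverse proxy (check IDs are examples only).
SILENCED_SYSTEM_CHECKS = [
    "security.W008",  # SECURE_SSL_REDIRECT: TLS redirection is handled upstream
]
```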
/!\ Actually, this commit is not working and should be fixed.
Require users to create a room in the database
before requesting a LiveKit token.
If a user requests an access token for a room that doesn't
exist in our db, the request ends in a 404 error.
Ensure that rooms must be registered by a user before they can be accessed.
By default, all created rooms remain public, allowing anonymous users to join
any room created by a logged-in user.
However, anonymous users cannot create rooms themselves.
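A hypothetical sketch of the endpoint (names are illustrative, not the repo's actual view):

```python
from django.shortcuts import get_object_or_404
from rest_framework.decorators import api_view
from rest_framework.response import Response

from core.models import Room  # hypothetical app layout


@api_view(["GET"])
def livekit_token(request, room_slug):
    # 404 if the room was never registered in the database.
    room = get_object_or_404(Room, slug=room_slug)
    token = generate_livekit_token(request.user, room)  # assumed helper
    return Response({"token": token})
```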
I have set up distinct namespaces for each environment. You can now push
events to the development namespace without affecting production data.
Please note that these keys are not 'secret'. They will also be configured
in the browser SDK, which is inherently insecure. The documentation does not
specify a secure storage method for these keys.
Deploying LiveKit on Kubernetes is quite challenging when using a private cloud provider.
@rouja faced some issues while configuring the exposed ports necessary for the
STUN and TURN servers to work when the user is connected to a network behind a firewall.
@rouja quickly deployed a temporary LiveKit instance on a VM, with its own STUN and
TURN servers, to avoid using the Google infrastructure.
It seems appropriate that the backend owns the responsibility of knowing any
information/configuration about the LiveKit server, and then shares it
with the frontend.
Please see my previous commit to understand why environment variables are
not appropriate for deployment in several remote environments.
As of today, the LiveKit server URL is the only configuration exposed
dynamically to the frontend. This alone doesn't justify adding a new API route
responsible for exposing configuration (e.g. /configuration).
As the frontend needs to call the backend when it wants to initiate a new
webconference room, let's pass the server URL when retrieving the room's token.
It is relevant to get both the room location and the keys to open the room in
the same call, as sketched below.
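A sketch of the payload shape, with illustrative names (the setting name and helper are assumptions):

```python
from django.conf import settings


def token_payload(user, room):
    # Room location and keys returned together in a single API call.
    return {
        "livekit_url": settings.LIVEKIT_SERVER_URL,  # assumed setting name
        "token": generate_livekit_token(user, room),  # assumed helper
    }
```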
I preferred to be pragmatic; if the need appears any time soon, I will refactor
these parts.
Discussed IRL with @manuhabitela. In development, we build the
Docker image locally, so we can pass values to the frontend before the npm build
command is called.
Environment variables are great for configuration, and work perfectly in dev
mode, where the Docker image is built on the fly.
However, in other environments (e.g. staging, pre-prod, prod) we'll pull a common
Docker image published to a remote registry. All cited environments should use
the same Docker image to make tests/deployment reproducible between envs.
As the Docker image is not rebuilt on the fly, we cannot easily configure
customized environment variables for each environment.
The API base URL has a different value in each environment, and would
require a different environment variable per environment.
Inspired by Impress's approach: if no environment variable is passed for the API URL,
the window origin is used and the API path is appended to it.
Frontend and backend are always deployed on the same URL, usually the frontend
at the '/' route and the backend at the '/api/vXX/' route.
If any configuration is required per remote environment, it will be
retrieved from the API at runtime.
Voila! Don't hesitate to challenge this commit.
Values for staging, pre-prod, prod environments were adapted to read
the newly introduced LiveKit secrets.
The extra/template/secrets.yaml should be moved to a proper location.
I have updated the staging, pre-prod and production environments.
Done:
- Remove silenced security checks, as SECURE_PROXY_SSL_HEADER is set in prod.
- Rename "impress" to "meet".
- Rename "docs" to "meet".
- Remove unused values (webrtc, ingressWS).
I haven't yet received the definitive DNS configuration from Florian or Olivier.
The hosts meet.numerique.gouv.fr and all meet-*.beta.numerique.gouv.fr are
only hypothetical at this point.
Done:
- Rename all occurrences of "impress" to "meet".
- Update Agent Connect secrets credentials for the dev environment.
- Add new development secrets for LiveKit.
- Remove Minio from the dev stack (no cold storage required).
- Add LiveKit chart to the stack.
- Remove templates and values related to the WebSocket server.
The integration of LiveKit was inspired by an example from the "numerique-gouv/infrastructure" repo.
However, a notable issue persists with LiveKit's default chart: we are unable to override
the namespace, resulting in all LiveKit components running in the default namespace.
Thanks to @rouja for his help.
I have created two new repositories on DockerHub, one for the currently
existing backend image, and one for the future frontend image.
I search-and-replaced all occurrences of "lasuite/impress-frontend" and "lasuite/impress-backend".
One image, "impress-y-webrtc-signaling", won't exist anymore; I have
removed the steps building and pushing its image to the DockerHub account.
This commit introduces a boilerplate inspired by https://github.com/numerique-gouv/impress.
The code has been cleaned to remove unnecessary Impress logic and dependencies.
Changes made:
- Removed MinIO, WebRTC, and the create-bucket step from the stack.
- Removed the Next.js frontend (it will be replaced by Vite).
- Cleaned up Impress-specific backend logic.
The whole stack remains functional:
- All tests pass.
- Linter checks pass.
- Agent Connect sources are already set up.
Why clear out the code?
To adhere to the KISS principle, we aim to maintain a minimalist codebase. Cloning Impress
allowed us to quickly inherit its code quality tools and deployment configurations for staging,
pre-production, and production environments.
What’s broken?
- The tsclient is not functional anymore.
- Some make commands need to be fixed.
- Helm sources are outdated.
- Naming across the project sources is inconsistent (impress, visio, etc.).
- CI is not configured properly.
This list might be incomplete. Let's grind it.