feat: La Suite email/messages, buildkitd, monitoring, vault and storage updates
- Add Messages (email) service: backend, frontend, MTA in/out, MPA, SOCKS proxy, worker, DKIM config, and theme customization
- Add Collabora deployment for document collaboration
- Add Drive frontend nginx config and values
- Add buildkitd namespace for in-cluster container builds
- Add SeaweedFS remote sync and additional S3 buckets
- Update vault secrets across namespaces (devtools, lasuite, media, monitoring, ory, storage) with expanded credential management
- Update monitoring: rename grafana→metrics OAuth2Client, add Prometheus remote write and additional scrape configs
- Update local/production overlays with resource patches
- Remove stale login-ui resource patch from production overlay
255
AGENTS.md
Normal file
@@ -0,0 +1,255 @@
# Sunbeam Infrastructure

Kubernetes manifests for a self-hosted collaboration platform. Single-node k3s cluster deployed via Kustomize + Helm. All YAML, no Terraform, no Pulumi.

## How to Deploy

```bash
# Local (Lima VM on macOS):
sunbeam up              # full stack bring-up
sunbeam apply           # re-apply all manifests
sunbeam apply lasuite   # apply single namespace

# Or raw kustomize:
kustomize build --enable-helm overlays/local | sed 's/DOMAIN_SUFFIX/192.168.5.2.sslip.io/g' | kubectl apply --server-side -f -
```

There is no `make`, no CI pipeline, no Terraform. Manifests are applied with `sunbeam apply` (which runs kustomize build + sed + kubectl apply).

## Critical Rules

- **Do NOT add Helm charts when plain YAML works.** Most resources here are plain YAML. Helm is only used for complex upstream charts (Kratos, Hydra, CloudNativePG, etc.) that have their own release cycles. A new Deployment, Service, or ConfigMap should be plain YAML.
- **Do NOT create Ingress resources.** This project does not use Kubernetes Ingress. Routing is handled by Pingora (a custom reverse proxy) configured via a TOML ConfigMap in `base/ingress/`. To expose a new service, add a `[[routes]]` entry to the Pingora config.
- **Do NOT introduce Terraform, Pulumi, or any IaC tool.** Everything is Kustomize.
- **Do NOT modify overlays when the change belongs in base.** Base holds the canonical config; overlays only hold environment-specific patches. If something applies to both local and production, it goes in base.
- **Do NOT add RBAC, NetworkPolicy, or PodSecurityPolicy resources.** Linkerd service mesh handles mTLS. RBAC is managed at the k3s level.
- **Do NOT create new namespaces** without being asked. The namespace layout is intentional.
- **Do NOT hardcode domains.** Use `DOMAIN_SUFFIX` as a placeholder — it gets substituted at deploy time.
- **Never commit TLS keys or secrets.** Secrets are managed by OpenBao + Vault Secrets Operator, not stored in this repo.
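A `[[routes]]` entry for the Pingora rule above might be sketched roughly like this — the key names here are assumptions, not the actual schema, so mirror an existing entry in `base/ingress/` rather than this example:

```toml
# Hypothetical sketch — "host" and "upstream" are assumed key names;
# copy the shape of an existing [[routes]] entry in base/ingress/.
[[routes]]
host = "newapp.DOMAIN_SUFFIX"
upstream = "newapp.lasuite.svc.cluster.local:8000"
```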

## Directory Structure

```
base/                        # Canonical manifests (environment-agnostic)
  {namespace}/               # One directory per Kubernetes namespace
    kustomization.yaml       # Resources, patches, helmCharts
    namespace.yaml           # Namespace definition with Linkerd injection
    vault-secrets.yaml       # VaultAuth + VaultStaticSecret + VaultDynamicSecret
    {app}-deployment.yaml    # Deployments
    {app}-service.yaml       # Services
    {app}-config.yaml        # ConfigMaps
    {chart}-values.yaml      # Helm chart values
    patch-{what}.yaml        # Strategic merge patches

overlays/
  local/                     # Lima VM dev overlay (macOS)
    kustomization.yaml       # Selects base dirs, adds local patches + image overrides
    patch-*.yaml / values-*.yaml
  production/                # Scaleway server overlay
    kustomization.yaml
    patch-*.yaml / values-*.yaml

scripts/                     # Bash automation (local-up.sh, local-down.sh, etc.)
secrets/                     # TLS cert placeholders (gitignored)
```

## Manifest Conventions — Follow These Exactly

### kustomization.yaml

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: {namespace-name}

resources:
  - namespace.yaml
  - {app}-deployment.yaml
  - {app}-service.yaml
  - vault-secrets.yaml

# Helm charts only for complex upstream projects
helmCharts:
  - name: {chart}
    repo: https://...
    version: "X.Y.Z"
    releaseName: {name}
    namespace: {ns}
    valuesFile: {chart}-values.yaml

patches:
  - path: patch-{what}.yaml
    target:
      kind: Deployment
      name: {name}
```

### Namespace definition

Every namespace gets Linkerd injection:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {name}
  annotations:
    linkerd.io/inject: enabled
```

### Deployments

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {app}
  namespace: {ns}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {app}
  template:
    metadata:
      labels:
        app: {app}
    spec:
      containers:
        - name: {app}
          image: {app}  # Short name — Kustomize images: section handles registry
          envFrom:
            - configMapRef:
                name: {app}-config  # Bulk env from ConfigMap
          env:
            - name: DB_PASSWORD  # Individual secrets from VSO-synced K8s Secrets
              valueFrom:
                secretKeyRef:
                  name: {app}-db-credentials
                  key: password
          livenessProbe:
            httpGet:
              path: /__lbheartbeat__
              port: 8000
            initialDelaySeconds: 15
            periodSeconds: 20
          readinessProbe:
            httpGet:
              path: /__heartbeat__
              port: 8000
            initialDelaySeconds: 10
            periodSeconds: 10
          resources:
            limits:
              memory: 512Mi
              cpu: 500m
            requests:
              memory: 128Mi
              cpu: 100m
```

Key patterns:

- `image: {app}` uses a short name — the actual registry is set via `images:` in the overlay kustomization.yaml
- Django apps use `/__lbheartbeat__` (liveness) and `/__heartbeat__` (readiness)
- Init containers run migrations: `python manage.py migrate --no-input`
- `envFrom` for ConfigMaps, individual `env` entries for secrets
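The migration init container from the list above, sketched for a hypothetical `{app}` following the Deployment conventions (same short image name, same ConfigMap and Secret names as the main container):

```yaml
# Sketch only — goes in the Deployment's pod template, alongside containers:.
initContainers:
  - name: migrate
    image: {app}  # Same short name; the overlay images: section sets the registry
    command: ["python", "manage.py", "migrate", "--no-input"]
    envFrom:
      - configMapRef:
          name: {app}-config  # Same env as the app so migrations can reach the DB
    env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: {app}-db-credentials
            key: password
```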

### Secrets (Vault Secrets Operator)

Secrets are NEVER stored in this repo. They flow: OpenBao → VaultStaticSecret/VaultDynamicSecret CRD → K8s Secret → Pod env.

```yaml
# VaultAuth — one per namespace, always named vso-auth
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  name: vso-auth
  namespace: {ns}
spec:
  method: kubernetes
  mount: kubernetes
  kubernetes:
    role: vso
    serviceAccount: default

---
# Static secrets (OIDC keys, Django secrets, etc.)
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: {secret-name}
  namespace: {ns}
spec:
  vaultAuthRef: vso-auth
  mount: secret
  type: kv-v2
  path: {openbao-path}
  refreshAfter: 30s
  destination:
    name: {k8s-secret-name}
    create: true
    overwrite: true
    transformation:
      excludeRaw: true
      templates:
        KEY_NAME:
          text: '{{ index .Secrets "openbao-key" }}'

---
# Dynamic secrets (DB credentials, rotate every 5m)
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  name: {app}-db-credentials
  namespace: {ns}
spec:
  vaultAuthRef: vso-auth
  mount: database
  path: static-creds/{app}
  allowStaticCreds: true
  refreshAfter: 5m
  rolloutRestartTargets:
    - kind: Deployment
      name: {app}
  destination:
    name: {app}-db-credentials
    create: true
    transformation:
      templates:
        password:
          text: 'postgresql://{{ index .Secrets "username" }}:{{ index .Secrets "password" }}@postgres-rw.data.svc.cluster.local:5432/{db}'
```

### Shared ConfigMaps

Apps in the `lasuite` namespace share these ConfigMaps via `envFrom`:

- `lasuite-postgres` — DB_HOST, DB_PORT, DB_ENGINE
- `lasuite-valkey` — REDIS_URL, CELERY_BROKER_URL
- `lasuite-s3` — AWS_S3_ENDPOINT_URL, AWS_S3_REGION_NAME
- `lasuite-oidc-provider` — OIDC endpoints (uses DOMAIN_SUFFIX)
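For illustration, the shared Postgres ConfigMap would look roughly like this — a sketch, not the canonical file; the DB_HOST value matches the connection string used in the VaultDynamicSecret template, and the DB_ENGINE value is an assumption:

```yaml
# Illustrative sketch; the canonical file lives in base/lasuite/.
apiVersion: v1
kind: ConfigMap
metadata:
  name: lasuite-postgres
  namespace: lasuite
data:
  DB_HOST: postgres-rw.data.svc.cluster.local
  DB_PORT: "5432"
  DB_ENGINE: django.db.backends.postgresql_psycopg2  # Assumed default; apps may override
```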

### Overlay patches

Patches in overlays are strategic merge patches. Name them `patch-{what}.yaml` or `values-{what}.yaml`:

```yaml
# overlays/local/patch-oidc-verify-ssl.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: lasuite-oidc-provider
  namespace: lasuite
data:
  OIDC_VERIFY_SSL: "false"
```

## What NOT to Do

- Don't create abstract base layers, nested overlays, or "common" directories. The structure is flat: `base/{namespace}/` and `overlays/{env}/`.
- Don't use `configMapGenerator` or `secretGenerator` — ConfigMaps are plain YAML resources, secrets come from VSO.
- Don't add `commonLabels` or `commonAnnotations` in kustomization.yaml — labels are set per-resource.
- Don't use JSON patches when a strategic merge patch works.
- Don't wrap simple services in Helm charts.
- Don't add comments explaining what standard Kubernetes fields do. Only comment non-obvious decisions.
- Don't change Helm chart versions without being asked — version pinning is intentional.
- Don't add monitoring/alerting rules unless asked — monitoring lives in `base/monitoring/`.
- Don't split a single component across multiple kustomization.yaml directories.
43
base/build/buildkitd-deployment.yaml
Normal file
@@ -0,0 +1,43 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: buildkitd
  namespace: build
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: buildkitd
  template:
    metadata:
      labels:
        app: buildkitd
    spec:
      # Use host network so buildkitd can push to src.DOMAIN_SUFFIX (Gitea registry
      # via Pingora) without DNS resolution issues. The registry runs on the same
      # node, so host networking routes traffic back to localhost directly.
      hostNetwork: true
      dnsPolicy: None
      dnsConfig:
        nameservers:
          - 8.8.8.8
          - 1.1.1.1
      containers:
        - name: buildkitd
          image: moby/buildkit:v0.28.0
          args:
            - --addr
            - tcp://0.0.0.0:1234
          ports:
            - containerPort: 1234
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: "500m"
              memory: "1Gi"
            limits:
              cpu: "4"
              memory: "8Gi"
11
base/build/buildkitd-service.yaml
Normal file
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: buildkitd
  namespace: build
spec:
  selector:
    app: buildkitd
  ports:
    - port: 1234
      targetPort: 1234
7
base/build/kustomization.yaml
Normal file
@@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - namespace.yaml
  - buildkitd-deployment.yaml
  - buildkitd-service.yaml
4
base/build/namespace.yaml
Normal file
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: build
@@ -24,7 +24,7 @@ spec:
  allowStaticCreds: true
  refreshAfter: 5m
  rolloutRestartTargets:
    - kind: StatefulSet
    - kind: Deployment
      name: gitea
  destination:
    name: gitea-db-credentials
@@ -47,6 +47,9 @@ spec:
  type: kv-v2
  path: seaweedfs
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: gitea
  destination:
    name: gitea-s3-credentials
    create: true
@@ -70,6 +73,9 @@ spec:
  type: kv-v2
  path: gitea
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: gitea
  destination:
    name: gitea-admin-credentials
    create: true

@@ -5,6 +5,7 @@ metadata:
  namespace: ingress
  labels:
    app: pingora
    release: kube-prometheus-stack
spec:
  selector:
    matchLabels:
54
base/lasuite/collabora-deployment.yaml
Normal file
@@ -0,0 +1,54 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: collabora
  namespace: lasuite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: collabora
  template:
    metadata:
      labels:
        app: collabora
    spec:
      containers:
        - name: collabora
          image: collabora/code:latest
          ports:
            - containerPort: 9980
          env:
            # Regex of allowed WOPI host origins (Drive's public URL). Escape the dot.
            - name: aliasgroup1
              value: "https://drive\\.DOMAIN_SUFFIX:443"
            # Public hostname — Collabora uses this in self-referencing URLs.
            - name: server_name
              value: "docs.DOMAIN_SUFFIX"
            # TLS is terminated at Pingora; disable Collabora's built-in TLS.
            - name: extra_params
              value: "--o:ssl.enable=false --o:ssl.termination=true"
            - name: dictionaries
              value: "en_US fr_FR"
            - name: username
              valueFrom:
                secretKeyRef:
                  name: collabora-credentials
                  key: username
            - name: password
              valueFrom:
                secretKeyRef:
                  name: collabora-credentials
                  key: password
          securityContext:
            capabilities:
              add:
                - SYS_CHROOT
                - SYS_ADMIN
          resources:
            limits:
              memory: 1Gi
              cpu: 1000m
            requests:
              memory: 512Mi
              cpu: 100m
11
base/lasuite/collabora-service.yaml
Normal file
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: collabora
  namespace: lasuite
spec:
  selector:
    app: collabora
  ports:
    - port: 9980
      targetPort: 9980
@@ -1,7 +1,4 @@
# nginx config for docs-frontend that injects the brand theme CSS at serve time.
# sub_filter injects the theme.css link before </head> so Cunningham CSS variables
# are overridden at runtime without rebuilding the app.
# gzip must be off for sub_filter to work on HTML responses.
# nginx config for docs-frontend.
apiVersion: v1
kind: ConfigMap
metadata:
@@ -17,9 +14,9 @@ data:
    root /app;

    gzip off;
    sub_filter '</head>' '<link rel="preconnect" href="https://fonts.googleapis.com"><link rel="preconnect" href="https://fonts.gstatic.com" crossorigin><link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Ysabeau+Variable:ital,wght@0,100..900;1,100..900&display=swap"><link rel="stylesheet" href="https://integration.DOMAIN_SUFFIX/api/v2/theme.css"></head>';
    sub_filter_once off;
    sub_filter_types text/html;
    sub_filter_types text/html application/javascript;
    sub_filter '</head>' '<link rel="stylesheet" href="https://integration.DOMAIN_SUFFIX/api/v2/theme.css"></head>';

    location / {
      try_files $uri index.html $uri/index.html =404;
@@ -155,6 +155,7 @@ backend:
      height: "auto"
      alt: ""
    withTitle: true
  css_url: "https://integration.DOMAIN_SUFFIX/api/v2/theme.css"
  waffle:
    apiUrl: "https://integration.DOMAIN_SUFFIX/api/v2/services.json"
    widgetPath: "https://integration.DOMAIN_SUFFIX/api/v2/lagaufre.js"
81
base/lasuite/drive-frontend-nginx-configmap.yaml
Normal file
@@ -0,0 +1,81 @@
# nginx config for drive-frontend.
#
# /media/ requests are validated via auth_request to the Drive backend before
# being proxied to SeaweedFS. This avoids exposing S3 credentials to the browser
# while still serving private files through the CDN path.
apiVersion: v1
kind: ConfigMap
metadata:
  name: drive-frontend-nginx-conf
  namespace: lasuite
data:
  default.conf: |
    server {
      listen 8080;
      server_name localhost;
      server_tokens off;

      root /usr/share/nginx/html;

      # Rewrite hardcoded upstream La Gaufre widget and services API URLs to our
      # integration service. sub_filter doesn't work on gzip-compressed responses.
      gzip off;
      sub_filter_once off;
      sub_filter_types text/html application/javascript;
      sub_filter 'https://static.suite.anct.gouv.fr/widgets/lagaufre.js'
                 'https://integration.DOMAIN_SUFFIX/api/v2/lagaufre.js';
      sub_filter 'https://lasuite.numerique.gouv.fr/api/services'
                 'https://integration.DOMAIN_SUFFIX/api/v2/services.json';
      sub_filter 'https://operateurs.suite.anct.gouv.fr/api/v1.0/lagaufre/services/?operator=9f5624fc-ef99-4d10-ae3f-403a81eb16ef&siret=21870030000013'
                 'https://integration.DOMAIN_SUFFIX/api/v2/services.json';
      sub_filter '</head>' '<link rel="stylesheet" href="https://integration.DOMAIN_SUFFIX/api/v2/theme.css"></head>';

      # Public file viewer — Next.js static export generates a literal [id].html
      # template for this dynamic route. Serve it for any file UUID so the
      # client-side router hydrates the correct FilePage component without auth.
      location ~ ^/explorer/items/files/[^/]+$ {
        try_files $uri $uri.html /explorer/items/files/[id].html;
      }

      # Item detail routes (folders, workspaces, shared items).
      location ~ ^/explorer/items/[^/]+$ {
        try_files $uri $uri.html /explorer/items/[id].html;
      }

      location / {
        # Try the exact path, then path + .html (Next.js static export generates
        # e.g. explorer/items/my-files.html), then fall back to index.html for
        # client-side routes that have no pre-rendered file.
        try_files $uri $uri.html /index.html;
      }

      # Protected media: auth via Drive backend, then proxy to S3 with signed headers.
      # media-auth returns S3 SigV4 Authorization/X-Amz-Date headers; nginx captures
      # and forwards them so SeaweedFS can verify the request.
      location /media/ {
        auth_request /internal/media-auth;
        auth_request_set $auth_header $upstream_http_authorization;
        auth_request_set $amz_date $upstream_http_x_amz_date;
        auth_request_set $amz_content $upstream_http_x_amz_content_sha256;
        proxy_set_header Authorization $auth_header;
        proxy_set_header X-Amz-Date $amz_date;
        proxy_set_header X-Amz-Content-Sha256 $amz_content;
        proxy_pass http://seaweedfs-filer.storage.svc.cluster.local:8333/sunbeam-drive/;
      }

      # Internal subrequest: Django checks session and item access, returns S3 auth headers.
      location = /internal/media-auth {
        internal;
        proxy_pass http://drive-backend.lasuite.svc.cluster.local:80/api/v1.0/items/media-auth/;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header Host drive.sunbeam.pt;
        proxy_set_header X-Original-URL $scheme://$host$request_uri;
      }

      error_page 500 502 503 504 @blank_error;
      location @blank_error {
        return 200 '';
        add_header Content-Type text/html;
      }
    }
200
base/lasuite/drive-values.yaml
Normal file
@@ -0,0 +1,200 @@
# La Suite Numérique — Drive (drive chart).
# Env vars use the chart's dict-based envVars schema:
#   string value → rendered as env.value
#   map value    → rendered as env.valueFrom (configMapKeyRef / secretKeyRef)
# DOMAIN_SUFFIX is substituted by sed at deploy time.
#
# Required secrets (created by seed script):
#   oidc-drive               — CLIENT_ID, CLIENT_SECRET (created by Hydra Maester)
#   drive-db-credentials     — password (VaultDynamicSecret, DB engine)
#   drive-django-secret     — DJANGO_SECRET_KEY (VaultStaticSecret)
#   seaweedfs-s3-credentials — S3_ACCESS_KEY, S3_SECRET_KEY (shared)

fullnameOverride: drive

backend:
  createsuperuser:
    # No superuser — users authenticate via OIDC.
    # The chart always renders this Job; override command so it exits 0.
    command: ["true"]

  envVars: &backendEnvVars
    # ── Database ──────────────────────────────────────────────────────────────
    DB_NAME: drive_db
    DB_USER: drive
    DB_HOST:
      configMapKeyRef:
        name: lasuite-postgres
        key: DB_HOST
    DB_PORT:
      configMapKeyRef:
        name: lasuite-postgres
        key: DB_PORT
    # Drive uses the psycopg3 backend (no _psycopg2 suffix).
    DB_ENGINE: django.db.backends.postgresql
    DB_PASSWORD:
      secretKeyRef:
        name: drive-db-credentials
        key: password

    # ── Redis / Celery ────────────────────────────────────────────────────────
    REDIS_URL:
      configMapKeyRef:
        name: lasuite-valkey
        key: REDIS_URL
    # Drive uses DJANGO_CELERY_BROKER_URL (not CELERY_BROKER_URL).
    DJANGO_CELERY_BROKER_URL:
      configMapKeyRef:
        name: lasuite-valkey
        key: CELERY_BROKER_URL

    # ── S3 (file storage) ─────────────────────────────────────────────────────
    AWS_STORAGE_BUCKET_NAME: sunbeam-drive
    AWS_S3_ENDPOINT_URL:
      configMapKeyRef:
        name: lasuite-s3
        key: AWS_S3_ENDPOINT_URL
    AWS_S3_REGION_NAME:
      configMapKeyRef:
        name: lasuite-s3
        key: AWS_S3_REGION_NAME
    AWS_DEFAULT_ACL:
      configMapKeyRef:
        name: lasuite-s3
        key: AWS_DEFAULT_ACL
    # Drive uses AWS_S3_ACCESS_KEY_ID / AWS_S3_SECRET_ACCESS_KEY (with _S3_ prefix).
    AWS_S3_ACCESS_KEY_ID:
      secretKeyRef:
        name: seaweedfs-s3-credentials
        key: S3_ACCESS_KEY
    AWS_S3_SECRET_ACCESS_KEY:
      secretKeyRef:
        name: seaweedfs-s3-credentials
        key: S3_SECRET_KEY
    # Base URL for media file references so the nginx auth proxy receives full paths.
    MEDIA_BASE_URL: https://drive.DOMAIN_SUFFIX

    # ── OIDC (Hydra) ──────────────────────────────────────────────────────────
    OIDC_RP_CLIENT_ID:
      secretKeyRef:
        name: oidc-drive
        key: CLIENT_ID
    OIDC_RP_CLIENT_SECRET:
      secretKeyRef:
        name: oidc-drive
        key: CLIENT_SECRET
    OIDC_RP_SIGN_ALGO:
      configMapKeyRef:
        name: lasuite-oidc-provider
        key: OIDC_RP_SIGN_ALGO
    OIDC_RP_SCOPES:
      configMapKeyRef:
        name: lasuite-oidc-provider
        key: OIDC_RP_SCOPES
    OIDC_OP_JWKS_ENDPOINT:
      configMapKeyRef:
        name: lasuite-oidc-provider
        key: OIDC_OP_JWKS_ENDPOINT
    OIDC_OP_AUTHORIZATION_ENDPOINT:
      configMapKeyRef:
        name: lasuite-oidc-provider
        key: OIDC_OP_AUTHORIZATION_ENDPOINT
    OIDC_OP_TOKEN_ENDPOINT:
      configMapKeyRef:
        name: lasuite-oidc-provider
        key: OIDC_OP_TOKEN_ENDPOINT
    OIDC_OP_USER_ENDPOINT:
      configMapKeyRef:
        name: lasuite-oidc-provider
        key: OIDC_OP_USER_ENDPOINT
    OIDC_OP_LOGOUT_ENDPOINT:
      configMapKeyRef:
        name: lasuite-oidc-provider
        key: OIDC_OP_LOGOUT_ENDPOINT
    OIDC_VERIFY_SSL:
      configMapKeyRef:
        name: lasuite-oidc-provider
        key: OIDC_VERIFY_SSL

    # ── Resource Server (Drive as OAuth2 RS for Messages integration) ─────────
    OIDC_RESOURCE_SERVER_ENABLED: "True"
    # Hydra issuer URL — must match the `iss` claim in introspection responses.
    OIDC_OP_URL: https://auth.DOMAIN_SUFFIX/
    # Hydra token introspection endpoint (admin port — no client auth required).
    OIDC_OP_INTROSPECTION_ENDPOINT: http://hydra-admin.ory.svc.cluster.local:4445/admin/oauth2/introspect
    # Drive authenticates to Hydra introspection using its own OIDC client creds.
    OIDC_RS_CLIENT_ID:
      secretKeyRef:
        name: oidc-drive
        key: CLIENT_ID
    OIDC_RS_CLIENT_SECRET:
      secretKeyRef:
        name: oidc-drive
        key: CLIENT_SECRET
    # Only accept tokens issued to the messages OAuth2 client (ListValue, comma-separated).
    OIDC_RS_ALLOWED_AUDIENCES:
      secretKeyRef:
        name: oidc-messages
        key: CLIENT_ID

    # ── Django ────────────────────────────────────────────────────────────────
    DJANGO_SECRET_KEY:
      secretKeyRef:
        name: drive-django-secret
        key: DJANGO_SECRET_KEY
    DJANGO_CONFIGURATION: Production
    ALLOWED_HOSTS: drive.DOMAIN_SUFFIX
    DJANGO_ALLOWED_HOSTS: drive.DOMAIN_SUFFIX
    DJANGO_CSRF_TRUSTED_ORIGINS: https://drive.DOMAIN_SUFFIX
    LOGIN_REDIRECT_URL: /
    LOGOUT_REDIRECT_URL: /
    SESSION_COOKIE_AGE: "3600"
    # Session cache TTL must match SESSION_COOKIE_AGE; the default is 30s, which
    # causes sessions to expire in Valkey while the cookie remains valid.
    CACHES_SESSION_TIMEOUT: "3600"
    # Silent login disabled: the callback redirects back to the returnTo URL
    # (not LOGIN_REDIRECT_URL) on login_required, causing an infinite reload loop
    # when the user has no Hydra session. UserProfile shows a Login button instead.
    FRONTEND_SILENT_LOGIN_ENABLED: "false"
    # Redirect unauthenticated visitors at / straight to OIDC login instead of
    # showing the La Suite marketing landing page. returnTo brings them to
    # their files after successful auth.
    FRONTEND_EXTERNAL_HOME_URL: "https://drive.DOMAIN_SUFFIX/api/v1.0/authenticate/?returnTo=https%3A%2F%2Fdrive.DOMAIN_SUFFIX%2Fexplorer%2Fitems%2Fmy-files"

    # Allow Messages to call the Drive SDK relay cross-origin.
    SDK_CORS_ALLOWED_ORIGINS: "https://mail.DOMAIN_SUFFIX"
    CORS_ALLOWED_ORIGINS: "https://mail.DOMAIN_SUFFIX"

    # Allow all file types — self-hosted instance, no need to restrict uploads.
    RESTRICT_UPLOAD_FILE_TYPE: "False"

    # Inject Sunbeam theme CSS from the integration service.
    FRONTEND_CSS_URL: "https://integration.DOMAIN_SUFFIX/api/v2/theme.css"

    # ── WOPI / Collabora ──────────────────────────────────────────────────────
    # Comma-separated list of enabled WOPI client names.
    WOPI_CLIENTS: collabora
    # Discovery XML endpoint — Collabora registers supported MIME types here.
    WOPI_COLLABORA_DISCOVERY_URL: http://collabora.lasuite.svc.cluster.local:9980/hosting/discovery
    # Base URL Drive uses when building wopi_src callback URLs for Collabora.
    WOPI_SRC_BASE_URL: https://drive.DOMAIN_SUFFIX

themeCustomization:
  enabled: true
  file_content:
    css_url: "https://integration.DOMAIN_SUFFIX/api/v2/theme.css"
    waffle:
      apiUrl: "https://integration.DOMAIN_SUFFIX/api/v2/services.json"
      widgetPath: "https://integration.DOMAIN_SUFFIX/api/v2/lagaufre.js"
      label: "O Estúdio"
      closeLabel: "Fechar"
      newWindowLabelSuffix: " · nova janela"

ingress:
  enabled: false

ingressAdmin:
  enabled: false

ingressMedia:
  enabled: false
@@ -6,7 +6,7 @@ metadata:
data:
  config.toml: |
    [drive]
    base_url = "http://drive.lasuite.svc.cluster.local:8000"
    base_url = "http://drive-backend.lasuite.svc.cluster.local:80"
    workspace = "Game Assets"
    oidc_client_id = "hive"
    oidc_token_url = "http://hydra.ory.svc.cluster.local:4444/oauth2/token"
@@ -20,20 +20,15 @@ data:
  services.json: |
    {
      "services": [
        {
          "name": "Docs",
          "url": "https://docs.DOMAIN_SUFFIX",
          "logo": "https://integration.DOMAIN_SUFFIX/logos/docs.svg?v=2"
        },
        {
          "name": "Reuniões",
          "url": "https://meet.DOMAIN_SUFFIX",
          "logo": "https://integration.DOMAIN_SUFFIX/logos/visio.svg?v=2"
        },
        {
          "name": "Humans",
          "url": "https://people.DOMAIN_SUFFIX",
          "logo": "https://integration.DOMAIN_SUFFIX/logos/people.svg?v=2"
          "name": "Drive",
          "url": "https://drive.DOMAIN_SUFFIX",
          "logo": "https://integration.DOMAIN_SUFFIX/logos/drive.svg?v=1"
        }
      ]
    }
@@ -15,7 +15,8 @@ resources:
  - vault-secrets.yaml
  - integration-deployment.yaml
  - people-frontend-nginx-configmap.yaml
  - docs-frontend-nginx-configmap.yaml
  - collabora-deployment.yaml
  - collabora-service.yaml
  - meet-config.yaml
  - meet-backend-deployment.yaml
  - meet-backend-service.yaml
@@ -23,12 +24,29 @@ resources:
  - meet-frontend-nginx-configmap.yaml
  - meet-frontend-deployment.yaml
  - meet-frontend-service.yaml
  - drive-frontend-nginx-configmap.yaml
  - messages-config.yaml
  - messages-backend-deployment.yaml
  - messages-backend-service.yaml
  - messages-frontend-theme-configmap.yaml
  - messages-frontend-deployment.yaml
  - messages-frontend-service.yaml
  - messages-worker-deployment.yaml
  - messages-mta-in-deployment.yaml
  - messages-mta-in-service.yaml
  - messages-mta-out-deployment.yaml
  - messages-mta-out-service.yaml
  - messages-mpa-dkim-config.yaml
  - messages-mpa-deployment.yaml
  - messages-mpa-service.yaml
  - messages-socks-proxy-deployment.yaml
  - messages-socks-proxy-service.yaml

patches:
  # Rewrite hardcoded production integration URL + inject theme CSS in people-frontend
  - path: patch-people-frontend-nginx.yaml
  # Inject theme CSS in docs-frontend
  - path: patch-docs-frontend-nginx.yaml
  # Mount media auth proxy nginx config in drive-frontend
  - path: patch-drive-frontend-nginx.yaml

# La Suite Numérique Helm charts.
# Charts with a published Helm repo use helmCharts below.
@@ -42,10 +60,10 @@ helmCharts:
    namespace: lasuite
    valuesFile: people-values.yaml

  # helm repo add docs https://suitenumerique.github.io/docs/
  - name: docs
    repo: https://suitenumerique.github.io/docs/
    version: "4.5.0"
    releaseName: docs
  # helm repo add drive https://suitenumerique.github.io/drive/
  - name: drive
    repo: https://suitenumerique.github.io/drive/
    version: "0.14.0"
    releaseName: drive
    namespace: lasuite
    valuesFile: docs-values.yaml
    valuesFile: drive-values.yaml
183
base/lasuite/messages-backend-deployment.yaml
Normal file
@@ -0,0 +1,183 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: messages-backend
  namespace: lasuite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: messages-backend
  template:
    metadata:
      labels:
        app: messages-backend
    spec:
      initContainers:
        - name: migrate
          image: messages-backend
          command: ["python", "manage.py", "migrate", "--no-input"]
          envFrom:
            - configMapRef:
                name: messages-config
            - configMapRef:
                name: lasuite-postgres
            - configMapRef:
                name: lasuite-valkey
            - configMapRef:
                name: lasuite-s3
            - configMapRef:
                name: lasuite-oidc-provider
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: messages-db-credentials
                  key: password
            - name: DJANGO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: messages-django-secret
                  key: DJANGO_SECRET_KEY
            - name: SALT_KEY
              valueFrom:
                secretKeyRef:
                  name: messages-django-secret
                  key: SALT_KEY
            - name: MDA_API_SECRET
              valueFrom:
                secretKeyRef:
                  name: messages-django-secret
                  key: MDA_API_SECRET
            - name: OIDC_RP_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: oidc-messages
                  key: CLIENT_ID
            - name: OIDC_RP_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: oidc-messages
                  key: CLIENT_SECRET
            - name: AWS_S3_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: seaweedfs-s3-credentials
                  key: S3_ACCESS_KEY
            - name: AWS_S3_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: seaweedfs-s3-credentials
                  key: S3_SECRET_KEY
            - name: RSPAMD_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: messages-mpa-credentials
                  key: RSPAMD_password
            - name: OIDC_STORE_REFRESH_TOKEN_KEY
              valueFrom:
                secretKeyRef:
                  name: messages-django-secret
                  key: OIDC_STORE_REFRESH_TOKEN_KEY
            - name: OIDC_RP_SCOPES
              value: "openid email profile offline_access"
          resources:
limits:
|
||||
memory: 1Gi
|
||||
cpu: 500m
|
||||
requests:
|
||||
memory: 256Mi
|
||||
cpu: 100m
|
||||
containers:
|
||||
- name: messages-backend
|
||||
image: messages-backend
|
||||
command:
|
||||
- gunicorn
|
||||
- -c
|
||||
- /app/gunicorn.conf.py
|
||||
- messages.wsgi:application
|
||||
ports:
|
||||
- containerPort: 8000
|
||||
envFrom:
|
||||
- configMapRef:
|
||||
name: messages-config
|
||||
- configMapRef:
|
||||
name: lasuite-postgres
|
||||
- configMapRef:
|
||||
name: lasuite-valkey
|
||||
- configMapRef:
|
||||
name: lasuite-s3
|
||||
- configMapRef:
|
||||
name: lasuite-oidc-provider
|
||||
env:
|
||||
- name: DB_PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: messages-db-credentials
|
||||
key: password
|
||||
- name: DJANGO_SECRET_KEY
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: messages-django-secret
|
||||
key: DJANGO_SECRET_KEY
|
||||
- name: SALT_KEY
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: messages-django-secret
|
||||
key: SALT_KEY
|
||||
- name: MDA_API_SECRET
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: messages-django-secret
|
||||
key: MDA_API_SECRET
|
||||
- name: OIDC_RP_CLIENT_ID
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: oidc-messages
|
||||
key: CLIENT_ID
|
||||
- name: OIDC_RP_CLIENT_SECRET
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: oidc-messages
|
||||
key: CLIENT_SECRET
|
||||
- name: AWS_S3_ACCESS_KEY_ID
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: seaweedfs-s3-credentials
|
||||
key: S3_ACCESS_KEY
|
||||
- name: AWS_S3_SECRET_ACCESS_KEY
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: seaweedfs-s3-credentials
|
||||
key: S3_SECRET_KEY
|
||||
- name: RSPAMD_PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: messages-mpa-credentials
|
||||
key: RSPAMD_password
|
||||
- name: OIDC_STORE_REFRESH_TOKEN_KEY
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: messages-django-secret
|
||||
key: OIDC_STORE_REFRESH_TOKEN_KEY
|
||||
- name: OIDC_RP_SCOPES
|
||||
value: "openid email profile offline_access"
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /__heartbeat__/
|
||||
port: 8000
|
||||
initialDelaySeconds: 15
|
||||
periodSeconds: 20
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /__heartbeat__/
|
||||
port: 8000
|
||||
initialDelaySeconds: 10
|
||||
periodSeconds: 10
|
||||
resources:
|
||||
limits:
|
||||
memory: 1Gi
|
||||
cpu: 500m
|
||||
requests:
|
||||
memory: 256Mi
|
||||
cpu: 100m
|
||||
11
base/lasuite/messages-backend-service.yaml
Normal file
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: messages-backend
  namespace: lasuite
spec:
  selector:
    app: messages-backend
  ports:
    - port: 80
      targetPort: 8000
45
base/lasuite/messages-config.yaml
Normal file
@@ -0,0 +1,45 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: messages-config
  namespace: lasuite
data:
  DJANGO_CONFIGURATION: Production
  DJANGO_SETTINGS_MODULE: messages.settings
  DJANGO_ALLOWED_HOSTS: mail.DOMAIN_SUFFIX,messages-backend.lasuite.svc.cluster.local
  ALLOWED_HOSTS: mail.DOMAIN_SUFFIX,messages-backend.lasuite.svc.cluster.local
  DJANGO_CSRF_TRUSTED_ORIGINS: https://mail.DOMAIN_SUFFIX
  DB_NAME: messages_db
  DB_USER: messages
  OPENSEARCH_URL: http://opensearch.data.svc.cluster.local:9200
  MDA_API_BASE_URL: http://messages-backend.lasuite.svc.cluster.local:80/api/v1.0/
  MYHOSTNAME: mail.DOMAIN_SUFFIX
  # rspamd URL (auth token injected separately from messages-mpa-credentials secret)
  SPAM_RSPAMD_URL: http://messages-mpa.lasuite.svc.cluster.local:8010/_api
  MESSAGES_FRONTEND_BACKEND_SERVER: messages-backend.lasuite.svc.cluster.local:80
  STORAGE_MESSAGE_IMPORTS_BUCKET_NAME: sunbeam-messages-imports
  STORAGE_MESSAGE_IMPORTS_ENDPOINT_URL: http://seaweedfs-filer.storage.svc.cluster.local:8333
  AWS_STORAGE_BUCKET_NAME: sunbeam-messages
  IDENTITY_PROVIDER: oidc
  FRONTEND_THEME: default
  DRIVE_BASE_URL: https://drive.DOMAIN_SUFFIX
  LOGIN_REDIRECT_URL: https://mail.DOMAIN_SUFFIX
  LOGOUT_REDIRECT_URL: https://mail.DOMAIN_SUFFIX
  OIDC_REDIRECT_ALLOWED_HOSTS: '["https://auth.DOMAIN_SUFFIX"]'
  MTA_OUT_MODE: direct
  # Create user accounts on first OIDC login (required — no pre-provisioning)
  OIDC_CREATE_USER: "True"
  # Redirect to home on auth failure (avoids HttpResponseRedirect(None) → /callback/None 404)
  LOGIN_REDIRECT_URL_FAILURE: https://mail.DOMAIN_SUFFIX
  # Store OIDC tokens in session so the Drive integration can proxy requests on behalf of the user.
  OIDC_STORE_ACCESS_TOKEN: "True"
  OIDC_STORE_REFRESH_TOKEN: "True"
  # Session lives 7 days — long enough to survive overnight/weekend without re-auth.
  # Default is 43200 (12h) which forces a login after a browser restart.
  SESSION_COOKIE_AGE: "604800"
  # Renew the id token 60 s before it expires (access_token TTL = 1h).
  # Without this the default falls back to SESSION_COOKIE_AGE (7 days), which means
  # every request sees the 1h token as "expiring within 7 days" and triggers a
  # prompt=none renewal on every page load — causing repeated auth loops.
  OIDC_RENEW_ID_TOKEN_EXPIRY_SECONDS: "60"
  # offline_access scope is set directly in the deployment env (overrides lasuite-oidc-provider envFrom).
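The DOMAIN_SUFFIX placeholders in the ConfigMap above are not resolved by Kubernetes; they are replaced at deploy time when `sunbeam apply` pipes the rendered manifests through `sed` (as shown in the repo's deploy instructions). A minimal sketch of that substitution on one line of this ConfigMap, assuming the local suffix `192.168.5.2.sslip.io`:

```shell
# Hypothetical stand-in for one rendered line of messages-config; the real
# pipeline is `kustomize build | sed | kubectl apply --server-side`.
rendered='DJANGO_ALLOWED_HOSTS: mail.DOMAIN_SUFFIX,messages-backend.lasuite.svc.cluster.local'

# Same substitution the deploy pipeline applies to every manifest.
echo "$rendered" | sed 's/DOMAIN_SUFFIX/192.168.5.2.sslip.io/g'
# → DJANGO_ALLOWED_HOSTS: mail.192.168.5.2.sslip.io,messages-backend.lasuite.svc.cluster.local
```

Note that in-cluster hostnames (`*.svc.cluster.local`) contain no placeholder and pass through unchanged.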
53
base/lasuite/messages-frontend-deployment.yaml
Normal file
@@ -0,0 +1,53 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: messages-frontend
  namespace: lasuite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: messages-frontend
  template:
    metadata:
      labels:
        app: messages-frontend
    spec:
      containers:
        - name: messages-frontend
          image: messages-frontend
          ports:
            - containerPort: 8080
          env:
            - name: MESSAGES_FRONTEND_BACKEND_SERVER
              value: messages-backend.lasuite.svc.cluster.local:80
            - name: PORT
              value: "8080"
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 20
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          volumeMounts:
            - name: theme
              mountPath: /app/sunbeam-theme.css
              subPath: sunbeam-theme.css
              readOnly: true
          resources:
            limits:
              memory: 256Mi
              cpu: 250m
            requests:
              memory: 64Mi
              cpu: 50m
      volumes:
        - name: theme
          configMap:
            name: messages-frontend-theme
11
base/lasuite/messages-frontend-service.yaml
Normal file
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: messages-frontend
  namespace: lasuite
spec:
  selector:
    app: messages-frontend
  ports:
    - port: 80
      targetPort: 8080
50
base/lasuite/messages-frontend-theme-configmap.yaml
Normal file
@@ -0,0 +1,50 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: messages-frontend-theme
  namespace: lasuite
data:
  sunbeam-theme.css: |
    /*
     * O Estúdio — runtime brand overrides for messages-frontend.
     * Loaded via <link href="/sunbeam-theme.css"> injected in _document.tsx.
     * Override Cunningham v4 --c--globals--* variables (no rebuild for updates).
     */
    @import url('https://fonts.googleapis.com/css2?family=Ysabeau:ital,wght@0,100..900;1,100..900&display=swap');

    :root {
      --c--globals--font--families--base: 'Ysabeau Variable', Inter, sans-serif;
      --c--globals--font--families--accent: 'Ysabeau Variable', Inter, sans-serif;

      /* Brand — amber/gold palette */
      --c--globals--colors--brand-050: #fffbeb;
      --c--globals--colors--brand-100: #fef3c7;
      --c--globals--colors--brand-150: #fde9a0;
      --c--globals--colors--brand-200: #fde68a;
      --c--globals--colors--brand-250: #fde047;
      --c--globals--colors--brand-300: #fcd34d;
      --c--globals--colors--brand-350: #fbcf3f;
      --c--globals--colors--brand-400: #fbbf24;
      --c--globals--colors--brand-450: #f8b31a;
      --c--globals--colors--brand-500: #f59e0b;
      --c--globals--colors--brand-550: #e8920a;
      --c--globals--colors--brand-600: #d97706;
      --c--globals--colors--brand-650: #c26d05;
      --c--globals--colors--brand-700: #b45309;
      --c--globals--colors--brand-750: #9a4508;
      --c--globals--colors--brand-800: #92400e;
      --c--globals--colors--brand-850: #7c370c;
      --c--globals--colors--brand-900: #78350f;
      --c--globals--colors--brand-950: #451a03;

      /* Logo gradient */
      --c--globals--colors--logo-1: #f59e0b;
      --c--globals--colors--logo-2: #d97706;
      --c--globals--colors--logo-1-light: #f59e0b;
      --c--globals--colors--logo-2-light: #d97706;
      --c--globals--colors--logo-1-dark: #fcd34d;
      --c--globals--colors--logo-2-dark: #fbbf24;

      /* PWA theme color */
      --sunbeam-brand: #f59e0b;
    }
56
base/lasuite/messages-mpa-deployment.yaml
Normal file
@@ -0,0 +1,56 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: messages-mpa
  namespace: lasuite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: messages-mpa
  template:
    metadata:
      labels:
        app: messages-mpa
    spec:
      containers:
        - name: messages-mpa
          image: messages-mpa
          ports:
            - containerPort: 8010
          env:
            - name: RSPAMD_password
              valueFrom:
                secretKeyRef:
                  name: messages-mpa-credentials
                  key: RSPAMD_password
            - name: PORT
              value: "8010"
            - name: REDIS_HOST
              value: valkey.data.svc.cluster.local
            - name: REDIS_PORT
              value: "6379"
          volumeMounts:
            - name: dkim-key
              mountPath: /etc/rspamd/dkim
              readOnly: true
            - name: dkim-signing-conf
              mountPath: /etc/rspamd/local.d
              readOnly: true
          resources:
            limits:
              memory: 768Mi
              cpu: 250m
            requests:
              memory: 256Mi
              cpu: 50m
      volumes:
        - name: dkim-key
          secret:
            secretName: messages-dkim-key
            items:
              - key: dkim-private-key
                path: default.sunbeam.pt.key
        - name: dkim-signing-conf
          configMap:
            name: messages-mpa-rspamd-config
13
base/lasuite/messages-mpa-dkim-config.yaml
Normal file
@@ -0,0 +1,13 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: messages-mpa-rspamd-config
  namespace: lasuite
data:
  dkim_signing.conf: |
    enabled = true;
    selector = "default";
    path = "/etc/rspamd/dkim/$domain.$selector.key";
    sign_authenticated = true;
    sign_local = true;
    use_domain = "header";
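The rspamd config above signs with the key mounted at `/etc/rspamd/dkim/$domain.$selector.key` (here `default.sunbeam.pt.key`, from the `messages-dkim-key` secret). A hedged sketch of generating such a key and deriving the matching DNS TXT value with standard `openssl` — the 2048-bit size is an assumption, the manifests only fix the `default` selector and the `sunbeam.pt` domain:

```shell
# Generate an RSA private key for DKIM signing (key size is an assumption).
openssl genrsa -out default.sunbeam.pt.key 2048

# Derive the public key in the form published as a TXT record at
# default._domainkey.sunbeam.pt (selector "default" per dkim_signing.conf).
pub=$(openssl rsa -in default.sunbeam.pt.key -pubout -outform DER | base64 | tr -d '\n')
echo "v=DKIM1; k=rsa; p=$pub"
```

The private key would then be stored under the `dkim-private-key` field of the `messages` vault path that the `messages-dkim-key` VaultStaticSecret reads from.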
11
base/lasuite/messages-mpa-service.yaml
Normal file
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: messages-mpa
  namespace: lasuite
spec:
  selector:
    app: messages-mpa
  ports:
    - port: 8010
      targetPort: 8010
43
base/lasuite/messages-mta-in-deployment.yaml
Normal file
@@ -0,0 +1,43 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: messages-mta-in
  namespace: lasuite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: messages-mta-in
  template:
    metadata:
      labels:
        app: messages-mta-in
    spec:
      containers:
        - name: messages-mta-in
          image: messages-mta-in
          ports:
            - containerPort: 25
          env:
            - name: MDA_API_BASE_URL
              valueFrom:
                configMapKeyRef:
                  name: messages-config
                  key: MDA_API_BASE_URL
            - name: MDA_API_SECRET
              valueFrom:
                secretKeyRef:
                  name: messages-django-secret
                  key: MDA_API_SECRET
            - name: MAX_INCOMING_EMAIL_SIZE
              value: "30000000"
          securityContext:
            capabilities:
              add: ["NET_BIND_SERVICE"]
          resources:
            limits:
              memory: 256Mi
              cpu: 250m
            requests:
              memory: 64Mi
              cpu: 50m
11
base/lasuite/messages-mta-in-service.yaml
Normal file
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: messages-mta-in
  namespace: lasuite
spec:
  selector:
    app: messages-mta-in
  ports:
    - port: 25
      targetPort: 25
43
base/lasuite/messages-mta-out-deployment.yaml
Normal file
@@ -0,0 +1,43 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: messages-mta-out
  namespace: lasuite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: messages-mta-out
  template:
    metadata:
      labels:
        app: messages-mta-out
    spec:
      containers:
        - name: messages-mta-out
          image: messages-mta-out
          ports:
            - containerPort: 587
          env:
            - name: MYHOSTNAME
              valueFrom:
                configMapKeyRef:
                  name: messages-config
                  key: MYHOSTNAME
            - name: SMTP_USERNAME
              valueFrom:
                secretKeyRef:
                  name: messages-mta-out-credentials
                  key: SMTP_USERNAME
            - name: SMTP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: messages-mta-out-credentials
                  key: SMTP_PASSWORD
          resources:
            limits:
              memory: 256Mi
              cpu: 250m
            requests:
              memory: 64Mi
              cpu: 50m
11
base/lasuite/messages-mta-out-service.yaml
Normal file
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: messages-mta-out
  namespace: lasuite
spec:
  selector:
    app: messages-mta-out
  ports:
    - port: 587
      targetPort: 587
35
base/lasuite/messages-socks-proxy-deployment.yaml
Normal file
@@ -0,0 +1,35 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: messages-socks-proxy
  namespace: lasuite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: messages-socks-proxy
  template:
    metadata:
      labels:
        app: messages-socks-proxy
    spec:
      containers:
        - name: messages-socks-proxy
          image: messages-socks-proxy
          ports:
            - containerPort: 1080
          env:
            - name: PROXY_USERS
              valueFrom:
                secretKeyRef:
                  name: messages-socks-credentials
                  key: PROXY_USERS
            - name: PROXY_SOURCE_IP_WHITELIST
              value: 10.0.0.0/8
          resources:
            limits:
              memory: 128Mi
              cpu: 100m
            requests:
              memory: 32Mi
              cpu: 25m
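`PROXY_SOURCE_IP_WHITELIST: 10.0.0.0/8` restricts the SOCKS proxy to clients from the 10.0.0.0/8 range (presumably the cluster's pod network; k3s pod IPs are typically in 10.42.0.0/16). A /8 mask only constrains the first octet, so the membership test it implies is trivial; a minimal illustrative sketch (the proxy performs this check internally — `in_whitelist` is a hypothetical helper, not part of the image):

```shell
# Membership check for 10.0.0.0/8: the first dotted octet must be exactly "10".
in_whitelist() {
  case "$1" in
    10.*) echo allowed ;;
    *)    echo denied  ;;
  esac
}

in_whitelist 10.42.0.17   # → allowed (typical k3s pod IP)
in_whitelist 192.168.5.2  # → denied
```

Note the `case` pattern `10.*` requires the literal dot, so addresses like `100.64.0.1` are correctly denied.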
11
base/lasuite/messages-socks-proxy-service.yaml
Normal file
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: messages-socks-proxy
  namespace: lasuite
spec:
  selector:
    app: messages-socks-proxy
  ports:
    - port: 1080
      targetPort: 1080
90
base/lasuite/messages-worker-deployment.yaml
Normal file
@@ -0,0 +1,90 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: messages-worker
  namespace: lasuite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: messages-worker
  template:
    metadata:
      labels:
        app: messages-worker
    spec:
      containers:
        - name: messages-worker
          image: messages-backend
          command: ["python", "worker.py", "--loglevel=INFO", "--concurrency=3"]
          envFrom:
            - configMapRef:
                name: messages-config
            - configMapRef:
                name: lasuite-postgres
            - configMapRef:
                name: lasuite-valkey
            - configMapRef:
                name: lasuite-s3
            - configMapRef:
                name: lasuite-oidc-provider
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: messages-db-credentials
                  key: password
            - name: DJANGO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: messages-django-secret
                  key: DJANGO_SECRET_KEY
            - name: SALT_KEY
              valueFrom:
                secretKeyRef:
                  name: messages-django-secret
                  key: SALT_KEY
            - name: MDA_API_SECRET
              valueFrom:
                secretKeyRef:
                  name: messages-django-secret
                  key: MDA_API_SECRET
            - name: OIDC_RP_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: oidc-messages
                  key: CLIENT_ID
            - name: OIDC_RP_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: oidc-messages
                  key: CLIENT_SECRET
            - name: AWS_S3_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: seaweedfs-s3-credentials
                  key: S3_ACCESS_KEY
            - name: AWS_S3_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: seaweedfs-s3-credentials
                  key: S3_SECRET_KEY
            - name: RSPAMD_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: messages-mpa-credentials
                  key: RSPAMD_password
            - name: OIDC_STORE_REFRESH_TOKEN_KEY
              valueFrom:
                secretKeyRef:
                  name: messages-django-secret
                  key: OIDC_STORE_REFRESH_TOKEN_KEY
            - name: OIDC_RP_SCOPES
              value: "openid email profile offline_access"
          resources:
            limits:
              memory: 1Gi
              cpu: "1"
            requests:
              memory: 256Mi
              cpu: 100m
@@ -41,7 +41,9 @@ spec:
    - code
  scope: openid email profile
  redirectUris:
    - https://drive.DOMAIN_SUFFIX/oidc/callback/
    - https://drive.DOMAIN_SUFFIX/api/v1.0/callback/
  postLogoutRedirectUris:
    - https://drive.DOMAIN_SUFFIX/api/v1.0/logout-callback/
  tokenEndpointAuthMethod: client_secret_post
  secretName: oidc-drive
  skipConsent: true
@@ -68,25 +70,8 @@ spec:
  secretName: oidc-meet
  skipConsent: true
---
# ── Conversations (chat) ──────────────────────────────────────────────────────
apiVersion: hydra.ory.sh/v1alpha1
kind: OAuth2Client
metadata:
  name: conversations
  namespace: lasuite
spec:
  clientName: Chat
  grantTypes:
    - authorization_code
    - refresh_token
  responseTypes:
    - code
  scope: openid email profile
  redirectUris:
    - https://chat.DOMAIN_SUFFIX/oidc/callback/
  tokenEndpointAuthMethod: client_secret_post
  secretName: oidc-conversations
  skipConsent: true
# ── Conversations (chat) — replaced by Tuwunel in matrix namespace ───────────
# OAuth2Client for tuwunel is in base/matrix/hydra-oauth2client.yaml
---
# ── Messages (mail) ───────────────────────────────────────────────────────────
apiVersion: hydra.ory.sh/v1alpha1
@@ -101,9 +86,11 @@ spec:
    - refresh_token
  responseTypes:
    - code
  scope: openid email profile
  scope: openid email profile offline_access
  redirectUris:
    - https://mail.DOMAIN_SUFFIX/oidc/callback/
    - https://mail.DOMAIN_SUFFIX/api/v1.0/callback/
  postLogoutRedirectUris:
    - https://mail.DOMAIN_SUFFIX/api/v1.0/logout-callback/
  tokenEndpointAuthMethod: client_secret_post
  secretName: oidc-messages
  skipConsent: true

20
base/lasuite/patch-drive-frontend-nginx.yaml
Normal file
@@ -0,0 +1,20 @@
# Patch: mount the nginx ConfigMap into drive-frontend to enable the media
# auth_request proxy (validates Drive session before serving S3 files).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drive-frontend
  namespace: lasuite
spec:
  template:
    spec:
      containers:
        - name: drive
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: default.conf
      volumes:
        - name: nginx-conf
          configMap:
            name: drive-frontend-nginx-conf
@@ -30,10 +30,16 @@ spec:
              sunbeam-conversations \
              sunbeam-people \
              sunbeam-git-lfs \
              sunbeam-game-assets; do
              sunbeam-game-assets \
              sunbeam-ml-models; do
                mc mb --ignore-existing "weed/$bucket"
                echo "Ensured bucket: $bucket"
              done

              # Enable object versioning on buckets that require it.
              # Drive's WOPI GetFile response includes X-WOPI-ItemVersion from S3 VersionId.
              mc versioning enable weed/sunbeam-drive
              echo "Versioning enabled: sunbeam-drive"
          envFrom:
            - secretRef:
                name: seaweedfs-s3-credentials

@@ -22,6 +22,33 @@ spec:
|
||||
type: kv-v2
|
||||
path: seaweedfs
|
||||
refreshAfter: 30s
|
||||
rolloutRestartTargets:
|
||||
- kind: Deployment
|
||||
name: hive
|
||||
- kind: Deployment
|
||||
name: people-backend
|
||||
- kind: Deployment
|
||||
name: people-celery-worker
|
||||
- kind: Deployment
|
||||
name: people-celery-beat
|
||||
- kind: Deployment
|
||||
name: docs-backend
|
||||
- kind: Deployment
|
||||
name: docs-celery-worker
|
||||
- kind: Deployment
|
||||
name: docs-y-provider
|
||||
- kind: Deployment
|
||||
name: drive-backend
|
||||
- kind: Deployment
|
||||
name: drive-backend-celery-default
|
||||
- kind: Deployment
|
||||
name: meet-backend
|
||||
- kind: Deployment
|
||||
name: meet-celery-worker
|
||||
- kind: Deployment
|
||||
name: messages-backend
|
||||
- kind: Deployment
|
||||
name: messages-worker
|
||||
destination:
|
||||
name: seaweedfs-s3-credentials
|
||||
create: true
|
||||
@@ -70,6 +97,9 @@ spec:
|
||||
type: kv-v2
|
||||
path: hive
|
||||
refreshAfter: 30s
|
||||
rolloutRestartTargets:
|
||||
- kind: Deployment
|
||||
name: hive
|
||||
destination:
|
||||
name: hive-oidc
|
||||
create: true
|
||||
@@ -122,6 +152,13 @@ spec:
|
||||
type: kv-v2
|
||||
path: people
|
||||
refreshAfter: 30s
|
||||
rolloutRestartTargets:
|
||||
- kind: Deployment
|
||||
name: people-backend
|
||||
- kind: Deployment
|
||||
name: people-celery-worker
|
||||
- kind: Deployment
|
||||
name: people-celery-beat
|
||||
destination:
|
||||
name: people-django-secret
|
||||
create: true
|
||||
@@ -172,6 +209,13 @@ spec:
|
||||
type: kv-v2
|
||||
path: docs
|
||||
refreshAfter: 30s
|
||||
rolloutRestartTargets:
|
||||
- kind: Deployment
|
||||
name: docs-backend
|
||||
- kind: Deployment
|
||||
name: docs-celery-worker
|
||||
- kind: Deployment
|
||||
name: docs-y-provider
|
||||
destination:
|
||||
name: docs-django-secret
|
||||
create: true
|
||||
@@ -193,6 +237,11 @@ spec:
|
||||
type: kv-v2
|
||||
path: docs
|
||||
refreshAfter: 30s
|
||||
rolloutRestartTargets:
|
||||
- kind: Deployment
|
||||
name: docs-backend
|
||||
- kind: Deployment
|
||||
name: docs-y-provider
|
||||
destination:
|
||||
name: docs-collaboration-secret
|
||||
create: true
|
||||
@@ -241,6 +290,11 @@ spec:
|
||||
type: kv-v2
|
||||
path: meet
|
||||
refreshAfter: 30s
|
||||
rolloutRestartTargets:
|
||||
- kind: Deployment
|
||||
name: meet-backend
|
||||
- kind: Deployment
|
||||
name: meet-celery-worker
|
||||
destination:
|
||||
name: meet-django-secret
|
||||
create: true
|
||||
@@ -264,6 +318,11 @@ spec:
|
||||
type: kv-v2
|
||||
path: livekit
|
||||
refreshAfter: 30s
|
||||
rolloutRestartTargets:
|
||||
- kind: Deployment
|
||||
name: meet-backend
|
||||
- kind: Deployment
|
||||
name: meet-celery-worker
|
||||
destination:
|
||||
name: meet-livekit
|
||||
create: true
|
||||
@@ -275,3 +334,241 @@ spec:
|
||||
text: "{{ index .Secrets \"api-key\" }}"
|
||||
LIVEKIT_API_SECRET:
|
||||
text: "{{ index .Secrets \"api-secret\" }}"
|
||||
---
|
||||
# Drive DB credentials from OpenBao database secrets engine (static role, 24h rotation).
|
||||
apiVersion: secrets.hashicorp.com/v1beta1
|
||||
kind: VaultDynamicSecret
|
||||
metadata:
|
||||
name: drive-db-credentials
|
||||
namespace: lasuite
|
||||
spec:
|
||||
vaultAuthRef: vso-auth
|
||||
mount: database
|
||||
path: static-creds/drive
|
||||
allowStaticCreds: true
|
||||
refreshAfter: 5m
|
||||
rolloutRestartTargets:
|
||||
- kind: Deployment
|
||||
name: drive-backend
|
||||
- kind: Deployment
|
||||
name: drive-backend-celery-default
|
||||
destination:
|
||||
name: drive-db-credentials
|
||||
create: true
|
||||
overwrite: true
|
||||
transformation:
|
||||
excludeRaw: true
|
||||
templates:
|
||||
password:
|
||||
text: "{{ index .Secrets \"password\" }}"
|
||||
---
|
||||
apiVersion: secrets.hashicorp.com/v1beta1
|
||||
kind: VaultStaticSecret
|
||||
metadata:
|
||||
name: drive-django-secret
|
||||
namespace: lasuite
|
||||
spec:
|
||||
vaultAuthRef: vso-auth
|
||||
mount: secret
|
||||
type: kv-v2
|
||||
path: drive
|
||||
refreshAfter: 30s
|
||||
rolloutRestartTargets:
|
||||
- kind: Deployment
|
||||
name: drive-backend
|
||||
- kind: Deployment
|
||||
name: drive-backend-celery-default
|
||||
destination:
|
||||
name: drive-django-secret
|
||||
create: true
|
||||
overwrite: true
|
||||
transformation:
|
||||
excludeRaw: true
|
||||
templates:
|
||||
DJANGO_SECRET_KEY:
|
||||
text: "{{ index .Secrets \"django-secret-key\" }}"
|
||||
---
|
||||
apiVersion: secrets.hashicorp.com/v1beta1
|
||||
kind: VaultStaticSecret
|
||||
metadata:
|
||||
name: collabora-credentials
|
||||
namespace: lasuite
|
||||
spec:
|
||||
vaultAuthRef: vso-auth
|
||||
mount: secret
|
||||
type: kv-v2
|
||||
path: collabora
|
||||
refreshAfter: 30s
|
||||
rolloutRestartTargets:
|
||||
- kind: Deployment
|
||||
name: collabora
|
||||
destination:
|
||||
name: collabora-credentials
|
||||
create: true
|
||||
overwrite: true
|
||||
transformation:
|
||||
excludeRaw: true
|
||||
templates:
|
||||
username:
|
||||
text: "{{ index .Secrets \"username\" }}"
|
||||
password:
|
||||
text: "{{ index .Secrets \"password\" }}"
|
||||
---
|
||||
# Messages DB credentials from OpenBao database secrets engine (static role, 24h rotation).
|
||||
apiVersion: secrets.hashicorp.com/v1beta1
|
||||
kind: VaultDynamicSecret
|
||||
metadata:
|
||||
name: messages-db-credentials
|
||||
namespace: lasuite
|
||||
spec:
|
||||
vaultAuthRef: vso-auth
|
||||
mount: database
|
||||
path: static-creds/messages
|
||||
allowStaticCreds: true
|
||||
refreshAfter: 5m
|
||||
rolloutRestartTargets:
|
||||
- kind: Deployment
|
||||
name: messages-backend
|
||||
- kind: Deployment
|
||||
name: messages-worker
|
||||
destination:
|
||||
name: messages-db-credentials
|
||||
create: true
|
||||
overwrite: true
|
||||
transformation:
|
||||
excludeRaw: true
|
||||
templates:
|
||||
password:
|
||||
text: "{{ index .Secrets \"password\" }}"
|
||||
---
|
||||
apiVersion: secrets.hashicorp.com/v1beta1
|
||||
kind: VaultStaticSecret
|
||||
metadata:
|
||||
  name: messages-django-secret
  namespace: lasuite
spec:
  vaultAuthRef: vso-auth
  mount: secret
  type: kv-v2
  path: messages
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: messages-backend
    - kind: Deployment
      name: messages-worker
    - kind: Deployment
      name: messages-mta-in
  destination:
    name: messages-django-secret
    create: true
    overwrite: true
    transformation:
      excludeRaw: true
      templates:
        DJANGO_SECRET_KEY:
          text: "{{ index .Secrets \"django-secret-key\" }}"
        SALT_KEY:
          text: "{{ index .Secrets \"salt-key\" }}"
        MDA_API_SECRET:
          text: "{{ index .Secrets \"mda-api-secret\" }}"
        OIDC_STORE_REFRESH_TOKEN_KEY:
          text: "{{ index .Secrets \"oidc-refresh-token-key\" }}"
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: messages-dkim-key
  namespace: lasuite
spec:
  vaultAuthRef: vso-auth
  mount: secret
  type: kv-v2
  path: messages
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: messages-mpa
  destination:
    name: messages-dkim-key
    create: true
    overwrite: true
    transformation:
      excludeRaw: true
      templates:
        dkim-private-key:
          text: "{{ index .Secrets \"dkim-private-key\" }}"
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: messages-mpa-credentials
  namespace: lasuite
spec:
  vaultAuthRef: vso-auth
  mount: secret
  type: kv-v2
  path: messages
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: messages-mpa
  destination:
    name: messages-mpa-credentials
    create: true
    overwrite: true
    transformation:
      excludeRaw: true
      templates:
        RSPAMD_password:
          text: "{{ index .Secrets \"rspamd-password\" }}"
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: messages-socks-credentials
  namespace: lasuite
spec:
  vaultAuthRef: vso-auth
  mount: secret
  type: kv-v2
  path: messages
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: messages-socks-proxy
  destination:
    name: messages-socks-credentials
    create: true
    overwrite: true
    transformation:
      excludeRaw: true
      templates:
        PROXY_USERS:
          text: "{{ index .Secrets \"socks-proxy-users\" }}"
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: messages-mta-out-credentials
  namespace: lasuite
spec:
  vaultAuthRef: vso-auth
  mount: secret
  type: kv-v2
  path: messages
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: messages-mta-out
  destination:
    name: messages-mta-out-credentials
    create: true
    overwrite: true
    transformation:
      excludeRaw: true
      templates:
        SMTP_USERNAME:
          text: "{{ index .Secrets \"mta-out-smtp-username\" }}"
        SMTP_PASSWORD:
          text: "{{ index .Secrets \"mta-out-smtp-password\" }}"
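Each `templates` entry above is a Go text/template that VSO renders against the Vault KV entry at `secret/messages`; each template name becomes a data key in the synced Kubernetes Secret. A minimal Python sketch of that mapping for the Django secret (placeholder values, not the real secrets; VSO itself renders Go templates, not Python):

```python
# Sketch of the VaultStaticSecret template mapping above: each template
# name becomes a Secret data key, filled from the named Vault KV field.
vault_kv = {  # placeholder values
    "django-secret-key": "<django-key>",
    "salt-key": "<salt>",
    "mda-api-secret": "<mda>",
    "oidc-refresh-token-key": "<oidc>",
}
templates = {  # Secret key -> Vault KV field, as in the manifest
    "DJANGO_SECRET_KEY": "django-secret-key",
    "SALT_KEY": "salt-key",
    "MDA_API_SECRET": "mda-api-secret",
    "OIDC_STORE_REFRESH_TOKEN_KEY": "oidc-refresh-token-key",
}
secret_data = {key: vault_kv[field] for key, field in templates.items()}
print(sorted(secret_data))
```

Because `excludeRaw: true` is set, only these templated keys end up in the Secret; the raw KV payload is not copied through.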
@@ -23,6 +23,9 @@ spec:
  type: kv-v2
  path: livekit
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: livekit-server
  destination:
    name: livekit-api-credentials
    create: true
@@ -31,4 +34,4 @@ spec:
      excludeRaw: true
      templates:
        keys.yaml:
          text: "{{ index .Secrets \"keys.yaml\" }}"
          text: '{{ index .Secrets "api-key" }}: {{ index .Secrets "api-secret" }}'
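The LiveKit change replaces a whole stored `keys.yaml` blob with a line composed from two KV fields, `api-key` and `api-secret`. The rendered output would look like this (illustrative placeholder values):

```python
# Illustrative rendering of the new keys.yaml template:
#   '{{ index .Secrets "api-key" }}: {{ index .Secrets "api-secret" }}'
secrets = {"api-key": "APIabc123", "api-secret": "supersecret"}  # placeholders
keys_yaml = f'{secrets["api-key"]}: {secrets["api-secret"]}'
print(keys_yaml)
```

Composing the file from the two fields means rotating either field in Vault regenerates a consistent `keys.yaml` without hand-editing a blob.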
@@ -24,9 +24,9 @@ spec:
    - code
  scope: openid email profile
  redirectUris:
    - https://grafana.DOMAIN_SUFFIX/login/generic_oauth
    - https://metrics.DOMAIN_SUFFIX/login/generic_oauth
  postLogoutRedirectUris:
    - https://grafana.DOMAIN_SUFFIX/
    - https://metrics.DOMAIN_SUFFIX/
  tokenEndpointAuthMethod: client_secret_post
  secretName: grafana-oidc
  skipConsent: true
@@ -38,38 +38,30 @@ grafana:
      skip_org_role_sync: true
  sidecar:
    datasources:
      # Disable the auto-provisioned ClusterIP datasource; we define it
      # explicitly below using the external URL so Grafana's backend reaches
      # Prometheus via Pingora (https://systemmetrics.DOMAIN_SUFFIX) rather
      # than the cluster-internal ClusterIP which is blocked by network policy.
      defaultDatasourceEnabled: false

  additionalDataSources:
    - name: Prometheus
      type: prometheus
      url: "https://systemmetrics.DOMAIN_SUFFIX"
      url: "http://kube-prometheus-stack-prometheus.monitoring.svc.cluster.local:9090"
      access: proxy
      isDefault: true
      jsonData:
        timeInterval: 30s
    - name: Loki
      type: loki
      url: "https://systemlogs.DOMAIN_SUFFIX"
      url: "http://loki-gateway.monitoring.svc.cluster.local:80"
      access: proxy
      isDefault: false
    - name: Tempo
      type: tempo
      url: "https://systemtracing.DOMAIN_SUFFIX"
      url: "http://tempo.monitoring.svc.cluster.local:3200"
      access: proxy
      isDefault: false

prometheus:
  prometheusSpec:
    retention: 90d
    # hostNetwork allows Prometheus to reach kubelet (10250) and node-exporter
    # (9100) on the node's public InternalIP. On a single-node bare-metal
    # server, pod-to-node-public-IP traffic doesn't route without this.
    hostNetwork: true
    additionalArgs:
      # Allow browser-direct queries from the Grafana UI origin.
      - name: web.cors.origin
@@ -23,6 +23,9 @@ spec:
  type: kv-v2
  path: grafana
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: kube-prometheus-stack-grafana
  destination:
    name: grafana-admin
    create: true
@@ -23,6 +23,9 @@ spec:
  type: kv-v2
  path: hydra
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: hydra
  destination:
    name: hydra
    create: true
@@ -49,6 +52,11 @@ spec:
  type: kv-v2
  path: kratos
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: kratos
    - kind: StatefulSet
      name: kratos-courier
  destination:
    name: kratos-app-secrets
    create: true
@@ -90,30 +98,6 @@ spec:
        dsn:
          text: "postgresql://{{ index .Secrets \"username\" }}:{{ index .Secrets \"password\" }}@postgres-rw.data.svc.cluster.local:5432/kratos_db?sslmode=disable"
---
# Login UI session cookie + CSRF protection secrets.
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: login-ui-secrets
  namespace: ory
spec:
  vaultAuthRef: vso-auth
  mount: secret
  type: kv-v2
  path: login-ui
  refreshAfter: 30s
  destination:
    name: login-ui-secrets
    create: true
    overwrite: true
    transformation:
      excludeRaw: true
      templates:
        cookie-secret:
          text: "{{ index .Secrets \"cookie-secret\" }}"
        csrf-cookie-secret:
          text: "{{ index .Secrets \"csrf-cookie-secret\" }}"
---
# Hydra DB credentials from OpenBao database secrets engine (static role, 24h rotation).
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
@@ -151,6 +135,9 @@ spec:
  type: kv-v2
  path: kratos-admin
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: kratos-admin-ui
  destination:
    name: kratos-admin-ui-secrets
    create: true
@@ -164,3 +151,7 @@ spec:
          text: "{{ index .Secrets \"csrf-cookie-secret\" }}"
        admin-identity-ids:
          text: "{{ index .Secrets \"admin-identity-ids\" }}"
        s3-access-key:
          text: "{{ index .Secrets \"s3-access-key\" }}"
        s3-secret-key:
          text: "{{ index .Secrets \"s3-secret-key\" }}"
@@ -11,3 +11,4 @@ resources:
  - seaweedfs-filer.yaml
  - seaweedfs-filer-pvc.yaml
  - vault-secrets.yaml
  - seaweedfs-remote-sync.yaml
62  base/storage/seaweedfs-remote-sync.yaml  Normal file
@@ -0,0 +1,62 @@
# SeaweedFS S3 mirror — hourly mc mirror from SeaweedFS → Scaleway Object Storage.
# Mirrors all buckets to s3://sunbeam-backups/seaweedfs/<bucket>/.
# No --remove: deleted files are left in Scaleway (versioning provides recovery window).
# concurrencyPolicy: Forbid prevents overlap if a run takes longer than an hour.
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: seaweedfs-s3-mirror
  namespace: storage
spec:
  schedule: "0 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      activeDeadlineSeconds: 3300
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: mirror
              image: minio/mc:latest
              command: ["/bin/sh", "-c"]
              args:
                - |
                  set -e
                  mc alias set seaweed \
                    http://seaweedfs-filer.storage.svc.cluster.local:8333 \
                    "${S3_ACCESS_KEY}" "${S3_SECRET_KEY}"
                  mc alias set scaleway \
                    https://s3.fr-par.scw.cloud \
                    "${ACCESS_KEY_ID}" "${SECRET_ACCESS_KEY}"
                  mc mirror --overwrite seaweed/ scaleway/sunbeam-backups/seaweedfs/
              env:
                - name: S3_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: seaweedfs-s3-credentials
                      key: S3_ACCESS_KEY
                - name: S3_SECRET_KEY
                  valueFrom:
                    secretKeyRef:
                      name: seaweedfs-s3-credentials
                      key: S3_SECRET_KEY
                - name: ACCESS_KEY_ID
                  valueFrom:
                    secretKeyRef:
                      name: scaleway-s3-creds
                      key: ACCESS_KEY_ID
                - name: SECRET_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: scaleway-s3-creds
                      key: SECRET_ACCESS_KEY
              resources:
                requests:
                  memory: 128Mi
                  cpu: 10m
                limits:
                  memory: 512Mi
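Two details of the mirror job are worth spelling out: every SeaweedFS bucket lands under one Scaleway prefix, and `activeDeadlineSeconds: 3300` (55 minutes) keeps a run strictly inside the hourly schedule so a stuck job is killed before the next tick. A sketch of the destination layout (the helper function is illustrative, not part of the manifests):

```python
# Illustrative destination layout for the mc mirror above:
#   seaweed/<bucket>/<key>  ->  scaleway/sunbeam-backups/seaweedfs/<bucket>/<key>
def mirror_destination(bucket: str, key: str) -> str:
    return f"sunbeam-backups/seaweedfs/{bucket}/{key}"

print(mirror_destination("drive", "docs/report.pdf"))

# The job deadline (3300 s) is deliberately shorter than the hourly
# schedule interval (3600 s), complementing concurrencyPolicy: Forbid.
assert 3300 < 3600
```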
@@ -46,7 +46,7 @@ spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
      storage: 400Gi
---
apiVersion: v1
kind: Service
@@ -11,6 +11,31 @@ spec:
    role: vso
  serviceAccount: default
---
# Scaleway S3 credentials for SeaweedFS remote sync.
# Same KV path as barman; synced separately so storage namespace has its own Secret.
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: scaleway-s3-creds
  namespace: storage
spec:
  vaultAuthRef: vso-auth
  mount: secret
  type: kv-v2
  path: scaleway-s3
  refreshAfter: 30s
  destination:
    name: scaleway-s3-creds
    create: true
    overwrite: true
    transformation:
      excludeRaw: true
      templates:
        ACCESS_KEY_ID:
          text: "{{ index .Secrets \"access-key-id\" }}"
        SECRET_ACCESS_KEY:
          text: "{{ index .Secrets \"secret-access-key\" }}"
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
@@ -22,6 +47,9 @@ spec:
  type: kv-v2
  path: seaweedfs
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: seaweedfs-filer
  destination:
    name: seaweedfs-s3-credentials
    create: true
@@ -45,6 +73,9 @@ spec:
  type: kv-v2
  path: seaweedfs
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: seaweedfs-filer
  destination:
    name: seaweedfs-s3-json
    create: true
@@ -12,6 +12,7 @@ kind: Kustomization
# replace DOMAIN_SUFFIX with <LIMA_IP>.sslip.io before kubectl apply.

resources:
  - ../../base/build
  - ../../base/ingress
  - ../../base/ory
  - ../../base/data
@@ -34,6 +35,7 @@ images:
    newName: src.DOMAIN_SUFFIX/studio/people-backend
  - name: lasuite/people-frontend
    newName: src.DOMAIN_SUFFIX/studio/people-frontend
    newTag: latest

# amd64-only impress (Docs) images — same mirror pattern.
  - name: lasuite/impress-backend
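The `DOMAIN_SUFFIX` placeholder in these overlays is never resolved by kustomize itself; the deploy pipeline pipes `kustomize build` output through `sed 's/DOMAIN_SUFFIX/<suffix>/g'` before `kubectl apply`. In Python terms, the substitution is a plain global string replace (the manifest line here is one of the image mirrors above):

```python
# Same substitution the deploy pipeline performs with sed on the
# rendered kustomize output, shown for one line of the overlay.
manifest = "newName: src.DOMAIN_SUFFIX/studio/people-frontend"
rendered = manifest.replace("DOMAIN_SUFFIX", "192.168.5.2.sslip.io")
print(rendered)
```

Because it is a blind text replace, the placeholder must never collide with a real hostname fragment anywhere in the rendered manifests.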
@@ -180,78 +180,36 @@ spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docs-celery-worker
  name: collabora
  namespace: lasuite
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: docs
          env:
            # Celery workers: 2 concurrent workers fits within local memory budget.
            - name: CELERY_WORKER_CONCURRENCY
              value: "2"
          resources:
            limits:
              memory: 384Mi
            requests:
              memory: 128Mi

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docs-backend
  namespace: lasuite
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: docs
          env:
            # 2 uvicorn workers instead of the default 4 to stay within local memory budget.
            # Each worker loads the full Django+impress app (~150 MB), so 4 workers
            # pushed peak RSS above 384 Mi and triggered OOMKill at startup.
            - name: WEB_CONCURRENCY
              value: "2"
        - name: collabora
          resources:
            limits:
              memory: 512Mi
              cpu: 500m
            requests:
              memory: 192Mi

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docs-frontend
  namespace: lasuite
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: docs
          resources:
            limits:
              memory: 128Mi

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docs-y-provider
  namespace: lasuite
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: docs
          resources:
            limits:
              memory: 256Mi
              cpu: 50m

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: buildkitd
  namespace: build
spec:
  template:
    spec:
      containers:
        - name: buildkitd
          resources:
            requests:
              memory: 64Mi
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "2"
              memory: "2Gi"
15  overlays/production/patch-mta-in-hostport.yaml  Normal file
@@ -0,0 +1,15 @@
# Bind MTA-in port 25 to the host so inbound email reaches the pod directly.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: messages-mta-in
  namespace: lasuite
spec:
  template:
    spec:
      containers:
        - name: messages-mta-in
          ports:
            - containerPort: 25
              hostPort: 25
              protocol: TCP
@@ -4,8 +4,8 @@ metadata:
  name: postgres-daily
  namespace: data
spec:
  # Daily at 02:00 UTC
  schedule: "0 2 * * *"
  # Daily at 02:00 UTC (CNPG uses 6-field cron: second minute hour dom month dow)
  schedule: "0 0 2 * * *"
  backupOwnerReference: self
  cluster:
    name: postgres
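The fix above is easy to miss in review: stock Kubernetes CronJobs take 5-field cron expressions, while CNPG's ScheduledBackup uses a 6-field variant with a leading seconds field, so the original 5-field schedule would be misparsed. The corrected expression just prepends `0` seconds:

```python
# CNPG ScheduledBackup cron has 6 fields (leading seconds);
# a stock Kubernetes CronJob schedule has only 5.
fields = ["second", "minute", "hour", "day-of-month", "month", "day-of-week"]
schedule = "0 0 2 * * *"  # the corrected daily-at-02:00 schedule above
parts = schedule.split()
assert len(parts) == len(fields) == 6
print(dict(zip(fields, parts)))
```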
@@ -149,24 +149,6 @@ spec:
          limits:
            memory: 128Mi

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: login-ui
  namespace: ory
spec:
  template:
    spec:
      containers:
        - name: login-ui
          resources:
            requests:
              memory: 128Mi
              cpu: 50m
            limits:
              memory: 384Mi

---
apiVersion: apps/v1
kind: Deployment
@@ -215,79 +197,17 @@ spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docs-celery-worker
  name: collabora
  namespace: lasuite
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: docs
          env:
            - name: CELERY_WORKER_CONCURRENCY
              value: "4"
        - name: collabora
          resources:
            requests:
              memory: 512Mi
              cpu: 250m
            limits:
              memory: 1Gi

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docs-backend
  namespace: lasuite
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: docs
          env:
            - name: WEB_CONCURRENCY
              value: "4"
          resources:
            requests:
              memory: 512Mi
              cpu: 250m
            limits:
              memory: 1Gi

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docs-frontend
  namespace: lasuite
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: docs
          resources:
            requests:
              memory: 64Mi
            limits:
              memory: 256Mi

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docs-y-provider
  namespace: lasuite
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: docs
          resources:
            requests:
              memory: 256Mi
              cpu: 100m
            limits:
              memory: 1Gi
              cpu: 1000m