Database secrets engine (_configure_db_engine):
- Creates a dedicated `vault` PostgreSQL user via CNPG peer auth (psql exec).
CNPG enableSuperuserAccess=false blocks remote auth for the postgres
superuser, so we create vault with CREATEROLE and grant it ADMIN OPTION on
each service role (on PG 16+ a CREATEROLE user can only rotate passwords
for roles it holds ADMIN OPTION on).
- Configures the OpenBao postgresql plugin (cnpg-postgres connection) and
creates static roles for all PG_USERS with 24h rotation_period.
- All bao/psql calls now raise RuntimeError on non-zero exit instead of
failing silently.
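The sequence above can be sketched as CLI calls; pod, service, and role names here are assumptions, and the real _configure_db_engine drives these through subprocess with error checking:

```shell
# 1. Create the rotation user via peer auth inside the CNPG pod
#    (names cnpg-postgres-1 / gitea are illustrative):
kubectl exec -n postgres cnpg-postgres-1 -- psql -U postgres -c \
  "CREATE ROLE vault WITH LOGIN CREATEROLE PASSWORD 'bootstrap-only'"
kubectl exec -n postgres cnpg-postgres-1 -- psql -U postgres -c \
  "GRANT gitea TO vault WITH ADMIN OPTION"   # repeated per service role

# 2. Point the postgresql plugin at the cluster and define a static role:
bao secrets enable database
bao write database/config/cnpg-postgres \
  plugin_name=postgresql-database-plugin \
  connection_url='postgresql://{{username}}:{{password}}@cnpg-postgres-rw.postgres:5432/postgres' \
  username=vault password='bootstrap-only'
bao write database/static-roles/gitea \
  db_name=cnpg-postgres username=gitea rotation_period=24h
```

After the first rotation, OpenBao replaces the bootstrap password, so the hardcoded value never persists.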
Credential seeding (_seed_openbao):
- Added secret/login-ui path (cookie-secret, csrf-cookie-secret) so the
login UI no longer needs hardcoded values in its Deployment manifest.
- Removed all DB password fields from KV; passwords are now managed
exclusively by the database secrets engine.
Lifecycle:
- pre_apply_cleanup() prunes stale VaultStaticSecrets that have been
superseded by VaultDynamicSecrets of the same name, preventing the
"not the owner" ownerRef conflict that blocked secret updates.
- status_check() no longer marks Completed/Succeeded pods as unhealthy.
- _vso_sync_status() added to status output: shows sync state (secretMAC
for VSS, lastRenewalTime for VDS) across all managed namespaces.
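The pruning decision reduces to a set intersection; a minimal sketch of the selection logic (the real pre_apply_cleanup also lists the CRs via kubectl and issues the deletes):

```python
from typing import Iterable


def stale_static_secrets(vss_names: Iterable[str],
                         vds_names: Iterable[str]) -> set[str]:
    """Return VaultStaticSecret names superseded by a VaultDynamicSecret.

    VSO sets an ownerRef on the K8s Secret it creates; if a VSS and a VDS
    both claim the same Secret name, the stale VSS hits the "not the owner"
    conflict on update, so it must be deleted before apply.
    """
    return set(vss_names) & set(vds_names)
```

For example, once gitea-db-credentials becomes a VDS, the old VSS of the same name is flagged for deletion while unrelated VSS objects survive.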
Verification (--verify):
- New verify_vso() function writes a random sentinel to OpenBao, creates
a VaultAuth + VaultStaticSecret in the ory namespace, waits up to 60s
for VSO to sync, decodes the K8s Secret, and asserts the value matches.
Cleans up all test resources unconditionally. Replaces the unreliable
Helm test pod for integration testing.
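The final assertion step can be isolated as a pure helper; a sketch assuming the Secret is fetched with `kubectl get secret -o json` (the function name is hypothetical):

```python
import base64
import json


def secret_matches_sentinel(secret_json: str, key: str, sentinel: str) -> bool:
    """Decode one base64 value from `kubectl get secret -o json` output and
    compare it to the random sentinel that was written to OpenBao."""
    data = json.loads(secret_json).get("data", {})
    if key not in data:
        return False  # VSO has not synced the key yet
    return base64.b64decode(data[key]).decode() == sentinel
```

verify_vso would poll this inside its 60s wait loop and fail loudly if the decoded value never matches.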
LiveKit: switch to Recreate deployment strategy. hostPorts (TURN UDP relay
range) block RollingUpdate because the new pod cannot schedule while the
old one still holds the ports.
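In Deployment terms this is a one-field change (standard apps/v1 schema; surrounding values elided):

```yaml
spec:
  strategy:
    type: Recreate  # old pod must release its TURN UDP hostPorts
                    # before the replacement can schedule on the node
```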
OpenSearch: set OPENSEARCH_JAVA_OPTS to -Xms192m -Xmx256m. The upstream
default (-Xms512m -Xmx1g) immediately OOMs the container given our 512Mi
memory limit.
login-ui: raise memory limit from 64Mi to 192Mi and add a 64Mi request;
the previous limit was too tight and caused OOMKilled restarts under load.
Fix the People route: the backend service is named people-backend (not
people) in the desk chart. Add a route for find-backend to front the
future OpenSearch Dashboards service.
- gitea-db-credentials is now a VaultDynamicSecret reading from
database/static-creds/gitea (OpenBao static role, 24h password rotation).
Replaces the previous KV-based Secret that used a hardcoded localdev password.
- gitea-admin-credentials and gitea-s3-credentials remain VaultStaticSecrets
synced from secret/gitea and secret/seaweedfs respectively.
- gitea-values.yaml adds gitea.admin.existingSecret so the chart reads the
admin username/password from the VSO-managed Secret instead of values.
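A hypothetical manifest for the VDS described above, using the VSO CRD fields (namespace and vaultAuthRef names are assumptions):

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  name: gitea-db-credentials
  namespace: gitea
spec:
  vaultAuthRef: vault-auth
  mount: database
  path: static-creds/gitea     # OpenBao static role, rotated every 24h
  allowStaticCreds: true       # required for static-creds endpoints
  destination:
    name: gitea-db-credentials
    create: true
```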
All Ory service credentials now flow from OpenBao through VSO instead
of being hardcoded in Helm values or Deployment env vars.
Kratos:
- Remove config.dsn; set secret.enabled=false and point nameOverride at
  kratos-app-secrets (a VSO-managed Secret with secretsDefault,
  secretsCookie, smtpConnectionURI).
- Inject DSN at runtime via deployment.extraEnv from kratos-db-creds
(VaultDynamicSecret backed by OpenBao database static role, 24h rotation).
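The runtime injection looks roughly like this in the chart values (a sketch; the `dsn` key assumes a VSO destination transformation template renders the full connection string from the rotated username/password):

```yaml
deployment:
  extraEnv:
    - name: DSN
      valueFrom:
        secretKeyRef:
          name: kratos-db-creds   # VSO-managed, re-rendered on rotation
          key: dsn
```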
Hydra:
- Remove config.dsn; inject DSN via deployment.extraEnv from hydra-db-creds
(VaultDynamicSecret, same rotation scheme).
Login UI:
- Replace hardcoded COOKIE_SECRET/CSRF_COOKIE_SECRET env var values with
secretKeyRef reads from login-ui-secrets (VaultStaticSecret → secret/login-ui).
vault-secrets.yaml adds: VaultAuth, Hydra VSS, kratos-app-secrets VSS,
login-ui-secrets VSS, kratos-db-creds VDS, hydra-db-creds VDS.
Previously s3.json was embedded in the seaweedfs-filer-config ConfigMap
with hardcoded minioadmin credentials, and the config volume was mounted
as a directory at /etc/seaweedfs/, shadowing the existing filer.toml.
- Remove s3.json from ConfigMap; fix the config volume to mount only
filer.toml via subPath so both files coexist under /etc/seaweedfs/.
- Add vault-secrets.yaml with VaultStaticSecrets that VSO syncs from
OpenBao secret/seaweedfs: seaweedfs-s3-credentials (S3_ACCESS_KEY /
S3_SECRET_KEY) and seaweedfs-s3-json (s3.json as a JSON template).
- Mount seaweedfs-s3-json Secret at /etc/seaweedfs/s3.json via subPath.
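The resulting mounts might look like this (volume names are illustrative); each subPath projects a single file so both coexist under /etc/seaweedfs/:

```yaml
volumeMounts:
  - name: filer-config
    mountPath: /etc/seaweedfs/filer.toml
    subPath: filer.toml          # single file, not the whole directory
  - name: s3-json
    mountPath: /etc/seaweedfs/s3.json
    subPath: s3.json             # from the VSO-synced Secret
volumes:
  - name: filer-config
    configMap:
      name: seaweedfs-filer-config
  - name: s3-json
    secret:
      secretName: seaweedfs-s3-json
```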
kubectl apply --server-side was managing the `data: {}` field, which
caused it to wipe the key/root-token entries written by the seed script
on subsequent applies. Removing the field entirely means server-side
apply never touches data, so seed-written keys survive re-applies.
- Add base/vso/ with Helm chart (v0.9.0 from helm.releases.hashicorp.com),
namespace, and test-rbac.yaml granting the Helm test pod's default SA
permission to create/read/delete Secrets, ConfigMaps, and Leases so the
bundled connectivity test passes.
- Wire ../../base/vso into overlays/local/kustomization.yaml.
- Add image aliases for lasuite/people-backend and lasuite/people-frontend
so kustomize rewrites those pulls to our Gitea registry (amd64-only images
that are patched and mirrored by sunbeam.py).
- Rename local-up.py → sunbeam.py; update docstring and argparser description
- Add setup_lima_vm_registry(): installs mkcert root CA into Lima VM system trust
store and writes k3s registries.yaml (Gitea auth); restarts k3s if changed
- Add bootstrap_gitea(): waits for pod Running+Ready, sets admin password via
gitea CLI, clears must_change_password via Postgres UPDATE (Gitea enforces
this flag at API level regardless of auth method), creates studio/internal orgs
- Add mirror_amd64_images(): pulls amd64-only images, patches OCI index with an
arm64 alias pointing at the same manifest (Rosetta runs it transparently),
imports patched image into k3s containerd, pushes to Gitea container registry
- Add AMD64_ONLY_IMAGES list (currently: lasuite/people-{backend,frontend})
- Add --gitea partial flag: registry trust + Gitea bootstrap + mirror
- Add --status flag: pod health table across all managed namespaces
- Fix create_secret to use --field-manager=sunbeam so kustomize apply (manager
kubectl) never wipes data fields written by the seed script
- Add people-frontend to SERVICES_TO_RESTART (was missing)
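The index-patching step of mirror_amd64_images can be sketched as pure JSON manipulation (function name hypothetical; pull/import/push are omitted):

```python
import copy
import json


def add_arm64_alias(index_json: str) -> str:
    """Duplicate each amd64 entry in an OCI image index with an arm64
    platform stanza pointing at the same manifest digest, so containerd on
    an arm64 node resolves the amd64 layers and Rosetta runs them."""
    index = json.loads(index_json)
    aliases = []
    for m in index.get("manifests", []):
        if m.get("platform", {}).get("architecture") == "amd64":
            alias = copy.deepcopy(m)
            alias["platform"]["architecture"] = "arm64"
            aliases.append(alias)
    index["manifests"].extend(aliases)
    return json.dumps(index)
```

Because both entries share one digest, no layers are duplicated in the Gitea registry; only the index grows by one entry per image.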
local-up.py is a stdlib-only Python rewrite of local-up.sh +
local-seed-secrets.sh. Key improvements:
- Correctly parses limactl list --json NDJSON output (json.load()
choked on NDJSON, causing spurious VM creation attempts)
- Handles all Lima VM states: none, Running, Stopped, Broken, etc.
- Inlines seed secrets (no separate local-seed-secrets.sh subprocess)
- Partial runs: --seed, --apply, --restart flags
- Consistent idempotency: every step checks state before acting
- Adds people-backend/celery to restart list; find to PG users list
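The NDJSON fix above amounts to parsing line by line; a minimal sketch (helper names are assumptions, not the script's actual API):

```python
import json


def parse_limactl_list(output: str) -> list[dict]:
    """`limactl list --json` emits NDJSON (one JSON object per line), not a
    JSON array, so json.load() on the whole stream fails; parse per line."""
    return [json.loads(line) for line in output.splitlines() if line.strip()]


def vm_status(output: str, name: str) -> str:
    """Return the VM's status ('Running', 'Stopped', ...) or 'none' if the
    VM does not exist, matching the states the script handles."""
    for vm in parse_limactl_list(output):
        if vm.get("name") == name:
            return vm.get("status", "Broken")
    return "none"
```

Treating a missing VM as the explicit state "none" is what prevents the spurious creation attempts the shell version suffered from.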
local-up.sh patched: yq in prereqs, NDJSON-safe VM detection,
--server-side for Linkerd apply, people in restart list, Mail URL.
- Add find user and find_db to postgres-cluster.yaml (11th database)
- Add sunbeam-messages-imports and sunbeam-people buckets to SeaweedFS
- Configure Hydra Maester with enabledNamespaces: [lasuite] so it can
create and update OAuth2Client secrets in the lasuite namespace
- Add find to Kratos allowed_return_urls
- Add shared ConfigMaps: lasuite-postgres, lasuite-valkey, lasuite-s3,
lasuite-oidc-provider — single source of truth for all app env vars
- Add HydraOAuth2Client CRDs for all nine La Suite apps (docs, drive,
meet, conversations, messages, people, find, gitea, hive); Maester
will create oidc-<app> secrets with CLIENT_ID and CLIENT_SECRET
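One of the nine CRs might look like this (the Hydra Maester kind is OAuth2Client; scope, grant types, and redirect URI are assumptions for illustration):

```yaml
apiVersion: hydra.ory.sh/v1alpha1
kind: OAuth2Client
metadata:
  name: docs
  namespace: lasuite
spec:
  grantTypes:
    - authorization_code
    - refresh_token
  scope: "openid email profile"
  secretName: oidc-docs        # Maester writes CLIENT_ID / CLIENT_SECRET here
  redirectUris:
    - https://docs.127.0.0.1.nip.io/api/v1.0/callback
```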