Sunbeam Infrastructure
Kubernetes manifests for a self-hosted collaboration platform. Single-node k3s cluster deployed via Kustomize + Helm. All YAML, no Terraform, no Pulumi.
How to Deploy
# Local (Lima VM on macOS):
sunbeam up # full stack bring-up
sunbeam apply # re-apply all manifests
sunbeam apply lasuite # apply single namespace
# Or raw kustomize:
kustomize build --enable-helm overlays/local | sed 's/DOMAIN_SUFFIX/192.168.5.2.sslip.io/g' | kubectl apply --server-side -f -
There is no make, no CI pipeline, no Terraform. Manifests are applied with sunbeam apply (which runs kustomize build + sed + kubectl apply).
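A rough sketch of the pipeline that sunbeam apply runs, assumed from the raw kustomize command above (the real implementation lives in scripts/). The kubectl step is commented out so the snippet runs without a cluster; only the sed substitution actually executes here:

```shell
# Sketch of the sunbeam apply pipeline (assumed; see scripts/ for the real one).
suffix="192.168.5.2.sslip.io"   # local overlay value; production uses its own domain

# kustomize build --enable-helm overlays/local \
#   | sed "s/DOMAIN_SUFFIX/${suffix}/g" \
#   | kubectl apply --server-side -f -

# The substitution step, demonstrated on a one-line manifest fragment:
fragment='host: "drive.DOMAIN_SUFFIX"'
rendered=$(printf '%s\n' "$fragment" | sed "s/DOMAIN_SUFFIX/${suffix}/g")
printf '%s\n' "$rendered"   # host: "drive.192.168.5.2.sslip.io"
```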
Critical Rules
- Do NOT add Helm charts when plain YAML works. Most resources here are plain YAML. Helm is only used for complex upstream charts (Kratos, Hydra, CloudNativePG, etc.) that have their own release cycles. A new Deployment, Service, or ConfigMap should be plain YAML.
- Do NOT create Ingress resources. This project does not use Kubernetes Ingress. Routing is handled by Pingora (a custom reverse proxy) configured via a TOML ConfigMap in base/ingress/. To expose a new service, add a [[routes]] entry to the Pingora config.
- Do NOT introduce Terraform, Pulumi, or any IaC tool. Everything is Kustomize.
- Do NOT modify overlays when the change belongs in base. Base holds the canonical config; overlays only hold environment-specific patches. If something applies to both local and production, it goes in base.
- Do NOT add RBAC, NetworkPolicy, or PodSecurityPolicy resources. Linkerd service mesh handles mTLS. RBAC is managed at the k3s level.
- Do NOT create new namespaces without being asked. The namespace layout is intentional.
- Do NOT hardcode domains. Use DOMAIN_SUFFIX as a placeholder — it gets substituted at deploy time.
- Never commit TLS keys or secrets. Secrets are managed by OpenBao + Vault Secrets Operator, not stored in this repo.
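The [[routes]] rule above can be sketched as a ConfigMap fragment. The real TOML schema is whatever the Pingora config in base/ingress/ defines; the ConfigMap name, namespace, and route keys (host, upstream) below are illustrative assumptions, not the actual field names.

```yaml
# HYPOTHETICAL sketch of exposing a service via the Pingora TOML ConfigMap.
# Key names (host, upstream) and metadata are assumed for illustration only;
# copy the actual schema from base/ingress/.
apiVersion: v1
kind: ConfigMap
metadata:
  name: pingora-config        # assumed name
  namespace: ingress          # assumed namespace
data:
  pingora.toml: |
    [[routes]]
    host = "drive.DOMAIN_SUFFIX"   # DOMAIN_SUFFIX is substituted at deploy time
    upstream = "drive-frontend.lasuite.svc.cluster.local:8080"
```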
Directory Structure
base/ # Canonical manifests (environment-agnostic)
{namespace}/ # One directory per Kubernetes namespace
kustomization.yaml # Resources, patches, helmCharts
namespace.yaml # Namespace definition with Linkerd injection
vault-secrets.yaml # VaultAuth + VaultStaticSecret + VaultDynamicSecret
{app}-deployment.yaml # Deployments
{app}-service.yaml # Services
{app}-config.yaml # ConfigMaps
{chart}-values.yaml # Helm chart values
patch-{what}.yaml # Strategic merge patches
overlays/
local/ # Lima VM dev overlay (macOS)
kustomization.yaml # Selects base dirs, adds local patches + image overrides
patch-*.yaml / values-*.yaml
production/ # Scaleway server overlay
kustomization.yaml
patch-*.yaml / values-*.yaml
scripts/ # Bash automation (local-up.sh, local-down.sh, etc.)
secrets/ # TLS cert placeholders (gitignored)
Manifest Conventions — Follow These Exactly
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: {namespace-name}
resources:
- namespace.yaml
- {app}-deployment.yaml
- {app}-service.yaml
- vault-secrets.yaml
# Helm charts only for complex upstream projects
helmCharts:
- name: {chart}
repo: https://...
version: "X.Y.Z"
releaseName: {name}
namespace: {ns}
valuesFile: {chart}-values.yaml
patches:
- path: patch-{what}.yaml
target:
kind: Deployment
name: {name}
Namespace definition
Every namespace gets Linkerd injection:
apiVersion: v1
kind: Namespace
metadata:
name: {name}
annotations:
linkerd.io/inject: enabled
Deployments
apiVersion: apps/v1
kind: Deployment
metadata:
name: {app}
namespace: {ns}
spec:
replicas: 1
selector:
matchLabels:
app: {app}
template:
metadata:
labels:
app: {app}
spec:
containers:
- name: {app}
image: {app} # Short name — Kustomize images: section handles registry
envFrom:
- configMapRef:
name: {app}-config # Bulk env from ConfigMap
env:
- name: DB_PASSWORD # Individual secrets from VSO-synced K8s Secrets
valueFrom:
secretKeyRef:
name: {app}-db-credentials
key: password
livenessProbe:
httpGet:
path: /__lbheartbeat__
port: 8000
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /__heartbeat__
port: 8000
initialDelaySeconds: 10
periodSeconds: 10
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
memory: 128Mi
cpu: 100m
Key patterns:
- image: {app} uses a short name — the actual registry is set via images: in the overlay kustomization.yaml
- Django apps use /__lbheartbeat__ (liveness) and /__heartbeat__ (readiness)
- Init containers run migrations: python manage.py migrate --no-input
- envFrom for ConfigMaps, individual env entries for secrets
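The init-container migration pattern can be sketched as the fragment below, reusing the image, ConfigMap, and secret conventions from the Deployment template above. The container name migrate is an assumption for illustration.

```yaml
# Sketch of the migration init container convention (name "migrate" assumed).
spec:
  template:
    spec:
      initContainers:
        - name: migrate
          image: {app}              # same short-name image as the main container
          command: ["python", "manage.py", "migrate", "--no-input"]
          envFrom:
            - configMapRef:
                name: {app}-config
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {app}-db-credentials
                  key: password
```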
Secrets (Vault Secrets Operator)
Secrets are NEVER stored in this repo. They flow: OpenBao → VaultStaticSecret/VaultDynamicSecret CRD → K8s Secret → Pod env.
# VaultAuth — one per namespace, always named vso-auth
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
name: vso-auth
namespace: {ns}
spec:
method: kubernetes
mount: kubernetes
kubernetes:
role: vso
serviceAccount: default
---
# Static secrets (OIDC keys, Django secrets, etc.)
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
name: {secret-name}
namespace: {ns}
spec:
vaultAuthRef: vso-auth
mount: secret
type: kv-v2
path: {openbao-path}
refreshAfter: 30s
destination:
name: {k8s-secret-name}
create: true
overwrite: true
transformation:
excludeRaw: true
templates:
KEY_NAME:
text: '{{ index .Secrets "openbao-key" }}'
---
# Dynamic secrets (DB credentials, rotate every 5m)
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
name: {app}-db-credentials
namespace: {ns}
spec:
vaultAuthRef: vso-auth
mount: database
path: static-creds/{app}
allowStaticCreds: true
refreshAfter: 5m
rolloutRestartTargets:
- kind: Deployment
name: {app}
destination:
name: {app}-db-credentials
create: true
transformation:
templates:
password:
text: 'postgresql://{{ index .Secrets "username" }}:{{ index .Secrets "password" }}@postgres-rw.data.svc.cluster.local:5432/{db}'
Shared ConfigMaps
Apps in lasuite namespace share these ConfigMaps via envFrom:
- lasuite-postgres — DB_HOST, DB_PORT, DB_ENGINE
- lasuite-valkey — REDIS_URL, CELERY_BROKER_URL
- lasuite-s3 — AWS_S3_ENDPOINT_URL, AWS_S3_REGION_NAME
- lasuite-oidc-provider — OIDC endpoints (uses DOMAIN_SUFFIX)
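A sketch of what one of these shared ConfigMaps might look like. The key names come from the list above; the values are assumed examples (the DB host matches the DSN used in the VaultDynamicSecret template), not the real cluster config:

```yaml
# Illustrative sketch; values are assumptions, keys are from the list above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: lasuite-postgres
  namespace: lasuite
data:
  DB_HOST: postgres-rw.data.svc.cluster.local   # assumed, matching the DSN above
  DB_PORT: "5432"
  DB_ENGINE: django.db.backends.postgresql      # assumed Django engine value
```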
Overlay patches
Patches in overlays are strategic merge patches. Name them patch-{what}.yaml or values-{what}.yaml:
# overlays/local/patch-oidc-verify-ssl.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: lasuite-oidc-provider
namespace: lasuite
data:
OIDC_VERIFY_SSL: "false"
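Putting the overlay conventions together, an overlay kustomization.yaml might look like the sketch below. The resource list, image names, and registry are illustrative assumptions; only the patch wiring mirrors the example above.

```yaml
# overlays/local/kustomization.yaml — hypothetical sketch (names assumed)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/lasuite          # select base dirs
  - ../../base/monitoring
patches:
  - path: patch-oidc-verify-ssl.yaml
    target:
      kind: ConfigMap
      name: lasuite-oidc-provider
images:
  - name: drive                  # short name used in Deployments
    newName: registry.example.com/lasuite/drive   # assumed registry
    newTag: "1.2.3"
```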
What NOT to Do
- Don't create abstract base layers, nested overlays, or "common" directories. The structure is flat: base/{namespace}/ and overlays/{env}/.
- Don't use configMapGenerator or secretGenerator — ConfigMaps are plain YAML resources, secrets come from VSO.
- Don't add commonLabels or commonAnnotations in kustomization.yaml — labels are set per-resource.
- Don't use JSON patches when a strategic merge patch works.
- Don't wrap simple services in Helm charts.
- Don't add comments explaining what standard Kubernetes fields do. Only comment non-obvious decisions.
- Don't change Helm chart versions without being asked — version pinning is intentional.
- Don't add monitoring/alerting rules unless asked — monitoring lives in base/monitoring/.
- Don't split a single component across multiple kustomization.yaml directories.