sbbb/base/monitoring/alertrules-slo.yaml
Sienna Meridian Satterwhite e4987b4c58 feat(monitoring): comprehensive alerting overhaul, 66 rules across 14 PrometheusRules
The Longhorn memory leak went undetected for 14 days because alerting
was broken (email receiver, missing label selector, no node alerts).
This overhaul brings alerting to production grade.

Fixes:
- Alloy Loki URL pointed to deleted loki-gateway, now loki:3100
- seaweedfs-bucket-init crash on unsupported `mc versioning` command
- All PrometheusRules now have `release: kube-prometheus-stack` label
- Removed broken email receiver, Matrix-only alerting

New alert coverage:
- Node: memory, CPU, swap, filesystem, inodes, network, clock skew, OOM
- Kubernetes: deployment down, CronJob failed, pod crash-looping, PVC full
- Backups: Postgres barman stale/failed, WAL archiving, SeaweedFS mirror
- Observability: Prometheus WAL/storage/rules, Loki/Tempo/AlertManager down
- Services: Stalwart, Bulwark, Tuwunel, Sol, Valkey, OpenSearch (smart)
- SLOs: auth stack 99.9% burn rate, Matrix 99.5%, latency p95 < 2s
- Recording rules for Linkerd RED metrics and node aggregates
- Watchdog heartbeat → Matrix every 12h (dead pipeline detection)
- Inhibition: critical suppresses warning for same alert+namespace
- OpenSearchClusterYellow only fires with >1 data node (single-node aware)
2026-04-06 15:52:06 +01:00
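The SLO thresholds in the rules below follow the standard burn-rate arithmetic: an alert fires when the error rate exceeds the burn-rate multiple times the error budget (1 minus the SLO target). A quick sanity check of the three thresholds used in this file:

```python
# Burn-rate threshold = burn_rate x (1 - SLO target).
# Reproduces the literal thresholds used in the alert expressions below.
def burn_threshold(slo_target: float, burn_rate: float) -> float:
    return burn_rate * (1 - slo_target)

print(round(burn_threshold(0.999, 14.4), 4))  # auth fast burn   -> 0.0144
print(round(burn_threshold(0.999, 3.0), 4))   # auth slow burn   -> 0.003
print(round(burn_threshold(0.995, 14.4), 4))  # matrix fast burn -> 0.072
```

A 14.4x burn rate is chosen so that a sustained breach exhausts a 30-day error budget in roughly 2 days, which is why it pages as critical after only 2 minutes.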


apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: slo-alerts
  namespace: monitoring
  labels:
    role: alert-rules
    release: kube-prometheus-stack
spec:
  groups:
    # SLO: Kratos/Hydra auth stack — 99.9% availability (43 min/month budget)
    - name: slo-auth
      rules:
        - alert: AuthErrorBudgetFastBurn
          expr: |
            service:error_rate:5m{deployment=~"kratos|hydra"} > 0.0144
          for: 2m
          labels:
            severity: critical
            slo: auth-availability
          annotations:
            summary: "Auth stack burning error budget at 14.4x rate"
            description: "{{ $labels.deployment }} error rate is {{ $value | humanizePercentage }} (14.4x burn rate for 99.9% SLO)."
        - alert: AuthErrorBudgetSlowBurn
          expr: |
            service:error_rate:5m{deployment=~"kratos|hydra"} > 0.003
          for: 1h
          labels:
            severity: warning
            slo: auth-availability
          annotations:
            summary: "Auth stack slowly burning error budget"
            description: "{{ $labels.deployment }} error rate is {{ $value | humanizePercentage }} (3x burn rate for 99.9% SLO)."
    # SLO: Tuwunel Matrix homeserver — 99.5% availability (3.6 hr/month budget)
    - name: slo-matrix
      rules:
        - alert: MatrixErrorBudgetFastBurn
          expr: |
            service:error_rate:5m{deployment="tuwunel"} > 0.072
          for: 2m
          labels:
            severity: critical
            slo: matrix-availability
          annotations:
            summary: "Matrix homeserver burning error budget at 14.4x rate"
            description: "Tuwunel error rate is {{ $value | humanizePercentage }}."
    # SLO: All services — latency p95 under 2s
    - name: slo-latency
      rules:
        - alert: ServiceLatencyBudgetBurn
          expr: |
            service:latency_p95:5m > 2000
          for: 10m
          labels:
            severity: warning
            slo: latency
          annotations:
            summary: "Service p95 latency exceeds 2s SLO"
            description: "{{ $labels.deployment }} in {{ $labels.namespace }} p95 latency is {{ $value }}ms."
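The `service:error_rate:5m` and `service:latency_p95:5m` series used above are recording rules defined elsewhere (the commit mentions "Recording rules for Linkerd RED metrics"). A rough sketch of what those rules might look like, assuming Linkerd's inbound proxy metrics (`response_total` with its `classification` label, `response_latency_ms_bucket`) carry `deployment` and `namespace` labels; the exact metric and label names here are assumptions, not copied from the actual recording-rules file:

```yaml
# Hypothetical sketch — not the actual recording-rules manifest.
- name: service-red-recording
  rules:
    - record: service:error_rate:5m
      expr: |
        sum by (deployment, namespace) (rate(response_total{direction="inbound", classification="failure"}[5m]))
        /
        sum by (deployment, namespace) (rate(response_total{direction="inbound"}[5m]))
    - record: service:latency_p95:5m
      expr: |
        histogram_quantile(0.95,
          sum by (deployment, namespace, le) (rate(response_latency_ms_bucket{direction="inbound"}[5m])))
```

Precomputing these keeps the alert expressions short and makes the burn-rate thresholds directly comparable across services.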