15 Commits

Author SHA1 Message Date
7214d0ab5d style: cargo fmt fixup for 1.9 feature edits 2026-04-07 20:14:50 +01:00
41df3c2dfd chore: bump workspace to 1.9.0 + CHANGELOG
Workspace version goes from 1.8.1 → 1.9.0. Internal crate deps that
carry an explicit version (wfe-buildkit-protos, wfe-containerd-protos,
wfe in wfe-deno) are bumped to match.

CHANGELOG.md documents the release under `## [1.9.0] - 2026-04-07`:

* wfectl CLI with 17 subcommands
* wfectl validate (local YAML compile, no round-trip)
* Human-friendly workflow names (instance sequencing + definition
  display name)
* wfe-server full feature set (kubernetes + deno + buildkit +
  containerd + rustlang) on a debian base
* wfe-ci builder Dockerfile
* /bin/bash for run scripts
* ensure_store_exists called on host start
* SubWorkflowStep parent data inheritance
* workflows.yaml restructured for YAML 1.1 shallow-merge semantics
2026-04-07 19:12:26 +01:00
6cc1437f0c feat(workflows.yaml): display names + restructure for shallow-merge
Three scope changes that together get the self-hosted wfe CI pipeline
passing against builds.sunbeam.pt.

1. Add `name:` display names to all 12 workflow definitions
   (Continuous Integration, Unit Tests, Build Image, etc.) so the
   new wfectl tables and UIs have human-friendly labels alongside
   the slug ids.

2. Restructure step references from the old `<<: *ci_step` / `<<:
   *ci_long` anchors to inner-config merges of the form:

       - name: foo
         type: kubernetes
         config:
           <<: *ci_config
           run: |
             ...

   YAML 1.1 merge keys are *shallow*. The old anchors put `config:`
   on the top-level step, then the step's own `config:` block
   replaced it wholesale — image, memory, cpu, env all vanished.
   The new pattern merges at the `config:` level so step-specific
   fields (`run:`, `outputs:`, etc.) sit alongside the inherited
   `image:`, `memory:`, `cpu:`, `env:`.

3. Secret env vars (GITEA_TOKEN, TEA_TOKEN, CARGO_REGISTRIES_*,
   BUILDKIT_*) moved into the shared `ci_env` anchor. Individual
   steps used to declare their own `env:` blocks which — again due
   to shallow merge — would replace the whole inherited env map.
2026-04-07 19:11:50 +01:00
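The shallow-merge pitfall described above can be sketched with plain dicts (a hypothetical illustration, not wfe code — YAML 1.1 merge keys behave like a one-level dict merge):

```python
# Hypothetical sketch: YAML 1.1 `<<` merge keys are shallow -- nested maps
# are replaced wholesale, never combined key-by-key.
def merge_step(anchor: dict, step: dict) -> dict:
    """Mimic `<<: *anchor` at the step level: step keys win, one level deep."""
    return {**anchor, **step}

ci_step = {"type": "kubernetes",
           "config": {"image": "wfe-ci:latest", "memory": "4Gi", "cpu": "2"}}

# Old pattern: the step's own `config:` block clobbers the inherited one.
old = merge_step(ci_step, {"name": "test", "config": {"run": "cargo nextest run"}})
assert "image" not in old["config"]  # image/memory/cpu all vanished

# New pattern: merge *inside* config, so inherited fields survive.
new = {"name": "test", "type": "kubernetes",
       "config": {**ci_step["config"], "run": "cargo nextest run"}}
assert new["config"]["image"] == "wfe-ci:latest" and "run" in new["config"]
```

The same reasoning explains item 3: a per-step `env:` map would replace the whole inherited env map, so secrets live in the shared anchor instead.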
0c239cd484 feat(wfectl): new CLI client + wfe-ci builder image
wfectl is a command-line client for wfe-server with 17 subcommands
covering the full workflow lifecycle:

* Auth: login (OAuth2 PKCE via Ory Hydra), logout, whoami
* Definitions: register (YAML → gRPC), validate (local compile),
  definitions list
* Instances: run, get, list, cancel, suspend, resume
* Events: publish
* Streaming: watch (lifecycle), logs, search-logs (full-text)

Key design points:

* `validate` compiles YAML locally via `wfe-yaml::load_workflow_from_str`
  with the full executor feature set enabled — instant feedback, no
  server round-trip, no auth required. Uses the same compile path as
  the server's `register` RPC so what passes validation is guaranteed
  to register.
* Lookup commands accept either UUID or human name; the server
  resolves the identifier for us. Display tables show both columns.
* `run --name <N>` lets users override the auto-generated
  `{def_id}-{N}` instance name when they want a sticky reference.
* Table and JSON output formats, shared bearer-token or cached-login
  auth path, direct token injection via `WFECTL_TOKEN`.
* 5 new unit tests for the validate command cover happy path, unknown
  step type rejection, and missing file handling.

Dockerfile.ci ships the prebuilt image used as the `image:` for
kubernetes CI steps: rust stable, cargo-nextest, cargo-llvm-cov,
sccache (configured via WFE_SCCACHE_* env), buildctl for in-cluster
buildkitd, kubectl, tea for Gitea releases, and git. Published to
`src.sunbeam.pt/studio/wfe-ci:latest`.
2026-04-07 19:09:26 +01:00
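The PKCE half of the login flow boils down to the verifier/challenge pair from RFC 7636; a minimal stdlib sketch (not the actual wfectl code):

```python
# Hypothetical sketch of OAuth2 PKCE (S256 method, RFC 7636); wfectl's real
# login flow layers this under the Ory Hydra authorization-code exchange.
import base64
import hashlib
import secrets

def pkce_pair() -> tuple[str, str]:
    # code_verifier: 43-128 chars of URL-safe randomness
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # code_challenge: BASE64URL(SHA256(verifier)) with padding stripped
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = pkce_pair()
assert 43 <= len(verifier) <= 128 and "=" not in challenge
```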
34209470c3 feat(wfe-server): full feature set, debian base, name resolution in gRPC
Proto changes:

* Add `name` to `WorkflowInstance`, `WorkflowSearchResult`,
  `RegisteredDefinition`, and `DefinitionSummary` messages.
* Add optional `name` override to `StartWorkflowRequest` and echo the
  assigned name back in `StartWorkflowResponse`.
* Document that `GetWorkflowRequest.workflow_id` accepts UUID or
  human name.

gRPC handler changes:

* `start_workflow` honors the optional name override and reads the
  instance back to return the assigned name to clients.
* `get_workflow` flows through `WorkflowHost::get_workflow`, which
  already falls back from UUID to name lookup.
* `stream_logs`, `watch_lifecycle`, and `search_logs` resolve
  name-or-UUID up front so the LogStore/lifecycle bus (keyed by
  UUID) subscribe to the right instance.
* `register_workflow` propagates the definition's display name into
  `RegisteredDefinition.name`.

Crate build changes:

* Enable the full executor feature set on wfe-yaml —
  `rustlang,buildkit,containerd,kubernetes,deno` — so the shipped
  binary recognizes every step type users can write.
* Dockerfile switched from `rust:alpine` to `rust:1-bookworm` +
  `debian:bookworm-slim` runtime. `deno_core` bundles a v8 binary
  that only ships glibc; alpine/musl can't link it without building
  v8 from source.
2026-04-07 19:07:52 +01:00
d88af54db9 feat(wfe-yaml): optional display name on workflow spec + schema tests
Add an optional `name` field to `WorkflowSpec` so YAML authors can
declare a human-friendly display name alongside the existing slug
`id`. The compiler copies it through to `WorkflowDefinition.name`,
which surfaces in definitions listings, run tables, and JSON output.
Slug `id` remains the primary lookup key.

Also adds a small smoke test for the schema generators to catch
regressions in `generate_json_schema` / `generate_yaml_schema`.
2026-04-07 19:07:30 +01:00
be0b93e959 feat(wfe): auto-assign workflow names + ensure store + name-or-UUID lookups
Three related host.rs changes that together make the 1.9 name support
end-to-end functional.

1. `WorkflowHost::start()` now calls `persistence.ensure_store_exists()`.
   The method existed on the trait and was implemented by every
   provider but nothing ever invoked it, so the Postgres/SQLite schema
   was never auto-created on startup — deployments failed on first
   persist with `relation "wfc.workflows" does not exist`.

2. New `start_workflow_with_name` entry point accepting an optional
   caller-supplied name override. The normal `start_workflow` is now a
   thin wrapper that passes `None` (auto-assign). The default path
   calls `next_definition_sequence(definition_id)` and formats the
   result as `{definition_id}-{N}` before persisting. Sub-workflow
   children also get auto-assigned names via HostContextImpl.

3. `get_workflow`/`suspend_workflow`/`resume_workflow`/
   `terminate_workflow` now accept either a UUID or a human-friendly
   name. `get_workflow` tries the UUID index first, then falls back to
   name lookup. A new `resolve_workflow_id` helper returns the
   canonical UUID so the gRPC log/lifecycle streams (which are keyed
   by UUID internally) can translate before subscribing.
2026-04-07 19:01:02 +01:00
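The name-or-UUID fallback in item 3 can be sketched as follows (hypothetical; `by_id`/`by_name` stand in for the persistence lookups):

```python
# Hypothetical sketch of resolve_workflow_id: try the UUID index first,
# fall back to name lookup, always return the canonical UUID.
import uuid

def resolve_workflow_id(ident: str, by_id: dict, by_name: dict) -> str:
    try:
        key = str(uuid.UUID(ident))  # parses? try the UUID index first
        if key in by_id:
            return key
    except ValueError:
        pass  # not a UUID; fall through to name lookup
    return by_name[ident]  # KeyError for unknown names

wid = str(uuid.uuid4())
by_id, by_name = {wid: {"name": "ci-42"}}, {"ci-42": wid}
assert resolve_workflow_id(wid, by_id, by_name) == wid
assert resolve_workflow_id("ci-42", by_id, by_name) == wid
```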
9af1a0d276 feat(persistence): name column, name lookup, definition sequence counter
Land the `name` field and `next_definition_sequence` counter in the
two real persistence backends. Both providers:

* Add `name TEXT NOT NULL UNIQUE` to the `workflows` table.
* Add a `definition_sequences` table (`definition_id, next_num`) with
  an atomic UPSERT + RETURNING to give the host a race-free monotonic
  counter for `{def_id}-{N}` name generation.
* INSERT/UPDATE queries now include `name`; SELECT row parsers hydrate
  it back onto `WorkflowInstance`.
* New `get_workflow_instance_by_name` method for name-based lookups
  used by grpc handlers.

Postgres includes a DO-block migration that back-fills `name` from
`id` on pre-existing deployments so the NOT NULL + UNIQUE invariant
holds retroactively; callers can overwrite with a real name on the
next persist.
2026-04-07 18:58:25 +01:00
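The counter table can be sketched with sqlite3 (hypothetical; the real wfe-sqlite/wfe-postgres SQL may differ, and the Postgres path uses UPSERT + RETURNING, but the shape is the same):

```python
# Hypothetical sketch of the definition_sequences counter: an UPSERT that
# either seeds the row at 1 or increments it, read back in one transaction.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE definition_sequences ("
    "definition_id TEXT PRIMARY KEY, next_num INTEGER NOT NULL)"
)

def next_definition_sequence(definition_id: str) -> int:
    with conn:  # one transaction keeps the read-modify-write atomic
        conn.execute(
            "INSERT INTO definition_sequences (definition_id, next_num) VALUES (?, 1) "
            "ON CONFLICT(definition_id) DO UPDATE SET next_num = next_num + 1",
            (definition_id,),
        )
        (n,) = conn.execute(
            "SELECT next_num FROM definition_sequences WHERE definition_id = ?",
            (definition_id,),
        ).fetchone()
    return n

# the Nth run of `ci` becomes ci-N; here the first three:
assert [f"ci-{next_definition_sequence('ci')}" for _ in range(3)] == ["ci-1", "ci-2", "ci-3"]
```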
d9b9c5651e feat(wfe-core): human-friendly workflow names
Add a `name` field to both `WorkflowDefinition` (optional display name
declared in YAML, e.g. "Continuous Integration") and `WorkflowInstance`
(required, unique alongside the UUID primary key). Instance names are
auto-assigned as `{definition_id}-{N}` via a per-definition monotonic
counter so the 42nd run of `ci` becomes `ci-42`.

Persistence trait gains two methods:

* `get_workflow_instance_by_name` — name-based lookup for Get/Cancel/
  Suspend/Resume/Watch/Logs RPCs so callers can address instances
  interchangeably as either UUID or human name.
* `next_definition_sequence` — atomic per-definition counter used by
  the host at start time to allocate the next N.

This commit wires the in-memory test provider and touches the deno
bridge test helper; the real postgres/sqlite impls follow in the next
commit. UUIDs remain the primary key throughout — names are a second
unique index, never a replacement.
2026-04-07 18:58:12 +01:00
883471181d fix(wfe-kubernetes): run scripts under /bin/bash for pipefail support
Kubernetes step jobs with a `run:` block were invoked via
`/bin/sh -c <script>`. On debian-family base images that resolves to
dash, which rejects `set -o pipefail` ("Illegal option") and other
bashisms (arrays, process substitution, `{1..10}`). The first line of
nearly every real CI script relies on `set -euo pipefail`, so the
steps were failing with exit code 2 before running a single command.

Switch to `/bin/bash -c` so `run:` scripts can rely on the bash
feature set. Containers that lack bash should use the explicit
`command:` form instead.
2026-04-07 18:55:10 +01:00
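A quick demo of why the shell choice matters (assumes a `bash` binary on PATH; under dash the `set -o pipefail` line itself is a hard error, per the message above):

```python
# Hypothetical demo: bash surfaces a mid-pipeline failure under pipefail,
# while the default pipeline status is the *last* command's (true -> 0).
import subprocess

with_pf = subprocess.run(["bash", "-c", "set -o pipefail\nfalse | true"])
without_pf = subprocess.run(["bash", "-c", "false | true"])
assert with_pf.returncode != 0 and without_pf.returncode == 0
```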
02a574b24e style: apply cargo fmt workspace-wide
Pure formatting pass from `cargo fmt --all`. No logic changes. Separating
this out so the 1.9 release feature commits that follow show only their
intentional edits.
2026-04-07 18:44:21 +01:00
3915bcc1ec fix(wfe-core): sub-workflow inherits parent workflow data
SubWorkflowStep was hard-coding `inputs: serde_json::Value::Null` from
the YAML compiler, so every `type: workflow` step kicked off a child
instance with an empty data object. Scripts in child workflows then
saw empty `$REPO_URL`, `$COMMIT_SHA`, etc. and failed immediately.

Now, when no explicit inputs are set, the child inherits the parent
workflow's data (provided it is an object). Scripts in child workflows can
reference the same top-level inputs the parent was started with, without
each `type: workflow` step needing to re-declare them.
2026-04-07 18:38:41 +01:00
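The inheritance rule can be sketched as (hypothetical; `child_inputs` is an illustrative name, not the wfe-core function):

```python
# Hypothetical sketch of the fix: explicit inputs always win; otherwise the
# child starts from the parent's data, but only when that data is an object.
def child_inputs(explicit_inputs, parent_data):
    if explicit_inputs is not None:
        return explicit_inputs            # explicit inputs always win
    if isinstance(parent_data, dict):
        return dict(parent_data)          # inherit the parent's data object
    return {}                             # scalars/arrays/null: start empty

parent = {"REPO_URL": "https://src.sunbeam.pt/studio/wfe", "COMMIT_SHA": "abc123"}
assert child_inputs(None, parent)["REPO_URL"] == parent["REPO_URL"]
assert child_inputs({"only": "this"}, parent) == {"only": "this"}
```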
da26f142ee chore: ignore local dev sqlite + generated schema artifacts
Local dev runs with the SQLite backend leave `wfe.db{,-shm,-wal}` files
in the repo root, and `workflows.schema.yaml` is a generated artifact
we prefer to fetch from the running server's `/schema/workflow.yaml`
endpoint rather than checking in.
2026-04-07 18:37:30 +01:00
1b873d93f3 feat(wfe-server): gRPC reflection, auto-generated schema endpoints, Dockerfile
- tonic-reflection for gRPC service discovery
- Schema endpoints:
  - /schema/workflow.json (JSON Schema from schemars derives)
  - /schema/workflow.yaml (same schema in YAML)
  - /schema/workflow.proto (raw proto file)
- Multi-stage alpine Dockerfile with all executor features
- Comprehensive configuration reference (wfe-server/README.md)
- Release script (scripts/release.sh)
- Bumped to 1.8.1
2026-04-06 23:47:42 +01:00
6f4700ef89 feat(wfe-server): Dockerfile and configuration reference
Multi-stage alpine build targeting sunbeam-remote buildx builder.
Comprehensive README documenting all config options, env vars,
auth methods (static tokens, OIDC/JWT, webhook HMAC), and backends.
2026-04-06 21:01:28 +01:00
161 changed files with 8554 additions and 2158 deletions

.dockerignore

@@ -0,0 +1,10 @@
target/
.git/
*.md
!README.md
.envrc
TODO.md
.cargo/
.github/
test/
workflows.yaml

.gitignore

@@ -4,3 +4,9 @@ Cargo.lock
*.swo
.DS_Store
.env*
# Local dev SQLite database + WAL companions
/wfe.db
/wfe.db-shm
/wfe.db-wal
# Auto-generated schema artifact (server endpoint is the source of truth)
/workflows.schema.yaml

CHANGELOG.md

@@ -2,6 +2,75 @@
All notable changes to this project will be documented in this file.
## [1.9.0] - 2026-04-07
### Added
- **wfectl**: New command-line client for wfe-server with 17 subcommands
  (login, logout, whoami, register, validate, definitions, run, get, list,
  cancel, suspend, resume, publish, watch, logs, search-logs). Supports
  OAuth2 PKCE login flow via Ory Hydra, direct bearer-token auth, and
  configurable output formats (table/JSON).
- **wfectl validate**: Local YAML validation command that compiles workflow
  files in-process via `wfe-yaml` with the full executor feature set
  (rustlang, buildkit, containerd, kubernetes, deno). No server round-trip
  or auth required — instant feedback before push.
- **Human-friendly workflow names**: `WorkflowInstance` now has a `name`
  field (unique alongside the UUID primary key). The host auto-assigns
  `{definition_id}-{N}` using a per-definition monotonic counter, with
  optional caller override via `start_workflow_with_name` /
  `StartWorkflowRequest.name`. All gRPC read/mutate APIs accept either the
  UUID or the human name interchangeably. `WorkflowDefinition` now has an
  optional display `name` declared in YAML (e.g. `name: "Continuous
  Integration"`) that surfaces in listings.
- **wfe-server**: Full executor feature set enabled in the shipped binary —
  kubernetes, deno, buildkit, containerd, rustlang step types all compiled
  in.
- **wfe-server**: Dockerfile switched from `rust:alpine` to
  `rust:1-bookworm` + `debian:bookworm-slim` runtime because `deno_core`'s
  bundled v8 only ships glibc binaries.
- **wfe-ci**: New `Dockerfile.ci` builder image with rust stable,
  cargo-nextest, cargo-llvm-cov, sccache, buildctl, kubectl, tea, git.
  Used as the base image for kubernetes-executed CI steps.
- **wfe-kubernetes**: `run:` scripts now execute under `/bin/bash -c`
  instead of `/bin/sh -c` so workflows can rely on `set -o pipefail`,
  process substitution, arrays, and other bashisms dash doesn't support.
### Fixed
- **wfe**: `WorkflowHost::start()` now calls
  `persistence.ensure_store_exists()`, which was previously defined but
  never invoked — the Postgres/SQLite schema was never auto-created on
  startup, causing `relation "wfc.workflows" does not exist` errors on
  first run.
- **wfe-core**: `SubWorkflowStep` now inherits the parent workflow's data
  when no explicit inputs are set, so child workflows see the same
  top-level fields (e.g. `$REPO_URL`, `$COMMIT_SHA`) without every
  `type: workflow` step having to re-declare them.
- **workflows.yaml**: Restructured all step references from
  `<<: *ci_step` / `<<: *ci_long` (which relied on YAML 1.1 shallow merge
  overwriting the `config:` block) to inner-config merges of the form
  `config: {<<: *ci_config, ...}`. Secret env vars moved into the shared
  `ci_env` anchor so individual steps don't fight the shallow merge.
## [1.8.1] - 2026-04-06
### Added
- **wfe-server**: gRPC reflection support via `tonic-reflection`
- **wfe-server**: Schema endpoints: `/schema/workflow.json` (JSON Schema), `/schema/workflow.yaml` (YAML Schema), `/schema/workflow.proto` (raw proto)
- **wfe-yaml**: Auto-generated JSON Schema from `schemars` derives on all YAML types
- **wfe-server**: Dockerfile for multi-stage alpine build with all executor features
- **wfe-server**: Comprehensive configuration reference (README.md)
### Fixed
- **wfe-yaml**: Added missing `license`, `repository`, `homepage` fields to Cargo.toml
- **wfe-buildkit-protos**: Removed vendored Go repos (166MB -> 356K), kept only .proto files
- **wfe-containerd-protos**: Removed vendored Go repos (53MB -> 216K), kept only .proto files
- Filesystem loop warnings from circular symlinks in vendored Go modules eliminated
- Pinned `icu_calendar <2.2` to work around `temporal_rs`/`deno_core` incompatibility
## [1.8.0] - 2026-04-06
### Added

Cargo.toml

@@ -1,9 +1,9 @@
[workspace]
members = ["wfe-core", "wfe-sqlite", "wfe-postgres", "wfe-opensearch", "wfe-valkey", "wfe", "wfe-yaml", "wfe-buildkit", "wfe-containerd", "wfe-containerd-protos", "wfe-buildkit-protos", "wfe-rustlang", "wfe-server-protos", "wfe-server", "wfe-kubernetes", "wfe-deno"]
members = ["wfe-core", "wfe-sqlite", "wfe-postgres", "wfe-opensearch", "wfe-valkey", "wfe", "wfe-yaml", "wfe-buildkit", "wfe-containerd", "wfe-containerd-protos", "wfe-buildkit-protos", "wfe-rustlang", "wfe-server-protos", "wfe-server", "wfe-kubernetes", "wfe-deno", "wfectl"]
resolver = "2"
[workspace.package]
version = "1.8.0"
version = "1.9.0"
edition = "2024"
license = "MIT"
repository = "https://src.sunbeam.pt/studio/wfe"
@@ -38,16 +38,17 @@ redis = { version = "0.27", features = ["tokio-comp", "connection-manager"] }
opensearch = "2"
# Internal crates
wfe-core = { version = "1.8.0", path = "wfe-core", registry = "sunbeam" }
wfe-sqlite = { version = "1.8.0", path = "wfe-sqlite", registry = "sunbeam" }
wfe-postgres = { version = "1.8.0", path = "wfe-postgres", registry = "sunbeam" }
wfe-opensearch = { version = "1.8.0", path = "wfe-opensearch", registry = "sunbeam" }
wfe-valkey = { version = "1.8.0", path = "wfe-valkey", registry = "sunbeam" }
wfe-yaml = { version = "1.8.0", path = "wfe-yaml", registry = "sunbeam" }
wfe-buildkit = { version = "1.8.0", path = "wfe-buildkit", registry = "sunbeam" }
wfe-containerd = { version = "1.8.0", path = "wfe-containerd", registry = "sunbeam" }
wfe-rustlang = { version = "1.8.0", path = "wfe-rustlang", registry = "sunbeam" }
wfe-kubernetes = { version = "1.8.0", path = "wfe-kubernetes", registry = "sunbeam" }
wfe-core = { version = "1.9.0", path = "wfe-core", registry = "sunbeam" }
wfe-sqlite = { version = "1.9.0", path = "wfe-sqlite", registry = "sunbeam" }
wfe-postgres = { version = "1.9.0", path = "wfe-postgres", registry = "sunbeam" }
wfe-opensearch = { version = "1.9.0", path = "wfe-opensearch", registry = "sunbeam" }
wfe-valkey = { version = "1.9.0", path = "wfe-valkey", registry = "sunbeam" }
wfe-yaml = { version = "1.9.0", path = "wfe-yaml", registry = "sunbeam" }
wfe-buildkit = { version = "1.9.0", path = "wfe-buildkit", registry = "sunbeam" }
wfe-containerd = { version = "1.9.0", path = "wfe-containerd", registry = "sunbeam" }
wfe-rustlang = { version = "1.9.0", path = "wfe-rustlang", registry = "sunbeam" }
wfe-kubernetes = { version = "1.9.0", path = "wfe-kubernetes", registry = "sunbeam" }
wfe-server-protos = { version = "1.9.0", path = "wfe-server-protos", registry = "sunbeam" }
# YAML
serde_yaml = "0.9"

Dockerfile

@@ -0,0 +1,39 @@
# Stage 1: Build
#
# Using debian-slim (glibc) rather than alpine because deno_core's bundled v8
# only ships glibc binaries — building v8 under musl from source is impractical
# and we need the full feature set (rustlang, buildkit, containerd, kubernetes,
# deno) compiled into wfe-server.
FROM rust:1-bookworm AS builder
RUN apt-get update && apt-get install -y --no-install-recommends \
protobuf-compiler libprotobuf-dev libssl-dev pkg-config ca-certificates \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
COPY . .
# Configure the sunbeam cargo registry (workspace deps reference it)
RUN mkdir -p .cargo && printf '[registries.sunbeam]\nindex = "sparse+https://src.sunbeam.pt/api/packages/studio/cargo/"\n' > .cargo/config.toml
RUN cargo build --release --bin wfe-server \
-p wfe-server \
--features "wfe-yaml/rustlang,wfe-yaml/buildkit,wfe-yaml/containerd,wfe-yaml/kubernetes,wfe-yaml/deno" \
&& strip target/release/wfe-server
# Stage 2: Runtime
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates tini libssl3 \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /build/target/release/wfe-server /usr/local/bin/wfe-server
RUN useradd -u 1000 -m wfe
USER wfe
EXPOSE 50051 8080
ENTRYPOINT ["tini", "--"]
CMD ["wfe-server"]

Dockerfile.ci

@@ -0,0 +1,55 @@
# wfe-ci: Prebuilt image for running wfe CI workflows in Kubernetes.
#
# Contains:
# - Rust stable toolchain
# - cargo-nextest, cargo-llvm-cov
# - sccache (configured via env vars from Vault)
# - buildkit client (buildctl) for in-cluster buildkitd
# - tea CLI for Gitea release management
# - git, curl, kubectl
#
# Usage in workflows: type: kubernetes, image: src.sunbeam.pt/studio/wfe-ci:latest
FROM rust:bookworm
# System packages
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
curl \
git \
jq \
libssl-dev \
pkg-config \
protobuf-compiler \
unzip \
xz-utils \
&& rm -rf /var/lib/apt/lists/*
# Cargo tools
RUN cargo install --locked cargo-nextest cargo-llvm-cov sccache && \
rm -rf /usr/local/cargo/registry
# Buildkit client (buildctl)
ARG BUILDKIT_VERSION=v0.28.0
RUN curl -fsSL "https://github.com/moby/buildkit/releases/download/${BUILDKIT_VERSION}/buildkit-${BUILDKIT_VERSION}.linux-amd64.tar.gz" \
| tar -xz -C /usr/local --strip-components=1 bin/buildctl
# kubectl
RUN curl -fsSL "https://dl.k8s.io/release/$(curl -fsSL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" \
-o /usr/local/bin/kubectl && chmod +x /usr/local/bin/kubectl
# tea CLI for Gitea
ARG TEA_VERSION=0.11.0
RUN curl -fsSL "https://gitea.com/gitea/tea/releases/download/v${TEA_VERSION}/tea-${TEA_VERSION}-linux-amd64" \
-o /usr/local/bin/tea && chmod +x /usr/local/bin/tea
# llvm tools (needed by cargo-llvm-cov)
RUN rustup component add llvm-tools-preview
# Sccache wrapper config — expects SCCACHE_S3_ENDPOINT, SCCACHE_BUCKET, etc. via env.
ENV RUSTC_WRAPPER=/usr/local/cargo/bin/sccache \
CARGO_INCREMENTAL=0
WORKDIR /workspace
CMD ["bash"]

scripts/release.sh

@@ -0,0 +1,53 @@
#!/usr/bin/env bash
set -euo pipefail
VERSION="${1:?Usage: scripts/release.sh <version> [message]}"
MESSAGE="${2:-v${VERSION}}"
echo "=== Releasing v${VERSION} ==="
# Tag
git tag -a "v${VERSION}" -m "${MESSAGE}"
# Publish leaf crates
for crate in wfe-core wfe-containerd-protos wfe-buildkit-protos wfe-server-protos; do
echo "--- Publishing ${crate} ---"
cargo publish -p "${crate}" --registry sunbeam
done
# Middle layer
for crate in wfe-sqlite wfe-postgres wfe-opensearch wfe-valkey wfe-buildkit wfe-containerd wfe-rustlang; do
echo "--- Publishing ${crate} ---"
cargo publish -p "${crate}" --registry sunbeam
done
# Top layer (needs index to catch up)
sleep 10
for crate in wfe wfe-yaml; do
echo "--- Publishing ${crate} ---"
cargo publish -p "${crate}" --registry sunbeam
done
# Final layer
sleep 10
for crate in wfe-server wfe-deno wfe-kubernetes; do
echo "--- Publishing ${crate} ---"
cargo publish -p "${crate}" --registry sunbeam
done
# Push git
git push origin mainline
git push origin "v${VERSION}"
# Create Gitea release from changelog
release_body=$(awk -v ver="${VERSION}" '/^## \[/{if(found)exit;if(index($0,"["ver"]"))found=1;next}found{print}' CHANGELOG.md)
tea release create --tag "v${VERSION}" --title "v${VERSION}" --note "${release_body}" || echo "(release already exists)"
# Build + push Docker image
echo "--- Building Docker image ---"
docker buildx build --builder sunbeam-remote --platform linux/amd64 \
-t "src.sunbeam.pt/studio/wfe:${VERSION}" \
-t "src.sunbeam.pt/studio/wfe:latest" \
--push .
echo "=== v${VERSION} released ==="
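The awk one-liner above pulls the notes for the release out of CHANGELOG.md; the same logic rendered in Python (a hypothetical equivalent, not part of the repo):

```python
# Hypothetical Python rendering of the awk extraction in scripts/release.sh:
# print the lines between `## [VERSION]` and the next `## [` heading.
def changelog_section(text: str, version: str) -> str:
    out, found = [], False
    for line in text.splitlines():
        if line.startswith("## ["):
            if found:
                break                      # next release heading: stop
            found = f"[{version}]" in line  # start after our heading
            continue                        # headings themselves are skipped
        if found:
            out.append(line)
    return "\n".join(out)

sample = "# Changelog\n## [1.9.0] - 2026-04-07\n### Added\n- wfectl\n## [1.8.1]\n### Fixed\n- stuff\n"
assert changelog_section(sample, "1.9.0") == "### Added\n- wfectl"
```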

wfe-buildkit/Cargo.toml

@@ -16,7 +16,7 @@ async-trait = { workspace = true }
tracing = { workspace = true }
thiserror = { workspace = true }
regex = { workspace = true }
wfe-buildkit-protos = { version = "1.8.0", path = "../wfe-buildkit-protos", registry = "sunbeam" }
wfe-buildkit-protos = { version = "1.9.0", path = "../wfe-buildkit-protos", registry = "sunbeam" }
tonic = "0.14"
tower = { version = "0.4", features = ["util"] }
hyper-util = { version = "0.1", features = ["tokio"] }

wfe-buildkit/src/lib.rs

@@ -2,4 +2,4 @@ pub mod config;
pub mod step;
pub use config::{BuildkitConfig, RegistryAuth, TlsConfig};
pub use step::{build_output_data, parse_digest, BuildkitStep};
pub use step::{BuildkitStep, build_output_data, parse_digest};

wfe-buildkit/src/step.rs

@@ -9,9 +9,9 @@ use wfe_buildkit_protos::moby::buildkit::v1::control_client::ControlClient;
use wfe_buildkit_protos::moby::buildkit::v1::{
CacheOptions, CacheOptionsEntry, Exporter, SolveRequest, StatusRequest,
};
use wfe_core::WfeError;
use wfe_core::models::ExecutionResult;
use wfe_core::traits::step::{StepBody, StepExecutionContext};
use wfe_core::WfeError;
use crate::config::BuildkitConfig;
@@ -45,10 +45,7 @@ impl BuildkitStep {
tracing::info!(addr = %addr, "connecting to BuildKit daemon");
let channel = if addr.starts_with("unix://") {
let socket_path = addr
.strip_prefix("unix://")
.unwrap()
.to_string();
let socket_path = addr.strip_prefix("unix://").unwrap().to_string();
// Verify the socket exists before attempting connection.
if !Path::new(&socket_path).exists() {
@@ -60,9 +57,7 @@ impl BuildkitStep {
// tonic requires a dummy URI for Unix sockets; the actual path
// is provided via the connector.
Endpoint::try_from("http://[::]:50051")
.map_err(|e| {
WfeError::StepExecution(format!("failed to create endpoint: {e}"))
})?
.map_err(|e| WfeError::StepExecution(format!("failed to create endpoint: {e}")))?
.connect_with_connector(tower::service_fn(move |_: Uri| {
let path = socket_path.clone();
async move {
@@ -231,10 +226,7 @@ impl BuildkitStep {
let context_name = "context";
let dockerfile_name = "dockerfile";
frontend_attrs.insert(
"context".to_string(),
format!("local://{context_name}"),
);
frontend_attrs.insert("context".to_string(), format!("local://{context_name}"));
frontend_attrs.insert(
format!("local-sessionid:{context_name}"),
session_id.clone(),
@@ -276,20 +268,18 @@ impl BuildkitStep {
// The x-docker-expose-session-uuid header tells buildkitd which
// session owns the local sources. The x-docker-expose-session-grpc-method
// header lists the gRPC methods the session implements.
if let Ok(key) =
"x-docker-expose-session-uuid"
.parse::<tonic::metadata::MetadataKey<tonic::metadata::Ascii>>()
&& let Ok(val) = session_id
.parse::<tonic::metadata::MetadataValue<tonic::metadata::Ascii>>()
if let Ok(key) = "x-docker-expose-session-uuid"
.parse::<tonic::metadata::MetadataKey<tonic::metadata::Ascii>>()
&& let Ok(val) =
session_id.parse::<tonic::metadata::MetadataValue<tonic::metadata::Ascii>>()
{
metadata.insert(key, val);
}
// Advertise the filesync method so the daemon knows it can request
// local file content from our session.
if let Ok(key) =
"x-docker-expose-session-grpc-method"
.parse::<tonic::metadata::MetadataKey<tonic::metadata::Ascii>>()
if let Ok(key) = "x-docker-expose-session-grpc-method"
.parse::<tonic::metadata::MetadataKey<tonic::metadata::Ascii>>()
{
if let Ok(val) = "/moby.filesync.v1.FileSync/DiffCopy"
.parse::<tonic::metadata::MetadataValue<tonic::metadata::Ascii>>()
@@ -598,7 +588,8 @@ mod tests {
#[test]
fn parse_digest_with_digest_prefix() {
let output = "digest: sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\n";
let output =
"digest: sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\n";
let digest = parse_digest(output);
assert_eq!(
digest,
@@ -630,8 +621,7 @@ mod tests {
#[test]
fn parse_digest_wrong_prefix() {
let output =
"sha256:abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789";
let output = "sha256:abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789";
assert_eq!(parse_digest(output), None);
}
@@ -651,7 +641,10 @@ mod tests {
"#;
assert_eq!(
parse_digest(output),
Some("sha256:aabbccdd0011223344556677aabbccdd0011223344556677aabbccdd00112233".to_string())
Some(
"sha256:aabbccdd0011223344556677aabbccdd0011223344556677aabbccdd00112233"
.to_string()
)
);
}
@@ -659,9 +652,7 @@ mod tests {
fn parse_digest_first_match_wins() {
let hash1 = "a".repeat(64);
let hash2 = "b".repeat(64);
let output = format!(
"exporting manifest sha256:{hash1}\ndigest: sha256:{hash2}"
);
let output = format!("exporting manifest sha256:{hash1}\ndigest: sha256:{hash2}");
let digest = parse_digest(&output).unwrap();
assert_eq!(digest, format!("sha256:{hash1}"));
}
@@ -806,10 +797,7 @@ mod tests {
exporters[0].attrs.get("name"),
Some(&"myapp:latest,myapp:v1.0".to_string())
);
assert_eq!(
exporters[0].attrs.get("push"),
Some(&"true".to_string())
);
assert_eq!(exporters[0].attrs.get("push"), Some(&"true".to_string()));
}
#[test]

wfe-buildkit/tests/ (integration tests; exact filename not shown)

@@ -9,17 +9,16 @@
use std::collections::HashMap;
use std::path::Path;
use wfe_buildkit::config::{BuildkitConfig, TlsConfig};
use wfe_buildkit::BuildkitStep;
use wfe_buildkit::config::{BuildkitConfig, TlsConfig};
use wfe_core::models::{ExecutionPointer, WorkflowInstance, WorkflowStep};
use wfe_core::traits::step::{StepBody, StepExecutionContext};
/// Get the BuildKit daemon address from the environment or use the default.
fn buildkit_addr() -> String {
std::env::var("WFE_BUILDKIT_ADDR").unwrap_or_else(|_| {
"unix:///Users/sienna/.lima/wfe-test/sock/buildkitd.sock".to_string()
})
std::env::var("WFE_BUILDKIT_ADDR")
.unwrap_or_else(|_| "unix:///Users/sienna/.lima/wfe-test/sock/buildkitd.sock".to_string())
}
/// Check whether the BuildKit daemon socket is reachable.
@@ -33,13 +32,7 @@ fn buildkitd_available() -> bool {
}
}
fn make_test_context(
step_name: &str,
) -> (
WorkflowStep,
ExecutionPointer,
WorkflowInstance,
) {
fn make_test_context(step_name: &str) -> (WorkflowStep, ExecutionPointer, WorkflowInstance) {
let mut step = WorkflowStep::new(0, "buildkit");
step.name = Some(step_name.to_string());
let pointer = ExecutionPointer::new(0);
@@ -50,21 +43,14 @@ fn make_test_context(
#[tokio::test]
async fn build_simple_dockerfile_via_grpc() {
if !buildkitd_available() {
eprintln!(
"SKIP: BuildKit daemon not available at {}",
buildkit_addr()
);
eprintln!("SKIP: BuildKit daemon not available at {}", buildkit_addr());
return;
}
// Create a temp directory with a trivial Dockerfile.
let tmp = tempfile::tempdir().unwrap();
let dockerfile = tmp.path().join("Dockerfile");
std::fs::write(
&dockerfile,
"FROM alpine:latest\nRUN echo built\n",
)
.unwrap();
std::fs::write(&dockerfile, "FROM alpine:latest\nRUN echo built\n").unwrap();
let config = BuildkitConfig {
dockerfile: "Dockerfile".to_string(),
@@ -94,7 +80,7 @@ async fn build_simple_dockerfile_via_grpc() {
workflow: &instance,
cancellation_token: cancel,
host_context: None,
log_sink: None,
log_sink: None,
};
let result = step.run(&ctx).await.expect("build should succeed");
@@ -135,10 +121,7 @@ async fn build_simple_dockerfile_via_grpc() {
#[tokio::test]
async fn build_with_build_args() {
if !buildkitd_available() {
eprintln!(
"SKIP: BuildKit daemon not available at {}",
buildkit_addr()
);
eprintln!("SKIP: BuildKit daemon not available at {}", buildkit_addr());
return;
}
@@ -181,10 +164,13 @@ async fn build_with_build_args() {
workflow: &instance,
cancellation_token: cancel,
host_context: None,
log_sink: None,
log_sink: None,
};
let result = step.run(&ctx).await.expect("build with args should succeed");
let result = step
.run(&ctx)
.await
.expect("build with args should succeed");
assert!(result.proceed);
let data = result.output_data.expect("should have output_data");
@@ -229,7 +215,7 @@ async fn connect_to_unavailable_daemon_returns_error() {
workflow: &instance,
cancellation_token: cancel,
host_context: None,
log_sink: None,
log_sink: None,
};
let err = step.run(&ctx).await;

wfe-containerd/Cargo.toml

@@ -9,7 +9,7 @@ description = "containerd container runner executor for WFE"
[dependencies]
wfe-core = { workspace = true }
wfe-containerd-protos = { version = "1.8.0", path = "../wfe-containerd-protos", registry = "sunbeam" }
wfe-containerd-protos = { version = "1.9.0", path = "../wfe-containerd-protos", registry = "sunbeam" }
tokio = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }

wfe-containerd (config tests; exact path not shown)

@@ -133,7 +133,11 @@ mod tests {
assert_eq!(deserialized.tls.ca, Some("/ca.pem".to_string()));
assert_eq!(deserialized.tls.cert, Some("/cert.pem".to_string()));
assert_eq!(deserialized.tls.key, Some("/key.pem".to_string()));
assert!(deserialized.registry_auth.contains_key("registry.example.com"));
assert!(
deserialized
.registry_auth
.contains_key("registry.example.com")
);
assert_eq!(deserialized.timeout_ms, Some(30000));
}

View File

@@ -8,21 +8,20 @@ use wfe_core::models::ExecutionResult;
use wfe_core::traits::step::{StepBody, StepExecutionContext};
use wfe_containerd_protos::containerd::services::containers::v1::{
containers_client::ContainersClient, Container, CreateContainerRequest,
DeleteContainerRequest, container::Runtime,
Container, CreateContainerRequest, DeleteContainerRequest, container::Runtime,
containers_client::ContainersClient,
};
use wfe_containerd_protos::containerd::services::content::v1::{
content_client::ContentClient, ReadContentRequest,
ReadContentRequest, content_client::ContentClient,
};
use wfe_containerd_protos::containerd::services::images::v1::{
images_client::ImagesClient, GetImageRequest,
GetImageRequest, images_client::ImagesClient,
};
use wfe_containerd_protos::containerd::services::snapshots::v1::{
snapshots_client::SnapshotsClient, MountsRequest, PrepareSnapshotRequest,
MountsRequest, PrepareSnapshotRequest, snapshots_client::SnapshotsClient,
};
use wfe_containerd_protos::containerd::services::tasks::v1::{
tasks_client::TasksClient, CreateTaskRequest, DeleteTaskRequest, StartRequest,
WaitRequest,
CreateTaskRequest, DeleteTaskRequest, StartRequest, WaitRequest, tasks_client::TasksClient,
};
use wfe_containerd_protos::containerd::services::version::v1::version_client::VersionClient;
@@ -49,10 +48,7 @@ impl ContainerdStep {
/// TCP/HTTP endpoints.
pub(crate) async fn connect(addr: &str) -> Result<Channel, WfeError> {
let channel = if addr.starts_with('/') || addr.starts_with("unix://") {
let socket_path = addr
.strip_prefix("unix://")
.unwrap_or(addr)
.to_string();
let socket_path = addr.strip_prefix("unix://").unwrap_or(addr).to_string();
if !Path::new(&socket_path).exists() {
return Err(WfeError::StepExecution(format!(
@@ -61,9 +57,7 @@ impl ContainerdStep {
}
Endpoint::try_from("http://[::]:50051")
.map_err(|e| {
WfeError::StepExecution(format!("failed to create endpoint: {e}"))
})?
.map_err(|e| WfeError::StepExecution(format!("failed to create endpoint: {e}")))?
.connect_with_connector(tower::service_fn(move |_: Uri| {
let path = socket_path.clone();
async move {
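The `connect` helper above accepts both bare socket paths and `unix://`-prefixed URIs. A standalone sketch of just that address normalization, using only `str::strip_prefix` (not the crate's API, and without the filesystem existence check or the tonic connector):

```rust
/// Normalize a containerd endpoint: "unix:///run/foo.sock" and
/// "/run/foo.sock" both name the same filesystem socket, while
/// TCP/HTTP endpoints pass through unchanged.
fn socket_path(addr: &str) -> &str {
    addr.strip_prefix("unix://").unwrap_or(addr)
}

fn main() {
    assert_eq!(
        socket_path("unix:///run/containerd/containerd.sock"),
        "/run/containerd/containerd.sock"
    );
    assert_eq!(
        socket_path("/run/containerd/containerd.sock"),
        "/run/containerd/containerd.sock"
    );
    // Non-unix endpoints are left untouched for the TCP/HTTP branch.
    assert_eq!(socket_path("http://127.0.0.1:2376"), "http://127.0.0.1:2376");
    println!("ok");
}
```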
@@ -112,20 +106,14 @@ impl ContainerdStep {
/// `ctr image pull` or `nerdctl pull`.
///
/// TODO: implement full image pull via TransferService or content ingest.
async fn ensure_image(
channel: &Channel,
image: &str,
namespace: &str,
) -> Result<(), WfeError> {
async fn ensure_image(channel: &Channel, image: &str, namespace: &str) -> Result<(), WfeError> {
let mut client = ImagesClient::new(channel.clone());
let mut req = tonic::Request::new(GetImageRequest {
name: image.to_string(),
});
req.metadata_mut().insert(
"containerd-namespace",
namespace.parse().unwrap(),
);
req.metadata_mut()
.insert("containerd-namespace", namespace.parse().unwrap());
match client.get(req).await {
Ok(_) => Ok(()),
@@ -151,20 +139,24 @@ impl ContainerdStep {
image: &str,
namespace: &str,
) -> Result<String, WfeError> {
use sha2::{Sha256, Digest};
use sha2::{Digest, Sha256};
// 1. Get the image record to find the manifest digest.
let mut images_client = ImagesClient::new(channel.clone());
let req = Self::with_namespace(
GetImageRequest { name: image.to_string() },
GetImageRequest {
name: image.to_string(),
},
namespace,
);
let image_resp = images_client.get(req).await.map_err(|e| {
WfeError::StepExecution(format!("failed to get image '{image}': {e}"))
})?;
let img = image_resp.into_inner().image.ok_or_else(|| {
WfeError::StepExecution(format!("image '{image}' has no record"))
})?;
let image_resp = images_client
.get(req)
.await
.map_err(|e| WfeError::StepExecution(format!("failed to get image '{image}': {e}")))?;
let img = image_resp
.into_inner()
.image
.ok_or_else(|| WfeError::StepExecution(format!("image '{image}' has no record")))?;
let target = img.target.ok_or_else(|| {
WfeError::StepExecution(format!("image '{image}' has no target descriptor"))
})?;
@@ -188,22 +180,26 @@ impl ContainerdStep {
let manifests = manifest_json["manifests"].as_array().ok_or_else(|| {
WfeError::StepExecution("image index has no manifests array".to_string())
})?;
let platform_manifest = manifests.iter().find(|m| {
m.get("platform")
.and_then(|p| p.get("architecture"))
.and_then(|a| a.as_str())
== Some(oci_arch)
}).ok_or_else(|| {
WfeError::StepExecution(format!(
"no manifest for architecture '{oci_arch}' in image index"
))
})?;
let platform_manifest = manifests
.iter()
.find(|m| {
m.get("platform")
.and_then(|p| p.get("architecture"))
.and_then(|a| a.as_str())
== Some(oci_arch)
})
.ok_or_else(|| {
WfeError::StepExecution(format!(
"no manifest for architecture '{oci_arch}' in image index"
))
})?;
let digest = platform_manifest["digest"].as_str().ok_or_else(|| {
WfeError::StepExecution("platform manifest has no digest".to_string())
})?;
let bytes = Self::read_content(channel, digest, namespace).await?;
serde_json::from_slice(&bytes)
.map_err(|e| WfeError::StepExecution(format!("failed to parse platform manifest: {e}")))?
serde_json::from_slice(&bytes).map_err(|e| {
WfeError::StepExecution(format!("failed to parse platform manifest: {e}"))
})?
} else {
manifest_json
};
@@ -211,9 +207,7 @@ impl ContainerdStep {
// 3. Get the config digest from the manifest.
let config_digest = manifest_json["config"]["digest"]
.as_str()
.ok_or_else(|| {
WfeError::StepExecution("manifest has no config.digest".to_string())
})?;
.ok_or_else(|| WfeError::StepExecution("manifest has no config.digest".to_string()))?;
// 4. Read the image config.
let config_bytes = Self::read_content(channel, config_digest, namespace).await?;
@@ -239,9 +233,9 @@ impl ContainerdStep {
.to_string();
for diff_id in &diff_ids[1..] {
let diff = diff_id.as_str().ok_or_else(|| {
WfeError::StepExecution("diff_id is not a string".to_string())
})?;
let diff = diff_id
.as_str()
.ok_or_else(|| WfeError::StepExecution("diff_id is not a string".to_string()))?;
let mut hasher = Sha256::new();
hasher.update(format!("{chain_id} {diff}"));
chain_id = format!("sha256:{:x}", hasher.finalize());
@@ -269,9 +263,11 @@ impl ContainerdStep {
namespace,
);
let mut stream = client.read(req).await.map_err(|e| {
WfeError::StepExecution(format!("failed to read content {digest}: {e}"))
})?.into_inner();
let mut stream = client
.read(req)
.await
.map_err(|e| WfeError::StepExecution(format!("failed to read content {digest}: {e}")))?
.into_inner();
let mut data = Vec::new();
while let Some(chunk) = stream.next().await {
@@ -288,10 +284,7 @@ impl ContainerdStep {
///
/// The spec is serialized as JSON and wrapped in a protobuf Any with
/// the containerd OCI spec type URL.
pub(crate) fn build_oci_spec(
&self,
merged_env: &HashMap<String, String>,
) -> prost_types::Any {
pub(crate) fn build_oci_spec(&self, merged_env: &HashMap<String, String>) -> prost_types::Any {
// Build the args array for the process.
let args: Vec<String> = if let Some(ref run) = self.config.run {
vec!["/bin/sh".to_string(), "-c".to_string(), run.clone()]
@@ -302,10 +295,7 @@ impl ContainerdStep {
};
// Build env in KEY=VALUE form.
let env: Vec<String> = merged_env
.iter()
.map(|(k, v)| format!("{k}={v}"))
.collect();
let env: Vec<String> = merged_env.iter().map(|(k, v)| format!("{k}={v}")).collect();
// Build mounts.
let mut mounts = vec![
@@ -360,10 +350,20 @@ impl ContainerdStep {
// capability set so tools like apt-get work. Non-root gets nothing.
let caps = if uid == 0 {
serde_json::json!([
"CAP_AUDIT_WRITE", "CAP_CHOWN", "CAP_DAC_OVERRIDE",
"CAP_FOWNER", "CAP_FSETID", "CAP_KILL", "CAP_MKNOD",
"CAP_NET_BIND_SERVICE", "CAP_NET_RAW", "CAP_SETFCAP",
"CAP_SETGID", "CAP_SETPCAP", "CAP_SETUID", "CAP_SYS_CHROOT",
"CAP_AUDIT_WRITE",
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_MKNOD",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT",
])
} else {
serde_json::json!([])
@@ -405,10 +405,9 @@ impl ContainerdStep {
/// Inject a `containerd-namespace` header into a tonic request.
pub(crate) fn with_namespace<T>(req: T, namespace: &str) -> tonic::Request<T> {
let mut request = tonic::Request::new(req);
request.metadata_mut().insert(
"containerd-namespace",
namespace.parse().unwrap(),
);
request
.metadata_mut()
.insert("containerd-namespace", namespace.parse().unwrap());
request
}
@@ -492,8 +491,7 @@ impl ContainerdStep {
match snapshots_client.mounts(mounts_req).await {
Ok(resp) => resp.into_inner().mounts,
Err(_) => {
let parent =
Self::resolve_image_chain_id(&channel, image, namespace).await?;
let parent = Self::resolve_image_chain_id(&channel, image, namespace).await?;
let prepare_req = Self::with_namespace(
PrepareSnapshotRequest {
snapshotter: DEFAULT_SNAPSHOTTER.to_string(),
@@ -531,9 +529,10 @@ impl ContainerdStep {
},
namespace,
);
tasks_client.create(create_task_req).await.map_err(|e| {
WfeError::StepExecution(format!("failed to create service task: {e}"))
})?;
tasks_client
.create(create_task_req)
.await
.map_err(|e| WfeError::StepExecution(format!("failed to create service task: {e}")))?;
let start_req = Self::with_namespace(
StartRequest {
@@ -542,9 +541,10 @@ impl ContainerdStep {
},
namespace,
);
tasks_client.start(start_req).await.map_err(|e| {
WfeError::StepExecution(format!("failed to start service task: {e}"))
})?;
tasks_client
.start(start_req)
.await
.map_err(|e| WfeError::StepExecution(format!("failed to start service task: {e}")))?;
tracing::info!(container_id = %container_id, image = %image, "service container started");
Ok(())
@@ -701,9 +701,10 @@ impl StepBody for ContainerdStep {
namespace,
);
containers_client.create(create_req).await.map_err(|e| {
WfeError::StepExecution(format!("failed to create container: {e}"))
})?;
containers_client
.create(create_req)
.await
.map_err(|e| WfeError::StepExecution(format!("failed to create container: {e}")))?;
// 6. Prepare snapshot with the image's layers as parent.
let mut snapshots_client = SnapshotsClient::new(channel.clone());
@@ -723,7 +724,8 @@ impl StepBody for ContainerdStep {
Err(_) => {
// Resolve the image's chain ID to use as snapshot parent.
let parent = if should_check {
Self::resolve_image_chain_id(&channel, &self.config.image, namespace).await?
Self::resolve_image_chain_id(&channel, &self.config.image, namespace)
.await?
} else {
String::new()
};
@@ -741,9 +743,7 @@ impl StepBody for ContainerdStep {
.prepare(prepare_req)
.await
.map_err(|e| {
WfeError::StepExecution(format!(
"failed to prepare snapshot: {e}"
))
WfeError::StepExecution(format!("failed to prepare snapshot: {e}"))
})?
.into_inner()
.mounts
@@ -758,9 +758,8 @@ impl StepBody for ContainerdStep {
.map(std::path::PathBuf::from)
.unwrap_or_else(|_| std::env::temp_dir());
let tmp_dir = io_base.join(format!("wfe-io-{container_id}"));
std::fs::create_dir_all(&tmp_dir).map_err(|e| {
WfeError::StepExecution(format!("failed to create IO temp dir: {e}"))
})?;
std::fs::create_dir_all(&tmp_dir)
.map_err(|e| WfeError::StepExecution(format!("failed to create IO temp dir: {e}")))?;
let stdout_path = tmp_dir.join("stdout");
let stderr_path = tmp_dir.join("stderr");
@@ -802,9 +801,10 @@ impl StepBody for ContainerdStep {
namespace,
);
tasks_client.create(create_task_req).await.map_err(|e| {
WfeError::StepExecution(format!("failed to create task: {e}"))
})?;
tasks_client
.create(create_task_req)
.await
.map_err(|e| WfeError::StepExecution(format!("failed to create task: {e}")))?;
// Start the task.
let start_req = Self::with_namespace(
@@ -815,9 +815,10 @@ impl StepBody for ContainerdStep {
namespace,
);
tasks_client.start(start_req).await.map_err(|e| {
WfeError::StepExecution(format!("failed to start task: {e}"))
})?;
tasks_client
.start(start_req)
.await
.map_err(|e| WfeError::StepExecution(format!("failed to start task: {e}")))?;
tracing::info!(container_id = %container_id, "task started");
@@ -836,12 +837,7 @@ impl StepBody for ContainerdStep {
Ok(result) => result,
Err(_) => {
// Attempt cleanup before returning timeout error.
let _ = Self::cleanup(
&channel,
&container_id,
namespace,
)
.await;
let _ = Self::cleanup(&channel, &container_id, namespace).await;
let _ = std::fs::remove_dir_all(&tmp_dir);
return Err(WfeError::StepExecution(format!(
"container execution timed out after {timeout_ms}ms"
@@ -887,8 +883,13 @@ impl StepBody for ContainerdStep {
// 13. Parse outputs and build result.
let parsed = Self::parse_outputs(&stdout_content);
let output_data =
Self::build_output_data(step_name, &stdout_content, &stderr_content, exit_code, &parsed);
let output_data = Self::build_output_data(
step_name,
&stdout_content,
&stderr_content,
exit_code,
&parsed,
);
Ok(ExecutionResult {
proceed: true,
@@ -927,9 +928,7 @@ impl ContainerdStep {
containers_client
.delete(del_container_req)
.await
.map_err(|e| {
WfeError::StepExecution(format!("failed to delete container: {e}"))
})?;
.map_err(|e| WfeError::StepExecution(format!("failed to delete container: {e}")))?;
Ok(())
}
@@ -1013,10 +1012,7 @@ mod tests {
let stdout = "##wfe[output url=https://example.com?a=1&b=2]\n";
let outputs = ContainerdStep::parse_outputs(stdout);
assert_eq!(outputs.len(), 1);
assert_eq!(
outputs.get("url").unwrap(),
"https://example.com?a=1&b=2"
);
assert_eq!(outputs.get("url").unwrap(), "https://example.com?a=1&b=2");
}
#[test]
@@ -1043,13 +1039,7 @@ mod tests {
#[test]
fn build_output_data_basic() {
let parsed = HashMap::from([("result".to_string(), "success".to_string())]);
let data = ContainerdStep::build_output_data(
"my_step",
"hello world\n",
"",
0,
&parsed,
);
let data = ContainerdStep::build_output_data("my_step", "hello world\n", "", 0, &parsed);
let obj = data.as_object().unwrap();
assert_eq!(obj.get("result").unwrap(), "success");
@@ -1060,13 +1050,7 @@ mod tests {
#[test]
fn build_output_data_no_parsed_outputs() {
let data = ContainerdStep::build_output_data(
"step1",
"out",
"err",
1,
&HashMap::new(),
);
let data = ContainerdStep::build_output_data("step1", "out", "err", 1, &HashMap::new());
let obj = data.as_object().unwrap();
assert_eq!(obj.len(), 3); // stdout, stderr, exit_code
@@ -1150,7 +1134,11 @@ mod tests {
fn build_oci_spec_with_command() {
let mut config = minimal_config();
config.run = None;
config.command = Some(vec!["echo".to_string(), "hello".to_string(), "world".to_string()]);
config.command = Some(vec![
"echo".to_string(),
"hello".to_string(),
"world".to_string(),
]);
let step = ContainerdStep::new(config);
let spec = step.build_oci_spec(&HashMap::new());
@@ -1227,10 +1215,8 @@ mod tests {
// 3 default + 2 user = 5
assert_eq!(mounts.len(), 5);
let bind_mounts: Vec<&serde_json::Value> = mounts
.iter()
.filter(|m| m["type"] == "bind")
.collect();
let bind_mounts: Vec<&serde_json::Value> =
mounts.iter().filter(|m| m["type"] == "bind").collect();
assert_eq!(bind_mounts.len(), 2);
let ro_mount = bind_mounts
@@ -1274,10 +1260,9 @@ mod tests {
#[tokio::test]
async fn connect_to_missing_unix_socket_with_scheme_returns_error() {
let err =
ContainerdStep::connect("unix:///tmp/nonexistent-wfe-containerd-test.sock")
.await
.unwrap_err();
let err = ContainerdStep::connect("unix:///tmp/nonexistent-wfe-containerd-test.sock")
.await
.unwrap_err();
let msg = format!("{err}");
assert!(
msg.contains("socket not found"),
@@ -1304,9 +1289,11 @@ mod tests {
let config = minimal_config();
let step = ContainerdStep::new(config);
assert_eq!(step.config.image, "alpine:3.18");
assert_eq!(step.config.containerd_addr, "/run/containerd/containerd.sock");
assert_eq!(
step.config.containerd_addr,
"/run/containerd/containerd.sock"
);
}
}
/// Integration tests that require a live containerd daemon.
@@ -1323,9 +1310,7 @@ mod e2e_tests {
)
});
let socket_path = addr
.strip_prefix("unix://")
.unwrap_or(addr.as_str());
let socket_path = addr.strip_prefix("unix://").unwrap_or(addr.as_str());
if Path::new(socket_path).exists() {
Some(addr)
@@ -1350,6 +1335,9 @@ mod e2e_tests {
assert!(!version.version.is_empty(), "version should not be empty");
assert!(!version.revision.is_empty(), "revision should not be empty");
eprintln!("containerd version={} revision={}", version.version, version.revision);
eprintln!(
"containerd version={} revision={}",
version.version, version.revision
);
}
}
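The unit tests in this file exercise a `##wfe[output key=value]` stdout marker (e.g. the `url=https://example.com?a=1&b=2` case). A minimal sketch of that parsing, inferred from those test inputs and outputs rather than from the crate's actual `parse_outputs` implementation; note `split_once('=')` splits only on the first `=`, so values may themselves contain `=`:

```rust
use std::collections::HashMap;

/// Collect "##wfe[output key=value]" markers from captured stdout.
fn parse_outputs(stdout: &str) -> HashMap<String, String> {
    let mut outputs = HashMap::new();
    for line in stdout.lines() {
        // A marker line is the prefix, a key=value payload, and a closing ']'.
        if let Some(inner) = line
            .strip_prefix("##wfe[output ")
            .and_then(|rest| rest.strip_suffix(']'))
        {
            if let Some((key, value)) = inner.split_once('=') {
                outputs.insert(key.to_string(), value.to_string());
            }
        }
    }
    outputs
}

fn main() {
    let out = parse_outputs("##wfe[output url=https://example.com?a=1&b=2]\nplain line\n");
    assert_eq!(out.len(), 1);
    assert_eq!(out.get("url").unwrap(), "https://example.com?a=1&b=2");
    println!("ok");
}
```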

View File

@@ -10,8 +10,8 @@
use std::collections::HashMap;
use std::path::Path;
use wfe_containerd::config::{ContainerdConfig, TlsConfig};
use wfe_containerd::ContainerdStep;
use wfe_containerd::config::{ContainerdConfig, TlsConfig};
use wfe_core::models::{ExecutionPointer, WorkflowInstance, WorkflowStep};
use wfe_core::traits::step::{StepBody, StepExecutionContext};
@@ -75,7 +75,7 @@ fn make_context<'a>(
workflow,
cancellation_token: tokio_util::sync::CancellationToken::new(),
host_context: None,
log_sink: None,
}
}
@@ -204,8 +204,7 @@ async fn run_container_with_volume_mount() {
return;
};
let shared_dir = std::env::var("WFE_IO_DIR")
.unwrap_or_else(|_| "/tmp/wfe-io".to_string());
let shared_dir = std::env::var("WFE_IO_DIR").unwrap_or_else(|_| "/tmp/wfe-io".to_string());
let vol_dir = format!("{shared_dir}/test-vol");
std::fs::create_dir_all(&vol_dir).unwrap();
@@ -249,8 +248,7 @@ async fn run_debian_with_volume_and_network() {
return;
};
let shared_dir = std::env::var("WFE_IO_DIR")
.unwrap_or_else(|_| "/tmp/wfe-io".to_string());
let shared_dir = std::env::var("WFE_IO_DIR").unwrap_or_else(|_| "/tmp/wfe-io".to_string());
let cargo_dir = format!("{shared_dir}/test-cargo");
let rustup_dir = format!("{shared_dir}/test-rustup");
std::fs::create_dir_all(&cargo_dir).unwrap();
@@ -263,8 +261,12 @@ async fn run_debian_with_volume_and_network() {
config.user = "0:0".to_string();
config.network = "host".to_string();
config.timeout_ms = Some(30_000);
config.env.insert("CARGO_HOME".to_string(), "/cargo".to_string());
config.env.insert("RUSTUP_HOME".to_string(), "/rustup".to_string());
config
.env
.insert("CARGO_HOME".to_string(), "/cargo".to_string());
config
.env
.insert("RUSTUP_HOME".to_string(), "/rustup".to_string());
config.volumes = vec![
wfe_containerd::VolumeMountConfig {
source: cargo_dir.clone(),

View File

@@ -67,7 +67,10 @@ impl<D: WorkflowData> StepBuilder<D> {
}
/// Chain an inline function step.
pub fn then_fn(mut self, f: impl Fn() -> ExecutionResult + Send + Sync + 'static) -> StepBuilder<D> {
pub fn then_fn(
mut self,
f: impl Fn() -> ExecutionResult + Send + Sync + 'static,
) -> StepBuilder<D> {
let next_id = self.builder.add_step(std::any::type_name::<InlineStep>());
self.builder.wire_outcome(self.step_id, next_id, None);
self.builder.last_step = Some(next_id);
@@ -77,7 +80,9 @@ impl<D: WorkflowData> StepBuilder<D> {
/// Insert a WaitFor step.
pub fn wait_for(mut self, event_name: &str, event_key: &str) -> StepBuilder<D> {
let next_id = self.builder.add_step(std::any::type_name::<primitives::wait_for::WaitForStep>());
let next_id = self
.builder
.add_step(std::any::type_name::<primitives::wait_for::WaitForStep>());
self.builder.wire_outcome(self.step_id, next_id, None);
self.builder.last_step = Some(next_id);
self.builder.steps[next_id].step_config = Some(serde_json::json!({
@@ -89,7 +94,9 @@ impl<D: WorkflowData> StepBuilder<D> {
/// Insert a Delay step.
pub fn delay(mut self, duration: std::time::Duration) -> StepBuilder<D> {
let next_id = self.builder.add_step(std::any::type_name::<primitives::delay::DelayStep>());
let next_id = self
.builder
.add_step(std::any::type_name::<primitives::delay::DelayStep>());
self.builder.wire_outcome(self.step_id, next_id, None);
self.builder.last_step = Some(next_id);
self.builder.steps[next_id].step_config = Some(serde_json::json!({
@@ -104,7 +111,9 @@ impl<D: WorkflowData> StepBuilder<D> {
mut self,
build_children: impl FnOnce(&mut WorkflowBuilder<D>),
) -> StepBuilder<D> {
let if_id = self.builder.add_step(std::any::type_name::<primitives::if_step::IfStep>());
let if_id = self
.builder
.add_step(std::any::type_name::<primitives::if_step::IfStep>());
self.builder.wire_outcome(self.step_id, if_id, None);
// Build children
@@ -126,7 +135,9 @@ impl<D: WorkflowData> StepBuilder<D> {
mut self,
build_children: impl FnOnce(&mut WorkflowBuilder<D>),
) -> StepBuilder<D> {
let while_id = self.builder.add_step(std::any::type_name::<primitives::while_step::WhileStep>());
let while_id = self
.builder
.add_step(std::any::type_name::<primitives::while_step::WhileStep>());
self.builder.wire_outcome(self.step_id, while_id, None);
let before_count = self.builder.steps.len();
@@ -146,7 +157,9 @@ impl<D: WorkflowData> StepBuilder<D> {
mut self,
build_children: impl FnOnce(&mut WorkflowBuilder<D>),
) -> StepBuilder<D> {
let fe_id = self.builder.add_step(std::any::type_name::<primitives::foreach_step::ForEachStep>());
let fe_id = self
.builder
.add_step(std::any::type_name::<primitives::foreach_step::ForEachStep>());
self.builder.wire_outcome(self.step_id, fe_id, None);
let before_count = self.builder.steps.len();
@@ -162,11 +175,10 @@ impl<D: WorkflowData> StepBuilder<D> {
}
/// Insert a Saga container step with child steps.
pub fn saga(
mut self,
build_children: impl FnOnce(&mut WorkflowBuilder<D>),
) -> StepBuilder<D> {
let saga_id = self.builder.add_step(std::any::type_name::<primitives::saga_container::SagaContainerStep>());
pub fn saga(mut self, build_children: impl FnOnce(&mut WorkflowBuilder<D>)) -> StepBuilder<D> {
let saga_id = self.builder.add_step(std::any::type_name::<
primitives::saga_container::SagaContainerStep,
>());
self.builder.steps[saga_id].saga = true;
self.builder.wire_outcome(self.step_id, saga_id, None);
@@ -187,7 +199,9 @@ impl<D: WorkflowData> StepBuilder<D> {
mut self,
build_branches: impl FnOnce(ParallelBuilder<D>) -> ParallelBuilder<D>,
) -> StepBuilder<D> {
let seq_id = self.builder.add_step(std::any::type_name::<primitives::sequence::SequenceStep>());
let seq_id = self
.builder
.add_step(std::any::type_name::<primitives::sequence::SequenceStep>());
self.builder.wire_outcome(self.step_id, seq_id, None);
let pb = ParallelBuilder {
@@ -213,10 +227,7 @@ impl<D: WorkflowData> StepBuilder<D> {
impl<D: WorkflowData> ParallelBuilder<D> {
/// Add a parallel branch.
pub fn branch(
mut self,
build_branch: impl FnOnce(&mut WorkflowBuilder<D>),
) -> Self {
pub fn branch(mut self, build_branch: impl FnOnce(&mut WorkflowBuilder<D>)) -> Self {
let before_count = self.builder.steps.len();
build_branch(&mut self.builder);
let after_count = self.builder.steps.len();

View File

@@ -1,9 +1,7 @@
use std::collections::HashMap;
use std::marker::PhantomData;
use crate::models::{
ExecutionResult, StepOutcome, WorkflowDefinition, WorkflowStep,
};
use crate::models::{ExecutionResult, StepOutcome, WorkflowDefinition, WorkflowStep};
use crate::traits::step::{StepBody, WorkflowData};
use super::inline_step::InlineStep;
@@ -77,7 +75,12 @@ impl<D: WorkflowData> WorkflowBuilder<D> {
}
/// Wire an outcome from `from_step` to `to_step`.
pub fn wire_outcome(&mut self, from_step: usize, to_step: usize, value: Option<serde_json::Value>) {
pub fn wire_outcome(
&mut self,
from_step: usize,
to_step: usize,
value: Option<serde_json::Value>,
) {
if let Some(step) = self.steps.get_mut(from_step) {
step.outcomes.push(StepOutcome {
next_step: to_step,

View File

@@ -1,5 +1,5 @@
use crate::models::condition::{ComparisonOp, FieldComparison, StepCondition};
use crate::WfeError;
use crate::models::condition::{ComparisonOp, FieldComparison, StepCondition};
/// Evaluate a step condition against workflow data.
///
@@ -29,10 +29,7 @@ impl From<WfeError> for EvalError {
}
}
fn evaluate_inner(
condition: &StepCondition,
data: &serde_json::Value,
) -> Result<bool, EvalError> {
fn evaluate_inner(condition: &StepCondition, data: &serde_json::Value) -> Result<bool, EvalError> {
match condition {
StepCondition::All(conditions) => {
for c in conditions {
@@ -582,22 +579,14 @@ mod tests {
#[test]
fn not_true_becomes_false() {
let data = json!({"a": 1});
let cond = StepCondition::Not(Box::new(comp(
".a",
ComparisonOp::Equals,
Some(json!(1)),
)));
let cond = StepCondition::Not(Box::new(comp(".a", ComparisonOp::Equals, Some(json!(1)))));
assert!(!evaluate(&cond, &data).unwrap());
}
#[test]
fn not_false_becomes_true() {
let data = json!({"a": 99});
let cond = StepCondition::Not(Box::new(comp(
".a",
ComparisonOp::Equals,
Some(json!(1)),
)));
let cond = StepCondition::Not(Box::new(comp(".a", ComparisonOp::Equals, Some(json!(1)))));
assert!(evaluate(&cond, &data).unwrap());
}
@@ -639,11 +628,7 @@ mod tests {
comp(".a", ComparisonOp::Equals, Some(json!(1))),
comp(".a", ComparisonOp::Equals, Some(json!(99))),
]),
StepCondition::Not(Box::new(comp(
".c",
ComparisonOp::Equals,
Some(json!(99)),
))),
StepCondition::Not(Box::new(comp(".c", ComparisonOp::Equals, Some(json!(99))))),
]);
assert!(evaluate(&cond, &data).unwrap());
}
@@ -742,7 +727,13 @@ mod tests {
let data = json!({"score": 3.14});
assert!(evaluate(&comp(".score", ComparisonOp::Gt, Some(json!(3.0))), &data).unwrap());
assert!(evaluate(&comp(".score", ComparisonOp::Lt, Some(json!(4.0))), &data).unwrap());
assert!(!evaluate(&comp(".score", ComparisonOp::Equals, Some(json!(3.0))), &data).unwrap());
assert!(
!evaluate(
&comp(".score", ComparisonOp::Equals, Some(json!(3.0))),
&data
)
.unwrap()
);
}
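The `All`/`Not` combinator semantics that these tests exercise can be reduced to a self-contained sketch, with plain booleans standing in for the crate's field comparisons (a hypothetical reduction; error handling and jq-style path lookup omitted):

```rust
/// Boolean skeleton of the condition combinators: All requires every
/// child to hold, Any requires at least one, Not inverts its child.
enum Cond {
    All(Vec<Cond>),
    Any(Vec<Cond>),
    Not(Box<Cond>),
    Leaf(bool),
}

fn eval(c: &Cond) -> bool {
    match c {
        Cond::All(cs) => cs.iter().all(eval),
        Cond::Any(cs) => cs.iter().any(eval),
        Cond::Not(inner) => !eval(inner),
        Cond::Leaf(b) => *b,
    }
}

fn main() {
    use Cond::*;
    // Mirrors not_true_becomes_false / not_false_becomes_true above.
    assert!(!eval(&Not(Box::new(Leaf(true)))));
    assert!(eval(&Not(Box::new(Leaf(false)))));
    // Nested shape like the Any-inside-All test: any([f, t]) && !f => true.
    assert!(eval(&All(vec![
        Any(vec![Leaf(false), Leaf(true)]),
        Not(Box::new(Leaf(false))),
    ])));
    println!("ok");
}
```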
#[test]

View File

@@ -29,7 +29,10 @@ pub fn handle_error(
.unwrap_or_else(|| definition.default_error_behavior.clone());
match behavior {
ErrorBehavior::Retry { interval, max_retries } => {
ErrorBehavior::Retry {
interval,
max_retries,
} => {
if max_retries > 0 && pointer.retry_count >= max_retries {
// Exceeded max retries, suspend the workflow
pointer.status = PointerStatus::Failed;
@@ -44,9 +47,8 @@ pub fn handle_error(
pointer.retry_count += 1;
pointer.status = PointerStatus::Sleeping;
pointer.active = true;
pointer.sleep_until = Some(
Utc::now() + chrono::Duration::milliseconds(interval.as_millis() as i64),
);
pointer.sleep_until =
Some(Utc::now() + chrono::Duration::milliseconds(interval.as_millis() as i64));
}
}
ErrorBehavior::Suspend => {
@@ -67,7 +69,9 @@ pub fn handle_error(
&& let Some(comp_step_id) = step.compensation_step_id
{
let mut comp_pointer = ExecutionPointer::new(comp_step_id);
comp_pointer.step_name = definition.steps.iter()
comp_pointer.step_name = definition
.steps
.iter()
.find(|s| s.id == comp_step_id)
.and_then(|s| s.name.clone());
comp_pointer.predecessor_id = Some(pointer.id.clone());

View File

@@ -36,7 +36,9 @@ pub fn process_result(
let next_step_id = find_next_step(step, &result.outcome_value);
if let Some(next_id) = next_step_id {
let mut next_pointer = ExecutionPointer::new(next_id);
next_pointer.step_name = definition.steps.iter()
next_pointer.step_name = definition
.steps
.iter()
.find(|s| s.id == next_id)
.and_then(|s| s.name.clone());
next_pointer.predecessor_id = Some(pointer.id.clone());
@@ -62,7 +64,9 @@ pub fn process_result(
for value in branch_values {
for &child_step_id in &child_step_ids {
let mut child_pointer = ExecutionPointer::new(child_step_id);
child_pointer.step_name = definition.steps.iter()
child_pointer.step_name = definition
.steps
.iter()
.find(|s| s.id == child_step_id)
.and_then(|s| s.name.clone());
child_pointer.context_item = Some(value.clone());
@@ -79,9 +83,7 @@ pub fn process_result(
pointer.event_name = result.event_name.clone();
pointer.event_key = result.event_key.clone();
if let (Some(event_name), Some(event_key)) =
(&result.event_name, &result.event_key)
{
if let (Some(event_name), Some(event_key)) = (&result.event_name, &result.event_key) {
let as_of = result.event_as_of.unwrap_or_else(Utc::now);
let sub = EventSubscription::new(
workflow_id,
@@ -107,8 +109,7 @@ pub fn process_result(
pointer.status = PointerStatus::Sleeping;
pointer.active = true;
pointer.sleep_until = Some(
Utc::now()
+ chrono::Duration::milliseconds(poll_config.interval.as_millis() as i64),
Utc::now() + chrono::Duration::milliseconds(poll_config.interval.as_millis() as i64),
);
pointer.persistence_data = result.persistence_data.clone();
} else if result.persistence_data.is_some() {

View File

@@ -17,7 +17,8 @@ impl StepRegistry {
/// Register a step type using its full type name as the key.
pub fn register<S: StepBody + Default + 'static>(&mut self) {
let key = std::any::type_name::<S>().to_string();
self.factories.insert(key, Box::new(|| Box::new(S::default())));
self.factories
.insert(key, Box::new(|| Box::new(S::default())));
}
/// Register a step factory with an explicit key and factory function.

View File

@@ -119,12 +119,12 @@ impl WorkflowExecutor {
host_context: Option<&dyn crate::traits::HostContext>,
) -> Result<()> {
// 2. Load workflow instance.
let mut workflow = self
.persistence
.get_workflow_instance(workflow_id)
.await?;
let mut workflow = self.persistence.get_workflow_instance(workflow_id).await?;
tracing::Span::current().record("workflow.definition_id", workflow.workflow_definition_id.as_str());
tracing::Span::current().record(
"workflow.definition_id",
workflow.workflow_definition_id.as_str(),
);
if workflow.status != WorkflowStatus::Runnable {
debug!(workflow_id, status = ?workflow.status, "Workflow not runnable, skipping");
@@ -179,15 +179,15 @@ impl WorkflowExecutor {
// Activate next step via outcomes (same as Complete).
let next_step_id = step.outcomes.first().map(|o| o.next_step);
if let Some(next_id) = next_step_id {
let mut next_pointer =
crate::models::ExecutionPointer::new(next_id);
next_pointer.step_name = definition.steps.iter()
let mut next_pointer = crate::models::ExecutionPointer::new(next_id);
next_pointer.step_name = definition
.steps
.iter()
.find(|s| s.id == next_id)
.and_then(|s| s.name.clone());
next_pointer.predecessor_id =
Some(workflow.execution_pointers[idx].id.clone());
next_pointer.scope =
workflow.execution_pointers[idx].scope.clone();
next_pointer.scope = workflow.execution_pointers[idx].scope.clone();
workflow.execution_pointers.push(next_pointer);
}
@@ -208,12 +208,12 @@ impl WorkflowExecutor {
);
// b. Resolve the step body.
let mut step_body = step_registry
.resolve(&step.step_type)
.ok_or_else(|| WfeError::StepExecution(format!(
let mut step_body = step_registry.resolve(&step.step_type).ok_or_else(|| {
WfeError::StepExecution(format!(
"Step type not found in registry: {}",
step.step_type
)))?;
))
})?;
// Mark pointer as running before building context.
if workflow.execution_pointers[idx].start_time.is_none() {
@@ -229,7 +229,8 @@ impl WorkflowExecutor {
step_id,
step_name: step.name.clone(),
},
)).await;
))
.await;
// c. Build StepExecutionContext (borrows workflow immutably).
let cancellation_token = tokio_util::sync::CancellationToken::new();
@@ -277,19 +278,15 @@ impl WorkflowExecutor {
step_id,
step_name: step.name.clone(),
},
)).await;
))
.await;
// e. Process the ExecutionResult.
// Extract workflow_id before mutable borrow.
let wf_id = workflow.id.clone();
let process_result = {
let pointer = &mut workflow.execution_pointers[idx];
result_processor::process_result(
&result,
pointer,
definition,
&wf_id,
)
result_processor::process_result(&result, pointer, definition, &wf_id)
};
all_subscriptions.extend(process_result.subscriptions);
@@ -320,7 +317,8 @@ impl WorkflowExecutor {
crate::models::LifecycleEventType::Error {
message: error_msg.clone(),
},
)).await;
))
.await;
let pointer_id = workflow.execution_pointers[idx].id.clone();
execution_errors.push(ExecutionError::new(
@@ -331,11 +329,7 @@ impl WorkflowExecutor {
let handler_result = {
let pointer = &mut workflow.execution_pointers[idx];
error_handler::handle_error(
&error_msg,
pointer,
definition,
)
error_handler::handle_error(&error_msg, pointer, definition)
};
// Apply workflow-level status changes from error handler.
@@ -348,7 +342,8 @@ impl WorkflowExecutor {
&workflow.workflow_definition_id,
workflow.version,
crate::models::LifecycleEventType::Terminated,
)).await;
))
.await;
}
}
@@ -382,7 +377,8 @@ impl WorkflowExecutor {
&workflow.workflow_definition_id,
workflow.version,
crate::models::LifecycleEventType::Completed,
)).await;
))
.await;
// Publish completion event for SubWorkflow parents.
let completion_event = Event::new(
@@ -427,9 +423,7 @@ impl WorkflowExecutor {
// Persist errors.
if !execution_errors.is_empty() {
self.persistence
.persist_errors(&execution_errors)
.await?;
self.persistence.persist_errors(&execution_errors).await?;
}
// 8. Queue any follow-up work.
@@ -512,10 +506,7 @@ mod tests {
#[async_trait]
impl StepBody for PassStep {
async fn run(
&mut self,
_ctx: &StepExecutionContext<'_>,
) -> crate::Result<ExecutionResult> {
async fn run(&mut self, _ctx: &StepExecutionContext<'_>) -> crate::Result<ExecutionResult> {
Ok(ExecutionResult::next())
}
}
@@ -525,10 +516,7 @@ mod tests {
#[async_trait]
impl StepBody for OutcomeStep {
async fn run(
&mut self,
_ctx: &StepExecutionContext<'_>,
) -> crate::Result<ExecutionResult> {
async fn run(&mut self, _ctx: &StepExecutionContext<'_>) -> crate::Result<ExecutionResult> {
Ok(ExecutionResult::outcome(serde_json::json!("yes")))
}
}
@@ -538,10 +526,7 @@ mod tests {
#[async_trait]
impl StepBody for PersistStep {
async fn run(
&mut self,
_ctx: &StepExecutionContext<'_>,
) -> crate::Result<ExecutionResult> {
async fn run(&mut self, _ctx: &StepExecutionContext<'_>) -> crate::Result<ExecutionResult> {
Ok(ExecutionResult::persist(serde_json::json!({"count": 1})))
}
}
@@ -551,10 +536,7 @@ mod tests {
#[async_trait]
impl StepBody for SleepStep {
async fn run(
&mut self,
_ctx: &StepExecutionContext<'_>,
) -> crate::Result<ExecutionResult> {
async fn run(&mut self, _ctx: &StepExecutionContext<'_>) -> crate::Result<ExecutionResult> {
Ok(ExecutionResult::sleep(Duration::from_secs(30), None))
}
}
@@ -564,10 +546,7 @@ mod tests {
#[async_trait]
impl StepBody for WaitEventStep {
async fn run(
&mut self,
_ctx: &StepExecutionContext<'_>,
) -> crate::Result<ExecutionResult> {
async fn run(&mut self, _ctx: &StepExecutionContext<'_>) -> crate::Result<ExecutionResult> {
Ok(ExecutionResult::wait_for_event(
"order.completed",
"order-123",
@@ -581,10 +560,7 @@ mod tests {
#[async_trait]
impl StepBody for EventResumeStep {
async fn run(
&mut self,
ctx: &StepExecutionContext<'_>,
) -> crate::Result<ExecutionResult> {
async fn run(&mut self, ctx: &StepExecutionContext<'_>) -> crate::Result<ExecutionResult> {
if ctx.execution_pointer.event_published {
Ok(ExecutionResult::next())
} else {
@@ -602,10 +578,7 @@ mod tests {
#[async_trait]
impl StepBody for BranchStep {
async fn run(
&mut self,
_ctx: &StepExecutionContext<'_>,
) -> crate::Result<ExecutionResult> {
async fn run(&mut self, _ctx: &StepExecutionContext<'_>) -> crate::Result<ExecutionResult> {
Ok(ExecutionResult::branch(
vec![
serde_json::json!(1),
@@ -622,10 +595,7 @@ mod tests {
#[async_trait]
impl StepBody for FailStep {
async fn run(
&mut self,
_ctx: &StepExecutionContext<'_>,
) -> crate::Result<ExecutionResult> {
async fn run(&mut self, _ctx: &StepExecutionContext<'_>) -> crate::Result<ExecutionResult> {
Err(WfeError::StepExecution("step failed".into()))
}
}
@@ -635,10 +605,7 @@ mod tests {
#[async_trait]
impl StepBody for CompensateStep {
async fn run(
&mut self,
_ctx: &StepExecutionContext<'_>,
) -> crate::Result<ExecutionResult> {
async fn run(&mut self, _ctx: &StepExecutionContext<'_>) -> crate::Result<ExecutionResult> {
Ok(ExecutionResult::next())
}
}
@@ -680,7 +647,8 @@ mod tests {
registry.register::<PassStep>();
let mut def = WorkflowDefinition::new("test", 1);
def.steps.push(WorkflowStep::new(0, step_type::<PassStep>()));
def.steps
.push(WorkflowStep::new(0, step_type::<PassStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
let pointer = ExecutionPointer::new(0);
@@ -688,11 +656,20 @@ mod tests {
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.status, WorkflowStatus::Complete);
assert_eq!(updated.execution_pointers[0].status, PointerStatus::Complete);
assert_eq!(
updated.execution_pointers[0].status,
PointerStatus::Complete
);
assert!(updated.complete_time.is_some());
}
@@ -712,27 +689,46 @@ mod tests {
value: None,
});
def.steps.push(step0);
def.steps.push(WorkflowStep::new(1, step_type::<PassStep>()));
def.steps
.push(WorkflowStep::new(1, step_type::<PassStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
// First execution: step 0 completes, step 1 pointer created.
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.execution_pointers.len(), 2);
assert_eq!(updated.execution_pointers[0].status, PointerStatus::Complete);
assert_eq!(
updated.execution_pointers[0].status,
PointerStatus::Complete
);
// Step 1 pointer should be active and pending.
assert_eq!(updated.execution_pointers[1].step_id, 1);
// Second execution: step 1 completes.
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.status, WorkflowStatus::Complete);
assert_eq!(updated.execution_pointers[1].status, PointerStatus::Complete);
assert_eq!(
updated.execution_pointers[1].status,
PointerStatus::Complete
);
}
#[tokio::test]
@@ -745,9 +741,17 @@ mod tests {
let mut def = WorkflowDefinition::new("test", 1);
let mut s0 = WorkflowStep::new(0, step_type::<PassStep>());
s0.outcomes.push(StepOutcome { next_step: 1, label: None, value: None });
s0.outcomes.push(StepOutcome {
next_step: 1,
label: None,
value: None,
});
let mut s1 = WorkflowStep::new(1, step_type::<PassStep>());
s1.outcomes.push(StepOutcome { next_step: 2, label: None, value: None });
s1.outcomes.push(StepOutcome {
next_step: 2,
label: None,
value: None,
});
let s2 = WorkflowStep::new(2, step_type::<PassStep>());
def.steps.push(s0);
def.steps.push(s1);
@@ -759,10 +763,16 @@ mod tests {
// Execute three times for three steps.
for _ in 0..3 {
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
}
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.status, WorkflowStatus::Complete);
assert_eq!(updated.execution_pointers.len(), 3);
for p in &updated.execution_pointers {
@@ -792,16 +802,24 @@ mod tests {
value: Some(serde_json::json!("yes")),
});
def.steps.push(s0);
def.steps.push(WorkflowStep::new(1, step_type::<PassStep>()));
def.steps.push(WorkflowStep::new(2, step_type::<PassStep>()));
def.steps
.push(WorkflowStep::new(1, step_type::<PassStep>()));
def.steps
.push(WorkflowStep::new(2, step_type::<PassStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.execution_pointers.len(), 2);
// Should route to step 2 (the "yes" branch).
assert_eq!(updated.execution_pointers[1].step_id, 2);
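The routing assertion above implies a value-matching rule for outcomes: an outcome whose `value` equals the step's returned outcome wins over an unlabelled default. A minimal sketch of that rule, assuming `Outcome` as a stand-in for the crate's `StepOutcome` and a fall-back-to-unlabelled policy that the test suggests but does not spell out:

```rust
// Stand-in for StepOutcome; only the fields the routing rule needs.
struct Outcome {
    next_step: usize,
    value: Option<&'static str>,
}

// Prefer an outcome whose value matches the step's result, otherwise
// fall back to the first unlabelled (default) outcome.
fn route(outcomes: &[Outcome], actual: &str) -> Option<usize> {
    outcomes
        .iter()
        .find(|o| o.value == Some(actual))
        .or_else(|| outcomes.iter().find(|o| o.value.is_none()))
        .map(|o| o.next_step)
}

fn main() {
    let outcomes = [
        Outcome { next_step: 1, value: None },
        Outcome { next_step: 2, value: Some("yes") },
    ];
    // Mirrors the test: the "yes" branch routes to step 2.
    assert_eq!(route(&outcomes, "yes"), Some(2));
    // An unmatched value falls through to the unlabelled outcome.
    assert_eq!(route(&outcomes, "no"), Some(1));
}
```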
@@ -816,15 +834,22 @@ mod tests {
registry.register::<PersistStep>();
let mut def = WorkflowDefinition::new("test", 1);
def.steps.push(WorkflowStep::new(0, step_type::<PersistStep>()));
def.steps
.push(WorkflowStep::new(0, step_type::<PersistStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.status, WorkflowStatus::Runnable);
assert!(updated.execution_pointers[0].active);
assert_eq!(
@@ -842,16 +867,26 @@ mod tests {
registry.register::<SleepStep>();
let mut def = WorkflowDefinition::new("test", 1);
def.steps.push(WorkflowStep::new(0, step_type::<SleepStep>()));
def.steps
.push(WorkflowStep::new(0, step_type::<SleepStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
assert_eq!(updated.execution_pointers[0].status, PointerStatus::Sleeping);
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(
updated.execution_pointers[0].status,
PointerStatus::Sleeping
);
assert!(updated.execution_pointers[0].sleep_until.is_some());
assert!(updated.execution_pointers[0].active);
}
@@ -865,15 +900,22 @@ mod tests {
registry.register::<WaitEventStep>();
let mut def = WorkflowDefinition::new("test", 1);
def.steps.push(WorkflowStep::new(0, step_type::<WaitEventStep>()));
def.steps
.push(WorkflowStep::new(0, step_type::<WaitEventStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(
updated.execution_pointers[0].status,
PointerStatus::WaitingForEvent
@@ -899,7 +941,8 @@ mod tests {
registry.register::<EventResumeStep>();
let mut def = WorkflowDefinition::new("test", 1);
def.steps.push(WorkflowStep::new(0, step_type::<EventResumeStep>()));
def.steps
.push(WorkflowStep::new(0, step_type::<EventResumeStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
let mut pointer = ExecutionPointer::new(0);
@@ -911,10 +954,19 @@ mod tests {
instance.execution_pointers.push(pointer);
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
assert_eq!(updated.execution_pointers[0].status, PointerStatus::Complete);
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(
updated.execution_pointers[0].status,
PointerStatus::Complete
);
assert_eq!(updated.status, WorkflowStatus::Complete);
}
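The `EventResumeStep` contract exercised here — return `wait_for_event` until the host flips the pointer's `event_published` flag, then proceed — can be sketched with illustrative stand-in types (the real `ExecutionResult` and `ExecutionPointer` differ):

```rust
// Stand-in for the two ExecutionResult shapes the step can return.
enum StepResult {
    WaitForEvent(&'static str, &'static str),
    Next,
}

struct Pointer {
    event_published: bool,
}

// First pass: subscribe and park. Second pass (after the host publishes
// the event and re-queues the workflow): complete normally.
fn run(pointer: &Pointer) -> StepResult {
    if pointer.event_published {
        StepResult::Next
    } else {
        StepResult::WaitForEvent("order.completed", "order-123")
    }
}

fn main() {
    let parked = run(&Pointer { event_published: false });
    assert!(matches!(parked, StepResult::WaitForEvent(..)));
    let resumed = run(&Pointer { event_published: true });
    assert!(matches!(resumed, StepResult::Next));
}
```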
@@ -931,15 +983,22 @@ mod tests {
let mut s0 = WorkflowStep::new(0, step_type::<BranchStep>());
s0.children.push(1);
def.steps.push(s0);
def.steps.push(WorkflowStep::new(1, step_type::<PassStep>()));
def.steps
.push(WorkflowStep::new(1, step_type::<PassStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
// 1 original + 3 children.
assert_eq!(updated.execution_pointers.len(), 4);
// Children should have scope containing the parent pointer id.
@@ -973,11 +1032,20 @@ mod tests {
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.execution_pointers[0].retry_count, 1);
assert_eq!(updated.execution_pointers[0].status, PointerStatus::Sleeping);
assert_eq!(
updated.execution_pointers[0].status,
PointerStatus::Sleeping
);
assert!(updated.execution_pointers[0].sleep_until.is_some());
assert_eq!(updated.status, WorkflowStatus::Runnable);
}
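The retry pattern this test asserts — a failed pointer is put back to sleep with an incremented `retry_count` rather than failing the workflow — can be sketched as below. Field names mirror the assertions; the real error handler's backoff logic is assumed, not shown:

```rust
use std::time::{Duration, SystemTime};

// Stand-in for the retry-relevant slice of ExecutionPointer.
struct Pointer {
    retry_count: u32,
    sleep_until: Option<SystemTime>,
}

// On a retryable failure: bump the count and schedule a wake-up, leaving
// the workflow Runnable so the host picks it up again after the interval.
fn schedule_retry(pointer: &mut Pointer, interval: Duration) {
    pointer.retry_count += 1;
    pointer.sleep_until = Some(SystemTime::now() + interval);
}

fn main() {
    let mut pointer = Pointer { retry_count: 0, sleep_until: None };
    schedule_retry(&mut pointer, Duration::from_millis(0));
    assert_eq!(pointer.retry_count, 1);
    assert!(pointer.sleep_until.is_some());
}
```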
@@ -999,9 +1067,15 @@ mod tests {
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.status, WorkflowStatus::Suspended);
assert_eq!(updated.execution_pointers[0].status, PointerStatus::Failed);
}
@@ -1023,9 +1097,15 @@ mod tests {
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.status, WorkflowStatus::Terminated);
assert_eq!(updated.execution_pointers[0].status, PointerStatus::Failed);
assert!(updated.complete_time.is_some());
@@ -1045,15 +1125,22 @@ mod tests {
s0.error_behavior = Some(ErrorBehavior::Compensate);
s0.compensation_step_id = Some(1);
def.steps.push(s0);
def.steps.push(WorkflowStep::new(1, step_type::<CompensateStep>()));
def.steps
.push(WorkflowStep::new(1, step_type::<CompensateStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.execution_pointers[0].status, PointerStatus::Failed);
// Compensation pointer should be created.
assert_eq!(updated.execution_pointers.len(), 2);
@@ -1070,8 +1157,10 @@ mod tests {
registry.register::<PassStep>();
let mut def = WorkflowDefinition::new("test", 1);
def.steps.push(WorkflowStep::new(0, step_type::<PassStep>()));
def.steps.push(WorkflowStep::new(1, step_type::<PassStep>()));
def.steps
.push(WorkflowStep::new(0, step_type::<PassStep>()));
def.steps
.push(WorkflowStep::new(1, step_type::<PassStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
// Two independent active pointers.
@@ -1079,14 +1168,22 @@ mod tests {
instance.execution_pointers.push(ExecutionPointer::new(1));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.status, WorkflowStatus::Complete);
assert!(updated
.execution_pointers
.iter()
.all(|p| p.status == PointerStatus::Complete));
assert!(
updated
.execution_pointers
.iter()
.all(|p| p.status == PointerStatus::Complete)
);
}
#[tokio::test]
@@ -1114,9 +1211,15 @@ mod tests {
persistence.create_new_workflow(&instance).await.unwrap();
// Should not error on a completed workflow.
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.status, WorkflowStatus::Complete);
}
@@ -1129,7 +1232,8 @@ mod tests {
registry.register::<SleepStep>();
let mut def = WorkflowDefinition::new("test", 1);
def.steps.push(WorkflowStep::new(0, step_type::<SleepStep>()));
def.steps
.push(WorkflowStep::new(0, step_type::<SleepStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
let mut pointer = ExecutionPointer::new(0);
@@ -1139,11 +1243,20 @@ mod tests {
instance.execution_pointers.push(pointer);
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
// Should still be sleeping since sleep_until is in the future.
assert_eq!(updated.execution_pointers[0].status, PointerStatus::Sleeping);
assert_eq!(
updated.execution_pointers[0].status,
PointerStatus::Sleeping
);
}
#[tokio::test]
@@ -1163,7 +1276,10 @@ mod tests {
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let errors = persistence.get_errors().await;
assert_eq!(errors.len(), 1);
@@ -1174,24 +1290,31 @@ mod tests {
async fn lifecycle_events_published() {
let (persistence, lock, queue) = create_providers();
let lifecycle = Arc::new(InMemoryLifecyclePublisher::new());
let executor = create_executor(persistence.clone(), lock, queue)
.with_lifecycle(lifecycle.clone());
let executor =
create_executor(persistence.clone(), lock, queue).with_lifecycle(lifecycle.clone());
let mut registry = StepRegistry::new();
registry.register::<PassStep>();
let mut def = WorkflowDefinition::new("test", 1);
def.steps.push(WorkflowStep::new(0, step_type::<PassStep>()));
def.steps
.push(WorkflowStep::new(0, step_type::<PassStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
// Executor itself doesn't publish lifecycle events in the current implementation,
// but the with_lifecycle builder works correctly.
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.status, WorkflowStatus::Complete);
}
@@ -1206,15 +1329,22 @@ mod tests {
let mut def = WorkflowDefinition::new("test", 1);
def.default_error_behavior = ErrorBehavior::Terminate;
// Step has no error_behavior override.
def.steps.push(WorkflowStep::new(0, step_type::<FailStep>()));
def.steps
.push(WorkflowStep::new(0, step_type::<FailStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.status, WorkflowStatus::Terminated);
}
@@ -1227,15 +1357,22 @@ mod tests {
registry.register::<PassStep>();
let mut def = WorkflowDefinition::new("test", 1);
def.steps.push(WorkflowStep::new(0, step_type::<PassStep>()));
def.steps
.push(WorkflowStep::new(0, step_type::<PassStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert!(updated.execution_pointers[0].start_time.is_some());
assert!(updated.execution_pointers[0].end_time.is_some());
}
@@ -1257,15 +1394,22 @@ mod tests {
value: Some(serde_json::json!("yes")),
});
def.steps.push(s0);
def.steps.push(WorkflowStep::new(1, step_type::<PassStep>()));
def.steps
.push(WorkflowStep::new(1, step_type::<PassStep>()));
let mut instance = WorkflowInstance::new("test", 1, serde_json::json!({}));
instance.execution_pointers.push(ExecutionPointer::new(0));
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(
updated.execution_pointers[0].outcome,
Some(serde_json::json!("yes"))
@@ -1318,15 +1462,33 @@ mod tests {
persistence.create_new_workflow(&instance).await.unwrap();
// First execution: fails, retry scheduled.
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.execution_pointers[0].retry_count, 1);
assert_eq!(updated.execution_pointers[0].status, PointerStatus::Sleeping);
assert_eq!(
updated.execution_pointers[0].status,
PointerStatus::Sleeping
);
// Second execution: succeeds (sleep_until is in the past with 0ms interval).
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
assert_eq!(updated.execution_pointers[0].status, PointerStatus::Complete);
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(
updated.execution_pointers[0].status,
PointerStatus::Complete
);
assert_eq!(updated.status, WorkflowStatus::Complete);
}
@@ -1342,9 +1504,15 @@ mod tests {
// No execution pointers at all.
persistence.create_new_workflow(&instance).await.unwrap();
executor.execute(&instance.id, &def, &registry, None).await.unwrap();
executor
.execute(&instance.id, &def, &registry, None)
.await
.unwrap();
let updated = persistence.get_workflow_instance(&instance.id).await.unwrap();
let updated = persistence
.get_workflow_instance(&instance.id)
.await
.unwrap();
assert_eq!(updated.status, WorkflowStatus::Runnable);
}
}

View File

@@ -136,13 +136,11 @@ mod tests {
#[test]
fn step_condition_any_serde_round_trip() {
let condition = StepCondition::Any(vec![
StepCondition::Comparison(FieldComparison {
field: ".x".to_string(),
operator: ComparisonOp::IsNull,
value: None,
}),
]);
let condition = StepCondition::Any(vec![StepCondition::Comparison(FieldComparison {
field: ".x".to_string(),
operator: ComparisonOp::IsNull,
value: None,
})]);
let json_str = serde_json::to_string(&condition).unwrap();
let deserialized: StepCondition = serde_json::from_str(&json_str).unwrap();
assert_eq!(condition, deserialized);
@@ -150,13 +148,11 @@ mod tests {
#[test]
fn step_condition_none_serde_round_trip() {
let condition = StepCondition::None(vec![
StepCondition::Comparison(FieldComparison {
field: ".err".to_string(),
operator: ComparisonOp::IsNotNull,
value: None,
}),
]);
let condition = StepCondition::None(vec![StepCondition::Comparison(FieldComparison {
field: ".err".to_string(),
operator: ComparisonOp::IsNotNull,
value: None,
})]);
let json_str = serde_json::to_string(&condition).unwrap();
let deserialized: StepCondition = serde_json::from_str(&json_str).unwrap();
assert_eq!(condition, deserialized);

View File

@@ -75,7 +75,11 @@ mod tests {
#[test]
fn new_event_defaults() {
let event = Event::new("order.created", "order-456", serde_json::json!({"amount": 100}));
let event = Event::new(
"order.created",
"order-456",
serde_json::json!({"amount": 100}),
);
assert_eq!(event.event_name, "order.created");
assert_eq!(event.event_key, "order-456");
assert!(!event.is_processed);

View File

@@ -59,7 +59,10 @@ impl ExecutionResult {
}
/// Create child branches for parallel/foreach execution.
pub fn branch(values: Vec<serde_json::Value>, persistence_data: Option<serde_json::Value>) -> Self {
pub fn branch(
values: Vec<serde_json::Value>,
persistence_data: Option<serde_json::Value>,
) -> Self {
Self {
proceed: false,
branch_values: Some(values),
@@ -137,7 +140,11 @@ mod tests {
#[test]
fn branch_creates_child_values() {
let values = vec![serde_json::json!(1), serde_json::json!(2), serde_json::json!(3)];
let values = vec![
serde_json::json!(1),
serde_json::json!(2),
serde_json::json!(3),
];
let result = ExecutionResult::branch(values.clone(), None);
assert!(!result.proceed);
assert_eq!(result.branch_values, Some(values));
@@ -181,7 +188,8 @@ mod tests {
#[test]
fn serde_round_trip() {
let result = ExecutionResult::sleep(Duration::from_secs(30), Some(serde_json::json!({"x": 1})));
let result =
ExecutionResult::sleep(Duration::from_secs(30), Some(serde_json::json!({"x": 1})));
let json = serde_json::to_string(&result).unwrap();
let deserialized: ExecutionResult = serde_json::from_str(&json).unwrap();
assert_eq!(result.proceed, deserialized.proceed);

View File

@@ -18,9 +18,17 @@ pub enum LifecycleEventType {
Suspended,
Completed,
Terminated,
Error { message: String },
StepStarted { step_id: usize, step_name: Option<String> },
StepCompleted { step_id: usize, step_name: Option<String> },
Error {
message: String,
},
StepStarted {
step_id: usize,
step_name: Option<String>,
},
StepCompleted {
step_id: usize,
step_name: Option<String>,
},
}
impl LifecycleEvent {
@@ -56,7 +64,10 @@ mod tests {
let event = LifecycleEvent::new("wf-1", "def-1", 1, LifecycleEventType::Started);
let json = serde_json::to_string(&event).unwrap();
let deserialized: LifecycleEvent = serde_json::from_str(&json).unwrap();
assert_eq!(event.workflow_instance_id, deserialized.workflow_instance_id);
assert_eq!(
event.workflow_instance_id,
deserialized.workflow_instance_id
);
assert_eq!(event.event_type, deserialized.event_type);
}

View File

@@ -1,7 +1,6 @@
pub mod condition;
pub mod error_behavior;
pub mod event;
pub mod service;
pub mod execution_error;
pub mod execution_pointer;
pub mod execution_result;
@@ -10,6 +9,7 @@ pub mod poll_config;
pub mod queue_type;
pub mod scheduled_command;
pub mod schema;
pub mod service;
pub mod status;
pub mod workflow_definition;
pub mod workflow_instance;
@@ -25,9 +25,11 @@ pub use poll_config::{HttpMethod, PollCondition, PollEndpointConfig};
pub use queue_type::QueueType;
pub use scheduled_command::{CommandName, ScheduledCommand};
pub use schema::{SchemaType, WorkflowSchema};
pub use service::{
ReadinessCheck, ReadinessProbe, ServiceDefinition, ServiceEndpoint, ServicePort,
};
pub use status::{PointerStatus, WorkflowStatus};
pub use workflow_definition::{StepOutcome, WorkflowDefinition, WorkflowStep};
pub use service::{ReadinessCheck, ReadinessProbe, ServiceDefinition, ServiceEndpoint, ServicePort};
pub use workflow_instance::WorkflowInstance;
/// Serde helper for `Option<Duration>` as milliseconds.
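The conversion pair behind such a helper is straightforward; this is an illustrative sketch of the millisecond mapping only (the crate's actual `serialize`/`deserialize` function signatures for serde's `with` attribute are assumed, not shown):

```rust
use std::time::Duration;

// Option<Duration> -> Option<u64> milliseconds for serialization.
fn to_millis(duration: &Option<Duration>) -> Option<u64> {
    duration.map(|d| d.as_millis() as u64)
}

// Option<u64> milliseconds -> Option<Duration> for deserialization.
fn from_millis(millis: Option<u64>) -> Option<Duration> {
    millis.map(Duration::from_millis)
}

fn main() {
    let original = Some(Duration::from_secs(30));
    let wire = to_millis(&original);
    assert_eq!(wire, Some(30_000));
    assert_eq!(from_millis(wire), original);
    assert_eq!(to_millis(&None), None);
}
```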

View File

@@ -63,9 +63,7 @@ pub fn parse_type(s: &str) -> crate::Result<SchemaType> {
"integer" => Ok(SchemaType::Integer),
"bool" => Ok(SchemaType::Bool),
"any" => Ok(SchemaType::Any),
_ => Err(crate::WfeError::StepExecution(format!(
"Unknown type: {s}"
))),
_ => Err(crate::WfeError::StepExecution(format!("Unknown type: {s}"))),
}
}
@@ -110,8 +108,7 @@ pub fn validate_value(value: &serde_json::Value, expected: &SchemaType) -> Resul
SchemaType::List(inner) => {
if let Some(arr) = value.as_array() {
for (i, item) in arr.iter().enumerate() {
validate_value(item, inner)
.map_err(|e| format!("list element [{i}]: {e}"))?;
validate_value(item, inner).map_err(|e| format!("list element [{i}]: {e}"))?;
}
Ok(())
} else {
@@ -121,8 +118,7 @@ pub fn validate_value(value: &serde_json::Value, expected: &SchemaType) -> Resul
SchemaType::Map(inner) => {
if let Some(obj) = value.as_object() {
for (key, val) in obj {
validate_value(val, inner)
.map_err(|e| format!("map key \"{key}\": {e}"))?;
validate_value(val, inner).map_err(|e| format!("map key \"{key}\": {e}"))?;
}
Ok(())
} else {

View File

@@ -9,7 +9,14 @@ use super::service::ServiceDefinition;
/// A compiled workflow definition ready for execution.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WorkflowDefinition {
/// Stable slug used as the primary key (e.g. "ci", "checkout"). Must be
/// unique within a host. Referenced by other workflows, webhooks, and
/// clients when starting new instances.
pub id: String,
/// Optional human-friendly display name surfaced in UIs, listings, and
/// logs (e.g. "Continuous Integration"). Falls back to `id` when unset.
#[serde(default, skip_serializing_if = "Option::is_none")]
pub name: Option<String>,
pub version: u32,
pub description: Option<String>,
pub steps: Vec<WorkflowStep>,
@@ -25,6 +32,7 @@ impl WorkflowDefinition {
pub fn new(id: impl Into<String>, version: u32) -> Self {
Self {
id: id.into(),
name: None,
version,
description: None,
steps: Vec::new(),
@@ -33,6 +41,11 @@ impl WorkflowDefinition {
services: Vec::new(),
}
}
/// Return the display name when set, otherwise fall back to the slug id.
pub fn display_name(&self) -> &str {
self.name.as_deref().unwrap_or(&self.id)
}
}
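The fallback behavior added above can be demonstrated in isolation; `Def` below is a stand-in carrying only the two fields `display_name` reads, not the full `WorkflowDefinition`:

```rust
// Stand-in for WorkflowDefinition, reduced to the fields display_name uses.
struct Def {
    id: String,
    name: Option<String>,
}

impl Def {
    // Same fallback as the diff: display name when set, else the slug id.
    fn display_name(&self) -> &str {
        self.name.as_deref().unwrap_or(&self.id)
    }
}

fn main() {
    let named = Def {
        id: "ci".to_string(),
        name: Some("Continuous Integration".to_string()),
    };
    let unnamed = Def { id: "checkout".to_string(), name: None };
    assert_eq!(named.display_name(), "Continuous Integration");
    // No display name set: callers see the slug.
    assert_eq!(unnamed.display_name(), "checkout");
}
```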
/// A single step in a workflow definition.

View File

@@ -6,7 +6,14 @@ use super::status::{PointerStatus, WorkflowStatus};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WorkflowInstance {
/// UUID — the primary key, always unique, never changes.
pub id: String,
/// Human-friendly unique name, e.g. "ci-42". Auto-assigned as
/// `{definition_id}-{N}` via a per-definition monotonic counter when
/// the caller does not supply an override. Used interchangeably with
/// `id` in lookup APIs. Empty when the instance has not yet been
/// persisted (the host fills it in before the first insert).
pub name: String,
pub workflow_definition_id: String,
pub version: u32,
pub description: Option<String>,
@@ -20,9 +27,15 @@ pub struct WorkflowInstance {
}
impl WorkflowInstance {
pub fn new(workflow_definition_id: impl Into<String>, version: u32, data: serde_json::Value) -> Self {
pub fn new(
workflow_definition_id: impl Into<String>,
version: u32,
data: serde_json::Value,
) -> Self {
Self {
id: uuid::Uuid::new_v4().to_string(),
// Filled in by WorkflowHost::start_workflow before persisting.
name: String::new(),
workflow_definition_id: workflow_definition_id.into(),
version,
description: None,
@@ -134,7 +147,10 @@ mod tests {
let json = serde_json::to_string(&instance).unwrap();
let deserialized: WorkflowInstance = serde_json::from_str(&json).unwrap();
assert_eq!(instance.id, deserialized.id);
assert_eq!(instance.workflow_definition_id, deserialized.workflow_definition_id);
assert_eq!(
instance.workflow_definition_id,
deserialized.workflow_definition_id
);
assert_eq!(instance.version, deserialized.version);
assert_eq!(instance.status, deserialized.status);
assert_eq!(instance.data, deserialized.data);
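The `{definition_id}-{N}` naming scheme described in the doc comment can be sketched independently of the host. Function names here are illustrative, not the real API; the counter logic matches the in-memory provider's `next_definition_sequence`:

```rust
use std::collections::HashMap;

// Per-definition monotonic counter: the first instance of "ci" is named
// "ci-1", the next "ci-2", and a different definition id gets its own
// 1-based sequence.
fn next_sequence(seqs: &mut HashMap<String, u64>, definition_id: &str) -> u64 {
    let next = seqs.get(definition_id).copied().unwrap_or(0) + 1;
    seqs.insert(definition_id.to_string(), next);
    next
}

fn instance_name(seqs: &mut HashMap<String, u64>, definition_id: &str) -> String {
    format!("{definition_id}-{}", next_sequence(seqs, definition_id))
}

fn main() {
    let mut seqs = HashMap::new();
    assert_eq!(instance_name(&mut seqs, "ci"), "ci-1");
    assert_eq!(instance_name(&mut seqs, "ci"), "ci-2");
    assert_eq!(instance_name(&mut seqs, "unit-tests"), "unit-tests-1");
}
```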

View File

@@ -130,7 +130,10 @@ mod tests {
let result = step.run(&ctx).await.unwrap();
assert!(!result.proceed);
assert_eq!(result.branch_values, Some(vec![json!(1), json!(2), json!(3)]));
assert_eq!(
result.branch_values,
Some(vec![json!(1), json!(2), json!(3)])
);
}
#[tokio::test]

View File

@@ -60,7 +60,10 @@ mod tests {
let result = step.run(&ctx).await.unwrap();
assert!(!result.proceed);
assert_eq!(result.branch_values, Some(vec![json!(null)]));
assert_eq!(result.persistence_data, Some(json!({"children_active": true})));
assert_eq!(
result.persistence_data,
Some(json!({"children_active": true}))
);
}
#[tokio::test]
@@ -116,6 +119,9 @@ mod tests {
let result = step.run(&ctx).await.unwrap();
assert!(!result.proceed);
assert_eq!(result.persistence_data, Some(json!({"children_active": true})));
assert_eq!(
result.persistence_data,
Some(json!({"children_active": true}))
);
}
}

View File

@@ -45,7 +45,7 @@ mod test_helpers {
workflow,
cancellation_token: CancellationToken::new(),
host_context: None,
log_sink: None,
}
}

View File

@@ -1,7 +1,7 @@
use async_trait::async_trait;
use crate::models::poll_config::PollEndpointConfig;
use crate::models::ExecutionResult;
use crate::models::poll_config::PollEndpointConfig;
use crate::traits::step::{StepBody, StepExecutionContext};
/// A step that polls an external HTTP endpoint until a condition is met.
@@ -21,8 +21,8 @@ impl StepBody for PollEndpointStep {
#[cfg(test)]
mod tests {
use super::*;
use crate::models::poll_config::{HttpMethod, PollCondition};
use crate::models::ExecutionPointer;
use crate::models::poll_config::{HttpMethod, PollCondition};
use crate::primitives::test_helpers::*;
use std::collections::HashMap;
use std::time::Duration;

View File

@@ -85,7 +85,10 @@ mod tests {
let result = step.run(&ctx).await.unwrap();
assert!(!result.proceed);
assert_eq!(result.sleep_for, Some(Duration::from_secs(10)));
assert_eq!(result.persistence_data, Some(json!({"children_active": true})));
assert_eq!(
result.persistence_data,
Some(json!({"children_active": true}))
);
}
#[tokio::test]
@@ -130,6 +133,9 @@ mod tests {
let result = step.run(&ctx).await.unwrap();
assert!(!result.proceed);
assert!(result.sleep_for.is_none());
assert_eq!(result.persistence_data, Some(json!({"children_active": true})));
assert_eq!(
result.persistence_data,
Some(json!({"children_active": true}))
);
}
}

View File

@@ -89,7 +89,10 @@ mod tests {
let result = step.run(&ctx).await.unwrap();
assert!(!result.proceed);
assert_eq!(result.branch_values, Some(vec![json!(null)]));
assert_eq!(result.persistence_data, Some(json!({"children_active": true})));
assert_eq!(
result.persistence_data,
Some(json!({"children_active": true}))
);
}
#[tokio::test]

View File

@@ -60,7 +60,10 @@ mod tests {
let result = step.run(&ctx).await.unwrap();
assert!(!result.proceed);
assert_eq!(result.sleep_for, Some(Duration::from_secs(30)));
assert_eq!(result.persistence_data, Some(json!({"children_active": true})));
assert_eq!(
result.persistence_data,
Some(json!({"children_active": true}))
);
}
#[tokio::test]
@@ -101,6 +104,9 @@ mod tests {
let ctx = make_context(&pointer, &wf_step, &workflow);
let result = step.run(&ctx).await.unwrap();
assert!(!result.proceed);
assert_eq!(result.persistence_data, Some(json!({"children_active": true})));
assert_eq!(
result.persistence_data,
Some(json!({"children_active": true}))
);
}
}

View File

@@ -61,7 +61,10 @@ mod tests {
let ctx = make_context(&pointer, &wf_step, &workflow);
let result = step.run(&ctx).await.unwrap();
assert!(!result.proceed);
assert_eq!(result.persistence_data, Some(json!({"children_active": true})));
assert_eq!(
result.persistence_data,
Some(json!({"children_active": true}))
);
}
#[tokio::test]

View File

@@ -1,8 +1,8 @@
use async_trait::async_trait;
use chrono::Utc;
use crate::models::schema::WorkflowSchema;
use crate::models::ExecutionResult;
use crate::models::schema::WorkflowSchema;
use crate::traits::step::{StepBody, StepExecutionContext};
/// A step that starts a child workflow and waits for its completion.
@@ -110,12 +110,18 @@ impl StepBody for SubWorkflowStep {
)
})?;
// Use inputs if set, otherwise pass an empty object so the child
// workflow has a valid JSON object for storing step outputs.
let child_data = if self.inputs.is_null() {
serde_json::json!({})
} else {
// Use explicit inputs if set; otherwise inherit the parent workflow's
// data so child steps can reference the same top-level fields (e.g.
// REPO_URL, COMMIT_SHA) without every `type: workflow` step having to
// re-declare them. Fall back to an empty object when the parent has
// no data either so the child still has a valid JSON object for
// storing step outputs.
let child_data = if !self.inputs.is_null() {
self.inputs.clone()
} else if context.workflow.data.is_object() {
context.workflow.data.clone()
} else {
serde_json::json!({})
};
let child_instance_id = host
.start_workflow(&self.workflow_id, self.version, child_data)
@@ -132,8 +138,8 @@ impl StepBody for SubWorkflowStep {
#[cfg(test)]
mod tests {
use super::*;
use crate::models::schema::SchemaType;
use crate::models::ExecutionPointer;
use crate::models::schema::SchemaType;
use crate::primitives::test_helpers::*;
use crate::traits::step::HostContext;
use serde_json::json;
@@ -170,10 +176,7 @@ mod tests {
let def_id = definition_id.to_string();
let result_id = self.result_id.clone();
Box::pin(async move {
self.started
.lock()
.unwrap()
.push((def_id, version, data));
self.started.lock().unwrap().push((def_id, version, data));
Ok(result_id)
})
}
@@ -265,10 +268,7 @@ mod tests {
let result = step.run(&ctx).await.unwrap();
assert!(result.proceed);
assert_eq!(
result.output_data,
Some(json!({"result": "success"}))
);
assert_eq!(result.output_data, Some(json!({"result": "success"})));
}
#[tokio::test]
@@ -292,10 +292,7 @@ mod tests {
let result = step.run(&ctx).await.unwrap();
assert!(result.proceed);
assert_eq!(
result.output_data,
Some(json!({"a": 1, "b": 2}))
);
assert_eq!(result.output_data, Some(json!({"a": 1, "b": 2})));
}
#[tokio::test]
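The inheritance rule in `SubWorkflowStep` above has three tiers. The real step works on `serde_json::Value`; in this sketch `None` stands in for JSON null and a string map for a JSON object, which is enough to show the precedence (all values are placeholders):

```rust
use std::collections::HashMap;

type Data = HashMap<String, String>;

// Precedence: explicit step inputs win, then the parent workflow's data,
// then an empty object so the child always has somewhere to store outputs.
fn child_data(inputs: Option<&Data>, parent: Option<&Data>) -> Data {
    inputs.or(parent).cloned().unwrap_or_default()
}

fn main() {
    let parent: Data = [("REPO_URL".to_string(), "placeholder-url".to_string())]
        .into_iter()
        .collect();
    let explicit: Data = [("ONLY".to_string(), "this".to_string())]
        .into_iter()
        .collect();

    // Explicit inputs shadow the parent entirely.
    assert_eq!(child_data(Some(&explicit), Some(&parent)), explicit);
    // No inputs: the child inherits the parent's top-level fields.
    assert_eq!(child_data(None, Some(&parent)), parent);
    // Neither: the child still gets a valid empty object.
    assert!(child_data(None, None).is_empty());
}
```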

View File

@@ -69,7 +69,10 @@ mod tests {
let result = step.run(&ctx).await.unwrap();
assert!(!result.proceed);
assert_eq!(result.branch_values, Some(vec![json!(null)]));
assert_eq!(result.persistence_data, Some(json!({"children_active": true})));
assert_eq!(
result.persistence_data,
Some(json!({"children_active": true}))
);
}
#[tokio::test]
@@ -141,6 +144,9 @@ mod tests {
let result = step.run(&ctx).await.unwrap();
assert!(!result.proceed);
assert!(result.branch_values.is_none());
assert_eq!(result.persistence_data, Some(json!({"children_active": true})));
assert_eq!(
result.persistence_data,
Some(json!({"children_active": true}))
);
}
}

View File

@@ -3,9 +3,9 @@ use std::sync::Arc;
use async_trait::async_trait;
use tokio::sync::Mutex;
use crate::Result;
use crate::models::LifecycleEvent;
use crate::traits::LifecyclePublisher;
use crate::Result;
/// An in-memory implementation of `LifecyclePublisher` for testing.
#[derive(Debug, Clone)]

View File

@@ -4,8 +4,8 @@ use std::sync::Arc;
use async_trait::async_trait;
use tokio::sync::Mutex;
use crate::traits::DistributedLockProvider;
use crate::Result;
use crate::traits::DistributedLockProvider;
/// An in-memory implementation of `DistributedLockProvider` for testing.
#[derive(Debug, Clone)]

View File

@@ -5,9 +5,7 @@ use async_trait::async_trait;
use chrono::{DateTime, Utc};
use tokio::sync::RwLock;
use crate::models::{
Event, EventSubscription, ExecutionError, ScheduledCommand, WorkflowInstance,
};
use crate::models::{Event, EventSubscription, ExecutionError, ScheduledCommand, WorkflowInstance};
use crate::traits::{
EventRepository, PersistenceProvider, ScheduledCommandRepository, SubscriptionRepository,
WorkflowRepository,
@@ -22,6 +20,9 @@ pub struct InMemoryPersistenceProvider {
subscriptions: Arc<RwLock<HashMap<String, EventSubscription>>>,
errors: Arc<RwLock<Vec<ExecutionError>>>,
scheduled_commands: Arc<RwLock<Vec<ScheduledCommand>>>,
/// Per-definition monotonic counter used to generate human-friendly
/// workflow instance names of the form `{definition_id}-{N}`.
sequences: Arc<RwLock<HashMap<String, u64>>>,
}
impl InMemoryPersistenceProvider {
@@ -32,6 +33,7 @@ impl InMemoryPersistenceProvider {
subscriptions: Arc::new(RwLock::new(HashMap::new())),
errors: Arc::new(RwLock::new(Vec::new())),
scheduled_commands: Arc::new(RwLock::new(Vec::new())),
sequences: Arc::new(RwLock::new(HashMap::new())),
}
}
@@ -107,6 +109,23 @@ impl WorkflowRepository for InMemoryPersistenceProvider {
.ok_or_else(|| WfeError::WorkflowNotFound(id.to_string()))
}
async fn get_workflow_instance_by_name(&self, name: &str) -> Result<WorkflowInstance> {
self.workflows
.read()
.await
.values()
.find(|w| w.name == name)
.cloned()
.ok_or_else(|| WfeError::WorkflowNotFound(name.to_string()))
}
async fn next_definition_sequence(&self, definition_id: &str) -> Result<u64> {
let mut seqs = self.sequences.write().await;
let next = seqs.get(definition_id).copied().unwrap_or(0) + 1;
seqs.insert(definition_id.to_string(), next);
Ok(next)
}
async fn get_workflow_instances(&self, ids: &[String]) -> Result<Vec<WorkflowInstance>> {
let workflows = self.workflows.read().await;
let mut result = Vec::new();
@@ -121,10 +140,7 @@ impl WorkflowRepository for InMemoryPersistenceProvider {
#[async_trait]
impl SubscriptionRepository for InMemoryPersistenceProvider {
async fn create_event_subscription(
&self,
subscription: &EventSubscription,
) -> Result<String> {
async fn create_event_subscription(&self, subscription: &EventSubscription) -> Result<String> {
let id = if subscription.id.is_empty() {
uuid::Uuid::new_v4().to_string()
} else {
@@ -217,11 +233,7 @@ impl SubscriptionRepository for InMemoryPersistenceProvider {
}
}
async fn clear_subscription_token(
&self,
subscription_id: &str,
token: &str,
) -> Result<()> {
async fn clear_subscription_token(&self, subscription_id: &str, token: &str) -> Result<()> {
let mut subs = self.subscriptions.write().await;
match subs.get_mut(subscription_id) {
Some(sub) => {
@@ -282,7 +294,9 @@ impl EventRepository for InMemoryPersistenceProvider {
let events = self.events.read().await;
let ids = events
.values()
.filter(|e| e.event_name == event_name && e.event_key == event_key && e.event_time <= as_of)
.filter(|e| {
e.event_name == event_name && e.event_key == event_key && e.event_time <= as_of
})
.map(|e| e.id.clone())
.collect();
Ok(ids)
@@ -325,9 +339,14 @@ impl ScheduledCommandRepository for InMemoryPersistenceProvider {
async fn process_commands(
&self,
as_of: DateTime<Utc>,
handler: &(dyn Fn(ScheduledCommand) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<()>> + Send>>
+ Send
+ Sync),
handler: &(
dyn Fn(
ScheduledCommand,
)
-> std::pin::Pin<Box<dyn std::future::Future<Output = Result<()>> + Send>>
+ Send
+ Sync
),
) -> Result<()> {
let as_of_millis = as_of.timestamp_millis();
let due: Vec<ScheduledCommand> = {
@@ -360,7 +379,7 @@ impl PersistenceProvider for InMemoryPersistenceProvider {
#[cfg(test)]
mod tests {
use super::*;
use crate::models::{Event, EventSubscription, ExecutionError, ScheduledCommand, CommandName};
use crate::models::{CommandName, Event, EventSubscription, ExecutionError, ScheduledCommand};
use crate::traits::{
EventRepository, PersistenceProvider, ScheduledCommandRepository, SubscriptionRepository,
WorkflowRepository,

View File

@@ -4,9 +4,9 @@ use std::sync::Arc;
use async_trait::async_trait;
use tokio::sync::Mutex;
use crate::Result;
use crate::models::QueueType;
use crate::traits::QueueProvider;
use crate::Result;
/// An in-memory implementation of `QueueProvider` for testing.
#[derive(Debug, Clone)]

View File

@@ -17,18 +17,9 @@ macro_rules! queue_suite {
#[tokio::test]
async fn enqueue_dequeue_fifo() {
let provider = ($factory)().await;
provider
.queue_work("a", QueueType::Workflow)
.await
.unwrap();
provider
.queue_work("b", QueueType::Workflow)
.await
.unwrap();
provider
.queue_work("c", QueueType::Workflow)
.await
.unwrap();
provider.queue_work("a", QueueType::Workflow).await.unwrap();
provider.queue_work("b", QueueType::Workflow).await.unwrap();
provider.queue_work("c", QueueType::Workflow).await.unwrap();
assert_eq!(
provider
@@ -94,16 +85,20 @@ macro_rules! queue_suite {
);
// Both should now be empty
assert!(provider
.dequeue_work(QueueType::Event)
.await
.unwrap()
.is_none());
assert!(provider
.dequeue_work(QueueType::Workflow)
.await
.unwrap()
.is_none());
assert!(
provider
.dequeue_work(QueueType::Event)
.await
.unwrap()
.is_none()
);
assert!(
provider
.dequeue_work(QueueType::Workflow)
.await
.unwrap()
.is_none()
);
}
}
};

View File

@@ -69,7 +69,7 @@ mod tests {
workflow: &instance,
cancellation_token: tokio_util::sync::CancellationToken::new(),
host_context: None,
log_sink: None,
};
mw.pre_step(&ctx).await.unwrap();
}
@@ -89,7 +89,7 @@ mod tests {
workflow: &instance,
cancellation_token: tokio_util::sync::CancellationToken::new(),
host_context: None,
log_sink: None,
};
let result = ExecutionResult::next();
mw.post_step(&ctx, &result).await.unwrap();

View File

@@ -1,12 +1,12 @@
pub mod lifecycle;
pub mod lock;
pub mod service;
pub mod log_sink;
pub mod middleware;
pub mod persistence;
pub mod queue;
pub mod registry;
pub mod search;
pub mod service;
pub mod step;
pub use lifecycle::LifecyclePublisher;

View File

@@ -1,9 +1,7 @@
use async_trait::async_trait;
use chrono::{DateTime, Utc};
use crate::models::{
Event, EventSubscription, ExecutionError, ScheduledCommand, WorkflowInstance,
};
use crate::models::{Event, EventSubscription, ExecutionError, ScheduledCommand, WorkflowInstance};
/// Persistence for workflow instances.
#[async_trait]
@@ -17,7 +15,14 @@ pub trait WorkflowRepository: Send + Sync {
) -> crate::Result<()>;
async fn get_runnable_instances(&self, as_at: DateTime<Utc>) -> crate::Result<Vec<String>>;
async fn get_workflow_instance(&self, id: &str) -> crate::Result<WorkflowInstance>;
async fn get_workflow_instance_by_name(&self, name: &str) -> crate::Result<WorkflowInstance>;
async fn get_workflow_instances(&self, ids: &[String]) -> crate::Result<Vec<WorkflowInstance>>;
/// Atomically allocate the next sequence number for a given workflow
/// definition id. Used by the host to assign human-friendly names of the
/// form `{definition_id}-{N}` before inserting a new workflow instance.
/// Guaranteed monotonic per definition_id; no guarantees across definitions.
async fn next_definition_sequence(&self, definition_id: &str) -> crate::Result<u64>;
}
/// Persistence for event subscriptions.
@@ -79,9 +84,14 @@ pub trait ScheduledCommandRepository: Send + Sync {
async fn process_commands(
&self,
as_of: DateTime<Utc>,
handler: &(dyn Fn(ScheduledCommand) -> std::pin::Pin<Box<dyn std::future::Future<Output = crate::Result<()>> + Send>>
+ Send
+ Sync),
handler: &(
dyn Fn(
ScheduledCommand,
) -> std::pin::Pin<
Box<dyn std::future::Future<Output = crate::Result<()>> + Send>,
> + Send
+ Sync
),
) -> crate::Result<()>;
}

View File

@@ -1,6 +1,6 @@
use async_trait::async_trait;
use serde::de::DeserializeOwned;
use serde::Serialize;
use serde::de::DeserializeOwned;
use crate::models::{ExecutionPointer, ExecutionResult, WorkflowInstance, WorkflowStep};

View File

@@ -9,7 +9,7 @@ description = "Deno bindings for the WFE workflow engine"
[dependencies]
wfe-core = { workspace = true, features = ["test-support"] }
wfe = { version = "1.8.0", path = "../wfe", registry = "sunbeam" }
wfe = { version = "1.9.0", path = "../wfe", registry = "sunbeam" }
deno_core = { workspace = true }
deno_error = { workspace = true }
tokio = { workspace = true }

View File

@@ -3,9 +3,9 @@ use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use std::time::Duration;
use tokio::sync::{mpsc, oneshot};
use wfe_core::WfeError;
use wfe_core::models::ExecutionResult;
use wfe_core::traits::step::{StepBody, StepExecutionContext};
use wfe_core::WfeError;
/// A request sent from the executor (tokio) to the V8 thread.
pub struct StepRequest {
@@ -160,7 +160,9 @@ pub fn deserialize_execution_result(
value: &serde_json::Value,
) -> wfe_core::Result<ExecutionResult> {
let js_result: JsExecutionResult = serde_json::from_value(value.clone()).map_err(|e| {
WfeError::StepExecution(format!("failed to deserialize ExecutionResult from JS: {e}"))
WfeError::StepExecution(format!(
"failed to deserialize ExecutionResult from JS: {e}"
))
})?;
Ok(ExecutionResult {
@@ -186,6 +188,7 @@ mod tests {
fn make_test_context() -> (WorkflowInstance, WorkflowStep, ExecutionPointer) {
let instance = WorkflowInstance {
id: "wf-1".into(),
name: "test-def-1".into(),
workflow_definition_id: "test-def".into(),
version: 1,
description: None,
@@ -373,7 +376,9 @@ mod tests {
assert_eq!(req.step_type, "MyStep");
assert_eq!(req.request_id, 0);
req.response_tx
.send(Ok(serde_json::json!({"proceed": true, "outputData": {"done": true}})))
.send(Ok(
serde_json::json!({"proceed": true, "outputData": {"done": true}}),
))
.unwrap();
});

View File

@@ -1,7 +1,7 @@
use std::time::Duration;
use deno_core::op2;
use deno_core::OpState;
use deno_core::op2;
use wfe_core::builder::WorkflowBuilder;
use wfe_core::models::ErrorBehavior;

View File

@@ -1,5 +1,5 @@
use deno_core::op2;
use deno_core::OpState;
use deno_core::op2;
use crate::state::WfeState;

View File

@@ -1,7 +1,7 @@
use std::sync::Arc;
use deno_core::op2;
use deno_core::OpState;
use deno_core::op2;
use crate::state::WfeState;

View File

@@ -1,7 +1,7 @@
use std::sync::Arc;
use deno_core::op2;
use deno_core::OpState;
use deno_core::op2;
use crate::bridge::JsStepBody;
use crate::state::WfeState;
@@ -23,10 +23,9 @@ pub async fn op_register_step(
};
let counter = Arc::new(std::sync::atomic::AtomicU32::new(0));
host.register_step_factory(
&step_type,
move || Box::new(JsStepBody::new(tx.clone(), counter.clone())),
)
host.register_step_factory(&step_type, move || {
Box::new(JsStepBody::new(tx.clone(), counter.clone()))
})
.await;
Ok(())

View File

@@ -1,5 +1,5 @@
use deno_core::op2;
use deno_core::OpState;
use deno_core::op2;
use crate::state::WfeState;

View File

@@ -16,8 +16,7 @@ async fn run_js(code: &str) -> Result<(), Box<dyn std::error::Error>> {
/// Helper: run a JS module in a fresh wfe runtime and drive the event loop.
async fn run_module(code: &str) -> Result<(), Box<dyn std::error::Error>> {
let mut runtime = create_wfe_runtime();
let specifier =
deno_core::ModuleSpecifier::parse("ext:wfe-deno/test-module.js").unwrap();
let specifier = deno_core::ModuleSpecifier::parse("ext:wfe-deno/test-module.js").unwrap();
let module_id = runtime
.load_main_es_module_from_code(&specifier, code.to_string())
.await
@@ -27,8 +26,7 @@ async fn run_module(code: &str) -> Result<(), Box<dyn std::error::Error>> {
.run_event_loop(Default::default())
.await
.map_err(|e| format!("event loop error: {e}"))?;
eval.await
.map_err(|e| format!("module eval error: {e}"))?;
eval.await.map_err(|e| format!("module eval error: {e}"))?;
Ok(())
}

View File

@@ -1,10 +1,10 @@
use futures::io::AsyncBufReadExt;
use futures::StreamExt;
use futures::io::AsyncBufReadExt;
use k8s_openapi::api::core::v1::Pod;
use kube::api::LogParams;
use kube::{Api, Client};
use wfe_core::traits::log_sink::{LogChunk, LogSink, LogStreamType};
use wfe_core::WfeError;
use wfe_core::traits::log_sink::{LogChunk, LogSink, LogStreamType};
/// Stream logs from a pod container, optionally forwarding to a LogSink.
///
@@ -29,9 +29,7 @@ pub async fn stream_logs(
};
let stream = pods.log_stream(pod_name, &params).await.map_err(|e| {
WfeError::StepExecution(format!(
"failed to stream logs from pod '{pod_name}': {e}"
))
WfeError::StepExecution(format!("failed to stream logs from pod '{pod_name}': {e}"))
})?;
let mut stdout = String::new();

View File

@@ -60,9 +60,7 @@ pub fn build_job(
cluster
.image_pull_secrets
.iter()
.map(|s| LocalObjectReference {
name: s.clone(),
})
.map(|s| LocalObjectReference { name: s.clone() })
.collect(),
)
};
@@ -70,7 +68,13 @@ pub fn build_job(
let node_selector = if cluster.node_selector.is_empty() {
None
} else {
Some(cluster.node_selector.iter().map(|(k, v)| (k.clone(), v.clone())).collect::<BTreeMap<_, _>>())
Some(
cluster
.node_selector
.iter()
.map(|(k, v)| (k.clone(), v.clone()))
.collect::<BTreeMap<_, _>>(),
)
};
Job {
@@ -108,8 +112,11 @@ fn resolve_command(config: &KubernetesStepConfig) -> (Option<Vec<String>>, Optio
if let Some(ref cmd) = config.command {
(Some(cmd.clone()), None)
} else if let Some(ref run) = config.run {
// Use bash so that scripts can rely on `set -o pipefail`, process
// substitution, arrays, and other bashisms that dash (/bin/sh on
// debian-family images) does not support.
(
Some(vec!["/bin/sh".into(), "-c".into()]),
Some(vec!["/bin/bash".into(), "-c".into()]),
Some(vec![run.clone()]),
)
} else {
@@ -174,7 +181,13 @@ pub fn sanitize_name(name: &str) -> String {
let sanitized: String = name
.to_lowercase()
.chars()
.map(|c| if c.is_ascii_alphanumeric() || c == '-' { c } else { '-' })
.map(|c| {
if c.is_ascii_alphanumeric() || c == '-' {
c
} else {
'-'
}
})
.take(63)
.collect();
sanitized.trim_end_matches('-').to_string()
@@ -203,7 +216,13 @@ mod tests {
pull_policy: None,
namespace: None,
};
let job = build_job(&config, "test-step", "wfe-abc", &HashMap::new(), &default_cluster());
let job = build_job(
&config,
"test-step",
"wfe-abc",
&HashMap::new(),
&default_cluster(),
);
assert_eq!(job.metadata.name, Some("test-step".into()));
assert_eq!(job.metadata.namespace, Some("wfe-abc".into()));
@@ -242,7 +261,7 @@ mod tests {
assert_eq!(
container.command,
Some(vec!["/bin/sh".into(), "-c".into()])
Some(vec!["/bin/bash".into(), "-c".into()])
);
assert_eq!(container.args, Some(vec!["npm test".into()]));
assert_eq!(container.working_dir, Some("/app".into()));
@@ -265,10 +284,7 @@ mod tests {
let job = build_job(&config, "build", "ns", &HashMap::new(), &default_cluster());
let container = &job.spec.unwrap().template.spec.unwrap().containers[0];
assert_eq!(
container.command,
Some(vec!["make".into(), "build".into()])
);
assert_eq!(container.command, Some(vec!["make".into(), "build".into()]));
assert!(container.args.is_none());
}
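The `/bin/bash` switch the tests above assert can be shown in a self-contained version of the command-resolution rule (simplified signature; the real `resolve_command` takes the step config):

```rust
// Precedence: an explicit `command` is passed through untouched; a `run`
// script is wrapped in `/bin/bash -c` so `set -o pipefail`, process
// substitution, and arrays work (dash, /bin/sh on debian-family images,
// lacks them); neither set falls back to the image's entrypoint.
fn resolve_command(
    command: Option<Vec<String>>,
    run: Option<String>,
) -> (Option<Vec<String>>, Option<Vec<String>>) {
    if let Some(cmd) = command {
        (Some(cmd), None)
    } else if let Some(script) = run {
        (
            Some(vec!["/bin/bash".into(), "-c".into()]),
            Some(vec![script]),
        )
    } else {
        (None, None)
    }
}

fn main() {
    let (cmd, args) = resolve_command(None, Some("npm test".into()));
    assert_eq!(cmd, Some(vec!["/bin/bash".to_string(), "-c".to_string()]));
    assert_eq!(args, Some(vec!["npm test".to_string()]));

    let (cmd, args) = resolve_command(Some(vec!["make".into(), "build".into()]), None);
    assert_eq!(cmd, Some(vec!["make".to_string(), "build".to_string()]));
    assert!(args.is_none());
}
```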
@@ -286,8 +302,11 @@ mod tests {
pull_policy: None,
namespace: None,
};
let overrides: HashMap<String, String> =
[("WORKFLOW_ID".into(), "wf-123".into()), ("APP_ENV".into(), "staging".into())].into();
let overrides: HashMap<String, String> = [
("WORKFLOW_ID".into(), "wf-123".into()),
("APP_ENV".into(), "staging".into()),
]
.into();
let job = build_job(&config, "step", "ns", &overrides, &default_cluster());
let container = &job.spec.unwrap().template.spec.unwrap().containers[0];
@@ -376,7 +395,13 @@ mod tests {
pull_policy: None,
namespace: None,
};
let job = build_job(&config, "my-step", "ns", &HashMap::new(), &default_cluster());
let job = build_job(
&config,
"my-step",
"ns",
&HashMap::new(),
&default_cluster(),
);
let labels = job.metadata.labels.as_ref().unwrap();
assert_eq!(labels.get(LABEL_STEP_NAME), Some(&"my-step".to_string()));
assert_eq!(

View File

@@ -16,7 +16,13 @@ pub fn namespace_name(prefix: &str, workflow_id: &str) -> String {
let sanitized: String = raw
.to_lowercase()
.chars()
.map(|c| if c.is_ascii_alphanumeric() || c == '-' { c } else { '-' })
.map(|c| {
if c.is_ascii_alphanumeric() || c == '-' {
c
} else {
'-'
}
})
.take(63)
.collect();
// Trim trailing hyphens
@@ -55,9 +61,9 @@ pub async fn ensure_namespace(
..Default::default()
};
api.create(&PostParams::default(), &ns)
.await
.map_err(|e| WfeError::StepExecution(format!("failed to create namespace '{name}': {e}")))?;
api.create(&PostParams::default(), &ns).await.map_err(|e| {
WfeError::StepExecution(format!("failed to create namespace '{name}': {e}"))
})?;
Ok(())
}
@@ -65,9 +71,9 @@ pub async fn ensure_namespace(
/// Delete a namespace and all resources within it.
pub async fn delete_namespace(client: &Client, name: &str) -> Result<(), WfeError> {
let api: Api<Namespace> = Api::all(client.clone());
api.delete(name, &Default::default())
.await
.map_err(|e| WfeError::StepExecution(format!("failed to delete namespace '{name}': {e}")))?;
api.delete(name, &Default::default()).await.map_err(|e| {
WfeError::StepExecution(format!("failed to delete namespace '{name}': {e}"))
})?;
Ok(())
}
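Both `sanitize_name` in the job builder and `namespace_name` above apply the same rule — Kubernetes object names must be lowercase DNS labels of at most 63 characters with no trailing hyphen — which is small enough to test standalone:

```rust
// Lowercase, replace anything that is not [a-z0-9-] with '-', cap at 63
// characters, and trim trailing hyphens (a name may not end with one).
fn sanitize_label(raw: &str) -> String {
    let sanitized: String = raw
        .to_lowercase()
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() || c == '-' { c } else { '-' })
        .take(63)
        .collect();
    sanitized.trim_end_matches('-').to_string()
}

fn main() {
    assert_eq!(sanitize_label("Build Image (v2)"), "build-image--v2");
    assert_eq!(sanitize_label("wfe_My-Step!"), "wfe-my-step");
    // Long names are truncated to the 63-character limit.
    assert_eq!(sanitize_label(&"a".repeat(100)).len(), 63);
}
```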

View File

@@ -152,10 +152,12 @@ mod tests {
#[test]
fn build_output_data_json_value() {
let parsed: HashMap<String, String> =
[("count".into(), "42".into()), ("flag".into(), "true".into())]
.into_iter()
.collect();
let parsed: HashMap<String, String> = [
("count".into(), "42".into()),
("flag".into(), "true".into()),
]
.into_iter()
.collect();
let data = build_output_data("s", "", "", 0, &parsed);
// Numbers and booleans should be parsed as JSON, not strings.
assert_eq!(data["count"], 42);
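The real helper builds a `serde_json::Value`; this stand-in enum (hypothetical, simplified to integers and booleans) is enough to show the promotion rule the test above checks — captured values that parse as numbers or booleans become typed, everything else stays a string:

```rust
#[derive(Debug, PartialEq)]
enum OutputValue {
    Num(i64),
    Bool(bool),
    Str(String),
}

// Try the stricter parses first, fall back to a plain string.
fn promote(raw: &str) -> OutputValue {
    if let Ok(n) = raw.parse::<i64>() {
        OutputValue::Num(n)
    } else if let Ok(b) = raw.parse::<bool>() {
        OutputValue::Bool(b)
    } else {
        OutputValue::Str(raw.to_string())
    }
}

fn main() {
    assert_eq!(promote("42"), OutputValue::Num(42));
    assert_eq!(promote("true"), OutputValue::Bool(true));
    assert_eq!(promote("hello"), OutputValue::Str("hello".into()));
}
```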

View File

@@ -77,8 +77,16 @@ pub fn build_service_pod(svc: &ServiceDefinition, namespace: &str) -> Pod {
command,
args,
resources: Some(ResourceRequirements {
limits: if limits.is_empty() { None } else { Some(limits) },
requests: if requests.is_empty() { None } else { Some(requests) },
limits: if limits.is_empty() {
None
} else {
Some(limits)
},
requests: if requests.is_empty() {
None
} else {
Some(requests)
},
..Default::default()
}),
..Default::default()
@@ -230,10 +238,7 @@ mod tests {
let ports = spec.ports.as_ref().unwrap();
assert_eq!(ports.len(), 1);
assert_eq!(ports[0].port, 5432);
assert_eq!(
ports[0].target_port,
Some(IntOrString::Int(5432))
);
assert_eq!(ports[0].target_port, Some(IntOrString::Int(5432)));
}
#[test]
@@ -241,10 +246,7 @@ mod tests {
let svc = ServiceDefinition {
name: "app".into(),
image: "myapp".into(),
ports: vec![
WfeServicePort::tcp(8080),
WfeServicePort::tcp(8443),
],
ports: vec![WfeServicePort::tcp(8080), WfeServicePort::tcp(8443)],
env: Default::default(),
readiness: None,
command: vec![],

View File

@@ -4,9 +4,9 @@ use async_trait::async_trait;
use k8s_openapi::api::core::v1::Pod;
use kube::api::PostParams;
use kube::{Api, Client};
use wfe_core::WfeError;
use wfe_core::models::service::{ServiceDefinition, ServiceEndpoint};
use wfe_core::traits::ServiceProvider;
use wfe_core::WfeError;
use crate::config::ClusterConfig;
use crate::logs::wait_for_pod_running;
@@ -77,11 +77,8 @@ impl ServiceProvider for KubernetesServiceProvider {
.map(|r| Duration::from_millis(r.timeout_ms))
.unwrap_or(Duration::from_secs(120));
match tokio::time::timeout(
timeout,
wait_for_pod_running(&self.client, &ns, &svc.name),
)
.await
match tokio::time::timeout(timeout, wait_for_pod_running(&self.client, &ns, &svc.name))
.await
{
Ok(Ok(())) => {}
Ok(Err(e)) => {

View File

@@ -6,9 +6,9 @@ use k8s_openapi::api::batch::v1::Job;
use k8s_openapi::api::core::v1::Pod;
use kube::api::{ListParams, PostParams};
use kube::{Api, Client};
use wfe_core::WfeError;
use wfe_core::models::ExecutionResult;
use wfe_core::traits::step::{StepBody, StepExecutionContext};
use wfe_core::WfeError;
use crate::cleanup::delete_job;
use crate::config::{ClusterConfig, KubernetesStepConfig};
@@ -86,13 +86,7 @@ impl StepBody for KubernetesStep {
let env_overrides = extract_workflow_env(&context.workflow.data);
// 4. Build Job manifest.
let job_manifest = build_job(
&self.config,
&step_name,
&ns,
&env_overrides,
&self.cluster,
);
let job_manifest = build_job(&self.config, &step_name, &ns, &env_overrides, &self.cluster);
let job_name = job_manifest
.metadata
.name
@@ -111,7 +105,15 @@ impl StepBody for KubernetesStep {
let result = if let Some(timeout_ms) = self.config.timeout_ms {
match tokio::time::timeout(
Duration::from_millis(timeout_ms),
self.execute_job(&client, &ns, &job_name, &step_name, definition_id, workflow_id, context),
self.execute_job(
&client,
&ns,
&job_name,
&step_name,
definition_id,
workflow_id,
context,
),
)
.await
{
@@ -125,8 +127,16 @@ impl StepBody for KubernetesStep {
}
}
} else {
self.execute_job(&client, &ns, &job_name, &step_name, definition_id, workflow_id, context)
.await
self.execute_job(
&client,
&ns,
&job_name,
&step_name,
definition_id,
workflow_id,
context,
)
.await
};
// Always attempt cleanup.
@@ -205,9 +215,7 @@ async fn wait_for_job_pod(
.list(&ListParams::default().labels(&selector))
.await
.map_err(|e| {
WfeError::StepExecution(format!(
"failed to list pods for job '{job_name}': {e}"
))
WfeError::StepExecution(format!("failed to list pods for job '{job_name}': {e}"))
})?;
if let Some(pod) = pod_list.items.first() {
@@ -236,9 +244,10 @@ async fn wait_for_job_completion(
// Poll Job status.
for _ in 0..600 {
let job = jobs.get(job_name).await.map_err(|e| {
WfeError::StepExecution(format!("failed to get job '{job_name}': {e}"))
})?;
let job = jobs
.get(job_name)
.await
.map_err(|e| WfeError::StepExecution(format!("failed to get job '{job_name}': {e}")))?;
if let Some(status) = &job.status {
if let Some(conditions) = &status.conditions {
@@ -352,9 +361,6 @@ mod tests {
let data = serde_json::json!({"config": {"nested": true}});
let env = extract_workflow_env(&data);
// Nested object serialized as JSON string.
assert_eq!(
env.get("CONFIG"),
Some(&r#"{"nested":true}"#.to_string())
);
assert_eq!(env.get("CONFIG"), Some(&r#"{"nested":true}"#.to_string()));
}
}

View File

@@ -1,13 +1,13 @@
use std::collections::HashMap;
use wfe_core::models::service::{ReadinessCheck, ReadinessProbe, ServiceDefinition, ServicePort};
use wfe_core::traits::step::StepBody;
use wfe_core::traits::ServiceProvider;
use wfe_kubernetes::config::{ClusterConfig, KubernetesStepConfig};
use wfe_kubernetes::namespace;
use wfe_core::traits::step::StepBody;
use wfe_kubernetes::KubernetesServiceProvider;
use wfe_kubernetes::cleanup;
use wfe_kubernetes::client;
use wfe_kubernetes::KubernetesServiceProvider;
use wfe_kubernetes::config::{ClusterConfig, KubernetesStepConfig};
use wfe_kubernetes::namespace;
/// Path to the Lima sunbeam VM kubeconfig.
fn kubeconfig_path() -> String {
@@ -64,10 +64,14 @@ async fn namespace_create_and_delete() {
let client = client::create_client(&config).await.unwrap();
let ns = "wfe-test-ns-lifecycle";
namespace::ensure_namespace(&client, ns, "test-wf").await.unwrap();
namespace::ensure_namespace(&client, ns, "test-wf")
.await
.unwrap();
// Idempotent — creating again should succeed.
namespace::ensure_namespace(&client, ns, "test-wf").await.unwrap();
namespace::ensure_namespace(&client, ns, "test-wf")
.await
.unwrap();
namespace::delete_namespace(&client, ns).await.unwrap();
}
@@ -107,10 +111,12 @@ async fn run_echo_job() {
assert!(result.proceed);
let output = result.output_data.unwrap();
assert!(output["echo-step.stdout"]
.as_str()
.unwrap()
.contains("hello from k8s"));
assert!(
output["echo-step.stdout"]
.as_str()
.unwrap()
.contains("hello from k8s")
);
assert_eq!(output["echo-step.exit_code"], 0);
// Cleanup namespace.
@@ -132,8 +138,7 @@ async fn run_job_with_wfe_output() {
let mut step =
wfe_kubernetes::KubernetesStep::new(step_cfg, config.clone(), k8s_client.clone());
let instance =
wfe_core::models::WorkflowInstance::new("output-wf", 1, serde_json::json!({}));
let instance = wfe_core::models::WorkflowInstance::new("output-wf", 1, serde_json::json!({}));
let mut ws = wfe_core::models::WorkflowStep::new(0, "alpine-output");
ws.name = Some("output-step".into());
let pointer = wfe_core::models::ExecutionPointer::new(0);
@@ -250,8 +255,7 @@ async fn run_job_with_timeout() {
let mut step =
wfe_kubernetes::KubernetesStep::new(step_cfg, config.clone(), k8s_client.clone());
let instance =
wfe_core::models::WorkflowInstance::new("timeout-wf", 1, serde_json::json!({}));
let instance = wfe_core::models::WorkflowInstance::new("timeout-wf", 1, serde_json::json!({}));
let mut ws = wfe_core::models::WorkflowStep::new(0, "alpine-timeout");
ws.name = Some("timeout-step".into());
let pointer = wfe_core::models::ExecutionPointer::new(0);
@@ -484,7 +488,10 @@ async fn service_provider_provision_duplicate_name_fails() {
};
// First provision succeeds.
let endpoints = provider.provision(workflow_id, &[svc.clone()]).await.unwrap();
let endpoints = provider
.provision(workflow_id, &[svc.clone()])
.await
.unwrap();
assert_eq!(endpoints.len(), 1);
// Second provision with same name should fail (pod already exists).
@@ -505,7 +512,9 @@ async fn service_provider_provision_service_object_conflict() {
let workflow_id = &unique_id("svc-conflict");
let ns = namespace::namespace_name(&config.namespace_prefix, workflow_id);
namespace::ensure_namespace(&k8s_client, &ns, workflow_id).await.unwrap();
namespace::ensure_namespace(&k8s_client, &ns, workflow_id)
.await
.unwrap();
// Pre-create just the K8s Service (not the pod).
let svc_def = nginx_service();
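The lifecycle test above leans on `ensure_namespace` being idempotent: creating a namespace that already exists must still succeed. A minimal in-memory model of that contract (the `HashSet`-backed signature here is hypothetical; the real function talks to the Kubernetes API and returns a unit result):

```rust
use std::collections::HashSet;

/// Sketch of the idempotency contract: `Ok` whether or not the
/// namespace already exists. `Ok(true)` = created, `Ok(false)` = was
/// already there (the real API just returns success in both cases).
fn ensure_namespace(cluster: &mut HashSet<String>, ns: &str) -> Result<bool, String> {
    // `HashSet::insert` is itself idempotent: a second insert is a no-op.
    Ok(cluster.insert(ns.to_string()))
}

fn main() {
    let mut cluster = HashSet::new();
    assert_eq!(ensure_namespace(&mut cluster, "wfe-test-ns-lifecycle"), Ok(true));
    // Creating again should succeed, matching the test's second call.
    assert_eq!(ensure_namespace(&mut cluster, "wfe-test-ns-lifecycle"), Ok(false));
    println!("ok");
}
```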


@@ -80,7 +80,7 @@ impl SearchIndex for OpenSearchIndex {
.client
.indices()
.exists(opensearch::indices::IndicesExistsParts::Index(&[
&self.index_name,
&self.index_name
]))
.send()
.await


@@ -1,6 +1,6 @@
use chrono::Utc;
use opensearch::http::transport::Transport;
use opensearch::OpenSearch;
use opensearch::http::transport::Transport;
use pretty_assertions::assert_eq;
use serde_json::json;
use uuid::Uuid;
@@ -60,7 +60,7 @@ async fn cleanup(provider: &OpenSearchIndex) {
.client()
.indices()
.delete(opensearch::indices::IndicesDeleteParts::Index(&[
provider.index_name(),
provider.index_name()
]))
.send()
.await;
@@ -164,7 +164,10 @@ async fn index_multiple_and_paginate() {
refresh_index(&provider).await;
// Search all, but skip 2 and take 2
let page = provider.search("Paginated workflow", 2, 2, &[]).await.unwrap();
let page = provider
.search("Paginated workflow", 2, 2, &[])
.await
.unwrap();
assert_eq!(page.total, 5);
assert_eq!(page.data.len(), 2);
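The pagination test reads as plain offset paging: `search(query, skip, take, filters)` skips two hits, takes two, and still reports the full match count in `total`. A self-contained sketch of that skip/take arithmetic (the `paginate` helper is illustrative, not part of the crate):

```rust
/// Model of offset paging as the test asserts it: `skip` is the offset
/// into the full hit list, `take` the page size, and the first element
/// of the return value is the total match count, not the page length.
fn paginate<T: Clone>(hits: &[T], skip: usize, take: usize) -> (usize, Vec<T>) {
    let page = hits.iter().skip(skip).take(take).cloned().collect();
    (hits.len(), page)
}

fn main() {
    // Five indexed documents, page 2 of size 2 (skip=2, take=2).
    let hits: Vec<u32> = (1..=5).collect();
    let (total, page) = paginate(&hits, 2, 2);
    assert_eq!(total, 5);     // matches `page.total == 5`
    assert_eq!(page.len(), 2); // matches `page.data.len() == 2`
    println!("ok");
}
```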


@@ -6,8 +6,8 @@ use sqlx::postgres::PgPoolOptions;
use sqlx::{PgPool, Row};
use wfe_core::models::{
CommandName, Event, EventSubscription, ExecutionError, ExecutionPointer, ScheduledCommand,
WorkflowInstance, WorkflowStatus, PointerStatus,
CommandName, Event, EventSubscription, ExecutionError, ExecutionPointer, PointerStatus,
ScheduledCommand, WorkflowInstance, WorkflowStatus,
};
use wfe_core::traits::{
EventRepository, PersistenceProvider, ScheduledCommandRepository, SubscriptionRepository,
@@ -57,7 +57,9 @@ impl PostgresPersistenceProvider {
"Suspended" => Ok(WorkflowStatus::Suspended),
"Complete" => Ok(WorkflowStatus::Complete),
"Terminated" => Ok(WorkflowStatus::Terminated),
other => Err(WfeError::Persistence(format!("Unknown workflow status: {other}"))),
other => Err(WfeError::Persistence(format!(
"Unknown workflow status: {other}"
))),
}
}
@@ -88,7 +90,9 @@ impl PostgresPersistenceProvider {
"Compensated" => Ok(PointerStatus::Compensated),
"Cancelled" => Ok(PointerStatus::Cancelled),
"PendingPredecessor" => Ok(PointerStatus::PendingPredecessor),
other => Err(WfeError::Persistence(format!("Unknown pointer status: {other}"))),
other => Err(WfeError::Persistence(format!(
"Unknown pointer status: {other}"
))),
}
}
@@ -103,7 +107,9 @@ impl PostgresPersistenceProvider {
match s {
"ProcessWorkflow" => Ok(CommandName::ProcessWorkflow),
"ProcessEvent" => Ok(CommandName::ProcessEvent),
other => Err(WfeError::Persistence(format!("Unknown command name: {other}"))),
other => Err(WfeError::Persistence(format!(
"Unknown command name: {other}"
))),
}
}
@@ -118,8 +124,9 @@ impl PostgresPersistenceProvider {
.map_err(|e| WfeError::Persistence(format!("Failed to serialize children: {e}")))?;
let scope_json = serde_json::to_value(&p.scope)
.map_err(|e| WfeError::Persistence(format!("Failed to serialize scope: {e}")))?;
let ext_json = serde_json::to_value(&p.extension_attributes)
.map_err(|e| WfeError::Persistence(format!("Failed to serialize extension_attributes: {e}")))?;
let ext_json = serde_json::to_value(&p.extension_attributes).map_err(|e| {
WfeError::Persistence(format!("Failed to serialize extension_attributes: {e}"))
})?;
sqlx::query(
r#"INSERT INTO wfc.execution_pointers
@@ -158,13 +165,11 @@ impl PostgresPersistenceProvider {
}
async fn load_pointers(&self, workflow_id: &str) -> Result<Vec<ExecutionPointer>> {
let rows = sqlx::query(
"SELECT * FROM wfc.execution_pointers WHERE workflow_id = $1",
)
.bind(workflow_id)
.fetch_all(&self.pool)
.await
.map_err(Self::map_sqlx_err)?;
let rows = sqlx::query("SELECT * FROM wfc.execution_pointers WHERE workflow_id = $1")
.bind(workflow_id)
.fetch_all(&self.pool)
.await
.map_err(Self::map_sqlx_err)?;
let mut pointers = Vec::with_capacity(rows.len());
for row in &rows {
@@ -183,8 +188,9 @@ impl PostgresPersistenceProvider {
let scope: Vec<String> = serde_json::from_value(scope_json)
.map_err(|e| WfeError::Persistence(format!("Failed to deserialize scope: {e}")))?;
let extension_attributes: HashMap<String, serde_json::Value> =
serde_json::from_value(ext_json)
.map_err(|e| WfeError::Persistence(format!("Failed to deserialize extension_attributes: {e}")))?;
serde_json::from_value(ext_json).map_err(|e| {
WfeError::Persistence(format!("Failed to deserialize extension_attributes: {e}"))
})?;
let status_str: String = row.get("status");
@@ -226,11 +232,12 @@ impl WorkflowRepository for PostgresPersistenceProvider {
sqlx::query(
r#"INSERT INTO wfc.workflows
(id, definition_id, version, description, reference, status, data,
(id, name, definition_id, version, description, reference, status, data,
next_execution, create_time, complete_time)
VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10)"#,
VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11)"#,
)
.bind(&id)
.bind(&instance.name)
.bind(&instance.workflow_definition_id)
.bind(instance.version as i32)
.bind(&instance.description)
@@ -245,7 +252,8 @@ impl WorkflowRepository for PostgresPersistenceProvider {
.map_err(Self::map_sqlx_err)?;
// Insert execution pointers
self.insert_pointers(&mut tx, &id, &instance.execution_pointers).await?;
self.insert_pointers(&mut tx, &id, &instance.execution_pointers)
.await?;
tx.commit().await.map_err(Self::map_sqlx_err)?;
Ok(id)
@@ -256,11 +264,12 @@ impl WorkflowRepository for PostgresPersistenceProvider {
sqlx::query(
r#"UPDATE wfc.workflows SET
definition_id=$2, version=$3, description=$4, reference=$5,
status=$6, data=$7, next_execution=$8, create_time=$9, complete_time=$10
name=$2, definition_id=$3, version=$4, description=$5, reference=$6,
status=$7, data=$8, next_execution=$9, create_time=$10, complete_time=$11
WHERE id=$1"#,
)
.bind(&instance.id)
.bind(&instance.name)
.bind(&instance.workflow_definition_id)
.bind(instance.version as i32)
.bind(&instance.description)
@@ -297,11 +306,12 @@ impl WorkflowRepository for PostgresPersistenceProvider {
sqlx::query(
r#"UPDATE wfc.workflows SET
definition_id=$2, version=$3, description=$4, reference=$5,
status=$6, data=$7, next_execution=$8, create_time=$9, complete_time=$10
name=$2, definition_id=$3, version=$4, description=$5, reference=$6,
status=$7, data=$8, next_execution=$9, create_time=$10, complete_time=$11
WHERE id=$1"#,
)
.bind(&instance.id)
.bind(&instance.name)
.bind(&instance.workflow_definition_id)
.bind(instance.version as i32)
.bind(&instance.description)
@@ -385,6 +395,7 @@ impl WorkflowRepository for PostgresPersistenceProvider {
Ok(WorkflowInstance {
id: row.get("id"),
name: row.get("name"),
workflow_definition_id: row.get("definition_id"),
version: row.get::<i32, _>("version") as u32,
description: row.get("description"),
@@ -398,6 +409,35 @@ impl WorkflowRepository for PostgresPersistenceProvider {
})
}
async fn get_workflow_instance_by_name(&self, name: &str) -> Result<WorkflowInstance> {
let row = sqlx::query("SELECT id FROM wfc.workflows WHERE name = $1")
.bind(name)
.fetch_optional(&self.pool)
.await
.map_err(Self::map_sqlx_err)?
.ok_or_else(|| WfeError::WorkflowNotFound(name.to_string()))?;
let id: String = row.get("id");
self.get_workflow_instance(&id).await
}
async fn next_definition_sequence(&self, definition_id: &str) -> Result<u64> {
// UPSERT the counter atomically and return the new value. `RETURNING`
// gives us the post-increment number in a single round trip.
let row = sqlx::query(
r#"INSERT INTO wfc.definition_sequences (definition_id, next_num)
VALUES ($1, 1)
ON CONFLICT (definition_id) DO UPDATE
SET next_num = wfc.definition_sequences.next_num + 1
RETURNING next_num"#,
)
.bind(definition_id)
.fetch_one(&self.pool)
.await
.map_err(Self::map_sqlx_err)?;
let next: i64 = row.get("next_num");
Ok(next as u64)
}
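The UPSERT above amounts to an atomic fetch-and-increment keyed by definition id: the first call for a definition returns 1, each subsequent call returns the next number, and counters for different definitions are independent. A minimal in-memory model of the same semantics (a plain `HashMap` standing in for the `wfc.definition_sequences` table; unlike the SQL version, this sketch is not concurrency-safe):

```rust
use std::collections::HashMap;

/// In-memory model of `next_definition_sequence`: the Postgres
/// `INSERT ... ON CONFLICT DO UPDATE ... RETURNING next_num` behaves
/// like fetch-and-increment per definition id.
fn next_definition_sequence(seqs: &mut HashMap<String, u64>, definition_id: &str) -> u64 {
    let n = seqs.entry(definition_id.to_string()).or_insert(0);
    *n += 1;
    *n
}

fn main() {
    let mut seqs = HashMap::new();
    assert_eq!(next_definition_sequence(&mut seqs, "ci"), 1);
    assert_eq!(next_definition_sequence(&mut seqs, "ci"), 2);
    // Each definition gets its own sequence.
    assert_eq!(next_definition_sequence(&mut seqs, "unit-tests"), 1);
    println!("ok");
}
```

This per-definition counter is what feeds the human-friendly instance names ("instance sequencing") mentioned in the changelog.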
async fn get_workflow_instances(&self, ids: &[String]) -> Result<Vec<WorkflowInstance>> {
let mut result = Vec::new();
for id in ids {
@@ -413,10 +453,7 @@ impl WorkflowRepository for PostgresPersistenceProvider {
#[async_trait]
impl SubscriptionRepository for PostgresPersistenceProvider {
async fn create_event_subscription(
&self,
subscription: &EventSubscription,
) -> Result<String> {
async fn create_event_subscription(&self, subscription: &EventSubscription) -> Result<String> {
let id = if subscription.id.is_empty() {
uuid::Uuid::new_v4().to_string()
} else {
@@ -471,18 +508,14 @@ impl SubscriptionRepository for PostgresPersistenceProvider {
}
async fn terminate_subscription(&self, subscription_id: &str) -> Result<()> {
let result = sqlx::query(
"DELETE FROM wfc.event_subscriptions WHERE id = $1",
)
.bind(subscription_id)
.execute(&self.pool)
.await
.map_err(Self::map_sqlx_err)?;
let result = sqlx::query("DELETE FROM wfc.event_subscriptions WHERE id = $1")
.bind(subscription_id)
.execute(&self.pool)
.await
.map_err(Self::map_sqlx_err)?;
if result.rows_affected() == 0 {
return Err(WfeError::SubscriptionNotFound(
subscription_id.to_string(),
));
return Err(WfeError::SubscriptionNotFound(subscription_id.to_string()));
}
Ok(())
}
@@ -550,20 +583,14 @@ impl SubscriptionRepository for PostgresPersistenceProvider {
.await
.map_err(Self::map_sqlx_err)?;
if exists.is_none() {
return Err(WfeError::SubscriptionNotFound(
subscription_id.to_string(),
));
return Err(WfeError::SubscriptionNotFound(subscription_id.to_string()));
}
return Ok(false);
}
Ok(true)
}
async fn clear_subscription_token(
&self,
subscription_id: &str,
token: &str,
) -> Result<()> {
async fn clear_subscription_token(&self, subscription_id: &str, token: &str) -> Result<()> {
let result = sqlx::query(
r#"UPDATE wfc.event_subscriptions
SET external_token = NULL, external_worker_id = NULL, external_token_expiry = NULL
@@ -576,9 +603,7 @@ impl SubscriptionRepository for PostgresPersistenceProvider {
.map_err(Self::map_sqlx_err)?;
if result.rows_affected() == 0 {
return Err(WfeError::SubscriptionNotFound(
subscription_id.to_string(),
));
return Err(WfeError::SubscriptionNotFound(subscription_id.to_string()));
}
Ok(())
}
@@ -731,20 +756,23 @@ impl ScheduledCommandRepository for PostgresPersistenceProvider {
async fn process_commands(
&self,
as_of: DateTime<Utc>,
handler: &(dyn Fn(ScheduledCommand) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<()>> + Send>>
+ Send
+ Sync),
handler: &(
dyn Fn(
ScheduledCommand,
)
-> std::pin::Pin<Box<dyn std::future::Future<Output = Result<()>> + Send>>
+ Send
+ Sync
),
) -> Result<()> {
let as_of_millis = as_of.timestamp_millis();
// 1. SELECT due commands (do not delete yet)
let rows = sqlx::query(
"SELECT * FROM wfc.scheduled_commands WHERE execute_time <= $1",
)
.bind(as_of_millis)
.fetch_all(&self.pool)
.await
.map_err(Self::map_sqlx_err)?;
let rows = sqlx::query("SELECT * FROM wfc.scheduled_commands WHERE execute_time <= $1")
.bind(as_of_millis)
.fetch_all(&self.pool)
.await
.map_err(Self::map_sqlx_err)?;
let commands: Vec<ScheduledCommand> = rows
.iter()
@@ -803,6 +831,7 @@ impl PersistenceProvider for PostgresPersistenceProvider {
sqlx::query(
r#"CREATE TABLE IF NOT EXISTS wfc.workflows (
id TEXT PRIMARY KEY,
name TEXT NOT NULL UNIQUE,
definition_id TEXT NOT NULL,
version INT NOT NULL,
description TEXT,
@@ -818,6 +847,39 @@ impl PersistenceProvider for PostgresPersistenceProvider {
.await
.map_err(Self::map_sqlx_err)?;
// Upgrade older databases that lack the `name` column. Back-fill with
// the UUID so the NOT NULL + UNIQUE invariant holds retroactively;
// callers can re-run with a real name on the next persist.
sqlx::query(
r#"DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_schema = 'wfc' AND table_name = 'workflows'
AND column_name = 'name'
) THEN
ALTER TABLE wfc.workflows ADD COLUMN name TEXT;
UPDATE wfc.workflows SET name = id WHERE name IS NULL;
ALTER TABLE wfc.workflows ALTER COLUMN name SET NOT NULL;
CREATE UNIQUE INDEX IF NOT EXISTS idx_workflows_name
ON wfc.workflows (name);
END IF;
END$$;"#,
)
.execute(&self.pool)
.await
.map_err(Self::map_sqlx_err)?;
sqlx::query(
r#"CREATE TABLE IF NOT EXISTS wfc.definition_sequences (
definition_id TEXT PRIMARY KEY,
next_num BIGINT NOT NULL
)"#,
)
.execute(&self.pool)
.await
.map_err(Self::map_sqlx_err)?;
sqlx::query(
r#"CREATE TABLE IF NOT EXISTS wfc.execution_pointers (
id TEXT PRIMARY KEY,


@@ -229,7 +229,10 @@ mod tests {
#[test]
fn subcommand_args_nextest_has_run() {
assert_eq!(CargoCommand::Nextest.subcommand_args(), vec!["nextest", "run"]);
assert_eq!(
CargoCommand::Nextest.subcommand_args(),
vec!["nextest", "run"]
);
}
#[test]
@@ -241,8 +244,14 @@ mod tests {
fn install_package_external_tools() {
assert_eq!(CargoCommand::Audit.install_package(), Some("cargo-audit"));
assert_eq!(CargoCommand::Deny.install_package(), Some("cargo-deny"));
assert_eq!(CargoCommand::Nextest.install_package(), Some("cargo-nextest"));
assert_eq!(CargoCommand::LlvmCov.install_package(), Some("cargo-llvm-cov"));
assert_eq!(
CargoCommand::Nextest.install_package(),
Some("cargo-nextest")
);
assert_eq!(
CargoCommand::LlvmCov.install_package(),
Some("cargo-llvm-cov")
);
}
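These assertions pin down the mapping that `ensure_tool_available` relies on: built-in cargo subcommands need no install step, while the external tools resolve to installable packages. A standalone sketch of that lookup (the free function is illustrative; the crate models this as methods on `CargoCommand`):

```rust
/// Illustrative version of the subcommand -> install-package mapping
/// asserted in the tests above. `None` means the subcommand ships with
/// cargo itself and `ensure_tool_available` is a no-op.
fn install_package(subcmd: &str) -> Option<&'static str> {
    match subcmd {
        "audit" => Some("cargo-audit"),
        "deny" => Some("cargo-deny"),
        "nextest" => Some("cargo-nextest"),
        "llvm-cov" => Some("cargo-llvm-cov"),
        _ => None, // built-ins: build, test, fmt, clippy, doc, ...
    }
}

fn main() {
    assert_eq!(install_package("nextest"), Some("cargo-nextest"));
    assert_eq!(install_package("llvm-cov"), Some("cargo-llvm-cov"));
    assert_eq!(install_package("build"), None);
    println!("ok");
}
```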
#[test]


@@ -1,7 +1,7 @@
use async_trait::async_trait;
use wfe_core::WfeError;
use wfe_core::models::ExecutionResult;
use wfe_core::traits::step::{StepBody, StepExecutionContext};
use wfe_core::WfeError;
use crate::cargo::config::{CargoCommand, CargoConfig};
@@ -88,7 +88,10 @@ impl CargoStep {
/// Ensures an external cargo tool is installed before running it.
/// For built-in cargo subcommands, this is a no-op.
async fn ensure_tool_available(&self) -> Result<(), WfeError> {
let (binary, package) = match (self.config.command.binary_name(), self.config.command.install_package()) {
let (binary, package) = match (
self.config.command.binary_name(),
self.config.command.install_package(),
) {
(Some(b), Some(p)) => (b, p),
_ => return Ok(()),
};
@@ -117,9 +120,11 @@ impl CargoStep {
.stderr(std::process::Stdio::piped())
.output()
.await
.map_err(|e| WfeError::StepExecution(format!(
"Failed to add llvm-tools-preview component: {e}"
)))?;
.map_err(|e| {
WfeError::StepExecution(format!(
"Failed to add llvm-tools-preview component: {e}"
))
})?;
if !component.status.success() {
let stderr = String::from_utf8_lossy(&component.stderr);
@@ -135,9 +140,7 @@ impl CargoStep {
.stderr(std::process::Stdio::piped())
.output()
.await
.map_err(|e| WfeError::StepExecution(format!(
"Failed to install {package}: {e}"
)))?;
.map_err(|e| WfeError::StepExecution(format!("Failed to install {package}: {e}")))?;
if !install.status.success() {
let stderr = String::from_utf8_lossy(&install.stderr);
@@ -162,17 +165,16 @@ impl CargoStep {
let doc_dir = std::path::Path::new(working_dir).join("target/doc");
let json_path = std::fs::read_dir(&doc_dir)
.map_err(|e| WfeError::StepExecution(format!(
"failed to read target/doc: {e}"
)))?
.map_err(|e| WfeError::StepExecution(format!("failed to read target/doc: {e}")))?
.filter_map(|entry| entry.ok())
.find(|entry| {
entry.path().extension().is_some_and(|ext| ext == "json")
})
.find(|entry| entry.path().extension().is_some_and(|ext| ext == "json"))
.map(|entry| entry.path())
.ok_or_else(|| WfeError::StepExecution(
"no JSON file found in target/doc/ — did rustdoc --output-format json succeed?".to_string()
))?;
.ok_or_else(|| {
WfeError::StepExecution(
"no JSON file found in target/doc/ — did rustdoc --output-format json succeed?"
.to_string(),
)
})?;
tracing::info!(path = %json_path.display(), "reading rustdoc JSON");
@@ -180,20 +182,20 @@ impl CargoStep {
WfeError::StepExecution(format!("failed to read {}: {e}", json_path.display()))
})?;
let krate: rustdoc_types::Crate = serde_json::from_str(&json_content).map_err(|e| {
WfeError::StepExecution(format!("failed to parse rustdoc JSON: {e}"))
})?;
let krate: rustdoc_types::Crate = serde_json::from_str(&json_content)
.map_err(|e| WfeError::StepExecution(format!("failed to parse rustdoc JSON: {e}")))?;
let mdx_files = transform_to_mdx(&krate);
let output_dir = self.config.output_dir
let output_dir = self
.config
.output_dir
.as_deref()
.unwrap_or("target/doc/mdx");
let output_path = std::path::Path::new(working_dir).join(output_dir);
write_mdx_files(&mdx_files, &output_path).map_err(|e| {
WfeError::StepExecution(format!("failed to write MDX files: {e}"))
})?;
write_mdx_files(&mdx_files, &output_path)
.map_err(|e| WfeError::StepExecution(format!("failed to write MDX files: {e}")))?;
let file_count = mdx_files.len();
tracing::info!(
@@ -214,7 +216,10 @@ impl CargoStep {
outputs.insert(
"mdx.files".to_string(),
serde_json::Value::Array(
file_paths.into_iter().map(serde_json::Value::String).collect(),
file_paths
.into_iter()
.map(serde_json::Value::String)
.collect(),
),
);
@@ -224,7 +229,10 @@ impl CargoStep {
#[async_trait]
impl StepBody for CargoStep {
async fn run(&mut self, context: &StepExecutionContext<'_>) -> wfe_core::Result<ExecutionResult> {
async fn run(
&mut self,
context: &StepExecutionContext<'_>,
) -> wfe_core::Result<ExecutionResult> {
let step_name = context.step.name.as_deref().unwrap_or("unknown");
let subcmd = self.config.command.as_str();
@@ -248,9 +256,9 @@ impl StepBody for CargoStep {
}
}
} else {
cmd.output()
.await
.map_err(|e| WfeError::StepExecution(format!("Failed to spawn cargo {subcmd}: {e}")))?
cmd.output().await.map_err(|e| {
WfeError::StepExecution(format!("Failed to spawn cargo {subcmd}: {e}"))
})?
};
let stdout = String::from_utf8_lossy(&output.stdout).to_string();
@@ -317,7 +325,11 @@ mod tests {
let cmd = step.build_command();
let prog = cmd.as_std().get_program().to_str().unwrap();
assert_eq!(prog, "cargo");
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(args, vec!["build"]);
}
@@ -329,7 +341,11 @@ mod tests {
let cmd = step.build_command();
let prog = cmd.as_std().get_program().to_str().unwrap();
assert_eq!(prog, "rustup");
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(args, vec!["run", "nightly", "cargo", "test"]);
}
@@ -340,8 +356,15 @@ mod tests {
config.features = vec!["feat1".to_string(), "feat2".to_string()];
let step = CargoStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
assert_eq!(args, vec!["check", "-p", "my-crate", "--features", "feat1,feat2"]);
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(
args,
vec!["check", "-p", "my-crate", "--features", "feat1,feat2"]
);
}
#[test]
@@ -351,8 +374,20 @@ mod tests {
config.target = Some("aarch64-unknown-linux-gnu".to_string());
let step = CargoStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
assert_eq!(args, vec!["build", "--release", "--target", "aarch64-unknown-linux-gnu"]);
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(
args,
vec![
"build",
"--release",
"--target",
"aarch64-unknown-linux-gnu"
]
);
}
#[test]
@@ -364,10 +399,23 @@ mod tests {
config.extra_args = vec!["--".to_string(), "-D".to_string(), "warnings".to_string()];
let step = CargoStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(
args,
vec!["clippy", "--all-features", "--no-default-features", "--profile", "dev", "--", "-D", "warnings"]
vec![
"clippy",
"--all-features",
"--no-default-features",
"--profile",
"dev",
"--",
"-D",
"warnings"
]
);
}
@@ -377,17 +425,29 @@ mod tests {
config.extra_args = vec!["--check".to_string()];
let step = CargoStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(args, vec!["fmt", "--check"]);
}
#[test]
fn build_command_publish_dry_run() {
let mut config = minimal_config(CargoCommand::Publish);
config.extra_args = vec!["--dry-run".to_string(), "--registry".to_string(), "my-reg".to_string()];
config.extra_args = vec![
"--dry-run".to_string(),
"--registry".to_string(),
"my-reg".to_string(),
];
let step = CargoStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(args, vec!["publish", "--dry-run", "--registry", "my-reg"]);
}
@@ -398,18 +458,27 @@ mod tests {
config.release = true;
let step = CargoStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(args, vec!["doc", "--release", "--no-deps"]);
}
#[test]
fn build_command_env_vars() {
let mut config = minimal_config(CargoCommand::Build);
config.env.insert("RUSTFLAGS".to_string(), "-D warnings".to_string());
config
.env
.insert("RUSTFLAGS".to_string(), "-D warnings".to_string());
let step = CargoStep::new(config);
let cmd = step.build_command();
let envs: Vec<_> = cmd.as_std().get_envs().collect();
assert!(envs.iter().any(|(k, v)| *k == "RUSTFLAGS" && v == &Some("-D warnings".as_ref())));
assert!(
envs.iter()
.any(|(k, v)| *k == "RUSTFLAGS" && v == &Some("-D warnings".as_ref()))
);
}
#[test]
@@ -418,14 +487,21 @@ mod tests {
config.working_dir = Some("/my/project".to_string());
let step = CargoStep::new(config);
let cmd = step.build_command();
assert_eq!(cmd.as_std().get_current_dir(), Some(std::path::Path::new("/my/project")));
assert_eq!(
cmd.as_std().get_current_dir(),
Some(std::path::Path::new("/my/project"))
);
}
#[test]
fn build_command_audit() {
let step = CargoStep::new(minimal_config(CargoCommand::Audit));
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(args, vec!["audit"]);
}
@@ -435,7 +511,11 @@ mod tests {
config.extra_args = vec!["check".to_string()];
let step = CargoStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(args, vec!["deny", "check"]);
}
@@ -443,7 +523,11 @@ mod tests {
fn build_command_nextest() {
let step = CargoStep::new(minimal_config(CargoCommand::Nextest));
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(args, vec!["nextest", "run"]);
}
@@ -454,25 +538,44 @@ mod tests {
config.extra_args = vec!["--no-fail-fast".to_string()];
let step = CargoStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
assert_eq!(args, vec!["nextest", "run", "--features", "feat1", "--no-fail-fast"]);
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(
args,
vec!["nextest", "run", "--features", "feat1", "--no-fail-fast"]
);
}
#[test]
fn build_command_llvm_cov() {
let step = CargoStep::new(minimal_config(CargoCommand::LlvmCov));
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(args, vec!["llvm-cov"]);
}
#[test]
fn build_command_llvm_cov_with_args() {
let mut config = minimal_config(CargoCommand::LlvmCov);
config.extra_args = vec!["--html".to_string(), "--output-dir".to_string(), "coverage".to_string()];
config.extra_args = vec![
"--html".to_string(),
"--output-dir".to_string(),
"coverage".to_string(),
];
let step = CargoStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(args, vec!["llvm-cov", "--html", "--output-dir", "coverage"]);
}
@@ -482,10 +585,24 @@ mod tests {
let cmd = step.build_command();
let prog = cmd.as_std().get_program().to_str().unwrap();
assert_eq!(prog, "rustup");
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(
args,
vec!["run", "nightly", "cargo", "rustdoc", "--", "-Z", "unstable-options", "--output-format", "json"]
vec![
"run",
"nightly",
"cargo",
"rustdoc",
"--",
"-Z",
"unstable-options",
"--output-format",
"json"
]
);
}
@@ -496,10 +613,27 @@ mod tests {
config.extra_args = vec!["--no-deps".to_string()];
let step = CargoStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(
args,
vec!["run", "nightly", "cargo", "rustdoc", "-p", "my-crate", "--no-deps", "--", "-Z", "unstable-options", "--output-format", "json"]
vec![
"run",
"nightly",
"cargo",
"rustdoc",
"-p",
"my-crate",
"--no-deps",
"--",
"-Z",
"unstable-options",
"--output-format",
"json"
]
);
}
@@ -509,7 +643,11 @@ mod tests {
config.toolchain = Some("nightly-2024-06-01".to_string());
let step = CargoStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert!(args.contains(&"nightly-2024-06-01"));
}


@@ -117,8 +117,7 @@ fn render_module(module_path: &str, items: &[(&Item, &str)], krate: &Crate) -> S
items
.iter()
.find(|(item, kind)| {
*kind == "Modules"
&& item.name.as_deref() == module_path.split("::").last()
*kind == "Modules" && item.name.as_deref() == module_path.split("::").last()
})
.and_then(|(item, _)| item.docs.as_ref())
.map(|d| first_sentence(d))
@@ -136,8 +135,15 @@ fn render_module(module_path: &str, items: &[(&Item, &str)], krate: &Crate) -> S
}
let kind_order = [
"Modules", "Structs", "Enums", "Traits", "Functions",
"Type Aliases", "Constants", "Statics", "Macros",
"Modules",
"Structs",
"Enums",
"Traits",
"Functions",
"Type Aliases",
"Constants",
"Statics",
"Macros",
];
for kind in &kind_order {
@@ -266,16 +272,15 @@ fn render_signature(item: &Item, krate: &Crate) -> Option<String> {
}
Some(sig)
}
ItemEnum::TypeAlias(ta) => {
Some(format!("pub type {name} = {}", render_type(&ta.type_, krate)))
}
ItemEnum::Constant { type_, const_: c } => {
Some(format!(
"pub const {name}: {} = {}",
render_type(type_, krate),
c.value.as_deref().unwrap_or("...")
))
}
ItemEnum::TypeAlias(ta) => Some(format!(
"pub type {name} = {}",
render_type(&ta.type_, krate)
)),
ItemEnum::Constant { type_, const_: c } => Some(format!(
"pub const {name}: {} = {}",
render_type(type_, krate),
c.value.as_deref().unwrap_or("...")
)),
ItemEnum::Macro(_) => Some(format!("macro_rules! {name} {{ ... }}")),
_ => None,
}
@@ -309,7 +314,11 @@ fn render_type(ty: &Type, krate: &Crate) -> String {
}
Type::Generic(name) => name.clone(),
Type::Primitive(name) => name.clone(),
Type::BorrowedRef { lifetime, is_mutable, type_ } => {
Type::BorrowedRef {
lifetime,
is_mutable,
type_,
} => {
let mut s = String::from("&");
if let Some(lt) = lifetime {
s.push_str(lt);
@@ -346,7 +355,12 @@ fn render_type(ty: &Type, krate: &Crate) -> String {
.collect();
format!("impl {}", rendered.join(" + "))
}
Type::QualifiedPath { name, self_type, trait_, .. } => {
Type::QualifiedPath {
name,
self_type,
trait_,
..
} => {
let self_str = render_type(self_type, krate);
if let Some(t) = trait_ {
format!("<{self_str} as {}>::{name}", t.path)
@@ -417,11 +431,17 @@ mod tests {
deprecation: None,
inner: ItemEnum::Function(Function {
sig: FunctionSignature {
inputs: params.into_iter().map(|(n, t)| (n.to_string(), t)).collect(),
inputs: params
.into_iter()
.map(|(n, t)| (n.to_string(), t))
.collect(),
output,
is_c_variadic: false,
},
generics: Generics { params: vec![], where_predicates: vec![] },
generics: Generics {
params: vec![],
where_predicates: vec![],
},
header: FunctionHeader {
is_const: false,
is_unsafe: false,
@@ -446,7 +466,10 @@ mod tests {
deprecation: None,
inner: ItemEnum::Struct(Struct {
kind: StructKind::Unit,
generics: Generics { params: vec![], where_predicates: vec![] },
generics: Generics {
params: vec![],
where_predicates: vec![],
},
impls: vec![],
}),
}
@@ -464,7 +487,10 @@ mod tests {
attrs: vec![],
deprecation: None,
inner: ItemEnum::Enum(Enum {
generics: Generics { params: vec![], where_predicates: vec![] },
generics: Generics {
params: vec![],
where_predicates: vec![],
},
variants: vec![],
has_stripped_variants: false,
impls: vec![],
@@ -488,7 +514,10 @@ mod tests {
is_unsafe: false,
is_dyn_compatible: true,
items: vec![],
generics: Generics { params: vec![], where_predicates: vec![] },
generics: Generics {
params: vec![],
where_predicates: vec![],
},
bounds: vec![],
implementations: vec![],
}),
@@ -540,35 +569,57 @@ mod tests {
#[test]
fn render_type_tuple() {
let krate = empty_crate();
let ty = Type::Tuple(vec![Type::Primitive("u32".into()), Type::Primitive("String".into())]);
let ty = Type::Tuple(vec![
Type::Primitive("u32".into()),
Type::Primitive("String".into()),
]);
assert_eq!(render_type(&ty, &krate), "(u32, String)");
}
#[test]
fn render_type_slice() {
let krate = empty_crate();
assert_eq!(render_type(&Type::Slice(Box::new(Type::Primitive("u8".into()))), &krate), "[u8]");
assert_eq!(
render_type(&Type::Slice(Box::new(Type::Primitive("u8".into()))), &krate),
"[u8]"
);
}
#[test]
fn render_type_array() {
let krate = empty_crate();
let ty = Type::Array { type_: Box::new(Type::Primitive("u8".into())), len: "32".into() };
let ty = Type::Array {
type_: Box::new(Type::Primitive("u8".into())),
len: "32".into(),
};
assert_eq!(render_type(&ty, &krate), "[u8; 32]");
}
#[test]
fn render_type_raw_pointer() {
let krate = empty_crate();
let ty = Type::RawPointer { is_mutable: true, type_: Box::new(Type::Primitive("u8".into())) };
let ty = Type::RawPointer {
is_mutable: true,
type_: Box::new(Type::Primitive("u8".into())),
};
assert_eq!(render_type(&ty, &krate), "*mut u8");
}
#[test]
fn render_function_signature() {
let krate = empty_crate();
let item = make_function("add", vec![("a", Type::Primitive("u32".into())), ("b", Type::Primitive("u32".into()))], Some(Type::Primitive("u32".into())));
assert_eq!(render_signature(&item, &krate).unwrap(), "fn add(a: u32, b: u32) -> u32");
let item = make_function(
"add",
vec![
("a", Type::Primitive("u32".into())),
("b", Type::Primitive("u32".into())),
],
Some(Type::Primitive("u32".into())),
);
assert_eq!(
render_signature(&item, &krate).unwrap(),
"fn add(a: u32, b: u32) -> u32"
);
}
#[test]
@@ -581,25 +632,51 @@ mod tests {
#[test]
fn render_struct_signature() {
let krate = empty_crate();
assert_eq!(render_signature(&make_struct("MyStruct"), &krate).unwrap(), "pub struct MyStruct;");
assert_eq!(
render_signature(&make_struct("MyStruct"), &krate).unwrap(),
"pub struct MyStruct;"
);
}
#[test]
fn render_enum_signature() {
let krate = empty_crate();
assert_eq!(render_signature(&make_enum("Color"), &krate).unwrap(), "pub enum Color { }");
assert_eq!(
render_signature(&make_enum("Color"), &krate).unwrap(),
"pub enum Color { }"
);
}
#[test]
fn render_trait_signature() {
let krate = empty_crate();
assert_eq!(render_signature(&make_trait("Drawable"), &krate).unwrap(), "pub trait Drawable");
assert_eq!(
render_signature(&make_trait("Drawable"), &krate).unwrap(),
"pub trait Drawable"
);
}
#[test]
fn item_kind_labels() {
assert_eq!(item_kind_label(&ItemEnum::Module(Module { is_crate: false, items: vec![], is_stripped: false })), Some("Modules"));
assert_eq!(item_kind_label(&ItemEnum::Struct(Struct { kind: StructKind::Unit, generics: Generics { params: vec![], where_predicates: vec![] }, impls: vec![] })), Some("Structs"));
assert_eq!(
item_kind_label(&ItemEnum::Module(Module {
is_crate: false,
items: vec![],
is_stripped: false
})),
Some("Modules")
);
assert_eq!(
item_kind_label(&ItemEnum::Struct(Struct {
kind: StructKind::Unit,
generics: Generics {
params: vec![],
where_predicates: vec![]
},
impls: vec![]
})),
Some("Structs")
);
}
#[test]
@@ -613,7 +690,14 @@ mod tests {
let func = make_function("hello", vec![], None);
let id = Id(1);
krate.index.insert(id.clone(), func);
krate.paths.insert(id, ItemSummary { crate_id: 0, path: vec!["my_crate".into(), "hello".into()], kind: ItemKind::Function });
krate.paths.insert(
id,
ItemSummary {
crate_id: 0,
path: vec!["my_crate".into(), "hello".into()],
kind: ItemKind::Function,
},
);
let files = transform_to_mdx(&krate);
assert_eq!(files.len(), 1);
@@ -628,11 +712,25 @@ mod tests {
let mut krate = empty_crate();
let func = make_function("do_thing", vec![], None);
krate.index.insert(Id(1), func);
krate.paths.insert(Id(1), ItemSummary { crate_id: 0, path: vec!["mc".into(), "do_thing".into()], kind: ItemKind::Function });
krate.paths.insert(
Id(1),
ItemSummary {
crate_id: 0,
path: vec!["mc".into(), "do_thing".into()],
kind: ItemKind::Function,
},
);
let st = make_struct("Widget");
krate.index.insert(Id(2), st);
krate.paths.insert(Id(2), ItemSummary { crate_id: 0, path: vec!["mc".into(), "Widget".into()], kind: ItemKind::Struct });
krate.paths.insert(
Id(2),
ItemSummary {
crate_id: 0,
path: vec!["mc".into(), "Widget".into()],
kind: ItemKind::Struct,
},
);
let files = transform_to_mdx(&krate);
assert_eq!(files.len(), 1);
@@ -654,7 +752,11 @@ mod tests {
links: HashMap::new(),
attrs: vec![],
deprecation: None,
inner: ItemEnum::Module(Module { is_crate: true, items: vec![Id(1)], is_stripped: false }),
inner: ItemEnum::Module(Module {
is_crate: true,
items: vec![Id(1)],
is_stripped: false,
}),
};
krate.root = Id(0);
krate.index.insert(Id(0), root_module);
@@ -662,12 +764,23 @@ mod tests {
// Add a function so the module generates a file.
let func = make_function("f", vec![], None);
krate.index.insert(Id(1), func);
krate.paths.insert(Id(1), ItemSummary { crate_id: 0, path: vec!["f".into()], kind: ItemKind::Function });
krate.paths.insert(
Id(1),
ItemSummary {
crate_id: 0,
path: vec!["f".into()],
kind: ItemKind::Function,
},
);
let files = transform_to_mdx(&krate);
// The root module's description in frontmatter should have escaped quotes.
let index = files.iter().find(|f| f.path == "index.mdx").unwrap();
assert!(index.content.contains("\\\"quoted\\\""), "content: {}", index.content);
assert!(
index.content.contains("\\\"quoted\\\""),
"content: {}",
index.content
);
}
#[test]
@@ -677,7 +790,9 @@ mod tests {
path: "Option".into(),
id: Id(99),
args: Some(Box::new(rustdoc_types::GenericArgs::AngleBracketed {
args: vec![rustdoc_types::GenericArg::Type(Type::Primitive("u32".into()))],
args: vec![rustdoc_types::GenericArg::Type(Type::Primitive(
"u32".into(),
))],
constraints: vec![],
})),
});
@@ -687,13 +802,15 @@ mod tests {
#[test]
fn render_type_impl_trait() {
let krate = empty_crate();
let ty = Type::ImplTrait(vec![
rustdoc_types::GenericBound::TraitBound {
trait_: rustdoc_types::Path { path: "Display".into(), id: Id(99), args: None },
generic_params: vec![],
modifier: rustdoc_types::TraitBoundModifier::None,
let ty = Type::ImplTrait(vec![rustdoc_types::GenericBound::TraitBound {
trait_: rustdoc_types::Path {
path: "Display".into(),
id: Id(99),
args: None,
},
]);
generic_params: vec![],
modifier: rustdoc_types::TraitBoundModifier::None,
}]);
assert_eq!(render_type(&ty, &krate), "impl Display");
}
@@ -702,7 +819,11 @@ mod tests {
let krate = empty_crate();
let ty = Type::DynTrait(rustdoc_types::DynTrait {
traits: vec![rustdoc_types::PolyTrait {
trait_: rustdoc_types::Path { path: "Error".into(), id: Id(99), args: None },
trait_: rustdoc_types::Path {
path: "Error".into(),
id: Id(99),
args: None,
},
generic_params: vec![],
}],
lifetime: None,
@@ -720,7 +841,12 @@ mod tests {
is_c_variadic: false,
},
generic_params: vec![],
header: FunctionHeader { is_const: false, is_unsafe: false, is_async: false, abi: Abi::Rust },
header: FunctionHeader {
is_const: false,
is_unsafe: false,
is_async: false,
abi: Abi::Rust,
},
}));
assert_eq!(render_type(&ty, &krate), "fn(u32) -> bool");
}
@@ -728,7 +854,10 @@ mod tests {
#[test]
fn render_type_const_pointer() {
let krate = empty_crate();
let ty = Type::RawPointer { is_mutable: false, type_: Box::new(Type::Primitive("u8".into())) };
let ty = Type::RawPointer {
is_mutable: false,
type_: Box::new(Type::Primitive("u8".into())),
};
assert_eq!(render_type(&ty, &krate), "*const u8");
}
@@ -743,9 +872,16 @@ mod tests {
let krate = empty_crate();
let ty = Type::QualifiedPath {
name: "Item".into(),
args: Box::new(rustdoc_types::GenericArgs::AngleBracketed { args: vec![], constraints: vec![] }),
args: Box::new(rustdoc_types::GenericArgs::AngleBracketed {
args: vec![],
constraints: vec![],
}),
self_type: Box::new(Type::Generic("T".into())),
trait_: Some(rustdoc_types::Path { path: "Iterator".into(), id: Id(99), args: None }),
trait_: Some(rustdoc_types::Path {
path: "Iterator".into(),
id: Id(99),
args: None,
}),
};
assert_eq!(render_type(&ty, &krate), "<T as Iterator>::Item");
}
@@ -753,74 +889,137 @@ mod tests {
#[test]
fn item_kind_label_all_variants() {
// Test the remaining untested variants
assert_eq!(item_kind_label(&ItemEnum::Enum(Enum {
generics: Generics { params: vec![], where_predicates: vec![] },
variants: vec![], has_stripped_variants: false, impls: vec![],
})), Some("Enums"));
assert_eq!(item_kind_label(&ItemEnum::Trait(Trait {
is_auto: false, is_unsafe: false, is_dyn_compatible: true,
items: vec![], generics: Generics { params: vec![], where_predicates: vec![] },
bounds: vec![], implementations: vec![],
})), Some("Traits"));
assert_eq!(
item_kind_label(&ItemEnum::Enum(Enum {
generics: Generics {
params: vec![],
where_predicates: vec![]
},
variants: vec![],
has_stripped_variants: false,
impls: vec![],
})),
Some("Enums")
);
assert_eq!(
item_kind_label(&ItemEnum::Trait(Trait {
is_auto: false,
is_unsafe: false,
is_dyn_compatible: true,
items: vec![],
generics: Generics {
params: vec![],
where_predicates: vec![]
},
bounds: vec![],
implementations: vec![],
})),
Some("Traits")
);
assert_eq!(item_kind_label(&ItemEnum::Macro("".into())), Some("Macros"));
assert_eq!(item_kind_label(&ItemEnum::Static(rustdoc_types::Static {
type_: Type::Primitive("u32".into()),
is_mutable: false,
is_unsafe: false,
expr: String::new(),
})), Some("Statics"));
assert_eq!(
item_kind_label(&ItemEnum::Static(rustdoc_types::Static {
type_: Type::Primitive("u32".into()),
is_mutable: false,
is_unsafe: false,
expr: String::new(),
})),
Some("Statics")
);
// Impl blocks should be skipped
assert_eq!(item_kind_label(&ItemEnum::Impl(rustdoc_types::Impl {
is_unsafe: false, generics: Generics { params: vec![], where_predicates: vec![] },
provided_trait_methods: vec![], trait_: None, for_: Type::Primitive("u32".into()),
items: vec![], is_negative: false, is_synthetic: false,
blanket_impl: None,
})), None);
assert_eq!(
item_kind_label(&ItemEnum::Impl(rustdoc_types::Impl {
is_unsafe: false,
generics: Generics {
params: vec![],
where_predicates: vec![]
},
provided_trait_methods: vec![],
trait_: None,
for_: Type::Primitive("u32".into()),
items: vec![],
is_negative: false,
is_synthetic: false,
blanket_impl: None,
})),
None
);
}
#[test]
fn render_constant_signature() {
let krate = empty_crate();
let item = Item {
id: Id(5), crate_id: 0,
name: Some("MAX_SIZE".into()), span: None,
visibility: Visibility::Public, docs: None,
links: HashMap::new(), attrs: vec![], deprecation: None,
id: Id(5),
crate_id: 0,
name: Some("MAX_SIZE".into()),
span: None,
visibility: Visibility::Public,
docs: None,
links: HashMap::new(),
attrs: vec![],
deprecation: None,
inner: ItemEnum::Constant {
type_: Type::Primitive("usize".into()),
const_: rustdoc_types::Constant { expr: "1024".into(), value: Some("1024".into()), is_literal: true },
const_: rustdoc_types::Constant {
expr: "1024".into(),
value: Some("1024".into()),
is_literal: true,
},
},
};
assert_eq!(render_signature(&item, &krate).unwrap(), "pub const MAX_SIZE: usize = 1024");
assert_eq!(
render_signature(&item, &krate).unwrap(),
"pub const MAX_SIZE: usize = 1024"
);
}
#[test]
fn render_type_alias_signature() {
let krate = empty_crate();
let item = Item {
id: Id(6), crate_id: 0,
name: Some("Result".into()), span: None,
visibility: Visibility::Public, docs: None,
links: HashMap::new(), attrs: vec![], deprecation: None,
id: Id(6),
crate_id: 0,
name: Some("Result".into()),
span: None,
visibility: Visibility::Public,
docs: None,
links: HashMap::new(),
attrs: vec![],
deprecation: None,
inner: ItemEnum::TypeAlias(rustdoc_types::TypeAlias {
type_: Type::Primitive("u32".into()),
generics: Generics { params: vec![], where_predicates: vec![] },
generics: Generics {
params: vec![],
where_predicates: vec![],
},
}),
};
assert_eq!(render_signature(&item, &krate).unwrap(), "pub type Result = u32");
assert_eq!(
render_signature(&item, &krate).unwrap(),
"pub type Result = u32"
);
}
#[test]
fn render_macro_signature() {
let krate = empty_crate();
let item = Item {
id: Id(7), crate_id: 0,
name: Some("my_macro".into()), span: None,
visibility: Visibility::Public, docs: None,
links: HashMap::new(), attrs: vec![], deprecation: None,
id: Id(7),
crate_id: 0,
name: Some("my_macro".into()),
span: None,
visibility: Visibility::Public,
docs: None,
links: HashMap::new(),
attrs: vec![],
deprecation: None,
inner: ItemEnum::Macro("macro body".into()),
};
assert_eq!(render_signature(&item, &krate).unwrap(), "macro_rules! my_macro { ... }");
assert_eq!(
render_signature(&item, &krate).unwrap(),
"macro_rules! my_macro { ... }"
);
}
#[test]
@@ -839,9 +1038,15 @@ mod tests {
#[test]
fn write_mdx_files_creates_directories() {
let tmp = tempfile::tempdir().unwrap();
let files = vec![MdxFile { path: "nested/module.mdx".into(), content: "# Test\n".into() }];
let files = vec![MdxFile {
path: "nested/module.mdx".into(),
content: "# Test\n".into(),
}];
write_mdx_files(&files, tmp.path()).unwrap();
assert!(tmp.path().join("nested/module.mdx").exists());
assert_eq!(std::fs::read_to_string(tmp.path().join("nested/module.mdx")).unwrap(), "# Test\n");
assert_eq!(
std::fs::read_to_string(tmp.path().join("nested/module.mdx")).unwrap(),
"# Test\n"
);
}
}


@@ -60,7 +60,10 @@ mod tests {
#[test]
fn command_as_str() {
assert_eq!(RustupCommand::Install.as_str(), "install");
assert_eq!(RustupCommand::ToolchainInstall.as_str(), "toolchain-install");
assert_eq!(
RustupCommand::ToolchainInstall.as_str(),
"toolchain-install"
);
assert_eq!(RustupCommand::ComponentAdd.as_str(), "component-add");
assert_eq!(RustupCommand::TargetAdd.as_str(), "target-add");
}
@@ -118,7 +121,11 @@ mod tests {
let config = RustupConfig {
command: RustupCommand::ComponentAdd,
toolchain: Some("nightly".to_string()),
components: vec!["clippy".to_string(), "rustfmt".to_string(), "rust-src".to_string()],
components: vec![
"clippy".to_string(),
"rustfmt".to_string(),
"rust-src".to_string(),
],
targets: vec![],
profile: None,
default_toolchain: None,
@@ -138,7 +145,10 @@ mod tests {
command: RustupCommand::TargetAdd,
toolchain: Some("stable".to_string()),
components: vec![],
targets: vec!["wasm32-unknown-unknown".to_string(), "aarch64-linux-android".to_string()],
targets: vec![
"wasm32-unknown-unknown".to_string(),
"aarch64-linux-android".to_string(),
],
profile: None,
default_toolchain: None,
extra_args: vec![],
@@ -147,7 +157,10 @@ mod tests {
let json = serde_json::to_string(&config).unwrap();
let de: RustupConfig = serde_json::from_str(&json).unwrap();
assert_eq!(de.command, RustupCommand::TargetAdd);
assert_eq!(de.targets, vec!["wasm32-unknown-unknown", "aarch64-linux-android"]);
assert_eq!(
de.targets,
vec!["wasm32-unknown-unknown", "aarch64-linux-android"]
);
}
#[test]


@@ -1,7 +1,7 @@
use async_trait::async_trait;
use wfe_core::WfeError;
use wfe_core::models::ExecutionResult;
use wfe_core::traits::step::{StepBody, StepExecutionContext};
use wfe_core::WfeError;
use crate::rustup::config::{RustupCommand, RustupConfig};
@@ -26,7 +26,8 @@ impl RustupStep {
fn build_install_command(&self) -> tokio::process::Command {
let mut cmd = tokio::process::Command::new("sh");
// Pipe rustup-init through sh with non-interactive flag.
let mut script = "curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y".to_string();
let mut script =
"curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y".to_string();
if let Some(ref profile) = self.config.profile {
script.push_str(&format!(" --profile {profile}"));
@@ -112,7 +113,10 @@ impl RustupStep {
#[async_trait]
impl StepBody for RustupStep {
async fn run(&mut self, context: &StepExecutionContext<'_>) -> wfe_core::Result<ExecutionResult> {
async fn run(
&mut self,
context: &StepExecutionContext<'_>,
) -> wfe_core::Result<ExecutionResult> {
let step_name = context.step.name.as_deref().unwrap_or("unknown");
let subcmd = self.config.command.as_str();
@@ -133,9 +137,9 @@ impl StepBody for RustupStep {
}
}
} else {
cmd.output()
.await
.map_err(|e| WfeError::StepExecution(format!("Failed to spawn rustup {subcmd}: {e}")))?
cmd.output().await.map_err(|e| {
WfeError::StepExecution(format!("Failed to spawn rustup {subcmd}: {e}"))
})?
};
let stdout = String::from_utf8_lossy(&output.stdout).to_string();
@@ -189,7 +193,11 @@ mod tests {
let cmd = step.build_command();
let prog = cmd.as_std().get_program().to_str().unwrap();
assert_eq!(prog, "sh");
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(args[0], "-c");
assert!(args[1].contains("rustup.rs"));
assert!(args[1].contains("-y"));
@@ -202,7 +210,11 @@ mod tests {
config.default_toolchain = Some("nightly".to_string());
let step = RustupStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert!(args[1].contains("--profile minimal"));
assert!(args[1].contains("--default-toolchain nightly"));
}
@@ -213,7 +225,11 @@ mod tests {
config.extra_args = vec!["--no-modify-path".to_string()];
let step = RustupStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert!(args[1].contains("--no-modify-path"));
}
@@ -233,8 +249,21 @@ mod tests {
let cmd = step.build_command();
let prog = cmd.as_std().get_program().to_str().unwrap();
assert_eq!(prog, "rustup");
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
assert_eq!(args, vec!["toolchain", "install", "nightly-2024-06-01", "--profile", "minimal"]);
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(
args,
vec![
"toolchain",
"install",
"nightly-2024-06-01",
"--profile",
"minimal"
]
);
}
#[test]
@@ -251,7 +280,11 @@ mod tests {
};
let step = RustupStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(args, vec!["toolchain", "install", "stable", "--force"]);
}
@@ -271,8 +304,22 @@ mod tests {
let cmd = step.build_command();
let prog = cmd.as_std().get_program().to_str().unwrap();
assert_eq!(prog, "rustup");
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
assert_eq!(args, vec!["component", "add", "clippy", "rustfmt", "--toolchain", "nightly"]);
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(
args,
vec![
"component",
"add",
"clippy",
"rustfmt",
"--toolchain",
"nightly"
]
);
}
#[test]
@@ -289,7 +336,11 @@ mod tests {
};
let step = RustupStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(args, vec!["component", "add", "rust-src"]);
}
@@ -309,8 +360,21 @@ mod tests {
let cmd = step.build_command();
let prog = cmd.as_std().get_program().to_str().unwrap();
assert_eq!(prog, "rustup");
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
assert_eq!(args, vec!["target", "add", "wasm32-unknown-unknown", "--toolchain", "stable"]);
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(
args,
vec![
"target",
"add",
"wasm32-unknown-unknown",
"--toolchain",
"stable"
]
);
}
#[test]
@@ -330,8 +394,20 @@ mod tests {
};
let step = RustupStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
assert_eq!(args, vec!["target", "add", "wasm32-unknown-unknown", "aarch64-linux-android"]);
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(
args,
vec![
"target",
"add",
"wasm32-unknown-unknown",
"aarch64-linux-android"
]
);
}
#[test]
@@ -348,10 +424,21 @@ mod tests {
};
let step = RustupStep::new(config);
let cmd = step.build_command();
let args: Vec<_> = cmd.as_std().get_args().map(|a| a.to_str().unwrap()).collect();
let args: Vec<_> = cmd
.as_std()
.get_args()
.map(|a| a.to_str().unwrap())
.collect();
assert_eq!(
args,
vec!["target", "add", "x86_64-unknown-linux-musl", "--toolchain", "nightly", "--force"]
vec![
"target",
"add",
"x86_64-unknown-linux-musl",
"--toolchain",
"nightly",
"--force"
]
);
}
}


@@ -1,17 +1,19 @@
use std::path::PathBuf;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let proto_files = vec!["proto/wfe/v1/wfe.proto"];
let out_dir = PathBuf::from(std::env::var("OUT_DIR")?);
let descriptor_path = out_dir.join("wfe_descriptor.bin");
let mut prost_config = prost_build::Config::new();
prost_config.include_file("mod.rs");
tonic_prost_build::configure()
.build_server(true)
.build_client(true)
.compile_with_config(
prost_config,
&proto_files,
&["proto"],
)?;
.file_descriptor_set_path(&descriptor_path)
.compile_with_config(prost_config, &proto_files, &["proto"])?;
Ok(())
}


@@ -45,6 +45,9 @@ message RegisteredDefinition {
string definition_id = 1;
uint32 version = 2;
uint32 step_count = 3;
// Human-friendly display name declared in the YAML (e.g. "Continuous
// Integration"). Empty when the definition did not set one.
string name = 4;
}
message ListDefinitionsRequest {}
@@ -58,6 +61,10 @@ message DefinitionSummary {
uint32 version = 2;
string description = 3;
uint32 step_count = 4;
// Human-friendly display name declared in the YAML (e.g. "Continuous
// Integration"). Empty when the definition did not set one; clients should
// fall back to `id` for presentation.
string name = 5;
}
// ─── Instances ───────────────────────────────────────────────────────
@@ -66,13 +73,23 @@ message StartWorkflowRequest {
string definition_id = 1;
uint32 version = 2;
google.protobuf.Struct data = 3;
// Optional caller-supplied name for this instance. Must be unique across
// all workflow instances. When unset the server auto-assigns
// `{definition_id}-{N}` using a per-definition monotonic counter.
string name = 4;
}
message StartWorkflowResponse {
string workflow_id = 1;
// Human-friendly name that was assigned to the new instance (either the
// caller override or the auto-generated `{definition_id}-{N}`).
string name = 2;
}
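The auto-naming contract documented in these comments can be sketched as a per-definition counter with a caller override. This is a minimal illustration under stated assumptions — `next_name` and the `HashMap` store are hypothetical, not the server's actual data structures:

```rust
use std::collections::HashMap;

// Hypothetical sketch of the `{definition_id}-{N}` auto-naming described in
// StartWorkflowRequest/StartWorkflowResponse: each definition keeps its own
// monotonic counter, and a caller-supplied name bypasses the generator.
fn next_name(
    counters: &mut HashMap<String, u64>,
    def_id: &str,
    override_name: Option<&str>,
) -> String {
    if let Some(name) = override_name {
        return name.to_string();
    }
    let n = counters.entry(def_id.to_string()).or_insert(0);
    *n += 1;
    format!("{def_id}-{n}")
}

fn main() {
    let mut counters = HashMap::new();
    assert_eq!(next_name(&mut counters, "ci", None), "ci-1");
    assert_eq!(next_name(&mut counters, "ci", None), "ci-2");
    assert_eq!(next_name(&mut counters, "deploy", None), "deploy-1");
    assert_eq!(next_name(&mut counters, "ci", Some("release-v1")), "release-v1");
    println!("ok");
}
```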
message GetWorkflowRequest {
// Accepts either the UUID `workflow_id` or the human-friendly instance
// name (e.g. "ci-42"). The server tries UUID first, then falls back to
// name-based lookup.
string workflow_id = 1;
}
@@ -201,6 +218,10 @@ message WorkflowInstance {
google.protobuf.Timestamp create_time = 8;
google.protobuf.Timestamp complete_time = 9;
repeated ExecutionPointer execution_pointers = 10;
// Human-friendly unique name, auto-assigned as `{definition_id}-{N}` at
// start time, or the caller-supplied override from StartWorkflowRequest.
// Interchangeable with `id` in Get/Cancel/Suspend/Resume/Watch/Logs RPCs.
string name = 11;
}
message ExecutionPointer {
@@ -222,6 +243,8 @@ message WorkflowSearchResult {
string reference = 5;
string description = 6;
google.protobuf.Timestamp create_time = 7;
// Human-friendly instance name (e.g. "ci-42").
string name = 8;
}
enum WorkflowStatus {

View File

@@ -15,3 +15,7 @@ include!(concat!(env!("OUT_DIR"), "/mod.rs"));
pub use prost;
pub use prost_types;
pub use tonic;
/// Encoded file descriptor set for gRPC reflection.
pub const FILE_DESCRIPTOR_SET: &[u8] =
include_bytes!(concat!(env!("OUT_DIR"), "/wfe_descriptor.bin"));


@@ -14,9 +14,9 @@ path = "src/main.rs"
[dependencies]
# Internal
wfe-core = { workspace = true, features = ["test-support"] }
wfe = { version = "1.8.0", path = "../wfe", registry = "sunbeam" }
wfe-yaml = { version = "1.8.0", path = "../wfe-yaml", registry = "sunbeam", features = ["rustlang", "buildkit", "containerd"] }
wfe-server-protos = { version = "1.8.0", path = "../wfe-server-protos", registry = "sunbeam" }
wfe = { version = "1.9.0", path = "../wfe", registry = "sunbeam" }
wfe-yaml = { version = "1.9.0", path = "../wfe-yaml", registry = "sunbeam", features = ["rustlang", "buildkit", "containerd", "kubernetes", "deno"] }
wfe-server-protos = { version = "1.9.0", path = "../wfe-server-protos", registry = "sunbeam" }
wfe-sqlite = { workspace = true }
wfe-postgres = { workspace = true }
wfe-valkey = { workspace = true }
@@ -26,6 +26,7 @@ opensearch = { workspace = true }
# gRPC
tonic = "0.14"
tonic-health = "0.14"
tonic-reflection = "0.14"
prost-types = "0.14"
# HTTP (webhooks)

wfe-server/README.md

@@ -0,0 +1,273 @@
# wfe-server
Headless workflow server with gRPC API, HTTP webhooks, and OIDC authentication.
## Quick Start
```bash
# Minimal (SQLite + in-memory queue)
wfe-server
# Production (Postgres + Valkey + OpenSearch + OIDC)
wfe-server \
--db-url postgres://wfe:secret@postgres:5432/wfe \
--queue valkey --queue-url redis://valkey:6379 \
--search-url http://opensearch:9200
```
## Docker
```bash
docker build -t wfe-server .
docker run -p 50051:50051 -p 8080:8080 wfe-server
```
## Configuration
Configuration is layered: **CLI flags > environment variables > TOML config file**.
### CLI Flags / Environment Variables
| Flag | Env Var | Default | Description |
|------|---------|---------|-------------|
| `--config` | - | `wfe-server.toml` | Path to TOML config file |
| `--grpc-addr` | `WFE_GRPC_ADDR` | `0.0.0.0:50051` | gRPC listen address |
| `--http-addr` | `WFE_HTTP_ADDR` | `0.0.0.0:8080` | HTTP listen address (webhooks) |
| `--persistence` | `WFE_PERSISTENCE` | `sqlite` | Persistence backend: `sqlite` or `postgres` |
| `--db-url` | `WFE_DB_URL` | `wfe.db` | Database URL or file path |
| `--queue` | `WFE_QUEUE` | `memory` | Queue backend: `memory` or `valkey` |
| `--queue-url` | `WFE_QUEUE_URL` | `redis://127.0.0.1:6379` | Valkey/Redis URL |
| `--search-url` | `WFE_SEARCH_URL` | *(none)* | OpenSearch URL (enables search) |
| `--workflows-dir` | `WFE_WORKFLOWS_DIR` | *(none)* | Directory to auto-load YAML workflows |
| `--auth-tokens` | `WFE_AUTH_TOKENS` | *(none)* | Comma-separated static bearer tokens |
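The layering above (CLI flags over env vars over the TOML file over built-in defaults) can be sketched as a simple fallback chain; `resolve` is illustrative and not part of the server's API:

```rust
// Minimal sketch of the documented precedence:
// CLI flag > env var > TOML file > built-in default.
fn resolve(cli: Option<&str>, env: Option<&str>, file: Option<&str>, default: &str) -> String {
    cli.or(env).or(file).unwrap_or(default).to_string()
}

fn main() {
    // A CLI flag wins over everything else.
    assert_eq!(
        resolve(Some("0.0.0.0:6000"), Some("0.0.0.0:5000"), None, "0.0.0.0:50051"),
        "0.0.0.0:6000"
    );
    // An env var beats the config file.
    assert_eq!(
        resolve(None, Some("0.0.0.0:5000"), Some("0.0.0.0:4000"), "0.0.0.0:50051"),
        "0.0.0.0:5000"
    );
    // With nothing set, the default applies.
    assert_eq!(resolve(None, None, None, "0.0.0.0:50051"), "0.0.0.0:50051");
    println!("ok");
}
```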
### TOML Config File
```toml
# Network
grpc_addr = "0.0.0.0:50051"
http_addr = "0.0.0.0:8080"
# Auto-load workflow definitions from this directory
workflows_dir = "/etc/wfe/workflows"
# --- Persistence ---
[persistence]
backend = "postgres" # "sqlite" or "postgres"
url = "postgres://wfe:secret@postgres:5432/wfe"
# For SQLite:
# backend = "sqlite"
# path = "/data/wfe.db"
# --- Queue / Locking ---
[queue]
backend = "valkey" # "memory" or "valkey"
url = "redis://valkey:6379"
# --- Search ---
[search]
url = "http://opensearch:9200" # Enables workflow + log search
# --- Authentication ---
[auth]
# Static bearer tokens (simple API auth)
tokens = ["my-secret-token"]
# OIDC/JWT authentication (e.g., Ory Hydra, Keycloak, Auth0)
oidc_issuer = "https://auth.sunbeam.pt/"
oidc_audience = "wfe-server" # Expected 'aud' claim
# Webhook HMAC secrets (per source)
[auth.webhook_secrets]
github = "whsec_github_secret_here"
gitea = "whsec_gitea_secret_here"
# --- Webhooks ---
# Each trigger maps an incoming webhook event to a workflow.
[[webhook.triggers]]
source = "github" # "github" or "gitea"
event = "push" # GitHub/Gitea event type
match_ref = "refs/heads/main" # Optional: only trigger on this ref
workflow_id = "ci" # Workflow definition to start
version = 1
[webhook.triggers.data_mapping]
repo = "$.repository.full_name" # JSONPath from webhook payload
commit = "$.head_commit.id"
branch = "$.ref"
[[webhook.triggers]]
source = "gitea"
event = "push"
workflow_id = "deploy"
version = 1
```
## Persistence Backends
### SQLite
Single-file embedded database. Good for development and single-node deployments.
```toml
[persistence]
backend = "sqlite"
path = "/data/wfe.db"
```
### PostgreSQL
Production-grade. Required for multi-node deployments.
```toml
[persistence]
backend = "postgres"
url = "postgres://user:password@host:5432/dbname"
```
The server runs migrations automatically on startup.
## Queue Backends
### In-Memory
Default. Single-process only -- workflows are lost on restart.
```toml
[queue]
backend = "memory"
```
### Valkey / Redis
Production-grade distributed queue and locking. Required for multi-node.
```toml
[queue]
backend = "valkey"
url = "redis://valkey:6379"
```
Provides both `QueueProvider` (work distribution) and `DistributedLockProvider` (workflow-level locking).
## Search
Optional. When configured, enables:
- Full-text workflow log search via `SearchLogs` RPC
- Workflow instance indexing for filtered queries
```toml
[search]
url = "http://opensearch:9200"
```
## Authentication
### Static Bearer Tokens
Simplest auth. Tokens are compared in constant time.
```toml
[auth]
tokens = ["token1", "token2"]
```
Use with: `Authorization: Bearer token1`
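A constant-time comparison like the one described avoids leaking, via timing, how many leading bytes of a guessed token are correct. A minimal sketch (the server may use a vetted crate rather than hand-rolled code):

```rust
// Constant-time byte comparison: accumulate differences with XOR/OR so the
// loop never exits early on the first mismatching byte.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(constant_time_eq(b"token1", b"token1"));
    assert!(!constant_time_eq(b"token1", b"token2"));
    assert!(!constant_time_eq(b"token1", b"token10"));
    println!("ok");
}
```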
### OIDC / JWT
For production. The server discovers JWKS keys from the OIDC issuer and validates JWT tokens on every request.
```toml
[auth]
oidc_issuer = "https://auth.sunbeam.pt/"
oidc_audience = "wfe-server"
```
Security properties:
- Algorithm derived from JWK (prevents algorithm confusion attacks)
- Symmetric algorithms (HS256/HS384/HS512) rejected; only asymmetric signatures accepted (RS256/384/512, ES256/384, PS256, EdDSA)
- OIDC issuer must use HTTPS
- Fail-closed: server won't start if OIDC discovery fails
Use with: `Authorization: Bearer <jwt-token>`
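The asymmetric-only policy amounts to an algorithm allowlist. A sketch of that check (`is_allowed` is illustrative, not the server's actual function; the allowlist mirrors the algorithms exercised in this repo's tests):

```rust
// Only asymmetric signature algorithms pass; symmetric HMAC (HS*) and
// "none" are rejected to prevent algorithm-confusion attacks.
fn is_allowed(alg: &str) -> bool {
    matches!(
        alg,
        "RS256" | "RS384" | "RS512" | "ES256" | "ES384" | "PS256" | "EdDSA"
    )
}

fn main() {
    assert!(is_allowed("RS256"));
    assert!(is_allowed("ES256"));
    assert!(is_allowed("EdDSA"));
    assert!(!is_allowed("HS256"));
    assert!(!is_allowed("none"));
    println!("ok");
}
```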
### Webhook HMAC
Webhook endpoints validate payloads using HMAC-SHA256.
```toml
[auth.webhook_secrets]
github = "your-github-webhook-secret"
gitea = "your-gitea-webhook-secret"
```
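GitHub delivers its HMAC-SHA256 digest as `sha256=<hex>` in the `X-Hub-Signature-256` header. A sketch of stripping the scheme prefix before the digest comparison (`parse_sha256_sig` is a hypothetical helper, not the server's code):

```rust
// Accept only the sha256 scheme; legacy sha1 (or anything else) yields None.
fn parse_sha256_sig(header: &str) -> Option<&str> {
    header.strip_prefix("sha256=")
}

fn main() {
    assert_eq!(parse_sha256_sig("sha256=deadbeef"), Some("deadbeef"));
    assert_eq!(parse_sha256_sig("sha1=deadbeef"), None);
    println!("ok");
}
```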
## gRPC API
13 RPCs available on the gRPC port (default 50051):
| RPC | Description |
|-----|-------------|
| `StartWorkflow` | Start a new workflow instance |
| `GetWorkflow` | Get workflow instance by ID |
| `ListWorkflows` | List workflow instances with filters |
| `SuspendWorkflow` | Pause a running workflow |
| `ResumeWorkflow` | Resume a suspended workflow |
| `TerminateWorkflow` | Stop a workflow permanently |
| `RegisterDefinition` | Register a workflow definition |
| `GetDefinition` | Get a workflow definition |
| `ListDefinitions` | List all registered definitions |
| `PublishEvent` | Publish an event for waiting workflows |
| `WatchLifecycle` | Server-streaming: lifecycle events |
| `StreamLogs` | Server-streaming: real-time step output |
| `SearchLogs` | Full-text search over step logs |
## HTTP Webhooks
Webhook endpoint: `POST /webhooks/{source}`
Supported sources:
- `github` -- GitHub webhook payloads with `X-Hub-Signature-256` HMAC
- `gitea` -- Gitea webhook payloads with `X-Gitea-Signature` HMAC
- `generic` -- Any JSON payload (requires bearer token auth)
Payload size limit: 2MB.
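The per-source signature headers listed above can be sketched as a small mapping (a hypothetical helper; `generic` carries no HMAC header because it relies on bearer-token auth):

```rust
// Which signature header each webhook source is validated against.
fn signature_header(source: &str) -> Option<&'static str> {
    match source {
        "github" => Some("X-Hub-Signature-256"),
        "gitea" => Some("X-Gitea-Signature"),
        _ => None, // "generic" and unknown sources: no HMAC header
    }
}

fn main() {
    assert_eq!(signature_header("github"), Some("X-Hub-Signature-256"));
    assert_eq!(signature_header("gitea"), Some("X-Gitea-Signature"));
    assert_eq!(signature_header("generic"), None);
    println!("ok");
}
```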
## Workflow YAML Auto-Loading
Point `workflows_dir` at a directory of `.yaml` files to auto-register workflow definitions on startup.
```toml
workflows_dir = "/etc/wfe/workflows"
```
File format: see [wfe-yaml](../wfe-yaml/) for the YAML workflow definition schema.
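The auto-loading step amounts to scanning `workflows_dir` for `.yaml` files at startup. A sketch of that scan under assumed behavior (the server's loader may differ in details such as recursion or `.yml` handling):

```rust
use std::path::{Path, PathBuf};

// Collect `.yaml` files from a workflows directory, sorted for a
// deterministic registration order.
fn yaml_files(dir: &Path) -> std::io::Result<Vec<PathBuf>> {
    let mut out = Vec::new();
    for entry in std::fs::read_dir(dir)? {
        let path = entry?.path();
        if path.extension().and_then(|e| e.to_str()) == Some("yaml") {
            out.push(path);
        }
    }
    out.sort();
    Ok(out)
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("wfe_readme_demo");
    std::fs::create_dir_all(&dir)?;
    std::fs::write(dir.join("ci.yaml"), "")?;
    std::fs::write(dir.join("README.txt"), "")?;
    let files = yaml_files(&dir)?;
    assert_eq!(files.len(), 1);
    println!("ok");
    Ok(())
}
```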
## Ports
| Port | Protocol | Purpose |
|------|----------|---------|
| 50051 | gRPC (HTTP/2) | Workflow API |
| 8080 | HTTP/1.1 | Webhooks |
## Health Check
The gRPC port responds to standard gRPC health checks. For HTTP health, any non-webhook GET to port 8080 returns 404 (the server is up if it responds).
## Environment Variable Reference
All configuration can be set via environment variables:
```bash
WFE_GRPC_ADDR=0.0.0.0:50051
WFE_HTTP_ADDR=0.0.0.0:8080
WFE_PERSISTENCE=postgres
WFE_DB_URL=postgres://wfe:secret@postgres:5432/wfe
WFE_QUEUE=valkey
WFE_QUEUE_URL=redis://valkey:6379
WFE_SEARCH_URL=http://opensearch:9200
WFE_WORKFLOWS_DIR=/etc/wfe/workflows
WFE_AUTH_TOKENS=token1,token2
RUST_LOG=info # Tracing filter (debug, info, warn, error)
```


@@ -1,6 +1,6 @@
use std::sync::Arc;
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use jsonwebtoken::{Algorithm, DecodingKey, Validation, decode};
use serde::Deserialize;
use tokio::sync::RwLock;
use tonic::{Request, Status};
@@ -99,7 +99,10 @@ impl AuthState {
let resp: JwksResponse = reqwest::get(uri).await?.json().await?;
let mut cache = self.jwks.write().await;
*cache = Some(JwksCache { keys: resp.keys });
tracing::debug!(key_count = cache.as_ref().unwrap().keys.len(), "JWKS refreshed");
tracing::debug!(
key_count = cache.as_ref().unwrap().keys.len(),
"JWKS refreshed"
);
Ok(())
}
@@ -128,7 +131,9 @@ impl AuthState {
/// Validate a JWT against the cached JWKS (synchronous — for use in interceptors).
/// Shared logic used by both `check()` and `make_interceptor()`.
fn validate_jwt_cached(&self, token: &str) -> Result<(), Status> {
let cache = self.jwks.try_read()
let cache = self
.jwks
.try_read()
.map_err(|_| Status::unavailable("JWKS refresh in progress"))?;
let jwks = cache
.as_ref()
@@ -228,9 +233,7 @@ fn extract_bearer_token<T>(request: &Request<T>) -> Result<&str, Status> {
}
/// Map JWK key algorithm to jsonwebtoken Algorithm.
fn key_algorithm_to_jwt_algorithm(
ka: jsonwebtoken::jwk::KeyAlgorithm,
) -> Option<Algorithm> {
fn key_algorithm_to_jwt_algorithm(ka: jsonwebtoken::jwk::KeyAlgorithm) -> Option<Algorithm> {
use jsonwebtoken::jwk::KeyAlgorithm as KA;
match ka {
KA::RS256 => Some(Algorithm::RS256),
@@ -473,7 +476,7 @@ mod tests {
issuer: &str,
audience: Option<&str>,
) -> (Vec<jsonwebtoken::jwk::Jwk>, String) {
use base64::{engine::general_purpose::URL_SAFE_NO_PAD, Engine};
use base64::{Engine, engine::general_purpose::URL_SAFE_NO_PAD};
use rsa::RsaPrivateKey;
let mut rng = rand::thread_rng();
@@ -498,8 +501,7 @@ mod tests {
let pem = private_key
.to_pkcs1_pem(rsa::pkcs1::LineEnding::LF)
.unwrap();
let encoding_key =
jsonwebtoken::EncodingKey::from_rsa_pem(pem.as_bytes()).unwrap();
let encoding_key = jsonwebtoken::EncodingKey::from_rsa_pem(pem.as_bytes()).unwrap();
let mut header = jsonwebtoken::Header::new(jsonwebtoken::Algorithm::RS256);
header.kid = Some("test-key-1".to_string());
@@ -684,9 +686,18 @@ mod tests {
#[test]
fn key_algorithm_mapping() {
use jsonwebtoken::jwk::KeyAlgorithm as KA;
assert_eq!(key_algorithm_to_jwt_algorithm(KA::RS256), Some(Algorithm::RS256));
assert_eq!(key_algorithm_to_jwt_algorithm(KA::ES256), Some(Algorithm::ES256));
assert_eq!(key_algorithm_to_jwt_algorithm(KA::EdDSA), Some(Algorithm::EdDSA));
assert_eq!(
key_algorithm_to_jwt_algorithm(KA::RS256),
Some(Algorithm::RS256)
);
assert_eq!(
key_algorithm_to_jwt_algorithm(KA::ES256),
Some(Algorithm::ES256)
);
assert_eq!(
key_algorithm_to_jwt_algorithm(KA::EdDSA),
Some(Algorithm::EdDSA)
);
// HS256 should be rejected (symmetric algorithm).
assert_eq!(key_algorithm_to_jwt_algorithm(KA::HS256), None);
assert_eq!(key_algorithm_to_jwt_algorithm(KA::HS384), None);
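The interceptor path in the diff above validates JWTs synchronously, so it uses `try_read` on the JWKS cache rather than awaiting the lock: if a refresh holds the write lock, the request fails fast with a retryable status instead of blocking. A minimal std-only sketch of that pattern (the `JwksCache` shape and function name are hypothetical stand-ins for the real types):

```rust
use std::sync::RwLock;

// Toy stand-in for the JWKS cache guarded by a read-write lock.
struct JwksCache {
    keys: Vec<String>, // key ids, standing in for full JWKs
}

// Non-blocking read: if a refresh currently holds the write lock, return a
// retryable error immediately instead of blocking the sync interceptor.
fn validate_with_cache(lock: &RwLock<Option<JwksCache>>, kid: &str) -> Result<(), String> {
    let guard = lock
        .try_read()
        .map_err(|_| "JWKS refresh in progress".to_string())?;
    let cache = guard
        .as_ref()
        .ok_or_else(|| "JWKS not loaded yet".to_string())?;
    if cache.keys.iter().any(|k| k.as_str() == kid) {
        Ok(())
    } else {
        Err(format!("unknown key id: {kid}"))
    }
}
```

The real code uses `tokio::sync::RwLock`, whose `try_read` behaves the same way for this purpose; `std::sync::RwLock` keeps the sketch dependency-free.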


@@ -174,10 +174,7 @@ pub fn load(cli: &Cli) -> ServerConfig {
// Persistence override.
if let Some(ref backend) = cli.persistence {
let url = cli
.db_url
.clone()
.unwrap_or_else(|| "wfe.db".to_string());
let url = cli.db_url.clone().unwrap_or_else(|| "wfe.db".to_string());
config.persistence = match backend.as_str() {
"postgres" => PersistenceConfig::Postgres { url },
_ => PersistenceConfig::Sqlite { path: url },
@@ -231,7 +228,10 @@ mod tests {
let config = ServerConfig::default();
assert_eq!(config.grpc_addr, "0.0.0.0:50051".parse().unwrap());
assert_eq!(config.http_addr, "0.0.0.0:8080".parse().unwrap());
assert!(matches!(config.persistence, PersistenceConfig::Sqlite { .. }));
assert!(matches!(
config.persistence,
PersistenceConfig::Sqlite { .. }
));
assert!(matches!(config.queue, QueueConfig::InMemory));
assert!(config.search.is_none());
assert!(config.auth.tokens.is_empty());
@@ -270,11 +270,17 @@ version = 1
"#;
let config: ServerConfig = toml::from_str(toml).unwrap();
assert_eq!(config.grpc_addr, "127.0.0.1:9090".parse().unwrap());
assert!(matches!(config.persistence, PersistenceConfig::Postgres { .. }));
assert!(matches!(
config.persistence,
PersistenceConfig::Postgres { .. }
));
assert!(matches!(config.queue, QueueConfig::Valkey { .. }));
assert!(config.search.is_some());
assert_eq!(config.auth.tokens.len(), 2);
assert_eq!(config.auth.webhook_secrets.get("github").unwrap(), "mysecret");
assert_eq!(
config.auth.webhook_secrets.get("github").unwrap(),
"mysecret"
);
assert_eq!(config.webhook.triggers.len(), 1);
assert_eq!(config.webhook.triggers[0].workflow_id, "ci");
}
@@ -295,8 +301,12 @@ version = 1
};
let config = load(&cli);
assert_eq!(config.grpc_addr, "127.0.0.1:9999".parse().unwrap());
assert!(matches!(config.persistence, PersistenceConfig::Postgres { ref url } if url == "postgres://db/wfe"));
assert!(matches!(config.queue, QueueConfig::Valkey { ref url } if url == "redis://valkey:6379"));
assert!(
matches!(config.persistence, PersistenceConfig::Postgres { ref url } if url == "postgres://db/wfe")
);
assert!(
matches!(config.queue, QueueConfig::Valkey { ref url } if url == "redis://valkey:6379")
);
assert_eq!(config.search.unwrap().url, "http://os:9200");
assert_eq!(config.workflows_dir.unwrap(), PathBuf::from("/workflows"));
assert_eq!(config.auth.tokens, vec!["tok1", "tok2"]);
@@ -317,7 +327,10 @@ version = 1
auth_tokens: None,
};
let config = load(&cli);
assert!(matches!(config.persistence, PersistenceConfig::Postgres { .. }));
assert!(matches!(
config.persistence,
PersistenceConfig::Postgres { .. }
));
}
// ── Security regression tests ──
@@ -358,6 +371,9 @@ commit = "$.head_commit.id"
"#;
let config: WebhookConfig = toml::from_str(toml).unwrap();
assert_eq!(config.triggers[0].data_mapping.len(), 2);
assert_eq!(config.triggers[0].data_mapping["repo"], "$.repository.full_name");
assert_eq!(
config.triggers[0].data_mapping["repo"],
"$.repository.full_name"
);
}
}
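The `load` hunk above collapses the `db_url` default onto one line; the override logic it formats — a CLI backend string plus an optional URL, with sqlite and `wfe.db` as fallbacks — can be sketched std-only (the enum mirrors the diff, the free function is a hypothetical extraction):

```rust
#[derive(Debug, PartialEq)]
enum PersistenceConfig {
    Sqlite { path: String },
    Postgres { url: String },
}

// CLI override: `--persistence <backend> [--db-url <url>]`. A missing URL
// defaults to "wfe.db"; any unrecognized backend falls back to sqlite.
fn apply_persistence_override(backend: &str, db_url: Option<String>) -> PersistenceConfig {
    let url = db_url.unwrap_or_else(|| "wfe.db".to_string());
    match backend {
        "postgres" => PersistenceConfig::Postgres { url },
        _ => PersistenceConfig::Sqlite { path: url },
    }
}
```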

View File

@@ -2,8 +2,8 @@ use std::collections::{BTreeMap, HashMap};
use std::sync::Arc;
use tonic::{Request, Response, Status};
use wfe_server_protos::wfe::v1::*;
use wfe_server_protos::wfe::v1::wfe_server::Wfe;
use wfe_server_protos::wfe::v1::*;
pub struct WfeService {
host: Arc<wfe::WorkflowHost>,
@@ -18,7 +18,12 @@ impl WfeService {
lifecycle_bus: Arc<crate::lifecycle_bus::BroadcastLifecyclePublisher>,
log_store: Arc<crate::log_store::LogStore>,
) -> Self {
Self { host, lifecycle_bus, log_store, log_search: None }
Self {
host,
lifecycle_bus,
log_store,
log_search: None,
}
}
pub fn with_log_search(mut self, index: Arc<crate::log_search::LogSearchIndex>) -> Self {
@@ -56,6 +61,7 @@ impl Wfe for WfeService {
let id = compiled.definition.id.clone();
let version = compiled.definition.version;
let step_count = compiled.definition.steps.len() as u32;
let name = compiled.definition.name.clone().unwrap_or_default();
self.host
.register_workflow_definition(compiled.definition)
@@ -65,6 +71,7 @@ impl Wfe for WfeService {
definition_id: id,
version,
step_count,
name,
});
}
@@ -94,13 +101,33 @@ impl Wfe for WfeService {
.map(struct_to_json)
.unwrap_or_else(|| serde_json::json!({}));
// Empty `name` means "auto-assign"; pass None through so the host
// generates `{definition_id}-{N}` via the persistence sequence.
let name_override = if req.name.trim().is_empty() {
None
} else {
Some(req.name)
};
let workflow_id = self
.host
.start_workflow(&req.definition_id, req.version, data)
.start_workflow_with_name(&req.definition_id, req.version, data, name_override)
.await
.map_err(|e| Status::internal(format!("failed to start workflow: {e}")))?;
Ok(Response::new(StartWorkflowResponse { workflow_id }))
// Load the instance back so we can return the assigned name to the
// client. Cheap read, single row, avoids plumbing the name through
// the host's return signature.
let instance = self
.host
.get_workflow(&workflow_id)
.await
.map_err(|e| Status::internal(format!("failed to load new workflow: {e}")))?;
Ok(Response::new(StartWorkflowResponse {
workflow_id,
name: instance.name,
}))
}
async fn get_workflow(
@@ -206,10 +233,18 @@ impl Wfe for WfeService {
request: Request<WatchLifecycleRequest>,
) -> Result<Response<Self::WatchLifecycleStream>, Status> {
let req = request.into_inner();
// Resolve name-or-UUID to the canonical UUID upfront. Lifecycle events
// carry UUIDs, so filtering by a human name would silently drop
// everything. Empty filter means "all workflows".
let filter_workflow_id = if req.workflow_id.is_empty() {
None
} else {
Some(req.workflow_id)
let resolved = self
.host
.resolve_workflow_id(&req.workflow_id)
.await
.map_err(|e| Status::not_found(format!("workflow not found: {e}")))?;
Some(resolved)
};
let mut broadcast_rx = self.lifecycle_bus.subscribe();
@@ -239,7 +274,9 @@ impl Wfe for WfeService {
}
});
Ok(Response::new(tokio_stream::wrappers::ReceiverStream::new(rx)))
Ok(Response::new(tokio_stream::wrappers::ReceiverStream::new(
rx,
)))
}
type StreamLogsStream = tokio_stream::wrappers::ReceiverStream<Result<LogEntry, Status>>;
@@ -249,7 +286,13 @@ impl Wfe for WfeService {
request: Request<StreamLogsRequest>,
) -> Result<Response<Self::StreamLogsStream>, Status> {
let req = request.into_inner();
let workflow_id = req.workflow_id.clone();
// Resolve name-or-UUID so the log_store (which is keyed by UUID)
// returns history for the right instance.
let workflow_id = self
.host
.resolve_workflow_id(&req.workflow_id)
.await
.map_err(|e| Status::not_found(format!("workflow not found: {e}")))?;
let step_name_filter = if req.step_name.is_empty() {
None
} else {
@@ -301,7 +344,9 @@ impl Wfe for WfeService {
// If not follow mode, the stream ends after history replay.
});
Ok(Response::new(tokio_stream::wrappers::ReceiverStream::new(rx)))
Ok(Response::new(tokio_stream::wrappers::ReceiverStream::new(
rx,
)))
}
// ── Search ───────────────────────────────────────────────────────
@@ -311,12 +356,31 @@ impl Wfe for WfeService {
request: Request<SearchLogsRequest>,
) -> Result<Response<SearchLogsResponse>, Status> {
let Some(ref search) = self.log_search else {
return Err(Status::unavailable("log search not configured — set --search-url"));
return Err(Status::unavailable(
"log search not configured — set --search-url",
));
};
let req = request.into_inner();
let workflow_id = if req.workflow_id.is_empty() { None } else { Some(req.workflow_id.as_str()) };
let step_name = if req.step_name.is_empty() { None } else { Some(req.step_name.as_str()) };
// Resolve name-or-UUID upfront so the search index (keyed by UUID)
// matches the requested instance. We materialize into a String so
// the borrowed reference below has a stable lifetime.
let resolved_workflow_id = if req.workflow_id.is_empty() {
None
} else {
Some(
self.host
.resolve_workflow_id(&req.workflow_id)
.await
.map_err(|e| Status::not_found(format!("workflow not found: {e}")))?,
)
};
let workflow_id = resolved_workflow_id.as_deref();
let step_name = if req.step_name.is_empty() {
None
} else {
Some(req.step_name.as_str())
};
let stream_filter = match req.stream_filter {
x if x == LogStream::Stdout as i32 => Some("stdout"),
x if x == LogStream::Stderr as i32 => Some("stderr"),
@@ -325,7 +389,14 @@ impl Wfe for WfeService {
let take = if req.take == 0 { 50 } else { req.take };
let (hits, total) = search
.search(&req.query, workflow_id, step_name, stream_filter, req.skip, take)
.search(
&req.query,
workflow_id,
step_name,
stream_filter,
req.skip,
take,
)
.await
.map_err(|e| Status::internal(format!("search failed: {e}")))?;
@@ -431,8 +502,18 @@ fn lifecycle_event_to_proto(e: &wfe_core::models::LifecycleEvent) -> LifecycleEv
LET::Suspended => (PLET::Suspended as i32, 0, String::new(), String::new()),
LET::Resumed => (PLET::Resumed as i32, 0, String::new(), String::new()),
LET::Error { message } => (PLET::Error as i32, 0, String::new(), message.clone()),
LET::StepStarted { step_id, step_name } => (PLET::StepStarted as i32, *step_id as u32, step_name.clone().unwrap_or_default(), String::new()),
LET::StepCompleted { step_id, step_name } => (PLET::StepCompleted as i32, *step_id as u32, step_name.clone().unwrap_or_default(), String::new()),
LET::StepStarted { step_id, step_name } => (
PLET::StepStarted as i32,
*step_id as u32,
step_name.clone().unwrap_or_default(),
String::new(),
),
LET::StepCompleted { step_id, step_name } => (
PLET::StepCompleted as i32,
*step_id as u32,
step_name.clone().unwrap_or_default(),
String::new(),
),
};
LifecycleEvent {
event_time: Some(datetime_to_timestamp(&e.event_time_utc)),
@@ -456,6 +537,7 @@ fn datetime_to_timestamp(dt: &chrono::DateTime<chrono::Utc>) -> prost_types::Tim
fn workflow_to_proto(w: &wfe_core::models::WorkflowInstance) -> WorkflowInstance {
WorkflowInstance {
id: w.id.clone(),
name: w.name.clone(),
definition_id: w.workflow_definition_id.clone(),
version: w.version,
description: w.description.clone().unwrap_or_default(),
@@ -469,11 +551,7 @@ fn workflow_to_proto(w: &wfe_core::models::WorkflowInstance) -> WorkflowInstance
data: Some(json_to_struct(&w.data)),
create_time: Some(datetime_to_timestamp(&w.create_time)),
complete_time: w.complete_time.as_ref().map(datetime_to_timestamp),
execution_pointers: w
.execution_pointers
.iter()
.map(pointer_to_proto)
.collect(),
execution_pointers: w.execution_pointers.iter().map(pointer_to_proto).collect(),
}
}
@@ -630,7 +708,10 @@ mod tests {
assert_eq!(pointer_to_proto(&p).status, PointerStatus::Sleeping as i32);
p.status = PS::WaitingForEvent;
assert_eq!(pointer_to_proto(&p).status, PointerStatus::WaitingForEvent as i32);
assert_eq!(
pointer_to_proto(&p).status,
PointerStatus::WaitingForEvent as i32
);
p.status = PS::Failed;
assert_eq!(pointer_to_proto(&p).status, PointerStatus::Failed as i32);
@@ -644,7 +725,8 @@ mod tests {
#[test]
fn workflow_to_proto_basic() {
let w = wfe_core::models::WorkflowInstance::new("my-wf", 1, serde_json::json!({"key": "val"}));
let w =
wfe_core::models::WorkflowInstance::new("my-wf", 1, serde_json::json!({"key": "val"}));
let p = workflow_to_proto(&w);
assert_eq!(p.definition_id, "my-wf");
assert_eq!(p.version, 1);
@@ -674,7 +756,8 @@ mod tests {
host.start().await.unwrap();
let lifecycle_bus = std::sync::Arc::new(crate::lifecycle_bus::BroadcastLifecyclePublisher::new(64));
let lifecycle_bus =
std::sync::Arc::new(crate::lifecycle_bus::BroadcastLifecyclePublisher::new(64));
let log_store = std::sync::Arc::new(crate::log_store::LogStore::new());
WfeService::new(std::sync::Arc::new(host), lifecycle_bus, log_store)
@@ -695,7 +778,8 @@ workflow:
type: shell
config:
run: echo hi
"#.to_string(),
"#
.to_string(),
config: Default::default(),
});
let resp = svc.register_workflow(req).await.unwrap().into_inner();
@@ -709,6 +793,7 @@ workflow:
definition_id: "test-wf".to_string(),
version: 1,
data: None,
name: String::new(),
});
let resp = svc.start_workflow(req).await.unwrap().into_inner();
assert!(!resp.workflow_id.is_empty());
@@ -741,6 +826,7 @@ workflow:
definition_id: "nonexistent".to_string(),
version: 1,
data: None,
name: String::new(),
});
let err = svc.start_workflow(req).await.unwrap_err();
assert_eq!(err.code(), tonic::Code::Internal);
@@ -771,16 +857,30 @@ workflow:
definition_id: "cancel-test".to_string(),
version: 1,
data: None,
name: String::new(),
});
let wf_id = svc.start_workflow(req).await.unwrap().into_inner().workflow_id;
let wf_id = svc
.start_workflow(req)
.await
.unwrap()
.into_inner()
.workflow_id;
// Cancel it.
let req = Request::new(CancelWorkflowRequest { workflow_id: wf_id.clone() });
let req = Request::new(CancelWorkflowRequest {
workflow_id: wf_id.clone(),
});
svc.cancel_workflow(req).await.unwrap();
// Verify it's terminated.
let req = Request::new(GetWorkflowRequest { workflow_id: wf_id });
let instance = svc.get_workflow(req).await.unwrap().into_inner().instance.unwrap();
let instance = svc
.get_workflow(req)
.await
.unwrap()
.into_inner()
.instance
.unwrap();
assert_eq!(instance.status, WorkflowStatus::Terminated as i32);
}
@@ -798,23 +898,47 @@ workflow:
definition_id: "sr-test".to_string(),
version: 1,
data: None,
name: String::new(),
});
let wf_id = svc.start_workflow(req).await.unwrap().into_inner().workflow_id;
let wf_id = svc
.start_workflow(req)
.await
.unwrap()
.into_inner()
.workflow_id;
// Suspend.
let req = Request::new(SuspendWorkflowRequest { workflow_id: wf_id.clone() });
let req = Request::new(SuspendWorkflowRequest {
workflow_id: wf_id.clone(),
});
svc.suspend_workflow(req).await.unwrap();
let req = Request::new(GetWorkflowRequest { workflow_id: wf_id.clone() });
let instance = svc.get_workflow(req).await.unwrap().into_inner().instance.unwrap();
let req = Request::new(GetWorkflowRequest {
workflow_id: wf_id.clone(),
});
let instance = svc
.get_workflow(req)
.await
.unwrap()
.into_inner()
.instance
.unwrap();
assert_eq!(instance.status, WorkflowStatus::Suspended as i32);
// Resume.
let req = Request::new(ResumeWorkflowRequest { workflow_id: wf_id.clone() });
let req = Request::new(ResumeWorkflowRequest {
workflow_id: wf_id.clone(),
});
svc.resume_workflow(req).await.unwrap();
let req = Request::new(GetWorkflowRequest { workflow_id: wf_id });
let instance = svc.get_workflow(req).await.unwrap().into_inner().instance.unwrap();
let instance = svc
.get_workflow(req)
.await
.unwrap()
.into_inner()
.instance
.unwrap();
assert_eq!(instance.status, WorkflowStatus::Runnable as i32);
}
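Two conventions recur through the service diff above: an empty proto `name` field means "auto-assign" (proto3 strings default to `""`), and lookups accept either the canonical UUID or the human-friendly name, resolving to the UUID before filtering events or log keys. A std-only sketch of both, assuming a hypothetical in-memory registry in place of the host:

```rust
use std::collections::HashMap;

// Hypothetical registry mapping human-friendly names to canonical ids.
struct Registry {
    by_name: HashMap<String, String>,
}

impl Registry {
    // Proto3 strings default to ""; treat empty/whitespace as "no override"
    // so the host falls back to its `{definition_id}-{N}` sequence.
    fn name_override(raw: &str) -> Option<String> {
        if raw.trim().is_empty() {
            None
        } else {
            Some(raw.to_string())
        }
    }

    // Accept a canonical id or a display name. Lifecycle events and log
    // stores are keyed by the canonical id, so resolve before filtering —
    // filtering by the raw name would silently drop everything.
    fn resolve(&self, name_or_id: &str) -> Option<String> {
        if self.by_name.values().any(|id| id.as_str() == name_or_id) {
            return Some(name_or_id.to_string());
        }
        self.by_name.get(name_or_id).cloned()
    }
}
```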


@@ -52,9 +52,14 @@ mod tests {
let mut rx1 = bus.subscribe();
let mut rx2 = bus.subscribe();
bus.publish(LifecycleEvent::new("wf-1", "def-1", 1, LifecycleEventType::Completed))
.await
.unwrap();
bus.publish(LifecycleEvent::new(
"wf-1",
"def-1",
1,
LifecycleEventType::Completed,
))
.await
.unwrap();
let e1 = rx1.recv().await.unwrap();
let e2 = rx2.recv().await.unwrap();
@@ -66,9 +71,14 @@ mod tests {
async fn no_subscribers_does_not_error() {
let bus = BroadcastLifecyclePublisher::new(16);
// No subscribers — should not panic.
bus.publish(LifecycleEvent::new("wf-1", "def-1", 1, LifecycleEventType::Started))
.await
.unwrap();
bus.publish(LifecycleEvent::new(
"wf-1",
"def-1",
1,
LifecycleEventType::Started,
))
.await
.unwrap();
}
#[tokio::test]


@@ -304,8 +304,8 @@ mod tests {
// ── OpenSearch integration tests ────────────────────────────────
fn opensearch_url() -> Option<String> {
let url = std::env::var("WFE_SEARCH_URL")
.unwrap_or_else(|_| "http://localhost:9200".to_string());
let url =
std::env::var("WFE_SEARCH_URL").unwrap_or_else(|_| "http://localhost:9200".to_string());
// Quick TCP probe to check if OpenSearch is reachable.
let addr = url
.strip_prefix("http://")
@@ -340,10 +340,7 @@ mod tests {
/// Delete the test index to start clean.
async fn cleanup_index(url: &str) {
let client = reqwest::Client::new();
let _ = client
.delete(format!("{url}/{LOG_INDEX}"))
.send()
.await;
let _ = client.delete(format!("{url}/{LOG_INDEX}")).send().await;
}
#[tokio::test]
@@ -375,18 +372,37 @@ mod tests {
index.ensure_index().await.unwrap();
// Index some log chunks.
let chunk = make_test_chunk("wf-search-1", "build", LogStreamType::Stdout, "compiling wfe-core v1.5.0");
let chunk = make_test_chunk(
"wf-search-1",
"build",
LogStreamType::Stdout,
"compiling wfe-core v1.5.0",
);
index.index_chunk(&chunk).await.unwrap();
let chunk = make_test_chunk("wf-search-1", "build", LogStreamType::Stderr, "warning: unused variable");
let chunk = make_test_chunk(
"wf-search-1",
"build",
LogStreamType::Stderr,
"warning: unused variable",
);
index.index_chunk(&chunk).await.unwrap();
let chunk = make_test_chunk("wf-search-1", "test", LogStreamType::Stdout, "test result: ok. 79 passed");
let chunk = make_test_chunk(
"wf-search-1",
"test",
LogStreamType::Stdout,
"test result: ok. 79 passed",
);
index.index_chunk(&chunk).await.unwrap();
// OpenSearch needs a refresh to make docs searchable.
let client = reqwest::Client::new();
client.post(format!("{url}/{LOG_INDEX}/_refresh")).send().await.unwrap();
client
.post(format!("{url}/{LOG_INDEX}/_refresh"))
.send()
.await
.unwrap();
// Search by text.
let (results, total) = index
@@ -456,12 +472,21 @@ mod tests {
// Index 5 chunks.
for i in 0..5 {
let chunk = make_test_chunk("wf-page", "build", LogStreamType::Stdout, &format!("line {i}"));
let chunk = make_test_chunk(
"wf-page",
"build",
LogStreamType::Stdout,
&format!("line {i}"),
);
index.index_chunk(&chunk).await.unwrap();
}
let client = reqwest::Client::new();
client.post(format!("{url}/{LOG_INDEX}/_refresh")).send().await.unwrap();
client
.post(format!("{url}/{LOG_INDEX}/_refresh"))
.send()
.await
.unwrap();
// Get first 2.
let (results, total) = index
@@ -506,11 +531,20 @@ mod tests {
let index = LogSearchIndex::new(&url).unwrap();
index.ensure_index().await.unwrap();
let chunk = make_test_chunk("wf-fields", "clippy", LogStreamType::Stderr, "error: type mismatch");
let chunk = make_test_chunk(
"wf-fields",
"clippy",
LogStreamType::Stderr,
"error: type mismatch",
);
index.index_chunk(&chunk).await.unwrap();
let client = reqwest::Client::new();
client.post(format!("{url}/{LOG_INDEX}/_refresh")).send().await.unwrap();
client
.post(format!("{url}/{LOG_INDEX}/_refresh"))
.send()
.await
.unwrap();
let (results, _) = index
.search("type mismatch", None, None, None, 0, 10)


@@ -109,8 +109,12 @@ mod tests {
#[tokio::test]
async fn write_and_read_history() {
let store = LogStore::new();
store.write_chunk(make_chunk("wf-1", 0, "build", "line 1\n")).await;
store.write_chunk(make_chunk("wf-1", 0, "build", "line 2\n")).await;
store
.write_chunk(make_chunk("wf-1", 0, "build", "line 1\n"))
.await;
store
.write_chunk(make_chunk("wf-1", 0, "build", "line 2\n"))
.await;
let history = store.get_history("wf-1", None);
assert_eq!(history.len(), 2);
@@ -121,8 +125,12 @@ mod tests {
#[tokio::test]
async fn history_filtered_by_step() {
let store = LogStore::new();
store.write_chunk(make_chunk("wf-1", 0, "build", "build log\n")).await;
store.write_chunk(make_chunk("wf-1", 1, "test", "test log\n")).await;
store
.write_chunk(make_chunk("wf-1", 0, "build", "build log\n"))
.await;
store
.write_chunk(make_chunk("wf-1", 1, "test", "test log\n"))
.await;
let build_only = store.get_history("wf-1", Some(0));
assert_eq!(build_only.len(), 1);
@@ -144,7 +152,9 @@ mod tests {
let store = LogStore::new();
let mut rx = store.subscribe("wf-1");
store.write_chunk(make_chunk("wf-1", 0, "build", "hello\n")).await;
store
.write_chunk(make_chunk("wf-1", 0, "build", "hello\n"))
.await;
let received = rx.recv().await.unwrap();
assert_eq!(received.data, b"hello\n");
@@ -157,8 +167,12 @@ mod tests {
let mut rx1 = store.subscribe("wf-1");
let mut rx2 = store.subscribe("wf-2");
store.write_chunk(make_chunk("wf-1", 0, "build", "wf1 log\n")).await;
store.write_chunk(make_chunk("wf-2", 0, "test", "wf2 log\n")).await;
store
.write_chunk(make_chunk("wf-1", 0, "build", "wf1 log\n"))
.await;
store
.write_chunk(make_chunk("wf-2", 0, "test", "wf2 log\n"))
.await;
let e1 = rx1.recv().await.unwrap();
assert_eq!(e1.workflow_id, "wf-1");
@@ -171,7 +185,9 @@ mod tests {
async fn no_subscribers_does_not_error() {
let store = LogStore::new();
// No subscribers — should not panic.
store.write_chunk(make_chunk("wf-1", 0, "build", "orphan log\n")).await;
store
.write_chunk(make_chunk("wf-1", 0, "build", "orphan log\n"))
.await;
// History should still be stored.
assert_eq!(store.get_history("wf-1", None).len(), 1);
}
@@ -182,7 +198,9 @@ mod tests {
let mut rx1 = store.subscribe("wf-1");
let mut rx2 = store.subscribe("wf-1");
store.write_chunk(make_chunk("wf-1", 0, "build", "shared\n")).await;
store
.write_chunk(make_chunk("wf-1", 0, "build", "shared\n"))
.await;
let e1 = rx1.recv().await.unwrap();
let e2 = rx2.recv().await.unwrap();
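The log-store tests above exercise three properties: history is always recorded, chunks fan out only to subscribers of the matching workflow, and writing with no subscribers must not error. A dependency-free stand-in showing the same contract (the real store uses tokio broadcast channels; `std::sync::mpsc` keeps this sketch std-only, one receiver per `subscribe` call):

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

// Minimal stand-in for the server's LogStore: history plus per-workflow fan-out.
struct LogStore {
    history: HashMap<String, Vec<String>>,
    subscribers: HashMap<String, Vec<Sender<String>>>,
}

impl LogStore {
    fn new() -> Self {
        LogStore { history: HashMap::new(), subscribers: HashMap::new() }
    }

    fn subscribe(&mut self, workflow_id: &str) -> Receiver<String> {
        let (tx, rx) = channel();
        self.subscribers.entry(workflow_id.to_string()).or_default().push(tx);
        rx
    }

    // History is kept unconditionally; send failures (no or dropped
    // receivers) are ignored, so orphan writes never error.
    fn write_chunk(&mut self, workflow_id: &str, line: &str) {
        self.history.entry(workflow_id.to_string()).or_default().push(line.to_string());
        if let Some(subs) = self.subscribers.get(workflow_id) {
            for tx in subs {
                let _ = tx.send(line.to_string());
            }
        }
    }
}
```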


@@ -152,9 +152,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
wfe_service = wfe_service.with_log_search(index);
}
let (health_reporter, health_service) = tonic_health::server::health_reporter();
health_reporter
.set_serving::<WfeServer<WfeService>>()
.await;
health_reporter.set_serving::<WfeServer<WfeService>>().await;
// 11. Build auth state.
let auth_state = Arc::new(auth::AuthState::new(config.auth.clone()).await);
@@ -168,10 +166,31 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
// HIGH-08: Limit webhook payload size to 2 MB to prevent OOM DoS.
let http_router = axum::Router::new()
.route("/webhooks/events", axum::routing::post(webhook::handle_generic_event))
.route("/webhooks/github", axum::routing::post(webhook::handle_github_webhook))
.route("/webhooks/gitea", axum::routing::post(webhook::handle_gitea_webhook))
.route(
"/webhooks/events",
axum::routing::post(webhook::handle_generic_event),
)
.route(
"/webhooks/github",
axum::routing::post(webhook::handle_github_webhook),
)
.route(
"/webhooks/gitea",
axum::routing::post(webhook::handle_gitea_webhook),
)
.route("/healthz", axum::routing::get(webhook::health_check))
.route(
"/schema/workflow.proto",
axum::routing::get(serve_proto_schema),
)
.route(
"/schema/workflow.json",
axum::routing::get(serve_json_schema),
)
.route(
"/schema/workflow.yaml",
axum::routing::get(serve_yaml_example),
)
.layer(axum::extract::DefaultBodyLimit::max(2 * 1024 * 1024))
.with_state(webhook_state);
@@ -180,8 +199,14 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
let http_addr = config.http_addr;
tracing::info!(%grpc_addr, %http_addr, "servers listening");
let reflection_service = tonic_reflection::server::Builder::configure()
.register_encoded_file_descriptor_set(wfe_server_protos::FILE_DESCRIPTOR_SET)
.build_v1()
.expect("failed to build reflection service");
let grpc_server = Server::builder()
.add_service(health_service)
.add_service(reflection_service)
.add_service(WfeServer::with_interceptor(wfe_service, auth_interceptor))
.serve(grpc_addr);
@@ -225,7 +250,10 @@ async fn load_yaml_definitions(host: &wfe::WorkflowHost, dir: &std::path::Path)
for entry in entries.flatten() {
let path = entry.path();
if path.extension().is_some_and(|ext| ext == "yaml" || ext == "yml") {
if path
.extension()
.is_some_and(|ext| ext == "yaml" || ext == "yml")
{
match wfe_yaml::load_workflow_from_str(
&std::fs::read_to_string(&path).unwrap_or_default(),
&config,
@@ -248,3 +276,28 @@ async fn load_yaml_definitions(host: &wfe::WorkflowHost, dir: &std::path::Path)
}
}
}
/// Serve the raw .proto schema file.
async fn serve_proto_schema() -> impl axum::response::IntoResponse {
(
[(
axum::http::header::CONTENT_TYPE,
"text/plain; charset=utf-8",
)],
include_str!("../../wfe-server-protos/proto/wfe/v1/wfe.proto"),
)
}
/// Serve the auto-generated JSON Schema for workflow YAML definitions.
async fn serve_json_schema() -> impl axum::response::IntoResponse {
let schema = wfe_yaml::schema::generate_json_schema();
axum::Json(schema)
}
/// Serve the auto-generated JSON Schema as YAML.
async fn serve_yaml_example() -> impl axum::response::IntoResponse {
(
[(axum::http::header::CONTENT_TYPE, "text/yaml; charset=utf-8")],
wfe_yaml::schema::generate_yaml_schema(),
)
}


@@ -1,10 +1,10 @@
use std::sync::Arc;
use axum::Json;
use axum::body::Bytes;
use axum::extract::State;
use axum::http::{HeaderMap, StatusCode};
use axum::response::IntoResponse;
use axum::Json;
use hmac::{Hmac, Mac};
use sha2::Sha256;
@@ -107,7 +107,11 @@ pub async fn handle_github_webhook(
// Publish as event (for workflows waiting on events).
if let Err(e) = state
.host
.publish_event(&forge_event.event_name, &forge_event.event_key, forge_event.data.clone())
.publish_event(
&forge_event.event_name,
&forge_event.event_key,
forge_event.data.clone(),
)
.await
{
tracing::error!(error = %e, "failed to publish forge event");
@@ -208,7 +212,11 @@ pub async fn handle_gitea_webhook(
if let Err(e) = state
.host
.publish_event(&forge_event.event_name, &forge_event.event_key, forge_event.data.clone())
.publish_event(
&forge_event.event_name,
&forge_event.event_key,
forge_event.data.clone(),
)
.await
{
tracing::error!(error = %e, "failed to publish forge event");
@@ -362,10 +370,7 @@ fn map_forge_event(event_type: &str, payload: &serde_json::Value) -> ForgeEvent
/// Extract data fields from payload using simple JSONPath-like mapping.
/// Supports `$.field.nested` syntax.
fn map_trigger_data(
trigger: &WebhookTrigger,
payload: &serde_json::Value,
) -> serde_json::Value {
fn map_trigger_data(trigger: &WebhookTrigger, payload: &serde_json::Value) -> serde_json::Value {
let mut data = serde_json::Map::new();
for (key, path) in &trigger.data_mapping {
if let Some(value) = resolve_json_path(payload, path) {
@@ -376,7 +381,10 @@ fn map_trigger_data(
}
/// Resolve a simple JSONPath expression like `$.repository.full_name`.
fn resolve_json_path<'a>(value: &'a serde_json::Value, path: &str) -> Option<&'a serde_json::Value> {
fn resolve_json_path<'a>(
value: &'a serde_json::Value,
path: &str,
) -> Option<&'a serde_json::Value> {
let path = path.strip_prefix("$.").unwrap_or(path);
let mut current = value;
for segment in path.split('.') {
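`resolve_json_path` walks a `$.repository.full_name`-style expression one object key per segment. The same walk over a toy JSON type (a std-only stand-in for `serde_json::Value`, so this sketch needs no crates):

```rust
use std::collections::HashMap;

// Toy JSON value: just enough structure to demonstrate the path walk.
enum Json {
    Str(String),
    Object(HashMap<String, Json>),
}

// Strip the optional `$.` prefix, then descend one object key per
// `.`-separated segment; any miss or scalar mid-path yields None.
fn resolve_json_path<'a>(value: &'a Json, path: &str) -> Option<&'a Json> {
    let path = path.strip_prefix("$.").unwrap_or(path);
    let mut current = value;
    for segment in path.split('.') {
        match current {
            Json::Object(map) => current = map.get(segment)?,
            _ => return None, // can't descend into a scalar
        }
    }
    Some(current)
}
```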


@@ -57,6 +57,7 @@ impl SqlitePersistenceProvider {
sqlx::query(
"CREATE TABLE IF NOT EXISTS workflows (
id TEXT PRIMARY KEY,
name TEXT NOT NULL UNIQUE,
definition_id TEXT NOT NULL,
version INTEGER NOT NULL,
description TEXT,
@@ -71,6 +72,17 @@ impl SqlitePersistenceProvider {
.execute(&self.pool)
.await?;
// Per-definition monotonic counter used to generate human-friendly
// instance names of the form `{definition_id}-{N}`.
sqlx::query(
"CREATE TABLE IF NOT EXISTS definition_sequences (
definition_id TEXT PRIMARY KEY,
next_num INTEGER NOT NULL
)",
)
.execute(&self.pool)
.await?;
sqlx::query(
"CREATE TABLE IF NOT EXISTS execution_pointers (
id TEXT PRIMARY KEY,
@@ -157,30 +169,28 @@ impl SqlitePersistenceProvider {
.await?;
// Indexes
sqlx::query("CREATE INDEX IF NOT EXISTS idx_workflows_next_execution ON workflows(next_execution)")
.execute(&self.pool)
.await?;
sqlx::query(
"CREATE INDEX IF NOT EXISTS idx_workflows_status ON workflows(status)",
"CREATE INDEX IF NOT EXISTS idx_workflows_next_execution ON workflows(next_execution)",
)
.execute(&self.pool)
.await?;
sqlx::query("CREATE INDEX IF NOT EXISTS idx_workflows_status ON workflows(status)")
.execute(&self.pool)
.await?;
sqlx::query("CREATE INDEX IF NOT EXISTS idx_execution_pointers_workflow_id ON execution_pointers(workflow_id)")
.execute(&self.pool)
.await?;
sqlx::query("CREATE INDEX IF NOT EXISTS idx_events_name_key ON events(event_name, event_key)")
sqlx::query(
"CREATE INDEX IF NOT EXISTS idx_events_name_key ON events(event_name, event_key)",
)
.execute(&self.pool)
.await?;
sqlx::query("CREATE INDEX IF NOT EXISTS idx_events_is_processed ON events(is_processed)")
.execute(&self.pool)
.await?;
sqlx::query("CREATE INDEX IF NOT EXISTS idx_events_event_time ON events(event_time)")
.execute(&self.pool)
.await?;
sqlx::query(
"CREATE INDEX IF NOT EXISTS idx_events_is_processed ON events(is_processed)",
)
.execute(&self.pool)
.await?;
sqlx::query(
"CREATE INDEX IF NOT EXISTS idx_events_event_time ON events(event_time)",
)
.execute(&self.pool)
.await?;
sqlx::query("CREATE INDEX IF NOT EXISTS idx_event_subscriptions_name_key ON event_subscriptions(event_name, event_key)")
.execute(&self.pool)
.await?;
@@ -226,10 +236,8 @@ fn row_to_workflow(
pointers: Vec<ExecutionPointer>,
) -> std::result::Result<WorkflowInstance, WfeError> {
let status_str: String = row.try_get("status").map_err(to_persistence_err)?;
let status: WorkflowStatus =
serde_json::from_str(&format!("\"{status_str}\"")).map_err(|e| {
WfeError::Persistence(format!("Failed to deserialize WorkflowStatus: {e}"))
})?;
let status: WorkflowStatus = serde_json::from_str(&format!("\"{status_str}\""))
.map_err(|e| WfeError::Persistence(format!("Failed to deserialize WorkflowStatus: {e}")))?;
let data_str: String = row.try_get("data").map_err(to_persistence_err)?;
let data: serde_json::Value = serde_json::from_str(&data_str)
@@ -241,6 +249,7 @@ fn row_to_workflow(
Ok(WorkflowInstance {
id: row.try_get("id").map_err(to_persistence_err)?,
name: row.try_get("name").map_err(to_persistence_err)?,
workflow_definition_id: row.try_get("definition_id").map_err(to_persistence_err)?,
version: row
.try_get::<i64, _>("version")
@@ -272,10 +281,11 @@ fn row_to_pointer(
.as_deref()
.map(serde_json::from_str)
.transpose()
.map_err(|e| WfeError::Persistence(format!("Failed to deserialize persistence_data: {e}")))?;
.map_err(|e| {
WfeError::Persistence(format!("Failed to deserialize persistence_data: {e}"))
})?;
let event_data_str: Option<String> =
row.try_get("event_data").map_err(to_persistence_err)?;
let event_data_str: Option<String> = row.try_get("event_data").map_err(to_persistence_err)?;
let event_data: Option<serde_json::Value> = event_data_str
.as_deref()
.map(serde_json::from_str)
@@ -308,15 +318,13 @@ fn row_to_pointer(
let ext_str: String = row
.try_get("extension_attributes")
.map_err(to_persistence_err)?;
let extension_attributes: HashMap<String, serde_json::Value> =
serde_json::from_str(&ext_str).map_err(|e| {
WfeError::Persistence(format!("Failed to deserialize extension_attributes: {e}"))
})?;
let extension_attributes: HashMap<String, serde_json::Value> = serde_json::from_str(&ext_str)
.map_err(|e| {
WfeError::Persistence(format!("Failed to deserialize extension_attributes: {e}"))
})?;
let sleep_until_str: Option<String> =
row.try_get("sleep_until").map_err(to_persistence_err)?;
let start_time_str: Option<String> =
row.try_get("start_time").map_err(to_persistence_err)?;
let sleep_until_str: Option<String> = row.try_get("sleep_until").map_err(to_persistence_err)?;
let start_time_str: Option<String> = row.try_get("start_time").map_err(to_persistence_err)?;
let end_time_str: Option<String> = row.try_get("end_time").map_err(to_persistence_err)?;
Ok(ExecutionPointer {
@@ -373,8 +381,7 @@ fn row_to_event(row: &sqlx::sqlite::SqliteRow) -> std::result::Result<Event, Wfe
fn row_to_subscription(
row: &sqlx::sqlite::SqliteRow,
) -> std::result::Result<EventSubscription, WfeError> {
let subscribe_as_of_str: String =
row.try_get("subscribe_as_of").map_err(to_persistence_err)?;
let subscribe_as_of_str: String = row.try_get("subscribe_as_of").map_err(to_persistence_err)?;
let subscription_data_str: Option<String> = row
.try_get("subscription_data")
@@ -436,10 +443,11 @@ impl WorkflowRepository for SqlitePersistenceProvider {
let mut tx = self.pool.begin().await.map_err(to_persistence_err)?;
sqlx::query(
"INSERT INTO workflows (id, definition_id, version, description, reference, status, data, next_execution, create_time, complete_time)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10)",
"INSERT INTO workflows (id, name, definition_id, version, description, reference, status, data, next_execution, create_time, complete_time)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11)",
)
.bind(&id)
.bind(&instance.name)
.bind(&instance.workflow_definition_id)
.bind(instance.version as i64)
.bind(&instance.description)
@@ -474,10 +482,11 @@ impl WorkflowRepository for SqlitePersistenceProvider {
let mut tx = self.pool.begin().await.map_err(to_persistence_err)?;
sqlx::query(
"UPDATE workflows SET definition_id = ?1, version = ?2, description = ?3, reference = ?4,
status = ?5, data = ?6, next_execution = ?7, complete_time = ?8
WHERE id = ?9",
"UPDATE workflows SET name = ?1, definition_id = ?2, version = ?3, description = ?4, reference = ?5,
status = ?6, data = ?7, next_execution = ?8, complete_time = ?9
WHERE id = ?10",
)
.bind(&instance.name)
.bind(&instance.workflow_definition_id)
.bind(instance.version as i64)
.bind(&instance.description)
@@ -523,10 +532,11 @@ impl WorkflowRepository for SqlitePersistenceProvider {
let mut tx = self.pool.begin().await.map_err(to_persistence_err)?;
sqlx::query(
"UPDATE workflows SET definition_id = ?1, version = ?2, description = ?3, reference = ?4,
status = ?5, data = ?6, next_execution = ?7, complete_time = ?8
WHERE id = ?9",
"UPDATE workflows SET name = ?1, definition_id = ?2, version = ?3, description = ?4, reference = ?5,
status = ?6, data = ?7, next_execution = ?8, complete_time = ?9
WHERE id = ?10",
)
.bind(&instance.name)
.bind(&instance.workflow_definition_id)
.bind(instance.version as i64)
.bind(&instance.description)
@@ -583,12 +593,11 @@ impl WorkflowRepository for SqlitePersistenceProvider {
.map_err(to_persistence_err)?
.ok_or_else(|| WfeError::WorkflowNotFound(id.to_string()))?;
let pointer_rows =
sqlx::query("SELECT * FROM execution_pointers WHERE workflow_id = ?1")
.bind(id)
.fetch_all(&self.pool)
.await
.map_err(to_persistence_err)?;
let pointer_rows = sqlx::query("SELECT * FROM execution_pointers WHERE workflow_id = ?1")
.bind(id)
.fetch_all(&self.pool)
.await
.map_err(to_persistence_err)?;
let pointers = pointer_rows
.iter()
@@ -598,6 +607,36 @@ impl WorkflowRepository for SqlitePersistenceProvider {
row_to_workflow(&row, pointers)
}
async fn get_workflow_instance_by_name(&self, name: &str) -> Result<WorkflowInstance> {
let row = sqlx::query("SELECT id FROM workflows WHERE name = ?1")
.bind(name)
.fetch_optional(&self.pool)
.await
.map_err(to_persistence_err)?
.ok_or_else(|| WfeError::WorkflowNotFound(name.to_string()))?;
let id: String = row.try_get("id").map_err(to_persistence_err)?;
self.get_workflow_instance(&id).await
}
async fn next_definition_sequence(&self, definition_id: &str) -> Result<u64> {
// SQLite doesn't support `INSERT ... ON CONFLICT ... RETURNING` prior
// to 3.35, but sqlx bundles a new-enough build. Emulate an atomic
// increment via UPSERT + RETURNING so concurrent callers don't collide.
let row = sqlx::query(
"INSERT INTO definition_sequences (definition_id, next_num)
VALUES (?1, 1)
ON CONFLICT(definition_id) DO UPDATE
SET next_num = next_num + 1
RETURNING next_num",
)
.bind(definition_id)
.fetch_one(&self.pool)
.await
.map_err(to_persistence_err)?;
let next: i64 = row.try_get("next_num").map_err(to_persistence_err)?;
Ok(next as u64)
}
async fn get_workflow_instances(&self, ids: &[String]) -> Result<Vec<WorkflowInstance>> {
if ids.is_empty() {
return Ok(Vec::new());
@@ -735,10 +774,7 @@ async fn insert_subscription(
#[async_trait]
impl SubscriptionRepository for SqlitePersistenceProvider {
async fn create_event_subscription(
&self,
subscription: &EventSubscription,
) -> Result<String> {
async fn create_event_subscription(&self, subscription: &EventSubscription) -> Result<String> {
let id = if subscription.id.is_empty() {
uuid::Uuid::new_v4().to_string()
} else {
@@ -776,18 +812,14 @@ impl SubscriptionRepository for SqlitePersistenceProvider {
}
async fn terminate_subscription(&self, subscription_id: &str) -> Result<()> {
let result = sqlx::query(
"UPDATE event_subscriptions SET terminated = 1 WHERE id = ?1",
)
.bind(subscription_id)
.execute(&self.pool)
.await
.map_err(to_persistence_err)?;
let result = sqlx::query("UPDATE event_subscriptions SET terminated = 1 WHERE id = ?1")
.bind(subscription_id)
.execute(&self.pool)
.await
.map_err(to_persistence_err)?;
if result.rows_affected() == 0 {
return Err(WfeError::SubscriptionNotFound(
subscription_id.to_string(),
));
return Err(WfeError::SubscriptionNotFound(subscription_id.to_string()));
}
Ok(())
}
@@ -860,20 +892,14 @@ impl SubscriptionRepository for SqlitePersistenceProvider {
.await
.map_err(to_persistence_err)?;
if exists.is_none() {
return Err(WfeError::SubscriptionNotFound(
subscription_id.to_string(),
));
return Err(WfeError::SubscriptionNotFound(subscription_id.to_string()));
}
return Ok(false);
}
Ok(true)
}
async fn clear_subscription_token(
&self,
subscription_id: &str,
token: &str,
) -> Result<()> {
async fn clear_subscription_token(&self, subscription_id: &str, token: &str) -> Result<()> {
let result = sqlx::query(
"UPDATE event_subscriptions
SET external_token = NULL, external_worker_id = NULL, external_token_expiry = NULL
@@ -886,9 +912,7 @@ impl SubscriptionRepository for SqlitePersistenceProvider {
.map_err(to_persistence_err)?;
if result.rows_affected() == 0 {
return Err(WfeError::SubscriptionNotFound(
subscription_id.to_string(),
));
return Err(WfeError::SubscriptionNotFound(subscription_id.to_string()));
}
Ok(())
}
@@ -937,13 +961,11 @@ impl EventRepository for SqlitePersistenceProvider {
async fn get_runnable_events(&self, as_at: DateTime<Utc>) -> Result<Vec<String>> {
let as_at_str = dt_to_string(&as_at);
let rows = sqlx::query(
"SELECT id FROM events WHERE is_processed = 0 AND event_time <= ?1",
)
.bind(&as_at_str)
.fetch_all(&self.pool)
.await
.map_err(to_persistence_err)?;
let rows = sqlx::query("SELECT id FROM events WHERE is_processed = 0 AND event_time <= ?1")
.bind(&as_at_str)
.fetch_all(&self.pool)
.await
.map_err(to_persistence_err)?;
rows.iter()
.map(|r| r.try_get("id").map_err(to_persistence_err))
@@ -1029,9 +1051,14 @@ impl ScheduledCommandRepository for SqlitePersistenceProvider {
async fn process_commands(
&self,
as_of: DateTime<Utc>,
handler: &(dyn Fn(ScheduledCommand) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<()>> + Send>>
+ Send
+ Sync),
handler: &(
dyn Fn(
ScheduledCommand,
)
-> std::pin::Pin<Box<dyn std::future::Future<Output = Result<()>> + Send>>
+ Send
+ Sync
),
) -> Result<()> {
let as_of_millis = as_of.timestamp_millis();
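The `next_definition_sequence` addition earlier in this file leans on SQLite's `ON CONFLICT ... DO UPDATE ... RETURNING` to get an atomic fetch-and-increment per definition id. A plain-Rust sketch of the intended semantics (a `HashMap` standing in for the `definition_sequences` table — an illustration of the returned values, not the SQL path):

```rust
use std::collections::HashMap;

/// Emulates `INSERT ... ON CONFLICT(definition_id) DO UPDATE
/// SET next_num = next_num + 1 RETURNING next_num`: the first call
/// for a key yields 1, and every later call yields one more.
fn next_definition_sequence(table: &mut HashMap<String, u64>, definition_id: &str) -> u64 {
    let next = table.entry(definition_id.to_string()).or_insert(0);
    *next += 1;
    *next
}

fn main() {
    let mut table = HashMap::new();
    assert_eq!(next_definition_sequence(&mut table, "ci"), 1);
    assert_eq!(next_definition_sequence(&mut table, "ci"), 2);
    assert_eq!(next_definition_sequence(&mut table, "deploy"), 1);
}
```

In the real provider the atomicity comes from SQLite executing the UPSERT as a single statement; the sketch only mirrors the sequence of values callers observe.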

View File

@@ -28,10 +28,7 @@ impl LifecyclePublisher for ValkeyLifecyclePublisher {
let mut conn = self.conn.clone();
let json = serde_json::to_string(&event)?;
let instance_channel = format!(
"{}:lifecycle:{}",
self.prefix, event.workflow_instance_id
);
let instance_channel = format!("{}:lifecycle:{}", self.prefix, event.workflow_instance_id);
let all_channel = format!("{}:lifecycle:all", self.prefix);
// Publish to the instance-specific channel.
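For reference, the two channel names the publisher writes to are plain prefix-qualified strings — a standalone restatement of the `format!` calls in this hunk (helper names are ours, for illustration):

```rust
// Channel for events scoped to one workflow instance.
fn instance_channel(prefix: &str, instance_id: &str) -> String {
    format!("{prefix}:lifecycle:{instance_id}")
}

// Firehose channel carrying every lifecycle event under the prefix.
fn all_channel(prefix: &str) -> String {
    format!("{prefix}:lifecycle:all")
}

fn main() {
    assert_eq!(instance_channel("wfe", "wf-1"), "wfe:lifecycle:wf-1");
    assert_eq!(all_channel("wfe"), "wfe:lifecycle:all");
}
```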

View File

@@ -17,8 +17,9 @@ async fn publish_subscribe_round_trip() {
}
let prefix = format!("wfe_test_{}", uuid::Uuid::new_v4().simple());
let publisher =
wfe_valkey::ValkeyLifecyclePublisher::new("redis://localhost:6379", &prefix).await.unwrap();
let publisher = wfe_valkey::ValkeyLifecyclePublisher::new("redis://localhost:6379", &prefix)
.await
.unwrap();
let instance_id = "wf-lifecycle-test-1";
let channel = format!("{}:lifecycle:{}", prefix, instance_id);
@@ -42,12 +43,7 @@ async fn publish_subscribe_round_trip() {
// Small delay to ensure the subscription is active before publishing.
tokio::time::sleep(Duration::from_millis(200)).await;
let event = LifecycleEvent::new(
instance_id,
"def-1",
1,
LifecycleEventType::Started,
);
let event = LifecycleEvent::new(instance_id, "def-1", 1, LifecycleEventType::Started);
publisher.publish(event).await.unwrap();
// Wait for the message with a timeout.
@@ -71,8 +67,9 @@ async fn publish_to_all_channel() {
}
let prefix = format!("wfe_test_{}", uuid::Uuid::new_v4().simple());
let publisher =
wfe_valkey::ValkeyLifecyclePublisher::new("redis://localhost:6379", &prefix).await.unwrap();
let publisher = wfe_valkey::ValkeyLifecyclePublisher::new("redis://localhost:6379", &prefix)
.await
.unwrap();
let all_channel = format!("{}:lifecycle:all", prefix);
@@ -93,12 +90,7 @@ async fn publish_to_all_channel() {
tokio::time::sleep(Duration::from_millis(200)).await;
let event = LifecycleEvent::new(
"wf-all-test",
"def-1",
1,
LifecycleEventType::Completed,
);
let event = LifecycleEvent::new("wf-all-test", "def-1", 1, LifecycleEventType::Completed);
publisher.publish(event).await.unwrap();
let received = tokio::time::timeout(Duration::from_secs(5), rx.recv())

View File

@@ -2,7 +2,9 @@ use wfe_core::lock_suite;
async fn make_provider() -> wfe_valkey::ValkeyLockProvider {
let prefix = format!("wfe_test_{}", uuid::Uuid::new_v4().simple());
wfe_valkey::ValkeyLockProvider::new("redis://localhost:6379", &prefix).await.unwrap()
wfe_valkey::ValkeyLockProvider::new("redis://localhost:6379", &prefix)
.await
.unwrap()
}
lock_suite!(make_provider);

View File

@@ -2,7 +2,9 @@ use wfe_core::queue_suite;
async fn make_provider() -> wfe_valkey::ValkeyQueueProvider {
let prefix = format!("wfe_test_{}", uuid::Uuid::new_v4().simple());
wfe_valkey::ValkeyQueueProvider::new("redis://localhost:6379", &prefix).await.unwrap()
wfe_valkey::ValkeyQueueProvider::new("redis://localhost:6379", &prefix)
.await
.unwrap()
}
queue_suite!(make_provider);

View File

@@ -27,6 +27,7 @@ thiserror = { workspace = true }
tracing = { workspace = true }
chrono = { workspace = true }
regex = { workspace = true }
schemars = { version = "1", features = ["derive"] }
deno_core = { workspace = true, optional = true }
deno_error = { workspace = true, optional = true }
url = { workspace = true, optional = true }

View File

@@ -7,21 +7,23 @@ use wfe_core::models::workflow_definition::{StepOutcome, WorkflowDefinition, Wor
use wfe_core::traits::StepBody;
use crate::error::YamlWorkflowError;
use crate::executors::shell::{ShellConfig, ShellStep};
#[cfg(feature = "deno")]
use crate::executors::deno::{DenoConfig, DenoPermissions, DenoStep};
use crate::executors::shell::{ShellConfig, ShellStep};
#[cfg(feature = "buildkit")]
use wfe_buildkit::{BuildkitConfig, BuildkitStep};
#[cfg(feature = "containerd")]
use wfe_containerd::{ContainerdConfig, ContainerdStep};
use wfe_core::models::condition::{ComparisonOp, FieldComparison, StepCondition};
use wfe_core::primitives::sub_workflow::SubWorkflowStep;
#[cfg(feature = "kubernetes")]
use wfe_kubernetes::{ClusterConfig, KubernetesStep, KubernetesStepConfig};
#[cfg(feature = "rustlang")]
use wfe_rustlang::{CargoCommand, CargoConfig, CargoStep, RustupCommand, RustupConfig, RustupStep};
#[cfg(feature = "kubernetes")]
use wfe_kubernetes::{ClusterConfig, KubernetesStepConfig, KubernetesStep};
use wfe_core::primitives::sub_workflow::SubWorkflowStep;
use wfe_core::models::condition::{ComparisonOp, FieldComparison, StepCondition};
use crate::schema::{WorkflowSpec, YamlCombinator, YamlComparison, YamlCondition, YamlErrorBehavior, YamlStep};
use crate::schema::{
WorkflowSpec, YamlCombinator, YamlComparison, YamlCondition, YamlErrorBehavior, YamlStep,
};
/// Configuration for a sub-workflow step.
#[derive(Debug, Clone, Serialize)]
@@ -43,6 +45,7 @@ pub struct CompiledWorkflow {
/// Compile a parsed WorkflowSpec into a CompiledWorkflow.
pub fn compile(spec: &WorkflowSpec) -> Result<CompiledWorkflow, YamlWorkflowError> {
let mut definition = WorkflowDefinition::new(&spec.id, spec.version);
definition.name = spec.name.clone();
definition.description = spec.description.clone();
if let Some(ref eb) = spec.error_behavior {
@@ -77,10 +80,8 @@ fn compile_steps(
let container_id = *next_id;
*next_id += 1;
let mut container = WorkflowStep::new(
container_id,
"wfe_core::primitives::sequence::SequenceStep",
);
let mut container =
WorkflowStep::new(container_id, "wfe_core::primitives::sequence::SequenceStep");
container.name = Some(yaml_step.name.clone());
if let Some(ref eb) = yaml_step.error_behavior {
@@ -88,8 +89,7 @@ fn compile_steps(
}
// Compile children.
let child_ids =
compile_steps(parallel_children, definition, factories, next_id)?;
let child_ids = compile_steps(parallel_children, definition, factories, next_id)?;
container.children = child_ids;
// Compile condition if present.
@@ -104,10 +104,7 @@ fn compile_steps(
let step_id = *next_id;
*next_id += 1;
let step_type = yaml_step
.step_type
.as_deref()
.unwrap_or("shell");
let step_type = yaml_step.step_type.as_deref().unwrap_or("shell");
let (step_type_key, step_config_value, factory): (
String,
@@ -133,10 +130,7 @@ fn compile_steps(
let comp_id = *next_id;
*next_id += 1;
let on_failure_type = on_failure
.step_type
.as_deref()
.unwrap_or("shell");
let on_failure_type = on_failure.step_type.as_deref().unwrap_or("shell");
let (comp_key, comp_config_value, comp_factory) =
build_step_config_and_factory(on_failure, on_failure_type)?;
@@ -156,10 +150,7 @@ fn compile_steps(
let success_id = *next_id;
*next_id += 1;
let on_success_type = on_success
.step_type
.as_deref()
.unwrap_or("shell");
let on_success_type = on_success.step_type.as_deref().unwrap_or("shell");
let (success_key, success_config_value, success_factory) =
build_step_config_and_factory(on_success, on_success_type)?;
@@ -183,10 +174,7 @@ fn compile_steps(
let ensure_id = *next_id;
*next_id += 1;
let ensure_type = ensure
.step_type
.as_deref()
.unwrap_or("shell");
let ensure_type = ensure.step_type.as_deref().unwrap_or("shell");
let (ensure_key, ensure_config_value, ensure_factory) =
build_step_config_and_factory(ensure, ensure_type)?;
@@ -407,9 +395,7 @@ fn build_step_config_and_factory(
let config = build_shell_config(step)?;
let key = format!("wfe_yaml::shell::{}", step.name);
let value = serde_json::to_value(&config).map_err(|e| {
YamlWorkflowError::Compilation(format!(
"Failed to serialize shell config: {e}"
))
YamlWorkflowError::Compilation(format!("Failed to serialize shell config: {e}"))
})?;
let config_clone = config.clone();
let factory: StepFactory = Box::new(move || {
@@ -422,9 +408,7 @@ fn build_step_config_and_factory(
let config = build_deno_config(step)?;
let key = format!("wfe_yaml::deno::{}", step.name);
let value = serde_json::to_value(&config).map_err(|e| {
YamlWorkflowError::Compilation(format!(
"Failed to serialize deno config: {e}"
))
YamlWorkflowError::Compilation(format!("Failed to serialize deno config: {e}"))
})?;
let config_clone = config.clone();
let factory: StepFactory = Box::new(move || {
@@ -437,9 +421,7 @@ fn build_step_config_and_factory(
let config = build_buildkit_config(step)?;
let key = format!("wfe_yaml::buildkit::{}", step.name);
let value = serde_json::to_value(&config).map_err(|e| {
YamlWorkflowError::Compilation(format!(
"Failed to serialize buildkit config: {e}"
))
YamlWorkflowError::Compilation(format!("Failed to serialize buildkit config: {e}"))
})?;
let config_clone = config.clone();
let factory: StepFactory = Box::new(move || {
@@ -474,8 +456,10 @@ fn build_step_config_and_factory(
let step_config = config.0;
let cluster_config = config.1;
let factory: StepFactory = Box::new(move || {
Box::new(KubernetesStep::lazy(step_config.clone(), cluster_config.clone()))
as Box<dyn StepBody>
Box::new(KubernetesStep::lazy(
step_config.clone(),
cluster_config.clone(),
)) as Box<dyn StepBody>
});
Ok((key, value, factory))
}
@@ -486,9 +470,7 @@ fn build_step_config_and_factory(
let config = build_cargo_config(step, step_type)?;
let key = format!("wfe_yaml::cargo::{}", step.name);
let value = serde_json::to_value(&config).map_err(|e| {
YamlWorkflowError::Compilation(format!(
"Failed to serialize cargo config: {e}"
))
YamlWorkflowError::Compilation(format!("Failed to serialize cargo config: {e}"))
})?;
let config_clone = config.clone();
let factory: StepFactory = Box::new(move || {
@@ -501,9 +483,7 @@ fn build_step_config_and_factory(
let config = build_rustup_config(step, step_type)?;
let key = format!("wfe_yaml::rustup::{}", step.name);
let value = serde_json::to_value(&config).map_err(|e| {
YamlWorkflowError::Compilation(format!(
"Failed to serialize rustup config: {e}"
))
YamlWorkflowError::Compilation(format!("Failed to serialize rustup config: {e}"))
})?;
let config_clone = config.clone();
let factory: StepFactory = Box::new(move || {
@@ -534,9 +514,7 @@ fn build_step_config_and_factory(
let key = format!("wfe_yaml::workflow::{}", step.name);
let value = serde_json::to_value(&sub_config).map_err(|e| {
YamlWorkflowError::Compilation(format!(
"Failed to serialize workflow config: {e}"
))
YamlWorkflowError::Compilation(format!("Failed to serialize workflow config: {e}"))
})?;
let config_clone = sub_config.clone();
let factory: StepFactory = Box::new(move || {
@@ -603,10 +581,7 @@ fn build_deno_config(step: &YamlStep) -> Result<DenoConfig, YamlWorkflowError> {
fn build_shell_config(step: &YamlStep) -> Result<ShellConfig, YamlWorkflowError> {
let config = step.config.as_ref().ok_or_else(|| {
YamlWorkflowError::Compilation(format!(
"Step '{}' is missing 'config' section",
step.name
))
YamlWorkflowError::Compilation(format!("Step '{}' is missing 'config' section", step.name))
})?;
let run = config
@@ -634,10 +609,7 @@ fn build_shell_config(step: &YamlStep) -> Result<ShellConfig, YamlWorkflowError>
}
#[cfg(feature = "rustlang")]
fn build_cargo_config(
step: &YamlStep,
step_type: &str,
) -> Result<CargoConfig, YamlWorkflowError> {
fn build_cargo_config(step: &YamlStep, step_type: &str) -> Result<CargoConfig, YamlWorkflowError> {
let command = match step_type {
"cargo-build" => CargoCommand::Build,
"cargo-test" => CargoCommand::Test,
@@ -730,9 +702,7 @@ fn parse_duration_ms(s: &str) -> Option<u64> {
}
#[cfg(feature = "buildkit")]
fn build_buildkit_config(
step: &YamlStep,
) -> Result<BuildkitConfig, YamlWorkflowError> {
fn build_buildkit_config(step: &YamlStep) -> Result<BuildkitConfig, YamlWorkflowError> {
let config = step.config.as_ref().ok_or_else(|| {
YamlWorkflowError::Compilation(format!(
"BuildKit step '{}' is missing 'config' section",
@@ -805,9 +775,7 @@ fn build_buildkit_config(
}
#[cfg(feature = "containerd")]
fn build_containerd_config(
step: &YamlStep,
) -> Result<ContainerdConfig, YamlWorkflowError> {
fn build_containerd_config(step: &YamlStep) -> Result<ContainerdConfig, YamlWorkflowError> {
let config = step.config.as_ref().ok_or_else(|| {
YamlWorkflowError::Compilation(format!(
"Containerd step '{}' is missing 'config' section",
@@ -869,11 +837,17 @@ fn build_containerd_config(
env: config.env.clone(),
volumes,
working_dir: config.working_dir.clone(),
user: config.user.clone().unwrap_or_else(|| "65534:65534".to_string()),
user: config
.user
.clone()
.unwrap_or_else(|| "65534:65534".to_string()),
network: config.network.clone().unwrap_or_else(|| "none".to_string()),
memory: config.memory.clone(),
cpu: config.cpu.clone(),
pull: config.pull.clone().unwrap_or_else(|| "if-not-present".to_string()),
pull: config
.pull
.clone()
.unwrap_or_else(|| "if-not-present".to_string()),
containerd_addr: config
.containerd_addr
.clone()
@@ -944,9 +918,7 @@ fn compile_services(
}
} else {
// Default: TCP check on first port.
ReadinessCheck::TcpSocket(
yaml_svc.ports.first().copied().unwrap_or(0),
)
ReadinessCheck::TcpSocket(yaml_svc.ports.first().copied().unwrap_or(0))
};
let interval_ms = r
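The repeated `step.step_type.as_deref().unwrap_or("shell")` collapses in this file all express one rule: a step (or its on_failure/on_success/ensure hook) with no explicit `type:` compiles as a shell step. Standalone (the helper name is ours, not the crate's):

```rust
/// Hypothetical helper showing the compiler's default-type rule:
/// a missing `type:` in the YAML means "shell".
fn effective_step_type(step_type: &Option<String>) -> &str {
    step_type.as_deref().unwrap_or("shell")
}

fn main() {
    assert_eq!(effective_step_type(&None), "shell");
    assert_eq!(effective_step_type(&Some("kubernetes".to_string())), "kubernetes");
}
```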

View File

@@ -96,15 +96,13 @@ impl ModuleLoader for WfeModuleLoader {
// Relative or bare path — resolve against referrer.
// This handles ./foo, ../foo, and /foo (absolute path on same origin, e.g. esm.sh redirects)
if specifier.starts_with("./")
|| specifier.starts_with("../")
|| specifier.starts_with('/')
if specifier.starts_with("./") || specifier.starts_with("../") || specifier.starts_with('/')
{
let base = ModuleSpecifier::parse(referrer)
.map_err(|e| JsErrorBox::generic(format!("Invalid referrer '{referrer}': {e}")))?;
let resolved = base
.join(specifier)
.map_err(|e| JsErrorBox::generic(format!("Failed to resolve '{specifier}': {e}")))?;
let resolved = base.join(specifier).map_err(|e| {
JsErrorBox::generic(format!("Failed to resolve '{specifier}': {e}"))
})?;
// Check permissions based on scheme.
match resolved.scheme() {
@@ -172,11 +170,9 @@ impl ModuleLoader for WfeModuleLoader {
.map_err(|e| JsErrorBox::new("PermissionError", e.to_string()))?;
}
let response = reqwest::get(&url)
.await
.map_err(|e| {
JsErrorBox::generic(format!("Failed to fetch module '{url}': {e}"))
})?;
let response = reqwest::get(&url).await.map_err(|e| {
JsErrorBox::generic(format!("Failed to fetch module '{url}': {e}"))
})?;
if !response.status().is_success() {
return Err(JsErrorBox::generic(format!(
@@ -224,9 +220,10 @@ impl ModuleLoader for WfeModuleLoader {
&specifier,
None,
))),
Err(e) => ModuleLoadResponse::Sync(Err(JsErrorBox::generic(
format!("Failed to read module '{}': {e}", path.display()),
))),
Err(e) => ModuleLoadResponse::Sync(Err(JsErrorBox::generic(format!(
"Failed to read module '{}': {e}",
path.display()
)))),
}
}
Err(e) => ModuleLoadResponse::Sync(Err(e)),
@@ -274,7 +271,11 @@ mod tests {
..Default::default()
});
let result = loader
.resolve("npm:lodash@4", "ext:wfe/bootstrap.js", ResolutionKind::Import)
.resolve(
"npm:lodash@4",
"ext:wfe/bootstrap.js",
ResolutionKind::Import,
)
.unwrap();
assert_eq!(result.as_str(), "https://esm.sh/lodash@4");
}
@@ -304,7 +305,12 @@ mod tests {
ResolutionKind::Import,
);
assert!(result.is_err());
assert!(result.unwrap_err().to_string().contains("Permission denied"));
assert!(
result
.unwrap_err()
.to_string()
.contains("Permission denied")
);
}
#[test]
@@ -320,10 +326,12 @@ mod tests {
ResolutionKind::DynamicImport,
);
assert!(result.is_err());
assert!(result
.unwrap_err()
.to_string()
.contains("Dynamic import is not allowed"));
assert!(
result
.unwrap_err()
.to_string()
.contains("Dynamic import is not allowed")
);
}
#[test]
@@ -361,11 +369,7 @@ mod tests {
..Default::default()
});
let result = loader
.resolve(
"./helper.js",
"file:///tmp/main.js",
ResolutionKind::Import,
)
.resolve("./helper.js", "file:///tmp/main.js", ResolutionKind::Import)
.unwrap();
assert_eq!(result.as_str(), "file:///tmp/helper.js");
}
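The loader delegates relative resolution to `ModuleSpecifier::parse` + `join` (the `url` crate underneath). A simplified std-only sketch of the two cases exercised here — `./x` replaces the referrer's last path segment, `/x` replaces its whole path — without the `../` normalization the real crate also performs:

```rust
/// Simplified join of a specifier against a referrer URL.
/// Handles "./x" (relative to the last slash) and "/x" (absolute on
/// the same origin); everything else passes through unchanged.
fn join_specifier(referrer: &str, specifier: &str) -> String {
    if let Some(rest) = specifier.strip_prefix("./") {
        // Keep everything up to and including the last '/'.
        let cut = referrer.rfind('/').map(|i| i + 1).unwrap_or(referrer.len());
        format!("{}{rest}", &referrer[..cut])
    } else if specifier.starts_with('/') {
        // Origin is everything before the first '/' after "://";
        // for file:/// the host part is empty and this still works.
        let scheme_end = referrer.find("://").map(|i| i + 3).unwrap_or(0);
        let path_start = referrer[scheme_end..]
            .find('/')
            .map(|i| scheme_end + i)
            .unwrap_or(referrer.len());
        format!("{}{specifier}", &referrer[..path_start])
    } else {
        specifier.to_string()
    }
}

fn main() {
    assert_eq!(
        join_specifier("file:///tmp/main.js", "./helper.js"),
        "file:///tmp/helper.js"
    );
    assert_eq!(
        join_specifier("https://esm.sh/lodash@4", "/lodash@4/get.js"),
        "https://esm.sh/lodash@4/get.js"
    );
}
```

The second case is why the loader accepts bare `/...` specifiers at all: esm.sh responds with same-origin absolute redirect paths.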

View File

@@ -2,8 +2,8 @@ use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;
use deno_core::op2;
use deno_core::OpState;
use deno_core::op2;
use deno_error::JsErrorBox;
use serde::{Deserialize, Serialize};

View File

@@ -1,7 +1,7 @@
use std::collections::HashMap;
use deno_core::op2;
use deno_core::OpState;
use deno_core::op2;
/// Workflow data available to the script via `inputs()`.
pub struct WorkflowInputs {
@@ -28,11 +28,7 @@ pub fn op_inputs(state: &mut OpState) -> serde_json::Value {
/// Stores a key/value pair in the step outputs.
#[op2]
pub fn op_output(
state: &mut OpState,
#[string] key: String,
#[serde] value: serde_json::Value,
) {
pub fn op_output(state: &mut OpState, #[string] key: String, #[serde] value: serde_json::Value) {
let outputs = state.borrow_mut::<StepOutputs>();
outputs.map.insert(key, value);
}
@@ -56,7 +52,8 @@ pub async fn op_read_file(
{
let s = state.borrow();
let checker = s.borrow::<super::super::permissions::PermissionChecker>();
checker.check_read(&path)
checker
.check_read(&path)
.map_err(|e| deno_error::JsErrorBox::new("PermissionError", e.to_string()))?;
}
tokio::fs::read_to_string(&path)
@@ -66,7 +63,13 @@ pub async fn op_read_file(
deno_core::extension!(
wfe_ops,
ops = [op_inputs, op_output, op_log, op_read_file, super::http::op_fetch],
ops = [
op_inputs,
op_output,
op_log,
op_read_file,
super::http::op_fetch
],
esm_entry_point = "ext:wfe/bootstrap.js",
esm = ["ext:wfe/bootstrap.js" = "src/executors/deno/js/bootstrap.js"],
);

View File

@@ -120,9 +120,9 @@ impl PermissionChecker {
/// Detect `..` path traversal components.
fn has_traversal(path: &str) -> bool {
Path::new(path).components().any(|c| {
matches!(c, std::path::Component::ParentDir)
})
Path::new(path)
.components()
.any(|c| matches!(c, std::path::Component::ParentDir))
}
}
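The reformatted `has_traversal` is behavior-identical to the old closure form; as a standalone function it is:

```rust
use std::path::{Component, Path};

/// Rejects any path containing a `..` component, regardless of
/// where in the path it appears.
fn has_traversal(path: &str) -> bool {
    Path::new(path)
        .components()
        .any(|c| matches!(c, Component::ParentDir))
}

fn main() {
    assert!(has_traversal("/tmp/../../../etc/passwd"));
    assert!(!has_traversal("/tmp/data/file.txt"));
}
```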
@@ -130,12 +130,7 @@ impl PermissionChecker {
mod tests {
use super::*;
fn perms(
net: &[&str],
read: &[&str],
write: &[&str],
env: &[&str],
) -> PermissionChecker {
fn perms(net: &[&str], read: &[&str], write: &[&str], env: &[&str]) -> PermissionChecker {
PermissionChecker::from_config(&DenoPermissions {
net: net.iter().map(|s| s.to_string()).collect(),
read: read.iter().map(|s| s.to_string()).collect(),
@@ -182,9 +177,7 @@ mod tests {
#[test]
fn read_path_traversal_blocked() {
let checker = perms(&[], &["/tmp"], &[], &[]);
let err = checker
.check_read("/tmp/../../../etc/passwd")
.unwrap_err();
let err = checker.check_read("/tmp/../../../etc/passwd").unwrap_err();
assert_eq!(err.kind, "read");
assert!(err.resource.contains(".."));
}
@@ -205,9 +198,7 @@ mod tests {
#[test]
fn write_path_traversal_blocked() {
let checker = perms(&[], &[], &["/tmp/out"], &[]);
assert!(checker
.check_write("/tmp/out/../../etc/shadow")
.is_err());
assert!(checker.check_write("/tmp/out/../../etc/shadow").is_err());
}
#[test]

View File

@@ -8,7 +8,7 @@ use wfe_core::WfeError;
use super::config::DenoConfig;
use super::module_loader::WfeModuleLoader;
use super::ops::workflow::{wfe_ops, StepMeta, StepOutputs, WorkflowInputs};
use super::ops::workflow::{StepMeta, StepOutputs, WorkflowInputs, wfe_ops};
use super::permissions::PermissionChecker;
/// Create a configured `JsRuntime` for executing a workflow step script.
@@ -61,8 +61,8 @@ pub fn would_auto_add_esm_sh(config: &DenoConfig) -> bool {
#[cfg(test)]
mod tests {
use super::*;
use super::super::config::DenoPermissions;
use super::*;
#[test]
fn create_runtime_succeeds() {

View File

@@ -1,7 +1,7 @@
use async_trait::async_trait;
use wfe_core::WfeError;
use wfe_core::models::ExecutionResult;
use wfe_core::traits::step::{StepBody, StepExecutionContext};
use wfe_core::WfeError;
use super::config::DenoConfig;
use super::ops::workflow::StepOutputs;
@@ -95,7 +95,9 @@ impl StepBody for DenoStep {
/// Check if the source code uses ES module syntax or top-level await.
fn needs_module_evaluation(source: &str) -> bool {
// Top-level await requires module evaluation. ES import/export also require it.
source.contains("import ") || source.contains("import(") || source.contains("export ")
source.contains("import ")
|| source.contains("import(")
|| source.contains("export ")
|| source.contains("await ")
}
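Note this hunk is more than formatting: the new version of `needs_module_evaluation` also treats top-level `await` as requiring module evaluation. The heuristic, standalone:

```rust
/// ES import/export syntax, dynamic import, or top-level await all
/// force module (rather than script) evaluation.
fn needs_module_evaluation(source: &str) -> bool {
    source.contains("import ")
        || source.contains("import(")
        || source.contains("export ")
        || source.contains("await ")
}

fn main() {
    assert!(needs_module_evaluation("import { x } from './m.js';"));
    assert!(needs_module_evaluation("const r = await fetch(url);"));
    assert!(!needs_module_evaluation("console.log(1 + 1);"));
}
```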
@@ -191,9 +193,8 @@ async fn run_module_inner(
"wfe:///inline-module.js".to_string()
};
let specifier = deno_core::ModuleSpecifier::parse(&module_url).map_err(|e| {
WfeError::StepExecution(format!("Invalid module URL '{module_url}': {e}"))
})?;
let specifier = deno_core::ModuleSpecifier::parse(&module_url)
.map_err(|e| WfeError::StepExecution(format!("Invalid module URL '{module_url}': {e}")))?;
let module_id = runtime
.load_main_es_module_from_code(&specifier, source.to_string())

View File

@@ -2,9 +2,9 @@ use std::collections::HashMap;
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use wfe_core::WfeError;
use wfe_core::models::ExecutionResult;
use wfe_core::traits::step::{StepBody, StepExecutionContext};
use wfe_core::WfeError;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ShellConfig {
@@ -31,8 +31,15 @@ impl ShellStep {
// Inject workflow data as UPPER_CASE env vars (top-level keys only).
// Skip keys that would override security-sensitive environment variables.
const BLOCKED_KEYS: &[&str] = &[
"PATH", "LD_PRELOAD", "LD_LIBRARY_PATH", "DYLD_LIBRARY_PATH",
"HOME", "SHELL", "USER", "LOGNAME", "TERM",
"PATH",
"LD_PRELOAD",
"LD_LIBRARY_PATH",
"DYLD_LIBRARY_PATH",
"HOME",
"SHELL",
"USER",
"LOGNAME",
"TERM",
];
if let Some(data_obj) = context.workflow.data.as_object() {
for (key, value) in data_obj {
@@ -78,19 +85,25 @@ impl ShellStep {
let workflow_id = context.workflow.id.clone();
let definition_id = context.workflow.workflow_definition_id.clone();
let step_id = context.step.id;
let step_name = context.step.name.clone().unwrap_or_else(|| "unknown".to_string());
let step_name = context
.step
.name
.clone()
.unwrap_or_else(|| "unknown".to_string());
let mut cmd = self.build_command(context);
let mut child = cmd.spawn().map_err(|e| {
WfeError::StepExecution(format!("Failed to spawn shell command: {e}"))
})?;
let mut child = cmd
.spawn()
.map_err(|e| WfeError::StepExecution(format!("Failed to spawn shell command: {e}")))?;
let stdout_pipe = child.stdout.take().ok_or_else(|| {
WfeError::StepExecution("failed to capture stdout pipe".to_string())
})?;
let stderr_pipe = child.stderr.take().ok_or_else(|| {
WfeError::StepExecution("failed to capture stderr pipe".to_string())
})?;
let stdout_pipe = child
.stdout
.take()
.ok_or_else(|| WfeError::StepExecution("failed to capture stdout pipe".to_string()))?;
let stderr_pipe = child
.stderr
.take()
.ok_or_else(|| WfeError::StepExecution("failed to capture stderr pipe".to_string()))?;
let mut stdout_lines = BufReader::new(stdout_pipe).lines();
let mut stderr_lines = BufReader::new(stderr_pipe).lines();
@@ -194,9 +207,9 @@ impl ShellStep {
}
}
} else {
cmd.output()
.await
.map_err(|e| WfeError::StepExecution(format!("Failed to spawn shell command: {e}")))?
cmd.output().await.map_err(|e| {
WfeError::StepExecution(format!("Failed to spawn shell command: {e}"))
})?
};
let stdout = String::from_utf8_lossy(&output.stdout).to_string();
@@ -209,7 +222,10 @@ impl ShellStep {
#[async_trait]
impl StepBody for ShellStep {
async fn run(&mut self, context: &StepExecutionContext<'_>) -> wfe_core::Result<ExecutionResult> {
async fn run(
&mut self,
context: &StepExecutionContext<'_>,
) -> wfe_core::Result<ExecutionResult> {
let (stdout, stderr, exit_code) = if context.log_sink.is_some() {
self.run_streaming(context).await?
} else {

Some files were not shown because too many files have changed in this diff.
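As context for the `BLOCKED_KEYS` hunk in the shell executor above: top-level workflow data keys are injected as UPPER_CASE env vars, skipping names whose override would be security-sensitive. A sketch of that filter (string pairs stand in for the JSON values, and `env_vars_from_data` is our name, not the crate's):

```rust
/// Env var names a workflow's data must never override.
const BLOCKED_KEYS: &[&str] = &[
    "PATH", "LD_PRELOAD", "LD_LIBRARY_PATH", "DYLD_LIBRARY_PATH",
    "HOME", "SHELL", "USER", "LOGNAME", "TERM",
];

/// Uppercase each top-level key and drop any that collide with the
/// blocklist, preserving input order.
fn env_vars_from_data(data: &[(&str, &str)]) -> Vec<(String, String)> {
    data.iter()
        .filter_map(|(k, v)| {
            let upper = k.to_uppercase();
            if BLOCKED_KEYS.contains(&upper.as_str()) {
                None
            } else {
                Some((upper, v.to_string()))
            }
        })
        .collect()
}

fn main() {
    let vars = env_vars_from_data(&[("branch", "main"), ("path", "/evil")]);
    assert_eq!(vars, vec![("BRANCH".to_string(), "main".to_string())]);
}
```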