docs: add README for workspace and all 7 crates
Root README covers architecture, Rust builder API quick start, YAML pipeline quick start, provider table, deno executor overview, feature flags, and testing instructions. Per-crate READMEs follow consistent structure: one-liner, what it does, quick start code example, API reference, configuration, testing, license. Engineering-confident tone throughout.
252
README.md
Normal file
@@ -0,0 +1,252 @@
# WFE

A persistent, embeddable workflow engine for Rust. Trait-based, pluggable, built for real infrastructure.

> Rust port of [workflow-core](https://github.com/danielgerlag/workflow-core), rebuilt from scratch with async/await, pluggable persistence, and a YAML frontend with shell and Deno executors.

---

## What is WFE?

WFE is a workflow engine you embed directly into your Rust application. Define workflows as code using a fluent builder API, or as YAML files with shell and JavaScript steps. Workflows persist across restarts, support event-driven pausing, parallel execution, saga compensation, and distributed locking.

Built for:

- **Persistent workflows** — steps survive process restarts. Pick up where you left off.
- **Embeddable CLIs** — drop it into a binary, no external orchestrator required.
- **Portable CI pipelines** — YAML workflows with shell and Deno steps, variable interpolation, structured outputs.

---

## Architecture

```
wfe/
├── wfe-core         Traits, models, builder, executor, primitives
├── wfe              Umbrella crate — WorkflowHost, WorkflowHostBuilder
├── wfe-yaml         YAML workflow loader, shell executor, Deno executor
├── wfe-sqlite       SQLite persistence + queue + lock provider
├── wfe-postgres     PostgreSQL persistence + queue + lock provider
├── wfe-valkey       Valkey (Redis) distributed lock + queue provider
└── wfe-opensearch   OpenSearch search index provider
```

`wfe-core` defines the traits. Provider crates implement them. `wfe` wires everything together through `WorkflowHost`. `wfe-yaml` adds a YAML frontend with built-in executors.

---

## Quick start — Rust builder API

Define steps by implementing `StepBody`, then chain them with the builder:

```rust
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use wfe::builder::WorkflowBuilder;
use wfe::models::*;
use wfe::traits::step::{StepBody, StepExecutionContext};

#[derive(Debug, Clone, Default, Serialize, Deserialize)]
struct MyData {
    message: String,
}

#[derive(Default)]
struct FetchData;

#[async_trait]
impl StepBody for FetchData {
    async fn run(&mut self, _ctx: &StepExecutionContext<'_>) -> wfe::Result<ExecutionResult> {
        println!("Fetching data...");
        Ok(ExecutionResult::next())
    }
}

#[derive(Default)]
struct Transform;

#[async_trait]
impl StepBody for Transform {
    async fn run(&mut self, _ctx: &StepExecutionContext<'_>) -> wfe::Result<ExecutionResult> {
        println!("Transforming...");
        Ok(ExecutionResult::next())
    }
}

#[derive(Default)]
struct Publish;

#[async_trait]
impl StepBody for Publish {
    async fn run(&mut self, _ctx: &StepExecutionContext<'_>) -> wfe::Result<ExecutionResult> {
        println!("Publishing.");
        Ok(ExecutionResult::next())
    }
}

let definition = WorkflowBuilder::<MyData>::new()
    .start_with::<FetchData>()
    .name("Fetch")
    .then::<Transform>()
    .name("Transform")
    .on_error(ErrorBehavior::Retry {
        interval: std::time::Duration::from_secs(5),
        max_retries: 3,
    })
    .then::<Publish>()
    .name("Publish")
    .end_workflow()
    .build("etl-pipeline", 1);
```

The builder supports `.then()`, `.parallel()`, `.if_do()`, `.while_do()`, `.for_each()`, `.saga()`, `.compensate_with()`, `.wait_for()`, `.delay()`, and `.then_fn()` for inline closures.

See [`wfe/examples/pizza.rs`](wfe/examples/pizza.rs) for a full example using every feature.

---

## Quick start — YAML

```yaml
workflow:
  id: deploy-pipeline
  version: 1
  steps:
    - name: Lint
      config:
        run: cargo clippy --all-targets -- -D warnings
        timeout: "120s"

    - name: Test
      config:
        run: cargo test --workspace
        timeout: "300s"

    - name: Build
      config:
        run: cargo build --release
        timeout: "600s"

    - name: Notify
      type: deno
      config:
        script: |
          const result = await fetch("https://hooks.slack.com/...", {
            method: "POST",
            body: JSON.stringify({ text: "Deploy complete" }),
          });
          Wfe.setOutput("status", result.status.toString());
        permissions:
          net: ["hooks.slack.com"]
        timeout: "10s"
```

Load and run:

```rust
use std::collections::HashMap;
use std::path::Path;

let config = HashMap::new();
let compiled = wfe_yaml::load_workflow(Path::new("deploy.yaml"), &config)?;
```

Variables use `${{ var.name }}` interpolation syntax. Outputs from earlier steps are available as workflow data in later steps.
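As a rough mental model (an illustrative sketch, not the crate's actual implementation), the interpolation pass substitutes `${{ var.name }}` occurrences using the config map supplied at load time:

```rust
use std::collections::HashMap;

// Illustrative sketch of `${{ var.name }}` substitution -- not the
// actual wfe-yaml implementation, which also validates the schema.
fn interpolate(input: &str, vars: &HashMap<String, String>) -> String {
    let mut out = input.to_string();
    for (name, value) in vars {
        // `${{{{` / `}}}}` escape to literal `${{` / `}}` in format!.
        out = out.replace(&format!("${{{{ var.{} }}}}", name), value);
    }
    out
}

fn main() {
    let mut vars = HashMap::new();
    vars.insert("env".to_string(), "production".to_string());
    let rendered = interpolate("deploying to ${{ var.env }}", &vars);
    assert_eq!(rendered, "deploying to production");
    println!("{rendered}");
}
```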

---

## Providers

| Concern | Provider | Crate | Connection |
|---------|----------|-------|------------|
| Persistence | SQLite | `wfe-sqlite` | File path or `:memory:` |
| Persistence | PostgreSQL | `wfe-postgres` | `postgres://user:pass@host/db` |
| Distributed lock | Valkey / Redis | `wfe-valkey` | `redis://host:6379` |
| Queue | Valkey / Redis | `wfe-valkey` | Same connection |
| Search index | OpenSearch | `wfe-opensearch` | `http://host:9200` |

All providers implement traits from `wfe-core`. SQLite and PostgreSQL crates include their own lock and queue implementations for single-node deployments. Use Valkey when you need distributed coordination across multiple hosts.

In-memory implementations of every trait ship with `wfe-core` (behind the `test-support` feature) for testing and prototyping.

---

## The Deno executor

The `deno` step type embeds a V8 runtime for running JavaScript or TypeScript inside your workflow. Scripts run in a sandboxed environment with fine-grained permissions.

```yaml
- name: Process webhook
  type: deno
  config:
    script: |
      const data = Wfe.getData();
      const response = await fetch(`https://api.example.com/v1/${data.id}`);
      const result = await response.json();
      Wfe.setOutput("processed", JSON.stringify(result));
    permissions:
      net: ["api.example.com"]
      read: []
      write: []
      env: []
      run: false
    timeout: "30s"
```

| Permission | Type | Default | What it controls |
|------------|------|---------|-----------------|
| `net` | `string[]` | `[]` | Allowed network hosts |
| `read` | `string[]` | `[]` | Allowed filesystem read paths |
| `write` | `string[]` | `[]` | Allowed filesystem write paths |
| `env` | `string[]` | `[]` | Allowed environment variable names |
| `run` | `bool` | `false` | Whether subprocess spawning is allowed |
| `dynamic_import` | `bool` | `false` | Whether dynamic `import()` is allowed |

Everything is denied by default. You allowlist what each step needs. The V8 isolate is terminated hard on timeout — no infinite loops surviving on your watch.

Enable with the `deno` feature flag on `wfe-yaml`.

---

## Feature flags

| Crate | Flag | What it enables |
|-------|------|-----------------|
| `wfe` | `otel` | OpenTelemetry tracing (spans for every step execution) |
| `wfe-core` | `otel` | OTel span attributes on the executor |
| `wfe-core` | `test-support` | In-memory persistence, lock, and queue providers |
| `wfe-yaml` | `deno` | Deno JavaScript/TypeScript executor |

---

## Testing

Unit tests run without any external dependencies:

```sh
cargo test --workspace
```

Integration tests for PostgreSQL, Valkey, and OpenSearch need their backing services. A Docker Compose file is included:

```sh
docker compose up -d
cargo test --workspace
docker compose down
```

The compose file starts:

- PostgreSQL 17 on port `5433`
- Valkey 8 on port `6379`
- OpenSearch 2 on port `9200`

SQLite tests use temporary files and run everywhere.

---

## License

[MIT](LICENSE)

Built by [Sunbeam Studios](https://sunbeam.pt). We run this in production. It works.
85
wfe-core/README.md
Normal file
@@ -0,0 +1,85 @@
# wfe-core

Core traits, models, builder, and executor for the WFE workflow engine.

## What it does

`wfe-core` defines the foundational abstractions that every other WFE crate builds on. It provides the `StepBody` trait for implementing workflow steps, a fluent `WorkflowBuilder` for composing workflow definitions, a `WorkflowExecutor` that drives step execution with locking and persistence, and a library of built-in control-flow primitives (if/while/foreach/parallel/saga).

## Quick start

Define a step by implementing `StepBody`, then wire steps together with the builder:

```rust
use async_trait::async_trait;
use wfe_core::builder::WorkflowBuilder;
use wfe_core::models::ExecutionResult;
use wfe_core::traits::step::{StepBody, StepExecutionContext};

#[derive(Default)]
struct Greet;

#[async_trait]
impl StepBody for Greet {
    async fn run(&mut self, _ctx: &StepExecutionContext<'_>) -> wfe_core::Result<ExecutionResult> {
        println!("hello from a workflow step");
        Ok(ExecutionResult::next())
    }
}

let definition = WorkflowBuilder::<serde_json::Value>::new()
    .start_with::<Greet>()
    .name("greet")
    .end_workflow()
    .build("hello-workflow", 1);
```

## API

| Type / Trait | Description |
|---|---|
| `StepBody` | The core unit of work. Implement `run()` to define step behavior. |
| `StepExecutionContext` | Runtime context passed to each step (workflow data, persistence data, cancellation token). |
| `WorkflowData` | Marker trait for data flowing between steps. Auto-implemented for anything `Serialize + DeserializeOwned + Send + Sync + Clone`. |
| `WorkflowBuilder<D>` | Fluent builder for composing `WorkflowDefinition`s. Supports `start_with`, `then`, `if_do`, `while_do`, `for_each`, `parallel`, `saga`. |
| `StepBuilder<D>` | Per-step builder returned by `WorkflowBuilder`. Configures name, error behavior, compensation. |
| `WorkflowExecutor` | Acquires a lock, loads the instance, runs all active pointers, processes results, persists. |
| `StepRegistry` | Maps step type names to factory functions. |
| `PersistenceProvider` | Composite trait: `WorkflowRepository + EventRepository + SubscriptionRepository + ScheduledCommandRepository`. |
| `DistributedLockProvider` | Trait for acquiring/releasing workflow-level locks. |
| `QueueProvider` | Trait for enqueuing/dequeuing workflow and event work items. |
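The auto-implemented `WorkflowData` marker trait follows the standard blanket-impl pattern. A dependency-free sketch of that pattern (the real trait also requires `Serialize + DeserializeOwned`, omitted here so the sketch compiles on its own):

```rust
// Sketch of the marker-trait pattern used for `WorkflowData`.
// Simplified: the serde bounds are left out on purpose.
trait WorkflowData: Clone + Send + Sync + 'static {}

// Blanket impl: any type meeting the bounds is workflow data automatically.
impl<T: Clone + Send + Sync + 'static> WorkflowData for T {}

fn is_workflow_data<D: WorkflowData>(_d: &D) -> bool {
    true
}

fn main() {
    // Plain owned types qualify with no manual impl.
    assert!(is_workflow_data(&String::from("state")));
    assert!(is_workflow_data(&vec![1_u32, 2, 3]));
    println!("ok");
}
```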

### Built-in primitives

| Primitive | Purpose |
|---|---|
| `IfStep` | Conditional branching |
| `WhileStep` | Loop while condition holds |
| `ForEachStep` | Iterate over a collection |
| `SequenceStep` | Parallel branch container |
| `DecideStep` | Multi-way branching |
| `DelayStep` | Pause execution for a duration |
| `ScheduleStep` | Resume at a specific time |
| `WaitForStep` | Suspend until an external event |
| `SagaContainerStep` | Transaction-like compensation |
| `RecurStep` | Recurring/periodic execution |
| `EndStep` | Explicit workflow termination |

## Features

| Feature | Description |
|---|---|
| `test-support` | Exposes `test_support` module with in-memory persistence and helpers for testing workflows. |
| `otel` | Enables OpenTelemetry integration via `tracing-opentelemetry`. |

## Testing

```sh
cargo test -p wfe-core
```

No external dependencies required.

## License

MIT
103
wfe-opensearch/README.md
Normal file
@@ -0,0 +1,103 @@
# wfe-opensearch

OpenSearch search index provider for the WFE workflow engine.

## What it does

Implements the `SearchIndex` trait backed by OpenSearch. Indexes workflow instances as documents and supports full-text search with bool queries, term filters (status, reference), date range filters, and pagination. The index mapping is created automatically on `start()` if it does not already exist.

## Quick start

```rust
use wfe_core::traits::search::{SearchFilter, SearchIndex};
use wfe_opensearch::OpenSearchIndex;

let index = OpenSearchIndex::new(
    "http://localhost:9200",
    "wfe_workflows",
)?;

// Create the index mapping (idempotent)
index.start().await?;

// Index a workflow
index.index_workflow(&workflow_instance).await?;

// Search with filters
let page = index.search(
    "deploy",   // free-text query
    0,          // skip
    20,         // take
    &[SearchFilter::Status(WorkflowStatus::Complete)],
).await?;
```

## API

| Type | Trait |
|------|-------|
| `OpenSearchIndex` | `SearchIndex` |

Key methods:

| Method | Description |
|--------|-------------|
| `new(url, index_name)` | Create a provider pointing at an OpenSearch instance |
| `start()` | Create the index with mappings if it does not exist |
| `index_workflow(instance)` | Index or update a workflow document |
| `search(terms, skip, take, filters)` | Bool query with filters, returns `Page<WorkflowSearchResult>` |
| `client()` | Access the underlying `OpenSearch` client |
| `index_name()` | Get the configured index name |

### Search filters

| Filter | OpenSearch mapping |
|--------|--------------------|
| `SearchFilter::Status(status)` | `term` query on `status` (keyword) |
| `SearchFilter::Reference(ref)` | `term` query on `reference` (keyword) |
| `SearchFilter::DateRange { field, before, after }` | `range` query on date fields |

Free-text terms run a `multi_match` across `description`, `reference`, and `workflow_definition_id`.
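To make that query shape concrete, here is an illustrative body for the `search("deploy", 0, 20, …)` call above. This is an assumption pieced together from the filter and field descriptions, not necessarily the exact JSON the crate emits:

```json
{
  "query": {
    "bool": {
      "must": [
        {
          "multi_match": {
            "query": "deploy",
            "fields": ["description", "reference", "workflow_definition_id"]
          }
        }
      ],
      "filter": [
        { "term": { "status": "Complete" } }
      ]
    }
  },
  "from": 0,
  "size": 20
}
```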

## Index mapping

The index uses the following field types:

| Field | Type |
|-------|------|
| `id` | keyword |
| `workflow_definition_id` | keyword |
| `version` | integer |
| `status` | keyword |
| `reference` | keyword |
| `description` | text |
| `data` | object (disabled -- stored but not indexed) |
| `create_time` | date |
| `complete_time` | date |

## Configuration

The constructor takes two arguments:

| Parameter | Example | Description |
|-----------|---------|-------------|
| `url` | `http://localhost:9200` | OpenSearch server URL |
| `index_name` | `wfe_workflows` | Index name for workflow documents |

The security plugin is not required. For local development, run OpenSearch with `DISABLE_SECURITY_PLUGIN=true`.

## Testing

Requires a running OpenSearch instance. Use the project docker-compose:

```sh
docker compose up -d opensearch
cargo test -p wfe-opensearch
```

Default test connection: `http://localhost:9200`

## License

MIT
76
wfe-postgres/README.md
Normal file
@@ -0,0 +1,76 @@
# wfe-postgres

PostgreSQL persistence provider for the WFE workflow engine.

## What it does

Implements the full `PersistenceProvider` trait backed by PostgreSQL via sqlx. All workflow data, events, subscriptions, and scheduled commands live in a dedicated `wfc` schema. Uses JSONB for structured data (execution pointer children, scope, extension attributes) and TIMESTAMPTZ for timestamps. Schema and indexes are created automatically via `ensure_store_exists`.

## Quick start

```rust
use wfe_postgres::PostgresPersistenceProvider;

let provider = PostgresPersistenceProvider::new(
    "postgres://wfe:wfe@localhost:5433/wfe_test"
).await?;

// Create schema and tables (idempotent)
provider.ensure_store_exists().await?;
```

Wire it into the WFE host:

```rust
let host = WorkflowHost::new(provider);
```

## API

| Type | Trait |
|------|-------|
| `PostgresPersistenceProvider` | `PersistenceProvider`, `WorkflowRepository`, `EventRepository`, `SubscriptionRepository`, `ScheduledCommandRepository` |

Additional methods:

- `truncate_all()` -- truncates all tables with CASCADE, useful for test cleanup

## Configuration

Connection string follows the standard PostgreSQL URI format:

```
postgres://user:password@host:port/database
```

The pool is configured with up to 10 connections. All tables are created under the `wfc` schema.

## Schema

Tables created in `wfc`:

| Table | Purpose |
|-------|---------|
| `wfc.workflows` | Workflow instances. `data` is JSONB, timestamps are TIMESTAMPTZ. |
| `wfc.execution_pointers` | Step state. `children`, `scope`, `extension_attributes` are JSONB. References `wfc.workflows(id)`. |
| `wfc.events` | Published events. `event_data` is JSONB. |
| `wfc.event_subscriptions` | Active subscriptions with CAS-style external token locking. |
| `wfc.scheduled_commands` | Deferred commands. Unique on `(command_name, data)` with upsert semantics. |
| `wfc.execution_errors` | Error log with auto-incrementing serial primary key. |

Indexes are created on `next_execution`, `status`, `(event_name, event_key)`, `is_processed`, `event_time`, `workflow_id`, and `execute_time`.

## Testing

Requires a running PostgreSQL instance. Use the project docker-compose:

```sh
docker compose up -d postgres
cargo test -p wfe-postgres
```

Default test connection string: `postgres://wfe:wfe@localhost:5433/wfe_test`

## License

MIT
67
wfe-sqlite/README.md
Normal file
@@ -0,0 +1,67 @@
# wfe-sqlite

SQLite persistence provider for the WFE workflow engine.

## What it does

Implements the full `PersistenceProvider` trait (workflows, events, subscriptions, scheduled commands, execution errors) backed by SQLite via sqlx. Supports both in-memory and file-based databases. Tables and indexes are created automatically on startup -- no migrations to run.

## Quick start

```rust
use wfe_sqlite::SqlitePersistenceProvider;

// In-memory (good for tests and single-process deployments)
let provider = SqlitePersistenceProvider::new(":memory:").await?;

// File-based (persistent across restarts)
let provider = SqlitePersistenceProvider::new("sqlite:///tmp/wfe.db").await?;
```

Wire it into the WFE host as your persistence layer:

```rust
let host = WorkflowHost::new(provider);
```

## API

| Type | Trait |
|------|-------|
| `SqlitePersistenceProvider` | `PersistenceProvider`, `WorkflowRepository`, `EventRepository`, `SubscriptionRepository`, `ScheduledCommandRepository` |

Key methods come from the traits -- `create_new_workflow`, `persist_workflow`, `get_runnable_instances`, `create_event`, `schedule_command`, and so on. See `wfe-core` for the full trait definitions.

## Configuration

The constructor takes a standard SQLite connection string:

| Value | Behavior |
|-------|----------|
| `":memory:"` | In-memory database, single connection, lost on drop |
| `"sqlite:///path/to/db"` | File-backed, WAL journal mode, up to 4 connections |

WAL mode and foreign keys are enabled automatically. For in-memory mode, the pool is capped at 1 connection to avoid separate database instances.

## Schema

Six tables are created in the default schema:

- `workflows` -- workflow instance state and metadata
- `execution_pointers` -- step execution state, linked to workflows via foreign key
- `events` -- published events pending processing
- `event_subscriptions` -- active event subscriptions with CAS-style token locking
- `scheduled_commands` -- deferred commands with dedup on `(command_name, data)`
- `execution_errors` -- error log for failed step executions

## Testing

No external dependencies required. Tests run against `:memory:`:

```sh
cargo test -p wfe-sqlite
```

## License

MIT
76
wfe-valkey/README.md
Normal file
@@ -0,0 +1,76 @@
# wfe-valkey

Valkey (Redis-compatible) provider for distributed locking, queues, and lifecycle events in WFE.

## What it does

Provides three provider implementations backed by Valkey (or any Redis-compatible server) using the `redis` crate with multiplexed async connections. Handles distributed lock coordination across multiple WFE host instances, work queue distribution for workflows/events/indexing, and pub/sub lifecycle event broadcasting.

## Quick start

```rust
use wfe_valkey::{ValkeyLockProvider, ValkeyQueueProvider, ValkeyLifecyclePublisher};

let redis_url = "redis://127.0.0.1:6379";
let prefix = "wfe"; // key prefix for namespacing

let locks = ValkeyLockProvider::new(redis_url, prefix).await?;
let queues = ValkeyQueueProvider::new(redis_url, prefix).await?;
let lifecycle = ValkeyLifecyclePublisher::new(redis_url, prefix).await?;
```

Custom lock duration (default is 30 seconds):

```rust
use std::time::Duration;

let locks = ValkeyLockProvider::new(redis_url, prefix).await?
    .with_lock_duration(Duration::from_secs(10));
```

## API

| Type | Trait | Purpose |
|------|-------|---------|
| `ValkeyLockProvider` | `DistributedLockProvider` | Distributed mutex via `SET NX EX`. Release uses a Lua script to ensure only the holder can unlock. |
| `ValkeyQueueProvider` | `QueueProvider` | FIFO work queues via `LPUSH`/`RPOP`. Separate queues for `Workflow`, `Event`, and `Index` work types. |
| `ValkeyLifecyclePublisher` | `LifecyclePublisher` | Pub/sub lifecycle events. Publishes to both an instance-specific channel and a global `all` channel. |

### Key layout

All keys are prefixed with the configured prefix (e.g., `wfe`):

| Pattern | Usage |
|---------|-------|
| `{prefix}:lock:{resource}` | Distributed lock for a resource |
| `{prefix}:queue:workflow` | Workflow processing queue |
| `{prefix}:queue:event` | Event processing queue |
| `{prefix}:queue:index` | Index update queue |
| `{prefix}:lifecycle:{workflow_id}` | Lifecycle events for a specific workflow |
| `{prefix}:lifecycle:all` | All lifecycle events |
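A dependency-free sketch of how that key layout composes (illustrative only, not the crate's internal code):

```rust
// Keys are built by joining the configured prefix, a namespace, and a
// resource identifier with colons, matching the patterns above.
fn lock_key(prefix: &str, resource: &str) -> String {
    format!("{prefix}:lock:{resource}")
}

fn queue_key(prefix: &str, work_type: &str) -> String {
    format!("{prefix}:queue:{work_type}")
}

fn main() {
    // Two deployments sharing one Valkey instance stay isolated by prefix.
    assert_eq!(lock_key("wfe", "workflow-42"), "wfe:lock:workflow-42");
    assert_eq!(queue_key("staging", "event"), "staging:queue:event");
    println!("ok");
}
```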

## Configuration

Connection string is a standard Redis URI:

```
redis://127.0.0.1:6379
redis://:password@host:6379/0
```

Each provider creates its own multiplexed connection. The prefix parameter namespaces all keys, so multiple WFE deployments can share a single Valkey instance.

## Testing

Requires a running Valkey instance. Use the project docker-compose:

```sh
docker compose up -d valkey
cargo test -p wfe-valkey
```

Default test connection: `redis://127.0.0.1:6379`

## License

MIT
118
wfe-yaml/README.md
Normal file
@@ -0,0 +1,118 @@
# wfe-yaml

YAML workflow definitions for WFE.

## What it does

`wfe-yaml` lets you define workflows in YAML instead of Rust code. It parses YAML files, validates the schema, interpolates variables, compiles the result into `WorkflowDefinition` + step factories, and hands them to the WFE host. Ships with a shell executor and an optional sandboxed Deno executor for running JavaScript/TypeScript steps.

## Quick start

Define a workflow in YAML:

```yaml
workflow:
  id: deploy
  version: 1
  steps:
    - name: build
      type: shell
      config:
        run: cargo build --release
        timeout: 5m

    - name: test
      type: shell
      config:
        run: cargo test
        env:
          RUST_LOG: info

    - name: notify
      type: deno
      config:
        script: |
          const result = Wfe.getData("build.stdout");
          Wfe.setOutput("status", "deployed");
        permissions:
          net: ["https://hooks.slack.com"]
```

Load and register it:

```rust
use std::collections::HashMap;
use std::path::Path;

let config = HashMap::new();
let compiled = wfe_yaml::load_workflow(Path::new("deploy.yaml"), &config)?;

// Register with the host.
host.register_workflow_definition(compiled.definition).await;
for (key, factory) in compiled.step_factories {
    host.register_step_factory(&key, factory).await;
}
```

### Variable interpolation

Use `${{ var.name }}` syntax in YAML. Variables are resolved from the config map passed to `load_workflow`:

```rust
let mut config = HashMap::new();
config.insert("env".into(), serde_json::json!("production"));

let compiled = wfe_yaml::load_workflow_from_str(&yaml_str, &config)?;
```

## API

| Type | Description |
|---|---|
| `load_workflow()` | Load from a file path with variable interpolation. |
| `load_workflow_from_str()` | Load from a YAML string with variable interpolation. |
| `CompiledWorkflow` | Contains a `WorkflowDefinition` and a `Vec<(String, StepFactory)>` ready for host registration. |
| `YamlWorkflow` | Top-level YAML schema (`workflow:` key). |
| `WorkflowSpec` | Workflow definition: id, version, description, error behavior, steps. |
| `YamlStep` | Step definition: name, type, config, inputs/outputs, parallel children, error handling hooks (`on_success`, `on_failure`, `ensure`). |
| `ShellStep` | Executes shell commands via `tokio::process::Command`. Captures stdout/stderr, parses `##wfe[output name=value]` directives. |
| `DenoStep` | Executes JS/TS in an embedded Deno runtime with configurable permissions. |

### Shell step features

- Configurable shell (defaults to `sh`)
- Workflow data injected as `UPPER_CASE` environment variables
- Custom env vars via `config.env`
- Working directory override
- Timeout support
- Structured output via `##wfe[output name=value]` lines in stdout
|
||||||
|
|
||||||
|
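The output directive is easiest to see in a concrete script. A minimal sketch of a step body: the `ENV` variable and artifact name are illustrative stand-ins for the workflow data the executor would inject as `UPPER_CASE` environment variables.

```shell
#!/bin/sh
# Workflow data arrives as UPPER_CASE env vars; default ENV for local runs.
ENV="${ENV:-staging}"
ARTIFACT="app-$ENV.tar.gz"
echo "packaging for $ENV"
# The executor scans stdout for these lines and records them as step outputs.
echo "##wfe[output artifact=$ARTIFACT]"
```

Run under the shell step, the last line would surface an `artifact` output for downstream steps to consume.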
### Deno step features

- Inline `script` or external `file` source
- Automatic module evaluation for `import`/`export`/`await` syntax
- Granular permissions: `net`, `read`, `write`, `env`, `run`, `dynamic_import`
- `Wfe.getData()` / `Wfe.setOutput()` host bindings
- V8-level timeout enforcement (catches infinite loops)

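A sketch of what a Deno step might look like in YAML. The `Wfe.getData()` / `Wfe.setOutput()` bindings are real; the `deno` type name and the placement of `permissions` and `script` under `config` are assumptions for illustration.

```yaml
- name: summarize
  type: deno                 # assumed step type name
  config:
    permissions:
      net: false             # granular permission flag
    script: |
      // Host bindings exposed to the embedded runtime.
      const data = Wfe.getData();
      Wfe.setOutput("count", Object.keys(data).length);
```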
## Features

| Feature | Description |
|---|---|
| `deno` | Enables the Deno JavaScript/TypeScript executor. Pulls in `deno_core`, `deno_error`, `url`, `reqwest`. Off by default. |

## Testing

```sh
# Core tests (shell executor)
cargo test -p wfe-yaml

# With Deno executor
cargo test -p wfe-yaml --features deno
```

No external services required. Deno tests need the `deno` feature flag.

## License

MIT
88
wfe/README.md
Normal file
@@ -0,0 +1,88 @@
# wfe

The umbrella crate for the WFE workflow engine.

## What it does

`wfe` provides `WorkflowHost`, the main orchestrator that ties persistence, locking, queuing, and step execution into a single runtime. It re-exports everything from `wfe-core` so downstream code only needs one dependency. Register workflow definitions, start instances, publish events, suspend/resume/terminate -- all through `WorkflowHost`.

## Quick start

```rust
use std::sync::Arc;
use wfe::{WorkflowHostBuilder, WorkflowHost};

// Build a host with your providers.
let host = WorkflowHostBuilder::new()
    .use_persistence(persistence)
    .use_lock_provider(lock_provider)
    .use_queue_provider(queue_provider)
    .build()
    .unwrap();

// Register a workflow definition.
host.register_workflow::<serde_json::Value>(
    &|builder| {
        builder
            .start_with::<MyStep>()
            .name("do the thing")
            .end_workflow()
    },
    "my-workflow",
    1,
).await;

// Register custom steps.
host.register_step::<MyStep>().await;

// Start the background consumers.
host.start().await.unwrap();

// Launch a workflow instance.
let id = host.start_workflow("my-workflow", 1, serde_json::json!({})).await.unwrap();

// Publish an event to resume a waiting workflow.
host.publish_event("approval", "order-123", serde_json::json!({"approved": true})).await.unwrap();
```

### Synchronous runner (testing)

`run_workflow_sync` starts a workflow and polls until it completes or times out. Useful in integration tests.

```rust
use std::time::Duration;
use wfe::run_workflow_sync;

let result = run_workflow_sync(&host, "my-workflow", 1, data, Duration::from_secs(5)).await?;
assert_eq!(result.status, wfe::models::WorkflowStatus::Complete);
```

## API

| Type | Description |
|---|---|
| `WorkflowHost` | Main orchestrator. Registers workflows/steps, starts instances, publishes events, manages lifecycle. |
| `WorkflowHostBuilder` | Fluent builder for `WorkflowHost`. Requires persistence, lock provider, and queue provider. Optionally accepts lifecycle publisher and search index. |
| `InMemoryWorkflowRegistry` | In-memory `WorkflowRegistry` implementation. Stores definitions keyed by `(id, version)`. |
| `run_workflow_sync` | Async helper that starts a workflow and polls until completion or timeout. |
| `purge_workflows` | Cleanup utility for removing completed workflow data from persistence. |

Everything from `wfe-core` is re-exported: `WorkflowBuilder`, `StepBody`, `ExecutionResult`, models, traits, and primitives.

## Features

| Feature | Description |
|---|---|
| `otel` | Enables OpenTelemetry tracing. Pulls in `opentelemetry`, `opentelemetry_sdk`, `opentelemetry-otlp`, and `tracing-opentelemetry`. Also enables `wfe-core/otel`. |

## Testing

```sh
cargo test -p wfe
```

Tests use `wfe-sqlite` and `wfe-core/test-support` for in-memory persistence. No external services required.

## License

MIT