This RFC proposes a session-based lifecycle management system for peer-to-peer collaborative sessions. It introduces explicit session identities, session-scoped network isolation via ALPN, hybrid state restoration combining database persistence with delta synchronization, temporary entity ownership locks to prevent conflicts, and persistent session tracking with automatic rejoin capabilities.
## Motivation
The current architecture supports basic CRDT synchronization across all peers on a global gossip topic, but lacks:
1. **Session Isolation**: All peers share the same global network topic, preventing multiple independent sessions from coexisting
2. **State Management**: No concept of "joining a specific session" vs "creating a new one"
3. **Crash Recovery**: Nodes don't remember which session they were in before shutdown
4. **Entity Conflict Prevention**: Multiple nodes can simultaneously modify the same entity, requiring complex merge logic
5. **Hybrid Sync**: Always sends full state on join, even when rejoining a known session
### Requirements
From user specifications:
1. **Explicit Session IDs** - UUIDs surfaced to users as short, human-readable codes (on the order of six alphanumeric characters, e.g., `abc123`) so sessions are easy to share and enter
2. **ALPN-based Network Isolation** - Each session gets its own ALPN protocol for gossip isolation (and security)
3. **Hybrid Initial Sync** - Restore from local DB first, then request deltas from peers
4. **Temporary Lock-based Ownership** - Only one node can modify an entity at a time, preventing conflicts. Locking is initiator-driven: the lock is requested the moment the user selects an entity (e.g., clicking a cube to move it), rather than waiting for upstream approval
5. **Persistent Sessions** - Sessions persist to DB and auto-rejoin on restart
## High-Level Architecture
The session lifecycle consists of four major phases: startup, network join, active collaboration, and shutdown. Each phase builds on the previous one to provide seamless session management and state restoration.
```mermaid
flowchart TD
A[Application Startup] --> B[Load Session from DB]
B --> C[Restore Entities from DB]
C --> D[Connect to Gossip with Session ALPN]
D --> E[Network Join Protocol]
E --> F[Send JoinRequest + VectorClock]
F --> G{First Join or Rejoin?}
G -->|Fresh Join| H[Receive FullState]
G -->|Rejoin| I[Receive Deltas]
H --> J[Apply Updates to Local State]
I --> J
J --> K[Active Session]
K --> L[User Interaction]
L --> M[Acquire Lock]
M --> N[Modify Entity]
N --> O[Broadcast Delta]
O --> P[Release Lock]
P --> L
K --> Q{Session Ending?}
Q -->|Yes| R[Shutdown]
R --> S[Save Session State to DB]
S --> T[Mark Clean Shutdown]
T --> U[Exit]
```
## Session Data Model
The session data model defines how collaborative sessions are identified, tracked, and persisted. It consists of three main components: unique session identifiers, session metadata for tracking state, and database schemas for persistence.
Each collaborative session needs a unique identifier that users can easily share and enter. The design prioritizes human usability while maintaining technical uniqueness and security.
When a user creates a session, they see a code like `xyz-789-mno` that they can share with collaborators. When joining, they simply type this code into a dialog.
**Technical Implementation**:
Behind the scenes, each session code maps to a UUID (Universally Unique Identifier) that provides true global uniqueness. The `SessionId` type handles bidirectional conversion:
- User codes → UUIDs via deterministic hashing
- UUIDs → display codes via formatting
**Network Isolation**:
Each session ID also derives a unique ALPN (Application-Layer Protocol Negotiation) identifier using BLAKE3 hashing. This provides cryptographic isolation at the transport layer - peers in different sessions literally cannot discover or communicate with each other, even if they're on the same local network.
**Key Operations**:
- `SessionId::new()`: Generates a random UUID v4 for new sessions
- `SessionId::from_code(code)`: Parses human-readable codes (format: `xxx-yyy-zzz`) into UUIDs
- `to_code()`: Converts UUID to a 9-character alphanumeric code for display
- `to_alpn()`: Derives a 32-byte ALPN identifier for network isolation
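The operations above can be sketched as follows. This is a minimal, self-contained illustration: the real implementation would back `SessionId` with a UUID v4 and use BLAKE3 for the deterministic code hash, whereas this sketch uses a plain 16-byte array and a stand-in FNV-1a hash.

```rust
const ALPHABET: &[u8] = b"abcdefghijklmnopqrstuvwxyz0123456789";

/// 16-byte session identifier (stand-in for a UUID).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct SessionId([u8; 16]);

impl SessionId {
    /// Derive a SessionId deterministically from a human-readable code
    /// such as "xyz-789-mno". Normalization makes codes case- and
    /// dash-insensitive; FNV-1a here stands in for the real hash.
    pub fn from_code(code: &str) -> SessionId {
        let normalized: String = code
            .chars()
            .filter(|c| c.is_ascii_alphanumeric())
            .map(|c| c.to_ascii_lowercase())
            .collect();
        let mut bytes = [0u8; 16];
        let mut h: u64 = 0xcbf29ce484222325;
        for (i, b) in normalized.bytes().enumerate() {
            h = (h ^ b as u64).wrapping_mul(0x100000001b3);
            bytes[i % 16] ^= (h >> (8 * (i % 8))) as u8;
        }
        SessionId(bytes)
    }

    /// Format as a 9-character code grouped xxx-yyy-zzz for display.
    pub fn to_code(&self) -> String {
        let mut out = String::new();
        for (i, b) in self.0.iter().take(9).enumerate() {
            if i > 0 && i % 3 == 0 {
                out.push('-');
            }
            out.push(ALPHABET[*b as usize % ALPHABET.len()] as char);
        }
        out
    }
}
```

Note that `to_code` is a display formatting of the ID, not an inverse of `from_code`; the code-to-UUID direction is the deterministic one.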
Beyond the unique identifier, each session needs metadata to track its lifecycle and state. The `Session` struct captures when the session was created, when it was last active, how many entities it contains, and its current state (created, joining, active, disconnected, or left).
This metadata serves several purposes:
- **Crash recovery**: When the app restarts, we can detect incomplete sessions and decide whether to rejoin
- **UI display**: Show users their recent sessions with entity counts and last access times
- **Network isolation**: Optional session secrets provide a basic authentication layer
- **State machine**: The `SessionState` enum tracks where we are in the session lifecycle
The `CurrentSession` resource represents the active session within the Bevy ECS world. It wraps the session metadata together with the vector clock captured at join time; this clock snapshot lets the hybrid sync protocol determine which deltas are needed when rejoining.
To support persistent sessions, we need to extend the existing database schema with session-aware tables and indexes. The schema changes fall into three categories: session tracking, session membership history, and session-scoping of existing tables.
**Session Tracking**: The `sessions` table stores all session metadata including the session UUID, optional human-readable name, creation and last-active timestamps, entity count cache, current state, and optional encrypted secret. The `idx_sessions_last_active` index enables fast queries for "recent sessions" in the UI.
**Membership History**: The `session_membership` table tracks which nodes have participated in which sessions and when. This provides an audit trail and helps detect cases where a node attempts to rejoin a session it was previously kicked from (future enhancement).
**Session-Scoped Data**: Existing tables (`entities`, `vector_clock`, and `operation_log`) are extended with `session_id` foreign keys. This ensures that:
- Entities are scoped to sessions (prevents accidental cross-session entity leakage)
- Vector clocks are per-session (each session has independent causality tracking)
- Operation logs are per-session (enables efficient delta calculation for rejoins)
The composite indexes ensure that common queries like "get all entities in session X" or "get vector clock for session Y, node Z" remain fast even with thousands of sessions in the database.
```sql
-- Sessions table
CREATE TABLE sessions (
id BLOB PRIMARY KEY, -- Session UUID (16 bytes)
name TEXT, -- Optional human-readable name
created_at INTEGER NOT NULL, -- Unix timestamp
last_active INTEGER NOT NULL, -- Unix timestamp
entity_count INTEGER NOT NULL DEFAULT 0,
state TEXT NOT NULL, -- 'created' | 'joining' | 'active' | 'disconnected' | 'left'
secret BLOB -- Optional hashed session secret
);
-- Index for "recent sessions" queries in the UI
CREATE INDEX idx_sessions_last_active ON sessions(last_active DESC);
-- Session membership (which node was in which session)
CREATE TABLE session_membership (
session_id BLOB NOT NULL,
node_id TEXT NOT NULL,
joined_at INTEGER NOT NULL,
left_at INTEGER, -- NULL if still active
PRIMARY KEY (session_id, node_id),
FOREIGN KEY (session_id) REFERENCES sessions(id) ON DELETE CASCADE
);
-- Link entities to sessions
ALTER TABLE entities ADD COLUMN session_id BLOB NOT NULL REFERENCES sessions(id);
-- Index for session-scoped entity queries
CREATE INDEX idx_entities_session
ON entities(session_id);
-- Update vector clock to be session-scoped
ALTER TABLE vector_clock ADD COLUMN session_id BLOB NOT NULL REFERENCES sessions(id);
-- Composite index for session + node lookups
CREATE INDEX idx_vector_clock_session_node
ON vector_clock(session_id, node_id);
-- Update operation log to be session-scoped
ALTER TABLE operation_log ADD COLUMN session_id BLOB NOT NULL REFERENCES sessions(id);
-- Index for session-scoped operation queries
CREATE INDEX idx_operation_log_session
ON operation_log(session_id, node_id, sequence_number);
```
### Session State Transitions
Sessions progress through a well-defined state machine that handles normal operation, network failures, and clean shutdown. The five states capture every phase of a session's lifecycle:
- **Created**: Session exists in database but hasn't connected to the network yet
- **Joining**: Currently attempting to join the network and sync state with peers
- **Active**: Fully synchronized and actively collaborating with peers
- **Disconnected**: Temporarily offline, will attempt to rejoin when network is restored
- **Left**: User explicitly left the session (clean shutdown)
The state machine allows for automatic reconnection after temporary network failures while respecting explicit user actions like leaving a session.
```mermaid
stateDiagram-v2
[*] --> Created
Created --> Joining: Connect to network
Joining --> Active: Sync complete
Joining --> Disconnected: Network loss
Active --> Disconnected: Network loss
Disconnected --> Joining: Network restored
Disconnected --> Left: Explicit leave
Active --> Left: Explicit leave
Left --> [*]
```
## ALPN-based Network Isolation
### Overview
Instead of using a single global gossip topic, each session gets its own ALPN (Application-Layer Protocol Negotiation) identifier derived from the session ID. This provides true network isolation at the QUIC transport layer: peers in different sessions cannot even discover each other, which also shields participants from uninvited peers.
### ALPN Derivation
Each session derives a unique ALPN identifier using BLAKE3 cryptographic hashing. The derivation is deterministic - the same session ID always produces the same ALPN - which allows all peers to independently compute the correct ALPN for a session they want to join.
The ALPN is computed by hashing the session UUID with BLAKE3, using a domain separation prefix (`/app/v1/session-id/`) followed by the session ID bytes. This produces a deterministic 32-byte identifier that all peers independently compute from the same session code.
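The derivation can be sketched as follows, assuming the `/app/v1/session-id/` prefix described above. BLAKE3 is replaced here by std's `DefaultHasher` purely so the example is self-contained; the structure that matters (domain-separated, deterministic, 32-byte output) is the same.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Domain-separation prefix from the text above.
const ALPN_PREFIX: &[u8] = b"/app/v1/session-id/";

/// Derive a 32-byte session-specific ALPN from the 16-byte session ID.
/// Deterministic: every peer computes the same ALPN from the same ID.
pub fn to_alpn(session_id: &[u8; 16]) -> [u8; 32] {
    let mut out = [0u8; 32];
    // Expand the 64-bit stand-in hash to 32 bytes by hashing
    // (prefix || id || chunk_index) for each 8-byte chunk. The real
    // implementation would be a single BLAKE3 hash over prefix || id.
    for chunk in 0..4 {
        let mut hasher = DefaultHasher::new();
        ALPN_PREFIX.hash(&mut hasher);
        session_id.hash(&mut hasher);
        (chunk as u64).hash(&mut hasher);
        out[chunk * 8..chunk * 8 + 8].copy_from_slice(&hasher.finish().to_be_bytes());
    }
    out
}
```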
The gossip network initialization is modified to use session-specific ALPNs instead of a global protocol identifier. This requires configuring the iroh endpoint with the session's derived ALPN and ensuring the router only accepts connections using that ALPN.
**Peer Discovery Strategy**: We use a multi-layered approach to discover peers within a session:
1. **mDNS (Multicast DNS)**: For local network discovery - peers on the same LAN can find each other automatically
2. **Pkarr DNS Discovery**: For Internet-wide discovery - iroh's built-in pkarr-based DNS discovery provides decentralized peer discovery without requiring centralized infrastructure. Pkarr (Public Key Addressable Resource Records) allows peers to publish signed DNS records using their public keys.
The combination ensures both local and remote sessions work seamlessly.
The initialization process has three temporal phases:
**One-time Endpoint Setup** (occurs once per application launch):
1. **Endpoint Creation**: Build an iroh `Endpoint` with both mDNS and Pkarr discovery mechanisms enabled
2. **Gossip Protocol**: Spawn the gossip protocol handler using `Gossip::builder().spawn(endpoint)`
**Per-session Connection** (occurs when joining each session):
3. **ALPN Derivation**: Call `session.id.to_alpn()` to compute the 32-byte session-specific ALPN identifier
4. **Router Configuration**: Create a router that only accepts connections on the session's ALPN
- Critical: Use the derived ALPN, not the default `iroh_gossip::ALPN`
- This enforces network isolation at the transport layer
5. **Topic Subscription**: Subscribe to a gossip topic derived from the ALPN (can reuse the same bytes)
6. **Join Wait**: Wait up to 2 seconds for the join confirmation
- Timeout is expected for the first node in a session (no peers yet)
- Errors are logged but don't prevent continuing
**Background Operation** (runs continuously while session is active):
7. **Bridge Creation**: Create a `GossipBridge` that wraps the gossip channels and provides session context
8. **Task Spawning**: Launch background tasks to forward messages between gossip and application
The key architectural decision is using the same ALPN bytes for both transport-layer connection acceptance and application-layer topic identification. This ensures consistent isolation across both layers.
Session invite links encoding the session code can be shared via chat, email, or other communication channels. When clicked, they auto-populate the join dialog, but manual code entry remains the fallback if URL handling isn't configured.
## Join Protocol (Hybrid Sync)
The join protocol is the mechanism by which a node synchronizes its state when connecting to a session. The protocol is "hybrid" because it intelligently chooses between two strategies based on the node's history with the session.
### Overview
When a node connects to a session, it needs to synchronize its local state with the distributed state maintained by all peers. The naive approach would be to always transfer the complete session state - all entities, components, and resources. However, this is inefficient for nodes that are rejoining a session after a temporary disconnection or app restart.
The hybrid join protocol addresses this by supporting two distinct scenarios:
1. **Fresh Join**: The node is joining this session for the first time and has no local state. The protocol sends a complete snapshot of the entire session state (all entities and components). This is unavoidable for first-time joins but can be bandwidth-intensive.
2. **Rejoin**: The node has previously joined this session and has persistent local state in its database. Instead of transferring the entire state again, the protocol calculates which operations occurred since the node was last active and sends only those deltas. This dramatically reduces bandwidth and latency for reconnection scenarios.
### Extended JoinRequest Message
To enable hybrid sync, the `JoinRequest` message needs to communicate the joining node's state to existing peers. This allows peers to make an intelligent decision about whether to send full state or just deltas.
The key fields are:
- **session_id**: Which session the node wants to join (validates this matches the receiving peer's session)
- **session_secret**: Optional authentication credential (if the session is password-protected)
- **last_known_clock**: The vector clock from when the node was last active in this session. If `None`, this is a fresh join. If `Some`, the node has previous state and only needs updates since that clock.
- **join_type**: Metadata about the join (fresh vs rejoin with entity count) - helps peers optimize their response
Existing peers use this information to decide: "Can I send just deltas, or do I need to send the full state?"
The `last_known_clock` field is the key discriminator: its presence signals that the node has persistent state and only needs deltas, while its absence triggers full state transfer.
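The message shape can be sketched as follows, with field names taken from the list above (the real type would also derive serde traits for wire encoding; the clock representation is an assumption):

```rust
use std::collections::HashMap;

pub type NodeId = String;
pub type VectorClock = HashMap<NodeId, u64>;

/// Metadata about the join, helping peers optimize their response.
pub enum JoinType {
    Fresh,
    Rejoin { known_entities: usize },
}

/// Sent by a node when connecting to a session.
pub struct JoinRequest {
    pub session_id: [u8; 16],
    pub session_secret: Option<Vec<u8>>,       // hashed secret, if protected
    pub last_known_clock: Option<VectorClock>, // None => fresh join
    pub join_type: JoinType,
}

impl JoinRequest {
    /// The presence of a clock is the key discriminator between a
    /// fresh join (full state) and a rejoin (deltas only).
    pub fn is_rejoin(&self) -> bool {
        self.last_known_clock.is_some()
    }
}
```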
### Join Flow: Fresh Join
A first-time join sends a `JoinRequest` with no vector clock; an existing peer responds with a complete snapshot:
```mermaid
sequenceDiagram
participant NewNode
participant ExistingPeer
NewNode->>ExistingPeer: JoinRequest (last_known_clock = None)
ExistingPeer->>NewNode: FullState (all entities and components)
NewNode->>NewNode: Apply FullState to local world
NewNode->>ExistingPeer: SyncRequest (catch any missed deltas)
NewNode->>NewNode: Transition to Active state
```
### Join Flow: Rejoin (Hybrid Sync)
The rejoin flow is optimized for the common case where a node is reconnecting to a session it was previously part of. This happens frequently: app restarts, temporary network disconnections, or laptop sleep/wake cycles.
The key insight is that the node already has most of the session state in its local database. Instead of downloading everything again, it can:
1. **Optimistic Restore**: Load the session and all entities from the local database immediately - the world appears instantly
2. **Clock Comparison**: Send the vector clock from when it last left the session
3. **Delta Calculation**: The peer compares vector clocks and calculates which operations are missing
4. **Incremental Sync**: If the delta count is reasonable (<1000 operations), send only those deltas
5. **Fallback**: If too many deltas accumulated (e.g., node was offline for days), fall back to sending full state
6. **Apply Updates**: Incrementally apply the deltas to bring the world up to date
7. **Final Catchup**: Request any additional deltas that arrived during the sync
This hybrid approach provides several critical advantages:
- **Instant UI**: The world appears immediately from database cache, no loading screen
- **Bandwidth Efficiency**: Only 10-50 KB of deltas instead of 1+ MB full state transfer
- **Lower Latency**: Rejoins complete in under 2 seconds instead of 5-10 seconds
- **Graceful Degradation**: Automatically falls back to full sync if needed
- **Progressive Refinement**: The UI shows cached state immediately, then updates as deltas arrive
```mermaid
sequenceDiagram
participant RejoiningNode
participant Database
participant Network
participant Peer
RejoiningNode->>Database: Load session metadata
RejoiningNode->>Database: Restore entities from DB (optimistic)
Note over RejoiningNode: World populated from cache
RejoiningNode->>Network: Connect to gossip (session ALPN)
RejoiningNode->>Peer: JoinRequest (last_known_clock = saved clock)
Peer->>Peer: Compare clocks, count missing deltas
alt Delta count <= 1000
Peer->>RejoiningNode: MissingDeltas (only operations since clock)
RejoiningNode->>RejoiningNode: Apply deltas incrementally
else Too many deltas
Peer->>RejoiningNode: FullState (more efficient than deltas)
RejoiningNode->>RejoiningNode: Clear and reload from FullState
end
RejoiningNode->>Database: Persist updated state
RejoiningNode->>Peer: SyncRequest (catch any missed deltas)
RejoiningNode->>RejoiningNode: Transition to Active state
```
### Join Handler Implementation
The join handler is a Bevy system that runs on existing peers and responds to incoming `JoinRequest` messages. Its responsibility is to decide whether to send full state or deltas based on the joining node's vector clock.
The system polls the gossip bridge for incoming messages and filters for `JoinRequest` messages. For each request, it performs a multi-stage validation and response pipeline:
**Stage 1: Request Validation**
Verify that the request's `session_id` matches the local session and that the session secret (if any) is correct. Invalid requests are ignored.
**Stage 2: Delta Calculation**
If the request carries a `last_known_clock`, compare it against the local vector clock and count the missing operations:
- If count ≤ 1000: Send `MissingDeltas` message with operation list
- If count > 1000: Send `FullState` instead (more efficient than 1000+ small messages)
If there is no clock (fresh join), prepare a `FullState` snapshot directly.
**Stage 3: Response Transmission**
Send the constructed response message (`FullState` or `MissingDeltas`) over the gossip bridge. Log errors if transmission fails.
**Design Rationale**:
The 1000-operation threshold is a heuristic based on message overhead: below this, individual delta messages are smaller than a full world snapshot. Above it, the cost of serializing and transmitting 1000+ small messages exceeds the cost of sending one large snapshot. This threshold can be tuned based on profiling.
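The decision above reduces to a pure function over vector clocks. A minimal sketch, with names and the clock representation assumed:

```rust
use std::collections::HashMap;

pub type VectorClock = HashMap<String, u64>;

// The heuristic threshold discussed above; tunable via profiling.
const DELTA_THRESHOLD: u64 = 1000;

pub enum SyncResponse {
    FullState,
    MissingDeltas { count: u64 },
}

/// Number of operations present locally but absent from the joiner's clock.
fn missing_ops(local: &VectorClock, joiner: &VectorClock) -> u64 {
    local
        .iter()
        .map(|(node, &counter)| counter.saturating_sub(*joiner.get(node).unwrap_or(&0)))
        .sum()
}

/// Choose deltas or full state based on the joiner's clock.
pub fn choose_response(local: &VectorClock, joiner: Option<&VectorClock>) -> SyncResponse {
    match joiner {
        None => SyncResponse::FullState, // fresh join: no prior state
        Some(clock) => {
            let count = missing_ops(local, clock);
            if count <= DELTA_THRESHOLD {
                SyncResponse::MissingDeltas { count }
            } else {
                SyncResponse::FullState // too many deltas; one snapshot is cheaper
            }
        }
    }
}
```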
## Temporary Lock-based Ownership
While CRDTs provide automatic conflict resolution for concurrent edits, they can produce unexpected results when users perform complex, multi-step operations. Consider a user rotating and scaling a 3D object - if another user starts editing the same object mid-operation, the CRDT merge could produce a bizarre intermediate state that neither user intended.
Temporary locks solve this by providing **optimistic, short-lived exclusive access** to entities during active editing. The lock model is intentionally simple and user-driven: when a user selects or begins editing an entity, their client immediately requests a lock. If granted, they have exclusive edit rights for a few seconds. If another user already holds the lock, the request is denied and the user sees visual feedback (e.g., the object is grayed out or shows "locked by Alice").
### Overview
To prevent CRDT conflicts on complex operations (e.g., multi-step drawing, entity transformations), we introduce **temporary exclusive locks** on entities. Only the node holding the lock can modify the entity.
**Design Principles:**
- **Initiator-driven**: Locks are requested immediately when user interaction begins (e.g., clicking an object), not after waiting for server approval
- **Optimistic**: The local client assumes the lock will succeed and allows immediate interaction; conflicts are resolved asynchronously
**Note**: The 5-second lock timeout is fixed in the initial implementation. Future versions may make this configurable per-entity-type or per-session based on UX requirements.
The `EntityLockRegistry` resource maintains a HashMap of entity ID to lock state, plus an acquisition history queue for rate limiting.
**Lock Acquisition Logic**:
When a node requests a lock, the registry performs several checks:
1. **Existing lock check**: Is this entity already locked?
- If locked by the requesting node: refresh the timeout and succeed
- If locked by another node and not expired: reject with `AlreadyLocked` error
- If locked but expired: proceed to acquire
2. **Rate limiting**: Has this node acquired more than 10 locks in the last second?
- If yes: reject with `RateLimited` error
- If no: proceed to acquire
3. **Grant lock**: Create lock entry, add to registry, record acquisition time
If all checks pass, the lock is granted and broadcasted to all peers via a `LockAcquired` message. All peers apply the lock to their local registry, ensuring everyone sees a consistent view of which entities are locked.
**Lock Release Logic**:
When a lock is released - either explicitly by the user (e.g., deselecting an object) or automatically via timeout - a `LockReleased` message is broadcast to all peers. The registry validates that the releasing node actually holds the lock, then removes it from the HashMap. This broadcast-on-release pattern prevents scenarios where one peer thinks an entity is locked while others think it's free.
**Automatic Cleanup**:
A periodic Bevy system runs every second to scan the registry and remove expired locks. This ensures that crashed nodes don't leave orphaned locks indefinitely - after 5 seconds, any lock automatically becomes available again.
The registry also maintains a rolling 60-second history of lock acquisitions for rate limit calculations, pruning old entries to prevent unbounded memory growth.
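The registry's acquisition checks can be sketched as follows, assuming the names above; the rate-limiting check and gossip broadcast are elided for brevity:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

pub type EntityId = u64;
pub type NodeId = u64;

pub struct EntityLock {
    pub holder: NodeId,
    pub expires_at: Instant,
}

#[derive(Debug, PartialEq)]
pub enum LockError {
    AlreadyLocked { holder: NodeId },
}

#[derive(Default)]
pub struct EntityLockRegistry {
    locks: HashMap<EntityId, EntityLock>,
}

impl EntityLockRegistry {
    pub fn try_acquire(
        &mut self,
        entity: EntityId,
        node: NodeId,
        timeout: Duration,
    ) -> Result<(), LockError> {
        let now = Instant::now();
        match self.locks.get(&entity) {
            // Held by us: fall through and refresh the timeout.
            Some(lock) if lock.holder == node => {}
            // Held by someone else and not yet expired: reject.
            Some(lock) if lock.expires_at > now => {
                return Err(LockError::AlreadyLocked { holder: lock.holder });
            }
            // Unlocked or expired: fall through and acquire.
            _ => {}
        }
        self.locks.insert(entity, EntityLock { holder: node, expires_at: now + timeout });
        Ok(())
    }

    pub fn is_locked_by(&self, entity: EntityId, node: NodeId) -> bool {
        self.locks
            .get(&entity)
            .map_or(false, |l| l.holder == node && l.expires_at > Instant::now())
    }

    /// Periodic cleanup: drop expired locks left by crashed nodes.
    pub fn prune_expired(&mut self) {
        let now = Instant::now();
        self.locks.retain(|_, l| l.expires_at > now);
    }
}
```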
### Lock Protocol Messages
The lock protocol uses five message types broadcast over the gossip network:
**LockRequest**: Initiates a lock acquisition attempt
- Includes entity ID, requesting node ID, desired timeout, and optional debug reason
**LockAcquired**: Confirms a lock was granted
- Broadcast so all peers record the lock in their local registry
**LockRejected**: Signals a failed acquisition attempt
- Specifies which entity, who requested it, who currently holds it, and why it failed
- Sent when entity is already locked or rate limit exceeded
**LockRelease**: Explicitly releases a held lock
- Contains entity ID and releasing node ID
- Broadcast when user finishes editing (e.g., deselects object)
**LockReleased**: Confirms lock was released
- Notifies all peers the entity is now available
- All peers remove the lock from their local registry
### Lock Acquisition Flow
The lock acquisition flow is optimistic and user-driven. When a user clicks on an entity to begin editing (e.g., selecting a cube to move it), the client immediately:
1. **Local Check**: Consult the local lock registry - is this entity already locked?
2. **Optimistic Request**: If not locked (or lock expired), immediately broadcast a `LockRequest` to all peers
3. **Peer Application**: All peers (including the requester) apply the lock locally
4. **Grant or Reject**: Each peer validates the lock in their registry:
- If successful: Broadcast `LockAcquired` confirmation
- If failed (already locked by someone else): Broadcast `LockRejected` with reason
5. **User Feedback**: Show the user whether they got the lock (enable editing) or didn't (show "locked by Alice")
6. **Edit Phase**: While holding the lock, user can freely modify the entity and generate deltas
7. **Explicit Release**: When done editing (e.g., deselecting), broadcast `LockRelease` to free the entity
The flow is designed for responsiveness - users see immediate feedback rather than waiting for server round-trips.
```mermaid
flowchart TD
A[User clicks entity] --> B{Check local lock registry}
B -->|Already locked| C[Show 'Locked by X' message]
B -->|Not locked or expired| D[Broadcast LockRequest]
D --> E[All peers apply locally]
E --> F{Lock acquired?}
F -->|No, already locked| G[Broadcast LockRejected]
G --> C
F -->|Yes, granted| H[Broadcast LockAcquired]
H --> I[User edits entity]
I --> J[Generate deltas]
J --> K{User finished?}
K -->|No| I
K -->|Yes| L[Broadcast LockRelease]
L --> M[Entity available again]
```
### Conflict Resolution
**Scenario**: Two nodes request the same lock simultaneously
The most interesting edge case occurs when two users click the same entity at nearly the same time. Due to network latency, both nodes might broadcast `LockRequest` messages before receiving the other's request. This creates a race condition that must be resolved deterministically.
**Resolution Strategy**: Deterministic tiebreaker using node ID comparison
The resolution protocol works as follows:
1. **Optimistic Locking**: Both Node A and Node B broadcast `LockRequest` for the same entity
2. **Local Application**: Both nodes apply the lock locally (optimistic assumption it will succeed)
3. **Broadcast Confirmation**: Both nodes broadcast `LockAcquired`
4. **Conflict Detection**: When Node A receives Node B's `LockAcquired`, it detects a conflict
5. **Deterministic Resolution**: Compare node IDs - the higher node ID wins, the lower releases
6. **Convergence**: The losing node broadcasts `LockReleased`, and the system converges to a single lock holder
This approach is:
- **Deterministic**: All peers reach the same conclusion about who holds the lock
- **Fair**: Neither node has priority; it's based on random UUIDs
- **Fast**: Conflicts resolve in one round-trip (detect conflict → release)
- **No central authority**: Peers coordinate via gossip without requiring a master
```mermaid
sequenceDiagram
participant NodeA
participant NodeB
participant OtherPeers
Note over NodeA,NodeB: Both click entity simultaneously
NodeA->>OtherPeers: LockRequest(entity_id)
NodeB->>OtherPeers: LockRequest(entity_id)
NodeA->>NodeA: Apply lock locally
NodeB->>NodeB: Apply lock locally
NodeA->>OtherPeers: LockAcquired(entity_id)
NodeB->>OtherPeers: LockAcquired(entity_id)
NodeB->>NodeA: LockAcquired(entity_id)
NodeA->>NodeB: LockAcquired(entity_id)
Note over NodeA,NodeB: Both detect conflict
alt NodeA.id > NodeB.id
NodeB->>NodeB: Release lock (lost)
NodeB->>OtherPeers: LockReleased(entity_id)
Note over NodeA: NodeA keeps lock
else NodeB.id > NodeA.id
NodeA->>NodeA: Release lock (lost)
NodeA->>OtherPeers: LockReleased(entity_id)
Note over NodeB: NodeB keeps lock
end
```
**Implementation Notes**:
When a node receives a `LockAcquired` message for an entity it also just acquired, it detects the conflict and compares node IDs:
- **Higher node ID**: Keep the lock, ignore the conflict
- **Lower node ID**: Release the lock after a short timeout (100ms) and broadcast `LockReleased`
The brief timeout allows both nodes to detect the conflict before either releases. Without this delay, nodes might race to release, potentially leaving the entity unlocked. The 100ms window ensures both sides see the conflict before the loser releases, guaranteeing exactly one lock holder remains.
This approach provides fast, deterministic convergence without requiring additional coordination rounds or central authority.
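The tiebreaker reduces to a single pure comparison. Node IDs are shown as integers for simplicity; in practice they are larger values (e.g., public keys), which compare the same way under a total byte ordering:

```rust
pub type NodeId = u64;

#[derive(Debug, PartialEq)]
pub enum ConflictAction {
    /// Higher node ID wins; ignore the conflict and keep the lock.
    KeepLock,
    /// Lower node ID loses; broadcast LockReleased after ~100ms.
    ReleaseLock,
}

/// Resolve a LockAcquired conflict deterministically: every peer
/// reaches the same conclusion from the same pair of IDs.
pub fn resolve_lock_conflict(our_id: NodeId, their_id: NodeId) -> ConflictAction {
    if our_id > their_id {
        ConflictAction::KeepLock
    } else {
        ConflictAction::ReleaseLock
    }
}
```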
### Integration with Change Detection
Locks integrate with Bevy's change detection system to prevent unauthorized modifications. The delta generation system checks the lock registry before broadcasting entity changes:
**Lock Check on Component Changes**:
When a Bevy component changes (detected via `Changed<Transform>` queries), the delta generation system:
1. **Queries changed entities**: Iterate through all networked entities with modified components
2. **Lock validation**: For each changed entity, check `lock_registry.is_locked_by(entity_id, our_node_id)`
3. **Decision**:
- If we hold the lock: serialize the component change and broadcast an `EntityDelta` message
- If we don't hold the lock: log a warning and skip delta generation (local change only, not synchronized)
This enforcement ensures that only the lock holder can propagate changes to peers. If a buggy client or edge case causes a component to change without holding the lock, the change remains local and doesn't corrupt the distributed state.
The advisory nature of locks means the underlying CRDT can still handle conflicts if lock enforcement fails, providing defense in depth.
## Persistence Integration
Session lifecycle management requires tight integration with the persistence layer to support automatic rejoin after crashes or restarts. The persistence systems handle three critical responsibilities:
1. **Session Discovery**: On startup, check if there's a previous session to rejoin
2. **State Restoration**: Load session metadata, entities, and vector clocks from the database
3. **Clean Shutdown**: Save current session state before exit
The integration is implemented through Bevy systems that run at specific lifecycle events: startup, shutdown, and periodic checkpoints.
### Session Lifecycle Systems
The session lifecycle is managed through two primary Bevy systems: one for initialization on startup, and one for persisting state on shutdown.
**Startup: session restore system**
On application startup, this system queries the database for the most recent active session:
1. **Session Discovery**: Query `sessions` table ordered by `last_active DESC` to find the most recent session
2. **Decision Point**:
- If a session exists: Resume it (enables automatic rejoin after crashes)
- If no session exists: Create a new session with a random UUID and default state
3. **Vector Clock Loading**: Load the session's vector clock from the database, or initialize an empty clock for new sessions
4. **Resource Initialization**: Insert `CurrentSession` resource containing session metadata and the saved vector clock
This enables crash recovery - if the app crashes and restarts, it automatically resumes the previous session and rejoins the network.
**Shutdown: `save_session_state_system`**
On clean shutdown, this system persists the current session state:
1. **Update Metadata**: Set `last_active` timestamp, count current entities, mark state as `Left`
2. **Save Session**: Write session metadata to the `sessions` table using `INSERT OR REPLACE`
3. **Save Vector Clock**: Transaction-based save that clears old clock entries and inserts current state for all known nodes
The vector clock save uses a transaction to ensure atomic updates - either all clock entries are saved, or none are. This prevents partial clock states that could cause sync issues on rejoin.
The persistence layer provides several key database operations:
**Session Queries**:
- `get_last_active_session()`: Queries the most recent session by `last_active DESC`, returns `Option<Session>`
- `save_session()`: Upserts session metadata using `INSERT OR REPLACE`, persisting all session fields
**Vector Clock Persistence**:
- `load_session_vector_clock()`: Queries all `node_id`/`counter` pairs for a session, rebuilding the HashMap
- `save_session_vector_clock()`: Transactional save that deletes old entries then inserts current clock state
All operations use parameterized queries to prevent SQL injection and handle optional fields (like `name` and `secret`) correctly. Session IDs are stored as 16-byte BLOBs for efficiency.
## Implementation Roadmap
### Documentation Standards
All public APIs should follow Rust documentation conventions with comprehensive docstrings. Expected format:
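As an illustration of the expected rustdoc style - a summary line, a detail paragraph, and an `# Examples` section - applied to a hypothetical lifecycle helper:

```rust
/// Lifecycle state of a collaborative session.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SessionState {
    Created,
    Joining,
    Active,
    Disconnected,
    Left,
}

impl SessionState {
    /// Returns `true` if a session in this state should attempt an
    /// automatic rejoin on startup.
    ///
    /// Sessions the user explicitly left are never rejoined; all other
    /// states indicate an interrupted session that can be resumed.
    ///
    /// # Examples
    ///
    /// ```
    /// assert!(SessionState::Disconnected.should_auto_rejoin());
    /// assert!(!SessionState::Left.should_auto_rejoin());
    /// ```
    pub fn should_auto_rejoin(&self) -> bool {
        !matches!(*self, SessionState::Left)
    }
}
```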
### Network Partition Recovery
When a network partition heals, the gossip protocol reconnects and nodes exchange their full state. The CRDT merge process happens automatically:
1. Vector clocks from both partitions are compared
2. Operations with concurrent clocks are merged using Last-Write-Wins (LWW) based on timestamps
3. The final state reflects the operation with the highest timestamp
4. Convergence typically completes within 1-2 seconds after reconnection
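Step 2 can be sketched as a pure merge function; comparing the `(timestamp, node_id)` tuple gives every peer the same deterministic winner even when timestamps tie (types are illustrative):

```rust
#[derive(Debug, Clone, PartialEq)]
pub struct Op {
    pub timestamp_ms: u64,
    pub node_id: u64, // tiebreaker when timestamps are equal
    pub value: String,
}

/// Last-Write-Wins: the operation with the highest timestamp wins;
/// equal timestamps fall back to node ID so all peers converge.
pub fn lww_merge(a: Op, b: Op) -> Op {
    if (a.timestamp_ms, a.node_id) >= (b.timestamp_ms, b.node_id) {
        a
    } else {
        b
    }
}
```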
**UX Implications**:
Users in the "losing" partition may see their changes overridden. To minimize surprise:
- Visual indicator shows when the app is disconnected (yellow/orange connection status)
- On reconnection, entities that changed display a brief animation/highlight
- A notification shows "Reconnected - syncing changes" with entity count
- Changes made during disconnection that were overridden could be logged for potential manual recovery (future enhancement)
The system prioritizes consistency over preserving every edit during splits, which is acceptable for collaborative creative work where real-time coordination is expected.
## Security Considerations
Security in a peer-to-peer collaborative environment requires careful balance between usability and protection. This RFC addresses two primary security concerns: session access control and protocol integrity.
### Session Secret Validation
Session secrets provide optional password-based access control. When a session is created with a secret, any peer attempting to join must provide the matching secret in their `JoinRequest`. The validation uses constant-time comparison to prevent timing attacks that could leak information about the secret.
The secret is hashed using BLAKE3 before comparison, ensuring that:
- Secrets are never transmitted in plaintext
- Timing analysis cannot reveal secret length or content
- Fast validation (BLAKE3 is extremely performant)
The validation function uses the `subtle` crate's `ConstantTimeEq` trait to perform constant-time comparison of the hashed secrets, preventing timing-based attacks that could leak information about the secret.
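The validation path can be sketched as follows. The real implementation uses the `blake3` and `subtle` crates; a stand-in hash and a hand-rolled XOR-accumulate comparison keep this example self-contained, but the shape is the same: hash first, then compare every byte regardless of where a mismatch occurs.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for BLAKE3: hash the secret before comparison so
/// plaintexts are never compared (or transmitted) directly.
fn hash_secret(secret: &[u8]) -> [u8; 8] {
    let mut h = DefaultHasher::new();
    secret.hash(&mut h);
    h.finish().to_be_bytes()
}

/// Constant-time comparison: XOR-accumulate all bytes so the running
/// time does not depend on the position of the first mismatch
/// (the real code uses `subtle::ConstantTimeEq`).
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

pub fn validate_secret(provided: &[u8], expected_hash: &[u8; 8]) -> bool {
    ct_eq(&hash_secret(provided), expected_hash)
}
```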
### Lock Rate Limiting
To prevent abuse and buggy clients from monopolizing resources, the lock system implements two rate limits:
1. **Total Locks per Node**: Maximum 100 concurrent locks per node
- Prevents a single node from locking every entity in the session
- Ensures entities remain available for other participants
2. **Acquisition Rate**: Maximum 10 lock requests per second per node
- Prevents rapid lock spamming attacks
- Tracked via a rolling 60-second acquisition history queue
- Old entries are pruned to prevent memory growth
When a rate limit is exceeded, the lock request returns a `LockError::RateLimited` error. The requesting node's UI should display appropriate feedback to the user.
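The acquisition-rate check can be sketched with a rolling window of timestamps (names are assumptions; the constants match the limits above):

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

const MAX_PER_SECOND: usize = 10;
const HISTORY_WINDOW: Duration = Duration::from_secs(60);

#[derive(Default)]
pub struct LockRateLimiter {
    history: VecDeque<Instant>,
}

impl LockRateLimiter {
    /// Record an acquisition attempt; returns false (RateLimited) if the
    /// node has already made 10 requests in the last second.
    pub fn check_and_record(&mut self, now: Instant) -> bool {
        // Prune entries older than the 60-second history window
        // to prevent unbounded memory growth.
        while let Some(&front) = self.history.front() {
            if now.duration_since(front) > HISTORY_WINDOW {
                self.history.pop_front();
            } else {
                break;
            }
        }
        let recent = self
            .history
            .iter()
            .filter(|&&t| now.duration_since(t) <= Duration::from_secs(1))
            .count();
        if recent >= MAX_PER_SECOND {
            return false;
        }
        self.history.push_back(now);
        true
    }
}
```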
### Deferred Implementation Details
Several implementation details are deferred to Phase 3 (Hybrid Join Protocol):
- **Dependency Ordering**: Entities with parent-child relationships or component references will be sorted before streaming to ensure dependencies arrive before dependents
- **Message Size Limits**: Batch size (50-100 entities) will be dynamically adjusted based on average entity serialization size to stay under QUIC message size limits (~1 MB)
- **Retry Mechanism**: Missing entity ranges are tracked and can be requested via `RequestMissingEntities { start_index, end_index }` if gaps are detected
- **Cancellation**: If the user abandons the join before completion, in-flight batches are discarded and the partial state is cleared