Compare commits: phase1-ven...v0.1.1

17 commits

| SHA1 |
|---|
| 1b3836ed4c |
| fdba3903cc |
| 5838b2dd6a |
| ffe529852d |
| e890b0213a |
| 28909e8b76 |
| 3e840908f6 |
| 8ca02fd492 |
| a0c13be6d6 |
| da886452bd |
| 6303c4b409 |
| f3f8094530 |
| 7b8fed178e |
| 99db2c90b4 |
| 7d24abf113 |
| b421aaf037 |
| 99e31b1157 |
.cargo/config.toml (Normal file, 5 lines)
@@ -0,0 +1,5 @@

```toml
[alias]
xtask = "run --package xtask --"

[env]
IPHONEOS_DEPLOYMENT_TARGET = "16.0"
```
.claude/rust-guidelines.txt (Normal file, 2436 lines)
File diff suppressed because it is too large
.github/ISSUE_TEMPLATE/bug_report.md (vendored, Normal file, 68 lines)
@@ -0,0 +1,68 @@

---
name: Bug Report
about: Report a bug to help us improve Marathon
title: '[BUG] '
labels: bug
assignees: ''
---

## Bug Description

A clear and concise description of what the bug is.

## Minimal, Complete, Verifiable Example (MCVE)

Please provide the **smallest possible code example** that demonstrates the bug. This helps us reproduce and fix the issue faster.

### Minimal Code Example

```rust
// Paste your minimal reproducible code here
// Remove anything not necessary to demonstrate the bug
```

### Steps to Reproduce

1.
2.
3.
4.

### Expected Behavior

What you expected to happen:

### Actual Behavior

What actually happened:

## Environment

- **OS**: [e.g., macOS 15.0, iOS 18.2]
- **Rust Version**: [e.g., 1.85.0 - run `rustc --version`]
- **Marathon Version/Commit**: [e.g., v0.1.0 or commit hash]
- **Platform**: [Desktop / iOS Simulator / iOS Device]

## Logs/Stack Traces

If applicable, paste any error messages or stack traces here:

```
paste logs here
```

## Screenshots/Videos

If applicable, add screenshots or videos to help explain the problem.

## Additional Context

Add any other context about the problem here. For example:
- Does it happen every time or intermittently?
- Did this work in a previous version?
- Are you running multiple instances?
- Any relevant configuration or network setup?

## Possible Solution

If you have ideas about what might be causing the issue or how to fix it, please share them here.
.github/ISSUE_TEMPLATE/config.yml (vendored, Normal file, 8 lines)
@@ -0,0 +1,8 @@

```yaml
blank_issues_enabled: true
contact_links:
  - name: Question or Discussion
    url: https://github.com/r3t-studios/marathon/discussions
    about: Ask questions or discuss ideas with the community
  - name: Security Vulnerability
    url: https://github.com/r3t-studios/marathon/security/policy
    about: Please report security issues privately (see SECURITY.md)
```
.github/ISSUE_TEMPLATE/feature_request.md (vendored, Normal file, 72 lines)
@@ -0,0 +1,72 @@

---
name: Feature Request
about: Suggest a new feature or enhancement for Marathon
title: '[FEATURE] '
labels: enhancement
assignees: ''
---

## Problem Statement

**Is your feature request related to a problem? Please describe.**

A clear and concise description of what the problem is. For example:
- "I'm always frustrated when..."
- "It's difficult to..."
- "Users need to be able to..."

## Feature Request (Given-When-Then Format)

Please describe your feature request using the Given-When-Then format to make the behavior clear:

### Scenario 1: [Brief scenario name]

**Given** [initial context or preconditions]
**When** [specific action or event]
**Then** [expected outcome]

**Example:**
- **Given** I am editing a collaborative document with 3 other peers
- **When** I lose network connectivity for 5 minutes
- **Then** my local changes should be preserved and sync automatically when I reconnect

### Scenario 2: [Additional scenario if needed]

**Given** [initial context]
**When** [action]
**Then** [outcome]

## Alternatives Considered

**Describe alternatives you've considered.**

Have you thought of other ways to solve this problem? What are the pros and cons of different approaches?

## Technical Considerations

**Do you have thoughts on implementation?**

If you have ideas about how this could be implemented technically, share them here. For example:
- Which modules might be affected
- Potential challenges or dependencies
- Performance implications
- Breaking changes required

## Additional Context

Add any other context, mockups, screenshots, or examples from other projects that illustrate the feature.

## Priority/Impact

How important is this feature to you or your use case?
- [ ] Critical - blocking current work
- [ ] High - would significantly improve workflow
- [ ] Medium - nice to have
- [ ] Low - minor improvement

## Willingness to Contribute

- [ ] I'm willing to implement this feature
- [ ] I can help test this feature
- [ ] I can help with documentation
- [ ] I'm just suggesting the idea
.github/pull_request_template.md (vendored, Normal file, 117 lines)
@@ -0,0 +1,117 @@

## Description

<!-- Provide a clear and concise description of what this PR does -->

## Related Issues

<!-- Link to related issues using #issue_number -->
Fixes #
Relates to #

## Type of Change

<!-- Mark relevant items with an [x] -->

- [ ] Bug fix (non-breaking change that fixes an issue)
- [ ] New feature (non-breaking change that adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update
- [ ] Refactoring (no functional changes)
- [ ] Performance improvement
- [ ] Test coverage improvement

## Changes Made

<!-- List the specific changes in this PR -->

-
-
-

## Testing Performed

<!-- Describe the testing you've done -->

- [ ] All existing tests pass (`cargo nextest run`)
- [ ] Added new tests for new functionality
- [ ] Tested manually on desktop
- [ ] Tested manually on iOS (if applicable)
- [ ] Tested with multiple instances
- [ ] Tested edge cases and error conditions

### Test Details

<!-- Provide specific details about your testing -->

**Desktop:**
-

**iOS:** (if applicable)
-

**Multi-instance:** (if applicable)
-

## Documentation

<!-- Mark relevant items with an [x] -->

- [ ] Updated relevant documentation in `/docs`
- [ ] Updated README.md (if public API changed)
- [ ] Added doc comments to new public APIs
- [ ] Updated CHANGELOG.md

## Code Quality

<!-- Confirm these items -->

- [ ] Code follows project style guidelines
- [ ] Ran `cargo +nightly fmt`
- [ ] Ran `cargo clippy` and addressed warnings
- [ ] No new compiler warnings
- [ ] Added meaningful variable/function names

## AI Usage

<!-- If you used AI tools, briefly note how (see AI_POLICY.md) -->
<!-- You don't need to disclose simple autocomplete, only substantial AI assistance -->

- [ ] No AI assistance used
- [ ] Used AI tools (brief description below)

<!-- If used: -->
<!-- AI tool: [e.g., Claude, Copilot] -->
<!-- How: [e.g., "Used to generate boilerplate, then reviewed and modified"] -->
<!-- I reviewed, understand, and am accountable for all code in this PR -->

## Breaking Changes

<!-- If this is a breaking change, describe what breaks and how to migrate -->

**Does this PR introduce breaking changes?**
- [ ] No
- [ ] Yes (describe below)

<!-- If yes: -->
<!-- - What breaks: -->
<!-- - Migration path: -->

## Screenshots/Videos

<!-- If applicable, add screenshots or videos showing the changes -->

## Checklist

<!-- Final checks before requesting review -->

- [ ] My code follows the project's coding standards
- [ ] I have tested my changes thoroughly
- [ ] I have updated relevant documentation
- [ ] I have added tests that prove my fix/feature works
- [ ] All tests pass locally
- [ ] I have read and followed the [CONTRIBUTING.md](../CONTRIBUTING.md) guidelines
- [ ] I understand and accept the [AI_POLICY.md](../AI_POLICY.md)

## Additional Notes

<!-- Any additional information reviewers should know -->
.gitignore (vendored, 2 lines added)
@@ -76,3 +76,5 @@ target/doc/

```
# Project-specific (based on your untracked files)
emotion-gradient-config-*.json
**/*.csv
.op/
.sere
```
.serena/.gitignore (vendored, Normal file, 1 line)
@@ -0,0 +1 @@

```
/cache
```
.serena/memories/code_style_conventions.md (Normal file, 294 lines)
@@ -0,0 +1,294 @@

# Code Style & Conventions

## Rust Style Configuration

The project uses **rustfmt** with a custom configuration (`rustfmt.toml`):

### Key Formatting Rules
- **Edition**: 2021
- **Braces**: `PreferSameLine` for structs/enums, `AlwaysSameLine` for control flow
- **Function Layout**: `Tall` (each parameter on its own line for long signatures)
- **Single-line Functions**: Disabled (`fn_single_line = false`)
- **Imports**:
  - Grouping: `StdExternalCrate` (std, external, then local)
  - Layout: `Vertical` (one import per line)
  - Granularity: `Crate` level
  - Reorder: Enabled
- **Comments**:
  - Width: 80 characters
  - Wrapping: Enabled
  - Format code in doc comments: Enabled
- **Doc Attributes**: Normalized (`normalize_doc_attributes = true`)
- **Impl Items**: Reordered (`reorder_impl_items = true`)
- **Match Arms**: Leading pipes always shown
- **Hex Literals**: Lowercase

### Applying Formatting
```bash
# Format all code
cargo fmt

# Check without modifying
cargo fmt -- --check
```
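For reference, the rules listed above map onto rustfmt options roughly as follows. This is a sketch assembled from the upstream rustfmt option names, not a copy of the project's actual `rustfmt.toml`:

```toml
edition = "2021"
brace_style = "PreferSameLine"
control_brace_style = "AlwaysSameLine"
fn_params_layout = "Tall"
fn_single_line = false
group_imports = "StdExternalCrate"
imports_layout = "Vertical"
imports_granularity = "Crate"
reorder_imports = true
comment_width = 80
wrap_comments = true
format_code_in_doc_comments = true
normalize_doc_attributes = true
reorder_impl_items = true
match_arm_leading_pipes = "Always"
hex_literal_case = "Lower"
```

Several of these options are nightly-only, which is consistent with the PR template's `cargo +nightly fmt` checklist item.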

## Naming Conventions

### Rust Standard Conventions
- **Types** (structs, enums, traits): `PascalCase`
  - Example: `EngineBridge`, `PersistenceConfig`, `SessionId`
- **Functions & Methods**: `snake_case`
  - Example: `run_executor()`, `get_database_path()`
- **Constants**: `SCREAMING_SNAKE_CASE`
  - Example: `APP_NAME`, `DEFAULT_BUFFER_SIZE`
- **Variables**: `snake_case`
  - Example: `engine_bridge`, `db_path_str`
- **Modules**: `snake_case`
  - Example: `debug_ui`, `engine_bridge`
- **Crates**: `kebab-case` in Cargo.toml, `snake_case` in code
  - Example: `sync-macros` → `sync_macros`

### Project-Specific Patterns
- **Platform modules**: `platform/desktop/`, `platform/ios/`
- **Plugin naming**: Suffix with `Plugin` (e.g., `EngineBridgePlugin`, `CameraPlugin`)
- **Resource naming**: Prefix with purpose (e.g., `PersistenceConfig`, `SessionManager`)
- **System naming**: Suffix with `_system` for Bevy systems
- **Bridge pattern**: Use `Bridge` suffix for inter-component communication (e.g., `EngineBridge`)
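Taken together, the casing rules read like this in code. All item names below are hypothetical, chosen only to illustrate the conventions, and do not come from the codebase:

```rust
// Hypothetical illustration of the naming rules above.
const DEFAULT_BUFFER_SIZE: usize = 1024; // constant: SCREAMING_SNAKE_CASE

struct EngineBridgePlugin; // type: PascalCase, with the `Plugin` suffix

// function: snake_case
fn get_database_path(app_name: &str) -> String {
    let db_path_str = format!("{app_name}.db"); // variable: snake_case
    db_path_str
}

fn main() {
    let _plugin = EngineBridgePlugin;
    assert_eq!(DEFAULT_BUFFER_SIZE, 1024);
    assert_eq!(get_database_path("marathon"), "marathon.db");
    println!("ok");
}
```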

## Code Organization

### Module Structure
```rust
// Public API first
pub mod engine;
pub mod networking;
pub mod persistence;

// Internal modules
mod debug_ui;
mod platform;

// Re-exports for convenience
pub use engine::{EngineCore, EngineBridge};
```

### Import Organization
```rust
// Standard library
use std::sync::Arc;
use std::thread;

// External crates (grouped by crate)
use bevy::prelude::*;
use serde::{Deserialize, Serialize};
use tokio::runtime::Runtime;

// Internal crates
use libmarathon::engine::EngineCore;
use libmarathon::platform;

// Local modules
use crate::camera::*;
use crate::debug_ui::DebugUiPlugin;
```

## Documentation

### Doc Comments
- Use `///` for public items
- Use `//!` for module-level documentation
- Include examples where helpful
- Document panics, errors, and safety considerations

```rust
/// Creates a new engine bridge for communication between Bevy and EngineCore.
///
/// # Returns
///
/// A tuple of `(EngineBridge, EngineHandle)` where the bridge goes to Bevy
/// and the handle goes to EngineCore.
///
/// # Examples
///
/// ```no_run
/// let (bridge, handle) = EngineBridge::new();
/// app.insert_resource(bridge);
/// // spawn EngineCore with handle
/// ```
pub fn new() -> (EngineBridge, EngineHandle) {
    // ...
}
```

### Code Comments
- Keep line comments at 80 characters or less
- Explain *why*, not *what* (code should be self-documenting for the "what")
- Use `// TODO:` for temporary code that needs improvement
- Use `// SAFETY:` before unsafe blocks to explain invariants

## Error Handling

### Library Code (libmarathon)
- Use `thiserror` for custom error types
- Return `Result<T, Error>` from fallible functions
- Provide context with error chains

```rust
use thiserror::Error;

#[derive(Error, Debug)]
pub enum EngineError {
    #[error("failed to connect to peer: {0}")]
    ConnectionFailed(String),
    #[error("database error: {0}")]
    Database(#[from] rusqlite::Error),
}
```

### Application Code (app)
- Use `anyhow::Result` for application-level error handling
- Add context with `.context()` or `.with_context()`

```rust
use anyhow::{Context, Result};

fn load_config() -> Result<Config> {
    let path = get_config_path()
        .context("failed to determine config path")?;

    let raw = std::fs::read_to_string(&path)
        .with_context(|| format!("failed to read config from {:?}", path))?;
    // ... parse `raw` into a Config and return it
}
```

## Async/Await Style

### Tokio Runtime Usage
- Spawn blocking tasks in background threads
- Use `tokio::spawn` for async tasks
- Prefer `async fn` over `impl Future`

```rust
// Good: Clear async function
async fn process_events(&mut self) -> Result<()> {
    // ...
}

// Background task spawning
std::thread::spawn(move || {
    let rt = tokio::runtime::Runtime::new().unwrap();
    rt.block_on(async {
        core.run().await;
    });
});
```

## Testing Conventions

### Test Organization
- Unit tests: In same file as code (`#[cfg(test)] mod tests`)
- Integration tests: In `tests/` directory
- Benchmarks: In `benches/` directory

### Test Naming
- Use descriptive names: `test_sync_between_two_nodes`
- Use `should_` prefix for behavior tests: `should_reject_invalid_input`

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_engine_bridge_creation() {
        let (bridge, handle) = EngineBridge::new();
        // ...
    }

    #[tokio::test]
    async fn should_sync_state_across_peers() {
        // ...
    }
}
```

## Platform-Specific Code

### Feature Gates
```rust
// iOS-specific code
#[cfg(target_os = "ios")]
use tracing_oslog::OsLogger;

// Desktop-specific code
#[cfg(not(target_os = "ios"))]
use tracing_subscriber::fmt;
```

### Platform Modules
- Keep platform-specific code in `platform/` modules
- Provide platform-agnostic interfaces when possible
- Use feature flags: `desktop`, `ios`, `headless`

## Logging

### Use Structured Logging
```rust
use tracing::{debug, info, warn, error};

// Good: Structured with context
info!(path = %db_path, "opening database");
debug!(count = peers.len(), "connected to peers");

// Avoid: Plain string
info!("Database opened at {}", db_path);
```

### Log Levels
- `error!`: System failures requiring immediate attention
- `warn!`: Unexpected conditions that are handled
- `info!`: Important state changes and milestones
- `debug!`: Detailed diagnostic information
- `trace!`: Very verbose, rarely needed

## RFCs and Design Documentation

### When to Write an RFC
- Architectural decisions affecting multiple parts of the system
- Choosing between significantly different approaches
- Introducing new protocols or APIs
- Making breaking changes

### RFC Structure (see `docs/rfcs/README.md`)
- Narrative-first explanation
- Trade-offs and alternatives
- API examples (not full implementations)
- Open questions
- Success criteria

## Git Commit Messages

### Format
```
Brief summary (50 chars or less)

More detailed explanation if needed. Wrap at 72 characters.

- Use bullet points for multiple changes
- Reference issue numbers: #123

Explain trade-offs, alternatives considered, and why this approach
was chosen.
```

### Examples
```
Add CRDT synchronization over iroh-gossip

Implements the protocol described in RFC 0001. Uses vector clocks
for causal ordering and merkle trees for efficient reconciliation.

- Add VectorClock type
- Implement GossipBridge for peer communication
- Add integration tests for two-peer sync
```
.serena/memories/codebase_structure.md (Normal file, 77 lines)
@@ -0,0 +1,77 @@

# Codebase Structure

## Workspace Organization
```
aspen/
├── crates/
│   ├── app/                     # Main application
│   │   ├── src/
│   │   │   ├── main.rs          # Entry point
│   │   │   ├── camera.rs        # Camera system
│   │   │   ├── cube.rs          # 3D cube demo
│   │   │   ├── debug_ui.rs      # Debug overlays
│   │   │   ├── engine_bridge.rs # Bridge to EngineCore
│   │   │   ├── input/           # Input handling
│   │   │   ├── rendering.rs     # Rendering setup
│   │   │   ├── selection.rs     # Object selection
│   │   │   ├── session.rs       # Session management
│   │   │   ├── session_ui.rs    # Session UI
│   │   │   └── setup.rs         # App initialization
│   │   └── Cargo.toml
│   │
│   ├── libmarathon/             # Core library
│   │   ├── src/
│   │   │   ├── lib.rs           # Library root
│   │   │   ├── sync.rs          # Synchronization primitives
│   │   │   ├── engine/          # Core engine logic
│   │   │   ├── networking/      # P2P networking, gossip
│   │   │   ├── persistence/     # Database and storage
│   │   │   ├── platform/        # Platform-specific code
│   │   │   │   ├── desktop/     # macOS executor
│   │   │   │   └── ios/         # iOS executor
│   │   │   └── debug_ui/        # Debug UI components
│   │   └── Cargo.toml
│   │
│   ├── sync-macros/             # Procedural macros for sync
│   │   └── src/lib.rs
│   │
│   └── xtask/                   # Build automation
│       ├── src/main.rs
│       └── README.md
│
├── scripts/
│   └── ios/                     # iOS-specific build scripts
│       ├── Info.plist           # iOS app metadata
│       ├── Entitlements.plist   # App capabilities
│       ├── deploy-simulator.sh  # Simulator deployment
│       └── build-simulator.sh   # Build for simulator
│
├── docs/
│   └── rfcs/                    # Architecture RFCs
│       ├── README.md
│       ├── 0001-crdt-gossip-sync.md
│       ├── 0002-persistence-strategy.md
│       ├── 0003-sync-abstraction.md
│       ├── 0004-session-lifecycle.md
│       ├── 0005-spatial-audio-system.md
│       └── 0006-agent-simulation-architecture.md
│
├── .github/
│   └── ISSUE_TEMPLATE/          # GitHub issue templates
│       ├── bug_report.yml
│       ├── feature.yml
│       ├── task.yml
│       ├── epic.yml
│       └── support.yml
│
├── Cargo.toml                   # Workspace configuration
├── Cargo.lock                   # Dependency lock file
└── rustfmt.toml                 # Code formatting rules
```

## Key Patterns
- **ECS Architecture**: Uses Bevy's Entity Component System
- **Platform Abstraction**: Separate executors for desktop/iOS
- **Engine-UI Separation**: `EngineCore` runs in a background thread and communicates via `EngineBridge`
- **CRDT-based Sync**: All shared state uses CRDTs for conflict-free merging
- **RFC-driven Design**: Major decisions documented in `docs/rfcs/`
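The Engine-UI separation pattern (engine core on a background thread, UI side talking to it over a bridge) can be sketched with plain `std::sync::mpsc` channels. The types and the `Ping`/`Pong` messages below are hypothetical stand-ins for illustration only; the real `EngineBridge`/`EngineHandle` integrate with Bevy and Tokio and are not shown here:

```rust
use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

// Commands flow UI -> engine; events flow engine -> UI.
enum Command { Ping }
enum Event { Pong }

// Held by the UI (Bevy) side.
struct EngineBridge {
    commands: Sender<Command>,
    events: Receiver<Event>,
}

// Held by the engine core running on a background thread.
struct EngineHandle {
    commands: Receiver<Command>,
    events: Sender<Event>,
}

fn new_bridge() -> (EngineBridge, EngineHandle) {
    let (cmd_tx, cmd_rx) = channel();
    let (evt_tx, evt_rx) = channel();
    (
        EngineBridge { commands: cmd_tx, events: evt_rx },
        EngineHandle { commands: cmd_rx, events: evt_tx },
    )
}

fn main() {
    let (bridge, handle) = new_bridge();

    // Stand-in for the engine core loop on a background thread.
    let worker = thread::spawn(move || {
        if let Ok(Command::Ping) = handle.commands.recv() {
            handle.events.send(Event::Pong).unwrap();
        }
    });

    bridge.commands.send(Command::Ping).unwrap();
    match bridge.events.recv().unwrap() {
        Event::Pong => println!("pong"),
    }
    worker.join().unwrap();
}
```

Each side owns only its half of the two channels, so the UI thread never blocks on engine work except when it explicitly waits for an event.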
.serena/memories/github_labels.md (Normal file, 59 lines)
@@ -0,0 +1,59 @@

# GitHub Labels

This file contains the standard label configuration for r3t-studios repositories.

## Labels from the marathon repository

These labels are currently defined in the marathon repository:

### Area Labels

| Name | Color | Description |
|------|-------|-------------|
| `area/core` | `#0052CC` | Foundation systems, memory management, math libraries, data structures, core utilities |
| `area/rendering` | `#0E8A16` | Graphics pipeline, Bevy rendering, shaders, materials, lighting, cameras, meshes, textures |
| `area/audio` | `#1D76DB` | Spatial audio engine, sound playback, audio mixing, music systems, 3D audio positioning |
| `area/networking` | `#5319E7` | iroh P2P, CRDT sync, gossip protocol, network replication, connection management |
| `area/platform` | `#0075CA` | iOS/macOS platform code, cross-platform abstractions, input handling, OS integration |
| `area/simulation` | `#FBCA04` | Agent systems, NPC behaviors, AI, game mechanics, interactions, simulation logic |
| `area/content` | `#C5DEF5` | Art assets, models, textures, audio files, dialogue trees, narrative content, game data |
| `area/ui-ux` | `#D4C5F9` | User interface, menus, HUD elements, input feedback, screen layouts, navigation |
| `area/tooling` | `#D93F0B` | Build systems, CI/CD pipelines, development tools, code generation, testing infrastructure |
| `area/docs` | `#0075CA` | Documentation, technical specs, RFCs, architecture decisions, API docs, tutorials, guides |
| `area/infrastructure` | `#E99695` | Deployment pipelines, hosting, cloud services, monitoring, logging, DevOps, releases |
| `area/rfc` | `#FEF2C0` | RFC proposals, design discussions, architecture planning, feature specifications |

## Labels referenced in issue templates but not yet created

The following labels are referenced in issue templates but don't exist in the repository yet:

| Name | Used In | Suggested Color | Description |
|------|---------|-----------------|-------------|
| `epic` | epic.yml | `#3E4B9E` | Large body of work spanning multiple features |

## Command to create all labels in a new repository

```bash
# Area labels
gh label create "area/core" --description "Foundation systems, memory management, math libraries, data structures, core utilities" --color "0052CC"
gh label create "area/rendering" --description "Graphics pipeline, Bevy rendering, shaders, materials, lighting, cameras, meshes, textures" --color "0E8A16"
gh label create "area/audio" --description "Spatial audio engine, sound playback, audio mixing, music systems, 3D audio positioning" --color "1D76DB"
gh label create "area/networking" --description "iroh P2P, CRDT sync, gossip protocol, network replication, connection management" --color "5319E7"
gh label create "area/platform" --description "iOS/macOS platform code, cross-platform abstractions, input handling, OS integration" --color "0075CA"
gh label create "area/simulation" --description "Agent systems, NPC behaviors, AI, game mechanics, interactions, simulation logic" --color "FBCA04"
gh label create "area/content" --description "Art assets, models, textures, audio files, dialogue trees, narrative content, game data" --color "C5DEF5"
gh label create "area/ui-ux" --description "User interface, menus, HUD elements, input feedback, screen layouts, navigation" --color "D4C5F9"
gh label create "area/tooling" --description "Build systems, CI/CD pipelines, development tools, code generation, testing infrastructure" --color "D93F0B"
gh label create "area/docs" --description "Documentation, technical specs, RFCs, architecture decisions, API docs, tutorials, guides" --color "0075CA"
gh label create "area/infrastructure" --description "Deployment pipelines, hosting, cloud services, monitoring, logging, DevOps, releases" --color "E99695"
gh label create "area/rfc" --description "RFC proposals, design discussions, architecture planning, feature specifications" --color "FEF2C0"

# Issue type labels
gh label create "epic" --description "Large body of work spanning multiple features" --color "3E4B9E"
```

## Notes

- The marathon repository has 12 labels defined, all with the `area/` prefix
- The `epic` label is referenced in the epic.yml issue template but hasn't been created yet in either marathon or aspen
- All area labels use distinct colors for easy visual identification
.serena/memories/macos_system_commands.md (Normal file, 457 lines)
@@ -0,0 +1,457 @@

# macOS (Darwin) System Commands

This document covers macOS-specific system commands and utilities that may differ from standard Unix/Linux systems.

## File System Operations

### Finding Files
```bash
# Standard Unix find (works on macOS)
find . -name "*.rs"
find . -type f -name "Cargo.toml"

# macOS Spotlight search (faster for indexed content)
mdfind -name "rustfmt.toml"
mdfind "kind:rust-source"

# Locate database (if enabled)
locate pattern
```

### Listing & Viewing
```bash
# List with details
ls -la
ls -lh    # human-readable sizes
ls -lhS   # sorted by size
ls -lht   # sorted by modification time

# View file contents
cat file.txt
head -20 file.txt
tail -50 file.txt
less file.txt    # paginated view

# Quick Look (macOS-specific)
qlmanage -p file.txt    # preview file
```

### Directory Navigation
```bash
cd /path/to/directory
cd ~           # home directory
cd -           # previous directory
pwd            # print working directory
pushd /path    # push to directory stack
popd           # pop from directory stack
```

## Text Processing

### Searching in Files
```bash
# grep (standard)
grep -r "pattern" .
grep -i "pattern" file.txt      # case-insensitive
grep -n "pattern" file.txt      # with line numbers
grep -A 5 "pattern" file.txt    # 5 lines after match
grep -B 5 "pattern" file.txt    # 5 lines before match

# ripgrep (if installed - faster and better)
rg "pattern"
rg -i "pattern"         # case-insensitive
rg -t rust "pattern"    # only Rust files
```

### Text Manipulation
```bash
# sed (stream editor) - macOS uses BSD sed
sed -i '' 's/old/new/g' file.txt    # note the '' for in-place edit
sed 's/pattern/replacement/' file.txt

# awk
awk '{print $1}' file.txt

# cut
cut -d',' -f1,3 file.csv
```

## Process Management

### Viewing Processes
```bash
# List processes
ps aux
ps aux | grep cargo

# Interactive process viewer
top
htop    # if installed (better)

# Activity Monitor (GUI)
open -a "Activity Monitor"
```

### Process Control
```bash
# Kill process
kill PID
kill -9 PID    # force kill
killall process_name

# Background/foreground
command &    # run in background
fg           # bring to foreground
bg           # continue in background
Ctrl+Z       # suspend foreground process
```

## Network

### Network Info
```bash
# IP address
ifconfig
ipconfig getifaddr en0    # specific interface

# Network connectivity
ping google.com
traceroute google.com

# DNS lookup
nslookup domain.com
dig domain.com

# Network statistics
netstat -an
lsof -i    # list open network connections
```

### Port Management
```bash
# Check what's using a port
lsof -i :8080
lsof -i tcp:3000

# Kill process using port
lsof -ti:8080 | xargs kill
```

## File Permissions

### Basic Permissions
```bash
# Change permissions
chmod +x script.sh    # make executable
chmod 644 file.txt    # rw-r--r--
chmod 755 dir/        # rwxr-xr-x

# Change ownership
chown user:group file
chown -R user:group directory/

# View permissions
ls -l
stat file.txt    # detailed info
```

### Extended Attributes (macOS-specific)
```bash
# List extended attributes
xattr -l file

# Remove quarantine attribute
xattr -d com.apple.quarantine file

# Clear all extended attributes
xattr -c file
```

## Disk & Storage

### Disk Usage
```bash
# Disk space
df -h
df -h /

# Directory size
du -sh directory/
du -h -d 1 .    # depth 1

# Sort by size
du -sh * | sort -h
```

### Disk Utility
```bash
# Verify disk
diskutil verifyVolume /
diskutil list

# Mount/unmount
diskutil mount diskName
diskutil unmount diskName
```

## Package Management

### Homebrew (common on macOS)
```bash
# Install package
brew install package-name

# Update Homebrew
brew update

# Upgrade packages
brew upgrade

# List installed
brew list

# Search packages
brew search pattern
```

### System Software Updates
```bash
# List updates
softwareupdate --list

# Install updates
softwareupdate --install --all
```

## System Information

### System Details
```bash
# macOS version
sw_vers
sw_vers -productVersion

# System profiler
system_profiler SPHardwareDataType
system_profiler SPSoftwareDataType

# Kernel info
uname -a
```

### Hardware Info
```bash
# CPU info
sysctl -n machdep.cpu.brand_string
sysctl hw

# Memory
top -l 1 | grep PhysMem

# Disk info
diskutil info /
```

## Environment & Shell

### Environment Variables
```bash
# View all
env
printenv

# Set variable
export VAR_NAME=value

# Shell config files
~/.zshrc      # Zsh (default on modern macOS)
~/.bashrc     # Bash
~/.profile    # Login shell
```

### Path Management
```bash
# View PATH
echo $PATH

# Add to PATH (in ~/.zshrc or ~/.bashrc)
export PATH="/usr/local/bin:$PATH"

# Which command
```
which cargo
|
||||
which rustc
|
||||
```
|
||||
|
||||
## Archives & Compression
|
||||
|
||||
### Tar
|
||||
```bash
|
||||
# Create archive
|
||||
tar -czf archive.tar.gz directory/
|
||||
|
||||
# Extract archive
|
||||
tar -xzf archive.tar.gz
|
||||
|
||||
# List contents
|
||||
tar -tzf archive.tar.gz
|
||||
```
|
||||
|
||||
### Zip
|
||||
```bash
|
||||
# Create zip
|
||||
zip -r archive.zip directory/
|
||||
|
||||
# Extract zip
|
||||
unzip archive.zip
|
||||
|
||||
# List contents
|
||||
unzip -l archive.zip
|
||||
```
|
||||
|
||||
## Clipboard (macOS-specific)
|
||||
|
||||
```bash
|
||||
# Copy to clipboard
|
||||
echo "text" | pbcopy
|
||||
cat file.txt | pbcopy
|
||||
|
||||
# Paste from clipboard
|
||||
pbpaste
|
||||
pbpaste > file.txt
|
||||
```
|
||||
|
||||
## Notifications (macOS-specific)
|
||||
|
||||
```bash
|
||||
# Display notification
|
||||
osascript -e 'display notification "Message" with title "Title"'
|
||||
|
||||
# Alert dialog
|
||||
osascript -e 'display dialog "Message" with title "Title"'
|
||||
```
|
||||
|
||||
## Xcode & iOS Development
|
||||
|
||||
### Xcode Command Line Tools
|
||||
```bash
|
||||
# Install command line tools
|
||||
xcode-select --install
|
||||
|
||||
# Show active developer directory
|
||||
xcode-select -p
|
||||
|
||||
# Switch Xcode version
|
||||
sudo xcode-select -s /Applications/Xcode.app/Contents/Developer
|
||||
```
|
||||
|
||||
### iOS Simulator
|
||||
```bash
|
||||
# List simulators
|
||||
xcrun simctl list devices
|
||||
|
||||
# Boot simulator
|
||||
xcrun simctl boot "iPad Pro 12.9-inch M2"
|
||||
|
||||
# Open Simulator app
|
||||
open -a Simulator
|
||||
|
||||
# Install app
|
||||
xcrun simctl install <device-uuid> path/to/app.app
|
||||
|
||||
# Launch app
|
||||
xcrun simctl launch <device-uuid> bundle.id
|
||||
|
||||
# View logs
|
||||
xcrun simctl spawn <device-uuid> log stream
|
||||
```
|
||||
|
||||
### Physical Device
|
||||
```bash
|
||||
# List connected devices
|
||||
xcrun devicectl list devices
|
||||
|
||||
# Install app
|
||||
xcrun devicectl device install app --device <device-id> path/to/app.app
|
||||
|
||||
# Launch app
|
||||
xcrun devicectl device process launch --device <device-id> bundle.id
|
||||
|
||||
# View logs
|
||||
xcrun devicectl device stream log --device <device-id>
|
||||
```
|
||||
|
||||
### Code Signing
|
||||
```bash
|
||||
# List signing identities
|
||||
security find-identity -v -p codesigning
|
||||
|
||||
# Sign application
|
||||
codesign -s "Developer ID" path/to/app.app
|
||||
|
||||
# Verify signature
|
||||
codesign -vv path/to/app.app
|
||||
```
|
||||
|
||||
## macOS-Specific Differences from Linux
|
||||
|
||||
### Key Differences
|
||||
1. **sed**: Requires empty string for in-place edit: `sed -i '' ...`
|
||||
2. **find**: Uses BSD find (slightly different options)
|
||||
3. **date**: Different format options than GNU date
|
||||
4. **readlink**: Use `greadlink` (if coreutils installed) for `-f` flag
|
||||
5. **stat**: Different output format than GNU stat
|
||||
6. **grep**: BSD grep (consider installing `ggrep` for GNU grep)
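The sed difference above is the one that bites most often. One portable workaround (a sketch; the file path is just for demonstration) is to always pass a real backup suffix to `-i`, which both BSD and GNU sed accept, rather than relying on the BSD-only empty string:

```shell
# BSD sed (macOS) requires an argument after -i; GNU sed treats it as optional.
# Passing a real backup suffix works identically on both.
printf 'hello world\n' > /tmp/sed_demo.txt
sed -i.bak 's/world/macos/' /tmp/sed_demo.txt
cat /tmp/sed_demo.txt    # -> hello macos
```

The `.bak` file left behind can be deleted afterwards; the point is that the same command line runs unmodified on both platforms.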
### GNU Tools via Homebrew
```bash
# Install GNU coreutils
brew install coreutils

# Then use the GNU versions with a 'g' prefix:
# gls, gcp, gmv, grm, greadlink, gdate, etc.
```
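When a script needs GNU behavior but must also run on a stock macOS install, one common pattern (a sketch, not project code) is to prefer the g-prefixed tool when it exists and fall back to the system one otherwise:

```shell
# Prefer GNU date from Homebrew coreutils when available,
# otherwise fall back to the system (BSD) date.
if command -v gdate >/dev/null 2>&1; then
  DATE=gdate
else
  DATE=date
fi
"$DATE" +%Y-%m-%d
```

The same pattern works for `greadlink`, `gstat`, and the other coreutils listed above.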
## Useful macOS Shortcuts

### Terminal Shortcuts
- `Cmd+K` - Clear terminal
- `Cmd+T` - New tab
- `Cmd+N` - New window
- `Cmd+W` - Close tab
- `Cmd+,` - Preferences

### Command Line Shortcuts
- `Ctrl+A` - Beginning of line
- `Ctrl+E` - End of line
- `Ctrl+U` - Delete to beginning
- `Ctrl+K` - Delete to end
- `Ctrl+R` - Search history
- `Ctrl+C` - Cancel command
- `Ctrl+D` - Exit shell
- `Ctrl+Z` - Suspend process

## Quick Reference

### Most Common for Aspen Development
```bash
# Find Rust files
find . -name "*.rs"

# Search in Rust files
grep -r "pattern" crates/

# Check what's using a port
lsof -i :8080

# View disk space
df -h

# View process list
ps aux | grep cargo

# View logs
log stream --predicate 'process == "app"'

# Xcode simulators
xcrun simctl list devices available
```
27
.serena/memories/project_overview.md
Normal file
@@ -0,0 +1,27 @@
# Project Overview: Aspen

## Purpose
Aspen (formerly known as Lonni) is a **cross-platform real-time collaborative application** built for macOS and iPad. It demonstrates real-time CRDT (Conflict-free Replicated Data Type) synchronization with Apple Pencil input support.

## Key Features
- Real-time collaborative drawing/interaction with Apple Pencil support
- P2P synchronization using CRDTs over the iroh-gossip protocol
- Cross-platform: macOS desktop and iOS/iPadOS
- 3D rendering using the Bevy game engine
- Persistent local storage with SQLite
- Session management for multi-user collaboration

## Target Platforms
- **macOS** (desktop application)
- **iOS/iPadOS** (with Apple Pencil support)
- Uses separate executors for each platform

## Architecture
The application uses a **workspace structure** with multiple crates:
- `app` - Main application entry point and UI
- `libmarathon` - Core library with engine, networking, and persistence
- `sync-macros` - Procedural macros for synchronization
- `xtask` - Build automation tasks

## Development Status
Active development with RFCs for major design decisions. See `docs/rfcs/` for architectural documentation.
11
.serena/memories/serialization-policy.md
Normal file
@@ -0,0 +1,11 @@
# Serialization Policy

**Never use serde for serialization in this project.**

We use `rkyv` exclusively for all serialization needs:
- Network messages
- Component synchronization
- Persistence
- Any data serialization

If a type from a dependency (like Bevy) doesn't support rkyv, we vendor it and add the rkyv derives ourselves.
237
.serena/memories/suggested_commands.md
Normal file
@@ -0,0 +1,237 @@
# Suggested Commands for Aspen Development

## Build & Run Commands

### iOS Simulator (Primary Development Target)
```bash
# Build, deploy, and run on iOS Simulator (most common)
cargo xtask ios-run

# Build only (release mode)
cargo xtask ios-build

# Build in debug mode
cargo xtask ios-build --debug

# Deploy to specific device
cargo xtask ios-deploy --device "iPad Air (5th generation)"

# Run with debug mode and custom device
cargo xtask ios-run --debug --device "iPhone 15 Pro"

# Build and deploy to physical iPad
cargo xtask ios-device
```

### Desktop (macOS)
```bash
# Run on macOS desktop
cargo run --package app --features desktop

# Run in release mode
cargo run --package app --features desktop --release
```

## Testing
```bash
# Run all tests
cargo test

# Run tests for specific package
cargo test --package libmarathon
cargo test --package app

# Run integration tests
cargo test --test sync_integration

# Run a specific test
cargo test test_sync_between_two_nodes

# Run tests with logging output
RUST_LOG=debug cargo test -- --nocapture
```

## Code Quality

### Formatting
```bash
# Format all code (uses rustfmt.toml configuration)
cargo fmt

# Check formatting without modifying files
cargo fmt -- --check
```

### Linting
```bash
# Run clippy for all crates
cargo clippy --all-targets --all-features

# Run clippy with fixes
cargo clippy --fix --allow-dirty --allow-staged

# Strict clippy checks
cargo clippy -- -D warnings
```

### Building
```bash
# Build all crates
cargo build

# Build in release mode
cargo build --release

# Build specific package
cargo build --package libmarathon

# Build for iOS targets
cargo build --target aarch64-apple-ios --release
cargo build --target aarch64-apple-ios-sim --release
```

## Cleaning
```bash
# Clean build artifacts
cargo clean

# Clean specific package
cargo clean --package xtask

# Clean and rebuild
cargo clean && cargo build
```

## Benchmarking
```bash
# Run benchmarks
cargo bench

# Run specific benchmark
cargo bench --bench write_buffer
cargo bench --bench vector_clock
```

## Documentation
```bash
# Generate and open documentation
cargo doc --open

# Also document private items
cargo doc --open --document-private-items
```

## Dependency Management
```bash
# Update dependencies
cargo update

# Check for outdated dependencies (requires cargo-outdated)
cargo outdated

# Show dependency tree
cargo tree

# Check specific dependency
cargo tree -p iroh
```

## iOS-Specific Commands

### Simulator Management
```bash
# List available simulators
xcrun simctl list devices available

# Boot a specific simulator
xcrun simctl boot "iPad Pro 12.9-inch M2"

# Open Simulator app
open -a Simulator

# View simulator logs
xcrun simctl spawn <device-uuid> log stream --predicate 'processImagePath contains "Aspen"'
```

### Device Management
```bash
# List connected devices
xcrun devicectl list devices

# View device logs
xcrun devicectl device stream log --device <device-id> --predicate 'process == "app"'
```

## Git Commands (macOS-specific notes)
```bash
# Standard git commands work on macOS
git status
git add .
git commit -m "message"
git push

# View recent commits
git log --oneline -10

# Check current branch
git branch
```

## System Commands (macOS)
```bash
# Find files (macOS has both find and mdfind)
find . -name "*.rs"
mdfind -name "rustfmt.toml"

# Search in files
grep -r "pattern" crates/
rg "pattern" crates/          # if ripgrep is installed

# List files
ls -la
ls -lh                        # human-readable sizes

# Navigate
cd crates/app
pwd

# View file contents
cat Cargo.toml
head -20 src/main.rs
tail -50 Cargo.lock
```

## Common Workflows

### After Making Changes
```bash
# 1. Format code
cargo fmt

# 2. Run clippy
cargo clippy --all-targets

# 3. Run tests
cargo test

# 4. Test on simulator
cargo xtask ios-run
```

### Adding a New Feature
```bash
# 1. Create RFC if it's a major change
# edit docs/rfcs/NNNN-feature-name.md

# 2. Implement
# edit crates/.../src/...

# 3. Add tests
# edit crates/.../tests/...

# 4. Update documentation
cargo doc --open

# 5. Run full validation
cargo fmt && cargo clippy && cargo test && cargo xtask ios-run
```
211
.serena/memories/task_completion_checklist.md
Normal file
@@ -0,0 +1,211 @@
# Task Completion Checklist

When completing a task in Aspen, follow these steps to ensure code quality and consistency.

## Pre-Commit Checklist

### 1. Code Formatting
```bash
cargo fmt
```
- Formats all code according to `rustfmt.toml`
- Must pass before committing
- Check with: `cargo fmt -- --check`

### 2. Linting
```bash
cargo clippy --all-targets --all-features
```
- Checks for common mistakes and anti-patterns
- Address all warnings
- For strict mode: `cargo clippy -- -D warnings`

### 3. Type Checking & Compilation
```bash
cargo check
cargo build
```
- Ensure code compiles without errors
- Check both debug and release if performance-critical:
  ```bash
  cargo build --release
  ```

### 4. Testing
```bash
# Run all tests
cargo test

# Run with output
cargo test -- --nocapture

# Run integration tests
cargo test --test sync_integration
```
- All existing tests must pass
- Add new tests for new functionality
- Integration tests for cross-component features

### 5. Platform-Specific Testing

#### iOS Simulator
```bash
cargo xtask ios-run
```
- Test on iOS Simulator (default: iPad Pro 12.9-inch M2)
- Verify Apple Pencil interactions if applicable
- Check logging output for errors

#### Physical Device (if iOS changes)
```bash
cargo xtask ios-device
```
- Test on an actual iPad if Apple Pencil features are involved
- Verify Developer Mode is enabled

#### macOS Desktop
```bash
cargo run --package app --features desktop
```
- Test desktop functionality
- Verify window handling and input

### 6. Documentation
```bash
cargo doc --open
```
- Add doc comments to public APIs
- Update module-level documentation if structure changed
- Verify generated docs render correctly
- Update RFCs if architectural changes were made

## Specific Checks by Change Type

### For New Features
- [ ] Write RFC if architectural change (see `docs/rfcs/README.md`)
- [ ] Add public API documentation
- [ ] Add examples in doc comments
- [ ] Write integration tests
- [ ] Test on both macOS and iOS if cross-platform
- [ ] Update relevant memory files if workflow changes

### For Bug Fixes
- [ ] Add regression test
- [ ] Document the bug in the commit message
- [ ] Verify fix on affected platform(s)
- [ ] Check for similar bugs in related code

### For Performance Changes
- [ ] Run benchmarks before and after:
  ```bash
  cargo bench
  ```
- [ ] Document performance impact in the commit message
- [ ] Test on debug and release builds

### For Refactoring
- [ ] Ensure all tests still pass
- [ ] Verify no behavioral changes
- [ ] Update related documentation
- [ ] Check that clippy warnings didn't increase

### For Dependency Updates
- [ ] Update `Cargo.toml` (workspace or specific crate)
- [ ] Run `cargo update`
- [ ] Check for breaking changes in changelog
- [ ] Re-run full test suite
- [ ] Test on both platforms

## Before Pushing

### Final Validation
```bash
# One-liner for a comprehensive check
cargo fmt && cargo clippy --all-targets && cargo test && cargo xtask ios-run
```

### Git Checks
- [ ] Review `git diff` for unintended changes
- [ ] Ensure sensitive data isn't included
- [ ] Write a clear commit message (see code_style_conventions.md)
- [ ] Verify correct branch

### Issue Tracking
- [ ] Update issue status (use GitHub issue templates)
- [ ] Link commits to issues in the commit message
- [ ] Update project board if using one

## Platform-Specific Considerations

### iOS Changes
- [ ] Test on iOS Simulator
- [ ] Verify Info.plist changes if app metadata changed
- [ ] Check Entitlements.plist if permissions changed
- [ ] Test with Apple Pencil if input handling changed
- [ ] Verify app signing (bundle ID: `G872CZV7WG.aspen`)

### Networking Changes
- [ ] Test P2P connectivity on local network
- [ ] Verify gossip propagation with multiple peers
- [ ] Check CRDT merge behavior with concurrent edits
- [ ] Test with network interruptions

### Persistence Changes
- [ ] Test database migrations if schema changed
- [ ] Verify data integrity across app restarts
- [ ] Check SQLite WAL mode behavior
- [ ] Test with large datasets

### UI Changes
- [ ] Test with debug UI enabled
- [ ] Verify on different screen sizes (iPad, desktop)
- [ ] Check touch and mouse input paths
- [ ] Test accessibility if UI changed

## Common Issues to Watch For

### Compilation
- Missing feature flags for conditional compilation
- Platform-specific code not properly gated with `#[cfg(...)]`
- Incorrect use of async/await in synchronous contexts

### Runtime
- Panics in production code (should return `Result` instead)
- Deadlocks with locks (use `parking_lot` correctly)
- Memory leaks from Arc/Rc cycles
- Thread spawning without proper cleanup

### iOS-Specific
- Using `println!` instead of `tracing` (output doesn't appear on iOS)
- Missing `tracing-oslog` initialization
- Incorrect bundle ID or entitlements
- Not testing on an actual device for Pencil features

## When Task is Complete

1. **Run final validation**:
   ```bash
   cargo fmt && cargo clippy && cargo test && cargo xtask ios-run
   ```

2. **Commit with a good message**:
   ```bash
   git add .
   git commit -m "Clear, descriptive message"
   ```

3. **Push to remote**:
   ```bash
   git push origin <branch-name>
   ```

4. **Create a pull request** (if working in a feature branch):
   - Reference related issues
   - Describe changes and rationale
   - Note any breaking changes
   - Request review if needed

5. **Update documentation**:
   - Update RFCs if architectural change
   - Update memory files if workflow changed
   - Update README if user-facing change
46
.serena/memories/tech_stack.md
Normal file
@@ -0,0 +1,46 @@
# Tech Stack

## Language
- **Rust** (Edition 2021)
- Some Swift bridging code for iOS-specific features (Apple Pencil)

## Key Dependencies

### Networking & Synchronization
- **iroh** (v0.95) - P2P networking and NAT traversal
- **iroh-gossip** (v0.95) - Gossip protocol for message propagation
- **crdts** (v7.3) - Conflict-free Replicated Data Types

### Graphics & UI
- **Bevy** (v0.17) - Game engine for rendering and ECS architecture
- **egui** (v0.33) - Immediate mode GUI
- **wgpu** - Low-level GPU API
- **winit** (v0.30) - Window handling

### Storage & Persistence
- **rusqlite** (v0.37) - SQLite database bindings
- **serde** / **serde_json** - Serialization
- **bincode** - Binary serialization

### Async Runtime
- **tokio** (v1) - Async runtime with full features
- **futures-lite** (v2.0) - Lightweight futures utilities

### Utilities
- **anyhow** / **thiserror** - Error handling
- **tracing** / **tracing-subscriber** - Structured logging
- **uuid** - Unique identifiers
- **chrono** - Date/time handling
- **rand** (v0.8) - Random number generation
- **crossbeam-channel** - Multi-producer multi-consumer channels

### iOS-Specific
- **objc** (v0.2) - Objective-C runtime bindings
- **tracing-oslog** (v0.3) - iOS unified logging integration
- **raw-window-handle** (v0.6) - Platform window abstractions

### Development Tools
- **clap** - CLI argument parsing (in xtask)
- **criterion** - Benchmarking
- **proptest** - Property-based testing
- **tempfile** - Temporary file handling in tests
84
.serena/project.yml
Normal file
@@ -0,0 +1,84 @@
# list of languages for which language servers are started; choose from:
#  al bash clojure cpp csharp csharp_omnisharp
#  dart elixir elm erlang fortran go
#  haskell java julia kotlin lua markdown
#  nix perl php python python_jedi r
#  rego ruby ruby_solargraph rust scala swift
#  terraform typescript typescript_vts yaml zig
# Note:
#  - For C, use cpp
#  - For JavaScript, use typescript
# Special requirements:
#  - csharp: Requires the presence of a .sln file in the project folder.
# When using multiple languages, the first language server that supports a given file will be used for that file.
# The first language is the default language and the respective language server will be used as a fallback.
# Note that when using the JetBrains backend, language servers are not used and this list is correspondingly ignored.
languages:
  - rust

# the encoding used by text files in the project
# For a list of possible encodings, see https://docs.python.org/3.11/library/codecs.html#standard-encodings
encoding: "utf-8"

# whether to use the project's gitignore file to ignore files
# Added on 2025-04-07
ignore_all_files_in_gitignore: true

# list of additional paths to ignore
# same syntax as gitignore, so you can use * and **
# Was previously called `ignored_dirs`, please update your config if you are using that.
# Added (renamed) on 2025-04-07
ignored_paths: []

# whether the project is in read-only mode
# If set to true, all editing tools will be disabled and attempts to use them will result in an error
# Added on 2025-04-18
read_only: false

# list of tool names to exclude. We recommend not excluding any tools, see the readme for more details.
# Below is the complete list of tools for convenience.
# To make sure you have the latest list of tools, and to view their descriptions,
# execute `uv run scripts/print_tool_overview.py`.
#
#  * `activate_project`: Activates a project by name.
#  * `check_onboarding_performed`: Checks whether project onboarding was already performed.
#  * `create_text_file`: Creates/overwrites a file in the project directory.
#  * `delete_lines`: Deletes a range of lines within a file.
#  * `delete_memory`: Deletes a memory from Serena's project-specific memory store.
#  * `execute_shell_command`: Executes a shell command.
#  * `find_referencing_code_snippets`: Finds code snippets in which the symbol at the given location is referenced.
#  * `find_referencing_symbols`: Finds symbols that reference the symbol at the given location (optionally filtered by type).
#  * `find_symbol`: Performs a global (or local) search for symbols with/containing a given name/substring (optionally filtered by type).
#  * `get_current_config`: Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
#  * `get_symbols_overview`: Gets an overview of the top-level symbols defined in a given file.
#  * `initial_instructions`: Gets the initial instructions for the current project.
#    Should only be used in settings where the system prompt cannot be set,
#    e.g. in clients you have no control over, like Claude Desktop.
#  * `insert_after_symbol`: Inserts content after the end of the definition of a given symbol.
#  * `insert_at_line`: Inserts content at a given line in a file.
#  * `insert_before_symbol`: Inserts content before the beginning of the definition of a given symbol.
#  * `list_dir`: Lists files and directories in the given directory (optionally with recursion).
#  * `list_memories`: Lists memories in Serena's project-specific memory store.
#  * `onboarding`: Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
#  * `prepare_for_new_conversation`: Provides instructions for preparing for a new conversation (in order to continue with the necessary context).
#  * `read_file`: Reads a file within the project directory.
#  * `read_memory`: Reads the memory with the given name from Serena's project-specific memory store.
#  * `remove_project`: Removes a project from the Serena configuration.
#  * `replace_lines`: Replaces a range of lines within a file with new content.
#  * `replace_symbol_body`: Replaces the full definition of a symbol.
#  * `restart_language_server`: Restarts the language server, may be necessary when edits not through Serena happen.
#  * `search_for_pattern`: Performs a search for a pattern in the project.
#  * `summarize_changes`: Provides instructions for summarizing the changes made to the codebase.
#  * `switch_modes`: Activates modes by providing a list of their names
#  * `think_about_collected_information`: Thinking tool for pondering the completeness of collected information.
#  * `think_about_task_adherence`: Thinking tool for determining whether the agent is still on track with the current task.
#  * `think_about_whether_you_are_done`: Thinking tool for determining whether the task is truly completed.
#  * `write_memory`: Writes a named memory (for future reference) to Serena's project-specific memory store.
excluded_tools: []

# initial prompt for the project. It will always be given to the LLM upon activating the project
# (contrary to the memories, which are loaded on demand).
initial_prompt: ""

project_name: "aspen"
included_optional_tools: []
136
AI_POLICY.md
Normal file
@@ -0,0 +1,136 @@
|
||||
# AI and Machine Learning Usage Policy
|
||||
|
||||
## Core Principle: Human Accountability
|
||||
|
||||
Every contribution to Marathon must have a human who:
|
||||
- **Made the decisions** about what to build and how to build it
|
||||
- **Understands the code, design, or content** they're submitting
|
||||
- **Takes responsibility** for the outcome and any issues that arise
|
||||
- **Can be held accountable** for the contribution
|
||||
|
||||
AI and ML tools are welcome as assistants, but they cannot:
|
||||
- Make architectural or design decisions
|
||||
- Choose between technical trade-offs
|
||||
- Take responsibility for bugs or issues
|
||||
- Be credited as contributors
|
||||
|
||||
## Context: Pragmatism at a Small Scale
|
||||
|
||||
We're a tiny studio with limited resources. We can't afford large teams, professional translators, or extensive QA departments. **Machine learning tools help us punch above our weight class** - they let us move faster, support more languages, and catch bugs we'd otherwise miss.
|
||||
|
||||
We use these tools not to replace human judgment, but to stretch our small team's capacity. This is about working **smart with what we have**, not taking shortcuts that compromise quality or accountability.
|
||||
|
||||
We're using ethical and responsible machine learning as much as possible while ensuring that we are not erasing human contributions while we are resource-constrained.
|
||||
|
||||
## The Blurry Line
|
||||
|
||||
**Here's the honest truth:** The line between "generative AI" and "assistive AI" is fuzzy and constantly shifting. Is IDE autocomplete assistive? What about when it suggests entire functions? What about pair-programming with an LLM?
|
||||
|
||||
**We don't have perfect answers.** What we do have is a principle: **a human must make the decisions and be accountable.**
|
||||
|
||||
If you're unsure whether your use of AI crosses a line, ask yourself:
|
||||
- **"Do I understand what this code does and why?"**
|
||||
- **"Did I decide this was the right approach, or did the AI?"**
|
||||
- **"Can I maintain and debug this?"**
|
||||
- **"Am I comfortable being accountable for this?"**
|
||||
|
||||
If you answer "yes" to those questions, you're probably fine. If you're still uncertain, open a discussion - we'd rather have the conversation than enforce rigid rules that don't match reality.
|
||||
|
||||
## What This Looks Like in Practice

### Acceptable Use

**"I used Claude/Copilot to help write this function, I reviewed it, I understand it, and I'm responsible for it."**
- You directed the tool
- You reviewed and understood the output
- You made the decision to use this approach
- You take responsibility for the result

**"I directed an LLM to implement my design, then verified it meets requirements."**
- You designed the solution
- You used AI to speed up implementation
- You verified correctness
- You own the outcome

**"I used machine translation as a starting point, then reviewed and corrected the output."**
- You acknowledge the limitations of automated translation
- You applied human judgment to the result
- You ensure accuracy and appropriateness

### Not Acceptable

**"Claude wrote this, I pasted it in, seems fine."**
- No understanding of the code
- No verification of correctness
- Cannot maintain or debug
- Cannot explain design decisions

**"I asked an LLM what architecture to use and implemented its suggestion."**
- The AI made the architectural decision
- No human judgment about trade-offs
- No accountability for the choice

**"I'm submitting this AI-generated documentation without reviewing it."**
- No verification of accuracy
- No human oversight
- Cannot vouch for quality

## Why This Matters

Marathon itself was largely written with AI assistance under human direction. **That's fine!** What matters is:

1. **A human made every architectural decision**
2. **A human is accountable for every line of code**
3. **A human can explain why things work the way they do**
4. **Humans take credit AND responsibility**

Think of AI like a compiler, a library, or a really capable intern - it's a tool that amplifies human capability, but **the human is always the one making decisions and being accountable**.

## For Contributors

We don't care what tools you use to be productive. We care that:
- **You made the decisions** (not the AI)
- **You understand what you're submitting**
- **You're accountable** for the contribution
- **You can maintain it** if issues arise

Use whatever tools help you work effectively, but you must be able to answer "why did you make this choice?" with human reasoning, not "the AI suggested it."

### When Contributing

You don't need to disclose every time you use autocomplete or ask an LLM a question. We trust you to:
- Use tools responsibly
- Understand your contributions
- Take ownership of your work

If you're doing something novel or pushing boundaries with AI assistance, mentioning it in your PR is welcome - it helps us all learn and navigate this space together.

## What We Use

For transparency, here's where Marathon currently uses machine learning:

- **Development assistance** - IDE tools, code completion, pair programming with LLMs
- **Translation tooling** - Machine translation for internationalization (human-reviewed)
- **Performance analysis** - Automated profiling and optimization suggestions
- **Code review assistance** - Static analysis and potential bug detection
- **Documentation help** - Grammar checking, clarity improvements, translation

In all cases, humans review, approve, and take responsibility for the output.

## The Bottom Line

**Machines can't be held accountable, so humans must make all decisions.**

Use AI tools to help you work faster and smarter, but you must understand and be accountable for what you contribute. When in doubt, ask yourself:

**"Can a machine be blamed if this breaks?"**

If yes, you've crossed the line.

## Questions or Concerns?

This policy will evolve as we learn more about working effectively with AI tools. If you have questions, concerns, or suggestions, please open a discussion. We're figuring this out together.

---

*This policy reflects our values as of February 2026. As technology and our understanding evolve, so will this document.*

ARCHITECTURE.md (new file, 359 lines)

# Marathon Architecture

This document provides a high-level overview of Marathon's architecture to help contributors understand the system's design and organization.

## Table of Contents

- [Overview](#overview)
- [Core Principles](#core-principles)
- [System Architecture](#system-architecture)
- [Crate Organization](#crate-organization)
- [Key Components](#key-components)
- [Data Flow](#data-flow)
- [Technology Decisions](#technology-decisions)
- [Design Constraints](#design-constraints)

## Overview

Marathon is a **peer-to-peer game engine development kit** built on conflict-free replicated data types (CRDTs). It enables developers to build multiplayer games where players can interact with shared game state in real-time, even across network partitions, with automatic reconciliation.

**Key Characteristics:**
- **Decentralized** - No central game server required; all players are equal peers
- **Offline-first** - Gameplay continues during network partitions
- **Eventually consistent** - All players converge to the same game state
- **Real-time** - Player actions propagate with minimal latency
- **Persistent** - Game state survives application restarts

## Core Principles

1. **CRDTs for Consistency** - Use mathematically proven data structures that guarantee eventual consistency for multiplayer game state
2. **Bevy ECS First** - Build on Bevy's Entity Component System for game development flexibility
3. **Zero Trust Networking** - Assume peers may be malicious (future work for competitive games)
4. **Separation of Concerns** - Clear boundaries between networking, persistence, and game logic
5. **Performance Matters** - Optimize for low latency and high throughput suitable for real-time games

## System Architecture

```mermaid
graph TB
    subgraph App["Game Layer"]
        Demo[Demo Game / Your Game]
        Actions[Game Actions]
        Selection[Entity Selection]
        Input[Input Handling]
        Render[Rendering]
    end

    subgraph Core["libmarathon Core"]
        Net[Networking<br/>• CRDT Sync<br/>• Gossip<br/>• Sessions<br/>• Op Apply]
        Engine[Engine Core<br/>• Event Loop<br/>• Commands<br/>• Discovery<br/>• Bridge]
        Persist[Persistence<br/>• SQLite<br/>• Type Registry<br/>• Migrations<br/>• Metrics]
    end

    subgraph Foundation["Foundation Layer"]
        Bevy[Bevy ECS<br/>• Entities<br/>• Components<br/>• Systems]
        Iroh[iroh P2P<br/>• QUIC<br/>• Gossip<br/>• Discovery]
    end

    Demo --> Actions
    Demo --> Selection
    Demo --> Input
    Demo --> Render

    Actions --> Engine
    Selection --> Engine
    Input --> Engine
    Render --> Engine

    Engine --> Net
    Engine --> Persist
    Net --> Persist

    Net --> Iroh
    Engine --> Bevy
    Persist --> Bevy
```

## Crate Organization

Marathon is organized as a Rust workspace with four crates:

### `libmarathon` (Core Library)

**Purpose**: The heart of Marathon, providing networking, persistence, and CRDT synchronization.

**Key Modules:**
```
libmarathon/
├── networking/              # P2P networking and CRDT sync
│   ├── crdt/                # CRDT implementations (OR-Set, RGA, LWW)
│   ├── operations/          # Network operations and vector clocks
│   ├── gossip/              # Gossip protocol bridge to iroh
│   ├── session/             # Session management
│   └── entity_map/          # UUID ↔ Entity mapping
│
├── persistence/             # SQLite-backed state persistence
│   ├── database/            # SQLite connection and WAL
│   ├── registry/            # Type registry for reflection
│   └── health/              # Health checks and metrics
│
├── engine/                  # Core engine logic
│   ├── networking_manager/  # Network event loop
│   ├── commands/            # Bevy commands
│   └── game_actions/        # User action handling
│
├── debug_ui/                # egui debug interface
├── render/                  # Vendored Bevy render pipeline
├── transform/               # Vendored transform with rkyv
└── platform/                # Platform-specific code (iOS/desktop)
```

### `app` (Demo Game)

**Purpose**: Demonstrates Marathon capabilities with a simple multiplayer cube game.

**Key Files:**
- `main.rs` - Entry point with CLI argument handling
- `engine_bridge.rs` - Connects the Bevy game to the Marathon engine
- `cube.rs` - Demo game entity implementation
- `session.rs` - Multiplayer session lifecycle management
- `input/` - Input handling (keyboard, touch, Apple Pencil)
- `rendering/` - Rendering setup and camera

### `macros` (Procedural Macros)

**Purpose**: Code generation for serialization and deserialization.

Built on Bevy's macro infrastructure for consistency with the ecosystem.

### `xtask` (Build Automation)

**Purpose**: Automate iOS build and deployment using the cargo-xtask pattern.

**Commands:**
- `ios-build` - Build for iOS simulator/device
- `ios-deploy` - Deploy to connected device
- `ios-run` - Build and run on simulator

## Key Components

### 1. CRDT Synchronization Layer

**Location**: `libmarathon/src/networking/`

**Purpose**: Implements the CRDT-based synchronization protocol.

**Key Concepts:**
- **Operations** - Immutable change events (Create, Update, Delete)
- **Vector Clocks** - Track causality across peers
- **OR-Sets** - Observed-Remove Sets for entity membership
- **RGA** - Replicated Growable Array for ordered sequences
- **LWW** - Last-Write-Wins for simple values

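To make the causality tracking concrete, here is a minimal vector clock sketch in plain Rust. It is illustrative only - the type and method names are assumptions for this document, not Marathon's actual `operations` API:

```rust
use std::collections::HashMap;

/// A minimal vector clock keyed by peer ID (illustrative sketch).
#[derive(Debug, Default, Clone, PartialEq)]
struct VectorClock {
    counters: HashMap<String, u64>,
}

impl VectorClock {
    /// Record a local event for `peer`.
    fn tick(&mut self, peer: &str) {
        *self.counters.entry(peer.to_string()).or_insert(0) += 1;
    }

    /// Merge a remote clock by taking the per-peer maximum.
    fn merge(&mut self, other: &VectorClock) {
        for (peer, &count) in &other.counters {
            let entry = self.counters.entry(peer.clone()).or_insert(0);
            *entry = (*entry).max(count);
        }
    }

    /// True when every counter in `self` is <= the matching counter in
    /// `other`, i.e. `self` is in `other`'s causal past (or equal to it).
    fn happened_before_or_equal(&self, other: &VectorClock) -> bool {
        self.counters
            .iter()
            .all(|(peer, &count)| count <= other.counters.get(peer).copied().unwrap_or(0))
    }
}

fn main() {
    let mut a = VectorClock::default();
    let mut b = VectorClock::default();

    a.tick("peer-a"); // A generates an op
    b.merge(&a);      // B receives it
    b.tick("peer-b"); // B generates its own op

    assert!(a.happened_before_or_equal(&b));  // A's state is in B's causal past
    assert!(!b.happened_before_or_equal(&a)); // the reverse does not hold
}
```

A received operation whose clock is neither before nor after the local clock is concurrent, which is exactly the case the CRDT merge rules below exist to resolve.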
**Protocol Flow:**

```mermaid
sequenceDiagram
    participant A as Peer A
    participant G as Gossip Network
    participant B as Peer B

    A->>A: Generate Op<br/>(with vector clock)
    A->>G: Broadcast Op
    G->>B: Deliver Op
    B->>B: Apply Op<br/>(update vector clock)
    B->>G: ACK
    G->>A: ACK
```

See [RFC 0001](docs/rfcs/0001-crdt-gossip-sync.md) for detailed protocol specification.

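The Last-Write-Wins rule mentioned above can be sketched as a small register that breaks timestamp ties by peer ID, so every replica converges to the same value regardless of delivery order. The names and shape here are hypothetical, not Marathon's actual CRDT types:

```rust
/// Minimal Last-Write-Wins register sketch: the (timestamp, peer) pair
/// totally orders writes, so merges are deterministic on every replica.
#[derive(Debug, Clone)]
struct LwwRegister<T> {
    value: T,
    timestamp: u64,
    peer: String, // tie-breaker when timestamps collide
}

impl<T> LwwRegister<T> {
    fn new(value: T, timestamp: u64, peer: &str) -> Self {
        Self { value, timestamp, peer: peer.to_string() }
    }

    /// Keep whichever write is newer; equal timestamps fall back to peer ID.
    fn merge(&mut self, other: Self) {
        if (other.timestamp, &other.peer) > (self.timestamp, &self.peer) {
            *self = other;
        }
    }
}

fn main() {
    let mut a = LwwRegister::new("red", 10, "peer-a");
    a.merge(LwwRegister::new("blue", 12, "peer-b"));
    assert_eq!(a.value, "blue"); // the newer write wins

    a.merge(LwwRegister::new("green", 5, "peer-c"));
    assert_eq!(a.value, "blue"); // a stale write cannot overwrite newer state
}
```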
### 2. Persistence Layer

**Location**: `libmarathon/src/persistence/`

**Purpose**: Persist game state to SQLite with minimal overhead.

**Architecture**: Three-tier system

```mermaid
graph TD
    A[In-Memory State<br/>Bevy ECS - Dirty Tracking] -->|Batch writes<br/>every N frames| B[Write Buffer<br/>Async Batching]
    B -->|Flush to disk| C[SQLite Database<br/>WAL Mode]

    style A fill:#e1f5ff
    style B fill:#fff4e1
    style C fill:#e8f5e9
```

**Key Features:**
- **Automatic persistence** - Components marked with `Persisted` save automatically
- **Type registry** - Reflection-based serialization
- **WAL mode** - Write-Ahead Logging for crash safety
- **Migrations** - Schema versioning support

See [RFC 0002](docs/rfcs/0002-persistence-strategy.md) for detailed design.

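The dirty-tracking and batching idea behind the first two tiers can be sketched like this. The `WriteBuffer` name and frame-based API are hypothetical stand-ins for whatever the real `persistence` module does:

```rust
use std::collections::HashSet;

/// Illustrative dirty-tracking buffer: entities touched since the last
/// flush are batched into one write every N frames.
struct WriteBuffer {
    dirty: HashSet<u64>,
    frames_since_flush: u32,
    flush_interval: u32,
}

impl WriteBuffer {
    fn new(flush_interval: u32) -> Self {
        Self { dirty: HashSet::new(), frames_since_flush: 0, flush_interval }
    }

    /// Record that an entity changed this frame.
    fn mark_dirty(&mut self, entity: u64) {
        self.dirty.insert(entity);
    }

    /// Call once per frame; returns the batch to persist when the
    /// interval has elapsed, clearing the dirty set in the process.
    fn end_frame(&mut self) -> Option<Vec<u64>> {
        self.frames_since_flush += 1;
        if self.frames_since_flush < self.flush_interval || self.dirty.is_empty() {
            return None;
        }
        self.frames_since_flush = 0;
        Some(self.dirty.drain().collect())
    }
}

fn main() {
    let mut buf = WriteBuffer::new(3);
    buf.mark_dirty(1);
    buf.mark_dirty(2);
    assert!(buf.end_frame().is_none());            // frame 1: no flush yet
    assert!(buf.end_frame().is_none());            // frame 2: still buffering
    let batch = buf.end_frame().expect("flush on frame 3");
    assert_eq!(batch.len(), 2);                    // both dirty entities batched
    assert!(buf.end_frame().is_none());            // buffer cleared after flush
}
```

Batching like this is what keeps the persistence overhead a small, bounded slice of frame time instead of one synchronous write per change.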
### 3. Networking Manager

**Location**: `libmarathon/src/engine/networking_manager.rs`

**Purpose**: Bridge between Bevy and the iroh networking stack.

**Responsibilities:**
- Manage peer connections and discovery
- Route operations to/from the gossip network
- Maintain session state
- Handle the join protocol for new peers

### 4. Entity Mapping System

**Location**: `libmarathon/src/networking/entity_map.rs`

**Purpose**: Map between Bevy's local `Entity` IDs and global `UUID`s.

**Why This Exists**: Bevy assigns local sequential entity IDs that differ across instances. We need stable UUIDs for networked entities that all peers agree on.

```mermaid
graph LR
    A[Bevy Entity<br/>Local ID: 123] <-->|Bidirectional<br/>Mapping| B[UUID<br/>550e8400-....-446655440000]

    style A fill:#ffebee
    style B fill:#e8f5e9
```

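A minimal sketch of such a bidirectional map, using plain integer aliases to stand in for Bevy's `Entity` and the network UUID (the actual `entity_map` module may be organized differently):

```rust
use std::collections::HashMap;

// Stand-ins for Bevy's `Entity` and a 128-bit network UUID.
type LocalEntity = u64;
type NetworkId = u128;

/// Illustrative bidirectional map between local entity IDs and stable
/// network IDs; both directions are kept in lockstep.
#[derive(Default)]
struct EntityMap {
    local_to_net: HashMap<LocalEntity, NetworkId>,
    net_to_local: HashMap<NetworkId, LocalEntity>,
}

impl EntityMap {
    fn insert(&mut self, local: LocalEntity, net: NetworkId) {
        self.local_to_net.insert(local, net);
        self.net_to_local.insert(net, local);
    }

    fn network_id(&self, local: LocalEntity) -> Option<NetworkId> {
        self.local_to_net.get(&local).copied()
    }

    fn local_entity(&self, net: NetworkId) -> Option<LocalEntity> {
        self.net_to_local.get(&net).copied()
    }

    /// Remove in both directions so the two maps never drift apart.
    fn remove_local(&mut self, local: LocalEntity) {
        if let Some(net) = self.local_to_net.remove(&local) {
            self.net_to_local.remove(&net);
        }
    }
}

fn main() {
    let mut map = EntityMap::default();
    map.insert(123, 0x550e8400_e29b_41d4_a716_446655440000);
    assert_eq!(map.local_entity(0x550e8400_e29b_41d4_a716_446655440000), Some(123));
    map.remove_local(123);
    assert!(map.network_id(123).is_none());
}
```

Outgoing operations translate local IDs to network IDs before broadcast; incoming operations do the reverse lookup (allocating a fresh local entity on first sight of an unknown UUID).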
### 5. Debug UI System

**Location**: `libmarathon/src/debug_ui/`

**Purpose**: Provide runtime inspection of internal state.

Built with egui for immediate-mode GUI, integrated into Bevy's render pipeline.

**Features:**
- View connected peers
- Inspect vector clocks
- Monitor the operation log
- Check persistence metrics
- View entity mappings

## Data Flow

### Local Change Flow

```mermaid
graph TD
    A[User Input] --> B[Bevy System<br/>e.g., move entity]
    B --> C[Generate CRDT<br/>Operation]
    C --> D[Apply Operation<br/>Locally]
    D --> E[Broadcast via<br/>Gossip]
    D --> F[Mark Dirty for<br/>Persistence]

    style A fill:#e3f2fd
    style E fill:#fff3e0
    style F fill:#f3e5f5
```

### Remote Change Flow

```mermaid
graph TD
    A[Receive Operation<br/>from Gossip] --> B[Check Vector Clock<br/>causality]
    B --> C[Apply Operation<br/>to ECS]
    C --> D[Update Local<br/>Vector Clock]
    C --> E[Mark Dirty for<br/>Persistence]

    style A fill:#fff3e0
    style C fill:#e8f5e9
    style E fill:#f3e5f5
```

### Persistence Flow

```mermaid
graph TD
    A[Every N Frames] --> B[Identify Dirty<br/>Entities]
    B --> C[Serialize to<br/>Write Buffer]
    C --> D[Batch Write<br/>to SQLite]
    D --> E[Clear Dirty<br/>Flags]
    E --> A

    style A fill:#e8f5e9
    style D fill:#f3e5f5
```

## Technology Decisions

### Why Bevy?

- **ECS architecture** maps naturally to game development
- **Cross-platform** (desktop, mobile, web)
- **Active community** and ecosystem
- **Performance** through data-oriented design

### Why iroh?

- **QUIC-based** - Modern, efficient transport
- **NAT traversal** - Works behind firewalls
- **Gossip protocol** - Epidemic broadcast for multi-peer
- **Rust-native** - Zero-cost integration

### Why SQLite?

- **Embedded** - No server required
- **Battle-tested** - Reliable persistence
- **WAL mode** - Good write performance
- **Cross-platform** - Works everywhere

### Why CRDTs?

- **No central authority** - True P2P
- **Offline-first** - Work without connectivity
- **Provable consistency** - Mathematical guarantees
- **No conflict resolution UI** - Users don't see conflicts

## Design Constraints

### Current Limitations

1. **No Authentication** - All peers are trusted (0.1.x)
2. **No Authorization** - All peers have full permissions
3. **No Encryption** - None beyond QUIC's transport security
4. **Limited Scalability** - Not tested beyond ~10 peers
5. **Desktop + iOS Only** - Web and other platforms planned

### Performance Targets

- **Operation latency**: < 50ms peer-to-peer
- **Persistence overhead**: < 5% of frame time
- **Memory overhead**: < 10MB for a typical session
- **Startup time**: < 2 seconds

### Intentional Non-Goals

- **Central server architecture** - Stay decentralized
- **Strong consistency** - Use eventual consistency
- **Traditional database** - Use CRDTs, not SQL queries
- **General-purpose engine** - Focus on collaboration

## Related Documentation

- [RFC 0001: CRDT Synchronization Protocol](docs/rfcs/0001-crdt-gossip-sync.md)
- [RFC 0002: Persistence Strategy](docs/rfcs/0002-persistence-strategy.md)
- [RFC 0003: Sync Abstraction](docs/rfcs/0003-sync-abstraction.md)
- [RFC 0004: Session Lifecycle](docs/rfcs/0004-session-lifecycle.md)
- [RFC 0005: Spatial Audio System](docs/rfcs/0005-spatial-audio-vendoring.md)
- [RFC 0006: Agent Simulation Architecture](docs/rfcs/0006-agent-simulation-architecture.md)

## Questions?

If you're working on Marathon and something isn't clear:

1. Check the RFCs in `docs/rfcs/`
2. Search existing issues/discussions
3. Ask in GitHub Discussions
4. Reach out to maintainers

---

*This architecture will evolve. When making significant architectural changes, consider updating this document or creating a new RFC.*

CHANGELOG.md (new file, 65 lines)

# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

## [0.1.0] - 2026-02-06

### Added

#### Core Features
- CRDT-based synchronization using OR-Sets, RGA, and Last-Write-Wins semantics
- Peer-to-peer networking built on iroh with QUIC transport
- Gossip-based message broadcasting for multi-peer coordination
- Offline-first architecture with automatic reconciliation
- SQLite-backed persistence with WAL mode
- Cross-platform support for macOS desktop and iOS

#### Demo Application
- Replicated cube demo showcasing real-time collaboration
- Multiple instance support for local testing
- Apple Pencil input support on iPad
- Real-time cursor and selection synchronization
- Debug UI for inspecting internal state

#### Infrastructure
- Bevy 0.17 ECS integration
- Zero-copy serialization with rkyv
- Automated iOS build tooling via xtask
- Comprehensive RFC documentation covering architecture decisions

### Architecture

- **Networking Layer**: CRDT sync protocol, entity mapping, vector clocks, session management
- **Persistence Layer**: Three-tier system (in-memory → write buffer → SQLite)
- **Engine Core**: Event loop, networking manager, peer discovery, game actions
- **Platform Support**: iOS and desktop with platform-specific input handling

### Documentation

- RFC 0001: CRDT Synchronization Protocol
- RFC 0002: Persistence Strategy
- RFC 0003: Sync Abstraction
- RFC 0004: Session Lifecycle
- RFC 0005: Spatial Audio System
- RFC 0006: Agent Simulation Architecture
- iOS deployment guide
- Estimation methodology documentation

### Known Issues

- API is unstable and subject to change
- Limited documentation for public APIs
- Performance optimizations still needed for large-scale collaboration
- iOS builds require manual Xcode configuration

### Notes

This is an early development release (version 0.x.y). The API is unstable and breaking changes are expected. Not recommended for production use.

[unreleased]: https://github.com/r3t-studios/marathon/compare/v0.1.0...HEAD
[0.1.0]: https://github.com/r3t-studios/marathon/releases/tag/v0.1.0

CODE_OF_CONDUCT.md (new file, 148 lines)

# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
  community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of
  any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
  without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Addressing and Repairing Harm

If you are being harmed or notice that someone else is being harmed, or have any
other concerns, please contact the community leaders responsible for enforcement
at sienna@linux.com. All reports will be handled with discretion.

We are committed to addressing harm in a manner that is respectful to victims
and survivors of violations of this Code of Conduct. When community leaders
receive a report of a possible violation, they will:

1. **Acknowledge receipt** of the report
2. **Assess the situation** and gather necessary information
3. **Determine appropriate action** using the guidelines below
4. **Communicate with all parties** involved
5. **Take action** to address and repair harm
6. **Follow up** to ensure the situation is resolved

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of
actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the
community.

## Restorative Justice

We believe in restorative justice and creating opportunities for those who have
violated the Code of Conduct to repair harm and reintegrate into the community
when appropriate. This may include:

* Facilitated conversations between affected parties
* Public acknowledgment of harm and apology
* Education and learning opportunities
* Community service or contributions
* Gradual reintegration with monitoring

The possibility of restoration depends on:
* The severity of the violation
* The willingness of the violator to acknowledge harm
* The consent and comfort of those harmed
* The assessment of community leaders

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official email address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 3.0, available at
[https://www.contributor-covenant.org/version/3/0/code_of_conduct.html][v3.0].

The "Addressing and Repairing Harm" section is inspired by the restorative
justice approach outlined in Contributor Covenant 3.0.

Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v3.0]: https://www.contributor-covenant.org/version/3/0/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations

CONTRIBUTING.md (new file, 343 lines)

# Contributing to Marathon

Thank you for your interest in contributing to Marathon! We're excited to work with you.

This document provides guidelines for contributing to the project. Following these guidelines helps maintain code quality and makes the review process smoother for everyone.

## Table of Contents

- [Code of Conduct](#code-of-conduct)
- [Getting Started](#getting-started)
- [Development Environment Setup](#development-environment-setup)
- [How to Contribute](#how-to-contribute)
- [Coding Standards](#coding-standards)
- [Testing](#testing)
- [Pull Request Process](#pull-request-process)
- [Reporting Bugs](#reporting-bugs)
- [Suggesting Features](#suggesting-features)
- [AI Usage Policy](#ai-usage-policy)
- [Questions?](#questions)

## Code of Conduct

This project adheres to the [Contributor Covenant Code of Conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. Please report unacceptable behavior to the project maintainers.

## Getting Started

1. **Fork the repository** on GitHub
2. **Clone your fork** locally
3. **Set up your development environment** (see below)
4. **Create a branch** for your changes
5. **Make your changes** with clear commit messages
6. **Test your changes** thoroughly
7. **Submit a pull request**

## Development Environment Setup

### Prerequisites

- **Rust** toolchain supporting the 2024 edition (install via [rustup](https://rustup.rs/))
- **macOS** (for macOS desktop and iOS development)
- **Xcode** and iOS simulator (for iOS development)
- **Linux** (for Linux desktop development)
- **Windows** (for Windows desktop development)
- **Git** for version control

### Initial Setup

```bash
# Clone your fork (replace <your-username> with your GitHub username)
git clone https://github.com/<your-username>/marathon.git
cd marathon

# Add upstream remote
git remote add upstream https://github.com/r3t-studios/marathon.git

# Build the project
cargo build

# Run tests
cargo test

# Run the desktop demo
cargo run --package app
```

### iOS Development Setup

For iOS development, see our detailed [iOS Deployment Guide](docs/ios-deployment.md).

```bash
# Build for iOS simulator
cargo xtask ios-build

# Run on simulator
cargo xtask ios-run
```

### Useful Commands

```bash
# Check code without building
cargo check

# Run clippy for linting
cargo clippy

# Format code
cargo fmt

# Run tests with output shown
cargo nextest run --no-capture

# Build documentation
cargo doc --open
```

## How to Contribute

### Types of Contributions

We welcome many types of contributions:

- **Bug fixes** - Fix issues and improve stability
- **Features** - Implement new functionality (discuss first in an issue)
- **Documentation** - Improve or add documentation
- **Examples** - Create new examples or demos
- **Tests** - Add test coverage
- **Performance** - Optimize existing code
- **Refactoring** - Improve code quality

### Before You Start

For **bug fixes and small improvements**, feel free to open a PR directly.

For **new features or significant changes**:
1. **Open an issue first** to discuss the proposal
2. Wait for maintainer feedback before investing significant time
3. Reference the issue in your PR

This helps ensure your work aligns with project direction and avoids duplicate effort.

## Coding Standards

### Rust Style

- Follow the [Rust API Guidelines](https://rust-lang.github.io/api-guidelines/)
- Follow the [Microsoft Rust Guidelines](https://microsoft.github.io/rust-guidelines/guidelines/index.html)
- Use `cargo +nightly fmt` to format code (run before committing)
- Address all `cargo clippy` warnings
- Use meaningful variable and function names
- Add doc comments (`///`) for public APIs

### Code Organization

- Keep modules focused and cohesive
- Prefer composition over inheritance
- Use Rust's type system to enforce invariants
- Avoid unnecessary `unsafe` code

### Documentation
|
||||
|
||||
- Add doc comments for all public types, traits, and functions
|
||||
- Include examples in doc comments when helpful
|
||||
- Update relevant documentation in `/docs` when making architectural changes
|
||||
- Keep README.md in sync with current capabilities
|
||||
|
||||
### Commit Messages
|
||||
|
||||
Write clear, descriptive conventional commit messages:
|
||||
|
||||
```
|
||||
Short summary (50 chars or less)
|
||||
|
||||
More detailed explanation if needed. Wrap at 72 characters.
|
||||
|
||||
- Bullet points are fine
|
||||
- Use present tense ("Add feature" not "Added feature")
|
||||
- Reference issues and PRs with #123
|
||||
```
|
||||
|
||||
Good examples:
|
||||
```
|
||||
Add cursor synchronization to networking layer
|
||||
|
||||
Implement entity selection system for iOS
|
||||
|
||||
Fix panic in SQLite persistence during shutdown (#42)
|
||||
```
|
||||
|
||||
## Testing

### Running Tests

```bash
# Run all tests
cargo nextest run

# Run tests for a specific crate
cargo nextest run --package libmarathon

# Run a specific test
cargo nextest run test_vector_clock_merge

# Run tests with output
cargo nextest run --no-capture
```

### Writing Tests

- Add unit tests in the same file as the code (in a `mod tests` block)
- Add integration tests in the `tests/` directory
- Test edge cases and error conditions
- Keep tests focused and readable
- Use descriptive test names: `test_vector_clock_handles_concurrent_updates`

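The in-file layout described above can be sketched as follows. This is illustrative only: `clamp_health` is a made-up example function, not part of Marathon's API.

```rust
// Hypothetical example function; any small pure function works the same way.
pub fn clamp_health(value: i32) -> i32 {
    value.clamp(0, 100)
}

// Unit tests live in the same file, compiled only under `cargo test` /
// `cargo nextest run` thanks to #[cfg(test)].
#[cfg(test)]
mod tests {
    use super::*;

    // Descriptive names make failures self-explanatory in test output.
    #[test]
    fn test_clamp_health_caps_overheal_at_maximum() {
        assert_eq!(clamp_health(250), 100);
    }

    #[test]
    fn test_clamp_health_rejects_negative_values() {
        assert_eq!(clamp_health(-5), 0);
    }
}
```

In-range values pass through unchanged, so `clamp_health(42)` returns `42`.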
### Test Coverage

We aim for good test coverage, especially for:
- CRDT operations and synchronization logic
- Persistence layer operations
- Network protocol handling
- Error conditions and edge cases

You don't need 100% coverage, but core logic should be well-tested.

## Pull Request Process

### Before Submitting

1. **Update your branch** with the latest upstream changes
   ```bash
   git fetch upstream
   git rebase upstream/mainline
   ```

2. **Run the test suite** and ensure all tests pass
   ```bash
   cargo test
   ```

3. **Run clippy** and fix any warnings
   ```bash
   cargo clippy
   ```

4. **Format your code**
   ```bash
   cargo fmt
   ```

5. **Update documentation** if you changed APIs or behavior

### Submitting Your PR

1. **Push to your fork**
   ```bash
   git push origin your-branch-name
   ```

2. **Open a pull request** on GitHub

3. **Fill out the PR template** with:
   - Clear description of what changed and why
   - Link to related issues
   - Testing performed
   - Screenshots/videos for UI changes

4. **Request review** from maintainers

### During Review

- Be responsive to feedback
- Make requested changes promptly
- Push updates to the same branch (they'll appear in the PR)
- Use "fixup" commits or force-push after addressing review comments
- Be patient - maintainers are volunteers with limited time

### After Approval

- Maintainers will merge your PR
- You can delete your branch after merging
- Celebrate! 🎉 You're now a Marathon contributor!

## Reporting Bugs

### Before Reporting

1. **Check existing issues** to avoid duplicates
2. **Verify it's a bug** and not expected behavior
3. **Test on the latest version** from the mainline branch

### Bug Report Template

When opening a bug report, please include:

- **Description** - What went wrong?
- **Expected behavior** - What should have happened?
- **Actual behavior** - What actually happened?
- **Steps to reproduce** - Minimal steps to reproduce the issue
- **Environment**:
  - OS version (macOS version, iOS version)
  - Rust version (`rustc --version`)
  - Marathon version or commit hash
- **Logs/Stack traces** - Error messages or relevant log output
- **Screenshots/Videos** - If applicable

### Security Issues

**Do not report security vulnerabilities in public issues.**

Please see our [Security Policy](SECURITY.md) for how to report security issues privately.

## Suggesting Features

We welcome feature suggestions! Here's how to propose them effectively:

### Before Suggesting

1. **Check existing issues and discussions** for similar ideas
2. **Consider if it aligns** with Marathon's goals (multiplayer game engine framework)
3. **Think about the scope** - is this a core feature or better as a plugin/extension?

### Feature Request Template

When suggesting a feature, please include:

- **Problem statement** - What problem does this solve?
- **Proposed solution** - How would this feature work?
- **Alternatives considered** - What other approaches did you think about?
- **Use cases** - Real-world scenarios where this helps
- **Implementation ideas** - Technical approach (if you have thoughts)

### Feature Discussion

- Maintainers will label feature requests as `enhancement`
- We'll discuss feasibility, scope, and priority
- Features that align with the roadmap are more likely to be accepted
- You're welcome to implement features you propose (with approval)

## AI Usage Policy

Marathon has specific guidelines around AI and ML tool usage. Please read our [AI Usage Policy](AI_POLICY.md) before contributing.

**Key points:**
- AI tools (Copilot, ChatGPT, etc.) are allowed for productivity
- You must understand and be accountable for all code you submit
- Humans make all architectural decisions, not AI
- When in doubt, ask yourself: "Can I maintain and debug this?"

## Questions?

- **General questions** - Open a [Discussion](https://github.com/yourusername/marathon/discussions)
- **Bug reports** - Open an [Issue](https://github.com/yourusername/marathon/issues)
- **Real-time chat** - [Discord/Slack link if you have one]
- **Email** - [maintainer email if appropriate]

## Recognition

All contributors will be recognized in our release notes and can be listed in the AUTHORS file (coming soon).

---

Thank you for contributing to Marathon! Your effort helps make collaborative software better for everyone.

Cargo.lock (620 lines, generated): file diff suppressed because it is too large.

Cargo.toml (38 lines changed)

```diff
@@ -1,5 +1,5 @@
 [workspace]
-members = ["crates/libmarathon", "crates/sync-macros", "crates/app"]
+members = ["crates/libmarathon", "crates/macros", "crates/app", "crates/xtask"]
 resolver = "2"

 [workspace.package]
@@ -9,18 +9,21 @@ edition = "2024"
 # Async runtime
 tokio = { version = "1", features = ["full"] }
+tokio-stream = "0.1"
+tokio-util = "0.7"
 futures-lite = "2.0"

 # Iroh - P2P networking and gossip
-iroh = { version = "0.95.0", features = ["discovery-local-network"] }
+iroh = { version = "0.95.0", features = ["discovery-pkarr-dht"] }
 iroh-gossip = "0.95.0"

 # Database
-rusqlite = "0.37.0"
+rusqlite = { version = "0.37.0", features = ["bundled"] }

 # Serialization
 serde = { version = "1.0", features = ["derive"] }
 serde_json = "1.0"
 toml = "0.9"
+rkyv = { version = "0.8", features = ["uuid-1", "bytes-1"] }

 # Error handling
 thiserror = "2.0"
@@ -32,20 +35,33 @@ chrono = { version = "0.4", features = ["serde"] }
 # Logging
 tracing = "0.1"
 tracing-subscriber = { version = "0.3", features = ["env-filter"] }
+tracing-appender = "0.2"
+tracing-oslog = "0.3"

 # Random
 rand = "0.8"

-# ML/AI
-candle-core = "0.8"
-candle-nn = "0.8"
-candle-transformers = "0.8"
-tokenizers = "0.20"
-hf-hub = "0.3"
+# Encoding
+hex = "0.4"

-# Bevy
-bevy = "0.17"
 # Data structures
 bytes = "1.0"
 crossbeam-channel = "0.5"
 uuid = { version = "1.0", features = ["v4", "serde"] }

+# Bevy and graphics
+bevy = "0.17.2"
+egui = { version = "0.33", default-features = false, features = ["bytemuck", "default_fonts"] }
+glam = "0.29"
+winit = "0.30"

 # Synchronization
 parking_lot = "0.12"
 crdts = "7.3"
+inventory = "0.3"

+# CLI
+clap = { version = "4.5", features = ["derive"] }

 # Testing
 tempfile = "3"
```

LICENSE.md (new file, 21 lines)

MIT License

Copyright (c) 2026 Marathon Contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md (new file, 164 lines)

# Marathon

**A peer-to-peer game engine development kit built with Rust and CRDTs**

Marathon is a multiplayer game engine framework designed for building real-time collaborative games with offline-first capabilities. Built on [Bevy](https://bevyengine.org/) and [iroh](https://iroh.computer/), it provides CRDT-based state synchronization, peer-to-peer networking, and persistent state management out of the box, so you can focus on making great games instead of wrestling with networking code.

## ⚠️ Early Development Notice

**This project is in early development (<v1.0.0).**

- The API is unstable and may change without notice
- Breaking changes are expected between minor versions
- Not recommended for production use
- Documentation is still being written
- We welcome feedback and contributions!

Version 1.0.0 will indicate the first stable release.

## Features

- **CRDT-based synchronization** - Conflict-free multiplayer state management using OR-Sets, RGA, and LWW semantics
- **Peer-to-peer networking** - Built on iroh with QUIC transport and gossip-based message broadcasting
- **Offline-first architecture** - Players can continue playing during network issues and sync when reconnected
- **Persistent game state** - SQLite-backed storage with automatic entity persistence
- **Cross-platform** - Supports macOS desktop and iOS (simulator and device), with more platforms planned
- **Built with Bevy** - Leverages the Bevy game engine's ECS architecture and parts of its ecosystem

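To illustrate the LWW (last-writer-wins) semantics mentioned in the features above, here is a minimal, self-contained sketch. This is not Marathon's actual API (its CRDT types come from `libmarathon` and the `crdts` crate); the `LwwRegister` name and fields below are hypothetical.

```rust
// Hypothetical minimal LWW register: the write with the highest
// (timestamp, node_id) pair wins, so concurrent replicas converge
// to the same value regardless of merge order.
#[derive(Debug, Clone, PartialEq)]
struct LwwRegister<T> {
    value: T,
    timestamp: u64, // logical clock tick of the last write
    node_id: u64,   // deterministic tie-breaker for equal timestamps
}

impl<T: Clone> LwwRegister<T> {
    fn merge(&mut self, other: &LwwRegister<T>) {
        // Lexicographic comparison: higher timestamp wins, ties break on node id.
        if (other.timestamp, other.node_id) > (self.timestamp, self.node_id) {
            self.value = other.value.clone();
            self.timestamp = other.timestamp;
            self.node_id = other.node_id;
        }
    }
}

fn main() {
    let mut a = LwwRegister { value: "red", timestamp: 3, node_id: 1 };
    let b = LwwRegister { value: "blue", timestamp: 3, node_id: 2 };
    a.merge(&b);
    println!("{}", a.value); // "blue": equal timestamps, higher node id wins
}
```

Because the merge is commutative, associative, and idempotent, peers can exchange state in any order (including after a partition) and still converge.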
## Demo

The current demo is a **replicated cube game** that synchronizes in real-time across multiple instances:
- Apple Pencil input support on iPad
- Real-time player cursor and selection sharing
- Automatic game state synchronization across network partitions
- Multiple players can interact with the same game world simultaneously

## Quick Start

### Prerequisites

- **Rust** 2024 edition or later
- **macOS** (for desktop demo)
- **Xcode** (for iOS development)

### Building the Desktop Demo

```bash
# Clone the repository
git clone https://github.com/yourusername/marathon.git
cd marathon

# Build and run
cargo run --package app
```

To run multiple instances for testing multiplayer:

```bash
# Terminal 1
cargo run --package app -- --instance 0

# Terminal 2
cargo run --package app -- --instance 1
```

### Building for iOS

Marathon includes automated iOS build tooling via `xtask`:

```bash
# Build for iOS simulator
cargo xtask ios-build

# Deploy to connected device
cargo xtask ios-deploy

# Build and run on simulator
cargo xtask ios-run
```

See [docs/ios-deployment.md](docs/ios-deployment.md) for detailed iOS setup instructions.

## Architecture

Marathon is organized as a Rust workspace with four crates:

- **`libmarathon`** - Core engine library with networking, persistence, and CRDT sync
- **`app`** - Demo game showcasing multiplayer cube gameplay
- **`macros`** - Procedural macros for serialization
- **`xtask`** - Build automation for iOS deployment

### Key Components

- **Networking** - CRDT synchronization protocol, gossip-based broadcast, entity mapping
- **Persistence** - Three-tier system (in-memory → write buffer → SQLite WAL)
- **Engine** - Core event loop, peer discovery, session management
- **Debug UI** - egui-based debug interface for inspecting state

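The three-tier persistence flow (in-memory → write buffer → SQLite WAL) can be sketched roughly as below. This is a hedged illustration, not Marathon's implementation: the `TieredStore` type is hypothetical, and the durable tier is stubbed with a closure so the sketch stays stdlib-only (in Marathon the third tier is a SQLite database in WAL mode).

```rust
use std::collections::HashMap;

// Hypothetical sketch: reads are served from the in-memory map,
// writes also land in a buffer, and flush() hands the buffered rows
// to the durable backend in one batch.
struct TieredStore<F: FnMut(&[(String, String)])> {
    memory: HashMap<String, String>,     // tier 1: authoritative in-memory state
    write_buffer: Vec<(String, String)>, // tier 2: pending durable writes
    persist: F,                          // tier 3: durable backend (e.g. one SQLite transaction)
}

impl<F: FnMut(&[(String, String)])> TieredStore<F> {
    fn set(&mut self, key: &str, value: &str) {
        self.memory.insert(key.to_string(), value.to_string());
        self.write_buffer.push((key.to_string(), value.to_string()));
    }

    fn get(&self, key: &str) -> Option<&String> {
        self.memory.get(key) // reads never touch the slow tier
    }

    fn flush(&mut self) {
        (self.persist)(&self.write_buffer);
        self.write_buffer.clear();
    }
}

fn main() {
    let mut persisted: Vec<(String, String)> = Vec::new();
    {
        let mut store = TieredStore {
            memory: HashMap::new(),
            write_buffer: Vec::new(),
            persist: |rows: &[(String, String)]| persisted.extend_from_slice(rows),
        };
        store.set("cube/1/pos", "0,0,0");
        assert_eq!(store.get("cube/1/pos").map(String::as_str), Some("0,0,0"));
        store.flush();
    }
    println!("flushed {} row(s)", persisted.len()); // flushed 1 row(s)
}
```

Batching writes this way amortizes the cost of durable commits; WAL mode lets readers proceed while a flush transaction is in progress.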
For detailed architecture information, see [ARCHITECTURE.md](ARCHITECTURE.md).

## Documentation

- **[RFCs](docs/rfcs/)** - Design documents covering core architecture decisions
  - [0001: CRDT Synchronization Protocol](docs/rfcs/0001-crdt-gossip-sync.md)
  - [0002: Persistence Strategy](docs/rfcs/0002-persistence-strategy.md)
  - [0003: Sync Abstraction](docs/rfcs/0003-sync-abstraction.md)
  - [0004: Session Lifecycle](docs/rfcs/0004-session-lifecycle.md)
  - [0005: Spatial Audio System](docs/rfcs/0005-spatial-audio-vendoring.md)
  - [0006: Agent Simulation Architecture](docs/rfcs/0006-agent-simulation-architecture.md)
- **[iOS Deployment Guide](docs/ios-deployment.md)** - Complete iOS build instructions
- **[Estimation Methodology](docs/ESTIMATION.md)** - Project sizing and prioritization approach

## Technology Stack

- **[Bevy 0.17](https://bevyengine.org/)** - Game engine and ECS framework
- **[iroh 0.95](https://iroh.computer/)** - P2P networking with QUIC
- **[iroh-gossip 0.95](https://github.com/n0-computer/iroh-gossip)** - Gossip protocol for multi-peer coordination
- **[SQLite](https://www.sqlite.org/)** - Local persistence with WAL mode
- **[rkyv 0.8](https://rkyv.org/)** - Zero-copy serialization
- **[egui 0.33](https://www.egui.rs/)** - Immediate-mode GUI
- **[wgpu 26](https://wgpu.rs/)** - Graphics API (via Bevy)

## Contributing

We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for:
- Development environment setup
- Code style and conventions
- How to submit pull requests
- Testing guidelines

Please also read our [Code of Conduct](CODE_OF_CONDUCT.md) and [AI Usage Policy](AI_POLICY.md).

## Project Status

Marathon is actively developed and currently focused on:
- Core CRDT synchronization protocol
- Persistence layer stability
- Multi-platform support (macOS, iOS)
- Demo applications

See our [roadmap](https://github.com/yourusername/marathon/issues) for planned features and known issues.

## Community

- **Issues** - [GitHub Issues](https://github.com/yourusername/marathon/issues)
- **Discussions** - [GitHub Discussions](https://github.com/yourusername/marathon/discussions)

## Security

Please see our [Security Policy](SECURITY.md) for information on reporting vulnerabilities.

## License

Marathon is licensed under the [MIT License](LICENSE.md).

## Acknowledgments

Marathon builds on the incredible work of:
- The [Bevy community](https://bevyengine.org/) for the game engine
- The [iroh team](https://iroh.computer/) for P2P networking infrastructure
- The Rust CRDT ecosystem

---

**Built with Rust 🦀 and collaborative spirit**

SECURITY.md (new file, 143 lines)

# Security Policy

## Supported Versions

As an early-stage project (version 0.x.y), security support is limited to the latest development version.

| Version | Supported |
| ------- | ------------------ |
| mainline branch | :white_check_mark: |
| 0.1.x | :white_check_mark: |
| < 0.1.0 | :x: |

## Security Maturity

**Marathon is currently in early development (0.1.x) and is NOT recommended for production use or handling sensitive data.**

Security considerations for the current release:

- ⚠️ **Network protocol** is not hardened against malicious peers
- ⚠️ **Authentication** is not yet implemented
- ⚠️ **Encryption** is provided by QUIC but not verified against attacks
- ⚠️ **Authorization** is not implemented
- ⚠️ **Data validation** is basic and not audited
- ⚠️ **Persistence layer** stores data unencrypted locally

**Use Marathon only in trusted development environments with non-sensitive data.**

## Reporting a Vulnerability

We take security issues seriously. If you discover a security vulnerability in Marathon, please help us address it responsibly.

### How to Report

**Please DO NOT report security vulnerabilities through public GitHub issues.**

Instead, report vulnerabilities by:

1. **Email**: Send details to sienna@linux.com
2. **Subject line**: Include "SECURITY" and a brief description
3. **Include**:
   - Description of the vulnerability
   - Steps to reproduce
   - Potential impact
   - Suggested fix (if you have one)

### What to Expect

After you submit a report:

1. **Acknowledgment**: We'll confirm receipt within 48 hours
2. **Assessment**: We'll evaluate the severity and impact within 5 business days
3. **Updates**: We'll keep you informed of our progress
4. **Resolution**: We'll work on a fix and coordinate disclosure timing with you
5. **Credit**: We'll acknowledge your contribution (unless you prefer to remain anonymous)

### Disclosure Timeline

- **Critical vulnerabilities**: Aim to fix within 30 days
- **High severity**: Aim to fix within 60 days
- **Medium/Low severity**: Addressed in the regular development cycle

We'll coordinate public disclosure timing with you after a fix is available.

## Security Best Practices

If you're using Marathon (keeping in mind it's not production-ready):

### For Development

- **Use isolated networks** for testing
- **Don't use real user data** or sensitive information
- **Don't expose to the internet** without additional security layers
- **Keep dependencies updated** with `cargo update`
- **Review security advisories** for Rust crates you depend on

### For Deployment (Future)

Once Marathon reaches production readiness, we plan to implement:

- End-to-end encryption for all peer communications
- Peer authentication and authorization
- Encrypted local storage
- Rate limiting and DoS protection
- Security audit trail
- Regular security audits

### Known Security Gaps

Current known limitations (to be addressed before 1.0):

- **No peer authentication** - Any peer can join a session
- **No authorization system** - All peers have full permissions
- **No encrypted storage** - Local SQLite database is unencrypted
- **Limited input validation** - CRDT operations trust peer input
- **No audit logging** - Actions are not logged for security review
- **Network protocol not hardened** - Vulnerable to malicious peers

## Security Contact

For security-related questions or concerns:

- **Email**: sienna@linux.com
- **Response time**: Within 48 hours for initial contact

## Security Advisories

Security advisories will be published:

- In GitHub Security Advisories
- In release notes
- In this SECURITY.md file

Currently, there are no published security advisories.

## Responsible Disclosure

We believe in responsible disclosure and request that you:

- Give us reasonable time to address issues before public disclosure
- Make a good faith effort to avoid privacy violations and service disruption
- Don't exploit vulnerabilities beyond demonstrating the issue
- Don't access or modify data that doesn't belong to you

In return, we commit to:

- Respond promptly to your report
- Keep you informed of our progress
- Credit you for your discovery (if desired)
- Not pursue legal action for good faith security research

## Additional Resources

- [Rust Security Advisory Database](https://rustsec.org/)
- [cargo-audit](https://github.com/RustSec/rustsec/tree/main/cargo-audit) - Audit Rust dependencies
- [OWASP Top 10](https://owasp.org/www-project-top-ten/) - Common web application security risks

## Version History

- **2026-02-06**: Initial security policy for v0.1.0 release

---

**Thank you for helping keep Marathon and its users safe!**

crates/app/Cargo.toml

```diff
@@ -1,7 +1,7 @@
 [package]
 name = "app"
 version = "0.1.0"
-edition = "2021"
+edition.workspace = true

 [features]
 default = ["desktop"]
@@ -11,43 +11,46 @@ headless = []

 [dependencies]
 libmarathon = { path = "../libmarathon" }
-bevy = { version = "0.17", default-features = false, features = [
-    "bevy_render",
-    "bevy_core_pipeline",
-    "bevy_pbr",
+libmarathon-macros = { path = "../macros" }
+inventory.workspace = true
+rkyv.workspace = true
+bevy = { version = "0.17.2", default-features = false, features = [
+    # bevy_render, bevy_core_pipeline, bevy_pbr are now vendored in libmarathon
     "bevy_ui",
     "bevy_text",
     "png",
 ] }
-egui = { version = "0.33", default-features = false, features = ["bytemuck", "default_fonts"] }
-glam = "0.29"
-winit = "0.30"
+egui.workspace = true
+glam.workspace = true
+winit.workspace = true
 raw-window-handle = "0.6"
-uuid = { version = "1.0", features = ["v4", "serde"] }
-anyhow = "1.0"
-tokio = { version = "1", features = ["full"] }
-tracing = "0.1"
-tracing-subscriber = { version = "0.3", features = ["env-filter"] }
-serde = { version = "1.0", features = ["derive"] }
-rand = "0.8"
-iroh = { version = "0.95", features = ["discovery-local-network"] }
-iroh-gossip = "0.95"
-futures-lite = "2.0"
-bincode = "1.3"
-bytes = "1.0"
-crossbeam-channel = "0.5.15"
+uuid.workspace = true
+anyhow.workspace = true
+tokio.workspace = true
+tracing.workspace = true
+tracing-subscriber.workspace = true
+tracing-appender.workspace = true
+serde.workspace = true
+rand.workspace = true
+iroh = { workspace = true, features = ["discovery-local-network"] }
+iroh-gossip.workspace = true
+futures-lite.workspace = true
+bytes.workspace = true
+crossbeam-channel.workspace = true
+clap.workspace = true

 [target.'cfg(target_os = "ios")'.dependencies]
 objc = "0.2"
-raw-window-handle = "0.6"
+tracing-oslog.workspace = true

 [dev-dependencies]
-iroh = { version = "0.95", features = ["discovery-local-network"] }
-iroh-gossip = "0.95"
-tempfile = "3"
-futures-lite = "2.0"
-bincode = "1.3"
-bytes = "1.0"
+iroh = { workspace = true, features = ["discovery-local-network"] }
+iroh-gossip.workspace = true
+tempfile.workspace = true
+futures-lite.workspace = true
+rkyv.workspace = true
+bytes.workspace = true

 [lib]
 name = "app"
```

226
crates/app/src/bin/marathonctl.rs
Normal file
226
crates/app/src/bin/marathonctl.rs
Normal file
@@ -0,0 +1,226 @@
|
||||
//! Marathon control CLI
|
||||
//!
|
||||
//! Send control commands to a running Marathon instance via Unix domain socket.
|
||||
//!
|
||||
//! # Usage
|
||||
//!
|
||||
//! ```bash
|
||||
//! # Get session status
|
||||
//! marathonctl status
|
||||
//!
|
||||
//! # Start networking with a session
|
||||
//! marathonctl start <session-code>
|
||||
//!
|
||||
//! # Use custom socket
|
||||
//! marathonctl --socket /tmp/marathon1.sock status
|
||||
//! ```
|
||||
|
||||
use clap::{Parser, Subcommand};
|
||||
use std::io::{Read, Write};
|
||||
use std::os::unix::net::UnixStream;
|
||||
|
||||
use libmarathon::networking::{ControlCommand, ControlResponse};
|
||||
|
||||
/// Marathon control CLI
|
||||
#[derive(Parser, Debug)]
|
||||
#[command(version, about, long_about = None)]
|
||||
struct Args {
|
||||
/// Path to the control socket
|
||||
#[arg(long, default_value = "/tmp/marathon-control.sock")]
|
||||
socket: String,
|
||||
|
||||
#[command(subcommand)]
|
||||
command: Commands,
|
||||
}
|
||||
|
||||
#[derive(Subcommand, Debug)]
|
||||
enum Commands {
|
||||
/// Start networking with a session
|
||||
Start {
|
||||
/// Session code (e.g., abc-def-123)
|
||||
session_code: String,
|
||||
},
|
||||
/// Stop networking
|
||||
Stop,
|
||||
/// Get current session status
|
||||
Status,
|
||||
/// Send a test message
|
||||
Test {
|
||||
/// Message content
|
||||
content: String,
|
||||
},
|
||||
/// Broadcast a ping message
|
||||
Ping,
|
||||
/// Spawn an entity
|
||||
Spawn {
|
||||
/// Entity type (e.g., "cube")
|
||||
entity_type: String,
|
||||
/// X position
|
||||
#[arg(short, long, default_value = "0.0")]
|
||||
x: f32,
|
||||
/// Y position
|
||||
#[arg(short, long, default_value = "0.0")]
|
||||
y: f32,
|
||||
/// Z position
|
||||
#[arg(short, long, default_value = "0.0")]
|
||||
z: f32,
|
||||
},
|
||||
/// Delete an entity by UUID
|
||||
Delete {
|
||||
/// Entity UUID
|
||||
entity_id: String,
|
||||
},
|
||||
}
|
||||
|
||||
fn main() {
|
||||
let args = Args::parse();
|
||||
|
||||
// Build command from subcommand
|
||||
let command = match args.command {
|
||||
Commands::Start { session_code } => ControlCommand::JoinSession { session_code },
|
||||
Commands::Stop => ControlCommand::LeaveSession,
|
||||
Commands::Status => ControlCommand::GetStatus,
|
||||
Commands::Test { content } => ControlCommand::SendTestMessage { content },
|
||||
Commands::Ping => {
|
||||
use libmarathon::networking::{SyncMessage, VectorClock};
|
||||
use uuid::Uuid;
|
||||
|
||||
// For ping, we send a SyncRequest (lightweight ping-like message)
|
||||
let node_id = Uuid::new_v4();
|
||||
ControlCommand::BroadcastMessage {
|
||||
message: SyncMessage::SyncRequest {
|
||||
node_id,
|
||||
vector_clock: VectorClock::new(),
|
||||
},
|
||||
}
|
||||
}
|
||||
Commands::Spawn { entity_type, x, y, z } => {
|
||||
ControlCommand::SpawnEntity {
|
||||
entity_type,
|
||||
position: [x, y, z],
|
||||
}
|
||||
}
|
||||
Commands::Delete { entity_id } => {
|
||||
use uuid::Uuid;
|
||||
match Uuid::parse_str(&entity_id) {
|
||||
Ok(uuid) => ControlCommand::DeleteEntity { entity_id: uuid },
|
||||
Err(e) => {
|
||||
eprintln!("Invalid UUID '{}': {}", entity_id, e);
|
||||
std::process::exit(1);
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
// Connect to Unix socket
|
||||
let socket_path = &args.socket;
|
||||
let mut stream = match UnixStream::connect(&socket_path) {
|
||||
Ok(s) => s,
|
||||
Err(e) => {
|
||||
eprintln!("Failed to connect to {}: {}", socket_path, e);
|
||||
eprintln!("Is the Marathon app running?");
|
            std::process::exit(1);
        }
    };

    // Send command
    if let Err(e) = send_command(&mut stream, &command) {
        eprintln!("Failed to send command: {}", e);
        std::process::exit(1);
    }

    // Receive response
    match receive_response(&mut stream) {
        Ok(response) => {
            print_response(response);
        }
        Err(e) => {
            eprintln!("Failed to receive response: {}", e);
            std::process::exit(1);
        }
    }
}

fn send_command(stream: &mut UnixStream, command: &ControlCommand) -> Result<(), Box<dyn std::error::Error>> {
    let bytes = command.to_bytes()?;
    let len = bytes.len() as u32;

    // Write length prefix
    stream.write_all(&len.to_le_bytes())?;
    // Write command bytes
    stream.write_all(&bytes)?;
    stream.flush()?;

    Ok(())
}

fn receive_response(stream: &mut UnixStream) -> Result<ControlResponse, Box<dyn std::error::Error>> {
    // Read length prefix
    let mut len_buf = [0u8; 4];
    stream.read_exact(&mut len_buf)?;
    let len = u32::from_le_bytes(len_buf) as usize;

    // Read response bytes
    let mut response_buf = vec![0u8; len];
    stream.read_exact(&mut response_buf)?;

    // Deserialize response
    let response = ControlResponse::from_bytes(&response_buf)?;
    Ok(response)
}

fn print_response(response: ControlResponse) {
    match response {
        ControlResponse::Status {
            node_id,
            session_id,
            outgoing_queue_size,
            incoming_queue_size,
            connected_peers,
        } => {
            println!("Session Status:");
            println!("  Node ID: {}", node_id);
            println!("  Session: {}", session_id);
            println!("  Outgoing Queue: {} messages", outgoing_queue_size);
            println!("  Incoming Queue: {} messages", incoming_queue_size);
            if let Some(peers) = connected_peers {
                println!("  Connected Peers: {}", peers);
            }
        }
        ControlResponse::SessionInfo(info) => {
            println!("Session Info:");
            println!("  ID: {}", info.session_id);
            if let Some(ref name) = info.session_name {
                println!("  Name: {}", name);
            }
            println!("  State: {:?}", info.state);
            println!("  Entities: {}", info.entity_count);
            println!("  Created: {}", info.created_at);
            println!("  Last Active: {}", info.last_active);
        }
        ControlResponse::Sessions(sessions) => {
            println!("Sessions ({} total):", sessions.len());
            for session in sessions {
                println!("  {}: {:?} ({} entities)", session.session_id, session.state, session.entity_count);
            }
        }
        ControlResponse::Peers(peers) => {
            println!("Connected Peers ({} total):", peers.len());
            for peer in peers {
                print!("  {}", peer.node_id);
                if let Some(since) = peer.connected_since {
                    println!(" (connected since: {})", since);
                } else {
                    println!();
                }
            }
        }
        ControlResponse::Ok { message } => {
            println!("Success: {}", message);
        }
        ControlResponse::Error { error } => {
            eprintln!("Error: {}", error);
            std::process::exit(1);
        }
    }
}
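The client's `send_command`/`receive_response` pair and the server both speak the same wire format: a 4-byte little-endian `u32` length prefix followed by the serialized payload. A minimal sketch of that framing over an in-memory buffer (function names here are illustrative, not from the crate):

```rust
use std::io::{Cursor, Read, Write};

// Write one frame: u32 LE length header, then the payload bytes.
fn write_frame<W: Write>(w: &mut W, payload: &[u8]) -> std::io::Result<()> {
    let len = payload.len() as u32;
    w.write_all(&len.to_le_bytes())?;
    w.write_all(payload)
}

// Read one frame: parse the length header, then read exactly that many bytes.
fn read_frame<R: Read>(r: &mut R) -> std::io::Result<Vec<u8>> {
    let mut len_buf = [0u8; 4];
    r.read_exact(&mut len_buf)?;
    let len = u32::from_le_bytes(len_buf) as usize;
    let mut buf = vec![0u8; len];
    r.read_exact(&mut buf)?;
    Ok(buf)
}

fn main() {
    let mut wire = Vec::new();
    write_frame(&mut wire, b"hello").unwrap();
    // 4 header bytes + 5 payload bytes on the wire.
    assert_eq!(wire.len(), 9);
    let payload = read_frame(&mut Cursor::new(wire)).unwrap();
    assert_eq!(payload, b"hello");
    println!("roundtrip ok");
}
```

The explicit length prefix lets the reader call `read_exact` for the whole message instead of guessing where one bincode payload ends and the next begins on the stream.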
308
crates/app/src/control.rs
Normal file
@@ -0,0 +1,308 @@
//! Standalone control socket for engine control
//!
//! This control socket starts at app launch and allows external control
//! of the engine, including starting/stopping networking, before any
//! networking is initialized.

use anyhow::Result;
use bevy::prelude::*;
use crossbeam_channel::{Receiver, Sender, unbounded};
use libmarathon::{
    engine::{EngineBridge, EngineCommand},
    networking::{ControlCommand, ControlResponse, SessionId},
};
use uuid::Uuid;

/// Resource holding the control socket path
#[derive(Resource)]
pub struct ControlSocketPath(pub String);

/// Resource holding the shutdown sender for control socket
#[derive(Resource)]
pub struct ControlSocketShutdown(Option<Sender<()>>);

pub fn cleanup_control_socket(
    mut exit_events: MessageReader<bevy::app::AppExit>,
    socket_path: Option<Res<ControlSocketPath>>,
    shutdown: Option<Res<ControlSocketShutdown>>,
) {
    for _ in exit_events.read() {
        // Send shutdown signal to control socket thread
        if let Some(ref shutdown_res) = shutdown {
            if let Some(ref sender) = shutdown_res.0 {
                info!("Sending shutdown signal to control socket");
                let _ = sender.send(());
            }
        }

        // Clean up socket file
        if let Some(ref path) = socket_path {
            info!("Cleaning up control socket at {}", path.0);
            let _ = std::fs::remove_file(&path.0);
        }
    }
}

/// Commands that can be sent from the control socket to the app
#[derive(Debug, Clone)]
pub enum AppCommand {
    SpawnEntity {
        entity_type: String,
        position: Vec3,
    },
    DeleteEntity {
        entity_id: Uuid,
    },
}

/// Queue for app-level commands from control socket
#[derive(Resource, Clone)]
pub struct AppCommandQueue {
    sender: Sender<AppCommand>,
    receiver: Receiver<AppCommand>,
}

impl AppCommandQueue {
    pub fn new() -> Self {
        let (sender, receiver) = unbounded();
        Self { sender, receiver }
    }

    pub fn send(&self, command: AppCommand) {
        let _ = self.sender.send(command);
    }

    pub fn try_recv(&self) -> Option<AppCommand> {
        self.receiver.try_recv().ok()
    }
}

impl Default for AppCommandQueue {
    fn default() -> Self {
        Self::new()
    }
}

/// Startup system to launch the control socket server
#[cfg(not(target_os = "ios"))]
#[cfg(debug_assertions)]
pub fn start_control_socket_system(
    mut commands: Commands,
    socket_path_res: Res<ControlSocketPath>,
    bridge: Res<EngineBridge>,
) {
    use tokio::io::AsyncReadExt;
    use tokio::net::UnixListener;

    let socket_path = socket_path_res.0.clone();
    info!("Starting control socket at {}", socket_path);

    // Create app command queue
    let app_queue = AppCommandQueue::new();
    commands.insert_resource(app_queue.clone());

    // Create shutdown channel
    let (shutdown_tx, shutdown_rx) = unbounded::<()>();
    commands.insert_resource(ControlSocketShutdown(Some(shutdown_tx)));

    // Clone bridge and queue for the async task
    let bridge = bridge.clone();
    let queue = app_queue;

    // Spawn tokio runtime in background thread
    std::thread::spawn(move || {
        let rt = tokio::runtime::Runtime::new().unwrap();
        rt.block_on(async move {
            // Clean up any existing socket
            let _ = std::fs::remove_file(&socket_path);

            let listener = match UnixListener::bind(&socket_path) {
                Ok(l) => {
                    info!("Control socket listening at {}", socket_path);
                    l
                }
                Err(e) => {
                    error!("Failed to bind control socket: {}", e);
                    return;
                }
            };

            // Accept connections in a loop with shutdown support
            loop {
                tokio::select! {
                    // Check for shutdown signal
                    _ = tokio::task::spawn_blocking({
                        let rx = shutdown_rx.clone();
                        move || rx.try_recv()
                    }) => {
                        info!("Control socket received shutdown signal");
                        break;
                    }
                    // Accept new connection
                    result = listener.accept() => {
                        match result {
                            Ok((mut stream, _addr)) => {
                                let bridge = bridge.clone();
                                let queue_clone = queue.clone();
                                tokio::spawn(async move {
                                    // Read command length
                                    let mut len_buf = [0u8; 4];
                                    if let Err(e) = stream.read_exact(&mut len_buf).await {
                                        error!("Failed to read command length: {}", e);
                                        return;
                                    }
                                    let len = u32::from_le_bytes(len_buf) as usize;

                                    // Read command bytes
                                    let mut cmd_buf = vec![0u8; len];
                                    if let Err(e) = stream.read_exact(&mut cmd_buf).await {
                                        error!("Failed to read command: {}", e);
                                        return;
                                    }

                                    // Deserialize command
                                    let command = match ControlCommand::from_bytes(&cmd_buf) {
                                        Ok(cmd) => cmd,
                                        Err(e) => {
                                            error!("Failed to deserialize command: {}", e);
                                            let response = ControlResponse::Error {
                                                error: format!("Failed to deserialize: {}", e),
                                            };
                                            let _ = send_response(&mut stream, response).await;
                                            return;
                                        }
                                    };

                                    info!("Received control command: {:?}", command);

                                    // Handle command
                                    let response = handle_command(command, &bridge, &queue_clone).await;

                                    // Send response
                                    if let Err(e) = send_response(&mut stream, response).await {
                                        error!("Failed to send response: {}", e);
                                    }
                                });
                            }
                            Err(e) => {
                                error!("Failed to accept connection: {}", e);
                            }
                        }
                    }
                }
            }
            info!("Control socket server shut down cleanly");
        });
    });
}

/// Handle a control command and generate a response
#[cfg(not(target_os = "ios"))]
#[cfg(debug_assertions)]
async fn handle_command(
    command: ControlCommand,
    bridge: &EngineBridge,
    app_queue: &AppCommandQueue,
) -> ControlResponse {
    match command {
        ControlCommand::JoinSession { session_code } => {
            match SessionId::from_code(&session_code) {
                Ok(session_id) => {
                    bridge.send_command(EngineCommand::StartNetworking {
                        session_id: session_id.clone(),
                    });
                    ControlResponse::Ok {
                        message: format!("Starting networking with session: {}", session_id),
                    }
                }
                Err(e) => ControlResponse::Error {
                    error: format!("Invalid session code: {}", e),
                },
            }
        }

        ControlCommand::LeaveSession => {
            bridge.send_command(EngineCommand::StopNetworking);
            ControlResponse::Ok {
                message: "Stopping networking".to_string(),
            }
        }

        ControlCommand::SpawnEntity { entity_type, position } => {
            app_queue.send(AppCommand::SpawnEntity {
                entity_type,
                position: Vec3::from_array(position),
            });
            ControlResponse::Ok {
                message: "Entity spawn command queued".to_string(),
            }
        }

        ControlCommand::DeleteEntity { entity_id } => {
            app_queue.send(AppCommand::DeleteEntity { entity_id });
            ControlResponse::Ok {
                message: format!("Entity delete command queued for {}", entity_id),
            }
        }

        _ => ControlResponse::Error {
            error: format!("Command {:?} not yet implemented", command),
        },
    }
}

/// System to process app commands from the control socket
pub fn process_app_commands(
    queue: Option<Res<AppCommandQueue>>,
    mut spawn_cube_writer: MessageWriter<crate::cube::SpawnCubeEvent>,
    mut delete_cube_writer: MessageWriter<crate::cube::DeleteCubeEvent>,
) {
    let Some(queue) = queue else { return };

    while let Some(command) = queue.try_recv() {
        match command {
            AppCommand::SpawnEntity { entity_type, position } => {
                match entity_type.as_str() {
                    "cube" => {
                        info!("Spawning cube at {:?}", position);
                        spawn_cube_writer.write(crate::cube::SpawnCubeEvent { position });
                    }
                    _ => {
                        warn!("Unknown entity type: {}", entity_type);
                    }
                }
            }
            AppCommand::DeleteEntity { entity_id } => {
                info!("Deleting entity {}", entity_id);
                delete_cube_writer.write(crate::cube::DeleteCubeEvent { entity_id });
            }
        }
    }
}

/// Send a response back through the Unix socket
#[cfg(not(target_os = "ios"))]
#[cfg(debug_assertions)]
async fn send_response(
    stream: &mut tokio::net::UnixStream,
    response: ControlResponse,
) -> Result<()> {
    use tokio::io::AsyncWriteExt;

    let bytes = response.to_bytes()?;
    let len = bytes.len() as u32;

    stream.write_all(&len.to_le_bytes()).await?;
    stream.write_all(&bytes).await?;
    stream.flush().await?;

    Ok(())
}

// No-op stubs for iOS and release builds
#[cfg(any(target_os = "ios", not(debug_assertions)))]
pub fn start_control_socket_system(mut commands: Commands) {
    // Insert empty shutdown resource for consistency
    commands.insert_resource(ControlSocketShutdown(None));
}
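`AppCommandQueue` wraps a crossbeam unbounded channel so the socket thread can push commands at any time while a Bevy system drains them without blocking each frame. The same drain pattern, sketched with the standard library's `mpsc` channel so it stands alone (the `Command` enum is a hypothetical stand-in for `AppCommand`):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for AppCommand.
#[derive(Debug, PartialEq)]
enum Command {
    Spawn(String),
    Delete(u32),
}

fn main() {
    let (sender, receiver) = mpsc::channel();

    // A background thread (the control-socket handler in the real app)
    // pushes commands whenever requests arrive.
    let tx = sender.clone();
    thread::spawn(move || {
        tx.send(Command::Spawn("cube".into())).unwrap();
        tx.send(Command::Delete(7)).unwrap();
    })
    .join()
    .unwrap();

    // The per-frame system drains the queue non-blockingly,
    // mirroring the `while let Some(cmd) = queue.try_recv()` loop.
    let mut drained = Vec::new();
    while let Ok(cmd) = receiver.try_recv() {
        drained.push(cmd);
    }
    assert_eq!(drained.len(), 2);
    println!("drained {} commands", drained.len());
}
```

Keeping both halves of the channel in one cloneable resource is what lets the queue be handed to the socket thread and to Bevy systems without extra locking.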
@@ -1,27 +1,37 @@
//! Cube entity management

use bevy::prelude::*;
use libmarathon::{
    networking::{
        NetworkEntityMap,
        NetworkedEntity,
        NetworkedSelection,
        NetworkedTransform,
        NodeVectorClock,
        Synced,
    },
    persistence::Persisted,
};
use serde::{
    Deserialize,
    Serialize,
};
use libmarathon::networking::{NetworkEntityMap, Synced};
use uuid::Uuid;

/// Marker component for the replicated cube
#[derive(Component, Reflect, Debug, Clone, Copy, Default, Serialize, Deserialize)]
#[reflect(Component)]
pub struct CubeMarker;
///
/// This component contains all the data needed for rendering a cube.
/// The `#[synced]` attribute automatically handles network synchronization.
#[libmarathon_macros::synced]
pub struct CubeMarker {
    /// RGB color values (0.0 to 1.0)
    pub color_r: f32,
    pub color_g: f32,
    pub color_b: f32,
    pub size: f32,
}

impl CubeMarker {
    pub fn with_color(color: Color, size: f32) -> Self {
        let [r, g, b, _] = color.to_linear().to_f32_array();
        Self {
            color_r: r,
            color_g: g,
            color_b: b,
            size,
        }
    }

    pub fn color(&self) -> Color {
        Color::srgb(self.color_r, self.color_g, self.color_b)
    }
}

/// Message to spawn a new cube at a specific position
#[derive(Message)]
@@ -39,10 +49,33 @@ pub struct CubePlugin;

impl Plugin for CubePlugin {
    fn build(&self, app: &mut App) {
        app.register_type::<CubeMarker>()
            .add_message::<SpawnCubeEvent>()
        app.add_message::<SpawnCubeEvent>()
            .add_message::<DeleteCubeEvent>()
            .add_systems(Update, (handle_spawn_cube, handle_delete_cube));
            .add_systems(Update, (
                handle_spawn_cube,
                handle_delete_cube,
                add_cube_rendering_system, // Custom rendering!
            ));
    }
}

/// Custom rendering system - detects Added<CubeMarker> and adds mesh/material
fn add_cube_rendering_system(
    mut commands: Commands,
    query: Query<(Entity, &CubeMarker), Added<CubeMarker>>,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    for (entity, cube) in &query {
        commands.entity(entity).insert((
            Mesh3d(meshes.add(Cuboid::new(cube.size, cube.size, cube.size))),
            MeshMaterial3d(materials.add(StandardMaterial {
                base_color: cube.color(), // Use the color() helper method
                perceptual_roughness: 0.7,
                metallic: 0.3,
                ..default()
            })),
        ));
    }
}

@@ -50,36 +83,15 @@ impl Plugin for CubePlugin {
fn handle_spawn_cube(
    mut commands: Commands,
    mut messages: MessageReader<SpawnCubeEvent>,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
    node_clock: Res<NodeVectorClock>,
) {
    for event in messages.read() {
        let entity_id = Uuid::new_v4();
        let node_id = node_clock.node_id;

        info!("Spawning cube {} at {:?}", entity_id, event.position);
        info!("Spawning cube at {:?}", event.position);

        commands.spawn((
            CubeMarker,
            // Bevy 3D components
            Mesh3d(meshes.add(Cuboid::new(1.0, 1.0, 1.0))),
            MeshMaterial3d(materials.add(StandardMaterial {
                base_color: Color::srgb(0.8, 0.3, 0.6),
                perceptual_roughness: 0.7,
                metallic: 0.3,
                ..default()
            })),
            CubeMarker::with_color(Color::srgb(0.8, 0.3, 0.6), 1.0),
            Transform::from_translation(event.position),
            GlobalTransform::default(),
            // Networking
            NetworkedEntity::with_id(entity_id, node_id),
            NetworkedTransform,
            NetworkedSelection::default(),
            // Persistence
            Persisted::with_id(entity_id),
            // Sync marker
            Synced,
            Synced, // Auto-adds NetworkedEntity, Persisted, NetworkedTransform
        ));
    }
}
@@ -92,8 +104,14 @@ fn handle_delete_cube(
) {
    for event in messages.read() {
        if let Some(bevy_entity) = entity_map.get_entity(event.entity_id) {
            info!("Deleting cube {}", event.entity_id);
            commands.entity(bevy_entity).despawn();
            info!("Marking cube {} for deletion", event.entity_id);
            // Add ToDelete marker - the handle_local_deletions_system will:
            // 1. Increment vector clock
            // 2. Create Delete operation
            // 3. Record tombstone
            // 4. Broadcast deletion to peers
            // 5. Despawn entity locally
            commands.entity(bevy_entity).insert(libmarathon::networking::ToDelete);
        } else {
            warn!("Attempted to delete unknown cube {}", event.entity_id);
        }
@@ -43,11 +43,10 @@ fn render_debug_ui(
    // Node information
    if let Some(clock) = &node_clock {
        ui.label(format!("Node ID: {}", &clock.node_id.to_string()[..8]));
        // Show the current node's clock value (timestamp)
        let current_timestamp =
            clock.clock.clocks.get(&clock.node_id).copied().unwrap_or(0);
        ui.label(format!("Clock: {}", current_timestamp));
        ui.label(format!("Known nodes: {}", clock.clock.clocks.len()));
        // Show the sum of all timestamps (total operations across all nodes)
        let total_ops: u64 = clock.clock.timestamps.values().sum();
        ui.label(format!("Clock: {} (total ops)", total_ops));
        ui.label(format!("Known nodes: {}", clock.clock.node_count()));
    } else {
        ui.label("Node: Not initialized");
    }
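The debug UI reads the clock as a map of per-node counters: summing all counters gives total operations across the session, and the map's size is the number of known nodes. A generic vector-clock sketch of those operations, using integer node ids for brevity (the real `NodeVectorClock` keys on UUIDs, and its exact API isn't shown in this diff):

```rust
use std::collections::HashMap;

// Minimal vector clock: one monotonically increasing counter per node.
#[derive(Default)]
struct VClock {
    timestamps: HashMap<u32, u64>,
}

impl VClock {
    // Local event: bump this node's counter.
    fn tick(&mut self, node: u32) {
        *self.timestamps.entry(node).or_insert(0) += 1;
    }

    // Merge a peer's clock: take the per-node maximum.
    fn merge(&mut self, other: &VClock) {
        for (&node, &t) in &other.timestamps {
            let e = self.timestamps.entry(node).or_insert(0);
            *e = (*e).max(t);
        }
    }

    // Sum of all counters - the "total ops" figure shown in the debug UI.
    fn total_ops(&self) -> u64 {
        self.timestamps.values().sum()
    }

    fn node_count(&self) -> usize {
        self.timestamps.len()
    }
}

fn main() {
    let mut a = VClock::default();
    let mut b = VClock::default();
    a.tick(1);
    a.tick(1);
    b.tick(2);
    a.merge(&b);
    assert_eq!(a.total_ops(), 3);
    assert_eq!(a.node_count(), 2);
    println!("total ops: {}", a.total_ops());
}
```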
@@ -7,7 +7,7 @@
use bevy::prelude::*;
use libmarathon::{
    engine::{EngineBridge, EngineCommand, EngineEvent},
    networking::{CurrentSession, NetworkedEntity, NodeVectorClock, Session, SessionState, VectorClock},
    networking::{CurrentSession, NetworkedEntity, NodeVectorClock, Session, SessionState},
};

pub struct EngineBridgePlugin;
@@ -18,6 +18,8 @@ impl Plugin for EngineBridgePlugin {
        app.add_systems(Update, poll_engine_events);
        // Detect changes and send clock tick commands to engine
        app.add_systems(PostUpdate, detect_changes_and_tick);
        // Handle app exit to stop networking gracefully
        app.add_systems(Update, handle_app_exit);
    }
}

@@ -44,81 +46,144 @@ fn detect_changes_and_tick(
fn poll_engine_events(
    mut commands: Commands,
    bridge: Res<EngineBridge>,
    mut current_session: Option<ResMut<CurrentSession>>,
    mut current_session: ResMut<CurrentSession>,
    mut node_clock: ResMut<NodeVectorClock>,
    mut networking_status: Option<ResMut<crate::session_ui::NetworkingStatus>>,
) {
    let events = (*bridge).poll_events();

    if !events.is_empty() {
        debug!("Polling {} engine events", events.len());
    for event in events {
        match event {
            EngineEvent::NetworkingStarted { session_id, node_id } => {
                info!("🌐 Networking started: session={}, node={}",
            EngineEvent::NetworkingInitializing { session_id, status } => {
                info!("Networking initializing for session {}: {:?}", session_id.to_code(), status);

                // Update NetworkingStatus resource
                if let Some(ref mut net_status) = networking_status {
                    net_status.latest_status = Some(status);
                }

                // Update session state to Joining if not already
                if matches!(current_session.session.state, SessionState::Created) {
                    current_session.session.state = SessionState::Joining;
                }
            }
            EngineEvent::NetworkingStarted { session_id, node_id, bridge: gossip_bridge } => {
                info!("Networking started: session={}, node={}",
                    session_id.to_code(), node_id);

                // Create session if it doesn't exist
                if current_session.is_none() {
                    let mut session = Session::new(session_id.clone());
                    session.state = SessionState::Active;
                    commands.insert_resource(CurrentSession::new(session, VectorClock::new()));
                    info!("Created new session resource: {}", session_id.to_code());
                } else if let Some(ref mut session) = current_session {
                    // Update existing session state to Active
                    session.session.state = SessionState::Active;
                // Clear networking status
                if let Some(ref mut net_status) = networking_status {
                    net_status.latest_status = None;
                }

                // Insert GossipBridge for Bevy systems to use
                commands.insert_resource(gossip_bridge);
                info!("Inserted GossipBridge resource");

                // Update session to use the new session ID and set state to Joining
                // The transition_session_state_system will handle Joining → Active
                // after receiving FullState from peers
                current_session.session = Session::new(session_id.clone());
                current_session.session.state = SessionState::Joining;
                info!("Updated CurrentSession to Joining: {}", session_id.to_code());

                // Update node ID in clock
                node_clock.node_id = node_id;
            }
            EngineEvent::NetworkingFailed { error } => {
                error!("❌ Networking failed: {}", error);
                error!("Networking failed: {}", error);

                // Keep session state as Created (if session exists)
                if let Some(ref mut session) = current_session {
                    session.session.state = SessionState::Created;
                // Clear networking status
                if let Some(ref mut net_status) = networking_status {
                    net_status.latest_status = None;
                }

                // Keep session state as Created
                current_session.session.state = SessionState::Created;
            }
            EngineEvent::NetworkingStopped => {
                info!("🔌 Networking stopped");
                info!("Networking stopped");

                // Update session state to Disconnected (if session exists)
                if let Some(ref mut session) = current_session {
                    session.session.state = SessionState::Disconnected;
                // Clear networking status
                if let Some(ref mut net_status) = networking_status {
                    net_status.latest_status = None;
                }

                // Update session state to Disconnected
                current_session.session.state = SessionState::Disconnected;
            }
            EngineEvent::PeerJoined { node_id } => {
                info!("👋 Peer joined: {}", node_id);
                info!("Peer joined: {}", node_id);

                // Initialize peer in vector clock so it shows up in UI immediately
                node_clock.clock.timestamps.entry(node_id).or_insert(0);

                // TODO(Phase 3.3): Trigger sync
            }
            EngineEvent::PeerLeft { node_id } => {
                info!("👋 Peer left: {}", node_id);
                info!("Peer left: {}", node_id);

                // Remove peer from vector clock
                node_clock.clock.timestamps.remove(&node_id);
            }
            EngineEvent::LockAcquired { entity_id, holder } => {
                debug!("🔒 Lock acquired: entity={}, holder={}", entity_id, holder);
                debug!("Lock acquired: entity={}, holder={}", entity_id, holder);
                // TODO(Phase 3.4): Update lock visuals
            }
            EngineEvent::LockReleased { entity_id } => {
                debug!("🔓 Lock released: entity={}", entity_id);
                debug!("Lock released: entity={}", entity_id);
                // TODO(Phase 3.4): Update lock visuals
            }
            EngineEvent::ClockTicked { sequence, clock: _ } => {
                debug!("Clock ticked: sequence={}", sequence);
                // Clock tick confirmed - no action needed
            }
            EngineEvent::SessionJoined { session_id } => {
                info!("Session joined: {}", session_id.to_code());
                // Update session state
                current_session.session.state = SessionState::Joining;
            }
            EngineEvent::SessionLeft => {
                info!("Session left");
                // Update session state
                current_session.session.state = SessionState::Left;
            }
            EngineEvent::EntitySpawned { entity_id, position, rotation, version: _ } => {
                debug!("Entity spawned: id={}, pos={:?}, rot={:?}", entity_id, position, rotation);
                // TODO: Spawn entity in Bevy
            }
            EngineEvent::EntityUpdated { entity_id, position, rotation, version: _ } => {
                debug!("Entity updated: id={}, pos={:?}, rot={:?}", entity_id, position, rotation);
                // TODO: Update entity in Bevy
            }
            EngineEvent::EntityDeleted { entity_id, version: _ } => {
                debug!("Entity deleted: id={}", entity_id);
                // TODO: Delete entity in Bevy
            }
            EngineEvent::LockDenied { entity_id, current_holder } => {
                debug!("⛔ Lock denied: entity={}, holder={}", entity_id, current_holder);
                // TODO(Phase 3.4): Show visual feedback
                debug!("Lock denied: entity={}, current_holder={}", entity_id, current_holder);
                // TODO: Show lock denied feedback
            }
            EngineEvent::LockExpired { entity_id } => {
                debug!("⏰ Lock expired: entity={}", entity_id);
                // TODO(Phase 3.4): Update lock visuals
            }
            EngineEvent::ClockTicked { sequence, clock } => {
                debug!("🕐 Clock ticked to {}", sequence);

                // Update the NodeVectorClock resource with the new clock state
                node_clock.clock = clock;
            }
            _ => {
                debug!("Unhandled engine event: {:?}", event);
                debug!("Lock expired: entity={}", entity_id);
                // TODO: Update lock visuals
            }
        }
    }
    }
}

/// Handle app exit - send shutdown signal to EngineCore
fn handle_app_exit(
    mut exit_events: MessageReader<bevy::app::AppExit>,
    bridge: Res<EngineBridge>,
) {
    for _ in exit_events.read() {
        info!("App exiting - sending Shutdown command to EngineCore");
        bridge.send_command(EngineCommand::Shutdown);
        // The EngineCore will receive the Shutdown command and gracefully exit
        // its event loop, allowing the tokio runtime thread to complete
    }
}

@@ -6,9 +6,14 @@ use bevy::prelude::*;
use libmarathon::{
    engine::GameAction,
    platform::input::InputController,
    networking::{EntityLockRegistry, NetworkedEntity, NetworkedSelection, NodeVectorClock},
    networking::{
        EntityLockRegistry, LocalSelection, NetworkedEntity,
        NodeVectorClock,
    },
};

use crate::cube::CubeMarker;

use super::event_buffer::InputEventBuffer;

pub struct InputHandlerPlugin;
@@ -16,7 +21,9 @@ pub struct InputHandlerPlugin;
impl Plugin for InputHandlerPlugin {
    fn build(&self, app: &mut App) {
        app.init_resource::<InputControllerResource>()
            .add_systems(Update, handle_game_actions);
            // handle_game_actions updates selection - must run before release_locks_on_deselection_system
            .add_systems(Update, handle_game_actions.before(libmarathon::networking::release_locks_on_deselection_system))
            .add_systems(PostUpdate, update_lock_visuals);
    }
}

@@ -46,9 +53,10 @@ fn to_bevy_vec2(v: glam::Vec2) -> bevy::math::Vec2 {
fn handle_game_actions(
    input_buffer: Res<InputEventBuffer>,
    mut controller_res: ResMut<InputControllerResource>,
    mut lock_registry: ResMut<EntityLockRegistry>,
    lock_registry: Res<EntityLockRegistry>,
    node_clock: Res<NodeVectorClock>,
    mut cube_query: Query<(&NetworkedEntity, &mut Transform, &mut NetworkedSelection), With<crate::cube::CubeMarker>>,
    mut selection: ResMut<LocalSelection>,
    mut cube_query: Query<(&NetworkedEntity, &mut Transform), With<crate::cube::CubeMarker>>,
    camera_query: Query<(&Camera, &GlobalTransform)>,
    window_query: Query<&Window>,
) {
@@ -65,14 +73,23 @@ fn handle_game_actions(
    for action in all_actions {
        match action {
            GameAction::SelectEntity { position } => {
                apply_select_entity(
                // Do raycasting to find which entity (if any) was clicked
                let entity_id = raycast_entity(
                    position,
                    &mut lock_registry,
                    node_id,
                    &mut cube_query,
                    &cube_query,
                    &camera_query,
                    &window_query,
                );

                // Update selection
                // The release_locks_on_deselection_system will automatically handle lock changes
                selection.clear();
                if let Some(id) = entity_id {
                    selection.insert(id);
                    info!("Selected entity {}", id);
                } else {
                    info!("Deselected all entities");
                }
            }

            GameAction::MoveEntity { delta } => {
@@ -98,32 +115,32 @@ fn handle_game_actions(
    }
}

/// Apply SelectEntity action - raycast to find clicked cube and select it
fn apply_select_entity(
/// Raycast to find which entity was clicked
///
/// Returns the network ID of the closest entity hit by the ray, or None if nothing was hit.
fn raycast_entity(
    position: glam::Vec2,
    lock_registry: &mut EntityLockRegistry,
    node_id: uuid::Uuid,
    cube_query: &mut Query<(&NetworkedEntity, &mut Transform, &mut NetworkedSelection), With<crate::cube::CubeMarker>>,
    cube_query: &Query<(&NetworkedEntity, &mut Transform), With<crate::cube::CubeMarker>>,
    camera_query: &Query<(&Camera, &GlobalTransform)>,
    window_query: &Query<&Window>,
) {
) -> Option<uuid::Uuid> {
    // Get the camera and window
    let Ok((camera, camera_transform)) = camera_query.single() else {
        return;
        return None;
    };
    let Ok(window) = window_query.single() else {
        return;
        return None;
    };

    // Convert screen position to world ray
    let Some(ray) = screen_to_world_ray(position, camera, camera_transform, window) else {
        return;
        return None;
    };

    // Find the closest cube hit by the ray
    let mut closest_hit: Option<(uuid::Uuid, f32)> = None;

    for (networked, transform, _) in cube_query.iter() {
    for (networked, transform) in cube_query.iter() {
        // Test ray against cube AABB (1x1x1 cube)
        if let Some(distance) = ray_aabb_intersection(
            ray.origin,
@@ -137,31 +154,7 @@ fn apply_select_entity(
        }
    }

    // If we hit a cube, clear all selections and select this one
    if let Some((hit_entity_id, _)) = closest_hit {
        // Clear all previous selections and locks
        for (networked, _, mut selection) in cube_query.iter_mut() {
            selection.clear();
            lock_registry.release(networked.network_id, node_id);
        }

        // Select and lock the clicked cube
        for (networked, _, mut selection) in cube_query.iter_mut() {
            if networked.network_id == hit_entity_id {
                selection.add(hit_entity_id);
                let _ = lock_registry.try_acquire(hit_entity_id, node_id);
                info!("Selected cube {}", hit_entity_id);
                break;
            }
        }
    } else {
        // Clicked on empty space - deselect all
        for (networked, _, mut selection) in cube_query.iter_mut() {
            selection.clear();
            lock_registry.release(networked.network_id, node_id);
        }
        info!("Deselected all cubes");
    }
    closest_hit.map(|(entity_id, _)| entity_id)
}

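`raycast_entity` defers the actual hit test to `ray_aabb_intersection`, whose body is outside this diff. A standalone sketch of the standard slab-method ray/AABB test it presumably performs, using plain arrays rather than the real glam types:

```rust
// Slab-method ray vs. axis-aligned-box test: intersect the ray's parameter
// interval with each axis slab; a hit exists iff the intervals overlap.
fn ray_aabb_intersection(origin: [f32; 3], dir: [f32; 3], min: [f32; 3], max: [f32; 3]) -> Option<f32> {
    let mut t_near = f32::NEG_INFINITY;
    let mut t_far = f32::INFINITY;
    for i in 0..3 {
        if dir[i].abs() < 1e-8 {
            // Ray parallel to this slab: miss unless the origin lies inside it.
            if origin[i] < min[i] || origin[i] > max[i] {
                return None;
            }
        } else {
            let t1 = (min[i] - origin[i]) / dir[i];
            let t2 = (max[i] - origin[i]) / dir[i];
            let (lo, hi) = if t1 < t2 { (t1, t2) } else { (t2, t1) };
            t_near = t_near.max(lo);
            t_far = t_far.min(hi);
            if t_near > t_far {
                return None;
            }
        }
    }
    // A box entirely behind the ray origin counts as a miss.
    if t_far < 0.0 { None } else { Some(t_near.max(0.0)) }
}

fn main() {
    // Ray along +Z from the origin toward a unit cube centered at z = 5.
    let hit = ray_aabb_intersection([0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [-0.5, -0.5, 4.5], [0.5, 0.5, 5.5]);
    assert_eq!(hit, Some(4.5));
    // A ray pointing away from the box misses.
    assert!(ray_aabb_intersection([0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [-0.5, -0.5, 4.5], [0.5, 0.5, 5.5]).is_none());
    println!("hit at t = {:?}", hit);
}
```

Returning the entry distance `t_near` is what lets the caller keep only the closest cube when several are under the cursor.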
/// Apply MoveEntity action to locked cubes
@@ -169,12 +162,12 @@ fn apply_move_entity(
    delta: glam::Vec2,
    lock_registry: &EntityLockRegistry,
    node_id: uuid::Uuid,
    cube_query: &mut Query<(&NetworkedEntity, &mut Transform, &mut NetworkedSelection), With<crate::cube::CubeMarker>>,
    cube_query: &mut Query<(&NetworkedEntity, &mut Transform), With<crate::cube::CubeMarker>>,
) {
    let bevy_delta = to_bevy_vec2(delta);
    let sensitivity = 0.01; // Scale factor

    for (networked, mut transform, _) in cube_query.iter_mut() {
    for (networked, mut transform) in cube_query.iter_mut() {
        if lock_registry.is_locked_by(networked.network_id, node_id, node_id) {
            transform.translation.x += bevy_delta.x * sensitivity;
            transform.translation.y -= bevy_delta.y * sensitivity; // Invert Y for screen coords
@@ -187,12 +180,12 @@ fn apply_rotate_entity(
    delta: glam::Vec2,
    lock_registry: &EntityLockRegistry,
    node_id: uuid::Uuid,
    cube_query: &mut Query<(&NetworkedEntity, &mut Transform, &mut NetworkedSelection), With<crate::cube::CubeMarker>>,
    cube_query: &mut Query<(&NetworkedEntity, &mut Transform), With<crate::cube::CubeMarker>>,
) {
    let bevy_delta = to_bevy_vec2(delta);
    let sensitivity = 0.01;

    for (networked, mut transform, _) in cube_query.iter_mut() {
    for (networked, mut transform) in cube_query.iter_mut() {
        if lock_registry.is_locked_by(networked.network_id, node_id, node_id) {
            let rotation_x = Quat::from_rotation_y(bevy_delta.x * sensitivity);
            let rotation_y = Quat::from_rotation_x(-bevy_delta.y * sensitivity);
@@ -206,11 +199,11 @@ fn apply_move_depth(
    delta: f32,
    lock_registry: &EntityLockRegistry,
    node_id: uuid::Uuid,
    cube_query: &mut Query<(&NetworkedEntity, &mut Transform, &mut NetworkedSelection), With<crate::cube::CubeMarker>>,
    cube_query: &mut Query<(&NetworkedEntity, &mut Transform), With<crate::cube::CubeMarker>>,
) {
    let sensitivity = 0.1;

    for (networked, mut transform, _) in cube_query.iter_mut() {
    for (networked, mut transform) in cube_query.iter_mut() {
        if lock_registry.is_locked_by(networked.network_id, node_id, node_id) {
            transform.translation.z += delta * sensitivity;
        }
@@ -221,9 +214,9 @@ fn apply_move_depth(
fn apply_reset_entity(
    lock_registry: &EntityLockRegistry,
    node_id: uuid::Uuid,
    cube_query: &mut Query<(&NetworkedEntity, &mut Transform, &mut NetworkedSelection), With<crate::cube::CubeMarker>>,
    cube_query: &mut Query<(&NetworkedEntity, &mut Transform), With<crate::cube::CubeMarker>>,
) {
    for (networked, mut transform, _) in cube_query.iter_mut() {
    for (networked, mut transform) in cube_query.iter_mut() {
        if lock_registry.is_locked_by(networked.network_id, node_id, node_id) {
            transform.translation = Vec3::ZERO;
            transform.rotation = Quat::IDENTITY;
@@ -242,7 +235,7 @@ fn screen_to_world_ray(
    screen_pos: glam::Vec2,
    camera: &Camera,
    camera_transform: &GlobalTransform,
    window: &Window,
    _window: &Window,
) -> Option<Ray> {
    // Convert screen position to viewport position (0..1 range)
    let viewport_pos = Vec2::new(screen_pos.x, screen_pos.y)
|
||||
@@ -317,3 +310,38 @@ fn ray_aabb_intersection(
|
||||
Some(tmin)
|
||||
}
|
||||
}
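Only the tail of `ray_aabb_intersection` is visible in the hunk above. For context, the standard slab method such a helper typically implements can be sketched standalone (illustrative only: plain-array parameters are assumed here, not the file's actual `Ray`/`Vec3` signature):

```rust
// Slab-method ray/AABB intersection sketch (hypothetical standalone version).
// Returns the entry distance t along the ray, or None on a miss.
fn ray_aabb_intersection(origin: [f32; 3], dir: [f32; 3], min: [f32; 3], max: [f32; 3]) -> Option<f32> {
    let (mut tmin, mut tmax) = (f32::NEG_INFINITY, f32::INFINITY);
    for i in 0..3 {
        let inv = 1.0 / dir[i]; // +/- infinity when the ray is parallel to this axis
        let (mut t0, mut t1) = ((min[i] - origin[i]) * inv, (max[i] - origin[i]) * inv);
        if t0 > t1 {
            std::mem::swap(&mut t0, &mut t1);
        }
        tmin = tmin.max(t0);
        tmax = tmax.min(t1);
        if tmin > tmax {
            return None; // the per-axis slabs do not overlap: no hit
        }
    }
    Some(tmin)
}

fn main() {
    // Ray starting at z = -5 pointing toward a unit cube centred at the origin.
    let hit = ray_aabb_intersection([0.0, 0.0, -5.0], [0.0, 0.0, 1.0], [-0.5; 3], [0.5; 3]);
    println!("{:?}", hit); // hits the near face at t = 4.5
}
```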

/// System to update visual appearance based on lock state
///
/// Color scheme:
/// - Green: Locked by us (we can edit)
/// - Blue: Locked by someone else (they can edit, we can't)
/// - Pink: Not locked (nobody is editing)
fn update_lock_visuals(
    lock_registry: Res<EntityLockRegistry>,
    node_clock: Res<NodeVectorClock>,
    mut cubes: Query<(&NetworkedEntity, &mut MeshMaterial3d<StandardMaterial>), With<CubeMarker>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    for (networked, material_handle) in cubes.iter_mut() {
        let entity_id = networked.network_id;

        // Determine color based on lock state
        let node_id = node_clock.node_id;
        let color = if lock_registry.is_locked_by(entity_id, node_id, node_id) {
            // Locked by us - green
            Color::srgb(0.3, 0.8, 0.3)
        } else if lock_registry.is_locked(entity_id, node_id) {
            // Locked by someone else - blue
            Color::srgb(0.3, 0.5, 0.9)
        } else {
            // Not locked - default pink
            Color::srgb(0.8, 0.3, 0.6)
        };

        // Update material color
        if let Some(mat) = materials.get_mut(&material_handle.0) {
            mat.base_color = color;
        }
    }
}

@@ -9,5 +9,4 @@
pub mod event_buffer;
pub mod input_handler;

pub use event_buffer::InputEventBuffer;
pub use input_handler::InputHandlerPlugin;

@@ -9,6 +9,7 @@ pub mod debug_ui;
pub mod engine_bridge;
pub mod input;
pub mod rendering;
pub mod session_ui;
pub mod setup;

pub use cube::CubeMarker;

@@ -3,18 +3,55 @@
//! This demonstrates real-time CRDT synchronization with Apple Pencil input.

use bevy::prelude::*;
use clap::Parser;
use libmarathon::{
    engine::{EngineBridge, EngineCore},
    engine::{
        EngineBridge,
        EngineCore,
    },
    persistence::PersistenceConfig,
};
use std::path::PathBuf;

#[cfg(feature = "headless")]
use bevy::app::ScheduleRunnerPlugin;
#[cfg(feature = "headless")]
use std::time::Duration;

/// Marathon - CRDT-based collaborative editing engine
#[derive(Parser, Debug)]
#[command(version, about, long_about = None)]
struct Args {
    /// Path to the database file
    #[arg(long, default_value = "marathon.db")]
    db_path: String,

    /// Path to the control socket (Unix domain socket)
    #[arg(long, default_value = "/tmp/marathon-control.sock")]
    control_socket: String,

    /// Log level (trace, debug, info, warn, error)
    #[arg(long, default_value = "info")]
    log_level: String,

    /// Path to log file (relative to current directory)
    #[arg(long, default_value = "marathon.log")]
    log_file: String,

    /// Disable log file output (console only)
    #[arg(long, default_value = "false")]
    no_log_file: bool,

    /// Disable console output (file only)
    #[arg(long, default_value = "false")]
    no_console: bool,
}

mod camera;
mod control;
mod cube;
mod debug_ui;
mod engine_bridge;
mod rendering;
mod selection;
mod session;
mod session_ui;
mod setup;
@@ -27,81 +64,262 @@ mod input;
use camera::*;
use cube::*;
use rendering::*;
use selection::*;
use session::*;
use session_ui::*;

fn main() {
    // Parse command-line arguments
    let args = Args::parse();

    // Note: eprintln doesn't work on iOS, but tracing-oslog will once initialized
    eprintln!(">>> RUST ENTRY: main() started");

    // Initialize logging
    tracing_subscriber::fmt()
        .with_env_filter(
            tracing_subscriber::EnvFilter::from_default_env()
                .add_directive("wgpu=error".parse().unwrap())
                .add_directive("naga=warn".parse().unwrap()),
        )
        .init();
    eprintln!(">>> Initializing tracing_subscriber");

    #[cfg(target_os = "ios")]
    {
        use tracing_subscriber::prelude::*;

        let filter = tracing_subscriber::EnvFilter::builder()
            .with_default_directive(tracing::Level::DEBUG.into())
            .from_env_lossy()
            .add_directive("wgpu=error".parse().unwrap())
            .add_directive("naga=warn".parse().unwrap())
            .add_directive("winit=error".parse().unwrap());

        tracing_subscriber::registry()
            .with(filter)
            .with(tracing_oslog::OsLogger::new("io.r3t.aspen", "default"))
            .init();

        info!("OSLog initialized successfully");
    }

    #[cfg(not(target_os = "ios"))]
    {
        use tracing_subscriber::prelude::*;

        // Parse log level from args
        let default_level = args.log_level.parse::<tracing::Level>()
            .unwrap_or_else(|_| {
                eprintln!("Invalid log level '{}', using 'info'", args.log_level);
                tracing::Level::INFO
            });

        // Build filter with default level and quieter dependencies
        let filter = tracing_subscriber::EnvFilter::from_default_env()
            .add_directive(default_level.into())
            .add_directive("wgpu=error".parse().unwrap())
            .add_directive("naga=warn".parse().unwrap());

        // Build subscriber based on combination of flags
        match (args.no_console, args.no_log_file) {
            (false, false) => {
                // Both console and file
                let console_layer = tracing_subscriber::fmt::layer()
                    .with_writer(std::io::stdout);

                let log_path = std::path::PathBuf::from(&args.log_file);
                let log_dir = log_path.parent().unwrap_or_else(|| std::path::Path::new("."));
                let log_filename = log_path.file_name().unwrap().to_str().unwrap();
                let file_appender = tracing_appender::rolling::never(log_dir, log_filename);
                let (non_blocking, _guard) = tracing_appender::non_blocking(file_appender);
                std::mem::forget(_guard);
                let file_layer = tracing_subscriber::fmt::layer()
                    .with_writer(non_blocking)
                    .with_ansi(false);

                tracing_subscriber::registry()
                    .with(filter)
                    .with(console_layer)
                    .with(file_layer)
                    .init();

                eprintln!(">>> Logs written to: {} and console", args.log_file);
            }
            (false, true) => {
                // Console only
                let console_layer = tracing_subscriber::fmt::layer()
                    .with_writer(std::io::stdout);

                tracing_subscriber::registry()
                    .with(filter)
                    .with(console_layer)
                    .init();

                eprintln!(">>> Console logging only (no log file)");
            }
            (true, false) => {
                // File only
                let log_path = std::path::PathBuf::from(&args.log_file);
                let log_dir = log_path.parent().unwrap_or_else(|| std::path::Path::new("."));
                let log_filename = log_path.file_name().unwrap().to_str().unwrap();
                let file_appender = tracing_appender::rolling::never(log_dir, log_filename);
                let (non_blocking, _guard) = tracing_appender::non_blocking(file_appender);
                std::mem::forget(_guard);
                let file_layer = tracing_subscriber::fmt::layer()
                    .with_writer(non_blocking)
                    .with_ansi(false);

                tracing_subscriber::registry()
                    .with(filter)
                    .with(file_layer)
                    .init();

                eprintln!(">>> Logs written to: {} (console disabled)", args.log_file);
            }
            (true, true) => {
                // Neither - warn but initialize anyway
                tracing_subscriber::registry()
                    .with(filter)
                    .init();

                eprintln!(">>> Warning: Both console and file logging disabled!");
            }
        }
    }

    eprintln!(">>> Tracing subscriber initialized");

    // Application configuration
    const APP_NAME: &str = "Aspen";

    // Get platform-appropriate database path
    let db_path = libmarathon::platform::get_database_path(APP_NAME);
    // Use database path from CLI args
    let db_path = std::path::PathBuf::from(&args.db_path);
    let db_path_str = db_path.to_str().unwrap().to_string();
    info!("Database path: {}", db_path_str);
    eprintln!(">>> Database path: {}", db_path_str);

    // Create EngineBridge (for communication between Bevy and EngineCore)
    eprintln!(">>> Creating EngineBridge");
    let (engine_bridge, engine_handle) = EngineBridge::new();
    info!("EngineBridge created");
    eprintln!(">>> EngineBridge created");

    // Spawn EngineCore on tokio runtime (runs in background thread)
    eprintln!(">>> Spawning EngineCore background thread");
    std::thread::spawn(move || {
        eprintln!(">>> [EngineCore thread] Thread started");
        info!("Starting EngineCore on tokio runtime...");
        eprintln!(">>> [EngineCore thread] Creating tokio runtime");
        let rt = tokio::runtime::Runtime::new().unwrap();
        eprintln!(">>> [EngineCore thread] Tokio runtime created");
        rt.block_on(async {
            eprintln!(">>> [EngineCore thread] Creating EngineCore");
            let core = EngineCore::new(engine_handle, &db_path_str);
            eprintln!(">>> [EngineCore thread] Running EngineCore");
            core.run().await;
        });
    });
    info!("EngineCore spawned in background");
    eprintln!(">>> EngineCore thread spawned");

    // Create Bevy app (without winit - we own the event loop)
    eprintln!(">>> Creating Bevy App");
    let mut app = App::new();
    eprintln!(">>> Bevy App created");

    // Insert EngineBridge as a resource for Bevy systems to use
    eprintln!(">>> Inserting EngineBridge resource");
    app.insert_resource(engine_bridge);

    // Use DefaultPlugins but disable winit/window/input (we own those)
    app.add_plugins(
        DefaultPlugins
            .build()
            .disable::<bevy::log::LogPlugin>() // Using tracing-subscriber
            .disable::<bevy::winit::WinitPlugin>() // We own winit
            .disable::<bevy::window::WindowPlugin>() // We own the window
            .disable::<bevy::input::InputPlugin>() // We provide InputEvents directly
            .disable::<bevy::gilrs::GilrsPlugin>() // We handle gamepad input ourselves
    );
    // Plugin setup based on headless vs rendering mode
    #[cfg(not(feature = "headless"))]
    {
        info!("Adding DefaultPlugins (rendering mode)");
        app.add_plugins(
            DefaultPlugins
                .build()
                .disable::<bevy::log::LogPlugin>() // Using tracing-subscriber
                .disable::<bevy::winit::WinitPlugin>() // We own winit
                .disable::<bevy::window::WindowPlugin>() // We own the window
                .disable::<bevy::input::InputPlugin>() // We provide InputEvents directly
                .disable::<bevy::gilrs::GilrsPlugin>(), // We handle gamepad input ourselves
        );
        info!("DefaultPlugins added");
    }

    // Marathon core plugins (networking, debug UI, persistence)
    app.add_plugins(libmarathon::MarathonPlugin::new(
        APP_NAME,
        PersistenceConfig {
            flush_interval_secs: 2,
            checkpoint_interval_secs: 30,
            battery_adaptive: true,
            ..Default::default()
        },
    ));
    #[cfg(feature = "headless")]
    {
        info!("Adding MinimalPlugins (headless mode)");
        app.add_plugins(
            MinimalPlugins.set(ScheduleRunnerPlugin::run_loop(
                Duration::from_secs_f64(1.0 / 60.0), // 60 FPS
            )),
        );
        info!("MinimalPlugins added");
    }

    // Marathon core plugins based on mode
    #[cfg(not(feature = "headless"))]
    {
        info!("Adding MarathonPlugin (with debug UI)");
        app.add_plugins(libmarathon::MarathonPlugin::new(
            APP_NAME,
            PersistenceConfig {
                flush_interval_secs: 2,
                checkpoint_interval_secs: 30,
                battery_adaptive: true,
                ..Default::default()
            },
        ));
    }

    #[cfg(feature = "headless")]
    {
        info!("Adding networking and persistence (headless, no UI)");
        app.add_plugins(libmarathon::networking::NetworkingPlugin::new(Default::default()));
        app.add_plugins(libmarathon::persistence::PersistencePlugin::with_config(
            db_path.clone(),
            PersistenceConfig {
                flush_interval_secs: 2,
                checkpoint_interval_secs: 30,
                battery_adaptive: true,
                ..Default::default()
            },
        ));
    }

    info!("Marathon plugins added");

    // App-specific bridge for polling engine events
    info!("Adding app plugins");
    app.add_plugins(EngineBridgePlugin);
    app.add_plugins(CameraPlugin);
    app.add_plugins(RenderingPlugin);
    app.add_plugins(input::InputHandlerPlugin);
    app.add_plugins(CubePlugin);
    app.add_plugins(SelectionPlugin);
    app.add_plugins(DebugUiPlugin);
    app.add_plugins(SessionUiPlugin);
    app.add_systems(Startup, initialize_offline_resources);

    libmarathon::platform::run_executor(app).expect("Failed to run executor");
    // Configure fixed timestep for deterministic game logic at 60fps
    app.insert_resource(Time::<Fixed>::from_hz(60.0));

    // Insert control socket path as resource
    app.insert_resource(control::ControlSocketPath(args.control_socket.clone()));
    app.add_systems(Startup, control::start_control_socket_system);
    app.add_systems(Update, (control::process_app_commands, control::cleanup_control_socket));

    // Rendering-only plugins
    #[cfg(not(feature = "headless"))]
    {
        app.add_plugins(CameraPlugin);
        app.add_plugins(RenderingPlugin);
        app.add_plugins(input::InputHandlerPlugin);
        // SelectionPlugin removed - InputHandlerPlugin already handles selection via GameActions
        app.add_plugins(DebugUiPlugin);
        app.add_plugins(SessionUiPlugin);
    }

    info!("All plugins added");

    // Run the app based on mode
    #[cfg(not(feature = "headless"))]
    {
        info!("Running platform executor (rendering mode)");
        libmarathon::platform::run_executor(app).expect("Failed to run executor");
    }

    #[cfg(feature = "headless")]
    {
        info!("Running headless app loop");
        app.run();
    }
}

@@ -6,17 +6,17 @@
use bevy::prelude::*;
use libmarathon::{
    networking::{
        EntityLockRegistry, NetworkEntityMap, NodeVectorClock, VectorClock,
        CurrentSession, EntityLockRegistry, NetworkEntityMap, NodeVectorClock, Session, VectorClock,
    },
};
use uuid::Uuid;

/// Initialize offline resources on app startup
///
/// This sets up the vector clock and networking-related resources, but does NOT
/// create a session. Sessions only exist when networking is active.
/// This sets up the vector clock and networking-related resources.
/// Creates an offline CurrentSession that will be updated when networking starts.
pub fn initialize_offline_resources(world: &mut World) {
    info!("Initializing offline resources (no session yet)...");
    info!("Initializing offline resources...");

    // Create node ID (persists for this app instance)
    let node_id = Uuid::new_v4();
@@ -32,5 +32,11 @@ pub fn initialize_offline_resources(world: &mut World) {
    world.insert_resource(NetworkEntityMap::default());
    world.insert_resource(EntityLockRegistry::default());

    info!("Offline resources initialized (vector clock ready)");
    // Create offline session (will be updated when networking starts)
    // This ensures CurrentSession resource always exists for UI binding
    let offline_session_id = libmarathon::networking::SessionId::new();
    let offline_session = Session::new(offline_session_id);
    world.insert_resource(CurrentSession::new(offline_session, VectorClock::new()));

    info!("Offline resources initialized (vector clock ready, session created in offline state)");
}

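The vector clock resource set up above is what the session UI later reports as the "Local sequence". Its semantics can be illustrated with a minimal standalone sketch (a generic vector clock over plain integers; this is NOT libmarathon's actual `VectorClock` API):

```rust
use std::collections::HashMap;

// Illustrative vector-clock sketch: each node tracks the highest sequence
// number it has seen from every node (including itself).
#[derive(Default, Clone, Debug)]
struct VectorClock {
    timestamps: HashMap<u64, u64>, // node id -> latest sequence number
}

impl VectorClock {
    // A local change bumps only our own entry and returns the new sequence.
    fn tick(&mut self, node: u64) -> u64 {
        let seq = self.timestamps.entry(node).or_insert(0);
        *seq += 1;
        *seq
    }

    // Merging takes the per-node maximum, so the result dominates both inputs.
    fn merge(&mut self, other: &VectorClock) {
        for (node, &seq) in &other.timestamps {
            let entry = self.timestamps.entry(*node).or_insert(0);
            *entry = (*entry).max(seq);
        }
    }
}

fn main() {
    let mut local = VectorClock::default();
    let mut remote = VectorClock::default();
    local.tick(1);
    local.tick(1); // two local edits on node 1
    remote.tick(2); // one edit on node 2
    local.merge(&remote);
    println!("node 1 = {}, node 2 = {}", local.timestamps[&1], local.timestamps[&2]);
}
```

The merge rule is what lets an offline clock be folded into a session clock later without losing local history.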
@@ -6,8 +6,8 @@
use bevy::prelude::*;
use libmarathon::{
    debug_ui::{egui, EguiContexts, EguiPrimaryContextPass},
    engine::{EngineBridge, EngineCommand},
    networking::{CurrentSession, NodeVectorClock, SessionId},
    engine::{EngineBridge, EngineCommand, NetworkingInitStatus},
    networking::{CurrentSession, NodeVectorClock, SessionId, SessionState},
};

pub struct SessionUiPlugin;
@@ -15,10 +15,16 @@ pub struct SessionUiPlugin;
impl Plugin for SessionUiPlugin {
    fn build(&self, app: &mut App) {
        app.init_resource::<SessionUiState>()
            .init_resource::<NetworkingStatus>()
            .add_systems(EguiPrimaryContextPass, session_ui_panel);
    }
}

#[derive(Resource, Default)]
pub struct NetworkingStatus {
    pub latest_status: Option<NetworkingInitStatus>,
}

#[derive(Resource, Default)]
struct SessionUiState {
    join_code_input: String,
@@ -28,10 +34,16 @@ struct SessionUiState {
fn session_ui_panel(
    mut contexts: EguiContexts,
    mut ui_state: ResMut<SessionUiState>,
    current_session: Option<Res<CurrentSession>>,
    current_session: Res<CurrentSession>,
    node_clock: Option<Res<NodeVectorClock>>,
    bridge: Res<EngineBridge>,
    networking_status: Res<NetworkingStatus>,
) {
    // Log session state for debugging
    debug!("Session UI: state={:?}, id={}",
        current_session.session.state,
        current_session.session.id.to_code());

    let Ok(ctx) = contexts.ctx_mut() else {
        return;
    };
@@ -40,64 +52,107 @@ fn session_ui_panel(
        .default_pos([320.0, 10.0])
        .default_width(280.0)
        .show(ctx, |ui| {
            if let Some(session) = current_session.as_ref() {
                // ONLINE MODE: Session exists, networking is active
                ui.heading("Session (Online)");
                ui.separator();
            // Display UI based on session state
            match current_session.session.state {
                SessionState::Active => {
                    // ONLINE MODE: Networking is active
                    ui.heading("Session (Online)");
                    ui.separator();

                ui.horizontal(|ui| {
                    ui.label("Code:");
                    ui.code(session.session.id.to_code());
                    if ui.small_button("📋").clicked() {
                        // TODO: Copy to clipboard (requires clipboard API)
                        info!("Session code: {}", session.session.id.to_code());
                    }
                });

                ui.label(format!("State: {:?}", session.session.state));

                if let Some(clock) = node_clock.as_ref() {
                    ui.label(format!("Connected nodes: {}", clock.clock.clocks.len()));
                }

                ui.add_space(10.0);

                // Stop networking button
                if ui.button("🔌 Stop Networking").clicked() {
                    info!("Stopping networking");
                    bridge.send_command(EngineCommand::StopNetworking);
                }
            } else {
                // OFFLINE MODE: No session, networking not started
                ui.heading("Offline Mode");
                ui.separator();

                ui.label("World is running offline");
                ui.label("Vector clock is tracking changes");

                if let Some(clock) = node_clock.as_ref() {
                    let current_seq = clock.clock.clocks.get(&clock.node_id).copied().unwrap_or(0);
                    ui.label(format!("Local sequence: {}", current_seq));
                }

                ui.add_space(10.0);

                // Start networking button
                if ui.button("🌐 Start Networking").clicked() {
                    info!("Starting networking (will create new session)");
                    // Generate a new session ID on the fly
                    let new_session_id = libmarathon::networking::SessionId::new();
                    info!("New session code: {}", new_session_id.to_code());
                    bridge.send_command(EngineCommand::StartNetworking {
                        session_id: new_session_id,
                    ui.horizontal(|ui| {
                        ui.label("Code:");
                        ui.code(current_session.session.id.to_code());
                        if ui.small_button("📋").clicked() {
                            // TODO: Copy to clipboard (requires clipboard API)
                            info!("Session code: {}", current_session.session.id.to_code());
                        }
                    });

                    ui.label(format!("State: {:?}", current_session.session.state));

                    if let Some(clock) = node_clock.as_ref() {
                        ui.label(format!("Connected nodes: {}", clock.clock.node_count()));
                    }

                    ui.add_space(10.0);

                    // Stop networking button
                    if ui.button("🔌 Stop Networking").clicked() {
                        info!("Stopping networking");
                        bridge.send_command(EngineCommand::StopNetworking);
                    }
                }
                SessionState::Joining => {
                    // INITIALIZING: Networking is starting up
                    ui.heading("Connecting...");
                    ui.separator();

                    ui.add_space(5.0);
                    // Display initialization status
                    if let Some(ref status) = networking_status.latest_status {
                        match status {
                            NetworkingInitStatus::CreatingEndpoint => {
                                ui.label("⏳ Creating network endpoint...");
                            }
                            NetworkingInitStatus::EndpointReady => {
                                ui.label("✓ Network endpoint ready");
                            }
                            NetworkingInitStatus::DiscoveringPeers { session_code, attempt } => {
                                ui.label(format!("🔍 Discovering peers for session {}", session_code));
                                ui.label(format!("  Attempt {}/3...", attempt));
                            }
                            NetworkingInitStatus::PeersFound { count } => {
                                ui.label(format!("✓ Found {} peer(s)!", count));
                            }
                            NetworkingInitStatus::NoPeersFound => {
                                ui.label("ℹ No existing peers found");
                                ui.label("  (Creating new session)");
                            }
                            NetworkingInitStatus::PublishingToDHT => {
                                ui.label("📡 Publishing to DHT...");
                            }
                            NetworkingInitStatus::InitializingGossip => {
                                ui.label("🔧 Initializing gossip protocol...");
                            }
                        }
                    } else {
                        ui.label("⏳ Initializing...");
                    }

                // Join existing session button
                if ui.button("➕ Join Session").clicked() {
                    ui_state.show_join_dialog = true;
                    ui.add_space(10.0);
                    ui.label("Please wait...");
                }
                _ => {
                    // OFFLINE MODE: Networking not started or disconnected
                    ui.heading("Offline Mode");
                    ui.separator();

                    ui.label("World is running offline");
                    ui.label("Vector clock is tracking changes");

                    if let Some(clock) = node_clock.as_ref() {
                        let current_seq = clock.clock.timestamps.get(&clock.node_id).copied().unwrap_or(0);
                        ui.label(format!("Local sequence: {}", current_seq));
                    }

                    ui.add_space(10.0);

                    // Start networking button
                    if ui.button("🌐 Start Networking").clicked() {
                        info!("Starting networking (will create new session)");
                        // Generate a new session ID on the fly
                        let new_session_id = libmarathon::networking::SessionId::new();
                        info!("New session code: {}", new_session_id.to_code());
                        bridge.send_command(EngineCommand::StartNetworking {
                            session_id: new_session_id,
                        });
                    }

                    ui.add_space(5.0);

                    // Join existing session button
                    if ui.button("➕ Join Session").clicked() {
                        ui_state.show_join_dialog = true;
                    }
                }
            }
        });
@@ -108,7 +163,10 @@ fn session_ui_panel(
        .collapsible(false)
        .show(ctx, |ui| {
            ui.label("Enter session code (abc-def-123):");
            ui.text_edit_singleline(&mut ui_state.join_code_input);
            let text_edit = ui.text_edit_singleline(&mut ui_state.join_code_input);

            // Auto-focus the text input when dialog opens
            text_edit.request_focus();

            ui.add_space(5.0);
            ui.label("Note: Joining requires app restart")

253 crates/app/src/setup/control_socket.rs Normal file
@@ -0,0 +1,253 @@
//! Unix domain socket control server for remote engine control
//!
//! This module provides a Unix socket server for controlling the engine
//! programmatically without needing screen access or network ports.
//!
//! # Security
//!
//! Currently debug-only. See issue #135 for production security requirements.

use anyhow::Result;
use bevy::prelude::*;
use libmarathon::networking::{ControlCommand, ControlResponse, GossipBridge, SessionId};
use uuid::Uuid;

/// Spawn Unix domain socket control server for remote engine control
///
/// This spawns a tokio task that listens on a Unix socket for control commands.
/// The socket path is `/tmp/marathon-{session_id}.sock`.
///
/// **Security Note**: This is currently debug-only. See issue #135 for production
/// security requirements (authentication, rate limiting, etc.).
///
/// # Platform Support
///
/// This function is only compiled on non-iOS platforms.
#[cfg(not(target_os = "ios"))]
#[cfg(debug_assertions)]
pub fn spawn_control_socket(session_id: SessionId, bridge: GossipBridge, node_id: Uuid) {
    use tokio::io::AsyncReadExt;
    use tokio::net::UnixListener;

    let socket_path = format!("/tmp/marathon-{}.sock", session_id);

    tokio::spawn(async move {
        // Clean up any existing socket
        let _ = std::fs::remove_file(&socket_path);

        let listener = match UnixListener::bind(&socket_path) {
            Ok(l) => {
                info!("Control socket listening at {}", socket_path);
                l
            }
            Err(e) => {
                error!("Failed to bind control socket at {}: {}", socket_path, e);
                return;
            }
        };

        // Accept connections in a loop
        loop {
            match listener.accept().await {
                Ok((mut stream, _addr)) => {
                    let bridge = bridge.clone();
                    let session_id = session_id.clone();

                    // Spawn a task to handle this connection
                    tokio::spawn(async move {
                        // Read command length (4 bytes)
                        let mut len_buf = [0u8; 4];
                        if let Err(e) = stream.read_exact(&mut len_buf).await {
                            error!("Failed to read command length: {}", e);
                            return;
                        }
                        let len = u32::from_le_bytes(len_buf) as usize;

                        // Read command bytes
                        let mut cmd_buf = vec![0u8; len];
                        if let Err(e) = stream.read_exact(&mut cmd_buf).await {
                            error!("Failed to read command: {}", e);
                            return;
                        }

                        // Deserialize command
                        let command = match ControlCommand::from_bytes(&cmd_buf) {
                            Ok(cmd) => cmd,
                            Err(e) => {
                                error!("Failed to deserialize command: {}", e);
                                let response = ControlResponse::Error {
                                    error: format!("Failed to deserialize command: {}", e),
                                };
                                let _ = send_response(&mut stream, response).await;
                                return;
                            }
                        };

                        info!("Received control command: {:?}", command);

                        // Execute command
                        let response = handle_control_command(command, &bridge, session_id, node_id).await;

                        // Send response
                        if let Err(e) = send_response(&mut stream, response).await {
                            error!("Failed to send response: {}", e);
                        }
                    });
                }
                Err(e) => {
                    error!("Failed to accept control socket connection: {}", e);
                }
            }
        }
    });
}
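A client talking to this socket must use the same wire format the server reads above: a 4-byte little-endian length prefix followed by the serialized command bytes. The framing itself can be sketched in isolation (the helper names `frame`/`unframe` are illustrative, not part of the crate):

```rust
// Length-prefixed framing matching the control socket's read path:
// 4-byte little-endian payload length, then the payload itself.
fn frame(payload: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(4 + payload.len());
    buf.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    buf.extend_from_slice(payload);
    buf
}

// Inverse operation: parse the length header and slice out the payload.
// Returns None if the buffer is too short to contain a complete frame.
fn unframe(buf: &[u8]) -> Option<&[u8]> {
    let len = u32::from_le_bytes(buf.get(..4)?.try_into().ok()?) as usize;
    buf.get(4..4 + len)
}

fn main() {
    let framed = frame(b"get-status");
    assert_eq!(unframe(&framed), Some(&b"get-status"[..]));
    println!("framed {} bytes", framed.len()); // 4-byte header + 10-byte payload = 14
}
```

A real client would write the framed bytes to `/tmp/marathon-{session_id}.sock` with `UnixStream` and then read back a response, presumably framed the same way by `send_response`.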
|
||||
|
||||
/// Handle a control command and return a response
|
||||
#[cfg(not(target_os = "ios"))]
#[cfg(debug_assertions)]
async fn handle_control_command(
    command: ControlCommand,
    bridge: &GossipBridge,
    session_id: SessionId,
    node_id: Uuid,
) -> ControlResponse {
    match command {
        ControlCommand::GetStatus => {
            // Get queue sizes from bridge
            let outgoing_size = bridge.try_recv_outgoing().map(|msg| {
                // Put it back
                let _ = bridge.send(msg);
                1
            }).unwrap_or(0);

            ControlResponse::Status {
                node_id,
                session_id,
                outgoing_queue_size: outgoing_size,
                incoming_queue_size: 0, // We'd need to peek without consuming
                connected_peers: None, // Not easily available from bridge
            }
        }
        ControlCommand::SendTestMessage { content } => {
            use libmarathon::networking::{VersionedMessage, VectorClock, SyncMessage};

            // Send a SyncRequest as a test message (lightweight ping-like message)
            let message = SyncMessage::SyncRequest {
                node_id,
                vector_clock: VectorClock::new(),
            };
            let versioned = VersionedMessage::new(message);

            match bridge.send(versioned) {
                Ok(_) => ControlResponse::Ok {
                    message: format!("Sent test message: {}", content),
                },
                Err(e) => ControlResponse::Error {
                    error: format!("Failed to send: {}", e),
                },
            }
        }
        ControlCommand::InjectMessage { message } => {
            match bridge.push_incoming(message) {
                Ok(_) => ControlResponse::Ok {
                    message: "Message injected into incoming queue".to_string(),
                },
                Err(e) => ControlResponse::Error {
                    error: format!("Failed to inject message: {}", e),
                },
            }
        }
        ControlCommand::BroadcastMessage { message } => {
            use libmarathon::networking::VersionedMessage;

            let versioned = VersionedMessage::new(message);
            match bridge.send(versioned) {
                Ok(_) => ControlResponse::Ok {
                    message: "Message broadcast".to_string(),
                },
                Err(e) => ControlResponse::Error {
                    error: format!("Failed to broadcast: {}", e),
                },
            }
        }
        ControlCommand::Shutdown => {
            warn!("Shutdown command received via control socket");
            ControlResponse::Ok {
                message: "Shutdown not yet implemented".to_string(),
            }
        }

        // Session lifecycle commands (TODO: implement these properly)
        ControlCommand::JoinSession { session_code } => {
            ControlResponse::Error {
                error: format!("JoinSession not yet implemented (requested: {})", session_code),
            }
        }
        ControlCommand::LeaveSession => {
            ControlResponse::Error {
                error: "LeaveSession not yet implemented".to_string(),
            }
        }
        ControlCommand::GetSessionInfo => {
            ControlResponse::Error {
                error: "GetSessionInfo not yet implemented".to_string(),
            }
        }
        ControlCommand::ListSessions => {
            ControlResponse::Error {
                error: "ListSessions not yet implemented".to_string(),
            }
        }
        ControlCommand::DeleteSession { session_code } => {
            ControlResponse::Error {
                error: format!("DeleteSession not yet implemented (requested: {})", session_code),
            }
        }
        ControlCommand::ListPeers => {
            ControlResponse::Error {
                error: "ListPeers not yet implemented".to_string(),
            }
        }
        ControlCommand::SpawnEntity { .. } => {
            ControlResponse::Error {
                error: "SpawnEntity not available on session-level socket. Use app-level socket.".to_string(),
            }
        }
        ControlCommand::DeleteEntity { .. } => {
            ControlResponse::Error {
                error: "DeleteEntity not available on session-level socket. Use app-level socket.".to_string(),
            }
        }
    }
}

/// Send a response back through the Unix socket
#[cfg(not(target_os = "ios"))]
#[cfg(debug_assertions)]
async fn send_response(
    stream: &mut tokio::net::UnixStream,
    response: ControlResponse,
) -> Result<()> {
    use tokio::io::AsyncWriteExt;

    let bytes = response.to_bytes()?;
    let len = bytes.len() as u32;

    // Write length prefix
    stream.write_all(&len.to_le_bytes()).await?;
    // Write response bytes
    stream.write_all(&bytes).await?;
    stream.flush().await?;

    Ok(())
}

// No-op stub for iOS builds
#[cfg(target_os = "ios")]
pub fn spawn_control_socket(_session_id: SessionId, _bridge: GossipBridge, _node_id: Uuid) {}

// No-op stub for release builds
#[cfg(all(not(target_os = "ios"), not(debug_assertions)))]
pub fn spawn_control_socket(_session_id: SessionId, _bridge: GossipBridge, _node_id: Uuid) {
    // TODO(#135): Implement secure control socket for release builds with authentication
}
@@ -47,11 +47,15 @@
//! 2. **Tokio → Bevy**: GossipBridge's internal queue (push_incoming)
//! 3. **Thread handoff**: crossbeam_channel (one-time GossipBridge transfer)

mod control_socket;

use anyhow::Result;
use bevy::prelude::*;
use libmarathon::networking::{GossipBridge, SessionId};
use uuid::Uuid;

use control_socket::spawn_control_socket;

/// Session ID to use for network initialization
///
/// This resource must be inserted before setup_gossip_networking runs.
@@ -222,11 +226,12 @@ async fn init_gossip(session_id: SessionId) -> Result<GossipBridge> {
    let (sender, mut receiver) = subscribe_handle.split();

    // Wait for join (with timeout since we might be the first node)
    info!("Waiting for gossip join...");
    match tokio::time::timeout(std::time::Duration::from_secs(2), receiver.joined()).await {
        | Ok(Ok(())) => info!("Joined gossip swarm"),
    // Increased timeout to 10s to allow mDNS discovery to work
    info!("Waiting for gossip join (10s timeout for mDNS discovery)...");
    match tokio::time::timeout(std::time::Duration::from_secs(10), receiver.joined()).await {
        | Ok(Ok(())) => info!("Joined gossip swarm successfully"),
        | Ok(Err(e)) => warn!("Join error: {} (proceeding anyway)", e),
        | Err(_) => info!("Join timeout (first node in swarm)"),
        | Err(_) => info!("Join timeout - likely first node in swarm (proceeding anyway)"),
    }

    // Create bridge
@@ -236,6 +241,9 @@ async fn init_gossip(session_id: SessionId) -> Result<GossipBridge> {
    // Spawn forwarding tasks - pass endpoint, router, gossip to keep them alive
    spawn_bridge_tasks(sender, receiver, bridge.clone(), endpoint, router, gossip);

    // Spawn control socket server for remote control (debug only)
    spawn_control_socket(session_id, bridge.clone(), node_id);

    Ok(bridge)
}

@@ -285,7 +293,7 @@ fn spawn_bridge_tasks(

    loop {
        if let Some(msg) = bridge_out.try_recv_outgoing() {
            if let Ok(bytes) = bincode::serialize(&msg) {
            if let Ok(bytes) = rkyv::to_bytes::<rkyv::rancor::Failure>(&msg).map(|b| b.to_vec()) {
                if let Err(e) = sender.broadcast(Bytes::from(bytes)).await {
                    error!("[Node {}] Broadcast failed: {}", node_id, e);
                }
@@ -301,14 +309,26 @@ fn spawn_bridge_tasks(
    loop {
        match tokio::time::timeout(Duration::from_millis(100), receiver.next()).await {
            | Ok(Some(Ok(event))) => {
                if let iroh_gossip::api::Event::Received(msg) = event {
                    if let Ok(versioned_msg) =
                        bincode::deserialize::<VersionedMessage>(&msg.content)
                    {
                        if let Err(e) = bridge_in.push_incoming(versioned_msg) {
                            error!("[Node {}] Push incoming failed: {}", node_id, e);
                match event {
                    | iroh_gossip::api::Event::Received(msg) => {
                        info!("[Node {}] Received message from gossip", node_id);
                        if let Ok(versioned_msg) =
                            rkyv::from_bytes::<VersionedMessage, rkyv::rancor::Failure>(&msg.content)
                        {
                            if let Err(e) = bridge_in.push_incoming(versioned_msg) {
                                error!("[Node {}] Push incoming failed: {}", node_id, e);
                            }
                        }
                    },
                    | iroh_gossip::api::Event::NeighborUp(peer_id) => {
                        info!("[Node {}] Peer connected: {}", node_id, peer_id);
                    },
                    | iroh_gossip::api::Event::NeighborDown(peer_id) => {
                        warn!("[Node {}] Peer disconnected: {}", node_id, peer_id);
                    },
                    | iroh_gossip::api::Event::Lagged => {
                        warn!("[Node {}] Event stream lagged - some events may have been missed", node_id);
                    },
                }
            },
            | Ok(Some(Err(e))) => error!("[Node {}] Receiver error: {}", node_id, e),
@@ -107,13 +107,11 @@ mod test_utils {
        },
    ));

    // Register cube component types for reflection
    app.register_type::<CubeMarker>();

    app
}

/// Count entities with CubeMarker component
#[allow(dead_code)]
pub fn count_cubes(world: &mut World) -> usize {
    let mut query = world.query::<&CubeMarker>();
    query.iter(world).count()
@@ -308,7 +306,7 @@ mod test_utils {
        "[Node {}] Sending message #{} via gossip",
        node_id, msg_count
    );
    match bincode::serialize(&versioned_msg) {
    match rkyv::to_bytes::<rkyv::rancor::Failure>(&versioned_msg).map(|b| b.to_vec()) {
        | Ok(bytes) => {
            if let Err(e) = sender.broadcast(Bytes::from(bytes)).await {
                eprintln!("[Node {}] Failed to broadcast message: {}", node_id, e);
@@ -349,7 +347,7 @@ mod test_utils {
        "[Node {}] Received message #{} from gossip",
        node_id, msg_count
    );
    match bincode::deserialize::<VersionedMessage>(&msg.content) {
    match rkyv::from_bytes::<VersionedMessage, rkyv::rancor::Failure>(&msg.content) {
        | Ok(versioned_msg) => {
            if let Err(e) = bridge_in.push_incoming(versioned_msg) {
                eprintln!(
@@ -424,7 +422,7 @@ async fn test_cube_spawn_and_sync() -> Result<()> {
    let spawned_entity = app1
        .world_mut()
        .spawn((
            CubeMarker,
            CubeMarker::with_color(Color::srgb(0.8, 0.3, 0.6), 1.0),
            Transform::from_xyz(1.0, 2.0, 3.0),
            GlobalTransform::default(),
            NetworkedEntity::with_id(entity_id, node1_id),

@@ -1,52 +1,109 @@
[package]
name = "libmarathon"
version = "0.1.0"
version = "0.1.1"
edition.workspace = true
description = "A peer-to-peer game engine development kit with CRDT-based state synchronization"
license = "MIT"
repository = "https://github.com/r3t-studios/marathon"
homepage = "https://github.com/r3t-studios/marathon"
readme = "../../README.md"
keywords = ["gamedev", "p2p", "crdt", "multiplayer", "bevy"]
categories = ["game-engines", "network-programming"]

[dependencies]
anyhow.workspace = true
arboard = "3.4"
bevy.workspace = true
bincode = "1.3"
rkyv.workspace = true

# Bevy subcrates required by vendored rendering (bevy_render, bevy_core_pipeline, bevy_pbr)
bevy_app = "0.17.2"
bevy_asset = "0.17.2"
bevy_camera = "0.17.2"
bevy_color = "0.17.2"
bevy_derive = "0.17.2"
bevy_diagnostic = "0.17.2"
bevy_ecs = "0.17.2"
bevy_encase_derive = "0.17.2"
bevy_image = "0.17.2"
bevy_light = "0.17.2"
bevy_math = "0.17.2"
bevy_mesh = "0.17.2"
bevy_platform = { version = "0.17.2", default-features = false }
bevy_reflect = "0.17.2"
libmarathon-macros = { version = "0.1.1", path = "../macros" }
bevy_shader = "0.17.2"
bevy_tasks = "0.17.2"
bevy_time = "0.17.2"
bevy_transform = "0.17.2"
bevy_utils = "0.17.2"
bevy_window = "0.17.2"

# Additional dependencies required by vendored rendering crates
wgpu = { version = "26", default-features = false, features = ["dx12", "metal"] }
naga = { version = "26", features = ["wgsl-in"] }
downcast-rs = { version = "2", default-features = false, features = ["std"] }
derive_more = { version = "2", default-features = false, features = ["from"] }
image = { version = "0.25.2", default-features = false }
bitflags = { version = "2.3", features = ["bytemuck"] }
fixedbitset = "0.5"
radsort = "0.1"
nonmax = "0.5"
smallvec = { version = "1", default-features = false }
indexmap = "2.0"
async-channel = "2.3"
offset-allocator = "0.2"
variadics_please = "1.1"
static_assertions = "1.1"

blake3 = "1.5"
blocking = "1.6"
hex.workspace = true
bytemuck = { version = "1.14", features = ["derive"] }
bytes = "1.0"
chrono = { version = "0.4", features = ["serde"] }
bytes.workspace = true
chrono.workspace = true
crdts.workspace = true
crossbeam-channel = "0.5"
crossbeam-channel.workspace = true
dirs = "5.0"
egui = { version = "0.33", default-features = false, features = ["bytemuck", "default_fonts"] }
encase = { version = "0.10", features = ["glam"] }
futures-lite = "2.0"
glam = "0.29"
egui.workspace = true
encase = { version = "0.11", features = ["glam"] }
futures-lite.workspace = true
glam.workspace = true
inventory.workspace = true
iroh = { workspace = true, features = ["discovery-local-network"] }
iroh-gossip.workspace = true
pkarr = "5.0"
itertools = "0.14"
rand = "0.8"
raw-window-handle = "0.6"
rusqlite = { version = "0.37.0", features = ["bundled"] }
serde = { version = "1.0", features = ["derive"] }
rand.workspace = true
rusqlite.workspace = true
rustc-hash = "2.1"
serde.workspace = true
serde_json.workspace = true
sha2 = "0.10"
sync-macros = { path = "../sync-macros" }
thiserror = "2.0"
thiserror.workspace = true
tokio.workspace = true
tokio-util.workspace = true
toml.workspace = true
tracing.workspace = true
uuid = { version = "1.0", features = ["v4", "serde"] }
uuid.workspace = true
wgpu-types = "26.0"
winit = "0.30"
winit.workspace = true

[target.'cfg(target_os = "ios")'.dependencies]
tracing-oslog = "0.3"

[dev-dependencies]
tokio.workspace = true
iroh = { workspace = true, features = ["discovery-local-network"] }
iroh-gossip.workspace = true
futures-lite = "2.0"
tempfile = "3"
futures-lite.workspace = true
tempfile.workspace = true
proptest = "1.4"
criterion = "0.5"

[features]
# Feature to skip expensive networking operations in tests
fast_tests = []

[[bench]]
name = "write_buffer"
harness = false

@@ -181,7 +181,7 @@ impl WindowToEguiContextMap {
    // NOTE: We don't use bevy_winit since we own the event loop
    // event_loop_proxy: Res<bevy_winit::EventLoopProxyWrapper<bevy_winit::WakeUp>>,
) {
    for (egui_context_entity, camera, egui_context) in added_contexts {
    for (egui_context_entity, camera, _egui_context) in added_contexts {
        if let bevy::camera::RenderTarget::Window(window_ref) = camera.target
            && let Some(window_ref) = window_ref.normalize(primary_window.single().ok())
        {
@@ -1509,6 +1509,17 @@ pub fn custom_input_system(
            }
        }

        InputEvent::Text { text } => {
            // Send text input to egui
            for (entity, _settings, _pointer_pos) in egui_contexts.iter() {
                egui_input_message_writer.write(EguiInputEvent {
                    context: entity,
                    event: egui::Event::Text(text.clone()),
                });
                messages_written += 1;
            }
        }

        _ => {
            // Ignore stylus and touch events for now
        }

@@ -35,7 +35,7 @@ pub fn process_output_system(
    egui_global_settings: Res<EguiGlobalSettings>,
    window_to_egui_context_map: Res<WindowToEguiContextMap>,
) {
    let mut should_request_redraw = false;
    let mut _should_request_redraw = false;

    for (entity, mut context, mut full_output, mut render_output, mut egui_output, settings) in
        context_query.iter_mut()
@@ -115,7 +115,7 @@ pub fn process_output_system(
    }

    let needs_repaint = !render_output.is_empty();
    should_request_redraw |= ctx.has_requested_repaint() && needs_repaint;
    _should_request_redraw |= ctx.has_requested_repaint() && needs_repaint;
}

// NOTE: RequestRedraw not needed - we own winit and run unbounded (continuous redraws)

@@ -4,7 +4,6 @@ use crate::networking::SessionId;
use bevy::prelude::*;
use uuid::Uuid;

/// Commands that Bevy sends to the Core Engine
#[derive(Debug, Clone)]
pub enum EngineCommand {
    // Networking lifecycle
@@ -47,4 +46,7 @@ pub enum EngineCommand {

    // Clock
    TickClock,

    // Lifecycle
    Shutdown,
}

@@ -1,6 +1,7 @@
//! Core Engine event loop - runs on tokio outside Bevy

use tokio::task::JoinHandle;
use tokio_util::sync::CancellationToken;
use uuid::Uuid;

use super::{EngineCommand, EngineEvent, EngineHandle, NetworkingManager, PersistenceManager};
@@ -9,6 +10,7 @@ use crate::networking::{SessionId, VectorClock};
pub struct EngineCore {
    handle: EngineHandle,
    networking_task: Option<JoinHandle<()>>,
    networking_cancel_token: Option<CancellationToken>,
    #[allow(dead_code)]
    persistence: PersistenceManager,

@@ -28,6 +30,7 @@ impl EngineCore {
        Self {
            handle,
            networking_task: None, // Start offline
            networking_cancel_token: None,
            persistence,
            node_id,
            clock,
@@ -41,13 +44,19 @@ impl EngineCore {

        // Process commands as they arrive
        while let Some(cmd) = self.handle.command_rx.recv().await {
            self.handle_command(cmd).await;
            let should_continue = self.handle_command(cmd).await;
            if !should_continue {
                tracing::info!("EngineCore received shutdown command");
                break;
            }
        }

        tracing::info!("EngineCore shutting down (command channel closed)");
        tracing::info!("EngineCore shutting down");
    }

    async fn handle_command(&mut self, cmd: EngineCommand) {
    /// Handle a command from Bevy
    /// Returns true to continue running, false to shutdown
    async fn handle_command(&mut self, cmd: EngineCommand) -> bool {
        match cmd {
            EngineCommand::StartNetworking { session_id } => {
                self.start_networking(session_id).await;
@@ -62,20 +71,36 @@ impl EngineCore {
                self.stop_networking().await;
            }
            EngineCommand::SaveSession => {
                // TODO: Save current session state
                tracing::debug!("SaveSession command received (stub)");
                // Session state is auto-saved by save_session_on_shutdown_system in Bevy
                // This command is a no-op, as persistence is handled by Bevy systems
                tracing::debug!("SaveSession command received (session auto-save handled by Bevy)");
            }
            EngineCommand::LoadSession { session_id } => {
                tracing::debug!("LoadSession command received for {} (stub)", session_id.to_code());
                // Loading a session means switching to a different session
                // This requires restarting networking with the new session
                tracing::info!("LoadSession command received for {}", session_id.to_code());

                // Stop current networking if any
                if self.networking_task.is_some() {
                    self.stop_networking().await;
                }

                // Start networking with the new session
                self.start_networking(session_id).await;
            }
            EngineCommand::TickClock => {
                self.tick_clock();
            }
            EngineCommand::Shutdown => {
                tracing::info!("Shutdown command received");
                return false;
            }
            // TODO: Handle CRDT and lock commands in Phase 2
            _ => {
                tracing::debug!("Unhandled command: {:?}", cmd);
            }
        }
        true
    }

    fn tick_clock(&mut self) {
@@ -93,38 +118,92 @@ impl EngineCore {
            return;
        }

        match NetworkingManager::new(session_id.clone()).await {
            Ok(net_manager) => {
                let node_id = net_manager.node_id();
        tracing::info!("Starting networking initialization for session {}", session_id.to_code());

                // Spawn NetworkingManager in background task
                let event_tx = self.handle.event_tx.clone();
                let task = tokio::spawn(async move {
                    net_manager.run(event_tx).await;
                });
        // Test mode: Skip actual networking and send event immediately
        #[cfg(feature = "fast_tests")]
        {
            let bridge = crate::networking::GossipBridge::new(self.node_id);
            let _ = self.handle.event_tx.send(EngineEvent::NetworkingStarted {
                session_id: session_id.clone(),
                node_id: self.node_id,
                bridge,
            });
            tracing::info!("Networking started (test mode) for session {}", session_id.to_code());

                self.networking_task = Some(task);

                let _ = self.handle.event_tx.send(EngineEvent::NetworkingStarted {
                    session_id: session_id.clone(),
                    node_id,
                });
                tracing::info!("Networking started for session {}", session_id.to_code());
            }
            Err(e) => {
                let _ = self.handle.event_tx.send(EngineEvent::NetworkingFailed {
                    error: e.to_string(),
                });
                tracing::error!("Failed to start networking: {}", e);
            }
            // Create a dummy task that just waits
            let task = tokio::spawn(async {
                tokio::time::sleep(tokio::time::Duration::from_secs(3600)).await;
            });
            self.networking_task = Some(task);
            return;
        }

        // Create cancellation token for graceful shutdown
        let cancel_token = CancellationToken::new();
        let cancel_token_clone = cancel_token.clone();

        // Spawn NetworkingManager initialization in background to avoid blocking
        // DHT peer discovery can take 15+ seconds with retries
        let event_tx = self.handle.event_tx.clone();

        // Create channel for progress updates
        let (progress_tx, mut progress_rx) = tokio::sync::mpsc::unbounded_channel();

        // Spawn task to forward progress updates to Bevy
        let event_tx_clone = event_tx.clone();
        let session_id_clone = session_id.clone();
        tokio::spawn(async move {
            while let Some(status) = progress_rx.recv().await {
                let _ = event_tx_clone.send(EngineEvent::NetworkingInitializing {
                    session_id: session_id_clone.clone(),
                    status,
                });
            }
        });

        let task = tokio::spawn(async move {
            match NetworkingManager::new(session_id.clone(), Some(progress_tx), cancel_token_clone.clone()).await {
                Ok((net_manager, bridge)) => {
                    let node_id = net_manager.node_id();

                    // Notify Bevy that networking started
                    let _ = event_tx.send(EngineEvent::NetworkingStarted {
                        session_id: session_id.clone(),
                        node_id,
                        bridge,
                    });
                    tracing::info!("Networking started for session {}", session_id.to_code());

                    // Run the networking manager loop with cancellation support
                    net_manager.run(event_tx.clone(), cancel_token_clone).await;
                }
                Err(e) => {
                    let _ = event_tx.send(EngineEvent::NetworkingFailed {
                        error: e.to_string(),
                    });
                    tracing::error!("Failed to start networking: {}", e);
                }
            }
        });

        self.networking_task = Some(task);
        self.networking_cancel_token = Some(cancel_token);
    }

    async fn stop_networking(&mut self) {
        // Cancel the task gracefully
        if let Some(cancel_token) = self.networking_cancel_token.take() {
            cancel_token.cancel();
            tracing::info!("Networking cancellation requested");
        }

        // Abort the task immediately - don't wait for graceful shutdown
        // This is fine because NetworkingManager doesn't hold critical resources
        if let Some(task) = self.networking_task.take() {
            task.abort(); // Cancel the networking task
            task.abort();
            tracing::info!("Networking task aborted");
            let _ = self.handle.event_tx.send(EngineEvent::NetworkingStopped);
            tracing::info!("Networking stopped");
        }
    }

@@ -4,13 +4,33 @@ use crate::networking::{NodeId, SessionId, VectorClock};
use bevy::prelude::*;
use uuid::Uuid;

/// Events that the Core Engine emits to Bevy
#[derive(Debug, Clone)]
pub enum NetworkingInitStatus {
    CreatingEndpoint,
    EndpointReady,
    DiscoveringPeers {
        session_code: String,
        attempt: u8,
    },
    PeersFound {
        count: usize,
    },
    NoPeersFound,
    PublishingToDHT,
    InitializingGossip,
}

#[derive(Debug, Clone)]
pub enum EngineEvent {
    // Networking status
    NetworkingInitializing {
        session_id: SessionId,
        status: NetworkingInitStatus,
    },
    NetworkingStarted {
        session_id: SessionId,
        node_id: NodeId,
        bridge: crate::networking::GossipBridge,
    },
    NetworkingFailed {
        error: String,

@@ -14,12 +14,13 @@ mod core;
mod events;
mod game_actions;
mod networking;
mod peer_discovery;
mod persistence;

pub use bridge::{EngineBridge, EngineHandle};
pub use commands::EngineCommand;
pub use core::EngineCore;
pub use events::EngineEvent;
pub use events::{EngineEvent, NetworkingInitStatus};
pub use game_actions::GameAction;
pub use networking::NetworkingManager;
pub use persistence::PersistenceManager;

@@ -12,6 +12,7 @@ use crate::networking::{
};

use super::EngineEvent;
use super::events::NetworkingInitStatus;

pub struct NetworkingManager {
    session_id: SessionId,
@@ -26,6 +27,9 @@ pub struct NetworkingManager {
    _router: iroh::protocol::Router,
    _gossip: iroh_gossip::net::Gossip,

    // Bridge to Bevy for message passing
    bridge: crate::networking::GossipBridge,

    // CRDT state
    vector_clock: VectorClock,
    operation_log: OperationLog,
@@ -37,9 +41,19 @@ pub struct NetworkingManager {
}

impl NetworkingManager {
    pub async fn new(session_id: SessionId) -> anyhow::Result<Self> {
    pub async fn new(
        session_id: SessionId,
        progress_tx: Option<tokio::sync::mpsc::UnboundedSender<NetworkingInitStatus>>,
        cancel_token: tokio_util::sync::CancellationToken,
    ) -> anyhow::Result<(Self, crate::networking::GossipBridge)> {
        let send_progress = |status: NetworkingInitStatus| {
            if let Some(ref tx) = progress_tx {
                let _ = tx.send(status.clone());
            }
            tracing::info!("Networking init: {:?}", status);
        };
        use iroh::{
            discovery::mdns::MdnsDiscovery,
            discovery::pkarr::dht::DhtDiscovery,
            protocol::Router,
            Endpoint,
        };
@@ -48,12 +62,24 @@ impl NetworkingManager {
            proto::TopicId,
        };

        // Create iroh endpoint with mDNS discovery
        // Check for cancellation at start
        if cancel_token.is_cancelled() {
            return Err(anyhow::anyhow!("Initialization cancelled before start"));
        }

        send_progress(NetworkingInitStatus::CreatingEndpoint);

        // Create iroh endpoint with DHT discovery
        // This allows peers to discover each other over the internet via Mainline DHT
        // Security comes from the secret session-derived ALPN, not network isolation
        let dht_discovery = DhtDiscovery::builder().build()?;
        let endpoint = Endpoint::builder()
            .discovery(MdnsDiscovery::builder())
            .discovery(dht_discovery)
            .bind()
            .await?;

        send_progress(NetworkingInitStatus::EndpointReady);

        let endpoint_id = endpoint.addr().id;

        // Convert endpoint ID to NodeId (using first 16 bytes)
@@ -62,20 +88,89 @@ impl NetworkingManager {
        node_id_bytes.copy_from_slice(&id_bytes[..16]);
        let node_id = NodeId::from_bytes(node_id_bytes);

        // Create gossip protocol
        let gossip = Gossip::builder().spawn(endpoint.clone());
        // Create pkarr client for DHT peer discovery
        let pkarr_client = pkarr::Client::builder()
            .no_default_network()
            .dht(|x| x)
            .build()?;

        // Discover existing peers from DHT with retries
        // Retry immediately without delays - if peers aren't in DHT yet, they'll appear soon
        let mut peer_endpoint_ids = vec![];
        for attempt in 1..=3 {
            // Check for cancellation before each attempt
            if cancel_token.is_cancelled() {
                tracing::info!("Networking initialization cancelled during DHT discovery");
                return Err(anyhow::anyhow!("Initialization cancelled"));
            }

            send_progress(NetworkingInitStatus::DiscoveringPeers {
                session_code: session_id.to_code().to_string(),
                attempt,
            });
            match crate::engine::peer_discovery::discover_peers_from_dht(&session_id, &pkarr_client).await {
                Ok(peers) if !peers.is_empty() => {
                    let count = peers.len();
                    peer_endpoint_ids = peers;
                    send_progress(NetworkingInitStatus::PeersFound {
                        count,
                    });
                    break;
                }
                Ok(_) if attempt == 3 => {
                    // Last attempt and no peers found
                    send_progress(NetworkingInitStatus::NoPeersFound);
                }
                Ok(_) => {
                    // No peers found, but will retry immediately
                }
                Err(e) => {
                    tracing::warn!("DHT query attempt {} failed: {}", attempt, e);
                }
            }
        }

        // Check for cancellation before publishing
        if cancel_token.is_cancelled() {
            tracing::info!("Networking initialization cancelled before DHT publish");
            return Err(anyhow::anyhow!("Initialization cancelled"));
        }

        // Publish our presence to DHT
        send_progress(NetworkingInitStatus::PublishingToDHT);
        if let Err(e) = crate::engine::peer_discovery::publish_peer_to_dht(
            &session_id,
            endpoint_id,
            &pkarr_client,
        )
        .await
        {
            tracing::warn!("Failed to publish to DHT: {}", e);
        }

        // Check for cancellation before gossip initialization
        if cancel_token.is_cancelled() {
            tracing::info!("Networking initialization cancelled before gossip init");
            return Err(anyhow::anyhow!("Initialization cancelled"));
        }

        // Derive session-specific ALPN for network isolation
        let session_alpn = session_id.to_alpn();

        // Create gossip protocol with custom session ALPN
        send_progress(NetworkingInitStatus::InitializingGossip);
        let gossip = Gossip::builder()
            .alpn(&session_alpn)
            .spawn(endpoint.clone());

        // Set up router to accept session ALPN
        let router = Router::builder(endpoint.clone())
            .accept(session_alpn.as_slice(), gossip.clone())
            .spawn();

        // Subscribe to topic derived from session ALPN
        // Subscribe to topic with discovered peers as bootstrap
        let topic_id = TopicId::from_bytes(session_alpn);
        let subscribe_handle = gossip.subscribe(topic_id, vec![]).await?;
        let subscribe_handle = gossip.subscribe(topic_id, peer_endpoint_ids).await?;

        let (sender, receiver) = subscribe_handle.split();

@@ -85,6 +180,19 @@ impl NetworkingManager {
            node_id
        );

        // Create GossipBridge for Bevy integration
        let bridge = crate::networking::GossipBridge::new(node_id);

        // Spawn background task to maintain DHT presence
        let session_id_clone = session_id.clone();
        let cancel_token_clone = cancel_token.clone();
        tokio::spawn(crate::engine::peer_discovery::maintain_dht_presence(
            session_id_clone,
            endpoint_id,
            pkarr_client,
            cancel_token_clone,
        ));

        let manager = Self {
            session_id,
            node_id,
@@ -93,6 +201,7 @@ impl NetworkingManager {
            _endpoint: endpoint,
            _router: router,
            _gossip: gossip,
            bridge: bridge.clone(),
            vector_clock: VectorClock::new(),
            operation_log: OperationLog::new(),
            tombstones: TombstoneRegistry::new(),
@@ -100,7 +209,7 @@ impl NetworkingManager {
            our_locks: std::collections::HashSet::new(),
        };

        Ok(manager)
        Ok((manager, bridge))
    }

    pub fn node_id(&self) -> NodeId {
@@ -112,20 +221,98 @@ impl NetworkingManager {
    }

    /// Process gossip events (unbounded) and periodic tasks (heartbeats, lock cleanup)
    pub async fn run(mut self, event_tx: mpsc::UnboundedSender<EngineEvent>) {
    /// Also bridges messages between iroh-gossip and Bevy's GossipBridge
    pub async fn run(mut self, event_tx: mpsc::UnboundedSender<EngineEvent>, cancel_token: tokio_util::sync::CancellationToken) {
        let mut heartbeat_interval = time::interval(Duration::from_secs(1));
        let mut bridge_poll_interval = time::interval(Duration::from_millis(10));

        loop {
            tokio::select! {
                // Process gossip events unbounded (as fast as they arrive)
                // Listen for shutdown signal
                _ = cancel_token.cancelled() => {
                    tracing::info!("NetworkingManager received shutdown signal");
                    break;
                }
                // Process incoming gossip messages and forward to GossipBridge
                Some(result) = self.receiver.next() => {
                    match result {
                        Ok(event) => {
                            use iroh_gossip::api::Event;
                            if let Event::Received(msg) = event {
                                self.handle_sync_message(&msg.content, &event_tx).await;
                            match event {
                                Event::Received(msg) => {
                                    // Deserialize and forward to GossipBridge for Bevy systems
                                    if let Ok(versioned) = rkyv::from_bytes::<VersionedMessage, rkyv::rancor::Failure>(&msg.content) {
                                        // Diagnostic logging: track message type and nonce
                                        let msg_type = match &versioned.message {
                                            SyncMessage::EntityDelta { entity_id, .. } => {
                                                format!("EntityDelta({})", entity_id)
                                            }
                                            SyncMessage::JoinRequest { node_id, .. } => {
                                                format!("JoinRequest({})", node_id)
                                            }
                                            SyncMessage::FullState { entities, .. } => {
                                                format!("FullState({} entities)", entities.len())
                                            }
                                            SyncMessage::SyncRequest { node_id, .. } => {
                                                format!("SyncRequest({})", node_id)
                                            }
                                            SyncMessage::MissingDeltas { deltas } => {
                                                format!("MissingDeltas({} ops)", deltas.len())
                                            }
                                            SyncMessage::Lock(lock_msg) => {
                                                format!("Lock({:?})", lock_msg)
                                            }
                                        };

                                        tracing::debug!(
                                            "[NetworkingManager::receive] Node {} received from iroh-gossip: {} (nonce: {})",
                                            self.node_id, msg_type, versioned.nonce
                                        );

                                        if let Err(e) = self.bridge.push_incoming(versioned) {
                                            tracing::error!("Failed to forward {} to GossipBridge: {}", msg_type, e);
|
||||
} else {
|
||||
tracing::debug!(
|
||||
"[NetworkingManager::receive] ✓ Forwarded {} to Bevy GossipBridge",
|
||||
msg_type
|
||||
);
|
||||
}
|
||||
} else {
|
||||
tracing::warn!("Failed to deserialize message from iroh-gossip");
|
||||
}
|
||||
}
|
||||
Event::NeighborUp(peer) => {
|
||||
tracing::info!("Peer connected: {}", peer);
|
||||
|
||||
// Convert PublicKey to NodeId for Bevy
|
||||
let peer_bytes = peer.as_bytes();
|
||||
let mut node_id_bytes = [0u8; 16];
|
||||
node_id_bytes.copy_from_slice(&peer_bytes[..16]);
|
||||
let peer_node_id = NodeId::from_bytes(node_id_bytes);
|
||||
|
||||
// Notify Bevy of peer join
|
||||
let _ = event_tx.send(EngineEvent::PeerJoined {
|
||||
node_id: peer_node_id,
|
||||
});
|
||||
}
|
||||
Event::NeighborDown(peer) => {
|
||||
tracing::warn!("Peer disconnected: {}", peer);
|
||||
|
||||
// Convert PublicKey to NodeId for Bevy
|
||||
let peer_bytes = peer.as_bytes();
|
||||
let mut node_id_bytes = [0u8; 16];
|
||||
node_id_bytes.copy_from_slice(&peer_bytes[..16]);
|
||||
let peer_node_id = NodeId::from_bytes(node_id_bytes);
|
||||
|
||||
// Notify Bevy of peer leave
|
||||
let _ = event_tx.send(EngineEvent::PeerLeft {
|
||||
node_id: peer_node_id,
|
||||
});
|
||||
}
|
||||
Event::Lagged => {
|
||||
tracing::warn!("Event stream lagged");
|
||||
}
|
||||
}
|
||||
// Note: Neighbor events are not exposed in the current API
|
||||
}
|
||||
Err(e) => {
|
||||
tracing::warn!("Gossip receiver error: {}", e);
|
||||
@@ -133,6 +320,58 @@ impl NetworkingManager {
|
||||
}
|
||||
}
|
||||
|
||||
// Poll GossipBridge for outgoing messages and broadcast via iroh
|
||||
_ = bridge_poll_interval.tick() => {
|
||||
let mut sent_count = 0;
|
||||
while let Some(msg) = self.bridge.try_recv_outgoing() {
|
||||
// Diagnostic logging: track message type and nonce
|
||||
let msg_type = match &msg.message {
|
||||
SyncMessage::EntityDelta { entity_id, .. } => {
|
||||
format!("EntityDelta({})", entity_id)
|
||||
}
|
||||
SyncMessage::JoinRequest { node_id, .. } => {
|
||||
format!("JoinRequest({})", node_id)
|
||||
}
|
||||
SyncMessage::FullState { entities, .. } => {
|
||||
format!("FullState({} entities)", entities.len())
|
||||
}
|
||||
SyncMessage::SyncRequest { node_id, .. } => {
|
||||
format!("SyncRequest({})", node_id)
|
||||
}
|
||||
SyncMessage::MissingDeltas { deltas } => {
|
||||
format!("MissingDeltas({} ops)", deltas.len())
|
||||
}
|
||||
SyncMessage::Lock(lock_msg) => {
|
||||
format!("Lock({:?})", lock_msg)
|
||||
}
|
||||
};
|
||||
|
||||
tracing::debug!(
|
||||
"[NetworkingManager::broadcast] Node {} broadcasting: {} (nonce: {})",
|
||||
self.node_id, msg_type, msg.nonce
|
||||
);
|
||||
|
||||
if let Ok(bytes) = rkyv::to_bytes::<rkyv::rancor::Failure>(&msg).map(|b| b.to_vec()) {
|
||||
if let Err(e) = self.sender.broadcast(Bytes::from(bytes)).await {
|
||||
tracing::error!("Failed to broadcast {} to iroh-gossip: {}", msg_type, e);
|
||||
} else {
|
||||
sent_count += 1;
|
||||
tracing::debug!(
|
||||
"[NetworkingManager::broadcast] ✓ Sent {} to iroh-gossip network",
|
||||
msg_type
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if sent_count > 0 {
|
||||
tracing::info!(
|
||||
"[NetworkingManager::broadcast] Node {} sent {} messages to iroh-gossip network",
|
||||
self.node_id, sent_count
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
// Periodic tasks: heartbeats and lock cleanup
|
||||
_ = heartbeat_interval.tick() => {
|
||||
self.broadcast_lock_heartbeats(&event_tx).await;
|
||||
@@ -145,7 +384,7 @@ impl NetworkingManager {
|
||||
|
||||
async fn handle_sync_message(&mut self, msg_bytes: &[u8], event_tx: &mpsc::UnboundedSender<EngineEvent>) {
|
||||
// Deserialize SyncMessage
|
||||
let versioned: VersionedMessage = match bincode::deserialize(msg_bytes) {
|
||||
let versioned: VersionedMessage = match rkyv::from_bytes::<VersionedMessage, rkyv::rancor::Failure>(msg_bytes) {
|
||||
Ok(v) => v,
|
||||
Err(e) => {
|
||||
tracing::warn!("Failed to deserialize sync message: {}", e);
|
||||
@@ -214,7 +453,7 @@ impl NetworkingManager {
|
||||
holder: self.node_id,
|
||||
}));
|
||||
|
||||
if let Ok(bytes) = bincode::serialize(&msg) {
|
||||
if let Ok(bytes) = rkyv::to_bytes::<rkyv::rancor::Failure>(&msg).map(|b| b.to_vec()) {
|
||||
let _ = self.sender.broadcast(Bytes::from(bytes)).await;
|
||||
}
|
||||
}
|
||||
|
||||
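The `NeighborUp`/`NeighborDown` handlers above derive a 16-byte Bevy `NodeId` by truncating the 32-byte iroh public key. A std-only sketch of that truncation; the fixed-size array types here stand in for the real `PublicKey`/`NodeId` types:

```rust
// Sketch: derive a 16-byte node id from a 32-byte public key by truncation,
// mirroring the NeighborUp/NeighborDown handlers above. The array types are
// illustrative stand-ins for the real iroh/Marathon types.
fn node_id_from_public_key(peer_bytes: &[u8; 32]) -> [u8; 16] {
    let mut node_id_bytes = [0u8; 16];
    // Keep only the first half of the key. Two keys sharing a 16-byte prefix
    // would collide, which is vanishingly unlikely for random keys.
    node_id_bytes.copy_from_slice(&peer_bytes[..16]);
    node_id_bytes
}

fn main() {
    let key = [0xabu8; 32];
    let id = node_id_from_public_key(&key);
    assert_eq!(id, [0xabu8; 16]);
    println!("node id: {:02x?}", id);
}
```

Note the trade-off: truncation keeps the Bevy-side id compact, but it is not reversible, so the full public key must be kept around wherever the transport needs it.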
158	crates/libmarathon/src/engine/peer_discovery.rs	Normal file
@@ -0,0 +1,158 @@
//! DHT-based peer discovery for session collaboration
//!
//! Each peer publishes their EndpointId to the DHT using a session-derived pkarr key.
//! Other peers query the DHT to discover all peers in the session.

use anyhow::Result;
use iroh::EndpointId;
use std::time::Duration;

use crate::networking::SessionId;

pub async fn publish_peer_to_dht(
    session_id: &SessionId,
    our_endpoint_id: EndpointId,
    dht_client: &pkarr::Client,
) -> Result<()> {
    use pkarr::dns::{self, rdata};
    use pkarr::dns::rdata::RData;

    let keypair = session_id.to_pkarr_keypair();
    let public_key = keypair.public_key();

    // Query DHT for existing peers in this session
    let existing_peers = match dht_client.resolve(&public_key).await {
        Some(packet) => {
            let mut peers = Vec::new();
            for rr in packet.all_resource_records() {
                if let RData::TXT(txt) = &rr.rdata {
                    if let Ok(txt_str) = String::try_from(txt.clone()) {
                        if let Some(hex) = txt_str.strip_prefix("peer=") {
                            if let Ok(bytes) = hex::decode(hex) {
                                if bytes.len() == 32 {
                                    if let Ok(endpoint_id) = EndpointId::from_bytes(&bytes.try_into().unwrap()) {
                                        // Don't include ourselves if we're already in the list
                                        if endpoint_id != our_endpoint_id {
                                            peers.push(endpoint_id);
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
            peers
        }
        None => Vec::new(),
    };

    // Build packet with all peers (existing + ourselves)
    let name = dns::Name::new("_peers").expect("constant");
    let mut builder = pkarr::SignedPacket::builder();

    // Add TXT record for each existing peer
    for peer in existing_peers {
        let peer_hex = hex::encode(peer.as_bytes());
        let peer_str = format!("peer={}", peer_hex);
        let mut txt = rdata::TXT::new();
        txt.add_string(&peer_str)?;
        builder = builder.txt(name.clone(), txt.into_owned(), 3600);
    }

    // Add TXT record for ourselves
    let our_hex = hex::encode(our_endpoint_id.as_bytes());
    let our_str = format!("peer={}", our_hex);
    let mut our_txt = rdata::TXT::new();
    our_txt.add_string(&our_str)?;
    builder = builder.txt(name, our_txt.into_owned(), 3600);

    // Build and sign the packet
    let signed_packet = builder.build(&keypair)?;

    // Publish to DHT
    dht_client.publish(&signed_packet, None).await?;

    tracing::info!(
        "Published peer {} to DHT for session {}",
        our_endpoint_id.fmt_short(),
        session_id.to_code()
    );

    Ok(())
}

pub async fn discover_peers_from_dht(
    session_id: &SessionId,
    dht_client: &pkarr::Client,
) -> Result<Vec<EndpointId>> {
    use pkarr::dns::rdata::RData;

    let keypair = session_id.to_pkarr_keypair();
    let public_key = keypair.public_key();

    // Query DHT for the session's public key
    let signed_packet = match dht_client.resolve(&public_key).await {
        Some(packet) => packet,
        None => {
            tracing::debug!("No peers found in DHT for session {}", session_id.to_code());
            return Ok(vec![]);
        }
    };

    // Parse TXT records to extract peer endpoint IDs
    let mut peers = Vec::new();

    for rr in signed_packet.all_resource_records() {
        if let RData::TXT(txt) = &rr.rdata {
            // Try to parse as a String
            if let Ok(txt_str) = String::try_from(txt.clone()) {
                // Parse "peer=<hex_endpoint_id>"
                if let Some(hex) = txt_str.strip_prefix("peer=") {
                    if let Ok(bytes) = hex::decode(hex) {
                        if bytes.len() == 32 {
                            if let Ok(endpoint_id) = EndpointId::from_bytes(&bytes.try_into().unwrap()) {
                                peers.push(endpoint_id);
                            }
                        }
                    }
                }
            }
        }
    }

    tracing::info!(
        "Discovered {} peers from DHT for session {}",
        peers.len(),
        session_id.to_code()
    );

    Ok(peers)
}

/// Periodically republishes our presence to the DHT
///
/// Should be called in a background task to maintain our DHT presence.
/// Republishes every 30 minutes (well before the 1-hour TTL expires).
pub async fn maintain_dht_presence(
    session_id: SessionId,
    our_endpoint_id: EndpointId,
    dht_client: pkarr::Client,
    cancel_token: tokio_util::sync::CancellationToken,
) {
    let mut interval = tokio::time::interval(Duration::from_secs(30 * 60)); // 30 minutes

    loop {
        tokio::select! {
            _ = cancel_token.cancelled() => {
                tracing::info!("DHT maintenance task shutting down");
                break;
            }
            _ = interval.tick() => {
                if let Err(e) = publish_peer_to_dht(&session_id, our_endpoint_id, &dht_client).await {
                    tracing::warn!("Failed to republish to DHT: {}", e);
                }
            }
        }
    }
}
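The discovery code above stores each peer in a DNS TXT record as a `peer=<hex_endpoint_id>` string, where the endpoint id is exactly 32 bytes. A std-only sketch of that encoding convention; the real module uses the `hex` crate, the hand-rolled hex here is just to keep the example self-contained:

```rust
// Sketch of the "peer=<hex_endpoint_id>" TXT record convention used by
// publish_peer_to_dht / discover_peers_from_dht. Hex is hand-rolled here to
// stay std-only; the real code uses the `hex` crate.
fn encode_peer_txt(endpoint_id: &[u8; 32]) -> String {
    let hex: String = endpoint_id.iter().map(|b| format!("{:02x}", b)).collect();
    format!("peer={}", hex)
}

fn decode_peer_txt(txt: &str) -> Option<[u8; 32]> {
    let hex = txt.strip_prefix("peer=")?;
    if hex.len() != 64 {
        return None; // must be exactly 32 bytes of hex, as the parser checks
    }
    let mut out = [0u8; 32];
    for (i, chunk) in hex.as_bytes().chunks(2).enumerate() {
        let s = std::str::from_utf8(chunk).ok()?;
        out[i] = u8::from_str_radix(s, 16).ok()?;
    }
    Some(out)
}

fn main() {
    let id = [0x5au8; 32];
    let txt = encode_peer_txt(&id);
    // Round-trip succeeds; malformed records are silently skipped (None).
    assert_eq!(decode_peer_txt(&txt), Some(id));
    assert_eq!(decode_peer_txt("peer=zz"), None);
    println!("{}", txt);
}
```

Silently returning `None` for malformed records mirrors the module's behavior of skipping unparseable TXT entries rather than failing the whole discovery pass.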
@@ -28,6 +28,9 @@ pub mod engine;
 pub mod networking;
 pub mod persistence;
 pub mod platform;
+pub mod render; // Vendored Bevy rendering (bevy_render + bevy_core_pipeline + bevy_pbr)
+pub mod transform; // Vendored Transform with rkyv support
 pub mod utils;
+pub mod sync;

 /// Unified Marathon plugin that bundles all core functionality.
@@ -8,24 +8,21 @@ use std::collections::HashMap;
 use bevy::prelude::*;
 use uuid::Uuid;

-use crate::{
-    networking::{
-        VectorClock,
-        blob_support::{
-            BlobStore,
-            get_component_data,
-        },
-        delta_generation::NodeVectorClock,
-        entity_map::NetworkEntityMap,
-        merge::compare_operations_lww,
-        messages::{
-            ComponentData,
-            EntityDelta,
-            SyncMessage,
-        },
-        operations::ComponentOp,
-    },
-    persistence::reflection::deserialize_component_typed,
+use crate::networking::{
+    VectorClock,
+    blob_support::{
+        BlobStore,
+        get_component_data,
+    },
+    delta_generation::NodeVectorClock,
+    entity_map::NetworkEntityMap,
+    merge::compare_operations_lww,
+    messages::{
+        ComponentData,
+        EntityDelta,
+        SyncMessage,
+    },
+    operations::ComponentOp,
 };
 /// Resource to track the last vector clock and originating node for each
@@ -168,6 +165,24 @@ pub fn apply_entity_delta(delta: &EntityDelta, world: &mut World) {
             );
         }
     }

+    // CRITICAL: Add marker to prevent feedback loop
+    //
+    // When we apply remote operations, insert_fn() triggers Bevy's change detection.
+    // This causes auto_detect_transform_changes_system to mark NetworkedEntity as changed,
+    // which would normally trigger generate_delta_system to broadcast it back, creating
+    // an infinite feedback loop.
+    //
+    // By adding SkipNextDeltaGeneration marker, we tell generate_delta_system to skip
+    // this entity for one frame. A cleanup system removes the marker after delta
+    // generation runs, allowing future local changes to be broadcast normally.
+    if let Ok(mut entity_mut) = world.get_entity_mut(entity) {
+        entity_mut.insert(crate::networking::SkipNextDeltaGeneration);
+        debug!(
+            "Added SkipNextDeltaGeneration marker to entity {:?} to prevent feedback loop",
+            delta.entity_id
+        );
+    }
 }

 /// Apply a single ComponentOp to an entity
@@ -177,35 +192,35 @@
 fn apply_component_op(entity: Entity, op: &ComponentOp, incoming_node_id: Uuid, world: &mut World) {
     match op {
         | ComponentOp::Set {
-            component_type,
+            discriminant,
             data,
             vector_clock,
         } => {
             apply_set_operation_with_lww(
                 entity,
-                component_type,
+                *discriminant,
                 data,
                 vector_clock,
                 incoming_node_id,
                 world,
             );
         },
-        | ComponentOp::SetAdd { component_type, .. } => {
+        | ComponentOp::SetAdd { discriminant, .. } => {
             // OR-Set add - Phase 10 provides OrSet<T> type
             // Application code should use OrSet in components and handle SetAdd/SetRemove
             // Full integration will be in Phase 12 plugin
             debug!(
-                "SetAdd operation for {} (use OrSet<T> in components)",
-                component_type
+                "SetAdd operation for discriminant {} (use OrSet<T> in components)",
+                discriminant
             );
         },
-        | ComponentOp::SetRemove { component_type, .. } => {
+        | ComponentOp::SetRemove { discriminant, .. } => {
             // OR-Set remove - Phase 10 provides OrSet<T> type
             // Application code should use OrSet in components and handle SetAdd/SetRemove
             // Full integration will be in Phase 12 plugin
             debug!(
-                "SetRemove operation for {} (use OrSet<T> in components)",
-                component_type
+                "SetRemove operation for discriminant {} (use OrSet<T> in components)",
+                discriminant
             );
         },
         | ComponentOp::SequenceInsert { .. } => {
@@ -230,12 +245,30 @@ fn apply_component_op(entity: Entity, op: &ComponentOp, incoming_node_id: Uuid,
 /// Uses node_id as a deterministic tiebreaker for concurrent operations.
 fn apply_set_operation_with_lww(
     entity: Entity,
-    component_type: &str,
+    discriminant: u16,
     data: &ComponentData,
     incoming_clock: &VectorClock,
     incoming_node_id: Uuid,
     world: &mut World,
 ) {
+    // Get component type name for logging and clock tracking
+    let type_registry = {
+        let registry_resource =
+            world.resource::<crate::persistence::ComponentTypeRegistryResource>();
+        registry_resource.0
+    };
+
+    let component_type_name = match type_registry.get_type_name(discriminant) {
+        | Some(name) => name,
+        | None => {
+            error!(
+                "Unknown discriminant {} - component not registered",
+                discriminant
+            );
+            return;
+        },
+    };
+
     // Get the network ID for this entity
     let entity_network_id = {
         if let Ok(entity_ref) = world.get_entity(entity) {
@@ -255,7 +288,7 @@ fn apply_set_operation_with_lww(
     let should_apply = {
         if let Some(component_clocks) = world.get_resource::<ComponentVectorClocks>() {
             if let Some((current_clock, current_node_id)) =
-                component_clocks.get(entity_network_id, component_type)
+                component_clocks.get(entity_network_id, component_type_name)
             {
                 // We have a current clock - do LWW comparison with real node IDs
                 let decision = compare_operations_lww(
@@ -269,14 +302,14 @@ fn apply_set_operation_with_lww(
                     | crate::networking::merge::MergeDecision::ApplyRemote => {
                         debug!(
                             "Applying remote Set for {} (remote is newer)",
-                            component_type
+                            component_type_name
                         );
                         true
                     },
                     | crate::networking::merge::MergeDecision::KeepLocal => {
                         debug!(
                             "Ignoring remote Set for {} (local is newer)",
-                            component_type
+                            component_type_name
                         );
                         false
                     },
@@ -287,19 +320,22 @@ fn apply_set_operation_with_lww(
                         if incoming_node_id > *current_node_id {
                             debug!(
                                 "Applying remote Set for {} (concurrent, remote node_id {:?} > local {:?})",
-                                component_type, incoming_node_id, current_node_id
+                                component_type_name, incoming_node_id, current_node_id
                             );
                             true
                         } else {
                             debug!(
                                 "Ignoring remote Set for {} (concurrent, local node_id {:?} >= remote {:?})",
-                                component_type, current_node_id, incoming_node_id
+                                component_type_name, current_node_id, incoming_node_id
                             );
                             false
                         }
                     },
                     | crate::networking::merge::MergeDecision::Equal => {
-                        debug!("Ignoring remote Set for {} (clocks equal)", component_type);
+                        debug!(
+                            "Ignoring remote Set for {} (clocks equal)",
+                            component_type_name
+                        );
                         false
                     },
                 }
@@ -307,7 +343,7 @@ fn apply_set_operation_with_lww(
             // No current clock - this is the first time we're setting this component
             debug!(
                 "Applying remote Set for {} (no current clock)",
-                component_type
+                component_type_name
             );
             true
         }
@@ -323,19 +359,19 @@ fn apply_set_operation_with_lww(
     }

     // Apply the operation
-    apply_set_operation(entity, component_type, data, world);
+    apply_set_operation(entity, discriminant, data, world);

     // Update the stored vector clock with node_id
     if let Some(mut component_clocks) = world.get_resource_mut::<ComponentVectorClocks>() {
         component_clocks.set(
             entity_network_id,
-            component_type.to_string(),
+            component_type_name.to_string(),
             incoming_clock.clone(),
             incoming_node_id,
         );
         debug!(
             "Updated vector clock for {} on entity {:?} (node_id: {:?})",
-            component_type, entity_network_id, incoming_node_id
+            component_type_name, entity_network_id, incoming_node_id
         );
     }
 }
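The merge logic above makes a three-way decision: apply the remote op if its clock dominates, keep local if ours dominates, and fall back to a deterministic node-id tiebreak when the clocks are concurrent. A std-only sketch of that decision, with vector clocks as plain `HashMap`s and `u64` node ids standing in for the crate's real `VectorClock`/`Uuid` types:

```rust
use std::collections::HashMap;

// Sketch of the last-writer-wins decision in apply_set_operation_with_lww:
// compare vector clocks entry-wise; on concurrency, break ties with the node
// id (remote wins only if strictly greater). Types are illustrative stand-ins.
#[derive(Debug, PartialEq)]
enum Decision { ApplyRemote, KeepLocal }

fn lww_decision(
    local: &HashMap<u64, u64>,
    local_node: u64,
    remote: &HashMap<u64, u64>,
    remote_node: u64,
) -> Decision {
    let mut local_ahead = false;
    let mut remote_ahead = false;
    for key in local.keys().chain(remote.keys()) {
        let l = *local.get(key).unwrap_or(&0);
        let r = *remote.get(key).unwrap_or(&0);
        if l > r { local_ahead = true; }
        if r > l { remote_ahead = true; }
    }
    match (local_ahead, remote_ahead) {
        (false, false) => Decision::KeepLocal,  // clocks equal: nothing to do
        (false, true)  => Decision::ApplyRemote, // remote strictly newer
        (true, false)  => Decision::KeepLocal,   // local strictly newer
        // concurrent: deterministic tiebreak on node id
        (true, true) => {
            if remote_node > local_node { Decision::ApplyRemote } else { Decision::KeepLocal }
        }
    }
}

fn main() {
    let older = HashMap::from([(1u64, 2u64)]);
    let newer = HashMap::from([(1u64, 3u64)]);
    assert_eq!(lww_decision(&older, 9, &newer, 1), Decision::ApplyRemote);
    // Concurrent clocks: the larger node id wins the tiebreak.
    let a = HashMap::from([(1u64, 2u64)]);
    let b = HashMap::from([(2u64, 1u64)]);
    assert_eq!(lww_decision(&a, 9, &b, 1), Decision::KeepLocal);
}
```

Because the tiebreak depends only on immutable node ids, every replica resolves the same concurrent pair the same way, which is what makes the merge convergent.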
@@ -344,17 +380,9 @@ fn apply_set_operation_with_lww(
 ///
 /// Deserializes the component and inserts/updates it on the entity.
 /// Handles both inline data and blob references.
-fn apply_set_operation(
-    entity: Entity,
-    component_type: &str,
-    data: &ComponentData,
-    world: &mut World,
-) {
-    let type_registry = {
-        let registry_resource = world.resource::<AppTypeRegistry>();
-        registry_resource.read()
-    };
+fn apply_set_operation(entity: Entity, discriminant: u16, data: &ComponentData, world: &mut World) {
     let blob_store = world.get_resource::<BlobStore>();

     // Get the actual data (resolve blob if needed)
     let data_bytes = match data {
         | ComponentData::Inline(bytes) => bytes.clone(),
@@ -364,61 +392,62 @@ fn apply_set_operation(
                 | Ok(bytes) => bytes,
                 | Err(e) => {
                     error!(
-                        "Failed to retrieve blob for component {}: {}",
-                        component_type, e
+                        "Failed to retrieve blob for discriminant {}: {}",
+                        discriminant, e
                     );
                     return;
                 },
             }
         } else {
             error!(
-                "Blob reference for {} but no blob store available",
-                component_type
+                "Blob reference for discriminant {} but no blob store available",
+                discriminant
             );
             return;
         }
         },
     };

-    let reflected = match deserialize_component_typed(&data_bytes, component_type, &type_registry) {
-        | Ok(reflected) => reflected,
-        | Err(e) => {
-            error!("Failed to deserialize component {}: {}", component_type, e);
-            return;
-        },
+    // Get component type registry
+    let type_registry = {
+        let registry_resource =
+            world.resource::<crate::persistence::ComponentTypeRegistryResource>();
+        registry_resource.0
     };

-    let registration = match type_registry.get_with_type_path(component_type) {
-        | Some(reg) => reg,
-        | None => {
-            error!("Component type {} not registered", component_type);
-            return;
-        },
-    };
+    // Look up deserialize and insert functions by discriminant
+    let deserialize_fn = type_registry.get_deserialize_fn(discriminant);
+    let insert_fn = type_registry.get_insert_fn(discriminant);

-    let reflect_component = match registration.data::<ReflectComponent>() {
-        | Some(rc) => rc.clone(),
-        | None => {
+    let (deserialize_fn, insert_fn) = match (deserialize_fn, insert_fn) {
+        | (Some(d), Some(i)) => (d, i),
+        | _ => {
             error!(
-                "Component type {} does not have ReflectComponent data",
-                component_type
+                "Discriminant {} not registered in ComponentTypeRegistry",
+                discriminant
             );
             return;
         },
     };

-    drop(type_registry);
-
-    let type_registry_arc = world.resource::<AppTypeRegistry>().clone();
-    let type_registry_guard = type_registry_arc.read();
+    // Deserialize the component
+    let boxed_component = match deserialize_fn(&data_bytes) {
+        | Ok(component) => component,
+        | Err(e) => {
+            error!("Failed to deserialize discriminant {}: {}", discriminant, e);
+            return;
+        },
+    };

     // Insert the component into the entity
     if let Ok(mut entity_mut) = world.get_entity_mut(entity) {
-        reflect_component.insert(&mut entity_mut, &*reflected, &type_registry_guard);
-        debug!("Applied Set operation for {}", component_type);
+        insert_fn(&mut entity_mut, boxed_component);
+        debug!("Applied Set operation for discriminant {}", discriminant);

         // If we just inserted a Transform component, also add NetworkedTransform
         // This ensures remote entities can have their Transform changes detected
-        if component_type == "bevy_transform::components::transform::Transform" {
+        let type_path = type_registry.get_type_path(discriminant);
+        if type_path == Some("bevy_transform::components::transform::Transform") {
             if let Ok(mut entity_mut) = world.get_entity_mut(entity) {
                 if entity_mut
                     .get::<crate::networking::NetworkedTransform>()
@@ -431,8 +460,8 @@ fn apply_set_operation(
             }
     } else {
         error!(
-            "Entity {:?} not found when applying component {}",
-            entity, component_type
+            "Entity {:?} not found when applying discriminant {}",
+            entity, discriminant
        );
    }
 }
@@ -30,7 +30,7 @@ use crate::networking::{
 pub const BLOB_THRESHOLD: usize = 64 * 1024;

 /// Hash type for blob references
-pub type BlobHash = Vec<u8>;
+pub type BlobHash = bytes::Bytes;

 /// Bevy resource for managing blobs
 ///
@@ -40,7 +40,7 @@ pub type BlobHash = Vec<u8>;
 #[derive(Resource, Clone)]
 pub struct BlobStore {
     /// In-memory cache of blobs (hash -> data)
-    cache: Arc<Mutex<HashMap<BlobHash, Vec<u8>>>>,
+    cache: Arc<Mutex<HashMap<BlobHash, bytes::Bytes>>>,
 }

 impl BlobStore {
@@ -72,7 +72,7 @@ impl BlobStore {
         self.cache
             .lock()
             .map_err(|e| NetworkingError::Blob(format!("Failed to lock cache: {}", e)))?
-            .insert(hash.clone(), data);
+            .insert(hash.clone(), bytes::Bytes::from(data));

         Ok(hash)
     }
@@ -80,7 +80,7 @@ impl BlobStore {
     /// Retrieve a blob by its hash
     ///
     /// Returns `None` if the blob is not in the cache.
-    pub fn get_blob(&self, hash: &BlobHash) -> Result<Option<Vec<u8>>> {
+    pub fn get_blob(&self, hash: &BlobHash) -> Result<Option<bytes::Bytes>> {
         Ok(self
             .cache
             .lock()
@@ -104,7 +104,7 @@ impl BlobStore {
     ///
     /// This is safer than calling `has_blob()` followed by `get_blob()` because
     /// it's atomic - the blob can't be removed between the check and get.
-    pub fn get_blob_if_exists(&self, hash: &BlobHash) -> Result<Option<Vec<u8>>> {
+    pub fn get_blob_if_exists(&self, hash: &BlobHash) -> Result<Option<bytes::Bytes>> {
         Ok(self
             .cache
             .lock()
@@ -142,7 +142,7 @@ impl BlobStore {

         let mut hasher = Sha256::new();
         hasher.update(data);
-        hasher.finalize().to_vec()
+        bytes::Bytes::from(hasher.finalize().to_vec())
     }
 }

@@ -192,11 +192,11 @@ pub fn should_use_blob(data: &[u8]) -> bool {
 /// let large_data = vec![0u8; 100_000];
 /// let component_data = create_component_data(large_data, &store).unwrap();
 /// ```
-pub fn create_component_data(data: Vec<u8>, blob_store: &BlobStore) -> Result<ComponentData> {
+pub fn create_component_data(data: bytes::Bytes, blob_store: &BlobStore) -> Result<ComponentData> {
     if should_use_blob(&data) {
         let size = data.len() as u64;
-        let hash = blob_store.store_blob(data)?;
-        Ok(ComponentData::BlobRef { hash, size })
+        let hash = blob_store.store_blob(data.to_vec())?;
+        Ok(ComponentData::BlobRef { hash: bytes::Bytes::from(hash), size })
     } else {
         Ok(ComponentData::Inline(data))
     }
@@ -218,11 +218,11 @@ pub fn create_component_data(data: Vec<u8>, blob_store: &BlobStore) -> Result<Co
 /// let store = BlobStore::new();
 ///
 /// // Inline data
-/// let inline = ComponentData::Inline(vec![1, 2, 3]);
+/// let inline = ComponentData::Inline(bytes::Bytes::from(vec![1, 2, 3]));
 /// let data = get_component_data(&inline, &store).unwrap();
 /// assert_eq!(data, vec![1, 2, 3]);
 /// ```
-pub fn get_component_data(data: &ComponentData, blob_store: &BlobStore) -> Result<Vec<u8>> {
+pub fn get_component_data(data: &ComponentData, blob_store: &BlobStore) -> Result<bytes::Bytes> {
     match data {
         | ComponentData::Inline(bytes) => Ok(bytes.clone()),
         | ComponentData::BlobRef { hash, size: _ } => blob_store
@@ -268,7 +268,7 @@ mod tests {
         let hash = store.store_blob(data.clone()).unwrap();
         let retrieved = store.get_blob(&hash).unwrap();

-        assert_eq!(retrieved, Some(data));
+        assert_eq!(retrieved, Some(bytes::Bytes::from(data)));
     }

     #[test]
@@ -291,7 +291,7 @@ mod tests {
         assert!(store.has_blob(&hash).unwrap());

         let fake_hash = vec![0; 32];
-        assert!(!store.has_blob(&fake_hash).unwrap());
+        assert!(!store.has_blob(&bytes::Bytes::from(fake_hash)).unwrap());
     }

     #[test]
@@ -326,7 +326,7 @@ mod tests {
         let store = BlobStore::new();
         let small_data = vec![1, 2, 3];

-        let component_data = create_component_data(small_data.clone(), &store).unwrap();
+        let component_data = create_component_data(bytes::Bytes::from(small_data.clone()), &store).unwrap();

         match component_data {
             | ComponentData::Inline(data) => assert_eq!(data, small_data),
@@ -339,7 +339,7 @@ mod tests {
         let store = BlobStore::new();
         let large_data = vec![0u8; 100_000];

-        let component_data = create_component_data(large_data.clone(), &store).unwrap();
+        let component_data = create_component_data(bytes::Bytes::from(large_data.clone()), &store).unwrap();

         match component_data {
             | ComponentData::BlobRef { hash, size } => {
@@ -353,7 +353,7 @@ mod tests {
     #[test]
     fn test_get_component_data_inline() {
         let store = BlobStore::new();
-        let inline = ComponentData::Inline(vec![1, 2, 3]);
+        let inline = ComponentData::Inline(bytes::Bytes::from(vec![1, 2, 3]));

         let data = get_component_data(&inline, &store).unwrap();
         assert_eq!(data, vec![1, 2, 3]);
@@ -380,7 +380,7 @@ mod tests {
         let fake_hash = vec![0; 32];

         let blob_ref = ComponentData::BlobRef {
-            hash: fake_hash,
+            hash: bytes::Bytes::from(fake_hash),
             size: 1000,
         };
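The blob support above routes component payloads by size: small ones travel inline in the sync message, while payloads past the 64 KiB `BLOB_THRESHOLD` are stored content-addressed and referenced by hash. A std-only sketch of that decision, with a simplified `ComponentData` and a stand-in hash (the real code hashes with SHA-256 and stores the bytes in `BlobStore`); the exact boundary comparison lives in `should_use_blob`:

```rust
// Sketch of the inline-vs-blob routing used by create_component_data.
// Simplified stand-in types; the real code SHA-256-hashes the payload and
// keeps it in the BlobStore so only the hash crosses the wire.
const BLOB_THRESHOLD: usize = 64 * 1024;

#[derive(Debug)]
enum ComponentData {
    Inline(Vec<u8>),
    BlobRef { hash: [u8; 32], size: u64 },
}

fn create_component_data(data: Vec<u8>) -> ComponentData {
    if data.len() > BLOB_THRESHOLD {
        // Stand-in hash derived from the payload; real code uses SHA-256.
        let mut hash = [0u8; 32];
        hash[0] = data.first().copied().unwrap_or(0);
        ComponentData::BlobRef { hash, size: data.len() as u64 }
    } else {
        ComponentData::Inline(data)
    }
}

fn main() {
    // Small payloads ride inline; large ones become blob references.
    assert!(matches!(create_component_data(vec![1, 2, 3]), ComponentData::Inline(_)));
    assert!(matches!(create_component_data(vec![0u8; 100_000]), ComponentData::BlobRef { .. }));
}
```

The split keeps gossip messages small and bounded while still letting arbitrarily large component data ride along via the content-addressed store.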
@@ -31,6 +31,7 @@ pub fn auto_detect_transform_changes_system(
         (
             With<NetworkedTransform>,
             Or<(Changed<Transform>, Changed<GlobalTransform>)>,
+            Without<crate::networking::SkipNextDeltaGeneration>,
         ),
     >,
 ) {
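The `Without<SkipNextDeltaGeneration>` filter above is one half of the feedback-loop guard; the other half is the cleanup system that removes the marker after delta generation. A std-only simulation of the full cycle, with a `HashSet` of entity ids standing in for the Bevy marker component and query filter:

```rust
use std::collections::HashSet;

// Std-only sketch of the feedback-loop guard: entities touched by a remote
// delta get a one-frame marker; delta generation skips marked entities, and a
// cleanup pass clears the markers so later local edits broadcast normally.
// The HashSet stands in for the SkipNextDeltaGeneration marker component.
struct SyncState {
    skip_next_delta: HashSet<u32>, // entity ids currently carrying the marker
}

impl SyncState {
    fn apply_remote_delta(&mut self, entity: u32) {
        // Applying the remote change trips change detection, so mark the
        // entity to suppress the echo broadcast.
        self.skip_next_delta.insert(entity);
    }

    fn generate_deltas(&self, changed: &[u32]) -> Vec<u32> {
        // Mirrors Without<SkipNextDeltaGeneration>: marked entities are skipped.
        changed.iter().copied()
            .filter(|e| !self.skip_next_delta.contains(e))
            .collect()
    }

    fn cleanup(&mut self) {
        self.skip_next_delta.clear(); // marker lasts exactly one frame
    }
}

fn main() {
    let mut state = SyncState { skip_next_delta: HashSet::new() };
    state.apply_remote_delta(7);
    // Frame 1: entity 7 looks changed, but the marker suppresses the echo.
    assert_eq!(state.generate_deltas(&[7, 8]), vec![8]);
    state.cleanup();
    // Frame 2: a genuine local edit to entity 7 broadcasts again.
    assert_eq!(state.generate_deltas(&[7]), vec![7]);
}
```

Ordering matters in the real schedule: cleanup must run after delta generation within the same frame, or the marker would be cleared before it had any effect.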
@@ -11,6 +11,21 @@ use serde::{
|
||||
|
||||
use crate::networking::vector_clock::NodeId;
|
||||
|
||||
/// Marker component to skip delta generation for one frame after receiving remote updates
|
||||
///
|
||||
/// When we apply remote operations via `apply_entity_delta()`, the `insert_fn()` call
|
||||
/// triggers Bevy's change detection. This would normally cause `generate_delta_system`
|
||||
/// to create and broadcast a new delta, creating an infinite feedback loop.
|
||||
///
|
||||
/// By adding this marker when we apply remote updates, we tell `generate_delta_system`
|
||||
/// to skip this entity for one frame. A cleanup system removes the marker after
|
||||
/// delta generation runs, allowing future local changes to be broadcast normally.
|
||||
///
|
||||
/// This is an implementation detail of the feedback loop prevention mechanism.
|
||||
/// User code should never need to interact with this component.
|
||||
#[derive(Component, Debug)]
|
||||
pub struct SkipNextDeltaGeneration;
|
||||
|
||||
/// Marker component indicating an entity should be synchronized over the
/// network
///
@@ -156,49 +171,36 @@ impl Default for NetworkedEntity {
#[reflect(Component)]
pub struct NetworkedTransform;

/// Wrapper for a selection component using OR-Set semantics
/// Local selection tracking resource
///
/// Tracks a set of selected entity network IDs. Uses OR-Set (Observed-Remove)
/// CRDT to handle concurrent add/remove operations correctly.
/// This global resource tracks which entities are currently selected by THIS node.
/// It's used in conjunction with the entity lock system to coordinate concurrent editing.
///
/// # OR-Set Semantics
///
/// - Concurrent adds and removes: add wins
/// - Each add has a unique operation ID
/// - Removes reference specific add operation IDs
/// **Selections are local-only UI state** and are NOT synchronized across the network.
/// Each node maintains its own independent selection.
///
/// # Example
///
/// ```
/// use bevy::prelude::*;
/// use libmarathon::networking::{
/// NetworkedEntity,
/// NetworkedSelection,
/// };
/// use libmarathon::networking::LocalSelection;
/// use uuid::Uuid;
///
/// fn create_selection(mut commands: Commands) {
/// let node_id = Uuid::new_v4();
/// let mut selection = NetworkedSelection::new();
/// fn handle_click(mut selection: ResMut<LocalSelection>) {
/// // Clear previous selection
/// selection.clear();
///
/// // Add some entities to the selection
/// selection.selected_ids.insert(Uuid::new_v4());
/// selection.selected_ids.insert(Uuid::new_v4());
///
/// commands.spawn((NetworkedEntity::new(node_id), selection));
/// // Select a new entity
/// selection.insert(Uuid::new_v4());
/// }
/// ```
#[derive(Component, Reflect, Debug, Clone, Default)]
#[reflect(Component)]
pub struct NetworkedSelection {
#[derive(Resource, Debug, Clone, Default)]
pub struct LocalSelection {
/// Set of selected entity network IDs
///
/// This will be synchronized using OR-Set CRDT semantics in later phases.
/// For now, it's a simple HashSet.
pub selected_ids: std::collections::HashSet<uuid::Uuid>,
selected_ids: std::collections::HashSet<uuid::Uuid>,
}

impl NetworkedSelection {
impl LocalSelection {
/// Create a new empty selection
pub fn new() -> Self {
Self {
@@ -207,13 +209,13 @@ impl NetworkedSelection {
}

/// Add an entity to the selection
pub fn add(&mut self, entity_id: uuid::Uuid) {
self.selected_ids.insert(entity_id);
pub fn insert(&mut self, entity_id: uuid::Uuid) -> bool {
self.selected_ids.insert(entity_id)
}

/// Remove an entity from the selection
pub fn remove(&mut self, entity_id: uuid::Uuid) {
self.selected_ids.remove(&entity_id);
pub fn remove(&mut self, entity_id: uuid::Uuid) -> bool {
self.selected_ids.remove(&entity_id)
}

/// Check if an entity is selected
@@ -235,6 +237,11 @@ impl NetworkedSelection {
pub fn is_empty(&self) -> bool {
self.selected_ids.is_empty()
}

/// Get an iterator over selected entity IDs
pub fn iter(&self) -> impl Iterator<Item = &uuid::Uuid> {
self.selected_ids.iter()
}
}

/// Wrapper for a drawing path component using Sequence CRDT semantics
@@ -361,18 +368,18 @@ mod tests {
}

#[test]
fn test_networked_selection() {
let mut selection = NetworkedSelection::new();
fn test_local_selection() {
let mut selection = LocalSelection::new();
let id1 = uuid::Uuid::new_v4();
let id2 = uuid::Uuid::new_v4();

assert!(selection.is_empty());

selection.add(id1);
selection.insert(id1);
assert_eq!(selection.len(), 1);
assert!(selection.contains(id1));

selection.add(id2);
selection.insert(id2);
assert_eq!(selection.len(), 2);
assert!(selection.contains(id2));
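The new `-> bool` return types on `insert`/`remove` simply forward `HashSet` semantics (true if the set actually changed). A stdlib-only mirror of the `LocalSelection` API, with `u64` standing in for `uuid::Uuid`:

```rust
use std::collections::HashSet;

/// Minimal mirror of the LocalSelection API shown in the diff above,
/// using plain u64 ids so the sketch has no external dependencies.
#[derive(Debug, Default)]
struct LocalSelection {
    selected_ids: HashSet<u64>,
}

impl LocalSelection {
    fn insert(&mut self, id: u64) -> bool { self.selected_ids.insert(id) }
    fn remove(&mut self, id: u64) -> bool { self.selected_ids.remove(&id) }
    fn contains(&self, id: u64) -> bool { self.selected_ids.contains(&id) }
    fn len(&self) -> usize { self.selected_ids.len() }
    fn is_empty(&self) -> bool { self.selected_ids.is_empty() }
    fn clear(&mut self) { self.selected_ids.clear() }
}

fn main() {
    let mut sel = LocalSelection::default();
    assert!(sel.is_empty());
    assert!(sel.insert(1));  // newly inserted -> true
    assert!(!sel.insert(1)); // duplicate insert -> false (HashSet semantics)
    sel.insert(2);
    assert_eq!(sel.len(), 2);
    assert!(sel.remove(1));  // present -> true
    assert!(!sel.remove(1)); // already gone -> false
    assert!(sel.contains(2));
    sel.clear();
    assert!(sel.is_empty());
    println!("ok");
}
```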

170 crates/libmarathon/src/networking/control.rs Normal file
@@ -0,0 +1,170 @@
//! Control socket protocol for remote engine control
//!
//! This module defines the message protocol for controlling the engine via
//! Unix domain sockets without exposing network ports. Used for testing,
//! validation, and programmatic control of sessions.
//!
//! # Security
//!
//! Currently debug-only. See issue #135 for production security requirements.

use uuid::Uuid;

use crate::networking::{
SessionId,
SessionState,
SyncMessage,
VersionedMessage,
};

/// Control command sent to the engine
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub enum ControlCommand {
/// Get current session status
GetStatus,

/// Send a test message through gossip
SendTestMessage { content: String },

/// Inject a message directly into the incoming queue (for testing)
InjectMessage { message: VersionedMessage },

/// Broadcast a full sync message through gossip
BroadcastMessage { message: SyncMessage },

/// Request graceful shutdown
Shutdown,

// Session lifecycle commands

/// Join a specific session by code
JoinSession { session_code: String },

/// Leave the current session gracefully
LeaveSession,

/// Get detailed current session information
GetSessionInfo,

/// List all sessions in the database
ListSessions,

/// Delete a session from the database
DeleteSession { session_code: String },

/// Get list of connected peers in current session
ListPeers,

// Entity commands

/// Spawn an entity with a given type and position
SpawnEntity {
entity_type: String,
position: [f32; 3],
},

/// Delete an entity by its UUID
DeleteEntity { entity_id: Uuid },
}

/// Detailed session information
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub struct SessionInfo {
pub session_id: SessionId,
pub session_name: Option<String>,
pub state: SessionState,
pub created_at: i64,
pub last_active: i64,
pub entity_count: usize,
}

/// Peer information
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub struct PeerInfo {
pub node_id: Uuid,
pub connected_since: Option<i64>,
}

/// Response from the engine to a control command
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub enum ControlResponse {
/// Session status information
Status {
node_id: Uuid,
session_id: SessionId,
outgoing_queue_size: usize,
incoming_queue_size: usize,
/// Number of connected peers (if available from gossip)
connected_peers: Option<usize>,
},

/// Detailed session information
SessionInfo(SessionInfo),

/// List of sessions
Sessions(Vec<SessionInfo>),

/// List of connected peers
Peers(Vec<PeerInfo>),

/// Acknowledgment of command execution
Ok { message: String },

/// Error occurred during command execution
Error { error: String },
}

impl ControlCommand {
/// Serialize a command to bytes using rkyv
pub fn to_bytes(&self) -> Result<Vec<u8>, rkyv::rancor::Error> {
rkyv::to_bytes::<rkyv::rancor::Error>(self).map(|b| b.to_vec())
}

/// Deserialize a command from bytes using rkyv
pub fn from_bytes(bytes: &[u8]) -> Result<Self, rkyv::rancor::Error> {
rkyv::from_bytes::<Self, rkyv::rancor::Error>(bytes)
}
}

impl ControlResponse {
/// Serialize a response to bytes using rkyv
pub fn to_bytes(&self) -> Result<Vec<u8>, rkyv::rancor::Error> {
rkyv::to_bytes::<rkyv::rancor::Error>(self).map(|b| b.to_vec())
}

/// Deserialize a response from bytes using rkyv
pub fn from_bytes(bytes: &[u8]) -> Result<Self, rkyv::rancor::Error> {
rkyv::from_bytes::<Self, rkyv::rancor::Error>(bytes)
}
}

#[cfg(test)]
mod tests {
use super::*;

#[test]
fn test_command_roundtrip() {
let cmd = ControlCommand::GetStatus;
let bytes = cmd.to_bytes().unwrap();
let decoded = ControlCommand::from_bytes(&bytes).unwrap();

match decoded {
| ControlCommand::GetStatus => {},
| _ => panic!("Failed to decode GetStatus"),
}
}

#[test]
fn test_response_roundtrip() {
let resp = ControlResponse::Ok {
message: "Test".to_string(),
};
let bytes = resp.to_bytes().unwrap();
let decoded = ControlResponse::from_bytes(&bytes).unwrap();

match decoded {
| ControlResponse::Ok { message } => assert_eq!(message, "Test"),
| _ => panic!("Failed to decode Ok response"),
}
}
}
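The control module's roundtrip pattern (command -> bytes -> command) can be illustrated without rkyv by a hand-rolled tag-byte encoding for a tiny command subset. This is a stand-in only — the real module derives rkyv traits rather than encoding by hand, and `Cmd` is a hypothetical name:

```rust
/// Hand-rolled stand-in for the rkyv roundtrip in control.rs: one tag byte
/// plus a length-prefixed payload for the variant that carries data.
#[derive(Debug, PartialEq)]
enum Cmd {
    GetStatus,
    SendTestMessage { content: String },
}

impl Cmd {
    fn to_bytes(&self) -> Vec<u8> {
        match self {
            Cmd::GetStatus => vec![0],
            Cmd::SendTestMessage { content } => {
                let mut out = vec![1];
                // little-endian u32 length prefix, then UTF-8 payload
                out.extend_from_slice(&(content.len() as u32).to_le_bytes());
                out.extend_from_slice(content.as_bytes());
                out
            }
        }
    }

    fn from_bytes(bytes: &[u8]) -> Option<Self> {
        match bytes.first()? {
            0 => Some(Cmd::GetStatus),
            1 => {
                let len = u32::from_le_bytes(bytes.get(1..5)?.try_into().ok()?) as usize;
                let content = String::from_utf8(bytes.get(5..5 + len)?.to_vec()).ok()?;
                Some(Cmd::SendTestMessage { content })
            }
            _ => None, // unknown tag: reject rather than guess
        }
    }
}

fn main() {
    let cmd = Cmd::SendTestMessage { content: "ping".into() };
    let bytes = cmd.to_bytes();
    assert_eq!(Cmd::from_bytes(&bytes), Some(cmd));
    assert_eq!(Cmd::from_bytes(&Cmd::GetStatus.to_bytes()), Some(Cmd::GetStatus));
    println!("ok");
}
```

rkyv buys zero-copy archives and derive-based support for the full message set; the sketch only shows why a symmetric `to_bytes`/`from_bytes` pair is the unit worth testing.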
@@ -52,7 +52,7 @@ impl NodeVectorClock {
/// System to generate and broadcast EntityDelta messages
///
/// This system:
/// 1. Queries for Changed<NetworkedEntity>
/// 1. Queries for Added<NetworkedEntity> or Changed<NetworkedEntity>
/// 2. Serializes all components on those entities
/// 3. Builds EntityDelta messages
/// 4. Broadcasts via GossipBridge
@@ -66,14 +66,17 @@ impl NodeVectorClock {
/// App::new().add_systems(Update, generate_delta_system);
/// ```
pub fn generate_delta_system(world: &mut World) {
// Check if bridge exists
if world.get_resource::<GossipBridge>().is_none() {
return;
}
// Works both online and offline - clock increments and operations are recorded
// Broadcast only happens when online

let changed_entities: Vec<(Entity, uuid::Uuid, uuid::Uuid)> = {
let mut query =
world.query_filtered::<(Entity, &NetworkedEntity), Changed<NetworkedEntity>>();
let mut query = world.query_filtered::<
(Entity, &NetworkedEntity),
(
Or<(Added<NetworkedEntity>, Changed<NetworkedEntity>)>,
Without<crate::networking::SkipNextDeltaGeneration>,
),
>();
query
.iter(world)
.map(|(entity, networked)| (entity, networked.network_id, networked.owner_node_id))
@@ -93,44 +96,46 @@ pub fn generate_delta_system(world: &mut World) {
for (entity, network_id, _owner_node_id) in changed_entities {
// Phase 1: Check and update clocks, collect data
let mut system_state: bevy::ecs::system::SystemState<(
Res<GossipBridge>,
Res<AppTypeRegistry>,
Option<Res<GossipBridge>>,
Res<crate::persistence::ComponentTypeRegistryResource>,
ResMut<NodeVectorClock>,
ResMut<LastSyncVersions>,
Option<ResMut<crate::networking::OperationLog>>,
)> = bevy::ecs::system::SystemState::new(world);

let (node_id, vector_clock, current_seq) = {
let (node_id, vector_clock, new_seq) = {
let (_, _, mut node_clock, last_versions, _) = system_state.get_mut(world);

// Check if we should sync this entity
// Check if we should sync this entity with the NEXT sequence (after tick)
// This prevents duplicate sends when system runs multiple times per frame
let current_seq = node_clock.sequence();
if !last_versions.should_sync(network_id, current_seq) {
let next_seq = current_seq + 1; // What the sequence will be after tick
if !last_versions.should_sync(network_id, next_seq) {
drop(last_versions);
drop(node_clock);
system_state.apply(world);
continue;
}

// Increment our vector clock
node_clock.tick();
// Increment our vector clock and get the NEW sequence
let new_seq = node_clock.tick();
debug_assert_eq!(new_seq, next_seq, "tick() should return next_seq");

(node_clock.node_id, node_clock.clock.clone(), current_seq)
(node_clock.node_id, node_clock.clock.clone(), new_seq)
};

// Phase 2: Build operations (needs world access without holding other borrows)
let operations = {
let type_registry = world.resource::<AppTypeRegistry>().read();
let ops = build_entity_operations(
let type_registry_res = world.resource::<crate::persistence::ComponentTypeRegistryResource>();
let type_registry = type_registry_res.0;
build_entity_operations(
entity,
world,
node_id,
vector_clock.clone(),
&type_registry,
type_registry,
None, // blob_store - will be added in later phases
);
drop(type_registry);
ops
)
};

if operations.is_empty() {
@@ -145,55 +150,76 @@ pub fn generate_delta_system(world: &mut World) {
// Create EntityDelta
let delta = EntityDelta::new(network_id, node_id, vector_clock.clone(), operations);

// Record in operation log for anti-entropy
// Record in operation log for anti-entropy (works offline!)
if let Some(ref mut log) = operation_log {
log.record_operation(delta.clone());
}

// Wrap in VersionedMessage
let message = VersionedMessage::new(SyncMessage::EntityDelta {
entity_id: delta.entity_id,
node_id: delta.node_id,
vector_clock: delta.vector_clock.clone(),
operations: delta.operations.clone(),
});
// Broadcast if online
if let Some(ref bridge) = bridge {
// Wrap in VersionedMessage
let message = VersionedMessage::new(SyncMessage::EntityDelta {
entity_id: delta.entity_id,
node_id: delta.node_id,
vector_clock: delta.vector_clock.clone(),
operations: delta.operations.clone(),
});

// Broadcast
if let Err(e) = bridge.send(message) {
error!("Failed to broadcast EntityDelta: {}", e);
// Broadcast to peers
if let Err(e) = bridge.send(message) {
error!("Failed to broadcast EntityDelta: {}", e);
} else {
debug!(
"Broadcast EntityDelta for entity {:?} with {} operations",
network_id,
delta.operations.len()
);
}
} else {
debug!(
"Broadcast EntityDelta for entity {:?} with {} operations",
network_id,
delta.operations.len()
"Generated EntityDelta for entity {:?} offline (will sync when online)",
network_id
);
last_versions.update(network_id, current_seq);
}

// Update last sync version with NEW sequence (after tick) to prevent duplicates
// CRITICAL: Must use new_seq (after tick), not current_seq (before tick)
// This prevents sending duplicate deltas if system runs multiple times per frame
last_versions.update(network_id, new_seq);

delta
};

// Phase 4: Update component vector clocks for local modifications
{
// Get type registry first before mutable borrow
let type_registry = {
let type_registry_res = world.resource::<crate::persistence::ComponentTypeRegistryResource>();
type_registry_res.0
};

if let Some(mut component_clocks) =
world.get_resource_mut::<crate::networking::ComponentVectorClocks>()
{
for op in &delta.operations {
if let crate::networking::ComponentOp::Set {
component_type,
discriminant,
vector_clock: op_clock,
..
} = op
{
let component_type_name = type_registry.get_type_name(*discriminant)
.unwrap_or("unknown");

component_clocks.set(
network_id,
component_type.clone(),
component_type_name.to_string(),
op_clock.clone(),
node_id,
);
debug!(
"Updated local vector clock for {} on entity {:?} (node_id: {:?})",
component_type, network_id, node_id
component_type_name, network_id, node_id
);
}
}
@@ -204,6 +230,46 @@ pub fn generate_delta_system(world: &mut World) {
}
}

/// Remove SkipNextDeltaGeneration markers after delta generation has run
///
/// This system must run AFTER `generate_delta_system` to allow entities to be
/// synced again on the next actual local change. The marker prevents feedback
/// loops by skipping entities that just received remote updates, but we need
/// to remove it so future local changes get broadcast.
///
/// Add this to your app after generate_delta_system:
///
/// ```no_run
/// use bevy::prelude::*;
/// use libmarathon::networking::{generate_delta_system, cleanup_skip_delta_markers_system};
///
/// App::new().add_systems(PostUpdate, (
/// generate_delta_system,
/// cleanup_skip_delta_markers_system,
/// ).chain());
/// ```
pub fn cleanup_skip_delta_markers_system(world: &mut World) {
// Use immediate removal (not deferred commands) to ensure markers are removed
// synchronously after generate_delta_system runs, not at the start of next frame
let entities_to_clean: Vec<Entity> = {
let mut query = world.query_filtered::<Entity, With<crate::networking::SkipNextDeltaGeneration>>();
query.iter(world).collect()
};

for entity in &entities_to_clean {
if let Ok(mut entity_mut) = world.get_entity_mut(*entity) {
entity_mut.remove::<crate::networking::SkipNextDeltaGeneration>();
}
}

if !entities_to_clean.is_empty() {
debug!(
"cleanup_skip_delta_markers_system: Removed markers from {} entities",
entities_to_clean.len()
);
}
}

#[cfg(test)]
mod tests {
use super::*;
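The "record new_seq, not current_seq" fix above hinges on the last-synced-sequence check. One way that check could work (a hedged stdlib sketch — `LastSyncVersions`'s real comparison logic lives in libmarathon and may differ; names mirror the diff but the implementation is assumed):

```rust
use std::collections::HashMap;

/// Sketch of the LastSyncVersions idea: an entity is synced only when the
/// offered sequence is strictly newer than the last one recorded for it.
#[derive(Default)]
struct LastSyncVersions {
    last: HashMap<u32, u64>, // entity id -> last synced sequence
}

impl LastSyncVersions {
    fn should_sync(&self, entity: u32, seq: u64) -> bool {
        self.last.get(&entity).map_or(true, |&s| seq > s)
    }
    fn update(&mut self, entity: u32, seq: u64) {
        self.last.insert(entity, seq);
    }
}

fn main() {
    let mut versions = LastSyncVersions::default();
    let entity = 7;
    let current_seq: u64 = 0;

    // What the sequence will be after tick():
    let next_seq = current_seq + 1;
    assert!(versions.should_sync(entity, next_seq)); // first sync goes out

    // Record the POST-tick sequence, as the fix in the hunk stresses.
    versions.update(entity, next_seq);

    // Re-offering the same sequence is now rejected...
    assert!(!versions.should_sync(entity, next_seq));
    // ...while a genuinely newer one still syncs.
    assert!(versions.should_sync(entity, next_seq + 1));
    println!("ok");
}
```

Recording the pre-tick value would leave the stored sequence one behind the state that was actually broadcast, so the same delta could pass the check again.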

@@ -64,12 +64,6 @@ impl fmt::Display for NetworkingError {

impl std::error::Error for NetworkingError {}

impl From<bincode::Error> for NetworkingError {
fn from(e: bincode::Error) -> Self {
NetworkingError::Serialization(e.to_string())
}
}

impl From<crate::persistence::PersistenceError> for NetworkingError {
fn from(e: crate::persistence::PersistenceError) -> Self {
NetworkingError::Other(format!("Persistence error: {}", e))
@@ -43,6 +43,16 @@ pub struct GossipBridge {
pub node_id: NodeId,
}

impl std::fmt::Debug for GossipBridge {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("GossipBridge")
.field("node_id", &self.node_id)
.field("outgoing_len", &self.outgoing.lock().ok().map(|q| q.len()))
.field("incoming_len", &self.incoming.lock().ok().map(|q| q.len()))
.finish()
}
}

impl GossipBridge {
/// Create a new gossip bridge
pub fn new(node_id: NodeId) -> Self {
@@ -55,6 +65,33 @@ impl GossipBridge {

/// Send a message to the gossip network
pub fn send(&self, message: VersionedMessage) -> Result<()> {
// Diagnostic logging: track message type and nonce
let msg_type = match &message.message {
crate::networking::SyncMessage::EntityDelta { entity_id, .. } => {
format!("EntityDelta({})", entity_id)
}
crate::networking::SyncMessage::JoinRequest { node_id, .. } => {
format!("JoinRequest({})", node_id)
}
crate::networking::SyncMessage::FullState { entities, .. } => {
format!("FullState({} entities)", entities.len())
}
crate::networking::SyncMessage::SyncRequest { node_id, .. } => {
format!("SyncRequest({})", node_id)
}
crate::networking::SyncMessage::MissingDeltas { deltas } => {
format!("MissingDeltas({} ops)", deltas.len())
}
crate::networking::SyncMessage::Lock(lock_msg) => {
format!("Lock({:?})", lock_msg)
}
};

debug!(
"[GossipBridge::send] Node {} queuing message: {} (nonce: {})",
self.node_id, msg_type, message.nonce
);

self.outgoing
.lock()
.map_err(|e| NetworkingError::Gossip(format!("Failed to lock outgoing queue: {}", e)))?
@@ -87,6 +124,33 @@ impl GossipBridge {

/// Push a message to the incoming queue (for testing/integration)
pub fn push_incoming(&self, message: VersionedMessage) -> Result<()> {
// Diagnostic logging: track incoming message type
let msg_type = match &message.message {
crate::networking::SyncMessage::EntityDelta { entity_id, .. } => {
format!("EntityDelta({})", entity_id)
}
crate::networking::SyncMessage::JoinRequest { node_id, .. } => {
format!("JoinRequest({})", node_id)
}
crate::networking::SyncMessage::FullState { entities, .. } => {
format!("FullState({} entities)", entities.len())
}
crate::networking::SyncMessage::SyncRequest { node_id, .. } => {
format!("SyncRequest({})", node_id)
}
crate::networking::SyncMessage::MissingDeltas { deltas } => {
format!("MissingDeltas({} ops)", deltas.len())
}
crate::networking::SyncMessage::Lock(lock_msg) => {
format!("Lock({:?})", lock_msg)
}
};

debug!(
"[GossipBridge::push_incoming] Node {} received from network: {} (nonce: {})",
self.node_id, msg_type, message.nonce
);

self.incoming
.lock()
.map_err(|e| NetworkingError::Gossip(format!("Failed to lock incoming queue: {}", e)))?
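The `Debug` impl added above reports queue lengths via `lock().ok().map(..)`, so a poisoned mutex shows up as `None` instead of panicking inside formatting. A self-contained mirror of the pattern (`Bridge` and its `String` messages are placeholders, not libmarathon's `GossipBridge`/`VersionedMessage`):

```rust
use std::fmt;
use std::sync::Mutex;

/// Stdlib mirror of the GossipBridge Debug impl: report queue lengths
/// without panicking if a Mutex is poisoned.
struct Bridge {
    node_id: u32,
    outgoing: Mutex<Vec<String>>,
    incoming: Mutex<Vec<String>>,
}

impl fmt::Debug for Bridge {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Bridge")
            .field("node_id", &self.node_id)
            // lock() returns Err on poison; .ok() turns that into None
            .field("outgoing_len", &self.outgoing.lock().ok().map(|q| q.len()))
            .field("incoming_len", &self.incoming.lock().ok().map(|q| q.len()))
            .finish()
    }
}

fn main() {
    let bridge = Bridge {
        node_id: 1,
        outgoing: Mutex::new(vec!["EntityDelta".to_string()]),
        incoming: Mutex::new(Vec::new()),
    };
    let dbg = format!("{:?}", bridge);
    assert!(dbg.contains("outgoing_len: Some(1)"));
    assert!(dbg.contains("incoming_len: Some(0)"));
    println!("{}", dbg);
}
```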
@@ -11,15 +11,13 @@
//! **NOTE:** This is a simplified implementation for Phase 7. Full security
//! and session management will be enhanced in Phase 13.

use bevy::{
prelude::*,
reflect::TypeRegistry,
};
use bevy::prelude::*;

use crate::networking::{
GossipBridge,
NetworkedEntity,
SessionId,
Synced,
VectorClock,
blob_support::BlobStore,
delta_generation::NodeVectorClock,
@@ -54,7 +52,7 @@ use crate::networking::{
pub fn build_join_request(
node_id: uuid::Uuid,
session_id: SessionId,
session_secret: Option<Vec<u8>>,
session_secret: Option<bytes::Bytes>,
last_known_clock: Option<VectorClock>,
join_type: JoinType,
) -> VersionedMessage {
@@ -76,7 +74,7 @@ pub fn build_join_request(
///
/// - `world`: Bevy world containing entities
/// - `query`: Query for all NetworkedEntity components
/// - `type_registry`: Type registry for serialization
/// - `type_registry`: Component type registry for serialization
/// - `node_clock`: Current node vector clock
/// - `blob_store`: Optional blob store for large components
///
@@ -86,7 +84,7 @@ pub fn build_join_request(
pub fn build_full_state(
world: &World,
networked_entities: &Query<(Entity, &NetworkedEntity)>,
type_registry: &TypeRegistry,
type_registry: &crate::persistence::ComponentTypeRegistry,
node_clock: &NodeVectorClock,
blob_store: Option<&BlobStore>,
) -> VersionedMessage {
@@ -95,53 +93,31 @@ pub fn build_full_state(
blob_support::create_component_data,
messages::ComponentState,
},
persistence::reflection::serialize_component,
};

let mut entities = Vec::new();

for (entity, networked) in networked_entities.iter() {
let entity_ref = world.entity(entity);
let mut components = Vec::new();

// Iterate over all type registrations to find components
for registration in type_registry.iter() {
// Skip if no ReflectComponent data
let Some(reflect_component) = registration.data::<ReflectComponent>() else {
continue;
// Serialize all registered Synced components on this entity
let serialized_components = type_registry.serialize_entity_components(world, entity);

for (discriminant, _type_path, serialized) in serialized_components {
// Create component data (inline or blob)
let data = if let Some(store) = blob_store {
match create_component_data(serialized, store) {
| Ok(d) => d,
| Err(_) => continue,
}
} else {
crate::networking::ComponentData::Inline(serialized)
};

let type_path = registration.type_info().type_path();

// Skip networked wrapper components
if type_path.ends_with("::NetworkedEntity") ||
type_path.ends_with("::NetworkedTransform") ||
type_path.ends_with("::NetworkedSelection") ||
type_path.ends_with("::NetworkedDrawingPath")
{
continue;
}

// Try to reflect this component from the entity
if let Some(reflected) = reflect_component.reflect(entity_ref) {
// Serialize the component
if let Ok(serialized) = serialize_component(reflected, type_registry) {
// Create component data (inline or blob)
let data = if let Some(store) = blob_store {
match create_component_data(serialized, store) {
| Ok(d) => d,
| Err(_) => continue,
}
} else {
crate::networking::ComponentData::Inline(serialized)
};

components.push(ComponentState {
component_type: type_path.to_string(),
data,
});
}
}
components.push(ComponentState {
discriminant,
data,
});
}

entities.push(EntityState {
@@ -154,8 +130,9 @@ pub fn build_full_state(
}

info!(
"Built FullState with {} entities for new peer",
entities.len()
"Built FullState with {} entities ({} total networked entities queried) for new peer",
entities.len(),
networked_entities.iter().count()
);

VersionedMessage::new(SyncMessage::FullState {
@@ -175,36 +152,37 @@ pub fn build_full_state(
|
||||
/// - `vector_clock`: Vector clock from FullState
|
||||
/// - `commands`: Bevy commands for spawning entities
|
||||
/// - `entity_map`: Entity map to populate
|
||||
/// - `type_registry`: Type registry for deserialization
|
||||
/// - `type_registry`: Component type registry for deserialization
|
||||
/// - `node_clock`: Our node's vector clock to update
|
||||
/// - `blob_store`: Optional blob store for resolving blob references
|
||||
/// - `tombstone_registry`: Optional tombstone registry for deletion tracking
|
||||
pub fn apply_full_state(
|
||||
entities: Vec<EntityState>,
|
||||
remote_clock: crate::networking::VectorClock,
|
||||
commands: &mut Commands,
|
||||
entity_map: &mut NetworkEntityMap,
|
||||
type_registry: &TypeRegistry,
|
||||
node_clock: &mut NodeVectorClock,
|
||||
blob_store: Option<&BlobStore>,
|
||||
mut tombstone_registry: Option<&mut crate::networking::TombstoneRegistry>,
|
||||
world: &mut World,
|
||||
type_registry: &crate::persistence::ComponentTypeRegistry,
|
||||
) {
|
||||
use crate::{
|
||||
networking::blob_support::get_component_data,
|
||||
persistence::reflection::deserialize_component,
|
||||
};
|
||||
use crate::networking::blob_support::get_component_data;
|
||||
|
||||
info!("Applying FullState with {} entities", entities.len());
|
||||
|
||||
// Merge the remote vector clock
|
||||
node_clock.clock.merge(&remote_clock);
|
||||
{
|
||||
let mut node_clock = world.resource_mut::<NodeVectorClock>();
|
||||
node_clock.clock.merge(&remote_clock);
|
||||
info!("Vector clock after merge: {:?}", node_clock.clock);
|
||||
}
|
||||
|
||||
let mut spawned_count = 0;
|
||||
let mut tombstoned_count = 0;
|
||||
|
||||
// Spawn all entities and apply their state
|
||||
for entity_state in entities {
|
||||
// Handle deleted entities (tombstones)
|
||||
if entity_state.is_deleted {
|
||||
tombstoned_count += 1;
|
||||
// Record tombstone
|
||||
if let Some(ref mut registry) = tombstone_registry {
|
||||
if let Some(mut registry) = world.get_resource_mut::<crate::networking::TombstoneRegistry>() {
|
||||
registry.record_deletion(
|
||||
entity_state.entity_id,
|
||||
entity_state.owner_node_id,
|
||||
@@ -214,17 +192,43 @@ pub fn apply_full_state(
|
||||
continue;
|
||||
}
|
||||
|
||||
// Spawn entity with NetworkedEntity and Persisted components
|
||||
// This ensures entities received via FullState are persisted locally
|
||||
let entity = commands
|
||||
.spawn((
|
||||
NetworkedEntity::with_id(entity_state.entity_id, entity_state.owner_node_id),
|
||||
crate::persistence::Persisted::with_id(entity_state.entity_id),
|
||||
))
|
||||
.id();
|
||||
// Check if entity already exists in the map
|
||||
let entity = {
|
||||
let entity_map = world.resource::<NetworkEntityMap>();
|
||||
entity_map.get_entity(entity_state.entity_id)
|
||||
};
|
||||
|
||||
// Register in entity map
|
||||
entity_map.insert(entity_state.entity_id, entity);
|
||||
let entity = match entity {
|
||||
Some(existing_entity) => {
|
||||
// Entity already exists - reuse it and update components
|
||||
debug!(
|
||||
"Entity {} already exists (local entity {:?}), updating components",
|
||||
entity_state.entity_id, existing_entity
|
||||
);
|
||||
existing_entity
|
||||
}
|
||||
None => {
|
||||
// Spawn new entity with NetworkedEntity, Persisted, and Synced components
|
||||
// This ensures entities received via FullState are persisted locally and
|
||||
// will auto-sync their Transform if one is added
|
||||
let entity = world
|
||||
.spawn((
|
||||
NetworkedEntity::with_id(entity_state.entity_id, entity_state.owner_node_id),
|
||||
crate::persistence::Persisted::with_id(entity_state.entity_id),
|
||||
Synced,
|
||||
))
|
||||
.id();
|
||||
|
||||
// Register in entity map
|
||||
{
|
||||
let mut entity_map = world.resource_mut::<NetworkEntityMap>();
|
||||
entity_map.insert(entity_state.entity_id, entity);
|
||||
}
|
||||
|
||||
spawned_count += 1;
|
||||
entity
|
||||
}
|
||||
};
|
||||
|
||||
let num_components = entity_state.components.len();
|
||||
|
||||
@@ -234,91 +238,84 @@ pub fn apply_full_state(
let data_bytes = match &component_state.data {
| crate::networking::ComponentData::Inline(bytes) => bytes.clone(),
| blob_ref @ crate::networking::ComponentData::BlobRef { .. } => {
if let Some(store) = blob_store {
let blob_store = world.get_resource::<BlobStore>();
if let Some(store) = blob_store.as_deref() {
match get_component_data(blob_ref, store) {
| Ok(bytes) => bytes,
| Err(e) => {
error!(
"Failed to retrieve blob for {}: {}",
component_state.component_type, e
"Failed to retrieve blob for discriminant {}: {}",
component_state.discriminant, e
);
continue;
},
}
} else {
error!(
"Blob reference for {} but no blob store available",
component_state.component_type
"Blob reference for discriminant {} but no blob store available",
component_state.discriminant
);
continue;
}
},
};

// Use the discriminant directly from ComponentState
let discriminant = component_state.discriminant;

// Deserialize the component
let reflected = match deserialize_component(&data_bytes, type_registry) {
| Ok(r) => r,
let boxed_component = match type_registry.deserialize(discriminant, &data_bytes) {
| Ok(component) => component,
| Err(e) => {
error!(
"Failed to deserialize {}: {}",
component_state.component_type, e
"Failed to deserialize discriminant {}: {}",
discriminant, e
);
continue;
},
};

// Get the type registration
let registration =
match type_registry.get_with_type_path(&component_state.component_type) {
| Some(reg) => reg,
| None => {
error!(
"Component type {} not registered",
component_state.component_type
);
continue;
},
};

// Get ReflectComponent data
let reflect_component = match registration.data::<ReflectComponent>() {
| Some(rc) => rc.clone(),
| None => {
error!(
"Component type {} does not have ReflectComponent data",
component_state.component_type
);
continue;
},
// Get the insert function for this discriminant
let Some(insert_fn) = type_registry.get_insert_fn(discriminant) else {
error!("No insert function for discriminant {}", discriminant);
continue;
};

// Insert the component
let component_type_owned = component_state.component_type.clone();
commands.queue(move |world: &mut World| {
let type_registry_arc = {
let Some(type_registry_res) = world.get_resource::<AppTypeRegistry>() else {
error!("AppTypeRegistry not found in world");
return;
};
type_registry_res.clone()
};

let type_registry = type_registry_arc.read();

if let Ok(mut entity_mut) = world.get_entity_mut(entity) {
reflect_component.insert(&mut entity_mut, &*reflected, &type_registry);
debug!("Applied component {} from FullState", component_type_owned);
}
});
// Insert the component directly
let type_name_for_log = type_registry.get_type_name(discriminant)
.unwrap_or("unknown");
if let Ok(mut entity_mut) = world.get_entity_mut(entity) {
insert_fn(&mut entity_mut, boxed_component);
debug!("Applied component {} from FullState", type_name_for_log);
}
}

debug!(
"Spawned entity {:?} from FullState with {} components",
"Applied entity {:?} from FullState with {} components",
entity_state.entity_id, num_components
);
}

info!("FullState applied successfully");
info!(
"FullState applied successfully: spawned {} entities, skipped {} tombstones",
spawned_count, tombstoned_count
);

// Send SyncRequest to catch any deltas that arrived during FullState transfer
// This implements the "Final Sync" step from RFC 0004 (Session Lifecycle)
if let Some(bridge) = world.get_resource::<GossipBridge>() {
let node_clock = world.resource::<NodeVectorClock>();
let request = crate::networking::operation_log::build_sync_request(
node_clock.node_id,
node_clock.clock.clone(),
);

if let Err(e) = bridge.send(request) {
error!("Failed to send post-FullState SyncRequest: {}", e);
} else {
info!("Sent SyncRequest to catch deltas that arrived during FullState transfer");
}
}
}

/// System to handle JoinRequest messages
@@ -337,7 +334,7 @@ pub fn handle_join_requests_system(
world: &World,
bridge: Option<Res<GossipBridge>>,
networked_entities: Query<(Entity, &NetworkedEntity)>,
type_registry: Res<AppTypeRegistry>,
type_registry: Res<crate::persistence::ComponentTypeRegistryResource>,
node_clock: Res<NodeVectorClock>,
blob_store: Option<Res<BlobStore>>,
) {
@@ -345,7 +342,7 @@ pub fn handle_join_requests_system(
return;
};

let registry = type_registry.read();
let registry = type_registry.0;
let blob_store_ref = blob_store.as_deref();

// Poll for incoming JoinRequest messages
@@ -422,21 +419,17 @@ pub fn handle_join_requests_system(
///
/// This system should run BEFORE receive_and_apply_deltas_system to ensure
/// we're fully initialized before processing deltas.
pub fn handle_full_state_system(
mut commands: Commands,
bridge: Option<Res<GossipBridge>>,
mut entity_map: ResMut<NetworkEntityMap>,
type_registry: Res<AppTypeRegistry>,
mut node_clock: ResMut<NodeVectorClock>,
blob_store: Option<Res<BlobStore>>,
mut tombstone_registry: Option<ResMut<crate::networking::TombstoneRegistry>>,
) {
let Some(bridge) = bridge else {
pub fn handle_full_state_system(world: &mut World) {
// Check if bridge exists
if world.get_resource::<GossipBridge>().is_none() {
return;
};
}

let registry = type_registry.read();
let blob_store_ref = blob_store.as_deref();
let bridge = world.resource::<GossipBridge>().clone();
let type_registry = {
let registry_resource = world.resource::<crate::persistence::ComponentTypeRegistryResource>();
registry_resource.0
};

// Poll for FullState messages
while let Some(message) = bridge.try_recv() {
@@ -450,12 +443,8 @@ pub fn handle_full_state_system(
apply_full_state(
entities,
vector_clock,
&mut commands,
&mut entity_map,
&registry,
&mut node_clock,
blob_store_ref,
tombstone_registry.as_deref_mut(),
world,
type_registry,
);
},
| _ => {
@@ -502,7 +491,7 @@ mod tests {
let request = build_join_request(
node_id,
session_id.clone(),
Some(secret.clone()),
Some(bytes::Bytes::from(secret.clone())),
None,
JoinType::Fresh,
);
@@ -516,7 +505,7 @@ mod tests {
join_type,
} => {
assert_eq!(req_session_id, session_id);
assert_eq!(session_secret, Some(secret));
assert_eq!(session_secret, Some(bytes::Bytes::from(secret)));
assert!(last_known_clock.is_none());
assert!(matches!(join_type, JoinType::Fresh));
},
@@ -582,29 +571,25 @@ mod tests {
#[test]
fn test_apply_full_state_empty() {
let node_id = uuid::Uuid::new_v4();
let mut node_clock = NodeVectorClock::new(node_id);
let remote_clock = VectorClock::new();
let type_registry = crate::persistence::component_registry();

// Create minimal setup for testing
let mut entity_map = NetworkEntityMap::new();
let type_registry = TypeRegistry::new();

// Need a minimal Bevy app for Commands
// Need a minimal Bevy app for testing
let mut app = App::new();
let mut commands = app.world_mut().commands();

// Insert required resources
app.insert_resource(NetworkEntityMap::new());
app.insert_resource(NodeVectorClock::new(node_id));

apply_full_state(
vec![],
remote_clock.clone(),
&mut commands,
&mut entity_map,
&type_registry,
&mut node_clock,
None,
None, // tombstone_registry
app.world_mut(),
type_registry,
);

// Should have merged clocks
let node_clock = app.world().resource::<NodeVectorClock>();
assert_eq!(node_clock.clock, remote_clock);
}
}

@@ -42,15 +42,11 @@ use std::{
};

use bevy::prelude::*;
use serde::{
Deserialize,
Serialize,
};

use uuid::Uuid;

use crate::networking::{
GossipBridge,
NetworkedSelection,
NodeId,
VersionedMessage,
delta_generation::NodeVectorClock,
@@ -64,7 +60,7 @@ pub const LOCK_TIMEOUT: Duration = Duration::from_secs(5);
pub const MAX_LOCKS_PER_NODE: usize = 100;

/// Lock acquisition/release messages
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize, PartialEq, Eq)]
pub enum LockMessage {
/// Request to acquire a lock on an entity
LockRequest {
@@ -337,10 +333,63 @@ impl EntityLockRegistry {
}
}

/// System to acquire locks when entities are selected
///
/// This system detects when entities are added to the global `LocalSelection`
/// resource and attempts to acquire locks on those entities, broadcasting
/// the request to other peers.
pub fn acquire_locks_on_selection_system(
mut registry: ResMut<EntityLockRegistry>,
node_clock: Res<NodeVectorClock>,
bridge: Option<Res<GossipBridge>>,
selection: Res<crate::networking::LocalSelection>,
) {
// Only run when selection changes
if !selection.is_changed() {
return;
}

let node_id = node_clock.node_id;

// Try to acquire locks for all selected entities
for &entity_id in selection.iter() {
let already_locked = registry.is_locked_by(entity_id, node_id, node_id);

// Only try to acquire if we don't already hold the lock
if !already_locked {
match registry.try_acquire(entity_id, node_id) {
Ok(()) => {
info!("Acquired lock on newly selected entity {}", entity_id);

// Broadcast LockRequest
if let Some(ref bridge) = bridge {
let msg = VersionedMessage::new(SyncMessage::Lock(LockMessage::LockRequest {
entity_id,
node_id,
}));

if let Err(e) = bridge.send(msg) {
error!("Failed to broadcast LockRequest on selection: {}", e);
} else {
debug!("LockRequest broadcast successful for entity {}", entity_id);
}
} else {
warn!("No GossipBridge available to broadcast LockRequest");
}
}
Err(holder) => {
warn!("Failed to acquire lock on selected entity {} (held by {})", entity_id, holder);
}
}
}
}
}

/// System to release locks when entities are deselected
///
/// This system detects when entities are removed from selection and releases
/// any locks held on those entities, broadcasting the release to other peers.
/// This system detects when entities are removed from the global `LocalSelection`
/// resource and releases any locks held on those entities, broadcasting the release
/// to other peers.
///
/// Add to your app as an Update system:
/// ```no_run
@@ -353,42 +402,46 @@ pub fn release_locks_on_deselection_system(
mut registry: ResMut<EntityLockRegistry>,
node_clock: Res<NodeVectorClock>,
bridge: Option<Res<GossipBridge>>,
mut selection_query: Query<&mut NetworkedSelection, Changed<NetworkedSelection>>,
selection: Res<crate::networking::LocalSelection>,
) {
// Only run when selection changes
if !selection.is_changed() {
return;
}

let node_id = node_clock.node_id;

for selection in selection_query.iter_mut() {
// Find entities that were previously locked but are no longer selected
let currently_selected: std::collections::HashSet<Uuid> = selection.selected_ids.clone();
// Check all locks held by this node
let locks_to_release: Vec<Uuid> = registry
.locks
.iter()
.filter(|(entity_id, lock)| {
// Release if held by us and not currently selected
lock.holder == node_id && !selection.contains(**entity_id)
})
.map(|(entity_id, _)| *entity_id)
.collect();

// Check all locks held by this node
let locks_to_release: Vec<Uuid> = registry
.locks
.iter()
.filter(|(entity_id, lock)| {
// Release if held by us and not currently selected
lock.holder == node_id && !currently_selected.contains(entity_id)
})
.map(|(entity_id, _)| *entity_id)
.collect();
if !locks_to_release.is_empty() {
info!("Selection cleared, releasing {} locks", locks_to_release.len());
}

// Release each lock and broadcast
for entity_id in locks_to_release {
if registry.release(entity_id, node_id) {
debug!("Releasing lock on deselected entity {}", entity_id);
// Release each lock and broadcast
for entity_id in locks_to_release {
if registry.release(entity_id, node_id) {
info!("Released lock on deselected entity {}", entity_id);

// Broadcast LockRelease
if let Some(ref bridge) = bridge {
let msg = VersionedMessage::new(SyncMessage::Lock(LockMessage::LockRelease {
entity_id,
node_id,
}));
// Broadcast LockRelease
if let Some(ref bridge) = bridge {
let msg = VersionedMessage::new(SyncMessage::Lock(LockMessage::LockRelease {
entity_id,
node_id,
}));

if let Err(e) = bridge.send(msg) {
error!("Failed to broadcast LockRelease on deselection: {}", e);
} else {
info!("Lock released on deselection: entity {}", entity_id);
}
if let Err(e) = bridge.send(msg) {
error!("Failed to broadcast LockRelease on deselection: {}", e);
} else {
info!("Lock released on deselection: entity {}", entity_id);
}
}
}
@@ -665,8 +718,8 @@ mod tests {
];

for message in messages {
let bytes = bincode::serialize(&message).unwrap();
let deserialized: LockMessage = bincode::deserialize(&bytes).unwrap();
let bytes = rkyv::to_bytes::<rkyv::rancor::Failure>(&message).map(|b| b.to_vec()).unwrap();
let deserialized: LockMessage = rkyv::from_bytes::<LockMessage, rkyv::rancor::Failure>(&bytes).unwrap();
assert_eq!(message, deserialized);
}
}

@@ -121,8 +121,8 @@ pub fn should_apply_set(local_op: &ComponentOp, remote_op: &ComponentOp) -> bool

// Use the sequence number from the clocks as a simple tiebreaker
// In a real implementation, we'd use the full node IDs
let local_seq: u64 = local_clock.clocks.values().sum();
let remote_seq: u64 = remote_clock.clocks.values().sum();
let local_seq: u64 = local_clock.timestamps.values().sum();
let remote_seq: u64 = remote_clock.timestamps.values().sum();

// Compare clocks
match compare_operations_lww(
@@ -217,14 +217,14 @@ mod tests {
let data = vec![1, 2, 3];

let op1 = ComponentOp::Set {
component_type: "Transform".to_string(),
data: ComponentData::Inline(data.clone()),
discriminant: 1,
data: ComponentData::Inline(bytes::Bytes::from(data.clone())),
vector_clock: clock.clone(),
};

let op2 = ComponentOp::Set {
component_type: "Transform".to_string(),
data: ComponentData::Inline(data.clone()),
discriminant: 1,
data: ComponentData::Inline(bytes::Bytes::from(data.clone())),
vector_clock: clock,
};

@@ -244,14 +244,14 @@ mod tests {
clock2.increment(node_id);

let op1 = ComponentOp::Set {
component_type: "Transform".to_string(),
data: ComponentData::Inline(vec![1, 2, 3]),
discriminant: 1,
data: ComponentData::Inline(bytes::Bytes::from(vec![1, 2, 3])),
vector_clock: clock1,
};

let op2 = ComponentOp::Set {
component_type: "Transform".to_string(),
data: ComponentData::Inline(vec![4, 5, 6]),
discriminant: 1,
data: ComponentData::Inline(bytes::Bytes::from(vec![4, 5, 6])),
vector_clock: clock2,
};

@@ -5,7 +5,6 @@
//! dispatcher system polls once and routes messages to appropriate handlers.

use bevy::{
ecs::system::SystemState,
prelude::*,
};

@@ -13,14 +12,12 @@ use crate::networking::{
GossipBridge,
JoinType,
NetworkedEntity,
TombstoneRegistry,
VersionedMessage,
apply_entity_delta,
apply_full_state,
blob_support::BlobStore,
build_missing_deltas,
delta_generation::NodeVectorClock,
entity_map::NetworkEntityMap,
messages::SyncMessage,
operation_log::OperationLog,
plugin::SessionSecret,
@@ -67,8 +64,32 @@ pub fn message_dispatcher_system(world: &mut World) {
bridge.drain_incoming()
};

if !messages.is_empty() {
let node_id = world.resource::<GossipBridge>().node_id;
info!(
"[message_dispatcher] Node {} processing {} messages",
node_id,
messages.len()
);
}

// Dispatch each message (bridge is no longer borrowed)
for message in messages {
let node_id = world.resource::<GossipBridge>().node_id;
let msg_type = match &message.message {
SyncMessage::EntityDelta { entity_id, .. } => format!("EntityDelta({})", entity_id),
SyncMessage::JoinRequest { node_id, .. } => format!("JoinRequest({})", node_id),
SyncMessage::FullState { entities, .. } => format!("FullState({} entities)", entities.len()),
SyncMessage::SyncRequest { node_id, .. } => format!("SyncRequest({})", node_id),
SyncMessage::MissingDeltas { deltas } => format!("MissingDeltas({} ops)", deltas.len()),
SyncMessage::Lock(_) => "Lock".to_string(),
};

debug!(
"[message_dispatcher] Node {} dispatching: {} (nonce: {})",
node_id, msg_type, message.nonce
);

dispatch_message(world, message);
}

@@ -239,41 +260,17 @@ fn dispatch_message(world: &mut World, message: crate::networking::VersionedMess
} => {
info!("Received FullState with {} entities", entities.len());

// Use SystemState to properly borrow multiple resources
let mut system_state: SystemState<(
Commands,
ResMut<NetworkEntityMap>,
Res<AppTypeRegistry>,
ResMut<NodeVectorClock>,
Option<Res<BlobStore>>,
Option<ResMut<TombstoneRegistry>>,
)> = SystemState::new(world);
let type_registry = {
let registry_resource = world.resource::<crate::persistence::ComponentTypeRegistryResource>();
registry_resource.0
};

{
let (
mut commands,
mut entity_map,
type_registry,
mut node_clock,
blob_store,
mut tombstone_registry,
) = system_state.get_mut(world);
let registry = type_registry.read();

apply_full_state(
entities,
vector_clock,
&mut commands,
&mut entity_map,
&registry,
&mut node_clock,
blob_store.as_deref(),
tombstone_registry.as_deref_mut(),
);
// registry is dropped here
}

system_state.apply(world);
apply_full_state(
entities,
vector_clock,
world,
type_registry,
);
},

// SyncRequest - peer requesting missing operations
@@ -283,6 +280,18 @@ fn dispatch_message(world: &mut World, message: crate::networking::VersionedMess
} => {
debug!("Received SyncRequest from node {}", requesting_node);

// Merge the requesting node's vector clock into ours
// This ensures we learn about their latest sequence number
{
let mut node_clock = world.resource_mut::<NodeVectorClock>();
node_clock.clock.merge(&their_clock);
debug!(
"Merged SyncRequest clock from node {} (seq: {})",
requesting_node,
their_clock.get(requesting_node)
);
}
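The component-wise merge performed on the node's vector clock above can be sketched in isolation. This is a hedged, dependency-free sketch: the `VectorClock` struct, its `timestamps` map, and the method names mirror what the diff implies (`merge`, `get`, `increment`, a `timestamps` field), but they are illustrative, not the crate's actual API.

```rust
use std::collections::HashMap;

// Minimal vector-clock sketch: merge takes the per-node maximum,
// so after merging we know at least as much as either clock did.
#[derive(Default, Debug, PartialEq)]
struct VectorClock {
    timestamps: HashMap<u64, u64>, // node id -> highest sequence seen
}

impl VectorClock {
    fn increment(&mut self, node: u64) {
        *self.timestamps.entry(node).or_insert(0) += 1;
    }

    fn get(&self, node: u64) -> u64 {
        self.timestamps.get(&node).copied().unwrap_or(0)
    }

    // Component-wise max, as done when a SyncRequest clock arrives.
    fn merge(&mut self, other: &VectorClock) {
        for (&node, &seq) in &other.timestamps {
            let entry = self.timestamps.entry(node).or_insert(0);
            *entry = (*entry).max(seq);
        }
    }
}

fn main() {
    let mut ours = VectorClock::default();
    ours.increment(1); // node 1 at seq 1

    let mut theirs = VectorClock::default();
    theirs.increment(2);
    theirs.increment(2); // node 2 at seq 2

    ours.merge(&theirs);
    assert_eq!(ours.get(1), 1); // our own entry is untouched
    assert_eq!(ours.get(2), 2); // we learned node 2's latest seq
}
```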

if let Some(op_log) = world.get_resource::<OperationLog>() {
// Find operations they're missing
let missing_deltas = op_log.get_all_operations_newer_than(&their_clock);
@@ -433,7 +442,7 @@ fn dispatch_message(world: &mut World, message: crate::networking::VersionedMess
fn build_full_state_from_data(
world: &World,
networked_entities: &[(Entity, &NetworkedEntity)],
type_registry: &bevy::reflect::TypeRegistry,
_type_registry: &bevy::reflect::TypeRegistry,
node_clock: &NodeVectorClock,
blob_store: Option<&BlobStore>,
) -> crate::networking::VersionedMessage {
@@ -445,7 +454,6 @@ fn build_full_state_from_data(
EntityState,
},
},
persistence::reflection::serialize_component,
};

// Get tombstone registry to filter out deleted entities
@@ -464,47 +472,38 @@ fn build_full_state_from_data(
continue;
}
}
let entity_ref = world.entity(*entity);
let mut components = Vec::new();

// Iterate over all type registrations to find components
for registration in type_registry.iter() {
// Skip if no ReflectComponent data
let Some(reflect_component) = registration.data::<ReflectComponent>() else {
continue;
};
// Get component type registry
let type_registry_res = world.resource::<crate::persistence::ComponentTypeRegistryResource>();
let component_registry = type_registry_res.0;

let type_path = registration.type_info().type_path();
// Serialize all registered components on this entity
let serialized_components = component_registry.serialize_entity_components(world, *entity);

for (discriminant, type_path, serialized) in serialized_components {
// Skip networked wrapper components
if type_path.ends_with("::NetworkedEntity") ||
type_path.ends_with("::NetworkedTransform") ||
type_path.ends_with("::NetworkedSelection") ||
type_path.ends_with("::NetworkedDrawingPath")
{
continue;
}

// Try to reflect this component from the entity
if let Some(reflected) = reflect_component.reflect(entity_ref) {
// Serialize the component
if let Ok(serialized) = serialize_component(reflected, type_registry) {
// Create component data (inline or blob)
let data = if let Some(store) = blob_store {
match create_component_data(serialized, store) {
| Ok(d) => d,
| Err(_) => continue,
}
} else {
crate::networking::ComponentData::Inline(serialized)
};

components.push(ComponentState {
component_type: type_path.to_string(),
data,
});
// Create component data (inline or blob)
let data = if let Some(store) = blob_store {
match create_component_data(serialized, store) {
| Ok(d) => d,
| Err(_) => continue,
}
}
} else {
crate::networking::ComponentData::Inline(serialized)
};

components.push(ComponentState {
discriminant,
data,
});
}

entities.push(EntityState {
@@ -517,8 +516,10 @@ fn build_full_state_from_data(
}

info!(
"Built FullState with {} entities for new peer",
entities.len()
"Built FullState with {} entities ({} total queried, {} tombstoned) for new peer",
entities.len(),
networked_entities.len(),
networked_entities.len() - entities.len()
);

crate::networking::VersionedMessage::new(SyncMessage::FullState {

@@ -3,10 +3,7 @@
//! This module defines the protocol messages used for distributed
//! synchronization according to RFC 0001.

use serde::{
Deserialize,
Serialize,
};

use crate::networking::{
locks::LockMessage,
@@ -22,13 +19,21 @@ use crate::networking::{
///
/// All messages sent over the network are wrapped in this envelope to support
/// protocol version negotiation and future compatibility.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub struct VersionedMessage {
/// Protocol version (currently 1)
pub version: u32,

/// The actual sync message
pub message: SyncMessage,

/// Nonce for selective deduplication control
///
/// - For Lock messages: Unique nonce (counter + timestamp hash) to prevent
///   iroh-gossip deduplication, allowing repeated heartbeats.
/// - For other messages: Constant nonce (0) to enable content-based deduplication
///   by iroh-gossip, preventing feedback loops.
pub nonce: u32,
}

impl VersionedMessage {
@@ -36,16 +41,50 @@ impl VersionedMessage {
pub const CURRENT_VERSION: u32 = 1;

/// Create a new versioned message with the current protocol version
///
/// For Lock messages: Generates a unique nonce to prevent deduplication, since
/// lock heartbeats need to be sent repeatedly even with identical content.
///
/// For other messages: Uses a constant nonce (0) to enable iroh-gossip's
/// content-based deduplication. This prevents feedback loops where the same
/// EntityDelta gets broadcast repeatedly.
pub fn new(message: SyncMessage) -> Self {
// Only generate unique nonces for Lock messages (heartbeats need to bypass dedup)
let nonce = if matches!(message, SyncMessage::Lock(_)) {
use std::hash::Hasher;
use std::sync::atomic::{AtomicU32, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

// Per-node rolling counter for sequential uniqueness
static COUNTER: AtomicU32 = AtomicU32::new(0);
let counter = COUNTER.fetch_add(1, Ordering::Relaxed);

// Millisecond timestamp for temporal uniqueness
let timestamp_millis = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_millis() as u32;

// Hash counter + timestamp for final nonce
let mut hasher = rustc_hash::FxHasher::default();
hasher.write_u32(counter);
hasher.write_u32(timestamp_millis);
hasher.finish() as u32
} else {
// Use constant nonce for all other messages to enable content deduplication
0
};

Self {
version: Self::CURRENT_VERSION,
message,
nonce,
}
}
}
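The nonce policy added above (unique nonce for lock heartbeats, constant 0 otherwise) can be sketched standalone. This is an illustrative sketch only: `Msg` stands in for `SyncMessage`, and std's `DefaultHasher` replaces `rustc_hash::FxHasher` so the snippet needs no external crates.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;
use std::sync::atomic::{AtomicU32, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

// Stand-in for SyncMessage: only the lock/non-lock split matters here.
enum Msg {
    Lock,
    Other,
}

static COUNTER: AtomicU32 = AtomicU32::new(0);

fn nonce_for(msg: &Msg) -> u32 {
    match msg {
        // Lock heartbeats must bypass gossip dedup: hash a rolling counter
        // with a millisecond timestamp so repeated identical payloads differ.
        Msg::Lock => {
            let counter = COUNTER.fetch_add(1, Ordering::Relaxed);
            let millis = SystemTime::now()
                .duration_since(UNIX_EPOCH)
                .unwrap_or_default()
                .as_millis() as u32;
            let mut hasher = DefaultHasher::new();
            hasher.write_u32(counter);
            hasher.write_u32(millis);
            hasher.finish() as u32
        },
        // Everything else keeps nonce 0, so iroh-gossip's content-based
        // deduplication suppresses rebroadcast feedback loops.
        Msg::Other => 0,
    }
}

fn main() {
    assert_eq!(nonce_for(&Msg::Other), 0);
    let a = nonce_for(&Msg::Lock);
    let b = nonce_for(&Msg::Lock);
    // The counter guarantees the two lock nonces hash different inputs.
    assert_ne!(a, b);
}
```

Note the design trade-off this encodes: dedup is an anti-loop mechanism for state messages, but a liveness hazard for heartbeats, so the nonce is the single escape hatch.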

/// Join request type - distinguishes fresh joins from rejoin attempts
#[derive(Debug, Clone, Serialize, Deserialize)]
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub enum JoinType {
/// Fresh join - never connected to this session before
Fresh,
@@ -70,7 +109,7 @@ pub enum JoinType {
/// 2. **Normal Operation**: Peers broadcast `EntityDelta` on changes
/// 3. **Anti-Entropy**: Periodic `SyncRequest` to detect missing operations
/// 4. **Recovery**: `MissingDeltas` sent in response to `SyncRequest`
#[derive(Debug, Clone, Serialize, Deserialize)]
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub enum SyncMessage {
/// Request to join the network and receive full state
///
@@ -85,7 +124,7 @@ pub enum SyncMessage {
session_id: SessionId,

/// Optional session secret for authentication
session_secret: Option<Vec<u8>>,
session_secret: Option<bytes::Bytes>,

/// Vector clock from when we last left this session
/// None = fresh join, Some = rejoin
@@ -156,7 +195,7 @@ pub enum SyncMessage {
/// Complete state of a single entity
///
/// Used in `FullState` messages to transfer all components of an entity.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub struct EntityState {
/// Network ID of the entity
pub entity_id: uuid::Uuid,
@@ -176,29 +215,28 @@ pub struct EntityState {

/// State of a single component
///
/// Contains the component type and its serialized data.
#[derive(Debug, Clone, Serialize, Deserialize)]
/// Contains the component discriminant and its serialized data.
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub struct ComponentState {
/// Type path of the component (e.g.,
/// "bevy_transform::components::Transform")
pub component_type: String,
/// Discriminant identifying the component type
pub discriminant: u16,

/// Serialized component data (bincode)
/// Serialized component data (rkyv)
pub data: ComponentData,
}

/// Component data - either inline or a blob reference
///
/// Components larger than 64KB are stored as blobs and referenced by hash.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize, PartialEq, Eq)]
pub enum ComponentData {
/// Inline data for small components (<64KB)
Inline(Vec<u8>),
Inline(bytes::Bytes),

/// Reference to a blob for large components (>64KB)
BlobRef {
/// iroh-blobs hash
hash: Vec<u8>,
hash: bytes::Bytes,

/// Size of the blob in bytes
size: u64,
@@ -210,11 +248,11 @@ impl ComponentData {
pub const BLOB_THRESHOLD: usize = 64 * 1024;

/// Create component data, automatically choosing inline vs blob
pub fn new(data: Vec<u8>) -> Self {
pub fn new(data: bytes::Bytes) -> Self {
if data.len() > Self::BLOB_THRESHOLD {
// Will be populated later when uploaded to iroh-blobs
Self::BlobRef {
hash: Vec::new(),
hash: bytes::Bytes::new(),
size: data.len() as u64,
}
} else {
@@ -248,7 +286,7 @@ impl ComponentData {
///
/// This struct exists because EntityDelta is defined as an enum variant
/// but we sometimes need to work with it as a standalone type.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub struct EntityDelta {
/// Network ID of the entity being updated
pub entity_id: uuid::Uuid,
@@ -313,7 +351,7 @@ mod tests {
#[test]
fn test_component_data_inline() {
let data = vec![1, 2, 3, 4];
let component_data = ComponentData::new(data.clone());
let component_data = ComponentData::new(bytes::Bytes::from(data.clone()));

assert!(!component_data.is_blob());
assert_eq!(component_data.as_inline(), Some(data.as_slice()));
@@ -323,7 +361,7 @@ mod tests {
fn test_component_data_blob() {
// Create data larger than threshold
let data = vec![0u8; ComponentData::BLOB_THRESHOLD + 1];
let component_data = ComponentData::new(data.clone());
let component_data = ComponentData::new(bytes::Bytes::from(data.clone()));

assert!(component_data.is_blob());
assert_eq!(component_data.as_inline(), None);
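The 64 KiB inline-vs-blob split that these tests exercise can be sketched without the surrounding crate. Illustrative only: the real `ComponentData::BlobRef` also carries an iroh-blobs `hash` (filled in on upload) and the real constructor takes `bytes::Bytes`; here a plain `Vec<u8>` keeps the snippet dependency-free.

```rust
// Components at or under the threshold travel inline in the message;
// larger payloads become a blob reference resolved out of band.
const BLOB_THRESHOLD: usize = 64 * 1024;

#[derive(Debug)]
enum ComponentData {
    Inline(Vec<u8>),
    BlobRef { size: u64 }, // real type also stores the blob hash
}

fn new_component_data(data: Vec<u8>) -> ComponentData {
    if data.len() > BLOB_THRESHOLD {
        ComponentData::BlobRef { size: data.len() as u64 }
    } else {
        ComponentData::Inline(data)
    }
}

fn main() {
    // Small payloads stay inline.
    assert!(matches!(new_component_data(vec![0; 4]), ComponentData::Inline(_)));
    // One byte over the threshold switches to a blob reference.
    assert!(matches!(
        new_component_data(vec![0; BLOB_THRESHOLD + 1]),
        ComponentData::BlobRef { size } if size == (BLOB_THRESHOLD + 1) as u64
    ));
}
```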
@@ -343,7 +381,7 @@ mod tests {
}

#[test]
fn test_message_serialization() -> bincode::Result<()> {
fn test_message_serialization() -> anyhow::Result<()> {
let node_id = uuid::Uuid::new_v4();
let session_id = SessionId::new();
let message = SyncMessage::JoinRequest {
@@ -355,8 +393,8 @@ mod tests {
};

let versioned = VersionedMessage::new(message);
let bytes = bincode::serialize(&versioned)?;
let deserialized: VersionedMessage = bincode::deserialize(&bytes)?;
let bytes = rkyv::to_bytes::<rkyv::rancor::Failure>(&versioned).map(|b| b.to_vec())?;
let deserialized: VersionedMessage = rkyv::from_bytes::<VersionedMessage, rkyv::rancor::Failure>(&bytes)?;

assert_eq!(deserialized.version, versioned.version);

@@ -364,7 +402,7 @@ mod tests {
}

#[test]
fn test_full_state_serialization() -> bincode::Result<()> {
fn test_full_state_serialization() -> anyhow::Result<()> {
let entity_id = uuid::Uuid::new_v4();
let owner_node = uuid::Uuid::new_v4();

@@ -381,8 +419,8 @@ mod tests {
vector_clock: VectorClock::new(),
};

let bytes = bincode::serialize(&message)?;
let _deserialized: SyncMessage = bincode::deserialize(&bytes)?;
let bytes = rkyv::to_bytes::<rkyv::rancor::Failure>(&message).map(|b| b.to_vec())?;
let _deserialized: SyncMessage = rkyv::from_bytes::<SyncMessage, rkyv::rancor::Failure>(&bytes)?;

Ok(())
}
@@ -392,8 +430,8 @@ mod tests {
let join_type = JoinType::Fresh;

// Fresh join should serialize correctly
let bytes = bincode::serialize(&join_type).unwrap();
let deserialized: JoinType = bincode::deserialize(&bytes).unwrap();
let bytes = rkyv::to_bytes::<rkyv::rancor::Failure>(&join_type).map(|b| b.to_vec()).unwrap();
let deserialized: JoinType = rkyv::from_bytes::<JoinType, rkyv::rancor::Failure>(&bytes).unwrap();

assert!(matches!(deserialized, JoinType::Fresh));
}
@@ -406,8 +444,8 @@ mod tests {
};

// Rejoin should serialize correctly
let bytes = bincode::serialize(&join_type).unwrap();
let deserialized: JoinType = bincode::deserialize(&bytes).unwrap();
|
||||
let bytes = rkyv::to_bytes::<rkyv::rancor::Failure>(&join_type).map(|b| b.to_vec()).unwrap();
|
||||
let deserialized: JoinType = rkyv::from_bytes::<JoinType, rkyv::rancor::Failure>(&bytes).unwrap();
|
||||
|
||||
match deserialized {
|
||||
| JoinType::Rejoin {
|
||||
@@ -434,8 +472,8 @@ mod tests {
|
||||
join_type: JoinType::Fresh,
|
||||
};
|
||||
|
||||
let bytes = bincode::serialize(&message).unwrap();
|
||||
let deserialized: SyncMessage = bincode::deserialize(&bytes).unwrap();
|
||||
let bytes = rkyv::to_bytes::<rkyv::rancor::Failure>(&message).map(|b| b.to_vec()).unwrap();
|
||||
let deserialized: SyncMessage = rkyv::from_bytes::<SyncMessage, rkyv::rancor::Failure>(&bytes).unwrap();
|
||||
|
||||
match deserialized {
|
||||
| SyncMessage::JoinRequest {
|
||||
@@ -467,8 +505,8 @@ mod tests {
|
||||
},
|
||||
};
|
||||
|
||||
let bytes = bincode::serialize(&message).unwrap();
|
||||
let deserialized: SyncMessage = bincode::deserialize(&bytes).unwrap();
|
||||
let bytes = rkyv::to_bytes::<rkyv::rancor::Failure>(&message).map(|b| b.to_vec()).unwrap();
|
||||
let deserialized: SyncMessage = rkyv::from_bytes::<SyncMessage, rkyv::rancor::Failure>(&bytes).unwrap();
|
||||
|
||||
match deserialized {
|
||||
| SyncMessage::JoinRequest {
|
||||
@@ -484,7 +522,7 @@ mod tests {
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_missing_deltas_serialization() -> bincode::Result<()> {
|
||||
fn test_missing_deltas_serialization() -> anyhow::Result<()> {
|
||||
// Test that MissingDeltas message serializes correctly
|
||||
let node_id = uuid::Uuid::new_v4();
|
||||
let entity_id = uuid::Uuid::new_v4();
|
||||
@@ -501,8 +539,8 @@ mod tests {
|
||||
deltas: vec![delta],
|
||||
};
|
||||
|
||||
let bytes = bincode::serialize(&message)?;
|
||||
let deserialized: SyncMessage = bincode::deserialize(&bytes)?;
|
||||
let bytes = rkyv::to_bytes::<rkyv::rancor::Failure>(&message).map(|b| b.to_vec())?;
|
||||
let deserialized: SyncMessage = rkyv::from_bytes::<SyncMessage, rkyv::rancor::Failure>(&bytes)?;
|
||||
|
||||
match deserialized {
|
||||
| SyncMessage::MissingDeltas { deltas } => {
|
||||
|
||||
@@ -27,7 +27,7 @@
//! let builder = ComponentOpBuilder::new(node_id, clock.clone());
//! let op = builder.set(
//! "Transform".to_string(),
//! ComponentData::Inline(vec![1, 2, 3]),
//! ComponentData::Inline(bytes::Bytes::from(vec![1, 2, 3])),
//! );
//! ```

@@ -36,6 +36,7 @@ mod auth;
mod blob_support;
mod change_detection;
mod components;
mod control;
mod delta_generation;
mod entity_map;
mod error;
@@ -53,6 +54,7 @@ mod plugin;
mod rga;
mod session;
mod session_lifecycle;
mod session_sync;
mod sync_component;
mod tombstones;
mod vector_clock;
@@ -62,6 +64,7 @@ pub use auth::*;
pub use blob_support::*;
pub use change_detection::*;
pub use components::*;
pub use control::*;
pub use delta_generation::*;
pub use entity_map::*;
pub use error::*;
@@ -79,6 +82,7 @@ pub use plugin::*;
pub use rga::*;
pub use session::*;
pub use session_lifecycle::*;
pub use session_sync::*;
pub use sync_component::*;
pub use tombstones::*;
pub use vector_clock::*;
@@ -118,11 +122,13 @@ pub fn spawn_networked_entity(
) -> bevy::prelude::Entity {
use bevy::prelude::*;

// Spawn with both NetworkedEntity and Persisted components
// Spawn with NetworkedEntity, Persisted, and Synced components
// The Synced marker triggers auto-insert of NetworkedTransform if entity has Transform
let entity = world
.spawn((
NetworkedEntity::with_id(entity_id, node_id),
crate::persistence::Persisted::with_id(entity_id),
Synced,
))
.id();


@@ -3,75 +3,24 @@
//! This module provides utilities to convert Bevy component changes into
//! ComponentOp operations that can be synchronized across the network.

use bevy::{
prelude::*,
reflect::TypeRegistry,
};
use bevy::prelude::*;

use crate::{
networking::{
blob_support::{
BlobStore,
create_component_data,
},
error::Result,
messages::ComponentData,
operations::{
ComponentOp,
ComponentOpBuilder,
},
vector_clock::{
NodeId,
VectorClock,
},
use crate::networking::{
blob_support::{
BlobStore,
create_component_data,
},
messages::ComponentData,
operations::ComponentOp,
vector_clock::{
NodeId,
VectorClock,
},
persistence::reflection::serialize_component_typed,
};

/// Build a Set operation (LWW) from a component
///
/// Serializes the component using Bevy's reflection system and creates a
/// ComponentOp::Set for Last-Write-Wins synchronization. Automatically uses
/// blob storage for components >64KB.
///
/// # Parameters
///
/// - `component`: The component to serialize
/// - `component_type`: Type path string
/// - `node_id`: Our node ID
/// - `vector_clock`: Current vector clock
/// - `type_registry`: Bevy's type registry
/// - `blob_store`: Optional blob store for large components
///
/// # Returns
///
/// A ComponentOp::Set ready to be broadcast
pub fn build_set_operation(
component: &dyn Reflect,
component_type: String,
node_id: NodeId,
vector_clock: VectorClock,
type_registry: &TypeRegistry,
blob_store: Option<&BlobStore>,
) -> Result<ComponentOp> {
// Serialize the component
let serialized = serialize_component_typed(component, type_registry)?;

// Create component data (inline or blob)
let data = if let Some(store) = blob_store {
create_component_data(serialized, store)?
} else {
ComponentData::Inline(serialized)
};

// Build the operation
let builder = ComponentOpBuilder::new(node_id, vector_clock);
Ok(builder.set(component_type, data))
}

/// Build Set operations for all components on an entity
///
/// This iterates over all components with reflection data and creates Set
/// This iterates over all registered Synced components and creates Set
/// operations for each one. Automatically uses blob storage for large
/// components.
///
@@ -81,7 +30,7 @@ pub fn build_set_operation(
/// - `world`: Bevy world
/// - `node_id`: Our node ID
/// - `vector_clock`: Current vector clock
/// - `type_registry`: Bevy's type registry
/// - `type_registry`: Component type registry (for Synced components)
/// - `blob_store`: Optional blob store for large components
///
/// # Returns
@@ -92,64 +41,41 @@ pub fn build_entity_operations(
world: &World,
node_id: NodeId,
vector_clock: VectorClock,
type_registry: &TypeRegistry,
type_registry: &crate::persistence::ComponentTypeRegistry,
blob_store: Option<&BlobStore>,
) -> Vec<ComponentOp> {
let mut operations = Vec::new();
let entity_ref = world.entity(entity);

debug!(
"build_entity_operations: Building operations for entity {:?}",
entity
);

// Iterate over all type registrations
for registration in type_registry.iter() {
// Skip if no ReflectComponent data
let Some(reflect_component) = registration.data::<ReflectComponent>() else {
continue;
// Serialize all Synced components on this entity
let serialized_components = type_registry.serialize_entity_components(world, entity);

for (discriminant, _type_path, serialized) in serialized_components {
// Create component data (inline or blob)
let data = if let Some(store) = blob_store {
if let Ok(component_data) = create_component_data(serialized, store) {
component_data
} else {
continue; // Skip this component if blob storage fails
}
} else {
ComponentData::Inline(serialized)
};

// Get the type path
let type_path = registration.type_info().type_path();
// Build the operation
// Use the vector_clock as-is - it's already been incremented by the caller (delta_generation.rs:116)
// All operations in the same EntityDelta share the same vector clock (same logical timestamp)
operations.push(ComponentOp::Set {
discriminant,
data,
vector_clock: vector_clock.clone(),
});

// Skip certain components
if type_path.ends_with("::NetworkedEntity") ||
type_path.ends_with("::NetworkedTransform") ||
type_path.ends_with("::NetworkedSelection") ||
type_path.ends_with("::NetworkedDrawingPath")
{
continue;
}

// Try to reflect this component from the entity
if let Some(reflected) = reflect_component.reflect(entity_ref) {
// Serialize the component
if let Ok(serialized) = serialize_component_typed(reflected, type_registry) {
// Create component data (inline or blob)
let data = if let Some(store) = blob_store {
if let Ok(component_data) = create_component_data(serialized, store) {
component_data
} else {
continue; // Skip this component if blob storage fails
}
} else {
ComponentData::Inline(serialized)
};

// Build the operation
let mut clock = vector_clock.clone();
clock.increment(node_id);

operations.push(ComponentOp::Set {
component_type: type_path.to_string(),
data,
vector_clock: clock.clone(),
});

debug!(" ✓ Added Set operation for {}", type_path);
}
}
debug!(" ✓ Added Set operation for discriminant {}", discriminant);
}

debug!(
@@ -160,114 +86,66 @@ pub fn build_entity_operations(
operations
}

/// Build a Set operation for Transform component specifically
///
/// This is a helper for the common case of synchronizing Transform changes.
///
/// # Example
///
/// ```
/// use bevy::prelude::*;
/// use libmarathon::networking::{
/// VectorClock,
/// build_transform_operation,
/// };
/// use uuid::Uuid;
///
/// # fn example(transform: &Transform, type_registry: &bevy::reflect::TypeRegistry) {
/// let node_id = Uuid::new_v4();
/// let clock = VectorClock::new();
///
/// let op = build_transform_operation(transform, node_id, clock, type_registry, None).unwrap();
/// # }
/// ```
pub fn build_transform_operation(
transform: &Transform,
node_id: NodeId,
vector_clock: VectorClock,
type_registry: &TypeRegistry,
blob_store: Option<&BlobStore>,
) -> Result<ComponentOp> {
// Use reflection to serialize Transform
let serialized = serialize_component_typed(transform.as_reflect(), type_registry)?;

// Create component data (inline or blob)
let data = if let Some(store) = blob_store {
create_component_data(serialized, store)?
} else {
ComponentData::Inline(serialized)
};

let builder = ComponentOpBuilder::new(node_id, vector_clock);
Ok(builder.set(
"bevy_transform::components::transform::Transform".to_string(),
data,
))
}

#[cfg(test)]
mod tests {
use super::*;
use bevy::prelude::*;
use crate::networking::NetworkedEntity;
use crate::persistence::{ComponentTypeRegistry, Persisted};

#[test]
fn test_build_transform_operation() {
let mut type_registry = TypeRegistry::new();
type_registry.register::<Transform>();

let transform = Transform::default();
let node_id = uuid::Uuid::new_v4();
let clock = VectorClock::new();

let op =
build_transform_operation(&transform, node_id, clock, &type_registry, None).unwrap();

assert!(op.is_set());
assert_eq!(
op.component_type(),
Some("bevy_transform::components::transform::Transform")
);
assert_eq!(op.vector_clock().get(node_id), 1);
}

#[test]
fn test_build_entity_operations() {
fn test_operations_use_passed_vector_clock_without_extra_increment() {
// Setup: Create a minimal world with an entity
let mut world = World::new();
let mut type_registry = TypeRegistry::new();

// Register Transform
type_registry.register::<Transform>();

// Spawn entity with Transform
let entity = world.spawn(Transform::from_xyz(1.0, 2.0, 3.0)).id();

let node_id = uuid::Uuid::new_v4();
let clock = VectorClock::new();

let ops = build_entity_operations(entity, &world, node_id, clock, &type_registry, None);
// Use the global registry (Transform is already registered via inventory)
let registry = ComponentTypeRegistry::init();

// Should have at least Transform operation
assert!(!ops.is_empty());
assert!(ops.iter().all(|op| op.is_set()));
}
// Create test entity with Transform
let entity_id = uuid::Uuid::new_v4();
let entity = world.spawn((
NetworkedEntity::with_id(entity_id, node_id),
Persisted::with_id(entity_id),
Transform::from_xyz(1.0, 2.0, 3.0),
)).id();

#[test]
fn test_vector_clock_increment() {
let mut type_registry = TypeRegistry::new();
type_registry.register::<Transform>();
// Create a vector clock that's already been ticked
let mut vector_clock = VectorClock::new();
vector_clock.increment(node_id); // Simulate the tick that delta_generation does
let expected_clock = vector_clock.clone();

let transform = Transform::default();
let node_id = uuid::Uuid::new_v4();
let mut clock = VectorClock::new();
// Build operations
let operations = build_entity_operations(
entity,
&world,
node_id,
vector_clock.clone(),
&registry,
None,
);

let op1 =
build_transform_operation(&transform, node_id, clock.clone(), &type_registry, None)
.unwrap();
assert_eq!(op1.vector_clock().get(node_id), 1);
// Verify: All operations should use the EXACT clock that was passed in
assert!(!operations.is_empty(), "Should have created at least one operation");

clock.increment(node_id);
let op2 =
build_transform_operation(&transform, node_id, clock.clone(), &type_registry, None)
.unwrap();
assert_eq!(op2.vector_clock().get(node_id), 2);
for op in &operations {
if let ComponentOp::Set { vector_clock: op_clock, .. } = op {
assert_eq!(
*op_clock, expected_clock,
"Operation clock should match the input clock exactly. \
The bug was that operation_builder would increment the clock again, \
causing EntityDelta.vector_clock and ComponentOp.vector_clock to be misaligned."
);

// Verify the sequence number matches
let op_seq = op_clock.get(node_id);
let expected_seq = expected_clock.get(node_id);
assert_eq!(
op_seq, expected_seq,
"Operation sequence should be {} (same as input clock), but got {}",
expected_seq, op_seq
);
}
}
}
}

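The invariant the test above pins down - every `ComponentOp` in one `EntityDelta` must carry exactly the clock the caller already ticked, with no extra increment - can be illustrated with a toy vector clock. The `HashMap`-based type below is a sketch assuming only the `new`/`increment`/`get` calls visible in the diff, not the crate's real `VectorClock`.

```rust
use std::collections::HashMap;

// Toy vector clock: node id -> logical counter.
#[derive(Clone, PartialEq, Debug)]
struct VectorClock(HashMap<u64, u64>);

impl VectorClock {
    fn new() -> Self { VectorClock(HashMap::new()) }
    fn increment(&mut self, node: u64) { *self.0.entry(node).or_insert(0) += 1; }
    fn get(&self, node: u64) -> u64 { *self.0.get(&node).unwrap_or(&0) }
}

fn main() {
    let node = 42u64;
    let mut clock = VectorClock::new();
    clock.increment(node); // the single tick done by the delta-generation caller
    let expected = clock.clone();

    // All operations in one EntityDelta clone the same clock; none re-increment it.
    let op_clocks = vec![clock.clone(), clock.clone(), clock.clone()];
    for op_clock in &op_clocks {
        assert_eq!(*op_clock, expected);
        assert_eq!(op_clock.get(node), 1); // still 1, never 2
    }
}
```

Re-incrementing inside the builder would leave the per-operation clocks one step ahead of the delta's clock, which is exactly the misalignment the new test guards against.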
@@ -351,7 +351,7 @@ pub fn handle_missing_deltas_system(world: &mut World) {
/// adaptive sync intervals based on network conditions.
pub fn periodic_sync_system(
bridge: Option<Res<GossipBridge>>,
node_clock: Res<NodeVectorClock>,
mut node_clock: ResMut<NodeVectorClock>,
time: Res<Time>,
mut last_sync: Local<f32>,
) {
@@ -369,6 +369,9 @@ pub fn periodic_sync_system(

debug!("Sending periodic SyncRequest for anti-entropy");

// Increment clock for sending SyncRequest (this is a local operation)
node_clock.tick();

let request = build_sync_request(node_clock.node_id, node_clock.clock.clone());
if let Err(e) = bridge.send(request) {
error!("Failed to send SyncRequest: {}", e);

|
||||
//! on components in the distributed system. Each operation type corresponds to
|
||||
//! a specific CRDT merge strategy.
|
||||
|
||||
use serde::{
|
||||
Deserialize,
|
||||
Serialize,
|
||||
};
|
||||
|
||||
|
||||
use crate::networking::{
|
||||
messages::ComponentData,
|
||||
@@ -39,7 +36,7 @@ use crate::networking::{
|
||||
/// - Maintains ordering across concurrent inserts
|
||||
/// - Uses RGA (Replicated Growable Array) algorithm
|
||||
/// - Example: Collaborative drawing paths
|
||||
#[derive(Debug, Clone, Serialize, Deserialize)]
|
||||
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
|
||||
pub enum ComponentOp {
|
||||
/// Set a component value (Last-Write-Wins)
|
||||
///
|
||||
@@ -50,8 +47,8 @@ pub enum ComponentOp {
|
||||
/// The data field can be either inline (for small components) or a blob
|
||||
/// reference (for components >64KB).
|
||||
Set {
|
||||
/// Type path of the component
|
||||
component_type: String,
|
||||
/// Discriminant identifying the component type
|
||||
discriminant: u16,
|
||||
|
||||
/// Component data (inline or blob reference)
|
||||
data: ComponentData,
|
||||
@@ -65,14 +62,14 @@ pub enum ComponentOp {
|
||||
/// Adds an element to a set that supports concurrent add/remove. Each add
|
||||
/// has a unique ID so that removes can reference specific adds.
|
||||
SetAdd {
|
||||
/// Type path of the component
|
||||
component_type: String,
|
||||
/// Discriminant identifying the component type
|
||||
discriminant: u16,
|
||||
|
||||
/// Unique ID for this add operation
|
||||
operation_id: uuid::Uuid,
|
||||
|
||||
/// Element being added (serialized)
|
||||
element: Vec<u8>,
|
||||
element: bytes::Bytes,
|
||||
|
||||
/// Vector clock when this add was created
|
||||
vector_clock: VectorClock,
|
||||
@@ -83,8 +80,8 @@ pub enum ComponentOp {
|
||||
/// Removes an element by referencing the add operation IDs that added it.
|
||||
/// If concurrent with an add, the add wins (observed-remove semantics).
|
||||
SetRemove {
|
||||
/// Type path of the component
|
||||
component_type: String,
|
||||
/// Discriminant identifying the component type
|
||||
discriminant: u16,
|
||||
|
||||
/// IDs of the add operations being removed
|
||||
removed_ids: Vec<uuid::Uuid>,
|
||||
@@ -99,8 +96,8 @@ pub enum ComponentOp {
|
||||
/// (Replicated Growable Array) to maintain consistent ordering across
|
||||
/// concurrent inserts.
|
||||
SequenceInsert {
|
||||
/// Type path of the component
|
||||
component_type: String,
|
||||
/// Discriminant identifying the component type
|
||||
discriminant: u16,
|
||||
|
||||
/// Unique ID for this insert operation
|
||||
operation_id: uuid::Uuid,
|
||||
@@ -109,7 +106,7 @@ pub enum ComponentOp {
|
||||
after_id: Option<uuid::Uuid>,
|
||||
|
||||
/// Element being inserted (serialized)
|
||||
element: Vec<u8>,
|
||||
element: bytes::Bytes,
|
||||
|
||||
/// Vector clock when this insert was created
|
||||
vector_clock: VectorClock,
|
||||
@@ -120,8 +117,8 @@ pub enum ComponentOp {
|
||||
/// Marks an element as deleted in the sequence. The element remains in the
|
||||
/// structure (tombstone) to preserve ordering for concurrent operations.
|
||||
SequenceDelete {
|
||||
/// Type path of the component
|
||||
component_type: String,
|
||||
/// Discriminant identifying the component type
|
||||
discriminant: u16,
|
||||
|
||||
/// ID of the element to delete
|
||||
element_id: uuid::Uuid,
|
||||
@@ -141,14 +138,14 @@ pub enum ComponentOp {
|
||||
}
|
||||
|
||||
impl ComponentOp {
|
||||
/// Get the component type for this operation
|
||||
pub fn component_type(&self) -> Option<&str> {
|
||||
/// Get the component discriminant for this operation
|
||||
pub fn discriminant(&self) -> Option<u16> {
|
||||
match self {
|
||||
| ComponentOp::Set { component_type, .. } |
|
||||
ComponentOp::SetAdd { component_type, .. } |
|
||||
ComponentOp::SetRemove { component_type, .. } |
|
||||
ComponentOp::SequenceInsert { component_type, .. } |
|
||||
ComponentOp::SequenceDelete { component_type, .. } => Some(component_type),
|
||||
| ComponentOp::Set { discriminant, .. } |
|
||||
ComponentOp::SetAdd { discriminant, .. } |
|
||||
ComponentOp::SetRemove { discriminant, .. } |
|
||||
ComponentOp::SequenceInsert { discriminant, .. } |
|
||||
ComponentOp::SequenceDelete { discriminant, .. } => Some(*discriminant),
|
||||
| ComponentOp::Delete { .. } => None,
|
||||
}
|
||||
}
|
||||
@@ -211,20 +208,20 @@ impl ComponentOpBuilder {
|
||||
}
|
||||
|
||||
/// Build a Set operation (LWW)
|
||||
pub fn set(mut self, component_type: String, data: ComponentData) -> ComponentOp {
|
||||
pub fn set(mut self, discriminant: u16, data: ComponentData) -> ComponentOp {
|
||||
self.vector_clock.increment(self.node_id);
|
||||
ComponentOp::Set {
|
||||
component_type,
|
||||
discriminant,
|
||||
data,
|
||||
vector_clock: self.vector_clock,
|
||||
}
|
||||
}
|
||||
|
||||
/// Build a SetAdd operation (OR-Set)
|
||||
pub fn set_add(mut self, component_type: String, element: Vec<u8>) -> ComponentOp {
|
||||
pub fn set_add(mut self, discriminant: u16, element: bytes::Bytes) -> ComponentOp {
|
||||
self.vector_clock.increment(self.node_id);
|
||||
ComponentOp::SetAdd {
|
||||
component_type,
|
||||
discriminant,
|
||||
operation_id: uuid::Uuid::new_v4(),
|
||||
element,
|
||||
vector_clock: self.vector_clock,
|
||||
@@ -234,12 +231,12 @@ impl ComponentOpBuilder {
|
||||
/// Build a SetRemove operation (OR-Set)
|
||||
pub fn set_remove(
|
||||
mut self,
|
||||
component_type: String,
|
||||
discriminant: u16,
|
||||
removed_ids: Vec<uuid::Uuid>,
|
||||
) -> ComponentOp {
|
||||
self.vector_clock.increment(self.node_id);
|
||||
ComponentOp::SetRemove {
|
||||
component_type,
|
||||
discriminant,
|
||||
removed_ids,
|
||||
vector_clock: self.vector_clock,
|
||||
}
|
||||
@@ -248,13 +245,13 @@ impl ComponentOpBuilder {
|
||||
/// Build a SequenceInsert operation (RGA)
|
||||
pub fn sequence_insert(
|
||||
mut self,
|
||||
component_type: String,
|
||||
discriminant: u16,
|
||||
after_id: Option<uuid::Uuid>,
|
||||
element: Vec<u8>,
|
||||
element: bytes::Bytes,
|
||||
) -> ComponentOp {
|
||||
self.vector_clock.increment(self.node_id);
|
||||
ComponentOp::SequenceInsert {
|
||||
component_type,
|
||||
discriminant,
|
||||
operation_id: uuid::Uuid::new_v4(),
|
||||
after_id,
|
||||
element,
|
||||
@@ -265,12 +262,12 @@ impl ComponentOpBuilder {
|
||||
/// Build a SequenceDelete operation (RGA)
|
||||
pub fn sequence_delete(
|
||||
mut self,
|
||||
component_type: String,
|
||||
discriminant: u16,
|
||||
element_id: uuid::Uuid,
|
||||
) -> ComponentOp {
|
||||
self.vector_clock.increment(self.node_id);
|
||||
ComponentOp::SequenceDelete {
|
||||
component_type,
|
||||
discriminant,
|
||||
element_id,
|
||||
vector_clock: self.vector_clock,
|
||||
}
|
||||
@@ -290,30 +287,30 @@ mod tests {
|
||||
use super::*;
|
||||
|
||||
#[test]
|
||||
fn test_component_type() {
|
||||
fn test_discriminant() {
|
||||
let op = ComponentOp::Set {
|
||||
component_type: "Transform".to_string(),
|
||||
data: ComponentData::Inline(vec![1, 2, 3]),
|
||||
discriminant: 1,
|
||||
data: ComponentData::Inline(bytes::Bytes::from(vec![1, 2, 3])),
|
||||
vector_clock: VectorClock::new(),
|
||||
};
|
||||
|
||||
assert_eq!(op.component_type(), Some("Transform"));
|
||||
assert_eq!(op.discriminant(), Some(1));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_component_type_delete() {
|
||||
fn test_discriminant_delete() {
|
||||
let op = ComponentOp::Delete {
|
||||
vector_clock: VectorClock::new(),
|
||||
};
|
||||
|
||||
assert_eq!(op.component_type(), None);
|
||||
assert_eq!(op.discriminant(), None);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_is_set() {
|
||||
let op = ComponentOp::Set {
|
||||
component_type: "Transform".to_string(),
|
||||
data: ComponentData::Inline(vec![1, 2, 3]),
|
||||
discriminant: 1,
|
||||
data: ComponentData::Inline(bytes::Bytes::from(vec![1, 2, 3])),
|
||||
vector_clock: VectorClock::new(),
|
||||
};
|
||||
|
||||
@@ -326,9 +323,9 @@ mod tests {
|
||||
#[test]
|
||||
fn test_is_or_set() {
|
||||
let op = ComponentOp::SetAdd {
|
||||
component_type: "Selection".to_string(),
|
||||
discriminant: 2,
|
||||
operation_id: uuid::Uuid::new_v4(),
|
||||
element: vec![1, 2, 3],
|
||||
element: bytes::Bytes::from(vec![1, 2, 3]),
|
||||
vector_clock: VectorClock::new(),
|
||||
};
|
||||
|
||||
@@ -341,10 +338,10 @@ mod tests {
|
||||
#[test]
|
||||
fn test_is_sequence() {
|
||||
let op = ComponentOp::SequenceInsert {
|
||||
component_type: "DrawingPath".to_string(),
|
||||
discriminant: 3,
|
||||
operation_id: uuid::Uuid::new_v4(),
|
||||
after_id: None,
|
||||
element: vec![1, 2, 3],
|
||||
element: bytes::Bytes::from(vec![1, 2, 3]),
|
||||
vector_clock: VectorClock::new(),
|
||||
};
|
||||
|
||||
@@ -361,8 +358,8 @@ mod tests {
|
||||
|
||||
let builder = ComponentOpBuilder::new(node_id, clock);
|
||||
let op = builder.set(
|
||||
"Transform".to_string(),
|
||||
ComponentData::Inline(vec![1, 2, 3]),
|
||||
1,
|
||||
ComponentData::Inline(bytes::Bytes::from(vec![1, 2, 3])),
|
||||
);
|
||||
|
||||
assert!(op.is_set());
|
||||
@@ -375,22 +372,22 @@ mod tests {
|
||||
let clock = VectorClock::new();
|
||||
|
||||
let builder = ComponentOpBuilder::new(node_id, clock);
|
||||
let op = builder.set_add("Selection".to_string(), vec![1, 2, 3]);
|
||||
let op = builder.set_add(2, bytes::Bytes::from(vec![1, 2, 3]));
|
||||
|
||||
assert!(op.is_or_set());
|
||||
assert_eq!(op.vector_clock().get(node_id), 1);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_serialization() -> bincode::Result<()> {
|
||||
fn test_serialization() -> anyhow::Result<()> {
|
||||
let op = ComponentOp::Set {
|
||||
component_type: "Transform".to_string(),
|
||||
data: ComponentData::Inline(vec![1, 2, 3]),
|
||||
discriminant: 1,
|
||||
data: ComponentData::Inline(bytes::Bytes::from(vec![1, 2, 3])),
|
||||
vector_clock: VectorClock::new(),
|
||||
};
|
||||
|
||||
let bytes = bincode::serialize(&op)?;
|
||||
let deserialized: ComponentOp = bincode::deserialize(&bytes)?;
|
||||
let bytes = rkyv::to_bytes::<rkyv::rancor::Failure>(&op).map(|b| b.to_vec())?;
|
||||
let deserialized: ComponentOp = rkyv::from_bytes::<ComponentOp, rkyv::rancor::Failure>(&bytes)?;
|
||||
|
||||
assert!(deserialized.is_set());
|
||||
|
||||
|
||||
@@ -87,7 +87,7 @@ pub struct OrElement<T> {
///
/// An element is "present" if it has an operation ID in `elements` that's
/// not in `tombstones`.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub struct OrSet<T> {
/// Map from operation ID to (value, adding_node)
elements: HashMap<uuid::Uuid, (T, NodeId)>,
@@ -471,15 +471,15 @@ mod tests {
}

#[test]
fn test_orset_serialization() -> bincode::Result<()> {
fn test_orset_serialization() -> anyhow::Result<()> {
let node = uuid::Uuid::new_v4();
let mut set: OrSet<String> = OrSet::new();

set.add("foo".to_string(), node);
set.add("bar".to_string(), node);

let bytes = bincode::serialize(&set)?;
let deserialized: OrSet<String> = bincode::deserialize(&bytes)?;
let bytes = rkyv::to_bytes::<rkyv::rancor::Failure>(&set).map(|b| b.to_vec())?;
let deserialized: OrSet<String> = rkyv::from_bytes::<OrSet<String>, rkyv::rancor::Failure>(&bytes)?;

assert_eq!(deserialized.len(), 2);
assert!(deserialized.contains(&"foo".to_string()));

|
||||
LastSyncVersions,
|
||||
auto_detect_transform_changes_system,
|
||||
},
|
||||
components::{NetworkedEntity, NetworkedTransform},
|
||||
delta_generation::{
|
||||
NodeVectorClock,
|
||||
cleanup_skip_delta_markers_system,
|
||||
generate_delta_system,
|
||||
},
|
||||
entity_map::{
|
||||
@@ -43,13 +45,19 @@ use crate::networking::{
|
||||
cleanup_despawned_entities_system,
|
||||
register_networked_entities_system,
|
||||
},
|
||||
gossip_bridge::GossipBridge,
|
||||
locks::{
|
||||
EntityLockRegistry,
|
||||
acquire_locks_on_selection_system,
|
||||
broadcast_lock_heartbeats_system,
|
||||
cleanup_expired_locks_system,
|
||||
release_locks_on_deselection_system,
|
||||
},
|
||||
message_dispatcher::message_dispatcher_system,
|
||||
messages::{
|
||||
SyncMessage,
|
||||
VersionedMessage,
|
||||
},
|
||||
operation_log::{
|
||||
OperationLog,
|
||||
periodic_sync_system,
|
||||
@@ -59,12 +67,21 @@ use crate::networking::{
|
||||
initialize_session_system,
|
||||
save_session_on_shutdown_system,
|
||||
},
|
||||
session_sync::{
|
||||
JoinRequestSent,
|
||||
send_join_request_once_system,
|
||||
transition_session_state_system,
|
||||
},
|
||||
sync_component::Synced,
|
||||
tombstones::{
|
||||
TombstoneRegistry,
|
||||
garbage_collect_tombstones_system,
|
||||
handle_local_deletions_system,
|
||||
},
|
||||
vector_clock::NodeId,
|
||||
vector_clock::{
|
||||
NodeId,
|
||||
VectorClock,
|
||||
},
|
||||
};
|
||||
|
||||
/// Configuration for the networking plugin
@@ -128,12 +145,12 @@ impl Default for NetworkingConfig {
/// .run();
/// ```
#[derive(Resource, Clone)]
pub struct SessionSecret(Vec<u8>);
pub struct SessionSecret(bytes::Bytes);

impl SessionSecret {
    /// Create a new session secret from bytes
    pub fn new(secret: impl Into<Vec<u8>>) -> Self {
        Self(secret.into())
        Self(bytes::Bytes::from(secret.into()))
    }

    /// Get the secret as a byte slice
@@ -142,6 +159,141 @@ impl SessionSecret {
    }
}

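The change from `Vec<u8>` to `bytes::Bytes` makes cloning the secret a reference-count bump instead of a deep copy. A std-only analogue of that design, using `Arc<[u8]>` (a sketch, not the `bytes` crate itself):

```rust
use std::sync::Arc;

// Std-only analogue of the Vec<u8> -> bytes::Bytes change:
// cloning shares one immutable allocation instead of copying it.
#[derive(Clone)]
struct Secret(Arc<[u8]>);

impl Secret {
    fn new(secret: impl Into<Vec<u8>>) -> Self {
        Self(secret.into().into())
    }

    fn as_bytes(&self) -> &[u8] {
        &self.0
    }
}

fn main() {
    let a = Secret::new(b"topsecret".to_vec());
    let b = a.clone(); // cheap: same allocation, refcount + 1
    assert!(Arc::ptr_eq(&a.0, &b.0));
    assert_eq!(a.as_bytes(), b.as_bytes());
}
```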
/// System that auto-inserts required sync components when `Synced` marker is detected.
///
/// This system runs in PreUpdate and automatically adds:
/// - `NetworkedEntity` with a new UUID and node ID
/// - `Persisted` with the same UUID
/// - `NetworkedTransform` if the entity has a `Transform` component
///
/// Note: Selection is now a global `LocalSelection` resource, not a per-entity component.
///
/// This eliminates the need for users to manually add these components when spawning synced entities.
fn auto_insert_sync_components(
    mut commands: Commands,
    query: Query<Entity, (Added<Synced>, Without<NetworkedEntity>)>,
    node_clock: Res<NodeVectorClock>,
    // We need access to check if entity has Transform
    transforms: Query<&Transform>,
) {
    for entity in &query {
        let entity_id = uuid::Uuid::new_v4();
        let node_id = node_clock.node_id;

        // Always add NetworkedEntity and Persisted
        let mut entity_commands = commands.entity(entity);
        entity_commands.insert((
            NetworkedEntity::with_id(entity_id, node_id),
            crate::persistence::Persisted::with_id(entity_id),
        ));

        // Auto-add NetworkedTransform if entity has Transform
        if transforms.contains(entity) {
            entity_commands.insert(NetworkedTransform);
        }

        info!(
            "[auto_insert_sync] Entity {:?} → NetworkedEntity({}), Persisted, {} auto-added",
            entity,
            entity_id,
            if transforms.contains(entity) { "NetworkedTransform" } else { "no transform" }
        );
    }

    let count = query.iter().count();
    if count > 0 {
        debug!("[auto_insert_sync] Processed {} newly synced entities this frame", count);
    }
}

/// System that adds NetworkedTransform to networked entities when Transform is added.
///
/// This handles entities received from the network that already have NetworkedEntity,
/// Persisted, and Synced, but need NetworkedTransform when Transform is added.
fn auto_insert_networked_transform(
    mut commands: Commands,
    query: Query<
        Entity,
        (
            With<NetworkedEntity>,
            With<Synced>,
            Added<Transform>,
            Without<NetworkedTransform>,
        ),
    >,
) {
    for entity in &query {
        commands.entity(entity).insert(NetworkedTransform);
        debug!("Auto-inserted NetworkedTransform for networked entity {:?}", entity);
    }
}

/// System that triggers anti-entropy sync when going online (GossipBridge added).
///
/// This handles the offline-to-online transition: when GossipBridge is inserted,
/// we immediately send a SyncRequest to trigger anti-entropy and broadcast all
/// operations from the operation log.
///
/// Uses a Local resource to track if we've already sent the sync request, so this only runs once.
fn trigger_sync_on_connect(
    mut has_synced: Local<bool>,
    bridge: Res<GossipBridge>,
    mut node_clock: ResMut<NodeVectorClock>,
    operation_log: Res<OperationLog>,
) {
    if *has_synced {
        return; // Already did this
    }

    let op_count = operation_log.total_operations();
    debug!(
        "Going online: broadcasting {} offline operations to peers",
        op_count
    );

    // Broadcast all our stored operations to peers
    // Use an empty vector clock to get ALL operations (not just newer ones)
    let all_operations = operation_log.get_all_operations_newer_than(&VectorClock::new());

    for delta in all_operations {
        // Wrap in VersionedMessage
        let message = VersionedMessage::new(SyncMessage::EntityDelta {
            entity_id: delta.entity_id,
            node_id: delta.node_id,
            vector_clock: delta.vector_clock.clone(),
            operations: delta.operations.clone(),
        });

        // Broadcast to peers
        if let Err(e) = bridge.send(message) {
            error!("Failed to broadcast offline EntityDelta: {}", e);
        } else {
            debug!(
                "Broadcast offline EntityDelta for entity {:?} with {} operations",
                delta.entity_id,
                delta.operations.len()
            );
        }
    }

    // Also send a SyncRequest to get any operations we're missing from peers
    // Increment clock for sending SyncRequest (this is a local operation)
    node_clock.tick();

    let request = crate::networking::operation_log::build_sync_request(
        node_clock.node_id,
        node_clock.clock.clone(),
    );

    if let Err(e) = bridge.send(request) {
        error!("Failed to send SyncRequest on connect: {}", e);
    } else {
        debug!("Sent SyncRequest to get missing operations from peers");
    }

    *has_synced = true;
}

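The "empty vector clock gets ALL operations" trick above relies on every non-empty operation clock dominating the empty baseline. A simplified, std-only clock sketch of that comparison (hypothetical; the crate's `VectorClock` API may differ):

```rust
use std::collections::HashMap;

// Minimal vector clock sketch: node id -> counter.
#[derive(Default, Clone)]
struct Clock(HashMap<u32, u64>);

impl Clock {
    fn tick(&mut self, node: u32) {
        *self.0.entry(node).or_insert(0) += 1;
    }

    // An operation is "newer than" a baseline if any of its counters
    // exceeds the baseline's counter for the same node.
    fn newer_than(&self, baseline: &Clock) -> bool {
        self.0
            .iter()
            .any(|(node, count)| *count > baseline.0.get(node).copied().unwrap_or(0))
    }
}

fn main() {
    let empty = Clock::default();
    let mut op = Clock::default();
    op.tick(1);

    // Every recorded operation compares newer than the empty clock,
    // which is why passing an empty clock selects the whole log.
    assert!(op.newer_than(&empty));
    assert!(!empty.newer_than(&empty));
}
```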
/// Bevy plugin for CRDT networking
///
/// This plugin sets up all systems and resources needed for distributed
@@ -165,7 +317,10 @@ impl SessionSecret {
/// ## Update
/// - Auto-detect Transform changes
/// - Handle local entity deletions
/// - Acquire locks when entities are selected
/// - Release locks when entities are deselected
/// - Send JoinRequest when networking starts (one-shot)
/// - Transition session state (Joining → Active)
///
/// ## PostUpdate
/// - Generate and broadcast EntityDelta for changed entities
@@ -187,6 +342,7 @@ impl SessionSecret {
/// - `OperationLog` - Operation log for anti-entropy
/// - `TombstoneRegistry` - Tombstone tracking for deletions
/// - `EntityLockRegistry` - Entity lock registry with heartbeat tracking
/// - `JoinRequestSent` - Tracks if JoinRequest has been sent (session sync)
///
/// # Example
///
@@ -236,7 +392,9 @@ impl Plugin for NetworkingPlugin {
    .insert_resource(OperationLog::new())
    .insert_resource(TombstoneRegistry::new())
    .insert_resource(EntityLockRegistry::new())
    .insert_resource(crate::networking::ComponentVectorClocks::new());
    .insert_resource(JoinRequestSent::default())
    .insert_resource(crate::networking::ComponentVectorClocks::new())
    .insert_resource(crate::networking::LocalSelection::new());

// Startup systems - initialize session from persistence
app.add_systems(Startup, initialize_session_system);
@@ -245,35 +403,54 @@ impl Plugin for NetworkingPlugin {
app.add_systems(
    PreUpdate,
    (
        // Auto-insert sync components when Synced marker is added (must run first)
        auto_insert_sync_components,
        // Register new networked entities
        register_networked_entities_system,
        // Central message dispatcher - handles all incoming messages
        // This replaces the individual message handling systems and
        // eliminates O(n²) behavior from multiple systems polling the same queue
        message_dispatcher_system,
        // Auto-insert NetworkedTransform for networked entities when Transform is added
        auto_insert_networked_transform,
    )
        .chain(),
);

// Update systems - handle local operations
// FixedUpdate systems - game logic at locked 60fps
app.add_systems(
    Update,
    FixedUpdate,
    (
        // Track Transform changes and mark NetworkedTransform as changed
        auto_detect_transform_changes_system,
        // Handle local entity deletions
        handle_local_deletions_system,
        // Acquire locks when entities are selected
        acquire_locks_on_selection_system,
        // Release locks when entities are deselected
        release_locks_on_deselection_system,
        // Session sync: send JoinRequest when networking starts
        send_join_request_once_system,
        // Session sync: transition session state based on sync completion
        transition_session_state_system,
    ),
);

// PostUpdate systems - generate and send deltas
// Trigger anti-entropy sync when going online (separate from chain to allow conditional execution)
app.add_systems(
    PostUpdate,
    FixedPostUpdate,
    trigger_sync_on_connect
        .run_if(bevy::ecs::schedule::common_conditions::resource_exists::<GossipBridge>),
);

// FixedPostUpdate systems - generate and send deltas at locked 60fps
app.add_systems(
    FixedPostUpdate,
    (
        // Generate deltas for changed entities
        generate_delta_system,
        // Generate deltas for changed entities, then cleanup markers
        // CRITICAL: cleanup_skip_delta_markers_system must run immediately after
        // generate_delta_system to remove SkipNextDeltaGeneration markers
        (generate_delta_system, cleanup_skip_delta_markers_system).chain(),
        // Periodic anti-entropy sync
        periodic_sync_system,
        // Maintenance tasks

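The move from `Update`/`PostUpdate` to `FixedUpdate`/`FixedPostUpdate` puts networking logic on a fixed timestep. Bevy drives this internally; the accumulator idea behind it can be sketched std-only with integer milliseconds (illustrative numbers, not Bevy's implementation):

```rust
// Fixed-timestep sketch: accumulate frame time, run logic in fixed steps.
// Returns how many fixed steps a frame of `frame_ms` triggers.
fn fixed_steps(accumulator: &mut u64, frame_ms: u64, step_ms: u64) -> u32 {
    *accumulator += frame_ms;
    let mut steps = 0;
    while *accumulator >= step_ms {
        *accumulator -= step_ms;
        steps += 1;
    }
    steps
}

fn main() {
    let step_ms = 16; // ~60 Hz
    let mut acc = 0;

    // A slow 33 ms frame (~30 fps) catches up with two fixed updates.
    assert_eq!(fixed_steps(&mut acc, 33, step_ms), 2);

    // A very fast frame may run zero fixed updates; time is banked.
    assert_eq!(fixed_steps(&mut acc, 1, step_ms), 0);
}
```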
@@ -41,10 +41,7 @@
use std::collections::HashMap;

use bevy::prelude::*;
use serde::{
    Deserialize,
    Serialize,
};

use crate::networking::vector_clock::{
    NodeId,
@@ -55,7 +52,7 @@ use crate::networking::vector_clock::{
///
/// Each element has a unique ID and tracks its logical position in the sequence
/// via the "after" pointer.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[derive(Debug, Clone, PartialEq, Eq, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub struct RgaElement<T> {
    /// Unique ID for this element
    pub id: uuid::Uuid,
@@ -90,7 +87,7 @@ pub struct RgaElement<T> {
/// Elements are stored in a HashMap by ID. Each element tracks which element
/// it was inserted after, forming a linked list structure. Deleted elements
/// remain as tombstones to preserve positions for concurrent operations.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub struct Rga<T> {
    /// Map from element ID to element
    elements: HashMap<uuid::Uuid, RgaElement<T>>,
@@ -98,7 +95,7 @@ pub struct Rga<T> {

impl<T> Rga<T>
where
    T: Clone + Serialize + for<'de> Deserialize<'de>,
    T: Clone + rkyv::Archive,
{
    /// Create a new empty RGA sequence
    pub fn new() -> Self {
@@ -416,7 +413,7 @@ where

impl<T> Default for Rga<T>
where
    T: Clone + Serialize + for<'de> Deserialize<'de>,
    T: Clone + rkyv::Archive,
{
    fn default() -> Self {
        Self::new()
@@ -612,15 +609,15 @@ mod tests {
    }

    #[test]
    fn test_rga_serialization() -> bincode::Result<()> {
    fn test_rga_serialization() -> anyhow::Result<()> {
        let node = uuid::Uuid::new_v4();
        let mut seq: Rga<String> = Rga::new();

        let (id_a, _) = seq.insert_at_beginning("foo".to_string(), node);
        seq.insert_after(Some(id_a), "bar".to_string(), node);

        let bytes = bincode::serialize(&seq)?;
        let deserialized: Rga<String> = bincode::deserialize(&bytes)?;
        let bytes = rkyv::to_bytes::<rkyv::rancor::Failure>(&seq).map(|b| b.to_vec())?;
        let deserialized: Rga<String> = rkyv::from_bytes::<Rga<String>, rkyv::rancor::Failure>(&bytes)?;

        assert_eq!(deserialized.len(), 2);
        let values: Vec<String> = deserialized.values().cloned().collect();

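The "after-pointer" structure the `Rga` doc describes can be sketched std-only. This is a hypothetical simplification (integer ids instead of UUIDs, no tombstones, no concurrent-sibling ordering), showing only how walking the after-pointers materializes the sequence:

```rust
use std::collections::HashMap;

// Minimal RGA-style sequence: each element records the id it was inserted after.
struct MiniRga {
    elements: HashMap<u64, (Option<u64>, char)>, // id -> (after, value)
    next_id: u64,
}

impl MiniRga {
    fn new() -> Self {
        Self { elements: HashMap::new(), next_id: 0 }
    }

    fn insert_after(&mut self, after: Option<u64>, value: char) -> u64 {
        self.next_id += 1;
        self.elements.insert(self.next_id, (after, value));
        self.next_id
    }

    // Walk the after-pointers from the head (after == None) to produce the sequence.
    // Assumes at most one element per "after" slot (no concurrent siblings).
    fn values(&self) -> Vec<char> {
        let mut out = Vec::new();
        let mut cursor: Option<u64> = None;
        loop {
            let next = self
                .elements
                .iter()
                .find(|(_, (after, _))| *after == cursor)
                .map(|(id, (_, v))| (*id, *v));
            match next {
                Some((id, v)) => {
                    out.push(v);
                    cursor = Some(id);
                }
                None => break,
            }
        }
        out
    }
}

fn main() {
    let mut seq = MiniRga::new();
    let a = seq.insert_after(None, 'f');
    seq.insert_after(Some(a), 'b');
    assert_eq!(seq.values(), vec!['f', 'b']);
}
```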
@@ -6,10 +6,7 @@ use std::fmt;
/// human-readable session codes, ALPN-based network isolation, and persistent
/// session tracking.
use bevy::prelude::*;
use serde::{
    Deserialize,
    Serialize,
};

use uuid::Uuid;

use crate::networking::VectorClock;
@@ -18,7 +15,7 @@ use crate::networking::VectorClock;
///
/// Session IDs provide both technical uniqueness (UUID) and human usability
/// (abc-def-123 codes). All peers in a session share the same session ID.
#[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)]
#[derive(Debug, Clone, PartialEq, Eq, Hash, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub struct SessionId {
    uuid: Uuid,
    code: String,
@@ -115,6 +112,24 @@ impl SessionId {
    *hash.as_bytes()
}

/// Derive deterministic pkarr keypair for DHT-based peer discovery
///
/// All peers in the same session derive the same keypair from the session code.
/// This shared keypair is used to publish and discover peer EndpointIds in the DHT.
///
/// # Security
/// The session code is the secret - anyone with the code can discover peers.
/// The domain separation prefix ensures no collision with other uses.
pub fn to_pkarr_keypair(&self) -> pkarr::Keypair {
    let mut hasher = blake3::Hasher::new();
    hasher.update(b"/app/v1/session-pkarr-key/");
    hasher.update(self.uuid.as_bytes());
    let hash = hasher.finalize();

    let secret_bytes: [u8; 32] = *hash.as_bytes();
    pkarr::Keypair::from_secret_key(&secret_bytes)
}

/// Get raw UUID
pub fn as_uuid(&self) -> &Uuid {
    &self.uuid
@@ -134,7 +149,7 @@ impl fmt::Display for SessionId {
}

/// Session lifecycle states
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub enum SessionState {
    /// Session exists in database but hasn't connected to network yet
    Created,
@@ -178,7 +193,7 @@ impl SessionState {
///
/// Tracks session identity, creation time, entity count, and lifecycle state.
/// Persisted to database for crash recovery and auto-rejoin.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[derive(Debug, Clone, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize)]
pub struct Session {
    /// Unique session identifier
    pub id: SessionId,
@@ -199,7 +214,7 @@ pub struct Session {
    pub state: SessionState,

    /// Optional encrypted session secret for access control
    pub secret: Option<Vec<u8>>,
    pub secret: Option<bytes::Bytes>,
}

impl Session {

@@ -168,7 +168,8 @@ pub fn save_session_on_shutdown_system(world: &mut World) {

// Update session metadata
session.touch();
session.transition_to(SessionState::Left);
// Note: We don't transition to Left here - that only happens on actual shutdown
// This periodic save just persists the current state

// Count entities in the world
let entity_count = world

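The lifecycle that `SessionState` encodes (Created → Joining → Active, with disconnect/rejoin and leave) can be modeled as a pure transition check. This is a sketch of plausible rules inferred from the diff; the crate's actual transition logic may differ:

```rust
// States mirroring the SessionState enum in the diff.
#[derive(Debug, Clone, Copy, PartialEq)]
enum State {
    Created,
    Joining,
    Active,
    Disconnected,
    Left,
}

// Sketch of which lifecycle transitions make sense (assumed, not authoritative).
fn can_transition(from: State, to: State) -> bool {
    use State::*;
    matches!(
        (from, to),
        (Created, Joining)
            | (Joining, Active)
            | (Active, Disconnected)
            | (Disconnected, Joining)
            | (Active, Left)
            | (Disconnected, Left)
    )
}

fn main() {
    assert!(can_transition(State::Created, State::Joining));
    assert!(can_transition(State::Joining, State::Active));
    assert!(can_transition(State::Disconnected, State::Joining));
    assert!(!can_transition(State::Left, State::Active));
}
```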
399
crates/libmarathon/src/networking/session_sync.rs
Normal file
@@ -0,0 +1,399 @@
//! Session synchronization systems
//!
//! This module handles automatic session lifecycle:
//! - Sending JoinRequest when networking starts
//! - Transitioning session state when receiving FullState
//! - Persisting session state changes

use bevy::prelude::*;

use crate::networking::{
    CurrentSession,
    GossipBridge,
    JoinType,
    NodeVectorClock,
    SessionState,
    build_join_request,
    plugin::SessionSecret,
};

/// System to send JoinRequest when networking comes online
///
/// This system detects when GossipBridge is added and sends a JoinRequest
/// to discover peers and sync state. It only runs once when networking starts.
///
/// Add to your app as an Update system AFTER GossipBridge is created:
/// ```no_run
/// use bevy::prelude::*;
/// use libmarathon::networking::send_join_request_on_connect_system;
///
/// App::new()
///     .add_systems(Update, send_join_request_on_connect_system);
/// ```
pub fn send_join_request_on_connect_system(
    current_session: ResMut<CurrentSession>,
    node_clock: Res<NodeVectorClock>,
    bridge: Option<Res<GossipBridge>>,
    session_secret: Option<Res<SessionSecret>>,
) {
    // Only run when bridge exists and session is in Joining state
    let Some(bridge) = bridge else {
        return;
    };

    // Only send JoinRequest when in Joining state
    if current_session.session.state != SessionState::Joining {
        return;
    }

    let node_id = node_clock.node_id;
    let session_id = current_session.session.id.clone();

    // Determine join type based on whether we have a last known clock
    let join_type = if current_session.last_known_clock.node_count() > 0 {
        // Rejoin - we have a previous clock snapshot
        JoinType::Rejoin {
            last_active: current_session.session.last_active,
            entity_count: current_session.session.entity_count,
        }
    } else {
        // Fresh join - no previous state
        JoinType::Fresh
    };

    // Get session secret if configured
    let secret = session_secret.as_ref().map(|s| bytes::Bytes::from(s.as_bytes().to_vec()));

    // Build JoinRequest
    let last_known_clock = if current_session.last_known_clock.node_count() > 0 {
        Some(current_session.last_known_clock.clone())
    } else {
        None
    };

    let request = build_join_request(
        node_id,
        session_id.clone(),
        secret,
        last_known_clock,
        join_type.clone(),
    );

    // Send JoinRequest
    match bridge.send(request) {
        Ok(()) => {
            info!(
                "Sent JoinRequest for session {} (type: {:?})",
                session_id.to_code(),
                join_type
            );
        }
        Err(e) => {
            error!("Failed to send JoinRequest: {}", e);
        }
    }
}

/// System to transition session to Active when sync completes
///
/// This system monitors for session state changes and handles transitions:
/// - Joining → Active: When we receive FullState or initial sync completes
///
/// This is an exclusive system to allow world queries.
///
/// Add to your app as an Update system:
/// ```no_run
/// use bevy::prelude::*;
/// use libmarathon::networking::transition_session_state_system;
///
/// App::new()
///     .add_systems(Update, transition_session_state_system);
/// ```
pub fn transition_session_state_system(world: &mut World) {
    // Only process state transitions when we have networking
    if world.get_resource::<GossipBridge>().is_none() {
        return;
    }

    // Get values we need (clone to avoid holding references)
    let (session_state, session_id, join_request_sent, clock_node_count) = {
        let Some(current_session) = world.get_resource::<CurrentSession>() else {
            return;
        };

        let Some(node_clock) = world.get_resource::<NodeVectorClock>() else {
            return;
        };

        let Some(join_sent) = world.get_resource::<JoinRequestSent>() else {
            return;
        };

        (
            current_session.session.state,
            current_session.session.id.clone(),
            join_sent.sent,
            node_clock.clock.node_count(),
        )
    };

    // Use a local resource type for the timer so it's stored per-world
    #[derive(Default, Resource)]
    struct JoinTimer(Option<std::time::Instant>);

    if !world.contains_resource::<JoinTimer>() {
        world.insert_resource(JoinTimer::default());
    }

    match session_state {
        SessionState::Joining => {
            // Start timer when JoinRequest is sent
            {
                let mut timer = world.resource_mut::<JoinTimer>();
                if join_request_sent && timer.0.is_none() {
                    timer.0 = Some(std::time::Instant::now());
                    debug!("Started join timer - will transition to Active after timeout if no peers respond");
                }
            }

            // Count entities in world
            let entity_count = world
                .query::<&crate::networking::NetworkedEntity>()
                .iter(world)
                .count();

            // Transition to Active if:
            // 1. We have received entities (entity_count > 0) AND have multiple nodes in clock
            //    This ensures FullState was received and applied, OR
            // 2. We've waited 3 seconds and either:
            //    a) We have entities (sync completed), OR
            //    b) No entities exist yet (we're the first node in session)
            let should_transition = if entity_count > 0 && clock_node_count > 1 {
                // We've received and applied FullState with entities
                info!(
                    "Session {} transitioning to Active (received {} entities from {} peers)",
                    session_id.to_code(),
                    entity_count,
                    clock_node_count - 1
                );
                true
            } else {
                let timer = world.resource::<JoinTimer>();
                if let Some(start_time) = timer.0 {
                    // Check if 3 seconds have passed since JoinRequest
                    if start_time.elapsed().as_secs() >= 3 {
                        if entity_count > 0 {
                            info!(
                                "Session {} transitioning to Active (timeout reached, have {} entities)",
                                session_id.to_code(),
                                entity_count
                            );
                        } else {
                            info!(
                                "Session {} transitioning to Active (timeout - no peers or empty session, first node)",
                                session_id.to_code()
                            );
                        }
                        true
                    } else {
                        false
                    }
                } else {
                    false
                }
            };

            if should_transition {
                let mut current_session = world.resource_mut::<CurrentSession>();
                current_session.transition_to(SessionState::Active);

                // Reset timer
                let mut timer = world.resource_mut::<JoinTimer>();
                timer.0 = None;
            }
        }
        SessionState::Active => {
            // Already active, reset timer
            if let Some(mut timer) = world.get_resource_mut::<JoinTimer>() {
                timer.0 = None;
            }
        }
        SessionState::Disconnected => {
            // If we reconnected (bridge exists), transition to Joining
            // This is handled by the networking startup logic
            if let Some(mut timer) = world.get_resource_mut::<JoinTimer>() {
                timer.0 = None;
            }
        }
        SessionState::Created | SessionState::Left => {
            // Should not be in these states when networking is active
            if let Some(mut timer) = world.get_resource_mut::<JoinTimer>() {
                timer.0 = None;
            }
        }
    }
}

/// Resource to track if we've sent the initial JoinRequest
///
/// This prevents sending multiple JoinRequests on subsequent frame updates
#[derive(Resource)]
pub struct JoinRequestSent {
    pub sent: bool,
    /// Timer to wait for peers before sending JoinRequest
    /// If no peers connect after 1 second, send anyway (we're first node)
    pub wait_started: Option<std::time::Instant>,
}

impl Default for JoinRequestSent {
    fn default() -> Self {
        Self {
            sent: false,
            wait_started: None,
        }
    }
}

/// One-shot system to send JoinRequest only once when networking starts
///
/// CRITICAL: Waits for at least one peer to connect via pkarr+DHT before sending
/// JoinRequest. This prevents broadcasting to an empty network.
///
/// Timing:
/// - If peers connect: Send JoinRequest immediately (they'll receive it)
/// - If no peers after 1 second: Send anyway (we're probably first node in session)
pub fn send_join_request_once_system(
    mut join_sent: ResMut<JoinRequestSent>,
    current_session: ResMut<CurrentSession>,
    node_clock: Res<NodeVectorClock>,
    bridge: Option<Res<GossipBridge>>,
    session_secret: Option<Res<SessionSecret>>,
) {
    // Skip if already sent
    if join_sent.sent {
        return;
    }

    // Only run when bridge exists and session is in Joining state
    let Some(bridge) = bridge else {
        return;
    };

    if current_session.session.state != SessionState::Joining {
        return;
    }

    // Start wait timer when conditions are met
    if join_sent.wait_started.is_none() {
        join_sent.wait_started = Some(std::time::Instant::now());
        debug!("Started waiting for peers before sending JoinRequest (max 1 second)");
    }

    // Check if we have any peers connected (node_count > 1 means we + at least 1 peer)
    let peer_count = node_clock.clock.node_count().saturating_sub(1);
    let wait_elapsed = join_sent.wait_started.unwrap().elapsed();

    // Send JoinRequest if:
    // 1. At least one peer has connected (they'll receive our JoinRequest), OR
    // 2. We've waited 1 second with no peers (we're probably the first node)
    let should_send = if peer_count > 0 {
        debug!(
            "Sending JoinRequest now - {} peer(s) connected (waited {:?})",
            peer_count, wait_elapsed
        );
        true
    } else if wait_elapsed.as_millis() >= 1000 {
        debug!(
            "Sending JoinRequest after timeout - no peers connected, assuming first node (waited {:?})",
            wait_elapsed
        );
        true
    } else {
        // Still waiting for peers
        false
    };

    if !should_send {
        return;
    }

    let node_id = node_clock.node_id;
    let session_id = current_session.session.id.clone();

    // Determine join type
    let join_type = if current_session.last_known_clock.node_count() > 0 {
        JoinType::Rejoin {
            last_active: current_session.session.last_active,
            entity_count: current_session.session.entity_count,
        }
    } else {
        JoinType::Fresh
    };

    // Get session secret if configured
    let secret = session_secret.as_ref().map(|s| bytes::Bytes::from(s.as_bytes().to_vec()));

    // Build JoinRequest
    let last_known_clock = if current_session.last_known_clock.node_count() > 0 {
        Some(current_session.last_known_clock.clone())
    } else {
        None
    };

    let request = build_join_request(
        node_id,
        session_id.clone(),
        secret,
        last_known_clock,
        join_type.clone(),
    );

    // Send JoinRequest
    match bridge.send(request) {
        Ok(()) => {
            info!(
                "Sent JoinRequest for session {} (type: {:?})",
                session_id.to_code(),
                join_type
            );
            join_sent.sent = true;

            // The Joining -> Active transition is handled separately;
            // we wait for potential peers / FullState before activating.
        }
        Err(e) => {
            error!("Failed to send JoinRequest: {}", e);
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::networking::{Session, SessionId, VectorClock};

    #[test]
    fn test_join_request_sent_tracking() {
        let mut sent = JoinRequestSent::default();
        assert!(!sent.sent);

        sent.sent = true;
        assert!(sent.sent);
    }

    #[test]
    fn test_session_state_transitions() {
        let session_id = SessionId::new();
        let session = Session::new(session_id);
        let mut current = CurrentSession::new(session, VectorClock::new());

        assert_eq!(current.session.state, SessionState::Created);

        current.transition_to(SessionState::Joining);
        assert_eq!(current.session.state, SessionState::Joining);

        current.transition_to(SessionState::Active);
        assert_eq!(current.session.state, SessionState::Active);
    }
}

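The peer-wait decision inside `send_join_request_once_system` (send once a peer appears, or after a 1-second timeout) factors neatly into a pure function for testing. A sketch, not the crate's API:

```rust
use std::time::Duration;

// Decide whether to send the JoinRequest: send as soon as a peer shows up,
// or after waiting 1 second with no peers (assume we're the first node).
fn should_send_join(peer_count: usize, waited: Duration) -> bool {
    peer_count > 0 || waited.as_millis() >= 1000
}

fn main() {
    // A connected peer means we send immediately.
    assert!(should_send_join(1, Duration::from_millis(10)));
    // No peers and under the timeout: keep waiting.
    assert!(!should_send_join(0, Duration::from_millis(500)));
    // Timeout reached with no peers: send anyway.
    assert!(should_send_join(0, Duration::from_millis(1000)));
}
```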
@@ -71,12 +71,12 @@ pub trait SyncComponent: Component + Reflect + Sized {

/// Serialize this component to bytes
///
/// Uses bincode for efficient binary serialization.
fn serialize_sync(&self) -> anyhow::Result<Vec<u8>>;
/// Uses rkyv for zero-copy binary serialization.
fn serialize_sync(&self) -> anyhow::Result<bytes::Bytes>;

/// Deserialize this component from bytes
///
/// Uses bincode to deserialize from the format created by `serialize_sync`.
/// Uses rkyv to deserialize from the format created by `serialize_sync`.
fn deserialize_sync(data: &[u8]) -> anyhow::Result<Self>;

/// Merge remote state with local state
@@ -97,33 +97,29 @@ pub trait SyncComponent: Component + Reflect + Sized {
fn merge(&mut self, remote: Self, clock_cmp: ClockComparison) -> ComponentMergeDecision;
}

/// Marker component for entities that should be synced
/// Marker component indicating that an entity should be synchronized across the network.
///
/// Add this to any entity with synced components to enable automatic
/// change detection and synchronization.
/// When this component is added to an entity, the `auto_insert_sync_components` system
/// will automatically add the required infrastructure components:
/// - `NetworkedEntity` - for network synchronization
/// - `Persisted` - for persistence
/// - `NetworkedTransform` - if the entity has a `Transform` component
///
/// # Example
/// ```
/// use bevy::prelude::*;
/// use libmarathon::networking::Synced;
/// use sync_macros::Synced as SyncedDerive;
///
/// #[derive(Component, Reflect, Clone, serde::Serialize, serde::Deserialize, SyncedDerive)]
/// #[sync(version = 1, strategy = "LastWriteWins")]
/// struct Health(f32);
///
/// #[derive(Component, Reflect, Clone, serde::Serialize, serde::Deserialize, SyncedDerive)]
/// #[sync(version = 1, strategy = "LastWriteWins")]
/// struct Position {
///     x: f32,
///     y: f32,
/// ```no_compile
/// // Define a synced component with the #[synced] attribute
/// #[libmarathon_macros::synced]
/// pub struct CubeMarker {
///     pub color_r: f32,
///     pub size: f32,
/// }
///
/// let mut world = World::new();
/// world.spawn((
///     Health(100.0),
///     Position { x: 0.0, y: 0.0 },
///     Synced, // Marker enables sync
/// // Spawn with just the Synced marker - infrastructure auto-added
/// commands.spawn((
///     CubeMarker::with_color(Color::RED, 1.0),
///     Transform::from_translation(pos),
///     Synced, // Auto-adds NetworkedEntity, Persisted, NetworkedTransform
/// ));
/// ```
#[derive(Component, Reflect, Default, Clone, Copy)]

@@ -219,11 +219,8 @@ pub fn handle_local_deletions_system(
    mut tombstone_registry: ResMut<TombstoneRegistry>,
    mut operation_log: Option<ResMut<crate::networking::OperationLog>>,
    bridge: Option<Res<GossipBridge>>,
    mut write_buffer: Option<ResMut<crate::persistence::WriteBufferResource>>,
) {
    let Some(bridge) = bridge else {
        return;
    };

    for (entity, networked) in query.iter() {
        // Increment clock for deletion
        node_clock.tick();
@@ -235,13 +232,34 @@ pub fn handle_local_deletions_system(
        )
        .delete();

        // Record tombstone
        // Record tombstone in memory
        tombstone_registry.record_deletion(
            networked.network_id,
            node_clock.node_id,
            node_clock.clock.clone(),
        );

        // Persist tombstone to database
        if let Some(ref mut buffer) = write_buffer {
            // Serialize the vector clock using rkyv
            match rkyv::to_bytes::<rkyv::rancor::Failure>(&node_clock.clock).map(|b| b.to_vec()) {
                Ok(clock_bytes) => {
                    if let Err(e) = buffer.add(crate::persistence::PersistenceOp::RecordTombstone {
                        entity_id: networked.network_id,
                        deleting_node: node_clock.node_id,
                        deletion_clock: bytes::Bytes::from(clock_bytes),
                    }) {
                        error!("Failed to persist tombstone for entity {:?}: {}", networked.network_id, e);
                    } else {
                        debug!("Persisted tombstone for entity {:?} to database", networked.network_id);
                    }
                },
                Err(e) => {
                    error!("Failed to serialize vector clock for tombstone persistence: {:?}", e);
                }
            }
        }

        // Create EntityDelta with Delete operation
        let delta = crate::networking::EntityDelta::new(
            networked.network_id,
@@ -250,25 +268,32 @@ pub fn handle_local_deletions_system(
            vec![delete_op],
        );

        // Record in operation log
        // Record in operation log (for when we go online later)
        if let Some(ref mut log) = operation_log {
            log.record_operation(delta.clone());
        }

        // Broadcast deletion
        let message =
            crate::networking::VersionedMessage::new(crate::networking::SyncMessage::EntityDelta {
                entity_id: delta.entity_id,
                node_id: delta.node_id,
                vector_clock: delta.vector_clock.clone(),
                operations: delta.operations.clone(),
            });
        // Broadcast deletion if online
        if let Some(ref bridge) = bridge {
            let message =
                crate::networking::VersionedMessage::new(crate::networking::SyncMessage::EntityDelta {
                    entity_id: delta.entity_id,
                    node_id: delta.node_id,
                    vector_clock: delta.vector_clock.clone(),
                    operations: delta.operations.clone(),
                });

            if let Err(e) = bridge.send(message) {
                error!("Failed to broadcast Delete operation: {}", e);
            if let Err(e) = bridge.send(message) {
                error!("Failed to broadcast Delete operation: {}", e);
            } else {
                info!(
                    "Broadcast Delete operation for entity {:?}",
                    networked.network_id
                );
            }
        } else {
            info!(
                "Broadcast Delete operation for entity {:?}",
                "Deleted entity {:?} locally (offline mode - will sync when online)",
                networked.network_id
            );
        }
@@ -5,10 +5,7 @@

use std::collections::HashMap;

use serde::{
    Deserialize,
    Serialize,
};

use crate::networking::error::{
    NetworkingError,
@@ -54,20 +51,25 @@ pub type NodeId = uuid::Uuid;
/// clock1.merge(&clock2); // node1: 1, node2: 1
/// assert!(clock1.happened_before(&clock2) == false);
/// ```
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize, Default)]
#[derive(Debug, Clone, PartialEq, Eq, rkyv::Archive, rkyv::Serialize, rkyv::Deserialize, Default)]
pub struct VectorClock {
    /// Map from node ID to logical timestamp
    pub clocks: HashMap<NodeId, u64>,
    pub timestamps: HashMap<NodeId, u64>,
}

impl VectorClock {
    /// Create a new empty vector clock
    pub fn new() -> Self {
        Self {
            clocks: HashMap::new(),
            timestamps: HashMap::new(),
        }
    }

    /// Get the number of nodes tracked in this clock
    pub fn node_count(&self) -> usize {
        self.timestamps.len()
    }

    /// Increment the clock for a given node
    ///
    /// This should be called by a node before performing a local operation.
@@ -89,7 +91,7 @@ impl VectorClock {
    /// assert_eq!(clock.get(node), 2);
    /// ```
    pub fn increment(&mut self, node_id: NodeId) -> u64 {
        let counter = self.clocks.entry(node_id).or_insert(0);
        let counter = self.timestamps.entry(node_id).or_insert(0);
        *counter += 1;
        *counter
    }
@@ -98,7 +100,7 @@ impl VectorClock {
    ///
    /// Returns 0 if the node has never been seen in this vector clock.
    pub fn get(&self, node_id: NodeId) -> u64 {
        self.clocks.get(&node_id).copied().unwrap_or(0)
        self.timestamps.get(&node_id).copied().unwrap_or(0)
    }

    /// Merge another vector clock into this one
@@ -127,8 +129,8 @@ impl VectorClock {
    /// assert_eq!(clock1.get(node2), 1);
    /// ```
    pub fn merge(&mut self, other: &VectorClock) {
        for (node_id, &counter) in &other.clocks {
            let current = self.clocks.entry(*node_id).or_insert(0);
        for (node_id, &counter) in &other.timestamps {
            let current = self.timestamps.entry(*node_id).or_insert(0);
            *current = (*current).max(counter);
        }
    }
@@ -161,7 +163,7 @@ impl VectorClock {
        let mut any_strictly_less = false;

        // Check our nodes in a single pass
        for (node_id, &our_counter) in &self.clocks {
        for (node_id, &our_counter) in &self.timestamps {
            let their_counter = other.get(*node_id);

            // Early exit if we have a counter greater than theirs
@@ -178,8 +180,8 @@ impl VectorClock {
        // If we haven't found a strictly less counter yet, check if they have
        // nodes we don't know about with non-zero values (those count as strictly less)
        if !any_strictly_less {
            any_strictly_less = other.clocks.iter().any(|(node_id, &their_counter)| {
                !self.clocks.contains_key(node_id) && their_counter > 0
            any_strictly_less = other.timestamps.iter().any(|(node_id, &their_counter)| {
                !self.timestamps.contains_key(node_id) && their_counter > 0
            });
        }
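The hunks above rename the `clocks` field to `timestamps` without changing the causality semantics. A minimal, std-only sketch of those semantics (`NodeId` simplified to `u32` here; the real code uses `uuid::Uuid` and derives rkyv traits) is:

```rust
use std::collections::HashMap;

// Simplified stand-in for the real NodeId (uuid::Uuid in the diff).
type NodeId = u32;

#[derive(Debug, Clone, PartialEq, Eq, Default)]
struct VectorClock {
    timestamps: HashMap<NodeId, u64>,
}

impl VectorClock {
    fn increment(&mut self, node: NodeId) -> u64 {
        let counter = self.timestamps.entry(node).or_insert(0);
        *counter += 1;
        *counter
    }

    fn get(&self, node: NodeId) -> u64 {
        self.timestamps.get(&node).copied().unwrap_or(0)
    }

    fn merge(&mut self, other: &VectorClock) {
        for (node, &counter) in &other.timestamps {
            let current = self.timestamps.entry(*node).or_insert(0);
            *current = (*current).max(counter);
        }
    }

    // True iff every entry of self <= other, and at least one is strictly less.
    fn happened_before(&self, other: &VectorClock) -> bool {
        let mut any_strictly_less = false;
        for (node, &ours) in &self.timestamps {
            let theirs = other.get(*node);
            if ours > theirs {
                return false; // early exit, as in the diff
            }
            if ours < theirs {
                any_strictly_less = true;
            }
        }
        // Nodes only the other clock knows about count as strictly less.
        if !any_strictly_less {
            any_strictly_less = other
                .timestamps
                .iter()
                .any(|(node, &theirs)| !self.timestamps.contains_key(node) && theirs > 0);
        }
        any_strictly_less
    }
}

fn main() {
    let mut a = VectorClock::default();
    let mut b = VectorClock::default();
    a.increment(1); // a: {1: 1}
    b.increment(1);
    b.increment(2); // b: {1: 1, 2: 1}
    assert!(a.happened_before(&b)); // a is causally before b
    a.merge(&b); // now a == b, so no strict ordering either way
    assert!(!a.happened_before(&b));
}
```

The merge-then-compare at the end mirrors the doc example in the hunk: after `merge`, the two clocks are equal, so `happened_before` is false in both directions.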
@@ -253,7 +255,7 @@ mod tests {
    #[test]
    fn test_new_clock() {
        let clock = VectorClock::new();
        assert_eq!(clock.clocks.len(), 0);
        assert_eq!(clock.timestamps.len(), 0);
    }

    #[test]
@@ -444,13 +446,13 @@ mod tests {
    }

    #[test]
    fn test_serialization() -> bincode::Result<()> {
    fn test_serialization() -> anyhow::Result<()> {
        let node = uuid::Uuid::new_v4();
        let mut clock = VectorClock::new();
        clock.increment(node);

        let bytes = bincode::serialize(&clock)?;
        let deserialized: VectorClock = bincode::deserialize(&bytes)?;
        let bytes = rkyv::to_bytes::<rkyv::rancor::Failure>(&clock).map(|b| b.to_vec())?;
        let deserialized: VectorClock = rkyv::from_bytes::<VectorClock, rkyv::rancor::Failure>(&bytes)?;

        assert_eq!(clock, deserialized);
@@ -201,7 +201,7 @@ pub fn flush_to_sqlite(ops: &[PersistenceOp], conn: &mut Connection) -> Result<u
            rusqlite::params![
                entity_id.as_bytes(),
                component_type,
                data,
                data.as_ref(),
                current_timestamp(),
            ],
        )?;
@@ -219,7 +219,7 @@ pub fn flush_to_sqlite(ops: &[PersistenceOp], conn: &mut Connection) -> Result<u
            rusqlite::params![
                &node_id.to_string(), // Convert UUID to string for SQLite TEXT column
                sequence,
                operation,
                operation.as_ref(),
                current_timestamp(),
            ],
        )?;
@@ -253,6 +253,24 @@ pub fn flush_to_sqlite(ops: &[PersistenceOp], conn: &mut Connection) -> Result<u
                )?;
                count += 1;
            },

            | PersistenceOp::RecordTombstone {
                entity_id,
                deleting_node,
                deletion_clock,
            } => {
                tx.execute(
                    "INSERT OR REPLACE INTO tombstones (entity_id, deleting_node, deletion_clock, created_at)
                     VALUES (?1, ?2, ?3, ?4)",
                    rusqlite::params![
                        entity_id.as_bytes(),
                        &deleting_node.to_string(),
                        deletion_clock.as_ref(),
                        current_timestamp(),
                    ],
                )?;
                count += 1;
            },
        }
    }
@@ -485,7 +503,7 @@ pub fn save_session(conn: &mut Connection, session: &crate::networking::Session)
            session.last_active,
            session.entity_count as i64,
            session.state.to_string(),
            session.secret,
            session.secret.as_ref().map(|b| b.as_ref()),
        ],
    )?;
    Ok(())
@@ -517,7 +535,8 @@ pub fn load_session(
                last_active: row.get(3)?,
                entity_count: row.get::<_, i64>(4)? as usize,
                state,
                secret: row.get(6)?,
                secret: row.get::<_, Option<std::borrow::Cow<'_, [u8]>>>(6)?
                    .map(|cow| bytes::Bytes::copy_from_slice(&cow)),
            })
        },
    )
@@ -548,7 +567,8 @@ pub fn get_last_active_session(conn: &Connection) -> Result<Option<crate::networ
                last_active: row.get(3)?,
                entity_count: row.get::<_, i64>(4)? as usize,
                state,
                secret: row.get(6)?,
                secret: row.get::<_, Option<std::borrow::Cow<'_, [u8]>>>(6)?
                    .map(|cow| bytes::Bytes::copy_from_slice(&cow)),
            })
        },
    )
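The session hunks above switch the `secret` column from an implicit `row.get(6)?` to reading the BLOB as a `Cow<'_, [u8]>` and copying it into an owned `bytes::Bytes`, so the value no longer borrows from the row. A std-only sketch of that conversion (with `Vec<u8>` standing in for `bytes::Bytes`, and `to_owned_secret` a hypothetical helper name) is:

```rust
use std::borrow::Cow;

// Sketch of the Cow-to-owned conversion; Vec<u8> stands in for bytes::Bytes,
// and the real code uses bytes::Bytes::copy_from_slice instead of to_vec.
fn to_owned_secret(column: Option<Cow<'_, [u8]>>) -> Option<Vec<u8>> {
    // Whether the Cow is borrowed or owned, the result is an owned buffer
    // that outlives the (hypothetical) database row it came from.
    column.map(|cow| cow.to_vec())
}

fn main() {
    let borrowed: Option<Cow<'_, [u8]>> = Some(Cow::Borrowed(&[1u8, 2, 3]));
    assert_eq!(to_owned_secret(borrowed), Some(vec![1, 2, 3]));
    assert_eq!(to_owned_secret(None), None);
}
```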
@@ -571,7 +591,7 @@ pub fn save_session_vector_clock(
    )?;

    // Insert current clock state
    for (node_id, &counter) in &clock.clocks {
    for (node_id, &counter) in &clock.timestamps {
        tx.execute(
            "INSERT INTO vector_clock (session_id, node_id, counter, updated_at)
             VALUES (?1, ?2, ?3, ?4)",
@@ -606,13 +626,486 @@ pub fn load_session_vector_clock(
    for row in rows {
        let (node_id_str, counter) = row?;
        if let Ok(node_id) = uuid::Uuid::parse_str(&node_id_str) {
            clock.clocks.insert(node_id, counter as u64);
            clock.timestamps.insert(node_id, counter as u64);
        }
    }

    Ok(clock)
}

/// Loaded entity data from database
#[derive(Debug)]
pub struct LoadedEntity {
    pub id: uuid::Uuid,
    pub entity_type: String,
    pub created_at: chrono::DateTime<chrono::Utc>,
    pub updated_at: chrono::DateTime<chrono::Utc>,
    pub components: Vec<LoadedComponent>,
}

/// Loaded component data from database
#[derive(Debug)]
pub struct LoadedComponent {
    pub component_type: String,
    pub data: bytes::Bytes,
}
/// Load all components for a single entity from the database
pub fn load_entity_components(
    conn: &Connection,
    entity_id: uuid::Uuid,
) -> Result<Vec<LoadedComponent>> {
    let mut stmt = conn.prepare(
        "SELECT component_type, data
         FROM components
         WHERE entity_id = ?1",
    )?;

    let components: Vec<LoadedComponent> = stmt
        .query_map([entity_id.as_bytes()], |row| {
            let data_cow: std::borrow::Cow<'_, [u8]> = row.get(1)?;
            Ok(LoadedComponent {
                component_type: row.get(0)?,
                data: bytes::Bytes::copy_from_slice(&data_cow),
            })
        })?
        .collect::<std::result::Result<Vec<_>, _>>()?;

    Ok(components)
}

/// Load a single entity by network ID from the database
///
/// Returns None if the entity doesn't exist.
pub fn load_entity_by_network_id(
    conn: &Connection,
    network_id: uuid::Uuid,
) -> Result<Option<LoadedEntity>> {
    // Load entity metadata
    let entity_data = conn
        .query_row(
            "SELECT id, entity_type, created_at, updated_at
             FROM entities
             WHERE id = ?1",
            [network_id.as_bytes()],
            |row| {
                let id_bytes: std::borrow::Cow<'_, [u8]> = row.get(0)?;
                let mut id_array = [0u8; 16];
                id_array.copy_from_slice(&id_bytes);
                let id = uuid::Uuid::from_bytes(id_array);

                Ok((
                    id,
                    row.get::<_, String>(1)?,
                    row.get::<_, i64>(2)?,
                    row.get::<_, i64>(3)?,
                ))
            },
        )
        .optional()?;

    let Some((id, entity_type, created_at_ts, updated_at_ts)) = entity_data else {
        return Ok(None);
    };

    // Load all components for this entity
    let components = load_entity_components(conn, id)?;

    Ok(Some(LoadedEntity {
        id,
        entity_type,
        created_at: chrono::DateTime::from_timestamp(created_at_ts, 0)
            .unwrap_or_else(chrono::Utc::now),
        updated_at: chrono::DateTime::from_timestamp(updated_at_ts, 0)
            .unwrap_or_else(chrono::Utc::now),
        components,
    }))
}
/// Load all entities from the database
///
/// This loads all entity metadata and their components.
/// Used during startup to rehydrate the game state.
pub fn load_all_entities(conn: &Connection) -> Result<Vec<LoadedEntity>> {
    let mut stmt = conn.prepare(
        "SELECT id, entity_type, created_at, updated_at
         FROM entities
         ORDER BY created_at ASC",
    )?;

    let entity_rows = stmt.query_map([], |row| {
        let id_bytes: std::borrow::Cow<'_, [u8]> = row.get(0)?;
        let mut id_array = [0u8; 16];
        id_array.copy_from_slice(&id_bytes);
        let id = uuid::Uuid::from_bytes(id_array);

        Ok((
            id,
            row.get::<_, String>(1)?,
            row.get::<_, i64>(2)?,
            row.get::<_, i64>(3)?,
        ))
    })?;

    let mut entities = Vec::new();

    for row in entity_rows {
        let (id, entity_type, created_at_ts, updated_at_ts) = row?;

        // Load all components for this entity
        let components = load_entity_components(conn, id)?;

        entities.push(LoadedEntity {
            id,
            entity_type,
            created_at: chrono::DateTime::from_timestamp(created_at_ts, 0)
                .unwrap_or_else(chrono::Utc::now),
            updated_at: chrono::DateTime::from_timestamp(updated_at_ts, 0)
                .unwrap_or_else(chrono::Utc::now),
            components,
        });
    }

    Ok(entities)
}
/// Load entities by entity type from the database
///
/// Returns all entities matching the specified entity_type.
pub fn load_entities_by_type(conn: &Connection, entity_type: &str) -> Result<Vec<LoadedEntity>> {
    let mut stmt = conn.prepare(
        "SELECT id, entity_type, created_at, updated_at
         FROM entities
         WHERE entity_type = ?1
         ORDER BY created_at ASC",
    )?;

    let entity_rows = stmt.query_map([entity_type], |row| {
        let id_bytes: std::borrow::Cow<'_, [u8]> = row.get(0)?;
        let mut id_array = [0u8; 16];
        id_array.copy_from_slice(&id_bytes);
        let id = uuid::Uuid::from_bytes(id_array);

        Ok((
            id,
            row.get::<_, String>(1)?,
            row.get::<_, i64>(2)?,
            row.get::<_, i64>(3)?,
        ))
    })?;

    let mut entities = Vec::new();

    for row in entity_rows {
        let (id, entity_type, created_at_ts, updated_at_ts) = row?;

        // Load all components for this entity
        let components = load_entity_components(conn, id)?;

        entities.push(LoadedEntity {
            id,
            entity_type,
            created_at: chrono::DateTime::from_timestamp(created_at_ts, 0)
                .unwrap_or_else(chrono::Utc::now),
            updated_at: chrono::DateTime::from_timestamp(updated_at_ts, 0)
                .unwrap_or_else(chrono::Utc::now),
            components,
        });
    }

    Ok(entities)
}
/// Rehydrate a loaded entity into the Bevy world
///
/// Takes a `LoadedEntity` from the database and spawns it as a new Bevy entity,
/// deserializing and inserting all components using the ComponentTypeRegistry.
///
/// # Arguments
///
/// * `loaded_entity` - The entity data loaded from SQLite
/// * `world` - The Bevy world to spawn the entity into
/// * `component_registry` - Type registry for component deserialization
///
/// # Returns
///
/// The spawned Bevy `Entity` on success
///
/// # Errors
///
/// Returns an error if:
/// - Component deserialization fails
/// - Component type is not registered
/// - Component insertion fails
pub fn rehydrate_entity(
    loaded_entity: LoadedEntity,
    world: &mut bevy::prelude::World,
    component_registry: &crate::persistence::ComponentTypeRegistry,
) -> Result<bevy::prelude::Entity> {
    use bevy::prelude::*;

    use crate::networking::NetworkedEntity;

    // Spawn a new entity
    let entity = world.spawn_empty().id();

    info!(
        "Rehydrating entity {:?} with type {} and {} components",
        loaded_entity.id,
        loaded_entity.entity_type,
        loaded_entity.components.len()
    );

    // Deserialize and insert each component
    for component in &loaded_entity.components {
        // Get deserialization function for this component type
        let deserialize_fn = component_registry
            .get_deserialize_fn_by_path(&component.component_type)
            .ok_or_else(|| {
                PersistenceError::Deserialization(format!(
                    "No deserialize function registered for component type: {}",
                    component.component_type
                ))
            })?;

        // Get insert function for this component type
        let insert_fn = component_registry
            .get_insert_fn_by_path(&component.component_type)
            .ok_or_else(|| {
                PersistenceError::Deserialization(format!(
                    "No insert function registered for component type: {}",
                    component.component_type
                ))
            })?;

        // Deserialize the component from bytes
        let deserialized = deserialize_fn(&component.data).map_err(|e| {
            PersistenceError::Deserialization(format!(
                "Failed to deserialize component {}: {}",
                component.component_type, e
            ))
        })?;

        // Insert the component into the entity
        // Get an EntityWorldMut to pass to the insert function
        let mut entity_mut = world.entity_mut(entity);
        insert_fn(&mut entity_mut, deserialized);

        debug!(
            "Inserted component {} into entity {:?}",
            component.component_type, entity
        );
    }

    // Add the NetworkedEntity component with the persisted network_id
    // This ensures the entity maintains its identity across restarts
    world.entity_mut(entity).insert(NetworkedEntity {
        network_id: loaded_entity.id,
        owner_node_id: uuid::Uuid::nil(), // Will be set by network system if needed
    });

    // Add the Persisted marker component
    world
        .entity_mut(entity)
        .insert(crate::persistence::Persisted {
            network_id: loaded_entity.id,
        });

    info!(
        "Successfully rehydrated entity {:?} as Bevy entity {:?}",
        loaded_entity.id, entity
    );

    Ok(entity)
}
/// Rehydrate all entities from the database into the Bevy world
///
/// This function is called during startup to restore the entire persisted
/// state. It loads all entities from SQLite and spawns them into the Bevy world
/// with all their components.
///
/// # Arguments
///
/// * `world` - The Bevy world to spawn entities into
///
/// # Errors
///
/// Returns an error if:
/// - Database connection fails
/// - Entity loading fails
/// - Entity rehydration fails
pub fn rehydrate_all_entities(world: &mut bevy::prelude::World) -> Result<()> {
    use bevy::prelude::*;

    // Get database connection from resource
    let loaded_entities = {
        let db_res = world.resource::<crate::persistence::PersistenceDb>();
        let conn = db_res
            .conn
            .lock()
            .map_err(|e| PersistenceError::Other(format!("Failed to lock database: {}", e)))?;

        // Load all entities from database
        load_all_entities(&conn)?
    };

    info!("Loaded {} entities from database", loaded_entities.len());

    if loaded_entities.is_empty() {
        info!("No entities to rehydrate");
        return Ok(());
    }

    // Get component registry
    let component_registry = {
        let registry_res = world.resource::<crate::persistence::ComponentTypeRegistryResource>();
        registry_res.0
    };

    // Rehydrate each entity
    let mut rehydrated_count = 0;
    let mut failed_count = 0;

    for loaded_entity in loaded_entities {
        match rehydrate_entity(loaded_entity, world, component_registry) {
            | Ok(entity) => {
                rehydrated_count += 1;
                debug!("Rehydrated entity {:?}", entity);
            },
            | Err(e) => {
                failed_count += 1;
                error!("Failed to rehydrate entity: {}", e);
            },
        }
    }

    info!(
        "Entity rehydration complete: {} succeeded, {} failed",
        rehydrated_count, failed_count
    );

    if failed_count > 0 {
        warn!(
            "{} entities failed to rehydrate - check logs for details",
            failed_count
        );
    }

    Ok(())
}
/// Load all tombstones from the database into the TombstoneRegistry
///
/// This function is called during startup to restore deletion tombstones
/// from the database, preventing resurrection of deleted entities after
/// application restart.
///
/// # Arguments
///
/// * `world` - The Bevy world containing the TombstoneRegistry resource
///
/// # Errors
///
/// Returns an error if:
/// - Database connection fails
/// - Tombstone loading fails
/// - Vector clock deserialization fails
pub fn load_tombstones(world: &mut bevy::prelude::World) -> Result<()> {
    use bevy::prelude::*;

    // Get database connection and load tombstones
    let tombstone_rows = {
        let db_res = world.resource::<crate::persistence::PersistenceDb>();
        let conn = db_res
            .conn
            .lock()
            .map_err(|e| PersistenceError::Other(format!("Failed to lock database: {}", e)))?;

        // Load all tombstones from database
        let mut stmt = conn.prepare(
            "SELECT entity_id, deleting_node, deletion_clock, created_at
             FROM tombstones
             ORDER BY created_at ASC",
        )?;

        let rows = stmt.query_map([], |row| {
            let entity_id_bytes: std::borrow::Cow<'_, [u8]> = row.get(0)?;
            let mut entity_id_array = [0u8; 16];
            entity_id_array.copy_from_slice(&entity_id_bytes);
            let entity_id = uuid::Uuid::from_bytes(entity_id_array);

            let deleting_node_str: String = row.get(1)?;
            let deletion_clock_bytes: std::borrow::Cow<'_, [u8]> = row.get(2)?;
            let created_at_ts: i64 = row.get(3)?;

            Ok((
                entity_id,
                deleting_node_str,
                deletion_clock_bytes.to_vec(),
                created_at_ts,
            ))
        })?;

        rows.collect::<std::result::Result<Vec<_>, _>>()?
    };

    info!("Loaded {} tombstones from database", tombstone_rows.len());

    if tombstone_rows.is_empty() {
        info!("No tombstones to restore");
        return Ok(());
    }

    // Restore tombstones into TombstoneRegistry
    let mut loaded_count = 0;
    let mut failed_count = 0;

    {
        let mut tombstone_registry = world.resource_mut::<crate::networking::TombstoneRegistry>();

        for (entity_id, deleting_node_str, deletion_clock_bytes, _created_at_ts) in tombstone_rows {
            // Parse node ID
            let deleting_node = match uuid::Uuid::parse_str(&deleting_node_str) {
                Ok(id) => id,
                Err(e) => {
                    error!("Failed to parse deleting_node UUID for entity {:?}: {}", entity_id, e);
                    failed_count += 1;
                    continue;
                }
            };

            // Deserialize vector clock
            let deletion_clock = match rkyv::from_bytes::<crate::networking::VectorClock, rkyv::rancor::Failure>(&deletion_clock_bytes) {
                Ok(clock) => clock,
                Err(e) => {
                    error!("Failed to deserialize vector clock for tombstone {:?}: {:?}", entity_id, e);
                    failed_count += 1;
                    continue;
                }
            };

            // Record the tombstone in the registry
            tombstone_registry.record_deletion(entity_id, deleting_node, deletion_clock);
            loaded_count += 1;
        }
    }

    info!(
        "Tombstone restoration complete: {} succeeded, {} failed",
        loaded_count, failed_count
    );

    if failed_count > 0 {
        warn!(
            "{} tombstones failed to restore - check logs for details",
            failed_count
        );
    }

    Ok(())
}
#[cfg(test)]
mod tests {
    use super::*;
@@ -656,7 +1149,7 @@ mod tests {
            PersistenceOp::UpsertComponent {
                entity_id,
                component_type: "Transform".to_string(),
                data: vec![1, 2, 3, 4],
                data: bytes::Bytes::from(vec![1, 2, 3, 4]),
            },
        ];

@@ -12,7 +12,7 @@ pub enum PersistenceError {
    Database(rusqlite::Error),

    /// Serialization failed
    Serialization(bincode::Error),
    Serialization(String),

    /// Deserialization failed
    Deserialization(String),
@@ -85,7 +85,6 @@ impl std::error::Error for PersistenceError {
    fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
        match self {
            | Self::Database(err) => Some(err),
            | Self::Serialization(err) => Some(err),
            | Self::Io(err) => Some(err),
            | _ => None,
        }
@@ -99,12 +98,6 @@ impl From<rusqlite::Error> for PersistenceError {
    }
}

impl From<bincode::Error> for PersistenceError {
    fn from(err: bincode::Error) -> Self {
        Self::Serialization(err)
    }
}

impl From<std::io::Error> for PersistenceError {
    fn from(err: std::io::Error) -> Self {
        Self::Io(err)
@@ -29,6 +29,11 @@ pub const MIGRATIONS: &[Migration] = &[
        name: "sessions",
        up: include_str!("migrations/004_sessions.sql"),
    },
    Migration {
        version: 5,
        name: "tombstones",
        up: include_str!("migrations/005_tombstones.sql"),
    },
];

/// Initialize the migrations table

@@ -0,0 +1,13 @@
-- Migration 005: Add tombstones table
-- Stores deletion tombstones to prevent resurrection of deleted entities

CREATE TABLE IF NOT EXISTS tombstones (
    entity_id BLOB PRIMARY KEY,
    deleting_node TEXT NOT NULL,
    deletion_clock BLOB NOT NULL,
    created_at INTEGER NOT NULL
);

-- Index for querying tombstones by session (for future session scoping)
CREATE INDEX IF NOT EXISTS idx_tombstones_created
ON tombstones(created_at DESC);
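The tombstones table above exists so deletions survive restarts: on startup each row is fed back into the in-memory registry, which then rejects late-arriving updates for deleted entities. A std-only sketch of that guard (IDs simplified to `u64`/`u32`; the real code keys on `uuid::Uuid` and stores a rkyv-serialized `VectorClock`, and `is_deleted` is a hypothetical name for the lookup) is:

```rust
use std::collections::HashMap;

// Sketch of the resurrection guard the tombstones table enables.
// record_deletion mirrors the registry call made both on local deletes
// and when tombstones are reloaded at startup.
#[derive(Default)]
struct TombstoneRegistry {
    // entity_id -> (deleting node, logical deletion time)
    deleted: HashMap<u64, (u32, u64)>,
}

impl TombstoneRegistry {
    fn record_deletion(&mut self, entity_id: u64, node: u32, clock: u64) {
        self.deleted.insert(entity_id, (node, clock));
    }

    // An incoming create/update for a tombstoned entity is dropped
    // instead of resurrecting it.
    fn is_deleted(&self, entity_id: u64) -> bool {
        self.deleted.contains_key(&entity_id)
    }
}

fn main() {
    let mut registry = TombstoneRegistry::default();
    registry.record_deletion(42, 1, 7);
    assert!(registry.is_deleted(42));
    assert!(!registry.is_deleted(43));
}
```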
@@ -39,7 +39,9 @@ mod metrics;
|
||||
mod migrations;
|
||||
mod plugin;
|
||||
pub mod reflection;
|
||||
mod registered_components;
|
||||
mod systems;
|
||||
mod type_registry;
|
||||
mod types;
|
||||
|
||||
pub use config::*;
|
||||
@@ -52,4 +54,5 @@ pub use migrations::*;
|
||||
pub use plugin::*;
|
||||
pub use reflection::*;
|
||||
pub use systems::*;
|
||||
pub use type_registry::*;
|
||||
pub use types::*;
|
||||
|
||||
@@ -88,10 +88,16 @@ impl Plugin for PersistencePlugin {
|
||||
.insert_resource(PersistenceMetrics::default())
|
||||
.insert_resource(CheckpointTimer::default())
|
||||
.insert_resource(PersistenceHealth::default())
|
||||
.insert_resource(PendingFlushTasks::default());
|
||||
.insert_resource(PendingFlushTasks::default())
|
||||
.init_resource::<ComponentTypeRegistryResource>();
|
||||
|
||||
// Add startup system
|
||||
app.add_systems(Startup, persistence_startup_system);
|
||||
// Add startup systems
|
||||
// First initialize the database, then rehydrate entities and tombstones
|
||||
app.add_systems(Startup, (
|
||||
persistence_startup_system,
|
||||
rehydrate_entities_system,
|
||||
load_tombstones_system,
|
||||
).chain());
|
||||
|
||||
// Add systems in the appropriate schedule
|
||||
app.add_systems(
|
||||
@@ -158,6 +164,68 @@ fn persistence_startup_system(db: Res<PersistenceDb>, mut metrics: ResMut<Persis
|
||||
}
|
||||
}
|
||||
|
||||
/// Exclusive startup system to rehydrate entities from database
|
||||
///
|
||||
/// This system runs after `persistence_startup_system` and loads all entities
|
||||
/// from SQLite, deserializing and spawning them into the Bevy world with all
|
||||
/// their components.
|
||||
///
|
||||
/// **Important**: Only rehydrates entities when rejoining an existing session.
|
||||
/// New sessions start with 0 entities to avoid loading entities from previous
|
||||
/// sessions.
|
||||
fn rehydrate_entities_system(world: &mut World) {
|
||||
// Check if we're rejoining an existing session
|
||||
let should_rehydrate = {
|
||||
let current_session = world.get_resource::<crate::networking::CurrentSession>();
|
||||
match current_session {
|
||||
Some(session) => {
|
||||
// Only rehydrate if we have a last_known_clock (indicates we're rejoining)
|
||||
let is_rejoin = session.last_known_clock.node_count() > 0;
|
||||
if is_rejoin {
|
||||
info!(
|
||||
"Rejoining session {} - will rehydrate persisted entities",
|
||||
session.session.id.to_code()
|
||||
);
|
||||
} else {
|
||||
info!(
|
||||
"New session {} - starting with 0 entities",
|
||||
session.session.id.to_code()
|
||||
);
|
||||
}
|
||||
is_rejoin
|
||||
}
|
||||
None => {
|
||||
warn!("No CurrentSession found - skipping entity rehydration");
|
||||
false
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
if !should_rehydrate {
|
||||
info!("Skipping entity rehydration for new session");
|
||||
return;
|
||||
}
|
||||
|
||||
if let Err(e) = crate::persistence::database::rehydrate_all_entities(world) {
|
||||
error!("Failed to rehydrate entities from database: {}", e);
|
||||
} else {
|
||||
info!("Successfully rehydrated entities from database");
|
||||
}
|
||||
}
|
||||
|
||||
/// Exclusive startup system to load tombstones from database
|
||||
///
|
||||
/// This system runs after `rehydrate_entities_system` and loads all tombstones
|
||||
/// from SQLite, deserializing them into the TombstoneRegistry to prevent
|
||||
/// resurrection of deleted entities.
|
||||
fn load_tombstones_system(world: &mut World) {
|
||||
if let Err(e) = crate::persistence::database::load_tombstones(world) {
|
||||
error!("Failed to load tombstones from database: {}", e);
|
||||
} else {
|
||||
info!("Successfully loaded tombstones from database");
|
||||
}
|
||||
}

/// System to collect dirty entities using Bevy's change detection
///
/// This system tracks changes to the `Persisted` component. When `Persisted` is
@@ -206,18 +274,17 @@ fn collect_dirty_entities_bevy_system(world: &mut World) {
 
         // Serialize all components on this entity (generic tracking)
         let components = {
-            let type_registry = world.resource::<AppTypeRegistry>().read();
-            let comps = serialize_all_components_from_entity(entity, world, &type_registry);
-            drop(type_registry);
-            comps
+            let type_registry_res = world.resource::<crate::persistence::ComponentTypeRegistryResource>();
+            let type_registry = type_registry_res.0;
+            type_registry.serialize_entity_components(world, entity)
         };
 
         // Add operations for each component
-        for (component_type, data) in components {
+        for (_discriminant, type_path, data) in components {
             // Get mutable access to dirty and mark it
             {
                 let mut dirty = world.resource_mut::<DirtyEntitiesResource>();
-                dirty.mark_dirty(network_id, &component_type);
+                dirty.mark_dirty(network_id, type_path);
             }
 
             // Get mutable access to write_buffer and add the operation
@@ -225,12 +292,12 @@ fn collect_dirty_entities_bevy_system(world: &mut World) {
             let mut write_buffer = world.resource_mut::<WriteBufferResource>();
             if let Err(e) = write_buffer.add(PersistenceOp::UpsertComponent {
                 entity_id: network_id,
-                component_type: component_type.clone(),
+                component_type: type_path.to_string(),
                 data,
             }) {
                 error!(
                     "Failed to add UpsertComponent operation for entity {} component {}: {}",
-                    network_id, component_type, e
+                    network_id, type_path, e
                 );
                 // Continue with other components even if one fails
             }

@@ -1,27 +1,10 @@
-//! Reflection-based component serialization for persistence
+//! DEPRECATED: Reflection-based component serialization
+//! Marker components for the persistence system
 //!
-//! This module provides utilities to serialize and deserialize Bevy components
-//! using reflection, allowing the persistence layer to work with any component
-//! that implements Reflect.
+//! All component serialization now uses #[derive(Synced)] with rkyv.
+//! This module only provides the Persisted marker component.
 
-use bevy::{
-    prelude::*,
-    reflect::{
-        TypeRegistry,
-        serde::{
-            ReflectSerializer,
-            TypedReflectDeserializer,
-            TypedReflectSerializer,
-        },
-    },
-};
-use bincode::Options as _;
-use serde::de::DeserializeSeed;
-
-use crate::persistence::error::{
-    PersistenceError,
-    Result,
-};
+use bevy::prelude::*;
 
 /// Marker component to indicate that an entity should be persisted
 ///
@@ -55,6 +38,8 @@ pub struct Persisted {
     pub network_id: uuid::Uuid,
 }
 
+
+
 impl Persisted {
     pub fn new() -> Self {
         Self {
@@ -67,247 +52,4 @@ impl Persisted {
     }
 }
 
-/// Trait for components that can be persisted
-pub trait Persistable: Component + Reflect {
-    /// Get the type name for this component (used as key in database)
-    fn type_name() -> &'static str {
-        std::any::type_name::<Self>()
-    }
-}
-
-/// Serialize a component using Bevy's reflection system
-///
-/// This converts any component implementing `Reflect` into bytes for storage.
-/// Uses bincode for efficient binary serialization with type information from
-/// the registry to handle polymorphic types correctly.
-///
-/// # Parameters
-/// - `component`: Component to serialize (must implement `Reflect`)
-/// - `type_registry`: Bevy's type registry for reflection metadata
-///
-/// # Returns
-/// - `Ok(Vec<u8>)`: Serialized component data
-/// - `Err`: If serialization fails (e.g., type not properly registered)
-///
-/// # Examples
-/// ```no_run
-/// # use bevy::prelude::*;
-/// # use libmarathon::persistence::*;
-/// # fn example(component: &Transform, registry: &AppTypeRegistry) -> anyhow::Result<()> {
-/// let registry = registry.read();
-/// let bytes = serialize_component(component.as_reflect(), &registry)?;
-/// # Ok(())
-/// # }
-/// ```
-pub fn serialize_component(
-    component: &dyn Reflect,
-    type_registry: &TypeRegistry,
-) -> Result<Vec<u8>> {
-    let serializer = ReflectSerializer::new(component, type_registry);
-    bincode::options()
-        .serialize(&serializer)
-        .map_err(PersistenceError::from)
-}
-
-/// Serialize a component when the type is known (more efficient for bincode)
-///
-/// This uses `TypedReflectSerializer` which doesn't include type path
-/// information, making it compatible with `TypedReflectDeserializer` for binary
-/// formats.
-pub fn serialize_component_typed(
-    component: &dyn Reflect,
-    type_registry: &TypeRegistry,
-) -> Result<Vec<u8>> {
-    let serializer = TypedReflectSerializer::new(component, type_registry);
-    bincode::options()
-        .serialize(&serializer)
-        .map_err(PersistenceError::from)
-}
-
-/// Deserialize a component using Bevy's reflection system
-///
-/// Converts serialized bytes back into a reflected component. The returned
-/// component is boxed and must be downcast to the concrete type for use.
-///
-/// # Parameters
-/// - `bytes`: Serialized component data from [`serialize_component`]
-/// - `type_registry`: Bevy's type registry for reflection metadata
-///
-/// # Returns
-/// - `Ok(Box<dyn PartialReflect>)`: Deserialized component (needs downcasting)
-/// - `Err`: If deserialization fails (e.g., type not registered, data
-///   corruption)
-///
-/// # Examples
-/// ```no_run
-/// # use bevy::prelude::*;
-/// # use libmarathon::persistence::*;
-/// # fn example(bytes: &[u8], registry: &AppTypeRegistry) -> anyhow::Result<()> {
-/// let registry = registry.read();
-/// let reflected = deserialize_component(bytes, &registry)?;
-/// // Downcast to concrete type as needed
-/// # Ok(())
-/// # }
-/// ```
-pub fn deserialize_component(
-    bytes: &[u8],
-    type_registry: &TypeRegistry,
-) -> Result<Box<dyn PartialReflect>> {
-    let mut deserializer = bincode::Deserializer::from_slice(bytes, bincode::options());
-    let reflect_deserializer = bevy::reflect::serde::ReflectDeserializer::new(type_registry);
-
-    reflect_deserializer
-        .deserialize(&mut deserializer)
-        .map_err(|e| PersistenceError::Deserialization(e.to_string()))
-}
-
-/// Deserialize a component when the type is known
-///
-/// Uses `TypedReflectDeserializer` which is more efficient for binary formats
-/// like bincode when the component type is known at deserialization time.
-pub fn deserialize_component_typed(
-    bytes: &[u8],
-    component_type: &str,
-    type_registry: &TypeRegistry,
-) -> Result<Box<dyn PartialReflect>> {
-    let registration = type_registry
-        .get_with_type_path(component_type)
-        .ok_or_else(|| {
-            PersistenceError::Deserialization(format!("Type {} not registered", component_type))
-        })?;
-
-    let mut deserializer = bincode::Deserializer::from_slice(bytes, bincode::options());
-    let reflect_deserializer = TypedReflectDeserializer::new(registration, type_registry);
-
-    reflect_deserializer
-        .deserialize(&mut deserializer)
-        .map_err(|e| PersistenceError::Deserialization(e.to_string()))
-}
-
-/// Serialize a component directly from an entity using its type path
-///
-/// This is a convenience function that combines type lookup, reflection, and
-/// serialization. It's the primary method used by the persistence system to
-/// save component state without knowing the concrete type at compile time.
-///
-/// # Parameters
-/// - `entity`: Bevy entity to read the component from
-/// - `component_type`: Type path string (e.g.,
-///   "bevy_transform::components::Transform")
-/// - `world`: Bevy world containing the entity
-/// - `type_registry`: Bevy's type registry for reflection metadata
-///
-/// # Returns
-/// - `Some(Vec<u8>)`: Serialized component data
-/// - `None`: If entity doesn't have the component or type isn't registered
-///
-/// # Examples
-/// ```no_run
-/// # use bevy::prelude::*;
-/// # use libmarathon::persistence::*;
-/// # fn example(entity: Entity, world: &World, registry: &AppTypeRegistry) -> Option<()> {
-/// let registry = registry.read();
-/// let bytes = serialize_component_from_entity(
-///     entity,
-///     "bevy_transform::components::Transform",
-///     world,
-///     &registry,
-/// )?;
-/// # Some(())
-/// # }
-/// ```
-pub fn serialize_component_from_entity(
-    entity: Entity,
-    component_type: &str,
-    world: &World,
-    type_registry: &TypeRegistry,
-) -> Option<Vec<u8>> {
-    // Get the type registration
-    let registration = type_registry.get_with_type_path(component_type)?;
-
-    // Get the ReflectComponent data
-    let reflect_component = registration.data::<ReflectComponent>()?;
-
-    // Reflect the component from the entity
-    let reflected = reflect_component.reflect(world.entity(entity))?;
-
-    // Serialize it directly
-    serialize_component(reflected, type_registry).ok()
-}
-
-/// Serialize all components from an entity that have reflection data
-///
-/// This iterates over all components on an entity and serializes those that:
-/// - Are registered in the type registry
-/// - Have `ReflectComponent` data (meaning they support reflection)
-/// - Are not the `Persisted` marker component (to avoid redundant storage)
-///
-/// # Parameters
-/// - `entity`: Bevy entity to serialize components from
-/// - `world`: Bevy world containing the entity
-/// - `type_registry`: Bevy's type registry for reflection metadata
-///
-/// # Returns
-/// Vector of tuples containing (component_type_path, serialized_data) for each
-/// component
-pub fn serialize_all_components_from_entity(
-    entity: Entity,
-    world: &World,
-    type_registry: &TypeRegistry,
-) -> Vec<(String, Vec<u8>)> {
-    let mut components = Vec::new();
-
-    // Get the entity reference
-    let entity_ref = world.entity(entity);
-
-    // Iterate over all type registrations
-    for registration in type_registry.iter() {
-        // Skip if no ReflectComponent data (not a component)
-        let Some(reflect_component) = registration.data::<ReflectComponent>() else {
-            continue;
-        };
-
-        // Get the type path for this component
-        let type_path = registration.type_info().type_path();
-
-        // Skip the Persisted marker component itself (we don't need to persist it)
-        if type_path.ends_with("::Persisted") {
-            continue;
-        }
-
-        // Try to reflect this component from the entity
-        if let Some(reflected) = reflect_component.reflect(entity_ref) {
-            // Serialize the component using typed serialization for consistency
-            // This matches the format expected by deserialize_component_typed
-            if let Ok(data) = serialize_component_typed(reflected, type_registry) {
-                components.push((type_path.to_string(), data));
-            }
-        }
-    }
-
-    components
-}
-
-#[cfg(test)]
-mod tests {
-    use super::*;
-
-    #[derive(Component, Reflect, Default)]
-    #[reflect(Component)]
-    struct TestComponent {
-        value: i32,
-    }
-
-    #[test]
-    fn test_component_serialization() -> Result<()> {
-        let mut registry = TypeRegistry::default();
-        registry.register::<TestComponent>();
-
-        let component = TestComponent { value: 42 };
-        let bytes = serialize_component(&component, &registry)?;
-
-        assert!(!bytes.is_empty());
-
-        Ok(())
-    }
-}
+// All component serialization now uses #[derive(Synced)] with rkyv through ComponentTypeRegistry

64 crates/libmarathon/src/persistence/registered_components.rs Normal file
@@ -0,0 +1,64 @@
//! Component registrations for CRDT synchronization
//!
//! This module registers all components that should be synchronized across
//! the network using the inventory-based type registry.
//!
//! # When to use this file vs `#[synced]` attribute
//!
//! **Use `#[synced]` attribute for:**
//! - Your own component types defined in this codebase
//! - Any type you have source access to
//! - Most game components (entities, markers, etc.)
//! - Example: `#[synced] pub struct CubeMarker { ... }`
//!
//! **Use manual `inventory::submit!` here for:**
//! - Third-party types (Bevy's Transform, external crates)
//! - Types that need custom serialization logic
//! - Types where the serialized format differs from in-memory format
//!
//! # Currently registered external types
//!
//! - `Transform` - Bevy's transform component (needs custom rkyv conversion)

use std::any::TypeId;

// Register Transform for synchronization
// We serialize Bevy's Transform but convert to our rkyv-compatible type
inventory::submit! {
    crate::persistence::ComponentMeta {
        type_name: "Transform",
        type_path: "bevy::transform::components::transform::Transform",
        type_id: TypeId::of::<bevy::prelude::Transform>(),

        deserialize_fn: |bytes: &[u8]| -> anyhow::Result<Box<dyn std::any::Any>> {
            let transform: crate::transform::Transform = rkyv::from_bytes::<crate::transform::Transform, rkyv::rancor::Failure>(bytes)?;
            // Convert back to Bevy Transform
            let bevy_transform = bevy::prelude::Transform {
                translation: transform.translation.into(),
                rotation: transform.rotation.into(),
                scale: transform.scale.into(),
            };
            Ok(Box::new(bevy_transform))
        },

        serialize_fn: |world: &bevy::ecs::world::World, entity: bevy::ecs::entity::Entity| -> Option<bytes::Bytes> {
            world.get::<bevy::prelude::Transform>(entity).map(|bevy_transform| {
                // Convert to our rkyv-compatible Transform
                let transform = crate::transform::Transform {
                    translation: bevy_transform.translation.into(),
                    rotation: bevy_transform.rotation.into(),
                    scale: bevy_transform.scale.into(),
                };
                let serialized = rkyv::to_bytes::<rkyv::rancor::Failure>(&transform)
                    .expect("Failed to serialize Transform");
                bytes::Bytes::from(serialized.to_vec())
            })
        },

        insert_fn: |entity_mut: &mut bevy::ecs::world::EntityWorldMut, boxed: Box<dyn std::any::Any>| {
            if let Ok(transform) = boxed.downcast::<bevy::prelude::Transform>() {
                entity_mut.insert(*transform);
            }
        },
    }
}
@@ -479,7 +479,7 @@ mod tests {
         .add(PersistenceOp::UpsertComponent {
             entity_id,
             component_type: "Transform".to_string(),
-            data: vec![1, 2, 3],
+            data: bytes::Bytes::from(vec![1, 2, 3]),
         })
         .unwrap();
 
326 crates/libmarathon/src/persistence/type_registry.rs Normal file
@@ -0,0 +1,326 @@
//! Type registry using rkyv and inventory
//!
//! This module provides a runtime type registry that collects all synced
//! components via the `inventory` crate and assigns them numeric discriminants
//! for efficient serialization.

use std::{
    any::TypeId,
    collections::HashMap,
    sync::OnceLock,
};

use anyhow::Result;

/// Component metadata collected via inventory
pub struct ComponentMeta {
    /// Human-readable type name (e.g., "Health")
    pub type_name: &'static str,

    /// Full type path (e.g., "my_crate::components::Health")
    pub type_path: &'static str,

    /// Rust TypeId for type-safe lookups
    pub type_id: TypeId,

    /// Deserialization function that returns a boxed component
    pub deserialize_fn: fn(&[u8]) -> Result<Box<dyn std::any::Any>>,

    /// Serialization function that reads from an entity (returns None if entity
    /// doesn't have this component)
    pub serialize_fn:
        fn(&bevy::ecs::world::World, bevy::ecs::entity::Entity) -> Option<bytes::Bytes>,

    /// Insert function that takes a boxed component and inserts it into an
    /// entity
    pub insert_fn: fn(&mut bevy::ecs::world::EntityWorldMut, Box<dyn std::any::Any>),
}

// Collect all registered components via inventory
inventory::collect!(ComponentMeta);

/// Runtime component type registry
///
/// Maps TypeId -> numeric discriminant for efficient serialization
pub struct ComponentTypeRegistry {
    /// TypeId to discriminant mapping
    type_to_discriminant: HashMap<TypeId, u16>,

    /// Discriminant to deserialization function
    discriminant_to_deserializer: HashMap<u16, fn(&[u8]) -> Result<Box<dyn std::any::Any>>>,

    /// Discriminant to serialization function
    discriminant_to_serializer: HashMap<
        u16,
        fn(&bevy::ecs::world::World, bevy::ecs::entity::Entity) -> Option<bytes::Bytes>,
    >,

    /// Discriminant to insert function
    discriminant_to_inserter:
        HashMap<u16, fn(&mut bevy::ecs::world::EntityWorldMut, Box<dyn std::any::Any>)>,

    /// Discriminant to type name (for debugging)
    discriminant_to_name: HashMap<u16, &'static str>,

    /// Discriminant to type path (for networking)
    discriminant_to_path: HashMap<u16, &'static str>,

    /// TypeId to type name (for debugging)
    type_to_name: HashMap<TypeId, &'static str>,
}

impl ComponentTypeRegistry {
    /// Initialize the registry from inventory-collected components
    ///
    /// This should be called once at application startup.
    pub fn init() -> Self {
        let mut type_to_discriminant = HashMap::new();
        let mut discriminant_to_deserializer = HashMap::new();
        let mut discriminant_to_serializer = HashMap::new();
        let mut discriminant_to_inserter = HashMap::new();
        let mut discriminant_to_name = HashMap::new();
        let mut discriminant_to_path = HashMap::new();
        let mut type_to_name = HashMap::new();

        // Collect all registered components
        let mut components: Vec<&ComponentMeta> = inventory::iter::<ComponentMeta>().collect();

        // Sort by TypeId for deterministic discriminants
        components.sort_by_key(|c| c.type_id);

        // Assign discriminants
        for (discriminant, meta) in components.iter().enumerate() {
            let discriminant = discriminant as u16;
            type_to_discriminant.insert(meta.type_id, discriminant);
            discriminant_to_deserializer.insert(discriminant, meta.deserialize_fn);
            discriminant_to_serializer.insert(discriminant, meta.serialize_fn);
            discriminant_to_inserter.insert(discriminant, meta.insert_fn);
            discriminant_to_name.insert(discriminant, meta.type_name);
            discriminant_to_path.insert(discriminant, meta.type_path);
            type_to_name.insert(meta.type_id, meta.type_name);

            tracing::debug!(
                type_name = meta.type_name,
                type_path = meta.type_path,
                discriminant = discriminant,
                "Registered component type"
            );
        }

        tracing::info!(
            count = components.len(),
            "Initialized component type registry"
        );

        Self {
            type_to_discriminant,
            discriminant_to_deserializer,
            discriminant_to_serializer,
            discriminant_to_inserter,
            discriminant_to_name,
            discriminant_to_path,
            type_to_name,
        }
    }

    /// Get the discriminant for a component type
    pub fn get_discriminant(&self, type_id: TypeId) -> Option<u16> {
        self.type_to_discriminant.get(&type_id).copied()
    }

    /// Deserialize a component from bytes with its discriminant
    pub fn deserialize(&self, discriminant: u16, bytes: &[u8]) -> Result<Box<dyn std::any::Any>> {
        let deserialize_fn = self
            .discriminant_to_deserializer
            .get(&discriminant)
            .ok_or_else(|| {
                anyhow::anyhow!(
                    "Unknown component discriminant: {} (available: {:?})",
                    discriminant,
                    self.discriminant_to_name
                )
            })?;

        deserialize_fn(bytes)
    }

    /// Get the insert function for a discriminant
    pub fn get_insert_fn(
        &self,
        discriminant: u16,
    ) -> Option<fn(&mut bevy::ecs::world::EntityWorldMut, Box<dyn std::any::Any>)> {
        self.discriminant_to_inserter.get(&discriminant).copied()
    }

    /// Get type name for a discriminant (for debugging)
    pub fn get_type_name(&self, discriminant: u16) -> Option<&'static str> {
        self.discriminant_to_name.get(&discriminant).copied()
    }

    /// Get the deserialize function for a discriminant
    pub fn get_deserialize_fn(
        &self,
        discriminant: u16,
    ) -> Option<fn(&[u8]) -> Result<Box<dyn std::any::Any>>> {
        self.discriminant_to_deserializer
            .get(&discriminant)
            .copied()
    }

    /// Get type path for a discriminant
    pub fn get_type_path(&self, discriminant: u16) -> Option<&'static str> {
        self.discriminant_to_path.get(&discriminant).copied()
    }

    /// Get the deserialize function by type path
    pub fn get_deserialize_fn_by_path(
        &self,
        type_path: &str,
    ) -> Option<fn(&[u8]) -> Result<Box<dyn std::any::Any>>> {
        // Linear search through discriminant_to_path to find matching type_path
        for (discriminant, path) in &self.discriminant_to_path {
            if *path == type_path {
                return self.get_deserialize_fn(*discriminant);
            }
        }
        None
    }

    /// Get the insert function by type path
    pub fn get_insert_fn_by_path(
        &self,
        type_path: &str,
    ) -> Option<fn(&mut bevy::ecs::world::EntityWorldMut, Box<dyn std::any::Any>)> {
        // Linear search through discriminant_to_path to find matching type_path
        for (discriminant, path) in &self.discriminant_to_path {
            if *path == type_path {
                return self.get_insert_fn(*discriminant);
            }
        }
        None
    }

    /// Get the number of registered component types
    pub fn len(&self) -> usize {
        self.type_to_discriminant.len()
    }

    /// Check if the registry is empty
    pub fn is_empty(&self) -> bool {
        self.type_to_discriminant.is_empty()
    }

    /// Serialize all registered components from an entity
    ///
    /// Returns Vec<(discriminant, type_path, serialized_bytes)> for all
    /// components that exist on the entity.
    pub fn serialize_entity_components(
        &self,
        world: &bevy::ecs::world::World,
        entity: bevy::ecs::entity::Entity,
    ) -> Vec<(u16, &'static str, bytes::Bytes)> {
        let mut results = Vec::new();

        for (&discriminant, &serialize_fn) in &self.discriminant_to_serializer {
            if let Some(bytes) = serialize_fn(world, entity) {
                if let Some(&type_path) = self.discriminant_to_path.get(&discriminant) {
                    results.push((discriminant, type_path, bytes));
                }
            }
        }

        results
    }

    /// Get all registered discriminants (for iteration)
    pub fn all_discriminants(&self) -> impl Iterator<Item = u16> + '_ {
        self.discriminant_to_name.keys().copied()
    }
}

/// Global component type registry instance
static REGISTRY: OnceLock<ComponentTypeRegistry> = OnceLock::new();

/// Get the global component type registry
///
/// Initializes the registry on first access.
pub fn component_registry() -> &'static ComponentTypeRegistry {
    REGISTRY.get_or_init(ComponentTypeRegistry::init)
}

/// Bevy resource wrapper for ComponentTypeRegistry
///
/// Use this in Bevy systems to access the global component registry.
/// Insert this resource at app startup.
#[derive(bevy::prelude::Resource)]
pub struct ComponentTypeRegistryResource(pub &'static ComponentTypeRegistry);

impl Default for ComponentTypeRegistryResource {
    fn default() -> Self {
        Self(component_registry())
    }
}

/// Macro to register a component type with the inventory system
///
/// This generates the necessary serialize/deserialize functions and submits
/// the ComponentMeta to inventory for runtime registration.
///
/// # Example
///
/// ```ignore
/// use bevy::prelude::*;
/// register_component!(Transform, "bevy::transform::components::Transform");
/// ```
#[macro_export]
macro_rules! register_component {
    ($component_type:ty, $type_path:expr) => {
        // Submit component metadata to inventory
        inventory::submit! {
            $crate::persistence::ComponentMeta {
                type_name: stringify!($component_type),
                type_path: $type_path,
                type_id: std::any::TypeId::of::<$component_type>(),

                deserialize_fn: |bytes: &[u8]| -> anyhow::Result<Box<dyn std::any::Any>> {
                    let component: $component_type = rkyv::from_bytes(bytes)?;
                    Ok(Box::new(component))
                },

                serialize_fn: |world: &bevy::ecs::world::World, entity: bevy::ecs::entity::Entity| -> Option<bytes::Bytes> {
                    world.get::<$component_type>(entity).map(|component| {
                        let serialized = rkyv::to_bytes::<rkyv::rancor::Failure>(component)
                            .expect("Failed to serialize component");
                        bytes::Bytes::from(serialized.to_vec())
                    })
                },

                insert_fn: |entity_mut: &mut bevy::ecs::world::EntityWorldMut, boxed: Box<dyn std::any::Any>| {
                    if let Ok(component) = boxed.downcast::<$component_type>() {
                        entity_mut.insert(*component);
                    }
                },
            }
        }
    };
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_registry_initialization() {
        let registry = ComponentTypeRegistry::init();
        // Should have at least the components defined in the codebase
        assert!(registry.len() > 0 || registry.is_empty()); // May be empty in unit tests
    }

    #[test]
    fn test_global_registry() {
        let registry = component_registry();
        // Should be initialized
        assert!(registry.len() >= 0);
    }
}
@@ -105,14 +105,14 @@ pub enum PersistenceOp {
     UpsertComponent {
         entity_id: EntityId,
         component_type: String,
-        data: Vec<u8>,
+        data: bytes::Bytes,
     },
 
     /// Log an operation for CRDT sync
     LogOperation {
         node_id: NodeId,
         sequence: u64,
-        operation: Vec<u8>,
+        operation: bytes::Bytes,
     },
 
     /// Update vector clock for causality tracking
@@ -126,6 +126,13 @@ pub enum PersistenceOp {
         entity_id: EntityId,
         component_type: String,
     },
+
+    /// Record a tombstone for a deleted entity
+    RecordTombstone {
+        entity_id: EntityId,
+        deleting_node: NodeId,
+        deletion_clock: bytes::Bytes, // Serialized VectorClock
+    },
 }
 
 impl PersistenceOp {
@@ -473,7 +480,7 @@ mod tests {
     buffer.add(PersistenceOp::UpsertComponent {
         entity_id,
         component_type: "Transform".to_string(),
-        data: vec![1, 2, 3],
+        data: bytes::Bytes::from(vec![1, 2, 3]),
    })?;
     assert_eq!(buffer.len(), 1);
 
@@ -481,7 +488,7 @@ mod tests {
     buffer.add(PersistenceOp::UpsertComponent {
         entity_id,
         component_type: "Transform".to_string(),
-        data: vec![4, 5, 6],
+        data: bytes::Bytes::from(vec![4, 5, 6]),
     })?;
     assert_eq!(buffer.len(), 1);
 
@@ -489,7 +496,7 @@ mod tests {
     let ops = buffer.take_operations();
     assert_eq!(ops.len(), 1);
     if let PersistenceOp::UpsertComponent { data, .. } = &ops[0] {
-        assert_eq!(data, &vec![4, 5, 6]);
+        assert_eq!(data.as_ref(), &[4, 5, 6]);
     } else {
         panic!("Expected UpsertComponent");
     }
@@ -506,7 +513,7 @@ mod tests {
         .add(PersistenceOp::UpsertComponent {
             entity_id,
             component_type: "Transform".to_string(),
-            data: vec![1, 2, 3],
+            data: bytes::Bytes::from(vec![1, 2, 3]),
         })
         .expect("Should successfully add Transform");
 
@@ -515,7 +522,7 @@ mod tests {
         .add(PersistenceOp::UpsertComponent {
             entity_id,
             component_type: "Velocity".to_string(),
-            data: vec![4, 5, 6],
+            data: bytes::Bytes::from(vec![4, 5, 6]),
         })
         .expect("Should successfully add Velocity");
 
@@ -652,7 +659,7 @@ mod tests {
     let log_op = PersistenceOp::LogOperation {
         node_id,
         sequence: 1,
-        operation: vec![1, 2, 3],
+        operation: bytes::Bytes::from(vec![1, 2, 3]),
     };
 
     let vector_clock_op = PersistenceOp::UpdateVectorClock {
@@ -689,7 +696,7 @@ mod tests {
         buffer.add(PersistenceOp::UpsertComponent {
             entity_id,
             component_type: "Transform".to_string(),
-            data: vec![i],
+            data: bytes::Bytes::from(vec![i]),
         })?;
     }
 
@@ -700,7 +707,7 @@ mod tests {
     let ops = buffer.take_operations();
     assert_eq!(ops.len(), 1);
     if let PersistenceOp::UpsertComponent { data, .. } = &ops[0] {
-        assert_eq!(data, &vec![9]);
+        assert_eq!(data.as_ref(), &[9]);
     } else {
         panic!("Expected UpsertComponent");
     }
@@ -709,7 +716,7 @@ mod tests {
     buffer.add(PersistenceOp::UpsertComponent {
         entity_id,
         component_type: "Transform".to_string(),
-        data: vec![100],
+        data: bytes::Bytes::from(vec![100]),
     })?;
 
     assert_eq!(buffer.len(), 1);
@@ -726,13 +733,13 @@ mod tests {
     buffer.add(PersistenceOp::UpsertComponent {
         entity_id: entity1,
         component_type: "Transform".to_string(),
-        data: vec![1],
+        data: bytes::Bytes::from(vec![1]),
     })?;
 
     buffer.add(PersistenceOp::UpsertComponent {
         entity_id: entity2,
         component_type: "Transform".to_string(),
-        data: vec![2],
+        data: bytes::Bytes::from(vec![2]),
     })?;
 
     // Should have 2 operations (different entities)
@@ -742,7 +749,7 @@ mod tests {
     buffer.add(PersistenceOp::UpsertComponent {
         entity_id: entity1,
         component_type: "Transform".to_string(),
-        data: vec![3],
+        data: bytes::Bytes::from(vec![3]),
     })?;
 
     // Still 2 operations (first was replaced in-place)
@@ -761,7 +768,7 @@ mod tests {
|
||||
.add_with_default_priority(PersistenceOp::LogOperation {
|
||||
node_id,
|
||||
sequence: 1,
|
||||
operation: vec![1, 2, 3],
|
||||
operation: bytes::Bytes::from(vec![1, 2, 3]),
|
||||
})
|
||||
.unwrap();
|
||||
|
||||
@@ -776,7 +783,7 @@ mod tests {
|
||||
let entity_id = EntityId::new_v4();
|
||||
|
||||
// Create 11MB component (exceeds 10MB limit)
|
||||
let oversized_data = vec![0u8; 11 * 1024 * 1024];
|
||||
let oversized_data = bytes::Bytes::from(vec![0u8; 11 * 1024 * 1024]);
|
||||
|
||||
let result = buffer.add(PersistenceOp::UpsertComponent {
|
||||
entity_id,
|
||||
@@ -809,7 +816,7 @@ mod tests {
|
||||
let entity_id = EntityId::new_v4();
|
||||
|
||||
// Create exactly 10MB component (at limit)
|
||||
let max_data = vec![0u8; 10 * 1024 * 1024];
|
||||
let max_data = bytes::Bytes::from(vec![0u8; 10 * 1024 * 1024]);
|
||||
|
||||
let result = buffer.add(PersistenceOp::UpsertComponent {
|
||||
entity_id,
|
||||
@@ -824,7 +831,7 @@ mod tests {
|
||||
#[test]
|
||||
fn test_oversized_operation_returns_error() {
|
||||
let mut buffer = WriteBuffer::new(100);
|
||||
let oversized_op = vec![0u8; 11 * 1024 * 1024];
|
||||
let oversized_op = bytes::Bytes::from(vec![0u8; 11 * 1024 * 1024]);
|
||||
|
||||
let result = buffer.add(PersistenceOp::LogOperation {
|
||||
node_id: uuid::Uuid::new_v4(),
|
||||
@@ -860,7 +867,7 @@ mod tests {
|
||||
|
||||
for size in sizes {
|
||||
let mut buffer = WriteBuffer::new(100);
|
||||
let data = vec![0u8; size];
|
||||
let data = bytes::Bytes::from(vec![0u8; size]);
|
||||
|
||||
let result = buffer.add(PersistenceOp::UpsertComponent {
|
||||
entity_id: uuid::Uuid::new_v4(),
|
||||
|
||||
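The hunks above migrate buffered payloads from `Vec<u8>` to `bytes::Bytes`. The motivation can be sketched with the standard library's `Arc<[u8]>`, which shares the key property of `Bytes`: cloning shares one allocation instead of copying it. This is an illustrative analogy, not code from the repository:

```rust
use std::sync::Arc;

fn main() {
    // Cloning a Vec<u8> copies the entire allocation:
    let v: Vec<u8> = vec![0u8; 10 * 1024 * 1024];
    let copied = v.clone(); // allocates and memcpys 10 MB
    assert_eq!(copied.len(), v.len());

    // Cloning an Arc<[u8]> (like bytes::Bytes) only bumps a refcount:
    let shared: Arc<[u8]> = Arc::from(v.into_boxed_slice());
    let alias = Arc::clone(&shared); // O(1), no copy
    assert!(Arc::ptr_eq(&shared, &alias)); // both point at the same buffer
}
```

This matters for a write buffer that may re-queue or fan out the same 10 MB component payload several times before flushing.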
@@ -77,7 +77,7 @@ impl ApplicationHandler for DesktopApp {
 ///
 /// This takes ownership of the main thread and runs the winit event loop.
 /// The update_fn is called each frame to update game logic.
-pub fn run(mut update_fn: impl FnMut() + 'static) -> Result<(), Box<dyn std::error::Error>> {
+pub fn run(_update_fn: impl FnMut() + 'static) -> Result<(), Box<dyn std::error::Error>> {
     let event_loop = EventLoop::new()?;
     event_loop.set_control_flow(ControlFlow::Poll); // Run as fast as possible
 
@@ -51,31 +51,24 @@
 //! Note: Battery-aware adaptive frame limiting is planned for production use.
 
 use bevy::prelude::*;
 use bevy::app::AppExit;
 use bevy::input::{
     ButtonInput,
     mouse::MouseButton as BevyMouseButton,
     keyboard::KeyCode as BevyKeyCode,
     touch::{Touches, TouchInput},
     gestures::*,
     keyboard::KeyboardInput,
     mouse::{MouseButtonInput, MouseMotion, MouseWheel},
 };
 use bevy::window::{
     PrimaryWindow, WindowCreated, WindowResized, WindowScaleFactorChanged, WindowClosing,
     WindowResolution, WindowMode, WindowPosition, WindowEvent as BevyWindowEvent,
     RawHandleWrapper, WindowWrapper,
     CursorMoved, CursorEntered, CursorLeft,
     WindowFocused, WindowOccluded, WindowMoved, WindowThemeChanged, WindowDestroyed,
     FileDragAndDrop, Ime, WindowCloseRequested,
 };
 use bevy::ecs::message::Messages;
-use crate::platform::input::{InputEvent, InputEventBuffer};
+use crate::platform::input::InputEventBuffer;
 use super::{push_window_event, push_device_event, drain_as_input_events, set_scale_factor};
 use std::sync::Arc;
 use winit::application::ApplicationHandler;
-use winit::event::{Event as WinitEvent, WindowEvent as WinitWindowEvent};
-use winit::event_loop::{ActiveEventLoop, ControlFlow, EventLoop, EventLoopProxy};
+use winit::event::WindowEvent as WinitWindowEvent;
+use winit::event_loop::{ActiveEventLoop, ControlFlow, EventLoop};
 use winit::window::{Window as WinitWindow, WindowId, WindowAttributes};
 
 /// Application handler state machine
@@ -125,6 +118,12 @@ fn send_window_closing(app: &mut App, window: Entity) {
         .write(WindowClosing { window });
 }
 
+fn send_app_exit(app: &mut App) {
+    app.world_mut()
+        .resource_mut::<Messages<bevy::app::AppExit>>()
+        .write(bevy::app::AppExit::Success);
+}
+
 impl AppHandler {
     /// Initialize the window and transition to Running state.
     ///
@@ -179,8 +178,8 @@ impl AppHandler {
 
         // Create window entity with all required components (use logical size)
         // Convert physical pixels to logical pixels using proper floating-point division
-        let logical_width = (physical_size.width as f64 / scale_factor) as f32;
-        let logical_height = (physical_size.height as f64 / scale_factor) as f32;
+        let logical_width = (physical_size.width as f64 / scale_factor) as u32;
+        let logical_height = (physical_size.height as f64 / scale_factor) as u32;
 
         let mut window = bevy::window::Window {
             title: "Marathon".to_string(),
@@ -240,7 +239,10 @@ impl AppHandler {
         // Send WindowClosing event
         send_window_closing(bevy_app, *bevy_window_entity);
 
-        // Run one final update to process close event
+        // Send AppExit event to trigger cleanup systems
+        send_app_exit(bevy_app);
+
+        // Run one final update to process close events and cleanup
         bevy_app.update();
 
         // Don't call finish/cleanup - let Bevy's AppExit handle it
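The cast change above (`as f32` to `as u32`) truncates the physical-to-logical conversion to whole pixels. The conversion itself can be sketched standalone (hypothetical `to_logical` helper, not repository code):

```rust
// Hypothetical helper illustrating the conversion in the hunk above:
// physical pixels divided by the display scale factor, truncated to u32.
fn to_logical(physical: u32, scale_factor: f64) -> u32 {
    (physical as f64 / scale_factor) as u32
}

fn main() {
    // 2x (Retina) display: 2560 physical -> 1280 logical
    assert_eq!(to_logical(2560, 2.0), 1280);
    // Fractional scale factors truncate toward zero: 1366 / 1.25 = 1092.8
    assert_eq!(to_logical(1366, 1.25), 1092);
}
```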
@@ -51,11 +51,11 @@ use glam::Vec2;
 use std::sync::{Mutex, OnceLock};
 use std::path::PathBuf;
 use winit::event::{
-    DeviceEvent, ElementState, MouseButton as WinitMouseButton, MouseScrollDelta, WindowEvent,
-    Touch as WinitTouch, Force as WinitForce, TouchPhase as WinitTouchPhase,
+    ElementState, MouseButton as WinitMouseButton, MouseScrollDelta, WindowEvent,
+    Force as WinitForce, TouchPhase as WinitTouchPhase,
     Ime as WinitIme,
 };
-use winit::keyboard::{PhysicalKey, Key as LogicalKey, NamedKey};
+use winit::keyboard::{PhysicalKey, Key as LogicalKey};
 use winit::window::Theme as WinitTheme;
 
 /// Raw winit input events before conversion
@@ -437,25 +437,18 @@ pub fn push_device_event(event: &winit::event::DeviceEvent) {
     }
 }
 
 /// Drain all buffered winit events and convert to InputEvents
 ///
 /// Call this from your engine's input processing to consume events.
 /// This uses a lock-free channel so it never blocks and can't silently drop events.
 pub fn drain_as_input_events() -> Vec<InputEvent> {
     let (_, receiver) = get_event_channel();
 
-    // Drain all events from the channel
+    // Drain all events from the channel and convert to InputEvents
+    // Each raw event may generate multiple InputEvents (e.g., Keyboard + Text)
     receiver
         .try_iter()
-        .filter_map(raw_to_input_event)
+        .flat_map(raw_to_input_event)
         .collect()
 }
 
 /// Convert a raw winit event to an engine InputEvent
 ///
 /// Only input-related events are converted. Other events (gestures, file drop, IME, etc.)
 /// return None and should be handled by the Bevy event system directly.
-fn raw_to_input_event(event: RawWinitEvent) -> Option<InputEvent> {
+fn raw_to_input_event(event: RawWinitEvent) -> Vec<InputEvent> {
     match event {
         // === MOUSE INPUT ===
         RawWinitEvent::MouseButton { button, state, position } => {
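The switch from `filter_map` over `Option<InputEvent>` to `flat_map` over `Vec<InputEvent>` lets one raw event expand to zero, one, or many output events (a key press that also yields a `Text` event is the motivating case). A minimal standalone sketch with a hypothetical `expand` function, not repository code:

```rust
// Hypothetical expansion: 0 behaves like returning `None`, even numbers
// fan out into two events (like a key press that also produces Text).
fn expand(x: i32) -> Vec<i32> {
    match x {
        0 => vec![],
        n if n % 2 == 0 => vec![n, n],
        n => vec![n],
    }
}

fn main() {
    let out: Vec<i32> = [0, 1, 2, 3].into_iter().flat_map(expand).collect();
    assert_eq!(out, vec![1, 2, 2, 3]);
}
```

`flat_map` subsumes `filter_map`: an empty `Vec` drops the event exactly as `None` did, so no call site loses behavior.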
@@ -464,55 +457,70 @@ fn raw_to_input_event(event: RawWinitEvent) -> Vec<InputEvent> {
                 ElementState::Released => TouchPhase::Ended,
             };
 
-            Some(InputEvent::Mouse {
+            vec![InputEvent::Mouse {
                 pos: position,
                 button,
                 phase,
-            })
+            }]
         }
 
         RawWinitEvent::CursorMoved { position } => {
             // Check if any button is pressed
-            let input_state = INPUT_STATE.lock().ok()?;
+            let Some(input_state) = INPUT_STATE.lock().ok() else {
+                return vec![];
+            };
 
             if input_state.left_pressed {
-                Some(InputEvent::Mouse {
+                vec![InputEvent::Mouse {
                     pos: position,
                     button: MouseButton::Left,
                     phase: TouchPhase::Moved,
-                })
+                }]
             } else if input_state.right_pressed {
-                Some(InputEvent::Mouse {
+                vec![InputEvent::Mouse {
                     pos: position,
                     button: MouseButton::Right,
                     phase: TouchPhase::Moved,
-                })
+                }]
             } else if input_state.middle_pressed {
-                Some(InputEvent::Mouse {
+                vec![InputEvent::Mouse {
                     pos: position,
                     button: MouseButton::Middle,
                     phase: TouchPhase::Moved,
-                })
+                }]
             } else {
                 // No button pressed - hover tracking
-                Some(InputEvent::MouseMove { pos: position })
+                vec![InputEvent::MouseMove { pos: position }]
             }
         }
 
         RawWinitEvent::MouseWheel { delta, position } => {
-            Some(InputEvent::MouseWheel {
+            vec![InputEvent::MouseWheel {
                 delta,
                 pos: position,
-            })
+            }]
         }
 
         // === KEYBOARD INPUT ===
-        RawWinitEvent::Keyboard { key, state, modifiers, .. } => {
-            Some(InputEvent::Keyboard {
+        RawWinitEvent::Keyboard { key, state, modifiers, text, .. } => {
+            let mut events = vec![InputEvent::Keyboard {
                 key,
                 pressed: state == ElementState::Pressed,
                 modifiers,
-            })
+            }];
+
+            // If there's text input and the key was pressed, send a Text event too
+            // But only for printable characters, not control characters (backspace, etc.)
+            if state == ElementState::Pressed {
+                if let Some(text) = text {
+                    // Filter out control characters - only send printable text
+                    if !text.is_empty() && text.chars().all(|c| !c.is_control()) {
+                        events.push(InputEvent::Text { text });
+                    }
+                }
+            }
+
+            events
         }
 
         // === TOUCH INPUT (APPLE PENCIL!) ===
@@ -543,55 +551,55 @@ fn raw_to_input_event(event: RawWinitEvent) -> Vec<InputEvent> {
                         0.0, // Azimuth not provided by winit Force::Calibrated
                     );
 
-                    Some(InputEvent::Stylus {
+                    vec![InputEvent::Stylus {
                         pos: position,
                         pressure,
                         tilt,
                         phase: touch_phase,
                         timestamp: 0.0, // TODO: Get actual timestamp from winit when available
-                    })
+                    }]
                 }
                 Some(WinitForce::Normalized(pressure)) => {
                     // Normalized pressure (0.0-1.0), likely a stylus
-                    Some(InputEvent::Stylus {
+                    vec![InputEvent::Stylus {
                         pos: position,
                         pressure: pressure as f32,
                         tilt: Vec2::ZERO, // No tilt data in normalized mode
                         phase: touch_phase,
                         timestamp: 0.0,
-                    })
+                    }]
                 }
                 None => {
                     // No force data - regular touch (finger)
-                    Some(InputEvent::Touch {
+                    vec![InputEvent::Touch {
                         pos: position,
                         phase: touch_phase,
                         id,
-                    })
+                    }]
                 }
             }
         }
 
         // === GESTURE INPUT ===
         RawWinitEvent::PinchGesture { delta } => {
-            Some(InputEvent::PinchGesture { delta })
+            vec![InputEvent::PinchGesture { delta }]
         }
 
         RawWinitEvent::RotationGesture { delta } => {
-            Some(InputEvent::RotationGesture { delta })
+            vec![InputEvent::RotationGesture { delta }]
         }
 
         RawWinitEvent::PanGesture { delta } => {
-            Some(InputEvent::PanGesture { delta })
+            vec![InputEvent::PanGesture { delta }]
        }
 
         RawWinitEvent::DoubleTapGesture => {
-            Some(InputEvent::DoubleTapGesture)
+            vec![InputEvent::DoubleTapGesture]
         }
 
         // === MOUSE MOTION (RAW DELTA) ===
         RawWinitEvent::MouseMotion { delta } => {
-            Some(InputEvent::MouseMotion { delta })
+            vec![InputEvent::MouseMotion { delta }]
         }
 
         // === NON-INPUT EVENTS ===
@@ -611,7 +619,7 @@ fn raw_to_input_event(event: RawWinitEvent) -> Vec<InputEvent> {
         RawWinitEvent::Moved { .. } => {
             // These are window/UI events, should be sent to Bevy messages
             // (to be implemented when we add Bevy window event forwarding)
-            None
+            vec![]
         }
     }
 }
@@ -245,6 +245,11 @@ impl InputController {
                 }
                 // In other contexts, ignore MouseMotion to avoid conflicts with cursor-based input
             }
+
+            InputEvent::Text { text: _ } => {
+                // Text input is handled by egui, not by game actions
+                // This is for typing in text fields, not game controls
+            }
         }
 
         actions
@@ -386,6 +391,7 @@ impl Default for InputController {
     }
 }
 
-#[cfg(test)]
-#[path = "input_controller_tests.rs"]
-mod tests;
+// Tests are in crates/libmarathon/src/engine/input_controller_tests.rs
+// #[cfg(test)]
+// #[path = "input_controller_tests.rs"]
+// mod tests;
@@ -52,7 +52,7 @@ pub struct InputEventBuffer {
 ///
 /// Platform-specific code converts native input (UITouch, winit events)
 /// into these engine-agnostic events.
-#[derive(Debug, Clone, Copy)]
+#[derive(Debug, Clone)]
 pub enum InputEvent {
     /// Stylus input (Apple Pencil, Surface Pen, etc.)
     Stylus {
@@ -108,6 +108,13 @@ pub enum InputEvent {
         modifiers: Modifiers,
     },
 
+    /// Text input from keyboard
+    /// This is the actual character that was typed, after applying keyboard layout
+    Text {
+        /// The text/character that was entered
+        text: String,
+    },
+
     /// Mouse wheel scroll
     MouseWheel {
         /// Scroll delta (pixels or lines depending on device)
@@ -155,6 +162,7 @@ impl InputEvent {
             InputEvent::Touch { pos, .. } => Some(*pos),
             InputEvent::MouseWheel { pos, .. } => Some(*pos),
             InputEvent::Keyboard { .. } |
+            InputEvent::Text { .. } |
             InputEvent::MouseMotion { .. } |
             InputEvent::PinchGesture { .. } |
             InputEvent::RotationGesture { .. } |
@@ -170,6 +178,7 @@ impl InputEvent {
             InputEvent::Mouse { phase, .. } => Some(*phase),
             InputEvent::Touch { phase, .. } => Some(*phase),
             InputEvent::Keyboard { .. } |
+            InputEvent::Text { .. } |
             InputEvent::MouseWheel { .. } |
             InputEvent::MouseMove { .. } |
             InputEvent::MouseMotion { .. } |
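Adding the `Text { text: String }` variant is what forces dropping `Copy` from the derive: `Copy` requires every field of every variant to be `Copy`, and `String` owns heap data. A minimal sketch with a hypothetical `Ev` enum, not repository code:

```rust
// #[derive(Clone, Copy)] on this enum would fail with error[E0204],
// because the String field cannot be Copy.
#[derive(Debug, Clone)]
#[allow(dead_code)]
enum Ev {
    Key { code: u8 },
    Text { text: String },
}

fn main() {
    let e = Ev::Text { text: "a".to_string() };
    let f = e.clone(); // call sites now clone explicitly instead of copying
    match f {
        Ev::Text { text } => assert_eq!(text, "a"),
        Ev::Key { .. } => unreachable!(),
    }
}
```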
@@ -1,38 +1,88 @@
 //! iOS application executor - owns winit and drives Bevy ECS
 //!
-//! iOS-specific implementation of the executor pattern, adapted for UIKit integration.
-//! See platform/desktop/executor.rs for detailed architecture documentation.
+//! iOS-specific implementation of the executor pattern, adapted for UIKit
+//! integration. See platform/desktop/executor.rs for detailed architecture
+//! documentation.
 
-use bevy::prelude::*;
-use bevy::app::AppExit;
-use bevy::input::{
-    ButtonInput,
-    mouse::MouseButton as BevyMouseButton,
-    keyboard::KeyCode as BevyKeyCode,
-    touch::{Touches, TouchInput},
-    gestures::*,
-    keyboard::KeyboardInput,
-    mouse::{MouseButtonInput, MouseMotion, MouseWheel},
-};
-use bevy::window::{
-    PrimaryWindow, WindowCreated, WindowResized, WindowScaleFactorChanged, WindowClosing,
-    WindowResolution, WindowMode, WindowPosition, WindowEvent as BevyWindowEvent,
-    RawHandleWrapper, WindowWrapper,
-    CursorMoved, CursorEntered, CursorLeft,
-    WindowFocused, WindowOccluded, WindowMoved, WindowThemeChanged, WindowDestroyed,
-    FileDragAndDrop, Ime, WindowCloseRequested,
-};
-use bevy::ecs::message::Messages;
-use crate::platform::input::{InputEvent, InputEventBuffer};
 use std::sync::Arc;
-use winit::application::ApplicationHandler;
-use winit::event::{Event as WinitEvent, WindowEvent as WinitWindowEvent};
-use winit::event_loop::{ActiveEventLoop, ControlFlow, EventLoop, EventLoopProxy};
-use winit::window::{Window as WinitWindow, WindowId, WindowAttributes};
 
+use bevy::{
+    app::AppExit,
+    ecs::message::Messages,
+    input::{
+        ButtonInput,
+        gestures::*,
+        keyboard::{
+            KeyCode as BevyKeyCode,
+            KeyboardInput,
+        },
+        mouse::{
+            MouseButton as BevyMouseButton,
+            MouseButtonInput,
+            MouseMotion,
+            MouseWheel,
+        },
+        touch::{
+            TouchInput,
+            Touches,
+        },
+    },
+    prelude::*,
+    window::{
+        CursorEntered,
+        CursorLeft,
+        CursorMoved,
+        FileDragAndDrop,
+        Ime,
+        PrimaryWindow,
+        RawHandleWrapper,
+        WindowCloseRequested,
+        WindowClosing,
+        WindowCreated,
+        WindowDestroyed,
+        WindowEvent as BevyWindowEvent,
+        WindowFocused,
+        WindowMode,
+        WindowMoved,
+        WindowOccluded,
+        WindowPosition,
+        WindowResized,
+        WindowResolution,
+        WindowScaleFactorChanged,
+        WindowThemeChanged,
+        WindowWrapper,
+    },
+};
+use glam;
+use winit::{
+    application::ApplicationHandler,
+    event::{
+        Event as WinitEvent,
+        WindowEvent as WinitWindowEvent,
+    },
+    event_loop::{
+        ActiveEventLoop,
+        ControlFlow,
+        EventLoop,
+        EventLoopProxy,
+    },
+    window::{
+        Window as WinitWindow,
+        WindowAttributes,
+        WindowId,
+    },
+};
+
+use crate::platform::input::{
+    InputEvent,
+    InputEventBuffer,
+};
 
 /// Application handler state machine
 enum AppHandler {
-    Initializing { app: Option<App> },
+    Initializing {
+        app: Option<App>,
+    },
     Running {
         window: Arc<WinitWindow>,
         bevy_window_entity: Entity,
@@ -107,11 +157,12 @@ impl AppHandler {
         bevy_app.init_resource::<Messages<TouchInput>>();
 
         // Create the winit window BEFORE finishing the app
-        let window_attributes = WindowAttributes::default()
-            .with_title("Marathon")
-            .with_inner_size(winit::dpi::LogicalSize::new(1280, 720));
+        // Let winit choose the default size for iOS
+        let window_attributes = WindowAttributes::default()
+            .with_title("Marathon");
 
-        let winit_window = event_loop.create_window(window_attributes)
+        let winit_window = event_loop
+            .create_window(window_attributes)
             .map_err(|e| format!("Failed to create window: {}", e))?;
         let winit_window = Arc::new(winit_window);
         info!("Created iOS window before app.finish()");
@@ -119,37 +170,41 @@ impl AppHandler {
         let physical_size = winit_window.inner_size();
         let scale_factor = winit_window.scale_factor();
 
-        // iOS-specific: High DPI screens (Retina)
-        // iPad Pro has scale factors of 2.0, some models 3.0
-        info!("iOS scale factor: {}", scale_factor);
-
-        // Create window entity with all required components
-        // Convert physical pixels to logical pixels using proper floating-point division
-        let logical_width = (physical_size.width as f64 / scale_factor) as f32;
-        let logical_height = (physical_size.height as f64 / scale_factor) as f32;
+        // Log everything for debugging
+        info!("iOS window diagnostics:");
+        info!("  Physical size (pixels): {}×{}", physical_size.width, physical_size.height);
+        info!("  Scale factor: {}", scale_factor);
 
+        // WindowResolution::new() expects PHYSICAL size
         let mut window = bevy::window::Window {
             title: "Marathon".to_string(),
-            resolution: WindowResolution::new(logical_width, logical_height),
-            mode: WindowMode::BorderlessFullscreen,
+            resolution: WindowResolution::new(physical_size.width, physical_size.height),
+            mode: WindowMode::BorderlessFullscreen(bevy::window::MonitorSelection::Current),
             position: WindowPosition::Automatic,
             focused: true,
             ..Default::default()
         };
-        window
-            .resolution
-            .set_scale_factor_and_apply_to_physical_size(scale_factor as f32);
 
+        // Set scale factor so Bevy can calculate logical size
+        window.resolution.set_scale_factor(scale_factor as f32);
+
+        // Log final window state
+        info!("  Final window resolution: {:.1}×{:.1} (logical)",
+            window.resolution.width(), window.resolution.height());
+        info!("  Final physical resolution: {}×{}",
+            window.resolution.physical_width(), window.resolution.physical_height());
+        info!("  Final scale factor: {}", window.resolution.scale_factor());
+        info!("  Window mode: BorderlessFullscreen");
 
         // Create WindowWrapper and RawHandleWrapper for renderer
         let window_wrapper = WindowWrapper::new(winit_window.clone());
         let raw_handle_wrapper = RawHandleWrapper::new(&window_wrapper)
             .map_err(|e| format!("Failed to create RawHandleWrapper: {}", e))?;
 
-        let window_entity = bevy_app.world_mut().spawn((
-            window,
-            PrimaryWindow,
-            raw_handle_wrapper,
-        )).id();
+        let window_entity = bevy_app
+            .world_mut()
+            .spawn((window, PrimaryWindow, raw_handle_wrapper))
+            .id();
         info!("Created window entity {}", window_entity);
 
         // Send initialization event
@@ -183,7 +238,10 @@ impl AppHandler {
         // Send WindowClosing event
         send_window_closing(bevy_app, *bevy_window_entity);
 
-        // Run one final update to process close event
+        // Send AppExit event to trigger cleanup systems
+        bevy_app.world_mut().send_message(AppExit::Success);
+
+        // Run one final update to process close events and cleanup
         bevy_app.update();
     }
 
@@ -193,13 +251,16 @@ impl AppHandler {
 
 impl ApplicationHandler for AppHandler {
     fn resumed(&mut self, event_loop: &ActiveEventLoop) {
+        eprintln!(">>> iOS executor: resumed() callback called");
         // Initialize on first resumed() call
         if let Err(e) = self.initialize(event_loop) {
             error!("Failed to initialize iOS app: {}", e);
+            eprintln!(">>> iOS executor: Initialization failed: {}", e);
             event_loop.exit();
             return;
         }
         info!("iOS app resumed");
+        eprintln!(">>> iOS executor: App resumed successfully");
     }
 
     fn window_event(
@@ -219,13 +280,15 @@ impl ApplicationHandler for AppHandler {
         };
 
         match event {
-            WinitWindowEvent::CloseRequested => {
+            | WinitWindowEvent::CloseRequested => {
                 self.shutdown(event_loop);
-            }
+            },
 
-            WinitWindowEvent::Resized(physical_size) => {
+            | WinitWindowEvent::Resized(physical_size) => {
                 // Update the Bevy Window component's physical resolution
-                if let Some(mut window_component) = bevy_app.world_mut().get_mut::<Window>(*bevy_window_entity) {
+                if let Some(mut window_component) =
+                    bevy_app.world_mut().get_mut::<Window>(*bevy_window_entity)
+                {
                     window_component
                         .resolution
                         .set_physical_resolution(physical_size.width, physical_size.height);
@@ -234,9 +297,30 @@ impl ApplicationHandler for AppHandler {
                 // Notify Bevy systems of window resize
                 let scale_factor = window.scale_factor();
                 send_window_resized(bevy_app, *bevy_window_entity, physical_size, scale_factor);
-            }
+            },
 
-            WinitWindowEvent::RedrawRequested => {
+            | WinitWindowEvent::RedrawRequested => {
+                // Log viewport/window dimensions every 60 frames
+                static mut FRAME_COUNT: u32 = 0;
+                let should_log = unsafe {
+                    FRAME_COUNT += 1;
+                    FRAME_COUNT % 60 == 0
+                };
+
+                if should_log {
+                    if let Some(window_component) = bevy_app.world().get::<Window>(*bevy_window_entity) {
+                        let frame_num = unsafe { FRAME_COUNT };
+                        info!("Frame {} - Window state:", frame_num);
+                        info!("  Logical: {:.1}×{:.1}",
+                            window_component.resolution.width(),
+                            window_component.resolution.height());
+                        info!("  Physical: {}×{}",
+                            window_component.resolution.physical_width(),
+                            window_component.resolution.physical_height());
+                        info!("  Scale: {}", window_component.resolution.scale_factor());
+                    }
+                }
+
                 // iOS-specific: Get pencil input from the bridge
                 #[cfg(target_os = "ios")]
                 let pencil_events = super::drain_as_input_events();
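The frame-counting hunk above leans on `static mut` plus `unsafe`. A data-race-free alternative with the same every-60-frames cadence is an `AtomicU32`; this is a sketch under that assumption, not the repository's code:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

static FRAME_COUNT: AtomicU32 = AtomicU32::new(0);

// Returns true every 60th call, with no unsafe block and no data races.
fn should_log() -> bool {
    // fetch_add returns the previous value; +1 gives this frame's number
    let frame = FRAME_COUNT.fetch_add(1, Ordering::Relaxed) + 1;
    frame % 60 == 0
}

fn main() {
    let logged = (0..120).filter(|_| should_log()).count();
    assert_eq!(logged, 2); // frames 60 and 120
}
```

`Ordering::Relaxed` is sufficient here because the counter is only a periodic trigger, not a synchronization point between threads.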
@@ -262,11 +346,13 @@ impl ApplicationHandler for AppHandler {
 
                 // Request next frame immediately (unbounded loop)
                 window.request_redraw();
-            }
+            },
 
-            WinitWindowEvent::ScaleFactorChanged { scale_factor, .. } => {
+            | WinitWindowEvent::ScaleFactorChanged { scale_factor, .. } => {
                 // Update the Bevy Window component's scale factor
-                if let Some(mut window_component) = bevy_app.world_mut().get_mut::<Window>(*bevy_window_entity) {
+                if let Some(mut window_component) =
+                    bevy_app.world_mut().get_mut::<Window>(*bevy_window_entity)
+                {
                     let prior_factor = window_component.resolution.scale_factor();
 
                     window_component
@@ -280,9 +366,102 @@ impl ApplicationHandler for AppHandler {
                         prior_factor, scale_factor, bevy_window_entity
                     );
                 }
-            }
+            },
 
-            _ => {}
+            // Mouse support for iPad simulator (simulator uses mouse, not touch)
+            | WinitWindowEvent::CursorMoved { position, .. } => {
+                let scale_factor = window.scale_factor();
+                let mut buffer = bevy_app.world_mut().resource_mut::<InputEventBuffer>();
+                buffer
+                    .events
+                    .push(crate::platform::input::InputEvent::MouseMove {
+                        pos: glam::Vec2::new(
+                            (position.x / scale_factor) as f32,
+                            (position.y / scale_factor) as f32,
+                        ),
+                    });
+            },
+
+            | WinitWindowEvent::MouseInput { state, button, .. } => {
+                use crate::platform::input::{
+                    MouseButton as EngineButton,
+                    TouchPhase,
+                };
+
+                let (engine_button, phase) = match (button, state) {
+                    | (winit::event::MouseButton::Left, winit::event::ElementState::Pressed) => {
+                        (EngineButton::Left, TouchPhase::Started)
+                    },
+                    | (winit::event::MouseButton::Left, winit::event::ElementState::Released) => {
+                        (EngineButton::Left, TouchPhase::Ended)
+                    },
+                    | (winit::event::MouseButton::Right, winit::event::ElementState::Pressed) => {
+                        (EngineButton::Right, TouchPhase::Started)
+                    },
+                    | (winit::event::MouseButton::Right, winit::event::ElementState::Released) => {
+                        (EngineButton::Right, TouchPhase::Ended)
+                    },
+                    | (winit::event::MouseButton::Middle, winit::event::ElementState::Pressed) => {
+                        (EngineButton::Middle, TouchPhase::Started)
+                    },
+                    | (winit::event::MouseButton::Middle, winit::event::ElementState::Released) => {
+                        (EngineButton::Middle, TouchPhase::Ended)
+                    },
+                    | _ => return, // Ignore other buttons
+                };
+
+                let mut buffer = bevy_app.world_mut().resource_mut::<InputEventBuffer>();
+                // Use last known cursor position - extract position first to avoid borrow issues
+                let last_pos = buffer
+                    .events
+                    .iter()
+                    .rev()
+                    .find_map(|e| match e {
+                        crate::platform::input::InputEvent::MouseMove { pos } => Some(*pos),
+                        _ => None,
+                    });
+
+                if let Some(pos) = last_pos {
+                    buffer.events.push(crate::platform::input::InputEvent::Mouse {
+                        pos,
+                        button: engine_button,
+                        phase,
+                    });
+                }
+            },
+
+            | WinitWindowEvent::MouseWheel { delta, .. } => {
+                let (delta_x, delta_y) = match delta {
+                    | winit::event::MouseScrollDelta::LineDelta(x, y) => {
+                        (x * 20.0, y * 20.0) // Convert lines to pixels
+                    },
+                    | winit::event::MouseScrollDelta::PixelDelta(pos) => {
+                        (pos.x as f32, pos.y as f32)
+                    },
+                };
+
+                let mut buffer = bevy_app.world_mut().resource_mut::<InputEventBuffer>();
+                // Use last known cursor position
+                let pos = buffer
+                    .events
+                    .iter()
+                    .rev()
+                    .find_map(|e| match e {
+                        | crate::platform::input::InputEvent::MouseMove { pos } => Some(*pos),
+                        | crate::platform::input::InputEvent::MouseWheel { pos, .. } => Some(*pos),
+                        | _ => None,
+                    })
+                    .unwrap_or(glam::Vec2::ZERO);
+
+                buffer
+                    .events
+                    .push(crate::platform::input::InputEvent::MouseWheel {
+                        delta: glam::Vec2::new(delta_x, delta_y),
+                        pos,
+                    });
+            },
+
+            | _ => {},
         }
     }
@@ -324,17 +503,26 @@ impl ApplicationHandler for AppHandler {
 /// - Window creation fails during initialization
 /// - The event loop encounters a fatal error
 pub fn run_executor(app: App) -> Result<(), Box<dyn std::error::Error>> {
+    eprintln!(">>> iOS executor: run_executor() called");
+
+    eprintln!(">>> iOS executor: Creating event loop");
     let event_loop = EventLoop::new()?;
+    eprintln!(">>> iOS executor: Event loop created");
 
     // Run as fast as possible (unbounded)
+    eprintln!(">>> iOS executor: Setting control flow");
     event_loop.set_control_flow(ControlFlow::Poll);
 
     info!("Starting iOS executor (unbounded mode)");
+    eprintln!(">>> iOS executor: Starting (unbounded mode)");
 
     // Create handler in Initializing state with the app
+    eprintln!(">>> iOS executor: Creating AppHandler");
     let mut handler = AppHandler::Initializing { app: Some(app) };
 
+    eprintln!(">>> iOS executor: Running event loop (blocking call)");
     event_loop.run_app(&mut handler)?;
 
+    eprintln!(">>> iOS executor: Event loop returned (should never reach here)");
     Ok(())
 }
crates/libmarathon/src/render/alpha.rs (new file, 62 lines)
use bevy_reflect::{std_traits::ReflectDefault, Reflect};

// TODO: add discussion about performance.
/// Sets how a material's base color alpha channel is used for transparency.
#[derive(Debug, Default, Reflect, Copy, Clone, PartialEq)]
#[reflect(Default, Debug, Clone)]
pub enum AlphaMode {
    /// Base color alpha values are overridden to be fully opaque (1.0).
    #[default]
    Opaque,
    /// Reduce transparency to fully opaque or fully transparent
    /// based on a threshold.
    ///
    /// Compares the base color alpha value to the specified threshold.
    /// If the value is below the threshold,
    /// considers the color to be fully transparent (alpha is set to 0.0).
    /// If it is equal to or above the threshold,
    /// considers the color to be fully opaque (alpha is set to 1.0).
    Mask(f32),
    /// The base color alpha value defines the opacity of the color.
    /// Standard alpha-blending is used to blend the fragment's color
    /// with the color behind it.
    Blend,
    /// Similar to [`AlphaMode::Blend`], however assumes RGB channel values are
    /// [premultiplied](https://en.wikipedia.org/wiki/Alpha_compositing#Straight_versus_premultiplied).
    ///
    /// For otherwise constant RGB values, behaves more like [`AlphaMode::Blend`] for
    /// alpha values closer to 1.0, and more like [`AlphaMode::Add`] for
    /// alpha values closer to 0.0.
    ///
    /// Can be used to avoid “border” or “outline” artifacts that can occur
    /// when using plain alpha-blended textures.
    Premultiplied,
    /// Spreads the fragment out over a hardware-dependent number of sample
    /// locations proportional to the alpha value. This requires multisample
    /// antialiasing; if MSAA isn't on, this is identical to
    /// [`AlphaMode::Mask`] with a value of 0.5.
    ///
    /// [Alpha to coverage] provides improved performance and better visual
    /// fidelity over [`AlphaMode::Blend`], as Bevy doesn't have to sort objects
    /// when it's in use. It's especially useful for complex transparent objects
    /// like foliage.
    ///
    /// [Alpha to coverage]: https://en.wikipedia.org/wiki/Alpha_to_coverage
    AlphaToCoverage,
    /// Combines the color of the fragments with the colors behind them in an
    /// additive process, (i.e. like light) producing lighter results.
    ///
    /// Black produces no effect. Alpha values can be used to modulate the result.
    ///
    /// Useful for effects like holograms, ghosts, lasers and other energy beams.
    Add,
    /// Combines the color of the fragments with the colors behind them in a
    /// multiplicative process, (i.e. like pigments) producing darker results.
    ///
    /// White produces no effect. Alpha values can be used to modulate the result.
    ///
    /// Useful for effects like stained glass, window tint film and some colored liquids.
    Multiply,
}

impl Eq for AlphaMode {}
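The `Mask` thresholding rule described in the doc comment above can be sketched as a standalone snippet. The `resolve_mask_alpha` helper is hypothetical (not part of this diff); it only illustrates the documented behavior: alpha below the threshold resolves to fully transparent, alpha at or above it resolves to fully opaque.

```rust
/// Hypothetical helper illustrating `AlphaMode::Mask(threshold)`:
/// below the threshold the color is treated as fully transparent (0.0),
/// at or above it as fully opaque (1.0).
fn resolve_mask_alpha(alpha: f32, threshold: f32) -> f32 {
    if alpha < threshold { 0.0 } else { 1.0 }
}

fn main() {
    assert_eq!(resolve_mask_alpha(0.3, 0.5), 0.0);
    // Equal to the threshold counts as opaque, per the doc comment.
    assert_eq!(resolve_mask_alpha(0.5, 0.5), 1.0);
    assert_eq!(resolve_mask_alpha(0.9, 0.5), 1.0);
    println!("mask thresholding behaves as documented");
}
```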
2142
crates/libmarathon/src/render/batching/gpu_preprocessing.rs
Normal file
File diff suppressed because it is too large.
225
crates/libmarathon/src/render/batching/mod.rs
Normal file
@@ -0,0 +1,225 @@
use bevy_ecs::{
    component::Component,
    entity::Entity,
    system::{ResMut, SystemParam, SystemParamItem},
};
use bytemuck::Pod;
use gpu_preprocessing::UntypedPhaseIndirectParametersBuffers;
use nonmax::NonMaxU32;

use crate::render::{
    render_phase::{
        BinnedPhaseItem, CachedRenderPipelinePhaseItem, DrawFunctionId, PhaseItemExtraIndex,
        SortedPhaseItem, SortedRenderPhase, ViewBinnedRenderPhases,
    },
    render_resource::{CachedRenderPipelineId, GpuArrayBufferable},
    sync_world::MainEntity,
};

pub mod gpu_preprocessing;
pub mod no_gpu_preprocessing;

/// Add this component to mesh entities to disable automatic batching
#[derive(Component, Default)]
pub struct NoAutomaticBatching;

/// Data necessary to be equal for two draw commands to be mergeable
///
/// This is based on the following assumptions:
/// - Only entities with prepared assets (pipelines, materials, meshes) are
///   queued to phases
/// - View bindings are constant across a phase for a given draw function as
///   phases are per-view
/// - `batch_and_prepare_render_phase` is the only system that performs this
///   batching and has sole responsibility for preparing the per-object data.
///   As such the mesh binding and dynamic offsets are assumed to only be
///   variable as a result of the `batch_and_prepare_render_phase` system, e.g.
///   due to having to split data across separate uniform bindings within the
///   same buffer due to the maximum uniform buffer binding size.
#[derive(PartialEq)]
struct BatchMeta<T: PartialEq> {
    /// The pipeline id encompasses all pipeline configuration including vertex
    /// buffers and layouts, shaders and their specializations, bind group
    /// layouts, etc.
    pipeline_id: CachedRenderPipelineId,
    /// The draw function id defines the `RenderCommands` that are called to
    /// set the pipeline and bindings, and make the draw command
    draw_function_id: DrawFunctionId,
    dynamic_offset: Option<NonMaxU32>,
    user_data: T,
}

impl<T: PartialEq> BatchMeta<T> {
    fn new(item: &impl CachedRenderPipelinePhaseItem, user_data: T) -> Self {
        BatchMeta {
            pipeline_id: item.cached_pipeline(),
            draw_function_id: item.draw_function(),
            dynamic_offset: match item.extra_index() {
                PhaseItemExtraIndex::DynamicOffset(dynamic_offset) => {
                    NonMaxU32::new(dynamic_offset)
                }
                PhaseItemExtraIndex::None | PhaseItemExtraIndex::IndirectParametersIndex { .. } => {
                    None
                }
            },
            user_data,
        }
    }
}

/// A trait to support getting data used for batching draw commands via phase
/// items.
///
/// This is a simple version that only allows for sorting, not binning, as well
/// as only CPU processing, not GPU preprocessing. For these fancier features,
/// see [`GetFullBatchData`].
pub trait GetBatchData {
    /// The system parameters [`GetBatchData::get_batch_data`] needs in
    /// order to compute the batch data.
    type Param: SystemParam + 'static;
    /// Data used for comparison between phase items. If the pipeline id, draw
    /// function id, per-instance data buffer dynamic offset and this data
    /// matches, the draws can be batched.
    type CompareData: PartialEq;
    /// The per-instance data to be inserted into the
    /// [`crate::render_resource::GpuArrayBuffer`] containing these data for all
    /// instances.
    type BufferData: GpuArrayBufferable + Sync + Send + 'static;
    /// Get the per-instance data to be inserted into the
    /// [`crate::render_resource::GpuArrayBuffer`]. If the instance can be
    /// batched, also return the data used for comparison when deciding whether
    /// draws can be batched, else return None for the `CompareData`.
    ///
    /// This is only called when building instance data on CPU. In the GPU
    /// instance data building path, we use
    /// [`GetFullBatchData::get_index_and_compare_data`] instead.
    fn get_batch_data(
        param: &SystemParamItem<Self::Param>,
        query_item: (Entity, MainEntity),
    ) -> Option<(Self::BufferData, Option<Self::CompareData>)>;
}

/// A trait to support getting data used for batching draw commands via phase
/// items.
///
/// This version allows for binning and GPU preprocessing.
pub trait GetFullBatchData: GetBatchData {
    /// The per-instance data that was inserted into the
    /// [`crate::render_resource::BufferVec`] during extraction.
    type BufferInputData: Pod + Default + Sync + Send;

    /// Get the per-instance data to be inserted into the
    /// [`crate::render_resource::GpuArrayBuffer`].
    ///
    /// This is only called when building uniforms on CPU. In the GPU instance
    /// buffer building path, we use
    /// [`GetFullBatchData::get_index_and_compare_data`] instead.
    fn get_binned_batch_data(
        param: &SystemParamItem<Self::Param>,
        query_item: MainEntity,
    ) -> Option<Self::BufferData>;

    /// Returns the index of the [`GetFullBatchData::BufferInputData`] that the
    /// GPU preprocessing phase will use.
    ///
    /// We already inserted the [`GetFullBatchData::BufferInputData`] during the
    /// extraction phase before we got here, so this function shouldn't need to
    /// look up any render data. If CPU instance buffer building is in use, this
    /// function will never be called.
    fn get_index_and_compare_data(
        param: &SystemParamItem<Self::Param>,
        query_item: MainEntity,
    ) -> Option<(NonMaxU32, Option<Self::CompareData>)>;

    /// Returns the index of the [`GetFullBatchData::BufferInputData`] that the
    /// GPU preprocessing phase will use.
    ///
    /// We already inserted the [`GetFullBatchData::BufferInputData`] during the
    /// extraction phase before we got here, so this function shouldn't need to
    /// look up any render data.
    ///
    /// This function is currently only called for unbatchable entities when GPU
    /// instance buffer building is in use. For batchable entities, the uniform
    /// index is written during queuing (e.g. in `queue_material_meshes`). In
    /// the case of CPU instance buffer building, the CPU writes the uniforms,
    /// so there's no index to return.
    fn get_binned_index(
        param: &SystemParamItem<Self::Param>,
        query_item: MainEntity,
    ) -> Option<NonMaxU32>;

    /// Writes the [`gpu_preprocessing::IndirectParametersGpuMetadata`]
    /// necessary to draw this batch into the given metadata buffer at the given
    /// index.
    ///
    /// This is only used if GPU culling is enabled (which requires GPU
    /// preprocessing).
    ///
    /// * `indexed` is true if the mesh is indexed or false if it's non-indexed.
    ///
    /// * `base_output_index` is the index of the first mesh instance in this
    ///   batch in the `MeshUniform` output buffer.
    ///
    /// * `batch_set_index` is the index of the batch set in the
    ///   [`gpu_preprocessing::IndirectBatchSet`] buffer, if this batch belongs
    ///   to a batch set.
    ///
    /// * `indirect_parameters_buffers` is the buffer in which to write the
    ///   metadata.
    ///
    /// * `indirect_parameters_offset` is the index in that buffer at which to
    ///   write the metadata.
    fn write_batch_indirect_parameters_metadata(
        indexed: bool,
        base_output_index: u32,
        batch_set_index: Option<NonMaxU32>,
        indirect_parameters_buffers: &mut UntypedPhaseIndirectParametersBuffers,
        indirect_parameters_offset: u32,
    );
}

/// Sorts a render phase that uses bins.
pub fn sort_binned_render_phase<BPI>(mut phases: ResMut<ViewBinnedRenderPhases<BPI>>)
where
    BPI: BinnedPhaseItem,
{
    for phase in phases.values_mut() {
        phase.multidrawable_meshes.sort_unstable_keys();
        phase.batchable_meshes.sort_unstable_keys();
        phase.unbatchable_meshes.sort_unstable_keys();
        phase.non_mesh_items.sort_unstable_keys();
    }
}

/// Batches the items in a sorted render phase.
///
/// This means comparing metadata needed to draw each phase item and trying to
/// combine the draws into a batch.
///
/// This is common code factored out from
/// [`gpu_preprocessing::batch_and_prepare_sorted_render_phase`] and
/// [`no_gpu_preprocessing::batch_and_prepare_sorted_render_phase`].
fn batch_and_prepare_sorted_render_phase<I, GBD>(
    phase: &mut SortedRenderPhase<I>,
    mut process_item: impl FnMut(&mut I) -> Option<GBD::CompareData>,
) where
    I: CachedRenderPipelinePhaseItem + SortedPhaseItem,
    GBD: GetBatchData,
{
    let items = phase.items.iter_mut().map(|item| {
        let batch_data = match process_item(item) {
            Some(compare_data) if I::AUTOMATIC_BATCHING => Some(BatchMeta::new(item, compare_data)),
            _ => None,
        };
        (item.batch_range_mut(), batch_data)
    });

    items.reduce(|(start_range, prev_batch_meta), (range, batch_meta)| {
        if batch_meta.is_some() && prev_batch_meta == batch_meta {
            start_range.end = range.end;
            (start_range, prev_batch_meta)
        } else {
            (range, batch_meta)
        }
    });
}
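The `reduce` at the end of `batch_and_prepare_sorted_render_phase` merges adjacent draws whose batch metadata is equal by extending the previous item's instance range. The same folding logic can be sketched standalone; `Meta` and `merge_batches` are hypothetical stand-ins (a single id in place of `BatchMeta`'s pipeline/draw-function/offset/user data), and the sketch accumulates into a `Vec` rather than mutating ranges in place:

```rust
use std::ops::Range;

/// Hypothetical stand-in for `BatchMeta`: draws merge only when this
/// comparison data is equal (here, just a pipeline-like id).
#[derive(PartialEq, Clone)]
struct Meta(u32);

/// Folds adjacent items with equal metadata into one growing batch range,
/// mirroring the `reduce` in `batch_and_prepare_sorted_render_phase`.
/// Items with `None` metadata are never batched.
fn merge_batches(items: Vec<(Range<u32>, Option<Meta>)>) -> Vec<(Range<u32>, Option<Meta>)> {
    let mut out: Vec<(Range<u32>, Option<Meta>)> = Vec::new();
    for (range, meta) in items {
        match out.last_mut() {
            Some((prev_range, prev_meta)) if meta.is_some() && *prev_meta == meta => {
                // Same metadata: extend the previous batch instead of starting a new one.
                prev_range.end = range.end;
            }
            _ => out.push((range, meta)),
        }
    }
    out
}

fn main() {
    let items = vec![
        (0..1, Some(Meta(7))),
        (1..2, Some(Meta(7))), // merges with the previous draw
        (2..3, Some(Meta(9))), // different metadata: new batch
        (3..4, None),          // unbatchable: always its own draw
    ];
    let merged = merge_batches(items);
    assert_eq!(merged.len(), 3);
    assert_eq!(merged[0].0, 0..2);
    println!("adjacent compatible draws merged into one batch");
}
```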
182
crates/libmarathon/src/render/batching/no_gpu_preprocessing.rs
Normal file
@@ -0,0 +1,182 @@
//! Batching functionality when GPU preprocessing isn't in use.

use bevy_derive::{Deref, DerefMut};
use bevy_ecs::entity::Entity;
use bevy_ecs::resource::Resource;
use bevy_ecs::system::{Res, ResMut, StaticSystemParam};
use smallvec::{smallvec, SmallVec};
use tracing::error;
use wgpu::BindingResource;

use crate::render::{
    render_phase::{
        BinnedPhaseItem, BinnedRenderPhaseBatch, BinnedRenderPhaseBatchSets,
        CachedRenderPipelinePhaseItem, PhaseItemExtraIndex, SortedPhaseItem,
        ViewBinnedRenderPhases, ViewSortedRenderPhases,
    },
    render_resource::{GpuArrayBuffer, GpuArrayBufferable},
    renderer::{RenderDevice, RenderQueue},
};

use super::{GetBatchData, GetFullBatchData};

/// The GPU buffers holding the data needed to render batches.
///
/// For example, in the 3D PBR pipeline this holds `MeshUniform`s, which are the
/// `BD` type parameter in that mode.
#[derive(Resource, Deref, DerefMut)]
pub struct BatchedInstanceBuffer<BD>(pub GpuArrayBuffer<BD>)
where
    BD: GpuArrayBufferable + Sync + Send + 'static;

impl<BD> BatchedInstanceBuffer<BD>
where
    BD: GpuArrayBufferable + Sync + Send + 'static,
{
    /// Creates a new buffer.
    pub fn new(render_device: &RenderDevice) -> Self {
        BatchedInstanceBuffer(GpuArrayBuffer::new(render_device))
    }

    /// Returns the binding of the buffer that contains the per-instance data.
    ///
    /// If we're in the GPU instance buffer building mode, this buffer needs to
    /// be filled in via a compute shader.
    pub fn instance_data_binding(&self) -> Option<BindingResource<'_>> {
        self.binding()
    }
}

/// A system that clears out the [`BatchedInstanceBuffer`] for the frame.
///
/// This needs to run before the CPU batched instance buffers are used.
pub fn clear_batched_cpu_instance_buffers<GBD>(
    cpu_batched_instance_buffer: Option<ResMut<BatchedInstanceBuffer<GBD::BufferData>>>,
) where
    GBD: GetBatchData,
{
    if let Some(mut cpu_batched_instance_buffer) = cpu_batched_instance_buffer {
        cpu_batched_instance_buffer.clear();
    }
}

/// Batch the items in a sorted render phase, when GPU instance buffer building
/// isn't in use. This means comparing metadata needed to draw each phase item
/// and trying to combine the draws into a batch.
pub fn batch_and_prepare_sorted_render_phase<I, GBD>(
    batched_instance_buffer: ResMut<BatchedInstanceBuffer<GBD::BufferData>>,
    mut phases: ResMut<ViewSortedRenderPhases<I>>,
    param: StaticSystemParam<GBD::Param>,
) where
    I: CachedRenderPipelinePhaseItem + SortedPhaseItem,
    GBD: GetBatchData,
{
    let system_param_item = param.into_inner();

    // We only process CPU-built batch data in this function.
    let batched_instance_buffer = batched_instance_buffer.into_inner();

    for phase in phases.values_mut() {
        super::batch_and_prepare_sorted_render_phase::<I, GBD>(phase, |item| {
            let (buffer_data, compare_data) =
                GBD::get_batch_data(&system_param_item, (item.entity(), item.main_entity()))?;
            let buffer_index = batched_instance_buffer.push(buffer_data);

            let index = buffer_index.index;
            let (batch_range, extra_index) = item.batch_range_and_extra_index_mut();
            *batch_range = index..index + 1;
            *extra_index = PhaseItemExtraIndex::maybe_dynamic_offset(buffer_index.dynamic_offset);

            compare_data
        });
    }
}

/// Creates batches for a render phase that uses bins, when GPU batch data
/// building isn't in use.
pub fn batch_and_prepare_binned_render_phase<BPI, GFBD>(
    gpu_array_buffer: ResMut<BatchedInstanceBuffer<GFBD::BufferData>>,
    mut phases: ResMut<ViewBinnedRenderPhases<BPI>>,
    param: StaticSystemParam<GFBD::Param>,
) where
    BPI: BinnedPhaseItem,
    GFBD: GetFullBatchData,
{
    let gpu_array_buffer = gpu_array_buffer.into_inner();
    let system_param_item = param.into_inner();

    for phase in phases.values_mut() {
        // Prepare batchables.

        for bin in phase.batchable_meshes.values_mut() {
            let mut batch_set: SmallVec<[BinnedRenderPhaseBatch; 1]> = smallvec![];
            for main_entity in bin.entities().keys() {
                let Some(buffer_data) =
                    GFBD::get_binned_batch_data(&system_param_item, *main_entity)
                else {
                    continue;
                };
                let instance = gpu_array_buffer.push(buffer_data);

                // If the dynamic offset has changed, flush the batch.
                //
                // This is the only time we ever have more than one batch per
                // bin. Note that dynamic offsets are only used on platforms
                // with no storage buffers.
                if !batch_set.last().is_some_and(|batch| {
                    batch.instance_range.end == instance.index
                        && batch.extra_index
                            == PhaseItemExtraIndex::maybe_dynamic_offset(instance.dynamic_offset)
                }) {
                    batch_set.push(BinnedRenderPhaseBatch {
                        representative_entity: (Entity::PLACEHOLDER, *main_entity),
                        instance_range: instance.index..instance.index,
                        extra_index: PhaseItemExtraIndex::maybe_dynamic_offset(
                            instance.dynamic_offset,
                        ),
                    });
                }

                if let Some(batch) = batch_set.last_mut() {
                    batch.instance_range.end = instance.index + 1;
                }
            }

            match phase.batch_sets {
                BinnedRenderPhaseBatchSets::DynamicUniforms(ref mut batch_sets) => {
                    batch_sets.push(batch_set);
                }
                BinnedRenderPhaseBatchSets::Direct(_)
                | BinnedRenderPhaseBatchSets::MultidrawIndirect { .. } => {
                    error!(
                        "Dynamic uniform batch sets should be used when GPU preprocessing is off"
                    );
                }
            }
        }

        // Prepare unbatchables.
        for unbatchables in phase.unbatchable_meshes.values_mut() {
            for main_entity in unbatchables.entities.keys() {
                let Some(buffer_data) =
                    GFBD::get_binned_batch_data(&system_param_item, *main_entity)
                else {
                    continue;
                };
                let instance = gpu_array_buffer.push(buffer_data);
                unbatchables.buffer_indices.add(instance.into());
            }
        }
    }
}

/// Writes the instance buffer data to the GPU.
pub fn write_batched_instance_buffer<GBD>(
    render_device: Res<RenderDevice>,
    render_queue: Res<RenderQueue>,
    mut cpu_batched_instance_buffer: ResMut<BatchedInstanceBuffer<GBD::BufferData>>,
) where
    GBD: GetBatchData,
{
    cpu_batched_instance_buffer.write_buffer(&render_device, &render_queue);
}
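The flush rule in `batch_and_prepare_binned_render_phase` — start a new batch whenever the pushed instance is discontiguous or its dynamic offset differs from the last batch, otherwise grow the last batch — can be sketched standalone. `Batch` and `push_instance` are hypothetical simplifications (a plain `Option<u32>` offset in place of `PhaseItemExtraIndex`, no entities or GPU buffers):

```rust
/// Simplified batch record mirroring `BinnedRenderPhaseBatch`: an instance
/// range plus the dynamic offset its instances were pushed with.
#[derive(Debug, PartialEq)]
struct Batch {
    instance_range: std::ops::Range<u32>,
    dynamic_offset: Option<u32>,
}

/// Mirrors the per-bin flush logic: open a new batch when the instance index
/// is discontiguous or the dynamic offset changed; then extend the last batch
/// to cover the new instance.
fn push_instance(batches: &mut Vec<Batch>, index: u32, dynamic_offset: Option<u32>) {
    let contiguous = batches
        .last()
        .is_some_and(|b| b.instance_range.end == index && b.dynamic_offset == dynamic_offset);
    if !contiguous {
        batches.push(Batch {
            instance_range: index..index, // empty; extended just below
            dynamic_offset,
        });
    }
    if let Some(batch) = batches.last_mut() {
        batch.instance_range.end = index + 1;
    }
}

fn main() {
    let mut batches = Vec::new();
    push_instance(&mut batches, 0, Some(0));
    push_instance(&mut batches, 1, Some(0));   // same offset, contiguous: grows the batch
    push_instance(&mut batches, 2, Some(256)); // offset changed: flush into a new batch
    assert_eq!(batches.len(), 2);
    assert_eq!(batches[0].instance_range, 0..2);
    assert_eq!(batches[1].instance_range, 2..3);
    println!("dynamic-offset change flushed the batch");
}
```

As the code's comment notes, the multi-batch case only arises on platforms without storage buffers, where per-instance data lives in a dynamically offset uniform buffer.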
Some files were not shown because too many files have changed in this diff.