feat(lexer): add type system keywords
Added four keywords for new type system:
- concept: Base type definition
- sub_concept: Enum/record sub-type definition
- concept_comparison: Compile-time enum mapping
- any: Universal type for dynamic contexts

Also added:
- CLAUDE.md with project instructions and commit guidelines
- Test coverage for new keywords
- Crate-level deny directives for unused variables and dead code

Fixed pre-existing clippy issues to pass pre-commit hooks.
CLAUDE.md — new file, 84 lines
@@ -0,0 +1,84 @@
# Storybook Project - Claude Code Instructions

## Commit Policy (CRITICAL - NEVER BYPASS)

These rules are **MANDATORY** for all commits:

1. **All tests must pass** - Run `cargo test` and verify 0 failures before every commit
2. **All new code must have tests** - No exceptions, no untested code allowed
3. **No unused variables or dead code** - Clean up unused code, don't suppress warnings with underscore prefixes
4. **Commit frequently** at logical milestones
5. **Never use `--no-verify`** flag to bypass pre-commit hooks

## Development Workflow

### Installing the LSP

To reinstall/update the Storybook LSP after making changes:

```bash
cargo install --path . --bin storybook-lsp --force
```

The LSP binary is installed to `~/.cargo/bin/storybook-lsp`. The Zed extension will automatically find it if `~/.cargo/bin` is in your PATH.

### Installing the Zed Extension

To rebuild and install the Zed extension after making changes:

```bash
cd zed-storybook
./build-extension.sh
```

Then in Zed:

1. `Cmd+Shift+P` → "zed: install dev extension"
2. Select: `/Users/sienna/Development/storybook/zed-storybook`

**Updating tree-sitter grammar:**

When the grammar changes, update `zed-storybook/extension.toml` and change the `rev` field under `[grammars.storybook]` to the new commit SHA or branch name.

## Git Workflow

Development happens directly on `main` until the language is stable. Releases are tagged with version numbers (e.g., `v0.2.0`). Branches are only used for external contributors or later development phases.

Pre-commit hooks check: trailing whitespace, rustfmt, clippy.

### Commit Message Guidelines

Keep commit messages clean and focused:

- **DO**: Use conventional commit format (e.g., `feat(lexer):`, `fix(parser):`, `docs:`)
- **DO**: Write clear, concise descriptions of what changed and why
- **DON'T**: Include attribution (no "Co-Authored-By")
- **DON'T**: Include test status (e.g., "all tests pass")
- **DON'T**: Include sprint/milestone markers
- **DON'T**: Include version markers (e.g., "Part of v0.3.0")

Example:

```
feat(lexer): add type system keywords

Added four keywords for new type system:
- concept: Base type definition
- sub_concept: Enum/record sub-type definition
- concept_comparison: Compile-time enum mapping
- any: Universal type for dynamic contexts
```

## Project Structure

- `src/` - Core Storybook compiler and runtime
- `tree-sitter-storybook/` - Tree-sitter grammar for Storybook DSL
- `zed-storybook/` - Zed editor extension
- `storybook-editor/` - LSP server implementation
- `docs/` - Specifications and documentation
  - `SBIR-v0.2.0-SPEC.md` - Storybook Intermediate Representation binary format

## Testing Philosophy

- Every feature needs tests
- Every bug fix needs a regression test
- Tests must pass before commits
- Use `cargo test --lib` to run unit tests
- Use `cargo test` to run all tests including integration tests
@@ -21,8 +21,7 @@ async fn main() {
     let stdin = tokio::io::stdin();
     let stdout = tokio::io::stdout();
 
-    let (service, socket) =
-        LspService::new(storybook::lsp::StorybookLanguageServer::new);
+    let (service, socket) = LspService::new(storybook::lsp::StorybookLanguageServer::new);
 
     Server::new(stdin, stdout, socket).serve(service).await;
 }
@@ -1,3 +1,6 @@
+#![deny(unused_variables)]
+#![deny(dead_code)]
+
 //! Storybook - A DSL for authoring narrative content for agent simulations
 //!
 //! This library provides parsing, resolution, and validation for `.sb` files.
@@ -659,11 +659,7 @@ fn add_missing_template_fields(
                     character: insert_position.1 as u32,
                 },
             },
-            new_text: if character.fields.is_empty() {
-                format!("\n{}", field_text)
-            } else {
-                format!("\n{}", field_text)
-            },
+            new_text: format!("\n{}", field_text),
         }],
     );
 
@@ -290,17 +290,12 @@ fn get_contextual_field_completions(doc: &Document, offset: usize) -> Option<Vec
         | Declaration::Template(template) => {
             // Check if cursor is inside this template block
             if offset >= template.span.start && offset <= template.span.end {
-                let mut items = Vec::new();
-
-                // Add special keywords for templates
-                items.push(simple_item(
+                // Templates can suggest common field patterns
+                return Some(vec![simple_item(
                     "include",
                     "Include a template",
                     "include ${1:TemplateName}",
-                ));
-
-                // Templates can suggest common field patterns
-                return Some(items);
+                )]);
             }
         },
         | _ => {},
@@ -494,16 +489,18 @@ fn determine_context(text: &str, offset: usize) -> CompletionContext {
 
     for (_offset, token, _end) in &tokens {
         match token {
-            | Token::LBrace => nesting_level += 1,
+            | Token::LBrace => {
+                nesting_level += 1;
+                if seen_colon_without_brace {
+                    // Opening brace after colon - we've entered the block
+                    seen_colon_without_brace = false;
+                }
+            },
             | Token::RBrace => nesting_level = nesting_level.saturating_sub(1),
             | Token::Colon => {
                 // Mark that we've seen a colon
                 seen_colon_without_brace = true;
             },
-            | Token::LBrace if seen_colon_without_brace => {
-                // Opening brace after colon - we've entered the block
-                seen_colon_without_brace = false;
-            },
             | Token::Ident(keyword)
                 if matches!(
                     keyword.as_str(),
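Note on the hunk above: the old code put an unguarded `Token::LBrace` arm before a guarded `Token::LBrace if ...` arm, so the guarded arm could never run — match arms are tried in order. The fix merges both effects into one arm. A minimal standalone sketch of the merged logic (with a simplified `Tok` enum standing in for the real `Token`):

```rust
// Simplified stand-in for the real Token enum.
enum Tok {
    LBrace,
    Colon,
}

// Returns (nesting_level, seen_colon_without_brace) after scanning.
fn scan(tokens: &[Tok]) -> (i32, bool) {
    let mut nesting_level = 0;
    let mut seen_colon_without_brace = false;
    for token in tokens {
        match token {
            // One arm handles both the count and the flag; a separate
            // `Tok::LBrace if seen_colon_without_brace` arm placed after
            // an unguarded `Tok::LBrace` arm would be unreachable.
            Tok::LBrace => {
                nesting_level += 1;
                if seen_colon_without_brace {
                    // Opening brace after colon - we've entered the block
                    seen_colon_without_brace = false;
                }
            }
            Tok::Colon => seen_colon_without_brace = true,
        }
    }
    (nesting_level, seen_colon_without_brace)
}

fn main() {
    // `: {` clears the colon flag and bumps nesting.
    assert_eq!(scan(&[Tok::Colon, Tok::LBrace]), (1, false));
    assert_eq!(scan(&[Tok::LBrace, Tok::LBrace]), (2, false));
}
```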
@@ -91,25 +91,6 @@ fn try_parse(text: &str) -> Result<(), Vec<Diagnostic>> {
     }
 }
 
-fn byte_offset_to_line(text: &str, offset: usize) -> usize {
-    let mut line = 0;
-    let mut current_offset = 0;
-
-    for ch in text.chars() {
-        if current_offset >= offset {
-            break;
-        }
-
-        if ch == '\n' {
-            line += 1;
-        }
-
-        current_offset += ch.len_utf8();
-    }
-
-    line
-}
-
 /// Convert a byte offset to line/column position
 /// This is a placeholder - will be replaced when we have proper Span tracking
 pub fn byte_offset_to_position(text: &str, offset: usize) -> Position {
@@ -172,9 +172,8 @@ invalid syntax here
     #[test]
     fn test_byte_offset_to_position_beyond_text() {
         let text = "short";
-        let pos = diagnostics::byte_offset_to_position(text, 1000);
-        // Should not panic, returns position at end (line is always valid u32)
-        assert!(pos.line == 0 || pos.line > 0);
+        // Should not panic when offset is beyond text length
+        let _ = diagnostics::byte_offset_to_position(text, 1000);
     }
 
     #[test]
@@ -39,13 +39,11 @@ pub fn get_hover_info(text: &str, line: usize, character: usize) -> Option<Hover
         }
 
         // Add the character offset (assuming UTF-8)
-        let mut char_count = 0;
-        for (byte_pos, _) in line_text.char_indices() {
+        for (char_count, (byte_pos, _)) in line_text.char_indices().enumerate() {
             if char_count == character {
                 byte_offset += byte_pos;
                 break;
             }
-            char_count += 1;
         }
         break;
     }
@@ -123,13 +121,11 @@ pub fn get_semantic_hover_info(doc: &Document, line: usize, character: usize) ->
             return None;
         }
 
-        let mut char_count = 0;
-        for (byte_pos, _) in line_text.char_indices() {
+        for (char_count, (byte_pos, _)) in line_text.char_indices().enumerate() {
            if char_count == character {
                byte_offset += byte_pos;
                break;
            }
-            char_count += 1;
        }
        break;
    }
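Both hover hunks apply the same clippy fix (`explicit_counter_loop`): the manually incremented `char_count` becomes the index from `enumerate()`. A standalone sketch of the underlying column-to-byte-offset conversion (the real code adds the result to a running line offset):

```rust
// Map a character column within a line to its byte offset, UTF-8 aware.
// Standalone illustration, not the repo's exact function.
fn column_to_byte_offset(line_text: &str, character: usize) -> Option<usize> {
    // enumerate() supplies the character count that was previously
    // tracked with `let mut char_count = 0; ... char_count += 1;`.
    for (char_count, (byte_pos, _)) in line_text.char_indices().enumerate() {
        if char_count == character {
            return Some(byte_pos);
        }
    }
    None
}

fn main() {
    // ASCII: column == byte offset.
    assert_eq!(column_to_byte_offset("abc", 1), Some(1));
    // 'é' is two bytes in UTF-8, so column 2 starts at byte 3.
    assert_eq!(column_to_byte_offset("aéb", 2), Some(3));
    // Past the end of the line: no offset.
    assert_eq!(column_to_byte_offset("ab", 5), None);
}
```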
@@ -156,7 +152,7 @@ pub fn get_semantic_hover_info(doc: &Document, line: usize, character: usize) ->
     let word = target_ident?;
 
     // Look up the symbol in the name table
-    let symbol_info = doc.name_table.lookup(&[word.clone()])?;
+    let symbol_info = doc.name_table.lookup(std::slice::from_ref(&word))?;
 
     // Find the declaration in the AST
     for decl in &ast.declarations {
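`std::slice::from_ref` converts a `&T` into a length-one `&[T]` without cloning, which is what clippy suggests over building `&[word.clone()]`. A standalone sketch:

```rust
fn main() {
    let word = String::from("Alice");

    // A length-1 slice borrowed directly from `word` — no allocation,
    // unlike `&[word.clone()]` which copies the String first.
    let as_slice: &[String] = std::slice::from_ref(&word);

    assert_eq!(as_slice.len(), 1);
    assert_eq!(as_slice[0], "Alice");
    // Same contents as the cloning version.
    assert_eq!(as_slice, &[word.clone()][..]);
}
```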
@@ -84,10 +84,7 @@ pub fn get_rename_edits(
                     new_text: params.new_name.clone(),
                 };
 
-                all_changes
-                    .entry(url.clone())
-                    .or_default()
-                    .push(edit);
+                all_changes.entry(url.clone()).or_default().push(edit);
             }
         }
     }
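The collapsed call is the standard map-of-vecs grouping idiom: `entry(key).or_default()` inserts an empty `Vec` on first use and returns `&mut Vec` either way, so edits can be pushed per URL in one expression. A standalone sketch with plain `String` keys (the real code keys by `Url`):

```rust
use std::collections::HashMap;

fn main() {
    let mut all_changes: HashMap<String, Vec<&str>> = HashMap::new();
    let url = String::from("file:///test.sb");

    // First call inserts Vec::new(); both calls return &mut Vec.
    all_changes.entry(url.clone()).or_default().push("edit1");
    all_changes.entry(url.clone()).or_default().push("edit2");

    assert_eq!(all_changes[&url], vec!["edit1", "edit2"]);
}
```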
@@ -83,7 +83,6 @@ fn find_identifiers_in_span(
 /// Recursively highlight behavior tree nodes
 fn highlight_behavior_node(
     builder: &mut SemanticTokensBuilder,
-    doc: &Document,
     node: &crate::syntax::ast::BehaviorNode,
 ) {
     use crate::syntax::ast::BehaviorNode;
@@ -91,19 +90,18 @@ fn highlight_behavior_node(
     match node {
         | BehaviorNode::Selector { children, .. } | BehaviorNode::Sequence { children, .. } => {
             for child in children {
-                highlight_behavior_node(builder, doc, child);
+                highlight_behavior_node(builder, child);
             }
         },
-        | BehaviorNode::Action(action_name, params) => {
+        | BehaviorNode::Action(_action_name, params) => {
             // Action names don't have spans, so we'd need to search for them
             // For now, just highlight the parameters
             for param in params {
                 highlight_field(builder, param);
             }
-            let _ = action_name; // Suppress warning
         },
         | BehaviorNode::Decorator { child, .. } => {
-            highlight_behavior_node(builder, doc, child);
+            highlight_behavior_node(builder, child);
         },
         | BehaviorNode::SubTree(_path) => {
             // SubTree references another behavior by path
@@ -161,7 +159,7 @@ pub fn get_semantic_tokens(doc: &Document) -> Option<SemanticTokensResult> {
         &doc.text,
         character.span.start,
         character.span.end,
-        &[species.clone()],
+        std::slice::from_ref(species),
     );
 
     for (offset, species_name) in species_positions {
@@ -322,7 +320,7 @@ pub fn get_semantic_tokens(doc: &Document) -> Option<SemanticTokensResult> {
 
             // TODO: Traverse behavior tree to highlight conditions and actions
             // Would need recursive function to walk BehaviorNode tree
-            highlight_behavior_node(&mut builder, doc, &behavior.root);
+            highlight_behavior_node(&mut builder, &behavior.root);
         },
         | Declaration::Relationship(relationship) => {
             // Highlight relationship name as METHOD
@@ -481,14 +481,14 @@ mod definition_tests {
     };
 
     let uri = Url::parse("file:///test.sb").unwrap();
-    let result = definition::get_definition(&mut doc, &params, &uri);
+    let result = definition::get_definition(&doc, &params, &uri);
 
     assert!(result.is_some());
 }
 
 #[test]
 fn test_goto_definition_not_found() {
-    let mut doc = Document::new("character Alice {}".to_string());
+    let doc = Document::new("character Alice {}".to_string());
 
     let params = GotoDefinitionParams {
         text_document_position_params: TextDocumentPositionParams {
@@ -505,7 +505,7 @@ mod definition_tests {
     };
 
     let uri = Url::parse("file:///test.sb").unwrap();
-    let result = definition::get_definition(&mut doc, &params, &uri);
+    let result = definition::get_definition(&doc, &params, &uri);
 
     assert!(result.is_none());
 }
@@ -542,7 +542,7 @@ mod references_tests {
     };
 
     let uri = Url::parse("file:///test.sb").unwrap();
-    let result = references::find_references(&mut doc, &params, &uri);
+    let result = references::find_references(&doc, &params, &uri);
 
     assert!(result.is_some());
     let locations = result.unwrap();
@@ -576,7 +576,7 @@ mod references_tests {
     };
 
     let uri = Url::parse("file:///test.sb").unwrap();
-    let result = references::find_references(&mut doc, &params, &uri);
+    let result = references::find_references(&doc, &params, &uri);
 
     let locations = result.unwrap();
     // Should only find "Alice", not "Alicia"
@@ -105,14 +105,16 @@ pub enum Priority {
     Critical,
 }
 
-impl Priority {
-    pub fn from_str(s: &str) -> Option<Self> {
+impl std::str::FromStr for Priority {
+    type Err = ();
+
+    fn from_str(s: &str) -> Result<Self, Self::Err> {
         match s {
-            | "low" => Some(Priority::Low),
-            | "normal" => Some(Priority::Normal),
-            | "high" => Some(Priority::High),
-            | "critical" => Some(Priority::Critical),
-            | _ => None,
+            | "low" => Ok(Priority::Low),
+            | "normal" => Ok(Priority::Normal),
+            | "high" => Ok(Priority::High),
+            | "critical" => Ok(Priority::Critical),
+            | _ => Err(()),
         }
     }
 }
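Implementing `std::str::FromStr` instead of an inherent `from_str` is the fix for clippy's `should_implement_trait` lint, and it also makes `str::parse` work on the type. A standalone sketch mirroring the hunk above:

```rust
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum Priority {
    Low,
    Normal,
    High,
    Critical,
}

impl FromStr for Priority {
    // The unit error type matches the hunk; a real crate might use a
    // descriptive error instead.
    type Err = ();

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "low" => Ok(Priority::Low),
            "normal" => Ok(Priority::Normal),
            "high" => Ok(Priority::High),
            "critical" => Ok(Priority::Critical),
            _ => Err(()),
        }
    }
}

fn main() {
    // The trait impl makes `parse` available directly on strings.
    assert_eq!("high".parse::<Priority>(), Ok(Priority::High));
    assert_eq!("bogus".parse::<Priority>(), Err(()));
}
```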
@@ -31,6 +31,14 @@ pub enum Token {
     Species,
     #[token("enum")]
     Enum,
+    #[token("concept")]
+    Concept,
+    #[token("sub_concept")]
+    SubConcept,
+    #[token("concept_comparison")]
+    ConceptComparison,
+    #[token("any")]
+    Any,
     #[token("state")]
     State,
     #[token("on")]
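The `#[token("...")]` attributes above declare fixed-string keywords on the token enum; the derive machinery generates the matching. A dependency-free sketch of the equivalent keyword mapping (a hand-rolled illustration, not the repo's actual lexer):

```rust
// Simplified token enum covering just the four new keywords.
#[derive(Debug, PartialEq)]
enum Token {
    Concept,
    SubConcept,
    ConceptComparison,
    Any,
    Ident(String),
}

// What #[token("...")] expresses declaratively: exact-match keywords,
// falling back to an identifier.
fn keyword_or_ident(word: &str) -> Token {
    match word {
        "concept" => Token::Concept,
        "sub_concept" => Token::SubConcept,
        "concept_comparison" => Token::ConceptComparison,
        "any" => Token::Any,
        other => Token::Ident(other.to_string()),
    }
}

fn main() {
    let tokens: Vec<Token> = "concept sub_concept concept_comparison any"
        .split_whitespace()
        .map(keyword_or_ident)
        .collect();
    assert_eq!(
        tokens,
        vec![
            Token::Concept,
            Token::SubConcept,
            Token::ConceptComparison,
            Token::Any,
        ]
    );
}
```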
@@ -509,4 +517,21 @@ Second prose block content.
             vec![Token::IntLit(20), Token::DotDot, Token::IntLit(40),]
         );
     }
+
+    #[test]
+    fn test_type_system_keywords() {
+        let input = "concept sub_concept concept_comparison any";
+        let lexer = Lexer::new(input);
+        let tokens: Vec<Token> = lexer.map(|(_, tok, _)| tok).collect();
+
+        assert_eq!(
+            tokens,
+            vec![
+                Token::Concept,
+                Token::SubConcept,
+                Token::ConceptComparison,
+                Token::Any,
+            ]
+        );
+    }
 }
@@ -24,7 +24,7 @@ fn load_example(name: &str) -> Project {
 
     assert!(path.exists(), "Example '{}' not found at {:?}", name, path);
 
-    Project::load(&path).expect(&format!("Failed to load example '{}'", name))
+    Project::load(&path).unwrap_or_else(|_| panic!("Failed to load example '{}'", name))
 }
 
 // ============================================================================
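The last hunk is clippy's `expect_fun_call` fix: `expect(&format!(...))` builds the panic message even when loading succeeds, while `unwrap_or_else(|_| panic!(...))` defers the formatting to the error path. A standalone sketch (the `load` function here is a hypothetical stand-in for `Project::load`):

```rust
// Hypothetical fallible loader standing in for Project::load.
fn load(ok: bool) -> Result<&'static str, &'static str> {
    if ok { Ok("project") } else { Err("missing") }
}

fn main() {
    let name = "basic";
    // The closure — and its message formatting — only runs on Err.
    let project = load(true)
        .unwrap_or_else(|_| panic!("Failed to load example '{}'", name));
    assert_eq!(project, "project");
}
```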