use crate::io::story_metadata::{TestPlanStatus, parse_front_matter};
use crate::state::SessionState;
use crate::store::StoreOps;
use serde::Serialize;
use serde_json::json;
use std::fs;
use std::path::{Path, PathBuf};

const KEY_LAST_PROJECT: &str = "last_project_path";
const KEY_SELECTED_MODEL: &str = "selected_model";
const KEY_KNOWN_PROJECTS: &str = "known_projects";

const STORY_KIT_README: &str = r#"# Story Kit: The Story-Driven Spec Workflow (SDSW)

**Target Audience:** Large Language Models (LLMs) acting as Senior Engineers.
**Goal:** To maintain long-term project coherence, prevent context window exhaustion, and ensure high-quality, testable code generation in large software projects.

---

## 1. The Philosophy

We treat the codebase as the implementation of a **"Living Specification"**, driven by **User Stories**.
Instead of ephemeral chat prompts ("Fix this", "Add that"), we work through persistent artifacts.

* **Stories** define the *Change*.
* **Specs** define the *Truth*.
* **Code** defines the *Reality*.

**The Golden Rule:** You are not allowed to write code until the Spec reflects the new reality requested by the Story.

---

## 2. Directory Structure

When initializing a new project under this workflow, create the following structure immediately:

```text
project_root/
├── .story_kit/
│   ├── README.md          # This document
│   ├── stories/           # The "Inbox" of feature requests.
│   └── specs/             # The "Brain" of the project.
│       ├── README.md      # Explains this workflow to future sessions.
│       ├── 00_CONTEXT.md  # High-level goals, domain definition, and glossary.
│       ├── tech/          # Implementation details (Stack, Architecture, Constraints).
│       │   └── STACK.md   # The "Constitution" (Languages, Libs, Patterns).
│       └── functional/    # Domain logic (platform-agnostic behavior).
│           ├── 01_CORE.md
│           └── ...
└── src/                   # The Code.
```

---

## 3. The Cycle (The "Loop")

When the user asks for a feature, follow this 4-step loop strictly:

### Step 1: The Story (Ingest)

* **User Input:** "I want the robot to dance."
* **Action:** Create a file `stories/XX_robot_dance.md`.
* **Content:**
  * **User Story:** "As a user, I want..."
  * **Acceptance Criteria:** Bullet points of observable success.
  * **Out of scope:** Explicit exclusions, so the LLM doesn't wander beyond the request.
* **Git:** Make a local feature branch for the story, named from the story (e.g., `feature/story-33-camera-format-auto-selection`). You must create and switch to the feature branch before making any edits.
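
The branch name can be derived mechanically from the story file name; a minimal sketch (the helper name is illustrative, not part of this codebase):

```rust
/// Illustrative helper: derive a feature-branch name from a story file name,
/// e.g. "33_camera_format_auto_selection.md" becomes
/// "feature/story-33-camera-format-auto-selection".
fn branch_name_for_story(file_name: &str) -> String {
    // Drop the ".md" extension if present, then swap underscores for hyphens.
    let stem = file_name.strip_suffix(".md").unwrap_or(file_name);
    format!("feature/story-{}", stem.replace('_', "-"))
}
```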

### Step 2: The Spec (Digest)

* **Action:** Update the files in `specs/`.
* **Logic:**
  * Does `specs/functional/LOCOMOTION.md` exist? If not, create it.
  * Add the "Dance" state to the state machine definition in the spec.
  * Check `specs/tech/STACK.md`: Do we have an approved animation library? If not, propose adding one to the Stack or reject the feature.
* **Output:** Show the user the diff of the Spec. **Wait for approval.**

### Step 3: The Implementation (Code)

* **Action:** Write the code to match the *Spec* (not just the Story).
* **Constraint:** Adhere strictly to `specs/tech/STACK.md` (e.g., if it says "No `unwrap()`", you must not use `unwrap()`).

### Step 4: Verification (Close)

* **Action:** Write a test case that maps directly to the Acceptance Criteria in the Story.
* **Action:** Run compilation and make sure it succeeds without errors. Consult `specs/tech/STACK.md` and run all required linters listed there (treat warnings as errors). Run the tests and make sure they all pass before proceeding. Ask questions here if needed.
* **Action:** Do not accept stories yourself. Ask the user whether they accept the story. Tell the user they should commit first (this gives them the chance to exclude files via `.gitignore` if necessary).
* **Action:** When the user accepts:
  1. Move the story file to `stories/archive/` (e.g., `mv stories/XX_story_name.md stories/archive/`)
  2. Commit both changes to the feature branch
  3. Perform the squash merge: `git merge --squash feature/story-name`
  4. Commit to master with a comprehensive commit message
  5. Delete the feature branch: `git branch -D feature/story-name`
* **Important:** Do NOT mark acceptance criteria as complete before user acceptance. Only mark them complete when the user explicitly accepts the story.

**CRITICAL - NO SUMMARY DOCUMENTS:**

* **NEVER** create a separate summary document (e.g., `STORY_XX_SUMMARY.md`, `IMPLEMENTATION_NOTES.md`, etc.)
* **NEVER** write terminal output to a markdown file for "documentation purposes"
* The `specs/` folder IS the documentation. Keep it updated after each story.
* If you find yourself typing `cat << 'EOF' > SUMMARY.md` or similar, **STOP IMMEDIATELY**.
* The only files that should exist after story completion:
  * Updated code in `src/`
  * Updated specs in `specs/`
  * Archived story in `stories/archive/`

---

## 3.5. Bug Workflow (Simplified Path)

Not everything needs to be a full story. Simple bugs can skip the story process:

### When to Use Bug Workflow

* Defects in existing functionality (not new features)
* State inconsistencies or data corruption
* UI glitches that don't require spec changes
* Performance issues with known fixes

### Bug Process

1. **Document Bug:** Create `bugs/bug-N-short-description.md` with:
   * **Symptom:** What the user observes
   * **Root Cause:** Technical explanation (if known)
   * **Reproduction Steps:** How to trigger the bug
   * **Proposed Fix:** Brief technical approach
   * **Workaround:** Temporary solution, if available
2. **Fix Immediately:** Make minimal code changes to fix the bug
3. **Archive:** Move fixed bugs to `bugs/archive/` when complete
4. **No Spec Update Needed:** Unless the bug reveals a spec deficiency

### Bug vs Story

* **Bug:** Existing functionality is broken → Fix it
* **Story:** New functionality is needed → Spec it, then build it
* **Spike:** Uncertainty/feasibility discovery → Run the spike workflow

---

## 3.6. Spike Workflow (Research Path)

Not everything needs a story or a bug fix. Spikes are time-boxed investigations to reduce uncertainty.

### When to Use a Spike

* Unclear root cause or feasibility
* Need to compare libraries/encoders/formats
* Need to validate performance constraints

### Spike Process

1. **Document Spike:** Create `spikes/spike-N-short-description.md` with:
   * **Question:** What you need to answer
   * **Hypothesis:** What you expect to be true
   * **Timebox:** Strict limit for the research
   * **Investigation Plan:** Steps/tools to use
   * **Findings:** Evidence and observations
   * **Recommendation:** Next step (Story, Bug, or No Action)
2. **Execute Research:** Stay within the timebox. No production code changes.
3. **Escalate if Needed:** If implementation is required, open a Story or Bug and follow that workflow.
4. **Archive:** Move completed spikes to `spikes/archive/`.

### Spike Output

* Decision and evidence, not production code
* Specs updated only if the spike changes system truth

---

## 4. Context Reset Protocol

When the LLM context window fills up (or the chat gets slow or confused):

1. **Stop Coding.**
2. **Instruction:** Tell the user to open a new chat.
3. **Handoff:** The only context the new LLM needs is in the `specs/` folder.
   * *Prompt for New Session:* "I am working on Project X. Read `specs/00_CONTEXT.md` and `specs/tech/STACK.md`. Then look at `stories/` to see what is pending."

---

## 5. Setup Instructions (For the LLM)

If a user hands you this document and says "Apply this process to my project":

1. **Analyze the Request:** Ask for the high-level goal ("What are we building?") and the tech preferences ("Rust or Python?").
2. **Git Check:** Check whether the directory is a git repository (`git status`). If not, run `git init`.
3. **Scaffold:** Run commands to create the `specs/` and `stories/` folders.
4. **Draft Context:** Write `specs/00_CONTEXT.md` based on the user's answers.
5. **Draft Stack:** Write `specs/tech/STACK.md` based on best practices for that language.
6. **Wait:** Ask the user for "Story #1".

---

## 6. Code Quality Tools

**MANDATORY:** Before completing Step 4 (Verification) of any story, you MUST run all applicable linters and fix ALL errors and warnings. Zero tolerance for warnings or errors.

**AUTO-RUN CHECKS:** Always run the required lint/test/build checks as soon as relevant changes are made. Do not ask for permission to run them; run them automatically and fix any failures.

**ALWAYS FIX DIAGNOSTICS:** At every stage, you must proactively fix all errors and warnings without waiting for user confirmation. Do not pause to ask whether to fix diagnostics; fix them immediately as part of the workflow.

### TypeScript/JavaScript: Biome

* **Tool:** [Biome](https://biomejs.dev/) - fast formatter and linter
* **Check Command:** `npx @biomejs/biome check src/`
* **Fix Command:** `npx @biomejs/biome check --write src/`
* **Unsafe Fixes:** `npx @biomejs/biome check --write --unsafe src/`
* **Configuration:** `biome.json` in the project root
* **When to Run:**
  * After every code change to TypeScript/React files
  * Before committing any frontend changes
  * During Step 4 (Verification) - must show 0 errors, 0 warnings

**Biome Rules to Follow:**

* No `any` types (use proper TypeScript types or `unknown`)
* No array index as `key` in React (use stable IDs)
* No assignments in expressions (extract to separate statements)
* All buttons must have an explicit `type` prop (`button`, `submit`, or `reset`)
* Mouse events must be accompanied by keyboard events for accessibility
* Use template literals instead of string concatenation
* Import types with the `import type { }` syntax
* Organize imports automatically
"#;

const STORY_KIT_SPECS_README: &str = r#"# Project Specs

This folder contains the "Living Specification" for the project. It serves as the source of truth for all AI sessions.

## Structure

* **00_CONTEXT.md**: The high-level overview, goals, domain definition, and glossary. Start here.
* **tech/**: Implementation details, including the Tech Stack, Architecture, and Constraints.
  * **STACK.md**: The technical "Constitution" (Languages, Libraries, Patterns).
* **functional/**: Domain logic and behavior descriptions, platform-agnostic.
  * **01_CORE.md**: Core functional specifications.

## Usage for LLMs

1. **Always read 00_CONTEXT.md** and **tech/STACK.md** at the beginning of a session.
2. Before writing code, ensure the spec in this folder reflects the desired reality.
3. If a Story changes behavior, update the spec *first*, get approval, then write code.
"#;

const STORY_KIT_CONTEXT: &str = r#"# Project Context

## High-Level Goal

To build a standalone **Agentic AI Code Assistant** application as a single Rust binary that serves a Vite/React web UI and exposes a WebSocket API. The assistant will facilitate a "Story-Driven Spec Workflow" (SDSW) for software development. Unlike a passive chat interface, this assistant acts as an **Agent**, capable of using tools to read the filesystem, execute shell commands, manage git repositories, and modify code directly to implement features.

## Core Features

1. **Chat Interface:** A conversational UI for the user to interact with the AI assistant.
2. **Agentic Tool Bridge:** A robust system mapping LLM "Tool Calls" to native Rust functions.
   * **Filesystem:** Read/Write access (scoped to the target project).
   * **Search:** High-performance file searching (ripgrep-style) and content retrieval.
   * **Shell Integration:** Ability to execute approved commands (e.g., `cargo`, `npm`, `git`) to run tests, linters, and version control.
3. **Workflow Management:** Specialized tools to manage the SDSW lifecycle:
   * Ingesting stories.
   * Updating specs.
   * Implementing code.
   * Verifying results (running tests).
4. **LLM Integration:** Connection to an LLM backend to drive the intelligence and tool selection.
   * **Remote:** Support for major APIs (Anthropic Claude, Google Gemini, OpenAI, etc.).
   * **Local:** Support for local inference via Ollama.

## Domain Definition

* **User:** A software engineer using the assistant to build a project.
* **Target Project:** The local software project the user is working on.
* **Agent:** The AI entity that receives prompts and decides which **Tools** to invoke to solve the problem.
* **Tool:** A discrete function exposed to the Agent (e.g., `run_shell_command`, `write_file`, `search_project`).
* **Story:** A unit of work defining a change (Feature Request).
* **Spec:** A persistent documentation artifact defining the current truth of the system.

## Glossary

* **SDSW:** Story-Driven Spec Workflow.
* **Web Server Binary:** The Rust binary that serves the Vite/React frontend and exposes the WebSocket API.
* **Living Spec:** The collection of Markdown files in `.story_kit/` that define the project.
* **Tool Call:** A structured request from the LLM to execute a specific native function.
"#;

const STORY_KIT_STACK: &str = r#"# Tech Stack & Constraints

## Overview

This project is a standalone Rust **web server binary** that serves a Vite/React frontend and exposes a **WebSocket API**. The built frontend assets are packaged with the binary (in a `frontend` directory) and served as static files. It functions as an **Agentic Code Assistant** capable of safely executing tools on the host system.

## Core Stack

* **Backend:** Rust (Web Server)
  * **MSRV:** Stable (latest)
  * **Framework:** Poem HTTP server with WebSocket support for streaming; HTTP APIs should use Poem OpenAPI (Swagger) for non-streaming endpoints.
* **Frontend:** TypeScript + React
  * **Build Tool:** Vite
  * **Package Manager:** pnpm (required)
  * **Styling:** CSS Modules or Tailwind (TBD - defaulting to CSS Modules)
  * **State Management:** React Context / Hooks
  * **Chat UI:** Rendered Markdown with syntax highlighting.

## Agent Architecture

The application follows a **Tool-Use (Function Calling)** architecture:

1. **Frontend:** Collects user input and sends it to the LLM.
2. **LLM:** Decides to generate text OR request a **Tool Call** (e.g., `execute_shell`, `read_file`).
3. **Web Server Backend (The "Hand"):**
   * Intercepts Tool Calls.
   * Validates the request against the **Safety Policy**.
   * Executes the native code (File I/O, Shell Process, Search).
   * Returns the output (stdout/stderr/file content) to the LLM.

* **Streaming:** The backend sends real-time updates over WebSocket to keep the UI responsive during long-running Agent tasks.

## LLM Provider Abstraction

To support both Remote and Local models, the system implements a `ModelProvider` abstraction layer.

* **Strategy:**
  * Abstract the differences between API formats (OpenAI-compatible vs Anthropic vs Gemini).
  * Normalize "Tool Use" definitions, as each provider handles function calling schemas differently.
* **Supported Providers:**
  * **Ollama:** Local inference (e.g., Llama 3, DeepSeek Coder) for privacy and offline usage.
  * **Anthropic:** Claude 3.5 models (Sonnet, Haiku) via API for coding tasks (Story 12).
* **Provider Selection:**
  * Automatic detection based on model name prefix:
    * `claude-` → Anthropic API
    * Otherwise → Ollama
  * Single unified model dropdown with section headers ("Anthropic", "Ollama")
* **API Key Management:**
  * Anthropic API key stored server-side and persisted securely
  * On first use of a Claude model, the user is prompted to enter an API key
  * Key persists across sessions (no re-entry needed)
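
The prefix rule above can be captured in a small routing function; a minimal sketch of just the model-name convention described here (the enum and function names are illustrative):

```rust
/// Which backend should handle a given model name.
#[derive(Debug, PartialEq, Eq)]
enum Provider {
    Anthropic,
    Ollama,
}

/// Illustrative sketch of the prefix-based routing rule:
/// `claude-` models go to the Anthropic API, everything else to Ollama.
fn select_provider(model_name: &str) -> Provider {
    if model_name.starts_with("claude-") {
        Provider::Anthropic
    } else {
        Provider::Ollama
    }
}
```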

## Tooling Capabilities

### 1. Filesystem (Native)

* **Scope:** Strictly limited to the user-selected `project_root`.
* **Operations:** Read, Write, List, Delete.
* **Constraint:** Modifications to `.git/` are strictly forbidden via file APIs (use Git tools instead).

### 2. Shell Execution

* **Library:** `tokio::process` for async execution.
* **Constraint:** We do **not** run an interactive shell (REPL). We run discrete, stateless commands.
* **Allowlist:** The agent may only execute specific binaries:
  * `git`
  * `cargo`, `rustc`, `rustfmt`, `clippy`
  * `npm`, `node`, `yarn`, `pnpm`, `bun`
  * `ls`, `find`, `grep` (if not using internal search)
  * `mkdir`, `rm`, `touch`, `mv`, `cp`
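
Enforcement can be as simple as checking the first token of the command line before spawning the process; a minimal sketch (the function name is illustrative, not part of this codebase):

```rust
/// Binaries the agent is permitted to spawn (mirrors the allowlist above).
const ALLOWED_BINARIES: &[&str] = &[
    "git", "cargo", "rustc", "rustfmt", "clippy",
    "npm", "node", "yarn", "pnpm", "bun",
    "ls", "find", "grep", "mkdir", "rm", "touch", "mv", "cp",
];

/// Illustrative sketch: approve a command only if its first token is allowlisted.
fn is_command_allowed(command_line: &str) -> bool {
    command_line
        .split_whitespace()
        .next()
        .is_some_and(|bin| ALLOWED_BINARIES.contains(&bin))
}
```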

### 3. Search & Navigation

* **Library:** `ignore` (by BurntSushi) + `grep` logic.
* **Behavior:**
  * Must respect `.gitignore` files automatically.
  * Must be performant (parallel traversal).

## Coding Standards

### Rust

* **Style:** `rustfmt` standard.
* **Linter:** `clippy` - must pass with 0 warnings before merging.
* **Error Handling:** Custom `AppError` type deriving `thiserror`. All Commands return `Result<T, AppError>`.
* **Concurrency:** Heavy tools (Search, Shell) must run on `tokio` threads to avoid blocking the UI.
* **Quality Gates:**
  * `cargo clippy --all-targets --all-features` must show 0 errors, 0 warnings
  * `cargo check` must succeed
  * `cargo test` must pass all tests
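
The error type described above might look roughly like the following; shown with manual `Display`/`Error` impls so the sketch is self-contained (the real type derives these via `thiserror`, and the variants here are examples only):

```rust
use std::fmt;

/// Illustrative shape of the custom error type; variants are examples only.
#[derive(Debug)]
enum AppError {
    Io(String),
    CommandNotAllowed(String),
}

// With `thiserror`, this boilerplate collapses to `#[error("...")]` attributes.
impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::Io(msg) => write!(f, "I/O error: {msg}"),
            AppError::CommandNotAllowed(cmd) => write!(f, "command not allowed: {cmd}"),
        }
    }
}

impl std::error::Error for AppError {}
```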

### TypeScript / React

* **Style:** Biome formatter (replaces Prettier/ESLint).
* **Linter:** Biome - must pass with 0 errors, 0 warnings before merging.
* **Types:** Shared types with Rust (via `tauri-specta` or manual interface matching) are preferred to ensure type safety across the bridge.
* **Quality Gates:**
  * `npx @biomejs/biome check src/` must show 0 errors, 0 warnings
  * `npm run build` must succeed
  * No `any` types allowed (use proper types or `unknown`)
  * React keys must use stable IDs, not array indices
  * All buttons must have an explicit `type` attribute

## Libraries (Approved)

* **Rust:**
  * `serde`, `serde_json`: Serialization.
  * `ignore`: Fast recursive directory iteration respecting gitignore.
  * `walkdir`: Simple directory traversal.
  * `tokio`: Async runtime.
  * `reqwest`: For LLM API calls (Anthropic, Ollama).
  * `eventsource-stream`: For Server-Sent Events (Anthropic streaming).
  * `uuid`: For unique message IDs.
  * `chrono`: For timestamps.
  * `poem`: HTTP server framework.
  * `poem-openapi`: OpenAPI (Swagger) for non-streaming HTTP APIs.
* **JavaScript:**
  * `react-markdown`: For rendering chat responses.

## Safety & Sandbox

1. **Project Scope:** The application must strictly enforce that it does not read/write outside the `project_root` selected by the user.
2. **Human in the Loop:**
   * Shell commands that modify state (non-readonly) should ideally require a UI confirmation (configurable).
   * File writes must be confirmed or revertible."#;

pub fn get_home_directory() -> Result<String, String> {
    let home = homedir::my_home()
        .map_err(|e| format!("Failed to resolve home directory: {e}"))?
        .ok_or_else(|| "Home directory not found".to_string())?;
    Ok(home.to_string_lossy().to_string())
}

/// Resolves a relative path against the given project root (pure function for testing).
/// Returns an error if the path attempts traversal (`..`).
fn resolve_path_impl(root: PathBuf, relative_path: &str) -> Result<PathBuf, String> {
    if relative_path.contains("..") {
        return Err("Security Violation: Directory traversal ('..') is not allowed.".to_string());
    }

    Ok(root.join(relative_path))
}

fn is_story_kit_path(path: &str) -> bool {
    path == ".story_kit" || path.starts_with(".story_kit/")
}

async fn ensure_test_plan_approved(root: PathBuf) -> Result<(), String> {
    let approved = tokio::task::spawn_blocking(move || {
        let story_path = root
            .join(".story_kit")
            .join("stories")
            .join("current")
            .join("26_establish_tdd_workflow_and_gates.md");
        let contents = fs::read_to_string(&story_path)
            .map_err(|e| format!("Failed to read story file for test plan approval: {e}"))?;
        let metadata = parse_front_matter(&contents)
            .map_err(|e| format!("Failed to parse story front matter: {e:?}"))?;

        Ok::<bool, String>(matches!(metadata.test_plan, Some(TestPlanStatus::Approved)))
    })
    .await
    .map_err(|e| format!("Task failed: {e}"))??;

    if approved {
        Ok(())
    } else {
        Err("Test plan is not approved for the current story.".to_string())
    }
}

/// Resolves a relative path against the active project root.
/// Returns an error if no project is open or if the path attempts traversal (`..`).
fn resolve_path(state: &SessionState, relative_path: &str) -> Result<PathBuf, String> {
    let root = state.get_project_root()?;
    resolve_path_impl(root, relative_path)
}

/// Validate that a path exists and is a directory (pure function for testing).
async fn validate_project_path(path: PathBuf) -> Result<(), String> {
    tokio::task::spawn_blocking(move || {
        if !path.exists() {
            return Err(format!("Path does not exist: {}", path.display()));
        }
        if !path.is_dir() {
            return Err(format!("Path is not a directory: {}", path.display()));
        }
        Ok(())
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

fn write_file_if_missing(path: &Path, content: &str) -> Result<(), String> {
    if path.exists() {
        return Ok(());
    }
    fs::write(path, content).map_err(|e| format!("Failed to write file: {}", e))?;
    Ok(())
}

fn scaffold_story_kit(root: &Path) -> Result<(), String> {
    let story_kit_root = root.join(".story_kit");
    let specs_root = story_kit_root.join("specs");
    let tech_root = specs_root.join("tech");
    let functional_root = specs_root.join("functional");
    let stories_root = story_kit_root.join("stories");
    let archive_root = stories_root.join("archive");

    fs::create_dir_all(&tech_root).map_err(|e| format!("Failed to create specs/tech: {}", e))?;
    fs::create_dir_all(&functional_root)
        .map_err(|e| format!("Failed to create specs/functional: {}", e))?;
    fs::create_dir_all(&archive_root)
        .map_err(|e| format!("Failed to create stories/archive: {}", e))?;

    write_file_if_missing(&story_kit_root.join("README.md"), STORY_KIT_README)?;
    write_file_if_missing(&specs_root.join("README.md"), STORY_KIT_SPECS_README)?;
    write_file_if_missing(&specs_root.join("00_CONTEXT.md"), STORY_KIT_CONTEXT)?;
    write_file_if_missing(&tech_root.join("STACK.md"), STORY_KIT_STACK)?;

    Ok(())
}

async fn ensure_project_root_with_story_kit(path: PathBuf) -> Result<(), String> {
    tokio::task::spawn_blocking(move || {
        if !path.exists() {
            fs::create_dir_all(&path)
                .map_err(|e| format!("Failed to create project directory: {}", e))?;
            scaffold_story_kit(&path)?;
        }
        Ok(())
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

pub async fn open_project(
    path: String,
    state: &SessionState,
    store: &dyn StoreOps,
) -> Result<String, String> {
    let p = PathBuf::from(&path);

    ensure_project_root_with_story_kit(p.clone()).await?;
    validate_project_path(p.clone()).await?;

    {
        let mut root = state.project_root.lock().map_err(|e| e.to_string())?;
        *root = Some(p);
    }

    store.set(KEY_LAST_PROJECT, json!(path));

    // Move this project to the front of the most-recently-used list.
    let mut known_projects = get_known_projects(store)?;
    known_projects.retain(|p| p != &path);
    known_projects.insert(0, path.clone());
    store.set(KEY_KNOWN_PROJECTS, json!(known_projects));

    store.save()?;

    Ok(path)
}

pub fn close_project(state: &SessionState, store: &dyn StoreOps) -> Result<(), String> {
    {
        let mut root = state.project_root.lock().map_err(|e| e.to_string())?;
        *root = None;
    }

    store.delete(KEY_LAST_PROJECT);
    store.save()?;

    Ok(())
}

pub fn get_current_project(
    state: &SessionState,
    store: &dyn StoreOps,
) -> Result<Option<String>, String> {
    {
        let root = state.project_root.lock().map_err(|e| e.to_string())?;
        if let Some(path) = &*root {
            return Ok(Some(path.to_string_lossy().to_string()));
        }
    }

    if let Some(path_str) = store
        .get(KEY_LAST_PROJECT)
        .as_ref()
        .and_then(|val| val.as_str())
    {
        let p = PathBuf::from(path_str);
        if p.exists() && p.is_dir() {
            let mut root = state.project_root.lock().map_err(|e| e.to_string())?;
            *root = Some(p);
            return Ok(Some(path_str.to_string()));
        }
    }

    Ok(None)
}

pub fn get_known_projects(store: &dyn StoreOps) -> Result<Vec<String>, String> {
    let projects = store
        .get(KEY_KNOWN_PROJECTS)
        .and_then(|val| val.as_array().cloned())
        .unwrap_or_default()
        .into_iter()
        .filter_map(|val| val.as_str().map(|s| s.to_string()))
        .collect();

    Ok(projects)
}

pub fn forget_known_project(path: String, store: &dyn StoreOps) -> Result<(), String> {
    let mut known_projects = get_known_projects(store)?;
    let original_len = known_projects.len();

    known_projects.retain(|p| p != &path);

    // Nothing removed: skip the redundant write.
    if known_projects.len() == original_len {
        return Ok(());
    }

    store.set(KEY_KNOWN_PROJECTS, json!(known_projects));
    store.save()?;
    Ok(())
}

pub fn get_model_preference(store: &dyn StoreOps) -> Result<Option<String>, String> {
    if let Some(model) = store
        .get(KEY_SELECTED_MODEL)
        .as_ref()
        .and_then(|val| val.as_str())
    {
        return Ok(Some(model.to_string()));
    }
    Ok(None)
}

pub fn set_model_preference(model: String, store: &dyn StoreOps) -> Result<(), String> {
    store.set(KEY_SELECTED_MODEL, json!(model));
    store.save()?;
    Ok(())
}

async fn read_file_impl(full_path: PathBuf) -> Result<String, String> {
    tokio::task::spawn_blocking(move || {
        fs::read_to_string(&full_path).map_err(|e| format!("Failed to read file: {}", e))
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

pub async fn read_file(path: String, state: &SessionState) -> Result<String, String> {
    let full_path = resolve_path(state, &path)?;
    read_file_impl(full_path).await
}

async fn write_file_impl(full_path: PathBuf, content: String) -> Result<(), String> {
    tokio::task::spawn_blocking(move || {
        if let Some(parent) = full_path.parent() {
            fs::create_dir_all(parent)
                .map_err(|e| format!("Failed to create directories: {}", e))?;
        }

        fs::write(&full_path, content).map_err(|e| format!("Failed to write file: {}", e))
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

pub async fn write_file(path: String, content: String, state: &SessionState) -> Result<(), String> {
    let root = state.get_project_root()?;
    // Story Kit files are exempt from the test-plan gate.
    if !is_story_kit_path(&path) {
        ensure_test_plan_approved(root.clone()).await?;
    }
    let full_path = resolve_path_impl(root, &path)?;
    write_file_impl(full_path, content).await
}

#[derive(Serialize, Debug, poem_openapi::Object)]
pub struct FileEntry {
    pub name: String,
    pub kind: String,
}

async fn list_directory_impl(full_path: PathBuf) -> Result<Vec<FileEntry>, String> {
    tokio::task::spawn_blocking(move || {
        let entries = fs::read_dir(&full_path).map_err(|e| format!("Failed to read dir: {}", e))?;

        let mut result = Vec::new();
        for entry in entries {
            let entry = entry.map_err(|e| e.to_string())?;
            let ft = entry.file_type().map_err(|e| e.to_string())?;
            let name = entry.file_name().to_string_lossy().to_string();

            result.push(FileEntry {
                name,
                kind: if ft.is_dir() {
                    "dir".to_string()
                } else {
                    "file".to_string()
                },
            });
        }

        // Directories first, then alphabetical within each kind.
        result.sort_by(|a, b| match (a.kind.as_str(), b.kind.as_str()) {
            ("dir", "file") => std::cmp::Ordering::Less,
            ("file", "dir") => std::cmp::Ordering::Greater,
            _ => a.name.cmp(&b.name),
        });

        Ok(result)
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

pub async fn list_directory(path: String, state: &SessionState) -> Result<Vec<FileEntry>, String> {
    let full_path = resolve_path(state, &path)?;
    list_directory_impl(full_path).await
}

pub async fn list_directory_absolute(path: String) -> Result<Vec<FileEntry>, String> {
    let full_path = PathBuf::from(path);
    list_directory_impl(full_path).await
}

pub async fn create_directory_absolute(path: String) -> Result<bool, String> {
    let full_path = PathBuf::from(path);
    tokio::task::spawn_blocking(move || {
        fs::create_dir_all(&full_path).map_err(|e| format!("Failed to create directory: {}", e))?;
        Ok(true)
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::store::JsonFileStore;
    use tempfile::tempdir;

    fn make_store(dir: &tempfile::TempDir) -> JsonFileStore {
        JsonFileStore::new(dir.path().join("test_store.json")).unwrap()
    }

    fn make_state_with_root(path: PathBuf) -> SessionState {
        let state = SessionState::default();
        {
            let mut root = state.project_root.lock().unwrap();
            *root = Some(path);
        }
        state
    }

    // --- resolve_path_impl ---

    #[test]
    fn resolve_path_joins_relative_to_root() {
        let root = PathBuf::from("/projects/myapp");
        let result = resolve_path_impl(root, "src/main.rs").unwrap();
        assert_eq!(result, PathBuf::from("/projects/myapp/src/main.rs"));
    }

    #[test]
    fn resolve_path_rejects_traversal() {
        let root = PathBuf::from("/projects/myapp");
        let result = resolve_path_impl(root, "../etc/passwd");
        assert!(result.is_err());
        assert!(result.unwrap_err().contains("traversal"));
    }

    // --- is_story_kit_path ---

    #[test]
    fn is_story_kit_path_matches_root_and_children() {
        assert!(is_story_kit_path(".story_kit"));
        assert!(is_story_kit_path(".story_kit/stories/current/26.md"));
        assert!(!is_story_kit_path("src/main.rs"));
        assert!(!is_story_kit_path(".story_kit_other"));
    }

    // --- open/close/get project ---

    #[tokio::test]
    async fn open_project_sets_root_and_persists() {
        let dir = tempdir().unwrap();
        let project_dir = dir.path().join("myproject");
        fs::create_dir_all(&project_dir).unwrap();
        let store = make_store(&dir);
        let state = SessionState::default();

        let result = open_project(project_dir.to_string_lossy().to_string(), &state, &store).await;

        assert!(result.is_ok());
        let root = state.get_project_root().unwrap();
        assert_eq!(root, project_dir);
    }

    #[tokio::test]
    async fn close_project_clears_root() {
        let dir = tempdir().unwrap();
        let project_dir = dir.path().join("myproject");
        fs::create_dir_all(&project_dir).unwrap();
        let store = make_store(&dir);
        let state = make_state_with_root(project_dir);

        close_project(&state, &store).unwrap();

        let root = state.project_root.lock().unwrap();
        assert!(root.is_none());
    }

    #[tokio::test]
    async fn get_current_project_returns_none_when_no_project() {
        let dir = tempdir().unwrap();
        let store = make_store(&dir);
        let state = SessionState::default();

        let result = get_current_project(&state, &store).unwrap();
        assert!(result.is_none());
    }

    #[tokio::test]
    async fn get_current_project_returns_active_root() {
        let dir = tempdir().unwrap();
        let store = make_store(&dir);
        let state = make_state_with_root(dir.path().to_path_buf());

        let result = get_current_project(&state, &store).unwrap();
        assert!(result.is_some());
    }

    // --- known projects ---

    #[test]
    fn known_projects_empty_by_default() {
        let dir = tempdir().unwrap();
        let store = make_store(&dir);
        let projects = get_known_projects(&store).unwrap();
        assert!(projects.is_empty());
    }

    #[tokio::test]
    async fn open_project_adds_to_known_projects() {
        let dir = tempdir().unwrap();
        let project_dir = dir.path().join("proj1");
        fs::create_dir_all(&project_dir).unwrap();
        let store = make_store(&dir);
        let state = SessionState::default();

        open_project(project_dir.to_string_lossy().to_string(), &state, &store)
            .await
            .unwrap();

        let projects = get_known_projects(&store).unwrap();
        assert_eq!(projects.len(), 1);
    }

    #[test]
    fn forget_known_project_removes_it() {
        let dir = tempdir().unwrap();
        let store = make_store(&dir);

        store.set(KEY_KNOWN_PROJECTS, json!(["/a", "/b", "/c"]));
|
|
forget_known_project("/b".to_string(), &store).unwrap();
|
|
|
|
let projects = get_known_projects(&store).unwrap();
|
|
assert_eq!(projects, vec!["/a", "/c"]);
|
|
}
|
|
|
|
#[test]
|
|
fn forget_unknown_project_is_noop() {
|
|
let dir = tempdir().unwrap();
|
|
let store = make_store(&dir);
|
|
|
|
store.set(KEY_KNOWN_PROJECTS, json!(["/a"]));
|
|
forget_known_project("/nonexistent".to_string(), &store).unwrap();
|
|
|
|
let projects = get_known_projects(&store).unwrap();
|
|
assert_eq!(projects, vec!["/a"]);
|
|
}
|
|
|
|
// --- model preference ---
|
|
|
|
#[test]
|
|
fn model_preference_none_by_default() {
|
|
let dir = tempdir().unwrap();
|
|
let store = make_store(&dir);
|
|
assert!(get_model_preference(&store).unwrap().is_none());
|
|
}
|
|
|
|
#[test]
|
|
fn set_and_get_model_preference() {
|
|
let dir = tempdir().unwrap();
|
|
let store = make_store(&dir);
|
|
set_model_preference("claude-3-sonnet".to_string(), &store).unwrap();
|
|
assert_eq!(
|
|
get_model_preference(&store).unwrap(),
|
|
Some("claude-3-sonnet".to_string())
|
|
);
|
|
}
|
|
|
|
// --- file operations ---
|
|
|
|
#[tokio::test]
|
|
async fn read_file_impl_reads_content() {
|
|
let dir = tempdir().unwrap();
|
|
let file = dir.path().join("test.txt");
|
|
fs::write(&file, "hello world").unwrap();
|
|
|
|
let content = read_file_impl(file).await.unwrap();
|
|
assert_eq!(content, "hello world");
|
|
}
|
|
|
|
#[tokio::test]
|
|
async fn read_file_impl_errors_on_missing() {
|
|
let dir = tempdir().unwrap();
|
|
let result = read_file_impl(dir.path().join("missing.txt")).await;
|
|
assert!(result.is_err());
|
|
}
|
|
|
|
#[tokio::test]
|
|
async fn write_file_impl_creates_and_writes() {
|
|
let dir = tempdir().unwrap();
|
|
let file = dir.path().join("sub").join("output.txt");
|
|
|
|
write_file_impl(file.clone(), "content".to_string()).await.unwrap();
|
|
|
|
assert_eq!(fs::read_to_string(&file).unwrap(), "content");
|
|
}
|
|
|
|
#[tokio::test]
|
|
async fn write_file_requires_approved_test_plan() {
|
|
let dir = tempdir().expect("tempdir");
|
|
let state = SessionState::default();
|
|
|
|
{
|
|
let mut root = state.project_root.lock().expect("lock project root");
|
|
*root = Some(dir.path().to_path_buf());
|
|
}
|
|
|
|
let result = write_file("notes.txt".to_string(), "hello".to_string(), &state).await;
|
|
|
|
assert!(
|
|
result.is_err(),
|
|
"expected write to be blocked when test plan is not approved"
|
|
);
|
|
}
|
|
|
|
// --- list directory ---
|
|
|
|
#[tokio::test]
|
|
async fn list_directory_impl_returns_sorted_entries() {
|
|
let dir = tempdir().unwrap();
|
|
fs::create_dir(dir.path().join("zdir")).unwrap();
|
|
fs::create_dir(dir.path().join("adir")).unwrap();
|
|
fs::write(dir.path().join("file.txt"), "").unwrap();
|
|
|
|
let entries = list_directory_impl(dir.path().to_path_buf()).await.unwrap();
|
|
|
|
assert_eq!(entries[0].name, "adir");
|
|
assert_eq!(entries[0].kind, "dir");
|
|
assert_eq!(entries[1].name, "zdir");
|
|
assert_eq!(entries[1].kind, "dir");
|
|
assert_eq!(entries[2].name, "file.txt");
|
|
assert_eq!(entries[2].kind, "file");
|
|
}
|
|
|
|
// --- validate_project_path ---
|
|
|
|
#[tokio::test]
|
|
async fn validate_project_path_rejects_missing() {
|
|
let result = validate_project_path(PathBuf::from("/nonexistent/path")).await;
|
|
assert!(result.is_err());
|
|
}
|
|
|
|
#[tokio::test]
|
|
async fn validate_project_path_rejects_file() {
|
|
let dir = tempdir().unwrap();
|
|
let file = dir.path().join("not_a_dir.txt");
|
|
fs::write(&file, "").unwrap();
|
|
|
|
let result = validate_project_path(file).await;
|
|
assert!(result.is_err());
|
|
}
|
|
|
|
#[tokio::test]
|
|
async fn validate_project_path_accepts_directory() {
|
|
let dir = tempdir().unwrap();
|
|
let result = validate_project_path(dir.path().to_path_buf()).await;
|
|
assert!(result.is_ok());
|
|
}
|
|
|
|
// --- scaffold ---
|
|
|
|
#[test]
|
|
fn scaffold_story_kit_creates_structure() {
|
|
let dir = tempdir().unwrap();
|
|
scaffold_story_kit(dir.path()).unwrap();
|
|
|
|
assert!(dir.path().join(".story_kit/README.md").exists());
|
|
assert!(dir.path().join(".story_kit/specs/README.md").exists());
|
|
assert!(dir.path().join(".story_kit/specs/00_CONTEXT.md").exists());
|
|
assert!(dir.path().join(".story_kit/specs/tech/STACK.md").exists());
|
|
assert!(dir.path().join(".story_kit/stories/archive").is_dir());
|
|
}
|
|
|
|
#[test]
|
|
fn scaffold_story_kit_does_not_overwrite_existing() {
|
|
let dir = tempdir().unwrap();
|
|
let readme = dir.path().join(".story_kit/README.md");
|
|
fs::create_dir_all(readme.parent().unwrap()).unwrap();
|
|
fs::write(&readme, "custom content").unwrap();
|
|
|
|
scaffold_story_kit(dir.path()).unwrap();
|
|
|
|
assert_eq!(fs::read_to_string(&readme).unwrap(), "custom content");
|
|
}
|
|
}
|