Compare commits

43 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 4e590401a5 | |
| | 6b6815325d | |
| | f874783b09 | |
| | 292f9cdfe2 | |
| | 1cce46d3fa | |
| | e85c06df19 | |
| | 8b85ca743e | |
| | 1a7b6c7342 | |
| | 4a94158ef2 | |
| | f10ea1ecf2 | |
| | 1a3b69301a | |
| | 6d3eab92fd | |
| | f6920a87ad | |
| | 5f9d903987 | |
| | ea916d27f4 | |
| | 970b9bcd9d | |
| | a5ee6890f5 | |
| | 41dc3292bb | |
| | 3766f8b464 | |
| | 0c85ecc85c | |
| | 2c29a4d2b8 | |
| | 454d694d24 | |
| | 96bedd70dc | |
| | fffdd5c5ea | |
| | 4805598932 | |
| | 3d55e2fcc6 | |
| | 96b31d1a48 | |
| | 11168fa426 | |
| | c2c2d65889 | |
| | 5c8c4b7ff3 | |
| | fbab93f493 | |
| | 78ff6d104e | |
| | fcc2b9c3eb | |
| | 0c4239501a | |
| | 13b6ecd958 | |
| | 1816a94617 | |
| | 56d3373e69 | |
| | efdb0c5814 | |
| | b8365275d8 | |
| | 6ddfd29927 | |
| | 01b157a2e4 | |
| | 99a59d7ad1 | |
| | eb8adb6225 | |
@@ -0,0 +1,21 @@
---
name: "Rename MCP whatsup tool to status for consistency"
---

# Story 376: Rename MCP whatsup tool to status for consistency

## User Story

As a developer using storkit's MCP tools, I want the MCP tool to be called `status` instead of `whatsup`, so that the naming is consistent between the bot command (`status`), the web UI slash command (`/status`), and the MCP tool.

## Acceptance Criteria

- [ ] MCP tool is renamed from 'whatsup' to 'status'
- [ ] MCP tool is discoverable as 'status' via tools/list
- [ ] The tool still accepts a story_id parameter and returns the same triage data
- [ ] Old 'whatsup' tool name is removed from the MCP registry
- [ ] Any internal references to the whatsup tool name are updated

## Out of Scope

- TBD
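After the rename, the tool should surface under its new name in a `tools/list` response. A hedged sketch of what that entry might look like, following the MCP tool-listing shape (the description text and schema details here are illustrative, not taken from storkit's actual registry):

```json
{
  "tools": [
    {
      "name": "status",
      "description": "Return triage data for a story (formerly 'whatsup')",
      "inputSchema": {
        "type": "object",
        "properties": { "story_id": { "type": "string" } },
        "required": ["story_id"]
      }
    }
  ]
}
```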
@@ -0,0 +1,34 @@
---
name: "CLI treats --help and --version as project paths"
---

# Bug 369: CLI treats --help and --version as project paths

## Description

When running `storkit <anything>`, the binary treats the first argument as a project path, creates a directory for it, and scaffolds `.storkit/` inside. This happens for `--help`, `--version`, `serve`, `x`, or any other string. There is no validation that the argument is an existing directory or a reasonable path before creating it.

## How to Reproduce

1. Run `storkit --help`, `storkit serve`, or `storkit x` in any directory
2. Observe that a directory with that name is created, with a full `.storkit/` scaffold inside it

## Actual Result

Any argument is treated as a project path, and a directory is created and scaffolded. No flags are recognised.

## Expected Result

- `storkit --help` prints usage info and exits
- `storkit --version` prints the version and exits
- `storkit <path>` only works if the path already exists as a directory
- If the path does not exist, storkit prints a clear error and exits non-zero

## Acceptance Criteria

- [ ] storkit --help prints usage information and exits with code 0
- [ ] storkit --version prints the version and exits with code 0
- [ ] storkit -h and storkit -V work as short aliases
- [ ] storkit does not create directories for any argument — the path must already exist
- [ ] If the path does not exist, storkit prints a clear error and exits non-zero
- [ ] Arguments starting with - that are not recognised produce a clear error message
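The expected dispatch order can be sketched in a few lines: flags are recognised before anything is treated as a path, and paths are validated rather than created. The function name, messages, and closure-based existence check below are stand-ins, not the real main.rs wiring:

```rust
use std::path::Path;

// Hypothetical dispatcher: flags first, then path validation. Nothing is
// ever created implicitly; a missing path is an error.
fn dispatch(arg: Option<&str>, dir_exists: impl Fn(&str) -> bool) -> Result<String, String> {
    match arg {
        Some("--help") | Some("-h") => Ok("usage: storkit [path]".to_string()),
        // The real binary would print the version from Cargo.toml.
        Some("--version") | Some("-V") => Ok("storkit <version>".to_string()),
        // Any other dash-prefixed argument is an error, not a path.
        Some(flag) if flag.starts_with('-') => Err(format!("unrecognised flag: {flag}")),
        // Paths must already exist; storkit no longer creates them.
        Some(path) if !dir_exists(path) => Err(format!("no such directory: {path}")),
        Some(path) => Ok(format!("opening project at {path}")),
        None => Ok("opening project in current directory".to_string()),
    }
}

fn main() {
    let arg = std::env::args().nth(1);
    match dispatch(arg.as_deref(), |p| Path::new(p).is_dir()) {
        Ok(msg) => println!("{msg}"),
        Err(err) => {
            eprintln!("{err}");
            std::process::exit(1); // non-zero exit on bad input
        }
    }
}
```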
@@ -0,0 +1,33 @@
---
name: "Scaffold does not create .mcp.json in project root"
---

# Bug 370: Scaffold does not create .mcp.json in project root

## Description

Two related problems with project setup:

1. When the user clicks the "project setup" button in the web UI to open a new project, the scaffold does not reliably run — the `.storkit/` directory and associated files may not be created.
2. Even when the scaffold does run, it does not write `.mcp.json` to the project root. Without this file, agents spawned in worktrees cannot find the MCP server, causing `--permission-prompt-tool mcp__storkit__prompt_permission not found` errors and agent failures.

## How to Reproduce

1. Open the storkit web UI and use the project setup button to open a new project directory
2. Check whether the full scaffold was created (`.storkit/`, `CLAUDE.md`, `script/test`, etc.)
3. Check the project root for `.mcp.json`

## Actual Result

The scaffold may not run when using the UI project setup flow. When it does run, `.mcp.json` is not created in the project root. Agents fail because MCP tools are unavailable.

## Expected Result

Clicking the project setup button reliably runs the full scaffold, including `.mcp.json` pointing to the server's port.

## Acceptance Criteria

- [ ] The web UI project setup button triggers the full scaffold for new projects
- [ ] scaffold_story_kit writes .mcp.json to the project root with the server's port
- [ ] Existing .mcp.json is not overwritten if already present
- [ ] .mcp.json is included in .gitignore since the port is environment-specific
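For reference, a sketch of the kind of `.mcp.json` the scaffold could write. The server key, transport type, and URL path are assumptions; `PORT` stands for the running server's actual port, which is exactly the environment-specific value that makes the file gitignore-worthy:

```json
{
  "mcpServers": {
    "storkit": {
      "type": "http",
      "url": "http://127.0.0.1:PORT/mcp"
    }
  }
}
```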
@@ -0,0 +1,32 @@
---
name: "No-arg storkit in empty directory skips scaffold"
---

# Bug 371: No-arg storkit in empty directory skips scaffold

## Description

When running `storkit` with no path argument from an empty directory (no `.storkit/`), the server starts but never calls `open_project` or the scaffold. The `find_story_kit_root` check fails to find `.storkit/`, so the fallback at main.rs:179-186 just sets `project_root = cwd` without scaffolding. This means no `.storkit/`, no `project.toml`, no `.mcp.json`, no `CLAUDE.md` — the project is non-functional.

The explicit path branch (`storkit .`) works correctly because it calls `open_project` → `ensure_project_root_with_story_kit` → `scaffold_story_kit`. The no-arg branch should do the same.

## How to Reproduce

1. Create a new empty directory
2. cd into it
3. Run `storkit` (no path argument)
4. Observe that no scaffold is created — `.storkit/`, `CLAUDE.md`, `.mcp.json`, etc. are all missing

## Actual Result

Server starts with project_root set to cwd but no scaffold runs. The project is non-functional — no agent config, no MCP endpoint, no work pipeline directories.

## Expected Result

Running `storkit` with no arguments from a directory without `.storkit/` should scaffold the project the same as `storkit .` does — calling `open_project` and triggering `ensure_project_root_with_story_kit`.

## Acceptance Criteria

- [ ] Running `storkit` with no args from a dir without `.storkit/` calls `open_project` and triggers the full scaffold
- [ ] The no-arg fallback path in main.rs calls `open_project(cwd)` instead of just setting project_root directly
- [ ] After `storkit` completes startup, `.storkit/project.toml`, `.mcp.json`, `CLAUDE.md`, and `script/test` all exist
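The intended control flow can be sketched as follows. The function names mirror the ones quoted above, but the bodies are stand-ins (the real `open_project` does filesystem work; here it just records that the scaffold ran):

```rust
// Stand-in for open_project -> ensure_project_root_with_story_kit ->
// scaffold_story_kit: records which path got scaffolded.
fn open_project(path: &str, scaffolded: &mut Vec<String>) -> String {
    scaffolded.push(path.to_string());
    path.to_string()
}

// Desired behaviour: the no-arg branch calls open_project(cwd) instead of
// setting project_root directly, so an empty directory gets scaffolded too.
fn resolve_project_root(
    arg: Option<&str>,
    cwd: &str,
    has_story_kit: bool,
    scaffolded: &mut Vec<String>,
) -> String {
    match arg {
        Some(path) => open_project(path, scaffolded), // explicit path: already correct
        None if has_story_kit => cwd.to_string(),     // existing project: nothing to do
        None => open_project(cwd, scaffolded),        // the fix: scaffold like `storkit .`
    }
}

fn main() {
    let mut scaffolded = Vec::new();
    let root = resolve_project_root(None, "/tmp/empty", false, &mut scaffolded);
    println!("root={root} scaffolded={scaffolded:?}");
}
```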
@@ -0,0 +1,24 @@
---
name: "Scaffold auto-detects tech stack and configures script/test"
---

# Story 372: Scaffold auto-detects tech stack and configures script/test

## User Story

As a user setting up a new project with storkit, I want the scaffold to detect my project's tech stack and generate a working `script/test` automatically, so that agents can run tests immediately without manual configuration.

## Acceptance Criteria

- [ ] Scaffold detects Go projects (go.mod) and adds `go test ./...` to script/test
- [ ] Scaffold detects Node.js projects (package.json) and adds `npm test` to script/test
- [ ] Scaffold detects Rust projects (Cargo.toml) and adds `cargo test` to script/test
- [ ] Scaffold detects Python projects (pyproject.toml or requirements.txt) and adds `pytest` to script/test
- [ ] Scaffold handles multi-stack projects (e.g. Go + Next.js) by combining the relevant test commands
- [ ] project.toml component entries are generated to match detected tech stack
- [ ] Falls back to the generic 'No tests configured' stub if no known stack is detected
- [ ] Coder agent prompt includes instruction to configure `script/test` for the project's test framework if it still contains the generic stub
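The detection described by the criteria above boils down to a marker-file lookup table. A minimal sketch, with the marker/command pairs taken from the acceptance criteria and the function name invented for illustration:

```rust
// Hedged sketch of marker-file stack detection: maps stack marker files
// to test commands and combines them for multi-stack projects.
fn detect_test_commands(files_present: &[&str]) -> Vec<String> {
    let markers: &[(&str, &str)] = &[
        ("go.mod", "go test ./..."),
        ("package.json", "npm test"),
        ("Cargo.toml", "cargo test"),
        ("pyproject.toml", "pytest"),
        ("requirements.txt", "pytest"),
    ];
    let mut cmds: Vec<String> = Vec::new();
    for (marker, cmd) in markers {
        // Dedupe so pyproject.toml + requirements.txt yields one `pytest`.
        if files_present.contains(marker) && !cmds.contains(&cmd.to_string()) {
            cmds.push(cmd.to_string());
        }
    }
    if cmds.is_empty() {
        // Fall back to the generic stub when no known stack is detected.
        cmds.push("echo 'No tests configured'".to_string());
    }
    cmds
}

fn main() {
    // Multi-stack example: Go + Next.js combines both commands.
    println!("{:?}", detect_test_commands(&["go.mod", "package.json"]));
}
```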
@@ -0,0 +1,28 @@
---
name: "Scaffold gitignore missing transient pipeline stage directories"
---

# Bug 373: Scaffold gitignore missing transient pipeline stage directories

## Description

The `write_story_kit_gitignore` function in `server/src/io/fs.rs` does not include the transient pipeline stages (`work/2_current/`, `work/3_qa/`, `work/4_merge/`) in the `.storkit/.gitignore` entries list. These stages are not committed to git (only `1_backlog`, `5_done`, and `6_archived` are commit-worthy per spike 92), so they should be ignored for new projects.

## How to Reproduce

1. Scaffold a new project with storkit
2. Check `.storkit/.gitignore`

## Actual Result

`.storkit/.gitignore` only contains `bot.toml`, `matrix_store/`, `matrix_device_id`, `worktrees/`, `merge_workspace/`, `coverage/`. The transient pipeline directories are missing.

## Expected Result

`.storkit/.gitignore` also includes `work/2_current/`, `work/3_qa/`, `work/4_merge/`.

## Acceptance Criteria

- [ ] Scaffold writes work/2_current/, work/3_qa/, work/4_merge/ to .storkit/.gitignore
- [ ] Idempotent — running scaffold again does not duplicate entries
- [ ] Existing .storkit/.gitignore files get the new entries appended on next scaffold run
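The idempotent-append requirement can be captured in one small helper: add each required entry only if the existing content does not already list it, so a second scaffold run is a no-op. A sketch (function name invented; the real fix lives in `write_story_kit_gitignore`):

```rust
// Hedged sketch of idempotent gitignore maintenance: missing entries are
// appended to the existing content, present ones are left alone.
fn ensure_gitignore_entries(existing: &str, required: &[&str]) -> String {
    let mut out = existing.to_string();
    for entry in required {
        // Compare whole trimmed lines so "work/3_qa/" never matches a substring.
        if !existing.lines().any(|line| line.trim() == *entry) {
            if !out.is_empty() && !out.ends_with('\n') {
                out.push('\n');
            }
            out.push_str(entry);
            out.push('\n');
        }
    }
    out
}

fn main() {
    let existing = "bot.toml\nworktrees/\n";
    let required = ["work/2_current/", "work/3_qa/", "work/4_merge/"];
    let once = ensure_gitignore_entries(existing, &required);
    // Running it again over its own output must not duplicate entries.
    assert_eq!(ensure_gitignore_entries(&once, &required), once);
    println!("{once}");
}
```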
@@ -0,0 +1,30 @@
---
name: "Web UI implements all bot commands as slash commands"
---

# Story 374: Web UI implements all bot commands as slash commands

## User Story

As a user working in the storkit web UI, I want to type slash commands (e.g. `/status`, `/start 42`, `/cost`) in the chat input to trigger the same deterministic bot commands available in Matrix, so that I can manage my project entirely from the browser without needing a chat bot.

## Acceptance Criteria

- [ ] /status — shows pipeline status and agent availability; /status <number> shows story triage dump
- [ ] /assign <number> <model> — pre-assign a model to a story
- [ ] /start <number> — start a coder on a story; /start <number> opus for specific model
- [ ] /show <number> — display full text of a work item
- [ ] /move <number> <stage> — move a work item to a pipeline stage
- [ ] /delete <number> — remove a work item from the pipeline
- [ ] /cost — show token spend (24h total, top stories, by agent type, all-time)
- [ ] /git — show git status (branch, uncommitted changes, ahead/behind)
- [ ] /overview <number> — show implementation summary for a merged story
- [ ] /rebuild — rebuild the server binary and restart
- [ ] /reset — clear the current Claude Code session
- [ ] /help — list all available slash commands
- [ ] Slash commands are handled at the frontend/backend level without LLM invocation
- [ ] Unrecognised slash commands show a helpful error message

## Out of Scope

- TBD
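The deterministic-dispatch criterion amounts to a small parser that runs before any LLM call: split the input, look the verb up in a fixed table, and hand unknown verbs a helpful error. A sketch with the command list taken from the criteria above (the function name and error wording are assumptions):

```rust
// Hedged sketch of slash command parsing: a recognised command
// short-circuits before any LLM invocation; unknown ones get an error.
fn parse_slash_command(input: &str) -> Result<(String, Vec<String>), String> {
    let known = [
        "status", "assign", "start", "show", "move", "delete",
        "cost", "git", "overview", "rebuild", "reset", "help",
    ];
    let mut parts = input.trim().trim_start_matches('/').split_whitespace();
    let cmd = parts.next().unwrap_or_default().to_string();
    if known.contains(&cmd.as_str()) {
        // Remaining tokens become arguments, e.g. "/start 42 opus".
        Ok((cmd, parts.map(str::to_string).collect()))
    } else {
        Err(format!("Unknown command /{cmd}. Try /help for the full list."))
    }
}

fn main() {
    println!("{:?}", parse_slash_command("/start 42 opus"));
    println!("{:?}", parse_slash_command("/frobnicate"));
}
```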
@@ -0,0 +1,43 @@
---
name: "Default project.toml contains Rust-specific setup commands for non-Rust projects"
---

# Bug 375: Default project.toml contains Rust-specific setup commands for non-Rust projects

## Description

When scaffolding a new project where no tech stack is detected, the generated `project.toml` contains Rust-specific setup commands (`cargo check`) as example fallback components. This causes coder agents to try to satisfy Rust gates on non-Rust projects.

## Fix

1. In `detect_components_toml()` fallback (when no stack markers found): replace the Rust/pnpm example components with a single generic `app` component with empty `setup = []`
2. In the onboarding prompt Step 4: simplify to configure `[[component]]` entries based on what the user told the LLM in Step 2 (tech stack), rather than re-scanning the filesystem independently

## How to Reproduce

1. Create a new Go + Next.js project directory with `go.mod` and `package.json`
2. Run `storkit .` to scaffold
3. Check `.storkit/project.toml` — the component setup commands reference cargo/Rust
4. Start a coder agent — it creates a `Cargo.toml` trying to satisfy the Rust setup commands

## Actual Result

The scaffolded `project.toml` has Rust-specific setup commands (`cargo check`) even for non-Rust projects. Agents try to satisfy these and create spurious files.

## Expected Result

The scaffolded `project.toml` should have generic or stack-appropriate setup commands. If no known stack is detected, setup commands should be empty or minimal (not Rust-specific).

## Acceptance Criteria

- [ ] Default project.toml does not contain language-specific setup commands when that language is not detected in the project
- [ ] If go.mod is present, setup commands use Go tooling
- [ ] If package.json is present, setup commands use npm/node tooling
- [ ] If no known stack is detected, setup commands are empty or just echo a placeholder
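A sketch of the generic fallback component named in the fix. The `[[component]]` table and `setup = []` key come from the fix description above; the `name = "app"` value is the single generic component it proposes:

```toml
# Fallback when no stack markers (go.mod, package.json, Cargo.toml, ...)
# are found: one generic component with no language-specific setup commands.
[[component]]
name = "app"
setup = []
```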
@@ -0,0 +1,30 @@
---
name: "update_story MCP tool writes front matter values as YAML strings instead of native types"
---

# Bug 377: update_story MCP tool writes front matter values as YAML strings instead of native types

## Description

The `update_story` MCP tool accepts `front_matter` as a `Map<String, String>`, so all values are written as quoted YAML strings. Fields like `retry_count` (expected `u32`) and `blocked` (expected `bool`) end up as `"0"` and `"false"` in the YAML. This causes `parse_front_matter()` to fail because serde_yaml cannot deserialize a quoted string into `u32` or `bool`. When parsing fails, the story `name` comes back as `None`, so the status command shows no title for the story.

## How to Reproduce

1. Call `update_story` with `front_matter: {"blocked": "false", "retry_count": "0"}`
2. Read the story file — front matter contains `blocked: "false"` and `retry_count: "0"` (quoted strings)
3. Call `get_pipeline_status` or the bot `status` command
4. The story shows with no title/name

## Actual Result

Front matter values are written as quoted YAML strings. `parse_front_matter()` fails to deserialize `"false"` as `bool` and `"0"` as `u32`, returning an error. The story name is lost and the status command shows no title.

## Expected Result

The `update_story` tool should write `blocked` and `retry_count` as native YAML types (unquoted `false` and `0`), or `parse_front_matter()` should accept both string and native representations. The story name should always be displayed correctly in the status command.

## Acceptance Criteria

- [ ] update_story with front_matter {"blocked": "false"} writes `blocked: false` (unquoted) in the YAML
- [ ] update_story with front_matter {"retry_count": "0"} writes `retry_count: 0` (unquoted) in the YAML
- [ ] Story name is displayed correctly in the status command after update_story modifies front matter fields
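The serialization half of the fix reduces to a coercion step: a string value that parses as a boolean or integer is emitted as a native scalar, everything else stays quoted. A dependency-free sketch (the real code would go through serde_yaml; the function name is invented):

```rust
// Hedged sketch of the coercion fix: values that parse as booleans or
// integers become native YAML scalars, everything else stays a quoted string.
fn yaml_scalar(value: &str) -> String {
    if value == "true" || value == "false" || value.parse::<i64>().is_ok() {
        value.to_string() // native bool/int: emit unquoted
    } else {
        format!("\"{}\"", value.replace('"', "\\\"")) // keep strings quoted
    }
}

fn main() {
    for (key, value) in [("blocked", "false"), ("retry_count", "0"), ("name", "My story")] {
        println!("{key}: {}", yaml_scalar(value));
    }
}
```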
@@ -0,0 +1,20 @@
---
name: "Status command shows work item type (story, bug, spike, refactor) next to each item"
---

# Story 378: Status command shows work item type (story, bug, spike, refactor) next to each item

## User Story

As a user viewing the pipeline status, I want to see the type of each work item (story, bug, spike, refactor) so that I can quickly understand what kind of work is in progress without having to open individual files.

## Acceptance Criteria

- [ ] The status command displays the work item type (story, bug, spike, refactor) as a label next to each item — e.g. "375 [bug] — Default project.toml contains Rust-specific setup commands"
- [ ] The type is extracted from the story_id filename convention ({id}_{type}_{slug})
- [ ] All known types are supported: story, bug, spike, refactor
- [ ] Unknown or missing types are omitted gracefully (no crash, no placeholder)

## Out of Scope

- TBD
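The extraction rule in the criteria above can be sketched as a small lookup: take the second `_`-separated segment of the story_id and accept it only if it is one of the four known types (the function name is illustrative):

```rust
// Hedged sketch: extract the work item type from the {id}_{type}_{slug}
// filename convention, omitting unknown or missing types gracefully.
fn item_type(story_id: &str) -> Option<&'static str> {
    let second = story_id.split('_').nth(1)?; // missing segment: no label
    ["story", "bug", "spike", "refactor"]
        .into_iter()
        .find(|t| *t == second) // unknown segment: no label
}

fn main() {
    for id in ["375_bug_default_project_toml", "378_story_status_types", "400_x_y", "401"] {
        match item_type(id) {
            Some(t) => println!("{id} [{t}]"),
            None => println!("{id}"), // unknown/missing type: no crash, no placeholder
        }
    }
}
```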
Cargo.lock (72 changes, generated)
@@ -1774,9 +1774,9 @@ checksum = "d98f6fed1fde3f8c21bc40a1abb88dd75e67924f9cffc3ef95607bad8017f8e2"
 
 [[package]]
 name = "iri-string"
-version = "0.7.10"
+version = "0.7.11"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c91338f0783edbd6195decb37bae672fd3b165faffb89bf7b9e6942f8b1a731a"
+checksum = "d8e7418f59cc01c88316161279a7f665217ae316b388e58a0d10e29f54f1e5eb"
 dependencies = [
  "memchr",
  "serde",
@@ -1815,7 +1815,7 @@ dependencies = [
  "cesu8",
  "cfg-if",
  "combine",
- "jni-sys",
+ "jni-sys 0.3.1",
  "log",
  "thiserror 1.0.69",
  "walkdir",
@@ -1824,9 +1824,31 @@ dependencies = [
 
 [[package]]
 name = "jni-sys"
-version = "0.3.0"
+version = "0.3.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8eaf4bc02d17cbdd7ff4c7438cafcdf7fb9a4613313ad11b4f8fefe7d3fa0130"
+checksum = "41a652e1f9b6e0275df1f15b32661cf0d4b78d4d87ddec5e0c3c20f097433258"
+dependencies = [
+ "jni-sys 0.4.1",
+]
+
+[[package]]
+name = "jni-sys"
+version = "0.4.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c6377a88cb3910bee9b0fa88d4f42e1d2da8e79915598f65fb0c7ee14c878af2"
+dependencies = [
+ "jni-sys-macros",
+]
+
+[[package]]
+name = "jni-sys-macros"
+version = "0.4.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "38c0b942f458fe50cdac086d2f946512305e5631e720728f2a61aabcd47a6264"
+dependencies = [
+ "quote",
+ "syn 2.0.117",
+]
 
 [[package]]
 name = "jobserver"
@@ -2948,9 +2970,9 @@ dependencies = [
 
 [[package]]
 name = "pulldown-cmark"
-version = "0.13.1"
+version = "0.13.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "83c41efbf8f90ac44de7f3a868f0867851d261b56291732d0cbf7cceaaeb55a6"
+checksum = "7c3a14896dfa883796f1cb410461aef38810ea05f2b2c33c5aded3649095fdad"
 dependencies = [
  "bitflags 2.11.0",
  "memchr",
@@ -3625,9 +3647,9 @@ checksum = "f87165f0995f63a9fbeea62b64d10b4d9d8e78ec6d7d51fb2125fda7bb36788f"
 
 [[package]]
 name = "rustls-webpki"
-version = "0.103.9"
+version = "0.103.10"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d7df23109aa6c1567d1c575b9952556388da57401e4ace1d15f79eedad0d8f53"
+checksum = "df33b2b81ac578cabaf06b89b0631153a3f416b0a886e8a7a1707fb51abbd1ef"
 dependencies = [
  "aws-lc-rs",
  "ring",
@@ -3801,9 +3823,9 @@ dependencies = [
 
 [[package]]
 name = "serde_spanned"
-version = "1.0.4"
+version = "1.1.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f8bbf91e5a4d6315eee45e704372590b30e260ee83af6639d64557f51b067776"
+checksum = "876ac351060d4f882bb1032b6369eb0aef79ad9df1ea8bc404874d8cc3d0cd98"
 dependencies = [
  "serde_core",
 ]
@@ -3994,7 +4016,7 @@ checksum = "6ce2be8dc25455e1f91df71bfa12ad37d7af1092ae736f3a6cd0e37bc7810596"
 
 [[package]]
 name = "storkit"
-version = "0.4.1"
+version = "0.5.1"
 dependencies = [
  "async-stream",
  "async-trait",
@@ -4024,7 +4046,7 @@ dependencies = [
  "tempfile",
  "tokio",
  "tokio-tungstenite 0.29.0",
- "toml 1.0.7+spec-1.1.0",
+ "toml 1.1.0+spec-1.1.0",
  "uuid",
  "wait-timeout",
  "walkdir",
@@ -4371,14 +4393,14 @@ dependencies = [
 
 [[package]]
 name = "toml"
-version = "1.0.7+spec-1.1.0"
+version = "1.1.0+spec-1.1.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "dd28d57d8a6f6e458bc0b8784f8fdcc4b99a437936056fa122cb234f18656a96"
+checksum = "f8195ca05e4eb728f4ba94f3e3291661320af739c4e43779cbdfae82ab239fcc"
 dependencies = [
  "indexmap",
  "serde_core",
  "serde_spanned",
- "toml_datetime 1.0.1+spec-1.1.0",
+ "toml_datetime 1.1.0+spec-1.1.0",
  "toml_parser",
  "toml_writer",
  "winnow 1.0.0",
@@ -4395,39 +4417,39 @@ dependencies = [
 
 [[package]]
 name = "toml_datetime"
-version = "1.0.1+spec-1.1.0"
+version = "1.1.0+spec-1.1.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9b320e741db58cac564e26c607d3cc1fdc4a88fd36c879568c07856ed83ff3e9"
+checksum = "97251a7c317e03ad83774a8752a7e81fb6067740609f75ea2b585b569a59198f"
 dependencies = [
  "serde_core",
 ]
 
 [[package]]
 name = "toml_edit"
-version = "0.25.5+spec-1.1.0"
+version = "0.25.8+spec-1.1.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8ca1a40644a28bce036923f6a431df0b34236949d111cc07cb6dca830c9ef2e1"
+checksum = "16bff38f1d86c47f9ff0647e6838d7bb362522bdf44006c7068c2b1e606f1f3c"
 dependencies = [
  "indexmap",
- "toml_datetime 1.0.1+spec-1.1.0",
+ "toml_datetime 1.1.0+spec-1.1.0",
  "toml_parser",
  "winnow 1.0.0",
 ]
 
 [[package]]
 name = "toml_parser"
-version = "1.0.10+spec-1.1.0"
+version = "1.1.0+spec-1.1.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7df25b4befd31c4816df190124375d5a20c6b6921e2cad937316de3fccd63420"
+checksum = "2334f11ee363607eb04df9b8fc8a13ca1715a72ba8662a26ac285c98aabb4011"
 dependencies = [
  "winnow 1.0.0",
 ]
 
 [[package]]
 name = "toml_writer"
-version = "1.0.7+spec-1.1.0"
+version = "1.1.0+spec-1.1.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f17aaa1c6e3dc22b1da4b6bba97d066e354c7945cac2f7852d4e4e7ca7a6b56d"
+checksum = "d282ade6016312faf3e41e57ebbba0c073e4056dab1232ab1cb624199648f8ed"
 
 [[package]]
 name = "tower"
README.md (58 changes)
@@ -77,64 +77,6 @@ ldd target/x86_64-unknown-linux-musl/release/storkit
 ./storkit
 ```
 
-## Running in Docker (with gVisor sandboxing)
-
-The `docker/docker-compose.yml` runs the container under [gVisor](https://gvisor.dev/)
-(`runtime: runsc`). gVisor intercepts all container syscalls in userspace, providing an
-extra layer of isolation so that even a compromised workload cannot make raw syscalls to
-the host kernel.
-
-### Host setup (Linux only)
-
-gVisor is a Linux technology. On macOS (OrbStack, Docker Desktop) you must remove
-`runtime: runsc` from `docker/docker-compose.yml` — gVisor is not available there.
-
-**1. Install gVisor (Debian/Ubuntu):**
-
-```bash
-curl -fsSL https://gvisor.dev/archive.key | sudo gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg
-echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] \
-  https://storage.googleapis.com/gvisor/releases release main" \
-  | sudo tee /etc/apt/sources.list.d/gvisor.list
-sudo apt-get update && sudo apt-get install -y runsc
-```
-
-**2. Register runsc with Docker (`/etc/docker/daemon.json`):**
-
-```json
-{
-  "runtimes": {
-    "runsc": { "path": "/usr/bin/runsc" }
-  }
-}
-```
-
-**3. Restart Docker and verify:**
-
-```bash
-sudo systemctl restart docker
-docker run --runtime=runsc hello-world
-```
-
-**4. Launch storkit:**
-
-```bash
-GIT_USER_NAME="Your Name" GIT_USER_EMAIL="you@example.com" \
-PROJECT_PATH=/path/to/your/repo \
-docker compose -f docker/docker-compose.yml up
-```
-
-### gVisor compatibility notes
-
-The following storkit subsystems have been verified to work under `runsc`:
-
-- **PTY-based agent spawning** (`portable_pty` / `openpty`) – gVisor implements the
-  full POSIX PTY interface (`/dev/ptmx`, `TIOCGWINSZ`, etc.).
-- **`rebuild_and_restart`** – uses `execve()` to replace the server process, which
-  gVisor fully supports.
-- **Rust compilation** – `cargo build` inside the container invokes standard fork/exec
-  primitives, all of which gVisor implements.
-
 ## Releasing
 
 Builds both macOS and Linux binaries locally, tags the repo, and publishes a Gitea release with a changelog.
@@ -8,39 +8,12 @@
 # OrbStack users: just install OrbStack and use `docker compose` normally.
 # OrbStack's VirtioFS bind mount driver is significantly faster than
 # Docker Desktop's default (see spike findings).
-#
-# ── gVisor (runsc) host setup ────────────────────────────────────────────
-# This compose file uses `runtime: runsc` (gVisor) for syscall-level
-# sandboxing. gVisor intercepts all container syscalls in userspace so
-# that even if a malicious workload escapes the container's process
-# namespace it cannot make raw syscalls to the host kernel.
-#
-# Prerequisites on the Docker host:
-# 1. Install gVisor:
-#    curl -fsSL https://gvisor.dev/archive.key | sudo gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg
-#    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases release main" | sudo tee /etc/apt/sources.list.d/gvisor.list
-#    sudo apt-get update && sudo apt-get install -y runsc
-# 2. Register runsc with Docker (/etc/docker/daemon.json):
-#    {
-#      "runtimes": {
-#        "runsc": { "path": "/usr/bin/runsc" }
-#      }
-#    }
-# 3. Restart Docker: sudo systemctl restart docker
-# 4. Verify: docker run --runtime=runsc hello-world
-#
-# Note: On macOS (OrbStack / Docker Desktop) gVisor is Linux-only and
-# not supported. Remove `runtime: runsc` for local development on macOS.
 
 services:
   storkit:
     build:
       context: ..
       dockerfile: docker/Dockerfile
-    # Run under gVisor for syscall-level sandboxing.
-    # Requires runsc installed and registered in /etc/docker/daemon.json.
-    # See host setup instructions in the header comment above.
-    runtime: runsc
     container_name: storkit
     ports:
       # Bind to localhost only — not exposed on all interfaces.
frontend/package-lock.json (16812 lines, generated): file diff suppressed because it is too large.
@@ -1,41 +1,41 @@
 {
   "name": "living-spec-standalone",
   "private": true,
-  "version": "0.4.1",
+  "version": "0.5.1",
   "type": "module",
   "scripts": {
     "dev": "vite",
     "build": "tsc && vite build",
     "preview": "vite preview",
     "server": "cargo run --manifest-path server/Cargo.toml",
     "test": "vitest run",
     "test:unit": "vitest run",
     "test:e2e": "playwright test",
     "test:coverage": "vitest run --coverage"
   },
   "dependencies": {
     "@types/react-syntax-highlighter": "^15.5.13",
     "react": "^19.1.0",
     "react-dom": "^19.1.0",
     "react-markdown": "^10.1.0",
     "react-syntax-highlighter": "^16.1.0"
   },
   "devDependencies": {
     "@biomejs/biome": "^2.4.2",
     "@playwright/test": "^1.47.2",
     "@testing-library/jest-dom": "^6.0.0",
     "@testing-library/react": "^16.0.0",
     "@testing-library/user-event": "^14.4.3",
     "@types/node": "^25.0.0",
     "@types/react": "^19.1.8",
     "@types/react-dom": "^19.1.6",
     "@vitejs/plugin-react": "^4.6.0",
     "@vitest/coverage-v8": "^2.1.9",
     "jest": "^29.0.0",
     "jsdom": "^28.1.0",
     "ts-jest": "^29.0.0",
     "typescript": "~5.8.3",
     "vite": "^5.4.21",
     "vitest": "^2.1.4"
   }
 }
@@ -1,27 +1,27 @@
-import { defineConfig } from "@playwright/test";
 import { dirname, resolve } from "node:path";
 import { fileURLToPath } from "node:url";
+import { defineConfig } from "@playwright/test";
 
 const configDir = dirname(fileURLToPath(new URL(import.meta.url)));
 const frontendRoot = resolve(configDir, ".");
 
 export default defineConfig({
   testDir: "./tests/e2e",
   fullyParallel: true,
   timeout: 30_000,
   expect: {
     timeout: 5_000,
   },
   use: {
     baseURL: "http://127.0.0.1:41700",
     trace: "on-first-retry",
   },
   webServer: {
     command:
       "pnpm exec vite --config vite.config.ts --host 127.0.0.1 --port 41700 --strictPort",
     url: "http://127.0.0.1:41700/@vite/client",
     reuseExistingServer: true,
     timeout: 120_000,
     cwd: frontendRoot,
   },
 });
@@ -382,6 +382,14 @@ export const api = {
   deleteStory(storyId: string) {
     return callMcpTool("delete_story", { story_id: storyId });
   },
+  /** Execute a bot slash command without LLM invocation. Returns markdown response text. */
+  botCommand(command: string, args: string, baseUrl?: string) {
+    return requestJson<{ response: string }>(
+      "/bot/command",
+      { method: "POST", body: JSON.stringify({ command, args }) },
+      baseUrl,
+    );
+  },
 };
 
 async function callMcpTool(
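The `botCommand` helper above posts a JSON body to `/bot/command` and expects a markdown `response` back. A minimal sketch of the request/response shapes, with field names taken from the diff; `buildBotCommandBody` is a hypothetical standalone helper for illustration, not part of the codebase:

```typescript
// Shapes mirroring the diff: { command, args } in, { response } out.
interface BotCommandRequest {
  command: string; // keyword without the leading slash, e.g. "status"
  args: string; // remaining text after the keyword, may be empty
}

interface BotCommandResponse {
  response: string; // markdown-formatted reply
}

// Hypothetical helper: serialise the request body the way api.botCommand does.
function buildBotCommandBody(command: string, args = ""): string {
  const body: BotCommandRequest = { command, args };
  return JSON.stringify(body);
}
```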
@@ -40,6 +40,7 @@ vi.mock("../api/client", () => {
     setAnthropicApiKey: vi.fn(),
     readFile: vi.fn(),
     listProjectFiles: vi.fn(),
+    botCommand: vi.fn(),
   };
   class ChatWebSocket {
     connect(handlers: WsHandlers) {
@@ -64,6 +65,7 @@ const mockedApi = {
   setAnthropicApiKey: vi.mocked(api.setAnthropicApiKey),
   readFile: vi.mocked(api.readFile),
   listProjectFiles: vi.mocked(api.listProjectFiles),
+  botCommand: vi.mocked(api.botCommand),
 };
 
 function setupMocks() {
@@ -76,6 +78,7 @@ function setupMocks() {
   mockedApi.listProjectFiles.mockResolvedValue([]);
   mockedApi.cancelChat.mockResolvedValue(true);
   mockedApi.setAnthropicApiKey.mockResolvedValue(true);
+  mockedApi.botCommand.mockResolvedValue({ response: "Bot response" });
 }
 
 describe("Default provider selection (Story 206)", () => {
@@ -1457,3 +1460,204 @@ describe("File reference expansion (Story 269 AC4)", () => {
     expect(mockedApi.readFile).not.toHaveBeenCalled();
   });
 });
+
+describe("Slash command handling (Story 374)", () => {
+  beforeEach(() => {
+    capturedWsHandlers = null;
+    lastSendChatArgs = null;
+    setupMocks();
+  });
+
+  afterEach(() => {
+    vi.clearAllMocks();
+  });
+
+  it("AC: /status calls botCommand and displays response", async () => {
+    mockedApi.botCommand.mockResolvedValue({ response: "Pipeline: 3 active" });
+    render(<Chat projectPath="/tmp/project" onCloseProject={vi.fn()} />);
+    await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
+
+    const input = screen.getByPlaceholderText("Send a message...");
+    await act(async () => {
+      fireEvent.change(input, { target: { value: "/status" } });
+    });
+    await act(async () => {
+      fireEvent.keyDown(input, { key: "Enter", shiftKey: false });
+    });
+
+    await waitFor(() => {
+      expect(mockedApi.botCommand).toHaveBeenCalledWith(
+        "status",
+        "",
+        undefined,
+      );
+    });
+    expect(await screen.findByText("Pipeline: 3 active")).toBeInTheDocument();
+    // Should NOT go to LLM
+    expect(lastSendChatArgs).toBeNull();
+  });
+
+  it("AC: /status <number> passes args to botCommand", async () => {
+    mockedApi.botCommand.mockResolvedValue({ response: "Story 42 details" });
+    render(<Chat projectPath="/tmp/project" onCloseProject={vi.fn()} />);
+    await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
+
+    const input = screen.getByPlaceholderText("Send a message...");
+    await act(async () => {
+      fireEvent.change(input, { target: { value: "/status 42" } });
+    });
+    await act(async () => {
+      fireEvent.keyDown(input, { key: "Enter", shiftKey: false });
+    });
+
+    await waitFor(() => {
+      expect(mockedApi.botCommand).toHaveBeenCalledWith(
+        "status",
+        "42",
+        undefined,
+      );
+    });
+  });
+
+  it("AC: /start <number> calls botCommand", async () => {
+    mockedApi.botCommand.mockResolvedValue({ response: "Started agent" });
+    render(<Chat projectPath="/tmp/project" onCloseProject={vi.fn()} />);
+    await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
+
+    const input = screen.getByPlaceholderText("Send a message...");
+    await act(async () => {
+      fireEvent.change(input, { target: { value: "/start 42 opus" } });
+    });
+    await act(async () => {
+      fireEvent.keyDown(input, { key: "Enter", shiftKey: false });
+    });
+
+    await waitFor(() => {
+      expect(mockedApi.botCommand).toHaveBeenCalledWith(
+        "start",
+        "42 opus",
+        undefined,
+      );
+    });
+    expect(await screen.findByText("Started agent")).toBeInTheDocument();
+  });
+
+  it("AC: /git calls botCommand", async () => {
+    mockedApi.botCommand.mockResolvedValue({ response: "On branch main" });
+    render(<Chat projectPath="/tmp/project" onCloseProject={vi.fn()} />);
+    await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
+
+    const input = screen.getByPlaceholderText("Send a message...");
+    await act(async () => {
+      fireEvent.change(input, { target: { value: "/git" } });
+    });
+    await act(async () => {
+      fireEvent.keyDown(input, { key: "Enter", shiftKey: false });
+    });
+
+    await waitFor(() => {
+      expect(mockedApi.botCommand).toHaveBeenCalledWith("git", "", undefined);
+    });
+  });
+
+  it("AC: /cost calls botCommand", async () => {
+    mockedApi.botCommand.mockResolvedValue({ response: "$1.23 today" });
+    render(<Chat projectPath="/tmp/project" onCloseProject={vi.fn()} />);
+    await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
+
+    const input = screen.getByPlaceholderText("Send a message...");
+    await act(async () => {
+      fireEvent.change(input, { target: { value: "/cost" } });
+    });
+    await act(async () => {
+      fireEvent.keyDown(input, { key: "Enter", shiftKey: false });
+    });
+
+    await waitFor(() => {
+      expect(mockedApi.botCommand).toHaveBeenCalledWith("cost", "", undefined);
+    });
+  });
+
+  it("AC: /reset clears messages and session without LLM", async () => {
+    render(<Chat projectPath="/tmp/project" onCloseProject={vi.fn()} />);
+    await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
+
+    // First add a message so there is history to clear
+    act(() => {
+      capturedWsHandlers?.onUpdate([
+        { role: "user", content: "hello" },
+        { role: "assistant", content: "world" },
+      ]);
+    });
+    expect(await screen.findByText("world")).toBeInTheDocument();
+
+    const input = screen.getByPlaceholderText("Send a message...");
+    await act(async () => {
+      fireEvent.change(input, { target: { value: "/reset" } });
+    });
+    await act(async () => {
+      fireEvent.keyDown(input, { key: "Enter", shiftKey: false });
+    });
+
+    // LLM must NOT be invoked
+    expect(lastSendChatArgs).toBeNull();
+    // botCommand must NOT be invoked (reset is frontend-only)
+    expect(mockedApi.botCommand).not.toHaveBeenCalled();
+    // Confirmation message should appear
+    expect(await screen.findByText(/Session reset/)).toBeInTheDocument();
+  });
+
+  it("AC: unrecognised slash command shows error message", async () => {
+    render(<Chat projectPath="/tmp/project" onCloseProject={vi.fn()} />);
+    await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
+
+    const input = screen.getByPlaceholderText("Send a message...");
+    await act(async () => {
+      fireEvent.change(input, { target: { value: "/foobar" } });
+    });
+    await act(async () => {
+      fireEvent.keyDown(input, { key: "Enter", shiftKey: false });
+    });
+
+    expect(await screen.findByText(/Unknown command/)).toBeInTheDocument();
+    // Should NOT go to LLM
+    expect(lastSendChatArgs).toBeNull();
+    // Should NOT call botCommand
+    expect(mockedApi.botCommand).not.toHaveBeenCalled();
+  });
+
+  it("AC: /help shows help overlay", async () => {
+    render(<Chat projectPath="/tmp/project" onCloseProject={vi.fn()} />);
+    await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
+
+    const input = screen.getByPlaceholderText("Send a message...");
+    await act(async () => {
+      fireEvent.change(input, { target: { value: "/help" } });
+    });
+    await act(async () => {
+      fireEvent.keyDown(input, { key: "Enter", shiftKey: false });
+    });
+
+    expect(await screen.findByTestId("help-overlay")).toBeInTheDocument();
+    expect(lastSendChatArgs).toBeNull();
+    expect(mockedApi.botCommand).not.toHaveBeenCalled();
+  });
+
+  it("AC: botCommand API error shows error message in chat", async () => {
+    mockedApi.botCommand.mockRejectedValue(new Error("Server error"));
+    render(<Chat projectPath="/tmp/project" onCloseProject={vi.fn()} />);
+    await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
+
+    const input = screen.getByPlaceholderText("Send a message...");
+    await act(async () => {
+      fireEvent.change(input, { target: { value: "/git" } });
+    });
+    await act(async () => {
+      fireEvent.keyDown(input, { key: "Enter", shiftKey: false });
+    });
+
+    expect(
+      await screen.findByText(/Error running command/),
+    ).toBeInTheDocument();
+  });
+});
@@ -612,6 +612,80 @@ export function Chat({ projectPath, onCloseProject }: ChatProps) {
       return;
     }
+
+    // /reset — clear session and message history without LLM
+    if (/^\/reset\s*$/i.test(messageText)) {
+      setMessages([]);
+      setClaudeSessionId(null);
+      setStreamingContent("");
+      setStreamingThinking("");
+      setActivityStatus(null);
+      setMessages([
+        {
+          role: "assistant",
+          content: "Session reset. Starting a fresh conversation.",
+        },
+      ]);
+      return;
+    }
+
+    // Slash commands forwarded to the backend bot command endpoint
+    const slashMatch = messageText.match(/^\/(\S+)(?:\s+([\s\S]*))?$/);
+    if (slashMatch) {
+      const cmd = slashMatch[1].toLowerCase();
+      const args = (slashMatch[2] ?? "").trim();
+
+      // Ignore commands handled elsewhere
+      if (cmd !== "btw") {
+        const knownCommands = new Set([
+          "status",
+          "assign",
+          "start",
+          "show",
+          "move",
+          "delete",
+          "cost",
+          "git",
+          "overview",
+          "rebuild",
+        ]);
+
+        if (knownCommands.has(cmd)) {
+          // Show the slash command in chat as a user message (display only)
+          setMessages((prev: Message[]) => [
+            ...prev,
+            { role: "user", content: messageText },
+          ]);
+          try {
+            const result = await api.botCommand(cmd, args, undefined);
+            setMessages((prev: Message[]) => [
+              ...prev,
+              { role: "assistant", content: result.response },
+            ]);
+          } catch (e) {
+            setMessages((prev: Message[]) => [
+              ...prev,
+              {
+                role: "assistant",
+                content: `**Error running command:** ${e}`,
+              },
+            ]);
+          }
+          return;
+        }
+
+        // Unknown slash command
+        setMessages((prev: Message[]) => [
+          ...prev,
+          { role: "user", content: messageText },
+          {
+            role: "assistant",
+            content: `Unknown command: \`/${cmd}\`. Type \`/help\` to see available commands.`,
+          },
+        ]);
+        return;
+      }
+    }
+
     // /btw <question> — answered from context without disrupting main chat
     const btwMatch = messageText.match(/^\/btw\s+(.+)/s);
     if (btwMatch) {
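The slash-command branch added to Chat.tsx hinges on one regex: the command keyword up to the first whitespace, then optional (possibly multi-line) arguments. A self-contained sketch of that parse step, using the same pattern as the diff (the `parseSlashCommand` name is illustrative, not from the code):

```typescript
// Same pattern as the diff: /^\/(\S+)(?:\s+([\s\S]*))?$/ — a command keyword
// after the slash, then optional args that may span multiple lines.
function parseSlashCommand(
  text: string,
): { cmd: string; args: string } | null {
  const m = text.match(/^\/(\S+)(?:\s+([\s\S]*))?$/);
  if (!m) return null;
  // Lowercase the keyword and trim the args, as the handler above does.
  return { cmd: m[1].toLowerCase(), args: (m[2] ?? "").trim() };
}
```

Note that `[\s\S]*` (rather than `.*`) is what lets the args capture newlines without a `/s` flag.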
@@ -12,6 +12,57 @@ const SLASH_COMMANDS: SlashCommand[] = [
     name: "/help",
     description: "Show this list of available slash commands.",
   },
+  {
+    name: "/status",
+    description:
+      "Show pipeline status and agent availability. `/status <number>` shows a story triage dump.",
+  },
+  {
+    name: "/assign <number> <model>",
+    description: "Pre-assign a model to a story (e.g. `/assign 42 opus`).",
+  },
+  {
+    name: "/start <number>",
+    description:
+      "Start a coder on a story. Optionally specify a model: `/start <number> opus`.",
+  },
+  {
+    name: "/show <number>",
+    description: "Display the full text of a work item.",
+  },
+  {
+    name: "/move <number> <stage>",
+    description:
+      "Move a work item to a pipeline stage (backlog, current, qa, merge, done).",
+  },
+  {
+    name: "/delete <number>",
+    description:
+      "Remove a work item from the pipeline and stop any running agent.",
+  },
+  {
+    name: "/cost",
+    description:
+      "Show token spend: 24h total, top stories, breakdown by agent type, and all-time total.",
+  },
+  {
+    name: "/git",
+    description:
+      "Show git status: branch, uncommitted changes, and ahead/behind remote.",
+  },
+  {
+    name: "/overview <number>",
+    description: "Show the implementation summary for a merged story.",
+  },
+  {
+    name: "/rebuild",
+    description: "Rebuild the server binary and restart.",
+  },
+  {
+    name: "/reset",
+    description:
+      "Clear the current Claude Code session and start fresh (messages and session ID are cleared locally).",
+  },
   {
     name: "/btw <question>",
     description:
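A registry shaped like `SLASH_COMMANDS` above also lends itself to simple prefix filtering, for example to drive a help overlay or autocomplete. A hedged sketch under that assumption; the `matchCommands` helper is hypothetical and not part of the diff:

```typescript
interface SlashCommand {
  name: string;
  description: string;
}

// Return the registry entries whose name starts with what the user has typed.
function matchCommands(
  registry: SlashCommand[],
  input: string,
): SlashCommand[] {
  const prefix = input.toLowerCase();
  return registry.filter((c) => c.name.toLowerCase().startsWith(prefix));
}
```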
@@ -1,24 +1,24 @@
 {
   "compilerOptions": {
     "target": "ES2020",
     "useDefineForClassFields": true,
     "lib": ["ES2020", "DOM", "DOM.Iterable"],
     "module": "ESNext",
     "skipLibCheck": true,
 
     /* Bundler mode */
     "moduleResolution": "bundler",
     "allowImportingTsExtensions": true,
     "resolveJsonModule": true,
     "isolatedModules": true,
     "noEmit": true,
     "jsx": "react-jsx",
 
     /* Linting */
     "strict": true,
     "noUnusedLocals": true,
     "noUnusedParameters": true,
     "noFallthroughCasesInSwitch": true
   },
   "include": ["src"]
 }
@@ -3,49 +3,49 @@ import { defineConfig } from "vite";
 
 // https://vite.dev/config/
 export default defineConfig(() => {
   const backendPort = Number(process.env.STORKIT_PORT || "3001");
   return {
     plugins: [react()],
     define: {
       __STORKIT_PORT__: JSON.stringify(String(backendPort)),
       __BUILD_TIME__: JSON.stringify(new Date().toISOString()),
     },
     server: {
       port: backendPort + 2172,
       proxy: {
         "/api": {
           target: `http://127.0.0.1:${String(backendPort)}`,
           timeout: 120000,
           configure: (proxy) => {
             proxy.on("error", (_err) => {
               // Swallow proxy errors (e.g. ECONNREFUSED during backend restart)
               // so the vite dev server doesn't crash.
             });
           },
         },
         "/agents": {
           target: `http://127.0.0.1:${String(backendPort)}`,
           timeout: 120000,
           configure: (proxy) => {
             proxy.on("error", (_err) => {});
           },
         },
       },
       watch: {
         ignored: [
           "**/.story_kit/**",
           "**/target/**",
           "**/.git/**",
           "**/server/**",
           "**/Cargo.*",
           "**/vendor/**",
           "**/node_modules/**",
         ],
       },
     },
     build: {
       outDir: "dist",
       emptyOutDir: true,
     },
   };
 });
@@ -2,26 +2,26 @@ import react from "@vitejs/plugin-react";
 import { defineConfig } from "vitest/config";
 
 export default defineConfig({
   plugins: [react()],
   define: {
     __BUILD_TIME__: JSON.stringify("2026-01-01T00:00:00.000Z"),
   },
   test: {
     environment: "jsdom",
     environmentOptions: {
       jsdom: {
         url: "http://localhost:3000",
       },
     },
     globals: true,
     testTimeout: 10_000,
     setupFiles: ["./src/setupTests.ts"],
     css: true,
     exclude: ["tests/e2e/**", "node_modules/**"],
     coverage: {
       provider: "v8",
       reporter: ["text", "json-summary"],
       reportsDirectory: "./coverage",
     },
   },
 });
@@ -147,9 +147,65 @@ else
     | sed 's/^/- /')
 fi
 
+# ── Generate summary overview ─────────────────────────────────
+# Group completed items by keyword clusters to identify the
+# release's focus areas.
+generate_summary() {
+  local all_items="$1"
+  local themes=""
+
+  # Count items matching each theme keyword (one item per line via echo -e)
+  local expanded
+  expanded=$(echo -e "$all_items")
+  local bot_count=$(echo "$expanded" | grep -icE 'bot|command|chat|matrix|slack|whatsapp|status|help|assign|rebuild|shutdown|whatsup' || true)
+  local mcp_count=$(echo "$expanded" | grep -icE 'mcp|tool' || true)
+  local docker_count=$(echo "$expanded" | grep -icE 'docker|container|gvisor|orbstack|harden|security' || true)
+  local agent_count=$(echo "$expanded" | grep -icE 'agent|runtime|chatgpt|gemini|openai|model|coder' || true)
+  local ui_count=$(echo "$expanded" | grep -icE 'frontend|ui|web|oauth|scaffold' || true)
+  local infra_count=$(echo "$expanded" | grep -icE 'release|makefile|refactor|upgrade|worktree|pipeline' || true)
+
+  # Build theme list, highest count first
+  local -a theme_pairs=()
+  [ "$agent_count" -gt 0 ] && theme_pairs+=("${agent_count}:multi-model agents")
+  [ "$bot_count" -gt 0 ] && theme_pairs+=("${bot_count}:bot commands")
+  [ "$mcp_count" -gt 0 ] && theme_pairs+=("${mcp_count}:MCP tools")
+  [ "$docker_count" -gt 0 ] && theme_pairs+=("${docker_count}:Docker hardening")
+  [ "$ui_count" -gt 0 ] && theme_pairs+=("${ui_count}:developer experience")
+  [ "$infra_count" -gt 0 ] && theme_pairs+=("${infra_count}:infrastructure")
+
+  # Sort by count descending, take top 3
+  local sorted=$(printf '%s\n' "${theme_pairs[@]}" | sort -t: -k1 -nr | head -3)
+  local labels=""
+  while IFS=: read -r count label; do
+    [ -z "$label" ] && continue
+    if [ -z "$labels" ]; then
+      # Capitalise first theme
+      labels="$(echo "${label:0:1}" | tr '[:lower:]' '[:upper:]')${label:1}"
+    else
+      labels="${labels}, ${label}"
+    fi
+  done <<< "$sorted"
+
+  echo "$labels"
+}
+
+ALL_ITEMS="${FEATURES}${FIXES}${REFACTORS}"
+SUMMARY=$(generate_summary "$ALL_ITEMS")
+if [ -n "$SUMMARY" ]; then
+  SUMMARY_LINE="**Focus:** ${SUMMARY}"
+else
+  SUMMARY_LINE=""
+fi
+
 # Assemble the release body.
 RELEASE_BODY="## What's Changed"
+
+if [ -n "$SUMMARY_LINE" ]; then
+  RELEASE_BODY="${RELEASE_BODY}
+
+${SUMMARY_LINE}"
+fi
+
 if [ -n "$FEATURES" ]; then
   RELEASE_BODY="${RELEASE_BODY}
 
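The theme-ranking step in the release script above boils down to: take count:label pairs, drop the zero counts, sort descending, keep the top three, and capitalise the first label. Restated compactly in TypeScript for clarity, since the shell version interleaves counting and formatting (`summarizeThemes` is illustrative only, not part of the script):

```typescript
// Rank theme counts and render them the way generate_summary does:
// highest count first, top three, first label capitalised, comma-joined.
function summarizeThemes(pairs: Array<[number, string]>): string {
  const top = pairs
    .filter(([count]) => count > 0)
    .sort((a, b) => b[0] - a[0])
    .slice(0, 3)
    .map(([, label]) => label);
  if (top.length === 0) return "";
  const [first, ...rest] = top;
  const capitalised = first.charAt(0).toUpperCase() + first.slice(1);
  return [capitalised, ...rest].join(", ");
}
```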
@@ -1,6 +1,6 @@
 [package]
 name = "storkit"
-version = "0.4.1"
+version = "0.5.1"
 edition = "2024"
 build = "build.rs"
 
@@ -102,13 +102,29 @@ fn run_command_with_timeout(
     args: &[&str],
     dir: &Path,
 ) -> Result<(bool, String), String> {
-    let mut child = Command::new(program)
-        .args(args)
+    // On Linux, execve can return ETXTBSY (26) briefly after a file is written
+    // before the kernel releases its "write open" state. Retry once after a
+    // short pause to handle this race condition.
+    let mut last_err = None;
+    let mut cmd = Command::new(&program);
+    cmd.args(args)
         .current_dir(dir)
         .stdout(std::process::Stdio::piped())
-        .stderr(std::process::Stdio::piped())
-        .spawn()
-        .map_err(|e| format!("Failed to spawn command: {e}"))?;
+        .stderr(std::process::Stdio::piped());
+    let mut child = loop {
+        match cmd.spawn() {
+            Ok(c) => break c,
+            Err(e) if e.raw_os_error() == Some(26) => {
+                // ETXTBSY — wait briefly and retry once
+                if last_err.is_some() {
+                    return Err(format!("Failed to spawn command: {e}"));
+                }
+                last_err = Some(e);
+                std::thread::sleep(std::time::Duration::from_millis(50));
+            }
+            Err(e) => return Err(format!("Failed to spawn command: {e}")),
+        }
+    };
 
     // Drain stdout/stderr in background threads so the pipe buffers never fill.
     let stdout_handle = child.stdout.take().map(|r| {
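The ETXTBSY handling above is a retry-once pattern: attempt the spawn, and if the failure is the known transient error, pause briefly and try exactly one more time. The same shape, sketched generically in TypeScript (`retryOnce` is illustrative; the Rust code additionally sleeps 50 ms between the two attempts):

```typescript
// Retry a failing operation exactly once when the error is deemed transient.
// A second failure, or any non-transient error, propagates to the caller.
function retryOnce<T>(op: () => T, isTransient: (e: unknown) => boolean): T {
  try {
    return op();
  } catch (e) {
    if (!isTransient(e)) throw e;
    // The Rust version sleeps ~50 ms here before the second attempt.
    return op();
  }
}
```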
@@ -144,6 +144,10 @@ impl AgentPool {
         }
     }
 
+    pub fn port(&self) -> u16 {
+        self.port
+    }
+
     /// Create a pool with a dummy watcher channel for unit tests.
     #[cfg(test)]
     pub fn new_test(port: u16) -> Self {
server/src/http/bot_command.rs (new file, 286 lines)
@@ -0,0 +1,286 @@
+//! Bot command HTTP endpoint.
+//!
+//! `POST /api/bot/command` lets the web UI invoke the same deterministic bot
+//! commands available in Matrix without going through the LLM.
+//!
+//! Synchronous commands (status, assign, git, cost, move, show, overview,
+//! help) are dispatched directly through the matrix command registry.
+//! Asynchronous commands (start, delete, rebuild) are dispatched to their
+//! dedicated async handlers. The `reset` command is handled by the frontend
+//! (it clears local session state and message history) and is not routed here.
+
+use crate::http::context::{AppContext, OpenApiResult};
+use crate::matrix::commands::CommandDispatch;
+use poem::http::StatusCode;
+use poem_openapi::{Object, OpenApi, Tags, payload::Json};
+use serde::{Deserialize, Serialize};
+use std::collections::HashSet;
+use std::sync::{Arc, Mutex};
+
+#[derive(Tags)]
+enum BotCommandTags {
+    BotCommand,
+}
+
+/// Body for `POST /api/bot/command`.
+#[derive(Object, Deserialize)]
+struct BotCommandRequest {
+    /// The command keyword without the leading slash (e.g. `"status"`, `"start"`).
+    command: String,
+    /// Any text after the command keyword, trimmed (may be empty).
+    #[oai(default)]
+    args: String,
+}
+
+/// Response body for `POST /api/bot/command`.
+#[derive(Object, Serialize)]
+struct BotCommandResponse {
+    /// Markdown-formatted response text.
+    response: String,
+}
+
+pub struct BotCommandApi {
+    pub ctx: Arc<AppContext>,
+}
+
+#[OpenApi(tag = "BotCommandTags::BotCommand")]
+impl BotCommandApi {
+    /// Execute a slash command without LLM invocation.
+    ///
+    /// Dispatches to the same handlers used by the Matrix and Slack bots.
|
/// Returns a markdown-formatted response that the frontend can display
|
||||||
|
/// directly in the chat panel.
|
||||||
|
#[oai(path = "/bot/command", method = "post")]
|
||||||
|
async fn run_command(
|
||||||
|
&self,
|
||||||
|
body: Json<BotCommandRequest>,
|
||||||
|
) -> OpenApiResult<Json<BotCommandResponse>> {
|
||||||
|
let project_root = self.ctx.state.get_project_root().map_err(|e| {
|
||||||
|
poem::Error::from_string(e, StatusCode::BAD_REQUEST)
|
||||||
|
})?;
|
||||||
|
|
||||||
|
let cmd = body.command.trim().to_ascii_lowercase();
|
||||||
|
let args = body.args.trim();
|
||||||
|
let response = dispatch_command(&cmd, args, &project_root, &self.ctx.agents).await;
|
||||||
|
|
||||||
|
Ok(Json(BotCommandResponse { response }))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Dispatch a command keyword + args to the appropriate handler.
|
||||||
|
async fn dispatch_command(
|
||||||
|
cmd: &str,
|
||||||
|
args: &str,
|
||||||
|
project_root: &std::path::Path,
|
||||||
|
agents: &Arc<crate::agents::AgentPool>,
|
||||||
|
) -> String {
|
||||||
|
match cmd {
|
||||||
|
"start" => dispatch_start(args, project_root, agents).await,
|
||||||
|
"delete" => dispatch_delete(args, project_root, agents).await,
|
||||||
|
"rebuild" => dispatch_rebuild(project_root, agents).await,
|
||||||
|
// All other commands go through the synchronous command registry.
|
||||||
|
_ => dispatch_sync(cmd, args, project_root, agents),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
fn dispatch_sync(
|
||||||
|
cmd: &str,
|
||||||
|
args: &str,
|
||||||
|
project_root: &std::path::Path,
|
||||||
|
agents: &Arc<crate::agents::AgentPool>,
|
||||||
|
) -> String {
|
||||||
|
let ambient_rooms: Arc<Mutex<HashSet<String>>> = Arc::new(Mutex::new(HashSet::new()));
|
||||||
|
// Use a synthetic bot name/id so strip_bot_mention passes through.
|
||||||
|
let bot_name = "__web_ui__";
|
||||||
|
let bot_user_id = "@__web_ui__:localhost";
|
||||||
|
let room_id = "__web_ui__";
|
||||||
|
|
||||||
|
let dispatch = CommandDispatch {
|
||||||
|
bot_name,
|
||||||
|
bot_user_id,
|
||||||
|
project_root,
|
||||||
|
agents,
|
||||||
|
ambient_rooms: &ambient_rooms,
|
||||||
|
room_id,
|
||||||
|
};
|
||||||
|
|
||||||
|
// Build a synthetic message that the registry can parse.
|
||||||
|
let synthetic = if args.is_empty() {
|
||||||
|
format!("{bot_name} {cmd}")
|
||||||
|
} else {
|
||||||
|
format!("{bot_name} {cmd} {args}")
|
||||||
|
};
|
||||||
|
|
||||||
|
match crate::matrix::commands::try_handle_command(&dispatch, &synthetic) {
|
||||||
|
Some(response) => response,
|
||||||
|
None => {
|
||||||
|
// Command exists in the registry but its fallback handler returns None
|
||||||
|
// (start, delete, rebuild, reset, htop — handled elsewhere or in
|
||||||
|
// the frontend). Should not be reached for those since we intercept
|
||||||
|
// them above. For genuinely unknown commands, tell the user.
|
||||||
|
format!("Unknown command: `/{cmd}`. Type `/help` to see available commands.")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn dispatch_start(
|
||||||
|
args: &str,
|
||||||
|
project_root: &std::path::Path,
|
||||||
|
agents: &Arc<crate::agents::AgentPool>,
|
||||||
|
) -> String {
|
||||||
|
// args: "<number>" or "<number> <model_hint>"
|
||||||
|
let mut parts = args.splitn(2, char::is_whitespace);
|
||||||
|
let number_str = parts.next().unwrap_or("").trim();
|
||||||
|
let hint_str = parts.next().unwrap_or("").trim();
|
||||||
|
|
||||||
|
if number_str.is_empty() || !number_str.chars().all(|c| c.is_ascii_digit()) {
|
||||||
|
return "Usage: `/start <number>` or `/start <number> <model>` (e.g. `/start 42 opus`)"
|
||||||
|
.to_string();
|
||||||
|
}
|
||||||
|
|
||||||
|
let agent_hint = if hint_str.is_empty() {
|
||||||
|
None
|
||||||
|
} else {
|
||||||
|
Some(hint_str)
|
||||||
|
};
|
||||||
|
|
||||||
|
crate::matrix::start::handle_start("web-ui", number_str, agent_hint, project_root, agents)
|
||||||
|
.await
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn dispatch_delete(
|
||||||
|
args: &str,
|
||||||
|
project_root: &std::path::Path,
|
||||||
|
agents: &Arc<crate::agents::AgentPool>,
|
||||||
|
) -> String {
|
||||||
|
let number_str = args.trim();
|
||||||
|
if number_str.is_empty() || !number_str.chars().all(|c| c.is_ascii_digit()) {
|
||||||
|
return "Usage: `/delete <number>` (e.g. `/delete 42`)".to_string();
|
||||||
|
}
|
||||||
|
crate::matrix::delete::handle_delete("web-ui", number_str, project_root, agents).await
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn dispatch_rebuild(
|
||||||
|
project_root: &std::path::Path,
|
||||||
|
agents: &Arc<crate::agents::AgentPool>,
|
||||||
|
) -> String {
|
||||||
|
crate::matrix::rebuild::handle_rebuild("web-ui", project_root, agents).await
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// Tests
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
mod tests {
|
||||||
|
use super::*;
|
||||||
|
use tempfile::TempDir;
|
||||||
|
|
||||||
|
fn test_api(dir: &TempDir) -> BotCommandApi {
|
||||||
|
BotCommandApi {
|
||||||
|
ctx: Arc::new(AppContext::new_test(dir.path().to_path_buf())),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn help_command_returns_response() {
|
||||||
|
let dir = TempDir::new().unwrap();
|
||||||
|
let api = test_api(&dir);
|
||||||
|
let body = BotCommandRequest {
|
||||||
|
command: "help".to_string(),
|
||||||
|
args: String::new(),
|
||||||
|
};
|
||||||
|
let result = api.run_command(Json(body)).await;
|
||||||
|
assert!(result.is_ok());
|
||||||
|
let resp = result.unwrap().0;
|
||||||
|
assert!(!resp.response.is_empty());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn unknown_command_returns_error_message() {
|
||||||
|
let dir = TempDir::new().unwrap();
|
||||||
|
let api = test_api(&dir);
|
||||||
|
let body = BotCommandRequest {
|
||||||
|
command: "nonexistent_xyz".to_string(),
|
||||||
|
args: String::new(),
|
||||||
|
};
|
||||||
|
let result = api.run_command(Json(body)).await;
|
||||||
|
assert!(result.is_ok());
|
||||||
|
let resp = result.unwrap().0;
|
||||||
|
assert!(
|
||||||
|
resp.response.contains("Unknown command"),
|
||||||
|
"expected 'Unknown command' in: {}",
|
||||||
|
resp.response
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn start_without_number_returns_usage() {
|
||||||
|
let dir = TempDir::new().unwrap();
|
||||||
|
let api = test_api(&dir);
|
||||||
|
let body = BotCommandRequest {
|
||||||
|
command: "start".to_string(),
|
||||||
|
args: String::new(),
|
||||||
|
};
|
||||||
|
let result = api.run_command(Json(body)).await;
|
||||||
|
assert!(result.is_ok());
|
||||||
|
let resp = result.unwrap().0;
|
||||||
|
assert!(
|
||||||
|
resp.response.contains("Usage"),
|
||||||
|
"expected usage hint in: {}",
|
||||||
|
resp.response
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn delete_without_number_returns_usage() {
|
||||||
|
let dir = TempDir::new().unwrap();
|
||||||
|
let api = test_api(&dir);
|
||||||
|
let body = BotCommandRequest {
|
||||||
|
command: "delete".to_string(),
|
||||||
|
args: String::new(),
|
||||||
|
};
|
||||||
|
let result = api.run_command(Json(body)).await;
|
||||||
|
assert!(result.is_ok());
|
||||||
|
let resp = result.unwrap().0;
|
||||||
|
assert!(
|
||||||
|
resp.response.contains("Usage"),
|
||||||
|
"expected usage hint in: {}",
|
||||||
|
resp.response
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn git_command_returns_response() {
|
||||||
|
let dir = TempDir::new().unwrap();
|
||||||
|
// Initialise a bare git repo so the git command has something to query.
|
||||||
|
std::process::Command::new("git")
|
||||||
|
.args(["init"])
|
||||||
|
.current_dir(dir.path())
|
||||||
|
.output()
|
||||||
|
.ok();
|
||||||
|
let api = test_api(&dir);
|
||||||
|
let body = BotCommandRequest {
|
||||||
|
command: "git".to_string(),
|
||||||
|
args: String::new(),
|
||||||
|
};
|
||||||
|
let result = api.run_command(Json(body)).await;
|
||||||
|
assert!(result.is_ok());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn run_command_requires_project_root() {
|
||||||
|
// Create a context with no project root set.
|
||||||
|
let dir = TempDir::new().unwrap();
|
||||||
|
let ctx = AppContext::new_test(dir.path().to_path_buf());
|
||||||
|
// Clear the project root.
|
||||||
|
*ctx.state.project_root.lock().unwrap() = None;
|
||||||
|
let api = BotCommandApi { ctx: Arc::new(ctx) };
|
||||||
|
let body = BotCommandRequest {
|
||||||
|
command: "status".to_string(),
|
||||||
|
args: String::new(),
|
||||||
|
};
|
||||||
|
let result = api.run_command(Json(body)).await;
|
||||||
|
assert!(result.is_err(), "should fail when no project root is set");
|
||||||
|
}
|
||||||
|
}
|
||||||
@@ -2,6 +2,7 @@ pub mod agents;
 pub mod agents_sse;
 pub mod anthropic;
 pub mod assets;
+pub mod bot_command;
 pub mod chat;
 pub mod context;
 pub mod health;
@@ -16,6 +17,7 @@ pub mod ws;
 
 use agents::AgentsApi;
 use anthropic::AnthropicApi;
+use bot_command::BotCommandApi;
 use chat::ChatApi;
 use context::AppContext;
 use health::HealthApi;
@@ -113,6 +115,7 @@ type ApiTuple = (
     AgentsApi,
     SettingsApi,
     HealthApi,
+    BotCommandApi,
 );
 
 type ApiService = OpenApiService<ApiTuple, ()>;
@@ -128,6 +131,7 @@ pub fn build_openapi_service(ctx: Arc<AppContext>) -> (ApiService, ApiService) {
         AgentsApi { ctx: ctx.clone() },
         SettingsApi { ctx: ctx.clone() },
         HealthApi,
+        BotCommandApi { ctx: ctx.clone() },
     );
 
     let api_service =
@@ -140,8 +144,9 @@ pub fn build_openapi_service(ctx: Arc<AppContext>) -> (ApiService, ApiService) {
         IoApi { ctx: ctx.clone() },
        ChatApi { ctx: ctx.clone() },
         AgentsApi { ctx: ctx.clone() },
-        SettingsApi { ctx },
+        SettingsApi { ctx: ctx.clone() },
         HealthApi,
+        BotCommandApi { ctx },
     );
 
     let docs_service =
@@ -39,6 +39,7 @@ impl ProjectApi {
             payload.0.path,
             &self.ctx.state,
             self.ctx.store.as_ref(),
+            self.ctx.agents.port(),
         )
         .await
         .map_err(bad_request)?;
@@ -183,6 +183,18 @@ pub fn add_criterion_to_file(
     Ok(())
 }
 
+/// Encode a string value as a YAML scalar.
+///
+/// Booleans (`true`/`false`) and integers are written as native YAML types (unquoted).
+/// Everything else is written as a quoted string to avoid ambiguity.
+fn yaml_encode_scalar(value: &str) -> String {
+    match value {
+        "true" | "false" => value.to_string(),
+        s if s.parse::<i64>().is_ok() => s.to_string(),
+        s => format!("\"{}\"", s.replace('"', "\\\"").replace('\n', " ").replace('\r', "")),
+    }
+}
+
 /// Update the user story text and/or description in a story file.
 ///
 /// At least one of `user_story` or `description` must be provided.
@@ -209,7 +221,7 @@ pub fn update_story_in_file(
 
     if let Some(fields) = front_matter {
         for (key, value) in fields {
-            let yaml_value = format!("\"{}\"", value.replace('"', "\\\"").replace('\n', " ").replace('\r', ""));
+            let yaml_value = yaml_encode_scalar(value);
             contents = set_front_matter_field(&contents, key, &yaml_value);
         }
     }
@@ -589,4 +601,55 @@ mod tests {
         let contents = fs::read_to_string(&filepath).unwrap();
         assert!(contents.contains("agent: \"dev\""));
     }
+
+    #[test]
+    fn update_story_bool_front_matter_written_unquoted() {
+        let tmp = tempfile::tempdir().unwrap();
+        let current = tmp.path().join(".storkit/work/2_current");
+        fs::create_dir_all(&current).unwrap();
+        let filepath = current.join("27_test.md");
+        fs::write(&filepath, "---\nname: T\n---\n\nNo sections.\n").unwrap();
+
+        let mut fields = HashMap::new();
+        fields.insert("blocked".to_string(), "false".to_string());
+        update_story_in_file(tmp.path(), "27_test", None, None, Some(&fields)).unwrap();
+
+        let result = fs::read_to_string(&filepath).unwrap();
+        assert!(result.contains("blocked: false"), "bool should be unquoted: {result}");
+        assert!(!result.contains("blocked: \"false\""), "bool must not be quoted: {result}");
+    }
+
+    #[test]
+    fn update_story_integer_front_matter_written_unquoted() {
+        let tmp = tempfile::tempdir().unwrap();
+        let current = tmp.path().join(".storkit/work/2_current");
+        fs::create_dir_all(&current).unwrap();
+        let filepath = current.join("28_test.md");
+        fs::write(&filepath, "---\nname: T\n---\n\nNo sections.\n").unwrap();
+
+        let mut fields = HashMap::new();
+        fields.insert("retry_count".to_string(), "0".to_string());
+        update_story_in_file(tmp.path(), "28_test", None, None, Some(&fields)).unwrap();
+
+        let result = fs::read_to_string(&filepath).unwrap();
+        assert!(result.contains("retry_count: 0"), "integer should be unquoted: {result}");
+        assert!(!result.contains("retry_count: \"0\""), "integer must not be quoted: {result}");
+    }
+
+    #[test]
+    fn update_story_bool_front_matter_parseable_after_write() {
+        let tmp = tempfile::tempdir().unwrap();
+        let current = tmp.path().join(".storkit/work/2_current");
+        fs::create_dir_all(&current).unwrap();
+        let filepath = current.join("29_test.md");
+        fs::write(&filepath, "---\nname: My Story\n---\n\nNo sections.\n").unwrap();
+
+        let mut fields = HashMap::new();
+        fields.insert("blocked".to_string(), "false".to_string());
+        update_story_in_file(tmp.path(), "29_test", None, None, Some(&fields)).unwrap();
+
+        let contents = fs::read_to_string(&filepath).unwrap();
+        let meta = parse_front_matter(&contents).expect("front matter should parse");
+        assert_eq!(meta.name.as_deref(), Some("My Story"), "name preserved after writing bool field");
+    }
 }
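As a standalone illustration of the quoting rules in the `yaml_encode_scalar` hunk above, here is a minimal sketch: the function body is copied from the diff, while the `main` harness and its sample values are added purely for demonstration.

```rust
// Sketch of the yaml_encode_scalar quoting rules; function body as in the diff.
fn yaml_encode_scalar(value: &str) -> String {
    match value {
        // Booleans stay unquoted so YAML parses them as native bools.
        "true" | "false" => value.to_string(),
        // Anything that parses as an i64 stays unquoted as a native integer.
        s if s.parse::<i64>().is_ok() => s.to_string(),
        // Everything else is quoted, with embedded quotes escaped and
        // newlines flattened so the front matter stays on one line.
        s => format!("\"{}\"", s.replace('"', "\\\"").replace('\n', " ").replace('\r', "")),
    }
}

fn main() {
    assert_eq!(yaml_encode_scalar("false"), "false");
    assert_eq!(yaml_encode_scalar("0"), "0");
    assert_eq!(yaml_encode_scalar("My Story"), "\"My Story\"");
}
```

This is exactly the behavior the new tests below the hunk assert: bools and integers land unquoted in the front matter, everything else is quoted.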
@@ -110,7 +110,7 @@ role = "Full-stack engineer. Implements features across all components."
 model = "sonnet"
 max_turns = 50
 max_budget_usd = 5.00
-prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .storkit/README.md to understand the dev process. Follow the workflow through implementation and verification. The worktree and feature branch already exist - do not create them. Check .mcp.json for MCP tools. Do NOT accept the story or merge - commit your work and stop.\n\nIMPORTANT: Commit all your work before your process exits. The server will automatically run acceptance gates when your process exits."
+prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .storkit/README.md to understand the dev process. Follow the workflow through implementation and verification. The worktree and feature branch already exist - do not create them. Check .mcp.json for MCP tools. Do NOT accept the story or merge - commit your work and stop.\n\nIMPORTANT: Commit all your work before your process exits. The server will automatically run acceptance gates when your process exits.\n\nIf `script/test` still contains the generic 'No tests configured' stub, update it to run the project's actual test suite before starting implementation."
 system_prompt = "You are a full-stack engineer working autonomously in a git worktree. Follow the Story-Driven Test Workflow strictly. Commit all your work before finishing. Do not accept stories, move them to archived, or merge to master."
 
 [[agent]]
@@ -184,37 +184,58 @@ pub fn detect_components_toml(root: &Path) -> String {
     }
 
     if sections.is_empty() {
-        // No tech stack markers detected — emit two example components so that
-        // the scaffold is immediately usable and agents can see the expected
-        // format. The ONBOARDING_PROMPT instructs the chat agent to inspect
-        // the project and replace these placeholders with real definitions.
+        // No tech stack markers detected — emit a single generic component
+        // with an empty setup list. The ONBOARDING_PROMPT instructs the chat
+        // agent to inspect the project and replace this with real definitions.
         sections.push(
-            "# EXAMPLE: Replace with your actual backend component.\n\
-             # Common patterns: \"cargo check\" (Rust), \"go build ./...\" (Go),\n\
-             # \"python -m pytest\" (Python), \"mvn verify\" (Java)\n\
-             [[component]]\n\
-             name = \"backend\"\n\
-             path = \".\"\n\
-             setup = [\"cargo check\"]\n\
-             teardown = []\n"
-                .to_string(),
-        );
-        sections.push(
-            "# EXAMPLE: Replace with your actual frontend component.\n\
-             # Common patterns: \"pnpm install\" (pnpm), \"npm install\" (npm),\n\
-             # \"yarn\" (Yarn), \"bun install\" (Bun)\n\
-             [[component]]\n\
-             name = \"frontend\"\n\
-             path = \".\"\n\
-             setup = [\"pnpm install\"]\n\
-             teardown = []\n"
-                .to_string(),
+            "[[component]]\nname = \"app\"\npath = \".\"\nsetup = []\n".to_string(),
         );
     }
 
     sections.join("\n")
 }
 
+/// Generate `script/test` content for a new project at `root`.
+///
+/// Inspects well-known marker files to identify which tech stacks are present
+/// and emits the appropriate test commands. Multi-stack projects get combined
+/// commands run sequentially. Falls back to the generic stub when no markers
+/// are found so the scaffold is always valid.
+pub fn detect_script_test(root: &Path) -> String {
+    let mut commands: Vec<&str> = Vec::new();
+
+    if root.join("Cargo.toml").exists() {
+        commands.push("cargo test");
+    }
+
+    if root.join("package.json").exists() {
+        if root.join("pnpm-lock.yaml").exists() {
+            commands.push("pnpm test");
+        } else {
+            commands.push("npm test");
+        }
+    }
+
+    if root.join("pyproject.toml").exists() || root.join("requirements.txt").exists() {
+        commands.push("pytest");
+    }
+
+    if root.join("go.mod").exists() {
+        commands.push("go test ./...");
+    }
+
+    if commands.is_empty() {
+        return STORY_KIT_SCRIPT_TEST.to_string();
+    }
+
+    let mut script = "#!/usr/bin/env bash\nset -euo pipefail\n\n".to_string();
+    for cmd in commands {
+        script.push_str(cmd);
+        script.push('\n');
+    }
+    script
+}
+
 /// Generate a complete `project.toml` for a new project at `root`.
 ///
 /// Detects the tech stack via [`detect_components_toml`] and prepends the
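For illustration, tracing `detect_script_test` above over a repository that contains both a `Cargo.toml` and a `pnpm-lock.yaml` (a hypothetical mixed Rust + pnpm project): both markers match, so the generated `script/test` would be:

```bash
#!/usr/bin/env bash
set -euo pipefail

cargo test
pnpm test
```

With no markers present, the function instead falls back to the `STORY_KIT_SCRIPT_TEST` stub, so the scaffold always produces a runnable script.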
@@ -329,6 +350,11 @@ fn write_story_kit_gitignore(root: &Path) -> Result<(), String> {
         "worktrees/",
         "merge_workspace/",
         "coverage/",
+        "work/2_current/",
+        "work/3_qa/",
+        "work/4_merge/",
+        "logs/",
+        "token_usage.jsonl",
     ];
 
     let gitignore_path = root.join(".storkit").join(".gitignore");
@@ -369,7 +395,7 @@ fn write_story_kit_gitignore(root: &Path) -> Result<(), String> {
 /// the project root and git does not support `../` patterns in `.gitignore`
 /// files, so they cannot be expressed in `.storkit/.gitignore`.
 fn append_root_gitignore_entries(root: &Path) -> Result<(), String> {
-    let entries = [".storkit_port", "store.json"];
+    let entries = [".storkit_port", "store.json", ".mcp.json"];
 
     let gitignore_path = root.join(".gitignore");
     let existing = if gitignore_path.exists() {
@@ -404,7 +430,7 @@ fn append_root_gitignore_entries(root: &Path) -> Result<(), String> {
     Ok(())
 }
 
-fn scaffold_story_kit(root: &Path) -> Result<(), String> {
+fn scaffold_story_kit(root: &Path, port: u16) -> Result<(), String> {
     let story_kit_root = root.join(".storkit");
     let specs_root = story_kit_root.join("specs");
     let tech_root = specs_root.join("tech");
@@ -437,9 +463,18 @@ fn scaffold_story_kit(root: &Path) -> Result<(), String> {
     write_file_if_missing(&story_kit_root.join("project.toml"), &project_toml_content)?;
     write_file_if_missing(&specs_root.join("00_CONTEXT.md"), STORY_KIT_CONTEXT)?;
     write_file_if_missing(&tech_root.join("STACK.md"), STORY_KIT_STACK)?;
-    write_script_if_missing(&script_root.join("test"), STORY_KIT_SCRIPT_TEST)?;
+    let script_test_content = detect_script_test(root);
+    write_script_if_missing(&script_root.join("test"), &script_test_content)?;
     write_file_if_missing(&root.join("CLAUDE.md"), STORY_KIT_CLAUDE_MD)?;
 
+    // Write .mcp.json at the project root so agents can find the MCP server.
+    // Only written when missing — never overwrites an existing file, because
+    // the port is environment-specific and must not clobber a running instance.
+    let mcp_content = format!(
+        "{{\n  \"mcpServers\": {{\n    \"storkit\": {{\n      \"type\": \"http\",\n      \"url\": \"http://localhost:{port}/mcp\"\n    }}\n  }}\n}}\n"
+    );
+    write_file_if_missing(&root.join(".mcp.json"), &mcp_content)?;
+
     // Create .claude/settings.json with sensible permission defaults so that
     // Claude Code (both agents and web UI chat) can operate without constant
     // permission prompts.
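For port 3001 (the value the tests below pass in), the `format!` template above expands to this `.mcp.json` at the project root:

```json
{
  "mcpServers": {
    "storkit": {
      "type": "http",
      "url": "http://localhost:3001/mcp"
    }
  }
}
```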
@@ -505,14 +540,14 @@ fn scaffold_story_kit(root: &Path) -> Result<(), String> {
     Ok(())
 }
 
-async fn ensure_project_root_with_story_kit(path: PathBuf) -> Result<(), String> {
+async fn ensure_project_root_with_story_kit(path: PathBuf, port: u16) -> Result<(), String> {
     tokio::task::spawn_blocking(move || {
         if !path.exists() {
             fs::create_dir_all(&path)
                 .map_err(|e| format!("Failed to create project directory: {}", e))?;
         }
         if !path.join(".storkit").is_dir() {
-            scaffold_story_kit(&path)?;
+            scaffold_story_kit(&path, port)?;
         }
         Ok(())
     })
@@ -524,10 +559,11 @@ pub async fn open_project(
     path: String,
     state: &SessionState,
     store: &dyn StoreOps,
+    port: u16,
 ) -> Result<String, String> {
     let p = PathBuf::from(&path);
 
-    ensure_project_root_with_story_kit(p.clone()).await?;
+    ensure_project_root_with_story_kit(p.clone(), port).await?;
     validate_project_path(p.clone()).await?;
 
     {
@@ -816,7 +852,7 @@ mod tests {
         let store = make_store(&dir);
         let state = SessionState::default();
 
-        let result = open_project(project_dir.to_string_lossy().to_string(), &state, &store).await;
+        let result = open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001).await;
 
         assert!(result.is_ok());
         let root = state.get_project_root().unwrap();
@@ -824,25 +860,79 @@ mod tests {
     }
 
     #[tokio::test]
-    async fn open_project_does_not_write_mcp_json() {
-        // open_project must NOT overwrite .mcp.json — test servers started by QA
-        // agents share the real project root, so writing here would clobber the
-        // root .mcp.json with the wrong port. .mcp.json is written once during
-        // worktree creation (worktree.rs) and should not be touched again.
+    async fn open_project_does_not_overwrite_existing_mcp_json() {
+        // scaffold must NOT overwrite .mcp.json when it already exists — QA
+        // test servers share the real project root, and re-writing would
+        // clobber the file with the wrong port.
+        let dir = tempdir().unwrap();
+        let project_dir = dir.path().join("myproject");
+        fs::create_dir_all(&project_dir).unwrap();
+        // Pre-write .mcp.json with a different port to simulate an already-configured project.
+        let mcp_path = project_dir.join(".mcp.json");
+        fs::write(&mcp_path, "{\"existing\": true}").unwrap();
+        let store = make_store(&dir);
+        let state = SessionState::default();
+
+        open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001)
+            .await
+            .unwrap();
+
+        assert_eq!(
+            fs::read_to_string(&mcp_path).unwrap(),
+            "{\"existing\": true}",
+            "open_project must not overwrite an existing .mcp.json"
+        );
+    }
+
+    #[tokio::test]
+    async fn open_project_writes_mcp_json_when_missing() {
         let dir = tempdir().unwrap();
         let project_dir = dir.path().join("myproject");
         fs::create_dir_all(&project_dir).unwrap();
         let store = make_store(&dir);
         let state = SessionState::default();
 
-        open_project(project_dir.to_string_lossy().to_string(), &state, &store)
+        open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001)
             .await
             .unwrap();
 
         let mcp_path = project_dir.join(".mcp.json");
+        assert!(mcp_path.exists(), "open_project should write .mcp.json for new projects");
+        let content = fs::read_to_string(&mcp_path).unwrap();
+        assert!(content.contains("3001"), "mcp.json should reference the server port");
+        assert!(content.contains("localhost"), "mcp.json should reference localhost");
+    }
+
+    /// Regression test for bug 371: no-arg `storkit` in empty directory skips scaffold.
+    /// `open_project` on a directory without `.storkit/` must create all required scaffold
+    /// files — the same files that `storkit .` produces.
+    #[tokio::test]
+    async fn open_project_on_empty_dir_creates_full_scaffold() {
+        let dir = tempdir().unwrap();
+        let project_dir = dir.path().join("myproject");
+        fs::create_dir_all(&project_dir).unwrap();
+        let store = make_store(&dir);
+        let state = SessionState::default();
+
+        open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001)
+            .await
+            .unwrap();
+
         assert!(
-            !mcp_path.exists(),
-            "open_project must not write .mcp.json — that would overwrite the root with the wrong port"
+            project_dir.join(".storkit/project.toml").exists(),
+            "open_project must create .storkit/project.toml"
         );
+        assert!(
+            project_dir.join(".mcp.json").exists(),
+            "open_project must create .mcp.json"
+        );
+        assert!(
+            project_dir.join("CLAUDE.md").exists(),
+            "open_project must create CLAUDE.md"
+        );
+        assert!(
+            project_dir.join("script/test").exists(),
+            "open_project must create script/test"
        );
     }
 
@@ -898,7 +988,7 @@ mod tests {
|
|||||||
let store = make_store(&dir);
|
let store = make_store(&dir);
|
||||||
let state = SessionState::default();
|
let state = SessionState::default();
|
||||||
|
|
||||||
open_project(project_dir.to_string_lossy().to_string(), &state, &store)
|
open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001)
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
@@ -1071,7 +1161,7 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn scaffold_story_kit_creates_structure() {
|
fn scaffold_story_kit_creates_structure() {
|
||||||
let dir = tempdir().unwrap();
|
let dir = tempdir().unwrap();
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
assert!(dir.path().join(".storkit/README.md").exists());
|
assert!(dir.path().join(".storkit/README.md").exists());
|
||||||
assert!(dir.path().join(".storkit/project.toml").exists());
|
assert!(dir.path().join(".storkit/project.toml").exists());
|
||||||
@@ -1085,7 +1175,7 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn scaffold_story_kit_creates_work_pipeline_dirs() {
|
fn scaffold_story_kit_creates_work_pipeline_dirs() {
|
||||||
let dir = tempdir().unwrap();
|
let dir = tempdir().unwrap();
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
let stages = [
|
let stages = [
|
||||||
"1_backlog",
|
"1_backlog",
|
||||||
@@ -1109,7 +1199,7 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn scaffold_story_kit_project_toml_has_coder_qa_mergemaster() {
|
fn scaffold_story_kit_project_toml_has_coder_qa_mergemaster() {
|
||||||
let dir = tempdir().unwrap();
|
let dir = tempdir().unwrap();
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
let content = fs::read_to_string(dir.path().join(".storkit/project.toml")).unwrap();
|
let content = fs::read_to_string(dir.path().join(".storkit/project.toml")).unwrap();
|
||||||
assert!(content.contains("[[agent]]"));
|
assert!(content.contains("[[agent]]"));
|
||||||
@@ -1122,7 +1212,7 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn scaffold_context_is_blank_template_not_story_kit_content() {
|
fn scaffold_context_is_blank_template_not_story_kit_content() {
|
||||||
let dir = tempdir().unwrap();
|
let dir = tempdir().unwrap();
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
let content = fs::read_to_string(dir.path().join(".storkit/specs/00_CONTEXT.md")).unwrap();
|
let content = fs::read_to_string(dir.path().join(".storkit/specs/00_CONTEXT.md")).unwrap();
|
||||||
assert!(content.contains("<!-- storkit:scaffold-template -->"));
|
assert!(content.contains("<!-- storkit:scaffold-template -->"));
|
||||||
@@ -1138,7 +1228,7 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn scaffold_stack_is_blank_template_not_story_kit_content() {
|
fn scaffold_stack_is_blank_template_not_story_kit_content() {
|
||||||
let dir = tempdir().unwrap();
|
let dir = tempdir().unwrap();
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
let content = fs::read_to_string(dir.path().join(".storkit/specs/tech/STACK.md")).unwrap();
|
let content = fs::read_to_string(dir.path().join(".storkit/specs/tech/STACK.md")).unwrap();
|
||||||
assert!(content.contains("<!-- storkit:scaffold-template -->"));
|
assert!(content.contains("<!-- storkit:scaffold-template -->"));
|
||||||
@@ -1157,7 +1247,7 @@ mod tests {
|
|||||||
use std::os::unix::fs::PermissionsExt;
|
use std::os::unix::fs::PermissionsExt;
|
||||||
|
|
||||||
let dir = tempdir().unwrap();
|
let dir = tempdir().unwrap();
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
let script_test = dir.path().join("script/test");
|
let script_test = dir.path().join("script/test");
|
||||||
assert!(script_test.exists(), "script/test should be created");
|
assert!(script_test.exists(), "script/test should be created");
|
||||||
@@ -1175,7 +1265,7 @@ mod tests {
|
|||||||
fs::create_dir_all(readme.parent().unwrap()).unwrap();
|
fs::create_dir_all(readme.parent().unwrap()).unwrap();
|
||||||
fs::write(&readme, "custom content").unwrap();
|
fs::write(&readme, "custom content").unwrap();
|
||||||
|
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
assert_eq!(fs::read_to_string(&readme).unwrap(), "custom content");
|
assert_eq!(fs::read_to_string(&readme).unwrap(), "custom content");
|
||||||
}
|
}
|
||||||
@@ -1183,13 +1273,13 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn scaffold_story_kit_is_idempotent() {
|
fn scaffold_story_kit_is_idempotent() {
|
||||||
let dir = tempdir().unwrap();
|
let dir = tempdir().unwrap();
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
let readme_content = fs::read_to_string(dir.path().join(".storkit/README.md")).unwrap();
|
let readme_content = fs::read_to_string(dir.path().join(".storkit/README.md")).unwrap();
|
||||||
let toml_content = fs::read_to_string(dir.path().join(".storkit/project.toml")).unwrap();
|
let toml_content = fs::read_to_string(dir.path().join(".storkit/project.toml")).unwrap();
|
||||||
|
|
||||||
// Run again — must not change content or add duplicate .gitignore entries
|
// Run again — must not change content or add duplicate .gitignore entries
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
assert_eq!(
|
assert_eq!(
|
||||||
fs::read_to_string(dir.path().join(".storkit/README.md")).unwrap(),
|
fs::read_to_string(dir.path().join(".storkit/README.md")).unwrap(),
|
||||||
@@ -1237,7 +1327,7 @@ mod tests {
|
|||||||
.status()
|
.status()
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
// Only 1 commit should exist — scaffold must not commit into an existing repo
|
// Only 1 commit should exist — scaffold must not commit into an existing repo
|
||||||
let log_output = std::process::Command::new("git")
|
let log_output = std::process::Command::new("git")
|
||||||
@@ -1256,7 +1346,7 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn scaffold_creates_story_kit_gitignore_with_relative_entries() {
|
fn scaffold_creates_story_kit_gitignore_with_relative_entries() {
|
||||||
let dir = tempdir().unwrap();
|
let dir = tempdir().unwrap();
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
// .storkit/.gitignore must contain relative patterns for files under .storkit/
|
// .storkit/.gitignore must contain relative patterns for files under .storkit/
|
||||||
let sk_content = fs::read_to_string(dir.path().join(".storkit/.gitignore")).unwrap();
|
let sk_content = fs::read_to_string(dir.path().join(".storkit/.gitignore")).unwrap();
|
||||||
@@ -1287,7 +1377,7 @@ mod tests {
|
|||||||
)
|
)
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
let content = fs::read_to_string(dir.path().join(".storkit/.gitignore")).unwrap();
|
let content = fs::read_to_string(dir.path().join(".storkit/.gitignore")).unwrap();
|
||||||
let worktrees_count = content.lines().filter(|l| l.trim() == "worktrees/").count();
|
let worktrees_count = content.lines().filter(|l| l.trim() == "worktrees/").count();
|
||||||
@@ -1303,7 +1393,7 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn scaffold_creates_claude_md_at_project_root() {
|
fn scaffold_creates_claude_md_at_project_root() {
|
||||||
let dir = tempdir().unwrap();
|
let dir = tempdir().unwrap();
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
let claude_md = dir.path().join("CLAUDE.md");
|
let claude_md = dir.path().join("CLAUDE.md");
|
||||||
assert!(
|
assert!(
|
||||||
@@ -1332,7 +1422,7 @@ mod tests {
|
|||||||
let claude_md = dir.path().join("CLAUDE.md");
|
let claude_md = dir.path().join("CLAUDE.md");
|
||||||
fs::write(&claude_md, "custom CLAUDE.md content").unwrap();
|
fs::write(&claude_md, "custom CLAUDE.md content").unwrap();
|
||||||
|
|
||||||
scaffold_story_kit(dir.path()).unwrap();
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
assert_eq!(
|
assert_eq!(
|
||||||
fs::read_to_string(&claude_md).unwrap(),
|
fs::read_to_string(&claude_md).unwrap(),
|
||||||
@@ -1341,6 +1431,46 @@ mod tests {
|
|||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn scaffold_story_kit_writes_mcp_json_with_port() {
|
||||||
|
let dir = tempdir().unwrap();
|
||||||
|
scaffold_story_kit(dir.path(), 4242).unwrap();
|
||||||
|
|
||||||
|
let mcp_path = dir.path().join(".mcp.json");
|
||||||
|
assert!(mcp_path.exists(), ".mcp.json should be created by scaffold");
|
||||||
|
let content = fs::read_to_string(&mcp_path).unwrap();
|
||||||
|
assert!(content.contains("4242"), ".mcp.json should reference the given port");
|
||||||
|
assert!(content.contains("localhost"), ".mcp.json should reference localhost");
|
||||||
|
assert!(content.contains("storkit"), ".mcp.json should name the storkit server");
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn scaffold_story_kit_does_not_overwrite_existing_mcp_json() {
|
||||||
|
let dir = tempdir().unwrap();
|
||||||
|
let mcp_path = dir.path().join(".mcp.json");
|
||||||
|
fs::write(&mcp_path, "{\"custom\": true}").unwrap();
|
||||||
|
|
||||||
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
|
assert_eq!(
|
||||||
|
fs::read_to_string(&mcp_path).unwrap(),
|
||||||
|
"{\"custom\": true}",
|
||||||
|
"scaffold should not overwrite an existing .mcp.json"
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn scaffold_gitignore_includes_mcp_json() {
|
||||||
|
let dir = tempdir().unwrap();
|
||||||
|
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||||
|
|
||||||
|
let root_gitignore = fs::read_to_string(dir.path().join(".gitignore")).unwrap();
|
||||||
|
assert!(
|
||||||
|
root_gitignore.contains(".mcp.json"),
|
||||||
|
"root .gitignore should include .mcp.json (port is environment-specific)"
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
// --- open_project scaffolding ---
|
// --- open_project scaffolding ---
|
||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
@@ -1351,7 +1481,7 @@ mod tests {
|
|||||||
let store = make_store(&dir);
|
let store = make_store(&dir);
|
||||||
let state = SessionState::default();
|
let state = SessionState::default();
|
||||||
|
|
||||||
open_project(project_dir.to_string_lossy().to_string(), &state, &store)
|
open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001)
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
@@ -1370,7 +1500,7 @@ mod tests {
|
|||||||
let store = make_store(&dir);
|
let store = make_store(&dir);
|
||||||
let state = SessionState::default();
|
let state = SessionState::default();
|
||||||
|
|
||||||
open_project(project_dir.to_string_lossy().to_string(), &state, &store)
|
open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001)
|
||||||
.await
|
.await
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
@@ -1426,10 +1556,19 @@ mod tests {
|
|||||||
toml.contains("[[component]]"),
|
toml.contains("[[component]]"),
|
||||||
"should always emit at least one component"
|
"should always emit at least one component"
|
||||||
);
|
);
|
||||||
// The fallback should include example backend and frontend entries
|
// Fallback should use a generic app component with empty setup
|
||||||
assert!(
|
assert!(
|
||||||
toml.contains("name = \"backend\"") || toml.contains("name = \"frontend\""),
|
toml.contains("name = \"app\""),
|
||||||
"fallback should include example component entries"
|
"fallback should use generic 'app' component name"
|
||||||
|
);
|
||||||
|
assert!(
|
||||||
|
toml.contains("setup = []"),
|
||||||
|
"fallback should have empty setup list"
|
||||||
|
);
|
||||||
|
// Must not contain Rust-specific commands in a non-Rust project
|
||||||
|
assert!(
|
||||||
|
!toml.contains("cargo"),
|
||||||
|
"fallback must not contain Rust-specific commands"
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -1516,6 +1655,38 @@ mod tests {
|
|||||||
assert!(toml.contains("setup = [\"bundle install\"]"));
|
assert!(toml.contains("setup = [\"bundle install\"]"));
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// --- Bug 375: no Rust-specific commands for non-Rust projects ---
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn no_rust_commands_in_go_project() {
|
||||||
|
let dir = tempdir().unwrap();
|
||||||
|
fs::write(dir.path().join("go.mod"), "module example.com/app\n").unwrap();
|
||||||
|
|
||||||
|
let toml = detect_components_toml(dir.path());
|
||||||
|
assert!(!toml.contains("cargo"), "go project must not contain cargo commands");
|
||||||
|
assert!(toml.contains("go build"), "go project must use Go tooling");
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn no_rust_commands_in_node_project() {
|
||||||
|
let dir = tempdir().unwrap();
|
||||||
|
fs::write(dir.path().join("package.json"), "{}").unwrap();
|
||||||
|
|
||||||
|
let toml = detect_components_toml(dir.path());
|
||||||
|
assert!(!toml.contains("cargo"), "node project must not contain cargo commands");
|
||||||
|
assert!(toml.contains("npm install"), "node project must use npm tooling");
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn no_rust_commands_when_no_stack_detected() {
|
||||||
|
let dir = tempdir().unwrap();
|
||||||
|
|
||||||
|
let toml = detect_components_toml(dir.path());
|
||||||
|
assert!(!toml.contains("cargo"), "unknown stack must not contain cargo commands");
|
||||||
|
// setup list must be empty
|
||||||
|
assert!(toml.contains("setup = []"), "unknown stack must have empty setup list");
|
||||||
|
}
|
||||||
|
|
||||||
#[test]
|
#[test]
|
||||||
fn detect_multiple_markers_generates_multiple_components() {
|
fn detect_multiple_markers_generates_multiple_components() {
|
||||||
let dir = tempdir().unwrap();
|
let dir = tempdir().unwrap();
|
||||||
@@ -1544,6 +1715,124 @@ mod tests {
|
|||||||
assert!(!toml.contains("name = \"app\""));
|
assert!(!toml.contains("name = \"app\""));
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// --- detect_script_test ---
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn detect_script_test_no_markers_returns_stub() {
|
||||||
|
let dir = tempdir().unwrap();
|
||||||
|
let script = detect_script_test(dir.path());
|
||||||
|
assert!(
|
||||||
|
script.contains("No tests configured"),
|
||||||
|
"fallback should contain the generic stub message"
|
||||||
|
);
|
||||||
|
assert!(script.starts_with("#!/usr/bin/env bash"));
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn detect_script_test_cargo_toml_adds_cargo_test() {
|
||||||
|
let dir = tempdir().unwrap();
|
||||||
|
fs::write(dir.path().join("Cargo.toml"), "[package]\nname = \"x\"\n").unwrap();
|
||||||
|
|
||||||
|
let script = detect_script_test(dir.path());
|
||||||
|
assert!(script.contains("cargo test"), "Rust project should run cargo test");
|
||||||
|
assert!(!script.contains("No tests configured"));
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn detect_script_test_package_json_npm_adds_npm_test() {
|
||||||
|
let dir = tempdir().unwrap();
|
||||||
|
fs::write(dir.path().join("package.json"), "{}").unwrap();
|
||||||
|
|
||||||
|
let script = detect_script_test(dir.path());
|
||||||
|
assert!(script.contains("npm test"), "Node project without pnpm-lock should run npm test");
|
||||||
|
assert!(!script.contains("No tests configured"));
|
||||||
|
}
|
||||||
|
#[test]
fn detect_script_test_package_json_pnpm_adds_pnpm_test() {
    let dir = tempdir().unwrap();
    fs::write(dir.path().join("package.json"), "{}").unwrap();
    fs::write(dir.path().join("pnpm-lock.yaml"), "").unwrap();

    let script = detect_script_test(dir.path());
    assert!(script.contains("pnpm test"), "Node project with pnpm-lock should run pnpm test");
    // "npm test" is a substring of "pnpm test"; verify there's no bare "npm test" line
    assert!(!script.lines().any(|l| l.trim() == "npm test"), "should not use npm when pnpm-lock.yaml is present");
}

#[test]
fn detect_script_test_pyproject_toml_adds_pytest() {
    let dir = tempdir().unwrap();
    fs::write(dir.path().join("pyproject.toml"), "[project]\nname = \"x\"\n").unwrap();

    let script = detect_script_test(dir.path());
    assert!(script.contains("pytest"), "Python project should run pytest");
    assert!(!script.contains("No tests configured"));
}

#[test]
fn detect_script_test_requirements_txt_adds_pytest() {
    let dir = tempdir().unwrap();
    fs::write(dir.path().join("requirements.txt"), "flask\n").unwrap();

    let script = detect_script_test(dir.path());
    assert!(script.contains("pytest"), "Python project (requirements.txt) should run pytest");
}

#[test]
fn detect_script_test_go_mod_adds_go_test() {
    let dir = tempdir().unwrap();
    fs::write(dir.path().join("go.mod"), "module example.com/app\n").unwrap();

    let script = detect_script_test(dir.path());
    assert!(script.contains("go test ./..."), "Go project should run go test ./...");
    assert!(!script.contains("No tests configured"));
}

#[test]
fn detect_script_test_multi_stack_combines_commands() {
    let dir = tempdir().unwrap();
    fs::write(dir.path().join("go.mod"), "module example.com/app\n").unwrap();
    fs::write(dir.path().join("package.json"), "{}").unwrap();

    let script = detect_script_test(dir.path());
    assert!(script.contains("go test ./..."), "multi-stack should include Go test command");
    assert!(script.contains("npm test"), "multi-stack should include Node test command");
}

#[test]
fn detect_script_test_output_starts_with_shebang() {
    let dir = tempdir().unwrap();
    fs::write(dir.path().join("Cargo.toml"), "[package]\nname = \"x\"\n").unwrap();

    let script = detect_script_test(dir.path());
    assert!(
        script.starts_with("#!/usr/bin/env bash\nset -euo pipefail\n"),
        "generated script should start with bash shebang and set -euo pipefail"
    );
}

#[test]
fn scaffold_script_test_contains_detected_commands_for_rust() {
    let dir = tempdir().unwrap();
    fs::write(dir.path().join("Cargo.toml"), "[package]\nname = \"myapp\"\n").unwrap();

    scaffold_story_kit(dir.path(), 3001).unwrap();

    let content = fs::read_to_string(dir.path().join("script/test")).unwrap();
    assert!(content.contains("cargo test"), "Rust project scaffold should set cargo test in script/test");
    assert!(!content.contains("No tests configured"), "should not use stub when stack is detected");
}

#[test]
fn scaffold_script_test_fallback_stub_when_no_stack() {
    let dir = tempdir().unwrap();
    scaffold_story_kit(dir.path(), 3001).unwrap();

    let content = fs::read_to_string(dir.path().join("script/test")).unwrap();
    assert!(content.contains("No tests configured"), "unknown stack should use the generic stub");
}

// --- generate_project_toml ---

#[test]
@@ -1572,7 +1861,7 @@ mod tests {
    )
    .unwrap();

    scaffold_story_kit(dir.path(), 3001).unwrap();

    let content = fs::read_to_string(dir.path().join(".storkit/project.toml")).unwrap();
    assert!(
@@ -1592,17 +1881,21 @@ mod tests {
#[test]
fn scaffold_project_toml_fallback_when_no_stack_detected() {
    let dir = tempdir().unwrap();
    scaffold_story_kit(dir.path(), 3001).unwrap();

    let content = fs::read_to_string(dir.path().join(".storkit/project.toml")).unwrap();
    assert!(
        content.contains("[[component]]"),
        "project.toml should always have at least one component"
    );
    // Fallback uses generic app component with empty setup — no Rust-specific commands
    assert!(
        content.contains("name = \"app\""),
        "fallback should use generic 'app' component name"
    );
    assert!(
        !content.contains("cargo"),
        "fallback must not contain Rust-specific commands for non-Rust projects"
    );
}

@@ -1614,7 +1907,7 @@ mod tests {
    let existing = "[[component]]\nname = \"custom\"\npath = \".\"\nsetup = [\"make build\"]\n";
    fs::write(sk_dir.join("project.toml"), existing).unwrap();

    scaffold_story_kit(dir.path(), 3001).unwrap();

    let content = fs::read_to_string(sk_dir.join("project.toml")).unwrap();
    assert_eq!(
@@ -34,6 +34,32 @@ use std::path::PathBuf;
use std::sync::Arc;
use tokio::sync::broadcast;

/// What the first CLI argument means.
#[derive(Debug, PartialEq)]
enum CliDirective {
    /// `--help` / `-h`
    Help,
    /// `--version` / `-V`
    Version,
    /// An unrecognised flag (starts with `-`).
    UnknownFlag(String),
    /// A positional path argument.
    Path,
    /// No arguments at all.
    None,
}

/// Inspect the raw CLI arguments and return the directive they imply.
fn classify_cli_args(args: &[String]) -> CliDirective {
    match args.first().map(String::as_str) {
        None => CliDirective::None,
        Some("--help" | "-h") => CliDirective::Help,
        Some("--version" | "-V") => CliDirective::Version,
        Some(a) if a.starts_with('-') => CliDirective::UnknownFlag(a.to_string()),
        Some(_) => CliDirective::Path,
    }
}

/// Resolve the optional positional path argument (everything after the binary
/// name) into an absolute `PathBuf`. Returns `None` when no argument was
/// supplied so that the caller can fall back to the auto-detect behaviour.
@@ -53,8 +79,61 @@ async fn main() -> Result<(), std::io::Error> {

    // Collect CLI args, skipping the binary name (argv[0]).
    let cli_args: Vec<String> = std::env::args().skip(1).collect();

    // Handle CLI flags before treating anything as a project path.
    match classify_cli_args(&cli_args) {
        CliDirective::Help => {
            println!("storkit [PATH]");
            println!();
            println!("Serve a storkit project.");
            println!();
            println!("USAGE:");
            println!("  storkit [PATH]");
            println!();
            println!("ARGS:");
            println!(
                "  PATH  Path to an existing project directory. \
                 If omitted, storkit searches parent directories for a .storkit/ root."
            );
            println!();
            println!("OPTIONS:");
            println!("  -h, --help     Print this help and exit");
            println!("  -V, --version  Print the version and exit");
            std::process::exit(0);
        }
        CliDirective::Version => {
            println!("storkit {}", env!("CARGO_PKG_VERSION"));
            std::process::exit(0);
        }
        CliDirective::UnknownFlag(flag) => {
            eprintln!("error: unknown option: {flag}");
            eprintln!("Run 'storkit --help' for usage.");
            std::process::exit(1);
        }
        CliDirective::Path | CliDirective::None => {}
    }

    let explicit_path = parse_project_path_arg(&cli_args, &cwd);

    // When a path is given explicitly on the CLI, it must already exist as a
    // directory. We do not create directories from the command line.
    if let Some(ref path) = explicit_path {
        if !path.exists() {
            eprintln!(
                "error: path does not exist: {}",
                path.display()
            );
            std::process::exit(1);
        }
        if !path.is_dir() {
            eprintln!(
                "error: path is not a directory: {}",
                path.display()
            );
            std::process::exit(1);
        }
    }

    if let Some(explicit_root) = explicit_path {
        // An explicit path was given on the command line.
        // Open it directly — scaffold .storkit/ if it is missing — and
@@ -63,6 +142,7 @@ async fn main() -> Result<(), std::io::Error> {
            explicit_root.to_string_lossy().to_string(),
            &app_state,
            store.as_ref(),
            port,
        )
        .await
        {
@@ -85,6 +165,7 @@ async fn main() -> Result<(), std::io::Error> {
            project_root.to_string_lossy().to_string(),
            &app_state,
            store.as_ref(),
            port,
        )
        .await
        .unwrap_or_else(|e| {
@@ -96,13 +177,19 @@ async fn main() -> Result<(), std::io::Error> {
        config::ProjectConfig::load(&project_root)
            .unwrap_or_else(|e| panic!("Invalid project.toml: {e}"));
    } else {
        // No .storkit/ found in cwd or parents — scaffold cwd as a new
        // project, exactly like `storkit .` does.
        io::fs::open_project(
            cwd.to_string_lossy().to_string(),
            &app_state,
            store.as_ref(),
            port,
        )
        .await
        .unwrap_or_else(|e| {
            slog!("Warning: failed to scaffold project at {cwd:?}: {e}");
            cwd.to_string_lossy().to_string()
        });
    }
}

@@ -399,6 +486,61 @@ name = "coder"
        .unwrap_or_else(|e| panic!("Invalid project.toml: {e}"));
}

// ── classify_cli_args ─────────────────────────────────────────────────

#[test]
fn classify_none_when_no_args() {
    assert_eq!(classify_cli_args(&[]), CliDirective::None);
}

#[test]
fn classify_help_long() {
    assert_eq!(
        classify_cli_args(&["--help".to_string()]),
        CliDirective::Help
    );
}

#[test]
fn classify_help_short() {
    assert_eq!(
        classify_cli_args(&["-h".to_string()]),
        CliDirective::Help
    );
}

#[test]
fn classify_version_long() {
    assert_eq!(
        classify_cli_args(&["--version".to_string()]),
        CliDirective::Version
    );
}

#[test]
fn classify_version_short() {
    assert_eq!(
        classify_cli_args(&["-V".to_string()]),
        CliDirective::Version
    );
}

#[test]
fn classify_unknown_flag() {
    assert_eq!(
        classify_cli_args(&["--serve".to_string()]),
        CliDirective::UnknownFlag("--serve".to_string())
    );
}

#[test]
fn classify_path() {
    assert_eq!(
        classify_cli_args(&["/some/path".to_string()]),
        CliDirective::Path
    );
}

// ── parse_project_path_arg ────────────────────────────────────────────

#[test]
|||||||
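The tests above pin down the classifier's contract without showing its body, which is not part of this diff. A free-standing function satisfying those cases could look like the following sketch; the shape of `CliDirective` is inferred from the assertions, not copied from the crate:

```rust
#[derive(Debug, PartialEq, Eq)]
enum CliDirective {
    None,
    Help,
    Version,
    UnknownFlag(String),
    Path,
}

/// Classify the first CLI argument: recognised help/version flags win,
/// any other leading-dash argument is an unknown flag, and everything
/// else is treated as a project path.
fn classify_cli_args(args: &[String]) -> CliDirective {
    match args.first().map(String::as_str) {
        None => CliDirective::None,
        Some("--help") | Some("-h") => CliDirective::Help,
        Some("--version") | Some("-V") => CliDirective::Version,
        Some(flag) if flag.starts_with('-') => CliDirective::UnknownFlag(flag.to_string()),
        Some(_) => CliDirective::Path,
    }
}

fn main() {
    // Classify whatever was passed on the command line.
    let args: Vec<String> = std::env::args().skip(1).collect();
    println!("{:?}", classify_cli_args(&args));
}
```

Matching on the first argument only is the key design point: it keeps `--help` and `--version` from ever being interpreted as project paths, which is exactly the bug the story title describes.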
@@ -16,23 +16,36 @@ pub(super) fn handle_status(ctx: &CommandContext) -> Option<String> {
 
 /// Format a short display label for a work item.
 ///
-/// Extracts the leading numeric ID from the file stem (e.g. `"293"` from
-/// `"293_story_register_all_bot_commands"`) and combines it with the human-
-/// readable name from the front matter when available.
+/// Extracts the leading numeric ID and optional type tag from the file stem
+/// (e.g. `"293"` and `"story"` from `"293_story_register_all_bot_commands"`)
+/// and combines them with the human-readable name from the front matter when
+/// available. Known types (`story`, `bug`, `spike`, `refactor`) are shown as
+/// bracketed labels; unknown or missing types are omitted silently.
 ///
 /// Examples:
-/// - `("293_story_foo", Some("Register all bot commands"))` → `"293 — Register all bot commands"`
-/// - `("293_story_foo", None)` → `"293"`
+/// - `("293_story_foo", Some("Register all bot commands"))` → `"293 [story] — Register all bot commands"`
+/// - `("375_bug_foo", None)` → `"375 [bug]"`
+/// - `("293_story_foo", None)` → `"293 [story]"`
 /// - `("no_number_here", None)` → `"no_number_here"`
 pub(super) fn story_short_label(stem: &str, name: Option<&str>) -> String {
-    let number = stem
-        .split('_')
-        .next()
-        .filter(|s| !s.is_empty() && s.chars().all(|c| c.is_ascii_digit()))
-        .unwrap_or(stem);
-    match name {
-        Some(n) => format!("{number} — {n}"),
+    let mut parts = stem.splitn(3, '_');
+    let first = parts.next().unwrap_or(stem);
+    let (number, type_label) = if !first.is_empty() && first.chars().all(|c| c.is_ascii_digit()) {
+        let t = parts.next().and_then(|t| match t {
+            "story" | "bug" | "spike" | "refactor" => Some(t),
+            _ => None,
+        });
+        (first, t)
+    } else {
+        (stem, None)
+    };
+    let prefix = match type_label {
+        Some(t) => format!("{number} [{t}]"),
         None => number.to_string(),
+    };
+    match name {
+        Some(n) => format!("{prefix} — {n}"),
+        None => prefix,
     }
 }
 
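The new `story_short_label` above can be exercised directly when lifted out of its module; this free-standing copy drops only the `pub(super)` visibility so it compiles on its own:

```rust
/// New labeling logic from the hunk above: numeric ID, optional bracketed
/// type tag for known kinds, then the human-readable name when present.
fn story_short_label(stem: &str, name: Option<&str>) -> String {
    let mut parts = stem.splitn(3, '_');
    let first = parts.next().unwrap_or(stem);
    let (number, type_label) = if !first.is_empty() && first.chars().all(|c| c.is_ascii_digit()) {
        // Second segment is shown only if it is a known work-item type.
        let t = parts.next().and_then(|t| match t {
            "story" | "bug" | "spike" | "refactor" => Some(t),
            _ => None,
        });
        (first, t)
    } else {
        // No leading numeric ID: fall back to the whole stem.
        (stem, None)
    };
    let prefix = match type_label {
        Some(t) => format!("{number} [{t}]"),
        None => number.to_string(),
    };
    match name {
        Some(n) => format!("{prefix} — {n}"),
        None => prefix,
    }
}

fn main() {
    println!("{}", story_short_label("293_story_foo", Some("Register all bot commands")));
    println!("{}", story_short_label("375_bug_foo", None));
    println!("{}", story_short_label("no_number_here", None));
}
```

Using `splitn(3, '_')` rather than `split('_')` is what makes the type tag cheap to read: the first two segments are enough, and the remainder of the stem is never scanned.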
@@ -200,13 +213,13 @@ mod tests {
     #[test]
     fn short_label_extracts_number_and_name() {
         let label = story_short_label("293_story_register_all_bot_commands", Some("Register all bot commands"));
-        assert_eq!(label, "293 — Register all bot commands");
+        assert_eq!(label, "293 [story] — Register all bot commands");
     }
 
     #[test]
     fn short_label_number_only_when_no_name() {
         let label = story_short_label("297_story_improve_bot_status_command_formatting", None);
-        assert_eq!(label, "297");
+        assert_eq!(label, "297 [story]");
     }
 
     #[test]
@@ -224,6 +237,37 @@ mod tests {
         );
     }
 
+    #[test]
+    fn short_label_shows_bug_type() {
+        let label = story_short_label("375_bug_default_project_toml", Some("Default project.toml issue"));
+        assert_eq!(label, "375 [bug] — Default project.toml issue");
+    }
+
+    #[test]
+    fn short_label_shows_spike_type() {
+        let label = story_short_label("61_spike_filesystem_watcher_architecture", Some("Filesystem watcher architecture"));
+        assert_eq!(label, "61 [spike] — Filesystem watcher architecture");
+    }
+
+    #[test]
+    fn short_label_shows_refactor_type() {
+        let label = story_short_label("260_refactor_upgrade_libsqlite3_sys", Some("Upgrade libsqlite3-sys"));
+        assert_eq!(label, "260 [refactor] — Upgrade libsqlite3-sys");
+    }
+
+    #[test]
+    fn short_label_omits_unknown_type() {
+        let label = story_short_label("42_task_do_something", Some("Do something"));
+        assert_eq!(label, "42 — Do something");
+    }
+
+    #[test]
+    fn short_label_no_type_when_only_id() {
+        // Stem with only a numeric ID and no type segment
+        let label = story_short_label("42", Some("Some item"));
+        assert_eq!(label, "42 — Some item");
+    }
+
     // -- build_pipeline_status formatting -----------------------------------
 
     #[test]
@@ -248,8 +292,8 @@ mod tests {
             "output must not show full filename stem: {output}"
         );
         assert!(
-            output.contains("293 — Register all bot commands"),
-            "output must show number and title: {output}"
+            output.contains("293 [story] — Register all bot commands"),
+            "output must show number, type, and title: {output}"
         );
     }
 
@@ -288,7 +332,7 @@ mod tests {
         let output = build_pipeline_status(tmp.path(), &agents);
 
         assert!(
-            output.contains("293 — Register all bot commands — $0.29"),
+            output.contains("293 [story] — Register all bot commands — $0.29"),
             "output must show cost next to story: {output}"
         );
     }
@@ -351,7 +395,7 @@ mod tests {
         let output = build_pipeline_status(tmp.path(), &agents);
 
         assert!(
-            output.contains("293 — Register all bot commands — $0.29"),
+            output.contains("293 [story] — Register all bot commands — $0.29"),
             "output must show aggregated cost: {output}"
        );
     }
@@ -189,23 +189,6 @@ mod tests {
     use crate::transport::MessageId;
     use std::sync::Mutex;
 
-    // ── AC: docker-compose.yml specifies runtime: runsc ──────────────────
-
-    // docker-compose.yml embedded at compile time for a hermetic test.
-    const DOCKER_COMPOSE_YML: &str =
-        include_str!(concat!(env!("CARGO_MANIFEST_DIR"), "/../docker/docker-compose.yml"));
-
-    /// The docker-compose.yml must opt the container into the gVisor runtime
-    /// so that all container syscalls are intercepted in userspace.
-    #[test]
-    fn docker_compose_specifies_runsc_runtime() {
-        assert!(
-            DOCKER_COMPOSE_YML.contains("runtime: runsc"),
-            "docker/docker-compose.yml must contain `runtime: runsc` \
-             to enable gVisor sandboxing"
-        );
-    }
-
     /// In-memory transport that records sent messages.
     struct CapturingTransport {
         sent: Mutex<Vec<(String, String)>>,