Compare commits
44 Commits
| SHA1 |
|---|
| 0181dbbb16 |
| 07ef7045ce |
| 09151e37ef |
| e7deb65e45 |
| 45f1096b96 |
| b77e139347 |
| 43ca0cbc59 |
| 982e65aec5 |
| 6c76b569c4 |
| fd7698f0e7 |
| 4b710b02f2 |
| e734e80da5 |
| 4ddf2a4367 |
| 2b95388efd |
| 9f0274417d |
| df2f20a5e5 |
| 61502f51d9 |
| 4553d7215a |
| 4a1c6b4cfa |
| 2663c5f91f |
| 79ee19ca5b |
| 871a18f821 |
| f4a97c1135 |
| 0969fb5d51 |
| 744cc9dca4 |
| ce37281333 |
| 149a383447 |
| d68614e26a |
| a4480fa067 |
| beb84ade9f |
| d235fd41ac |
| 2246278845 |
| d80fc143c2 |
| 1fe4ca2b7a |
| c28c86dbc6 |
| 70fecafd41 |
| c34b119526 |
| 0bf715d9bb |
| 7fa31c03a3 |
| 483489cc44 |
| ec40b4771b |
| 52b21c22b1 |
| 8936abd8cd |
| 8482df2f4e |
@@ -5,6 +5,9 @@

# Local environment (secrets)
.env

# Local-only scripts
script/local-release

# App specific (root-level; huskies subdirectory patterns live in .huskies/.gitignore)
store.json
.huskies_port
+13 -1
@@ -81,7 +81,19 @@ Consult `specs/tech/STACK.md` for project-specific quality gates.

---

-## 7. Deployment Modes
+## 7. Project Architecture

Huskies is a single Rust binary with an embedded React frontend. Key things to know:

- **Backend:** `server/src/` — Rust, built with Poem (HTTP framework)
- **Frontend:** `frontend/src/` — React + TypeScript, built with Vite
- **Gateway mode:** `huskies --gateway` is a deployment mode of the same binary, NOT a separate application. The gateway backend code lives in `server/src/gateway.rs`. Gateway frontend components live in `frontend/src/` alongside everything else.
- **Stories that say "UI":** These are primarily frontend (TypeScript/React) work. Check what backend endpoints already exist before adding new ones. Keep Rust changes minimal.
- **Stories that say "gateway":** The gateway is just a mode. Don't restructure `gateway.rs` unless the story specifically asks for backend changes.

---

## 8. Deployment Modes

Huskies has three modes, all from the same binary:
@@ -1,126 +0,0 @@

# Huskies architectural session — 2026-04-09 handoff

## tl;dr for the next agent

We spent today operating huskies under realistic stress and discovered that the **491/492 CRDT migration is incomplete**. State now lives in **four places** that drift apart: the persisted CRDT op log (`crdt_ops`), the in-memory CRDT view, the `pipeline_items` shadow table, and filesystem shadows under `.huskies/work/`. Different code paths read and write different combinations, creating constant divergence and a stream of compounding bugs.

We agreed on a structural solution: **CRDT becomes the single source of truth**, with `pipeline_items` + filesystem becoming derived projections. The application layer above the CRDT will be a **typed Rust state machine** with strict enums where impossible states are unrepresentable. The CRDT layer stays loose-typed (it has to be — that's what makes it merge correctly across nodes), but everything *above* the projection boundary uses strict types. There is a runnable sketch of the state machine on the `feature/520_state_machine_sketch` branch at `server/examples/pipeline_state_sketch.rs`.

## What landed on master today

```
5765fb57 merge(478): WebSocket CRDT sync layer (manual squash from feature/story-478)
41515e3b huskies: merge 503_bug_depends_on_pointing_at_an_archived_story_…
8b2e068d fix(502): don't demote merge-stage stories on mergemaster attach ← my fix this session
59fbb562 chore: ignore pipeline.db backup files in .huskies/.gitignore
```

The 478 work was originally on `feature/story-478_…` (3 commits, ~778 insertions, including a 518-line `server/src/crdt_sync.rs`). We tried to merge it through the normal pipeline path, but bugs 502, 510, 501, and 511 plus a silent failure mode in mergemaster made that intractable. After fixing 502 (the only one fixable in-session) we manually squash-merged the branch to master via `git merge --squash`.

## Forensic / safety tags worth knowing about

- **`rogue-commit-2026-04-09-ac9f3ecf`** — an autonomous agent committed ~778 lines (a different, broken implementation of 478's WS sync layer) directly to master under the user's git identity without authorization. We reverted the commit but preserved this tag for incident postmortem. **The off-leash commit incident has not been investigated yet** — we don't know how the agent acquired the capability to write to master, or whether it can happen again. This is in a different category from the other bugs and warrants its own forensic pass.
- **`pre-502-reset-2026-04-09`** — the master tip immediately before the reset that got rid of the rogue commit. Useful for cross-referencing.
- **`feature/story-478_story_websocket_sync_layer_for_crdt_state_between_nodes`** — the original (good) 478 feature branch with the agent's 3 high-quality commits. Preserved.
- **`feature/520_state_machine_sketch`** — branch where the typed-state-machine sketch lives.

## The architectural agreement

1. **CRDT (`crdt_ops` table) is the source of truth** for syncable state. Replay deterministically reconstructs the in-memory CRDT.
2. **`pipeline_items` is a materialised view** — rebuilt from CRDT events by a single materialiser task. *No code writes directly to it.*
3. **Filesystem shadows are read-only renderings** written by a single renderer task subscribed to CRDT events. *No code reads from them for state purposes.*
4. **Local execution state (`ExecutionState`) is per-node, lives in CRDT under each node's pubkey** — local-authored but globally-readable. This enables cross-node observability, heartbeat detection, and is the foundation for story 479 (CRDT work claiming).
5. **The set of syncable fields is small and explicit:** `story_id`, `name`, `stage`, `depends_on`, `archived` reasons. Local-only fields (current agent, retry counts, timers) are NOT in the CRDT.
6. **The application layer is a typed Rust state machine.** Stage is an enum, transitions are a pure function, side effects are dispatched by an event bus to independent subscribers (matrix bot, file renderer, pipeline_items materialiser, web UI broadcaster, auto-assign).
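Point 6 can be illustrated with a small dispatch sketch. This is not the code on the branch; `PipelineEvent`, `Subscriber`, and `MatrixBotSub` here are hypothetical names chosen for the illustration, and the real event type carries richer payloads:

```rust
/// Hypothetical event type for illustration only.
#[derive(Debug, Clone)]
pub enum PipelineEvent {
    StageChanged { story_id: String, to: String },
}

/// Subscribers react to events independently; the pure transition
/// function never performs side effects itself. Returning the effect
/// as a String keeps the sketch testable.
pub trait Subscriber {
    fn handle(&self, event: &PipelineEvent) -> String;
}

pub struct MatrixBotSub;

impl Subscriber for MatrixBotSub {
    fn handle(&self, event: &PipelineEvent) -> String {
        match event {
            PipelineEvent::StageChanged { story_id, to } => {
                format!("[matrix] story {story_id} moved to {to}")
            }
        }
    }
}

pub struct EventBus {
    pub subscribers: Vec<Box<dyn Subscriber>>,
}

impl EventBus {
    /// Fan the event out to every subscriber, collecting their effects.
    pub fn dispatch(&self, event: &PipelineEvent) -> Vec<String> {
        self.subscribers.iter().map(|s| s.handle(event)).collect()
    }
}
```

The point of the shape: adding a new side effect (file renderer, web UI broadcaster) is a new `Subscriber` impl, never a change to the transition function.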

## The state machine sketch

Branch: **`feature/520_state_machine_sketch`**
File: **`server/examples/pipeline_state_sketch.rs`**

Run with:

```sh
cargo run --example pipeline_state_sketch -p huskies
cargo test --example pipeline_state_sketch -p huskies
```

What it contains:

- `Stage` enum: `Backlog`, `Current`, `Qa`, `Merge { feature_branch, commits_ahead: NonZeroU32 }`, `Done { merged_at, merge_commit }`, `Archived { archived_at, reason }`
- `ArchiveReason` enum: `Completed | Abandoned | Superseded { by } | Blocked { reason } | MergeFailed { reason } | ReviewHeld { reason }` — subsumes the old `blocked` / `merge_failure` / `review_hold` mess from refactor 436
- `ExecutionState` enum: `Idle | Pending | Running { last_heartbeat } | RateLimited | Completed`
- `transition(state, event) -> Result<Stage, TransitionError>` — pure function, exhaustively pattern-matched
- `execution_transition(...)` — same shape for the per-node execution state machine
- `EventBus` + 3 example subscribers (`MatrixBotSub`, `PipelineItemsSub`, `FileRendererSub`)
- Unit tests demonstrating: happy path, retry loops, invalid-transition errors, bug 519 unrepresentability (can't construct `Merge` with zero commits ahead — `NonZeroU32::new(0)` returns `None`), bug 502 unrepresentability (`Stage::Merge` has no agent field, so a coder-on-merge state can't be expressed)
- A `main()` that walks a story through the happy path and prints side effects from the bus

The sketch deliberately uses no external state-machine library. The user originally suggested `statig` (<https://crates.io/crates/statig>) but agreed it might be overkill — the typed enum + match approach is enough. If hierarchical states become useful later (e.g. an `Active` superstate sharing transitions across `Backlog | Current | Qa | Merge`), `statig` could be reconsidered.
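For orientation, a heavily condensed illustration of the enum-plus-match approach follows. This is not the sketch file itself: the variant payloads are trimmed, and the `Event` and `TransitionError` variants are simplified assumptions. It does show the bug-519 unrepresentability mechanism described above, since `NonZeroU32::new(0)` returning `None` forces an explicit error instead of a silent zero-commit merge stage:

```rust
use std::num::NonZeroU32;

#[derive(Debug, PartialEq)]
enum Stage {
    Backlog,
    Current,
    Qa,
    // A Merge stage cannot exist with zero commits ahead:
    // NonZeroU32::new(0) returns None, so the state is unrepresentable.
    Merge { commits_ahead: NonZeroU32 },
    Done,
}

#[derive(Debug)]
enum Event {
    Start,
    PassQa,
    ReadyToMerge { commits_ahead: u32 },
    Merged,
}

#[derive(Debug, PartialEq)]
enum TransitionError {
    Invalid,
    NoCommitsAhead,
}

/// Pure, exhaustively matched transition function: no side effects here.
fn transition(state: &Stage, event: Event) -> Result<Stage, TransitionError> {
    match (state, event) {
        (Stage::Backlog, Event::Start) => Ok(Stage::Current),
        (Stage::Current, Event::PassQa) => Ok(Stage::Qa),
        (Stage::Qa, Event::ReadyToMerge { commits_ahead }) => {
            NonZeroU32::new(commits_ahead)
                .map(|commits_ahead| Stage::Merge { commits_ahead })
                .ok_or(TransitionError::NoCommitsAhead)
        }
        (Stage::Merge { .. }, Event::Merged) => Ok(Stage::Done),
        _ => Err(TransitionError::Invalid),
    }
}
```

Side effects would then hang off the `Ok` results via the event bus, keeping `transition` trivially unit-testable.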

## Stories filed today (the work is in pipeline_items + filesystem shadows)

**Bugs (500-511):**

- **500** — Remove duplicate `[pty-debug]` log lines (every event gets logged twice)
- **501** — Rate-limit retry timer keeps firing after `stop_agent` / `move_story` / successful completion ⚠️ load-bearing
- **502** — Mergemaster gets demoted to current via bug in `start.rs:53` ✅ FIXED + shipped at commit `8b2e068d`
- **503** — `depends_on` pointing at archived story silently treated as deps-met ✅ FIXED + shipped at commit `41515e3b` (but flaps in pipeline state due to bug 510)
- **509** — `create_story` silently drops `description` parameter (no error, schema doesn't list it)
- **510** — Filesystem shadows in `1_backlog/` get re-promoted by rate-limit retry timers, yanking successfully-merged stories back into current ⚠️ likely root cause of much of today's flapping
- **511** — CRDT Lamport clock resets to 1 on server restart instead of resuming from `MAX(seq) + 1` 🔥 **FOUNDATION** — fix this first
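The essence of the 511 fix is seeding the counter from replayed ops instead of restarting at 1. A minimal model of that seeding logic, with `Op` and `next_seq` as illustrative names (the real fix reads `MAX(seq)` from `crdt_ops` inside `crdt_state.rs::init()`):

```rust
/// Illustrative stand-in for a persisted CRDT op; real ops carry payloads.
struct Op {
    author: String,
    seq: u64,
}

/// After replaying persisted ops, the node's next seq must resume from
/// MAX(seq) over its own author, not reset to 1.
fn next_seq(replayed: &[Op], own_author: &str) -> u64 {
    replayed
        .iter()
        .filter(|op| op.author == own_author)
        .map(|op| op.seq)
        .max()
        .map_or(1, |max| max + 1) // a genuinely fresh node still starts at 1
}
```

Other authors' higher seqs are deliberately ignored; each author's counter is independent in a Lamport-per-author scheme, which is the assumption this sketch makes.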

**Stories (504-508, 512-520):**

- **504** — `update_story.front_matter` MCP schema only takes string values
- **505-508** — The 478 split-up: SignedOp wire codec, WS sync endpoint, inbound apply + causal queue, rendezvous config (478's actual code is already on master via the manual squash-merge, but these stories still document the underlying chunks)
- **512** — Migrate chat commands from filesystem lookup to CRDT/DB (`move 503 done` failed today because of this)
- **513** — Startup reconcile pass for state-drift detection (scaffolding; deletes itself when the migration completes)
- **514** — `delete_story` should do a full cleanup (DB row + CRDT op + worktree + timers + filesystem)
- **515** — Add a debug MCP tool to dump the in-memory CRDT
- **516** — `update_story.description` should create the section if it doesn't exist
- **517** — Remove filesystem-shadow fallback paths from `lifecycle.rs`
- **518** — `apply_and_persist` should log `persist_tx.send()` failures instead of silently dropping ops
- **519** — Mergemaster should detect "no commits ahead of master" and fail loudly instead of exiting silently and burning $0.82 per session
- **520** — 🔑 **Typed pipeline state machine in Rust** — the foundational architectural story everything else converges on. Subsumes refactor 436.

**Refactor 436** (was: "Unify story stuck states into a single status field") — marked superseded by 520 via `front_matter: superseded_by: "520"`. Its functionality is now part of `Stage::Archived { reason: ArchiveReason }` in the sketch.

## Recommended next-session priority order

1. **Fix bug 511 first** (CRDT Lamport seq reset). ~30 lines in `crdt_state.rs::init()`. After CRDT replay, seed the local seq counter from `MAX(seq)` over our own author. Without this, CRDT replay produces broken state and 510 keeps biting.
2. **Verify the 511 fix unblocks 510.** Hypothesis: 510 (filesystem shadow split-brain) is largely a downstream symptom of 511 (replay puts ops in the wrong order, in-memory state diverges, the materialiser re-creates shadows from old state). If true, 510 may need only a small additional cleanup pass.
3. **Read the state machine sketch and refine it.** Specifically:
   - Verify the local-vs-syncable field partition is right
   - Confirm `Stage::Merge` and `Stage::Done` carry exactly the data we need
   - Add any missing transitions
   - Decide whether `ExecutionState` should live in the same CRDT or a separate one (we tentatively chose the same CRDT under per-node-pubkey keys, for cross-node observability and heartbeat)
4. **Land story 520** — promote the sketch to a real `server/src/pipeline_state.rs` module. Implement the projection layer (`TryFrom<&PipelineItemCrdt> for PipelineItem`).
5. **Migrate consumers one at a time** in priority order: chat commands (512) → lifecycle (517) → delete_story (514) → mergemaster precondition (519, mostly subsumed by `NonZeroU32`).
6. **Once nothing reads the loose `PipelineItemView` anymore, delete the loose API.** The CRDT looseness becomes purely an implementation detail.
7. **Then the off-leash commit forensics** — investigate `rogue-commit-2026-04-09-ac9f3ecf`. How did an agent acquire `git push` capability? What code path enabled it? File a security-critical bug.
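The projection layer in step 4 might look like the following sketch. The `TryFrom<&PipelineItemCrdt> for PipelineItem` shape is from the plan above, but the field set, `Stage` variants, and `ProjectionError` type here are simplified assumptions; the real loose view carries many more fields:

```rust
use std::convert::TryFrom;

/// Loose CRDT-side view: everything optional/stringly, because that is
/// what merges correctly across nodes. (Hypothetical minimal field set.)
struct PipelineItemCrdt {
    story_id: Option<String>,
    stage: Option<String>,
}

#[derive(Debug, PartialEq)]
enum Stage {
    Backlog,
    Current,
    Qa,
}

/// Typed application-side item: above this boundary, no Options.
#[derive(Debug, PartialEq)]
struct PipelineItem {
    story_id: String,
    stage: Stage,
}

#[derive(Debug, PartialEq)]
enum ProjectionError {
    MissingField(&'static str),
    UnknownStage(String),
}

impl TryFrom<&PipelineItemCrdt> for PipelineItem {
    type Error = ProjectionError;

    // Validate the loose view exactly once, at the projection boundary.
    fn try_from(raw: &PipelineItemCrdt) -> Result<Self, Self::Error> {
        let story_id = raw
            .story_id
            .clone()
            .ok_or(ProjectionError::MissingField("story_id"))?;
        let stage = match raw.stage.as_deref() {
            Some("backlog") => Stage::Backlog,
            Some("current") => Stage::Current,
            Some("qa") => Stage::Qa,
            Some(other) => return Err(ProjectionError::UnknownStage(other.to_string())),
            None => return Err(ProjectionError::MissingField("stage")),
        };
        Ok(PipelineItem { story_id, stage })
    }
}
```

Everything downstream of this boundary (materialiser, renderer, web UI) consumes `PipelineItem` and never sees an `Option`.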

## What's currently weird / broken in the running system

- **`timers.json` keeps getting re-populated** even after we empty it. The cause: stopping an agent triggers the agent's exit handler, which calls the rate-limit auto-resume scheduler, which writes to `timers.json`. Bug 501 should cover this, but it may need to be explicit about the stop-agent code path.
- **Chat commands can't find stories that have no filesystem shadow.** Bug 512. Workaround: use MCP `move_story` / `delete_story` / etc. directly, NOT the web UI chat commands.
- **The web UI shows stale state** for some stories because the API reads from the in-memory CRDT view, which can diverge from `pipeline_items`. This will be fixed naturally by 520 + 517 (single source of truth).
- **`create_worktree` always creates from master** — an intentional design choice ("keep conflicts low"), but it means an existing feature branch's work can't be reused. Bit us with 478 today.
- **Mergemaster's `merge_agent_work` exits silently** when there are no commits ahead of master — we lost ~$0.82 to one such session today. Bug 519 + the typed `NonZeroU32` constraint in story 520 will make this unrepresentable.

## Useful diagnostic recipes from today

- **View persisted CRDT ops:** `sqlite3 .huskies/pipeline.db "SELECT seq, substr(op_json, 1, 200) FROM crdt_ops ORDER BY seq DESC LIMIT 20"`
- **View in-memory CRDT pipeline state:** call `mcp__huskies__get_pipeline_status` (it goes through `crdt_state::read_all_items()`)
- **Tail server log filtered for bug 502 firings:** `tail -f .huskies/logs/server.log | grep --line-buffered "Failed to start mergemaster"`
- **Tail server log without `[pty-debug]` noise:** `tail -f .huskies/logs/server.log | grep -v "\[pty-debug\]"`
- **Check current pending timers:** `cat .huskies/timers.json`
- **Forensically delete a story across all four state stores:** stop agents → remove worktree → empty timers → `DELETE FROM pipeline_items WHERE id LIKE '<id>%'` → `DELETE FROM crdt_ops WHERE op_json LIKE '%<id>%'`

## Token cost accounting

This session burned roughly **$15-25** in agent thrash, mostly from bugs 501 and 510 respawning agents on already-completed stories. Once 511 + 510 + 501 are fixed, that bleed disappears.

## Open questions for the next session

1. **Should `ExecutionState` live in the same CRDT or a separate one?** We tentatively said the same CRDT under per-node-pubkey keys. Need to validate this against the bft-json-crdt library's actual capabilities.
2. **Heartbeat cadence?** How often should `last_heartbeat` be updated for `ExecutionState::Running`? Every 30s seems reasonable, but it should be configurable.
3. **What's the migration path from existing `pipeline_items` rows to typed `PipelineItem`s?** A one-time migration script, or a rebuild from `crdt_ops`?
4. **Should we add `statig` after all?** Probably not for the initial implementation, but worth revisiting if we end up wanting hierarchical states (e.g., a `Working` superstate sharing transitions across active stages).
+36 -66
@@ -5,8 +5,8 @@ role = "Full-stack engineer. Implements features across all components."
model = "sonnet"
max_turns = 50
max_budget_usd = 5.00
prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .story_kit/README.md to understand the dev process. The story details are in your prompt above. Follow the SDTW process through implementation and verification (Steps 1-3). The worktree and feature branch already exist - do not create them. Check .mcp.json for MCP tools. Do NOT accept the story or merge - commit your work and stop. If the user asks to review your changes, tell them to run: cd \"{{worktree_path}}\" && git difftool {{base_branch}}...HEAD\n\nIMPORTANT: Commit all your work before your process exits. The server will automatically run acceptance gates when your process exits and advance the pipeline based on the results. To verify before committing, use the run_tests MCP tool (it starts tests in the background — poll get_test_result to check completion) — never run script/test or cargo test directly via Bash.\n\n## Acceptance Criteria Tracking\nAs you complete each acceptance criterion, call the check_criterion MCP tool (story_id, criterion_index) to mark it done. Index 0 is the first unchecked criterion, 1 is the second, etc. Do this as you go — not all at once at the end.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix. Do NOT explore git history, grep the whole codebase, or re-investigate the root cause when the story already tells you what to do.\n2. If the story does NOT specify the exact location, THEN investigate: use targeted grep to find the relevant code.\n3. Fix with a surgical, minimal change. Do NOT add new abstractions or workarounds.\n4. Commit early. If you've made the fix and tests pass, commit and exit. Do not spend turns verifying that master also has the same failures — that wastes budget.\n5. Write commit messages that explain what broke and why."
system_prompt = "You are a full-stack engineer working autonomously in a git worktree. Follow the Story-Driven Test Workflow strictly. Use the run_tests MCP tool to verify your changes pass — it starts tests in the background, then poll get_test_result to check completion. Never run script/test or cargo test directly via Bash. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Commit all your work before finishing - use a descriptive commit message. Do not accept stories, move them to archived, or merge to master - a human will do that. Do not coordinate with other agents - focus on your assigned story. The server automatically runs acceptance gates when your process exits. For bugs, trust the story description — if it specifies exact files and functions, go directly there. Do not explore git history or grep the whole codebase when the story already tells you where to look. Make surgical fixes, commit early."
prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .huskies/README.md for the dev process, .huskies/specs/00_CONTEXT.md for what this project does, and .huskies/specs/tech/STACK.md for the tech stack and source map. The story details are in your prompt above. The worktree and feature branch already exist - do not create them.\n\n## Your workflow\n1. Read the story and understand the acceptance criteria.\n2. Implement the changes.\n3. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done.\n4. Run the run_tests MCP tool. It blocks until tests complete and returns the results.\n5. If tests fail, fix the failures and run run_tests again. Do not commit until tests pass.\n6. Once tests pass, commit your work with a descriptive message and exit.\n\nDo NOT accept stories, move them between stages, or merge to master. The server handles all of that after you exit.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix.\n2. If the story does NOT specify the exact location, investigate with targeted grep.\n3. Fix with a surgical, minimal change.\n4. Run tests, fix failures, commit and exit.\n5. Write commit messages that explain what broke and why."
system_prompt = "You are a full-stack engineer working autonomously in a git worktree. Always run the run_tests MCP tool before committing — do not commit until tests pass. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Do not accept stories, move them between stages, or merge to master — the server handles that. For bugs, trust the story description and make surgical fixes."

[[agent]]
name = "coder-2"
@@ -15,8 +15,8 @@ role = "Full-stack engineer. Implements features across all components."
model = "sonnet"
max_turns = 50
max_budget_usd = 5.00
prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .story_kit/README.md to understand the dev process. The story details are in your prompt above. Follow the SDTW process through implementation and verification (Steps 1-3). The worktree and feature branch already exist - do not create them. Check .mcp.json for MCP tools. Do NOT accept the story or merge - commit your work and stop. If the user asks to review your changes, tell them to run: cd \"{{worktree_path}}\" && git difftool {{base_branch}}...HEAD\n\nIMPORTANT: Commit all your work before your process exits. The server will automatically run acceptance gates when your process exits and advance the pipeline based on the results. To verify before committing, use the run_tests MCP tool (it starts tests in the background — poll get_test_result to check completion) — never run script/test or cargo test directly via Bash.\n\n## Acceptance Criteria Tracking\nAs you complete each acceptance criterion, call the check_criterion MCP tool (story_id, criterion_index) to mark it done. Index 0 is the first unchecked criterion, 1 is the second, etc. Do this as you go — not all at once at the end.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix. Do NOT explore git history, grep the whole codebase, or re-investigate the root cause when the story already tells you what to do.\n2. If the story does NOT specify the exact location, THEN investigate: use targeted grep to find the relevant code.\n3. Fix with a surgical, minimal change. Do NOT add new abstractions or workarounds.\n4. Commit early. If you've made the fix and tests pass, commit and exit. Do not spend turns verifying that master also has the same failures — that wastes budget.\n5. Write commit messages that explain what broke and why."
system_prompt = "You are a full-stack engineer working autonomously in a git worktree. Follow the Story-Driven Test Workflow strictly. Use the run_tests MCP tool to verify your changes pass — it starts tests in the background, then poll get_test_result to check completion. Never run script/test or cargo test directly via Bash. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Commit all your work before finishing - use a descriptive commit message. Do not accept stories, move them to archived, or merge to master - a human will do that. Do not coordinate with other agents - focus on your assigned story. The server automatically runs acceptance gates when your process exits. For bugs, trust the story description — if it specifies exact files and functions, go directly there. Do not explore git history or grep the whole codebase when the story already tells you where to look. Make surgical fixes, commit early."
prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .huskies/README.md for the dev process, .huskies/specs/00_CONTEXT.md for what this project does, and .huskies/specs/tech/STACK.md for the tech stack and source map. The story details are in your prompt above. The worktree and feature branch already exist - do not create them.\n\n## Your workflow\n1. Read the story and understand the acceptance criteria.\n2. Implement the changes.\n3. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done.\n4. Run the run_tests MCP tool. It blocks until tests complete and returns the results.\n5. If tests fail, fix the failures and run run_tests again. Do not commit until tests pass.\n6. Once tests pass, commit your work with a descriptive message and exit.\n\nDo NOT accept stories, move them between stages, or merge to master. The server handles all of that after you exit.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix.\n2. If the story does NOT specify the exact location, investigate with targeted grep.\n3. Fix with a surgical, minimal change.\n4. Run tests, fix failures, commit and exit.\n5. Write commit messages that explain what broke and why."
system_prompt = "You are a full-stack engineer working autonomously in a git worktree. Always run the run_tests MCP tool before committing — do not commit until tests pass. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Do not accept stories, move them between stages, or merge to master — the server handles that. For bugs, trust the story description and make surgical fixes."

[[agent]]
name = "coder-3"
@@ -25,8 +25,8 @@ role = "Full-stack engineer. Implements features across all components."
model = "sonnet"
max_turns = 50
max_budget_usd = 5.00
prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .story_kit/README.md to understand the dev process. The story details are in your prompt above. Follow the SDTW process through implementation and verification (Steps 1-3). The worktree and feature branch already exist - do not create them. Check .mcp.json for MCP tools. Do NOT accept the story or merge - commit your work and stop. If the user asks to review your changes, tell them to run: cd \"{{worktree_path}}\" && git difftool {{base_branch}}...HEAD\n\nIMPORTANT: Commit all your work before your process exits. The server will automatically run acceptance gates when your process exits and advance the pipeline based on the results. To verify before committing, use the run_tests MCP tool (it starts tests in the background — poll get_test_result to check completion) — never run script/test or cargo test directly via Bash.\n\n## Acceptance Criteria Tracking\nAs you complete each acceptance criterion, call the check_criterion MCP tool (story_id, criterion_index) to mark it done. Index 0 is the first unchecked criterion, 1 is the second, etc. Do this as you go — not all at once at the end.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix. Do NOT explore git history, grep the whole codebase, or re-investigate the root cause when the story already tells you what to do.\n2. If the story does NOT specify the exact location, THEN investigate: use targeted grep to find the relevant code.\n3. Fix with a surgical, minimal change. Do NOT add new abstractions or workarounds.\n4. Commit early. If you've made the fix and tests pass, commit and exit. Do not spend turns verifying that master also has the same failures — that wastes budget.\n5. Write commit messages that explain what broke and why."
system_prompt = "You are a full-stack engineer working autonomously in a git worktree. Follow the Story-Driven Test Workflow strictly. Use the run_tests MCP tool to verify your changes pass — it starts tests in the background, then poll get_test_result to check completion. Never run script/test or cargo test directly via Bash. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Commit all your work before finishing - use a descriptive commit message. Do not accept stories, move them to archived, or merge to master - a human will do that. Do not coordinate with other agents - focus on your assigned story. The server automatically runs acceptance gates when your process exits. For bugs, trust the story description — if it specifies exact files and functions, go directly there. Do not explore git history or grep the whole codebase when the story already tells you where to look. Make surgical fixes, commit early."
prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .huskies/README.md for the dev process, .huskies/specs/00_CONTEXT.md for what this project does, and .huskies/specs/tech/STACK.md for the tech stack and source map. The story details are in your prompt above. The worktree and feature branch already exist - do not create them.\n\n## Your workflow\n1. Read the story and understand the acceptance criteria.\n2. Implement the changes.\n3. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done.\n4. Run the run_tests MCP tool. It blocks until tests complete and returns the results.\n5. If tests fail, fix the failures and run run_tests again. Do not commit until tests pass.\n6. Once tests pass, commit your work with a descriptive message and exit.\n\nDo NOT accept stories, move them between stages, or merge to master. The server handles all of that after you exit.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix.\n2. If the story does NOT specify the exact location, investigate with targeted grep.\n3. Fix with a surgical, minimal change.\n4. Run tests, fix failures, commit and exit.\n5. Write commit messages that explain what broke and why."
system_prompt = "You are a full-stack engineer working autonomously in a git worktree. Always run the run_tests MCP tool before committing — do not commit until tests pass. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Do not accept stories, move them between stages, or merge to master — the server handles that. For bugs, trust the story description and make surgical fixes."

[[agent]]
name = "qa-2"
@@ -37,7 +37,7 @@ max_turns = 40
max_budget_usd = 4.00
prompt = """You are the QA agent for story {{story_id}}. Your job is to verify the coder's work satisfies the story's acceptance criteria and produce a structured QA report.

Read CLAUDE.md first, then .story_kit/README.md to understand the dev process.
Read CLAUDE.md first, then .huskies/README.md for the dev process, .huskies/specs/00_CONTEXT.md for what this project does, and .huskies/specs/tech/STACK.md for the tech stack and source map.

## Your Workflow

@@ -48,7 +48,7 @@ Read CLAUDE.md first, then .story_kit/README.md to understand the dev process.

### 1. Deterministic Gates (Prerequisites)
Run these first — if any fail, reject immediately without proceeding to AC review:
- Call the `run_tests` MCP tool to start tests, then poll `get_test_result` until complete — all gates must pass (0 lint errors/warnings, all tests green, frontend build clean if applicable). Do NOT run script/test via Bash.
- Call the `run_tests` MCP tool — it blocks until complete. All gates must pass (0 lint errors/warnings, all tests green, frontend build clean if applicable).

### 2. Code Change Review
- Run `git diff master...HEAD --stat` to see what files changed
@@ -72,7 +72,7 @@ An AC fails if:
- A test exists but doesn't actually assert the behaviour described

### 4. Manual Testing Support (only if all gates PASS and all ACs PASS)
- Build: run `script/build` and note success/failure
- Build: run `run_build` MCP tool and note success/failure
- If build succeeds: find a free port (try 3010-3020), set `HUSKIES_PORT=<port>` and start the server with `script/server`
- Generate a testing plan including:
  - URL to visit in the browser
@@ -126,8 +126,8 @@ role = "Senior full-stack engineer for complex tasks. Implements features across
model = "opus"
max_turns = 80
max_budget_usd = 20.00
prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .story_kit/README.md to understand the dev process. The story details are in your prompt above. Follow the SDTW process through implementation and verification (Steps 1-3). The worktree and feature branch already exist - do not create them. Check .mcp.json for MCP tools. Do NOT accept the story or merge - commit your work and stop. If the user asks to review your changes, tell them to run: cd \"{{worktree_path}}\" && git difftool {{base_branch}}...HEAD\n\nIMPORTANT: Commit all your work before your process exits. The server will automatically run acceptance gates when your process exits and advance the pipeline based on the results. To verify before committing, use the run_tests MCP tool (it starts tests in the background — poll get_test_result to check completion) — never run script/test or cargo test directly via Bash.\n\n## Acceptance Criteria Tracking\nAs you complete each acceptance criterion, call the check_criterion MCP tool (story_id, criterion_index) to mark it done. Index 0 is the first unchecked criterion, 1 is the second, etc. Do this as you go — not all at once at the end.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix. Do NOT explore git history, grep the whole codebase, or re-investigate the root cause when the story already tells you what to do.\n2. If the story does NOT specify the exact location, THEN investigate: use targeted grep to find the relevant code.\n3. Fix with a surgical, minimal change. Do NOT add new abstractions or workarounds.\n4. Commit early. If you've made the fix and tests pass, commit and exit. Do not spend turns verifying that master also has the same failures — that wastes budget.\n5. Write commit messages that explain what broke and why."
system_prompt = "You are a senior full-stack engineer working autonomously in a git worktree. You handle complex tasks requiring deep architectural understanding. Follow the Story-Driven Test Workflow strictly. Use the run_tests MCP tool to verify your changes pass — it starts tests in the background, then poll get_test_result to check completion. Never run script/test or cargo test directly via Bash. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Commit all your work before finishing - use a descriptive commit message. Do not accept stories, move them to archived, or merge to master - a human will do that. Do not coordinate with other agents - focus on your assigned story. The server automatically runs acceptance gates when your process exits. For bugs, trust the story description — if it specifies exact files and functions, go directly there. Do not explore git history or grep the whole codebase when the story already tells you where to look. Make surgical fixes, commit early."
prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .huskies/README.md for the dev process, .huskies/specs/00_CONTEXT.md for what this project does, and .huskies/specs/tech/STACK.md for the tech stack and source map. The story details are in your prompt above. The worktree and feature branch already exist - do not create them.\n\n## Your workflow\n1. Read the story and understand the acceptance criteria.\n2. Implement the changes.\n3. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done.\n4. Run the run_tests MCP tool. It blocks until tests complete and returns the results.\n5. If tests fail, fix the failures and run run_tests again. Do not commit until tests pass.\n6. Once tests pass, commit your work with a descriptive message and exit.\n\nDo NOT accept stories, move them between stages, or merge to master. The server handles all of that after you exit.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix.\n2. If the story does NOT specify the exact location, investigate with targeted grep.\n3. Fix with a surgical, minimal change.\n4. Run tests, fix failures, commit and exit.\n5. Write commit messages that explain what broke and why."
system_prompt = "You are a senior full-stack engineer working autonomously in a git worktree. You handle complex tasks requiring deep architectural understanding. Always run the run_tests MCP tool before committing — do not commit until tests pass. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Do not accept stories, move them between stages, or merge to master — the server handles that. For bugs, trust the story description and make surgical fixes."

[[agent]]
name = "qa"
@@ -138,7 +138,7 @@ max_turns = 40
max_budget_usd = 4.00
prompt = """You are the QA agent for story {{story_id}}. Your job is to verify the coder's work satisfies the story's acceptance criteria and produce a structured QA report.

Read CLAUDE.md first, then .story_kit/README.md to understand the dev process.
Read CLAUDE.md first, then .huskies/README.md for the dev process, .huskies/specs/00_CONTEXT.md for what this project does, and .huskies/specs/tech/STACK.md for the tech stack and source map.

## Your Workflow

@@ -149,7 +149,7 @@ Read CLAUDE.md first, then .story_kit/README.md to understand the dev process.

### 1. Deterministic Gates (Prerequisites)
Run these first — if any fail, reject immediately without proceeding to AC review:
- Call the `run_tests` MCP tool to start tests, then poll `get_test_result` until complete — all gates must pass (0 lint errors/warnings, all tests green, frontend build clean if applicable). Do NOT run script/test via Bash.
- Call the `run_tests` MCP tool — it blocks until complete. All gates must pass (0 lint errors/warnings, all tests green, frontend build clean if applicable).

### 2. Code Change Review
- Run `git diff master...HEAD --stat` to see what files changed
@@ -173,7 +173,7 @@ An AC fails if:
- A test exists but doesn't actually assert the behaviour described

### 4. Manual Testing Support (only if all gates PASS and all ACs PASS)
- Build: run `script/build` and note success/failure
- Build: run `run_build` MCP tool and note success/failure
- If build succeeds: find a free port (try 3010-3020), set `HUSKIES_PORT=<port>` and start the server with `script/server`
- Generate a testing plan including:
  - URL to visit in the browser
@@ -229,62 +229,32 @@ max_turns = 30
max_budget_usd = 5.00
prompt = """You are the mergemaster agent for story {{story_id}}. Your job is to merge the completed coder work into master.

Read CLAUDE.md first, then .story_kit/README.md to understand the dev process.
Read CLAUDE.md first, then .huskies/README.md for the dev process, .huskies/specs/00_CONTEXT.md for what this project does, and .huskies/specs/tech/STACK.md for the tech stack and source map.

## Your Workflow
1. Call merge_agent_work(story_id='{{story_id}}') — this blocks until the merge completes and returns the result. Do NOT poll get_merge_status.
2. Review the result: check success, had_conflicts, conflicts_resolved, gates_passed, and gate_output
3. If merge succeeded and gates passed: report success to the human
4. If conflicts were auto-resolved (conflicts_resolved=true) and gates passed: report success, noting which conflicts were resolved
5. If conflicts could not be auto-resolved: **resolve them yourself** in the merge worktree (see below)
6. If merge failed for any other reason: call report_merge_failure(story_id='{{story_id}}', reason='<details>') and report to the human
7. If gates failed after merge: attempt to fix the issues yourself in the merge worktree, then re-trigger merge_agent_work. After 3 fix attempts, call report_merge_failure and stop.

## Resolving Complex Conflicts Yourself

When the auto-resolver fails, you have access to the merge worktree at `.story_kit/merge_workspace/`. Go in there and resolve the conflicts manually:

1. Run `git diff --name-only --diff-filter=U` in the merge worktree to list conflicted files
2. **Build context before touching code.** Run `git log --oneline master...HEAD` on the feature branch to see its commits. Then run `git log --oneline --since="$(git log -1 --format=%ci <feature-branch-base-commit>)" master` to see what landed on master since the branch was created. Read the story files in `.story_kit/work/` for any recently merged stories that touch the same files — this tells you WHY master changed and what must be preserved.
3. Read each conflicted file and understand both sides of the conflict
4. **Understand intent, not just syntax.** The feature branch may be behind master — master's version of shared infrastructure is almost always correct. The feature branch's contribution is the NEW functionality it adds. Your job is to integrate the new into master's structure, not pick one side.
5. Resolve by integrating the feature's new functionality into master's code structure
6. Stage resolved files with `git add`
7. Call the `run_tests` MCP tool to start tests, then poll `get_test_result` until complete
8. If it compiles, commit and re-trigger merge_agent_work

### Common conflict patterns:

**Story file rename/rename conflicts:** Both branches moved the story .md file to different pipeline directories. Resolution: `git rm` both sides — story files in pipeline directories are gitignored and don't need to be committed.

**Duplicate functions/imports:** The auto-resolver keeps both sides, producing duplicates. Resolution: keep one copy (prefer master's version), delete the duplicate.

**Formatting-only conflicts:** Both sides reformatted the same code differently. Resolution: pick either side (prefer master).

**IMPORTANT: After resolving ANY conflict or fixing ANY gate failure in the merge workspace, use the `run_lint` MCP tool to check formatting, then `run_tests` to verify everything passes before recommitting.** The auto-resolver frequently produces code that compiles but fails formatting or linting checks.
1. Call merge_agent_work(story_id='{{story_id}}'). It blocks until the merge completes and returns the full result.
2. If success and gates passed: you're done. Exit.
3. If gates failed: read the gate_output carefully, fix the issues in the merge workspace at `.huskies/merge_workspace/`, run run_tests MCP tool to verify, recommit, and call merge_agent_work again.
4. If merge failed for any other reason: call report_merge_failure(story_id='{{story_id}}', reason='<details>') and exit.
5. After 3 failed fix attempts, call report_merge_failure and exit.

## Fixing Gate Failures

If quality gates fail, attempt to fix issues yourself in the merge workspace. Use the run_tests MCP tool to verify before recommitting.
The auto-resolver often produces broken code. Common problems:
- Duplicate imports or definitions (kept both sides)
- Formatting issues (import ordering, line breaks)
- Unclosed delimiters from bad conflict resolution
- Type mismatches from incompatible merge of both sides

**Fix yourself (up to 3 attempts total):**
- Syntax errors
- Duplicate definitions from merge artifacts
- Unused import warnings
- Formatting issues that block linting
To fix:
1. Read the broken files in `.huskies/merge_workspace/`
2. Fix the issues — prefer master's structure, integrate only the feature's new code
3. Run run_lint MCP tool to check formatting
4. Run run_tests MCP tool to verify everything passes
5. Commit the fix and call merge_agent_work again

**Report to human without attempting a fix:**
- Logic errors or incorrect business logic
- Missing function implementations
- Architectural changes required

**Max retry limit:** If gates still fail after 3 fix attempts, call report_merge_failure to record the failure, then stop immediately and report the full gate output to the human.

## CRITICAL Rules
- NEVER manually move story files between pipeline stages (e.g. from 4_merge/ to 5_done/)
- NEVER call accept_story — only merge_agent_work can move stories to done after a successful merge
- When merge fails after exhausting your fix attempts, ALWAYS call report_merge_failure
- Report conflict resolution outcomes clearly
- Report gate failures with full output so the human can act if needed
- The server automatically runs acceptance gates when your process exits"""
system_prompt = "You are the mergemaster agent. Your primary job is to merge feature branches to master. First try the merge_agent_work MCP tool. If the auto-resolver fails on complex conflicts, resolve them yourself in the merge workspace. Common patterns: discard story file rename conflicts (gitignored), remove duplicate definitions/imports. After resolving, verify with run_tests MCP tool before re-triggering merge. CRITICAL: Never manually move story files or call accept_story. After 3 failed fix attempts, call report_merge_failure and stop."
## Rules
- NEVER manually move story files between pipeline stages
- NEVER call accept_story — merge_agent_work handles that
- ALWAYS call report_merge_failure if you can't fix the merge"""
system_prompt = "You are the mergemaster agent. Call merge_agent_work to merge. If gates fail, fix the issues in the merge workspace, verify with run_lint and run_tests MCP tools, recommit, and retrigger. After 3 failed attempts, call report_merge_failure and exit. Never move story files or call accept_story."
+112
-112
@@ -1,130 +1,130 @@
# Tech Stack & Constraints
# Tech Stack

## Overview
This project is a standalone Rust **web server binary** that serves a Vite/React frontend and exposes a **WebSocket API**. The built frontend assets are packaged with the binary (in a `frontend` directory) and served as static files. It functions as an **Agentic Code Assistant** capable of safely executing tools on the host system.
## Backend
- **Language:** Rust
- **Framework:** Poem (HTTP + WebSocket + OpenAPI)
- **Database:** SQLite via sqlx + rusqlite
- **State:** BFT CRDT replicated document backed by SQLite
- **Agents:** Claude Code CLI spawned in PTY pseudo-terminals
- **Package manager:** cargo

## Core Stack
* **Backend:** Rust (Web Server)
* **MSRV:** Stable (latest)
* **Framework:** Poem HTTP server with WebSocket support for streaming; HTTP APIs should use Poem OpenAPI (Swagger) for non-streaming endpoints.
* **Frontend:** TypeScript + React
* **Build Tool:** Vite
* **Package Manager:** npm
* **Styling:** CSS Modules or Tailwind (TBD - Defaulting to CSS Modules)
* **State Management:** React Context / Hooks
* **Chat UI:** Rendered Markdown with syntax highlighting.
## Frontend
- **Language:** TypeScript + React
- **Build:** Vite
- **Package manager:** npm
- **Testing:** Vitest (unit), Playwright (e2e)

## Agent Architecture
The application follows a **Tool-Use (Function Calling)** architecture:
1. **Frontend:** Collects user input and sends it to the LLM.
2. **LLM:** Decides to generate text OR request a **Tool Call** (e.g., `execute_shell`, `read_file`).
3. **Web Server Backend (The "Hand"):**
   * Intercepts Tool Calls.
   * Validates the request against the **Safety Policy**.
   * Executes the native code (File I/O, Shell Process, Search).
   * Returns the output (stdout/stderr/file content) to the LLM.
   * **Streaming:** The backend sends real-time updates over WebSocket to keep the UI responsive during long-running Agent tasks.
## Deployment
- Single Rust binary with embedded React frontend (rust-embed)
- Three modes: standard server, headless build agent (`--rendezvous`), multi-project gateway (`--gateway`)
- Docker container with OrbStack recommended on macOS

## LLM Provider Abstraction
To support both Remote and Local models, the system implements a `ModelProvider` abstraction layer.
## Project Layout
```
server/src/ — Rust backend
frontend/src/ — React frontend
crates/bft-json-crdt/ — CRDT library
.huskies/ — Pipeline config, agent config, specs
script/ — test, build, lint scripts
docker/ — Dockerfile and docker-compose
website/ — Static marketing/docs site
```

* **Strategy:**
  * Abstract the differences between API formats (OpenAI-compatible vs Anthropic vs Gemini).
  * Normalize "Tool Use" definitions, as each provider handles function calling schemas differently.
* **Supported Providers:**
  * **Ollama:** Local inference (e.g., Llama 3, DeepSeek Coder) for privacy and offline usage.
  * **Anthropic:** Claude 3.5 models (Sonnet, Haiku) via API for coding tasks (Story 12).
* **Provider Selection:**
  * Automatic detection based on model name prefix:
    * `claude-` → Anthropic API
    * Otherwise → Ollama
  * Single unified model dropdown with section headers ("Anthropic", "Ollama")
* **API Key Management:**
  * Anthropic API key stored server-side and persisted securely
  * On first use of Claude model, user prompted to enter API key
  * Key persists across sessions (no re-entry needed)
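
The prefix-based detection above can be sketched in a few lines. This is an illustrative sketch only — `Provider` and `select_provider` are hypothetical names, not the real `ModelProvider` API:

```rust
// Hypothetical sketch of the model-name prefix rule described above.
// `Provider` and `select_provider` are illustrative, not the real API.
#[derive(Debug, PartialEq)]
enum Provider {
    Anthropic,
    Ollama,
}

fn select_provider(model: &str) -> Provider {
    // `claude-` prefix → Anthropic API; anything else falls back to Ollama.
    if model.starts_with("claude-") {
        Provider::Anthropic
    } else {
        Provider::Ollama
    }
}
```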
## Source Map

## Tooling Capabilities
### Core

### 1. Filesystem (Native)
* **Scope:** Strictly limited to the user-selected `project_root`.
* **Operations:** Read, Write, List, Delete.
* **Constraint:** Modifications to `.git/` are strictly forbidden via file APIs (use Git tools instead).
| File | Description |
|------|-------------|
| `server/src/main.rs` | Entry point, CLI argument parsing, and server startup |
| `server/src/config.rs` | Parses `project.toml` for agents, components, and server settings |
| `server/src/state.rs` | Global mutable session state (project root, cancellation) |
| `server/src/store.rs` | JSON-backed persistent key-value store for settings |
| `server/src/gateway.rs` | Multi-project gateway mode (MCP proxy, project switching, agent registration) |

### 2. Shell Execution
* **Library:** `tokio::process` for async execution.
* **Constraint:** We do **not** run an interactive shell (repl). We run discrete, stateless commands.
* **Allowlist:** The agent may only execute specific binaries:
  * `git`
  * `cargo`, `rustc`, `rustfmt`, `clippy`
  * `npm`, `node`, `yarn`, `pnpm`, `bun`
  * `ls`, `find`, `grep` (if not using internal search)
  * `mkdir`, `rm`, `touch`, `mv`, `cp`
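
The allowlist above amounts to a membership check before spawning a process. A minimal sketch, assuming a flat list of binary names (the constant and function names here are hypothetical, not the actual policy code):

```rust
// Hypothetical sketch of the binary allowlist check described above.
const ALLOWED_BINARIES: &[&str] = &[
    "git",
    "cargo", "rustc", "rustfmt", "clippy",
    "npm", "node", "yarn", "pnpm", "bun",
    "ls", "find", "grep",
    "mkdir", "rm", "touch", "mv", "cp",
];

fn is_allowed(binary: &str) -> bool {
    // Reject anything not explicitly listed; no pattern matching, exact names only.
    ALLOWED_BINARIES.contains(&binary)
}
```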

### Agents

### 3. Search & Navigation
* **Library:** `ignore` (by BurntSushi) + `grep` logic.
* **Behavior:**
  * Must respect `.gitignore` files automatically.
  * Must be performant (parallel traversal).
| File | Description |
|------|-------------|
| `server/src/agents/mod.rs` | Types, configuration, and orchestration for coding agents |
| `server/src/agents/gates.rs` | Runs test suites and validation scripts in agent worktrees |
| `server/src/agents/lifecycle.rs` | File creation, archival, and stage transitions for pipeline items |
| `server/src/agents/merge.rs` | Rebases agent work onto master and runs post-merge validation |
| `server/src/agents/pty.rs` | Spawns agent processes in pseudo-terminals and streams output |
| `server/src/agents/token_usage.rs` | Persists per-agent token consumption records to disk |
| `server/src/agent_log.rs` | Reads and writes JSONL agent event logs to disk |
| `server/src/agent_mode.rs` | Headless build-agent mode for distributed story processing |

## Coding Standards
### Agent Pool

### Rust
* **Style:** `rustfmt` standard.
* **Linter:** `clippy` - Must pass with 0 warnings before merging.
* **Error Handling:** Custom `AppError` type deriving `thiserror`. All Commands return `Result<T, AppError>`.
* **Concurrency:** Heavy tools (Search, Shell) must run on `tokio` threads to avoid blocking the UI.
* **Quality Gates:**
  * `cargo clippy --all-targets --all-features` must show 0 errors, 0 warnings
  * `cargo check` must succeed
  * `cargo nextest run` must pass all tests
* **Test Coverage:**
  * Generate JSON report: `cargo llvm-cov nextest --no-clean --json --output-path .story_kit/coverage/server.json`
  * Generate lcov report: `cargo llvm-cov report --lcov --output-path .story_kit/coverage/server.lcov`
  * Reports are written to `.story_kit/coverage/` (excluded from git)
| File | Description |
|------|-------------|
| `server/src/agents/pool/mod.rs` | Manages the set of active agents across all pipeline stages |
| `server/src/agents/pool/start.rs` | Spawns a new agent process in a worktree for a story |
| `server/src/agents/pool/stop.rs` | Terminates a running agent while preserving its worktree |
| `server/src/agents/pool/pipeline/advance.rs` | Moves stories forward through pipeline stages |
| `server/src/agents/pool/pipeline/completion.rs` | Processes exit results and triggers pipeline advancement |
| `server/src/agents/pool/pipeline/merge.rs` | Orchestrates the merge-to-master flow for completed stories |
| `server/src/agents/pool/auto_assign/auto_assign.rs` | Scans pipeline stages and dispatches agents to unassigned stories |

### TypeScript / React
* **Style:** Biome formatter (replaces Prettier/ESLint).
* **Linter:** Biome - Must pass with 0 errors, 0 warnings before merging.
* **Types:** Shared types with Rust (via `tauri-specta` or manual interface matching) are preferred to ensure type safety across the bridge.
* **Testing:** Vitest for unit/component tests; Playwright for end-to-end tests.
* **Quality Gates:**
  * `npx @biomejs/biome check src/` must show 0 errors, 0 warnings
  * `npm run build` must succeed
  * `npm test` must pass
  * `npm run test:e2e` must pass
  * No `any` types allowed (use proper types or `unknown`)
  * React keys must use stable IDs, not array indices
  * All buttons must have explicit `type` attribute
### CRDT & Database

## Libraries (Approved)
* **Rust:**
  * `serde`, `serde_json`: Serialization.
  * `ignore`: Fast recursive directory iteration respecting gitignore.
  * `walkdir`: Simple directory traversal.
  * `tokio`: Async runtime.
  * `reqwest`: For LLM API calls (Anthropic, Ollama).
  * `eventsource-stream`: For Server-Sent Events (Anthropic streaming).
  * `uuid`: For unique message IDs.
  * `chrono`: For timestamps.
  * `poem`: HTTP server framework.
  * `poem-openapi`: OpenAPI (Swagger) for non-streaming HTTP APIs.
* **JavaScript:**
  * `react-markdown`: For rendering chat responses.
  * `vitest`: Unit/component testing.
  * `playwright`: End-to-end testing.
| File | Description |
|------|-------------|
| `server/src/crdt_state.rs` | Pipeline state as a conflict-free replicated document backed by SQLite |
| `server/src/crdt_sync.rs` | WebSocket-based replication of pipeline state between nodes |
| `server/src/pipeline_state.rs` | Typed pipeline state machine |
| `server/src/db/mod.rs` | Content store, shadow writes, and CRDT op persistence |

## Running the App (Worktrees & Ports)
### HTTP — MCP Tools (the tools agents call)

Multiple instances can run simultaneously in different worktrees. To avoid port conflicts:
| File | Description |
|------|-------------|
| `server/src/http/mcp/mod.rs` | MCP endpoint dispatching tool calls |
| `server/src/http/mcp/agent_tools.rs` | Start, stop, wait, list, and inspect agents |
| `server/src/http/mcp/git_tools.rs` | Status, diff, add, commit, and log on agent worktrees |
| `server/src/http/mcp/merge_tools.rs` | Merge agent work to master and report failures |
| `server/src/http/mcp/shell_tools.rs` | Run commands, execute tests, and stream output |
| `server/src/http/mcp/story_tools.rs` | Create, update, move, and manage stories/bugs/refactors |
| `server/src/http/mcp/diagnostics.rs` | Server logs, CRDT dump, version, and story movement helpers |

- **Backend:** Set `HUSKIES_PORT` to a unique port (default is 3001). Example: `HUSKIES_PORT=3002 cargo run`
- **Frontend:** Run `npm run dev` from `frontend/`. It auto-selects the next unused port. It reads `HUSKIES_PORT` to know which backend to talk to, so export it before running: `export HUSKIES_PORT=3002 && cd frontend && npm run dev`
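
The `HUSKIES_PORT`-with-default behaviour above can be sketched as follows. This is a hypothetical illustration of the convention, not the actual startup code in `server/src/main.rs`:

```rust
// Hypothetical sketch: resolve the listen port from an optional HUSKIES_PORT
// value, falling back to the documented default of 3001. Call as:
//   resolve_port(std::env::var("HUSKIES_PORT").ok().as_deref())
fn resolve_port(env_value: Option<&str>) -> u16 {
    env_value
        .and_then(|value| value.parse().ok())
        .unwrap_or(3001)
}
```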

### Chat — Bot Commands

When running in a worktree, use a port that won't conflict with the main instance (3001). Ports 3002+ are good choices.
| File | Description |
|------|-------------|
| `server/src/chat/commands/mod.rs` | Bot-level command registry shared by all transports |
| `server/src/chat/commands/status.rs` | `status` command and pipeline status helpers |
| `server/src/chat/commands/backlog.rs` | `backlog` command — shows only backlog-stage items |
| `server/src/chat/commands/run_tests.rs` | `run_tests` command — run the project's test suite |

## Safety & Sandbox
1. **Project Scope:** The application must strictly enforce that it does not read/write outside the `project_root` selected by the user.
2. **Human in the Loop:**
   * Shell commands that modify state (non-readonly) should ideally require a UI confirmation (configurable).
   * File writes must be confirmed or revertible.
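
The project-scope rule above boils down to a path-containment check. A minimal sketch, assuming canonicalization is used to defeat `..` segments and symlinks (`within_root` is a hypothetical name, not the real policy function):

```rust
use std::io;
use std::path::Path;

// Hypothetical sketch of the project_root scope check. Canonicalizing both
// paths ensures `..` segments and symlinks cannot escape the selected root.
fn within_root(root: &Path, candidate: &Path) -> io::Result<bool> {
    let root = root.canonicalize()?;
    let candidate = candidate.canonicalize()?;
    Ok(candidate.starts_with(&root))
}
```

Note that both paths must exist for `canonicalize` to succeed; a real implementation would also decide how to treat not-yet-created files.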

### Chat — Transports

| File | Description |
|------|-------------|
| `server/src/chat/transport/matrix/` | Matrix bot integration |
| `server/src/chat/transport/slack/` | Slack bot integration |
| `server/src/chat/transport/whatsapp/` | WhatsApp Business API integration |
| `server/src/chat/transport/discord/` | Discord bot integration |

### Frontend

| Directory | Description |
|-----------|-------------|
| `frontend/src/components/` | React UI components |
| `frontend/src/api/` | API client code (gateway, agents, etc.) |

### Utilities

| File | Description |
|------|-------------|
| `server/src/rebuild.rs` | Server rebuild and restart logic |
| `server/src/worktree.rs` | Creates, lists, and removes git worktrees for agent isolation |
| `server/src/io/watcher.rs` | Filesystem watcher for `.huskies/work/` and `project.toml` |

## Quality Gates
All enforced by `script/test`:
1. Frontend build (`npm run build`)
2. Rust formatting (`cargo fmt --all --check`)
3. Rust linting (`cargo clippy -- -D warnings`)
4. Rust tests (`cargo test`)
5. Frontend tests (`npm test`)

Generated, +49 −43:

```diff
@@ -229,7 +229,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "94893f1e0c6eeab764ade8dc4c0db24caf4fe7cbbaafc0eba0a9030f447b5185"
 dependencies = [
  "num-traits",
- "rand 0.8.5",
+ "rand 0.8.6",
 ]

 [[package]]
@@ -366,9 +366,9 @@ checksum = "c08606f8c3cbf4ce6ec8e28fb0014a2c086708fe954eaa885384a6165172e7e8"

 [[package]]
 name = "aws-lc-rs"
-version = "1.16.2"
+version = "1.16.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a054912289d18629dc78375ba2c3726a3afe3ff71b4edba9dedfca0e3446d1fc"
+checksum = "0ec6fb3fe69024a75fa7e1bfb48aa6cf59706a101658ea01bfd33b2b248a038f"
 dependencies = [
  "aws-lc-sys",
  "zeroize",
@@ -376,9 +376,9 @@ dependencies = [

 [[package]]
 name = "aws-lc-sys"
-version = "0.39.1"
+version = "0.40.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "83a25cf98105baa966497416dbd42565ce3a8cf8dbfd59803ec9ad46f3126399"
+checksum = "f50037ee5e1e41e7b8f9d161680a725bd1626cb6f8c7e901f91f942850852fe7"
 dependencies = [
  "cc",
  "cmake",
@@ -441,7 +441,7 @@ dependencies = [
  "criterion",
  "fastcrypto",
  "indexmap 2.14.0",
- "rand 0.8.5",
+ "rand 0.8.6",
  "random_color",
  "serde",
  "serde_json",
@@ -1649,7 +1649,7 @@ dependencies = [
  "num-bigint",
  "once_cell",
  "p256",
- "rand 0.8.5",
+ "rand 0.8.6",
  "readonly",
  "rfc6979",
  "rsa 0.8.2",
@@ -2288,7 +2288,7 @@ checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9"

 [[package]]
 name = "huskies"
-version = "0.10.1"
+version = "0.10.4"
 dependencies = [
  "async-stream",
  "async-trait",
@@ -2802,9 +2802,9 @@ dependencies = [

 [[package]]
 name = "konst"
-version = "0.3.16"
+version = "0.3.17"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4381b9b00c55f251f2ebe9473aef7c117e96828def1a7cb3bd3f0f903c6894e9"
+checksum = "97feab15b395d1860944abe6a8dd8ed9f8eadfae01750fada8427abda531d887"
 dependencies = [
  "const_panic",
  "konst_kernel",
@@ -3165,7 +3165,7 @@ dependencies = [
  "js_option",
  "matrix-sdk-common",
  "pbkdf2",
- "rand 0.8.5",
+ "rand 0.8.6",
  "rmp-serde",
  "ruma",
  "serde",
@@ -3255,7 +3255,7 @@ dependencies = [
  "getrandom 0.2.17",
  "hmac",
  "pbkdf2",
- "rand 0.8.5",
+ "rand 0.8.6",
  "rmp-serde",
  "serde",
  "serde_json",
@@ -3509,7 +3509,7 @@ dependencies = [
  "num-integer",
  "num-iter",
  "num-traits",
- "rand 0.8.5",
+ "rand 0.8.6",
  "smallvec",
  "zeroize",
 ]
@@ -3570,7 +3570,7 @@ dependencies = [
  "chrono",
  "getrandom 0.2.17",
  "http",
- "rand 0.8.5",
+ "rand 0.8.6",
  "reqwest 0.12.28",
  "serde",
  "serde_json",
@@ -3726,7 +3726,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "3c80231409c20246a13fddb31776fb942c38553c51e871f8cbd687a4cfb5843d"
 dependencies = [
  "phf_shared 0.11.3",
- "rand 0.8.5",
+ "rand 0.8.6",
 ]

 [[package]]
@@ -4231,9 +4231,9 @@ dependencies = [

 [[package]]
 name = "rand"
-version = "0.8.5"
+version = "0.8.6"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404"
+checksum = "5ca0ecfa931c29007047d1bc58e623ab12e5590e8c7cc53200d5202b69266d8a"
 dependencies = [
  "libc",
  "rand_chacha 0.3.1",
@@ -4693,7 +4693,7 @@ dependencies = [
  "js_int",
  "konst",
  "percent-encoding",
- "rand 0.8.5",
+ "rand 0.8.6",
  "regex",
  "ruma-identifiers-validation",
  "ruma-macros",
@@ -4803,7 +4803,7 @@ dependencies = [
  "base64",
  "ed25519-dalek",
  "pkcs8 0.10.2",
- "rand 0.8.5",
+ "rand 0.8.6",
  "ruma-common",
  "serde_json",
  "sha2 0.10.9",
@@ -4952,9 +4952,9 @@ checksum = "f87165f0995f63a9fbeea62b64d10b4d9d8e78ec6d7d51fb2125fda7bb36788f"

 [[package]]
 name = "rustls-webpki"
-version = "0.103.12"
+version = "0.103.13"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8279bb85272c9f10811ae6a6c547ff594d6a7f3c6c6b02ee9726d1d0dcfcdd06"
+checksum = "61c429a8649f110dddef65e2a5ad240f747e85f7758a6bccc7e5777bd33f756e"
 dependencies = [
  "aws-lc-rs",
  "ring",
@@ -5078,7 +5078,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "25996b82292a7a57ed3508f052cfff8640d38d32018784acd714758b43da9c8f"
 dependencies = [
  "bitcoin_hashes",
- "rand 0.8.5",
+ "rand 0.8.6",
  "secp256k1-sys",
 ]

@@ -5344,9 +5344,9 @@ dependencies = [

 [[package]]
 name = "sha3"
-version = "0.10.8"
+version = "0.10.9"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "75872d278a8f37ef87fa0ddbda7802605cb18344497949862c0d4dcb291eba60"
+checksum = "77fd7028345d415a4034cf8777cd4f8ab1851274233b45f84e3d955502d93874"
 dependencies = [
  "digest 0.10.7",
  "keccak",
@@ -5587,7 +5587,7 @@ dependencies = [
  "md-5",
  "memchr",
  "percent-encoding",
- "rand 0.8.5",
+ "rand 0.8.6",
  "rsa 0.9.10",
  "sha1",
  "sha2 0.10.9",
@@ -5623,7 +5623,7 @@ dependencies = [
  "log",
  "md-5",
  "memchr",
- "rand 0.8.5",
+ "rand 0.8.6",
  "serde",
  "serde_json",
  "sha2 0.10.9",
@@ -5996,9 +5996,9 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"

 [[package]]
 name = "tokio"
-version = "1.51.1"
+version = "1.52.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f66bf9585cda4b724d3e78ab34b73fb2bbaba9011b9bfdf69dc836382ea13b8c"
+checksum = "b67dee974fe86fd92cc45b7a95fdd2f99a36a6d7b0d431a231178d3d670bbcc6"
 dependencies = [
  "bytes",
  "libc",
@@ -6327,15 +6327,15 @@ dependencies = [

 [[package]]
 name = "typenum"
-version = "1.19.0"
+version = "1.20.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "562d481066bde0658276a35467c4af00bdc6ee726305698a55b86e61d7ad82bb"
+checksum = "40ce102ab67701b8526c123c1bab5cbe42d7040ccfd0f64af1a385808d2f43de"

 [[package]]
 name = "typewit"
-version = "1.15.1"
+version = "1.15.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bc19094686c694eb41b3b99dcc2f2975d4b078512fa22ae6c63f7ca318bdcff7"
+checksum = "214ca0b2191785cbc06209b9ca1861e048e39b5ba33574b3cedd58363d5bb5f6"
 dependencies = [
  "typewit_proc_macros",
 ]
@@ -6465,9 +6465,9 @@ checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be"

 [[package]]
 name = "uuid"
-version = "1.23.0"
+version = "1.23.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5ac8b6f42ead25368cf5b098aeb3dc8a1a2c05a3eee8a9a1a68c640edbfc79d9"
+checksum = "ddd74a9687298c6858e9b88ec8935ec45d22e8fd5e6394fa1bd4e99a87789c76"
 dependencies = [
  "getrandom 0.4.2",
  "js-sys",
@@ -6512,7 +6512,7 @@ dependencies = [
  "hmac",
  "matrix-pickle",
  "prost",
- "rand 0.8.5",
+ "rand 0.8.6",
  "serde",
  "serde_bytes",
  "serde_json",
@@ -6580,11 +6580,11 @@ checksum = "ccf3ec651a847eb01de73ccad15eb7d99f80485de043efb2f370cd654f4ea44b"

 [[package]]
 name = "wasip2"
-version = "1.0.2+wasi-0.2.9"
+version = "1.0.3+wasi-0.2.9"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9517f9239f02c069db75e65f174b3da828fe5f5b945c4dd26bd25d89c03ebcf5"
+checksum = "20064672db26d7cdc89c7798c48a0fdfac8213434a1186e5ef29fd560ae223d6"
 dependencies = [
- "wit-bindgen",
+ "wit-bindgen 0.57.1",
 ]

 [[package]]
@@ -6593,7 +6593,7 @@ version = "0.4.0+wasi-0.3.0-rc-2026-01-06"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "5428f8bf88ea5ddc08faddef2ac4a67e390b88186c703ce6dbd955e1c145aca5"
 dependencies = [
- "wit-bindgen",
+ "wit-bindgen 0.51.0",
 ]

 [[package]]
@@ -6770,18 +6770,18 @@ dependencies = [

 [[package]]
 name = "webpki-root-certs"
-version = "1.0.6"
+version = "1.0.7"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "804f18a4ac2676ffb4e8b5b5fa9ae38af06df08162314f96a68d2a363e21a8ca"
+checksum = "f31141ce3fc3e300ae89b78c0dd67f9708061d1d2eda54b8209346fd6be9a92c"
 dependencies = [
  "rustls-pki-types",
 ]

 [[package]]
 name = "webpki-roots"
-version = "1.0.6"
+version = "1.0.7"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "22cfaf3c063993ff62e73cb4311efde4db1efb31ab78a3e5c457939ad5cc0bed"
+checksum = "52f5ee44c96cf55f1b349600768e3ece3a8f26010c05265ab73f945bb1a2eb9d"
 dependencies = [
  "rustls-pki-types",
 ]
@@ -7271,6 +7271,12 @@ dependencies = [
  "wit-bindgen-rust-macro",
 ]

+[[package]]
+name = "wit-bindgen"
+version = "0.57.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1ebf944e87a7c253233ad6766e082e3cd714b5d03812acc24c318f549614536e"
+
 [[package]]
 name = "wit-bindgen-core"
 version = "0.51.0"
```
```diff
@@ -220,283 +220,6 @@ Both return a JSON document with:
 
-## Source Map
-
-### Core
-
-| File | Description |
-|------|-------------|
-| `server/src/main.rs` | Entry point, CLI argument parsing, and server startup |
-| `server/src/config.rs` | Parses `project.toml` for agents, components, and server settings |
-| `server/src/state.rs` | Global mutable session state (project root, cancellation) |
-| `server/src/store.rs` | JSON-backed persistent key-value store for settings |
-
-### Agents
-
-| File | Description |
-|------|-------------|
-| `server/src/agents/mod.rs` | Types, configuration, and orchestration for coding agents |
-| `server/src/agents/gates.rs` | Runs test suites and validation scripts in agent worktrees |
-| `server/src/agents/lifecycle.rs` | File creation, archival, and stage transitions for pipeline items |
-| `server/src/agents/merge.rs` | Rebases agent work onto master and runs post-merge validation |
-| `server/src/agents/pty.rs` | Spawns agent processes in pseudo-terminals and streams output |
-| `server/src/agents/token_usage.rs` | Persists per-agent token consumption records to disk |
-| `server/src/agent_log.rs` | Reads and writes JSONL agent event logs to disk |
-| `server/src/agent_mode.rs` | Headless build-agent mode for distributed story processing |
-
-### Agent Pool
-
-| File | Description |
-|------|-------------|
-| `server/src/agents/pool/mod.rs` | Manages the set of active agents across all pipeline stages |
-| `server/src/agents/pool/types.rs` | `AgentPool`, `StoryAgent`, and related data structures |
-| `server/src/agents/pool/start.rs` | Spawns a new agent process in a worktree for a story |
-| `server/src/agents/pool/stop.rs` | Terminates a running agent while preserving its worktree |
-| `server/src/agents/pool/wait.rs` | Blocks until an agent reaches a terminal state |
-| `server/src/agents/pool/query.rs` | Lists available/active agents and info lookups |
-| `server/src/agents/pool/process.rs` | Kills orphaned PTY child processes on shutdown |
-| `server/src/agents/pool/worktree.rs` | Creates and configures git worktrees for agents |
-| `server/src/agents/pool/test_helpers.rs` | In-memory pool construction and test assertions |
-
-### Agent Pool — Auto-assign
-
-| File | Description |
-|------|-------------|
-| `server/src/agents/pool/auto_assign/mod.rs` | Wires sub-files and re-exports public items |
-| `server/src/agents/pool/auto_assign/auto_assign.rs` | Scans pipeline stages and dispatches agents to unassigned stories |
-| `server/src/agents/pool/auto_assign/reconcile.rs` | Startup reconciliation: detects committed work and advances pipeline |
-| `server/src/agents/pool/auto_assign/scan.rs` | Scans pipeline stages for work items and queries pool state |
-| `server/src/agents/pool/auto_assign/story_checks.rs` | Front-matter checks: review holds, blocked state, merge failures |
-| `server/src/agents/pool/auto_assign/watchdog.rs` | Detects orphaned agents and triggers auto-assign |
-
-### Agent Pool — Pipeline
-
-| File | Description |
-|------|-------------|
-| `server/src/agents/pool/pipeline/mod.rs` | Stage advancement, completion handling, and merge orchestration |
-| `server/src/agents/pool/pipeline/advance.rs` | Moves stories forward through pipeline stages |
-| `server/src/agents/pool/pipeline/completion.rs` | Processes exit results and triggers pipeline advancement |
-| `server/src/agents/pool/pipeline/merge.rs` | Orchestrates the merge-to-master flow for completed stories |
-
-### Agent Runtimes
-
-| File | Description |
-|------|-------------|
-| `server/src/agents/runtime/mod.rs` | Pluggable backends (Claude Code, Gemini, OpenAI) for running agents |
-| `server/src/agents/runtime/claude_code.rs` | Launches Claude Code CLI sessions as agent backends |
-| `server/src/agents/runtime/gemini.rs` | Drives Google Gemini API sessions as agent backends |
-| `server/src/agents/runtime/openai.rs` | Drives OpenAI API sessions as agent backends |
-
-### CRDT
-
-| File | Description |
-|------|-------------|
-| `server/src/crdt_state.rs` | Pipeline state as a conflict-free replicated document backed by SQLite |
-| `server/src/crdt_sync.rs` | WebSocket-based replication of pipeline state between nodes |
-| `server/src/crdt_wire.rs` | Serialization format for `SignedOp` sync messages |
-| `server/src/pipeline_state.rs` | Typed pipeline state machine |
-
-### Database
-
-| File | Description |
-|------|-------------|
-| `server/src/db/mod.rs` | Content store, shadow writes, and CRDT op persistence |
-
-### HTTP Server
-
-| File | Description |
-|------|-------------|
-| `server/src/http/mod.rs` | Module declarations for all REST, MCP, WebSocket, and SSE endpoints |
-| `server/src/http/context.rs` | Shared `AppContext` threaded through all HTTP handlers |
-| `server/src/http/agents.rs` | REST API for listing, starting, stopping, and inspecting agents |
-| `server/src/http/agents_sse.rs` | Server-Sent Events endpoint for real-time agent output |
-| `server/src/http/anthropic.rs` | Proxy for model listing and key-validation to Anthropic |
-| `server/src/http/assets.rs` | Serves the embedded React frontend via `rust-embed` |
-| `server/src/http/bot_command.rs` | Bot command HTTP endpoint |
-| `server/src/http/chat.rs` | REST API for the LLM-powered chat interface |
-| `server/src/http/health.rs` | Returns a static "ok" response |
-| `server/src/http/io.rs` | REST API for file and directory operations |
-| `server/src/http/model.rs` | REST API for model selection and LLM provider management |
-| `server/src/http/oauth.rs` | Anthropic OAuth callback and token exchange flow |
-| `server/src/http/project.rs` | REST API for project initialization and context management |
-| `server/src/http/settings.rs` | REST API for user preferences and editor configuration |
-| `server/src/http/wizard.rs` | REST API for the project setup wizard |
-| `server/src/http/ws.rs` | Real-time pipeline updates, chat, and permission prompts |
-| `server/src/http/test_helpers.rs` | Shared test utilities for HTTP handler tests |
-
-### HTTP — MCP Tools
-
-| File | Description |
-|------|-------------|
-| `server/src/http/mcp/mod.rs` | Model Context Protocol endpoint dispatching tool calls |
-| `server/src/http/mcp/agent_tools.rs` | Start, stop, wait, list, and inspect agents via MCP |
-| `server/src/http/mcp/diagnostics.rs` | Server logs, CRDT dump, and story movement helpers |
-| `server/src/http/mcp/git_tools.rs` | Status, diff, add, commit, and log on agent worktrees |
-| `server/src/http/mcp/merge_tools.rs` | Merge agent work to master and report failures |
-| `server/src/http/mcp/qa_tools.rs` | Request, approve, and reject QA reviews |
-| `server/src/http/mcp/shell_tools.rs` | Run commands, execute tests, and stream output |
-| `server/src/http/mcp/status_tools.rs` | Pipeline status, story triage, and AC inspection |
-| `server/src/http/mcp/story_tools.rs` | Create, update, move, and manage stories/bugs/refactors |
-| `server/src/http/mcp/wizard_tools.rs` | Interactive setup wizard tool implementations |
-
-### HTTP — Workflow
-
-| File | Description |
-|------|-------------|
-| `server/src/http/workflow/mod.rs` | Shared story/bug file operations for HTTP and MCP handlers |
-| `server/src/http/workflow/bug_ops.rs` | Creates bug, refactor, and spike files in the pipeline |
-| `server/src/http/workflow/story_ops.rs` | Creates, updates, and manages acceptance criteria in stories |
-| `server/src/http/workflow/test_results.rs` | Writes structured test results into story markdown |
-
-### I/O
-
-| File | Description |
-|------|-------------|
-| `server/src/io/mod.rs` | Filesystem, shell, search, onboarding, and story metadata operations |
-| `server/src/io/fs/mod.rs` | Module declarations and re-exports for file operations |
-| `server/src/io/fs/files.rs` | Read, write, list, and create files and directories |
-| `server/src/io/fs/paths.rs` | Resolves CLI and session-relative paths to absolute paths |
-| `server/src/io/fs/preferences.rs` | Reads and writes model selection and user settings |
-| `server/src/io/fs/project.rs` | Tracks known projects and resolves the active project root |
-| `server/src/io/fs/scaffold.rs` | Creates the `.huskies/` directory structure and default files |
-| `server/src/io/onboarding.rs` | Checks whether scaffold templates have been customized |
-| `server/src/io/search.rs` | Full-text search across project files |
-| `server/src/io/shell.rs` | Runs commands in the project directory and captures output |
-| `server/src/io/story_metadata.rs` | Parses and modifies YAML front matter in story markdown |
-| `server/src/io/watcher.rs` | Filesystem watcher for `.huskies/work/` and `project.toml` |
-| `server/src/io/wizard.rs` | Multi-step project onboarding flow with per-step status |
-| `server/src/io/test_helpers.rs` | Shared test utilities for I/O module tests |
-
-### Chat
-
-| File | Description |
-|------|-------------|
-| `server/src/chat/mod.rs` | Transport abstraction for chat platforms |
-| `server/src/chat/lookup.rs` | Shared story-lookup helper for chat commands |
-| `server/src/chat/timer.rs` | Deferred agent start via one-shot timers |
-| `server/src/chat/util.rs` | Shared text utilities used by all transports |
-| `server/src/chat/test_helpers.rs` | Shared test utilities for chat handler tests |
-
-### Chat — Commands
-
-| File | Description |
-|------|-------------|
-| `server/src/chat/commands/mod.rs` | Bot-level command registry shared by all transports |
-| `server/src/chat/commands/ambient.rs` | `ambient` command handler |
-| `server/src/chat/commands/assign.rs` | `assign` command handler |
-| `server/src/chat/commands/backlog.rs` | `backlog` command — shows only backlog-stage items |
-| `server/src/chat/commands/cost.rs` | `cost` command handler |
-| `server/src/chat/commands/coverage.rs` | `coverage` command — show or refresh test coverage |
-| `server/src/chat/commands/depends.rs` | `depends` command handler |
-| `server/src/chat/commands/git.rs` | `git` command handler |
-| `server/src/chat/commands/help.rs` | `help` command handler |
-| `server/src/chat/commands/loc.rs` | `loc` command — top source files by line count |
-| `server/src/chat/commands/move_story.rs` | `move` command handler |
-| `server/src/chat/commands/overview.rs` | `overview` command handler |
-| `server/src/chat/commands/run_tests.rs` | `test` command — run the project's test suite |
-| `server/src/chat/commands/setup.rs` | `setup` command handler |
-| `server/src/chat/commands/show.rs` | `show` command handler |
-| `server/src/chat/commands/status.rs` | `status` command and pipeline status helpers |
-| `server/src/chat/commands/timer.rs` | `timer` command handler |
-| `server/src/chat/commands/triage.rs` | Story triage dump subcommand of `status` |
-| `server/src/chat/commands/unblock.rs` | `unblock` command handler |
-| `server/src/chat/commands/unreleased.rs` | `unreleased` command handler |
-
-### Chat — Matrix Transport
-
-| File | Description |
-|------|-------------|
-| `server/src/chat/transport/matrix/mod.rs` | Matrix bot integration |
-| `server/src/chat/transport/matrix/config.rs` | Deserialization of `bot.toml` Matrix settings |
-| `server/src/chat/transport/matrix/commands.rs` | Re-exports from `crate::chat::commands` |
-| `server/src/chat/transport/matrix/transport_impl.rs` | Matrix `ChatTransport` implementation |
-| `server/src/chat/transport/matrix/assign.rs` | Assign/re-assign a coder model to a story |
-| `server/src/chat/transport/matrix/delete.rs` | Delete a story/bug/spike from the pipeline |
-| `server/src/chat/transport/matrix/htop.rs` | Live-updating system and agent process dashboard |
-| `server/src/chat/transport/matrix/notifications.rs` | Stage transition notifications for Matrix rooms |
-| `server/src/chat/transport/matrix/rebuild.rs` | Trigger a server rebuild and restart |
-| `server/src/chat/transport/matrix/reset.rs` | Clear the current Claude Code session for a room |
-| `server/src/chat/transport/matrix/rmtree.rs` | Delete the worktree for a story |
-| `server/src/chat/transport/matrix/start.rs` | Start a coder agent on a story |
-
-### Chat — Matrix Bot
-
-| File | Description |
-|------|-------------|
-| `server/src/chat/transport/matrix/bot/mod.rs` | Sub-modules for the Matrix chat bot |
-| `server/src/chat/transport/matrix/bot/context.rs` | Shared state (rooms, history, permissions) |
-| `server/src/chat/transport/matrix/bot/format.rs` | Markdown-to-HTML conversion and startup announcements |
-| `server/src/chat/transport/matrix/bot/history.rs` | Per-room message history for LLM context |
-| `server/src/chat/transport/matrix/bot/mentions.rs` | Checks whether a message mentions the bot |
-| `server/src/chat/transport/matrix/bot/messages.rs` | Processes incoming messages and dispatches commands |
-| `server/src/chat/transport/matrix/bot/run.rs` | Connects to homeserver and processes sync events |
-| `server/src/chat/transport/matrix/bot/verification.rs` | Interactive emoji verification flow for E2EE |
-
-### Chat — Slack Transport
-
-| File | Description |
-|------|-------------|
-| `server/src/chat/transport/slack/mod.rs` | Slack Bot API integration |
-| `server/src/chat/transport/slack/commands.rs` | Incoming message dispatch and slash command handling |
-| `server/src/chat/transport/slack/format.rs` | Markdown to Slack mrkdwn conversion |
-| `server/src/chat/transport/slack/history.rs` | Conversation history persistence |
-| `server/src/chat/transport/slack/meta.rs` | `ChatTransport` implementation for Slack |
-| `server/src/chat/transport/slack/verify.rs` | Request signature verification |
-
-### Chat — Discord Transport
-
-| File | Description |
-|------|-------------|
-| `server/src/chat/transport/discord/mod.rs` | Discord Bot integration |
-| `server/src/chat/transport/discord/commands.rs` | Incoming message dispatch and command handling |
-| `server/src/chat/transport/discord/format.rs` | Markdown to Discord format conversion |
-| `server/src/chat/transport/discord/gateway.rs` | Minimal Discord Gateway WebSocket client |
-| `server/src/chat/transport/discord/history.rs` | Conversation history persistence |
-| `server/src/chat/transport/discord/meta.rs` | `ChatTransport` implementation for Discord |
-
-### Chat — WhatsApp Transport
-
-| File | Description |
-|------|-------------|
-| `server/src/chat/transport/whatsapp/mod.rs` | WhatsApp Business API integration |
-| `server/src/chat/transport/whatsapp/commands.rs` | Processes incoming messages as bot commands |
-| `server/src/chat/transport/whatsapp/format.rs` | Markdown-to-WhatsApp conversion and message chunking |
-| `server/src/chat/transport/whatsapp/history.rs` | Per-number history and messaging window tracking |
-| `server/src/chat/transport/whatsapp/meta.rs` | Meta Cloud API transport via Graph API |
-| `server/src/chat/transport/whatsapp/twilio.rs` | Twilio transport for sending/receiving messages |
-
-### Chat — Transport Abstraction
-
-| File | Description |
-|------|-------------|
-| `server/src/chat/transport/mod.rs` | Pluggable backends (Matrix, Slack, WhatsApp, Discord) |
-
-### LLM
-
-| File | Description |
-|------|-------------|
-| `server/src/llm/mod.rs` | Chat orchestration, prompts, OAuth, and provider integrations |
-| `server/src/llm/chat.rs` | Multi-turn conversations with tool-calling LLM providers |
-| `server/src/llm/oauth.rs` | Token refresh and credential management for Claude API |
-| `server/src/llm/prompts.rs` | Static prompt templates for chat and onboarding |
-| `server/src/llm/types.rs` | `Message`, `Role`, `ToolCall`, `ModelProvider` types |
-
-### LLM — Providers
-
-| File | Description |
-|------|-------------|
-| `server/src/llm/providers/mod.rs` | Module declarations for Anthropic, Claude Code, and Ollama |
-| `server/src/llm/providers/anthropic.rs` | Streaming completion client for Claude Messages API |
-| `server/src/llm/providers/claude_code.rs` | Runs Claude Code CLI in a PTY and parses output |
-| `server/src/llm/providers/ollama.rs` | Streaming completion client for Ollama models |
-
-### Utilities
-
-| File | Description |
-|------|-------------|
-| `server/src/log_buffer.rs` | Bounded in-memory ring buffer for server log output |
-| `server/src/rebuild.rs` | Server rebuild and restart logic |
-| `server/src/workflow.rs` | Test result tracking and acceptance evaluation |
-| `server/src/worktree.rs` | Creates, lists, and removes git worktrees for agent isolation |
-
 ## License
+See `.huskies/specs/tech/STACK.md` for the full source map.
 
 GPL-3.0. See [LICENSE](LICENSE).
 
```
Generated, +2 −2:

```diff
@@ -1,12 +1,12 @@
 {
   "name": "huskies",
-  "version": "0.10.1",
+  "version": "0.10.4",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "huskies",
-      "version": "0.10.1",
+      "version": "0.10.4",
       "dependencies": {
         "@types/react-syntax-highlighter": "^15.5.13",
         "react": "^19.1.0",
```

```diff
@@ -1,7 +1,7 @@
 {
   "name": "huskies",
   "private": true,
-  "version": "0.10.1",
+  "version": "0.10.4",
   "type": "module",
   "scripts": {
     "dev": "vite",
```
@@ -0,0 +1,43 @@ (new file)

```typescript
export interface BotConfig {
  transport: string | null;
  enabled: boolean | null;
  homeserver: string | null;
  username: string | null;
  password: string | null;
  room_ids: string[] | null;
  slack_bot_token: string | null;
  slack_signing_secret: string | null;
  slack_channel_ids: string[] | null;
}

const DEFAULT_API_BASE = "/api";

async function requestJson<T>(
  path: string,
  options: RequestInit = {},
  baseUrl = DEFAULT_API_BASE,
): Promise<T> {
  const res = await fetch(`${baseUrl}${path}`, {
    headers: { "Content-Type": "application/json", ...(options.headers ?? {}) },
    ...options,
  });
  if (!res.ok) {
    const text = await res.text();
    throw new Error(text || `Request failed (${res.status})`);
  }
  return res.json() as Promise<T>;
}

export const botConfigApi = {
  getConfig(baseUrl?: string): Promise<BotConfig> {
    return requestJson<BotConfig>("/bot/config", {}, baseUrl);
  },

  saveConfig(config: BotConfig, baseUrl?: string): Promise<BotConfig> {
    return requestJson<BotConfig>(
      "/bot/config",
      { method: "PUT", body: JSON.stringify(config) },
      baseUrl,
    );
  },
};
```
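The error contract of the helper above (non-2xx responses reject with the response body as the `Error` message) can be exercised in isolation. The sketch below is not part of the diff: `requestJson` is copied locally (with `...options` spread before `headers`, so caller-supplied headers merge into the defaults rather than replacing them), and `fetch` is stubbed with a hypothetical error body so the example runs without a server (Node 18+ provides `Response`).

```typescript
// Local copy of the fetch-JSON helper, for illustration only.
async function requestJson<T>(
  path: string,
  options: RequestInit = {},
  baseUrl = "/api",
): Promise<T> {
  const res = await fetch(`${baseUrl}${path}`, {
    ...options,
    headers: { "Content-Type": "application/json", ...(options.headers ?? {}) },
  });
  if (!res.ok) {
    const text = await res.text();
    throw new Error(text || `Request failed (${res.status})`);
  }
  return res.json() as Promise<T>;
}

// Stub the global fetch; the error body here is purely hypothetical.
(globalThis as any).fetch = async () =>
  new Response("bot transport not configured", { status: 400 });

async function demo(): Promise<string> {
  try {
    await requestJson<unknown>("/bot/config");
    return "unexpected success";
  } catch (e) {
    return (e as Error).message; // the server's error body surfaces here
  }
}

demo().then((msg) => console.log(msg)); // logs "bot transport not configured"
```

Note that in the committed version, `...options` is spread after `headers`, so a caller passing its own `headers` object would replace the default `Content-Type` rather than merge with it.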
```diff
@@ -8,6 +8,8 @@ export interface JoinedAgent {
   label: string;
   address: string;
   registered_at: number;
   /// Unix timestamp of the last heartbeat from this agent.
   last_seen: number;
+  /// Project this agent is assigned to, if any.
+  assigned_project?: string;
 }
@@ -22,6 +24,28 @@ export interface GatewayInfo {
   projects: GatewayProject[];
 }
 
+export interface PipelineItem {
+  story_id: string;
+  name: string;
+  stage: string;
+  agent?: { agent_name: string; model: string; status: string } | null;
+  blocked?: boolean;
+  retry_count?: number;
+  merge_failure?: string;
+}
+
+export interface ProjectPipelineStatus {
+  active: PipelineItem[];
+  backlog: { story_id: string; name: string }[];
+  backlog_count: number;
+  error?: string;
+}
+
+export interface AllProjectsPipeline {
+  active: string;
+  projects: Record<string, ProjectPipelineStatus>;
+}
+
 export interface GenerateTokenResponse {
   token: string;
 }
@@ -86,4 +110,40 @@ export const gatewayApi = {
   getGatewayInfo(): Promise<GatewayInfo> {
     return gatewayRequest<GatewayInfo>("/api/gateway");
   },
+
+  /// Add a new project to the gateway config.
+  addProject(name: string, url: string): Promise<GatewayProject> {
+    return gatewayRequest<GatewayProject>("/api/gateway/projects", {
+      method: "POST",
+      body: JSON.stringify({ name, url }),
+    });
+  },
+
+  /// Remove a project from the gateway config.
+  removeProject(name: string): Promise<void> {
+    return gatewayRequest<void>(
+      `/api/gateway/projects/${encodeURIComponent(name)}`,
+      { method: "DELETE" },
+    );
+  },
+
+  /// Send a heartbeat for an agent to update its last-seen timestamp.
+  heartbeat(id: string): Promise<void> {
+    return gatewayRequest<void>(`/gateway/agents/${id}/heartbeat`, {
+      method: "POST",
+    });
+  },
+
+  /// Fetch pipeline status from all registered projects.
+  getAllProjectsPipeline(): Promise<AllProjectsPipeline> {
+    return gatewayRequest<AllProjectsPipeline>("/api/gateway/pipeline");
+  },
+
+  /// Switch the active project.
+  switchProject(project: string): Promise<{ ok: boolean; error?: string }> {
+    return gatewayRequest<{ ok: boolean; error?: string }>(
+      "/api/gateway/switch",
+      { method: "POST", body: JSON.stringify({ project }) },
+    );
+  },
 };
```
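The pipeline shapes added above lend themselves to simple client-side aggregation. The following sketch is not part of the diff: the interfaces are re-declared locally (trimmed to the fields used) so it is self-contained, and the `summarize` helper and its output format are assumptions about how a gateway dashboard might consume `AllProjectsPipeline`.

```typescript
interface PipelineItem {
  story_id: string;
  name: string;
  stage: string;
  blocked?: boolean;
}

interface ProjectPipelineStatus {
  active: PipelineItem[];
  backlog: { story_id: string; name: string }[];
  backlog_count: number;
  error?: string; // set when a project's server could not be reached
}

interface AllProjectsPipeline {
  active: string;
  projects: Record<string, ProjectPipelineStatus>;
}

// Produce one summary line per registered project.
function summarize(p: AllProjectsPipeline): string[] {
  return Object.entries(p.projects).map(([name, status]) => {
    if (status.error) return `${name}: unreachable (${status.error})`;
    const blocked = status.active.filter((i) => i.blocked).length;
    return `${name}: ${status.active.length} active, ${status.backlog_count} backlog, ${blocked} blocked`;
  });
}

// Hypothetical response, shaped like what getAllProjectsPipeline() returns.
const sample: AllProjectsPipeline = {
  active: "huskies",
  projects: {
    huskies: {
      active: [{ story_id: "s1", name: "Example story", stage: "coding", blocked: true }],
      backlog: [],
      backlog_count: 3,
    },
  },
};

console.log(summarize(sample).join("\n")); // → huskies: 1 active, 3 backlog, 1 blocked
```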
@@ -1,4 +1,5 @@
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import type { ProjectSettings } from "./settings";
import { settingsApi } from "./settings";

const mockFetch = vi.fn();
@@ -22,7 +23,77 @@ function errorResponse(status: number, text: string) {
  return new Response(text, { status });
}

const defaultProjectSettings: ProjectSettings = {
  default_qa: "server",
  default_coder_model: null,
  max_coders: null,
  max_retries: 2,
  base_branch: null,
  rate_limit_notifications: true,
  timezone: null,
  rendezvous: null,
  watcher_sweep_interval_secs: 60,
  watcher_done_retention_secs: 14400,
};

describe("settingsApi", () => {
  describe("getProjectSettings", () => {
    it("sends GET to /settings and returns project settings", async () => {
      mockFetch.mockResolvedValueOnce(okResponse(defaultProjectSettings));

      const result = await settingsApi.getProjectSettings();

      expect(mockFetch).toHaveBeenCalledWith(
        "/api/settings",
        expect.objectContaining({
          headers: expect.objectContaining({
            "Content-Type": "application/json",
          }),
        }),
      );
      expect(result).toEqual(defaultProjectSettings);
    });

    it("uses custom baseUrl when provided", async () => {
      mockFetch.mockResolvedValueOnce(okResponse(defaultProjectSettings));
      await settingsApi.getProjectSettings("http://localhost:4000/api");
      expect(mockFetch).toHaveBeenCalledWith(
        "http://localhost:4000/api/settings",
        expect.anything(),
      );
    });
  });

  describe("putProjectSettings", () => {
    it("sends PUT to /settings with settings body", async () => {
      const updated = { ...defaultProjectSettings, default_qa: "agent" };
      mockFetch.mockResolvedValueOnce(okResponse(updated));

      const result = await settingsApi.putProjectSettings(updated);

      expect(mockFetch).toHaveBeenCalledWith(
        "/api/settings",
        expect.objectContaining({
          method: "PUT",
          body: JSON.stringify(updated),
        }),
      );
      expect(result.default_qa).toBe("agent");
    });

    it("throws on validation error", async () => {
      mockFetch.mockResolvedValueOnce(
        errorResponse(400, "Invalid default_qa value"),
      );
      await expect(
        settingsApi.putProjectSettings({
          ...defaultProjectSettings,
          default_qa: "invalid",
        }),
      ).rejects.toThrow("Invalid default_qa value");
    });
  });

  describe("getEditorCommand", () => {
    it("sends GET to /settings/editor and returns editor settings", async () => {
      const expected = { editor_command: "zed" };
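The tests above call an `okResponse` helper whose definition falls outside the excerpted hunk; only `errorResponse` is visible. A plausible reconstruction, assuming it mirrors `errorResponse` with a JSON body (hypothetical, not from the diff):

```typescript
// Hypothetical reconstruction of the okResponse test helper; the real
// definition is outside the excerpted hunk.
function okResponse(body: unknown): Response {
  return new Response(JSON.stringify(body), {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });
}

const res = okResponse({ default_qa: "server" });
console.log(res.status);
// → 200
```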
@@ -2,6 +2,19 @@ export interface EditorSettings {
  editor_command: string | null;
}

export interface ProjectSettings {
  default_qa: string;
  default_coder_model: string | null;
  max_coders: number | null;
  max_retries: number;
  base_branch: string | null;
  rate_limit_notifications: boolean;
  timezone: string | null;
  rendezvous: string | null;
  watcher_sweep_interval_secs: number;
  watcher_done_retention_secs: number;
}

export interface OpenFileResult {
  success: boolean;
}
@@ -34,6 +47,21 @@ async function requestJson<T>(
}

export const settingsApi = {
  getProjectSettings(baseUrl?: string): Promise<ProjectSettings> {
    return requestJson<ProjectSettings>("/settings", {}, baseUrl);
  },

  putProjectSettings(
    settings: ProjectSettings,
    baseUrl?: string,
  ): Promise<ProjectSettings> {
    return requestJson<ProjectSettings>(
      "/settings",
      { method: "PUT", body: JSON.stringify(settings) },
      baseUrl,
    );
  },

  getEditorCommand(baseUrl?: string): Promise<EditorSettings> {
    return requestJson<EditorSettings>("/settings/editor", {}, baseUrl);
  },
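The `requestJson` helper's body is outside the hunk (only its closing brace is visible above). A minimal sketch of what such a helper might look like, inferred from the call sites and tests; the `fetchImpl` parameter is added here purely to make the sketch testable and is not in the original:

```typescript
// Sketch only: a requestJson-style helper consistent with the call
// sites above. baseUrl defaults to "/api"; non-OK responses throw with
// the response text, matching the "throws on validation error" test.
async function requestJson<T>(
  path: string,
  init: RequestInit = {},
  baseUrl = "/api",
  fetchImpl: typeof fetch = fetch, // injectable for testing (assumption)
): Promise<T> {
  const headers = new Headers(init.headers);
  if (!headers.has("Content-Type")) {
    headers.set("Content-Type", "application/json");
  }
  const res = await fetchImpl(`${baseUrl}${path}`, { ...init, headers });
  if (!res.ok) {
    throw new Error(await res.text());
  }
  return (await res.json()) as T;
}
```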
@@ -0,0 +1,344 @@
import * as React from "react";
import type { BotConfig } from "../api/bot_config";
import { botConfigApi } from "../api/bot_config";

const { useState, useEffect } = React;

interface BotConfigPageProps {
  onBack: () => void;
}

const fieldStyle: React.CSSProperties = {
  display: "flex",
  flexDirection: "column",
  gap: "4px",
};

const labelStyle: React.CSSProperties = {
  fontSize: "0.8em",
  color: "#aaa",
  fontWeight: 500,
};

const inputStyle: React.CSSProperties = {
  padding: "8px 10px",
  borderRadius: "6px",
  border: "1px solid #333",
  background: "#1e1e1e",
  color: "#ececec",
  fontSize: "0.9em",
  fontFamily: "monospace",
  outline: "none",
};

const sectionStyle: React.CSSProperties = {
  background: "#1e1e1e",
  border: "1px solid #333",
  borderRadius: "8px",
  padding: "20px",
  display: "flex",
  flexDirection: "column",
  gap: "14px",
};

const sectionTitleStyle: React.CSSProperties = {
  fontSize: "0.85em",
  fontWeight: 600,
  color: "#aaa",
  textTransform: "uppercase",
  letterSpacing: "0.06em",
  marginBottom: "2px",
};

function Field({
  label,
  value,
  onChange,
  placeholder,
  type = "text",
}: {
  label: string;
  value: string;
  onChange: (v: string) => void;
  placeholder?: string;
  type?: string;
}) {
  return (
    <div style={fieldStyle}>
      <label style={labelStyle}>{label}</label>
      <input
        type={type}
        value={value}
        onChange={(e) => onChange(e.target.value)}
        placeholder={placeholder}
        style={inputStyle}
        autoComplete="off"
      />
    </div>
  );
}

function ListField({
  label,
  value,
  onChange,
  placeholder,
}: {
  label: string;
  value: string[];
  onChange: (v: string[]) => void;
  placeholder?: string;
}) {
  return (
    <div style={fieldStyle}>
      <label style={labelStyle}>{label} (one per line)</label>
      <textarea
        value={value.join("\n")}
        onChange={(e) =>
          onChange(e.target.value.split("\n").filter((s) => s.trim()))
        }
        placeholder={placeholder}
        rows={3}
        style={{ ...inputStyle, resize: "vertical" }}
      />
    </div>
  );
}

/// Bot configuration page — form for Matrix and Slack credentials.
export function BotConfigPage({ onBack }: BotConfigPageProps) {
  const [transport, setTransport] = useState<"matrix" | "slack">("matrix");
  const [enabled, setEnabled] = useState(false);
  const [homeserver, setHomeserver] = useState("");
  const [username, setUsername] = useState("");
  const [password, setPassword] = useState("");
  const [roomIds, setRoomIds] = useState<string[]>([]);
  const [slackBotToken, setSlackBotToken] = useState("");
  const [slackSigningSecret, setSlackSigningSecret] = useState("");
  const [slackChannelIds, setSlackChannelIds] = useState<string[]>([]);
  const [status, setStatus] = useState<"idle" | "saving" | "saved" | "error">(
    "idle",
  );
  const [errorMsg, setErrorMsg] = useState<string | null>(null);

  useEffect(() => {
    botConfigApi
      .getConfig()
      .then((cfg) => {
        if (cfg.transport === "slack") setTransport("slack");
        setEnabled(cfg.enabled ?? false);
        setHomeserver(cfg.homeserver ?? "");
        setUsername(cfg.username ?? "");
        setPassword(cfg.password ?? "");
        setRoomIds(cfg.room_ids ?? []);
        setSlackBotToken(cfg.slack_bot_token ?? "");
        setSlackSigningSecret(cfg.slack_signing_secret ?? "");
        setSlackChannelIds(cfg.slack_channel_ids ?? []);
      })
      .catch(() => {});
  }, []);

  function buildConfig(): BotConfig {
    return {
      transport,
      enabled,
      homeserver: homeserver || null,
      username: username || null,
      password: password || null,
      room_ids: roomIds.length > 0 ? roomIds : null,
      slack_bot_token: slackBotToken || null,
      slack_signing_secret: slackSigningSecret || null,
      slack_channel_ids: slackChannelIds.length > 0 ? slackChannelIds : null,
    };
  }

  async function handleSave() {
    setStatus("saving");
    setErrorMsg(null);
    try {
      await botConfigApi.saveConfig(buildConfig());
      setStatus("saved");
      setTimeout(() => setStatus("idle"), 2000);
    } catch (e) {
      setStatus("error");
      setErrorMsg(e instanceof Error ? e.message : "Save failed");
    }
  }

  return (
    <div
      style={{
        display: "flex",
        flexDirection: "column",
        height: "100%",
        backgroundColor: "#171717",
        color: "#ececec",
        overflow: "auto",
      }}
    >
      <div
        style={{
          padding: "12px 24px",
          borderBottom: "1px solid #333",
          display: "flex",
          alignItems: "center",
          gap: "16px",
          background: "#171717",
          flexShrink: 0,
        }}
      >
        <button
          type="button"
          onClick={onBack}
          style={{
            background: "transparent",
            border: "none",
            cursor: "pointer",
            color: "#888",
            fontSize: "0.9em",
            padding: "4px 8px",
            borderRadius: "4px",
          }}
        >
          ← Back
        </button>
        <span style={{ fontWeight: 700, fontSize: "1em" }}>
          Bot Configuration
        </span>
      </div>

      <div
        style={{
          flex: 1,
          padding: "24px",
          display: "flex",
          flexDirection: "column",
          gap: "20px",
          maxWidth: "600px",
        }}
      >
        <div style={sectionStyle}>
          <div style={sectionTitleStyle}>General</div>

          <div style={fieldStyle}>
            <label style={labelStyle}>Transport</label>
            <select
              value={transport}
              onChange={(e) =>
                setTransport(e.target.value as "matrix" | "slack")
              }
              style={{ ...inputStyle, cursor: "pointer" }}
            >
              <option value="matrix">Matrix</option>
              <option value="slack">Slack</option>
            </select>
          </div>

          <label
            style={{
              display: "flex",
              alignItems: "center",
              gap: "8px",
              cursor: "pointer",
              fontSize: "0.9em",
              color: "#ccc",
            }}
          >
            <input
              type="checkbox"
              checked={enabled}
              onChange={(e) => setEnabled(e.target.checked)}
            />
            Enabled
          </label>
        </div>

        {transport === "matrix" && (
          <div style={sectionStyle}>
            <div style={sectionTitleStyle}>Matrix Credentials</div>
            <Field
              label="Homeserver"
              value={homeserver}
              onChange={setHomeserver}
              placeholder="https://matrix.example.com"
            />
            <Field
              label="Username"
              value={username}
              onChange={setUsername}
              placeholder="@botname:example.com"
            />
            <Field
              label="Password"
              value={password}
              onChange={setPassword}
              placeholder="bot password"
              type="password"
            />
            <ListField
              label="Room IDs"
              value={roomIds}
              onChange={setRoomIds}
              placeholder="!roomid:example.com"
            />
          </div>
        )}

        {transport === "slack" && (
          <div style={sectionStyle}>
            <div style={sectionTitleStyle}>Slack Credentials</div>
            <Field
              label="Bot Token"
              value={slackBotToken}
              onChange={setSlackBotToken}
              placeholder="xoxb-..."
            />
            <Field
              label="Signing Secret"
              value={slackSigningSecret}
              onChange={setSlackSigningSecret}
              placeholder="signing secret"
              type="password"
            />
            <ListField
              label="Channel IDs"
              value={slackChannelIds}
              onChange={setSlackChannelIds}
              placeholder="C01ABCDEF"
            />
          </div>
        )}

        <div style={{ display: "flex", alignItems: "center", gap: "12px" }}>
          <button
            type="button"
            onClick={handleSave}
            disabled={status === "saving"}
            style={{
              padding: "8px 24px",
              borderRadius: "6px",
              border: "none",
              background: status === "saved" ? "#1a5c2a" : "#2563eb",
              color: "#fff",
              cursor: status === "saving" ? "not-allowed" : "pointer",
              fontSize: "0.9em",
              fontWeight: 600,
              opacity: status === "saving" ? 0.7 : 1,
            }}
          >
            {status === "saving"
              ? "Saving..."
              : status === "saved"
                ? "Saved!"
                : "Save"}
          </button>
          {status === "error" && errorMsg && (
            <span style={{ color: "#f08080", fontSize: "0.85em" }}>
              {errorMsg}
            </span>
          )}
        </div>
      </div>
    </div>
  );
}
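`ListField` above stores IDs as an array but edits them in a single textarea; the two pure transforms it relies on can be restated and exercised standalone:

```typescript
// Standalone restatement of ListField's textarea round-trip: join with
// newlines for display, split and drop blank lines when parsing back.
const toTextarea = (ids: string[]): string => ids.join("\n");
const fromTextarea = (text: string): string[] =>
  text.split("\n").filter((s) => s.trim());

console.log(fromTextarea("!a:example.com\n\n   \n!b:example.com"));
// → [ '!a:example.com', '!b:example.com' ]
```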
@@ -8,6 +8,8 @@ import { useChatSend } from "../hooks/useChatSend";
import { useChatWebSocket } from "../hooks/useChatWebSocket";
import { estimateTokens, getContextWindowSize } from "../utils/chatUtils";
import { ApiKeyDialog } from "./ApiKeyDialog";
import { BotConfigPage } from "./BotConfigPage";
import { SettingsPage } from "./SettingsPage";
import { ChatHeader } from "./ChatHeader";
import type { ChatInputHandle } from "./ChatInput";
import { ChatInput } from "./ChatInput";
@@ -61,6 +63,7 @@ export function Chat({
    null,
  );
  const [showHelp, setShowHelp] = useState(false);
  const [view, setView] = useState<"chat" | "bot-config" | "settings">("chat");
  const [queuedMessages, setQueuedMessages] = useState<
    { id: string; text: string }[]
  >([]);
@@ -373,12 +376,22 @@ export function Chat({
        onToggleTools={setEnableTools}
        wsConnected={wsConnected}
        oauthStatus={oauthStatus}
        onShowBotConfig={() => setView("bot-config")}
        onShowSettings={() => setView("settings")}
      />

      {view === "bot-config" && (
        <BotConfigPage onBack={() => setView("chat")} />
      )}

      {view === "settings" && (
        <SettingsPage onBack={() => setView("chat")} />
      )}

      <div
        data-testid="chat-content-area"
        style={{
-         display: "flex",
+         display: view === "chat" ? "flex" : "none",
          flex: 1,
          minHeight: 0,
          flexDirection: isNarrowScreen ? "column" : "row",
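Note that the chat content area is hidden with `display: "none"` rather than unmounted when another view is open, so chat state (scroll position, in-flight requests) survives a round trip through settings. The style switch reduces to a small pure function:

```typescript
// Restatement of the display toggle from the diff above.
type View = "chat" | "bot-config" | "settings";

const contentDisplay = (view: View): "flex" | "none" =>
  view === "chat" ? "flex" : "none";

console.log(contentDisplay("chat"), contentDisplay("settings"));
// → flex none
```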
@@ -34,6 +34,8 @@ interface ChatHeaderProps {
  onToggleTools: (enabled: boolean) => void;
  wsConnected: boolean;
  oauthStatus?: OAuthStatus | null;
  onShowBotConfig?: () => void;
  onShowSettings?: () => void;
}

const getContextEmoji = (percentage: number): string => {
@@ -58,6 +60,8 @@ export function ChatHeader({
  onToggleTools,
  wsConnected,
  oauthStatus = null,
  onShowBotConfig,
  onShowSettings,
}: ChatHeaderProps) {
  const hasModelOptions = availableModels.length > 0 || claudeModels.length > 0;
  const [showConfirm, setShowConfirm] = useState(false);
@@ -513,6 +517,80 @@ export function ChatHeader({
          🔄 New Session
        </button>

        {onShowBotConfig && (
          <button
            type="button"
            onClick={onShowBotConfig}
            title="Configure bot credentials"
            style={{
              padding: "6px 12px",
              borderRadius: "99px",
              border: "none",
              fontSize: "0.85em",
              backgroundColor: "#2f2f2f",
              color: "#888",
              cursor: "pointer",
              outline: "none",
              transition: "all 0.2s",
            }}
            onMouseOver={(e) => {
              e.currentTarget.style.backgroundColor = "#3f3f3f";
              e.currentTarget.style.color = "#ccc";
            }}
            onMouseOut={(e) => {
              e.currentTarget.style.backgroundColor = "#2f2f2f";
              e.currentTarget.style.color = "#888";
            }}
            onFocus={(e) => {
              e.currentTarget.style.backgroundColor = "#3f3f3f";
              e.currentTarget.style.color = "#ccc";
            }}
            onBlur={(e) => {
              e.currentTarget.style.backgroundColor = "#2f2f2f";
              e.currentTarget.style.color = "#888";
            }}
          >
            ⚙ Bot
          </button>
        )}

        {onShowSettings && (
          <button
            type="button"
            onClick={onShowSettings}
            title="Edit project.toml settings"
            style={{
              padding: "6px 12px",
              borderRadius: "99px",
              border: "none",
              fontSize: "0.85em",
              backgroundColor: "#2f2f2f",
              color: "#888",
              cursor: "pointer",
              outline: "none",
              transition: "all 0.2s",
            }}
            onMouseOver={(e) => {
              e.currentTarget.style.backgroundColor = "#3f3f3f";
              e.currentTarget.style.color = "#ccc";
            }}
            onMouseOut={(e) => {
              e.currentTarget.style.backgroundColor = "#2f2f2f";
              e.currentTarget.style.color = "#888";
            }}
            onFocus={(e) => {
              e.currentTarget.style.backgroundColor = "#3f3f3f";
              e.currentTarget.style.color = "#ccc";
            }}
            onBlur={(e) => {
              e.currentTarget.style.backgroundColor = "#2f2f2f";
              e.currentTarget.style.color = "#888";
            }}
          >
            ⚙ Settings
          </button>
        )}

        {hasModelOptions ? (
          <select
            value={model}
@@ -1,14 +1,173 @@
/// Gateway management panel shown when huskies runs in `--gateway` mode.
///
/// Provides:
/// - A cross-project pipeline status view showing active stories per project.
/// - Clicking a project card switches to it.
/// - An "Add Agent" button that generates a one-time join token.
/// - Instructions for running a build agent with the token.
-/// - A list of connected agents with per-agent project assignment and "Remove" buttons.
+/// - A list of connected agents with per-agent status, project assignment, and "Remove" buttons.
/// - Auto-refresh every 5 seconds so new agents and disconnections appear without a page reload.

import * as React from "react";
-import { gatewayApi, type JoinedAgent, type GatewayProject } from "../api/gateway";
+import {
+  gatewayApi,
+  type JoinedAgent,
+  type GatewayProject,
+  type AllProjectsPipeline,
+  type PipelineItem,
+} from "../api/gateway";

-const { useCallback, useEffect, useState } = React;
+const { useCallback, useEffect, useRef, useState } = React;

/// Seconds of silence before an agent is considered disconnected.
const DISCONNECT_THRESHOLD_SECS = 60;

/// Poll the agent list this often (milliseconds).
const POLL_INTERVAL_MS = 5_000;

type AgentStatus = "idle" | "working" | "disconnected";

/// Derive an agent's display status from its last-seen timestamp and project assignment.
function agentStatus(agent: JoinedAgent): AgentStatus {
  const nowSecs = Date.now() / 1000;
  if (nowSecs - agent.last_seen > DISCONNECT_THRESHOLD_SECS) {
    return "disconnected";
  }
  return agent.assigned_project ? "working" : "idle";
}

const STATUS_COLORS: Record<AgentStatus, string> = {
  idle: "#6e7681",
  working: "#3fb950",
  disconnected: "#f85149",
};

const STATUS_LABELS: Record<AgentStatus, string> = {
  idle: "Idle",
  working: "Working",
  disconnected: "Disconnected",
};

const STAGE_COLORS: Record<string, string> = {
  current: "#3fb950",
  qa: "#d2a679",
  merge: "#79c0ff",
  done: "#6e7681",
};

const STAGE_LABELS: Record<string, string> = {
  current: "In Progress",
  qa: "QA",
  merge: "Merging",
  done: "Done",
};

/// A single story row inside a project pipeline card.
function StoryRow({ item }: { item: PipelineItem }) {
  const color = STAGE_COLORS[item.stage] ?? "#8b949e";
  const label = STAGE_LABELS[item.stage] ?? item.stage;

  return (
    <div
      style={{
        display: "flex",
        alignItems: "center",
        gap: "8px",
        padding: "4px 0",
        fontSize: "0.82em",
      }}
    >
      <span
        style={{
          padding: "1px 6px",
          borderRadius: "10px",
          background: `${color}22`,
          color,
          border: `1px solid ${color}44`,
          whiteSpace: "nowrap",
          flexShrink: 0,
        }}
      >
        {label}
      </span>
      <span style={{ color: "#e6edf3", overflow: "hidden", textOverflow: "ellipsis", whiteSpace: "nowrap" }}>
        {item.name}
      </span>
    </div>
  );
}

/// Pipeline status card for a single project.
function ProjectPipelineCard({
  name,
  pipeline,
  isActive,
  onSwitch,
}: {
  name: string;
  pipeline: AllProjectsPipeline["projects"][string];
  isActive: boolean;
  onSwitch: (name: string) => void;
}) {
  const activeItems = pipeline.active ?? [];
  const backlogCount = pipeline.backlog_count ?? 0;
  const hasError = Boolean(pipeline.error);

  return (
    <div
      data-testid={`pipeline-card-${name}`}
      onClick={() => onSwitch(name)}
      style={{
        padding: "12px 16px",
        background: "#161b22",
        border: `1px solid ${isActive ? "#238636" : "#30363d"}`,
        borderRadius: "8px",
        marginBottom: "8px",
        cursor: "pointer",
      }}
    >
      <div
        style={{
          display: "flex",
          alignItems: "center",
          gap: "8px",
          marginBottom: activeItems.length > 0 ? "8px" : 0,
        }}
      >
        <span style={{ fontWeight: 600, color: "#e6edf3" }}>{name}</span>
        {isActive && (
          <span
            style={{
              fontSize: "0.7em",
              padding: "1px 6px",
              borderRadius: "10px",
              background: "#23863622",
              color: "#3fb950",
              border: "1px solid #23863644",
            }}
          >
            active
          </span>
        )}
        <span style={{ marginLeft: "auto", fontSize: "0.75em", color: "#6e7681" }}>
          {backlogCount > 0 ? `${backlogCount} in backlog` : ""}
        </span>
      </div>

      {hasError ? (
        <div style={{ fontSize: "0.8em", color: "#f85149" }}>{pipeline.error}</div>
      ) : activeItems.length === 0 ? (
        <div style={{ fontSize: "0.8em", color: "#6e7681" }}>No active stories</div>
      ) : (
        <div>
          {activeItems.map((item) => (
            <StoryRow key={item.story_id} item={item} />
          ))}
        </div>
      )}
    </div>
  );
}

function TokenDisplay({ token }: { token: string }) {
  const [copied, setCopied] = useState(false);
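The `agentStatus` derivation above reads the clock via `Date.now()`. Restated with an injected timestamp (an adaptation for determinism, not the diff's exact signature), the same logic can be exercised directly:

```typescript
// Standalone restatement of agentStatus with an injected clock
// (the real function calls Date.now() itself).
type AgentStatus = "idle" | "working" | "disconnected";

interface AgentLike {
  last_seen: number; // Unix seconds
  assigned_project: string | null;
}

const DISCONNECT_THRESHOLD_SECS = 60;

function agentStatus(agent: AgentLike, nowSecs: number): AgentStatus {
  if (nowSecs - agent.last_seen > DISCONNECT_THRESHOLD_SECS) {
    return "disconnected";
  }
  return agent.assigned_project ? "working" : "idle";
}

const now = 1_000_000;
console.log(agentStatus({ last_seen: now - 10, assigned_project: null }, now));
// → idle
console.log(agentStatus({ last_seen: now - 10, assigned_project: "p" }, now));
// → working
console.log(agentStatus({ last_seen: now - 120, assigned_project: "p" }, now));
// → disconnected
```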
@@ -100,7 +259,9 @@ function AgentRow({
  onAssign: (id: string, project: string | null) => void;
}) {
  const registeredAt = new Date(agent.registered_at * 1000).toLocaleString();
-  const isAssigned = Boolean(agent.assigned_project);
+  const status = agentStatus(agent);
+  const statusColor = STATUS_COLORS[status];
+  const statusLabel = STATUS_LABELS[status];

  return (
    <div
@@ -121,18 +282,38 @@ function AgentRow({
          width: "8px",
          height: "8px",
          borderRadius: "50%",
-         background: isAssigned ? "#3fb950" : "#6e7681",
+         background: statusColor,
          flexShrink: 0,
        }}
-       title={isAssigned ? "Assigned" : "Idle (unassigned)"}
+       title={statusLabel}
      />
      <div style={{ flex: 1 }}>
-       <div style={{ fontWeight: 600, color: "#e6edf3" }}>{agent.label}</div>
+       <div style={{ display: "flex", alignItems: "center", gap: "8px" }}>
+         <span style={{ fontWeight: 600, color: "#e6edf3" }}>{agent.label}</span>
+         <span
+           data-testid={`agent-status-${agent.id}`}
+           style={{
+             fontSize: "0.75em",
+             padding: "1px 6px",
+             borderRadius: "10px",
+             background: `${statusColor}22`,
+             color: statusColor,
+             border: `1px solid ${statusColor}44`,
+           }}
+         >
+           {statusLabel}
+         </span>
+       </div>
        <div style={{ fontSize: "0.8em", color: "#8b949e" }}>
          {agent.address}
        </div>
        <div style={{ fontSize: "0.75em", color: "#6e7681" }}>
          Registered {registeredAt}
          {agent.assigned_project && (
            <span style={{ marginLeft: "8px", color: "#8b949e" }}>
              · Project: {agent.assigned_project}
            </span>
          )}
        </div>
      </div>
      <select
@@ -185,8 +366,21 @@ export function GatewayPanel() {
  const [token, setToken] = useState<string | null>(null);
  const [generating, setGenerating] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const [pipeline, setPipeline] = useState<AllProjectsPipeline | null>(null);

  // Add-project form state
  const [newProjectName, setNewProjectName] = useState("");
  const [newProjectUrl, setNewProjectUrl] = useState("");
  const [addingProject, setAddingProject] = useState(false);

  // Keep stable refs so polling intervals don't recreate on state changes.
  const setAgentsRef = useRef(setAgents);
  setAgentsRef.current = setAgents;
  const setPipelineRef = useRef(setPipeline);
  setPipelineRef.current = setPipeline;

  useEffect(() => {
    // Initial load.
    gatewayApi
      .listAgents()
      .then(setAgents)
@@ -195,6 +389,25 @@ export function GatewayPanel() {
      .getGatewayInfo()
      .then((info) => setProjects(info.projects))
      .catch(() => setProjects([]));
    gatewayApi
      .getAllProjectsPipeline()
      .then(setPipeline)
      .catch(() => setPipeline(null));

    // Poll so the dashboard auto-updates as agents connect/disconnect and
    // stories move through pipelines.
    const timer = setInterval(() => {
      gatewayApi
        .listAgents()
        .then((updated) => setAgentsRef.current(updated))
        .catch(() => {});
      gatewayApi
        .getAllProjectsPipeline()
        .then((updated) => setPipelineRef.current(updated))
        .catch(() => {});
    }, POLL_INTERVAL_MS);

    return () => clearInterval(timer);
  }, []);

  const handleAddAgent = useCallback(async () => {
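The `useRef` pattern in the effect above exists so the 5-second interval always calls the latest setters without being torn down and recreated. Stripped of React, the idea is just a mutable holder that the closure reads at call time:

```typescript
// Non-React sketch of the stable-ref pattern: tick() closes over the
// holder object, so replacing holder.current changes what runs on the
// next tick without recreating tick itself.
const holder = { current: (v: string) => console.log("first:", v) };

function tick(value: string) {
  holder.current(value); // reads the latest function at call time
}

tick("a");
holder.current = (v) => console.log("second:", v);
tick("b");
// → first: a
// → second: b
```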
@@ -234,6 +447,53 @@
    [],
  );

  const handleAddProject = useCallback(async () => {
    const name = newProjectName.trim();
    const url = newProjectUrl.trim();
    if (!name || !url) return;
    setAddingProject(true);
    setError(null);
    try {
      const created = await gatewayApi.addProject(name, url);
      setProjects((prev) => [...prev, created]);
      setNewProjectName("");
      setNewProjectUrl("");
    } catch (e) {
      setError(e instanceof Error ? e.message : String(e));
    } finally {
      setAddingProject(false);
    }
  }, [newProjectName, newProjectUrl]);

  const handleSwitchProject = useCallback(async (name: string) => {
    setError(null);
    try {
      const result = await gatewayApi.switchProject(name);
      if (!result.ok) {
        setError(result.error ?? "Failed to switch project");
        return;
      }
      // Refresh pipeline to reflect new active project.
      const updated = await gatewayApi.getAllProjectsPipeline();
      setPipeline(updated);
    } catch (e) {
      setError(e instanceof Error ? e.message : String(e));
    }
  }, []);

  const handleRemoveProject = useCallback(async (name: string) => {
    if (!window.confirm(`Remove project "${name}"? This cannot be undone.`)) {
      return;
    }
    setError(null);
    try {
      await gatewayApi.removeProject(name);
      setProjects((prev) => prev.filter((p) => p.name !== name));
    } catch (e) {
      setError(e instanceof Error ? e.message : String(e));
    }
  }, []);

  return (
    <div
      style={{
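Each handler above normalizes thrown values the same way before surfacing them to the error banner. Extracted as a standalone helper (the name is illustrative, not from the diff):

```typescript
// Illustrative extraction of the error normalization repeated in the
// handlers: non-Error throwables are stringified rather than assuming
// a .message property exists.
const errMsg = (e: unknown): string =>
  e instanceof Error ? e.message : String(e);

console.log(errMsg(new Error("boom")), errMsg("plain"), errMsg(42));
// → boom plain 42
```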
@@ -252,6 +512,34 @@ export function GatewayPanel() {
        Manage build agents connected to this gateway.
      </p>

      {/* Cross-project pipeline status */}
      <section style={{ marginBottom: "32px" }}>
        <h2
          style={{
            fontSize: "1.1em",
            fontWeight: 600,
            marginBottom: "12px",
            borderBottom: "1px solid #21262d",
            paddingBottom: "8px",
          }}
        >
          Pipeline Status
        </h2>
        {pipeline ? (
          Object.entries(pipeline.projects).map(([name, status]) => (
            <ProjectPipelineCard
              key={name}
              name={name}
              pipeline={status}
              isActive={name === pipeline.active}
              onSwitch={handleSwitchProject}
            />
          ))
        ) : (
          <p style={{ color: "#6e7681" }}>Loading pipeline status…</p>
        )}
      </section>

      {/* Add Agent */}
      <section style={{ marginBottom: "32px" }}>
        <h2
@@ -330,6 +618,138 @@ export function GatewayPanel() {
        )}
      </section>

      {/* Project management */}
      <section style={{ marginTop: "32px" }}>
        <h2
          style={{
            fontSize: "1.1em",
            fontWeight: 600,
            marginBottom: "12px",
            borderBottom: "1px solid #21262d",
            paddingBottom: "8px",
          }}
        >
          Projects{" "}
          {projects.length > 0 && (
            <span style={{ fontSize: "0.8em", color: "#8b949e", fontWeight: 400 }}>
              ({projects.length})
            </span>
          )}
        </h2>

        {/* Existing projects list */}
        {projects.map((p) => (
          <div
            key={p.name}
            data-testid={`project-row-${p.name}`}
            style={{
              display: "flex",
              alignItems: "center",
              gap: "12px",
              padding: "10px 14px",
              background: "#161b22",
              border: "1px solid #30363d",
              borderRadius: "8px",
              marginBottom: "8px",
            }}
          >
            <div style={{ flex: 1 }}>
              <div style={{ fontWeight: 600, color: "#e6edf3" }}>{p.name}</div>
              <div style={{ fontSize: "0.8em", color: "#8b949e" }}>{p.url}</div>
            </div>
            <button
              type="button"
              data-testid={`remove-project-${p.name}`}
              onClick={() => handleRemoveProject(p.name)}
              style={{
                fontSize: "0.8em",
                padding: "4px 10px",
                borderRadius: "4px",
                border: "1px solid #f85149",
                background: "none",
                color: "#f85149",
                cursor: "pointer",
              }}
            >
              Remove
            </button>
          </div>
        ))}

        {/* Add project form */}
        <div
          style={{
            marginTop: "12px",
            display: "flex",
            gap: "8px",
            alignItems: "flex-end",
            flexWrap: "wrap",
          }}
        >
          <div style={{ flex: "1 1 140px" }}>
            <div style={{ fontSize: "0.75em", color: "#8b949e", marginBottom: "4px" }}>
              Name
            </div>
            <input
              data-testid="new-project-name"
              type="text"
              placeholder="my-project"
              value={newProjectName}
              onChange={(e) => setNewProjectName(e.target.value)}
              style={{
                width: "100%",
                padding: "6px 10px",
                borderRadius: "4px",
                border: "1px solid #30363d",
                background: "#0d1117",
                color: "#e6edf3",
                fontSize: "0.85em",
              }}
            />
          </div>
          <div style={{ flex: "2 1 200px" }}>
            <div style={{ fontSize: "0.75em", color: "#8b949e", marginBottom: "4px" }}>
              Container URL
            </div>
            <input
              data-testid="new-project-url"
              type="text"
              placeholder="http://localhost:3001"
              value={newProjectUrl}
              onChange={(e) => setNewProjectUrl(e.target.value)}
              style={{
                width: "100%",
                padding: "6px 10px",
                borderRadius: "4px",
                border: "1px solid #30363d",
                background: "#0d1117",
                color: "#e6edf3",
                fontSize: "0.85em",
              }}
            />
          </div>
          <button
            type="button"
            data-testid="add-project-button"
            onClick={handleAddProject}
            disabled={addingProject || !newProjectName.trim() || !newProjectUrl.trim()}
            style={{
              padding: "6px 14px",
              borderRadius: "4px",
              border: "1px solid #238636",
              background: addingProject ? "#1a2f1a" : "#238636",
              color: "#fff",
              cursor: addingProject ? "not-allowed" : "pointer",
              fontWeight: 600,
              fontSize: "0.85em",
              whiteSpace: "nowrap",
            }}
          >
            {addingProject ? "Adding…" : "Add Project"}
          </button>
        </div>
      </section>

      {error && (
        <div
          style={{
@@ -0,0 +1,461 @@
import * as React from "react";
import type { ProjectSettings } from "../api/settings";
import { settingsApi } from "../api/settings";

const { useState, useEffect } = React;

interface SettingsPageProps {
  onBack: () => void;
}

const fieldStyle: React.CSSProperties = {
  display: "flex",
  flexDirection: "column",
  gap: "4px",
};

const labelStyle: React.CSSProperties = {
  fontSize: "0.8em",
  color: "#aaa",
  fontWeight: 500,
};

const descStyle: React.CSSProperties = {
  fontSize: "0.75em",
  color: "#666",
  marginTop: "2px",
};

const inputStyle: React.CSSProperties = {
  padding: "8px 10px",
  borderRadius: "6px",
  border: "1px solid #333",
  background: "#1e1e1e",
  color: "#ececec",
  fontSize: "0.9em",
  fontFamily: "monospace",
  outline: "none",
};

const sectionStyle: React.CSSProperties = {
  background: "#1e1e1e",
  border: "1px solid #333",
  borderRadius: "8px",
  padding: "20px",
  display: "flex",
  flexDirection: "column",
  gap: "16px",
};

const sectionTitleStyle: React.CSSProperties = {
  fontSize: "0.85em",
  fontWeight: 600,
  color: "#aaa",
  textTransform: "uppercase",
  letterSpacing: "0.06em",
  marginBottom: "2px",
};

interface TextFieldProps {
  label: string;
  description?: string;
  value: string;
  onChange: (v: string) => void;
  placeholder?: string;
}

function TextField({ label, description, value, onChange, placeholder }: TextFieldProps) {
  return (
    <div style={fieldStyle}>
      <label style={labelStyle}>{label}</label>
      {description && <span style={descStyle}>{description}</span>}
      <input
        type="text"
        value={value}
        onChange={(e) => onChange(e.target.value)}
        placeholder={placeholder ?? ""}
        style={inputStyle}
        autoComplete="off"
      />
    </div>
  );
}

interface NumberFieldProps {
  label: string;
  description?: string;
  value: number | null;
  onChange: (v: number | null) => void;
  min?: number;
  placeholder?: string;
}

function NumberField({ label, description, value, onChange, min, placeholder }: NumberFieldProps) {
  return (
    <div style={fieldStyle}>
      <label style={labelStyle}>{label}</label>
      {description && <span style={descStyle}>{description}</span>}
      <input
        type="number"
        value={value === null ? "" : value}
        min={min}
        onChange={(e) => {
          const raw = e.target.value.trim();
          if (raw === "") {
            onChange(null);
          } else {
            const n = Number(raw);
            if (!Number.isNaN(n)) onChange(n);
          }
        }}
        placeholder={placeholder ?? ""}
        style={inputStyle}
      />
    </div>
  );
}

interface CheckboxFieldProps {
  label: string;
  description?: string;
  checked: boolean;
  onChange: (v: boolean) => void;
}

function CheckboxField({ label, description, checked, onChange }: CheckboxFieldProps) {
  return (
    <div style={fieldStyle}>
      {description && <span style={descStyle}>{description}</span>}
      <label
        style={{
          display: "flex",
          alignItems: "center",
          gap: "8px",
          cursor: "pointer",
          fontSize: "0.9em",
          color: "#ccc",
        }}
      >
        <input
          type="checkbox"
          checked={checked}
          onChange={(e) => onChange(e.target.checked)}
        />
        {label}
      </label>
    </div>
  );
}

const QA_MODES = ["server", "agent", "human"] as const;

/** Settings page — form-based editor for project.toml scalar settings. */
export function SettingsPage({ onBack }: SettingsPageProps) {
  const [settings, setSettings] = useState<ProjectSettings | null>(null);
  const [status, setStatus] = useState<"idle" | "loading" | "saving" | "saved" | "error">("loading");
  const [errorMsg, setErrorMsg] = useState<string | null>(null);
  const [validationErrors, setValidationErrors] = useState<Record<string, string>>({});

  useEffect(() => {
    settingsApi
      .getProjectSettings()
      .then((s) => {
        setSettings(s);
        setStatus("idle");
      })
      .catch((e: unknown) => {
        setStatus("error");
        setErrorMsg(e instanceof Error ? e.message : "Failed to load settings");
      });
  }, []);

  function patch(partial: Partial<ProjectSettings>) {
    setSettings((prev) => (prev ? { ...prev, ...partial } : prev));
    setValidationErrors({});
  }

  function validate(s: ProjectSettings): Record<string, string> {
    const errors: Record<string, string> = {};
    if (!QA_MODES.includes(s.default_qa as (typeof QA_MODES)[number])) {
      errors.default_qa = `Must be one of: ${QA_MODES.join(", ")}`;
    }
    if (s.max_retries < 0) {
      errors.max_retries = "Must be 0 or greater";
    }
    if (s.watcher_sweep_interval_secs < 1) {
      errors.watcher_sweep_interval_secs = "Must be at least 1 second";
    }
    if (s.watcher_done_retention_secs < 1) {
      errors.watcher_done_retention_secs = "Must be at least 1 second";
    }
    return errors;
  }

  async function handleSave() {
    if (!settings) return;
    const errors = validate(settings);
    if (Object.keys(errors).length > 0) {
      setValidationErrors(errors);
      return;
    }
    setStatus("saving");
    setErrorMsg(null);
    try {
      const saved = await settingsApi.putProjectSettings(settings);
      setSettings(saved);
      setStatus("saved");
      setTimeout(() => setStatus("idle"), 2000);
    } catch (e) {
      setStatus("error");
      setErrorMsg(e instanceof Error ? e.message : "Save failed");
    }
  }

  const s = settings;

  return (
    <div
      style={{
        display: "flex",
        flexDirection: "column",
        height: "100%",
        backgroundColor: "#171717",
        color: "#ececec",
        overflow: "auto",
      }}
    >
      {/* Header */}
      <div
        style={{
          padding: "12px 24px",
          borderBottom: "1px solid #333",
          display: "flex",
          alignItems: "center",
          gap: "16px",
          background: "#171717",
          flexShrink: 0,
        }}
      >
        <button
          type="button"
          onClick={onBack}
          style={{
            background: "transparent",
            border: "none",
            cursor: "pointer",
            color: "#888",
            fontSize: "0.9em",
            padding: "4px 8px",
            borderRadius: "4px",
          }}
        >
          ← Back
        </button>
        <span style={{ fontWeight: 700, fontSize: "1em" }}>Project Settings</span>
      </div>

      {/* Body */}
      <div
        style={{
          flex: 1,
          padding: "24px",
          display: "flex",
          flexDirection: "column",
          gap: "20px",
          maxWidth: "640px",
        }}
      >
        {status === "loading" && (
          <p style={{ color: "#888", fontSize: "0.9em" }}>Loading settings…</p>
        )}

        {status === "error" && !s && (
          <p style={{ color: "#f08080", fontSize: "0.9em" }}>
            Error: {errorMsg}
          </p>
        )}

        {s && (
          <>
            {/* Pipeline */}
            <div style={sectionStyle}>
              <div style={sectionTitleStyle}>Pipeline</div>

              <div style={fieldStyle}>
                <label style={labelStyle}>Default QA Mode</label>
                <span style={descStyle}>
                  How stories are QA-reviewed after the coder stage.
                  Default: server.
                </span>
                <select
                  value={s.default_qa}
                  onChange={(e) => patch({ default_qa: e.target.value })}
                  style={{ ...inputStyle, cursor: "pointer" }}
                >
                  {QA_MODES.map((m) => (
                    <option key={m} value={m}>
                      {m}
                    </option>
                  ))}
                </select>
                {validationErrors.default_qa && (
                  <span style={{ color: "#f08080", fontSize: "0.8em" }}>
                    {validationErrors.default_qa}
                  </span>
                )}
              </div>

              <NumberField
                label="Max Retries"
                description="Maximum retries per story per pipeline stage before blocking. Default: 2. Set 0 to disable."
                value={s.max_retries}
                min={0}
                onChange={(v) => patch({ max_retries: v ?? 0 })}
              />
              {validationErrors.max_retries && (
                <span style={{ color: "#f08080", fontSize: "0.8em" }}>
                  {validationErrors.max_retries}
                </span>
              )}

              <NumberField
                label="Max Concurrent Coders"
                description="Maximum number of coder-stage agents running at once. Leave blank for unlimited."
                value={s.max_coders}
                min={1}
                placeholder="unlimited"
                onChange={(v) => patch({ max_coders: v })}
              />

              <TextField
                label="Default Coder Model"
                description="When set, only coder agents matching this model are auto-assigned (e.g. sonnet, opus)."
                value={s.default_coder_model ?? ""}
                onChange={(v) =>
                  patch({ default_coder_model: v.trim() || null })
                }
                placeholder="e.g. sonnet"
              />
            </div>

            {/* Git */}
            <div style={sectionStyle}>
              <div style={sectionTitleStyle}>Git</div>

              <TextField
                label="Base Branch"
                description="Overrides auto-detection of the merge target branch (e.g. main, master, develop)."
                value={s.base_branch ?? ""}
                onChange={(v) =>
                  patch({ base_branch: v.trim() || null })
                }
                placeholder="e.g. master"
              />
            </div>

            {/* Notifications */}
            <div style={sectionStyle}>
              <div style={sectionTitleStyle}>Notifications</div>

              <CheckboxField
                label="Rate Limit Notifications"
                description="Send chat notifications on soft API rate-limit warnings. Disable to reduce noise."
                checked={s.rate_limit_notifications}
                onChange={(v) => patch({ rate_limit_notifications: v })}
              />
            </div>

            {/* Advanced */}
            <div style={sectionStyle}>
              <div style={sectionTitleStyle}>Advanced</div>

              <TextField
                label="Timezone"
                description="IANA timezone for timer inputs (e.g. Europe/London, America/New_York). Leave blank for system default."
                value={s.timezone ?? ""}
                onChange={(v) => patch({ timezone: v.trim() || null })}
                placeholder="e.g. Europe/London"
              />

              <TextField
                label="Rendezvous URL"
                description="WebSocket URL of a remote huskies node for CRDT state sync (e.g. ws://host:3001/crdt-sync)."
                value={s.rendezvous ?? ""}
                onChange={(v) => patch({ rendezvous: v.trim() || null })}
                placeholder="e.g. ws://host:3001/crdt-sync"
              />
            </div>

            {/* Watcher */}
            <div style={sectionStyle}>
              <div style={sectionTitleStyle}>Archiver</div>

              <NumberField
                label="Sweep Interval (seconds)"
                description="How often to check the done stage for items ready to archive. Default: 60."
                value={s.watcher_sweep_interval_secs}
                min={1}
                onChange={(v) =>
                  patch({ watcher_sweep_interval_secs: v ?? 60 })
                }
              />
              {validationErrors.watcher_sweep_interval_secs && (
                <span style={{ color: "#f08080", fontSize: "0.8em" }}>
                  {validationErrors.watcher_sweep_interval_secs}
                </span>
              )}

              <NumberField
                label="Done Retention (seconds)"
                description="How long an item must stay in the done stage before archiving. Default: 14400 (4 hours)."
                value={s.watcher_done_retention_secs}
                min={1}
                onChange={(v) =>
                  patch({ watcher_done_retention_secs: v ?? 14400 })
                }
              />
              {validationErrors.watcher_done_retention_secs && (
                <span style={{ color: "#f08080", fontSize: "0.8em" }}>
                  {validationErrors.watcher_done_retention_secs}
                </span>
              )}
            </div>

            {/* Save */}
            <div style={{ display: "flex", alignItems: "center", gap: "12px" }}>
              <button
                type="button"
                onClick={handleSave}
                disabled={status === "saving"}
                style={{
                  padding: "8px 24px",
                  borderRadius: "6px",
                  border: "none",
                  background:
                    status === "saved" ? "#1a5c2a" : "#2563eb",
                  color: "#fff",
                  cursor:
                    status === "saving" ? "not-allowed" : "pointer",
                  fontSize: "0.9em",
                  fontWeight: 600,
                  opacity: status === "saving" ? 0.7 : 1,
                }}
              >
                {status === "saving"
                  ? "Saving…"
                  : status === "saved"
                    ? "Saved!"
                    : "Save"}
              </button>
              {status === "error" && errorMsg && (
                <span style={{ color: "#f08080", fontSize: "0.85em" }}>
                  {errorMsg}
                </span>
              )}
            </div>
          </>
        )}
      </div>
    </div>
  );
}
+1
-1
@@ -1,6 +1,6 @@
[package]
name = "huskies"
version = "0.10.1"
version = "0.10.4"
edition = "2024"
build = "build.rs"
@@ -15,7 +15,7 @@ use super::scan::{
};
use super::story_checks::{
    check_archived_dependencies, has_merge_failure, has_review_hold, has_unmet_dependencies,
    is_story_blocked, read_story_front_matter_agent,
    is_story_blocked, is_story_frozen, read_story_front_matter_agent,
};

impl AgentPool {
@@ -103,6 +103,12 @@ impl AgentPool {
                continue;
            }

            // Skip frozen stories — pipeline advancement is suspended.
            if is_story_frozen(project_root, stage_dir, story_id) {
                slog!("[auto-assign] Story '{story_id}' is frozen; skipping until unfrozen.");
                continue;
            }

            // Skip blocked stories (retry limit exceeded).
            if is_story_blocked(project_root, stage_dir, story_id) {
                continue;
@@ -93,6 +93,19 @@ pub(super) fn check_archived_dependencies(
    crate::io::story_metadata::check_archived_deps(project_root, stage_dir, story_id)
}

/// Return `true` if the story file has `frozen: true` in its front matter.
pub(super) fn is_story_frozen(project_root: &Path, _stage_dir: &str, story_id: &str) -> bool {
    use crate::io::story_metadata::parse_front_matter;
    let contents = match read_story_contents(project_root, story_id) {
        Some(c) => c,
        None => return false,
    };
    parse_front_matter(&contents)
        .ok()
        .and_then(|m| m.frozen)
        .unwrap_or(false)
}

/// Return `true` if the story file has a `merge_failure` field in its front matter.
pub(super) fn has_merge_failure(project_root: &Path, _stage_dir: &str, story_id: &str) -> bool {
    use crate::io::story_metadata::parse_front_matter;
@@ -40,6 +40,13 @@ impl AgentPool {
            .map(agent_config_stage)
            .unwrap_or_else(|| pipeline_stage(agent_name));

        // If the story is frozen, do not advance the pipeline. The agent's work
        // is done but the story stays at its current stage.
        if crate::io::story_metadata::is_story_frozen_in_store(story_id) {
            slog!("[pipeline] Story '{story_id}' is frozen; pipeline advancement suppressed.");
            return;
        }

        match stage {
            PipelineStage::Other => {
                // Supervisors and unknown agents do not advance the pipeline.
@@ -7,7 +7,9 @@
//! Passing no dependency numbers clears the field entirely.

use super::CommandContext;
use crate::io::story_metadata::{parse_front_matter, write_depends_on};
use crate::io::story_metadata::{
    parse_front_matter, write_depends_on, write_depends_on_in_content,
};

/// Handle the `depends` command.
///
@@ -51,7 +53,7 @@ pub(super) fn handle_depends(ctx: &CommandContext) -> Option<String> {
    }

    // Find the story by numeric prefix: CRDT → content store → filesystem.
    let (story_id, _stage_dir, path, content) =
    let (story_id, stage_dir, path, content) =
        match crate::chat::lookup::find_story_by_number(ctx.project_root, num_str) {
            Some(found) => found,
            None => {
@@ -62,23 +64,48 @@ pub(super) fn handle_depends(ctx: &CommandContext) -> Option<String> {
        };

    let story_name = content
        .or_else(|| std::fs::read_to_string(&path).ok())
        .and_then(|c| parse_front_matter(&c).ok())
        .as_deref()
        .and_then(|c| parse_front_matter(c).ok())
        .and_then(|m| m.name)
        .unwrap_or_else(|| story_id.clone());

    match write_depends_on(&path, &deps) {
        Ok(()) if deps.is_empty() => Some(format!(
            "Cleared all dependencies for **{story_name}** ({story_id})."
        )),
        Ok(()) => {
    // Prefer the CRDT content store; fall back to filesystem only when the
    // story has not been loaded into the DB (e.g. very early startup or tests
    // that haven't called write_item_with_content).
    if let Some(existing) = crate::db::read_content(&story_id) {
        let updated = write_depends_on_in_content(&existing, &deps);
        crate::db::write_content(&story_id, &updated);
        let stage = crate::pipeline_state::read_typed(&story_id)
            .ok()
            .flatten()
            .map(|i| i.stage.dir_name().to_string())
            .unwrap_or_else(|| stage_dir.clone());
        crate::db::write_item_with_content(&story_id, &stage, &updated);
        if deps.is_empty() {
            Some(format!(
                "Cleared all dependencies for **{story_name}** ({story_id})."
            ))
        } else {
            let nums: Vec<String> = deps.iter().map(|n| n.to_string()).collect();
            Some(format!(
                "Set depends_on: [{}] for **{story_name}** ({story_id}).",
                nums.join(", ")
            ))
        }
        Err(e) => Some(format!("Failed to update dependencies for {story_id}: {e}")),
    } else {
        match write_depends_on(&path, &deps) {
            Ok(()) if deps.is_empty() => Some(format!(
                "Cleared all dependencies for **{story_name}** ({story_id})."
            )),
            Ok(()) => {
                let nums: Vec<String> = deps.iter().map(|n| n.to_string()).collect();
                Some(format!(
                    "Set depends_on: [{}] for **{story_name}** ({story_id}).",
                    nums.join(", ")
                ))
            }
            Err(e) => Some(format!("Failed to update dependencies for {story_id}: {e}")),
        }
    }
}

@@ -170,10 +197,10 @@ mod tests {
        write_story_file(
            tmp.path(),
            "1_backlog",
            "42_story_foo.md",
            "9912_story_foo.md",
            "---\nname: Foo\n---\n",
        );
        let output = depends_cmd_with_root(tmp.path(), "42 abc").unwrap();
        let output = depends_cmd_with_root(tmp.path(), "9912 abc").unwrap();
        assert!(
            output.contains("Invalid dependency number"),
            "non-numeric dep should error: {output}"
@@ -181,25 +208,24 @@ mod tests {
    }

    #[test]
    fn depends_sets_deps_and_writes_to_file() {
    fn depends_sets_deps_and_writes_to_content_store() {
        let tmp = tempfile::TempDir::new().unwrap();
        write_story_file(
            tmp.path(),
            "1_backlog",
            "42_story_foo.md",
            "9910_story_foo.md",
            "---\nname: Foo\n---\n",
        );
        let output = depends_cmd_with_root(tmp.path(), "42 477 478").unwrap();
        let output = depends_cmd_with_root(tmp.path(), "9910 477 478").unwrap();
        assert!(
            output.contains("477") && output.contains("478"),
            "response should mention dep numbers: {output}"
        );
        let contents =
            std::fs::read_to_string(tmp.path().join(".huskies/work/1_backlog/42_story_foo.md"))
                .unwrap();
        let contents = crate::db::read_content("9910_story_foo")
            .expect("content store should have updated story");
        assert!(
            contents.contains("depends_on: [477, 478]"),
            "file should have depends_on set: {contents}"
            "content store should have depends_on set: {contents}"
        );
    }

@@ -209,20 +235,19 @@ mod tests {
        write_story_file(
            tmp.path(),
            "2_current",
            "10_story_bar.md",
            "9911_story_bar.md",
            "---\nname: Bar\ndepends_on: [477]\n---\n",
        );
        let output = depends_cmd_with_root(tmp.path(), "10").unwrap();
        let output = depends_cmd_with_root(tmp.path(), "9911").unwrap();
        assert!(
            output.contains("Cleared"),
            "should confirm clearing deps: {output}"
        );
        let contents =
            std::fs::read_to_string(tmp.path().join(".huskies/work/2_current/10_story_bar.md"))
                .unwrap();
        let contents = crate::db::read_content("9911_story_bar")
            .expect("content store should have updated story");
        assert!(
            !contents.contains("depends_on"),
            "file should have depends_on cleared: {contents}"
            "content store should have depends_on cleared: {contents}"
        );
    }

@@ -232,12 +257,12 @@ mod tests {
        write_story_file(
            tmp.path(),
            "3_qa",
            "55_story_inqa.md",
            "9913_story_inqa.md",
            "---\nname: In QA\n---\n",
        );
        let output = depends_cmd_with_root(tmp.path(), "55 100").unwrap();
        let output = depends_cmd_with_root(tmp.path(), "9913 100").unwrap();
        assert!(
            output.contains("In QA") || output.contains("55_story_inqa"),
            output.contains("In QA") || output.contains("9913_story_inqa"),
            "should find story in qa stage: {output}"
        );
        assert!(output.contains("100"), "should mention dep 100: {output}");
@@ -0,0 +1,259 @@
//! Handler for the `diff` command.
//!
//! Shows the git diff from the configured main branch to the story's worktree
//! HEAD, formatted for readability in chat.

use super::CommandContext;
use std::path::Path;
use std::process::Command;

/// Display the git diff from the configured main branch to a story's worktree HEAD.
///
/// Usage: `diff <number>`
pub(super) fn handle_diff(ctx: &CommandContext) -> Option<String> {
    let num_str = ctx.args.trim();
    if num_str.is_empty() {
        return Some(format!(
            "Usage: `{} diff <number>`\n\nShows the git diff from the main branch to the story's worktree HEAD.",
            ctx.bot_name
        ));
    }
    if !num_str.chars().all(|c| c.is_ascii_digit()) {
        return Some(format!(
            "Invalid story number: `{num_str}`. Usage: `{} diff <number>`",
            ctx.bot_name
        ));
    }

    let story_id = match find_story_id(num_str) {
        Some(id) => id,
        None => {
            return Some(format!(
                "No story with number **{num_str}** found in the pipeline."
            ));
        }
    };

    let wt_path = crate::worktree::worktree_path(ctx.project_root, &story_id);
    if !wt_path.is_dir() {
        return Some(format!(
            "Story **{num_str}** has no worktree. The diff is only available once a coder has started working on it."
        ));
    }

    let base_branch = resolve_base_branch(ctx.project_root);
    let range = format!("{base_branch}...HEAD");

    let stat = run_git(&wt_path, &["diff", "--stat", &range]);
    let diff = run_git(&wt_path, &["diff", &range]);

    let mut out = format!("## Diff — story {num_str} vs `{base_branch}`\n\n");

    if stat.is_empty() && diff.is_empty() {
        out.push_str("*(no changes relative to main branch)*\n");
        return Some(out);
    }

    if !stat.is_empty() {
        out.push_str("**Changed files:**\n```\n");
        out.push_str(&stat);
        out.push_str("\n```\n\n");
    }

    if !diff.is_empty() {
        const MAX_DIFF_BYTES: usize = 8_000;
        if diff.len() > MAX_DIFF_BYTES {
            let truncated = truncate_at_char_boundary(&diff, MAX_DIFF_BYTES);
            out.push_str("**Diff** *(truncated — showing first 8 KB)*:\n```diff\n");
            out.push_str(truncated);
            out.push_str("\n... (truncated)\n```\n");
        } else {
            out.push_str("**Diff:**\n```diff\n");
            out.push_str(&diff);
            out.push_str("\n```\n");
        }
    }

    Some(out)
}

/// Find the story_id in the pipeline whose numeric prefix matches `num_str`.
fn find_story_id(num_str: &str) -> Option<String> {
    let items = crate::pipeline_state::read_all_typed();
    items.into_iter().find_map(|item| {
        let file_num = item
            .story_id
            .0
            .split('_')
            .next()
            .filter(|s| !s.is_empty() && s.chars().all(|c| c.is_ascii_digit()))
            .unwrap_or("");
        if file_num == num_str {
            Some(item.story_id.0.clone())
        } else {
            None
        }
    })
}

/// Return the configured base branch, or auto-detect it from the project root HEAD.
fn resolve_base_branch(project_root: &Path) -> String {
    crate::config::ProjectConfig::load(project_root)
        .ok()
        .and_then(|c| c.base_branch)
        .unwrap_or_else(|| {
            Command::new("git")
                .args(["rev-parse", "--abbrev-ref", "HEAD"])
                .current_dir(project_root)
                .output()
                .ok()
                .filter(|o| o.status.success())
                .map(|o| String::from_utf8_lossy(&o.stdout).trim().to_string())
                .unwrap_or_else(|| "master".to_string())
        })
}

/// Run a git command in `dir`, returning trimmed stdout (empty string on failure).
fn run_git(dir: &Path, args: &[&str]) -> String {
    Command::new("git")
        .args(args)
        .current_dir(dir)
        .output()
        .ok()
        .filter(|o| o.status.success())
        .map(|o| String::from_utf8_lossy(&o.stdout).trim().to_string())
        .unwrap_or_default()
}

/// Truncate `s` to at most `max_bytes` bytes without splitting a UTF-8 character.
fn truncate_at_char_boundary(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    let mut boundary = max_bytes;
    while !s.is_char_boundary(boundary) {
        boundary -= 1;
    }
    &s[..boundary]
}

// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

#[cfg(test)]
mod tests {
    use super::*;
    use crate::agents::AgentPool;
    use std::collections::HashSet;
    use std::sync::{Arc, Mutex};

    use super::super::{CommandDispatch, try_handle_command};

    fn diff_cmd(root: &std::path::Path, args: &str) -> Option<String> {
        let agents = Arc::new(AgentPool::new_test(3000));
        let ambient_rooms = Arc::new(Mutex::new(HashSet::new()));
        let room_id = "!test:example.com".to_string();
        let dispatch = CommandDispatch {
            bot_name: "Timmy",
            bot_user_id: "@timmy:homeserver.local",
            project_root: root,
            agents: &agents,
            ambient_rooms: &ambient_rooms,
            room_id: &room_id,
        };
        try_handle_command(&dispatch, &format!("@timmy diff {args}"))
    }

    #[test]
    fn diff_command_is_registered() {
        let found = super::super::commands().iter().any(|c| c.name == "diff");
        assert!(found, "diff command must be in the registry");
    }

    #[test]
    fn diff_command_appears_in_help() {
        let result = super::super::tests::try_cmd_addressed(
            "Timmy",
            "@timmy:homeserver.local",
            "@timmy help",
        );
        let output = result.unwrap();
        assert!(
            output.contains("diff"),
            "help should list diff command: {output}"
        );
    }

    #[test]
    fn diff_command_no_args_returns_usage() {
        let tmp = tempfile::TempDir::new().unwrap();
        let output = diff_cmd(tmp.path(), "").unwrap();
        assert!(
            output.contains("Usage"),
            "no args should show usage: {output}"
        );
    }

    #[test]
    fn diff_command_non_numeric_returns_error() {
        let tmp = tempfile::TempDir::new().unwrap();
        let output = diff_cmd(tmp.path(), "abc").unwrap();
        assert!(
            output.contains("Invalid"),
            "non-numeric arg should return error: {output}"
        );
    }

    #[test]
    fn diff_command_story_not_found_returns_friendly_message() {
        crate::db::ensure_content_store();
        let tmp = tempfile::TempDir::new().unwrap();
        let output = diff_cmd(tmp.path(), "99993").unwrap();
        assert!(
            output.contains("99993"),
            "message should include story number: {output}"
        );
        assert!(
            output.contains("found") || output.contains("pipeline"),
            "message should explain not found: {output}"
        );
    }

    #[test]
    fn diff_command_no_worktree_returns_clear_error() {
        use crate::chat::test_helpers::write_story_file;
        let tmp = tempfile::TempDir::new().unwrap();
        write_story_file(
            tmp.path(),
            "2_current",
            "55551_story_no_worktree.md",
            "---\nname: No Worktree\n---\n",
        );
        let output = diff_cmd(tmp.path(), "55551").unwrap();
        assert!(
            output.contains("worktree")
                || output.contains("no worktree")
                || output.contains("Worktree"),
            "should report missing worktree: {output}"
        );
    }

    #[test]
    fn truncate_at_char_boundary_short_string() {
        let s = "hello";
        assert_eq!(truncate_at_char_boundary(s, 100), "hello");
    }

    #[test]
    fn truncate_at_char_boundary_exact_limit() {
        let s = "hello";
        assert_eq!(truncate_at_char_boundary(s, 5), "hello");
    }

    #[test]
    fn truncate_at_char_boundary_over_limit() {
        let s = "hello world";
        assert_eq!(truncate_at_char_boundary(s, 5), "hello");
    }
}
@@ -0,0 +1,300 @@
//! Handler for the `freeze` and `unfreeze` commands.
//!
//! `freeze <number>` sets `frozen: true` on the story, halting pipeline
//! advancement and auto-assign until `unfreeze <number>` clears the flag.

use super::CommandContext;
use crate::io::story_metadata::{
    clear_front_matter_field_in_content, parse_front_matter, set_front_matter_field,
};
use std::path::Path;

/// Handle the `freeze` command.
///
/// Parses `<number>` from `ctx.args`, locates the work item, and sets
/// `frozen: true` in its front matter.
pub(super) fn handle_freeze(ctx: &CommandContext) -> Option<String> {
    let num_str = ctx.args.trim();
    if num_str.is_empty() || !num_str.chars().all(|c| c.is_ascii_digit()) {
        return Some(format!(
            "Usage: `{} freeze <number>` (e.g. `freeze 42`)",
            ctx.bot_name
        ));
    }
    Some(freeze_by_number(ctx.project_root, num_str))
}

/// Core freeze logic: find story by numeric prefix and set `frozen: true`.
///
/// Returns a Markdown-formatted response string suitable for all transports.
pub(crate) fn freeze_by_number(project_root: &Path, story_number: &str) -> String {
    let (story_id, _, _, _) =
        match crate::chat::lookup::find_story_by_number(project_root, story_number) {
            Some(found) => found,
            None => {
                return format!("No story, bug, or spike with number **{story_number}** found.");
            }
        };

    freeze_by_story_id(&story_id)
}

fn freeze_by_story_id(story_id: &str) -> String {
    let contents = match crate::db::read_content(story_id) {
        Some(c) => c,
        None => return format!("Failed to read story content for **{story_id}**"),
    };

    let meta = match parse_front_matter(&contents) {
        Ok(m) => m,
        Err(e) => return format!("Failed to parse front matter for **{story_id}**: {e}"),
    };

    let story_name = meta.name.as_deref().unwrap_or(story_id).to_string();

    if meta.frozen == Some(true) {
        return format!("**{story_name}** ({story_id}) is already frozen.");
    }

    let updated = set_front_matter_field(&contents, "frozen", "true");

    crate::db::write_content(story_id, &updated);
    let stage = crate::pipeline_state::read_typed(story_id)
        .ok()
        .flatten()
        .map(|i| i.stage.dir_name().to_string())
        .unwrap_or_else(|| "2_current".to_string());
    crate::db::write_item_with_content(story_id, &stage, &updated);

    format!(
        "Frozen **{story_name}** ({story_id}). Pipeline advancement and auto-assign suppressed until unfrozen."
    )
}

/// Handle the `unfreeze` command.
///
/// Parses `<number>` from `ctx.args`, locates the work item, and clears the
/// `frozen` flag to resume normal pipeline behaviour.
pub(super) fn handle_unfreeze(ctx: &CommandContext) -> Option<String> {
    let num_str = ctx.args.trim();
    if num_str.is_empty() || !num_str.chars().all(|c| c.is_ascii_digit()) {
        return Some(format!(
            "Usage: `{} unfreeze <number>` (e.g. `unfreeze 42`)",
            ctx.bot_name
        ));
    }
    Some(unfreeze_by_number(ctx.project_root, num_str))
}

/// Core unfreeze logic: find story by numeric prefix and clear `frozen` flag.
pub(crate) fn unfreeze_by_number(project_root: &Path, story_number: &str) -> String {
    let (story_id, _, _, _) =
        match crate::chat::lookup::find_story_by_number(project_root, story_number) {
            Some(found) => found,
            None => {
                return format!("No story, bug, or spike with number **{story_number}** found.");
            }
        };

    unfreeze_by_story_id(&story_id)
}

fn unfreeze_by_story_id(story_id: &str) -> String {
    let contents = match crate::db::read_content(story_id) {
        Some(c) => c,
        None => return format!("Failed to read story content for **{story_id}**"),
    };

    let meta = match parse_front_matter(&contents) {
        Ok(m) => m,
        Err(e) => return format!("Failed to parse front matter for **{story_id}**: {e}"),
    };

    let story_name = meta.name.as_deref().unwrap_or(story_id).to_string();

    if meta.frozen != Some(true) {
        return format!("**{story_name}** ({story_id}) is not frozen. Nothing to unfreeze.");
    }

    let updated = clear_front_matter_field_in_content(&contents, "frozen");

    crate::db::write_content(story_id, &updated);
    let stage = crate::pipeline_state::read_typed(story_id)
        .ok()
        .flatten()
        .map(|i| i.stage.dir_name().to_string())
        .unwrap_or_else(|| "2_current".to_string());
    crate::db::write_item_with_content(story_id, &stage, &updated);

    format!("Unfrozen **{story_name}** ({story_id}). Normal pipeline behaviour resumed.")
}

// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

#[cfg(test)]
mod tests {
    use crate::agents::AgentPool;
    use crate::chat::test_helpers::write_story_file;
    use std::collections::HashSet;
    use std::sync::{Arc, Mutex};

    use super::super::{CommandDispatch, try_handle_command};

    fn freeze_cmd_with_root(root: &std::path::Path, args: &str) -> Option<String> {
        let agents = Arc::new(AgentPool::new_test(3000));
        let ambient_rooms = Arc::new(Mutex::new(HashSet::new()));
        let room_id = "!test:example.com".to_string();
        let dispatch = CommandDispatch {
            bot_name: "Timmy",
            bot_user_id: "@timmy:homeserver.local",
            project_root: root,
            agents: &agents,
            ambient_rooms: &ambient_rooms,
            room_id: &room_id,
        };
        try_handle_command(&dispatch, &format!("@timmy freeze {args}"))
    }

    fn unfreeze_cmd_with_root(root: &std::path::Path, args: &str) -> Option<String> {
        let agents = Arc::new(AgentPool::new_test(3000));
        let ambient_rooms = Arc::new(Mutex::new(HashSet::new()));
        let room_id = "!test:example.com".to_string();
        let dispatch = CommandDispatch {
            bot_name: "Timmy",
            bot_user_id: "@timmy:homeserver.local",
            project_root: root,
            agents: &agents,
            ambient_rooms: &ambient_rooms,
            room_id: &room_id,
        };
        try_handle_command(&dispatch, &format!("@timmy unfreeze {args}"))
    }

    #[test]
    fn freeze_command_is_registered() {
        use super::super::commands;
        assert!(
            commands().iter().any(|c| c.name == "freeze"),
            "freeze command must be in the registry"
        );
    }

    #[test]
    fn unfreeze_command_is_registered() {
        use super::super::commands;
        assert!(
            commands().iter().any(|c| c.name == "unfreeze"),
            "unfreeze command must be in the registry"
        );
    }

    #[test]
    fn freeze_command_no_args_returns_usage() {
        let tmp = tempfile::TempDir::new().unwrap();
        let output = freeze_cmd_with_root(tmp.path(), "").unwrap();
        assert!(
            output.contains("Usage"),
            "no args should show usage: {output}"
        );
    }

    #[test]
    fn unfreeze_command_no_args_returns_usage() {
        let tmp = tempfile::TempDir::new().unwrap();
        let output = unfreeze_cmd_with_root(tmp.path(), "").unwrap();
        assert!(
            output.contains("Usage"),
            "no args should show usage: {output}"
        );
    }

    #[test]
    fn freeze_command_not_found_returns_error() {
        let tmp = tempfile::TempDir::new().unwrap();
        let output = freeze_cmd_with_root(tmp.path(), "9988").unwrap();
        assert!(
            output.contains("9988") && output.contains("found"),
            "not-found message should include number and 'found': {output}"
        );
    }

    #[test]
    fn freeze_command_sets_frozen_flag() {
        let tmp = tempfile::TempDir::new().unwrap();
        crate::db::ensure_content_store();
        write_story_file(
            tmp.path(),
            "2_current",
            "9940_story_freezeme.md",
            "---\nname: Freeze Me\n---\n# Story\n",
        );
        let output = freeze_cmd_with_root(tmp.path(), "9940").unwrap();
        assert!(
            output.contains("Frozen") && output.contains("Freeze Me"),
            "should confirm freeze with story name: {output}"
        );
        let contents = crate::db::read_content("9940_story_freezeme")
            .expect("story content should be readable after freeze");
        assert!(
            contents.contains("frozen: true"),
            "frozen flag should be set: {contents}"
        );
    }

    #[test]
    fn unfreeze_command_clears_frozen_flag() {
        let tmp = tempfile::TempDir::new().unwrap();
        crate::db::ensure_content_store();
        write_story_file(
            tmp.path(),
            "2_current",
            "9941_story_frozen.md",
            "---\nname: Frozen Story\nfrozen: true\n---\n# Story\n",
        );
        let output = unfreeze_cmd_with_root(tmp.path(), "9941").unwrap();
        assert!(
            output.contains("Unfrozen") && output.contains("Frozen Story"),
            "should confirm unfreeze with story name: {output}"
        );
        let contents = crate::db::read_content("9941_story_frozen")
            .expect("story content should be readable after unfreeze");
        assert!(
            !contents.contains("frozen:"),
            "frozen flag should be removed: {contents}"
        );
    }

    #[test]
    fn unfreeze_command_not_frozen_returns_error() {
        let tmp = tempfile::TempDir::new().unwrap();
        write_story_file(
            tmp.path(),
            "2_current",
            "9942_story_notfrozen.md",
            "---\nname: Not Frozen\n---\n# Story\n",
        );
        let output = unfreeze_cmd_with_root(tmp.path(), "9942").unwrap();
        assert!(
            output.contains("not frozen"),
            "should return not-frozen error: {output}"
        );
    }

    #[test]
    fn freeze_command_already_frozen_returns_message() {
        let tmp = tempfile::TempDir::new().unwrap();
        write_story_file(
            tmp.path(),
            "2_current",
            "9943_story_alreadyfrozen.md",
            "---\nname: Already Frozen\nfrozen: true\n---\n# Story\n",
        );
        let output = freeze_cmd_with_root(tmp.path(), "9943").unwrap();
        assert!(
            output.contains("already frozen"),
            "should say already frozen: {output}"
        );
    }
}
@@ -11,6 +11,8 @@ mod backlog;
mod cost;
mod coverage;
mod depends;
mod diff;
mod freeze;
mod git;
mod help;
pub(crate) mod loc;
@@ -163,6 +165,11 @@ pub fn commands() -> &'static [BotCommand] {
            description: "Display the full text of a work item: `show <number>`",
            handler: show::handle_show,
        },
        BotCommand {
            name: "diff",
            description: "Show git diff from main branch to story worktree HEAD: `diff <number>`",
            handler: diff::handle_diff,
        },
        BotCommand {
            name: "overview",
            description: "Show implementation summary for a merged story: `overview <number>`",
@@ -203,6 +210,16 @@ pub fn commands() -> &'static [BotCommand] {
            description: "Reset a blocked story: `unblock <number>` (clears blocked flag and resets retry count)",
            handler: unblock::handle_unblock,
        },
        BotCommand {
            name: "freeze",
            description: "Freeze a story at its current stage: `freeze <number>` (suppresses pipeline advancement and auto-assign)",
            handler: freeze::handle_freeze,
        },
        BotCommand {
            name: "unfreeze",
            description: "Unfreeze a story: `unfreeze <number>` (resumes normal pipeline behaviour)",
            handler: freeze::handle_unfreeze,
        },
        BotCommand {
            name: "unreleased",
            description: "Show stories merged to master since the last release tag",

@@ -105,58 +105,13 @@ fn find_story_merge_commit(root: &std::path::Path, num_str: &str) -> Option<Stri
    if hash.is_empty() { None } else { Some(hash) }
}

/// Find the human-readable name of a story by searching content store then filesystem.
/// Find the human-readable name of a story by searching CRDT then content store.
fn find_story_name(root: &std::path::Path, num_str: &str) -> Option<String> {
    // Try content store first.
    for id in crate::db::all_content_ids() {
        let file_num = id.split('_').next().unwrap_or("");
        if file_num == num_str
            && let Some(c) = crate::db::read_content(&id)
        {
            return crate::io::story_metadata::parse_front_matter(&c)
                .ok()
                .and_then(|m| m.name);
        }
    }

    // Fallback: filesystem scan.
    let stages = [
        "1_backlog",
        "2_current",
        "3_qa",
        "4_merge",
        "5_done",
        "6_archived",
    ];
    for stage in &stages {
        let dir = root.join(".huskies").join("work").join(stage);
        if !dir.exists() {
            continue;
        }
        if let Ok(entries) = std::fs::read_dir(&dir) {
            for entry in entries.flatten() {
                let path = entry.path();
                if path.extension().and_then(|e| e.to_str()) != Some("md") {
                    continue;
                }
                if let Some(stem) = path.file_stem().and_then(|s| s.to_str()) {
                    let file_num = stem
                        .split('_')
                        .next()
                        .filter(|s| !s.is_empty() && s.chars().all(|c| c.is_ascii_digit()))
                        .unwrap_or("");
                    if file_num == num_str {
                        return std::fs::read_to_string(&path).ok().and_then(|c| {
                            crate::io::story_metadata::parse_front_matter(&c)
                                .ok()
                                .and_then(|m| m.name)
                        });
                    }
                }
            }
        }
    }
    None
    let (_, _, _, content) = crate::chat::lookup::find_story_by_number(root, num_str)?;
    let content = content?;
    crate::io::story_metadata::parse_front_matter(&content)
        .ok()
        .and_then(|m| m.name)
}

/// Return the `git show --stat` output for a commit.

@@ -59,12 +59,17 @@ fn wizard_generate_reply(ctx: &CommandContext) -> String {
}

/// Compose a status reply for the `setup` command (no args).
///
/// If no wizard state exists, automatically initializes it so the user does
/// not need to run `huskies init` manually.
fn wizard_status_reply(ctx: &CommandContext) -> String {
    if WizardState::load(ctx.project_root).is_none() {
        WizardState::init_if_missing(ctx.project_root);
    }
    match WizardState::load(ctx.project_root) {
        Some(state) => format_wizard_state(&state),
        None => {
            "No setup wizard active. Run `huskies init` in the project root to begin.".to_string()
        }
        None => "Unable to initialize setup wizard. Ensure the `.huskies/` directory exists."
            .to_string(),
    }
}

@@ -205,13 +210,18 @@ mod tests {
    }

    #[test]
    fn setup_no_wizard_returns_helpful_message() {
    fn setup_no_wizard_auto_initializes() {
        let dir = TempDir::new().unwrap();
        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
        let agents = Arc::new(crate::agents::AgentPool::new_test(4000));
        let rooms = Arc::new(Mutex::new(HashSet::new()));
        let ctx = make_ctx("", dir.path(), &agents, &rooms);
        let result = handle_setup(&ctx).unwrap();
        assert!(result.contains("huskies init"));
        // Bot should auto-initialize and return wizard status, not ask user to run huskies init.
        assert!(result.contains("Setup wizard"));
        assert!(!result.contains("huskies init"));
        // Wizard state file should now exist.
        assert!(WizardState::load(dir.path()).is_some());
    }

    #[test]

@@ -2,6 +2,65 @@

use super::CommandContext;

/// Strip YAML front matter and return a summary of useful fields + the remaining body.
fn strip_front_matter(text: &str) -> (String, String) {
    let trimmed = text.trim_start();
    if !trimmed.starts_with("---") {
        return (String::new(), text.to_string());
    }

    // Find the closing ---
    if let Some(end) = trimmed[3..].find("\n---") {
        let yaml_block = &trimmed[3..3 + end].trim();
        let body = &trimmed[3 + end + 4..]; // skip past closing ---

        // Extract useful fields from YAML (simple line-based parsing)
        let mut parts = Vec::new();
        for line in yaml_block.lines() {
            let line = line.trim();
            if line.starts_with("depends_on:") {
                let val = line.trim_start_matches("depends_on:").trim();
                if !val.is_empty() && val != "[]" {
                    parts.push(format!("**Depends on:** {val}"));
                }
            } else if line.starts_with("agent:") {
                let val = line.trim_start_matches("agent:").trim().trim_matches('"');
                if !val.is_empty() {
                    parts.push(format!("**Agent:** {val}"));
                }
            } else if line.starts_with("blocked:") {
                let val = line.trim_start_matches("blocked:").trim();
                if val == "true" {
                    parts.push("**Blocked:** yes".to_string());
                }
            } else if line.starts_with("retry_count:") {
                let val = line.trim_start_matches("retry_count:").trim();
                if val != "0" && !val.is_empty() {
                    parts.push(format!("**Retries:** {val}"));
                }
            } else if line.starts_with("qa:") {
                let val = line.trim_start_matches("qa:").trim().trim_matches('"');
                if val == "human" {
                    parts.push("**QA:** human review required".to_string());
                }
            } else if line.starts_with("merge_failure:") {
                let val = line
                    .trim_start_matches("merge_failure:")
                    .trim()
                    .trim_matches('"');
                if !val.is_empty() {
                    parts.push(format!("**Merge failure:** {val}"));
                }
            }
        }

        (parts.join(" · "), body.to_string())
    } else {
        // No closing ---, return as-is
        (String::new(), text.to_string())
    }
}

/// Display the full markdown text of a work item identified by its numeric ID.
///
/// Lookup priority: CRDT → content store → filesystem (Story 512).
@@ -21,8 +80,8 @@ pub(super) fn handle_show(ctx: &CommandContext) -> Option<String> {
        ));
    }

    // Find the story by numeric prefix: CRDT → content store → filesystem.
    let (story_id, _stage_dir, path, content) =
    // Find the story by numeric prefix: CRDT → content store.
    let (story_id, _stage_dir, _path, content) =
        match crate::chat::lookup::find_story_by_number(ctx.project_root, num_str) {
            Some(found) => found,
            None => {
@@ -32,16 +91,40 @@ pub(super) fn handle_show(ctx: &CommandContext) -> Option<String> {
            }
        };

    // `content` is populated from the content store (CRDT/DB path) or read
    // from disk during the filesystem fallback. If it is None (story found in
    // CRDT but no content-store entry yet), attempt a direct disk read.
    Some(
        content
            .or_else(|| std::fs::read_to_string(&path).ok())
            .unwrap_or_else(|| {
                format!("Story {story_id} found in pipeline but its content is unavailable.")
            }),
    )
    // `content` comes from the CRDT / content store. If unavailable, report
    // it rather than silently reading a stale on-disk copy.
    let text = content.unwrap_or_else(|| {
        format!("Story {story_id} found in pipeline but its content is unavailable.")
    });

    // Strip front matter block and extract useful metadata to show inline.
    let (front_matter_summary, body) = strip_front_matter(&text);

    // Convert markdown headings to bold text for consistent rendering across
    // Matrix clients. Element X doesn't style <h2> tags distinctly, but bold
    // text renders consistently everywhere.
    let formatted = body
        .lines()
        .map(|line| {
            let trimmed = line.trim_start();
            if let Some(rest) = trimmed.strip_prefix("### ") {
                format!("\n**{}**", rest)
            } else if let Some(rest) = trimmed.strip_prefix("## ") {
                format!("\n**{}**", rest)
            } else if let Some(rest) = trimmed.strip_prefix("# ") {
                format!("\n**{}**", rest)
            } else {
                line.to_string()
            }
        })
        .collect::<Vec<_>>()
        .join("\n");

    if front_matter_summary.is_empty() {
        Some(formatted.trim().to_string())
    } else {
        Some(format!("{front_matter_summary}\n{}", formatted.trim()))
    }
}

#[cfg(test)]

@@ -228,7 +228,13 @@ fn render_item_line(
    } else {
        Some(item.name.as_str())
    };
    let display = story_short_label(story_id, name_opt);
    let frozen = crate::io::story_metadata::is_story_frozen_in_store(story_id);
    let base_label = story_short_label(story_id, name_opt);
    let display = if frozen {
        format!("\u{2744}\u{FE0F} {base_label}") // ❄️ prefix
    } else {
        base_label
    };
    let cost_suffix = cost_by_story
        .get(story_id)
        .filter(|&&c| c > 0.0)

@@ -6,8 +6,7 @@

use super::CommandContext;
use crate::io::story_metadata::{
    clear_front_matter_field, clear_front_matter_field_in_content, parse_front_matter,
    set_front_matter_field,
    clear_front_matter_field_in_content, parse_front_matter, set_front_matter_field,
};
use std::path::Path;

@@ -34,9 +33,9 @@ pub(super) fn handle_unblock(ctx: &CommandContext) -> Option<String> {
/// Returns a Markdown-formatted response string suitable for all transports.
/// Also used by the MCP `unblock` tool.
///
/// Lookup priority: CRDT → content store → filesystem (Story 512).
/// Lookup priority: CRDT → content store.
pub(crate) fn unblock_by_number(project_root: &Path, story_number: &str) -> String {
    let (story_id, _stage_dir, path, _content) =
    let (story_id, _, _, _) =
        match crate::chat::lookup::find_story_by_number(project_root, story_number) {
            Some(found) => found,
            None => {
@@ -44,15 +43,7 @@ pub(crate) fn unblock_by_number(project_root: &Path, story_number: &str) -> Stri
            }
        };

    // Prefer DB-backed unblock when the story is in the content store.
    // Note: `content` may have come from the filesystem fallback in
    // `find_story_by_number`, so we must re-check the DB rather than
    // relying on `content.is_some()` alone.
    if crate::db::read_content(&story_id).is_some() {
        unblock_by_story_id(&story_id)
    } else {
        unblock_by_path(&path, &story_id)
    }
    unblock_by_story_id(&story_id)
}

/// Unblock a story using the content store (DB-backed).
@@ -105,64 +96,6 @@ fn unblock_by_story_id(story_id: &str) -> String {
    )
}

/// Core unblock logic: reset blocked state for a known story file path.
///
/// Reads front matter, verifies the story is blocked, clears the `blocked`
/// flag, and resets `retry_count` to 0. Also used by the MCP `unblock` tool
/// when the caller has already resolved the story path from a full `story_id`.
pub(crate) fn unblock_by_path(path: &Path, story_id: &str) -> String {
    let contents = match std::fs::read_to_string(path) {
        Ok(c) => c,
        Err(e) => return format!("Failed to read story file: {e}"),
    };

    let meta = match parse_front_matter(&contents) {
        Ok(m) => m,
        Err(e) => return format!("Failed to parse front matter for **{story_id}**: {e}"),
    };

    let story_name = meta.name.as_deref().unwrap_or(story_id).to_string();

    let has_blocked = meta.blocked == Some(true);
    let has_merge_failure = meta.merge_failure.is_some();

    if !has_blocked && !has_merge_failure {
        return format!("**{story_name}** ({story_id}) is not blocked. Nothing to unblock.");
    }

    // Clear the blocked flag if present.
    if has_blocked && let Err(e) = clear_front_matter_field(path, "blocked") {
        return format!("Failed to clear blocked flag on **{story_id}**: {e}");
    }

    // Clear merge_failure if present.
    if has_merge_failure && let Err(e) = clear_front_matter_field(path, "merge_failure") {
        return format!("Failed to clear merge_failure on **{story_id}**: {e}");
    }

    // Reset retry_count to 0 (re-read the updated file, modify, write).
    let updated_contents = match std::fs::read_to_string(path) {
        Ok(c) => c,
        Err(e) => return format!("Failed to re-read story file after unblocking: {e}"),
    };
    let with_retry_reset = set_front_matter_field(&updated_contents, "retry_count", "0");
    if let Err(e) = std::fs::write(path, &with_retry_reset) {
        return format!("Failed to reset retry_count on **{story_id}**: {e}");
    }

    let mut cleared = Vec::new();
    if has_blocked {
        cleared.push("blocked");
    }
    if has_merge_failure {
        cleared.push("merge_failure");
    }
    format!(
        "Unblocked **{story_name}** ({story_id}). Cleared: {}. Retry count reset to 0.",
        cleared.join(", ")
    )
}

// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

@@ -4,7 +4,7 @@ use crate::chat::ChatTransport;
use crate::chat::timer::TimerStore;
use crate::http::context::{PermissionDecision, PermissionForward};
use matrix_sdk::ruma::{OwnedEventId, OwnedRoomId, OwnedUserId};
use std::collections::{HashMap, HashSet};
use std::collections::{BTreeMap, HashMap, HashSet};
use std::path::PathBuf;
use std::sync::Arc;
use tokio::sync::Mutex as TokioMutex;
@@ -65,6 +65,70 @@ pub struct BotContext {
    /// In gateway mode: valid project names accepted by the `switch` command.
    /// Empty in standalone mode.
    pub gateway_projects: Vec<String>,
    /// In gateway mode: mapping of project name → base URL (e.g. `"http://localhost:3001"`).
    /// Used to proxy bot commands to the active project's `/api/bot/command` endpoint.
    /// Empty in standalone mode.
    pub gateway_project_urls: BTreeMap<String, String>,
}

impl BotContext {
    /// Resolve the effective project root for command dispatch.
    ///
    /// In gateway mode the bot's `project_root` is the gateway config directory.
    /// Each project lives in a subdirectory named after the project, so the
    /// effective root for commands is `project_root / active_project_name`.
    /// In standalone (single-project) mode this returns `project_root` unchanged.
    pub async fn effective_project_root(&self) -> PathBuf {
        if let Some(ref ap) = self.gateway_active_project {
            let name = ap.read().await.clone();
            self.project_root.join(&name)
        } else {
            self.project_root.clone()
        }
    }

    /// Returns `true` if the bot is running in gateway mode.
    pub fn is_gateway(&self) -> bool {
        self.gateway_active_project.is_some()
    }

    /// Return the base URL for the currently active project, if in gateway mode.
    pub async fn active_project_url(&self) -> Option<String> {
        let ap = self.gateway_active_project.as_ref()?;
        let name = ap.read().await.clone();
        self.gateway_project_urls.get(&name).cloned()
    }

    /// Proxy a bot command to the active project's `/api/bot/command` endpoint.
    ///
    /// Returns the Markdown response from the project server, or an error
    /// message if the request failed.
    pub async fn proxy_bot_command(&self, command: &str, args: &str) -> Option<String> {
        let base_url = self.active_project_url().await?;
        let url = format!("{base_url}/api/bot/command");
        let client = reqwest::Client::new();
        let body = serde_json::json!({
            "command": command,
            "args": args,
        });
        match client.post(&url).json(&body).send().await {
            Ok(resp) if resp.status().is_success() => {
                match resp.json::<serde_json::Value>().await {
                    Ok(json) => json
                        .get("response")
                        .and_then(|v| v.as_str())
                        .map(String::from),
                    Err(e) => Some(format!("Failed to parse response from project server: {e}")),
                }
            }
            Ok(resp) => Some(format!(
                "Project server returned HTTP {}: {}",
                resp.status(),
                resp.text().await.unwrap_or_default()
            )),
            Err(e) => Some(format!("Failed to reach project server at {url}: {e}")),
        }
    }
}

// ---------------------------------------------------------------------------
@@ -88,6 +152,135 @@ mod tests {
        assert_clone::<BotContext>();
    }

    #[tokio::test]
    async fn effective_project_root_standalone_returns_project_root() {
        // In standalone mode (gateway_active_project is None), the effective root
        // must equal the project_root exactly.
        let (_perm_tx, perm_rx) = mpsc::unbounded_channel();
        let ctx = BotContext {
            bot_user_id: make_user_id("@bot:example.com"),
            target_room_ids: vec![],
            project_root: PathBuf::from("/projects/myapp"),
            allowed_users: vec![],
            history: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
            history_size: 20,
            bot_sent_event_ids: Arc::new(TokioMutex::new(std::collections::HashSet::new())),
            perm_rx: Arc::new(TokioMutex::new(perm_rx)),
            pending_perm_replies: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
            permission_timeout_secs: 120,
            bot_name: "Assistant".to_string(),
            ambient_rooms: Arc::new(std::sync::Mutex::new(std::collections::HashSet::new())),
            agents: Arc::new(crate::agents::AgentPool::new_test(3000)),
            htop_sessions: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
            transport: Arc::new(crate::chat::transport::whatsapp::WhatsAppTransport::new(
                "test-phone".to_string(),
                "test-token".to_string(),
                "pipeline_notification".to_string(),
            )),
            timer_store: Arc::new(crate::chat::timer::TimerStore::load(
                std::path::PathBuf::from("/tmp/timers.json"),
            )),
            gateway_active_project: None,
            gateway_projects: vec![],
            gateway_project_urls: BTreeMap::new(),
        };
        assert_eq!(
            ctx.effective_project_root().await,
            PathBuf::from("/projects/myapp")
        );
    }

    #[tokio::test]
    async fn effective_project_root_gateway_uses_active_project_subdir() {
        // In gateway mode, the effective root must be config_dir / active_project_name.
        let (_perm_tx, perm_rx) = mpsc::unbounded_channel();
        let active = Arc::new(RwLock::new("huskies".to_string()));
        let ctx = BotContext {
            bot_user_id: make_user_id("@bot:example.com"),
            target_room_ids: vec![],
            project_root: PathBuf::from("/gateway"),
            allowed_users: vec![],
            history: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
            history_size: 20,
            bot_sent_event_ids: Arc::new(TokioMutex::new(std::collections::HashSet::new())),
            perm_rx: Arc::new(TokioMutex::new(perm_rx)),
            pending_perm_replies: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
            permission_timeout_secs: 120,
            bot_name: "Assistant".to_string(),
            ambient_rooms: Arc::new(std::sync::Mutex::new(std::collections::HashSet::new())),
            agents: Arc::new(crate::agents::AgentPool::new_test(3000)),
            htop_sessions: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
            transport: Arc::new(crate::chat::transport::whatsapp::WhatsAppTransport::new(
                "test-phone".to_string(),
                "test-token".to_string(),
                "pipeline_notification".to_string(),
            )),
            timer_store: Arc::new(crate::chat::timer::TimerStore::load(
                std::path::PathBuf::from("/tmp/timers.json"),
            )),
            gateway_active_project: Some(Arc::clone(&active)),
            gateway_projects: vec!["huskies".into(), "robot-studio".into()],
            gateway_project_urls: BTreeMap::from([
                ("huskies".into(), "http://localhost:3001".into()),
                ("robot-studio".into(), "http://localhost:3002".into()),
            ]),
        };
        assert_eq!(
            ctx.effective_project_root().await,
            PathBuf::from("/gateway/huskies")
        );
    }

    #[tokio::test]
    async fn effective_project_root_gateway_reflects_project_switch() {
        // Switching the active project must change the effective root.
        let (_perm_tx, perm_rx) = mpsc::unbounded_channel();
        let active = Arc::new(RwLock::new("huskies".to_string()));
        let ctx = BotContext {
            bot_user_id: make_user_id("@bot:example.com"),
            target_room_ids: vec![],
            project_root: PathBuf::from("/gateway"),
            allowed_users: vec![],
            history: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
            history_size: 20,
            bot_sent_event_ids: Arc::new(TokioMutex::new(std::collections::HashSet::new())),
            perm_rx: Arc::new(TokioMutex::new(perm_rx)),
            pending_perm_replies: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
            permission_timeout_secs: 120,
            bot_name: "Assistant".to_string(),
|
||||
ambient_rooms: Arc::new(std::sync::Mutex::new(std::collections::HashSet::new())),
|
||||
agents: Arc::new(crate::agents::AgentPool::new_test(3000)),
|
||||
htop_sessions: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
|
||||
transport: Arc::new(crate::chat::transport::whatsapp::WhatsAppTransport::new(
|
||||
"test-phone".to_string(),
|
||||
"test-token".to_string(),
|
||||
"pipeline_notification".to_string(),
|
||||
)),
|
||||
timer_store: Arc::new(crate::chat::timer::TimerStore::load(
|
||||
std::path::PathBuf::from("/tmp/timers.json"),
|
||||
)),
|
||||
gateway_active_project: Some(Arc::clone(&active)),
|
||||
gateway_projects: vec!["huskies".into(), "robot-studio".into()],
|
||||
gateway_project_urls: BTreeMap::from([
|
||||
("huskies".into(), "http://localhost:3001".into()),
|
||||
("robot-studio".into(), "http://localhost:3002".into()),
|
||||
]),
|
||||
};
|
||||
|
||||
assert_eq!(
|
||||
ctx.effective_project_root().await,
|
||||
PathBuf::from("/gateway/huskies")
|
||||
);
|
||||
|
||||
// Simulate switch_project changing the active project.
|
||||
*active.write().await = "robot-studio".to_string();
|
||||
|
||||
assert_eq!(
|
||||
ctx.effective_project_root().await,
|
||||
PathBuf::from("/gateway/robot-studio")
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn bot_context_has_no_require_verified_devices_field() {
|
||||
// Verification is always on — BotContext no longer has a toggle field.
|
||||
@@ -118,6 +311,7 @@ mod tests {
            )),
            gateway_active_project: None,
            gateway_projects: vec![],
            gateway_project_urls: BTreeMap::new(),
        };
        // Clone must work (required by Matrix SDK event handler injection).
        let _cloned = ctx.clone();

@@ -96,6 +96,49 @@ mod tests {
        );
    }

    #[test]
    fn markdown_to_html_heading_renders_as_h_tag() {
        let html = markdown_to_html("## Section\nContent here.");
        assert!(
            html.contains("<h2>Section</h2>"),
            "expected <h2> heading tag: {html}"
        );
        assert!(
            html.contains("<p>Content here.</p>"),
            "expected paragraph after heading: {html}"
        );
    }

    #[test]
    fn markdown_to_html_heading_with_preceding_prose_renders_correctly() {
        let html = markdown_to_html("Intro text.\n## Section\nBody.");
        assert!(
            html.contains("<h2>Section</h2>"),
            "expected <h2> heading tag: {html}"
        );
        assert!(
            html.contains("<p>Intro text.</p>"),
            "expected intro paragraph: {html}"
        );
        assert!(
            html.contains("<p>Body.</p>"),
            "expected body paragraph: {html}"
        );
    }

    #[test]
    fn markdown_to_html_multiple_headings_each_render_as_h_tags() {
        let html = markdown_to_html("## Section 1\nContent one.\n\n## Section 2\nContent two.");
        assert!(
            html.contains("<h2>Section 1</h2>"),
            "expected first <h2>: {html}"
        );
        assert!(
            html.contains("<h2>Section 2</h2>"),
            "expected second <h2>: {html}"
        );
    }

    #[test]
    fn startup_announcement_uses_bot_name() {
        assert_eq!(format_startup_announcement("Timmy"), "Timmy is online.");
@@ -174,13 +174,71 @@ pub(super) async fn on_room_message(
    let user_message = body;
    slog!("[matrix-bot] Message from {sender}: {user_message}");

    // In gateway mode, resolve commands against the active project's root directory.
    // The gateway's own project_root is the gateway config dir; each project lives in
    // a subdirectory named after the project. Standalone mode is unaffected.
    let effective_root = ctx.effective_project_root().await;

    // ── Gateway command proxy ───────────────────────────────────────────
    // In gateway mode the bot has no local CRDT or project filesystem, so most
    // commands must be forwarded to the active project's `/api/bot/command`
    // endpoint. Only a small set of gateway-local commands are handled here.
    if ctx.is_gateway() {
        // Commands that are meaningful on the gateway itself (no project state needed).
        const GATEWAY_LOCAL_COMMANDS: &[&str] = &["help", "ambient", "reset", "switch"];

        let stripped = crate::chat::util::strip_bot_mention(
            &user_message,
            &ctx.bot_name,
            ctx.bot_user_id.as_str(),
        )
        .trim()
        .trim_start_matches(|c: char| !c.is_alphanumeric())
        .to_string();

        let (cmd, args) = match stripped.split_once(char::is_whitespace) {
            Some((c, a)) => (c.to_ascii_lowercase(), a.trim().to_string()),
            None => (stripped.to_ascii_lowercase(), String::new()),
        };

        // Only proxy if the first word is a known bot command (sync or async).
        let is_known_command = !cmd.is_empty()
            && !GATEWAY_LOCAL_COMMANDS.contains(&cmd.as_str())
            && (crate::chat::commands::commands()
                .iter()
                .any(|c| c.name == cmd)
                || [
                    "assign", "start", "delete", "rebuild", "rmtree", "htop", "timer",
                ]
                .contains(&cmd.as_str()));

        if is_known_command {
            // Proxy to the active project server.
            let response = match ctx.proxy_bot_command(&cmd, &args).await {
                Some(r) => r,
                None => "No active project selected or project URL not configured.".to_string(),
            };
            let html = markdown_to_html(&response);
            if let Ok(msg_id) = ctx
                .transport
                .send_message(&room_id_str, &response, &html)
                .await
                && let Ok(event_id) = msg_id.parse()
            {
                ctx.bot_sent_event_ids.lock().await.insert(event_id);
            }
            return;
        }
        // Gateway-local commands and freeform text fall through to normal handling below.
    }

    // Check for bot-level commands (help, status, ambient, …) before invoking
    // the LLM. All commands are registered in commands.rs — no special-casing
    // needed here.
    let dispatch = super::super::commands::CommandDispatch {
        bot_name: &ctx.bot_name,
        bot_user_id: ctx.bot_user_id.as_str(),
-       project_root: &ctx.project_root,
+       project_root: &effective_root,
        agents: &ctx.agents,
        ambient_rooms: &ctx.ambient_rooms,
        room_id: &room_id_str,
@@ -219,7 +277,7 @@ pub(super) async fn on_room_message(
        &ctx.bot_name,
        &story_number,
        &model,
-       &ctx.project_root,
+       &effective_root,
        &ctx.agents,
    )
    .await
@@ -287,7 +345,7 @@ pub(super) async fn on_room_message(
    super::super::delete::handle_delete(
        &ctx.bot_name,
        &story_number,
-       &ctx.project_root,
+       &effective_root,
        &ctx.agents,
    )
    .await
@@ -321,7 +379,7 @@ pub(super) async fn on_room_message(
    super::super::rmtree::handle_rmtree(
        &ctx.bot_name,
        &story_number,
-       &ctx.project_root,
+       &effective_root,
        &ctx.agents,
    )
    .await
@@ -361,7 +419,7 @@ pub(super) async fn on_room_message(
        &ctx.bot_name,
        &story_number,
        agent_hint.as_deref(),
-       &ctx.project_root,
+       &effective_root,
        &ctx.agents,
    )
    .await
@@ -587,7 +645,18 @@ pub(super) async fn handle_message(
    let sent_any_chunk = Arc::new(AtomicBool::new(false));
    let sent_any_chunk_for_callback = Arc::clone(&sent_any_chunk);

-   let project_root_str = ctx.project_root.to_string_lossy().to_string();
+   // In gateway mode, run Claude Code in the gateway config directory so it
+   // picks up the `.mcp.json` that points to the gateway's MCP proxy endpoint.
+   // The gateway proxies tool calls to the active project automatically.
+   // In standalone mode, use the project root directly.
+   let project_root_str = if ctx.is_gateway() {
+       ctx.project_root.to_string_lossy().to_string()
+   } else {
+       ctx.effective_project_root()
+           .await
+           .to_string_lossy()
+           .to_string()
+   };
    let chat_fut = provider.chat_stream(
        &prompt,
        &project_root_str,
@@ -30,6 +30,7 @@ pub async fn run_bot(
    shutdown_rx: watch::Receiver<Option<crate::rebuild::ShutdownReason>>,
    gateway_active_project: Option<Arc<RwLock<String>>>,
    gateway_projects: Vec<String>,
    gateway_project_urls: std::collections::BTreeMap<String, String>,
) -> Result<(), String> {
    let store_path = project_root.join(".huskies").join("matrix_store");
    let client = Client::builder()
@@ -247,6 +248,7 @@ pub async fn run_bot(
        timer_store,
        gateway_active_project,
        gateway_projects,
        gateway_project_urls,
    };

    slog!(
@@ -58,6 +58,11 @@ use tokio::sync::{Mutex as TokioMutex, RwLock, broadcast, mpsc, watch};
/// announce the shutdown to all configured rooms before the process exits.
///
/// Must be called from within a Tokio runtime context (e.g., from `main`).
///
/// Returns an [`tokio::task::AbortHandle`] if the bot was actually spawned (Matrix/Discord
/// transports), or `None` if the config is absent, disabled, or uses a webhook-based
/// transport (Slack/WhatsApp) that does not require a persistent background task.
#[allow(clippy::too_many_arguments)]
pub fn spawn_bot(
    project_root: &Path,
    watcher_tx: broadcast::Sender<WatcherEvent>,
@@ -66,12 +71,13 @@ pub fn spawn_bot(
    shutdown_rx: watch::Receiver<Option<ShutdownReason>>,
    gateway_active_project: Option<Arc<RwLock<String>>>,
    gateway_projects: Vec<String>,
-) {
    gateway_project_urls: std::collections::BTreeMap<String, String>,
+) -> Option<tokio::task::AbortHandle> {
    let config = match BotConfig::load(project_root) {
        Some(c) => c,
        None => {
            crate::slog!("[matrix-bot] bot.toml absent or disabled; Matrix integration skipped");
-           return;
+           return None;
        }
    };

@@ -81,7 +87,7 @@ pub fn spawn_bot(
            "[bot] transport={} — skipping Matrix bot; webhooks handle this transport",
            config.transport
        );
-       return;
+       return None;
    }

    crate::slog!(
@@ -93,7 +99,7 @@ pub fn spawn_bot(
    let root = project_root.to_path_buf();
    let watcher_rx = watcher_tx.subscribe();
    let watcher_rx_auto = watcher_tx.subscribe();
-   tokio::spawn(async move {
+   let handle = tokio::spawn(async move {
        if let Err(e) = bot::run_bot(
            config,
            root,
@@ -104,10 +110,12 @@ pub fn spawn_bot(
            shutdown_rx,
            gateway_active_project,
            gateway_projects,
            gateway_project_urls,
        )
        .await
        {
            crate::slog!("[matrix-bot] Fatal error: {e}");
        }
    });
    Some(handle.abort_handle())
}
+50 -6
@@ -223,12 +223,24 @@ pub fn normalize_line_breaks(text: &str) -> String {

        let prev_line = lines[i - 1];

-       // Insert a blank separator when both the current and previous lines
-       // are non-empty prose (not inside a code fence, not structured Markdown).
        // ATX headings (lines starting with one or more `#` characters) always
        // need a blank line before and after them so that Matrix clients render
        // the heading with visual separation. Without a blank line, a single
        // newline between a heading and adjacent text is swallowed by many
        // Matrix clients (including Element X), joining the heading text and
        // the following content on the same line without any heading formatting.
        let is_cur_heading = line.trim_start().starts_with('#');
        let is_prev_heading = prev_line.trim_start().starts_with('#');

        // Insert a blank separator when:
        // 1. Both lines are non-empty prose (standard prose-to-prose rule).
        // 2. The current line is an ATX heading (adds blank line *before* it).
        // 3. The previous line was an ATX heading (adds blank line *after* it).
        let should_double = !line.is_empty()
            && !prev_line.is_empty()
-           && !is_structured_line(line)
-           && !is_structured_line(prev_line);
+           && ((!is_structured_line(line) && !is_structured_line(prev_line))
+               || is_cur_heading
+               || is_prev_heading);

        if should_double {
            result.push("");
@@ -599,10 +611,42 @@ mod tests {
    }

    #[test]
-   fn normalize_heading_single_newline_preserved() {
+   fn normalize_heading_followed_by_prose_gets_blank_line() {
+       // A blank line must be inserted after a heading so Matrix clients render
+       // the heading with visual separation from the following paragraph.
        let input = "# My Heading\nSome text below.";
        let output = normalize_line_breaks(input);
-       assert_eq!(output, "# My Heading\nSome text below.");
+       assert_eq!(output, "# My Heading\n\nSome text below.");
    }

    #[test]
    fn normalize_prose_before_heading_gets_blank_line() {
        // A blank line must be inserted before a heading when prose precedes it.
        let input = "Some intro text.\n## Section";
        let output = normalize_line_breaks(input);
        assert_eq!(output, "Some intro text.\n\n## Section");
    }

    #[test]
    fn normalize_heading_surrounded_by_prose_gets_blank_lines_both_sides() {
        let input = "Intro.\n## Heading\nContent.";
        let output = normalize_line_breaks(input);
        assert_eq!(output, "Intro.\n\n## Heading\n\nContent.");
    }

    #[test]
    fn normalize_consecutive_headings_separated_by_blank_lines() {
        let input = "## Section 1\n## Section 2";
        let output = normalize_line_breaks(input);
        assert_eq!(output, "## Section 1\n\n## Section 2");
    }

    #[test]
    fn normalize_heading_already_separated_by_blank_line_unchanged() {
        // When there is already a blank line, no extra blank is inserted.
        let input = "# Heading\n\nContent.";
        let output = normalize_line_breaks(input);
        assert_eq!(output, "# Heading\n\nContent.");
    }

    #[test]
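The heading rule this hunk adds can be reduced to a small standalone sketch (hypothetical helper name; the real `normalize_line_breaks` also handles code fences, structured Markdown, and the prose-to-prose doubling rule): insert a blank line between two adjacent non-empty lines whenever either of them is an ATX heading.

```rust
// Minimal sketch of the heading-separation rule only.
fn add_heading_blanks(text: &str) -> String {
    let lines: Vec<&str> = text.lines().collect();
    let mut out: Vec<&str> = Vec::new();
    for (i, &line) in lines.iter().enumerate() {
        if i > 0 {
            let prev = lines[i - 1];
            // Either side being an ATX heading forces a blank separator.
            let heading_adjacent =
                line.trim_start().starts_with('#') || prev.trim_start().starts_with('#');
            if !line.is_empty() && !prev.is_empty() && heading_adjacent {
                out.push(""); // blank line so clients render the heading distinctly
            }
        }
        out.push(line);
    }
    out.join("\n")
}

fn main() {
    assert_eq!(add_heading_blanks("# H\nBody."), "# H\n\nBody.");
    assert_eq!(add_heading_blanks("Intro.\n## S"), "Intro.\n\n## S");
    // Already separated: no extra blank inserted.
    assert_eq!(add_heading_blanks("# H\n\nBody."), "# H\n\nBody.");
    assert_eq!(add_heading_blanks("## S1\n## S2"), "## S1\n\n## S2");
}
```

The same inputs and outputs appear in the new tests above; the sketch just isolates the new branch of the condition.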
+785 -92
File diff suppressed because it is too large
@@ -402,6 +402,34 @@ impl AgentsApi {
            }
        }

        // Filesystem miss — fall back to CRDT-only path (story exists in the CRDT
        // but has no corresponding .md file on disk).
        if let Some(content) = crate::db::read_content(&story_id.0) {
            let item = crate::pipeline_state::read_typed(&story_id.0)
                .map_err(|e| bad_request(format!("Pipeline read error: {e}")))?;
            let stage = item
                .as_ref()
                .map(|i| match &i.stage {
                    crate::pipeline_state::Stage::Backlog => "backlog",
                    crate::pipeline_state::Stage::Coding => "current",
                    crate::pipeline_state::Stage::Qa => "qa",
                    crate::pipeline_state::Stage::Merge { .. } => "merge",
                    crate::pipeline_state::Stage::Done { .. } => "done",
                    crate::pipeline_state::Stage::Archived { .. } => "archived",
                })
                .unwrap_or("unknown")
                .to_string();
            let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
            let name = metadata.as_ref().and_then(|m| m.name.clone());
            let agent = metadata.and_then(|m| m.agent);
            return Ok(Json(WorkItemContentResponse {
                content,
                stage,
                name,
                agent,
            }));
        }

        Err(not_found(format!("Work item not found: {}", story_id.0)))
    }

@@ -953,6 +981,50 @@ allowed_tools = ["Read", "Bash"]
        assert!(result.is_err());
    }

    #[tokio::test]
    async fn get_work_item_content_falls_back_to_crdt_when_no_file() {
        let tmp = TempDir::new().unwrap();
        let root = tmp.path().to_path_buf();
        // Seed content + CRDT with no .md file on disk.
        crate::db::write_item_with_content(
            "44_story_crdt_only",
            "1_backlog",
            "---\nname: \"CRDT Only\"\n---\n\nCRDT content.",
        );
        let ctx = AppContext::new_test(root);
        let api = AgentsApi { ctx: Arc::new(ctx) };
        let result = api
            .get_work_item_content(Path("44_story_crdt_only".to_string()))
            .await
            .unwrap()
            .0;
        assert!(result.content.contains("CRDT content."));
        assert_eq!(result.stage, "backlog");
        assert_eq!(result.name, Some("CRDT Only".to_string()));
    }

    #[tokio::test]
    async fn get_work_item_content_crdt_fallback_with_current_stage() {
        let tmp = TempDir::new().unwrap();
        let root = tmp.path().to_path_buf();
        // Seed a CRDT-only story in the coding/current stage.
        crate::db::write_item_with_content(
            "45_story_crdt_current",
            "2_current",
            "---\nname: \"Current CRDT\"\n---\n\nIn progress.",
        );
        let ctx = AppContext::new_test(root);
        let api = AgentsApi { ctx: Arc::new(ctx) };
        let result = api
            .get_work_item_content(Path("45_story_crdt_current".to_string()))
            .await
            .unwrap()
            .0;
        assert!(result.content.contains("In progress."));
        assert_eq!(result.stage, "current");
        assert_eq!(result.name, Some("Current CRDT".to_string()));
    }

    #[tokio::test]
    async fn get_work_item_content_returns_error_when_no_project_root() {
        let tmp = TempDir::new().unwrap();
@@ -0,0 +1,55 @@
//! Bot configuration endpoints — GET/PUT for .huskies/bot.toml credentials.
use crate::http::context::{AppContext, OpenApiResult, bad_request};
use poem_openapi::{Object, OpenApi, Tags, payload::Json};
use serde::{Deserialize, Serialize};
use std::sync::Arc;

#[derive(Tags)]
enum BotConfigTags {
    BotConfig,
}

#[derive(Object, Serialize, Deserialize, Default)]
struct BotConfigPayload {
    pub transport: Option<String>,
    pub enabled: Option<bool>,
    pub homeserver: Option<String>,
    pub username: Option<String>,
    pub password: Option<String>,
    pub room_ids: Option<Vec<String>>,
    pub slack_bot_token: Option<String>,
    pub slack_signing_secret: Option<String>,
    pub slack_channel_ids: Option<Vec<String>>,
}

pub struct BotConfigApi {
    pub ctx: Arc<AppContext>,
}

#[OpenApi(tag = "BotConfigTags::BotConfig")]
impl BotConfigApi {
    /// Read current bot credentials from .huskies/bot.toml.
    #[oai(path = "/bot/config", method = "get")]
    async fn get_config(&self) -> OpenApiResult<Json<BotConfigPayload>> {
        let root = self.ctx.state.get_project_root().map_err(bad_request)?;
        let path = root.join(".huskies").join("bot.toml");
        let config: BotConfigPayload = std::fs::read_to_string(&path)
            .ok()
            .and_then(|s| toml::from_str(&s).ok())
            .unwrap_or_default();
        Ok(Json(config))
    }

    /// Persist bot credentials to .huskies/bot.toml.
    #[oai(path = "/bot/config", method = "put")]
    async fn put_config(
        &self,
        payload: Json<BotConfigPayload>,
    ) -> OpenApiResult<Json<BotConfigPayload>> {
        let root = self.ctx.state.get_project_root().map_err(bad_request)?;
        let path = root.join(".huskies").join("bot.toml");
        let content = toml::to_string(&payload.0).map_err(|e| bad_request(e.to_string()))?;
        std::fs::write(&path, content).map_err(|e| bad_request(e.to_string()))?;
        Ok(payload)
    }
}
@@ -230,6 +230,92 @@ pub(super) fn tool_get_agent_config(ctx: &AppContext) -> Result<String, String>
|
||||
.map_err(|e| format!("Serialization error: {e}"))
|
||||
}
|
||||
|
||||
/// Get remaining turns and budget for a running agent.
|
||||
///
|
||||
/// Returns turns used, max turns, remaining turns, budget used, max budget,
|
||||
/// and remaining budget for the named agent. Fails if the agent is not
|
||||
/// currently running or pending.
|
||||
pub(super) fn tool_get_agent_remaining_turns_and_budget(
|
||||
args: &Value,
|
||||
ctx: &AppContext,
|
||||
) -> Result<String, String> {
|
||||
let story_id = args
|
||||
.get("story_id")
|
||||
.and_then(|v| v.as_str())
|
||||
.ok_or("Missing required argument: story_id")?;
|
||||
let agent_name = args
|
||||
.get("agent_name")
|
||||
.and_then(|v| v.as_str())
|
||||
.ok_or("Missing required argument: agent_name")?;
|
||||
|
||||
// Verify the agent exists and is running/pending.
|
||||
let agents = ctx.agents.list_agents()?;
|
||||
let agent_info = agents
|
||||
.iter()
|
||||
.find(|a| a.story_id == story_id && a.agent_name == agent_name)
|
||||
.ok_or_else(|| format!("No agent '{agent_name}' found for story '{story_id}'"))?;
|
||||
|
||||
if !matches!(
|
||||
agent_info.status,
|
||||
crate::agents::AgentStatus::Running | crate::agents::AgentStatus::Pending
|
||||
) {
|
||||
return Err(format!(
|
||||
"Agent '{agent_name}' for story '{story_id}' is not running (status: {})",
|
||||
agent_info.status
|
||||
));
|
||||
}
|
||||
|
||||
let project_root = ctx.agents.get_project_root(&ctx.state)?;
|
||||
let config = ProjectConfig::load(&project_root)?;
|
||||
|
||||
// Find the agent config (max_turns, max_budget_usd).
|
||||
let agent_config = config.agent.iter().find(|a| a.name == agent_name);
|
||||
|
||||
let max_turns = agent_config.and_then(|a| a.max_turns);
|
||||
let max_budget_usd = agent_config.and_then(|a| a.max_budget_usd);
|
||||
|
||||
// Count turns by reading log files and counting assistant events.
|
||||
let log_files =
|
||||
crate::agent_log::list_story_log_files(&project_root, story_id, Some(agent_name));
|
||||
let mut turns_used: u64 = 0;
|
||||
for path in &log_files {
|
||||
if let Ok(entries) = crate::agent_log::read_log(path) {
|
||||
for entry in &entries {
|
||||
if entry.event.get("type").and_then(|v| v.as_str()) == Some("agent_json")
|
||||
&& let Some(data) = entry.event.get("data")
|
||||
&& data.get("type").and_then(|v| v.as_str()) == Some("assistant")
|
||||
{
|
||||
turns_used += 1;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Compute budget used from completed-session token usage records.
|
||||
let all_records = crate::agents::token_usage::read_all(&project_root).unwrap_or_default();
|
||||
let budget_used_usd: f64 = all_records
|
||||
.iter()
|
||||
.filter(|r| r.story_id == story_id && r.agent_name == agent_name)
|
||||
.map(|r| r.usage.total_cost_usd)
|
||||
.sum();
|
||||
|
||||
let remaining_turns = max_turns.map(|max| (max as i64) - (turns_used as i64));
|
||||
let remaining_budget_usd = max_budget_usd.map(|max| max - budget_used_usd);
|
||||
|
||||
serde_json::to_string_pretty(&json!({
|
||||
"story_id": story_id,
|
||||
"agent_name": agent_name,
|
||||
"status": agent_info.status.to_string(),
|
||||
"turns_used": turns_used,
|
||||
"max_turns": max_turns,
|
||||
"remaining_turns": remaining_turns,
|
||||
"budget_used_usd": budget_used_usd,
|
||||
"max_budget_usd": max_budget_usd,
|
||||
"remaining_budget_usd": remaining_budget_usd,
|
||||
}))
|
||||
.map_err(|e| format!("Serialization error: {e}"))
|
||||
}
|
||||
|
||||
pub(super) async fn tool_wait_for_agent(args: &Value, ctx: &AppContext) -> Result<String, String> {
|
||||
let story_id = args
|
||||
.get("story_id")
|
||||
@@ -840,4 +926,112 @@ stage = "coder"
|
||||
let pct = read_coverage_percent_from_json(tmp.path());
|
||||
assert!(pct.is_none());
|
||||
}
|
||||
|
||||
// ── get_agent_remaining_turns_and_budget tests ──────────────────────────
|
||||
|
||||
#[test]
|
||||
fn tool_get_agent_remaining_turns_and_budget_missing_story_id() {
|
||||
let tmp = tempfile::tempdir().unwrap();
|
||||
let ctx = test_ctx(tmp.path());
|
||||
let result =
|
||||
tool_get_agent_remaining_turns_and_budget(&json!({"agent_name": "coder-1"}), &ctx);
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("story_id"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn tool_get_agent_remaining_turns_and_budget_missing_agent_name() {
|
||||
let tmp = tempfile::tempdir().unwrap();
|
||||
let ctx = test_ctx(tmp.path());
|
||||
let result =
|
||||
tool_get_agent_remaining_turns_and_budget(&json!({"story_id": "1_test"}), &ctx);
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("agent_name"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn tool_get_agent_remaining_turns_and_budget_no_agent_returns_error() {
|
||||
let tmp = tempfile::tempdir().unwrap();
|
||||
let ctx = test_ctx(tmp.path());
|
||||
let result = tool_get_agent_remaining_turns_and_budget(
|
||||
&json!({"story_id": "99_nope", "agent_name": "coder-1"}),
|
||||
&ctx,
|
||||
);
|
||||
assert!(result.is_err());
|
||||
let err = result.unwrap_err();
|
||||
assert!(
|
||||
err.contains("No agent"),
|
||||
"expected 'No agent' error, got: {err}"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn tool_get_agent_remaining_turns_and_budget_completed_agent_returns_error() {
|
||||
use crate::agents::AgentStatus;
|
||||
let tmp = tempfile::tempdir().unwrap();
|
||||
let ctx = test_ctx(tmp.path());
|
||||
ctx.agents
|
||||
.inject_test_agent("42_story", "coder-1", AgentStatus::Completed);
|
||||
|
||||
let result = tool_get_agent_remaining_turns_and_budget(
|
||||
&json!({"story_id": "42_story", "agent_name": "coder-1"}),
|
||||
&ctx,
|
||||
);
|
||||
assert!(result.is_err());
|
||||
let err = result.unwrap_err();
|
||||
assert!(
|
||||
err.contains("not running"),
|
||||
"expected 'not running' error, got: {err}"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn tool_get_agent_remaining_turns_and_budget_running_agent_returns_data() {
|
||||
use crate::agents::AgentStatus;
|
||||
use crate::store::StoreOps;
|
||||
|
||||
let tmp = tempfile::tempdir().unwrap();
|
||||
let ctx = test_ctx(tmp.path());
|
||||
ctx.store
|
||||
.set("project_root", json!(tmp.path().to_string_lossy().as_ref()));
|
||||
ctx.agents
|
||||
.inject_test_agent("42_story", "coder-1", AgentStatus::Running);
|
||||
|
||||
let result = tool_get_agent_remaining_turns_and_budget(
|
||||
&json!({"story_id": "42_story", "agent_name": "coder-1"}),
|
||||
&ctx,
|
||||
)
|
||||
.unwrap();
|
||||
let parsed: Value = serde_json::from_str(&result).unwrap();
|
||||
|
||||
assert_eq!(parsed["story_id"], "42_story");
|
||||
assert_eq!(parsed["agent_name"], "coder-1");
|
||||
assert_eq!(parsed["status"], "running");
|
||||
assert!(parsed.get("turns_used").is_some());
|
||||
assert!(parsed.get("budget_used_usd").is_some());
|
||||
// max_turns and max_budget_usd may be null if not configured
|
||||
assert!(parsed.get("max_turns").is_some());
|
||||
assert!(parsed.get("remaining_turns").is_some());
|
||||
assert!(parsed.get("max_budget_usd").is_some());
|
||||
assert!(parsed.get("remaining_budget_usd").is_some());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn tool_get_agent_remaining_turns_and_budget_in_tools_list() {
|
||||
use super::super::handle_tools_list;
|
||||
let resp = handle_tools_list(Some(json!(1)));
|
||||
let tools = resp.result.unwrap()["tools"].as_array().unwrap().clone();
|
||||
let tool = tools
|
||||
.iter()
|
||||
.find(|t| t["name"] == "get_agent_remaining_turns_and_budget");
|
||||
assert!(
|
||||
tool.is_some(),
|
||||
"get_agent_remaining_turns_and_budget missing from tools list"
|
||||
);
|
||||
let t = tool.unwrap();
|
||||
let required = t["inputSchema"]["required"].as_array().unwrap();
|
||||
let req_names: Vec<&str> = required.iter().map(|v| v.as_str().unwrap()).collect();
|
||||
assert!(req_names.contains(&"story_id"));
|
||||
assert!(req_names.contains(&"agent_name"));
|
||||
}
|
||||
}
|
||||
|
||||
@@ -349,13 +349,14 @@ pub(super) fn tool_dump_crdt(args: &Value) -> Result<String, String> {
|
||||
.map_err(|e| format!("Serialization error: {e}"))
|
||||
}
|
||||
|
||||
/// MCP tool: return the server version and build hash.
|
||||
pub(super) fn tool_get_version() -> Result<String, String> {
|
||||
/// MCP tool: return the server version, build hash, and running port.
|
||||
pub(super) fn tool_get_version(ctx: &AppContext) -> Result<String, String> {
|
||||
let build_hash =
|
||||
std::fs::read_to_string(".huskies/build_hash").unwrap_or_else(|_| "unknown".to_string());
|
||||
serde_json::to_string_pretty(&json!({
|
||||
"version": env!("CARGO_PKG_VERSION"),
|
||||
"build_hash": build_hash.trim(),
|
||||
"port": ctx.agents.port(),
|
||||
}))
|
||||
.map_err(|e| format!("Serialization error: {e}"))
|
||||
}
|
||||
|
||||
@@ -15,6 +15,23 @@ pub(super) async fn tool_merge_agent_work(
|
||||
.and_then(|v| v.as_str())
|
||||
.ok_or("Missing required argument: story_id")?;
|
||||
|
||||
// Check CRDT stage before attempting merge — if already done or archived,
|
||||
// return success immediately to avoid spurious error notifications.
|
||||
if let Some(item) = crate::crdt_state::read_item(story_id)
|
||||
&& (item.stage == "5_done" || item.stage == "6_archived")
|
||||
{
|
||||
return serde_json::to_string_pretty(&json!({
|
||||
"story_id": story_id,
|
||||
"status": "completed",
|
||||
"success": true,
|
||||
"message": format!(
|
||||
"Story '{}' is already in '{}' — no merge needed.",
|
||||
story_id, item.stage
|
||||
),
|
||||
}))
|
||||
.map_err(|e| format!("Serialization error: {e}"));
|
||||
}
|
||||
|
||||
let project_root = ctx.agents.get_project_root(&ctx.state)?;
|
||||
ctx.agents.start_merge_agent_work(&project_root, story_id)?;
|
||||
|
||||
@@ -258,6 +275,60 @@ mod tests {
        assert!(result.unwrap_err().contains("story_id"));
    }

    #[tokio::test]
    async fn tool_merge_agent_work_already_done_returns_success() {
        crate::crdt_state::init_for_test();
        crate::crdt_state::write_item(
            "99_story_already_done",
            "5_done",
            Some("Already done story"),
            None,
            None,
            None,
            None,
            None,
            None,
            None,
        );
        let tmp = tempfile::tempdir().unwrap();
        let ctx = test_ctx(tmp.path());
        let result =
            tool_merge_agent_work(&json!({"story_id": "99_story_already_done"}), &ctx).await;
        assert!(result.is_ok(), "expected Ok, got: {result:?}");
        let body = result.unwrap();
        let v: serde_json::Value = serde_json::from_str(&body).unwrap();
        assert_eq!(v["status"], "completed");
        assert_eq!(v["success"], true);
        assert!(v["message"].as_str().unwrap().contains("5_done"));
    }

    #[tokio::test]
    async fn tool_merge_agent_work_already_archived_returns_success() {
        crate::crdt_state::init_for_test();
        crate::crdt_state::write_item(
            "98_story_already_archived",
            "6_archived",
            Some("Already archived story"),
            None,
            None,
            None,
            None,
            None,
            None,
            None,
        );
        let tmp = tempfile::tempdir().unwrap();
        let ctx = test_ctx(tmp.path());
        let result =
            tool_merge_agent_work(&json!({"story_id": "98_story_already_archived"}), &ctx).await;
        assert!(result.is_ok(), "expected Ok, got: {result:?}");
        let body = result.unwrap();
        let v: serde_json::Value = serde_json::from_str(&body).unwrap();
        assert_eq!(v["status"], "completed");
        assert_eq!(v["success"], true);
        assert!(v["message"].as_str().unwrap().contains("6_archived"));
    }

    #[tokio::test]
    async fn tool_move_story_to_merge_missing_story_id() {
        let tmp = tempfile::tempdir().unwrap();

@@ -431,6 +431,24 @@ fn handle_tools_list(id: Option<Value>) -> JsonRpcResponse {
                "required": ["story_id", "agent_name"]
            }
        },
        {
            "name": "get_agent_remaining_turns_and_budget",
            "description": "Get remaining turns and budget for a running agent. Returns turns used, max turns, remaining turns, budget used (from completed sessions), max budget, and remaining budget. Only works for agents in running or pending state.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "story_id": {
                        "type": "string",
                        "description": "Story identifier (e.g. '42_story_my_feature')"
                    },
                    "agent_name": {
                        "type": "string",
                        "description": "Agent name (e.g. 'coder-1', 'mergemaster', 'qa')"
                    }
                },
                "required": ["story_id", "agent_name"]
            }
        },
        {
            "name": "create_worktree",
            "description": "Create a git worktree for a story under .huskies/worktrees/{story_id} with deterministic naming. Writes .mcp.json and runs component setup. Returns the worktree path.",
@@ -513,6 +531,28 @@ fn handle_tools_list(id: Option<Value>) -> JsonRpcResponse {
                "required": ["story_id", "criterion_index"]
            }
        },
        {
            "name": "edit_criterion",
            "description": "Update the text of an existing acceptance criterion in place, preserving its checked/unchecked state. Uses a 0-based index counting all criteria (both checked and unchecked).",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "story_id": {
                        "type": "string",
                        "description": "Story identifier (filename stem, e.g. '28_my_story')"
                    },
                    "criterion_index": {
                        "type": "integer",
                        "description": "0-based index of the criterion to edit (counts all criteria)"
                    },
                    "new_text": {
                        "type": "string",
                        "description": "New text for the criterion (without the '- [ ] ' or '- [x] ' prefix)"
                    }
                },
                "required": ["story_id", "criterion_index", "new_text"]
            }
        },
        {
            "name": "add_criterion",
            "description": "Add an acceptance criterion to an existing story file. Appends '- [ ] {criterion}' after the last existing criterion in the '## Acceptance Criteria' section. Auto-commits via the filesystem watcher.",
@@ -531,6 +571,24 @@ fn handle_tools_list(id: Option<Value>) -> JsonRpcResponse {
                "required": ["story_id", "criterion"]
            }
        },
        {
            "name": "remove_criterion",
            "description": "Remove an acceptance criterion from a story by its 0-based index (counting all criteria, both checked and unchecked). Returns an error if the index is out of range. Auto-commits via the filesystem watcher.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "story_id": {
                        "type": "string",
                        "description": "Story identifier (filename stem, e.g. '28_my_story')"
                    },
                    "criterion_index": {
                        "type": "integer",
                        "description": "0-based index of the criterion to remove (counts all criteria)"
                    }
                },
                "required": ["story_id", "criterion_index"]
            }
        },
        {
            "name": "update_story",
            "description": "Update an existing story file. Can replace the '## User Story' and/or '## Description' section content, and/or set YAML front matter fields (e.g. agent, qa). Auto-commits via the filesystem watcher.",
@@ -839,7 +897,7 @@ fn handle_tools_list(id: Option<Value>) -> JsonRpcResponse {
        },
        {
            "name": "get_version",
            "description": "Return the server version and build hash.",
            "description": "Return the server version, build hash, and running port.",
            "inputSchema": {
                "type": "object",
                "properties": {}
@@ -1232,6 +1290,9 @@ async fn handle_tools_call(id: Option<Value>, params: &Value, ctx: &AppContext)
        "reload_agent_config" => agent_tools::tool_get_agent_config(ctx),
        "get_agent_output" => agent_tools::tool_get_agent_output(&args, ctx).await,
        "wait_for_agent" => agent_tools::tool_wait_for_agent(&args, ctx).await,
        "get_agent_remaining_turns_and_budget" => {
            agent_tools::tool_get_agent_remaining_turns_and_budget(&args, ctx)
        }
        // Worktree tools
        "create_worktree" => agent_tools::tool_create_worktree(&args, ctx).await,
        "list_worktrees" => agent_tools::tool_list_worktrees(ctx),
@@ -1242,7 +1303,9 @@ async fn handle_tools_call(id: Option<Value>, params: &Value, ctx: &AppContext)
        "accept_story" => story_tools::tool_accept_story(&args, ctx),
        // Story mutation tools (auto-commit to master)
        "check_criterion" => story_tools::tool_check_criterion(&args, ctx),
        "edit_criterion" => story_tools::tool_edit_criterion(&args, ctx),
        "add_criterion" => story_tools::tool_add_criterion(&args, ctx),
        "remove_criterion" => story_tools::tool_remove_criterion(&args, ctx),
        "update_story" => story_tools::tool_update_story(&args, ctx),
        // Spike lifecycle tools
        "create_spike" => story_tools::tool_create_spike(&args, ctx),
@@ -1267,7 +1330,7 @@ async fn handle_tools_call(id: Option<Value>, params: &Value, ctx: &AppContext)
        "get_pipeline_status" => story_tools::tool_get_pipeline_status(ctx),
        // Diagnostics
        "get_server_logs" => diagnostics::tool_get_server_logs(&args),
        "get_version" => diagnostics::tool_get_version(),
        "get_version" => diagnostics::tool_get_version(ctx),
        // Server lifecycle
        "rebuild_and_restart" => diagnostics::tool_rebuild_and_restart(ctx).await,
        // Permission bridge (Claude Code → frontend dialog)
@@ -1381,6 +1444,7 @@ mod tests {
        assert!(names.contains(&"reload_agent_config"));
        assert!(names.contains(&"get_agent_output"));
        assert!(names.contains(&"wait_for_agent"));
        assert!(names.contains(&"get_agent_remaining_turns_and_budget"));
        assert!(names.contains(&"create_worktree"));
        assert!(names.contains(&"list_worktrees"));
        assert!(names.contains(&"remove_worktree"));
@@ -1426,7 +1490,8 @@ mod tests {
        assert!(names.contains(&"loc_file"));
        assert!(names.contains(&"dump_crdt"));
        assert!(names.contains(&"get_version"));
        assert_eq!(tools.len(), 63);
        assert!(names.contains(&"remove_criterion"));
        assert_eq!(tools.len(), 66);
    }

    #[test]

@@ -5,8 +5,9 @@ use crate::agents::{
use crate::http::context::AppContext;
use crate::http::workflow::{
    add_criterion_to_file, check_criterion_in_file, create_bug_file, create_refactor_file,
    create_spike_file, create_story_file, list_bug_files, list_refactor_files, load_pipeline_state,
    load_upcoming_stories, update_story_in_file, validate_story_dirs,
    create_spike_file, create_story_file, edit_criterion_in_file, list_bug_files,
    list_refactor_files, load_pipeline_state, load_upcoming_stories, remove_criterion_from_file,
    update_story_in_file, validate_story_dirs,
};
use crate::io::story_metadata::{
    check_archived_deps, check_archived_deps_from_list, parse_front_matter, parse_unchecked_todos,
@@ -331,6 +332,28 @@ pub(super) fn tool_check_criterion(args: &Value, ctx: &AppContext) -> Result<Str
    ))
}

pub(super) fn tool_edit_criterion(args: &Value, ctx: &AppContext) -> Result<String, String> {
    let story_id = args
        .get("story_id")
        .and_then(|v| v.as_str())
        .ok_or("Missing required argument: story_id")?;
    let criterion_index = args
        .get("criterion_index")
        .and_then(|v| v.as_u64())
        .ok_or("Missing required argument: criterion_index")? as usize;
    let new_text = args
        .get("new_text")
        .and_then(|v| v.as_str())
        .ok_or("Missing required argument: new_text")?;

    let root = ctx.state.get_project_root()?;
    edit_criterion_in_file(&root, story_id, criterion_index, new_text)?;

    Ok(format!(
        "Criterion {criterion_index} updated for story '{story_id}'."
    ))
}

pub(super) fn tool_add_criterion(args: &Value, ctx: &AppContext) -> Result<String, String> {
    let story_id = args
        .get("story_id")
@@ -349,6 +372,24 @@ pub(super) fn tool_add_criterion(args: &Value, ctx: &AppContext) -> Result<Strin
    ))
}

pub(super) fn tool_remove_criterion(args: &Value, ctx: &AppContext) -> Result<String, String> {
    let story_id = args
        .get("story_id")
        .and_then(|v| v.as_str())
        .ok_or("Missing required argument: story_id")?;
    let criterion_index = args
        .get("criterion_index")
        .and_then(|v| v.as_u64())
        .ok_or("Missing required argument: criterion_index")? as usize;

    let root = ctx.state.get_project_root()?;
    remove_criterion_from_file(&root, story_id, criterion_index)?;

    Ok(format!(
        "Removed criterion {criterion_index} from story '{story_id}'."
    ))
}

pub(super) fn tool_update_story(args: &Value, ctx: &AppContext) -> Result<String, String> {
    let story_id = args
        .get("story_id")
@@ -1722,6 +1763,66 @@ mod tests {
        assert!(result.unwrap().contains("Criterion 0 checked"));
    }

    #[test]
    fn tool_remove_criterion_missing_story_id() {
        let tmp = tempfile::tempdir().unwrap();
        let ctx = test_ctx(tmp.path());
        let result = tool_remove_criterion(&json!({"criterion_index": 0}), &ctx);
        assert!(result.is_err());
        assert!(result.unwrap_err().contains("story_id"));
    }

    #[test]
    fn tool_remove_criterion_missing_criterion_index() {
        let tmp = tempfile::tempdir().unwrap();
        let ctx = test_ctx(tmp.path());
        let result = tool_remove_criterion(&json!({"story_id": "1_test"}), &ctx);
        assert!(result.is_err());
        assert!(result.unwrap_err().contains("criterion_index"));
    }

    #[test]
    fn tool_remove_criterion_removes_item() {
        let tmp = tempfile::tempdir().unwrap();
        setup_git_repo_in(tmp.path());

        crate::db::ensure_content_store();
        crate::db::write_item_with_content(
            "9905_test",
            "2_current",
            "---\nname: Test\n---\n## Acceptance Criteria\n- [ ] Keep me\n- [ ] Remove me\n",
        );

        let ctx = test_ctx(tmp.path());
        let result = tool_remove_criterion(
            &json!({"story_id": "9905_test", "criterion_index": 1}),
            &ctx,
        );
        assert!(result.is_ok(), "Expected ok: {result:?}");
        assert!(result.unwrap().contains("Removed criterion 1"));
    }

    #[test]
    fn tool_remove_criterion_out_of_range() {
        let tmp = tempfile::tempdir().unwrap();
        setup_git_repo_in(tmp.path());

        crate::db::ensure_content_store();
        crate::db::write_item_with_content(
            "9906_test",
            "2_current",
            "---\nname: Test\n---\n## Acceptance Criteria\n- [ ] Only one\n",
        );

        let ctx = test_ctx(tmp.path());
        let result = tool_remove_criterion(
            &json!({"story_id": "9906_test", "criterion_index": 5}),
            &ctx,
        );
        assert!(result.is_err());
        assert!(result.unwrap_err().contains("out of range"));
    }

    /// Regression test for bug 514: deleting a story must cancel its pending
    /// rate-limit retry timer so the tick loop cannot re-spawn an agent.
    ///

@@ -43,6 +43,8 @@ pub(crate) fn step_output_path(
                .join("STACK.md"),
        ),
        WizardStep::TestScript => Some(project_root.join("script").join("test")),
        WizardStep::BuildScript => Some(project_root.join("script").join("build")),
        WizardStep::LintScript => Some(project_root.join("script").join("lint")),
        WizardStep::ReleaseScript => Some(project_root.join("script").join("release")),
        WizardStep::TestCoverage => Some(project_root.join("script").join("test_coverage")),
        WizardStep::Scaffold => None,
@@ -52,22 +54,35 @@
pub(crate) fn is_script_step(step: WizardStep) -> bool {
    matches!(
        step,
        WizardStep::TestScript | WizardStep::ReleaseScript | WizardStep::TestCoverage
        WizardStep::TestScript
            | WizardStep::BuildScript
            | WizardStep::LintScript
            | WizardStep::ReleaseScript
            | WizardStep::TestCoverage
    )
}

/// Write `content` to `path` only when the file does not already exist.
/// Write `content` to `path`, skipping if the file already exists with real
/// (non-template) content.
///
/// Existing files (including `CLAUDE.md`) are never overwritten — the wizard
/// appends or skips per the acceptance criteria. For script steps the file is
/// also made executable after writing.
/// Scaffold template files (those containing [`TEMPLATE_SENTINEL`]) are treated
/// as placeholders and will be overwritten with the wizard-generated content.
/// Files with real user content are never overwritten. For script steps the
/// file is also made executable after writing.
pub(crate) fn write_if_missing(
    path: &Path,
    content: &str,
    executable: bool,
) -> Result<bool, String> {
    use crate::io::onboarding::TEMPLATE_SENTINEL;
    if path.exists() {
        return Ok(false); // already present — skip silently
        // Overwrite scaffold template placeholders; preserve real user content.
        let is_template = std::fs::read_to_string(path)
            .map(|s| s.contains(TEMPLATE_SENTINEL))
            .unwrap_or(false);
        if !is_template {
            return Ok(false); // real content already present — skip
        }
    }
    if let Some(parent) = path.parent() {
        fs::create_dir_all(parent)
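The new overwrite rule in `write_if_missing` boils down to a pure decision: write when the file is absent, or when it still holds the scaffold placeholder. A minimal sketch, assuming the sentinel is the `huskies:scaffold-template` marker seen in the wizard test fixture in this diff (`should_write` is an illustrative name, not the crate's API):

```rust
// Sketch of the write_if_missing decision: overwrite scaffold placeholders,
// preserve real user content. Sentinel value inferred from the test fixture.
const TEMPLATE_SENTINEL: &str = "huskies:scaffold-template";

fn should_write(existing: Option<&str>) -> bool {
    match existing {
        None => true, // file absent — write it
        // Only template placeholders may be overwritten.
        Some(content) => content.contains(TEMPLATE_SENTINEL),
    }
}

fn main() {
    assert!(should_write(None));
    assert!(should_write(Some("<!-- huskies:scaffold-template -->\n# TODO")));
    assert!(!should_write(Some("original content")));
    println!("ok");
}
```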
@@ -247,6 +262,90 @@ pub(crate) fn generation_hint(step: WizardStep, project_root: &Path) -> String {
                }
            }
        }
        WizardStep::BuildScript => {
            if bare {
                "This is a bare project with no existing code. Read the STACK.md generated \
                in the previous step (or ask the user about their stack if it was skipped) \
                and generate a `script/build` shell script (#!/usr/bin/env bash, set -euo pipefail) \
                with appropriate build commands for their chosen language and framework."
                    .to_string()
            } else {
                let has_cargo = project_root.join("Cargo.toml").exists();
                let has_pkg = project_root.join("package.json").exists();
                let has_pnpm = project_root.join("pnpm-lock.yaml").exists();
                let has_frontend_subdir =
                    project_root.join("frontend").join("package.json").exists()
                        || project_root.join("client").join("package.json").exists();
                let has_go = project_root.join("go.mod").exists();
                let mut cmds = Vec::new();
                if has_cargo {
                    cmds.push("cargo build --release");
                }
                if has_pkg {
                    cmds.push(if has_pnpm {
                        "pnpm run build"
                    } else {
                        "npm run build"
                    });
                }
                if has_frontend_subdir {
                    cmds.push("(cd frontend && npm run build)");
                }
                if has_go {
                    cmds.push("go build ./...");
                }
                if cmds.is_empty() {
                    "Generate a `script/build` shell script (#!/usr/bin/env bash, set -euo pipefail) that builds the project.".to_string()
                } else {
                    format!(
                        "Generate a `script/build` shell script (#!/usr/bin/env bash, set -euo pipefail) that runs: {}",
                        cmds.join(", ")
                    )
                }
            }
        }
        WizardStep::LintScript => {
            if bare {
                "This is a bare project with no existing code. Read the STACK.md generated \
                in the previous step (or ask the user about their stack if it was skipped) \
                and generate a `script/lint` shell script (#!/usr/bin/env bash, set -euo pipefail) \
                with appropriate lint commands for their chosen language and framework."
                    .to_string()
            } else {
                let has_cargo = project_root.join("Cargo.toml").exists();
                let has_pkg = project_root.join("package.json").exists();
                let has_pnpm = project_root.join("pnpm-lock.yaml").exists();
                let has_python = project_root.join("pyproject.toml").exists()
                    || project_root.join("requirements.txt").exists();
                let has_go = project_root.join("go.mod").exists();
                let mut cmds = Vec::new();
                if has_cargo {
                    cmds.push("cargo fmt --all --check");
                    cmds.push("cargo clippy -- -D warnings");
                }
                if has_pkg {
                    cmds.push(if has_pnpm {
                        "pnpm run lint"
                    } else {
                        "npm run lint"
                    });
                }
                if has_python {
                    cmds.push("flake8 . (or ruff check . if ruff is configured)");
                }
                if has_go {
                    cmds.push("go vet ./...");
                }
                if cmds.is_empty() {
                    "Generate a `script/lint` shell script (#!/usr/bin/env bash, set -euo pipefail) that runs the project's linters.".to_string()
                } else {
                    format!(
                        "Generate a `script/lint` shell script (#!/usr/bin/env bash, set -euo pipefail) that runs: {}",
                        cmds.join(", ")
                    )
                }
            }
        }
        WizardStep::ReleaseScript => {
            if bare {
                "This is a bare project with no existing code. Read the STACK.md generated \
@@ -473,13 +572,13 @@ mod tests {
    fn wizard_confirm_does_not_overwrite_existing_file() {
        let dir = TempDir::new().unwrap();
        let ctx = setup(&dir);
        // Pre-create the specs directory and file.
        // Pre-create the specs directory and file with real (non-template) content.
        let specs_dir = dir.path().join(".huskies").join("specs");
        std::fs::create_dir_all(&specs_dir).unwrap();
        let context_path = specs_dir.join("00_CONTEXT.md");
        std::fs::write(&context_path, "original content").unwrap();

        // Stage and confirm — existing file should NOT be overwritten.
        // Stage and confirm — existing real file should NOT be overwritten.
        tool_wizard_generate(&serde_json::json!({"content": "new content"}), &ctx).unwrap();
        let result = tool_wizard_confirm(&ctx).unwrap();
        assert!(result.contains("already exists"));
@@ -489,6 +588,34 @@ mod tests {
        );
    }

    #[test]
    fn wizard_confirm_overwrites_scaffold_template_file() {
        let dir = TempDir::new().unwrap();
        let ctx = setup(&dir);
        // Pre-create the file with scaffold template placeholder content.
        let specs_dir = dir.path().join(".huskies").join("specs");
        std::fs::create_dir_all(&specs_dir).unwrap();
        let context_path = specs_dir.join("00_CONTEXT.md");
        std::fs::write(
            &context_path,
            "<!-- huskies:scaffold-template -->\n# Project Context\n\nTODO: Describe...",
        )
        .unwrap();

        // Stage and confirm — template placeholder should be overwritten with generated content.
        tool_wizard_generate(
            &serde_json::json!({"content": "# My Real Project\n\nThis is a real project."}),
            &ctx,
        )
        .unwrap();
        let result = tool_wizard_confirm(&ctx).unwrap();
        assert!(result.contains("confirmed"));
        assert_eq!(
            std::fs::read_to_string(&context_path).unwrap(),
            "# My Real Project\n\nThis is a real project."
        );
    }

    #[test]
    fn wizard_skip_advances_wizard() {
        let dir = TempDir::new().unwrap();
@@ -517,8 +644,8 @@
    fn wizard_complete_returns_done_message() {
        let dir = TempDir::new().unwrap();
        let ctx = setup(&dir);
        // Skip all remaining steps.
        for _ in 0..5 {
        // Skip all remaining steps (scaffold is pre-confirmed, so 7 remaining).
        for _ in 0..7 {
            tool_wizard_skip(&ctx).unwrap();
        }
        let result = tool_wizard_status(&ctx).unwrap();
@@ -629,4 +756,61 @@
        assert!(hint.contains("cargo nextest"));
        assert!(!hint.contains("bare project"));
    }

    #[test]
    fn generation_hint_bare_build_script_references_stack() {
        let dir = TempDir::new().unwrap();
        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
        let hint = generation_hint(WizardStep::BuildScript, dir.path());
        assert!(hint.contains("bare project"));
        assert!(hint.contains("STACK.md"));
    }

    #[test]
    fn generation_hint_bare_lint_script_references_stack() {
        let dir = TempDir::new().unwrap();
        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
        let hint = generation_hint(WizardStep::LintScript, dir.path());
        assert!(hint.contains("bare project"));
        assert!(hint.contains("STACK.md"));
    }

    #[test]
    fn generation_hint_existing_project_build_script_detects_cargo() {
        let dir = TempDir::new().unwrap();
        std::fs::write(dir.path().join("Cargo.toml"), "[package]").unwrap();
        let hint = generation_hint(WizardStep::BuildScript, dir.path());
        assert!(hint.contains("cargo build --release"));
        assert!(!hint.contains("bare project"));
    }

    #[test]
    fn generation_hint_existing_project_lint_script_detects_cargo() {
        let dir = TempDir::new().unwrap();
        std::fs::write(dir.path().join("Cargo.toml"), "[package]").unwrap();
        let hint = generation_hint(WizardStep::LintScript, dir.path());
        assert!(hint.contains("cargo fmt --all --check"));
        assert!(hint.contains("cargo clippy -- -D warnings"));
        assert!(!hint.contains("bare project"));
    }

    #[test]
    fn step_output_path_build_script_returns_script_build() {
        let dir = TempDir::new().unwrap();
        let path = step_output_path(dir.path(), WizardStep::BuildScript).unwrap();
        assert!(path.ends_with("script/build"));
    }

    #[test]
    fn step_output_path_lint_script_returns_script_lint() {
        let dir = TempDir::new().unwrap();
        let path = step_output_path(dir.path(), WizardStep::LintScript).unwrap();
        assert!(path.ends_with("script/lint"));
    }

    #[test]
    fn is_script_step_includes_build_and_lint() {
        assert!(is_script_step(WizardStep::BuildScript));
        assert!(is_script_step(WizardStep::LintScript));
    }
}

@@ -4,6 +4,7 @@ pub mod agents_sse;
pub mod anthropic;
pub mod assets;
pub mod bot_command;
pub mod bot_config;
pub mod chat;
pub mod context;
pub mod health;

@@ -23,6 +24,7 @@ pub mod ws;
use agents::AgentsApi;
use anthropic::AnthropicApi;
use bot_command::BotCommandApi;
use bot_config::BotConfigApi;
use chat::ChatApi;
use context::AppContext;
use health::HealthApi;

@@ -196,6 +198,7 @@ type ApiTuple = (
    HealthApi,
    BotCommandApi,
    wizard::WizardApi,
    BotConfigApi,
);

type ApiService = OpenApiService<ApiTuple, ()>;

@@ -213,6 +216,7 @@ pub fn build_openapi_service(ctx: Arc<AppContext>) -> (ApiService, ApiService) {
        HealthApi,
        BotCommandApi { ctx: ctx.clone() },
        wizard::WizardApi { ctx: ctx.clone() },
        BotConfigApi { ctx: ctx.clone() },
    );

    let api_service =

@@ -228,7 +232,8 @@ pub fn build_openapi_service(ctx: Arc<AppContext>) -> (ApiService, ApiService) {
        SettingsApi { ctx: ctx.clone() },
        HealthApi,
        BotCommandApi { ctx: ctx.clone() },
        wizard::WizardApi { ctx },
        wizard::WizardApi { ctx: ctx.clone() },
        BotConfigApi { ctx },
    );

    let docs_service =

+411
-1
@@ -1,13 +1,181 @@
//! HTTP settings endpoints — REST API for user preferences and editor configuration.
use crate::config::ProjectConfig;
use crate::http::context::{AppContext, OpenApiResult, bad_request};
use crate::store::StoreOps;
use poem_openapi::{Object, OpenApi, Tags, param::Query, payload::Json};
use serde::Serialize;
use serde::{Deserialize, Serialize};
use serde_json::json;
use std::path::Path;
use std::sync::Arc;

const EDITOR_COMMAND_KEY: &str = "editor_command";

/// Project-level settings exposed via `GET /api/settings` and `PUT /api/settings`.
///
/// Only contains the scalar fields of `ProjectConfig` — array sections
/// (`[[component]]`, `[[agent]]`, `[watcher]`) are preserved in the TOML file
/// and are not editable through this API.
#[derive(Debug, Object, Serialize, Deserialize)]
struct ProjectSettings {
    /// Project-wide default QA mode: "server", "agent", or "human". Default: "server".
    default_qa: String,
    /// Default model for coder-stage agents (e.g. "sonnet"). When set, only agents whose
    /// model matches this value are used for auto-assignment.
    default_coder_model: Option<String>,
    /// Maximum number of concurrent coder-stage agents. When set, stories wait in
    /// 2_current/ until a slot is free.
    max_coders: Option<u32>,
    /// Maximum retries per story per pipeline stage before marking as blocked. Default: 2.
    max_retries: u32,
    /// Optional base branch name (e.g. "main", "master"). Overrides auto-detection.
    base_branch: Option<String>,
    /// Whether to send RateLimitWarning chat notifications. Default: true.
    rate_limit_notifications: bool,
    /// IANA timezone name (e.g. "Europe/London"). Timer inputs are interpreted in this tz.
    timezone: Option<String>,
    /// WebSocket URL of a remote huskies node to sync CRDT state with.
    rendezvous: Option<String>,
    /// How often (seconds) to check 5_done/ for items to archive. Default: 60.
    watcher_sweep_interval_secs: u64,
    /// How long (seconds) an item must remain in 5_done/ before archiving. Default: 14400.
    watcher_done_retention_secs: u64,
}

/// Load `ProjectSettings` from `ProjectConfig`.
fn settings_from_config(cfg: &ProjectConfig) -> ProjectSettings {
    ProjectSettings {
        default_qa: cfg.default_qa.clone(),
        default_coder_model: cfg.default_coder_model.clone(),
        max_coders: cfg.max_coders.map(|v| v as u32),
        max_retries: cfg.max_retries,
        base_branch: cfg.base_branch.clone(),
        rate_limit_notifications: cfg.rate_limit_notifications,
        timezone: cfg.timezone.clone(),
        rendezvous: cfg.rendezvous.clone(),
        watcher_sweep_interval_secs: cfg.watcher.sweep_interval_secs,
        watcher_done_retention_secs: cfg.watcher.done_retention_secs,
    }
}

/// Validate the incoming `ProjectSettings` before writing.
fn validate_project_settings(s: &ProjectSettings) -> Result<(), String> {
    match s.default_qa.as_str() {
        "server" | "agent" | "human" => {}
        other => {
            return Err(format!(
                "Invalid default_qa value '{other}'. Must be one of: server, agent, human"
            ));
        }
    }
    Ok(())
}

/// Write only the scalar settings from `s` into the project.toml at the given root.
/// Array sections (`[[component]]`, `[[agent]]`) are preserved unchanged.
fn write_project_settings(project_root: &Path, s: &ProjectSettings) -> Result<(), String> {
    let config_path = project_root.join(".huskies/project.toml");

    let content = if config_path.exists() {
        std::fs::read_to_string(&config_path).map_err(|e| format!("Read config: {e}"))?
    } else {
        String::new()
    };

    let mut val: toml::Value = if content.trim().is_empty() {
        toml::Value::Table(toml::map::Map::new())
    } else {
        toml::from_str(&content).map_err(|e| format!("Parse config: {e}"))?
    };

    let table = val
        .as_table_mut()
        .ok_or_else(|| "Config is not a TOML table".to_string())?;

    // Scalar root fields
    table.insert(
        "default_qa".to_string(),
        toml::Value::String(s.default_qa.clone()),
    );
    table.insert(
        "max_retries".to_string(),
        toml::Value::Integer(s.max_retries as i64),
    );
    table.insert(
        "rate_limit_notifications".to_string(),
        toml::Value::Boolean(s.rate_limit_notifications),
    );

    // Optional scalar fields
    match &s.default_coder_model {
        Some(v) => {
            table.insert(
                "default_coder_model".to_string(),
                toml::Value::String(v.clone()),
            );
        }
        None => {
            table.remove("default_coder_model");
        }
    }
    match s.max_coders {
        Some(v) => {
            table.insert("max_coders".to_string(), toml::Value::Integer(v as i64));
        }
        None => {
            table.remove("max_coders");
        }
    }
    match &s.base_branch {
        Some(v) => {
            table.insert("base_branch".to_string(), toml::Value::String(v.clone()));
        }
        None => {
            table.remove("base_branch");
        }
    }
    match &s.timezone {
        Some(v) => {
            table.insert("timezone".to_string(), toml::Value::String(v.clone()));
        }
        None => {
            table.remove("timezone");
        }
    }
    match &s.rendezvous {
        Some(v) => {
            table.insert("rendezvous".to_string(), toml::Value::String(v.clone()));
        }
        None => {
            table.remove("rendezvous");
        }
    }

    // [watcher] sub-table
    let watcher_entry = table
        .entry("watcher".to_string())
        .or_insert_with(|| toml::Value::Table(toml::map::Map::new()));
    if let toml::Value::Table(wt) = watcher_entry {
        wt.insert(
            "sweep_interval_secs".to_string(),
            toml::Value::Integer(s.watcher_sweep_interval_secs as i64),
        );
        wt.insert(
            "done_retention_secs".to_string(),
            toml::Value::Integer(s.watcher_done_retention_secs as i64),
        );
    }

    // Ensure .huskies/ directory exists
    if let Some(parent) = config_path.parent() {
        std::fs::create_dir_all(parent).map_err(|e| format!("Create .huskies dir: {e}"))?;
    }

    let new_content = toml::to_string_pretty(&val).map_err(|e| format!("Serialize config: {e}"))?;
    std::fs::write(&config_path, new_content).map_err(|e| format!("Write config: {e}"))?;

    Ok(())
}

#[derive(Tags)]
enum SettingsTags {
    Settings,
@@ -71,6 +239,30 @@ impl SettingsApi {
        Ok(Json(OpenFileResponse { success: true }))
    }

    /// Get current project.toml scalar settings as JSON.
    #[oai(path = "/settings", method = "get")]
    async fn get_settings(&self) -> OpenApiResult<Json<ProjectSettings>> {
        let project_root = self.ctx.state.get_project_root().map_err(bad_request)?;
        let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
        Ok(Json(settings_from_config(&config)))
    }

    /// Update project.toml scalar settings. Array sections (component, agent) are preserved.
    ///
    /// Returns 400 if the input fails validation (e.g. unknown qa mode, negative max_retries).
    #[oai(path = "/settings", method = "put")]
    async fn put_settings(
        &self,
        payload: Json<ProjectSettings>,
    ) -> OpenApiResult<Json<ProjectSettings>> {
        validate_project_settings(&payload.0).map_err(bad_request)?;
        let project_root = self.ctx.state.get_project_root().map_err(bad_request)?;
        write_project_settings(&project_root, &payload.0).map_err(bad_request)?;
        // Re-read to confirm what was written
        let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
        Ok(Json(settings_from_config(&config)))
    }

    /// Set the preferred editor command (e.g. "zed", "code", "cursor").
    /// Pass null or empty string to clear the preference.
    #[oai(path = "/settings/editor", method = "put")]
@@ -360,4 +552,222 @@ mod tests {
            .await;
        assert!(result.is_err());
    }

    // ── /api/settings GET/PUT ──────────────────────────────────────────────

    fn default_project_settings() -> ProjectSettings {
        let cfg = ProjectConfig::default();
        settings_from_config(&cfg)
    }

    #[tokio::test]
    async fn get_settings_returns_defaults_when_no_project_toml() {
        let dir = TempDir::new().unwrap();
        // Create .huskies dir so project root detection works but no project.toml
        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
        let ctx = AppContext::new_test(dir.path().to_path_buf());
        let api = SettingsApi { ctx: Arc::new(ctx) };
        let result = api.get_settings().await.unwrap().0;
        assert_eq!(result.default_qa, "server");
        assert_eq!(result.max_retries, 2);
        assert!(result.rate_limit_notifications);
    }

    #[tokio::test]
    async fn put_settings_writes_and_returns_settings() {
        let dir = TempDir::new().unwrap();
        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
        let ctx = AppContext::new_test(dir.path().to_path_buf());
        let api = SettingsApi { ctx: Arc::new(ctx) };

        let mut s = default_project_settings();
        s.default_qa = "agent".to_string();
        s.max_retries = 5;
        s.rate_limit_notifications = false;

        let result = api.put_settings(Json(s)).await.unwrap().0;
        assert_eq!(result.default_qa, "agent");
        assert_eq!(result.max_retries, 5);
        assert!(!result.rate_limit_notifications);
    }

    #[tokio::test]
    async fn put_settings_preserves_agent_sections() {
        let dir = TempDir::new().unwrap();
        let huskies_dir = dir.path().join(".huskies");
        std::fs::create_dir_all(&huskies_dir).unwrap();

        // Write a project.toml with agent sections
        std::fs::write(
            huskies_dir.join("project.toml"),
            r#"
[[agent]]
name = "coder-1"
model = "sonnet"
stage = "coder"

[[component]]
name = "server"
path = "."
"#,
        )
        .unwrap();

        let ctx = AppContext::new_test(dir.path().to_path_buf());
        let api = SettingsApi { ctx: Arc::new(ctx) };

        let mut s = default_project_settings();
        s.default_qa = "human".to_string();
        api.put_settings(Json(s)).await.unwrap();

        // Re-read the file and verify agent/component sections are still there
        let written = std::fs::read_to_string(huskies_dir.join("project.toml")).unwrap();
        assert!(
            written.contains("coder-1"),
            "agent section should be preserved"
        );
        assert!(
            written.contains("server"),
            "component section should be preserved"
        );
        assert!(written.contains("human"), "new setting should be written");
    }

    #[tokio::test]
    async fn put_settings_rejects_invalid_qa_mode() {
        let dir = TempDir::new().unwrap();
        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
        let ctx = AppContext::new_test(dir.path().to_path_buf());
        let api = SettingsApi { ctx: Arc::new(ctx) };

        let mut s = default_project_settings();
        s.default_qa = "invalid_mode".to_string();

        let result = api.put_settings(Json(s)).await;
        assert!(result.is_err());
        let err = result.unwrap_err();
        assert_eq!(err.status(), poem::http::StatusCode::BAD_REQUEST);
    }

    #[test]
    fn validate_project_settings_accepts_valid_qa_modes() {
        for mode in &["server", "agent", "human"] {
            let s = ProjectSettings {
                default_qa: mode.to_string(),
                default_coder_model: None,
                max_coders: None,
                max_retries: 2,
                base_branch: None,
                rate_limit_notifications: true,
                timezone: None,
                rendezvous: None,
                watcher_sweep_interval_secs: 60,
                watcher_done_retention_secs: 14400,
            };
            assert!(
                validate_project_settings(&s).is_ok(),
                "qa mode '{mode}' should be valid"
            );
        }
    }

    #[test]
    fn validate_project_settings_rejects_unknown_qa_mode() {
        let s = ProjectSettings {
            default_qa: "robot".to_string(),
            default_coder_model: None,
            max_coders: None,
            max_retries: 2,
            base_branch: None,
            rate_limit_notifications: true,
            timezone: None,
            rendezvous: None,
            watcher_sweep_interval_secs: 60,
            watcher_done_retention_secs: 14400,
        };
        let err = validate_project_settings(&s).unwrap_err();
        assert!(err.contains("robot"));
    }

    #[test]
    fn write_and_read_project_settings_roundtrip() {
        let dir = TempDir::new().unwrap();
        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();

        let s = ProjectSettings {
            default_qa: "agent".to_string(),
            default_coder_model: Some("opus".to_string()),
            max_coders: Some(2),
            max_retries: 3,
            base_branch: Some("main".to_string()),
            rate_limit_notifications: false,
            timezone: Some("America/New_York".to_string()),
            rendezvous: Some("ws://host:3001/crdt-sync".to_string()),
            watcher_sweep_interval_secs: 30,
            watcher_done_retention_secs: 7200,
        };

        write_project_settings(dir.path(), &s).unwrap();

        let config = ProjectConfig::load(dir.path()).unwrap();
        let loaded = settings_from_config(&config);

        assert_eq!(loaded.default_qa, "agent");
        assert_eq!(loaded.default_coder_model, Some("opus".to_string()));
        assert_eq!(loaded.max_coders, Some(2));
        assert_eq!(loaded.max_retries, 3);
        assert_eq!(loaded.base_branch, Some("main".to_string()));
        assert!(!loaded.rate_limit_notifications);
        assert_eq!(loaded.timezone, Some("America/New_York".to_string()));
        assert_eq!(
            loaded.rendezvous,
            Some("ws://host:3001/crdt-sync".to_string())
        );
        assert_eq!(loaded.watcher_sweep_interval_secs, 30);
        assert_eq!(loaded.watcher_done_retention_secs, 7200);
    }

    #[test]
    fn write_project_settings_clears_optional_fields_when_none() {
        let dir = TempDir::new().unwrap();
        let huskies_dir = dir.path().join(".huskies");
        std::fs::create_dir_all(&huskies_dir).unwrap();

        // First write with optional fields set
        let s_with = ProjectSettings {
            default_qa: "server".to_string(),
            default_coder_model: Some("sonnet".to_string()),
            max_coders: Some(3),
            max_retries: 2,
            base_branch: Some("master".to_string()),
            rate_limit_notifications: true,
            timezone: Some("UTC".to_string()),
            rendezvous: None,
            watcher_sweep_interval_secs: 60,
            watcher_done_retention_secs: 14400,
        };
        write_project_settings(dir.path(), &s_with).unwrap();

        // Then write with optional fields cleared
        let s_clear = ProjectSettings {
            default_qa: "server".to_string(),
            default_coder_model: None,
            max_coders: None,
            max_retries: 2,
            base_branch: None,
            rate_limit_notifications: true,
            timezone: None,
            rendezvous: None,
            watcher_sweep_interval_secs: 60,
            watcher_done_retention_secs: 14400,
        };
        write_project_settings(dir.path(), &s_clear).unwrap();

        let config = ProjectConfig::load(dir.path()).unwrap();
        let loaded = settings_from_config(&config);
        assert!(loaded.default_coder_model.is_none());
        assert!(loaded.max_coders.is_none());
        assert!(loaded.base_branch.is_none());
        assert!(loaded.timezone.is_none());
    }
}
@@ -195,7 +195,7 @@ mod tests {
        let body: serde_json::Value = resp.0.into_body().into_json().await.unwrap();
        assert_eq!(body["current_step_index"], 1);
        assert!(!body["completed"].as_bool().unwrap());
-       assert_eq!(body["steps"].as_array().unwrap().len(), 6);
+       assert_eq!(body["steps"].as_array().unwrap().len(), 8);
        assert_eq!(body["steps"][0]["status"], "confirmed");
    }

@@ -279,11 +279,13 @@ mod tests {
        let (dir, client) = setup();
        WizardState::init_if_missing(dir.path());

-       // Steps 2-6 (scaffold is already confirmed)
+       // Steps 2-8 (scaffold is already confirmed)
        let steps = [
            "context",
            "stack",
            "test_script",
            "build_script",
            "lint_script",
            "release_script",
            "test_coverage",
        ];
@@ -7,7 +7,8 @@ pub use bug_ops::{
    create_bug_file, create_refactor_file, create_spike_file, list_bug_files, list_refactor_files,
};
pub use story_ops::{
-   add_criterion_to_file, check_criterion_in_file, create_story_file, update_story_in_file,
+   add_criterion_to_file, check_criterion_in_file, create_story_file, edit_criterion_in_file,
+   remove_criterion_from_file, update_story_in_file,
};
pub use test_results::{
    read_test_results_from_story_file, write_coverage_baseline_to_story_file,
@@ -126,6 +126,111 @@ pub fn check_criterion_in_file(
    Ok(())
}

/// Remove an acceptance criterion from a story by its 0-based index (counting all criteria,
/// both checked and unchecked). Returns an error if the index is out of range.
pub fn remove_criterion_from_file(
    project_root: &Path,
    story_id: &str,
    criterion_index: usize,
) -> Result<(), String> {
    let contents = read_story_content(project_root, story_id)?;

    let mut count: usize = 0;
    let mut found = false;
    let new_lines: Vec<String> = contents
        .lines()
        .filter(|line| {
            let trimmed = line.trim();
            if trimmed.starts_with("- [ ] ") || trimmed.starts_with("- [x] ") {
                if count == criterion_index {
                    count += 1;
                    found = true;
                    return false;
                }
                count += 1;
            }
            true
        })
        .map(|s| s.to_string())
        .collect();

    if !found {
        return Err(format!(
            "Criterion index {criterion_index} out of range. Story '{story_id}' has \
             {count} criteria (indices 0..{}).",
            count.saturating_sub(1)
        ));
    }

    let mut new_str = new_lines.join("\n");
    if contents.ends_with('\n') {
        new_str.push('\n');
    }

    let stage = story_stage(story_id).unwrap_or_else(|| "2_current".to_string());
    write_story_content(project_root, story_id, &stage, &new_str);

    Ok(())
}

/// Edit the text of an existing acceptance criterion without changing its checked state.
///
/// Finds the criterion at `criterion_index` (0-based, counting all criteria regardless
/// of checked state) and replaces its text with `new_text`.
pub fn edit_criterion_in_file(
    project_root: &Path,
    story_id: &str,
    criterion_index: usize,
    new_text: &str,
) -> Result<(), String> {
    let contents = read_story_content(project_root, story_id)?;

    let mut count: usize = 0;
    let mut found = false;
    let new_lines: Vec<String> = contents
        .lines()
        .map(|line| {
            let trimmed = line.trim();
            let prefix = if trimmed.starts_with("- [ ] ") {
                Some("- [ ] ")
            } else if trimmed.starts_with("- [x] ") {
                Some("- [x] ")
            } else {
                None
            };
            if let Some(p) = prefix {
                if count == criterion_index {
                    count += 1;
                    found = true;
                    let indent_len = line.len() - trimmed.len();
                    let indent = &line[..indent_len];
                    return format!("{indent}{p}{new_text}");
                }
                count += 1;
            }
            line.to_string()
        })
        .collect();

    if !found {
        return Err(format!(
            "Criterion index {criterion_index} out of range. Story '{story_id}' has \
             {count} criteria (indices 0..{}).",
            count.saturating_sub(1)
        ));
    }

    let mut new_str = new_lines.join("\n");
    if contents.ends_with('\n') {
        new_str.push('\n');
    }

    let stage = story_stage(story_id).unwrap_or_else(|| "2_current".to_string());
    write_story_content(project_root, story_id, &stage, &new_str);

    Ok(())
}
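Both helpers above share one addressing rule: criteria are numbered 0-based across checked and unchecked checkbox lines, and every other line passes through untouched. A minimal standalone sketch of that rule (a hypothetical helper for illustration, not the crate's own API, and without the file I/O):

```rust
/// Drop the `index`-th checkbox criterion (0-based, counting both `- [ ]`
/// and `- [x]` lines) from a markdown document. Returns None when the
/// index is out of range, mirroring the out-of-range error path above.
fn remove_nth_criterion(contents: &str, index: usize) -> Option<String> {
    let mut count = 0usize;
    let mut found = false;
    let kept: Vec<&str> = contents
        .lines()
        .filter(|line| {
            let t = line.trim();
            if t.starts_with("- [ ] ") || t.starts_with("- [x] ") {
                let hit = count == index;
                count += 1;
                if hit {
                    found = true;
                    return false; // drop exactly the matched criterion line
                }
            }
            true // non-criterion lines and other criteria pass through
        })
        .collect();
    found.then(|| kept.join("\n"))
}
```

The same counter-walk drives `edit_criterion_in_file`; it just maps the matched line to new text instead of filtering it out.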
/// Add a new acceptance criterion to a story.
///
/// Appends `- [ ] {criterion}` after the last existing criterion line in the
@@ -520,6 +625,61 @@ mod tests {
        assert!(result.unwrap_err().contains("Acceptance Criteria"));
    }

    // ── remove_criterion_from_file tests ──────────────────────────────────────

    #[test]
    fn remove_criterion_removes_by_index() {
        let tmp = tempfile::tempdir().unwrap();
        setup_story_in_fs(
            tmp.path(),
            "20_remove_test",
            &story_with_ac_section(&["First", "Second", "Third"]),
        );

        remove_criterion_from_file(tmp.path(), "20_remove_test", 1).unwrap();

        let contents = read_story_content(tmp.path(), "20_remove_test").unwrap();
        assert!(contents.contains("- [ ] First"), "First should remain");
        assert!(!contents.contains("Second"), "Second should be removed");
        assert!(contents.contains("- [ ] Third"), "Third should remain");
    }

    #[test]
    fn remove_criterion_shifts_indices() {
        let tmp = tempfile::tempdir().unwrap();
        setup_story_in_fs(
            tmp.path(),
            "21_remove_test",
            &story_with_ac_section(&["A", "B", "C"]),
        );

        remove_criterion_from_file(tmp.path(), "21_remove_test", 0).unwrap();

        let contents = read_story_content(tmp.path(), "21_remove_test").unwrap();
        assert!(!contents.contains("- [ ] A"), "A should be removed");
        assert!(contents.contains("- [ ] B"), "B should remain");
        assert!(contents.contains("- [ ] C"), "C should remain");
        // B is now at index 0, C at index 1 — verify by removing B next
        remove_criterion_from_file(tmp.path(), "21_remove_test", 0).unwrap();
        let contents2 = read_story_content(tmp.path(), "21_remove_test").unwrap();
        assert!(!contents2.contains("- [ ] B"), "B should now be removed");
        assert!(contents2.contains("- [ ] C"), "C should still remain");
    }

    #[test]
    fn remove_criterion_out_of_range_returns_error() {
        let tmp = tempfile::tempdir().unwrap();
        setup_story_in_fs(
            tmp.path(),
            "22_remove_test",
            &story_with_ac_section(&["Only"]),
        );

        let result = remove_criterion_from_file(tmp.path(), "22_remove_test", 5);
        assert!(result.is_err(), "should fail for out-of-range index");
        assert!(result.unwrap_err().contains("out of range"));
    }

    // ── update_story_in_file tests ─────────────────────────────────────────────

    #[test]
+21
-11
@@ -37,6 +37,13 @@ pub(crate) async fn ensure_project_root_with_story_kit(
        if !path.join(".huskies").is_dir() {
            scaffold_story_kit(&path, port)?;
        }
        // Always update .mcp.json with the current port so the bot connects to
        // the right endpoint even when HUSKIES_PORT changes between restarts.
        let mcp_content = format!(
            "{{\n  \"mcpServers\": {{\n    \"huskies\": {{\n      \"type\": \"http\",\n      \"url\": \"http://localhost:{port}/mcp\"\n    }}\n  }}\n}}\n"
        );
        fs::write(path.join(".mcp.json"), mcp_content)
            .map_err(|e| format!("Failed to write .mcp.json: {}", e))?;
        Ok(())
    })
    .await
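The escaped format string above is easier to read rendered. With port 3001, for example, it writes the following file (indentation is approximate; the extracted string collapsed runs of spaces):

```json
{
  "mcpServers": {
    "huskies": {
      "type": "http",
      "url": "http://localhost:3001/mcp"
    }
  }
}
```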
@@ -194,16 +201,15 @@ mod tests {
    }

    #[tokio::test]
-   async fn open_project_does_not_overwrite_existing_mcp_json() {
-       // scaffold must NOT overwrite .mcp.json when it already exists — QA
-       // test servers share the real project root, and re-writing would
-       // clobber the file with the wrong port.
+   async fn open_project_updates_mcp_json_with_current_port() {
+       // .mcp.json must always be updated with the actual running port so the
+       // bot connects to the right MCP endpoint even when HUSKIES_PORT changes.
        let dir = tempdir().unwrap();
        let project_dir = dir.path().join("myproject");
        fs::create_dir_all(&project_dir).unwrap();
-       // Pre-write .mcp.json with a different port to simulate an already-configured project.
+       // Pre-write .mcp.json with a different port to simulate a stale file.
        let mcp_path = project_dir.join(".mcp.json");
-       fs::write(&mcp_path, "{\"existing\": true}").unwrap();
+       fs::write(&mcp_path, "{\"stale\": true}").unwrap();
        let store = make_store(&dir);
        let state = SessionState::default();

@@ -211,15 +217,19 @@ mod tests {
            project_dir.to_string_lossy().to_string(),
            &state,
            &store,
-           3001,
+           3002,
        )
        .await
        .unwrap();

-       assert_eq!(
-           fs::read_to_string(&mcp_path).unwrap(),
-           "{\"existing\": true}",
-           "open_project must not overwrite an existing .mcp.json"
+       let content = fs::read_to_string(&mcp_path).unwrap();
+       assert!(
+           content.contains("3002"),
+           "open_project must update .mcp.json with the actual running port"
        );
+       assert!(
+           content.contains("localhost"),
+           "mcp.json must reference localhost"
+       );
    }
@@ -100,6 +100,24 @@ const DEFAULT_PROJECT_SETTINGS_TOML: &str = r#"# Project-wide default QA mode: "
# Per-story `qa` front matter overrides this setting.
default_qa = "server"

# Maximum number of retries per story per pipeline stage before marking as blocked.
# Set to 0 to disable retry limits.
max_retries = 2

# Default model for coder-stage agents (e.g. "sonnet", "opus").
# When set, only coder agents whose model matches this value are considered for
# auto-assignment, so opus agents are only used when explicitly requested via
# story front matter `agent:` field.
# default_coder_model = "sonnet"

# Maximum number of concurrent coder-stage agents.
# Stories wait in 2_current/ until a slot frees up.
# max_coders = 3

# Override the base branch for worktree creation and merge operations.
# When not set, the system auto-detects the base branch from the current HEAD.
# base_branch = "main"

# Suppress soft rate-limit warning notifications in chat.
# Hard blocks and story-blocked notifications are always sent.
# rate_limit_notifications = true
@@ -199,33 +217,202 @@ pub fn detect_components_toml(root: &Path) -> String {
    sections.join("\n")
}

/// Detect the appropriate Node.js test command for a directory containing `package.json`.
///
/// Reads the `package.json` content to identify known test runners (vitest, jest).
/// Falls back to `npm test` or `pnpm test` based on which lock file is present.
fn detect_node_test_cmd(pkg_dir: &Path) -> String {
    let has_pnpm = pkg_dir.join("pnpm-lock.yaml").exists();
    let content = std::fs::read_to_string(pkg_dir.join("package.json")).unwrap_or_default();

    if content.contains("\"vitest\"") {
        let pm = if has_pnpm { "pnpm" } else { "npx" };
        return format!("{} vitest run", pm);
    }
    if content.contains("\"jest\"") {
        let pm = if has_pnpm { "pnpm" } else { "npx" };
        return format!("{} jest", pm);
    }

    if has_pnpm {
        "pnpm test".to_string()
    } else {
        "npm test".to_string()
    }
}

/// Detect the appropriate Node.js build command for a directory containing `package.json`.
fn detect_node_build_cmd(pkg_dir: &Path) -> String {
    if pkg_dir.join("pnpm-lock.yaml").exists() {
        "pnpm run build".to_string()
    } else {
        "npm run build".to_string()
    }
}

/// Detect the appropriate Node.js lint command for a directory containing `package.json`.
///
/// Reads the `package.json` content to identify eslint. Falls back to
/// `npm run lint` or `pnpm run lint` based on which lock file is present.
fn detect_node_lint_cmd(pkg_dir: &Path) -> String {
    let has_pnpm = pkg_dir.join("pnpm-lock.yaml").exists();
    let content = std::fs::read_to_string(pkg_dir.join("package.json")).unwrap_or_default();
    if content.contains("\"eslint\"") {
        let pm = if has_pnpm { "pnpm" } else { "npx" };
        return format!("{pm} eslint .");
    }
    if has_pnpm {
        "pnpm run lint".to_string()
    } else {
        "npm run lint".to_string()
    }
}

/// Generate `script/build` content for a new project at `root`.
///
/// Inspects well-known marker files to identify which tech stacks are present
/// and emits the appropriate build commands. Multi-stack projects get combined
/// commands run sequentially. Falls back to a generic stub when no markers
/// are found so the scaffold is always valid.
///
/// For projects with a frontend in a known subdirectory (`frontend/`, `client/`),
/// the build command is detected from the presence of `pnpm-lock.yaml`.
pub fn detect_script_build(root: &Path) -> String {
    let mut commands: Vec<String> = Vec::new();

    if root.join("Cargo.toml").exists() {
        commands.push("cargo build --release".to_string());
    }

    if root.join("package.json").exists() {
        commands.push(detect_node_build_cmd(root));
    }

    // Detect frontend in known subdirectories (e.g. frontend/, client/)
    for subdir in &["frontend", "client"] {
        let sub_path = root.join(subdir);
        if sub_path.join("package.json").exists() {
            let cmd = detect_node_build_cmd(&sub_path);
            commands.push(format!("(cd {} && {})", subdir, cmd));
        }
    }

    if root.join("pyproject.toml").exists() {
        commands.push("python -m build".to_string());
    }

    if root.join("go.mod").exists() {
        commands.push("go build ./...".to_string());
    }

    if commands.is_empty() {
        return "#!/usr/bin/env bash\nset -euo pipefail\n\n# Add your project's build commands here.\necho \"No build configured\"\n".to_string();
    }

    let mut script = "#!/usr/bin/env bash\nset -euo pipefail\n\n".to_string();
    for cmd in commands {
        script.push_str(&cmd);
        script.push('\n');
    }
    script
}

/// Generate `script/lint` content for a new project at `root`.
///
/// Inspects well-known marker files to identify which linters are present
/// and emits the appropriate lint commands. Multi-stack projects get combined
/// commands run sequentially. Falls back to a generic stub when no markers
/// are found so the scaffold is always valid.
///
/// For projects with a frontend in a known subdirectory (`frontend/`, `client/`),
/// the lint command is detected from the `package.json` (eslint, npm, pnpm).
pub fn detect_script_lint(root: &Path) -> String {
    let mut commands: Vec<String> = Vec::new();

    if root.join("Cargo.toml").exists() {
        commands.push("cargo fmt --all --check".to_string());
        commands.push("cargo clippy -- -D warnings".to_string());
    }

    if root.join("package.json").exists() {
        commands.push(detect_node_lint_cmd(root));
    }

    // Detect frontend in known subdirectories (e.g. frontend/, client/)
    for subdir in &["frontend", "client"] {
        let sub_path = root.join(subdir);
        if sub_path.join("package.json").exists() {
            let cmd = detect_node_lint_cmd(&sub_path);
            commands.push(format!("(cd {} && {})", subdir, cmd));
        }
    }

    if root.join("pyproject.toml").exists() || root.join("requirements.txt").exists() {
        let mut content = std::fs::read_to_string(root.join("pyproject.toml")).unwrap_or_default();
        content
            .push_str(&std::fs::read_to_string(root.join("requirements.txt")).unwrap_or_default());
        if content.contains("ruff") {
            commands.push("ruff check .".to_string());
        } else {
            commands.push("flake8 .".to_string());
        }
    }

    if root.join("go.mod").exists() {
        commands.push("go vet ./...".to_string());
    }

    if commands.is_empty() {
        return "#!/usr/bin/env bash\nset -euo pipefail\n\n# Add your project's lint commands here.\necho \"No linters configured\"\n".to_string();
    }

    let mut script = "#!/usr/bin/env bash\nset -euo pipefail\n\n".to_string();
    for cmd in commands {
        script.push_str(&cmd);
        script.push('\n');
    }
    script
}

/// Generate `script/test` content for a new project at `root`.
///
/// Inspects well-known marker files to identify which tech stacks are present
/// and emits the appropriate test commands. Multi-stack projects get combined
/// commands run sequentially. Falls back to the generic stub when no markers
/// are found so the scaffold is always valid.
///
/// For projects with a frontend in a known subdirectory (`frontend/`, `client/`),
/// the test runner is detected from the `package.json` (vitest, jest, npm, pnpm).
pub fn detect_script_test(root: &Path) -> String {
-   let mut commands: Vec<&str> = Vec::new();
+   let mut commands: Vec<String> = Vec::new();

    if root.join("Cargo.toml").exists() {
-       commands.push("cargo test");
+       commands.push("cargo test".to_string());
    }

    if root.join("package.json").exists() {
        if root.join("pnpm-lock.yaml").exists() {
-           commands.push("pnpm test");
+           commands.push("pnpm test".to_string());
        } else {
-           commands.push("npm test");
+           commands.push("npm test".to_string());
        }
    }

    // Detect frontend in known subdirectories (e.g. frontend/, client/)
    for subdir in &["frontend", "client"] {
        let sub_path = root.join(subdir);
        if sub_path.join("package.json").exists() {
            let cmd = detect_node_test_cmd(&sub_path);
            commands.push(format!("(cd {} && {})", subdir, cmd));
        }
    }

    if root.join("pyproject.toml").exists() || root.join("requirements.txt").exists() {
-       commands.push("pytest");
+       commands.push("pytest".to_string());
    }

    if root.join("go.mod").exists() {
-       commands.push("go test ./...");
+       commands.push("go test ./...".to_string());
    }

    if commands.is_empty() {
@@ -234,7 +421,7 @@ pub fn detect_script_test(root: &Path) -> String {

    let mut script = "#!/usr/bin/env bash\nset -euo pipefail\n\n".to_string();
    for cmd in commands {
-       script.push_str(cmd);
+       script.push_str(&cmd);
        script.push('\n');
    }
    script
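As a concrete illustration of the combined detection (a hypothetical repo layout, not one taken from this diff): a project with a root `Cargo.toml` and a `frontend/package.json` that lists vitest but has no `pnpm-lock.yaml` would get a generated `script/test` of:

```
#!/usr/bin/env bash
set -euo pipefail

cargo test
(cd frontend && npx vitest run)
```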
@@ -298,6 +485,8 @@ fn write_story_kit_gitignore(root: &Path) -> Result<(), String> {
        "token_usage.jsonl",
        "wizard_state.json",
        "store.json",
        "pipeline.db",
        "*.db",
    ];

    let gitignore_path = root.join(".huskies").join(".gitignore");
@@ -411,6 +600,10 @@ pub(crate) fn scaffold_story_kit(root: &Path, port: u16) -> Result<(), String> {
    write_file_if_missing(&tech_root.join("STACK.md"), STORY_KIT_STACK)?;
    let script_test_content = detect_script_test(root);
    write_script_if_missing(&script_root.join("test"), &script_test_content)?;
    let script_build_content = detect_script_build(root);
    write_script_if_missing(&script_root.join("build"), &script_build_content)?;
    let script_lint_content = detect_script_lint(root);
    write_script_if_missing(&script_root.join("lint"), &script_lint_content)?;
    write_file_if_missing(&root.join("CLAUDE.md"), STORY_KIT_CLAUDE_MD)?;

    // Write per-transport bot.toml example files so users can see all options.
@@ -584,6 +777,78 @@ mod tests {
        );
    }

    #[test]
    fn scaffold_project_toml_contains_max_retries_with_default_value() {
        let dir = tempdir().unwrap();
        scaffold_story_kit(dir.path(), 3001).unwrap();

        let content = fs::read_to_string(dir.path().join(".huskies/project.toml")).unwrap();
        assert!(
            content.contains("max_retries = 2"),
            "project.toml scaffold should include max_retries with default value 2"
        );
        assert!(
            content.contains("Maximum number of retries"),
            "project.toml scaffold should include a comment explaining max_retries"
        );
    }

    #[test]
    fn scaffold_project_toml_contains_commented_out_optional_fields() {
        let dir = tempdir().unwrap();
        scaffold_story_kit(dir.path(), 3001).unwrap();

        let content = fs::read_to_string(dir.path().join(".huskies/project.toml")).unwrap();
        assert!(
            content.contains("# default_coder_model"),
            "project.toml scaffold should include commented-out default_coder_model"
        );
        assert!(
            content.contains("# max_coders"),
            "project.toml scaffold should include commented-out max_coders"
        );
        assert!(
            content.contains("# base_branch"),
            "project.toml scaffold should include commented-out base_branch"
        );
    }

    #[test]
    fn scaffold_project_toml_round_trips_through_project_config_load() {
        use crate::config::ProjectConfig;

        let dir = tempdir().unwrap();
        scaffold_story_kit(dir.path(), 3001).unwrap();

        // The generated project.toml must parse without error.
        let config = ProjectConfig::load(dir.path())
            .expect("Generated project.toml should parse without error");

        // Key defaults must survive the round-trip.
        assert_eq!(config.default_qa, "server");
        assert_eq!(config.max_retries, 2);
        assert!(
            config.rate_limit_notifications,
            "rate_limit_notifications should default to true"
        );
        assert!(
            config.default_coder_model.is_none(),
            "default_coder_model should be None when commented out"
        );
        assert!(
            config.max_coders.is_none(),
            "max_coders should be None when commented out"
        );
        assert!(
            config.base_branch.is_none(),
            "base_branch should be None when commented out"
        );
        assert!(
            config.timezone.is_none(),
            "timezone should be None when commented out"
        );
    }

    #[test]
    fn scaffold_context_is_blank_template_not_story_kit_content() {
        let dir = tempdir().unwrap();
@@ -744,6 +1009,9 @@ mod tests {
         assert!(!root_content.contains(".huskies/coverage/"));
         // store.json must be in .huskies/.gitignore instead
         assert!(sk_content.contains("store.json"));
+        // Database files must be ignored so novice users don't accidentally commit them
+        assert!(sk_content.contains("pipeline.db"));
+        assert!(sk_content.contains("*.db"));
     }
 
     #[test]
@@ -1165,6 +1433,141 @@ mod tests {
         );
     }
 
+    #[test]
+    fn detect_script_test_frontend_subdir_with_vitest_uses_npx_vitest() {
+        let dir = tempdir().unwrap();
+        let frontend = dir.path().join("frontend");
+        fs::create_dir_all(&frontend).unwrap();
+        fs::write(
+            frontend.join("package.json"),
+            r#"{"devDependencies":{"vitest":"^1.0.0"},"scripts":{"test":"vitest run"}}"#,
+        )
+        .unwrap();
+
+        let script = detect_script_test(dir.path());
+        assert!(
+            script.contains("vitest run"),
+            "frontend with vitest should emit vitest run"
+        );
+        assert!(
+            script.contains("cd frontend"),
+            "should cd into the frontend directory"
+        );
+        assert!(
+            !script.contains("No tests configured"),
+            "should not use stub when frontend is detected"
+        );
+    }
+
+    #[test]
+    fn detect_script_test_frontend_subdir_with_jest_uses_npx_jest() {
+        let dir = tempdir().unwrap();
+        let frontend = dir.path().join("frontend");
+        fs::create_dir_all(&frontend).unwrap();
+        fs::write(
+            frontend.join("package.json"),
+            r#"{"devDependencies":{"jest":"^29.0.0"},"scripts":{"test":"jest"}}"#,
+        )
+        .unwrap();
+
+        let script = detect_script_test(dir.path());
+        assert!(
+            script.contains("jest"),
+            "frontend with jest should emit jest"
+        );
+        assert!(
+            script.contains("cd frontend"),
+            "should cd into the frontend directory"
+        );
+    }
+
+    #[test]
+    fn detect_script_test_frontend_subdir_no_known_runner_uses_npm_test() {
+        let dir = tempdir().unwrap();
+        let frontend = dir.path().join("frontend");
+        fs::create_dir_all(&frontend).unwrap();
+        fs::write(
+            frontend.join("package.json"),
+            r#"{"scripts":{"test":"mocha"}}"#,
+        )
+        .unwrap();
+
+        let script = detect_script_test(dir.path());
+        assert!(
+            script.contains("npm test"),
+            "frontend without known runner should fall back to npm test"
+        );
+        assert!(script.contains("cd frontend"));
+    }
+
+    #[test]
+    fn detect_script_test_frontend_subdir_pnpm_uses_pnpm_vitest() {
+        let dir = tempdir().unwrap();
+        let frontend = dir.path().join("frontend");
+        fs::create_dir_all(&frontend).unwrap();
+        fs::write(
+            frontend.join("package.json"),
+            r#"{"devDependencies":{"vitest":"^1.0.0"}}"#,
+        )
+        .unwrap();
+        fs::write(frontend.join("pnpm-lock.yaml"), "").unwrap();
+
+        let script = detect_script_test(dir.path());
+        assert!(
+            script.contains("pnpm vitest run"),
+            "pnpm frontend with vitest should use pnpm vitest run"
+        );
+    }
+
+    #[test]
+    fn detect_script_test_rust_plus_frontend_subdir_both_included() {
+        let dir = tempdir().unwrap();
+        fs::write(
+            dir.path().join("Cargo.toml"),
+            "[package]\nname = \"server\"\n",
+        )
+        .unwrap();
+        let frontend = dir.path().join("frontend");
+        fs::create_dir_all(&frontend).unwrap();
+        fs::write(
+            frontend.join("package.json"),
+            r#"{"devDependencies":{"vitest":"^1.0.0"}}"#,
+        )
+        .unwrap();
+
+        let script = detect_script_test(dir.path());
+        assert!(
+            script.contains("cargo test"),
+            "Rust + frontend should include cargo test"
+        );
+        assert!(
+            script.contains("vitest run"),
+            "Rust + frontend should include vitest run"
+        );
+        assert!(
+            script.contains("cd frontend"),
+            "Rust + frontend should cd into frontend"
+        );
+    }
+
+    #[test]
+    fn detect_script_test_client_subdir_detected() {
+        let dir = tempdir().unwrap();
+        let client = dir.path().join("client");
+        fs::create_dir_all(&client).unwrap();
+        fs::write(
+            client.join("package.json"),
+            r#"{"scripts":{"test":"jest"}}"#,
+        )
+        .unwrap();
+
+        let script = detect_script_test(dir.path());
+        assert!(
+            script.contains("cd client"),
+            "client/ subdir should also be detected"
+        );
+    }
+
     #[test]
     fn detect_script_test_output_starts_with_shebang() {
         let dir = tempdir().unwrap();
@@ -1211,6 +1614,347 @@ mod tests {
         );
     }
 
+    // --- detect_script_build ---
+
+    #[test]
+    fn detect_script_build_no_markers_returns_stub() {
+        let dir = tempdir().unwrap();
+        let script = detect_script_build(dir.path());
+        assert!(
+            script.contains("No build configured"),
+            "fallback should contain the generic stub message"
+        );
+        assert!(script.starts_with("#!/usr/bin/env bash"));
+    }
+
+    #[test]
+    fn detect_script_build_cargo_toml_adds_cargo_build_release() {
+        let dir = tempdir().unwrap();
+        fs::write(dir.path().join("Cargo.toml"), "[package]\nname = \"x\"\n").unwrap();
+
+        let script = detect_script_build(dir.path());
+        assert!(
+            script.contains("cargo build --release"),
+            "Rust project should run cargo build --release"
+        );
+        assert!(!script.contains("No build configured"));
+    }
+
+    #[test]
+    fn detect_script_build_package_json_npm_adds_npm_run_build() {
+        let dir = tempdir().unwrap();
+        fs::write(dir.path().join("package.json"), "{}").unwrap();
+
+        let script = detect_script_build(dir.path());
+        assert!(
+            script.contains("npm run build"),
+            "Node project without pnpm-lock should run npm run build"
+        );
+    }
+
+    #[test]
+    fn detect_script_build_package_json_pnpm_adds_pnpm_run_build() {
+        let dir = tempdir().unwrap();
+        fs::write(dir.path().join("package.json"), "{}").unwrap();
+        fs::write(dir.path().join("pnpm-lock.yaml"), "").unwrap();
+
+        let script = detect_script_build(dir.path());
+        assert!(
+            script.contains("pnpm run build"),
+            "Node project with pnpm-lock should run pnpm run build"
+        );
+        assert!(
+            !script.lines().any(|l| l.trim() == "npm run build"),
+            "should not use npm when pnpm-lock.yaml is present"
+        );
+    }
+
+    #[test]
+    fn detect_script_build_go_mod_adds_go_build() {
+        let dir = tempdir().unwrap();
+        fs::write(dir.path().join("go.mod"), "module example.com/app\n").unwrap();
+
+        let script = detect_script_build(dir.path());
+        assert!(
+            script.contains("go build ./..."),
+            "Go project should run go build ./..."
+        );
+    }
+
+    #[test]
+    fn detect_script_build_pyproject_toml_adds_python_build() {
+        let dir = tempdir().unwrap();
+        fs::write(
+            dir.path().join("pyproject.toml"),
+            "[project]\nname = \"x\"\n",
+        )
+        .unwrap();
+
+        let script = detect_script_build(dir.path());
+        assert!(
+            script.contains("python -m build"),
+            "Python project should run python -m build"
+        );
+    }
+
+    #[test]
+    fn detect_script_build_frontend_subdir_detected() {
+        let dir = tempdir().unwrap();
+        let frontend = dir.path().join("frontend");
+        fs::create_dir_all(&frontend).unwrap();
+        fs::write(frontend.join("package.json"), "{}").unwrap();
+
+        let script = detect_script_build(dir.path());
+        assert!(
+            script.contains("cd frontend"),
+            "frontend subdir should be detected for build"
+        );
+        assert!(script.contains("npm run build"));
+    }
+
+    #[test]
+    fn detect_script_build_rust_plus_frontend_subdir_both_included() {
+        let dir = tempdir().unwrap();
+        fs::write(
+            dir.path().join("Cargo.toml"),
+            "[package]\nname = \"server\"\n",
+        )
+        .unwrap();
+        let frontend = dir.path().join("frontend");
+        fs::create_dir_all(&frontend).unwrap();
+        fs::write(frontend.join("package.json"), "{}").unwrap();
+
+        let script = detect_script_build(dir.path());
+        assert!(script.contains("cargo build --release"));
+        assert!(script.contains("cd frontend"));
+        assert!(script.contains("npm run build"));
+    }
+
+    // --- detect_script_lint ---
+
+    #[test]
+    fn detect_script_lint_no_markers_returns_stub() {
+        let dir = tempdir().unwrap();
+        let script = detect_script_lint(dir.path());
+        assert!(
+            script.contains("No linters configured"),
+            "fallback should contain the generic stub message"
+        );
+        assert!(script.starts_with("#!/usr/bin/env bash"));
+    }
+
+    #[test]
+    fn detect_script_lint_cargo_toml_adds_fmt_and_clippy() {
+        let dir = tempdir().unwrap();
+        fs::write(dir.path().join("Cargo.toml"), "[package]\nname = \"x\"\n").unwrap();
+
+        let script = detect_script_lint(dir.path());
+        assert!(
+            script.contains("cargo fmt --all --check"),
+            "Rust project should check formatting"
+        );
+        assert!(
+            script.contains("cargo clippy -- -D warnings"),
+            "Rust project should run clippy"
+        );
+        assert!(!script.contains("No linters configured"));
+    }
+
+    #[test]
+    fn detect_script_lint_package_json_without_eslint_uses_npm_run_lint() {
+        let dir = tempdir().unwrap();
+        fs::write(dir.path().join("package.json"), "{}").unwrap();
+
+        let script = detect_script_lint(dir.path());
+        assert!(
+            script.contains("npm run lint"),
+            "Node project without eslint dep should fall back to npm run lint"
+        );
+    }
+
+    #[test]
+    fn detect_script_lint_package_json_with_eslint_uses_npx_eslint() {
+        let dir = tempdir().unwrap();
+        fs::write(
+            dir.path().join("package.json"),
+            r#"{"devDependencies":{"eslint":"^8.0.0"}}"#,
+        )
+        .unwrap();
+
+        let script = detect_script_lint(dir.path());
+        assert!(
+            script.contains("npx eslint ."),
+            "Node project with eslint should use npx eslint ."
+        );
+    }
+
+    #[test]
+    fn detect_script_lint_pnpm_with_eslint_uses_pnpm_eslint() {
+        let dir = tempdir().unwrap();
+        fs::write(
+            dir.path().join("package.json"),
+            r#"{"devDependencies":{"eslint":"^8.0.0"}}"#,
+        )
+        .unwrap();
+        fs::write(dir.path().join("pnpm-lock.yaml"), "").unwrap();
+
+        let script = detect_script_lint(dir.path());
+        assert!(
+            script.contains("pnpm eslint ."),
+            "pnpm project with eslint should use pnpm eslint ."
+        );
+    }
+
+    #[test]
+    fn detect_script_lint_python_requirements_uses_flake8() {
+        let dir = tempdir().unwrap();
+        fs::write(dir.path().join("requirements.txt"), "flask\n").unwrap();
+
+        let script = detect_script_lint(dir.path());
+        assert!(
+            script.contains("flake8 ."),
+            "Python project without ruff should use flake8"
+        );
+    }
+
+    #[test]
+    fn detect_script_lint_python_with_ruff_uses_ruff() {
+        let dir = tempdir().unwrap();
+        fs::write(
+            dir.path().join("pyproject.toml"),
+            "[project]\nname = \"x\"\n\n[tool.ruff]\n",
+        )
+        .unwrap();
+
+        let script = detect_script_lint(dir.path());
+        assert!(
+            script.contains("ruff check ."),
+            "Python project with ruff configured should use ruff"
+        );
+        assert!(
+            !script.contains("flake8"),
+            "should not use flake8 when ruff is configured"
+        );
+    }
+
+    #[test]
+    fn detect_script_lint_go_mod_adds_go_vet() {
+        let dir = tempdir().unwrap();
+        fs::write(dir.path().join("go.mod"), "module example.com/app\n").unwrap();
+
+        let script = detect_script_lint(dir.path());
+        assert!(
+            script.contains("go vet ./..."),
+            "Go project should run go vet ./..."
+        );
+    }
+
+    #[test]
+    fn detect_script_lint_frontend_subdir_detected() {
+        let dir = tempdir().unwrap();
+        let frontend = dir.path().join("frontend");
+        fs::create_dir_all(&frontend).unwrap();
+        fs::write(frontend.join("package.json"), "{}").unwrap();
+
+        let script = detect_script_lint(dir.path());
+        assert!(
+            script.contains("cd frontend"),
+            "frontend subdir should be detected for lint"
+        );
+    }
+
+    #[test]
+    fn detect_script_lint_rust_plus_frontend_subdir_both_included() {
+        let dir = tempdir().unwrap();
+        fs::write(
+            dir.path().join("Cargo.toml"),
+            "[package]\nname = \"server\"\n",
+        )
+        .unwrap();
+        let frontend = dir.path().join("frontend");
+        fs::create_dir_all(&frontend).unwrap();
+        fs::write(frontend.join("package.json"), "{}").unwrap();
+
+        let script = detect_script_lint(dir.path());
+        assert!(script.contains("cargo fmt --all --check"));
+        assert!(script.contains("cargo clippy -- -D warnings"));
+        assert!(script.contains("cd frontend"));
+    }
+
+    #[test]
+    fn scaffold_story_kit_creates_script_build_and_lint() {
+        let dir = tempdir().unwrap();
+        scaffold_story_kit(dir.path(), 3001).unwrap();
+
+        assert!(
+            dir.path().join("script/build").exists(),
+            "script/build should be created by scaffold"
+        );
+        assert!(
+            dir.path().join("script/lint").exists(),
+            "script/lint should be created by scaffold"
+        );
+    }
+
+    #[cfg(unix)]
+    #[test]
+    fn scaffold_story_kit_creates_executable_script_build_and_lint() {
+        use std::os::unix::fs::PermissionsExt;
+
+        let dir = tempdir().unwrap();
+        scaffold_story_kit(dir.path(), 3001).unwrap();
+
+        for name in &["build", "lint"] {
+            let path = dir.path().join("script").join(name);
+            assert!(path.exists(), "script/{name} should be created");
+            let perms = fs::metadata(&path).unwrap().permissions();
+            assert!(
+                perms.mode() & 0o111 != 0,
+                "script/{name} should be executable"
+            );
+        }
+    }
+
+    #[test]
+    fn scaffold_script_build_contains_detected_commands_for_rust() {
+        let dir = tempdir().unwrap();
+        fs::write(
+            dir.path().join("Cargo.toml"),
+            "[package]\nname = \"myapp\"\n",
+        )
+        .unwrap();
+
+        scaffold_story_kit(dir.path(), 3001).unwrap();
+
+        let content = fs::read_to_string(dir.path().join("script/build")).unwrap();
+        assert!(
+            content.contains("cargo build --release"),
+            "Rust project scaffold should set cargo build --release in script/build"
+        );
+    }
+
+    #[test]
+    fn scaffold_script_lint_contains_detected_commands_for_rust() {
+        let dir = tempdir().unwrap();
+        fs::write(
+            dir.path().join("Cargo.toml"),
+            "[package]\nname = \"myapp\"\n",
+        )
+        .unwrap();
+
+        scaffold_story_kit(dir.path(), 3001).unwrap();
+
+        let content = fs::read_to_string(dir.path().join("script/lint")).unwrap();
+        assert!(
+            content.contains("cargo fmt --all --check"),
+            "Rust project scaffold should include fmt check in script/lint"
+        );
+        assert!(
+            content.contains("cargo clippy -- -D warnings"),
+            "Rust project scaffold should include clippy in script/lint"
+        );
+    }
+
     // --- generate_project_toml ---
 
     #[test]
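The `detect_script_*` tests above pin individual fragments (the shebang, `cargo test`, `cd frontend`, `vitest run`) without ever showing a whole generated script. A hypothetical reconstruction of the `script/test` that `detect_script_test` might emit for a Rust repo with a vitest `frontend/` subdirectory — only the quoted fragments come from the assertions; the surrounding layout is an assumption:

```shell
# Hypothetical script/test for a Rust + frontend/ (vitest, npm) repo.
# Only the quoted fragments are pinned by the tests; the layout is assumed.
generated='#!/usr/bin/env bash
set -e

cargo test

cd frontend
npx vitest run
'

# Re-run the same substring checks the unit tests perform:
for frag in 'cargo test' 'cd frontend' 'vitest run'; do
  case "$generated" in
    *"$frag"*) ;;                       # fragment present, keep going
    *) echo "missing: $frag"; exit 1 ;; # would fail the corresponding test
  esac
done
echo ok
```

Per the pnpm tests, a repo with `pnpm-lock.yaml` would get `pnpm vitest run` instead of the npx invocation.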
@@ -5,7 +5,7 @@ use std::path::Path;
 /// Only untouched templates contain this marker — real project content
 /// will never include it, so it avoids false positives when the project
 /// itself is an "Agentic AI Code Assistant".
-const TEMPLATE_SENTINEL: &str = "<!-- huskies:scaffold-template -->";
+pub(crate) const TEMPLATE_SENTINEL: &str = "<!-- huskies:scaffold-template -->";
 
 /// Marker found in the default `script/test` scaffold output.
 const TEMPLATE_MARKER_SCRIPT: &str = "No tests configured";
@@ -57,6 +57,9 @@ pub struct StoryMetadata {
     /// Story numbers this story depends on. Auto-assign will skip this story
     /// until all dependencies have reached `5_done` or `6_archived`.
     pub depends_on: Option<Vec<u32>>,
+    /// When `true`, the story is frozen: auto-assign skips it, the pipeline
+    /// does not advance it, and no mergemaster is spawned.
+    pub frozen: Option<bool>,
 }
 
 #[derive(Debug, Clone, PartialEq, Eq)]
@@ -89,6 +92,8 @@ struct FrontMatter {
     blocked: Option<bool>,
     /// Story numbers this story depends on.
     depends_on: Option<Vec<u32>>,
+    /// When `true`, the story is frozen.
+    frozen: Option<bool>,
 }
 
 pub fn parse_front_matter(contents: &str) -> Result<StoryMetadata, StoryMetaError> {
@@ -129,6 +134,7 @@ fn build_metadata(front: FrontMatter) -> StoryMetadata {
         retry_count: front.retry_count,
         blocked: front.blocked,
         depends_on: front.depends_on,
+        frozen: front.frozen,
     }
 }
 
@@ -439,6 +445,20 @@ pub fn increment_retry_count_in_content(contents: &str) -> (String, u32) {
     (updated, new_count)
 }
 
+/// Return `true` if the story has `frozen: true` in the content store.
+///
+/// Used by the pipeline advance code to suppress stage transitions for frozen stories.
+pub fn is_story_frozen_in_store(story_id: &str) -> bool {
+    let contents = match crate::db::read_content(story_id) {
+        Some(c) => c,
+        None => return false,
+    };
+    parse_front_matter(&contents)
+        .ok()
+        .and_then(|m| m.frozen)
+        .unwrap_or(false)
+}
+
 /// Write `blocked: true` to story content (pure function).
 pub fn write_blocked_in_content(contents: &str) -> String {
     set_front_matter_field(contents, "blocked", "true")
@@ -459,6 +479,20 @@ pub fn write_review_hold_in_content(contents: &str) -> String {
     set_front_matter_field(contents, "review_hold", "true")
 }
 
+/// Write or update `depends_on` in story content (pure function).
+///
+/// Serialises `deps` as an inline YAML sequence, e.g. `[477, 478]`.
+/// If `deps` is empty the field is removed.
+pub fn write_depends_on_in_content(contents: &str, deps: &[u32]) -> String {
+    if deps.is_empty() {
+        remove_front_matter_field(contents, "depends_on")
+    } else {
+        let nums: Vec<String> = deps.iter().map(|n| n.to_string()).collect();
+        let yaml_value = format!("[{}]", nums.join(", "));
+        set_front_matter_field(contents, "depends_on", &yaml_value)
+    }
+}
+
 /// Resolve the effective QA mode for a story file.
 ///
 /// Reads the `qa` front matter field. If absent, falls back to `default`.
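The doc comment on `write_depends_on_in_content` fully specifies the serialised value; it can be sketched in isolation. The helper name `depends_on_yaml` below is hypothetical — only the inline-sequence format and the empty-list-removes-field rule come from the source:

```rust
// Standalone sketch of the value write_depends_on_in_content serialises.
// `depends_on_yaml` is a hypothetical helper, not part of the crate: it returns
// the inline YAML sequence to store, or None when the field should be removed.
fn depends_on_yaml(deps: &[u32]) -> Option<String> {
    if deps.is_empty() {
        // Empty list: the `depends_on` field is removed rather than set to `[]`.
        return None;
    }
    let nums: Vec<String> = deps.iter().map(|n| n.to_string()).collect();
    Some(format!("[{}]", nums.join(", ")))
}

fn main() {
    // Matches the doc-comment example: `[477, 478]`.
    assert_eq!(depends_on_yaml(&[477, 478]).as_deref(), Some("[477, 478]"));
    assert_eq!(depends_on_yaml(&[]), None);
}
```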
+11 -3
@@ -16,9 +16,13 @@ pub enum WizardStep {
     Stack,
     /// Step 4: create script/test
     TestScript,
-    /// Step 5: create script/release
+    /// Step 5: create script/build
+    BuildScript,
+    /// Step 6: create script/lint
+    LintScript,
+    /// Step 7: create script/release
     ReleaseScript,
-    /// Step 6: create script/test_coverage
+    /// Step 8: create script/test_coverage
     TestCoverage,
 }
 
@@ -29,6 +33,8 @@ impl WizardStep {
         WizardStep::Context,
         WizardStep::Stack,
         WizardStep::TestScript,
+        WizardStep::BuildScript,
+        WizardStep::LintScript,
         WizardStep::ReleaseScript,
         WizardStep::TestCoverage,
     ];
@@ -40,6 +46,8 @@ impl WizardStep {
             WizardStep::Context => "Generate project context (00_CONTEXT.md)",
             WizardStep::Stack => "Generate tech stack spec (STACK.md)",
             WizardStep::TestScript => "Create test script (script/test)",
+            WizardStep::BuildScript => "Create build script (script/build)",
+            WizardStep::LintScript => "Create lint script (script/lint)",
             WizardStep::ReleaseScript => "Create release script (script/release)",
             WizardStep::TestCoverage => "Create test coverage script (script/test_coverage)",
         }
@@ -262,7 +270,7 @@ mod tests {
     #[test]
     fn default_state_has_all_steps_pending() {
         let state = WizardState::default();
-        assert_eq!(state.steps.len(), 6);
+        assert_eq!(state.steps.len(), 8);
         for step in &state.steps {
            assert_eq!(step.status, StepStatus::Pending);
         }
+2 -1
@@ -860,7 +860,7 @@ async fn main() -> Result<(), std::io::Error> {
     // Optional Matrix bot: connect to the homeserver and start listening for
     // messages if `.huskies/bot.toml` is present and enabled.
     if let Some(ref root) = startup_root {
-        chat::transport::matrix::spawn_bot(
+        let _ = chat::transport::matrix::spawn_bot(
             root,
             watcher_tx_for_bot,
             perm_rx_for_bot,
@@ -868,6 +868,7 @@ async fn main() -> Result<(), std::io::Error> {
             matrix_shutdown_rx,
             None,
             vec![],
+            std::collections::BTreeMap::new(),
         );
     } else {
         // Keep the receiver alive (drop it) so the sender never errors.