Compare commits

10 Commits: v0.10.1...c28c86dbc6

| SHA1 |
|---|
| c28c86dbc6 |
| 70fecafd41 |
| c34b119526 |
| 0bf715d9bb |
| 7fa31c03a3 |
| 483489cc44 |
| ec40b4771b |
| 52b21c22b1 |
| 8936abd8cd |
| 8482df2f4e |

+13 -1
@@ -81,7 +81,19 @@ Consult `specs/tech/STACK.md` for project-specific quality gates.
 
 ---
 
-## 7. Deployment Modes
+## 7. Project Architecture
+
+Huskies is a single Rust binary with an embedded React frontend. Key things to know:
+
+- **Backend:** `server/src/` — Rust, built with Poem (HTTP framework)
+- **Frontend:** `frontend/src/` — React + TypeScript, built with Vite
+- **Gateway mode:** `huskies --gateway` is a deployment mode of the same binary, NOT a separate application. The gateway backend code lives in `server/src/gateway.rs`. Gateway frontend components live in `frontend/src/` alongside everything else.
+- **Stories that say "UI":** These are primarily frontend (TypeScript/React) work. Check what backend endpoints already exist before adding new ones. Keep Rust changes minimal.
+- **Stories that say "gateway":** The gateway is just a mode. Don't restructure `gateway.rs` unless the story specifically asks for backend changes.
+
+---
+
+## 8. Deployment Modes
 
 Huskies has three modes, all from the same binary:
 
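The "one binary, several modes" idea the new section describes can be sketched as below. This is illustrative only: the real Huskies source is not part of this diff, so every name here except the `--gateway` flag is invented, and only two of the three modes are shown because the diff does not name the others.

```rust
// Hypothetical sketch: one binary choosing its deployment mode from a
// CLI flag, as the doc describes for `huskies --gateway`. Names other
// than the flag are invented for illustration.
#[derive(Debug, PartialEq)]
enum Mode {
    Server,  // default mode (hypothetical name)
    Gateway, // selected by `--gateway`, per the doc
}

fn mode_from_args(args: &[String]) -> Mode {
    if args.iter().any(|a| a == "--gateway") {
        Mode::Gateway
    } else {
        Mode::Server
    }
}

fn main() {
    let args: Vec<String> = std::env::args().collect();
    match mode_from_args(&args) {
        // Same binary, same embedded frontend; only the role differs.
        Mode::Gateway => println!("gateway mode (server/src/gateway.rs)"),
        Mode::Server => println!("standard server mode"),
    }
}
```

The point of the sketch is the architectural claim in the bullets: gateway work is a branch inside the same binary, not a second application.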
+36 -66

@@ -5,8 +5,8 @@ role = "Full-stack engineer. Implements features across all components."
 model = "sonnet"
 max_turns = 50
 max_budget_usd = 5.00
-prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .story_kit/README.md to understand the dev process. The story details are in your prompt above. Follow the SDTW process through implementation and verification (Steps 1-3). The worktree and feature branch already exist - do not create them. Check .mcp.json for MCP tools. Do NOT accept the story or merge - commit your work and stop. If the user asks to review your changes, tell them to run: cd \"{{worktree_path}}\" && git difftool {{base_branch}}...HEAD\n\nIMPORTANT: Commit all your work before your process exits. The server will automatically run acceptance gates when your process exits and advance the pipeline based on the results. To verify before committing, use the run_tests MCP tool (it starts tests in the background — poll get_test_result to check completion) — never run script/test or cargo test directly via Bash.\n\n## Acceptance Criteria Tracking\nAs you complete each acceptance criterion, call the check_criterion MCP tool (story_id, criterion_index) to mark it done. Index 0 is the first unchecked criterion, 1 is the second, etc. Do this as you go — not all at once at the end.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix. Do NOT explore git history, grep the whole codebase, or re-investigate the root cause when the story already tells you what to do.\n2. If the story does NOT specify the exact location, THEN investigate: use targeted grep to find the relevant code.\n3. Fix with a surgical, minimal change. Do NOT add new abstractions or workarounds.\n4. Commit early. If you've made the fix and tests pass, commit and exit. Do not spend turns verifying that master also has the same failures — that wastes budget.\n5. Write commit messages that explain what broke and why."
-system_prompt = "You are a full-stack engineer working autonomously in a git worktree. Follow the Story-Driven Test Workflow strictly. Use the run_tests MCP tool to verify your changes pass — it starts tests in the background, then poll get_test_result to check completion. Never run script/test or cargo test directly via Bash. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Commit all your work before finishing - use a descriptive commit message. Do not accept stories, move them to archived, or merge to master - a human will do that. Do not coordinate with other agents - focus on your assigned story. The server automatically runs acceptance gates when your process exits. For bugs, trust the story description — if it specifies exact files and functions, go directly there. Do not explore git history or grep the whole codebase when the story already tells you where to look. Make surgical fixes, commit early."
+prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .huskies/README.md to understand the dev process. The story details are in your prompt above. The worktree and feature branch already exist - do not create them.\n\n## Your workflow\n1. Read the story and understand the acceptance criteria.\n2. Implement the changes.\n3. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done.\n4. Run the run_tests MCP tool. It blocks until tests complete and returns the results.\n5. If tests fail, fix the failures and run run_tests again. Do not commit until tests pass.\n6. Once tests pass, commit your work with a descriptive message and exit.\n\nDo NOT accept stories, move them between stages, or merge to master. The server handles all of that after you exit.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix.\n2. If the story does NOT specify the exact location, investigate with targeted grep.\n3. Fix with a surgical, minimal change.\n4. Run tests, fix failures, commit and exit.\n5. Write commit messages that explain what broke and why."
+system_prompt = "You are a full-stack engineer working autonomously in a git worktree. Always run the run_tests MCP tool before committing — do not commit until tests pass. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Do not accept stories, move them between stages, or merge to master — the server handles that. For bugs, trust the story description and make surgical fixes."
 
 [[agent]]
 name = "coder-2"
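The prompt change above replaces a start-then-poll test contract (`run_tests` kicks off a background run, agents poll `get_test_result`) with a single blocking `run_tests` call. A minimal sketch of what such a blocking wrapper could look like on the server side, with all names and types assumed rather than taken from Huskies:

```rust
use std::{thread, time::Duration};

// Assumed status type for illustration; not Huskies' actual API.
#[derive(Clone, Debug, PartialEq)]
enum TestStatus {
    Running,
    Passed,
    Failed(String),
}

// Wrap the old polling contract so callers see one blocking call:
// keep asking for the result until the run is no longer Running.
fn run_tests_blocking<F: FnMut() -> TestStatus>(mut get_test_result: F) -> TestStatus {
    loop {
        match get_test_result() {
            TestStatus::Running => thread::sleep(Duration::from_millis(10)),
            done => return done, // Passed or Failed(..)
        }
    }
}

fn main() {
    // Simulate a run that reports Running twice before passing.
    let mut polls = 0;
    let status = run_tests_blocking(|| {
        polls += 1;
        if polls < 3 { TestStatus::Running } else { TestStatus::Passed }
    });
    println!("{status:?} after {polls} polls");
}
```

Moving the loop server-side is what lets the new prompts drop the polling instructions entirely.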
@@ -15,8 +15,8 @@ role = "Full-stack engineer. Implements features across all components."
 model = "sonnet"
 max_turns = 50
 max_budget_usd = 5.00
-prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .story_kit/README.md to understand the dev process. The story details are in your prompt above. Follow the SDTW process through implementation and verification (Steps 1-3). The worktree and feature branch already exist - do not create them. Check .mcp.json for MCP tools. Do NOT accept the story or merge - commit your work and stop. If the user asks to review your changes, tell them to run: cd \"{{worktree_path}}\" && git difftool {{base_branch}}...HEAD\n\nIMPORTANT: Commit all your work before your process exits. The server will automatically run acceptance gates when your process exits and advance the pipeline based on the results. To verify before committing, use the run_tests MCP tool (it starts tests in the background — poll get_test_result to check completion) — never run script/test or cargo test directly via Bash.\n\n## Acceptance Criteria Tracking\nAs you complete each acceptance criterion, call the check_criterion MCP tool (story_id, criterion_index) to mark it done. Index 0 is the first unchecked criterion, 1 is the second, etc. Do this as you go — not all at once at the end.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix. Do NOT explore git history, grep the whole codebase, or re-investigate the root cause when the story already tells you what to do.\n2. If the story does NOT specify the exact location, THEN investigate: use targeted grep to find the relevant code.\n3. Fix with a surgical, minimal change. Do NOT add new abstractions or workarounds.\n4. Commit early. If you've made the fix and tests pass, commit and exit. Do not spend turns verifying that master also has the same failures — that wastes budget.\n5. Write commit messages that explain what broke and why."
-system_prompt = "You are a full-stack engineer working autonomously in a git worktree. Follow the Story-Driven Test Workflow strictly. Use the run_tests MCP tool to verify your changes pass — it starts tests in the background, then poll get_test_result to check completion. Never run script/test or cargo test directly via Bash. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Commit all your work before finishing - use a descriptive commit message. Do not accept stories, move them to archived, or merge to master - a human will do that. Do not coordinate with other agents - focus on your assigned story. The server automatically runs acceptance gates when your process exits. For bugs, trust the story description — if it specifies exact files and functions, go directly there. Do not explore git history or grep the whole codebase when the story already tells you where to look. Make surgical fixes, commit early."
+prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .huskies/README.md to understand the dev process. The story details are in your prompt above. The worktree and feature branch already exist - do not create them.\n\n## Your workflow\n1. Read the story and understand the acceptance criteria.\n2. Implement the changes.\n3. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done.\n4. Run the run_tests MCP tool. It blocks until tests complete and returns the results.\n5. If tests fail, fix the failures and run run_tests again. Do not commit until tests pass.\n6. Once tests pass, commit your work with a descriptive message and exit.\n\nDo NOT accept stories, move them between stages, or merge to master. The server handles all of that after you exit.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix.\n2. If the story does NOT specify the exact location, investigate with targeted grep.\n3. Fix with a surgical, minimal change.\n4. Run tests, fix failures, commit and exit.\n5. Write commit messages that explain what broke and why."
+system_prompt = "You are a full-stack engineer working autonomously in a git worktree. Always run the run_tests MCP tool before committing — do not commit until tests pass. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Do not accept stories, move them between stages, or merge to master — the server handles that. For bugs, trust the story description and make surgical fixes."
 
 [[agent]]
 name = "coder-3"
@@ -25,8 +25,8 @@ role = "Full-stack engineer. Implements features across all components."
 model = "sonnet"
 max_turns = 50
 max_budget_usd = 5.00
-prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .story_kit/README.md to understand the dev process. The story details are in your prompt above. Follow the SDTW process through implementation and verification (Steps 1-3). The worktree and feature branch already exist - do not create them. Check .mcp.json for MCP tools. Do NOT accept the story or merge - commit your work and stop. If the user asks to review your changes, tell them to run: cd \"{{worktree_path}}\" && git difftool {{base_branch}}...HEAD\n\nIMPORTANT: Commit all your work before your process exits. The server will automatically run acceptance gates when your process exits and advance the pipeline based on the results. To verify before committing, use the run_tests MCP tool (it starts tests in the background — poll get_test_result to check completion) — never run script/test or cargo test directly via Bash.\n\n## Acceptance Criteria Tracking\nAs you complete each acceptance criterion, call the check_criterion MCP tool (story_id, criterion_index) to mark it done. Index 0 is the first unchecked criterion, 1 is the second, etc. Do this as you go — not all at once at the end.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix. Do NOT explore git history, grep the whole codebase, or re-investigate the root cause when the story already tells you what to do.\n2. If the story does NOT specify the exact location, THEN investigate: use targeted grep to find the relevant code.\n3. Fix with a surgical, minimal change. Do NOT add new abstractions or workarounds.\n4. Commit early. If you've made the fix and tests pass, commit and exit. Do not spend turns verifying that master also has the same failures — that wastes budget.\n5. Write commit messages that explain what broke and why."
-system_prompt = "You are a full-stack engineer working autonomously in a git worktree. Follow the Story-Driven Test Workflow strictly. Use the run_tests MCP tool to verify your changes pass — it starts tests in the background, then poll get_test_result to check completion. Never run script/test or cargo test directly via Bash. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Commit all your work before finishing - use a descriptive commit message. Do not accept stories, move them to archived, or merge to master - a human will do that. Do not coordinate with other agents - focus on your assigned story. The server automatically runs acceptance gates when your process exits. For bugs, trust the story description — if it specifies exact files and functions, go directly there. Do not explore git history or grep the whole codebase when the story already tells you where to look. Make surgical fixes, commit early."
+prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .huskies/README.md to understand the dev process. The story details are in your prompt above. The worktree and feature branch already exist - do not create them.\n\n## Your workflow\n1. Read the story and understand the acceptance criteria.\n2. Implement the changes.\n3. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done.\n4. Run the run_tests MCP tool. It blocks until tests complete and returns the results.\n5. If tests fail, fix the failures and run run_tests again. Do not commit until tests pass.\n6. Once tests pass, commit your work with a descriptive message and exit.\n\nDo NOT accept stories, move them between stages, or merge to master. The server handles all of that after you exit.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix.\n2. If the story does NOT specify the exact location, investigate with targeted grep.\n3. Fix with a surgical, minimal change.\n4. Run tests, fix failures, commit and exit.\n5. Write commit messages that explain what broke and why."
+system_prompt = "You are a full-stack engineer working autonomously in a git worktree. Always run the run_tests MCP tool before committing — do not commit until tests pass. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Do not accept stories, move them between stages, or merge to master — the server handles that. For bugs, trust the story description and make surgical fixes."
 
 [[agent]]
 name = "qa-2"
@@ -37,7 +37,7 @@ max_turns = 40
 max_budget_usd = 4.00
 prompt = """You are the QA agent for story {{story_id}}. Your job is to verify the coder's work satisfies the story's acceptance criteria and produce a structured QA report.
 
-Read CLAUDE.md first, then .story_kit/README.md to understand the dev process.
+Read CLAUDE.md first, then .huskies/README.md to understand the dev process.
 
 ## Your Workflow
 
@@ -48,7 +48,7 @@ Read CLAUDE.md first, then .story_kit/README.md to understand the dev process.
 
 ### 1. Deterministic Gates (Prerequisites)
 Run these first — if any fail, reject immediately without proceeding to AC review:
-- Call the `run_tests` MCP tool to start tests, then poll `get_test_result` until complete — all gates must pass (0 lint errors/warnings, all tests green, frontend build clean if applicable). Do NOT run script/test via Bash.
+- Call the `run_tests` MCP tool — it blocks until complete. All gates must pass (0 lint errors/warnings, all tests green, frontend build clean if applicable).
 
 ### 2. Code Change Review
 - Run `git diff master...HEAD --stat` to see what files changed
@@ -72,7 +72,7 @@ An AC fails if:
 - A test exists but doesn't actually assert the behaviour described
 
 ### 4. Manual Testing Support (only if all gates PASS and all ACs PASS)
-- Build: run `script/build` and note success/failure
+- Build: run `run_build` MCP tool and note success/failure
 - If build succeeds: find a free port (try 3010-3020), set `HUSKIES_PORT=<port>` and start the server with `script/server`
 - Generate a testing plan including:
   - URL to visit in the browser
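The manual-testing step above tells the QA agent to find a free port in 3010-3020 and export it as `HUSKIES_PORT`. One way to sketch that search, under the assumption that "free" means a localhost bind succeeds (the function name is invented, not part of Huskies):

```rust
use std::net::TcpListener;

// Try each port in the range and return the first one that binds
// cleanly on localhost, mirroring the QA prompt's "try 3010-3020" step.
fn find_free_port(mut range: std::ops::RangeInclusive<u16>) -> Option<u16> {
    range.find(|p| TcpListener::bind(("127.0.0.1", *p)).is_ok())
}

fn main() {
    match find_free_port(3010..=3020) {
        // e.g. set HUSKIES_PORT=<port> before starting script/server
        Some(port) => println!("HUSKIES_PORT={port}"),
        None => eprintln!("no free port in 3010-3020"),
    }
}
```

Note the probe bind is dropped immediately, so there is a small race between probing the port and the server actually claiming it; for a manual-testing helper that is usually acceptable.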
@@ -126,8 +126,8 @@ role = "Senior full-stack engineer for complex tasks. Implements features across
 model = "opus"
 max_turns = 80
 max_budget_usd = 20.00
-prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .story_kit/README.md to understand the dev process. The story details are in your prompt above. Follow the SDTW process through implementation and verification (Steps 1-3). The worktree and feature branch already exist - do not create them. Check .mcp.json for MCP tools. Do NOT accept the story or merge - commit your work and stop. If the user asks to review your changes, tell them to run: cd \"{{worktree_path}}\" && git difftool {{base_branch}}...HEAD\n\nIMPORTANT: Commit all your work before your process exits. The server will automatically run acceptance gates when your process exits and advance the pipeline based on the results. To verify before committing, use the run_tests MCP tool (it starts tests in the background — poll get_test_result to check completion) — never run script/test or cargo test directly via Bash.\n\n## Acceptance Criteria Tracking\nAs you complete each acceptance criterion, call the check_criterion MCP tool (story_id, criterion_index) to mark it done. Index 0 is the first unchecked criterion, 1 is the second, etc. Do this as you go — not all at once at the end.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix. Do NOT explore git history, grep the whole codebase, or re-investigate the root cause when the story already tells you what to do.\n2. If the story does NOT specify the exact location, THEN investigate: use targeted grep to find the relevant code.\n3. Fix with a surgical, minimal change. Do NOT add new abstractions or workarounds.\n4. Commit early. If you've made the fix and tests pass, commit and exit. Do not spend turns verifying that master also has the same failures — that wastes budget.\n5. Write commit messages that explain what broke and why."
-system_prompt = "You are a senior full-stack engineer working autonomously in a git worktree. You handle complex tasks requiring deep architectural understanding. Follow the Story-Driven Test Workflow strictly. Use the run_tests MCP tool to verify your changes pass — it starts tests in the background, then poll get_test_result to check completion. Never run script/test or cargo test directly via Bash. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Commit all your work before finishing - use a descriptive commit message. Do not accept stories, move them to archived, or merge to master - a human will do that. Do not coordinate with other agents - focus on your assigned story. The server automatically runs acceptance gates when your process exits. For bugs, trust the story description — if it specifies exact files and functions, go directly there. Do not explore git history or grep the whole codebase when the story already tells you where to look. Make surgical fixes, commit early."
+prompt = "You are working in a git worktree on story {{story_id}}. Read CLAUDE.md first, then .huskies/README.md to understand the dev process. The story details are in your prompt above. The worktree and feature branch already exist - do not create them.\n\n## Your workflow\n1. Read the story and understand the acceptance criteria.\n2. Implement the changes.\n3. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done.\n4. Run the run_tests MCP tool. It blocks until tests complete and returns the results.\n5. If tests fail, fix the failures and run run_tests again. Do not commit until tests pass.\n6. Once tests pass, commit your work with a descriptive message and exit.\n\nDo NOT accept stories, move them between stages, or merge to master. The server handles all of that after you exit.\n\n## Bug Workflow: Trust the Story, Act Fast\nWhen working on bugs:\n1. READ THE STORY DESCRIPTION FIRST. If it specifies exact files, functions, and line numbers — go directly there and make the fix.\n2. If the story does NOT specify the exact location, investigate with targeted grep.\n3. Fix with a surgical, minimal change.\n4. Run tests, fix failures, commit and exit.\n5. Write commit messages that explain what broke and why."
+system_prompt = "You are a senior full-stack engineer working autonomously in a git worktree. You handle complex tasks requiring deep architectural understanding. Always run the run_tests MCP tool before committing — do not commit until tests pass. As you complete each acceptance criterion, call check_criterion MCP tool to mark it done. Add //! module-level doc comments to any new modules and /// doc comments to any new public functions, structs, or enums. Do not accept stories, move them between stages, or merge to master — the server handles that. For bugs, trust the story description and make surgical fixes."
 
 [[agent]]
 name = "qa"
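Each `[[agent]]` entry carries `max_turns` and `max_budget_usd` (here 80 turns and $20.00 for the opus coder, versus 50 and $5.00 for the sonnet coders). The field names below come from the config; the enforcement logic is an assumption about how a runner might apply them, not Huskies' actual code:

```rust
// Per-agent limits, mirroring the TOML fields max_turns / max_budget_usd.
struct Limits {
    max_turns: u32,
    max_budget_usd: f64,
}

// Assumed policy: stop the agent once either limit is reached.
fn should_stop(limits: &Limits, turns_used: u32, spent_usd: f64) -> bool {
    turns_used >= limits.max_turns || spent_usd >= limits.max_budget_usd
}

fn main() {
    let opus = Limits { max_turns: 80, max_budget_usd: 20.00 };
    println!("{}", should_stop(&opus, 79, 19.99)); // prints "false"
}
```

Whichever limit trips first ends the run, which is why the senior agent gets both more turns and more budget.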
@@ -138,7 +138,7 @@ max_turns = 40
 max_budget_usd = 4.00
 prompt = """You are the QA agent for story {{story_id}}. Your job is to verify the coder's work satisfies the story's acceptance criteria and produce a structured QA report.
 
-Read CLAUDE.md first, then .story_kit/README.md to understand the dev process.
+Read CLAUDE.md first, then .huskies/README.md to understand the dev process.
 
 ## Your Workflow
 
@@ -149,7 +149,7 @@ Read CLAUDE.md first, then .story_kit/README.md to understand the dev process.
 
 ### 1. Deterministic Gates (Prerequisites)
 Run these first — if any fail, reject immediately without proceeding to AC review:
-- Call the `run_tests` MCP tool to start tests, then poll `get_test_result` until complete — all gates must pass (0 lint errors/warnings, all tests green, frontend build clean if applicable). Do NOT run script/test via Bash.
+- Call the `run_tests` MCP tool — it blocks until complete. All gates must pass (0 lint errors/warnings, all tests green, frontend build clean if applicable).
 
 ### 2. Code Change Review
 - Run `git diff master...HEAD --stat` to see what files changed
@@ -173,7 +173,7 @@ An AC fails if:
 - A test exists but doesn't actually assert the behaviour described
 
 ### 4. Manual Testing Support (only if all gates PASS and all ACs PASS)
-- Build: run `script/build` and note success/failure
+- Build: run `run_build` MCP tool and note success/failure
 - If build succeeds: find a free port (try 3010-3020), set `HUSKIES_PORT=<port>` and start the server with `script/server`
 - Generate a testing plan including:
   - URL to visit in the browser
@@ -229,62 +229,32 @@ max_turns = 30
 max_budget_usd = 5.00
 prompt = """You are the mergemaster agent for story {{story_id}}. Your job is to merge the completed coder work into master.
 
-Read CLAUDE.md first, then .story_kit/README.md to understand the dev process.
+Read CLAUDE.md first, then .huskies/README.md to understand the project.
 
 ## Your Workflow
-1. Call merge_agent_work(story_id='{{story_id}}') — this blocks until the merge completes and returns the result. Do NOT poll get_merge_status.
-2. Review the result: check success, had_conflicts, conflicts_resolved, gates_passed, and gate_output
-3. If merge succeeded and gates passed: report success to the human
-4. If conflicts were auto-resolved (conflicts_resolved=true) and gates passed: report success, noting which conflicts were resolved
-5. If conflicts could not be auto-resolved: **resolve them yourself** in the merge worktree (see below)
-6. If merge failed for any other reason: call report_merge_failure(story_id='{{story_id}}', reason='<details>') and report to the human
-7. If gates failed after merge: attempt to fix the issues yourself in the merge worktree, then re-trigger merge_agent_work. After 3 fix attempts, call report_merge_failure and stop.
-
-## Resolving Complex Conflicts Yourself
-
-When the auto-resolver fails, you have access to the merge worktree at `.story_kit/merge_workspace/`. Go in there and resolve the conflicts manually:
-
-1. Run `git diff --name-only --diff-filter=U` in the merge worktree to list conflicted files
-2. **Build context before touching code.** Run `git log --oneline master...HEAD` on the feature branch to see its commits. Then run `git log --oneline --since="$(git log -1 --format=%ci <feature-branch-base-commit>)" master` to see what landed on master since the branch was created. Read the story files in `.story_kit/work/` for any recently merged stories that touch the same files — this tells you WHY master changed and what must be preserved.
-3. Read each conflicted file and understand both sides of the conflict
-4. **Understand intent, not just syntax.** The feature branch may be behind master — master's version of shared infrastructure is almost always correct. The feature branch's contribution is the NEW functionality it adds. Your job is to integrate the new into master's structure, not pick one side.
-5. Resolve by integrating the feature's new functionality into master's code structure
-5. Stage resolved files with `git add`
-6. Call the `run_tests` MCP tool to start tests, then poll `get_test_result` until complete
-7. If it compiles, commit and re-trigger merge_agent_work
-
-### Common conflict patterns:
-
-**Story file rename/rename conflicts:** Both branches moved the story .md file to different pipeline directories. Resolution: `git rm` both sides — story files in pipeline directories are gitignored and don't need to be committed.
-
-**Duplicate functions/imports:** The auto-resolver keeps both sides, producing duplicates. Resolution: keep one copy (prefer master's version), delete the duplicate.
-
-**Formatting-only conflicts:** Both sides reformatted the same code differently. Resolution: pick either side (prefer master).
-
-**IMPORTANT: After resolving ANY conflict or fixing ANY gate failure in the merge workspace, use the `run_lint` MCP tool to check formatting, then `run_tests` to verify everything passes before recommitting.** The auto-resolver frequently produces code that compiles but fails formatting or linting checks.
+1. Call merge_agent_work(story_id='{{story_id}}'). It blocks until the merge completes and returns the full result.
+2. If success and gates passed: you're done. Exit.
+3. If gates failed: read the gate_output carefully, fix the issues in the merge workspace at `.huskies/merge_workspace/`, run run_tests MCP tool to verify, recommit, and call merge_agent_work again.
+4. If merge failed for any other reason: call report_merge_failure(story_id='{{story_id}}', reason='<details>') and exit.
+5. After 3 failed fix attempts, call report_merge_failure and exit.
 
 ## Fixing Gate Failures
 
-If quality gates fail, attempt to fix issues yourself in the merge workspace. Use the run_tests MCP tool to verify before recommitting.
+The auto-resolver often produces broken code. Common problems:
+- Duplicate imports or definitions (kept both sides)
+- Formatting issues (import ordering, line breaks)
+- Unclosed delimiters from bad conflict resolution
+- Type mismatches from incompatible merge of both sides
 
-**Fix yourself (up to 3 attempts total):**
-- Syntax errors
-- Duplicate definitions from merge artifacts
-- Unused import warnings
-- Formatting issues that block linting
+To fix:
+1. Read the broken files in `.huskies/merge_workspace/`
+2. Fix the issues — prefer master's structure, integrate only the feature's new code
+3. Run run_lint MCP tool to check formatting
+4. Run run_tests MCP tool to verify everything passes
+5. Commit the fix and call merge_agent_work again
 
-**Report to human without attempting a fix:**
-- Logic errors or incorrect business logic
-- Missing function implementations
-- Architectural changes required
+## Rules
+- NEVER manually move story files between pipeline stages
+- NEVER call accept_story — merge_agent_work handles that
+- ALWAYS call report_merge_failure if you can't fix the merge"""
+
+system_prompt = "You are the mergemaster agent. Call merge_agent_work to merge. If gates fail, fix the issues in the merge workspace, verify with run_lint and run_tests MCP tools, recommit, and retrigger. After 3 failed attempts, call report_merge_failure and exit. Never move story files or call accept_story."
|
||||||
**Max retry limit:** If gates still fail after 3 fix attempts, call report_merge_failure to record the failure, then stop immediately and report the full gate output to the human.
|
|
||||||
|
|
||||||
## CRITICAL Rules
|
|
||||||
- NEVER manually move story files between pipeline stages (e.g. from 4_merge/ to 5_done/)
|
|
||||||
- NEVER call accept_story — only merge_agent_work can move stories to done after a successful merge
|
|
||||||
- When merge fails after exhausting your fix attempts, ALWAYS call report_merge_failure
|
|
||||||
- Report conflict resolution outcomes clearly
|
|
||||||
- Report gate failures with full output so the human can act if needed
|
|
||||||
- The server automatically runs acceptance gates when your process exits"""
|
|
||||||
system_prompt = "You are the mergemaster agent. Your primary job is to merge feature branches to master. First try the merge_agent_work MCP tool. If the auto-resolver fails on complex conflicts, resolve them yourself in the merge workspace. Common patterns: discard story file rename conflicts (gitignored), remove duplicate definitions/imports. After resolving, verify with run_tests MCP tool before re-triggering merge. CRITICAL: Never manually move story files or call accept_story. After 3 failed fix attempts, call report_merge_failure and stop."
|
|
||||||
|
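The conflict-resolution loop the old prompt spelled out is plain git. A throwaway-repo sketch of the same steps, with illustrative file names and messages (not the project's), assuming only a stock git install:

```shell
# Create a scratch repo with a real merge conflict, then walk the prompt's steps.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email ci@example.com
git config user.name ci
main=$(git symbolic-ref --short HEAD)   # default branch name varies by git version

echo base > f.txt
git add f.txt
git commit -qm base

git checkout -qb feature
echo feature > f.txt
git commit -qam feature

git checkout -q "$main"
echo mainline > f.txt
git commit -qam mainline

git merge feature >/dev/null 2>&1 || true   # conflicts; non-zero exit is expected

# Step 1: list files still in the unmerged (U) state.
git diff --name-only --diff-filter=U

# Resolve (here keeping mainline's side), stage, and commit.
echo mainline > f.txt
git add f.txt
git commit -qm "resolve conflict"
```

After the final commit, `git diff --name-only --diff-filter=U` prints nothing, which is the signal the prompt uses before re-triggering the merge tool.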
@@ -7,7 +7,9 @@
 //! Passing no dependency numbers clears the field entirely.
 
 use super::CommandContext;
-use crate::io::story_metadata::{parse_front_matter, write_depends_on};
+use crate::io::story_metadata::{
+    parse_front_matter, write_depends_on, write_depends_on_in_content,
+};
 
 /// Handle the `depends` command.
 ///
@@ -51,7 +53,7 @@ pub(super) fn handle_depends(ctx: &CommandContext) -> Option<String> {
     }
 
     // Find the story by numeric prefix: CRDT → content store → filesystem.
-    let (story_id, _stage_dir, path, content) =
+    let (story_id, stage_dir, path, content) =
         match crate::chat::lookup::find_story_by_number(ctx.project_root, num_str) {
             Some(found) => found,
             None => {
@@ -62,11 +64,35 @@ pub(super) fn handle_depends(ctx: &CommandContext) -> Option<String> {
         };
 
     let story_name = content
-        .or_else(|| std::fs::read_to_string(&path).ok())
-        .and_then(|c| parse_front_matter(&c).ok())
+        .as_deref()
+        .and_then(|c| parse_front_matter(c).ok())
         .and_then(|m| m.name)
         .unwrap_or_else(|| story_id.clone());
 
+    // Prefer the CRDT content store; fall back to filesystem only when the
+    // story has not been loaded into the DB (e.g. very early startup or tests
+    // that haven't called write_item_with_content).
+    if let Some(existing) = crate::db::read_content(&story_id) {
+        let updated = write_depends_on_in_content(&existing, &deps);
+        crate::db::write_content(&story_id, &updated);
+        let stage = crate::pipeline_state::read_typed(&story_id)
+            .ok()
+            .flatten()
+            .map(|i| i.stage.dir_name().to_string())
+            .unwrap_or_else(|| stage_dir.clone());
+        crate::db::write_item_with_content(&story_id, &stage, &updated);
+        if deps.is_empty() {
+            Some(format!(
+                "Cleared all dependencies for **{story_name}** ({story_id})."
+            ))
+        } else {
+            let nums: Vec<String> = deps.iter().map(|n| n.to_string()).collect();
+            Some(format!(
+                "Set depends_on: [{}] for **{story_name}** ({story_id}).",
+                nums.join(", ")
+            ))
+        }
+    } else {
         match write_depends_on(&path, &deps) {
             Ok(()) if deps.is_empty() => Some(format!(
                 "Cleared all dependencies for **{story_name}** ({story_id})."
@@ -81,6 +107,7 @@ pub(super) fn handle_depends(ctx: &CommandContext) -> Option<String> {
             Err(e) => Some(format!("Failed to update dependencies for {story_id}: {e}")),
         }
     }
+    }
 
 // ---------------------------------------------------------------------------
 // Tests
@@ -170,10 +197,10 @@ mod tests {
         write_story_file(
             tmp.path(),
            "1_backlog",
-            "42_story_foo.md",
+            "9912_story_foo.md",
            "---\nname: Foo\n---\n",
        );
-        let output = depends_cmd_with_root(tmp.path(), "42 abc").unwrap();
+        let output = depends_cmd_with_root(tmp.path(), "9912 abc").unwrap();
        assert!(
            output.contains("Invalid dependency number"),
            "non-numeric dep should error: {output}"
@@ -181,25 +208,24 @@ mod tests {
     }
 
     #[test]
-    fn depends_sets_deps_and_writes_to_file() {
+    fn depends_sets_deps_and_writes_to_content_store() {
        let tmp = tempfile::TempDir::new().unwrap();
        write_story_file(
            tmp.path(),
            "1_backlog",
-            "42_story_foo.md",
+            "9910_story_foo.md",
            "---\nname: Foo\n---\n",
        );
-        let output = depends_cmd_with_root(tmp.path(), "42 477 478").unwrap();
+        let output = depends_cmd_with_root(tmp.path(), "9910 477 478").unwrap();
        assert!(
            output.contains("477") && output.contains("478"),
            "response should mention dep numbers: {output}"
        );
-        let contents =
-            std::fs::read_to_string(tmp.path().join(".huskies/work/1_backlog/42_story_foo.md"))
-                .unwrap();
+        let contents = crate::db::read_content("9910_story_foo")
+            .expect("content store should have updated story");
        assert!(
            contents.contains("depends_on: [477, 478]"),
-            "file should have depends_on set: {contents}"
+            "content store should have depends_on set: {contents}"
        );
     }
 
@@ -209,20 +235,19 @@ mod tests {
        write_story_file(
            tmp.path(),
            "2_current",
-            "10_story_bar.md",
+            "9911_story_bar.md",
            "---\nname: Bar\ndepends_on: [477]\n---\n",
        );
-        let output = depends_cmd_with_root(tmp.path(), "10").unwrap();
+        let output = depends_cmd_with_root(tmp.path(), "9911").unwrap();
        assert!(
            output.contains("Cleared"),
            "should confirm clearing deps: {output}"
        );
-        let contents =
-            std::fs::read_to_string(tmp.path().join(".huskies/work/2_current/10_story_bar.md"))
-                .unwrap();
+        let contents = crate::db::read_content("9911_story_bar")
+            .expect("content store should have updated story");
        assert!(
            !contents.contains("depends_on"),
-            "file should have depends_on cleared: {contents}"
+            "content store should have depends_on cleared: {contents}"
        );
     }
 
@@ -232,12 +257,12 @@ mod tests {
        write_story_file(
            tmp.path(),
            "3_qa",
-            "55_story_inqa.md",
+            "9913_story_inqa.md",
            "---\nname: In QA\n---\n",
        );
-        let output = depends_cmd_with_root(tmp.path(), "55 100").unwrap();
+        let output = depends_cmd_with_root(tmp.path(), "9913 100").unwrap();
        assert!(
-            output.contains("In QA") || output.contains("55_story_inqa"),
+            output.contains("In QA") || output.contains("9913_story_inqa"),
            "should find story in qa stage: {output}"
        );
        assert!(output.contains("100"), "should mention dep 100: {output}");
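For context on what the new `write_depends_on_in_content` call is doing to the story text, here is a hypothetical sketch of a front-matter `depends_on` writer. This is our guess at the shape from the test assertions; the project's real helper may differ:

```rust
/// Set, replace, or clear a `depends_on: [..]` line inside the leading
/// YAML front matter block. An empty `deps` slice clears the field.
fn write_depends_on(content: &str, deps: &[u32]) -> String {
    let list = deps
        .iter()
        .map(|n| n.to_string())
        .collect::<Vec<_>>()
        .join(", ");
    let field = format!("depends_on: [{list}]");
    let mut out: Vec<String> = Vec::new();
    let (mut opened, mut in_fm, mut written) = (false, false, false);
    for line in content.lines() {
        if line.trim() == "---" {
            if !opened {
                opened = true;
                in_fm = true;
            } else if in_fm {
                // Closing fence: append the field if it was never replaced.
                if !written && !deps.is_empty() {
                    out.push(field.clone());
                }
                written = true;
                in_fm = false;
            }
            out.push(line.to_string());
            continue;
        }
        if in_fm && line.trim_start().starts_with("depends_on:") {
            if !deps.is_empty() {
                out.push(field.clone()); // replace the existing value
            }
            written = true;
            continue;
        }
        out.push(line.to_string());
    }
    out.join("\n") + "\n"
}

fn main() {
    let set = write_depends_on("---\nname: Foo\n---\nBody\n", &[477, 478]);
    assert_eq!(set, "---\nname: Foo\ndepends_on: [477, 478]\n---\nBody\n");
    let cleared = write_depends_on("---\nname: Bar\ndepends_on: [477]\n---\n", &[]);
    assert!(!cleared.contains("depends_on"));
    println!("ok");
}
```

This matches what the updated tests check: setting produces `depends_on: [477, 478]` inside the front matter, and clearing removes the line entirely.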
@@ -105,58 +105,13 @@ fn find_story_merge_commit(root: &std::path::Path, num_str: &str) -> Option<Stri
     if hash.is_empty() { None } else { Some(hash) }
 }
 
-/// Find the human-readable name of a story by searching content store then filesystem.
+/// Find the human-readable name of a story by searching CRDT then content store.
 fn find_story_name(root: &std::path::Path, num_str: &str) -> Option<String> {
-    // Try content store first.
-    for id in crate::db::all_content_ids() {
-        let file_num = id.split('_').next().unwrap_or("");
-        if file_num == num_str
-            && let Some(c) = crate::db::read_content(&id)
-        {
-            return crate::io::story_metadata::parse_front_matter(&c)
-                .ok()
-                .and_then(|m| m.name);
-        }
-    }
-
-    // Fallback: filesystem scan.
-    let stages = [
-        "1_backlog",
-        "2_current",
-        "3_qa",
-        "4_merge",
-        "5_done",
-        "6_archived",
-    ];
-    for stage in &stages {
-        let dir = root.join(".huskies").join("work").join(stage);
-        if !dir.exists() {
-            continue;
-        }
-        if let Ok(entries) = std::fs::read_dir(&dir) {
-            for entry in entries.flatten() {
-                let path = entry.path();
-                if path.extension().and_then(|e| e.to_str()) != Some("md") {
-                    continue;
-                }
-                if let Some(stem) = path.file_stem().and_then(|s| s.to_str()) {
-                    let file_num = stem
-                        .split('_')
-                        .next()
-                        .filter(|s| !s.is_empty() && s.chars().all(|c| c.is_ascii_digit()))
-                        .unwrap_or("");
-                    if file_num == num_str {
-                        return std::fs::read_to_string(&path).ok().and_then(|c| {
-                            crate::io::story_metadata::parse_front_matter(&c)
-                                .ok()
-                                .and_then(|m| m.name)
-                        });
-                    }
-                }
-            }
-        }
-    }
-    None
+    let (_, _, _, content) = crate::chat::lookup::find_story_by_number(root, num_str)?;
+    let content = content?;
+    crate::io::story_metadata::parse_front_matter(&content)
+        .ok()
+        .and_then(|m| m.name)
 }
 
 /// Return the `git show --stat` output for a commit.
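The deleted fallback scan keys on a numeric filename prefix. That predicate is easy to isolate; a self-contained sketch follows (the helper name `story_number` is ours, not the project's API):

```rust
/// Extract the numeric story prefix from a file stem such as "9910_story_foo".
/// Mirrors the `.split('_').next().filter(...)` chain in the removed scan.
fn story_number(stem: &str) -> Option<&str> {
    stem.split('_')
        .next()
        .filter(|s| !s.is_empty() && s.chars().all(|c| c.is_ascii_digit()))
}

fn main() {
    assert_eq!(story_number("9910_story_foo"), Some("9910"));
    assert_eq!(story_number("story_foo"), None); // prefix is not numeric
    println!("ok");
}
```

Because `split('_').next()` always yields at least one segment, the `filter` is what rejects non-numeric and empty prefixes.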
@@ -6,8 +6,7 @@
 
 use super::CommandContext;
 use crate::io::story_metadata::{
-    clear_front_matter_field, clear_front_matter_field_in_content, parse_front_matter,
-    set_front_matter_field,
+    clear_front_matter_field_in_content, parse_front_matter, set_front_matter_field,
 };
 use std::path::Path;
 
@@ -34,9 +33,9 @@ pub(super) fn handle_unblock(ctx: &CommandContext) -> Option<String> {
 /// Returns a Markdown-formatted response string suitable for all transports.
 /// Also used by the MCP `unblock` tool.
 ///
-/// Lookup priority: CRDT → content store → filesystem (Story 512).
+/// Lookup priority: CRDT → content store.
 pub(crate) fn unblock_by_number(project_root: &Path, story_number: &str) -> String {
-    let (story_id, _stage_dir, path, _content) =
+    let (story_id, _, _, _) =
         match crate::chat::lookup::find_story_by_number(project_root, story_number) {
             Some(found) => found,
             None => {
@@ -44,15 +43,7 @@ pub(crate) fn unblock_by_number(project_root: &Path, story_number: &str) -> Stri
             }
         };
 
-    // Prefer DB-backed unblock when the story is in the content store.
-    // Note: `content` may have come from the filesystem fallback in
-    // `find_story_by_number`, so we must re-check the DB rather than
-    // relying on `content.is_some()` alone.
-    if crate::db::read_content(&story_id).is_some() {
-        unblock_by_story_id(&story_id)
-    } else {
-        unblock_by_path(&path, &story_id)
-    }
+    unblock_by_story_id(&story_id)
 }
 
 /// Unblock a story using the content store (DB-backed).
@@ -105,64 +96,6 @@ fn unblock_by_story_id(story_id: &str) -> String {
     )
 }
 
-/// Core unblock logic: reset blocked state for a known story file path.
-///
-/// Reads front matter, verifies the story is blocked, clears the `blocked`
-/// flag, and resets `retry_count` to 0. Also used by the MCP `unblock` tool
-/// when the caller has already resolved the story path from a full `story_id`.
-pub(crate) fn unblock_by_path(path: &Path, story_id: &str) -> String {
-    let contents = match std::fs::read_to_string(path) {
-        Ok(c) => c,
-        Err(e) => return format!("Failed to read story file: {e}"),
-    };
-
-    let meta = match parse_front_matter(&contents) {
-        Ok(m) => m,
-        Err(e) => return format!("Failed to parse front matter for **{story_id}**: {e}"),
-    };
-
-    let story_name = meta.name.as_deref().unwrap_or(story_id).to_string();
-
-    let has_blocked = meta.blocked == Some(true);
-    let has_merge_failure = meta.merge_failure.is_some();
-
-    if !has_blocked && !has_merge_failure {
-        return format!("**{story_name}** ({story_id}) is not blocked. Nothing to unblock.");
-    }
-
-    // Clear the blocked flag if present.
-    if has_blocked && let Err(e) = clear_front_matter_field(path, "blocked") {
-        return format!("Failed to clear blocked flag on **{story_id}**: {e}");
-    }
-
-    // Clear merge_failure if present.
-    if has_merge_failure && let Err(e) = clear_front_matter_field(path, "merge_failure") {
-        return format!("Failed to clear merge_failure on **{story_id}**: {e}");
-    }
-
-    // Reset retry_count to 0 (re-read the updated file, modify, write).
-    let updated_contents = match std::fs::read_to_string(path) {
-        Ok(c) => c,
-        Err(e) => return format!("Failed to re-read story file after unblocking: {e}"),
-    };
-    let with_retry_reset = set_front_matter_field(&updated_contents, "retry_count", "0");
-    if let Err(e) = std::fs::write(path, &with_retry_reset) {
-        return format!("Failed to reset retry_count on **{story_id}**: {e}");
-    }
-
-    let mut cleared = Vec::new();
-    if has_blocked {
-        cleared.push("blocked");
-    }
-    if has_merge_failure {
-        cleared.push("merge_failure");
-    }
-    format!(
-        "Unblocked **{story_name}** ({story_id}). Cleared: {}. Retry count reset to 0.",
-        cleared.join(", ")
-    )
-}
-
 // ---------------------------------------------------------------------------
 // Tests
 // ---------------------------------------------------------------------------
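The deleted `unblock_by_path` cleared individual front-matter keys on disk. A hypothetical string-level sketch in the spirit of `clear_front_matter_field_in_content` (this is our guess at the shape, not the project's implementation):

```rust
/// Remove `key:` lines from the YAML front matter block (between the two
/// leading `---` fences), leaving the body untouched.
fn clear_front_matter_field(content: &str, key: &str) -> String {
    let prefix = format!("{key}:");
    let mut in_front_matter = false;
    let mut seen_open = false;
    let mut out: Vec<&str> = Vec::new();
    for line in content.lines() {
        if line.trim() == "---" {
            if !seen_open {
                seen_open = true;
                in_front_matter = true;
            } else if in_front_matter {
                in_front_matter = false;
            }
            out.push(line);
            continue;
        }
        if in_front_matter && line.trim_start().starts_with(&prefix) {
            continue; // drop the field being cleared
        }
        out.push(line);
    }
    let mut result = out.join("\n");
    if content.ends_with('\n') {
        result.push('\n');
    }
    result
}

fn main() {
    let cleared =
        clear_front_matter_field("---\nname: Foo\nblocked: true\n---\nBody\n", "blocked");
    assert_eq!(cleared, "---\nname: Foo\n---\nBody\n");
    println!("ok");
}
```

Tracking the two `---` fences keeps the removal scoped to the front matter, so a `blocked:` string in the story body would survive.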
@@ -58,6 +58,10 @@ use tokio::sync::{Mutex as TokioMutex, RwLock, broadcast, mpsc, watch};
 /// announce the shutdown to all configured rooms before the process exits.
 ///
 /// Must be called from within a Tokio runtime context (e.g., from `main`).
+///
+/// Returns a [`tokio::task::AbortHandle`] if the bot was actually spawned (Matrix/Discord
+/// transports), or `None` if the config is absent, disabled, or uses a webhook-based
+/// transport (Slack/WhatsApp) that does not require a persistent background task.
 pub fn spawn_bot(
     project_root: &Path,
     watcher_tx: broadcast::Sender<WatcherEvent>,
@@ -66,12 +70,12 @@ pub fn spawn_bot(
     shutdown_rx: watch::Receiver<Option<ShutdownReason>>,
     gateway_active_project: Option<Arc<RwLock<String>>>,
     gateway_projects: Vec<String>,
-) {
+) -> Option<tokio::task::AbortHandle> {
     let config = match BotConfig::load(project_root) {
         Some(c) => c,
         None => {
            crate::slog!("[matrix-bot] bot.toml absent or disabled; Matrix integration skipped");
-            return;
+            return None;
         }
     };
 
@@ -81,7 +85,7 @@ pub fn spawn_bot(
            "[bot] transport={} — skipping Matrix bot; webhooks handle this transport",
            config.transport
        );
-        return;
+        return None;
     }
 
     crate::slog!(
@@ -93,7 +97,7 @@ pub fn spawn_bot(
     let root = project_root.to_path_buf();
     let watcher_rx = watcher_tx.subscribe();
     let watcher_rx_auto = watcher_tx.subscribe();
-    tokio::spawn(async move {
+    let handle = tokio::spawn(async move {
         if let Err(e) = bot::run_bot(
             config,
             root,
@@ -110,4 +114,5 @@ pub fn spawn_bot(
            crate::slog!("[matrix-bot] Fatal error: {e}");
        }
     });
+    Some(handle.abort_handle())
 }
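The `spawn_bot` change above returns an abort handle so a later configuration save can kill and respawn the bot task. A dependency-free analogue of that pattern using std threads and an atomic stop flag (tokio's real `AbortHandle` cancels the task outright; all names here are ours):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;
use std::time::Duration;

/// Minimal stand-in for an abort handle: flips a flag the worker polls.
struct StopHandle {
    stop: Arc<AtomicBool>,
}

impl StopHandle {
    fn abort(&self) {
        self.stop.store(true, Ordering::SeqCst);
    }
}

/// Returns None when the worker is disabled, mirroring spawn_bot's contract.
fn spawn_worker(enabled: bool) -> Option<StopHandle> {
    if !enabled {
        return None;
    }
    let stop = Arc::new(AtomicBool::new(false));
    let flag = Arc::clone(&stop);
    thread::spawn(move || {
        while !flag.load(Ordering::SeqCst) {
            thread::sleep(Duration::from_millis(5)); // stand-in for bot work
        }
    });
    Some(StopHandle { stop })
}

fn main() {
    assert!(spawn_worker(false).is_none()); // disabled: nothing spawned
    let handle = spawn_worker(true).expect("worker spawned");
    handle.abort(); // a config save would abort, then respawn with new settings
    println!("ok");
}
```

The `Option` return is the useful part of the contract: callers that get `None` know there is nothing to abort or restart later.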
+435 -12
@@ -19,6 +19,7 @@ use std::collections::BTreeMap;
 use std::collections::HashMap;
 use std::path::{Path, PathBuf};
 use std::sync::Arc;
+use tokio::sync::Mutex as TokioMutex;
 use tokio::sync::RwLock;
 use uuid::Uuid;
 
@@ -106,8 +107,13 @@ pub struct GatewayState {
     pub joined_agents: Arc<RwLock<Vec<JoinedAgent>>>,
     /// One-time join tokens that have been issued but not yet consumed.
     pending_tokens: Arc<RwLock<HashMap<String, PendingToken>>>,
-    /// Directory containing `projects.toml`, used for persisting agent data.
+    /// Directory containing `projects.toml` and the `.huskies/` subfolder.
     pub config_dir: PathBuf,
+    /// HTTP port the gateway is listening on.
+    pub port: u16,
+    /// Abort handle for the running Matrix bot task (if any).
+    /// Stored so the bot can be restarted when credentials change.
+    pub bot_handle: Arc<TokioMutex<Option<tokio::task::AbortHandle>>>,
 }
 
 /// Load persisted agents from `<config_dir>/gateway_agents.json`.
@@ -138,7 +144,7 @@ impl GatewayState {
     /// The first project in the config becomes the active project by default.
     /// Previously registered agents are loaded from `gateway_agents.json` in
     /// `config_dir` if the file exists.
-    pub fn new(config: GatewayConfig, config_dir: PathBuf) -> Result<Self, String> {
+    pub fn new(config: GatewayConfig, config_dir: PathBuf, port: u16) -> Result<Self, String> {
         if config.projects.is_empty() {
             return Err("projects.toml must define at least one project".to_string());
         }
@@ -151,6 +157,8 @@ impl GatewayState {
             joined_agents: Arc::new(RwLock::new(agents)),
             pending_tokens: Arc::new(RwLock::new(HashMap::new())),
             config_dir,
+            port,
+            bot_handle: Arc::new(TokioMutex::new(None)),
         })
     }
 
@@ -898,6 +906,9 @@ const GATEWAY_UI_HTML: &str = r#"<!DOCTYPE html>
     .status { margin-top: 1rem; font-size: 0.8rem; color: #64748b; min-height: 1.25rem; }
     .status.ok { color: #4ade80; }
     .status.err { color: #f87171; }
+    .nav { margin-top: 1.25rem; padding-top: 1rem; border-top: 1px solid #334155; display: flex; gap: 1rem; }
+    .nav a { font-size: 0.8rem; color: #64748b; text-decoration: none; }
+    .nav a:hover { color: #94a3b8; }
   </style>
 </head>
 <body>
@@ -918,6 +929,9 @@ const GATEWAY_UI_HTML: &str = r#"<!DOCTYPE html>
       <span id="active-name"></span>
     </div>
     <div id="status" class="status"></div>
+    <nav class="nav">
+      <a href="/bot-config">🤖 Bot Configuration</a>
+    </nav>
   </div>
   <script>
     async function loadState() {
@@ -1051,6 +1065,343 @@ pub async fn gateway_switch_handler(
     ))
 }
 
+// ── Bot configuration API ────────────────────────────────────────────
+
+/// Request/response body for the bot configuration API.
+#[derive(Deserialize, Serialize, Default)]
+struct BotConfigPayload {
+    /// Chat transport: `"matrix"` or `"slack"`.
+    transport: String,
+    // Matrix fields
+    homeserver: Option<String>,
+    username: Option<String>,
+    password: Option<String>,
+    // Slack fields
+    slack_bot_token: Option<String>,
+    slack_signing_secret: Option<String>,
+}
+
+/// Read the current raw bot.toml (without validation) as key/value pairs for
+/// the configuration UI. Returns an empty payload if the file does not exist.
+fn read_bot_config_raw(config_dir: &Path) -> BotConfigPayload {
+    let path = config_dir.join(".huskies").join("bot.toml");
+    let content = match std::fs::read_to_string(&path) {
+        Ok(c) => c,
+        Err(_) => return BotConfigPayload::default(),
+    };
+    let table: toml::Value = match toml::from_str(&content) {
+        Ok(v) => v,
+        Err(_) => return BotConfigPayload::default(),
+    };
+    let s = |key: &str| -> Option<String> {
+        table
+            .get(key)
+            .and_then(|v| v.as_str())
+            .map(|s| s.to_string())
+    };
+    BotConfigPayload {
+        transport: s("transport").unwrap_or_else(|| "matrix".to_string()),
+        homeserver: s("homeserver"),
+        username: s("username"),
+        password: s("password"),
+        slack_bot_token: s("slack_bot_token"),
+        slack_signing_secret: s("slack_signing_secret"),
+    }
+}
+
+/// Write a `bot.toml` from the given payload.
+fn write_bot_config(config_dir: &Path, payload: &BotConfigPayload) -> Result<(), String> {
+    let huskies_dir = config_dir.join(".huskies");
+    std::fs::create_dir_all(&huskies_dir)
+        .map_err(|e| format!("cannot create .huskies dir: {e}"))?;
+    let path = huskies_dir.join("bot.toml");
+
+    let content = match payload.transport.as_str() {
+        "slack" => {
+            format!(
+                "enabled = true\ntransport = \"slack\"\n\nslack_bot_token = {}\nslack_signing_secret = {}\nslack_channel_ids = []\n",
+                toml_string(payload.slack_bot_token.as_deref().unwrap_or("")),
+                toml_string(payload.slack_signing_secret.as_deref().unwrap_or("")),
+            )
+        }
+        _ => {
+            // Default to matrix
+            format!(
+                "enabled = true\ntransport = \"matrix\"\n\nhomeserver = {}\nusername = {}\npassword = {}\nroom_ids = []\nallowed_users = []\n",
+                toml_string(payload.homeserver.as_deref().unwrap_or("")),
+                toml_string(payload.username.as_deref().unwrap_or("")),
+                toml_string(payload.password.as_deref().unwrap_or("")),
+            )
+        }
+    };
+
+    std::fs::write(&path, content).map_err(|e| format!("cannot write bot.toml: {e}"))
+}
+
+/// Escape a string as a TOML quoted string.
+fn toml_string(s: &str) -> String {
+    format!("\"{}\"", s.replace('\\', "\\\\").replace('"', "\\\""))
+}
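The `toml_string` helper above is small but order-sensitive: backslashes must be escaped before quotes, otherwise the backslash inserted for each quote would itself get doubled. A standalone check:

```rust
// Same body as the helper in the diff, shown with a usage check.
// The escaping order matters: backslashes first, then quotes.
fn toml_string(s: &str) -> String {
    format!("\"{}\"", s.replace('\\', "\\\\").replace('"', "\\\""))
}

fn main() {
    // `pa"ss\word` becomes the TOML basic string "pa\"ss\\word".
    assert_eq!(toml_string("pa\"ss\\word"), "\"pa\\\"ss\\\\word\"");
    println!("{}", toml_string("hello"));
}
```

Note this covers only quotes and backslashes; TOML basic strings also require escapes for control characters, which credentials normally do not contain.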
+/// `GET /api/gateway/bot-config` — return current bot.toml fields as JSON.
+#[handler]
+pub async fn gateway_bot_config_get_handler(state: Data<&Arc<GatewayState>>) -> Response {
+    let payload = read_bot_config_raw(&state.config_dir);
+    Response::builder()
+        .status(StatusCode::OK)
+        .header("Content-Type", "application/json")
+        .body(Body::from(serde_json::to_vec(&payload).unwrap_or_default()))
+}
+
+/// `POST /api/gateway/bot-config` — write new bot.toml and restart the bot.
+#[handler]
+pub async fn gateway_bot_config_save_handler(
+    state: Data<&Arc<GatewayState>>,
+    body: Json<BotConfigPayload>,
+) -> Response {
+    if let Err(e) = write_bot_config(&state.config_dir, &body) {
+        let err = json!({ "ok": false, "error": e });
+        return Response::builder()
+            .status(StatusCode::INTERNAL_SERVER_ERROR)
+            .header("Content-Type", "application/json")
+            .body(Body::from(serde_json::to_vec(&err).unwrap_or_default()));
+    }
+
+    // Abort the existing bot task (if any) and spawn a fresh one with the new config.
+    {
+        let mut handle = state.bot_handle.lock().await;
+        if let Some(h) = handle.take() {
+            h.abort();
+        }
+        let gateway_projects: Vec<String> = state.config.projects.keys().cloned().collect();
+        let new_handle = spawn_gateway_bot(
+            &state.config_dir,
+            Arc::clone(&state.active_project),
+            gateway_projects,
+            state.port,
+        );
+        *handle = new_handle;
+    }
+
+    crate::slog!("[gateway] Bot configuration saved; bot restarted");
+    let ok = json!({ "ok": true });
+    Response::builder()
+        .status(StatusCode::OK)
+        .header("Content-Type", "application/json")
+        .body(Body::from(serde_json::to_vec(&ok).unwrap_or_default()))
+}
+
+/// Self-contained HTML page for bot configuration.
+const GATEWAY_BOT_CONFIG_HTML: &str = r#"<!DOCTYPE html>
+<html lang="en">
+<head>
+  <meta charset="UTF-8">
+  <meta name="viewport" content="width=device-width, initial-scale=1.0">
+  <title>Bot Configuration — Huskies Gateway</title>
+  <style>
+    *, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }
+    body {
+      font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
+      background: #0f172a;
+      color: #e2e8f0;
+      min-height: 100vh;
+      display: flex;
+      align-items: center;
+      justify-content: center;
+    }
+    .card {
+      background: #1e293b;
+      border: 1px solid #334155;
+      border-radius: 12px;
+      padding: 2rem;
+      width: 100%;
+      max-width: 520px;
+      box-shadow: 0 4px 24px rgba(0,0,0,0.4);
+    }
+    .header {
+      display: flex;
+      align-items: center;
+      gap: 0.75rem;
+      margin-bottom: 1.5rem;
+    }
+    .back {
+      color: #64748b;
+      text-decoration: none;
+      font-size: 0.85rem;
+      margin-right: auto;
+    }
+    .back:hover { color: #94a3b8; }
+    .logo { font-size: 1.5rem; }
+    h1 { font-size: 1.2rem; font-weight: 600; color: #f8fafc; }
+    .field { margin-bottom: 1rem; }
+    label {
+      display: block;
+      font-size: 0.75rem;
+      font-weight: 500;
+      color: #94a3b8;
+      text-transform: uppercase;
+      letter-spacing: 0.05em;
+      margin-bottom: 0.4rem;
+    }
+    input, select {
+      width: 100%;
+      padding: 0.625rem 0.875rem;
+      background: #0f172a;
+      border: 1px solid #334155;
+      border-radius: 8px;
|
color: #f1f5f9;
|
||||||
|
font-size: 0.9rem;
|
||||||
|
}
|
||||||
|
input:focus, select:focus { outline: none; border-color: #6366f1; box-shadow: 0 0 0 2px rgba(99,102,241,0.25); }
|
||||||
|
select {
|
||||||
|
cursor: pointer;
|
||||||
|
appearance: none;
|
||||||
|
background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' viewBox='0 0 12 12'%3E%3Cpath fill='%2394a3b8' d='M6 8L1 3h10z'/%3E%3C/svg%3E");
|
||||||
|
background-repeat: no-repeat;
|
||||||
|
background-position: right 0.875rem center;
|
||||||
|
padding-right: 2.5rem;
|
||||||
|
}
|
||||||
|
.section { margin-top: 1rem; }
|
||||||
|
.divider {
|
||||||
|
border: none;
|
||||||
|
border-top: 1px solid #334155;
|
||||||
|
margin: 1.25rem 0;
|
||||||
|
}
|
||||||
|
button {
|
||||||
|
width: 100%;
|
||||||
|
padding: 0.75rem;
|
||||||
|
background: #6366f1;
|
||||||
|
border: none;
|
||||||
|
border-radius: 8px;
|
||||||
|
color: #fff;
|
||||||
|
font-size: 0.9rem;
|
||||||
|
font-weight: 600;
|
||||||
|
cursor: pointer;
|
||||||
|
margin-top: 1.25rem;
|
||||||
|
}
|
||||||
|
button:hover { background: #4f46e5; }
|
||||||
|
button:disabled { background: #334155; color: #64748b; cursor: not-allowed; }
|
||||||
|
.status { margin-top: 0.875rem; font-size: 0.8rem; color: #64748b; min-height: 1.25rem; }
|
||||||
|
.status.ok { color: #4ade80; }
|
||||||
|
.status.err { color: #f87171; }
|
||||||
|
</style>
|
||||||
|
</head>
|
||||||
|
<body>
|
||||||
|
<div class="card">
|
||||||
|
<div class="header">
|
||||||
|
<a href="/" class="back">← Gateway</a>
|
||||||
|
<span class="logo">🤖</span>
|
||||||
|
<h1>Bot Configuration</h1>
|
||||||
|
</div>
|
||||||
|
<div class="field">
|
||||||
|
<label for="transport">Transport</label>
|
||||||
|
<select id="transport" onchange="onTransportChange(this.value)">
|
||||||
|
<option value="matrix">Matrix</option>
|
||||||
|
<option value="slack">Slack</option>
|
||||||
|
</select>
|
||||||
|
</div>
|
||||||
|
<hr class="divider">
|
||||||
|
<div id="matrix-fields" class="section">
|
||||||
|
<div class="field">
|
||||||
|
<label for="homeserver">Homeserver URL</label>
|
||||||
|
<input type="text" id="homeserver" placeholder="https://matrix.example.com">
|
||||||
|
</div>
|
||||||
|
<div class="field">
|
||||||
|
<label for="username">Bot Username</label>
|
||||||
|
<input type="text" id="username" placeholder="@bot:example.com">
|
||||||
|
</div>
|
||||||
|
<div class="field">
|
||||||
|
<label for="password">Password</label>
|
||||||
|
<input type="password" id="password" placeholder="••••••••">
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<div id="slack-fields" class="section" style="display:none">
|
||||||
|
<div class="field">
|
||||||
|
<label for="slack-bot-token">Bot Token</label>
|
||||||
|
<input type="password" id="slack-bot-token" placeholder="xoxb-…">
|
||||||
|
</div>
|
||||||
|
<div class="field">
|
||||||
|
<label for="slack-signing-secret">App / Signing Secret</label>
|
||||||
|
<input type="password" id="slack-signing-secret" placeholder="Your signing secret">
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<button id="save-btn" onclick="save()">Save & Restart Bot</button>
|
||||||
|
<div id="status" class="status"></div>
|
||||||
|
</div>
|
||||||
|
<script>
|
||||||
|
function onTransportChange(v) {
|
||||||
|
document.getElementById('matrix-fields').style.display = v === 'matrix' ? '' : 'none';
|
||||||
|
document.getElementById('slack-fields').style.display = v === 'slack' ? '' : 'none';
|
||||||
|
}
|
||||||
|
async function loadConfig() {
|
||||||
|
try {
|
||||||
|
const r = await fetch('/api/gateway/bot-config');
|
||||||
|
const d = await r.json();
|
||||||
|
document.getElementById('transport').value = d.transport || 'matrix';
|
||||||
|
onTransportChange(d.transport || 'matrix');
|
||||||
|
document.getElementById('homeserver').value = d.homeserver || '';
|
||||||
|
document.getElementById('username').value = d.username || '';
|
||||||
|
document.getElementById('password').value = d.password || '';
|
||||||
|
document.getElementById('slack-bot-token').value = d.slack_bot_token || '';
|
||||||
|
document.getElementById('slack-signing-secret').value = d.slack_signing_secret || '';
|
||||||
|
} catch(e) {
|
||||||
|
document.getElementById('status').textContent = 'Failed to load config: ' + e;
|
||||||
|
document.getElementById('status').className = 'status err';
|
||||||
|
}
|
||||||
|
}
|
||||||
|
async function save() {
|
||||||
|
const btn = document.getElementById('save-btn');
|
||||||
|
const statusEl = document.getElementById('status');
|
||||||
|
btn.disabled = true;
|
||||||
|
btn.textContent = 'Saving…';
|
||||||
|
statusEl.className = 'status';
|
||||||
|
statusEl.textContent = '';
|
||||||
|
const transport = document.getElementById('transport').value;
|
||||||
|
const payload = { transport };
|
||||||
|
if (transport === 'matrix') {
|
||||||
|
payload.homeserver = document.getElementById('homeserver').value;
|
||||||
|
payload.username = document.getElementById('username').value;
|
||||||
|
payload.password = document.getElementById('password').value;
|
||||||
|
} else {
|
||||||
|
payload.slack_bot_token = document.getElementById('slack-bot-token').value;
|
||||||
|
payload.slack_signing_secret = document.getElementById('slack-signing-secret').value;
|
||||||
|
}
|
||||||
|
try {
|
||||||
|
const r = await fetch('/api/gateway/bot-config', {
|
||||||
|
method: 'POST',
|
||||||
|
headers: {'Content-Type': 'application/json'},
|
||||||
|
body: JSON.stringify(payload)
|
||||||
|
});
|
||||||
|
const d = await r.json();
|
||||||
|
if (d.ok) {
|
||||||
|
statusEl.className = 'status ok';
|
||||||
|
statusEl.textContent = 'Saved — bot restarted with new credentials.';
|
||||||
|
} else {
|
||||||
|
statusEl.className = 'status err';
|
||||||
|
statusEl.textContent = d.error || 'Save failed';
|
||||||
|
}
|
||||||
|
} catch(e) {
|
||||||
|
statusEl.className = 'status err';
|
||||||
|
statusEl.textContent = 'Error: ' + e;
|
||||||
|
}
|
||||||
|
btn.disabled = false;
|
||||||
|
btn.textContent = 'Save & Restart Bot';
|
||||||
|
}
|
||||||
|
loadConfig();
|
||||||
|
</script>
|
||||||
|
</body>
|
||||||
|
</html>
|
||||||
|
"#;
|
||||||
|
|
||||||
|
/// Serve the bot configuration HTML page at `GET /bot-config`.
|
||||||
|
#[handler]
|
||||||
|
pub async fn gateway_bot_config_page_handler() -> Response {
|
||||||
|
Response::builder()
|
||||||
|
.status(StatusCode::OK)
|
||||||
|
.header("Content-Type", "text/html; charset=utf-8")
|
||||||
|
.body(Body::from(GATEWAY_BOT_CONFIG_HTML))
|
||||||
|
}
|
||||||
|
|
||||||
 // ── Gateway server startup ───────────────────────────────────────────

 /// Start the gateway HTTP server. This is the entry point when `--gateway` is used.
@@ -1062,7 +1413,8 @@ pub async fn run(config_path: &Path, port: u16) -> Result<(), std::io::Error> {
         .to_path_buf();

     let config = GatewayConfig::load(config_path).map_err(std::io::Error::other)?;
-    let state = GatewayState::new(config, config_dir.clone()).map_err(std::io::Error::other)?;
+    let state =
+        GatewayState::new(config, config_dir.clone(), port).map_err(std::io::Error::other)?;
     let state_arc = Arc::new(state);

     let active = state_arc.active_project.read().await.clone();
@@ -1086,17 +1438,23 @@ pub async fn run(config_path: &Path, port: u16) -> Result<(), std::io::Error> {

     // Spawn the Matrix bot if `.huskies/bot.toml` exists in the config directory.
     let gateway_projects: Vec<String> = state_arc.config.projects.keys().cloned().collect();
-    spawn_gateway_bot(
+    let bot_abort = spawn_gateway_bot(
         &config_dir,
         Arc::clone(&state_arc.active_project),
         gateway_projects,
         port,
     );
+    *state_arc.bot_handle.lock().await = bot_abort;

     let route = poem::Route::new()
         .at("/", poem::get(gateway_index_handler))
+        .at("/bot-config", poem::get(gateway_bot_config_page_handler))
         .at("/api/gateway", poem::get(gateway_api_handler))
         .at("/api/gateway/switch", poem::post(gateway_switch_handler))
+        .at(
+            "/api/gateway/bot-config",
+            poem::get(gateway_bot_config_get_handler).post(gateway_bot_config_save_handler),
+        )
         .at(
             "/mcp",
             poem::post(gateway_mcp_post_handler).get(gateway_mcp_get_handler),
@@ -1167,12 +1525,14 @@ fn write_gateway_mcp_json(config_dir: &Path, port: u16) -> Result<(), std::io::E
 /// returns immediately without spawning anything. When the bot is enabled it
 /// receives a shared reference to the gateway's active-project `RwLock` so the
 /// `switch` command can change the active project without going through HTTP.
+///
+/// Returns a [`tokio::task::AbortHandle`] if the bot task was spawned, `None` otherwise.
 fn spawn_gateway_bot(
     config_dir: &Path,
     active_project: ActiveProject,
     gateway_projects: Vec<String>,
     port: u16,
-) {
+) -> Option<tokio::task::AbortHandle> {
     use crate::agents::AgentPool;
     use tokio::sync::{broadcast, mpsc};

@@ -1202,7 +1562,7 @@ fn spawn_gateway_bot(
         shutdown_rx,
         Some(active_project),
         gateway_projects,
-    );
+    )
 }

 // ── Tests ────────────────────────────────────────────────────────────
@@ -1238,7 +1598,7 @@ url = "http://localhost:3002"
         let config = GatewayConfig {
             projects: BTreeMap::new(),
         };
-        assert!(GatewayState::new(config, PathBuf::new()).is_err());
+        assert!(GatewayState::new(config, PathBuf::from("."), 3000).is_err());
     }

     #[test]
@@ -1257,7 +1617,7 @@ url = "http://localhost:3002"
             },
         );
         let config = GatewayConfig { projects };
-        let state = GatewayState::new(config, PathBuf::new()).unwrap();
+        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();
         let active = state.active_project.blocking_read().clone();
         assert_eq!(active, "alpha"); // BTreeMap sorts alphabetically.
     }
@@ -1290,7 +1650,7 @@ url = "http://localhost:3002"
             },
         );
         let config = GatewayConfig { projects };
-        let state = GatewayState::new(config, PathBuf::new()).unwrap();
+        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();

         let params = json!({ "arguments": { "project": "beta" } });
         let resp = handle_switch_project(&params, &state).await;
@@ -1310,7 +1670,7 @@ url = "http://localhost:3002"
             },
         );
         let config = GatewayConfig { projects };
-        let state = GatewayState::new(config, PathBuf::new()).unwrap();
+        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();

         let params = json!({ "arguments": { "project": "nonexistent" } });
         let resp = handle_switch_project(&params, &state).await;
@@ -1327,7 +1687,7 @@ url = "http://localhost:3002"
             },
         );
         let config = GatewayConfig { projects };
-        let state = GatewayState::new(config, PathBuf::new()).unwrap();
+        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();

         let url = state.active_url().await.unwrap();
         assert_eq!(url, "http://my:3001");
@@ -1463,7 +1823,7 @@ enabled = false
             },
         );
         let config = GatewayConfig { projects };
-        Arc::new(GatewayState::new(config, PathBuf::new()).unwrap())
+        Arc::new(GatewayState::new(config, PathBuf::new(), 3000).unwrap())
     }

     #[tokio::test]
@@ -1611,4 +1971,67 @@ enabled = false
         let resp = cli.delete("/gateway/agents/no-such-id").send().await;
         assert_eq!(resp.0.status(), StatusCode::NOT_FOUND);
     }
+
+    // ── Bot configuration helper tests ──────────────────────────────────
+
+    #[test]
+    fn toml_string_plain() {
+        assert_eq!(toml_string("hello"), "\"hello\"");
+    }
+
+    #[test]
+    fn toml_string_escapes_quotes_and_backslashes() {
+        assert_eq!(toml_string(r#"say "hi""#), r#""say \"hi\"""#);
+        assert_eq!(toml_string(r"a\b"), r#""a\\b""#);
+    }
+
+    #[test]
+    fn write_and_read_matrix_bot_config_round_trips() {
+        let tmp = tempfile::tempdir().unwrap();
+        let payload = BotConfigPayload {
+            transport: "matrix".into(),
+            homeserver: Some("https://matrix.example.com".into()),
+            username: Some("@bot:example.com".into()),
+            password: Some("s3cr3t".into()),
+            slack_bot_token: None,
+            slack_signing_secret: None,
+        };
+        write_bot_config(tmp.path(), &payload).expect("write should succeed");
+
+        let read = read_bot_config_raw(tmp.path());
+        assert_eq!(read.transport, "matrix");
+        assert_eq!(
+            read.homeserver.as_deref(),
+            Some("https://matrix.example.com")
+        );
+        assert_eq!(read.username.as_deref(), Some("@bot:example.com"));
+        assert_eq!(read.password.as_deref(), Some("s3cr3t"));
+    }
+
+    #[test]
+    fn write_and_read_slack_bot_config_round_trips() {
+        let tmp = tempfile::tempdir().unwrap();
+        let payload = BotConfigPayload {
+            transport: "slack".into(),
+            homeserver: None,
+            username: None,
+            password: None,
+            slack_bot_token: Some("xoxb-abc123".into()),
+            slack_signing_secret: Some("sig-secret".into()),
+        };
+        write_bot_config(tmp.path(), &payload).expect("write should succeed");
+
+        let read = read_bot_config_raw(tmp.path());
+        assert_eq!(read.transport, "slack");
+        assert_eq!(read.slack_bot_token.as_deref(), Some("xoxb-abc123"));
+        assert_eq!(read.slack_signing_secret.as_deref(), Some("sig-secret"));
+    }
+
+    #[test]
+    fn read_bot_config_raw_returns_default_when_file_absent() {
+        let tmp = tempfile::tempdir().unwrap();
+        let read = read_bot_config_raw(tmp.path());
+        assert_eq!(read.transport, "");
+        assert!(read.homeserver.is_none());
+    }
 }
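The escaping behaviour pinned down by the `toml_string` tests above can be sketched as a small standalone function. This is a hypothetical reimplementation for illustration only (the name `toml_string_sketch` is mine, and the real helper in `gateway.rs` may handle additional escapes such as control characters):

```rust
/// Quote a string as a TOML basic string, escaping backslashes and
/// double quotes. A sketch of the contract the tests above describe.
fn toml_string_sketch(s: &str) -> String {
    let mut out = String::with_capacity(s.len() + 2);
    out.push('"');
    for c in s.chars() {
        match c {
            '\\' => out.push_str("\\\\"), // backslash first, so escapes compose
            '"' => out.push_str("\\\""),
            other => out.push(other),
        }
    }
    out.push('"');
    out
}

fn main() {
    assert_eq!(toml_string_sketch("hello"), "\"hello\"");
    assert_eq!(toml_string_sketch(r#"say "hi""#), r#""say \"hi\"""#);
    assert_eq!(toml_string_sketch(r"a\b"), r#""a\\b""#);
}
```

Escaping the backslash before the quote matters: doing it in the other order inside a single pass over characters is still safe here because each input character is handled exactly once.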
@@ -15,6 +15,23 @@ pub(super) async fn tool_merge_agent_work(
         .and_then(|v| v.as_str())
         .ok_or("Missing required argument: story_id")?;

+    // Check CRDT stage before attempting merge — if already done or archived,
+    // return success immediately to avoid spurious error notifications.
+    if let Some(item) = crate::crdt_state::read_item(story_id)
+        && (item.stage == "5_done" || item.stage == "6_archived")
+    {
+        return serde_json::to_string_pretty(&json!({
+            "story_id": story_id,
+            "status": "completed",
+            "success": true,
+            "message": format!(
+                "Story '{}' is already in '{}' — no merge needed.",
+                story_id, item.stage
+            ),
+        }))
+        .map_err(|e| format!("Serialization error: {e}"));
+    }
+
     let project_root = ctx.agents.get_project_root(&ctx.state)?;
     ctx.agents.start_merge_agent_work(&project_root, story_id)?;

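The guard above makes the merge tool idempotent: a second call for a story that already reached a terminal stage reports success instead of an error. The decision itself can be sketched as a pure function (the name `merge_skip_message` is hypothetical; the real handler returns a JSON body rather than a plain string, and its message uses slightly different punctuation):

```rust
/// Decide whether a merge request is a no-op because the story is already
/// in a terminal stage. Stage names mirror the ones in the diff above.
fn merge_skip_message(story_id: &str, stage: &str) -> Option<String> {
    if stage == "5_done" || stage == "6_archived" {
        Some(format!(
            "Story '{story_id}' is already in '{stage}' - no merge needed."
        ))
    } else {
        None
    }
}

fn main() {
    assert!(merge_skip_message("99_story", "5_done").is_some());
    assert!(merge_skip_message("99_story", "2_current").is_none());
}
```

Keeping the check pure like this is what makes the behaviour easy to pin down in the unit tests that follow.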
@@ -258,6 +275,60 @@ mod tests {
         assert!(result.unwrap_err().contains("story_id"));
     }
+
+    #[tokio::test]
+    async fn tool_merge_agent_work_already_done_returns_success() {
+        crate::crdt_state::init_for_test();
+        crate::crdt_state::write_item(
+            "99_story_already_done",
+            "5_done",
+            Some("Already done story"),
+            None,
+            None,
+            None,
+            None,
+            None,
+            None,
+        );
+        let tmp = tempfile::tempdir().unwrap();
+        let ctx = test_ctx(tmp.path());
+        let result =
+            tool_merge_agent_work(&json!({"story_id": "99_story_already_done"}), &ctx).await;
+        assert!(result.is_ok(), "expected Ok, got: {result:?}");
+        let body = result.unwrap();
+        let v: serde_json::Value = serde_json::from_str(&body).unwrap();
+        assert_eq!(v["status"], "completed");
+        assert_eq!(v["success"], true);
+        assert!(v["message"].as_str().unwrap().contains("5_done"));
+    }
+
+    #[tokio::test]
+    async fn tool_merge_agent_work_already_archived_returns_success() {
+        crate::crdt_state::init_for_test();
+        crate::crdt_state::write_item(
+            "98_story_already_archived",
+            "6_archived",
+            Some("Already archived story"),
+            None,
+            None,
+            None,
+            None,
+            None,
+            None,
+        );
+        let tmp = tempfile::tempdir().unwrap();
+        let ctx = test_ctx(tmp.path());
+        let result =
+            tool_merge_agent_work(&json!({"story_id": "98_story_already_archived"}), &ctx).await;
+        assert!(result.is_ok(), "expected Ok, got: {result:?}");
+        let body = result.unwrap();
+        let v: serde_json::Value = serde_json::from_str(&body).unwrap();
+        assert_eq!(v["status"], "completed");
+        assert_eq!(v["success"], true);
+        assert!(v["message"].as_str().unwrap().contains("6_archived"));
+    }
+
     #[tokio::test]
     async fn tool_move_story_to_merge_missing_story_id() {
         let tmp = tempfile::tempdir().unwrap();
@@ -513,6 +513,28 @@ fn handle_tools_list(id: Option<Value>) -> JsonRpcResponse {
                 "required": ["story_id", "criterion_index"]
             }
         },
+        {
+            "name": "edit_criterion",
+            "description": "Update the text of an existing acceptance criterion in place, preserving its checked/unchecked state. Uses a 0-based index counting all criteria (both checked and unchecked).",
+            "inputSchema": {
+                "type": "object",
+                "properties": {
+                    "story_id": {
+                        "type": "string",
+                        "description": "Story identifier (filename stem, e.g. '28_my_story')"
+                    },
+                    "criterion_index": {
+                        "type": "integer",
+                        "description": "0-based index of the criterion to edit (counts all criteria)"
+                    },
+                    "new_text": {
+                        "type": "string",
+                        "description": "New text for the criterion (without the '- [ ] ' or '- [x] ' prefix)"
+                    }
+                },
+                "required": ["story_id", "criterion_index", "new_text"]
+            }
+        },
         {
             "name": "add_criterion",
             "description": "Add an acceptance criterion to an existing story file. Appends '- [ ] {criterion}' after the last existing criterion in the '## Acceptance Criteria' section. Auto-commits via the filesystem watcher.",
@@ -531,6 +553,24 @@ fn handle_tools_list(id: Option<Value>) -> JsonRpcResponse {
                 "required": ["story_id", "criterion"]
             }
         },
+        {
+            "name": "remove_criterion",
+            "description": "Remove an acceptance criterion from a story by its 0-based index (counting all criteria, both checked and unchecked). Returns an error if the index is out of range. Auto-commits via the filesystem watcher.",
+            "inputSchema": {
+                "type": "object",
+                "properties": {
+                    "story_id": {
+                        "type": "string",
+                        "description": "Story identifier (filename stem, e.g. '28_my_story')"
+                    },
+                    "criterion_index": {
+                        "type": "integer",
+                        "description": "0-based index of the criterion to remove (counts all criteria)"
+                    }
+                },
+                "required": ["story_id", "criterion_index"]
+            }
+        },
         {
             "name": "update_story",
             "description": "Update an existing story file. Can replace the '## User Story' and/or '## Description' section content, and/or set YAML front matter fields (e.g. agent, qa). Auto-commits via the filesystem watcher.",
@@ -1242,7 +1282,9 @@ async fn handle_tools_call(id: Option<Value>, params: &Value, ctx: &AppContext)
         "accept_story" => story_tools::tool_accept_story(&args, ctx),
         // Story mutation tools (auto-commit to master)
         "check_criterion" => story_tools::tool_check_criterion(&args, ctx),
+        "edit_criterion" => story_tools::tool_edit_criterion(&args, ctx),
         "add_criterion" => story_tools::tool_add_criterion(&args, ctx),
+        "remove_criterion" => story_tools::tool_remove_criterion(&args, ctx),
         "update_story" => story_tools::tool_update_story(&args, ctx),
         // Spike lifecycle tools
         "create_spike" => story_tools::tool_create_spike(&args, ctx),
@@ -1426,7 +1468,8 @@ mod tests {
         assert!(names.contains(&"loc_file"));
         assert!(names.contains(&"dump_crdt"));
         assert!(names.contains(&"get_version"));
-        assert_eq!(tools.len(), 63);
+        assert!(names.contains(&"remove_criterion"));
+        assert_eq!(tools.len(), 65);
     }

     #[test]
@@ -5,8 +5,9 @@ use crate::agents::{
 use crate::http::context::AppContext;
 use crate::http::workflow::{
     add_criterion_to_file, check_criterion_in_file, create_bug_file, create_refactor_file,
-    create_spike_file, create_story_file, list_bug_files, list_refactor_files, load_pipeline_state,
-    load_upcoming_stories, update_story_in_file, validate_story_dirs,
+    create_spike_file, create_story_file, edit_criterion_in_file, list_bug_files,
+    list_refactor_files, load_pipeline_state, load_upcoming_stories, remove_criterion_from_file,
+    update_story_in_file, validate_story_dirs,
 };
 use crate::io::story_metadata::{
     check_archived_deps, check_archived_deps_from_list, parse_front_matter, parse_unchecked_todos,
@@ -331,6 +332,28 @@ pub(super) fn tool_check_criterion(args: &Value, ctx: &AppContext) -> Result<Str
     ))
 }

+pub(super) fn tool_edit_criterion(args: &Value, ctx: &AppContext) -> Result<String, String> {
+    let story_id = args
+        .get("story_id")
+        .and_then(|v| v.as_str())
+        .ok_or("Missing required argument: story_id")?;
+    let criterion_index = args
+        .get("criterion_index")
+        .and_then(|v| v.as_u64())
+        .ok_or("Missing required argument: criterion_index")? as usize;
+    let new_text = args
+        .get("new_text")
+        .and_then(|v| v.as_str())
+        .ok_or("Missing required argument: new_text")?;
+
+    let root = ctx.state.get_project_root()?;
+    edit_criterion_in_file(&root, story_id, criterion_index, new_text)?;
+
+    Ok(format!(
+        "Criterion {criterion_index} updated for story '{story_id}'."
+    ))
+}
+
 pub(super) fn tool_add_criterion(args: &Value, ctx: &AppContext) -> Result<String, String> {
     let story_id = args
         .get("story_id")
@@ -349,6 +372,24 @@ pub(super) fn tool_add_criterion(args: &Value, ctx: &AppContext) -> Result<Strin
     ))
 }

+pub(super) fn tool_remove_criterion(args: &Value, ctx: &AppContext) -> Result<String, String> {
+    let story_id = args
+        .get("story_id")
+        .and_then(|v| v.as_str())
+        .ok_or("Missing required argument: story_id")?;
+    let criterion_index = args
+        .get("criterion_index")
+        .and_then(|v| v.as_u64())
+        .ok_or("Missing required argument: criterion_index")? as usize;
+
+    let root = ctx.state.get_project_root()?;
+    remove_criterion_from_file(&root, story_id, criterion_index)?;
+
+    Ok(format!(
+        "Removed criterion {criterion_index} from story '{story_id}'."
+    ))
+}
+
 pub(super) fn tool_update_story(args: &Value, ctx: &AppContext) -> Result<String, String> {
     let story_id = args
         .get("story_id")
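Both criterion tools rely on `remove_criterion_from_file` / `edit_criterion_in_file`, whose 0-based index counts every checkbox line, checked or not. A hypothetical sketch of that indexing contract over a markdown body (the function name and error text here are mine; the real helper also writes the file back and auto-commits via the watcher):

```rust
/// Remove the N-th checkbox criterion (0-based, counting both "- [ ]" and
/// "- [x]" lines) from a story's markdown body. Illustrative sketch only.
fn remove_criterion(content: &str, index: usize) -> Result<String, String> {
    let mut seen = 0usize;
    let mut removed = false;
    let mut out = Vec::new();
    for line in content.lines() {
        let t = line.trim_start();
        if t.starts_with("- [ ]") || t.starts_with("- [x]") {
            if seen == index {
                seen += 1;
                removed = true;
                continue; // drop exactly this criterion line
            }
            seen += 1;
        }
        out.push(line);
    }
    if removed {
        Ok(out.join("\n"))
    } else {
        Err(format!("criterion_index {index} out of range"))
    }
}

fn main() {
    let body = "## Acceptance Criteria\n- [ ] Keep me\n- [x] Remove me\n";
    let got = remove_criterion(body, 1).unwrap();
    assert!(got.contains("Keep me") && !got.contains("Remove me"));
    assert!(remove_criterion(body, 5).is_err());
}
```

Counting checked lines too is what keeps the index stable for `edit_criterion`, which must preserve the checkbox state it finds.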
@@ -1722,6 +1763,66 @@ mod tests {
         assert!(result.unwrap().contains("Criterion 0 checked"));
     }
+
+    #[test]
+    fn tool_remove_criterion_missing_story_id() {
+        let tmp = tempfile::tempdir().unwrap();
+        let ctx = test_ctx(tmp.path());
+        let result = tool_remove_criterion(&json!({"criterion_index": 0}), &ctx);
+        assert!(result.is_err());
+        assert!(result.unwrap_err().contains("story_id"));
+    }
+
+    #[test]
+    fn tool_remove_criterion_missing_criterion_index() {
+        let tmp = tempfile::tempdir().unwrap();
+        let ctx = test_ctx(tmp.path());
+        let result = tool_remove_criterion(&json!({"story_id": "1_test"}), &ctx);
+        assert!(result.is_err());
+        assert!(result.unwrap_err().contains("criterion_index"));
+    }
+
+    #[test]
+    fn tool_remove_criterion_removes_item() {
+        let tmp = tempfile::tempdir().unwrap();
+        setup_git_repo_in(tmp.path());
+
+        crate::db::ensure_content_store();
+        crate::db::write_item_with_content(
+            "9905_test",
+            "2_current",
+            "---\nname: Test\n---\n## Acceptance Criteria\n- [ ] Keep me\n- [ ] Remove me\n",
+        );
+
+        let ctx = test_ctx(tmp.path());
+        let result = tool_remove_criterion(
+            &json!({"story_id": "9905_test", "criterion_index": 1}),
+            &ctx,
+        );
+        assert!(result.is_ok(), "Expected ok: {result:?}");
+        assert!(result.unwrap().contains("Removed criterion 1"));
+    }
+
+    #[test]
+    fn tool_remove_criterion_out_of_range() {
+        let tmp = tempfile::tempdir().unwrap();
+        setup_git_repo_in(tmp.path());
+
+        crate::db::ensure_content_store();
+        crate::db::write_item_with_content(
+            "9906_test",
+            "2_current",
+            "---\nname: Test\n---\n## Acceptance Criteria\n- [ ] Only one\n",
+        );
+
+        let ctx = test_ctx(tmp.path());
+        let result = tool_remove_criterion(
+            &json!({"story_id": "9906_test", "criterion_index": 5}),
+            &ctx,
+        );
+        assert!(result.is_err());
+        assert!(result.unwrap_err().contains("out of range"));
+    }
+
     /// Regression test for bug 514: deleting a story must cancel its pending
     /// rate-limit retry timer so the tick loop cannot re-spawn an agent.
     ///
|||||||
@@ -7,7 +7,8 @@ pub use bug_ops::{
     create_bug_file, create_refactor_file, create_spike_file, list_bug_files, list_refactor_files,
 };
 pub use story_ops::{
-    add_criterion_to_file, check_criterion_in_file, create_story_file, update_story_in_file,
+    add_criterion_to_file, check_criterion_in_file, create_story_file, edit_criterion_in_file,
+    remove_criterion_from_file, update_story_in_file,
 };
 pub use test_results::{
     read_test_results_from_story_file, write_coverage_baseline_to_story_file,
@@ -126,6 +126,111 @@ pub fn check_criterion_in_file(
     Ok(())
 }
 
+/// Remove an acceptance criterion from a story by its 0-based index (counting all criteria,
+/// both checked and unchecked). Returns an error if the index is out of range.
+pub fn remove_criterion_from_file(
+    project_root: &Path,
+    story_id: &str,
+    criterion_index: usize,
+) -> Result<(), String> {
+    let contents = read_story_content(project_root, story_id)?;
+
+    let mut count: usize = 0;
+    let mut found = false;
+    let new_lines: Vec<String> = contents
+        .lines()
+        .filter(|line| {
+            let trimmed = line.trim();
+            if trimmed.starts_with("- [ ] ") || trimmed.starts_with("- [x] ") {
+                if count == criterion_index {
+                    count += 1;
+                    found = true;
+                    return false;
+                }
+                count += 1;
+            }
+            true
+        })
+        .map(|s| s.to_string())
+        .collect();
+
+    if !found {
+        return Err(format!(
+            "Criterion index {criterion_index} out of range. Story '{story_id}' has \
+             {count} criteria (indices 0..{}).",
+            count.saturating_sub(1)
+        ));
+    }
+
+    let mut new_str = new_lines.join("\n");
+    if contents.ends_with('\n') {
+        new_str.push('\n');
+    }
+
+    let stage = story_stage(story_id).unwrap_or_else(|| "2_current".to_string());
+    write_story_content(project_root, story_id, &stage, &new_str);
+
+    Ok(())
+}
+
+/// Edit the text of an existing acceptance criterion without changing its checked state.
+///
+/// Finds the criterion at `criterion_index` (0-based, counting all criteria regardless
+/// of checked state) and replaces its text with `new_text`.
+pub fn edit_criterion_in_file(
+    project_root: &Path,
+    story_id: &str,
+    criterion_index: usize,
+    new_text: &str,
+) -> Result<(), String> {
+    let contents = read_story_content(project_root, story_id)?;
+
+    let mut count: usize = 0;
+    let mut found = false;
+    let new_lines: Vec<String> = contents
+        .lines()
+        .map(|line| {
+            let trimmed = line.trim();
+            let prefix = if trimmed.starts_with("- [ ] ") {
+                Some("- [ ] ")
+            } else if trimmed.starts_with("- [x] ") {
+                Some("- [x] ")
+            } else {
+                None
+            };
+            if let Some(p) = prefix {
+                if count == criterion_index {
+                    count += 1;
+                    found = true;
+                    let indent_len = line.len() - trimmed.len();
+                    let indent = &line[..indent_len];
+                    return format!("{indent}{p}{new_text}");
+                }
+                count += 1;
+            }
+            line.to_string()
+        })
+        .collect();
+
+    if !found {
+        return Err(format!(
+            "Criterion index {criterion_index} out of range. Story '{story_id}' has \
+             {count} criteria (indices 0..{}).",
+            count.saturating_sub(1)
+        ));
+    }
+
+    let mut new_str = new_lines.join("\n");
+    if contents.ends_with('\n') {
+        new_str.push('\n');
+    }
+
+    let stage = story_stage(story_id).unwrap_or_else(|| "2_current".to_string());
+    write_story_content(project_root, story_id, &stage, &new_str);
+
+    Ok(())
+}
+
 /// Add a new acceptance criterion to a story.
 ///
 /// Appends `- [ ] {criterion}` after the last existing criterion line in the
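The index-counting removal technique in `remove_criterion_from_file` above can be exercised on its own. Below is a minimal, self-contained sketch of the same idea, detached from the project's story-storage plumbing; the function name `remove_nth_criterion` and the sample text are illustrative, not part of the codebase. It walks the lines, counts only checkbox items (`- [ ] ` / `- [x] `), and drops the one whose running count matches the requested 0-based index.

```rust
// Standalone sketch of the checkbox-index removal technique.
// `remove_nth_criterion` is a hypothetical name, not the project's API.
fn remove_nth_criterion(contents: &str, index: usize) -> Result<String, String> {
    let mut count = 0usize;
    let mut found = false;
    let kept: Vec<&str> = contents
        .lines()
        .filter(|line| {
            let t = line.trim();
            // Only checkbox lines participate in the index count;
            // everything else passes through untouched.
            if t.starts_with("- [ ] ") || t.starts_with("- [x] ") {
                let is_target = count == index;
                count += 1;
                if is_target {
                    found = true;
                    return false; // drop the matching criterion
                }
            }
            true
        })
        .collect();
    if !found {
        return Err(format!("index {index} out of range ({count} criteria)"));
    }
    let mut out = kept.join("\n");
    if contents.ends_with('\n') {
        out.push('\n'); // preserve the trailing newline, as the diff does
    }
    Ok(out)
}

fn main() {
    let story = "## Acceptance Criteria\n- [ ] Keep\n- [x] Drop\n- [ ] Also keep\n";
    let out = remove_nth_criterion(story, 1).unwrap();
    assert!(out.contains("Keep") && !out.contains("Drop"));
    assert!(remove_nth_criterion(story, 9).is_err());
}
```

Note that checked and unchecked items share one index space, which is why the diff's tests verify that indices shift after each removal.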
@@ -520,6 +625,61 @@ mod tests {
         assert!(result.unwrap_err().contains("Acceptance Criteria"));
     }
 
+    // ── remove_criterion_from_file tests ──────────────────────────────────────
+
+    #[test]
+    fn remove_criterion_removes_by_index() {
+        let tmp = tempfile::tempdir().unwrap();
+        setup_story_in_fs(
+            tmp.path(),
+            "20_remove_test",
+            &story_with_ac_section(&["First", "Second", "Third"]),
+        );
+
+        remove_criterion_from_file(tmp.path(), "20_remove_test", 1).unwrap();
+
+        let contents = read_story_content(tmp.path(), "20_remove_test").unwrap();
+        assert!(contents.contains("- [ ] First"), "First should remain");
+        assert!(!contents.contains("Second"), "Second should be removed");
+        assert!(contents.contains("- [ ] Third"), "Third should remain");
+    }
+
+    #[test]
+    fn remove_criterion_shifts_indices() {
+        let tmp = tempfile::tempdir().unwrap();
+        setup_story_in_fs(
+            tmp.path(),
+            "21_remove_test",
+            &story_with_ac_section(&["A", "B", "C"]),
+        );
+
+        remove_criterion_from_file(tmp.path(), "21_remove_test", 0).unwrap();
+
+        let contents = read_story_content(tmp.path(), "21_remove_test").unwrap();
+        assert!(!contents.contains("- [ ] A"), "A should be removed");
+        assert!(contents.contains("- [ ] B"), "B should remain");
+        assert!(contents.contains("- [ ] C"), "C should remain");
+        // B is now at index 0, C at index 1 — verify by removing B next
+        remove_criterion_from_file(tmp.path(), "21_remove_test", 0).unwrap();
+        let contents2 = read_story_content(tmp.path(), "21_remove_test").unwrap();
+        assert!(!contents2.contains("- [ ] B"), "B should now be removed");
+        assert!(contents2.contains("- [ ] C"), "C should still remain");
+    }
+
+    #[test]
+    fn remove_criterion_out_of_range_returns_error() {
+        let tmp = tempfile::tempdir().unwrap();
+        setup_story_in_fs(
+            tmp.path(),
+            "22_remove_test",
+            &story_with_ac_section(&["Only"]),
+        );
+
+        let result = remove_criterion_from_file(tmp.path(), "22_remove_test", 5);
+        assert!(result.is_err(), "should fail for out-of-range index");
+        assert!(result.unwrap_err().contains("out of range"));
+    }
+
     // ── update_story_in_file tests ─────────────────────────────────────────────
 
     #[test]
@@ -459,6 +459,20 @@ pub fn write_review_hold_in_content(contents: &str) -> String {
     set_front_matter_field(contents, "review_hold", "true")
 }
 
+/// Write or update `depends_on` in story content (pure function).
+///
+/// Serialises `deps` as an inline YAML sequence, e.g. `[477, 478]`.
+/// If `deps` is empty the field is removed.
+pub fn write_depends_on_in_content(contents: &str, deps: &[u32]) -> String {
+    if deps.is_empty() {
+        remove_front_matter_field(contents, "depends_on")
+    } else {
+        let nums: Vec<String> = deps.iter().map(|n| n.to_string()).collect();
+        let yaml_value = format!("[{}]", nums.join(", "));
+        set_front_matter_field(contents, "depends_on", &yaml_value)
+    }
+}
+
 /// Resolve the effective QA mode for a story file.
 ///
 /// Reads the `qa` front matter field. If absent, falls back to `default`.
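The serialisation rule in `write_depends_on_in_content` (empty list removes the field, otherwise an inline YAML sequence) can be sketched in isolation. The helper name `depends_on_value` below is hypothetical, and the actual front-matter read/write plumbing is omitted:

```rust
// Hypothetical standalone helper mirroring only the value-formatting rule:
// None means "remove the field"; Some holds the inline YAML sequence.
fn depends_on_value(deps: &[u32]) -> Option<String> {
    if deps.is_empty() {
        None
    } else {
        let nums: Vec<String> = deps.iter().map(|n| n.to_string()).collect();
        Some(format!("[{}]", nums.join(", ")))
    }
}

fn main() {
    assert_eq!(depends_on_value(&[477, 478]), Some("[477, 478]".to_string()));
    assert_eq!(depends_on_value(&[]), None);
}
```

The `[477, 478]` flow-sequence form parses as a plain YAML list, so readers of the front matter need no special casing.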
+1
-1
@@ -860,7 +860,7 @@ async fn main() -> Result<(), std::io::Error> {
     // Optional Matrix bot: connect to the homeserver and start listening for
     // messages if `.huskies/bot.toml` is present and enabled.
     if let Some(ref root) = startup_root {
-        chat::transport::matrix::spawn_bot(
+        let _ = chat::transport::matrix::spawn_bot(
             root,
             watcher_tx_for_bot,
             perm_rx_for_bot,