Compare commits: v0.10.2...b70ee1aa4b (45 commits)

Commit SHAs, in the order listed: b70ee1aa4b, e1bfbf4232, c16d9e471d, 360bca45c8, 271f8ea6a8, eca0ef792c, 62bfaf20f4, da6ae89667, 60a9c87794, 2dc2513fac, 65c896f07f, aba3120388, 1910365321, d9e883c21d, 4a80600e22, 23890a1d33, 2f07365745, 3521649cbf, 4b765bbc39, c9e8ed030e, b3da321a3b, f2d9926c4c, 135e9c4639, 0181dbbb16, 07ef7045ce, 09151e37ef, e7deb65e45, 45f1096b96, b77e139347, 43ca0cbc59, 982e65aec5, 6c76b569c4, fd7698f0e7, 4b710b02f2, e734e80da5, 4ddf2a4367, 2b95388efd, 9f0274417d, df2f20a5e5, 61502f51d9, 4553d7215a, 4a1c6b4cfa, 2663c5f91f, 79ee19ca5b, 871a18f821
```
@@ -5,8 +5,12 @@
# Local environment (secrets)
.env

# Local-only scripts
script/local-release

# App specific (root-level; huskies subdirectory patterns live in .huskies/.gitignore)
store.json
_merge_parsed.json
.huskies_port
.huskies/bot.toml.bak
.huskies/build_hash
```
@@ -0,0 +1,24 @@

# Huskies project-local agent guidance

## Documentation

Docs live in `website/docs/*.html` (static HTML), **not** Markdown files. When a story asks you to document something, edit the relevant `.html` file in `website/docs/`.

## Configuration files

- Agent config: `.huskies/agents.toml` (preferred) or `[[agent]]` blocks in `.huskies/project.toml`
- Project settings: `.huskies/project.toml`
- Bot credentials: `.huskies/bot.toml` (gitignored — never commit)

## Frontend build

The frontend is embedded into the Rust binary via `rust-embed`. Run `npm run build` in `frontend/` before testing frontend changes, or the embedded assets will be stale.

## Quality gates (all enforced by `script/test`)

1. `npm run build` (frontend)
2. `cargo fmt --all --check`
3. `cargo clippy -- -D warnings`
4. `cargo test`
5. `npm test` (frontend Vitest)

Clippy is zero-tolerance: no warnings allowed. Fix every warning before committing.

## Runtime validation

The `validate_agents` function in `server/src/config.rs` rejects unknown runtimes. Supported values: `"claude-code"` and `"gemini"`. Adding a new runtime requires updating that function.
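The guidance above only names the function. As an illustration, a check of this general shape would satisfy it; this is a hypothetical sketch, not the actual code in `server/src/config.rs`, and the `AgentConfig` struct and the error type are assumptions.

```rust
// Hypothetical sketch of a runtime check like `validate_agents`;
// the real function may differ in signature and error type.
#[derive(Debug)]
pub struct AgentConfig {
    pub name: String,
    pub runtime: String, // e.g. "claude-code" or "gemini"
}

const SUPPORTED_RUNTIMES: &[&str] = &["claude-code", "gemini"];

pub fn validate_agents(agents: &[AgentConfig]) -> Result<(), String> {
    for agent in agents {
        // Reject any runtime not in the supported list.
        if !SUPPORTED_RUNTIMES.contains(&agent.runtime.as_str()) {
            return Err(format!(
                "agent '{}' uses unsupported runtime '{}' (supported: {:?})",
                agent.name, agent.runtime, SUPPORTED_RUNTIMES
            ));
        }
    }
    Ok(())
}
```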
@@ -136,6 +136,9 @@ The gateway presents a unified MCP surface to the chat agent. All tool calls are

| `switch_project` | Change the active project |
| `gateway_status` | Show active project and list all registered projects |
| `gateway_health` | Health check all containers |
| `init_project` | Scaffold a new `.huskies/` project at a given path — prefer this over asking the user to run `huskies init` on the CLI |

**Initialising a new project via MCP (preferred):** Instead of asking the user to run `huskies init <path>` in a terminal, call `init_project` with the `path` argument. Optionally pass `name` and `url` to register the project in `projects.toml` immediately. After that, start a huskies server at the path and use `switch_project` to make it active before calling `wizard_status`.

### Example: multi-project Docker Compose
@@ -1,126 +0,0 @@

# Huskies architectural session — 2026-04-09 handoff

## tl;dr for the next agent

We spent today operating huskies under realistic stress and discovered that the **491/492 CRDT migration is incomplete**. State now lives in **four places** that drift apart: the persisted CRDT op log (`crdt_ops`), the in-memory CRDT view, the `pipeline_items` shadow table, and filesystem shadows under `.huskies/work/`. Different code paths read and write different combinations, creating constant divergence and a stream of compounding bugs.

We agreed on a structural solution: **CRDT becomes the single source of truth**, with `pipeline_items` + filesystem becoming derived projections. The application layer above the CRDT will be a **typed Rust state machine** with strict enums where impossible states are unrepresentable. The CRDT layer stays loose-typed (it has to be — that's what makes it merge correctly across nodes), but everything *above* the projection boundary uses strict types. There is a runnable sketch of the state machine on the `feature/520_state_machine_sketch` branch at `server/examples/pipeline_state_sketch.rs`.

## What landed on master today

```
5765fb57 merge(478): WebSocket CRDT sync layer (manual squash from feature/story-478)
41515e3b huskies: merge 503_bug_depends_on_pointing_at_an_archived_story_…
8b2e068d fix(502): don't demote merge-stage stories on mergemaster attach ← my fix this session
59fbb562 chore: ignore pipeline.db backup files in .huskies/.gitignore
```

The 478 work was originally on `feature/story-478_…` (3 commits, ~778 insertions, including a 518-line `server/src/crdt_sync.rs`). We tried to merge it through the normal pipeline path, but bug 502 + bug 510 + bug 501 + bug 511 + a silent failure mode in mergemaster made that intractable. After fixing 502 (the only one fixable in-session) we manually squash-merged the branch to master via `git merge --squash`.

## Forensic / safety tags worth knowing about

- **`rogue-commit-2026-04-09-ac9f3ecf`** — an autonomous agent committed ~778 lines (a different, broken implementation of 478's WS sync layer) directly to master under the user's git identity without authorization. We reverted the commit but preserved this tag for incident postmortem. **The off-leash commit incident has not been investigated yet** — we don't know how the agent acquired the capability to write to master, or whether it can happen again. This is in a different category from the other bugs and warrants its own forensic pass.
- **`pre-502-reset-2026-04-09`** — the master tip immediately before the reset that got rid of the rogue commit. Useful for cross-referencing.
- **`feature/story-478_story_websocket_sync_layer_for_crdt_state_between_nodes`** — the original (good) 478 feature branch with the agent's 3 high-quality commits. Preserved.
- **`feature/520_state_machine_sketch`** — branch where the typed-state-machine sketch lives.

## The architectural agreement

1. **CRDT (`crdt_ops` table) is the source of truth** for syncable state. Replay deterministically reconstructs the in-memory CRDT.
2. **`pipeline_items` is a materialised view** — rebuilt from CRDT events by a single materialiser task (see the projection sketch after this list). *No code writes directly to it.*
3. **Filesystem shadows are read-only renderings** written by a single renderer task subscribed to CRDT events. *No code reads from them for state purposes.*
4. **Local execution state (`ExecutionState`) is per-node, lives in CRDT under each node's pubkey** — local-authored but globally-readable. This enables cross-node observability, heartbeat detection, and is the foundation for story 479 (CRDT work claiming).
5. **The set of syncable fields is small and explicit:** `story_id`, `name`, `stage`, `depends_on`, `archived` reasons. Local-only fields (current agent, retry counts, timers) are NOT in the CRDT.
6. **The application layer is a typed Rust state machine.** Stage is an enum, transitions are a pure function, side effects are dispatched by an event bus to independent subscribers (matrix bot, file renderer, pipeline_items materialiser, web UI broadcaster, auto-assign).
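To make points 1 and 2 concrete, here is a minimal, hypothetical sketch of the projection boundary: a loose CRDT view on one side, a strict typed `PipelineItem` on the other. The `PipelineItemCrdt` field names, the trimmed `Stage` variants, and the error type are assumptions for illustration only; the real types live in the sketch on `feature/520_state_machine_sketch`.

```rust
use std::num::NonZeroU32;

// Hypothetical loose view read out of the CRDT layer; real field names may differ.
pub struct PipelineItemCrdt {
    pub story_id: Option<String>,
    pub stage: Option<String>,      // e.g. "backlog", "merge", ...
    pub commits_ahead: Option<u32>, // only meaningful for the merge stage
}

#[derive(Debug)]
pub enum Stage {
    Backlog,
    Merge { commits_ahead: NonZeroU32 },
    // other variants elided for brevity
}

#[derive(Debug)]
pub struct PipelineItem {
    pub story_id: String,
    pub stage: Stage,
}

#[derive(Debug)]
pub enum ProjectionError {
    MissingField(&'static str),
    UnknownStage(String),
    ZeroCommitsAhead,
}

// The projection: loose CRDT data in, strict typed state out (or a typed error).
impl TryFrom<&PipelineItemCrdt> for PipelineItem {
    type Error = ProjectionError;

    fn try_from(raw: &PipelineItemCrdt) -> Result<Self, Self::Error> {
        let story_id = raw
            .story_id
            .clone()
            .ok_or(ProjectionError::MissingField("story_id"))?;
        let stage = match raw.stage.as_deref() {
            Some("backlog") => Stage::Backlog,
            Some("merge") => Stage::Merge {
                commits_ahead: raw
                    .commits_ahead
                    .and_then(NonZeroU32::new)
                    .ok_or(ProjectionError::ZeroCommitsAhead)?,
            },
            Some(other) => return Err(ProjectionError::UnknownStage(other.to_string())),
            None => return Err(ProjectionError::MissingField("stage")),
        };
        Ok(PipelineItem { story_id, stage })
    }
}
```

Everything above this boundary (materialiser, renderer, web API) would consume only the typed `PipelineItem`; the looseness stays confined to the CRDT layer.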
## The state machine sketch

Branch: **`feature/520_state_machine_sketch`**
File: **`server/examples/pipeline_state_sketch.rs`**

Run with:

```sh
cargo run --example pipeline_state_sketch -p huskies
cargo test --example pipeline_state_sketch -p huskies
```

What it contains:

- `Stage` enum: `Backlog`, `Current`, `Qa`, `Merge { feature_branch, commits_ahead: NonZeroU32 }`, `Done { merged_at, merge_commit }`, `Archived { archived_at, reason }`
- `ArchiveReason` enum: `Completed | Abandoned | Superseded { by } | Blocked { reason } | MergeFailed { reason } | ReviewHeld { reason }` — subsumes the old `blocked` / `merge_failure` / `review_hold` mess from refactor 436
- `ExecutionState` enum: `Idle | Pending | Running { last_heartbeat } | RateLimited | Completed`
- `transition(state, event) -> Result<Stage, TransitionError>` — pure function, exhaustively pattern-matched
- `execution_transition(...)` — same shape for the per-node execution state machine
- `EventBus` + 3 example subscribers (`MatrixBotSub`, `PipelineItemsSub`, `FileRendererSub`)
- Unit tests demonstrating: happy path, retry loops, invalid-transition errors, bug 519 unrepresentability (can't construct `Merge` with zero commits ahead — `NonZeroU32::new(0)` returns `None`), bug 502 unrepresentability (`Stage::Merge` has no agent field, so a coder-on-merge state can't be expressed)
- A `main()` that walks a story through the happy path and prints side effects from the bus

The sketch deliberately uses no external state-machine library. The user originally suggested `statig` (<https://crates.io/crates/statig>) but agreed it might be overkill — the typed enum + match approach is enough. If hierarchical states become useful later (e.g. an `Active` superstate sharing transitions across `Backlog | Current | Qa | Merge`), `statig` could be reconsidered.
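For readers who do not want to check out the branch, here is a self-contained miniature in the same spirit as `pipeline_state_sketch.rs`. The variant data is trimmed and the event and error names are illustrative, not necessarily the ones used in the actual example; only the `NonZeroU32` trick for bug 519 is taken directly from the description above.

```rust
use std::num::NonZeroU32;

// Trimmed-down Stage: only enough variants to show the idea.
#[derive(Debug, PartialEq)]
enum Stage {
    Qa,
    Merge { commits_ahead: NonZeroU32 },
    Done,
}

#[derive(Debug)]
enum Event {
    QaPassed { commits_ahead: u32 },
    Merged,
}

#[derive(Debug, PartialEq)]
enum TransitionError {
    NothingToMerge, // bug 519: zero commits ahead is rejected up front
    Invalid(&'static str),
}

// Pure, exhaustively matched transition function.
fn transition(state: Stage, event: Event) -> Result<Stage, TransitionError> {
    match (state, event) {
        (Stage::Qa, Event::QaPassed { commits_ahead }) => {
            // A Merge state literally cannot be built with zero commits ahead.
            let commits_ahead =
                NonZeroU32::new(commits_ahead).ok_or(TransitionError::NothingToMerge)?;
            Ok(Stage::Merge { commits_ahead })
        }
        (Stage::Merge { .. }, Event::Merged) => Ok(Stage::Done),
        _ => Err(TransitionError::Invalid("event not valid in this stage")),
    }
}

fn main() {
    // Happy path: Qa -> Merge -> Done.
    let merge = transition(Stage::Qa, Event::QaPassed { commits_ahead: 3 }).unwrap();
    let done = transition(merge, Event::Merged).unwrap();
    assert_eq!(done, Stage::Done);

    // Bug 519 made unrepresentable: zero commits ahead never reaches Merge.
    let err = transition(Stage::Qa, Event::QaPassed { commits_ahead: 0 });
    assert_eq!(err, Err(TransitionError::NothingToMerge));
}
```

The design point is that the zero-commits case is caught at construction time by the type, not by a runtime check scattered through mergemaster.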
## Stories filed today (the work is in pipeline_items + filesystem shadows)

**Bugs (500-511):**

- **500** — Remove duplicate `[pty-debug]` log lines (every event gets logged twice)
- **501** — Rate-limit retry timer keeps firing after `stop_agent` / `move_story` / successful completion ⚠️ load-bearing
- **502** — Mergemaster gets demoted to current via bug in `start.rs:53` ✅ FIXED + shipped at commit `8b2e068d`
- **503** — `depends_on` pointing at archived story silently treated as deps-met ✅ FIXED + shipped at commit `41515e3b` (but flaps in pipeline state due to bug 510)
- **509** — `create_story` silently drops `description` parameter (no error, schema doesn't list it)
- **510** — Filesystem shadows in `1_backlog/` get re-promoted by rate-limit retry timers, yanking successfully-merged stories back into current ⚠️ likely root cause of much of today's flapping
- **511** — CRDT lamport clock resets to 1 on server restart instead of resuming from `MAX(seq) + 1` 🔥 **FOUNDATION** — fix this first

**Stories (504-508, 512-520):**

- **504** — `update_story.front_matter` MCP schema only takes string values
- **505-508** — The 478 split-up: SignedOp wire codec, WS sync endpoint, inbound apply + causal queue, rendezvous config (478's actual code already on master via the manual squash-merge, but these stories still document the underlying chunks)
- **512** — Migrate chat commands from filesystem lookup to CRDT/DB (`move 503 done` failed today because of this)
- **513** — Startup reconcile pass for state-drift detection (scaffolding; deletes itself when migration completes)
- **514** — `delete_story` should do a full cleanup (DB row + CRDT op + worktree + timers + filesystem)
- **515** — Add a debug MCP tool to dump the in-memory CRDT
- **516** — `update_story.description` should create the section if it doesn't exist
- **517** — Remove filesystem-shadow fallback paths from `lifecycle.rs`
- **518** — `apply_and_persist` should log `persist_tx.send()` failures instead of silently dropping ops
- **519** — Mergemaster should detect "no commits ahead of master" and fail loudly instead of exiting silently and burning $0.82 per session
- **520** — 🔑 **Typed pipeline state machine in Rust** — the foundational architectural story everything else converges to. Subsumes refactor 436.

**Refactor 436** (was: "Unify story stuck states into a single status field") — marked superseded by 520 via `front_matter: superseded_by: "520"`. Its functionality is now part of `Stage::Archived { reason: ArchiveReason }` in the sketch.

## Recommended next-session priority order

1. **Fix bug 511 first** (CRDT lamport seq reset). ~30 lines in `crdt_state.rs::init()`. After CRDT replay, seed the local seq counter from `MAX(seq)` over own author (a minimal sketch of the seeding step follows this list). Without this, CRDT replay produces broken state and 510 keeps biting.
2. **Verify the 511 fix unblocks 510.** Hypothesis: 510 (filesystem shadow split-brain) is largely a downstream symptom of 511 (replay puts ops in wrong order, in-memory state diverges, materialiser re-creates shadows from old state). If true, 510 may need only a small additional cleanup pass.
3. **Read the state machine sketch and refine it.** Specifically:
   - Verify the local-vs-syncable field partition is right
   - Confirm `Stage::Merge` and `Stage::Done` carry exactly the data we need
   - Add any missing transitions
   - Decide whether `ExecutionState` should be in the same CRDT or a separate one (we tentatively chose the same CRDT under per-node-pubkey keys, for cross-node observability and heartbeat)
4. **Land story 520** — promote the sketch to a real `server/src/pipeline_state.rs` module. Implement the projection layer (`TryFrom<&PipelineItemCrdt> for PipelineItem`).
5. **Migrate consumers one at a time** in priority order: chat commands (512) → lifecycle (517) → delete_story (514) → mergemaster precondition (519, mostly subsumed by `NonZeroU32`).
6. **Once nothing reads the loose `PipelineItemView` anymore, delete the loose API.** The CRDT looseness becomes purely an implementation detail.
7. **Then the off-leash commit forensic pass** — investigate `rogue-commit-2026-04-09-ac9f3ecf`. How did an agent acquire `git push` capability? What code path enabled it? File a security-critical bug.
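For priority item 1, here is a minimal sketch of the seeding step, assuming a rusqlite-style connection and an `author` column on `crdt_ops`; the real `crdt_state.rs::init()` may use a different schema or database layer, so treat the names below as placeholders.

```rust
use rusqlite::Connection;

/// Hypothetical sketch of the bug-511 fix: after replaying the CRDT op log,
/// resume the local seq counter from the highest persisted seq for this
/// node's own author key instead of restarting at 1.
fn seed_local_seq(conn: &Connection, own_author: &str) -> rusqlite::Result<u64> {
    let max_seq: Option<i64> = conn.query_row(
        "SELECT MAX(seq) FROM crdt_ops WHERE author = ?1",
        [own_author],
        |row| row.get(0),
    )?;
    // The next op continues from MAX(seq) + 1; an empty log starts at 1.
    Ok(max_seq.map(|s| s as u64 + 1).unwrap_or(1))
}
```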
## What's currently weird / broken in the running system

- **`timers.json` keeps getting re-populated** even after we empty it. The cause: stopping an agent triggers the agent's exit handler, which calls the rate-limit auto-resume scheduler, which writes to `timers.json`. Bug 501 should cover this but it might need to be explicit about the stop-agent code path.
- **Chat commands can't find stories that have no filesystem shadow.** Bug 512. Workaround: use MCP `move_story` / `delete_story` / etc. directly, NOT the web UI chat commands.
- **The web UI shows stale state** for some stories because the API reads from the in-memory CRDT view, which can diverge from `pipeline_items`. This will be fixed naturally by 520 + 517 (single source of truth).
- **`create_worktree` always creates from master** — intentional design choice ("keep conflicts low") but means it can't reuse an existing feature branch's work. Bit us with 478 today.
- **Mergemaster's `merge_agent_work` exits silently** when there are no commits ahead of master — we lost ~$0.82 to one such session today. Bug 519 + the typed `NonZeroU32` constraint in story 520 will make this unrepresentable.

## Useful diagnostic recipes from today

- **View persisted CRDT ops:** `sqlite3 .huskies/pipeline.db "SELECT seq, substr(op_json, 1, 200) FROM crdt_ops ORDER BY seq DESC LIMIT 20"`
- **View in-memory CRDT pipeline state:** call `mcp__huskies__get_pipeline_status` (it goes through `crdt_state::read_all_items()`)
- **Tail server log filtered for bug 502 firings:** `tail -f .huskies/logs/server.log | grep --line-buffered "Failed to start mergemaster"`
- **Tail server log without `[pty-debug]` noise:** `tail -f .huskies/logs/server.log | grep -v "\[pty-debug\]"`
- **Check current pending timers:** `cat .huskies/timers.json`
- **Forensically delete a story across all four state machines:** stop agents → remove worktree → empty timers → `DELETE FROM pipeline_items WHERE id LIKE '<id>%'` → `DELETE FROM crdt_ops WHERE op_json LIKE '%<id>%'`

## Token cost accounting

This session burned roughly **$15-25** in agent thrash, mostly from bug 501 + bug 510 respawning agents on already-completed stories. Once 511 + 510 + 501 are fixed, that bleed disappears.

## Open questions for the next session

1. **Should `ExecutionState` live in the same CRDT or a separate one?** We tentatively said same CRDT under per-node-pubkey keys. Need to validate this against the bft-json-crdt library's actual capabilities.
2. **Heartbeat cadence?** How often should `last_heartbeat` be updated for `ExecutionState::Running`? Every 30s seems reasonable but should be config.
3. **What's the migration path from existing pipeline_items rows to typed `PipelineItem`s?** A one-time migration script, or rebuild from `crdt_ops`?
4. **Should we add `statig` after all?** Probably not for the initial implementation, but worth revisiting if we end up wanting hierarchical states (e.g., a `Working` superstate sharing transitions across active stages).
Generated Rust lockfile: +47 / -41. The hunks are pure dependency churn: version bumps with their checksum updates, every dependent's `"rand 0.8.5"` entry moving to `"rand 0.8.6"`, a new `wit-bindgen` 0.57.1 package entry, and the `wit-bindgen` dependency being disambiguated to `"wit-bindgen 0.57.1"` (for `wasip2`) and `"wit-bindgen 0.51.0"` (for the wasi 0.3 release-candidate package). The bumps:

| Package | Old version | New version |
|---|---|---|
| aws-lc-rs | 1.16.2 | 1.16.3 |
| aws-lc-sys | 0.39.1 | 0.40.0 |
| huskies | 0.10.2 | 0.10.4 |
| konst | 0.3.16 | 0.3.17 |
| rand | 0.8.5 | 0.8.6 |
| rustls-webpki | 0.103.12 | 0.103.13 |
| sha3 | 0.10.8 | 0.10.9 |
| tokio | 1.52.0 | 1.52.1 |
| typenum | 1.19.0 | 1.20.0 |
| uuid | 1.23.0 | 1.23.1 |
| wasip2 | 1.0.2+wasi-0.2.9 | 1.0.3+wasi-0.2.9 |
| webpki-root-certs | 1.0.6 | 1.0.7 |
| webpki-roots | 1.0.6 | 1.0.7 |
| wit-bindgen | (new package entry) | 0.57.1 |
@@ -79,6 +79,13 @@ cd frontend && npm install && npm run dev

Configuration lives in `.huskies/project.toml`. See `.huskies/bot.toml.*.example` for transport setup.

## Architecture

Internal architecture documentation lives in [`docs/architecture/`](docs/architecture/):

- [Service module conventions](docs/architecture/service-modules.md) — layout, layering rules, and patterns for `server/src/service/`
- [Future extraction targets](docs/architecture/future-extractions.md) — recommended order for remaining handler extractions

## Releasing

Requires a Gitea API token in `.env` (`GITEA_TOKEN=your_token`).
@@ -0,0 +1,29 @@

# Future Service Module Extractions

Recommended order for extracting remaining HTTP handlers into `service/<domain>/` modules, following the conventions in [service-modules.md](service-modules.md).

## Recommended Order

1. **`settings`** — small surface, few dependencies, good warm-up
2. **`oauth`** — reads/writes token files; pure validation logic separates cleanly
3. **`wizard`** — stateless generation logic is already mostly pure; thin I/O layer
4. **`project`** — project scaffolding; wraps `io::fs::scaffold`, clean separation
5. **`io`** (search/shell) — wraps `io::search` and `io::shell`; pure query-building separable
6. **`anthropic`** — token-proxy handler; pure request-shaping + thin HTTP I/O
7. **`stories`** (workflow) — CRDT-backed story ops; typed errors for 400/404/409/500
8. **`events`** — SSE handler; mostly framework wiring, but event filtering is pure

## Special Case: `ws`

The WebSocket handler (`http/ws.rs`) is a **dedicated, harder extraction** because it mixes multiple concerns (chat dispatch, permission forwarding, SSE bridging) and depends on long-lived async streams. Extract it last, after the above list is complete and the service module pattern is well-established.

## Notes

- Each extraction should link back to `docs/architecture/service-modules.md` in the story description to maintain consistency.
- The `agents` extraction (story 604) is the reference implementation every future extraction should follow.
@@ -0,0 +1,227 @@

# Service Module Conventions

This document defines the layout, layering rules, and patterns for all service modules under `server/src/service/`. Every extraction from the HTTP handlers to a service module **must** follow these conventions.

---

## 1. Directory Layout

```
server/src/service/<domain>/
  mod.rs      — public API, typed Error, orchestration, integration tests
  io.rs       — every side-effectful call; the ONLY file that may touch the
                filesystem, spawn processes, or call external crates that do
  <topic>.rs  — pure logic for a named concern within the domain; no I/O
```

### Rules

- `<domain>` matches the HTTP handler filename (e.g. `agents`, `settings`, `oauth`).
- **No file named `logic.rs`** — use a descriptive domain name instead (e.g. `selection.rs`, `token.rs`, `validation.rs`).
- New topic files are added when a pure concern grows beyond ~50 lines or when it has independent test coverage needs.

---

## 2. The Functional-Core / Imperative-Shell Rule

```
io.rs (imperative shell) ←→ mod.rs (orchestrator) ←→ <topic>.rs (functional core)
```

| Layer | Allowed | Forbidden |
|-------|---------|-----------|
| `<topic>.rs` | Pure Rust, data-transformation, branching logic, pattern matching | Any I/O |
| `io.rs` | `std::fs`, `std::process`, `tokio::fs`, network calls, `SystemTime::now` | Business logic beyond a thin wrapper |
| `mod.rs` | Calls into `io.rs` and `<topic>.rs`; owns the `Error` type | Direct I/O without going through `io.rs` |

**Grep-enforceable check:** The following must NOT appear in any `service/<domain>/` file other than `io.rs`:

- `std::fs`
- `std::process`
- `std::thread::sleep`
- `tokio::fs`
- `reqwest`
- `SystemTime::now`

---

## 3. Error Type Pattern

Each service domain declares its own typed error enum in `mod.rs`:

```rust
/// Errors returned by `service::agents` operations.
#[derive(Debug)]
pub enum Error {
    ProjectRootNotConfigured,
    AgentNotFound(String),
    WorkItemNotFound(String),
    WorktreeError(String),
    ConfigError(String),
    IoError(String),
}

impl std::fmt::Display for Error { ... }
```

HTTP handlers map service errors to **specific** HTTP status codes:

| Error variant | HTTP status |
|--------------|-------------|
| `ProjectRootNotConfigured` | 400 Bad Request |
| `AgentNotFound` | 404 Not Found |
| `WorkItemNotFound` | 404 Not Found |
| `WorktreeError` | 400 Bad Request |
| `ConfigError` | 400 Bad Request |
| `IoError` | 500 Internal Server Error |

**No generic `bad_request` for everything** — distinguish 400 vs 404 vs 500.
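A sketch of the mapping this table implies, written against the `Error` enum shown above. The real `map_service_error` in the handler layer presumably converts into the web framework's response/error type; a bare numeric status stands in for it here, so treat this as an illustration rather than the project's actual helper.

```rust
// Hypothetical variant-to-status mapping matching the table above.
fn status_for(err: &Error) -> u16 {
    match err {
        Error::ProjectRootNotConfigured => 400,
        Error::AgentNotFound(_) | Error::WorkItemNotFound(_) => 404,
        Error::WorktreeError(_) | Error::ConfigError(_) => 400,
        Error::IoError(_) => 500,
    }
}
```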
---

## 4. Test Pattern

### Chosen default pattern: fixture helpers in `io::test_helpers`

All filesystem setup for tests lives in a `#[cfg(test)] pub mod test_helpers` block inside `io.rs`. Test blocks in `mod.rs` and topic files call these helpers instead of importing `std::fs` directly.

**Grep-enforceable check for test code:** The following must NOT appear inside `#[cfg(test)]` blocks in any `service/<domain>/` file **other than `io.rs`**:

- `std::fs::` (any item)
- `tokio::fs`
- `std::process::` (any item)
- `Command::new`

Run to verify:

```sh
grep -rn --include='*.rs' \
  'std::fs::\|tokio::fs\|std::process::\|Command::new' \
  server/src/service/ | grep -v '/io\.rs'
```

This must return zero matches (including lines inside `#[cfg(test)]` blocks).

### Pure topic files (`<topic>.rs`)

```rust
#[cfg(test)]
mod tests {
    use super::*;

    // Unit tests MUST:
    // - Use no tempdir, tokio runtime, or filesystem
    // - Cover every branch of every public function
    #[test]
    fn filter_removes_archived_agents() { ... }
}
```

### `io.rs`

```rust
/// Fixture helpers — the ONLY place allowed to call std::fs in tests.
#[cfg(test)]
pub mod test_helpers {
    use tempfile::TempDir;

    pub fn make_work_dirs(tmp: &TempDir) { ... }
    pub fn make_stage_dirs(tmp: &TempDir) { ... }
    pub fn make_project_toml(tmp: &TempDir, content: &str) { ... }
    pub fn write_story_file(tmp: &TempDir, relative_path: &str, content: &str) { ... }
}

#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;

    // IO tests MAY use tempdirs and real filesystem.
    // Keep them few and focused on the thin I/O wrapper contract.
    #[test]
    fn is_archived_returns_true_when_in_done() { ... }
}
```

### `mod.rs`

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use io::test_helpers::*; // ← fixture helpers; never import std::fs here

    // Integration tests compose io + pure layers end-to-end.
    // May use tempdirs. Keep the count small — they are integration-level.
    #[tokio::test]
    async fn list_agents_excludes_archived() { ... }
}
```

---

## 5. Dependency Injection Pattern

Service functions take **only the dependencies they actually use**:

```rust
// Good — takes only what it needs
pub async fn start_agent(
    pool: &AgentPool,
    project_root: &Path,
    story_id: &str,
    agent_name: Option<&str>,
) -> Result<AgentInfo, Error> { ... }

// Bad — takes the whole AppContext
pub async fn start_agent(ctx: &AppContext, ...) -> Result<AgentInfo, Error> { ... }
```

Standard injected dependencies for `service::agents`:

| Type | Purpose |
|------|---------|
| `&AgentPool` | Agent lifecycle operations |
| `&Path` (`project_root`) | Filesystem operations scoped to the project |
| `&WorkflowState` | In-memory test result cache |

**The dependency set chosen for `agents` is the reference pattern for all future service module extractions.**

---

## 6. HTTP Handler Contract

After extraction, HTTP handlers are thin adapters:

```rust
async fn start_agent(&self, payload: Json<StartAgentPayload>) -> OpenApiResult<...> {
    let project_root = self.ctx.agents.get_project_root(&self.ctx.state)
        .map_err(|e| bad_request(e))?;               // extract from AppContext
    let info = service::agents::start_agent(         // call service
        &self.ctx.agents, &project_root, &payload.story_id, payload.agent_name.as_deref(),
    ).await.map_err(map_service_error)?;             // map typed error → HTTP
    Ok(Json(AgentInfoResponse { ... }))               // shape DTO
}
```

Handlers must contain **no**:

- `std::fs` / file reads
- `std::process` invocations
- Inline load-mutate-save sequences
- Inline validation that belongs in the service layer

---

## 7. Follow-up Extractions

See [future-extractions.md](future-extractions.md) for the recommended order and rationale for remaining extraction targets.
Generated npm lockfile: +2 / -2

```diff
@@ -1,12 +1,12 @@
 {
   "name": "huskies",
-  "version": "0.10.2",
+  "version": "0.10.4",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "huskies",
-      "version": "0.10.2",
+      "version": "0.10.4",
       "dependencies": {
         "@types/react-syntax-highlighter": "^15.5.13",
         "react": "^19.1.0",
```
```diff
@@ -1,7 +1,7 @@
 {
   "name": "huskies",
   "private": true,
-  "version": "0.10.2",
+  "version": "0.10.4",
   "type": "module",
   "scripts": {
     "dev": "vite",
```
```
@@ -194,7 +194,6 @@ body,
#root {
  height: 100%;
  margin: 0;
  overflow: hidden;
}

/* Agent activity indicator pulse */
```
```diff
@@ -1,8 +1,14 @@
-import { fireEvent, render, screen, waitFor } from "@testing-library/react";
+import { act, fireEvent, render, screen, waitFor } from "@testing-library/react";
 import userEvent from "@testing-library/user-event";
 import { beforeEach, describe, expect, it, vi } from "vitest";
 import { api } from "./api/client";

+vi.mock("./api/gateway", () => ({
+  gatewayApi: {
+    getServerMode: vi.fn().mockResolvedValue({ mode: "standard" }),
+  },
+}));
+
 vi.mock("./api/client", () => {
   const api = {
     getCurrentProject: vi.fn(),
@@ -76,7 +82,11 @@ describe("App", () => {

   async function renderApp() {
     const { default: App } = await import("./App");
-    return render(<App />);
+    let result!: ReturnType<typeof render>;
+    await act(async () => {
+      result = render(<App />);
+    });
+    return result;
   }

   it("calls getCurrentProject() on mount", async () => {
```
```
@@ -1,4 +1,5 @@
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import type { ProjectSettings } from "./settings";
import { settingsApi } from "./settings";

const mockFetch = vi.fn();

@@ -22,7 +23,77 @@ function errorResponse(status: number, text: string) {
  return new Response(text, { status });
}

const defaultProjectSettings: ProjectSettings = {
  default_qa: "server",
  default_coder_model: null,
  max_coders: null,
  max_retries: 2,
  base_branch: null,
  rate_limit_notifications: true,
  timezone: null,
  rendezvous: null,
  watcher_sweep_interval_secs: 60,
  watcher_done_retention_secs: 14400,
};

describe("settingsApi", () => {
  describe("getProjectSettings", () => {
    it("sends GET to /settings and returns project settings", async () => {
      mockFetch.mockResolvedValueOnce(okResponse(defaultProjectSettings));

      const result = await settingsApi.getProjectSettings();

      expect(mockFetch).toHaveBeenCalledWith(
        "/api/settings",
        expect.objectContaining({
          headers: expect.objectContaining({
            "Content-Type": "application/json",
          }),
        }),
      );
      expect(result).toEqual(defaultProjectSettings);
    });

    it("uses custom baseUrl when provided", async () => {
      mockFetch.mockResolvedValueOnce(okResponse(defaultProjectSettings));
      await settingsApi.getProjectSettings("http://localhost:4000/api");
      expect(mockFetch).toHaveBeenCalledWith(
        "http://localhost:4000/api/settings",
        expect.anything(),
      );
    });
  });

  describe("putProjectSettings", () => {
    it("sends PUT to /settings with settings body", async () => {
      const updated = { ...defaultProjectSettings, default_qa: "agent" };
      mockFetch.mockResolvedValueOnce(okResponse(updated));

      const result = await settingsApi.putProjectSettings(updated);

      expect(mockFetch).toHaveBeenCalledWith(
        "/api/settings",
        expect.objectContaining({
          method: "PUT",
          body: JSON.stringify(updated),
        }),
      );
      expect(result.default_qa).toBe("agent");
    });

    it("throws on validation error", async () => {
      mockFetch.mockResolvedValueOnce(
        errorResponse(400, "Invalid default_qa value"),
      );
      await expect(
        settingsApi.putProjectSettings({
          ...defaultProjectSettings,
          default_qa: "invalid",
        }),
      ).rejects.toThrow("Invalid default_qa value");
    });
  });

  describe("getEditorCommand", () => {
    it("sends GET to /settings/editor and returns editor settings", async () => {
      const expected = { editor_command: "zed" };
```
```
@@ -2,6 +2,19 @@ export interface EditorSettings {
  editor_command: string | null;
}

export interface ProjectSettings {
  default_qa: string;
  default_coder_model: string | null;
  max_coders: number | null;
  max_retries: number;
  base_branch: string | null;
  rate_limit_notifications: boolean;
  timezone: string | null;
  rendezvous: string | null;
  watcher_sweep_interval_secs: number;
  watcher_done_retention_secs: number;
}

export interface OpenFileResult {
  success: boolean;
}

@@ -34,6 +47,21 @@ async function requestJson<T>(
}

export const settingsApi = {
  getProjectSettings(baseUrl?: string): Promise<ProjectSettings> {
    return requestJson<ProjectSettings>("/settings", {}, baseUrl);
  },

  putProjectSettings(
    settings: ProjectSettings,
    baseUrl?: string,
  ): Promise<ProjectSettings> {
    return requestJson<ProjectSettings>(
      "/settings",
      { method: "PUT", body: JSON.stringify(settings) },
      baseUrl,
    );
  },

  getEditorCommand(baseUrl?: string): Promise<EditorSettings> {
    return requestJson<EditorSettings>("/settings/editor", {}, baseUrl);
  },
```
Chat component test file: 29 hunks, all the same mechanical change. Every synchronous `act(() => { ... })` wrapper around a simulated WebSocket callback (`capturedWsHandlers?.onUpdate(...)`, `.onToken(...)`, `.onActivity(...)`, `.onReconciliationProgress(...)`, `.onSessionId(...)`, `.onError(...)`) becomes an awaited async wrapper:

```diff
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate(messages);
     });
```

Affected hunks: @@ -165,7 / -199,7 / -219,7 / -254,7 (unified tool call UI), -396,7 / -417,7 / -435,7 / -447,7 (reconciliation banner), -504,7 / -555,7 / -604,7 (localStorage persistence, Story 145), -692,12 / -742,7 / -765,12 / -792,11 / -818,11 (activity status indicator, Bug 140), -899,7 / -1066,7 (message queue, Story 155), -1145,7 / -1176,7 / -1200,7 / -1211,7 / -1244,7 / -1268,7 (streaming message bubble styling, Story 163), -1310,7 / -1394,7 (Claude Code session ID persistence, Bug 264), -1595,7 (slash command handling, Story 374), -1701,7 / -1715,7 (WebSocket error messages, Bug 450).
```diff
@@ -9,6 +9,7 @@ import { useChatWebSocket } from "../hooks/useChatWebSocket";
 import { estimateTokens, getContextWindowSize } from "../utils/chatUtils";
 import { ApiKeyDialog } from "./ApiKeyDialog";
 import { BotConfigPage } from "./BotConfigPage";
+import { SettingsPage } from "./SettingsPage";
 import { ChatHeader } from "./ChatHeader";
 import type { ChatInputHandle } from "./ChatInput";
 import { ChatInput } from "./ChatInput";
@@ -62,7 +63,7 @@ export function Chat({
     null,
   );
   const [showHelp, setShowHelp] = useState(false);
-  const [view, setView] = useState<"chat" | "bot-config">("chat");
+  const [view, setView] = useState<"chat" | "bot-config" | "settings">("chat");
   const [queuedMessages, setQueuedMessages] = useState<
     { id: string; text: string }[]
   >([]);
@@ -376,16 +377,21 @@ export function Chat({
         wsConnected={wsConnected}
         oauthStatus={oauthStatus}
         onShowBotConfig={() => setView("bot-config")}
+        onShowSettings={() => setView("settings")}
       />

       {view === "bot-config" && (
         <BotConfigPage onBack={() => setView("chat")} />
       )}

+      {view === "settings" && (
+        <SettingsPage onBack={() => setView("chat")} />
+      )}
+
       <div
         data-testid="chat-content-area"
         style={{
-          display: view === "bot-config" ? "none" : "flex",
+          display: view === "chat" ? "flex" : "none",
           flex: 1,
           minHeight: 0,
           flexDirection: isNarrowScreen ? "column" : "row",
```
```diff
@@ -35,6 +35,7 @@ interface ChatHeaderProps {
   wsConnected: boolean;
   oauthStatus?: OAuthStatus | null;
   onShowBotConfig?: () => void;
+  onShowSettings?: () => void;
 }

 const getContextEmoji = (percentage: number): string => {
@@ -60,6 +61,7 @@ export function ChatHeader({
   wsConnected,
   oauthStatus = null,
   onShowBotConfig,
+  onShowSettings,
 }: ChatHeaderProps) {
   const hasModelOptions = availableModels.length > 0 || claudeModels.length > 0;
   const [showConfirm, setShowConfirm] = useState(false);
@@ -552,6 +554,43 @@ export function ChatHeader({
         </button>
       )}

+      {onShowSettings && (
+        <button
+          type="button"
+          onClick={onShowSettings}
+          title="Edit project.toml settings"
+          style={{
+            padding: "6px 12px",
+            borderRadius: "99px",
+            border: "none",
+            fontSize: "0.85em",
+            backgroundColor: "#2f2f2f",
+            color: "#888",
+            cursor: "pointer",
+            outline: "none",
+            transition: "all 0.2s",
+          }}
+          onMouseOver={(e) => {
+            e.currentTarget.style.backgroundColor = "#3f3f3f";
+            e.currentTarget.style.color = "#ccc";
+          }}
+          onMouseOut={(e) => {
+            e.currentTarget.style.backgroundColor = "#2f2f2f";
+            e.currentTarget.style.color = "#888";
+          }}
+          onFocus={(e) => {
+            e.currentTarget.style.backgroundColor = "#3f3f3f";
+            e.currentTarget.style.color = "#ccc";
+          }}
+          onBlur={(e) => {
+            e.currentTarget.style.backgroundColor = "#2f2f2f";
+            e.currentTarget.style.color = "#888";
+          }}
+        >
+          ⚙ Settings
+        </button>
+      )}
+
       {hasModelOptions ? (
         <select
           value={model}
```
@@ -0,0 +1,461 @@
|
||||
import * as React from "react";
|
||||
import type { ProjectSettings } from "../api/settings";
|
||||
import { settingsApi } from "../api/settings";
|
||||
|
||||
const { useState, useEffect } = React;
|
||||
|
||||
interface SettingsPageProps {
|
||||
onBack: () => void;
|
||||
}
|
||||
|
||||
const fieldStyle: React.CSSProperties = {
|
||||
display: "flex",
|
||||
flexDirection: "column",
|
||||
gap: "4px",
|
||||
};
|
||||
|
||||
const labelStyle: React.CSSProperties = {
|
||||
fontSize: "0.8em",
|
||||
color: "#aaa",
|
||||
fontWeight: 500,
|
||||
};
|
||||
|
||||
const descStyle: React.CSSProperties = {
|
||||
fontSize: "0.75em",
|
||||
color: "#666",
|
||||
marginTop: "2px",
|
||||
};
|
||||
|
||||
const inputStyle: React.CSSProperties = {
|
||||
padding: "8px 10px",
|
||||
borderRadius: "6px",
|
||||
border: "1px solid #333",
|
||||
background: "#1e1e1e",
|
||||
color: "#ececec",
|
||||
fontSize: "0.9em",
|
||||
fontFamily: "monospace",
|
||||
outline: "none",
|
||||
};
|
||||
|
||||
const sectionStyle: React.CSSProperties = {
|
||||
background: "#1e1e1e",
|
||||
border: "1px solid #333",
|
||||
borderRadius: "8px",
|
||||
padding: "20px",
|
||||
display: "flex",
|
||||
flexDirection: "column",
|
||||
gap: "16px",
|
||||
};
|
||||
|
||||
const sectionTitleStyle: React.CSSProperties = {
|
||||
fontSize: "0.85em",
|
||||
fontWeight: 600,
|
||||
color: "#aaa",
|
||||
textTransform: "uppercase",
|
||||
letterSpacing: "0.06em",
|
||||
marginBottom: "2px",
|
||||
};
|
||||
|
||||
interface TextFieldProps {
|
||||
label: string;
|
||||
description?: string;
|
||||
value: string;
|
||||
onChange: (v: string) => void;
|
||||
placeholder?: string;
|
||||
}
|
||||
|
||||
function TextField({ label, description, value, onChange, placeholder }: TextFieldProps) {
|
||||
return (
|
||||
<div style={fieldStyle}>
|
||||
<label style={labelStyle}>{label}</label>
|
||||
{description && <span style={descStyle}>{description}</span>}
|
||||
<input
|
||||
type="text"
|
||||
value={value}
|
||||
onChange={(e) => onChange(e.target.value)}
|
||||
placeholder={placeholder ?? ""}
|
||||
style={inputStyle}
|
||||
autoComplete="off"
|
||||
/>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
interface NumberFieldProps {
|
||||
label: string;
|
||||
description?: string;
|
||||
value: number | null;
|
||||
onChange: (v: number | null) => void;
|
||||
min?: number;
|
||||
placeholder?: string;
|
||||
}
|
||||
|
||||
function NumberField({ label, description, value, onChange, min, placeholder }: NumberFieldProps) {
|
||||
return (
|
||||
<div style={fieldStyle}>
|
||||
<label style={labelStyle}>{label}</label>
|
||||
{description && <span style={descStyle}>{description}</span>}
|
||||
<input
|
||||
type="number"
|
||||
value={value === null ? "" : value}
|
||||
min={min}
|
||||
onChange={(e) => {
|
||||
const raw = e.target.value.trim();
|
||||
if (raw === "") {
|
||||
onChange(null);
|
||||
} else {
|
||||
const n = Number(raw);
|
||||
if (!Number.isNaN(n)) onChange(n);
|
||||
}
|
||||
}}
|
||||
placeholder={placeholder ?? ""}
|
||||
style={inputStyle}
|
||||
/>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
interface CheckboxFieldProps {
|
||||
label: string;
|
||||
description?: string;
|
||||
checked: boolean;
|
||||
onChange: (v: boolean) => void;
|
||||
}
|
||||
|
||||
function CheckboxField({ label, description, checked, onChange }: CheckboxFieldProps) {
|
||||
return (
|
||||
<div style={fieldStyle}>
|
||||
{description && <span style={descStyle}>{description}</span>}
|
||||
<label
|
||||
style={{
|
||||
display: "flex",
|
||||
alignItems: "center",
|
||||
gap: "8px",
|
||||
cursor: "pointer",
|
||||
fontSize: "0.9em",
|
||||
color: "#ccc",
|
||||
}}
|
||||
>
|
||||
<input
|
||||
type="checkbox"
|
||||
checked={checked}
|
||||
onChange={(e) => onChange(e.target.checked)}
|
||||
/>
|
||||
{label}
|
||||
</label>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
const QA_MODES = ["server", "agent", "human"] as const;
|
||||
|
||||
/** Settings page — form-based editor for project.toml scalar settings. */
|
||||
export function SettingsPage({ onBack }: SettingsPageProps) {
|
||||
const [settings, setSettings] = useState<ProjectSettings | null>(null);
|
||||
const [status, setStatus] = useState<"idle" | "loading" | "saving" | "saved" | "error">("loading");
|
||||
const [errorMsg, setErrorMsg] = useState<string | null>(null);
|
||||
const [validationErrors, setValidationErrors] = useState<Record<string, string>>({});
|
||||
|
||||
useEffect(() => {
|
||||
settingsApi
|
||||
.getProjectSettings()
|
||||
.then((s) => {
|
||||
setSettings(s);
|
||||
setStatus("idle");
|
||||
})
|
||||
.catch((e: unknown) => {
|
||||
setStatus("error");
|
||||
setErrorMsg(e instanceof Error ? e.message : "Failed to load settings");
|
||||
});
|
||||
}, []);
|
||||
|
||||
function patch(partial: Partial<ProjectSettings>) {
|
||||
setSettings((prev) => (prev ? { ...prev, ...partial } : prev));
|
||||
setValidationErrors({});
|
||||
}
|
||||
|
||||
function validate(s: ProjectSettings): Record<string, string> {
|
||||
const errors: Record<string, string> = {};
|
||||
if (!QA_MODES.includes(s.default_qa as (typeof QA_MODES)[number])) {
|
||||
errors.default_qa = `Must be one of: ${QA_MODES.join(", ")}`;
|
||||
}
|
||||
if (s.max_retries < 0) {
|
||||
errors.max_retries = "Must be 0 or greater";
|
||||
}
|
||||
if (s.watcher_sweep_interval_secs < 1) {
|
||||
errors.watcher_sweep_interval_secs = "Must be at least 1 second";
|
||||
}
|
||||
if (s.watcher_done_retention_secs < 1) {
|
||||
errors.watcher_done_retention_secs = "Must be at least 1 second";
|
||||
}
|
||||
return errors;
|
||||
}
|
||||
|
||||
async function handleSave() {
|
||||
if (!settings) return;
|
||||
const errors = validate(settings);
|
||||
if (Object.keys(errors).length > 0) {
|
||||
setValidationErrors(errors);
|
||||
return;
|
||||
}
|
||||
setStatus("saving");
|
||||
setErrorMsg(null);
|
||||
try {
|
||||
const saved = await settingsApi.putProjectSettings(settings);
|
||||
setSettings(saved);
|
||||
setStatus("saved");
|
||||
setTimeout(() => setStatus("idle"), 2000);
|
||||
} catch (e) {
|
||||
setStatus("error");
|
||||
setErrorMsg(e instanceof Error ? e.message : "Save failed");
|
||||
}
|
||||
}
|
||||
|
||||
const s = settings;
|
||||
|
||||
return (
|
||||
<div
|
||||
style={{
|
||||
display: "flex",
|
||||
flexDirection: "column",
|
||||
height: "100%",
|
||||
backgroundColor: "#171717",
|
||||
color: "#ececec",
|
||||
overflow: "auto",
|
||||
}}
|
||||
>
|
||||
{/* Header */}
|
||||
<div
|
||||
style={{
|
||||
padding: "12px 24px",
|
||||
borderBottom: "1px solid #333",
|
||||
display: "flex",
|
||||
alignItems: "center",
|
||||
gap: "16px",
|
||||
background: "#171717",
|
||||
flexShrink: 0,
|
||||
}}
|
||||
>
|
||||
<button
|
||||
type="button"
|
||||
onClick={onBack}
|
||||
style={{
|
||||
background: "transparent",
|
||||
border: "none",
|
||||
cursor: "pointer",
|
||||
color: "#888",
|
||||
fontSize: "0.9em",
|
||||
padding: "4px 8px",
|
||||
borderRadius: "4px",
|
||||
}}
|
||||
>
|
||||
← Back
|
||||
</button>
|
||||
<span style={{ fontWeight: 700, fontSize: "1em" }}>Project Settings</span>
|
||||
</div>
|
||||
|
||||
{/* Body */}
|
||||
<div
|
||||
style={{
|
||||
flex: 1,
|
||||
padding: "24px",
|
||||
display: "flex",
|
||||
flexDirection: "column",
|
||||
gap: "20px",
|
||||
maxWidth: "640px",
|
||||
}}
|
||||
>
|
||||
{status === "loading" && (
|
||||
<p style={{ color: "#888", fontSize: "0.9em" }}>Loading settings…</p>
|
||||
)}
|
||||
|
||||
{status === "error" && !s && (
|
||||
<p style={{ color: "#f08080", fontSize: "0.9em" }}>
|
||||
Error: {errorMsg}
|
||||
</p>
|
||||
)}
|
||||
|
||||
{s && (
|
||||
<>
|
||||
{/* Pipeline */}
|
||||
<div style={sectionStyle}>
|
||||
<div style={sectionTitleStyle}>Pipeline</div>
|
||||
|
||||
<div style={fieldStyle}>
|
||||
<label style={labelStyle}>Default QA Mode</label>
|
||||
<span style={descStyle}>
|
||||
How stories are QA-reviewed after the coder stage.
|
||||
Default: server.
|
||||
</span>
|
||||
<select
|
||||
value={s.default_qa}
|
||||
onChange={(e) => patch({ default_qa: e.target.value })}
|
||||
style={{ ...inputStyle, cursor: "pointer" }}
|
||||
>
|
||||
{QA_MODES.map((m) => (
|
||||
<option key={m} value={m}>
|
||||
{m}
|
||||
</option>
|
||||
))}
|
||||
</select>
|
||||
{validationErrors.default_qa && (
|
||||
<span style={{ color: "#f08080", fontSize: "0.8em" }}>
|
||||
{validationErrors.default_qa}
|
||||
</span>
|
||||
)}
|
||||
</div>
|
||||
|
||||
<NumberField
|
||||
label="Max Retries"
|
||||
description="Maximum retries per story per pipeline stage before blocking. Default: 2. Set 0 to disable."
|
||||
value={s.max_retries}
|
||||
min={0}
|
||||
onChange={(v) => patch({ max_retries: v ?? 0 })}
|
||||
/>
|
||||
{validationErrors.max_retries && (
|
||||
<span style={{ color: "#f08080", fontSize: "0.8em" }}>
|
||||
{validationErrors.max_retries}
|
||||
</span>
|
||||
)}
|
||||
|
||||
<NumberField
|
||||
label="Max Concurrent Coders"
|
||||
description="Maximum number of coder-stage agents running at once. Leave blank for unlimited."
|
||||
value={s.max_coders}
|
||||
min={1}
|
||||
placeholder="unlimited"
|
||||
onChange={(v) => patch({ max_coders: v })}
|
||||
/>
|
||||
|
||||
<TextField
|
||||
label="Default Coder Model"
|
||||
description="When set, only coder agents matching this model are auto-assigned (e.g. sonnet, opus)."
|
||||
value={s.default_coder_model ?? ""}
|
||||
onChange={(v) =>
|
||||
patch({ default_coder_model: v.trim() || null })
|
||||
}
|
||||
placeholder="e.g. sonnet"
|
||||
/>
|
||||
</div>
|
||||
|
||||
{/* Git */}
|
||||
<div style={sectionStyle}>
|
||||
<div style={sectionTitleStyle}>Git</div>
|
||||
|
||||
<TextField
|
||||
label="Base Branch"
|
||||
description="Overrides auto-detection of the merge target branch (e.g. main, master, develop)."
|
||||
value={s.base_branch ?? ""}
|
||||
onChange={(v) =>
|
||||
patch({ base_branch: v.trim() || null })
|
||||
}
|
||||
placeholder="e.g. master"
|
||||
/>
|
||||
</div>
|
||||
|
||||
{/* Notifications */}
|
||||
<div style={sectionStyle}>
|
||||
<div style={sectionTitleStyle}>Notifications</div>
|
||||
|
||||
<CheckboxField
|
||||
label="Rate Limit Notifications"
|
||||
description="Send chat notifications on soft API rate-limit warnings. Disable to reduce noise."
|
||||
checked={s.rate_limit_notifications}
|
||||
onChange={(v) => patch({ rate_limit_notifications: v })}
|
||||
/>
|
||||
</div>
|
||||
|
||||
{/* Advanced */}
|
||||
<div style={sectionStyle}>
|
||||
<div style={sectionTitleStyle}>Advanced</div>
|
||||
|
||||
<TextField
|
||||
label="Timezone"
|
||||
description="IANA timezone for timer inputs (e.g. Europe/London, America/New_York). Leave blank for system default."
|
||||
value={s.timezone ?? ""}
|
||||
onChange={(v) => patch({ timezone: v.trim() || null })}
|
||||
placeholder="e.g. Europe/London"
|
||||
/>
|
||||
|
||||
<TextField
|
||||
label="Rendezvous URL"
|
||||
description="WebSocket URL of a remote huskies node for CRDT state sync (e.g. ws://host:3001/crdt-sync)."
|
||||
value={s.rendezvous ?? ""}
|
||||
onChange={(v) => patch({ rendezvous: v.trim() || null })}
|
||||
placeholder="e.g. ws://host:3001/crdt-sync"
|
||||
/>
|
||||
</div>
|
||||
|
||||
{/* Watcher */}
|
||||
<div style={sectionStyle}>
|
||||
<div style={sectionTitleStyle}>Archiver</div>
|
||||
|
||||
<NumberField
|
||||
label="Sweep Interval (seconds)"
|
||||
description="How often to check the done stage for items ready to archive. Default: 60."
|
||||
value={s.watcher_sweep_interval_secs}
|
||||
min={1}
|
||||
onChange={(v) =>
|
||||
patch({ watcher_sweep_interval_secs: v ?? 60 })
|
||||
}
|
||||
/>
|
||||
{validationErrors.watcher_sweep_interval_secs && (
|
||||
<span style={{ color: "#f08080", fontSize: "0.8em" }}>
|
||||
{validationErrors.watcher_sweep_interval_secs}
|
||||
</span>
|
||||
)}
|
||||
|
||||
<NumberField
|
||||
label="Done Retention (seconds)"
|
||||
description="How long an item must stay in the done stage before archiving. Default: 14400 (4 hours)."
|
||||
value={s.watcher_done_retention_secs}
|
||||
min={1}
|
||||
onChange={(v) =>
|
||||
patch({ watcher_done_retention_secs: v ?? 14400 })
|
||||
}
|
||||
/>
|
||||
{validationErrors.watcher_done_retention_secs && (
|
||||
<span style={{ color: "#f08080", fontSize: "0.8em" }}>
|
||||
{validationErrors.watcher_done_retention_secs}
|
||||
</span>
|
||||
)}
|
||||
</div>
|
||||
|
||||
{/* Save */}
|
||||
<div style={{ display: "flex", alignItems: "center", gap: "12px" }}>
|
||||
<button
|
||||
type="button"
|
||||
onClick={handleSave}
|
||||
disabled={status === "saving"}
|
||||
style={{
|
||||
padding: "8px 24px",
|
||||
borderRadius: "6px",
|
||||
border: "none",
|
||||
background:
|
||||
status === "saved" ? "#1a5c2a" : "#2563eb",
|
||||
color: "#fff",
|
||||
cursor:
|
||||
status === "saving" ? "not-allowed" : "pointer",
|
||||
fontSize: "0.9em",
|
||||
fontWeight: 600,
|
||||
opacity: status === "saving" ? 0.7 : 1,
|
||||
}}
|
||||
>
|
||||
{status === "saving"
|
||||
? "Saving…"
|
||||
: status === "saved"
|
||||
? "Saved!"
|
||||
: "Save"}
|
||||
</button>
|
||||
{status === "error" && errorMsg && (
|
||||
<span style={{ color: "#f08080", fontSize: "0.85em" }}>
|
||||
{errorMsg}
|
||||
</span>
|
||||
)}
|
||||
</div>
|
||||
</>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
}
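
For context when reading the component above, here is a minimal sketch of the `ProjectSettings` shape and `settingsApi` client it appears to consume, inferred only from how the fields are read and patched in `SettingsPage`. The field types, the `/api/settings` path, and the fetch-based client are assumptions for illustration; the actual `frontend/src/api/settings.ts` module may differ.

```typescript
// Hypothetical sketch inferred from SettingsPage usage — not the actual api/settings module.
export interface ProjectSettings {
  default_qa: string;                 // "server" | "agent" | "human"
  max_retries: number;
  max_coders: number | null;          // null = unlimited
  default_coder_model: string | null;
  base_branch: string | null;
  rate_limit_notifications: boolean;
  timezone: string | null;
  rendezvous: string | null;
  watcher_sweep_interval_secs: number;
  watcher_done_retention_secs: number;
}

// Assumed endpoint; the real path may differ.
const SETTINGS_URL = "/api/settings";

export const settingsApi = {
  async getProjectSettings(): Promise<ProjectSettings> {
    const res = await fetch(SETTINGS_URL);
    if (!res.ok) throw new Error(`Failed to load settings: HTTP ${res.status}`);
    return (await res.json()) as ProjectSettings;
  },

  async putProjectSettings(s: ProjectSettings): Promise<ProjectSettings> {
    const res = await fetch(SETTINGS_URL, {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(s),
    });
    if (!res.ok) throw new Error(`Failed to save settings: HTTP ${res.status}`);
    return (await res.json()) as ProjectSettings;
  },
};
```

The nullable fields correspond to the "leave blank for default/unlimited" inputs in the form above.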
|
||||
@@ -138,7 +138,7 @@ describe("usePathCompletion hook", () => {
|
||||
expect(result.current.matchList[0].name).toBe("Documents");
|
||||
});
|
||||
|
||||
it("calls setPathInput when acceptMatch is invoked", () => {
|
||||
it("calls setPathInput when acceptMatch is invoked", async () => {
|
||||
const setPathInput = vi.fn();
|
||||
|
||||
const { result } = renderHook(() =>
|
||||
@@ -151,7 +151,7 @@ describe("usePathCompletion hook", () => {
|
||||
}),
|
||||
);
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.acceptMatch("/home/user/Documents/");
|
||||
});
|
||||
|
||||
@@ -308,14 +308,14 @@ describe("usePathCompletion hook", () => {
|
||||
expect(result.current.matchList.length).toBe(2);
|
||||
});
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.acceptSelectedMatch();
|
||||
});
|
||||
|
||||
expect(setPathInput).toHaveBeenCalledWith("/home/user/Documents/");
|
||||
});
|
||||
|
||||
it("acceptSelectedMatch does nothing when matchList is empty", () => {
|
||||
it("acceptSelectedMatch does nothing when matchList is empty", async () => {
|
||||
const setPathInput = vi.fn();
|
||||
|
||||
const { result } = renderHook(() =>
|
||||
@@ -328,7 +328,7 @@ describe("usePathCompletion hook", () => {
|
||||
}),
|
||||
);
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.acceptSelectedMatch();
|
||||
});
|
||||
|
||||
@@ -352,7 +352,7 @@ describe("usePathCompletion hook", () => {
|
||||
expect(result.current.matchList.length).toBe(1);
|
||||
});
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.closeSuggestions();
|
||||
});
|
||||
|
||||
@@ -450,7 +450,7 @@ describe("usePathCompletion hook", () => {
|
||||
expect(result.current.matchList.length).toBe(2);
|
||||
});
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.setSelectedMatch(1);
|
||||
});
|
||||
|
||||
|
||||
@@ -19,7 +19,7 @@ function makeMessages(count: number): Message[] {
|
||||
}));
|
||||
}
|
||||
|
||||
describe("useChatHistory", () => {
|
||||
describe("useChatHistory", async () => {
|
||||
beforeEach(() => {
|
||||
localStorage.clear();
|
||||
});
|
||||
@@ -28,7 +28,7 @@ describe("useChatHistory", () => {
|
||||
localStorage.clear();
|
||||
});
|
||||
|
||||
it("AC1: restores messages from localStorage on mount", () => {
|
||||
it("AC1: restores messages from localStorage on mount", async () => {
|
||||
localStorage.setItem(STORAGE_KEY, JSON.stringify(sampleMessages));
|
||||
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
@@ -36,13 +36,13 @@ describe("useChatHistory", () => {
|
||||
expect(result.current.messages).toEqual(sampleMessages);
|
||||
});
|
||||
|
||||
it("AC1: returns empty array when localStorage has no data", () => {
|
||||
it("AC1: returns empty array when localStorage has no data", async () => {
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
|
||||
expect(result.current.messages).toEqual([]);
|
||||
});
|
||||
|
||||
it("AC1: returns empty array when localStorage contains invalid JSON", () => {
|
||||
it("AC1: returns empty array when localStorage contains invalid JSON", async () => {
|
||||
localStorage.setItem(STORAGE_KEY, "not-json{{{");
|
||||
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
@@ -50,7 +50,7 @@ describe("useChatHistory", () => {
|
||||
expect(result.current.messages).toEqual([]);
|
||||
});
|
||||
|
||||
it("AC1: returns empty array when localStorage contains a non-array", () => {
|
||||
it("AC1: returns empty array when localStorage contains a non-array", async () => {
|
||||
localStorage.setItem(STORAGE_KEY, JSON.stringify({ not: "array" }));
|
||||
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
@@ -58,10 +58,10 @@ describe("useChatHistory", () => {
|
||||
expect(result.current.messages).toEqual([]);
|
||||
});
|
||||
|
||||
it("AC2: saves messages to localStorage when setMessages is called with an array", () => {
|
||||
it("AC2: saves messages to localStorage when setMessages is called with an array", async () => {
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.setMessages(sampleMessages);
|
||||
});
|
||||
|
||||
@@ -69,10 +69,10 @@ describe("useChatHistory", () => {
|
||||
expect(stored).toEqual(sampleMessages);
|
||||
});
|
||||
|
||||
it("AC2: saves messages to localStorage when setMessages is called with updater function", () => {
|
||||
it("AC2: saves messages to localStorage when setMessages is called with updater function", async () => {
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.setMessages(() => sampleMessages);
|
||||
});
|
||||
|
||||
@@ -80,14 +80,14 @@ describe("useChatHistory", () => {
|
||||
expect(stored).toEqual(sampleMessages);
|
||||
});
|
||||
|
||||
it("AC3: clearMessages removes messages from state and localStorage", () => {
|
||||
it("AC3: clearMessages removes messages from state and localStorage", async () => {
|
||||
localStorage.setItem(STORAGE_KEY, JSON.stringify(sampleMessages));
|
||||
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
|
||||
expect(result.current.messages).toEqual(sampleMessages);
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.clearMessages();
|
||||
});
|
||||
|
||||
@@ -95,7 +95,7 @@ describe("useChatHistory", () => {
|
||||
expect(localStorage.getItem(STORAGE_KEY)).toBeNull();
|
||||
});
|
||||
|
||||
it("AC4: handles localStorage quota errors gracefully", () => {
|
||||
it("AC4: handles localStorage quota errors gracefully", async () => {
|
||||
const warnSpy = vi.spyOn(console, "warn").mockImplementation(() => {});
|
||||
const setItemSpy = vi
|
||||
.spyOn(Storage.prototype, "setItem")
|
||||
@@ -106,7 +106,7 @@ describe("useChatHistory", () => {
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
|
||||
// Should not throw
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.setMessages(sampleMessages);
|
||||
});
|
||||
|
||||
@@ -121,7 +121,7 @@ describe("useChatHistory", () => {
|
||||
setItemSpy.mockRestore();
|
||||
});
|
||||
|
||||
it("AC5: scopes storage key to project path", () => {
|
||||
it("AC5: scopes storage key to project path", async () => {
|
||||
const projectA = "/projects/a";
|
||||
const projectB = "/projects/b";
|
||||
const keyA = `storykit-chat-history:${projectA}`;
|
||||
@@ -140,12 +140,12 @@ describe("useChatHistory", () => {
|
||||
expect(resultB.current.messages).toEqual(messagesB);
|
||||
});
|
||||
|
||||
it("AC2: removes localStorage key when messages are set to empty array", () => {
|
||||
it("AC2: removes localStorage key when messages are set to empty array", async () => {
|
||||
localStorage.setItem(STORAGE_KEY, JSON.stringify(sampleMessages));
|
||||
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.setMessages([]);
|
||||
});
|
||||
|
||||
@@ -154,20 +154,20 @@ describe("useChatHistory", () => {
|
||||
|
||||
// --- Story 179: Chat history pruning tests ---
|
||||
|
||||
it("S179: default limit of 200 is applied when saving to localStorage", () => {
|
||||
it("S179: default limit of 200 is applied when saving to localStorage", async () => {
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
|
||||
expect(result.current.maxMessages).toBe(200);
|
||||
});
|
||||
|
||||
it("S179: messages are pruned from the front when exceeding the limit", () => {
|
||||
it("S179: messages are pruned from the front when exceeding the limit", async () => {
|
||||
// Set a small limit to make testing practical
|
||||
localStorage.setItem(LIMIT_KEY, "3");
|
||||
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
const fiveMessages = makeMessages(5);
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.setMessages(fiveMessages);
|
||||
});
|
||||
|
||||
@@ -180,13 +180,13 @@ describe("useChatHistory", () => {
|
||||
expect(stored[0].content).toBe("Message 3");
|
||||
});
|
||||
|
||||
it("S179: messages under the limit are not pruned", () => {
|
||||
it("S179: messages under the limit are not pruned", async () => {
|
||||
localStorage.setItem(LIMIT_KEY, "10");
|
||||
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
const threeMessages = makeMessages(3);
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.setMessages(threeMessages);
|
||||
});
|
||||
|
||||
@@ -197,7 +197,7 @@ describe("useChatHistory", () => {
|
||||
expect(stored).toHaveLength(3);
|
||||
});
|
||||
|
||||
it("S179: limit is configurable via localStorage key", () => {
|
||||
it("S179: limit is configurable via localStorage key", async () => {
|
||||
localStorage.setItem(LIMIT_KEY, "5");
|
||||
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
@@ -205,10 +205,10 @@ describe("useChatHistory", () => {
|
||||
expect(result.current.maxMessages).toBe(5);
|
||||
});
|
||||
|
||||
it("S179: setMaxMessages updates the limit and persists it", () => {
|
||||
it("S179: setMaxMessages updates the limit and persists it", async () => {
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.setMaxMessages(50);
|
||||
});
|
||||
|
||||
@@ -216,13 +216,13 @@ describe("useChatHistory", () => {
|
||||
expect(localStorage.getItem(LIMIT_KEY)).toBe("50");
|
||||
});
|
||||
|
||||
it("S179: a limit of 0 means unlimited (no pruning)", () => {
|
||||
it("S179: a limit of 0 means unlimited (no pruning)", async () => {
|
||||
localStorage.setItem(LIMIT_KEY, "0");
|
||||
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
const manyMessages = makeMessages(500);
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.setMessages(manyMessages);
|
||||
});
|
||||
|
||||
@@ -233,11 +233,11 @@ describe("useChatHistory", () => {
|
||||
expect(stored).toEqual(manyMessages);
|
||||
});
|
||||
|
||||
it("S179: changing the limit re-prunes messages on next save", () => {
|
||||
it("S179: changing the limit re-prunes messages on next save", async () => {
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
const tenMessages = makeMessages(10);
|
||||
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.setMessages(tenMessages);
|
||||
});
|
||||
|
||||
@@ -248,7 +248,7 @@ describe("useChatHistory", () => {
|
||||
expect(stored).toHaveLength(10);
|
||||
|
||||
// Now lower the limit — the effect re-runs and prunes
|
||||
act(() => {
|
||||
await act(async () => {
|
||||
result.current.setMaxMessages(3);
|
||||
});
|
||||
|
||||
@@ -257,7 +257,7 @@ describe("useChatHistory", () => {
|
||||
expect(stored[0].content).toBe("Message 8");
|
||||
});
|
||||
|
||||
it("S179: invalid limit in localStorage falls back to default", () => {
|
||||
it("S179: invalid limit in localStorage falls back to default", async () => {
|
||||
localStorage.setItem(LIMIT_KEY, "not-a-number");
|
||||
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||
@@ -265,7 +265,7 @@ describe("useChatHistory", () => {
|
||||
expect(result.current.maxMessages).toBe(200);
|
||||
});
|
||||
|
||||
it("S179: negative limit in localStorage falls back to default", () => {
|
||||
it("S179: negative limit in localStorage falls back to default", async () => {
|
||||
localStorage.setItem(LIMIT_KEY, "-5");
|
||||
|
||||
const { result } = renderHook(() => useChatHistory(PROJECT));
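
Taken together, the S179 tests above pin down the pruning contract: a configurable cap (default 200, read from `LIMIT_KEY`), 0 meaning unlimited, invalid or negative stored values falling back to the default, and pruning from the front so only the most recent messages survive. A minimal sketch of that behaviour, with hypothetical helper names (not the actual `useChatHistory` implementation):

```typescript
// Illustrative only — the real useChatHistory hook may structure this differently.
const DEFAULT_MAX_MESSAGES = 200;

function readLimit(limitKey: string): number {
  const raw = localStorage.getItem(limitKey);
  const n = raw === null ? NaN : Number(raw);
  // Invalid or negative values fall back to the default; 0 means unlimited.
  if (Number.isNaN(n) || n < 0) return DEFAULT_MAX_MESSAGES;
  return n;
}

function pruneMessages<T>(messages: T[], limit: number): T[] {
  if (limit === 0 || messages.length <= limit) return messages;
  // Drop the oldest entries so only the most recent `limit` remain.
  return messages.slice(messages.length - limit);
}
```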
|
||||
|
||||
@@ -0,0 +1,75 @@
|
||||
import { expect, test } from "@playwright/test";
|
||||
|
||||
/// Regression test: the gateway UI must allow vertical scrolling when content
/// overflows the viewport. Verifies the fix for `overflow: hidden` on
/// `html / body / #root` — without that fix the page is locked at y=0.
|
||||
test.describe("Gateway UI scrolling", () => {
|
||||
test("page scrolls when content exceeds viewport height", async ({
|
||||
page,
|
||||
}) => {
|
||||
// Use a small viewport to guarantee overflow even with modest content.
|
||||
await page.setViewportSize({ width: 1280, height: 400 });
|
||||
|
||||
// --- mock API endpoints ---
|
||||
|
||||
// Identify this server as a gateway.
|
||||
await page.route("/gateway/mode", async (route) => {
|
||||
await route.fulfill({ json: { mode: "gateway" } });
|
||||
});
|
||||
|
||||
// Return enough agents to push the page past 400 px.
|
||||
const agents = Array.from({ length: 15 }, (_, i) => ({
|
||||
id: `agent-${i}`,
|
||||
label: `Build Agent ${i}`,
|
||||
address: `10.0.0.${i}:5000`,
|
||||
registered_at: Date.now() / 1000 - 60,
|
||||
last_seen: Date.now() / 1000 - 10,
|
||||
}));
|
||||
await page.route("/gateway/agents", async (route) => {
|
||||
await route.fulfill({ json: agents });
|
||||
});
|
||||
|
||||
await page.route("/api/gateway", async (route) => {
|
||||
await route.fulfill({ json: { active: "", projects: [] } });
|
||||
});
|
||||
|
||||
await page.route("/api/gateway/pipeline", async (route) => {
|
||||
await route.fulfill({ json: { active: "", projects: {} } });
|
||||
});
|
||||
|
||||
// Non-gateway APIs called by App.tsx on startup — respond quickly so the
|
||||
// loading gate (`isCheckingProject`) clears and the gateway panel renders.
|
||||
await page.route("/api/project", async (route) => {
|
||||
await route.fulfill({ json: null });
|
||||
});
|
||||
await page.route("/api/projects", async (route) => {
|
||||
await route.fulfill({ json: [] });
|
||||
});
|
||||
await page.route("/oauth/status", async (route) => {
|
||||
await route.fulfill({ json: { authenticated: false } });
|
||||
});
|
||||
await page.route("/api/home", async (route) => {
|
||||
await route.fulfill({ json: "/home/test" });
|
||||
});
|
||||
|
||||
await page.goto("/");
|
||||
|
||||
// Wait until the gateway panel is visible.
|
||||
await page.waitForSelector('[data-testid="add-agent-button"]');
|
||||
|
||||
// The scrolling element should be taller than the visible viewport.
|
||||
const isOverflowing = await page.evaluate(() => {
|
||||
const el =
|
||||
document.scrollingElement ?? document.documentElement;
|
||||
return el.scrollHeight > el.clientHeight;
|
||||
});
|
||||
expect(isOverflowing).toBe(true);
|
||||
|
||||
// Scrolling must actually move the viewport.
|
||||
await page.evaluate(() => window.scrollBy(0, 300));
|
||||
const scrollY = await page.evaluate(
|
||||
() => document.scrollingElement?.scrollTop ?? window.scrollY,
|
||||
);
|
||||
expect(scrollY).toBeGreaterThan(0);
|
||||
});
|
||||
});
|
||||
+1 -1
@@ -1,6 +1,6 @@
[package]
name = "huskies"
version = "0.10.2"
version = "0.10.4"
edition = "2024"
build = "build.rs"

@@ -0,0 +1,118 @@
//! Project-local agent prompt layer.
//!
//! Reads `.huskies/AGENT.md` from the project root and appends its content to
//! the baked-in agent prompt at spawn time. This lets projects record
//! non-obvious facts (directory conventions, known traps, etc.) that every
//! agent should know without modifying the shared agent configuration.
//!
//! Behaviour contract:
//! - If the file is missing or empty the caller receives `None`; agents spawn
//! normally with no warnings or errors.
//! - If the file exists and is non-empty, the content is returned and an
//! INFO-level log line is emitted with the file path and byte count.
//! - The file is read fresh on every agent spawn — no caching.

use std::path::Path;
|
||||
|
||||
/// Attempt to load the project-local agent prompt from `.huskies/AGENT.md`.
|
||||
///
|
||||
/// Returns `Some(content)` when the file exists and is non-empty, or `None`
|
||||
/// when the file is absent or empty. Never returns an error; any I/O problem
|
||||
/// is silently treated as "no local prompt".
|
||||
pub fn read_project_local_prompt(project_root: &Path) -> Option<String> {
|
||||
let path = project_root.join(".huskies/AGENT.md");
|
||||
let content = std::fs::read_to_string(&path).ok()?;
|
||||
let trimmed = content.trim();
|
||||
if trimmed.is_empty() {
|
||||
return None;
|
||||
}
|
||||
crate::slog!(
|
||||
"[agents] project-local prompt loaded: {} ({} bytes)",
|
||||
path.display(),
|
||||
trimmed.len()
|
||||
);
|
||||
Some(trimmed.to_string())
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
|
||||
#[test]
|
||||
fn returns_none_when_file_absent() {
|
||||
let tmp = tempfile::tempdir().unwrap();
|
||||
let result = read_project_local_prompt(tmp.path());
|
||||
assert!(result.is_none(), "missing file must return None");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn returns_none_when_file_empty() {
|
||||
let tmp = tempfile::tempdir().unwrap();
|
||||
let huskies_dir = tmp.path().join(".huskies");
|
||||
std::fs::create_dir_all(&huskies_dir).unwrap();
|
||||
std::fs::write(huskies_dir.join("AGENT.md"), "").unwrap();
|
||||
let result = read_project_local_prompt(tmp.path());
|
||||
assert!(result.is_none(), "empty file must return None");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn returns_none_when_file_whitespace_only() {
|
||||
let tmp = tempfile::tempdir().unwrap();
|
||||
let huskies_dir = tmp.path().join(".huskies");
|
||||
std::fs::create_dir_all(&huskies_dir).unwrap();
|
||||
std::fs::write(huskies_dir.join("AGENT.md"), " \n\n ").unwrap();
|
||||
let result = read_project_local_prompt(tmp.path());
|
||||
assert!(result.is_none(), "whitespace-only file must return None");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn returns_content_when_file_non_empty() {
|
||||
let tmp = tempfile::tempdir().unwrap();
|
||||
let huskies_dir = tmp.path().join(".huskies");
|
||||
std::fs::create_dir_all(&huskies_dir).unwrap();
|
||||
let marker = "DISTINCTIVE_MARKER_XYZ42";
|
||||
std::fs::write(huskies_dir.join("AGENT.md"), format!("# Hints\n{marker}\n")).unwrap();
|
||||
let result = read_project_local_prompt(tmp.path());
|
||||
assert!(result.is_some(), "non-empty file must return Some");
|
||||
let content = result.unwrap();
|
||||
assert!(
|
||||
content.contains(marker),
|
||||
"returned content must include the marker: {content}"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn appended_to_prompt_integration() {
|
||||
// Simulates the start.rs usage: marker appears in the constructed
|
||||
// system prompt when the file is present, absent when it is not.
|
||||
let tmp_with = tempfile::tempdir().unwrap();
|
||||
let huskies_dir = tmp_with.path().join(".huskies");
|
||||
std::fs::create_dir_all(&huskies_dir).unwrap();
|
||||
let marker = "INTEGRATION_MARKER_601";
|
||||
std::fs::write(huskies_dir.join("AGENT.md"), marker).unwrap();
|
||||
|
||||
let base_prompt = "You are a coder agent.".to_string();
|
||||
let local = read_project_local_prompt(tmp_with.path());
|
||||
let effective = match local {
|
||||
Some(ref extra) => format!("{base_prompt}\n\n{extra}"),
|
||||
None => base_prompt.clone(),
|
||||
};
|
||||
assert!(
|
||||
effective.contains(marker),
|
||||
"marker must appear in effective prompt when file present: {effective}"
|
||||
);
|
||||
|
||||
// Without the file
|
||||
let tmp_without = tempfile::tempdir().unwrap();
|
||||
let local2 = read_project_local_prompt(tmp_without.path());
|
||||
assert!(local2.is_none(), "no marker when file absent");
|
||||
let effective2 = match local2 {
|
||||
Some(ref extra) => format!("{base_prompt}\n\n{extra}"),
|
||||
None => base_prompt.clone(),
|
||||
};
|
||||
assert!(
|
||||
!effective2.contains(marker),
|
||||
"marker must NOT appear in effective prompt when file absent: {effective2}"
|
||||
);
|
||||
}
|
||||
}
|
||||
@@ -1,6 +1,7 @@
//! Agent subsystem — types, configuration, and orchestration for coding agents.
pub mod gates;
pub mod lifecycle;
pub mod local_prompt;
pub mod merge;
mod pool;
pub(crate) mod pty;

@@ -410,6 +410,17 @@ impl AgentPool {
}
};

// Append project-local prompt content (.huskies/AGENT.md) to the
// baked-in prompt so every agent role sees project-specific guidance
// without any config changes. The file is read fresh each spawn;
// if absent or empty, the prompt is unchanged and no warning is logged.
if let Some(local) =
crate::agents::local_prompt::read_project_local_prompt(&project_root_clone)
{
prompt.push_str("\n\n");
prompt.push_str(&local);
}

// Build the effective prompt and determine resume session.
//
// When resuming a previous session, discard the full rendered prompt

@@ -59,12 +59,17 @@ fn wizard_generate_reply(ctx: &CommandContext) -> String {
}

/// Compose a status reply for the `setup` command (no args).
///
/// If no wizard state exists, automatically initializes it so the user does
/// not need to run `huskies init` manually.
fn wizard_status_reply(ctx: &CommandContext) -> String {
if WizardState::load(ctx.project_root).is_none() {
WizardState::init_if_missing(ctx.project_root);
}
match WizardState::load(ctx.project_root) {
Some(state) => format_wizard_state(&state),
None => {
"No setup wizard active. Run `huskies init` in the project root to begin.".to_string()
}
None => "Unable to initialize setup wizard. Ensure the `.huskies/` directory exists."
.to_string(),
}
}

@@ -205,13 +210,18 @@ mod tests {
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn setup_no_wizard_returns_helpful_message() {
|
||||
fn setup_no_wizard_auto_initializes() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
|
||||
let agents = Arc::new(crate::agents::AgentPool::new_test(4000));
|
||||
let rooms = Arc::new(Mutex::new(HashSet::new()));
|
||||
let ctx = make_ctx("", dir.path(), &agents, &rooms);
|
||||
let result = handle_setup(&ctx).unwrap();
|
||||
assert!(result.contains("huskies init"));
|
||||
// Bot should auto-initialize and return wizard status, not ask user to run huskies init.
|
||||
assert!(result.contains("Setup wizard"));
|
||||
assert!(!result.contains("huskies init"));
|
||||
// Wizard state file should now exist.
|
||||
assert!(WizardState::load(dir.path()).is_some());
|
||||
}
|
||||
|
||||
#[test]
|
||||
|
||||
@@ -2,6 +2,65 @@
|
||||
|
||||
use super::CommandContext;
|
||||
|
||||
/// Strip YAML front matter and return a summary of useful fields + the remaining body.
|
||||
fn strip_front_matter(text: &str) -> (String, String) {
|
||||
let trimmed = text.trim_start();
|
||||
if !trimmed.starts_with("---") {
|
||||
return (String::new(), text.to_string());
|
||||
}
|
||||
|
||||
// Find the closing ---
|
||||
if let Some(end) = trimmed[3..].find("\n---") {
|
||||
let yaml_block = &trimmed[3..3 + end].trim();
|
||||
let body = &trimmed[3 + end + 4..]; // skip past closing ---
|
||||
|
||||
// Extract useful fields from YAML (simple line-based parsing)
|
||||
let mut parts = Vec::new();
|
||||
for line in yaml_block.lines() {
|
||||
let line = line.trim();
|
||||
if line.starts_with("depends_on:") {
|
||||
let val = line.trim_start_matches("depends_on:").trim();
|
||||
if !val.is_empty() && val != "[]" {
|
||||
parts.push(format!("**Depends on:** {val}"));
|
||||
}
|
||||
} else if line.starts_with("agent:") {
|
||||
let val = line.trim_start_matches("agent:").trim().trim_matches('"');
|
||||
if !val.is_empty() {
|
||||
parts.push(format!("**Agent:** {val}"));
|
||||
}
|
||||
} else if line.starts_with("blocked:") {
|
||||
let val = line.trim_start_matches("blocked:").trim();
|
||||
if val == "true" {
|
||||
parts.push("**Blocked:** yes".to_string());
|
||||
}
|
||||
} else if line.starts_with("retry_count:") {
|
||||
let val = line.trim_start_matches("retry_count:").trim();
|
||||
if val != "0" && !val.is_empty() {
|
||||
parts.push(format!("**Retries:** {val}"));
|
||||
}
|
||||
} else if line.starts_with("qa:") {
|
||||
let val = line.trim_start_matches("qa:").trim().trim_matches('"');
|
||||
if val == "human" {
|
||||
parts.push("**QA:** human review required".to_string());
|
||||
}
|
||||
} else if line.starts_with("merge_failure:") {
|
||||
let val = line
|
||||
.trim_start_matches("merge_failure:")
|
||||
.trim()
|
||||
.trim_matches('"');
|
||||
if !val.is_empty() {
|
||||
parts.push(format!("**Merge failure:** {val}"));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
(parts.join(" · "), body.to_string())
|
||||
} else {
|
||||
// No closing ---, return as-is
|
||||
(String::new(), text.to_string())
|
||||
}
|
||||
}
|
||||
|
||||
/// Display the full markdown text of a work item identified by its numeric ID.
|
||||
///
|
||||
/// Lookup priority: CRDT → content store → filesystem (Story 512).
|
||||
@@ -34,9 +93,38 @@ pub(super) fn handle_show(ctx: &CommandContext) -> Option<String> {
|
||||
|
||||
// `content` comes from the CRDT / content store. If unavailable, report
|
||||
// it rather than silently reading a stale on-disk copy.
|
||||
Some(content.unwrap_or_else(|| {
|
||||
let text = content.unwrap_or_else(|| {
|
||||
format!("Story {story_id} found in pipeline but its content is unavailable.")
|
||||
}))
|
||||
});
|
||||
|
||||
// Strip front matter block and extract useful metadata to show inline.
|
||||
let (front_matter_summary, body) = strip_front_matter(&text);
|
||||
|
||||
// Convert markdown headings to bold text for consistent rendering across
|
||||
// Matrix clients. Element X doesn't style <h2> tags distinctly, but bold
|
||||
// text renders consistently everywhere.
|
||||
let formatted = body
|
||||
.lines()
|
||||
.map(|line| {
|
||||
let trimmed = line.trim_start();
|
||||
if let Some(rest) = trimmed.strip_prefix("### ") {
|
||||
format!("\n**{}**", rest)
|
||||
} else if let Some(rest) = trimmed.strip_prefix("## ") {
|
||||
format!("\n**{}**", rest)
|
||||
} else if let Some(rest) = trimmed.strip_prefix("# ") {
|
||||
format!("\n**{}**", rest)
|
||||
} else {
|
||||
line.to_string()
|
||||
}
|
||||
})
|
||||
.collect::<Vec<_>>()
|
||||
.join("\n");
|
||||
|
||||
if front_matter_summary.is_empty() {
|
||||
Some(formatted.trim().to_string())
|
||||
} else {
|
||||
Some(format!("{front_matter_summary}\n{}", formatted.trim()))
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
|
||||
+8 -1138: File diff suppressed because it is too large
@@ -1,10 +1,10 @@
|
||||
//! Matrix bot context — shared state for the Matrix bot (rooms, history, permissions).
|
||||
use crate::agents::AgentPool;
|
||||
use crate::chat::ChatTransport;
|
||||
use crate::chat::timer::TimerStore;
|
||||
use crate::http::context::{PermissionDecision, PermissionForward};
|
||||
use crate::service::timer::TimerStore;
|
||||
use matrix_sdk::ruma::{OwnedEventId, OwnedRoomId, OwnedUserId};
|
||||
use std::collections::{HashMap, HashSet};
|
||||
use std::collections::{BTreeMap, HashMap, HashSet};
|
||||
use std::path::PathBuf;
|
||||
use std::sync::Arc;
|
||||
use tokio::sync::Mutex as TokioMutex;
|
||||
@@ -65,6 +65,70 @@ pub struct BotContext {
|
||||
/// In gateway mode: valid project names accepted by the `switch` command.
|
||||
/// Empty in standalone mode.
|
||||
pub gateway_projects: Vec<String>,
|
||||
/// In gateway mode: mapping of project name → base URL (e.g. `"http://localhost:3001"`).
|
||||
/// Used to proxy bot commands to the active project's `/api/bot/command` endpoint.
|
||||
/// Empty in standalone mode.
|
||||
pub gateway_project_urls: BTreeMap<String, String>,
|
||||
}
|
||||
|
||||
impl BotContext {
|
||||
/// Resolve the effective project root for command dispatch.
|
||||
///
|
||||
/// In gateway mode the bot's `project_root` is the gateway config directory.
|
||||
/// Each project lives in a subdirectory named after the project, so the
|
||||
/// effective root for commands is `project_root / active_project_name`.
|
||||
/// In standalone (single-project) mode this returns `project_root` unchanged.
|
||||
pub async fn effective_project_root(&self) -> PathBuf {
|
||||
if let Some(ref ap) = self.gateway_active_project {
|
||||
let name = ap.read().await.clone();
|
||||
self.project_root.join(&name)
|
||||
} else {
|
||||
self.project_root.clone()
|
||||
}
|
||||
}
|
||||
|
||||
/// Returns `true` if the bot is running in gateway mode.
|
||||
pub fn is_gateway(&self) -> bool {
|
||||
self.gateway_active_project.is_some()
|
||||
}
|
||||
|
||||
/// Return the base URL for the currently active project, if in gateway mode.
|
||||
pub async fn active_project_url(&self) -> Option<String> {
|
||||
let ap = self.gateway_active_project.as_ref()?;
|
||||
let name = ap.read().await.clone();
|
||||
self.gateway_project_urls.get(&name).cloned()
|
||||
}
|
||||
|
||||
/// Proxy a bot command to the active project's `/api/bot/command` endpoint.
|
||||
///
|
||||
/// Returns the Markdown response from the project server, or an error
|
||||
/// message if the request failed.
|
||||
pub async fn proxy_bot_command(&self, command: &str, args: &str) -> Option<String> {
|
||||
let base_url = self.active_project_url().await?;
|
||||
let url = format!("{base_url}/api/bot/command");
|
||||
let client = reqwest::Client::new();
|
||||
let body = serde_json::json!({
|
||||
"command": command,
|
||||
"args": args,
|
||||
});
|
||||
match client.post(&url).json(&body).send().await {
|
||||
Ok(resp) if resp.status().is_success() => {
|
||||
match resp.json::<serde_json::Value>().await {
|
||||
Ok(json) => json
|
||||
.get("response")
|
||||
.and_then(|v| v.as_str())
|
||||
.map(String::from),
|
||||
Err(e) => Some(format!("Failed to parse response from project server: {e}")),
|
||||
}
|
||||
}
|
||||
Ok(resp) => Some(format!(
|
||||
"Project server returned HTTP {}: {}",
|
||||
resp.status(),
|
||||
resp.text().await.unwrap_or_default()
|
||||
)),
|
||||
Err(e) => Some(format!("Failed to reach project server at {url}: {e}")),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
@@ -88,6 +152,135 @@ mod tests {
|
||||
assert_clone::<BotContext>();
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn effective_project_root_standalone_returns_project_root() {
|
||||
// In standalone mode (gateway_active_project is None), the effective root
|
||||
// must equal the project_root exactly.
|
||||
let (_perm_tx, perm_rx) = mpsc::unbounded_channel();
|
||||
let ctx = BotContext {
|
||||
bot_user_id: make_user_id("@bot:example.com"),
|
||||
target_room_ids: vec![],
|
||||
project_root: PathBuf::from("/projects/myapp"),
|
||||
allowed_users: vec![],
|
||||
history: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
|
||||
history_size: 20,
|
||||
bot_sent_event_ids: Arc::new(TokioMutex::new(std::collections::HashSet::new())),
|
||||
perm_rx: Arc::new(TokioMutex::new(perm_rx)),
|
||||
pending_perm_replies: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
|
||||
permission_timeout_secs: 120,
|
||||
bot_name: "Assistant".to_string(),
|
||||
ambient_rooms: Arc::new(std::sync::Mutex::new(std::collections::HashSet::new())),
|
||||
agents: Arc::new(crate::agents::AgentPool::new_test(3000)),
|
||||
htop_sessions: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
|
||||
transport: Arc::new(crate::chat::transport::whatsapp::WhatsAppTransport::new(
|
||||
"test-phone".to_string(),
|
||||
"test-token".to_string(),
|
||||
"pipeline_notification".to_string(),
|
||||
)),
|
||||
timer_store: Arc::new(crate::service::timer::TimerStore::load(
|
||||
std::path::PathBuf::from("/tmp/timers.json"),
|
||||
)),
|
||||
gateway_active_project: None,
|
||||
gateway_projects: vec![],
|
||||
gateway_project_urls: BTreeMap::new(),
|
||||
};
|
||||
assert_eq!(
|
||||
ctx.effective_project_root().await,
|
||||
PathBuf::from("/projects/myapp")
|
||||
);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn effective_project_root_gateway_uses_active_project_subdir() {
|
||||
// In gateway mode, the effective root must be config_dir / active_project_name.
|
||||
let (_perm_tx, perm_rx) = mpsc::unbounded_channel();
|
||||
let active = Arc::new(RwLock::new("huskies".to_string()));
|
||||
let ctx = BotContext {
|
||||
bot_user_id: make_user_id("@bot:example.com"),
|
||||
target_room_ids: vec![],
|
||||
project_root: PathBuf::from("/gateway"),
|
||||
allowed_users: vec![],
|
||||
history: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
|
||||
history_size: 20,
|
||||
bot_sent_event_ids: Arc::new(TokioMutex::new(std::collections::HashSet::new())),
|
||||
perm_rx: Arc::new(TokioMutex::new(perm_rx)),
|
||||
pending_perm_replies: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
|
||||
permission_timeout_secs: 120,
|
||||
bot_name: "Assistant".to_string(),
|
||||
ambient_rooms: Arc::new(std::sync::Mutex::new(std::collections::HashSet::new())),
|
||||
agents: Arc::new(crate::agents::AgentPool::new_test(3000)),
|
||||
htop_sessions: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
|
||||
transport: Arc::new(crate::chat::transport::whatsapp::WhatsAppTransport::new(
|
||||
"test-phone".to_string(),
|
||||
"test-token".to_string(),
|
||||
"pipeline_notification".to_string(),
|
||||
)),
|
||||
timer_store: Arc::new(crate::service::timer::TimerStore::load(
|
||||
std::path::PathBuf::from("/tmp/timers.json"),
|
||||
)),
|
||||
gateway_active_project: Some(Arc::clone(&active)),
|
||||
gateway_projects: vec!["huskies".into(), "robot-studio".into()],
|
||||
gateway_project_urls: BTreeMap::from([
|
||||
("huskies".into(), "http://localhost:3001".into()),
|
||||
("robot-studio".into(), "http://localhost:3002".into()),
|
||||
]),
|
||||
};
|
||||
assert_eq!(
|
||||
ctx.effective_project_root().await,
|
||||
PathBuf::from("/gateway/huskies")
|
||||
);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn effective_project_root_gateway_reflects_project_switch() {
|
||||
// Switching the active project must change the effective root.
|
||||
let (_perm_tx, perm_rx) = mpsc::unbounded_channel();
|
||||
let active = Arc::new(RwLock::new("huskies".to_string()));
|
||||
let ctx = BotContext {
|
||||
bot_user_id: make_user_id("@bot:example.com"),
|
||||
target_room_ids: vec![],
|
||||
project_root: PathBuf::from("/gateway"),
|
||||
allowed_users: vec![],
|
||||
history: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
|
||||
history_size: 20,
|
||||
bot_sent_event_ids: Arc::new(TokioMutex::new(std::collections::HashSet::new())),
|
||||
perm_rx: Arc::new(TokioMutex::new(perm_rx)),
|
||||
pending_perm_replies: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
|
||||
permission_timeout_secs: 120,
|
||||
bot_name: "Assistant".to_string(),
|
||||
ambient_rooms: Arc::new(std::sync::Mutex::new(std::collections::HashSet::new())),
|
||||
agents: Arc::new(crate::agents::AgentPool::new_test(3000)),
|
||||
htop_sessions: Arc::new(TokioMutex::new(std::collections::HashMap::new())),
|
||||
transport: Arc::new(crate::chat::transport::whatsapp::WhatsAppTransport::new(
|
||||
"test-phone".to_string(),
|
||||
"test-token".to_string(),
|
||||
"pipeline_notification".to_string(),
|
||||
)),
|
||||
timer_store: Arc::new(crate::service::timer::TimerStore::load(
|
||||
std::path::PathBuf::from("/tmp/timers.json"),
|
||||
)),
|
||||
gateway_active_project: Some(Arc::clone(&active)),
|
||||
gateway_projects: vec!["huskies".into(), "robot-studio".into()],
|
||||
gateway_project_urls: BTreeMap::from([
|
||||
("huskies".into(), "http://localhost:3001".into()),
|
||||
("robot-studio".into(), "http://localhost:3002".into()),
|
||||
]),
|
||||
};
|
||||
|
||||
assert_eq!(
|
||||
ctx.effective_project_root().await,
|
||||
PathBuf::from("/gateway/huskies")
|
||||
);
|
||||
|
||||
// Simulate switch_project changing the active project.
|
||||
*active.write().await = "robot-studio".to_string();
|
||||
|
||||
assert_eq!(
|
||||
ctx.effective_project_root().await,
|
||||
PathBuf::from("/gateway/robot-studio")
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn bot_context_has_no_require_verified_devices_field() {
|
||||
// Verification is always on — BotContext no longer has a toggle field.
|
||||
@@ -113,11 +306,12 @@ mod tests {
|
||||
"test-token".to_string(),
|
||||
"pipeline_notification".to_string(),
|
||||
)),
|
||||
timer_store: Arc::new(crate::chat::timer::TimerStore::load(
|
||||
timer_store: Arc::new(crate::service::timer::TimerStore::load(
|
||||
std::path::PathBuf::from("/tmp/timers.json"),
|
||||
)),
|
||||
gateway_active_project: None,
|
||||
gateway_projects: vec![],
|
||||
gateway_project_urls: BTreeMap::new(),
|
||||
};
|
||||
// Clone must work (required by Matrix SDK event handler injection).
|
||||
let _cloned = ctx.clone();
|
||||
|
||||
@@ -96,6 +96,49 @@ mod tests {
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn markdown_to_html_heading_renders_as_h_tag() {
|
||||
let html = markdown_to_html("## Section\nContent here.");
|
||||
assert!(
|
||||
html.contains("<h2>Section</h2>"),
|
||||
"expected <h2> heading tag: {html}"
|
||||
);
|
||||
assert!(
|
||||
html.contains("<p>Content here.</p>"),
|
||||
"expected paragraph after heading: {html}"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn markdown_to_html_heading_with_preceding_prose_renders_correctly() {
|
||||
let html = markdown_to_html("Intro text.\n## Section\nBody.");
|
||||
assert!(
|
||||
html.contains("<h2>Section</h2>"),
|
||||
"expected <h2> heading tag: {html}"
|
||||
);
|
||||
assert!(
|
||||
html.contains("<p>Intro text.</p>"),
|
||||
"expected intro paragraph: {html}"
|
||||
);
|
||||
assert!(
|
||||
html.contains("<p>Body.</p>"),
|
||||
"expected body paragraph: {html}"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn markdown_to_html_multiple_headings_each_render_as_h_tags() {
|
||||
let html = markdown_to_html("## Section 1\nContent one.\n\n## Section 2\nContent two.");
|
||||
assert!(
|
||||
html.contains("<h2>Section 1</h2>"),
|
||||
"expected first <h2>: {html}"
|
||||
);
|
||||
assert!(
|
||||
html.contains("<h2>Section 2</h2>"),
|
||||
"expected second <h2>: {html}"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn startup_announcement_uses_bot_name() {
|
||||
assert_eq!(format_startup_announcement("Timmy"), "Timmy is online.");
|
||||
|
||||
@@ -174,13 +174,92 @@ pub(super) async fn on_room_message(
|
||||
let user_message = body;
|
||||
slog!("[matrix-bot] Message from {sender}: {user_message}");
|
||||
|
||||
// In gateway mode, resolve commands against the active project's root directory.
|
||||
// The gateway's own project_root is the gateway config dir; each project lives in
|
||||
// a subdirectory named after the project. Standalone mode is unaffected.
|
||||
let effective_root = ctx.effective_project_root().await;
|
||||
|
||||
// ── Gateway command proxy ───────────────────────────────────────────
|
||||
// In gateway mode the bot has no local CRDT or project filesystem, so most
|
||||
// commands must be forwarded to the active project's `/api/bot/command`
|
||||
// endpoint. Only a small set of gateway-local commands are handled here.
|
||||
if ctx.is_gateway() {
|
||||
// Commands that are meaningful on the gateway itself (no project state needed).
|
||||
const GATEWAY_LOCAL_COMMANDS: &[&str] =
|
||||
&["help", "ambient", "reset", "switch", "all_status"];
|
||||
|
||||
let stripped = crate::chat::util::strip_bot_mention(
|
||||
&user_message,
|
||||
&ctx.bot_name,
|
||||
ctx.bot_user_id.as_str(),
|
||||
)
|
||||
.trim()
|
||||
.trim_start_matches(|c: char| !c.is_alphanumeric())
|
||||
.to_string();
|
||||
|
||||
let (cmd, args) = match stripped.split_once(char::is_whitespace) {
|
||||
Some((c, a)) => (c.to_ascii_lowercase(), a.trim().to_string()),
|
||||
None => (stripped.to_ascii_lowercase(), String::new()),
|
||||
};
|
||||
|
||||
// Only proxy if the first word is a known bot command (sync or async).
|
||||
let is_known_command = !cmd.is_empty()
|
||||
&& !GATEWAY_LOCAL_COMMANDS.contains(&cmd.as_str())
|
||||
&& (crate::chat::commands::commands()
|
||||
.iter()
|
||||
.any(|c| c.name == cmd)
|
||||
|| [
|
||||
"assign", "start", "delete", "rebuild", "rmtree", "htop", "timer",
|
||||
]
|
||||
.contains(&cmd.as_str()));
|
||||
|
||||
if is_known_command {
|
||||
// Proxy to the active project server.
|
||||
let response = match ctx.proxy_bot_command(&cmd, &args).await {
|
||||
Some(r) => r,
|
||||
None => "No active project selected or project URL not configured.".to_string(),
|
||||
};
|
||||
let html = markdown_to_html(&response);
|
||||
if let Ok(msg_id) = ctx
|
||||
.transport
|
||||
.send_message(&room_id_str, &response, &html)
|
||||
.await
|
||||
&& let Ok(event_id) = msg_id.parse()
|
||||
{
|
||||
ctx.bot_sent_event_ids.lock().await.insert(event_id);
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
// `all_status` — aggregate pipeline status across all projects (gateway-only).
|
||||
if cmd == "all_status" {
|
||||
let project_urls = ctx.gateway_project_urls.clone();
|
||||
let client = reqwest::Client::new();
|
||||
let statuses =
|
||||
crate::gateway::fetch_all_project_pipeline_statuses(&project_urls, &client).await;
|
||||
let response = crate::gateway::format_aggregate_status_compact(&statuses);
|
||||
let html = markdown_to_html(&response);
|
||||
if let Ok(msg_id) = ctx
|
||||
.transport
|
||||
.send_message(&room_id_str, &response, &html)
|
||||
.await
|
||||
&& let Ok(event_id) = msg_id.parse()
|
||||
{
|
||||
ctx.bot_sent_event_ids.lock().await.insert(event_id);
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
// Gateway-local commands and freeform text fall through to normal handling below.
|
||||
}
|
||||
|
||||
// Check for bot-level commands (help, status, ambient, …) before invoking
|
||||
// the LLM. All commands are registered in commands.rs — no special-casing
|
||||
// needed here.
|
||||
let dispatch = super::super::commands::CommandDispatch {
|
||||
bot_name: &ctx.bot_name,
|
||||
bot_user_id: ctx.bot_user_id.as_str(),
|
||||
project_root: &ctx.project_root,
|
||||
project_root: &effective_root,
|
||||
agents: &ctx.agents,
|
||||
ambient_rooms: &ctx.ambient_rooms,
|
||||
room_id: &room_id_str,
|
||||
@@ -219,7 +298,7 @@ pub(super) async fn on_room_message(
            &ctx.bot_name,
            &story_number,
            &model,
            &ctx.project_root,
            &effective_root,
            &ctx.agents,
        )
        .await
@@ -287,7 +366,7 @@ pub(super) async fn on_room_message(
        super::super::delete::handle_delete(
            &ctx.bot_name,
            &story_number,
            &ctx.project_root,
            &effective_root,
            &ctx.agents,
        )
        .await
@@ -321,7 +400,7 @@ pub(super) async fn on_room_message(
        super::super::rmtree::handle_rmtree(
            &ctx.bot_name,
            &story_number,
            &ctx.project_root,
            &effective_root,
            &ctx.agents,
        )
        .await
@@ -361,7 +440,7 @@ pub(super) async fn on_room_message(
            &ctx.bot_name,
            &story_number,
            agent_hint.as_deref(),
            &ctx.project_root,
            &effective_root,
            &ctx.agents,
        )
        .await
@@ -493,13 +572,13 @@ pub(super) async fn on_room_message(

    // Check for the timer command, which requires async file I/O and cannot
    // be handled by the sync command registry.
    if let Some(timer_cmd) = crate::chat::timer::extract_timer_command(
    if let Some(timer_cmd) = crate::service::timer::extract_timer_command(
        &user_message,
        &ctx.bot_name,
        ctx.bot_user_id.as_str(),
    ) {
        slog!("[matrix-bot] Handling timer command from {sender}: {timer_cmd:?}");
        let response = crate::chat::timer::handle_timer_command(
        let response = crate::service::timer::handle_timer_command(
            timer_cmd,
            &ctx.timer_store,
            &ctx.project_root,
@@ -587,7 +666,18 @@ pub(super) async fn handle_message(
    let sent_any_chunk = Arc::new(AtomicBool::new(false));
    let sent_any_chunk_for_callback = Arc::clone(&sent_any_chunk);

    let project_root_str = ctx.project_root.to_string_lossy().to_string();
    // In gateway mode, run Claude Code in the gateway config directory so it
    // picks up the `.mcp.json` that points to the gateway's MCP proxy endpoint.
    // The gateway proxies tool calls to the active project automatically.
    // In standalone mode, use the project root directly.
    let project_root_str = if ctx.is_gateway() {
        ctx.project_root.to_string_lossy().to_string()
    } else {
        ctx.effective_project_root()
            .await
            .to_string_lossy()
            .to_string()
    };
    let chat_fut = provider.chat_stream(
        &prompt,
        &project_root_str,

@@ -30,6 +30,7 @@ pub async fn run_bot(
|
||||
shutdown_rx: watch::Receiver<Option<crate::rebuild::ShutdownReason>>,
|
||||
gateway_active_project: Option<Arc<RwLock<String>>>,
|
||||
gateway_projects: Vec<String>,
|
||||
gateway_project_urls: std::collections::BTreeMap<String, String>,
|
||||
) -> Result<(), String> {
|
||||
let store_path = project_root.join(".huskies").join("matrix_store");
|
||||
let client = Client::builder()
|
||||
@@ -167,6 +168,11 @@ pub async fn run_bot(
|
||||
let notif_room_ids = target_room_ids.clone();
|
||||
let notif_project_root = project_root.clone();
|
||||
let announce_room_ids = target_room_ids.clone();
|
||||
// Clone values needed by the gateway notification poller (only used in gateway mode).
|
||||
let poller_room_ids: Vec<String> = target_room_ids.iter().map(|r| r.to_string()).collect();
|
||||
let poller_project_urls = gateway_project_urls.clone();
|
||||
let poller_poll_interval = config.aggregated_notifications_poll_interval_secs;
|
||||
let poller_enabled = config.aggregated_notifications_enabled;
|
||||
|
||||
let persisted = load_history(&project_root);
|
||||
slog!(
|
||||
@@ -222,11 +228,14 @@ pub async fn run_bot(
|
||||
.unwrap_or_else(|| "Assistant".to_string());
|
||||
let announce_bot_name = bot_name.clone();
|
||||
|
||||
let timer_store = Arc::new(crate::chat::timer::TimerStore::load(
|
||||
let timer_store = Arc::new(crate::service::timer::TimerStore::load(
|
||||
project_root.join(".huskies").join("timers.json"),
|
||||
));
|
||||
// Auto-schedule timers when an agent hits a hard rate limit.
|
||||
crate::chat::timer::spawn_rate_limit_auto_scheduler(Arc::clone(&timer_store), watcher_rx_auto);
|
||||
crate::service::timer::spawn_rate_limit_auto_scheduler(
|
||||
Arc::clone(&timer_store),
|
||||
watcher_rx_auto,
|
||||
);
|
||||
|
||||
let ctx = BotContext {
|
||||
bot_user_id,
|
||||
@@ -247,6 +256,7 @@ pub async fn run_bot(
|
||||
timer_store,
|
||||
gateway_active_project,
|
||||
gateway_projects,
|
||||
gateway_project_urls,
|
||||
};
|
||||
|
||||
slog!(
|
||||
@@ -262,13 +272,27 @@ pub async fn run_bot(
|
||||
// Spawn the stage-transition notification listener before entering the
|
||||
// sync loop so it starts receiving watcher events immediately.
|
||||
let notif_room_id_strings: Vec<String> = notif_room_ids.iter().map(|r| r.to_string()).collect();
|
||||
super::super::notifications::spawn_notification_listener(
|
||||
crate::service::notifications::spawn_notification_listener(
|
||||
Arc::clone(&transport),
|
||||
move || notif_room_id_strings.clone(),
|
||||
watcher_rx,
|
||||
notif_project_root,
|
||||
);
|
||||
|
||||
// In gateway mode, spawn the cross-project notification poller.
|
||||
// It polls every registered project's `/api/events` endpoint and forwards
|
||||
// new events to the configured gateway rooms with a `[project-name]` prefix.
|
||||
// The poller is controlled by the gateway-level `aggregated_notifications_enabled`
|
||||
// flag in bot.toml — set it to `false` to disable without touching per-project configs.
|
||||
if !poller_project_urls.is_empty() && poller_enabled {
|
||||
crate::gateway::spawn_gateway_notification_poller(
|
||||
Arc::clone(&transport),
|
||||
poller_room_ids,
|
||||
poller_project_urls,
|
||||
poller_poll_interval,
|
||||
);
|
||||
}
|
||||
|
||||
// Spawn a shutdown watcher that sends a best-effort goodbye message to all
|
||||
// configured rooms when the server is about to stop (SIGINT/SIGTERM or rebuild).
|
||||
{
|
||||
|
||||
@@ -10,6 +10,14 @@ fn default_permission_timeout_secs() -> u64 {
    120
}

fn default_aggregated_notifications_poll_interval_secs() -> u64 {
    5
}

fn default_aggregated_notifications_enabled() -> bool {
    true
}

/// Configuration for the Matrix bot, read from `.huskies/bot.toml`.
#[derive(Deserialize, Clone, Debug)]
pub struct BotConfig {
@@ -146,6 +154,26 @@ pub struct BotConfig {
    /// When empty or absent, all users in configured channels are allowed.
    #[serde(default)]
    pub discord_allowed_users: Vec<String>,

    /// How often (in seconds) the gateway polls each project server's
    /// `/api/events` endpoint to aggregate cross-project notifications.
    ///
    /// Only used when the gateway's bot is enabled. Defaults to 5 seconds.
    #[serde(default = "default_aggregated_notifications_poll_interval_secs")]
    pub aggregated_notifications_poll_interval_secs: u64,

    /// Whether the gateway-level aggregated cross-project notification stream
    /// is enabled. When `false`, the gateway will not poll per-project
    /// servers for events even if the bot is otherwise enabled.
    ///
    /// Set this in the **gateway's** `bot.toml` (not in per-project configs).
    /// Adding a new project to `projects.toml` never requires touching
    /// per-project bot configs — the aggregated stream picks it up
    /// automatically once this flag is `true` (the default).
    ///
    /// Defaults to `true`.
    #[serde(default = "default_aggregated_notifications_enabled")]
    pub aggregated_notifications_enabled: bool,
}
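For illustration only (not part of this diff), a minimal sketch of how a gateway might consume the two fields above when deciding whether to run the cross-project poller; the helper name and the surrounding wiring are assumptions.

// Illustrative sketch, not part of this commit: how the two gateway-level
// settings above might be consumed. `config` is assumed to be a loaded
// `BotConfig`; the poller itself lives in `crate::gateway`.
use std::time::Duration;

fn poller_settings(config: &BotConfig) -> Option<Duration> {
    // `aggregated_notifications_enabled` defaults to true, so the poller runs
    // unless the gateway's bot.toml explicitly sets it to false.
    if !config.aggregated_notifications_enabled {
        return None;
    }
    // The poll interval defaults to 5 seconds when the key is absent.
    Some(Duration::from_secs(
        config.aggregated_notifications_poll_interval_secs,
    ))
}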
|
||||
fn default_transport() -> String {
|
||||
@@ -658,6 +686,47 @@ require_verified_devices = true
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn aggregated_notifications_enabled_defaults_to_true() {
|
||||
let tmp = tempfile::tempdir().unwrap();
|
||||
let sk = tmp.path().join(".huskies");
|
||||
fs::create_dir_all(&sk).unwrap();
|
||||
fs::write(
|
||||
sk.join("bot.toml"),
|
||||
r#"
|
||||
homeserver = "https://matrix.example.com"
|
||||
username = "@bot:example.com"
|
||||
password = "secret"
|
||||
room_ids = ["!abc:example.com"]
|
||||
enabled = true
|
||||
"#,
|
||||
)
|
||||
.unwrap();
|
||||
let config = BotConfig::load(tmp.path()).unwrap();
|
||||
assert!(config.aggregated_notifications_enabled);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn aggregated_notifications_enabled_can_be_set_to_false() {
|
||||
let tmp = tempfile::tempdir().unwrap();
|
||||
let sk = tmp.path().join(".huskies");
|
||||
fs::create_dir_all(&sk).unwrap();
|
||||
fs::write(
|
||||
sk.join("bot.toml"),
|
||||
r#"
|
||||
homeserver = "https://matrix.example.com"
|
||||
username = "@bot:example.com"
|
||||
password = "secret"
|
||||
room_ids = ["!abc:example.com"]
|
||||
enabled = true
|
||||
aggregated_notifications_enabled = false
|
||||
"#,
|
||||
)
|
||||
.unwrap();
|
||||
let config = BotConfig::load(tmp.path()).unwrap();
|
||||
assert!(!config.aggregated_notifications_enabled);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn load_reads_ambient_rooms() {
|
||||
let tmp = tempfile::tempdir().unwrap();
|
||||
|
||||
@@ -21,7 +21,6 @@ pub mod commands;
|
||||
pub(crate) mod config;
|
||||
pub mod delete;
|
||||
pub mod htop;
|
||||
pub mod notifications;
|
||||
pub mod rebuild;
|
||||
pub mod reset;
|
||||
pub mod rmtree;
|
||||
@@ -62,6 +61,7 @@ use tokio::sync::{Mutex as TokioMutex, RwLock, broadcast, mpsc, watch};
|
||||
/// Returns an [`tokio::task::AbortHandle`] if the bot was actually spawned (Matrix/Discord
|
||||
/// transports), or `None` if the config is absent, disabled, or uses a webhook-based
|
||||
/// transport (Slack/WhatsApp) that does not require a persistent background task.
|
||||
#[allow(clippy::too_many_arguments)]
|
||||
pub fn spawn_bot(
|
||||
project_root: &Path,
|
||||
watcher_tx: broadcast::Sender<WatcherEvent>,
|
||||
@@ -70,6 +70,7 @@ pub fn spawn_bot(
|
||||
shutdown_rx: watch::Receiver<Option<ShutdownReason>>,
|
||||
gateway_active_project: Option<Arc<RwLock<String>>>,
|
||||
gateway_projects: Vec<String>,
|
||||
gateway_project_urls: std::collections::BTreeMap<String, String>,
|
||||
) -> Option<tokio::task::AbortHandle> {
|
||||
let config = match BotConfig::load(project_root) {
|
||||
Some(c) => c,
|
||||
@@ -108,6 +109,7 @@ pub fn spawn_bot(
|
||||
shutdown_rx,
|
||||
gateway_active_project,
|
||||
gateway_projects,
|
||||
gateway_project_urls,
|
||||
)
|
||||
.await
|
||||
{
|
||||
|
||||
+50 -6
@@ -223,12 +223,24 @@ pub fn normalize_line_breaks(text: &str) -> String {

    let prev_line = lines[i - 1];

    // Insert a blank separator when both the current and previous lines
    // are non-empty prose (not inside a code fence, not structured Markdown).
    // ATX headings (lines starting with one or more `#` characters) always
    // need a blank line before and after them so that Matrix clients render
    // the heading with visual separation. Without a blank line, a single
    // newline between a heading and adjacent text is swallowed by many
    // Matrix clients (including Element X), joining the heading text and
    // the following content on the same line without any heading formatting.
    let is_cur_heading = line.trim_start().starts_with('#');
    let is_prev_heading = prev_line.trim_start().starts_with('#');

    // Insert a blank separator when:
    // 1. Both lines are non-empty prose (standard prose-to-prose rule).
    // 2. The current line is an ATX heading (adds blank line *before* it).
    // 3. The previous line was an ATX heading (adds blank line *after* it).
    let should_double = !line.is_empty()
        && !prev_line.is_empty()
        && !is_structured_line(line)
        && !is_structured_line(prev_line);
        && ((!is_structured_line(line) && !is_structured_line(prev_line))
            || is_cur_heading
            || is_prev_heading);

    if should_double {
        result.push("");
@@ -599,10 +611,42 @@ mod tests {
    }

    #[test]
    fn normalize_heading_single_newline_preserved() {
    fn normalize_heading_followed_by_prose_gets_blank_line() {
        // A blank line must be inserted after a heading so Matrix clients render
        // the heading with visual separation from the following paragraph.
        let input = "# My Heading\nSome text below.";
        let output = normalize_line_breaks(input);
        assert_eq!(output, "# My Heading\nSome text below.");
        assert_eq!(output, "# My Heading\n\nSome text below.");
    }

    #[test]
    fn normalize_prose_before_heading_gets_blank_line() {
        // A blank line must be inserted before a heading when prose precedes it.
        let input = "Some intro text.\n## Section";
        let output = normalize_line_breaks(input);
        assert_eq!(output, "Some intro text.\n\n## Section");
    }

    #[test]
    fn normalize_heading_surrounded_by_prose_gets_blank_lines_both_sides() {
        let input = "Intro.\n## Heading\nContent.";
        let output = normalize_line_breaks(input);
        assert_eq!(output, "Intro.\n\n## Heading\n\nContent.");
    }

    #[test]
    fn normalize_consecutive_headings_separated_by_blank_lines() {
        let input = "## Section 1\n## Section 2";
        let output = normalize_line_breaks(input);
        assert_eq!(output, "## Section 1\n\n## Section 2");
    }

    #[test]
    fn normalize_heading_already_separated_by_blank_line_unchanged() {
        // When there is already a blank line, no extra blank is inserted.
        let input = "# Heading\n\nContent.";
        let output = normalize_line_breaks(input);
        assert_eq!(output, "# Heading\n\nContent.");
    }

#[test]
|
||||
|
||||
+878 -2107: file diff suppressed because it is too large.
+175 -172
@@ -1,11 +1,14 @@
|
||||
//! HTTP agent endpoints — REST API for listing, starting, stopping, and inspecting agents.
|
||||
use crate::config::ProjectConfig;
|
||||
//! HTTP agent endpoints — thin adapters over `service::agents`.
|
||||
//!
|
||||
//! Each handler: extracts payload → calls `service::agents::X` → shapes
|
||||
//! response DTO → returns HTTP result. No filesystem access, no inline
|
||||
//! validation, no process invocations.
|
||||
use crate::http::context::{AppContext, OpenApiResult, bad_request, not_found};
|
||||
use crate::service::agents::{self as svc, AgentConfigEntry, WorkItemContent};
|
||||
use crate::workflow::{StoryTestResults, TestCaseResult, TestStatus};
|
||||
use crate::worktree;
|
||||
use poem::http::StatusCode;
|
||||
use poem_openapi::{Object, OpenApi, Tags, param::Path, payload::Json};
|
||||
use serde::Serialize;
|
||||
use std::path;
|
||||
use std::sync::Arc;
|
||||
|
||||
#[derive(Tags)]
|
||||
@@ -45,6 +48,20 @@ struct AgentConfigInfoResponse {
|
||||
max_budget_usd: Option<f64>,
|
||||
}
|
||||
|
||||
impl From<AgentConfigEntry> for AgentConfigInfoResponse {
|
||||
fn from(e: AgentConfigEntry) -> Self {
|
||||
Self {
|
||||
name: e.name,
|
||||
role: e.role,
|
||||
stage: e.stage,
|
||||
model: e.model,
|
||||
allowed_tools: e.allowed_tools,
|
||||
max_turns: e.max_turns,
|
||||
max_budget_usd: e.max_budget_usd,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Object)]
|
||||
struct CreateWorktreePayload {
|
||||
story_id: String,
|
||||
@@ -73,6 +90,17 @@ struct WorkItemContentResponse {
|
||||
agent: Option<String>,
|
||||
}
|
||||
|
||||
impl From<WorkItemContent> for WorkItemContentResponse {
|
||||
fn from(w: WorkItemContent) -> Self {
|
||||
Self {
|
||||
content: w.content,
|
||||
stage: w.stage,
|
||||
name: w.name,
|
||||
agent: w.agent,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// A single test case result for the OpenAPI response.
|
||||
#[derive(Object, Serialize)]
|
||||
struct TestCaseResultResponse {
|
||||
@@ -153,15 +181,23 @@ struct AllTokenUsageResponse {
|
||||
records: Vec<TokenUsageRecordResponse>,
|
||||
}
|
||||
|
||||
/// Returns true if the story file exists in `work/5_done/` or `work/6_archived/`.
|
||||
///
|
||||
/// Used to exclude agents for already-archived stories from the `list_agents`
|
||||
/// response so the agents panel is not cluttered with old completed items on
|
||||
/// frontend startup.
|
||||
pub fn story_is_archived(project_root: &path::Path, story_id: &str) -> bool {
|
||||
let work = project_root.join(".huskies").join("work");
|
||||
let filename = format!("{story_id}.md");
|
||||
work.join("5_done").join(&filename).exists() || work.join("6_archived").join(&filename).exists()
|
||||
/// Map a `service::agents::Error` to a Poem HTTP error with the correct status.
|
||||
fn map_svc_error(err: svc::Error) -> poem::Error {
|
||||
match err {
|
||||
svc::Error::AgentNotFound(_) => {
|
||||
poem::Error::from_string(err.to_string(), StatusCode::NOT_FOUND)
|
||||
}
|
||||
svc::Error::WorkItemNotFound(_) => {
|
||||
poem::Error::from_string(err.to_string(), StatusCode::NOT_FOUND)
|
||||
}
|
||||
svc::Error::Worktree(_) => {
|
||||
poem::Error::from_string(err.to_string(), StatusCode::BAD_REQUEST)
|
||||
}
|
||||
svc::Error::Config(_) => poem::Error::from_string(err.to_string(), StatusCode::BAD_REQUEST),
|
||||
svc::Error::Io(_) => {
|
||||
poem::Error::from_string(err.to_string(), StatusCode::INTERNAL_SERVER_ERROR)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub struct AgentsApi {
|
||||
@@ -183,18 +219,16 @@ impl AgentsApi {
|
||||
.get_project_root(&self.ctx.state)
|
||||
.map_err(bad_request)?;
|
||||
|
||||
let info = self
|
||||
.ctx
|
||||
.agents
|
||||
.start_agent(
|
||||
&project_root,
|
||||
&payload.0.story_id,
|
||||
payload.0.agent_name.as_deref(),
|
||||
None,
|
||||
None,
|
||||
)
|
||||
.await
|
||||
.map_err(bad_request)?;
|
||||
let info = svc::start_agent(
|
||||
&self.ctx.agents,
|
||||
&project_root,
|
||||
&payload.0.story_id,
|
||||
payload.0.agent_name.as_deref(),
|
||||
None,
|
||||
None,
|
||||
)
|
||||
.await
|
||||
.map_err(map_svc_error)?;
|
||||
|
||||
Ok(Json(AgentInfoResponse {
|
||||
story_id: info.story_id,
|
||||
@@ -214,11 +248,14 @@ impl AgentsApi {
|
||||
.get_project_root(&self.ctx.state)
|
||||
.map_err(bad_request)?;
|
||||
|
||||
self.ctx
|
||||
.agents
|
||||
.stop_agent(&project_root, &payload.0.story_id, &payload.0.agent_name)
|
||||
.await
|
||||
.map_err(bad_request)?;
|
||||
svc::stop_agent(
|
||||
&self.ctx.agents,
|
||||
&project_root,
|
||||
&payload.0.story_id,
|
||||
&payload.0.agent_name,
|
||||
)
|
||||
.await
|
||||
.map_err(map_svc_error)?;
|
||||
|
||||
Ok(Json(true))
|
||||
}
|
||||
@@ -231,17 +268,12 @@ impl AgentsApi {
|
||||
#[oai(path = "/agents", method = "get")]
|
||||
async fn list_agents(&self) -> OpenApiResult<Json<Vec<AgentInfoResponse>>> {
|
||||
let project_root = self.ctx.agents.get_project_root(&self.ctx.state).ok();
|
||||
let agents = self.ctx.agents.list_agents().map_err(bad_request)?;
|
||||
let agents =
|
||||
svc::list_agents(&self.ctx.agents, project_root.as_deref()).map_err(map_svc_error)?;
|
||||
|
||||
Ok(Json(
|
||||
agents
|
||||
.into_iter()
|
||||
.filter(|info| {
|
||||
project_root
|
||||
.as_deref()
|
||||
.map(|root| !story_is_archived(root, &info.story_id))
|
||||
.unwrap_or(true)
|
||||
})
|
||||
.map(|info| AgentInfoResponse {
|
||||
story_id: info.story_id,
|
||||
agent_name: info.agent_name,
|
||||
@@ -262,21 +294,11 @@ impl AgentsApi {
|
||||
.get_project_root(&self.ctx.state)
|
||||
.map_err(bad_request)?;
|
||||
|
||||
let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
|
||||
|
||||
let entries = svc::get_agent_config(&project_root).map_err(map_svc_error)?;
|
||||
Ok(Json(
|
||||
config
|
||||
.agent
|
||||
.iter()
|
||||
.map(|a| AgentConfigInfoResponse {
|
||||
name: a.name.clone(),
|
||||
role: a.role.clone(),
|
||||
stage: a.stage.clone(),
|
||||
model: a.model.clone(),
|
||||
allowed_tools: a.allowed_tools.clone(),
|
||||
max_turns: a.max_turns,
|
||||
max_budget_usd: a.max_budget_usd,
|
||||
})
|
||||
entries
|
||||
.into_iter()
|
||||
.map(AgentConfigInfoResponse::from)
|
||||
.collect(),
|
||||
))
|
||||
}
|
||||
@@ -290,21 +312,11 @@ impl AgentsApi {
|
||||
.get_project_root(&self.ctx.state)
|
||||
.map_err(bad_request)?;
|
||||
|
||||
let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
|
||||
|
||||
let entries = svc::reload_config(&project_root).map_err(map_svc_error)?;
|
||||
Ok(Json(
|
||||
config
|
||||
.agent
|
||||
.iter()
|
||||
.map(|a| AgentConfigInfoResponse {
|
||||
name: a.name.clone(),
|
||||
role: a.role.clone(),
|
||||
stage: a.stage.clone(),
|
||||
model: a.model.clone(),
|
||||
allowed_tools: a.allowed_tools.clone(),
|
||||
max_turns: a.max_turns,
|
||||
max_budget_usd: a.max_budget_usd,
|
||||
})
|
||||
entries
|
||||
.into_iter()
|
||||
.map(AgentConfigInfoResponse::from)
|
||||
.collect(),
|
||||
))
|
||||
}
|
||||
@@ -321,12 +333,9 @@ impl AgentsApi {
|
||||
.get_project_root(&self.ctx.state)
|
||||
.map_err(bad_request)?;
|
||||
|
||||
let info = self
|
||||
.ctx
|
||||
.agents
|
||||
.create_worktree(&project_root, &payload.0.story_id)
|
||||
let info = svc::create_worktree(&self.ctx.agents, &project_root, &payload.0.story_id)
|
||||
.await
|
||||
.map_err(bad_request)?;
|
||||
.map_err(map_svc_error)?;
|
||||
|
||||
Ok(Json(WorktreeInfoResponse {
|
||||
story_id: payload.0.story_id,
|
||||
@@ -345,7 +354,7 @@ impl AgentsApi {
|
||||
.get_project_root(&self.ctx.state)
|
||||
.map_err(bad_request)?;
|
||||
|
||||
let entries = worktree::list_worktrees(&project_root).map_err(bad_request)?;
|
||||
let entries = svc::list_worktrees(&project_root).map_err(map_svc_error)?;
|
||||
|
||||
Ok(Json(
|
||||
entries
|
||||
@@ -373,36 +382,12 @@ impl AgentsApi {
|
||||
.get_project_root(&self.ctx.state)
|
||||
.map_err(bad_request)?;
|
||||
|
||||
let stages = [
|
||||
("1_backlog", "backlog"),
|
||||
("2_current", "current"),
|
||||
("3_qa", "qa"),
|
||||
("4_merge", "merge"),
|
||||
("5_done", "done"),
|
||||
("6_archived", "archived"),
|
||||
];
|
||||
let item = svc::get_work_item_content(&project_root, &story_id.0).map_err(|e| match e {
|
||||
svc::Error::WorkItemNotFound(_) => not_found(e.to_string()),
|
||||
other => map_svc_error(other),
|
||||
})?;
|
||||
|
||||
let work_dir = project_root.join(".huskies").join("work");
|
||||
let filename = format!("{}.md", story_id.0);
|
||||
|
||||
for (stage_dir, stage_name) in &stages {
|
||||
let file_path = work_dir.join(stage_dir).join(&filename);
|
||||
if file_path.exists() {
|
||||
let content = std::fs::read_to_string(&file_path)
|
||||
.map_err(|e| bad_request(format!("Failed to read work item: {e}")))?;
|
||||
let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
|
||||
let name = metadata.as_ref().and_then(|m| m.name.clone());
|
||||
let agent = metadata.and_then(|m| m.agent);
|
||||
return Ok(Json(WorkItemContentResponse {
|
||||
content,
|
||||
stage: stage_name.to_string(),
|
||||
name,
|
||||
agent,
|
||||
}));
|
||||
}
|
||||
}
|
||||
|
||||
Err(not_found(format!("Work item not found: {}", story_id.0)))
|
||||
Ok(Json(WorkItemContentResponse::from(item)))
|
||||
}
|
||||
|
||||
/// Get test results for a work item by its story_id.
|
||||
@@ -414,30 +399,37 @@ impl AgentsApi {
|
||||
&self,
|
||||
story_id: Path<String>,
|
||||
) -> OpenApiResult<Json<Option<TestResultsResponse>>> {
|
||||
// Try in-memory workflow state first.
|
||||
let workflow = self
|
||||
.ctx
|
||||
.workflow
|
||||
.lock()
|
||||
.map_err(|e| bad_request(format!("Lock error: {e}")))?;
|
||||
|
||||
if let Some(results) = workflow.results.get(&story_id.0) {
|
||||
return Ok(Json(Some(TestResultsResponse::from_story_results(results))));
|
||||
// Fast path: return from in-memory state without requiring project_root.
|
||||
let in_memory = {
|
||||
let workflow = self
|
||||
.ctx
|
||||
.workflow
|
||||
.lock()
|
||||
.map_err(|e| bad_request(format!("Lock error: {e}")))?;
|
||||
workflow.results.get(&story_id.0).cloned()
|
||||
};
|
||||
if let Some(results) = in_memory {
|
||||
return Ok(Json(Some(TestResultsResponse::from_story_results(
|
||||
&results,
|
||||
))));
|
||||
}
|
||||
drop(workflow);
|
||||
|
||||
// Fall back to file-persisted results.
|
||||
// Slow path: fall back to results persisted in the story file.
|
||||
let project_root = self
|
||||
.ctx
|
||||
.agents
|
||||
.get_project_root(&self.ctx.state)
|
||||
.map_err(bad_request)?;
|
||||
|
||||
let file_results =
|
||||
crate::http::workflow::read_test_results_from_story_file(&project_root, &story_id.0);
|
||||
let workflow = self
|
||||
.ctx
|
||||
.workflow
|
||||
.lock()
|
||||
.map_err(|e| bad_request(format!("Lock error: {e}")))?;
|
||||
|
||||
let results = svc::get_test_results(&project_root, &story_id.0, &workflow);
|
||||
Ok(Json(
|
||||
file_results.map(|r| TestResultsResponse::from_story_results(&r)),
|
||||
results.map(|r| TestResultsResponse::from_story_results(&r)),
|
||||
))
|
||||
}
|
||||
|
||||
@@ -458,26 +450,8 @@ impl AgentsApi {
|
||||
.get_project_root(&self.ctx.state)
|
||||
.map_err(bad_request)?;
|
||||
|
||||
let log_path = crate::agent_log::find_latest_log(&project_root, &story_id.0, &agent_name.0);
|
||||
|
||||
let Some(path) = log_path else {
|
||||
return Ok(Json(AgentOutputResponse {
|
||||
output: String::new(),
|
||||
}));
|
||||
};
|
||||
|
||||
let entries = crate::agent_log::read_log(&path).map_err(bad_request)?;
|
||||
|
||||
let output: String = entries
|
||||
.iter()
|
||||
.filter(|e| e.event.get("type").and_then(|t| t.as_str()) == Some("output"))
|
||||
.filter_map(|e| {
|
||||
e.event
|
||||
.get("text")
|
||||
.and_then(|t| t.as_str())
|
||||
.map(str::to_owned)
|
||||
})
|
||||
.collect();
|
||||
let output = svc::get_agent_output(&project_root, &story_id.0, &agent_name.0)
|
||||
.map_err(map_svc_error)?;
|
||||
|
||||
Ok(Json(AgentOutputResponse { output }))
|
||||
}
|
||||
@@ -491,10 +465,9 @@ impl AgentsApi {
|
||||
.get_project_root(&self.ctx.state)
|
||||
.map_err(bad_request)?;
|
||||
|
||||
let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
|
||||
worktree::remove_worktree_by_story_id(&project_root, &story_id.0, &config)
|
||||
svc::remove_worktree(&project_root, &story_id.0)
|
||||
.await
|
||||
.map_err(bad_request)?;
|
||||
.map_err(map_svc_error)?;
|
||||
|
||||
Ok(Json(true))
|
||||
}
|
||||
@@ -514,39 +487,25 @@ impl AgentsApi {
|
||||
.get_project_root(&self.ctx.state)
|
||||
.map_err(bad_request)?;
|
||||
|
||||
let all_records = crate::agents::token_usage::read_all(&project_root)
|
||||
.map_err(|e| bad_request(format!("Failed to read token usage: {e}")))?;
|
||||
let summary =
|
||||
svc::get_work_item_token_cost(&project_root, &story_id.0).map_err(map_svc_error)?;
|
||||
|
||||
let mut agent_map: std::collections::HashMap<String, AgentCostEntry> =
|
||||
std::collections::HashMap::new();
|
||||
|
||||
let mut total_cost_usd = 0.0_f64;
|
||||
|
||||
for record in all_records.into_iter().filter(|r| r.story_id == story_id.0) {
|
||||
total_cost_usd += record.usage.total_cost_usd;
|
||||
let entry = agent_map
|
||||
.entry(record.agent_name.clone())
|
||||
.or_insert_with(|| AgentCostEntry {
|
||||
agent_name: record.agent_name.clone(),
|
||||
model: record.model.clone(),
|
||||
input_tokens: 0,
|
||||
output_tokens: 0,
|
||||
cache_creation_input_tokens: 0,
|
||||
cache_read_input_tokens: 0,
|
||||
total_cost_usd: 0.0,
|
||||
});
|
||||
entry.input_tokens += record.usage.input_tokens;
|
||||
entry.output_tokens += record.usage.output_tokens;
|
||||
entry.cache_creation_input_tokens += record.usage.cache_creation_input_tokens;
|
||||
entry.cache_read_input_tokens += record.usage.cache_read_input_tokens;
|
||||
entry.total_cost_usd += record.usage.total_cost_usd;
|
||||
}
|
||||
|
||||
let mut agents: Vec<AgentCostEntry> = agent_map.into_values().collect();
|
||||
agents.sort_by(|a, b| a.agent_name.cmp(&b.agent_name));
|
||||
let agents = summary
|
||||
.agents
|
||||
.into_iter()
|
||||
.map(|a| AgentCostEntry {
|
||||
agent_name: a.agent_name,
|
||||
model: a.model,
|
||||
input_tokens: a.input_tokens,
|
||||
output_tokens: a.output_tokens,
|
||||
cache_creation_input_tokens: a.cache_creation_input_tokens,
|
||||
cache_read_input_tokens: a.cache_read_input_tokens,
|
||||
total_cost_usd: a.total_cost_usd,
|
||||
})
|
||||
.collect();
|
||||
|
||||
Ok(Json(TokenCostResponse {
|
||||
total_cost_usd,
|
||||
total_cost_usd: summary.total_cost_usd,
|
||||
agents,
|
||||
}))
|
||||
}
|
||||
@@ -562,8 +521,7 @@ impl AgentsApi {
|
||||
.get_project_root(&self.ctx.state)
|
||||
.map_err(bad_request)?;
|
||||
|
||||
let records = crate::agents::token_usage::read_all(&project_root)
|
||||
.map_err(|e| bad_request(format!("Failed to read token usage: {e}")))?;
|
||||
let records = svc::get_all_token_usage(&project_root).map_err(map_svc_error)?;
|
||||
|
||||
let response_records: Vec<TokenUsageRecordResponse> = records
|
||||
.into_iter()
|
||||
@@ -590,6 +548,7 @@ impl AgentsApi {
|
||||
mod tests {
|
||||
use super::*;
|
||||
use crate::agents::AgentStatus;
|
||||
use std::path;
|
||||
use tempfile::TempDir;
|
||||
|
||||
fn make_work_dirs(tmp: &TempDir) -> path::PathBuf {
|
||||
@@ -604,7 +563,7 @@ mod tests {
|
||||
fn story_is_archived_false_when_file_absent() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
let root = make_work_dirs(&tmp);
|
||||
assert!(!story_is_archived(&root, "79_story_foo"));
|
||||
assert!(!svc::is_archived(&root, "79_story_foo"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
@@ -616,7 +575,7 @@ mod tests {
|
||||
"---\nname: test\n---\n",
|
||||
)
|
||||
.unwrap();
|
||||
assert!(story_is_archived(&root, "79_story_foo"));
|
||||
assert!(svc::is_archived(&root, "79_story_foo"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
@@ -628,7 +587,7 @@ mod tests {
|
||||
"---\nname: test\n---\n",
|
||||
)
|
||||
.unwrap();
|
||||
assert!(story_is_archived(&root, "79_story_foo"));
|
||||
assert!(svc::is_archived(&root, "79_story_foo"));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
@@ -953,6 +912,50 @@ allowed_tools = ["Read", "Bash"]
|
||||
assert!(result.is_err());
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn get_work_item_content_falls_back_to_crdt_when_no_file() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
let root = tmp.path().to_path_buf();
|
||||
// Seed content + CRDT with no .md file on disk.
|
||||
crate::db::write_item_with_content(
|
||||
"44_story_crdt_only",
|
||||
"1_backlog",
|
||||
"---\nname: \"CRDT Only\"\n---\n\nCRDT content.",
|
||||
);
|
||||
let ctx = AppContext::new_test(root);
|
||||
let api = AgentsApi { ctx: Arc::new(ctx) };
|
||||
let result = api
|
||||
.get_work_item_content(Path("44_story_crdt_only".to_string()))
|
||||
.await
|
||||
.unwrap()
|
||||
.0;
|
||||
assert!(result.content.contains("CRDT content."));
|
||||
assert_eq!(result.stage, "backlog");
|
||||
assert_eq!(result.name, Some("CRDT Only".to_string()));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn get_work_item_content_crdt_fallback_with_current_stage() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
let root = tmp.path().to_path_buf();
|
||||
// Seed a CRDT-only story in the coding/current stage.
|
||||
crate::db::write_item_with_content(
|
||||
"45_story_crdt_current",
|
||||
"2_current",
|
||||
"---\nname: \"Current CRDT\"\n---\n\nIn progress.",
|
||||
);
|
||||
let ctx = AppContext::new_test(root);
|
||||
let api = AgentsApi { ctx: Arc::new(ctx) };
|
||||
let result = api
|
||||
.get_work_item_content(Path("45_story_crdt_current".to_string()))
|
||||
.await
|
||||
.unwrap()
|
||||
.0;
|
||||
assert!(result.content.contains("In progress."));
|
||||
assert_eq!(result.stage, "current");
|
||||
assert_eq!(result.name, Some("Current CRDT".to_string()));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn get_work_item_content_returns_error_when_no_project_root() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
|
||||
@@ -1,50 +1,10 @@
|
||||
//! Anthropic API proxy — forwards model listing and key-validation requests to Anthropic.
|
||||
//! Anthropic API proxy — thin adapter over `service::anthropic`.
|
||||
use crate::http::context::{AppContext, OpenApiResult, bad_request};
|
||||
use crate::llm::chat;
|
||||
use crate::store::StoreOps;
|
||||
use crate::service::anthropic::{self as svc, ModelSummary};
|
||||
use poem_openapi::{Object, OpenApi, Tags, payload::Json};
|
||||
use reqwest::header::{HeaderMap, HeaderValue};
|
||||
use serde::{Deserialize, Serialize};
|
||||
use serde::Deserialize;
|
||||
use std::sync::Arc;
|
||||
|
||||
const ANTHROPIC_MODELS_URL: &str = "https://api.anthropic.com/v1/models";
|
||||
const ANTHROPIC_VERSION: &str = "2023-06-01";
|
||||
const KEY_ANTHROPIC_API_KEY: &str = "anthropic_api_key";
|
||||
|
||||
#[derive(Deserialize)]
|
||||
struct AnthropicModelsResponse {
|
||||
data: Vec<AnthropicModelInfo>,
|
||||
}
|
||||
|
||||
#[derive(Deserialize)]
|
||||
struct AnthropicModelInfo {
|
||||
id: String,
|
||||
context_window: u64,
|
||||
}
|
||||
|
||||
#[derive(Serialize, Object)]
|
||||
struct AnthropicModelSummary {
|
||||
id: String,
|
||||
context_window: u64,
|
||||
}
|
||||
|
||||
fn get_anthropic_api_key(ctx: &AppContext) -> Result<String, String> {
|
||||
match ctx.store.get(KEY_ANTHROPIC_API_KEY) {
|
||||
Some(value) => {
|
||||
if let Some(key) = value.as_str() {
|
||||
if key.is_empty() {
|
||||
Err("Anthropic API key is empty. Please set your API key.".to_string())
|
||||
} else {
|
||||
Ok(key.to_string())
|
||||
}
|
||||
} else {
|
||||
Err("Stored API key is not a string".to_string())
|
||||
}
|
||||
}
|
||||
None => Err("Anthropic API key not found. Please set your API key.".to_string()),
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Deserialize, Object)]
|
||||
struct ApiKeyPayload {
|
||||
api_key: String,
|
||||
@@ -79,8 +39,8 @@ impl AnthropicApi {
|
||||
/// Returns `true` if a non-empty key is present, otherwise `false`.
|
||||
#[oai(path = "/anthropic/key/exists", method = "get")]
|
||||
async fn get_anthropic_api_key_exists(&self) -> OpenApiResult<Json<bool>> {
|
||||
let exists =
|
||||
chat::get_anthropic_api_key_exists(self.ctx.store.as_ref()).map_err(bad_request)?;
|
||||
let exists = svc::get_api_key_exists(self.ctx.store.as_ref())
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(exists))
|
||||
}
|
||||
|
||||
@@ -92,74 +52,62 @@ impl AnthropicApi {
|
||||
&self,
|
||||
payload: Json<ApiKeyPayload>,
|
||||
) -> OpenApiResult<Json<bool>> {
|
||||
chat::set_anthropic_api_key(self.ctx.store.as_ref(), payload.0.api_key)
|
||||
.map_err(bad_request)?;
|
||||
svc::set_api_key(self.ctx.store.as_ref(), payload.0.api_key)
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(true))
|
||||
}
|
||||
|
||||
/// List available Anthropic models.
|
||||
#[oai(path = "/anthropic/models", method = "get")]
|
||||
async fn list_anthropic_models(&self) -> OpenApiResult<Json<Vec<AnthropicModelSummary>>> {
|
||||
self.list_anthropic_models_from(ANTHROPIC_MODELS_URL).await
|
||||
}
|
||||
}
|
||||
|
||||
impl AnthropicApi {
|
||||
async fn list_anthropic_models_from(
|
||||
&self,
|
||||
url: &str,
|
||||
) -> OpenApiResult<Json<Vec<AnthropicModelSummary>>> {
|
||||
let api_key = get_anthropic_api_key(self.ctx.as_ref()).map_err(bad_request)?;
|
||||
let client = reqwest::Client::new();
|
||||
let mut headers = HeaderMap::new();
|
||||
headers.insert(
|
||||
"x-api-key",
|
||||
HeaderValue::from_str(&api_key).map_err(|e| bad_request(e.to_string()))?,
|
||||
);
|
||||
headers.insert(
|
||||
"anthropic-version",
|
||||
HeaderValue::from_static(ANTHROPIC_VERSION),
|
||||
);
|
||||
|
||||
let response = client
|
||||
.get(url)
|
||||
.headers(headers)
|
||||
.send()
|
||||
async fn list_anthropic_models(&self) -> OpenApiResult<Json<Vec<ModelSummary>>> {
|
||||
let models = svc::list_models(self.ctx.store.as_ref())
|
||||
.await
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
|
||||
if !response.status().is_success() {
|
||||
let status = response.status();
|
||||
let error_text = response
|
||||
.text()
|
||||
.await
|
||||
.unwrap_or_else(|_| "Unknown error".to_string());
|
||||
return Err(bad_request(format!(
|
||||
"Anthropic API error {status}: {error_text}"
|
||||
)));
|
||||
}
|
||||
|
||||
let body = response
|
||||
.json::<AnthropicModelsResponse>()
|
||||
.await
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
let models = body
|
||||
.data
|
||||
.into_iter()
|
||||
.map(|m| AnthropicModelSummary {
|
||||
id: m.id,
|
||||
context_window: m.context_window,
|
||||
})
|
||||
.collect();
|
||||
|
||||
Ok(Json(models))
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
impl AnthropicApi {
|
||||
/// List models from an injectable URL (used in tests to avoid real network calls).
|
||||
async fn list_anthropic_models_from(
|
||||
&self,
|
||||
url: &str,
|
||||
) -> OpenApiResult<Json<Vec<ModelSummary>>> {
|
||||
let models = svc::list_models_from(self.ctx.store.as_ref(), url)
|
||||
.await
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(models))
|
||||
}
|
||||
}
|
||||
|
||||
// Private helper retained for backward compatibility with tests that call it directly.
|
||||
#[cfg(test)]
|
||||
fn get_anthropic_api_key(ctx: &AppContext) -> Result<String, String> {
|
||||
svc::get_api_key(ctx.store.as_ref()).map_err(|e| e.to_string())
|
||||
}
|
||||
|
||||
// Private types retained so existing tests that deserialise them directly continue to compile.
|
||||
#[cfg(test)]
|
||||
#[derive(serde::Deserialize)]
|
||||
struct AnthropicModelsResponse {
|
||||
data: Vec<AnthropicModelInfo>,
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
#[derive(serde::Deserialize)]
|
||||
struct AnthropicModelInfo {
|
||||
id: String,
|
||||
context_window: u64,
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use crate::http::context::AppContext;
|
||||
use crate::http::test_helpers::{make_api, test_ctx};
|
||||
use crate::store::StoreOps;
|
||||
const KEY_ANTHROPIC_API_KEY: &str = "anthropic_api_key";
|
||||
use serde_json::json;
|
||||
use tempfile::TempDir;
|
||||
|
||||
|
||||
+153 -190
@@ -3,19 +3,16 @@
|
||||
//! `POST /api/bot/command` lets the web UI invoke the same deterministic bot
|
||||
//! commands available in Matrix without going through the LLM.
|
||||
//!
|
||||
//! Synchronous commands (status, git, cost, move, show, overview, help) are
|
||||
//! dispatched directly through the matrix command registry.
|
||||
//! Asynchronous commands (assign, start, delete, rebuild) are dispatched to
|
||||
//! their dedicated async handlers. The `reset` command is handled by the frontend
|
||||
//! (it clears local session state and message history) and is not routed here.
|
||||
//! Dispatches to [`crate::service::bot_command::execute`], which owns all
|
||||
//! parsing and routing logic. This handler is a thin OpenAPI adapter: it
|
||||
//! receives JSON, calls the service, and maps typed errors to HTTP status codes.
|
||||
|
||||
use crate::chat::commands::CommandDispatch;
|
||||
use crate::http::context::{AppContext, OpenApiResult};
|
||||
use crate::service::bot_command as svc;
|
||||
use poem::http::StatusCode;
|
||||
use poem_openapi::{Object, OpenApi, Tags, payload::Json};
|
||||
use serde::{Deserialize, Serialize};
|
||||
use std::collections::HashSet;
|
||||
use std::sync::{Arc, Mutex};
|
||||
use std::sync::Arc;
|
||||
|
||||
#[derive(Tags)]
|
||||
enum BotCommandTags {
|
||||
@@ -50,6 +47,11 @@ impl BotCommandApi {
|
||||
/// Dispatches to the same handlers used by the Matrix and Slack bots.
|
||||
/// Returns a markdown-formatted response that the frontend can display
|
||||
/// directly in the chat panel.
|
||||
///
|
||||
/// # Errors
|
||||
/// - `400 Bad Request` — project root not set, or invalid command arguments.
|
||||
/// - `404 Not Found` — unrecognised command keyword.
|
||||
/// - `500 Internal Server Error` — command execution failed.
|
||||
#[oai(path = "/bot/command", method = "post")]
|
||||
async fn run_command(
|
||||
&self,
|
||||
@@ -63,173 +65,23 @@ impl BotCommandApi {
|
||||
|
||||
let cmd = body.command.trim().to_ascii_lowercase();
|
||||
let args = body.args.trim();
|
||||
let response = dispatch_command(&cmd, args, &project_root, &self.ctx.agents).await;
|
||||
|
||||
let response = svc::execute(&cmd, args, &project_root, &self.ctx.agents)
|
||||
.await
|
||||
.map_err(|e| match e {
|
||||
svc::Error::UnknownCommand(msg) => {
|
||||
poem::Error::from_string(msg, StatusCode::NOT_FOUND)
|
||||
}
|
||||
svc::Error::BadArgs(msg) => poem::Error::from_string(msg, StatusCode::BAD_REQUEST),
|
||||
svc::Error::CommandFailed(msg) => {
|
||||
poem::Error::from_string(msg, StatusCode::INTERNAL_SERVER_ERROR)
|
||||
}
|
||||
})?;
|
||||
|
||||
Ok(Json(BotCommandResponse { response }))
|
||||
}
|
||||
}
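For illustration only (not part of this diff), a minimal sketch of invoking this endpoint from outside the web UI; the base URL and port are assumptions, while the request and response fields mirror BotCommandRequest and BotCommandResponse.

// Illustrative sketch, not part of this commit: calling the bot-command
// endpoint over HTTP. The base URL/port is an assumption; the JSON fields
// mirror BotCommandRequest { command, args } and BotCommandResponse { response }.
async fn run_bot_command(command: &str, args: &str) -> Result<String, reqwest::Error> {
    let client = reqwest::Client::new();
    let resp: serde_json::Value = client
        .post("http://localhost:8000/api/bot/command")
        .json(&serde_json::json!({ "command": command, "args": args }))
        .send()
        .await?
        .json()
        .await?;
    // The handler returns a markdown string in the `response` field.
    Ok(resp
        .get("response")
        .and_then(|v| v.as_str())
        .unwrap_or_default()
        .to_string())
}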
|
||||
|
||||
/// Dispatch a command keyword + args to the appropriate handler.
|
||||
async fn dispatch_command(
|
||||
cmd: &str,
|
||||
args: &str,
|
||||
project_root: &std::path::Path,
|
||||
agents: &Arc<crate::agents::AgentPool>,
|
||||
) -> String {
|
||||
match cmd {
|
||||
"assign" => dispatch_assign(args, project_root, agents).await,
|
||||
"start" => dispatch_start(args, project_root, agents).await,
|
||||
"delete" => dispatch_delete(args, project_root, agents).await,
|
||||
"rebuild" => dispatch_rebuild(project_root, agents).await,
|
||||
"timer" => dispatch_timer(args, project_root).await,
|
||||
// All other commands go through the synchronous command registry.
|
||||
_ => dispatch_sync(cmd, args, project_root, agents),
|
||||
}
|
||||
}
|
||||
|
||||
fn dispatch_sync(
|
||||
cmd: &str,
|
||||
args: &str,
|
||||
project_root: &std::path::Path,
|
||||
agents: &Arc<crate::agents::AgentPool>,
|
||||
) -> String {
|
||||
let ambient_rooms: Arc<Mutex<HashSet<String>>> = Arc::new(Mutex::new(HashSet::new()));
|
||||
// Use a synthetic bot name/id so strip_bot_mention passes through.
|
||||
let bot_name = "__web_ui__";
|
||||
let bot_user_id = "@__web_ui__:localhost";
|
||||
let room_id = "__web_ui__";
|
||||
|
||||
let dispatch = CommandDispatch {
|
||||
bot_name,
|
||||
bot_user_id,
|
||||
project_root,
|
||||
agents,
|
||||
ambient_rooms: &ambient_rooms,
|
||||
room_id,
|
||||
};
|
||||
|
||||
// Build a synthetic message that the registry can parse.
|
||||
let synthetic = if args.is_empty() {
|
||||
format!("{bot_name} {cmd}")
|
||||
} else {
|
||||
format!("{bot_name} {cmd} {args}")
|
||||
};
|
||||
|
||||
match crate::chat::commands::try_handle_command(&dispatch, &synthetic) {
|
||||
Some(response) => response,
|
||||
None => {
|
||||
// Command exists in the registry but its fallback handler returns None
|
||||
// (start, delete, rebuild, reset, htop — handled elsewhere or in
|
||||
// the frontend). Should not be reached for those since we intercept
|
||||
// them above. For genuinely unknown commands, tell the user.
|
||||
format!("Unknown command: `/{cmd}`. Type `/help` to see available commands.")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
async fn dispatch_assign(
|
||||
args: &str,
|
||||
project_root: &std::path::Path,
|
||||
agents: &Arc<crate::agents::AgentPool>,
|
||||
) -> String {
|
||||
// args: "<number> <model>"
|
||||
let mut parts = args.splitn(2, char::is_whitespace);
|
||||
let number_str = parts.next().unwrap_or("").trim();
|
||||
let model_str = parts.next().unwrap_or("").trim();
|
||||
|
||||
if number_str.is_empty()
|
||||
|| !number_str.chars().all(|c| c.is_ascii_digit())
|
||||
|| model_str.is_empty()
|
||||
{
|
||||
return "Usage: `/assign <number> <model>` (e.g. `/assign 42 opus`)".to_string();
|
||||
}
|
||||
|
||||
crate::chat::transport::matrix::assign::handle_assign(
|
||||
"web-ui",
|
||||
number_str,
|
||||
model_str,
|
||||
project_root,
|
||||
agents,
|
||||
)
|
||||
.await
|
||||
}
|
||||
|
||||
async fn dispatch_start(
|
||||
args: &str,
|
||||
project_root: &std::path::Path,
|
||||
agents: &Arc<crate::agents::AgentPool>,
|
||||
) -> String {
|
||||
// args: "<number>" or "<number> <model_hint>"
|
||||
let mut parts = args.splitn(2, char::is_whitespace);
|
||||
let number_str = parts.next().unwrap_or("").trim();
|
||||
let hint_str = parts.next().unwrap_or("").trim();
|
||||
|
||||
if number_str.is_empty() || !number_str.chars().all(|c| c.is_ascii_digit()) {
|
||||
return "Usage: `/start <number>` or `/start <number> <model>` (e.g. `/start 42 opus`)"
|
||||
.to_string();
|
||||
}
|
||||
|
||||
let agent_hint = if hint_str.is_empty() {
|
||||
None
|
||||
} else {
|
||||
Some(hint_str)
|
||||
};
|
||||
|
||||
crate::chat::transport::matrix::start::handle_start(
|
||||
"web-ui",
|
||||
number_str,
|
||||
agent_hint,
|
||||
project_root,
|
||||
agents,
|
||||
)
|
||||
.await
|
||||
}
|
||||
|
||||
async fn dispatch_delete(
|
||||
args: &str,
|
||||
project_root: &std::path::Path,
|
||||
agents: &Arc<crate::agents::AgentPool>,
|
||||
) -> String {
|
||||
let number_str = args.trim();
|
||||
if number_str.is_empty() || !number_str.chars().all(|c| c.is_ascii_digit()) {
|
||||
return "Usage: `/delete <number>` (e.g. `/delete 42`)".to_string();
|
||||
}
|
||||
crate::chat::transport::matrix::delete::handle_delete(
|
||||
"web-ui",
|
||||
number_str,
|
||||
project_root,
|
||||
agents,
|
||||
)
|
||||
.await
|
||||
}
|
||||
|
||||
async fn dispatch_rebuild(
|
||||
project_root: &std::path::Path,
|
||||
agents: &Arc<crate::agents::AgentPool>,
|
||||
) -> String {
|
||||
crate::chat::transport::matrix::rebuild::handle_rebuild("web-ui", project_root, agents).await
|
||||
}
|
||||
|
||||
async fn dispatch_timer(args: &str, project_root: &std::path::Path) -> String {
|
||||
// Re-use the existing parser by constructing a synthetic message that
|
||||
// looks like a bot-addressed timer command.
|
||||
let synthetic = format!("__web_ui__ timer {args}");
|
||||
let timer_cmd = match crate::chat::timer::extract_timer_command(
|
||||
&synthetic,
|
||||
"__web_ui__",
|
||||
"@__web_ui__:localhost",
|
||||
) {
|
||||
Some(cmd) => cmd,
|
||||
None => {
|
||||
return "Usage: `/timer list`, `/timer <number> <HH:MM>`, or `/timer cancel <number>`"
|
||||
.to_string();
|
||||
}
|
||||
};
|
||||
let store =
|
||||
crate::chat::timer::TimerStore::load(project_root.join(".huskies").join("timers.json"));
|
||||
crate::chat::timer::handle_timer_command(timer_cmd, &store, project_root).await
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Tests
|
||||
// ---------------------------------------------------------------------------
|
||||
@@ -268,13 +120,7 @@ mod tests {
|
||||
args: String::new(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
resp.response.contains("Unknown command"),
|
||||
"expected 'Unknown command' in: {}",
|
||||
resp.response
|
||||
);
|
||||
assert!(result.is_err(), "unknown command should return HTTP 404");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
@@ -286,13 +132,7 @@ mod tests {
|
||||
args: String::new(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
resp.response.contains("Usage"),
|
||||
"expected usage hint in: {}",
|
||||
resp.response
|
||||
);
|
||||
assert!(result.is_err(), "start with no args should return HTTP 400");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
@@ -304,12 +144,9 @@ mod tests {
|
||||
args: String::new(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
resp.response.contains("Usage"),
|
||||
"expected usage hint in: {}",
|
||||
resp.response
|
||||
result.is_err(),
|
||||
"delete with no args should return HTTP 400"
|
||||
);
|
||||
}
|
||||
|
||||
@@ -340,7 +177,11 @@ mod tests {
|
||||
args: "list".to_string(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
assert!(
|
||||
result.is_ok(),
|
||||
"timer list should succeed, got err: {:?}",
|
||||
result.err().map(|e| e.to_string())
|
||||
);
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
!resp.response.contains("Unknown command"),
|
||||
@@ -349,6 +190,128 @@ mod tests {
|
||||
);
|
||||
}
|
||||
|
||||
// -- htop (web-UI slash-command path) ------------------------------------
|
||||
|
||||
#[tokio::test]
|
||||
async fn htop_returns_dashboard_not_unknown_command() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let api = test_api(&dir);
|
||||
let body = BotCommandRequest {
|
||||
command: "htop".to_string(),
|
||||
args: String::new(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
!resp.response.contains("Unknown command"),
|
||||
"htop should not return 'Unknown command': {}",
|
||||
resp.response
|
||||
);
|
||||
assert!(
|
||||
resp.response.contains("htop"),
|
||||
"htop response should contain 'htop': {}",
|
||||
resp.response
|
||||
);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn htop_with_duration_returns_dashboard() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let api = test_api(&dir);
|
||||
let body = BotCommandRequest {
|
||||
command: "htop".to_string(),
|
||||
args: "10m".to_string(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
!resp.response.contains("Unknown command"),
|
||||
"htop 10m should not return 'Unknown command': {}",
|
||||
resp.response
|
||||
);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn htop_stop_returns_response_not_unknown_command() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let api = test_api(&dir);
|
||||
let body = BotCommandRequest {
|
||||
command: "htop".to_string(),
|
||||
args: "stop".to_string(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
!resp.response.contains("Unknown command"),
|
||||
"htop stop should not return 'Unknown command': {}",
|
||||
resp.response
|
||||
);
|
||||
}
|
||||
|
||||
// -- rmtree ----------------------------------------------------------------
|
||||
|
||||
#[tokio::test]
|
||||
async fn rmtree_without_number_returns_usage() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let api = test_api(&dir);
|
||||
let body = BotCommandRequest {
|
||||
command: "rmtree".to_string(),
|
||||
args: String::new(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(
|
||||
result.is_err(),
|
||||
"rmtree with no args should return HTTP 400"
|
||||
);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn rmtree_with_non_numeric_arg_returns_usage() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let api = test_api(&dir);
|
||||
let body = BotCommandRequest {
|
||||
command: "rmtree".to_string(),
|
||||
args: "foo".to_string(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(
|
||||
result.is_err(),
|
||||
"rmtree with non-numeric arg should return HTTP 400"
|
||||
);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn rmtree_does_not_return_unknown_command() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let api = test_api(&dir);
|
||||
let body = BotCommandRequest {
|
||||
command: "rmtree".to_string(),
|
||||
args: "999".to_string(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
!resp.response.contains("Unknown command"),
|
||||
"/rmtree should not return 'Unknown command': {}",
|
||||
resp.response
|
||||
);
|
||||
}
|
||||
|
||||
// -- htop bot-command path (regression: htop must remain in command registry) --
|
||||
|
||||
#[test]
|
||||
fn htop_is_registered_in_bot_command_registry() {
|
||||
let commands = crate::chat::commands::commands();
|
||||
assert!(
|
||||
commands.iter().any(|c| c.name == "htop"),
|
||||
"htop must be registered in the bot command registry so /help lists it"
|
||||
);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn run_command_requires_project_root() {
|
||||
// Create a context with no project root set.
|
||||
|
||||
@@ -1,8 +1,8 @@
|
||||
//! Application context — shared state (`AppContext`) threaded through all HTTP handlers.
|
||||
use crate::agents::{AgentPool, ReconciliationEvent};
|
||||
use crate::chat::timer::TimerStore;
|
||||
use crate::io::watcher::WatcherEvent;
|
||||
use crate::rebuild::{BotShutdownNotifier, ShutdownReason};
|
||||
use crate::service::timer::TimerStore;
|
||||
use crate::state::SessionState;
|
||||
use crate::store::JsonFileStore;
|
||||
use crate::workflow::WorkflowState;
|
||||
|
||||
@@ -0,0 +1,198 @@
//! Per-project event buffer and `GET /api/events` HTTP endpoint.
//!
//! The gateway polls `/api/events?since={ts_ms}` on each registered project
//! server to aggregate cross-project pipeline notifications into a single
//! gateway chat channel. Each project server buffers up to 500 events in
//! memory and serves them via this endpoint.
//!
//! Domain logic lives in `service::events`; this module is a thin HTTP
//! adapter: extract query params → call service → shape response.

#[cfg(test)]
pub use crate::service::events::StoredEvent;
pub use crate::service::events::{EventBuffer, subscribe_to_watcher};
// MAX_BUFFER_SIZE is used in tests via `use super::*`.
#[cfg(test)]
pub use crate::service::events::MAX_BUFFER_SIZE;

use poem::web::{Data, Query};
use poem::{Response, handler, http::StatusCode};
use serde::Deserialize;

/// Query parameters for `GET /api/events`.
#[derive(Deserialize)]
pub struct EventsQuery {
    /// Return only events with `timestamp_ms` strictly greater than this value.
    /// Defaults to `0` (return all buffered events).
    #[serde(default)]
    pub since: u64,
}

/// `GET /api/events?since={ts_ms}`
///
/// Returns a JSON array of [`StoredEvent`] objects recorded after `since` ms.
/// The gateway polls this endpoint on each registered project server to build
/// an aggregated cross-project notification stream.
#[handler]
pub fn events_handler(
    Query(params): Query<EventsQuery>,
    Data(buffer): Data<&EventBuffer>,
) -> Response {
    let events = crate::service::events::events_since(buffer, params.since);
    let body = serde_json::to_vec(&events).unwrap_or_else(|_| b"[]".to_vec());
    Response::builder()
        .status(StatusCode::OK)
        .header(poem::http::header::CONTENT_TYPE, "application/json")
        .body(body)
}
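For illustration only (not part of this diff), a minimal sketch of the gateway-side polling loop this endpoint serves; the base URL, tick interval, and exact JSON shape of StoredEvent are assumptions.

// Illustrative sketch, not part of this commit: polling one project server's
// /api/events endpoint and advancing a `since` cursor. The field name
// `timestamp_ms` matches the StoredEvent variants, but the exact serialized
// shape depends on StoredEvent's serde derive.
async fn poll_project_events(base_url: &str) -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();
    let mut since: u64 = 0;
    loop {
        let url = format!("{base_url}/api/events?since={since}");
        let events: Vec<serde_json::Value> = client.get(&url).send().await?.json().await?;
        for event in &events {
            if let Some(ts) = event.get("timestamp_ms").and_then(|v| v.as_u64()) {
                since = since.max(ts);
            }
            // Forward `event` to the configured gateway rooms here.
        }
        tokio::time::sleep(std::time::Duration::from_secs(5)).await;
    }
}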
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use tokio::sync::broadcast;
|
||||
|
||||
#[test]
|
||||
fn event_buffer_push_and_retrieve() {
|
||||
let buf = EventBuffer::new();
|
||||
buf.push(StoredEvent::MergeFailure {
|
||||
story_id: "42_story_x".to_string(),
|
||||
reason: "conflict".to_string(),
|
||||
timestamp_ms: 1000,
|
||||
});
|
||||
buf.push(StoredEvent::StoryBlocked {
|
||||
story_id: "43_story_y".to_string(),
|
||||
reason: "retry limit".to_string(),
|
||||
timestamp_ms: 2000,
|
||||
});
|
||||
|
||||
let all = buf.events_since(0);
|
||||
assert_eq!(all.len(), 2);
|
||||
|
||||
let after_1000 = buf.events_since(1000);
|
||||
assert_eq!(after_1000.len(), 1);
|
||||
assert!(matches!(after_1000[0], StoredEvent::StoryBlocked { .. }));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn event_buffer_evicts_oldest_when_full() {
|
||||
let buf = EventBuffer::new();
|
||||
for i in 0..MAX_BUFFER_SIZE + 1 {
|
||||
buf.push(StoredEvent::MergeFailure {
|
||||
story_id: format!("{i}_story_x"),
|
||||
reason: "x".to_string(),
|
||||
timestamp_ms: i as u64,
|
||||
});
|
||||
}
|
||||
// Buffer must not exceed MAX_BUFFER_SIZE.
|
||||
assert_eq!(buf.events_since(0).len(), MAX_BUFFER_SIZE);
|
||||
// Oldest entry (timestamp_ms == 0) should have been evicted.
|
||||
assert!(buf.events_since(0).iter().all(|e| e.timestamp_ms() > 0));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn stage_transition_timestamp_ms_accessor() {
|
||||
let e = StoredEvent::StageTransition {
|
||||
story_id: "1".to_string(),
|
||||
from_stage: "2_current".to_string(),
|
||||
to_stage: "3_qa".to_string(),
|
||||
timestamp_ms: 9999,
|
||||
};
|
||||
assert_eq!(e.timestamp_ms(), 9999);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn subscribe_to_watcher_stores_work_item_with_from_stage() {
|
||||
let buf = EventBuffer::new();
|
||||
let (tx, rx) = broadcast::channel(16);
|
||||
|
||||
subscribe_to_watcher(buf.clone(), rx);
|
||||
|
||||
tx.send(crate::io::watcher::WatcherEvent::WorkItem {
|
||||
stage: "3_qa".to_string(),
|
||||
item_id: "42_story_foo".to_string(),
|
||||
action: "qa".to_string(),
|
||||
commit_msg: "huskies: qa 42_story_foo".to_string(),
|
||||
from_stage: Some("2_current".to_string()),
|
||||
})
|
||||
.unwrap();
|
||||
|
||||
tokio::time::sleep(std::time::Duration::from_millis(50)).await;
|
||||
|
||||
let events = buf.events_since(0);
|
||||
assert_eq!(events.len(), 1);
|
||||
assert!(matches!(events[0], StoredEvent::StageTransition { .. }));
|
||||
if let StoredEvent::StageTransition {
|
||||
ref story_id,
|
||||
ref from_stage,
|
||||
ref to_stage,
|
||||
..
|
||||
} = events[0]
|
||||
{
|
||||
assert_eq!(story_id, "42_story_foo");
|
||||
assert_eq!(from_stage, "2_current");
|
||||
assert_eq!(to_stage, "3_qa");
|
||||
}
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn subscribe_to_watcher_ignores_work_item_without_from_stage() {
|
||||
let buf = EventBuffer::new();
|
||||
let (tx, rx) = broadcast::channel(16);
|
||||
|
||||
subscribe_to_watcher(buf.clone(), rx);
|
||||
|
||||
// Synthetic event: no from_stage.
|
||||
tx.send(crate::io::watcher::WatcherEvent::WorkItem {
|
||||
stage: "2_current".to_string(),
|
||||
item_id: "99_story_syn".to_string(),
|
||||
action: "start".to_string(),
|
||||
commit_msg: "huskies: start 99_story_syn".to_string(),
|
||||
from_stage: None,
|
||||
})
|
||||
.unwrap();
|
||||
|
||||
tokio::time::sleep(std::time::Duration::from_millis(50)).await;
|
||||
|
||||
assert_eq!(buf.events_since(0).len(), 0);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn subscribe_to_watcher_stores_merge_failure() {
|
||||
let buf = EventBuffer::new();
|
||||
let (tx, rx) = broadcast::channel(16);
|
||||
|
||||
subscribe_to_watcher(buf.clone(), rx);
|
||||
|
||||
tx.send(crate::io::watcher::WatcherEvent::MergeFailure {
|
||||
story_id: "42_story_foo".to_string(),
|
||||
reason: "merge conflict".to_string(),
|
||||
})
|
||||
.unwrap();
|
||||
|
||||
tokio::time::sleep(std::time::Duration::from_millis(50)).await;
|
||||
|
||||
let events = buf.events_since(0);
|
||||
assert_eq!(events.len(), 1);
|
||||
assert!(matches!(events[0], StoredEvent::MergeFailure { .. }));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn subscribe_to_watcher_stores_story_blocked() {
|
||||
let buf = EventBuffer::new();
|
||||
let (tx, rx) = broadcast::channel(16);
|
||||
|
||||
subscribe_to_watcher(buf.clone(), rx);
|
||||
|
||||
tx.send(crate::io::watcher::WatcherEvent::StoryBlocked {
|
||||
story_id: "43_story_bar".to_string(),
|
||||
reason: "retry limit exceeded".to_string(),
|
||||
})
|
||||
.unwrap();
|
||||
|
||||
tokio::time::sleep(std::time::Duration::from_millis(50)).await;
|
||||
|
||||
let events = buf.events_since(0);
|
||||
assert_eq!(events.len(), 1);
|
||||
assert!(matches!(events[0], StoredEvent::StoryBlocked { .. }));
|
||||
}
|
||||
}
|
||||
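The doc comment on `events_handler` says the gateway polls `/api/events?since=` on each registered project server. A minimal poller sketch under stated assumptions — the `reqwest`/`tokio` crates and the `timestamp_ms` field name in the serialized events are assumptions for illustration, not part of this change:

```rust
// Hypothetical gateway-side poller; crate choices (reqwest, tokio) and the
// `timestamp_ms` field name are assumptions, not shown in this diff.
use std::time::Duration;

async fn poll_project_events(base_url: &str) -> Result<(), reqwest::Error> {
    let mut since: u64 = 0;
    loop {
        let url = format!("{base_url}/api/events?since={since}");
        let events: Vec<serde_json::Value> = reqwest::get(&url).await?.json().await?;
        for ev in &events {
            // Advance the cursor so the next poll only returns newer events
            // (the endpoint filters with a strict `>` comparison).
            if let Some(ts) = ev.get("timestamp_ms").and_then(|v| v.as_u64()) {
                since = since.max(ts);
            }
            println!("event from {base_url}: {ev}");
        }
        tokio::time::sleep(Duration::from_secs(5)).await;
    }
}
```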
File diff suppressed because it is too large
+10 -11
@@ -1,7 +1,13 @@
//! Health check endpoint — returns a static "ok" response.
//! Health check endpoint — thin HTTP adapter over `service::health`.
//!
//! Domain logic (the `HealthStatus` type and check function) lives in
//! `service::health`; this module is a thin adapter: call service → shape
//! response.

pub use crate::service::health::HealthStatus;

use poem::handler;
use poem_openapi::{Object, OpenApi, Tags, payload::Json};
use serde::Serialize;
use poem_openapi::{OpenApi, Tags, payload::Json};

/// Health check endpoint.
///
@@ -16,11 +22,6 @@ enum HealthTags {
    Health,
}

#[derive(Serialize, Object)]
pub struct HealthStatus {
    status: String,
}

pub struct HealthApi;

#[OpenApi(tag = "HealthTags::Health")]
@@ -30,9 +31,7 @@ impl HealthApi {
    /// Returns a JSON status object to confirm the server is running.
    #[oai(path = "/health", method = "get")]
    async fn health(&self) -> Json<HealthStatus> {
        Json(HealthStatus {
            status: "ok".to_string(),
        })
        Json(crate::service::health::check())
    }
}
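For reference, a minimal sketch of what the delegated `service::health::check()` might look like, assuming `HealthStatus` keeps its single `status` field after the move (the service module itself is not shown in this compare view):

```rust
// Hypothetical service-layer counterpart — an assumption for illustration only.
pub fn check() -> HealthStatus {
    HealthStatus {
        status: "ok".to_string(),
    }
}
```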
+24
-25
@@ -1,6 +1,6 @@
|
||||
//! HTTP I/O endpoints — REST API for file and directory operations.
|
||||
//! HTTP I/O endpoints — thin adapters over `service::file_io`.
|
||||
use crate::http::context::{AppContext, OpenApiResult, bad_request};
|
||||
use crate::io::fs as io_fs;
|
||||
use crate::service::file_io::{self as svc, FileEntry};
|
||||
use poem_openapi::{Object, OpenApi, Tags, payload::Json};
|
||||
use serde::Deserialize;
|
||||
use std::sync::Arc;
|
||||
@@ -46,18 +46,18 @@ impl IoApi {
|
||||
/// Read a file from the currently open project and return its contents.
|
||||
#[oai(path = "/io/fs/read", method = "post")]
|
||||
async fn read_file(&self, payload: Json<FilePathPayload>) -> OpenApiResult<Json<String>> {
|
||||
let content = io_fs::read_file(payload.0.path, &self.ctx.state)
|
||||
let content = svc::read_file(payload.0.path, &self.ctx.state)
|
||||
.await
|
||||
.map_err(bad_request)?;
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(content))
|
||||
}
|
||||
|
||||
/// Write a file to the currently open project, creating parent directories if needed.
|
||||
#[oai(path = "/io/fs/write", method = "post")]
|
||||
async fn write_file(&self, payload: Json<WriteFilePayload>) -> OpenApiResult<Json<bool>> {
|
||||
io_fs::write_file(payload.0.path, payload.0.content, &self.ctx.state)
|
||||
svc::write_file(payload.0.path, payload.0.content, &self.ctx.state)
|
||||
.await
|
||||
.map_err(bad_request)?;
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(true))
|
||||
}
|
||||
|
||||
@@ -66,10 +66,10 @@ impl IoApi {
|
||||
async fn list_directory(
|
||||
&self,
|
||||
payload: Json<FilePathPayload>,
|
||||
) -> OpenApiResult<Json<Vec<io_fs::FileEntry>>> {
|
||||
let entries = io_fs::list_directory(payload.0.path, &self.ctx.state)
|
||||
) -> OpenApiResult<Json<Vec<FileEntry>>> {
|
||||
let entries = svc::list_directory(payload.0.path, &self.ctx.state)
|
||||
.await
|
||||
.map_err(bad_request)?;
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(entries))
|
||||
}
|
||||
|
||||
@@ -78,10 +78,10 @@ impl IoApi {
|
||||
async fn list_directory_absolute(
|
||||
&self,
|
||||
payload: Json<FilePathPayload>,
|
||||
) -> OpenApiResult<Json<Vec<io_fs::FileEntry>>> {
|
||||
let entries = io_fs::list_directory_absolute(payload.0.path)
|
||||
) -> OpenApiResult<Json<Vec<FileEntry>>> {
|
||||
let entries = svc::list_directory_absolute(payload.0.path)
|
||||
.await
|
||||
.map_err(bad_request)?;
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(entries))
|
||||
}
|
||||
|
||||
@@ -91,25 +91,25 @@ impl IoApi {
|
||||
&self,
|
||||
payload: Json<CreateDirectoryPayload>,
|
||||
) -> OpenApiResult<Json<bool>> {
|
||||
io_fs::create_directory_absolute(payload.0.path)
|
||||
svc::create_directory_absolute(payload.0.path)
|
||||
.await
|
||||
.map_err(bad_request)?;
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(true))
|
||||
}
|
||||
|
||||
/// Get the user's home directory.
|
||||
#[oai(path = "/io/fs/home", method = "get")]
|
||||
async fn get_home_directory(&self) -> OpenApiResult<Json<String>> {
|
||||
let home = io_fs::get_home_directory().map_err(bad_request)?;
|
||||
let home = svc::get_home_directory().map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(home))
|
||||
}
|
||||
|
||||
/// List all files in the project recursively, respecting .gitignore.
|
||||
#[oai(path = "/io/fs/files", method = "get")]
|
||||
async fn list_project_files(&self) -> OpenApiResult<Json<Vec<String>>> {
|
||||
let files = io_fs::list_project_files(&self.ctx.state)
|
||||
let files = svc::list_project_files(&self.ctx.state)
|
||||
.await
|
||||
.map_err(bad_request)?;
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(files))
|
||||
}
|
||||
|
||||
@@ -118,10 +118,10 @@ impl IoApi {
|
||||
async fn search_files(
|
||||
&self,
|
||||
payload: Json<SearchPayload>,
|
||||
) -> OpenApiResult<Json<Vec<crate::io::search::SearchResult>>> {
|
||||
let results = crate::io::search::search_files(payload.0.query, &self.ctx.state)
|
||||
) -> OpenApiResult<Json<Vec<crate::service::file_io::SearchResult>>> {
|
||||
let results = svc::search_files(payload.0.query, &self.ctx.state)
|
||||
.await
|
||||
.map_err(bad_request)?;
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(results))
|
||||
}
|
||||
|
||||
@@ -130,11 +130,10 @@ impl IoApi {
|
||||
async fn exec_shell(
|
||||
&self,
|
||||
payload: Json<ExecShellPayload>,
|
||||
) -> OpenApiResult<Json<crate::io::shell::CommandOutput>> {
|
||||
let output =
|
||||
crate::io::shell::exec_shell(payload.0.command, payload.0.args, &self.ctx.state)
|
||||
.await
|
||||
.map_err(bad_request)?;
|
||||
) -> OpenApiResult<Json<crate::service::file_io::CommandOutput>> {
|
||||
let output = svc::exec_shell(payload.0.command, payload.0.args, &self.ctx.state)
|
||||
.await
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(output))
|
||||
}
|
||||
}
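The recurring change in this file is `.map_err(bad_request)` becoming `.map_err(|e| bad_request(e.to_string()))`, which suggests the new `service::file_io` functions return a typed error rather than a plain `String`. A hedged sketch of such an error type — the real definition is not part of this diff; the only property the handlers rely on is stringification:

```rust
use std::fmt;

// Hypothetical error type for `service::file_io`, shown only to illustrate why
// the handlers above now call `e.to_string()` at the HTTP boundary.
#[derive(Debug)]
pub enum FileIoError {
    NotFound(String),
    Io(std::io::Error),
}

impl fmt::Display for FileIoError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            FileIoError::NotFound(path) => write!(f, "file not found: {path}"),
            FileIoError::Io(err) => write!(f, "io error: {err}"),
        }
    }
}

impl std::error::Error for FileIoError {}
```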
|
||||
|
||||
@@ -2,7 +2,7 @@
|
||||
use crate::agents::PipelineStage;
|
||||
use crate::config::ProjectConfig;
|
||||
use crate::http::context::AppContext;
|
||||
use crate::http::settings::get_editor_command_from_store;
|
||||
use crate::service::settings::get_editor_command;
|
||||
use crate::slog_warn;
|
||||
use crate::worktree;
|
||||
use serde_json::{Value, json};
|
||||
@@ -86,7 +86,7 @@ pub(super) fn tool_list_agents(ctx: &AppContext) -> Result<String, String> {
|
||||
.filter(|a| {
|
||||
project_root
|
||||
.as_deref()
|
||||
.map(|root| !crate::http::agents::story_is_archived(root, &a.story_id))
|
||||
.map(|root| !crate::service::agents::is_archived(root, &a.story_id))
|
||||
.unwrap_or(true)
|
||||
})
|
||||
.map(|a| json!({
|
||||
@@ -414,7 +414,7 @@ pub(super) fn tool_get_editor_command(args: &Value, ctx: &AppContext) -> Result<
|
||||
.and_then(|v| v.as_str())
|
||||
.ok_or("Missing required argument: worktree_path")?;
|
||||
|
||||
let editor = get_editor_command_from_store(ctx)
|
||||
let editor = get_editor_command(&*ctx.store)
|
||||
.ok_or_else(|| "No editor configured. Set one via PUT /api/settings/editor.".to_string())?;
|
||||
|
||||
Ok(format!("{editor} {worktree_path}"))
|
||||
|
||||
@@ -1,10 +1,15 @@
|
||||
//! MCP diagnostic tools — server logs, CRDT dump, and story movement helpers.
|
||||
//!
|
||||
//! This file is a thin adapter: it deserialises MCP payloads, delegates to
|
||||
//! `crate::service::diagnostics` for all business logic, and serialises responses.
|
||||
use crate::agents::move_story_to_stage;
|
||||
use crate::http::context::AppContext;
|
||||
use crate::log_buffer;
|
||||
use crate::service::diagnostics::{add_permission_rule, generate_permission_rule};
|
||||
use crate::slog;
|
||||
use crate::slog_warn;
|
||||
use serde_json::{Value, json};
|
||||
#[allow(unused_imports)]
|
||||
use std::fs;
|
||||
|
||||
pub(super) fn tool_get_server_logs(args: &Value) -> Result<String, String> {
|
||||
@@ -44,94 +49,6 @@ pub(super) async fn tool_rebuild_and_restart(ctx: &AppContext) -> Result<String,
|
||||
crate::rebuild::rebuild_and_restart(&ctx.agents, &project_root, notifier).await
|
||||
}
|
||||
|
||||
/// Generate a Claude Code permission rule string for the given tool name and input.
|
||||
///
|
||||
/// - `Edit` / `Write` / `Read` / `Grep` / `Glob` etc. → just the tool name
|
||||
/// - `Bash` → `Bash(first_word *)` derived from the `command` field in `tool_input`
|
||||
/// - `mcp__*` → the full tool name (e.g. `mcp__huskies__create_story`)
|
||||
fn generate_permission_rule(tool_name: &str, tool_input: &Value) -> String {
|
||||
if tool_name == "Bash" {
|
||||
// Extract command from tool_input.command and use first word as prefix
|
||||
let command_str = tool_input
|
||||
.get("command")
|
||||
.and_then(|v| v.as_str())
|
||||
.unwrap_or("");
|
||||
let first_word = command_str.split_whitespace().next().unwrap_or("unknown");
|
||||
format!("Bash({first_word} *)")
|
||||
} else {
|
||||
// For Edit, Write, Read, Glob, Grep, MCP tools, etc. — use the tool name directly
|
||||
tool_name.to_string()
|
||||
}
|
||||
}
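The mapping described above is easiest to see with concrete inputs. These illustrative assertions follow directly from the branch logic shown (the function now lives in `service::diagnostics`); they are not part of the diff itself:

```rust
#[test]
fn permission_rule_examples() {
    use serde_json::json;

    // Non-Bash tools map to their bare name.
    assert_eq!(generate_permission_rule("Edit", &json!({})), "Edit");
    // Bash rules are scoped to the command's first word.
    assert_eq!(
        generate_permission_rule("Bash", &json!({"command": "cargo test --all"})),
        "Bash(cargo *)"
    );
    // MCP tools keep their full name.
    assert_eq!(
        generate_permission_rule("mcp__huskies__create_story", &json!({})),
        "mcp__huskies__create_story"
    );
}
```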
|
||||
|
||||
/// Add a permission rule to `.claude/settings.json` in the project root.
|
||||
/// Does nothing if the rule already exists. Creates the file if missing.
|
||||
pub(super) fn add_permission_rule(
|
||||
project_root: &std::path::Path,
|
||||
rule: &str,
|
||||
) -> Result<(), String> {
|
||||
let claude_dir = project_root.join(".claude");
|
||||
fs::create_dir_all(&claude_dir)
|
||||
.map_err(|e| format!("Failed to create .claude/ directory: {e}"))?;
|
||||
|
||||
let settings_path = claude_dir.join("settings.json");
|
||||
let mut settings: Value = if settings_path.exists() {
|
||||
let content = fs::read_to_string(&settings_path)
|
||||
.map_err(|e| format!("Failed to read settings.json: {e}"))?;
|
||||
serde_json::from_str(&content).map_err(|e| format!("Failed to parse settings.json: {e}"))?
|
||||
} else {
|
||||
json!({ "permissions": { "allow": [] } })
|
||||
};
|
||||
|
||||
let allow_arr = settings
|
||||
.pointer_mut("/permissions/allow")
|
||||
.and_then(|v| v.as_array_mut());
|
||||
|
||||
let allow = match allow_arr {
|
||||
Some(arr) => arr,
|
||||
None => {
|
||||
// Ensure the structure exists
|
||||
settings
|
||||
.as_object_mut()
|
||||
.unwrap()
|
||||
.entry("permissions")
|
||||
.or_insert(json!({ "allow": [] }));
|
||||
settings
|
||||
.pointer_mut("/permissions/allow")
|
||||
.unwrap()
|
||||
.as_array_mut()
|
||||
.unwrap()
|
||||
}
|
||||
};
|
||||
|
||||
// Check for duplicates — exact string match
|
||||
let rule_value = Value::String(rule.to_string());
|
||||
if allow.contains(&rule_value) {
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
// Also check for wildcard coverage: if "mcp__huskies__*" exists, don't add
|
||||
// a more specific "mcp__huskies__create_story".
|
||||
let dominated = allow.iter().any(|existing| {
|
||||
if let Some(pat) = existing.as_str()
|
||||
&& let Some(prefix) = pat.strip_suffix('*')
|
||||
{
|
||||
return rule.starts_with(prefix);
|
||||
}
|
||||
false
|
||||
});
|
||||
if dominated {
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
allow.push(rule_value);
|
||||
|
||||
let pretty =
|
||||
serde_json::to_string_pretty(&settings).map_err(|e| format!("Failed to serialize: {e}"))?;
|
||||
fs::write(&settings_path, pretty).map_err(|e| format!("Failed to write settings.json: {e}"))?;
|
||||
Ok(())
|
||||
}
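The wildcard check means a narrower rule is never appended when an existing `*` pattern already covers it. A small test-only illustration against a temporary project root (`tempfile` is already used by this repo's tests; the assertion simply reflects the early return above):

```rust
#[test]
fn wildcard_rule_covers_more_specific_entries() {
    let dir = tempfile::tempdir().unwrap();
    let claude_dir = dir.path().join(".claude");
    std::fs::create_dir_all(&claude_dir).unwrap();
    std::fs::write(
        claude_dir.join("settings.json"),
        r#"{ "permissions": { "allow": ["mcp__huskies__*"] } }"#,
    )
    .unwrap();

    // Covered by the wildcard above, so nothing is added and the file is untouched.
    add_permission_rule(dir.path(), "mcp__huskies__create_story").unwrap();

    let saved = std::fs::read_to_string(claude_dir.join("settings.json")).unwrap();
    assert!(!saved.contains("create_story"));
}
```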
|
||||
|
||||
/// MCP tool called by Claude Code via `--permission-prompt-tool`.
|
||||
///
|
||||
/// Forwards the permission request through the shared channel to the active
|
||||
@@ -349,13 +266,14 @@ pub(super) fn tool_dump_crdt(args: &Value) -> Result<String, String> {
|
||||
.map_err(|e| format!("Serialization error: {e}"))
|
||||
}
|
||||
|
||||
/// MCP tool: return the server version and build hash.
|
||||
pub(super) fn tool_get_version() -> Result<String, String> {
|
||||
/// MCP tool: return the server version, build hash, and running port.
|
||||
pub(super) fn tool_get_version(ctx: &AppContext) -> Result<String, String> {
|
||||
let build_hash =
|
||||
std::fs::read_to_string(".huskies/build_hash").unwrap_or_else(|_| "unknown".to_string());
|
||||
serde_json::to_string_pretty(&json!({
|
||||
"version": env!("CARGO_PKG_VERSION"),
|
||||
"build_hash": build_hash.trim(),
|
||||
"port": ctx.agents.port(),
|
||||
}))
|
||||
.map_err(|e| format!("Serialization error: {e}"))
|
||||
}
|
||||
|
||||
@@ -1,68 +1,34 @@
|
||||
//! MCP git tools — status, diff, add, commit, and log operations on agent worktrees.
|
||||
//!
|
||||
//! This file is a thin adapter: it deserialises MCP payloads, delegates to
|
||||
//! `crate::service::git_ops` for all business logic, and serialises responses.
|
||||
use crate::http::context::AppContext;
|
||||
use serde_json::{Value, json};
|
||||
use std::path::PathBuf;
|
||||
|
||||
/// Validates that `worktree_path` exists and is inside the project's
|
||||
/// `.huskies/worktrees/` directory. Returns the canonicalized path.
|
||||
///
|
||||
/// Thin wrapper that obtains the project root from `ctx` and delegates to
|
||||
/// `service::git_ops::io::validate_worktree_path`.
|
||||
fn validate_worktree_path(worktree_path: &str, ctx: &AppContext) -> Result<PathBuf, String> {
|
||||
let wd = PathBuf::from(worktree_path);
|
||||
|
||||
if !wd.is_absolute() {
|
||||
return Err("worktree_path must be an absolute path".to_string());
|
||||
}
|
||||
if !wd.exists() {
|
||||
return Err(format!("worktree_path does not exist: {worktree_path}"));
|
||||
}
|
||||
|
||||
let project_root = ctx.agents.get_project_root(&ctx.state)?;
|
||||
let worktrees_root = project_root.join(".huskies").join("worktrees");
|
||||
|
||||
let canonical_wd = wd
|
||||
.canonicalize()
|
||||
.map_err(|e| format!("Cannot canonicalize worktree_path: {e}"))?;
|
||||
|
||||
let canonical_wt = if worktrees_root.exists() {
|
||||
worktrees_root
|
||||
.canonicalize()
|
||||
.map_err(|e| format!("Cannot canonicalize worktrees root: {e}"))?
|
||||
} else {
|
||||
return Err("No worktrees directory found in project".to_string());
|
||||
};
|
||||
|
||||
if !canonical_wd.starts_with(&canonical_wt) {
|
||||
return Err(format!(
|
||||
"worktree_path must be inside .huskies/worktrees/. Got: {worktree_path}"
|
||||
));
|
||||
}
|
||||
|
||||
Ok(canonical_wd)
|
||||
crate::service::git_ops::io::validate_worktree_path(worktree_path, &project_root)
|
||||
.map_err(|e| e.to_string())
|
||||
}
|
||||
|
||||
/// Run a git command in the given directory and return its output.
|
||||
async fn run_git(args: Vec<&'static str>, dir: PathBuf) -> Result<std::process::Output, String> {
|
||||
tokio::task::spawn_blocking(move || {
|
||||
std::process::Command::new("git")
|
||||
.args(&args)
|
||||
.current_dir(&dir)
|
||||
.output()
|
||||
})
|
||||
.await
|
||||
.map_err(|e| format!("Task join error: {e}"))?
|
||||
.map_err(|e| format!("Failed to run git: {e}"))
|
||||
crate::service::git_ops::io::run_git(args, dir)
|
||||
.await
|
||||
.map_err(|e| e.to_string())
|
||||
}
|
||||
|
||||
/// Run a git command with owned args in the given directory.
|
||||
async fn run_git_owned(args: Vec<String>, dir: PathBuf) -> Result<std::process::Output, String> {
|
||||
tokio::task::spawn_blocking(move || {
|
||||
std::process::Command::new("git")
|
||||
.args(&args)
|
||||
.current_dir(&dir)
|
||||
.output()
|
||||
})
|
||||
.await
|
||||
.map_err(|e| format!("Task join error: {e}"))?
|
||||
.map_err(|e| format!("Failed to run git: {e}"))
|
||||
crate::service::git_ops::io::run_git_owned(args, dir)
|
||||
.await
|
||||
.map_err(|e| e.to_string())
|
||||
}
|
||||
|
||||
/// git_status — returns working tree status (staged, unstaged, untracked files).
|
||||
@@ -86,29 +52,8 @@ pub(super) async fn tool_git_status(args: &Value, ctx: &AppContext) -> Result<St
|
||||
));
|
||||
}
|
||||
|
||||
let mut staged: Vec<String> = Vec::new();
|
||||
let mut unstaged: Vec<String> = Vec::new();
|
||||
let mut untracked: Vec<String> = Vec::new();
|
||||
|
||||
for line in stdout.lines() {
|
||||
if line.len() < 3 {
|
||||
continue;
|
||||
}
|
||||
let x = line.chars().next().unwrap_or(' ');
|
||||
let y = line.chars().nth(1).unwrap_or(' ');
|
||||
let path = line[3..].to_string();
|
||||
|
||||
match (x, y) {
|
||||
('?', '?') => untracked.push(path),
|
||||
(' ', _) => unstaged.push(path),
|
||||
(_, ' ') => staged.push(path),
|
||||
_ => {
|
||||
// Both staged and unstaged modifications
|
||||
staged.push(path.clone());
|
||||
unstaged.push(path);
|
||||
}
|
||||
}
|
||||
}
|
||||
let (staged, unstaged, untracked) =
|
||||
crate::service::git_ops::parse_git_status_porcelain(&stdout);
|
||||
|
||||
serde_json::to_string_pretty(&json!({
|
||||
"staged": staged,
|
||||
|
||||
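The status parsing that used to live inline here (X = index column, Y = worktree column of `git status --porcelain`) moved to `service::git_ops::parse_git_status_porcelain`. A worked example of the classification, assuming the extracted function keeps the `(staged, unstaged, untracked)` return shape used above:

```rust
#[test]
fn porcelain_classification_example() {
    let sample = "M  src/lib.rs\n M src/main.rs\nMM src/api.rs\n?? notes.txt\n";
    let (staged, unstaged, untracked) = parse_git_status_porcelain(sample);

    // Index-only change, plus the file modified in both index and worktree.
    assert_eq!(staged, vec!["src/lib.rs", "src/api.rs"]);
    // Worktree-only change, plus the doubly-modified file again.
    assert_eq!(unstaged, vec!["src/main.rs", "src/api.rs"]);
    assert_eq!(untracked, vec!["notes.txt"]);
}
```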
@@ -897,7 +897,7 @@ fn handle_tools_list(id: Option<Value>) -> JsonRpcResponse {
|
||||
},
|
||||
{
|
||||
"name": "get_version",
|
||||
"description": "Return the server version and build hash.",
|
||||
"description": "Return the server version, build hash, and running port.",
|
||||
"inputSchema": {
|
||||
"type": "object",
|
||||
"properties": {}
|
||||
@@ -1330,7 +1330,7 @@ async fn handle_tools_call(id: Option<Value>, params: &Value, ctx: &AppContext)
|
||||
"get_pipeline_status" => story_tools::tool_get_pipeline_status(ctx),
|
||||
// Diagnostics
|
||||
"get_server_logs" => diagnostics::tool_get_server_logs(&args),
|
||||
"get_version" => diagnostics::tool_get_version(),
|
||||
"get_version" => diagnostics::tool_get_version(ctx),
|
||||
// Server lifecycle
|
||||
"rebuild_and_restart" => diagnostics::tool_rebuild_and_restart(ctx).await,
|
||||
// Permission bridge (Claude Code → frontend dialog)
|
||||
|
||||
@@ -1,8 +1,12 @@
|
||||
//! MCP QA tools — request, approve, and reject QA reviews for stories.
|
||||
//!
|
||||
//! This file is a thin adapter: it deserialises MCP payloads, delegates to
|
||||
//! `crate::service::qa` for all business logic, and serialises responses.
|
||||
use crate::agents::{
|
||||
move_story_to_done, move_story_to_merge, move_story_to_qa, reject_story_from_qa,
|
||||
};
|
||||
use crate::http::context::AppContext;
|
||||
use crate::service::qa::{find_free_port, is_spike, merge_spike_branch_to_master};
|
||||
use crate::slog;
|
||||
use crate::slog_warn;
|
||||
use serde_json::{Value, json};
|
||||
@@ -57,8 +61,7 @@ pub(super) async fn tool_approve_qa(args: &Value, ctx: &AppContext) -> Result<St
|
||||
let _ = crate::io::story_metadata::clear_front_matter_field(&qa_path, "review_hold");
|
||||
}
|
||||
|
||||
let item_type = crate::agents::lifecycle::item_type_from_id(story_id);
|
||||
if item_type == "spike" {
|
||||
if is_spike(story_id) {
|
||||
// Spikes skip the merge stage entirely: merge the feature branch to master
|
||||
// directly (fast-forward or simple merge), then move straight to done.
|
||||
let branch = format!("feature/story-{story_id}");
|
||||
@@ -68,7 +71,8 @@ pub(super) async fn tool_approve_qa(args: &Value, ctx: &AppContext) -> Result<St
|
||||
let merge_ok =
|
||||
tokio::task::spawn_blocking(move || merge_spike_branch_to_master(&root, &br, &sid))
|
||||
.await
|
||||
.map_err(|e| format!("Merge task panicked: {e}"))??;
|
||||
.map_err(|e| format!("Merge task panicked: {e}"))?
|
||||
.map_err(|e| e.to_string())?;
|
||||
|
||||
move_story_to_done(&project_root, story_id)?;
|
||||
|
||||
@@ -115,73 +119,6 @@ pub(super) async fn tool_approve_qa(args: &Value, ctx: &AppContext) -> Result<St
|
||||
}
|
||||
}
|
||||
|
||||
/// Merge a spike's feature branch into master using a fast-forward or simple merge.
|
||||
///
|
||||
/// Unlike the squash-merge pipeline used for stories, spikes skip quality gates
|
||||
/// and preserve their commit history. Returns `true` if a merge was performed,
|
||||
/// `false` if the branch had no unmerged commits.
|
||||
fn merge_spike_branch_to_master(
|
||||
project_root: &std::path::Path,
|
||||
branch: &str,
|
||||
story_id: &str,
|
||||
) -> Result<bool, String> {
|
||||
use std::process::Command;
|
||||
|
||||
// Check the branch exists and has unmerged changes.
|
||||
if !crate::agents::lifecycle::feature_branch_has_unmerged_changes(project_root, story_id) {
|
||||
slog!("[qa] Spike '{story_id}': feature branch has no unmerged changes, skipping merge.");
|
||||
return Ok(false);
|
||||
}
|
||||
|
||||
// Ensure we are on master.
|
||||
let checkout = Command::new("git")
|
||||
.args(["checkout", "master"])
|
||||
.current_dir(project_root)
|
||||
.output()
|
||||
.map_err(|e| format!("git checkout master failed: {e}"))?;
|
||||
if !checkout.status.success() {
|
||||
return Err(format!(
|
||||
"Failed to checkout master: {}",
|
||||
String::from_utf8_lossy(&checkout.stderr)
|
||||
));
|
||||
}
|
||||
|
||||
// Try fast-forward first, then fall back to a regular merge.
|
||||
let ff = Command::new("git")
|
||||
.args(["merge", "--ff-only", branch])
|
||||
.current_dir(project_root)
|
||||
.output()
|
||||
.map_err(|e| format!("git merge --ff-only failed: {e}"))?;
|
||||
|
||||
if ff.status.success() {
|
||||
slog!("[qa] Spike '{story_id}': fast-forward merged '{branch}' into master.");
|
||||
return Ok(true);
|
||||
}
|
||||
|
||||
// Fast-forward failed (diverged history) — fall back to a regular merge.
|
||||
let merge = Command::new("git")
|
||||
.args([
|
||||
"merge",
|
||||
"--no-ff",
|
||||
branch,
|
||||
"-m",
|
||||
&format!("Merge spike branch '{branch}' into master"),
|
||||
])
|
||||
.current_dir(project_root)
|
||||
.output()
|
||||
.map_err(|e| format!("git merge failed: {e}"))?;
|
||||
|
||||
if merge.status.success() {
|
||||
slog!("[qa] Spike '{story_id}': merged '{branch}' into master (no-ff).");
|
||||
Ok(true)
|
||||
} else {
|
||||
Err(format!(
|
||||
"Failed to merge spike branch '{branch}' into master: {}",
|
||||
String::from_utf8_lossy(&merge.stderr)
|
||||
))
|
||||
}
|
||||
}
|
||||
|
||||
pub(super) async fn tool_reject_qa(args: &Value, ctx: &AppContext) -> Result<String, String> {
|
||||
let story_id = args
|
||||
.get("story_id")
|
||||
@@ -294,16 +231,6 @@ pub(super) async fn tool_launch_qa_app(args: &Value, ctx: &AppContext) -> Result
|
||||
.map_err(|e| format!("Serialization error: {e}"))
|
||||
}
|
||||
|
||||
/// Find a free TCP port starting from `start`.
|
||||
pub(super) fn find_free_port(start: u16) -> u16 {
|
||||
for port in start..start + 100 {
|
||||
if std::net::TcpListener::bind(("127.0.0.1", port)).is_ok() {
|
||||
return port;
|
||||
}
|
||||
}
|
||||
start // fallback
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
|
||||
@@ -1,5 +1,10 @@
|
||||
//! MCP shell tools — run commands, execute tests, and stream output via MCP.
|
||||
//!
|
||||
//! This file is a thin adapter: it deserialises MCP payloads, delegates to
|
||||
//! `crate::service::shell` for all business logic, and serialises responses.
|
||||
use crate::http::context::AppContext;
|
||||
#[allow(unused_imports)]
|
||||
use crate::service::shell::{extract_count, is_dangerous, parse_test_counts, truncate_output};
|
||||
use bytes::Bytes;
|
||||
use futures::StreamExt;
|
||||
use poem::{Body, Response};
|
||||
@@ -11,92 +16,15 @@ const MAX_TIMEOUT_SECS: u64 = 600;
|
||||
const TEST_TIMEOUT_SECS: u64 = 1200;
|
||||
const MAX_OUTPUT_LINES: usize = 100;
|
||||
|
||||
/// Patterns that are unconditionally blocked regardless of context.
|
||||
static BLOCKED_PATTERNS: &[&str] = &[
|
||||
"rm -rf /",
|
||||
"rm -fr /",
|
||||
"rm -rf /*",
|
||||
"rm -fr /*",
|
||||
"rm --no-preserve-root",
|
||||
":(){ :|:& };:",
|
||||
"> /dev/sda",
|
||||
"dd if=/dev",
|
||||
];
|
||||
|
||||
/// Binaries that are unconditionally blocked.
|
||||
static BLOCKED_BINARIES: &[&str] = &[
|
||||
"sudo", "su", "shutdown", "reboot", "halt", "poweroff", "mkfs",
|
||||
];
|
||||
|
||||
/// Returns an error message if the command matches a blocked pattern or binary.
|
||||
fn is_dangerous(command: &str) -> Option<String> {
|
||||
let trimmed = command.trim();
|
||||
|
||||
// Check each blocked pattern (substring match)
|
||||
for &pattern in BLOCKED_PATTERNS {
|
||||
if trimmed.contains(pattern) {
|
||||
return Some(format!(
|
||||
"Command blocked: dangerous pattern '{pattern}' detected"
|
||||
));
|
||||
}
|
||||
}
|
||||
|
||||
// Check first token of the command against blocked binaries
|
||||
if let Some(first_token) = trimmed.split_whitespace().next() {
|
||||
let binary = std::path::Path::new(first_token)
|
||||
.file_name()
|
||||
.and_then(|n| n.to_str())
|
||||
.unwrap_or(first_token);
|
||||
if BLOCKED_BINARIES.contains(&binary) {
|
||||
return Some(format!("Command blocked: '{binary}' is not permitted"));
|
||||
}
|
||||
}
|
||||
|
||||
None
|
||||
}
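A few concrete cases for the blocklist logic above (the behaviour is now provided by `service::shell`, re-exported by the import at the top of this file); these assertions mirror the pattern and first-token checks shown:

```rust
#[test]
fn dangerous_command_examples() {
    // Blocked by substring pattern.
    assert!(is_dangerous("rm -rf / --no-preserve-root").is_some());
    // Blocked by binary name, even behind an absolute path.
    assert!(is_dangerous("sudo apt-get update").is_some());
    assert!(is_dangerous("/usr/bin/sudo ls").is_some());
    // Ordinary project commands are allowed.
    assert!(is_dangerous("cargo test").is_none());
}
```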
|
||||
|
||||
/// Validates that `working_dir` exists and is inside the project's
|
||||
/// `.huskies/worktrees/` directory. Returns the canonicalized path.
|
||||
///
|
||||
/// Thin wrapper that obtains the project root from `ctx` and delegates to
|
||||
/// `service::shell::io::validate_working_dir`.
|
||||
fn validate_working_dir(working_dir: &str, ctx: &AppContext) -> Result<PathBuf, String> {
|
||||
let wd = PathBuf::from(working_dir);
|
||||
|
||||
if !wd.is_absolute() {
|
||||
return Err("working_dir must be an absolute path".to_string());
|
||||
}
|
||||
if !wd.exists() {
|
||||
return Err(format!("working_dir does not exist: {working_dir}"));
|
||||
}
|
||||
|
||||
let project_root = ctx.agents.get_project_root(&ctx.state)?;
|
||||
let worktrees_root = project_root.join(".huskies").join("worktrees");
|
||||
|
||||
let canonical_wd = wd
|
||||
.canonicalize()
|
||||
.map_err(|e| format!("Cannot canonicalize working_dir: {e}"))?;
|
||||
|
||||
// If worktrees_root doesn't exist yet, we can't allow anything
|
||||
let canonical_wt = if worktrees_root.exists() {
|
||||
worktrees_root
|
||||
.canonicalize()
|
||||
.map_err(|e| format!("Cannot canonicalize worktrees root: {e}"))?
|
||||
} else {
|
||||
return Err("No worktrees directory found in project".to_string());
|
||||
};
|
||||
|
||||
// Also allow the merge workspace so mergemaster can fix conflicts.
|
||||
let merge_workspace = project_root.join(".huskies").join("merge_workspace");
|
||||
let canonical_mw = merge_workspace.canonicalize().unwrap_or_default();
|
||||
|
||||
let in_worktrees = canonical_wd.starts_with(&canonical_wt);
|
||||
let in_merge_ws =
|
||||
!canonical_mw.as_os_str().is_empty() && canonical_wd.starts_with(&canonical_mw);
|
||||
if !in_worktrees && !in_merge_ws {
|
||||
return Err(format!(
|
||||
"working_dir must be inside .huskies/worktrees/ or .huskies/merge_workspace/. Got: {working_dir}"
|
||||
));
|
||||
}
|
||||
|
||||
Ok(canonical_wd)
|
||||
crate::service::shell::io::validate_working_dir(working_dir, &project_root)
|
||||
.map_err(|e| e.to_string())
|
||||
}
|
||||
|
||||
/// Regular (non-SSE) run_command: runs the bash command to completion and
|
||||
@@ -328,51 +256,6 @@ pub(super) fn handle_run_command_sse(
|
||||
.body(Body::from_bytes_stream(stream.map(|r| r.map(Bytes::from))))
|
||||
}
|
||||
|
||||
/// Truncate output to at most `max_lines` lines, keeping the tail.
|
||||
fn truncate_output(output: &str, max_lines: usize) -> String {
|
||||
let lines: Vec<&str> = output.lines().collect();
|
||||
if lines.len() <= max_lines {
|
||||
return output.to_string();
|
||||
}
|
||||
let omitted = lines.len() - max_lines;
|
||||
let tail = lines[lines.len() - max_lines..].join("\n");
|
||||
format!("[... {omitted} lines omitted ...]\n{tail}")
|
||||
}
|
||||
|
||||
/// Parse cumulative passed/failed counts from `cargo test` output lines like:
|
||||
/// `"test result: ok. 5 passed; 0 failed; ..."`
|
||||
fn parse_test_counts(output: &str) -> (u64, u64) {
|
||||
let mut total_passed = 0u64;
|
||||
let mut total_failed = 0u64;
|
||||
for line in output.lines() {
|
||||
if line.contains("test result:") {
|
||||
if let Some(p) = extract_count(line, "passed") {
|
||||
total_passed += p;
|
||||
}
|
||||
if let Some(f) = extract_count(line, "failed") {
|
||||
total_failed += f;
|
||||
}
|
||||
}
|
||||
}
|
||||
(total_passed, total_failed)
|
||||
}
|
||||
|
||||
/// Extract a count immediately before `label` in `line` (e.g. `"5 passed"` → 5).
|
||||
fn extract_count(line: &str, label: &str) -> Option<u64> {
|
||||
let pos = line.find(label)?;
|
||||
let before = line[..pos].trim_end();
|
||||
let num_str: String = before
|
||||
.chars()
|
||||
.rev()
|
||||
.take_while(|c| c.is_ascii_digit())
|
||||
.collect();
|
||||
if num_str.is_empty() {
|
||||
return None;
|
||||
}
|
||||
let num_str: String = num_str.chars().rev().collect();
|
||||
num_str.parse().ok()
|
||||
}
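Concrete examples of the helpers above (both now provided by `service::shell`): counts are summed across every `test result:` line, and truncation keeps only the tail of the output. Illustrative only, not part of the diff:

```rust
#[test]
fn output_helper_examples() {
    let out = "test result: ok. 5 passed; 0 failed; 0 ignored\n\
               test result: FAILED. 2 passed; 1 failed; 0 ignored\n";
    assert_eq!(parse_test_counts(out), (7, 1));

    assert_eq!(
        truncate_output("a\nb\nc\nd", 2),
        "[... 2 lines omitted ...]\nc\nd"
    );
}
```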
|
||||
|
||||
/// Run the project's test suite (`script/test`) and block until complete.
|
||||
///
|
||||
/// Spawns the test process, then polls every second server-side until the
|
||||
|
||||
@@ -1,4 +1,8 @@
|
||||
//! MCP story tools — create, update, move, and manage stories, bugs, and refactors via MCP.
|
||||
//!
|
||||
//! This file is a thin adapter: it deserialises MCP payloads, delegates to
|
||||
//! `crate::service::story` and `crate::http::workflow` for business logic,
|
||||
//! and serialises responses.
|
||||
use crate::agents::{
|
||||
close_bug_to_archive, feature_branch_has_unmerged_changes, move_story_to_done,
|
||||
};
|
||||
@@ -12,7 +16,9 @@ use crate::http::workflow::{
|
||||
use crate::io::story_metadata::{
|
||||
check_archived_deps, check_archived_deps_from_list, parse_front_matter, parse_unchecked_todos,
|
||||
};
|
||||
use crate::service::story::parse_test_cases;
|
||||
use crate::slog_warn;
|
||||
#[allow(unused_imports)]
|
||||
use crate::workflow::{TestCaseResult, TestStatus, evaluate_acceptance_with_coverage};
|
||||
use serde_json::{Value, json};
|
||||
use std::collections::HashMap;
|
||||
@@ -702,46 +708,6 @@ pub(super) fn tool_list_refactors(ctx: &AppContext) -> Result<String, String> {
|
||||
.map_err(|e| format!("Serialization error: {e}"))
|
||||
}
|
||||
|
||||
pub(super) fn parse_test_cases(value: Option<&Value>) -> Result<Vec<TestCaseResult>, String> {
|
||||
let arr = match value {
|
||||
Some(Value::Array(a)) => a,
|
||||
Some(Value::Null) | None => return Ok(Vec::new()),
|
||||
_ => return Err("Expected array for test cases".to_string()),
|
||||
};
|
||||
|
||||
arr.iter()
|
||||
.map(|item| {
|
||||
let name = item
|
||||
.get("name")
|
||||
.and_then(|v| v.as_str())
|
||||
.ok_or("Test case missing 'name'")?
|
||||
.to_string();
|
||||
let status_str = item
|
||||
.get("status")
|
||||
.and_then(|v| v.as_str())
|
||||
.ok_or("Test case missing 'status'")?;
|
||||
let status = match status_str {
|
||||
"pass" => TestStatus::Pass,
|
||||
"fail" => TestStatus::Fail,
|
||||
other => {
|
||||
return Err(format!(
|
||||
"Invalid test status '{other}'. Use 'pass' or 'fail'."
|
||||
));
|
||||
}
|
||||
};
|
||||
let details = item
|
||||
.get("details")
|
||||
.and_then(|v| v.as_str())
|
||||
.map(String::from);
|
||||
Ok(TestCaseResult {
|
||||
name,
|
||||
status,
|
||||
details,
|
||||
})
|
||||
})
|
||||
.collect()
|
||||
}
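An illustrative call to the extracted parser (now `service::story::parse_test_cases`), showing the accepted shape and the rejection of unknown statuses described above:

```rust
#[test]
fn parse_test_cases_example() {
    use serde_json::json;

    let value = json!([
        { "name": "login works", "status": "pass" },
        { "name": "logout works", "status": "fail", "details": "500 from /logout" }
    ]);
    let cases = parse_test_cases(Some(&value)).unwrap();
    assert_eq!(cases.len(), 2);
    assert!(matches!(cases[0].status, TestStatus::Pass));
    assert_eq!(cases[1].details.as_deref(), Some("500 from /logout"));

    // Missing input means "no test cases", but an unknown status is an error.
    assert!(parse_test_cases(None).unwrap().is_empty());
    assert!(parse_test_cases(Some(&json!([{ "name": "x", "status": "skip" }]))).is_err());
}
```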
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
|
||||
+128
-319
@@ -12,91 +12,60 @@
|
||||
//! 5. `wizard_retry` — discard staged content and regenerate from scratch
|
||||
|
||||
use crate::http::context::AppContext;
|
||||
use crate::io::wizard::{StepStatus, WizardState, WizardStep, format_wizard_state};
|
||||
use crate::io::wizard::WizardStep;
|
||||
use crate::service::wizard::state_machine;
|
||||
use crate::service::wizard::{self as svc};
|
||||
use serde_json::Value;
|
||||
use std::fs;
|
||||
use std::path::Path;
|
||||
|
||||
// ── helpers ───────────────────────────────────────────────────────────────────
|
||||
// ── Thin adapters (kept for callers in chat/commands/setup.rs) ────────────────
|
||||
|
||||
/// Return the filesystem path (relative to `project_root`) for a step's output.
|
||||
/// Return the filesystem path for a step's output file.
|
||||
///
|
||||
/// Returns `None` for `Scaffold` since that step has no single output file — it
|
||||
/// creates the full `.huskies/` directory structure and is handled by
|
||||
/// `huskies init` before the server starts.
|
||||
/// Pure path concatenation — delegates to `service::wizard::state_machine`.
|
||||
pub(crate) fn step_output_path(
|
||||
project_root: &Path,
|
||||
step: WizardStep,
|
||||
) -> Option<std::path::PathBuf> {
|
||||
match step {
|
||||
WizardStep::Context => Some(
|
||||
project_root
|
||||
.join(".huskies")
|
||||
.join("specs")
|
||||
.join("00_CONTEXT.md"),
|
||||
),
|
||||
WizardStep::Stack => Some(
|
||||
project_root
|
||||
.join(".huskies")
|
||||
.join("specs")
|
||||
.join("tech")
|
||||
.join("STACK.md"),
|
||||
),
|
||||
WizardStep::TestScript => Some(project_root.join("script").join("test")),
|
||||
WizardStep::ReleaseScript => Some(project_root.join("script").join("release")),
|
||||
WizardStep::TestCoverage => Some(project_root.join("script").join("test_coverage")),
|
||||
WizardStep::Scaffold => None,
|
||||
}
|
||||
state_machine::step_output_path(project_root, step)
|
||||
}
|
||||
|
||||
/// Return true when `step` produces an executable script file.
|
||||
pub(crate) fn is_script_step(step: WizardStep) -> bool {
|
||||
matches!(
|
||||
step,
|
||||
WizardStep::TestScript | WizardStep::ReleaseScript | WizardStep::TestCoverage
|
||||
)
|
||||
state_machine::is_script_step(step)
|
||||
}
|
||||
|
||||
/// Write `content` to `path` only when the file does not already exist.
|
||||
/// Write `content` to `path`, skipping if the file already has real content.
|
||||
///
|
||||
/// Existing files (including `CLAUDE.md`) are never overwritten — the wizard
|
||||
/// appends or skips per the acceptance criteria. For script steps the file is
|
||||
/// also made executable after writing.
|
||||
/// Delegates to `service::wizard::write_step_file`.
|
||||
pub(crate) fn write_if_missing(
|
||||
path: &Path,
|
||||
content: &str,
|
||||
executable: bool,
|
||||
) -> Result<bool, String> {
|
||||
if path.exists() {
|
||||
return Ok(false); // already present — skip silently
|
||||
}
|
||||
if let Some(parent) = path.parent() {
|
||||
fs::create_dir_all(parent)
|
||||
.map_err(|e| format!("Failed to create directory {}: {e}", parent.display()))?;
|
||||
}
|
||||
fs::write(path, content).map_err(|e| format!("Failed to write {}: {e}", path.display()))?;
|
||||
|
||||
if executable {
|
||||
#[cfg(unix)]
|
||||
{
|
||||
use std::os::unix::fs::PermissionsExt;
|
||||
let mut perms = fs::metadata(path)
|
||||
.map_err(|e| format!("Failed to read permissions: {e}"))?
|
||||
.permissions();
|
||||
perms.set_mode(0o755);
|
||||
fs::set_permissions(path, perms)
|
||||
.map_err(|e| format!("Failed to set permissions: {e}"))?;
|
||||
}
|
||||
}
|
||||
|
||||
Ok(true)
|
||||
svc::write_step_file(path, content, executable).map_err(|e| e.to_string())
|
||||
}
|
||||
|
||||
/// Serialise a `WizardStep` to its snake_case string (e.g. `"test_script"`).
|
||||
fn step_slug(step: WizardStep) -> String {
|
||||
serde_json::to_value(step)
|
||||
.ok()
|
||||
.and_then(|v| v.as_str().map(String::from))
|
||||
.unwrap_or_default()
|
||||
/// Return true when the project directory has no meaningful source files.
|
||||
///
|
||||
/// Delegates to `service::wizard::state_machine::is_bare_project` after
|
||||
/// reading directory entries via `service::wizard::io`.
|
||||
#[cfg(test)]
|
||||
fn is_bare_project(project_root: &Path) -> bool {
|
||||
use crate::service::wizard::io as wio;
|
||||
let names = wio::list_dir_names(project_root);
|
||||
state_machine::is_bare_project(&names)
|
||||
}
|
||||
|
||||
/// Return a generation hint for `step` based on the project at `project_root`.
|
||||
///
|
||||
/// Reads filesystem state then delegates pure logic to `state_machine`.
|
||||
pub(crate) fn generation_hint(step: WizardStep, project_root: &Path) -> String {
|
||||
use crate::service::wizard::io as wio;
|
||||
let names = wio::list_dir_names(project_root);
|
||||
let tools = wio::detect_project_tools(project_root);
|
||||
let is_bare = state_machine::is_bare_project(&names);
|
||||
state_machine::generation_hint(step, is_bare, &tools)
|
||||
}
|
||||
|
||||
// ── MCP tool handlers ─────────────────────────────────────────────────────────
|
||||
@@ -104,9 +73,7 @@ fn step_slug(step: WizardStep) -> String {
|
||||
/// `wizard_status` — return current wizard state as a human-readable summary.
|
||||
pub(super) fn tool_wizard_status(ctx: &AppContext) -> Result<String, String> {
|
||||
let root = ctx.state.get_project_root()?;
|
||||
let state =
|
||||
WizardState::load(&root).ok_or("No wizard active. Run `huskies init` to begin setup.")?;
|
||||
Ok(format_wizard_state(&state))
|
||||
svc::status(&root).map_err(|e| e.to_string())
|
||||
}
|
||||
|
||||
/// `wizard_generate` — mark the current step as generating or stage content.
|
||||
@@ -118,161 +85,8 @@ pub(super) fn tool_wizard_status(ctx: &AppContext) -> Result<String, String> {
|
||||
/// until `wizard_confirm` is called.
|
||||
pub(super) fn tool_wizard_generate(args: &Value, ctx: &AppContext) -> Result<String, String> {
|
||||
let root = ctx.state.get_project_root()?;
|
||||
let mut state = WizardState::load(&root).ok_or("No wizard active.")?;
|
||||
|
||||
if state.completed {
|
||||
return Ok("Wizard is already complete.".to_string());
|
||||
}
|
||||
|
||||
let current_idx = state.current_step_index();
|
||||
let step = state.steps[current_idx].step;
|
||||
|
||||
// If content is provided, stage it for confirmation.
|
||||
if let Some(content) = args.get("content").and_then(|v| v.as_str()) {
|
||||
state.set_step_status(
|
||||
step,
|
||||
StepStatus::AwaitingConfirmation,
|
||||
Some(content.to_string()),
|
||||
);
|
||||
state
|
||||
.save(&root)
|
||||
.map_err(|e| format!("Failed to save wizard state: {e}"))?;
|
||||
return Ok(format!(
|
||||
"Content staged for '{}'. Run `wizard_confirm` to write it to disk, `wizard_retry` to regenerate, or `wizard_skip` to skip.",
|
||||
step.label()
|
||||
));
|
||||
}
|
||||
|
||||
// No content provided — mark as generating and return a hint.
|
||||
state.set_step_status(step, StepStatus::Generating, None);
|
||||
state
|
||||
.save(&root)
|
||||
.map_err(|e| format!("Failed to save wizard state: {e}"))?;
|
||||
|
||||
let hint = generation_hint(step, &root);
|
||||
let slug = step_slug(step);
|
||||
|
||||
Ok(format!(
|
||||
"Step '{}' marked as generating.\n\n{hint}\n\nOnce you have the content, call `wizard_generate` again with a `content` argument (or PUT /wizard/step/{slug}/content). Then call `wizard_confirm` to write it to disk.",
|
||||
step.label(),
|
||||
))
|
||||
}
|
||||
|
||||
/// Return true if the project directory has no meaningful source files.
|
||||
pub(crate) fn is_bare_project(project_root: &Path) -> bool {
|
||||
std::fs::read_dir(project_root)
|
||||
.ok()
|
||||
.map(|entries| {
|
||||
let names: Vec<String> = entries
|
||||
.filter_map(|e| e.ok())
|
||||
.map(|e| e.file_name().to_string_lossy().to_string())
|
||||
.collect();
|
||||
// A bare project only has huskies scaffolding and no real code
|
||||
names.iter().all(|n| {
|
||||
n.starts_with('.')
|
||||
|| n == "CLAUDE.md"
|
||||
|| n == "LICENSE"
|
||||
|| n == "README.md"
|
||||
|| n == "script"
|
||||
})
|
||||
})
|
||||
.unwrap_or(true)
|
||||
}
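The same rule can be exercised directly against the pure `state_machine` helper that the new wrapper delegates to. The exact signature (a slice of directory-entry names) is an assumption based on the call shown above:

```rust
#[test]
fn bare_project_examples() {
    let scaffold_only: Vec<String> = [".git", ".huskies", "CLAUDE.md", "README.md", "script"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    let with_code: Vec<String> = [".git", "Cargo.toml", "src"]
        .iter()
        .map(|s| s.to_string())
        .collect();

    // Only dotfiles and known scaffolding files → still "bare".
    assert!(state_machine::is_bare_project(&scaffold_only));
    // Any other entry (source code, manifests) makes the project non-bare.
    assert!(!state_machine::is_bare_project(&with_code));
}
```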
|
||||
|
||||
/// Return a generation hint for a step based on the project root.
|
||||
pub(crate) fn generation_hint(step: WizardStep, project_root: &Path) -> String {
|
||||
let bare = is_bare_project(project_root);
|
||||
|
||||
match step {
|
||||
WizardStep::Context => {
|
||||
if bare {
|
||||
"This is a bare project with no existing code. Ask the user what they want \
|
||||
to build — the project's purpose, goals, target users, and key features. \
|
||||
Then generate `.huskies/specs/00_CONTEXT.md` from their answers covering:\n\
|
||||
- High-level goal of the project\n\
|
||||
- Core features\n\
|
||||
- Domain concepts and entities\n\
|
||||
- Glossary of abbreviations and technical terms"
|
||||
.to_string()
|
||||
} else {
|
||||
"Read the project source tree and generate a `.huskies/specs/00_CONTEXT.md` describing:\n\
|
||||
- High-level goal of the project\n\
|
||||
- Core features\n\
|
||||
- Domain concepts and entities\n\
|
||||
- Glossary of abbreviations and technical terms".to_string()
|
||||
}
|
||||
}
|
||||
WizardStep::Stack => {
|
||||
if bare {
|
||||
"This is a bare project with no existing code. Ask the user what language, \
|
||||
frameworks, and tools they plan to use. Then generate `.huskies/specs/tech/STACK.md` \
|
||||
from their answers covering:\n\
|
||||
- Language, frameworks, and runtimes\n\
|
||||
- Coding standards and linting rules\n\
|
||||
- Quality gates (commands that must pass before merging)\n\
|
||||
- Approved libraries and their purpose".to_string()
|
||||
} else {
|
||||
"Read the project source tree and generate a `.huskies/specs/tech/STACK.md` describing:\n\
|
||||
- Language, frameworks, and runtimes\n\
|
||||
- Coding standards and linting rules\n\
|
||||
- Quality gates (commands that must pass before merging)\n\
|
||||
- Approved libraries and their purpose".to_string()
|
||||
}
|
||||
}
|
||||
WizardStep::TestScript => {
|
||||
if bare {
|
||||
"This is a bare project with no existing code. Read the STACK.md generated \
|
||||
in the previous step (or ask the user about their stack if it was skipped) \
|
||||
and generate a `script/test` shell script (#!/usr/bin/env bash, set -euo pipefail) \
|
||||
with appropriate test commands for their chosen language and framework."
|
||||
.to_string()
|
||||
} else {
|
||||
let has_cargo = project_root.join("Cargo.toml").exists();
|
||||
let has_pkg = project_root.join("package.json").exists();
|
||||
let has_pnpm = project_root.join("pnpm-lock.yaml").exists();
|
||||
let mut cmds = Vec::new();
|
||||
if has_cargo {
|
||||
cmds.push("cargo nextest run");
|
||||
}
|
||||
if has_pkg {
|
||||
cmds.push(if has_pnpm { "pnpm test" } else { "npm test" });
|
||||
}
|
||||
if cmds.is_empty() {
|
||||
"Generate a `script/test` shell script (#!/usr/bin/env bash, set -euo pipefail) that runs the project's test suite.".to_string()
|
||||
} else {
|
||||
format!(
|
||||
"Generate a `script/test` shell script (#!/usr/bin/env bash, set -euo pipefail) that runs: {}",
|
||||
cmds.join(", ")
|
||||
)
|
||||
}
|
||||
}
|
||||
}
|
||||
WizardStep::ReleaseScript => {
|
||||
if bare {
|
||||
"This is a bare project with no existing code. Read the STACK.md generated \
|
||||
in the previous step (or ask the user about their stack if it was skipped) \
|
||||
and generate a `script/release` shell script (#!/usr/bin/env bash, set -euo pipefail) \
|
||||
with appropriate build/release commands for their chosen language and framework."
|
||||
.to_string()
|
||||
} else {
|
||||
"Generate a `script/release` shell script (#!/usr/bin/env bash, set -euo pipefail) that builds and releases the project (e.g. `cargo build --release` or `npm run build`).".to_string()
|
||||
}
|
||||
}
|
||||
WizardStep::TestCoverage => {
|
||||
if bare {
|
||||
"This is a bare project with no existing code. Read the STACK.md generated \
|
||||
in the previous step (or ask the user about their stack if it was skipped) \
|
||||
and generate a `script/test_coverage` shell script (#!/usr/bin/env bash, set -euo pipefail) \
|
||||
with appropriate test coverage commands for their chosen language and framework."
|
||||
.to_string()
|
||||
} else {
|
||||
"Generate a `script/test_coverage` shell script (#!/usr/bin/env bash, set -euo pipefail) that generates a test coverage report (e.g. `cargo llvm-cov nextest` or `npm run coverage`).".to_string()
|
||||
}
|
||||
}
|
||||
WizardStep::Scaffold => {
|
||||
"Scaffold step is handled automatically by `huskies init`.".to_string()
|
||||
}
|
||||
}
|
||||
let content = args.get("content").and_then(|v| v.as_str());
|
||||
svc::generate(&root, content).map_err(|e| e.to_string())
|
||||
}
|
||||
|
||||
/// `wizard_confirm` — confirm the current step and write its content to disk.
|
||||
@@ -283,111 +97,20 @@ pub(crate) fn generation_hint(step: WizardStep, project_root: &Path) -> String {
|
||||
/// advances to the next pending step.
|
||||
pub(super) fn tool_wizard_confirm(ctx: &AppContext) -> Result<String, String> {
|
||||
let root = ctx.state.get_project_root()?;
|
||||
let mut state = WizardState::load(&root).ok_or("No wizard active.")?;
|
||||
|
||||
if state.completed {
|
||||
return Ok("Wizard is already complete.".to_string());
|
||||
}
|
||||
|
||||
let current_idx = state.current_step_index();
|
||||
let step = state.steps[current_idx].step;
|
||||
let content = state.steps[current_idx].content.clone();
|
||||
|
||||
// Write content to disk (only if a file path exists and the file is absent).
|
||||
let write_msg = if let (Some(c), Some(ref path)) = (&content, step_output_path(&root, step)) {
|
||||
let executable = is_script_step(step);
|
||||
match write_if_missing(path, c, executable)? {
|
||||
true => format!(" File written: `{}`.", path.display()),
|
||||
false => format!(" File `{}` already exists — skipped.", path.display()),
|
||||
}
|
||||
} else {
|
||||
String::new()
|
||||
};
|
||||
|
||||
state
|
||||
.confirm_step(step)
|
||||
.map_err(|e| format!("Cannot confirm step: {e}"))?;
|
||||
state
|
||||
.save(&root)
|
||||
.map_err(|e| format!("Failed to save wizard state: {e}"))?;
|
||||
|
||||
let next_idx = state.current_step_index();
|
||||
if state.completed {
|
||||
Ok(format!(
|
||||
"Step '{}' confirmed.{write_msg}\n\nSetup wizard complete! All steps done.",
|
||||
step.label()
|
||||
))
|
||||
} else {
|
||||
let next = &state.steps[next_idx];
|
||||
Ok(format!(
|
||||
"Step '{}' confirmed.{write_msg}\n\nNext: {} — run `wizard_generate` to begin.",
|
||||
step.label(),
|
||||
next.step.label()
|
||||
))
|
||||
}
|
||||
svc::confirm(&root).map_err(|e| e.to_string())
|
||||
}
|
||||
|
||||
/// `wizard_skip` — skip the current step without writing any file.
|
||||
pub(super) fn tool_wizard_skip(ctx: &AppContext) -> Result<String, String> {
|
||||
let root = ctx.state.get_project_root()?;
|
||||
let mut state = WizardState::load(&root).ok_or("No wizard active.")?;
|
||||
|
||||
if state.completed {
|
||||
return Ok("Wizard is already complete.".to_string());
|
||||
}
|
||||
|
||||
let current_idx = state.current_step_index();
|
||||
let step = state.steps[current_idx].step;
|
||||
|
||||
state
|
||||
.skip_step(step)
|
||||
.map_err(|e| format!("Cannot skip step: {e}"))?;
|
||||
state
|
||||
.save(&root)
|
||||
.map_err(|e| format!("Failed to save wizard state: {e}"))?;
|
||||
|
||||
let next_idx = state.current_step_index();
|
||||
if state.completed {
|
||||
Ok(format!(
|
||||
"Step '{}' skipped. Setup wizard complete!",
|
||||
step.label()
|
||||
))
|
||||
} else {
|
||||
let next = &state.steps[next_idx];
|
||||
Ok(format!(
|
||||
"Step '{}' skipped.\n\nNext: {} — run `wizard_generate` to begin.",
|
||||
step.label(),
|
||||
next.step.label()
|
||||
))
|
||||
}
|
||||
svc::skip(&root).map_err(|e| e.to_string())
|
||||
}
|
||||
|
||||
/// `wizard_retry` — discard staged content and reset the current step to
|
||||
/// `Pending` so it can be regenerated.
|
||||
pub(super) fn tool_wizard_retry(ctx: &AppContext) -> Result<String, String> {
|
||||
let root = ctx.state.get_project_root()?;
|
||||
let mut state = WizardState::load(&root).ok_or("No wizard active.")?;
|
||||
|
||||
if state.completed {
|
||||
return Ok("Wizard is already complete.".to_string());
|
||||
}
|
||||
|
||||
let current_idx = state.current_step_index();
|
||||
let step = state.steps[current_idx].step;
|
||||
|
||||
// Clear content and reset to pending.
|
||||
if let Some(s) = state.steps.iter_mut().find(|s| s.step == step) {
|
||||
s.status = StepStatus::Pending;
|
||||
s.content = None;
|
||||
}
|
||||
state
|
||||
.save(&root)
|
||||
.map_err(|e| format!("Failed to save wizard state: {e}"))?;
|
||||
|
||||
Ok(format!(
|
||||
"Step '{}' reset to pending. Run `wizard_generate` to regenerate content.",
|
||||
step.label()
|
||||
))
|
||||
svc::retry(&root).map_err(|e| e.to_string())
|
||||
}
|
||||
|
||||
// ── tests ─────────────────────────────────────────────────────────────────────
|
||||
@@ -396,6 +119,7 @@ pub(super) fn tool_wizard_retry(ctx: &AppContext) -> Result<String, String> {
|
||||
mod tests {
|
||||
use super::*;
|
||||
use crate::http::context::AppContext;
|
||||
use crate::io::wizard::{StepStatus, WizardState, format_wizard_state};
|
||||
use tempfile::TempDir;
|
||||
|
||||
fn setup(dir: &TempDir) -> AppContext {
|
||||
@@ -473,13 +197,13 @@ mod tests {
|
||||
fn wizard_confirm_does_not_overwrite_existing_file() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let ctx = setup(&dir);
|
||||
// Pre-create the specs directory and file.
|
||||
// Pre-create the specs directory and file with real (non-template) content.
|
||||
let specs_dir = dir.path().join(".huskies").join("specs");
|
||||
std::fs::create_dir_all(&specs_dir).unwrap();
|
||||
let context_path = specs_dir.join("00_CONTEXT.md");
|
||||
std::fs::write(&context_path, "original content").unwrap();
|
||||
|
||||
// Stage and confirm — existing file should NOT be overwritten.
|
||||
// Stage and confirm — existing real file should NOT be overwritten.
|
||||
tool_wizard_generate(&serde_json::json!({"content": "new content"}), &ctx).unwrap();
|
||||
let result = tool_wizard_confirm(&ctx).unwrap();
|
||||
assert!(result.contains("already exists"));
|
||||
@@ -489,6 +213,34 @@ mod tests {
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn wizard_confirm_overwrites_scaffold_template_file() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let ctx = setup(&dir);
|
||||
// Pre-create the file with scaffold template placeholder content.
|
||||
let specs_dir = dir.path().join(".huskies").join("specs");
|
||||
std::fs::create_dir_all(&specs_dir).unwrap();
|
||||
let context_path = specs_dir.join("00_CONTEXT.md");
|
||||
std::fs::write(
|
||||
&context_path,
|
||||
"<!-- huskies:scaffold-template -->\n# Project Context\n\nTODO: Describe...",
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
// Stage and confirm — template placeholder should be overwritten with generated content.
|
||||
tool_wizard_generate(
|
||||
&serde_json::json!({"content": "# My Real Project\n\nThis is a real project."}),
|
||||
&ctx,
|
||||
)
|
||||
.unwrap();
|
||||
let result = tool_wizard_confirm(&ctx).unwrap();
|
||||
assert!(result.contains("confirmed"));
|
||||
assert_eq!(
|
||||
std::fs::read_to_string(&context_path).unwrap(),
|
||||
"# My Real Project\n\nThis is a real project."
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn wizard_skip_advances_wizard() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
@@ -517,8 +269,8 @@ mod tests {
|
||||
fn wizard_complete_returns_done_message() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let ctx = setup(&dir);
|
||||
// Skip all remaining steps.
|
||||
for _ in 0..5 {
|
||||
// Skip all remaining steps (scaffold is pre-confirmed, so 7 remaining).
|
||||
for _ in 0..7 {
|
||||
tool_wizard_skip(&ctx).unwrap();
|
||||
}
|
||||
let result = tool_wizard_status(&ctx).unwrap();
|
||||
@@ -629,4 +381,61 @@ mod tests {
|
||||
assert!(hint.contains("cargo nextest"));
|
||||
assert!(!hint.contains("bare project"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn generation_hint_bare_build_script_references_stack() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
|
||||
let hint = generation_hint(WizardStep::BuildScript, dir.path());
|
||||
assert!(hint.contains("bare project"));
|
||||
assert!(hint.contains("STACK.md"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn generation_hint_bare_lint_script_references_stack() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
|
||||
let hint = generation_hint(WizardStep::LintScript, dir.path());
|
||||
assert!(hint.contains("bare project"));
|
||||
assert!(hint.contains("STACK.md"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn generation_hint_existing_project_build_script_detects_cargo() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
std::fs::write(dir.path().join("Cargo.toml"), "[package]").unwrap();
|
||||
let hint = generation_hint(WizardStep::BuildScript, dir.path());
|
||||
assert!(hint.contains("cargo build --release"));
|
||||
assert!(!hint.contains("bare project"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn generation_hint_existing_project_lint_script_detects_cargo() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
std::fs::write(dir.path().join("Cargo.toml"), "[package]").unwrap();
|
||||
let hint = generation_hint(WizardStep::LintScript, dir.path());
|
||||
assert!(hint.contains("cargo fmt --all --check"));
|
||||
assert!(hint.contains("cargo clippy -- -D warnings"));
|
||||
assert!(!hint.contains("bare project"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn step_output_path_build_script_returns_script_build() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let path = step_output_path(dir.path(), WizardStep::BuildScript).unwrap();
|
||||
assert!(path.ends_with("script/build"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn step_output_path_lint_script_returns_script_lint() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let path = step_output_path(dir.path(), WizardStep::LintScript).unwrap();
|
||||
assert!(path.ends_with("script/lint"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn is_script_step_includes_build_and_lint() {
|
||||
assert!(is_script_step(WizardStep::BuildScript));
|
||||
assert!(is_script_step(WizardStep::LintScript));
|
||||
}
|
||||
}
|
||||
|
||||
+17 -2
@@ -7,6 +7,7 @@ pub mod bot_command;
pub mod bot_config;
pub mod chat;
pub mod context;
pub mod events;
pub mod health;
pub mod io;
pub mod mcp;
@@ -17,6 +18,7 @@ pub mod settings;
pub(crate) mod test_helpers;
pub mod workflow;

pub mod gateway;
pub mod project;
pub mod wizard;
pub mod ws;
@@ -68,6 +70,7 @@ pub fn build_routes(
whatsapp_ctx: Option<Arc<WhatsAppWebhookContext>>,
slack_ctx: Option<Arc<SlackWebhookContext>>,
port: u16,
event_buffer: Option<events::EventBuffer>,
) -> impl poem::Endpoint {
let ctx_arc = std::sync::Arc::new(ctx);

@@ -103,6 +106,10 @@ pub fn build_routes(
.at("/", get(assets::embedded_index))
.at("/*path", get(assets::embedded_file));

if let Some(buf) = event_buffer {
route = route.at("/api/events", get(events::events_handler).data(buf));
}

if let Some(wa_ctx) = whatsapp_ctx {
route = route.at(
"/webhook/whatsapp",
@@ -302,7 +309,7 @@ mod tests {
fn build_routes_constructs_without_panic() {
let tmp = tempfile::tempdir().unwrap();
let ctx = context::AppContext::new_test(tmp.path().to_path_buf());
let _endpoint = build_routes(ctx, None, None, 3001);
let _endpoint = build_routes(ctx, None, None, 3001, None);
}

#[test]
@@ -311,6 +318,14 @@ mod tests {
// ensuring the port parameter flows through to OAuthState.
let tmp = tempfile::tempdir().unwrap();
let ctx = context::AppContext::new_test(tmp.path().to_path_buf());
let _endpoint = build_routes(ctx, None, None, 9999);
let _endpoint = build_routes(ctx, None, None, 9999, None);
}

#[test]
fn build_routes_with_event_buffer_constructs_without_panic() {
let tmp = tempfile::tempdir().unwrap();
let ctx = context::AppContext::new_test(tmp.path().to_path_buf());
let buf = events::EventBuffer::new();
let _endpoint = build_routes(ctx, None, None, 3001, Some(buf));
}
}

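A minimal call-site sketch of the new signature, assuming the test-only `AppContext::new_test` constructor used in the tests above; it is illustrative, not part of the diff itself:

```rust
// Sketch only; mirrors the updated tests above. `AppContext::new_test` is the
// test-only constructor used there, not a production API.
let tmp = tempfile::tempdir().unwrap();
let ctx = context::AppContext::new_test(tmp.path().to_path_buf());
// `None` keeps the previous behaviour; `Some(buf)` additionally registers
// GET /api/events backed by the buffer.
let buf = events::EventBuffer::new();
let _endpoint = build_routes(ctx, None, None, 3001, Some(buf));
```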
+77 -283
@@ -1,102 +1,23 @@
//! OAuth endpoints — Anthropic OAuth callback and token exchange flow.
use crate::llm::oauth;
//! OAuth endpoints — thin HTTP adapters over `service::oauth`.
//!
//! Business logic lives in `service::oauth`. These handlers only:
//! 1. Extract parameters from the HTTP request.
//! 2. Call the service layer.
//! 3. Map service errors to HTTP responses.
use crate::service::oauth as svc;
use crate::slog;
use poem::handler;
use poem::http::StatusCode;
use poem::web::{Data, Query, Redirect};
use serde::Deserialize;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::sync::Arc;

/// Anthropic OAuth configuration.
|
||||
const CLIENT_ID: &str = "9d1c250a-e61b-44d9-88ed-5944d1962f5e";
|
||||
/// Claude.ai authorize URL (for Max/Pro subscriptions).
|
||||
const AUTHORIZE_URL: &str = "https://claude.com/cai/oauth/authorize";
|
||||
const TOKEN_ENDPOINT: &str = "https://platform.claude.com/v1/oauth/token";
|
||||
const SCOPES: &str =
|
||||
"user:inference user:profile user:mcp_servers user:sessions:claude_code user:file_upload";
|
||||
|
||||
/// In-memory store for pending PKCE flows, keyed by state parameter.
|
||||
#[derive(Clone)]
|
||||
pub struct OAuthState {
|
||||
/// Maps state → (code_verifier, redirect_uri)
|
||||
pending: Arc<Mutex<HashMap<String, PendingFlow>>>,
|
||||
/// The port the server is listening on (for building redirect_uri).
|
||||
port: u16,
|
||||
}
|
||||
|
||||
struct PendingFlow {
|
||||
code_verifier: String,
|
||||
redirect_uri: String,
|
||||
}
|
||||
|
||||
impl OAuthState {
|
||||
pub fn new(port: u16) -> Self {
|
||||
Self {
|
||||
pending: Arc::new(Mutex::new(HashMap::new())),
|
||||
port,
|
||||
}
|
||||
}
|
||||
|
||||
fn callback_url(&self) -> String {
|
||||
format!("http://localhost:{}/callback", self.port)
|
||||
}
|
||||
}
|
||||
|
||||
/// Generate a random alphanumeric string of the given length.
|
||||
fn random_string(len: usize) -> String {
|
||||
use std::collections::hash_map::RandomState;
|
||||
use std::hash::{BuildHasher, Hasher};
|
||||
let mut s = String::with_capacity(len);
|
||||
let chars = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
|
||||
for _ in 0..len {
|
||||
let hasher = RandomState::new().build_hasher();
|
||||
let idx = hasher.finish() as usize % chars.len();
|
||||
s.push(chars[idx] as char);
|
||||
}
|
||||
s
|
||||
}
|
||||
|
||||
/// Compute the S256 PKCE code challenge from a code verifier.
|
||||
fn compute_code_challenge(verifier: &str) -> String {
|
||||
use sha2::{Digest, Sha256};
|
||||
let hash = Sha256::digest(verifier.as_bytes());
|
||||
base64url_encode(&hash)
|
||||
}
|
||||
|
||||
/// Base64url-encode without padding (RFC 7636).
|
||||
fn base64url_encode(data: &[u8]) -> String {
|
||||
// Standard base64 then convert to base64url
|
||||
const CHARS: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
|
||||
let mut result = String::new();
|
||||
let mut i = 0;
|
||||
while i < data.len() {
|
||||
let b0 = data[i] as u32;
|
||||
let b1 = if i + 1 < data.len() {
|
||||
data[i + 1] as u32
|
||||
} else {
|
||||
0
|
||||
};
|
||||
let b2 = if i + 2 < data.len() {
|
||||
data[i + 2] as u32
|
||||
} else {
|
||||
0
|
||||
};
|
||||
let triple = (b0 << 16) | (b1 << 8) | b2;
|
||||
|
||||
result.push(CHARS[((triple >> 18) & 0x3F) as usize] as char);
|
||||
result.push(CHARS[((triple >> 12) & 0x3F) as usize] as char);
|
||||
if i + 1 < data.len() {
|
||||
result.push(CHARS[((triple >> 6) & 0x3F) as usize] as char);
|
||||
}
|
||||
if i + 2 < data.len() {
|
||||
result.push(CHARS[(triple & 0x3F) as usize] as char);
|
||||
}
|
||||
i += 3;
|
||||
}
|
||||
// Convert to base64url: replace + with -, / with _
|
||||
result.replace('+', "-").replace('/', "_")
|
||||
}
|
||||
// Re-export service types so that existing tests in this file continue to
|
||||
// compile unchanged (they use `use super::*` and call these by name).
|
||||
pub(crate) use svc::OAuthState;
|
||||
// Re-exported for tests only (tests use `use super::*` to call these by name).
|
||||
#[cfg(test)]
|
||||
pub(crate) use svc::pkce::{base64url_encode, compute_code_challenge, random_string};
|
||||
|
||||
/// `GET /oauth/authorize` — Initiates the OAuth flow.
|
||||
///
|
||||
@@ -104,35 +25,11 @@ fn base64url_encode(data: &[u8]) -> String {
|
||||
/// Anthropic's authorization page.
|
||||
#[handler]
|
||||
pub async fn oauth_authorize(state: Data<&Arc<OAuthState>>) -> Redirect {
|
||||
let code_verifier = random_string(128);
|
||||
let code_challenge = compute_code_challenge(&code_verifier);
|
||||
let csrf_state = random_string(32);
|
||||
let redirect_uri = state.callback_url();
|
||||
|
||||
slog!("[oauth] Starting OAuth flow, state={}", csrf_state);
|
||||
|
||||
// Store the pending flow
|
||||
state.pending.lock().unwrap().insert(
|
||||
csrf_state.clone(),
|
||||
PendingFlow {
|
||||
code_verifier,
|
||||
redirect_uri: redirect_uri.clone(),
|
||||
},
|
||||
);
|
||||
|
||||
let authorize_url = format!(
|
||||
"{}?code=true&client_id={}&response_type=code&redirect_uri={}&scope={}&code_challenge={}&code_challenge_method=S256&state={}",
|
||||
AUTHORIZE_URL,
|
||||
CLIENT_ID,
|
||||
percent_encode(&redirect_uri),
|
||||
percent_encode(SCOPES),
|
||||
percent_encode(&code_challenge),
|
||||
percent_encode(&csrf_state),
|
||||
);
|
||||
|
||||
Redirect::temporary(authorize_url)
|
||||
let (_, url) = svc::initiate_flow(&state);
|
||||
Redirect::temporary(url)
|
||||
}
|
||||
|
||||
/// Query parameters received on the OAuth callback URL.
|
||||
#[derive(Deserialize)]
|
||||
pub struct CallbackParams {
|
||||
code: Option<String>,
|
||||
@@ -141,18 +38,6 @@ pub struct CallbackParams {
|
||||
error_description: Option<String>,
|
||||
}
|
||||
|
||||
/// Response from the Anthropic OAuth token endpoint.
|
||||
#[derive(Deserialize)]
|
||||
struct TokenResponse {
|
||||
access_token: String,
|
||||
refresh_token: Option<String>,
|
||||
expires_in: u64,
|
||||
#[allow(dead_code)]
|
||||
token_type: Option<String>,
|
||||
#[allow(dead_code)]
|
||||
scope: Option<String>,
|
||||
}
|
||||
|
||||
/// `GET /oauth/callback` — Handles the OAuth redirect from Anthropic.
|
||||
///
|
||||
/// Exchanges the authorization code for tokens and writes them to
|
||||
@@ -162,7 +47,7 @@ pub async fn oauth_callback(
|
||||
state: Data<&Arc<OAuthState>>,
|
||||
Query(params): Query<CallbackParams>,
|
||||
) -> poem::Response {
|
||||
// Handle errors from Anthropic
|
||||
// Handle provider-side errors (e.g. user denied access).
|
||||
if let Some(err) = ¶ms.error {
|
||||
let desc = params
|
||||
.error_description
|
||||
@@ -177,7 +62,7 @@ pub async fn oauth_callback(
|
||||
}
|
||||
|
||||
let code = match ¶ms.code {
|
||||
Some(c) => c,
|
||||
Some(c) => c.clone(),
|
||||
None => {
|
||||
return html_response(
|
||||
StatusCode::BAD_REQUEST,
|
||||
@@ -188,7 +73,7 @@ pub async fn oauth_callback(
|
||||
};
|
||||
|
||||
let csrf_state = match ¶ms.state {
|
||||
Some(s) => s,
|
||||
Some(s) => s.clone(),
|
||||
None => {
|
||||
return html_response(
|
||||
StatusCode::BAD_REQUEST,
|
||||
@@ -198,163 +83,72 @@ pub async fn oauth_callback(
|
||||
}
|
||||
};
|
||||
|
||||
// Look up and remove the pending flow
|
||||
let pending = state.pending.lock().unwrap().remove(csrf_state);
|
||||
let flow = match pending {
|
||||
Some(f) => f,
|
||||
None => {
|
||||
slog!("[oauth] Unknown state parameter: {}", csrf_state);
|
||||
return html_response(
|
||||
StatusCode::BAD_REQUEST,
|
||||
"Invalid State",
|
||||
"Unknown or expired state parameter. Please try logging in again.",
|
||||
);
|
||||
}
|
||||
};
|
||||
|
||||
slog!("[oauth] Received callback, exchanging code for tokens");
|
||||
|
||||
// Exchange the authorization code for tokens
|
||||
let client = reqwest::Client::new();
|
||||
let resp = client
|
||||
.post(TOKEN_ENDPOINT)
|
||||
.header("Content-Type", "application/json")
|
||||
.json(&serde_json::json!({
|
||||
"grant_type": "authorization_code",
|
||||
"code": code,
|
||||
"client_id": CLIENT_ID,
|
||||
"redirect_uri": &flow.redirect_uri,
|
||||
"code_verifier": &flow.code_verifier,
|
||||
"state": csrf_state,
|
||||
}))
|
||||
.send()
|
||||
.await;
|
||||
|
||||
let resp = match resp {
|
||||
Ok(r) => r,
|
||||
Err(e) => {
|
||||
slog!("[oauth] Token exchange request failed: {}", e);
|
||||
return html_response(
|
||||
StatusCode::INTERNAL_SERVER_ERROR,
|
||||
"Token Exchange Failed",
|
||||
&format!("Failed to contact Anthropic: {e}"),
|
||||
);
|
||||
}
|
||||
};
|
||||
|
||||
let status = resp.status();
|
||||
let body = resp.text().await.unwrap_or_default();
|
||||
|
||||
slog!(
|
||||
"[oauth] Token exchange response (HTTP {}): {}",
|
||||
status,
|
||||
body
|
||||
);
|
||||
|
||||
if !status.is_success() {
|
||||
return html_response(
|
||||
StatusCode::INTERNAL_SERVER_ERROR,
|
||||
"Token Exchange Failed",
|
||||
&format!("Anthropic returned HTTP {status}. Please try again."),
|
||||
);
|
||||
match svc::exchange_code(&state, &code, &csrf_state).await {
|
||||
Ok(()) => html_response(
|
||||
StatusCode::OK,
|
||||
"Authenticated!",
|
||||
"Claude OAuth login successful. You can close this tab and return to Huskies.",
|
||||
),
|
||||
Err(e) => map_service_error(e),
|
||||
}
|
||||
|
||||
let token_resp: TokenResponse = match serde_json::from_str(&body) {
|
||||
Ok(t) => t,
|
||||
Err(e) => {
|
||||
slog!("[oauth] Failed to parse token response: {}", e);
|
||||
return html_response(
|
||||
StatusCode::INTERNAL_SERVER_ERROR,
|
||||
"Token Parse Failed",
|
||||
"Received an unexpected response from Anthropic.",
|
||||
);
|
||||
}
|
||||
};
|
||||
|
||||
let now_ms = std::time::SystemTime::now()
|
||||
.duration_since(std::time::UNIX_EPOCH)
|
||||
.map(|d| d.as_millis() as u64)
|
||||
.unwrap_or(0);
|
||||
|
||||
let creds = oauth::CredentialsFile {
|
||||
claude_ai_oauth: oauth::OAuthCredentials {
|
||||
access_token: token_resp.access_token,
|
||||
refresh_token: token_resp.refresh_token.unwrap_or_default(),
|
||||
expires_at: now_ms + (token_resp.expires_in * 1000),
|
||||
scopes: SCOPES.split(' ').map(|s| s.to_string()).collect(),
|
||||
subscription_type: None,
|
||||
rate_limit_tier: None,
|
||||
},
|
||||
};
|
||||
|
||||
if let Err(e) = oauth::write_credentials(&creds) {
|
||||
slog!("[oauth] Failed to write credentials: {}", e);
|
||||
return html_response(
|
||||
StatusCode::INTERNAL_SERVER_ERROR,
|
||||
"Credential Write Failed",
|
||||
&format!("Tokens received but failed to save: {e}"),
|
||||
);
|
||||
}
|
||||
|
||||
slog!("[oauth] Successfully authenticated and saved credentials");
|
||||
|
||||
html_response(
|
||||
StatusCode::OK,
|
||||
"Authenticated!",
|
||||
"Claude OAuth login successful. You can close this tab and return to Huskies.",
|
||||
)
|
||||
}
|
||||
|
||||
/// Check whether valid (non-expired) OAuth credentials exist.
|
||||
/// `GET /oauth/status` — Check whether valid (non-expired) OAuth credentials exist.
|
||||
#[handler]
|
||||
pub async fn oauth_status() -> poem::Response {
|
||||
match oauth::read_credentials() {
|
||||
Ok(creds) => {
|
||||
let now_ms = std::time::SystemTime::now()
|
||||
.duration_since(std::time::UNIX_EPOCH)
|
||||
.map(|d| d.as_millis() as u64)
|
||||
.unwrap_or(0);
|
||||
let expired = now_ms > creds.claude_ai_oauth.expires_at;
|
||||
let body = serde_json::json!({
|
||||
"authenticated": true,
|
||||
"expired": expired,
|
||||
"expires_at": creds.claude_ai_oauth.expires_at,
|
||||
"has_refresh_token": !creds.claude_ai_oauth.refresh_token.is_empty(),
|
||||
});
|
||||
poem::Response::builder()
|
||||
.status(StatusCode::OK)
|
||||
.header("Content-Type", "application/json")
|
||||
.body(body.to_string())
|
||||
}
|
||||
Err(_) => {
|
||||
let body = serde_json::json!({
|
||||
"authenticated": false,
|
||||
"expired": false,
|
||||
"expires_at": 0,
|
||||
"has_refresh_token": false,
|
||||
});
|
||||
poem::Response::builder()
|
||||
.status(StatusCode::OK)
|
||||
.header("Content-Type", "application/json")
|
||||
.body(body.to_string())
|
||||
}
|
||||
}
|
||||
let status = svc::check_status();
|
||||
let body = serde_json::json!({
|
||||
"authenticated": status.authenticated,
|
||||
"expired": status.expired,
|
||||
"expires_at": status.expires_at,
|
||||
"has_refresh_token": status.has_refresh_token,
|
||||
});
|
||||
poem::Response::builder()
|
||||
.status(StatusCode::OK)
|
||||
.header("Content-Type", "application/json")
|
||||
.body(body.to_string())
|
||||
}
|
||||
|
||||
/// Percent-encode a string for use in URL query parameters.
|
||||
fn percent_encode(input: &str) -> String {
|
||||
let mut encoded = String::with_capacity(input.len() * 3);
|
||||
for byte in input.bytes() {
|
||||
match byte {
|
||||
b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
|
||||
encoded.push(byte as char);
|
||||
}
|
||||
_ => {
|
||||
encoded.push_str(&format!("%{byte:02X}"));
|
||||
}
|
||||
// ── Private helpers ───────────────────────────────────────────────────────────
|
||||
|
||||
/// Map a service-layer `Error` to an HTML HTTP response.
|
||||
fn map_service_error(e: svc::Error) -> poem::Response {
|
||||
use svc::Error;
|
||||
match e {
|
||||
Error::MissingCode => html_response(
|
||||
StatusCode::BAD_REQUEST,
|
||||
"Missing Code",
|
||||
"No authorization code received.",
|
||||
),
|
||||
Error::MissingState => html_response(
|
||||
StatusCode::BAD_REQUEST,
|
||||
"Missing State",
|
||||
"No state parameter received.",
|
||||
),
|
||||
Error::InvalidState(msg) => html_response(StatusCode::BAD_REQUEST, "Invalid State", &msg),
|
||||
Error::AuthorizationDenied(msg) => {
|
||||
html_response(StatusCode::BAD_REQUEST, "Authentication Failed", &msg)
|
||||
}
|
||||
Error::InvalidGrant(msg) => {
|
||||
html_response(StatusCode::BAD_REQUEST, "Token Exchange Failed", &msg)
|
||||
}
|
||||
Error::Network(msg) => html_response(
|
||||
StatusCode::INTERNAL_SERVER_ERROR,
|
||||
"Token Exchange Failed",
|
||||
&msg,
|
||||
),
|
||||
Error::TokenExpired(msg) => html_response(StatusCode::UNAUTHORIZED, "Token Expired", &msg),
|
||||
Error::TokenStorage(msg) => html_response(
|
||||
StatusCode::INTERNAL_SERVER_ERROR,
|
||||
"Credential Write Failed",
|
||||
&msg,
|
||||
),
|
||||
Error::Parse(msg) => html_response(
|
||||
StatusCode::INTERNAL_SERVER_ERROR,
|
||||
"Token Parse Failed",
|
||||
&msg,
|
||||
),
|
||||
}
|
||||
encoded
|
||||
}
|
||||
|
||||
fn html_response(status: StatusCode, title: &str, message: &str) -> poem::Response {
|
||||
|
||||
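The adapter shape described by the new module comment (extract parameters, call the service, map errors) boils down to roughly the sketch below. The `svc::*`, `CallbackParams`, `OAuthState`, `html_response`, and `map_service_error` names come from the diff above; the rest is illustrative, not a verbatim handler from this change.

```rust
// Illustrative shape of a thin OAuth adapter; not a verbatim copy of the
// handlers in this diff.
#[handler]
pub async fn oauth_callback_sketch(
    state: Data<&Arc<OAuthState>>,
    Query(params): Query<CallbackParams>,
) -> poem::Response {
    // 1. Extract parameters from the HTTP request.
    let (code, csrf_state) = match (&params.code, &params.state) {
        (Some(c), Some(s)) => (c.clone(), s.clone()),
        _ => {
            return html_response(
                StatusCode::BAD_REQUEST,
                "Missing Parameters",
                "No authorization code or state received.",
            );
        }
    };
    // 2. Call the service layer. 3. Map service errors to HTTP responses.
    match svc::exchange_code(&state, &code, &csrf_state).await {
        Ok(()) => html_response(
            StatusCode::OK,
            "Authenticated!",
            "Claude OAuth login successful. You can close this tab and return to Huskies.",
        ),
        Err(e) => map_service_error(e),
    }
}
```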
+24 -10
@@ -1,6 +1,7 @@
//! HTTP project endpoints — REST API for project initialization and context management.
use crate::http::context::{AppContext, OpenApiResult, bad_request};
use crate::io::fs;
//! HTTP project endpoints — thin adapters over `service::project`.
use crate::http::context::{AppContext, OpenApiResult, bad_request, not_found};
use crate::service::project::{self as svc, Error as ProjectError};
use poem::http::StatusCode;
use poem_openapi::{Object, OpenApi, Tags, payload::Json};
use serde::Deserialize;
use std::sync::Arc;
@@ -15,6 +16,17 @@ struct PathPayload {
path: String,
}

/// Map a typed [`ProjectError`] to a `poem::Error` with the appropriate HTTP status.
fn map_project_error(e: ProjectError) -> poem::Error {
match e {
ProjectError::PathNotFound(msg) => not_found(msg),
ProjectError::NotADirectory(msg) => bad_request(msg),
ProjectError::Internal(msg) => {
poem::Error::from_string(msg, StatusCode::INTERNAL_SERVER_ERROR)
}
}
}

pub struct ProjectApi {
pub ctx: Arc<AppContext>,
}
@@ -26,8 +38,8 @@ impl ProjectApi {
/// Returns null when no project is open.
#[oai(path = "/project", method = "get")]
async fn get_current_project(&self) -> OpenApiResult<Json<Option<String>>> {
let result = fs::get_current_project(&self.ctx.state, self.ctx.store.as_ref())
.map_err(bad_request)?;
let result = svc::get_current_project(&self.ctx.state, self.ctx.store.as_ref())
.map_err(map_project_error)?;
Ok(Json(result))
}

@@ -36,14 +48,14 @@ impl ProjectApi {
/// Persists the selected path for later sessions.
#[oai(path = "/project", method = "post")]
async fn open_project(&self, payload: Json<PathPayload>) -> OpenApiResult<Json<String>> {
let confirmed = fs::open_project(
let confirmed = svc::open_project(
payload.0.path,
&self.ctx.state,
self.ctx.store.as_ref(),
self.ctx.agents.port(),
)
.await
.map_err(bad_request)?;
.map_err(map_project_error)?;
Ok(Json(confirmed))
}

@@ -55,21 +67,23 @@ impl ProjectApi {
"[MERGE-DEBUG] DELETE /project called! \
Backtrace: this is the only code path that clears project_root."
);
fs::close_project(&self.ctx.state, self.ctx.store.as_ref()).map_err(bad_request)?;
svc::close_project(&self.ctx.state, self.ctx.store.as_ref()).map_err(map_project_error)?;
Ok(Json(true))
}

/// List known projects from the store.
#[oai(path = "/projects", method = "get")]
async fn list_known_projects(&self) -> OpenApiResult<Json<Vec<String>>> {
let projects = fs::get_known_projects(self.ctx.store.as_ref()).map_err(bad_request)?;
let projects =
svc::get_known_projects(self.ctx.store.as_ref()).map_err(map_project_error)?;
Ok(Json(projects))
}

/// Forget a known project path.
#[oai(path = "/projects/forget", method = "post")]
async fn forget_known_project(&self, payload: Json<PathPayload>) -> OpenApiResult<Json<bool>> {
fs::forget_known_project(payload.0.path, self.ctx.store.as_ref()).map_err(bad_request)?;
svc::forget_known_project(payload.0.path, self.ctx.store.as_ref())
.map_err(map_project_error)?;
Ok(Json(true))
}
}

+276
-25
@@ -1,12 +1,39 @@
|
||||
//! HTTP settings endpoints — REST API for user preferences and editor configuration.
|
||||
use crate::http::context::{AppContext, OpenApiResult, bad_request};
|
||||
use crate::service::settings as svc;
|
||||
use crate::store::StoreOps;
|
||||
use poem_openapi::{Object, OpenApi, Tags, param::Query, payload::Json};
|
||||
use serde::Serialize;
|
||||
use serde_json::json;
|
||||
#[cfg(test)]
|
||||
use std::path::Path;
|
||||
use std::sync::Arc;
|
||||
|
||||
const EDITOR_COMMAND_KEY: &str = "editor_command";
|
||||
// Re-export service types so the test module (which does `use super::*`) can
|
||||
// access them without modification.
|
||||
pub use svc::EDITOR_COMMAND_KEY;
|
||||
pub use svc::ProjectSettings;
|
||||
#[cfg(test)]
|
||||
pub use svc::settings_from_config;
|
||||
|
||||
/// Thin wrapper — delegates to [`svc::validate_project_settings`] and maps
|
||||
/// the typed error to `String` so existing tests calling `.unwrap_err()` can
|
||||
/// call `.contains()` directly.
|
||||
fn validate_project_settings(s: &ProjectSettings) -> Result<(), String> {
|
||||
svc::validate_project_settings(s).map_err(|e| e.to_string())
|
||||
}
|
||||
|
||||
/// Thin wrapper — delegates to [`svc::write_project_settings`] and maps the
|
||||
/// typed error to `String` so existing tests can call `.unwrap()` unchanged.
|
||||
#[cfg(test)]
|
||||
fn write_project_settings(project_root: &Path, s: &ProjectSettings) -> Result<(), String> {
|
||||
svc::write_project_settings(project_root, s).map_err(|e| e.to_string())
|
||||
}
|
||||
|
||||
/// Return the configured editor command from the store, or `None` if not set.
|
||||
pub fn get_editor_command_from_store(ctx: &AppContext) -> Option<String> {
|
||||
svc::get_editor_command(&*ctx.store)
|
||||
}
|
||||
|
||||
#[derive(Tags)]
|
||||
enum SettingsTags {
|
||||
@@ -37,11 +64,7 @@ impl SettingsApi {
|
||||
/// Get the configured editor command (e.g. "zed", "code", "cursor"), or null if not set.
|
||||
#[oai(path = "/settings/editor", method = "get")]
|
||||
async fn get_editor(&self) -> OpenApiResult<Json<EditorCommandResponse>> {
|
||||
let editor_command = self
|
||||
.ctx
|
||||
.store
|
||||
.get(EDITOR_COMMAND_KEY)
|
||||
.and_then(|v| v.as_str().map(|s| s.to_string()));
|
||||
let editor_command = get_editor_command_from_store(&self.ctx);
|
||||
Ok(Json(EditorCommandResponse { editor_command }))
|
||||
}
|
||||
|
||||
@@ -55,22 +78,38 @@ impl SettingsApi {
|
||||
path: Query<String>,
|
||||
line: Query<Option<u32>>,
|
||||
) -> OpenApiResult<Json<OpenFileResponse>> {
|
||||
let editor_command = get_editor_command_from_store(&self.ctx)
|
||||
.ok_or_else(|| bad_request("No editor configured".to_string()))?;
|
||||
|
||||
let file_ref = match line.0 {
|
||||
Some(l) => format!("{}:{}", path.0, l),
|
||||
None => path.0.clone(),
|
||||
};
|
||||
|
||||
std::process::Command::new(&editor_command)
|
||||
.arg(&file_ref)
|
||||
.spawn()
|
||||
.map_err(|e| bad_request(format!("Failed to open editor: {e}")))?;
|
||||
|
||||
svc::open_file_in_editor(&*self.ctx.store, &path.0, line.0)
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(OpenFileResponse { success: true }))
|
||||
}
|
||||
|
||||
/// Get current project.toml scalar settings as JSON.
|
||||
#[oai(path = "/settings", method = "get")]
|
||||
async fn get_settings(&self) -> OpenApiResult<Json<ProjectSettings>> {
|
||||
let project_root = self.ctx.state.get_project_root().map_err(bad_request)?;
|
||||
let s =
|
||||
svc::load_project_settings(&project_root).map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(s))
|
||||
}
|
||||
|
||||
/// Update project.toml scalar settings. Array sections (component, agent) are preserved.
|
||||
///
|
||||
/// Returns 400 if the input fails validation (e.g. unknown qa mode, negative max_retries).
|
||||
#[oai(path = "/settings", method = "put")]
|
||||
async fn put_settings(
|
||||
&self,
|
||||
payload: Json<ProjectSettings>,
|
||||
) -> OpenApiResult<Json<ProjectSettings>> {
|
||||
validate_project_settings(&payload.0).map_err(bad_request)?;
|
||||
let project_root = self.ctx.state.get_project_root().map_err(bad_request)?;
|
||||
svc::write_project_settings(&project_root, &payload.0)
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
// Re-read to confirm what was written
|
||||
let s =
|
||||
svc::load_project_settings(&project_root).map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(s))
|
||||
}
|
||||
|
||||
/// Set the preferred editor command (e.g. "zed", "code", "cursor").
|
||||
/// Pass null or empty string to clear the preference.
|
||||
#[oai(path = "/settings/editor", method = "put")]
|
||||
@@ -102,12 +141,6 @@ impl SettingsApi {
|
||||
}
|
||||
}
|
||||
|
||||
pub fn get_editor_command_from_store(ctx: &AppContext) -> Option<String> {
|
||||
ctx.store
|
||||
.get(EDITOR_COMMAND_KEY)
|
||||
.and_then(|v| v.as_str().map(|s| s.to_string()))
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
impl From<std::sync::Arc<AppContext>> for SettingsApi {
|
||||
fn from(ctx: std::sync::Arc<AppContext>) -> Self {
|
||||
@@ -360,4 +393,222 @@ mod tests {
|
||||
.await;
|
||||
assert!(result.is_err());
|
||||
}
|
||||
|
||||
// ── /api/settings GET/PUT ──────────────────────────────────────────────
|
||||
|
||||
fn default_project_settings() -> ProjectSettings {
|
||||
let cfg = crate::config::ProjectConfig::default();
|
||||
settings_from_config(&cfg)
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn get_settings_returns_defaults_when_no_project_toml() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
// Create .huskies dir so project root detection works but no project.toml
|
||||
std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
|
||||
let ctx = AppContext::new_test(dir.path().to_path_buf());
|
||||
let api = SettingsApi { ctx: Arc::new(ctx) };
|
||||
let result = api.get_settings().await.unwrap().0;
|
||||
assert_eq!(result.default_qa, "server");
|
||||
assert_eq!(result.max_retries, 2);
|
||||
assert!(result.rate_limit_notifications);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn put_settings_writes_and_returns_settings() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
|
||||
let ctx = AppContext::new_test(dir.path().to_path_buf());
|
||||
let api = SettingsApi { ctx: Arc::new(ctx) };
|
||||
|
||||
let mut s = default_project_settings();
|
||||
s.default_qa = "agent".to_string();
|
||||
s.max_retries = 5;
|
||||
s.rate_limit_notifications = false;
|
||||
|
||||
let result = api.put_settings(Json(s)).await.unwrap().0;
|
||||
assert_eq!(result.default_qa, "agent");
|
||||
assert_eq!(result.max_retries, 5);
|
||||
assert!(!result.rate_limit_notifications);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn put_settings_preserves_agent_sections() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let huskies_dir = dir.path().join(".huskies");
|
||||
std::fs::create_dir_all(&huskies_dir).unwrap();
|
||||
|
||||
// Write a project.toml with agent sections
|
||||
std::fs::write(
|
||||
huskies_dir.join("project.toml"),
|
||||
r#"
|
||||
[[agent]]
|
||||
name = "coder-1"
|
||||
model = "sonnet"
|
||||
stage = "coder"
|
||||
|
||||
[[component]]
|
||||
name = "server"
|
||||
path = "."
|
||||
"#,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let ctx = AppContext::new_test(dir.path().to_path_buf());
|
||||
let api = SettingsApi { ctx: Arc::new(ctx) };
|
||||
|
||||
let mut s = default_project_settings();
|
||||
s.default_qa = "human".to_string();
|
||||
api.put_settings(Json(s)).await.unwrap();
|
||||
|
||||
// Re-read the file and verify agent/component sections are still there
|
||||
let written = std::fs::read_to_string(huskies_dir.join("project.toml")).unwrap();
|
||||
assert!(
|
||||
written.contains("coder-1"),
|
||||
"agent section should be preserved"
|
||||
);
|
||||
assert!(
|
||||
written.contains("server"),
|
||||
"component section should be preserved"
|
||||
);
|
||||
assert!(written.contains("human"), "new setting should be written");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn put_settings_rejects_invalid_qa_mode() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
|
||||
let ctx = AppContext::new_test(dir.path().to_path_buf());
|
||||
let api = SettingsApi { ctx: Arc::new(ctx) };
|
||||
|
||||
let mut s = default_project_settings();
|
||||
s.default_qa = "invalid_mode".to_string();
|
||||
|
||||
let result = api.put_settings(Json(s)).await;
|
||||
assert!(result.is_err());
|
||||
let err = result.unwrap_err();
|
||||
assert_eq!(err.status(), poem::http::StatusCode::BAD_REQUEST);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn validate_project_settings_accepts_valid_qa_modes() {
|
||||
for mode in &["server", "agent", "human"] {
|
||||
let s = ProjectSettings {
|
||||
default_qa: mode.to_string(),
|
||||
default_coder_model: None,
|
||||
max_coders: None,
|
||||
max_retries: 2,
|
||||
base_branch: None,
|
||||
rate_limit_notifications: true,
|
||||
timezone: None,
|
||||
rendezvous: None,
|
||||
watcher_sweep_interval_secs: 60,
|
||||
watcher_done_retention_secs: 14400,
|
||||
};
|
||||
assert!(
|
||||
validate_project_settings(&s).is_ok(),
|
||||
"qa mode '{mode}' should be valid"
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn validate_project_settings_rejects_unknown_qa_mode() {
|
||||
let s = ProjectSettings {
|
||||
default_qa: "robot".to_string(),
|
||||
default_coder_model: None,
|
||||
max_coders: None,
|
||||
max_retries: 2,
|
||||
base_branch: None,
|
||||
rate_limit_notifications: true,
|
||||
timezone: None,
|
||||
rendezvous: None,
|
||||
watcher_sweep_interval_secs: 60,
|
||||
watcher_done_retention_secs: 14400,
|
||||
};
|
||||
let err = validate_project_settings(&s).unwrap_err();
|
||||
assert!(err.contains("robot"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn write_and_read_project_settings_roundtrip() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
|
||||
|
||||
let s = ProjectSettings {
|
||||
default_qa: "agent".to_string(),
|
||||
default_coder_model: Some("opus".to_string()),
|
||||
max_coders: Some(2),
|
||||
max_retries: 3,
|
||||
base_branch: Some("main".to_string()),
|
||||
rate_limit_notifications: false,
|
||||
timezone: Some("America/New_York".to_string()),
|
||||
rendezvous: Some("ws://host:3001/crdt-sync".to_string()),
|
||||
watcher_sweep_interval_secs: 30,
|
||||
watcher_done_retention_secs: 7200,
|
||||
};
|
||||
|
||||
write_project_settings(dir.path(), &s).unwrap();
|
||||
|
||||
let config = crate::config::ProjectConfig::load(dir.path()).unwrap();
|
||||
let loaded = settings_from_config(&config);
|
||||
|
||||
assert_eq!(loaded.default_qa, "agent");
|
||||
assert_eq!(loaded.default_coder_model, Some("opus".to_string()));
|
||||
assert_eq!(loaded.max_coders, Some(2));
|
||||
assert_eq!(loaded.max_retries, 3);
|
||||
assert_eq!(loaded.base_branch, Some("main".to_string()));
|
||||
assert!(!loaded.rate_limit_notifications);
|
||||
assert_eq!(loaded.timezone, Some("America/New_York".to_string()));
|
||||
assert_eq!(
|
||||
loaded.rendezvous,
|
||||
Some("ws://host:3001/crdt-sync".to_string())
|
||||
);
|
||||
assert_eq!(loaded.watcher_sweep_interval_secs, 30);
|
||||
assert_eq!(loaded.watcher_done_retention_secs, 7200);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn write_project_settings_clears_optional_fields_when_none() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let huskies_dir = dir.path().join(".huskies");
|
||||
std::fs::create_dir_all(&huskies_dir).unwrap();
|
||||
|
||||
// First write with optional fields set
|
||||
let s_with = ProjectSettings {
|
||||
default_qa: "server".to_string(),
|
||||
default_coder_model: Some("sonnet".to_string()),
|
||||
max_coders: Some(3),
|
||||
max_retries: 2,
|
||||
base_branch: Some("master".to_string()),
|
||||
rate_limit_notifications: true,
|
||||
timezone: Some("UTC".to_string()),
|
||||
rendezvous: None,
|
||||
watcher_sweep_interval_secs: 60,
|
||||
watcher_done_retention_secs: 14400,
|
||||
};
|
||||
write_project_settings(dir.path(), &s_with).unwrap();
|
||||
|
||||
// Then write with optional fields cleared
|
||||
let s_clear = ProjectSettings {
|
||||
default_qa: "server".to_string(),
|
||||
default_coder_model: None,
|
||||
max_coders: None,
|
||||
max_retries: 2,
|
||||
base_branch: None,
|
||||
rate_limit_notifications: true,
|
||||
timezone: None,
|
||||
rendezvous: None,
|
||||
watcher_sweep_interval_secs: 60,
|
||||
watcher_done_retention_secs: 14400,
|
||||
};
|
||||
write_project_settings(dir.path(), &s_clear).unwrap();
|
||||
|
||||
let config = crate::config::ProjectConfig::load(dir.path()).unwrap();
|
||||
let loaded = settings_from_config(&config);
|
||||
assert!(loaded.default_coder_model.is_none());
|
||||
assert!(loaded.max_coders.is_none());
|
||||
assert!(loaded.base_branch.is_none());
|
||||
assert!(loaded.timezone.is_none());
|
||||
}
|
||||
}
|
||||
|
||||
+15
-33
@@ -1,6 +1,7 @@
|
||||
//! HTTP wizard endpoints — REST API for the project setup wizard.
|
||||
use crate::http::context::{AppContext, OpenApiResult, bad_request, not_found};
|
||||
use crate::io::wizard::{StepStatus, WizardState, WizardStep};
|
||||
use crate::io::wizard::{WizardState, WizardStep};
|
||||
use crate::service::wizard as svc;
|
||||
use poem_openapi::{Object, OpenApi, Tags, param::Path, payload::Json};
|
||||
use serde::{Deserialize, Serialize};
|
||||
use std::sync::Arc;
|
||||
@@ -80,8 +81,7 @@ impl WizardApi {
|
||||
#[oai(path = "/wizard", method = "get")]
|
||||
async fn get_wizard_state(&self) -> OpenApiResult<Json<WizardResponse>> {
|
||||
let root = self.ctx.state.get_project_root().map_err(bad_request)?;
|
||||
let state =
|
||||
WizardState::load(&root).ok_or_else(|| not_found("No wizard active".to_string()))?;
|
||||
let state = svc::get_state(&root).map_err(|_| not_found("No wizard active".to_string()))?;
|
||||
Ok(Json(WizardResponse::from(&state)))
|
||||
}
|
||||
|
||||
@@ -97,16 +97,8 @@ impl WizardApi {
|
||||
) -> OpenApiResult<Json<WizardResponse>> {
|
||||
let root = self.ctx.state.get_project_root().map_err(bad_request)?;
|
||||
let wizard_step = parse_step(&step.0)?;
|
||||
let mut state =
|
||||
WizardState::load(&root).ok_or_else(|| not_found("No wizard active".to_string()))?;
|
||||
|
||||
state.set_step_status(
|
||||
wizard_step,
|
||||
StepStatus::AwaitingConfirmation,
|
||||
payload.0.content,
|
||||
);
|
||||
state.save(&root).map_err(bad_request)?;
|
||||
|
||||
let state = svc::set_step_content(&root, wizard_step, payload.0.content)
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(WizardResponse::from(&state)))
|
||||
}
|
||||
|
||||
@@ -117,12 +109,8 @@ impl WizardApi {
|
||||
async fn confirm_step(&self, step: Path<String>) -> OpenApiResult<Json<WizardResponse>> {
|
||||
let root = self.ctx.state.get_project_root().map_err(bad_request)?;
|
||||
let wizard_step = parse_step(&step.0)?;
|
||||
let mut state =
|
||||
WizardState::load(&root).ok_or_else(|| not_found("No wizard active".to_string()))?;
|
||||
|
||||
state.confirm_step(wizard_step).map_err(bad_request)?;
|
||||
state.save(&root).map_err(bad_request)?;
|
||||
|
||||
let state =
|
||||
svc::mark_step_confirmed(&root, wizard_step).map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(WizardResponse::from(&state)))
|
||||
}
|
||||
|
||||
@@ -133,12 +121,8 @@ impl WizardApi {
|
||||
async fn skip_step(&self, step: Path<String>) -> OpenApiResult<Json<WizardResponse>> {
|
||||
let root = self.ctx.state.get_project_root().map_err(bad_request)?;
|
||||
let wizard_step = parse_step(&step.0)?;
|
||||
let mut state =
|
||||
WizardState::load(&root).ok_or_else(|| not_found("No wizard active".to_string()))?;
|
||||
|
||||
state.skip_step(wizard_step).map_err(bad_request)?;
|
||||
state.save(&root).map_err(bad_request)?;
|
||||
|
||||
let state =
|
||||
svc::mark_step_skipped(&root, wizard_step).map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(WizardResponse::from(&state)))
|
||||
}
|
||||
|
||||
@@ -147,12 +131,8 @@ impl WizardApi {
|
||||
async fn mark_generating(&self, step: Path<String>) -> OpenApiResult<Json<WizardResponse>> {
|
||||
let root = self.ctx.state.get_project_root().map_err(bad_request)?;
|
||||
let wizard_step = parse_step(&step.0)?;
|
||||
let mut state =
|
||||
WizardState::load(&root).ok_or_else(|| not_found("No wizard active".to_string()))?;
|
||||
|
||||
state.set_step_status(wizard_step, StepStatus::Generating, None);
|
||||
state.save(&root).map_err(bad_request)?;
|
||||
|
||||
let state = svc::mark_step_generating(&root, wizard_step)
|
||||
.map_err(|e| bad_request(e.to_string()))?;
|
||||
Ok(Json(WizardResponse::from(&state)))
|
||||
}
|
||||
}
|
||||
@@ -195,7 +175,7 @@ mod tests {
|
||||
let body: serde_json::Value = resp.0.into_body().into_json().await.unwrap();
|
||||
assert_eq!(body["current_step_index"], 1);
|
||||
assert!(!body["completed"].as_bool().unwrap());
|
||||
assert_eq!(body["steps"].as_array().unwrap().len(), 6);
|
||||
assert_eq!(body["steps"].as_array().unwrap().len(), 8);
|
||||
assert_eq!(body["steps"][0]["status"], "confirmed");
|
||||
}
|
||||
|
||||
@@ -279,11 +259,13 @@ mod tests {
|
||||
let (dir, client) = setup();
|
||||
WizardState::init_if_missing(dir.path());
|
||||
|
||||
// Steps 2-6 (scaffold is already confirmed)
|
||||
// Steps 2-8 (scaffold is already confirmed)
|
||||
let steps = [
|
||||
"context",
|
||||
"stack",
|
||||
"test_script",
|
||||
"build_script",
|
||||
"lint_script",
|
||||
"release_script",
|
||||
"test_coverage",
|
||||
];
|
||||
|
||||
+49 -971
File diff suppressed because it is too large
@@ -11,6 +11,4 @@ pub use files::{
};
pub use paths::{find_story_kit_root, get_home_directory, resolve_cli_path};
pub use preferences::{get_model_preference, set_model_preference};
pub use project::{
close_project, forget_known_project, get_current_project, get_known_projects, open_project,
};
pub use project::open_project;

+25 -11
@@ -37,6 +37,13 @@ pub(crate) async fn ensure_project_root_with_story_kit(
if !path.join(".huskies").is_dir() {
scaffold_story_kit(&path, port)?;
}
// Always update .mcp.json with the current port so the bot connects to
// the right endpoint even when HUSKIES_PORT changes between restarts.
let mcp_content = format!(
"{{\n \"mcpServers\": {{\n \"huskies\": {{\n \"type\": \"http\",\n \"url\": \"http://localhost:{port}/mcp\"\n }}\n }}\n}}\n"
);
fs::write(path.join(".mcp.json"), mcp_content)
.map_err(|e| format!("Failed to write .mcp.json: {}", e))?;
Ok(())
})
.await
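For illustration, opening a project while the server listens on port 3002 yields a `.mcp.json` equivalent to the JSON produced by this sketch; the test form is hypothetical and the whitespace of the collapsed template above may differ.

```rust
#[test]
fn mcp_json_template_renders_current_port_sketch() {
    // Hypothetical test; the format! template mirrors the hunk above.
    let port = 3002;
    let rendered = format!(
        "{{\n \"mcpServers\": {{\n \"huskies\": {{\n \"type\": \"http\",\n \"url\": \"http://localhost:{port}/mcp\"\n }}\n }}\n}}\n"
    );
    // Valid JSON pointing the bot at the currently running MCP endpoint.
    assert!(serde_json::from_str::<serde_json::Value>(&rendered).is_ok());
    assert!(rendered.contains("\"url\": \"http://localhost:3002/mcp\""));
}
```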
@@ -77,6 +84,7 @@ pub async fn open_project(
|
||||
Ok(path)
|
||||
}
|
||||
|
||||
#[allow(dead_code)]
|
||||
pub fn close_project(state: &SessionState, store: &dyn StoreOps) -> Result<(), String> {
|
||||
{
|
||||
// TRACE:MERGE-DEBUG — remove once root cause is found
|
||||
@@ -91,6 +99,7 @@ pub fn close_project(state: &SessionState, store: &dyn StoreOps) -> Result<(), S
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[allow(dead_code)]
|
||||
pub fn get_current_project(
|
||||
state: &SessionState,
|
||||
store: &dyn StoreOps,
|
||||
@@ -124,6 +133,7 @@ pub fn get_current_project(
|
||||
Ok(None)
|
||||
}
|
||||
|
||||
#[allow(dead_code)]
|
||||
pub fn get_known_projects(store: &dyn StoreOps) -> Result<Vec<String>, String> {
|
||||
let projects = store
|
||||
.get(KEY_KNOWN_PROJECTS)
|
||||
@@ -136,6 +146,7 @@ pub fn get_known_projects(store: &dyn StoreOps) -> Result<Vec<String>, String> {
|
||||
Ok(projects)
|
||||
}
|
||||
|
||||
#[allow(dead_code)]
|
||||
pub fn forget_known_project(path: String, store: &dyn StoreOps) -> Result<(), String> {
|
||||
let mut known_projects = get_known_projects(store)?;
|
||||
let original_len = known_projects.len();
|
||||
@@ -194,16 +205,15 @@ mod tests {
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn open_project_does_not_overwrite_existing_mcp_json() {
|
||||
// scaffold must NOT overwrite .mcp.json when it already exists — QA
|
||||
// test servers share the real project root, and re-writing would
|
||||
// clobber the file with the wrong port.
|
||||
async fn open_project_updates_mcp_json_with_current_port() {
|
||||
// .mcp.json must always be updated with the actual running port so the
|
||||
// bot connects to the right MCP endpoint even when HUSKIES_PORT changes.
|
||||
let dir = tempdir().unwrap();
|
||||
let project_dir = dir.path().join("myproject");
|
||||
fs::create_dir_all(&project_dir).unwrap();
|
||||
// Pre-write .mcp.json with a different port to simulate an already-configured project.
|
||||
// Pre-write .mcp.json with a different port to simulate a stale file.
|
||||
let mcp_path = project_dir.join(".mcp.json");
|
||||
fs::write(&mcp_path, "{\"existing\": true}").unwrap();
|
||||
fs::write(&mcp_path, "{\"stale\": true}").unwrap();
|
||||
let store = make_store(&dir);
|
||||
let state = SessionState::default();
|
||||
|
||||
@@ -211,15 +221,19 @@ mod tests {
|
||||
project_dir.to_string_lossy().to_string(),
|
||||
&state,
|
||||
&store,
|
||||
3001,
|
||||
3002,
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(
|
||||
fs::read_to_string(&mcp_path).unwrap(),
|
||||
"{\"existing\": true}",
|
||||
"open_project must not overwrite an existing .mcp.json"
|
||||
let content = fs::read_to_string(&mcp_path).unwrap();
|
||||
assert!(
|
||||
content.contains("3002"),
|
||||
"open_project must update .mcp.json with the actual running port"
|
||||
);
|
||||
assert!(
|
||||
content.contains("localhost"),
|
||||
"mcp.json must reference localhost"
|
||||
);
|
||||
}
|
||||
|
||||
|
||||
@@ -100,6 +100,24 @@ const DEFAULT_PROJECT_SETTINGS_TOML: &str = r#"# Project-wide default QA mode: "
|
||||
# Per-story `qa` front matter overrides this setting.
|
||||
default_qa = "server"
|
||||
|
||||
# Maximum number of retries per story per pipeline stage before marking as blocked.
|
||||
# Set to 0 to disable retry limits.
|
||||
max_retries = 2
|
||||
|
||||
# Default model for coder-stage agents (e.g. "sonnet", "opus").
|
||||
# When set, only coder agents whose model matches this value are considered for
|
||||
# auto-assignment, so opus agents are only used when explicitly requested via
|
||||
# story front matter `agent:` field.
|
||||
# default_coder_model = "sonnet"
|
||||
|
||||
# Maximum number of concurrent coder-stage agents.
|
||||
# Stories wait in 2_current/ until a slot frees up.
|
||||
# max_coders = 3
|
||||
|
||||
# Override the base branch for worktree creation and merge operations.
|
||||
# When not set, the system auto-detects the base branch from the current HEAD.
|
||||
# base_branch = "main"
|
||||
|
||||
# Suppress soft rate-limit warning notifications in chat.
|
||||
# Hard blocks and story-blocked notifications are always sent.
|
||||
# rate_limit_notifications = true
|
||||
@@ -199,33 +217,202 @@ pub fn detect_components_toml(root: &Path) -> String {
|
||||
sections.join("\n")
|
||||
}
|
||||
|
||||
/// Detect the appropriate Node.js test command for a directory containing `package.json`.
|
||||
///
|
||||
/// Reads the `package.json` content to identify known test runners (vitest, jest).
|
||||
/// Falls back to `npm test` or `pnpm test` based on which lock file is present.
|
||||
fn detect_node_test_cmd(pkg_dir: &Path) -> String {
|
||||
let has_pnpm = pkg_dir.join("pnpm-lock.yaml").exists();
|
||||
let content = std::fs::read_to_string(pkg_dir.join("package.json")).unwrap_or_default();
|
||||
|
||||
if content.contains("\"vitest\"") {
|
||||
let pm = if has_pnpm { "pnpm" } else { "npx" };
|
||||
return format!("{} vitest run", pm);
|
||||
}
|
||||
if content.contains("\"jest\"") {
|
||||
let pm = if has_pnpm { "pnpm" } else { "npx" };
|
||||
return format!("{} jest", pm);
|
||||
}
|
||||
|
||||
if has_pnpm {
|
||||
"pnpm test".to_string()
|
||||
} else {
|
||||
"npm test".to_string()
|
||||
}
|
||||
}
|
||||
|
||||
/// Detect the appropriate Node.js build command for a directory containing `package.json`.
|
||||
fn detect_node_build_cmd(pkg_dir: &Path) -> String {
|
||||
if pkg_dir.join("pnpm-lock.yaml").exists() {
|
||||
"pnpm run build".to_string()
|
||||
} else {
|
||||
"npm run build".to_string()
|
||||
}
|
||||
}
|
||||
|
||||
/// Detect the appropriate Node.js lint command for a directory containing `package.json`.
|
||||
///
|
||||
/// Reads the `package.json` content to identify eslint. Falls back to
|
||||
/// `npm run lint` or `pnpm run lint` based on which lock file is present.
|
||||
fn detect_node_lint_cmd(pkg_dir: &Path) -> String {
|
||||
let has_pnpm = pkg_dir.join("pnpm-lock.yaml").exists();
|
||||
let content = std::fs::read_to_string(pkg_dir.join("package.json")).unwrap_or_default();
|
||||
if content.contains("\"eslint\"") {
|
||||
let pm = if has_pnpm { "pnpm" } else { "npx" };
|
||||
return format!("{pm} eslint .");
|
||||
}
|
||||
if has_pnpm {
|
||||
"pnpm run lint".to_string()
|
||||
} else {
|
||||
"npm run lint".to_string()
|
||||
}
|
||||
}
|
||||
|
||||
/// Generate `script/build` content for a new project at `root`.
|
||||
///
|
||||
/// Inspects well-known marker files to identify which tech stacks are present
|
||||
/// and emits the appropriate build commands. Multi-stack projects get combined
|
||||
/// commands run sequentially. Falls back to a generic stub when no markers
|
||||
/// are found so the scaffold is always valid.
|
||||
///
|
||||
/// For projects with a frontend in a known subdirectory (`frontend/`, `client/`),
|
||||
/// the build command is detected from the presence of `pnpm-lock.yaml`.
|
||||
pub fn detect_script_build(root: &Path) -> String {
|
||||
let mut commands: Vec<String> = Vec::new();
|
||||
|
||||
if root.join("Cargo.toml").exists() {
|
||||
commands.push("cargo build --release".to_string());
|
||||
}
|
||||
|
||||
if root.join("package.json").exists() {
|
||||
commands.push(detect_node_build_cmd(root));
|
||||
}
|
||||
|
||||
// Detect frontend in known subdirectories (e.g. frontend/, client/)
|
||||
for subdir in &["frontend", "client"] {
|
||||
let sub_path = root.join(subdir);
|
||||
if sub_path.join("package.json").exists() {
|
||||
let cmd = detect_node_build_cmd(&sub_path);
|
||||
commands.push(format!("(cd {} && {})", subdir, cmd));
|
||||
}
|
||||
}
|
||||
|
||||
if root.join("pyproject.toml").exists() {
|
||||
commands.push("python -m build".to_string());
|
||||
}
|
||||
|
||||
if root.join("go.mod").exists() {
|
||||
commands.push("go build ./...".to_string());
|
||||
}
|
||||
|
||||
if commands.is_empty() {
|
||||
return "#!/usr/bin/env bash\nset -euo pipefail\n\n# Add your project's build commands here.\necho \"No build configured\"\n".to_string();
|
||||
}
|
||||
|
||||
let mut script = "#!/usr/bin/env bash\nset -euo pipefail\n\n".to_string();
|
||||
for cmd in commands {
|
||||
script.push_str(&cmd);
|
||||
script.push('\n');
|
||||
}
|
||||
script
|
||||
}
|
||||
|
||||
/// Generate `script/lint` content for a new project at `root`.
|
||||
///
|
||||
/// Inspects well-known marker files to identify which linters are present
|
||||
/// and emits the appropriate lint commands. Multi-stack projects get combined
|
||||
/// commands run sequentially. Falls back to a generic stub when no markers
|
||||
/// are found so the scaffold is always valid.
|
||||
///
|
||||
/// For projects with a frontend in a known subdirectory (`frontend/`, `client/`),
|
||||
/// the lint command is detected from the `package.json` (eslint, npm, pnpm).
|
||||
pub fn detect_script_lint(root: &Path) -> String {
|
||||
let mut commands: Vec<String> = Vec::new();
|
||||
|
||||
if root.join("Cargo.toml").exists() {
|
||||
commands.push("cargo fmt --all --check".to_string());
|
||||
commands.push("cargo clippy -- -D warnings".to_string());
|
||||
}
|
||||
|
||||
if root.join("package.json").exists() {
|
||||
commands.push(detect_node_lint_cmd(root));
|
||||
}
|
||||
|
||||
// Detect frontend in known subdirectories (e.g. frontend/, client/)
|
||||
for subdir in &["frontend", "client"] {
|
||||
let sub_path = root.join(subdir);
|
||||
if sub_path.join("package.json").exists() {
|
||||
let cmd = detect_node_lint_cmd(&sub_path);
|
||||
commands.push(format!("(cd {} && {})", subdir, cmd));
|
||||
}
|
||||
}
|
||||
|
||||
if root.join("pyproject.toml").exists() || root.join("requirements.txt").exists() {
|
||||
let mut content = std::fs::read_to_string(root.join("pyproject.toml")).unwrap_or_default();
|
||||
content
|
||||
.push_str(&std::fs::read_to_string(root.join("requirements.txt")).unwrap_or_default());
|
||||
if content.contains("ruff") {
|
||||
commands.push("ruff check .".to_string());
|
||||
} else {
|
||||
commands.push("flake8 .".to_string());
|
||||
}
|
||||
}
|
||||
|
||||
if root.join("go.mod").exists() {
|
||||
commands.push("go vet ./...".to_string());
|
||||
}
|
||||
|
||||
if commands.is_empty() {
|
||||
return "#!/usr/bin/env bash\nset -euo pipefail\n\n# Add your project's lint commands here.\necho \"No linters configured\"\n".to_string();
|
||||
}
|
||||
|
||||
let mut script = "#!/usr/bin/env bash\nset -euo pipefail\n\n".to_string();
|
||||
for cmd in commands {
|
||||
script.push_str(&cmd);
|
||||
script.push('\n');
|
||||
}
|
||||
script
|
||||
}
|
||||
|
||||
/// Generate `script/test` content for a new project at `root`.
|
||||
///
|
||||
/// Inspects well-known marker files to identify which tech stacks are present
|
||||
/// and emits the appropriate test commands. Multi-stack projects get combined
|
||||
/// commands run sequentially. Falls back to the generic stub when no markers
|
||||
/// are found so the scaffold is always valid.
|
||||
///
|
||||
/// For projects with a frontend in a known subdirectory (`frontend/`, `client/`),
|
||||
/// the test runner is detected from the `package.json` (vitest, jest, npm, pnpm).
|
||||
pub fn detect_script_test(root: &Path) -> String {
|
||||
let mut commands: Vec<&str> = Vec::new();
|
||||
let mut commands: Vec<String> = Vec::new();
|
||||
|
||||
if root.join("Cargo.toml").exists() {
|
||||
commands.push("cargo test");
|
||||
commands.push("cargo test".to_string());
|
||||
}
|
||||
|
||||
if root.join("package.json").exists() {
|
||||
if root.join("pnpm-lock.yaml").exists() {
|
||||
commands.push("pnpm test");
|
||||
commands.push("pnpm test".to_string());
|
||||
} else {
|
||||
commands.push("npm test");
|
||||
commands.push("npm test".to_string());
|
||||
}
|
||||
}
|
||||
|
||||
// Detect frontend in known subdirectories (e.g. frontend/, client/)
|
||||
for subdir in &["frontend", "client"] {
|
||||
let sub_path = root.join(subdir);
|
||||
if sub_path.join("package.json").exists() {
|
||||
let cmd = detect_node_test_cmd(&sub_path);
|
||||
commands.push(format!("(cd {} && {})", subdir, cmd));
|
||||
}
|
||||
}
|
||||
|
||||
if root.join("pyproject.toml").exists() || root.join("requirements.txt").exists() {
|
||||
commands.push("pytest");
|
||||
commands.push("pytest".to_string());
|
||||
}
|
||||
|
||||
if root.join("go.mod").exists() {
|
||||
commands.push("go test ./...");
|
||||
commands.push("go test ./...".to_string());
|
||||
}
|
||||
|
||||
if commands.is_empty() {
|
||||
@@ -234,7 +421,7 @@ pub fn detect_script_test(root: &Path) -> String {
|
||||
|
||||
let mut script = "#!/usr/bin/env bash\nset -euo pipefail\n\n".to_string();
|
||||
for cmd in commands {
|
||||
script.push_str(cmd);
|
||||
script.push_str(&cmd);
|
||||
script.push('\n');
|
||||
}
|
||||
script
|
||||
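As a combined illustration of the three generators above, a hypothetical repo with a root `Cargo.toml` plus a `frontend/package.json` declaring vitest (npm, no pnpm lock file) would get scripts containing the commands asserted below; the test itself is a sketch, not part of this diff.

```rust
#[test]
fn detect_scripts_rust_plus_vitest_frontend_sketch() {
    // Hypothetical layout: Rust at the root, vitest frontend in frontend/.
    let dir = tempdir().unwrap();
    fs::write(dir.path().join("Cargo.toml"), "[package]\nname = \"demo\"\n").unwrap();
    let frontend = dir.path().join("frontend");
    fs::create_dir_all(&frontend).unwrap();
    fs::write(
        frontend.join("package.json"),
        r#"{"devDependencies":{"vitest":"^1.0.0"}}"#,
    )
    .unwrap();

    // All three generated scripts share the bash header and pick up both stacks.
    assert!(detect_script_build(dir.path()).contains("cargo build --release"));
    assert!(detect_script_lint(dir.path()).contains("cargo clippy -- -D warnings"));
    assert!(detect_script_test(dir.path()).contains("(cd frontend && npx vitest run)"));
}
```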
@@ -298,6 +485,8 @@ fn write_story_kit_gitignore(root: &Path) -> Result<(), String> {
|
||||
"token_usage.jsonl",
|
||||
"wizard_state.json",
|
||||
"store.json",
|
||||
"pipeline.db",
|
||||
"*.db",
|
||||
];
|
||||
|
||||
let gitignore_path = root.join(".huskies").join(".gitignore");
|
||||
@@ -411,6 +600,10 @@ pub(crate) fn scaffold_story_kit(root: &Path, port: u16) -> Result<(), String> {
|
||||
write_file_if_missing(&tech_root.join("STACK.md"), STORY_KIT_STACK)?;
|
||||
let script_test_content = detect_script_test(root);
|
||||
write_script_if_missing(&script_root.join("test"), &script_test_content)?;
|
||||
let script_build_content = detect_script_build(root);
|
||||
write_script_if_missing(&script_root.join("build"), &script_build_content)?;
|
||||
let script_lint_content = detect_script_lint(root);
|
||||
write_script_if_missing(&script_root.join("lint"), &script_lint_content)?;
|
||||
write_file_if_missing(&root.join("CLAUDE.md"), STORY_KIT_CLAUDE_MD)?;
|
||||
|
||||
// Write per-transport bot.toml example files so users can see all options.
|
||||
@@ -584,6 +777,78 @@ mod tests {
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn scaffold_project_toml_contains_max_retries_with_default_value() {
|
||||
let dir = tempdir().unwrap();
|
||||
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||
|
||||
let content = fs::read_to_string(dir.path().join(".huskies/project.toml")).unwrap();
|
||||
assert!(
|
||||
content.contains("max_retries = 2"),
|
||||
"project.toml scaffold should include max_retries with default value 2"
|
||||
);
|
||||
assert!(
|
||||
content.contains("Maximum number of retries"),
|
||||
"project.toml scaffold should include a comment explaining max_retries"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn scaffold_project_toml_contains_commented_out_optional_fields() {
|
||||
let dir = tempdir().unwrap();
|
||||
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||
|
||||
let content = fs::read_to_string(dir.path().join(".huskies/project.toml")).unwrap();
|
||||
assert!(
|
||||
content.contains("# default_coder_model"),
|
||||
"project.toml scaffold should include commented-out default_coder_model"
|
||||
);
|
||||
assert!(
|
||||
content.contains("# max_coders"),
|
||||
"project.toml scaffold should include commented-out max_coders"
|
||||
);
|
||||
assert!(
|
||||
content.contains("# base_branch"),
|
||||
"project.toml scaffold should include commented-out base_branch"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn scaffold_project_toml_round_trips_through_project_config_load() {
|
||||
use crate::config::ProjectConfig;
|
||||
|
||||
let dir = tempdir().unwrap();
|
||||
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||
|
||||
// The generated project.toml must parse without error.
|
||||
let config = ProjectConfig::load(dir.path())
|
||||
.expect("Generated project.toml should parse without error");
|
||||
|
||||
// Key defaults must survive the round-trip.
|
||||
assert_eq!(config.default_qa, "server");
|
||||
assert_eq!(config.max_retries, 2);
|
||||
assert!(
|
||||
config.rate_limit_notifications,
|
||||
"rate_limit_notifications should default to true"
|
||||
);
|
||||
assert!(
|
||||
config.default_coder_model.is_none(),
|
||||
"default_coder_model should be None when commented out"
|
||||
);
|
||||
assert!(
|
||||
config.max_coders.is_none(),
|
||||
"max_coders should be None when commented out"
|
||||
);
|
||||
assert!(
|
||||
config.base_branch.is_none(),
|
||||
"base_branch should be None when commented out"
|
||||
);
|
||||
assert!(
|
||||
config.timezone.is_none(),
|
||||
"timezone should be None when commented out"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn scaffold_context_is_blank_template_not_story_kit_content() {
|
||||
let dir = tempdir().unwrap();
|
||||
@@ -744,6 +1009,9 @@ mod tests {
|
||||
assert!(!root_content.contains(".huskies/coverage/"));
|
||||
// store.json must be in .huskies/.gitignore instead
|
||||
assert!(sk_content.contains("store.json"));
|
||||
// Database files must be ignored so novice users don't accidentally commit them
|
||||
assert!(sk_content.contains("pipeline.db"));
|
||||
assert!(sk_content.contains("*.db"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
@@ -1165,6 +1433,141 @@ mod tests {
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_test_frontend_subdir_with_vitest_uses_npx_vitest() {
|
||||
let dir = tempdir().unwrap();
|
||||
let frontend = dir.path().join("frontend");
|
||||
fs::create_dir_all(&frontend).unwrap();
|
||||
fs::write(
|
||||
frontend.join("package.json"),
|
||||
r#"{"devDependencies":{"vitest":"^1.0.0"},"scripts":{"test":"vitest run"}}"#,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let script = detect_script_test(dir.path());
|
||||
assert!(
|
||||
script.contains("vitest run"),
|
||||
"frontend with vitest should emit vitest run"
|
||||
);
|
||||
assert!(
|
||||
script.contains("cd frontend"),
|
||||
"should cd into the frontend directory"
|
||||
);
|
||||
assert!(
|
||||
!script.contains("No tests configured"),
|
||||
"should not use stub when frontend is detected"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_test_frontend_subdir_with_jest_uses_npx_jest() {
|
||||
let dir = tempdir().unwrap();
|
||||
let frontend = dir.path().join("frontend");
|
||||
fs::create_dir_all(&frontend).unwrap();
|
||||
fs::write(
|
||||
frontend.join("package.json"),
|
||||
r#"{"devDependencies":{"jest":"^29.0.0"},"scripts":{"test":"jest"}}"#,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let script = detect_script_test(dir.path());
|
||||
assert!(
|
||||
script.contains("jest"),
|
||||
"frontend with jest should emit jest"
|
||||
);
|
||||
assert!(
|
||||
script.contains("cd frontend"),
|
||||
"should cd into the frontend directory"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_test_frontend_subdir_no_known_runner_uses_npm_test() {
|
||||
let dir = tempdir().unwrap();
|
||||
let frontend = dir.path().join("frontend");
|
||||
fs::create_dir_all(&frontend).unwrap();
|
||||
fs::write(
|
||||
frontend.join("package.json"),
|
||||
r#"{"scripts":{"test":"mocha"}}"#,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let script = detect_script_test(dir.path());
|
||||
assert!(
|
||||
script.contains("npm test"),
|
||||
"frontend without known runner should fall back to npm test"
|
||||
);
|
||||
assert!(script.contains("cd frontend"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_test_frontend_subdir_pnpm_uses_pnpm_vitest() {
|
||||
let dir = tempdir().unwrap();
|
||||
let frontend = dir.path().join("frontend");
|
||||
fs::create_dir_all(&frontend).unwrap();
|
||||
fs::write(
|
||||
frontend.join("package.json"),
|
||||
r#"{"devDependencies":{"vitest":"^1.0.0"}}"#,
|
||||
)
|
||||
.unwrap();
|
||||
fs::write(frontend.join("pnpm-lock.yaml"), "").unwrap();
|
||||
|
||||
let script = detect_script_test(dir.path());
|
||||
assert!(
|
||||
script.contains("pnpm vitest run"),
|
||||
"pnpm frontend with vitest should use pnpm vitest run"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_test_rust_plus_frontend_subdir_both_included() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(
|
||||
dir.path().join("Cargo.toml"),
|
||||
"[package]\nname = \"server\"\n",
|
||||
)
|
||||
.unwrap();
|
||||
let frontend = dir.path().join("frontend");
|
||||
fs::create_dir_all(&frontend).unwrap();
|
||||
fs::write(
|
||||
frontend.join("package.json"),
|
||||
r#"{"devDependencies":{"vitest":"^1.0.0"}}"#,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let script = detect_script_test(dir.path());
|
||||
assert!(
|
||||
script.contains("cargo test"),
|
||||
"Rust + frontend should include cargo test"
|
||||
);
|
||||
assert!(
|
||||
script.contains("vitest run"),
|
||||
"Rust + frontend should include vitest run"
|
||||
);
|
||||
assert!(
|
||||
script.contains("cd frontend"),
|
||||
"Rust + frontend should cd into frontend"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_test_client_subdir_detected() {
|
||||
let dir = tempdir().unwrap();
|
||||
let client = dir.path().join("client");
|
||||
fs::create_dir_all(&client).unwrap();
|
||||
fs::write(
|
||||
client.join("package.json"),
|
||||
r#"{"scripts":{"test":"jest"}}"#,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let script = detect_script_test(dir.path());
|
||||
assert!(
|
||||
script.contains("cd client"),
|
||||
"client/ subdir should also be detected"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_test_output_starts_with_shebang() {
|
||||
let dir = tempdir().unwrap();
|
||||
@@ -1211,6 +1614,347 @@ mod tests {
|
||||
);
|
||||
}
|
||||
|
||||
// --- detect_script_build ---
|
||||
|
||||
#[test]
|
||||
fn detect_script_build_no_markers_returns_stub() {
|
||||
let dir = tempdir().unwrap();
|
||||
let script = detect_script_build(dir.path());
|
||||
assert!(
|
||||
script.contains("No build configured"),
|
||||
"fallback should contain the generic stub message"
|
||||
);
|
||||
assert!(script.starts_with("#!/usr/bin/env bash"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_build_cargo_toml_adds_cargo_build_release() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(dir.path().join("Cargo.toml"), "[package]\nname = \"x\"\n").unwrap();
|
||||
|
||||
let script = detect_script_build(dir.path());
|
||||
assert!(
|
||||
script.contains("cargo build --release"),
|
||||
"Rust project should run cargo build --release"
|
||||
);
|
||||
assert!(!script.contains("No build configured"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_build_package_json_npm_adds_npm_run_build() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(dir.path().join("package.json"), "{}").unwrap();
|
||||
|
||||
let script = detect_script_build(dir.path());
|
||||
assert!(
|
||||
script.contains("npm run build"),
|
||||
"Node project without pnpm-lock should run npm run build"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_build_package_json_pnpm_adds_pnpm_run_build() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(dir.path().join("package.json"), "{}").unwrap();
|
||||
fs::write(dir.path().join("pnpm-lock.yaml"), "").unwrap();
|
||||
|
||||
let script = detect_script_build(dir.path());
|
||||
assert!(
|
||||
script.contains("pnpm run build"),
|
||||
"Node project with pnpm-lock should run pnpm run build"
|
||||
);
|
||||
assert!(
|
||||
!script.lines().any(|l| l.trim() == "npm run build"),
|
||||
"should not use npm when pnpm-lock.yaml is present"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_build_go_mod_adds_go_build() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(dir.path().join("go.mod"), "module example.com/app\n").unwrap();
|
||||
|
||||
let script = detect_script_build(dir.path());
|
||||
assert!(
|
||||
script.contains("go build ./..."),
|
||||
"Go project should run go build ./..."
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_build_pyproject_toml_adds_python_build() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(
|
||||
dir.path().join("pyproject.toml"),
|
||||
"[project]\nname = \"x\"\n",
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let script = detect_script_build(dir.path());
|
||||
assert!(
|
||||
script.contains("python -m build"),
|
||||
"Python project should run python -m build"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_build_frontend_subdir_detected() {
|
||||
let dir = tempdir().unwrap();
|
||||
let frontend = dir.path().join("frontend");
|
||||
fs::create_dir_all(&frontend).unwrap();
|
||||
fs::write(frontend.join("package.json"), "{}").unwrap();
|
||||
|
||||
let script = detect_script_build(dir.path());
|
||||
assert!(
|
||||
script.contains("cd frontend"),
|
||||
"frontend subdir should be detected for build"
|
||||
);
|
||||
assert!(script.contains("npm run build"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_build_rust_plus_frontend_subdir_both_included() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(
|
||||
dir.path().join("Cargo.toml"),
|
||||
"[package]\nname = \"server\"\n",
|
||||
)
|
||||
.unwrap();
|
||||
let frontend = dir.path().join("frontend");
|
||||
fs::create_dir_all(&frontend).unwrap();
|
||||
fs::write(frontend.join("package.json"), "{}").unwrap();
|
||||
|
||||
let script = detect_script_build(dir.path());
|
||||
assert!(script.contains("cargo build --release"));
|
||||
assert!(script.contains("cd frontend"));
|
||||
assert!(script.contains("npm run build"));
|
||||
}
|
||||
|
||||
// --- detect_script_lint ---
|
||||
|
||||
#[test]
|
||||
fn detect_script_lint_no_markers_returns_stub() {
|
||||
let dir = tempdir().unwrap();
|
||||
let script = detect_script_lint(dir.path());
|
||||
assert!(
|
||||
script.contains("No linters configured"),
|
||||
"fallback should contain the generic stub message"
|
||||
);
|
||||
assert!(script.starts_with("#!/usr/bin/env bash"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_lint_cargo_toml_adds_fmt_and_clippy() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(dir.path().join("Cargo.toml"), "[package]\nname = \"x\"\n").unwrap();
|
||||
|
||||
let script = detect_script_lint(dir.path());
|
||||
assert!(
|
||||
script.contains("cargo fmt --all --check"),
|
||||
"Rust project should check formatting"
|
||||
);
|
||||
assert!(
|
||||
script.contains("cargo clippy -- -D warnings"),
|
||||
"Rust project should run clippy"
|
||||
);
|
||||
assert!(!script.contains("No linters configured"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_lint_package_json_without_eslint_uses_npm_run_lint() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(dir.path().join("package.json"), "{}").unwrap();
|
||||
|
||||
let script = detect_script_lint(dir.path());
|
||||
assert!(
|
||||
script.contains("npm run lint"),
|
||||
"Node project without eslint dep should fall back to npm run lint"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_lint_package_json_with_eslint_uses_npx_eslint() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(
|
||||
dir.path().join("package.json"),
|
||||
r#"{"devDependencies":{"eslint":"^8.0.0"}}"#,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let script = detect_script_lint(dir.path());
|
||||
assert!(
|
||||
script.contains("npx eslint ."),
|
||||
"Node project with eslint should use npx eslint ."
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_lint_pnpm_with_eslint_uses_pnpm_eslint() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(
|
||||
dir.path().join("package.json"),
|
||||
r#"{"devDependencies":{"eslint":"^8.0.0"}}"#,
|
||||
)
|
||||
.unwrap();
|
||||
fs::write(dir.path().join("pnpm-lock.yaml"), "").unwrap();
|
||||
|
||||
let script = detect_script_lint(dir.path());
|
||||
assert!(
|
||||
script.contains("pnpm eslint ."),
|
||||
"pnpm project with eslint should use pnpm eslint ."
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_lint_python_requirements_uses_flake8() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(dir.path().join("requirements.txt"), "flask\n").unwrap();
|
||||
|
||||
let script = detect_script_lint(dir.path());
|
||||
assert!(
|
||||
script.contains("flake8 ."),
|
||||
"Python project without ruff should use flake8"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_lint_python_with_ruff_uses_ruff() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(
|
||||
dir.path().join("pyproject.toml"),
|
||||
"[project]\nname = \"x\"\n\n[tool.ruff]\n",
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let script = detect_script_lint(dir.path());
|
||||
assert!(
|
||||
script.contains("ruff check ."),
|
||||
"Python project with ruff configured should use ruff"
|
||||
);
|
||||
assert!(
|
||||
!script.contains("flake8"),
|
||||
"should not use flake8 when ruff is configured"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_lint_go_mod_adds_go_vet() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(dir.path().join("go.mod"), "module example.com/app\n").unwrap();
|
||||
|
||||
let script = detect_script_lint(dir.path());
|
||||
assert!(
|
||||
script.contains("go vet ./..."),
|
||||
"Go project should run go vet ./..."
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_lint_frontend_subdir_detected() {
|
||||
let dir = tempdir().unwrap();
|
||||
let frontend = dir.path().join("frontend");
|
||||
fs::create_dir_all(&frontend).unwrap();
|
||||
fs::write(frontend.join("package.json"), "{}").unwrap();
|
||||
|
||||
let script = detect_script_lint(dir.path());
|
||||
assert!(
|
||||
script.contains("cd frontend"),
|
||||
"frontend subdir should be detected for lint"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn detect_script_lint_rust_plus_frontend_subdir_both_included() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(
|
||||
dir.path().join("Cargo.toml"),
|
||||
"[package]\nname = \"server\"\n",
|
||||
)
|
||||
.unwrap();
|
||||
let frontend = dir.path().join("frontend");
|
||||
fs::create_dir_all(&frontend).unwrap();
|
||||
fs::write(frontend.join("package.json"), "{}").unwrap();
|
||||
|
||||
let script = detect_script_lint(dir.path());
|
||||
assert!(script.contains("cargo fmt --all --check"));
|
||||
assert!(script.contains("cargo clippy -- -D warnings"));
|
||||
assert!(script.contains("cd frontend"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn scaffold_story_kit_creates_script_build_and_lint() {
|
||||
let dir = tempdir().unwrap();
|
||||
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||
|
||||
assert!(
|
||||
dir.path().join("script/build").exists(),
|
||||
"script/build should be created by scaffold"
|
||||
);
|
||||
assert!(
|
||||
dir.path().join("script/lint").exists(),
|
||||
"script/lint should be created by scaffold"
|
||||
);
|
||||
}
|
||||
|
||||
#[cfg(unix)]
|
||||
#[test]
|
||||
fn scaffold_story_kit_creates_executable_script_build_and_lint() {
|
||||
use std::os::unix::fs::PermissionsExt;
|
||||
|
||||
let dir = tempdir().unwrap();
|
||||
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||
|
||||
for name in &["build", "lint"] {
|
||||
let path = dir.path().join("script").join(name);
|
||||
assert!(path.exists(), "script/{name} should be created");
|
||||
let perms = fs::metadata(&path).unwrap().permissions();
|
||||
assert!(
|
||||
perms.mode() & 0o111 != 0,
|
||||
"script/{name} should be executable"
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn scaffold_script_build_contains_detected_commands_for_rust() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(
|
||||
dir.path().join("Cargo.toml"),
|
||||
"[package]\nname = \"myapp\"\n",
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||
|
||||
let content = fs::read_to_string(dir.path().join("script/build")).unwrap();
|
||||
assert!(
|
||||
content.contains("cargo build --release"),
|
||||
"Rust project scaffold should set cargo build --release in script/build"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn scaffold_script_lint_contains_detected_commands_for_rust() {
|
||||
let dir = tempdir().unwrap();
|
||||
fs::write(
|
||||
dir.path().join("Cargo.toml"),
|
||||
"[package]\nname = \"myapp\"\n",
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
scaffold_story_kit(dir.path(), 3001).unwrap();
|
||||
|
||||
let content = fs::read_to_string(dir.path().join("script/lint")).unwrap();
|
||||
assert!(
|
||||
content.contains("cargo fmt --all --check"),
|
||||
"Rust project scaffold should include fmt check in script/lint"
|
||||
);
|
||||
assert!(
|
||||
content.contains("cargo clippy -- -D warnings"),
|
||||
"Rust project scaffold should include clippy in script/lint"
|
||||
);
|
||||
}
|
||||
|
||||
// --- generate_project_toml ---
|
||||
|
||||
#[test]
|
||||
|
||||
@@ -5,7 +5,7 @@ use std::path::Path;
/// Only untouched templates contain this marker — real project content
/// will never include it, so it avoids false positives when the project
/// itself is an "Agentic AI Code Assistant".
const TEMPLATE_SENTINEL: &str = "<!-- huskies:scaffold-template -->";
pub(crate) const TEMPLATE_SENTINEL: &str = "<!-- huskies:scaffold-template -->";

/// Marker found in the default `script/test` scaffold output.
const TEMPLATE_MARKER_SCRIPT: &str = "No tests configured";
@@ -16,9 +16,13 @@ pub enum WizardStep {
Stack,
/// Step 4: create script/test
TestScript,
/// Step 5: create script/release
/// Step 5: create script/build
BuildScript,
/// Step 6: create script/lint
LintScript,
/// Step 7: create script/release
ReleaseScript,
/// Step 6: create script/test_coverage
/// Step 8: create script/test_coverage
TestCoverage,
}

@@ -29,6 +33,8 @@ impl WizardStep {
WizardStep::Context,
WizardStep::Stack,
WizardStep::TestScript,
WizardStep::BuildScript,
WizardStep::LintScript,
WizardStep::ReleaseScript,
WizardStep::TestCoverage,
];
@@ -40,6 +46,8 @@ impl WizardStep {
WizardStep::Context => "Generate project context (00_CONTEXT.md)",
WizardStep::Stack => "Generate tech stack spec (STACK.md)",
WizardStep::TestScript => "Create test script (script/test)",
WizardStep::BuildScript => "Create build script (script/build)",
WizardStep::LintScript => "Create lint script (script/lint)",
WizardStep::ReleaseScript => "Create release script (script/release)",
WizardStep::TestCoverage => "Create test coverage script (script/test_coverage)",
}
@@ -262,7 +270,7 @@ mod tests {
#[test]
fn default_state_has_all_steps_pending() {
let state = WizardState::default();
assert_eq!(state.steps.len(), 6);
assert_eq!(state.steps.len(), 8);
for step in &state.steps {
assert_eq!(step.status, StepStatus::Pending);
}
@@ -31,35 +31,6 @@ pub struct ChatResult {
|
||||
pub session_id: Option<String>,
|
||||
}
|
||||
|
||||
fn get_anthropic_api_key_exists_impl(store: &dyn StoreOps) -> bool {
|
||||
match store.get(KEY_ANTHROPIC_API_KEY) {
|
||||
Some(value) => value.as_str().map(|k| !k.is_empty()).unwrap_or(false),
|
||||
None => false,
|
||||
}
|
||||
}
|
||||
|
||||
fn set_anthropic_api_key_impl(store: &dyn StoreOps, api_key: &str) -> Result<(), String> {
|
||||
store.set(KEY_ANTHROPIC_API_KEY, json!(api_key));
|
||||
store.save()?;
|
||||
|
||||
match store.get(KEY_ANTHROPIC_API_KEY) {
|
||||
Some(value) => {
|
||||
if let Some(retrieved) = value.as_str() {
|
||||
if retrieved != api_key {
|
||||
return Err("Retrieved key does not match saved key".to_string());
|
||||
}
|
||||
} else {
|
||||
return Err("Stored value is not a string".to_string());
|
||||
}
|
||||
}
|
||||
None => {
|
||||
return Err("API key was saved but cannot be retrieved".to_string());
|
||||
}
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn get_anthropic_api_key_impl(store: &dyn StoreOps) -> Result<String, String> {
|
||||
match store.get(KEY_ANTHROPIC_API_KEY) {
|
||||
Some(value) => {
|
||||
@@ -172,14 +143,6 @@ pub async fn get_ollama_models(base_url: Option<String>) -> Result<Vec<String>,
|
||||
OllamaProvider::get_models(&url).await
|
||||
}
|
||||
|
||||
pub fn get_anthropic_api_key_exists(store: &dyn StoreOps) -> Result<bool, String> {
|
||||
Ok(get_anthropic_api_key_exists_impl(store))
|
||||
}
|
||||
|
||||
pub fn set_anthropic_api_key(store: &dyn StoreOps, api_key: String) -> Result<(), String> {
|
||||
set_anthropic_api_key_impl(store, &api_key)
|
||||
}
|
||||
|
||||
/// Build a prompt for Claude Code that includes prior conversation history.
|
||||
///
|
||||
/// When a Claude Code session cannot be resumed (no session_id), we embed
|
||||
@@ -627,22 +590,6 @@ mod tests {
|
||||
save_should_fail: false,
|
||||
}
|
||||
}
|
||||
|
||||
fn with_save_error() -> Self {
|
||||
Self {
|
||||
data: Mutex::new(HashMap::new()),
|
||||
save_should_fail: true,
|
||||
}
|
||||
}
|
||||
|
||||
fn with_entry(key: &str, value: serde_json::Value) -> Self {
|
||||
let mut map = HashMap::new();
|
||||
map.insert(key.to_string(), value);
|
||||
Self {
|
||||
data: Mutex::new(map),
|
||||
save_should_fail: false,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl StoreOps for MockStore {
|
||||
@@ -695,121 +642,6 @@ mod tests {
|
||||
assert!(result.is_ok());
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// get_anthropic_api_key_exists_impl
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
#[test]
|
||||
fn api_key_exists_when_key_is_present_and_non_empty() {
|
||||
let store = MockStore::with_entry("anthropic_api_key", json!("sk-test-key"));
|
||||
assert!(get_anthropic_api_key_exists_impl(&store));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn api_key_exists_returns_false_when_key_is_empty_string() {
|
||||
let store = MockStore::with_entry("anthropic_api_key", json!(""));
|
||||
assert!(!get_anthropic_api_key_exists_impl(&store));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn api_key_exists_returns_false_when_key_absent() {
|
||||
let store = MockStore::new();
|
||||
assert!(!get_anthropic_api_key_exists_impl(&store));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn api_key_exists_returns_false_when_value_is_not_string() {
|
||||
let store = MockStore::with_entry("anthropic_api_key", json!(42));
|
||||
assert!(!get_anthropic_api_key_exists_impl(&store));
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// get_anthropic_api_key_impl
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
#[test]
|
||||
fn get_api_key_returns_key_when_present() {
|
||||
let store = MockStore::with_entry("anthropic_api_key", json!("sk-test-key"));
|
||||
let result = get_anthropic_api_key_impl(&store);
|
||||
assert_eq!(result.unwrap(), "sk-test-key");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn get_api_key_errors_when_empty() {
|
||||
let store = MockStore::with_entry("anthropic_api_key", json!(""));
|
||||
let result = get_anthropic_api_key_impl(&store);
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("empty"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn get_api_key_errors_when_absent() {
|
||||
let store = MockStore::new();
|
||||
let result = get_anthropic_api_key_impl(&store);
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("not found"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn get_api_key_errors_when_value_not_string() {
|
||||
let store = MockStore::with_entry("anthropic_api_key", json!(123));
|
||||
let result = get_anthropic_api_key_impl(&store);
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("not a string"));
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// set_anthropic_api_key_impl
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
#[test]
|
||||
fn set_api_key_stores_and_returns_ok() {
|
||||
let store = MockStore::new();
|
||||
let result = set_anthropic_api_key_impl(&store, "sk-my-key");
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(store.get("anthropic_api_key"), Some(json!("sk-my-key")));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn set_api_key_returns_error_when_save_fails() {
|
||||
let store = MockStore::with_save_error();
|
||||
let result = set_anthropic_api_key_impl(&store, "sk-my-key");
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("mock save error"));
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Public wrappers: get_anthropic_api_key_exists / set_anthropic_api_key
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
#[test]
|
||||
fn public_api_key_exists_returns_ok_bool() {
|
||||
let store = MockStore::with_entry("anthropic_api_key", json!("sk-abc"));
|
||||
let result = get_anthropic_api_key_exists(&store);
|
||||
assert_eq!(result, Ok(true));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn public_api_key_exists_false_when_absent() {
|
||||
let store = MockStore::new();
|
||||
let result = get_anthropic_api_key_exists(&store);
|
||||
assert_eq!(result, Ok(false));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn public_set_api_key_succeeds() {
|
||||
let store = MockStore::new();
|
||||
let result = set_anthropic_api_key(&store, "sk-xyz".to_string());
|
||||
assert!(result.is_ok());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn public_set_api_key_propagates_save_error() {
|
||||
let store = MockStore::with_save_error();
|
||||
let result = set_anthropic_api_key(&store, "sk-xyz".to_string());
|
||||
assert!(result.is_err());
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// get_tool_definitions
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
@@ -20,6 +20,7 @@ mod llm;
|
||||
pub mod log_buffer;
|
||||
pub(crate) mod pipeline_state;
|
||||
pub mod rebuild;
|
||||
mod service;
|
||||
mod state;
|
||||
mod store;
|
||||
mod workflow;
|
||||
@@ -544,6 +545,8 @@ async fn main() -> Result<(), std::io::Error> {
|
||||
let watcher_rx_for_whatsapp = watcher_tx.subscribe();
|
||||
let watcher_rx_for_slack = watcher_tx.subscribe();
|
||||
let watcher_rx_for_discord = watcher_tx.subscribe();
|
||||
// Subscribe to watcher events for the per-project event buffer (gateway polling).
|
||||
let watcher_rx_for_events = watcher_tx.subscribe();
|
||||
// Wrap perm_rx in Arc<Mutex> so it can be shared with both the WebSocket
|
||||
// handler (via AppContext) and the Matrix bot.
|
||||
let perm_rx = Arc::new(tokio::sync::Mutex::new(perm_rx));
|
||||
@@ -777,7 +780,7 @@ async fn main() -> Result<(), std::io::Error> {
|
||||
// in `chat::transport::matrix::bot::run::spawn_bot`. Refactor to consume this
|
||||
// shared instance via `AppContext.timer_store` so cancellations from MCP
|
||||
// tools and the bot's tick loop see the same in-memory state.
|
||||
let timer_store = std::sync::Arc::new(crate::chat::timer::TimerStore::load(
|
||||
let timer_store = std::sync::Arc::new(crate::service::timer::TimerStore::load(
|
||||
startup_root
|
||||
.as_ref()
|
||||
.map(|r| r.join(".huskies").join("timers.json"))
|
||||
@@ -802,7 +805,18 @@ async fn main() -> Result<(), std::io::Error> {
|
||||
test_jobs: std::sync::Arc::new(std::sync::Mutex::new(std::collections::HashMap::new())),
|
||||
};
|
||||
|
||||
let app = build_routes(ctx, whatsapp_ctx.clone(), slack_ctx.clone(), port);
|
||||
// Create the per-project event buffer and subscribe it to the watcher channel
|
||||
// so that pipeline events are buffered for the gateway's `/api/events` poller.
|
||||
let event_buffer = crate::http::events::EventBuffer::new();
|
||||
crate::http::events::subscribe_to_watcher(event_buffer.clone(), watcher_rx_for_events);
|
||||
|
||||
let app = build_routes(
|
||||
ctx,
|
||||
whatsapp_ctx.clone(),
|
||||
slack_ctx.clone(),
|
||||
port,
|
||||
Some(event_buffer),
|
||||
);
|
||||
|
||||
// Unified 1-second background tick loop: fires due timers, detects orphaned
|
||||
// agents (watchdog), and promotes done→archived items (sweep). Replaces the
|
||||
@@ -830,7 +844,7 @@ async fn main() -> Result<(), std::io::Error> {
|
||||
// Timer: fire due timers every second.
|
||||
if let Some(ref root) = tick_root {
|
||||
let result =
|
||||
crate::chat::timer::tick_once(&tick_timer, &tick_agents, root).await;
|
||||
crate::service::timer::tick_once(&tick_timer, &tick_agents, root).await;
|
||||
if let Err(msg) = result {
|
||||
crate::slog_error!("[tick] Timer tick panicked: {msg}");
|
||||
}
|
||||
@@ -868,6 +882,7 @@ async fn main() -> Result<(), std::io::Error> {
|
||||
matrix_shutdown_rx,
|
||||
None,
|
||||
vec![],
|
||||
std::collections::BTreeMap::new(),
|
||||
);
|
||||
} else {
|
||||
// Keep the receiver alive (drop it) so the sender never errors.
|
||||
@@ -878,7 +893,7 @@ async fn main() -> Result<(), std::io::Error> {
|
||||
// These mirror the listener that the Matrix bot spawns internally.
|
||||
if let (Some(ctx), Some(root)) = (&whatsapp_ctx, &startup_root) {
|
||||
let ambient_rooms = Arc::clone(&ctx.ambient_rooms);
|
||||
chat::transport::matrix::notifications::spawn_notification_listener(
|
||||
crate::service::notifications::spawn_notification_listener(
|
||||
Arc::clone(&ctx.transport),
|
||||
move || ambient_rooms.lock().unwrap().iter().cloned().collect(),
|
||||
watcher_rx_for_whatsapp,
|
||||
@@ -889,7 +904,7 @@ async fn main() -> Result<(), std::io::Error> {
|
||||
}
|
||||
if let (Some(ctx), Some(root)) = (&slack_ctx, &startup_root) {
|
||||
let channel_ids: Vec<String> = ctx.channel_ids.iter().cloned().collect();
|
||||
chat::transport::matrix::notifications::spawn_notification_listener(
|
||||
crate::service::notifications::spawn_notification_listener(
|
||||
Arc::clone(&ctx.transport) as Arc<dyn crate::chat::ChatTransport>,
|
||||
move || channel_ids.clone(),
|
||||
watcher_rx_for_slack,
|
||||
@@ -904,7 +919,7 @@ async fn main() -> Result<(), std::io::Error> {
|
||||
|
||||
// Spawn stage-transition notification listener for Discord.
|
||||
let channel_ids: Vec<String> = ctx.channel_ids.iter().cloned().collect();
|
||||
chat::transport::matrix::notifications::spawn_notification_listener(
|
||||
crate::service::notifications::spawn_notification_listener(
|
||||
Arc::clone(&ctx.transport) as Arc<dyn crate::chat::ChatTransport>,
|
||||
move || channel_ids.clone(),
|
||||
watcher_rx_for_discord,
|
||||
|
||||
@@ -0,0 +1,240 @@
|
||||
//! Agent I/O wrappers — the ONLY place in `service/agents/` that may perform
|
||||
//! filesystem reads, process invocations, or other side effects.
|
||||
//!
|
||||
//! Every function here is a thin adapter over an existing lower-level call.
|
||||
//! No business logic lives here; all branching belongs in the pure topic files
|
||||
//! or in `mod.rs`.
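//!
//! Every adapter here follows the same one-line delegation shape — for example the
//! token-usage reader defined further down in this file:
//!
//! ```ignore
//! pub fn read_token_records(project_root: &Path) -> Result<Vec<TokenUsageRecord>, Error> {
//!     token_usage::read_all(project_root).map_err(Error::Io)
//! }
//! ```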
|
||||
use crate::agent_log::{self, LogEntry};
|
||||
use crate::agents::token_usage::{self, TokenUsageRecord};
|
||||
use crate::config::ProjectConfig;
|
||||
use crate::worktree::{self, WorktreeListEntry};
|
||||
use std::path::Path;
|
||||
|
||||
use super::Error;
|
||||
|
||||
/// Return `true` if the story's `.md` file exists in `5_done/` or `6_archived/`.
|
||||
pub fn is_archived(project_root: &Path, story_id: &str) -> bool {
|
||||
let work = project_root.join(".huskies").join("work");
|
||||
let filename = format!("{story_id}.md");
|
||||
work.join("5_done").join(&filename).exists() || work.join("6_archived").join(&filename).exists()
|
||||
}
|
||||
|
||||
/// Read and return all log entries for the most recent session of an agent.
|
||||
///
|
||||
/// Returns `Ok(vec![])` when no log file exists yet.
|
||||
pub fn read_agent_log(
|
||||
project_root: &Path,
|
||||
story_id: &str,
|
||||
agent_name: &str,
|
||||
) -> Result<Vec<LogEntry>, Error> {
|
||||
let log_path = agent_log::find_latest_log(project_root, story_id, agent_name);
|
||||
let Some(path) = log_path else {
|
||||
return Ok(Vec::new());
|
||||
};
|
||||
agent_log::read_log(&path).map_err(Error::Io)
|
||||
}
|
||||
|
||||
/// Read all token usage records from the persistent JSONL file.
|
||||
///
|
||||
/// Returns an empty vec when the file does not yet exist.
|
||||
pub fn read_token_records(project_root: &Path) -> Result<Vec<TokenUsageRecord>, Error> {
|
||||
token_usage::read_all(project_root).map_err(Error::Io)
|
||||
}
|
||||
|
||||
/// Load the project configuration from `project.toml`.
|
||||
///
|
||||
/// Falls back to default config when the file is absent.
|
||||
pub fn load_config(project_root: &Path) -> Result<ProjectConfig, Error> {
|
||||
ProjectConfig::load(project_root).map_err(Error::Config)
|
||||
}
|
||||
|
||||
/// List all worktrees under `.huskies/worktrees/`.
|
||||
pub fn list_worktrees(project_root: &Path) -> Result<Vec<WorktreeListEntry>, Error> {
|
||||
worktree::list_worktrees(project_root).map_err(Error::Io)
|
||||
}
|
||||
|
||||
/// Remove the git worktree for a story by ID.
|
||||
///
|
||||
/// Loads the project config to honour teardown commands. Returns an error if
|
||||
/// the worktree directory does not exist.
|
||||
pub async fn remove_worktree(project_root: &Path, story_id: &str) -> Result<(), Error> {
|
||||
let config = load_config(project_root)?;
|
||||
worktree::remove_worktree_by_story_id(project_root, story_id, &config)
|
||||
.await
|
||||
.map_err(Error::Worktree)
|
||||
}
|
||||
|
||||
/// Read test results persisted in a story's markdown file.
|
||||
///
|
||||
/// Returns `None` when the story has no test results section.
|
||||
pub fn read_test_results_from_file(
|
||||
project_root: &Path,
|
||||
story_id: &str,
|
||||
) -> Option<crate::workflow::StoryTestResults> {
|
||||
crate::http::workflow::read_test_results_from_story_file(project_root, story_id)
|
||||
}
|
||||
|
||||
/// Read a work item file from a pipeline stage directory.
|
||||
///
|
||||
/// Returns `Ok(Some(content))` when found, `Ok(None)` when absent.
|
||||
pub fn read_work_item_from_stage(
|
||||
work_dir: &std::path::Path,
|
||||
stage_dir: &str,
|
||||
filename: &str,
|
||||
) -> Result<Option<String>, Error> {
|
||||
let file_path = work_dir.join(stage_dir).join(filename);
|
||||
if file_path.exists() {
|
||||
let content = std::fs::read_to_string(&file_path)
|
||||
.map_err(|e| Error::Io(format!("Failed to read work item: {e}")))?;
|
||||
Ok(Some(content))
|
||||
} else {
|
||||
Ok(None)
|
||||
}
|
||||
}
|
||||
|
||||
/// Test-fixture helpers that may call `std::fs` — kept here so that
|
||||
/// `mod.rs` and topic-file `#[cfg(test)]` blocks never need to import
|
||||
/// `std::fs`, `tokio::fs`, or `std::process` directly.
|
||||
#[cfg(test)]
|
||||
pub mod test_helpers {
|
||||
use tempfile::TempDir;
|
||||
|
||||
/// Create the `.huskies/` directory.
|
||||
pub fn make_huskies_dir(tmp: &TempDir) {
|
||||
std::fs::create_dir_all(tmp.path().join(".huskies")).unwrap();
|
||||
}
|
||||
|
||||
/// Create the `5_done` and `6_archived` work-stage directories.
|
||||
pub fn make_work_dirs(tmp: &TempDir) {
|
||||
for stage in &["5_done", "6_archived"] {
|
||||
std::fs::create_dir_all(tmp.path().join(".huskies").join("work").join(stage)).unwrap();
|
||||
}
|
||||
}
|
||||
|
||||
/// Create all six pipeline stage directories under `.huskies/work/`.
|
||||
pub fn make_stage_dirs(tmp: &TempDir) {
|
||||
for stage in &[
|
||||
"1_backlog",
|
||||
"2_current",
|
||||
"3_qa",
|
||||
"4_merge",
|
||||
"5_done",
|
||||
"6_archived",
|
||||
] {
|
||||
std::fs::create_dir_all(tmp.path().join(".huskies").join("work").join(stage)).unwrap();
|
||||
}
|
||||
}
|
||||
|
||||
/// Write `.huskies/project.toml` with the given TOML content.
|
||||
pub fn make_project_toml(tmp: &TempDir, content: &str) {
|
||||
let sk_dir = tmp.path().join(".huskies");
|
||||
std::fs::create_dir_all(&sk_dir).unwrap();
|
||||
std::fs::write(sk_dir.join("project.toml"), content).unwrap();
|
||||
}
|
||||
|
||||
/// Write a fixture file at `relative_path` (relative to the tmp root).
|
||||
pub fn write_story_file(tmp: &TempDir, relative_path: &str, content: &str) {
|
||||
let path = tmp.path().join(relative_path);
|
||||
if let Some(parent) = path.parent() {
|
||||
std::fs::create_dir_all(parent).unwrap();
|
||||
}
|
||||
std::fs::write(path, content).unwrap();
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use tempfile::TempDir;
|
||||
|
||||
fn make_work_dirs(tmp: &TempDir) {
|
||||
for stage in &["5_done", "6_archived"] {
|
||||
std::fs::create_dir_all(tmp.path().join(".huskies").join("work").join(stage)).unwrap();
|
||||
}
|
||||
}
|
||||
|
||||
// ── is_archived ───────────────────────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn is_archived_false_when_file_absent() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
make_work_dirs(&tmp);
|
||||
assert!(!is_archived(tmp.path(), "42_story_foo"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn is_archived_true_when_in_5_done() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
make_work_dirs(&tmp);
|
||||
std::fs::write(
|
||||
tmp.path().join(".huskies/work/5_done/42_story_foo.md"),
|
||||
"---\nname: test\n---\n",
|
||||
)
|
||||
.unwrap();
|
||||
assert!(is_archived(tmp.path(), "42_story_foo"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn is_archived_true_when_in_6_archived() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
make_work_dirs(&tmp);
|
||||
std::fs::write(
|
||||
tmp.path().join(".huskies/work/6_archived/42_story_foo.md"),
|
||||
"---\nname: test\n---\n",
|
||||
)
|
||||
.unwrap();
|
||||
assert!(is_archived(tmp.path(), "42_story_foo"));
|
||||
}
|
||||
|
||||
// ── read_agent_log ────────────────────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn read_agent_log_returns_empty_when_no_log() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
let entries = read_agent_log(tmp.path(), "42_story_foo", "coder-1").unwrap();
|
||||
assert!(entries.is_empty());
|
||||
}
|
||||
|
||||
// ── read_token_records ────────────────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn read_token_records_returns_empty_when_no_file() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
let records = read_token_records(tmp.path()).unwrap();
|
||||
assert!(records.is_empty());
|
||||
}
|
||||
|
||||
// ── load_config ───────────────────────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn load_config_returns_default_when_no_file() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
std::fs::create_dir_all(tmp.path().join(".huskies")).unwrap();
|
||||
let config = load_config(tmp.path()).unwrap();
|
||||
// Default config has one "default" agent
|
||||
assert_eq!(config.agent.len(), 1);
|
||||
assert_eq!(config.agent[0].name, "default");
|
||||
}
|
||||
|
||||
// ── list_worktrees ────────────────────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn list_worktrees_empty_when_no_dir() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
let entries = list_worktrees(tmp.path()).unwrap();
|
||||
assert!(entries.is_empty());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn list_worktrees_returns_subdirs() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
let wt_dir = tmp.path().join(".huskies").join("worktrees");
|
||||
std::fs::create_dir_all(wt_dir.join("42_story_foo")).unwrap();
|
||||
std::fs::create_dir_all(wt_dir.join("43_story_bar")).unwrap();
|
||||
let mut entries = list_worktrees(tmp.path()).unwrap();
|
||||
entries.sort_by(|a, b| a.story_id.cmp(&b.story_id));
|
||||
assert_eq!(entries.len(), 2);
|
||||
assert_eq!(entries[0].story_id, "42_story_foo");
|
||||
assert_eq!(entries[1].story_id, "43_story_bar");
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,451 @@
|
||||
//! Agent service — public API for the agent domain.
|
||||
//!
|
||||
//! This module orchestrates calls to `io.rs` (side effects) and the pure
|
||||
//! topic modules (`selection`, `token`) to implement the full agent service
|
||||
//! surface. HTTP handlers call these functions instead of reaching directly
|
||||
//! into `AgentPool` or the filesystem.
|
||||
//!
|
||||
//! Conventions: `docs/architecture/service-modules.md`
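//!
//! A minimal sketch of the intended call pattern from an HTTP handler. The handler
//! shape, the `ctx` fields, and the helper names below are assumptions made purely
//! for illustration; the real handlers and their status mapping follow the
//! conventions doc above:
//!
//! ```ignore
//! let root = ctx.project_root.lock().unwrap().clone();
//! match crate::service::agents::list_agents(&ctx.pool, root.as_deref()) {
//!     Ok(agents) => respond_json(agents),          // 200 (respond_json is hypothetical)
//!     Err(e) => respond_error(500, e.to_string()), // mapping per the conventions doc
//! }
//! ```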
|
||||
mod io;
|
||||
pub mod selection;
|
||||
pub mod token;
|
||||
|
||||
use crate::agents::AgentInfo;
|
||||
use crate::agents::AgentPool;
|
||||
use crate::agents::token_usage::TokenUsageRecord;
|
||||
use crate::config::ProjectConfig;
|
||||
use crate::workflow::StoryTestResults;
|
||||
use crate::worktree::{WorktreeInfo, WorktreeListEntry};
|
||||
use std::path::Path;
|
||||
|
||||
pub use io::is_archived;
|
||||
pub use token::TokenCostSummary;
|
||||
|
||||
// ── Error type ────────────────────────────────────────────────────────────────
|
||||
|
||||
/// Typed errors returned by `service::agents` functions.
|
||||
///
|
||||
/// HTTP handlers map these to specific status codes — see the conventions doc
|
||||
/// for the full mapping table.
|
||||
#[derive(Debug)]
|
||||
pub enum Error {
|
||||
/// No agent with the given name/story exists in the pool.
|
||||
AgentNotFound(String),
|
||||
/// No work item found for the requested story ID.
|
||||
WorkItemNotFound(String),
|
||||
/// A worktree operation failed.
|
||||
Worktree(String),
|
||||
/// Project configuration could not be loaded.
|
||||
Config(String),
|
||||
/// A filesystem or I/O operation failed.
|
||||
Io(String),
|
||||
}
|
||||
|
||||
impl std::fmt::Display for Error {
|
||||
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
|
||||
match self {
|
||||
Self::AgentNotFound(msg) => write!(f, "Agent not found: {msg}"),
|
||||
Self::WorkItemNotFound(msg) => write!(f, "Work item not found: {msg}"),
|
||||
Self::Worktree(msg) => write!(f, "Worktree error: {msg}"),
|
||||
Self::Config(msg) => write!(f, "Config error: {msg}"),
|
||||
Self::Io(msg) => write!(f, "I/O error: {msg}"),
|
||||
}
|
||||
}
|
||||
}
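// One plausible handler-side mapping for the variants above (illustration only;
// the authoritative table lives in docs/architecture/service-modules.md, and the
// 4xx/5xx choices shown here are assumptions):
//
//   AgentNotFound / WorkItemNotFound -> 404 Not Found
//   Config                           -> 400 Bad Request
//   Worktree / Io                    -> 500 Internal Server Error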
|
||||
|
||||
// ── Shared service types ─────────────────────────────────────────────────────
|
||||
|
||||
/// Content and metadata for a work-item (story) file.
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct WorkItemContent {
|
||||
pub content: String,
|
||||
pub stage: String,
|
||||
pub name: Option<String>,
|
||||
pub agent: Option<String>,
|
||||
}
|
||||
|
||||
/// A single entry in the project's configured agent roster.
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct AgentConfigEntry {
|
||||
pub name: String,
|
||||
pub role: String,
|
||||
pub stage: Option<String>,
|
||||
pub model: Option<String>,
|
||||
pub allowed_tools: Option<Vec<String>>,
|
||||
pub max_turns: Option<u32>,
|
||||
pub max_budget_usd: Option<f64>,
|
||||
}
|
||||
|
||||
// ── Public API ────────────────────────────────────────────────────────────────
|
||||
|
||||
/// Start an agent for a story.
|
||||
///
|
||||
/// Takes only what it needs: the pool (for spawning) and the project root
|
||||
/// (for config and worktree creation). Does not touch `AppContext`.
|
||||
pub async fn start_agent(
|
||||
pool: &AgentPool,
|
||||
project_root: &Path,
|
||||
story_id: &str,
|
||||
agent_name: Option<&str>,
|
||||
resume_context: Option<&str>,
|
||||
session_id_to_resume: Option<String>,
|
||||
) -> Result<AgentInfo, Error> {
|
||||
pool.start_agent(
|
||||
project_root,
|
||||
story_id,
|
||||
agent_name,
|
||||
resume_context,
|
||||
session_id_to_resume,
|
||||
)
|
||||
.await
|
||||
.map_err(Error::AgentNotFound)
|
||||
}
|
||||
|
||||
/// Stop a running agent.
|
||||
pub async fn stop_agent(
|
||||
pool: &AgentPool,
|
||||
project_root: &Path,
|
||||
story_id: &str,
|
||||
agent_name: &str,
|
||||
) -> Result<(), Error> {
|
||||
pool.stop_agent(project_root, story_id, agent_name)
|
||||
.await
|
||||
.map_err(Error::AgentNotFound)
|
||||
}
|
||||
|
||||
/// List all agents, optionally filtering out those belonging to archived stories.
|
||||
///
|
||||
/// When `project_root` is `None` the archive filter is skipped and all agents
|
||||
/// are returned (safe default when the server is not yet fully configured).
|
||||
pub fn list_agents(pool: &AgentPool, project_root: Option<&Path>) -> Result<Vec<AgentInfo>, Error> {
|
||||
let agents = pool.list_agents().map_err(Error::Io)?;
|
||||
match project_root {
|
||||
Some(root) => Ok(selection::filter_non_archived(agents, |id| {
|
||||
io::is_archived(root, id)
|
||||
})),
|
||||
None => Ok(agents),
|
||||
}
|
||||
}
|
||||
|
||||
/// Create a git worktree for a story.
|
||||
pub async fn create_worktree(
|
||||
pool: &AgentPool,
|
||||
project_root: &Path,
|
||||
story_id: &str,
|
||||
) -> Result<WorktreeInfo, Error> {
|
||||
pool.create_worktree(project_root, story_id)
|
||||
.await
|
||||
.map_err(Error::Worktree)
|
||||
}
|
||||
|
||||
/// List all worktrees under `.huskies/worktrees/`.
|
||||
pub fn list_worktrees(project_root: &Path) -> Result<Vec<WorktreeListEntry>, Error> {
|
||||
io::list_worktrees(project_root)
|
||||
}
|
||||
|
||||
/// Remove the git worktree for a story.
|
||||
pub async fn remove_worktree(project_root: &Path, story_id: &str) -> Result<(), Error> {
|
||||
io::remove_worktree(project_root, story_id).await
|
||||
}
|
||||
|
||||
/// Get the configured agent roster from `project.toml`.
|
||||
pub fn get_agent_config(project_root: &Path) -> Result<Vec<AgentConfigEntry>, Error> {
|
||||
let config = io::load_config(project_root)?;
|
||||
Ok(config_to_entries(&config))
|
||||
}
|
||||
|
||||
/// Reload and return the project's agent configuration.
|
||||
///
|
||||
/// Semantically identical to `get_agent_config`; provided as a distinct
|
||||
/// function so callers can express intent (UI "Reload" button).
|
||||
pub fn reload_config(project_root: &Path) -> Result<Vec<AgentConfigEntry>, Error> {
|
||||
get_agent_config(project_root)
|
||||
}
|
||||
|
||||
/// Get the concatenated output text for an agent's most recent session.
|
||||
///
|
||||
/// Returns an empty string when no log file exists yet.
|
||||
pub fn get_agent_output(
|
||||
project_root: &Path,
|
||||
story_id: &str,
|
||||
agent_name: &str,
|
||||
) -> Result<String, Error> {
|
||||
let entries = io::read_agent_log(project_root, story_id, agent_name)?;
|
||||
Ok(selection::collect_output_text(&entries))
|
||||
}
|
||||
|
||||
/// Get the markdown content and metadata for a work item.
|
||||
///
|
||||
/// Searches all pipeline stage directories, falling back to the CRDT content
|
||||
/// store when no file is present on disk. Returns `Error::WorkItemNotFound`
|
||||
/// when neither source has the item.
|
||||
pub fn get_work_item_content(
|
||||
project_root: &Path,
|
||||
story_id: &str,
|
||||
) -> Result<WorkItemContent, Error> {
|
||||
let stages = [
|
||||
("1_backlog", "backlog"),
|
||||
("2_current", "current"),
|
||||
("3_qa", "qa"),
|
||||
("4_merge", "merge"),
|
||||
("5_done", "done"),
|
||||
("6_archived", "archived"),
|
||||
];
|
||||
|
||||
let work_dir = project_root.join(".huskies").join("work");
|
||||
let filename = format!("{story_id}.md");
|
||||
|
||||
for (stage_dir, stage_name) in &stages {
|
||||
if let Some(content) = io::read_work_item_from_stage(&work_dir, stage_dir, &filename)? {
|
||||
let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
|
||||
return Ok(WorkItemContent {
|
||||
content,
|
||||
stage: stage_name.to_string(),
|
||||
name: metadata.as_ref().and_then(|m| m.name.clone()),
|
||||
agent: metadata.and_then(|m| m.agent),
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// CRDT-only fallback
|
||||
if let Some(content) = crate::db::read_content(story_id) {
|
||||
let item = crate::pipeline_state::read_typed(story_id)
|
||||
.map_err(|e| Error::Io(format!("Pipeline read error: {e}")))?;
|
||||
let stage = item
|
||||
.as_ref()
|
||||
.map(|i| match &i.stage {
|
||||
crate::pipeline_state::Stage::Backlog => "backlog",
|
||||
crate::pipeline_state::Stage::Coding => "current",
|
||||
crate::pipeline_state::Stage::Qa => "qa",
|
||||
crate::pipeline_state::Stage::Merge { .. } => "merge",
|
||||
crate::pipeline_state::Stage::Done { .. } => "done",
|
||||
crate::pipeline_state::Stage::Archived { .. } => "archived",
|
||||
})
|
||||
.unwrap_or("unknown")
|
||||
.to_string();
|
||||
let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
|
||||
return Ok(WorkItemContent {
|
||||
content,
|
||||
stage,
|
||||
name: metadata.as_ref().and_then(|m| m.name.clone()),
|
||||
agent: metadata.and_then(|m| m.agent),
|
||||
});
|
||||
}
|
||||
|
||||
Err(Error::WorkItemNotFound(format!(
|
||||
"Work item not found: {story_id}"
|
||||
)))
|
||||
}
|
||||
|
||||
/// Get test results for a work item.
|
||||
///
|
||||
/// Checks in-memory workflow state first (fast path), then falls back to
|
||||
/// results persisted in the story file.
|
||||
pub fn get_test_results(
|
||||
project_root: &Path,
|
||||
story_id: &str,
|
||||
workflow: &crate::workflow::WorkflowState,
|
||||
) -> Option<StoryTestResults> {
|
||||
if let Some(results) = workflow.results.get(story_id) {
|
||||
return Some(results.clone());
|
||||
}
|
||||
io::read_test_results_from_file(project_root, story_id)
|
||||
}
|
||||
|
||||
/// Get the aggregated token cost for a specific story.
|
||||
pub fn get_work_item_token_cost(
|
||||
project_root: &Path,
|
||||
story_id: &str,
|
||||
) -> Result<TokenCostSummary, Error> {
|
||||
let records = io::read_token_records(project_root)?;
|
||||
Ok(token::aggregate_for_story(&records, story_id))
|
||||
}
|
||||
|
||||
/// Get all token usage records across all stories.
|
||||
pub fn get_all_token_usage(project_root: &Path) -> Result<Vec<TokenUsageRecord>, Error> {
|
||||
io::read_token_records(project_root)
|
||||
}
|
||||
|
||||
// ── Helpers ───────────────────────────────────────────────────────────────────
|
||||
|
||||
fn config_to_entries(config: &ProjectConfig) -> Vec<AgentConfigEntry> {
|
||||
config
|
||||
.agent
|
||||
.iter()
|
||||
.map(|a| AgentConfigEntry {
|
||||
name: a.name.clone(),
|
||||
role: a.role.clone(),
|
||||
stage: a.stage.clone(),
|
||||
model: a.model.clone(),
|
||||
allowed_tools: a.allowed_tools.clone(),
|
||||
max_turns: a.max_turns,
|
||||
max_budget_usd: a.max_budget_usd,
|
||||
})
|
||||
.collect()
|
||||
}
|
||||
|
||||
// ── Integration tests ─────────────────────────────────────────────────────────
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use crate::agents::AgentStatus;
|
||||
use io::test_helpers::*;
|
||||
use std::sync::Arc;
|
||||
use tempfile::TempDir;
|
||||
|
||||
fn make_pool(tmp: &TempDir) -> Arc<AgentPool> {
|
||||
let (tx, _) = tokio::sync::broadcast::channel(64);
|
||||
let pool = AgentPool::new(3001, tx);
|
||||
let state = crate::state::SessionState::default();
|
||||
*state.project_root.lock().unwrap() = Some(tmp.path().to_path_buf());
|
||||
Arc::new(pool)
|
||||
}
|
||||
|
||||
// ── list_agents ───────────────────────────────────────────────────────────
|
||||
|
||||
#[tokio::test]
|
||||
async fn list_agents_excludes_archived_stories() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
make_work_dirs(&tmp);
|
||||
write_story_file(
|
||||
&tmp,
|
||||
".huskies/work/6_archived/79_story_archived.md",
|
||||
"---\nname: archived\n---\n",
|
||||
);
|
||||
|
||||
let pool = make_pool(&tmp);
|
||||
pool.inject_test_agent("79_story_archived", "coder-1", AgentStatus::Completed);
|
||||
pool.inject_test_agent("80_story_active", "coder-1", AgentStatus::Running);
|
||||
|
||||
let agents = list_agents(&pool, Some(tmp.path())).unwrap();
|
||||
assert!(!agents.iter().any(|a| a.story_id == "79_story_archived"));
|
||||
assert!(agents.iter().any(|a| a.story_id == "80_story_active"));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn list_agents_includes_all_when_no_project_root() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
let pool = make_pool(&tmp);
|
||||
pool.inject_test_agent("42_story_whatever", "coder-1", AgentStatus::Completed);
|
||||
|
||||
let agents = list_agents(&pool, None).unwrap();
|
||||
assert!(agents.iter().any(|a| a.story_id == "42_story_whatever"));
|
||||
}
|
||||
|
||||
// ── get_agent_config ──────────────────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn get_agent_config_returns_default_when_no_toml() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
make_huskies_dir(&tmp);
|
||||
let entries = get_agent_config(tmp.path()).unwrap();
|
||||
assert_eq!(entries.len(), 1);
|
||||
assert_eq!(entries[0].name, "default");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn get_agent_config_returns_configured_agents() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
make_project_toml(
|
||||
&tmp,
|
||||
r#"
|
||||
[[agent]]
|
||||
name = "coder-1"
|
||||
role = "Full-stack engineer"
|
||||
model = "sonnet"
|
||||
max_turns = 30
|
||||
max_budget_usd = 5.0
|
||||
"#,
|
||||
);
|
||||
let entries = get_agent_config(tmp.path()).unwrap();
|
||||
assert_eq!(entries.len(), 1);
|
||||
assert_eq!(entries[0].name, "coder-1");
|
||||
assert_eq!(entries[0].model, Some("sonnet".to_string()));
|
||||
assert_eq!(entries[0].max_turns, Some(30));
|
||||
}
|
||||
|
||||
// ── get_agent_output ──────────────────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn get_agent_output_returns_empty_when_no_log() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
let output = get_agent_output(tmp.path(), "42_story_foo", "coder-1").unwrap();
|
||||
assert_eq!(output, "");
|
||||
}
|
||||
|
||||
// ── get_work_item_content ─────────────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn get_work_item_content_reads_from_backlog() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
make_stage_dirs(&tmp);
|
||||
write_story_file(
|
||||
&tmp,
|
||||
".huskies/work/1_backlog/42_story_foo.md",
|
||||
"---\nname: \"Foo Story\"\n---\n\nSome content.",
|
||||
);
|
||||
let item = get_work_item_content(tmp.path(), "42_story_foo").unwrap();
|
||||
assert!(item.content.contains("Some content."));
|
||||
assert_eq!(item.stage, "backlog");
|
||||
assert_eq!(item.name, Some("Foo Story".to_string()));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn get_work_item_content_returns_not_found_for_absent_story() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
make_stage_dirs(&tmp);
|
||||
let result = get_work_item_content(tmp.path(), "99_story_nonexistent");
|
||||
assert!(matches!(result, Err(Error::WorkItemNotFound(_))));
|
||||
}
|
||||
|
||||
// ── get_work_item_token_cost ──────────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn get_work_item_token_cost_returns_zero_when_no_records() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
let summary = get_work_item_token_cost(tmp.path(), "42_story_foo").unwrap();
|
||||
assert_eq!(summary.total_cost_usd, 0.0);
|
||||
assert!(summary.agents.is_empty());
|
||||
}
|
||||
|
||||
// ── get_all_token_usage ───────────────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn get_all_token_usage_returns_empty_when_no_file() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
let records = get_all_token_usage(tmp.path()).unwrap();
|
||||
assert!(records.is_empty());
|
||||
}
|
||||
|
||||
// ── get_test_results ──────────────────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn get_test_results_returns_none_when_no_results() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
let workflow = crate::workflow::WorkflowState::default();
|
||||
let result = get_test_results(tmp.path(), "42_story_foo", &workflow);
|
||||
assert!(result.is_none());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn get_test_results_returns_in_memory_results_first() {
|
||||
let tmp = TempDir::new().unwrap();
|
||||
let mut workflow = crate::workflow::WorkflowState::default();
|
||||
workflow
|
||||
.record_test_results_validated(
|
||||
"42_story_foo".to_string(),
|
||||
vec![crate::workflow::TestCaseResult {
|
||||
name: "test1".to_string(),
|
||||
status: crate::workflow::TestStatus::Pass,
|
||||
details: None,
|
||||
}],
|
||||
vec![],
|
||||
)
|
||||
.unwrap();
|
||||
let result =
|
||||
get_test_results(tmp.path(), "42_story_foo", &workflow).expect("should have results");
|
||||
assert_eq!(result.unit.len(), 1);
|
||||
assert_eq!(result.unit[0].name, "test1");
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,171 @@
|
||||
//! Pure agent selection and filtering logic — no I/O, no side effects.
|
||||
//!
|
||||
//! All functions in this module are pure: they take data, transform it, and
|
||||
//! return a result without touching the filesystem, network, or any mutable
|
||||
//! global state. This makes them fast to test without tempdirs or async runtimes.
|
||||
use crate::agent_log::LogEntry;
|
||||
use crate::agents::AgentInfo;
|
||||
|
||||
/// Filter a list of agents, removing any whose story is archived.
|
||||
///
|
||||
/// `is_archived` is a predicate injected by the caller — typically a closure
|
||||
/// over the project root that calls `io::is_archived`. This keeps the function
|
||||
/// pure: it never touches the filesystem itself.
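///
/// Minimal illustration of that wiring (it mirrors the actual call in `mod.rs`):
///
/// ```ignore
/// let visible = filter_non_archived(agents, |id| io::is_archived(root, id));
/// ```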
|
||||
pub fn filter_non_archived<F>(agents: Vec<AgentInfo>, is_archived: F) -> Vec<AgentInfo>
|
||||
where
|
||||
F: Fn(&str) -> bool,
|
||||
{
|
||||
agents
|
||||
.into_iter()
|
||||
.filter(|info| !is_archived(&info.story_id))
|
||||
.collect()
|
||||
}
|
||||
|
||||
/// Concatenate the text of all `output` events from an agent log.
|
||||
///
|
||||
/// Non-output events (status, done, error, agent_json, thinking) are silently
|
||||
/// skipped. Returns an empty string when `entries` is empty or contains no
|
||||
/// output events.
|
||||
pub fn collect_output_text(entries: &[LogEntry]) -> String {
|
||||
entries
|
||||
.iter()
|
||||
.filter(|e| e.event.get("type").and_then(|t| t.as_str()) == Some("output"))
|
||||
.filter_map(|e| {
|
||||
e.event
|
||||
.get("text")
|
||||
.and_then(|t| t.as_str())
|
||||
.map(str::to_owned)
|
||||
})
|
||||
.collect()
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use crate::agents::AgentStatus;
|
||||
|
||||
fn make_agent(story_id: &str) -> AgentInfo {
|
||||
AgentInfo {
|
||||
story_id: story_id.to_string(),
|
||||
agent_name: "coder-1".to_string(),
|
||||
status: AgentStatus::Running,
|
||||
session_id: None,
|
||||
worktree_path: None,
|
||||
base_branch: None,
|
||||
completion: None,
|
||||
log_session_id: None,
|
||||
throttled: false,
|
||||
}
|
||||
}
|
||||
|
||||
fn make_log_entry(event_type: &str, text: Option<&str>) -> LogEntry {
|
||||
let mut obj = serde_json::Map::new();
|
||||
obj.insert(
|
||||
"type".to_string(),
|
||||
serde_json::Value::String(event_type.to_string()),
|
||||
);
|
||||
if let Some(t) = text {
|
||||
obj.insert("text".to_string(), serde_json::Value::String(t.to_string()));
|
||||
}
|
||||
LogEntry {
|
||||
timestamp: "2024-01-01T00:00:00Z".to_string(),
|
||||
event: serde_json::Value::Object(obj),
|
||||
}
|
||||
}
|
||||
|
||||
// ── filter_non_archived ───────────────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn filter_keeps_non_archived_agents() {
|
||||
let agents = vec![make_agent("10_active"), make_agent("11_active")];
|
||||
let result = filter_non_archived(agents, |_| false);
|
||||
assert_eq!(result.len(), 2);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn filter_removes_archived_agents() {
|
||||
let agents = vec![make_agent("10_archived"), make_agent("11_active")];
|
||||
let result = filter_non_archived(agents, |id| id == "10_archived");
|
||||
assert_eq!(result.len(), 1);
|
||||
assert_eq!(result[0].story_id, "11_active");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn filter_removes_all_when_all_archived() {
|
||||
let agents = vec![make_agent("10_a"), make_agent("11_b")];
|
||||
let result = filter_non_archived(agents, |_| true);
|
||||
assert!(result.is_empty());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn filter_returns_empty_for_empty_input() {
|
||||
let result = filter_non_archived(vec![], |_| false);
|
||||
assert!(result.is_empty());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn filter_preserves_order() {
|
||||
let agents = vec![
|
||||
make_agent("1_a"),
|
||||
make_agent("2_b"),
|
||||
make_agent("3_c"),
|
||||
make_agent("4_d"),
|
||||
];
|
||||
let result = filter_non_archived(agents, |id| id == "2_b");
|
||||
assert_eq!(result.len(), 3);
|
||||
assert_eq!(result[0].story_id, "1_a");
|
||||
assert_eq!(result[1].story_id, "3_c");
|
||||
assert_eq!(result[2].story_id, "4_d");
|
||||
}
|
||||
|
||||
// ── collect_output_text ───────────────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn collect_output_text_empty_entries() {
|
||||
let result = collect_output_text(&[]);
|
||||
assert_eq!(result, "");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn collect_output_text_skips_non_output_events() {
|
||||
let entries = vec![
|
||||
make_log_entry("status", Some("running")),
|
||||
make_log_entry("done", None),
|
||||
];
|
||||
let result = collect_output_text(&entries);
|
||||
assert_eq!(result, "");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn collect_output_text_concatenates_output_events() {
|
||||
let entries = vec![
|
||||
make_log_entry("output", Some("Hello ")),
|
||||
make_log_entry("output", Some("world\n")),
|
||||
];
|
||||
let result = collect_output_text(&entries);
|
||||
assert_eq!(result, "Hello world\n");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn collect_output_text_skips_output_without_text_field() {
|
||||
let entry = LogEntry {
|
||||
timestamp: "2024-01-01T00:00:00Z".to_string(),
|
||||
event: serde_json::json!({"type": "output"}),
|
||||
};
|
||||
let result = collect_output_text(&[entry]);
|
||||
assert_eq!(result, "");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn collect_output_text_mixed_event_types() {
|
||||
let entries = vec![
|
||||
make_log_entry("status", Some("running")),
|
||||
make_log_entry("output", Some("line1\n")),
|
||||
make_log_entry("agent_json", None),
|
||||
make_log_entry("output", Some("line2\n")),
|
||||
make_log_entry("done", None),
|
||||
];
|
||||
let result = collect_output_text(&entries);
|
||||
assert_eq!(result, "line1\nline2\n");
|
||||
}
|
||||
}
|
||||
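The injected-predicate pattern above keeps the filter pure; a minimal caller sketch follows. The shape of `io::is_archived` is assumed here (project root plus story ID), not taken from this diff.

```rust
// Sketch only: the caller owns the side effect and passes it in as a closure.
// `super::io::is_archived(project_root, story_id)` is an assumed signature.
fn visible_agents(project_root: &std::path::Path, agents: Vec<AgentInfo>) -> Vec<AgentInfo> {
    filter_non_archived(agents, |story_id| super::io::is_archived(project_root, story_id))
}
```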
@@ -0,0 +1,160 @@
//! Pure token usage aggregation — no I/O, no side effects.
//!
//! Functions here take slices of `TokenUsageRecord` (already loaded by `io.rs`)
//! and compute summaries. Tests cover every branch without touching the filesystem.

use crate::agents::token_usage::TokenUsageRecord;
use std::collections::HashMap;

/// Per-agent cost breakdown entry.
#[derive(Debug, Clone, PartialEq)]
pub struct AgentTokenCost {
    pub agent_name: String,
    pub model: Option<String>,
    pub input_tokens: u64,
    pub output_tokens: u64,
    pub cache_creation_input_tokens: u64,
    pub cache_read_input_tokens: u64,
    pub total_cost_usd: f64,
}

/// Aggregated token cost for a story.
#[derive(Debug, Clone, PartialEq)]
pub struct TokenCostSummary {
    pub total_cost_usd: f64,
    pub agents: Vec<AgentTokenCost>,
}

/// Aggregate token usage records for a single story.
///
/// Records for other stories are ignored. The returned `agents` list is sorted
/// alphabetically by `agent_name` for deterministic output. Returns a zero-cost
/// summary when no records match the given `story_id`.
pub fn aggregate_for_story(records: &[TokenUsageRecord], story_id: &str) -> TokenCostSummary {
    let mut agent_map: HashMap<String, AgentTokenCost> = HashMap::new();
    let mut total_cost_usd = 0.0_f64;

    for record in records.iter().filter(|r| r.story_id == story_id) {
        total_cost_usd += record.usage.total_cost_usd;
        let entry = agent_map
            .entry(record.agent_name.clone())
            .or_insert_with(|| AgentTokenCost {
                agent_name: record.agent_name.clone(),
                model: record.model.clone(),
                input_tokens: 0,
                output_tokens: 0,
                cache_creation_input_tokens: 0,
                cache_read_input_tokens: 0,
                total_cost_usd: 0.0,
            });
        entry.input_tokens += record.usage.input_tokens;
        entry.output_tokens += record.usage.output_tokens;
        entry.cache_creation_input_tokens += record.usage.cache_creation_input_tokens;
        entry.cache_read_input_tokens += record.usage.cache_read_input_tokens;
        entry.total_cost_usd += record.usage.total_cost_usd;
    }

    let mut agents: Vec<AgentTokenCost> = agent_map.into_values().collect();
    agents.sort_by(|a, b| a.agent_name.cmp(&b.agent_name));

    TokenCostSummary {
        total_cost_usd,
        agents,
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::agents::TokenUsage;

    fn make_record(story_id: &str, agent: &str, cost: f64) -> TokenUsageRecord {
        TokenUsageRecord {
            story_id: story_id.to_string(),
            agent_name: agent.to_string(),
            timestamp: "2024-01-01T00:00:00Z".to_string(),
            model: None,
            usage: TokenUsage {
                input_tokens: 100,
                output_tokens: 50,
                cache_creation_input_tokens: 10,
                cache_read_input_tokens: 20,
                total_cost_usd: cost,
            },
        }
    }

    #[test]
    fn aggregate_returns_zero_when_no_records() {
        let summary = aggregate_for_story(&[], "42_story_foo");
        assert_eq!(summary.total_cost_usd, 0.0);
        assert!(summary.agents.is_empty());
    }

    #[test]
    fn aggregate_filters_to_story_id() {
        let records = vec![
            make_record("42_story_foo", "coder-1", 1.0),
            make_record("99_story_other", "coder-1", 5.0),
        ];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert!((summary.total_cost_usd - 1.0).abs() < f64::EPSILON);
        assert_eq!(summary.agents.len(), 1);
    }

    #[test]
    fn aggregate_sums_tokens_per_agent() {
        let records = vec![
            make_record("42_story_foo", "coder-1", 1.0),
            make_record("42_story_foo", "coder-1", 2.0),
        ];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert!((summary.total_cost_usd - 3.0).abs() < f64::EPSILON);
        assert_eq!(summary.agents.len(), 1);
        assert_eq!(summary.agents[0].input_tokens, 200);
        assert_eq!(summary.agents[0].output_tokens, 100);
        assert!((summary.agents[0].total_cost_usd - 3.0).abs() < f64::EPSILON);
    }

    #[test]
    fn aggregate_splits_by_agent() {
        let records = vec![
            make_record("42_story_foo", "coder-1", 1.0),
            make_record("42_story_foo", "qa", 0.5),
        ];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert!((summary.total_cost_usd - 1.5).abs() < f64::EPSILON);
        assert_eq!(summary.agents.len(), 2);
        // sorted alphabetically
        assert_eq!(summary.agents[0].agent_name, "coder-1");
        assert_eq!(summary.agents[1].agent_name, "qa");
    }

    #[test]
    fn aggregate_sorts_agents_alphabetically() {
        let records = vec![
            make_record("42_story_foo", "z-agent", 1.0),
            make_record("42_story_foo", "a-agent", 1.0),
            make_record("42_story_foo", "m-agent", 1.0),
        ];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert_eq!(summary.agents[0].agent_name, "a-agent");
        assert_eq!(summary.agents[1].agent_name, "m-agent");
        assert_eq!(summary.agents[2].agent_name, "z-agent");
    }

    #[test]
    fn aggregate_returns_zero_when_no_matching_story() {
        let records = vec![make_record("99_other", "coder-1", 5.0)];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert_eq!(summary.total_cost_usd, 0.0);
        assert!(summary.agents.is_empty());
    }

    #[test]
    fn aggregate_preserves_model_from_first_record() {
        let mut r = make_record("42_story_foo", "coder-1", 1.0);
        r.model = Some("claude-sonnet".to_string());
        let summary = aggregate_for_story(&[r], "42_story_foo");
        assert_eq!(summary.agents[0].model, Some("claude-sonnet".to_string()));
    }
}
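A brief usage sketch: the records are whatever the module's `io.rs` loaded for the project, and the story ID shown here is illustrative; the aggregation itself stays pure.

```rust
// Sketch: format a one-line cost summary for a story from already-loaded records.
fn story_cost_line(records: &[TokenUsageRecord]) -> String {
    let summary = aggregate_for_story(records, "42_story_foo");
    format!(
        "${:.4} across {} agent(s)",
        summary.total_cost_usd,
        summary.agents.len()
    )
}
```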
@@ -0,0 +1,100 @@
//! Anthropic I/O — the ONLY place in `service/anthropic/` that may perform
//! network requests or store operations.
//!
//! Every function here is a thin adapter that converts lower-level errors
//! into the typed [`super::Error`] variants. No business logic or branching
//! lives here; that belongs in `mod.rs`.

use super::{Error, ModelSummary, ModelsResponse};
use crate::store::StoreOps;
use reqwest::header::{HeaderMap, HeaderValue};

/// Store key for the Anthropic API key — shared with `llm::chat`.
pub(crate) const KEY_ANTHROPIC_API_KEY: &str = "anthropic_api_key";

const ANTHROPIC_VERSION: &str = "2023-06-01";

/// Return whether a non-empty API key is stored.
pub(super) fn api_key_exists(store: &dyn StoreOps) -> bool {
    match store.get(KEY_ANTHROPIC_API_KEY) {
        Some(value) => value.as_str().map(|k| !k.is_empty()).unwrap_or(false),
        None => false,
    }
}

/// Read the stored API key, returning a typed error when absent or invalid.
pub(super) fn get_api_key(store: &dyn StoreOps) -> Result<String, Error> {
    match store.get(KEY_ANTHROPIC_API_KEY) {
        Some(value) => {
            if let Some(key) = value.as_str() {
                if key.is_empty() {
                    Err(Error::Validation(
                        "Anthropic API key is empty. Please set your API key.".to_string(),
                    ))
                } else {
                    Ok(key.to_string())
                }
            } else {
                Err(Error::Validation(
                    "Stored API key is not a string".to_string(),
                ))
            }
        }
        None => Err(Error::Validation(
            "Anthropic API key not found. Please set your API key.".to_string(),
        )),
    }
}

/// Persist a new API key to the store.
pub(super) fn save_api_key(store: &dyn StoreOps, api_key: &str) -> Result<(), String> {
    store.set(KEY_ANTHROPIC_API_KEY, serde_json::json!(api_key));
    store.save()
}

/// Fetch models from the Anthropic API at `url`.
pub(super) async fn fetch_models(api_key: &str, url: &str) -> Result<Vec<ModelSummary>, Error> {
    let client = reqwest::Client::new();
    let mut headers = HeaderMap::new();
    headers.insert(
        "x-api-key",
        HeaderValue::from_str(api_key)
            .map_err(|e| Error::Validation(format!("Invalid API key header value: {e}")))?,
    );
    headers.insert(
        "anthropic-version",
        HeaderValue::from_static(ANTHROPIC_VERSION),
    );

    let response = client
        .get(url)
        .headers(headers)
        .send()
        .await
        .map_err(|e| Error::UpstreamApi(e.to_string()))?;

    if !response.status().is_success() {
        let status = response.status();
        let error_text = response
            .text()
            .await
            .unwrap_or_else(|_| "Unknown error".to_string());
        return Err(Error::UpstreamApi(format!(
            "Anthropic API error {status}: {error_text}"
        )));
    }

    let body = response
        .json::<ModelsResponse>()
        .await
        .map_err(|e| Error::Internal(format!("Failed to parse response: {e}")))?;

    Ok(body
        .data
        .into_iter()
        .map(|m| ModelSummary {
            id: m.id,
            context_window: m.context_window,
        })
        .collect())
}
@@ -0,0 +1,178 @@
//! Anthropic service — public API for Anthropic API-key management and model listing.
//!
//! Exposes functions to check, store, and use the Anthropic API key, and to
//! list available models. HTTP handlers call these functions instead of
//! talking to `llm::chat` or making HTTP requests directly.
//!
//! Conventions: `docs/architecture/service-modules.md`

pub(super) mod io;

use crate::store::StoreOps;
use serde::{Deserialize, Serialize};

const ANTHROPIC_MODELS_URL: &str = "https://api.anthropic.com/v1/models";

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::anthropic` functions.
///
/// HTTP handlers map these to status codes:
/// - [`Error::Validation`] → 400 Bad Request
/// - [`Error::UpstreamApi`] → 502 Bad Gateway (or 400 for invalid keys)
/// - [`Error::Internal`] → 500 Internal Server Error
#[derive(Debug)]
pub enum Error {
    /// The request was invalid (e.g. missing, empty, or malformed API key).
    Validation(String),
    /// The upstream Anthropic API returned an error or was unreachable.
    UpstreamApi(String),
    /// An internal error occurred (JSON parse failure, store I/O error, etc.).
    Internal(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::Validation(msg) => write!(f, "Validation error: {msg}"),
            Self::UpstreamApi(msg) => write!(f, "Upstream API error: {msg}"),
            Self::Internal(msg) => write!(f, "Internal error: {msg}"),
        }
    }
}

// ── Types ─────────────────────────────────────────────────────────────────────

/// A summary of an Anthropic model as returned by the `/v1/models` endpoint.
#[derive(Serialize, Deserialize, Debug, PartialEq, poem_openapi::Object)]
pub struct ModelSummary {
    pub id: String,
    pub context_window: u64,
}

/// Raw response shape from the Anthropic `/v1/models` endpoint.
#[derive(Deserialize)]
pub(super) struct ModelsResponse {
    pub data: Vec<ModelInfo>,
}

/// A single model entry in the Anthropic API response.
#[derive(Deserialize)]
pub(super) struct ModelInfo {
    pub id: String,
    pub context_window: u64,
}

// ── Public API ────────────────────────────────────────────────────────────────

/// Return whether a non-empty Anthropic API key is currently stored.
pub fn get_api_key_exists(store: &dyn StoreOps) -> Result<bool, Error> {
    Ok(io::api_key_exists(store))
}

/// Read the stored Anthropic API key.
///
/// Returns [`Error::Validation`] when the key is absent, empty, or not a string.
pub fn get_api_key(store: &dyn StoreOps) -> Result<String, Error> {
    io::get_api_key(store)
}

/// Store or replace the Anthropic API key.
pub fn set_api_key(store: &dyn StoreOps, api_key: String) -> Result<(), Error> {
    io::save_api_key(store, &api_key).map_err(Error::Internal)
}

/// List available Anthropic models from the production endpoint.
pub async fn list_models(store: &dyn StoreOps) -> Result<Vec<ModelSummary>, Error> {
    list_models_from(store, ANTHROPIC_MODELS_URL).await
}

/// List available Anthropic models from `url` (injectable for tests).
pub async fn list_models_from(store: &dyn StoreOps, url: &str) -> Result<Vec<ModelSummary>, Error> {
    let api_key = get_api_key(store)?;
    io::fetch_models(&api_key, url).await
}

/// Parse a raw JSON string from the Anthropic `/v1/models` endpoint into model summaries.
///
/// Pure function for unit testing; production code uses [`list_models`].
#[cfg(test)]
pub fn parse_models_response(json: &str) -> Result<Vec<ModelSummary>, Error> {
    let response: ModelsResponse = serde_json::from_str(json)
        .map_err(|e| Error::Internal(format!("Failed to parse models response: {e}")))?;
    Ok(response
        .data
        .into_iter()
        .map(|m| ModelSummary {
            id: m.id,
            context_window: m.context_window,
        })
        .collect())
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    // Pure unit tests for response parsing — no tempdir, no network.

    #[test]
    fn parse_models_response_parses_single_model() {
        let json = r#"{"data":[{"id":"claude-opus-4-5","context_window":200000}]}"#;
        let models = parse_models_response(json).unwrap();
        assert_eq!(models.len(), 1);
        assert_eq!(models[0].id, "claude-opus-4-5");
        assert_eq!(models[0].context_window, 200000);
    }

    #[test]
    fn parse_models_response_parses_multiple_models() {
        let json = r#"{"data":[
            {"id":"claude-opus-4-5","context_window":200000},
            {"id":"claude-haiku-4-5-20251001","context_window":100000}
        ]}"#;
        let models = parse_models_response(json).unwrap();
        assert_eq!(models.len(), 2);
        assert_eq!(models[0].id, "claude-opus-4-5");
        assert_eq!(models[1].context_window, 100000);
    }

    #[test]
    fn parse_models_response_returns_empty_for_empty_data() {
        let json = r#"{"data":[]}"#;
        let models = parse_models_response(json).unwrap();
        assert!(models.is_empty());
    }

    #[test]
    fn parse_models_response_returns_internal_error_for_invalid_json() {
        let result = parse_models_response("not json at all");
        assert!(matches!(result, Err(Error::Internal(_))));
    }

    #[test]
    fn parse_models_response_returns_error_for_missing_data_field() {
        let result = parse_models_response(r#"{"wrong_field":[]}"#);
        assert!(matches!(result, Err(Error::Internal(_))));
    }

    #[test]
    fn error_display_validation() {
        let e = Error::Validation("no key".to_string());
        assert!(e.to_string().contains("no key"));
    }

    #[test]
    fn error_display_upstream_api() {
        let e = Error::UpstreamApi("500 Server Error".to_string());
        assert!(e.to_string().contains("500 Server Error"));
    }

    #[test]
    fn error_display_internal() {
        let e = Error::Internal("parse failed".to_string());
        assert!(e.to_string().contains("parse failed"));
    }
}
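A sketch of the status mapping the `Error` doc comment describes, with the handler's concrete response type reduced to a plain status/body pair (the real handlers use poem-openapi responses):

```rust
// Sketch only: Validation → 400, UpstreamApi → 502, Internal → 500.
fn to_status(err: &Error) -> (u16, String) {
    match err {
        Error::Validation(_) => (400, err.to_string()),
        Error::UpstreamApi(_) => (502, err.to_string()),
        Error::Internal(_) => (500, err.to_string()),
    }
}
```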
@@ -0,0 +1,158 @@
//! Bot command I/O — the ONLY place in `service/bot_command/` that may call
//! transport handlers, load stores, spawn tasks, or interact with the agent
//! pool.
//!
//! Every function here is a thin adapter over the underlying matrix/timer/htop
//! handlers. No argument parsing or business logic lives here — that belongs in
//! `parse.rs` or `mod.rs`.

use crate::agents::AgentPool;
use std::path::Path;
use std::sync::Arc;

use super::parse::{AssignArgs, StartArgs};

/// Call the Matrix `assign` handler with pre-validated arguments.
pub(super) async fn call_assign(
    args: &AssignArgs,
    project_root: &Path,
    agents: &Arc<AgentPool>,
) -> String {
    crate::chat::transport::matrix::assign::handle_assign(
        "web-ui",
        &args.number,
        &args.model,
        project_root,
        agents,
    )
    .await
}

/// Call the Matrix `start` handler with pre-validated arguments.
pub(super) async fn call_start(
    args: &StartArgs,
    project_root: &Path,
    agents: &Arc<AgentPool>,
) -> String {
    crate::chat::transport::matrix::start::handle_start(
        "web-ui",
        &args.number,
        args.hint.as_deref(),
        project_root,
        agents,
    )
    .await
}

/// Call the Matrix `delete` handler with a pre-validated story number.
pub(super) async fn call_delete(
    number: &str,
    project_root: &Path,
    agents: &Arc<AgentPool>,
) -> String {
    crate::chat::transport::matrix::delete::handle_delete("web-ui", number, project_root, agents)
        .await
}

/// Call the Matrix `rmtree` handler with a pre-validated story number.
pub(super) async fn call_rmtree(
    number: &str,
    project_root: &Path,
    agents: &Arc<AgentPool>,
) -> String {
    crate::chat::transport::matrix::rmtree::handle_rmtree("web-ui", number, project_root, agents)
        .await
}

/// Call the Matrix `rebuild` handler.
pub(super) async fn call_rebuild(project_root: &Path, agents: &Arc<AgentPool>) -> String {
    crate::chat::transport::matrix::rebuild::handle_rebuild("web-ui", project_root, agents).await
}

/// Parse and execute a `timer` command.
///
/// Returns `Err` with a usage string if the timer arguments cannot be parsed.
pub(super) async fn call_timer(args: &str, project_root: &Path) -> Result<String, String> {
    let synthetic = format!("__web_ui__ timer {args}");
    let timer_cmd = match crate::service::timer::extract_timer_command(
        &synthetic,
        "__web_ui__",
        "@__web_ui__:localhost",
    ) {
        Some(cmd) => cmd,
        None => {
            return Err(
                "Usage: `/timer list`, `/timer <number> <HH:MM>`, or `/timer cancel <number>`"
                    .to_string(),
            );
        }
    };
    let store =
        crate::service::timer::TimerStore::load(project_root.join(".huskies").join("timers.json"));
    Ok(crate::service::timer::handle_timer_command(timer_cmd, &store, project_root).await)
}

/// Build an `htop` snapshot for the web UI.
///
/// The web UI uses one-shot HTTP requests, so live-updating sessions are not
/// supported. `htop stop` returns a helpful explanation instead of an error.
pub(super) fn call_htop(args: &str, agents: &Arc<AgentPool>) -> String {
    use crate::chat::transport::matrix::htop::{HtopCommand, build_htop_message};

    let synthetic = if args.is_empty() {
        "__web_ui__ htop".to_string()
    } else {
        format!("__web_ui__ htop {args}")
    };

    match crate::chat::transport::matrix::htop::extract_htop_command(
        &synthetic,
        "__web_ui__",
        "@__web_ui__:localhost",
    ) {
        Some(HtopCommand::Stop) => "No active htop session in the web UI. \
            Live sessions are only supported in chat transports (Matrix, Slack, Discord)."
            .to_string(),
        Some(HtopCommand::Start { duration_secs }) => build_htop_message(agents, 0, duration_secs),
        None => build_htop_message(agents, 0, 300),
    }
}

/// Dispatch through the synchronous command registry.
///
/// Returns `Some(response)` if the command keyword is registered, or `None`
/// if the keyword is unknown.
pub(super) fn call_sync(
    cmd: &str,
    args: &str,
    project_root: &Path,
    agents: &Arc<AgentPool>,
) -> Option<String> {
    use crate::chat::commands::CommandDispatch;
    use std::collections::HashSet;
    use std::sync::Mutex;

    let ambient_rooms: Arc<Mutex<HashSet<String>>> = Arc::new(Mutex::new(HashSet::new()));
    let bot_name = "__web_ui__";
    let bot_user_id = "@__web_ui__:localhost";
    let room_id = "__web_ui__";

    let dispatch = CommandDispatch {
        bot_name,
        bot_user_id,
        project_root,
        agents,
        ambient_rooms: &ambient_rooms,
        room_id,
    };

    // Build a synthetic bot-addressed message so the registry parses it
    // identically to messages from chat transports.
    let synthetic = if args.is_empty() {
        format!("{bot_name} {cmd}")
    } else {
        format!("{bot_name} {cmd} {args}")
    };

    crate::chat::commands::try_handle_command(&dispatch, &synthetic)
}
@@ -0,0 +1,97 @@
//! Bot command service — domain logic for dispatching slash commands.
//!
//! Extracted from `http/bot_command.rs` so that argument parsing and dispatch
//! are independently testable without an HTTP layer.
//!
//! Conventions: `docs/architecture/service-modules.md`
//!
//! # Structure
//! - `mod.rs` (this file) — public API and typed `Error` type
//! - `parse.rs` — pure argument parsing, no I/O
//! - `io.rs` — all side-effectful calls (transport handlers, stores, agent pool)

pub(super) mod io;
pub mod parse;

use crate::agents::AgentPool;
use std::path::Path;
use std::sync::Arc;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::bot_command::execute`.
///
/// HTTP handlers map these to specific status codes:
/// - [`Error::UnknownCommand`] → 404 Not Found
/// - [`Error::BadArgs`] → 400 Bad Request
/// - [`Error::CommandFailed`] → 500 Internal Server Error
#[derive(Debug)]
#[allow(dead_code)] // CommandFailed is part of the public API contract; not yet reachable
pub enum Error {
    /// The command keyword does not match any registered command.
    UnknownCommand(String),
    /// The command exists but the provided arguments are invalid.
    BadArgs(String),
    /// The command ran but failed with an internal error.
    CommandFailed(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::UnknownCommand(msg) | Self::BadArgs(msg) | Self::CommandFailed(msg) => {
                write!(f, "{msg}")
            }
        }
    }
}

// ── Public API ────────────────────────────────────────────────────────────────

/// Execute a bot command and return the markdown response.
///
/// Dispatches to the same handlers used by the Matrix and Slack bots. The
/// `cmd` argument is the lower-cased command keyword (e.g. `"status"`,
/// `"start"`). The `args` argument is any text after the keyword, already
/// trimmed.
///
/// # Errors
/// - [`Error::UnknownCommand`] if the command keyword is not registered.
/// - [`Error::BadArgs`] if the arguments fail validation.
/// - [`Error::CommandFailed`] if command execution raises an internal error.
pub async fn execute(
    cmd: &str,
    args: &str,
    project_root: &Path,
    agents: &Arc<AgentPool>,
) -> Result<String, Error> {
    match cmd {
        "assign" => {
            let parsed = parse::parse_assign(args).map_err(Error::BadArgs)?;
            Ok(io::call_assign(&parsed, project_root, agents).await)
        }
        "start" => {
            let parsed = parse::parse_start(args).map_err(Error::BadArgs)?;
            Ok(io::call_start(&parsed, project_root, agents).await)
        }
        "delete" => {
            let number = parse::parse_number("delete", args).map_err(Error::BadArgs)?;
            Ok(io::call_delete(&number, project_root, agents).await)
        }
        "rmtree" => {
            let number = parse::parse_number("rmtree", args).map_err(Error::BadArgs)?;
            Ok(io::call_rmtree(&number, project_root, agents).await)
        }
        "rebuild" => Ok(io::call_rebuild(project_root, agents).await),
        "timer" => io::call_timer(args, project_root)
            .await
            .map_err(Error::BadArgs),
        "htop" => Ok(io::call_htop(args, agents)),
        _ => match io::call_sync(cmd, args, project_root, agents) {
            Some(response) => Ok(response),
            None => Err(Error::UnknownCommand(format!(
                "Unknown command: `/{cmd}`. Type `/help` to see available commands."
            ))),
        },
    }
}
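A sketch of how an HTTP handler might drive `execute` and apply the status mapping from the doc comment above; the response type is simplified to a status/body pair rather than the real handler types:

```rust
// Sketch only: dispatch a slash command and map the typed errors to HTTP statuses.
async fn run_command(
    cmd: &str,
    args: &str,
    project_root: &std::path::Path,
    agents: &std::sync::Arc<crate::agents::AgentPool>,
) -> (u16, String) {
    match execute(cmd, args, project_root, agents).await {
        Ok(markdown) => (200, markdown),
        Err(Error::UnknownCommand(msg)) => (404, msg),
        Err(Error::BadArgs(msg)) => (400, msg),
        Err(Error::CommandFailed(msg)) => (500, msg),
    }
}
```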
@@ -0,0 +1,216 @@
//! Pure argument parsing for bot commands.
//!
//! Every function in this module is synchronous and free of I/O. All
//! filesystem, network, and agent-pool access belongs in `io.rs`.

// ── Parsed argument types ─────────────────────────────────────────────────────

/// Parsed arguments for the `assign` command.
#[derive(Debug)]
pub struct AssignArgs {
    /// The numeric story identifier (as a string, e.g. `"42"`).
    pub number: String,
    /// The model / agent name (e.g. `"opus"`, `"coder-sonnet"`).
    pub model: String,
}

/// Parsed arguments for the `start` command.
#[derive(Debug)]
pub struct StartArgs {
    /// The numeric story identifier.
    pub number: String,
    /// Optional model hint (e.g. `"opus"` → resolved to `"coder-opus"`).
    pub hint: Option<String>,
}

// ── Parsing functions ─────────────────────────────────────────────────────────

/// Parse `assign` arguments: `<number> <model>`.
///
/// Returns `Err` with a user-visible usage string if the arguments are missing
/// or invalid (non-numeric number, empty model).
pub fn parse_assign(args: &str) -> Result<AssignArgs, String> {
    let mut parts = args.splitn(2, char::is_whitespace);
    let number = parts.next().unwrap_or("").trim().to_string();
    let model = parts.next().unwrap_or("").trim().to_string();

    if number.is_empty() || !number.chars().all(|c| c.is_ascii_digit()) || model.is_empty() {
        return Err("Usage: `/assign <number> <model>` (e.g. `/assign 42 opus`)".to_string());
    }

    Ok(AssignArgs { number, model })
}

/// Parse `start` arguments: `<number>` or `<number> <model_hint>`.
///
/// Returns `Err` with a user-visible usage string if the number is missing
/// or non-numeric.
pub fn parse_start(args: &str) -> Result<StartArgs, String> {
    let mut parts = args.splitn(2, char::is_whitespace);
    let number = parts.next().unwrap_or("").trim().to_string();
    let hint_str = parts.next().unwrap_or("").trim();

    if number.is_empty() || !number.chars().all(|c| c.is_ascii_digit()) {
        return Err(
            "Usage: `/start <number>` or `/start <number> <model>` (e.g. `/start 42 opus`)"
                .to_string(),
        );
    }

    let hint = if hint_str.is_empty() {
        None
    } else {
        Some(hint_str.to_string())
    };

    Ok(StartArgs { number, hint })
}

/// Parse a single numeric argument for commands like `delete` and `rmtree`.
///
/// `cmd_name` is used only in the error message (e.g. `"delete"` or `"rmtree"`).
/// Returns `Err` with a user-visible usage string if the argument is missing
/// or non-numeric.
pub fn parse_number(cmd_name: &str, args: &str) -> Result<String, String> {
    let number = args.trim().to_string();
    if number.is_empty() || !number.chars().all(|c| c.is_ascii_digit()) {
        return Err(format!(
            "Usage: `/{cmd_name} <number>` (e.g. `/{cmd_name} 42`)"
        ));
    }
    Ok(number)
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    // -- parse_assign ----------------------------------------------------------

    #[test]
    fn assign_valid() {
        let r = parse_assign("42 opus").unwrap();
        assert_eq!(r.number, "42");
        assert_eq!(r.model, "opus");
    }

    #[test]
    fn assign_valid_model_with_spaces() {
        // splitn(2): everything after the first whitespace goes into `model`,
        // so a model containing spaces survives intact.
        let r = parse_assign("42 claude opus 4").unwrap();
        assert_eq!(r.number, "42");
        assert_eq!(r.model, "claude opus 4");
    }

    #[test]
    fn assign_missing_all_args() {
        assert!(parse_assign("").is_err());
    }

    #[test]
    fn assign_missing_model() {
        let err = parse_assign("42").unwrap_err();
        assert!(
            err.contains("Usage"),
            "error should contain usage hint: {err}"
        );
    }

    #[test]
    fn assign_non_numeric_number() {
        let err = parse_assign("foo opus").unwrap_err();
        assert!(
            err.contains("Usage"),
            "error should contain usage hint: {err}"
        );
    }

    #[test]
    fn assign_number_with_letters_is_invalid() {
        assert!(parse_assign("42x opus").is_err());
    }

    // -- parse_start -----------------------------------------------------------

    #[test]
    fn start_valid_number_only() {
        let r = parse_start("42").unwrap();
        assert_eq!(r.number, "42");
        assert!(r.hint.is_none());
    }

    #[test]
    fn start_valid_with_hint() {
        let r = parse_start("42 opus").unwrap();
        assert_eq!(r.number, "42");
        assert_eq!(r.hint.as_deref(), Some("opus"));
    }

    #[test]
    fn start_missing_number() {
        let err = parse_start("").unwrap_err();
        assert!(
            err.contains("Usage"),
            "error should contain usage hint: {err}"
        );
    }

    #[test]
    fn start_non_numeric_number() {
        let err = parse_start("foo").unwrap_err();
        assert!(
            err.contains("Usage"),
            "error should contain usage hint: {err}"
        );
    }

    #[test]
    fn start_non_numeric_with_hint() {
        assert!(parse_start("foo opus").is_err());
    }

    // -- parse_number ----------------------------------------------------------

    #[test]
    fn number_valid() {
        assert_eq!(parse_number("delete", "99").unwrap(), "99");
    }

    #[test]
    fn number_missing() {
        let err = parse_number("delete", "").unwrap_err();
        assert!(
            err.contains("Usage"),
            "error should contain usage hint: {err}"
        );
        assert!(
            err.contains("delete"),
            "error should mention the command: {err}"
        );
    }

    #[test]
    fn number_non_numeric() {
        let err = parse_number("delete", "abc").unwrap_err();
        assert!(
            err.contains("Usage"),
            "error should contain usage hint: {err}"
        );
    }

    #[test]
    fn number_usage_contains_cmd_name() {
        let err = parse_number("rmtree", "").unwrap_err();
        assert!(
            err.contains("rmtree"),
            "usage should mention the command: {err}"
        );
    }

    #[test]
    fn number_whitespace_only_is_invalid() {
        assert!(parse_number("delete", " ").is_err());
    }
}
@@ -0,0 +1,70 @@
//! Pure helpers for pipeline item ID parsing.
//!
//! Pipeline item IDs share the format `{number}_{type}_{slug}`, e.g.
//! `"42_story_foo"`, `"7_bug_bar"`, `"100_refactor_baz"`. The functions here
//! extract or validate the leading numeric segment without performing any I/O.

/// Extract the numeric prefix from a pipeline item ID.
///
/// Returns the leading digit sequence from IDs like `"42_story_foo"` → `"42"`.
/// Returns `None` if the ID has no leading digit sequence.
pub fn extract_item_number(item_id: &str) -> Option<&str> {
    item_id
        .split('_')
        .next()
        .filter(|s| !s.is_empty() && s.chars().all(|c| c.is_ascii_digit()))
}

/// Return `true` if `item_id` has a valid `{digits}_` prefix format.
///
/// Valid: `"42_story_foo"`, `"1_bug_bar"`.
/// Invalid: `"story_without_number"`, `""`, `"abc_story"`.
#[allow(dead_code)]
pub fn has_valid_id_prefix(item_id: &str) -> bool {
    extract_item_number(item_id).is_some()
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn extract_item_number_extracts_prefix() {
        assert_eq!(extract_item_number("42_story_foo"), Some("42"));
        assert_eq!(extract_item_number("1_bug_bar"), Some("1"));
        assert_eq!(extract_item_number("100_refactor_baz"), Some("100"));
        assert_eq!(
            extract_item_number("261_story_bot_notifications"),
            Some("261")
        );
        assert_eq!(extract_item_number("1_spike_research"), Some("1"));
    }

    #[test]
    fn extract_item_number_returns_none_for_no_numeric_prefix() {
        assert_eq!(extract_item_number("story_without_number"), None);
        assert_eq!(extract_item_number("abc_story"), None);
        assert_eq!(extract_item_number("abc_story_thing"), None);
        assert_eq!(extract_item_number(""), None);
    }

    #[test]
    fn extract_item_number_returns_none_for_empty_first_segment() {
        // Leading underscore: first segment is "".
        assert_eq!(extract_item_number("_story_thing"), None);
    }

    #[test]
    fn has_valid_id_prefix_returns_true_for_valid_ids() {
        assert!(has_valid_id_prefix("42_story_foo"));
        assert!(has_valid_id_prefix("1_bug_bar"));
    }

    #[test]
    fn has_valid_id_prefix_returns_false_for_invalid_ids() {
        assert!(!has_valid_id_prefix("story_no_number"));
        assert!(!has_valid_id_prefix(""));
    }
}
@@ -0,0 +1,6 @@
//! Shared pure helpers used by multiple service modules.
//!
//! All sub-modules here are pure (no I/O, no side effects). Any helper that
//! duplicates logic across two or more service modules belongs here; anything
//! used by only one service stays in that service.
pub mod item_id;
@@ -0,0 +1,72 @@
//! Diagnostics I/O — the ONLY place in `service::diagnostics/` that may perform side effects.
//!
//! Side effects here include: reading and writing `.claude/settings.json` via `std::fs`.
//! Pure permission-rule logic (pattern derivation, wildcard domination checks) lives in
//! `permission.rs`.

use serde_json::{Value, json};
use std::fs;
use std::path::Path;

/// Add a permission rule to `.claude/settings.json` in the project root.
///
/// Does nothing if the rule already exists (exact match) or is already covered
/// by a wildcard pattern in the allow list. Creates the file and any missing
/// parent directories if they do not yet exist.
///
/// # Errors
/// Returns `Err(String)` if the directory cannot be created, the file cannot be
/// read or written, or the JSON cannot be parsed or serialised.
pub fn add_permission_rule(project_root: &Path, rule: &str) -> Result<(), String> {
    let claude_dir = project_root.join(".claude");
    fs::create_dir_all(&claude_dir)
        .map_err(|e| format!("Failed to create .claude/ directory: {e}"))?;

    let settings_path = claude_dir.join("settings.json");
    let mut settings: Value = if settings_path.exists() {
        let content = fs::read_to_string(&settings_path)
            .map_err(|e| format!("Failed to read settings.json: {e}"))?;
        serde_json::from_str(&content).map_err(|e| format!("Failed to parse settings.json: {e}"))?
    } else {
        json!({ "permissions": { "allow": [] } })
    };

    let allow_arr = settings
        .pointer_mut("/permissions/allow")
        .and_then(|v| v.as_array_mut());

    let allow = match allow_arr {
        Some(arr) => arr,
        None => {
            settings
                .as_object_mut()
                .unwrap()
                .entry("permissions")
                .or_insert(json!({ "allow": [] }));
            settings
                .pointer_mut("/permissions/allow")
                .unwrap()
                .as_array_mut()
                .unwrap()
        }
    };

    let rule_value = Value::String(rule.to_string());

    // Exact duplicate check.
    if allow.contains(&rule_value) {
        return Ok(());
    }

    // Wildcard-coverage check: if "mcp__huskies__*" exists, skip more-specific rules.
    if super::permission::is_dominated_by_wildcard(rule, allow) {
        return Ok(());
    }

    allow.push(rule_value);

    let pretty =
        serde_json::to_string_pretty(&settings).map_err(|e| format!("Failed to serialize: {e}"))?;
    fs::write(&settings_path, pretty).map_err(|e| format!("Failed to write settings.json: {e}"))?;
    Ok(())
}
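A short usage sketch tying the pure rule generation to the write above; the tool call shown is illustrative, and `project_root` is whatever the calling handler already holds.

```rust
// Sketch only: derive a rule from a denied tool call and persist it.
fn allow_denied_tool(project_root: &std::path::Path) -> Result<(), String> {
    let rule = super::permission::generate_permission_rule(
        "Bash",
        &serde_json::json!({ "command": "git status" }),
    );
    // No-op if the rule already exists or is covered by a wildcard.
    add_permission_rule(project_root, &rule)
}
```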
@@ -0,0 +1,89 @@
//! Diagnostics service — server logs, CRDT dump, permission management, and story movement.
//!
//! Extracted from `http/mcp/diagnostics.rs` following the conventions in
//! `docs/architecture/service-modules.md`:
//! - `mod.rs` (this file) — public API, typed [`Error`], orchestration
//! - `io.rs` — the ONLY place that performs side effects (filesystem reads/writes)
//! - `permission.rs` — pure permission-rule generation and wildcard checks

pub mod io;
pub mod permission;

pub use io::add_permission_rule;
pub use permission::generate_permission_rule;
#[allow(unused_imports)]
pub use permission::is_dominated_by_wildcard;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::diagnostics` functions.
///
/// HTTP handlers map these to status codes:
/// - [`Error::NotFound`] → 404 Not Found
/// - [`Error::Validation`] → 400 Bad Request
/// - [`Error::Conflict`] → 409 Conflict
/// - [`Error::Io`] → 500 Internal Server Error
/// - [`Error::UpstreamFailure`] → 500 Internal Server Error
#[allow(dead_code)]
#[derive(Debug)]
pub enum Error {
    /// The requested resource was not found.
    NotFound(String),
    /// A required argument is missing or has an invalid value.
    Validation(String),
    /// The operation cannot proceed due to a conflicting state.
    Conflict(String),
    /// A filesystem read or write operation failed.
    Io(String),
    /// An upstream dependency returned an unexpected error.
    UpstreamFailure(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::NotFound(msg) => write!(f, "Not found: {msg}"),
            Self::Validation(msg) => write!(f, "Validation error: {msg}"),
            Self::Conflict(msg) => write!(f, "Conflict: {msg}"),
            Self::Io(msg) => write!(f, "I/O error: {msg}"),
            Self::UpstreamFailure(msg) => write!(f, "Upstream failure: {msg}"),
        }
    }
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn error_display_not_found() {
        let e = Error::NotFound("log file missing".to_string());
        assert!(e.to_string().contains("Not found"));
    }

    #[test]
    fn error_display_validation() {
        let e = Error::Validation("invalid filter".to_string());
        assert!(e.to_string().contains("Validation error"));
    }

    #[test]
    fn error_display_conflict() {
        let e = Error::Conflict("story in wrong stage".to_string());
        assert!(e.to_string().contains("Conflict"));
    }

    #[test]
    fn error_display_io() {
        let e = Error::Io("settings.json write failed".to_string());
        assert!(e.to_string().contains("I/O error"));
    }

    #[test]
    fn error_display_upstream_failure() {
        let e = Error::UpstreamFailure("rebuild failed".to_string());
        assert!(e.to_string().contains("Upstream failure"));
    }
}
@@ -0,0 +1,105 @@
//! Pure permission-rule generation for `service::diagnostics`.
//!
//! These functions produce Claude Code permission-rule strings from tool call
//! metadata. No I/O: they take `&str` / `&Value` and return `String`.

use serde_json::Value;

/// Generate a Claude Code permission rule string for the given tool name and input.
///
/// - `Bash` tools → `Bash(first_word *)` derived from the `command` field.
/// - All other tools → the tool name verbatim (e.g. `Edit`, `mcp__huskies__create_story`).
pub fn generate_permission_rule(tool_name: &str, tool_input: &Value) -> String {
    if tool_name == "Bash" {
        let command_str = tool_input
            .get("command")
            .and_then(|v| v.as_str())
            .unwrap_or("");
        let first_word = command_str.split_whitespace().next().unwrap_or("unknown");
        format!("Bash({first_word} *)")
    } else {
        tool_name.to_string()
    }
}

/// Return `true` if `rule` is already covered by an existing wildcard in `allow_list`.
///
/// For example, if `allow_list` contains `"mcp__huskies__*"`, then the more
/// specific rule `"mcp__huskies__create_story"` is already covered.
pub fn is_dominated_by_wildcard(rule: &str, allow_list: &[Value]) -> bool {
    allow_list.iter().any(|existing| {
        if let Some(pat) = existing.as_str()
            && let Some(prefix) = pat.strip_suffix('*')
        {
            return rule.starts_with(prefix);
        }
        false
    })
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;
    use serde_json::json;

    #[test]
    fn generate_rule_for_edit_tool() {
        let rule = generate_permission_rule("Edit", &json!({}));
        assert_eq!(rule, "Edit");
    }

    #[test]
    fn generate_rule_for_write_tool() {
        let rule = generate_permission_rule("Write", &json!({}));
        assert_eq!(rule, "Write");
    }

    #[test]
    fn generate_rule_for_bash_git() {
        let rule = generate_permission_rule("Bash", &json!({"command": "git status"}));
        assert_eq!(rule, "Bash(git *)");
    }

    #[test]
    fn generate_rule_for_bash_cargo() {
        let rule = generate_permission_rule("Bash", &json!({"command": "cargo test --all"}));
        assert_eq!(rule, "Bash(cargo *)");
    }

    #[test]
    fn generate_rule_for_bash_empty_command() {
        let rule = generate_permission_rule("Bash", &json!({}));
        assert_eq!(rule, "Bash(unknown *)");
    }

    #[test]
    fn generate_rule_for_mcp_tool() {
        let rule = generate_permission_rule("mcp__huskies__create_story", &json!({"name": "foo"}));
        assert_eq!(rule, "mcp__huskies__create_story");
    }

    #[test]
    fn is_dominated_by_exact_wildcard() {
        let allow = vec![json!("mcp__huskies__*")];
        assert!(is_dominated_by_wildcard(
            "mcp__huskies__create_story",
            &allow
        ));
    }

    #[test]
    fn is_not_dominated_by_different_prefix() {
        let allow = vec![json!("mcp__other__*")];
        assert!(!is_dominated_by_wildcard(
            "mcp__huskies__create_story",
            &allow
        ));
    }

    #[test]
    fn is_not_dominated_when_list_is_empty() {
        assert!(!is_dominated_by_wildcard("Edit", &[]));
    }
}
@@ -0,0 +1,184 @@
//! Pure event-buffer types — no side effects.
//!
//! `StoredEvent` and `EventBuffer` contain only data-transformation and
//! structural logic; all I/O (clocks, spawned tasks) lives in `io.rs`.

use serde::{Deserialize, Serialize};
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

/// Maximum number of events retained in the in-memory buffer.
pub const MAX_BUFFER_SIZE: usize = 500;

/// A pipeline event stored in the event buffer with a timestamp.
#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum StoredEvent {
    /// A work item transitioned between pipeline stages.
    StageTransition {
        /// Work item ID (e.g. `"42_story_my_feature"`).
        story_id: String,
        /// The stage the item moved FROM (display name, e.g. `"Current"`).
        from_stage: String,
        /// The stage the item moved TO (directory key, e.g. `"3_qa"`).
        to_stage: String,
        /// Unix timestamp in milliseconds when this event was recorded.
        timestamp_ms: u64,
    },
    /// A merge operation failed for a story.
    MergeFailure {
        /// Work item ID (e.g. `"42_story_my_feature"`).
        story_id: String,
        /// Human-readable description of the failure.
        reason: String,
        /// Unix timestamp in milliseconds when this event was recorded.
        timestamp_ms: u64,
    },
    /// A story was blocked (e.g. retry limit exceeded).
    StoryBlocked {
        /// Work item ID (e.g. `"42_story_my_feature"`).
        story_id: String,
        /// Human-readable reason the story was blocked.
        reason: String,
        /// Unix timestamp in milliseconds when this event was recorded.
        timestamp_ms: u64,
    },
}

impl StoredEvent {
    /// Returns the `timestamp_ms` field common to all event variants.
    pub fn timestamp_ms(&self) -> u64 {
        match self {
            StoredEvent::StageTransition { timestamp_ms, .. } => *timestamp_ms,
            StoredEvent::MergeFailure { timestamp_ms, .. } => *timestamp_ms,
            StoredEvent::StoryBlocked { timestamp_ms, .. } => *timestamp_ms,
        }
    }
}

/// Shared, thread-safe ring buffer of recent pipeline events.
///
/// Wrapped in `Arc` so it can be shared between the background subscriber
/// task and the HTTP handler. The inner `Mutex` guards the `VecDeque`.
#[derive(Clone, Debug)]
pub struct EventBuffer(Arc<Mutex<VecDeque<StoredEvent>>>);

impl EventBuffer {
    /// Create a new, empty event buffer.
    pub fn new() -> Self {
        EventBuffer(Arc::new(Mutex::new(VecDeque::new())))
    }

    /// Append an event to the buffer, evicting the oldest entry if the buffer
    /// exceeds [`MAX_BUFFER_SIZE`].
    pub fn push(&self, event: StoredEvent) {
        let mut buf = self.0.lock().unwrap();
        if buf.len() >= MAX_BUFFER_SIZE {
            buf.pop_front();
        }
        buf.push_back(event);
    }

    /// Return all events whose `timestamp_ms` is strictly greater than `since_ms`.
    pub fn events_since(&self, since_ms: u64) -> Vec<StoredEvent> {
        let buf = self.0.lock().unwrap();
        buf.iter()
            .filter(|e| e.timestamp_ms() > since_ms)
            .cloned()
            .collect()
    }
}

impl Default for EventBuffer {
    fn default() -> Self {
        Self::new()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn push_and_retrieve_events() {
        let buf = EventBuffer::new();
        buf.push(StoredEvent::MergeFailure {
            story_id: "42_story_x".to_string(),
            reason: "conflict".to_string(),
            timestamp_ms: 1000,
        });
        buf.push(StoredEvent::StoryBlocked {
            story_id: "43_story_y".to_string(),
            reason: "retry limit".to_string(),
            timestamp_ms: 2000,
        });

        let all = buf.events_since(0);
        assert_eq!(all.len(), 2);

        let after_1000 = buf.events_since(1000);
        assert_eq!(after_1000.len(), 1);
        assert!(matches!(after_1000[0], StoredEvent::StoryBlocked { .. }));
    }

    #[test]
    fn evicts_oldest_when_full() {
        let buf = EventBuffer::new();
        for i in 0..MAX_BUFFER_SIZE + 1 {
            buf.push(StoredEvent::MergeFailure {
                story_id: format!("{i}_story_x"),
                reason: "x".to_string(),
                timestamp_ms: i as u64,
            });
        }
        assert_eq!(buf.events_since(0).len(), MAX_BUFFER_SIZE);
        assert!(buf.events_since(0).iter().all(|e| e.timestamp_ms() > 0));
    }

    #[test]
    fn timestamp_ms_accessor_for_all_variants() {
        let variants = [
            StoredEvent::StageTransition {
                story_id: "1".to_string(),
                from_stage: "2_current".to_string(),
                to_stage: "3_qa".to_string(),
                timestamp_ms: 100,
            },
            StoredEvent::MergeFailure {
                story_id: "2".to_string(),
                reason: "x".to_string(),
                timestamp_ms: 200,
            },
            StoredEvent::StoryBlocked {
                story_id: "3".to_string(),
                reason: "y".to_string(),
                timestamp_ms: 300,
            },
        ];
        assert_eq!(variants[0].timestamp_ms(), 100);
        assert_eq!(variants[1].timestamp_ms(), 200);
        assert_eq!(variants[2].timestamp_ms(), 300);
    }

    #[test]
    fn events_since_filters_by_timestamp() {
        let buf = EventBuffer::new();
        for ts in [100u64, 200, 300] {
            buf.push(StoredEvent::MergeFailure {
                story_id: "x".to_string(),
                reason: "r".to_string(),
                timestamp_ms: ts,
            });
        }
        // strictly greater than 100
        let result = buf.events_since(100);
        assert_eq!(result.len(), 2);
        assert!(result.iter().all(|e| e.timestamp_ms() > 100));
    }

    #[test]
    fn default_creates_empty_buffer() {
        let buf = EventBuffer::default();
        assert_eq!(buf.events_since(0).len(), 0);
    }
}
@@ -0,0 +1,67 @@
//! Events I/O wrappers — the ONLY place in `service/events/` that may perform
//! side effects such as reading the system clock or spawning async tasks.

use crate::io::watcher::WatcherEvent;
use tokio::sync::broadcast;

use super::buffer::{EventBuffer, StoredEvent};

/// Returns the current Unix timestamp in milliseconds.
pub(super) fn now_ms() -> u64 {
    std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .map(|d| d.as_millis() as u64)
        .unwrap_or(0)
}

/// Spawn a background task that consumes [`WatcherEvent`] broadcasts and
/// stores relevant events in `buffer`.
///
/// Only [`WatcherEvent::WorkItem`] (with a known `from_stage`),
/// [`WatcherEvent::MergeFailure`], and [`WatcherEvent::StoryBlocked`]
/// variants are stored. All other variants are silently ignored.
pub fn subscribe_to_watcher(buffer: EventBuffer, mut rx: broadcast::Receiver<WatcherEvent>) {
    tokio::spawn(async move {
        loop {
            match rx.recv().await {
                Ok(WatcherEvent::WorkItem {
                    stage,
                    item_id,
                    from_stage,
                    ..
                }) => {
                    if let Some(from) = from_stage {
                        buffer.push(StoredEvent::StageTransition {
                            story_id: item_id,
                            from_stage: from,
                            to_stage: stage,
                            timestamp_ms: now_ms(),
                        });
                    }
                }
                Ok(WatcherEvent::MergeFailure { story_id, reason }) => {
                    buffer.push(StoredEvent::MergeFailure {
                        story_id,
                        reason,
                        timestamp_ms: now_ms(),
                    });
                }
                Ok(WatcherEvent::StoryBlocked { story_id, reason }) => {
                    buffer.push(StoredEvent::StoryBlocked {
                        story_id,
                        reason,
                        timestamp_ms: now_ms(),
                    });
                }
                Ok(_) => {}
                Err(broadcast::error::RecvError::Lagged(n)) => {
                    crate::slog!("[events] Subscriber lagged, skipped {n} events");
                }
                Err(broadcast::error::RecvError::Closed) => {
                    crate::slog!("[events] Watcher channel closed; stopping event subscriber");
                    break;
                }
            }
        }
    });
}
@@ -0,0 +1,45 @@
//! Events service — public API for the events domain.
//!
//! This module re-exports the pure buffer types from `buffer.rs` and the
//! side-effectful watcher subscription from `io.rs`. HTTP handlers call
//! these exports instead of containing the logic inline.
//!
//! Conventions: `docs/architecture/service-modules.md`

pub mod buffer;
pub(super) mod io;

pub use buffer::{EventBuffer, StoredEvent};
// Re-exported for tests (http::events uses it via `use super::*`).
#[allow(unused_imports)]
pub use buffer::MAX_BUFFER_SIZE;
pub use io::subscribe_to_watcher;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::events` functions.
///
/// Events operations on the in-memory buffer are infallible; this enum
/// exists to satisfy the module convention and to accommodate future
/// error cases (e.g. persistence).
#[allow(dead_code)]
#[derive(Debug)]
pub enum Error {
    /// A serialisation or internal error occurred.
    Internal(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::Internal(msg) => write!(f, "Events error: {msg}"),
        }
    }
}

// ── Public API ────────────────────────────────────────────────────────────────

/// Return all events in `buffer` recorded after `since_ms` milliseconds.
pub fn events_since(buffer: &EventBuffer, since_ms: u64) -> Vec<StoredEvent> {
    buffer.events_since(since_ms)
}
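A minimal wiring sketch for this module, not part of the changeset itself: the channel capacity, the `clone()` of the buffer handle, and the cursor value are illustrative assumptions (the sketch assumes `EventBuffer` is a cheaply cloneable handle, as its `&self` push/query API suggests); `EventBuffer::new`, `subscribe_to_watcher`, and `events_since` are the exports above, and `/api/events?since=` is the endpoint the gateway poller later in this changeset reads from.

use tokio::sync::broadcast;

let buffer = EventBuffer::new();
let (watcher_tx, _keep_alive) = broadcast::channel(16);

// Background task stores StageTransition / MergeFailure / StoryBlocked events.
subscribe_to_watcher(buffer.clone(), watcher_tx.subscribe());

// Later, e.g. when serving GET /api/events?since=<ms>: return everything newer
// than the client's cursor.
let since_ms: u64 = 0; // illustrative cursor
let recent = events_since(&buffer, since_ms);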
@@ -0,0 +1,84 @@
//! File I/O — the ONLY place in `service/file_io/` that may perform
//! filesystem reads, writes, shell execution, or other side effects.
//!
//! Every function here is a thin adapter that converts lower-level
//! `String` errors into the typed [`super::Error`] variants.

use super::Error;
use crate::io::fs::FileEntry;
use crate::io::search::SearchResult;
use crate::io::shell::CommandOutput;
use crate::state::SessionState;

pub(super) async fn read_file(path: String, state: &SessionState) -> Result<String, Error> {
    crate::io::fs::read_file(path, state)
        .await
        .map_err(Error::Filesystem)
}

pub(super) async fn write_file(
    path: String,
    content: String,
    state: &SessionState,
) -> Result<(), Error> {
    crate::io::fs::write_file(path, content, state)
        .await
        .map_err(Error::Filesystem)
}

pub(super) async fn list_directory(
    path: String,
    state: &SessionState,
) -> Result<Vec<FileEntry>, Error> {
    crate::io::fs::list_directory(path, state)
        .await
        .map_err(Error::Filesystem)
}

pub(super) async fn list_directory_absolute(path: String) -> Result<Vec<FileEntry>, Error> {
    crate::io::fs::list_directory_absolute(path)
        .await
        .map_err(Error::Filesystem)
}

pub(super) async fn create_directory_absolute(path: String) -> Result<(), Error> {
    crate::io::fs::create_directory_absolute(path)
        .await
        .map_err(Error::Filesystem)
        .map(|_| ())
}

pub(super) fn get_home_directory() -> Result<String, Error> {
    crate::io::fs::get_home_directory().map_err(Error::Filesystem)
}

pub(super) async fn list_project_files(state: &SessionState) -> Result<Vec<String>, Error> {
    crate::io::fs::list_project_files(state)
        .await
        .map_err(Error::Filesystem)
}

pub(super) async fn search_files(
    query: String,
    state: &SessionState,
) -> Result<Vec<SearchResult>, Error> {
    crate::io::search::search_files(query, state)
        .await
        .map_err(Error::Filesystem)
}

pub(super) async fn exec_shell(
    command: String,
    args: Vec<String>,
    state: &SessionState,
) -> Result<CommandOutput, Error> {
    crate::io::shell::exec_shell(command, args, state)
        .await
        .map_err(|e| {
            if e.contains("not in the allowlist") {
                Error::Validation(e)
            } else {
                Error::Filesystem(e)
            }
        })
}
@@ -0,0 +1,183 @@
//! File I/O service — public API for filesystem and shell operations.
//!
//! Exposes functions for reading, writing, and listing files scoped to the
//! active project root, plus utilities for absolute-path and shell operations.
//! HTTP handlers call these functions instead of touching `io::fs` directly.
//!
//! Conventions: `docs/architecture/service-modules.md`

pub(super) mod io;

use crate::state::SessionState;

/// Re-export the canonical filesystem entry type so HTTP handlers don't need
/// to import from `io::fs` directly.
pub use crate::io::fs::FileEntry;
/// Re-export the search result type.
pub use crate::io::search::SearchResult;
/// Re-export the shell output type.
pub use crate::io::shell::CommandOutput;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::file_io` functions.
///
/// HTTP handlers map these to status codes:
/// - [`Error::Validation`] → 400 Bad Request
/// - [`Error::Filesystem`] → 400 Bad Request (or 404 when appropriate)
#[derive(Debug)]
pub enum Error {
    /// The request was invalid (e.g. path traversal attempt, command not allowlisted).
    Validation(String),
    /// A filesystem or shell operation failed (file not found, permission denied, etc.).
    Filesystem(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::Validation(msg) => write!(f, "Validation error: {msg}"),
            Self::Filesystem(msg) => write!(f, "Filesystem error: {msg}"),
        }
    }
}

// ── Path validation ───────────────────────────────────────────────────────────

/// Validate a relative path, rejecting directory traversal attempts.
///
/// Returns [`Error::Validation`] when the path contains `..`.
pub fn validate_path(path: &str) -> Result<(), Error> {
    if path.contains("..") {
        return Err(Error::Validation(
            "Security Violation: Directory traversal ('..') is not allowed.".to_string(),
        ));
    }
    Ok(())
}

// ── Public API ────────────────────────────────────────────────────────────────

/// Read a file from the project root.
pub async fn read_file(path: String, state: &SessionState) -> Result<String, Error> {
    validate_path(&path)?;
    io::read_file(path, state).await
}

/// Write a file to the project root, creating parent directories as needed.
pub async fn write_file(path: String, content: String, state: &SessionState) -> Result<(), Error> {
    validate_path(&path)?;
    io::write_file(path, content, state).await
}

/// List directory entries at a project-relative path.
pub async fn list_directory(path: String, state: &SessionState) -> Result<Vec<FileEntry>, Error> {
    io::list_directory(path, state).await
}

/// List directory entries at an absolute path (not scoped to the project root).
pub async fn list_directory_absolute(path: String) -> Result<Vec<FileEntry>, Error> {
    io::list_directory_absolute(path).await
}

/// Create a directory (and all parents) at an absolute path.
pub async fn create_directory_absolute(path: String) -> Result<(), Error> {
    io::create_directory_absolute(path).await
}

/// Return the current user's home directory path.
pub fn get_home_directory() -> Result<String, Error> {
    io::get_home_directory()
}

/// List all files in the project recursively, respecting `.gitignore`.
pub async fn list_project_files(state: &SessionState) -> Result<Vec<String>, Error> {
    io::list_project_files(state).await
}

/// Search the project for files whose contents contain `query`.
pub async fn search_files(query: String, state: &SessionState) -> Result<Vec<SearchResult>, Error> {
    io::search_files(query, state).await
}

/// Execute an allowlisted shell command in the project root directory.
pub async fn exec_shell(
    command: String,
    args: Vec<String>,
    state: &SessionState,
) -> Result<CommandOutput, Error> {
    io::exec_shell(command, args, state).await
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    // Pure unit tests for path validation and sanitisation — no tempdir, no network.

    #[test]
    fn validate_path_accepts_simple_relative_path() {
        assert!(validate_path("src/main.rs").is_ok());
    }

    #[test]
    fn validate_path_accepts_dot_path() {
        assert!(validate_path(".").is_ok());
    }

    #[test]
    fn validate_path_accepts_root_relative() {
        assert!(validate_path("subdir/file.txt").is_ok());
    }

    #[test]
    fn validate_path_rejects_parent_traversal() {
        let result = validate_path("../etc/passwd");
        assert!(matches!(result, Err(Error::Validation(_))));
    }

    #[test]
    fn validate_path_rejects_embedded_traversal() {
        let result = validate_path("src/../../../etc/passwd");
        assert!(matches!(result, Err(Error::Validation(_))));
    }

    #[test]
    fn validate_path_rejects_double_dot_only() {
        let result = validate_path("..");
        assert!(matches!(result, Err(Error::Validation(_))));
    }

    #[test]
    fn validate_path_accepts_file_with_single_dots_in_name() {
        // Filenames like "config.dev.toml" have single dots — must be accepted.
        assert!(validate_path("config.dev.toml").is_ok());
    }

    #[test]
    fn validate_path_rejects_traversal_with_url_encoding_lookalike() {
        // A literal ".." sequence anywhere in the string is rejected.
        let result = validate_path("valid/..hidden");
        assert!(matches!(result, Err(Error::Validation(_))));
    }

    #[test]
    fn error_display_validation() {
        let e = Error::Validation("bad path".to_string());
        assert!(e.to_string().contains("bad path"));
    }

    #[test]
    fn error_display_filesystem() {
        let e = Error::Filesystem("file not found".to_string());
        assert!(e.to_string().contains("file not found"));
    }

    #[test]
    fn error_display_filesystem_contains_message() {
        let e = Error::Filesystem("task panic".to_string());
        assert!(e.to_string().contains("task panic"));
    }
}
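A sketch of the status mapping that the `Error` doc comment above describes; the HTTP layer itself is outside this changeset, so only the error-to-code translation is shown, and the optional 404 refinement is left to the handler.

fn status_for(err: &Error) -> u16 {
    match err {
        // Path traversal, non-allowlisted commands, etc.
        Error::Validation(_) => 400,
        // Filesystem/shell failures; a handler may choose 404 instead when the
        // underlying message indicates a missing file.
        Error::Filesystem(_) => 400,
    }
}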
@@ -0,0 +1,136 @@
//! Gateway aggregation — pure functions for cross-project pipeline status.
//!
//! Formats aggregated pipeline data into compact text suitable for chat
//! transports (Matrix, Slack). Uses `service::pipeline::aggregate_pipeline_counts`
//! for per-project parsing.

use serde_json::Value;
use std::collections::BTreeMap;

/// Format an aggregated status map as a compact, one-line-per-project string
/// suitable for Matrix/Slack messages.
///
/// Healthy projects: `🟢 **name** — B:5 C:2 Q:1 M:0 D:12`
/// Blocked items appended on the same line: `| blocked: 42 [story]`
/// Unreachable projects: `🔴 **name** — UNREACHABLE`
pub fn format_aggregate_status_compact(statuses: &BTreeMap<String, Value>) -> String {
    let mut lines: Vec<String> = Vec::new();
    for (name, status) in statuses {
        if let Some(err) = status.get("error").and_then(|e| e.as_str()) {
            lines.push(format!("\u{1F534} **{name}** — UNREACHABLE: {err}"));
        } else {
            let counts = status.get("counts");
            let b = counts
                .and_then(|c| c.get("backlog"))
                .and_then(|n| n.as_u64())
                .unwrap_or(0);
            let c = counts
                .and_then(|c| c.get("current"))
                .and_then(|n| n.as_u64())
                .unwrap_or(0);
            let q = counts
                .and_then(|c| c.get("qa"))
                .and_then(|n| n.as_u64())
                .unwrap_or(0);
            let m = counts
                .and_then(|c| c.get("merge"))
                .and_then(|n| n.as_u64())
                .unwrap_or(0);
            let d = counts
                .and_then(|c| c.get("done"))
                .and_then(|n| n.as_u64())
                .unwrap_or(0);

            let blocked_arr = status
                .get("blocked")
                .and_then(|a| a.as_array())
                .cloned()
                .unwrap_or_default();

            let indicator = if blocked_arr.is_empty() {
                "\u{1F7E2}" // 🟢
            } else {
                "\u{1F7E0}" // 🟠
            };

            let mut line = format!("{indicator} **{name}** — B:{b} C:{c} Q:{q} M:{m} D:{d}");

            if !blocked_arr.is_empty() {
                let ids: Vec<String> = blocked_arr
                    .iter()
                    .filter_map(|item| item.get("story_id").and_then(|s| s.as_str()))
                    .map(|s| s.to_string())
                    .collect();
                line.push_str(&format!(" | blocked: {}", ids.join(", ")));
            }

            lines.push(line);
        }
    }
    if lines.is_empty() {
        return "No projects registered.".to_string();
    }
    format!("**All Projects**\n\n{}", lines.join("\n\n"))
}

// ── Tests ────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;
    use serde_json::json;

    #[test]
    fn format_healthy_project() {
        let mut statuses = BTreeMap::new();
        statuses.insert(
            "huskies".to_string(),
            json!({
                "counts": { "backlog": 5, "current": 2, "qa": 1, "merge": 0, "done": 12 },
                "blocked": []
            }),
        );
        let output = format_aggregate_status_compact(&statuses);
        assert!(output.contains("huskies"));
        assert!(output.contains("B:5"));
        assert!(output.contains("C:2"));
        assert!(output.contains("Q:1"));
        assert!(output.contains("D:12"));
        assert!(!output.contains("blocked:"));
    }

    #[test]
    fn format_unreachable_project() {
        let mut statuses = BTreeMap::new();
        statuses.insert(
            "broken".to_string(),
            json!({ "error": "connection refused" }),
        );
        let output = format_aggregate_status_compact(&statuses);
        assert!(output.contains("broken"));
        assert!(output.contains("UNREACHABLE"));
        assert!(output.contains("connection refused"));
    }

    #[test]
    fn format_blocked_items_shown() {
        let mut statuses = BTreeMap::new();
        statuses.insert(
            "myproj".to_string(),
            json!({
                "counts": { "backlog": 0, "current": 1, "qa": 0, "merge": 0, "done": 0 },
                "blocked": [{ "story_id": "42_story_x", "name": "X", "stage": "current", "reason": "blocked" }]
            }),
        );
        let output = format_aggregate_status_compact(&statuses);
        assert!(output.contains("blocked:"));
        assert!(output.contains("42_story_x"));
    }

    #[test]
    fn format_empty_projects() {
        let statuses = BTreeMap::new();
        let output = format_aggregate_status_compact(&statuses);
        assert_eq!(output, "No projects registered.");
    }
}
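For the blocked case exercised by `format_blocked_items_shown` above, the full rendered message comes out with the orange indicator and comma-joined story ids (the counts are the test's illustrative values):

// **All Projects**
//
// 🟠 **myproj** — B:0 C:1 Q:0 M:0 D:0 | blocked: 42_story_x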
@@ -0,0 +1,191 @@
|
||||
//! Gateway configuration types — pure parsing and validation.
|
||||
//!
|
||||
//! Contains `ProjectEntry`, `GatewayConfig`, and validation logic.
|
||||
//! All filesystem I/O (loading from disk) lives in `io.rs`.
|
||||
|
||||
use serde::{Deserialize, Serialize};
|
||||
use std::collections::BTreeMap;
|
||||
|
||||
/// A single project entry in `projects.toml`.
|
||||
#[derive(Debug, Clone, Deserialize, Serialize)]
|
||||
pub struct ProjectEntry {
|
||||
/// Base URL of the project's huskies container (e.g. `http://localhost:3001`).
|
||||
pub url: String,
|
||||
}
|
||||
|
||||
/// Top-level `projects.toml` config.
|
||||
#[derive(Debug, Clone, Deserialize, Serialize)]
|
||||
pub struct GatewayConfig {
|
||||
/// Map of project name → container URL.
|
||||
#[serde(default)]
|
||||
pub projects: BTreeMap<String, ProjectEntry>,
|
||||
}
|
||||
|
||||
/// Validate that a gateway config has at least one project.
|
||||
///
|
||||
/// Returns the name of the first project (alphabetically) on success,
|
||||
/// or an error message if the config is empty.
|
||||
pub fn validate_config(config: &GatewayConfig) -> Result<String, String> {
|
||||
if config.projects.is_empty() {
|
||||
return Err("projects.toml must define at least one project".to_string());
|
||||
}
|
||||
Ok(config.projects.keys().next().unwrap().clone())
|
||||
}
|
||||
|
||||
/// Validate that a project name exists in the given project map.
|
||||
///
|
||||
/// Returns the project's URL on success.
|
||||
pub fn validate_project_exists(
|
||||
projects: &BTreeMap<String, ProjectEntry>,
|
||||
name: &str,
|
||||
) -> Result<String, String> {
|
||||
projects.get(name).map(|p| p.url.clone()).ok_or_else(|| {
|
||||
let available: Vec<&str> = projects.keys().map(|s| s.as_str()).collect();
|
||||
format!(
|
||||
"unknown project '{name}'. Available: {}",
|
||||
available.join(", ")
|
||||
)
|
||||
})
|
||||
}
|
||||
|
||||
/// Escape a string as a TOML quoted string.
|
||||
pub fn toml_string(s: &str) -> String {
|
||||
format!("\"{}\"", s.replace('\\', "\\\\").replace('"', "\\\""))
|
||||
}
|
||||
|
||||
/// Serialize a `bot.toml` content string from the given fields.
|
||||
pub fn serialize_bot_config(
|
||||
transport: &str,
|
||||
homeserver: Option<&str>,
|
||||
username: Option<&str>,
|
||||
password: Option<&str>,
|
||||
slack_bot_token: Option<&str>,
|
||||
slack_signing_secret: Option<&str>,
|
||||
) -> String {
|
||||
match transport {
|
||||
"slack" => {
|
||||
format!(
|
||||
"enabled = true\ntransport = \"slack\"\n\nslack_bot_token = {}\nslack_signing_secret = {}\nslack_channel_ids = []\n",
|
||||
toml_string(slack_bot_token.unwrap_or("")),
|
||||
toml_string(slack_signing_secret.unwrap_or("")),
|
||||
)
|
||||
}
|
||||
_ => {
|
||||
format!(
|
||||
"enabled = true\ntransport = \"matrix\"\n\nhomeserver = {}\nusername = {}\npassword = {}\nroom_ids = []\nallowed_users = []\n",
|
||||
toml_string(homeserver.unwrap_or("")),
|
||||
toml_string(username.unwrap_or("")),
|
||||
toml_string(password.unwrap_or("")),
|
||||
)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ── Tests ────────────────────────────────────────────────────────────────────
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
|
||||
#[test]
|
||||
fn parse_valid_projects_toml() {
|
||||
let toml_str = r#"
|
||||
[projects.huskies]
|
||||
url = "http://localhost:3001"
|
||||
|
||||
[projects.robot-studio]
|
||||
url = "http://localhost:3002"
|
||||
"#;
|
||||
let config: GatewayConfig = toml::from_str(toml_str).unwrap();
|
||||
assert_eq!(config.projects.len(), 2);
|
||||
assert_eq!(config.projects["huskies"].url, "http://localhost:3001");
|
||||
assert_eq!(config.projects["robot-studio"].url, "http://localhost:3002");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn parse_empty_projects_toml() {
|
||||
let toml_str = "[projects]\n";
|
||||
let config: GatewayConfig = toml::from_str(toml_str).unwrap();
|
||||
assert!(config.projects.is_empty());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn validate_config_rejects_empty() {
|
||||
let config = GatewayConfig {
|
||||
projects: BTreeMap::new(),
|
||||
};
|
||||
assert!(validate_config(&config).is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn validate_config_returns_first_project_name() {
|
||||
let mut projects = BTreeMap::new();
|
||||
projects.insert(
|
||||
"beta".into(),
|
||||
ProjectEntry {
|
||||
url: "http://b".into(),
|
||||
},
|
||||
);
|
||||
projects.insert(
|
||||
"alpha".into(),
|
||||
ProjectEntry {
|
||||
url: "http://a".into(),
|
||||
},
|
||||
);
|
||||
let config = GatewayConfig { projects };
|
||||
assert_eq!(validate_config(&config).unwrap(), "alpha");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn validate_project_exists_succeeds() {
|
||||
let mut projects = BTreeMap::new();
|
||||
projects.insert(
|
||||
"p1".into(),
|
||||
ProjectEntry {
|
||||
url: "http://p1".into(),
|
||||
},
|
||||
);
|
||||
assert_eq!(
|
||||
validate_project_exists(&projects, "p1").unwrap(),
|
||||
"http://p1"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn validate_project_exists_fails() {
|
||||
let projects = BTreeMap::new();
|
||||
assert!(validate_project_exists(&projects, "missing").is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn toml_string_escapes_quotes() {
|
||||
assert_eq!(toml_string(r#"a"b"#), r#""a\"b""#);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn toml_string_escapes_backslashes() {
|
||||
assert_eq!(toml_string(r"a\b"), r#""a\\b""#);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn serialize_bot_config_matrix() {
|
||||
let content = serialize_bot_config(
|
||||
"matrix",
|
||||
Some("https://mx.io"),
|
||||
Some("@bot:mx.io"),
|
||||
Some("pass"),
|
||||
None,
|
||||
None,
|
||||
);
|
||||
assert!(content.contains("transport = \"matrix\""));
|
||||
assert!(content.contains("homeserver = \"https://mx.io\""));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn serialize_bot_config_slack() {
|
||||
let content =
|
||||
serialize_bot_config("slack", None, None, None, Some("xoxb-123"), Some("secret"));
|
||||
assert!(content.contains("transport = \"slack\""));
|
||||
assert!(content.contains("slack_bot_token = \"xoxb-123\""));
|
||||
}
|
||||
}
|
||||
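For reference, the matrix branch of `serialize_bot_config` above renders to the following `bot.toml` content; the sample credentials mirror the adjacent test and are placeholders, not real values.

let content = serialize_bot_config(
    "matrix",
    Some("https://mx.io"),
    Some("@bot:mx.io"),
    Some("pass"),
    None,
    None,
);
// enabled = true
// transport = "matrix"
//
// homeserver = "https://mx.io"
// username = "@bot:mx.io"
// password = "pass"
// room_ids = []
// allowed_users = []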
@@ -0,0 +1,407 @@
|
||||
//! Gateway I/O — the ONLY place in `service/gateway/` that may perform side effects.
|
||||
//!
|
||||
//! Side effects here include: reading/writing config and agent state files,
|
||||
//! HTTP requests to project containers (proxying, health checks, polling),
|
||||
//! spawning the Matrix bot task, and the notification poller background task.
|
||||
|
||||
use super::config::{GatewayConfig, ProjectEntry};
|
||||
use super::registration::JoinedAgent;
|
||||
pub use reqwest::Client;
|
||||
use serde_json::{Value, json};
|
||||
use std::collections::{BTreeMap, HashMap};
|
||||
use std::path::Path;
|
||||
|
||||
// ── Config I/O ───────────────────────────────────────────────────────────────
|
||||
|
||||
/// Load gateway config from a `projects.toml` file.
|
||||
pub fn load_config(path: &Path) -> Result<GatewayConfig, String> {
|
||||
let contents = std::fs::read_to_string(path)
|
||||
.map_err(|e| format!("cannot read {}: {e}", path.display()))?;
|
||||
toml::from_str(&contents).map_err(|e| format!("invalid projects.toml: {e}"))
|
||||
}
|
||||
|
||||
/// Load persisted agents from `<config_dir>/gateway_agents.json`.
|
||||
/// Returns an empty list if the file does not exist or cannot be parsed.
|
||||
pub fn load_agents(config_dir: &Path) -> Vec<JoinedAgent> {
|
||||
let path = config_dir.join("gateway_agents.json");
|
||||
match std::fs::read(&path) {
|
||||
Ok(data) => serde_json::from_slice(&data).unwrap_or_default(),
|
||||
Err(_) => Vec::new(),
|
||||
}
|
||||
}
|
||||
|
||||
/// Persist the current projects map to `<config_dir>/projects.toml`.
|
||||
/// Silently ignores write errors or skips when `config_dir` is empty.
|
||||
pub async fn save_config(projects: &BTreeMap<String, ProjectEntry>, config_dir: &Path) {
|
||||
if config_dir.as_os_str().is_empty() {
|
||||
return;
|
||||
}
|
||||
let path = config_dir.join("projects.toml");
|
||||
let config = GatewayConfig {
|
||||
projects: projects.clone(),
|
||||
};
|
||||
if let Ok(data) = toml::to_string_pretty(&config) {
|
||||
let _ = tokio::fs::write(&path, data).await;
|
||||
}
|
||||
}
|
||||
|
||||
/// Persist the current agent list to `<config_dir>/gateway_agents.json`.
|
||||
/// Silently ignores write errors.
|
||||
pub async fn save_agents(agents: &[JoinedAgent], config_dir: &Path) {
|
||||
if config_dir == Path::new("") {
|
||||
return;
|
||||
}
|
||||
let path = config_dir.join("gateway_agents.json");
|
||||
if let Ok(data) = serde_json::to_vec_pretty(agents) {
|
||||
let _ = tokio::fs::write(&path, data).await;
|
||||
}
|
||||
}
|
||||
|
||||
// ── Bot config I/O ──────────────────────────────────────────────────────────
|
||||
|
||||
/// Read the current raw bot.toml as key/value pairs for the configuration UI.
|
||||
/// Returns `None` values if the file does not exist.
|
||||
pub fn read_bot_config_raw(config_dir: &Path) -> BotConfigFields {
|
||||
let path = config_dir.join(".huskies").join("bot.toml");
|
||||
let content = match std::fs::read_to_string(&path) {
|
||||
Ok(c) => c,
|
||||
Err(_) => return BotConfigFields::default(),
|
||||
};
|
||||
let table: toml::Value = match toml::from_str(&content) {
|
||||
Ok(v) => v,
|
||||
Err(_) => return BotConfigFields::default(),
|
||||
};
|
||||
let s = |key: &str| -> Option<String> {
|
||||
table
|
||||
.get(key)
|
||||
.and_then(|v| v.as_str())
|
||||
.map(|s| s.to_string())
|
||||
};
|
||||
BotConfigFields {
|
||||
transport: s("transport").unwrap_or_else(|| "matrix".to_string()),
|
||||
homeserver: s("homeserver"),
|
||||
username: s("username"),
|
||||
password: s("password"),
|
||||
slack_bot_token: s("slack_bot_token"),
|
||||
slack_signing_secret: s("slack_signing_secret"),
|
||||
}
|
||||
}
|
||||
|
||||
/// Raw bot.toml fields for the configuration UI.
|
||||
#[derive(Default)]
|
||||
pub struct BotConfigFields {
|
||||
pub transport: String,
|
||||
pub homeserver: Option<String>,
|
||||
pub username: Option<String>,
|
||||
pub password: Option<String>,
|
||||
pub slack_bot_token: Option<String>,
|
||||
pub slack_signing_secret: Option<String>,
|
||||
}
|
||||
|
||||
/// Write a `bot.toml` from the given content string.
|
||||
pub fn write_bot_config(config_dir: &Path, content: &str) -> Result<(), String> {
|
||||
let huskies_dir = config_dir.join(".huskies");
|
||||
std::fs::create_dir_all(&huskies_dir)
|
||||
.map_err(|e| format!("cannot create .huskies dir: {e}"))?;
|
||||
let path = huskies_dir.join("bot.toml");
|
||||
std::fs::write(&path, content).map_err(|e| format!("cannot write bot.toml: {e}"))
|
||||
}
|
||||
|
||||
// ── MCP proxy I/O ───────────────────────────────────────────────────────────
|
||||
|
||||
/// Proxy a raw MCP request body to the given project URL.
|
||||
pub async fn proxy_mcp_call(
|
||||
client: &Client,
|
||||
base_url: &str,
|
||||
request_bytes: &[u8],
|
||||
) -> Result<Vec<u8>, String> {
|
||||
let mcp_url = format!("{}/mcp", base_url.trim_end_matches('/'));
|
||||
|
||||
let resp = client
|
||||
.post(&mcp_url)
|
||||
.header("Content-Type", "application/json")
|
||||
.body(request_bytes.to_vec())
|
||||
.send()
|
||||
.await
|
||||
.map_err(|e| format!("failed to reach {mcp_url}: {e}"))?;
|
||||
|
||||
resp.bytes()
|
||||
.await
|
||||
.map(|b| b.to_vec())
|
||||
.map_err(|e| format!("failed to read response from {mcp_url}: {e}"))
|
||||
}
|
||||
|
||||
/// Fetch tools/list from a project's MCP endpoint.
|
||||
pub async fn fetch_tools_list(client: &Client, base_url: &str) -> Result<Value, String> {
|
||||
let mcp_url = format!("{}/mcp", base_url.trim_end_matches('/'));
|
||||
|
||||
let rpc_body = json!({
|
||||
"jsonrpc": "2.0",
|
||||
"id": 1,
|
||||
"method": "tools/list",
|
||||
"params": {}
|
||||
});
|
||||
|
||||
let resp = client
|
||||
.post(&mcp_url)
|
||||
.json(&rpc_body)
|
||||
.send()
|
||||
.await
|
||||
.map_err(|e| format!("failed to reach {mcp_url}: {e}"))?;
|
||||
|
||||
resp.json()
|
||||
.await
|
||||
.map_err(|e| format!("invalid JSON from upstream: {e}"))
|
||||
}
|
||||
|
||||
/// Fetch and aggregate pipeline status for a single project URL.
|
||||
pub async fn fetch_one_project_pipeline_status(url: &str, client: &Client) -> Value {
|
||||
let mcp_url = format!("{}/mcp", url.trim_end_matches('/'));
|
||||
let rpc_body = json!({
|
||||
"jsonrpc": "2.0",
|
||||
"id": 1,
|
||||
"method": "tools/call",
|
||||
"params": {
|
||||
"name": "get_pipeline_status",
|
||||
"arguments": {}
|
||||
}
|
||||
});
|
||||
|
||||
match client.post(&mcp_url).json(&rpc_body).send().await {
|
||||
Ok(resp) => match resp.json::<Value>().await {
|
||||
Ok(upstream) => {
|
||||
if let Some(text) = upstream
|
||||
.get("result")
|
||||
.and_then(|r| r.get("content"))
|
||||
.and_then(|c| c.get(0))
|
||||
.and_then(|c| c.get("text"))
|
||||
.and_then(|t| t.as_str())
|
||||
{
|
||||
match serde_json::from_str::<Value>(text) {
|
||||
Ok(pipeline) => {
|
||||
crate::service::pipeline::aggregate_pipeline_counts(&pipeline)
|
||||
}
|
||||
Err(_) => json!({ "error": "invalid pipeline JSON" }),
|
||||
}
|
||||
} else {
|
||||
json!({ "error": "unexpected response shape" })
|
||||
}
|
||||
}
|
||||
Err(e) => json!({ "error": format!("invalid response: {e}") }),
|
||||
},
|
||||
Err(e) => json!({ "error": format!("unreachable: {e}") }),
|
||||
}
|
||||
}
|
||||
|
||||
/// Fetch `get_pipeline_status` from every registered project URL in parallel.
|
||||
pub async fn fetch_all_project_pipeline_statuses(
|
||||
project_urls: &BTreeMap<String, String>,
|
||||
client: &Client,
|
||||
) -> BTreeMap<String, Value> {
|
||||
use futures::future::join_all;
|
||||
|
||||
let futures: Vec<_> = project_urls
|
||||
.iter()
|
||||
.map(|(name, url)| {
|
||||
let name = name.clone();
|
||||
let url = url.clone();
|
||||
let client = client.clone();
|
||||
async move {
|
||||
let result = fetch_one_project_pipeline_status(&url, &client).await;
|
||||
(name, result)
|
||||
}
|
||||
})
|
||||
.collect();
|
||||
|
||||
join_all(futures).await.into_iter().collect()
|
||||
}
|
||||
|
||||
/// Fetch the pipeline status from a single project for the `gateway_status` tool.
|
||||
pub async fn fetch_pipeline_status_for_project(
|
||||
client: &Client,
|
||||
base_url: &str,
|
||||
) -> Result<Value, String> {
|
||||
let mcp_url = format!("{}/mcp", base_url.trim_end_matches('/'));
|
||||
let rpc_body = json!({
|
||||
"jsonrpc": "2.0",
|
||||
"id": 1,
|
||||
"method": "tools/call",
|
||||
"params": {
|
||||
"name": "get_pipeline_status",
|
||||
"arguments": {}
|
||||
}
|
||||
});
|
||||
|
||||
let resp = client
|
||||
.post(&mcp_url)
|
||||
.json(&rpc_body)
|
||||
.send()
|
||||
.await
|
||||
.map_err(|e| format!("failed to reach {mcp_url}: {e}"))?;
|
||||
|
||||
resp.json()
|
||||
.await
|
||||
.map_err(|e| format!("invalid upstream response: {e}"))
|
||||
}
|
||||
|
||||
/// Check health of a single project URL.
|
||||
pub async fn check_project_health(client: &Client, base_url: &str) -> Result<bool, String> {
|
||||
let health_url = format!("{}/health", base_url.trim_end_matches('/'));
|
||||
match client.get(&health_url).send().await {
|
||||
Ok(resp) => Ok(resp.status().is_success()),
|
||||
Err(e) => Err(format!("unreachable: {e}")),
|
||||
}
|
||||
}
|
||||
|
||||
// ── Gateway MCP JSON ────────────────────────────────────────────────────────
|
||||
|
||||
/// Write (or overwrite) a `.mcp.json` in `config_dir` that points Claude Code
|
||||
/// CLI at the gateway's own `/mcp` endpoint.
|
||||
pub fn write_gateway_mcp_json(config_dir: &Path, port: u16) -> Result<(), std::io::Error> {
|
||||
let host = std::env::var("HUSKIES_HOST").unwrap_or_else(|_| "127.0.0.1".to_string());
|
||||
let url = format!("http://{host}:{port}/mcp");
|
||||
let content = json!({
|
||||
"mcpServers": {
|
||||
"huskies": {
|
||||
"type": "http",
|
||||
"url": url
|
||||
}
|
||||
}
|
||||
});
|
||||
let path = config_dir.join(".mcp.json");
|
||||
std::fs::write(&path, serde_json::to_string_pretty(&content).unwrap())?;
|
||||
crate::slog!("[gateway] Wrote {} pointing to {}", path.display(), url);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
// ── Init project I/O ────────────────────────────────────────────────────────
|
||||
|
||||
/// Check if a path already has a `.huskies/` directory.
|
||||
pub fn has_huskies_dir(path: &Path) -> bool {
|
||||
path.join(".huskies").exists()
|
||||
}
|
||||
|
||||
/// Create a directory (and parents) if it does not exist.
|
||||
pub fn ensure_directory(path: &Path) -> Result<(), String> {
|
||||
if !path.exists() {
|
||||
std::fs::create_dir_all(path)
|
||||
.map_err(|e| format!("failed to create directory '{}': {e}", path.display()))?;
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Scaffold a huskies project at the given path.
|
||||
pub fn scaffold_project(path: &Path) -> Result<(), String> {
|
||||
crate::io::fs::scaffold::scaffold_story_kit(path, 3001)
|
||||
}
|
||||
|
||||
/// Initialise wizard state at the given path.
|
||||
pub fn init_wizard_state(path: &Path) {
|
||||
crate::io::wizard::WizardState::init_if_missing(path);
|
||||
}
|
||||
|
||||
// ── Notification poller ─────────────────────────────────────────────────────
|
||||
|
||||
/// Spawn a background task that polls events from all project servers.
|
||||
pub fn spawn_gateway_notification_poller(
|
||||
transport: std::sync::Arc<dyn crate::chat::ChatTransport>,
|
||||
room_ids: Vec<String>,
|
||||
project_urls: BTreeMap<String, String>,
|
||||
poll_interval_secs: u64,
|
||||
) {
|
||||
tokio::spawn(async move {
|
||||
let client = Client::builder()
|
||||
.timeout(std::time::Duration::from_secs(10))
|
||||
.build()
|
||||
.unwrap_or_else(|_| Client::new());
|
||||
let interval = std::time::Duration::from_secs(poll_interval_secs.max(1));
|
||||
|
||||
let mut last_ts: HashMap<String, u64> = project_urls
|
||||
.keys()
|
||||
.map(|name| (name.clone(), 0u64))
|
||||
.collect();
|
||||
|
||||
loop {
|
||||
for (project_name, base_url) in &project_urls {
|
||||
let since = last_ts.get(project_name).copied().unwrap_or(0);
|
||||
let url = format!("{base_url}/api/events?since={since}");
|
||||
|
||||
let response = match client.get(&url).send().await {
|
||||
Ok(r) => r,
|
||||
Err(e) => {
|
||||
crate::slog!(
|
||||
"[gateway-poller] {project_name}: unreachable ({e}); skipping"
|
||||
);
|
||||
continue;
|
||||
}
|
||||
};
|
||||
|
||||
let events: Vec<crate::service::events::StoredEvent> = match response.json().await {
|
||||
Ok(v) => v,
|
||||
Err(e) => {
|
||||
crate::slog!(
|
||||
"[gateway-poller] {project_name}: failed to parse events: {e}"
|
||||
);
|
||||
continue;
|
||||
}
|
||||
};
|
||||
|
||||
for event in &events {
|
||||
let ts = event.timestamp_ms();
|
||||
if ts > *last_ts.get(project_name).unwrap_or(&0) {
|
||||
last_ts.insert(project_name.clone(), ts);
|
||||
}
|
||||
|
||||
let (plain, html) = super::polling::format_gateway_event(project_name, event);
|
||||
for room_id in &room_ids {
|
||||
if let Err(e) = transport.send_message(room_id, &plain, &html).await {
|
||||
crate::slog!(
|
||||
"[gateway-poller] Failed to send notification to {room_id}: {e}"
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
tokio::time::sleep(interval).await;
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
// ── Gateway bot spawn ───────────────────────────────────────────────────────
|
||||
|
||||
/// Re-export type alias for the active project lock.
|
||||
pub type ActiveProject = std::sync::Arc<tokio::sync::RwLock<String>>;
|
||||
|
||||
/// Attempt to spawn the Matrix bot against the gateway config directory.
|
||||
pub fn spawn_gateway_bot(
|
||||
config_dir: &Path,
|
||||
active_project: ActiveProject,
|
||||
gateway_projects: Vec<String>,
|
||||
gateway_project_urls: BTreeMap<String, String>,
|
||||
port: u16,
|
||||
) -> Option<tokio::task::AbortHandle> {
|
||||
use crate::agents::AgentPool;
|
||||
use tokio::sync::{broadcast, mpsc};
|
||||
|
||||
let (watcher_tx, _) = broadcast::channel(16);
|
||||
let (_perm_tx, perm_rx) = mpsc::unbounded_channel();
|
||||
let perm_rx = std::sync::Arc::new(tokio::sync::Mutex::new(perm_rx));
|
||||
|
||||
let (shutdown_tx, shutdown_rx) =
|
||||
tokio::sync::watch::channel::<Option<crate::rebuild::ShutdownReason>>(None);
|
||||
std::mem::forget(shutdown_tx);
|
||||
|
||||
let agents = std::sync::Arc::new(AgentPool::new(port, watcher_tx.clone()));
|
||||
|
||||
crate::chat::transport::matrix::spawn_bot(
|
||||
config_dir,
|
||||
watcher_tx,
|
||||
perm_rx,
|
||||
agents,
|
||||
shutdown_rx,
|
||||
Some(active_project),
|
||||
gateway_projects,
|
||||
gateway_project_urls,
|
||||
)
|
||||
}
|
||||
@@ -0,0 +1,580 @@
|
||||
//! Gateway service — domain logic for the multi-project gateway.
|
||||
//!
|
||||
//! Follows the conventions in `docs/architecture/service-modules.md`:
|
||||
//! - `mod.rs` (this file) — public API, typed [`Error`], orchestration, `GatewayState`
|
||||
//! - `io.rs` — the ONLY place that performs side effects (filesystem, network, process spawn)
|
||||
//! - `config.rs` — pure config types and validation
|
||||
//! - `registration.rs` — pure agent registration logic
|
||||
//! - `aggregation.rs` — pure cross-project pipeline formatting
|
||||
//! - `polling.rs` — pure notification event formatting
|
||||
|
||||
pub mod aggregation;
|
||||
pub mod config;
|
||||
pub(crate) mod io;
|
||||
pub mod polling;
|
||||
pub mod registration;
|
||||
|
||||
pub use aggregation::format_aggregate_status_compact;
|
||||
pub use config::{GatewayConfig, ProjectEntry};
|
||||
pub use io::{fetch_all_project_pipeline_statuses, spawn_gateway_notification_poller};
|
||||
pub use registration::JoinedAgent;
|
||||
|
||||
use io::Client;
|
||||
use std::collections::{BTreeMap, HashMap};
|
||||
use std::path::PathBuf;
|
||||
use std::sync::Arc;
|
||||
use tokio::sync::Mutex as TokioMutex;
|
||||
use tokio::sync::RwLock;
|
||||
|
||||
// ── Error type ──────────────────────────────────────────────────────────────
|
||||
|
||||
/// Typed errors returned by `service::gateway` functions.
|
||||
///
|
||||
/// HTTP handlers map these to appropriate status codes:
|
||||
/// - [`Error::ProjectNotFound`] → 404 Not Found
|
||||
/// - [`Error::UnreachableProject`] → 502 Bad Gateway
|
||||
/// - [`Error::DuplicateToken`] → 409 Conflict
|
||||
/// - [`Error::InvalidAgent`] → 404 Not Found / 400 Bad Request
|
||||
/// - [`Error::Config`] → 400 Bad Request
|
||||
/// - [`Error::Upstream`] → 502 Bad Gateway
|
||||
#[derive(Debug)]
|
||||
pub enum Error {
|
||||
/// A referenced project does not exist in the gateway config.
|
||||
ProjectNotFound(String),
|
||||
/// A project container is unreachable.
|
||||
UnreachableProject(String),
|
||||
/// A join token has already been consumed or a project name is taken.
|
||||
DuplicateToken(String),
|
||||
/// An agent ID is invalid or not found.
|
||||
InvalidAgent(String),
|
||||
/// A configuration value is invalid.
|
||||
Config(String),
|
||||
/// An upstream project container returned an unexpected response.
|
||||
Upstream(String),
|
||||
}
|
||||
|
||||
impl std::fmt::Display for Error {
|
||||
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
|
||||
match self {
|
||||
Self::ProjectNotFound(msg) => write!(f, "Project not found: {msg}"),
|
||||
Self::UnreachableProject(msg) => write!(f, "Unreachable project: {msg}"),
|
||||
Self::DuplicateToken(msg) => write!(f, "Duplicate token: {msg}"),
|
||||
Self::InvalidAgent(msg) => write!(f, "Invalid agent: {msg}"),
|
||||
Self::Config(msg) => write!(f, "Config error: {msg}"),
|
||||
Self::Upstream(msg) => write!(f, "Upstream error: {msg}"),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ── Gateway state ───────────────────────────────────────────────────────────
|
||||
|
||||
/// A one-time join token that has been generated but not yet consumed.
|
||||
pub(crate) struct PendingToken {
|
||||
#[allow(dead_code)]
|
||||
pub(crate) created_at: f64,
|
||||
}
|
||||
|
||||
/// Shared gateway state threaded through HTTP handlers.
|
||||
#[derive(Clone)]
|
||||
pub struct GatewayState {
|
||||
/// The live set of registered projects (initially loaded from `projects.toml`).
|
||||
pub projects: Arc<RwLock<BTreeMap<String, ProjectEntry>>>,
|
||||
/// The currently active project name.
|
||||
pub active_project: Arc<RwLock<String>>,
|
||||
/// HTTP client for proxying requests to project containers.
|
||||
pub client: Client,
|
||||
/// Build agents that have joined this gateway.
|
||||
pub joined_agents: Arc<RwLock<Vec<JoinedAgent>>>,
|
||||
/// One-time join tokens that have been issued but not yet consumed.
|
||||
pub(crate) pending_tokens: Arc<RwLock<HashMap<String, PendingToken>>>,
|
||||
/// Directory containing `projects.toml` and the `.huskies/` subfolder.
|
||||
pub config_dir: PathBuf,
|
||||
/// HTTP port the gateway is listening on.
|
||||
pub port: u16,
|
||||
/// Abort handle for the running Matrix bot task (if any).
|
||||
pub bot_handle: Arc<TokioMutex<Option<tokio::task::AbortHandle>>>,
|
||||
}
|
||||
|
||||
impl GatewayState {
|
||||
/// Create a new gateway state from a config and config directory.
|
||||
///
|
||||
/// The first project in the config becomes the active project by default.
|
||||
/// Previously registered agents are loaded from `gateway_agents.json`.
|
||||
pub fn new(
|
||||
gateway_config: GatewayConfig,
|
||||
config_dir: PathBuf,
|
||||
port: u16,
|
||||
) -> Result<Self, String> {
|
||||
let first = config::validate_config(&gateway_config)?;
|
||||
let agents = io::load_agents(&config_dir);
|
||||
Ok(Self {
|
||||
projects: Arc::new(RwLock::new(gateway_config.projects)),
|
||||
active_project: Arc::new(RwLock::new(first)),
|
||||
client: Client::new(),
|
||||
joined_agents: Arc::new(RwLock::new(agents)),
|
||||
pending_tokens: Arc::new(RwLock::new(HashMap::new())),
|
||||
config_dir,
|
||||
port,
|
||||
bot_handle: Arc::new(TokioMutex::new(None)),
|
||||
})
|
||||
}
|
||||
|
||||
/// Get the URL of the currently active project.
|
||||
pub async fn active_url(&self) -> Result<String, Error> {
|
||||
let name = self.active_project.read().await.clone();
|
||||
self.projects
|
||||
.read()
|
||||
.await
|
||||
.get(&name)
|
||||
.map(|p| p.url.clone())
|
||||
.ok_or_else(|| {
|
||||
Error::ProjectNotFound(format!("active project '{name}' not found in config"))
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// ── Public API ──────────────────────────────────────────────────────────────
|
||||
|
||||
/// Switch the active project. Returns the project's URL on success.
|
||||
pub async fn switch_project(state: &GatewayState, project: &str) -> Result<String, Error> {
|
||||
if project.is_empty() {
|
||||
return Err(Error::Config("missing required parameter: project".into()));
|
||||
}
|
||||
|
||||
let url = {
|
||||
let projects = state.projects.read().await;
|
||||
config::validate_project_exists(&projects, project).map_err(Error::ProjectNotFound)?
|
||||
};
|
||||
|
||||
*state.active_project.write().await = project.to_string();
|
||||
Ok(url)
|
||||
}
|
||||
|
||||
/// Generate a one-time join token. Returns the token string.
|
||||
pub async fn generate_join_token(state: &GatewayState) -> String {
|
||||
let token = uuid::Uuid::new_v4().to_string();
|
||||
let now = chrono::Utc::now().timestamp() as f64;
|
||||
state
|
||||
.pending_tokens
|
||||
.write()
|
||||
.await
|
||||
.insert(token.clone(), PendingToken { created_at: now });
|
||||
crate::slog!("[gateway] Generated join token {:.8}…", &token);
|
||||
token
|
||||
}
|
||||
|
||||
/// Register a build agent with a join token.
|
||||
pub async fn register_agent(
|
||||
state: &GatewayState,
|
||||
token: &str,
|
||||
label: String,
|
||||
address: String,
|
||||
) -> Result<JoinedAgent, Error> {
|
||||
// Validate and consume the token.
|
||||
let mut tokens = state.pending_tokens.write().await;
|
||||
if !tokens.contains_key(token) {
|
||||
return Err(Error::DuplicateToken(
|
||||
"invalid or already-used join token".into(),
|
||||
));
|
||||
}
|
||||
tokens.remove(token);
|
||||
drop(tokens);
|
||||
|
||||
let now = chrono::Utc::now().timestamp() as f64;
|
||||
let agent = registration::create_agent(uuid::Uuid::new_v4().to_string(), label, address, now);
|
||||
|
||||
crate::slog!(
|
||||
"[gateway] Agent '{}' registered (id={})",
|
||||
agent.label,
|
||||
agent.id
|
||||
);
|
||||
|
||||
{
|
||||
let mut agents = state.joined_agents.write().await;
|
||||
agents.push(agent.clone());
|
||||
io::save_agents(&agents, &state.config_dir).await;
|
||||
}
|
||||
|
||||
Ok(agent)
|
||||
}
|
||||
|
||||
/// Remove a registered agent by ID. Returns `true` if found and removed.
|
||||
pub async fn remove_agent(state: &GatewayState, id: &str) -> bool {
|
||||
let mut agents = state.joined_agents.write().await;
|
||||
let removed = registration::remove_agent(&mut agents, id);
|
||||
if removed {
|
||||
io::save_agents(&agents, &state.config_dir).await;
|
||||
crate::slog!("[gateway] Removed agent id={id}");
|
||||
}
|
||||
removed
|
||||
}
|
||||
|
||||
/// Assign or unassign an agent to a project.
|
||||
pub async fn assign_agent(
|
||||
state: &GatewayState,
|
||||
id: &str,
|
||||
project: Option<String>,
|
||||
) -> Result<JoinedAgent, Error> {
|
||||
let project_clean = project.and_then(|p| if p.is_empty() { None } else { Some(p) });
|
||||
|
||||
let updated = {
|
||||
let projects = state.projects.read().await;
|
||||
let mut agents = state.joined_agents.write().await;
|
||||
registration::assign_agent(&mut agents, id, project_clean, &projects)?
|
||||
};
|
||||
|
||||
crate::slog!(
|
||||
"[gateway] Agent '{}' (id={}) assigned to {:?}",
|
||||
updated.label,
|
||||
updated.id,
|
||||
updated.assigned_project
|
||||
);
|
||||
let agents = state.joined_agents.read().await.clone();
|
||||
io::save_agents(&agents, &state.config_dir).await;
|
||||
Ok(updated)
|
||||
}
|
||||
|
||||
/// Update an agent's heartbeat. Returns `true` if found.
|
||||
pub async fn heartbeat_agent(state: &GatewayState, id: &str) -> bool {
|
||||
let now = chrono::Utc::now().timestamp() as f64;
|
||||
let mut agents = state.joined_agents.write().await;
|
||||
registration::heartbeat(&mut agents, id, now)
|
||||
}
|
||||
|
||||
/// Add a new project to the gateway config.
|
||||
pub async fn add_project(state: &GatewayState, name: &str, url: &str) -> Result<(), Error> {
|
||||
let name = name.trim().to_string();
|
||||
let url = url.trim().to_string();
|
||||
|
||||
if name.is_empty() {
|
||||
return Err(Error::Config("project name must not be empty".into()));
|
||||
}
|
||||
if url.is_empty() {
|
||||
return Err(Error::Config("project url must not be empty".into()));
|
||||
}
|
||||
|
||||
{
|
||||
let mut projects = state.projects.write().await;
|
||||
if projects.contains_key(&name) {
|
||||
return Err(Error::DuplicateToken(format!(
|
||||
"project '{name}' already exists"
|
||||
)));
|
||||
}
|
||||
projects.insert(name.clone(), ProjectEntry { url: url.clone() });
|
||||
}
|
||||
|
||||
let snapshot = state.projects.read().await.clone();
|
||||
io::save_config(&snapshot, &state.config_dir).await;
|
||||
crate::slog!("[gateway] Added project '{name}' ({url})");
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Remove a project from the gateway config.
|
||||
pub async fn remove_project(state: &GatewayState, name: &str) -> Result<(), Error> {
|
||||
let active = state.active_project.read().await.clone();
|
||||
|
||||
{
|
||||
let mut projects = state.projects.write().await;
|
||||
if !projects.contains_key(name) {
|
||||
return Err(Error::ProjectNotFound(format!(
|
||||
"project '{name}' not found"
|
||||
)));
|
||||
}
|
||||
if projects.len() == 1 {
|
||||
return Err(Error::Config("cannot remove the last project".into()));
|
||||
}
|
||||
projects.remove(name);
|
||||
}
|
||||
|
||||
let snapshot = state.projects.read().await.clone();
|
||||
io::save_config(&snapshot, &state.config_dir).await;
|
||||
|
||||
// If the removed project was active, switch to the first remaining.
|
||||
if active == name {
|
||||
let first = state.projects.read().await.keys().next().cloned();
|
||||
if let Some(new_active) = first {
|
||||
*state.active_project.write().await = new_active;
|
||||
}
|
||||
}
|
||||
|
||||
crate::slog!("[gateway] Removed project '{name}'");
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Initialise a new huskies project at the given path.
|
||||
///
|
||||
/// Optionally registers the project in the gateway's project map.
|
||||
pub async fn init_project(
|
||||
state: &GatewayState,
|
||||
path_str: &str,
|
||||
name: Option<&str>,
|
||||
url: Option<&str>,
|
||||
) -> Result<Option<String>, Error> {
|
||||
let path_str = path_str.trim();
|
||||
if path_str.is_empty() {
|
||||
return Err(Error::Config("missing required parameter: path".into()));
|
||||
}
|
||||
|
||||
let project_path = std::path::Path::new(path_str);
|
||||
|
||||
if io::has_huskies_dir(project_path) {
|
||||
return Err(Error::Config(format!(
|
||||
"path '{}' is already a huskies project (.huskies/ exists). \
|
||||
Use wizard_status to check setup progress.",
|
||||
project_path.display()
|
||||
)));
|
||||
}
|
||||
|
||||
io::ensure_directory(project_path).map_err(Error::Config)?;
|
||||
|
||||
io::scaffold_project(project_path)
|
||||
.map_err(|e| Error::Config(format!("scaffold failed: {e}")))?;
|
||||
|
||||
io::init_wizard_state(project_path);
|
||||
|
||||
// Optionally register in projects.toml.
|
||||
let registered_name: Option<String> = match (name, url) {
|
||||
(Some(n), Some(u)) if !n.trim().is_empty() && !u.trim().is_empty() => {
|
||||
let n = n.trim();
|
||||
let u = u.trim();
|
||||
let mut projects = state.projects.write().await;
|
||||
if projects.contains_key(n) {
|
||||
return Err(Error::DuplicateToken(format!(
|
||||
"project '{n}' is already registered. Choose a different name or use switch_project."
|
||||
)));
|
||||
}
|
||||
projects.insert(n.to_string(), ProjectEntry { url: u.to_string() });
|
||||
io::save_config(&projects, &state.config_dir).await;
|
||||
crate::slog!("[gateway] init_project: registered '{n}' ({u})");
|
||||
Some(n.to_string())
|
||||
}
|
||||
_ => None,
|
||||
};
|
||||
|
||||
Ok(registered_name)
|
||||
}
|
||||
|
||||
/// Fetch aggregated health status across all projects.
|
||||
pub async fn health_check_all(state: &GatewayState) -> (bool, BTreeMap<String, &'static str>) {
|
||||
let mut all_healthy = true;
|
||||
let mut statuses = BTreeMap::new();
|
||||
|
||||
let project_entries: Vec<(String, String)> = state
|
||||
.projects
|
||||
.read()
|
||||
.await
|
||||
.iter()
|
||||
.map(|(n, e)| (n.clone(), e.url.clone()))
|
||||
.collect();
|
||||
|
||||
for (name, url) in &project_entries {
|
||||
let healthy = io::check_project_health(&state.client, url)
|
||||
.await
|
||||
.unwrap_or(false);
|
||||
if !healthy {
|
||||
all_healthy = false;
|
||||
}
|
||||
statuses.insert(name.clone(), if healthy { "ok" } else { "error" });
|
||||
}
|
||||
|
||||
(all_healthy, statuses)
|
||||
}
|
||||
|
||||
/// Save bot config and restart the bot.
|
||||
pub async fn save_bot_config_and_restart(state: &GatewayState, content: &str) -> Result<(), Error> {
|
||||
io::write_bot_config(&state.config_dir, content).map_err(Error::Config)?;
|
||||
|
||||
// Abort existing bot task and spawn a fresh one.
|
||||
{
|
||||
let mut handle = state.bot_handle.lock().await;
|
||||
if let Some(h) = handle.take() {
|
||||
h.abort();
|
||||
}
|
||||
let gateway_projects: Vec<String> = state.projects.read().await.keys().cloned().collect();
|
||||
let gateway_project_urls: BTreeMap<String, String> = state
|
||||
.projects
|
||||
.read()
|
||||
.await
|
||||
.iter()
|
||||
.map(|(name, entry)| (name.clone(), entry.url.clone()))
|
||||
.collect();
|
||||
|
||||
let new_handle = io::spawn_gateway_bot(
|
||||
&state.config_dir,
|
||||
Arc::clone(&state.active_project),
|
||||
gateway_projects,
|
||||
gateway_project_urls,
|
||||
state.port,
|
||||
);
|
||||
*handle = new_handle;
|
||||
}
|
||||
|
||||
crate::slog!("[gateway] Bot configuration saved; bot restarted");
|
||||
Ok(())
|
||||
}
|
||||
|
||||
// ── Tests ────────────────────────────────────────────────────────────────────
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
|
||||
fn make_config(names: &[(&str, &str)]) -> GatewayConfig {
|
||||
let mut projects = BTreeMap::new();
|
||||
for (name, url) in names {
|
||||
projects.insert(
|
||||
name.to_string(),
|
||||
ProjectEntry {
|
||||
url: url.to_string(),
|
||||
},
|
||||
);
|
||||
}
|
||||
GatewayConfig { projects }
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn gateway_state_rejects_empty_config() {
|
||||
let config = GatewayConfig {
|
||||
projects: BTreeMap::new(),
|
||||
};
|
||||
assert!(GatewayState::new(config, PathBuf::from("."), 3000).is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn gateway_state_sets_first_project_active() {
|
||||
let config = make_config(&[("alpha", "http://a:3001"), ("beta", "http://b:3002")]);
|
||||
        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();
        let active = state.active_project.blocking_read().clone();
        assert_eq!(active, "alpha");
    }

    #[tokio::test]
    async fn switch_project_to_known_project() {
        let config = make_config(&[("alpha", "http://a:3001"), ("beta", "http://b:3002")]);
        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();
        let url = switch_project(&state, "beta").await.unwrap();
        assert_eq!(url, "http://b:3002");
        assert_eq!(*state.active_project.read().await, "beta");
    }

    #[tokio::test]
    async fn switch_project_to_unknown_fails() {
        let config = make_config(&[("alpha", "http://a:3001")]);
        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();
        assert!(switch_project(&state, "nonexistent").await.is_err());
    }

    #[tokio::test]
    async fn switch_project_empty_name_fails() {
        let config = make_config(&[("alpha", "http://a:3001")]);
        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();
        assert!(switch_project(&state, "").await.is_err());
    }

    #[tokio::test]
    async fn active_url_returns_correct_url() {
        let config = make_config(&[("myproj", "http://my:3001")]);
        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();
        let url = state.active_url().await.unwrap();
        assert_eq!(url, "http://my:3001");
    }

    #[test]
    fn error_display_variants() {
        assert!(
            Error::ProjectNotFound("x".into())
                .to_string()
                .contains("Project not found")
        );
        assert!(
            Error::UnreachableProject("x".into())
                .to_string()
                .contains("Unreachable")
        );
        assert!(
            Error::DuplicateToken("x".into())
                .to_string()
                .contains("Duplicate")
        );
        assert!(
            Error::InvalidAgent("x".into())
                .to_string()
                .contains("Invalid agent")
        );
        assert!(
            Error::Config("x".into())
                .to_string()
                .contains("Config error")
        );
        assert!(Error::Upstream("x".into()).to_string().contains("Upstream"));
    }

    #[tokio::test]
    async fn generate_and_register_agent() {
        let config = make_config(&[("test", "http://test:3001")]);
        let state = GatewayState::new(config, PathBuf::new(), 3000).unwrap();
        let token = generate_join_token(&state).await;
        let agent = register_agent(&state, &token, "test-agent".into(), "ws://a".into())
            .await
            .unwrap();
        assert_eq!(agent.label, "test-agent");
        assert!(state.pending_tokens.read().await.is_empty());
        assert_eq!(state.joined_agents.read().await.len(), 1);
    }

    #[tokio::test]
    async fn register_agent_invalid_token_fails() {
        let config = make_config(&[("test", "http://test:3001")]);
        let state = GatewayState::new(config, PathBuf::new(), 3000).unwrap();
        let result = register_agent(&state, "bad-token", "a".into(), "ws://a".into()).await;
        assert!(result.is_err());
    }

    #[tokio::test]
    async fn remove_agent_success() {
        let config = make_config(&[("test", "http://test:3001")]);
        let state = GatewayState::new(config, PathBuf::new(), 3000).unwrap();
        let token = generate_join_token(&state).await;
        let agent = register_agent(&state, &token, "a".into(), "ws://a".into())
            .await
            .unwrap();
        assert!(remove_agent(&state, &agent.id).await);
        assert!(state.joined_agents.read().await.is_empty());
    }

    #[tokio::test]
    async fn heartbeat_agent_updates_timestamp() {
        let config = make_config(&[("test", "http://test:3001")]);
        let state = GatewayState::new(config, PathBuf::new(), 3000).unwrap();
        let token = generate_join_token(&state).await;
        let agent = register_agent(&state, &token, "a".into(), "ws://a".into())
            .await
            .unwrap();
        let old_ts = agent.last_seen;
        // Small sleep to ensure timestamp differs.
        tokio::time::sleep(std::time::Duration::from_millis(10)).await;
        assert!(heartbeat_agent(&state, &agent.id).await);
        let agents = state.joined_agents.read().await;
        assert!(agents[0].last_seen >= old_ts);
    }

    #[tokio::test]
    async fn init_project_scaffolds_directory() {
        let dir = tempfile::tempdir().unwrap();
        let config = make_config(&[("test", "http://test:3001")]);
        let state = GatewayState::new(config, PathBuf::new(), 3000).unwrap();
        let result = init_project(&state, dir.path().to_str().unwrap(), None, None).await;
        assert!(result.is_ok());
        assert!(dir.path().join(".huskies").exists());
    }

    #[tokio::test]
    async fn init_project_already_exists_fails() {
        let dir = tempfile::tempdir().unwrap();
        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
        let config = make_config(&[("test", "http://test:3001")]);
        let state = GatewayState::new(config, PathBuf::new(), 3000).unwrap();
        let result = init_project(&state, dir.path().to_str().unwrap(), None, None).await;
        assert!(result.is_err());
    }
}

@@ -0,0 +1,91 @@
//! Gateway notification polling — pure event formatting.
//!
//! Formats pipeline events from project containers into gateway notifications
//! with `[project-name]` prefixes. The actual I/O (HTTP polling, spawning
//! tasks, sending messages) lives in `io.rs`.

use crate::service::events::StoredEvent;
use crate::service::notifications::{
    format_blocked_notification, format_error_notification, format_stage_notification,
    stage_display_name,
};

/// Format a [`StoredEvent`] from a project into a gateway notification.
///
/// Prefixes the message with `[project-name]` so users can distinguish which
/// project emitted the event.
pub fn format_gateway_event(project_name: &str, event: &StoredEvent) -> (String, String) {
    let prefix = format!("[{project_name}] ");

    match event {
        StoredEvent::StageTransition {
            story_id,
            from_stage,
            to_stage,
            ..
        } => {
            let from_display = stage_display_name(from_stage);
            let to_display = stage_display_name(to_stage);
            let (plain, html) = format_stage_notification(story_id, None, from_display, to_display);
            (format!("{prefix}{plain}"), format!("{prefix}{html}"))
        }
        StoredEvent::MergeFailure {
            story_id, reason, ..
        } => {
            let (plain, html) = format_error_notification(story_id, None, reason);
            (format!("{prefix}{plain}"), format!("{prefix}{html}"))
        }
        StoredEvent::StoryBlocked {
            story_id, reason, ..
        } => {
            let (plain, html) = format_blocked_notification(story_id, None, reason);
            (format!("{prefix}{plain}"), format!("{prefix}{html}"))
        }
    }
}
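
// A minimal usage sketch (not part of this diff): a hypothetical gateway poller
// that forwards each project's events to chat. Only the "[project-name]" prefix
// is guaranteed by format_gateway_event; the rest of the text comes from the
// formatters in `service::notifications`, and `send_to_chat` is an assumed sender.
//
//     for event in &events {
//         let (plain, html) = format_gateway_event("huskies", event);
//         debug_assert!(plain.starts_with("[huskies] "));
//         send_to_chat(&plain, &html);
//     }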

// ── Tests ────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn stage_transition_prefixes_project_name() {
        let event = StoredEvent::StageTransition {
            story_id: "42_story_my_feature".to_string(),
            from_stage: "2_current".to_string(),
            to_stage: "3_qa".to_string(),
            timestamp_ms: 1000,
        };
        let (plain, html) = format_gateway_event("huskies", &event);
        assert!(plain.starts_with("[huskies] "));
        assert!(html.starts_with("[huskies] "));
        assert!(plain.contains("Current"));
        assert!(plain.contains("QA"));
    }

    #[test]
    fn merge_failure_prefixes_project_name() {
        let event = StoredEvent::MergeFailure {
            story_id: "42_story_my_feature".to_string(),
            reason: "merge conflict".to_string(),
            timestamp_ms: 1000,
        };
        let (plain, _html) = format_gateway_event("robot-studio", &event);
        assert!(plain.starts_with("[robot-studio] "));
        assert!(plain.contains("merge conflict"));
    }

    #[test]
    fn story_blocked_prefixes_project_name() {
        let event = StoredEvent::StoryBlocked {
            story_id: "43_story_bar".to_string(),
            reason: "retry limit exceeded".to_string(),
            timestamp_ms: 2000,
        };
        let (plain, _html) = format_gateway_event("huskies", &event);
        assert!(plain.starts_with("[huskies] "));
        assert!(plain.contains("BLOCKED"));
    }
}

@@ -0,0 +1,165 @@
//! Gateway agent registration — pure logic for managing build agents.
//!
//! Contains `JoinedAgent` and functions that validate and manipulate agent
//! state in memory. All persistence (disk I/O) lives in `io.rs`.

use serde::{Deserialize, Serialize};
use std::collections::BTreeMap;

use super::config::ProjectEntry;

/// A build agent that has registered with this gateway.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct JoinedAgent {
    /// Unique ID assigned by the gateway on registration.
    pub id: String,
    /// Human-readable label provided by the agent (e.g. `build-agent-abc123`).
    pub label: String,
    /// The agent's CRDT-sync WebSocket address (e.g. `ws://host:3001/crdt-sync`).
    pub address: String,
    /// Unix timestamp when the agent registered.
    pub registered_at: f64,
    /// Unix timestamp of the last heartbeat from this agent.
    #[serde(default)]
    pub last_seen: f64,
    /// Project this agent is assigned to, if any.
    #[serde(default, skip_serializing_if = "Option::is_none")]
    pub assigned_project: Option<String>,
}
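
// Illustrative serialized shape (an assumption based on the field names and
// serde attributes above, not taken from this diff); the values are made up,
// and `assigned_project` is omitted while unassigned because of `skip_serializing_if`.
//
//     {
//       "id": "agent-7f3c",
//       "label": "build-agent-abc123",
//       "address": "ws://host:3001/crdt-sync",
//       "registered_at": 1714070000.0,
//       "last_seen": 1714070042.5
//     }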

/// Create a new `JoinedAgent` from registration data.
pub fn create_agent(id: String, label: String, address: String, now: f64) -> JoinedAgent {
    JoinedAgent {
        id,
        label,
        address,
        registered_at: now,
        last_seen: now,
        assigned_project: None,
    }
}

/// Remove an agent by ID from the list. Returns `true` if found and removed.
pub fn remove_agent(agents: &mut Vec<JoinedAgent>, id: &str) -> bool {
    let before = agents.len();
    agents.retain(|a| a.id != id);
    agents.len() < before
}

/// Assign (or unassign) an agent to a project.
///
/// Returns the updated agent on success, or an error if the agent or project
/// is not found.
pub fn assign_agent(
    agents: &mut [JoinedAgent],
    id: &str,
    project: Option<String>,
    projects: &BTreeMap<String, ProjectEntry>,
) -> Result<JoinedAgent, super::Error> {
    // Validate project exists if assigning.
    if let Some(ref p) = project
        && !projects.contains_key(p.as_str())
    {
        return Err(super::Error::ProjectNotFound(format!(
            "unknown project '{p}'"
        )));
    }

    match agents.iter_mut().find(|a| a.id == id) {
        None => Err(super::Error::InvalidAgent(format!("agent not found: {id}"))),
        Some(a) => {
            a.assigned_project = project;
            Ok(a.clone())
        }
    }
}

/// Update an agent's last-seen timestamp. Returns `true` if the agent was found.
pub fn heartbeat(agents: &mut [JoinedAgent], id: &str, now: f64) -> bool {
    match agents.iter_mut().find(|a| a.id == id) {
        None => false,
        Some(a) => {
            a.last_seen = now;
            true
        }
    }
}
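
// A minimal lifecycle sketch using the pure helpers above (assumed caller code,
// not part of this diff); `now()` and `projects` stand in for whatever clock and
// registry the I/O layer provides.
//
//     let mut agents = vec![create_agent("a1".into(), "build-agent".into(), "ws://a".into(), now())];
//     heartbeat(&mut agents, "a1", now());                                    // refresh last_seen
//     assign_agent(&mut agents, "a1", Some("huskies".into()), &projects)?;    // project must exist
//     remove_agent(&mut agents, "a1");                                        // drop on disconnect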

// ── Tests ────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn create_agent_sets_fields() {
        let agent = create_agent("id-1".into(), "lbl".into(), "ws://a".into(), 100.0);
        assert_eq!(agent.id, "id-1");
        assert_eq!(agent.label, "lbl");
        assert_eq!(agent.address, "ws://a");
        assert_eq!(agent.registered_at, 100.0);
        assert_eq!(agent.last_seen, 100.0);
        assert!(agent.assigned_project.is_none());
    }

    #[test]
    fn remove_agent_by_id() {
        let mut agents = vec![
            create_agent("a".into(), "A".into(), "ws://a".into(), 0.0),
            create_agent("b".into(), "B".into(), "ws://b".into(), 0.0),
        ];
        assert!(remove_agent(&mut agents, "a"));
        assert_eq!(agents.len(), 1);
        assert_eq!(agents[0].id, "b");
    }

    #[test]
    fn remove_agent_missing_returns_false() {
        let mut agents = vec![];
        assert!(!remove_agent(&mut agents, "x"));
    }

    #[test]
    fn assign_agent_to_valid_project() {
        let mut projects = BTreeMap::new();
        projects.insert(
            "proj".into(),
            ProjectEntry {
                url: "http://p".into(),
            },
        );
        let mut agents = vec![create_agent("a".into(), "A".into(), "ws://a".into(), 0.0)];
        let result = assign_agent(&mut agents, "a", Some("proj".into()), &projects);
        assert!(result.is_ok());
        assert_eq!(result.unwrap().assigned_project, Some("proj".into()));
    }

    #[test]
    fn assign_agent_to_unknown_project_fails() {
        let projects = BTreeMap::new();
        let mut agents = vec![create_agent("a".into(), "A".into(), "ws://a".into(), 0.0)];
        let result = assign_agent(&mut agents, "a", Some("nope".into()), &projects);
        assert!(result.is_err());
    }

    #[test]
    fn assign_agent_unknown_id_fails() {
        let projects = BTreeMap::new();
        let mut agents: Vec<JoinedAgent> = vec![];
        let result = assign_agent(&mut agents, "x", None, &projects);
        assert!(result.is_err());
    }

    #[test]
    fn heartbeat_updates_last_seen() {
        let mut agents = vec![create_agent("a".into(), "A".into(), "ws://a".into(), 0.0)];
        assert!(heartbeat(&mut agents, "a", 999.0));
        assert_eq!(agents[0].last_seen, 999.0);
    }

    #[test]
    fn heartbeat_unknown_id_returns_false() {
        let mut agents: Vec<JoinedAgent> = vec![];
        assert!(!heartbeat(&mut agents, "x", 1.0));
    }
}

@@ -0,0 +1,90 @@
//! Git I/O — the ONLY place in `service::git_ops/` that may perform side effects.
//!
//! Side effects here include: spawning git processes via `std::process::Command`
//! (wrapped in `tokio::task::spawn_blocking`), and filesystem existence and
//! canonicalization checks for path validation.
//! All pure logic (path-prefix checks, porcelain parsing) lives in `path_guard.rs`
//! and `porcelain.rs`.

use super::Error;
use std::path::{Path, PathBuf};
use std::process::Output;

/// Validate that `worktree_path` is an absolute path that exists on disk and
/// lies inside the project's `.huskies/worktrees/` directory. Returns the
/// canonicalized path on success.
///
/// # Errors
/// - [`Error::Validation`] if the path is relative or does not exist.
/// - [`Error::PathNotAllowed`] if the path is outside `.huskies/worktrees/`.
/// - [`Error::Io`] if canonicalization fails.
pub fn validate_worktree_path(worktree_path: &str, project_root: &Path) -> Result<PathBuf, Error> {
    let wd = PathBuf::from(worktree_path);

    if !wd.is_absolute() {
        return Err(Error::Validation(
            "worktree_path must be an absolute path".to_string(),
        ));
    }
    if !wd.exists() {
        return Err(Error::Validation(format!(
            "worktree_path does not exist: {worktree_path}"
        )));
    }

    let worktrees_root = project_root.join(".huskies").join("worktrees");

    let canonical_wd = wd
        .canonicalize()
        .map_err(|e| Error::Io(format!("Cannot canonicalize worktree_path: {e}")))?;

    let canonical_wt = if worktrees_root.exists() {
        worktrees_root
            .canonicalize()
            .map_err(|e| Error::Io(format!("Cannot canonicalize worktrees root: {e}")))?
    } else {
        return Err(Error::PathNotAllowed(
            "No worktrees directory found in project".to_string(),
        ));
    };

    if !super::path_guard::is_under_root(&canonical_wd, &canonical_wt) {
        return Err(Error::PathNotAllowed(format!(
            "worktree_path must be inside .huskies/worktrees/. Got: {worktree_path}"
        )));
    }

    Ok(canonical_wd)
}

/// Run a git command with static arg slices in `dir` and return the process output.
///
/// # Errors
/// - [`Error::UpstreamFailure`] if the blocking task panics.
/// - [`Error::Io`] if the git process cannot be spawned.
pub async fn run_git(args: Vec<&'static str>, dir: PathBuf) -> Result<Output, Error> {
    tokio::task::spawn_blocking(move || {
        std::process::Command::new("git")
            .args(&args)
            .current_dir(&dir)
            .output()
    })
    .await
    .map_err(|e| Error::UpstreamFailure(format!("Task join error: {e}")))?
    .map_err(|e| Error::Io(format!("Failed to run git: {e}")))
}

/// Run a git command with owned `String` args in `dir` and return the process output.
///
/// # Errors
/// - [`Error::UpstreamFailure`] if the blocking task panics.
/// - [`Error::Io`] if the git process cannot be spawned.
pub async fn run_git_owned(args: Vec<String>, dir: PathBuf) -> Result<Output, Error> {
    tokio::task::spawn_blocking(move || {
        std::process::Command::new("git")
            .args(&args)
            .current_dir(&dir)
            .output()
    })
    .await
    .map_err(|e| Error::UpstreamFailure(format!("Task join error: {e}")))?
    .map_err(|e| Error::Io(format!("Failed to run git: {e}")))
}
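
// A sketch of the intended call pattern (assumed caller code, not from this diff):
// validate the sandboxed path first, then run git inside it. `project_root` and the
// worktree path are illustrative; the porcelain flags mirror what
// `parse_git_status_porcelain` documents.
//
//     let wt = validate_worktree_path("/project/.huskies/worktrees/42_story_foo", project_root)?;
//     let out = run_git(vec!["status", "--porcelain=v1", "-u"], wt).await?;
//     let text = String::from_utf8_lossy(&out.stdout);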

@@ -0,0 +1,100 @@
//! Git operations service — worktree path validation and git command execution.
//!
//! Extracted from `http/mcp/git_tools.rs` following the conventions in
//! `docs/architecture/service-modules.md`:
//! - `mod.rs` (this file) — public API, typed [`Error`], orchestration
//! - `io.rs` — the ONLY place that performs side effects (git processes, filesystem)
//! - `path_guard.rs` — pure path-prefix safety checks
//! - `porcelain.rs` — pure git porcelain output parsers

pub mod io;
pub mod path_guard;
pub mod porcelain;

#[allow(unused_imports)]
pub use path_guard::is_under_root;
pub use porcelain::parse_git_status_porcelain;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::git_ops` functions.
///
/// HTTP handlers map these to status codes:
/// - [`Error::NotFound`] → 404 Not Found
/// - [`Error::Validation`] → 400 Bad Request
/// - [`Error::Conflict`] → 409 Conflict
/// - [`Error::PathNotAllowed`] → 400 Bad Request (sandbox violation)
/// - [`Error::Io`] → 500 Internal Server Error
/// - [`Error::UpstreamFailure`] → 500 Internal Server Error
#[allow(dead_code)]
#[derive(Debug)]
pub enum Error {
    /// The requested worktree or path does not exist.
    NotFound(String),
    /// A required argument is missing or has an invalid value.
    Validation(String),
    /// The git operation cannot proceed due to a conflicting state.
    Conflict(String),
    /// The path is outside the allowed sandbox.
    PathNotAllowed(String),
    /// A filesystem or git I/O operation failed.
    Io(String),
    /// An upstream git command returned an unexpected error.
    UpstreamFailure(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::NotFound(msg) => write!(f, "Not found: {msg}"),
            Self::Validation(msg) => write!(f, "Validation error: {msg}"),
            Self::Conflict(msg) => write!(f, "Conflict: {msg}"),
            Self::PathNotAllowed(msg) => write!(f, "Path not allowed: {msg}"),
            Self::Io(msg) => write!(f, "I/O error: {msg}"),
            Self::UpstreamFailure(msg) => write!(f, "Upstream failure: {msg}"),
        }
    }
}
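
// A framework-agnostic sketch of the status-code mapping documented on `Error`
// (assumed handler-side code, not part of this diff); the real handlers in `http/`
// are expected to convert into their own response types.
//
//     fn status_code(err: &Error) -> u16 {
//         match err {
//             Error::NotFound(_) => 404,
//             Error::Validation(_) | Error::PathNotAllowed(_) => 400,
//             Error::Conflict(_) => 409,
//             Error::Io(_) | Error::UpstreamFailure(_) => 500,
//         }
//     }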

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn error_display_not_found() {
        let e = Error::NotFound("worktree missing".to_string());
        assert!(e.to_string().contains("Not found"));
    }

    #[test]
    fn error_display_validation() {
        let e = Error::Validation("relative path".to_string());
        assert!(e.to_string().contains("Validation error"));
    }

    #[test]
    fn error_display_conflict() {
        let e = Error::Conflict("uncommitted changes".to_string());
        assert!(e.to_string().contains("Conflict"));
    }

    #[test]
    fn error_display_path_not_allowed() {
        let e = Error::PathNotAllowed("outside sandbox".to_string());
        assert!(e.to_string().contains("Path not allowed"));
    }

    #[test]
    fn error_display_io() {
        let e = Error::Io("permission denied".to_string());
        assert!(e.to_string().contains("I/O error"));
    }

    #[test]
    fn error_display_upstream_failure() {
        let e = Error::UpstreamFailure("git not found".to_string());
        assert!(e.to_string().contains("Upstream failure"));
    }
}

@@ -0,0 +1,58 @@
//! Pure path-guard helpers for `service::git_ops`.
//!
//! These functions are free of side effects — they operate on already-resolved
//! `Path` values and perform no filesystem I/O. Path existence checks and
//! canonicalization belong in `io.rs`.

use std::path::Path;

/// Return `true` if `canonical_path` starts with (i.e. is under) `root`.
///
/// Both paths must already be canonicalized so that symlinks, `.`, and `..`
/// components do not cause false negatives.
pub fn is_under_root(canonical_path: &Path, root: &Path) -> bool {
    canonical_path.starts_with(root)
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;
    use std::path::PathBuf;

    #[test]
    fn is_under_root_returns_true_for_child() {
        let root = PathBuf::from("/project/.huskies/worktrees");
        let child = PathBuf::from("/project/.huskies/worktrees/42_story_foo");
        assert!(is_under_root(&child, &root));
    }

    #[test]
    fn is_under_root_returns_false_for_sibling() {
        let root = PathBuf::from("/project/.huskies/worktrees");
        let sibling = PathBuf::from("/project/.huskies/other");
        assert!(!is_under_root(&sibling, &root));
    }

    #[test]
    fn is_under_root_returns_false_for_parent() {
        let root = PathBuf::from("/project/.huskies/worktrees");
        let parent = PathBuf::from("/project/.huskies");
        assert!(!is_under_root(&parent, &root));
    }

    #[test]
    fn is_under_root_returns_true_for_exact_match() {
        let root = PathBuf::from("/project/.huskies/worktrees");
        assert!(is_under_root(&root, &root));
    }

    #[test]
    fn is_under_root_returns_false_for_path_with_shared_prefix_but_not_child() {
        // /foo/bar-extra is NOT under /foo/bar
        let root = PathBuf::from("/foo/bar");
        let other = PathBuf::from("/foo/bar-extra");
        assert!(!is_under_root(&other, &root));
    }
}

@@ -0,0 +1,107 @@
//! Pure git porcelain output parsers for `service::git_ops`.
//!
//! These functions parse the text output of `git status --porcelain=v1`
//! and similar commands. No I/O: they take `&str` and return structured data.

/// Parse `git status --porcelain=v1 -u` output into three file lists.
///
/// Returns `(staged, unstaged, untracked)` where each entry is the file path
/// string from the porcelain line.
pub fn parse_git_status_porcelain(stdout: &str) -> (Vec<String>, Vec<String>, Vec<String>) {
    let mut staged: Vec<String> = Vec::new();
    let mut unstaged: Vec<String> = Vec::new();
    let mut untracked: Vec<String> = Vec::new();

    for line in stdout.lines() {
        if line.len() < 3 {
            continue;
        }
        let x = line.chars().next().unwrap_or(' ');
        let y = line.chars().nth(1).unwrap_or(' ');
        let path = line[3..].to_string();

        match (x, y) {
            ('?', '?') => untracked.push(path),
            (' ', _) => unstaged.push(path),
            (_, ' ') => staged.push(path),
            _ => {
                // Both staged and unstaged modifications.
                staged.push(path.clone());
                unstaged.push(path);
            }
        }
    }

    (staged, unstaged, untracked)
}
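
// Worked example (mirrors the tests below; the file names are illustrative).
// Each porcelain line is "XY path": staged status in column X, unstaged in Y.
//
//     let out = "A  staged.rs\n M unstaged.rs\n?? untracked.rs\n";
//     let (staged, unstaged, untracked) = parse_git_status_porcelain(out);
//     // staged == ["staged.rs"], unstaged == ["unstaged.rs"], untracked == ["untracked.rs"]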

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parse_empty_output_returns_empty_vecs() {
        let (s, u, t) = parse_git_status_porcelain("");
        assert!(s.is_empty());
        assert!(u.is_empty());
        assert!(t.is_empty());
    }

    #[test]
    fn parse_untracked_file() {
        let output = "?? new_file.txt\n";
        let (staged, unstaged, untracked) = parse_git_status_porcelain(output);
        assert!(staged.is_empty());
        assert!(unstaged.is_empty());
        assert_eq!(untracked, vec!["new_file.txt"]);
    }

    #[test]
    fn parse_staged_file() {
        let output = "A  staged.txt\n";
        let (staged, unstaged, untracked) = parse_git_status_porcelain(output);
        assert_eq!(staged, vec!["staged.txt"]);
        assert!(unstaged.is_empty());
        assert!(untracked.is_empty());
    }

    #[test]
    fn parse_unstaged_modified_file() {
        // 'M' in second column = unstaged modification
        let output = " M modified.txt\n";
        let (staged, unstaged, untracked) = parse_git_status_porcelain(output);
        assert!(staged.is_empty());
        assert_eq!(unstaged, vec!["modified.txt"]);
        assert!(untracked.is_empty());
    }

    #[test]
    fn parse_both_staged_and_unstaged() {
        // 'MM' = staged + unstaged in same file
        let output = "MM both.txt\n";
        let (staged, unstaged, untracked) = parse_git_status_porcelain(output);
        assert_eq!(staged, vec!["both.txt"]);
        assert_eq!(unstaged, vec!["both.txt"]);
        assert!(untracked.is_empty());
    }

    #[test]
    fn parse_mixed_output() {
        let output = "A  staged.rs\n M unstaged.rs\n?? untracked.rs\n";
        let (staged, unstaged, untracked) = parse_git_status_porcelain(output);
        assert_eq!(staged, vec!["staged.rs"]);
        assert_eq!(unstaged, vec!["unstaged.rs"]);
        assert_eq!(untracked, vec!["untracked.rs"]);
    }

    #[test]
    fn parse_skips_short_lines() {
        // Lines shorter than 3 chars should be skipped.
        let output = "A \nMM both.txt\n";
        let (staged, _unstaged, _untracked) = parse_git_status_porcelain(output);
        // Only "both.txt" should appear — the 2-char "A " line is skipped.
        assert_eq!(staged, vec!["both.txt"]);
    }
}

@@ -0,0 +1,38 @@
//! Pure health-check logic — no side effects.

use poem_openapi::Object;
use serde::Serialize;

/// The JSON payload returned by the health check endpoint.
#[derive(Serialize, Object)]
pub struct HealthStatus {
    /// Human-readable status string, always `"ok"` when the server is healthy.
    pub status: String,
}

/// Return a healthy status response.
pub fn ok() -> HealthStatus {
    HealthStatus {
        status: "ok".to_string(),
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn ok_returns_status_ok() {
        let s = ok();
        assert_eq!(s.status, "ok");
    }

    #[test]
    fn health_status_serializes() {
        let s = HealthStatus {
            status: "ok".to_string(),
        };
        let json = serde_json::to_value(&s).unwrap();
        assert_eq!(json["status"], "ok");
    }
}

@@ -0,0 +1,4 @@
//! Health I/O wrappers.
//!
//! Health has no side effects; this file exists to satisfy the
//! service-module convention (`docs/architecture/service-modules.md`).

@@ -0,0 +1,39 @@
//! Health service — public API for the health domain.
//!
//! Exposes a single `check()` function that returns a [`HealthStatus`].
//! HTTP handlers call this instead of constructing the response inline.
//!
//! Conventions: `docs/architecture/service-modules.md`

pub mod check;
pub(super) mod io;

pub use check::HealthStatus;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::health` functions.
///
/// Health checks are currently infallible; this enum satisfies the module
/// convention and accommodates future error cases (e.g. dependency checks).
#[allow(dead_code)]
#[derive(Debug)]
pub enum Error {
    /// An internal error occurred during the health check.
    Internal(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::Internal(msg) => write!(f, "Health error: {msg}"),
        }
    }
}

// ── Public API ────────────────────────────────────────────────────────────────

/// Perform a health check and return the status.
pub fn check() -> HealthStatus {
    check::ok()
}

@@ -0,0 +1,5 @@
//! Merge I/O — the ONLY place in `service::merge/` that may perform side effects.
//!
//! Currently, the bulk of the merge I/O is handled by `crate::agents::merge`
//! and `crate::io::story_metadata`. This file is the designated home for any
//! future I/O helpers that are extracted from merge-related MCP handlers.

@@ -0,0 +1,87 @@
//! Merge service — domain logic for merging agent work to master.
//!
//! Extracted from `http/mcp/merge_tools.rs` following the conventions in
//! `docs/architecture/service-modules.md`:
//! - `mod.rs` (this file) — public API, typed [`Error`], orchestration
//! - `io.rs` — the ONLY place that performs side effects
//! - `status.rs` — pure merge-status message formatting

pub mod io;
pub mod status;

#[allow(unused_imports)]
pub use status::format_merge_status_message;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::merge` functions.
///
/// HTTP handlers map these to status codes:
/// - [`Error::NotFound`] → 404 Not Found
/// - [`Error::Validation`] → 400 Bad Request
/// - [`Error::Conflict`] → 409 Conflict
/// - [`Error::Io`] → 500 Internal Server Error
/// - [`Error::UpstreamFailure`] → 500 Internal Server Error
#[allow(dead_code)]
#[derive(Debug)]
pub enum Error {
    /// The requested story or merge job was not found.
    NotFound(String),
    /// A required argument is missing or has an invalid value.
    Validation(String),
    /// The merge cannot proceed due to a conflicting state.
    Conflict(String),
    /// A filesystem or process I/O operation failed.
    Io(String),
    /// An upstream dependency (agents, git) returned an unexpected error.
    UpstreamFailure(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::NotFound(msg) => write!(f, "Not found: {msg}"),
            Self::Validation(msg) => write!(f, "Validation error: {msg}"),
            Self::Conflict(msg) => write!(f, "Conflict: {msg}"),
            Self::Io(msg) => write!(f, "I/O error: {msg}"),
            Self::UpstreamFailure(msg) => write!(f, "Upstream failure: {msg}"),
        }
    }
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn error_display_not_found() {
        let e = Error::NotFound("merge job missing".to_string());
        assert!(e.to_string().contains("Not found"));
    }

    #[test]
    fn error_display_validation() {
        let e = Error::Validation("story_id required".to_string());
        assert!(e.to_string().contains("Validation error"));
    }

    #[test]
    fn error_display_conflict() {
        let e = Error::Conflict("story already merged".to_string());
        assert!(e.to_string().contains("Conflict"));
    }

    #[test]
    fn error_display_io() {
        let e = Error::Io("write failed".to_string());
        assert!(e.to_string().contains("I/O error"));
    }

    #[test]
    fn error_display_upstream_failure() {
        let e = Error::UpstreamFailure("git crashed".to_string());
        assert!(e.to_string().contains("Upstream failure"));
    }
}

@@ -0,0 +1,89 @@
//! Pure merge-status message formatting for `service::merge`.
//!
//! These functions transform a completed merge report into human-readable
//! status messages. No I/O: they are pure functions over plain data.

use crate::agents::merge::MergeReport;

#[allow(dead_code)]
/// Derive a human-readable status message from a completed [`MergeReport`].
///
/// The message explains what happened and (on failure) what the caller
/// should do next.
pub fn format_merge_status_message(report: &MergeReport) -> &'static str {
    if report.success && report.gates_passed && report.conflicts_resolved {
        "Merge complete: conflicts were auto-resolved and all quality gates passed. Story moved to done and worktree cleaned up."
    } else if report.success && report.gates_passed {
        "Merge complete: all quality gates passed. Story moved to done and worktree cleaned up."
    } else if report.had_conflicts && !report.conflicts_resolved {
        "Merge failed: conflicts detected that could not be auto-resolved. Merge was aborted — master is untouched. Call report_merge_failure with the conflict details so the human can resolve them. Do NOT manually move the story file or call accept_story."
    } else if report.success && !report.gates_passed {
        "Merge committed but quality gates failed. Review gate_output and fix issues before re-running."
    } else {
        "Merge failed. Review gate_output for details. Call report_merge_failure to record the failure. Do NOT manually move the story file or call accept_story."
    }
}
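
// A small illustration of how a caller might surface the message (assumed caller
// code, not part of this diff); `notify` is a hypothetical sink, and pairing the
// guidance with `report.gate_output` is one possible design, not a requirement.
//
//     let msg = format_merge_status_message(&report);
//     if !report.gates_passed {
//         notify(format!("{msg}\n\n{}", report.gate_output));
//     }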

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    fn report(
        success: bool,
        had_conflicts: bool,
        conflicts_resolved: bool,
        gates_passed: bool,
    ) -> MergeReport {
        MergeReport {
            story_id: String::new(),
            success,
            had_conflicts,
            conflicts_resolved,
            conflict_details: None,
            gates_passed,
            gate_output: String::new(),
            worktree_cleaned_up: false,
            story_archived: false,
        }
    }

    #[test]
    fn clean_merge_message() {
        let r = report(true, false, false, true);
        let msg = format_merge_status_message(&r);
        assert!(msg.contains("quality gates passed"));
        assert!(msg.contains("done"));
    }

    #[test]
    fn conflicts_resolved_message() {
        let r = report(true, true, true, true);
        let msg = format_merge_status_message(&r);
        assert!(msg.contains("auto-resolved"));
    }

    #[test]
    fn unresolved_conflicts_message() {
        let r = report(false, true, false, false);
        let msg = format_merge_status_message(&r);
        assert!(msg.contains("could not be auto-resolved"));
        assert!(msg.contains("report_merge_failure"));
    }

    #[test]
    fn gates_failed_message() {
        let r = report(true, false, false, false);
        let msg = format_merge_status_message(&r);
        assert!(msg.contains("quality gates failed"));
    }

    #[test]
    fn general_failure_message() {
        let r = report(false, false, false, false);
        let msg = format_merge_status_message(&r);
        assert!(msg.contains("Merge failed"));
        assert!(msg.contains("report_merge_failure"));
    }
}
Some files were not shown because too many files have changed in this diff.