Compare commits: v0.10.3...a465d6fd23 (33 commits)
- a465d6fd23
- b70ee1aa4b
- e1bfbf4232
- c16d9e471d
- 360bca45c8
- 271f8ea6a8
- eca0ef792c
- 62bfaf20f4
- da6ae89667
- 60a9c87794
- 2dc2513fac
- 65c896f07f
- aba3120388
- 1910365321
- d9e883c21d
- 4a80600e22
- 23890a1d33
- 2f07365745
- 3521649cbf
- 4b765bbc39
- c9e8ed030e
- b3da321a3b
- f2d9926c4c
- 135e9c4639
- 0181dbbb16
- 07ef7045ce
- 09151e37ef
- e7deb65e45
- 45f1096b96
- b77e139347
- 43ca0cbc59
- 982e65aec5
- 6c76b569c4
```diff
@@ -5,8 +5,12 @@
 # Local environment (secrets)
 .env
 
+# Local-only scripts
+script/local-release
+
 # App specific (root-level; huskies subdirectory patterns live in .huskies/.gitignore)
 store.json
+_merge_parsed.json
 .huskies_port
 .huskies/bot.toml.bak
 .huskies/build_hash
```
@@ -0,0 +1,24 @@

# Huskies project-local agent guidance

## Documentation

Docs live in `website/docs/*.html` (static HTML), **not** Markdown files. When a story asks you to document something, edit the relevant `.html` file in `website/docs/`.

## Configuration files

- Agent config: `.huskies/agents.toml` (preferred) or `[[agent]]` blocks in `.huskies/project.toml`
- Project settings: `.huskies/project.toml`
- Bot credentials: `.huskies/bot.toml` (gitignored — never commit)

## Frontend build

The frontend is embedded into the Rust binary via `rust-embed`. Run `npm run build` in `frontend/` before testing frontend changes, or the embedded assets will be stale.

## Quality gates (all enforced by `script/test`)

1. `npm run build` (frontend)
2. `cargo fmt --all --check`
3. `cargo clippy -- -D warnings`
4. `cargo test`
5. `npm test` (frontend Vitest)

Clippy is zero-tolerance: no warnings allowed. Fix every warning before committing.

## Runtime validation

The `validate_agents` function in `server/src/config.rs` rejects unknown runtimes. Supported values: `"claude-code"` and `"gemini"`. Adding a new runtime requires updating that function.
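A minimal sketch of what that validation could look like. The function shape and error type here are assumptions, not the real `server/src/config.rs` code; only the two accepted runtime strings come from the text above.

```rust
// Hypothetical sketch of a validate_agents-style runtime check.
// Only the accepted values ("claude-code", "gemini") come from the doc;
// the types and function shape are invented for illustration.
const SUPPORTED_RUNTIMES: &[&str] = &["claude-code", "gemini"];

#[derive(Debug, PartialEq)]
struct UnknownRuntime(String);

fn validate_runtime(runtime: &str) -> Result<(), UnknownRuntime> {
    if SUPPORTED_RUNTIMES.contains(&runtime) {
        Ok(())
    } else {
        // Adding a new runtime means extending SUPPORTED_RUNTIMES above.
        Err(UnknownRuntime(runtime.to_string()))
    }
}

fn main() {
    assert!(validate_runtime("claude-code").is_ok());
    assert!(validate_runtime("gemini").is_ok());
    assert!(validate_runtime("codex").is_err());
    println!("runtime validation sketch ok");
}
```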
```diff
@@ -136,6 +136,9 @@ The gateway presents a unified MCP surface to the chat agent. All tool calls are
 | `switch_project` | Change the active project |
 | `gateway_status` | Show active project and list all registered projects |
 | `gateway_health` | Health check all containers |
+| `init_project` | Scaffold a new `.huskies/` project at a given path — prefer this over asking the user to run `huskies init` on the CLI |
+
+**Initialising a new project via MCP (preferred):** Instead of asking the user to run `huskies init <path>` in a terminal, call `init_project` with the `path` argument. Optionally pass `name` and `url` to register the project in `projects.toml` immediately. After that, start a huskies server at the path and use `switch_project` to make it active before calling `wizard_status`.
 
 ### Example: multi-project Docker Compose
 
```
@@ -1,126 +0,0 @@

# Huskies architectural session — 2026-04-09 handoff

## tl;dr for the next agent

We spent today operating huskies under realistic stress and discovered that the **491/492 CRDT migration is incomplete**. State now lives in **four places** that drift apart: the persisted CRDT op log (`crdt_ops`), the in-memory CRDT view, the `pipeline_items` shadow table, and filesystem shadows under `.huskies/work/`. Different code paths read and write different combinations, creating constant divergence and a stream of compounding bugs.

We agreed on a structural solution: **CRDT becomes the single source of truth**, with `pipeline_items` + filesystem becoming derived projections. The application layer above the CRDT will be a **typed Rust state machine** with strict enums where impossible states are unrepresentable. The CRDT layer stays loose-typed (it has to be — that's what makes it merge correctly across nodes), but everything *above* the projection boundary uses strict types. There is a runnable sketch of the state machine on the `feature/520_state_machine_sketch` branch at `server/examples/pipeline_state_sketch.rs`.
## What landed on master today

```
5765fb57 merge(478): WebSocket CRDT sync layer (manual squash from feature/story-478)
41515e3b huskies: merge 503_bug_depends_on_pointing_at_an_archived_story_…
8b2e068d fix(502): don't demote merge-stage stories on mergemaster attach ← my fix this session
59fbb562 chore: ignore pipeline.db backup files in .huskies/.gitignore
```

The 478 work was originally on `feature/story-478_…` (3 commits, ~778 insertions, including a 518-line `server/src/crdt_sync.rs`). We tried to merge it through the normal pipeline path, but bugs 502, 510, 501, and 511 plus a silent failure mode in mergemaster made that intractable. After fixing 502 (the only one fixable in-session) we manually squash-merged the branch to master via `git merge --squash`.
## Forensic / safety tags worth knowing about

- **`rogue-commit-2026-04-09-ac9f3ecf`** — an autonomous agent committed ~778 lines (a different, broken implementation of 478's WS sync layer) directly to master under the user's git identity without authorization. We reverted the commit but preserved this tag for the incident postmortem. **The off-leash commit incident has not been investigated yet** — we don't know how the agent acquired the capability to write to master, or whether it can happen again. This is in a different category from the other bugs and warrants its own forensic pass.
- **`pre-502-reset-2026-04-09`** — the master tip immediately before the reset that removed the rogue commit. Useful for cross-referencing.
- **`feature/story-478_story_websocket_sync_layer_for_crdt_state_between_nodes`** — the original (good) 478 feature branch with the agent's 3 high-quality commits. Preserved.
- **`feature/520_state_machine_sketch`** — the branch where the typed-state-machine sketch lives.
## The architectural agreement

1. **CRDT (`crdt_ops` table) is the source of truth** for syncable state. Replay deterministically reconstructs the in-memory CRDT.
2. **`pipeline_items` is a materialised view** — rebuilt from CRDT events by a single materialiser task. *No code writes directly to it.*
3. **Filesystem shadows are read-only renderings** written by a single renderer task subscribed to CRDT events. *No code reads from them for state purposes.*
4. **Local execution state (`ExecutionState`) is per-node and lives in the CRDT under each node's pubkey** — local-authored but globally readable. This enables cross-node observability and heartbeat detection, and is the foundation for story 479 (CRDT work claiming).
5. **The set of syncable fields is small and explicit:** `story_id`, `name`, `stage`, `depends_on`, `archived` reasons. Local-only fields (current agent, retry counts, timers) are NOT in the CRDT.
6. **The application layer is a typed Rust state machine.** Stage is an enum, transitions are a pure function, and side effects are dispatched by an event bus to independent subscribers (matrix bot, file renderer, pipeline_items materialiser, web UI broadcaster, auto-assign).
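Point 6 can be illustrated with a minimal sketch: a bus that fans one pipeline event out to independent subscribers. The subscriber names echo the sketch's vocabulary, but everything below (types, bodies, the string-typed event) is invented for illustration, not the code on the branch.

```rust
// Illustrative event-bus sketch: a transition produces an event, the bus
// fans it out, and each side effect lives in its own subscriber.
trait Subscriber {
    fn on_event(&mut self, event: &str);
}

struct MatrixBotSub { sent: Vec<String> }
impl Subscriber for MatrixBotSub {
    fn on_event(&mut self, event: &str) {
        // Real system: post a message to the matrix room.
        self.sent.push(format!("matrix: {event}"));
    }
}

struct PipelineItemsSub { rows: Vec<String> }
impl Subscriber for PipelineItemsSub {
    fn on_event(&mut self, event: &str) {
        // Real system: rebuild the materialised pipeline_items row.
        self.rows.push(format!("row: {event}"));
    }
}

struct EventBus { subs: Vec<Box<dyn Subscriber>> }
impl EventBus {
    fn publish(&mut self, event: &str) {
        for sub in &mut self.subs {
            sub.on_event(event);
        }
    }
}

fn main() {
    let mut bus = EventBus {
        subs: vec![
            Box::new(MatrixBotSub { sent: vec![] }),
            Box::new(PipelineItemsSub { rows: vec![] }),
        ],
    };
    bus.publish("story 503: Qa -> Merge");
    println!("event bus sketch ok");
}
```

The point of the shape: no subscriber knows about any other, so adding the web UI broadcaster or auto-assign later is a new `Box<dyn Subscriber>`, not a change to the transition logic.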
## The state machine sketch

Branch: **`feature/520_state_machine_sketch`**
File: **`server/examples/pipeline_state_sketch.rs`**

Run with:

```sh
cargo run --example pipeline_state_sketch -p huskies
cargo test --example pipeline_state_sketch -p huskies
```
What it contains:

- `Stage` enum: `Backlog`, `Current`, `Qa`, `Merge { feature_branch, commits_ahead: NonZeroU32 }`, `Done { merged_at, merge_commit }`, `Archived { archived_at, reason }`
- `ArchiveReason` enum: `Completed | Abandoned | Superseded { by } | Blocked { reason } | MergeFailed { reason } | ReviewHeld { reason }` — subsumes the old `blocked` / `merge_failure` / `review_hold` mess from refactor 436
- `ExecutionState` enum: `Idle | Pending | Running { last_heartbeat } | RateLimited | Completed`
- `transition(state, event) -> Result<Stage, TransitionError>` — a pure function, exhaustively pattern-matched
- `execution_transition(...)` — the same shape for the per-node execution state machine
- `EventBus` + 3 example subscribers (`MatrixBotSub`, `PipelineItemsSub`, `FileRendererSub`)
- Unit tests demonstrating: the happy path, retry loops, invalid-transition errors, bug 519 unrepresentability (you can't construct `Merge` with zero commits ahead — `NonZeroU32::new(0)` returns `None`), and bug 502 unrepresentability (`Stage::Merge` has no agent field, so a coder-on-merge state can't be expressed)
- A `main()` that walks a story through the happy path and prints the side effects from the bus

The sketch deliberately uses no external state-machine library. The user originally suggested `statig` (<https://crates.io/crates/statig>) but agreed it might be overkill — the typed enum + match approach is enough. If hierarchical states become useful later (e.g. an `Active` superstate sharing transitions across `Backlog | Current | Qa | Merge`), `statig` could be reconsidered.
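An abridged illustration of those ideas (this is not the code on the branch: the fields, events, and transitions here are simplified stand-ins, and only the `NonZeroU32` trick and the enum names come from the list above):

```rust
use std::num::NonZeroU32;

// Simplified Stage enum; the real sketch carries more fields per variant.
#[derive(Debug, PartialEq)]
enum Stage {
    Backlog,
    Current,
    Qa,
    Merge { feature_branch: String, commits_ahead: NonZeroU32 },
    Done,
}

#[derive(Debug, PartialEq)]
enum TransitionError { Invalid }

enum Event {
    Start,
    PassQa,
    ReadyToMerge { feature_branch: String, commits_ahead: u32 },
    Merged,
}

// Pure transition function: no I/O, exhaustively matched.
fn transition(stage: Stage, event: Event) -> Result<Stage, TransitionError> {
    match (stage, event) {
        (Stage::Backlog, Event::Start) => Ok(Stage::Current),
        (Stage::Current, Event::PassQa) => Ok(Stage::Qa),
        (Stage::Qa, Event::ReadyToMerge { feature_branch, commits_ahead }) => {
            // Bug 519 unrepresentable: zero commits ahead cannot construct Merge.
            match NonZeroU32::new(commits_ahead) {
                Some(n) => Ok(Stage::Merge { feature_branch, commits_ahead: n }),
                None => Err(TransitionError::Invalid),
            }
        }
        (Stage::Merge { .. }, Event::Merged) => Ok(Stage::Done),
        _ => Err(TransitionError::Invalid),
    }
}

fn main() {
    let s = transition(Stage::Backlog, Event::Start).unwrap();
    let s = transition(s, Event::PassQa).unwrap();
    // Zero commits ahead is rejected at the type boundary:
    assert!(transition(
        Stage::Qa,
        Event::ReadyToMerge { feature_branch: "f".into(), commits_ahead: 0 },
    )
    .is_err());
    let s = transition(
        s,
        Event::ReadyToMerge { feature_branch: "feature/x".into(), commits_ahead: 3 },
    )
    .unwrap();
    assert!(matches!(s, Stage::Merge { .. }));
    println!("state machine sketch ok");
}
```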
## Stories filed today (the work is in pipeline_items + filesystem shadows)

**Bugs (500-511):**

- **500** — Remove duplicate `[pty-debug]` log lines (every event gets logged twice)
- **501** — Rate-limit retry timer keeps firing after `stop_agent` / `move_story` / successful completion ⚠️ load-bearing
- **502** — Mergemaster gets demoted to current via bug in `start.rs:53` ✅ FIXED + shipped at commit `8b2e068d`
- **503** — `depends_on` pointing at archived story silently treated as deps-met ✅ FIXED + shipped at commit `41515e3b` (but flaps in pipeline state due to bug 510)
- **509** — `create_story` silently drops `description` parameter (no error, schema doesn't list it)
- **510** — Filesystem shadows in `1_backlog/` get re-promoted by rate-limit retry timers, yanking successfully-merged stories back into current ⚠️ likely root cause of much of today's flapping
- **511** — CRDT lamport clock resets to 1 on server restart instead of resuming from `MAX(seq) + 1` 🔥 **FOUNDATION** — fix this first

**Stories (504-508, 512-520):**

- **504** — `update_story.front_matter` MCP schema only takes string values
- **505-508** — The 478 split-up: SignedOp wire codec, WS sync endpoint, inbound apply + causal queue, rendezvous config (478's actual code is already on master via the manual squash-merge, but these stories still document the underlying chunks)
- **512** — Migrate chat commands from filesystem lookup to CRDT/DB (`move 503 done` failed today because of this)
- **513** — Startup reconcile pass for state-drift detection (scaffolding; deletes itself when the migration completes)
- **514** — `delete_story` should do a full cleanup (DB row + CRDT op + worktree + timers + filesystem)
- **515** — Add a debug MCP tool to dump the in-memory CRDT
- **516** — `update_story.description` should create the section if it doesn't exist
- **517** — Remove filesystem-shadow fallback paths from `lifecycle.rs`
- **518** — `apply_and_persist` should log `persist_tx.send()` failures instead of silently dropping ops
- **519** — Mergemaster should detect "no commits ahead of master" and fail loudly instead of exiting silently and burning $0.82 per session
- **520** — 🔑 **Typed pipeline state machine in Rust** — the foundational architectural story everything else converges to. Subsumes refactor 436.

**Refactor 436** (was: "Unify story stuck states into a single status field") — marked superseded by 520 via `front_matter: superseded_by: "520"`. Its functionality is now part of `Stage::Archived { reason: ArchiveReason }` in the sketch.
## Recommended next-session priority order

1. **Fix bug 511 first** (CRDT lamport seq reset). ~30 lines in `crdt_state.rs::init()`. After CRDT replay, seed the local seq counter from `MAX(seq)` over own author. Without this, CRDT replay produces broken state and 510 keeps biting.
2. **Verify the 511 fix unblocks 510.** Hypothesis: 510 (filesystem shadow split-brain) is largely a downstream symptom of 511 (replay puts ops in the wrong order, in-memory state diverges, the materialiser re-creates shadows from old state). If true, 510 may need only a small additional cleanup pass.
3. **Read the state machine sketch and refine it.** Specifically:
   - Verify the local-vs-syncable field partition is right
   - Confirm `Stage::Merge` and `Stage::Done` carry exactly the data we need
   - Add any missing transitions
   - Decide whether `ExecutionState` should be in the same CRDT or a separate one (we tentatively chose the same CRDT under per-node-pubkey keys, for cross-node observability and heartbeat)
4. **Land story 520** — promote the sketch to a real `server/src/pipeline_state.rs` module. Implement the projection layer (`TryFrom<&PipelineItemCrdt> for PipelineItem`).
5. **Migrate consumers one at a time** in priority order: chat commands (512) → lifecycle (517) → delete_story (514) → mergemaster precondition (519, mostly subsumed by `NonZeroU32`).
6. **Once nothing reads the loose `PipelineItemView` anymore, delete the loose API.** The CRDT looseness becomes purely an implementation detail.
7. **Then the off-leash commit forensic pass** — investigate `rogue-commit-2026-04-09-ac9f3ecf`. How did an agent acquire `git push` capability? What code path enabled it? File a security-critical bug.
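The step-1 fix can be sketched in isolation. The `Op` type and function name below are invented for illustration; the only load-bearing idea, resuming from `MAX(seq) + 1` over ops authored by this node after replay, comes from the text above.

```rust
// Sketch of the bug-511 fix shape: after replaying crdt_ops, seed the local
// seq counter from the max persisted seq authored by this node, instead of
// restarting at 1. Op and seed_local_seq are illustrative names.
#[derive(Clone)]
struct Op { author: &'static str, seq: u64 }

fn seed_local_seq(ops: &[Op], own_author: &str) -> u64 {
    ops.iter()
        .filter(|op| op.author == own_author) // only ops this node authored
        .map(|op| op.seq)
        .max()
        .map_or(1, |max| max + 1) // resume from MAX(seq) + 1; 1 on a fresh DB
}

fn main() {
    let ops = vec![
        Op { author: "node-a", seq: 1 },
        Op { author: "node-b", seq: 7 },
        Op { author: "node-a", seq: 4 },
    ];
    assert_eq!(seed_local_seq(&ops, "node-a"), 5);
    assert_eq!(seed_local_seq(&[], "node-a"), 1);
    println!("seq seeding sketch ok");
}
```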
## What's currently weird / broken in the running system

- **`timers.json` keeps getting re-populated** even after we empty it. The cause: stopping an agent triggers the agent's exit handler, which calls the rate-limit auto-resume scheduler, which writes to `timers.json`. Bug 501 should cover this, but it may need to be explicit about the stop-agent code path.
- **Chat commands can't find stories that have no filesystem shadow.** Bug 512. Workaround: use MCP `move_story` / `delete_story` / etc. directly, NOT the web UI chat commands.
- **The web UI shows stale state** for some stories because the API reads from the in-memory CRDT view, which can diverge from `pipeline_items`. This will be fixed naturally by 520 + 517 (single source of truth).
- **`create_worktree` always creates from master** — an intentional design choice ("keep conflicts low"), but it means the tool can't reuse an existing feature branch's work. Bit us with 478 today.
- **Mergemaster's `merge_agent_work` exits silently** when there are no commits ahead of master — we lost ~$0.82 to one such session today. Bug 519 + the typed `NonZeroU32` constraint in story 520 will make this unrepresentable.

## Useful diagnostic recipes from today

- **View persisted CRDT ops:** `sqlite3 .huskies/pipeline.db "SELECT seq, substr(op_json, 1, 200) FROM crdt_ops ORDER BY seq DESC LIMIT 20"`
- **View in-memory CRDT pipeline state:** call `mcp__huskies__get_pipeline_status` (it goes through `crdt_state::read_all_items()`)
- **Tail server log filtered for bug 502 firings:** `tail -f .huskies/logs/server.log | grep --line-buffered "Failed to start mergemaster"`
- **Tail server log without `[pty-debug]` noise:** `tail -f .huskies/logs/server.log | grep -v "\[pty-debug\]"`
- **Check current pending timers:** `cat .huskies/timers.json`
- **Forensically delete a story across all four state machines:** stop agents → remove worktree → empty timers → `DELETE FROM pipeline_items WHERE id LIKE '<id>%'` → `DELETE FROM crdt_ops WHERE op_json LIKE '%<id>%'`
## Token cost accounting

This session burned roughly **$15-25** in agent thrash, mostly from bugs 501 and 510 respawning agents on already-completed stories. Once 511 + 510 + 501 are fixed, that bleed disappears.

## Open questions for the next session

1. **Should `ExecutionState` live in the same CRDT or a separate one?** We tentatively said the same CRDT under per-node-pubkey keys. Need to validate this against the bft-json-crdt library's actual capabilities.
2. **Heartbeat cadence?** How often should `last_heartbeat` be updated for `ExecutionState::Running`? Every 30s seems reasonable, but it should be configurable.
3. **What's the migration path from existing pipeline_items rows to typed `PipelineItem`s?** A one-time migration script, or a rebuild from `crdt_ops`?
4. **Should we add `statig` after all?** Probably not for the initial implementation, but worth revisiting if we end up wanting hierarchical states (e.g., a `Working` superstate sharing transitions across active stages).
Generated (+35 −29)
```diff
@@ -229,7 +229,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "94893f1e0c6eeab764ade8dc4c0db24caf4fe7cbbaafc0eba0a9030f447b5185"
 dependencies = [
  "num-traits",
- "rand 0.8.5",
+ "rand 0.8.6",
 ]
 
 [[package]]
@@ -441,7 +441,7 @@ dependencies = [
  "criterion",
  "fastcrypto",
  "indexmap 2.14.0",
- "rand 0.8.5",
+ "rand 0.8.6",
  "random_color",
  "serde",
  "serde_json",
@@ -1649,7 +1649,7 @@ dependencies = [
  "num-bigint",
  "once_cell",
  "p256",
- "rand 0.8.5",
+ "rand 0.8.6",
  "readonly",
  "rfc6979",
  "rsa 0.8.2",
@@ -2288,7 +2288,7 @@ checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9"
 
 [[package]]
 name = "huskies"
-version = "0.10.3"
+version = "0.10.4"
 dependencies = [
  "async-stream",
  "async-trait",
@@ -3165,7 +3165,7 @@ dependencies = [
  "js_option",
  "matrix-sdk-common",
  "pbkdf2",
- "rand 0.8.5",
+ "rand 0.8.6",
  "rmp-serde",
  "ruma",
  "serde",
@@ -3255,7 +3255,7 @@ dependencies = [
  "getrandom 0.2.17",
  "hmac",
  "pbkdf2",
- "rand 0.8.5",
+ "rand 0.8.6",
  "rmp-serde",
  "serde",
  "serde_json",
@@ -3509,7 +3509,7 @@ dependencies = [
  "num-integer",
  "num-iter",
  "num-traits",
- "rand 0.8.5",
+ "rand 0.8.6",
  "smallvec",
  "zeroize",
 ]
@@ -3570,7 +3570,7 @@ dependencies = [
  "chrono",
  "getrandom 0.2.17",
  "http",
- "rand 0.8.5",
+ "rand 0.8.6",
  "reqwest 0.12.28",
  "serde",
  "serde_json",
@@ -3726,7 +3726,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "3c80231409c20246a13fddb31776fb942c38553c51e871f8cbd687a4cfb5843d"
 dependencies = [
  "phf_shared 0.11.3",
- "rand 0.8.5",
+ "rand 0.8.6",
 ]
 
 [[package]]
@@ -4231,9 +4231,9 @@ dependencies = [
 
 [[package]]
 name = "rand"
-version = "0.8.5"
+version = "0.8.6"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404"
+checksum = "5ca0ecfa931c29007047d1bc58e623ab12e5590e8c7cc53200d5202b69266d8a"
 dependencies = [
  "libc",
  "rand_chacha 0.3.1",
@@ -4693,7 +4693,7 @@ dependencies = [
  "js_int",
  "konst",
  "percent-encoding",
- "rand 0.8.5",
+ "rand 0.8.6",
  "regex",
  "ruma-identifiers-validation",
  "ruma-macros",
@@ -4803,7 +4803,7 @@ dependencies = [
  "base64",
  "ed25519-dalek",
  "pkcs8 0.10.2",
- "rand 0.8.5",
+ "rand 0.8.6",
  "ruma-common",
  "serde_json",
  "sha2 0.10.9",
@@ -4952,9 +4952,9 @@ checksum = "f87165f0995f63a9fbeea62b64d10b4d9d8e78ec6d7d51fb2125fda7bb36788f"
 
 [[package]]
 name = "rustls-webpki"
-version = "0.103.12"
+version = "0.103.13"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8279bb85272c9f10811ae6a6c547ff594d6a7f3c6c6b02ee9726d1d0dcfcdd06"
+checksum = "61c429a8649f110dddef65e2a5ad240f747e85f7758a6bccc7e5777bd33f756e"
 dependencies = [
  "aws-lc-rs",
  "ring",
@@ -5078,7 +5078,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "25996b82292a7a57ed3508f052cfff8640d38d32018784acd714758b43da9c8f"
 dependencies = [
  "bitcoin_hashes",
- "rand 0.8.5",
+ "rand 0.8.6",
  "secp256k1-sys",
 ]
 
@@ -5344,9 +5344,9 @@ dependencies = [
 
 [[package]]
 name = "sha3"
-version = "0.10.8"
+version = "0.10.9"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "75872d278a8f37ef87fa0ddbda7802605cb18344497949862c0d4dcb291eba60"
+checksum = "77fd7028345d415a4034cf8777cd4f8ab1851274233b45f84e3d955502d93874"
 dependencies = [
  "digest 0.10.7",
  "keccak",
@@ -5587,7 +5587,7 @@ dependencies = [
  "md-5",
  "memchr",
  "percent-encoding",
- "rand 0.8.5",
+ "rand 0.8.6",
  "rsa 0.9.10",
  "sha1",
  "sha2 0.10.9",
@@ -5623,7 +5623,7 @@ dependencies = [
  "log",
  "md-5",
  "memchr",
- "rand 0.8.5",
+ "rand 0.8.6",
  "serde",
  "serde_json",
  "sha2 0.10.9",
@@ -5996,9 +5996,9 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
 
 [[package]]
 name = "tokio"
-version = "1.52.0"
+version = "1.52.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a91135f59b1cbf38c91e73cf3386fca9bb77915c45ce2771460c9d92f0f3d776"
+checksum = "b67dee974fe86fd92cc45b7a95fdd2f99a36a6d7b0d431a231178d3d670bbcc6"
 dependencies = [
  "bytes",
  "libc",
@@ -6327,9 +6327,9 @@ dependencies = [
 
 [[package]]
 name = "typenum"
-version = "1.19.0"
+version = "1.20.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "562d481066bde0658276a35467c4af00bdc6ee726305698a55b86e61d7ad82bb"
+checksum = "40ce102ab67701b8526c123c1bab5cbe42d7040ccfd0f64af1a385808d2f43de"
 
 [[package]]
 name = "typewit"
@@ -6512,7 +6512,7 @@ dependencies = [
  "hmac",
  "matrix-pickle",
  "prost",
- "rand 0.8.5",
+ "rand 0.8.6",
  "serde",
  "serde_bytes",
  "serde_json",
@@ -6580,11 +6580,11 @@ checksum = "ccf3ec651a847eb01de73ccad15eb7d99f80485de043efb2f370cd654f4ea44b"
 
 [[package]]
 name = "wasip2"
-version = "1.0.2+wasi-0.2.9"
+version = "1.0.3+wasi-0.2.9"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9517f9239f02c069db75e65f174b3da828fe5f5b945c4dd26bd25d89c03ebcf5"
+checksum = "20064672db26d7cdc89c7798c48a0fdfac8213434a1186e5ef29fd560ae223d6"
 dependencies = [
- "wit-bindgen",
+ "wit-bindgen 0.57.1",
 ]
 
 [[package]]
@@ -6593,7 +6593,7 @@ version = "0.4.0+wasi-0.3.0-rc-2026-01-06"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "5428f8bf88ea5ddc08faddef2ac4a67e390b88186c703ce6dbd955e1c145aca5"
 dependencies = [
- "wit-bindgen",
+ "wit-bindgen 0.51.0",
 ]
 
 [[package]]
@@ -7271,6 +7271,12 @@ dependencies = [
  "wit-bindgen-rust-macro",
 ]
 
+[[package]]
+name = "wit-bindgen"
+version = "0.57.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1ebf944e87a7c253233ad6766e082e3cd714b5d03812acc24c318f549614536e"
+
 [[package]]
 name = "wit-bindgen-core"
 version = "0.51.0"
```
```diff
@@ -4,7 +4,7 @@ A story-driven development server that manages work items, spawns coding agents,
 
 ## Getting started with Claude Code
 
-1. Download the huskies binary (or build from source — see below).
+1. Download the huskies binary (or build from source — see below). Add it to your $PATH.
 
 2. From your project directory, scaffold and start the server:
 
@@ -79,6 +79,13 @@ cd frontend && npm install && npm run dev
 
 Configuration lives in `.huskies/project.toml`. See `.huskies/bot.toml.*.example` for transport setup.
 
+## Architecture
+
+Internal architecture documentation lives in [`docs/architecture/`](docs/architecture/):
+
+- [Service module conventions](docs/architecture/service-modules.md) — layout, layering rules, and patterns for `server/src/service/`
+- [Future extraction targets](docs/architecture/future-extractions.md) — recommended order for remaining handler extractions
+
 ## Releasing
 
 Requires a Gitea API token in `.env` (`GITEA_TOKEN=your_token`).
```
@@ -0,0 +1,29 @@

# Future Service Module Extractions

Recommended order for extracting remaining HTTP handlers into `service/<domain>/` modules, following the conventions in [service-modules.md](service-modules.md).

## Recommended Order

1. **`settings`** — small surface, few dependencies, good warm-up
2. **`oauth`** — reads/writes token files; pure validation logic separates cleanly
3. **`wizard`** — stateless generation logic is already mostly pure; thin I/O layer
4. **`project`** — project scaffolding; wraps `io::fs::scaffold`, clean separation
5. **`io`** (search/shell) — wraps `io::search` and `io::shell`; pure query-building separable
6. **`anthropic`** — token-proxy handler; pure request-shaping + thin HTTP I/O
7. **`stories`** (workflow) — CRDT-backed story ops; typed errors for 400/404/409/500
8. **`events`** — SSE handler; mostly framework wiring, but event filtering is pure

## Special Case: `ws`

The WebSocket handler (`http/ws.rs`) is a **dedicated harder extraction** because it mixes multiple concerns (chat dispatch, permission forwarding, SSE bridging) and depends on long-lived async streams. Extract it last, after the above list is complete and the service module pattern is well-established.

## Notes

- Each extraction should link back to `docs/architecture/service-modules.md` in the story description to maintain consistency.
- The `agents` extraction (story 604) is the reference implementation every future extraction should follow.
@@ -0,0 +1,227 @@
|
|||||||
|
# Service Module Conventions

This document defines the layout, layering rules, and patterns for all service
modules under `server/src/service/`. Every extraction from the HTTP handlers to
a service module **must** follow these conventions.

---

## 1. Directory Layout

```
server/src/service/<domain>/
  mod.rs     — public API, typed Error, orchestration, integration tests
  io.rs      — every side-effectful call; the ONLY file that may touch the
               filesystem, spawn processes, or call external crates that do
  <topic>.rs — pure logic for a named concern within the domain; no I/O
```

### Rules

- `<domain>` matches the HTTP handler filename (e.g. `agents`, `settings`,
  `oauth`).
- **No file named `logic.rs`** — use a descriptive domain name instead
  (e.g. `selection.rs`, `token.rs`, `validation.rs`).
- New topic files are added when a pure concern grows beyond ~50 lines or when
  it has independent test coverage needs.

---

## 2. The Functional-Core / Imperative-Shell Rule

```
io.rs (imperative shell) ←→ mod.rs (orchestrator) ←→ <topic>.rs (functional core)
```

| Layer | Allowed | Forbidden |
|-------|---------|-----------|
| `<topic>.rs` | Pure Rust, data transformation, branching logic, pattern matching | Any I/O |
| `io.rs` | `std::fs`, `std::process`, `tokio::fs`, network calls, `SystemTime::now` | Business logic beyond a thin wrapper |
| `mod.rs` | Calls into `io.rs` and `<topic>.rs`; owns the `Error` type | Direct I/O without going through `io.rs` |

**Grep-enforceable check:** the following must NOT appear in any `service/<domain>/` file other than `io.rs`:

- `std::fs`
- `std::process`
- `std::thread::sleep`
- `tokio::fs`
- `reqwest`
- `SystemTime::now`
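To make the split concrete, here is a minimal sketch of a `<topic>.rs`-style pure function. The `Agent` struct and `active_agents` name are hypothetical, invented for illustration; the point is that all branching lives in I/O-free code that `mod.rs` can feed with whatever `io.rs` loaded.

```rust
/// Hypothetical record shape; a real module would use its own domain types.
#[derive(Debug, PartialEq)]
pub struct Agent {
    pub name: String,
    pub archived: bool,
}

/// Pure core: no filesystem, no clock, no processes; trivially unit-testable.
pub fn active_agents(agents: Vec<Agent>) -> Vec<Agent> {
    agents.into_iter().filter(|a| !a.archived).collect()
}

fn main() {
    let agents = vec![
        Agent { name: "coder-1".into(), archived: false },
        Agent { name: "qa-1".into(), archived: true },
    ];
    let active = active_agents(agents);
    assert_eq!(active.len(), 1);
    assert_eq!(active[0].name, "coder-1");
    println!("active: {:?}", active);
}
```

Because the function touches nothing outside its arguments, its tests need no tempdir or tokio runtime, which is exactly what the grep check above enforces.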

---

## 3. Error Type Pattern

Each service domain declares its own typed error enum in `mod.rs`:

```rust
/// Errors returned by `service::agents` operations.
#[derive(Debug)]
pub enum Error {
    ProjectRootNotConfigured,
    AgentNotFound(String),
    WorkItemNotFound(String),
    WorktreeError(String),
    ConfigError(String),
    IoError(String),
}

impl std::fmt::Display for Error { ... }
```

HTTP handlers map service errors to **specific** HTTP status codes:

| Error variant | HTTP status |
|--------------|-------------|
| `ProjectRootNotConfigured` | 400 Bad Request |
| `AgentNotFound` | 404 Not Found |
| `WorkItemNotFound` | 404 Not Found |
| `WorktreeError` | 400 Bad Request |
| `ConfigError` | 400 Bad Request |
| `IoError` | 500 Internal Server Error |

**No generic `bad_request` for everything** — distinguish 400 vs 404 vs 500.
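As a hedged sketch of what this mapping might look like in code (the `status_code` helper is illustrative, not the crate's actual `map_service_error`, which would return framework-specific response types), each variant resolves to exactly one status from the table:

```rust
use std::fmt;

#[derive(Debug)]
pub enum Error {
    ProjectRootNotConfigured,
    AgentNotFound(String),
    WorkItemNotFound(String),
    WorktreeError(String),
    ConfigError(String),
    IoError(String),
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Error::ProjectRootNotConfigured => write!(f, "project root not configured"),
            Error::AgentNotFound(name) => write!(f, "agent not found: {name}"),
            Error::WorkItemNotFound(id) => write!(f, "work item not found: {id}"),
            Error::WorktreeError(msg) => write!(f, "worktree error: {msg}"),
            Error::ConfigError(msg) => write!(f, "config error: {msg}"),
            Error::IoError(msg) => write!(f, "io error: {msg}"),
        }
    }
}

/// One status per variant, matching the table: 400 vs 404 vs 500.
pub fn status_code(err: &Error) -> u16 {
    match err {
        Error::ProjectRootNotConfigured
        | Error::WorktreeError(_)
        | Error::ConfigError(_) => 400,
        Error::AgentNotFound(_) | Error::WorkItemNotFound(_) => 404,
        Error::IoError(_) => 500,
    }
}

fn main() {
    assert_eq!(status_code(&Error::AgentNotFound("qa-1".into())), 404);
    assert_eq!(status_code(&Error::IoError("disk full".into())), 500);
    println!("{}", Error::ConfigError("bad toml".into()));
}
```

The exhaustive `match` is the payoff: adding a variant without deciding its status code becomes a compile error rather than a silent fallthrough to a generic 400.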

---

## 4. Test Pattern

### Chosen default pattern: fixture helpers in `io::test_helpers`

All filesystem setup for tests lives in a `#[cfg(test)] pub mod test_helpers`
block inside `io.rs`. Test blocks in `mod.rs` and topic files call these
helpers instead of importing `std::fs` directly.

**Grep-enforceable check for test code:** the following must NOT appear inside
`#[cfg(test)]` blocks in any `service/<domain>/` file **other than `io.rs`**:

- `std::fs::` (any item)
- `tokio::fs`
- `std::process::` (any item)
- `Command::new`

Run to verify:

```sh
grep -rn --include='*.rs' \
  'std::fs::\|tokio::fs\|std::process::\|Command::new' \
  server/src/service/ | grep -v '/io\.rs'
```

This must return zero matches (including lines inside `#[cfg(test)]` blocks).

### Pure topic files (`<topic>.rs`)

```rust
#[cfg(test)]
mod tests {
    use super::*;

    // Unit tests MUST:
    // - Use no tempdir, tokio runtime, or filesystem
    // - Cover every branch of every public function
    #[test]
    fn filter_removes_archived_agents() { ... }
}
```

### `io.rs`

```rust
/// Fixture helpers — the ONLY place allowed to call std::fs in tests.
#[cfg(test)]
pub mod test_helpers {
    use tempfile::TempDir;

    pub fn make_work_dirs(tmp: &TempDir) { ... }
    pub fn make_stage_dirs(tmp: &TempDir) { ... }
    pub fn make_project_toml(tmp: &TempDir, content: &str) { ... }
    pub fn write_story_file(tmp: &TempDir, relative_path: &str, content: &str) { ... }
}

#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;

    // IO tests MAY use tempdirs and the real filesystem.
    // Keep them few and focused on the thin I/O wrapper contract.
    #[test]
    fn is_archived_returns_true_when_in_done() { ... }
}
```

### `mod.rs`

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use io::test_helpers::*; // ← fixture helpers; never import std::fs here

    // Integration tests compose the io + pure layers end-to-end.
    // May use tempdirs. Keep the count small — they are integration-level.
    #[tokio::test]
    async fn list_agents_excludes_archived() { ... }
}
```

---

## 5. Dependency Injection Pattern

Service functions take **only the dependencies they actually use**:

```rust
// Good — takes only what it needs
pub async fn start_agent(
    pool: &AgentPool,
    project_root: &Path,
    story_id: &str,
    agent_name: Option<&str>,
) -> Result<AgentInfo, Error> { ... }

// Bad — takes the whole AppContext
pub async fn start_agent(ctx: &AppContext, ...) -> Result<AgentInfo, Error> { ... }
```

Standard injected dependencies for `service::agents`:

| Type | Purpose |
|------|---------|
| `&AgentPool` | Agent lifecycle operations |
| `&Path` (`project_root`) | Filesystem operations scoped to the project |
| `&WorkflowState` | In-memory test result cache |

**The dependency set chosen for `agents` is the reference pattern for all future
service module extractions.**

---

## 6. HTTP Handler Contract

After extraction, HTTP handlers are thin adapters:

```rust
async fn start_agent(&self, payload: Json<StartAgentPayload>) -> OpenApiResult<...> {
    let project_root = self.ctx.agents.get_project_root(&self.ctx.state)
        .map_err(|e| bad_request(e))?;               // extract from AppContext
    let info = service::agents::start_agent(         // call service
        &self.ctx.agents, &project_root, &payload.story_id, payload.agent_name.as_deref(),
    ).await.map_err(map_service_error)?;             // map typed error → HTTP
    Ok(Json(AgentInfoResponse { ... }))              // shape DTO
}
```

Handlers must contain **no**:

- `std::fs` / file reads
- `std::process` invocations
- Inline load-mutate-save sequences
- Inline validation that belongs in the service layer

---

## 7. Follow-up Extractions

See [future-extractions.md](future-extractions.md) for the recommended order
and rationale for remaining extraction targets.
Generated file, +2 −2
@@ -1,12 +1,12 @@
 {
   "name": "huskies",
-  "version": "0.10.3",
+  "version": "0.10.4",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "huskies",
-      "version": "0.10.3",
+      "version": "0.10.4",
       "dependencies": {
         "@types/react-syntax-highlighter": "^15.5.13",
         "react": "^19.1.0",
@@ -1,7 +1,7 @@
 {
   "name": "huskies",
   "private": true,
-  "version": "0.10.3",
+  "version": "0.10.4",
   "type": "module",
   "scripts": {
     "dev": "vite",
@@ -194,7 +194,6 @@ body,
 #root {
   height: 100%;
   margin: 0;
-  overflow: hidden;
 }
 
 /* Agent activity indicator pulse */
@@ -1,8 +1,14 @@
-import { fireEvent, render, screen, waitFor } from "@testing-library/react";
+import { act, fireEvent, render, screen, waitFor } from "@testing-library/react";
 import userEvent from "@testing-library/user-event";
 import { beforeEach, describe, expect, it, vi } from "vitest";
 import { api } from "./api/client";
 
+vi.mock("./api/gateway", () => ({
+  gatewayApi: {
+    getServerMode: vi.fn().mockResolvedValue({ mode: "standard" }),
+  },
+}));
+
 vi.mock("./api/client", () => {
   const api = {
     getCurrentProject: vi.fn(),
@@ -76,7 +82,11 @@ describe("App", () => {
 
   async function renderApp() {
     const { default: App } = await import("./App");
-    return render(<App />);
+    let result!: ReturnType<typeof render>;
+    await act(async () => {
+      result = render(<App />);
+    });
+    return result;
   }
 
   it("calls getCurrentProject() on mount", async () => {
@@ -1,4 +1,5 @@
 import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
+import type { ProjectSettings } from "./settings";
 import { settingsApi } from "./settings";
 
 const mockFetch = vi.fn();
@@ -22,7 +23,77 @@ function errorResponse(status: number, text: string) {
   return new Response(text, { status });
 }
 
+const defaultProjectSettings: ProjectSettings = {
+  default_qa: "server",
+  default_coder_model: null,
+  max_coders: null,
+  max_retries: 2,
+  base_branch: null,
+  rate_limit_notifications: true,
+  timezone: null,
+  rendezvous: null,
+  watcher_sweep_interval_secs: 60,
+  watcher_done_retention_secs: 14400,
+};
+
 describe("settingsApi", () => {
+  describe("getProjectSettings", () => {
+    it("sends GET to /settings and returns project settings", async () => {
+      mockFetch.mockResolvedValueOnce(okResponse(defaultProjectSettings));
+
+      const result = await settingsApi.getProjectSettings();
+
+      expect(mockFetch).toHaveBeenCalledWith(
+        "/api/settings",
+        expect.objectContaining({
+          headers: expect.objectContaining({
+            "Content-Type": "application/json",
+          }),
+        }),
+      );
+      expect(result).toEqual(defaultProjectSettings);
+    });
+
+    it("uses custom baseUrl when provided", async () => {
+      mockFetch.mockResolvedValueOnce(okResponse(defaultProjectSettings));
+      await settingsApi.getProjectSettings("http://localhost:4000/api");
+      expect(mockFetch).toHaveBeenCalledWith(
+        "http://localhost:4000/api/settings",
+        expect.anything(),
+      );
+    });
+  });
+
+  describe("putProjectSettings", () => {
+    it("sends PUT to /settings with settings body", async () => {
+      const updated = { ...defaultProjectSettings, default_qa: "agent" };
+      mockFetch.mockResolvedValueOnce(okResponse(updated));
+
+      const result = await settingsApi.putProjectSettings(updated);
+
+      expect(mockFetch).toHaveBeenCalledWith(
+        "/api/settings",
+        expect.objectContaining({
+          method: "PUT",
+          body: JSON.stringify(updated),
+        }),
+      );
+      expect(result.default_qa).toBe("agent");
+    });
+
+    it("throws on validation error", async () => {
+      mockFetch.mockResolvedValueOnce(
+        errorResponse(400, "Invalid default_qa value"),
+      );
+      await expect(
+        settingsApi.putProjectSettings({
+          ...defaultProjectSettings,
+          default_qa: "invalid",
+        }),
+      ).rejects.toThrow("Invalid default_qa value");
+    });
+  });
+
   describe("getEditorCommand", () => {
     it("sends GET to /settings/editor and returns editor settings", async () => {
       const expected = { editor_command: "zed" };
@@ -2,6 +2,19 @@ export interface EditorSettings {
   editor_command: string | null;
 }
 
+export interface ProjectSettings {
+  default_qa: string;
+  default_coder_model: string | null;
+  max_coders: number | null;
+  max_retries: number;
+  base_branch: string | null;
+  rate_limit_notifications: boolean;
+  timezone: string | null;
+  rendezvous: string | null;
+  watcher_sweep_interval_secs: number;
+  watcher_done_retention_secs: number;
+}
+
 export interface OpenFileResult {
   success: boolean;
 }
@@ -34,6 +47,21 @@ async function requestJson<T>(
 }
 
 export const settingsApi = {
+  getProjectSettings(baseUrl?: string): Promise<ProjectSettings> {
+    return requestJson<ProjectSettings>("/settings", {}, baseUrl);
+  },
+
+  putProjectSettings(
+    settings: ProjectSettings,
+    baseUrl?: string,
+  ): Promise<ProjectSettings> {
+    return requestJson<ProjectSettings>(
+      "/settings",
+      { method: "PUT", body: JSON.stringify(settings) },
+      baseUrl,
+    );
+  },
+
   getEditorCommand(baseUrl?: string): Promise<EditorSettings> {
     return requestJson<EditorSettings>("/settings/editor", {}, baseUrl);
   },
@@ -165,7 +165,7 @@ describe("Chat message rendering — unified tool call UI", () => {
       },
     ];
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate(messages);
     });
 
@@ -199,7 +199,7 @@ describe("Chat message rendering — unified tool call UI", () => {
       { role: "assistant", content: "The file contains a main function." },
     ];
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate(messages);
     });
 
@@ -219,7 +219,7 @@ describe("Chat message rendering — unified tool call UI", () => {
       { role: "assistant", content: "Hi there! How can I help?" },
     ];
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate(messages);
     });
 
@@ -254,7 +254,7 @@ describe("Chat message rendering — unified tool call UI", () => {
       },
     ];
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate(messages);
     });
 
@@ -396,7 +396,7 @@ describe("Chat reconciliation banner", () => {
 
     await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onReconciliationProgress(
         "42_story_test",
         "checking",
@@ -417,7 +417,7 @@ describe("Chat reconciliation banner", () => {
 
     await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onReconciliationProgress(
         "42_story_test",
         "gates_running",
@@ -435,7 +435,7 @@ describe("Chat reconciliation banner", () => {
 
     await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onReconciliationProgress(
         "42_story_test",
         "checking",
@@ -447,7 +447,7 @@ describe("Chat reconciliation banner", () => {
       await screen.findByTestId("reconciliation-banner"),
     ).toBeInTheDocument();
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onReconciliationProgress(
         "",
         "done",
@@ -504,7 +504,7 @@ describe("Chat localStorage persistence (Story 145)", () => {
       { role: "assistant", content: "Hi there!" },
     ];
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate(history);
     });
 
@@ -555,7 +555,7 @@ describe("Chat localStorage persistence (Story 145)", () => {
       { role: "assistant", content: "I should survive a reload" },
     ];
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate(history);
     });
 
@@ -604,7 +604,7 @@ describe("Chat localStorage persistence (Story 145)", () => {
       { role: "user", content: "What is Rust?" },
      { role: "assistant", content: "Rust is a systems programming language." },
     ];
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate(priorHistory);
     });
 
@@ -692,12 +692,12 @@ describe("Chat activity status indicator (Bug 140)", () => {
     });
 
     // Simulate tokens arriving (streamingContent becomes non-empty)
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onToken("I'll read that file for you.");
     });
 
     // Now simulate a tool activity event while streamingContent is non-empty
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onActivity("read_file");
     });
 
@@ -742,7 +742,7 @@ describe("Chat activity status indicator (Bug 140)", () => {
     });
 
     // Tokens arrive — streamingContent is non-empty, no activity
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onToken("Here is my response...");
     });
 
@@ -765,12 +765,12 @@ describe("Chat activity status indicator (Bug 140)", () => {
     });
 
     // Simulate tokens arriving
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onToken("Let me read that.");
     });
 
     // Claude Code sends tool name "Read" (not "read_file")
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onActivity("Read");
     });
 
@@ -792,11 +792,11 @@ describe("Chat activity status indicator (Bug 140)", () => {
       fireEvent.keyDown(input, { key: "Enter", shiftKey: false });
     });
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onToken("Running tests now.");
     });
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onActivity("Bash");
     });
 
@@ -818,11 +818,11 @@ describe("Chat activity status indicator (Bug 140)", () => {
       fireEvent.keyDown(input, { key: "Enter", shiftKey: false });
     });
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onToken("Working on it.");
     });
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onActivity("SomeCustomTool");
     });
 
@@ -899,7 +899,7 @@ describe("Chat message queue (Story 155)", () => {
     ).toBeInTheDocument();
 
     // Simulate agent response completing (loading → false)
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate([
         { role: "user", content: "First" },
         { role: "assistant", content: "Done." },
@@ -1066,7 +1066,7 @@ describe("Chat message queue (Story 155)", () => {
     expect(indicators[1]).toHaveTextContent("Third");
 
     // Simulate first response completing — both "Second" and "Third" are drained at once
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate([
         { role: "user", content: "First" },
         { role: "assistant", content: "Response 1." },
@@ -1145,7 +1145,7 @@ describe("Remove bubble styling from streaming messages (Story 163)", () => {
     });
 
     // Simulate streaming tokens arriving
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onToken("Streaming response text");
     });
 
@@ -1176,7 +1176,7 @@ describe("Remove bubble styling from streaming messages (Story 163)", () => {
      fireEvent.keyDown(input, { key: "Enter", shiftKey: false });
     });
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onToken("Some markdown content");
     });
 
@@ -1200,7 +1200,7 @@ describe("Remove bubble styling from streaming messages (Story 163)", () => {
     });
 
     // Simulate streaming tokens
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onToken("Final response");
     });
 
@@ -1211,7 +1211,7 @@ describe("Remove bubble styling from streaming messages (Story 163)", () => {
     const streamingStyleAttr = streamingStyledDiv.getAttribute("style") ?? "";
 
     // Transition: onUpdate completes the message
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate([
         { role: "user", content: "Hello" },
         { role: "assistant", content: "Final response" },
@@ -1244,7 +1244,7 @@ describe("Remove bubble styling from streaming messages (Story 163)", () => {
 
     await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate([
         { role: "user", content: "Hi" },
         { role: "assistant", content: "Hello there!" },
@@ -1268,7 +1268,7 @@ describe("Remove bubble styling from streaming messages (Story 163)", () => {
 
     await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate([
         { role: "user", content: "I am a user message" },
         { role: "assistant", content: "I am a response" },
@@ -1310,7 +1310,7 @@ describe("Bug 264: Claude Code session ID persisted across browser refresh", ()
 
     await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onSessionId("test-session-abc");
     });
 
@@ -1394,7 +1394,7 @@ describe("Bug 264: Claude Code session ID persisted across browser refresh", ()
     render(<Chat projectPath={PROJECT_PATH} onCloseProject={vi.fn()} />);
     await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onSessionId("my-session");
     });
 
@@ -1595,7 +1595,7 @@ describe("Slash command handling (Story 374)", () => {
     await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
 
     // First add a message so there is history to clear
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onUpdate([
         { role: "user", content: "hello" },
         { role: "assistant", content: "world" },
@@ -1701,7 +1701,7 @@ describe("Bug 450: WebSocket error messages displayed in chat", () => {
 
     await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onError("Something went wrong on the server.");
     });
 
@@ -1715,7 +1715,7 @@ describe("Bug 450: WebSocket error messages displayed in chat", () => {
 
     await waitFor(() => expect(capturedWsHandlers).not.toBeNull());
 
-    act(() => {
+    await act(async () => {
       capturedWsHandlers?.onError(
         "OAuth login required. Please visit: https://example.com/oauth/login",
       );
@@ -9,6 +9,7 @@ import { useChatWebSocket } from "../hooks/useChatWebSocket";
 import { estimateTokens, getContextWindowSize } from "../utils/chatUtils";
 import { ApiKeyDialog } from "./ApiKeyDialog";
 import { BotConfigPage } from "./BotConfigPage";
+import { SettingsPage } from "./SettingsPage";
 import { ChatHeader } from "./ChatHeader";
 import type { ChatInputHandle } from "./ChatInput";
 import { ChatInput } from "./ChatInput";
@@ -62,7 +63,7 @@ export function Chat({
|
|||||||
null,
|
null,
|
||||||
);
|
);
|
||||||
const [showHelp, setShowHelp] = useState(false);
|
const [showHelp, setShowHelp] = useState(false);
|
||||||
const [view, setView] = useState<"chat" | "bot-config">("chat");
|
const [view, setView] = useState<"chat" | "bot-config" | "settings">("chat");
|
||||||
const [queuedMessages, setQueuedMessages] = useState<
|
const [queuedMessages, setQueuedMessages] = useState<
|
||||||
{ id: string; text: string }[]
|
{ id: string; text: string }[]
|
||||||
>([]);
|
>([]);
|
||||||
@@ -376,16 +377,21 @@ export function Chat({
|
|||||||
wsConnected={wsConnected}
|
wsConnected={wsConnected}
|
||||||
oauthStatus={oauthStatus}
|
oauthStatus={oauthStatus}
|
||||||
onShowBotConfig={() => setView("bot-config")}
|
onShowBotConfig={() => setView("bot-config")}
|
||||||
|
onShowSettings={() => setView("settings")}
|
||||||
/>
|
/>
|
||||||
|
|
||||||
{view === "bot-config" && (
|
{view === "bot-config" && (
|
||||||
<BotConfigPage onBack={() => setView("chat")} />
|
<BotConfigPage onBack={() => setView("chat")} />
|
||||||
)}
|
)}
|
||||||
|
|
||||||
|
{view === "settings" && (
|
||||||
|
<SettingsPage onBack={() => setView("chat")} />
|
||||||
|
)}
|
||||||
|
|
||||||
<div
|
<div
|
||||||
data-testid="chat-content-area"
|
data-testid="chat-content-area"
|
||||||
style={{
|
style={{
|
||||||
display: view === "bot-config" ? "none" : "flex",
|
display: view === "chat" ? "flex" : "none",
|
||||||
flex: 1,
|
flex: 1,
|
||||||
minHeight: 0,
|
minHeight: 0,
|
||||||
flexDirection: isNarrowScreen ? "column" : "row",
|
flexDirection: isNarrowScreen ? "column" : "row",
|
||||||
|
|||||||
@@ -35,6 +35,7 @@ interface ChatHeaderProps {
   wsConnected: boolean;
   oauthStatus?: OAuthStatus | null;
   onShowBotConfig?: () => void;
+  onShowSettings?: () => void;
 }

 const getContextEmoji = (percentage: number): string => {
@@ -60,6 +61,7 @@ export function ChatHeader({
   wsConnected,
   oauthStatus = null,
   onShowBotConfig,
+  onShowSettings,
 }: ChatHeaderProps) {
   const hasModelOptions = availableModels.length > 0 || claudeModels.length > 0;
   const [showConfirm, setShowConfirm] = useState(false);
@@ -552,6 +554,43 @@ export function ChatHeader({
         </button>
       )}

+      {onShowSettings && (
+        <button
+          type="button"
+          onClick={onShowSettings}
+          title="Edit project.toml settings"
+          style={{
+            padding: "6px 12px",
+            borderRadius: "99px",
+            border: "none",
+            fontSize: "0.85em",
+            backgroundColor: "#2f2f2f",
+            color: "#888",
+            cursor: "pointer",
+            outline: "none",
+            transition: "all 0.2s",
+          }}
+          onMouseOver={(e) => {
+            e.currentTarget.style.backgroundColor = "#3f3f3f";
+            e.currentTarget.style.color = "#ccc";
+          }}
+          onMouseOut={(e) => {
+            e.currentTarget.style.backgroundColor = "#2f2f2f";
+            e.currentTarget.style.color = "#888";
+          }}
+          onFocus={(e) => {
+            e.currentTarget.style.backgroundColor = "#3f3f3f";
+            e.currentTarget.style.color = "#ccc";
+          }}
+          onBlur={(e) => {
+            e.currentTarget.style.backgroundColor = "#2f2f2f";
+            e.currentTarget.style.color = "#888";
+          }}
+        >
+          ⚙ Settings
+        </button>
+      )}
+
       {hasModelOptions ? (
         <select
           value={model}
@@ -0,0 +1,461 @@
+import * as React from "react";
+import type { ProjectSettings } from "../api/settings";
+import { settingsApi } from "../api/settings";
+
+const { useState, useEffect } = React;
+
+interface SettingsPageProps {
+  onBack: () => void;
+}
+
+const fieldStyle: React.CSSProperties = {
+  display: "flex",
+  flexDirection: "column",
+  gap: "4px",
+};
+
+const labelStyle: React.CSSProperties = {
+  fontSize: "0.8em",
+  color: "#aaa",
+  fontWeight: 500,
+};
+
+const descStyle: React.CSSProperties = {
+  fontSize: "0.75em",
+  color: "#666",
+  marginTop: "2px",
+};
+
+const inputStyle: React.CSSProperties = {
+  padding: "8px 10px",
+  borderRadius: "6px",
+  border: "1px solid #333",
+  background: "#1e1e1e",
+  color: "#ececec",
+  fontSize: "0.9em",
+  fontFamily: "monospace",
+  outline: "none",
+};
+
+const sectionStyle: React.CSSProperties = {
+  background: "#1e1e1e",
+  border: "1px solid #333",
+  borderRadius: "8px",
+  padding: "20px",
+  display: "flex",
+  flexDirection: "column",
+  gap: "16px",
+};
+
+const sectionTitleStyle: React.CSSProperties = {
+  fontSize: "0.85em",
+  fontWeight: 600,
+  color: "#aaa",
+  textTransform: "uppercase",
+  letterSpacing: "0.06em",
+  marginBottom: "2px",
+};
+
+interface TextFieldProps {
+  label: string;
+  description?: string;
+  value: string;
+  onChange: (v: string) => void;
+  placeholder?: string;
+}
+
+function TextField({ label, description, value, onChange, placeholder }: TextFieldProps) {
+  return (
+    <div style={fieldStyle}>
+      <label style={labelStyle}>{label}</label>
+      {description && <span style={descStyle}>{description}</span>}
+      <input
+        type="text"
+        value={value}
+        onChange={(e) => onChange(e.target.value)}
+        placeholder={placeholder ?? ""}
+        style={inputStyle}
+        autoComplete="off"
+      />
+    </div>
+  );
+}
+
+interface NumberFieldProps {
+  label: string;
+  description?: string;
+  value: number | null;
+  onChange: (v: number | null) => void;
+  min?: number;
+  placeholder?: string;
+}
+
+function NumberField({ label, description, value, onChange, min, placeholder }: NumberFieldProps) {
+  return (
+    <div style={fieldStyle}>
+      <label style={labelStyle}>{label}</label>
+      {description && <span style={descStyle}>{description}</span>}
+      <input
+        type="number"
+        value={value === null ? "" : value}
+        min={min}
+        onChange={(e) => {
+          const raw = e.target.value.trim();
+          if (raw === "") {
+            onChange(null);
+          } else {
+            const n = Number(raw);
+            if (!Number.isNaN(n)) onChange(n);
+          }
+        }}
+        placeholder={placeholder ?? ""}
+        style={inputStyle}
+      />
+    </div>
+  );
+}
+
+interface CheckboxFieldProps {
+  label: string;
+  description?: string;
+  checked: boolean;
+  onChange: (v: boolean) => void;
+}
+
+function CheckboxField({ label, description, checked, onChange }: CheckboxFieldProps) {
+  return (
+    <div style={fieldStyle}>
+      {description && <span style={descStyle}>{description}</span>}
+      <label
+        style={{
+          display: "flex",
+          alignItems: "center",
+          gap: "8px",
+          cursor: "pointer",
+          fontSize: "0.9em",
+          color: "#ccc",
+        }}
+      >
+        <input
+          type="checkbox"
+          checked={checked}
+          onChange={(e) => onChange(e.target.checked)}
+        />
+        {label}
+      </label>
+    </div>
+  );
+}
+
+const QA_MODES = ["server", "agent", "human"] as const;
+
+/** Settings page — form-based editor for project.toml scalar settings. */
+export function SettingsPage({ onBack }: SettingsPageProps) {
+  const [settings, setSettings] = useState<ProjectSettings | null>(null);
+  const [status, setStatus] = useState<"idle" | "loading" | "saving" | "saved" | "error">("loading");
+  const [errorMsg, setErrorMsg] = useState<string | null>(null);
+  const [validationErrors, setValidationErrors] = useState<Record<string, string>>({});
+
+  useEffect(() => {
+    settingsApi
+      .getProjectSettings()
+      .then((s) => {
+        setSettings(s);
+        setStatus("idle");
+      })
+      .catch((e: unknown) => {
+        setStatus("error");
+        setErrorMsg(e instanceof Error ? e.message : "Failed to load settings");
+      });
+  }, []);
+
+  function patch(partial: Partial<ProjectSettings>) {
+    setSettings((prev) => (prev ? { ...prev, ...partial } : prev));
+    setValidationErrors({});
+  }
+
+  function validate(s: ProjectSettings): Record<string, string> {
+    const errors: Record<string, string> = {};
+    if (!QA_MODES.includes(s.default_qa as (typeof QA_MODES)[number])) {
+      errors.default_qa = `Must be one of: ${QA_MODES.join(", ")}`;
+    }
+    if (s.max_retries < 0) {
+      errors.max_retries = "Must be 0 or greater";
+    }
+    if (s.watcher_sweep_interval_secs < 1) {
+      errors.watcher_sweep_interval_secs = "Must be at least 1 second";
+    }
+    if (s.watcher_done_retention_secs < 1) {
+      errors.watcher_done_retention_secs = "Must be at least 1 second";
+    }
+    return errors;
+  }
+
+  async function handleSave() {
+    if (!settings) return;
+    const errors = validate(settings);
+    if (Object.keys(errors).length > 0) {
+      setValidationErrors(errors);
+      return;
+    }
+    setStatus("saving");
+    setErrorMsg(null);
+    try {
+      const saved = await settingsApi.putProjectSettings(settings);
+      setSettings(saved);
+      setStatus("saved");
+      setTimeout(() => setStatus("idle"), 2000);
+    } catch (e) {
+      setStatus("error");
+      setErrorMsg(e instanceof Error ? e.message : "Save failed");
+    }
+  }
+
+  const s = settings;
+
+  return (
+    <div
+      style={{
+        display: "flex",
+        flexDirection: "column",
+        height: "100%",
+        backgroundColor: "#171717",
+        color: "#ececec",
+        overflow: "auto",
+      }}
+    >
+      {/* Header */}
+      <div
+        style={{
+          padding: "12px 24px",
+          borderBottom: "1px solid #333",
+          display: "flex",
+          alignItems: "center",
+          gap: "16px",
+          background: "#171717",
+          flexShrink: 0,
+        }}
+      >
+        <button
+          type="button"
+          onClick={onBack}
+          style={{
+            background: "transparent",
+            border: "none",
+            cursor: "pointer",
+            color: "#888",
+            fontSize: "0.9em",
+            padding: "4px 8px",
+            borderRadius: "4px",
+          }}
+        >
+          ← Back
+        </button>
+        <span style={{ fontWeight: 700, fontSize: "1em" }}>Project Settings</span>
+      </div>
+
+      {/* Body */}
+      <div
+        style={{
+          flex: 1,
+          padding: "24px",
+          display: "flex",
+          flexDirection: "column",
+          gap: "20px",
+          maxWidth: "640px",
+        }}
+      >
+        {status === "loading" && (
+          <p style={{ color: "#888", fontSize: "0.9em" }}>Loading settings…</p>
+        )}
+
+        {status === "error" && !s && (
+          <p style={{ color: "#f08080", fontSize: "0.9em" }}>
+            Error: {errorMsg}
+          </p>
+        )}
+
+        {s && (
+          <>
+            {/* Pipeline */}
+            <div style={sectionStyle}>
+              <div style={sectionTitleStyle}>Pipeline</div>
+
+              <div style={fieldStyle}>
+                <label style={labelStyle}>Default QA Mode</label>
+                <span style={descStyle}>
+                  How stories are QA-reviewed after the coder stage.
+                  Default: server.
+                </span>
+                <select
+                  value={s.default_qa}
+                  onChange={(e) => patch({ default_qa: e.target.value })}
+                  style={{ ...inputStyle, cursor: "pointer" }}
+                >
+                  {QA_MODES.map((m) => (
+                    <option key={m} value={m}>
+                      {m}
+                    </option>
+                  ))}
+                </select>
+                {validationErrors.default_qa && (
+                  <span style={{ color: "#f08080", fontSize: "0.8em" }}>
+                    {validationErrors.default_qa}
+                  </span>
+                )}
+              </div>
+
+              <NumberField
+                label="Max Retries"
+                description="Maximum retries per story per pipeline stage before blocking. Default: 2. Set 0 to disable."
+                value={s.max_retries}
+                min={0}
+                onChange={(v) => patch({ max_retries: v ?? 0 })}
+              />
+              {validationErrors.max_retries && (
+                <span style={{ color: "#f08080", fontSize: "0.8em" }}>
+                  {validationErrors.max_retries}
+                </span>
+              )}
+
+              <NumberField
+                label="Max Concurrent Coders"
+                description="Maximum number of coder-stage agents running at once. Leave blank for unlimited."
+                value={s.max_coders}
+                min={1}
+                placeholder="unlimited"
+                onChange={(v) => patch({ max_coders: v })}
+              />
+
+              <TextField
+                label="Default Coder Model"
+                description="When set, only coder agents matching this model are auto-assigned (e.g. sonnet, opus)."
+                value={s.default_coder_model ?? ""}
+                onChange={(v) =>
+                  patch({ default_coder_model: v.trim() || null })
+                }
+                placeholder="e.g. sonnet"
+              />
+            </div>
+
+            {/* Git */}
+            <div style={sectionStyle}>
+              <div style={sectionTitleStyle}>Git</div>
+
+              <TextField
+                label="Base Branch"
+                description="Overrides auto-detection of the merge target branch (e.g. main, master, develop)."
+                value={s.base_branch ?? ""}
+                onChange={(v) =>
+                  patch({ base_branch: v.trim() || null })
+                }
+                placeholder="e.g. master"
+              />
+            </div>
+
+            {/* Notifications */}
+            <div style={sectionStyle}>
+              <div style={sectionTitleStyle}>Notifications</div>
+
+              <CheckboxField
+                label="Rate Limit Notifications"
+                description="Send chat notifications on soft API rate-limit warnings. Disable to reduce noise."
+                checked={s.rate_limit_notifications}
+                onChange={(v) => patch({ rate_limit_notifications: v })}
+              />
+            </div>
+
+            {/* Advanced */}
+            <div style={sectionStyle}>
+              <div style={sectionTitleStyle}>Advanced</div>
+
+              <TextField
+                label="Timezone"
+                description="IANA timezone for timer inputs (e.g. Europe/London, America/New_York). Leave blank for system default."
+                value={s.timezone ?? ""}
+                onChange={(v) => patch({ timezone: v.trim() || null })}
+                placeholder="e.g. Europe/London"
+              />
+
+              <TextField
+                label="Rendezvous URL"
+                description="WebSocket URL of a remote huskies node for CRDT state sync (e.g. ws://host:3001/crdt-sync)."
+                value={s.rendezvous ?? ""}
+                onChange={(v) => patch({ rendezvous: v.trim() || null })}
+                placeholder="e.g. ws://host:3001/crdt-sync"
+              />
+            </div>
+
+            {/* Watcher */}
+            <div style={sectionStyle}>
+              <div style={sectionTitleStyle}>Archiver</div>
+
+              <NumberField
+                label="Sweep Interval (seconds)"
+                description="How often to check the done stage for items ready to archive. Default: 60."
+                value={s.watcher_sweep_interval_secs}
+                min={1}
+                onChange={(v) =>
+                  patch({ watcher_sweep_interval_secs: v ?? 60 })
+                }
+              />
+              {validationErrors.watcher_sweep_interval_secs && (
+                <span style={{ color: "#f08080", fontSize: "0.8em" }}>
+                  {validationErrors.watcher_sweep_interval_secs}
+                </span>
+              )}
+
+              <NumberField
+                label="Done Retention (seconds)"
+                description="How long an item must stay in the done stage before archiving. Default: 14400 (4 hours)."
+                value={s.watcher_done_retention_secs}
+                min={1}
+                onChange={(v) =>
+                  patch({ watcher_done_retention_secs: v ?? 14400 })
+                }
+              />
+              {validationErrors.watcher_done_retention_secs && (
+                <span style={{ color: "#f08080", fontSize: "0.8em" }}>
+                  {validationErrors.watcher_done_retention_secs}
+                </span>
+              )}
+            </div>
+
+            {/* Save */}
+            <div style={{ display: "flex", alignItems: "center", gap: "12px" }}>
+              <button
+                type="button"
+                onClick={handleSave}
+                disabled={status === "saving"}
+                style={{
+                  padding: "8px 24px",
+                  borderRadius: "6px",
+                  border: "none",
+                  background:
+                    status === "saved" ? "#1a5c2a" : "#2563eb",
+                  color: "#fff",
+                  cursor:
+                    status === "saving" ? "not-allowed" : "pointer",
+                  fontSize: "0.9em",
+                  fontWeight: 600,
+                  opacity: status === "saving" ? 0.7 : 1,
+                }}
+              >
+                {status === "saving"
+                  ? "Saving…"
+                  : status === "saved"
+                    ? "Saved!"
+                    : "Save"}
+              </button>
+              {status === "error" && errorMsg && (
+                <span style={{ color: "#f08080", fontSize: "0.85em" }}>
+                  {errorMsg}
+                </span>
+              )}
+            </div>
+          </>
+        )}
+      </div>
+    </div>
+  );
+}
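The new `SettingsPage` imports `ProjectSettings` and `settingsApi` from `../api/settings`, a module this diff does not include. As a hedged sketch only — the endpoint path `/api/settings` and the fetch wrapper below are assumptions, not code from this repository — the shape the component depends on (a `getProjectSettings`/`putProjectSettings` pair returning the settings object) might look like:

```typescript
// Hypothetical sketch of ../api/settings; only the exported names and the
// ProjectSettings fields are taken from the diff — the endpoint is assumed.
export interface ProjectSettings {
  default_qa: string;
  max_retries: number;
  max_coders: number | null;
  default_coder_model: string | null;
  base_branch: string | null;
  rate_limit_notifications: boolean;
  timezone: string | null;
  rendezvous: string | null;
  watcher_sweep_interval_secs: number;
  watcher_done_retention_secs: number;
}

// Shared request helper: throws on non-2xx so the page's catch branches fire.
async function request<T>(method: "GET" | "PUT", body?: unknown): Promise<T> {
  const res = await fetch("/api/settings", {
    method,
    headers: { "Content-Type": "application/json" },
    body: body === undefined ? undefined : JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Settings request failed: ${res.status}`);
  return (await res.json()) as T;
}

export const settingsApi = {
  getProjectSettings: () => request<ProjectSettings>("GET"),
  // PUT returns the saved settings, matching setSettings(saved) in handleSave.
  putProjectSettings: (s: ProjectSettings) => request<ProjectSettings>("PUT", s),
};
```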
@@ -138,7 +138,7 @@ describe("usePathCompletion hook", () => {
     expect(result.current.matchList[0].name).toBe("Documents");
   });

-  it("calls setPathInput when acceptMatch is invoked", () => {
+  it("calls setPathInput when acceptMatch is invoked", async () => {
     const setPathInput = vi.fn();

     const { result } = renderHook(() =>
@@ -151,7 +151,7 @@ describe("usePathCompletion hook", () => {
       }),
     );

-    act(() => {
+    await act(async () => {
       result.current.acceptMatch("/home/user/Documents/");
     });

@@ -308,14 +308,14 @@ describe("usePathCompletion hook", () => {
       expect(result.current.matchList.length).toBe(2);
     });

-    act(() => {
+    await act(async () => {
       result.current.acceptSelectedMatch();
     });

     expect(setPathInput).toHaveBeenCalledWith("/home/user/Documents/");
   });

-  it("acceptSelectedMatch does nothing when matchList is empty", () => {
+  it("acceptSelectedMatch does nothing when matchList is empty", async () => {
     const setPathInput = vi.fn();

     const { result } = renderHook(() =>
@@ -328,7 +328,7 @@ describe("usePathCompletion hook", () => {
       }),
     );

-    act(() => {
+    await act(async () => {
       result.current.acceptSelectedMatch();
     });

@@ -352,7 +352,7 @@ describe("usePathCompletion hook", () => {
       expect(result.current.matchList.length).toBe(1);
     });

-    act(() => {
+    await act(async () => {
       result.current.closeSuggestions();
     });

@@ -450,7 +450,7 @@ describe("usePathCompletion hook", () => {
       expect(result.current.matchList.length).toBe(2);
     });

-    act(() => {
+    await act(async () => {
       result.current.setSelectedMatch(1);
     });

@@ -19,7 +19,7 @@ function makeMessages(count: number): Message[] {
   }));
 }

-describe("useChatHistory", () => {
+describe("useChatHistory", async () => {
   beforeEach(() => {
     localStorage.clear();
   });
@@ -28,7 +28,7 @@ describe("useChatHistory", () => {
     localStorage.clear();
   });

-  it("AC1: restores messages from localStorage on mount", () => {
+  it("AC1: restores messages from localStorage on mount", async () => {
     localStorage.setItem(STORAGE_KEY, JSON.stringify(sampleMessages));

     const { result } = renderHook(() => useChatHistory(PROJECT));
@@ -36,13 +36,13 @@ describe("useChatHistory", () => {
     expect(result.current.messages).toEqual(sampleMessages);
   });

-  it("AC1: returns empty array when localStorage has no data", () => {
+  it("AC1: returns empty array when localStorage has no data", async () => {
     const { result } = renderHook(() => useChatHistory(PROJECT));

     expect(result.current.messages).toEqual([]);
   });

-  it("AC1: returns empty array when localStorage contains invalid JSON", () => {
+  it("AC1: returns empty array when localStorage contains invalid JSON", async () => {
     localStorage.setItem(STORAGE_KEY, "not-json{{{");

     const { result } = renderHook(() => useChatHistory(PROJECT));
@@ -50,7 +50,7 @@ describe("useChatHistory", () => {
     expect(result.current.messages).toEqual([]);
   });

-  it("AC1: returns empty array when localStorage contains a non-array", () => {
+  it("AC1: returns empty array when localStorage contains a non-array", async () => {
     localStorage.setItem(STORAGE_KEY, JSON.stringify({ not: "array" }));

     const { result } = renderHook(() => useChatHistory(PROJECT));
@@ -58,10 +58,10 @@ describe("useChatHistory", () => {
     expect(result.current.messages).toEqual([]);
   });

-  it("AC2: saves messages to localStorage when setMessages is called with an array", () => {
+  it("AC2: saves messages to localStorage when setMessages is called with an array", async () => {
     const { result } = renderHook(() => useChatHistory(PROJECT));

-    act(() => {
+    await act(async () => {
       result.current.setMessages(sampleMessages);
     });

@@ -69,10 +69,10 @@ describe("useChatHistory", () => {
     expect(stored).toEqual(sampleMessages);
   });

-  it("AC2: saves messages to localStorage when setMessages is called with updater function", () => {
+  it("AC2: saves messages to localStorage when setMessages is called with updater function", async () => {
     const { result } = renderHook(() => useChatHistory(PROJECT));

-    act(() => {
+    await act(async () => {
       result.current.setMessages(() => sampleMessages);
     });

@@ -80,14 +80,14 @@ describe("useChatHistory", () => {
     expect(stored).toEqual(sampleMessages);
   });

-  it("AC3: clearMessages removes messages from state and localStorage", () => {
+  it("AC3: clearMessages removes messages from state and localStorage", async () => {
     localStorage.setItem(STORAGE_KEY, JSON.stringify(sampleMessages));

     const { result } = renderHook(() => useChatHistory(PROJECT));

     expect(result.current.messages).toEqual(sampleMessages);

-    act(() => {
+    await act(async () => {
       result.current.clearMessages();
     });

@@ -95,7 +95,7 @@ describe("useChatHistory", () => {
     expect(localStorage.getItem(STORAGE_KEY)).toBeNull();
   });

-  it("AC4: handles localStorage quota errors gracefully", () => {
+  it("AC4: handles localStorage quota errors gracefully", async () => {
     const warnSpy = vi.spyOn(console, "warn").mockImplementation(() => {});
     const setItemSpy = vi
       .spyOn(Storage.prototype, "setItem")
@@ -106,7 +106,7 @@ describe("useChatHistory", () => {
     const { result } = renderHook(() => useChatHistory(PROJECT));

     // Should not throw
-    act(() => {
+    await act(async () => {
       result.current.setMessages(sampleMessages);
     });

@@ -121,7 +121,7 @@ describe("useChatHistory", () => {
     setItemSpy.mockRestore();
   });

-  it("AC5: scopes storage key to project path", () => {
+  it("AC5: scopes storage key to project path", async () => {
     const projectA = "/projects/a";
     const projectB = "/projects/b";
     const keyA = `storykit-chat-history:${projectA}`;
@@ -140,12 +140,12 @@ describe("useChatHistory", () => {
     expect(resultB.current.messages).toEqual(messagesB);
   });

-  it("AC2: removes localStorage key when messages are set to empty array", () => {
+  it("AC2: removes localStorage key when messages are set to empty array", async () => {
     localStorage.setItem(STORAGE_KEY, JSON.stringify(sampleMessages));

     const { result } = renderHook(() => useChatHistory(PROJECT));

-    act(() => {
+    await act(async () => {
       result.current.setMessages([]);
     });

@@ -154,20 +154,20 @@ describe("useChatHistory", () => {

   // --- Story 179: Chat history pruning tests ---

-  it("S179: default limit of 200 is applied when saving to localStorage", () => {
+  it("S179: default limit of 200 is applied when saving to localStorage", async () => {
     const { result } = renderHook(() => useChatHistory(PROJECT));

     expect(result.current.maxMessages).toBe(200);
   });

-  it("S179: messages are pruned from the front when exceeding the limit", () => {
+  it("S179: messages are pruned from the front when exceeding the limit", async () => {
     // Set a small limit to make testing practical
     localStorage.setItem(LIMIT_KEY, "3");

     const { result } = renderHook(() => useChatHistory(PROJECT));
     const fiveMessages = makeMessages(5);

-    act(() => {
+    await act(async () => {
       result.current.setMessages(fiveMessages);
     });

@@ -180,13 +180,13 @@ describe("useChatHistory", () => {
     expect(stored[0].content).toBe("Message 3");
   });

-  it("S179: messages under the limit are not pruned", () => {
+  it("S179: messages under the limit are not pruned", async () => {
     localStorage.setItem(LIMIT_KEY, "10");

     const { result } = renderHook(() => useChatHistory(PROJECT));
     const threeMessages = makeMessages(3);

-    act(() => {
+    await act(async () => {
       result.current.setMessages(threeMessages);
     });

@@ -197,7 +197,7 @@ describe("useChatHistory", () => {
     expect(stored).toHaveLength(3);
   });

-  it("S179: limit is configurable via localStorage key", () => {
+  it("S179: limit is configurable via localStorage key", async () => {
     localStorage.setItem(LIMIT_KEY, "5");

     const { result } = renderHook(() => useChatHistory(PROJECT));
@@ -205,10 +205,10 @@ describe("useChatHistory", () => {
     expect(result.current.maxMessages).toBe(5);
   });

-  it("S179: setMaxMessages updates the limit and persists it", () => {
+  it("S179: setMaxMessages updates the limit and persists it", async () => {
     const { result } = renderHook(() => useChatHistory(PROJECT));

-    act(() => {
+    await act(async () => {
       result.current.setMaxMessages(50);
     });

@@ -216,13 +216,13 @@ describe("useChatHistory", () => {
     expect(localStorage.getItem(LIMIT_KEY)).toBe("50");
   });

-  it("S179: a limit of 0 means unlimited (no pruning)", () => {
+  it("S179: a limit of 0 means unlimited (no pruning)", async () => {
|
||||||
localStorage.setItem(LIMIT_KEY, "0");
|
localStorage.setItem(LIMIT_KEY, "0");
|
||||||
|
|
||||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||||
const manyMessages = makeMessages(500);
|
const manyMessages = makeMessages(500);
|
||||||
|
|
||||||
act(() => {
|
await act(async () => {
|
||||||
result.current.setMessages(manyMessages);
|
result.current.setMessages(manyMessages);
|
||||||
});
|
});
|
||||||
|
|
||||||
@@ -233,11 +233,11 @@ describe("useChatHistory", () => {
|
|||||||
expect(stored).toEqual(manyMessages);
|
expect(stored).toEqual(manyMessages);
|
||||||
});
|
});
|
||||||
|
|
||||||
it("S179: changing the limit re-prunes messages on next save", () => {
|
it("S179: changing the limit re-prunes messages on next save", async () => {
|
||||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||||
const tenMessages = makeMessages(10);
|
const tenMessages = makeMessages(10);
|
||||||
|
|
||||||
act(() => {
|
await act(async () => {
|
||||||
result.current.setMessages(tenMessages);
|
result.current.setMessages(tenMessages);
|
||||||
});
|
});
|
||||||
|
|
||||||
@@ -248,7 +248,7 @@ describe("useChatHistory", () => {
|
|||||||
expect(stored).toHaveLength(10);
|
expect(stored).toHaveLength(10);
|
||||||
|
|
||||||
// Now lower the limit — the effect re-runs and prunes
|
// Now lower the limit — the effect re-runs and prunes
|
||||||
act(() => {
|
await act(async () => {
|
||||||
result.current.setMaxMessages(3);
|
result.current.setMaxMessages(3);
|
||||||
});
|
});
|
||||||
|
|
||||||
@@ -257,7 +257,7 @@ describe("useChatHistory", () => {
|
|||||||
expect(stored[0].content).toBe("Message 8");
|
expect(stored[0].content).toBe("Message 8");
|
||||||
});
|
});
|
||||||
|
|
||||||
it("S179: invalid limit in localStorage falls back to default", () => {
|
it("S179: invalid limit in localStorage falls back to default", async () => {
|
||||||
localStorage.setItem(LIMIT_KEY, "not-a-number");
|
localStorage.setItem(LIMIT_KEY, "not-a-number");
|
||||||
|
|
||||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||||
@@ -265,7 +265,7 @@ describe("useChatHistory", () => {
|
|||||||
expect(result.current.maxMessages).toBe(200);
|
expect(result.current.maxMessages).toBe(200);
|
||||||
});
|
});
|
||||||
|
|
||||||
it("S179: negative limit in localStorage falls back to default", () => {
|
it("S179: negative limit in localStorage falls back to default", async () => {
|
||||||
localStorage.setItem(LIMIT_KEY, "-5");
|
localStorage.setItem(LIMIT_KEY, "-5");
|
||||||
|
|
||||||
const { result } = renderHook(() => useChatHistory(PROJECT));
|
const { result } = renderHook(() => useChatHistory(PROJECT));
|
||||||
|
|||||||
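The pruning semantics these tests pin down (default limit 200, a stored `0` means unlimited, invalid or negative stored values fall back to the default, and the oldest messages are dropped first) can be sketched in a few lines. This is an illustrative standalone Rust sketch of that rule, not the hook's actual implementation:

```rust
/// Parse a stored limit string using the semantics the tests exercise:
/// a positive integer is used as-is, "0" means unlimited, and anything
/// missing, invalid, or negative falls back to the default of 200.
fn effective_limit(stored: Option<&str>) -> Option<usize> {
    const DEFAULT: usize = 200;
    match stored.and_then(|s| s.parse::<i64>().ok()) {
        Some(0) => None, // unlimited: no pruning at all
        Some(n) if n > 0 => Some(n as usize),
        _ => Some(DEFAULT), // missing, "not-a-number", or negative
    }
}

/// Keep only the newest `limit` messages, dropping from the front.
fn prune(messages: &mut Vec<String>, limit: Option<usize>) {
    if let Some(max) = limit {
        if messages.len() > max {
            let excess = messages.len() - max;
            messages.drain(..excess);
        }
    }
}

fn main() {
    let mut msgs: Vec<String> = (1..=5).map(|i| format!("Message {i}")).collect();
    prune(&mut msgs, effective_limit(Some("3")));
    assert_eq!(msgs, ["Message 3", "Message 4", "Message 5"]);
    assert_eq!(effective_limit(Some("not-a-number")), Some(200));
    assert_eq!(effective_limit(Some("-5")), Some(200));
    assert_eq!(effective_limit(Some("0")), None);
    println!("{}", msgs.len());
}
```

Pruning from the front is what makes the "Message 3" and "Message 8" assertions above hold: the newest entries always survive.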
@@ -0,0 +1,75 @@
+import { expect, test } from "@playwright/test";
+
+/// Regression test: gateway UI must have vertical scrolling when content
+/// overflows the viewport. Verifies the `overflow: hidden` fix on
+/// `html / body / #root` — without that fix the page is locked at y=0.
+test.describe("Gateway UI scrolling", () => {
+  test("page scrolls when content exceeds viewport height", async ({
+    page,
+  }) => {
+    // Use a small viewport to guarantee overflow even with modest content.
+    await page.setViewportSize({ width: 1280, height: 400 });
+
+    // --- mock API endpoints ---
+
+    // Identify this server as a gateway.
+    await page.route("/gateway/mode", async (route) => {
+      await route.fulfill({ json: { mode: "gateway" } });
+    });
+
+    // Return enough agents to push the page past 400 px.
+    const agents = Array.from({ length: 15 }, (_, i) => ({
+      id: `agent-${i}`,
+      label: `Build Agent ${i}`,
+      address: `10.0.0.${i}:5000`,
+      registered_at: Date.now() / 1000 - 60,
+      last_seen: Date.now() / 1000 - 10,
+    }));
+    await page.route("/gateway/agents", async (route) => {
+      await route.fulfill({ json: agents });
+    });
+
+    await page.route("/api/gateway", async (route) => {
+      await route.fulfill({ json: { active: "", projects: [] } });
+    });
+
+    await page.route("/api/gateway/pipeline", async (route) => {
+      await route.fulfill({ json: { active: "", projects: {} } });
+    });
+
+    // Non-gateway APIs called by App.tsx on startup — respond quickly so the
+    // loading gate (`isCheckingProject`) clears and the gateway panel renders.
+    await page.route("/api/project", async (route) => {
+      await route.fulfill({ json: null });
+    });
+    await page.route("/api/projects", async (route) => {
+      await route.fulfill({ json: [] });
+    });
+    await page.route("/oauth/status", async (route) => {
+      await route.fulfill({ json: { authenticated: false } });
+    });
+    await page.route("/api/home", async (route) => {
+      await route.fulfill({ json: "/home/test" });
+    });
+
+    await page.goto("/");
+
+    // Wait until the gateway panel is visible.
+    await page.waitForSelector('[data-testid="add-agent-button"]');
+
+    // The scrolling element should be taller than the visible viewport.
+    const isOverflowing = await page.evaluate(() => {
+      const el =
+        document.scrollingElement ?? document.documentElement;
+      return el.scrollHeight > el.clientHeight;
+    });
+    expect(isOverflowing).toBe(true);
+
+    // Scrolling must actually move the viewport.
+    await page.evaluate(() => window.scrollBy(0, 300));
+    const scrollY = await page.evaluate(
+      () => document.scrollingElement?.scrollTop ?? window.scrollY,
+    );
+    expect(scrollY).toBeGreaterThan(0);
+  });
+});
+1 -1
@@ -1,6 +1,6 @@
 [package]
 name = "huskies"
-version = "0.10.3"
+version = "0.10.4"
 edition = "2024"
 build = "build.rs"
@@ -0,0 +1,118 @@
+//! Project-local agent prompt layer.
+//!
+//! Reads `.huskies/AGENT.md` from the project root and appends its content to
+//! the baked-in agent prompt at spawn time. This lets projects record
+//! non-obvious facts (directory conventions, known traps, etc.) that every
+//! agent should know without modifying the shared agent configuration.
+//!
+//! Behaviour contract:
+//! - If the file is missing or empty the caller receives `None`; agents spawn
+//!   normally with no warnings or errors.
+//! - If the file exists and is non-empty, the content is returned and an
+//!   INFO-level log line is emitted with the file path and byte count.
+//! - The file is read fresh on every agent spawn — no caching.
+
+use std::path::Path;
+
+/// Attempt to load the project-local agent prompt from `.huskies/AGENT.md`.
+///
+/// Returns `Some(content)` when the file exists and is non-empty, or `None`
+/// when the file is absent or empty. Never returns an error; any I/O problem
+/// is silently treated as "no local prompt".
+pub fn read_project_local_prompt(project_root: &Path) -> Option<String> {
+    let path = project_root.join(".huskies/AGENT.md");
+    let content = std::fs::read_to_string(&path).ok()?;
+    let trimmed = content.trim();
+    if trimmed.is_empty() {
+        return None;
+    }
+    crate::slog!(
+        "[agents] project-local prompt loaded: {} ({} bytes)",
+        path.display(),
+        trimmed.len()
+    );
+    Some(trimmed.to_string())
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn returns_none_when_file_absent() {
+        let tmp = tempfile::tempdir().unwrap();
+        let result = read_project_local_prompt(tmp.path());
+        assert!(result.is_none(), "missing file must return None");
+    }
+
+    #[test]
+    fn returns_none_when_file_empty() {
+        let tmp = tempfile::tempdir().unwrap();
+        let huskies_dir = tmp.path().join(".huskies");
+        std::fs::create_dir_all(&huskies_dir).unwrap();
+        std::fs::write(huskies_dir.join("AGENT.md"), "").unwrap();
+        let result = read_project_local_prompt(tmp.path());
+        assert!(result.is_none(), "empty file must return None");
+    }
+
+    #[test]
+    fn returns_none_when_file_whitespace_only() {
+        let tmp = tempfile::tempdir().unwrap();
+        let huskies_dir = tmp.path().join(".huskies");
+        std::fs::create_dir_all(&huskies_dir).unwrap();
+        std::fs::write(huskies_dir.join("AGENT.md"), "  \n\n  ").unwrap();
+        let result = read_project_local_prompt(tmp.path());
+        assert!(result.is_none(), "whitespace-only file must return None");
+    }
+
+    #[test]
+    fn returns_content_when_file_non_empty() {
+        let tmp = tempfile::tempdir().unwrap();
+        let huskies_dir = tmp.path().join(".huskies");
+        std::fs::create_dir_all(&huskies_dir).unwrap();
+        let marker = "DISTINCTIVE_MARKER_XYZ42";
+        std::fs::write(huskies_dir.join("AGENT.md"), format!("# Hints\n{marker}\n")).unwrap();
+        let result = read_project_local_prompt(tmp.path());
+        assert!(result.is_some(), "non-empty file must return Some");
+        let content = result.unwrap();
+        assert!(
+            content.contains(marker),
+            "returned content must include the marker: {content}"
+        );
+    }
+
+    #[test]
+    fn appended_to_prompt_integration() {
+        // Simulates the start.rs usage: marker appears in the constructed
+        // system prompt when the file is present, absent when it is not.
+        let tmp_with = tempfile::tempdir().unwrap();
+        let huskies_dir = tmp_with.path().join(".huskies");
+        std::fs::create_dir_all(&huskies_dir).unwrap();
+        let marker = "INTEGRATION_MARKER_601";
+        std::fs::write(huskies_dir.join("AGENT.md"), marker).unwrap();
+
+        let base_prompt = "You are a coder agent.".to_string();
+        let local = read_project_local_prompt(tmp_with.path());
+        let effective = match local {
+            Some(ref extra) => format!("{base_prompt}\n\n{extra}"),
+            None => base_prompt.clone(),
+        };
+        assert!(
+            effective.contains(marker),
+            "marker must appear in effective prompt when file present: {effective}"
+        );
+
+        // Without the file
+        let tmp_without = tempfile::tempdir().unwrap();
+        let local2 = read_project_local_prompt(tmp_without.path());
+        assert!(local2.is_none(), "no marker when file absent");
+        let effective2 = match local2 {
+            Some(ref extra) => format!("{base_prompt}\n\n{extra}"),
+            None => base_prompt.clone(),
+        };
+        assert!(
+            !effective2.contains(marker),
+            "marker must NOT appear in effective prompt when file absent: {effective2}"
+        );
+    }
+}
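Stripped of the crate's `slog!` macro and tempfile-based tests, the read-and-trim contract above reduces to a few lines of std. A hypothetical standalone equivalent, for illustration only:

```rust
use std::path::Path;

/// Same contract as `read_project_local_prompt`, minus logging: a missing,
/// unreadable, empty, or whitespace-only file all yield `None`, so agent
/// spawning is never disturbed by the optional prompt layer.
fn read_local_prompt(project_root: &Path) -> Option<String> {
    let content = std::fs::read_to_string(project_root.join(".huskies/AGENT.md")).ok()?;
    let trimmed = content.trim();
    (!trimmed.is_empty()).then(|| trimmed.to_string())
}

fn main() {
    // A directory with no .huskies/AGENT.md yields None: agents spawn unchanged.
    assert!(read_local_prompt(Path::new("/nonexistent-project")).is_none());
    println!("ok");
}
```

The `.ok()?` on the read is what makes every I/O failure equivalent to "no local prompt", matching the behaviour contract in the module docs.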
@@ -1,6 +1,7 @@
 //! Agent subsystem — types, configuration, and orchestration for coding agents.
 pub mod gates;
 pub mod lifecycle;
+pub mod local_prompt;
 pub mod merge;
 mod pool;
 pub(crate) mod pty;
@@ -410,6 +410,17 @@ impl AgentPool {
             }
         };

+        // Append project-local prompt content (.huskies/AGENT.md) to the
+        // baked-in prompt so every agent role sees project-specific guidance
+        // without any config changes. The file is read fresh each spawn;
+        // if absent or empty, the prompt is unchanged and no warning is logged.
+        if let Some(local) =
+            crate::agents::local_prompt::read_project_local_prompt(&project_root_clone)
+        {
+            prompt.push_str("\n\n");
+            prompt.push_str(&local);
+        }
+
         // Build the effective prompt and determine resume session.
         //
         // When resuming a previous session, discard the full rendered prompt
+7 -1137
File diff suppressed because it is too large
@@ -1,10 +1,10 @@
 //! Matrix bot context — shared state for the Matrix bot (rooms, history, permissions).
 use crate::agents::AgentPool;
 use crate::chat::ChatTransport;
-use crate::chat::timer::TimerStore;
 use crate::http::context::{PermissionDecision, PermissionForward};
+use crate::service::timer::TimerStore;
 use matrix_sdk::ruma::{OwnedEventId, OwnedRoomId, OwnedUserId};
-use std::collections::{HashMap, HashSet};
+use std::collections::{BTreeMap, HashMap, HashSet};
 use std::path::PathBuf;
 use std::sync::Arc;
 use tokio::sync::Mutex as TokioMutex;
@@ -65,6 +65,10 @@ pub struct BotContext {
     /// In gateway mode: valid project names accepted by the `switch` command.
     /// Empty in standalone mode.
     pub gateway_projects: Vec<String>,
+    /// In gateway mode: mapping of project name → base URL (e.g. `"http://localhost:3001"`).
+    /// Used to proxy bot commands to the active project's `/api/bot/command` endpoint.
+    /// Empty in standalone mode.
+    pub gateway_project_urls: BTreeMap<String, String>,
 }

 impl BotContext {
@@ -82,6 +86,49 @@ impl BotContext {
            self.project_root.clone()
        }
    }

+    /// Returns `true` if the bot is running in gateway mode.
+    pub fn is_gateway(&self) -> bool {
+        self.gateway_active_project.is_some()
+    }
+
+    /// Return the base URL for the currently active project, if in gateway mode.
+    pub async fn active_project_url(&self) -> Option<String> {
+        let ap = self.gateway_active_project.as_ref()?;
+        let name = ap.read().await.clone();
+        self.gateway_project_urls.get(&name).cloned()
+    }
+
+    /// Proxy a bot command to the active project's `/api/bot/command` endpoint.
+    ///
+    /// Returns the Markdown response from the project server, or an error
+    /// message if the request failed.
+    pub async fn proxy_bot_command(&self, command: &str, args: &str) -> Option<String> {
+        let base_url = self.active_project_url().await?;
+        let url = format!("{base_url}/api/bot/command");
+        let client = reqwest::Client::new();
+        let body = serde_json::json!({
+            "command": command,
+            "args": args,
+        });
+        match client.post(&url).json(&body).send().await {
+            Ok(resp) if resp.status().is_success() => {
+                match resp.json::<serde_json::Value>().await {
+                    Ok(json) => json
+                        .get("response")
+                        .and_then(|v| v.as_str())
+                        .map(String::from),
+                    Err(e) => Some(format!("Failed to parse response from project server: {e}")),
+                }
+            }
+            Ok(resp) => Some(format!(
+                "Project server returned HTTP {}: {}",
+                resp.status(),
+                resp.text().await.unwrap_or_default()
+            )),
+            Err(e) => Some(format!("Failed to reach project server at {url}: {e}")),
+        }
+    }
 }

 // ---------------------------------------------------------------------------
@@ -130,11 +177,12 @@ mod tests {
                 "test-token".to_string(),
                 "pipeline_notification".to_string(),
             )),
-            timer_store: Arc::new(crate::chat::timer::TimerStore::load(
+            timer_store: Arc::new(crate::service::timer::TimerStore::load(
                 std::path::PathBuf::from("/tmp/timers.json"),
             )),
             gateway_active_project: None,
            gateway_projects: vec![],
+            gateway_project_urls: BTreeMap::new(),
         };
         assert_eq!(
             ctx.effective_project_root().await,
@@ -167,11 +215,15 @@ mod tests {
                 "test-token".to_string(),
                 "pipeline_notification".to_string(),
             )),
-            timer_store: Arc::new(crate::chat::timer::TimerStore::load(
+            timer_store: Arc::new(crate::service::timer::TimerStore::load(
                 std::path::PathBuf::from("/tmp/timers.json"),
             )),
             gateway_active_project: Some(Arc::clone(&active)),
             gateway_projects: vec!["huskies".into(), "robot-studio".into()],
+            gateway_project_urls: BTreeMap::from([
+                ("huskies".into(), "http://localhost:3001".into()),
+                ("robot-studio".into(), "http://localhost:3002".into()),
+            ]),
         };
         assert_eq!(
             ctx.effective_project_root().await,
@@ -204,11 +256,15 @@ mod tests {
                 "test-token".to_string(),
                 "pipeline_notification".to_string(),
             )),
-            timer_store: Arc::new(crate::chat::timer::TimerStore::load(
+            timer_store: Arc::new(crate::service::timer::TimerStore::load(
                 std::path::PathBuf::from("/tmp/timers.json"),
             )),
             gateway_active_project: Some(Arc::clone(&active)),
             gateway_projects: vec!["huskies".into(), "robot-studio".into()],
+            gateway_project_urls: BTreeMap::from([
+                ("huskies".into(), "http://localhost:3001".into()),
+                ("robot-studio".into(), "http://localhost:3002".into()),
+            ]),
         };

         assert_eq!(
@@ -250,11 +306,12 @@ mod tests {
                 "test-token".to_string(),
                 "pipeline_notification".to_string(),
             )),
-            timer_store: Arc::new(crate::chat::timer::TimerStore::load(
+            timer_store: Arc::new(crate::service::timer::TimerStore::load(
                 std::path::PathBuf::from("/tmp/timers.json"),
             )),
             gateway_active_project: None,
             gateway_projects: vec![],
+            gateway_project_urls: BTreeMap::new(),
         };
         // Clone must work (required by Matrix SDK event handler injection).
         let _cloned = ctx.clone();
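The lookup and endpoint-construction step performed by `active_project_url` and `proxy_bot_command` can be shown without any HTTP. A standalone sketch whose names mirror the diff but which is illustrative only:

```rust
use std::collections::BTreeMap;

/// Resolve the active project's base URL the way `active_project_url` does:
/// look the active project name up in the name → URL map.
fn project_url(urls: &BTreeMap<String, String>, active: &str) -> Option<String> {
    urls.get(active).cloned()
}

fn main() {
    let urls = BTreeMap::from([
        ("huskies".to_string(), "http://localhost:3001".to_string()),
        ("robot-studio".to_string(), "http://localhost:3002".to_string()),
    ]);
    assert_eq!(
        project_url(&urls, "huskies").as_deref(),
        Some("http://localhost:3001")
    );
    // Unknown or unset active project means there is nothing to proxy to.
    assert!(project_url(&urls, "unknown").is_none());

    // The proxy then POSTs {command, args} to `{base_url}/api/bot/command`
    // and reads the Markdown reply out of the JSON `response` field.
    let endpoint = format!("{}/api/bot/command", project_url(&urls, "huskies").unwrap());
    assert_eq!(endpoint, "http://localhost:3001/api/bot/command");
    println!("{endpoint}");
}
```

Using a `BTreeMap` keeps project iteration order deterministic, which matters for stable aggregate output such as `all_status`.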
@@ -179,6 +179,80 @@ pub(super) async fn on_room_message(
     // a subdirectory named after the project. Standalone mode is unaffected.
     let effective_root = ctx.effective_project_root().await;

+    // ── Gateway command proxy ───────────────────────────────────────────
+    // In gateway mode the bot has no local CRDT or project filesystem, so most
+    // commands must be forwarded to the active project's `/api/bot/command`
+    // endpoint. Only a small set of gateway-local commands are handled here.
+    if ctx.is_gateway() {
+        // Commands that are meaningful on the gateway itself (no project state needed).
+        const GATEWAY_LOCAL_COMMANDS: &[&str] =
+            &["help", "ambient", "reset", "switch", "all_status"];
+
+        let stripped = crate::chat::util::strip_bot_mention(
+            &user_message,
+            &ctx.bot_name,
+            ctx.bot_user_id.as_str(),
+        )
+        .trim()
+        .trim_start_matches(|c: char| !c.is_alphanumeric())
+        .to_string();
+
+        let (cmd, args) = match stripped.split_once(char::is_whitespace) {
+            Some((c, a)) => (c.to_ascii_lowercase(), a.trim().to_string()),
+            None => (stripped.to_ascii_lowercase(), String::new()),
+        };
+
+        // Only proxy if the first word is a known bot command (sync or async).
+        let is_known_command = !cmd.is_empty()
+            && !GATEWAY_LOCAL_COMMANDS.contains(&cmd.as_str())
+            && (crate::chat::commands::commands()
+                .iter()
+                .any(|c| c.name == cmd)
+                || [
+                    "assign", "start", "delete", "rebuild", "rmtree", "htop", "timer",
+                ]
+                .contains(&cmd.as_str()));
+
+        if is_known_command {
+            // Proxy to the active project server.
+            let response = match ctx.proxy_bot_command(&cmd, &args).await {
+                Some(r) => r,
+                None => "No active project selected or project URL not configured.".to_string(),
+            };
+            let html = markdown_to_html(&response);
+            if let Ok(msg_id) = ctx
+                .transport
+                .send_message(&room_id_str, &response, &html)
+                .await
+                && let Ok(event_id) = msg_id.parse()
+            {
+                ctx.bot_sent_event_ids.lock().await.insert(event_id);
+            }
+            return;
+        }
+
+        // `all_status` — aggregate pipeline status across all projects (gateway-only).
+        if cmd == "all_status" {
+            let project_urls = ctx.gateway_project_urls.clone();
+            let client = reqwest::Client::new();
+            let statuses =
+                crate::gateway::fetch_all_project_pipeline_statuses(&project_urls, &client).await;
+            let response = crate::gateway::format_aggregate_status_compact(&statuses);
+            let html = markdown_to_html(&response);
+            if let Ok(msg_id) = ctx
+                .transport
+                .send_message(&room_id_str, &response, &html)
+                .await
+                && let Ok(event_id) = msg_id.parse()
+            {
+                ctx.bot_sent_event_ids.lock().await.insert(event_id);
+            }
+            return;
+        }
+
+        // Gateway-local commands and freeform text fall through to normal handling below.
+    }
+
     // Check for bot-level commands (help, status, ambient, …) before invoking
     // the LLM. All commands are registered in commands.rs — no special-casing
     // needed here.
@@ -498,13 +572,13 @@ pub(super) async fn on_room_message(

     // Check for the timer command, which requires async file I/O and cannot
     // be handled by the sync command registry.
-    if let Some(timer_cmd) = crate::chat::timer::extract_timer_command(
+    if let Some(timer_cmd) = crate::service::timer::extract_timer_command(
         &user_message,
         &ctx.bot_name,
         ctx.bot_user_id.as_str(),
     ) {
         slog!("[matrix-bot] Handling timer command from {sender}: {timer_cmd:?}");
-        let response = crate::chat::timer::handle_timer_command(
+        let response = crate::service::timer::handle_timer_command(
             timer_cmd,
             &ctx.timer_store,
             &ctx.project_root,
@@ -592,12 +666,18 @@ pub(super) async fn handle_message(
     let sent_any_chunk = Arc::new(AtomicBool::new(false));
     let sent_any_chunk_for_callback = Arc::clone(&sent_any_chunk);

-    // In gateway mode, run Claude Code in the active project's directory.
-    let project_root_str = ctx
-        .effective_project_root()
-        .await
-        .to_string_lossy()
-        .to_string();
+    // In gateway mode, run Claude Code in the gateway config directory so it
+    // picks up the `.mcp.json` that points to the gateway's MCP proxy endpoint.
+    // The gateway proxies tool calls to the active project automatically.
+    // In standalone mode, use the project root directly.
+    let project_root_str = if ctx.is_gateway() {
+        ctx.project_root.to_string_lossy().to_string()
+    } else {
+        ctx.effective_project_root()
+            .await
+            .to_string_lossy()
+            .to_string()
+    };
     let chat_fut = provider.chat_stream(
         &prompt,
         &project_root_str,
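The command-splitting step the gateway proxy uses can be exercised in isolation. A standalone sketch using the same `split_once` logic and `GATEWAY_LOCAL_COMMANDS` list as the hunk above:

```rust
/// Split a mention-stripped message into a lowercase command word and its
/// arguments, mirroring the gateway proxy's parsing step.
fn split_command(stripped: &str) -> (String, String) {
    match stripped.split_once(char::is_whitespace) {
        Some((c, a)) => (c.to_ascii_lowercase(), a.trim().to_string()),
        None => (stripped.to_ascii_lowercase(), String::new()),
    }
}

/// Commands handled on the gateway itself; everything else that is a known
/// bot command gets proxied to the active project server.
const GATEWAY_LOCAL_COMMANDS: &[&str] = &["help", "ambient", "reset", "switch", "all_status"];

fn main() {
    let (cmd, args) = split_command("Assign 42 to coder");
    assert_eq!(cmd, "assign");
    assert_eq!(args, "42 to coder");

    // Gateway-local commands never leave the gateway.
    assert!(GATEWAY_LOCAL_COMMANDS.contains(&split_command("switch huskies").0.as_str()));
    // `assign` is not gateway-local, so it is a candidate for proxying.
    assert!(!GATEWAY_LOCAL_COMMANDS.contains(&cmd.as_str()));
    println!("{cmd} | {args}");
}
```

Lowercasing only the command word keeps arguments (project names, story IDs) intact while making command matching case-insensitive.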
@@ -30,6 +30,7 @@ pub async fn run_bot(
|
|||||||
shutdown_rx: watch::Receiver<Option<crate::rebuild::ShutdownReason>>,
|
shutdown_rx: watch::Receiver<Option<crate::rebuild::ShutdownReason>>,
|
||||||
gateway_active_project: Option<Arc<RwLock<String>>>,
|
gateway_active_project: Option<Arc<RwLock<String>>>,
|
||||||
gateway_projects: Vec<String>,
|
gateway_projects: Vec<String>,
|
||||||
|
gateway_project_urls: std::collections::BTreeMap<String, String>,
|
||||||
) -> Result<(), String> {
|
) -> Result<(), String> {
|
||||||
let store_path = project_root.join(".huskies").join("matrix_store");
|
let store_path = project_root.join(".huskies").join("matrix_store");
|
||||||
let client = Client::builder()
|
let client = Client::builder()
|
||||||
@@ -167,6 +168,11 @@ pub async fn run_bot(
|
|||||||
let notif_room_ids = target_room_ids.clone();
|
let notif_room_ids = target_room_ids.clone();
|
||||||
let notif_project_root = project_root.clone();
|
let notif_project_root = project_root.clone();
|
||||||
let announce_room_ids = target_room_ids.clone();
|
let announce_room_ids = target_room_ids.clone();
|
||||||
|
// Clone values needed by the gateway notification poller (only used in gateway mode).
|
||||||
|
let poller_room_ids: Vec<String> = target_room_ids.iter().map(|r| r.to_string()).collect();
|
||||||
|
let poller_project_urls = gateway_project_urls.clone();
|
||||||
|
let poller_poll_interval = config.aggregated_notifications_poll_interval_secs;
|
||||||
|
let poller_enabled = config.aggregated_notifications_enabled;
|
||||||
|
|
||||||
let persisted = load_history(&project_root);
|
let persisted = load_history(&project_root);
|
||||||
slog!(
|
slog!(
|
||||||
@@ -222,11 +228,14 @@ pub async fn run_bot(
         .unwrap_or_else(|| "Assistant".to_string());
     let announce_bot_name = bot_name.clone();
 
-    let timer_store = Arc::new(crate::chat::timer::TimerStore::load(
+    let timer_store = Arc::new(crate::service::timer::TimerStore::load(
         project_root.join(".huskies").join("timers.json"),
     ));
     // Auto-schedule timers when an agent hits a hard rate limit.
-    crate::chat::timer::spawn_rate_limit_auto_scheduler(Arc::clone(&timer_store), watcher_rx_auto);
+    crate::service::timer::spawn_rate_limit_auto_scheduler(
+        Arc::clone(&timer_store),
+        watcher_rx_auto,
+    );
 
     let ctx = BotContext {
         bot_user_id,
@@ -247,6 +256,7 @@ pub async fn run_bot(
         timer_store,
         gateway_active_project,
         gateway_projects,
+        gateway_project_urls,
     };
 
     slog!(
@@ -262,13 +272,27 @@ pub async fn run_bot(
     // Spawn the stage-transition notification listener before entering the
     // sync loop so it starts receiving watcher events immediately.
     let notif_room_id_strings: Vec<String> = notif_room_ids.iter().map(|r| r.to_string()).collect();
-    super::super::notifications::spawn_notification_listener(
+    crate::service::notifications::spawn_notification_listener(
         Arc::clone(&transport),
         move || notif_room_id_strings.clone(),
         watcher_rx,
         notif_project_root,
     );
 
+    // In gateway mode, spawn the cross-project notification poller.
+    // It polls every registered project's `/api/events` endpoint and forwards
+    // new events to the configured gateway rooms with a `[project-name]` prefix.
+    // The poller is controlled by the gateway-level `aggregated_notifications_enabled`
+    // flag in bot.toml — set it to `false` to disable without touching per-project configs.
+    if !poller_project_urls.is_empty() && poller_enabled {
+        crate::gateway::spawn_gateway_notification_poller(
+            Arc::clone(&transport),
+            poller_room_ids,
+            poller_project_urls,
+            poller_poll_interval,
+        );
+    }
+
     // Spawn a shutdown watcher that sends a best-effort goodbye message to all
     // configured rooms when the server is about to stop (SIGINT/SIGTERM or rebuild).
     {
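The poller comment above says each forwarded event carries a `[project-name]` prefix so gateway rooms can tell projects apart. That formatting step, in isolation, is a one-liner (a minimal sketch; `prefix_event` is an illustrative helper, not a function from this diff — the real forwarding lives in `crate::gateway::spawn_gateway_notification_poller`):

```rust
// Illustrative only: shows the `[project-name]` prefix format described
// in the poller comment, not the actual gateway code.
fn prefix_event(project: &str, event: &str) -> String {
    format!("[{project}] {event}")
}

fn main() {
    let msg = prefix_event("huskies", "story moved to qa");
    assert_eq!(msg, "[huskies] story moved to qa");
    println!("{msg}");
}
```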
@@ -10,6 +10,14 @@ fn default_permission_timeout_secs() -> u64 {
     120
 }
 
+fn default_aggregated_notifications_poll_interval_secs() -> u64 {
+    5
+}
+
+fn default_aggregated_notifications_enabled() -> bool {
+    true
+}
+
 /// Configuration for the Matrix bot, read from `.huskies/bot.toml`.
 #[derive(Deserialize, Clone, Debug)]
 pub struct BotConfig {
@@ -146,6 +154,26 @@ pub struct BotConfig {
     /// When empty or absent, all users in configured channels are allowed.
     #[serde(default)]
     pub discord_allowed_users: Vec<String>,
+
+    /// How often (in seconds) the gateway polls each project server's
+    /// `/api/events` endpoint to aggregate cross-project notifications.
+    ///
+    /// Only used when the gateway's bot is enabled. Defaults to 5 seconds.
+    #[serde(default = "default_aggregated_notifications_poll_interval_secs")]
+    pub aggregated_notifications_poll_interval_secs: u64,
+
+    /// Whether the gateway-level aggregated cross-project notification stream
+    /// is enabled. When `false`, the gateway will not poll per-project
+    /// servers for events even if the bot is otherwise enabled.
+    ///
+    /// Set this in the **gateway's** `bot.toml` (not in per-project configs).
+    /// Adding a new project to `projects.toml` never requires touching
+    /// per-project bot configs — the aggregated stream picks it up
+    /// automatically once this flag is `true` (the default).
+    ///
+    /// Defaults to `true`.
+    #[serde(default = "default_aggregated_notifications_enabled")]
+    pub aggregated_notifications_enabled: bool,
 }
 
 fn default_transport() -> String {
@@ -658,6 +686,47 @@ require_verified_devices = true
         );
     }
 
+    #[test]
+    fn aggregated_notifications_enabled_defaults_to_true() {
+        let tmp = tempfile::tempdir().unwrap();
+        let sk = tmp.path().join(".huskies");
+        fs::create_dir_all(&sk).unwrap();
+        fs::write(
+            sk.join("bot.toml"),
+            r#"
+homeserver = "https://matrix.example.com"
+username = "@bot:example.com"
+password = "secret"
+room_ids = ["!abc:example.com"]
+enabled = true
+"#,
+        )
+        .unwrap();
+        let config = BotConfig::load(tmp.path()).unwrap();
+        assert!(config.aggregated_notifications_enabled);
+    }
+
+    #[test]
+    fn aggregated_notifications_enabled_can_be_set_to_false() {
+        let tmp = tempfile::tempdir().unwrap();
+        let sk = tmp.path().join(".huskies");
+        fs::create_dir_all(&sk).unwrap();
+        fs::write(
+            sk.join("bot.toml"),
+            r#"
+homeserver = "https://matrix.example.com"
+username = "@bot:example.com"
+password = "secret"
+room_ids = ["!abc:example.com"]
+enabled = true
+aggregated_notifications_enabled = false
+"#,
+        )
+        .unwrap();
+        let config = BotConfig::load(tmp.path()).unwrap();
+        assert!(!config.aggregated_notifications_enabled);
+    }
+
     #[test]
     fn load_reads_ambient_rooms() {
         let tmp = tempfile::tempdir().unwrap();
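Taken together with the defaults above (5 seconds, enabled), a gateway-level `.huskies/bot.toml` using the new fields might look like this. A sketch: the homeserver, username, password, and room values are placeholders patterned on the test fixtures; only the `aggregated_notifications_*` keys are new in this change.

```toml
homeserver = "https://matrix.example.com"
username = "@bot:example.com"
password = "secret"
room_ids = ["!abc:example.com"]
enabled = true

# Both keys are optional; these are the defaults.
aggregated_notifications_poll_interval_secs = 5
aggregated_notifications_enabled = true
```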
@@ -21,7 +21,6 @@ pub mod commands;
 pub(crate) mod config;
 pub mod delete;
 pub mod htop;
-pub mod notifications;
 pub mod rebuild;
 pub mod reset;
 pub mod rmtree;
@@ -62,6 +61,7 @@ use tokio::sync::{Mutex as TokioMutex, RwLock, broadcast, mpsc, watch};
 /// Returns an [`tokio::task::AbortHandle`] if the bot was actually spawned (Matrix/Discord
 /// transports), or `None` if the config is absent, disabled, or uses a webhook-based
 /// transport (Slack/WhatsApp) that does not require a persistent background task.
+#[allow(clippy::too_many_arguments)]
 pub fn spawn_bot(
     project_root: &Path,
     watcher_tx: broadcast::Sender<WatcherEvent>,
@@ -70,6 +70,7 @@ pub fn spawn_bot(
     shutdown_rx: watch::Receiver<Option<ShutdownReason>>,
     gateway_active_project: Option<Arc<RwLock<String>>>,
     gateway_projects: Vec<String>,
+    gateway_project_urls: std::collections::BTreeMap<String, String>,
 ) -> Option<tokio::task::AbortHandle> {
     let config = match BotConfig::load(project_root) {
         Some(c) => c,
@@ -108,6 +109,7 @@ pub fn spawn_bot(
             shutdown_rx,
             gateway_active_project,
             gateway_projects,
+            gateway_project_urls,
         )
         .await
         {
+837 −2092
File diff suppressed because it is too large

+162 −159
@@ -1,11 +1,14 @@
-//! HTTP agent endpoints — REST API for listing, starting, stopping, and inspecting agents.
-use crate::config::ProjectConfig;
+//! HTTP agent endpoints — thin adapters over `service::agents`.
+//!
+//! Each handler: extracts payload → calls `service::agents::X` → shapes
+//! response DTO → returns HTTP result. No filesystem access, no inline
+//! validation, no process invocations.
 use crate::http::context::{AppContext, OpenApiResult, bad_request, not_found};
+use crate::service::agents::{self as svc, AgentConfigEntry, WorkItemContent};
 use crate::workflow::{StoryTestResults, TestCaseResult, TestStatus};
-use crate::worktree;
+use poem::http::StatusCode;
 use poem_openapi::{Object, OpenApi, Tags, param::Path, payload::Json};
 use serde::Serialize;
-use std::path;
 use std::sync::Arc;
 
 #[derive(Tags)]
@@ -45,6 +48,20 @@ struct AgentConfigInfoResponse {
     max_budget_usd: Option<f64>,
 }
+
+impl From<AgentConfigEntry> for AgentConfigInfoResponse {
+    fn from(e: AgentConfigEntry) -> Self {
+        Self {
+            name: e.name,
+            role: e.role,
+            stage: e.stage,
+            model: e.model,
+            allowed_tools: e.allowed_tools,
+            max_turns: e.max_turns,
+            max_budget_usd: e.max_budget_usd,
+        }
+    }
+}
 
 #[derive(Object)]
 struct CreateWorktreePayload {
     story_id: String,
@@ -73,6 +90,17 @@ struct WorkItemContentResponse {
     agent: Option<String>,
 }
+
+impl From<WorkItemContent> for WorkItemContentResponse {
+    fn from(w: WorkItemContent) -> Self {
+        Self {
+            content: w.content,
+            stage: w.stage,
+            name: w.name,
+            agent: w.agent,
+        }
+    }
+}
 
 /// A single test case result for the OpenAPI response.
 #[derive(Object, Serialize)]
 struct TestCaseResultResponse {
@@ -153,15 +181,23 @@ struct AllTokenUsageResponse {
     records: Vec<TokenUsageRecordResponse>,
 }
 
-/// Returns true if the story file exists in `work/5_done/` or `work/6_archived/`.
-///
-/// Used to exclude agents for already-archived stories from the `list_agents`
-/// response so the agents panel is not cluttered with old completed items on
-/// frontend startup.
-pub fn story_is_archived(project_root: &path::Path, story_id: &str) -> bool {
-    let work = project_root.join(".huskies").join("work");
-    let filename = format!("{story_id}.md");
-    work.join("5_done").join(&filename).exists() || work.join("6_archived").join(&filename).exists()
+/// Map a `service::agents::Error` to a Poem HTTP error with the correct status.
+fn map_svc_error(err: svc::Error) -> poem::Error {
+    match err {
+        svc::Error::AgentNotFound(_) => {
+            poem::Error::from_string(err.to_string(), StatusCode::NOT_FOUND)
+        }
+        svc::Error::WorkItemNotFound(_) => {
+            poem::Error::from_string(err.to_string(), StatusCode::NOT_FOUND)
+        }
+        svc::Error::Worktree(_) => {
+            poem::Error::from_string(err.to_string(), StatusCode::BAD_REQUEST)
+        }
+        svc::Error::Config(_) => poem::Error::from_string(err.to_string(), StatusCode::BAD_REQUEST),
+        svc::Error::Io(_) => {
+            poem::Error::from_string(err.to_string(), StatusCode::INTERNAL_SERVER_ERROR)
+        }
+    }
 }
 
 pub struct AgentsApi {
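The error mapping above reduces to a variant-to-status table: not-found variants map to 404, validation and config problems to 400, I/O failures to 500. A toy, self-contained sketch of that table (the enum is an illustrative stand-in for `svc::Error`, and the real function returns a `poem::Error` rather than a bare status code):

```rust
// Toy stand-in for `svc::Error`; variant names mirror the diff above.
#[derive(Debug)]
enum SvcError {
    AgentNotFound(String),
    WorkItemNotFound(String),
    Worktree(String),
    Config(String),
    Io(String),
}

// Same status choices as `map_svc_error`, expressed as bare HTTP codes.
fn status_for(err: &SvcError) -> u16 {
    match err {
        SvcError::AgentNotFound(_) | SvcError::WorkItemNotFound(_) => 404,
        SvcError::Worktree(_) | SvcError::Config(_) => 400,
        SvcError::Io(_) => 500,
    }
}

fn main() {
    assert_eq!(status_for(&SvcError::AgentNotFound("a1".into())), 404);
    assert_eq!(status_for(&SvcError::Config("bad stage".into())), 400);
    assert_eq!(status_for(&SvcError::Io("disk full".into())), 500);
    println!("ok");
}
```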
@@ -183,10 +219,8 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let info = self
-            .ctx
-            .agents
-            .start_agent(
+        let info = svc::start_agent(
+            &self.ctx.agents,
             &project_root,
             &payload.0.story_id,
             payload.0.agent_name.as_deref(),
@@ -194,7 +228,7 @@ impl AgentsApi {
             None,
         )
         .await
-        .map_err(bad_request)?;
+        .map_err(map_svc_error)?;
 
         Ok(Json(AgentInfoResponse {
             story_id: info.story_id,
@@ -214,11 +248,14 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        self.ctx
-            .agents
-            .stop_agent(&project_root, &payload.0.story_id, &payload.0.agent_name)
+        svc::stop_agent(
+            &self.ctx.agents,
+            &project_root,
+            &payload.0.story_id,
+            &payload.0.agent_name,
+        )
         .await
-            .map_err(bad_request)?;
+        .map_err(map_svc_error)?;
 
         Ok(Json(true))
     }
@@ -231,17 +268,12 @@ impl AgentsApi {
     #[oai(path = "/agents", method = "get")]
     async fn list_agents(&self) -> OpenApiResult<Json<Vec<AgentInfoResponse>>> {
         let project_root = self.ctx.agents.get_project_root(&self.ctx.state).ok();
-        let agents = self.ctx.agents.list_agents().map_err(bad_request)?;
+        let agents =
+            svc::list_agents(&self.ctx.agents, project_root.as_deref()).map_err(map_svc_error)?;
 
         Ok(Json(
             agents
                 .into_iter()
-                .filter(|info| {
-                    project_root
-                        .as_deref()
-                        .map(|root| !story_is_archived(root, &info.story_id))
-                        .unwrap_or(true)
-                })
                 .map(|info| AgentInfoResponse {
                     story_id: info.story_id,
                     agent_name: info.agent_name,
@@ -262,21 +294,11 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
+        let entries = svc::get_agent_config(&project_root).map_err(map_svc_error)?;
 
         Ok(Json(
-            config
-                .agent
-                .iter()
-                .map(|a| AgentConfigInfoResponse {
-                    name: a.name.clone(),
-                    role: a.role.clone(),
-                    stage: a.stage.clone(),
-                    model: a.model.clone(),
-                    allowed_tools: a.allowed_tools.clone(),
-                    max_turns: a.max_turns,
-                    max_budget_usd: a.max_budget_usd,
-                })
+            entries
+                .into_iter()
+                .map(AgentConfigInfoResponse::from)
                 .collect(),
         ))
     }
@@ -290,21 +312,11 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
+        let entries = svc::reload_config(&project_root).map_err(map_svc_error)?;
 
         Ok(Json(
-            config
-                .agent
-                .iter()
-                .map(|a| AgentConfigInfoResponse {
-                    name: a.name.clone(),
-                    role: a.role.clone(),
-                    stage: a.stage.clone(),
-                    model: a.model.clone(),
-                    allowed_tools: a.allowed_tools.clone(),
-                    max_turns: a.max_turns,
-                    max_budget_usd: a.max_budget_usd,
-                })
+            entries
+                .into_iter()
+                .map(AgentConfigInfoResponse::from)
                 .collect(),
         ))
     }
@@ -321,12 +333,9 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let info = self
-            .ctx
-            .agents
-            .create_worktree(&project_root, &payload.0.story_id)
+        let info = svc::create_worktree(&self.ctx.agents, &project_root, &payload.0.story_id)
             .await
-            .map_err(bad_request)?;
+            .map_err(map_svc_error)?;
 
         Ok(Json(WorktreeInfoResponse {
             story_id: payload.0.story_id,
@@ -345,7 +354,7 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let entries = worktree::list_worktrees(&project_root).map_err(bad_request)?;
+        let entries = svc::list_worktrees(&project_root).map_err(map_svc_error)?;
 
         Ok(Json(
             entries
@@ -373,36 +382,12 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let stages = [
-            ("1_backlog", "backlog"),
-            ("2_current", "current"),
-            ("3_qa", "qa"),
-            ("4_merge", "merge"),
-            ("5_done", "done"),
-            ("6_archived", "archived"),
-        ];
-
-        let work_dir = project_root.join(".huskies").join("work");
-        let filename = format!("{}.md", story_id.0);
-
-        for (stage_dir, stage_name) in &stages {
-            let file_path = work_dir.join(stage_dir).join(&filename);
-            if file_path.exists() {
-                let content = std::fs::read_to_string(&file_path)
-                    .map_err(|e| bad_request(format!("Failed to read work item: {e}")))?;
-                let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
-                let name = metadata.as_ref().and_then(|m| m.name.clone());
-                let agent = metadata.and_then(|m| m.agent);
-                return Ok(Json(WorkItemContentResponse {
-                    content,
-                    stage: stage_name.to_string(),
-                    name,
-                    agent,
-                }));
-            }
-        }
-
-        Err(not_found(format!("Work item not found: {}", story_id.0)))
+        let item = svc::get_work_item_content(&project_root, &story_id.0).map_err(|e| match e {
+            svc::Error::WorkItemNotFound(_) => not_found(e.to_string()),
+            other => map_svc_error(other),
+        })?;
+
+        Ok(Json(WorkItemContentResponse::from(item)))
     }
 
     /// Get test results for a work item by its story_id.
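The handler above used to resolve stages by scanning the `stages` array it now deletes; that array remains a handy reference for the directory-to-stage-name mapping. The same lookup as a standalone sketch (`stage_name` is an illustrative helper, not a function from this diff; the pairs are taken verbatim from the removed array):

```rust
// Stage-directory to public stage-name mapping, copied from the `stages`
// array removed in the hunk above.
fn stage_name(dir: &str) -> Option<&'static str> {
    match dir {
        "1_backlog" => Some("backlog"),
        "2_current" => Some("current"),
        "3_qa" => Some("qa"),
        "4_merge" => Some("merge"),
        "5_done" => Some("done"),
        "6_archived" => Some("archived"),
        _ => None,
    }
}

fn main() {
    assert_eq!(stage_name("3_qa"), Some("qa"));
    assert_eq!(stage_name("7_unknown"), None);
    println!("ok");
}
```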
@@ -414,30 +399,37 @@ impl AgentsApi {
         &self,
         story_id: Path<String>,
     ) -> OpenApiResult<Json<Option<TestResultsResponse>>> {
-        // Try in-memory workflow state first.
-        let workflow = self
-            .ctx
-            .workflow
-            .lock()
-            .map_err(|e| bad_request(format!("Lock error: {e}")))?;
-        if let Some(results) = workflow.results.get(&story_id.0) {
-            return Ok(Json(Some(TestResultsResponse::from_story_results(results))));
+        // Fast path: return from in-memory state without requiring project_root.
+        let in_memory = {
+            let workflow = self
+                .ctx
+                .workflow
+                .lock()
+                .map_err(|e| bad_request(format!("Lock error: {e}")))?;
+            workflow.results.get(&story_id.0).cloned()
+        };
+        if let Some(results) = in_memory {
+            return Ok(Json(Some(TestResultsResponse::from_story_results(
+                &results,
+            ))));
         }
-        drop(workflow);
 
-        // Fall back to file-persisted results.
+        // Slow path: fall back to results persisted in the story file.
         let project_root = self
             .ctx
             .agents
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let file_results =
-            crate::http::workflow::read_test_results_from_story_file(&project_root, &story_id.0);
+        let workflow = self
+            .ctx
+            .workflow
+            .lock()
+            .map_err(|e| bad_request(format!("Lock error: {e}")))?;
+
+        let results = svc::get_test_results(&project_root, &story_id.0, &workflow);
         Ok(Json(
-            file_results.map(|r| TestResultsResponse::from_story_results(&r)),
+            results.map(|r| TestResultsResponse::from_story_results(&r)),
         ))
     }
 
@@ -458,26 +450,8 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let log_path = crate::agent_log::find_latest_log(&project_root, &story_id.0, &agent_name.0);
-
-        let Some(path) = log_path else {
-            return Ok(Json(AgentOutputResponse {
-                output: String::new(),
-            }));
-        };
-
-        let entries = crate::agent_log::read_log(&path).map_err(bad_request)?;
-
-        let output: String = entries
-            .iter()
-            .filter(|e| e.event.get("type").and_then(|t| t.as_str()) == Some("output"))
-            .filter_map(|e| {
-                e.event
-                    .get("text")
-                    .and_then(|t| t.as_str())
-                    .map(str::to_owned)
-            })
-            .collect();
-
+        let output = svc::get_agent_output(&project_root, &story_id.0, &agent_name.0)
+            .map_err(map_svc_error)?;
         Ok(Json(AgentOutputResponse { output }))
     }
@@ -491,10 +465,9 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
-        worktree::remove_worktree_by_story_id(&project_root, &story_id.0, &config)
+        svc::remove_worktree(&project_root, &story_id.0)
             .await
-            .map_err(bad_request)?;
+            .map_err(map_svc_error)?;
 
         Ok(Json(true))
     }
@@ -514,39 +487,25 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let all_records = crate::agents::token_usage::read_all(&project_root)
-            .map_err(|e| bad_request(format!("Failed to read token usage: {e}")))?;
+        let summary =
+            svc::get_work_item_token_cost(&project_root, &story_id.0).map_err(map_svc_error)?;
 
-        let mut agent_map: std::collections::HashMap<String, AgentCostEntry> =
-            std::collections::HashMap::new();
-        let mut total_cost_usd = 0.0_f64;
-        for record in all_records.into_iter().filter(|r| r.story_id == story_id.0) {
-            total_cost_usd += record.usage.total_cost_usd;
-            let entry = agent_map
-                .entry(record.agent_name.clone())
-                .or_insert_with(|| AgentCostEntry {
-                    agent_name: record.agent_name.clone(),
-                    model: record.model.clone(),
-                    input_tokens: 0,
-                    output_tokens: 0,
-                    cache_creation_input_tokens: 0,
-                    cache_read_input_tokens: 0,
-                    total_cost_usd: 0.0,
-                });
-            entry.input_tokens += record.usage.input_tokens;
-            entry.output_tokens += record.usage.output_tokens;
-            entry.cache_creation_input_tokens += record.usage.cache_creation_input_tokens;
-            entry.cache_read_input_tokens += record.usage.cache_read_input_tokens;
-            entry.total_cost_usd += record.usage.total_cost_usd;
-        }
-
-        let mut agents: Vec<AgentCostEntry> = agent_map.into_values().collect();
-        agents.sort_by(|a, b| a.agent_name.cmp(&b.agent_name));
+        let agents = summary
+            .agents
+            .into_iter()
+            .map(|a| AgentCostEntry {
+                agent_name: a.agent_name,
+                model: a.model,
+                input_tokens: a.input_tokens,
+                output_tokens: a.output_tokens,
+                cache_creation_input_tokens: a.cache_creation_input_tokens,
+                cache_read_input_tokens: a.cache_read_input_tokens,
+                total_cost_usd: a.total_cost_usd,
+            })
+            .collect();
 
         Ok(Json(TokenCostResponse {
-            total_cost_usd,
+            total_cost_usd: summary.total_cost_usd,
             agents,
         }))
     }
@@ -562,8 +521,7 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let records = crate::agents::token_usage::read_all(&project_root)
-            .map_err(|e| bad_request(format!("Failed to read token usage: {e}")))?;
+        let records = svc::get_all_token_usage(&project_root).map_err(map_svc_error)?;
 
         let response_records: Vec<TokenUsageRecordResponse> = records
             .into_iter()
@@ -590,6 +548,7 @@ impl AgentsApi {
 mod tests {
     use super::*;
     use crate::agents::AgentStatus;
+    use std::path;
     use tempfile::TempDir;
 
     fn make_work_dirs(tmp: &TempDir) -> path::PathBuf {
@@ -604,7 +563,7 @@ mod tests {
     fn story_is_archived_false_when_file_absent() {
         let tmp = TempDir::new().unwrap();
         let root = make_work_dirs(&tmp);
-        assert!(!story_is_archived(&root, "79_story_foo"));
+        assert!(!svc::is_archived(&root, "79_story_foo"));
     }
 
     #[test]
@@ -616,7 +575,7 @@ mod tests {
             "---\nname: test\n---\n",
         )
         .unwrap();
-        assert!(story_is_archived(&root, "79_story_foo"));
+        assert!(svc::is_archived(&root, "79_story_foo"));
     }
 
     #[test]
@@ -628,7 +587,7 @@ mod tests {
             "---\nname: test\n---\n",
         )
         .unwrap();
-        assert!(story_is_archived(&root, "79_story_foo"));
+        assert!(svc::is_archived(&root, "79_story_foo"));
     }
 
     #[tokio::test]
@@ -953,6 +912,50 @@ allowed_tools = ["Read", "Bash"]
         assert!(result.is_err());
     }
+
+    #[tokio::test]
+    async fn get_work_item_content_falls_back_to_crdt_when_no_file() {
+        let tmp = TempDir::new().unwrap();
+        let root = tmp.path().to_path_buf();
+        // Seed content + CRDT with no .md file on disk.
+        crate::db::write_item_with_content(
+            "44_story_crdt_only",
+            "1_backlog",
+            "---\nname: \"CRDT Only\"\n---\n\nCRDT content.",
+        );
+        let ctx = AppContext::new_test(root);
+        let api = AgentsApi { ctx: Arc::new(ctx) };
+        let result = api
+            .get_work_item_content(Path("44_story_crdt_only".to_string()))
+            .await
+            .unwrap()
+            .0;
+        assert!(result.content.contains("CRDT content."));
+        assert_eq!(result.stage, "backlog");
+        assert_eq!(result.name, Some("CRDT Only".to_string()));
+    }
+
+    #[tokio::test]
+    async fn get_work_item_content_crdt_fallback_with_current_stage() {
+        let tmp = TempDir::new().unwrap();
+        let root = tmp.path().to_path_buf();
+        // Seed a CRDT-only story in the coding/current stage.
+        crate::db::write_item_with_content(
+            "45_story_crdt_current",
+            "2_current",
+            "---\nname: \"Current CRDT\"\n---\n\nIn progress.",
+        );
+        let ctx = AppContext::new_test(root);
+        let api = AgentsApi { ctx: Arc::new(ctx) };
+        let result = api
+            .get_work_item_content(Path("45_story_crdt_current".to_string()))
+            .await
+            .unwrap()
+            .0;
+        assert!(result.content.contains("In progress."));
+        assert_eq!(result.stage, "current");
+        assert_eq!(result.name, Some("Current CRDT".to_string()));
+    }
 
     #[tokio::test]
     async fn get_work_item_content_returns_error_when_no_project_root() {
         let tmp = TempDir::new().unwrap();
@@ -1,50 +1,10 @@
-//! Anthropic API proxy — forwards model listing and key-validation requests to Anthropic.
+//! Anthropic API proxy — thin adapter over `service::anthropic`.
 use crate::http::context::{AppContext, OpenApiResult, bad_request};
-use crate::llm::chat;
-use crate::store::StoreOps;
+use crate::service::anthropic::{self as svc, ModelSummary};
 use poem_openapi::{Object, OpenApi, Tags, payload::Json};
-use reqwest::header::{HeaderMap, HeaderValue};
-use serde::{Deserialize, Serialize};
+use serde::Deserialize;
 use std::sync::Arc;
 
-const ANTHROPIC_MODELS_URL: &str = "https://api.anthropic.com/v1/models";
-const ANTHROPIC_VERSION: &str = "2023-06-01";
-const KEY_ANTHROPIC_API_KEY: &str = "anthropic_api_key";
-
-#[derive(Deserialize)]
-struct AnthropicModelsResponse {
-    data: Vec<AnthropicModelInfo>,
-}
-
-#[derive(Deserialize)]
-struct AnthropicModelInfo {
-    id: String,
-    context_window: u64,
-}
-
-#[derive(Serialize, Object)]
-struct AnthropicModelSummary {
-    id: String,
-    context_window: u64,
-}
-
-fn get_anthropic_api_key(ctx: &AppContext) -> Result<String, String> {
-    match ctx.store.get(KEY_ANTHROPIC_API_KEY) {
-        Some(value) => {
-            if let Some(key) = value.as_str() {
-                if key.is_empty() {
-                    Err("Anthropic API key is empty. Please set your API key.".to_string())
-                } else {
-                    Ok(key.to_string())
-                }
-            } else {
-                Err("Stored API key is not a string".to_string())
-            }
-        }
-        None => Err("Anthropic API key not found. Please set your API key.".to_string()),
-    }
-}
-
 #[derive(Deserialize, Object)]
 struct ApiKeyPayload {
     api_key: String,
@@ -79,8 +39,8 @@ impl AnthropicApi {
|
|||||||
/// Returns `true` if a non-empty key is present, otherwise `false`.
|
/// Returns `true` if a non-empty key is present, otherwise `false`.
|
||||||
#[oai(path = "/anthropic/key/exists", method = "get")]
|
#[oai(path = "/anthropic/key/exists", method = "get")]
|
||||||
async fn get_anthropic_api_key_exists(&self) -> OpenApiResult<Json<bool>> {
|
async fn get_anthropic_api_key_exists(&self) -> OpenApiResult<Json<bool>> {
|
||||||
let exists =
|
let exists = svc::get_api_key_exists(self.ctx.store.as_ref())
|
||||||
chat::get_anthropic_api_key_exists(self.ctx.store.as_ref()).map_err(bad_request)?;
|
.map_err(|e| bad_request(e.to_string()))?;
|
||||||
Ok(Json(exists))
|
Ok(Json(exists))
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -92,74 +52,62 @@ impl AnthropicApi {
|
|||||||
&self,
|
&self,
|
||||||
payload: Json<ApiKeyPayload>,
|
payload: Json<ApiKeyPayload>,
|
||||||
) -> OpenApiResult<Json<bool>> {
|
) -> OpenApiResult<Json<bool>> {
|
||||||
chat::set_anthropic_api_key(self.ctx.store.as_ref(), payload.0.api_key)
|
svc::set_api_key(self.ctx.store.as_ref(), payload.0.api_key)
|
||||||
.map_err(bad_request)?;
|
.map_err(|e| bad_request(e.to_string()))?;
|
||||||
Ok(Json(true))
|
Ok(Json(true))
|
||||||
}
|
}
|
||||||
|
|
||||||
/// List available Anthropic models.
|
/// List available Anthropic models.
|
||||||
#[oai(path = "/anthropic/models", method = "get")]
|
#[oai(path = "/anthropic/models", method = "get")]
|
||||||
async fn list_anthropic_models(&self) -> OpenApiResult<Json<Vec<AnthropicModelSummary>>> {
|
async fn list_anthropic_models(&self) -> OpenApiResult<Json<Vec<ModelSummary>>> {
|
||||||
self.list_anthropic_models_from(ANTHROPIC_MODELS_URL).await
|
let models = svc::list_models(self.ctx.store.as_ref())
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
impl AnthropicApi {
|
|
||||||
async fn list_anthropic_models_from(
|
|
||||||
&self,
|
|
||||||
url: &str,
|
|
||||||
) -> OpenApiResult<Json<Vec<AnthropicModelSummary>>> {
|
|
||||||
let api_key = get_anthropic_api_key(self.ctx.as_ref()).map_err(bad_request)?;
|
|
||||||
let client = reqwest::Client::new();
|
|
||||||
let mut headers = HeaderMap::new();
|
|
||||||
headers.insert(
|
|
||||||
"x-api-key",
|
|
||||||
HeaderValue::from_str(&api_key).map_err(|e| bad_request(e.to_string()))?,
|
|
||||||
);
|
|
||||||
headers.insert(
|
|
||||||
"anthropic-version",
|
|
||||||
HeaderValue::from_static(ANTHROPIC_VERSION),
|
|
||||||
);
|
|
||||||
|
|
||||||
let response = client
|
|
||||||
.get(url)
|
|
||||||
.headers(headers)
|
|
||||||
.send()
|
|
||||||
.await
|
.await
|
||||||
.map_err(|e| bad_request(e.to_string()))?;
|
.map_err(|e| bad_request(e.to_string()))?;
|
||||||
|
|
||||||
if !response.status().is_success() {
|
|
||||||
let status = response.status();
|
|
||||||
let error_text = response
|
|
||||||
.text()
|
|
||||||
.await
|
|
||||||
.unwrap_or_else(|_| "Unknown error".to_string());
|
|
||||||
return Err(bad_request(format!(
|
|
||||||
"Anthropic API error {status}: {error_text}"
|
|
||||||
)));
|
|
||||||
}
|
|
||||||
|
|
||||||
let body = response
|
|
||||||
.json::<AnthropicModelsResponse>()
|
|
||||||
.await
|
|
||||||
.map_err(|e| bad_request(e.to_string()))?;
|
|
||||||
let models = body
|
|
||||||
.data
|
|
||||||
.into_iter()
|
|
||||||
.map(|m| AnthropicModelSummary {
|
|
||||||
id: m.id,
|
|
||||||
context_window: m.context_window,
|
|
||||||
})
|
|
||||||
.collect();
|
|
||||||
|
|
||||||
Ok(Json(models))
|
Ok(Json(models))
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
impl AnthropicApi {
|
||||||
|
/// List models from an injectable URL (used in tests to avoid real network calls).
|
||||||
|
async fn list_anthropic_models_from(
|
||||||
|
&self,
|
||||||
|
url: &str,
|
||||||
|
) -> OpenApiResult<Json<Vec<ModelSummary>>> {
|
||||||
|
let models = svc::list_models_from(self.ctx.store.as_ref(), url)
|
||||||
|
.await
|
||||||
|
.map_err(|e| bad_request(e.to_string()))?;
|
||||||
|
Ok(Json(models))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Private helper retained for backward compatibility with tests that call it directly.
|
||||||
|
#[cfg(test)]
|
||||||
|
fn get_anthropic_api_key(ctx: &AppContext) -> Result<String, String> {
|
||||||
|
svc::get_api_key(ctx.store.as_ref()).map_err(|e| e.to_string())
|
||||||
|
}
|
||||||
|
|
||||||
|
// Private types retained so existing tests that deserialise them directly continue to compile.
|
||||||
|
#[cfg(test)]
|
||||||
|
#[derive(serde::Deserialize)]
|
||||||
|
struct AnthropicModelsResponse {
|
||||||
|
data: Vec<AnthropicModelInfo>,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
#[derive(serde::Deserialize)]
|
||||||
|
struct AnthropicModelInfo {
|
||||||
|
id: String,
|
||||||
|
context_window: u64,
|
||||||
|
}
|
||||||
|
|
||||||
#[cfg(test)]
|
#[cfg(test)]
|
||||||
mod tests {
|
mod tests {
|
||||||
use super::*;
|
use super::*;
|
||||||
|
use crate::http::context::AppContext;
|
||||||
use crate::http::test_helpers::{make_api, test_ctx};
|
use crate::http::test_helpers::{make_api, test_ctx};
|
||||||
|
use crate::store::StoreOps;
|
||||||
|
const KEY_ANTHROPIC_API_KEY: &str = "anthropic_api_key";
|
||||||
use serde_json::json;
|
use serde_json::json;
|
||||||
use tempfile::TempDir;
|
use tempfile::TempDir;
|
||||||
|
|
||||||
|
|||||||
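The hunk above deletes the handler-local `get_anthropic_api_key` and moves key validation into `service::anthropic`. For reference, the validation rules the old code encoded (missing key, non-string value, empty string → three distinct errors) can be sketched stand-alone; the `Value` enum below is a hypothetical stand-in for the JSON store's value type, not a type from the codebase:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the store's JSON value type.
enum Value {
    Str(String),
    Num(i64),
}

// Mirrors the removed get_anthropic_api_key logic: missing, non-string,
// and empty keys are each reported with a distinct error message.
fn get_api_key(store: &HashMap<String, Value>) -> Result<String, String> {
    match store.get("anthropic_api_key") {
        None => Err("Anthropic API key not found. Please set your API key.".to_string()),
        Some(Value::Num(_)) => Err("Stored API key is not a string".to_string()),
        Some(Value::Str(key)) if key.is_empty() => {
            Err("Anthropic API key is empty. Please set your API key.".to_string())
        }
        Some(Value::Str(key)) => Ok(key.clone()),
    }
}

fn main() {
    let mut store: HashMap<String, Value> = HashMap::new();
    assert!(get_api_key(&store).is_err()); // missing key

    store.insert("anthropic_api_key".to_string(), Value::Num(42));
    assert!(get_api_key(&store).is_err()); // not a string

    store.insert("anthropic_api_key".to_string(), Value::Str(String::new()));
    assert!(get_api_key(&store).is_err()); // empty key

    store.insert("anthropic_api_key".to_string(), Value::Str("sk-test".to_string()));
    assert_eq!(get_api_key(&store).unwrap(), "sk-test");
    println!("ok");
}
```

The refactor keeps these semantics but owns them in one place, so both the HTTP adapter and any future callers report identical errors.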
+153 -190
@@ -3,19 +3,16 @@
 //! `POST /api/bot/command` lets the web UI invoke the same deterministic bot
 //! commands available in Matrix without going through the LLM.
 //!
-//! Synchronous commands (status, git, cost, move, show, overview, help) are
-//! dispatched directly through the matrix command registry.
-//! Asynchronous commands (assign, start, delete, rebuild) are dispatched to
-//! their dedicated async handlers. The `reset` command is handled by the frontend
-//! (it clears local session state and message history) and is not routed here.
+//! Dispatches to [`crate::service::bot_command::execute`], which owns all
+//! parsing and routing logic. This handler is a thin OpenAPI adapter: it
+//! receives JSON, calls the service, and maps typed errors to HTTP status codes.
 
-use crate::chat::commands::CommandDispatch;
 use crate::http::context::{AppContext, OpenApiResult};
+use crate::service::bot_command as svc;
 use poem::http::StatusCode;
 use poem_openapi::{Object, OpenApi, Tags, payload::Json};
 use serde::{Deserialize, Serialize};
-use std::collections::HashSet;
-use std::sync::{Arc, Mutex};
+use std::sync::Arc;
 
 #[derive(Tags)]
 enum BotCommandTags {
@@ -50,6 +47,11 @@ impl BotCommandApi {
     /// Dispatches to the same handlers used by the Matrix and Slack bots.
     /// Returns a markdown-formatted response that the frontend can display
     /// directly in the chat panel.
+    ///
+    /// # Errors
+    /// - `400 Bad Request` — project root not set, or invalid command arguments.
+    /// - `404 Not Found` — unrecognised command keyword.
+    /// - `500 Internal Server Error` — command execution failed.
     #[oai(path = "/bot/command", method = "post")]
     async fn run_command(
         &self,
@@ -63,173 +65,23 @@ impl BotCommandApi {
 
         let cmd = body.command.trim().to_ascii_lowercase();
         let args = body.args.trim();
-        let response = dispatch_command(&cmd, args, &project_root, &self.ctx.agents).await;
+        let response = svc::execute(&cmd, args, &project_root, &self.ctx.agents)
+            .await
+            .map_err(|e| match e {
+                svc::Error::UnknownCommand(msg) => {
+                    poem::Error::from_string(msg, StatusCode::NOT_FOUND)
+                }
+                svc::Error::BadArgs(msg) => poem::Error::from_string(msg, StatusCode::BAD_REQUEST),
+                svc::Error::CommandFailed(msg) => {
+                    poem::Error::from_string(msg, StatusCode::INTERNAL_SERVER_ERROR)
+                }
+            })?;
 
         Ok(Json(BotCommandResponse { response }))
     }
 }
 
-/// Dispatch a command keyword + args to the appropriate handler.
-async fn dispatch_command(
-    cmd: &str,
-    args: &str,
-    project_root: &std::path::Path,
-    agents: &Arc<crate::agents::AgentPool>,
-) -> String {
-    match cmd {
-        "assign" => dispatch_assign(args, project_root, agents).await,
-        "start" => dispatch_start(args, project_root, agents).await,
-        "delete" => dispatch_delete(args, project_root, agents).await,
-        "rebuild" => dispatch_rebuild(project_root, agents).await,
-        "timer" => dispatch_timer(args, project_root).await,
-        // All other commands go through the synchronous command registry.
-        _ => dispatch_sync(cmd, args, project_root, agents),
-    }
-}
-
-fn dispatch_sync(
-    cmd: &str,
-    args: &str,
-    project_root: &std::path::Path,
-    agents: &Arc<crate::agents::AgentPool>,
-) -> String {
-    let ambient_rooms: Arc<Mutex<HashSet<String>>> = Arc::new(Mutex::new(HashSet::new()));
-    // Use a synthetic bot name/id so strip_bot_mention passes through.
-    let bot_name = "__web_ui__";
-    let bot_user_id = "@__web_ui__:localhost";
-    let room_id = "__web_ui__";
-
-    let dispatch = CommandDispatch {
-        bot_name,
-        bot_user_id,
-        project_root,
-        agents,
-        ambient_rooms: &ambient_rooms,
-        room_id,
-    };
-
-    // Build a synthetic message that the registry can parse.
-    let synthetic = if args.is_empty() {
-        format!("{bot_name} {cmd}")
-    } else {
-        format!("{bot_name} {cmd} {args}")
-    };
-
-    match crate::chat::commands::try_handle_command(&dispatch, &synthetic) {
-        Some(response) => response,
-        None => {
-            // Command exists in the registry but its fallback handler returns None
-            // (start, delete, rebuild, reset, htop — handled elsewhere or in
-            // the frontend). Should not be reached for those since we intercept
-            // them above. For genuinely unknown commands, tell the user.
-            format!("Unknown command: `/{cmd}`. Type `/help` to see available commands.")
-        }
-    }
-}
-
-async fn dispatch_assign(
-    args: &str,
-    project_root: &std::path::Path,
-    agents: &Arc<crate::agents::AgentPool>,
-) -> String {
-    // args: "<number> <model>"
-    let mut parts = args.splitn(2, char::is_whitespace);
-    let number_str = parts.next().unwrap_or("").trim();
-    let model_str = parts.next().unwrap_or("").trim();
-
-    if number_str.is_empty()
-        || !number_str.chars().all(|c| c.is_ascii_digit())
-        || model_str.is_empty()
-    {
-        return "Usage: `/assign <number> <model>` (e.g. `/assign 42 opus`)".to_string();
-    }
-
-    crate::chat::transport::matrix::assign::handle_assign(
-        "web-ui",
-        number_str,
-        model_str,
-        project_root,
-        agents,
-    )
-    .await
-}
-
-async fn dispatch_start(
-    args: &str,
-    project_root: &std::path::Path,
-    agents: &Arc<crate::agents::AgentPool>,
-) -> String {
-    // args: "<number>" or "<number> <model_hint>"
-    let mut parts = args.splitn(2, char::is_whitespace);
-    let number_str = parts.next().unwrap_or("").trim();
-    let hint_str = parts.next().unwrap_or("").trim();
-
-    if number_str.is_empty() || !number_str.chars().all(|c| c.is_ascii_digit()) {
-        return "Usage: `/start <number>` or `/start <number> <model>` (e.g. `/start 42 opus`)"
-            .to_string();
-    }
-
-    let agent_hint = if hint_str.is_empty() {
-        None
-    } else {
-        Some(hint_str)
-    };
-
-    crate::chat::transport::matrix::start::handle_start(
-        "web-ui",
-        number_str,
-        agent_hint,
-        project_root,
-        agents,
-    )
-    .await
-}
-
-async fn dispatch_delete(
-    args: &str,
-    project_root: &std::path::Path,
-    agents: &Arc<crate::agents::AgentPool>,
-) -> String {
-    let number_str = args.trim();
-    if number_str.is_empty() || !number_str.chars().all(|c| c.is_ascii_digit()) {
-        return "Usage: `/delete <number>` (e.g. `/delete 42`)".to_string();
-    }
-    crate::chat::transport::matrix::delete::handle_delete(
-        "web-ui",
-        number_str,
-        project_root,
-        agents,
-    )
-    .await
-}
-
-async fn dispatch_rebuild(
-    project_root: &std::path::Path,
-    agents: &Arc<crate::agents::AgentPool>,
-) -> String {
-    crate::chat::transport::matrix::rebuild::handle_rebuild("web-ui", project_root, agents).await
-}
-
-async fn dispatch_timer(args: &str, project_root: &std::path::Path) -> String {
-    // Re-use the existing parser by constructing a synthetic message that
-    // looks like a bot-addressed timer command.
-    let synthetic = format!("__web_ui__ timer {args}");
-    let timer_cmd = match crate::chat::timer::extract_timer_command(
-        &synthetic,
-        "__web_ui__",
-        "@__web_ui__:localhost",
-    ) {
-        Some(cmd) => cmd,
-        None => {
-            return "Usage: `/timer list`, `/timer <number> <HH:MM>`, or `/timer cancel <number>`"
-                .to_string();
-        }
-    };
-    let store =
-        crate::chat::timer::TimerStore::load(project_root.join(".huskies").join("timers.json"));
-    crate::chat::timer::handle_timer_command(timer_cmd, &store, project_root).await
-}
-
 // ---------------------------------------------------------------------------
 // Tests
 // ---------------------------------------------------------------------------
@@ -268,13 +120,7 @@ mod tests {
             args: String::new(),
         };
         let result = api.run_command(Json(body)).await;
-        assert!(result.is_ok());
-        let resp = result.unwrap().0;
-        assert!(
-            resp.response.contains("Unknown command"),
-            "expected 'Unknown command' in: {}",
-            resp.response
-        );
+        assert!(result.is_err(), "unknown command should return HTTP 404");
     }
 
     #[tokio::test]
@@ -286,13 +132,7 @@ mod tests {
            args: String::new(),
        };
        let result = api.run_command(Json(body)).await;
-        assert!(result.is_ok());
-        let resp = result.unwrap().0;
-        assert!(
-            resp.response.contains("Usage"),
-            "expected usage hint in: {}",
-            resp.response
-        );
+        assert!(result.is_err(), "start with no args should return HTTP 400");
    }
 
    #[tokio::test]
@@ -304,12 +144,9 @@ mod tests {
            args: String::new(),
        };
        let result = api.run_command(Json(body)).await;
-        assert!(result.is_ok());
-        let resp = result.unwrap().0;
        assert!(
-            resp.response.contains("Usage"),
-            "expected usage hint in: {}",
-            resp.response
+            result.is_err(),
+            "delete with no args should return HTTP 400"
        );
    }
 
@@ -340,7 +177,11 @@ mod tests {
            args: "list".to_string(),
        };
        let result = api.run_command(Json(body)).await;
-        assert!(result.is_ok());
+        assert!(
+            result.is_ok(),
+            "timer list should succeed, got err: {:?}",
+            result.err().map(|e| e.to_string())
+        );
        let resp = result.unwrap().0;
        assert!(
            !resp.response.contains("Unknown command"),
@@ -349,6 +190,128 @@ mod tests {
        );
    }
 
+    // -- htop (web-UI slash-command path) ------------------------------------
+
+    #[tokio::test]
+    async fn htop_returns_dashboard_not_unknown_command() {
+        let dir = TempDir::new().unwrap();
+        let api = test_api(&dir);
+        let body = BotCommandRequest {
+            command: "htop".to_string(),
+            args: String::new(),
+        };
+        let result = api.run_command(Json(body)).await;
+        assert!(result.is_ok());
+        let resp = result.unwrap().0;
+        assert!(
+            !resp.response.contains("Unknown command"),
+            "htop should not return 'Unknown command': {}",
+            resp.response
+        );
+        assert!(
+            resp.response.contains("htop"),
+            "htop response should contain 'htop': {}",
+            resp.response
+        );
+    }
+
+    #[tokio::test]
+    async fn htop_with_duration_returns_dashboard() {
+        let dir = TempDir::new().unwrap();
+        let api = test_api(&dir);
+        let body = BotCommandRequest {
+            command: "htop".to_string(),
+            args: "10m".to_string(),
+        };
+        let result = api.run_command(Json(body)).await;
+        assert!(result.is_ok());
+        let resp = result.unwrap().0;
+        assert!(
+            !resp.response.contains("Unknown command"),
+            "htop 10m should not return 'Unknown command': {}",
+            resp.response
+        );
+    }
+
+    #[tokio::test]
+    async fn htop_stop_returns_response_not_unknown_command() {
+        let dir = TempDir::new().unwrap();
+        let api = test_api(&dir);
+        let body = BotCommandRequest {
+            command: "htop".to_string(),
+            args: "stop".to_string(),
+        };
+        let result = api.run_command(Json(body)).await;
+        assert!(result.is_ok());
+        let resp = result.unwrap().0;
+        assert!(
+            !resp.response.contains("Unknown command"),
+            "htop stop should not return 'Unknown command': {}",
+            resp.response
+        );
+    }
+
+    // -- rmtree ----------------------------------------------------------------
+
+    #[tokio::test]
+    async fn rmtree_without_number_returns_usage() {
+        let dir = TempDir::new().unwrap();
+        let api = test_api(&dir);
+        let body = BotCommandRequest {
+            command: "rmtree".to_string(),
+            args: String::new(),
+        };
+        let result = api.run_command(Json(body)).await;
+        assert!(
+            result.is_err(),
+            "rmtree with no args should return HTTP 400"
+        );
+    }
+
+    #[tokio::test]
+    async fn rmtree_with_non_numeric_arg_returns_usage() {
+        let dir = TempDir::new().unwrap();
+        let api = test_api(&dir);
+        let body = BotCommandRequest {
+            command: "rmtree".to_string(),
+            args: "foo".to_string(),
+        };
+        let result = api.run_command(Json(body)).await;
+        assert!(
+            result.is_err(),
+            "rmtree with non-numeric arg should return HTTP 400"
+        );
+    }
+
+    #[tokio::test]
+    async fn rmtree_does_not_return_unknown_command() {
+        let dir = TempDir::new().unwrap();
+        let api = test_api(&dir);
+        let body = BotCommandRequest {
+            command: "rmtree".to_string(),
+            args: "999".to_string(),
+        };
+        let result = api.run_command(Json(body)).await;
+        assert!(result.is_ok());
+        let resp = result.unwrap().0;
+        assert!(
+            !resp.response.contains("Unknown command"),
+            "/rmtree should not return 'Unknown command': {}",
+            resp.response
+        );
+    }
+
+    // -- htop bot-command path (regression: htop must remain in command registry) --
+
+    #[test]
+    fn htop_is_registered_in_bot_command_registry() {
+        let commands = crate::chat::commands::commands();
+        assert!(
+            commands.iter().any(|c| c.name == "htop"),
+            "htop must be registered in the bot command registry so /help lists it"
+        );
+    }
+
     #[tokio::test]
     async fn run_command_requires_project_root() {
         // Create a context with no project root set.
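The new handler maps the service layer's typed errors onto HTTP status codes. A minimal stand-alone sketch of that mapping, using plain numeric codes instead of poem's `StatusCode` and a hypothetical `Error` enum mirroring `svc::Error`:

```rust
// Hypothetical mirror of service::bot_command::Error, for illustration only.
enum Error {
    UnknownCommand(String),
    BadArgs(String),
    CommandFailed(String),
}

// Same routing as the handler: unrecognised keyword -> 404, invalid
// arguments -> 400, execution failure -> 500.
fn status_for(err: &Error) -> u16 {
    match err {
        Error::UnknownCommand(_) => 404,
        Error::BadArgs(_) => 400,
        Error::CommandFailed(_) => 500,
    }
}

fn main() {
    assert_eq!(status_for(&Error::UnknownCommand("nope".to_string())), 404);
    assert_eq!(status_for(&Error::BadArgs("missing number".to_string())), 400);
    assert_eq!(status_for(&Error::CommandFailed("boom".to_string())), 500);
    println!("ok");
}
```

This is why the tests above flipped from asserting on "Usage"/"Unknown command" strings in a 200 body to asserting `result.is_err()`: bad input is now surfaced as an HTTP error status rather than a successful response containing an error message.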
@@ -1,8 +1,8 @@
 //! Application context — shared state (`AppContext`) threaded through all HTTP handlers.
 use crate::agents::{AgentPool, ReconciliationEvent};
-use crate::chat::timer::TimerStore;
 use crate::io::watcher::WatcherEvent;
 use crate::rebuild::{BotShutdownNotifier, ShutdownReason};
+use crate::service::timer::TimerStore;
 use crate::state::SessionState;
 use crate::store::JsonFileStore;
 use crate::workflow::WorkflowState;
|||||||
@@ -0,0 +1,198 @@
|
|||||||
|
//! Per-project event buffer and `GET /api/events` HTTP endpoint.
|
||||||
|
//!
|
||||||
|
//! The gateway polls `/api/events?since={ts_ms}` on each registered project
|
||||||
|
//! server to aggregate cross-project pipeline notifications into a single
|
||||||
|
//! gateway chat channel. Each project server buffers up to 500 events in
|
||||||
|
//! memory and serves them via this endpoint.
|
||||||
|
//!
|
||||||
|
//! Domain logic lives in `service::events`; this module is a thin HTTP
|
||||||
|
//! adapter: extract query params → call service → shape response.
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
pub use crate::service::events::StoredEvent;
|
||||||
|
pub use crate::service::events::{EventBuffer, subscribe_to_watcher};
|
||||||
|
// MAX_BUFFER_SIZE is used in tests via `use super::*`.
|
||||||
|
#[cfg(test)]
|
||||||
|
pub use crate::service::events::MAX_BUFFER_SIZE;
|
||||||
|
|
||||||
|
use poem::web::{Data, Query};
|
||||||
|
use poem::{Response, handler, http::StatusCode};
|
||||||
|
use serde::Deserialize;
|
||||||
|
|
||||||
|
/// Query parameters for `GET /api/events`.
|
||||||
|
#[derive(Deserialize)]
|
||||||
|
pub struct EventsQuery {
|
||||||
|
/// Return only events with `timestamp_ms` strictly greater than this value.
|
||||||
|
/// Defaults to `0` (return all buffered events).
|
||||||
|
#[serde(default)]
|
||||||
|
pub since: u64,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// `GET /api/events?since={ts_ms}`
|
||||||
|
///
|
||||||
|
/// Returns a JSON array of [`StoredEvent`] objects recorded after `since` ms.
|
||||||
|
/// The gateway polls this endpoint on each registered project server to build
|
||||||
|
/// an aggregated cross-project notification stream.
|
||||||
|
#[handler]
|
||||||
|
pub fn events_handler(
|
||||||
|
Query(params): Query<EventsQuery>,
|
||||||
|
Data(buffer): Data<&EventBuffer>,
|
||||||
|
) -> Response {
|
||||||
|
let events = crate::service::events::events_since(buffer, params.since);
|
||||||
|
let body = serde_json::to_vec(&events).unwrap_or_else(|_| b"[]".to_vec());
|
||||||
|
Response::builder()
|
||||||
|
.status(StatusCode::OK)
|
||||||
|
.header(poem::http::header::CONTENT_TYPE, "application/json")
|
||||||
|
.body(body)
|
||||||
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
mod tests {
|
||||||
|
use super::*;
|
||||||
|
use tokio::sync::broadcast;
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn event_buffer_push_and_retrieve() {
|
||||||
|
let buf = EventBuffer::new();
|
||||||
|
buf.push(StoredEvent::MergeFailure {
|
||||||
|
story_id: "42_story_x".to_string(),
|
||||||
|
reason: "conflict".to_string(),
|
||||||
|
timestamp_ms: 1000,
|
||||||
|
});
|
||||||
|
buf.push(StoredEvent::StoryBlocked {
|
||||||
|
story_id: "43_story_y".to_string(),
|
||||||
|
reason: "retry limit".to_string(),
|
||||||
|
timestamp_ms: 2000,
|
||||||
|
});
|
||||||
|
|
||||||
|
let all = buf.events_since(0);
|
||||||
|
assert_eq!(all.len(), 2);
|
||||||
|
|
||||||
|
let after_1000 = buf.events_since(1000);
|
||||||
|
assert_eq!(after_1000.len(), 1);
|
||||||
|
assert!(matches!(after_1000[0], StoredEvent::StoryBlocked { .. }));
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn event_buffer_evicts_oldest_when_full() {
|
||||||
|
let buf = EventBuffer::new();
|
||||||
|
for i in 0..MAX_BUFFER_SIZE + 1 {
|
||||||
|
buf.push(StoredEvent::MergeFailure {
|
||||||
|
story_id: format!("{i}_story_x"),
|
||||||
|
reason: "x".to_string(),
|
||||||
|
timestamp_ms: i as u64,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
// Buffer must not exceed MAX_BUFFER_SIZE.
|
||||||
|
assert_eq!(buf.events_since(0).len(), MAX_BUFFER_SIZE);
|
||||||
|
// Oldest entry (timestamp_ms == 0) should have been evicted.
|
||||||
|
assert!(buf.events_since(0).iter().all(|e| e.timestamp_ms() > 0));
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn stage_transition_timestamp_ms_accessor() {
|
||||||
|
let e = StoredEvent::StageTransition {
|
||||||
|
story_id: "1".to_string(),
|
||||||
|
from_stage: "2_current".to_string(),
|
||||||
|
to_stage: "3_qa".to_string(),
|
||||||
|
timestamp_ms: 9999,
|
||||||
|
};
|
||||||
|
assert_eq!(e.timestamp_ms(), 9999);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn subscribe_to_watcher_stores_work_item_with_from_stage() {
|
||||||
|
let buf = EventBuffer::new();
|
||||||
|
let (tx, rx) = broadcast::channel(16);
|
||||||
|
|
||||||
|
subscribe_to_watcher(buf.clone(), rx);
|
||||||
|
|
||||||
|
tx.send(crate::io::watcher::WatcherEvent::WorkItem {
|
||||||
|
stage: "3_qa".to_string(),
|
||||||
|
item_id: "42_story_foo".to_string(),
|
||||||
|
action: "qa".to_string(),
|
||||||
|
commit_msg: "huskies: qa 42_story_foo".to_string(),
|
||||||
|
from_stage: Some("2_current".to_string()),
|
||||||
|
})
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
tokio::time::sleep(std::time::Duration::from_millis(50)).await;
|
||||||
|
|
||||||
|
let events = buf.events_since(0);
|
||||||
|
assert_eq!(events.len(), 1);
|
||||||
|
assert!(matches!(events[0], StoredEvent::StageTransition { .. }));
|
||||||
|
if let StoredEvent::StageTransition {
|
||||||
|
ref story_id,
|
||||||
|
ref from_stage,
|
||||||
|
ref to_stage,
|
||||||
|
..
|
||||||
|
} = events[0]
|
||||||
|
{
|
||||||
|
assert_eq!(story_id, "42_story_foo");
|
||||||
|
assert_eq!(from_stage, "2_current");
|
||||||
|
assert_eq!(to_stage, "3_qa");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn subscribe_to_watcher_ignores_work_item_without_from_stage() {
|
||||||
|
let buf = EventBuffer::new();
|
||||||
|
let (tx, rx) = broadcast::channel(16);
|
||||||
|
|
||||||
|
subscribe_to_watcher(buf.clone(), rx);
|
||||||
|
|
||||||
|
        // Synthetic event: no from_stage.
        tx.send(crate::io::watcher::WatcherEvent::WorkItem {
            stage: "2_current".to_string(),
            item_id: "99_story_syn".to_string(),
            action: "start".to_string(),
            commit_msg: "huskies: start 99_story_syn".to_string(),
            from_stage: None,
        })
        .unwrap();

        tokio::time::sleep(std::time::Duration::from_millis(50)).await;

        assert_eq!(buf.events_since(0).len(), 0);
    }

    #[tokio::test]
    async fn subscribe_to_watcher_stores_merge_failure() {
        let buf = EventBuffer::new();
        let (tx, rx) = broadcast::channel(16);

        subscribe_to_watcher(buf.clone(), rx);

        tx.send(crate::io::watcher::WatcherEvent::MergeFailure {
            story_id: "42_story_foo".to_string(),
            reason: "merge conflict".to_string(),
        })
        .unwrap();

        tokio::time::sleep(std::time::Duration::from_millis(50)).await;

        let events = buf.events_since(0);
        assert_eq!(events.len(), 1);
        assert!(matches!(events[0], StoredEvent::MergeFailure { .. }));
    }

    #[tokio::test]
    async fn subscribe_to_watcher_stores_story_blocked() {
        let buf = EventBuffer::new();
        let (tx, rx) = broadcast::channel(16);

        subscribe_to_watcher(buf.clone(), rx);

        tx.send(crate::io::watcher::WatcherEvent::StoryBlocked {
            story_id: "43_story_bar".to_string(),
            reason: "retry limit exceeded".to_string(),
        })
        .unwrap();

        tokio::time::sleep(std::time::Duration::from_millis(50)).await;

        let events = buf.events_since(0);
        assert_eq!(events.len(), 1);
        assert!(matches!(events[0], StoredEvent::StoryBlocked { .. }));
    }
}
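The `subscribe_to_watcher` implementation itself is not part of this diff. As a rough std-only sketch of the pattern these tests exercise (assumptions: the real code uses tokio broadcast channels and typed events; here plain threads, `mpsc`, and strings stand in, and "forward only the event kinds the buffer stores" is modeled as a string-prefix filter):

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical stand-in for the real EventBuffer (which stores typed events).
#[derive(Clone, Default)]
struct EventBuffer {
    events: Arc<Mutex<Vec<String>>>,
}

impl EventBuffer {
    fn push(&self, e: String) {
        self.events.lock().unwrap().push(e);
    }
    fn events_since(&self, seq: usize) -> Vec<String> {
        self.events.lock().unwrap()[seq..].to_vec()
    }
}

// Spawn a worker that drains watcher events into the buffer, dropping the
// kinds the buffer does not store (here: anything that isn't a failure/block).
fn subscribe_to_watcher(buf: EventBuffer, rx: mpsc::Receiver<String>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        for ev in rx {
            if ev.starts_with("MergeFailure") || ev.starts_with("StoryBlocked") {
                buf.push(ev);
            }
        }
    })
}

fn main() {
    let buf = EventBuffer::default();
    let (tx, rx) = mpsc::channel();
    let handle = subscribe_to_watcher(buf.clone(), rx);

    tx.send("WorkItem:99_story_syn".to_string()).unwrap(); // filtered out
    tx.send("MergeFailure:42_story_foo".to_string()).unwrap(); // stored
    drop(tx); // close the channel so the forwarder exits
    handle.join().unwrap();

    assert_eq!(buf.events_since(0), vec!["MergeFailure:42_story_foo"]);
}
```

The tests above sleep 50 ms instead of joining because the real subscriber runs as a detached tokio task with no handle to await.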
File diff suppressed because it is too large
+10 -11
@@ -1,7 +1,13 @@
-//! Health check endpoint — returns a static "ok" response.
+//! Health check endpoint — thin HTTP adapter over `service::health`.
+//!
+//! Domain logic (the `HealthStatus` type and check function) lives in
+//! `service::health`; this module is a thin adapter: call service → shape
+//! response.
+
+pub use crate::service::health::HealthStatus;
+
 use poem::handler;
-use poem_openapi::{Object, OpenApi, Tags, payload::Json};
-use serde::Serialize;
+use poem_openapi::{OpenApi, Tags, payload::Json};
 
 /// Health check endpoint.
 ///
@@ -16,11 +22,6 @@ enum HealthTags {
     Health,
 }
 
-#[derive(Serialize, Object)]
-pub struct HealthStatus {
-    status: String,
-}
-
 pub struct HealthApi;
 
 #[OpenApi(tag = "HealthTags::Health")]
@@ -30,9 +31,7 @@ impl HealthApi {
     /// Returns a JSON status object to confirm the server is running.
     #[oai(path = "/health", method = "get")]
     async fn health(&self) -> Json<HealthStatus> {
-        Json(HealthStatus {
-            status: "ok".to_string(),
-        })
+        Json(crate::service::health::check())
     }
 }
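The handler now returns `Json(crate::service::health::check())`, but the service module itself is not shown in this diff. A minimal sketch of what it plausibly contains, with names assumed from the call site (`HealthStatus`, `check`) rather than taken from the real file:

```rust
// Hypothetical service::health module: pure domain logic, no HTTP types,
// so it can be unit-tested without spinning up the poem server.
pub struct HealthStatus {
    pub status: String,
}

/// Build the status object the /health endpoint serialises.
pub fn check() -> HealthStatus {
    HealthStatus {
        status: "ok".to_string(),
    }
}

fn main() {
    assert_eq!(check().status, "ok");
}
```

In the real module the struct would also need the `Object`/`Serialize` derives the handler's `Json<HealthStatus>` return type requires; they are omitted here to keep the sketch dependency-free.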
+23 -24
@@ -1,6 +1,6 @@
-//! HTTP I/O endpoints — REST API for file and directory operations.
+//! HTTP I/O endpoints — thin adapters over `service::file_io`.
 use crate::http::context::{AppContext, OpenApiResult, bad_request};
-use crate::io::fs as io_fs;
+use crate::service::file_io::{self as svc, FileEntry};
 use poem_openapi::{Object, OpenApi, Tags, payload::Json};
 use serde::Deserialize;
 use std::sync::Arc;
@@ -46,18 +46,18 @@ impl IoApi {
     /// Read a file from the currently open project and return its contents.
     #[oai(path = "/io/fs/read", method = "post")]
     async fn read_file(&self, payload: Json<FilePathPayload>) -> OpenApiResult<Json<String>> {
-        let content = io_fs::read_file(payload.0.path, &self.ctx.state)
+        let content = svc::read_file(payload.0.path, &self.ctx.state)
             .await
-            .map_err(bad_request)?;
+            .map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(content))
     }
 
     /// Write a file to the currently open project, creating parent directories if needed.
     #[oai(path = "/io/fs/write", method = "post")]
     async fn write_file(&self, payload: Json<WriteFilePayload>) -> OpenApiResult<Json<bool>> {
-        io_fs::write_file(payload.0.path, payload.0.content, &self.ctx.state)
+        svc::write_file(payload.0.path, payload.0.content, &self.ctx.state)
             .await
-            .map_err(bad_request)?;
+            .map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(true))
     }
 
@@ -66,10 +66,10 @@ impl IoApi {
     async fn list_directory(
         &self,
         payload: Json<FilePathPayload>,
-    ) -> OpenApiResult<Json<Vec<io_fs::FileEntry>>> {
+    ) -> OpenApiResult<Json<Vec<FileEntry>>> {
-        let entries = io_fs::list_directory(payload.0.path, &self.ctx.state)
+        let entries = svc::list_directory(payload.0.path, &self.ctx.state)
             .await
-            .map_err(bad_request)?;
+            .map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(entries))
     }
 
@@ -78,10 +78,10 @@ impl IoApi {
     async fn list_directory_absolute(
         &self,
         payload: Json<FilePathPayload>,
-    ) -> OpenApiResult<Json<Vec<io_fs::FileEntry>>> {
+    ) -> OpenApiResult<Json<Vec<FileEntry>>> {
-        let entries = io_fs::list_directory_absolute(payload.0.path)
+        let entries = svc::list_directory_absolute(payload.0.path)
             .await
-            .map_err(bad_request)?;
+            .map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(entries))
     }
 
@@ -91,25 +91,25 @@ impl IoApi {
         &self,
         payload: Json<CreateDirectoryPayload>,
     ) -> OpenApiResult<Json<bool>> {
-        io_fs::create_directory_absolute(payload.0.path)
+        svc::create_directory_absolute(payload.0.path)
             .await
-            .map_err(bad_request)?;
+            .map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(true))
     }
 
     /// Get the user's home directory.
     #[oai(path = "/io/fs/home", method = "get")]
     async fn get_home_directory(&self) -> OpenApiResult<Json<String>> {
-        let home = io_fs::get_home_directory().map_err(bad_request)?;
+        let home = svc::get_home_directory().map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(home))
     }
 
     /// List all files in the project recursively, respecting .gitignore.
     #[oai(path = "/io/fs/files", method = "get")]
     async fn list_project_files(&self) -> OpenApiResult<Json<Vec<String>>> {
-        let files = io_fs::list_project_files(&self.ctx.state)
+        let files = svc::list_project_files(&self.ctx.state)
             .await
-            .map_err(bad_request)?;
+            .map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(files))
     }
 
@@ -118,10 +118,10 @@ impl IoApi {
     async fn search_files(
         &self,
         payload: Json<SearchPayload>,
-    ) -> OpenApiResult<Json<Vec<crate::io::search::SearchResult>>> {
+    ) -> OpenApiResult<Json<Vec<crate::service::file_io::SearchResult>>> {
-        let results = crate::io::search::search_files(payload.0.query, &self.ctx.state)
+        let results = svc::search_files(payload.0.query, &self.ctx.state)
            .await
-            .map_err(bad_request)?;
+            .map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(results))
     }
 
@@ -130,11 +130,10 @@ impl IoApi {
     async fn exec_shell(
         &self,
         payload: Json<ExecShellPayload>,
-    ) -> OpenApiResult<Json<crate::io::shell::CommandOutput>> {
+    ) -> OpenApiResult<Json<crate::service::file_io::CommandOutput>> {
-        let output =
-            crate::io::shell::exec_shell(payload.0.command, payload.0.args, &self.ctx.state)
+        let output = svc::exec_shell(payload.0.command, payload.0.args, &self.ctx.state)
             .await
-            .map_err(bad_request)?;
+            .map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(output))
     }
 }
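Every handler above switches from `.map_err(bad_request)` to `.map_err(|e| bad_request(e.to_string()))`, which suggests the service layer now returns a typed error rather than a bare `String`. A hypothetical sketch of that pattern (the real `service::file_io` error type is not shown in this diff; the variants and messages here are invented for illustration):

```rust
use std::fmt;

// Hypothetical typed service error. Implementing Display is what makes the
// handlers' `e.to_string()` produce a user-facing message for bad_request().
#[derive(Debug)]
enum FileIoError {
    NoProjectOpen,
    NotFound(String),
}

impl fmt::Display for FileIoError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            FileIoError::NoProjectOpen => write!(f, "no project is currently open"),
            FileIoError::NotFound(p) => write!(f, "file not found: {p}"),
        }
    }
}

fn main() {
    let e = FileIoError::NotFound("a.txt".into());
    // This is the string the HTTP layer would hand to `bad_request`.
    assert_eq!(e.to_string(), "file not found: a.txt");
}
```

The payoff of the split is that the service can return structured errors (matchable, testable) while the HTTP adapter flattens them to strings only at the boundary.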
@@ -2,7 +2,7 @@
 use crate::agents::PipelineStage;
 use crate::config::ProjectConfig;
 use crate::http::context::AppContext;
-use crate::http::settings::get_editor_command_from_store;
+use crate::service::settings::get_editor_command;
 use crate::slog_warn;
 use crate::worktree;
 use serde_json::{Value, json};
@@ -86,7 +86,7 @@ pub(super) fn tool_list_agents(ctx: &AppContext) -> Result<String, String> {
         .filter(|a| {
             project_root
                 .as_deref()
-                .map(|root| !crate::http::agents::story_is_archived(root, &a.story_id))
+                .map(|root| !crate::service::agents::is_archived(root, &a.story_id))
                 .unwrap_or(true)
         })
         .map(|a| json!({
@@ -414,7 +414,7 @@ pub(super) fn tool_get_editor_command(args: &Value, ctx: &AppContext) -> Result<
         .and_then(|v| v.as_str())
         .ok_or("Missing required argument: worktree_path")?;
 
-    let editor = get_editor_command_from_store(ctx)
+    let editor = get_editor_command(&*ctx.store)
         .ok_or_else(|| "No editor configured. Set one via PUT /api/settings/editor.".to_string())?;
 
     Ok(format!("{editor} {worktree_path}"))
@@ -1,10 +1,15 @@
 //! MCP diagnostic tools — server logs, CRDT dump, and story movement helpers.
+//!
+//! This file is a thin adapter: it deserialises MCP payloads, delegates to
+//! `crate::service::diagnostics` for all business logic, and serialises responses.
 use crate::agents::move_story_to_stage;
 use crate::http::context::AppContext;
 use crate::log_buffer;
+use crate::service::diagnostics::{add_permission_rule, generate_permission_rule};
 use crate::slog;
 use crate::slog_warn;
 use serde_json::{Value, json};
+#[allow(unused_imports)]
 use std::fs;
 
 pub(super) fn tool_get_server_logs(args: &Value) -> Result<String, String> {
@@ -44,94 +49,6 @@ pub(super) async fn tool_rebuild_and_restart(ctx: &AppContext) -> Result<String,
     crate::rebuild::rebuild_and_restart(&ctx.agents, &project_root, notifier).await
 }
 
-/// Generate a Claude Code permission rule string for the given tool name and input.
-///
-/// - `Edit` / `Write` / `Read` / `Grep` / `Glob` etc. → just the tool name
-/// - `Bash` → `Bash(first_word *)` derived from the `command` field in `tool_input`
-/// - `mcp__*` → the full tool name (e.g. `mcp__huskies__create_story`)
-fn generate_permission_rule(tool_name: &str, tool_input: &Value) -> String {
-    if tool_name == "Bash" {
-        // Extract command from tool_input.command and use first word as prefix
-        let command_str = tool_input
-            .get("command")
-            .and_then(|v| v.as_str())
-            .unwrap_or("");
-        let first_word = command_str.split_whitespace().next().unwrap_or("unknown");
-        format!("Bash({first_word} *)")
-    } else {
-        // For Edit, Write, Read, Glob, Grep, MCP tools, etc. — use the tool name directly
-        tool_name.to_string()
-    }
-}
-
-/// Add a permission rule to `.claude/settings.json` in the project root.
-/// Does nothing if the rule already exists. Creates the file if missing.
-pub(super) fn add_permission_rule(
-    project_root: &std::path::Path,
-    rule: &str,
-) -> Result<(), String> {
-    let claude_dir = project_root.join(".claude");
-    fs::create_dir_all(&claude_dir)
-        .map_err(|e| format!("Failed to create .claude/ directory: {e}"))?;
-
-    let settings_path = claude_dir.join("settings.json");
-    let mut settings: Value = if settings_path.exists() {
-        let content = fs::read_to_string(&settings_path)
-            .map_err(|e| format!("Failed to read settings.json: {e}"))?;
-        serde_json::from_str(&content).map_err(|e| format!("Failed to parse settings.json: {e}"))?
-    } else {
-        json!({ "permissions": { "allow": [] } })
-    };
-
-    let allow_arr = settings
-        .pointer_mut("/permissions/allow")
-        .and_then(|v| v.as_array_mut());
-
-    let allow = match allow_arr {
-        Some(arr) => arr,
-        None => {
-            // Ensure the structure exists
-            settings
-                .as_object_mut()
-                .unwrap()
-                .entry("permissions")
-                .or_insert(json!({ "allow": [] }));
-            settings
-                .pointer_mut("/permissions/allow")
-                .unwrap()
-                .as_array_mut()
-                .unwrap()
-        }
-    };
-
-    // Check for duplicates — exact string match
-    let rule_value = Value::String(rule.to_string());
-    if allow.contains(&rule_value) {
-        return Ok(());
-    }
-
-    // Also check for wildcard coverage: if "mcp__huskies__*" exists, don't add
-    // a more specific "mcp__huskies__create_story".
-    let dominated = allow.iter().any(|existing| {
-        if let Some(pat) = existing.as_str()
-            && let Some(prefix) = pat.strip_suffix('*')
-        {
-            return rule.starts_with(prefix);
-        }
-        false
-    });
-    if dominated {
-        return Ok(());
-    }
-
-    allow.push(rule_value);
-
-    let pretty =
-        serde_json::to_string_pretty(&settings).map_err(|e| format!("Failed to serialize: {e}"))?;
-    fs::write(&settings_path, pretty).map_err(|e| format!("Failed to write settings.json: {e}"))?;
-    Ok(())
-}
-
 /// MCP tool called by Claude Code via `--permission-prompt-tool`.
 ///
 /// Forwards the permission request through the shared channel to the active
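The two removed helpers now live in `service::diagnostics`. Their documented rules can be restated as a self-contained sketch, with one simplification: the real `generate_permission_rule` takes a `serde_json::Value` for `tool_input`, while this version takes the extracted `command` string directly:

```rust
/// Restatement of the rule-generation logic: Bash commands become a
/// first-word prefix rule, every other tool maps to its own name.
fn generate_permission_rule(tool_name: &str, command: Option<&str>) -> String {
    if tool_name == "Bash" {
        let first_word = command
            .unwrap_or("")
            .split_whitespace()
            .next()
            .unwrap_or("unknown");
        format!("Bash({first_word} *)")
    } else {
        // Edit, Write, Read, Glob, Grep, MCP tools, etc.
        tool_name.to_string()
    }
}

/// The wildcard-coverage check from `add_permission_rule`: an existing
/// "prefix*" pattern dominates any more specific rule with that prefix.
fn dominated(existing: &[&str], rule: &str) -> bool {
    existing.iter().any(|pat| {
        pat.strip_suffix('*')
            .map(|prefix| rule.starts_with(prefix))
            .unwrap_or(false)
    })
}

fn main() {
    assert_eq!(generate_permission_rule("Edit", None), "Edit");
    assert_eq!(
        generate_permission_rule("Bash", Some("cargo test --all")),
        "Bash(cargo *)"
    );
    assert_eq!(
        generate_permission_rule("mcp__huskies__create_story", None),
        "mcp__huskies__create_story"
    );
    assert!(dominated(&["mcp__huskies__*"], "mcp__huskies__create_story"));
    assert!(!dominated(&["Edit"], "Write"));
}
```

The dominance check is why approving `mcp__huskies__*` once stops the settings file from accumulating one entry per individual huskies tool.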
@@ -1,68 +1,34 @@
 //! MCP git tools — status, diff, add, commit, and log operations on agent worktrees.
+//!
+//! This file is a thin adapter: it deserialises MCP payloads, delegates to
+//! `crate::service::git_ops` for all business logic, and serialises responses.
 use crate::http::context::AppContext;
 use serde_json::{Value, json};
 use std::path::PathBuf;
 
 /// Validates that `worktree_path` exists and is inside the project's
 /// `.huskies/worktrees/` directory. Returns the canonicalized path.
+///
+/// Thin wrapper that obtains the project root from `ctx` and delegates to
+/// `service::git_ops::io::validate_worktree_path`.
 fn validate_worktree_path(worktree_path: &str, ctx: &AppContext) -> Result<PathBuf, String> {
-    let wd = PathBuf::from(worktree_path);
-
-    if !wd.is_absolute() {
-        return Err("worktree_path must be an absolute path".to_string());
-    }
-    if !wd.exists() {
-        return Err(format!("worktree_path does not exist: {worktree_path}"));
-    }
-
     let project_root = ctx.agents.get_project_root(&ctx.state)?;
-    let worktrees_root = project_root.join(".huskies").join("worktrees");
-
-    let canonical_wd = wd
-        .canonicalize()
-        .map_err(|e| format!("Cannot canonicalize worktree_path: {e}"))?;
-
-    let canonical_wt = if worktrees_root.exists() {
-        worktrees_root
-            .canonicalize()
-            .map_err(|e| format!("Cannot canonicalize worktrees root: {e}"))?
-    } else {
-        return Err("No worktrees directory found in project".to_string());
-    };
-
-    if !canonical_wd.starts_with(&canonical_wt) {
-        return Err(format!(
-            "worktree_path must be inside .huskies/worktrees/. Got: {worktree_path}"
-        ));
-    }
-
-    Ok(canonical_wd)
+    crate::service::git_ops::io::validate_worktree_path(worktree_path, &project_root)
+        .map_err(|e| e.to_string())
 }
 
 /// Run a git command in the given directory and return its output.
 async fn run_git(args: Vec<&'static str>, dir: PathBuf) -> Result<std::process::Output, String> {
-    tokio::task::spawn_blocking(move || {
-        std::process::Command::new("git")
-            .args(&args)
-            .current_dir(&dir)
-            .output()
-    })
+    crate::service::git_ops::io::run_git(args, dir)
         .await
-        .map_err(|e| format!("Task join error: {e}"))?
-        .map_err(|e| format!("Failed to run git: {e}"))
+        .map_err(|e| e.to_string())
 }
 
 /// Run a git command with owned args in the given directory.
 async fn run_git_owned(args: Vec<String>, dir: PathBuf) -> Result<std::process::Output, String> {
-    tokio::task::spawn_blocking(move || {
-        std::process::Command::new("git")
-            .args(&args)
-            .current_dir(&dir)
-            .output()
-    })
+    crate::service::git_ops::io::run_git_owned(args, dir)
         .await
-        .map_err(|e| format!("Task join error: {e}"))?
-        .map_err(|e| format!("Failed to run git: {e}"))
+        .map_err(|e| e.to_string())
 }
 
 /// git_status — returns working tree status (staged, unstaged, untracked files).
@@ -86,29 +52,8 @@ pub(super) async fn tool_git_status(args: &Value, ctx: &AppContext) -> Result<St
         ));
     }
 
-    let mut staged: Vec<String> = Vec::new();
-    let mut unstaged: Vec<String> = Vec::new();
-    let mut untracked: Vec<String> = Vec::new();
-
-    for line in stdout.lines() {
-        if line.len() < 3 {
-            continue;
-        }
-        let x = line.chars().next().unwrap_or(' ');
-        let y = line.chars().nth(1).unwrap_or(' ');
-        let path = line[3..].to_string();
-
-        match (x, y) {
-            ('?', '?') => untracked.push(path),
-            (' ', _) => unstaged.push(path),
-            (_, ' ') => staged.push(path),
-            _ => {
-                // Both staged and unstaged modifications
-                staged.push(path.clone());
-                unstaged.push(path);
-            }
-        }
-    }
+    let (staged, unstaged, untracked) =
+        crate::service::git_ops::parse_git_status_porcelain(&stdout);
 
     serde_json::to_string_pretty(&json!({
         "staged": staged,
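The inlined porcelain parsing extracted above into `service::git_ops::parse_git_status_porcelain` can be restated as a standalone function. This sketch reproduces the removed logic: each `git status --porcelain` line is "XY path", where X is the index (staged) state, Y the worktree (unstaged) state, and "??" marks untracked files:

```rust
/// Classify `git status --porcelain` output into (staged, unstaged, untracked).
fn parse_git_status_porcelain(stdout: &str) -> (Vec<String>, Vec<String>, Vec<String>) {
    let (mut staged, mut unstaged, mut untracked) = (Vec::new(), Vec::new(), Vec::new());
    for line in stdout.lines() {
        if line.len() < 3 {
            continue; // too short to hold "XY path"
        }
        let x = line.chars().next().unwrap_or(' ');
        let y = line.chars().nth(1).unwrap_or(' ');
        let path = line[3..].to_string();
        match (x, y) {
            ('?', '?') => untracked.push(path),
            (' ', _) => unstaged.push(path),
            (_, ' ') => staged.push(path),
            _ => {
                // Modified in both the index and the working tree.
                staged.push(path.clone());
                unstaged.push(path);
            }
        }
    }
    (staged, unstaged, untracked)
}

fn main() {
    let out = "M  a.rs\n M b.rs\n?? c.rs\nMM d.rs\n";
    let (staged, unstaged, untracked) = parse_git_status_porcelain(out);
    assert_eq!(staged, vec!["a.rs", "d.rs"]);
    assert_eq!(unstaged, vec!["b.rs", "d.rs"]);
    assert_eq!(untracked, vec!["c.rs"]);
}
```

Because the function is pure (string in, vectors out), the service-layer version can be unit-tested without a real repository — likely the motivation for the extraction.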
@@ -1,8 +1,12 @@
 //! MCP QA tools — request, approve, and reject QA reviews for stories.
+//!
+//! This file is a thin adapter: it deserialises MCP payloads, delegates to
+//! `crate::service::qa` for all business logic, and serialises responses.
 use crate::agents::{
     move_story_to_done, move_story_to_merge, move_story_to_qa, reject_story_from_qa,
 };
 use crate::http::context::AppContext;
+use crate::service::qa::{find_free_port, is_spike, merge_spike_branch_to_master};
 use crate::slog;
 use crate::slog_warn;
 use serde_json::{Value, json};
@@ -57,8 +61,7 @@ pub(super) async fn tool_approve_qa(args: &Value, ctx: &AppContext) -> Result<St
         let _ = crate::io::story_metadata::clear_front_matter_field(&qa_path, "review_hold");
     }
 
-    let item_type = crate::agents::lifecycle::item_type_from_id(story_id);
-    if item_type == "spike" {
+    if is_spike(story_id) {
         // Spikes skip the merge stage entirely: merge the feature branch to master
         // directly (fast-forward or simple merge), then move straight to done.
         let branch = format!("feature/story-{story_id}");
@@ -68,7 +71,8 @@ pub(super) async fn tool_approve_qa(args: &Value, ctx: &AppContext) -> Result<St
         let merge_ok =
             tokio::task::spawn_blocking(move || merge_spike_branch_to_master(&root, &br, &sid))
                 .await
-                .map_err(|e| format!("Merge task panicked: {e}"))??;
+                .map_err(|e| format!("Merge task panicked: {e}"))?
+                .map_err(|e| e.to_string())?;
 
         move_story_to_done(&project_root, story_id)?;
 
@@ -115,73 +119,6 @@ pub(super) async fn tool_approve_qa(args: &Value, ctx: &AppContext) -> Result<St
     }
 }
 
-/// Merge a spike's feature branch into master using a fast-forward or simple merge.
-///
-/// Unlike the squash-merge pipeline used for stories, spikes skip quality gates
-/// and preserve their commit history. Returns `true` if a merge was performed,
-/// `false` if the branch had no unmerged commits.
-fn merge_spike_branch_to_master(
-    project_root: &std::path::Path,
-    branch: &str,
-    story_id: &str,
-) -> Result<bool, String> {
-    use std::process::Command;
-
-    // Check the branch exists and has unmerged changes.
-    if !crate::agents::lifecycle::feature_branch_has_unmerged_changes(project_root, story_id) {
-        slog!("[qa] Spike '{story_id}': feature branch has no unmerged changes, skipping merge.");
-        return Ok(false);
-    }
-
-    // Ensure we are on master.
-    let checkout = Command::new("git")
-        .args(["checkout", "master"])
-        .current_dir(project_root)
-        .output()
-        .map_err(|e| format!("git checkout master failed: {e}"))?;
-    if !checkout.status.success() {
-        return Err(format!(
-            "Failed to checkout master: {}",
-            String::from_utf8_lossy(&checkout.stderr)
-        ));
-    }
-
-    // Try fast-forward first, then fall back to a regular merge.
-    let ff = Command::new("git")
-        .args(["merge", "--ff-only", branch])
-        .current_dir(project_root)
-        .output()
-        .map_err(|e| format!("git merge --ff-only failed: {e}"))?;
-
-    if ff.status.success() {
-        slog!("[qa] Spike '{story_id}': fast-forward merged '{branch}' into master.");
-        return Ok(true);
-    }
-
-    // Fast-forward failed (diverged history) — fall back to a regular merge.
-    let merge = Command::new("git")
-        .args([
-            "merge",
-            "--no-ff",
-            branch,
-            "-m",
-            &format!("Merge spike branch '{branch}' into master"),
-        ])
-        .current_dir(project_root)
-        .output()
-        .map_err(|e| format!("git merge failed: {e}"))?;
-
-    if merge.status.success() {
-        slog!("[qa] Spike '{story_id}': merged '{branch}' into master (no-ff).");
-        Ok(true)
-    } else {
-        Err(format!(
-            "Failed to merge spike branch '{branch}' into master: {}",
-            String::from_utf8_lossy(&merge.stderr)
-        ))
-    }
-}
-
 pub(super) async fn tool_reject_qa(args: &Value, ctx: &AppContext) -> Result<String, String> {
     let story_id = args
         .get("story_id")
@@ -294,16 +231,6 @@ pub(super) async fn tool_launch_qa_app(args: &Value, ctx: &AppContext) -> Result
         .map_err(|e| format!("Serialization error: {e}"))
 }
 
-/// Find a free TCP port starting from `start`.
-pub(super) fn find_free_port(start: u16) -> u16 {
-    for port in start..start + 100 {
-        if std::net::TcpListener::bind(("127.0.0.1", port)).is_ok() {
-            return port;
-        }
-    }
-    start // fallback
-}
-
 #[cfg(test)]
 mod tests {
     use super::*;
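The removed `find_free_port` (now imported from `service::qa`) probes a range of loopback ports and returns the first one that binds. Restated as a runnable sketch, identical in logic to the removed code:

```rust
use std::net::TcpListener;

/// Probe up to 100 ports starting at `start`; return the first that binds.
/// The probe listener is dropped immediately, freeing the port again, so
/// there is a small race window before the caller binds it for real.
fn find_free_port(start: u16) -> u16 {
    for port in start..start + 100 {
        if TcpListener::bind(("127.0.0.1", port)).is_ok() {
            return port;
        }
    }
    start // fallback: the caller will surface the bind error itself
}

fn main() {
    let port = find_free_port(49600);
    assert!((49600..49700).contains(&port));
    // The returned port should still be bindable right after the scan.
    assert!(TcpListener::bind(("127.0.0.1", port)).is_ok());
}
```

Note the check-then-use race: a more robust variant binds port 0 and keeps the listener, but for launching a local QA app the scan-and-release approach is usually good enough.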
@@ -1,5 +1,10 @@
 //! MCP shell tools — run commands, execute tests, and stream output via MCP.
+//!
+//! This file is a thin adapter: it deserialises MCP payloads, delegates to
+//! `crate::service::shell` for all business logic, and serialises responses.
 use crate::http::context::AppContext;
+#[allow(unused_imports)]
+use crate::service::shell::{extract_count, is_dangerous, parse_test_counts, truncate_output};
 use bytes::Bytes;
 use futures::StreamExt;
 use poem::{Body, Response};
@@ -11,92 +16,15 @@ const MAX_TIMEOUT_SECS: u64 = 600;
 const TEST_TIMEOUT_SECS: u64 = 1200;
 const MAX_OUTPUT_LINES: usize = 100;
 
-/// Patterns that are unconditionally blocked regardless of context.
-static BLOCKED_PATTERNS: &[&str] = &[
-    "rm -rf /",
-    "rm -fr /",
-    "rm -rf /*",
-    "rm -fr /*",
-    "rm --no-preserve-root",
-    ":(){ :|:& };:",
-    "> /dev/sda",
-    "dd if=/dev",
-];
-
-/// Binaries that are unconditionally blocked.
-static BLOCKED_BINARIES: &[&str] = &[
-    "sudo", "su", "shutdown", "reboot", "halt", "poweroff", "mkfs",
-];
-
-/// Returns an error message if the command matches a blocked pattern or binary.
-fn is_dangerous(command: &str) -> Option<String> {
-    let trimmed = command.trim();
-
-    // Check each blocked pattern (substring match)
-    for &pattern in BLOCKED_PATTERNS {
-        if trimmed.contains(pattern) {
-            return Some(format!(
-                "Command blocked: dangerous pattern '{pattern}' detected"
-            ));
-        }
-    }
-
-    // Check first token of the command against blocked binaries
-    if let Some(first_token) = trimmed.split_whitespace().next() {
-        let binary = std::path::Path::new(first_token)
-            .file_name()
-            .and_then(|n| n.to_str())
-            .unwrap_or(first_token);
-        if BLOCKED_BINARIES.contains(&binary) {
-            return Some(format!("Command blocked: '{binary}' is not permitted"));
-        }
-    }
-
-    None
-}
-
 /// Validates that `working_dir` exists and is inside the project's
 /// `.huskies/worktrees/` directory. Returns the canonicalized path.
+///
+/// Thin wrapper that obtains the project root from `ctx` and delegates to
+/// `service::shell::io::validate_working_dir`.
 fn validate_working_dir(working_dir: &str, ctx: &AppContext) -> Result<PathBuf, String> {
-    let wd = PathBuf::from(working_dir);
-
-    if !wd.is_absolute() {
-        return Err("working_dir must be an absolute path".to_string());
-    }
-    if !wd.exists() {
-        return Err(format!("working_dir does not exist: {working_dir}"));
-    }
-
     let project_root = ctx.agents.get_project_root(&ctx.state)?;
-    let worktrees_root = project_root.join(".huskies").join("worktrees");
-
-    let canonical_wd = wd
-        .canonicalize()
-        .map_err(|e| format!("Cannot canonicalize working_dir: {e}"))?;
-
-    // If worktrees_root doesn't exist yet, we can't allow anything
-    let canonical_wt = if worktrees_root.exists() {
-        worktrees_root
-            .canonicalize()
-            .map_err(|e| format!("Cannot canonicalize worktrees root: {e}"))?
-    } else {
-        return Err("No worktrees directory found in project".to_string());
-    };
-
-    // Also allow the merge workspace so mergemaster can fix conflicts.
-    let merge_workspace = project_root.join(".huskies").join("merge_workspace");
-    let canonical_mw = merge_workspace.canonicalize().unwrap_or_default();
-
-    let in_worktrees = canonical_wd.starts_with(&canonical_wt);
-    let in_merge_ws =
-        !canonical_mw.as_os_str().is_empty() && canonical_wd.starts_with(&canonical_mw);
-    if !in_worktrees && !in_merge_ws {
-        return Err(format!(
-            "working_dir must be inside .huskies/worktrees/ or .huskies/merge_workspace/. Got: {working_dir}"
-        ));
-    }
-
-    Ok(canonical_wd)
+    crate::service::shell::io::validate_working_dir(working_dir, &project_root)
+        .map_err(|e| e.to_string())
 }
 
 /// Regular (non-SSE) run_command: runs the bash command to completion and
@@ -328,51 +256,6 @@ pub(super) fn handle_run_command_sse(
     .body(Body::from_bytes_stream(stream.map(|r| r.map(Bytes::from))))
 }
 
-/// Truncate output to at most `max_lines` lines, keeping the tail.
-fn truncate_output(output: &str, max_lines: usize) -> String {
-    let lines: Vec<&str> = output.lines().collect();
-    if lines.len() <= max_lines {
-        return output.to_string();
-    }
-    let omitted = lines.len() - max_lines;
-    let tail = lines[lines.len() - max_lines..].join("\n");
-    format!("[... {omitted} lines omitted ...]\n{tail}")
-}
-
-/// Parse cumulative passed/failed counts from `cargo test` output lines like:
-/// `"test result: ok. 5 passed; 0 failed; ..."`
-fn parse_test_counts(output: &str) -> (u64, u64) {
-    let mut total_passed = 0u64;
-    let mut total_failed = 0u64;
-    for line in output.lines() {
-        if line.contains("test result:") {
-            if let Some(p) = extract_count(line, "passed") {
|
|
||||||
total_passed += p;
|
|
||||||
}
|
|
||||||
if let Some(f) = extract_count(line, "failed") {
|
|
||||||
total_failed += f;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
(total_passed, total_failed)
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Extract a count immediately before `label` in `line` (e.g. `"5 passed"` → 5).
|
|
||||||
fn extract_count(line: &str, label: &str) -> Option<u64> {
|
|
||||||
let pos = line.find(label)?;
|
|
||||||
let before = line[..pos].trim_end();
|
|
||||||
let num_str: String = before
|
|
||||||
.chars()
|
|
||||||
.rev()
|
|
||||||
.take_while(|c| c.is_ascii_digit())
|
|
||||||
.collect();
|
|
||||||
if num_str.is_empty() {
|
|
||||||
return None;
|
|
||||||
}
|
|
||||||
let num_str: String = num_str.chars().rev().collect();
|
|
||||||
num_str.parse().ok()
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Run the project's test suite (`script/test`) and block until complete.
|
/// Run the project's test suite (`script/test`) and block until complete.
|
||||||
///
|
///
|
||||||
/// Spawns the test process, then polls every second server-side until the
|
/// Spawns the test process, then polls every second server-side until the
|
||||||
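The diff above removes the `truncate_output` / `parse_test_counts` / `extract_count` helpers from the handler (in this refactor, logic like this moves into the service layer). Their behaviour — scan each `test result:` line and collect the digits sitting directly before a label — is easy to miss in diff form, so here is the same logic reassembled from the deleted lines into a standalone sketch:

```rust
// Standalone copy of the removed helpers: walk backwards from a label
// ("passed" / "failed") over the digits immediately preceding it.
fn extract_count(line: &str, label: &str) -> Option<u64> {
    let pos = line.find(label)?;
    let before = line[..pos].trim_end();
    // Reversed digit run just before the label, e.g. "… 5 passed" → "5".
    let digits: String = before
        .chars()
        .rev()
        .take_while(|c| c.is_ascii_digit())
        .collect();
    if digits.is_empty() {
        return None;
    }
    digits.chars().rev().collect::<String>().parse().ok()
}

// Sum counts across every `test result:` line (cargo prints one per test binary).
fn parse_test_counts(output: &str) -> (u64, u64) {
    let mut passed = 0u64;
    let mut failed = 0u64;
    for line in output.lines().filter(|l| l.contains("test result:")) {
        passed += extract_count(line, "passed").unwrap_or(0);
        failed += extract_count(line, "failed").unwrap_or(0);
    }
    (passed, failed)
}

fn main() {
    let out = "test result: ok. 5 passed; 0 failed; 0 ignored\n\
               test result: FAILED. 3 passed; 2 failed; 0 ignored";
    println!("{:?}", parse_test_counts(out)); // (8, 2)
}
```

Note the backwards scan means the parser never allocates a regex and tolerates arbitrary prefixes before the counts; a line without a digit run before the label simply yields `None`.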
```diff
@@ -1,4 +1,8 @@
 //! MCP story tools — create, update, move, and manage stories, bugs, and refactors via MCP.
+//!
+//! This file is a thin adapter: it deserialises MCP payloads, delegates to
+//! `crate::service::story` and `crate::http::workflow` for business logic,
+//! and serialises responses.
 use crate::agents::{
     close_bug_to_archive, feature_branch_has_unmerged_changes, move_story_to_done,
 };
@@ -12,7 +16,9 @@ use crate::http::workflow::{
 use crate::io::story_metadata::{
     check_archived_deps, check_archived_deps_from_list, parse_front_matter, parse_unchecked_todos,
 };
+use crate::service::story::parse_test_cases;
 use crate::slog_warn;
+#[allow(unused_imports)]
 use crate::workflow::{TestCaseResult, TestStatus, evaluate_acceptance_with_coverage};
 use serde_json::{Value, json};
 use std::collections::HashMap;
@@ -702,46 +708,6 @@ pub(super) fn tool_list_refactors(ctx: &AppContext) -> Result<String, String> {
         .map_err(|e| format!("Serialization error: {e}"))
 }
-
-pub(super) fn parse_test_cases(value: Option<&Value>) -> Result<Vec<TestCaseResult>, String> {
-    let arr = match value {
-        Some(Value::Array(a)) => a,
-        Some(Value::Null) | None => return Ok(Vec::new()),
-        _ => return Err("Expected array for test cases".to_string()),
-    };
-
-    arr.iter()
-        .map(|item| {
-            let name = item
-                .get("name")
-                .and_then(|v| v.as_str())
-                .ok_or("Test case missing 'name'")?
-                .to_string();
-            let status_str = item
-                .get("status")
-                .and_then(|v| v.as_str())
-                .ok_or("Test case missing 'status'")?;
-            let status = match status_str {
-                "pass" => TestStatus::Pass,
-                "fail" => TestStatus::Fail,
-                other => {
-                    return Err(format!(
-                        "Invalid test status '{other}'. Use 'pass' or 'fail'."
-                    ));
-                }
-            };
-            let details = item
-                .get("details")
-                .and_then(|v| v.as_str())
-                .map(String::from);
-            Ok(TestCaseResult {
-                name,
-                status,
-                details,
-            })
-        })
-        .collect()
-}
-
 #[cfg(test)]
 mod tests {
     use super::*;
```
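The `parse_test_cases` body that moves out of this file enforces a small contract on each test-case object: a required `name`, a `status` that must be exactly `"pass"` or `"fail"`, and optional `details`. A dependency-free sketch of the status validation rule, using local stand-in types rather than the crate's `TestStatus`/`TestCaseResult` (so the snippet compiles on its own):

```rust
// Local stand-in for the crate's `TestStatus` enum, so the validation rule
// from `parse_test_cases` can run outside the codebase.
#[derive(Debug, PartialEq)]
enum TestStatus {
    Pass,
    Fail,
}

// Mirror of the `match status_str { … }` arm: anything other than the two
// canonical strings is rejected with the same style of error message.
fn parse_status(status_str: &str) -> Result<TestStatus, String> {
    match status_str {
        "pass" => Ok(TestStatus::Pass),
        "fail" => Ok(TestStatus::Fail),
        other => Err(format!("Invalid test status '{other}'. Use 'pass' or 'fail'.")),
    }
}

fn main() {
    println!("{:?}", parse_status("pass")); // Ok(Pass)
    println!("{:?}", parse_status("maybe")); // Err with an explanatory message
}
```

Rejecting unknown strings eagerly (rather than defaulting to `Fail`) keeps a typo in an MCP payload from silently flipping a test result.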
```diff
@@ -12,106 +12,60 @@
 //! 5. `wizard_retry` — discard staged content and regenerate from scratch

 use crate::http::context::AppContext;
-use crate::io::wizard::{StepStatus, WizardState, WizardStep, format_wizard_state};
+use crate::io::wizard::WizardStep;
+use crate::service::wizard::state_machine;
+use crate::service::wizard::{self as svc};
 use serde_json::Value;
-use std::fs;
 use std::path::Path;

-// ── helpers ───────────────────────────────────────────────────────────────────
+// ── Thin adapters (kept for callers in chat/commands/setup.rs) ────────────────

-/// Return the filesystem path (relative to `project_root`) for a step's output.
+/// Return the filesystem path for a step's output file.
 ///
-/// Returns `None` for `Scaffold` since that step has no single output file — it
-/// creates the full `.huskies/` directory structure and is handled by
-/// `huskies init` before the server starts.
+/// Pure path concatenation — delegates to `service::wizard::state_machine`.
 pub(crate) fn step_output_path(
     project_root: &Path,
     step: WizardStep,
 ) -> Option<std::path::PathBuf> {
-    match step {
-        WizardStep::Context => Some(
-            project_root
-                .join(".huskies")
-                .join("specs")
-                .join("00_CONTEXT.md"),
-        ),
-        WizardStep::Stack => Some(
-            project_root
-                .join(".huskies")
-                .join("specs")
-                .join("tech")
-                .join("STACK.md"),
-        ),
-        WizardStep::TestScript => Some(project_root.join("script").join("test")),
-        WizardStep::BuildScript => Some(project_root.join("script").join("build")),
-        WizardStep::LintScript => Some(project_root.join("script").join("lint")),
-        WizardStep::ReleaseScript => Some(project_root.join("script").join("release")),
-        WizardStep::TestCoverage => Some(project_root.join("script").join("test_coverage")),
-        WizardStep::Scaffold => None,
-    }
+    state_machine::step_output_path(project_root, step)
 }

+/// Return true when `step` produces an executable script file.
 pub(crate) fn is_script_step(step: WizardStep) -> bool {
-    matches!(
-        step,
-        WizardStep::TestScript
-            | WizardStep::BuildScript
-            | WizardStep::LintScript
-            | WizardStep::ReleaseScript
-            | WizardStep::TestCoverage
-    )
+    state_machine::is_script_step(step)
 }

-/// Write `content` to `path`, skipping if the file already exists with real
-/// (non-template) content.
+/// Write `content` to `path`, skipping if the file already has real content.
 ///
-/// Scaffold template files (those containing [`TEMPLATE_SENTINEL`]) are treated
-/// as placeholders and will be overwritten with the wizard-generated content.
-/// Files with real user content are never overwritten. For script steps the
-/// file is also made executable after writing.
+/// Delegates to `service::wizard::write_step_file`.
 pub(crate) fn write_if_missing(
     path: &Path,
     content: &str,
     executable: bool,
 ) -> Result<bool, String> {
-    use crate::io::onboarding::TEMPLATE_SENTINEL;
-    if path.exists() {
-        // Overwrite scaffold template placeholders; preserve real user content.
-        let is_template = std::fs::read_to_string(path)
-            .map(|s| s.contains(TEMPLATE_SENTINEL))
-            .unwrap_or(false);
-        if !is_template {
-            return Ok(false); // real content already present — skip
-        }
-    }
-    if let Some(parent) = path.parent() {
-        fs::create_dir_all(parent)
-            .map_err(|e| format!("Failed to create directory {}: {e}", parent.display()))?;
-    }
-    fs::write(path, content).map_err(|e| format!("Failed to write {}: {e}", path.display()))?;
-
-    if executable {
-        #[cfg(unix)]
-        {
-            use std::os::unix::fs::PermissionsExt;
-            let mut perms = fs::metadata(path)
-                .map_err(|e| format!("Failed to read permissions: {e}"))?
-                .permissions();
-            perms.set_mode(0o755);
-            fs::set_permissions(path, perms)
-                .map_err(|e| format!("Failed to set permissions: {e}"))?;
-        }
-    }
-
-    Ok(true)
+    svc::write_step_file(path, content, executable).map_err(|e| e.to_string())
 }

-/// Serialise a `WizardStep` to its snake_case string (e.g. `"test_script"`).
-fn step_slug(step: WizardStep) -> String {
-    serde_json::to_value(step)
-        .ok()
-        .and_then(|v| v.as_str().map(String::from))
-        .unwrap_or_default()
+/// Return true when the project directory has no meaningful source files.
+///
+/// Delegates to `service::wizard::state_machine::is_bare_project` after
+/// reading directory entries via `service::wizard::io`.
+#[cfg(test)]
+fn is_bare_project(project_root: &Path) -> bool {
+    use crate::service::wizard::io as wio;
+    let names = wio::list_dir_names(project_root);
+    state_machine::is_bare_project(&names)
+}
+
+/// Return a generation hint for `step` based on the project at `project_root`.
+///
+/// Reads filesystem state then delegates pure logic to `state_machine`.
+pub(crate) fn generation_hint(step: WizardStep, project_root: &Path) -> String {
+    use crate::service::wizard::io as wio;
+    let names = wio::list_dir_names(project_root);
+    let tools = wio::detect_project_tools(project_root);
+    let is_bare = state_machine::is_bare_project(&names);
+    state_machine::generation_hint(step, is_bare, &tools)
 }

 // ── MCP tool handlers ─────────────────────────────────────────────────────────
@@ -119,9 +73,7 @@ fn step_slug(step: WizardStep) -> String {
 /// `wizard_status` — return current wizard state as a human-readable summary.
 pub(super) fn tool_wizard_status(ctx: &AppContext) -> Result<String, String> {
     let root = ctx.state.get_project_root()?;
-    let state =
-        WizardState::load(&root).ok_or("No wizard active. Run `huskies init` to begin setup.")?;
-    Ok(format_wizard_state(&state))
+    svc::status(&root).map_err(|e| e.to_string())
 }

 /// `wizard_generate` — mark the current step as generating or stage content.
@@ -133,245 +85,8 @@ pub(super) fn tool_wizard_status(ctx: &AppContext) -> Result<String, String> {
 /// until `wizard_confirm` is called.
 pub(super) fn tool_wizard_generate(args: &Value, ctx: &AppContext) -> Result<String, String> {
     let root = ctx.state.get_project_root()?;
-    let mut state = WizardState::load(&root).ok_or("No wizard active.")?;
-
-    if state.completed {
-        return Ok("Wizard is already complete.".to_string());
-    }
-
-    let current_idx = state.current_step_index();
-    let step = state.steps[current_idx].step;
-
-    // If content is provided, stage it for confirmation.
-    if let Some(content) = args.get("content").and_then(|v| v.as_str()) {
-        state.set_step_status(
-            step,
-            StepStatus::AwaitingConfirmation,
-            Some(content.to_string()),
-        );
-        state
-            .save(&root)
-            .map_err(|e| format!("Failed to save wizard state: {e}"))?;
-        return Ok(format!(
-            "Content staged for '{}'. Run `wizard_confirm` to write it to disk, `wizard_retry` to regenerate, or `wizard_skip` to skip.",
-            step.label()
-        ));
-    }
-
-    // No content provided — mark as generating and return a hint.
-    state.set_step_status(step, StepStatus::Generating, None);
-    state
-        .save(&root)
-        .map_err(|e| format!("Failed to save wizard state: {e}"))?;
-
-    let hint = generation_hint(step, &root);
-    let slug = step_slug(step);
-
-    Ok(format!(
-        "Step '{}' marked as generating.\n\n{hint}\n\nOnce you have the content, call `wizard_generate` again with a `content` argument (or PUT /wizard/step/{slug}/content). Then call `wizard_confirm` to write it to disk.",
-        step.label(),
-    ))
-}
-
-/// Return true if the project directory has no meaningful source files.
-pub(crate) fn is_bare_project(project_root: &Path) -> bool {
-    std::fs::read_dir(project_root)
-        .ok()
-        .map(|entries| {
-            let names: Vec<String> = entries
-                .filter_map(|e| e.ok())
-                .map(|e| e.file_name().to_string_lossy().to_string())
-                .collect();
-            // A bare project only has huskies scaffolding and no real code
-            names.iter().all(|n| {
-                n.starts_with('.')
-                    || n == "CLAUDE.md"
-                    || n == "LICENSE"
-                    || n == "README.md"
-                    || n == "script"
-            })
-        })
-        .unwrap_or(true)
-}
-
-/// Return a generation hint for a step based on the project root.
-pub(crate) fn generation_hint(step: WizardStep, project_root: &Path) -> String {
-    let bare = is_bare_project(project_root);
-
-    match step {
-        WizardStep::Context => {
-            if bare {
-                "This is a bare project with no existing code. Ask the user what they want \
-                 to build — the project's purpose, goals, target users, and key features. \
-                 Then generate `.huskies/specs/00_CONTEXT.md` from their answers covering:\n\
-                 - High-level goal of the project\n\
-                 - Core features\n\
-                 - Domain concepts and entities\n\
-                 - Glossary of abbreviations and technical terms"
-                    .to_string()
-            } else {
-                "Read the project source tree and generate a `.huskies/specs/00_CONTEXT.md` describing:\n\
-                 - High-level goal of the project\n\
-                 - Core features\n\
-                 - Domain concepts and entities\n\
-                 - Glossary of abbreviations and technical terms".to_string()
-            }
-        }
-        WizardStep::Stack => {
-            if bare {
-                "This is a bare project with no existing code. Ask the user what language, \
-                 frameworks, and tools they plan to use. Then generate `.huskies/specs/tech/STACK.md` \
-                 from their answers covering:\n\
-                 - Language, frameworks, and runtimes\n\
-                 - Coding standards and linting rules\n\
-                 - Quality gates (commands that must pass before merging)\n\
-                 - Approved libraries and their purpose".to_string()
-            } else {
-                "Read the project source tree and generate a `.huskies/specs/tech/STACK.md` describing:\n\
-                 - Language, frameworks, and runtimes\n\
-                 - Coding standards and linting rules\n\
-                 - Quality gates (commands that must pass before merging)\n\
-                 - Approved libraries and their purpose".to_string()
-            }
-        }
-        WizardStep::TestScript => {
-            if bare {
-                "This is a bare project with no existing code. Read the STACK.md generated \
-                 in the previous step (or ask the user about their stack if it was skipped) \
-                 and generate a `script/test` shell script (#!/usr/bin/env bash, set -euo pipefail) \
-                 with appropriate test commands for their chosen language and framework."
-                    .to_string()
-            } else {
-                let has_cargo = project_root.join("Cargo.toml").exists();
-                let has_pkg = project_root.join("package.json").exists();
-                let has_pnpm = project_root.join("pnpm-lock.yaml").exists();
-                let mut cmds = Vec::new();
-                if has_cargo {
-                    cmds.push("cargo nextest run");
-                }
-                if has_pkg {
-                    cmds.push(if has_pnpm { "pnpm test" } else { "npm test" });
-                }
-                if cmds.is_empty() {
-                    "Generate a `script/test` shell script (#!/usr/bin/env bash, set -euo pipefail) that runs the project's test suite.".to_string()
-                } else {
-                    format!(
-                        "Generate a `script/test` shell script (#!/usr/bin/env bash, set -euo pipefail) that runs: {}",
-                        cmds.join(", ")
-                    )
-                }
-            }
-        }
-        WizardStep::BuildScript => {
-            if bare {
-                "This is a bare project with no existing code. Read the STACK.md generated \
-                 in the previous step (or ask the user about their stack if it was skipped) \
-                 and generate a `script/build` shell script (#!/usr/bin/env bash, set -euo pipefail) \
-                 with appropriate build commands for their chosen language and framework."
-                    .to_string()
-            } else {
-                let has_cargo = project_root.join("Cargo.toml").exists();
-                let has_pkg = project_root.join("package.json").exists();
-                let has_pnpm = project_root.join("pnpm-lock.yaml").exists();
-                let has_frontend_subdir =
-                    project_root.join("frontend").join("package.json").exists()
-                        || project_root.join("client").join("package.json").exists();
-                let has_go = project_root.join("go.mod").exists();
-                let mut cmds = Vec::new();
-                if has_cargo {
-                    cmds.push("cargo build --release");
-                }
-                if has_pkg {
-                    cmds.push(if has_pnpm {
-                        "pnpm run build"
-                    } else {
-                        "npm run build"
-                    });
-                }
-                if has_frontend_subdir {
-                    cmds.push("(cd frontend && npm run build)");
-                }
-                if has_go {
-                    cmds.push("go build ./...");
-                }
-                if cmds.is_empty() {
-                    "Generate a `script/build` shell script (#!/usr/bin/env bash, set -euo pipefail) that builds the project.".to_string()
-                } else {
-                    format!(
-                        "Generate a `script/build` shell script (#!/usr/bin/env bash, set -euo pipefail) that runs: {}",
-                        cmds.join(", ")
-                    )
-                }
-            }
-        }
-        WizardStep::LintScript => {
-            if bare {
-                "This is a bare project with no existing code. Read the STACK.md generated \
-                 in the previous step (or ask the user about their stack if it was skipped) \
-                 and generate a `script/lint` shell script (#!/usr/bin/env bash, set -euo pipefail) \
-                 with appropriate lint commands for their chosen language and framework."
-                    .to_string()
-            } else {
-                let has_cargo = project_root.join("Cargo.toml").exists();
-                let has_pkg = project_root.join("package.json").exists();
-                let has_pnpm = project_root.join("pnpm-lock.yaml").exists();
-                let has_python = project_root.join("pyproject.toml").exists()
-                    || project_root.join("requirements.txt").exists();
-                let has_go = project_root.join("go.mod").exists();
-                let mut cmds = Vec::new();
-                if has_cargo {
-                    cmds.push("cargo fmt --all --check");
-                    cmds.push("cargo clippy -- -D warnings");
-                }
-                if has_pkg {
-                    cmds.push(if has_pnpm {
-                        "pnpm run lint"
-                    } else {
-                        "npm run lint"
-                    });
-                }
-                if has_python {
-                    cmds.push("flake8 . (or ruff check . if ruff is configured)");
-                }
-                if has_go {
-                    cmds.push("go vet ./...");
-                }
-                if cmds.is_empty() {
-                    "Generate a `script/lint` shell script (#!/usr/bin/env bash, set -euo pipefail) that runs the project's linters.".to_string()
-                } else {
-                    format!(
-                        "Generate a `script/lint` shell script (#!/usr/bin/env bash, set -euo pipefail) that runs: {}",
-                        cmds.join(", ")
-                    )
-                }
-            }
-        }
-        WizardStep::ReleaseScript => {
-            if bare {
-                "This is a bare project with no existing code. Read the STACK.md generated \
-                 in the previous step (or ask the user about their stack if it was skipped) \
-                 and generate a `script/release` shell script (#!/usr/bin/env bash, set -euo pipefail) \
-                 with appropriate build/release commands for their chosen language and framework."
-                    .to_string()
-            } else {
-                "Generate a `script/release` shell script (#!/usr/bin/env bash, set -euo pipefail) that builds and releases the project (e.g. `cargo build --release` or `npm run build`).".to_string()
-            }
-        }
-        WizardStep::TestCoverage => {
-            if bare {
-                "This is a bare project with no existing code. Read the STACK.md generated \
-                 in the previous step (or ask the user about their stack if it was skipped) \
-                 and generate a `script/test_coverage` shell script (#!/usr/bin/env bash, set -euo pipefail) \
-                 with appropriate test coverage commands for their chosen language and framework."
-                    .to_string()
-            } else {
-                "Generate a `script/test_coverage` shell script (#!/usr/bin/env bash, set -euo pipefail) that generates a test coverage report (e.g. `cargo llvm-cov nextest` or `npm run coverage`).".to_string()
-            }
-        }
-        WizardStep::Scaffold => {
-            "Scaffold step is handled automatically by `huskies init`.".to_string()
-        }
-    }
+    let content = args.get("content").and_then(|v| v.as_str());
+    svc::generate(&root, content).map_err(|e| e.to_string())
 }

 /// `wizard_confirm` — confirm the current step and write its content to disk.
@@ -382,111 +97,20 @@ pub(crate) fn generation_hint(step: WizardStep, project_root: &Path) -> String {
 /// advances to the next pending step.
 pub(super) fn tool_wizard_confirm(ctx: &AppContext) -> Result<String, String> {
     let root = ctx.state.get_project_root()?;
-    let mut state = WizardState::load(&root).ok_or("No wizard active.")?;
-
-    if state.completed {
-        return Ok("Wizard is already complete.".to_string());
-    }
-
-    let current_idx = state.current_step_index();
-    let step = state.steps[current_idx].step;
-    let content = state.steps[current_idx].content.clone();
-
-    // Write content to disk (only if a file path exists and the file is absent).
-    let write_msg = if let (Some(c), Some(ref path)) = (&content, step_output_path(&root, step)) {
-        let executable = is_script_step(step);
-        match write_if_missing(path, c, executable)? {
-            true => format!(" File written: `{}`.", path.display()),
-            false => format!(" File `{}` already exists — skipped.", path.display()),
-        }
-    } else {
-        String::new()
-    };
-
-    state
-        .confirm_step(step)
-        .map_err(|e| format!("Cannot confirm step: {e}"))?;
-    state
-        .save(&root)
-        .map_err(|e| format!("Failed to save wizard state: {e}"))?;
-
-    let next_idx = state.current_step_index();
-    if state.completed {
-        Ok(format!(
-            "Step '{}' confirmed.{write_msg}\n\nSetup wizard complete! All steps done.",
-            step.label()
-        ))
-    } else {
-        let next = &state.steps[next_idx];
-        Ok(format!(
-            "Step '{}' confirmed.{write_msg}\n\nNext: {} — run `wizard_generate` to begin.",
-            step.label(),
-            next.step.label()
-        ))
-    }
+    svc::confirm(&root).map_err(|e| e.to_string())
 }

 /// `wizard_skip` — skip the current step without writing any file.
 pub(super) fn tool_wizard_skip(ctx: &AppContext) -> Result<String, String> {
     let root = ctx.state.get_project_root()?;
-    let mut state = WizardState::load(&root).ok_or("No wizard active.")?;
-
-    if state.completed {
-        return Ok("Wizard is already complete.".to_string());
-    }
-
-    let current_idx = state.current_step_index();
-    let step = state.steps[current_idx].step;
-
-    state
-        .skip_step(step)
-        .map_err(|e| format!("Cannot skip step: {e}"))?;
-    state
-        .save(&root)
-        .map_err(|e| format!("Failed to save wizard state: {e}"))?;
-
-    let next_idx = state.current_step_index();
-    if state.completed {
-        Ok(format!(
-            "Step '{}' skipped. Setup wizard complete!",
-            step.label()
-        ))
-    } else {
-        let next = &state.steps[next_idx];
-        Ok(format!(
-            "Step '{}' skipped.\n\nNext: {} — run `wizard_generate` to begin.",
-            step.label(),
-            next.step.label()
-        ))
-    }
+    svc::skip(&root).map_err(|e| e.to_string())
 }

 /// `wizard_retry` — discard staged content and reset the current step to
 /// `Pending` so it can be regenerated.
 pub(super) fn tool_wizard_retry(ctx: &AppContext) -> Result<String, String> {
     let root = ctx.state.get_project_root()?;
-    let mut state = WizardState::load(&root).ok_or("No wizard active.")?;
-
-    if state.completed {
-        return Ok("Wizard is already complete.".to_string());
-    }
-
-    let current_idx = state.current_step_index();
-    let step = state.steps[current_idx].step;
-
-    // Clear content and reset to pending.
-    if let Some(s) = state.steps.iter_mut().find(|s| s.step == step) {
-        s.status = StepStatus::Pending;
-        s.content = None;
-    }
-    state
-        .save(&root)
-        .map_err(|e| format!("Failed to save wizard state: {e}"))?;
-
-    Ok(format!(
-        "Step '{}' reset to pending. Run `wizard_generate` to regenerate content.",
-        step.label()
-    ))
+    svc::retry(&root).map_err(|e| e.to_string())
 }

 // ── tests ─────────────────────────────────────────────────────────────────────
@@ -495,6 +119,7 @@ pub(super) fn tool_wizard_retry(ctx: &AppContext) -> Result<String, String> {
 mod tests {
     use super::*;
     use crate::http::context::AppContext;
+    use crate::io::wizard::{StepStatus, WizardState, format_wizard_state};
     use tempfile::TempDir;

     fn setup(dir: &TempDir) -> AppContext {
```
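The old `generation_hint` body being deleted here shows the detection pattern the wizard uses: probe the project root for well-known manifest files (`Cargo.toml`, `package.json`, `pnpm-lock.yaml`, …), then map each detected tool to a suggested command. A minimal standalone sketch of that idea — the function name and signature are illustrative, not the crate's actual API:

```rust
// Illustrative sketch of the manifest-probing idea from `generation_hint`:
// given which manifest files exist, suggest matching test commands.
// (The real code checks `project_root.join(name).exists()`; here we take the
// file names directly so the logic is testable without a filesystem.)
fn suggest_test_cmds(manifests: &[&str]) -> Vec<&'static str> {
    let has = |name: &str| manifests.contains(&name);
    let mut cmds = Vec::new();
    if has("Cargo.toml") {
        cmds.push("cargo nextest run");
    }
    if has("package.json") {
        // Prefer pnpm when a pnpm lockfile is present, mirroring the old code.
        cmds.push(if has("pnpm-lock.yaml") { "pnpm test" } else { "npm test" });
    }
    cmds
}

fn main() {
    println!("{:?}", suggest_test_cmds(&["Cargo.toml", "package.json"]));
}
```

Separating "what exists on disk" from "what to suggest" is exactly the split the refactor makes: `service::wizard::io` reads the directory, and the pure `state_machine` functions decide from that data.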
+17 -2
@@ -7,6 +7,7 @@ pub mod bot_command;
 pub mod bot_config;
 pub mod chat;
 pub mod context;
+pub mod events;
 pub mod health;
 pub mod io;
 pub mod mcp;
@@ -17,6 +18,7 @@ pub mod settings;
 pub(crate) mod test_helpers;
 pub mod workflow;
 
+pub mod gateway;
 pub mod project;
 pub mod wizard;
 pub mod ws;
@@ -68,6 +70,7 @@ pub fn build_routes(
     whatsapp_ctx: Option<Arc<WhatsAppWebhookContext>>,
     slack_ctx: Option<Arc<SlackWebhookContext>>,
     port: u16,
+    event_buffer: Option<events::EventBuffer>,
 ) -> impl poem::Endpoint {
     let ctx_arc = std::sync::Arc::new(ctx);
 
@@ -103,6 +106,10 @@ pub fn build_routes(
         .at("/", get(assets::embedded_index))
         .at("/*path", get(assets::embedded_file));
 
+    if let Some(buf) = event_buffer {
+        route = route.at("/api/events", get(events::events_handler).data(buf));
+    }
+
     if let Some(wa_ctx) = whatsapp_ctx {
         route = route.at(
             "/webhook/whatsapp",
@@ -302,7 +309,7 @@ mod tests {
     fn build_routes_constructs_without_panic() {
         let tmp = tempfile::tempdir().unwrap();
         let ctx = context::AppContext::new_test(tmp.path().to_path_buf());
-        let _endpoint = build_routes(ctx, None, None, 3001);
+        let _endpoint = build_routes(ctx, None, None, 3001, None);
     }
 
     #[test]
@@ -311,6 +318,14 @@ mod tests {
         // ensuring the port parameter flows through to OAuthState.
         let tmp = tempfile::tempdir().unwrap();
         let ctx = context::AppContext::new_test(tmp.path().to_path_buf());
-        let _endpoint = build_routes(ctx, None, None, 9999);
+        let _endpoint = build_routes(ctx, None, None, 9999, None);
+    }
+
+    #[test]
+    fn build_routes_with_event_buffer_constructs_without_panic() {
+        let tmp = tempfile::tempdir().unwrap();
+        let ctx = context::AppContext::new_test(tmp.path().to_path_buf());
+        let buf = events::EventBuffer::new();
+        let _endpoint = build_routes(ctx, None, None, 3001, Some(buf));
     }
 }
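The new `event_buffer: Option<events::EventBuffer>` parameter illustrates a simple wiring pattern: an endpoint is registered only when its backing dependency is supplied, so callers (and tests) opt in explicitly. A minimal self-contained sketch of that pattern, with a hypothetical `Router` type standing in for the real `poem::Route`:

```rust
/// Hypothetical stand-in for a router: just records registered paths.
#[derive(Default)]
struct Router {
    paths: Vec<String>,
}

impl Router {
    fn at(mut self, path: &str) -> Self {
        self.paths.push(path.to_string());
        self
    }
}

/// Mirrors the shape of `build_routes`: the events endpoint exists only
/// when a buffer was provided.
fn build_routes(event_buffer: Option<Vec<String>>) -> Router {
    let mut route = Router::default().at("/").at("/*path");
    if let Some(_buf) = event_buffer {
        route = route.at("/api/events");
    }
    route
}

fn main() {
    assert_eq!(build_routes(None).paths.len(), 2);
    assert_eq!(build_routes(Some(Vec::new())).paths.len(), 3);
    println!("ok");
}
```

Passing `None` in existing tests (as the updated call sites above do) keeps their behavior identical to before the parameter existed.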
+68 -274
@@ -1,102 +1,23 @@
-//! OAuth endpoints — Anthropic OAuth callback and token exchange flow.
-use crate::llm::oauth;
+//! OAuth endpoints — thin HTTP adapters over `service::oauth`.
+//!
+//! Business logic lives in `service::oauth`. These handlers only:
+//! 1. Extract parameters from the HTTP request.
+//! 2. Call the service layer.
+//! 3. Map service errors to HTTP responses.
+use crate::service::oauth as svc;
 use crate::slog;
 use poem::handler;
 use poem::http::StatusCode;
 use poem::web::{Data, Query, Redirect};
 use serde::Deserialize;
-use std::collections::HashMap;
-use std::sync::{Arc, Mutex};
+use std::sync::Arc;
 
-/// Anthropic OAuth configuration.
-const CLIENT_ID: &str = "9d1c250a-e61b-44d9-88ed-5944d1962f5e";
-/// Claude.ai authorize URL (for Max/Pro subscriptions).
-const AUTHORIZE_URL: &str = "https://claude.com/cai/oauth/authorize";
-const TOKEN_ENDPOINT: &str = "https://platform.claude.com/v1/oauth/token";
-const SCOPES: &str =
-    "user:inference user:profile user:mcp_servers user:sessions:claude_code user:file_upload";
-
-/// In-memory store for pending PKCE flows, keyed by state parameter.
-#[derive(Clone)]
-pub struct OAuthState {
-    /// Maps state → (code_verifier, redirect_uri)
-    pending: Arc<Mutex<HashMap<String, PendingFlow>>>,
-    /// The port the server is listening on (for building redirect_uri).
-    port: u16,
-}
-
-struct PendingFlow {
-    code_verifier: String,
-    redirect_uri: String,
-}
-
-impl OAuthState {
-    pub fn new(port: u16) -> Self {
-        Self {
-            pending: Arc::new(Mutex::new(HashMap::new())),
-            port,
-        }
-    }
-
-    fn callback_url(&self) -> String {
-        format!("http://localhost:{}/callback", self.port)
-    }
-}
-
-/// Generate a random alphanumeric string of the given length.
-fn random_string(len: usize) -> String {
-    use std::collections::hash_map::RandomState;
-    use std::hash::{BuildHasher, Hasher};
-    let mut s = String::with_capacity(len);
-    let chars = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
-    for _ in 0..len {
-        let hasher = RandomState::new().build_hasher();
-        let idx = hasher.finish() as usize % chars.len();
-        s.push(chars[idx] as char);
-    }
-    s
-}
-
-/// Compute the S256 PKCE code challenge from a code verifier.
-fn compute_code_challenge(verifier: &str) -> String {
-    use sha2::{Digest, Sha256};
-    let hash = Sha256::digest(verifier.as_bytes());
-    base64url_encode(&hash)
-}
-
-/// Base64url-encode without padding (RFC 7636).
-fn base64url_encode(data: &[u8]) -> String {
-    // Standard base64 then convert to base64url
-    const CHARS: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
-    let mut result = String::new();
-    let mut i = 0;
-    while i < data.len() {
-        let b0 = data[i] as u32;
-        let b1 = if i + 1 < data.len() { data[i + 1] as u32 } else { 0 };
-        let b2 = if i + 2 < data.len() { data[i + 2] as u32 } else { 0 };
-        let triple = (b0 << 16) | (b1 << 8) | b2;
-
-        result.push(CHARS[((triple >> 18) & 0x3F) as usize] as char);
-        result.push(CHARS[((triple >> 12) & 0x3F) as usize] as char);
-        if i + 1 < data.len() {
-            result.push(CHARS[((triple >> 6) & 0x3F) as usize] as char);
-        }
-        if i + 2 < data.len() {
-            result.push(CHARS[(triple & 0x3F) as usize] as char);
-        }
-        i += 3;
-    }
-    // Convert to base64url: replace + with -, / with _
-    result.replace('+', "-").replace('/', "_")
-}
+// Re-export service types so that existing tests in this file continue to
+// compile unchanged (they use `use super::*` and call these by name).
+pub(crate) use svc::OAuthState;
+// Re-exported for tests only (tests use `use super::*` to call these by name).
+#[cfg(test)]
+pub(crate) use svc::pkce::{base64url_encode, compute_code_challenge, random_string};
 
 /// `GET /oauth/authorize` — Initiates the OAuth flow.
 ///
@@ -104,35 +25,11 @@ fn base64url_encode(data: &[u8]) -> String {
 /// Anthropic's authorization page.
 #[handler]
 pub async fn oauth_authorize(state: Data<&Arc<OAuthState>>) -> Redirect {
-    let code_verifier = random_string(128);
-    let code_challenge = compute_code_challenge(&code_verifier);
-    let csrf_state = random_string(32);
-    let redirect_uri = state.callback_url();
-
-    slog!("[oauth] Starting OAuth flow, state={}", csrf_state);
-
-    // Store the pending flow
-    state.pending.lock().unwrap().insert(
-        csrf_state.clone(),
-        PendingFlow {
-            code_verifier,
-            redirect_uri: redirect_uri.clone(),
-        },
-    );
-
-    let authorize_url = format!(
-        "{}?code=true&client_id={}&response_type=code&redirect_uri={}&scope={}&code_challenge={}&code_challenge_method=S256&state={}",
-        AUTHORIZE_URL,
-        CLIENT_ID,
-        percent_encode(&redirect_uri),
-        percent_encode(SCOPES),
-        percent_encode(&code_challenge),
-        percent_encode(&csrf_state),
-    );
-
-    Redirect::temporary(authorize_url)
+    let (_, url) = svc::initiate_flow(&state);
+    Redirect::temporary(url)
 }
 
+/// Query parameters received on the OAuth callback URL.
 #[derive(Deserialize)]
 pub struct CallbackParams {
     code: Option<String>,
@@ -141,18 +38,6 @@ pub struct CallbackParams {
     error_description: Option<String>,
 }
 
-/// Response from the Anthropic OAuth token endpoint.
-#[derive(Deserialize)]
-struct TokenResponse {
-    access_token: String,
-    refresh_token: Option<String>,
-    expires_in: u64,
-    #[allow(dead_code)]
-    token_type: Option<String>,
-    #[allow(dead_code)]
-    scope: Option<String>,
-}
-
 /// `GET /oauth/callback` — Handles the OAuth redirect from Anthropic.
 ///
 /// Exchanges the authorization code for tokens and writes them to
@@ -162,7 +47,7 @@ pub async fn oauth_callback(
     state: Data<&Arc<OAuthState>>,
     Query(params): Query<CallbackParams>,
 ) -> poem::Response {
-    // Handle errors from Anthropic
+    // Handle provider-side errors (e.g. user denied access).
     if let Some(err) = &params.error {
         let desc = params
             .error_description
@@ -177,7 +62,7 @@ pub async fn oauth_callback(
     }
 
     let code = match &params.code {
-        Some(c) => c,
+        Some(c) => c.clone(),
        None => {
            return html_response(
                StatusCode::BAD_REQUEST,
@@ -188,7 +73,7 @@ pub async fn oauth_callback(
     };
 
     let csrf_state = match &params.state {
-        Some(s) => s,
+        Some(s) => s.clone(),
         None => {
             return html_response(
                 StatusCode::BAD_REQUEST,
@@ -198,164 +83,73 @@ pub async fn oauth_callback(
         }
     };
 
-    // Look up and remove the pending flow
-    let pending = state.pending.lock().unwrap().remove(csrf_state);
-    let flow = match pending {
-        Some(f) => f,
-        None => {
-            slog!("[oauth] Unknown state parameter: {}", csrf_state);
-            return html_response(
-                StatusCode::BAD_REQUEST,
-                "Invalid State",
-                "Unknown or expired state parameter. Please try logging in again.",
-            );
-        }
-    };
-
-    slog!("[oauth] Received callback, exchanging code for tokens");
-
-    // Exchange the authorization code for tokens
-    let client = reqwest::Client::new();
-    let resp = client
-        .post(TOKEN_ENDPOINT)
-        .header("Content-Type", "application/json")
-        .json(&serde_json::json!({
-            "grant_type": "authorization_code",
-            "code": code,
-            "client_id": CLIENT_ID,
-            "redirect_uri": &flow.redirect_uri,
-            "code_verifier": &flow.code_verifier,
-            "state": csrf_state,
-        }))
-        .send()
-        .await;
-
-    let resp = match resp {
-        Ok(r) => r,
-        Err(e) => {
-            slog!("[oauth] Token exchange request failed: {}", e);
-            return html_response(
-                StatusCode::INTERNAL_SERVER_ERROR,
-                "Token Exchange Failed",
-                &format!("Failed to contact Anthropic: {e}"),
-            );
-        }
-    };
-
-    let status = resp.status();
-    let body = resp.text().await.unwrap_or_default();
-
-    slog!(
-        "[oauth] Token exchange response (HTTP {}): {}",
-        status,
-        body
-    );
-
-    if !status.is_success() {
-        return html_response(
-            StatusCode::INTERNAL_SERVER_ERROR,
-            "Token Exchange Failed",
-            &format!("Anthropic returned HTTP {status}. Please try again."),
-        );
-    }
-
-    let token_resp: TokenResponse = match serde_json::from_str(&body) {
-        Ok(t) => t,
-        Err(e) => {
-            slog!("[oauth] Failed to parse token response: {}", e);
-            return html_response(
-                StatusCode::INTERNAL_SERVER_ERROR,
-                "Token Parse Failed",
-                "Received an unexpected response from Anthropic.",
-            );
-        }
-    };
-
-    let now_ms = std::time::SystemTime::now()
-        .duration_since(std::time::UNIX_EPOCH)
-        .map(|d| d.as_millis() as u64)
-        .unwrap_or(0);
-
-    let creds = oauth::CredentialsFile {
-        claude_ai_oauth: oauth::OAuthCredentials {
-            access_token: token_resp.access_token,
-            refresh_token: token_resp.refresh_token.unwrap_or_default(),
-            expires_at: now_ms + (token_resp.expires_in * 1000),
-            scopes: SCOPES.split(' ').map(|s| s.to_string()).collect(),
-            subscription_type: None,
-            rate_limit_tier: None,
-        },
-    };
-
-    if let Err(e) = oauth::write_credentials(&creds) {
-        slog!("[oauth] Failed to write credentials: {}", e);
-        return html_response(
-            StatusCode::INTERNAL_SERVER_ERROR,
-            "Credential Write Failed",
-            &format!("Tokens received but failed to save: {e}"),
-        );
-    }
-
-    slog!("[oauth] Successfully authenticated and saved credentials");
-
-    html_response(
-        StatusCode::OK,
-        "Authenticated!",
-        "Claude OAuth login successful. You can close this tab and return to Huskies.",
-    )
+    match svc::exchange_code(&state, &code, &csrf_state).await {
+        Ok(()) => html_response(
+            StatusCode::OK,
+            "Authenticated!",
+            "Claude OAuth login successful. You can close this tab and return to Huskies.",
+        ),
+        Err(e) => map_service_error(e),
+    }
 }
 
-/// Check whether valid (non-expired) OAuth credentials exist.
+/// `GET /oauth/status` — Check whether valid (non-expired) OAuth credentials exist.
 #[handler]
 pub async fn oauth_status() -> poem::Response {
-    match oauth::read_credentials() {
-        Ok(creds) => {
-            let now_ms = std::time::SystemTime::now()
-                .duration_since(std::time::UNIX_EPOCH)
-                .map(|d| d.as_millis() as u64)
-                .unwrap_or(0);
-            let expired = now_ms > creds.claude_ai_oauth.expires_at;
-            let body = serde_json::json!({
-                "authenticated": true,
-                "expired": expired,
-                "expires_at": creds.claude_ai_oauth.expires_at,
-                "has_refresh_token": !creds.claude_ai_oauth.refresh_token.is_empty(),
-            });
-            poem::Response::builder()
-                .status(StatusCode::OK)
-                .header("Content-Type", "application/json")
-                .body(body.to_string())
-        }
-        Err(_) => {
-            let body = serde_json::json!({
-                "authenticated": false,
-                "expired": false,
-                "expires_at": 0,
-                "has_refresh_token": false,
-            });
-            poem::Response::builder()
-                .status(StatusCode::OK)
-                .header("Content-Type", "application/json")
-                .body(body.to_string())
-        }
-    }
+    let status = svc::check_status();
+    let body = serde_json::json!({
+        "authenticated": status.authenticated,
+        "expired": status.expired,
+        "expires_at": status.expires_at,
+        "has_refresh_token": status.has_refresh_token,
+    });
+    poem::Response::builder()
+        .status(StatusCode::OK)
+        .header("Content-Type", "application/json")
+        .body(body.to_string())
 }
 
-/// Percent-encode a string for use in URL query parameters.
-fn percent_encode(input: &str) -> String {
-    let mut encoded = String::with_capacity(input.len() * 3);
-    for byte in input.bytes() {
-        match byte {
-            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
-                encoded.push(byte as char);
-            }
-            _ => {
-                encoded.push_str(&format!("%{byte:02X}"));
-            }
-        }
-    }
-    encoded
-}
+// ── Private helpers ───────────────────────────────────────────────────────────
+
+/// Map a service-layer `Error` to an HTML HTTP response.
+fn map_service_error(e: svc::Error) -> poem::Response {
+    use svc::Error;
+    match e {
+        Error::MissingCode => html_response(
+            StatusCode::BAD_REQUEST,
+            "Missing Code",
+            "No authorization code received.",
+        ),
+        Error::MissingState => html_response(
+            StatusCode::BAD_REQUEST,
+            "Missing State",
+            "No state parameter received.",
+        ),
+        Error::InvalidState(msg) => html_response(StatusCode::BAD_REQUEST, "Invalid State", &msg),
+        Error::AuthorizationDenied(msg) => {
+            html_response(StatusCode::BAD_REQUEST, "Authentication Failed", &msg)
+        }
+        Error::InvalidGrant(msg) => {
+            html_response(StatusCode::BAD_REQUEST, "Token Exchange Failed", &msg)
+        }
+        Error::Network(msg) => html_response(
+            StatusCode::INTERNAL_SERVER_ERROR,
+            "Token Exchange Failed",
+            &msg,
+        ),
+        Error::TokenExpired(msg) => html_response(StatusCode::UNAUTHORIZED, "Token Expired", &msg),
+        Error::TokenStorage(msg) => html_response(
+            StatusCode::INTERNAL_SERVER_ERROR,
+            "Credential Write Failed",
+            &msg,
+        ),
+        Error::Parse(msg) => html_response(
+            StatusCode::INTERNAL_SERVER_ERROR,
+            "Token Parse Failed",
+            &msg,
+        ),
    }
 }
 
 fn html_response(status: StatusCode, title: &str, message: &str) -> poem::Response {
     let html = format!(
+24 -10
@@ -1,6 +1,7 @@
-//! HTTP project endpoints — REST API for project initialization and context management.
-use crate::http::context::{AppContext, OpenApiResult, bad_request};
-use crate::io::fs;
+//! HTTP project endpoints — thin adapters over `service::project`.
+use crate::http::context::{AppContext, OpenApiResult, bad_request, not_found};
+use crate::service::project::{self as svc, Error as ProjectError};
+use poem::http::StatusCode;
 use poem_openapi::{Object, OpenApi, Tags, payload::Json};
 use serde::Deserialize;
 use std::sync::Arc;
@@ -15,6 +16,17 @@ struct PathPayload {
     path: String,
 }
 
+/// Map a typed [`ProjectError`] to a `poem::Error` with the appropriate HTTP status.
+fn map_project_error(e: ProjectError) -> poem::Error {
+    match e {
+        ProjectError::PathNotFound(msg) => not_found(msg),
+        ProjectError::NotADirectory(msg) => bad_request(msg),
+        ProjectError::Internal(msg) => {
+            poem::Error::from_string(msg, StatusCode::INTERNAL_SERVER_ERROR)
+        }
+    }
+}
+
 pub struct ProjectApi {
     pub ctx: Arc<AppContext>,
 }
@@ -26,8 +38,8 @@ impl ProjectApi {
     /// Returns null when no project is open.
     #[oai(path = "/project", method = "get")]
     async fn get_current_project(&self) -> OpenApiResult<Json<Option<String>>> {
-        let result = fs::get_current_project(&self.ctx.state, self.ctx.store.as_ref())
-            .map_err(bad_request)?;
+        let result = svc::get_current_project(&self.ctx.state, self.ctx.store.as_ref())
+            .map_err(map_project_error)?;
         Ok(Json(result))
     }
 
@@ -36,14 +48,14 @@ impl ProjectApi {
     /// Persists the selected path for later sessions.
     #[oai(path = "/project", method = "post")]
     async fn open_project(&self, payload: Json<PathPayload>) -> OpenApiResult<Json<String>> {
-        let confirmed = fs::open_project(
+        let confirmed = svc::open_project(
             payload.0.path,
             &self.ctx.state,
             self.ctx.store.as_ref(),
             self.ctx.agents.port(),
         )
         .await
-        .map_err(bad_request)?;
+        .map_err(map_project_error)?;
         Ok(Json(confirmed))
     }
 
@@ -55,21 +67,23 @@ impl ProjectApi {
             "[MERGE-DEBUG] DELETE /project called! \
             Backtrace: this is the only code path that clears project_root."
         );
-        fs::close_project(&self.ctx.state, self.ctx.store.as_ref()).map_err(bad_request)?;
+        svc::close_project(&self.ctx.state, self.ctx.store.as_ref()).map_err(map_project_error)?;
         Ok(Json(true))
     }
 
     /// List known projects from the store.
     #[oai(path = "/projects", method = "get")]
     async fn list_known_projects(&self) -> OpenApiResult<Json<Vec<String>>> {
-        let projects = fs::get_known_projects(self.ctx.store.as_ref()).map_err(bad_request)?;
+        let projects =
+            svc::get_known_projects(self.ctx.store.as_ref()).map_err(map_project_error)?;
         Ok(Json(projects))
     }
 
     /// Forget a known project path.
     #[oai(path = "/projects/forget", method = "post")]
     async fn forget_known_project(&self, payload: Json<PathPayload>) -> OpenApiResult<Json<bool>> {
-        fs::forget_known_project(payload.0.path, self.ctx.store.as_ref()).map_err(bad_request)?;
+        svc::forget_known_project(payload.0.path, self.ctx.store.as_ref())
+            .map_err(map_project_error)?;
         Ok(Json(true))
     }
 }
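The `map_project_error` helper above replaces the previous blanket `bad_request` mapping: each typed service-layer error variant now chooses its own HTTP status. The pattern in isolation, with a hypothetical `Error` enum and plain `u16` statuses standing in for the real `ProjectError` and poem types:

```rust
/// Hypothetical service-layer error, mirroring the shape of ProjectError.
enum Error {
    PathNotFound(String),
    NotADirectory(String),
    Internal(String),
}

/// Each variant maps to its own status instead of everything collapsing
/// to 400 Bad Request, so clients can distinguish "wrong path" from
/// "server bug".
fn status_for(e: &Error) -> u16 {
    match e {
        Error::PathNotFound(_) => 404,
        Error::NotADirectory(_) => 400,
        Error::Internal(_) => 500,
    }
}

fn main() {
    assert_eq!(status_for(&Error::PathNotFound("x".into())), 404);
    assert_eq!(status_for(&Error::NotADirectory("x".into())), 400);
    assert_eq!(status_for(&Error::Internal("x".into())), 500);
    println!("ok");
}
```

The exhaustive `match` is the point of the refactor: adding a new error variant to the service layer forces the HTTP adapter to pick a status for it at compile time.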
+276 -25
@@ -1,12 +1,39 @@
 //! HTTP settings endpoints — REST API for user preferences and editor configuration.
 use crate::http::context::{AppContext, OpenApiResult, bad_request};
+use crate::service::settings as svc;
 use crate::store::StoreOps;
 use poem_openapi::{Object, OpenApi, Tags, param::Query, payload::Json};
 use serde::Serialize;
 use serde_json::json;
+#[cfg(test)]
+use std::path::Path;
 use std::sync::Arc;
 
-const EDITOR_COMMAND_KEY: &str = "editor_command";
+// Re-export service types so the test module (which does `use super::*`) can
+// access them without modification.
+pub use svc::EDITOR_COMMAND_KEY;
+pub use svc::ProjectSettings;
+#[cfg(test)]
+pub use svc::settings_from_config;
+
+/// Thin wrapper — delegates to [`svc::validate_project_settings`] and maps
+/// the typed error to `String` so existing tests calling `.unwrap_err()` can
+/// call `.contains()` directly.
+fn validate_project_settings(s: &ProjectSettings) -> Result<(), String> {
+    svc::validate_project_settings(s).map_err(|e| e.to_string())
+}
+
+/// Thin wrapper — delegates to [`svc::write_project_settings`] and maps the
+/// typed error to `String` so existing tests can call `.unwrap()` unchanged.
+#[cfg(test)]
+fn write_project_settings(project_root: &Path, s: &ProjectSettings) -> Result<(), String> {
+    svc::write_project_settings(project_root, s).map_err(|e| e.to_string())
+}
+
+/// Return the configured editor command from the store, or `None` if not set.
+pub fn get_editor_command_from_store(ctx: &AppContext) -> Option<String> {
+    svc::get_editor_command(&*ctx.store)
+}
 
 #[derive(Tags)]
 enum SettingsTags {
@@ -37,11 +64,7 @@ impl SettingsApi {
     /// Get the configured editor command (e.g. "zed", "code", "cursor"), or null if not set.
     #[oai(path = "/settings/editor", method = "get")]
     async fn get_editor(&self) -> OpenApiResult<Json<EditorCommandResponse>> {
-        let editor_command = self
-            .ctx
-            .store
-            .get(EDITOR_COMMAND_KEY)
-            .and_then(|v| v.as_str().map(|s| s.to_string()));
+        let editor_command = get_editor_command_from_store(&self.ctx);
         Ok(Json(EditorCommandResponse { editor_command }))
     }
 
@@ -55,22 +78,38 @@ impl SettingsApi {
         path: Query<String>,
         line: Query<Option<u32>>,
     ) -> OpenApiResult<Json<OpenFileResponse>> {
-        let editor_command = get_editor_command_from_store(&self.ctx)
-            .ok_or_else(|| bad_request("No editor configured".to_string()))?;
-
-        let file_ref = match line.0 {
-            Some(l) => format!("{}:{}", path.0, l),
-            None => path.0.clone(),
-        };
-
-        std::process::Command::new(&editor_command)
-            .arg(&file_ref)
-            .spawn()
-            .map_err(|e| bad_request(format!("Failed to open editor: {e}")))?;
-
+        svc::open_file_in_editor(&*self.ctx.store, &path.0, line.0)
+            .map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(OpenFileResponse { success: true }))
     }
 
+    /// Get current project.toml scalar settings as JSON.
+    #[oai(path = "/settings", method = "get")]
+    async fn get_settings(&self) -> OpenApiResult<Json<ProjectSettings>> {
+        let project_root = self.ctx.state.get_project_root().map_err(bad_request)?;
+        let s =
+            svc::load_project_settings(&project_root).map_err(|e| bad_request(e.to_string()))?;
+        Ok(Json(s))
+    }
+
+    /// Update project.toml scalar settings. Array sections (component, agent) are preserved.
+    ///
+    /// Returns 400 if the input fails validation (e.g. unknown qa mode, negative max_retries).
+    #[oai(path = "/settings", method = "put")]
+    async fn put_settings(
+        &self,
+        payload: Json<ProjectSettings>,
+    ) -> OpenApiResult<Json<ProjectSettings>> {
+        validate_project_settings(&payload.0).map_err(bad_request)?;
+        let project_root = self.ctx.state.get_project_root().map_err(bad_request)?;
+        svc::write_project_settings(&project_root, &payload.0)
+            .map_err(|e| bad_request(e.to_string()))?;
+        // Re-read to confirm what was written
+        let s =
+            svc::load_project_settings(&project_root).map_err(|e| bad_request(e.to_string()))?;
+        Ok(Json(s))
+    }
+
     /// Set the preferred editor command (e.g. "zed", "code", "cursor").
     /// Pass null or empty string to clear the preference.
     #[oai(path = "/settings/editor", method = "put")]
@@ -102,12 +141,6 @@ impl SettingsApi {
     }
 }
 
-pub fn get_editor_command_from_store(ctx: &AppContext) -> Option<String> {
-    ctx.store
-        .get(EDITOR_COMMAND_KEY)
-        .and_then(|v| v.as_str().map(|s| s.to_string()))
-}
-
 #[cfg(test)]
 impl From<std::sync::Arc<AppContext>> for SettingsApi {
     fn from(ctx: std::sync::Arc<AppContext>) -> Self {
@@ -360,4 +393,222 @@ mod tests {
             .await;
         assert!(result.is_err());
     }
 
+    // ── /api/settings GET/PUT ──────────────────────────────────────────────
+
+    fn default_project_settings() -> ProjectSettings {
+        let cfg = crate::config::ProjectConfig::default();
+        settings_from_config(&cfg)
+    }
+
+    #[tokio::test]
+    async fn get_settings_returns_defaults_when_no_project_toml() {
+        let dir = TempDir::new().unwrap();
+        // Create .huskies dir so project root detection works but no project.toml
+        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
+        let ctx = AppContext::new_test(dir.path().to_path_buf());
+        let api = SettingsApi { ctx: Arc::new(ctx) };
+        let result = api.get_settings().await.unwrap().0;
+        assert_eq!(result.default_qa, "server");
+        assert_eq!(result.max_retries, 2);
+        assert!(result.rate_limit_notifications);
+    }
+
+    #[tokio::test]
+    async fn put_settings_writes_and_returns_settings() {
+        let dir = TempDir::new().unwrap();
+        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
+        let ctx = AppContext::new_test(dir.path().to_path_buf());
+        let api = SettingsApi { ctx: Arc::new(ctx) };
+
+        let mut s = default_project_settings();
+        s.default_qa = "agent".to_string();
+        s.max_retries = 5;
+        s.rate_limit_notifications = false;
+
+        let result = api.put_settings(Json(s)).await.unwrap().0;
+        assert_eq!(result.default_qa, "agent");
+        assert_eq!(result.max_retries, 5);
+        assert!(!result.rate_limit_notifications);
+    }
+
+    #[tokio::test]
+    async fn put_settings_preserves_agent_sections() {
+        let dir = TempDir::new().unwrap();
+        let huskies_dir = dir.path().join(".huskies");
+        std::fs::create_dir_all(&huskies_dir).unwrap();
+
+        // Write a project.toml with agent sections
+        std::fs::write(
+            huskies_dir.join("project.toml"),
+            r#"
+[[agent]]
+name = "coder-1"
+model = "sonnet"
+stage = "coder"
+
+[[component]]
+name = "server"
+path = "."
+"#,
+        )
+        .unwrap();
+
+        let ctx = AppContext::new_test(dir.path().to_path_buf());
+        let api = SettingsApi { ctx: Arc::new(ctx) };
+
+        let mut s = default_project_settings();
+        s.default_qa = "human".to_string();
+        api.put_settings(Json(s)).await.unwrap();
+
+        // Re-read the file and verify agent/component sections are still there
|
||||||
|
let written = std::fs::read_to_string(huskies_dir.join("project.toml")).unwrap();
|
||||||
|
assert!(
|
||||||
|
written.contains("coder-1"),
|
||||||
|
"agent section should be preserved"
|
||||||
|
);
|
||||||
|
assert!(
|
||||||
|
written.contains("server"),
|
||||||
|
"component section should be preserved"
|
||||||
|
);
|
||||||
|
assert!(written.contains("human"), "new setting should be written");
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn put_settings_rejects_invalid_qa_mode() {
|
||||||
|
let dir = TempDir::new().unwrap();
|
||||||
|
std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
|
||||||
|
let ctx = AppContext::new_test(dir.path().to_path_buf());
|
||||||
|
let api = SettingsApi { ctx: Arc::new(ctx) };
|
||||||
|
|
||||||
|
let mut s = default_project_settings();
|
||||||
|
s.default_qa = "invalid_mode".to_string();
|
||||||
|
|
||||||
|
let result = api.put_settings(Json(s)).await;
|
||||||
|
assert!(result.is_err());
|
||||||
|
let err = result.unwrap_err();
|
||||||
|
assert_eq!(err.status(), poem::http::StatusCode::BAD_REQUEST);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn validate_project_settings_accepts_valid_qa_modes() {
|
||||||
|
for mode in &["server", "agent", "human"] {
|
||||||
|
let s = ProjectSettings {
|
||||||
|
default_qa: mode.to_string(),
|
||||||
|
default_coder_model: None,
|
||||||
|
max_coders: None,
|
||||||
|
max_retries: 2,
|
||||||
|
base_branch: None,
|
||||||
|
rate_limit_notifications: true,
|
||||||
|
timezone: None,
|
||||||
|
rendezvous: None,
|
||||||
|
watcher_sweep_interval_secs: 60,
|
||||||
|
watcher_done_retention_secs: 14400,
|
||||||
|
};
|
||||||
|
assert!(
|
||||||
|
validate_project_settings(&s).is_ok(),
|
||||||
|
"qa mode '{mode}' should be valid"
|
||||||
|
);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn validate_project_settings_rejects_unknown_qa_mode() {
|
||||||
|
let s = ProjectSettings {
|
||||||
|
default_qa: "robot".to_string(),
|
||||||
|
default_coder_model: None,
|
||||||
|
max_coders: None,
|
||||||
|
max_retries: 2,
|
||||||
|
base_branch: None,
|
||||||
|
rate_limit_notifications: true,
|
||||||
|
timezone: None,
|
||||||
|
rendezvous: None,
|
||||||
|
watcher_sweep_interval_secs: 60,
|
||||||
|
watcher_done_retention_secs: 14400,
|
||||||
|
};
|
||||||
|
let err = validate_project_settings(&s).unwrap_err();
|
||||||
|
assert!(err.contains("robot"));
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn write_and_read_project_settings_roundtrip() {
|
||||||
|
let dir = TempDir::new().unwrap();
|
||||||
|
std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
|
||||||
|
|
||||||
|
let s = ProjectSettings {
|
||||||
|
default_qa: "agent".to_string(),
|
||||||
|
default_coder_model: Some("opus".to_string()),
|
||||||
|
max_coders: Some(2),
|
||||||
|
max_retries: 3,
|
||||||
|
base_branch: Some("main".to_string()),
|
||||||
|
rate_limit_notifications: false,
|
||||||
|
timezone: Some("America/New_York".to_string()),
|
||||||
|
rendezvous: Some("ws://host:3001/crdt-sync".to_string()),
|
||||||
|
watcher_sweep_interval_secs: 30,
|
||||||
|
watcher_done_retention_secs: 7200,
|
||||||
|
};
|
||||||
|
|
||||||
|
write_project_settings(dir.path(), &s).unwrap();
|
||||||
|
|
||||||
|
let config = crate::config::ProjectConfig::load(dir.path()).unwrap();
|
||||||
|
let loaded = settings_from_config(&config);
|
||||||
|
|
||||||
|
assert_eq!(loaded.default_qa, "agent");
|
||||||
|
assert_eq!(loaded.default_coder_model, Some("opus".to_string()));
|
||||||
|
assert_eq!(loaded.max_coders, Some(2));
|
||||||
|
assert_eq!(loaded.max_retries, 3);
|
||||||
|
assert_eq!(loaded.base_branch, Some("main".to_string()));
|
||||||
|
assert!(!loaded.rate_limit_notifications);
|
||||||
|
assert_eq!(loaded.timezone, Some("America/New_York".to_string()));
|
||||||
|
assert_eq!(
|
||||||
|
loaded.rendezvous,
|
||||||
|
Some("ws://host:3001/crdt-sync".to_string())
|
||||||
|
);
|
||||||
|
assert_eq!(loaded.watcher_sweep_interval_secs, 30);
|
||||||
|
assert_eq!(loaded.watcher_done_retention_secs, 7200);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn write_project_settings_clears_optional_fields_when_none() {
|
||||||
|
let dir = TempDir::new().unwrap();
|
||||||
|
let huskies_dir = dir.path().join(".huskies");
|
||||||
|
std::fs::create_dir_all(&huskies_dir).unwrap();
|
||||||
|
|
||||||
|
// First write with optional fields set
|
||||||
|
let s_with = ProjectSettings {
|
||||||
|
default_qa: "server".to_string(),
|
||||||
|
default_coder_model: Some("sonnet".to_string()),
|
||||||
|
max_coders: Some(3),
|
||||||
|
max_retries: 2,
|
||||||
|
base_branch: Some("master".to_string()),
|
||||||
|
rate_limit_notifications: true,
|
||||||
|
timezone: Some("UTC".to_string()),
|
||||||
|
rendezvous: None,
|
||||||
|
watcher_sweep_interval_secs: 60,
|
||||||
|
watcher_done_retention_secs: 14400,
|
||||||
|
};
|
||||||
|
write_project_settings(dir.path(), &s_with).unwrap();
|
||||||
|
|
||||||
|
// Then write with optional fields cleared
|
||||||
|
let s_clear = ProjectSettings {
|
||||||
|
default_qa: "server".to_string(),
|
||||||
|
default_coder_model: None,
|
||||||
|
max_coders: None,
|
||||||
|
max_retries: 2,
|
||||||
|
base_branch: None,
|
||||||
|
rate_limit_notifications: true,
|
||||||
|
timezone: None,
|
||||||
|
rendezvous: None,
|
||||||
|
watcher_sweep_interval_secs: 60,
|
||||||
|
watcher_done_retention_secs: 14400,
|
||||||
|
};
|
||||||
|
write_project_settings(dir.path(), &s_clear).unwrap();
|
||||||
|
|
||||||
|
let config = crate::config::ProjectConfig::load(dir.path()).unwrap();
|
||||||
|
let loaded = settings_from_config(&config);
|
||||||
|
assert!(loaded.default_coder_model.is_none());
|
||||||
|
assert!(loaded.max_coders.is_none());
|
||||||
|
assert!(loaded.base_branch.is_none());
|
||||||
|
assert!(loaded.timezone.is_none());
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
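The new tests above expect `validate_project_settings` to accept exactly the QA modes "server", "agent", and "human", and to reject anything else with an error message naming the offending value. A minimal standalone sketch of that check (the function name `validate_qa_mode` and the exact message wording are assumptions; the real validator takes the full `ProjectSettings` struct):

```rust
// Sketch only: the real validate_project_settings validates a whole
// ProjectSettings struct; this isolates just the qa-mode check the tests hit.
fn validate_qa_mode(mode: &str) -> Result<(), String> {
    const VALID: [&str; 3] = ["server", "agent", "human"];
    if VALID.contains(&mode) {
        Ok(())
    } else {
        // Naming the bad value in the error is what lets the test assert
        // `err.contains("robot")`.
        Err(format!(
            "unknown qa mode '{mode}', expected one of: server, agent, human"
        ))
    }
}

fn main() {
    assert!(validate_qa_mode("server").is_ok());
    assert!(validate_qa_mode("agent").is_ok());
    let err = validate_qa_mode("robot").unwrap_err();
    assert!(err.contains("robot"));
}
```

Returning a `String` error keeps the HTTP layer free to wrap it in a 400 response, matching `put_settings_rejects_invalid_qa_mode`.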
+11 -31
@@ -1,6 +1,7 @@
 //! HTTP wizard endpoints — REST API for the project setup wizard.
 use crate::http::context::{AppContext, OpenApiResult, bad_request, not_found};
-use crate::io::wizard::{StepStatus, WizardState, WizardStep};
+use crate::io::wizard::{WizardState, WizardStep};
+use crate::service::wizard as svc;
 use poem_openapi::{Object, OpenApi, Tags, param::Path, payload::Json};
 use serde::{Deserialize, Serialize};
 use std::sync::Arc;
@@ -80,8 +81,7 @@ impl WizardApi {
     #[oai(path = "/wizard", method = "get")]
     async fn get_wizard_state(&self) -> OpenApiResult<Json<WizardResponse>> {
         let root = self.ctx.state.get_project_root().map_err(bad_request)?;
-        let state =
-            WizardState::load(&root).ok_or_else(|| not_found("No wizard active".to_string()))?;
+        let state = svc::get_state(&root).map_err(|_| not_found("No wizard active".to_string()))?;
         Ok(Json(WizardResponse::from(&state)))
     }
 
@@ -97,16 +97,8 @@ impl WizardApi {
     ) -> OpenApiResult<Json<WizardResponse>> {
         let root = self.ctx.state.get_project_root().map_err(bad_request)?;
         let wizard_step = parse_step(&step.0)?;
-        let mut state =
-            WizardState::load(&root).ok_or_else(|| not_found("No wizard active".to_string()))?;
-
-        state.set_step_status(
-            wizard_step,
-            StepStatus::AwaitingConfirmation,
-            payload.0.content,
-        );
-        state.save(&root).map_err(bad_request)?;
-
+        let state = svc::set_step_content(&root, wizard_step, payload.0.content)
+            .map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(WizardResponse::from(&state)))
     }
 
@@ -117,12 +109,8 @@ impl WizardApi {
     async fn confirm_step(&self, step: Path<String>) -> OpenApiResult<Json<WizardResponse>> {
         let root = self.ctx.state.get_project_root().map_err(bad_request)?;
         let wizard_step = parse_step(&step.0)?;
-        let mut state =
-            WizardState::load(&root).ok_or_else(|| not_found("No wizard active".to_string()))?;
-
-        state.confirm_step(wizard_step).map_err(bad_request)?;
-        state.save(&root).map_err(bad_request)?;
-
+        let state =
+            svc::mark_step_confirmed(&root, wizard_step).map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(WizardResponse::from(&state)))
     }
 
@@ -133,12 +121,8 @@ impl WizardApi {
     async fn skip_step(&self, step: Path<String>) -> OpenApiResult<Json<WizardResponse>> {
         let root = self.ctx.state.get_project_root().map_err(bad_request)?;
         let wizard_step = parse_step(&step.0)?;
-        let mut state =
-            WizardState::load(&root).ok_or_else(|| not_found("No wizard active".to_string()))?;
-
-        state.skip_step(wizard_step).map_err(bad_request)?;
-        state.save(&root).map_err(bad_request)?;
-
+        let state =
+            svc::mark_step_skipped(&root, wizard_step).map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(WizardResponse::from(&state)))
     }
 
@@ -147,12 +131,8 @@ impl WizardApi {
     async fn mark_generating(&self, step: Path<String>) -> OpenApiResult<Json<WizardResponse>> {
         let root = self.ctx.state.get_project_root().map_err(bad_request)?;
         let wizard_step = parse_step(&step.0)?;
-        let mut state =
-            WizardState::load(&root).ok_or_else(|| not_found("No wizard active".to_string()))?;
-
-        state.set_step_status(wizard_step, StepStatus::Generating, None);
-        state.save(&root).map_err(bad_request)?;
-
+        let state = svc::mark_step_generating(&root, wizard_step)
+            .map_err(|e| bad_request(e.to_string()))?;
         Ok(Json(WizardResponse::from(&state)))
     }
 }
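The wizard diff above repeatedly collapses a load → mutate → save sequence in each HTTP handler into a single service-layer call that returns the updated state. A toy sketch of that shape, using an in-memory stand-in for `WizardState` (the real types persist to disk under the project root; all names here are illustrative):

```rust
use std::collections::BTreeMap;

// Toy stand-ins for the real WizardState / StepStatus in crate::io::wizard.
#[derive(Clone, Debug, PartialEq)]
enum StepStatus {
    Confirmed,
}

#[derive(Clone, Debug, Default)]
struct WizardState {
    steps: BTreeMap<String, StepStatus>,
}

// The service-layer shape the refactor introduces: one function owns the
// load → mutate → save sequence and hands back the updated state, so the
// HTTP handler shrinks to a single call plus error mapping.
fn mark_step_confirmed(
    store: &mut Option<WizardState>,
    step: &str,
) -> Result<WizardState, String> {
    let mut state = store.clone().ok_or_else(|| "No wizard active".to_string())?; // load
    state.steps.insert(step.to_string(), StepStatus::Confirmed); // mutate
    *store = Some(state.clone()); // save
    Ok(state)
}

fn main() {
    let mut store = Some(WizardState::default());
    let state = mark_step_confirmed(&mut store, "context").unwrap();
    assert_eq!(state.steps["context"], StepStatus::Confirmed);

    // Missing state maps to an error the handler can turn into a 404/400.
    let mut empty: Option<WizardState> = None;
    assert!(mark_step_confirmed(&mut empty, "context").is_err());
}
```

Returning the saved state from the service function is what lets every handler end with the same `Ok(Json(WizardResponse::from(&state)))` line.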
+49 -971
File diff suppressed because it is too large
@@ -11,6 +11,4 @@ pub use files::{
 };
 pub use paths::{find_story_kit_root, get_home_directory, resolve_cli_path};
 pub use preferences::{get_model_preference, set_model_preference};
-pub use project::{
-    close_project, forget_known_project, get_current_project, get_known_projects, open_project,
-};
+pub use project::open_project;
@@ -84,6 +84,7 @@ pub async fn open_project(
     Ok(path)
 }
 
+#[allow(dead_code)]
 pub fn close_project(state: &SessionState, store: &dyn StoreOps) -> Result<(), String> {
     {
         // TRACE:MERGE-DEBUG — remove once root cause is found
@@ -98,6 +99,7 @@ pub fn close_project(state: &SessionState, store: &dyn StoreOps) -> Result<(), S
     Ok(())
 }
 
+#[allow(dead_code)]
 pub fn get_current_project(
     state: &SessionState,
     store: &dyn StoreOps,
@@ -131,6 +133,7 @@ pub fn get_current_project(
     Ok(None)
 }
 
+#[allow(dead_code)]
 pub fn get_known_projects(store: &dyn StoreOps) -> Result<Vec<String>, String> {
     let projects = store
        .get(KEY_KNOWN_PROJECTS)
@@ -143,6 +146,7 @@ pub fn get_known_projects(store: &dyn StoreOps) -> Result<Vec<String>, String> {
     Ok(projects)
 }
 
+#[allow(dead_code)]
 pub fn forget_known_project(path: String, store: &dyn StoreOps) -> Result<(), String> {
     let mut known_projects = get_known_projects(store)?;
     let original_len = known_projects.len();
@@ -100,6 +100,24 @@ const DEFAULT_PROJECT_SETTINGS_TOML: &str = r#"# Project-wide default QA mode: "
 # Per-story `qa` front matter overrides this setting.
 default_qa = "server"
 
+# Maximum number of retries per story per pipeline stage before marking as blocked.
+# Set to 0 to disable retry limits.
+max_retries = 2
+
+# Default model for coder-stage agents (e.g. "sonnet", "opus").
+# When set, only coder agents whose model matches this value are considered for
+# auto-assignment, so opus agents are only used when explicitly requested via
+# story front matter `agent:` field.
+# default_coder_model = "sonnet"
+
+# Maximum number of concurrent coder-stage agents.
+# Stories wait in 2_current/ until a slot frees up.
+# max_coders = 3
+
+# Override the base branch for worktree creation and merge operations.
+# When not set, the system auto-detects the base branch from the current HEAD.
+# base_branch = "main"
+
 # Suppress soft rate-limit warning notifications in chat.
 # Hard blocks and story-blocked notifications are always sent.
 # rate_limit_notifications = true
@@ -759,6 +777,78 @@ mod tests {
         );
     }
 
+    #[test]
+    fn scaffold_project_toml_contains_max_retries_with_default_value() {
+        let dir = tempdir().unwrap();
+        scaffold_story_kit(dir.path(), 3001).unwrap();
+
+        let content = fs::read_to_string(dir.path().join(".huskies/project.toml")).unwrap();
+        assert!(
+            content.contains("max_retries = 2"),
+            "project.toml scaffold should include max_retries with default value 2"
+        );
+        assert!(
+            content.contains("Maximum number of retries"),
+            "project.toml scaffold should include a comment explaining max_retries"
+        );
+    }
+
+    #[test]
+    fn scaffold_project_toml_contains_commented_out_optional_fields() {
+        let dir = tempdir().unwrap();
+        scaffold_story_kit(dir.path(), 3001).unwrap();
+
+        let content = fs::read_to_string(dir.path().join(".huskies/project.toml")).unwrap();
+        assert!(
+            content.contains("# default_coder_model"),
+            "project.toml scaffold should include commented-out default_coder_model"
+        );
+        assert!(
+            content.contains("# max_coders"),
+            "project.toml scaffold should include commented-out max_coders"
+        );
+        assert!(
+            content.contains("# base_branch"),
+            "project.toml scaffold should include commented-out base_branch"
+        );
+    }
+
+    #[test]
+    fn scaffold_project_toml_round_trips_through_project_config_load() {
+        use crate::config::ProjectConfig;
+
+        let dir = tempdir().unwrap();
+        scaffold_story_kit(dir.path(), 3001).unwrap();
+
+        // The generated project.toml must parse without error.
+        let config = ProjectConfig::load(dir.path())
+            .expect("Generated project.toml should parse without error");
+
+        // Key defaults must survive the round-trip.
+        assert_eq!(config.default_qa, "server");
+        assert_eq!(config.max_retries, 2);
+        assert!(
+            config.rate_limit_notifications,
+            "rate_limit_notifications should default to true"
+        );
+        assert!(
+            config.default_coder_model.is_none(),
+            "default_coder_model should be None when commented out"
+        );
+        assert!(
+            config.max_coders.is_none(),
+            "max_coders should be None when commented out"
+        );
+        assert!(
+            config.base_branch.is_none(),
+            "base_branch should be None when commented out"
+        );
+        assert!(
+            config.timezone.is_none(),
+            "timezone should be None when commented out"
+        );
+    }
+
     #[test]
     fn scaffold_context_is_blank_template_not_story_kit_content() {
         let dir = tempdir().unwrap();
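The scaffold tests above rely on a property of the template: a commented-out line like `# max_coders = 3` documents the default without setting the key, so loading the config yields `None` for that field. A dependency-free sketch of why that holds (the real code uses a TOML parser with serde defaults; `lookup_uncommented` is a hypothetical helper for illustration only):

```rust
// Sketch only (no toml crate): a commented-out key in the scaffold template
// never reaches the parser, so the corresponding Option field stays None.
fn lookup_uncommented<'a>(toml_text: &'a str, key: &str) -> Option<&'a str> {
    toml_text
        .lines()
        .map(str::trim)
        .filter(|l| !l.starts_with('#')) // comment lines carry no value
        .filter_map(|l| l.split_once('='))
        .find(|(k, _)| k.trim() == key)
        .map(|(_, v)| v.trim())
}

fn main() {
    let template = r#"
default_qa = "server"
max_retries = 2
# max_coders = 3
# base_branch = "main"
"#;
    assert_eq!(lookup_uncommented(template, "max_retries"), Some("2"));
    assert_eq!(lookup_uncommented(template, "max_coders"), None); // commented out
    assert_eq!(lookup_uncommented(template, "base_branch"), None);
}
```

This is the same contract `scaffold_project_toml_round_trips_through_project_config_load` asserts: set keys survive the round-trip, commented ones load as `None`.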
@@ -31,35 +31,6 @@ pub struct ChatResult {
     pub session_id: Option<String>,
 }
 
-fn get_anthropic_api_key_exists_impl(store: &dyn StoreOps) -> bool {
-    match store.get(KEY_ANTHROPIC_API_KEY) {
-        Some(value) => value.as_str().map(|k| !k.is_empty()).unwrap_or(false),
-        None => false,
-    }
-}
-
-fn set_anthropic_api_key_impl(store: &dyn StoreOps, api_key: &str) -> Result<(), String> {
-    store.set(KEY_ANTHROPIC_API_KEY, json!(api_key));
-    store.save()?;
-
-    match store.get(KEY_ANTHROPIC_API_KEY) {
-        Some(value) => {
-            if let Some(retrieved) = value.as_str() {
-                if retrieved != api_key {
-                    return Err("Retrieved key does not match saved key".to_string());
-                }
-            } else {
-                return Err("Stored value is not a string".to_string());
-            }
-        }
-        None => {
-            return Err("API key was saved but cannot be retrieved".to_string());
-        }
-    }
-
-    Ok(())
-}
-
 fn get_anthropic_api_key_impl(store: &dyn StoreOps) -> Result<String, String> {
     match store.get(KEY_ANTHROPIC_API_KEY) {
         Some(value) => {
@@ -172,14 +143,6 @@ pub async fn get_ollama_models(base_url: Option<String>) -> Result<Vec<String>,
     OllamaProvider::get_models(&url).await
 }
 
-pub fn get_anthropic_api_key_exists(store: &dyn StoreOps) -> Result<bool, String> {
-    Ok(get_anthropic_api_key_exists_impl(store))
-}
-
-pub fn set_anthropic_api_key(store: &dyn StoreOps, api_key: String) -> Result<(), String> {
-    set_anthropic_api_key_impl(store, &api_key)
-}
-
 /// Build a prompt for Claude Code that includes prior conversation history.
 ///
 /// When a Claude Code session cannot be resumed (no session_id), we embed
@@ -627,22 +590,6 @@ mod tests {
                 save_should_fail: false,
             }
         }
-
-        fn with_save_error() -> Self {
-            Self {
-                data: Mutex::new(HashMap::new()),
-                save_should_fail: true,
-            }
-        }
-
-        fn with_entry(key: &str, value: serde_json::Value) -> Self {
-            let mut map = HashMap::new();
-            map.insert(key.to_string(), value);
-            Self {
-                data: Mutex::new(map),
-                save_should_fail: false,
-            }
-        }
     }
 
     impl StoreOps for MockStore {
@@ -695,121 +642,6 @@ mod tests {
         assert!(result.is_ok());
     }
 
-    // ---------------------------------------------------------------------------
-    // get_anthropic_api_key_exists_impl
-    // ---------------------------------------------------------------------------
-
-    #[test]
-    fn api_key_exists_when_key_is_present_and_non_empty() {
-        let store = MockStore::with_entry("anthropic_api_key", json!("sk-test-key"));
-        assert!(get_anthropic_api_key_exists_impl(&store));
-    }
-
-    #[test]
-    fn api_key_exists_returns_false_when_key_is_empty_string() {
-        let store = MockStore::with_entry("anthropic_api_key", json!(""));
-        assert!(!get_anthropic_api_key_exists_impl(&store));
-    }
-
-    #[test]
-    fn api_key_exists_returns_false_when_key_absent() {
-        let store = MockStore::new();
-        assert!(!get_anthropic_api_key_exists_impl(&store));
-    }
-
-    #[test]
-    fn api_key_exists_returns_false_when_value_is_not_string() {
-        let store = MockStore::with_entry("anthropic_api_key", json!(42));
-        assert!(!get_anthropic_api_key_exists_impl(&store));
-    }
-
-    // ---------------------------------------------------------------------------
-    // get_anthropic_api_key_impl
-    // ---------------------------------------------------------------------------
-
-    #[test]
-    fn get_api_key_returns_key_when_present() {
-        let store = MockStore::with_entry("anthropic_api_key", json!("sk-test-key"));
-        let result = get_anthropic_api_key_impl(&store);
-        assert_eq!(result.unwrap(), "sk-test-key");
-    }
-
-    #[test]
-    fn get_api_key_errors_when_empty() {
-        let store = MockStore::with_entry("anthropic_api_key", json!(""));
-        let result = get_anthropic_api_key_impl(&store);
-        assert!(result.is_err());
-        assert!(result.unwrap_err().contains("empty"));
-    }
-
-    #[test]
-    fn get_api_key_errors_when_absent() {
-        let store = MockStore::new();
-        let result = get_anthropic_api_key_impl(&store);
-        assert!(result.is_err());
-        assert!(result.unwrap_err().contains("not found"));
-    }
-
-    #[test]
-    fn get_api_key_errors_when_value_not_string() {
-        let store = MockStore::with_entry("anthropic_api_key", json!(123));
-        let result = get_anthropic_api_key_impl(&store);
-        assert!(result.is_err());
-        assert!(result.unwrap_err().contains("not a string"));
-    }
-
-    // ---------------------------------------------------------------------------
-    // set_anthropic_api_key_impl
-    // ---------------------------------------------------------------------------
-
-    #[test]
-    fn set_api_key_stores_and_returns_ok() {
-        let store = MockStore::new();
-        let result = set_anthropic_api_key_impl(&store, "sk-my-key");
-        assert!(result.is_ok());
-        assert_eq!(store.get("anthropic_api_key"), Some(json!("sk-my-key")));
-    }
-
-    #[test]
-    fn set_api_key_returns_error_when_save_fails() {
-        let store = MockStore::with_save_error();
-        let result = set_anthropic_api_key_impl(&store, "sk-my-key");
-        assert!(result.is_err());
-        assert!(result.unwrap_err().contains("mock save error"));
-    }
-
-    // ---------------------------------------------------------------------------
-    // Public wrappers: get_anthropic_api_key_exists / set_anthropic_api_key
-    // ---------------------------------------------------------------------------
-
-    #[test]
-    fn public_api_key_exists_returns_ok_bool() {
-        let store = MockStore::with_entry("anthropic_api_key", json!("sk-abc"));
-        let result = get_anthropic_api_key_exists(&store);
-        assert_eq!(result, Ok(true));
-    }
-
-    #[test]
-    fn public_api_key_exists_false_when_absent() {
-        let store = MockStore::new();
-        let result = get_anthropic_api_key_exists(&store);
-        assert_eq!(result, Ok(false));
-    }
-
-    #[test]
-    fn public_set_api_key_succeeds() {
-        let store = MockStore::new();
-        let result = set_anthropic_api_key(&store, "sk-xyz".to_string());
-        assert!(result.is_ok());
-    }
-
-    #[test]
-    fn public_set_api_key_propagates_save_error() {
-        let store = MockStore::with_save_error();
-        let result = set_anthropic_api_key(&store, "sk-xyz".to_string());
-        assert!(result.is_err());
-    }
-
     // ---------------------------------------------------------------------------
     // get_tool_definitions
     // ---------------------------------------------------------------------------
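The removed tests above relied on a `MockStore` whose `save()` could be forced to fail, so error propagation was testable without touching disk. A self-contained re-creation of that test-double pattern (names mirror the diff, but the trait and helper here are simplified sketches, not the crate's real `StoreOps`):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Simplified version of the StoreOps trait the mock implements.
trait StoreOps {
    fn set(&self, key: &str, value: String);
    fn save(&self) -> Result<(), String>;
}

// Test double: an in-memory store with a switch that makes save() fail,
// mirroring MockStore::with_save_error from the removed tests.
struct MockStore {
    data: Mutex<HashMap<String, String>>,
    save_should_fail: bool,
}

impl MockStore {
    fn with_save_error() -> Self {
        Self {
            data: Mutex::new(HashMap::new()),
            save_should_fail: true,
        }
    }
}

impl StoreOps for MockStore {
    fn set(&self, key: &str, value: String) {
        self.data.lock().unwrap().insert(key.to_string(), value);
    }
    fn save(&self) -> Result<(), String> {
        if self.save_should_fail {
            Err("mock save error".to_string())
        } else {
            Ok(())
        }
    }
}

// A caller in the style of set_anthropic_api_key_impl: write, then persist,
// propagating the save failure to the caller.
fn set_api_key(store: &dyn StoreOps, key: &str) -> Result<(), String> {
    store.set("anthropic_api_key", key.to_string());
    store.save()
}

fn main() {
    let store = MockStore::with_save_error();
    let err = set_api_key(&store, "sk-xyz").unwrap_err();
    assert!(err.contains("mock save error"));
}
```

Taking `&dyn StoreOps` is what lets the production code and the mock share one call site; the failure flag exercises the branch a real filesystem store only hits on I/O errors.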
+21 -6
@@ -20,6 +20,7 @@ mod llm;
 pub mod log_buffer;
 pub(crate) mod pipeline_state;
 pub mod rebuild;
+mod service;
 mod state;
 mod store;
 mod workflow;
@@ -544,6 +545,8 @@ async fn main() -> Result<(), std::io::Error> {
     let watcher_rx_for_whatsapp = watcher_tx.subscribe();
     let watcher_rx_for_slack = watcher_tx.subscribe();
     let watcher_rx_for_discord = watcher_tx.subscribe();
+    // Subscribe to watcher events for the per-project event buffer (gateway polling).
+    let watcher_rx_for_events = watcher_tx.subscribe();
     // Wrap perm_rx in Arc<Mutex> so it can be shared with both the WebSocket
     // handler (via AppContext) and the Matrix bot.
     let perm_rx = Arc::new(tokio::sync::Mutex::new(perm_rx));
@@ -777,7 +780,7 @@ async fn main() -> Result<(), std::io::Error> {
     // in `chat::transport::matrix::bot::run::spawn_bot`. Refactor to consume this
     // shared instance via `AppContext.timer_store` so cancellations from MCP
     // tools and the bot's tick loop see the same in-memory state.
-    let timer_store = std::sync::Arc::new(crate::chat::timer::TimerStore::load(
+    let timer_store = std::sync::Arc::new(crate::service::timer::TimerStore::load(
         startup_root
             .as_ref()
             .map(|r| r.join(".huskies").join("timers.json"))
@@ -802,7 +805,18 @@ async fn main() -> Result<(), std::io::Error> {
         test_jobs: std::sync::Arc::new(std::sync::Mutex::new(std::collections::HashMap::new())),
     };
 
-    let app = build_routes(ctx, whatsapp_ctx.clone(), slack_ctx.clone(), port);
+    // Create the per-project event buffer and subscribe it to the watcher channel
+    // so that pipeline events are buffered for the gateway's `/api/events` poller.
+    let event_buffer = crate::http::events::EventBuffer::new();
+    crate::http::events::subscribe_to_watcher(event_buffer.clone(), watcher_rx_for_events);
+
+    let app = build_routes(
+        ctx,
+        whatsapp_ctx.clone(),
+        slack_ctx.clone(),
+        port,
+        Some(event_buffer),
+    );
 
     // Unified 1-second background tick loop: fires due timers, detects orphaned
     // agents (watchdog), and promotes done→archived items (sweep). Replaces the
@@ -830,7 +844,7 @@ async fn main() -> Result<(), std::io::Error> {
                 // Timer: fire due timers every second.
                 if let Some(ref root) = tick_root {
                     let result =
-                        crate::chat::timer::tick_once(&tick_timer, &tick_agents, root).await;
+                        crate::service::timer::tick_once(&tick_timer, &tick_agents, root).await;
                     if let Err(msg) = result {
                         crate::slog_error!("[tick] Timer tick panicked: {msg}");
                     }
@@ -868,6 +882,7 @@ async fn main() -> Result<(), std::io::Error> {
             matrix_shutdown_rx,
             None,
             vec![],
+            std::collections::BTreeMap::new(),
         );
     } else {
         // Keep the receiver alive (drop it) so the sender never errors.
@@ -878,7 +893,7 @@ async fn main() -> Result<(), std::io::Error> {
         // These mirror the listener that the Matrix bot spawns internally.
         if let (Some(ctx), Some(root)) = (&whatsapp_ctx, &startup_root) {
|
||||||
let ambient_rooms = Arc::clone(&ctx.ambient_rooms);
|
let ambient_rooms = Arc::clone(&ctx.ambient_rooms);
|
||||||
chat::transport::matrix::notifications::spawn_notification_listener(
|
crate::service::notifications::spawn_notification_listener(
|
||||||
Arc::clone(&ctx.transport),
|
Arc::clone(&ctx.transport),
|
||||||
move || ambient_rooms.lock().unwrap().iter().cloned().collect(),
|
move || ambient_rooms.lock().unwrap().iter().cloned().collect(),
|
||||||
watcher_rx_for_whatsapp,
|
watcher_rx_for_whatsapp,
|
||||||
@@ -889,7 +904,7 @@ async fn main() -> Result<(), std::io::Error> {
|
|||||||
}
|
}
|
||||||
if let (Some(ctx), Some(root)) = (&slack_ctx, &startup_root) {
|
if let (Some(ctx), Some(root)) = (&slack_ctx, &startup_root) {
|
||||||
let channel_ids: Vec<String> = ctx.channel_ids.iter().cloned().collect();
|
let channel_ids: Vec<String> = ctx.channel_ids.iter().cloned().collect();
|
||||||
chat::transport::matrix::notifications::spawn_notification_listener(
|
crate::service::notifications::spawn_notification_listener(
|
||||||
Arc::clone(&ctx.transport) as Arc<dyn crate::chat::ChatTransport>,
|
Arc::clone(&ctx.transport) as Arc<dyn crate::chat::ChatTransport>,
|
||||||
move || channel_ids.clone(),
|
move || channel_ids.clone(),
|
||||||
watcher_rx_for_slack,
|
watcher_rx_for_slack,
|
||||||
@@ -904,7 +919,7 @@ async fn main() -> Result<(), std::io::Error> {
|
|||||||
|
|
||||||
// Spawn stage-transition notification listener for Discord.
|
// Spawn stage-transition notification listener for Discord.
|
||||||
let channel_ids: Vec<String> = ctx.channel_ids.iter().cloned().collect();
|
let channel_ids: Vec<String> = ctx.channel_ids.iter().cloned().collect();
|
||||||
chat::transport::matrix::notifications::spawn_notification_listener(
|
crate::service::notifications::spawn_notification_listener(
|
||||||
Arc::clone(&ctx.transport) as Arc<dyn crate::chat::ChatTransport>,
|
Arc::clone(&ctx.transport) as Arc<dyn crate::chat::ChatTransport>,
|
||||||
move || channel_ids.clone(),
|
move || channel_ids.clone(),
|
||||||
watcher_rx_for_discord,
|
watcher_rx_for_discord,
|
||||||
|
|||||||
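The hunks above wire a per-project event buffer into `build_routes` so the gateway's `/api/events` endpoint can poll for pipeline events buffered from the watcher channel. The real `EventBuffer` subscribes to a tokio broadcast channel; as a rough std-only illustration of the polling pattern (the capacity parameter, sequence numbers, and method names here are assumptions, not the project's actual API):

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

/// Hypothetical, simplified stand-in for a per-project event buffer:
/// events get monotonically increasing sequence numbers so a poller
/// can request "everything after seq S" between polls.
#[derive(Clone)]
pub struct EventBuffer {
    inner: Arc<Mutex<Inner>>,
    capacity: usize,
}

struct Inner {
    next_seq: u64,
    events: VecDeque<(u64, String)>,
}

impl EventBuffer {
    pub fn new(capacity: usize) -> Self {
        EventBuffer {
            inner: Arc::new(Mutex::new(Inner { next_seq: 1, events: VecDeque::new() })),
            capacity,
        }
    }

    /// Append an event, evicting the oldest once over capacity.
    pub fn push(&self, payload: &str) -> u64 {
        let mut g = self.inner.lock().unwrap();
        let seq = g.next_seq;
        g.next_seq += 1;
        g.events.push_back((seq, payload.to_string()));
        while g.events.len() > self.capacity {
            g.events.pop_front();
        }
        seq
    }

    /// Return all buffered events with a sequence number greater than `since`.
    pub fn poll_since(&self, since: u64) -> Vec<(u64, String)> {
        let g = self.inner.lock().unwrap();
        g.events.iter().filter(|(s, _)| *s > since).cloned().collect()
    }
}

fn main() {
    let buf = EventBuffer::new(2);
    buf.push("stage:backlog->current");
    buf.push("stage:current->qa");
    buf.push("stage:qa->merge"); // first event evicted (capacity 2)
    let fresh = buf.poll_since(0);
    assert_eq!(fresh.len(), 2);
    assert_eq!(fresh[0].1, "stage:current->qa");
    assert!(buf.poll_since(3).is_empty());
    println!("ok");
}
```

A poller that remembers the last sequence number it saw can call `poll_since` repeatedly without missing or duplicating events, which is presumably why the diff threads a watcher subscription into the buffer rather than handing each poller its own channel receiver.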
@@ -0,0 +1,240 @@
//! Agent I/O wrappers — the ONLY place in `service/agents/` that may perform
//! filesystem reads, process invocations, or other side effects.
//!
//! Every function here is a thin adapter over an existing lower-level call.
//! No business logic lives here; all branching belongs in the pure topic files
//! or in `mod.rs`.
use crate::agent_log::{self, LogEntry};
use crate::agents::token_usage::{self, TokenUsageRecord};
use crate::config::ProjectConfig;
use crate::worktree::{self, WorktreeListEntry};
use std::path::Path;

use super::Error;

/// Return `true` if the story's `.md` file exists in `5_done/` or `6_archived/`.
pub fn is_archived(project_root: &Path, story_id: &str) -> bool {
    let work = project_root.join(".huskies").join("work");
    let filename = format!("{story_id}.md");
    work.join("5_done").join(&filename).exists() || work.join("6_archived").join(&filename).exists()
}

/// Read and return all log entries for the most recent session of an agent.
///
/// Returns `Ok(vec![])` when no log file exists yet.
pub fn read_agent_log(
    project_root: &Path,
    story_id: &str,
    agent_name: &str,
) -> Result<Vec<LogEntry>, Error> {
    let log_path = agent_log::find_latest_log(project_root, story_id, agent_name);
    let Some(path) = log_path else {
        return Ok(Vec::new());
    };
    agent_log::read_log(&path).map_err(Error::Io)
}

/// Read all token usage records from the persistent JSONL file.
///
/// Returns an empty vec when the file does not yet exist.
pub fn read_token_records(project_root: &Path) -> Result<Vec<TokenUsageRecord>, Error> {
    token_usage::read_all(project_root).map_err(Error::Io)
}

/// Load the project configuration from `project.toml`.
///
/// Falls back to default config when the file is absent.
pub fn load_config(project_root: &Path) -> Result<ProjectConfig, Error> {
    ProjectConfig::load(project_root).map_err(Error::Config)
}

/// List all worktrees under `.huskies/worktrees/`.
pub fn list_worktrees(project_root: &Path) -> Result<Vec<WorktreeListEntry>, Error> {
    worktree::list_worktrees(project_root).map_err(Error::Io)
}

/// Remove the git worktree for a story by ID.
///
/// Loads the project config to honour teardown commands. Returns an error if
/// the worktree directory does not exist.
pub async fn remove_worktree(project_root: &Path, story_id: &str) -> Result<(), Error> {
    let config = load_config(project_root)?;
    worktree::remove_worktree_by_story_id(project_root, story_id, &config)
        .await
        .map_err(Error::Worktree)
}

/// Read test results persisted in a story's markdown file.
///
/// Returns `None` when the story has no test results section.
pub fn read_test_results_from_file(
    project_root: &Path,
    story_id: &str,
) -> Option<crate::workflow::StoryTestResults> {
    crate::http::workflow::read_test_results_from_story_file(project_root, story_id)
}

/// Read a work item file from a pipeline stage directory.
///
/// Returns `Ok(Some(content))` when found, `Ok(None)` when absent.
pub fn read_work_item_from_stage(
    work_dir: &std::path::Path,
    stage_dir: &str,
    filename: &str,
) -> Result<Option<String>, Error> {
    let file_path = work_dir.join(stage_dir).join(filename);
    if file_path.exists() {
        let content = std::fs::read_to_string(&file_path)
            .map_err(|e| Error::Io(format!("Failed to read work item: {e}")))?;
        Ok(Some(content))
    } else {
        Ok(None)
    }
}

/// Test-fixture helpers that may call `std::fs` — kept here so that
/// `mod.rs` and topic-file `#[cfg(test)]` blocks never need to import
/// `std::fs`, `tokio::fs`, or `std::process` directly.
#[cfg(test)]
pub mod test_helpers {
    use tempfile::TempDir;

    /// Create the `.huskies/` directory.
    pub fn make_huskies_dir(tmp: &TempDir) {
        std::fs::create_dir_all(tmp.path().join(".huskies")).unwrap();
    }

    /// Create the `5_done` and `6_archived` work-stage directories.
    pub fn make_work_dirs(tmp: &TempDir) {
        for stage in &["5_done", "6_archived"] {
            std::fs::create_dir_all(tmp.path().join(".huskies").join("work").join(stage)).unwrap();
        }
    }

    /// Create all six pipeline stage directories under `.huskies/work/`.
    pub fn make_stage_dirs(tmp: &TempDir) {
        for stage in &[
            "1_backlog",
            "2_current",
            "3_qa",
            "4_merge",
            "5_done",
            "6_archived",
        ] {
            std::fs::create_dir_all(tmp.path().join(".huskies").join("work").join(stage)).unwrap();
        }
    }

    /// Write `.huskies/project.toml` with the given TOML content.
    pub fn make_project_toml(tmp: &TempDir, content: &str) {
        let sk_dir = tmp.path().join(".huskies");
        std::fs::create_dir_all(&sk_dir).unwrap();
        std::fs::write(sk_dir.join("project.toml"), content).unwrap();
    }

    /// Write a fixture file at `relative_path` (relative to the tmp root).
    pub fn write_story_file(tmp: &TempDir, relative_path: &str, content: &str) {
        let path = tmp.path().join(relative_path);
        if let Some(parent) = path.parent() {
            std::fs::create_dir_all(parent).unwrap();
        }
        std::fs::write(path, content).unwrap();
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;

    fn make_work_dirs(tmp: &TempDir) {
        for stage in &["5_done", "6_archived"] {
            std::fs::create_dir_all(tmp.path().join(".huskies").join("work").join(stage)).unwrap();
        }
    }

    // ── is_archived ───────────────────────────────────────────────────────────

    #[test]
    fn is_archived_false_when_file_absent() {
        let tmp = TempDir::new().unwrap();
        make_work_dirs(&tmp);
        assert!(!is_archived(tmp.path(), "42_story_foo"));
    }

    #[test]
    fn is_archived_true_when_in_5_done() {
        let tmp = TempDir::new().unwrap();
        make_work_dirs(&tmp);
        std::fs::write(
            tmp.path().join(".huskies/work/5_done/42_story_foo.md"),
            "---\nname: test\n---\n",
        )
        .unwrap();
        assert!(is_archived(tmp.path(), "42_story_foo"));
    }

    #[test]
    fn is_archived_true_when_in_6_archived() {
        let tmp = TempDir::new().unwrap();
        make_work_dirs(&tmp);
        std::fs::write(
            tmp.path().join(".huskies/work/6_archived/42_story_foo.md"),
            "---\nname: test\n---\n",
        )
        .unwrap();
        assert!(is_archived(tmp.path(), "42_story_foo"));
    }

    // ── read_agent_log ────────────────────────────────────────────────────────

    #[test]
    fn read_agent_log_returns_empty_when_no_log() {
        let tmp = TempDir::new().unwrap();
        let entries = read_agent_log(tmp.path(), "42_story_foo", "coder-1").unwrap();
        assert!(entries.is_empty());
    }

    // ── read_token_records ────────────────────────────────────────────────────

    #[test]
    fn read_token_records_returns_empty_when_no_file() {
        let tmp = TempDir::new().unwrap();
        let records = read_token_records(tmp.path()).unwrap();
        assert!(records.is_empty());
    }

    // ── load_config ───────────────────────────────────────────────────────────

    #[test]
    fn load_config_returns_default_when_no_file() {
        let tmp = TempDir::new().unwrap();
        std::fs::create_dir_all(tmp.path().join(".huskies")).unwrap();
        let config = load_config(tmp.path()).unwrap();
        // Default config has one "default" agent
        assert_eq!(config.agent.len(), 1);
        assert_eq!(config.agent[0].name, "default");
    }

    // ── list_worktrees ────────────────────────────────────────────────────────

    #[test]
    fn list_worktrees_empty_when_no_dir() {
        let tmp = TempDir::new().unwrap();
        let entries = list_worktrees(tmp.path()).unwrap();
        assert!(entries.is_empty());
    }

    #[test]
    fn list_worktrees_returns_subdirs() {
        let tmp = TempDir::new().unwrap();
        let wt_dir = tmp.path().join(".huskies").join("worktrees");
        std::fs::create_dir_all(wt_dir.join("42_story_foo")).unwrap();
        std::fs::create_dir_all(wt_dir.join("43_story_bar")).unwrap();
        let mut entries = list_worktrees(tmp.path()).unwrap();
        entries.sort_by(|a, b| a.story_id.cmp(&b.story_id));
        assert_eq!(entries.len(), 2);
        assert_eq!(entries[0].story_id, "42_story_foo");
        assert_eq!(entries[1].story_id, "43_story_bar");
    }
}
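The payoff of confining side effects to `io.rs` is visible in how `mod.rs` calls `selection::filter_non_archived(agents, |id| io::is_archived(root, id))`: the pure function takes a predicate, and only the production caller plugs in a closure that actually touches the filesystem. A miniature of that pattern, with hypothetical stand-in types (`Agent`, `live_agents` are illustrative names, not the project's):

```rust
// Hypothetical miniature of the io-isolation pattern: the pure function
// receives an injected predicate instead of reading the filesystem itself.
#[derive(Debug, Clone, PartialEq)]
struct Agent {
    story_id: String,
}

// Pure: no I/O, trivially testable with an in-memory predicate.
fn live_agents<F: Fn(&str) -> bool>(agents: Vec<Agent>, is_archived: F) -> Vec<Agent> {
    agents
        .into_iter()
        .filter(|a| !is_archived(&a.story_id))
        .collect()
}

fn main() {
    let agents = vec![
        Agent { story_id: "42_story_foo".into() },
        Agent { story_id: "79_story_archived".into() },
    ];
    // Production would inject a closure that checks the disk;
    // a test gets by with a hard-coded answer.
    let kept = live_agents(agents, |id| id == "79_story_archived");
    assert_eq!(kept.len(), 1);
    assert_eq!(kept[0].story_id, "42_story_foo");
    println!("ok");
}
```

This is why the topic-file tests in this diff need no tempdirs or async runtime: the only code that hits `std::fs` lives behind the injected closure.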
@@ -0,0 +1,451 @@
//! Agent service — public API for the agent domain.
//!
//! This module orchestrates calls to `io.rs` (side effects) and the pure
//! topic modules (`selection`, `token`) to implement the full agent service
//! surface. HTTP handlers call these functions instead of reaching directly
//! into `AgentPool` or the filesystem.
//!
//! Conventions: `docs/architecture/service-modules.md`
mod io;
pub mod selection;
pub mod token;

use crate::agents::AgentInfo;
use crate::agents::AgentPool;
use crate::agents::token_usage::TokenUsageRecord;
use crate::config::ProjectConfig;
use crate::workflow::StoryTestResults;
use crate::worktree::{WorktreeInfo, WorktreeListEntry};
use std::path::Path;

pub use io::is_archived;
pub use token::TokenCostSummary;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::agents` functions.
///
/// HTTP handlers map these to specific status codes — see the conventions doc
/// for the full mapping table.
#[derive(Debug)]
pub enum Error {
    /// No agent with the given name/story exists in the pool.
    AgentNotFound(String),
    /// No work item found for the requested story ID.
    WorkItemNotFound(String),
    /// A worktree operation failed.
    Worktree(String),
    /// Project configuration could not be loaded.
    Config(String),
    /// A filesystem or I/O operation failed.
    Io(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::AgentNotFound(msg) => write!(f, "Agent not found: {msg}"),
            Self::WorkItemNotFound(msg) => write!(f, "Work item not found: {msg}"),
            Self::Worktree(msg) => write!(f, "Worktree error: {msg}"),
            Self::Config(msg) => write!(f, "Config error: {msg}"),
            Self::Io(msg) => write!(f, "I/O error: {msg}"),
        }
    }
}

// ── Shared service types ─────────────────────────────────────────────────────

/// Content and metadata for a work-item (story) file.
#[derive(Debug, Clone)]
pub struct WorkItemContent {
    pub content: String,
    pub stage: String,
    pub name: Option<String>,
    pub agent: Option<String>,
}

/// A single entry in the project's configured agent roster.
#[derive(Debug, Clone)]
pub struct AgentConfigEntry {
    pub name: String,
    pub role: String,
    pub stage: Option<String>,
    pub model: Option<String>,
    pub allowed_tools: Option<Vec<String>>,
    pub max_turns: Option<u32>,
    pub max_budget_usd: Option<f64>,
}

// ── Public API ────────────────────────────────────────────────────────────────

/// Start an agent for a story.
///
/// Takes only what it needs: the pool (for spawning) and the project root
/// (for config and worktree creation). Does not touch `AppContext`.
pub async fn start_agent(
    pool: &AgentPool,
    project_root: &Path,
    story_id: &str,
    agent_name: Option<&str>,
    resume_context: Option<&str>,
    session_id_to_resume: Option<String>,
) -> Result<AgentInfo, Error> {
    pool.start_agent(
        project_root,
        story_id,
        agent_name,
        resume_context,
        session_id_to_resume,
    )
    .await
    .map_err(Error::AgentNotFound)
}

/// Stop a running agent.
pub async fn stop_agent(
    pool: &AgentPool,
    project_root: &Path,
    story_id: &str,
    agent_name: &str,
) -> Result<(), Error> {
    pool.stop_agent(project_root, story_id, agent_name)
        .await
        .map_err(Error::AgentNotFound)
}

/// List all agents, optionally filtering out those belonging to archived stories.
///
/// When `project_root` is `None` the archive filter is skipped and all agents
/// are returned (safe default when the server is not yet fully configured).
pub fn list_agents(pool: &AgentPool, project_root: Option<&Path>) -> Result<Vec<AgentInfo>, Error> {
    let agents = pool.list_agents().map_err(Error::Io)?;
    match project_root {
        Some(root) => Ok(selection::filter_non_archived(agents, |id| {
            io::is_archived(root, id)
        })),
        None => Ok(agents),
    }
}

/// Create a git worktree for a story.
pub async fn create_worktree(
    pool: &AgentPool,
    project_root: &Path,
    story_id: &str,
) -> Result<WorktreeInfo, Error> {
    pool.create_worktree(project_root, story_id)
        .await
        .map_err(Error::Worktree)
}

/// List all worktrees under `.huskies/worktrees/`.
pub fn list_worktrees(project_root: &Path) -> Result<Vec<WorktreeListEntry>, Error> {
    io::list_worktrees(project_root)
}

/// Remove the git worktree for a story.
pub async fn remove_worktree(project_root: &Path, story_id: &str) -> Result<(), Error> {
    io::remove_worktree(project_root, story_id).await
}

/// Get the configured agent roster from `project.toml`.
pub fn get_agent_config(project_root: &Path) -> Result<Vec<AgentConfigEntry>, Error> {
    let config = io::load_config(project_root)?;
    Ok(config_to_entries(&config))
}

/// Reload and return the project's agent configuration.
///
/// Semantically identical to `get_agent_config`; provided as a distinct
/// function so callers can express intent (UI "Reload" button).
pub fn reload_config(project_root: &Path) -> Result<Vec<AgentConfigEntry>, Error> {
    get_agent_config(project_root)
}

/// Get the concatenated output text for an agent's most recent session.
///
/// Returns an empty string when no log file exists yet.
pub fn get_agent_output(
    project_root: &Path,
    story_id: &str,
    agent_name: &str,
) -> Result<String, Error> {
    let entries = io::read_agent_log(project_root, story_id, agent_name)?;
    Ok(selection::collect_output_text(&entries))
}

/// Get the markdown content and metadata for a work item.
///
/// Searches all pipeline stage directories, falling back to the CRDT content
/// store when no file is present on disk. Returns `Error::WorkItemNotFound`
/// when neither source has the item.
pub fn get_work_item_content(
    project_root: &Path,
    story_id: &str,
) -> Result<WorkItemContent, Error> {
    let stages = [
        ("1_backlog", "backlog"),
        ("2_current", "current"),
        ("3_qa", "qa"),
        ("4_merge", "merge"),
        ("5_done", "done"),
        ("6_archived", "archived"),
    ];

    let work_dir = project_root.join(".huskies").join("work");
    let filename = format!("{story_id}.md");

    for (stage_dir, stage_name) in &stages {
        if let Some(content) = io::read_work_item_from_stage(&work_dir, stage_dir, &filename)? {
            let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
            return Ok(WorkItemContent {
                content,
                stage: stage_name.to_string(),
                name: metadata.as_ref().and_then(|m| m.name.clone()),
                agent: metadata.and_then(|m| m.agent),
            });
        }
    }

    // CRDT-only fallback
    if let Some(content) = crate::db::read_content(story_id) {
        let item = crate::pipeline_state::read_typed(story_id)
            .map_err(|e| Error::Io(format!("Pipeline read error: {e}")))?;
        let stage = item
            .as_ref()
            .map(|i| match &i.stage {
                crate::pipeline_state::Stage::Backlog => "backlog",
                crate::pipeline_state::Stage::Coding => "current",
                crate::pipeline_state::Stage::Qa => "qa",
                crate::pipeline_state::Stage::Merge { .. } => "merge",
                crate::pipeline_state::Stage::Done { .. } => "done",
                crate::pipeline_state::Stage::Archived { .. } => "archived",
            })
            .unwrap_or("unknown")
            .to_string();
        let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
        return Ok(WorkItemContent {
            content,
            stage,
            name: metadata.as_ref().and_then(|m| m.name.clone()),
            agent: metadata.and_then(|m| m.agent),
        });
    }

    Err(Error::WorkItemNotFound(format!(
        "Work item not found: {story_id}"
    )))
}

/// Get test results for a work item.
///
/// Checks in-memory workflow state first (fast path), then falls back to
/// results persisted in the story file.
pub fn get_test_results(
    project_root: &Path,
    story_id: &str,
    workflow: &crate::workflow::WorkflowState,
) -> Option<StoryTestResults> {
    if let Some(results) = workflow.results.get(story_id) {
        return Some(results.clone());
    }
    io::read_test_results_from_file(project_root, story_id)
}

/// Get the aggregated token cost for a specific story.
pub fn get_work_item_token_cost(
    project_root: &Path,
    story_id: &str,
) -> Result<TokenCostSummary, Error> {
    let records = io::read_token_records(project_root)?;
    Ok(token::aggregate_for_story(&records, story_id))
}

/// Get all token usage records across all stories.
pub fn get_all_token_usage(project_root: &Path) -> Result<Vec<TokenUsageRecord>, Error> {
    io::read_token_records(project_root)
}

// ── Helpers ───────────────────────────────────────────────────────────────────

fn config_to_entries(config: &ProjectConfig) -> Vec<AgentConfigEntry> {
    config
        .agent
        .iter()
        .map(|a| AgentConfigEntry {
            name: a.name.clone(),
            role: a.role.clone(),
            stage: a.stage.clone(),
            model: a.model.clone(),
            allowed_tools: a.allowed_tools.clone(),
            max_turns: a.max_turns,
            max_budget_usd: a.max_budget_usd,
        })
        .collect()
}

// ── Integration tests ─────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;
    use crate::agents::AgentStatus;
    use io::test_helpers::*;
    use std::sync::Arc;
    use tempfile::TempDir;

    fn make_pool(tmp: &TempDir) -> Arc<AgentPool> {
        let (tx, _) = tokio::sync::broadcast::channel(64);
        let pool = AgentPool::new(3001, tx);
        let state = crate::state::SessionState::default();
        *state.project_root.lock().unwrap() = Some(tmp.path().to_path_buf());
        Arc::new(pool)
    }

    // ── list_agents ───────────────────────────────────────────────────────────

    #[tokio::test]
    async fn list_agents_excludes_archived_stories() {
        let tmp = TempDir::new().unwrap();
        make_work_dirs(&tmp);
        write_story_file(
            &tmp,
            ".huskies/work/6_archived/79_story_archived.md",
            "---\nname: archived\n---\n",
        );

        let pool = make_pool(&tmp);
        pool.inject_test_agent("79_story_archived", "coder-1", AgentStatus::Completed);
        pool.inject_test_agent("80_story_active", "coder-1", AgentStatus::Running);

        let agents = list_agents(&pool, Some(tmp.path())).unwrap();
        assert!(!agents.iter().any(|a| a.story_id == "79_story_archived"));
        assert!(agents.iter().any(|a| a.story_id == "80_story_active"));
    }

    #[tokio::test]
    async fn list_agents_includes_all_when_no_project_root() {
        let tmp = TempDir::new().unwrap();
        let pool = make_pool(&tmp);
        pool.inject_test_agent("42_story_whatever", "coder-1", AgentStatus::Completed);

        let agents = list_agents(&pool, None).unwrap();
        assert!(agents.iter().any(|a| a.story_id == "42_story_whatever"));
    }

    // ── get_agent_config ──────────────────────────────────────────────────────

    #[test]
    fn get_agent_config_returns_default_when_no_toml() {
        let tmp = TempDir::new().unwrap();
        make_huskies_dir(&tmp);
        let entries = get_agent_config(tmp.path()).unwrap();
        assert_eq!(entries.len(), 1);
        assert_eq!(entries[0].name, "default");
    }

    #[test]
    fn get_agent_config_returns_configured_agents() {
        let tmp = TempDir::new().unwrap();
        make_project_toml(
            &tmp,
            r#"
[[agent]]
name = "coder-1"
role = "Full-stack engineer"
model = "sonnet"
max_turns = 30
max_budget_usd = 5.0
"#,
        );
        let entries = get_agent_config(tmp.path()).unwrap();
        assert_eq!(entries.len(), 1);
        assert_eq!(entries[0].name, "coder-1");
        assert_eq!(entries[0].model, Some("sonnet".to_string()));
        assert_eq!(entries[0].max_turns, Some(30));
    }

    // ── get_agent_output ──────────────────────────────────────────────────────

    #[test]
    fn get_agent_output_returns_empty_when_no_log() {
        let tmp = TempDir::new().unwrap();
        let output = get_agent_output(tmp.path(), "42_story_foo", "coder-1").unwrap();
        assert_eq!(output, "");
    }

    // ── get_work_item_content ─────────────────────────────────────────────────

    #[test]
    fn get_work_item_content_reads_from_backlog() {
        let tmp = TempDir::new().unwrap();
        make_stage_dirs(&tmp);
        write_story_file(
            &tmp,
            ".huskies/work/1_backlog/42_story_foo.md",
            "---\nname: \"Foo Story\"\n---\n\nSome content.",
        );
        let item = get_work_item_content(tmp.path(), "42_story_foo").unwrap();
        assert!(item.content.contains("Some content."));
        assert_eq!(item.stage, "backlog");
        assert_eq!(item.name, Some("Foo Story".to_string()));
    }

    #[test]
    fn get_work_item_content_returns_not_found_for_absent_story() {
        let tmp = TempDir::new().unwrap();
        make_stage_dirs(&tmp);
        let result = get_work_item_content(tmp.path(), "99_story_nonexistent");
        assert!(matches!(result, Err(Error::WorkItemNotFound(_))));
    }

    // ── get_work_item_token_cost ──────────────────────────────────────────────

    #[test]
    fn get_work_item_token_cost_returns_zero_when_no_records() {
        let tmp = TempDir::new().unwrap();
        let summary = get_work_item_token_cost(tmp.path(), "42_story_foo").unwrap();
        assert_eq!(summary.total_cost_usd, 0.0);
        assert!(summary.agents.is_empty());
    }

    // ── get_all_token_usage ───────────────────────────────────────────────────

    #[test]
    fn get_all_token_usage_returns_empty_when_no_file() {
        let tmp = TempDir::new().unwrap();
        let records = get_all_token_usage(tmp.path()).unwrap();
        assert!(records.is_empty());
    }

    // ── get_test_results ──────────────────────────────────────────────────────

    #[test]
    fn get_test_results_returns_none_when_no_results() {
        let tmp = TempDir::new().unwrap();
        let workflow = crate::workflow::WorkflowState::default();
        let result = get_test_results(tmp.path(), "42_story_foo", &workflow);
        assert!(result.is_none());
    }

    #[test]
    fn get_test_results_returns_in_memory_results_first() {
        let tmp = TempDir::new().unwrap();
        let mut workflow = crate::workflow::WorkflowState::default();
        workflow
            .record_test_results_validated(
                "42_story_foo".to_string(),
                vec![crate::workflow::TestCaseResult {
                    name: "test1".to_string(),
                    status: crate::workflow::TestStatus::Pass,
                    details: None,
                }],
                vec![],
            )
            .unwrap();
        let result =
            get_test_results(tmp.path(), "42_story_foo", &workflow).expect("should have results");
        assert_eq!(result.unit.len(), 1);
        assert_eq!(result.unit[0].name, "test1");
    }
}
@@ -0,0 +1,171 @@
//! Pure agent selection and filtering logic — no I/O, no side effects.
//!
//! All functions in this module are pure: they take data, transform it, and
//! return a result without touching the filesystem, network, or any mutable
//! global state. This makes them fast to test without tempdirs or async runtimes.

use crate::agent_log::LogEntry;
use crate::agents::AgentInfo;

/// Filter a list of agents, removing any whose story is archived.
///
/// `is_archived` is a predicate injected by the caller — typically a closure
/// over the project root that calls `io::is_archived`. This keeps the function
/// pure: it never touches the filesystem itself.
pub fn filter_non_archived<F>(agents: Vec<AgentInfo>, is_archived: F) -> Vec<AgentInfo>
where
    F: Fn(&str) -> bool,
{
    agents
        .into_iter()
        .filter(|info| !is_archived(&info.story_id))
        .collect()
}

/// Concatenate the text of all `output` events from an agent log.
///
/// Non-output events (status, done, error, agent_json, thinking) are silently
/// skipped. Returns an empty string when `entries` is empty or contains no
/// output events.
pub fn collect_output_text(entries: &[LogEntry]) -> String {
    entries
        .iter()
        .filter(|e| e.event.get("type").and_then(|t| t.as_str()) == Some("output"))
        .filter_map(|e| {
            e.event
                .get("text")
                .and_then(|t| t.as_str())
                .map(str::to_owned)
        })
        .collect()
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::agents::AgentStatus;

    fn make_agent(story_id: &str) -> AgentInfo {
        AgentInfo {
            story_id: story_id.to_string(),
            agent_name: "coder-1".to_string(),
            status: AgentStatus::Running,
            session_id: None,
            worktree_path: None,
            base_branch: None,
            completion: None,
            log_session_id: None,
            throttled: false,
        }
    }

    fn make_log_entry(event_type: &str, text: Option<&str>) -> LogEntry {
        let mut obj = serde_json::Map::new();
        obj.insert(
            "type".to_string(),
            serde_json::Value::String(event_type.to_string()),
        );
        if let Some(t) = text {
            obj.insert("text".to_string(), serde_json::Value::String(t.to_string()));
        }
        LogEntry {
            timestamp: "2024-01-01T00:00:00Z".to_string(),
            event: serde_json::Value::Object(obj),
        }
    }

    // ── filter_non_archived ───────────────────────────────────────────────────

    #[test]
    fn filter_keeps_non_archived_agents() {
        let agents = vec![make_agent("10_active"), make_agent("11_active")];
        let result = filter_non_archived(agents, |_| false);
        assert_eq!(result.len(), 2);
    }

    #[test]
    fn filter_removes_archived_agents() {
        let agents = vec![make_agent("10_archived"), make_agent("11_active")];
        let result = filter_non_archived(agents, |id| id == "10_archived");
        assert_eq!(result.len(), 1);
        assert_eq!(result[0].story_id, "11_active");
    }

    #[test]
    fn filter_removes_all_when_all_archived() {
        let agents = vec![make_agent("10_a"), make_agent("11_b")];
        let result = filter_non_archived(agents, |_| true);
        assert!(result.is_empty());
    }

    #[test]
    fn filter_returns_empty_for_empty_input() {
        let result = filter_non_archived(vec![], |_| false);
        assert!(result.is_empty());
    }

    #[test]
    fn filter_preserves_order() {
        let agents = vec![
            make_agent("1_a"),
            make_agent("2_b"),
            make_agent("3_c"),
            make_agent("4_d"),
        ];
        let result = filter_non_archived(agents, |id| id == "2_b");
        assert_eq!(result.len(), 3);
        assert_eq!(result[0].story_id, "1_a");
        assert_eq!(result[1].story_id, "3_c");
        assert_eq!(result[2].story_id, "4_d");
    }

    // ── collect_output_text ───────────────────────────────────────────────────

    #[test]
    fn collect_output_text_empty_entries() {
        let result = collect_output_text(&[]);
        assert_eq!(result, "");
    }

    #[test]
    fn collect_output_text_skips_non_output_events() {
        let entries = vec![
            make_log_entry("status", Some("running")),
            make_log_entry("done", None),
        ];
        let result = collect_output_text(&entries);
        assert_eq!(result, "");
    }

    #[test]
    fn collect_output_text_concatenates_output_events() {
        let entries = vec![
            make_log_entry("output", Some("Hello ")),
            make_log_entry("output", Some("world\n")),
        ];
        let result = collect_output_text(&entries);
        assert_eq!(result, "Hello world\n");
    }

    #[test]
    fn collect_output_text_skips_output_without_text_field() {
        let entry = LogEntry {
            timestamp: "2024-01-01T00:00:00Z".to_string(),
            event: serde_json::json!({"type": "output"}),
        };
        let result = collect_output_text(&[entry]);
        assert_eq!(result, "");
    }

    #[test]
    fn collect_output_text_mixed_event_types() {
        let entries = vec![
            make_log_entry("status", Some("running")),
            make_log_entry("output", Some("line1\n")),
            make_log_entry("agent_json", None),
            make_log_entry("output", Some("line2\n")),
            make_log_entry("done", None),
        ];
        let result = collect_output_text(&entries);
        assert_eq!(result, "line1\nline2\n");
    }
}
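The `collect_output_text` helper above is a filter-then-`filter_map` pipeline over JSON-backed log entries. The same shape can be seen in a standalone sketch; `Event` here is a hypothetical simplified stand-in for the crate's `LogEntry`, not the real type:

```rust
// Standalone sketch of the collect_output_text pattern: keep only events of
// one type, then extract their optional text payload, concatenating the
// results. `Event` is a toy stand-in for the JSON-backed LogEntry.
#[derive(Debug)]
struct Event {
    kind: &'static str,
    text: Option<&'static str>,
}

fn collect_output_text(entries: &[Event]) -> String {
    entries
        .iter()
        .filter(|e| e.kind == "output")            // drop status/done/etc.
        .filter_map(|e| e.text.map(str::to_owned)) // skip entries without text
        .collect()
}

fn main() {
    let entries = [
        Event { kind: "status", text: Some("running") },
        Event { kind: "output", text: Some("line1\n") },
        Event { kind: "output", text: None }, // silently skipped
        Event { kind: "output", text: Some("line2\n") },
    ];
    assert_eq!(collect_output_text(&entries), "line1\nline2\n");
}
```

Because the pipeline never allocates intermediate vectors, the only allocations are the owned strings collected into the final `String`.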
@@ -0,0 +1,160 @@
//! Pure token usage aggregation — no I/O, no side effects.
//!
//! Functions here take slices of `TokenUsageRecord` (already loaded by `io.rs`)
//! and compute summaries. Tests cover every branch without touching the filesystem.

use crate::agents::token_usage::TokenUsageRecord;
use std::collections::HashMap;

/// Per-agent cost breakdown entry.
#[derive(Debug, Clone, PartialEq)]
pub struct AgentTokenCost {
    pub agent_name: String,
    pub model: Option<String>,
    pub input_tokens: u64,
    pub output_tokens: u64,
    pub cache_creation_input_tokens: u64,
    pub cache_read_input_tokens: u64,
    pub total_cost_usd: f64,
}

/// Aggregated token cost for a story.
#[derive(Debug, Clone, PartialEq)]
pub struct TokenCostSummary {
    pub total_cost_usd: f64,
    pub agents: Vec<AgentTokenCost>,
}

/// Aggregate token usage records for a single story.
///
/// Records for other stories are ignored. The returned `agents` list is sorted
/// alphabetically by `agent_name` for deterministic output. Returns a zero-cost
/// summary when no records match the given `story_id`.
pub fn aggregate_for_story(records: &[TokenUsageRecord], story_id: &str) -> TokenCostSummary {
    let mut agent_map: HashMap<String, AgentTokenCost> = HashMap::new();
    let mut total_cost_usd = 0.0_f64;

    for record in records.iter().filter(|r| r.story_id == story_id) {
        total_cost_usd += record.usage.total_cost_usd;
        let entry = agent_map
            .entry(record.agent_name.clone())
            .or_insert_with(|| AgentTokenCost {
                agent_name: record.agent_name.clone(),
                model: record.model.clone(),
                input_tokens: 0,
                output_tokens: 0,
                cache_creation_input_tokens: 0,
                cache_read_input_tokens: 0,
                total_cost_usd: 0.0,
            });
        entry.input_tokens += record.usage.input_tokens;
        entry.output_tokens += record.usage.output_tokens;
        entry.cache_creation_input_tokens += record.usage.cache_creation_input_tokens;
        entry.cache_read_input_tokens += record.usage.cache_read_input_tokens;
        entry.total_cost_usd += record.usage.total_cost_usd;
    }

    let mut agents: Vec<AgentTokenCost> = agent_map.into_values().collect();
    agents.sort_by(|a, b| a.agent_name.cmp(&b.agent_name));

    TokenCostSummary {
        total_cost_usd,
        agents,
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::agents::TokenUsage;

    fn make_record(story_id: &str, agent: &str, cost: f64) -> TokenUsageRecord {
        TokenUsageRecord {
            story_id: story_id.to_string(),
            agent_name: agent.to_string(),
            timestamp: "2024-01-01T00:00:00Z".to_string(),
            model: None,
            usage: TokenUsage {
                input_tokens: 100,
                output_tokens: 50,
                cache_creation_input_tokens: 10,
                cache_read_input_tokens: 20,
                total_cost_usd: cost,
            },
        }
    }

    #[test]
    fn aggregate_returns_zero_when_no_records() {
        let summary = aggregate_for_story(&[], "42_story_foo");
        assert_eq!(summary.total_cost_usd, 0.0);
        assert!(summary.agents.is_empty());
    }

    #[test]
    fn aggregate_filters_to_story_id() {
        let records = vec![
            make_record("42_story_foo", "coder-1", 1.0),
            make_record("99_story_other", "coder-1", 5.0),
        ];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert!((summary.total_cost_usd - 1.0).abs() < f64::EPSILON);
        assert_eq!(summary.agents.len(), 1);
    }

    #[test]
    fn aggregate_sums_tokens_per_agent() {
        let records = vec![
            make_record("42_story_foo", "coder-1", 1.0),
            make_record("42_story_foo", "coder-1", 2.0),
        ];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert!((summary.total_cost_usd - 3.0).abs() < f64::EPSILON);
        assert_eq!(summary.agents.len(), 1);
        assert_eq!(summary.agents[0].input_tokens, 200);
        assert_eq!(summary.agents[0].output_tokens, 100);
        assert!((summary.agents[0].total_cost_usd - 3.0).abs() < f64::EPSILON);
    }

    #[test]
    fn aggregate_splits_by_agent() {
        let records = vec![
            make_record("42_story_foo", "coder-1", 1.0),
            make_record("42_story_foo", "qa", 0.5),
        ];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert!((summary.total_cost_usd - 1.5).abs() < f64::EPSILON);
        assert_eq!(summary.agents.len(), 2);
        // sorted alphabetically
        assert_eq!(summary.agents[0].agent_name, "coder-1");
        assert_eq!(summary.agents[1].agent_name, "qa");
    }

    #[test]
    fn aggregate_sorts_agents_alphabetically() {
        let records = vec![
            make_record("42_story_foo", "z-agent", 1.0),
            make_record("42_story_foo", "a-agent", 1.0),
            make_record("42_story_foo", "m-agent", 1.0),
        ];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert_eq!(summary.agents[0].agent_name, "a-agent");
        assert_eq!(summary.agents[1].agent_name, "m-agent");
        assert_eq!(summary.agents[2].agent_name, "z-agent");
    }

    #[test]
    fn aggregate_returns_zero_when_no_matching_story() {
        let records = vec![make_record("99_other", "coder-1", 5.0)];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert_eq!(summary.total_cost_usd, 0.0);
        assert!(summary.agents.is_empty());
    }

    #[test]
    fn aggregate_preserves_model_from_first_record() {
        let mut r = make_record("42_story_foo", "coder-1", 1.0);
        r.model = Some("claude-sonnet".to_string());
        let summary = aggregate_for_story(&[r], "42_story_foo");
        assert_eq!(summary.agents[0].model, Some("claude-sonnet".to_string()));
    }
}
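The core of `aggregate_for_story` is the `entry().or_insert_with()` accumulation: group records by key, seed a zeroed accumulator on first sight, then add into it. A minimal standalone sketch of that pattern, with a hypothetical `Record` type standing in for `TokenUsageRecord`:

```rust
use std::collections::HashMap;

// Toy stand-in for TokenUsageRecord: one key field and one summed field.
struct Record {
    agent: &'static str,
    cost: f64,
}

// Group by agent name and sum costs, sorting for deterministic output
// (the same reason aggregate_for_story sorts its agents list).
fn totals_by_agent(records: &[Record]) -> Vec<(String, f64)> {
    let mut map: HashMap<String, f64> = HashMap::new();
    for r in records {
        // entry() inserts 0.0 on first sight of the key, then accumulates.
        *map.entry(r.agent.to_string()).or_insert(0.0) += r.cost;
    }
    let mut out: Vec<(String, f64)> = map.into_iter().collect();
    out.sort_by(|a, b| a.0.cmp(&b.0));
    out
}

fn main() {
    let records = [
        Record { agent: "qa", cost: 0.5 },
        Record { agent: "coder-1", cost: 1.0 },
        Record { agent: "coder-1", cost: 2.0 },
    ];
    let totals = totals_by_agent(&records);
    assert_eq!(totals[0].0, "coder-1");
    assert!((totals[0].1 - 3.0).abs() < f64::EPSILON);
    assert_eq!(totals[1].0, "qa");
}
```

Sorting after `into_values()` rather than using a `BTreeMap` keeps the hot accumulation loop on `HashMap`'s O(1) lookups and pays the sort cost once at the end.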
@@ -0,0 +1,100 @@
//! Anthropic I/O — the ONLY place in `service/anthropic/` that may perform
//! network requests or store operations.
//!
//! Every function here is a thin adapter that converts lower-level errors
//! into the typed [`super::Error`] variants. No business logic or branching
//! lives here; that belongs in `mod.rs`.

use super::{Error, ModelSummary, ModelsResponse};
use crate::store::StoreOps;
use reqwest::header::{HeaderMap, HeaderValue};

/// Store key for the Anthropic API key — shared with `llm::chat`.
pub(crate) const KEY_ANTHROPIC_API_KEY: &str = "anthropic_api_key";

const ANTHROPIC_VERSION: &str = "2023-06-01";

/// Return whether a non-empty API key is stored.
pub(super) fn api_key_exists(store: &dyn StoreOps) -> bool {
    match store.get(KEY_ANTHROPIC_API_KEY) {
        Some(value) => value.as_str().map(|k| !k.is_empty()).unwrap_or(false),
        None => false,
    }
}

/// Read the stored API key, returning a typed error when absent or invalid.
pub(super) fn get_api_key(store: &dyn StoreOps) -> Result<String, Error> {
    match store.get(KEY_ANTHROPIC_API_KEY) {
        Some(value) => {
            if let Some(key) = value.as_str() {
                if key.is_empty() {
                    Err(Error::Validation(
                        "Anthropic API key is empty. Please set your API key.".to_string(),
                    ))
                } else {
                    Ok(key.to_string())
                }
            } else {
                Err(Error::Validation(
                    "Stored API key is not a string".to_string(),
                ))
            }
        }
        None => Err(Error::Validation(
            "Anthropic API key not found. Please set your API key.".to_string(),
        )),
    }
}

/// Persist a new API key to the store.
pub(super) fn save_api_key(store: &dyn StoreOps, api_key: &str) -> Result<(), String> {
    store.set(KEY_ANTHROPIC_API_KEY, serde_json::json!(api_key));
    store.save()
}

/// Fetch models from the Anthropic API at `url`.
pub(super) async fn fetch_models(api_key: &str, url: &str) -> Result<Vec<ModelSummary>, Error> {
    let client = reqwest::Client::new();
    let mut headers = HeaderMap::new();
    headers.insert(
        "x-api-key",
        HeaderValue::from_str(api_key)
            .map_err(|e| Error::Validation(format!("Invalid API key header value: {e}")))?,
    );
    headers.insert(
        "anthropic-version",
        HeaderValue::from_static(ANTHROPIC_VERSION),
    );

    let response = client
        .get(url)
        .headers(headers)
        .send()
        .await
        .map_err(|e| Error::UpstreamApi(e.to_string()))?;

    if !response.status().is_success() {
        let status = response.status();
        let error_text = response
            .text()
            .await
            .unwrap_or_else(|_| "Unknown error".to_string());
        return Err(Error::UpstreamApi(format!(
            "Anthropic API error {status}: {error_text}"
        )));
    }

    let body = response
        .json::<ModelsResponse>()
        .await
        .map_err(|e| Error::Internal(format!("Failed to parse response: {e}")))?;

    Ok(body
        .data
        .into_iter()
        .map(|m| ModelSummary {
            id: m.id,
            context_window: m.context_window,
        })
        .collect())
}
@@ -0,0 +1,178 @@
//! Anthropic service — public API for Anthropic API-key management and model listing.
//!
//! Exposes functions to check, store, and use the Anthropic API key, and to
//! list available models. HTTP handlers call these functions instead of
//! talking to `llm::chat` or making HTTP requests directly.
//!
//! Conventions: `docs/architecture/service-modules.md`

pub(super) mod io;

use crate::store::StoreOps;
use serde::{Deserialize, Serialize};

const ANTHROPIC_MODELS_URL: &str = "https://api.anthropic.com/v1/models";

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::anthropic` functions.
///
/// HTTP handlers map these to status codes:
/// - [`Error::Validation`] → 400 Bad Request
/// - [`Error::UpstreamApi`] → 502 Bad Gateway (or 400 for invalid keys)
/// - [`Error::Internal`] → 500 Internal Server Error
#[derive(Debug)]
pub enum Error {
    /// The request was invalid (e.g. missing, empty, or malformed API key).
    Validation(String),
    /// The upstream Anthropic API returned an error or was unreachable.
    UpstreamApi(String),
    /// An internal error occurred (JSON parse failure, store I/O error, etc.).
    Internal(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::Validation(msg) => write!(f, "Validation error: {msg}"),
            Self::UpstreamApi(msg) => write!(f, "Upstream API error: {msg}"),
            Self::Internal(msg) => write!(f, "Internal error: {msg}"),
        }
    }
}

// ── Types ─────────────────────────────────────────────────────────────────────

/// A summary of an Anthropic model as returned by the `/v1/models` endpoint.
#[derive(Serialize, Deserialize, Debug, PartialEq, poem_openapi::Object)]
pub struct ModelSummary {
    pub id: String,
    pub context_window: u64,
}

/// Raw response shape from the Anthropic `/v1/models` endpoint.
#[derive(Deserialize)]
pub(super) struct ModelsResponse {
    pub data: Vec<ModelInfo>,
}

/// A single model entry in the Anthropic API response.
#[derive(Deserialize)]
pub(super) struct ModelInfo {
    pub id: String,
    pub context_window: u64,
}

// ── Public API ────────────────────────────────────────────────────────────────

/// Return whether a non-empty Anthropic API key is currently stored.
pub fn get_api_key_exists(store: &dyn StoreOps) -> Result<bool, Error> {
    Ok(io::api_key_exists(store))
}

/// Read the stored Anthropic API key.
///
/// Returns [`Error::Validation`] when the key is absent, empty, or not a string.
pub fn get_api_key(store: &dyn StoreOps) -> Result<String, Error> {
    io::get_api_key(store)
}

/// Store or replace the Anthropic API key.
pub fn set_api_key(store: &dyn StoreOps, api_key: String) -> Result<(), Error> {
    io::save_api_key(store, &api_key).map_err(Error::Internal)
}

/// List available Anthropic models from the production endpoint.
pub async fn list_models(store: &dyn StoreOps) -> Result<Vec<ModelSummary>, Error> {
    list_models_from(store, ANTHROPIC_MODELS_URL).await
}

/// List available Anthropic models from `url` (injectable for tests).
pub async fn list_models_from(store: &dyn StoreOps, url: &str) -> Result<Vec<ModelSummary>, Error> {
    let api_key = get_api_key(store)?;
    io::fetch_models(&api_key, url).await
}

/// Parse a raw JSON string from the Anthropic `/v1/models` endpoint into model summaries.
///
/// Pure function for unit testing; production code uses [`list_models`].
#[cfg(test)]
pub fn parse_models_response(json: &str) -> Result<Vec<ModelSummary>, Error> {
    let response: ModelsResponse = serde_json::from_str(json)
        .map_err(|e| Error::Internal(format!("Failed to parse models response: {e}")))?;
    Ok(response
        .data
        .into_iter()
        .map(|m| ModelSummary {
            id: m.id,
            context_window: m.context_window,
        })
        .collect())
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    // Pure unit tests for response parsing — no tempdir, no network.

    #[test]
    fn parse_models_response_parses_single_model() {
        let json = r#"{"data":[{"id":"claude-opus-4-5","context_window":200000}]}"#;
        let models = parse_models_response(json).unwrap();
        assert_eq!(models.len(), 1);
        assert_eq!(models[0].id, "claude-opus-4-5");
        assert_eq!(models[0].context_window, 200000);
    }

    #[test]
    fn parse_models_response_parses_multiple_models() {
        let json = r#"{"data":[
            {"id":"claude-opus-4-5","context_window":200000},
            {"id":"claude-haiku-4-5-20251001","context_window":100000}
        ]}"#;
        let models = parse_models_response(json).unwrap();
        assert_eq!(models.len(), 2);
        assert_eq!(models[0].id, "claude-opus-4-5");
        assert_eq!(models[1].context_window, 100000);
    }

    #[test]
    fn parse_models_response_returns_empty_for_empty_data() {
        let json = r#"{"data":[]}"#;
        let models = parse_models_response(json).unwrap();
        assert!(models.is_empty());
    }

    #[test]
    fn parse_models_response_returns_internal_error_for_invalid_json() {
        let result = parse_models_response("not json at all");
        assert!(matches!(result, Err(Error::Internal(_))));
    }

    #[test]
    fn parse_models_response_returns_error_for_missing_data_field() {
        let result = parse_models_response(r#"{"wrong_field":[]}"#);
        assert!(matches!(result, Err(Error::Internal(_))));
    }

    #[test]
    fn error_display_validation() {
        let e = Error::Validation("no key".to_string());
        assert!(e.to_string().contains("no key"));
    }

    #[test]
    fn error_display_upstream_api() {
        let e = Error::UpstreamApi("500 Server Error".to_string());
        assert!(e.to_string().contains("500 Server Error"));
    }

    #[test]
    fn error_display_internal() {
        let e = Error::Internal("parse failed".to_string());
        assert!(e.to_string().contains("parse failed"));
    }
}
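The doc comment on the `Error` enum promises a fixed error-to-status mapping for HTTP handlers. A standalone sketch of that mapping, with the enum duplicated locally and the status codes returned as plain `u16`s (the real handlers live elsewhere and are not shown in this diff):

```rust
// Local copy of the three-variant error shape from service::anthropic,
// mapped to the status codes its doc comment lists. The handler-side
// function name `status_code` is illustrative, not from the crate.
#[derive(Debug)]
enum Error {
    Validation(String),
    UpstreamApi(String),
    Internal(String),
}

fn status_code(e: &Error) -> u16 {
    match e {
        Error::Validation(_) => 400,  // bad request: missing/empty/malformed key
        Error::UpstreamApi(_) => 502, // bad gateway: Anthropic API failed
        Error::Internal(_) => 500,    // internal: parse or store failure
    }
}

fn main() {
    assert_eq!(status_code(&Error::Validation("no key".into())), 400);
    assert_eq!(status_code(&Error::UpstreamApi("down".into())), 502);
    assert_eq!(status_code(&Error::Internal("parse failed".into())), 500);
}
```

Keeping the mapping in one exhaustive `match` means adding a fourth variant is a compile error at every handler until a status is chosen for it.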
@@ -0,0 +1,158 @@
//! Bot command I/O — the ONLY place in `service/bot_command/` that may call
//! transport handlers, load stores, spawn tasks, or interact with the agent
//! pool.
//!
//! Every function here is a thin adapter over the underlying matrix/timer/htop
//! handlers. No argument parsing or business logic lives here — that belongs in
//! `parse.rs` or `mod.rs`.

use crate::agents::AgentPool;
use std::path::Path;
use std::sync::Arc;

use super::parse::{AssignArgs, StartArgs};

/// Call the Matrix `assign` handler with pre-validated arguments.
pub(super) async fn call_assign(
    args: &AssignArgs,
    project_root: &Path,
    agents: &Arc<AgentPool>,
) -> String {
    crate::chat::transport::matrix::assign::handle_assign(
        "web-ui",
        &args.number,
        &args.model,
        project_root,
        agents,
    )
    .await
}

/// Call the Matrix `start` handler with pre-validated arguments.
pub(super) async fn call_start(
    args: &StartArgs,
    project_root: &Path,
    agents: &Arc<AgentPool>,
) -> String {
    crate::chat::transport::matrix::start::handle_start(
        "web-ui",
        &args.number,
        args.hint.as_deref(),
        project_root,
        agents,
    )
    .await
}

/// Call the Matrix `delete` handler with a pre-validated story number.
pub(super) async fn call_delete(
    number: &str,
    project_root: &Path,
    agents: &Arc<AgentPool>,
) -> String {
    crate::chat::transport::matrix::delete::handle_delete("web-ui", number, project_root, agents)
        .await
}

/// Call the Matrix `rmtree` handler with a pre-validated story number.
pub(super) async fn call_rmtree(
    number: &str,
    project_root: &Path,
    agents: &Arc<AgentPool>,
) -> String {
    crate::chat::transport::matrix::rmtree::handle_rmtree("web-ui", number, project_root, agents)
        .await
}

/// Call the Matrix `rebuild` handler.
pub(super) async fn call_rebuild(project_root: &Path, agents: &Arc<AgentPool>) -> String {
    crate::chat::transport::matrix::rebuild::handle_rebuild("web-ui", project_root, agents).await
}

/// Parse and execute a `timer` command.
///
/// Returns `Err` with a usage string if the timer arguments cannot be parsed.
pub(super) async fn call_timer(args: &str, project_root: &Path) -> Result<String, String> {
    let synthetic = format!("__web_ui__ timer {args}");
    let timer_cmd = match crate::service::timer::extract_timer_command(
        &synthetic,
        "__web_ui__",
        "@__web_ui__:localhost",
    ) {
        Some(cmd) => cmd,
        None => {
            return Err(
                "Usage: `/timer list`, `/timer <number> <HH:MM>`, or `/timer cancel <number>`"
                    .to_string(),
            );
        }
    };
    let store =
        crate::service::timer::TimerStore::load(project_root.join(".huskies").join("timers.json"));
    Ok(crate::service::timer::handle_timer_command(timer_cmd, &store, project_root).await)
}

/// Build an `htop` snapshot for the web UI.
///
/// The web UI uses one-shot HTTP requests, so live-updating sessions are not
/// supported. `htop stop` returns a helpful explanation instead of an error.
pub(super) fn call_htop(args: &str, agents: &Arc<AgentPool>) -> String {
    use crate::chat::transport::matrix::htop::{HtopCommand, build_htop_message};

    let synthetic = if args.is_empty() {
        "__web_ui__ htop".to_string()
    } else {
        format!("__web_ui__ htop {args}")
    };

    match crate::chat::transport::matrix::htop::extract_htop_command(
        &synthetic,
        "__web_ui__",
        "@__web_ui__:localhost",
    ) {
        Some(HtopCommand::Stop) => "No active htop session in the web UI. \
            Live sessions are only supported in chat transports (Matrix, Slack, Discord)."
            .to_string(),
        Some(HtopCommand::Start { duration_secs }) => build_htop_message(agents, 0, duration_secs),
        None => build_htop_message(agents, 0, 300),
    }
}

/// Dispatch through the synchronous command registry.
///
/// Returns `Some(response)` if the command keyword is registered, or `None`
/// if the keyword is unknown.
pub(super) fn call_sync(
    cmd: &str,
    args: &str,
    project_root: &Path,
    agents: &Arc<AgentPool>,
) -> Option<String> {
    use crate::chat::commands::CommandDispatch;
    use std::collections::HashSet;
    use std::sync::Mutex;

    let ambient_rooms: Arc<Mutex<HashSet<String>>> = Arc::new(Mutex::new(HashSet::new()));
    let bot_name = "__web_ui__";
    let bot_user_id = "@__web_ui__:localhost";
    let room_id = "__web_ui__";

    let dispatch = CommandDispatch {
        bot_name,
        bot_user_id,
        project_root,
        agents,
        ambient_rooms: &ambient_rooms,
        room_id,
    };

    // Build a synthetic bot-addressed message so the registry parses it
    // identically to messages from chat transports.
    let synthetic = if args.is_empty() {
        format!("{bot_name} {cmd}")
    } else {
        format!("{bot_name} {cmd} {args}")
    };

    crate::chat::commands::try_handle_command(&dispatch, &synthetic)
}
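Several of the adapters above (`call_timer`, `call_htop`, `call_sync`) reuse the chat-message parsers by prefixing the raw web-UI arguments with a synthetic bot mention. The string-building step can be isolated as a small sketch (`build_synthetic` is an illustrative name, not a function in the crate):

```rust
// Sketch of the synthetic-message trick: prepend a fake bot mention so the
// same parser that handles chat transports also handles web-UI requests.
// Empty args must not leave a trailing space, hence the branch.
fn build_synthetic(bot_name: &str, cmd: &str, args: &str) -> String {
    if args.is_empty() {
        format!("{bot_name} {cmd}")
    } else {
        format!("{bot_name} {cmd} {args}")
    }
}

fn main() {
    assert_eq!(build_synthetic("__web_ui__", "htop", ""), "__web_ui__ htop");
    assert_eq!(
        build_synthetic("__web_ui__", "timer", "42 13:30"),
        "__web_ui__ timer 42 13:30"
    );
}
```

The payoff is a single parsing code path: the web UI never needs its own argument grammar, only a fake sender identity (`__web_ui__`).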
@@ -0,0 +1,97 @@
//! Bot command service — domain logic for dispatching slash commands.
//!
//! Extracted from `http/bot_command.rs` so that argument parsing and dispatch
//! are independently testable without an HTTP layer.
//!
//! Conventions: `docs/architecture/service-modules.md`
//!
//! # Structure
//! - `mod.rs` (this file) — public API and typed `Error` type
//! - `parse.rs` — pure argument parsing, no I/O
//! - `io.rs` — all side-effectful calls (transport handlers, stores, agent pool)

pub(super) mod io;
pub mod parse;

use crate::agents::AgentPool;
use std::path::Path;
use std::sync::Arc;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::bot_command::execute`.
///
/// HTTP handlers map these to specific status codes:
/// - [`Error::UnknownCommand`] → 404 Not Found
/// - [`Error::BadArgs`] → 400 Bad Request
/// - [`Error::CommandFailed`] → 500 Internal Server Error
#[derive(Debug)]
#[allow(dead_code)] // CommandFailed is part of the public API contract; not yet reachable
pub enum Error {
    /// The command keyword does not match any registered command.
    UnknownCommand(String),
    /// The command exists but the provided arguments are invalid.
    BadArgs(String),
    /// The command ran but failed with an internal error.
    CommandFailed(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::UnknownCommand(msg) | Self::BadArgs(msg) | Self::CommandFailed(msg) => {
                write!(f, "{msg}")
            }
        }
    }
}

// ── Public API ────────────────────────────────────────────────────────────────

/// Execute a bot command and return the markdown response.
///
/// Dispatches to the same handlers used by the Matrix and Slack bots. The
/// `cmd` argument is the lower-cased command keyword (e.g. `"status"`,
/// `"start"`). The `args` argument is any text after the keyword, already
/// trimmed.
///
/// # Errors
/// - [`Error::UnknownCommand`] if the command keyword is not registered.
/// - [`Error::BadArgs`] if the arguments fail validation.
/// - [`Error::CommandFailed`] if command execution raises an internal error.
pub async fn execute(
    cmd: &str,
    args: &str,
    project_root: &Path,
    agents: &Arc<AgentPool>,
) -> Result<String, Error> {
    match cmd {
        "assign" => {
            let parsed = parse::parse_assign(args).map_err(Error::BadArgs)?;
            Ok(io::call_assign(&parsed, project_root, agents).await)
        }
        "start" => {
            let parsed = parse::parse_start(args).map_err(Error::BadArgs)?;
            Ok(io::call_start(&parsed, project_root, agents).await)
        }
        "delete" => {
            let number = parse::parse_number("delete", args).map_err(Error::BadArgs)?;
            Ok(io::call_delete(&number, project_root, agents).await)
        }
        "rmtree" => {
            let number = parse::parse_number("rmtree", args).map_err(Error::BadArgs)?;
            Ok(io::call_rmtree(&number, project_root, agents).await)
        }
        "rebuild" => Ok(io::call_rebuild(project_root, agents).await),
        "timer" => io::call_timer(args, project_root)
            .await
            .map_err(Error::BadArgs),
        "htop" => Ok(io::call_htop(args, agents)),
        _ => match io::call_sync(cmd, args, project_root, agents) {
            Some(response) => Ok(response),
            None => Err(Error::UnknownCommand(format!(
                "Unknown command: `/{cmd}`. Type `/help` to see available commands."
            ))),
        },
    }
}
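The `Error` → status-code contract documented on the enum can be sketched standalone. The enum and `Display` impl below are copied from this hunk; `status_for` is a hypothetical helper added only for illustration — in the real code this mapping lives in the HTTP handlers, not in the service module.

```rust
// Standalone sketch. `Error` and its Display impl mirror the hunk above;
// `status_for` is a HYPOTHETICAL helper showing the documented mapping.
use std::fmt;

#[derive(Debug)]
enum Error {
    UnknownCommand(String),
    BadArgs(String),
    CommandFailed(String),
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Every variant carries a user-visible message; Display forwards it
        // unchanged so any transport can render it verbatim.
        match self {
            Self::UnknownCommand(m) | Self::BadArgs(m) | Self::CommandFailed(m) => {
                write!(f, "{m}")
            }
        }
    }
}

// Hypothetical: the status-code mapping stated in the enum's doc comment.
fn status_for(err: &Error) -> u16 {
    match err {
        Error::UnknownCommand(_) => 404,
        Error::BadArgs(_) => 400,
        Error::CommandFailed(_) => 500,
    }
}

fn main() {
    let e = Error::UnknownCommand("Unknown command: `/frob`".to_string());
    assert_eq!(status_for(&e), 404);
    // The variant only matters for the status code; the message passes through.
    assert_eq!(e.to_string(), "Unknown command: `/frob`");
}
```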
@@ -0,0 +1,216 @@
//! Pure argument parsing for bot commands.
//!
//! Every function in this module is synchronous and free of I/O. All
//! filesystem, network, and agent-pool access belongs in `io.rs`.

// ── Parsed argument types ─────────────────────────────────────────────────────

/// Parsed arguments for the `assign` command.
#[derive(Debug)]
pub struct AssignArgs {
    /// The numeric story identifier (as a string, e.g. `"42"`).
    pub number: String,
    /// The model / agent name (e.g. `"opus"`, `"coder-sonnet"`).
    pub model: String,
}

/// Parsed arguments for the `start` command.
#[derive(Debug)]
pub struct StartArgs {
    /// The numeric story identifier.
    pub number: String,
    /// Optional model hint (e.g. `"opus"` → resolved to `"coder-opus"`).
    pub hint: Option<String>,
}

// ── Parsing functions ─────────────────────────────────────────────────────────

/// Parse `assign` arguments: `<number> <model>`.
///
/// Returns `Err` with a user-visible usage string if the arguments are missing
/// or invalid (non-numeric number, empty model).
pub fn parse_assign(args: &str) -> Result<AssignArgs, String> {
    let mut parts = args.splitn(2, char::is_whitespace);
    let number = parts.next().unwrap_or("").trim().to_string();
    let model = parts.next().unwrap_or("").trim().to_string();

    if number.is_empty() || !number.chars().all(|c| c.is_ascii_digit()) || model.is_empty() {
        return Err("Usage: `/assign <number> <model>` (e.g. `/assign 42 opus`)".to_string());
    }

    Ok(AssignArgs { number, model })
}

/// Parse `start` arguments: `<number>` or `<number> <model_hint>`.
///
/// Returns `Err` with a user-visible usage string if the number is missing
/// or non-numeric.
pub fn parse_start(args: &str) -> Result<StartArgs, String> {
    let mut parts = args.splitn(2, char::is_whitespace);
    let number = parts.next().unwrap_or("").trim().to_string();
    let hint_str = parts.next().unwrap_or("").trim();

    if number.is_empty() || !number.chars().all(|c| c.is_ascii_digit()) {
        return Err(
            "Usage: `/start <number>` or `/start <number> <model>` (e.g. `/start 42 opus`)"
                .to_string(),
        );
    }

    let hint = if hint_str.is_empty() {
        None
    } else {
        Some(hint_str.to_string())
    };

    Ok(StartArgs { number, hint })
}

/// Parse a single numeric argument for commands like `delete` and `rmtree`.
///
/// `cmd_name` is used only in the error message (e.g. `"delete"` or `"rmtree"`).
/// Returns `Err` with a user-visible usage string if the argument is missing
/// or non-numeric.
pub fn parse_number(cmd_name: &str, args: &str) -> Result<String, String> {
    let number = args.trim().to_string();
    if number.is_empty() || !number.chars().all(|c| c.is_ascii_digit()) {
        return Err(format!(
            "Usage: `/{cmd_name} <number>` (e.g. `/{cmd_name} 42`)"
        ));
    }
    Ok(number)
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    // -- parse_assign ----------------------------------------------------------

    #[test]
    fn assign_valid() {
        let r = parse_assign("42 opus").unwrap();
        assert_eq!(r.number, "42");
        assert_eq!(r.model, "opus");
    }

    #[test]
    fn assign_valid_model_with_spaces() {
        // splitn(2): everything after the first whitespace goes into `model`,
        // so a model name containing spaces is preserved whole.
        let r = parse_assign("42 claude opus 4").unwrap();
        assert_eq!(r.number, "42");
        assert_eq!(r.model, "claude opus 4");
    }

    #[test]
    fn assign_missing_all_args() {
        assert!(parse_assign("").is_err());
    }

    #[test]
    fn assign_missing_model() {
        let err = parse_assign("42").unwrap_err();
        assert!(
            err.contains("Usage"),
            "error should contain usage hint: {err}"
        );
    }

    #[test]
    fn assign_non_numeric_number() {
        let err = parse_assign("foo opus").unwrap_err();
        assert!(
            err.contains("Usage"),
            "error should contain usage hint: {err}"
        );
    }

    #[test]
    fn assign_number_with_letters_is_invalid() {
        assert!(parse_assign("42x opus").is_err());
    }

    // -- parse_start -----------------------------------------------------------

    #[test]
    fn start_valid_number_only() {
        let r = parse_start("42").unwrap();
        assert_eq!(r.number, "42");
        assert!(r.hint.is_none());
    }

    #[test]
    fn start_valid_with_hint() {
        let r = parse_start("42 opus").unwrap();
        assert_eq!(r.number, "42");
        assert_eq!(r.hint.as_deref(), Some("opus"));
    }

    #[test]
    fn start_missing_number() {
        let err = parse_start("").unwrap_err();
        assert!(
            err.contains("Usage"),
            "error should contain usage hint: {err}"
        );
    }

    #[test]
    fn start_non_numeric_number() {
        let err = parse_start("foo").unwrap_err();
        assert!(
            err.contains("Usage"),
            "error should contain usage hint: {err}"
        );
    }

    #[test]
    fn start_non_numeric_with_hint() {
        assert!(parse_start("foo opus").is_err());
    }

    // -- parse_number ----------------------------------------------------------

    #[test]
    fn number_valid() {
        assert_eq!(parse_number("delete", "99").unwrap(), "99");
    }

    #[test]
    fn number_missing() {
        let err = parse_number("delete", "").unwrap_err();
        assert!(
            err.contains("Usage"),
            "error should contain usage hint: {err}"
        );
        assert!(
            err.contains("delete"),
            "error should mention the command: {err}"
        );
    }

    #[test]
    fn number_non_numeric() {
        let err = parse_number("delete", "abc").unwrap_err();
        assert!(
            err.contains("Usage"),
            "error should contain usage hint: {err}"
        );
    }

    #[test]
    fn number_usage_contains_cmd_name() {
        let err = parse_number("rmtree", "").unwrap_err();
        assert!(
            err.contains("rmtree"),
            "usage should mention the command: {err}"
        );
    }

    #[test]
    fn number_whitespace_only_is_invalid() {
        assert!(parse_number("delete", "  ").is_err());
    }
}
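The simplest of the three parsers can be exercised standalone. The function body below is copied verbatim from `parse.rs` in the hunk above; the calls in `main` show which edge cases the single usage error covers (the "4 2" case is a detail worth noting: `trim` only strips leading/trailing whitespace, so inner whitespace still fails the digit check).

```rust
// Standalone copy of `parse_number` from the `parse.rs` hunk above,
// exercised directly to show which inputs hit the usage error.
fn parse_number(cmd_name: &str, args: &str) -> Result<String, String> {
    let number = args.trim().to_string();
    if number.is_empty() || !number.chars().all(|c| c.is_ascii_digit()) {
        return Err(format!(
            "Usage: `/{cmd_name} <number>` (e.g. `/{cmd_name} 42`)"
        ));
    }
    Ok(number)
}

fn main() {
    // Leading/trailing whitespace is trimmed away before validation.
    assert_eq!(parse_number("delete", " 99 ").unwrap(), "99");
    // Inner whitespace survives trim, so the all-digits check rejects it.
    assert!(parse_number("delete", "4 2").is_err());
    // A sign is not an ASCII digit; negative numbers are rejected too.
    assert!(parse_number("rmtree", "-1").is_err());
    // The command name is interpolated into the usage string.
    assert!(parse_number("rmtree", "").unwrap_err().contains("rmtree"));
}
```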
@@ -0,0 +1,70 @@
//! Pure helpers for pipeline item ID parsing.
//!
//! Pipeline item IDs share the format `{number}_{type}_{slug}`, e.g.
//! `"42_story_foo"`, `"7_bug_bar"`, `"100_refactor_baz"`. The functions here
//! extract or validate the leading numeric segment without performing any I/O.

/// Extract the numeric prefix from a pipeline item ID.
///
/// Returns the leading digit sequence from IDs like `"42_story_foo"` → `"42"`.
/// Returns `None` if the ID has no leading digit sequence.
pub fn extract_item_number(item_id: &str) -> Option<&str> {
    item_id
        .split('_')
        .next()
        .filter(|s| !s.is_empty() && s.chars().all(|c| c.is_ascii_digit()))
}

/// Return `true` if `item_id` has a valid `{digits}_` prefix format.
///
/// Valid: `"42_story_foo"`, `"1_bug_bar"`.
/// Invalid: `"story_without_number"`, `""`, `"abc_story"`.
#[allow(dead_code)]
pub fn has_valid_id_prefix(item_id: &str) -> bool {
    extract_item_number(item_id).is_some()
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn extract_item_number_extracts_prefix() {
        assert_eq!(extract_item_number("42_story_foo"), Some("42"));
        assert_eq!(extract_item_number("1_bug_bar"), Some("1"));
        assert_eq!(extract_item_number("100_refactor_baz"), Some("100"));
        assert_eq!(
            extract_item_number("261_story_bot_notifications"),
            Some("261")
        );
        assert_eq!(extract_item_number("1_spike_research"), Some("1"));
    }

    #[test]
    fn extract_item_number_returns_none_for_no_numeric_prefix() {
        assert_eq!(extract_item_number("story_without_number"), None);
        assert_eq!(extract_item_number("abc_story"), None);
        assert_eq!(extract_item_number("abc_story_thing"), None);
        assert_eq!(extract_item_number(""), None);
    }

    #[test]
    fn extract_item_number_returns_none_for_empty_first_segment() {
        // Leading underscore: first segment is "".
        assert_eq!(extract_item_number("_story_thing"), None);
    }

    #[test]
    fn has_valid_id_prefix_returns_true_for_valid_ids() {
        assert!(has_valid_id_prefix("42_story_foo"));
        assert!(has_valid_id_prefix("1_bug_bar"));
    }

    #[test]
    fn has_valid_id_prefix_returns_false_for_invalid_ids() {
        assert!(!has_valid_id_prefix("story_no_number"));
        assert!(!has_valid_id_prefix(""));
    }
}
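One behaviour of `extract_item_number` worth calling out: `split('_')` yields the whole string as its first segment when no underscore is present, so a bare number like `"42"` also parses. The function below is copied verbatim from the hunk above:

```rust
// Copied verbatim from the `item_id.rs` hunk above.
fn extract_item_number(item_id: &str) -> Option<&str> {
    item_id
        .split('_')
        .next()
        .filter(|s| !s.is_empty() && s.chars().all(|c| c.is_ascii_digit()))
}

fn main() {
    assert_eq!(extract_item_number("42_story_foo"), Some("42"));
    // No underscore: split('_') still yields one segment, the whole string,
    // so a bare numeric ID is accepted.
    assert_eq!(extract_item_number("42"), Some("42"));
    // Leading underscore makes the first segment empty, which is rejected.
    assert_eq!(extract_item_number("_story_thing"), None);
    assert_eq!(extract_item_number("abc_story"), None);
}
```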
@@ -0,0 +1,6 @@
//! Shared pure helpers used by multiple service modules.
//!
//! All sub-modules here are pure (no I/O, no side effects). Any helper that
//! duplicates logic across two or more service modules belongs here; anything
//! used by only one service stays in that service.
pub mod item_id;
@@ -0,0 +1,72 @@
//! Diagnostics I/O — the ONLY place in `service::diagnostics/` that may perform side effects.
//!
//! Side effects here include: reading and writing `.claude/settings.json` via `std::fs`.
//! Pure permission-rule logic (pattern derivation, wildcard domination checks) lives in
//! `permission.rs`.

use serde_json::{Value, json};
use std::fs;
use std::path::Path;

/// Add a permission rule to `.claude/settings.json` in the project root.
///
/// Does nothing if the rule already exists (exact match) or is already covered
/// by a wildcard pattern in the allow list. Creates the file and any missing
/// parent directories if they do not yet exist.
///
/// # Errors
/// Returns `Err(String)` if the directory cannot be created, the file cannot be
/// read or written, or the JSON cannot be parsed or serialised.
pub fn add_permission_rule(project_root: &Path, rule: &str) -> Result<(), String> {
    let claude_dir = project_root.join(".claude");
    fs::create_dir_all(&claude_dir)
        .map_err(|e| format!("Failed to create .claude/ directory: {e}"))?;

    let settings_path = claude_dir.join("settings.json");
    let mut settings: Value = if settings_path.exists() {
        let content = fs::read_to_string(&settings_path)
            .map_err(|e| format!("Failed to read settings.json: {e}"))?;
        serde_json::from_str(&content).map_err(|e| format!("Failed to parse settings.json: {e}"))?
    } else {
        json!({ "permissions": { "allow": [] } })
    };

    let allow_arr = settings
        .pointer_mut("/permissions/allow")
        .and_then(|v| v.as_array_mut());

    let allow = match allow_arr {
        Some(arr) => arr,
        None => {
            settings
                .as_object_mut()
                .unwrap()
                .entry("permissions")
                .or_insert(json!({ "allow": [] }));
            settings
                .pointer_mut("/permissions/allow")
                .unwrap()
                .as_array_mut()
                .unwrap()
        }
    };

    let rule_value = Value::String(rule.to_string());

    // Exact duplicate check.
    if allow.contains(&rule_value) {
        return Ok(());
    }

    // Wildcard-coverage check: if "mcp__huskies__*" exists, skip more-specific rules.
    if super::permission::is_dominated_by_wildcard(rule, allow) {
        return Ok(());
    }

    allow.push(rule_value);

    let pretty =
        serde_json::to_string_pretty(&settings).map_err(|e| format!("Failed to serialize: {e}"))?;
    fs::write(&settings_path, pretty).map_err(|e| format!("Failed to write settings.json: {e}"))?;
    Ok(())
}
@@ -0,0 +1,89 @@
//! Diagnostics service — server logs, CRDT dump, permission management, and story movement.
//!
//! Extracted from `http/mcp/diagnostics.rs` following the conventions in
//! `docs/architecture/service-modules.md`:
//! - `mod.rs` (this file) — public API, typed [`Error`], orchestration
//! - `io.rs` — the ONLY place that performs side effects (filesystem reads/writes)
//! - `permission.rs` — pure permission-rule generation and wildcard checks

pub mod io;
pub mod permission;

pub use io::add_permission_rule;
pub use permission::generate_permission_rule;
#[allow(unused_imports)]
pub use permission::is_dominated_by_wildcard;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::diagnostics` functions.
///
/// HTTP handlers map these to status codes:
/// - [`Error::NotFound`] → 404 Not Found
/// - [`Error::Validation`] → 400 Bad Request
/// - [`Error::Conflict`] → 409 Conflict
/// - [`Error::Io`] → 500 Internal Server Error
/// - [`Error::UpstreamFailure`] → 500 Internal Server Error
#[allow(dead_code)]
#[derive(Debug)]
pub enum Error {
    /// The requested resource was not found.
    NotFound(String),
    /// A required argument is missing or has an invalid value.
    Validation(String),
    /// The operation cannot proceed due to a conflicting state.
    Conflict(String),
    /// A filesystem read or write operation failed.
    Io(String),
    /// An upstream dependency returned an unexpected error.
    UpstreamFailure(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::NotFound(msg) => write!(f, "Not found: {msg}"),
            Self::Validation(msg) => write!(f, "Validation error: {msg}"),
            Self::Conflict(msg) => write!(f, "Conflict: {msg}"),
            Self::Io(msg) => write!(f, "I/O error: {msg}"),
            Self::UpstreamFailure(msg) => write!(f, "Upstream failure: {msg}"),
        }
    }
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn error_display_not_found() {
        let e = Error::NotFound("log file missing".to_string());
        assert!(e.to_string().contains("Not found"));
    }

    #[test]
    fn error_display_validation() {
        let e = Error::Validation("invalid filter".to_string());
        assert!(e.to_string().contains("Validation error"));
    }

    #[test]
    fn error_display_conflict() {
        let e = Error::Conflict("story in wrong stage".to_string());
        assert!(e.to_string().contains("Conflict"));
    }

    #[test]
    fn error_display_io() {
        let e = Error::Io("settings.json write failed".to_string());
        assert!(e.to_string().contains("I/O error"));
    }

    #[test]
    fn error_display_upstream_failure() {
        let e = Error::UpstreamFailure("rebuild failed".to_string());
        assert!(e.to_string().contains("Upstream failure"));
    }
}
@@ -0,0 +1,105 @@
//! Pure permission-rule generation for `service::diagnostics`.
//!
//! These functions produce Claude Code permission-rule strings from tool call
//! metadata. No I/O: they take `&str` / `&Value` and return `String`.

use serde_json::Value;

/// Generate a Claude Code permission rule string for the given tool name and input.
///
/// - `Bash` tools → `Bash(first_word *)` derived from the `command` field.
/// - All other tools → the tool name verbatim (e.g. `Edit`, `mcp__huskies__create_story`).
pub fn generate_permission_rule(tool_name: &str, tool_input: &Value) -> String {
    if tool_name == "Bash" {
        let command_str = tool_input
            .get("command")
            .and_then(|v| v.as_str())
            .unwrap_or("");
        let first_word = command_str.split_whitespace().next().unwrap_or("unknown");
        format!("Bash({first_word} *)")
    } else {
        tool_name.to_string()
    }
}

/// Return `true` if `rule` is already covered by an existing wildcard in `allow_list`.
///
/// For example, if `allow_list` contains `"mcp__huskies__*"`, then the more
/// specific rule `"mcp__huskies__create_story"` is already covered.
pub fn is_dominated_by_wildcard(rule: &str, allow_list: &[Value]) -> bool {
    allow_list.iter().any(|existing| {
        if let Some(pat) = existing.as_str()
            && let Some(prefix) = pat.strip_suffix('*')
        {
            return rule.starts_with(prefix);
        }
        false
    })
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;
    use serde_json::json;

    #[test]
    fn generate_rule_for_edit_tool() {
        let rule = generate_permission_rule("Edit", &json!({}));
        assert_eq!(rule, "Edit");
    }

    #[test]
    fn generate_rule_for_write_tool() {
        let rule = generate_permission_rule("Write", &json!({}));
        assert_eq!(rule, "Write");
    }

    #[test]
    fn generate_rule_for_bash_git() {
        let rule = generate_permission_rule("Bash", &json!({"command": "git status"}));
        assert_eq!(rule, "Bash(git *)");
    }

    #[test]
    fn generate_rule_for_bash_cargo() {
        let rule = generate_permission_rule("Bash", &json!({"command": "cargo test --all"}));
        assert_eq!(rule, "Bash(cargo *)");
    }

    #[test]
    fn generate_rule_for_bash_empty_command() {
        let rule = generate_permission_rule("Bash", &json!({}));
        assert_eq!(rule, "Bash(unknown *)");
    }

    #[test]
    fn generate_rule_for_mcp_tool() {
        let rule = generate_permission_rule("mcp__huskies__create_story", &json!({"name": "foo"}));
        assert_eq!(rule, "mcp__huskies__create_story");
    }

    #[test]
    fn is_dominated_by_exact_wildcard() {
        let allow = vec![json!("mcp__huskies__*")];
        assert!(is_dominated_by_wildcard(
            "mcp__huskies__create_story",
            &allow
        ));
    }

    #[test]
    fn is_not_dominated_by_different_prefix() {
        let allow = vec![json!("mcp__other__*")];
        assert!(!is_dominated_by_wildcard(
            "mcp__huskies__create_story",
            &allow
        ));
    }

    #[test]
    fn is_not_dominated_when_list_is_empty() {
        assert!(!is_dominated_by_wildcard("Edit", &[]));
    }
}
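The wildcard-domination rule is a plain prefix check: strip a trailing `*` and test `starts_with`. A minimal sketch, simplified to take `&[&str]` instead of `&[serde_json::Value]` so it runs on std alone (the real function also has to unwrap JSON strings):

```rust
// Simplified sketch of `is_dominated_by_wildcard` from the hunk above,
// taking plain &str patterns instead of serde_json::Value entries.
fn is_dominated_by_wildcard(rule: &str, allow_list: &[&str]) -> bool {
    allow_list.iter().any(|pat| {
        pat.strip_suffix('*')
            .is_some_and(|prefix| rule.starts_with(prefix))
    })
}

fn main() {
    let allow = ["mcp__huskies__*", "Edit"];
    // The more specific rule is covered by the wildcard prefix.
    assert!(is_dominated_by_wildcard("mcp__huskies__create_story", &allow));
    // Entries without a trailing `*` never dominate; exact duplicates are
    // handled by a separate check in `io::add_permission_rule`.
    assert!(!is_dominated_by_wildcard("Edit", &allow));
    // A bare "*" dominates everything: its prefix is the empty string.
    assert!(is_dominated_by_wildcard("Bash(rm *)", &["*"]));
}
```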
@@ -0,0 +1,184 @@
|
|||||||
|
//! Pure event-buffer types — no side effects.
|
||||||
|
//!
|
||||||
|
//! `StoredEvent` and `EventBuffer` contain only data-transformation and
|
||||||
|
//! structural logic; all I/O (clocks, spawned tasks) lives in `io.rs`.
|
||||||
|
|
||||||
|
use serde::{Deserialize, Serialize};
|
||||||
|
use std::collections::VecDeque;
|
||||||
|
use std::sync::{Arc, Mutex};
|
||||||
|
|
||||||
|
/// Maximum number of events retained in the in-memory buffer.
|
||||||
|
pub const MAX_BUFFER_SIZE: usize = 500;
|
||||||
|
|
||||||
|
/// A pipeline event stored in the event buffer with a timestamp.
|
||||||
|
#[derive(Clone, Debug, Serialize, Deserialize)]
|
||||||
|
#[serde(tag = "type", rename_all = "snake_case")]
|
||||||
|
pub enum StoredEvent {
|
||||||
|
/// A work item transitioned between pipeline stages.
|
||||||
|
StageTransition {
|
||||||
|
/// Work item ID (e.g. `"42_story_my_feature"`).
|
||||||
|
story_id: String,
|
||||||
|
/// The stage the item moved FROM (display name, e.g. `"Current"`).
|
||||||
|
from_stage: String,
|
||||||
|
/// The stage the item moved TO (directory key, e.g. `"3_qa"`).
|
||||||
|
to_stage: String,
|
||||||
|
/// Unix timestamp in milliseconds when this event was recorded.
|
||||||
|
timestamp_ms: u64,
|
||||||
|
},
|
||||||
|
/// A merge operation failed for a story.
|
||||||
|
MergeFailure {
|
||||||
|
/// Work item ID (e.g. `"42_story_my_feature"`).
|
||||||
|
story_id: String,
|
||||||
|
/// Human-readable description of the failure.
|
||||||
|
reason: String,
|
||||||
|
/// Unix timestamp in milliseconds when this event was recorded.
|
||||||
|
timestamp_ms: u64,
|
||||||
|
},
|
||||||
|
/// A story was blocked (e.g. retry limit exceeded).
|
||||||
|
StoryBlocked {
|
||||||
|
/// Work item ID (e.g. `"42_story_my_feature"`).
|
||||||
|
story_id: String,
|
||||||
|
/// Human-readable reason the story was blocked.
|
||||||
|
reason: String,
|
||||||
|
/// Unix timestamp in milliseconds when this event was recorded.
|
||||||
|
timestamp_ms: u64,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
impl StoredEvent {
|
||||||
|
/// Returns the `timestamp_ms` field common to all event variants.
|
||||||
|
pub fn timestamp_ms(&self) -> u64 {
|
||||||
|
match self {
|
||||||
|
StoredEvent::StageTransition { timestamp_ms, .. } => *timestamp_ms,
|
||||||
|
StoredEvent::MergeFailure { timestamp_ms, .. } => *timestamp_ms,
|
||||||
|
StoredEvent::StoryBlocked { timestamp_ms, .. } => *timestamp_ms,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Shared, thread-safe ring buffer of recent pipeline events.
|
||||||
|
///
|
||||||
|
/// Wrapped in `Arc` so it can be shared between the background subscriber
|
||||||
|
/// task and the HTTP handler. The inner `Mutex` guards the `VecDeque`.
|
||||||
|
#[derive(Clone, Debug)]
|
||||||
|
pub struct EventBuffer(Arc<Mutex<VecDeque<StoredEvent>>>);
|
||||||
|
|
||||||
|
impl EventBuffer {
|
||||||
|
/// Create a new, empty event buffer.
|
||||||
|
pub fn new() -> Self {
|
||||||
|
EventBuffer(Arc::new(Mutex::new(VecDeque::new())))
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Append an event to the buffer, evicting the oldest entry if the buffer
|
||||||
|
/// exceeds [`MAX_BUFFER_SIZE`].
|
||||||
|
pub fn push(&self, event: StoredEvent) {
|
||||||
|
let mut buf = self.0.lock().unwrap();
|
||||||
|
if buf.len() >= MAX_BUFFER_SIZE {
|
||||||
|
buf.pop_front();
|
||||||
|
}
|
||||||
|
buf.push_back(event);
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Return all events whose `timestamp_ms` is strictly greater than `since_ms`.
|
||||||
|
pub fn events_since(&self, since_ms: u64) -> Vec<StoredEvent> {
|
||||||
|
let buf = self.0.lock().unwrap();
|
||||||
|
buf.iter()
|
||||||
|
.filter(|e| e.timestamp_ms() > since_ms)
|
||||||
|
.cloned()
|
||||||
|
.collect()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
impl Default for EventBuffer {
|
||||||
|
fn default() -> Self {
|
||||||
|
Self::new()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
mod tests {
|
||||||
|
use super::*;
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn push_and_retrieve_events() {
|
||||||
|
let buf = EventBuffer::new();
|
||||||
|
        buf.push(StoredEvent::MergeFailure {
            story_id: "42_story_x".to_string(),
            reason: "conflict".to_string(),
            timestamp_ms: 1000,
        });
        buf.push(StoredEvent::StoryBlocked {
            story_id: "43_story_y".to_string(),
            reason: "retry limit".to_string(),
            timestamp_ms: 2000,
        });

        let all = buf.events_since(0);
        assert_eq!(all.len(), 2);

        let after_1000 = buf.events_since(1000);
        assert_eq!(after_1000.len(), 1);
        assert!(matches!(after_1000[0], StoredEvent::StoryBlocked { .. }));
    }

    #[test]
    fn evicts_oldest_when_full() {
        let buf = EventBuffer::new();
        for i in 0..MAX_BUFFER_SIZE + 1 {
            buf.push(StoredEvent::MergeFailure {
                story_id: format!("{i}_story_x"),
                reason: "x".to_string(),
                timestamp_ms: i as u64,
            });
        }
        assert_eq!(buf.events_since(0).len(), MAX_BUFFER_SIZE);
        assert!(buf.events_since(0).iter().all(|e| e.timestamp_ms() > 0));
    }

    #[test]
    fn timestamp_ms_accessor_for_all_variants() {
        let variants = [
            StoredEvent::StageTransition {
                story_id: "1".to_string(),
                from_stage: "2_current".to_string(),
                to_stage: "3_qa".to_string(),
                timestamp_ms: 100,
            },
            StoredEvent::MergeFailure {
                story_id: "2".to_string(),
                reason: "x".to_string(),
                timestamp_ms: 200,
            },
            StoredEvent::StoryBlocked {
                story_id: "3".to_string(),
                reason: "y".to_string(),
                timestamp_ms: 300,
            },
        ];
        assert_eq!(variants[0].timestamp_ms(), 100);
        assert_eq!(variants[1].timestamp_ms(), 200);
        assert_eq!(variants[2].timestamp_ms(), 300);
    }

    #[test]
    fn events_since_filters_by_timestamp() {
        let buf = EventBuffer::new();
        for ts in [100u64, 200, 300] {
            buf.push(StoredEvent::MergeFailure {
                story_id: "x".to_string(),
                reason: "r".to_string(),
                timestamp_ms: ts,
            });
        }
        // strictly greater than 100
        let result = buf.events_since(100);
        assert_eq!(result.len(), 2);
        assert!(result.iter().all(|e| e.timestamp_ms() > 100));
    }

    #[test]
    fn default_creates_empty_buffer() {
        let buf = EventBuffer::default();
        assert_eq!(buf.events_since(0).len(), 0);
    }
}
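The eviction and strictly-after filtering contract these tests exercise can be sketched standalone. This is a simplified stand-in (timestamps only, hypothetical `CAPACITY` constant), not the crate's actual `EventBuffer`:

```rust
use std::collections::VecDeque;

// Hypothetical capacity for illustration; the real limit is MAX_BUFFER_SIZE.
const CAPACITY: usize = 4;

pub struct RingBuffer {
    events: VecDeque<u64>, // timestamps only, for brevity
}

impl RingBuffer {
    pub fn new() -> Self {
        Self { events: VecDeque::new() }
    }

    // Push, evicting the oldest entry once the buffer is full.
    pub fn push(&mut self, ts: u64) {
        if self.events.len() == CAPACITY {
            self.events.pop_front();
        }
        self.events.push_back(ts);
    }

    // Return events strictly newer than `since_ms`.
    pub fn events_since(&self, since_ms: u64) -> Vec<u64> {
        self.events.iter().copied().filter(|&t| t > since_ms).collect()
    }
}

fn main() {
    let mut buf = RingBuffer::new();
    for ts in 0..5 {
        buf.push(ts);
    }
    // Capacity 4: the ts=0 event was evicted.
    assert_eq!(buf.events_since(0), vec![1, 2, 3, 4]);
    // `events_since` is strictly greater-than, matching the test above.
    assert_eq!(buf.events_since(3), vec![4]);
}
```

A `VecDeque` makes the oldest-first eviction an O(1) `pop_front`, which is why clients polling with a `since` cursor never see duplicates once older entries age out.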
@@ -0,0 +1,67 @@
//! Events I/O wrappers — the ONLY place in `service/events/` that may perform
//! side effects such as reading the system clock or spawning async tasks.

use crate::io::watcher::WatcherEvent;
use tokio::sync::broadcast;

use super::buffer::{EventBuffer, StoredEvent};

/// Returns the current Unix timestamp in milliseconds.
pub(super) fn now_ms() -> u64 {
    std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .map(|d| d.as_millis() as u64)
        .unwrap_or(0)
}

/// Spawn a background task that consumes [`WatcherEvent`] broadcasts and
/// stores relevant events in `buffer`.
///
/// Only [`WatcherEvent::WorkItem`] (with a known `from_stage`),
/// [`WatcherEvent::MergeFailure`], and [`WatcherEvent::StoryBlocked`]
/// variants are stored. All other variants are silently ignored.
pub fn subscribe_to_watcher(buffer: EventBuffer, mut rx: broadcast::Receiver<WatcherEvent>) {
    tokio::spawn(async move {
        loop {
            match rx.recv().await {
                Ok(WatcherEvent::WorkItem {
                    stage,
                    item_id,
                    from_stage,
                    ..
                }) => {
                    if let Some(from) = from_stage {
                        buffer.push(StoredEvent::StageTransition {
                            story_id: item_id,
                            from_stage: from,
                            to_stage: stage,
                            timestamp_ms: now_ms(),
                        });
                    }
                }
                Ok(WatcherEvent::MergeFailure { story_id, reason }) => {
                    buffer.push(StoredEvent::MergeFailure {
                        story_id,
                        reason,
                        timestamp_ms: now_ms(),
                    });
                }
                Ok(WatcherEvent::StoryBlocked { story_id, reason }) => {
                    buffer.push(StoredEvent::StoryBlocked {
                        story_id,
                        reason,
                        timestamp_ms: now_ms(),
                    });
                }
                Ok(_) => {}
                Err(broadcast::error::RecvError::Lagged(n)) => {
                    crate::slog!("[events] Subscriber lagged, skipped {n} events");
                }
                Err(broadcast::error::RecvError::Closed) => {
                    crate::slog!("[events] Watcher channel closed; stopping event subscriber");
                    break;
                }
            }
        }
    });
}
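The subscriber's filtering rule (store a transition only when `from_stage` is known, ignore everything else, stop on channel close) can be demonstrated with a plain std channel. `Event` here is a hypothetical stand-in for the crate's `WatcherEvent`, and `mpsc` stands in for the tokio broadcast channel:

```rust
use std::sync::mpsc;

// Hypothetical stand-in for the crate's WatcherEvent, for illustration only.
enum Event {
    WorkItem { item_id: String, from_stage: Option<String>, stage: String },
    Heartbeat, // stands in for the variants the subscriber ignores
}

// Drain a channel, keeping only work items with a known previous stage,
// mirroring the filtering in `subscribe_to_watcher`.
fn collect_transitions(rx: mpsc::Receiver<Event>) -> Vec<String> {
    let mut stored = Vec::new();
    // recv() returning Err plays the role of RecvError::Closed: stop consuming.
    while let Ok(ev) = rx.recv() {
        if let Event::WorkItem { item_id, from_stage: Some(from), stage } = ev {
            stored.push(format!("{item_id}: {from} -> {stage}"));
        }
    }
    stored
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(Event::WorkItem {
        item_id: "42_story_x".into(),
        from_stage: Some("2_current".into()),
        stage: "3_qa".into(),
    }).unwrap();
    tx.send(Event::Heartbeat).unwrap();
    tx.send(Event::WorkItem {
        item_id: "43_story_y".into(),
        from_stage: None, // first sighting: no transition to record
        stage: "1_backlog".into(),
    }).unwrap();
    drop(tx); // close the channel so the drain loop terminates

    assert_eq!(collect_transitions(rx), vec!["42_story_x: 2_current -> 3_qa"]);
}
```

The real task additionally handles `Lagged`, which only a broadcast channel produces; an mpsc channel never drops messages, so that branch has no analogue here.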
@@ -0,0 +1,45 @@
//! Events service — public API for the events domain.
//!
//! This module re-exports the pure buffer types from `buffer.rs` and the
//! side-effectful watcher subscription from `io.rs`. HTTP handlers call
//! these exports instead of containing the logic inline.
//!
//! Conventions: `docs/architecture/service-modules.md`

pub mod buffer;
pub(super) mod io;

pub use buffer::{EventBuffer, StoredEvent};
// Re-exported for tests (http::events uses it via `use super::*`).
#[allow(unused_imports)]
pub use buffer::MAX_BUFFER_SIZE;
pub use io::subscribe_to_watcher;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::events` functions.
///
/// Events operations on the in-memory buffer are infallible; this enum
/// exists to satisfy the module convention and to accommodate future
/// error cases (e.g. persistence).
#[allow(dead_code)]
#[derive(Debug)]
pub enum Error {
    /// A serialisation or internal error occurred.
    Internal(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::Internal(msg) => write!(f, "Events error: {msg}"),
        }
    }
}

// ── Public API ────────────────────────────────────────────────────────────────

/// Return all events in `buffer` recorded strictly after `since_ms` (Unix milliseconds).
pub fn events_since(buffer: &EventBuffer, since_ms: u64) -> Vec<StoredEvent> {
    buffer.events_since(since_ms)
}
@@ -0,0 +1,84 @@
//! File I/O — the ONLY place in `service/file_io/` that may perform
//! filesystem reads, writes, shell execution, or other side effects.
//!
//! Every function here is a thin adapter that converts lower-level
//! `String` errors into the typed [`super::Error`] variants.

use super::Error;
use crate::io::fs::FileEntry;
use crate::io::search::SearchResult;
use crate::io::shell::CommandOutput;
use crate::state::SessionState;

pub(super) async fn read_file(path: String, state: &SessionState) -> Result<String, Error> {
    crate::io::fs::read_file(path, state)
        .await
        .map_err(Error::Filesystem)
}

pub(super) async fn write_file(
    path: String,
    content: String,
    state: &SessionState,
) -> Result<(), Error> {
    crate::io::fs::write_file(path, content, state)
        .await
        .map_err(Error::Filesystem)
}

pub(super) async fn list_directory(
    path: String,
    state: &SessionState,
) -> Result<Vec<FileEntry>, Error> {
    crate::io::fs::list_directory(path, state)
        .await
        .map_err(Error::Filesystem)
}

pub(super) async fn list_directory_absolute(path: String) -> Result<Vec<FileEntry>, Error> {
    crate::io::fs::list_directory_absolute(path)
        .await
        .map_err(Error::Filesystem)
}

pub(super) async fn create_directory_absolute(path: String) -> Result<(), Error> {
    crate::io::fs::create_directory_absolute(path)
        .await
        .map_err(Error::Filesystem)
        .map(|_| ())
}

pub(super) fn get_home_directory() -> Result<String, Error> {
    crate::io::fs::get_home_directory().map_err(Error::Filesystem)
}

pub(super) async fn list_project_files(state: &SessionState) -> Result<Vec<String>, Error> {
    crate::io::fs::list_project_files(state)
        .await
        .map_err(Error::Filesystem)
}

pub(super) async fn search_files(
    query: String,
    state: &SessionState,
) -> Result<Vec<SearchResult>, Error> {
    crate::io::search::search_files(query, state)
        .await
        .map_err(Error::Filesystem)
}

pub(super) async fn exec_shell(
    command: String,
    args: Vec<String>,
    state: &SessionState,
) -> Result<CommandOutput, Error> {
    crate::io::shell::exec_shell(command, args, state)
        .await
        .map_err(|e| {
            if e.contains("not in the allowlist") {
                Error::Validation(e)
            } else {
                Error::Filesystem(e)
            }
        })
}
@@ -0,0 +1,183 @@
//! File I/O service — public API for filesystem and shell operations.
//!
//! Exposes functions for reading, writing, and listing files scoped to the
//! active project root, plus utilities for absolute-path and shell operations.
//! HTTP handlers call these functions instead of touching `io::fs` directly.
//!
//! Conventions: `docs/architecture/service-modules.md`

pub(super) mod io;

use crate::state::SessionState;

/// Re-export the canonical filesystem entry type so HTTP handlers don't need
/// to import from `io::fs` directly.
pub use crate::io::fs::FileEntry;
/// Re-export the search result type.
pub use crate::io::search::SearchResult;
/// Re-export the shell output type.
pub use crate::io::shell::CommandOutput;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::file_io` functions.
///
/// HTTP handlers map these to status codes:
/// - [`Error::Validation`] → 400 Bad Request
/// - [`Error::Filesystem`] → 400 Bad Request (or 404 when appropriate)
#[derive(Debug)]
pub enum Error {
    /// The request was invalid (e.g. path traversal attempt, command not allowlisted).
    Validation(String),
    /// A filesystem or shell operation failed (file not found, permission denied, etc.).
    Filesystem(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::Validation(msg) => write!(f, "Validation error: {msg}"),
            Self::Filesystem(msg) => write!(f, "Filesystem error: {msg}"),
        }
    }
}

// ── Path validation ───────────────────────────────────────────────────────────

/// Validate a relative path, rejecting directory traversal attempts.
///
/// Returns [`Error::Validation`] when the path contains `..`.
pub fn validate_path(path: &str) -> Result<(), Error> {
    if path.contains("..") {
        return Err(Error::Validation(
            "Security Violation: Directory traversal ('..') is not allowed.".to_string(),
        ));
    }
    Ok(())
}

// ── Public API ────────────────────────────────────────────────────────────────

/// Read a file from the project root.
pub async fn read_file(path: String, state: &SessionState) -> Result<String, Error> {
    validate_path(&path)?;
    io::read_file(path, state).await
}

/// Write a file to the project root, creating parent directories as needed.
pub async fn write_file(path: String, content: String, state: &SessionState) -> Result<(), Error> {
    validate_path(&path)?;
    io::write_file(path, content, state).await
}

/// List directory entries at a project-relative path.
pub async fn list_directory(path: String, state: &SessionState) -> Result<Vec<FileEntry>, Error> {
    io::list_directory(path, state).await
}

/// List directory entries at an absolute path (not scoped to the project root).
pub async fn list_directory_absolute(path: String) -> Result<Vec<FileEntry>, Error> {
    io::list_directory_absolute(path).await
}

/// Create a directory (and all parents) at an absolute path.
pub async fn create_directory_absolute(path: String) -> Result<(), Error> {
    io::create_directory_absolute(path).await
}

/// Return the current user's home directory path.
pub fn get_home_directory() -> Result<String, Error> {
    io::get_home_directory()
}

/// List all files in the project recursively, respecting `.gitignore`.
pub async fn list_project_files(state: &SessionState) -> Result<Vec<String>, Error> {
    io::list_project_files(state).await
}

/// Search the project for files whose contents contain `query`.
pub async fn search_files(query: String, state: &SessionState) -> Result<Vec<SearchResult>, Error> {
    io::search_files(query, state).await
}

/// Execute an allowlisted shell command in the project root directory.
pub async fn exec_shell(
    command: String,
    args: Vec<String>,
    state: &SessionState,
) -> Result<CommandOutput, Error> {
    io::exec_shell(command, args, state).await
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    // Pure unit tests for path validation and sanitisation — no tempdir, no network.

    #[test]
    fn validate_path_accepts_simple_relative_path() {
        assert!(validate_path("src/main.rs").is_ok());
    }

    #[test]
    fn validate_path_accepts_dot_path() {
        assert!(validate_path(".").is_ok());
    }

    #[test]
    fn validate_path_accepts_root_relative() {
        assert!(validate_path("subdir/file.txt").is_ok());
    }

    #[test]
    fn validate_path_rejects_parent_traversal() {
        let result = validate_path("../etc/passwd");
        assert!(matches!(result, Err(Error::Validation(_))));
    }

    #[test]
    fn validate_path_rejects_embedded_traversal() {
        let result = validate_path("src/../../../etc/passwd");
        assert!(matches!(result, Err(Error::Validation(_))));
    }

    #[test]
    fn validate_path_rejects_double_dot_only() {
        let result = validate_path("..");
        assert!(matches!(result, Err(Error::Validation(_))));
    }

    #[test]
    fn validate_path_accepts_file_with_single_dots_in_name() {
        // Filenames like "config.dev.toml" have single dots — must be accepted.
        assert!(validate_path("config.dev.toml").is_ok());
    }

    #[test]
    fn validate_path_rejects_traversal_with_url_encoding_lookalike() {
        // A literal ".." sequence anywhere in the string is rejected.
        let result = validate_path("valid/..hidden");
        assert!(matches!(result, Err(Error::Validation(_))));
    }

    #[test]
    fn error_display_validation() {
        let e = Error::Validation("bad path".to_string());
        assert!(e.to_string().contains("bad path"));
    }

    #[test]
    fn error_display_filesystem() {
        let e = Error::Filesystem("file not found".to_string());
        assert!(e.to_string().contains("file not found"));
    }

    #[test]
    fn error_display_filesystem_contains_message() {
        let e = Error::Filesystem("task panic".to_string());
        assert!(e.to_string().contains("task panic"));
    }
}
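The check in `validate_path` is deliberately coarse: any literal `..` substring is rejected, even inside a filename like `valid/..hidden`, while single dots pass. A standalone restatement (a bool instead of the typed `Error`, for brevity) makes the trade-off easy to probe:

```rust
// Simplified restatement of validate_path; the real function returns a
// typed Error::Validation instead of a bool.
pub fn is_safe_relative_path(path: &str) -> bool {
    !path.contains("..")
}

fn main() {
    assert!(is_safe_relative_path("src/main.rs"));
    assert!(is_safe_relative_path("config.dev.toml")); // single dots are fine
    assert!(!is_safe_relative_path("../etc/passwd"));
    assert!(!is_safe_relative_path("src/../../../etc/passwd"));
    // Coarse by design: any ".." substring is rejected, even mid-filename.
    assert!(!is_safe_relative_path("valid/..hidden"));
}
```

Rejecting a few legitimate names containing `..` is the price of never having to canonicalise the path before the filesystem layer sees it.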
@@ -0,0 +1,136 @@
//! Gateway aggregation — pure functions for cross-project pipeline status.
//!
//! Formats aggregated pipeline data into compact text suitable for chat
//! transports (Matrix, Slack). Uses `service::pipeline::aggregate_pipeline_counts`
//! for per-project parsing.

use serde_json::Value;
use std::collections::BTreeMap;

/// Format an aggregated status map as a compact, one-line-per-project string
/// suitable for Matrix/Slack messages.
///
/// Healthy projects: `🟢 **name** — B:5 C:2 Q:1 M:0 D:12`
/// Blocked items appended on the same line: `| blocked: 42 [story]`
/// Unreachable projects: `🔴 **name** — UNREACHABLE`
pub fn format_aggregate_status_compact(statuses: &BTreeMap<String, Value>) -> String {
    let mut lines: Vec<String> = Vec::new();
    for (name, status) in statuses {
        if let Some(err) = status.get("error").and_then(|e| e.as_str()) {
            lines.push(format!("\u{1F534} **{name}** — UNREACHABLE: {err}"));
        } else {
            let counts = status.get("counts");
            let b = counts
                .and_then(|c| c.get("backlog"))
                .and_then(|n| n.as_u64())
                .unwrap_or(0);
            let c = counts
                .and_then(|c| c.get("current"))
                .and_then(|n| n.as_u64())
                .unwrap_or(0);
            let q = counts
                .and_then(|c| c.get("qa"))
                .and_then(|n| n.as_u64())
                .unwrap_or(0);
            let m = counts
                .and_then(|c| c.get("merge"))
                .and_then(|n| n.as_u64())
                .unwrap_or(0);
            let d = counts
                .and_then(|c| c.get("done"))
                .and_then(|n| n.as_u64())
                .unwrap_or(0);

            let blocked_arr = status
                .get("blocked")
                .and_then(|a| a.as_array())
                .cloned()
                .unwrap_or_default();

            let indicator = if blocked_arr.is_empty() {
                "\u{1F7E2}" // 🟢
            } else {
                "\u{1F7E0}" // 🟠
            };

            let mut line = format!("{indicator} **{name}** — B:{b} C:{c} Q:{q} M:{m} D:{d}");

            if !blocked_arr.is_empty() {
                let ids: Vec<String> = blocked_arr
                    .iter()
                    .filter_map(|item| item.get("story_id").and_then(|s| s.as_str()))
                    .map(|s| s.to_string())
                    .collect();
                line.push_str(&format!(" | blocked: {}", ids.join(", ")));
            }

            lines.push(line);
        }
    }
    if lines.is_empty() {
        return "No projects registered.".to_string();
    }
    format!("**All Projects**\n\n{}", lines.join("\n\n"))
}

// ── Tests ────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;
    use serde_json::json;

    #[test]
    fn format_healthy_project() {
        let mut statuses = BTreeMap::new();
        statuses.insert(
            "huskies".to_string(),
            json!({
                "counts": { "backlog": 5, "current": 2, "qa": 1, "merge": 0, "done": 12 },
                "blocked": []
            }),
        );
        let output = format_aggregate_status_compact(&statuses);
        assert!(output.contains("huskies"));
        assert!(output.contains("B:5"));
        assert!(output.contains("C:2"));
        assert!(output.contains("Q:1"));
        assert!(output.contains("D:12"));
        assert!(!output.contains("blocked:"));
    }

    #[test]
    fn format_unreachable_project() {
        let mut statuses = BTreeMap::new();
        statuses.insert(
            "broken".to_string(),
            json!({ "error": "connection refused" }),
        );
        let output = format_aggregate_status_compact(&statuses);
        assert!(output.contains("broken"));
        assert!(output.contains("UNREACHABLE"));
        assert!(output.contains("connection refused"));
    }

    #[test]
    fn format_blocked_items_shown() {
        let mut statuses = BTreeMap::new();
        statuses.insert(
            "myproj".to_string(),
            json!({
                "counts": { "backlog": 0, "current": 1, "qa": 0, "merge": 0, "done": 0 },
                "blocked": [{ "story_id": "42_story_x", "name": "X", "stage": "current", "reason": "blocked" }]
            }),
        );
        let output = format_aggregate_status_compact(&statuses);
        assert!(output.contains("blocked:"));
        assert!(output.contains("42_story_x"));
    }

    #[test]
    fn format_empty_projects() {
        let statuses = BTreeMap::new();
        let output = format_aggregate_status_compact(&statuses);
        assert_eq!(output, "No projects registered.");
    }
}
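Most of the function above is defensive `serde_json` navigation; the line format itself is simple. A plain-struct sketch (hypothetical `Counts` type, no JSON) isolates just the formatting rule, including the green/orange indicator switch:

```rust
// Hypothetical plain-struct version of one project's counts; the real
// function extracts these from serde_json Values.
struct Counts {
    backlog: u64,
    current: u64,
    qa: u64,
    merge: u64,
    done: u64,
}

// Build one status line: green when nothing is blocked, orange otherwise,
// with blocked story ids appended after a pipe.
fn status_line(name: &str, c: &Counts, blocked: &[&str]) -> String {
    let indicator = if blocked.is_empty() { "\u{1F7E2}" } else { "\u{1F7E0}" };
    let mut line = format!(
        "{indicator} **{name}** — B:{} C:{} Q:{} M:{} D:{}",
        c.backlog, c.current, c.qa, c.merge, c.done
    );
    if !blocked.is_empty() {
        line.push_str(&format!(" | blocked: {}", blocked.join(", ")));
    }
    line
}

fn main() {
    let c = Counts { backlog: 5, current: 2, qa: 1, merge: 0, done: 12 };
    assert_eq!(
        status_line("huskies", &c, &[]),
        "\u{1F7E2} **huskies** — B:5 C:2 Q:1 M:0 D:12"
    );
    let line = status_line("myproj", &c, &["42_story_x"]);
    assert!(line.starts_with("\u{1F7E0}"));
    assert!(line.contains("blocked: 42_story_x"));
}
```

Keeping every count on a single short line is what makes the output usable in a chat transport, where one message may carry the status of many projects.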
@@ -0,0 +1,191 @@
//! Gateway configuration types — pure parsing and validation.
//!
//! Contains `ProjectEntry`, `GatewayConfig`, and validation logic.
//! All filesystem I/O (loading from disk) lives in `io.rs`.

use serde::{Deserialize, Serialize};
use std::collections::BTreeMap;

/// A single project entry in `projects.toml`.
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct ProjectEntry {
    /// Base URL of the project's huskies container (e.g. `http://localhost:3001`).
    pub url: String,
}

/// Top-level `projects.toml` config.
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct GatewayConfig {
    /// Map of project name → container URL.
    #[serde(default)]
    pub projects: BTreeMap<String, ProjectEntry>,
}

/// Validate that a gateway config has at least one project.
///
/// Returns the name of the first project (alphabetically) on success,
/// or an error message if the config is empty.
pub fn validate_config(config: &GatewayConfig) -> Result<String, String> {
    if config.projects.is_empty() {
        return Err("projects.toml must define at least one project".to_string());
    }
    Ok(config.projects.keys().next().unwrap().clone())
}

/// Validate that a project name exists in the given project map.
///
/// Returns the project's URL on success.
pub fn validate_project_exists(
    projects: &BTreeMap<String, ProjectEntry>,
    name: &str,
) -> Result<String, String> {
    projects.get(name).map(|p| p.url.clone()).ok_or_else(|| {
        let available: Vec<&str> = projects.keys().map(|s| s.as_str()).collect();
        format!(
            "unknown project '{name}'. Available: {}",
            available.join(", ")
        )
    })
}

/// Escape a string as a TOML quoted string.
pub fn toml_string(s: &str) -> String {
    format!("\"{}\"", s.replace('\\', "\\\\").replace('"', "\\\""))
}

/// Serialize a `bot.toml` content string from the given fields.
pub fn serialize_bot_config(
    transport: &str,
    homeserver: Option<&str>,
    username: Option<&str>,
    password: Option<&str>,
    slack_bot_token: Option<&str>,
    slack_signing_secret: Option<&str>,
) -> String {
    match transport {
        "slack" => {
            format!(
                "enabled = true\ntransport = \"slack\"\n\nslack_bot_token = {}\nslack_signing_secret = {}\nslack_channel_ids = []\n",
                toml_string(slack_bot_token.unwrap_or("")),
                toml_string(slack_signing_secret.unwrap_or("")),
            )
        }
        _ => {
            format!(
                "enabled = true\ntransport = \"matrix\"\n\nhomeserver = {}\nusername = {}\npassword = {}\nroom_ids = []\nallowed_users = []\n",
                toml_string(homeserver.unwrap_or("")),
                toml_string(username.unwrap_or("")),
                toml_string(password.unwrap_or("")),
            )
        }
    }
}

// ── Tests ────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parse_valid_projects_toml() {
        let toml_str = r#"
[projects.huskies]
url = "http://localhost:3001"

[projects.robot-studio]
url = "http://localhost:3002"
"#;
        let config: GatewayConfig = toml::from_str(toml_str).unwrap();
        assert_eq!(config.projects.len(), 2);
        assert_eq!(config.projects["huskies"].url, "http://localhost:3001");
        assert_eq!(config.projects["robot-studio"].url, "http://localhost:3002");
    }

    #[test]
    fn parse_empty_projects_toml() {
        let toml_str = "[projects]\n";
        let config: GatewayConfig = toml::from_str(toml_str).unwrap();
        assert!(config.projects.is_empty());
    }

    #[test]
    fn validate_config_rejects_empty() {
        let config = GatewayConfig {
            projects: BTreeMap::new(),
        };
        assert!(validate_config(&config).is_err());
    }

    #[test]
    fn validate_config_returns_first_project_name() {
        let mut projects = BTreeMap::new();
        projects.insert(
            "beta".into(),
            ProjectEntry {
                url: "http://b".into(),
            },
        );
        projects.insert(
            "alpha".into(),
            ProjectEntry {
                url: "http://a".into(),
            },
        );
        let config = GatewayConfig { projects };
        assert_eq!(validate_config(&config).unwrap(), "alpha");
    }

    #[test]
    fn validate_project_exists_succeeds() {
        let mut projects = BTreeMap::new();
        projects.insert(
            "p1".into(),
            ProjectEntry {
                url: "http://p1".into(),
            },
        );
        assert_eq!(
            validate_project_exists(&projects, "p1").unwrap(),
            "http://p1"
        );
    }

    #[test]
    fn validate_project_exists_fails() {
        let projects = BTreeMap::new();
        assert!(validate_project_exists(&projects, "missing").is_err());
    }

    #[test]
    fn toml_string_escapes_quotes() {
        assert_eq!(toml_string(r#"a"b"#), r#""a\"b""#);
    }

    #[test]
    fn toml_string_escapes_backslashes() {
        assert_eq!(toml_string(r"a\b"), r#""a\\b""#);
    }

    #[test]
    fn serialize_bot_config_matrix() {
        let content = serialize_bot_config(
            "matrix",
            Some("https://mx.io"),
            Some("@bot:mx.io"),
            Some("pass"),
            None,
            None,
        );
        assert!(content.contains("transport = \"matrix\""));
        assert!(content.contains("homeserver = \"https://mx.io\""));
    }

    #[test]
    fn serialize_bot_config_slack() {
        let content =
            serialize_bot_config("slack", None, None, None, Some("xoxb-123"), Some("secret"));
        assert!(content.contains("transport = \"slack\""));
        assert!(content.contains("slack_bot_token = \"xoxb-123\""));
    }
}
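One subtlety in `toml_string` is worth calling out: backslashes must be doubled *before* quotes are escaped, otherwise the backslash introduced for each quote would itself get doubled. A standalone copy (renamed `toml_quote` here to avoid suggesting it is the crate's export) shows the ordering:

```rust
// Same escaping logic as toml_string: double backslashes first, then
// escape quotes, so the quote-escaping backslashes are not doubled too.
pub fn toml_quote(s: &str) -> String {
    format!("\"{}\"", s.replace('\\', "\\\\").replace('"', "\\\""))
}

fn main() {
    assert_eq!(toml_quote(r#"a"b"#), r#""a\"b""#);
    assert_eq!(toml_quote(r"a\b"), r#""a\\b""#);
    // Mixed input: the backslash doubles, then the quote is escaped.
    assert_eq!(toml_quote(r#"\""#), r#""\\\"""#);
}
```

This covers TOML basic strings well enough for credential values; it does not escape control characters, which the basic-string grammar would also require for arbitrary input.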
@@ -0,0 +1,407 @@
|
|||||||
|
//! Gateway I/O — the ONLY place in `service/gateway/` that may perform side effects.
|
||||||
|
//!
|
||||||
|
//! Side effects here include: reading/writing config and agent state files,
|
||||||
|
//! HTTP requests to project containers (proxying, health checks, polling),
|
||||||
|
//! spawning the Matrix bot task, and the notification poller background task.
|
||||||
|
|
||||||
|
use super::config::{GatewayConfig, ProjectEntry};
|
||||||
|
use super::registration::JoinedAgent;
|
||||||
|
pub use reqwest::Client;
|
||||||
|
use serde_json::{Value, json};
|
||||||
|
use std::collections::{BTreeMap, HashMap};
|
||||||
|
use std::path::Path;
|
||||||
|
|
||||||
|
// ── Config I/O ───────────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
/// Load gateway config from a `projects.toml` file.
|
||||||
|
pub fn load_config(path: &Path) -> Result<GatewayConfig, String> {
|
||||||
|
let contents = std::fs::read_to_string(path)
|
||||||
|
.map_err(|e| format!("cannot read {}: {e}", path.display()))?;
|
||||||
|
toml::from_str(&contents).map_err(|e| format!("invalid projects.toml: {e}"))
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Load persisted agents from `<config_dir>/gateway_agents.json`.
|
||||||
|
/// Returns an empty list if the file does not exist or cannot be parsed.
|
||||||
|
pub fn load_agents(config_dir: &Path) -> Vec<JoinedAgent> {
|
||||||
|
let path = config_dir.join("gateway_agents.json");
|
||||||
|
match std::fs::read(&path) {
|
||||||
|
Ok(data) => serde_json::from_slice(&data).unwrap_or_default(),
|
||||||
|
Err(_) => Vec::new(),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Persist the current projects map to `<config_dir>/projects.toml`.
|
||||||
|
/// Silently ignores write errors or skips when `config_dir` is empty.
|
||||||
|
pub async fn save_config(projects: &BTreeMap<String, ProjectEntry>, config_dir: &Path) {
|
||||||
|
if config_dir.as_os_str().is_empty() {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
let path = config_dir.join("projects.toml");
|
||||||
|
let config = GatewayConfig {
|
||||||
|
projects: projects.clone(),
|
||||||
|
};
|
||||||
|
if let Ok(data) = toml::to_string_pretty(&config) {
|
||||||
|
let _ = tokio::fs::write(&path, data).await;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Persist the current agent list to `<config_dir>/gateway_agents.json`.
|
||||||
|
/// Silently ignores write errors.
|
||||||
|
pub async fn save_agents(agents: &[JoinedAgent], config_dir: &Path) {
|
||||||
|
if config_dir == Path::new("") {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
let path = config_dir.join("gateway_agents.json");
|
||||||
|
if let Ok(data) = serde_json::to_vec_pretty(agents) {
|
||||||
|
let _ = tokio::fs::write(&path, data).await;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Bot config I/O ──────────────────────────────────────────────────────────

/// Read the current raw bot.toml as key/value pairs for the configuration UI.
/// Returns `None` values if the file does not exist.
pub fn read_bot_config_raw(config_dir: &Path) -> BotConfigFields {
    let path = config_dir.join(".huskies").join("bot.toml");
    let content = match std::fs::read_to_string(&path) {
        Ok(c) => c,
        Err(_) => return BotConfigFields::default(),
    };
    let table: toml::Value = match toml::from_str(&content) {
        Ok(v) => v,
        Err(_) => return BotConfigFields::default(),
    };
    let s = |key: &str| -> Option<String> {
        table
            .get(key)
            .and_then(|v| v.as_str())
            .map(|s| s.to_string())
    };
    BotConfigFields {
        transport: s("transport").unwrap_or_else(|| "matrix".to_string()),
        homeserver: s("homeserver"),
        username: s("username"),
        password: s("password"),
        slack_bot_token: s("slack_bot_token"),
        slack_signing_secret: s("slack_signing_secret"),
    }
}

/// Raw bot.toml fields for the configuration UI.
#[derive(Default)]
pub struct BotConfigFields {
    pub transport: String,
    pub homeserver: Option<String>,
    pub username: Option<String>,
    pub password: Option<String>,
    pub slack_bot_token: Option<String>,
    pub slack_signing_secret: Option<String>,
}

/// Write a `bot.toml` from the given content string.
pub fn write_bot_config(config_dir: &Path, content: &str) -> Result<(), String> {
    let huskies_dir = config_dir.join(".huskies");
    std::fs::create_dir_all(&huskies_dir)
        .map_err(|e| format!("cannot create .huskies dir: {e}"))?;
    let path = huskies_dir.join("bot.toml");
    std::fs::write(&path, content).map_err(|e| format!("cannot write bot.toml: {e}"))
}

// ── MCP proxy I/O ───────────────────────────────────────────────────────────

/// Proxy a raw MCP request body to the given project URL.
pub async fn proxy_mcp_call(
    client: &Client,
    base_url: &str,
    request_bytes: &[u8],
) -> Result<Vec<u8>, String> {
    let mcp_url = format!("{}/mcp", base_url.trim_end_matches('/'));

    let resp = client
        .post(&mcp_url)
        .header("Content-Type", "application/json")
        .body(request_bytes.to_vec())
        .send()
        .await
        .map_err(|e| format!("failed to reach {mcp_url}: {e}"))?;

    resp.bytes()
        .await
        .map(|b| b.to_vec())
        .map_err(|e| format!("failed to read response from {mcp_url}: {e}"))
}

/// Fetch tools/list from a project's MCP endpoint.
pub async fn fetch_tools_list(client: &Client, base_url: &str) -> Result<Value, String> {
    let mcp_url = format!("{}/mcp", base_url.trim_end_matches('/'));

    let rpc_body = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/list",
        "params": {}
    });

    let resp = client
        .post(&mcp_url)
        .json(&rpc_body)
        .send()
        .await
        .map_err(|e| format!("failed to reach {mcp_url}: {e}"))?;

    resp.json()
        .await
        .map_err(|e| format!("invalid JSON from upstream: {e}"))
}

/// Fetch and aggregate pipeline status for a single project URL.
pub async fn fetch_one_project_pipeline_status(url: &str, client: &Client) -> Value {
    let mcp_url = format!("{}/mcp", url.trim_end_matches('/'));
    let rpc_body = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "get_pipeline_status",
            "arguments": {}
        }
    });

    match client.post(&mcp_url).json(&rpc_body).send().await {
        Ok(resp) => match resp.json::<Value>().await {
            Ok(upstream) => {
                if let Some(text) = upstream
                    .get("result")
                    .and_then(|r| r.get("content"))
                    .and_then(|c| c.get(0))
                    .and_then(|c| c.get("text"))
                    .and_then(|t| t.as_str())
                {
                    match serde_json::from_str::<Value>(text) {
                        Ok(pipeline) => {
                            crate::service::pipeline::aggregate_pipeline_counts(&pipeline)
                        }
                        Err(_) => json!({ "error": "invalid pipeline JSON" }),
                    }
                } else {
                    json!({ "error": "unexpected response shape" })
                }
            }
            Err(e) => json!({ "error": format!("invalid response: {e}") }),
        },
        Err(e) => json!({ "error": format!("unreachable: {e}") }),
    }
}

/// Fetch `get_pipeline_status` from every registered project URL in parallel.
pub async fn fetch_all_project_pipeline_statuses(
    project_urls: &BTreeMap<String, String>,
    client: &Client,
) -> BTreeMap<String, Value> {
    use futures::future::join_all;

    let futures: Vec<_> = project_urls
        .iter()
        .map(|(name, url)| {
            let name = name.clone();
            let url = url.clone();
            let client = client.clone();
            async move {
                let result = fetch_one_project_pipeline_status(&url, &client).await;
                (name, result)
            }
        })
        .collect();

    join_all(futures).await.into_iter().collect()
}

/// Fetch the pipeline status from a single project for the `gateway_status` tool.
pub async fn fetch_pipeline_status_for_project(
    client: &Client,
    base_url: &str,
) -> Result<Value, String> {
    let mcp_url = format!("{}/mcp", base_url.trim_end_matches('/'));
    let rpc_body = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "get_pipeline_status",
            "arguments": {}
        }
    });

    let resp = client
        .post(&mcp_url)
        .json(&rpc_body)
        .send()
        .await
        .map_err(|e| format!("failed to reach {mcp_url}: {e}"))?;

    resp.json()
        .await
        .map_err(|e| format!("invalid upstream response: {e}"))
}

/// Check health of a single project URL.
pub async fn check_project_health(client: &Client, base_url: &str) -> Result<bool, String> {
    let health_url = format!("{}/health", base_url.trim_end_matches('/'));
    match client.get(&health_url).send().await {
        Ok(resp) => Ok(resp.status().is_success()),
        Err(e) => Err(format!("unreachable: {e}")),
    }
}

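For reference, the upstream response shape that `fetch_one_project_pipeline_status` unwraps — `result.content[0].text`, where `text` is itself a JSON string — would look roughly like this (an illustrative payload; the actual pipeline JSON is project-specific):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "{\"stories\": []}" }
    ]
  }
}
```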
// ── Gateway MCP JSON ────────────────────────────────────────────────────────

/// Write (or overwrite) a `.mcp.json` in `config_dir` that points Claude Code
/// CLI at the gateway's own `/mcp` endpoint.
pub fn write_gateway_mcp_json(config_dir: &Path, port: u16) -> Result<(), std::io::Error> {
    let host = std::env::var("HUSKIES_HOST").unwrap_or_else(|_| "127.0.0.1".to_string());
    let url = format!("http://{host}:{port}/mcp");
    let content = json!({
        "mcpServers": {
            "huskies": {
                "type": "http",
                "url": url
            }
        }
    });
    let path = config_dir.join(".mcp.json");
    std::fs::write(&path, serde_json::to_string_pretty(&content).unwrap())?;
    crate::slog!("[gateway] Wrote {} pointing to {}", path.display(), url);
    Ok(())
}

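With the default host and an example port of 3000, the `.mcp.json` produced by `write_gateway_mcp_json` comes out as:

```json
{
  "mcpServers": {
    "huskies": {
      "type": "http",
      "url": "http://127.0.0.1:3000/mcp"
    }
  }
}
```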
// ── Init project I/O ────────────────────────────────────────────────────────

/// Check if a path already has a `.huskies/` directory.
pub fn has_huskies_dir(path: &Path) -> bool {
    path.join(".huskies").exists()
}

/// Create a directory (and parents) if it does not exist.
pub fn ensure_directory(path: &Path) -> Result<(), String> {
    if !path.exists() {
        std::fs::create_dir_all(path)
            .map_err(|e| format!("failed to create directory '{}': {e}", path.display()))?;
    }
    Ok(())
}

/// Scaffold a huskies project at the given path.
pub fn scaffold_project(path: &Path) -> Result<(), String> {
    crate::io::fs::scaffold::scaffold_story_kit(path, 3001)
}

/// Initialise wizard state at the given path.
pub fn init_wizard_state(path: &Path) {
    crate::io::wizard::WizardState::init_if_missing(path);
}

// ── Notification poller ─────────────────────────────────────────────────────

/// Spawn a background task that polls events from all project servers.
pub fn spawn_gateway_notification_poller(
    transport: std::sync::Arc<dyn crate::chat::ChatTransport>,
    room_ids: Vec<String>,
    project_urls: BTreeMap<String, String>,
    poll_interval_secs: u64,
) {
    tokio::spawn(async move {
        let client = Client::builder()
            .timeout(std::time::Duration::from_secs(10))
            .build()
            .unwrap_or_else(|_| Client::new());
        let interval = std::time::Duration::from_secs(poll_interval_secs.max(1));

        let mut last_ts: HashMap<String, u64> = project_urls
            .keys()
            .map(|name| (name.clone(), 0u64))
            .collect();

        loop {
            for (project_name, base_url) in &project_urls {
                let since = last_ts.get(project_name).copied().unwrap_or(0);
                let url = format!("{base_url}/api/events?since={since}");

                let response = match client.get(&url).send().await {
                    Ok(r) => r,
                    Err(e) => {
                        crate::slog!(
                            "[gateway-poller] {project_name}: unreachable ({e}); skipping"
                        );
                        continue;
                    }
                };

                let events: Vec<crate::service::events::StoredEvent> = match response.json().await {
                    Ok(v) => v,
                    Err(e) => {
                        crate::slog!(
                            "[gateway-poller] {project_name}: failed to parse events: {e}"
                        );
                        continue;
                    }
                };

                for event in &events {
                    let ts = event.timestamp_ms();
                    if ts > *last_ts.get(project_name).unwrap_or(&0) {
                        last_ts.insert(project_name.clone(), ts);
                    }

                    let (plain, html) = super::polling::format_gateway_event(project_name, event);
                    for room_id in &room_ids {
                        if let Err(e) = transport.send_message(room_id, &plain, &html).await {
                            crate::slog!(
                                "[gateway-poller] Failed to send notification to {room_id}: {e}"
                            );
                        }
                    }
                }
            }

            tokio::time::sleep(interval).await;
        }
    });
}

// ── Gateway bot spawn ───────────────────────────────────────────────────────

/// Re-export type alias for the active project lock.
pub type ActiveProject = std::sync::Arc<tokio::sync::RwLock<String>>;

/// Attempt to spawn the Matrix bot against the gateway config directory.
pub fn spawn_gateway_bot(
    config_dir: &Path,
    active_project: ActiveProject,
    gateway_projects: Vec<String>,
    gateway_project_urls: BTreeMap<String, String>,
    port: u16,
) -> Option<tokio::task::AbortHandle> {
    use crate::agents::AgentPool;
    use tokio::sync::{broadcast, mpsc};

    let (watcher_tx, _) = broadcast::channel(16);
    let (_perm_tx, perm_rx) = mpsc::unbounded_channel();
    let perm_rx = std::sync::Arc::new(tokio::sync::Mutex::new(perm_rx));

    let (shutdown_tx, shutdown_rx) =
        tokio::sync::watch::channel::<Option<crate::rebuild::ShutdownReason>>(None);
    std::mem::forget(shutdown_tx);

    let agents = std::sync::Arc::new(AgentPool::new(port, watcher_tx.clone()));

    crate::chat::transport::matrix::spawn_bot(
        config_dir,
        watcher_tx,
        perm_rx,
        agents,
        shutdown_rx,
        Some(active_project),
        gateway_projects,
        gateway_project_urls,
    )
}
@@ -0,0 +1,580 @@
//! Gateway service — domain logic for the multi-project gateway.
//!
//! Follows the conventions in `docs/architecture/service-modules.md`:
//! - `mod.rs` (this file) — public API, typed [`Error`], orchestration, `GatewayState`
//! - `io.rs` — the ONLY place that performs side effects (filesystem, network, process spawn)
//! - `config.rs` — pure config types and validation
//! - `registration.rs` — pure agent registration logic
//! - `aggregation.rs` — pure cross-project pipeline formatting
//! - `polling.rs` — pure notification event formatting

pub mod aggregation;
pub mod config;
pub(crate) mod io;
pub mod polling;
pub mod registration;

pub use aggregation::format_aggregate_status_compact;
pub use config::{GatewayConfig, ProjectEntry};
pub use io::{fetch_all_project_pipeline_statuses, spawn_gateway_notification_poller};
pub use registration::JoinedAgent;

use io::Client;
use std::collections::{BTreeMap, HashMap};
use std::path::PathBuf;
use std::sync::Arc;
use tokio::sync::Mutex as TokioMutex;
use tokio::sync::RwLock;

// ── Error type ──────────────────────────────────────────────────────────────

/// Typed errors returned by `service::gateway` functions.
///
/// HTTP handlers map these to appropriate status codes:
/// - [`Error::ProjectNotFound`] → 404 Not Found
/// - [`Error::UnreachableProject`] → 502 Bad Gateway
/// - [`Error::DuplicateToken`] → 409 Conflict
/// - [`Error::InvalidAgent`] → 404 Not Found / 400 Bad Request
/// - [`Error::Config`] → 400 Bad Request
/// - [`Error::Upstream`] → 502 Bad Gateway
#[derive(Debug)]
pub enum Error {
    /// A referenced project does not exist in the gateway config.
    ProjectNotFound(String),
    /// A project container is unreachable.
    UnreachableProject(String),
    /// A join token has already been consumed or a project name is taken.
    DuplicateToken(String),
    /// An agent ID is invalid or not found.
    InvalidAgent(String),
    /// A configuration value is invalid.
    Config(String),
    /// An upstream project container returned an unexpected response.
    Upstream(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::ProjectNotFound(msg) => write!(f, "Project not found: {msg}"),
            Self::UnreachableProject(msg) => write!(f, "Unreachable project: {msg}"),
            Self::DuplicateToken(msg) => write!(f, "Duplicate token: {msg}"),
            Self::InvalidAgent(msg) => write!(f, "Invalid agent: {msg}"),
            Self::Config(msg) => write!(f, "Config error: {msg}"),
            Self::Upstream(msg) => write!(f, "Upstream error: {msg}"),
        }
    }
}

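The status-code mapping the doc comment on `Error` describes can be sketched as a small standalone helper. This is illustrative only: `GatewayError` is a local stand-in mirroring the `Error` enum, and `http_status_for` is a hypothetical function, not part of this diff.

```rust
// Illustrative sketch of the error → HTTP status mapping described in the
// module docs. `GatewayError` mirrors the gateway's `Error` enum;
// `http_status_for` is hypothetical and not defined anywhere in this diff.
#[derive(Debug)]
enum GatewayError {
    ProjectNotFound(String),
    UnreachableProject(String),
    DuplicateToken(String),
    InvalidAgent(String),
    Config(String),
    Upstream(String),
}

fn http_status_for(err: &GatewayError) -> u16 {
    match err {
        // InvalidAgent is documented as 404 or 400 depending on the handler;
        // this sketch picks 404.
        GatewayError::ProjectNotFound(_) | GatewayError::InvalidAgent(_) => 404,
        GatewayError::UnreachableProject(_) | GatewayError::Upstream(_) => 502,
        GatewayError::DuplicateToken(_) => 409,
        GatewayError::Config(_) => 400,
    }
}

fn main() {
    assert_eq!(http_status_for(&GatewayError::DuplicateToken("t".into())), 409);
    assert_eq!(http_status_for(&GatewayError::Config("c".into())), 400);
}
```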
// ── Gateway state ───────────────────────────────────────────────────────────

/// A one-time join token that has been generated but not yet consumed.
pub(crate) struct PendingToken {
    #[allow(dead_code)]
    pub(crate) created_at: f64,
}

/// Shared gateway state threaded through HTTP handlers.
#[derive(Clone)]
pub struct GatewayState {
    /// The live set of registered projects (initially loaded from `projects.toml`).
    pub projects: Arc<RwLock<BTreeMap<String, ProjectEntry>>>,
    /// The currently active project name.
    pub active_project: Arc<RwLock<String>>,
    /// HTTP client for proxying requests to project containers.
    pub client: Client,
    /// Build agents that have joined this gateway.
    pub joined_agents: Arc<RwLock<Vec<JoinedAgent>>>,
    /// One-time join tokens that have been issued but not yet consumed.
    pub(crate) pending_tokens: Arc<RwLock<HashMap<String, PendingToken>>>,
    /// Directory containing `projects.toml` and the `.huskies/` subfolder.
    pub config_dir: PathBuf,
    /// HTTP port the gateway is listening on.
    pub port: u16,
    /// Abort handle for the running Matrix bot task (if any).
    pub bot_handle: Arc<TokioMutex<Option<tokio::task::AbortHandle>>>,
}

impl GatewayState {
    /// Create a new gateway state from a config and config directory.
    ///
    /// The first project in the config becomes the active project by default.
    /// Previously registered agents are loaded from `gateway_agents.json`.
    pub fn new(
        gateway_config: GatewayConfig,
        config_dir: PathBuf,
        port: u16,
    ) -> Result<Self, String> {
        let first = config::validate_config(&gateway_config)?;
        let agents = io::load_agents(&config_dir);
        Ok(Self {
            projects: Arc::new(RwLock::new(gateway_config.projects)),
            active_project: Arc::new(RwLock::new(first)),
            client: Client::new(),
            joined_agents: Arc::new(RwLock::new(agents)),
            pending_tokens: Arc::new(RwLock::new(HashMap::new())),
            config_dir,
            port,
            bot_handle: Arc::new(TokioMutex::new(None)),
        })
    }

    /// Get the URL of the currently active project.
    pub async fn active_url(&self) -> Result<String, Error> {
        let name = self.active_project.read().await.clone();
        self.projects
            .read()
            .await
            .get(&name)
            .map(|p| p.url.clone())
            .ok_or_else(|| {
                Error::ProjectNotFound(format!("active project '{name}' not found in config"))
            })
    }
}

// ── Public API ──────────────────────────────────────────────────────────────

/// Switch the active project. Returns the project's URL on success.
pub async fn switch_project(state: &GatewayState, project: &str) -> Result<String, Error> {
    if project.is_empty() {
        return Err(Error::Config("missing required parameter: project".into()));
    }

    let url = {
        let projects = state.projects.read().await;
        config::validate_project_exists(&projects, project).map_err(Error::ProjectNotFound)?
    };

    *state.active_project.write().await = project.to_string();
    Ok(url)
}

/// Generate a one-time join token. Returns the token string.
pub async fn generate_join_token(state: &GatewayState) -> String {
    let token = uuid::Uuid::new_v4().to_string();
    let now = chrono::Utc::now().timestamp() as f64;
    state
        .pending_tokens
        .write()
        .await
        .insert(token.clone(), PendingToken { created_at: now });
    crate::slog!("[gateway] Generated join token {:.8}…", &token);
    token
}

/// Register a build agent with a join token.
pub async fn register_agent(
    state: &GatewayState,
    token: &str,
    label: String,
    address: String,
) -> Result<JoinedAgent, Error> {
    // Validate and consume the token.
    let mut tokens = state.pending_tokens.write().await;
    if !tokens.contains_key(token) {
        return Err(Error::DuplicateToken(
            "invalid or already-used join token".into(),
        ));
    }
    tokens.remove(token);
    drop(tokens);

    let now = chrono::Utc::now().timestamp() as f64;
    let agent = registration::create_agent(uuid::Uuid::new_v4().to_string(), label, address, now);

    crate::slog!(
        "[gateway] Agent '{}' registered (id={})",
        agent.label,
        agent.id
    );

    {
        let mut agents = state.joined_agents.write().await;
        agents.push(agent.clone());
        io::save_agents(&agents, &state.config_dir).await;
    }

    Ok(agent)
}

/// Remove a registered agent by ID. Returns `true` if found and removed.
pub async fn remove_agent(state: &GatewayState, id: &str) -> bool {
    let mut agents = state.joined_agents.write().await;
    let removed = registration::remove_agent(&mut agents, id);
    if removed {
        io::save_agents(&agents, &state.config_dir).await;
        crate::slog!("[gateway] Removed agent id={id}");
    }
    removed
}

/// Assign or unassign an agent to a project.
pub async fn assign_agent(
    state: &GatewayState,
    id: &str,
    project: Option<String>,
) -> Result<JoinedAgent, Error> {
    let project_clean = project.and_then(|p| if p.is_empty() { None } else { Some(p) });

    let updated = {
        let projects = state.projects.read().await;
        let mut agents = state.joined_agents.write().await;
        registration::assign_agent(&mut agents, id, project_clean, &projects)?
    };

    crate::slog!(
        "[gateway] Agent '{}' (id={}) assigned to {:?}",
        updated.label,
        updated.id,
        updated.assigned_project
    );
    let agents = state.joined_agents.read().await.clone();
    io::save_agents(&agents, &state.config_dir).await;
    Ok(updated)
}

/// Update an agent's heartbeat. Returns `true` if found.
pub async fn heartbeat_agent(state: &GatewayState, id: &str) -> bool {
    let now = chrono::Utc::now().timestamp() as f64;
    let mut agents = state.joined_agents.write().await;
    registration::heartbeat(&mut agents, id, now)
}

/// Add a new project to the gateway config.
pub async fn add_project(state: &GatewayState, name: &str, url: &str) -> Result<(), Error> {
    let name = name.trim().to_string();
    let url = url.trim().to_string();

    if name.is_empty() {
        return Err(Error::Config("project name must not be empty".into()));
    }
    if url.is_empty() {
        return Err(Error::Config("project url must not be empty".into()));
    }

    {
        let mut projects = state.projects.write().await;
        if projects.contains_key(&name) {
            return Err(Error::DuplicateToken(format!(
                "project '{name}' already exists"
            )));
        }
        projects.insert(name.clone(), ProjectEntry { url: url.clone() });
    }

    let snapshot = state.projects.read().await.clone();
    io::save_config(&snapshot, &state.config_dir).await;
    crate::slog!("[gateway] Added project '{name}' ({url})");
    Ok(())
}

/// Remove a project from the gateway config.
pub async fn remove_project(state: &GatewayState, name: &str) -> Result<(), Error> {
    let active = state.active_project.read().await.clone();

    {
        let mut projects = state.projects.write().await;
        if !projects.contains_key(name) {
            return Err(Error::ProjectNotFound(format!(
                "project '{name}' not found"
            )));
        }
        if projects.len() == 1 {
            return Err(Error::Config("cannot remove the last project".into()));
        }
        projects.remove(name);
    }

    let snapshot = state.projects.read().await.clone();
    io::save_config(&snapshot, &state.config_dir).await;

    // If the removed project was active, switch to the first remaining.
    if active == name {
        let first = state.projects.read().await.keys().next().cloned();
        if let Some(new_active) = first {
            *state.active_project.write().await = new_active;
        }
    }

    crate::slog!("[gateway] Removed project '{name}'");
    Ok(())
}

/// Initialise a new huskies project at the given path.
///
/// Optionally registers the project in the gateway's project map.
pub async fn init_project(
    state: &GatewayState,
    path_str: &str,
    name: Option<&str>,
    url: Option<&str>,
) -> Result<Option<String>, Error> {
    let path_str = path_str.trim();
    if path_str.is_empty() {
        return Err(Error::Config("missing required parameter: path".into()));
    }

    let project_path = std::path::Path::new(path_str);

    if io::has_huskies_dir(project_path) {
        return Err(Error::Config(format!(
            "path '{}' is already a huskies project (.huskies/ exists). \
             Use wizard_status to check setup progress.",
            project_path.display()
        )));
    }

    io::ensure_directory(project_path).map_err(Error::Config)?;

    io::scaffold_project(project_path)
        .map_err(|e| Error::Config(format!("scaffold failed: {e}")))?;

    io::init_wizard_state(project_path);

    // Optionally register in projects.toml.
    let registered_name: Option<String> = match (name, url) {
        (Some(n), Some(u)) if !n.trim().is_empty() && !u.trim().is_empty() => {
            let n = n.trim();
            let u = u.trim();
            let mut projects = state.projects.write().await;
            if projects.contains_key(n) {
                return Err(Error::DuplicateToken(format!(
                    "project '{n}' is already registered. Choose a different name or use switch_project."
                )));
            }
            projects.insert(n.to_string(), ProjectEntry { url: u.to_string() });
            io::save_config(&projects, &state.config_dir).await;
            crate::slog!("[gateway] init_project: registered '{n}' ({u})");
            Some(n.to_string())
        }
        _ => None,
    };

    Ok(registered_name)
}

/// Fetch aggregated health status across all projects.
pub async fn health_check_all(state: &GatewayState) -> (bool, BTreeMap<String, &'static str>) {
    let mut all_healthy = true;
    let mut statuses = BTreeMap::new();

    let project_entries: Vec<(String, String)> = state
        .projects
        .read()
        .await
        .iter()
        .map(|(n, e)| (n.clone(), e.url.clone()))
        .collect();

    for (name, url) in &project_entries {
        let healthy = io::check_project_health(&state.client, url)
            .await
            .unwrap_or(false);
        if !healthy {
            all_healthy = false;
        }
        statuses.insert(name.clone(), if healthy { "ok" } else { "error" });
    }

    (all_healthy, statuses)
}

/// Save bot config and restart the bot.
pub async fn save_bot_config_and_restart(state: &GatewayState, content: &str) -> Result<(), Error> {
    io::write_bot_config(&state.config_dir, content).map_err(Error::Config)?;

    // Abort existing bot task and spawn a fresh one.
    {
        let mut handle = state.bot_handle.lock().await;
        if let Some(h) = handle.take() {
            h.abort();
        }
        let gateway_projects: Vec<String> = state.projects.read().await.keys().cloned().collect();
        let gateway_project_urls: BTreeMap<String, String> = state
            .projects
            .read()
            .await
            .iter()
            .map(|(name, entry)| (name.clone(), entry.url.clone()))
            .collect();

        let new_handle = io::spawn_gateway_bot(
            &state.config_dir,
            Arc::clone(&state.active_project),
            gateway_projects,
            gateway_project_urls,
            state.port,
        );
        *handle = new_handle;
    }

    crate::slog!("[gateway] Bot configuration saved; bot restarted");
    Ok(())
}

// ── Tests ────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    fn make_config(names: &[(&str, &str)]) -> GatewayConfig {
        let mut projects = BTreeMap::new();
        for (name, url) in names {
            projects.insert(
                name.to_string(),
                ProjectEntry {
                    url: url.to_string(),
                },
            );
        }
        GatewayConfig { projects }
    }

    #[test]
    fn gateway_state_rejects_empty_config() {
        let config = GatewayConfig {
            projects: BTreeMap::new(),
        };
        assert!(GatewayState::new(config, PathBuf::from("."), 3000).is_err());
    }

    #[test]
    fn gateway_state_sets_first_project_active() {
        let config = make_config(&[("alpha", "http://a:3001"), ("beta", "http://b:3002")]);
        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();
        let active = state.active_project.blocking_read().clone();
        assert_eq!(active, "alpha");
    }

    #[tokio::test]
    async fn switch_project_to_known_project() {
        let config = make_config(&[("alpha", "http://a:3001"), ("beta", "http://b:3002")]);
        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();
        let url = switch_project(&state, "beta").await.unwrap();
        assert_eq!(url, "http://b:3002");
        assert_eq!(*state.active_project.read().await, "beta");
    }

    #[tokio::test]
    async fn switch_project_to_unknown_fails() {
        let config = make_config(&[("alpha", "http://a:3001")]);
        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();
        assert!(switch_project(&state, "nonexistent").await.is_err());
    }

    #[tokio::test]
    async fn switch_project_empty_name_fails() {
        let config = make_config(&[("alpha", "http://a:3001")]);
        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();
        assert!(switch_project(&state, "").await.is_err());
    }

    #[tokio::test]
    async fn active_url_returns_correct_url() {
        let config = make_config(&[("myproj", "http://my:3001")]);
        let state = GatewayState::new(config, PathBuf::from("."), 3000).unwrap();
        let url = state.active_url().await.unwrap();
        assert_eq!(url, "http://my:3001");
    }

    #[test]
    fn error_display_variants() {
        assert!(
            Error::ProjectNotFound("x".into())
                .to_string()
                .contains("Project not found")
        );
        assert!(
            Error::UnreachableProject("x".into())
                .to_string()
                .contains("Unreachable")
        );
        assert!(
            Error::DuplicateToken("x".into())
                .to_string()
                .contains("Duplicate")
        );
        assert!(
            Error::InvalidAgent("x".into())
                .to_string()
                .contains("Invalid agent")
        );
        assert!(
            Error::Config("x".into())
                .to_string()
                .contains("Config error")
        );
        assert!(Error::Upstream("x".into()).to_string().contains("Upstream"));
    }

    #[tokio::test]
    async fn generate_and_register_agent() {
        let config = make_config(&[("test", "http://test:3001")]);
        let state = GatewayState::new(config, PathBuf::new(), 3000).unwrap();
        let token = generate_join_token(&state).await;
        let agent = register_agent(&state, &token, "test-agent".into(), "ws://a".into())
            .await
            .unwrap();
        assert_eq!(agent.label, "test-agent");
        assert!(state.pending_tokens.read().await.is_empty());
        assert_eq!(state.joined_agents.read().await.len(), 1);
    }

    #[tokio::test]
    async fn register_agent_invalid_token_fails() {
        let config = make_config(&[("test", "http://test:3001")]);
        let state = GatewayState::new(config, PathBuf::new(), 3000).unwrap();
        let result = register_agent(&state, "bad-token", "a".into(), "ws://a".into()).await;
        assert!(result.is_err());
    }

    #[tokio::test]
    async fn remove_agent_success() {
        let config = make_config(&[("test", "http://test:3001")]);
|
||||||
|
let state = GatewayState::new(config, PathBuf::new(), 3000).unwrap();
|
||||||
|
let token = generate_join_token(&state).await;
|
||||||
|
let agent = register_agent(&state, &token, "a".into(), "ws://a".into())
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
assert!(remove_agent(&state, &agent.id).await);
|
||||||
|
assert!(state.joined_agents.read().await.is_empty());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn heartbeat_agent_updates_timestamp() {
|
||||||
|
let config = make_config(&[("test", "http://test:3001")]);
|
||||||
|
let state = GatewayState::new(config, PathBuf::new(), 3000).unwrap();
|
||||||
|
let token = generate_join_token(&state).await;
|
||||||
|
let agent = register_agent(&state, &token, "a".into(), "ws://a".into())
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let old_ts = agent.last_seen;
|
||||||
|
// Small sleep to ensure timestamp differs.
|
||||||
|
tokio::time::sleep(std::time::Duration::from_millis(10)).await;
|
||||||
|
assert!(heartbeat_agent(&state, &agent.id).await);
|
||||||
|
let agents = state.joined_agents.read().await;
|
||||||
|
assert!(agents[0].last_seen >= old_ts);
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn init_project_scaffolds_directory() {
|
||||||
|
let dir = tempfile::tempdir().unwrap();
|
||||||
|
let config = make_config(&[("test", "http://test:3001")]);
|
||||||
|
let state = GatewayState::new(config, PathBuf::new(), 3000).unwrap();
|
||||||
|
let result = init_project(&state, dir.path().to_str().unwrap(), None, None).await;
|
||||||
|
assert!(result.is_ok());
|
||||||
|
assert!(dir.path().join(".huskies").exists());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn init_project_already_exists_fails() {
|
||||||
|
let dir = tempfile::tempdir().unwrap();
|
||||||
|
std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
|
||||||
|
let config = make_config(&[("test", "http://test:3001")]);
|
||||||
|
let state = GatewayState::new(config, PathBuf::new(), 3000).unwrap();
|
||||||
|
let result = init_project(&state, dir.path().to_str().unwrap(), None, None).await;
|
||||||
|
assert!(result.is_err());
|
||||||
|
}
|
||||||
|
}
|
||||||
@@ -0,0 +1,91 @@
//! Gateway notification polling — pure event formatting.
//!
//! Formats pipeline events from project containers into gateway notifications
//! with `[project-name]` prefixes. The actual I/O (HTTP polling, spawning
//! tasks, sending messages) lives in `io.rs`.

use crate::service::events::StoredEvent;
use crate::service::notifications::{
    format_blocked_notification, format_error_notification, format_stage_notification,
    stage_display_name,
};

/// Format a [`StoredEvent`] from a project into a gateway notification.
///
/// Prefixes the message with `[project-name]` so users can distinguish which
/// project emitted the event.
pub fn format_gateway_event(project_name: &str, event: &StoredEvent) -> (String, String) {
    let prefix = format!("[{project_name}] ");

    match event {
        StoredEvent::StageTransition {
            story_id,
            from_stage,
            to_stage,
            ..
        } => {
            let from_display = stage_display_name(from_stage);
            let to_display = stage_display_name(to_stage);
            let (plain, html) = format_stage_notification(story_id, None, from_display, to_display);
            (format!("{prefix}{plain}"), format!("{prefix}{html}"))
        }
        StoredEvent::MergeFailure {
            story_id, reason, ..
        } => {
            let (plain, html) = format_error_notification(story_id, None, reason);
            (format!("{prefix}{plain}"), format!("{prefix}{html}"))
        }
        StoredEvent::StoryBlocked {
            story_id, reason, ..
        } => {
            let (plain, html) = format_blocked_notification(story_id, None, reason);
            (format!("{prefix}{plain}"), format!("{prefix}{html}"))
        }
    }
}

// ── Tests ────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn stage_transition_prefixes_project_name() {
        let event = StoredEvent::StageTransition {
            story_id: "42_story_my_feature".to_string(),
            from_stage: "2_current".to_string(),
            to_stage: "3_qa".to_string(),
            timestamp_ms: 1000,
        };
        let (plain, html) = format_gateway_event("huskies", &event);
        assert!(plain.starts_with("[huskies] "));
        assert!(html.starts_with("[huskies] "));
        assert!(plain.contains("Current"));
        assert!(plain.contains("QA"));
    }

    #[test]
    fn merge_failure_prefixes_project_name() {
        let event = StoredEvent::MergeFailure {
            story_id: "42_story_my_feature".to_string(),
            reason: "merge conflict".to_string(),
            timestamp_ms: 1000,
        };
        let (plain, _html) = format_gateway_event("robot-studio", &event);
        assert!(plain.starts_with("[robot-studio] "));
        assert!(plain.contains("merge conflict"));
    }

    #[test]
    fn story_blocked_prefixes_project_name() {
        let event = StoredEvent::StoryBlocked {
            story_id: "43_story_bar".to_string(),
            reason: "retry limit exceeded".to_string(),
            timestamp_ms: 2000,
        };
        let (plain, _html) = format_gateway_event("huskies", &event);
        assert!(plain.starts_with("[huskies] "));
        assert!(plain.contains("BLOCKED"));
    }
}
@@ -0,0 +1,165 @@
//! Gateway agent registration — pure logic for managing build agents.
//!
//! Contains `JoinedAgent` and functions that validate and manipulate agent
//! state in memory. All persistence (disk I/O) lives in `io.rs`.

use serde::{Deserialize, Serialize};
use std::collections::BTreeMap;

use super::config::ProjectEntry;

/// A build agent that has registered with this gateway.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct JoinedAgent {
    /// Unique ID assigned by the gateway on registration.
    pub id: String,
    /// Human-readable label provided by the agent (e.g. `build-agent-abc123`).
    pub label: String,
    /// The agent's CRDT-sync WebSocket address (e.g. `ws://host:3001/crdt-sync`).
    pub address: String,
    /// Unix timestamp when the agent registered.
    pub registered_at: f64,
    /// Unix timestamp of the last heartbeat from this agent.
    #[serde(default)]
    pub last_seen: f64,
    /// Project this agent is assigned to, if any.
    #[serde(default, skip_serializing_if = "Option::is_none")]
    pub assigned_project: Option<String>,
}

/// Create a new `JoinedAgent` from registration data.
pub fn create_agent(id: String, label: String, address: String, now: f64) -> JoinedAgent {
    JoinedAgent {
        id,
        label,
        address,
        registered_at: now,
        last_seen: now,
        assigned_project: None,
    }
}

/// Remove an agent by ID from the list. Returns `true` if found and removed.
pub fn remove_agent(agents: &mut Vec<JoinedAgent>, id: &str) -> bool {
    let before = agents.len();
    agents.retain(|a| a.id != id);
    agents.len() < before
}

/// Assign (or unassign) an agent to a project.
///
/// Returns the updated agent on success, or an error if the agent or project
/// is not found.
pub fn assign_agent(
    agents: &mut [JoinedAgent],
    id: &str,
    project: Option<String>,
    projects: &BTreeMap<String, ProjectEntry>,
) -> Result<JoinedAgent, super::Error> {
    // Validate project exists if assigning.
    if let Some(ref p) = project
        && !projects.contains_key(p.as_str())
    {
        return Err(super::Error::ProjectNotFound(format!(
            "unknown project '{p}'"
        )));
    }

    match agents.iter_mut().find(|a| a.id == id) {
        None => Err(super::Error::InvalidAgent(format!("agent not found: {id}"))),
        Some(a) => {
            a.assigned_project = project;
            Ok(a.clone())
        }
    }
}

/// Update an agent's last-seen timestamp. Returns `true` if the agent was found.
pub fn heartbeat(agents: &mut [JoinedAgent], id: &str, now: f64) -> bool {
    match agents.iter_mut().find(|a| a.id == id) {
        None => false,
        Some(a) => {
            a.last_seen = now;
            true
        }
    }
}

// ── Tests ────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn create_agent_sets_fields() {
        let agent = create_agent("id-1".into(), "lbl".into(), "ws://a".into(), 100.0);
        assert_eq!(agent.id, "id-1");
        assert_eq!(agent.label, "lbl");
        assert_eq!(agent.address, "ws://a");
        assert_eq!(agent.registered_at, 100.0);
        assert_eq!(agent.last_seen, 100.0);
        assert!(agent.assigned_project.is_none());
    }

    #[test]
    fn remove_agent_by_id() {
        let mut agents = vec![
            create_agent("a".into(), "A".into(), "ws://a".into(), 0.0),
            create_agent("b".into(), "B".into(), "ws://b".into(), 0.0),
        ];
        assert!(remove_agent(&mut agents, "a"));
        assert_eq!(agents.len(), 1);
        assert_eq!(agents[0].id, "b");
    }

    #[test]
    fn remove_agent_missing_returns_false() {
        let mut agents = vec![];
        assert!(!remove_agent(&mut agents, "x"));
    }

    #[test]
    fn assign_agent_to_valid_project() {
        let mut projects = BTreeMap::new();
        projects.insert(
            "proj".into(),
            ProjectEntry {
                url: "http://p".into(),
            },
        );
        let mut agents = vec![create_agent("a".into(), "A".into(), "ws://a".into(), 0.0)];
        let result = assign_agent(&mut agents, "a", Some("proj".into()), &projects);
        assert!(result.is_ok());
        assert_eq!(result.unwrap().assigned_project, Some("proj".into()));
    }

    #[test]
    fn assign_agent_to_unknown_project_fails() {
        let projects = BTreeMap::new();
        let mut agents = vec![create_agent("a".into(), "A".into(), "ws://a".into(), 0.0)];
        let result = assign_agent(&mut agents, "a", Some("nope".into()), &projects);
        assert!(result.is_err());
    }

    #[test]
    fn assign_agent_unknown_id_fails() {
        let projects = BTreeMap::new();
        let mut agents: Vec<JoinedAgent> = vec![];
        let result = assign_agent(&mut agents, "x", None, &projects);
        assert!(result.is_err());
    }

    #[test]
    fn heartbeat_updates_last_seen() {
        let mut agents = vec![create_agent("a".into(), "A".into(), "ws://a".into(), 0.0)];
        assert!(heartbeat(&mut agents, "a", 999.0));
        assert_eq!(agents[0].last_seen, 999.0);
    }

    #[test]
    fn heartbeat_unknown_id_returns_false() {
        let mut agents: Vec<JoinedAgent> = vec![];
        assert!(!heartbeat(&mut agents, "x", 1.0));
    }
}
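The agent-bookkeeping functions above are pure, so they can be exercised without the gateway. A minimal sketch, assuming a trimmed-down `Agent` struct in place of the full `JoinedAgent` (no serde fields, no project map):

```rust
/// Hypothetical cut-down record standing in for `JoinedAgent` in this sketch.
#[derive(Debug, Clone)]
struct Agent {
    id: String,
    last_seen: f64,
}

/// Remove by id; `true` if something was removed (mirrors `remove_agent` above).
fn remove_agent(agents: &mut Vec<Agent>, id: &str) -> bool {
    let before = agents.len();
    agents.retain(|a| a.id != id);
    agents.len() < before
}

/// Update `last_seen`; `true` if the agent was found (mirrors `heartbeat` above).
fn heartbeat(agents: &mut [Agent], id: &str, now: f64) -> bool {
    match agents.iter_mut().find(|a| a.id == id) {
        Some(a) => {
            a.last_seen = now;
            true
        }
        None => false,
    }
}

fn main() {
    let mut agents = vec![
        Agent { id: "a".into(), last_seen: 0.0 },
        Agent { id: "b".into(), last_seen: 0.0 },
    ];
    assert!(heartbeat(&mut agents, "a", 42.0));
    assert_eq!(agents[0].last_seen, 42.0);
    assert!(remove_agent(&mut agents, "b"));
    assert!(!remove_agent(&mut agents, "b")); // already gone
    println!("agent bookkeeping ok");
}
```

Note the `retain`-then-compare-length pattern: it removes in one pass and reports whether anything matched, without a separate lookup.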
@@ -0,0 +1,90 @@
//! Git I/O — the ONLY place in `service::git_ops/` that may perform side effects.
//!
//! Side effects here include: spawning git processes via `std::process::Command`
//! (wrapped in `tokio::task::spawn_blocking`), and filesystem existence and
//! canonicalization checks for path validation.
//! All pure logic (path-prefix checks, porcelain parsing) lives in `path_guard.rs`
//! and `porcelain.rs`.

use super::Error;
use std::path::{Path, PathBuf};
use std::process::Output;

/// Validate that `worktree_path` is an absolute path that exists on disk and
/// lies inside the project's `.huskies/worktrees/` directory. Returns the
/// canonicalized path on success.
///
/// # Errors
/// - [`Error::Validation`] if the path is relative or does not exist.
/// - [`Error::PathNotAllowed`] if the path is outside `.huskies/worktrees/`.
/// - [`Error::Io`] if canonicalization fails.
pub fn validate_worktree_path(worktree_path: &str, project_root: &Path) -> Result<PathBuf, Error> {
    let wd = PathBuf::from(worktree_path);

    if !wd.is_absolute() {
        return Err(Error::Validation(
            "worktree_path must be an absolute path".to_string(),
        ));
    }
    if !wd.exists() {
        return Err(Error::Validation(format!(
            "worktree_path does not exist: {worktree_path}"
        )));
    }

    let worktrees_root = project_root.join(".huskies").join("worktrees");

    let canonical_wd = wd
        .canonicalize()
        .map_err(|e| Error::Io(format!("Cannot canonicalize worktree_path: {e}")))?;

    let canonical_wt = if worktrees_root.exists() {
        worktrees_root
            .canonicalize()
            .map_err(|e| Error::Io(format!("Cannot canonicalize worktrees root: {e}")))?
    } else {
        return Err(Error::PathNotAllowed(
            "No worktrees directory found in project".to_string(),
        ));
    };

    if !super::path_guard::is_under_root(&canonical_wd, &canonical_wt) {
        return Err(Error::PathNotAllowed(format!(
            "worktree_path must be inside .huskies/worktrees/. Got: {worktree_path}"
        )));
    }

    Ok(canonical_wd)
}

/// Run a git command with static arg slices in `dir` and return the process output.
///
/// # Errors
/// - [`Error::UpstreamFailure`] if the blocking task panics.
/// - [`Error::Io`] if the git process cannot be spawned.
pub async fn run_git(args: Vec<&'static str>, dir: PathBuf) -> Result<Output, Error> {
    tokio::task::spawn_blocking(move || {
        std::process::Command::new("git")
            .args(&args)
            .current_dir(&dir)
            .output()
    })
    .await
    .map_err(|e| Error::UpstreamFailure(format!("Task join error: {e}")))?
    .map_err(|e| Error::Io(format!("Failed to run git: {e}")))
}

/// Run a git command with owned `String` args in `dir` and return the process output.
///
/// # Errors
/// - [`Error::UpstreamFailure`] if the blocking task panics.
/// - [`Error::Io`] if the git process cannot be spawned.
pub async fn run_git_owned(args: Vec<String>, dir: PathBuf) -> Result<Output, Error> {
    tokio::task::spawn_blocking(move || {
        std::process::Command::new("git")
            .args(&args)
            .current_dir(&dir)
            .output()
    })
    .await
    .map_err(|e| Error::UpstreamFailure(format!("Task join error: {e}")))?
    .map_err(|e| Error::Io(format!("Failed to run git: {e}")))
}
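The two `run_git` variants above differ only in their argument type. One generic helper over `AsRef<OsStr>` could serve both call sites; a synchronous sketch (the `run_git_sync` name is hypothetical — the real helpers additionally wrap the spawn in `tokio::task::spawn_blocking`):

```rust
use std::ffi::OsStr;
use std::path::Path;
use std::process::{Command, Output};

// Hypothetical generic replacement for run_git / run_git_owned: any argument
// type convertible to OsStr (&'static str, String, PathBuf, ...) works.
fn run_git_sync<S: AsRef<OsStr>>(args: &[S], dir: &Path) -> std::io::Result<Output> {
    Command::new("git").args(args).current_dir(dir).output()
}

fn main() {
    // Both argument styles compile against the single generic signature.
    let borrowed = run_git_sync(&["--version"], Path::new("."));
    let owned = run_git_sync(&[String::from("--version")], Path::new("."));
    // No success assertion: whether git is installed is environment-dependent.
    println!("spawn attempted: borrowed={}, owned={}", borrowed.is_ok(), owned.is_ok());
}
```

The trade-off is monomorphization: the generic compiles to one instantiation per argument type, which is what the two hand-written variants spell out explicitly.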
@@ -0,0 +1,100 @@
//! Git operations service — worktree path validation and git command execution.
//!
//! Extracted from `http/mcp/git_tools.rs` following the conventions in
//! `docs/architecture/service-modules.md`:
//! - `mod.rs` (this file) — public API, typed [`Error`], orchestration
//! - `io.rs` — the ONLY place that performs side effects (git processes, filesystem)
//! - `path_guard.rs` — pure path-prefix safety checks
//! - `porcelain.rs` — pure git porcelain output parsers

pub mod io;
pub mod path_guard;
pub mod porcelain;

#[allow(unused_imports)]
pub use path_guard::is_under_root;
pub use porcelain::parse_git_status_porcelain;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::git_ops` functions.
///
/// HTTP handlers map these to status codes:
/// - [`Error::NotFound`] → 404 Not Found
/// - [`Error::Validation`] → 400 Bad Request
/// - [`Error::Conflict`] → 409 Conflict
/// - [`Error::PathNotAllowed`] → 400 Bad Request (sandbox violation)
/// - [`Error::Io`] → 500 Internal Server Error
/// - [`Error::UpstreamFailure`] → 500 Internal Server Error
#[allow(dead_code)]
#[derive(Debug)]
pub enum Error {
    /// The requested worktree or path does not exist.
    NotFound(String),
    /// A required argument is missing or has an invalid value.
    Validation(String),
    /// The git operation cannot proceed due to a conflicting state.
    Conflict(String),
    /// The path is outside the allowed sandbox.
    PathNotAllowed(String),
    /// A filesystem or git I/O operation failed.
    Io(String),
    /// An upstream git command returned an unexpected error.
    UpstreamFailure(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::NotFound(msg) => write!(f, "Not found: {msg}"),
            Self::Validation(msg) => write!(f, "Validation error: {msg}"),
            Self::Conflict(msg) => write!(f, "Conflict: {msg}"),
            Self::PathNotAllowed(msg) => write!(f, "Path not allowed: {msg}"),
            Self::Io(msg) => write!(f, "I/O error: {msg}"),
            Self::UpstreamFailure(msg) => write!(f, "Upstream failure: {msg}"),
        }
    }
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn error_display_not_found() {
        let e = Error::NotFound("worktree missing".to_string());
        assert!(e.to_string().contains("Not found"));
    }

    #[test]
    fn error_display_validation() {
        let e = Error::Validation("relative path".to_string());
        assert!(e.to_string().contains("Validation error"));
    }

    #[test]
    fn error_display_conflict() {
        let e = Error::Conflict("uncommitted changes".to_string());
        assert!(e.to_string().contains("Conflict"));
    }

    #[test]
    fn error_display_path_not_allowed() {
        let e = Error::PathNotAllowed("outside sandbox".to_string());
        assert!(e.to_string().contains("Path not allowed"));
    }

    #[test]
    fn error_display_io() {
        let e = Error::Io("permission denied".to_string());
        assert!(e.to_string().contains("I/O error"));
    }

    #[test]
    fn error_display_upstream_failure() {
        let e = Error::UpstreamFailure("git not found".to_string());
        assert!(e.to_string().contains("Upstream failure"));
    }
}
@@ -0,0 +1,58 @@
//! Pure path-guard helpers for `service::git_ops`.
//!
//! These functions are free of side effects — they operate on already-resolved
//! `Path` values and perform no filesystem I/O. Path existence checks and
//! canonicalization belong in `io.rs`.

use std::path::Path;

/// Return `true` if `canonical_path` starts with (i.e. is under) `root`.
///
/// Both paths must already be canonicalized so that symlinks, `.`, and `..`
/// components do not cause false negatives.
pub fn is_under_root(canonical_path: &Path, root: &Path) -> bool {
    canonical_path.starts_with(root)
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;
    use std::path::PathBuf;

    #[test]
    fn is_under_root_returns_true_for_child() {
        let root = PathBuf::from("/project/.huskies/worktrees");
        let child = PathBuf::from("/project/.huskies/worktrees/42_story_foo");
        assert!(is_under_root(&child, &root));
    }

    #[test]
    fn is_under_root_returns_false_for_sibling() {
        let root = PathBuf::from("/project/.huskies/worktrees");
        let sibling = PathBuf::from("/project/.huskies/other");
        assert!(!is_under_root(&sibling, &root));
    }

    #[test]
    fn is_under_root_returns_false_for_parent() {
        let root = PathBuf::from("/project/.huskies/worktrees");
        let parent = PathBuf::from("/project/.huskies");
        assert!(!is_under_root(&parent, &root));
    }

    #[test]
    fn is_under_root_returns_true_for_exact_match() {
        let root = PathBuf::from("/project/.huskies/worktrees");
        assert!(is_under_root(&root, &root));
    }

    #[test]
    fn is_under_root_returns_false_for_path_with_shared_prefix_but_not_child() {
        // /foo/bar-extra is NOT under /foo/bar
        let root = PathBuf::from("/foo/bar");
        let other = PathBuf::from("/foo/bar-extra");
        assert!(!is_under_root(&other, &root));
    }
}
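The shared-prefix test works because `Path::starts_with` compares whole path components, not raw bytes — `/foo/bar-extra` shares a byte prefix with `/foo/bar` but its first differing component is `bar-extra`, not `bar`. A standalone sketch of the same check:

```rust
use std::path::Path;

/// Same component-wise prefix check as `is_under_root` above.
fn is_under_root(canonical_path: &Path, root: &Path) -> bool {
    canonical_path.starts_with(root)
}

fn main() {
    let root = Path::new("/project/.huskies/worktrees");
    // A real child is under the root.
    assert!(is_under_root(Path::new("/project/.huskies/worktrees/42_story_foo"), root));
    // Exact match counts as "under".
    assert!(is_under_root(root, root));
    // Byte-prefix lookalikes are rejected: "worktrees-extra" is a different component.
    assert!(!is_under_root(Path::new("/project/.huskies/worktrees-extra"), root));
    println!("component-wise prefix semantics hold");
}
```

A naive `str::starts_with` on the rendered path string would accept the lookalike case, which is exactly the sandbox-escape the component-wise check prevents.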
@@ -0,0 +1,107 @@
//! Pure git porcelain output parsers for `service::git_ops`.
//!
//! These functions parse the text output of `git status --porcelain=v1`
//! and similar commands. No I/O: they take `&str` and return structured data.

/// Parse `git status --porcelain=v1 -u` output into three file lists.
///
/// Returns `(staged, unstaged, untracked)` where each entry is the file path
/// string from the porcelain line.
pub fn parse_git_status_porcelain(stdout: &str) -> (Vec<String>, Vec<String>, Vec<String>) {
    let mut staged: Vec<String> = Vec::new();
    let mut unstaged: Vec<String> = Vec::new();
    let mut untracked: Vec<String> = Vec::new();

    for line in stdout.lines() {
        if line.len() < 3 {
            continue;
        }
        let x = line.chars().next().unwrap_or(' ');
        let y = line.chars().nth(1).unwrap_or(' ');
        let path = line[3..].to_string();

        match (x, y) {
            ('?', '?') => untracked.push(path),
            (' ', _) => unstaged.push(path),
            (_, ' ') => staged.push(path),
            _ => {
                // Both staged and unstaged modifications.
                staged.push(path.clone());
                unstaged.push(path);
            }
        }
    }

    (staged, unstaged, untracked)
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parse_empty_output_returns_empty_vecs() {
        let (s, u, t) = parse_git_status_porcelain("");
        assert!(s.is_empty());
        assert!(u.is_empty());
        assert!(t.is_empty());
    }

    #[test]
    fn parse_untracked_file() {
        let output = "?? new_file.txt\n";
        let (staged, unstaged, untracked) = parse_git_status_porcelain(output);
        assert!(staged.is_empty());
        assert!(unstaged.is_empty());
        assert_eq!(untracked, vec!["new_file.txt"]);
    }

    #[test]
    fn parse_staged_file() {
        let output = "A  staged.txt\n";
        let (staged, unstaged, untracked) = parse_git_status_porcelain(output);
        assert_eq!(staged, vec!["staged.txt"]);
        assert!(unstaged.is_empty());
        assert!(untracked.is_empty());
    }

    #[test]
    fn parse_unstaged_modified_file() {
        // 'M' in second column = unstaged modification
        let output = " M modified.txt\n";
        let (staged, unstaged, untracked) = parse_git_status_porcelain(output);
        assert!(staged.is_empty());
        assert_eq!(unstaged, vec!["modified.txt"]);
        assert!(untracked.is_empty());
    }

    #[test]
    fn parse_both_staged_and_unstaged() {
        // 'MM' = staged + unstaged in same file
        let output = "MM both.txt\n";
        let (staged, unstaged, untracked) = parse_git_status_porcelain(output);
        assert_eq!(staged, vec!["both.txt"]);
        assert_eq!(unstaged, vec!["both.txt"]);
        assert!(untracked.is_empty());
    }

    #[test]
    fn parse_mixed_output() {
        let output = "A  staged.rs\n M unstaged.rs\n?? untracked.rs\n";
        let (staged, unstaged, untracked) = parse_git_status_porcelain(output);
        assert_eq!(staged, vec!["staged.rs"]);
        assert_eq!(unstaged, vec!["unstaged.rs"]);
        assert_eq!(untracked, vec!["untracked.rs"]);
    }

    #[test]
    fn parse_skips_short_lines() {
        // Lines shorter than 3 chars should be skipped.
        let output = "A \nMM both.txt\n";
        let (staged, _unstaged, _untracked) = parse_git_status_porcelain(output);
        // Only "both.txt" should appear — the 2-char "A " line is skipped.
        assert_eq!(staged, vec!["both.txt"]);
    }
}
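End to end, the parser above classifies each porcelain line by its two status columns (X = index, Y = worktree) and takes the rest as the path. A minimal standalone sketch that inlines the same logic and feeds it a hand-written porcelain string:

```rust
/// Classify `git status --porcelain=v1` lines into (staged, unstaged, untracked),
/// mirroring the parser above: column X = index state, column Y = worktree state.
fn parse_git_status_porcelain(stdout: &str) -> (Vec<String>, Vec<String>, Vec<String>) {
    let (mut staged, mut unstaged, mut untracked) = (Vec::new(), Vec::new(), Vec::new());
    for line in stdout.lines() {
        if line.len() < 3 {
            continue; // too short to carry "XY path"
        }
        let x = line.chars().next().unwrap_or(' ');
        let y = line.chars().nth(1).unwrap_or(' ');
        let path = line[3..].to_string();
        match (x, y) {
            ('?', '?') => untracked.push(path),
            (' ', _) => unstaged.push(path),
            (_, ' ') => staged.push(path),
            _ => {
                // e.g. "MM": modified in both the index and the worktree.
                staged.push(path.clone());
                unstaged.push(path);
            }
        }
    }
    (staged, unstaged, untracked)
}

fn main() {
    // "A " = staged add, " M" = unstaged edit, "??" = untracked.
    let (s, u, t) = parse_git_status_porcelain("A  new.rs\n M edited.rs\n?? scratch.txt\n");
    assert_eq!(s, vec!["new.rs"]);
    assert_eq!(u, vec!["edited.rs"]);
    assert_eq!(t, vec!["scratch.txt"]);
    println!("porcelain classification ok");
}
```

Note the `[3..]` slice assumes the porcelain v1 layout exactly: two status characters, one separator space, then the path.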
@@ -0,0 +1,38 @@
//! Pure health-check logic — no side effects.

use poem_openapi::Object;
use serde::Serialize;

/// The JSON payload returned by the health check endpoint.
#[derive(Serialize, Object)]
pub struct HealthStatus {
    /// Human-readable status string, always `"ok"` when the server is healthy.
    pub status: String,
}

/// Return a healthy status response.
pub fn ok() -> HealthStatus {
    HealthStatus {
        status: "ok".to_string(),
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn ok_returns_status_ok() {
        let s = ok();
        assert_eq!(s.status, "ok");
    }

    #[test]
    fn health_status_serializes() {
        let s = HealthStatus {
            status: "ok".to_string(),
        };
        let json = serde_json::to_value(&s).unwrap();
        assert_eq!(json["status"], "ok");
    }
}
@@ -0,0 +1,4 @@
//! Health I/O wrappers.
//!
//! Health has no side effects; this file exists to satisfy the
//! service-module convention (`docs/architecture/service-modules.md`).
@@ -0,0 +1,39 @@
//! Health service — public API for the health domain.
//!
//! Exposes a single `check()` function that returns a [`HealthStatus`].
//! HTTP handlers call this instead of constructing the response inline.
//!
//! Conventions: `docs/architecture/service-modules.md`

pub mod check;
pub(super) mod io;

pub use check::HealthStatus;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::health` functions.
///
/// Health checks are currently infallible; this enum satisfies the module
/// convention and accommodates future error cases (e.g. dependency checks).
#[allow(dead_code)]
#[derive(Debug)]
pub enum Error {
    /// An internal error occurred during the health check.
    Internal(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::Internal(msg) => write!(f, "Health error: {msg}"),
        }
    }
}

// ── Public API ────────────────────────────────────────────────────────────────

/// Perform a health check and return the status.
pub fn check() -> HealthStatus {
    check::ok()
}
@@ -0,0 +1,5 @@
//! Merge I/O — the ONLY place in `service::merge/` that may perform side effects.
//!
//! Currently, the bulk of the merge I/O is handled by `crate::agents::merge`
//! and `crate::io::story_metadata`. This file is the designated home for any
//! future I/O helpers that are extracted from merge-related MCP handlers.
@@ -0,0 +1,87 @@
//! Merge service — domain logic for merging agent work to master.
//!
//! Extracted from `http/mcp/merge_tools.rs` following the conventions in
//! `docs/architecture/service-modules.md`:
//! - `mod.rs` (this file) — public API, typed [`Error`], orchestration
//! - `io.rs` — the ONLY place that performs side effects
//! - `status.rs` — pure merge-status message formatting

pub mod io;
pub mod status;

#[allow(unused_imports)]
pub use status::format_merge_status_message;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::merge` functions.
///
/// HTTP handlers map these to status codes:
/// - [`Error::NotFound`] → 404 Not Found
/// - [`Error::Validation`] → 400 Bad Request
/// - [`Error::Conflict`] → 409 Conflict
/// - [`Error::Io`] → 500 Internal Server Error
/// - [`Error::UpstreamFailure`] → 500 Internal Server Error
#[allow(dead_code)]
#[derive(Debug)]
pub enum Error {
    /// The requested story or merge job was not found.
    NotFound(String),
    /// A required argument is missing or has an invalid value.
    Validation(String),
    /// The merge cannot proceed due to a conflicting state.
    Conflict(String),
    /// A filesystem or process I/O operation failed.
    Io(String),
    /// An upstream dependency (agents, git) returned an unexpected error.
    UpstreamFailure(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::NotFound(msg) => write!(f, "Not found: {msg}"),
            Self::Validation(msg) => write!(f, "Validation error: {msg}"),
            Self::Conflict(msg) => write!(f, "Conflict: {msg}"),
            Self::Io(msg) => write!(f, "I/O error: {msg}"),
            Self::UpstreamFailure(msg) => write!(f, "Upstream failure: {msg}"),
        }
    }
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn error_display_not_found() {
        let e = Error::NotFound("merge job missing".to_string());
        assert!(e.to_string().contains("Not found"));
    }

    #[test]
    fn error_display_validation() {
        let e = Error::Validation("story_id required".to_string());
        assert!(e.to_string().contains("Validation error"));
    }

    #[test]
    fn error_display_conflict() {
        let e = Error::Conflict("story already merged".to_string());
        assert!(e.to_string().contains("Conflict"));
    }

    #[test]
    fn error_display_io() {
        let e = Error::Io("write failed".to_string());
        assert!(e.to_string().contains("I/O error"));
    }

    #[test]
    fn error_display_upstream_failure() {
        let e = Error::UpstreamFailure("git crashed".to_string());
        assert!(e.to_string().contains("Upstream failure"));
    }
}
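The status-code mapping listed in the doc comment above can be sketched as a small standalone helper. This is an illustrative sketch only: `status_code` is a hypothetical function, and the enum here is a trimmed copy so the snippet compiles on its own — the crate's actual HTTP layer is not shown in this diff.

```rust
/// Trimmed copy of the service error enum, so this sketch compiles on its own.
#[derive(Debug)]
pub enum Error {
    NotFound(String),
    Validation(String),
    Conflict(String),
    Io(String),
    UpstreamFailure(String),
}

/// Hypothetical helper: the HTTP status an handler would map each variant to,
/// following the table in the `service::merge` doc comment.
pub fn status_code(err: &Error) -> u16 {
    match err {
        Error::NotFound(_) => 404,
        Error::Validation(_) => 400,
        Error::Conflict(_) => 409,
        // Both I/O and upstream failures surface as generic server errors.
        Error::Io(_) | Error::UpstreamFailure(_) => 500,
    }
}
```

Keeping this mapping in one exhaustive `match` means adding a new `Error` variant forces the compiler to flag every handler that has not decided on a status code for it.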
@@ -0,0 +1,89 @@
//! Pure merge-status message formatting for `service::merge`.
//!
//! These functions transform a completed merge report into human-readable
//! status messages. No I/O: they are pure functions over plain data.

use crate::agents::merge::MergeReport;

#[allow(dead_code)]
/// Derive a human-readable status message from a completed [`MergeReport`].
///
/// The message explains what happened and (on failure) what the caller
/// should do next.
pub fn format_merge_status_message(report: &MergeReport) -> &'static str {
    if report.success && report.gates_passed && report.conflicts_resolved {
        "Merge complete: conflicts were auto-resolved and all quality gates passed. Story moved to done and worktree cleaned up."
    } else if report.success && report.gates_passed {
        "Merge complete: all quality gates passed. Story moved to done and worktree cleaned up."
    } else if report.had_conflicts && !report.conflicts_resolved {
        "Merge failed: conflicts detected that could not be auto-resolved. Merge was aborted — master is untouched. Call report_merge_failure with the conflict details so the human can resolve them. Do NOT manually move the story file or call accept_story."
    } else if report.success && !report.gates_passed {
        "Merge committed but quality gates failed. Review gate_output and fix issues before re-running."
    } else {
        "Merge failed. Review gate_output for details. Call report_merge_failure to record the failure. Do NOT manually move the story file or call accept_story."
    }
}

// ── Tests ─────────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    fn report(
        success: bool,
        had_conflicts: bool,
        conflicts_resolved: bool,
        gates_passed: bool,
    ) -> MergeReport {
        MergeReport {
            story_id: String::new(),
            success,
            had_conflicts,
            conflicts_resolved,
            conflict_details: None,
            gates_passed,
            gate_output: String::new(),
            worktree_cleaned_up: false,
            story_archived: false,
        }
    }

    #[test]
    fn clean_merge_message() {
        let r = report(true, false, false, true);
        let msg = format_merge_status_message(&r);
        assert!(msg.contains("quality gates passed"));
        assert!(msg.contains("done"));
    }

    #[test]
    fn conflicts_resolved_message() {
        let r = report(true, true, true, true);
        let msg = format_merge_status_message(&r);
        assert!(msg.contains("auto-resolved"));
    }

    #[test]
    fn unresolved_conflicts_message() {
        let r = report(false, true, false, false);
        let msg = format_merge_status_message(&r);
        assert!(msg.contains("could not be auto-resolved"));
        assert!(msg.contains("report_merge_failure"));
    }

    #[test]
    fn gates_failed_message() {
        let r = report(true, false, false, false);
        let msg = format_merge_status_message(&r);
        assert!(msg.contains("quality gates failed"));
    }

    #[test]
    fn general_failure_message() {
        let r = report(false, false, false, false);
        let msg = format_merge_status_message(&r);
        assert!(msg.contains("Merge failed"));
        assert!(msg.contains("report_merge_failure"));
    }
}
@@ -0,0 +1,29 @@
//! Service layer — domain logic extracted from HTTP handlers.
//!
//! Each sub-module follows the conventions documented in
//! `docs/architecture/service-modules.md`:
//! - `mod.rs` orchestrates and owns the typed `Error` type
//! - `io.rs` is the only file that performs side effects
//! - Topic-named pure files contain branching logic with no I/O
pub mod agents;
pub mod anthropic;
pub mod bot_command;
pub mod common;
pub mod diagnostics;
pub mod events;
pub mod file_io;
pub mod gateway;
pub mod git_ops;
pub mod health;
pub mod merge;
pub mod notifications;
pub mod oauth;
pub mod pipeline;
pub mod project;
pub mod qa;
pub mod settings;
pub mod shell;
pub mod story;
pub mod timer;
pub mod wizard;
pub mod ws;
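The three conventions above imply a per-domain directory layout along these lines (illustrative; the `src/service/` path is an assumption, and only modules that appear in this diff are shown):

```text
src/service/
├── mod.rs            # this file: declares the domain modules
├── merge/
│   ├── mod.rs        # public API + typed Error + orchestration
│   ├── io.rs         # the ONLY file with side effects
│   └── status.rs     # pure, topic-named branching logic
└── health/
    ├── mod.rs
    ├── check.rs
    └── io.rs
```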
@@ -0,0 +1,119 @@
//! Event-to-notification mapping.
//!
//! Pure functions that classify [`WatcherEvent`] variants into notification
//! actions, deciding which events produce user-visible messages and which
//! are suppressed or logged server-side only.

use crate::io::watcher::WatcherEvent;

/// The notification action to take in response to a [`WatcherEvent`].
#[derive(Debug, PartialEq)]
pub enum EventAction {
    /// Post a stage-transition notification; the event carries a known source stage.
    StageTransition,
    /// Post a merge-failure error notification.
    MergeFailure,
    /// Post a rate-limit warning (subject to config/debounce suppression).
    RateLimitWarning,
    /// Post a story-blocked notification.
    StoryBlocked,
    /// Log server-side only; do not post to chat (e.g. hard rate-limit blocks).
    LogOnly,
    /// Reload the project configuration.
    ReloadConfig,
    /// Skip silently (synthetic events, unknown variants).
    Skip,
}

/// Classify a [`WatcherEvent`] into the action the notification listener should take.
pub fn classify(event: &WatcherEvent) -> EventAction {
    match event {
        WatcherEvent::WorkItem { from_stage, .. } => {
            if from_stage.is_some() {
                EventAction::StageTransition
            } else {
                // Synthetic events (creation, reassign) have no from_stage.
                // Posting a notification for these would produce incorrect messages.
                EventAction::Skip
            }
        }
        WatcherEvent::MergeFailure { .. } => EventAction::MergeFailure,
        WatcherEvent::RateLimitWarning { .. } => EventAction::RateLimitWarning,
        WatcherEvent::StoryBlocked { .. } => EventAction::StoryBlocked,
        WatcherEvent::RateLimitHardBlock { .. } => EventAction::LogOnly,
        WatcherEvent::ConfigChanged => EventAction::ReloadConfig,
        _ => EventAction::Skip,
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    fn work_item(from_stage: Option<&str>) -> WatcherEvent {
        WatcherEvent::WorkItem {
            stage: "3_qa".to_string(),
            item_id: "1_story_foo".to_string(),
            action: "qa".to_string(),
            commit_msg: String::new(),
            from_stage: from_stage.map(str::to_string),
        }
    }

    #[test]
    fn work_item_with_from_stage_is_stage_transition() {
        let event = work_item(Some("2_current"));
        assert_eq!(classify(&event), EventAction::StageTransition);
    }

    #[test]
    fn work_item_without_from_stage_is_skip() {
        let event = work_item(None);
        assert_eq!(classify(&event), EventAction::Skip);
    }

    #[test]
    fn merge_failure_is_classified_correctly() {
        let event = WatcherEvent::MergeFailure {
            story_id: "1_story_foo".to_string(),
            reason: "conflict".to_string(),
        };
        assert_eq!(classify(&event), EventAction::MergeFailure);
    }

    #[test]
    fn rate_limit_warning_is_classified_correctly() {
        let event = WatcherEvent::RateLimitWarning {
            story_id: "1_story_foo".to_string(),
            agent_name: "coder-1".to_string(),
        };
        assert_eq!(classify(&event), EventAction::RateLimitWarning);
    }

    #[test]
    fn story_blocked_is_classified_correctly() {
        let event = WatcherEvent::StoryBlocked {
            story_id: "1_story_foo".to_string(),
            reason: "empty diff".to_string(),
        };
        assert_eq!(classify(&event), EventAction::StoryBlocked);
    }

    #[test]
    fn rate_limit_hard_block_is_log_only() {
        let event = WatcherEvent::RateLimitHardBlock {
            story_id: "1_story_foo".to_string(),
            agent_name: "coder-1".to_string(),
            reset_at: chrono::Utc::now(),
        };
        assert_eq!(classify(&event), EventAction::LogOnly);
    }

    #[test]
    fn config_changed_triggers_reload() {
        assert_eq!(
            classify(&WatcherEvent::ConfigChanged),
            EventAction::ReloadConfig
        );
    }
}
@@ -0,0 +1,73 @@
//! Pure filtering and debounce logic for notification suppression.
//!
//! Contains constants and predicates that decide whether a notification
//! should be sent, without performing any I/O.

use std::time::{Duration, Instant};

/// Minimum time between rate-limit notifications for the same agent key.
pub const RATE_LIMIT_DEBOUNCE: Duration = Duration::from_secs(60);

/// Window during which rapid stage transitions for the same item are coalesced
/// into a single notification (only the final stage is announced).
pub const STAGE_TRANSITION_DEBOUNCE: Duration = Duration::from_millis(200);

/// Returns `true` if a rate-limit notification should be sent.
///
/// `last_notified` is the [`Instant`] of the last sent notification for this
/// agent, or `None` if no notification has been sent yet.
pub fn should_send_rate_limit(last_notified: Option<Instant>, now: Instant) -> bool {
    match last_notified {
        None => true,
        Some(last) => now.duration_since(last) >= RATE_LIMIT_DEBOUNCE,
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // ── should_send_rate_limit ────────────────────────────────────────────────

    #[test]
    fn should_send_when_never_notified() {
        let now = Instant::now();
        assert!(should_send_rate_limit(None, now));
    }

    #[test]
    fn should_not_send_within_debounce_window() {
        let now = Instant::now();
        // Pretend last notification was 10 seconds ago — inside the 60s window.
        let last = now - Duration::from_secs(10);
        assert!(!should_send_rate_limit(Some(last), now));
    }

    #[test]
    fn should_send_after_debounce_window_expires() {
        let now = Instant::now();
        // Pretend last notification was 61 seconds ago — outside the 60s window.
        let last = now - Duration::from_secs(61);
        assert!(should_send_rate_limit(Some(last), now));
    }

    #[test]
    fn should_send_at_exactly_debounce_boundary() {
        let now = Instant::now();
        // Exactly at the boundary: duration_since == RATE_LIMIT_DEBOUNCE (>=, so allowed).
        let last = now - RATE_LIMIT_DEBOUNCE;
        assert!(should_send_rate_limit(Some(last), now));
    }

    // ── constants ─────────────────────────────────────────────────────────────

    #[test]
    fn rate_limit_debounce_is_one_minute() {
        assert_eq!(RATE_LIMIT_DEBOUNCE, Duration::from_secs(60));
    }

    #[test]
    fn stage_transition_debounce_is_200ms() {
        assert_eq!(STAGE_TRANSITION_DEBOUNCE, Duration::from_millis(200));
    }
}
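A listener that consumes this predicate would keep one last-notified `Instant` per agent key. The sketch below shows that pattern; `RateLimitDebouncer` and its key scheme are illustrative, not the crate's actual listener, and the predicate is reproduced inline so the snippet compiles on its own.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

const RATE_LIMIT_DEBOUNCE: Duration = Duration::from_secs(60);

// Same predicate as `should_send_rate_limit` above, copied so this sketch
// is self-contained.
fn should_send_rate_limit(last_notified: Option<Instant>, now: Instant) -> bool {
    match last_notified {
        None => true,
        Some(last) => now.duration_since(last) >= RATE_LIMIT_DEBOUNCE,
    }
}

/// Per-agent debounce state, as a listener loop might keep it.
struct RateLimitDebouncer {
    last_notified: HashMap<String, Instant>,
}

impl RateLimitDebouncer {
    fn new() -> Self {
        Self { last_notified: HashMap::new() }
    }

    /// Returns `true` and records the send time if a notification is allowed.
    fn try_notify(&mut self, agent: &str, now: Instant) -> bool {
        let allowed = should_send_rate_limit(self.last_notified.get(agent).copied(), now);
        if allowed {
            self.last_notified.insert(agent.to_string(), now);
        }
        allowed
    }
}
```

Passing `now` in explicitly (rather than calling `Instant::now()` inside) keeps the struct as testable as the pure predicate it wraps.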
@@ -0,0 +1,314 @@
//! Pure message-formatting functions for pipeline-event notifications.
//!
//! All functions are pure (no I/O, no side effects) and accept only owned
//! or borrowed string data. They return `(plain_text, html)` pairs suitable
//! for `ChatTransport::send_message`.

use crate::service::common::item_id::extract_item_number;

/// Human-readable display name for a pipeline stage directory.
pub fn stage_display_name(stage: &str) -> &'static str {
    match stage {
        "1_backlog" => "Backlog",
        "2_current" => "Current",
        "3_qa" => "QA",
        "4_merge" => "Merge",
        "5_done" => "Done",
        "6_archived" => "Archived",
        _ => "Unknown",
    }
}

/// Format a stage-transition notification message.
///
/// Returns `(plain_text, html)` suitable for `ChatTransport::send_message`.
pub fn format_stage_notification(
    item_id: &str,
    story_name: Option<&str>,
    from_stage: &str,
    to_stage: &str,
) -> (String, String) {
    let number = extract_item_number(item_id).unwrap_or(item_id);
    let name = story_name.unwrap_or(item_id);

    let prefix = if to_stage == "Done" { "\u{1f389} " } else { "" };
    let plain = format!("{prefix}#{number} {name} \u{2014} {from_stage} \u{2192} {to_stage}");
    let html = format!(
        "{prefix}<strong>#{number}</strong> <em>{name}</em> \u{2014} {from_stage} \u{2192} {to_stage}"
    );
    (plain, html)
}

/// Format an error notification message for a story merge failure.
///
/// Returns `(plain_text, html)` suitable for `ChatTransport::send_message`.
pub fn format_error_notification(
    item_id: &str,
    story_name: Option<&str>,
    reason: &str,
) -> (String, String) {
    let number = extract_item_number(item_id).unwrap_or(item_id);
    let name = story_name.unwrap_or(item_id);

    let plain = format!("\u{274c} #{number} {name} \u{2014} {reason}");
    let html = format!("\u{274c} <strong>#{number}</strong> <em>{name}</em> \u{2014} {reason}");
    (plain, html)
}

/// Format a blocked-story notification message.
///
/// Returns `(plain_text, html)` suitable for `ChatTransport::send_message`.
pub fn format_blocked_notification(
    item_id: &str,
    story_name: Option<&str>,
    reason: &str,
) -> (String, String) {
    let number = extract_item_number(item_id).unwrap_or(item_id);
    let name = story_name.unwrap_or(item_id);

    let plain = format!("\u{1f6ab} #{number} {name} \u{2014} BLOCKED: {reason}");
    let html =
        format!("\u{1f6ab} <strong>#{number}</strong> <em>{name}</em> \u{2014} BLOCKED: {reason}");
    (plain, html)
}

/// Format a rate-limit warning notification message.
///
/// Returns `(plain_text, html)` suitable for `ChatTransport::send_message`.
pub fn format_rate_limit_notification(
    item_id: &str,
    story_name: Option<&str>,
    agent_name: &str,
) -> (String, String) {
    let number = extract_item_number(item_id).unwrap_or(item_id);
    let name = story_name.unwrap_or(item_id);

    let plain =
        format!("\u{26a0}\u{fe0f} #{number} {name} \u{2014} {agent_name} hit an API rate limit");
    let html = format!(
        "\u{26a0}\u{fe0f} <strong>#{number}</strong> <em>{name}</em> \u{2014} \
         {agent_name} hit an API rate limit"
    );
    (plain, html)
}

#[cfg(test)]
mod tests {
    use super::*;

    // ── stage_display_name ────────────────────────────────────────────────────

    #[test]
    fn stage_display_name_maps_all_known_stages() {
        assert_eq!(stage_display_name("1_backlog"), "Backlog");
        assert_eq!(stage_display_name("2_current"), "Current");
        assert_eq!(stage_display_name("3_qa"), "QA");
        assert_eq!(stage_display_name("4_merge"), "Merge");
        assert_eq!(stage_display_name("5_done"), "Done");
        assert_eq!(stage_display_name("6_archived"), "Archived");
        assert_eq!(stage_display_name("unknown"), "Unknown");
    }

    #[test]
    fn stage_display_name_unknown_slug_returns_unknown() {
        assert_eq!(stage_display_name("99_future"), "Unknown");
        assert_eq!(stage_display_name(""), "Unknown");
    }

    // ── format_stage_notification ─────────────────────────────────────────────

    #[test]
    fn format_notification_done_stage_includes_party_emoji() {
        let (plain, html) =
            format_stage_notification("353_story_done", Some("Done Story"), "Merge", "Done");
        assert_eq!(
            plain,
            "\u{1f389} #353 Done Story \u{2014} Merge \u{2192} Done"
        );
        assert_eq!(
            html,
            "\u{1f389} <strong>#353</strong> <em>Done Story</em> \u{2014} Merge \u{2192} Done"
        );
    }

    #[test]
    fn format_notification_non_done_stage_has_no_emoji() {
        let (plain, _html) =
            format_stage_notification("42_story_thing", Some("Some Story"), "Backlog", "Current");
        assert!(!plain.contains("\u{1f389}"));
    }

    #[test]
    fn format_notification_with_story_name() {
        let (plain, html) = format_stage_notification(
            "261_story_bot_notifications",
            Some("Bot notifications"),
            "Upcoming",
            "Current",
        );
        assert_eq!(
            plain,
            "#261 Bot notifications \u{2014} Upcoming \u{2192} Current"
        );
        assert_eq!(
            html,
            "<strong>#261</strong> <em>Bot notifications</em> \u{2014} Upcoming \u{2192} Current"
        );
    }

    #[test]
    fn format_notification_without_story_name_falls_back_to_item_id() {
        let (plain, _html) = format_stage_notification("42_bug_fix_thing", None, "Current", "QA");
        assert_eq!(plain, "#42 42_bug_fix_thing \u{2014} Current \u{2192} QA");
    }

    #[test]
    fn format_notification_non_numeric_id_uses_full_id() {
        let (plain, _html) =
            format_stage_notification("abc_story_thing", Some("Some Story"), "QA", "Merge");
        assert_eq!(
            plain,
            "#abc_story_thing Some Story \u{2014} QA \u{2192} Merge"
        );
    }

    #[test]
    fn format_stage_notification_long_name_is_preserved() {
        let long_name = "A".repeat(300);
        let (plain, _html) =
            format_stage_notification("1_story_long", Some(&long_name), "Current", "QA");
        assert!(plain.contains(&long_name));
    }

    #[test]
    fn format_stage_notification_empty_story_name_is_used_as_is() {
        // Some("") is a valid Some but empty; it is used as-is rather than
        // treated as missing.
        let (plain, _html) = format_stage_notification("42_story_empty", Some(""), "Current", "QA");
        // The name slot is empty but the structure is still correct.
        assert!(plain.contains("#42"));
        assert!(plain.contains("Current \u{2192} QA"));
    }

    #[test]
    fn format_stage_notification_unicode_name() {
        let (plain, html) =
            format_stage_notification("7_story_i18n", Some("Ünïcödé Ñämé 🎉"), "QA", "Merge");
        assert!(plain.contains("Ünïcödé Ñämé 🎉"));
        assert!(html.contains("Ünïcödé Ñämé 🎉"));
    }

    // ── format_error_notification ─────────────────────────────────────────────

    #[test]
    fn format_error_notification_with_story_name() {
        let (plain, html) = format_error_notification(
            "262_story_bot_errors",
            Some("Bot error notifications"),
            "merge conflict in src/main.rs",
        );
        assert_eq!(
            plain,
            "\u{274c} #262 Bot error notifications \u{2014} merge conflict in src/main.rs"
        );
        assert_eq!(
            html,
            "\u{274c} <strong>#262</strong> <em>Bot error notifications</em> \u{2014} merge conflict in src/main.rs"
        );
    }

    #[test]
    fn format_error_notification_without_story_name_falls_back_to_item_id() {
        let (plain, _html) = format_error_notification("42_bug_fix_thing", None, "tests failed");
        assert_eq!(plain, "\u{274c} #42 42_bug_fix_thing \u{2014} tests failed");
    }

    #[test]
    fn format_error_notification_non_numeric_id_uses_full_id() {
        let (plain, _html) =
            format_error_notification("abc_story_thing", Some("Some Story"), "clippy errors");
        assert_eq!(
            plain,
            "\u{274c} #abc_story_thing Some Story \u{2014} clippy errors"
        );
    }

    #[test]
    fn format_error_notification_long_reason_preserved() {
        let long_reason = "x".repeat(500);
        let (plain, _html) = format_error_notification("1_story_foo", None, &long_reason);
        assert!(plain.contains(&long_reason));
    }

    #[test]
    fn format_error_notification_unicode_reason() {
        let (plain, _html) =
            format_error_notification("5_story_foo", Some("Foo"), "错误:合并冲突");
        assert!(plain.contains("错误:合并冲突"));
    }

    // ── format_blocked_notification ───────────────────────────────────────────

    #[test]
    fn format_blocked_notification_with_story_name() {
        let (plain, html) = format_blocked_notification(
            "425_story_blocking_reason",
            Some("Blocking Reason Story"),
            "Retry limit exceeded (3/3) at coder stage",
        );
        assert_eq!(
            plain,
            "\u{1f6ab} #425 Blocking Reason Story \u{2014} BLOCKED: Retry limit exceeded (3/3) at coder stage"
        );
        assert_eq!(
            html,
            "\u{1f6ab} <strong>#425</strong> <em>Blocking Reason Story</em> \u{2014} BLOCKED: Retry limit exceeded (3/3) at coder stage"
        );
    }

    #[test]
    fn format_blocked_notification_falls_back_to_item_id() {
        let (plain, _html) = format_blocked_notification("42_story_thing", None, "empty diff");
        assert_eq!(
            plain,
            "\u{1f6ab} #42 42_story_thing \u{2014} BLOCKED: empty diff"
        );
    }

    #[test]
    fn format_blocked_notification_unicode_reason() {
        let (plain, _html) = format_blocked_notification("3_story_x", Some("X"), "理由:空の差分");
        assert!(plain.contains("BLOCKED: 理由:空の差分"));
    }

    // ── format_rate_limit_notification ────────────────────────────────────────

    #[test]
    fn format_rate_limit_notification_includes_agent_and_story() {
        let (plain, html) =
            format_rate_limit_notification("365_story_my_feature", Some("My Feature"), "coder-2");
        assert_eq!(
            plain,
            "\u{26a0}\u{fe0f} #365 My Feature \u{2014} coder-2 hit an API rate limit"
        );
        assert_eq!(
            html,
            "\u{26a0}\u{fe0f} <strong>#365</strong> <em>My Feature</em> \u{2014} coder-2 hit an API rate limit"
        );
    }

    #[test]
    fn format_rate_limit_notification_falls_back_to_item_id() {
        let (plain, _html) = format_rate_limit_notification("42_story_thing", None, "coder-1");
        assert_eq!(
            plain,
            "\u{26a0}\u{fe0f} #42 42_story_thing \u{2014} coder-1 hit an API rate limit"
        );
    }

    #[test]
    fn format_rate_limit_notification_unicode_agent_name() {
        let (plain, _html) = format_rate_limit_notification("9_story_foo", Some("Foo"), "агент-1");
        assert!(plain.contains("агент-1"));
        assert!(plain.contains("hit an API rate limit"));
    }
}
||||||
+155 −414
@@ -1,7 +1,8 @@
|
|||||||
//! Stage transition notifications for Matrix rooms.
|
//! I/O side of the notifications service.
|
||||||
//!
|
//!
|
||||||
//! Subscribes to [`WatcherEvent`] broadcasts and posts a notification to all
|
//! This is the **only** file inside `service/notifications/` that may perform
|
||||||
//! configured Matrix rooms whenever a work item moves between pipeline stages.
|
//! side effects: reading from the CRDT content store, loading configuration,
|
||||||
|
//! and spawning the background listener task.
|
||||||
|
|
||||||
use crate::chat::ChatTransport;
|
use crate::chat::ChatTransport;
|
||||||
use crate::config::ProjectConfig;
|
use crate::config::ProjectConfig;
|
||||||
@@ -11,29 +12,16 @@ use crate::slog;
 use std::collections::HashMap;
 use std::path::{Path, PathBuf};
 use std::sync::Arc;
-use std::time::{Duration, Instant};
+use std::time::Instant;
 use tokio::sync::broadcast;
 
-/// Human-readable display name for a pipeline stage directory.
-pub fn stage_display_name(stage: &str) -> &'static str {
-    match stage {
-        "1_backlog" => "Backlog",
-        "2_current" => "Current",
-        "3_qa" => "QA",
-        "4_merge" => "Merge",
-        "5_done" => "Done",
-        "6_archived" => "Archived",
-        _ => "Unknown",
-    }
-}
-
-/// Extract the numeric story number from an item ID like `"261_story_slug"`.
-pub fn extract_story_number(item_id: &str) -> Option<&str> {
-    item_id
-        .split('_')
-        .next()
-        .filter(|s| !s.is_empty() && s.chars().all(|c| c.is_ascii_digit()))
-}
-
+use super::events::classify;
+use super::filter::{STAGE_TRANSITION_DEBOUNCE, should_send_rate_limit};
+use super::format::{
+    format_blocked_notification, format_error_notification, format_rate_limit_notification,
+    format_stage_notification, stage_display_name,
+};
+use super::route::rooms_for_notification;
 
 /// Read the story name from the CRDT content store's YAML front matter.
 ///
@@ -44,93 +32,13 @@ pub fn read_story_name(_project_root: &Path, _stage: &str, item_id: &str) -> Opt
     meta.name
 }
 
-/// Format a stage transition notification message.
-///
-/// Returns `(plain_text, html)` suitable for `RoomMessageEventContent::text_html`.
-pub fn format_stage_notification(
-    item_id: &str,
-    story_name: Option<&str>,
-    from_stage: &str,
-    to_stage: &str,
-) -> (String, String) {
-    let number = extract_story_number(item_id).unwrap_or(item_id);
-    let name = story_name.unwrap_or(item_id);
-
-    let prefix = if to_stage == "Done" { "\u{1f389} " } else { "" };
-    let plain = format!("{prefix}#{number} {name} \u{2014} {from_stage} \u{2192} {to_stage}");
-    let html = format!(
-        "{prefix}<strong>#{number}</strong> <em>{name}</em> \u{2014} {from_stage} \u{2192} {to_stage}"
-    );
-    (plain, html)
-}
-
-/// Format an error notification message for a story failure.
-///
-/// Returns `(plain_text, html)` suitable for `RoomMessageEventContent::text_html`.
-pub fn format_error_notification(
-    item_id: &str,
-    story_name: Option<&str>,
-    reason: &str,
-) -> (String, String) {
-    let number = extract_story_number(item_id).unwrap_or(item_id);
-    let name = story_name.unwrap_or(item_id);
-
-    let plain = format!("\u{274c} #{number} {name} \u{2014} {reason}");
-    let html = format!("\u{274c} <strong>#{number}</strong> <em>{name}</em> \u{2014} {reason}");
-    (plain, html)
-}
-
-/// Look up a story name from the CRDT content store.
+/// Look up a story name from the CRDT content store regardless of stage.
 ///
 /// Used for events (like rate-limit warnings) that arrive without a known stage.
 fn find_story_name_any_stage(project_root: &Path, item_id: &str) -> Option<String> {
     read_story_name(project_root, "", item_id)
 }
 
-/// Format a blocked-story notification message.
-///
-/// Returns `(plain_text, html)` suitable for `ChatTransport::send_message`.
-pub fn format_blocked_notification(
-    item_id: &str,
-    story_name: Option<&str>,
-    reason: &str,
-) -> (String, String) {
-    let number = extract_story_number(item_id).unwrap_or(item_id);
-    let name = story_name.unwrap_or(item_id);
-
-    let plain = format!("\u{1f6ab} #{number} {name} \u{2014} BLOCKED: {reason}");
-    let html =
-        format!("\u{1f6ab} <strong>#{number}</strong> <em>{name}</em> \u{2014} BLOCKED: {reason}");
-    (plain, html)
-}
-
-/// Minimum time between rate-limit notifications for the same agent.
-const RATE_LIMIT_DEBOUNCE: Duration = Duration::from_secs(60);
-
-/// Window during which rapid stage transitions for the same item are coalesced
-/// into a single notification (only the final stage is announced).
-const STAGE_TRANSITION_DEBOUNCE: Duration = Duration::from_millis(200);
-
-/// Format a rate limit warning notification message.
-///
-/// Returns `(plain_text, html)` suitable for `ChatTransport::send_message`.
-pub fn format_rate_limit_notification(
-    item_id: &str,
-    story_name: Option<&str>,
-    agent_name: &str,
-) -> (String, String) {
-    let number = extract_story_number(item_id).unwrap_or(item_id);
-    let name = story_name.unwrap_or(item_id);
-
-    let plain =
-        format!("\u{26a0}\u{fe0f} #{number} {name} \u{2014} {agent_name} hit an API rate limit");
-    let html = format!(
-        "\u{26a0}\u{fe0f} <strong>#{number}</strong> <em>{name}</em> \u{2014} \
-         {agent_name} hit an API rate limit"
-    );
-    (plain, html)
-}
-
 /// Spawn a background task that listens for watcher events and posts
 /// stage-transition notifications to all configured rooms via the
 /// [`ChatTransport`] abstraction.
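The formatting helpers removed above move into `format.rs` (they reappear in this file's new imports). Because they are pure, they can be exercised standalone; a minimal sketch reproducing `extract_story_number` and `format_stage_notification` exactly as they appear in the removed lines, with an expected value taken from the diff's own unit tests:

```rust
/// Extract the numeric story number from an item ID like `"261_story_slug"`.
fn extract_story_number(item_id: &str) -> Option<&str> {
    item_id
        .split('_')
        .next()
        .filter(|s| !s.is_empty() && s.chars().all(|c| c.is_ascii_digit()))
}

/// Build `(plain_text, html)` for a stage transition announcement.
fn format_stage_notification(
    item_id: &str,
    story_name: Option<&str>,
    from_stage: &str,
    to_stage: &str,
) -> (String, String) {
    let number = extract_story_number(item_id).unwrap_or(item_id);
    let name = story_name.unwrap_or(item_id);

    // Transitions into "Done" get a celebratory emoji prefix.
    let prefix = if to_stage == "Done" { "\u{1f389} " } else { "" };
    let plain = format!("{prefix}#{number} {name} \u{2014} {from_stage} \u{2192} {to_stage}");
    let html = format!(
        "{prefix}<strong>#{number}</strong> <em>{name}</em> \u{2014} {from_stage} \u{2192} {to_stage}"
    );
    (plain, html)
}

fn main() {
    let (plain, _html) =
        format_stage_notification("353_story_done", Some("Done Story"), "Merge", "Done");
    // Matches the diff's test `format_notification_done_stage_includes_party_emoji`.
    assert_eq!(plain, "\u{1f389} #353 Done Story \u{2014} Merge \u{2192} Done");
    println!("{plain}");
}
```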
@@ -184,7 +92,7 @@ pub fn spawn_notification_listener(
                     to_display,
                 );
                 slog!("[bot] Sending stage notification: {plain}");
-                for room_id in &get_room_ids() {
+                for room_id in &rooms_for_notification(&get_room_ids) {
                     if let Err(e) = transport.send_message(room_id, &plain, &html).await {
                         slog!("[bot] Failed to send notification to {room_id}: {e}");
                     }
@@ -194,139 +102,11 @@ pub fn spawn_notification_listener(
                     continue;
                 }
 
-                match recv_result.unwrap() {
-                    Ok(WatcherEvent::WorkItem {
-                        ref stage,
-                        ref item_id,
-                        ref from_stage,
-                        ..
-                    }) => {
-                        // Only notify for transitions with a known source stage.
-                        // Synthetic events (reassign, creation) have from_stage=None
-                        // and must be skipped — the old inferred_from_stage fallback
-                        // produced wrong notifications for stories that skipped stages
-                        // (e.g. "QA → Merge" when QA was never entered).
-                        let from_display = from_stage.as_deref().map(stage_display_name);
-                        let Some(from_display) = from_display else {
-                            continue; // creation or unknown transition — skip
-                        };
-
-                        // Look up the story name in the expected stage directory; fall
-                        // back to a full search so stale events still show the name (AC1).
-                        let story_name = read_story_name(&project_root, stage, item_id)
-                            .or_else(|| find_story_name_any_stage(&project_root, item_id));
-
-                        // Buffer the transition. If this item_id is already pending (rapid
-                        // succession), update to_stage_key to the latest destination while
-                        // preserving the original from_display (AC2).
-                        pending_transitions
-                            .entry(item_id.clone())
-                            .and_modify(|e| {
-                                e.1 = stage.clone();
-                                if story_name.is_some() {
-                                    e.2 = story_name.clone();
-                                }
-                            })
-                            .or_insert_with(|| (from_display.to_string(), stage.clone(), story_name));
-
-                        // Start or extend the debounce window.
-                        flush_deadline = Some(tokio::time::Instant::now() + STAGE_TRANSITION_DEBOUNCE);
-                    }
-                    Ok(WatcherEvent::MergeFailure {
-                        ref story_id,
-                        ref reason,
-                    }) => {
-                        let story_name = read_story_name(&project_root, "4_merge", story_id);
-                        let (plain, html) =
-                            format_error_notification(story_id, story_name.as_deref(), reason);
-
-                        slog!("[bot] Sending error notification: {plain}");
-
-                        for room_id in &get_room_ids() {
-                            if let Err(e) = transport.send_message(room_id, &plain, &html).await {
-                                slog!("[bot] Failed to send error notification to {room_id}: {e}");
-                            }
-                        }
-                    }
-                    Ok(WatcherEvent::RateLimitWarning {
-                        ref story_id,
-                        ref agent_name,
-                    }) => {
-                        if !config.rate_limit_notifications {
-                            slog!(
-                                "[bot] RateLimitWarning suppressed by config for \
-                                 {story_id}:{agent_name}"
-                            );
-                            continue;
-                        }
-                        // Debounce: skip if we sent a notification for this agent
-                        // within the last RATE_LIMIT_DEBOUNCE seconds.
-                        let debounce_key = format!("{story_id}:{agent_name}");
-                        let now = Instant::now();
-                        if let Some(&last) = rate_limit_last_notified.get(&debounce_key)
-                            && now.duration_since(last) < RATE_LIMIT_DEBOUNCE
-                        {
-                            slog!(
-                                "[bot] Rate-limit notification debounced for \
-                                 {story_id}:{agent_name}"
-                            );
-                            continue;
-                        }
-                        rate_limit_last_notified.insert(debounce_key, now);
-
-                        let story_name = find_story_name_any_stage(&project_root, story_id);
-                        let (plain, html) =
-                            format_rate_limit_notification(story_id, story_name.as_deref(), agent_name);
-
-                        slog!("[bot] Sending rate-limit notification: {plain}");
-
-                        for room_id in &get_room_ids() {
-                            if let Err(e) = transport.send_message(room_id, &plain, &html).await {
-                                slog!(
-                                    "[bot] Failed to send rate-limit notification \
-                                     to {room_id}: {e}"
-                                );
-                            }
-                        }
-                    }
-                    Ok(WatcherEvent::StoryBlocked {
-                        ref story_id,
-                        ref reason,
-                    }) => {
-                        let story_name = find_story_name_any_stage(&project_root, story_id);
-                        let (plain, html) =
-                            format_blocked_notification(story_id, story_name.as_deref(), reason);
-
-                        slog!("[bot] Sending blocked notification: {plain}");
-
-                        for room_id in &get_room_ids() {
-                            if let Err(e) = transport.send_message(room_id, &plain, &html).await {
-                                slog!("[bot] Failed to send blocked notification to {room_id}: {e}");
-                            }
-                        }
-                    }
-                    Ok(WatcherEvent::RateLimitHardBlock {
-                        ref story_id,
-                        ref agent_name,
-                        reset_at,
-                    }) => {
-                        // Log server-side for debugging; do NOT post to Matrix.
-                        // Hard-block auto-resume is normal operation — the status
-                        // command already surfaces rate-limit state via emoji.
-                        slog!(
-                            "[bot] Rate-limit hard block for {story_id}/{agent_name}, \
-                             auto-resume at {reset_at}"
-                        );
-                    }
-                    Ok(WatcherEvent::ConfigChanged) => {
-                        // Hot-reload: pick up any changes to rate_limit_notifications.
-                        if let Ok(new_cfg) = ProjectConfig::load(&project_root) {
-                            config = new_cfg;
-                        }
-                    }
-                    Ok(_) => {} // Ignore other events
+                let event = match recv_result.unwrap() {
+                    Ok(ev) => ev,
                     Err(broadcast::error::RecvError::Lagged(n)) => {
                         slog!("[bot] Notification listener lagged, skipped {n} events");
+                        continue;
                     }
                     Err(broadcast::error::RecvError::Closed) => {
                         slog!("[bot] Watcher channel closed, stopping notification listener");
@@ -342,7 +122,7 @@ pub fn spawn_notification_listener(
                         to_display,
                     );
                     slog!("[bot] Sending stage notification: {plain}");
-                    for room_id in &get_room_ids() {
+                    for room_id in &rooms_for_notification(&get_room_ids) {
                         if let Err(e) = transport.send_message(room_id, &plain, &html).await {
                             slog!("[bot] Failed to send notification to {room_id}: {e}");
                         }
@@ -350,6 +130,143 @@ pub fn spawn_notification_listener(
                         }
                         break;
                     }
+                };
+
+                use super::events::EventAction;
+                match classify(&event) {
+                    EventAction::StageTransition => {
+                        // WorkItem with a known from_stage — extract the fields.
+                        let WatcherEvent::WorkItem {
+                            ref stage,
+                            ref item_id,
+                            ref from_stage,
+                            ..
+                        } = event
+                        else {
+                            continue;
+                        };
+                        let from_display = stage_display_name(from_stage.as_deref().unwrap_or(""));
+
+                        // Look up the story name in the expected stage directory; fall
+                        // back to a full search so stale events still show the name.
+                        let story_name = read_story_name(&project_root, stage, item_id)
+                            .or_else(|| find_story_name_any_stage(&project_root, item_id));
+
+                        // Buffer the transition. If this item_id is already pending (rapid
+                        // succession), update to_stage_key to the latest destination while
+                        // preserving the original from_display.
+                        pending_transitions
+                            .entry(item_id.clone())
+                            .and_modify(|e| {
+                                e.1 = stage.clone();
+                                if story_name.is_some() {
+                                    e.2 = story_name.clone();
+                                }
+                            })
+                            .or_insert_with(|| (from_display.to_string(), stage.clone(), story_name));
+
+                        // Start or extend the debounce window.
+                        flush_deadline = Some(tokio::time::Instant::now() + STAGE_TRANSITION_DEBOUNCE);
+                    }
+                    EventAction::MergeFailure => {
+                        let WatcherEvent::MergeFailure {
+                            ref story_id,
+                            ref reason,
+                        } = event
+                        else {
+                            continue;
+                        };
+                        let story_name = read_story_name(&project_root, "4_merge", story_id);
+                        let (plain, html) =
+                            format_error_notification(story_id, story_name.as_deref(), reason);
+                        slog!("[bot] Sending error notification: {plain}");
+                        for room_id in &rooms_for_notification(&get_room_ids) {
+                            if let Err(e) = transport.send_message(room_id, &plain, &html).await {
+                                slog!("[bot] Failed to send error notification to {room_id}: {e}");
+                            }
+                        }
+                    }
+                    EventAction::RateLimitWarning => {
+                        let WatcherEvent::RateLimitWarning {
+                            ref story_id,
+                            ref agent_name,
+                        } = event
+                        else {
+                            continue;
+                        };
+                        if !config.rate_limit_notifications {
+                            slog!(
+                                "[bot] RateLimitWarning suppressed by config for \
+                                 {story_id}:{agent_name}"
+                            );
+                            continue;
+                        }
+                        let debounce_key = format!("{story_id}:{agent_name}");
+                        let now = Instant::now();
+                        if !should_send_rate_limit(
+                            rate_limit_last_notified.get(&debounce_key).copied(),
+                            now,
+                        ) {
+                            slog!(
+                                "[bot] Rate-limit notification debounced for \
+                                 {story_id}:{agent_name}"
+                            );
+                            continue;
+                        }
+                        rate_limit_last_notified.insert(debounce_key, now);
+                        let story_name = find_story_name_any_stage(&project_root, story_id);
+                        let (plain, html) =
+                            format_rate_limit_notification(story_id, story_name.as_deref(), agent_name);
+                        slog!("[bot] Sending rate-limit notification: {plain}");
+                        for room_id in &rooms_for_notification(&get_room_ids) {
+                            if let Err(e) = transport.send_message(room_id, &plain, &html).await {
+                                slog!(
+                                    "[bot] Failed to send rate-limit notification \
+                                     to {room_id}: {e}"
+                                );
+                            }
+                        }
+                    }
+                    EventAction::StoryBlocked => {
+                        let WatcherEvent::StoryBlocked {
+                            ref story_id,
+                            ref reason,
+                        } = event
+                        else {
+                            continue;
+                        };
+                        let story_name = find_story_name_any_stage(&project_root, story_id);
+                        let (plain, html) =
+                            format_blocked_notification(story_id, story_name.as_deref(), reason);
+                        slog!("[bot] Sending blocked notification: {plain}");
+                        for room_id in &rooms_for_notification(&get_room_ids) {
+                            if let Err(e) = transport.send_message(room_id, &plain, &html).await {
+                                slog!("[bot] Failed to send blocked notification to {room_id}: {e}");
+                            }
+                        }
+                    }
+                    EventAction::LogOnly => {
+                        // Hard-block: log server-side for debugging; do NOT post to chat.
+                        // Hard-block auto-resume is normal operation — the status command
+                        // already surfaces rate-limit state via emoji.
+                        if let WatcherEvent::RateLimitHardBlock {
+                            ref story_id,
+                            ref agent_name,
+                            reset_at,
+                        } = event
+                        {
+                            slog!(
+                                "[bot] Rate-limit hard block for {story_id}/{agent_name}, \
+                                 auto-resume at {reset_at}"
+                            );
+                        }
+                    }
+                    EventAction::ReloadConfig => {
+                        if let Ok(new_cfg) = ProjectConfig::load(&project_root) {
+                            config = new_cfg;
+                        }
+                    }
+                    EventAction::Skip => {}
+                }
             }
         }
     });
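The new dispatch above calls `classify` from `events.rs`, a file this diff does not show. A hypothetical sketch of the shape it implies: the `EventAction` variant names are taken from the match arms above, while the `WatcherEvent` enum here is a simplified stand-in for the real type (only the fields needed for classification are modeled):

```rust
// Simplified stand-in for the real WatcherEvent; field names are assumptions.
enum WatcherEvent {
    WorkItem { from_stage: Option<String> },
    MergeFailure,
    RateLimitWarning,
    StoryBlocked,
    RateLimitHardBlock,
    ConfigChanged,
    Other,
}

// Variant names taken from the `match classify(&event)` arms in the diff.
#[derive(Debug, PartialEq)]
enum EventAction {
    StageTransition,
    MergeFailure,
    RateLimitWarning,
    StoryBlocked,
    LogOnly,
    ReloadConfig,
    Skip,
}

// Pure classification: io.rs matches on the returned action, then
// destructures the original event for its fields.
fn classify(event: &WatcherEvent) -> EventAction {
    match event {
        // Only transitions with a known source stage become notifications;
        // synthetic events (creation, reassign) carry from_stage = None.
        WatcherEvent::WorkItem { from_stage: Some(_) } => EventAction::StageTransition,
        WatcherEvent::WorkItem { from_stage: None } => EventAction::Skip,
        WatcherEvent::MergeFailure => EventAction::MergeFailure,
        WatcherEvent::RateLimitWarning => EventAction::RateLimitWarning,
        WatcherEvent::StoryBlocked => EventAction::StoryBlocked,
        WatcherEvent::RateLimitHardBlock => EventAction::LogOnly,
        WatcherEvent::ConfigChanged => EventAction::ReloadConfig,
        WatcherEvent::Other => EventAction::Skip,
    }
}

fn main() {
    // Creation events (no source stage) are skipped, matching the old inline logic.
    assert_eq!(
        classify(&WatcherEvent::WorkItem { from_stage: None }),
        EventAction::Skip
    );
    assert_eq!(classify(&WatcherEvent::ConfigChanged), EventAction::ReloadConfig);
    println!("ok");
}
```

Keeping `classify` pure lets it be unit-tested without a transport or a tokio runtime, which is the stated point of the `events.rs` split.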
@@ -630,37 +547,6 @@ mod tests {
         assert_eq!(calls.len(), 0, "No rooms means no notifications");
     }
 
-    // ── stage_display_name ──────────────────────────────────────────────────
-
-    #[test]
-    fn stage_display_name_maps_all_known_stages() {
-        assert_eq!(stage_display_name("1_backlog"), "Backlog");
-        assert_eq!(stage_display_name("2_current"), "Current");
-        assert_eq!(stage_display_name("3_qa"), "QA");
-        assert_eq!(stage_display_name("4_merge"), "Merge");
-        assert_eq!(stage_display_name("5_done"), "Done");
-        assert_eq!(stage_display_name("6_archived"), "Archived");
-        assert_eq!(stage_display_name("unknown"), "Unknown");
-    }
-
-    // ── extract_story_number ────────────────────────────────────────────────
-
-    #[test]
-    fn extract_story_number_parses_numeric_prefix() {
-        assert_eq!(
-            extract_story_number("261_story_bot_notifications"),
-            Some("261")
-        );
-        assert_eq!(extract_story_number("42_bug_fix_thing"), Some("42"));
-        assert_eq!(extract_story_number("1_spike_research"), Some("1"));
-    }
-
-    #[test]
-    fn extract_story_number_returns_none_for_non_numeric() {
-        assert_eq!(extract_story_number("abc_story_thing"), None);
-        assert_eq!(extract_story_number(""), None);
-    }
-
     // ── read_story_name ─────────────────────────────────────────────────────
 
     #[test]
@@ -699,69 +585,6 @@ mod tests {
         assert_eq!(name, None);
     }
 
-    // ── format_error_notification ────────────────────────────────────────────
-
-    #[test]
-    fn format_error_notification_with_story_name() {
-        let (plain, html) = format_error_notification(
-            "262_story_bot_errors",
-            Some("Bot error notifications"),
-            "merge conflict in src/main.rs",
-        );
-        assert_eq!(
-            plain,
-            "\u{274c} #262 Bot error notifications \u{2014} merge conflict in src/main.rs"
-        );
-        assert_eq!(
-            html,
-            "\u{274c} <strong>#262</strong> <em>Bot error notifications</em> \u{2014} merge conflict in src/main.rs"
-        );
-    }
-
-    #[test]
-    fn format_error_notification_without_story_name_falls_back_to_item_id() {
-        let (plain, _html) = format_error_notification("42_bug_fix_thing", None, "tests failed");
-        assert_eq!(plain, "\u{274c} #42 42_bug_fix_thing \u{2014} tests failed");
-    }
-
-    #[test]
-    fn format_error_notification_non_numeric_id_uses_full_id() {
-        let (plain, _html) =
-            format_error_notification("abc_story_thing", Some("Some Story"), "clippy errors");
-        assert_eq!(
-            plain,
-            "\u{274c} #abc_story_thing Some Story \u{2014} clippy errors"
-        );
-    }
-
-    // ── format_blocked_notification ─────────────────────────────────────────
-
-    #[test]
-    fn format_blocked_notification_with_story_name() {
-        let (plain, html) = format_blocked_notification(
-            "425_story_blocking_reason",
-            Some("Blocking Reason Story"),
-            "Retry limit exceeded (3/3) at coder stage",
-        );
-        assert_eq!(
-            plain,
-            "\u{1f6ab} #425 Blocking Reason Story \u{2014} BLOCKED: Retry limit exceeded (3/3) at coder stage"
-        );
-        assert_eq!(
-            html,
-            "\u{1f6ab} <strong>#425</strong> <em>Blocking Reason Story</em> \u{2014} BLOCKED: Retry limit exceeded (3/3) at coder stage"
-        );
-    }
-
-    #[test]
-    fn format_blocked_notification_falls_back_to_item_id() {
-        let (plain, _html) = format_blocked_notification("42_story_thing", None, "empty diff");
-        assert_eq!(
-            plain,
-            "\u{1f6ab} #42 42_story_thing \u{2014} BLOCKED: empty diff"
-        );
-    }
-
     // ── spawn_notification_listener: StoryBlocked ───────────────────────────
 
     /// AC1: when a StoryBlocked event arrives, send_message is called with a
@@ -842,88 +665,6 @@ mod tests {
         assert_eq!(calls.len(), 0, "No rooms means no notifications");
     }
 
-    // ── format_rate_limit_notification ─────────────────────────────────────
-
-    #[test]
-    fn format_rate_limit_notification_includes_agent_and_story() {
-        let (plain, html) =
-            format_rate_limit_notification("365_story_my_feature", Some("My Feature"), "coder-2");
-        assert_eq!(
-            plain,
-            "\u{26a0}\u{fe0f} #365 My Feature \u{2014} coder-2 hit an API rate limit"
-        );
-        assert_eq!(
-            html,
-            "\u{26a0}\u{fe0f} <strong>#365</strong> <em>My Feature</em> \u{2014} coder-2 hit an API rate limit"
-        );
-    }
-
-    #[test]
-    fn format_rate_limit_notification_falls_back_to_item_id() {
-        let (plain, _html) = format_rate_limit_notification("42_story_thing", None, "coder-1");
-        assert_eq!(
-            plain,
-            "\u{26a0}\u{fe0f} #42 42_story_thing \u{2014} coder-1 hit an API rate limit"
-        );
-    }
-
-    // ── format_stage_notification ───────────────────────────────────────────
-
-    #[test]
-    fn format_notification_done_stage_includes_party_emoji() {
-        let (plain, html) =
-            format_stage_notification("353_story_done", Some("Done Story"), "Merge", "Done");
-        assert_eq!(
-            plain,
-            "\u{1f389} #353 Done Story \u{2014} Merge \u{2192} Done"
-        );
-        assert_eq!(
-            html,
-            "\u{1f389} <strong>#353</strong> <em>Done Story</em> \u{2014} Merge \u{2192} Done"
-        );
-    }
-
-    #[test]
-    fn format_notification_non_done_stage_has_no_emoji() {
-        let (plain, _html) =
-            format_stage_notification("42_story_thing", Some("Some Story"), "Backlog", "Current");
-        assert!(!plain.contains("\u{1f389}"));
-    }
-
-    #[test]
-    fn format_notification_with_story_name() {
-        let (plain, html) = format_stage_notification(
-            "261_story_bot_notifications",
-            Some("Bot notifications"),
-            "Upcoming",
-            "Current",
-        );
-        assert_eq!(
-            plain,
-            "#261 Bot notifications \u{2014} Upcoming \u{2192} Current"
-        );
-        assert_eq!(
-            html,
-            "<strong>#261</strong> <em>Bot notifications</em> \u{2014} Upcoming \u{2192} Current"
-        );
-    }
-
-    #[test]
-    fn format_notification_without_story_name_falls_back_to_item_id() {
-        let (plain, _html) = format_stage_notification("42_bug_fix_thing", None, "Current", "QA");
-        assert_eq!(plain, "#42 42_bug_fix_thing \u{2014} Current \u{2192} QA");
-    }
-
-    #[test]
-    fn format_notification_non_numeric_id_uses_full_id() {
-        let (plain, _html) =
-            format_stage_notification("abc_story_thing", Some("Some Story"), "QA", "Merge");
-        assert_eq!(
-            plain,
-            "#abc_story_thing Some Story \u{2014} QA \u{2192} Merge"
-        );
-    }
-
     // ── rate_limit_notifications config flag ─────────────────────────────────
 
     /// AC1+AC2: when rate_limit_notifications = false in project.toml,
@@ -0,0 +1,89 @@
+//! Notifications service — pipeline-event fan-out to chat transports.
+//!
+//! Subscribes to [`WatcherEvent`] broadcasts and posts human-readable messages
+//! to all configured chat rooms whenever a work item moves through the pipeline.
+//!
+//! Follows service-module conventions:
+//! - `mod.rs` (this file) — public API, typed [`Error`] type, orchestration
+//! - `io.rs` — the ONLY place that performs side effects (DB reads, config
+//!   loads, `tokio::spawn`)
+//! - `format.rs` — pure: message formatting functions
+//! - `filter.rs` — pure: debounce constants and suppression predicates
+//! - `events.rs` — pure: WatcherEvent classification / event mapping
+//! - `route.rs` — pure: room-routing decisions
+
+pub(super) mod events;
+pub(super) mod filter;
+pub(super) mod format;
+pub(super) mod io;
+pub(super) mod route;
+
+pub use format::{
+    format_blocked_notification, format_error_notification, format_stage_notification,
+    stage_display_name,
+};
+pub use io::spawn_notification_listener;
+
+// ── Error type ────────────────────────────────────────────────────────────────
+
+/// Typed errors returned by `service::notifications` operations.
+///
+/// HTTP handlers and bot commands may map these to user-facing messages.
+#[derive(Debug)]
+#[allow(dead_code)]
+pub enum Error {
+    /// The incoming event type is not recognised or not supported.
+    UnknownEvent(String),
+    /// A message could not be formatted for delivery (e.g. malformed input).
+    RenderFailure(String),
+    /// The underlying chat transport rejected the send operation.
+    TransportSendFailure(String),
+    /// Required configuration (room IDs, credentials) is absent.
+    ConfigMissing(String),
+}
+
+impl std::fmt::Display for Error {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        match self {
+            Self::UnknownEvent(msg) => write!(f, "Unknown event: {msg}"),
+            Self::RenderFailure(msg) => write!(f, "Render failure: {msg}"),
+            Self::TransportSendFailure(msg) => write!(f, "Transport send failure: {msg}"),
+            Self::ConfigMissing(msg) => write!(f, "Config missing: {msg}"),
+        }
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    // ── Error Display ─────────────────────────────────────────────────────────
+
+    #[test]
+    fn error_unknown_event_display() {
+        let e = Error::UnknownEvent("bad_event_type".to_string());
+        assert!(e.to_string().contains("Unknown event"));
+        assert!(e.to_string().contains("bad_event_type"));
+    }
+
+    #[test]
+    fn error_render_failure_display() {
+        let e = Error::RenderFailure("malformed input".to_string());
+        assert!(e.to_string().contains("Render failure"));
+        assert!(e.to_string().contains("malformed input"));
+    }
+
+    #[test]
+    fn error_transport_send_failure_display() {
+        let e = Error::TransportSendFailure("connection refused".to_string());
+        assert!(e.to_string().contains("Transport send failure"));
+        assert!(e.to_string().contains("connection refused"));
+    }
+
+    #[test]
+    fn error_config_missing_display() {
+        let e = Error::ConfigMissing("room_id not set".to_string());
+        assert!(e.to_string().contains("Config missing"));
+        assert!(e.to_string().contains("room_id not set"));
+    }
+}
@@ -0,0 +1,42 @@
+//! Room-routing decisions for notifications.
+//!
+//! Pure functions that determine which destination room IDs should receive
+//! a given notification. Currently all notification kinds are broadcast to
+//! all registered rooms; this module is the single location to change that
+//! policy if per-event routing is needed in the future.
+
+/// Return the rooms that should receive a notification.
+///
+/// `get_room_ids` is called once per notification to obtain the current list
+/// of destination room IDs. Passing a closure (rather than a static slice)
+/// allows callers to use a runtime-mutable set, e.g. WhatsApp ambient senders.
+///
+/// All currently supported event kinds are broadcast to every room returned
+/// by the closure.
+pub fn rooms_for_notification(get_room_ids: &impl Fn() -> Vec<String>) -> Vec<String> {
+    get_room_ids()
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn returns_all_rooms_from_closure() {
+        let rooms = rooms_for_notification(&|| vec!["room1".to_string(), "room2".to_string()]);
+        assert_eq!(rooms, vec!["room1".to_string(), "room2".to_string()]);
+    }
+
+    #[test]
+    fn returns_empty_when_no_rooms_registered() {
+        let rooms = rooms_for_notification(&Vec::new);
+        assert!(rooms.is_empty());
+    }
+
+    #[test]
+    fn returns_single_room() {
+        let rooms = rooms_for_notification(&|| vec!["!abc:example.org".to_string()]);
+        assert_eq!(rooms.len(), 1);
+        assert_eq!(rooms[0], "!abc:example.org");
+    }
+}
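The closure-based signature above keeps `route.rs` pure while letting callers hand in a runtime-mutable room set. A self-contained sketch of the function as added in the diff, with a small usage example (the room ID is illustrative):

```rust
// rooms_for_notification as added in route.rs: today it simply forwards the
// closure's result, but it centralises routing policy so per-event rules can
// later live in one place.
fn rooms_for_notification(get_room_ids: &impl Fn() -> Vec<String>) -> Vec<String> {
    get_room_ids()
}

fn main() {
    // Callers pass a closure so the registered room set can change at runtime.
    let registered = vec!["!abc:example.org".to_string()];
    let rooms = rooms_for_notification(&|| registered.clone());
    assert_eq!(rooms, registered);

    // An empty registry yields no destinations, so no messages are sent.
    assert!(rooms_for_notification(&Vec::new).is_empty());
    println!("{} room(s)", rooms.len());
}
```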
Some files were not shown because too many files have changed in this diff.