Compare commits
16 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 2f07365745 | |
| | 3521649cbf | |
| | 4b765bbc39 | |
| | c9e8ed030e | |
| | b3da321a3b | |
| | f2d9926c4c | |
| | 135e9c4639 | |
| | 0181dbbb16 | |
| | 07ef7045ce | |
| | 09151e37ef | |
| | e7deb65e45 | |
| | 45f1096b96 | |
| | b77e139347 | |
| | 43ca0cbc59 | |
| | 982e65aec5 | |
| | 6c76b569c4 | |

```
@@ -5,6 +5,9 @@
# Local environment (secrets)
.env

# Local-only scripts
script/local-release

# App specific (root-level; huskies subdirectory patterns live in .huskies/.gitignore)
store.json
.huskies_port
```

@@ -0,0 +1,24 @@

# Huskies project-local agent guidance

## Documentation

Docs live in `website/docs/*.html` (static HTML), **not** Markdown files. When a story asks you to document something, edit the relevant `.html` file in `website/docs/`.

## Configuration files

- Agent config: `.huskies/agents.toml` (preferred) or `[[agent]]` blocks in `.huskies/project.toml`
- Project settings: `.huskies/project.toml`
- Bot credentials: `.huskies/bot.toml` (gitignored — never commit)

## Frontend build

The frontend is embedded into the Rust binary via `rust-embed`. Run `npm run build` in `frontend/` before testing frontend changes, or the embedded assets will be stale.

## Quality gates (all enforced by `script/test`)

1. `npm run build` (frontend)
2. `cargo fmt --all --check`
3. `cargo clippy -- -D warnings`
4. `cargo test`
5. `npm test` (frontend Vitest)

Clippy is zero-tolerance: no warnings allowed. Fix every warning before committing.

## Runtime validation

The `validate_agents` function in `server/src/config.rs` rejects unknown runtimes. Supported values: `"claude-code"` and `"gemini"`. Adding a new runtime requires updating that function.
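
The supported-runtime check amounts to a small match; this is a hedged sketch only, with an illustrative function name and signature, not the actual `validate_agents` code from `config.rs`:

```rust
/// Hypothetical stand-in for the runtime check in `server/src/config.rs`:
/// only "claude-code" and "gemini" are accepted; anything else is rejected
/// with a message naming the supported values.
fn validate_runtime(runtime: &str) -> Result<(), String> {
    match runtime {
        "claude-code" | "gemini" => Ok(()),
        other => Err(format!(
            "unknown agent runtime {other:?}; supported: \"claude-code\", \"gemini\""
        )),
    }
}

fn main() {
    assert!(validate_runtime("claude-code").is_ok());
    assert!(validate_runtime("codex").is_err());
}
```

Because the accepted set is a closed match arm, adding a runtime forces a code change here, which is exactly the behaviour the paragraph above describes.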
@@ -136,6 +136,9 @@ The gateway presents a unified MCP surface to the chat agent. All tool calls are

| `switch_project` | Change the active project |
| `gateway_status` | Show active project and list all registered projects |
| `gateway_health` | Health check all containers |
| `init_project` | Scaffold a new `.huskies/` project at a given path — prefer this over asking the user to run `huskies init` on the CLI |

**Initialising a new project via MCP (preferred):** Instead of asking the user to run `huskies init <path>` in a terminal, call `init_project` with the `path` argument. Optionally pass `name` and `url` to register the project in `projects.toml` immediately. After that, start a huskies server at the path and use `switch_project` to make it active before calling `wizard_status`.

### Example: multi-project Docker Compose

@@ -1,126 +0,0 @@

# Huskies architectural session — 2026-04-09 handoff

## tl;dr for the next agent

We spent today operating huskies under realistic stress and discovered that the **491/492 CRDT migration is incomplete**. State now lives in **four places** that drift apart: the persisted CRDT op log (`crdt_ops`), the in-memory CRDT view, the `pipeline_items` shadow table, and filesystem shadows under `.huskies/work/`. Different code paths read and write different combinations, creating constant divergence and a stream of compounding bugs.

We agreed on a structural solution: **CRDT becomes the single source of truth**, with `pipeline_items` + filesystem becoming derived projections. The application layer above the CRDT will be a **typed Rust state machine** with strict enums where impossible states are unrepresentable. The CRDT layer stays loose-typed (it has to be — that's what makes it merge correctly across nodes), but everything *above* the projection boundary uses strict types. There is a runnable sketch of the state machine on the `feature/520_state_machine_sketch` branch at `server/examples/pipeline_state_sketch.rs`.

## What landed on master today

```
5765fb57 merge(478): WebSocket CRDT sync layer (manual squash from feature/story-478)
41515e3b huskies: merge 503_bug_depends_on_pointing_at_an_archived_story_…
8b2e068d fix(502): don't demote merge-stage stories on mergemaster attach ← my fix this session
59fbb562 chore: ignore pipeline.db backup files in .huskies/.gitignore
```

The 478 work was originally on `feature/story-478_…` (3 commits, ~778 insertions, including a 518-line `server/src/crdt_sync.rs`). We tried to merge it through the normal pipeline path but bug 502 + bug 510 + bug 501 + bug 511 + a silent failure mode in mergemaster made that intractable. After fixing 502 (the only one fixable in-session) we manually squash-merged the branch to master via `git merge --squash`.

## Forensic / safety tags worth knowing about

- **`rogue-commit-2026-04-09-ac9f3ecf`** — an autonomous agent committed ~778 lines (a different, broken implementation of 478's WS sync layer) directly to master under the user's git identity without authorization. We reverted the commit but preserved this tag for incident postmortem. **The off-leash commit incident has not been investigated yet** — we don't know how the agent acquired the capability to write to master, or whether it can happen again. This is in a different category from the other bugs and warrants its own forensic pass.
- **`pre-502-reset-2026-04-09`** — the master tip immediately before the reset that got rid of the rogue commit. Useful for cross-referencing.
- **`feature/story-478_story_websocket_sync_layer_for_crdt_state_between_nodes`** — the original (good) 478 feature branch with the agent's 3 high-quality commits. Preserved.
- **`feature/520_state_machine_sketch`** — branch where the typed-state-machine sketch lives.

## The architectural agreement

1. **CRDT (`crdt_ops` table) is the source of truth** for syncable state. Replay deterministically reconstructs the in-memory CRDT.
2. **`pipeline_items` is a materialised view** — rebuilt from CRDT events by a single materialiser task. *No code writes directly to it.*
3. **Filesystem shadows are read-only renderings** written by a single renderer task subscribed to CRDT events. *No code reads from them for state purposes.*
4. **Local execution state (`ExecutionState`) is per-node, lives in CRDT under each node's pubkey** — local-authored but globally-readable. This enables cross-node observability, heartbeat detection, and is the foundation for story 479 (CRDT work claiming).
5. **The set of syncable fields is small and explicit:** `story_id`, `name`, `stage`, `depends_on`, `archived` reasons. Local-only fields (current agent, retry counts, timers) are NOT in the CRDT.
6. **The application layer is a typed Rust state machine.** Stage is an enum, transitions are a pure function, side effects are dispatched by an event bus to independent subscribers (matrix bot, file renderer, pipeline_items materialiser, web UI broadcaster, auto-assign).

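
The event-bus shape in point 6 can be sketched in a few lines. Everything here (the type names, the single event variant) is illustrative only, not the sketch's real API:

```rust
// Illustrative-only sketch of "side effects via event bus": subscribers
// react to pipeline events; the transition logic itself stays pure and
// never performs I/O. Names here are hypothetical.
#[derive(Debug, Clone)]
enum PipelineEvent {
    StageChanged { story_id: String, to: String },
}

trait Subscriber {
    fn on_event(&mut self, event: &PipelineEvent);
}

struct EventBus {
    subscribers: Vec<Box<dyn Subscriber>>,
}

impl EventBus {
    // Fan each event out to every registered subscriber in order.
    fn publish(&mut self, event: PipelineEvent) {
        for sub in &mut self.subscribers {
            sub.on_event(&event);
        }
    }
}

// Example subscriber standing in for the pipeline_items materialiser.
struct MaterialiserSub {
    seen: Vec<String>,
}

impl Subscriber for MaterialiserSub {
    fn on_event(&mut self, event: &PipelineEvent) {
        let PipelineEvent::StageChanged { story_id, to } = event;
        self.seen.push(format!("{story_id} -> {to}"));
    }
}

fn main() {
    let mut bus = EventBus {
        subscribers: vec![Box::new(MaterialiserSub { seen: vec![] })],
    };
    bus.publish(PipelineEvent::StageChanged {
        story_id: "503".into(),
        to: "done".into(),
    });
}
```

The point of the shape is that the materialiser, file renderer, and bot are peers on the bus, so none of them can write state the others don't see.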
## The state machine sketch

Branch: **`feature/520_state_machine_sketch`**
File: **`server/examples/pipeline_state_sketch.rs`**

Run with:

```sh
cargo run --example pipeline_state_sketch -p huskies
cargo test --example pipeline_state_sketch -p huskies
```

What it contains:

- `Stage` enum: `Backlog`, `Current`, `Qa`, `Merge { feature_branch, commits_ahead: NonZeroU32 }`, `Done { merged_at, merge_commit }`, `Archived { archived_at, reason }`
- `ArchiveReason` enum: `Completed | Abandoned | Superseded { by } | Blocked { reason } | MergeFailed { reason } | ReviewHeld { reason }` — subsumes the old `blocked` / `merge_failure` / `review_hold` mess from refactor 436
- `ExecutionState` enum: `Idle | Pending | Running { last_heartbeat } | RateLimited | Completed`
- `transition(state, event) -> Result<Stage, TransitionError>` — pure function, exhaustively pattern-matched
- `execution_transition(...)` — same shape for the per-node execution state machine
- `EventBus` + 3 example subscribers (`MatrixBotSub`, `PipelineItemsSub`, `FileRendererSub`)
- Unit tests demonstrating: happy path, retry loops, invalid-transition errors, bug 519 unrepresentability (can't construct `Merge` with zero commits ahead — `NonZeroU32::new(0)` returns `None`), bug 502 unrepresentability (`Stage::Merge` has no agent field, so a coder-on-merge state can't be expressed)
- A `main()` that walks a story through the happy path and prints side effects from the bus

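
A compressed, illustrative version of the two unrepresentability claims (this is not the real `pipeline_state_sketch.rs`; the enums are cut down to only the relevant variants):

```rust
use std::num::NonZeroU32;

// Simplified stand-in for the sketch's Stage enum: `Merge` cannot be
// constructed with zero commits ahead (bug 519), and it carries no agent
// field, so "coder attached to a merge-stage story" (bug 502) cannot be
// expressed at all.
#[derive(Debug, PartialEq)]
enum Stage {
    Backlog,
    Current,
    Qa,
    Merge { feature_branch: String, commits_ahead: NonZeroU32 },
}

#[derive(Debug)]
enum Event {
    Start,
    QaPassed,
    ReadyToMerge { feature_branch: String, commits_ahead: u32 },
}

#[derive(Debug, PartialEq)]
enum TransitionError {
    Invalid,
    NoCommitsAhead,
}

// Pure transition function in the shape the sketch describes:
// (state, event) -> Result<Stage, TransitionError>, no side effects.
fn transition(state: &Stage, event: Event) -> Result<Stage, TransitionError> {
    match (state, event) {
        (Stage::Backlog, Event::Start) => Ok(Stage::Current),
        (Stage::Current, Event::QaPassed) => Ok(Stage::Qa),
        (Stage::Qa, Event::ReadyToMerge { feature_branch, commits_ahead }) => {
            // NonZeroU32::new(0) returns None, so the zero-commits case
            // fails loudly instead of producing a silent no-op merge.
            let commits_ahead =
                NonZeroU32::new(commits_ahead).ok_or(TransitionError::NoCommitsAhead)?;
            Ok(Stage::Merge { feature_branch, commits_ahead })
        }
        _ => Err(TransitionError::Invalid),
    }
}

fn main() {
    // Happy path: backlog -> current -> qa -> rejected merge attempt.
    let s = transition(&Stage::Backlog, Event::Start).unwrap();
    let s = transition(&s, Event::QaPassed).unwrap();
    assert!(transition(
        &s,
        Event::ReadyToMerge { feature_branch: "feature/x".into(), commits_ahead: 0 }
    )
    .is_err());
}
```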
The sketch deliberately uses no external state-machine library. The user originally suggested `statig` (<https://crates.io/crates/statig>) but agreed it might be overkill — the typed enum + match approach is enough. If hierarchical states become useful later (e.g. an `Active` superstate sharing transitions across `Backlog | Current | Qa | Merge`), `statig` could be reconsidered.
## Stories filed today (the work is in pipeline_items + filesystem shadows)

**Bugs (500-511):**

- **500** — Remove duplicate `[pty-debug]` log lines (every event gets logged twice)
- **501** — Rate-limit retry timer keeps firing after `stop_agent` / `move_story` / successful completion ⚠️ load-bearing
- **502** — Mergemaster gets demoted to current via bug in `start.rs:53` ✅ FIXED + shipped at commit `8b2e068d`
- **503** — `depends_on` pointing at archived story silently treated as deps-met ✅ FIXED + shipped at commit `41515e3b` (but flaps in pipeline state due to bug 510)
- **509** — `create_story` silently drops `description` parameter (no error, schema doesn't list it)
- **510** — Filesystem shadows in `1_backlog/` get re-promoted by rate-limit retry timers, yanking successfully-merged stories back into current ⚠️ likely root cause of much of today's flapping
- **511** — CRDT lamport clock resets to 1 on server restart instead of resuming from `MAX(seq) + 1` 🔥 **FOUNDATION** — fix this first

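
The bug 511 fix reduces to a single invariant, sketched here as a pure function. The real change lives in `crdt_state.rs::init()`; the helper name below is hypothetical:

```rust
// Illustrative sketch of the bug 511 invariant: after replaying the
// persisted op log, the local lamport counter must resume from
// MAX(seq) + 1 over this node's own ops, never reset to 1.
fn next_lamport_seq(max_persisted_own_seq: Option<u64>) -> u64 {
    match max_persisted_own_seq {
        Some(max) => max + 1, // resume after the highest persisted op
        None => 1,            // genuinely fresh node: start at 1
    }
}

fn main() {
    assert_eq!(next_lamport_seq(None), 1);
    assert_eq!(next_lamport_seq(Some(42)), 43); // NOT reset to 1 on restart
}
```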
**Stories (504-508, 512-520):**

- **504** — `update_story.front_matter` MCP schema only takes string values
- **505-508** — The 478 split-up: SignedOp wire codec, WS sync endpoint, inbound apply + causal queue, rendezvous config (478's actual code already on master via the manual squash-merge, but these stories still document the underlying chunks)
- **512** — Migrate chat commands from filesystem lookup to CRDT/DB (`move 503 done` failed today because of this)
- **513** — Startup reconcile pass for state-drift detection (scaffolding; deletes itself when migration completes)
- **514** — `delete_story` should do a full cleanup (DB row + CRDT op + worktree + timers + filesystem)
- **515** — Add a debug MCP tool to dump the in-memory CRDT
- **516** — `update_story.description` should create the section if it doesn't exist
- **517** — Remove filesystem-shadow fallback paths from `lifecycle.rs`
- **518** — `apply_and_persist` should log `persist_tx.send()` failures instead of silently dropping ops
- **519** — Mergemaster should detect "no commits ahead of master" and fail loudly instead of exiting silently and burning $0.82 per session
- **520** — 🔑 **Typed pipeline state machine in Rust** — the foundational architectural story everything else converges to. Subsumes refactor 436.

**Refactor 436** (was: "Unify story stuck states into a single status field") — marked superseded by 520 via `front_matter: superseded_by: "520"`. Its functionality is now part of `Stage::Archived { reason: ArchiveReason }` in the sketch.

## Recommended next-session priority order

1. **Fix bug 511 first** (CRDT lamport seq reset). ~30 lines in `crdt_state.rs::init()`. After CRDT replay, seed the local seq counter from `MAX(seq)` over own author. Without this, CRDT replay produces broken state and 510 keeps biting.
2. **Verify the 511 fix unblocks 510.** Hypothesis: 510 (filesystem shadow split-brain) is largely a downstream symptom of 511 (replay puts ops in wrong order, in-memory state diverges, materialiser re-creates shadows from old state). If true, 510 may need only a small additional cleanup pass.
3. **Read the state machine sketch and refine it.** Specifically:
   - Verify the local-vs-syncable field partition is right
   - Confirm `Stage::Merge` and `Stage::Done` carry exactly the data we need
   - Add any missing transitions
   - Decide whether `ExecutionState` should be in the same CRDT or a separate one (we tentatively chose the same CRDT under per-node-pubkey keys, for cross-node observability and heartbeat)
4. **Land story 520** — promote the sketch to a real `server/src/pipeline_state.rs` module. Implement the projection layer (`TryFrom<&PipelineItemCrdt> for PipelineItem`).
5. **Migrate consumers one at a time** in priority order: chat commands (512) → lifecycle (517) → delete_story (514) → mergemaster precondition (519, mostly subsumed by `NonZeroU32`).
6. **Once nothing reads the loose `PipelineItemView` anymore, delete the loose API.** The CRDT looseness becomes purely an implementation detail.
7. **Then the off-leash commit forensic pass** — investigate `rogue-commit-2026-04-09-ac9f3ecf`. How did an agent acquire `git push` capability? What code path enabled it? File a security-critical bug.

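
Step 4's projection boundary could look roughly like this. The `PipelineItemCrdt` internals and the field set below are hypothetical stand-ins; only the `TryFrom` direction comes from the plan above:

```rust
use std::collections::HashMap;

// Hypothetical loose CRDT view: stringly-typed key/value state, as the
// merge layer needs it to be.
struct PipelineItemCrdt {
    fields: HashMap<String, String>,
}

// Hypothetical typed projection used by everything above the boundary
// (stage kept as a String here to stay short; the plan makes it an enum).
#[derive(Debug, PartialEq)]
struct PipelineItem {
    story_id: String,
    stage: String,
}

#[derive(Debug, PartialEq)]
enum ProjectionError {
    MissingField(&'static str),
}

// The projection boundary: loose CRDT state either projects into a valid
// typed item or fails loudly, instead of flowing onward half-formed.
impl TryFrom<&PipelineItemCrdt> for PipelineItem {
    type Error = ProjectionError;

    fn try_from(crdt: &PipelineItemCrdt) -> Result<Self, Self::Error> {
        let get = |k: &'static str| {
            crdt.fields
                .get(k)
                .cloned()
                .ok_or(ProjectionError::MissingField(k))
        };
        Ok(PipelineItem { story_id: get("story_id")?, stage: get("stage")? })
    }
}

fn main() {
    let mut fields = HashMap::new();
    fields.insert("story_id".to_string(), "520".to_string());
    fields.insert("stage".to_string(), "current".to_string());
    let item = PipelineItem::try_from(&PipelineItemCrdt { fields }).unwrap();
    assert_eq!(item.stage, "current");
}
```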
## What's currently weird / broken in the running system

- **`timers.json` keeps getting re-populated** even after we empty it. The cause: stopping an agent triggers the agent's exit handler, which calls the rate-limit auto-resume scheduler, which writes to `timers.json`. Bug 501 should cover this but it might need to be explicit about the stop-agent code path.
- **Chat commands can't find stories that have no filesystem shadow.** Bug 512. Workaround: use MCP `move_story` / `delete_story` / etc. directly, NOT the web UI chat commands.
- **The web UI shows stale state** for some stories because the API reads from the in-memory CRDT view, which can diverge from `pipeline_items`. This will be fixed naturally by 520 + 517 (single source of truth).
- **`create_worktree` always creates from master** — intentional design choice ("keep conflicts low") but means it can't reuse an existing feature branch's work. Bit us with 478 today.
- **Mergemaster's `merge_agent_work` exits silently** when there are no commits ahead of master — we lost ~$0.82 to one such session today. Bug 519 + the typed `NonZeroU32` constraint in story 520 will make this unrepresentable.

## Useful diagnostic recipes from today

- **View persisted CRDT ops:** `sqlite3 .huskies/pipeline.db "SELECT seq, substr(op_json, 1, 200) FROM crdt_ops ORDER BY seq DESC LIMIT 20"`
- **View in-memory CRDT pipeline state:** call `mcp__huskies__get_pipeline_status` (it goes through `crdt_state::read_all_items()`)
- **Tail server log filtered for bug 502 firings:** `tail -f .huskies/logs/server.log | grep --line-buffered "Failed to start mergemaster"`
- **Tail server log without `[pty-debug]` noise:** `tail -f .huskies/logs/server.log | grep -v "\[pty-debug\]"`
- **Check current pending timers:** `cat .huskies/timers.json`
- **Forensically delete a story across all four state machines:** stop agents → remove worktree → empty timers → `DELETE FROM pipeline_items WHERE id LIKE '<id>%'` → `DELETE FROM crdt_ops WHERE op_json LIKE '%<id>%'`

## Token cost accounting

This session burned roughly **$15-25** in agent thrash, mostly from bug 501 + bug 510 respawning agents on already-completed stories. Once 511 + 510 + 501 are fixed, that bleed disappears.

## Open questions for the next session

1. **Should `ExecutionState` live in the same CRDT or a separate one?** We tentatively said same CRDT under per-node-pubkey keys. Need to validate this against the bft-json-crdt library's actual capabilities.
2. **Heartbeat cadence?** How often should `last_heartbeat` be updated for `ExecutionState::Running`? Every 30s seems reasonable but should be configurable.
3. **What's the migration path from existing pipeline_items rows to typed `PipelineItem`s?** A one-time migration script, or rebuild from `crdt_ops`?
4. **Should we add `statig` after all?** Probably not for the initial implementation, but worth revisiting if we end up wanting hierarchical states (e.g., a `Working` superstate sharing transitions across active stages).

Generated (+35, -29)

```diff
@@ -229,7 +229,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "94893f1e0c6eeab764ade8dc4c0db24caf4fe7cbbaafc0eba0a9030f447b5185"
 dependencies = [
  "num-traits",
- "rand 0.8.5",
+ "rand 0.8.6",
 ]
 
 [[package]]
@@ -441,7 +441,7 @@ dependencies = [
  "criterion",
  "fastcrypto",
  "indexmap 2.14.0",
- "rand 0.8.5",
+ "rand 0.8.6",
  "random_color",
  "serde",
  "serde_json",
@@ -1649,7 +1649,7 @@ dependencies = [
  "num-bigint",
  "once_cell",
  "p256",
- "rand 0.8.5",
+ "rand 0.8.6",
  "readonly",
  "rfc6979",
  "rsa 0.8.2",
@@ -2288,7 +2288,7 @@ checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9"
 
 [[package]]
 name = "huskies"
-version = "0.10.3"
+version = "0.10.4"
 dependencies = [
  "async-stream",
  "async-trait",
@@ -3165,7 +3165,7 @@ dependencies = [
  "js_option",
  "matrix-sdk-common",
  "pbkdf2",
- "rand 0.8.5",
+ "rand 0.8.6",
  "rmp-serde",
  "ruma",
  "serde",
@@ -3255,7 +3255,7 @@ dependencies = [
  "getrandom 0.2.17",
  "hmac",
  "pbkdf2",
- "rand 0.8.5",
+ "rand 0.8.6",
  "rmp-serde",
  "serde",
  "serde_json",
@@ -3509,7 +3509,7 @@ dependencies = [
  "num-integer",
  "num-iter",
  "num-traits",
- "rand 0.8.5",
+ "rand 0.8.6",
  "smallvec",
  "zeroize",
 ]
@@ -3570,7 +3570,7 @@ dependencies = [
  "chrono",
  "getrandom 0.2.17",
  "http",
- "rand 0.8.5",
+ "rand 0.8.6",
  "reqwest 0.12.28",
  "serde",
  "serde_json",
@@ -3726,7 +3726,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "3c80231409c20246a13fddb31776fb942c38553c51e871f8cbd687a4cfb5843d"
 dependencies = [
  "phf_shared 0.11.3",
- "rand 0.8.5",
+ "rand 0.8.6",
 ]
 
 [[package]]
@@ -4231,9 +4231,9 @@ dependencies = [
 
 [[package]]
 name = "rand"
-version = "0.8.5"
+version = "0.8.6"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404"
+checksum = "5ca0ecfa931c29007047d1bc58e623ab12e5590e8c7cc53200d5202b69266d8a"
 dependencies = [
  "libc",
  "rand_chacha 0.3.1",
@@ -4693,7 +4693,7 @@ dependencies = [
  "js_int",
  "konst",
  "percent-encoding",
- "rand 0.8.5",
+ "rand 0.8.6",
  "regex",
  "ruma-identifiers-validation",
  "ruma-macros",
@@ -4803,7 +4803,7 @@ dependencies = [
  "base64",
  "ed25519-dalek",
  "pkcs8 0.10.2",
- "rand 0.8.5",
+ "rand 0.8.6",
  "ruma-common",
  "serde_json",
  "sha2 0.10.9",
@@ -4952,9 +4952,9 @@ checksum = "f87165f0995f63a9fbeea62b64d10b4d9d8e78ec6d7d51fb2125fda7bb36788f"
 
 [[package]]
 name = "rustls-webpki"
-version = "0.103.12"
+version = "0.103.13"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8279bb85272c9f10811ae6a6c547ff594d6a7f3c6c6b02ee9726d1d0dcfcdd06"
+checksum = "61c429a8649f110dddef65e2a5ad240f747e85f7758a6bccc7e5777bd33f756e"
 dependencies = [
  "aws-lc-rs",
  "ring",
@@ -5078,7 +5078,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "25996b82292a7a57ed3508f052cfff8640d38d32018784acd714758b43da9c8f"
 dependencies = [
  "bitcoin_hashes",
- "rand 0.8.5",
+ "rand 0.8.6",
  "secp256k1-sys",
 ]
@@ -5344,9 +5344,9 @@ dependencies = [
 
 [[package]]
 name = "sha3"
-version = "0.10.8"
+version = "0.10.9"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "75872d278a8f37ef87fa0ddbda7802605cb18344497949862c0d4dcb291eba60"
+checksum = "77fd7028345d415a4034cf8777cd4f8ab1851274233b45f84e3d955502d93874"
 dependencies = [
  "digest 0.10.7",
  "keccak",
@@ -5587,7 +5587,7 @@ dependencies = [
  "md-5",
  "memchr",
  "percent-encoding",
- "rand 0.8.5",
+ "rand 0.8.6",
  "rsa 0.9.10",
  "sha1",
  "sha2 0.10.9",
@@ -5623,7 +5623,7 @@ dependencies = [
  "log",
  "md-5",
  "memchr",
- "rand 0.8.5",
+ "rand 0.8.6",
  "serde",
  "serde_json",
  "sha2 0.10.9",
@@ -5996,9 +5996,9 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
 
 [[package]]
 name = "tokio"
-version = "1.52.0"
+version = "1.52.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a91135f59b1cbf38c91e73cf3386fca9bb77915c45ce2771460c9d92f0f3d776"
+checksum = "b67dee974fe86fd92cc45b7a95fdd2f99a36a6d7b0d431a231178d3d670bbcc6"
 dependencies = [
  "bytes",
  "libc",
@@ -6327,9 +6327,9 @@ dependencies = [
 
 [[package]]
 name = "typenum"
-version = "1.19.0"
+version = "1.20.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "562d481066bde0658276a35467c4af00bdc6ee726305698a55b86e61d7ad82bb"
+checksum = "40ce102ab67701b8526c123c1bab5cbe42d7040ccfd0f64af1a385808d2f43de"
 
 [[package]]
 name = "typewit"
@@ -6512,7 +6512,7 @@ dependencies = [
  "hmac",
  "matrix-pickle",
  "prost",
- "rand 0.8.5",
+ "rand 0.8.6",
  "serde",
  "serde_bytes",
  "serde_json",
@@ -6580,11 +6580,11 @@ checksum = "ccf3ec651a847eb01de73ccad15eb7d99f80485de043efb2f370cd654f4ea44b"
 
 [[package]]
 name = "wasip2"
-version = "1.0.2+wasi-0.2.9"
+version = "1.0.3+wasi-0.2.9"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9517f9239f02c069db75e65f174b3da828fe5f5b945c4dd26bd25d89c03ebcf5"
+checksum = "20064672db26d7cdc89c7798c48a0fdfac8213434a1186e5ef29fd560ae223d6"
 dependencies = [
- "wit-bindgen",
+ "wit-bindgen 0.57.1",
 ]
 
 [[package]]
@@ -6593,7 +6593,7 @@ version = "0.4.0+wasi-0.3.0-rc-2026-01-06"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "5428f8bf88ea5ddc08faddef2ac4a67e390b88186c703ce6dbd955e1c145aca5"
 dependencies = [
- "wit-bindgen",
+ "wit-bindgen 0.51.0",
 ]
 
 [[package]]
@@ -7271,6 +7271,12 @@ dependencies = [
  "wit-bindgen-rust-macro",
 ]
 
+[[package]]
+name = "wit-bindgen"
+version = "0.57.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1ebf944e87a7c253233ad6766e082e3cd714b5d03812acc24c318f549614536e"
+
 [[package]]
 name = "wit-bindgen-core"
 version = "0.51.0"
```

@@ -79,6 +79,13 @@ cd frontend && npm install && npm run dev

Configuration lives in `.huskies/project.toml`. See `.huskies/bot.toml.*.example` for transport setup.

## Architecture

Internal architecture documentation lives in [`docs/architecture/`](docs/architecture/):

- [Service module conventions](docs/architecture/service-modules.md) — layout, layering rules, and patterns for `server/src/service/`
- [Future extraction targets](docs/architecture/future-extractions.md) — recommended order for remaining handler extractions

## Releasing

Requires a Gitea API token in `.env` (`GITEA_TOKEN=your_token`).

@@ -0,0 +1,29 @@

# Future Service Module Extractions

Recommended order for extracting remaining HTTP handlers into `service/<domain>/` modules, following the conventions in [service-modules.md](service-modules.md).

## Recommended Order

1. **`settings`** — small surface, few dependencies, good warm-up
2. **`oauth`** — reads/writes token files; pure validation logic separates cleanly
3. **`wizard`** — stateless generation logic is already mostly pure; thin I/O layer
4. **`project`** — project scaffolding; wraps `io::fs::scaffold`, clean separation
5. **`io`** (search/shell) — wraps `io::search` and `io::shell`; pure query-building separable
6. **`anthropic`** — token-proxy handler; pure request-shaping + thin HTTP I/O
7. **`stories`** (workflow) — CRDT-backed story ops; typed errors for 400/404/409/500
8. **`events`** — SSE handler; mostly framework wiring, but event filtering is pure

## Special Case: `ws`

The WebSocket handler (`http/ws.rs`) is a **dedicated harder extraction** because it mixes multiple concerns (chat dispatch, permission forwarding, SSE bridging) and depends on long-lived async streams. Extract it last, after the above list is complete and the service module pattern is well-established.

## Notes

- Each extraction should link back to `docs/architecture/service-modules.md` in the story description to maintain consistency.
- The `agents` extraction (story 604) is the reference implementation every future extraction should follow.

@@ -0,0 +1,191 @@

# Service Module Conventions

This document defines the layout, layering rules, and patterns for all service modules under `server/src/service/`. Every extraction from the HTTP handlers to a service module **must** follow these conventions.

---

## 1. Directory Layout

```
server/src/service/<domain>/
  mod.rs     — public API, typed Error, orchestration, integration tests
  io.rs      — every side-effectful call; the ONLY file that may touch the
               filesystem, spawn processes, or call external crates that do
  <topic>.rs — pure logic for a named concern within the domain; no I/O
```

### Rules

- `<domain>` matches the HTTP handler filename (e.g. `agents`, `settings`, `oauth`).
- **No file named `logic.rs`** — use a descriptive domain name instead (e.g. `selection.rs`, `token.rs`, `validation.rs`).
- New topic files are added when a pure concern grows beyond ~50 lines or when it has independent test coverage needs.

---

## 2. The Functional-Core / Imperative-Shell Rule

```
io.rs (imperative shell) ←→ mod.rs (orchestrator) ←→ <topic>.rs (functional core)
```

| Layer | Allowed | Forbidden |
|-------|---------|-----------|
| `<topic>.rs` | Pure Rust, data-transformation, branching logic, pattern matching | Any I/O |
| `io.rs` | `std::fs`, `std::process`, `tokio::fs`, network calls, `SystemTime::now` | Business logic beyond a thin wrapper |
| `mod.rs` | Calls into `io.rs` and `<topic>.rs`; owns the `Error` type | Direct I/O without going through `io.rs` |

**Grep-enforceable check:** The following must NOT appear in any `service/<domain>/` file other than `io.rs`:

- `std::fs`
- `std::process`
- `std::thread::sleep`
- `tokio::fs`
- `reqwest`
- `SystemTime::now`

---

## 3. Error Type Pattern

Each service domain declares its own typed error enum in `mod.rs`:

```rust
/// Errors returned by `service::agents` operations.
#[derive(Debug)]
pub enum Error {
    ProjectRootNotConfigured,
    AgentNotFound(String),
    WorkItemNotFound(String),
    WorktreeError(String),
    ConfigError(String),
    IoError(String),
}

impl std::fmt::Display for Error { ... }
```

HTTP handlers map service errors to **specific** HTTP status codes:

| Error variant | HTTP status |
|--------------|-------------|
| `ProjectRootNotConfigured` | 400 Bad Request |
| `AgentNotFound` | 404 Not Found |
| `WorkItemNotFound` | 404 Not Found |
| `WorktreeError` | 400 Bad Request |
| `ConfigError` | 400 Bad Request |
| `IoError` | 500 Internal Server Error |

**No generic `bad_request` for everything** — distinguish 400 vs 404 vs 500.
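
The table reads as one small total match; here is a hedged sketch where returning a bare `u16` stands in for whatever response type the handlers' real error-mapping helper produces:

```rust
// Illustrative mapping from the typed service error to an HTTP status
// code, following the table above. Exhaustive: adding a variant forces
// this match (and therefore the status decision) to be revisited.
#[derive(Debug)]
pub enum Error {
    ProjectRootNotConfigured,
    AgentNotFound(String),
    WorkItemNotFound(String),
    WorktreeError(String),
    ConfigError(String),
    IoError(String),
}

fn status_for(err: &Error) -> u16 {
    match err {
        Error::ProjectRootNotConfigured
        | Error::WorktreeError(_)
        | Error::ConfigError(_) => 400,
        Error::AgentNotFound(_) | Error::WorkItemNotFound(_) => 404,
        Error::IoError(_) => 500,
    }
}

fn main() {
    assert_eq!(status_for(&Error::AgentNotFound("qa".into())), 404);
    assert_eq!(status_for(&Error::IoError("disk".into())), 500);
}
```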
|
||||
|
||||
---
|
||||
|
||||
## 4. Test Pattern
|
||||
|
||||
### Pure topic files (`<topic>.rs`)
|
||||
|
||||
```rust
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
|
||||
// Unit tests MUST:
|
||||
// - Use no tempdir, tokio runtime, or filesystem
|
||||
// - Cover every branch of every public function
|
||||
#[test]
|
||||
    fn filter_removes_archived_agents() { ... }
}
```

### `io.rs`

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;

    // IO tests MAY use tempdirs and real filesystem.
    // Keep them few and focused on the thin I/O wrapper contract.
    #[test]
    fn is_archived_returns_true_when_in_done() { ... }
}
```

### `mod.rs`

```rust
#[cfg(test)]
mod tests {
    use super::*;

    // Integration tests compose io + pure layers end-to-end.
    // May use tempdirs. Keep the count small — they are integration-level.
    #[tokio::test]
    async fn list_agents_excludes_archived() { ... }
}
```

---

## 5. Dependency Injection Pattern

Service functions take **only the dependencies they actually use**:

```rust
// Good — takes only what it needs
pub async fn start_agent(
    pool: &AgentPool,
    project_root: &Path,
    story_id: &str,
    agent_name: Option<&str>,
) -> Result<AgentInfo, Error> { ... }

// Bad — takes the whole AppContext
pub async fn start_agent(ctx: &AppContext, ...) -> Result<AgentInfo, Error> { ... }
```

Standard injected dependencies for `service::agents`:

| Type | Purpose |
|------|---------|
| `&AgentPool` | Agent lifecycle operations |
| `&Path` (`project_root`) | Filesystem operations scoped to the project |
| `&WorkflowState` | In-memory test result cache |

**The dependency set chosen for `agents` is the reference pattern for all future
service module extractions.**

---

## 6. HTTP Handler Contract

After extraction, HTTP handlers are thin adapters:

```rust
async fn start_agent(&self, payload: Json<StartAgentPayload>) -> OpenApiResult<...> {
    let project_root = self.ctx.agents.get_project_root(&self.ctx.state)
        .map_err(|e| bad_request(e))?;           // extract from AppContext
    let info = service::agents::start_agent(     // call service
        &self.ctx.agents, &project_root, &payload.story_id, payload.agent_name.as_deref(),
    ).await.map_err(map_service_error)?;         // map typed error → HTTP
    Ok(Json(AgentInfoResponse { ... }))          // shape DTO
}
```

Handlers must contain **no**:
- `std::fs` / file reads
- `std::process` invocations
- Inline load-mutate-save sequences
- Inline validation that belongs in the service layer

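The handler sketch above relies on a `map_service_error` helper that this document never defines. As a rough illustration of the idea (not the actual huskies implementation), here is a minimal sketch using a hypothetical `ServiceError` enum and bare status-code pairs in place of the real framework response types:

```rust
// Hypothetical sketch: map a typed service-layer error to an HTTP status
// plus message. `ServiceError` and the variant-to-status pairing are
// assumptions for illustration, not the real huskies types.
#[derive(Debug)]
enum ServiceError {
    NotFound(String),
    Invalid(String),
    Internal(String),
}

fn map_service_error(e: ServiceError) -> (u16, String) {
    match e {
        ServiceError::NotFound(msg) => (404, msg),
        ServiceError::Invalid(msg) => (400, msg),
        ServiceError::Internal(msg) => (500, msg),
    }
}

fn main() {
    // A validation failure in the service layer surfaces as a 400.
    let (status, body) = map_service_error(ServiceError::Invalid("bad story id".into()));
    assert_eq!(status, 400);
    println!("{status} {body}"); // prints "400 bad story id"
}
```

Centralizing the mapping this way keeps handlers free of per-variant `match` blocks, which is exactly the "thin adapter" contract above.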
---

## 7. Follow-up Extractions

See [future-extractions.md](future-extractions.md) for the recommended order
and rationale for remaining extraction targets.
Generated +2 -2
@@ -1,12 +1,12 @@
{
  "name": "huskies",
-  "version": "0.10.3",
+  "version": "0.10.4",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "huskies",
-      "version": "0.10.3",
+      "version": "0.10.4",
      "dependencies": {
        "@types/react-syntax-highlighter": "^15.5.13",
        "react": "^19.1.0",
@@ -1,7 +1,7 @@
{
  "name": "huskies",
  "private": true,
-  "version": "0.10.3",
+  "version": "0.10.4",
  "type": "module",
  "scripts": {
    "dev": "vite",
@@ -1,4 +1,5 @@
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import type { ProjectSettings } from "./settings";
import { settingsApi } from "./settings";

const mockFetch = vi.fn();
@@ -22,7 +23,77 @@ function errorResponse(status: number, text: string) {
  return new Response(text, { status });
}

const defaultProjectSettings: ProjectSettings = {
  default_qa: "server",
  default_coder_model: null,
  max_coders: null,
  max_retries: 2,
  base_branch: null,
  rate_limit_notifications: true,
  timezone: null,
  rendezvous: null,
  watcher_sweep_interval_secs: 60,
  watcher_done_retention_secs: 14400,
};

describe("settingsApi", () => {
  describe("getProjectSettings", () => {
    it("sends GET to /settings and returns project settings", async () => {
      mockFetch.mockResolvedValueOnce(okResponse(defaultProjectSettings));

      const result = await settingsApi.getProjectSettings();

      expect(mockFetch).toHaveBeenCalledWith(
        "/api/settings",
        expect.objectContaining({
          headers: expect.objectContaining({
            "Content-Type": "application/json",
          }),
        }),
      );
      expect(result).toEqual(defaultProjectSettings);
    });

    it("uses custom baseUrl when provided", async () => {
      mockFetch.mockResolvedValueOnce(okResponse(defaultProjectSettings));
      await settingsApi.getProjectSettings("http://localhost:4000/api");
      expect(mockFetch).toHaveBeenCalledWith(
        "http://localhost:4000/api/settings",
        expect.anything(),
      );
    });
  });

  describe("putProjectSettings", () => {
    it("sends PUT to /settings with settings body", async () => {
      const updated = { ...defaultProjectSettings, default_qa: "agent" };
      mockFetch.mockResolvedValueOnce(okResponse(updated));

      const result = await settingsApi.putProjectSettings(updated);

      expect(mockFetch).toHaveBeenCalledWith(
        "/api/settings",
        expect.objectContaining({
          method: "PUT",
          body: JSON.stringify(updated),
        }),
      );
      expect(result.default_qa).toBe("agent");
    });

    it("throws on validation error", async () => {
      mockFetch.mockResolvedValueOnce(
        errorResponse(400, "Invalid default_qa value"),
      );
      await expect(
        settingsApi.putProjectSettings({
          ...defaultProjectSettings,
          default_qa: "invalid",
        }),
      ).rejects.toThrow("Invalid default_qa value");
    });
  });

  describe("getEditorCommand", () => {
    it("sends GET to /settings/editor and returns editor settings", async () => {
      const expected = { editor_command: "zed" };
@@ -2,6 +2,19 @@ export interface EditorSettings {
  editor_command: string | null;
}

export interface ProjectSettings {
  default_qa: string;
  default_coder_model: string | null;
  max_coders: number | null;
  max_retries: number;
  base_branch: string | null;
  rate_limit_notifications: boolean;
  timezone: string | null;
  rendezvous: string | null;
  watcher_sweep_interval_secs: number;
  watcher_done_retention_secs: number;
}

export interface OpenFileResult {
  success: boolean;
}
@@ -34,6 +47,21 @@ async function requestJson<T>(
}

export const settingsApi = {
  getProjectSettings(baseUrl?: string): Promise<ProjectSettings> {
    return requestJson<ProjectSettings>("/settings", {}, baseUrl);
  },

  putProjectSettings(
    settings: ProjectSettings,
    baseUrl?: string,
  ): Promise<ProjectSettings> {
    return requestJson<ProjectSettings>(
      "/settings",
      { method: "PUT", body: JSON.stringify(settings) },
      baseUrl,
    );
  },

  getEditorCommand(baseUrl?: string): Promise<EditorSettings> {
    return requestJson<EditorSettings>("/settings/editor", {}, baseUrl);
  },
@@ -9,6 +9,7 @@ import { useChatWebSocket } from "../hooks/useChatWebSocket";
import { estimateTokens, getContextWindowSize } from "../utils/chatUtils";
import { ApiKeyDialog } from "./ApiKeyDialog";
import { BotConfigPage } from "./BotConfigPage";
import { SettingsPage } from "./SettingsPage";
import { ChatHeader } from "./ChatHeader";
import type { ChatInputHandle } from "./ChatInput";
import { ChatInput } from "./ChatInput";
@@ -62,7 +63,7 @@ export function Chat({
    null,
  );
  const [showHelp, setShowHelp] = useState(false);
-  const [view, setView] = useState<"chat" | "bot-config">("chat");
+  const [view, setView] = useState<"chat" | "bot-config" | "settings">("chat");
  const [queuedMessages, setQueuedMessages] = useState<
    { id: string; text: string }[]
  >([]);
@@ -376,16 +377,21 @@ export function Chat({
        wsConnected={wsConnected}
        oauthStatus={oauthStatus}
        onShowBotConfig={() => setView("bot-config")}
        onShowSettings={() => setView("settings")}
      />

      {view === "bot-config" && (
        <BotConfigPage onBack={() => setView("chat")} />
      )}

      {view === "settings" && (
        <SettingsPage onBack={() => setView("chat")} />
      )}

      <div
        data-testid="chat-content-area"
        style={{
-          display: view === "bot-config" ? "none" : "flex",
+          display: view === "chat" ? "flex" : "none",
          flex: 1,
          minHeight: 0,
          flexDirection: isNarrowScreen ? "column" : "row",
@@ -35,6 +35,7 @@ interface ChatHeaderProps {
  wsConnected: boolean;
  oauthStatus?: OAuthStatus | null;
  onShowBotConfig?: () => void;
  onShowSettings?: () => void;
}

const getContextEmoji = (percentage: number): string => {
@@ -60,6 +61,7 @@ export function ChatHeader({
  wsConnected,
  oauthStatus = null,
  onShowBotConfig,
  onShowSettings,
}: ChatHeaderProps) {
  const hasModelOptions = availableModels.length > 0 || claudeModels.length > 0;
  const [showConfirm, setShowConfirm] = useState(false);
@@ -552,6 +554,43 @@ export function ChatHeader({
        </button>
      )}

      {onShowSettings && (
        <button
          type="button"
          onClick={onShowSettings}
          title="Edit project.toml settings"
          style={{
            padding: "6px 12px",
            borderRadius: "99px",
            border: "none",
            fontSize: "0.85em",
            backgroundColor: "#2f2f2f",
            color: "#888",
            cursor: "pointer",
            outline: "none",
            transition: "all 0.2s",
          }}
          onMouseOver={(e) => {
            e.currentTarget.style.backgroundColor = "#3f3f3f";
            e.currentTarget.style.color = "#ccc";
          }}
          onMouseOut={(e) => {
            e.currentTarget.style.backgroundColor = "#2f2f2f";
            e.currentTarget.style.color = "#888";
          }}
          onFocus={(e) => {
            e.currentTarget.style.backgroundColor = "#3f3f3f";
            e.currentTarget.style.color = "#ccc";
          }}
          onBlur={(e) => {
            e.currentTarget.style.backgroundColor = "#2f2f2f";
            e.currentTarget.style.color = "#888";
          }}
        >
          ⚙ Settings
        </button>
      )}

      {hasModelOptions ? (
        <select
          value={model}
@@ -0,0 +1,461 @@
import * as React from "react";
import type { ProjectSettings } from "../api/settings";
import { settingsApi } from "../api/settings";

const { useState, useEffect } = React;

interface SettingsPageProps {
  onBack: () => void;
}

const fieldStyle: React.CSSProperties = {
  display: "flex",
  flexDirection: "column",
  gap: "4px",
};

const labelStyle: React.CSSProperties = {
  fontSize: "0.8em",
  color: "#aaa",
  fontWeight: 500,
};

const descStyle: React.CSSProperties = {
  fontSize: "0.75em",
  color: "#666",
  marginTop: "2px",
};

const inputStyle: React.CSSProperties = {
  padding: "8px 10px",
  borderRadius: "6px",
  border: "1px solid #333",
  background: "#1e1e1e",
  color: "#ececec",
  fontSize: "0.9em",
  fontFamily: "monospace",
  outline: "none",
};

const sectionStyle: React.CSSProperties = {
  background: "#1e1e1e",
  border: "1px solid #333",
  borderRadius: "8px",
  padding: "20px",
  display: "flex",
  flexDirection: "column",
  gap: "16px",
};

const sectionTitleStyle: React.CSSProperties = {
  fontSize: "0.85em",
  fontWeight: 600,
  color: "#aaa",
  textTransform: "uppercase",
  letterSpacing: "0.06em",
  marginBottom: "2px",
};

interface TextFieldProps {
  label: string;
  description?: string;
  value: string;
  onChange: (v: string) => void;
  placeholder?: string;
}

function TextField({ label, description, value, onChange, placeholder }: TextFieldProps) {
  return (
    <div style={fieldStyle}>
      <label style={labelStyle}>{label}</label>
      {description && <span style={descStyle}>{description}</span>}
      <input
        type="text"
        value={value}
        onChange={(e) => onChange(e.target.value)}
        placeholder={placeholder ?? ""}
        style={inputStyle}
        autoComplete="off"
      />
    </div>
  );
}

interface NumberFieldProps {
  label: string;
  description?: string;
  value: number | null;
  onChange: (v: number | null) => void;
  min?: number;
  placeholder?: string;
}

function NumberField({ label, description, value, onChange, min, placeholder }: NumberFieldProps) {
  return (
    <div style={fieldStyle}>
      <label style={labelStyle}>{label}</label>
      {description && <span style={descStyle}>{description}</span>}
      <input
        type="number"
        value={value === null ? "" : value}
        min={min}
        onChange={(e) => {
          const raw = e.target.value.trim();
          if (raw === "") {
            onChange(null);
          } else {
            const n = Number(raw);
            if (!Number.isNaN(n)) onChange(n);
          }
        }}
        placeholder={placeholder ?? ""}
        style={inputStyle}
      />
    </div>
  );
}

interface CheckboxFieldProps {
  label: string;
  description?: string;
  checked: boolean;
  onChange: (v: boolean) => void;
}

function CheckboxField({ label, description, checked, onChange }: CheckboxFieldProps) {
  return (
    <div style={fieldStyle}>
      {description && <span style={descStyle}>{description}</span>}
      <label
        style={{
          display: "flex",
          alignItems: "center",
          gap: "8px",
          cursor: "pointer",
          fontSize: "0.9em",
          color: "#ccc",
        }}
      >
        <input
          type="checkbox"
          checked={checked}
          onChange={(e) => onChange(e.target.checked)}
        />
        {label}
      </label>
    </div>
  );
}

const QA_MODES = ["server", "agent", "human"] as const;

/** Settings page — form-based editor for project.toml scalar settings. */
export function SettingsPage({ onBack }: SettingsPageProps) {
  const [settings, setSettings] = useState<ProjectSettings | null>(null);
  const [status, setStatus] = useState<"idle" | "loading" | "saving" | "saved" | "error">("loading");
  const [errorMsg, setErrorMsg] = useState<string | null>(null);
  const [validationErrors, setValidationErrors] = useState<Record<string, string>>({});

  useEffect(() => {
    settingsApi
      .getProjectSettings()
      .then((s) => {
        setSettings(s);
        setStatus("idle");
      })
      .catch((e: unknown) => {
        setStatus("error");
        setErrorMsg(e instanceof Error ? e.message : "Failed to load settings");
      });
  }, []);

  function patch(partial: Partial<ProjectSettings>) {
    setSettings((prev) => (prev ? { ...prev, ...partial } : prev));
    setValidationErrors({});
  }

  function validate(s: ProjectSettings): Record<string, string> {
    const errors: Record<string, string> = {};
    if (!QA_MODES.includes(s.default_qa as (typeof QA_MODES)[number])) {
      errors.default_qa = `Must be one of: ${QA_MODES.join(", ")}`;
    }
    if (s.max_retries < 0) {
      errors.max_retries = "Must be 0 or greater";
    }
    if (s.watcher_sweep_interval_secs < 1) {
      errors.watcher_sweep_interval_secs = "Must be at least 1 second";
    }
    if (s.watcher_done_retention_secs < 1) {
      errors.watcher_done_retention_secs = "Must be at least 1 second";
    }
    return errors;
  }

  async function handleSave() {
    if (!settings) return;
    const errors = validate(settings);
    if (Object.keys(errors).length > 0) {
      setValidationErrors(errors);
      return;
    }
    setStatus("saving");
    setErrorMsg(null);
    try {
      const saved = await settingsApi.putProjectSettings(settings);
      setSettings(saved);
      setStatus("saved");
      setTimeout(() => setStatus("idle"), 2000);
    } catch (e) {
      setStatus("error");
      setErrorMsg(e instanceof Error ? e.message : "Save failed");
    }
  }

  const s = settings;

  return (
    <div
      style={{
        display: "flex",
        flexDirection: "column",
        height: "100%",
        backgroundColor: "#171717",
        color: "#ececec",
        overflow: "auto",
      }}
    >
      {/* Header */}
      <div
        style={{
          padding: "12px 24px",
          borderBottom: "1px solid #333",
          display: "flex",
          alignItems: "center",
          gap: "16px",
          background: "#171717",
          flexShrink: 0,
        }}
      >
        <button
          type="button"
          onClick={onBack}
          style={{
            background: "transparent",
            border: "none",
            cursor: "pointer",
            color: "#888",
            fontSize: "0.9em",
            padding: "4px 8px",
            borderRadius: "4px",
          }}
        >
          ← Back
        </button>
        <span style={{ fontWeight: 700, fontSize: "1em" }}>Project Settings</span>
      </div>

      {/* Body */}
      <div
        style={{
          flex: 1,
          padding: "24px",
          display: "flex",
          flexDirection: "column",
          gap: "20px",
          maxWidth: "640px",
        }}
      >
        {status === "loading" && (
          <p style={{ color: "#888", fontSize: "0.9em" }}>Loading settings…</p>
        )}

        {status === "error" && !s && (
          <p style={{ color: "#f08080", fontSize: "0.9em" }}>
            Error: {errorMsg}
          </p>
        )}

        {s && (
          <>
            {/* Pipeline */}
            <div style={sectionStyle}>
              <div style={sectionTitleStyle}>Pipeline</div>

              <div style={fieldStyle}>
                <label style={labelStyle}>Default QA Mode</label>
                <span style={descStyle}>
                  How stories are QA-reviewed after the coder stage.
                  Default: server.
                </span>
                <select
                  value={s.default_qa}
                  onChange={(e) => patch({ default_qa: e.target.value })}
                  style={{ ...inputStyle, cursor: "pointer" }}
                >
                  {QA_MODES.map((m) => (
                    <option key={m} value={m}>
                      {m}
                    </option>
                  ))}
                </select>
                {validationErrors.default_qa && (
                  <span style={{ color: "#f08080", fontSize: "0.8em" }}>
                    {validationErrors.default_qa}
                  </span>
                )}
              </div>

              <NumberField
                label="Max Retries"
                description="Maximum retries per story per pipeline stage before blocking. Default: 2. Set 0 to disable."
                value={s.max_retries}
                min={0}
                onChange={(v) => patch({ max_retries: v ?? 0 })}
              />
              {validationErrors.max_retries && (
                <span style={{ color: "#f08080", fontSize: "0.8em" }}>
                  {validationErrors.max_retries}
                </span>
              )}

              <NumberField
                label="Max Concurrent Coders"
                description="Maximum number of coder-stage agents running at once. Leave blank for unlimited."
                value={s.max_coders}
                min={1}
                placeholder="unlimited"
                onChange={(v) => patch({ max_coders: v })}
              />

              <TextField
                label="Default Coder Model"
                description="When set, only coder agents matching this model are auto-assigned (e.g. sonnet, opus)."
                value={s.default_coder_model ?? ""}
                onChange={(v) =>
                  patch({ default_coder_model: v.trim() || null })
                }
                placeholder="e.g. sonnet"
              />
            </div>

            {/* Git */}
            <div style={sectionStyle}>
              <div style={sectionTitleStyle}>Git</div>

              <TextField
                label="Base Branch"
                description="Overrides auto-detection of the merge target branch (e.g. main, master, develop)."
                value={s.base_branch ?? ""}
                onChange={(v) =>
                  patch({ base_branch: v.trim() || null })
                }
                placeholder="e.g. master"
              />
            </div>

            {/* Notifications */}
            <div style={sectionStyle}>
              <div style={sectionTitleStyle}>Notifications</div>

              <CheckboxField
                label="Rate Limit Notifications"
                description="Send chat notifications on soft API rate-limit warnings. Disable to reduce noise."
                checked={s.rate_limit_notifications}
                onChange={(v) => patch({ rate_limit_notifications: v })}
              />
            </div>

            {/* Advanced */}
            <div style={sectionStyle}>
              <div style={sectionTitleStyle}>Advanced</div>

              <TextField
                label="Timezone"
                description="IANA timezone for timer inputs (e.g. Europe/London, America/New_York). Leave blank for system default."
                value={s.timezone ?? ""}
                onChange={(v) => patch({ timezone: v.trim() || null })}
                placeholder="e.g. Europe/London"
              />

              <TextField
                label="Rendezvous URL"
                description="WebSocket URL of a remote huskies node for CRDT state sync (e.g. ws://host:3001/crdt-sync)."
                value={s.rendezvous ?? ""}
                onChange={(v) => patch({ rendezvous: v.trim() || null })}
                placeholder="e.g. ws://host:3001/crdt-sync"
              />
            </div>

            {/* Watcher */}
            <div style={sectionStyle}>
              <div style={sectionTitleStyle}>Archiver</div>

              <NumberField
                label="Sweep Interval (seconds)"
                description="How often to check the done stage for items ready to archive. Default: 60."
                value={s.watcher_sweep_interval_secs}
                min={1}
                onChange={(v) =>
                  patch({ watcher_sweep_interval_secs: v ?? 60 })
                }
              />
              {validationErrors.watcher_sweep_interval_secs && (
                <span style={{ color: "#f08080", fontSize: "0.8em" }}>
                  {validationErrors.watcher_sweep_interval_secs}
                </span>
              )}

              <NumberField
                label="Done Retention (seconds)"
                description="How long an item must stay in the done stage before archiving. Default: 14400 (4 hours)."
                value={s.watcher_done_retention_secs}
                min={1}
                onChange={(v) =>
                  patch({ watcher_done_retention_secs: v ?? 14400 })
                }
              />
              {validationErrors.watcher_done_retention_secs && (
                <span style={{ color: "#f08080", fontSize: "0.8em" }}>
                  {validationErrors.watcher_done_retention_secs}
                </span>
              )}
            </div>

            {/* Save */}
            <div style={{ display: "flex", alignItems: "center", gap: "12px" }}>
              <button
                type="button"
                onClick={handleSave}
                disabled={status === "saving"}
                style={{
                  padding: "8px 24px",
                  borderRadius: "6px",
                  border: "none",
                  background:
                    status === "saved" ? "#1a5c2a" : "#2563eb",
                  color: "#fff",
                  cursor:
                    status === "saving" ? "not-allowed" : "pointer",
                  fontSize: "0.9em",
                  fontWeight: 600,
                  opacity: status === "saving" ? 0.7 : 1,
                }}
              >
                {status === "saving"
                  ? "Saving…"
                  : status === "saved"
                    ? "Saved!"
                    : "Save"}
              </button>
              {status === "error" && errorMsg && (
                <span style={{ color: "#f08080", fontSize: "0.85em" }}>
                  {errorMsg}
                </span>
              )}
            </div>
          </>
        )}
      </div>
    </div>
  );
}
+1 -1
@@ -1,6 +1,6 @@
[package]
name = "huskies"
-version = "0.10.3"
+version = "0.10.4"
edition = "2024"
build = "build.rs"
@@ -0,0 +1,118 @@
//! Project-local agent prompt layer.
//!
//! Reads `.huskies/AGENT.md` from the project root and appends its content to
//! the baked-in agent prompt at spawn time. This lets projects record
//! non-obvious facts (directory conventions, known traps, etc.) that every
//! agent should know without modifying the shared agent configuration.
//!
//! Behaviour contract:
//! - If the file is missing or empty the caller receives `None`; agents spawn
//!   normally with no warnings or errors.
//! - If the file exists and is non-empty, the content is returned and an
//!   INFO-level log line is emitted with the file path and byte count.
//! - The file is read fresh on every agent spawn — no caching.

use std::path::Path;

/// Attempt to load the project-local agent prompt from `.huskies/AGENT.md`.
///
/// Returns `Some(content)` when the file exists and is non-empty, or `None`
/// when the file is absent or empty. Never returns an error; any I/O problem
/// is silently treated as "no local prompt".
pub fn read_project_local_prompt(project_root: &Path) -> Option<String> {
    let path = project_root.join(".huskies/AGENT.md");
    let content = std::fs::read_to_string(&path).ok()?;
    let trimmed = content.trim();
    if trimmed.is_empty() {
        return None;
    }
    crate::slog!(
        "[agents] project-local prompt loaded: {} ({} bytes)",
        path.display(),
        trimmed.len()
    );
    Some(trimmed.to_string())
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn returns_none_when_file_absent() {
        let tmp = tempfile::tempdir().unwrap();
        let result = read_project_local_prompt(tmp.path());
        assert!(result.is_none(), "missing file must return None");
    }

    #[test]
    fn returns_none_when_file_empty() {
        let tmp = tempfile::tempdir().unwrap();
        let huskies_dir = tmp.path().join(".huskies");
        std::fs::create_dir_all(&huskies_dir).unwrap();
        std::fs::write(huskies_dir.join("AGENT.md"), "").unwrap();
        let result = read_project_local_prompt(tmp.path());
        assert!(result.is_none(), "empty file must return None");
    }

    #[test]
    fn returns_none_when_file_whitespace_only() {
        let tmp = tempfile::tempdir().unwrap();
        let huskies_dir = tmp.path().join(".huskies");
        std::fs::create_dir_all(&huskies_dir).unwrap();
        std::fs::write(huskies_dir.join("AGENT.md"), " \n\n ").unwrap();
        let result = read_project_local_prompt(tmp.path());
        assert!(result.is_none(), "whitespace-only file must return None");
    }

    #[test]
    fn returns_content_when_file_non_empty() {
        let tmp = tempfile::tempdir().unwrap();
        let huskies_dir = tmp.path().join(".huskies");
        std::fs::create_dir_all(&huskies_dir).unwrap();
        let marker = "DISTINCTIVE_MARKER_XYZ42";
        std::fs::write(huskies_dir.join("AGENT.md"), format!("# Hints\n{marker}\n")).unwrap();
        let result = read_project_local_prompt(tmp.path());
        assert!(result.is_some(), "non-empty file must return Some");
        let content = result.unwrap();
        assert!(
            content.contains(marker),
            "returned content must include the marker: {content}"
        );
    }

    #[test]
    fn appended_to_prompt_integration() {
        // Simulates the start.rs usage: marker appears in the constructed
        // system prompt when the file is present, absent when it is not.
        let tmp_with = tempfile::tempdir().unwrap();
        let huskies_dir = tmp_with.path().join(".huskies");
        std::fs::create_dir_all(&huskies_dir).unwrap();
        let marker = "INTEGRATION_MARKER_601";
        std::fs::write(huskies_dir.join("AGENT.md"), marker).unwrap();

        let base_prompt = "You are a coder agent.".to_string();
        let local = read_project_local_prompt(tmp_with.path());
        let effective = match local {
            Some(ref extra) => format!("{base_prompt}\n\n{extra}"),
            None => base_prompt.clone(),
        };
        assert!(
            effective.contains(marker),
            "marker must appear in effective prompt when file present: {effective}"
        );

        // Without the file
        let tmp_without = tempfile::tempdir().unwrap();
        let local2 = read_project_local_prompt(tmp_without.path());
        assert!(local2.is_none(), "no marker when file absent");
        let effective2 = match local2 {
            Some(ref extra) => format!("{base_prompt}\n\n{extra}"),
            None => base_prompt.clone(),
        };
        assert!(
            !effective2.contains(marker),
            "marker must NOT appear in effective prompt when file absent: {effective2}"
        );
    }
}
@@ -1,6 +1,7 @@
//! Agent subsystem — types, configuration, and orchestration for coding agents.
pub mod gates;
pub mod lifecycle;
pub mod local_prompt;
pub mod merge;
mod pool;
pub(crate) mod pty;
@@ -410,6 +410,17 @@ impl AgentPool {
            }
        };

        // Append project-local prompt content (.huskies/AGENT.md) to the
        // baked-in prompt so every agent role sees project-specific guidance
        // without any config changes. The file is read fresh each spawn;
        // if absent or empty, the prompt is unchanged and no warning is logged.
        if let Some(local) =
            crate::agents::local_prompt::read_project_local_prompt(&project_root_clone)
        {
            prompt.push_str("\n\n");
            prompt.push_str(&local);
        }

        // Build the effective prompt and determine resume session.
        //
        // When resuming a previous session, discard the full rendered prompt
@@ -4,7 +4,7 @@ use crate::chat::ChatTransport;
use crate::chat::timer::TimerStore;
use crate::http::context::{PermissionDecision, PermissionForward};
use matrix_sdk::ruma::{OwnedEventId, OwnedRoomId, OwnedUserId};
-use std::collections::{HashMap, HashSet};
+use std::collections::{BTreeMap, HashMap, HashSet};
use std::path::PathBuf;
use std::sync::Arc;
use tokio::sync::Mutex as TokioMutex;
@@ -65,6 +65,10 @@ pub struct BotContext {
    /// In gateway mode: valid project names accepted by the `switch` command.
    /// Empty in standalone mode.
    pub gateway_projects: Vec<String>,
    /// In gateway mode: mapping of project name → base URL (e.g. `"http://localhost:3001"`).
    /// Used to proxy bot commands to the active project's `/api/bot/command` endpoint.
    /// Empty in standalone mode.
    pub gateway_project_urls: BTreeMap<String, String>,
}

impl BotContext {
@@ -82,6 +86,49 @@ impl BotContext {
            self.project_root.clone()
        }
    }

    /// Returns `true` if the bot is running in gateway mode.
    pub fn is_gateway(&self) -> bool {
        self.gateway_active_project.is_some()
    }

    /// Return the base URL for the currently active project, if in gateway mode.
    pub async fn active_project_url(&self) -> Option<String> {
        let ap = self.gateway_active_project.as_ref()?;
        let name = ap.read().await.clone();
        self.gateway_project_urls.get(&name).cloned()
    }

    /// Proxy a bot command to the active project's `/api/bot/command` endpoint.
    ///
    /// Returns the Markdown response from the project server, or an error
    /// message if the request failed.
    pub async fn proxy_bot_command(&self, command: &str, args: &str) -> Option<String> {
        let base_url = self.active_project_url().await?;
        let url = format!("{base_url}/api/bot/command");
        let client = reqwest::Client::new();
        let body = serde_json::json!({
            "command": command,
            "args": args,
        });
        match client.post(&url).json(&body).send().await {
            Ok(resp) if resp.status().is_success() => {
                match resp.json::<serde_json::Value>().await {
                    Ok(json) => json
                        .get("response")
                        .and_then(|v| v.as_str())
                        .map(String::from),
                    Err(e) => Some(format!("Failed to parse response from project server: {e}")),
                }
            }
            Ok(resp) => Some(format!(
                "Project server returned HTTP {}: {}",
                resp.status(),
                resp.text().await.unwrap_or_default()
            )),
            Err(e) => Some(format!("Failed to reach project server at {url}: {e}")),
        }
    }
}

// ---------------------------------------------------------------------------
@@ -135,6 +182,7 @@ mod tests {
            )),
            gateway_active_project: None,
            gateway_projects: vec![],
            gateway_project_urls: BTreeMap::new(),
        };
        assert_eq!(
            ctx.effective_project_root().await,
@@ -172,6 +220,10 @@ mod tests {
            )),
            gateway_active_project: Some(Arc::clone(&active)),
            gateway_projects: vec!["huskies".into(), "robot-studio".into()],
            gateway_project_urls: BTreeMap::from([
                ("huskies".into(), "http://localhost:3001".into()),
                ("robot-studio".into(), "http://localhost:3002".into()),
            ]),
        };
        assert_eq!(
            ctx.effective_project_root().await,
@@ -209,6 +261,10 @@ mod tests {
            )),
            gateway_active_project: Some(Arc::clone(&active)),
            gateway_projects: vec!["huskies".into(), "robot-studio".into()],
            gateway_project_urls: BTreeMap::from([
                ("huskies".into(), "http://localhost:3001".into()),
                ("robot-studio".into(), "http://localhost:3002".into()),
            ]),
        };

        assert_eq!(
@@ -255,6 +311,7 @@ mod tests {
            )),
            gateway_active_project: None,
            gateway_projects: vec![],
            gateway_project_urls: BTreeMap::new(),
        };
        // Clone must work (required by Matrix SDK event handler injection).
        let _cloned = ctx.clone();
@@ -179,6 +179,80 @@ pub(super) async fn on_room_message(
// a subdirectory named after the project. Standalone mode is unaffected.
let effective_root = ctx.effective_project_root().await;

// ── Gateway command proxy ───────────────────────────────────────────
// In gateway mode the bot has no local CRDT or project filesystem, so most
// commands must be forwarded to the active project's `/api/bot/command`
// endpoint. Only a small set of gateway-local commands are handled here.
if ctx.is_gateway() {
// Commands that are meaningful on the gateway itself (no project state needed).
const GATEWAY_LOCAL_COMMANDS: &[&str] =
&["help", "ambient", "reset", "switch", "all_status"];

let stripped = crate::chat::util::strip_bot_mention(
&user_message,
&ctx.bot_name,
ctx.bot_user_id.as_str(),
)
.trim()
.trim_start_matches(|c: char| !c.is_alphanumeric())
.to_string();

let (cmd, args) = match stripped.split_once(char::is_whitespace) {
Some((c, a)) => (c.to_ascii_lowercase(), a.trim().to_string()),
None => (stripped.to_ascii_lowercase(), String::new()),
};

// Only proxy if the first word is a known bot command (sync or async).
let is_known_command = !cmd.is_empty()
&& !GATEWAY_LOCAL_COMMANDS.contains(&cmd.as_str())
&& (crate::chat::commands::commands()
.iter()
.any(|c| c.name == cmd)
|| [
"assign", "start", "delete", "rebuild", "rmtree", "htop", "timer",
]
.contains(&cmd.as_str()));

if is_known_command {
// Proxy to the active project server.
let response = match ctx.proxy_bot_command(&cmd, &args).await {
Some(r) => r,
None => "No active project selected or project URL not configured.".to_string(),
};
let html = markdown_to_html(&response);
if let Ok(msg_id) = ctx
.transport
.send_message(&room_id_str, &response, &html)
.await
&& let Ok(event_id) = msg_id.parse()
{
ctx.bot_sent_event_ids.lock().await.insert(event_id);
}
return;
}

// `all_status` — aggregate pipeline status across all projects (gateway-only).
if cmd == "all_status" {
let project_urls = ctx.gateway_project_urls.clone();
let client = reqwest::Client::new();
let statuses =
crate::gateway::fetch_all_project_pipeline_statuses(&project_urls, &client).await;
let response = crate::gateway::format_aggregate_status_compact(&statuses);
let html = markdown_to_html(&response);
if let Ok(msg_id) = ctx
.transport
.send_message(&room_id_str, &response, &html)
.await
&& let Ok(event_id) = msg_id.parse()
{
ctx.bot_sent_event_ids.lock().await.insert(event_id);
}
return;
}

// Gateway-local commands and freeform text fall through to normal handling below.
}

// Check for bot-level commands (help, status, ambient, …) before invoking
// the LLM. All commands are registered in commands.rs — no special-casing
// needed here.
@@ -592,12 +666,18 @@ pub(super) async fn handle_message(
let sent_any_chunk = Arc::new(AtomicBool::new(false));
let sent_any_chunk_for_callback = Arc::clone(&sent_any_chunk);

// In gateway mode, run Claude Code in the active project's directory.
let project_root_str = ctx
.effective_project_root()
.await
.to_string_lossy()
.to_string();
// In gateway mode, run Claude Code in the gateway config directory so it
// picks up the `.mcp.json` that points to the gateway's MCP proxy endpoint.
// The gateway proxies tool calls to the active project automatically.
// In standalone mode, use the project root directly.
let project_root_str = if ctx.is_gateway() {
ctx.project_root.to_string_lossy().to_string()
} else {
ctx.effective_project_root()
.await
.to_string_lossy()
.to_string()
};
let chat_fut = provider.chat_stream(
&prompt,
&project_root_str,

@@ -30,6 +30,7 @@ pub async fn run_bot(
shutdown_rx: watch::Receiver<Option<crate::rebuild::ShutdownReason>>,
gateway_active_project: Option<Arc<RwLock<String>>>,
gateway_projects: Vec<String>,
gateway_project_urls: std::collections::BTreeMap<String, String>,
) -> Result<(), String> {
let store_path = project_root.join(".huskies").join("matrix_store");
let client = Client::builder()
@@ -167,6 +168,11 @@ pub async fn run_bot(
let notif_room_ids = target_room_ids.clone();
let notif_project_root = project_root.clone();
let announce_room_ids = target_room_ids.clone();
// Clone values needed by the gateway notification poller (only used in gateway mode).
let poller_room_ids: Vec<String> = target_room_ids.iter().map(|r| r.to_string()).collect();
let poller_project_urls = gateway_project_urls.clone();
let poller_poll_interval = config.aggregated_notifications_poll_interval_secs;
let poller_enabled = config.aggregated_notifications_enabled;

let persisted = load_history(&project_root);
slog!(
@@ -247,6 +253,7 @@ pub async fn run_bot(
timer_store,
gateway_active_project,
gateway_projects,
gateway_project_urls,
};

slog!(
@@ -269,6 +276,20 @@ pub async fn run_bot(
notif_project_root,
);

// In gateway mode, spawn the cross-project notification poller.
// It polls every registered project's `/api/events` endpoint and forwards
// new events to the configured gateway rooms with a `[project-name]` prefix.
// The poller is controlled by the gateway-level `aggregated_notifications_enabled`
// flag in bot.toml — set it to `false` to disable without touching per-project configs.
if !poller_project_urls.is_empty() && poller_enabled {
crate::gateway::spawn_gateway_notification_poller(
Arc::clone(&transport),
poller_room_ids,
poller_project_urls,
poller_poll_interval,
);
}

// Spawn a shutdown watcher that sends a best-effort goodbye message to all
// configured rooms when the server is about to stop (SIGINT/SIGTERM or rebuild).
{

@@ -10,6 +10,14 @@ fn default_permission_timeout_secs() -> u64 {
120
}

fn default_aggregated_notifications_poll_interval_secs() -> u64 {
5
}

fn default_aggregated_notifications_enabled() -> bool {
true
}

/// Configuration for the Matrix bot, read from `.huskies/bot.toml`.
#[derive(Deserialize, Clone, Debug)]
pub struct BotConfig {
@@ -146,6 +154,26 @@ pub struct BotConfig {
/// When empty or absent, all users in configured channels are allowed.
#[serde(default)]
pub discord_allowed_users: Vec<String>,

/// How often (in seconds) the gateway polls each project server's
/// `/api/events` endpoint to aggregate cross-project notifications.
///
/// Only used when the gateway's bot is enabled. Defaults to 5 seconds.
#[serde(default = "default_aggregated_notifications_poll_interval_secs")]
pub aggregated_notifications_poll_interval_secs: u64,

/// Whether the gateway-level aggregated cross-project notification stream
/// is enabled. When `false`, the gateway will not poll per-project
/// servers for events even if the bot is otherwise enabled.
///
/// Set this in the **gateway's** `bot.toml` (not in per-project configs).
/// Adding a new project to `projects.toml` never requires touching
/// per-project bot configs — the aggregated stream picks it up
/// automatically once this flag is `true` (the default).
///
/// Defaults to `true`.
#[serde(default = "default_aggregated_notifications_enabled")]
pub aggregated_notifications_enabled: bool,
}

fn default_transport() -> String {
@@ -658,6 +686,47 @@ require_verified_devices = true
);
}

#[test]
fn aggregated_notifications_enabled_defaults_to_true() {
let tmp = tempfile::tempdir().unwrap();
let sk = tmp.path().join(".huskies");
fs::create_dir_all(&sk).unwrap();
fs::write(
sk.join("bot.toml"),
r#"
homeserver = "https://matrix.example.com"
username = "@bot:example.com"
password = "secret"
room_ids = ["!abc:example.com"]
enabled = true
"#,
)
.unwrap();
let config = BotConfig::load(tmp.path()).unwrap();
assert!(config.aggregated_notifications_enabled);
}

#[test]
fn aggregated_notifications_enabled_can_be_set_to_false() {
let tmp = tempfile::tempdir().unwrap();
let sk = tmp.path().join(".huskies");
fs::create_dir_all(&sk).unwrap();
fs::write(
sk.join("bot.toml"),
r#"
homeserver = "https://matrix.example.com"
username = "@bot:example.com"
password = "secret"
room_ids = ["!abc:example.com"]
enabled = true
aggregated_notifications_enabled = false
"#,
)
.unwrap();
let config = BotConfig::load(tmp.path()).unwrap();
assert!(!config.aggregated_notifications_enabled);
}

#[test]
fn load_reads_ambient_rooms() {
let tmp = tempfile::tempdir().unwrap();

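Putting the new fields together: a minimal gateway-level `bot.toml` (values illustrative, mirroring the test fixtures above) would look like:

```toml
homeserver = "https://matrix.example.com"
username = "@bot:example.com"
password = "secret"
room_ids = ["!abc:example.com"]
enabled = true

# Gateway-only: cross-project notification aggregation.
# Both lines may be omitted; they default to `true` and `5` respectively.
aggregated_notifications_enabled = true
aggregated_notifications_poll_interval_secs = 5
```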
@@ -62,6 +62,7 @@ use tokio::sync::{Mutex as TokioMutex, RwLock, broadcast, mpsc, watch};
/// Returns an [`tokio::task::AbortHandle`] if the bot was actually spawned (Matrix/Discord
/// transports), or `None` if the config is absent, disabled, or uses a webhook-based
/// transport (Slack/WhatsApp) that does not require a persistent background task.
#[allow(clippy::too_many_arguments)]
pub fn spawn_bot(
project_root: &Path,
watcher_tx: broadcast::Sender<WatcherEvent>,
@@ -70,6 +71,7 @@ pub fn spawn_bot(
shutdown_rx: watch::Receiver<Option<ShutdownReason>>,
gateway_active_project: Option<Arc<RwLock<String>>>,
gateway_projects: Vec<String>,
gateway_project_urls: std::collections::BTreeMap<String, String>,
) -> Option<tokio::task::AbortHandle> {
let config = match BotConfig::load(project_root) {
Some(c) => c,
@@ -108,6 +110,7 @@ pub fn spawn_bot(
shutdown_rx,
gateway_active_project,
gateway_projects,
gateway_project_urls,
)
.await
{

+1452 -43 File diff suppressed because it is too large
+175 -172
@@ -1,11 +1,14 @@
//! HTTP agent endpoints — REST API for listing, starting, stopping, and inspecting agents.
use crate::config::ProjectConfig;
//! HTTP agent endpoints — thin adapters over `service::agents`.
//!
//! Each handler: extracts payload → calls `service::agents::X` → shapes
//! response DTO → returns HTTP result. No filesystem access, no inline
//! validation, no process invocations.
use crate::http::context::{AppContext, OpenApiResult, bad_request, not_found};
use crate::service::agents::{self as svc, AgentConfigEntry, WorkItemContent};
use crate::workflow::{StoryTestResults, TestCaseResult, TestStatus};
use crate::worktree;
use poem::http::StatusCode;
use poem_openapi::{Object, OpenApi, Tags, param::Path, payload::Json};
use serde::Serialize;
use std::path;
use std::sync::Arc;

#[derive(Tags)]
@@ -45,6 +48,20 @@ struct AgentConfigInfoResponse {
max_budget_usd: Option<f64>,
}

impl From<AgentConfigEntry> for AgentConfigInfoResponse {
fn from(e: AgentConfigEntry) -> Self {
Self {
name: e.name,
role: e.role,
stage: e.stage,
model: e.model,
allowed_tools: e.allowed_tools,
max_turns: e.max_turns,
max_budget_usd: e.max_budget_usd,
}
}
}

#[derive(Object)]
struct CreateWorktreePayload {
story_id: String,
@@ -73,6 +90,17 @@ struct WorkItemContentResponse {
agent: Option<String>,
}

impl From<WorkItemContent> for WorkItemContentResponse {
fn from(w: WorkItemContent) -> Self {
Self {
content: w.content,
stage: w.stage,
name: w.name,
agent: w.agent,
}
}
}

/// A single test case result for the OpenAPI response.
#[derive(Object, Serialize)]
struct TestCaseResultResponse {
@@ -153,15 +181,23 @@ struct AllTokenUsageResponse {
records: Vec<TokenUsageRecordResponse>,
}

/// Returns true if the story file exists in `work/5_done/` or `work/6_archived/`.
///
/// Used to exclude agents for already-archived stories from the `list_agents`
/// response so the agents panel is not cluttered with old completed items on
/// frontend startup.
pub fn story_is_archived(project_root: &path::Path, story_id: &str) -> bool {
let work = project_root.join(".huskies").join("work");
let filename = format!("{story_id}.md");
work.join("5_done").join(&filename).exists() || work.join("6_archived").join(&filename).exists()
/// Map a `service::agents::Error` to a Poem HTTP error with the correct status.
fn map_svc_error(err: svc::Error) -> poem::Error {
match err {
svc::Error::AgentNotFound(_) => {
poem::Error::from_string(err.to_string(), StatusCode::NOT_FOUND)
}
svc::Error::WorkItemNotFound(_) => {
poem::Error::from_string(err.to_string(), StatusCode::NOT_FOUND)
}
svc::Error::Worktree(_) => {
poem::Error::from_string(err.to_string(), StatusCode::BAD_REQUEST)
}
svc::Error::Config(_) => poem::Error::from_string(err.to_string(), StatusCode::BAD_REQUEST),
svc::Error::Io(_) => {
poem::Error::from_string(err.to_string(), StatusCode::INTERNAL_SERVER_ERROR)
}
}
}

pub struct AgentsApi {
@@ -183,18 +219,16 @@ impl AgentsApi {
.get_project_root(&self.ctx.state)
.map_err(bad_request)?;

let info = self
.ctx
.agents
.start_agent(
&project_root,
&payload.0.story_id,
payload.0.agent_name.as_deref(),
None,
None,
)
.await
.map_err(bad_request)?;
let info = svc::start_agent(
&self.ctx.agents,
&project_root,
&payload.0.story_id,
payload.0.agent_name.as_deref(),
None,
None,
)
.await
.map_err(map_svc_error)?;

Ok(Json(AgentInfoResponse {
story_id: info.story_id,
@@ -214,11 +248,14 @@ impl AgentsApi {
.get_project_root(&self.ctx.state)
.map_err(bad_request)?;

self.ctx
.agents
.stop_agent(&project_root, &payload.0.story_id, &payload.0.agent_name)
.await
.map_err(bad_request)?;
svc::stop_agent(
&self.ctx.agents,
&project_root,
&payload.0.story_id,
&payload.0.agent_name,
)
.await
.map_err(map_svc_error)?;

Ok(Json(true))
}
@@ -231,17 +268,12 @@ impl AgentsApi {
#[oai(path = "/agents", method = "get")]
async fn list_agents(&self) -> OpenApiResult<Json<Vec<AgentInfoResponse>>> {
let project_root = self.ctx.agents.get_project_root(&self.ctx.state).ok();
let agents = self.ctx.agents.list_agents().map_err(bad_request)?;
let agents =
svc::list_agents(&self.ctx.agents, project_root.as_deref()).map_err(map_svc_error)?;

Ok(Json(
agents
.into_iter()
.filter(|info| {
project_root
.as_deref()
.map(|root| !story_is_archived(root, &info.story_id))
.unwrap_or(true)
})
.map(|info| AgentInfoResponse {
story_id: info.story_id,
agent_name: info.agent_name,
@@ -262,21 +294,11 @@ impl AgentsApi {
.get_project_root(&self.ctx.state)
.map_err(bad_request)?;

let config = ProjectConfig::load(&project_root).map_err(bad_request)?;

let entries = svc::get_agent_config(&project_root).map_err(map_svc_error)?;
Ok(Json(
config
.agent
.iter()
.map(|a| AgentConfigInfoResponse {
name: a.name.clone(),
role: a.role.clone(),
stage: a.stage.clone(),
model: a.model.clone(),
allowed_tools: a.allowed_tools.clone(),
max_turns: a.max_turns,
max_budget_usd: a.max_budget_usd,
})
entries
.into_iter()
.map(AgentConfigInfoResponse::from)
.collect(),
))
}
@@ -290,21 +312,11 @@ impl AgentsApi {
.get_project_root(&self.ctx.state)
.map_err(bad_request)?;

let config = ProjectConfig::load(&project_root).map_err(bad_request)?;

let entries = svc::reload_config(&project_root).map_err(map_svc_error)?;
Ok(Json(
config
.agent
.iter()
.map(|a| AgentConfigInfoResponse {
name: a.name.clone(),
role: a.role.clone(),
stage: a.stage.clone(),
model: a.model.clone(),
allowed_tools: a.allowed_tools.clone(),
max_turns: a.max_turns,
max_budget_usd: a.max_budget_usd,
})
entries
.into_iter()
.map(AgentConfigInfoResponse::from)
.collect(),
))
}
@@ -321,12 +333,9 @@ impl AgentsApi {
.get_project_root(&self.ctx.state)
.map_err(bad_request)?;

let info = self
.ctx
.agents
.create_worktree(&project_root, &payload.0.story_id)
let info = svc::create_worktree(&self.ctx.agents, &project_root, &payload.0.story_id)
.await
.map_err(bad_request)?;
.map_err(map_svc_error)?;

Ok(Json(WorktreeInfoResponse {
story_id: payload.0.story_id,
@@ -345,7 +354,7 @@ impl AgentsApi {
.get_project_root(&self.ctx.state)
.map_err(bad_request)?;

let entries = worktree::list_worktrees(&project_root).map_err(bad_request)?;
let entries = svc::list_worktrees(&project_root).map_err(map_svc_error)?;

Ok(Json(
entries
@@ -373,36 +382,12 @@ impl AgentsApi {
.get_project_root(&self.ctx.state)
.map_err(bad_request)?;

let stages = [
("1_backlog", "backlog"),
("2_current", "current"),
("3_qa", "qa"),
("4_merge", "merge"),
("5_done", "done"),
("6_archived", "archived"),
];
let item = svc::get_work_item_content(&project_root, &story_id.0).map_err(|e| match e {
svc::Error::WorkItemNotFound(_) => not_found(e.to_string()),
other => map_svc_error(other),
})?;

let work_dir = project_root.join(".huskies").join("work");
let filename = format!("{}.md", story_id.0);

for (stage_dir, stage_name) in &stages {
let file_path = work_dir.join(stage_dir).join(&filename);
if file_path.exists() {
let content = std::fs::read_to_string(&file_path)
.map_err(|e| bad_request(format!("Failed to read work item: {e}")))?;
let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
let name = metadata.as_ref().and_then(|m| m.name.clone());
let agent = metadata.and_then(|m| m.agent);
return Ok(Json(WorkItemContentResponse {
content,
stage: stage_name.to_string(),
name,
agent,
}));
}
}

Err(not_found(format!("Work item not found: {}", story_id.0)))
Ok(Json(WorkItemContentResponse::from(item)))
}

/// Get test results for a work item by its story_id.
@@ -414,30 +399,37 @@ impl AgentsApi {
&self,
story_id: Path<String>,
) -> OpenApiResult<Json<Option<TestResultsResponse>>> {
// Try in-memory workflow state first.
let workflow = self
.ctx
.workflow
.lock()
.map_err(|e| bad_request(format!("Lock error: {e}")))?;

if let Some(results) = workflow.results.get(&story_id.0) {
return Ok(Json(Some(TestResultsResponse::from_story_results(results))));
// Fast path: return from in-memory state without requiring project_root.
let in_memory = {
let workflow = self
.ctx
.workflow
.lock()
.map_err(|e| bad_request(format!("Lock error: {e}")))?;
workflow.results.get(&story_id.0).cloned()
};
if let Some(results) = in_memory {
return Ok(Json(Some(TestResultsResponse::from_story_results(
&results,
))));
}
drop(workflow);

// Fall back to file-persisted results.
// Slow path: fall back to results persisted in the story file.
let project_root = self
.ctx
.agents
.get_project_root(&self.ctx.state)
.map_err(bad_request)?;

let file_results =
crate::http::workflow::read_test_results_from_story_file(&project_root, &story_id.0);
let workflow = self
.ctx
.workflow
.lock()
.map_err(|e| bad_request(format!("Lock error: {e}")))?;

let results = svc::get_test_results(&project_root, &story_id.0, &workflow);
Ok(Json(
file_results.map(|r| TestResultsResponse::from_story_results(&r)),
results.map(|r| TestResultsResponse::from_story_results(&r)),
))
}

@@ -458,26 +450,8 @@ impl AgentsApi {
.get_project_root(&self.ctx.state)
.map_err(bad_request)?;

let log_path = crate::agent_log::find_latest_log(&project_root, &story_id.0, &agent_name.0);

let Some(path) = log_path else {
return Ok(Json(AgentOutputResponse {
output: String::new(),
}));
};

let entries = crate::agent_log::read_log(&path).map_err(bad_request)?;

let output: String = entries
.iter()
.filter(|e| e.event.get("type").and_then(|t| t.as_str()) == Some("output"))
.filter_map(|e| {
e.event
.get("text")
.and_then(|t| t.as_str())
.map(str::to_owned)
})
.collect();
let output = svc::get_agent_output(&project_root, &story_id.0, &agent_name.0)
.map_err(map_svc_error)?;

Ok(Json(AgentOutputResponse { output }))
}
@@ -491,10 +465,9 @@ impl AgentsApi {
.get_project_root(&self.ctx.state)
.map_err(bad_request)?;

let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
worktree::remove_worktree_by_story_id(&project_root, &story_id.0, &config)
svc::remove_worktree(&project_root, &story_id.0)
.await
.map_err(bad_request)?;
.map_err(map_svc_error)?;

Ok(Json(true))
}
@@ -514,39 +487,25 @@ impl AgentsApi {
.get_project_root(&self.ctx.state)
.map_err(bad_request)?;

let all_records = crate::agents::token_usage::read_all(&project_root)
.map_err(|e| bad_request(format!("Failed to read token usage: {e}")))?;
let summary =
svc::get_work_item_token_cost(&project_root, &story_id.0).map_err(map_svc_error)?;

let mut agent_map: std::collections::HashMap<String, AgentCostEntry> =
std::collections::HashMap::new();

let mut total_cost_usd = 0.0_f64;

for record in all_records.into_iter().filter(|r| r.story_id == story_id.0) {
total_cost_usd += record.usage.total_cost_usd;
let entry = agent_map
.entry(record.agent_name.clone())
.or_insert_with(|| AgentCostEntry {
agent_name: record.agent_name.clone(),
model: record.model.clone(),
input_tokens: 0,
output_tokens: 0,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
total_cost_usd: 0.0,
});
entry.input_tokens += record.usage.input_tokens;
entry.output_tokens += record.usage.output_tokens;
entry.cache_creation_input_tokens += record.usage.cache_creation_input_tokens;
entry.cache_read_input_tokens += record.usage.cache_read_input_tokens;
entry.total_cost_usd += record.usage.total_cost_usd;
}

let mut agents: Vec<AgentCostEntry> = agent_map.into_values().collect();
agents.sort_by(|a, b| a.agent_name.cmp(&b.agent_name));
let agents = summary
.agents
.into_iter()
.map(|a| AgentCostEntry {
agent_name: a.agent_name,
model: a.model,
input_tokens: a.input_tokens,
output_tokens: a.output_tokens,
cache_creation_input_tokens: a.cache_creation_input_tokens,
cache_read_input_tokens: a.cache_read_input_tokens,
total_cost_usd: a.total_cost_usd,
})
.collect();

Ok(Json(TokenCostResponse {
total_cost_usd,
total_cost_usd: summary.total_cost_usd,
agents,
}))
}
@@ -562,8 +521,7 @@ impl AgentsApi {
.get_project_root(&self.ctx.state)
.map_err(bad_request)?;

let records = crate::agents::token_usage::read_all(&project_root)
.map_err(|e| bad_request(format!("Failed to read token usage: {e}")))?;
let records = svc::get_all_token_usage(&project_root).map_err(map_svc_error)?;

let response_records: Vec<TokenUsageRecordResponse> = records
.into_iter()
@@ -590,6 +548,7 @@ impl AgentsApi {
mod tests {
use super::*;
use crate::agents::AgentStatus;
use std::path;
use tempfile::TempDir;

fn make_work_dirs(tmp: &TempDir) -> path::PathBuf {
@@ -604,7 +563,7 @@ mod tests {
fn story_is_archived_false_when_file_absent() {
let tmp = TempDir::new().unwrap();
let root = make_work_dirs(&tmp);
assert!(!story_is_archived(&root, "79_story_foo"));
assert!(!svc::is_archived(&root, "79_story_foo"));
}

#[test]
@@ -616,7 +575,7 @@ mod tests {
"---\nname: test\n---\n",
)
.unwrap();
assert!(story_is_archived(&root, "79_story_foo"));
assert!(svc::is_archived(&root, "79_story_foo"));
}

#[test]
@@ -628,7 +587,7 @@ mod tests {
"---\nname: test\n---\n",
)
.unwrap();
assert!(story_is_archived(&root, "79_story_foo"));
assert!(svc::is_archived(&root, "79_story_foo"));
}

#[tokio::test]
@@ -953,6 +912,50 @@ allowed_tools = ["Read", "Bash"]
assert!(result.is_err());
}

#[tokio::test]
async fn get_work_item_content_falls_back_to_crdt_when_no_file() {
let tmp = TempDir::new().unwrap();
let root = tmp.path().to_path_buf();
// Seed content + CRDT with no .md file on disk.
crate::db::write_item_with_content(
"44_story_crdt_only",
"1_backlog",
"---\nname: \"CRDT Only\"\n---\n\nCRDT content.",
);
let ctx = AppContext::new_test(root);
let api = AgentsApi { ctx: Arc::new(ctx) };
let result = api
.get_work_item_content(Path("44_story_crdt_only".to_string()))
.await
.unwrap()
.0;
assert!(result.content.contains("CRDT content."));
assert_eq!(result.stage, "backlog");
assert_eq!(result.name, Some("CRDT Only".to_string()));
}

#[tokio::test]
async fn get_work_item_content_crdt_fallback_with_current_stage() {
let tmp = TempDir::new().unwrap();
let root = tmp.path().to_path_buf();
// Seed a CRDT-only story in the coding/current stage.
crate::db::write_item_with_content(
"45_story_crdt_current",
"2_current",
"---\nname: \"Current CRDT\"\n---\n\nIn progress.",
);
let ctx = AppContext::new_test(root);
let api = AgentsApi { ctx: Arc::new(ctx) };
let result = api
.get_work_item_content(Path("45_story_crdt_current".to_string()))
.await
.unwrap()
.0;
assert!(result.content.contains("In progress."));
assert_eq!(result.stage, "current");
assert_eq!(result.name, Some("Current CRDT".to_string()));
}

#[tokio::test]
async fn get_work_item_content_returns_error_when_no_project_root() {
let tmp = TempDir::new().unwrap();

@@ -81,7 +81,9 @@ async fn dispatch_command(
"start" => dispatch_start(args, project_root, agents).await,
"delete" => dispatch_delete(args, project_root, agents).await,
"rebuild" => dispatch_rebuild(project_root, agents).await,
"rmtree" => dispatch_rmtree(args, project_root, agents).await,
"timer" => dispatch_timer(args, project_root).await,
"htop" => dispatch_htop(args, agents).await,
// All other commands go through the synchronous command registry.
_ => dispatch_sync(cmd, args, project_root, agents),
}
@@ -203,6 +205,24 @@ async fn dispatch_delete(
.await
}

async fn dispatch_rmtree(
args: &str,
project_root: &std::path::Path,
agents: &Arc<crate::agents::AgentPool>,
) -> String {
let number_str = args.trim();
if number_str.is_empty() || !number_str.chars().all(|c| c.is_ascii_digit()) {
return "Usage: `/rmtree <number>` (e.g. `/rmtree 42`)".to_string();
}
crate::chat::transport::matrix::rmtree::handle_rmtree(
"web-ui",
number_str,
project_root,
agents,
)
.await
}

async fn dispatch_rebuild(
project_root: &std::path::Path,
agents: &Arc<crate::agents::AgentPool>,
@@ -230,6 +250,34 @@ async fn dispatch_timer(args: &str, project_root: &std::path::Path) -> String {
crate::chat::timer::handle_timer_command(timer_cmd, &store, project_root).await
}

/// Handle the `htop` command from the web UI.
///
/// The web UI uses a one-shot HTTP request, so live updates are not possible
/// here. Returns a static snapshot of the process dashboard. For `htop stop`,
/// returns a helpful message (no persistent session state exists in the web UI).
async fn dispatch_htop(args: &str, agents: &Arc<crate::agents::AgentPool>) -> String {
use crate::chat::transport::matrix::htop::{HtopCommand, build_htop_message};

// Re-use the existing parser by constructing a synthetic message.
let synthetic = if args.is_empty() {
"__web_ui__ htop".to_string()
} else {
format!("__web_ui__ htop {args}")
};

match crate::chat::transport::matrix::htop::extract_htop_command(
&synthetic,
"__web_ui__",
"@__web_ui__:localhost",
) {
Some(HtopCommand::Stop) => "No active htop session in the web UI. \
Live sessions are only supported in chat transports (Matrix, Slack, Discord)."
.to_string(),
Some(HtopCommand::Start { duration_secs }) => build_htop_message(agents, 0, duration_secs),
None => build_htop_message(agents, 0, 300),
}
}

// ---------------------------------------------------------------------------
|
||||
// Tests
|
||||
// ---------------------------------------------------------------------------
|
||||
@@ -349,6 +397,134 @@ mod tests {
|
||||
);
|
||||
}
|
||||
|
||||
// -- htop (web-UI slash-command path) ------------------------------------
|
||||
|
||||
#[tokio::test]
|
||||
async fn htop_returns_dashboard_not_unknown_command() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let api = test_api(&dir);
|
||||
let body = BotCommandRequest {
|
||||
command: "htop".to_string(),
|
||||
args: String::new(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
!resp.response.contains("Unknown command"),
|
||||
"htop should not return 'Unknown command': {}",
|
||||
resp.response
|
||||
);
|
||||
assert!(
|
||||
resp.response.contains("htop"),
|
||||
"htop response should contain 'htop': {}",
|
||||
resp.response
|
||||
);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn htop_with_duration_returns_dashboard() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let api = test_api(&dir);
|
||||
let body = BotCommandRequest {
|
||||
command: "htop".to_string(),
|
||||
args: "10m".to_string(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
!resp.response.contains("Unknown command"),
|
||||
"htop 10m should not return 'Unknown command': {}",
|
||||
resp.response
|
||||
);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn htop_stop_returns_response_not_unknown_command() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let api = test_api(&dir);
|
||||
let body = BotCommandRequest {
|
||||
command: "htop".to_string(),
|
||||
args: "stop".to_string(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
!resp.response.contains("Unknown command"),
|
||||
"htop stop should not return 'Unknown command': {}",
|
||||
resp.response
|
||||
);
|
||||
}
|
||||
|
||||
// -- rmtree ----------------------------------------------------------------
|
||||
|
||||
#[tokio::test]
|
||||
async fn rmtree_without_number_returns_usage() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let api = test_api(&dir);
|
||||
let body = BotCommandRequest {
|
||||
command: "rmtree".to_string(),
|
||||
args: String::new(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
resp.response.contains("Usage"),
|
||||
"expected usage hint for bare /rmtree: {}",
|
||||
resp.response
|
||||
);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn rmtree_with_non_numeric_arg_returns_usage() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let api = test_api(&dir);
|
||||
let body = BotCommandRequest {
|
||||
command: "rmtree".to_string(),
|
||||
args: "foo".to_string(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
resp.response.contains("Usage"),
|
||||
"expected usage hint for /rmtree foo: {}",
|
||||
resp.response
|
||||
);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn rmtree_does_not_return_unknown_command() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let api = test_api(&dir);
|
||||
let body = BotCommandRequest {
|
||||
command: "rmtree".to_string(),
|
||||
args: "999".to_string(),
|
||||
};
|
||||
let result = api.run_command(Json(body)).await;
|
||||
assert!(result.is_ok());
|
||||
let resp = result.unwrap().0;
|
||||
assert!(
|
||||
!resp.response.contains("Unknown command"),
|
||||
"/rmtree should not return 'Unknown command': {}",
|
||||
resp.response
|
||||
);
|
||||
}
|
||||
|
||||
// -- htop bot-command path (regression: htop must remain in command registry) --
|
||||
|
||||
#[test]
|
||||
fn htop_is_registered_in_bot_command_registry() {
|
||||
let commands = crate::chat::commands::commands();
|
||||
assert!(
|
||||
commands.iter().any(|c| c.name == "htop"),
|
||||
"htop must be registered in the bot command registry so /help lists it"
|
||||
);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn run_command_requires_project_root() {
|
||||
// Create a context with no project root set.
|
||||
|
||||
@@ -0,0 +1,341 @@
|
||||
//! Per-project event buffer and `GET /api/events` HTTP endpoint.
|
||||
//!
|
||||
//! The gateway polls `/api/events?since={ts_ms}` on each registered project
|
||||
//! server to aggregate cross-project pipeline notifications into a single
|
||||
//! gateway chat channel. Each project server buffers up to 500 events in
|
||||
//! memory and serves them via this endpoint.
|
||||
|
||||
use crate::io::watcher::WatcherEvent;
|
||||
use poem::web::{Data, Query};
|
||||
use poem::{Response, handler, http::StatusCode};
|
||||
use serde::{Deserialize, Serialize};
|
||||
use std::collections::VecDeque;
|
||||
use std::sync::{Arc, Mutex};
|
||||
use tokio::sync::broadcast;
|
||||
|
||||
/// Maximum number of events retained in the in-memory buffer.
|
||||
const MAX_BUFFER_SIZE: usize = 500;
|
||||
|
||||
/// A pipeline event stored in the event buffer with a timestamp.
|
||||
#[derive(Clone, Debug, Serialize, Deserialize)]
|
||||
#[serde(tag = "type", rename_all = "snake_case")]
|
||||
pub enum StoredEvent {
|
||||
/// A work item transitioned between pipeline stages.
|
||||
StageTransition {
|
||||
/// Work item ID (e.g. `"42_story_my_feature"`).
|
||||
story_id: String,
|
||||
/// The stage the item moved FROM (display name, e.g. `"Current"`).
|
||||
from_stage: String,
|
||||
/// The stage the item moved TO (directory key, e.g. `"3_qa"`).
|
||||
to_stage: String,
|
||||
/// Unix timestamp in milliseconds when this event was recorded.
|
||||
timestamp_ms: u64,
|
||||
},
|
||||
/// A merge operation failed for a story.
|
||||
MergeFailure {
|
||||
/// Work item ID (e.g. `"42_story_my_feature"`).
|
||||
story_id: String,
|
||||
/// Human-readable description of the failure.
|
||||
reason: String,
|
||||
/// Unix timestamp in milliseconds when this event was recorded.
|
||||
timestamp_ms: u64,
|
||||
},
|
||||
/// A story was blocked (e.g. retry limit exceeded).
|
||||
StoryBlocked {
|
||||
/// Work item ID (e.g. `"42_story_my_feature"`).
|
||||
story_id: String,
|
||||
/// Human-readable reason the story was blocked.
|
||||
reason: String,
|
||||
/// Unix timestamp in milliseconds when this event was recorded.
|
||||
timestamp_ms: u64,
|
||||
},
|
||||
}
|
||||
|
||||
impl StoredEvent {
|
||||
/// Returns the `timestamp_ms` field common to all event variants.
|
||||
pub fn timestamp_ms(&self) -> u64 {
|
||||
match self {
|
||||
StoredEvent::StageTransition { timestamp_ms, .. } => *timestamp_ms,
|
||||
StoredEvent::MergeFailure { timestamp_ms, .. } => *timestamp_ms,
|
||||
StoredEvent::StoryBlocked { timestamp_ms, .. } => *timestamp_ms,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Shared, thread-safe ring buffer of recent pipeline events.
|
||||
///
|
||||
/// Wrapped in `Arc` so it can be shared between the background subscriber
|
||||
/// task and the HTTP handler. The inner `Mutex` guards the `VecDeque`.
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct EventBuffer(Arc<Mutex<VecDeque<StoredEvent>>>);
|
||||
|
||||
impl EventBuffer {
|
||||
/// Create a new, empty event buffer.
|
||||
pub fn new() -> Self {
|
||||
EventBuffer(Arc::new(Mutex::new(VecDeque::new())))
|
||||
}
|
||||
|
||||
/// Append an event to the buffer, evicting the oldest entry if the buffer
|
||||
/// exceeds [`MAX_BUFFER_SIZE`].
|
||||
pub fn push(&self, event: StoredEvent) {
|
||||
let mut buf = self.0.lock().unwrap();
|
||||
if buf.len() >= MAX_BUFFER_SIZE {
|
||||
buf.pop_front();
|
||||
}
|
||||
buf.push_back(event);
|
||||
}
|
||||
|
||||
/// Return all events whose `timestamp_ms` is strictly greater than `since_ms`.
|
||||
pub fn events_since(&self, since_ms: u64) -> Vec<StoredEvent> {
|
||||
let buf = self.0.lock().unwrap();
|
||||
buf.iter()
|
||||
.filter(|e| e.timestamp_ms() > since_ms)
|
||||
.cloned()
|
||||
.collect()
|
||||
}
|
||||
}
|
||||
|
||||
impl Default for EventBuffer {
|
||||
fn default() -> Self {
|
||||
Self::new()
|
||||
}
|
||||
}
|
||||
|
||||
/// Returns the current Unix timestamp in milliseconds.
|
||||
fn now_ms() -> u64 {
|
||||
std::time::SystemTime::now()
|
||||
.duration_since(std::time::UNIX_EPOCH)
|
||||
.map(|d| d.as_millis() as u64)
|
||||
.unwrap_or(0)
|
||||
}
|
||||
|
||||
/// Spawn a background task that consumes [`WatcherEvent`] broadcasts and
|
||||
/// stores relevant events in `buffer`.
|
||||
///
|
||||
/// Only [`WatcherEvent::WorkItem`] (with a known `from_stage`),
|
||||
/// [`WatcherEvent::MergeFailure`], and [`WatcherEvent::StoryBlocked`]
|
||||
/// variants are stored. All other variants are silently ignored.
|
||||
pub fn subscribe_to_watcher(buffer: EventBuffer, mut rx: broadcast::Receiver<WatcherEvent>) {
|
||||
tokio::spawn(async move {
|
||||
loop {
|
||||
match rx.recv().await {
|
||||
Ok(WatcherEvent::WorkItem {
|
||||
stage,
|
||||
item_id,
|
||||
from_stage,
|
||||
..
|
||||
}) => {
|
||||
// Only store genuine transitions (from_stage is known).
|
||||
if let Some(from) = from_stage {
|
||||
buffer.push(StoredEvent::StageTransition {
|
||||
story_id: item_id,
|
||||
from_stage: from,
|
||||
to_stage: stage,
|
||||
timestamp_ms: now_ms(),
|
||||
});
|
||||
}
|
||||
}
|
||||
Ok(WatcherEvent::MergeFailure { story_id, reason }) => {
|
||||
buffer.push(StoredEvent::MergeFailure {
|
||||
story_id,
|
||||
reason,
|
||||
timestamp_ms: now_ms(),
|
||||
});
|
||||
}
|
||||
Ok(WatcherEvent::StoryBlocked { story_id, reason }) => {
|
||||
buffer.push(StoredEvent::StoryBlocked {
|
||||
story_id,
|
||||
reason,
|
||||
timestamp_ms: now_ms(),
|
||||
});
|
||||
}
|
||||
Ok(_) => {} // Ignore all other event types.
|
||||
Err(broadcast::error::RecvError::Lagged(n)) => {
|
||||
crate::slog!("[events] Subscriber lagged, skipped {n} events");
|
||||
}
|
||||
Err(broadcast::error::RecvError::Closed) => {
|
||||
crate::slog!("[events] Watcher channel closed; stopping event subscriber");
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
/// Query parameters for `GET /api/events`.
|
||||
#[derive(Deserialize)]
|
||||
pub struct EventsQuery {
|
||||
/// Return only events with `timestamp_ms` strictly greater than this value.
|
||||
/// Defaults to `0` (return all buffered events).
|
||||
#[serde(default)]
|
||||
pub since: u64,
|
||||
}
|
||||
|
||||
/// `GET /api/events?since={ts_ms}`
|
||||
///
|
||||
/// Returns a JSON array of [`StoredEvent`] objects recorded after `since` ms.
|
||||
/// The gateway polls this endpoint on each registered project server to build
|
||||
/// an aggregated cross-project notification stream.
|
||||
#[handler]
|
||||
pub fn events_handler(
|
||||
Query(params): Query<EventsQuery>,
|
||||
Data(buffer): Data<&EventBuffer>,
|
||||
) -> Response {
|
||||
let events = buffer.events_since(params.since);
|
||||
let body = serde_json::to_vec(&events).unwrap_or_else(|_| b"[]".to_vec());
|
||||
Response::builder()
|
||||
.status(StatusCode::OK)
|
||||
.header(poem::http::header::CONTENT_TYPE, "application/json")
|
||||
.body(body)
|
||||
}
|
||||
|
||||
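Given the `#[serde(tag = "type", rename_all = "snake_case")]` derive on `StoredEvent` above, the array this handler returns serializes with an internal `type` tag; a sketch of the response body (values illustrative, not taken from the diff):

```json
[
  {
    "type": "stage_transition",
    "story_id": "42_story_my_feature",
    "from_stage": "Current",
    "to_stage": "3_qa",
    "timestamp_ms": 1717000000000
  },
  {
    "type": "merge_failure",
    "story_id": "42_story_my_feature",
    "reason": "merge conflict",
    "timestamp_ms": 1717000001000
  }
]
```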
#[cfg(test)]
mod tests {
    use super::*;
    use tokio::sync::broadcast;

    #[test]
    fn event_buffer_push_and_retrieve() {
        let buf = EventBuffer::new();
        buf.push(StoredEvent::MergeFailure {
            story_id: "42_story_x".to_string(),
            reason: "conflict".to_string(),
            timestamp_ms: 1000,
        });
        buf.push(StoredEvent::StoryBlocked {
            story_id: "43_story_y".to_string(),
            reason: "retry limit".to_string(),
            timestamp_ms: 2000,
        });

        let all = buf.events_since(0);
        assert_eq!(all.len(), 2);

        let after_1000 = buf.events_since(1000);
        assert_eq!(after_1000.len(), 1);
        assert!(matches!(after_1000[0], StoredEvent::StoryBlocked { .. }));
    }

    #[test]
    fn event_buffer_evicts_oldest_when_full() {
        let buf = EventBuffer::new();
        for i in 0..MAX_BUFFER_SIZE + 1 {
            buf.push(StoredEvent::MergeFailure {
                story_id: format!("{i}_story_x"),
                reason: "x".to_string(),
                timestamp_ms: i as u64,
            });
        }
        // Buffer must not exceed MAX_BUFFER_SIZE.
        assert_eq!(buf.events_since(0).len(), MAX_BUFFER_SIZE);
        // Oldest entry (timestamp_ms == 0) should have been evicted.
        assert!(buf.events_since(0).iter().all(|e| e.timestamp_ms() > 0));
    }

    #[test]
    fn stage_transition_timestamp_ms_accessor() {
        let e = StoredEvent::StageTransition {
            story_id: "1".to_string(),
            from_stage: "2_current".to_string(),
            to_stage: "3_qa".to_string(),
            timestamp_ms: 9999,
        };
        assert_eq!(e.timestamp_ms(), 9999);
    }

    #[tokio::test]
    async fn subscribe_to_watcher_stores_work_item_with_from_stage() {
        let buf = EventBuffer::new();
        let (tx, rx) = broadcast::channel(16);

        subscribe_to_watcher(buf.clone(), rx);

        tx.send(crate::io::watcher::WatcherEvent::WorkItem {
            stage: "3_qa".to_string(),
            item_id: "42_story_foo".to_string(),
            action: "qa".to_string(),
            commit_msg: "huskies: qa 42_story_foo".to_string(),
            from_stage: Some("2_current".to_string()),
        })
        .unwrap();

        tokio::time::sleep(std::time::Duration::from_millis(50)).await;

        let events = buf.events_since(0);
        assert_eq!(events.len(), 1);
        assert!(matches!(events[0], StoredEvent::StageTransition { .. }));
        if let StoredEvent::StageTransition {
            ref story_id,
            ref from_stage,
            ref to_stage,
            ..
        } = events[0]
        {
            assert_eq!(story_id, "42_story_foo");
            assert_eq!(from_stage, "2_current");
            assert_eq!(to_stage, "3_qa");
        }
    }

    #[tokio::test]
    async fn subscribe_to_watcher_ignores_work_item_without_from_stage() {
        let buf = EventBuffer::new();
        let (tx, rx) = broadcast::channel(16);

        subscribe_to_watcher(buf.clone(), rx);

        // Synthetic event: no from_stage.
        tx.send(crate::io::watcher::WatcherEvent::WorkItem {
            stage: "2_current".to_string(),
            item_id: "99_story_syn".to_string(),
            action: "start".to_string(),
            commit_msg: "huskies: start 99_story_syn".to_string(),
            from_stage: None,
        })
        .unwrap();

        tokio::time::sleep(std::time::Duration::from_millis(50)).await;

        assert_eq!(buf.events_since(0).len(), 0);
    }

    #[tokio::test]
    async fn subscribe_to_watcher_stores_merge_failure() {
        let buf = EventBuffer::new();
        let (tx, rx) = broadcast::channel(16);

        subscribe_to_watcher(buf.clone(), rx);

        tx.send(crate::io::watcher::WatcherEvent::MergeFailure {
            story_id: "42_story_foo".to_string(),
            reason: "merge conflict".to_string(),
        })
        .unwrap();

        tokio::time::sleep(std::time::Duration::from_millis(50)).await;

        let events = buf.events_since(0);
        assert_eq!(events.len(), 1);
        assert!(matches!(events[0], StoredEvent::MergeFailure { .. }));
    }

    #[tokio::test]
    async fn subscribe_to_watcher_stores_story_blocked() {
        let buf = EventBuffer::new();
        let (tx, rx) = broadcast::channel(16);

        subscribe_to_watcher(buf.clone(), rx);

        tx.send(crate::io::watcher::WatcherEvent::StoryBlocked {
            story_id: "43_story_bar".to_string(),
            reason: "retry limit exceeded".to_string(),
        })
        .unwrap();

        tokio::time::sleep(std::time::Duration::from_millis(50)).await;

        let events = buf.events_since(0);
        assert_eq!(events.len(), 1);
        assert!(matches!(events[0], StoredEvent::StoryBlocked { .. }));
    }
}
@@ -86,7 +86,7 @@ pub(super) fn tool_list_agents(ctx: &AppContext) -> Result<String, String> {
        .filter(|a| {
            project_root
                .as_deref()
-               .map(|root| !crate::http::agents::story_is_archived(root, &a.story_id))
+               .map(|root| !crate::service::agents::is_archived(root, &a.story_id))
                .unwrap_or(true)
        })
        .map(|a| json!({

+16 -2
@@ -7,6 +7,7 @@ pub mod bot_command;
pub mod bot_config;
pub mod chat;
pub mod context;
pub mod events;
pub mod health;
pub mod io;
pub mod mcp;
@@ -68,6 +69,7 @@ pub fn build_routes(
    whatsapp_ctx: Option<Arc<WhatsAppWebhookContext>>,
    slack_ctx: Option<Arc<SlackWebhookContext>>,
    port: u16,
    event_buffer: Option<events::EventBuffer>,
) -> impl poem::Endpoint {
    let ctx_arc = std::sync::Arc::new(ctx);

@@ -103,6 +105,10 @@ pub fn build_routes(
        .at("/", get(assets::embedded_index))
        .at("/*path", get(assets::embedded_file));

    if let Some(buf) = event_buffer {
        route = route.at("/api/events", get(events::events_handler).data(buf));
    }

    if let Some(wa_ctx) = whatsapp_ctx {
        route = route.at(
            "/webhook/whatsapp",
@@ -302,7 +308,7 @@ mod tests {
    fn build_routes_constructs_without_panic() {
        let tmp = tempfile::tempdir().unwrap();
        let ctx = context::AppContext::new_test(tmp.path().to_path_buf());
-       let _endpoint = build_routes(ctx, None, None, 3001);
+       let _endpoint = build_routes(ctx, None, None, 3001, None);
    }

    #[test]
@@ -311,6 +317,14 @@ mod tests {
        // ensuring the port parameter flows through to OAuthState.
        let tmp = tempfile::tempdir().unwrap();
        let ctx = context::AppContext::new_test(tmp.path().to_path_buf());
-       let _endpoint = build_routes(ctx, None, None, 9999);
+       let _endpoint = build_routes(ctx, None, None, 9999, None);
    }

    #[test]
    fn build_routes_with_event_buffer_constructs_without_panic() {
        let tmp = tempfile::tempdir().unwrap();
        let ctx = context::AppContext::new_test(tmp.path().to_path_buf());
        let buf = events::EventBuffer::new();
        let _endpoint = build_routes(ctx, None, None, 3001, Some(buf));
    }
}

+411 -1
@@ -1,13 +1,181 @@
//! HTTP settings endpoints — REST API for user preferences and editor configuration.
use crate::config::ProjectConfig;
use crate::http::context::{AppContext, OpenApiResult, bad_request};
use crate::store::StoreOps;
use poem_openapi::{Object, OpenApi, Tags, param::Query, payload::Json};
-use serde::Serialize;
+use serde::{Deserialize, Serialize};
use serde_json::json;
use std::path::Path;
use std::sync::Arc;

const EDITOR_COMMAND_KEY: &str = "editor_command";

/// Project-level settings exposed via `GET /api/settings` and `PUT /api/settings`.
///
/// Only contains the scalar fields of `ProjectConfig` — array sections
/// (`[[component]]`, `[[agent]]`, `[watcher]`) are preserved in the TOML file
/// and are not editable through this API.
#[derive(Debug, Object, Serialize, Deserialize)]
struct ProjectSettings {
    /// Project-wide default QA mode: "server", "agent", or "human". Default: "server".
    default_qa: String,
    /// Default model for coder-stage agents (e.g. "sonnet"). When set, only agents whose
    /// model matches this value are used for auto-assignment.
    default_coder_model: Option<String>,
    /// Maximum number of concurrent coder-stage agents. When set, stories wait in
    /// 2_current/ until a slot is free.
    max_coders: Option<u32>,
    /// Maximum retries per story per pipeline stage before marking as blocked. Default: 2.
    max_retries: u32,
    /// Optional base branch name (e.g. "main", "master"). Overrides auto-detection.
    base_branch: Option<String>,
    /// Whether to send RateLimitWarning chat notifications. Default: true.
    rate_limit_notifications: bool,
    /// IANA timezone name (e.g. "Europe/London"). Timer inputs are interpreted in this tz.
    timezone: Option<String>,
    /// WebSocket URL of a remote huskies node to sync CRDT state with.
    rendezvous: Option<String>,
    /// How often (seconds) to check 5_done/ for items to archive. Default: 60.
    watcher_sweep_interval_secs: u64,
    /// How long (seconds) an item must remain in 5_done/ before archiving. Default: 14400.
    watcher_done_retention_secs: u64,
}

/// Load `ProjectSettings` from `ProjectConfig`.
fn settings_from_config(cfg: &ProjectConfig) -> ProjectSettings {
    ProjectSettings {
        default_qa: cfg.default_qa.clone(),
        default_coder_model: cfg.default_coder_model.clone(),
        max_coders: cfg.max_coders.map(|v| v as u32),
        max_retries: cfg.max_retries,
        base_branch: cfg.base_branch.clone(),
        rate_limit_notifications: cfg.rate_limit_notifications,
        timezone: cfg.timezone.clone(),
        rendezvous: cfg.rendezvous.clone(),
        watcher_sweep_interval_secs: cfg.watcher.sweep_interval_secs,
        watcher_done_retention_secs: cfg.watcher.done_retention_secs,
    }
}

/// Validate the incoming `ProjectSettings` before writing.
fn validate_project_settings(s: &ProjectSettings) -> Result<(), String> {
    match s.default_qa.as_str() {
        "server" | "agent" | "human" => {}
        other => {
            return Err(format!(
                "Invalid default_qa value '{other}'. Must be one of: server, agent, human"
            ));
        }
    }
    Ok(())
}

/// Write only the scalar settings from `s` into the project.toml at the given root.
/// Array sections (`[[component]]`, `[[agent]]`) are preserved unchanged.
fn write_project_settings(project_root: &Path, s: &ProjectSettings) -> Result<(), String> {
    let config_path = project_root.join(".huskies/project.toml");

    let content = if config_path.exists() {
        std::fs::read_to_string(&config_path).map_err(|e| format!("Read config: {e}"))?
    } else {
        String::new()
    };

    let mut val: toml::Value = if content.trim().is_empty() {
        toml::Value::Table(toml::map::Map::new())
    } else {
        toml::from_str(&content).map_err(|e| format!("Parse config: {e}"))?
    };

    let table = val
        .as_table_mut()
        .ok_or_else(|| "Config is not a TOML table".to_string())?;

    // Scalar root fields
    table.insert(
        "default_qa".to_string(),
        toml::Value::String(s.default_qa.clone()),
    );
    table.insert(
        "max_retries".to_string(),
        toml::Value::Integer(s.max_retries as i64),
    );
    table.insert(
        "rate_limit_notifications".to_string(),
        toml::Value::Boolean(s.rate_limit_notifications),
    );

    // Optional scalar fields
    match &s.default_coder_model {
        Some(v) => {
            table.insert(
                "default_coder_model".to_string(),
                toml::Value::String(v.clone()),
            );
        }
        None => {
            table.remove("default_coder_model");
        }
    }
    match s.max_coders {
        Some(v) => {
            table.insert("max_coders".to_string(), toml::Value::Integer(v as i64));
        }
        None => {
            table.remove("max_coders");
        }
    }
    match &s.base_branch {
        Some(v) => {
            table.insert("base_branch".to_string(), toml::Value::String(v.clone()));
        }
        None => {
            table.remove("base_branch");
        }
    }
    match &s.timezone {
        Some(v) => {
            table.insert("timezone".to_string(), toml::Value::String(v.clone()));
        }
        None => {
            table.remove("timezone");
        }
    }
    match &s.rendezvous {
        Some(v) => {
            table.insert("rendezvous".to_string(), toml::Value::String(v.clone()));
        }
        None => {
            table.remove("rendezvous");
        }
    }

    // [watcher] sub-table
    let watcher_entry = table
        .entry("watcher".to_string())
        .or_insert_with(|| toml::Value::Table(toml::map::Map::new()));
    if let toml::Value::Table(wt) = watcher_entry {
        wt.insert(
            "sweep_interval_secs".to_string(),
            toml::Value::Integer(s.watcher_sweep_interval_secs as i64),
        );
        wt.insert(
            "done_retention_secs".to_string(),
            toml::Value::Integer(s.watcher_done_retention_secs as i64),
        );
    }

    // Ensure .huskies/ directory exists
    if let Some(parent) = config_path.parent() {
        std::fs::create_dir_all(parent).map_err(|e| format!("Create .huskies dir: {e}"))?;
    }

    let new_content = toml::to_string_pretty(&val).map_err(|e| format!("Serialize config: {e}"))?;
    std::fs::write(&config_path, new_content).map_err(|e| format!("Write config: {e}"))?;

    Ok(())
}

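For reference, a `.huskies/project.toml` shaped the way `write_project_settings` leaves it might look like the following (values are illustrative; optional keys are removed entirely when the corresponding field is `None`):

```toml
# Scalar fields managed by PUT /api/settings
default_qa = "server"
max_retries = 2
rate_limit_notifications = true
default_coder_model = "sonnet"   # optional; absent when cleared
base_branch = "main"             # optional

[watcher]
sweep_interval_secs = 60
done_retention_secs = 14400

# [[agent]] and [[component]] sections pass through unchanged
[[agent]]
name = "coder-1"
model = "sonnet"
stage = "coder"
```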
#[derive(Tags)]
enum SettingsTags {
    Settings,
@@ -71,6 +239,30 @@ impl SettingsApi {
        Ok(Json(OpenFileResponse { success: true }))
    }

    /// Get current project.toml scalar settings as JSON.
    #[oai(path = "/settings", method = "get")]
    async fn get_settings(&self) -> OpenApiResult<Json<ProjectSettings>> {
        let project_root = self.ctx.state.get_project_root().map_err(bad_request)?;
        let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
        Ok(Json(settings_from_config(&config)))
    }

    /// Update project.toml scalar settings. Array sections (component, agent) are preserved.
    ///
    /// Returns 400 if the input fails validation (e.g. unknown qa mode, negative max_retries).
    #[oai(path = "/settings", method = "put")]
    async fn put_settings(
        &self,
        payload: Json<ProjectSettings>,
    ) -> OpenApiResult<Json<ProjectSettings>> {
        validate_project_settings(&payload.0).map_err(bad_request)?;
        let project_root = self.ctx.state.get_project_root().map_err(bad_request)?;
        write_project_settings(&project_root, &payload.0).map_err(bad_request)?;
        // Re-read to confirm what was written
        let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
        Ok(Json(settings_from_config(&config)))
    }

    /// Set the preferred editor command (e.g. "zed", "code", "cursor").
    /// Pass null or empty string to clear the preference.
    #[oai(path = "/settings/editor", method = "put")]
@@ -360,4 +552,222 @@ mod tests {
            .await;
        assert!(result.is_err());
    }

    // ── /api/settings GET/PUT ──────────────────────────────────────────────

    fn default_project_settings() -> ProjectSettings {
        let cfg = ProjectConfig::default();
        settings_from_config(&cfg)
    }

    #[tokio::test]
    async fn get_settings_returns_defaults_when_no_project_toml() {
        let dir = TempDir::new().unwrap();
        // Create .huskies dir so project root detection works but no project.toml
        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
        let ctx = AppContext::new_test(dir.path().to_path_buf());
        let api = SettingsApi { ctx: Arc::new(ctx) };
        let result = api.get_settings().await.unwrap().0;
        assert_eq!(result.default_qa, "server");
        assert_eq!(result.max_retries, 2);
        assert!(result.rate_limit_notifications);
    }

    #[tokio::test]
    async fn put_settings_writes_and_returns_settings() {
        let dir = TempDir::new().unwrap();
        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
        let ctx = AppContext::new_test(dir.path().to_path_buf());
        let api = SettingsApi { ctx: Arc::new(ctx) };

        let mut s = default_project_settings();
        s.default_qa = "agent".to_string();
        s.max_retries = 5;
        s.rate_limit_notifications = false;

        let result = api.put_settings(Json(s)).await.unwrap().0;
        assert_eq!(result.default_qa, "agent");
        assert_eq!(result.max_retries, 5);
        assert!(!result.rate_limit_notifications);
    }

    #[tokio::test]
    async fn put_settings_preserves_agent_sections() {
        let dir = TempDir::new().unwrap();
        let huskies_dir = dir.path().join(".huskies");
        std::fs::create_dir_all(&huskies_dir).unwrap();

        // Write a project.toml with agent sections
        std::fs::write(
            huskies_dir.join("project.toml"),
            r#"
[[agent]]
name = "coder-1"
model = "sonnet"
stage = "coder"

[[component]]
name = "server"
path = "."
"#,
        )
        .unwrap();

        let ctx = AppContext::new_test(dir.path().to_path_buf());
        let api = SettingsApi { ctx: Arc::new(ctx) };

        let mut s = default_project_settings();
        s.default_qa = "human".to_string();
        api.put_settings(Json(s)).await.unwrap();

        // Re-read the file and verify agent/component sections are still there
        let written = std::fs::read_to_string(huskies_dir.join("project.toml")).unwrap();
        assert!(
            written.contains("coder-1"),
            "agent section should be preserved"
        );
        assert!(
            written.contains("server"),
            "component section should be preserved"
        );
        assert!(written.contains("human"), "new setting should be written");
    }

    #[tokio::test]
    async fn put_settings_rejects_invalid_qa_mode() {
        let dir = TempDir::new().unwrap();
        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();
        let ctx = AppContext::new_test(dir.path().to_path_buf());
        let api = SettingsApi { ctx: Arc::new(ctx) };

        let mut s = default_project_settings();
        s.default_qa = "invalid_mode".to_string();

        let result = api.put_settings(Json(s)).await;
        assert!(result.is_err());
        let err = result.unwrap_err();
        assert_eq!(err.status(), poem::http::StatusCode::BAD_REQUEST);
    }

    #[test]
    fn validate_project_settings_accepts_valid_qa_modes() {
        for mode in &["server", "agent", "human"] {
            let s = ProjectSettings {
                default_qa: mode.to_string(),
                default_coder_model: None,
                max_coders: None,
                max_retries: 2,
                base_branch: None,
                rate_limit_notifications: true,
                timezone: None,
                rendezvous: None,
                watcher_sweep_interval_secs: 60,
                watcher_done_retention_secs: 14400,
            };
            assert!(
                validate_project_settings(&s).is_ok(),
                "qa mode '{mode}' should be valid"
            );
        }
    }

    #[test]
    fn validate_project_settings_rejects_unknown_qa_mode() {
        let s = ProjectSettings {
            default_qa: "robot".to_string(),
            default_coder_model: None,
            max_coders: None,
            max_retries: 2,
            base_branch: None,
            rate_limit_notifications: true,
            timezone: None,
            rendezvous: None,
            watcher_sweep_interval_secs: 60,
            watcher_done_retention_secs: 14400,
        };
        let err = validate_project_settings(&s).unwrap_err();
        assert!(err.contains("robot"));
    }

    #[test]
    fn write_and_read_project_settings_roundtrip() {
        let dir = TempDir::new().unwrap();
        std::fs::create_dir_all(dir.path().join(".huskies")).unwrap();

        let s = ProjectSettings {
            default_qa: "agent".to_string(),
            default_coder_model: Some("opus".to_string()),
            max_coders: Some(2),
            max_retries: 3,
            base_branch: Some("main".to_string()),
            rate_limit_notifications: false,
            timezone: Some("America/New_York".to_string()),
            rendezvous: Some("ws://host:3001/crdt-sync".to_string()),
            watcher_sweep_interval_secs: 30,
            watcher_done_retention_secs: 7200,
        };

        write_project_settings(dir.path(), &s).unwrap();

        let config = ProjectConfig::load(dir.path()).unwrap();
        let loaded = settings_from_config(&config);

        assert_eq!(loaded.default_qa, "agent");
        assert_eq!(loaded.default_coder_model, Some("opus".to_string()));
        assert_eq!(loaded.max_coders, Some(2));
        assert_eq!(loaded.max_retries, 3);
        assert_eq!(loaded.base_branch, Some("main".to_string()));
        assert!(!loaded.rate_limit_notifications);
        assert_eq!(loaded.timezone, Some("America/New_York".to_string()));
        assert_eq!(
            loaded.rendezvous,
            Some("ws://host:3001/crdt-sync".to_string())
        );
        assert_eq!(loaded.watcher_sweep_interval_secs, 30);
        assert_eq!(loaded.watcher_done_retention_secs, 7200);
    }

    #[test]
    fn write_project_settings_clears_optional_fields_when_none() {
|
||||
let dir = TempDir::new().unwrap();
|
||||
let huskies_dir = dir.path().join(".huskies");
|
||||
std::fs::create_dir_all(&huskies_dir).unwrap();
|
||||
|
||||
// First write with optional fields set
|
||||
let s_with = ProjectSettings {
|
||||
default_qa: "server".to_string(),
|
||||
default_coder_model: Some("sonnet".to_string()),
|
||||
max_coders: Some(3),
|
||||
max_retries: 2,
|
||||
base_branch: Some("master".to_string()),
|
||||
rate_limit_notifications: true,
|
||||
timezone: Some("UTC".to_string()),
|
||||
rendezvous: None,
|
||||
watcher_sweep_interval_secs: 60,
|
||||
watcher_done_retention_secs: 14400,
|
||||
};
|
||||
write_project_settings(dir.path(), &s_with).unwrap();
|
||||
|
||||
// Then write with optional fields cleared
|
||||
let s_clear = ProjectSettings {
|
||||
default_qa: "server".to_string(),
|
||||
default_coder_model: None,
|
||||
max_coders: None,
|
||||
max_retries: 2,
|
||||
base_branch: None,
|
||||
rate_limit_notifications: true,
|
||||
timezone: None,
|
||||
rendezvous: None,
|
||||
watcher_sweep_interval_secs: 60,
|
||||
watcher_done_retention_secs: 14400,
|
||||
};
|
||||
write_project_settings(dir.path(), &s_clear).unwrap();
|
||||
|
||||
let config = ProjectConfig::load(dir.path()).unwrap();
|
||||
let loaded = settings_from_config(&config);
|
||||
assert!(loaded.default_coder_model.is_none());
|
||||
assert!(loaded.max_coders.is_none());
|
||||
assert!(loaded.base_branch.is_none());
|
||||
assert!(loaded.timezone.is_none());
|
||||
}
|
||||
}
|
||||
|
||||
@@ -100,6 +100,24 @@ const DEFAULT_PROJECT_SETTINGS_TOML: &str = r#"# Project-wide default QA mode: "
# Per-story `qa` front matter overrides this setting.
default_qa = "server"

# Maximum number of retries per story per pipeline stage before marking as blocked.
# Set to 0 to disable retry limits.
max_retries = 2

# Default model for coder-stage agents (e.g. "sonnet", "opus").
# When set, only coder agents whose model matches this value are considered for
# auto-assignment, so opus agents are only used when explicitly requested via
# story front matter `agent:` field.
# default_coder_model = "sonnet"

# Maximum number of concurrent coder-stage agents.
# Stories wait in 2_current/ until a slot frees up.
# max_coders = 3

# Override the base branch for worktree creation and merge operations.
# When not set, the system auto-detects the base branch from the current HEAD.
# base_branch = "main"

# Suppress soft rate-limit warning notifications in chat.
# Hard blocks and story-blocked notifications are always sent.
# rate_limit_notifications = true
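
Put together, a `.huskies/project.toml` that uncomments and overrides a few of these scaffold defaults might look like this (values here are purely illustrative, not recommendations):

```toml
# Illustrative override of the scaffold defaults above.
default_qa = "server"
max_retries = 2

# Pin auto-assignment to sonnet coder agents and cap concurrency.
default_coder_model = "sonnet"
max_coders = 3

# Skip base-branch auto-detection.
base_branch = "main"
```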

@@ -759,6 +777,78 @@ mod tests {
        );
    }

    #[test]
    fn scaffold_project_toml_contains_max_retries_with_default_value() {
        let dir = tempdir().unwrap();
        scaffold_story_kit(dir.path(), 3001).unwrap();

        let content = fs::read_to_string(dir.path().join(".huskies/project.toml")).unwrap();
        assert!(
            content.contains("max_retries = 2"),
            "project.toml scaffold should include max_retries with default value 2"
        );
        assert!(
            content.contains("Maximum number of retries"),
            "project.toml scaffold should include a comment explaining max_retries"
        );
    }

    #[test]
    fn scaffold_project_toml_contains_commented_out_optional_fields() {
        let dir = tempdir().unwrap();
        scaffold_story_kit(dir.path(), 3001).unwrap();

        let content = fs::read_to_string(dir.path().join(".huskies/project.toml")).unwrap();
        assert!(
            content.contains("# default_coder_model"),
            "project.toml scaffold should include commented-out default_coder_model"
        );
        assert!(
            content.contains("# max_coders"),
            "project.toml scaffold should include commented-out max_coders"
        );
        assert!(
            content.contains("# base_branch"),
            "project.toml scaffold should include commented-out base_branch"
        );
    }

    #[test]
    fn scaffold_project_toml_round_trips_through_project_config_load() {
        use crate::config::ProjectConfig;

        let dir = tempdir().unwrap();
        scaffold_story_kit(dir.path(), 3001).unwrap();

        // The generated project.toml must parse without error.
        let config = ProjectConfig::load(dir.path())
            .expect("Generated project.toml should parse without error");

        // Key defaults must survive the round-trip.
        assert_eq!(config.default_qa, "server");
        assert_eq!(config.max_retries, 2);
        assert!(
            config.rate_limit_notifications,
            "rate_limit_notifications should default to true"
        );
        assert!(
            config.default_coder_model.is_none(),
            "default_coder_model should be None when commented out"
        );
        assert!(
            config.max_coders.is_none(),
            "max_coders should be None when commented out"
        );
        assert!(
            config.base_branch.is_none(),
            "base_branch should be None when commented out"
        );
        assert!(
            config.timezone.is_none(),
            "timezone should be None when commented out"
        );
    }

    #[test]
    fn scaffold_context_is_blank_template_not_story_kit_content() {
        let dir = tempdir().unwrap();

@@ -20,6 +20,7 @@ mod llm;
pub mod log_buffer;
pub(crate) mod pipeline_state;
pub mod rebuild;
mod service;
mod state;
mod store;
mod workflow;
@@ -544,6 +545,8 @@ async fn main() -> Result<(), std::io::Error> {
    let watcher_rx_for_whatsapp = watcher_tx.subscribe();
    let watcher_rx_for_slack = watcher_tx.subscribe();
    let watcher_rx_for_discord = watcher_tx.subscribe();
    // Subscribe to watcher events for the per-project event buffer (gateway polling).
    let watcher_rx_for_events = watcher_tx.subscribe();
    // Wrap perm_rx in Arc<Mutex> so it can be shared with both the WebSocket
    // handler (via AppContext) and the Matrix bot.
    let perm_rx = Arc::new(tokio::sync::Mutex::new(perm_rx));
@@ -802,7 +805,18 @@ async fn main() -> Result<(), std::io::Error> {
        test_jobs: std::sync::Arc::new(std::sync::Mutex::new(std::collections::HashMap::new())),
    };

    let app = build_routes(ctx, whatsapp_ctx.clone(), slack_ctx.clone(), port);
    // Create the per-project event buffer and subscribe it to the watcher channel
    // so that pipeline events are buffered for the gateway's `/api/events` poller.
    let event_buffer = crate::http::events::EventBuffer::new();
    crate::http::events::subscribe_to_watcher(event_buffer.clone(), watcher_rx_for_events);

    let app = build_routes(
        ctx,
        whatsapp_ctx.clone(),
        slack_ctx.clone(),
        port,
        Some(event_buffer),
    );

    // Unified 1-second background tick loop: fires due timers, detects orphaned
    // agents (watchdog), and promotes done→archived items (sweep). Replaces the
@@ -868,6 +882,7 @@ async fn main() -> Result<(), std::io::Error> {
            matrix_shutdown_rx,
            None,
            vec![],
            std::collections::BTreeMap::new(),
        );
    } else {
        // Keep the receiver alive (drop it) so the sender never errors.

@@ -0,0 +1,190 @@
//! Agent I/O wrappers — the ONLY place in `service/agents/` that may perform
//! filesystem reads, process invocations, or other side effects.
//!
//! Every function here is a thin adapter over an existing lower-level call.
//! No business logic lives here; all branching belongs in the pure topic files
//! or in `mod.rs`.
use crate::agent_log::{self, LogEntry};
use crate::agents::token_usage::{self, TokenUsageRecord};
use crate::config::ProjectConfig;
use crate::worktree::{self, WorktreeListEntry};
use std::path::Path;

use super::Error;

/// Return `true` if the story's `.md` file exists in `5_done/` or `6_archived/`.
pub fn is_archived(project_root: &Path, story_id: &str) -> bool {
    let work = project_root.join(".huskies").join("work");
    let filename = format!("{story_id}.md");
    work.join("5_done").join(&filename).exists() || work.join("6_archived").join(&filename).exists()
}

/// Read and return all log entries for the most recent session of an agent.
///
/// Returns `Ok(vec![])` when no log file exists yet.
pub fn read_agent_log(
    project_root: &Path,
    story_id: &str,
    agent_name: &str,
) -> Result<Vec<LogEntry>, Error> {
    let log_path = agent_log::find_latest_log(project_root, story_id, agent_name);
    let Some(path) = log_path else {
        return Ok(Vec::new());
    };
    agent_log::read_log(&path).map_err(Error::Io)
}

/// Read all token usage records from the persistent JSONL file.
///
/// Returns an empty vec when the file does not yet exist.
pub fn read_token_records(project_root: &Path) -> Result<Vec<TokenUsageRecord>, Error> {
    token_usage::read_all(project_root).map_err(Error::Io)
}

/// Load the project configuration from `project.toml`.
///
/// Falls back to default config when the file is absent.
pub fn load_config(project_root: &Path) -> Result<ProjectConfig, Error> {
    ProjectConfig::load(project_root).map_err(Error::Config)
}

/// List all worktrees under `.huskies/worktrees/`.
pub fn list_worktrees(project_root: &Path) -> Result<Vec<WorktreeListEntry>, Error> {
    worktree::list_worktrees(project_root).map_err(Error::Io)
}

/// Remove the git worktree for a story by ID.
///
/// Loads the project config to honour teardown commands. Returns an error if
/// the worktree directory does not exist.
pub async fn remove_worktree(project_root: &Path, story_id: &str) -> Result<(), Error> {
    let config = load_config(project_root)?;
    worktree::remove_worktree_by_story_id(project_root, story_id, &config)
        .await
        .map_err(Error::Worktree)
}

/// Read test results persisted in a story's markdown file.
///
/// Returns `None` when the story has no test results section.
pub fn read_test_results_from_file(
    project_root: &Path,
    story_id: &str,
) -> Option<crate::workflow::StoryTestResults> {
    crate::http::workflow::read_test_results_from_story_file(project_root, story_id)
}

/// Read a work item file from a pipeline stage directory.
///
/// Returns `Ok(Some(content))` when found, `Ok(None)` when absent.
pub fn read_work_item_from_stage(
    work_dir: &std::path::Path,
    stage_dir: &str,
    filename: &str,
) -> Result<Option<String>, Error> {
    let file_path = work_dir.join(stage_dir).join(filename);
    if file_path.exists() {
        let content = std::fs::read_to_string(&file_path)
            .map_err(|e| Error::Io(format!("Failed to read work item: {e}")))?;
        Ok(Some(content))
    } else {
        Ok(None)
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::TempDir;

    fn make_work_dirs(tmp: &TempDir) {
        for stage in &["5_done", "6_archived"] {
            std::fs::create_dir_all(tmp.path().join(".huskies").join("work").join(stage)).unwrap();
        }
    }

    // ── is_archived ───────────────────────────────────────────────────────────

    #[test]
    fn is_archived_false_when_file_absent() {
        let tmp = TempDir::new().unwrap();
        make_work_dirs(&tmp);
        assert!(!is_archived(tmp.path(), "42_story_foo"));
    }

    #[test]
    fn is_archived_true_when_in_5_done() {
        let tmp = TempDir::new().unwrap();
        make_work_dirs(&tmp);
        std::fs::write(
            tmp.path().join(".huskies/work/5_done/42_story_foo.md"),
            "---\nname: test\n---\n",
        )
        .unwrap();
        assert!(is_archived(tmp.path(), "42_story_foo"));
    }

    #[test]
    fn is_archived_true_when_in_6_archived() {
        let tmp = TempDir::new().unwrap();
        make_work_dirs(&tmp);
        std::fs::write(
            tmp.path().join(".huskies/work/6_archived/42_story_foo.md"),
            "---\nname: test\n---\n",
        )
        .unwrap();
        assert!(is_archived(tmp.path(), "42_story_foo"));
    }

    // ── read_agent_log ────────────────────────────────────────────────────────

    #[test]
    fn read_agent_log_returns_empty_when_no_log() {
        let tmp = TempDir::new().unwrap();
        let entries = read_agent_log(tmp.path(), "42_story_foo", "coder-1").unwrap();
        assert!(entries.is_empty());
    }

    // ── read_token_records ────────────────────────────────────────────────────

    #[test]
    fn read_token_records_returns_empty_when_no_file() {
        let tmp = TempDir::new().unwrap();
        let records = read_token_records(tmp.path()).unwrap();
        assert!(records.is_empty());
    }

    // ── load_config ───────────────────────────────────────────────────────────

    #[test]
    fn load_config_returns_default_when_no_file() {
        let tmp = TempDir::new().unwrap();
        std::fs::create_dir_all(tmp.path().join(".huskies")).unwrap();
        let config = load_config(tmp.path()).unwrap();
        // Default config has one "default" agent
        assert_eq!(config.agent.len(), 1);
        assert_eq!(config.agent[0].name, "default");
    }

    // ── list_worktrees ────────────────────────────────────────────────────────

    #[test]
    fn list_worktrees_empty_when_no_dir() {
        let tmp = TempDir::new().unwrap();
        let entries = list_worktrees(tmp.path()).unwrap();
        assert!(entries.is_empty());
    }

    #[test]
    fn list_worktrees_returns_subdirs() {
        let tmp = TempDir::new().unwrap();
        let wt_dir = tmp.path().join(".huskies").join("worktrees");
        std::fs::create_dir_all(wt_dir.join("42_story_foo")).unwrap();
        std::fs::create_dir_all(wt_dir.join("43_story_bar")).unwrap();
        let mut entries = list_worktrees(tmp.path()).unwrap();
        entries.sort_by(|a, b| a.story_id.cmp(&b.story_id));
        assert_eq!(entries.len(), 2);
        assert_eq!(entries[0].story_id, "42_story_foo");
        assert_eq!(entries[1].story_id, "43_story_bar");
    }
}
@@ -0,0 +1,476 @@
//! Agent service — public API for the agent domain.
//!
//! This module orchestrates calls to `io.rs` (side effects) and the pure
//! topic modules (`selection`, `token`) to implement the full agent service
//! surface. HTTP handlers call these functions instead of reaching directly
//! into `AgentPool` or the filesystem.
//!
//! Conventions: `docs/architecture/service-modules.md`
mod io;
pub mod selection;
pub mod token;

use crate::agents::AgentInfo;
use crate::agents::AgentPool;
use crate::agents::token_usage::TokenUsageRecord;
use crate::config::ProjectConfig;
use crate::workflow::StoryTestResults;
use crate::worktree::{WorktreeInfo, WorktreeListEntry};
use std::path::Path;

pub use io::is_archived;
pub use token::TokenCostSummary;

// ── Error type ────────────────────────────────────────────────────────────────

/// Typed errors returned by `service::agents` functions.
///
/// HTTP handlers map these to specific status codes — see the conventions doc
/// for the full mapping table.
#[derive(Debug)]
pub enum Error {
    /// No agent with the given name/story exists in the pool.
    AgentNotFound(String),
    /// No work item found for the requested story ID.
    WorkItemNotFound(String),
    /// A worktree operation failed.
    Worktree(String),
    /// Project configuration could not be loaded.
    Config(String),
    /// A filesystem or I/O operation failed.
    Io(String),
}

impl std::fmt::Display for Error {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::AgentNotFound(msg) => write!(f, "Agent not found: {msg}"),
            Self::WorkItemNotFound(msg) => write!(f, "Work item not found: {msg}"),
            Self::Worktree(msg) => write!(f, "Worktree error: {msg}"),
            Self::Config(msg) => write!(f, "Config error: {msg}"),
            Self::Io(msg) => write!(f, "I/O error: {msg}"),
        }
    }
}

// ── Shared service types ─────────────────────────────────────────────────────

/// Content and metadata for a work-item (story) file.
#[derive(Debug, Clone)]
pub struct WorkItemContent {
    pub content: String,
    pub stage: String,
    pub name: Option<String>,
    pub agent: Option<String>,
}

/// A single entry in the project's configured agent roster.
#[derive(Debug, Clone)]
pub struct AgentConfigEntry {
    pub name: String,
    pub role: String,
    pub stage: Option<String>,
    pub model: Option<String>,
    pub allowed_tools: Option<Vec<String>>,
    pub max_turns: Option<u32>,
    pub max_budget_usd: Option<f64>,
}

// ── Public API ────────────────────────────────────────────────────────────────

/// Start an agent for a story.
///
/// Takes only what it needs: the pool (for spawning) and the project root
/// (for config and worktree creation). Does not touch `AppContext`.
pub async fn start_agent(
    pool: &AgentPool,
    project_root: &Path,
    story_id: &str,
    agent_name: Option<&str>,
    resume_context: Option<&str>,
    session_id_to_resume: Option<String>,
) -> Result<AgentInfo, Error> {
    pool.start_agent(
        project_root,
        story_id,
        agent_name,
        resume_context,
        session_id_to_resume,
    )
    .await
    .map_err(Error::AgentNotFound)
}

/// Stop a running agent.
pub async fn stop_agent(
    pool: &AgentPool,
    project_root: &Path,
    story_id: &str,
    agent_name: &str,
) -> Result<(), Error> {
    pool.stop_agent(project_root, story_id, agent_name)
        .await
        .map_err(Error::AgentNotFound)
}

/// List all agents, optionally filtering out those belonging to archived stories.
///
/// When `project_root` is `None` the archive filter is skipped and all agents
/// are returned (safe default when the server is not yet fully configured).
pub fn list_agents(pool: &AgentPool, project_root: Option<&Path>) -> Result<Vec<AgentInfo>, Error> {
    let agents = pool.list_agents().map_err(Error::Io)?;
    match project_root {
        Some(root) => Ok(selection::filter_non_archived(agents, |id| {
            io::is_archived(root, id)
        })),
        None => Ok(agents),
    }
}

/// Create a git worktree for a story.
pub async fn create_worktree(
    pool: &AgentPool,
    project_root: &Path,
    story_id: &str,
) -> Result<WorktreeInfo, Error> {
    pool.create_worktree(project_root, story_id)
        .await
        .map_err(Error::Worktree)
}

/// List all worktrees under `.huskies/worktrees/`.
pub fn list_worktrees(project_root: &Path) -> Result<Vec<WorktreeListEntry>, Error> {
    io::list_worktrees(project_root)
}

/// Remove the git worktree for a story.
pub async fn remove_worktree(project_root: &Path, story_id: &str) -> Result<(), Error> {
    io::remove_worktree(project_root, story_id).await
}

/// Get the configured agent roster from `project.toml`.
pub fn get_agent_config(project_root: &Path) -> Result<Vec<AgentConfigEntry>, Error> {
    let config = io::load_config(project_root)?;
    Ok(config_to_entries(&config))
}

/// Reload and return the project's agent configuration.
///
/// Semantically identical to `get_agent_config`; provided as a distinct
/// function so callers can express intent (UI "Reload" button).
pub fn reload_config(project_root: &Path) -> Result<Vec<AgentConfigEntry>, Error> {
    get_agent_config(project_root)
}

/// Get the concatenated output text for an agent's most recent session.
///
/// Returns an empty string when no log file exists yet.
pub fn get_agent_output(
    project_root: &Path,
    story_id: &str,
    agent_name: &str,
) -> Result<String, Error> {
    let entries = io::read_agent_log(project_root, story_id, agent_name)?;
    Ok(selection::collect_output_text(&entries))
}

/// Get the markdown content and metadata for a work item.
///
/// Searches all pipeline stage directories, falling back to the CRDT content
/// store when no file is present on disk. Returns `Error::WorkItemNotFound`
/// when neither source has the item.
pub fn get_work_item_content(
    project_root: &Path,
    story_id: &str,
) -> Result<WorkItemContent, Error> {
    let stages = [
        ("1_backlog", "backlog"),
        ("2_current", "current"),
        ("3_qa", "qa"),
        ("4_merge", "merge"),
        ("5_done", "done"),
        ("6_archived", "archived"),
    ];

    let work_dir = project_root.join(".huskies").join("work");
    let filename = format!("{story_id}.md");

    for (stage_dir, stage_name) in &stages {
        if let Some(content) = io::read_work_item_from_stage(&work_dir, stage_dir, &filename)? {
            let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
            return Ok(WorkItemContent {
                content,
                stage: stage_name.to_string(),
                name: metadata.as_ref().and_then(|m| m.name.clone()),
                agent: metadata.and_then(|m| m.agent),
            });
        }
    }

    // CRDT-only fallback
    if let Some(content) = crate::db::read_content(story_id) {
        let item = crate::pipeline_state::read_typed(story_id)
            .map_err(|e| Error::Io(format!("Pipeline read error: {e}")))?;
        let stage = item
            .as_ref()
            .map(|i| match &i.stage {
                crate::pipeline_state::Stage::Backlog => "backlog",
                crate::pipeline_state::Stage::Coding => "current",
                crate::pipeline_state::Stage::Qa => "qa",
                crate::pipeline_state::Stage::Merge { .. } => "merge",
                crate::pipeline_state::Stage::Done { .. } => "done",
                crate::pipeline_state::Stage::Archived { .. } => "archived",
            })
            .unwrap_or("unknown")
            .to_string();
        let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
        return Ok(WorkItemContent {
            content,
            stage,
            name: metadata.as_ref().and_then(|m| m.name.clone()),
            agent: metadata.and_then(|m| m.agent),
        });
    }

    Err(Error::WorkItemNotFound(format!(
        "Work item not found: {story_id}"
    )))
}

/// Get test results for a work item.
///
/// Checks in-memory workflow state first (fast path), then falls back to
/// results persisted in the story file.
pub fn get_test_results(
    project_root: &Path,
    story_id: &str,
    workflow: &crate::workflow::WorkflowState,
) -> Option<StoryTestResults> {
    if let Some(results) = workflow.results.get(story_id) {
        return Some(results.clone());
    }
    io::read_test_results_from_file(project_root, story_id)
}

/// Get the aggregated token cost for a specific story.
pub fn get_work_item_token_cost(
    project_root: &Path,
    story_id: &str,
) -> Result<TokenCostSummary, Error> {
    let records = io::read_token_records(project_root)?;
    Ok(token::aggregate_for_story(&records, story_id))
}

/// Get all token usage records across all stories.
pub fn get_all_token_usage(project_root: &Path) -> Result<Vec<TokenUsageRecord>, Error> {
    io::read_token_records(project_root)
}

// ── Helpers ───────────────────────────────────────────────────────────────────

fn config_to_entries(config: &ProjectConfig) -> Vec<AgentConfigEntry> {
    config
        .agent
        .iter()
        .map(|a| AgentConfigEntry {
            name: a.name.clone(),
            role: a.role.clone(),
            stage: a.stage.clone(),
            model: a.model.clone(),
            allowed_tools: a.allowed_tools.clone(),
            max_turns: a.max_turns,
            max_budget_usd: a.max_budget_usd,
        })
        .collect()
}

// ── Integration tests ─────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;
    use crate::agents::AgentStatus;
    use std::sync::Arc;
    use tempfile::TempDir;

    fn make_pool(tmp: &TempDir) -> Arc<AgentPool> {
        let (tx, _) = tokio::sync::broadcast::channel(64);
        let pool = AgentPool::new(3001, tx);
        let state = crate::state::SessionState::default();
        *state.project_root.lock().unwrap() = Some(tmp.path().to_path_buf());
        Arc::new(pool)
    }

    fn make_work_dirs(tmp: &TempDir) {
        for stage in &["5_done", "6_archived"] {
            std::fs::create_dir_all(tmp.path().join(".huskies").join("work").join(stage)).unwrap();
        }
    }

    fn make_stage_dirs(tmp: &TempDir) {
        for stage in &[
            "1_backlog",
            "2_current",
            "3_qa",
            "4_merge",
            "5_done",
            "6_archived",
        ] {
            std::fs::create_dir_all(tmp.path().join(".huskies").join("work").join(stage)).unwrap();
        }
    }

    fn make_project_toml(tmp: &TempDir, content: &str) {
        let sk_dir = tmp.path().join(".huskies");
        std::fs::create_dir_all(&sk_dir).unwrap();
        std::fs::write(sk_dir.join("project.toml"), content).unwrap();
    }

    // ── list_agents ───────────────────────────────────────────────────────────

    #[tokio::test]
    async fn list_agents_excludes_archived_stories() {
        let tmp = TempDir::new().unwrap();
        make_work_dirs(&tmp);
        std::fs::write(
            tmp.path()
                .join(".huskies/work/6_archived/79_story_archived.md"),
            "---\nname: archived\n---\n",
        )
        .unwrap();

        let pool = make_pool(&tmp);
        pool.inject_test_agent("79_story_archived", "coder-1", AgentStatus::Completed);
        pool.inject_test_agent("80_story_active", "coder-1", AgentStatus::Running);

        let agents = list_agents(&pool, Some(tmp.path())).unwrap();
        assert!(!agents.iter().any(|a| a.story_id == "79_story_archived"));
        assert!(agents.iter().any(|a| a.story_id == "80_story_active"));
    }

    #[tokio::test]
    async fn list_agents_includes_all_when_no_project_root() {
        let tmp = TempDir::new().unwrap();
        let pool = make_pool(&tmp);
        pool.inject_test_agent("42_story_whatever", "coder-1", AgentStatus::Completed);

        let agents = list_agents(&pool, None).unwrap();
        assert!(agents.iter().any(|a| a.story_id == "42_story_whatever"));
    }

    // ── get_agent_config ──────────────────────────────────────────────────────

    #[test]
    fn get_agent_config_returns_default_when_no_toml() {
        let tmp = TempDir::new().unwrap();
        std::fs::create_dir_all(tmp.path().join(".huskies")).unwrap();
        let entries = get_agent_config(tmp.path()).unwrap();
        assert_eq!(entries.len(), 1);
        assert_eq!(entries[0].name, "default");
    }

    #[test]
    fn get_agent_config_returns_configured_agents() {
        let tmp = TempDir::new().unwrap();
        make_project_toml(
            &tmp,
            r#"
[[agent]]
name = "coder-1"
role = "Full-stack engineer"
model = "sonnet"
max_turns = 30
max_budget_usd = 5.0
"#,
        );
        let entries = get_agent_config(tmp.path()).unwrap();
        assert_eq!(entries.len(), 1);
        assert_eq!(entries[0].name, "coder-1");
        assert_eq!(entries[0].model, Some("sonnet".to_string()));
        assert_eq!(entries[0].max_turns, Some(30));
    }

    // ── get_agent_output ──────────────────────────────────────────────────────

    #[test]
    fn get_agent_output_returns_empty_when_no_log() {
        let tmp = TempDir::new().unwrap();
        let output = get_agent_output(tmp.path(), "42_story_foo", "coder-1").unwrap();
        assert_eq!(output, "");
    }

    // ── get_work_item_content ─────────────────────────────────────────────────

    #[test]
    fn get_work_item_content_reads_from_backlog() {
        let tmp = TempDir::new().unwrap();
        make_stage_dirs(&tmp);
        std::fs::write(
            tmp.path().join(".huskies/work/1_backlog/42_story_foo.md"),
            "---\nname: \"Foo Story\"\n---\n\nSome content.",
        )
        .unwrap();
        let item = get_work_item_content(tmp.path(), "42_story_foo").unwrap();
        assert!(item.content.contains("Some content."));
        assert_eq!(item.stage, "backlog");
        assert_eq!(item.name, Some("Foo Story".to_string()));
    }

    #[test]
    fn get_work_item_content_returns_not_found_for_absent_story() {
        let tmp = TempDir::new().unwrap();
        make_stage_dirs(&tmp);
        let result = get_work_item_content(tmp.path(), "99_story_nonexistent");
        assert!(matches!(result, Err(Error::WorkItemNotFound(_))));
    }

    // ── get_work_item_token_cost ──────────────────────────────────────────────

    #[test]
    fn get_work_item_token_cost_returns_zero_when_no_records() {
        let tmp = TempDir::new().unwrap();
        let summary = get_work_item_token_cost(tmp.path(), "42_story_foo").unwrap();
        assert_eq!(summary.total_cost_usd, 0.0);
        assert!(summary.agents.is_empty());
    }

    // ── get_all_token_usage ───────────────────────────────────────────────────

    #[test]
    fn get_all_token_usage_returns_empty_when_no_file() {
        let tmp = TempDir::new().unwrap();
        let records = get_all_token_usage(tmp.path()).unwrap();
        assert!(records.is_empty());
    }

    // ── get_test_results ──────────────────────────────────────────────────────

    #[test]
    fn get_test_results_returns_none_when_no_results() {
        let tmp = TempDir::new().unwrap();
        let workflow = crate::workflow::WorkflowState::default();
        let result = get_test_results(tmp.path(), "42_story_foo", &workflow);
        assert!(result.is_none());
    }

    #[test]
    fn get_test_results_returns_in_memory_results_first() {
        let tmp = TempDir::new().unwrap();
        let mut workflow = crate::workflow::WorkflowState::default();
        workflow
            .record_test_results_validated(
                "42_story_foo".to_string(),
                vec![crate::workflow::TestCaseResult {
                    name: "test1".to_string(),
                    status: crate::workflow::TestStatus::Pass,
                    details: None,
                }],
                vec![],
            )
            .unwrap();
        let result =
            get_test_results(tmp.path(), "42_story_foo", &workflow).expect("should have results");
|
||||
assert_eq!(result.unit.len(), 1);
|
||||
assert_eq!(result.unit[0].name, "test1");
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,171 @@
//! Pure agent selection and filtering logic — no I/O, no side effects.
//!
//! All functions in this module are pure: they take data, transform it, and
//! return a result without touching the filesystem, network, or any mutable
//! global state. This makes them fast to test without tempdirs or async runtimes.
use crate::agent_log::LogEntry;
use crate::agents::AgentInfo;

/// Filter a list of agents, removing any whose story is archived.
///
/// `is_archived` is a predicate injected by the caller — typically a closure
/// over the project root that calls `io::is_archived`. This keeps the function
/// pure: it never touches the filesystem itself.
pub fn filter_non_archived<F>(agents: Vec<AgentInfo>, is_archived: F) -> Vec<AgentInfo>
where
    F: Fn(&str) -> bool,
{
    agents
        .into_iter()
        .filter(|info| !is_archived(&info.story_id))
        .collect()
}

/// Concatenate the text of all `output` events from an agent log.
///
/// Non-output events (status, done, error, agent_json, thinking) are silently
/// skipped. Returns an empty string when `entries` is empty or contains no
/// output events.
pub fn collect_output_text(entries: &[LogEntry]) -> String {
    entries
        .iter()
        .filter(|e| e.event.get("type").and_then(|t| t.as_str()) == Some("output"))
        .filter_map(|e| {
            e.event
                .get("text")
                .and_then(|t| t.as_str())
                .map(str::to_owned)
        })
        .collect()
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::agents::AgentStatus;

    fn make_agent(story_id: &str) -> AgentInfo {
        AgentInfo {
            story_id: story_id.to_string(),
            agent_name: "coder-1".to_string(),
            status: AgentStatus::Running,
            session_id: None,
            worktree_path: None,
            base_branch: None,
            completion: None,
            log_session_id: None,
            throttled: false,
        }
    }

    fn make_log_entry(event_type: &str, text: Option<&str>) -> LogEntry {
        let mut obj = serde_json::Map::new();
        obj.insert(
            "type".to_string(),
            serde_json::Value::String(event_type.to_string()),
        );
        if let Some(t) = text {
            obj.insert("text".to_string(), serde_json::Value::String(t.to_string()));
        }
        LogEntry {
            timestamp: "2024-01-01T00:00:00Z".to_string(),
            event: serde_json::Value::Object(obj),
        }
    }

    // ── filter_non_archived ───────────────────────────────────────────────────

    #[test]
    fn filter_keeps_non_archived_agents() {
        let agents = vec![make_agent("10_active"), make_agent("11_active")];
        let result = filter_non_archived(agents, |_| false);
        assert_eq!(result.len(), 2);
    }

    #[test]
    fn filter_removes_archived_agents() {
        let agents = vec![make_agent("10_archived"), make_agent("11_active")];
        let result = filter_non_archived(agents, |id| id == "10_archived");
        assert_eq!(result.len(), 1);
        assert_eq!(result[0].story_id, "11_active");
    }

    #[test]
    fn filter_removes_all_when_all_archived() {
        let agents = vec![make_agent("10_a"), make_agent("11_b")];
        let result = filter_non_archived(agents, |_| true);
        assert!(result.is_empty());
    }

    #[test]
    fn filter_returns_empty_for_empty_input() {
        let result = filter_non_archived(vec![], |_| false);
        assert!(result.is_empty());
    }

    #[test]
    fn filter_preserves_order() {
        let agents = vec![
            make_agent("1_a"),
            make_agent("2_b"),
            make_agent("3_c"),
            make_agent("4_d"),
        ];
        let result = filter_non_archived(agents, |id| id == "2_b");
        assert_eq!(result.len(), 3);
        assert_eq!(result[0].story_id, "1_a");
        assert_eq!(result[1].story_id, "3_c");
        assert_eq!(result[2].story_id, "4_d");
    }

    // ── collect_output_text ───────────────────────────────────────────────────

    #[test]
    fn collect_output_text_empty_entries() {
        let result = collect_output_text(&[]);
        assert_eq!(result, "");
    }

    #[test]
    fn collect_output_text_skips_non_output_events() {
        let entries = vec![
            make_log_entry("status", Some("running")),
            make_log_entry("done", None),
        ];
        let result = collect_output_text(&entries);
        assert_eq!(result, "");
    }

    #[test]
    fn collect_output_text_concatenates_output_events() {
        let entries = vec![
            make_log_entry("output", Some("Hello ")),
            make_log_entry("output", Some("world\n")),
        ];
        let result = collect_output_text(&entries);
        assert_eq!(result, "Hello world\n");
    }

    #[test]
    fn collect_output_text_skips_output_without_text_field() {
        let entry = LogEntry {
            timestamp: "2024-01-01T00:00:00Z".to_string(),
            event: serde_json::json!({"type": "output"}),
        };
        let result = collect_output_text(&[entry]);
        assert_eq!(result, "");
    }

    #[test]
    fn collect_output_text_mixed_event_types() {
        let entries = vec![
            make_log_entry("status", Some("running")),
            make_log_entry("output", Some("line1\n")),
            make_log_entry("agent_json", None),
            make_log_entry("output", Some("line2\n")),
            make_log_entry("done", None),
        ];
        let result = collect_output_text(&entries);
        assert_eq!(result, "line1\nline2\n");
    }
}
@@ -0,0 +1,160 @@
//! Pure token usage aggregation — no I/O, no side effects.
//!
//! Functions here take slices of `TokenUsageRecord` (already loaded by `io.rs`)
//! and compute summaries. Tests cover every branch without touching the filesystem.
use crate::agents::token_usage::TokenUsageRecord;
use std::collections::HashMap;

/// Per-agent cost breakdown entry.
#[derive(Debug, Clone, PartialEq)]
pub struct AgentTokenCost {
    pub agent_name: String,
    pub model: Option<String>,
    pub input_tokens: u64,
    pub output_tokens: u64,
    pub cache_creation_input_tokens: u64,
    pub cache_read_input_tokens: u64,
    pub total_cost_usd: f64,
}

/// Aggregated token cost for a story.
#[derive(Debug, Clone, PartialEq)]
pub struct TokenCostSummary {
    pub total_cost_usd: f64,
    pub agents: Vec<AgentTokenCost>,
}

/// Aggregate token usage records for a single story.
///
/// Records for other stories are ignored. The returned `agents` list is sorted
/// alphabetically by `agent_name` for deterministic output. Returns a zero-cost
/// summary when no records match the given `story_id`.
pub fn aggregate_for_story(records: &[TokenUsageRecord], story_id: &str) -> TokenCostSummary {
    let mut agent_map: HashMap<String, AgentTokenCost> = HashMap::new();
    let mut total_cost_usd = 0.0_f64;

    for record in records.iter().filter(|r| r.story_id == story_id) {
        total_cost_usd += record.usage.total_cost_usd;
        let entry = agent_map
            .entry(record.agent_name.clone())
            .or_insert_with(|| AgentTokenCost {
                agent_name: record.agent_name.clone(),
                model: record.model.clone(),
                input_tokens: 0,
                output_tokens: 0,
                cache_creation_input_tokens: 0,
                cache_read_input_tokens: 0,
                total_cost_usd: 0.0,
            });
        entry.input_tokens += record.usage.input_tokens;
        entry.output_tokens += record.usage.output_tokens;
        entry.cache_creation_input_tokens += record.usage.cache_creation_input_tokens;
        entry.cache_read_input_tokens += record.usage.cache_read_input_tokens;
        entry.total_cost_usd += record.usage.total_cost_usd;
    }

    let mut agents: Vec<AgentTokenCost> = agent_map.into_values().collect();
    agents.sort_by(|a, b| a.agent_name.cmp(&b.agent_name));

    TokenCostSummary {
        total_cost_usd,
        agents,
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::agents::TokenUsage;

    fn make_record(story_id: &str, agent: &str, cost: f64) -> TokenUsageRecord {
        TokenUsageRecord {
            story_id: story_id.to_string(),
            agent_name: agent.to_string(),
            timestamp: "2024-01-01T00:00:00Z".to_string(),
            model: None,
            usage: TokenUsage {
                input_tokens: 100,
                output_tokens: 50,
                cache_creation_input_tokens: 10,
                cache_read_input_tokens: 20,
                total_cost_usd: cost,
            },
        }
    }

    #[test]
    fn aggregate_returns_zero_when_no_records() {
        let summary = aggregate_for_story(&[], "42_story_foo");
        assert_eq!(summary.total_cost_usd, 0.0);
        assert!(summary.agents.is_empty());
    }

    #[test]
    fn aggregate_filters_to_story_id() {
        let records = vec![
            make_record("42_story_foo", "coder-1", 1.0),
            make_record("99_story_other", "coder-1", 5.0),
        ];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert!((summary.total_cost_usd - 1.0).abs() < f64::EPSILON);
        assert_eq!(summary.agents.len(), 1);
    }

    #[test]
    fn aggregate_sums_tokens_per_agent() {
        let records = vec![
            make_record("42_story_foo", "coder-1", 1.0),
            make_record("42_story_foo", "coder-1", 2.0),
        ];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert!((summary.total_cost_usd - 3.0).abs() < f64::EPSILON);
        assert_eq!(summary.agents.len(), 1);
        assert_eq!(summary.agents[0].input_tokens, 200);
        assert_eq!(summary.agents[0].output_tokens, 100);
        assert!((summary.agents[0].total_cost_usd - 3.0).abs() < f64::EPSILON);
    }

    #[test]
    fn aggregate_splits_by_agent() {
        let records = vec![
            make_record("42_story_foo", "coder-1", 1.0),
            make_record("42_story_foo", "qa", 0.5),
        ];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert!((summary.total_cost_usd - 1.5).abs() < f64::EPSILON);
        assert_eq!(summary.agents.len(), 2);
        // sorted alphabetically
        assert_eq!(summary.agents[0].agent_name, "coder-1");
        assert_eq!(summary.agents[1].agent_name, "qa");
    }

    #[test]
    fn aggregate_sorts_agents_alphabetically() {
        let records = vec![
            make_record("42_story_foo", "z-agent", 1.0),
            make_record("42_story_foo", "a-agent", 1.0),
            make_record("42_story_foo", "m-agent", 1.0),
        ];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert_eq!(summary.agents[0].agent_name, "a-agent");
        assert_eq!(summary.agents[1].agent_name, "m-agent");
        assert_eq!(summary.agents[2].agent_name, "z-agent");
    }

    #[test]
    fn aggregate_returns_zero_when_no_matching_story() {
        let records = vec![make_record("99_other", "coder-1", 5.0)];
        let summary = aggregate_for_story(&records, "42_story_foo");
        assert_eq!(summary.total_cost_usd, 0.0);
        assert!(summary.agents.is_empty());
    }

    #[test]
    fn aggregate_preserves_model_from_first_record() {
        let mut r = make_record("42_story_foo", "coder-1", 1.0);
        r.model = Some("claude-sonnet".to_string());
        let summary = aggregate_for_story(&[r], "42_story_foo");
        assert_eq!(summary.agents[0].model, Some("claude-sonnet".to_string()));
    }
}
@@ -0,0 +1,8 @@
//! Service layer — domain logic extracted from HTTP handlers.
//!
//! Each sub-module follows the conventions documented in
//! `docs/architecture/service-modules.md`:
//! - `mod.rs` orchestrates and owns the typed `Error` type
//! - `io.rs` is the only file that performs side effects
//! - Topic-named pure files contain branching logic with no I/O
pub mod agents;
@@ -200,6 +200,36 @@ prompt = "You are working on story {{story_id}} ..."
system_prompt = "You are a senior full-stack engineer ..."</code></pre>
<p>To use this agent for a specific story, add <code>agent: opus</code> to the story's front matter, or run <code>start &lt;number&gt; opus</code> in chat.</p>

<h2 id="agent-md">Project-local agent prompt (<code>.huskies/AGENT.md</code>)</h2>
<p>Place a file at <code>.huskies/AGENT.md</code> in your project root to append project-specific guidance to every agent's initial prompt at spawn time.</p>

<h3>How it works</h3>
<ul>
<li>Huskies reads <code>.huskies/AGENT.md</code> each time an agent is spawned — no caching, no restart required.</li>
<li>The file content is appended <em>after</em> the baked-in agent prompt, so project guidance refines core instructions without overriding them.</li>
<li>Applies to all agent roles: coder, QA, mergemaster, and supervisor.</li>
<li>If the file is missing or empty, agents spawn normally — no warnings, no errors.</li>
<li>When the file exists and is non-empty, a single <code>INFO</code> log line is emitted showing the file path and byte count.</li>
</ul>
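<p>The append behavior above can be sketched in a few lines. This is an illustrative example only, not the actual Huskies implementation: the <code>with_project_guidance</code> helper and its signature are hypothetical.</p>

```rust
use std::fs;
use std::path::Path;

/// Hypothetical sketch: append `.huskies/AGENT.md` content to a base prompt.
/// A missing or empty file leaves the prompt unchanged, mirroring the
/// "no warnings, no errors" behavior described above.
fn with_project_guidance(project_root: &Path, base_prompt: &str) -> String {
    let path = project_root.join(".huskies/AGENT.md");
    match fs::read_to_string(&path) {
        Ok(content) if !content.trim().is_empty() => {
            // Project guidance goes after the baked-in prompt.
            format!("{base_prompt}\n\n{content}")
        }
        _ => base_prompt.to_string(), // missing or empty: spawn normally
    }
}

fn main() {
    let root = std::env::temp_dir().join("huskies_agent_md_demo");
    fs::create_dir_all(root.join(".huskies")).unwrap();
    fs::write(root.join(".huskies/AGENT.md"), "## Docs\nEdit HTML files.").unwrap();
    let prompt = with_project_guidance(&root, "You are a coder agent.");
    assert!(prompt.starts_with("You are a coder agent."));
    assert!(prompt.ends_with("Edit HTML files."));
    println!("{prompt}");
}
```

<p>Because the file is re-read at every spawn, editing it takes effect on the next agent start with no restart.</p>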

<h3>Ordering</h3>
<ol>
<li>Baked-in agent prompt (from <code>agents.toml</code> or <code>project.toml</code>)</li>
<li>Project-local content from <code>.huskies/AGENT.md</code></li>
<li>Resume context (only on agent restart after a gate failure)</li>
</ol>

<h3>Example</h3>
<pre><code># .huskies/AGENT.md

## Documentation
Docs live in `website/docs/*.html`, not Markdown files.
Edit the relevant .html file when a story asks for documentation.

## Quality gates
Run `cargo clippy -- -D warnings` before committing. Zero warnings allowed.</code></pre>
<p>Edit the file at any time — the next agent spawn picks up the latest content automatically.</p>

<h2 id="bot-toml">bot.toml</h2>
<p>Chat transport configuration. Lives at <code>.huskies/bot.toml</code>. This file is gitignored because it contains credentials. Copy the appropriate example file to get started:</p>
<pre><code>cp .huskies/bot.toml.matrix.example .huskies/bot.toml</code></pre>
@@ -217,6 +247,61 @@ system_prompt = "You are a senior full-stack engineer ..."</code></pre>
<tr><td>history_size</td><td>Optional. Maximum conversation turns to remember per room/user (default: 20).</td></tr>
</tbody>
</table>

<h2 id="gateway-aggregated-stream">Gateway: aggregated chat stream</h2>
<p>When running <code>huskies --gateway</code>, you can configure a single bot that receives pipeline notifications from <strong>all</strong> registered projects. Events are prefixed with <code>[project-name]</code> so you can tell them apart in one shared room.</p>

<p>The aggregated stream is configured entirely in the <strong>gateway's</strong> <code>.huskies/bot.toml</code> — no per-project bot config is required, and no per-project files need to change when you add a new project to <code>projects.toml</code>.</p>

<h3>Enabling the aggregated stream</h3>
<p>Add or edit <code>&lt;gateway-config-dir&gt;/.huskies/bot.toml</code> and set <code>enabled = true</code>. The gateway bot automatically polls every project listed in <code>projects.toml</code> and forwards events to the configured rooms.</p>
<pre><code># &lt;gateway-config-dir&gt;/.huskies/bot.toml
enabled = true
transport = "matrix"
homeserver = "https://matrix.example.com"
username = "@gateway-bot:example.com"
password = "secret"
room_ids = ["!gateway-room:example.com"]
allowed_users = ["@you:example.com"]

# Gateway-specific: poll interval and on/off switch
aggregated_notifications_poll_interval_secs = 5  # default
aggregated_notifications_enabled = true          # default</code></pre>

<h3>Aggregated stream settings</h3>
<table>
<thead>
<tr><th>Key</th><th>Type</th><th>Default</th><th>Description</th></tr>
</thead>
<tbody>
<tr>
<td>aggregated_notifications_enabled</td>
<td>bool</td>
<td><code>true</code></td>
<td>Set to <code>false</code> to disable the aggregated stream without disabling the gateway bot entirely. Per-project configs are never consulted.</td>
</tr>
<tr>
<td>aggregated_notifications_poll_interval_secs</td>
<td>integer</td>
<td><code>5</code></td>
<td>How often (in seconds) the gateway polls each project's <code>/api/events</code> endpoint. Lower values reduce notification latency.</td>
</tr>
</tbody>
</table>

<h3>No-duplicate guarantee</h3>
<p>Per-project bots and the gateway aggregated stream send to different rooms — they are independent. Events from a per-project bot go to that project's rooms; events from the gateway stream go to the gateway rooms. The same event never appears twice in either room.</p>

<h3>Unreachable projects</h3>
<p>If a per-project server is temporarily unreachable, the gateway logs a warning and skips that project for the current poll cycle. All other projects continue to deliver notifications normally. No configuration change is required — the poller retries on the next interval.</p>
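<p>The skip-and-continue polling described above can be sketched as follows. This is a simplified illustration, not the gateway's actual code: <code>poll_all</code> and the <code>fetch</code> callback standing in for the HTTP call to <code>/api/events</code> are hypothetical.</p>

```rust
/// Hypothetical sketch: poll each project once per cycle. A failing project is
/// logged and skipped so the remaining projects still deliver their events.
fn poll_all<F>(projects: &[&str], fetch: F) -> Vec<String>
where
    F: Fn(&str) -> Result<Vec<String>, String>,
{
    let mut delivered = Vec::new();
    for name in projects {
        match fetch(name) {
            Ok(events) => {
                // Prefix each event with the project name, as the aggregated stream does.
                delivered.extend(events.into_iter().map(|e| format!("[{name}] {e}")));
            }
            // Skip this project for the current cycle; retry next interval.
            Err(err) => eprintln!("warn: skipping {name} this cycle: {err}"),
        }
    }
    delivered
}

fn main() {
    let projects = ["alpha", "beta"];
    let out = poll_all(&projects, |p| {
        if p == "beta" {
            Err("connection refused".to_string())
        } else {
            Ok(vec!["merge succeeded".to_string()])
        }
    });
    // "beta" was unreachable, so only alpha's prefixed event is delivered.
    assert_eq!(out, vec!["[alpha] merge succeeded".to_string()]);
}
```

<p>The key property is that one unreachable project never blocks the others: each fetch result is handled independently within the cycle.</p>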

<h3>Supported event types</h3>
<p>The aggregated stream delivers the following event types, each prefixed with the project name:</p>
<ul>
<li><strong>Stage transitions</strong> — story created, agent started, QA requested, QA approved/rejected, merge succeeded (all pipeline stage moves)</li>
<li><strong>Merge failures</strong> — merge failed with a reason</li>
<li><strong>Story blocked</strong> — story blocked after exceeding the retry limit</li>
</ul>
</main>
</div>

@@ -200,6 +200,13 @@
<tr><td>discord_allowed_users</td><td>Optional. Discord user IDs allowed to interact. When absent, all users in configured channels can interact.</td></tr>
</tbody>
</table>

<h2 id="gateway-aggregated">Gateway: aggregated notifications</h2>
<p>When using <code>huskies --gateway</code>, you can configure the gateway bot to receive notifications from <strong>all</strong> registered projects in a single room. Events are prefixed with <code>[project-name]</code>.</p>
<p>No additional transport is required — the gateway aggregated stream works with any of the transports above. Configure the gateway's <code>.huskies/bot.toml</code> with your transport credentials and set <code>aggregated_notifications_enabled = true</code> (the default). See <a href="configuration.html#gateway-aggregated-stream">Configuration → Gateway aggregated stream</a> for the full reference.</p>
<div class="note">
<strong>No per-project changes needed:</strong> Adding a new project to <code>projects.toml</code> does not require editing per-project bot configs — the gateway picks it up automatically.
</div>
</main>
</div>
